question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
76,301,828 | 2023-5-21 | https://stackoverflow.com/questions/76301828/how-to-set-a-pydantic-field-value-depending-on-other-fields | from pydantic import BaseModel class Grafana(BaseModel): user: str password: str host: str port: str api_key: str | None = None GRAFANA_URL = f"http://{user}:{password}@{host}:{port}" API_DATASOURCES = "/api/datasources" API_KEYS = "/api/auth/keys" With Pydantic I get two unbound variables error messages for user, password, etc. in GRAFANA_URL. Is there a way to solve this? In a regular class, I would just create GRAFANA_URL in the __init__ method. With Pydantic, I'm not sure how to proceed. | Pydantic v2 In Pydantic version 2 you can define a computed field for this exact purpose. from pydantic import BaseModel, computed_field class Model(BaseModel): foo: str bar: str @computed_field @property def foobar(self) -> str: return self.foo + self.bar obj = Model(foo="a", bar="b") print(obj) # foo='a' bar='b' foobar='ab' One nice thing about this is that foobar will be part of the the serialization schema, but not part of the validation schema. Roughly speaking, this means that the foobar field will be part of a model instance, when it is dumped/returned somewhere, but no foobar value is expected for constructing a model instance. This is very useful when for example generating OpenAPI documentations from your models. Demo, with the Model from above: ... import json schema_val = Model.model_json_schema(mode="validation") schema_ser = Model.model_json_schema(mode="serialization") print(json.dumps(schema_val, indent=4)) print(json.dumps(schema_ser, indent=4)) Output: { "properties": { "foo": { "title": "Foo", "type": "string" }, "bar": { "title": "Bar", "type": "string" } }, "required": [ "foo", "bar" ], "title": "Model", "type": "object" } { "properties": { "foo": { "title": "Foo", "type": "string" }, "bar": { "title": "Bar", "type": "string" }, "foobar": { "readOnly": true, "title": "Foobar", "type": "string" } }, "required": [ "foo", "bar", "foobar" ], "title": "Model", "type": "object" } See the API reference for the computed_field decorator for additional options. Pydantic v1 Option A: Use a @validator See the validators documentation for details. from typing import Any from pydantic import BaseModel, validator class Model(BaseModel): foo: str bar: str foobar: str = "" @validator("foobar", always=True) def set_if_empty(cls, v: str, values: dict[str, Any]) -> str: if v == "": return values["foo"] + values["bar"] return v obj = Model(foo="a", bar="b") print(obj) # foo='a' bar='b' foobar='ab' That way foobar remains a regular model field. Note that for this to work, foobar must be defined after foo and bar. Otherwise you will have to use a root validator. PS: This approach also works analogously with Pydantic v2 @field_validator and @model_validator. Option B: Make it a @property from pydantic import BaseModel class Model(BaseModel): foo: str bar: str @property def foobar(self) -> str: return self.foo + self.bar obj = Model(foo="a", bar="b") print(obj) # foo='a' bar='b' print(obj.foobar) # ab Then foobar will not be a model field anymore and therefore not part of the schema. That may or may not be relevant to you. PS: This of course also works with Pydantic v2, though there probably is no benefit of using @property without @computed_field (see above). | 12 | 17 |
76,273,150 | 2023-5-17 | https://stackoverflow.com/questions/76273150/how-can-i-highlight-python-function-calls-with-in-vs-code | I would like to know how to enable the highlighting of function call in VS Code with python. See the following example where function call is blank, as other part of the code: | See the theming docs and the Developer: Inspect Editor Tokens and Scopes command in the command palette. Ex. Semantic highlighting customization route (requires a language extension that provides semantic highlighting support for Python) (highlights both function definitions and calls): "editor.semanticTokenColorCustomizations": { "[Theme Name Goes Here]": { // remove this wrapper to apply to all themes "rules": { "function:python": { // remove ":python" to apply to all languages "foreground": "#FF0000", // TODO // "fontStyle": "" // optional } } } } ^the :python part will make the colour customization only apply to functions for the Python language. If you want the call for function declarations to be different, then write another similar rule, but use the declaration modifier. Ex. function.declaration:python. Token colour customization route (works without any extensions, since VS Code bundles TextMate grammar support for Python) (may require disabling extensions that provide semantic highlighting support for Python): "editor.tokenColorCustomizations": { "[Theme Name Goes Here]": { // remove this wrapper to apply to all themes "textMateRules": [ { "scope": "meta.function-call.python", "settings": { "foreground": "#FF0000", // TODO // "fontStyle": "" // optional } } ] } }, See also the scopes support.function.builtin.python, meta.function-call.generic.python, and meta.member.access.python, meta.function-call.python. | 3 | 4 |
76,305,207 | 2023-5-22 | https://stackoverflow.com/questions/76305207/openai-api-asynchronous-api-calls | I work with the OpenAI API. I have extracted slides text from a PowerPoint presentation, and written a prompt for each slide. Now, I want to make asynchronous API calls, so that all the slides are processed at the same time. this is the code from the async main function: for prompt in prompted_slides_text: task = asyncio.create_task(api_manager.generate_answer(prompt)) tasks.append(task) results = await asyncio.gather(*tasks) and this is generate_answer function: @staticmethod async def generate_answer(prompt): """ Send a prompt to OpenAI API and get the answer. :param prompt: the prompt to send. :return: the answer. """ completion = await openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}] ) return completion.choices[0].message.content the problem is: object OpenAIObject can't be used in 'await' expression and I don't know how to await for the response in generate_answer function Would appreciate any help! | For those landing here, the error here was probably the instantiation of the object. It has to be: client = AsyncOpenAI(api_key=api_key) Then you can use: response = await client.chat.completions.create( model="gpt-4", messages=custom_prompt, temperature=0.9 ) | 12 | 18 |
76,314,792 | 2023-5-23 | https://stackoverflow.com/questions/76314792/python-catching-and-then-re-throw-warnings-from-my-code | I want to catch and then re-throw warnings from my Python code, similarly to try/except clause. My purpose is to catch the warning and then re-throw it using my logger. The warnings are issued from whatever packages I'm using, I would like something that is totally generic, exactly like the try/except clause. How can I do that in Python >= v3.8? | There are at least two ways to tackle this: Record the warnings and replay them Make warnings to behave like exceptions and break the control flow Option 1: Record warnings and replay them You could use warnings.catch_warnings with record=True to record the Warning objects. This way all your application code will get executed regardless of any warnings. import warnings import logging logger = logging.getLogger(__name__) class NotImportantWarning(UserWarning): ... def do_something(): warnings.warn("doing something", UserWarning) warnings.warn("This is not important", NotImportantWarning) # Executing our main function and recording warnings with warnings.catch_warnings(record=True) as recorded_warnings: do_something() print("done") # Handling warnings. You may log it here. for w in recorded_warnings: if isinstance(w.message, NotImportantWarning): continue # re-issue warning warnings.warn_explicit( message=w.message, category=w.category, filename=w.filename, lineno=w.lineno, source=w.source, ) # or use a logger logger.warning(str(w.message)) This will print then something like done <ipython-input-6-cc898056d044>:12: UserWarning: doing something warnings.warn("doing something", UserWarning) doing something Notes Using the lower level warnings.warn_explicit to re-issue the warnings. The denefit over warnings.warn is that with the warn_explicit it is possible to retain the original filename and line number information. Option 2: Make warnings to behave like exceptions If you really want to break the control flow right at the warning, like with exceptions, it is possible with warnings.filterwarnings("error"): import warnings warnings.filterwarnings('error') try: do_something() except UserWarning as w: print(w) Note: The code execution stops at the first warning. No code inside do_something after that warning will ever get executed. | 3 | 3 |
76,269,633 | 2023-5-17 | https://stackoverflow.com/questions/76269633/how-to-accept-only-the-next-word-in-pycharm-github-copilot-suggestion | I would like to be able to accept only the next word of a github Copilot suggestion instead of the full suggestion. This is possible with VS Code as documented here. Is there a way to do this in PyCharm too? | This feature has been added with a recent update for the GitHub Copilot extension. One may now hit Ctrl + Right to accept the next word only, and in a multiline suggestion also Ctrl + Alt + Right to accept the next line only. | 10 | 0 |
76,275,641 | 2023-5-17 | https://stackoverflow.com/questions/76275641/mkdocs-how-to-attach-a-downloadable-file | I have a mkdocs project that resembles the following: project ├─mkdocs.yml ├─docs │ ├─home.md │ ├─chapter1.md │ ├─static ├─file.ext ├─image.png I am trying to find a way to "attach" file1.ext to the build, for instance as a link in chapter1.md. Any suggestions how to achieve that? Detail: I want the file to be downloadable on click. | In mkdocs, to get the file to be downloadable on click using markdown, first you need to add this to your mkdocs.yml file : markdown_extensions: - attr_list and then in your chapter1.md you can add download attribute to your link ... like so : [file.ext](../static/file.ext){:download} Heck you can even name the downloaded file: [file.ext](../static/file.ext){:download="awesome-file"} Explanation MkDocs converts Markdown to HTML using Python-Markdown which offers a flexible extension mechanism, which makes it possible to change and/or extend the behavior of the parser without having to edit the actual source files. The markdown_extensions in mkdocs.yml setting specifies the Python-Markdown extensions for MkDocs. adding attr_list entry point enables the Python-Markdown attribute lists extension, which adds support for HTML-style attributes using curly brackets {} and a CSS-like syntax. example: let's say we want to open a link in new tab, we can achieve this like so : [Google](https://www.google.com){:target="_blank"} | 3 | 8 |
76,313,592 | 2023-5-23 | https://stackoverflow.com/questions/76313592/import-langchain-error-typeerror-issubclass-arg-1-must-be-a-class | I want to use langchain for my project, so I installed it using the following command: pip install langchain. However, while importing "langchain" I am facing the following error: File /usr/lib/python3.8/typing.py:774, in _GenericAlias.__subclasscheck__(self, cls) 772 if self._special: 773 if not isinstance(cls, _GenericAlias): --> 774 return issubclass(cls, self.__origin__) 775 if cls._special: 776 return issubclass(cls.__origin__, self.__origin__) TypeError: issubclass() arg 1 must be a class Can anyone solve this error? | Pinning these package versions resolves the error: typing-inspect==0.8.0 typing_extensions==4.5.0 | 37 | 22 |
76,268,348 | 2023-5-17 | https://stackoverflow.com/questions/76268348/how-to-update-modify-request-headers-and-query-parameters-in-a-fastapi-middlewar | I'm trying to write a middleware for a FastAPI project that manipulates the request headers and / or query parameters in some special cases. I've managed to capture and modify the request object in the middleware, but it seems that even if I modify the request object that is passed to the middleware, the function that serves the endpoint receives the original, unmodified request. Here is a simplified version of my implementation: from fastapi import FastAPI, Request from starlette.datastructures import MutableHeaders, QueryParams from starlette.middleware.base import BaseHTTPMiddleware class TestMiddleware(BaseHTTPMiddleware): def __init__(self, app: FastAPI): super().__init__(app) def get_modified_query_params(request: Request) -> QueryParams: pass ## Create and return new query params async def dispatch( self, request: Request, call_next, *args, **kwargs ) -> None: # Check and manipulate the X-DEVICE-TOKEN if required header_key = "X-DEVICE-INFo" new_header_value = "new device info" new_header = MutableHeaders(request._headers) new_header[header_key] = new_header_value request._headers = new_header request._query_params = self.get_modified_query_params(request) print("modified headers =>", request.headers) print("modified params =>", request.query_params) return await call_next(request) Even though I see the updated values in the print statements above, when I try to print request object in the function that serves the endpoint, I see original values of the request. What am I missing? | To update or modify the request headers within a middleware, you would have to update request.scope['headers'], as described in this answer. In that way, you could add new custom headers, as well as modify existing ones. In a similar way, by updating request.scope['query_string'], you could modify existing, as well as add new, query parameters. A working example is given below. Working Example from fastapi import FastAPI, Request from urllib.parse import urlencode app = FastAPI() @app.middleware('http') async def some_middleware(request: Request, call_next): # update request headers headers = dict(request.scope['headers']) headers[b'custom-header'] = b'my custom header' request.scope['headers'] = [(k, v) for k, v in headers.items()] # update request query parameters q_params = dict(request.query_params) q_params['custom-q-param'] = 'my custom query param' request.scope['query_string'] = urlencode(q_params).encode('utf-8') return await call_next(request) @app.get('/') async def main(request: Request): return {'headers': request.headers, 'q_params': request.query_params} | 4 | 4 |
76,318,098 | 2023-5-23 | https://stackoverflow.com/questions/76318098/could-not-build-wheels-for-pycrypto-which-is-required-to-install-pyproject-toml | I'm facing an error while deploying to Heroku. ERROR: Could not build wheels for pycrypto, which is required to install pyproject.toml-based projects. However, my project does not specify use for pycrypto. What is causing this issue? My requirements.txt looks like: python==3.10.9 firebase_admin pyrebase pyrebase4 dash dash_auth dash_bootstrap_components dash_daq pandas plotly Heroku CLI output Building wheel for pycrypto (setup.py): started remote: Building wheel for pycrypto (setup.py): finished with status 'error' remote: error: subprocess-exited-with-error remote: remote: × python setup.py bdist_wheel did not run successfully. remote: │ exit code: 1 remote: ╰─> [71 lines of output] remote: checking for gcc... gcc remote: checking whether the C compiler works... yes remote: checking for C compiler default output file name... a.out remote: checking for suffix of executables... remote: checking whether we are cross compiling... no remote: checking for suffix of object files... o remote: checking whether we are using the GNU C compiler... yes remote: checking whether gcc accepts -g... yes remote: checking for gcc option to accept ISO C89... none needed remote: checking for __gmpz_init in -lgmp... yes remote: checking for __gmpz_init in -lmpir... no remote: checking whether mpz_powm is declared... yes remote: checking whether mpz_powm_sec is declared... yes remote: checking how to run the C preprocessor... gcc -E remote: checking for grep that handles long lines and -e... /usr/bin/grep remote: checking for egrep... /usr/bin/grep -E remote: checking for ANSI C header files... yes remote: checking for sys/types.h... yes remote: checking for sys/stat.h... yes remote: checking for stdlib.h... yes remote: checking for string.h... yes remote: checking for memory.h... yes remote: checking for strings.h... yes remote: checking for inttypes.h... yes remote: checking for stdint.h... yes remote: checking for unistd.h... yes remote: checking for inttypes.h... (cached) yes remote: checking limits.h usability... yes remote: checking limits.h presence... yes remote: checking for limits.h... yes remote: checking stddef.h usability... yes remote: checking stddef.h presence... yes remote: checking for stddef.h... yes remote: checking for stdint.h... (cached) yes remote: checking for stdlib.h... (cached) yes remote: checking for string.h... (cached) yes remote: checking wchar.h usability... yes remote: checking wchar.h presence... yes remote: checking for wchar.h... yes remote: checking for inline... inline remote: checking for int16_t... yes remote: checking for int32_t... yes remote: checking for int64_t... yes remote: checking for int8_t... yes remote: checking for size_t... yes remote: checking for uint16_t... yes remote: checking for uint32_t... yes remote: checking for uint64_t... yes remote: checking for uint8_t... yes remote: checking for stdlib.h... (cached) yes remote: checking for GNU libc compatible malloc... yes remote: checking for memmove... yes remote: checking for memset... 
yes remote: configure: creating ./config.status remote: config.status: creating src/config.h remote: In file included from /app/.heroku/python/include/python3.11/Python.h:86, remote: from src/_fastmath.c:31: remote: /app/.heroku/python/include/python3.11/cpython/pytime.h:208:60: warning: ‘struct timespec’ declared inside parameter list will not be visible outside of this definition or declaratio remote: 208 | PyAPI_FUNC(int) _PyTime_FromTimespec(_PyTime_t *tp, struct timespec *ts); remote: | ^~~~~~~~ remote: /app/.heroku/python/include/python3.11/cpython/pytime.h:213:56: warning: ‘struct timespec’ declared inside parameter list will not be visible outside of this definition or declaratio remote: 213 | PyAPI_FUNC(int) _PyTime_AsTimespec(_PyTime_t t, struct timespec *ts); remote: | ^~~~~~~~ remote: /app/.heroku/python/include/python3.11/cpython/pytime.h:217:63: warning: ‘struct timespec’ declared inside parameter list will not be visible outside of this definition or declaratio remote: 217 | PyAPI_FUNC(void) _PyTime_AsTimespec_clamp(_PyTime_t t, struct timespec *ts); remote: | ^~~~~~~~ remote: src/_fastmath.c:33:10: fatal error: longintrepr.h: No such file or directory remote: 33 | #include <longintrepr.h> /* for conversions */ remote: | ^~~~~~~~~~~~~~~ remote: compilation terminated. remote: error: command '/usr/bin/gcc' failed with exit code 1 remote: [end of output] remote: remote: note: This error originates from a subprocess, and is likely not a problem with pip. remote: ERROR: Failed building wheel for pycrypto remote: Running setup.py clean for pycrypto remote: Building wheel for sseclient (setup.py): started remote: Building wheel for sseclient (setup.py): finished with status 'done' remote: Created wheel for sseclient: filename=sseclient-0.0.27-py3-none-any.whl size=5565 sha256=30988661931e8740f4a7ee87948f5f69e2803cb6d31179cbe4cb6e9bbea1241e remote: Stored in directory: /tmp/pip-ephem-wheel-cache-ivcb6vdl/wheels/7c/54/eb/a223b1599728ecaf0528281c17c96c503aa7d18a752a4e4e3a remote: Building wheel for jwcrypto (setup.py): started remote: Building wheel for jwcrypto (setup.py): finished with status 'done' remote: Created wheel for jwcrypto: filename=jwcrypto-1.4.2-py3-none-any.whl size=90472 sha256=b9d97dca4df5d53e6f69d6e0c7c0406fdb519a521a08f8e15d02bcfff20c8cb3 remote: Stored in directory: /tmp/pip-ephem-wheel-cache-ivcb6vdl/wheels/42/b6/e3/23d953d3b1a939d81aa460121597ac050eaf99d04578eb4340 remote: Successfully built dash_daq gcloud sseclient jwcrypto remote: Failed to build pycrypto remote: ERROR: Could not build wheels for pycrypto, which is required to install pyproject.toml-based projects remote: ! Push rejected, failed to compile Python app. remote: remote: ! Push failed remote: remote: Verifying deploy... remote: remote: ! Push rejected to acuradyne. remote: To ##app link ! [remote rejected] master -> master (pre-receive hook declined) Any possible solutions or reasoning behind this problem will be helpful | The problem occurred because I had both pyrebase and pyrebase4 inside the requirements.txt I removed pyrebase and kept pyrebase4. It solved the problem | 10 | 0 |
76,279,266 | 2023-5-18 | https://stackoverflow.com/questions/76279266/webdriverexception-unknown-error-runtime-callfunctionon-threw-exception-typee | I'm using Selenium with Python to generate inputs to credit card fields on a website. When you try send_keys to the field it always returns this error. I used different webdrivers (Chrome, Edge, Firefox) with the same effect. The error pops up before any input shows up in the field. from selenium import webdriver browser = webdriver.Chrome(r"C:\Users\m234234\Downloads\chromedriver_win32.exe") ... ... Element3 = browser.find_element(By.ID, 'scp_cardPage_csc_input') Element3.send_keys('231') The element that is trying to access: <input autocomplete="off" maxlength="4" type="text" class="scp_text_input" name="csc" aria-required="true" aria-label="Security Code" aria-describedby="scp_cardPage_csc_error" id="scp_cardPage_csc_input" value="" size="5" onkeypress="return checkNum(event)"> Full error below WebDriverException: unknown error: Runtime.callFunctionOn threw exception: TypeError: JSON.stringify is not a function at buildError (<anonymous>:323:18) (Session info: chrome=113.0.5672.126) Update Once the script reaches this payment page (it's a third party payment page), it throws this same error when it tries to interact with the page in any way, even if I just try to retrieve current_url | json.stringify() The json.stringify() static method is a built-in function in JavaScript that converts a JavaScript value to a JSON string, optionally replacing values if a replacer function is specified or optionally including only the specified properties if a replacer array is specified. The json.stringify() method takes one, two or three parameters, such as: Value: The JavaScript object or value to be converted to a JSON string. Replacer (optional): It is a function that can be used to modify the values and properties of the object that is being converted. space (optional): A string or number that can be used to insert white space or line break characters into the output JSON string for readability purposes. Root cause The error... TypeError: JSON.stringify is not a function can occur if the program is trying to use the JSON.stringify() method on a non-object or when the method is not defined for a particular object. As an example: var person = "undetectedSelenium"; var jsonString = JSON.stringify(name); console.log(jsonString); In the above code block we are trying to convert a string value to a JSON string using the JSON.stringify() method. JSON.stringify() method only works with JavaScript objects and arrays. Calling this method on a simple string value will result in this error. To fix this error, we can wrap the string value in an object or an array as follows: var person = "undetectedSelenium"; var jsonObject = { "name": person }; var jsonString = JSON.stringify(jsonObject); console.log(jsonString); Some common causes Some of the most common causes of this error are: Using an Older Version of JavaScript Incorrect Syntax Overwriting the JSON Object Incorrect Data Type This usecase In this usecase it seems the script is unable to locate the element using the following line of code: Element3 = browser.find_element(By.ID, 'scp_cardPage_csc_input') effectively Element3 remains a non-object. Hence the error is raised. Possible fixes There are a couple of approaches to fix up this error as follows: Verify if the json.stringify() method is defined and incase it isn't define a new function that converts an object to a JSON string. 
This solution is useful in older browsers that doesn't support the JSON object natively. if (typeof JSON.stringify !== 'function') { JSON.stringify = function(obj) { // code to convert obj to a JSON string }; } Check if the obj variable is a valid JSON object before calling the json.stringify() method which can prevent the type error from being thrown when trying to call the method on a non-object. if (typeof obj === 'object' && obj !== null) { var jsonString = JSON.stringify(obj); } Using a third-party JSON library, like MyJSON to convert the object to a JSON string. var jsonString = MyJSON.stringify(obj); Using a try-catch block to prevent the program from crashing if the error occurs: try { var jsonString = JSON.stringify(obj); } catch (e) { console.error('Error: ' + e.message); } Road ahead This issue isn't related to Selenium Python Client but an issue with the sourcecode of ChromeDriver and needs to be addressed by the ChromeDriver team. tl; dr [🐛 Bug]: org.openqa.selenium.WebDriverException: unknown error: Runtime.callFunctionOn threw exception: TypeError: value.hasOwnProperty is not a function at Object.stringify Issue 4481: org.openqa.selenium.WebDriverException: unknown error: Runtime.callFunctionOn threw exception: TypeError: value.hasOwnProperty is not a function at Object.stringify | 2 | 4 |
76,309,946 | 2023-5-22 | https://stackoverflow.com/questions/76309946/conda-attributeerror-module-brotli-has-no-attribute-error-after-update | I just run the command conda update conda. After that, all of my commands gives AttributeError: module 'brotli' has no attribute 'error'. I searched for solutions but none works. Anaconda Error - module 'brotli' has no attribute 'error' seems a reasonable answer but my anaconda3/lib directory does not contain a site-packages folder. The lib folder contains the following. cmake libicudata.so.72.1 libQt5Designer.so.5 libQt5X11Extras.so.5 cups libicui18n.so libQt5Designer.so.5.15 libQt5X11Extras.so.5.15 cyrillic_and_mic.so libicui18n.so.72 libQt5Designer.so.5.15.8 libQt5X11Extras.so.5.15.8 dbus-1.0 libicui18n.so.72.1 libQt5DeviceDiscoverySupport.a libQt5XcbQpa.prl dict_snowball.so libicuio.so libQt5DeviceDiscoverySupport.prl libQt5XcbQpa.so euc2004_sjis2004.so libicuio.so.72 libQt5EdidSupport.a libQt5XcbQpa.so.5 euc_cn_and_mic.so libicuio.so.72.1 libQt5EdidSupport.prl libQt5XcbQpa.so.5.15 euc_jp_and_sjis.so libicutest.so libQt5EglFSDeviceIntegration.prl libQt5XcbQpa.so.5.15.8 euc_kr_and_mic.so libicutest.so.72 libQt5EglFSDeviceIntegration.so libQt5XkbCommonSupport.a euc_tw_and_big5.so libicutest.so.72.1 libQt5EglFSDeviceIntegration.so.5 libQt5XkbCommonSupport.prl gettext libicutu.so libQt5EglFSDeviceIntegration.so.5.15 libQt5XmlPatterns.prl girepository-1.0 libicutu.so.72 libQt5EglFSDeviceIntegration.so.5.15.8 libQt5XmlPatterns.so glib-2.0 libicutu.so.72.1 libQt5EglFsKmsSupport.prl libQt5XmlPatterns.so.5 gstreamer-1.0 libicuuc.so libQt5EglFsKmsSupport.so libQt5XmlPatterns.so.5.15 icu libicuuc.so.72 libQt5EglFsKmsSupport.so.5 libQt5XmlPatterns.so.5.15.8 itcl4.2.2 libicuuc.so.72.1 libQt5EglFsKmsSupport.so.5.15 libQt5Xml.prl krb5 libitm.so libQt5EglFsKmsSupport.so.5.15.8 libQt5Xml.so latin2_and_win1250.so libitm.so.1 libQt5EglSupport.a libQt5Xml.so.5 latin_and_mic.so libitm.so.1.0.0 libQt5EglSupport.prl libQt5Xml.so.5.15 libarchive.a libjpeg.a libQt5EventDispatcherSupport.a libQt5Xml.so.5.15.8 libarchive.so libjpeg.so libQt5EventDispatcherSupport.prl libquadmath.so libarchive.so.13 libjpeg.so.8 libQt5FbSupport.a libquadmath.so.0 libarchive.so.13.6.2 libjpeg.so.8.2.2 libQt5FbSupport.prl libquadmath.so.0.0.0 libasound.so libk5crypto.so libQt5FontDatabaseSupport.a libreadline.so libasound.so.2 libk5crypto.so.3 libQt5FontDatabaseSupport.prl libreadline.so.8 libasound.so.2.0.0 libk5crypto.so.3.1 libQt5Gamepad.prl libreadline.so.8.2 libasprintf.a libkadm5clnt_mit.so libQt5Gamepad.so libRemarks.so.16 libasprintf.so libkadm5clnt_mit.so.12 libQt5Gamepad.so.5 libsharpyuv.so libasprintf.so.0 libkadm5clnt_mit.so.12.0 libQt5Gamepad.so.5.15 libsharpyuv.so.0 libasprintf.so.0.0.0 libkadm5clnt.so libQt5Gamepad.so.5.15.8 libsharpyuv.so.0.0.0 libatomic.so libkadm5srv_mit.so libQt5GlxSupport.a libsmime3.so libatomic.so.1 libkadm5srv_mit.so.12 libQt5GlxSupport.prl libSM.so libatomic.so.1.2.0 libkadm5srv_mit.so.12.0 libQt5Gui.prl libSM.so.6 libatopology.so libkadm5srv.so libQt5Gui.so libSM.so.6.0.1 libatopology.so.2 libkdb5.so libQt5Gui.so.5 libsndfile.so libatopology.so.2.0.0 libkdb5.so.10 libQt5Gui.so.5.15 libsndfile.so.1 libattr.so libkdb5.so.10.0 libQt5Gui.so.5.15.8 libsndfile.so.1.0.35 libattr.so.1 libkeyutils.a libQt5Help.prl libsoftokn3.chk libattr.so.1.1.2501 libkeyutils.so libQt5Help.so libsoftokn3.so libblas.so libkeyutils.so.1 libQt5Help.so.5 libsqlite3.so libblas.so.3 libkeyutils.so.1.9 libQt5Help.so.5.15 libsqlite3.so.0 libbrotlicommon.so libkrad.so 
libQt5Help.so.5.15.8 libsqlite3.so.0.8.6 libbrotlicommon.so.1 libkrad.so.0 libQt5InputSupport.a libssl3.so libbrotlicommon.so.1.0.9 libkrad.so.0.0 libQt5InputSupport.prl libssl.so libbrotlidec.so libkrb5.so libQt5KmsSupport.a libssl.so.3 libbrotlidec.so.1 libkrb5.so.3 libQt5KmsSupport.prl libstdc++.so libbrotlidec.so.1.0.9 libkrb5.so.3.3 libQt5Location.prl libstdc++.so.6 libbrotlienc.so libkrb5support.so libQt5Location.so libstdc++.so.6.0.30 libbrotlienc.so.1 libkrb5support.so.0 libQt5Location.so.5 libsyn123.so libbrotlienc.so.1.0.9 libkrb5support.so.0.1 libQt5Location.so.5.15 libsyn123.so.0 libbz2.a liblapack.so libQt5Location.so.5.15.8 libsyn123.so.0.1.5 libbz2.so liblapack.so.3 libQt5MultimediaGstTools.prl libsystemd.so.0 libbz2.so.1.0 liblcms2.so libQt5MultimediaGstTools.so libsystemd.so.0.36.0 libbz2.so.1.0.8 liblcms2.so.2 libQt5MultimediaGstTools.so.5 libtcl8.6.so libcairo.a liblcms2.so.2.0.15 libQt5MultimediaGstTools.so.5.15 libtclstub8.6.a libcairo-gobject.a libLerc.so libQt5MultimediaGstTools.so.5.15.8 libtextstyle.a libcairo-gobject.so libLerc.so.4 libQt5Multimedia.prl libtextstyle.so libcairo-gobject.so.2 libLIEF.so libQt5MultimediaQuick.prl libtextstyle.so.0 libcairo-gobject.so.2.11600.0 libLLVM-16.so libQt5MultimediaQuick.so libtextstyle.so.0.1.2 libcairo-script-interpreter.a libLTO.so.16 libQt5MultimediaQuick.so.5 libtiff.so libcairo-script-interpreter.so liblz4.so libQt5MultimediaQuick.so.5.15 libtiff.so.6 libcairo-script-interpreter.so.2 liblz4.so.1 libQt5MultimediaQuick.so.5.15.8 libtiff.so.6.0.0 libcairo-script-interpreter.so.2.11600.0 liblz4.so.1.9.4 libQt5Multimedia.so libtiffxx.so libcairo.so liblzma.so libQt5Multimedia.so.5 libtiffxx.so.6 libcairo.so.2 liblzma.so.5 libQt5Multimedia.so.5.15 libtiffxx.so.6.0.0 libcairo.so.2.11600.0 liblzma.so.5.2.6 libQt5Multimedia.so.5.15.8 libtinfo.so libcap.a liblzo2.a libQt5MultimediaWidgets.prl libtinfo.so.6 libcap.so liblzo2.so libQt5MultimediaWidgets.so libtinfo.so.6.3 libcap.so.2 liblzo2.so.2 libQt5MultimediaWidgets.so.5 libtinfow.so libcap.so.2.67 liblzo2.so.2.0.0 libQt5MultimediaWidgets.so.5.15 libtinfow.so.6 libcblas.so libmenu.so libQt5MultimediaWidgets.so.5.15.8 libtinfow.so.6.3 libcblas.so.3 libmenu.so.6 libQt5NetworkAuth.prl libtk8.6.so libcharset.a libmenu.so.6.3 libQt5NetworkAuth.so libtkstub8.6.a libcharset.so libmenuw.so libQt5NetworkAuth.so.5 libturbojpeg.a libcharset.so.1 libmenuw.so.6 libQt5NetworkAuth.so.5.15 libturbojpeg.so libcharset.so.1.0.0 libmenuw.so.6.3 libQt5NetworkAuth.so.5.15.8 libturbojpeg.so.0 libclang.so libmp3lame.a libQt5Network.prl libturbojpeg.so.0.2.0 libclang.so.13 libmp3lame.so libQt5Network.so libuuid.a libcom_err.so libmp3lame.so.0 libQt5Network.so.5 libuuid.so libcom_err.so.3 libmp3lame.so.0.0.0 libQt5Network.so.5.15 libuuid.so.1 libcom_err.so.3.0 libmpg123.so libQt5Network.so.5.15.8 libuuid.so.1.3.0 libcrmf.a libmpg123.so.0 libQt5Nfc.prl libverto.so libcrypto.so libmpg123.so.0.47.0 libQt5Nfc.so libverto.so.0 libcrypto.so.3 libmysqlclient.so libQt5Nfc.so.5 libverto.so.0.0 libcupsimage.so libmysqlclient.so.21 libQt5Nfc.so.5.15 libvorbisenc.so libcupsimage.so.2 libmysqlclient.so.21.2.32 libQt5Nfc.so.5.15.8 libvorbisenc.so.2 libcups.so libncurses++.a libQt5OpenGLExtensions.a libvorbisenc.so.2.0.12 libcups.so.2 libncurses.so libQt5OpenGLExtensions.prl libvorbisfile.so libdbus-1.a libncurses.so.6 libQt5OpenGL.prl libvorbisfile.so.3 libdbus-1.so libncurses.so.6.3 libQt5OpenGL.so libvorbisfile.so.3.3.8 libdbus-1.so.3 libncurses++w.a libQt5OpenGL.so.5 libvorbis.so libdbus-1.so.3.24.0 libncursesw.so 
libQt5OpenGL.so.5.15 libvorbis.so.0 libdeflate.so libncursesw.so.6 libQt5OpenGL.so.5.15.8 libvorbis.so.0.4.9 libdeflate.so.0 libncursesw.so.6.3 libQt5PacketProtocol.a libwebpdecoder.so libecpg.a libnsl.so libQt5PacketProtocol.prl libwebpdecoder.so.3 libecpg_compat.a libnsl.so.3 libQt5PlatformCompositorSupport.a libwebpdecoder.so.3.1.6 libecpg_compat.so libnsl.so.3.0.0 libQt5PlatformCompositorSupport.prl libwebpdemux.so libecpg_compat.so.3 libnspr4.so libQt5Positioning.prl libwebpdemux.so.2 libecpg_compat.so.3.15 libnss3.so libQt5PositioningQuick.prl libwebpdemux.so.2.0.12 libecpg.so libnssckbi.so libQt5PositioningQuick.so libwebpmux.so libecpg.so.6 libnssckbi-testlib.so libQt5PositioningQuick.so.5 libwebpmux.so.3 libecpg.so.6.15 libnssdbm3.chk libQt5PositioningQuick.so.5.15 libwebpmux.so.3.0.11 libedit.so libnssdbm3.so libQt5PositioningQuick.so.5.15.8 libwebp.so libedit.so.0 libnsssysinit.so libQt5Positioning.so libwebp.so.7 libedit.so.0.0.63 libnssutil3.so libQt5Positioning.so.5 libwebp.so.7.1.6 libevent-2.1.so libogg.so libQt5Positioning.so.5.15 libX11.so libevent-2.1.so.7 libogg.so.0 libQt5Positioning.so.5.15.8 libX11.so.6 libevent-2.1.so.7.0.1 libogg.so.0.8.4 libQt5PrintSupport.prl libX11.so.6.4.0 libevent_core-2.1.so libopenblasp-r0.3.21.so libQt5PrintSupport.so libX11-xcb.so libevent_core-2.1.so.7 libopenblas.so.0 libQt5PrintSupport.so.5 libX11-xcb.so.1 libevent_core-2.1.so.7.0.1 libopenjp2.a libQt5PrintSupport.so.5.15 libX11-xcb.so.1.0.0 libevent_core.so libopenjp2.so libQt5PrintSupport.so.5.15.8 libXau.so libevent_extra-2.1.so libopenjp2.so.2.5.0 libQt5Purchasing.prl libXau.so.6 libevent_extra-2.1.so.7 libopenjp2.so.7 libQt5Purchasing.so libXau.so.6.0.0 libevent_extra-2.1.so.7.0.1 libopus.so libQt5Purchasing.so.5 libxcb-composite.so libevent_extra.so libopus.so.0 libQt5Purchasing.so.5.15 libxcb-composite.so.0 libevent_openssl-2.1.so libopus.so.0.8.0 libQt5Purchasing.so.5.15.8 libxcb-composite.so.0.0.0 libevent_openssl-2.1.so.7 libout123.so libQt5QmlDebug.a libxcb-damage.so libevent_openssl-2.1.so.7.0.1 libout123.so.0 libQt5QmlDebug.prl libxcb-damage.so.0 libevent_openssl.so libout123.so.0.4.7 libQt5QmlDevTools.a libxcb-damage.so.0.0.0 libevent_pthreads-2.1.so libpanel.so libQt5QmlDevTools.prl libxcb-dpms.so libevent_pthreads-2.1.so.7 libpanel.so.6 libQt5QmlModels.prl libxcb-dpms.so.0 libevent_pthreads-2.1.so.7.0.1 libpanel.so.6.3 libQt5QmlModels.so libxcb-dpms.so.0.0.0 libevent_pthreads.so libpanelw.so libQt5QmlModels.so.5 libxcb-dri2.so libevent.so libpanelw.so.6 libQt5QmlModels.so.5.15 libxcb-dri2.so.0 libexpat.a libpanelw.so.6.3 libQt5QmlModels.so.5.15.8 libxcb-dri2.so.0.0.0 libexpat.so libpcre2-16.so libQt5Qml.prl libxcb-dri3.so libexpat.so.1 libpcre2-16.so.0 libQt5Qml.so libxcb-dri3.so.0 libexpat.so.1.8.10 libpcre2-16.so.0.11.0 libQt5Qml.so.5 libxcb-dri3.so.0.1.0 libffi.a libpcre2-32.so libQt5Qml.so.5.15 libxcb-ewmh.a libffi.so libpcre2-32.so.0 libQt5Qml.so.5.15.8 libxcb-ewmh.so libffi.so.8 libpcre2-32.so.0.11.0 libQt5QmlWorkerScript.prl libxcb-ewmh.so.2 libffi.so.8.1.0 libpcre2-8.so libQt5QmlWorkerScript.so libxcb-ewmh.so.2.0.0 libFLAC++.so libpcre2-8.so.0 libQt5QmlWorkerScript.so.5 libxcb-glx.so libFLAC.so libpcre2-8.so.0.11.0 libQt5QmlWorkerScript.so.5.15 libxcb-glx.so.0 libFLAC++.so.10 libpcre2-posix.so libQt5QmlWorkerScript.so.5.15.8 libxcb-glx.so.0.0.0 libFLAC++.so.10.0.0 libpcre2-posix.so.3 libQt5Quick3DAssetImport.prl libxcb-icccm.a libFLAC.so.12 libpcre2-posix.so.3.0.2 libQt5Quick3DAssetImport.so libxcb-icccm.so libFLAC.so.12.0.0 libpgcommon.a 
libQt5Quick3DAssetImport.so.5 libxcb-icccm.so.4 libfontconfig.a libpgcommon_shlib.a libQt5Quick3DAssetImport.so.5.15 libxcb-icccm.so.4.0.0 libfontconfig.so libpgfeutils.a libQt5Quick3DAssetImport.so.5.15.8 libxcb-image.a libfontconfig.so.1 libpgport.a libQt5Quick3D.prl libxcb-image.so libfontconfig.so.1.13.0 libpgport_shlib.a libQt5Quick3DRender.prl libxcb-image.so.0 libform.so libpgtypes.a libQt5Quick3DRender.so libxcb-image.so.0.0.0 libform.so.6 libpgtypes.so libQt5Quick3DRender.so.5 libxcb-keysyms.a libform.so.6.3 libpgtypes.so.3 libQt5Quick3DRender.so.5.15 libxcb-keysyms.so libformw.so libpgtypes.so.3.15 libQt5Quick3DRender.so.5.15.8 libxcb-keysyms.so.1 libformw.so.6 libpixman-1.a libQt5Quick3DRuntimeRender.prl libxcb-keysyms.so.1.0.0 libformw.so.6.3 libpixman-1.so libQt5Quick3DRuntimeRender.so libxcb-present.so libfreebl3.chk libpixman-1.so.0 libQt5Quick3DRuntimeRender.so.5 libxcb-present.so.0 libfreebl3.so libpixman-1.so.0.40.0 libQt5Quick3DRuntimeRender.so.5.15 libxcb-present.so.0.0.0 libfreeblpriv3.chk libplc4.so libQt5Quick3DRuntimeRender.so.5.15.8 libxcb-randr.so libfreeblpriv3.so libplds4.so libQt5Quick3D.so libxcb-randr.so.0 libfreetype.a libpng16.a libQt5Quick3D.so.5 libxcb-randr.so.0.1.0 libfreetype.so libpng16.so libQt5Quick3D.so.5.15 libxcb-record.so libfreetype.so.6 libpng16.so.16 libQt5Quick3D.so.5.15.8 libxcb-record.so.0 libfreetype.so.6.18.3 libpng16.so.16.39.0 libQt5Quick3DUtils.prl libxcb-record.so.0.0.0 libgcc_s.so libpng.a libQt5Quick3DUtils.so libxcb-render.so libgcc_s.so.1 libpng.so libQt5Quick3DUtils.so.5 libxcb-render.so.0 libgcrypt.so libpq.a libQt5Quick3DUtils.so.5.15 libxcb-render.so.0.0.0 libgcrypt.so.20 libpq.so libQt5Quick3DUtils.so.5.15.8 libxcb-render-util.a libgcrypt.so.20.4.1 libpq.so.5 libQt5QuickControls2.prl libxcb-render-util.so libgettextlib-0.21.1.so libpq.so.5.15 libQt5QuickControls2.so libxcb-render-util.so.0 libgettextlib.so libpqwalreceiver.so libQt5QuickControls2.so.5 libxcb-render-util.so.0.0.0 libgettextpo.a libpsx.a libQt5QuickControls2.so.5.15 libxcb-res.so libgettextpo.so libpsx.so libQt5QuickControls2.so.5.15.8 libxcb-res.so.0 libgettextpo.so.0 libpsx.so.2 libQt5QuickParticles.prl libxcb-res.so.0.0.0 libgettextpo.so.0.5.8 libpsx.so.2.67 libQt5QuickParticles.so libxcb-screensaver.so libgettextsrc-0.21.1.so libpulse-mainloop-glib.so libQt5QuickParticles.so.5 libxcb-screensaver.so.0 libgettextsrc.so libpulse-mainloop-glib.so.0 libQt5QuickParticles.so.5.15 libxcb-screensaver.so.0.0.0 libgfortran.so libpulse-mainloop-glib.so.0.0.6 libQt5QuickParticles.so.5.15.8 libxcb-shape.so libgfortran.so.5 libpulse-simple.so libQt5Quick.prl libxcb-shape.so.0 libgfortran.so.5.0.0 libpulse-simple.so.0 libQt5QuickShapes.prl libxcb-shape.so.0.0.0 libgio-2.0.so libpulse-simple.so.0.1.1 libQt5QuickShapes.so libxcb-shm.so libgio-2.0.so.0 libpulse.so libQt5QuickShapes.so.5 libxcb-shm.so.0 libgio-2.0.so.0.7600.2 libpulse.so.0 libQt5QuickShapes.so.5.15 libxcb-shm.so.0.0.0 libglib-2.0.so libpulse.so.0.24.2 libQt5QuickShapes.so.5.15.8 libxcb.so libglib-2.0.so.0 libpython3.9.so libQt5Quick.so libxcb.so.1 libglib-2.0.so.0.7600.2 libpython3.9.so.1.0 libQt5Quick.so.5 libxcb.so.1.1.0 libgmodule-2.0.so libpython3.so libQt5Quick.so.5.15 libxcb-sync.so libgmodule-2.0.so.0 libQt53DAnimation.prl libQt5Quick.so.5.15.8 libxcb-sync.so.1 libgmodule-2.0.so.0.7600.2 libQt53DAnimation.so libQt5QuickTemplates2.prl libxcb-sync.so.1.0.0 libgobject-2.0.so libQt53DAnimation.so.5 libQt5QuickTemplates2.so libxcb-util.a libgobject-2.0.so.0 libQt53DAnimation.so.5.15 
libQt5QuickTemplates2.so.5 libxcb-util.so libgobject-2.0.so.0.7600.2 libQt53DAnimation.so.5.15.8 libQt5QuickTemplates2.so.5.15 libxcb-util.so.1 libgomp.so libQt53DCore.prl libQt5QuickTemplates2.so.5.15.8 libxcb-util.so.1.0.0 libgomp.so.1 libQt53DCore.so libQt5QuickTest.prl libxcb-xf86dri.so libgomp.so.1.0.0 libQt53DCore.so.5 libQt5QuickTest.so libxcb-xf86dri.so.0 libgpg-error.so libQt53DCore.so.5.15 libQt5QuickTest.so.5 libxcb-xf86dri.so.0.0.0 libgpg-error.so.0 libQt53DCore.so.5.15.8 libQt5QuickTest.so.5.15 libxcb-xfixes.so libgpg-error.so.0.33.1 libQt53DExtras.prl ... Also any kind of command with conda gives the same error so conda install brotli solutions do not work. How can I proceed? (I am in a remote linux server) | Lib\site-packages\urllib3\response.py tries to import brotlicffi as brotli and then tries import brotli, which yields brotli.error AttributeError: module 'brotli' has no attribute 'error'. pip install brotlicffi fixes the error in conda. Here are the versions I ended up with. Upgrading conda removed brotlipy-0.7.0-py310h2bbff1b_1002 and added several brotli 1.0.9 pieces, perhaps missing as least one. conda list | grep brotli brotli 1.0.9 hcfcfb64_8 conda-forge brotli-bin 1.0.9 hcfcfb64_8 conda-forge brotlicffi 1.0.9.2 pypi_0 pypi libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge libbrotlidec 1.0.9 hcfcfb64_8 conda-forge libbrotlienc 1.0.9 hcfcfb64_8 conda-forge With conda working again, conda install brotlicffi cleans up the environment. Expect the following conda complaint; it will install enum34 to fix. You may have to repeat conda install brotlicffi The environment is inconsistent, please check the package plan carefully The following packages are causing the inconsistency: - pypi/pypi::brotlicffi==1.0.9.2=pypi_0 conda upgrade --all -n base should work normally. | 6 | 3 |
76,273,001 | 2023-5-17 | https://stackoverflow.com/questions/76273001/how-to-solve-typeerror-type-object-does-not-support-context-manager-protocol | I was creating a voice assistant project but I am having a problem with the line that contains the with statement. The code I've written is this import speech_recognition as sr import win32com.client speaker = win32com.client.Dispatch("SAPI.SpVoice") def say(text): speaker.Speak(f"{text}") def takeCommand(): r = sr.Recognizer() with sr.Microphone as source: r.pause_threshold = 1 audio = r.listen(source) query = r.recognize_google(audio, language="en-in") print(f"User said: {query}") return query if __name__ == "__main__": print("VS Code") say("Hello I am Jarvis A.I.") while 1: print("listening...") text = takeCommand() say(text) And the error it always gives is this VS Code listening... Traceback (most recent call last): File "f:\Jarvis AI\main.py", line 23, in <module> text = takeCommand() ^^^^^^^^^^^^^ File "f:\Jarvis AI\main.py", line 11, in takeCommand with sr.Microphone as source: TypeError: 'type' object does not support the context manager protocol I've installed packages such as pywin32, pyaudio and speechrecognition on my system, but now I don't know what to do or how to proceed. | Try changing with sr.Microphone as source: to with sr.Microphone() as source:. The original code passes the Microphone class itself rather than an instance; calling sr.Microphone() creates an instance that supports the context manager protocol. | 5 | 17 |
76,297,649 | 2023-5-20 | https://stackoverflow.com/questions/76297649/auto-arima-in-python-results-in-poor-fitting-prediction-of-trend | New to ARIMA and attempting to model a dataset in Python using auto ARIMA. I'm using auto-ARIMA as I believe it will be better at defining the values of p, d and q however the results are poor and I need some guidance. Please see my reproducible attempts below Attempt as follows: # DEPENDENCIES import pandas as pd import numpy as np import matplotlib.pyplot as plt import pmdarima as pm from pmdarima.model_selection import train_test_split from statsmodels.tsa.stattools import adfuller from pmdarima.arima import ADFTest from pmdarima import auto_arima from sklearn.metrics import r2_score # CREATE DATA data_plot = pd.DataFrame(data removed) # SET INDEX data_plot['date_index'] = pd.to_datetime(data_plot['date'] data_plot.set_index('date_index', inplace=True) # CREATE ARIMA DATASET arima_data = data_plot[['value']] arima_data # PLOT DATA arima_data['value'].plot(figsize=(7,4)) The above steps result in a dataset that should look like this. # Dicky Fuller test for stationarity adf_test = ADFTest(alpha = 0.05) adf_test.should_diff(arima_data) Result = 0.9867 indicating non-stationary data which should be handled by appropriate over of differencing later in auto arima process. # Assign training and test subsets - 80:20 split print('Dataset dimensions;', arima_data.shape) train_data = arima_data[:-24] test_data = arima_data[-24:] print('Training data dimension:', train_data.shape, round((len(train_data)/len(arima_data)*100),2),'% of dataset') print('Test data dimension:', test_data.shape, round((len(train_data)/len(arima_data)*100),2),'% of dataset') # Plot training & test data plt.plot(train_data) plt.plot(test_data) # Run auto arima arima_model = auto_arima(train_data, start_p=0, d=1, start_q=0, max_p=5, max_d=5, max_q=5, start_P=0, D=1, start_Q=0, max_P=5, max_D=5, max_Q=5, m=12, seasonal=True, stationary=False, error_action='warn', trace=True, suppress_warnings=True, stepwise=True, random_state=20, n_fits=50) print(arima_model.aic()) Output suggests best model is 'ARIMA(1,1,1)(0,1,0)[12]' with AIC 1725.35484 #Store predicted values and view resultant df prediction = pd.DataFrame(arima_model.predict(n_periods=25), index=test_data.index) prediction.columns = ['predicted_value'] prediction # Plot prediction against test and training trends plt.figure(figsize=(7,4)) plt.plot(train_data, label="Training") plt.plot(test_data, label="Test") plt.plot(prediction, label="Predicted") plt.legend(loc='upper right') plt.show() # Finding r2 model score test_data['predicted_value'] = prediction r2_score(test_data['value'], test_data['predicted_value']) Result: -6.985 | Is auto_arima a method done by you? It depends how you differentiate and what you do there. Did you check the autocorrelation and partial autocorrelation to know which repeating time lags you have there? Also, it seems you have some seasonality patterns every year, you could try a SARIMA model if you are not doing it already. To try a SARIMA model you have to: Stationarized the data, in this case by differentiation you can convert the moving mean a stationary one. data_stationarized = train_data.diff()[1:] Check the autocorrelation and partial autocorrelation to check the seasonality. You can use the library statsmodels for this. 
import statsmodels.api as sm sm.graphics.tsa.plot_acf(data_stationarized); You can see that the most prominent flag is the twelfth flag, so as the granularity of the data is by month, that means there is prominent seasonality pattern every 12 months. We can check the partial autocorrelation to confirm it too: sm.graphics.tsa.plot_pacf(data_stationarized); Again the most prominent flag is the twelfth one. Fit the model with a seasonality order of 12. There are more parameters to explain which can be adjusted to have better results, but then this post will be very long. model = sm.tsa.SARIMAX(endog=train_data, order=(2,0,0), seasonal_order=(2,0,0,12)) model_fit = model.fit() Evaluate the results from sklearn.metrics import mean_squared_error y_pred = model_fit.forecast(steps=24) # when squared=False then is equals to RMSE mean_squared_error(y_true=test_data.values, y_pred=y_pred, squared=False) This outputs 12063.88, which you can use to compare different results more rigorously. For a graphical check: prediction = pd.DataFrame(model_fit.forecast(steps=25), index=test_data.index) prediction.columns = ['predicted_value'] prediction # Plot prediction against test and training trends plt.figure(figsize=(7,4)) plt.plot(train_data, label="Training") plt.plot(test_data, label="Test") plt.plot(prediction, label="Predicted") plt.legend(loc='upper right') plt.xticks([]) plt.yticks([]) plt.show(); Now you can see that the predictions get closer to the expected values. You could continue fine tuning the order and seasonal order to get even better results, I will advice to check the docs of statsmodel. Another advice it's to analyze the autocorrelation and partial autocorrelation of the residuals to check if your model is capturing all of the patterns. You have them in the model_fit object. | 7 | 2 |
76,290,771 | 2023-5-19 | https://stackoverflow.com/questions/76290771/results-not-reproducible-between-runs-despite-seeds-being-set | How is it possible, that running the same Python program twice with the exact same seeds and static data input produces different results? Calling the below function in a Jupyter Notebook yields the same results, however, when I restart the kernel, the results are different. The same applies when I run the code from the command line as a Python script. Is there anything else people do to make sure their code is reproducible? All resources I found talk about setting seeds. The randomness is introduced by ShapRFECV. This code runs on a CPU only. MWE (In this code I generate a dataset and eliminate features using ShapRFECV, if that's important): import os, random import numpy as np import pandas as pd from probatus.feature_elimination import ShapRFECV from sklearn.ensemble import RandomForestClassifier from sklearn.datasets import make_classification global_seed = 1234 os.environ['PYTHONHASHSEED'] = str(global_seed) np.random.seed(global_seed) random.seed(global_seed) feature_names = ['f1', 'f2', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Code from tutorial on probatus documentation X, y = make_classification(n_samples=100, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) def shap_feature_selection(X, y, seed: int) -> list[str]: random_forest = RandomForestClassifier(random_state=seed, n_estimators=70, max_features='log2', criterion='entropy', class_weight='balanced') # Set to run on one thread only shap_elimination = ShapRFECV(clf=random_forest, step=0.2, cv=5, scoring='f1_macro', n_jobs=1, random_state=seed) report = shap_elimination.fit_compute(X, y, check_additivity=True, seed=seed) # Return the set of features with the best validation accuracy return report.iloc[[report['val_metric_mean'].idxmax() - 1]]['features_set'].to_list()[0] Results: # Results from the first run shap_feature_selection(X, y, 0) >>> ['f17', 'f15', 'f18', 'f8', 'f12', 'f1', 'f13'] # Running again in same session shap_feature_selection(X, y, 0) >>> ['f17', 'f15', 'f18', 'f8', 'f12', 'f1', 'f13'] # Restarting the kernel and running the exact same command shap_feature_selection(X, y, 0) >>> ['f8', 'f1', 'f17', 'f6', 'f18', 'f20', 'f12', 'f15', 'f7', 'f13', 'f11'] Details: Ubuntu 22.04 Python 3.9.12 Numpy 1.22.0 Sklearn 1.1.1 | This has now been fixed in probatus (the issue was a bug, apparently connected to the pandas implementation they were using, see here). For me, everything works as expecting when using the probatus' latest code version (not the package). | 7 | 1 |
76,289,322 | 2023-5-19 | https://stackoverflow.com/questions/76289322/selecting-python-interpreter-in-vscode | I am using VSCode with ArcGIS Pro 3.0 in a virtual environment. Until yesterday, everything worked just fine. After updating to Pro 3.0, I was still able to open a script and then have it run in the terminal window. Previously, I was able to select a line from the script, run it, and then it would open the correct interpreter. However, now I am unable to do so and cannot troubleshoot why this is happening. I have added the correct path to the ArcGIS Pro python executable in the interpreter path, but the terminal opens to another python executable. Any advice would be greatly appreciated as to how I can run the specific python executable that I want. UPDATE: I can open VSCode using code from my anaconda installation, but still am having trouble running python interactively in the terminal. Previously, I used to be able to do this (e.g. test indented code cells), but this doesn't seem to be functioning anymore. | I was typing python instead of Python. The lowercase python was launching the python that was in PYTHONPATH. | 10 | 0 |
76,284,412 | 2023-5-18 | https://stackoverflow.com/questions/76284412/how-can-i-stream-a-response-from-langchains-openai-using-flask-api | I am using Python Flask app for chat over data. In the console I am getting streamable response directly from the OpenAI since I can enable streming with a flag streaming=True. The problem is, that I can't "forward" the stream or "show" the strem than in my API call. Code for the processing OpenAI and chain is: def askQuestion(self, collection_id, question): collection_name = "collection-" + str(collection_id) self.llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature, openai_api_key=os.environ.get('OPENAI_API_KEY'), streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])) self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, output_key='answer') chroma_Vectorstore = Chroma(collection_name=collection_name, embedding_function=self.embeddingsOpenAi, client=self.chroma_client) self.chain = ConversationalRetrievalChain.from_llm(self.llm, chroma_Vectorstore.as_retriever(similarity_search_with_score=True), return_source_documents=True,verbose=VERBOSE, memory=self.memory) result = self.chain({"question": question}) res_dict = { "answer": result["answer"], } res_dict["source_documents"] = [] for source in result["source_documents"]: res_dict["source_documents"].append({ "page_content": source.page_content, "metadata": source.metadata }) return res_dict and the API route code: @app.route("/collection/<int:collection_id>/ask_question", methods=["POST"]) def ask_question(collection_id): question = request.form["question"] # response_generator = document_thread.askQuestion(collection_id, question) # return jsonify(response_generator) def stream(question): completion = document_thread.askQuestion(collection_id, question) for line in completion['answer']: yield line return app.response_class(stream_with_context(stream(question))) I am testing my endpoint with curl and I am passing flag -N to curl, so I should get the streamable response, if it is possible. When I make API call first the endpoint is waiting to process the data (I can see in my terminal in VS code the streamable answer) and when finished, I get everything displayed in one go. | With the usage of threading and callback we can have a streaming response from flask API. In flask API, you may create a queue to register tokens through langchain's callback. class StreamingHandler(BaseCallbackHandler): ... def on_llm_new_token(self, token: str, **kwargs) -> None: self.queue.put(token) You may get tokens from the same queue in your flask route. from flask import Response, stream_with_context import threading @app.route(....): def stream_output(): q = Queue() def generate(rq: Queue): ... # add your logic to prevent while loop # to run indefinitely while( ...): yield rq.get() callback_fn = StreamingHandler(q) threading.Thread(target= askQuestion, args=(collection_id, question, callback_fn)) return Response(stream_with_context(generate(q)) In your langchain's ChatOpenAI add the above custom callback StreamingHandler. self.llm = ChatOpenAI( model_name=self.model_name, temperature=self.temperature, openai_api_key=os.environ.get('OPENAI_API_KEY'), streaming=True, callback=[callback_fn,] ) For reference: https://python.langchain.com/en/latest/modules/callbacks/getting_started.html#creating-a-custom-handler https://flask.palletsprojects.com/en/2.3.x/patterns/streaming/#streaming-with-context | 7 | 3 |
76,268,799 | 2023-5-17 | https://stackoverflow.com/questions/76268799/how-should-i-declare-enums-in-sqlalchemy-using-mapped-column-to-enable-type-hin | I am trying to use Enums in SQLAlchemy 2.0 with mapped_column. So far I have the following code (taken from another question): from sqlalchemy.dialects.postgresql import ENUM as pgEnum import enum class CampaignStatus(str, enum.Enum): activated = "activated" deactivated = "deactivated" CampaignStatusType: pgEnum = pgEnum( CampaignStatus, name="campaignstatus", create_constraint=True, metadata=Base.metadata, validate_strings=True, ) class Campaign(Base): __tablename__ = "campaign" id: Mapped[UUID] = mapped_column(primary_key=True, default=uuid4) created_at: Mapped[dt.datetime] = mapped_column(default=dt.datetime.now) status: Mapped[CampaignStatusType] = mapped_column(nullable=False) However, that gives the following error upon the construction of the Campaign class itself. Traceback (most recent call last): File "<stdin>", line 27, in <module> class Campaign(Base): ... AttributeError: 'ENUM' object has no attribute '__mro__' Any hint about how to make this work? The response from ENUM type in SQLAlchemy with PostgreSQL does not apply as I am using version 2 of SQLAlchemy and those answers did not use mapped_column or Mapped types. Also, removing str from CampaignStatus does not help. | The crux of the issue relating to __mro__ causing the AttributeError is that CampaignStatusType is not a class, but rather an instance variable of type sqlalchemy.dialects.postgresql.ENUM (using pyright may verify this - given that it complains about Mapped[CampaignStatusType] being an "Illegal type annotation: variable not allowed unless it is a type alias"). As a test, replacing the type annotation for status with Mapped[CampaignStatus] does resolve the issue (and pyright reports no errors), but that does not hook the column type to the enum with postgresql dialect that is desired. So the only way around this while using the dialect specific enum type is to use the non-annotated construct: status = mapped_column(CampaignStatusType, nullable=False) However, if type annotation is still desired, i.e. whatever being Mapped must be a type, and that sqlalchemy.dialects.postgresql.ENUM (which was imported as pgEnum) is the underlying type for the instance CampaignStatusType, it may be thought that the following might be a solution # don't do this erroneous example despite it does run status: Mapped[sqlalchemy.dialects.postgresql.ENUM] = mapped_column( CampaignStatusType, nullable=False, ) While it works, it does NOT actually reflect what will be represented by the data, so DO NOT actually do that. Moreover, it only works because the type annotation is ignored when the specific column type is passed, so putting anything in there will work while having an invalid type. Now, given that SQLAlchemy is now 2.0 (as the question explicitly want this newer version), perhaps reviewing the documentation and see now native enums should be handled now. 
Adapting the examples in the documentation, the following MVCE may now be derived, using all the intended keyword arguments that was passed to the PostgreSQL dialect specific ENUM type passed generic sqlalchemy.Enum instead (aside from metadata=Base.metadata as that's completely superfluous): from typing import Literal from typing import get_args from sqlalchemy import Enum from sqlalchemy.orm import DeclarativeBase from sqlalchemy.orm import Mapped from sqlalchemy.orm import mapped_column CampaignStatus = Literal["activated", "deactivated"] class Base(DeclarativeBase): pass class Campaign(Base): __tablename__ = "campaign" id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True) status: Mapped[CampaignStatus] = mapped_column(Enum( *get_args(CampaignStatus), name="campaignstatus", create_constraint=True, validate_strings=True, )) Note the use of typing.get_args on CampaignStatus and splat it to the Enum here as opposed to what the official examples have done in repeating themselves. Now to include the usage: from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker def main(): engine = create_engine('postgresql://postgres@localhost/postgres') Base.metadata.create_all(engine) Session = sessionmaker(bind=engine) session = Session() session.add(Campaign(status='activated')) session.add(Campaign(status='deactivated')) session.commit() s = 'some_unvalidated_string' try: session.add(Campaign(status=s)) session.commit() except Exception: print("failed to insert with %r" % s) if __name__ == '__main__': main() The above will produce failed to insert with 'some_unvalidated_string' as the output, showing that unvalidated strings will not be inserted, while validated strings that are mapped to some enum are inserted without issues. Moreover, pyright will not produce errors (though honestly, this is not necessarily a good metric because type hinting in Python is still fairly half-baked, as pyright did not detect the erroneous example as an error in the very beginning no matter what went inside Mapped, but I digress). Viewing the newly created entities using psql postgres=# select * from campaign; id | status ----+------------- 1 | activated 2 | deactivated (2 rows) postgres=# \dt campaign; Table "public.campaign" Column | Type | Collation | Nullable | Default --------+----------------+-----------+----------+-------------------------------------- id | integer | | not null | nextval('campaign_id_seq'::regclass) status | campaignstatus | | not null | Indexes: "campaign_pkey" PRIMARY KEY, btree (id) postgres=# \dT+ campaignstatus; List of data types Schema | Name | Internal name | Size | Elements | Owner | Access privileges | Description --------+----------------+----------------+------+-------------+----------+-------------------+------------- public | campaignstatus | campaignstatus | 4 | activated +| postgres | | | | | | deactivated | | | (1 row) The enum of course cannot be dropped without dropping the campaign table: postgres=# drop type campaignstatus; ERROR: cannot drop type campaignstatus because other objects depend on it DETAIL: column status of table campaign depends on type campaignstatus HINT: Use DROP ... CASCADE to drop the dependent objects too. So the enum more or less behaves as expected despite only using generic SQLAlchemy types, without needing dialect specific imports. | 12 | 16 |
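As a further simplification, SQLAlchemy 2.0 can also map a plain enum.Enum annotation directly, without typing.Literal or the PostgreSQL-specific ENUM type. A minimal sketch reusing the Campaign/CampaignStatus names from the question and relying on SQLAlchemy's default enum handling (so options such as create_constraint and validate_strings are not set here):

```python
import enum

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class CampaignStatus(enum.Enum):
    activated = "activated"
    deactivated = "deactivated"


class Base(DeclarativeBase):
    pass


class Campaign(Base):
    __tablename__ = "campaign"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    # SQLAlchemy 2.0 resolves an enum.Enum annotation to sqlalchemy.Enum
    # automatically, so no explicit column type is needed in the plain case.
    status: Mapped[CampaignStatus] = mapped_column(nullable=False)
```

Whether this or the Literal-based column fits better depends on how much control over the generated PostgreSQL type is required.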
76,315,436 | 2023-5-23 | https://stackoverflow.com/questions/76315436/html-iframe-with-dash-output | I have 2 pretty simple dashboards and I would like to run this two dashboards with flask using main.py for routing. app1.py import dash from dash import html, dcc app = dash.Dash(__name__) app.layout = html.Div( children=[ html.H1('App 1'), dcc.Graph( id='graph1', figure={ 'data': [{'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'App 1'}], 'layout': { 'title': 'App 1 Graph' } } ) ] ) and app2.py import dash from dash import html, dcc app = dash.Dash(__name__) app.layout = html.Div( children=[ html.H1('App 2'), dcc.Graph( id='graph2', figure={ 'data': [{'x': [1, 2, 3], 'y': [2, 4, 1], 'type': 'bar', 'name': 'App 2'}], 'layout': { 'title': 'App 2 Graph' } } ) ] ) main.py # main_app.py from flask import Flask, render_template import app1 import app2 app = Flask(__name__) @app.route('/') def index(): return 'Main App' @app.route('/app1') def render_dashboard1(): return render_template('dashboard1.html') @app.route('/app2') def render_dashboard2(): return render_template('dashboard2.html') if __name__ == '__main__': app.run(debug=True) dashboard1.html <!-- dashboard1.html --> <!DOCTYPE html> <html> <head> <title>Dashboard 1</title> </head> <body> <h1>Dashboard 1</h1> <iframe src="/app1" width="1000" height="800"></iframe> </body> </html> dashboard2.html <!-- dashboard2.html --> <!DOCTYPE html> <html> <head> <title>Dashboard 2</title> </head> <body> <h1>Dashboard 2</h1> <iframe src="/app2" width="1000" height="800"></iframe> </body> </html> structure / app1.py app2.py main.py /templates dashboard1.html dashboard2.html but when I run my main.py and route for app1 I can see frame for the app1 but there is no graph. Could someone please explain how to use iframe to for me to be able to see output? | I can't access the templates with your code, I think dask uses flask so this may be causing problems. What I have done is calling two dash apps and one flask app; this for main.py: from flask import Flask, render_template from app1 import create_app as create_app1 from app2 import create_app as create_app2 server = Flask(__name__) app1 = create_app1(server) app2 = create_app2(server) @server.route('/') def index(): return 'Main App' def render_dashboard1(): return render_template('dashboard1.html') def render_dashboard2(): return render_template('dashboard2.html') if __name__ == '__main__': server.run(debug=True) Use the method create_app for each app, app1.py: import dash from dash import html, dcc def create_app(server): app = dash.Dash(server=server, routes_pathname_prefix='/app1/') app.layout = html.Div( children=[ html.H1('App 1'), dcc.Graph( id='graph1', figure={ 'data': [{'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'App 1'}], 'layout': { 'title': 'App 1 Graph' } } ) ] ) return app and app2.py (I changed y values to be sure it is reading the app2 template): import dash from dash import html, dcc def create_app(server): app = dash.Dash(server=server, routes_pathname_prefix='/app2/') app.layout = html.Div( children=[ html.H1('App 2'), dcc.Graph( id='graph2', figure={ 'data': [{'x': [1, 2, 3], 'y': [10, 10, 10], 'type': 'bar', 'name': 'App 2'}], 'layout': { 'title': 'App 2 Graph' } } ) ] ) return app and I can see both graphs correctly. | 5 | 3 |
76,316,261 | 2023-5-23 | https://stackoverflow.com/questions/76316261/how-to-edit-an-already-created-python-script-in-powerbi | In my Power BI dashboard, I created a Python Script that accesses an API and generates a Pandas data frame. It works fine, but how can I edit the Python code? I thought it would be something simple, but I can't really figure out where to find it in the interface. If I send the .pbix file to someone, they will receive an alert that a Python script is executing, and it will display the code nicely formatted. I can find the code if I go to "Model Exhibition -> Edit query -> Advanced Editor" (I'm translating the options from another language, they may be somewhat different). It is M language code, and the Python script is displayed as one long line, as in the image below: I believe it is possible to open a text box to edit the Python script, but I can't really find it. | In Power Query you should be able to click on the ribbon to insert Python code as below. If the script already exists, click the little cog icon to the right of the step in APPLIED STEPS, as below: | 5 | 2 |
76,319,199 | 2023-5-23 | https://stackoverflow.com/questions/76319199/getting-the-price-of-the-game-from-egs | I'm trying to get the price of the game from the epic games store, but I get a 403 error import requests from bs4 import BeautifulSoup url = "https://store.epicgames.com/ru/p/cities-skylines" response = requests.get(url) if response.status_code == 200: soup = BeautifulSoup(response.text, 'html.parser') price_element = soup.find(class_='css-119zqif') if price_element is not None: price = price_element.get_text(strip=True) print("Price Cities: Skylines:", price) else: print("Error") else: print("Error:", response.status_code) I tried to access the game page directly, and take the price information from the class in html, but I get the error "Request error: 403" | You get to the cloudflare page dedicated to fighting robots. To get around this limitation, you need to imitate a real person. To do this, you can use the following library or similar. Here is an example of working code. Don't forget to pip install cloudscraper. from bs4 import BeautifulSoup import cloudscraper url = "https://store.epicgames.com/ru/p/cities-skylines" scraper = cloudscraper.create_scraper() response = scraper.get(url) if response.status_code == 200: soup = BeautifulSoup(response.text, 'html.parser') price_element = soup.find('div', 'css-169q7x3').find(class_='css-119zqif') print(price_element) if price_element is not None: price = price_element.get_text(strip=True) print("Price Cities: Skylines:", price) else: print("Error") else: print("Error:", response.status_code) result: Price Cities: Skylines: 330 ₽ One remark. This library has free limitations and sometimes an error may occur: cloudscraper.exceptions.CloudflareChallengeError: Detected a Cloudflare version 2 Captcha challenge, This feature is not available in the opensource (free) version. Ignore it and get the result. But you can always rewrite your code using this API. If you sign up, they will give you a token for 10,000 requests per month. If this option does not suit you, look in the direction of selenium. | 3 | 0 |
76,282,003 | 2023-5-18 | https://stackoverflow.com/questions/76282003/binary-image-classifier-in-pytorch-progress-bar-and-way-to-check-if-the-training | I would like to build and train a binary classifier in PyTorch that reads images from the path process them and trains a classifier using their labels. My images can be found in the following folder: -data - class_1_folder - class_2_folder Hence, to read them in tensors I am doing the following: PATH = "data/" transform = transforms.Compose([transforms.Resize(256), transforms.RandomCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224,0.225])]) dataset = datasets.ImageFolder(PATH, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=356, shuffle=True) images, labels = next(iter(dataloader)) This code actually reads the images and performs some necessary transformations-preprocess. Next step to create a model and perform the training: acc = model_train(images, labels, images, labels) With model_train to be: import pdb import torch from torchvision import datasets, transforms, models from matplotlib import pyplot as plt from torchvision.transforms.functional import to_pil_image import torch.nn as nn import numpy as np import torch.optim as optim import tqdm import copy from time import sleep def my_model(): device = "cuda" if torch.cuda.is_available() else "cpu" model = models.resnet18(pretrained=False) num_features = model.fc.in_features model.fc = nn.Linear(num_features, 1) # Binary classifier with 2 output classes # Move the model to the device model = model.to(device) return model def model_train(X_train, y_train, X_val, y_val): model = my_model() dtype = torch.FloatTensor loss_fn = nn.CrossEntropyLoss().type(dtype) # binary cross entropy optimizer = optim.Adam(model.parameters(), lr=0.0001) n_epochs = 20 # number of epochs to run batch_size = 10 # size of each batch batch_start = torch.arange(0, len(X_train), batch_size) # Hold the best model best_acc = - np.inf # init to negative infinity best_weights = None for epoch in range(n_epochs): model.train() with tqdm.tqdm(batch_start, unit="batch", mininterval=0, disable=True) as bar: bar.set_description(f"Epoch {epoch}") for start in bar: # take a batch bar.set_description(f"Epoch {epoch}") X_batch = X_train[start:start+batch_size] y_batch = y_train[start:start+batch_size] # forward pass y_pred = model(X_batch) y_pred = torch.max(y_pred, 1)[0] loss = loss_fn(y_pred, y_batch.float()) # backward pass print(loss) optimizer.zero_grad() loss.backward() # update weights optimizer.step() # print progress acc = (y_pred.round() == y_batch).float().mean() bar.set_postfix( loss=float(loss), acc=float(acc) ) bar.set_postfix(loss=loss.item(), accuracy=100. * acc) sleep(0.1) # evaluate accuracy at end of each epoch model.eval() y_pred = model(X_val) acc = (y_pred.round() == y_val).float().mean() acc = float(acc) if acc > best_acc: best_acc = acc best_weights = copy.deepcopy(model.state_dict()) # restore model and return best accuracy torch.save(model.state_dict(), "model/my_model.pth") model.load_state_dict(best_weights) return best_acc I am trying to understand how I can correctly portray the progress bar during training and second, how can I validate that the training process took place correctly. For the latter, I have noticed a weird behavior. For class zero I am getting always zero loss while for class one it's between range 13-24. It seems to be incorrect, however, I am sure how to dive deeper! 
tensor(-0., grad_fn=<DivBackward1>) tensor([-0.0986, -0.0806, -0.0161, 0.0287, -0.0279, 0.0083, -0.0526, -0.1393, -0.2082, -0.0141], grad_fn=<MaxBackward0>) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) torch.float32 torch.int64 tensor(-0., grad_fn=<DivBackward1>) tensor([-0.1779, 0.0936, -0.0341, -0.1531, -0.1222, -0.1169, -0.0160, -0.0674, 0.1230, -0.1181], grad_fn=<MaxBackward0>) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) torch.float32 torch.int64 tensor(-0., grad_fn=<DivBackward1>) tensor([-0.0438, -0.1269, -0.1624, -0.0976, -0.0132, -0.1944, -0.0034, -0.0454, -0.1559, 0.0657], grad_fn=<MaxBackward0>) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) torch.float32 torch.int64 tensor(-0., grad_fn=<DivBackward1>) tensor([-0.1655, 0.0222, -0.0801, -0.1390, -0.0905, -0.1472, -0.0395, -0.0180, -0.1492, 0.0914], grad_fn=<MaxBackward0>) tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) torch.float32 torch.int64 tensor(-0., grad_fn=<DivBackward1>) tensor([-0.7035, -0.1989, 0.0921, -0.1082, -0.2588, -0.3557, 0.3093, 0.0909, 0.1603, 0.1838], grad_fn=<MaxBackward0>) tensor([0, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(20.4545, grad_fn=<DivBackward1>) tensor([-0.4783, -0.1027, -0.0357, 0.0882, -0.2955, -0.0968, 0.3323, -0.0472, 0.1017, -0.2186], grad_fn=<MaxBackward0>) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.2550, grad_fn=<DivBackward1>) tensor([ 0.1554, -0.2664, 0.1419, 0.0203, 0.0895, -0.0085, -0.2867, -0.1957, -0.1315, -0.2340], grad_fn=<MaxBackward0>) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.1584, grad_fn=<DivBackward1>) tensor([-0.0406, -0.2144, 0.1997, 0.2196, -0.3464, 0.1311, -0.0743, -0.2440, -0.1751, -0.2371], grad_fn=<MaxBackward0>) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.2112, grad_fn=<DivBackward1>) tensor([-0.0080, -0.1138, -0.1035, 0.0697, -0.1745, -0.1438, -0.2360, -0.1308, 0.0146, 0.1209], grad_fn=<MaxBackward0>) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.0853, grad_fn=<DivBackward1>) tensor([-0.1235, 0.0081, -0.1073, -0.1036, -0.2037, -0.1204, -0.0570, -0.1146, 0.0849, 0.0798], grad_fn=<MaxBackward0>) tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) torch.float32 torch.int64 tensor(23.0666, grad_fn=<DivBackward1>) tensor([-0.0660, -0.0832, -0.0414, -0.0334, -0.0123, -0.0203, -0.0549, -0.0747, -0.0779, -0.1629], grad_fn=<MaxBackward0>) What can be wrong in this case? | Synopsis There are few issues with the attached code: the torch API is underused (because the code is too long) the dataset is not fed properly (the training part iterates over one chunk of data returned once by iter), the last layer and loss don't look correct (because of one neuron in nn.Linear(num_features, 1) and because of nonbinary cross-entropy used). While the first two issues can impact quality and efficiency, I would blame the "binary setup" in first place. In order to fix that, I would suggest to set up the training with 2 classes first - that is, labels of shape [batch,2] - and if that works, carefully move to the binary-encoded case. 
Working Solution Let's attack the task of discriminating between digits 3 and 8, on FashionMNIST: curl -L -o data.zip https://github.com/DeepLenin/fashion-mnist_png/raw/master/data.zip unzip data.zip Here is the appropriate dataloader: ## Dataset import torch from torchvision import datasets import torchvision.transforms as transforms PATH = "data/train" transform = transforms.Compose([transforms.Resize(256), transforms.RandomCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224,0.225])]) dataset = datasets.ImageFolder(PATH, transform=transform) dataset.classes = ['3','8'] dataset.class_to_idx = {'3':0,'8':1} dataset.samples = list(filter(lambda s: s[1] in [0,1], dataset.samples)) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) As the second step, we build the model ## Model construction import torch import torch.nn as nn import torch.optim as optim from torchvision import datasets, transforms, models device = "cuda" if torch.cuda.is_available() else "cpu" def my_model(): model = models.resnet18(pretrained=False) num_features = model.fc.in_features model.fc = nn.Linear(num_features, 2) # classifier with 2 output classes # Move the model to the device model = model.to(device) return model model = my_model() dtype = torch.FloatTensor loss_fn = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) Finally, we do training with the progress bar (see Adam Oudad's post on tqdm): ## Model traning from tqdm import tqdm model.train() for epoch in range(1, 6): with tqdm(dataloader, unit="batch") as tepoch: for X_batch, y_batch in tepoch: tepoch.set_description(f"Epoch {epoch}") X_batch, y_batch = X_batch.to(device), y_batch.to(device) optimizer.zero_grad() y_pred = model(X_batch) loss = loss_fn(y_pred, y_batch) loss.backward() optimizer.step() tepoch.set_postfix(loss=loss.item()) The loss is being watched by the progress bar, like this: Epoch 1: 100%|██████████| 375/375 [01:01<00:00, 6.13batch/s, loss=0.211] Epoch 2: 100%|██████████| 375/375 [00:59<00:00, 6.25batch/s, loss=0.0149] Epoch 3: 100%|██████████| 375/375 [01:00<00:00, 6.24batch/s, loss=0.00139] Epoch 4: 100%|██████████| 375/375 [00:59<00:00, 6.27batch/s, loss=0.00149] Epoch 5: 100%|██████████| 375/375 [00:59<00:00, 6.29batch/s, loss=0.00252] and in case of any doubt, the accuracy can be evaluated on the test dataset: PATH = "data/test" test_dataset = datasets.ImageFolder(PATH, transform=transform) test_dataset.classes = ['3','8'] test_dataset.class_to_idx = {'3':0,'8':1} test_dataset.samples = list(filter(lambda s: s[1] in [0,1], test_dataset.samples)) test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=32) def check_accuracy(loader, model, device): num_correct = 0 num_samples = 0 model.eval() with torch.no_grad(): for x, y in loader: x = x.to(device=device) y = y.to(device=device) scores = model(x) _, predictions = scores.max(1) num_correct += (predictions == y).sum() num_samples += predictions.size(0) print(f'Got {num_correct} / {num_samples} with accuracy {float(num_correct)/float(num_samples)*100:.2f}') model.train() check_accuracy(test_dataloader, model, device) # Got 1985 / 2000 with accuracy 99.25 So we are done! Reproducible Code See this notebook which can be run on Colab's GPU. | 5 | 2 |
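The answer above recommends getting the two-class head working first and only then moving carefully to the binary-encoded case; for completeness, a minimal sketch of that single-logit variant with BCEWithLogitsLoss is shown below, assuming integer labels in {0, 1} coming from the same ImageFolder-style dataloader (train_step and predict are illustrative helper names, and weights=None is the newer spelling of pretrained=False):

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)  # single logit for the binary head
model = model.to(device)

loss_fn = nn.BCEWithLogitsLoss()            # expects raw logits and float targets
optimizer = optim.Adam(model.parameters(), lr=1e-3)


def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    # x: [batch, 3, H, W] images, y: [batch] integer labels in {0, 1}
    x, y = x.to(device), y.float().to(device)
    optimizer.zero_grad()
    logits = model(x).squeeze(1)            # shape [batch]
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()
    return loss.item()


def predict(x: torch.Tensor) -> torch.Tensor:
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(x.to(device)).squeeze(1))
    return (probs > 0.5).long()             # back to {0, 1} class indices
```

The key point is that the loss sees one raw logit per image, not the max-over-outputs trick from the original code.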
76,283,892 | 2023-5-18 | https://stackoverflow.com/questions/76283892/how-to-add-an-information-display-button-to-the-interactive-plot-toolbar | The matplotlib plot toolbar has some support for customization. This example is provided on the official documentation: import matplotlib.pyplot as plt from matplotlib.backend_tools import ToolBase, ToolToggleBase plt.rcParams['toolbar'] = 'toolmanager' class ListTools(ToolBase): """List all the tools controlled by the `ToolManager`.""" default_keymap = 'm' # keyboard shortcut description = 'List Tools' def trigger(self, *args, **kwargs): print('_' * 80) fmt_tool = "{:12} {:45} {}".format print(fmt_tool('Name (id)', 'Tool description', 'Keymap')) print('-' * 80) tools = self.toolmanager.tools for name in sorted(tools): if not tools[name].description: continue keys = ', '.join(sorted(self.toolmanager.get_tool_keymap(name))) print(fmt_tool(name, tools[name].description, keys)) print('_' * 80) fmt_active_toggle = "{0!s:12} {1!s:45}".format print("Active Toggle tools") print(fmt_active_toggle("Group", "Active")) print('-' * 80) for group, active in self.toolmanager.active_toggle.items(): print(fmt_active_toggle(group, active)) class GroupHideTool(ToolToggleBase): """Show lines with a given gid.""" default_keymap = 'S' description = 'Show by gid' default_toggled = True def __init__(self, *args, gid, **kwargs): self.gid = gid super().__init__(*args, **kwargs) def enable(self, *args): self.set_lines_visibility(True) def disable(self, *args): self.set_lines_visibility(False) def set_lines_visibility(self, state): for ax in self.figure.get_axes(): for line in ax.get_lines(): if line.get_gid() == self.gid: line.set_visible(state) self.figure.canvas.draw() fig = plt.figure() plt.plot([1, 2, 3], gid='mygroup') plt.plot([2, 3, 4], gid='unknown') plt.plot([3, 2, 1], gid='mygroup') # Add the custom tools that we created fig.canvas.manager.toolmanager.add_tool('List', ListTools) fig.canvas.manager.toolmanager.add_tool('Show', GroupHideTool, gid='mygroup') # Add an existing tool to new group `foo`. # It can be added as many times as we want fig.canvas.manager.toolbar.add_tool('zoom', 'foo') # Remove the forward button fig.canvas.manager.toolmanager.remove_tool('forward') # To add a custom tool to the toolbar at specific location inside # the navigation group fig.canvas.manager.toolbar.add_tool('Show', 'navigation', 1) plt.show() Which opens this plot where you can hide/show some data: How can I add such a button to display some text (regarding the plot data) on a new window? | You could modify the example you provided from the matplotlib documentation to add a button that create a figure in a new window with only text in it. And, once you click on the same button again it closes the window. 
See code below, I called the implemented tool/button 'Info': import matplotlib.pyplot as plt from matplotlib.backend_tools import ToolBase, ToolToggleBase plt.rcParams['toolbar'] = 'toolmanager' class ListTools(ToolBase): """List all the tools controlled by the `ToolManager`.""" default_keymap = 'm' # keyboard shortcut description = 'List Tools' def trigger(self, *args, **kwargs): print('_' * 80) fmt_tool = "{:12} {:45} {}".format print(fmt_tool('Name (id)', 'Tool description', 'Keymap')) print('-' * 80) tools = self.toolmanager.tools for name in sorted(tools): if not tools[name].description: continue keys = ', '.join(sorted(self.toolmanager.get_tool_keymap(name))) print(fmt_tool(name, tools[name].description, keys)) print('_' * 80) fmt_active_toggle = "{0!s:12} {1!s:45}".format print("Active Toggle tools") print(fmt_active_toggle("Group", "Active")) print('-' * 80) for group, active in self.toolmanager.active_toggle.items(): print(fmt_active_toggle(group, active)) class GroupHideTool(ToolToggleBase): """Show lines with a given gid.""" default_keymap = 'S' description = 'Show by gid' default_toggled = True def __init__(self, *args, gid, **kwargs): self.gid = gid super().__init__(*args, **kwargs) def enable(self, *args): self.set_lines_visibility(True) def disable(self, *args): self.set_lines_visibility(False) def set_lines_visibility(self, state): for ax in self.figure.get_axes(): for line in ax.get_lines(): if line.get_gid() == self.gid: line.set_visible(state) self.figure.canvas.draw() class ShowInfoTool(ToolToggleBase): """Show lines with a given gid.""" default_keymap = 'I' description = 'Show info' default_toggled = False def __init__(self,*args,**kwargs): super().__init__(*args, **kwargs) def enable(self,*args): fig=plt.figure() plt.box(False) plt.xticks([]) plt.yticks([]) plt.text(0.5,0.5,'Sed ut perspiciatis, unde omnis iste natus error\n sit voluptatem accusantium doloremque laudantium, totam rem aperiam\n eaque ipsa, quae ab illo inventore veritatis et quasi\n architecto beatae vitae dicta sunt, explicabo.' ,fontsize=10,ha='center',va='center') plt.show() def disable(self, *args): plt.close() fig=plt.figure() plt.plot([1, 2, 3], gid='mygroup') plt.plot([2, 3, 4], gid='unknown') plt.plot([3, 2, 1], gid='mygroup') # Add the custom tools that we created fig.canvas.manager.toolmanager.add_tool('List', ListTools) fig.canvas.manager.toolmanager.add_tool('Show', GroupHideTool, gid='mygroup') fig.canvas.manager.toolmanager.add_tool('Info', ShowInfoTool) # Add an existing tool to new group `foo`. # It can be added as many times as we want fig.canvas.manager.toolbar.add_tool('zoom', 'foo') # Remove the forward button fig.canvas.manager.toolmanager.remove_tool('forward') # To add a custom tool to the toolbar at specific location inside # the navigation group fig.canvas.manager.toolbar.add_tool('Show', 'navigation', 1) fig.canvas.manager.toolbar.add_tool('Info', 'navigation', 2) plt.show() | 4 | 0 |
76,311,807 | 2023-5-23 | https://stackoverflow.com/questions/76311807/attributeerror-adam-object-has-no-attribute-build-during-unpickling | I'm training a Keras model and saving it for later use using pickle. When I unpickle I get this error: AttributeError: 'Adam' object has no attribute 'build' Here's the code: from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense import pickle model = Sequential() model.add(Dense(32, activation='relu', input_shape=(3,))) model.add(Dense(1, activation='linear')) # Use linear activation for regression model.compile(loss='mean_squared_error', optimizer='adam') pickle.dump(model, open("m.pkl", 'wb')) loadedModel = pickle.load(open("m.pkl", 'rb')) I get this error with TensorFlow 2.11.x and 2.13.0-rc0 on MacOS M1 | Instead of pickling, you should save the model using h5. This solves the issue: from keras.models import load_model model.save('m.h5') loadedModel = load_model('m.h5') | 4 | 3 |
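With a sufficiently recent TensorFlow/Keras (roughly 2.12 and later), the native .keras format is another option that avoids pickling the optimizer state; a minimal sketch mirroring the model from the question:

```python
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(32, activation="relu", input_shape=(3,)),
    Dense(1, activation="linear"),   # linear activation for regression
])
model.compile(loss="mean_squared_error", optimizer="adam")

model.save("m.keras")                # native Keras format, no pickle involved
loaded_model = load_model("m.keras")
```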
76,314,229 | 2023-5-23 | https://stackoverflow.com/questions/76314229/how-to-download-spacy-models-in-a-poetry-managed-environment | I am writing a Python Jupyter notebook that does some NLP processing on Italian texts. I have installed spaCy 3.5.3 via Poetry and then attempt to run the following code: import spacy load_model = spacy.load('it_core_news_sm') The import line works as expected, but running spacy.load produces the following error: OSError: [E050] Can't find model 'it_core_news_sm'. It doesn't seem to be a Python package or a valid path to a data directory. The model name is correct as shown on https://spacy.io/models/it After a web search, I see that a solution is to issue the following command: python3 -m spacy download it_core_news_sm After running this command the above code works as expected, however, is there a more 'kosher' way of doing this via Poetry? | You can add a URL dependency. First edit your pyproject.toml file to add the following (note: the name used here should match the name of the package (i.e. it_core_news_sm): [tool.poetry.dependencies] it_core_news_sm = {url = "https://github.com/explosion/spacy-models/releases/download/it_core_news_sm-3.5.0/it_core_news_sm-3.5.0.tar.gz"} Then run the corresponding add call: poetry add https://github.com/explosion/spacy-models/releases/download/it_core_news_sm-3.5.0/it_core_news_sm-3.5.0.tar.gz All of the spaCy models can be found on spaCy's model releases GitHub page. | 10 | 19 |
76,313,231 | 2023-5-23 | https://stackoverflow.com/questions/76313231/detect-the-foreground-rocks-stockpile-from-the-background-wall-to-find-mater | I am trying to find angles of a stockpile (on the left and right sides) by using Otsu threshold to segment the image. The image I have is like this: In the code, I segment it and find the first black pixel in the image The segmented photo doesn't seem to have any black pixels in the white background, but then it detects some black pixels even though I have used the morphology.opening If I use it with different image, it doesn't seem to have this problem How do I fix this problem? Any ideas? (The next step would be to find the angle on the left and right hand side) The code is attached here from skimage import io, filters, morphology, measure import numpy as np import cv2 as cv from scipy import ndimage import math # Load the image image = io.imread('mountain3.jpg', as_gray=True) # Apply Otsu's thresholding to segment the image segmented_image = image > filters.threshold_otsu(image) # Perform morphological closing to fill small gaps structuring_element = morphology.square(1) closed_image = morphology.closing(segmented_image, structuring_element) # Apply morphological opening to remove small black regions in the white background structuring_element = morphology.disk(10) # Adjust the disk size as needed opened_image = morphology.opening(closed_image, structuring_element) # Fill larger gaps using binary_fill_holes #filled_image = measure.label(opened_image) #filled_image = filled_image > 0 # Display the segmented image after filling the gaps io.imshow(opened_image) io.show() # Find the first row containing black pixels first_black_row = None for row in range(opened_image.shape[0]): if np.any(opened_image[row, :] == False): first_black_row = row break if first_black_row is not None: edge_points = [] # List to store the edge points # Iterate over the rows below the first black row for row in range(first_black_row, opened_image.shape[0]): black_pixel_indices = np.where(opened_image[row, :] == False)[0] if len(black_pixel_indices) > 0: # Store the first black pixel coordinates on the left and right sides left_x = black_pixel_indices[0] right_x = black_pixel_indices[-1] y = row # Append the edge point coordinates edge_points.append((left_x, y)) edge_points.append((right_x, y)) if len(edge_points) > 0: # Plotting the edge points import matplotlib.pyplot as plt edge_points = np.array(edge_points) plt.figure() plt.imshow(opened_image, cmap='gray') plt.scatter(edge_points[:, 0], edge_points[:, 1], color='red', s=1) plt.title('Edge Points') plt.show() else: print("No edge points found.") else: print("No black pixels found in the image.") | Your issue is that the image has noise. You need to deal with the noise. That is usually done with some kind of lowpassing, i.e. blurring. I'd recommend a median blur. Here's the result of a median filter, kernel size 9: And the per-pixel absolute differences to the source, magnified in amplitude by 20x: (this suggests that you could do a bandpass to catch the "texture" of the pile vs the flatness of the background) And here's the picture after Otsu thresholding (and inversion): And your foreground barely contrasts against the background. If you had better contrasting background, this wouldn't be nearly that much of an issue. 
Here's thresholding based on hue, because background and foreground slightly differ in hue: With morphological closing: To get your lines for the left and right slope of the stockpile, you need something that deals with contours or edge pixels anyway. Both contour finding and connected components labeling will have trouble with this, which is why the answers recommending those also must recommend explicitly filtering the results to remove small debris (the noise). Hence, those approaches (contours/CCs) don't solve the problem of noise, they just transform it into a different problem in which you still have to deal with the noise (by filtering it), just after processing the image. I'd recommend dealing with the noise early. | 5 | 4 |
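Since the answer above describes the pipeline in prose only, here is a rough OpenCV sketch of the two thresholding routes it mentions (median blur plus Otsu on grayscale, and Otsu on the hue channel), followed by a morphological closing; the file name, kernel sizes and the THRESH_BINARY vs THRESH_BINARY_INV choice are placeholders to tune on the actual image:

```python
import cv2

img = cv2.imread("stockpile.jpg")

# Route 1: suppress noise with a median filter, then Otsu on the gray image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.medianBlur(gray, 9)
_, otsu_mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Route 2: threshold on hue, since pile and wall differ slightly in hue
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0]
_, hue_mask = cv2.threshold(hue, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological closing to fill small holes in whichever mask looks better
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
closed = cv2.morphologyEx(hue_mask, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("mask.png", closed)
```

The left/right slope angles can then be fitted on the boundary of the cleaned mask, for example with cv2.findContours followed by cv2.fitLine on the two flanks.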
76,315,624 | 2023-5-23 | https://stackoverflow.com/questions/76315624/registering-discriminated-union-automatically | Using pydantic 1.10.7 and python 3.11.2 I have a recursive Pydantic model and I would like to deserialize each types properly using discriminated union. from pydantic import BaseModel, Field class Base(BaseModel): kind: str sub_models: Annotated[ List[Union[A,B]], Field( default_factory=list, discriminator="kind" ) ] class A(Base): kind: Literal["a"] a_field: str class B(Base): kind: Literal["b"] b_field: str I would like to automatically register the subclasses in a way Pydantic will be able to understand, like so from pydantic import BaseModel, Field B = TypeVar("B", bound="Base") class Base(BaseModel): kind: str sub_models: Annotated[ List[B], Field( default_factory=list, discriminator="kind" ) ] _subs: Set[Type[B]] = set() def __init_subclass__(cls, /, **kwargs): Base._subs.add(cls) cls.__annotations__["kind"] = Literal[cls.__name__.lower()] # <- works # list comprehension in a type definition is not valid, Base.__annotations__["sub_models"] = List[Union[subclass for subclass in Base._subs]] class A(Base): a_field: str class B(Base): b_field: str Any idea how to have discriminated union configured dynamically? I have tried registering manually the subclasses, but it involves a circular dependency of type hints and I need to "not forget" to add each new type to the union. | The challenge here is that a lot of the heavy lifting of Pydantic model creation is done by the metaclass, specifically its __new__ method. Simply re-defining sub_models and kind in the __annotations__ dictionary will not be enough for those changes to affect the actual fields because those are created and configured by the metaclass before __init_subclass__ is called. So if we don't want to go through the trouble of overriding the metaclass, we need some function to set a field on an existing (i.e. fully defined) model. Solution This seems to work: from __future__ import annotations from typing import Annotated, Any, ClassVar, Literal, Union from pydantic import BaseModel, Field from pydantic.fields import ModelField, Undefined def set_field_on_model( model: type[BaseModel], name: str, annotation: Any, value: Any = Undefined, ) -> None: """ Inspired heavily by what is done inside `ModelMetaclass.__new__`. May be incomplete! """ model.__fields__[name] = ModelField.infer( name=name, value=value, annotation=annotation, class_validators=None, config=model.__config__, ) model.__annotations__[name] = annotation # more code below... Now we can use that function inside our __init_subclass__ method to dynamically update the model fields we want: class Base(BaseModel): _subs: ClassVar[set[type["Base"]]] = set() sub_models: list["Base"] = Field(default_factory=list) kind: str def __init_subclass__(cls, /, **kwargs: object) -> None: super().__init_subclass__(**kwargs) Base._subs.add(cls) set_field_on_model(cls, "kind", Literal[cls.__name__.lower()]) if len(Base._subs) < 2: return # discriminated union requires at least two distinct types sub_models_annotation = list[ # type: ignore[index, misc, valid-type] Annotated[ Union[tuple(Base._subs)], Field(discriminator="kind"), ] ] for model in {Base} | Base._subs: set_field_on_model( model, "sub_models", sub_models_annotation, Field(default_factory=list), ) Notes To "dynamically" define a type Union, you can just pass a tuple of types to it because at runtime Union[X, Y] will be equivalent to Union[(X, Y)]. 
(see this answer) For a proper definition of your sub_models list, you have to define the discriminated union inside it, i.e. you must pass the Annotated type as the type argument to list like list[Annotated[Union[...], Field(...)]]. The discriminated parameter relates to the item type of the list, not the list itself. Conversely, you must set the default_factory on the sub_models list field itself, not in the Annotated union of types inside the list. That is why I pass a separate FieldInfo as the value argument to set_field_on_model for the sub_models field. It is not enough to just modify the fields of the Base model during subclassing. To make everything consistent, we must also update the __fields__ for every subclass of Base, when we create a new one. That is why I loop over Base._subs (and Base itself). Since a discriminated union requires at least two types to work, we need an escape hatch for when we subclass Base for the first time. If we didn't check, whether the number of sub-models is at least 2, we would get an error from Pydantic during our first subclass creation. This implementation for setting/overriding model fields is based on my moderate but limited understanding of the intricacies of the Pydantic machinery. It may well be that this is incomplete and there are additional steps you would have to take, to ensure everything remains consistent. In general, when you start fiddling with __fields__ after the model is created, all bets are off. Meaning this is not really documented there are no guarantees that things will not break with the next release. (Not to mention Pydantic v2.) Demo/Test Let's define (at least) two sub-models and try this out: from pydantic import ValidationError # ... import Base class A(Base): a_field: str class B(Base): b_field: str test_data = { "kind": "a", "a_field": "foo", "sub_models": [ {"kind": "a", "a_field": "bar"}, {"kind": "b", "b_field": "baz"}, ], } obj = A.parse_obj(test_data) print(type(obj.sub_models[0]), type(obj.sub_models[1])) print(obj.json(indent=4)) # Now introduce an error: test_data["sub_models"][1]["kind"] = "a" try: A.parse_obj(test_data) except ValidationError as err: print(err.json(indent=4)) Output: <class '__main__.A'> <class '__main__.B'> { "sub_models": [ { "sub_models": [], "kind": "a", "a_field": "bar" }, { "sub_models": [], "kind": "b", "b_field": "baz" } ], "kind": "a", "a_field": "foo" } [ { "loc": [ "sub_models", 1, "A", "a_field" ], "msg": "field required", "type": "value_error.missing" } ] That is the output we want. The two objects in the outer sub_models list are actually instances of distinct classes/models A and B. Their fields are all correctly assigned. And if the kind does not match the expected fields (as in the last example), we get a validation error. | 3 | 2 |
76,308,514 | 2023-5-22 | https://stackoverflow.com/questions/76308514/how-to-implement-mase-mean-absolute-scaled-error-in-python | I have Predicted values and Actual values, and I can calculate Mean Absolute Percentage Error by doing: abs(Predicted-Actual)/ Predicted *100 How do I calculate MASE with respect to my Predicted and Actual values? | The Mean Absolute Scaled Error (MASE) is calculated by subtracting the forecasted or predicted value from the actual value divided by the average forecast error where the actual value from the prior step is used as the prediction or forecast. Here is the equation as a list-comprehension: mase = numpy.mean([abs(Actual[i] - Predicted[i]) / (abs(Actual[i] - Actual[i - 1]) / len(Actual) - 1) for i in range(1, len(Actual))]) Here is the equation as a function: def MASE(Actual, Predicted): values = [] for i in range(1, len(Actual)): values.append(abs(Actual[i] - Predicted[i]) / (abs(Actual[i] - Actual[i - 1]) / len(Actual) - 1)) return numpy.mean(values) | 4 | 0 |
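Note that, because of Python operator precedence, abs(Actual[i] - Actual[i - 1]) / len(Actual) - 1 in the answer divides by n and then subtracts 1 rather than dividing by n - 1, and MASE is more commonly defined as the forecast MAE divided by the in-sample MAE of the one-step naive forecast. A NumPy sketch of that textbook form:

```python
import numpy as np

def mase(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mae_forecast = np.mean(np.abs(actual - predicted))
    # denominator: MAE of the naive forecast, where each value predicts the next one
    mae_naive = np.mean(np.abs(np.diff(actual)))
    return mae_forecast / mae_naive
```

A value below 1 means the forecast beats the naive previous-value forecast on average.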
76,313,534 | 2023-5-23 | https://stackoverflow.com/questions/76313534/shuffle-with-a-constraint | I have a problem analog to: We have 20 balls each ball is unique and identified by a number from 1 to 20. The balls numbered from 1 to 5 are yellow, from 6 to 10 green, from 10 to 15 red and from 16 to 20 blue. Find a method to randomly shuffle the balls while respecting the constraint: "2 successive balls cannot have the same color". Example: I have considered two methods that I am not satisfied with, and I am looking for a better one draw a ball at random, then draw a ball at random among all the others having a different color from the previous one, etc. Problem: we can fall in cases where it is impossible to finish the drawing, if there are only remaining balls of the same color for example. divide the balls into 4 sets of 5 balls (1 per color), form 5 other sets of 4 balls by randomly drawing a ball of each color. Reassemble these 5 sets in order to respect the criterion (the first balls of the last set are permuted until the criterion is respected). Problem some combinations are never drawn, for example red, yellow, red, yellow ... green, blue, green, blue. Edit: Here is an implementation of Unlikus solution. Many thanks to him. import enum import random Color = enum.Enum('Color', ['RED', 'GREEN', 'BLUE', "YELLOW"]) NB_PERMUTATION = { (Color.RED, 1, 0, 0, 0): 1, (Color.GREEN, 0, 1, 0, 0): 1, (Color.BLUE, 0, 0, 1, 0): 1, (Color.YELLOW, 0, 0, 0, 1): 1, (Color.RED, 0, 0, 0, 0): 0, (Color.GREEN, 0, 0, 0, 0): 0, (Color.BLUE, 0, 0, 0, 0): 0, (Color.YELLOW, 0, 0, 0, 0): 0, } def compute_nb_permutations(*args): try: return NB_PERMUTATION[args] except KeyError: second_colors = {1, 2, 3, 4} - {args[0].value} nb_ball = [args[1], args[2], args[3], args[4]] nb_ball[args[0].value - 1] -= 1 result = NB_PERMUTATION[args] = sum( compute_nb_permutations(Color(c), *nb_ball) for c in second_colors if nb_ball[c - 1] > 0 ) return result def arrange_color(): colors = {*Color} nb_ball = [5,5,5,5] # choose uniformly among colors for the first ball results = [random.choices(list(colors))[0]] nb_ball[results[-1].value - 1] -= 1 while nb_ball != [0,0,0,0]: candidates = list(colors - {results[-1]}) weights = [compute_nb_permutations(c, *nb_ball) for c in candidates] results.append(random.choices(candidates, weights=weights)[0]) nb_ball[results[-1].value - 1] -= 1 return results def draw(): numbers = { Color.RED: list(range(1,6)), Color.GREEN: list(range(6, 11)), Color.BLUE: list(range(11, 16)), Color.YELLOW: list(range(16, 21)), } for v in numbers.values(): random.shuffle(v) return {numbers[c].pop(): c.name for c in arrange_color()} | It boils down to calculate the number of permutations which obey your constrains. See https://math.stackexchange.com/questions/1208392/ways-to-arrange-4-different-colour-balls-with-no-two-of-the-same-colour-next-to for an answer how to calculate the number of ways to arrange your balls with the first one being one particular color. (The answer deals with the last one, but the numbers are the same). You can calculate the values of f by dynamic programming. Now you sample your first color in proportion to f(5,5,5,5,red), f(5,5,5,5,green), f(5,5,5,5,blue), f(5,5,5,5, yellow). Suppose you have sampled red (the first color in f). You sample the second color in proportion to f(4,5,5,5,green), f(4,5,5,5,blue), f(4,5,5,5, yellow) and so on. You are guarantied that the sum of these 3 values is greater than 0, so there are still solutions left, otherwise f(5,5,5,5, red) would already be 0. 
If you have the colors right, the numbers are then easy. | 6 | 1 |
76,315,335 | 2023-5-23 | https://stackoverflow.com/questions/76315335/can-a-python-package-and-its-corresponding-pypi-project-have-different-names | For example, I'm wondering how it is possible that scikit-learn is the name of a PyPi package while the actual Python module is named sklearn. The reason I'm asking is that I have a local Python package packageA that I can't upload to PyPi since that name happens to already be taken. I therefore wonder if I can upload it as packageB (which actually is available on PyPi)? If so, how can I do that? | The names on PyPi, or the names you are using when doing pip install NAME, are Distribution Packages. The names you use when doing import NAME are Import Packages. One Distribution Package can have multiple Import Packages in it. Example As an example see this demo project bit-demo: The name of the repository (which you find in the URL) is "bit-demo". The name of the Distribution Package (which you would use via pip) is bitdemo. This is defined in the package's metadata via the pyproject.toml file. There are two Import Packages in it, named bitcli and bitgui. This is also defined in the package metadata. Scikit-learn The project setup of Scikit-learn is a bit more complex, but you can also see it there. The Distribution Package name is defined in setup.py. I'm not absolutely sure how the Import Package name is defined, but I would say setup.cfg tells it to look up the sub-folder names, and the sub-folder there is sklearn. | 3 | 7 |
76,313,907 | 2023-5-23 | https://stackoverflow.com/questions/76313907/remove-element-in-array-i-dont-understand-how-work-output-in-my-case-python | I don't understand how work output in my case. Could you explain to me what I'm doing wrong? task:(27. Remove Element) Given an integer array nums and an integer val, remove all occurrences of val in nums in-place. The order of the elements may be changed. Then return the number of elements in nums which are not equal to val. Consider the number of elements in nums which are not equal to val be k, to get accepted, you need to do the following things: Change the array nums such that the first k elements of nums contain the elements which are not equal to val. The remaining elements of nums are not important as well as the size of nums. Return k. My solution: class Solution: def removeElement(self, nums: List[int], val: int) -> int: if len(nums) > 1: nums = [u for u in nums if u != val] print("nums = ", nums) if val not in nums: return len(nums) Testcase: nums =[0,1,2,2,3,0,4,2] val = 2 Results: Stdout (print in loop): nums = [0, 1, 3, 0, 4] Output: [0,1,2,2,3] Expected: [0,1,4,0,3] | The problem is that you are not removing the elements in-place, so nums outside the function is not affected. One option is to iterate over the list and remove items with pop. To avoid index out of range exception iterate over the list from end to start for i in range(len(nums) - 1, 0, -1): if nums[i] == val: nums.pop(i) | 3 | 3 |
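Two small notes on the pop-based fix above: the loop has to run down to index 0, i.e. range(len(nums) - 1, -1, -1), or a val sitting in the first position is never removed, and popping from the middle of a list makes it O(n²). The usual in-place solution for this problem overwrites the kept elements with a write index and returns k:

```python
from typing import List

class Solution:
    def removeElement(self, nums: List[int], val: int) -> int:
        k = 0                      # next write position for a kept element
        for x in nums:
            if x != val:
                nums[k] = x        # mutates nums in place; no new list is created
                k += 1
        return k
```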
76,313,897 | 2023-5-23 | https://stackoverflow.com/questions/76313897/to-delete-a-transparent-area-of-an-image-in-python | I want to use Python to cut the transparent area of the image file. Please look at the picture below. image 1 (original image) image 2 (result image) The original image has alternating opaque and transparent areas. I want to transform the image consisting of 1-spaces-2-spaces-3-spaces-4 like the original image into 1-2-3-4. I tried the following code but failed to get the desired output: from PIL import Image import numpy as np def trim_transparency(image): # Convert image to np.array im = np.array(image) # Import alpha channels only alpha = im[:, :, 3] # Found a range of nonzero indexes on each axis non_empty_columns = np.where(alpha.max(axis=0)>0)[0] non_empty_rows = np.where(alpha.max(axis=1)>0)[0] # Found a range of nonzero indexes on each axis cropBox = (min(non_empty_rows), max(non_empty_rows), min(non_empty_columns), max(non_empty_columns)) image_data = Image.fromarray(im).crop(cropBox) return image_data image = Image.open("original_image.png") new_image = trim_transparency(image) new_image.save("new_image.png") | You were very close - I would use Numpy indexing to gather the rows you want: from PIL import Image import numpy as np # Open image and ensure not palettised, make into Numpy array and select alpha channel im = Image.open('e0rRU.png').convert('RGBA') na = np.array(im) alpha = na[:, :, 3] # Find opaque rows non_empty_rows = np.where(alpha.max(axis=1)>0)[0] # Copy them to new image opaque = na[non_empty_rows,...] # Save Image.fromarray(opaque).save('result.png') | 3 | 2 |
76,301,807 | 2023-5-21 | https://stackoverflow.com/questions/76301807/comparison-of-bfs-and-dfs-algorithm-for-the-knapsack-problem | I am fairly new to python and I have a task which tells me to compare both algorithm time expended and space used in memory. I have coded both algorithms and ran them both. I was able to measure the time used, but wasnt able to look for ways to know how much space was used. I am also not sure if the question is asking me to calculate it based on general BFS and DFS or the code I have coded. Comparison of the time expended by the algorithms. Comparison of the space used in memory at a time by the algorithms To get the time I used start_time = time.time() and end = time.time() BFS algorithm 0.0060007572174072266s DFS algorithm 0.005002260208129883s How would I calculate the space used in memory assuming that it is based on my code. I might be just confused but the wording of the question makes me feel like I need to measure it when running both algorithms to compare the performance. Code: BFS : def knapsack_bfs(items, max_weight): queue = deque() root = Node(-1, 0, 0, []) queue.append(root) max_benefit = 0 best_combination = [] while queue: current = queue.popleft() if current.level == len(items) - 1: if current.benefit > max_benefit: max_benefit = current.benefit best_combination = current.items else: next_level = current.level + 1 next_item = items[next_level] include_benefit = current.benefit + next_item.benefit include_weight = current.weight + next_item.weight if include_weight <= max_weight: include_node = Node(next_level, include_benefit, include_weight, current.items + [next_item.id]) if include_benefit > max_benefit: max_benefit = include_benefit best_combination = include_node.items queue.append(include_node) exclude_node = Node(next_level, current.benefit, current.weight, current.items) queue.append(exclude_node) return max_benefit, best_combination DFS: def knapsack_dfs(items, max_weight): queue = [] root = Node(-1, 0, 0, []) queue.append(root) max_benefit = 0 best_combination = [] while queue: current = queue.pop() if current.level == len(items) - 1: if current.benefit > max_benefit: max_benefit = current.benefit best_combination = current.items else: next_level = current.level + 1 next_item = items[next_level] include_benefit = current.benefit + next_item.benefit include_weight = current.weight + next_item.weight if include_weight <= max_weight: include_node = Node(next_level, include_benefit, include_weight, current.items + [next_item.id]) if include_benefit > max_benefit: max_benefit = include_benefit best_combination = include_node.items queue.append(include_node) exclude_node = Node(next_level, current.benefit, current.weight, current.items) queue.append(exclude_node) return max_benefit, best_combination Edit: Results based on the answer below: program.py:42: size=4432 B (+840 B), count=79 (+15), average=56 B program.py:116: size=0 B (-768 B), count=0 (-1) program.py:79: size=0 B (-744 B), count=0 (-13) program.py.py:85: size=0 B (-72 B), count=0 (-1) program.py:57: size=0 B (-56 B), count=0 (-1) program.py:56: size=0 B (-56 B), count=0 (-1) program.py:74: size=0 B (-32 B), count=0 (-1) program.py:37: size=32 B (+0 B), count=1 (+0), average=32 B | You can try doing what another answer suggested: https://stackoverflow.com/a/45679009/670693 While I couldn't run your code to try things out, missing the Node definition, you can try something like: import tracemalloc tracemalloc.start() knapsack_dfs(...) 
dfs_snapshot = tracemalloc.take_snapshot() knapsack_bfs(...) bfs_snapshot = tracemalloc.take_snapshot() tracemalloc.stop() bfs_snapshot = bfs_snapshot.filter_traces((tracemalloc.Filter(False, '*tracemalloc.py'),)) dfs_snapshot = dfs_snapshot.filter_traces((tracemalloc.Filter(False, '*tracemalloc.py'),)) for statdiff in bfs_snapshot.compare_to(dfs_snapshot, 'lineno'): print(statdiff) | 3 | 1 |
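If the goal is a single number per algorithm rather than a per-line diff, a simpler option is to measure the peak traced allocation around each call separately (tracemalloc only sees Python-level allocations); measure_peak is an illustrative helper, and items/max_weight are whatever your Node-based setup provides:

```python
import tracemalloc

def measure_peak(fn, *args, **kwargs):
    """Run fn and return (result, peak bytes allocated while it ran)."""
    tracemalloc.start()
    try:
        result = fn(*args, **kwargs)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return result, peak

# (_, bfs_peak) = measure_peak(knapsack_bfs, items, max_weight)
# (_, dfs_peak) = measure_peak(knapsack_dfs, items, max_weight)
# print(f"BFS peak: {bfs_peak} B, DFS peak: {dfs_peak} B")
```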
76,302,654 | 2023-5-22 | https://stackoverflow.com/questions/76302654/why-does-property-override-object-getattribute | I noticed that contrary to the classmethod and staticmethod decorators, the property decorator overrides the object.__getattribute__ method: >>> list(vars(classmethod)) ['__new__', '__repr__', '__get__', '__init__', '__func__', '__wrapped__', '__isabstractmethod__', '__dict__', '__doc__'] >>> list(vars(staticmethod)) ['__new__', '__repr__', '__call__', '__get__', '__init__', '__func__', '__wrapped__', '__isabstractmethod__', '__dict__', '__doc__'] >>> list(vars(property)) ['__new__', '__getattribute__', '__get__', '__set__', '__delete__', '__init__', 'getter', 'setter', 'deleter', '__set_name__', 'fget', 'fset', 'fdel', '__doc__', '__isabstractmethod__'] The functionality of the property decorator doesn’t seem to require this override (cf. the equivalent Python code in the Descriptor HowTo Guide). So which behaviour exactly does this override implement? Please provide a link to the corresponding C code in the CPython repository, and optionally an equivalent Python code. | To refresh yourself on what __getattribute__ is and how this is implemented in CPython, please refer to this very excellent answer first, as that answers contains all the detailed background information on what to look for in the CPython source, and the paragraphs below will reference those details without further explanations. In CPython, all builtin types (e.g. object, int, str, float) all follow a standard way on how they are defined. These builtin types with their dunder methods are all slotted in the underlying C implementation, for example float simply reference the default PyObject_GenericGetAttr for tp_getattro, and this is exactly the same for property. Essentially, the descriptor howto guide is correct in omitting any references to __getattribute__, as its presence is nothing more than an artifact on how builtin types in Python are defined (i.e. they all point to the same generic implementation for "the normal way of looking for object attributes" if it's just a pointer to the aptly named PyObject_GenericGetAttr). As for why classmethod does not have __getattribute__ appear as part of its vars output, that's because the definition for classmethod does not include tp_getattro nor tp_getattr, but it just means that its __getattribute__ falls back to its parent's, which is object.__getattribute__ (check in Python for output of classmethod.__getattribute__ is object.__getattribute__). As for why classmethod is defined without the explicit linkage while various other builtin types define an explicit linkage to PyObject_GenericGetAttr (despite the outcome of either decisions being functionally the same in nearly all code usage), you may have to ask the developers themselves. Addendum: though at some point 12 years ago (2012), around the time of Python 3.2 was being developed, classmethod did have tp_getattro set, but subsequent to that commit 2cf936fe7a55 (included since 2.7.5 and 3.3.0a1) - the commit message has no hints as to why it was simply changed to "use defaults". As for float and property type, the change was introduced way back in the merge commit 6d6c1a35e08b from nearly 22 years ago (2001) that brought in the changes outlined in PEP-252 which is the descriptor protocol itself. Note that it set all tp_getattro to PyObject_GenericGetAttr for standard types that didn't have it defined before (and likewise for classmethod which is where this type made its debut). | 3 | 3 |
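A quick check of the claim above in a standard CPython interpreter: property publishes its own __getattribute__ slot wrapper (pointing at the same generic C routine), while classmethod simply inherits object's.

```python
print('__getattribute__' in vars(property))      # True  - property fills the slot itself
print('__getattribute__' in vars(classmethod))   # False - classmethod leaves it unset

print(property.__getattribute__ is object.__getattribute__)     # False: its own slot wrapper
print(classmethod.__getattribute__ is object.__getattribute__)  # True: inherited from object
```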
76,309,963 | 2023-5-22 | https://stackoverflow.com/questions/76309963/checking-a-column-in-a-pandas-df-does-not-contain-certain-text | I am attempting to write a specific value to a column in a pandas df depending on if another column does or does not contain certain text. I have 3 outputs in the destination columns: Distributor, OEM and End User. import pandas as pd df = pd.read_excel("Customer Records.xlsx") #Checking for distributor pricing tag df.loc[df['CustomerRoles'].str.contains('Discount-Distributor-STD'), 'Customer Type'] = 'Distributor' #Checking for OEM pricing tag df.loc[df['CustomerRoles'].str.contains('Discount-OEM-STD'), 'Customer Type'] = 'OEM' Those lines of code work as intended. The last thing I need is for every other string that contains neither of those phrases to print "End User" in the 'Customer Type' column. | You can use np.select: import numpy as np conds = [ df['CustomerRoles'].str.contains('Discount-Distributor-STD'), df['CustomerRoles'].str.contains('Discount-OEM-STD') ] choices = ['Distributor', 'OEM'] df['Customer Type'] = np.select(condlist=conds, choicelist=choices, default='End User') | 2 | 3 |
76,286,028 | 2023-5-19 | https://stackoverflow.com/questions/76286028/how-to-cancel-all-tasks-in-a-taskgroup | import asyncio import random task_group: asyncio.TaskGroup | None = None async def coro1(): while True: await asyncio.sleep(1) print("coro1") async def coro2(): while True: await asyncio.sleep(1) if random.random() < 0.1: print("dead") assert task_group is not None task_group.cancel() # This function does not exist. else: print("Survived another second") async def main(): global task_group async with asyncio.TaskGroup() as tg: task_group = tg tg.create_task(coro1()) tg.create_task(coro2()) task_group = None asyncio.run(main()) In this example, coro1 will print "coro1" every second, coro2 has a 10% chance to cancel the entire TaskGroup, i.e., cancel both coro1 and coro2 and exit the async with block every second. The problem is I don't know how to cancel the task group. There is no TaskGroup.cancel() function. | I end up use another task to wrap the function that contains the TaskGroup and cancel that task instead. It works as desired. # Only show functions that has been changed main_task: asyncio.Task[None] | None = None async def coro2(): while True: await asyncio.sleep(1) if random.random() < 0.1: print("dead") assert main_task is not None main_task.cancel() main_task = None else: print("Survived another second") async def main2(): global main_task main_task = asyncio.create_task(main()) asyncio.run(main2()) | 5 | 1 |
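Note that the posted coro2 also needs a global main_task declaration before it reassigns main_task = None, otherwise the name becomes local and the earlier assert raises UnboundLocalError. On Python 3.11+ an alternative that avoids the wrapper task entirely is to let one task raise a dedicated exception, since a TaskGroup cancels its remaining tasks as soon as any task fails; a minimal sketch of that variant (StopGroup is an illustrative exception name):

```python
import asyncio
import random

class StopGroup(Exception):
    """Raised by a task to tear down the whole group."""

async def coro1():
    while True:
        await asyncio.sleep(1)
        print("coro1")

async def coro2():
    while True:
        await asyncio.sleep(1)
        if random.random() < 0.1:
            print("dead")
            raise StopGroup   # TaskGroup cancels coro1 and exits the async with block
        print("Survived another second")

async def main():
    try:
        async with asyncio.TaskGroup() as tg:
            tg.create_task(coro1())
            tg.create_task(coro2())
    except* StopGroup:
        pass   # all tasks in the group have been cancelled

asyncio.run(main())
```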
76,304,374 | 2023-5-22 | https://stackoverflow.com/questions/76304374/how-can-we-add-a-list-of-documents-to-an-existing-index-in-llama-index | I have an existing index that is created using GPTVectorStoreIndex. However, when I am trying to add a new document to the existing index using the insert method, I am getting the following error : AttributeError: 'list' object has no attribute 'get_text' my code for updating the index is as follows : max_input_size = 4096 num_outputs = 5000 max_chunk_overlap = 256 chunk_size_limit = 3900 prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit) llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo", max_tokens=num_outputs)) service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper) directory_path = "./trial_docs" file_metadata = lambda x : {"filename": x} reader = SimpleDirectoryReader(directory_path, file_metadata=file_metadata) documents = reader.load_data() print(type(documents)) index.insert(document = documents, service_context = service_context) | I got it right, the mistake I was doing it was passing documents as a whole, which is a List object. The right way to update is as follows max_input_size = 4096 num_outputs = 5000 max_chunk_overlap = 256 chunk_size_limit = 3900 prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit) llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo", max_tokens=num_outputs)) service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper) directory_path = "./trial_docs" file_metadata = lambda x : {"filename": x} reader = SimpleDirectoryReader(directory_path, file_metadata=file_metadata) documents = reader.load_data() print(type(documents)) for d in documents: index.insert(document = d, service_context = service_context) | 4 | 8 |
76,297,879 | 2023-5-21 | https://stackoverflow.com/questions/76297879/benchmarks-of-fastapi-vs-async-flask | I'm a developer without an interest in benchmarking and I'm trying to decide whether I should use Flask or FastAPI to build some Python/Vue projects. I'm seeing stuff online about how FastAPI was faster than Flask because Flask was single-threaded or something like that, whereas FastAPI was async, but apparently more-recently Flask added async routes, and so now I'm wondering if FastAPI is still(?) faster than Flask. Has anyone done benchmarking tests comparing FastAPI to Flask async routes? I can't find any when I search Google. | According to a benchmark study by Miguel Grinberg, FastAPI can be faster or slower than async Flask, depending on the web server and the Flask async type. Generally Flask on a Greenlet powered WSGI server (Meinheld / Gevent) can offer comparable throughput as an async-first ASGI framework like FastAPI. Note that Grinberg is comparing the overall performance of three parts: the Framework, the Web server and the Web application. Here are the relevant frameworks' throughput in the three different scenarios he tried (a higher number is better): Framework Web Server Type Test 1 Throughput Test 2 Throughput Test 3 Throughput Flask Meinheld Async / Greenlet 1.43 5.27 1.06 Flask Gevent Async / Greenlet 1.22 4.54 1.01 FastAPI Uvicorn Async / Coroutine 1.21 4.33 1.02 Aioflas Uvicorn Async / Coroutine 1.11 - - Flask uWSGI Sync 1.09 1.01 1.26 Flask Gunicorn Sync 1.00 (baseline) 1.00 (baseline) 1.00 (baseline) * Meinheld is a WSGI server written in C. Grinberg points out in his article that the exact relative results between frameworks will depend on the particular load that the server is put under, but his takeaway is that there isn't a large difference between frameworks: remember that the difference in performance between different frameworks or web servers isn't going to be very significant, so choose the tools that make you more productive! More about Async approaches of Flask Flask adopts different approaches to async view. In Grinberg's benchmark tests, three approaches have been implemented: Aioflask (standard python ASGI with uvicorn), Greenlet in Meinheld and Gevent. Since the Aioflask test was not fully finished, only the result of Test 1 is available, which agrees with the Flask documentation: Flask’s async/coroutine support is less performant than async-first frameworks due to the way it is implemented. Be aware that the Greenlet approach requires a special way of writing asynchronous operations, instead of standard python coroutines. Therefore, existing asyncio codebase needs to be patched with adapters like greenletio. Gevent relies heavily on proper monkey patching the python standard libraries, database engines and other performance critical libraries. Grinberg addressed multiple times this in his blog: Gevent tests did reasonably well in my benchmark and terribly in the original one. This is because the author forgot to patch the psycopg2 package so that it becomes non-blocking under greenlets. | 9 | 15 |
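To make the last point about Gevent concrete, a minimal sketch of the required patching order — assuming the gevent and third-party psycogreen packages are installed (psycogreen only matters if psycopg2 is actually used):

```python
# Patch the standard library before anything that touches sockets/SSL is imported.
from gevent import monkey
monkey.patch_all()

# psycopg2 is a C extension, so monkey.patch_all() does not reach it;
# psycogreen registers a wait callback that makes it cooperative under greenlets.
from psycogreen.gevent import patch_psycopg
patch_psycopg()

from flask import Flask  # imported only after patching

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"
```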
76,297,052 | 2023-5-20 | https://stackoverflow.com/questions/76297052/how-to-write-a-python-regex-that-matches-strings-with-both-words-and-digits-exc | I want to write a regex that matches a string that may contain both words and digits and not digits only. I used this regex [A-z+\d*], but it does not work. Some matched samples: expression123 123expression exp123ression Not matched sample: 1235234567544 Can you help me with this one? Thank you in advance | Lookarounds to the rescue! ^(?!\d+$)\w+$ This uses a negative lookahead construct and anchors, see a demo on regex101.com Note that you could have the same result with pure Python code alone: samples = ["expression123", "123expression", "exp123ression", "1235234567544"] filtered = [item for item in samples if not item.isdigit()] print(filtered) # ['expression123', '123expression', 'exp123ression'] See another demo on ideone.com. With both approaches you wouldn't account for input strings like -1 or 1.0 (they'd be allowed). Tests As the discussion somewhat arose, here's a small test suite for different sample sizes and expressions: import string, random, re, timeit class RegexTester(): samples = [] expressions_to_test = {"Cary": "^(?=.*\D)\w+$", "Jan": "^(?!\d+$)\w+$"} def __init__(self, sample_size=100, word_size=10, times=100): self.sample_size = sample_size self.word_size = word_size self.times = times # generate samples self.samples = ["".join(random.choices(string.ascii_letters + string.digits, k=self.word_size)) for _ in range(self.sample_size)] # compile the expressions in question for key, expression in self.expressions_to_test.items(): self.expressions_to_test[key] = {"raw": expression, "compiled": re.compile(expression)} def describe_sample(self): only_digits = [item for item in self.samples if all(char.isdigit() for char in item)] return only_digits def test_expressions(self): def regex_test(samples, expr): return [expr.search(item) for item in samples] for key, values in self.expressions_to_test.items(): t = timeit.Timer(lambda: regex_test(self.samples, values["compiled"])) print("{key}, Times: {times}, Result: {result}".format(key=key, times=self.times, result=t.timeit(100))) rt = RegexTester(sample_size=10 ** 5, word_size=10, times=10 ** 4) #rt.describe_sample() rt.test_expressions() Which for a sample size of 10^5, a word size of 10 gave the comparable results for the both expressions: Cary, Times: 10000, Result: 6.1406331 Jan, Times: 10000, Result: 5.948537699999999 When you set the sample size to 10^4 and the word size to 10^3, the result is the same: Cary, Times: 10000, Result: 10.1723557 Jan, Times: 10000, Result: 9.697761900000001 You'll get significant differences when the strings consist only of numbers (aka the samples are generated only with numbers): Cary, Times: 10000, Result: 25.4842013 Jan, Times: 10000, Result: 17.3708319 Note that this is randomly generated text and due to the method of generating it, the longer the strings are, the less likely they are to consist only of numbers. In the end it will depend on the actual text inputs. | 3 | 7 |
76,276,568 | 2023-5-17 | https://stackoverflow.com/questions/76276568/how-can-i-reuse-logic-to-handle-a-keypress-and-a-button-click-in-pythons-tkinte | I have this code: from tkinter import * import tkinter as tk class App(tk.Frame): def __init__(self, master): def print_test(self): print('test') def button_click(): print_test() super().__init__(master) master.geometry("250x100") entry = Entry() test = DoubleVar() entry["textvariable"] = test entry.bind('<Key-Return>', print_test) entry.pack() button = Button(root, text="Click here", command=button_click) button.pack() root = tk.Tk() myapp = App(root) myapp.mainloop() A click on the button throws: Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__ return self.func(*args) File "[somefilepath]", line 10, in button_click print_test() TypeError: App.__init__.<locals>.print_test() missing 1 required positional argument: 'self' While pressing Enter while in the Entry widget works, it prints: test See: Now if I drop the (self) from def print_test(self):, as TypeError: button_click() missing 1 required positional argument: 'self' shows, the button works, but pressing Enter in the Entry widget does not trigger the command but throws another exception: Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__ return self.func(*args) TypeError: App.__init__.<locals>.print_test() takes 0 positional arguments but 1 was given How can I write the code so that both the button click event and pressing Enter will trigger the print command? | Commands callbacks for button clicks are called without arguments, because there is no more information that is relevant: the point of a button is that there's only one "way" to click it. However, key presses are events, and as such, callbacks for key-binds are passed an argument that represents the event (not anything to do with the context in which the callback was written). For key-press handlers, it's usually not necessary to consider any information from the event. As such, the callback can simply default this parameter to None, and then ignore it: def print_test(event=None): print('test') Now this can be used directly as a handler for both the key-bind and the button press. Note that this works perfectly well as a top-level function, even outside of the App class, because the code uses no functionality from App. Another way is to reverse the delegation logic. In the original code, the button handler tries to delegate to the key-press handler, but cannot because it does not have an event object to pass. While it would work to pass None or some other useless object (since the key-press handler does not actually care about the event), this is a bit ugly. A better way is to delegate the other way around: have the key-press handler discard the event that was passed to it, as it delegates to the button handler (which performs a hard-coded action). 
Thus: from tkinter import * import tkinter as tk def print_test(): print('test') def enter_pressed(event): print_test() class App(tk.Frame): def __init__(self, master): super().__init__(master) master.geometry("250x100") entry = Entry() test = DoubleVar() entry["textvariable"] = test entry.bind('<Key-Return>', enter_pressed) entry.pack() button = Button(root, text="Click here", command=print_test) button.pack() root = tk.Tk() myapp = App(root) myapp.mainloop() | 2 | 3 |
76,296,055 | 2023-5-20 | https://stackoverflow.com/questions/76296055/how-to-avoid-tkinter-slowing-down-as-number-of-shapes-increases | I have a python project with tkinter. On this project I draw small squares over time. I noticed tkinter is slowing down as the number of square increases. Here is a simple example that draws 200 red squares on each iteration: import tkinter as tk import random import time WIDTH = 900 CELL_SIZE = 2 GRID_WIDTH = int(WIDTH / CELL_SIZE) CELL_PER_ITERATION = 200 SLEEP_MS = 50 root = tk.Tk() canvas = tk.Canvas(root, width=WIDTH, height=WIDTH, bg="black") canvas.pack() current_iteration = 0 cell_count = 0 previous_iteration_end = time.time() text = tk.Label(root, text=f"iteration {current_iteration}") text.pack() def draw_cell(x_grid, y_grid): x = x_grid * CELL_SIZE y = y_grid * CELL_SIZE canvas.create_rectangle(x, y, x + CELL_SIZE, y + CELL_SIZE, fill="red") def iteration(): global current_iteration global previous_iteration_end global cell_count current_iteration_start = time.time() for _ in range(CELL_PER_ITERATION): draw_cell( x_grid=random.randint(0, GRID_WIDTH), y_grid=random.randint(0, GRID_WIDTH), ) cell_count += 1 current_iteration_end = time.time() # duration of this iteration current_iteration_duration = current_iteration_end - current_iteration_start # duration between start of this iteration and end of previous iteration between_iteration_duration = current_iteration_start - previous_iteration_end current_iteration += 1 text.config(text=f"iteration {current_iteration} | cell_count: {cell_count} | iter duration: {int(current_iteration_duration*1000)} ms | between iter duration: {int(between_iteration_duration*1000)} ms") previous_iteration_end = current_iteration_end def main_loop(): iteration() root.after(ms=SLEEP_MS, func=main_loop) root.after(func=main_loop, ms=SLEEP_MS) root.mainloop() Which gives (time data is written at the bottom of picture): And after a few seconds: So the time to execute an iteration stays constant. But between two iterations, the duration keeps increasing over time. I don't understand why tkinter is slowing down. Is it redrawing the entire canvas (so all already drawn squares) at each iteration ? Is there a way to avoid this slow down ? Note: This is an example, the real project i am working on looks like this: Slime Mold Simulation | When adding shapes, you are not just colorize pixels. You create graphics with contexts and store them into memory. Instead of abusing the Canvas paint a picture and show this picture in the Canvas this will be a lot faster and will give you more options to colorize your image that you rather want to draw. | 5 | 2 |
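A minimal sketch of that suggestion, reusing the question's constants: one tk.PhotoImage acts as a pixel buffer and the canvas holds a single image item, so the number of canvas objects never grows no matter how many cells are drawn.

```python
import random
import tkinter as tk

WIDTH = 900
CELL_SIZE = 2
CELL_PER_ITERATION = 200
SLEEP_MS = 50

root = tk.Tk()
canvas = tk.Canvas(root, width=WIDTH, height=WIDTH, bg="black")
canvas.pack()

# One PhotoImage is the drawing surface; the canvas only ever contains this single item.
buffer = tk.PhotoImage(width=WIDTH, height=WIDTH)
canvas.create_image(0, 0, image=buffer, anchor="nw")

def iteration():
    for _ in range(CELL_PER_ITERATION):
        x = random.randrange(0, WIDTH, CELL_SIZE)
        y = random.randrange(0, WIDTH, CELL_SIZE)
        # fill the cell's rectangle with red pixels inside the image buffer
        buffer.put("red", to=(x, y, x + CELL_SIZE, y + CELL_SIZE))
    root.after(SLEEP_MS, iteration)

root.after(SLEEP_MS, iteration)
root.mainloop()
```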
76,292,501 | 2023-5-19 | https://stackoverflow.com/questions/76292501/query-existing-pinecone-index-without-re-loading-the-context-data | I'm learning Langchain and vector databases. Following the original documentation I can read some docs, update the database and then make a query. https://python.langchain.com/en/harrison-docs-refactor-3-24/modules/indexes/vectorstores/examples/pinecone.html I want to access the same index and query it again, but without re-loading the embeddings and adding the vectors again to the ddbb. How can I generate the same docsearch object without creating new vectors? # Load source Word doc loader = UnstructuredWordDocumentLoader("C:/Users/ELECTROPC/utilities/openai/data_test.docx", mode="elements") data = loader.load() # Text splitting text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(data) # Upsert vectors to Pinecone Index pinecone.init( api_key=PINECONE_API_KEY, # find at app.pinecone.io environment=PINECONE_API_ENV ) index_name = "mlqai" embeddings = OpenAIEmbeddings(openai_api_key=os.environ['OPENAI_API_KEY']) docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name) # Query llm = OpenAI(temperature=0, openai_api_key=os.environ['OPENAI_API_KEY']) chain = load_qa_chain(llm, chain_type="stuff") query = "que sabes de los patinetes?" docs = docsearch.similarity_search(query) answer = chain.run(input_documents=docs, question=query) print(answer) | You need to access the existing index. In order to do this, you must know the name of the index, and what embeddings were used to create it. index_name = "mlqai" embeddings = OpenAIEmbeddings(openai_api_key=os.environ['OPENAI_API_KEY']) docsearch = Pinecone.from_existing_index(index_name, embeddings) Documentation. | 6 | 12 |
76,279,731 | 2023-5-18 | https://stackoverflow.com/questions/76279731/why-snakemake-prefers-calling-script-using-script-directive-instead-of-calling-f | Snakemake rules in standardized workflows run Python scripts using the script directive, such as this template rule: rule XXXXX: input: ..., output: ...., params: ..., conda: "../envs/python.yaml" script: "../scripts/XXXX.py" Then in the script, it is possible to use snakemake object. However, the script is then tightly coupled with that rule, which seems a big disadvantage. Why is this approach preferred to the approach using shell that calls the script, such as in this rule? rule XXXXX: input: ..., output: ...., params: absolute_script_path = ..., # get argument1 = ..., conda: "../envs/python.yaml" shell: "python {params.absolute_script_path} {input} {params.argument1} > {output}" In this approach, python script is decoupled from the Snakemake rule. Also it looks more cohesive, as called arguments are clear from the rule, not hidden in the script. I am only starting with writing Snakemake workflows, so I am just a beginner. I do not understand why the first approach is preferred (or used in standardized Snakemake workflows) to the second approach? Am I missing something? Are there some problems with the second approach? Thank you very much for answers! | The script approach is a bit more flexible in terms of the objects that the script can access via the params and other directives. If you follow the shell approach you might find it cumbersome to (re) define the argparse or other approaches to properly take account of the arguments passed via shell. It's going to be mostly boilerplate, but can get somewhat tedious. The notebook directive might be useful in scenarios that require interactive reproduction/development. All in all, there are no hard rules, and for a given workflow one approach might be more suitable/convenient than other approaches. | 2 | 5 |
76,288,658 | 2023-5-19 | https://stackoverflow.com/questions/76288658/python-dataframe-subtract-value-from-one-column-from-each-list-element-of-anothe | I have a dataframe. Column one has a list of numbers. Second column has average of list of numbers in column one. I need to create third column such that I subtract mean value from each of the elements of column one. df = pd.DataFrame({'A':[[4.2,2.3,6.5,2.3],[4.1,5.3,6.5,3.8]]}) df['avg'] = df['A'].apply(lambda p: np.average(p)) df['a_avg' = df['A'].apply(lambda p: (np.array(p)-df['avg']).to_list()) Expected output: df A avg a_avg 0 [4.2,2.3,6.5,2.3] 3.825 [0.375, -1.525, 2.675, -1.525] 1 [4.1,5.3,6.5,3.8] 4.925 [-0.825, 0.375, 1.575, -1.125] I created column two for my clarity. if there is a way we can directly get column three from column one, that is also good. whats wrong with the code i have written? | Use list comprehension with convert lists to numpy for improve performance: df['a_avg'] = [(np.round(np.array(p) - np.average(p), 3)).tolist() for p in df['A']] Or: df['a_avg'] = df.A.apply(lambda p: (np.round(np.array(p) - np.average(p), 3)).tolist()) print (df) A a_avg 0 [4.2, 2.3, 6.5, 2.3] [0.375, -1.525, 2.675, -1.525] 1 [4.1, 5.3, 6.5, 3.8] [-0.825, 0.375, 1.575, -1.125] Vectorized solution working if each list has same length: arr = np.array(df.A.tolist()) df['a_avg'] = np.round(arr - np.average(arr), 3).tolist() print (df) A a_avg 0 [4.2, 2.3, 6.5, 2.3] [-0.175, -2.075, 2.125, -2.075] 1 [4.1, 5.3, 6.5, 3.8] [-0.275, 0.925, 2.125, -0.575] Testing performance: df = pd.DataFrame({'A':[[4.2,2.3,6.5,2.3],[4.1,5.3,6.5,3.8]]}) #2k rows df = pd.concat([df] * 1000, ignore_index=True) #Timeless solution In [36]: %timeit df["a_avg"] = [[round(e - np.average(lst), 3) for e in lst] for lst in df["A"]] 118 ms ± 659 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [37]: %timeit df['a_avg'] = [(np.round(np.array(p) - np.average(p), 3)).tolist() for p in df['A']] 37.1 ms ± 162 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [38]: %timeit df['a_avg'] = df.A.apply(lambda p: (np.round(np.array(p) - np.average(p), 3)).tolist()) 37.7 ms ± 446 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [39]: %%timeit ...: arr = np.array(df.A.tolist()) ...: df['a_avg'] = np.round(arr - np.average(arr), 3).tolist() ...: 1.36 ms ± 46.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) Testing in 20k rows #20k rows df = pd.concat([df] * 10000, ignore_index=True) In [41]: %timeit df["a_avg"] = [[round(e - np.average(lst), 3) for e in lst] for lst in df["A"]] 1.18 s ± 5.82 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [42]: %timeit df['a_avg'] = [(np.round(np.array(p) - np.average(p), 3)).tolist() for p in df['A']] 366 ms ± 1.25 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [43]: %timeit df['a_avg'] = df.A.apply(lambda p: (np.round(np.array(p) - np.average(p), 3)).tolist()) 364 ms ± 824 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) In [44]: %%timeit ...: arr = np.array(df.A.tolist()) ...: df['a_avg'] = np.round(arr - np.average(arr), 3).tolist() ...: ...: 13.7 ms ± 111 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) | 2 | 1 |
76,287,668 | 2023-5-19 | https://stackoverflow.com/questions/76287668/reading-extracting-data-from-databricks-database-hive-metastore-with-pyspar | I am trying to read in data from Databricks Hive_Metastore with PySpark. In screenshot below, I am trying to read in the table called 'trips' which is located in the database nyctaxi. Typically if this table was located on a AzureSQL server I was use code like the following: df = spark.read.format("jdbc")\ .option("url", jdbcUrl)\ .option("dbtable", tableName)\ .load() Or if the table was in the ADLS I would use code similar to the following: df = spark.read.csv("adl://mylake.azuredatalakestore.net/tableName.csv",header=True) Can some let me know how I would read in the table using PySpark from Databricks Database below: The additional screenshot my also help Ok, I've just realized that I think I should be asking how to read tables from "samples" meta_store. In any case I would like help reading in the "trips" table from the nyctaxi database please. | The samples catalog can be accessed in using spark.table("catalog.schema.table"). So you should be able to access the table using: df = spark.table("samples.nyctaxi.trips") Note also if you are working direct in databricks notebooks, the spark session is already available as spark - no need to get or create. | 2 | 9 |
76,287,980 | 2023-5-19 | https://stackoverflow.com/questions/76287980/how-can-i-convert-a-dictionary-into-a-pandas-dataframe-with-specific-keys-as-col | I have a dictionary as follows: D = {'mark': [['height', 7], ['weight', 70]], 'david': [['height', 8], ['weight', 80]], 'john': [['height', 9], ['weight', 90]]} print (D) I wanted to get the pandas dataframe into this form: names height weight 0 mark 7 70 1 david 8 80 2 john 9 90 I tried as follows. df = pd.DataFrame(D) print (df) However, it produces a wrong result below. mark david john 0 [height, 7] [height, 8] [height, 9] 1 [weight, 70] [weight, 80] [weight, 90] Edit: What if dictionary has not same number of values as follows? D2 = {'mark': [['height', 7], ['weight', 70]], 'david': [['height', 8], ['weight', 80]], 'john': [['height', 9]]} print (D2) | You can use : df = pd.DataFrame({k: dict(v) for (k,v) in D.items()}).T.reset_index(names="names") Output : print(df) names height weight 0 mark 7 70 1 david 8 80 2 john 9 90 | 2 | 3 |
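For the "Edit" case with unequal list lengths, the same construction still works because pandas fills missing keys with NaN — a quick sketch:

```python
import pandas as pd

D2 = {'mark': [['height', 7], ['weight', 70]],
      'david': [['height', 8], ['weight', 80]],
      'john': [['height', 9]]}

df2 = pd.DataFrame({k: dict(v) for k, v in D2.items()}).T.reset_index(names="names")
print(df2)
# john has no 'weight' entry, so that cell becomes NaN
# (which also upcasts the affected column to float).
```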
76,286,148 | 2023-5-19 | https://stackoverflow.com/questions/76286148/how-do-custom-init-functions-work-in-pydantic-with-inheritance | I'm trying to use inheritance in pydantic with custom __init__ functions. I have parent (fish) and child (shark) classes that both require more in initialization than just setting fields (which in the MWE is represented by an additional print statement). So I need to override their inits. I tried: class fish(BaseModel): name: str def __init__(self, name): super().__init__(name=name) print("Fish initialization successful!") class shark(fish): color: str def __init__(self, name, color): super().__init__(name=name) self.color=color print("Shark initialization successful!") f = fish(name="nemo") print(f) s = shark(name="bruce", color="grey") but that throws a validation error: Fish initialization successful! name='nemo' --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[149], line 17 15 f = fish(name="nemo") 16 print(f) ---> 17 s = shark(name="bruce", color="grey") Cell In[149], line 11, in shark.__init__(self, name, color) 10 def __init__(self, name, color): ---> 11 super().__init__(name=name) 12 self.color=color 13 print("Shark initialization successful!") Cell In[149], line 4, in fish.__init__(self, name) 3 def __init__(self, name): ----> 4 super().__init__(name=name) 5 print("Fish initialization successful!") File ~/Desktop/treeline_wt/1588-yieldmodeling-integration/device-predictions/.venv/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for shark color field required (type=value_error.missing) The solution I got from a coworker that works is: class fish(BaseModel): name: str def __init__(self, **kwargs): super().__init__(**kwargs) print("Fish initialization successful!") class shark(fish): color: str def __init__(self, **kwargs): super().__init__(**kwargs) print("Shark initialization successful!") # f = fish(name="nemo") # print(f) s = shark(name="bruce", color="grey") which, on inspection, only works because the fish super().__init__ receives the color keyword, i.e. changing it to super().__init__(name=kwargs['name']) throws the same validation error. This is baffling to me, I don't understand why the fish class needs to know anything about the properties of its child classes. How do I understand this? | This has nothing to do with Fish needing to know anything about the fields defined on Shark. It has everything to do with BaseModel.__init__ knowing, which fields any given model has, and validating all keyword-arguments against those. You need to keep in mind that a lot is happening "behind the scenes" with any model class during class creation, i.e. way before you initialize any specific instance of it. The metaclass is responsible for this. Essentially, you need to think of the Shark definition process like this: The Shark class namespace is read. The annotations/attributes are collected (in this case color: str). Fields are created from those. The parent class' fields (in this case name from Fish) are added. All the fields for Shark (plus validators and a bunch of other things) are fully constructed. It now has the fields name and color. The BaseModel.__init__ method will always look at all the fields defined on the given model and validate the provided keyword arguments against those fields. 
When you call super().__init__(name=name) from inside Shark.__init__, you are basically calling the Fish.__init__(self, name=name), i.e. you are passing the (uninitialized) Shark instance self as well as the name argument to Fish.__init__. Then from inside Fish.__init__ you are again doing super().__init__(name=name), which means you are calling BaseModel.__init__(self, name=name) and again only passing that unfinished Shark instance and the keyword-argument name to it. (Remember self is still that Shark object.) But BaseModel.__init__ will look at that Shark instance it got, see that the Shark class has two fields (name and color) defined for it and neither of them is optional/has a default value. It will see that you only provided the name keyword-argument, but failed to provide color. Therefore it will raise a corresponding ValidationError. The fact that you then manually try to assign self.color = color does not matter because that line is never even reached. That is why you must always pass all the field-related keyword-arguments "up the chain" to BaseModel.__init__. This is why the code in your second code snippet works without error. This may seem unintuitive at first, but Pydantic models are not simple data classes. A lot is happening under the hood and this imposes some restrictions, which are unfortunately sometimes left undocumented (as in this case) and only become clear, once you actually dig into the source code. | 9 | 13 |
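If you prefer explicit parameter names over bare **kwargs, a sketch that still satisfies the "pass everything up the chain" rule is to accept the subclass fields via **extra in the parent and forward them (works with both Pydantic v1 and v2; class names follow the question):

```python
from pydantic import BaseModel

class fish(BaseModel):
    name: str

    def __init__(self, name: str, **extra):
        # forward name plus any subclass fields on to BaseModel.__init__
        super().__init__(name=name, **extra)
        print("Fish initialization successful!")

class shark(fish):
    color: str

    def __init__(self, name: str, color: str):
        # color must reach BaseModel.__init__, so pass it up instead of assigning it afterwards
        super().__init__(name=name, color=color)
        print("Shark initialization successful!")

f = fish(name="nemo")
s = shark(name="bruce", color="grey")
```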
76,279,304 | 2023-5-18 | https://stackoverflow.com/questions/76279304/configure-pylance-to-stop-prefixing-project-directory-on-import-auto-complete | When working in a Git repository where my Python/Django source is in a subfolder {$workspace}/app, as seen below. project/ .vscode/ .git/ app/ -- the app source code (not a Python module) docs/ .gitignore LICENSE The problem is that VSCode adds an incorrect app. prefix when auto-generating import statements. For example, it will generate app.core when the actual project module is core. Also, there isn't an __init__.py in the /app directory. I have read that I can open a workspace in the app/ directory, but the project repository is one directory above the app source and I sometimes need to make changes at the project root level. I have tried the following items in settings.json, which don't seem correct when all I want to do is configure the primary auto-complete and analysis paths. { "python.analysis.extraPaths": [ "/app", ], "python.autoComplete.extraPaths": [ "/app", ], { How can I get VSCode to stop treating the project /app directory as a Python module? | If it's the file structure in your question, open the project as a workspace. settings.json should be like this { "python.analysis.extraPaths": [ "./app", ], "python.autoComplete.extraPaths": [ "./app", ], } | 3 | 4 |
76,284,920 | 2023-5-18 | https://stackoverflow.com/questions/76284920/how-would-i-pad-a-string-with-random-symbols-in-python | I'm trying to make my 'hacking' portion of a Fallout terminal simulator more akin to what is actually in the game. I'm in the process of setting it up so that it uses a word list, picks some words from it, and then puts them into the list of address-like display. However, because the length of the words ranges from 4 letters (FISH) to 12 letters (Hippopotamus), the lines get staggered, and the pipes I'm using to separate the options no longer line up. I tried using an answer I found here, which works fine for ust spaces but trying to insert {"+varNameHere+":<15}".format(a) in the middle of the string, (typing_.typingPrint('0xCC01 |{)&@*'+"{:"+fillerChar+"^12}".format(words[0])+'@ |0xCC02 |!(#'+"{:"+fillerChar+"^12}".format(words[0])+'@)!#\n') , throws an ValueError about there only being one '}' in the format string. I understand what the error is talking about, but the reason for my question is, Is there anyway to have the padding character be randomly decided when the string is printed out into the terminal window? Code: possibleChars = ['!','@','#','$','%','^','&','*','(',')',';',':'] fillerChar = random.sample(possibleChars) print('0xCC01 |{)&@*'+"{:"+fillerChar+"^12}".format(words[0])+'@ |0xCC02 |!(#'+"{:"+fillerChar+"^12}".format(words[0])+'@)!#\n') | Create a random 15-character string. Then extract a slice of it long enough to pad out what you want. random_string = 'weopi94nf0683d0' word = 'FISH' left = (len(random_string) - len(word)) // 2 right = left + len(word) padded_word = random_string[:left] + word + random_string[right:] | 3 | 2 |
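If the goal is for the filler characters themselves to be freshly random on every print (rather than sliced from one fixed string), a small helper like the sketch below works; the function name pad_word and the default width are my own choices.

```python
import random

SYMBOLS = "!@#$%^&*();:"

def pad_word(word: str, width: int = 12) -> str:
    """Center `word` within `width` characters, padding both sides with random symbols."""
    left = max((width - len(word)) // 2, 0)
    right = max(width - len(word) - left, 0)
    return ("".join(random.choices(SYMBOLS, k=left))
            + word
            + "".join(random.choices(SYMBOLS, k=right)))

print(pad_word("FISH"))          # e.g. ^&@!FISH:$%( -- different filler on every call
print(pad_word("Hippopotamus"))  # exactly fills the width, so no padding is added
```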
76,284,570 | 2023-5-18 | https://stackoverflow.com/questions/76284570/converting-real-time-audio-to-phonemes | Using a microphone as an input for real-time audio. How do I extract the currently said phoneme from the audio? I need it for lipsyncing 2d characters. Basically, my approach would be to: Fetch the real-time audio using a microphone Detect the current phoneme that is being pronounced from the audio. I have tried looking everywhere for an example or library that could solve this type of problem. Most libraries don't seem to output phonemes from audio. There is a website that explains how they used machine learning to solve this, however without any code or tutorial on how to do it. https://www.arxiv-vanity.com/papers/1910.08685/ There is also this cool speech recognition tool called Pocketsphinx, but I cannot seem to find an example of it using Phoneme Recognition yet. | The way I would approach this is to get the word from the audio using Whisper or a similar STT service (the Python Speech Recognition Library is the go-to at the moment), then I would use the CMU Dict Library to provide phonemes for each word. The phonemes are given using the CMU dictionary - for example DH for the ð phoneme - the th sound in this and that. That is, they are not given in IPA pronunciation - so you may need another layer if you need the phonemes in IPA format. If you need IPA formatted phonemes, then consider the IPA2 library. | 3 | 4 |
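A sketch of just the word-to-phoneme step using NLTK's copy of the CMU dictionary (assumes nltk is installed and the corpus fetched once with nltk.download('cmudict'); the speech-to-text step is taken as given):

```python
from nltk.corpus import cmudict

pron = cmudict.dict()  # maps lowercase words to lists of possible phoneme sequences

def phonemes(word: str):
    entries = pron.get(word.lower())
    return entries[0] if entries else None  # just take the first pronunciation variant

for w in ["this", "that", "fish"]:
    print(w, phonemes(w))
# e.g. this -> ['DH', 'IH1', 'S']  (ARPAbet symbols with stress digits, not IPA)
```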
76,282,454 | 2023-5-18 | https://stackoverflow.com/questions/76282454/pandas-dataframe-groupby-and-aggregate-counting-on-condition | I have started playing with Data Analysis and all the related tools: Pandas, Numpy, Jupyter etc... The task I am working on is simple, and I could do easily with regular python. However I am more interested in exploring Pandas, and I am looking therefore for a Pandas solution. I have this simple Pandas DataFrame. The timestamp column is just a Unix timestamp, but to make things more readable I have just put a more comfortable number: id timestamp success 1 9999 True 2 1111 True 3 9999 False 4 1111 True 5 9999 True 6 1111 True I want to group by timestamp, but I want another aggregate column which is the result of the success column: if True count as one, if False count as 0. I hope the table below clarify what I am trying to achieve. Basically 1111 has three True therefore the sum is 3. 9999 has two True and one False, therefore the sum is 2. timestamp success 1111 3 9999 2 | import pandas as pd # The DataFrame a = { 'id': [1, 2, 3, 4, 5, 6], 'timestamp': [9999, 1111, 9999, 1111, 9999, 1111], 'success': [True, True, False, True, True, True] } df = pd.DataFrame(a) # Group by timestamp and calculate the sum of success result = df.groupby('timestamp')['success'].sum().reset_index() # Result print(result) Do you mean this? You group your data frame by timestamp and then you find the frequency of true values. | 3 | 3 |
76,274,802 | 2023-5-17 | https://stackoverflow.com/questions/76274802/plt-imshow-of-a-single-color-image-showing-as-black | I am trying to show a light-gray image using plt.imshow(), but the image turns out black. I tried: import matplotlib.pyplot as plt import numpy as np test_image = np.zeros((3871, 2484)) test_image.fill(200) plt.imshow(test_image, cmap="gray") plt.show() But ended up getting: Matplotlib version: 3.7.1 Numpy version: 1.24.3 Python version: 3.11.3 | You have to include the vmin,vmax parameters when you plot a single color image with plt.imshow(...). Set vmin=0 and vmax=500 to get a gray image. If vmin,vmax are not specified, then they will be set to the min and max values of the image data. This means that all of your input data is equal to vmin, which is the darkest possible value (black). import matplotlib.pyplot as plt import numpy as np test_image = np.zeros((3871, 2484)) test_image.fill(200) plt.imshow(test_image, cmap="gray", vmin=0, vmax=500) plt.show() | 3 | 3 |
76,275,425 | 2023-5-17 | https://stackoverflow.com/questions/76275425/how-to-add-custom-annotations-with-uncertainty-to-a-heatmap | I am attempting to visualize some data as a table, where the boxes of each table element are colored according to their value, the numerical value is also displayed, and the uncertainty on each element is shown. I can achieve 2 out of these 3 things using pandas.pivot_table and sns.heatmap, but cannot seem to include the uncertainty on each table element as part of the annotation. In the example code snippet: import pandas as pd import seaborn as sns import numpy as np df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo", "bar", "bar", "bar", "bar"], "B": ["one", "one", "one", "two", "two", "one", "one", "two", "two"], "C": ["small", "large", "large", "small", "small", "large", "small", "small", "large"], "D": [1, 2, 2, 3, 3, 4, 5, 6, 7], "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]}) table = pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'], aggfunc=np.sum, fill_value=0) sns.heatmap(table,annot=True) we produce a table like so: However, imagine that the entries "E" represented the uncertainty on elements "D". Is there any way these can be displayed on the table, as "E"[i]+/-"D"[i]? I tried using a custom annotation grid, but this requires a numpy array and so string formatting each element didn't work for this. | You can pass a DataFrame with the formatted strings to sns.heatmap: table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'B'], columns=['C'], aggfunc=np.sum, fill_value=0) sns.heatmap(table['D'], annot=table['D'].astype(str)+'±'+table['E'].astype(str), fmt='') | 3 | 4 |
76,270,706 | 2023-5-17 | https://stackoverflow.com/questions/76270706/mypy-errors-when-using-arraylike | I don't understand how I should be using ArrayLike in my code. If check mypy, I keep getting errors when I try to use the variables for anything without calling cast. I am trying to define function signatures that work with ndarray as well as regular lists. For example, the code below import numpy.typing as npt import numpy as np from typing import Any def f(a: npt.ArrayLike) -> int: return len(a) def g(a: npt.ArrayLike) -> Any: return a[0] print(f(np.array([0, 1])), g(np.array([0, 1]))) print(f([0, 1]), g([0, 1])) give me theses errors for f() and g(): Argument 1 to "len" has incompatible type "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]"; expected "Sized" [arg-type] Value of type "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" is not indexable [index] | The purpose of numpy.typing.ArrayLike is to be able to annotate objects that can be coerced into an ndarray. With that purpose in mind, they defined the type to be the following union: Union[ _SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]] ] _SupportsArray is simply a protocol with an __array__ method. It neither requires __len__ (for use with the len function) nor __getitem__ (for indexing) to be implemented. _NestedSequence is a more restrictive protocol that does actually require __len__ and __getitem__. But the problem with this code is that the parameter annotation is that union: import numpy.typing as npt ... def f(a: npt.ArrayLike) -> int: return len(a) So a could be a sequence-like object that supports __len__, but it could also be just an object that supports __array__ and nothing else. It could even be just an int for example (see the union again). Therefore the call len(a) is unsafe. Similarly, here the item access is not type safe because a might not implement __getitem__: ... def g(a: npt.ArrayLike) -> Any: return a[0] So the reason it is not working for you is that it is not meant to be used as an annotation for things that are numpy arrays or other sequences; it is meant to be used for things that can be turned into numpy arrays. If you want to annotate your functions f and g to take both lists and numpy arrays, you could just use a union of list and NDArray like list[Any] | npt.NDArray[Any]. If you want to have a wider annotation to accommodate any type that has __len__ and __getitem__, you need to define your own protocol: from typing import Any, Protocol, TypeVar import numpy as np T = TypeVar("T", covariant=True) class SequenceLike(Protocol[T]): def __len__(self) -> int: ... def __getitem__(self, item: int) -> T: ... def f(a: SequenceLike[Any]) -> int: return len(a) def g(a: SequenceLike[T]) -> T: return a[0] print(f(np.array([0, 1])), g(np.array([0, 1]))) print(f([0, 1]), g([0, 1])) To be more precise __getitem__ should probably also take slice objects, but the overloads may be overkill for you. | 6 | 8 |
76,268,855 | 2023-5-17 | https://stackoverflow.com/questions/76268855/remove-element-by-value-in-list-python-fastest | what is the fastest way to remove an element from a list by its value? I believe list.remove("element_to_be_removed") is the naive way. How can we optimize it? | Finding an element in a list by value is O(n), as is removing it once you have found it. There is no way to reduce this; it's inherent in how lists are built. Finding and/or removing an element in a set is O(1). Converting a list into a set is O(n). If you have a list and you need to remove one item, converting it to a set and then removing one item doesn't get you to O(1), because the set conversion itself is an O(n) operation. You're better off using list.remove(). If you have a list of unique items, and you anticipate needing to remove many items from it in an unordered way (say, you're going to eventually remove each item one by one), you can change that from an O(n^2) operation to an O(n) operation by converting the entire list into a set (once, in O(n)) and then removing the elements one by one after that (in O(1) per item, hence O(n) overall). The ideal solution (if possible) is to anticipate the ways in which you'll need to use this collection of items and make it a set from the beginning if the functionality of a set is a better fit for what you're doing. | 2 | 4 |
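A short sketch of the last paragraph's point — one O(n) conversion followed by O(1) removals, versus O(n) per removal with list.remove (note the set drops ordering and collapses duplicates):

```python
items = list(range(100_000))
to_remove = range(0, 100_000, 2)

# O(n^2) overall: each .remove() scans the list and shifts elements
# slow = items.copy()
# for x in to_remove:
#     slow.remove(x)

# O(n) overall: one conversion, then O(1) per removal
fast = set(items)
for x in to_remove:
    fast.discard(x)

print(len(fast))  # 50000
```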
76,268,656 | 2023-5-17 | https://stackoverflow.com/questions/76268656/how-to-extract-dict-values-of-pandas-dataframe-in-new-columns | I would like to extract the values of a dictionary inside a Pandas DataFrame df into new columns of that DataFrame. All keys in the referring dict are the same across all rows. import pandas as pd df = pd.DataFrame({'a': [1, 2, 3], 'b': [{'x':[101], 'y': [102], 'z': [103]}, {'x':[201], 'y': [202], 'z': [203]}, {'x':[301], 'y': [302], 'z': [303]}]}) dfResult = pd.DataFrame({'a': [1, 2, 3], 'x':[101, 201, 301], 'y': [102, 202, 302], 'z': [103, 203, 303]}) I am as far as I can get the keys and values out of the dict, but I do not know how to make new columns out of them: df.b.apply(lambda x: [x[y] for y in x.keys()]) 0 [[101], [102], [103]] 1 [[201], [202], [203]] 2 [[301], [302], [303]] df.b.apply(lambda x: [y for y in x.keys()]) 0 [x, y, z] 1 [x, y, z] 2 [x, y, z] | If there are always one element lists is possible use nested list with dictionary comprehension and pass to DataFrame constructor: df = df.join(pd.DataFrame([{k: v[0] for k, v in x.items()} for x in df.pop('b')], index=df.index)) print (df) a x y z 0 1 101 102 103 1 2 201 202 203 2 3 301 302 303 Another idea is create DataFrame for each row in dictionary comprehension and join by concat: df = df.join(pd.concat({k: pd.DataFrame(v) for k, v in df.pop('b').items()}).droplevel(1)) print (df) a x y z 0 1 101 102 103 1 2 201 202 203 2 3 301 302 303 | 3 | 3 |
76,223,362 | 2023-5-11 | https://stackoverflow.com/questions/76223362/in-a-polars-group-by-aggregation-how-do-you-concatenate-string-values-in-each-g | When grouping a Polars dataframe in Python, how do you concatenate string values from a single column across rows within each group? For example, given the following DataFrame: import polars as pl df = pl.DataFrame( { "col1": ["a", "b", "a", "b", "c"], "col2": ["val1", "val2", "val1", "val3", "val3"] } ) Original df: shape: (5, 2) ┌──────┬──────┐ │ col1 ┆ col2 │ │ --- ┆ --- │ │ str ┆ str │ ╞══════╪══════╡ │ a ┆ val1 │ │ b ┆ val2 │ │ a ┆ val1 │ │ b ┆ val3 │ │ c ┆ val3 │ └──────┴──────┘ I want to run a group_by operation, like: df.group_by('col1').agg( col2_g = pl.col('col2').some_function_like_join(',') ) The expected output is: ┌──────┬───────────┐ │ col1 ┆ col2_g │ │ --- ┆ --- │ │ str ┆ str │ ╞══════╪═══════════╡ │ a ┆ val1,val1 │ │ b ┆ val2,val3 │ │ c ┆ val3 │ └──────┴───────────┘ What is the name of the some_function_like_join function? I have tried the following methods, and none work: df.group_by('col1').agg(pl.col('col2').list.concat(',')) df.group_by('col1').agg(pl.col('col2').join(',')) df.group_by('col1').agg(pl.col('col2').list.join(',')) | If you want to concatenate them, I assume you want the result as a string with your specified delimiter: out = df.group_by("col1").agg( pl.col("col2").str.join(",") ) Result: shape: (3, 2) ┌──────┬───────────┐ │ col1 ┆ col2 │ │ --- ┆ --- │ │ str ┆ str │ ╞══════╪═══════════╡ │ a ┆ val1,val1 │ │ b ┆ val2,val3 │ │ c ┆ val3 │ └──────┴───────────┘ If you want them within a List, you simply do: out = df.groupby("col1").agg( pl.col("col2") ) Result: shape: (3, 2) ┌──────┬──────────────────┐ │ col1 ┆ col2 │ │ --- ┆ --- │ │ str ┆ list[str] │ ╞══════╪══════════════════╡ │ a ┆ ["val1", "val1"] │ │ c ┆ ["val3"] │ │ b ┆ ["val2", "val3"] │ └──────┴──────────────────┘ | 6 | 8 |
76,231,965 | 2023-5-11 | https://stackoverflow.com/questions/76231965/add-hint-of-duration-for-each-iteration-in-tqdm | I have a list of tasks that each take a different amount of time. Let's say, I have 3 tasks, with durations close to 1x, 5x, 10*x. My tqdm code is something like: from tqdm import tqdm def create_task(n): def fib(x): if x == 1 or x == 0: return 1 return fib(x - 1) + fib(x - 2) return lambda: fib(n) n = 1 tasks = [create_task(n), create_task(5*n), create_task(10*n)] for task in tqdm(tasks): task.run() The problem is that tqdm thinks each iteration takes the same amount of time. As the first takes approximately 1/10 of the time, the ETA is unreliable. My question: is it possible to somehow add a hint to tqdm to inform how much each iteration takes compared to the first? Something like informing the duration weights of each iteration... Thanks! | There are two standard usages for tqdm progress bars: iterable-based, and manual. Manually updating a progress bar allows you to specify progress bar weights. Consider the following code: def my_func(x): """ Sleep for x / 5 seconds. """ duration = x / 5 time.sleep(duration) in_vals = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] # Ex. 1: unweighted, updated automatically (iterable-based) for x in tqdm(in_vals, position=0): my_func(x) # Ex. 2: unweighted, updated manually # functionally identical to example 1 with tqdm(total=len(in_vals), position=0) as pbar: for x in in_vals: my_func(x) pbar.update(n=1) # Ex. 3: weighted, updated manually with tqdm(total=sum(in_vals), position=0) as pbar: for x in in_vals: my_func(x) pbar.update(n=x) The first two examples give a progress bar with an inaccurate estimated duration that grows longer as the iterations progress. This is because tqdm doesn't know that later iterations will take longer to complete than earlier iterations. To fix this problem and get an accurate ETA from the start, we specify the total argument in tqdm() as the sum of all weights (example 3). To update the progress bar after completing an iteration with weight w, call pbar.update(n=w). For the OP's use case, a tqdm progress bar with duration weights would look something like this: def fib(x): time.sleep(.01) # added this so that progress bar isn't completed instantly if x == 1 or x == 0: return 1 return fib(x - 1) + fib(x - 2) in_vals = [1, 5, 10] with tqdm(total=sum(in_vals), unit='calc', position=0) as pbar: for num in in_vals: fib(num) pbar.update(num) | 3 | 1 |
76,247,812 | 2023-5-14 | https://stackoverflow.com/questions/76247812/how-to-create-pagination-embed-menu-in-discord-py | I need a 'SLASH COMMAND' that can displays an embed with 10 elements per page and buttons below for navigation (not reactions, those clickable buttons recently introduced). I am using Discord.py version 2.2.3 Here is the code snippet of my bot: import os import discord from discord import app_commands import asyncio TOKEN='YOUR TOKEN HERE' GUILD='GUILD NAME HERE' intents = discord.Intents.default() intents.message_content = True client = discord.Client(intents=intents) #I am using client, instead of commands.Bot() tree = app_commands.CommandTree(client) #This is for the slash command, I am using tree. @client.event async def on_ready(): for guild in client.guilds: if guild.name == GUILD: break print( f'{client.user} has successfully connected to the following guild(s):\n' f'{guild.name}(id: {guild.id})' ) await client.change_presence(activity=discord.Activity(name='anything', type=discord.ActivityType.playing)) @tree.command(name='EMBED', description='Description here', guild=discord.Object(id=000000000000000000)) #guild id here ''' Cannot proceed further ''' I tried different modules such as discord_slash (for slash command), interactions etc. and still unable to create a paginator. I was expecting to create a slash command that displays 10 elements per page out of any number of elements, in the form of an embed with buttons like this: Screenshot of what I wanted. Extra Info: I am using client instead of Commands.Bot() I am using tree for Slash commands. discord_slash is not compatible with my discord.py version. Slash command is necessary for me. I am using Python 3.11 IDLE. | I have a pagination view already prepared that I use in my bots: import discord from typing import Callable, Optional class Pagination(discord.ui.View): def __init__(self, interaction: discord.Interaction, get_page: Callable): self.interaction = interaction self.get_page = get_page self.total_pages: Optional[int] = None self.index = 1 super().__init__(timeout=100) async def interaction_check(self, interaction: discord.Interaction) -> bool: if interaction.user == self.interaction.user: return True else: emb = discord.Embed( description=f"Only the author of the command can perform this action.", color=16711680 ) await interaction.response.send_message(embed=emb, ephemeral=True) return False async def navegate(self): emb, self.total_pages = await self.get_page(self.index) if self.total_pages == 1: await self.interaction.response.send_message(embed=emb) elif self.total_pages > 1: self.update_buttons() await self.interaction.response.send_message(embed=emb, view=self) async def edit_page(self, interaction: discord.Interaction): emb, self.total_pages = await self.get_page(self.index) self.update_buttons() await interaction.response.edit_message(embed=emb, view=self) def update_buttons(self): if self.index > self.total_pages // 2: self.children[2].emoji = "⏮️" else: self.children[2].emoji = "⏭️" self.children[0].disabled = self.index == 1 self.children[1].disabled = self.index == self.total_pages @discord.ui.button(emoji="◀️", style=discord.ButtonStyle.blurple) async def previous(self, interaction: discord.Interaction, button: discord.Button): self.index -= 1 await self.edit_page(interaction) @discord.ui.button(emoji="▶️", style=discord.ButtonStyle.blurple) async def next(self, interaction: discord.Interaction, button: discord.Button): self.index += 1 await self.edit_page(interaction) 
@discord.ui.button(emoji="⏭️", style=discord.ButtonStyle.blurple) async def end(self, interaction: discord.Interaction, button: discord.Button): if self.index <= self.total_pages//2: self.index = self.total_pages else: self.index = 1 await self.edit_page(interaction) async def on_timeout(self): # remove buttons on timeout message = await self.interaction.original_response() await message.edit(view=None) @staticmethod def compute_total_pages(total_results: int, results_per_page: int) -> int: return ((total_results - 1) // results_per_page) + 1 You can save it in a pagination.py file. Its use is very simple. You just need to pass a function that will generate your pages. This function will receive the index and should return the Embed corresponding to the page and also the total number of pages. I'll show you an example of use: import discord from pagination import Pagination users = [f"User {i}" for i in range(1, 10000)] # This is a long list of results # I'm going to use pagination to display the data L = 10 # elements per page @tree.command(name="show") async def show(interaction: discord.Interaction): async def get_page(page: int): emb = discord.Embed(title="The Users", description="") offset = (page-1) * L for user in users[offset:offset+L]: emb.description += f"{user}\n" emb.set_author(name=f"Requested by {interaction.user}") n = Pagination.compute_total_pages(len(users), L) emb.set_footer(text=f"Page {page} from {n}") return emb, n await Pagination(interaction, get_page).navegate() My pagination model has 3 buttons. The first two have the function of going to the previous page and going to the next page. These buttons will be disabled when they cannot be used. The third button has a double function: if you are halfway through the pages, it will allow you to advance to the last one. If you are past half the pages, it will allow you to rewind the first one. | 3 | 4 |
76,218,303 | 2023-5-10 | https://stackoverflow.com/questions/76218303/in-python-polars-filter-and-aggregate-dict-of-lists | I have got a dataframe with string representation of json: df = pl.DataFrame({ "json": [ '{"x":[0,1,2,3], "y":[10,20,30,40]}', '{"x":[0,1,2,3], "y":[10,20,30,40]}', '{"x":[0,1,2,3], "y":[10,20,30,40]}' ] }) shape: (3, 1) ┌───────────────────────────────────┐ │ json │ │ --- │ │ str │ ╞═══════════════════════════════════╡ │ {"x":[0,1,2,3], "y":[10,20,30,40… │ │ {"x":[0,1,2,3], "y":[10,20,30,40… │ │ {"x":[0,1,2,3], "y":[10,20,30,40… │ └───────────────────────────────────┘ Now I would like to calculate the average for y where x > 0 and x < 3 for each row. This is my current working solution: First evaluate the string -> dict and then create a dataframe, which is filtered by x. # import ast df = df.with_columns( pl.col('json').map_elements(lambda x: pl.DataFrame(ast.literal_eval(x)).filter((pl.col('x') < 3) & (pl.col('x') > 0))['y'].mean()) ) shape: (3, 1) ┌──────┐ │ json │ │ --- │ │ f64 │ ╞══════╡ │ 25.0 │ │ 25.0 │ │ 25.0 │ └──────┘ This works fine, but for large datasets the apply functions is slowing down the process significantly. Is there a more elegant and faster way of doing it? | json.loads() To parse JSON strings in Polars you can use .str.json_decode() (i.e. the equivalent of json.loads) df.with_columns(pl.col("json").str.json_decode()) shape: (3, 1) ┌──────────────────────────────┐ │ json │ │ --- │ │ struct[2] │ ╞══════════════════════════════╡ │ {[0, 1, … 3],[10, 20, … 40]} │ │ {[0, 1, … 3],[10, 20, … 40]} │ │ {[0, 1, … 3],[10, 20, … 40]} │ └──────────────────────────────┘ unnest() The "JSON object" becomes a Polars struct which you can .unnest into separate columns. As the lists are of the same length, you can .explode() them both at the same time. (df.with_columns(pl.col("json").str.json_decode()) .unnest("json") .explode("x", "y") ) shape: (12, 2) ┌─────┬─────┐ │ x ┆ y │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 0 ┆ 10 │ │ 1 ┆ 20 │ │ 2 ┆ 30 │ │ 3 ┆ 40 │ │ … ┆ … │ │ 0 ┆ 10 │ │ 1 ┆ 20 │ │ 2 ┆ 30 │ │ 3 ┆ 40 │ └─────┴─────┘ If we add a row count before exploding, we can use it in a .group_by to rebuild each "row" after filtering. (df.with_row_count() .with_columns(pl.col("json").str.json_decode()) .unnest("json") .explode("x", "y") .filter(pl.col("x").is_between(1, 2)) .group_by("row_nr") .agg(pl.col("y").mean()) ) shape: (3, 2) ┌────────┬──────┐ │ row_nr ┆ y │ │ --- ┆ --- │ │ u32 ┆ f64 │ ╞════════╪══════╡ │ 0 ┆ 25.0 │ │ 1 ┆ 25.0 │ │ 2 ┆ 25.0 │ └────────┴──────┘ List/Struct Namespaces As well as unnesting/exploding into columns/rows, Polars also has the .list and .struct namespaces. (df.with_columns(parsed = pl.col("json").str.json_decode()) .with_columns(mean = pl.col("parsed").struct["y"].list.gather( pl.col("parsed").struct["x"].list.eval( pl.element().is_between(1, 2).arg_true() ) ) .list.mean() ) ) shape: (3, 3) ┌───────────────────────────────────┬──────────────────────────────┬──────┐ │ json ┆ parsed ┆ mean │ │ --- ┆ --- ┆ --- │ │ str ┆ struct[2] ┆ f64 │ ╞═══════════════════════════════════╪══════════════════════════════╪══════╡ │ {"x":[0,1,2,3], "y":[10,20,30,40… ┆ {[0, 1, … 3],[10, 20, … 40]} ┆ 25.0 │ │ {"x":[0,1,2,3], "y":[10,20,30,40… ┆ {[0, 1, … 3],[10, 20, … 40]} ┆ 25.0 │ │ {"x":[0,1,2,3], "y":[10,20,30,40… ┆ {[0, 1, … 3],[10, 20, … 40]} ┆ 25.0 │ └───────────────────────────────────┴──────────────────────────────┴──────┘ | 3 | 1 |
76,265,735 | 2023-5-16 | https://stackoverflow.com/questions/76265735/does-pygbag-directly-interprets-python-in-the-browser-or-compiles-it-to-wasm-and | I was wandering through the docs of pygbag, and I couldn't find how the python scripts are actually executed from the browser. I made a test project to look how the files created by pygbag looked like, but I couldn't really figure out what role the index.html exactly plays. It seemed to me like I couldn't find any script in it, so I supposed that it could be directly interpreted, but I'm not really sure. There is a python script in the html file, and I found one line which seems to run the main program : await shell.runpy(main, callback=ui_callback), but I don't know whether it just executes the python script in the folder or if the script is somewhere compiled in this file. Could anyone explain me ? | When running in the webpage pygbag is in fact a C runtime linked to libpython ( cpython-wasm from python.org) compiled to WebAssembly with emscripten compiler and hosted on a CDN (pygame-web.github.io). It is downloaded once per game and per version update for fast local use. There's some javascript glue to connect the C library used by pygbag and python to some file descriptors. You cannot guess the mechanism that easily because calls originate from wasm cpu which is not exposed in javascript console. Those file descriptors manipulated by the libc (musl provided by emsdk the portable emscripten compiler) can be in a virtual filesystem hosted by MEMFS from emscripten runtime ( eg for /tmp ) or BrowserFS a more advanced virtual filesystem ( /data and /usr ). They can also be stdin/stdout/stderr file descriptors and this is why you cannot find the file on startup : the python part of html file is sent to python interpreter as if you typed it in your shell this is done by calling PyRun_InteractiveLoop on that file descriptor. Later if a file "main.py" is found in the Virtual filesystem it is queued but you can also pass relative file url on the command line eg https://pygame-web.github.io/showroom/pypad.html#src/test_panda3d_cube.py. It also work with github gist raw links. pygbag can also embed code or git repo eg https://pygame-web.github.io/showroom/test_embed_git.html directly in html pages. Some packages like pygame-ce, Panda3D or Harfang3D are indeed pre-compiled to WebAssembly and that's because they are mostly C or C++. Python code is actually interpreted and type-annotated code could be compiled direcly to Wasm but that fonctionnality is not (yet) available for public use. The format choosen for game archive is a zip file similar to android APK though unaligned and unsigned. The android runtime to make these run on real android is not (yet) available for public use either. Everyone is welcome to add more informations on pygame-web.github.io wiki. | 4 | 2 |
76,249,186 | 2023-5-14 | https://stackoverflow.com/questions/76249186/probability-of-moving-on-a-cartesian-plane | I am working on the below coding problem, which looks more like a probability question than a coding problem. There is a platform consisting of 5 vertices. The coordinates of the vertices are: (-1,0), (0,-1), (0,0), (0,1), (1,0). You start at vertex (xs,ys) and keep moving randomly either left (i.e., x coordinate decreases by 1), right (i.e., x coordinate increases by 1), up, or down. The direction of subsequent moves is independent. What is the probability that you reach vertex (xe, ye) before falling off the platform? Constraints: (xs, ys) in [(-1,0), (0,-1), (0,0), (0,1), (1,0)] (xe, ye) in [(-1,0), (0,-1), (0,0), (0,1), (1,0)] xs != xe or ys != ye Below is what I implemented; it works for the case I shared but fails for all other cases: def calculate_probability(xs, ys, xe, ye): edges = [[-1, 0], [0, -1], [0, 1], [1, 0]] if [xs, ys] in edges: if xe == 0 and ye == 0: return 0.25 elif xs == xe and ys == ye: return 1.0 elif [xe, ye] in edges: return 0.075 if xs == 0 and ys == 0: if [xe, ye] in edges: return 0.3 elif xe == 0 and ye == 0: return 1 return 0 | If we handle the edge cases where you start at your destination, or you start at an edge and the destination is the center, we're left with a simple scenario: find your way to the center, and then to the destination. Getting to the origin is a flat 0.25 probability, and then it's just a matter of getting to the right edge. If you randomly walk in the wrong direction, you can always backtrack with 0.25 probability of success. This can repeat any number of times before walking in the correct direction (to the destination). This means from the origin, we have a 1/4 chance of picking the right direction, and a 3/4 chance of picking the wrong direction. If we pick the right direction, we're done; if we pick the wrong direction, we have to pick the opposite direction to get back to the origin and avoid falling off, which is a 1/4 chance. Combining these into one, we have a 1/4 chance of being right the first time, and a 3/16 chance at a second chance. Continuing this repeatedly, we end up with a formula like: 1/4 + 1/4 * (3/16) + 1/4 * (3/16)^2 + 1/4 * (3/16)^3 + ... = 1/4 * (1 + (3/16) + (3/16)^2 + (3/16)^3 + ...) = 1/4 * (16/13) = 4/13 So from the origin, we have a 4/13 chance of walking to the correct edge tile without falling off. In code: def solve(xs, ys, xe, ye): # already at destination if xs == xe and ys == ye: return 1 # if destination is the origin, the probability is a flat 0.25 if xe == 0 and ye == 0: return 0.25 # first move must take you to the origin if not already there prob = 0.25 if xs != 0 or ys != 0 else 1 # multiply by probability of walking from origin to destination return prob * 4 / 13 Update: I have an alternative and possibly simpler way of approaching the math in the section above. Take P to represent the probability of successfully walking from the origin to the destination without falling off. Ignore the edge case where the origin is the destination - that's already handled above. We now have two cases to handle: The first move takes us to the destination. This has a 1/4 chance of happening. The first move takes us in one of the other 3 directions (3/4 chance). We then have to return to the origin, with a 1/4 chance of not falling off. We then repeat the process recursively, with a P chance of success (by definition of P). This gives 3/16 * P chance for this scenario.
This means we have, algebraically: P = 1/4 + (3/16) * P, which gives (13/16) * P = 1/4, and therefore P = 4/13. We arrive at the same probability of 4/13 as before. | 4 | 9
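As a sanity check on the closed-form result above, here is a small Monte Carlo sketch (not part of the original post) that simulates the walk directly; with enough trials the origin-to-edge estimate should land near 4/13 ≈ 0.3077 and the edge-to-edge estimate near 1/4 * 4/13 = 1/13 ≈ 0.0769.

import random

PLATFORM = {(-1, 0), (0, -1), (0, 0), (0, 1), (1, 0)}
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def estimate(start, end, trials=200_000):
    # fraction of walks that reach `end` before stepping off the platform
    wins = 0
    for _ in range(trials):
        x, y = start
        while True:
            dx, dy = random.choice(MOVES)
            x, y = x + dx, y + dy
            if (x, y) == end:
                wins += 1
                break
            if (x, y) not in PLATFORM:
                break  # fell off
    return wins / trials

print(estimate((0, 0), (1, 0)))   # ~0.3077 (4/13)
print(estimate((-1, 0), (1, 0)))  # ~0.0769 (1/13)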
76,225,595 | 2023-5-11 | https://stackoverflow.com/questions/76225595/nameerror-name-partialstate-is-not-defined-error-while-training-hugging-face | Here is the code block which caused the error training_args = TrainingArguments( output_dir="my_awesome_mind_model", evaluation_strategy="epoch", save_strategy="epoch", learning_rate=3e-5, per_device_train_batch_size=32, gradient_accumulation_steps=4, per_device_eval_batch_size=32, num_train_epochs=10, warmup_ratio=0.1, logging_steps=10, load_best_model_at_end=True, metric_for_best_model="accuracy", push_to_hub=True, ) trainer = Trainer( model=model, args=training_args, train_dataset=dataset["train"], # eval_dataset=encoded_minds["test"], tokenizer=feature_extractor, compute_metrics=compute_metrics, ) trainer.train() getting the following error NameError Traceback (most recent call last) in <cell line: 1>() 1 training_args = TrainingArguments( 2 output_dir="my_awesome_mind_model", 3 evaluation_strategy="epoch", 4 save_strategy="epoch", 5 learning_rate=3e-5, 4 frames /usr/local/lib/python3.10/dist-packages/transformers/training_args.py in _setup_devices(self) 1629 self._n_gpu = 1 1630 else: 1631 self.distributed_state = PartialState(backend=self.ddp_backend) 1632 self._n_gpu = 1 1633 if not is_sagemaker_mp_enabled(): NameError: name 'PartialState' is not defined I am trying to follow the audio classification guide of hugging face(link on another dataset but upon running the training args code i am getting name "PartialState" not defined error. | As of 2023-05-11: The error seems to be caused by an issue in the huggingface/accelerate library. You can try following solutions: Reinstall transformers & accelerate pip uninstall -y transformers accelerate pip install transformers accelerate If you are using colab/Jupyter, make sure to restart the notebook's Runtime. Install dev version of accelerate pip install git+https://github.com/huggingface/accelerate Reverse to previous version of transformers (4.28.0) # You might also need to uninstall transformers first: pip uninstall -y transformers pip install transformers==4.28.0 | 11 | 21 |
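After reinstalling and restarting the runtime as described above, a quick check (a small sketch, not part of the original answer) confirms that the freshly installed versions are really the ones being imported and that PartialState is now resolvable:

import accelerate
import transformers

print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)

# With a recent accelerate this import succeeds, which is exactly what
# transformers' TrainingArguments needs internally.
from accelerate import PartialState
print(PartialState)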
76,220,715 | 2023-5-10 | https://stackoverflow.com/questions/76220715/type-vector-does-not-exist-on-postgresql-langchain | I was trying to embed some documents on postgresql with the help of pgvector extension and langchain. Unfortunately I'm having trouble with the following error: (psycopg2.errors.UndefinedObject) type "vector" does not exist LINE 4: embedding VECTOR(1536), ^ [SQL: CREATE TABLE langchain_pg_embedding ( collection_id UUID, embedding VECTOR(1536), document VARCHAR, cmetadata JSON, custom_id VARCHAR, uuid UUID NOT NULL, PRIMARY KEY (uuid), FOREIGN KEY(collection_id) REFERENCES langchain_pg_collection (uuid) ON DELETE CASCADE ) ] My environment info: pgvector docker image ankane/pgvector:v0.4.1 python 3.10.6, psycopg2 2.9.6, pgvector 0.1.6 List of installed extensions on postgres Name | Version | Schema | Description ---------+---------+------------+-------------------------------------------- plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language vector | 0.4.1 | public | vector data type and ivfflat access method I've tried the following ways to resolve: Fresh installing the Postgres docker image with pgvector extension enabled. Manually install the extension with the official instruction. Manually install the extension on Postgres like the following: CREATE EXTENSION IF NOT EXISTS vector SCHEMA public VERSION "0.4.1"; But no luck. | Update 17th July 2023 As previously I mentioned my issue was somewhere else in my configuration, here is the other reason that may be responsible for the error, The pgvector extension isn't enabled in the database you are using. Make sure you run CREATE EXTENSION vector; in each database you are using for storing vectors. The vector schema is not in the search_path. Run SHOW search_path; to see the available schemas in the search path and \dx to see the list of installed extensions with schemas. Unfortunately, the issue was somewhere else. My extension installation and search_path schema were totally okay for the defined database I was supposed to use. But my environment variable which was responsible for which database to use, got messed up and was using the default database postgres instead of my defined database, which didn't have the extension enabled. | 18 | 17 |
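Because the failing stack in the question goes through psycopg2, a short diagnostic sketch run against the same database the application connects to can rule out both causes listed above; the connection parameters here are placeholders, not values from the original post.

import psycopg2

# Placeholder credentials -- use exactly the ones your application uses,
# so you are inspecting the database that actually raises the error.
conn = psycopg2.connect(host="localhost", port=5432, dbname="mydb",
                        user="postgres", password="postgres")
conn.autocommit = True

with conn.cursor() as cur:
    # The extension is enabled per database, not per server.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")

    # Confirm it is installed and note which schema it lives in.
    cur.execute("SELECT extname, extversion, extnamespace::regnamespace "
                "FROM pg_extension WHERE extname = 'vector';")
    print(cur.fetchall())

    # That schema must be on the search_path for the VECTOR type to resolve.
    cur.execute("SHOW search_path;")
    print(cur.fetchone())

conn.close()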
76,265,631 | 2023-5-16 | https://stackoverflow.com/questions/76265631/chromadb-add-single-document-only-if-it-doesnt-exist | I'm working with langchain and ChromaDb using python. Now, I know how to use document loaders. For instance, the below loads a bunch of documents into ChromaDb: from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() from langchain.vectorstores import Chroma db = Chroma.from_documents(docs, embeddings, persist_directory='db') db.persist() But what if I wanted to add a single document at a time? More specifically, I want to check if a document exists before I add it. This is so I don't keep adding duplicates. If a document does not exist, only then do I want to get embeddings and add it. How do I do this using langchain? I think I mostly understand langchain but have no idea how to do seemingly basic tasks like this. | Filter based solely on the Document's Content Here is an alternative filtering mechanism that uses a nice list comprehension trick that exploits the truthy evaluation associated with the or operator in Python: # Create a list of unique ids for each document based on the content ids = [str(uuid.uuid5(uuid.NAMESPACE_DNS, doc.page_content)) for doc in docs] unique_ids = list(set(ids)) # Ensure that only docs that correspond to unique ids are kept and that only one of the duplicate ids is kept seen_ids = set() unique_docs = [doc for doc, id in zip(docs, ids) if id not in seen_ids and (seen_ids.add(id) or True)] # Add the unique documents to your database db = Chroma.from_documents(unique_docs, embeddings, ids=unique_ids, persist_directory='db') In the first line, a unique UUID is generated for each document by using the uuid.uuid5() function, which creates a UUID using the SHA-1 hash of a namespace identifier and a name string (in this case, the content of the document). The if condition in the list comprehension checks whether the ID of the current document exists in the seen_ids set: If it doesn't exist, this implies the document is unique. It gets added to seen_ids using seen_ids.add(id), and the document gets included in unique_docs. If it does exist, the document is a duplicate and gets ignored. The or True at the end is necessary to always return a truthy value to the if condition, because seen_ids.add(id) returns None (which is falsy) even when an element is successfully added. This approach is more practical than generating IDs using URLs or other document metadata, as it directly prevents the addition of duplicate documents based on content rather than relying on metadata or manual checks. | 15 | 8 |
76,233,163 | 2023-5-12 | https://stackoverflow.com/questions/76233163/valueerror-run-not-supported-when-there-is-not-exactly-one-output-key-got | I got an error says ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources', 'source_documents']. Here's the traceback error File "C:\Users\Science-01\anaconda3\envs\gpt-dev\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script exec(code, module.__dict__) File "C:\Users\Science-01\Documents\Working Folder\Chat Bot\Streamlit\alpha-test.py", line 67, in <module> response = chain.run(prompt, return_only_outputs=True) File "C:\Users\Science-01\anaconda3\envs\gpt-dev\lib\site-packages\langchain\chains\base.py", line 228, in run raise ValueError( I tried to run langchain on Streamlit. I use RetrievalQAWithSourcesChain and ChatPromptTemplate Here is my code import os import streamlit as st from apikey import apikey from langchain.document_loaders import PyPDFLoader from langchain.document_loaders import DirectoryLoader from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.chains import RetrievalQAWithSourcesChain from langchain.llms import OpenAI from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.chat_models import ChatOpenAI os.environ['OPENAI_API_KEY'] = apikey st.title('🐔 OpenAI Testing') prompt = st.text_input('Put your prompt here') loader = DirectoryLoader('./',glob='./*.pdf', loader_cls=PyPDFLoader) pages = loader.load_and_split() text_splitter = RecursiveCharacterTextSplitter( chunk_size = 1000, chunk_overlap = 200, length_function = len, ) docs = text_splitter.split_documents(pages) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_documents(docs, embeddings) system_template = """ Use the following pieces of context to answer the users question. If you don't know the answer, just say that "I don't know", don't try to make up an answer. ---------------- {summaries}""" messages = [ SystemMessagePromptTemplate.from_template(system_template), HumanMessagePromptTemplate.from_template("{question}") ] prompt = ChatPromptTemplate.from_messages(messages) chain_type_kwargs = {"prompt": prompt} llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, max_tokens=256) # Modify model_name if you have access to GPT-4 chain = RetrievalQAWithSourcesChain.from_chain_type( llm=llm, chain_type="stuff", retriever=docsearch.as_retriever(search_kwargs={'k':2}), return_source_documents=True, chain_type_kwargs=chain_type_kwargs ) if prompt: response = chain.run(prompt, return_only_outputs=True) st.write(response) It seems like the error is in chain.run(), anyone know how to solve this error? | I found the solution, change this code if prompt: response = chain.run(prompt, return_only_outputs=True) st.write(response) to this if st.button('Generate'): if prompt: with st.spinner('Generating response...'): response = chain({"question": prompt}, return_only_outputs=True) answer = response['answer'] st.write(answer) else: st.warning('Please enter your prompt') I also added st.button, st.spinner, and st.warning (optional) | 5 | 3 |
76,222,409 | 2023-5-10 | https://stackoverflow.com/questions/76222409/how-to-create-python-c-extension-with-submodule-that-can-be-imported | I'm creating a C++ extension for python. It creates a module parent that contains a sub-module child. The child has one method hello(). It works fine if I call it as import parent parent.child.hello() > 'Hi, World!' If I try to import my function it fails import parent from parent.child import hello > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > ModuleNotFoundError: No module named 'parent.child'; 'parent' is not a package parent.child > <module 'child'> here is my code setup.py from setuptools import Extension, setup # Define the extension module extension_mod = Extension('parent', sources=['custom.cc']) # Define the setup parameters setup(name='parent', version='1.0', description='A C++ extension module for Python.', ext_modules=[extension_mod], ) and my custom.cc #include <Python.h> #include <string> std::string hello() { return "Hi, World!"; } static PyObject* hello_world(PyObject* self, PyObject* args) { return PyUnicode_FromString(hello().c_str()); } static PyMethodDef ParentMethods[] = { {nullptr, nullptr, 0, nullptr} }; static PyMethodDef ChildMethods[] = { {"hello", hello_world, METH_NOARGS, ""}, {nullptr, nullptr, 0, nullptr} }; static PyModuleDef ChildModule = { PyModuleDef_HEAD_INIT, "child", "A submodule of the parent module.", -1, ChildMethods, nullptr, nullptr, nullptr, nullptr }; static PyModuleDef ParentModule = { PyModuleDef_HEAD_INIT, "parent", "A C++ extension module for Python.", -1, ParentMethods, nullptr, nullptr, nullptr, nullptr }; PyMODINIT_FUNC PyInit_parent(void) { PyObject* parent_module = PyModule_Create(&ParentModule); if (!parent_module) { return nullptr; } PyObject* child_module = PyModule_Create(&ChildModule); if (!child_module) { Py_DECREF(parent_module); return nullptr; } PyModule_AddObject(parent_module, "child", child_module); return parent_module; } I install and build with python setup.py build install. So, how do I make sure that my parent is a package? My code is a toy example but I actually want both modules defined on C++ level. I don't want to split them into several modules - since they are sharing some C++ code. I'm hoping for something similar to approach of this answer Python extension with multiple modules | Doing this from within the extension is a "simple" matter of emulating the behavior for modules that the import system recognizes as packages. (Depending on context, it might be nicer to provide an import hook that did the same thing from the outside.) Just a few changes are needed: Make the name of the child "parent.child". Make parent a package: PyModule_AddObject(parent_module, "__path__", PyList_New(0)); PyModule_AddStringConstant(parent_module, "__package__", "parent"); Make child a member of that package: PyModule_AddStringConstant(child_module, "__package__", "parent"); Update sys.modules as Python would if it had performed the import: PyDict_SetItemString(PyImport_GetModuleDict(), "parent.child", child_module); Of course, several of these calls can fail; note that if PyModule_AddObject fails you have to drop the reference to the object being added (which I very much did not do here for clarity). | 5 | 1 |
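Once the extension is rebuilt with the changes above, the behaviour the question asks for can be verified from Python; this short usage check assumes the compiled parent module is importable from the current environment.

import sys

import parent
from parent.child import hello  # the import that previously failed

print(hello())                        # Hi, World!
print("parent.child" in sys.modules)  # True: the module registered itself
print(parent.__path__)                # [] -- enough for 'parent' to count as a package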
76,249,589 | 2023-5-14 | https://stackoverflow.com/questions/76249589/building-python-c-module-on-windows | I am trying to build a 'C' python extension on Windows, the core C code compiles absolutely fine, but I am unable to build the python module using setuptool as I am getting mandlebrot.c(36): fatal error C1083: Cannot open include file: 'stdio.h': No such file or directory error: command 'e:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.35.32215\bin\HostX86\x86\cl.exe' failed with exit code 2 I am trying to build using: python3.9 setup.py build_ext --inplace --plat-name win32 Any help would be welcome as I have spent a long time trying to work out how to get this to work. The setup file: from setuptools import setup, Extension from subprocess import getoutput LIB_DIRS=['.\mandlebrot_set\libs'] INCLUDE_DIRS=['.\mandlebrot_set\includes'] module1 = Extension("mandlebrot", sources = ["mandlebrot.c", "mandlebrot_python.c"], libraries = ['mpfr', 'mpir'], library_dirs = LIB_DIRS, include_dirs = INCLUDE_DIRS) setup(name = "PackageName", version = '0.1', description = 'Mandlebrot Set calculator', author = '', author_email = '', ext_modules = [module1] ) | 1. The error By default, Python is built on Win with VStudio ([Python.Wiki]: WindowsCompilers), and it also uses that to build C / C++ code (unless otherwise instructed). It seems like you don't have VStudio's cross build tools (for 032bit) installed. A quick search revealed: [SO]: Cannot open include file: 'stdio.h' - Visual Studio Community 2017 - C++ Error [MSDN.Social]: how to resolve fatal error C1083: Cannot open include file: 'stdio.h': No such file or directory My installation (Visual Studio Installer) looks like: Make sure you have installed: The build tools At least one Windows SDK version Try building a simple Hello World application ([MS.DevBlogs]: C++ Tutorial: Hello World) for 032bit (Win32) Some VStudio links that might help: [SO]: Visual Studio NMake build fails with: fatal error U1052: file 'win32.mak' not found (@CristiFati's answer) [SO]: LNK2005 Error in CLR Windows Form (@CristiFati's answer) [SO]: Calling Python function with parametrs from C++ project (Visual Studio) (@CristiFati's answer) [SO]: How to include OpenSSL in Visual Studio (@CristiFati's answer) Once the build works, try again the extension If any of the 2 previous steps still fails, you might think of repairing (or even reinstalling - as a last resort) your VStudio installation 2. Setup Since there are files not provided for a MCVE ([SO]: How to create a Minimal, Reproducible Example (reprex (mcve))), I'm going to use some from [SO]: How to create python C++ extension with submodule that can be imported (@CristiFati's answer) (latest one that I worked on involving extension modules). I also want to point out setup.py deprecation ([SO]: 'setup.py install is deprecated' warning shows up every time I open a terminal in VSCode), but I'm not going to insist on that. 
setup.py: #!/usr/bin env python import os from setuptools import Extension, setup # @TODO - cfati: Use sources from the other SO question SOURCE_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "q076222409") # Define the extension module extension_mod = Extension("parent", sources=[os.path.join(SOURCE_DIR, "custom.cc")]) # Define the setup parameters setup( name="parent", version="1.0", description="A C++ extension module for Python.", ext_modules=[extension_mod], ) I have 2 separate dirs for 064bit (pc064) and 032bit (pc032) builds (1) (you'll see later why). 3. Build for pc064 This is native (or direct) build, as (CPU and) default Python installation (that I'm going to use) is pc064. Output: [cfati@CFATI-5510-0:e:\Work\Dev\StackExchange\StackOverflow\q076249589\pc064]> sopr.bat ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [prompt]> [prompt]> dir /b [prompt]> [prompt]> dir /b .. pc032 pc064 setup.py [prompt]> [prompt]> :: Build for pc064 [prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts\python.exe" ..\setup.py build_ext --inplace running build_ext building 'parent' extension creating build creating build\temp.win-amd64-cpython-310 creating build\temp.win-amd64-cpython-310\Release creating build\temp.win-amd64-cpython-310\Release\Work creating build\temp.win-amd64-cpython-310\Release\Work\Dev creating build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange creating build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow creating build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409 C:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Ie:\Work\Dev\VEnvs\py_pc064_03.10_test0\include -Ic:\Install\pc064\Python\Python\03.10\include -Ic:\Install\pc064\Python\Python\03.10\Include -IC:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\include -IC:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\ATLMFC\include -IC:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Auxiliary\VS\include "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /EHsc /Tpe:\Work\Dev\StackExchange\StackOverflow\q076 222409\custom.cc /Fobuild\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\custom.obj custom.cc creating e:\Work\Dev\StackExchange\StackOverflow\q076249589\pc064\build\lib.win-amd64-cpython-310 C:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:e:\Work\Dev\VEnvs\py_pc064_03.10_test0\libs /LIBPATH:c:\Install\pc064\Python\Python\03.10\libs /LIBPATH:c:\Install\pc064\Python\Python\03.10 /LIBPATH:e:\Work\Dev\VEnvs\py_pc064_03.10_test0\PCbuild\amd64 /LIBPATH:C:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\ATLMFC\lib\x64 /LIBPATH:C:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\lib\x64 "/LIBPATH:C:\Program Files (x86)\Windows 
Kits\NETFXSDK\4.8\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.22000.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\\lib\10.0.22000.0\\um\x64" /EXPORT:PyInit_parent build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\custom.obj /OUT:build\lib.win-amd64-cpython-310\parent.cp310-win_amd64.pyd /IMPLIB:build\temp.win-am d64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\parent.cp310-win_amd64.lib Creating library build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\parent.cp310-win_amd64.lib and object build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\parent.cp310-win_amd64.exp Generating code Finished generating code copying build\lib.win-amd64-cpython-310\parent.cp310-win_amd64.pyd -> [prompt]> [prompt]> dir /b build parent.cp310-win_amd64.pyd [prompt]> [prompt]> :: Try importing in Python pc064 [prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts\python.exe" -c "import sys;print(sys.version);from parent.child import hello;print(hello());print(\"Done.\n\")" 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] Hi, World! Done. So far, so good. 4. Build for pc032 This is cross build (as host and target architectures don't match). According to [Python.Docs]: Creating Built Distributions - Cross-compiling on Windows (emphasis is mine): To cross-compile, you must download the Python source code and cross-compile Python itself for the platform you are targeting - it is not possible from a binary installation of Python (as the .lib etc file for other platforms are not included). Although it's old, and the other way around it also applies (partially) to our situation. As explained in the (build related) URLs above, when building for Win, the linker needs the .lib files (${PYTHONCORE}.lib in our case) for the target architecture, so they are required to exist on the host, and also setup.py (SetupTools) needs to find them. Output (I'll reuse this console): [cfati@CFATI-5510-0:e:\Work\Dev\StackExchange\StackOverflow\q076249589\pc032]> sopr.bat ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [prompt]> [prompt]> dir /b [prompt]> [prompt]> dir /b .. 
pc032 pc064 setup.py [prompt]> [prompt]> :: Build for pc032 [prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts\python.exe" ..\setup.py build_ext --inplace --plat-name win32 -L"e:\Work\Dev\VEnvs\py_pc032_03.10_test0\libs" running build_ext building 'parent' extension creating build creating build\temp.win-amd64-cpython-310 creating build\temp.win-amd64-cpython-310\Release creating build\temp.win-amd64-cpython-310\Release\Work creating build\temp.win-amd64-cpython-310\Release\Work\Dev creating build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange creating build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow creating build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409 C:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\bin\HostX86\x86\cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Ie:\Work\Dev\VEnvs\py_pc064_03.10_test0\include -Ic:\Install\pc064\Python\Python\03.10\include -Ic:\Install\pc064\Python\Python\03.10\Include -IC:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\include -IC:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\ATLMFC\include -IC:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Auxiliary\VS\include "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /EHsc /Tpe:\Work\Dev\StackExchange\StackOverflow\q076 222409\custom.cc /Fobuild\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\custom.obj custom.cc creating e:\Work\Dev\StackExchange\StackOverflow\q076249589\pc032\build\lib.win-amd64-cpython-310 C:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\bin\HostX86\x86\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:e:\Work\Dev\VEnvs\py_pc032_03.10_test0\libs /LIBPATH:e:\Work\Dev\VEnvs\py_pc064_03.10_test0\libs /LIBPATH:c:\Install\pc064\Python\Python\03.10\libs /LIBPATH:c:\Install\pc064\Python\Python\03.10 /LIBPATH:e:\Work\Dev\VEnvs\py_pc064_03.10_test0\PCbuild\win32 /LIBPATH:C:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\ATLMFC\lib\x86 /LIBPATH:C:\Install\pc064\Microsoft\VisualStudioCommunity\2022\VC\Tools\MSVC\14.35.32215\lib\x86 "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\lib\um\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.22000.0\ucrt\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\\lib\10.0.22000.0\\um\x86" /EXPORT:PyInit_parent build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\custom.obj /OUT:build\lib.win-amd64-cpython-310 \parent.cp310-win_amd64.pyd /IMPLIB:build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\parent.cp310-win_amd64.lib Creating library build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\parent.cp310-win_amd64.lib and object build\temp.win-amd64-cpython-310\Release\Work\Dev\StackExchange\StackOverflow\q076222409\parent.cp310-win_amd64.exp Generating code Finished generating code copying build\lib.win-amd64-cpython-310\parent.cp310-win_amd64.pyd -> 
[prompt]> [prompt]> dir /b build parent.cp310-win_amd64.pyd [prompt]> [prompt]> :: Success! But its name doesn't seem right. Try importing in Python pc032 [prompt]> "e:\Work\Dev\VEnvs\py_pc032_03.10_test0\Scripts\python.exe" -c "import sys;print(sys.version);from parent.child import hello;print(hello());print(\"Done.\n\")" 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 19:43:38) [MSC v.1934 32 bit (Intel)] Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'parent' [prompt]> [prompt]> :: Try importing in Python pc064 (this also seems wrong) [prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts\python.exe" -c "import sys;print(sys.version);from parent.child import hello;print(hello());print(\"Done.\n\")" 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: DLL load failed while importing parent: %1 is not a valid Win32 application. Notes: The .pyd name is the same as for pc064 case, which is not correct (this is why I used 2 dirs (check note #1.), otherwise the pc064 .pyd would have been overwritten). Check: [SO]: What does version name 'cp27' or 'cp35' mean in Python? (@WayneWerner's answer) for naming details [SO]: How do I determine if my python shell is executing in 32bit or 64bit mode on OS X? (@CristiFati's answer) [SO]: Python Ctypes - loading dll throws OSError: [WinError 193] %1 is not a valid Win32 application (@CristiFati's answer) for the encountered error Things are definitely wrong, could be a SetupTools bug (maybe fixed in later versions), this might not be the case on your side. Simplest way is to rename the .pyd (there might also be a setup.py argument?). Output (continued): [prompt]> ren parent.cp310-win_amd64.pyd parent.cp310-win32.pyd [prompt]> [prompt]> dir /b build parent.cp310-win32.pyd [prompt]> [prompt]> :: Try importing in Python pc032 [prompt]> "e:\Work\Dev\VEnvs\py_pc032_03.10_test0\Scripts\python.exe" -c "import sys;print(sys.version);from parent.child import hello;print(hello());print(\"Done.\n\")" 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 19:43:38) [MSC v.1934 32 bit (Intel)] Hi, World! Done. Worked like a charm! This is the most generic way of doing things, and it should work for any target build (pc032, aarch64, arm32), once the cross build tools and the appropriate .lib files are present on the host machine. But, if aiming for pc032 (win32) only, there's a shortcut: Download and install the pc032 Python (which runs on pc064 machines) Use it (like in #3.) to build natively | 3 | 1 |
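A small helper (not part of the answer above) makes the wrongly named .pyd easier to catch before importing: it prints the bitness of the running interpreter and reads the PE COFF machine field of a built extension, so a win32 build hiding under a win_amd64 filename is spotted immediately.

import struct
import sys

print(f"{struct.calcsize('P') * 8}-bit interpreter:", sys.version)

def pyd_machine(path):
    # Target architecture of a Windows .pyd/.dll, read from its PE header.
    with open(path, "rb") as f:
        f.seek(0x3C)                           # e_lfanew: offset of the PE signature
        pe_offset = struct.unpack("<I", f.read(4))[0]
        f.seek(pe_offset + 4)                  # skip "PE\0\0"; Machine is the first COFF field
        machine = struct.unpack("<H", f.read(2))[0]
    return {0x014C: "x86 (win32)", 0x8664: "x64 (amd64)"}.get(machine, hex(machine))

print(pyd_machine("parent.cp310-win32.pyd"))  # path taken from the build output above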
76,266,695 | 2023-5-16 | https://stackoverflow.com/questions/76266695/cant-install-pyqt5-using-pip-on-alpine-docker | Here is my Dockerfile: FROM python:3.11-alpine AS app RUN apk update && apk add make automake gcc g++ subversion python3-dev gfortran openblas-dev RUN pip install --upgrade pip WORKDIR /srv When I connect to my container and I launch: pip install pyqt5 I got error: $ pip install pyqt5 Collecting pyqt5 Using cached PyQt5-5.15.9.tar.gz (3.2 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [25 lines of output] Traceback (most recent call last): File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module> main() File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 152, in prepare_metadata_for_build_wheel whl_basename = backend.build_wheel(metadata_directory, config_settings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/sipbuild/api.py", line 46, in build_wheel project = AbstractProject.bootstrap('wheel', ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/sipbuild/abstract_project.py", line 87, in bootstrap project.setup(pyproject, tool, tool_description) File "/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/sipbuild/project.py", line 586, in setup self.apply_user_defaults(tool) File "/tmp/pip-install-p2ogfk1p/pyqt5_97a9414aa7ba410f9715856d348d62b4/project.py", line 68, in apply_user_defaults super().apply_user_defaults(tool) File "/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/pyqtbuild/project.py", line 70, in apply_user_defaults super().apply_user_defaults(tool) File "/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/sipbuild/project.py", line 237, in apply_user_defaults self.builder.apply_user_defaults(tool) File "/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/pyqtbuild/builder.py", line 69, in apply_user_defaults raise PyProjectOptionException('qmake', sipbuild.pyproject.PyProjectOptionException [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. How to solve this ? | The PyQt5 Pypi project requires that qmake can be found (emphasis mine): pip will also build and install the bindings from the sdist package but Qt’s qmake tool must be on PATH. This can be done by installing e.g. qt5-qtbase-dev, potentially together with other packages. Then the qmake command can be found in the path. (If not, it can be added this way: export PATH=/usr/lib/qt5/bin:$PATH) So, this should work on the docker container: apk add qt5-qtbase-dev pip install --no-cache-dir pyqt5 (--no-cache-dir can save memory usage. But the installation is still quite long and memory intensive.) 
Honestly, I haven't managed to install this on my computer with 32GB RAM. The installation step is quite heavy under Alpine Docker and easily leads to OOM. Sadly, there are no usable pre-compiled wheels available for Alpine (at least not on PyPI): https://pypi.org/project/PyQt5/#files pip install PyQt5-5.15.9-cp37-abi3-manylinux_2_17_x86_64.whl ERROR: PyQt5-5.15.9-cp37-abi3-manylinux_2_17_x86_64.whl is not a supported wheel on this platform. If you do not find wheel files for Alpine from other locations and do not have enough RAM to build it yourself, I simply recommend not using Alpine Linux. Installation seems much simpler under Ubuntu and other distributions such as Debian Bullseye: FROM python:3.11 AS app RUN apt update && apt install -y make automake gcc g++ subversion python3-dev gfortran libopenblas-dev RUN pip install --upgrade pip WORKDIR /srv Then pip install pyqt5 is no problem. Remark: I am not an expert on qmake, so it is possible that there are better ways than installing qt5-qtbase-dev. But at least you will end up having qmake available in the PATH. | 3 | 2
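Once a build does succeed, a short import check inside the container (a small sketch, not part of the answer above) confirms that the bindings load and reports which Qt they were built against:

from PyQt5.QtCore import PYQT_VERSION_STR, QT_VERSION_STR

print("PyQt5", PYQT_VERSION_STR, "built against Qt", QT_VERSION_STR)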
76,217,754 | 2023-5-10 | https://stackoverflow.com/questions/76217754/how-to-process-multiple-input-yaml-json-from-s3-using-nextflow-dsl2 | I need to process over 1k samples with the nextflow (dsl2) pipeline in aws batch. current version of the workflow process single input per run. I'm looking workflow syntax (map tuple to iterate) to process multiple inputs to run in parralel. The inputs should be in json or yaml format, path to the input files are unique to each sample. To preserve the input file path "s3://..." I used .fromPath in channel. Following is my single sample input config input.yaml (-parms-file) id: HLA1001 bam: s3://HLA/data/HLA1001.bam vcf: s3://HLA/data/HLA1001.vcf.gz Workflow to run single sample input process samtools_stats { tag "${id}" publishDir "${params.publishdir}/${id}/samtools", mode: "copy" input: path bam output: path "${id}.stats" script: """ samtools stats ${bam} > ${id}.stats """ } process mosdepth_bam { tag "${id}" publishDir "${params.publishdir}/${id}/mosdepth", mode: "copy" input: path bam path bam_idx output: path "${id}.regions.bed.gz" script: """ mosdepth --no-per-base --by 1000 --mapq 20 --threads 4 ${id} ${bam} """ } process mosdepth_cram { tag "${id}" publishDir "${params.publishdir}/${id}/mosdepth", mode: "copy" input: path bam path bam_idx path reference path reference_idx output: path "${id}.regions.bed.gz" script: """ mosdepth --no-per-base --by 1000 --mapq 20 --threads 4 --fasta ${reference} ${id} ${bam} """ } process bcftools_stats { tag "${id}" publishDir "${params.publishdir}/${id}/bcftools", mode: "copy" input: path vcf path vcf_idx output: path "*" script: """ bcftools stats -f PASS ${vcf} > ${id}.pass.stats """ } process multiqc { tag "${id}" publishDir "${params.publishdir}/${id}/multiqc", mode: "copy" input: path "*" output: path "multiqc_data/*", emit: multiqc_ch script: """ multiqc . 
--data-format json --enable-npm-plugin """ } process compile_metrics { tag "${id}" publishDir "${params.publishdir}/${id}", mode: "copy" input: path multiqc output: path "${params.id}.metrics.json", emit: compile_metrics_out script: """ # parse and calculate all the metrics in the multiqc output to compile compile_metrics.py \ --multiqc_json multiqc_data.json \ --output_json ${params.id}.metrics.json \ --biosample_id ${params.id} """ } /* ---------------------------------------------------------------------- WORKFLOW --------------------------------------------------------------------- */ id = params.id aln_file = file ( params.bam ) aln_file_type = aln_file.getExtension() vcf_file = ( params.vcf ) vcf_idx = channel.fromPath(params.vcf + ".tbi", checkIfExists: true) if (aln_file_type == "bam") { cbam = channel.fromPath(params.bam, checkIfExists: true) cbam_idx = channel.fromPath(params.bam + ".bai", checkIfExists: true) } else if (aln_file_type == "cram") { cbam = channel.fromPath(params.bam, checkIfExists: true) cbam_idx = channel.fromPath(params.bam + ".crai", checkIfExists: true) } reference = channel.fromPath(params.reference, checkIfExists: true) reference_idx = channel.fromPath(params.reference + ".fai", checkIfExists: true) // main workflow { if (aln_file_type == "bam") { samtools_stats( bam ) mosdepth_bam( bam, bam_idx ) bcftools_stats ( vcf, vcf_idx ) multiqc( samtools_stats.out.mix( mosdepth_bam.out ).collect() ) compile_metrics(multiqc.out) } else if (aln_file_type == "cram") { samtools_stats( bam ) mosdepth_cram( bam, bam_idx, reference, reference_idx ) bcftools_stats ( vcf, vcf_idx ) multiqc( samtools_stats.out.mix( mosdepth_cram.out ).collect() ) compile_metrics(multiqc.out) } } I want to modify the workflow to run for the following multi sample input in parellel samples: - id: HLA1001 bam: s3://HLA/data/udf/HLA1001.bam vcf: s3://HLA/data/udf/HLA1001.vcf.gz - id: NHLA1002 bam: s3://HLA/data/sdd/HLA1002.bam vcf: s3://HLA/data/sdd/HLA1002.vcf.gz - id: NHLA1003 bam: s3://HLA/data/klm/HLA1003.bam vcf: s3://HLA/data/klm/HLA1003.vcf.gz - id: NHLA2000 bam: s3://HLA/data/rcb/HLA2000.bam vcf: s3://HLA/data/rcb/HLA2000.vcf.gz The expected final output folder structure for the multiple samples.. s3://mybucket/results/HLA1001/ samtools/ mosdepth/ bcftools/ multiqc/ metrics/HLA1001.metrics.json s3://mybucket/results/HLA1002/ samtools/ mosdepth/ bcftools/ multiqc/ metrics/HLA1002.metrics.json The input of bam/cram, vcf and input of multiqc and compile_metrics all must fetch the same sample in every single process. Appreciate your help! Thanks Follwing the method answered by @steve.. 
Contents of main.nf: update include { compile_metrics } from './modules/compile_metrics' Channel .fromList( params.samples ) .map { it.biosample_id } .set { sample_ids } compile_metrics ( sample_ids, multiqc.out.json_data ) } Contents of modules/compile_metrics/main.nf: process compile_metrics { tag { sample_ids } input: val(sample_ids) path "multiqc_data.json" output: tuple val(sample_ids), path("${sample_ids}.metrics.json"), emit: compile_metrics_out script: """ compile_metrics.py \ --multiqc_json multiqc_data.json \ --output_json "${sample_ids}.metrics.json" \\ --biosample_id "${sample_ids}" \\ """ } Update main.nf: include { mosdepth_datamash } from './modules/mosdepth_datamash' autosomes_non_gap_regions = file( params.autosomes_non_gap_regions ) mosdepth_datamash( autosomes_non_gap_regions, mosdepth_bam.out.regions.mix( mosdepth_cram.out.regions ).collect() ) Update mosdepth_datamash: process mosdepth_datamash { tag { sample } input: path autosomes_non_gap_regions tuple val(sample), path(regions) output: tuple val(sample), path("${sample}.mosdepth.csv"), emit: coverage script: """ zcat "${sample}.regions.bed.gz" | bedtools intersect -a stdin -b ${autosomes_non_gap_regions} | gzip -9c > "${sample}.regions.autosomes_non_gap_n_bases.bed.gz" ..... } Update main.nf: fix - use queue channel instead of collect mosdepth_datamash( autosomes_non_gap_regions, mosdepth_bam.out.regions.mix( mosdepth_cram.out.regions ) ) Process verifybamid works with file instead of channel.fromPath vbi2_ud = file( params.vbi2_ud ) vbi2_bed = file( params.vbi2_bed ) vbi2_mean = file( params.vbi2_mean ) How to modify the channel to handle backward (previous version of workflow single sample input format) compatible of single sample input format which lacks sample key id: HLA1001 bam: s3://HLA/data/HLA1001.bam vcf: s3://HLA/data/HLA1001.vcf.gz Content of processing the input mani.nf: Channel .fromList( params.samples ) .branch { rec -> def aln_file = file( rec.bam ) bam: aln_file.extension == 'bam' def bam_idx = file( "${rec.bam}.bai" ) return tuple( rec.id, aln_file, bam_idx ) cram: aln_file.extension == 'cram' def cram_idx = file( "${rec.bam}.crai" ) return tuple( rec.id, aln_file, cram_idx ) } .set { aln_inputs } Channel .fromList( params.samples ) .map { rec -> def vcf_file = file( rec.vcf ) def vcf_idx = file( "${rec.vcf}.tbi" ) tuple( rec.id, vcf_file, vcf_idx ) } .set { vcf_inputs } Channel .fromList( params.samples ) .map { it.biosample_id } .set { sample_ids } Updated main.nf works well INPUT format A or B: A) biosample_id: NA12878-chr14 bam: s3://sample-qc/data/NA12878-chr14.bam B) samples: - biosample_id: NA12878-chr14 bam: s3://sample-qc/data/NA12878-chr14.bam --------------------------------------------------------------- workflow { .... .... params.samples = null Channel .fromList( params.samples ) .ifEmpty { ['biosample_id': params.biosample_id, 'bam': params.bam] } .branch { rec -> def aln_file = rec.bam ? file( rec.bam ) : null bam: rec.biosample_id && aln_file?.extension == 'bam' def bam_idx = file( "${rec.bam}.bai" ) return tuple( rec.biosample_id, aln_file, bam_idx ) cram: rec.biosample_id && aln_file?.extension == 'cram' def cram_idx = file( "${rec.bam}.crai" ) return tuple( rec.biosample_id, aln_file, cram_idx ) } .set { aln_inputs } Channel .fromList( params.samples ) .ifEmpty { ['biosample_id': params.biosample_id] } .map { it.biosample_id } .set { sample_ids } compile_metrics ( sample_ids, multiqc.out.json_data ) ... ... 
} Trying other way of not duplicate the code (the above code block .ifEmpty) in each process to parse the params.sample. eg two process here required to use params.sample INPUT format A or B: A) biosample_id: NA12878-chr14 bam: s3://sample-qc/data/NA12878-chr14.bam B) samples: - biosample_id: NA12878-chr14 bam: s3://sample-qc/data/NA12878-chr14.bam ------------------------------------------------------------- params.samples = '' // params.samples = null def get_samples_list() { if (params.samples) { return params.samples } else { return ['biosample_id': params.biosample_id, 'bam': params.bam] } } workflow { // params.samples = '' samples = get_samples_list() ... ... Channel .fromList( samples ) .branch { rec -> def aln_file = rec.bam ? file( rec.bam ) : null bam: rec.biosample_id && aln_file?.extension == 'bam' def bam_idx = file( "${rec.bam}.bai" ) return tuple( rec.biosample_id, aln_file, bam_idx ) cram: rec.biosample_id && aln_file?.extension == 'cram' def cram_idx = file( "${rec.bam}.crai" ) return tuple( rec.biosample_id, aln_file, cram_idx ) } .set { aln_inputs } samtools_stats_bam( aln_inputs.bam, [] ) samtools_stats_cram( aln_inputs.cram, ref_fasta ) Channel .fromList( params.samples ) .map { it.biosample_id } .set { sample_ids } compile_metrics ( sample_ids, multiqc.out.json_data ) } ERROR: Workflow execution stopped with the following message: Exit status : null Error message : Cannot invoke method branch() on null object Error report : Cannot invoke method branch() on null object ERROR ~ Cannot invoke method branch() on null object Calling the samples from channel to reuse works well and much better approach. Channel .fromList( params.samples ) .ifEmpty { ['biosample_id': params.biosample_id, 'bam': params.bam] } .set { samples } Channel samples.branch { rec -> .... Channel samples.map { it.biosample_id } How to read the input.yml as an argument --input_listusing .fromList and read as list compatible with code in the 'sample channel with minimal change? --input_list s3://mybucket/input.yaml instead of directly reading -params-file input.yaml as a list in the channel. eg. nextflow run main.nf \ -ansi-log false \ // -params-file input.yaml \ --input_list s3://mybucket/input.yaml -work 's3://mybucket/work' \ --publish_dir 's3://mybucket/results' \ --ref_fasta 's3://mybucket/ref.fa' Current code .... Channel // .fromPath( params.input_list ) .fromList( params.samples ) .ifEmpty { ['biosample_id': params.biosample_id, 'bam': params.bam] } .set { samples } input.yaml samples: - id: HLA1001 bam: s3://HLA/data/udf/HLA1001.bam - id: NHLA1002 bam: s3://HLA/data/sdd/HLA1002.bam | With channels it is possible to process any number of samples, including just one. Here's one way that use modules to handle both BAM and CRAM inputs. Note that each process below expects an input tuple where the first element is a sample name or key. To greatly assist with being able to merge channels downstream, we should also ensure we output tuples with the same sample name or key. 
The following is untested on AWS Batch, but it should at least get you started: Contents of main.nf: include { bcftools_stats } from './modules/bcftools' include { mosdepth as mosdepth_bam } from './modules/mosdepth' include { mosdepth as mosdepth_cram } from './modules/mosdepth' include { multiqc } from './modules/multiqc' include { samtools_stats as samtools_stats_bam } from './modules/samtools' include { samtools_stats as samtools_stats_cram } from './modules/samtools' include { mosdepth_datamash } from './modules/mosdepth_datamash' workflow { ref_fasta = file( params.ref_fasta ) autosomes_non_gap_regions = file( params.autosomes_non_gap_regions ) Channel .fromList( params.samples ) .ifEmpty { ['id': params.id, 'bam': params.bam] } .branch { rec -> def aln_file = rec.bam ? file( rec.bam ) : null bam: rec.id && aln_file?.extension == 'bam' def bam_idx = file( "${rec.bam}.bai" ) return tuple( rec.id, aln_file, bam_idx ) cram: rec.id && aln_file?.extension == 'cram' def cram_idx = file( "${rec.bam}.crai" ) return tuple( rec.id, aln_file, cram_idx ) } .set { aln_inputs } Channel .fromList( params.samples ) .ifEmpty { ['id': params.id, 'vcf': params.vcf] } .branch { rec -> def vcf_file = rec.vcf ? file( rec.vcf ) : null output: rec.id && vcf_file def vcf_idx = file( "${rec.vcf}.tbi" ) return tuple( rec.id, vcf_file, vcf_idx ) } .set { vcf_inputs } mosdepth_bam( aln_inputs.bam, [] ) mosdepth_cram( aln_inputs.cram, ref_fasta ) samtools_stats_bam( aln_inputs.bam, [] ) samtools_stats_cram( aln_inputs.cram, ref_fasta ) bcftools_stats( vcf_inputs ) Channel .empty() .mix( mosdepth_bam.out.regions ) .mix( mosdepth_cram.out.regions ) .set { mosdepth_regions } mosdepth_datamash( mosdepth_regions, autosomes_non_gap_regions ) Channel .empty() .mix( mosdepth_bam.out.dists ) .mix( mosdepth_bam.out.summary ) .mix( mosdepth_cram.out.dists ) .mix( mosdepth_cram.out.summary ) .mix( samtools_stats_bam.out ) .mix( samtools_stats_cram.out ) .mix( bcftools_stats.out ) .mix( mosdepth_datamash.out ) .map { sample, files -> files } .collect() .set { log_files } multiqc( log_files ) } Contents of modules/samtools/main.nf: process samtools_stats { tag { sample } input: tuple val(sample), path(bam), path(bai) path ref_fasta output: tuple val(sample), path("${sample}.stats") script: def reference = ref_fasta ? /--reference "${ref_fasta}"/ : '' """ samtools stats \\ ${reference} \\ "${bam}" \\ > "${sample}.stats" """ } Contents of modules/mosdepth/main.nf: process mosdepth { tag { sample } input: tuple val(sample), path(bam), path(bai) path ref_fasta output: tuple val(sample), path("*.regions.bed.gz"), emit: regions tuple val(sample), path("*.dist.txt"), emit: dists tuple val(sample), path("*.summary.txt"), emit: summary script: def fasta = ref_fasta ? /--fasta "${ref_fasta}"/ : '' """ mosdepth \\ --no-per-base \\ --by 1000 \\ --mapq 20 \\ --threads ${task.cpus} \\ ${fasta} \\ "${sample}" \\ "${bam}" """ } Contents of modules/bcftools/main.nf: process bcftools_stats { tag { sample } input: tuple val(sample), path(vcf), path(tbi) output: tuple val(sample), path("${sample}.pass.stats") """ bcftools stats \\ -f PASS \\ "${vcf}" \\ > "${sample}.pass.stats" """ } Contents of modules/multiqc/main.nf: process multiqc { input: path 'data/*' output: path "multiqc_report.html", emit: report path "multiqc_data", emit: data """ multiqc \\ --data-format json \\ . 
""" } Contents of modules/compile_metrics/main.nf: process compile_metrics { tag { sample_id } input: val sample_id path multiqc_json output: tuple val(sample_id), path("${sample_id}.metrics.json") """ compile_metrics.py \\ --multiqc_json "${multiqc_json}" \\ --output_json "${sample_id}.metrics.json" \\ --biosample_id "${sample_id}" """ } Contents of ./modules/mosdepth_datamash/main.nf: process mosdepth_datamash { tag { sample_id } input: tuple val(sample_id), path(regions_bed) path autosomes_non_gap_regions output: tuple val(sample_id), path("${sample_id}.mosdepth.csv") """ zcat -f "${regions_bed}" | bedtools intersect -a stdin -b "${autosomes_non_gap_regions}" | gzip -9 > "${sample_id}.regions.autosomes_non_gap_n_bases.bed.gz" # do something touch "${sample_id}.mosdepth.csv" """ } Contents of nextflow.config: plugins { id 'nf-amazon' } params { ref_fasta = null autosomes_non_gap_regions = null samples = null id = null bam = null vcf = null publish_dir = './results' publish_mode = 'copy' } process { executor = 'awsbatch' queue = 'test-queue' errorStrategy = 'retry' maxRetries = 3 withName: 'samtools_stats' { publishDir = [ path: "${params.publish_dir}/samtools", mode: params.publish_mode, ] } withName: 'bcftools_stats' { publishDir = [ path: "${params.publish_dir}/bcftools", mode: params.publish_mode, ] } withName: 'mosdepth' { cpus = 4 publishDir = [ path: "${params.publish_dir}/mosdepth", mode: params.publish_mode, ] } withName: 'multiqc' { publishDir = [ path: "${params.publish_dir}/multiqc", mode: params.publish_mode, ] } } aws { region = 'us-east-1' batch { cliPath = '/home/ec2-user/miniconda/bin/aws' } } And run using something like: $ nextflow run main.nf \ -ansi-log false \ -params-file input.yaml \ -work 's3://mybucket/work' \ --publish_dir 's3://mybucket/results' \ --ref_fasta 's3://mybucket/ref.fa' | 3 | 2 |
76,266,682 | 2023-5-16 | https://stackoverflow.com/questions/76266682/how-to-raise-custom-exceptions-in-a-fastapi-middleware | I have a simple FastAPI setup with a custom middleware class inherited from BaseHTTPMiddleware. Inside this middleware class, I need to terminate the execution flow under certain conditions. So, I created a custom exception class named CustomError and raised the exception. from fastapi import FastAPI, Request from starlette.middleware.base import ( BaseHTTPMiddleware, RequestResponseEndpoint ) from starlette.responses import JSONResponse, Response app = FastAPI() class CustomError(Exception): def __init__(self, message): self.message = message def __str__(self): return self.message class CustomMiddleware(BaseHTTPMiddleware): def execute_custom_logic(self, request: Request): raise CustomError("This is from `CustomMiddleware`") async def dispatch( self, request: Request, call_next: RequestResponseEndpoint, ) -> Response: self.execute_custom_logic(request=request) response = await call_next(request) return response app.add_middleware(CustomMiddleware) @app.exception_handler(CustomError) async def custom_exception_handler(request: Request, exc: CustomError): return JSONResponse( status_code=418, content={"message": exc.message}, ) @app.get(path="/") def root_api(): return {"message": "Hello World"} Unfortunately, FastAPI couldn't handle the CustomError even though I added custom_exception_handler(...) handler. Questions What is the FastAPI way to handle such situations? Why is my code not working? Versions FastAPI - 0.95.2 Python - 3.8.13 | The obvious way would be to raise an HTTPException; however, in a FastAPI/Starlette middleware, this wouldn't work, leading to Exception in ASGI application error on server side, and hence, an Internal Server Error would be returned to the client. Option 1 - Using middleware and try/except block You could use a try/except block to handle the custom exception raised in your custom function. Once the error is raised, you could return a JSONResponse (or custom Response, if you prefer), including the msg (and any other arguments) from CustomException, as well as the desired status_code (in the example given below, 500 status code is used, which could be replaced by the status code of your choice). Working Example from fastapi import FastAPI, Request, HTTPException from fastapi.responses import JSONResponse app = FastAPI() class CustomException(Exception): def __init__(self, msg: str): self.msg = msg def exec_custom_logic(request: Request): raise CustomException(msg='Something went wrong') @app.middleware("http") async def custom_middleware(request: Request, call_next): try: exec_custom_logic(request) except CustomException as e: return JSONResponse(status_code=500, content={'message': e.msg}) return await call_next(request) @app.get('/') async def main(request: Request): return 'OK' Option 2 - Using an APIRouter with a custom APIRoute class You could use an APIRouter with a custom APIRoute class, as demonstrated in Option 4 of this answer, and either handle the custom exception inside a try/except block (as shown in the previous option above), or raise an HTTPException directly. 
The advantages of this approach are: (1) you could raise an HTTPException directly, and hence, there is no need for using try/except blocks, and (2) you could add to the APIRouter only those routes that you would like to handle that way, using, for instance, the @router.get() decorator, while the rest of the routes could be added to the app instance, using, for example, the @app.get() decorator. Working Example from fastapi import FastAPI, APIRouter, Response, Request, HTTPException from fastapi.routing import APIRoute from typing import Callable def exec_custom_logic(request: Request): raise HTTPException(status_code=500, detail='Something went wrong') class CustomAPIRoute(APIRoute): def get_route_handler(self) -> Callable: original_route_handler = super().get_route_handler() async def custom_route_handler(request: Request) -> Response: exec_custom_logic(request) return await original_route_handler(request) return custom_route_handler app = FastAPI() router = APIRouter(route_class=CustomAPIRoute) @router.get('/') async def main(request: Request): return 'OK' app.include_router(router) | 3 | 3 |
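Either option can be exercised quickly with FastAPI's test client; this short check (not part of the original answer) assumes the app object from one of the snippets above lives in a hypothetical main.py, and expects a 500 response whose JSON key differs slightly between the two options.

from fastapi.testclient import TestClient

from main import app  # hypothetical module holding the app from Option 1 or Option 2

client = TestClient(app)

response = client.get("/")
print(response.status_code)  # 500 in both options
print(response.json())       # {'message': 'Something went wrong'} for Option 1,
                             # {'detail': 'Something went wrong'} for Option 2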
76,263,712 | 2023-5-16 | https://stackoverflow.com/questions/76263712/streamlit-change-button-size-in-python | I have a program with an expander next to a button, but the button is smaller than the expander, and it bothers me a little. Is it possible to make the button bigger/make the expander's height smaller in only python. I have found solutions online using css, but I am just using python for my code. Here is my code if anyone wants to look at it: instructionCol, buttonCol = st.columns([4,1]) with instructionCol: with st.expander("Instructions"): st.write("Pretend these are the instructions.") with buttonCol: st.button("\nRestart\n", on_click=board.reset) Here is also what it looks like: | You can use st.markdown(css, unsafe_allow_html=True) directly inside the Python code: import streamlit as st st.markdown( """ <style> button { height: auto; padding-top: 10px !important; padding-bottom: 10px !important; } </style> """, unsafe_allow_html=True, ) instructionCol, buttonCol = st.columns([4,1]) with instructionCol: with st.expander("Instructions"): st.write("Pretend these are the instructions.") with buttonCol: st.button("\nRestart\n", on_click=board.reset) It gives: Notice that the button is now aligned with the expander (because I made it a bit bigger). Since you have found css solutions already, you can replace the css I generated with the one you found. | 3 | 4 |
76,249,666 | 2023-5-14 | https://stackoverflow.com/questions/76249666/streamlit-with-poetry-is-not-found-when-run-my-docker-container | Solved According to this: https://stackoverflow.com/a/57886655/15537469 and to this: https://stackoverflow.com/a/74918400/15537469 I make a Multi-stage Docker build with Poetry and venv FROM python:3.10-buster as py-build RUN apt-get update && apt-get install -y \ build-essential \ curl \ software-properties-common \ && rm -rf /var/lib/apt/lists/* RUN curl -sSL https://install.python-poetry.org | POETRY_HOME=/opt/poetry python3 - COPY . /app WORKDIR /app ENV PATH=/opt/poetry/bin:$PATH RUN poetry config virtualenvs.in-project true && poetry install FROM python:3.10-slim-buster EXPOSE 8501 HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health COPY --from=py-build /app /app WORKDIR /app CMD ./.venv/bin/python ENTRYPOINT ["streamlit", "run", "mtcc/app.py", "--server.port=8501", "--server.address=0.0.0.0"] My build is going well but when I run my docker container I get this error: docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "streamlit": executable file not found in $PATH: unknown. I don't know if this is really useful but here is my file structure mtcc ├── mtcc │ └── app.py └── Dockerfile | With poetry config virtualenvs.in-project true poetry will create a virtual environment in the .venv directory and install all it's dependencies in it. The typical approach is to activate a virtual environment before using it. Typically this is done with the .venv/bin/activate (or with poetry run / poetry shell). E.g. you could do something like: CMD source /app/.venv/bin/activate && exec streamlit run ... Alternatively you can also activate a virtual environment by manipulating the path environment variable. If you prepend the path to the bin directory of the virtual env, the Docker environment will find all binaries: ENV VIRTUAL_ENV=/app/.venv ENV PATH="$VIRTUAL_ENV/bin:$PATH" CMD ["streamlit", "run", ...] You can find these methods, and extended explanations at https://pythonspeed.com/articles/activate-virtualenv-dockerfile/. | 3 | 3 |
76,262,205 | 2023-5-16 | https://stackoverflow.com/questions/76262205/error-upgrading-pip-errno2-no-such-file-or-directory | I am trying to upgrade pip by doing: pip install --upgrade pip And I got: Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: pip in /home/VICOMTECH/bdacosta/.local/lib/python3.8/site-packages (22.0.4) Collecting pip Using cached pip-23.1.2-py3-none-any.whl (2.1 MB) WARNING: No metadata found in /home/mypersonal/path/.local/lib/python3.8/site-packages WARNING: Error parsing requirements for pip: [Errno 2] No such file or directory: '/home/mypersonal/path/.local/lib/python3.8/site-packages/pip-22.0.4.dist-info/METADATA' Installing collected packages: pip Attempting uninstall: pip WARNING: No metadata found in /home/mypersonal/path/.local/lib/python3.8/site-packages Found existing installation: pip 22.0.4 ERROR: Cannot uninstall pip 22.0.4, RECORD file not found. You might be able to recover from this via: 'pip install --force-reinstall --no-deps pip==22.0.4'. I've tried to solve as shown in this similar question but I encountered the same error: pip install --force-reinstall --no-deps pip I still get: Defaulting to user installation because normal site-packages is not writeable Collecting pip Using cached pip-23.1.2-py3-none-any.whl (2.1 MB) WARNING: No metadata found in /home/mypersonal/path/.local/lib/python3.8/site-packages Installing collected packages: pip Attempting uninstall: pip WARNING: No metadata found in /home/mypersonal/path/.local/lib/python3.8/site-packages Found existing installation: pip 22.0.4 ERROR: Cannot uninstall pip 22.0.4, RECORD file not found. You might be able to recover from this via: 'pip install --force-reinstall --no-deps pip==22.0.4'. And if I go to the specified folder by doing: cd /home/mypersonal/path/.local/lib/python3.8/site-packages/pip-22.0.4.dist-info && ls I just find: REQUESTED This problem also does not allow me to install or upgrade any new package. Any suggestion about what I could do? | The problem was due because there were two versions of pip in the site-packages. In the folder /home/mypersonal/path/.local/lib/python3.8/site-packages/ weirdly there were: pip-23.1.2.dist-info (correct and up-to-date) pip-22.0.4.dist-info (root of the problem). I just removed the second one and everything started to work well again, including installing new packages. | 3 | 2 |
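A small diagnostic that is not part of the answer above, but may help spot this situation earlier: listing the pip-*.dist-info folders in the user site-packages makes leftover versions like pip-22.0.4.dist-info easy to see and remove. This is a hedged sketch; the glob pattern assumes the duplicates live in the user site-packages, as in the question.
# hedged diagnostic sketch: list pip's dist-info folders in the user site-packages
import glob
import site

for path in glob.glob(site.getusersitepackages() + "/pip-*.dist-info"):
    print(path)  # more than one entry here points at a stale/duplicate install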
76,257,827 | 2023-5-15 | https://stackoverflow.com/questions/76257827/sqlalchemy-isnt-batching-rows-using-server-side-cursor-via-yield-per | Following documentation, and the code snippet provided from https://docs.sqlalchemy.org/en/14/core/connections.html#streaming-with-a-fixed-buffer-via-yield-per (posted directly below), my query is not being batched into 50_000 rows. with engine.connect() as conn: result = conn.execution_options(yield_per=100).execute(text("select * from table")) for partition in result.partitions(): # partition is an iterable that will be at most 100 items for row in partition: print(f"{row}") Here I am writing the results of a SQL query to a CSV file: with engine.connect() as conn: # connect to database with open(csv_path, mode='w', newline='') as f: c = csv.writer(f, quoting=csv.QUOTE_MINIMAL) c.writerow(columns) # write column headers result = conn.execution_options(yield_per=50_000).execute(sql_string) # execute query that results in 110k rows, expecting batches of 50k rows for partition in result.partitions(): for row in tqdm.tqdm(partition): # each partition only has 1 row, when I expect 50k rows c.writerow(row) This sql_string results in 110k rows, so I expect 3 iterations of result.partitions(), however, I am seeing 110k iterations. I fear this is equivalent to DDOS-ing my own SQL Server database. I've also tried doing this without partitions(), and the same thing happens - no batching. with engine.connect() as conn: with open(csv_path, mode='w', newline='') as f: c = csv.writer(f, quoting=csv.QUOTE_MINIMAL) c.writerow(columns) for row in tqdm.tqdm(conn.execution_options(yield_per=50_000).execute(sql_string)): # I expect 3 iterations, but I get 110k. c.writerow(row) The server seems to handle this ok, I get 22k iterations / second, but I still wonder, am I DDOS-ing myself? I am using SQL Alchemy 1.4.42 (pip install --upgrade 'sqlalchemy==1.4.42') and Microsoft SQL Server 2019 - 15.0.X (X64) | The yield_per argument was having no effect in execute_options. In the example query snippets I posted in my question, fetchone() gets called N times, where N is the query result's row count. I discovered these fetchone() calls by getting the traceback from doing ctrl-C during an execution. That's a lot of server calls. Instead of concerning myself with yield_per, I was able to achieve pagination by using fetchmany(batch_size). with engine.connect() as conn: print('Connected!...') with open(csv_path, mode='w', newline='', encoding='utf-8') as f: c = csv.writer(f, quoting=csv.QUOTE_MINIMAL) c.writerow(columns) result = conn.execute(sql_string) counter = 1 while True: print(f'Querying batch {counter}...') rows = result.fetchmany(50_000) # batch size print(f'Contains {len(rows)} rows...') if not rows: break print(f'Writing batch {counter} to CSV...') for row in rows: c.writerow(row) counter += 1 result.close() I don't understand the reason for yield_per in SQLAlchemy 1.4.42, or why the the SQLAlchemy docs don't reference using fetchmany() instead. Oh well. | 3 | 2 |
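As a small refinement of the accepted pattern, the fetchmany() loop can be wrapped in a generator so the CSV-writing code stays flat. This is only a hedged sketch around the answer's approach, with the batch size exposed as a parameter.
# hedged sketch: wrap the answer's fetchmany() loop in a reusable generator
def iter_batches(result, batch_size=50_000):
    """Yield lists of rows from a SQLAlchemy result until it is exhausted."""
    while True:
        rows = result.fetchmany(batch_size)
        if not rows:
            break
        yield rows

# usage (inside the `with engine.connect() as conn:` block from the answer):
# result = conn.execute(sql_string)
# for batch in iter_batches(result):
#     c.writerows(batch)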
76,240,311 | 2023-5-12 | https://stackoverflow.com/questions/76240311/visualization-of-descending-count | I have a dataframe that looks like this: components non_breaking_count breaking_count 0 paths-modified 22956 8640 1 endpoints-modified 22155 8149 2 endpoints-added 8109 5354 3 paths-added 7375 4787 4 info-version 5680 857 5 components-schemas-added 2555 1597 6 info-description 1940 762 7 tags-added 1031 564 8 info-title 859 551 9 servers-added 711 332 10 info-termsOfService 701 161 11 servers-deleted 609 262 12 components-schemas-deleted 588 938 13 components-securitySchemes 301 112 14 tags-modified 297 128 15 components-securitySchemes 229 171 16 components-parameters-added 209 239 17 tags-deleted 199 541 18 components-responses-added 183 164 19 info-contact-name 153 96 20 security-added 140 121 21 info-contact-email 132 110 22 servers-modified 120 32 23 info-license-name 115 46 24 info-license-url 105 36 25 paths-deleted 97 4979 26 info-contact-url 93 51 27 components-securitySchemes-deleted 87 71 28 endpoints-deleted 84 5612 I have been looking for a good way to visualize this with their counts and annotation( for all the components column). What i am looking for is one graph for breaking and another for non breaking. I had a sliced bar chart in mind, with two bars for breaking and non breaking, however fitting in 28 values is a bit difficult, so I had to discard that option. I tried with nightingale chart in echarts as well, but the proportions are a mismatch somehow. Another way would be a treemap, but that and pie chart is something I wish to avoid. Does anyone have any suggestions on which type of graph I could use to effectively visualize this data. | I think there are several challenges here: amount of categories extreme value difference Bar plots (either horizontal or polar) handle numerous categories well but it's not always easy to deal with the extreme value difference. Additionally using a log axis with the bar plot could provide a useful visualization. A downside to the log axis is it may not be fully comprehended with some audiences. That said, I'd prefer to use a Bubble Chart in this case so that I can use the marker size to help illustrate the value's magnitude. A log axis can add further clarity. For convenience I've put the data in a text file and imported via pandas: import pandas as pd df = pd.read_csv("visualization-of-descending-count.txt", delim_whitespace=True) While the request specifies two seperate plots, a scatter could contain both sets of data and what I've gone with. Further condideration can be given in how to scale the buuble size... either scaled to the max of both groups or the max for each group (see comments in example). 
import plotly.graph_objects as go max_value = df[['breaking_count', 'non_breaking_count']].max().max() fig = go.Figure() fig.add_trace(go.Scatter(name="Non-Breaking", x=df.non_breaking_count, y=df.components, mode='markers', marker=dict( size=df.non_breaking_count, sizemode='area', # unified scale between two groups using max_value sizeref=2.*max_value/(40.**2), # unique scale for each group # sizeref=2.*max(df.non_breaking_count)/(40.**2), sizemin=3))) fig.add_trace(go.Scatter(name="Breaking", x=df.breaking_count, y=df.components, mode='markers', marker=dict( size=df.breaking_count, sizemode='area', # unified scale between two groups using max_value sizeref=2.*max_value/(40.**2), # unique scale for each group # sizeref=2.*max(df.breaking_count)/(40.**2), sizemin=3))) fig.update_layout(height=1000, width=1000,) fig.show() Switching to a log axis: fig.update_layout(xaxis=dict(type='log')) fig.show() Side note, components-securitySchemes has two entries. | 3 | 3 |
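The bar-plot-with-log-axis option mentioned at the start of the answer can also be sketched in a few lines of Plotly Express. This is a hedged alternative view, not part of the accepted answer, and it reuses the same df loaded above.
# hedged sketch of the log-axis horizontal bar alternative, reusing `df` from above
import plotly.express as px

long_df = df.melt(id_vars="components",
                  value_vars=["non_breaking_count", "breaking_count"],
                  var_name="kind", value_name="count")
fig = px.bar(long_df, x="count", y="components", color="kind",
             orientation="h", barmode="group", log_x=True,
             height=900, width=900)
fig.show()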
76,241,352 | 2023-5-13 | https://stackoverflow.com/questions/76241352/how-to-resolve-no-qt-platform-plugin-could-be-initialized-for-a-qt5-applicatio | I am working on a python application based on PyQt5. Everything was running good until I redo my PC and reinstall windows again because of some issue. I had copied my environment and after reinstalling Anaconda, I copied that environment again in env folder. Now the problem is that, when I run my code in PyCharm IDE, it displays and error dialog like this: I have tried multiple solutions such as: Solution 1: Change the QT Files Location Sometimes, a simple trick such as changing the QT files location is enough to get rid of the error. Here’s how you can do it: Launch File Explorer and open This PC. Using the Search field, search for pyqt5_tools. When Windows finishes the search, right-click the pyqt5_tools and head to Open folder location. Head to PyQt5 > Qt > bin. Copy the platforms folder. in my case bin is not available in this folder Make a new search for site-packages and open the folder. There, paste the platforms folder. Windows will warn you there’s already a folder with the same name. Click Replace the files in the destination. Solution 2: Run an SFC Scan There’s a chance Windows display the “Application failed because no QT platform plugin could be initialized” error due to corrupt system files. Fortunately, Windows has a built-in tool to help you fix the problem. In the Start menu search bar, search for command prompt and select Run as administrator. Then, run the sfc /scannow command line. Windows will scan and automatically replace any corrupted system file. Solutions Reference Nothing worked for me, I'm getting the same error all the time. How can I resolve the error displayed in the figure above? Any help will be appreciated, Thanks. | Updated Answer to this question Answer Reference To add PyQt5's library path to your PATH environment variable: 1. Open the Edit Environment Variables dialog. 2. Select the appropriate Path variable (either User or System variables). (Note: I have changed in system variables) 3. Edit the variable and add the PyQt5 library path as three entries at the top. 4. Add the following 3 entries, and make sure the entries are in the desired order and at the top of path variable. 5. Save the changes. C:\ProgramData\Anaconda3\envs\ann_tool\Lib\site-packages\PyQt5\Qt5\bin C:\ProgramData\Anaconda3\envs\ann_tool\Lib\site-packages\PyQt5\Qt5\plugins C:\ProgramData\Anaconda3\envs\ann_tool\Lib\site-packages\PyQt5\Qt5\plugins\platforms 6. Also, if you have a variable named: QT_PLUGIN_PATH, you want to put this line at the top of the list: C:\ProgramData\Anaconda3\envs\ann_tool\Lib\site-packages\PyQt5\Qt5\plugins (Note: In my case, Anaconda is installed for all users so, Anaconda will be found in C:\ProgramData) 7. Once you have saved the changes to your environment variables, it is necessary to Restart PyCharm in order for it to recognize and locate the updated libraries.. | 6 | 1 |
76,260,479 | 2023-5-16 | https://stackoverflow.com/questions/76260479/hex-to-decimal-conversion-with-nan-values | I am trying to convert a Pandas dataframe column data['hexValues'] consisting of hex values to decimal. data["decValues"] = data.apply(lambda row: int(str(row["hexValues"]), 16), axis=1) This works but the column hexValues look like this 0 FF 1 F 2 nan 3 nan 4 FFFF 5 FFFF 6 F 7 F 8 F 9 F 10 FF 11 nan 12 nan I want to skip nan values, so if the value is nan, no conversion will take place. I thought of using try-except-else but I am not sure how I can do it. Can you help me? | If nans are missing values use: data["decValues"] = [int(x, 16) if pd.notna(x) else np.nan for x in data["hexValues"]] More general solution with try-except: def test(x): try: return int(x, 16) except ValueError: return np.nan data["decValues"] = data["hexValues"].apply(test) | 3 | 1 |
76,250,688 | 2023-5-15 | https://stackoverflow.com/questions/76250688/webdriverexception-unhandled-inspector-error-no-node-with-given-id-found-at-a | I have written a Python script using Selenium and ChromeDriver to scrape data. The script navigates through several pages and clicks on various buttons to retrieve the data. However, I am encountering the following error: WebDriverException: Message: unknown error: unhandled inspector error: {"code":-32000,"message":"No node with given id found"} The error seems to occur at a specific point in the iteration, rather than being random. I have tried to troubleshoot the issue, but I am not sure what is causing it or how to fix it. I am using Python 3.10.5 and the Selenium library with ChromeDriver version 113.0.5672.63 on a Windows 10 machine. Any help with resolving this issue would be greatly appreciated. I'm still a beginner and this is my first time trying selenium. I have tried adding time.sleep(1) to make sure the web is loaded, check the visibility of the element, and the element is clickable but the problem still occurs. This is the current script that I have written url = '.../' path = Service(r'...\chromedriver_win32') options = Options() options.add_experimental_option("debuggerAddress", "localhost:9222") driver = webdriver.Chrome(service=path, options=options) driver.get(url) wait = WebDriverWait(driver, 10) def scrape_left_table(prob, kab, kec): data = [] rows = driver.find_elements(By.CSS_SELECTOR, 'div:nth-child(1) > table > tbody > tr') for row in rows: wilayah = row.find_element(By.CSS_SELECTOR, 'td.text-xs-left.wilayah-name > button').text persentasi = row.find_element(By.CSS_SELECTOR, 'td.text-xs-left.wilayah-name > span').text class_1= row.find_element(By.CSS_SELECTOR, 'td:nth-child(2)').text class_2= row.find_element(By.CSS_SELECTOR, 'td:nth-child(3)').text data.append([prob, kab, kec, wilayah, persentasi, class_1, class_2]) return data def scrape_right_table(prob, kab, kec): data = [] rows = driver.find_elements(By.CSS_SELECTOR, 'div:nth-child(2) > table > tbody > tr') for row in rows: wilayah = row.find_element(By.CSS_SELECTOR, 'td.text-xs-left.wilayah-name > button').text persentasi = row.find_element(By.CSS_SELECTOR, 'td.text-xs-left.wilayah-name > span').text class_1= row.find_element(By.CSS_SELECTOR, 'td:nth-child(2)').text class_2= row.find_element(By.CSS_SELECTOR, 'td:nth-child(3)').text data.append([prob, kab, kec, wilayah, persentasi, class_1, class_2]) return data data = [] provinsi = driver.find_elements(By.CSS_SELECTOR, 'div:nth-child(1) > table > tbody > tr') button = provinsi[1].find_element(By.TAG_NAME, 'button') pro = button.text wait.until(EC.element_to_be_clickable(button)).click() wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, 'div:nth-child(1) > table > tbody > tr'))) for i in [1,2]: time.sleep(1) kabupaten = driver.find_elements(By.CSS_SELECTOR, 'div:nth-child(' + str(i) + ') > table > tbody > tr') for kab in kabupaten: time.sleep(1) wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, 'div:nth-child(' + str(i) + ') > table > tbody > tr'))) kab_button = kab.find_element(By.TAG_NAME, 'button') kab_name = kab_button.text driver.execute_script("arguments[0].scrollIntoView();", kab_button) driver.execute_script("arguments[0].click();", kab_button) for i in [1,2]: time.sleep(1) kecamatan = driver.find_elements(By.CSS_SELECTOR, 'div:nth-child(' + str(i) + ') > table > tbody > tr') for kec in kecamatan: wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, 
'div:nth-child(' + str(i) + ') > table > tbody > tr'))) kec_button = kec.find_element(By.CSS_SELECTOR, 'td.text-xs-left.wilayah-name > button') kec_name = kec_button.text driver.execute_script("arguments[0].scrollIntoView();", kec_button) driver.execute_script("arguments[0].click();", kec_button) kelurahan = driver.find_elements(By.CSS_SELECTOR, 'div:nth-child(1) > table > tbody > tr') time.sleep(1) left_table = scrape_left_table(pro, kab_name, kec_name) right_table = scrape_right_table(pro, kab_name, kec_name) data += left_table + right_table back = driver.find_element(By.CSS_SELECTOR, '#app > div.sticky-top.bg-white > div > div:nth-child(2) > div > div > div > div:nth-child(5) > div > div > div.vs__actions > button') driver.execute_script("arguments[0].scrollIntoView();", back) driver.execute_script("arguments[0].click();", back) back = driver.find_element(By.CSS_SELECTOR, '#app > div.sticky-top.bg-white > div > div:nth-child(2) > div > div > div > div:nth-child(4) > div > div > div.vs__actions > button') driver.execute_script("arguments[0].scrollIntoView();", back) driver.execute_script("arguments[0].click();", back) After a certain iteration i.e. for provinsi[0] errors occur after 689 iterations for provinsi[1] errors occur after 35 iterations. WebDriverException Traceback (most recent call last) c:\...\web_scraping.ipynb Cell 4 in () 23 for kec in kecamatan: 24 wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, 'div:nth-child(' + str(i) + ') > table > tbody > tr'))) ---> 26 kec_button = kec.find_element(By.CSS_SELECTOR, 'td.text-xs-left.wilayah-name > button') 27 kec_name = kec_button.text 28 driver.execute_script("arguments[0].scrollIntoView();", kec_button) WebDriverException: Message: unknown error: unhandled inspector error: {"code":-32000,"message":"No node with given id found"} | This appears to be a defect with the recent ChromeDriver v113: https://bugs.chromium.org/p/chromedriver/issues/detail?id=4440 It appears currently this is the most likely suspect: it happens when the element being interacted with has been determined as stale by Chromedriver It looks like due to a defect, WebDriver is throwing a WebDriverException instead of StaleElementReferenceException (this is in C#) in such a case. | 5 | 10 |
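While the driver defect is open, a practical workaround is to re-find the element and retry the click when the node reference goes stale. This is a hedged sketch, not part of the accepted answer; the selector is passed in by the caller.
# hedged workaround sketch: re-locate and retry the click if the node went stale
import time
from selenium.common.exceptions import StaleElementReferenceException, WebDriverException
from selenium.webdriver.common.by import By

def click_with_retry(driver, css_selector, retries=3, delay=1):
    for attempt in range(retries):
        try:
            element = driver.find_element(By.CSS_SELECTOR, css_selector)
            driver.execute_script("arguments[0].scrollIntoView();", element)
            driver.execute_script("arguments[0].click();", element)
            return
        except (StaleElementReferenceException, WebDriverException):
            if attempt == retries - 1:
                raise
            time.sleep(delay)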
76,217,781 | 2023-5-10 | https://stackoverflow.com/questions/76217781/how-to-continue-training-with-huggingface-trainer | When training a model with Huggingface Trainer object, e.g. from https://www.kaggle.com/code/alvations/neural-plasticity-bert2bert-on-wmt14 from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments import os os.environ["WANDB_DISABLED"] = "true" batch_size = 2 # set training arguments - these params are not really tuned, feel free to change training_args = Seq2SeqTrainingArguments( output_dir="./", evaluation_strategy="steps", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_with_generate=True, logging_steps=2, # set to 1000 for full training save_steps=16, # set to 500 for full training eval_steps=4, # set to 8000 for full training warmup_steps=1, # set to 2000 for full training max_steps=16, # delete for full training # overwrite_output_dir=True, save_total_limit=1, #fp16=True, ) # instantiate trainer trainer = Seq2SeqTrainer( model=multibert, tokenizer=tokenizer, args=training_args, train_dataset=train_data, eval_dataset=val_data, ) trainer.train() When it finished training, it outputs: TrainOutput(global_step=16, training_loss=10.065429925918579, metrics={'train_runtime': 541.4209, 'train_samples_per_second': 0.059, 'train_steps_per_second': 0.03, 'total_flos': 19637939109888.0, 'train_loss': 10.065429925918579, 'epoch': 0.03}) If we want to continue training with more steps, e.g. max_steps=16 (from previous trainer.train() run) and another max_steps=160, do we do something like this? from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments import os os.environ["WANDB_DISABLED"] = "true" batch_size = 2 # set training arguments - these params are not really tuned, feel free to change training_args = Seq2SeqTrainingArguments( output_dir="./", evaluation_strategy="steps", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_with_generate=True, logging_steps=2, # set to 1000 for full training save_steps=16, # set to 500 for full training eval_steps=4, # set to 8000 for full training warmup_steps=1, # set to 2000 for full training max_steps=16, # delete for full training # overwrite_output_dir=True, save_total_limit=1, #fp16=True, ) # instantiate trainer trainer = Seq2SeqTrainer( model=multibert, tokenizer=tokenizer, args=training_args, train_dataset=train_data, eval_dataset=val_data, ) # First 16 steps. trainer.train() # set training arguments - these params are not really tuned, feel free to change training_args_2 = Seq2SeqTrainingArguments( output_dir="./", evaluation_strategy="steps", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_with_generate=True, logging_steps=2, # set to 1000 for full training save_steps=16, # set to 500 for full training eval_steps=4, # set to 8000 for full training warmup_steps=1, # set to 2000 for full training max_steps=160, # delete for full training # overwrite_output_dir=True, save_total_limit=1, #fp16=True, ) # instantiate trainer trainer = Seq2SeqTrainer( model=multibert, tokenizer=tokenizer, args=training_args_2, train_dataset=train_data, eval_dataset=val_data, ) # Continue training for 160 steps trainer.train() If the above is not the canonical way to continue training a model, how to continue training with HuggingFace Trainer? 
Edited: With transformers version 4.29.1, trying @maciej-skorski's answer with Seq2SeqTrainer, trainer = Seq2SeqTrainer( model=multibert, tokenizer=tokenizer, args=training_args, train_dataset=train_data, eval_dataset=val_data, resume_from_checkpoint=True ) It's throwing an error: TypeError: Seq2SeqTrainer.__init__() got an unexpected keyword argument 'resume_from_checkpoint' | If your use-case is about adjusting a somewhat-trained model, then it can be solved just the same way as fine-tuning. To this end, you pass the current model state along with a new parameter config to the Trainer object in the PyTorch API. I would say, this is canonical :-) The code you proposed matches the general fine-tuning pattern from the huggingface docs: trainer = Trainer( model, args=training_args, tokenizer=tokenizer, train_dataset=..., eval_dataset=..., ) You may also resume training from existing checkpoints: trainer.train(resume_from_checkpoint=True) | 5 | 5
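Regarding the edit: in transformers, resume_from_checkpoint is an argument of the train() method, not of the Seq2SeqTrainer constructor, which is what the TypeError is complaining about. A hedged sketch reusing the names from the question:
# resume_from_checkpoint belongs to .train(), not to the constructor;
# model/tokenizer/dataset names are the ones from the question
trainer = Seq2SeqTrainer(
    model=multibert,
    tokenizer=tokenizer,
    args=training_args_2,          # e.g. the config with max_steps=160
    train_dataset=train_data,
    eval_dataset=val_data,
)
trainer.train(resume_from_checkpoint=True)   # or a checkpoint path such as "./checkpoint-16"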
76,240,871 | 2023-5-13 | https://stackoverflow.com/questions/76240871/how-do-i-add-memory-to-retrievalqa-from-chain-type-or-how-do-i-add-a-custom-pr | How do i add memory to RetrievalQA.from_chain_type? or, how do I add a custom prompt to ConversationalRetrievalChain? For the past 2 weeks ive been trying to make a chatbot that can chat over documents (so not in just a semantic search/qa so with memory) but also with a custom prompt. I've tried every combination of all the chains and so far the closest I've gotten is ConversationalRetrievalChain, but without custom prompts, and RetrievalQA.from_chain_type but without memory | Here's a solution with ConversationalRetrievalChain, with memory and custom prompts, using the default 'stuff' chain type. There are two prompts that can be customized here. First, the prompt that condenses conversation history plus current user input (condense_question_prompt), and second, the prompt that instructs the Chain on how to return a final response to the user (which happens in the combine_docs_chain). from langchain import PromptTemplate # note that the input variables ('question', etc) are defaults, and can be changed condense_prompt = PromptTemplate.from_template( ('Do X with user input ({question}), and do Y with chat history ({chat_history}).') ) combine_docs_custom_prompt = PromptTemplate.from_template( ('Write a haiku about a dolphin.\n\n' 'Completely ignore any context, such as {context}, or the question ({question}).') ) Now we can initialize the ConversationalRetrievalChain with the custom prompts. from langchain.llms import OpenAI from langchain.chains import ConversationalRetrievalChain from langchain.memory import ConversationBufferMemory memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) chain = ConversationalRetrievalChain.from_llm( OpenAI(temperature=0), vectorstore.as_retriever(), # see below for vectorstore definition memory=memory, condense_question_prompt=condense_prompt, combine_docs_chain_kwargs=dict(prompt=combine_docs_custom_prompt) ) Note that this calls _load_stuff_chain() under the hood, which allows for an optional prompt kwarg (that's what we can customize). This is used to set the LLMChain , which then goes to initialize the StuffDocumentsChain. We can test the setup with a simple query to the vectorstore (see below for example vectorstore data) - you can see how the output is determined completely by the custom prompt: chain("What color is mentioned in the document about cats?")['answer'] #'\n\nDolphin leaps in sea\nGraceful and playful in blue\nJoyful in the waves' And memory is working correctly: chain.memory #ConversationBufferMemory(chat_memory=ChatMessageHistory(messages=[HumanMessage(content='What color is mentioned in the document about cats?', additional_kwargs={}), AIMessage(content='\n\nDolphin leaps in sea\nGraceful and playful in blue\nJoyful in the waves', additional_kwargs={})]), output_key=None, input_key=None, return_messages=True, human_prefix='Human', ai_prefix='AI', memory_key='chat_history') Example vectorstore dataset with ephemeral ChromaDB instance: from langchain.vectorstores import Chroma from langchain.document_loaders import DataFrameLoader from langchain.embeddings.openai import OpenAIEmbeddings data = { 'index': ['001', '002', '003'], 'text': [ 'title: cat friend\ni like cats and the color blue.', 'title: dog friend\ni like dogs and the smell of rain.', 'title: bird friend\ni like birds and the feel of sunshine.' 
] } df = pd.DataFrame(data) loader = DataFrameLoader(df, page_content_column="text") docs = loader.load() embeddings = OpenAIEmbeddings() vectorstore = Chroma.from_documents(docs, embeddings) | 17 | 5 |
76,228,791 | 2023-5-11 | https://stackoverflow.com/questions/76228791/conda-23-3-1-what-shall-be-the-content-of-build-sh | I found grayskull for creating meta.yml files and I found this github action for publishing on conda. However, said github action require a build.sh file and according to the official guide such a file must contain "...the text exactly as shown:" $PYTHON setup.py install # Python command to install the script. Nevertheless, newest project have pyproject.toml and I am not sure the above guide applies today since it is fairly old, apparently. As for conda 23.3.1, what shall be the content of said build.sh? EDIT: The question has been closed due to that it has been considered opinion-based. Unfortunately, it is not, as the following Theorem shows. Theorem: The question is NOT opinion based. Proof: By contradiction assume that the question is opinion based. Then, there should not exist any non-opinion based guide explaining what shall be included in build.sh. However, an official, non-opinion based guide exists and it is here, which contradicts the initial hypothesis. Hence, the question is not opinion-based. QED | I found a sort of answer triggered by the comment I received that pushed me in searching towards the right direction. I decided to share my learnings (that may not be 100% accurate) that I hope will give some insights on how the conda package machinery work. The way to go today (2023) is grayskull, that I mentioned also before. That tool generate meta.yml files starting from pypi packages (and not only from pypi, you can even use a sdist as a staring point, see grayskull docs). If your package is pure python code (plus some data eventually), then the meta.yml generated from grayskull contain all the information needed for building your conda package. You should edit such meta.yml file a bit though, for example you should edit the recipe-mainteiner field. Then you can run conda build and you should get your conda package. In reality, things are a bit more complex. In-fact, in order to make a conda package you need a recipe. What's that? A recipe is a folder that contains a meta.yml a build.sh and a bld.bat files at least, plus some optional files such as run_test.[py,pl,sh,bat] where you specify how your package shall be tested. A recipe can also include patches files and resources like icons, etc. But what is the purpose of these additional build.sh, etc. files? Well, if your package has more than python code but include e.g. additional code written in other languages, then you should explain how such code shall be processed. For example, if in addition to python code you have some C source code. In in the build.sh you specify (among other things) how such C code shall be compiled. No need to say that you cannot pre-compile it because then the compiled code would be platform specific. The same principle applies to tests: you may add a run_test.[py,pl,sh,bat] where you specify how you package shall be tested. However, for what I understood, if the build process of your package is only made of few lines and the tests are rather straightforward, then you can inline such info by using the Section build and tests of the meta.yml file instead of having separated build.sh, etc. files. Note that in the easiest case of pure python source code, as mentioned in a comment, you have that build is just {{ PYTHON }} -m pip install . -vv whereas test is just : import your_package which is exactly how grayskull fill up such fields in the meta.yml when you generate it from pypi. 
Now that such info is included in the meta.yml file, you don't need any build.sh, bld.bat or run_test files. Examples of recipes can be found, as already pointed out in a comment, here, whereas a guide for conda recipes can be found here - but still, it is from 2018. In summary, the great thing about conda, from what I understood, is that it can package stuff written in different languages. But there is more: it seems that conda handles dependencies better than other tools (again, from what I read, so take it with a grain of salt). However, there have been many complaints about its speed, and that is true. It is terribly slow in its default setting. However, you can install libmamba, which speeds things up drastically - and I can confirm that it's true. I hope that this summary is not too wrong and that it answers the initial question of what should be in the build.sh file. | 4 | 2
76,255,967 | 2023-5-15 | https://stackoverflow.com/questions/76255967/is-there-a-way-to-inherit-a-class-only-if-a-runtime-flag-is-true-in-python | I would like to have an existing class A inherit B only if a runtime flag is turned on. Is this possible? Usually for these cases, I either Just have A inherit B by default, and not use B if that flag is False Create another class A_mod, that inherits from A and B, and use this class when that flag is true. | You can use a conditional expression when specifying the parent class class A(B if flag else object): | 3 | 7 |
76,248,162 | 2023-5-14 | https://stackoverflow.com/questions/76248162/weird-time-series-plots-when-adding-the-dates-on-the-x-axis | I'm trying to plot a time series on python using plotly, here is the result without the dates on the x-axis : And when I add the date : Here is a view of my table : {'sasdate': {4: Timestamp('1959-01-09 00:00:00'), 5: Timestamp('1959-01-12 00:00:00'), 6: Timestamp('1960-01-03 00:00:00'), 7: Timestamp('1960-01-06 00:00:00'), 8: Timestamp('1960-01-09 00:00:00'), 9: Timestamp('1960-01-12 00:00:00'), 10: Timestamp('1961-01-03 00:00:00'), 11: Timestamp('1961-01-06 00:00:00'), 12: Timestamp('1961-01-09 00:00:00'), 13: Timestamp('1961-01-12 00:00:00')}, 'CDT': {4: 0.9633000000000003, 5: 0.35329999999999995, 6: 0.6134, 7: 1.2666999999999997, 8: 1.4733, 9: 1.5799999999999996, 10: 1.4367, 11: 1.4867, 12: 1.6766999999999999, 13: 1.5133}, 'crise': {4: 0, 5: 0, 6: 0, 7: 1, 8: 1, 9: 1, 10: 1, 11: 0, 12: 0, 13: 0}, 'prev': {4: 0.187232436694447, 5: 0.3105689948109355, 6: 0.2539228791308063, 7: 0.13916924600664243, 8: 0.11171277377207611, 9: 0.09915690331613247, 10: 0.11627115062260285, 11: 0.1100762567460945, 12: 0.08869839928964451, 13: 0.10687875779138178}} And here is the code of the plot : import plotly.express as px fig = px.line(test,x='sasdate', y="prev", labels = 'ligne') fig.update_layout(legend=dict( yanchor="top", y=0.99, xanchor="left", x=0.01 )) fig.update_layout(showlegend =True) fig.show() It seems the plots doesn't consider all the observations of the series. My 'date' features is on a datetime format, the problem is the same with matplotlib. Do you have suggestions ? | You dates are in the wrong format. They need to be YYYY-MM-DD rather than YYYY-DD-MM, e.g., swap '1959-01-09 00:00:00' to be '1959-09-01 00:00:00' as so on, and the plot will look as expected. If you don't want to do the conversion manually, you could use the datetime package and tell it the format of the date/time that you have, by doing something like: from datetime import datetime test["sasdate"] = { i: Timestamp(datetime.strptime(str(test["sasdate"][i]), "%Y-%d-%m %H:%M:%S")) for i in test["sasdate"] } If test is already a DataFrame, then you could do: from pandas import Series test["sasdate"] = Series( [ Timestamp(datetime.strptime(str(date), "%Y-%d-%m %H:%M:%S")) for date in test["sasdate"] ], index=test["sasdate"].index ) | 3 | 2 |
76,251,837 | 2023-5-15 | https://stackoverflow.com/questions/76251837/default-values-for-typeddict | Let's say I have the following TypedDict: class A(TypedDict): a: int b: int What is the best practice for setting default values for this class? I tried to add a constructor but it doesn't seem to work. class A(TypedDict): a: int b: int def __init__(self): TypedDict.__init__(self) a = 0 b = 1 EDIT: I don't want to use dataclass because I need to serialize and deserialize to JSON files and dataclasses have some problem with it. What do you think? | TypedDict is only for specifying that a dict follows a certain layout, not an actual class. You can of course use a TypedDict to create an instance of that specific layout, but it doesn't come with defaults. One possible solution is to add a factory method to the class. You could use this factory method instead to set defaults. from typing import TypedDict class A(TypedDict): a: int b: int @classmethod def create(cls, a: int = 0, b: int = 1) -> "A": return A(a=a, b=b) a = A.create(a=4) # {"a": 4, "b": 1} If having a dict is not a strict requirement, then @dataclass is a good fit for small objects with defaults. from dataclasses import dataclass @dataclass class A: a: int = 0 b: int = 1 If you need to create a dictionary from them, you can use asdict: from dataclasses import asdict a = A(a=4) asdict(a) # {"a": 4, "b": 1} | 8 | 12
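On the JSON concern from the question's edit: a hedged sketch showing that the dataclass version round-trips through JSON with asdict and keyword expansion, which may remove the blocker for that option.
# hedged sketch: dataclasses round-trip through JSON via asdict / keyword expansion
import json
from dataclasses import dataclass, asdict

@dataclass
class A:
    a: int = 0
    b: int = 1

payload = json.dumps(asdict(A(a=4)))   # '{"a": 4, "b": 1}'
restored = A(**json.loads(payload))    # A(a=4, b=1)
print(payload, restored)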
76,235,292 | 2023-5-12 | https://stackoverflow.com/questions/76235292/setting-tags-during-model-logging-mlflow | I am logging the model using mlflow.sklearn.log_model(model, "my-model") and I want to set tags to the model during logging, I checked that this method does not allow to set tags, there is a mlflow.set_tags() method but it is tagging the run not the model. Does anyone know how to tag the model during logging? Thank you! | When using mlflow.sklearn.log_model you work with the experiment registry which is run-focused so only experiments and runs can be described and tagged. If you want to set tags on models, you need to work with the model registry. The solution I would recommend is to register the model when logging using registered_model_name (there are more fine-grained ways, too) and use MLFlowClient API to set custom properties (like tags) of the already registered model. Here is a working example: import mlflow from mlflow.client import MlflowClient mlflow.set_tracking_uri('http://0.0.0.0:5000') experiment_name = 'test_mlflow' try: experiment_id = mlflow.create_experiment(experiment_name) except: experiment_id = mlflow.get_experiment_by_name(experiment_name).experiment_id from sklearn.linear_model import LogisticRegression from sklearn.datasets import load_iris from sklearn.metrics import accuracy_score with mlflow.start_run(experiment_id = experiment_id): # log performance and register the model X, y = load_iris(return_X_y=True) params = {"C": 0.1, "random_state": 42} mlflow.log_params(params) lr = LogisticRegression(**params).fit(X, y) y_pred = lr.predict(X) mlflow.log_metric("accuracy", accuracy_score(y, y_pred)) mlflow.sklearn.log_model(lr, artifact_path="models", registered_model_name='test-model' ) # set extra tags on the model client = MlflowClient(mlflow.get_tracking_uri()) model_info = client.get_latest_versions('test-model')[0] client.set_model_version_tag( name='test-model', version=model_info.version, key='task', value='regression' ) Here is the illustration See also this excellent documentation of MLFlow Client. | 3 | 3 |
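If the tag should live on the registered model itself rather than on a specific version, MlflowClient also exposes set_registered_model_tag. A short hedged sketch continuing the example above; it reuses the mlflow/MlflowClient imports from the answer, and the key/value shown are illustrative only.
# hedged follow-up to the example above: tag the registered model itself, not just one version
client = MlflowClient(mlflow.get_tracking_uri())
client.set_registered_model_tag(name='test-model', key='owner', value='data-team')  # example key/value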
76,246,578 | 2023-5-14 | https://stackoverflow.com/questions/76246578/module-numpy-has-no-attribute-warnings | I'm trying to reproduce this tutorial with my own data. I've a simple square grid of polygons: from shapely import wkt import pandas as pd import geopandas as gpd data_list = [ [0,51, wkt.loads("POLYGON ((-74816.7238 5017078.8988, -74716.7238 5017078.8988, -74716.7238 5016978.8988, -74816.7238 5016978.8988, -74816.7238 5017078.8988))")], [1,91, wkt.loads("POLYGON ((-74816.7238 5016978.8988, -74716.7238 5016978.8988, -74716.7238 5016878.8988, -74816.7238 5016878.8988, -74816.7238 5016978.8988))")], [2,88, wkt.loads("POLYGON ((-74816.7238 5016878.8988, -74716.7238 5016878.8988, -74716.7238 5016778.8988, -74816.7238 5016778.8988, -74816.7238 5016878.8988))")], [3,54, wkt.loads("POLYGON ((-74816.7238 5016778.8988, -74716.7238 5016778.8988, -74716.7238 5016678.8988, -74816.7238 5016678.8988, -74816.7238 5016778.8988))")], [4,51, wkt.loads("POLYGON ((-74816.7238 5016678.8988, -74716.7238 5016678.8988, -74716.7238 5016578.8988, -74816.7238 5016578.8988, -74816.7238 5016678.8988))")], ] df = pd.DataFrame(data_list, columns=["id", "data", "geometry"]) gdf = gpd.GeoDataFrame(df, geometry="geometry", crs=32633) I've translate GeoPandas GeoDataFrame to SpatialPandas Geodataframe: from spatialpandas import GeoDataFrame sp_gdf = GeoDataFrame(gdf) At this point I try to create a choropleth map according to this example: import datashader as ds canvas = ds.Canvas(plot_width=1000, plot_height=1000) agg = canvas.polygons(sp_gdf, 'geometry', agg=ds.mean('data')) But I'm facing in the error below: AttributeError Traceback (most recent call last) Cell In[7], line 4 1 import datashader as ds 3 canvas = ds.Canvas(plot_width=1000, plot_height=1000) ----> 4 agg = canvas.polygons(sp_gdf, 'geometry', agg=ds.mean('data')) 6 agg File ~/.cache/pypoetry/virtualenvs/drakonotebook-larABRfp-py3.10/lib/python3.10/site-packages/datashader/core.py:753, in Canvas.polygons(self, source, geometry, agg) 751 agg = any_rdn() 752 glyph = PolygonGeom(geometry) --> 753 return bypixel(source, self, glyph, agg) File ~/.cache/pypoetry/virtualenvs/drakonotebook-larABRfp-py3.10/lib/python3.10/site-packages/datashader/core.py:1258, in bypixel(source, canvas, glyph, agg, antialias) 1255 canvas.validate() 1257 # All-NaN objects (e.g. chunks of arrays with no data) are valid in Datashader -> 1258 with np.warnings.catch_warnings(): 1259 np.warnings.filterwarnings('ignore', r'All-NaN (slice|axis) encountered') 1260 return bypixel.pipeline(source, schema, canvas, glyph, agg, antialias=antialias) File ~/.cache/pypoetry/virtualenvs/drakonotebook-larABRfp-py3.10/lib/python3.10/site-packages/numpy/__init__.py:320, in __getattr__(attr) 317 from .testing import Tester 318 return Tester --> 320 raise AttributeError("module {!r} has no attribute " 321 "{!r}".format(__name__, attr)) AttributeError: module 'numpy' has no attribute 'warnings' I'm on Ubuntu 22.04, with Python 3.10 and the code above runs in a Jupyter Notebook. Below the version of libraries in use: shapely: 2.0.1 pandas: 2.0.1 geopandas: 0.12.2 spatialpandas: 0.4.7 datashader: 0.14.4 numpy: 1.24.3 Moreover the python environment is managed by Poetry 1.4.2 NB: this thread is complete unuseful. 
| It turns out that numpy.warnings is just a reference to the warnings built-in Python module, as I can see on my NumPy version: >>> import numpy as np >>> np.__version__ '1.21.5' >>> np.warnings <module 'warnings' from 'D:\\Anaconda3\\lib\\warnings.py'> So, one possible workaround for your problem may be adding that reference to the NumPy module in your script, manually, at runtime: import numpy as np import warnings np.warnings = warnings <YOUR SCRIPT HERE> However, as described in the thread pointed to by user @WarrenWeckesser, you may want to update the version of the datashader package, so that package will no longer try to access the np.warnings reference, which turns out to have been removed from numpy. | 5 | 8
76,249,640 | 2023-5-14 | https://stackoverflow.com/questions/76249640/python-import-error-undefined-symbol-for-custom-c-module | I've been experimenting with C modules in python and I've had no problems with compilation. However, when it comes to importing the module I receive this error message: >>> import _strrev Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: /path/to/my/module.so: undefined symbol: strrev The contents of the file are _strrev.c #include <Python.h> #include <string.h> static PyObject* _strrev(PyObject* self, PyObject *args) { PyObject* name; if (!PyArg_ParseTuple(args, "U", &name)) { return NULL; } PyObject *ret = strrev(*name); return ret; } static struct PyMethodDef methods[] = { {"strrev", (PyCFunction)_strrev, METH_VARARGS}, {NULL, NULL} }; static struct PyModuleDef module = { PyModuleDef_HEAD_INIT, "_strrev", NULL, -1, methods }; PyMODINIT_FUNC PyInit__strrev(void) { return PyModule_Create(&module); } setup.py from distutils.core import setup, Extension setup( name='strrev_lib', version='1', ext_modules=[Extension('_strrev', ['_strrev.c'])], ) When running nm -D on this object file, I get this output $ nm -D _strrev.cpython-39-x86_64-linux-gnu.so w __cxa_finalize@GLIBC_2.2.5 w __gmon_start__ w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable U PyArg_ParseTuple 00000000000011a0 T PyInit__strrev U PyModule_Create2 U __stack_chk_fail@GLIBC_2.4 U strrev I can see that strrev is here, so I don't understand why it says that the symbol in undefined. I've also tried using a different module and that works perfectly fine when attempting to import it. $ python3 Python 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import _hello >>> _hello.hello("Jared") 'hello Jared' The contents of the files are: _hello.c #include <Python.h> static PyObject* _hello(PyObject* self, PyObject *args) { PyObject* name; if (!PyArg_ParseTuple(args, "U", &name)) { return NULL; } PyObject *ret = PyUnicode_FromFormat("hello %U", name); return ret; } static struct PyMethodDef methods[] = { {"hello", (PyCFunction)_hello, METH_VARARGS}, {NULL, NULL} }; static struct PyModuleDef module = { PyModuleDef_HEAD_INIT, "_hello", NULL, -1, methods }; PyMODINIT_FUNC PyInit__hello(void) { return PyModule_Create(&module); } setup.py from setuptools import setup from setuptools import Extension setup( name='hello-lib', version='1', ext_modules=[Extension('_hello', ['_hello.c'])], ) Can anyone give me pointers as to what I'm doing wrong? Is it something with the compilation or with the code that I've written? Any help would be appreciated! | I can see that strrev is here, so I don't understand why it says that the symbol in undefined. Au contraire: you can see that strrev is not there -- U in nm output means undefined. The strrev appears to be a Windows thing, and you are not on Windows. There is no such symbol in GLIBC: $ nm -D /lib/x86_64-linux-gnu/libc.so.6 | grep strrev $ To fix this, you would need to implement strrev() yourself. | 3 | 1 |
76,246,837 | 2023-5-14 | https://stackoverflow.com/questions/76246837/how-do-i-drop-and-change-dtype-in-a-pipeline-with-sklearn | I have some scraped data that needs some cleaning. After the cleaning, I want to create a "numerical and categorical pipelines" inside a ColumnTransformer such as: categorical_cols = df.select_dtypes(include='object').columns numerical_cols = df.select_dtypes(exclude='object').columns num_pipeline = Pipeline( steps=[ ('scaler', StandardScaler()) ] ) cat_pipeline = Pipeline( steps=[ ('onehotencoder', OneHotEncoder(handle_unknown='ignore')) ] ) preprocessor = ColumnTransformer([ ('num_pipeline', num_pipeline, numerical_cols), ('cat_pipeline', cat_pipeline, categorical_cols) ]) My idea was to create a transformer class Transformer(BaseEstimator, TransformerMixin): and create a pipeline with it. That transformer would include all the cleaning steps. My problem is that some of the steps change the dtype from object to integer mostly so I'm thinking that instead of defining the categorical_cols and numerical_cols with dtypes, instead, do it with column names. Would that be the correct approach? The idea would be to automate the process so I can train the model with new data every time. | Instead of making a list of columns beforehand you can use scikit-learn's make_column_selector to dynamically specify the columns that each transformer will be applied to. In your example: from sklearn.compose import make_column_selector as selector preprocessor = ColumnTransformer([ ('num_pipeline', num_pipeline, selector(dtype_exclude=object)), ('cat_pipeline', cat_pipeline, selector(dtype_include=object)) ]) Under the hood it uses pandas' select_dtypes for the type selection. You can pass a regex and select based on column name as well. I also recommend you checking out make_column_transformer for more control over the pipeline. | 3 | 3 |
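To finish the automation idea, the preprocessor can be dropped into a full Pipeline with an estimator, so retraining on new data is a single fit call. This is a hedged sketch: the estimator choice and the X_train/y_train variables are placeholders, not part of the original question.
# hedged sketch: plug the ColumnTransformer into an end-to-end Pipeline;
# the estimator and X_train/y_train are placeholders for illustration
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

model = Pipeline(steps=[
    ("preprocessor", preprocessor),
    ("classifier", LogisticRegression(max_iter=1000)),
])
# model.fit(X_train, y_train)
# model.predict(X_new)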
76,244,436 | 2023-5-13 | https://stackoverflow.com/questions/76244436/regular-expression-matching-either-an-empty-string-or-a-string-in-a-given-set | I would like to match either "direction" (">" or "<") from a string like "->", "<==", "...". When the string contains no direction, I want to match "". More precisely, the equivalent Python expression would be: ">" if ">" in s else ("<" if "<" in s else "") I first came up with this simple regular expression: re.search(r"[<>]|", s)[0] ... but it evaluates to "" as soon as the direction is not in the first position. How would you do that? | A simpler version of Andrej Kesely's solution: re.search('<|>|$', s)[0] | 3 | 4 |
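A quick check of the accepted pattern on a few sample strings, including the empty-match case; only a hedged sketch to illustrate the behaviour.
# quick check of the accepted pattern, including the "no direction" case that matches ""
import re

for s in ["->", "<==", "...", ""]:
    print(repr(s), "->", repr(re.search('<|>|$', s)[0]))
# '->' gives '>', '<==' gives '<', '...' and '' give ''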
76,215,725 | 2023-5-10 | https://stackoverflow.com/questions/76215725/python-script-file-missing-in-singularity-image | In AWS, created a docker image with a python script to print a string(basicprint.py) docker file: FROM python COPY ./basicprint.py ./ CMD ["python", "basicprint.py"] It works fine then saved docker image as .tgz file. copy that .tgz file in to my local. I converted docker image(.tgz) into singularity image by using singularity build sing_image.sif docker-archive://filename.tgz it was successfully created sing_image.sif singularity run sing_image.sif It throws an error : basicprint.py: No such file or directory Any suggestions on correct method of conversion without missing the file. | Note that you put your file under the very root. singularity removed your file during conversion and cleaning. Similar issues have been reported and are, in general, expected when working with containers. This slightly modified image FROM python:3.11-slim COPY ./basicprint.py ./ CMD ["ls"] demonstrates that the file lands under the linux root: (base) maciej.skorski@kaggle-cpu-maciej:~/docker-debug$ docker build . --tag test-sing (base) maciej.skorski@kaggle-cpu-maciej:~/docker-debug$ docker run -it test-sing root@ed8e3545a00b:/# ls basicprint.py bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var Now, let me demonstrate the working version. Importantly, follow best Docker practices and use the working dir # Dockerfile FROM python:3.11-slim-bullseye RUN mkdir -p /usr/src WORKDIR /usr/src COPY ./basicprint.py /usr/src/ CMD ["python", "/usr/src/basicprint.py"] where the sample Python script is # basicprint.py print('Welcome to my Docker!') Then I build and test on Debian GNU/Linux 10 with Docker 20.10.17 and Singularity 3.11: (base) maciej.skorski@kaggle-cpu-maciej:~/docker-debug$ singularity --version singularity-ce version 3.11.0+277-gcd6fa5c0d (base) maciej.skorski@kaggle-cpu-maciej:~/docker-debug$ sudo docker build . --tag test-sing:latest Successfully tagged test-sing:latest (base) maciej.skorski@kaggle-cpu-maciej:~/docker-debug$ docker save test-sing:latest --output test-sing.tgz (base) maciej.skorski@kaggle-cpu-maciej:~/docker-debug$ sudo singularity build test-sing.sif docker-archive:test-sing.tgz ... INFO: Build complete: test-sing.sif (base) maciej.skorski@kaggle-cpu-maciej:~/docker-debug$ sudo singularity run test-sing.sif Welcome to my Docker! Under the same software, I reproduced your error. So I think it has been solved! If you need any further assistance, we can set up a shared Virtual Machine with GitHub Codespace to make it fully reproducible. | 3 | 2 |
76,237,951 | 2023-5-12 | https://stackoverflow.com/questions/76237951/i-made-a-model-using-jupyter-notebook-and-then-i-am-trying-to-deploy-the-model-u | I created a model and deployed using streamlit. I am running in a virtual environement and inspite of running pip install streamlit it is still not working. The following error is shown enter image description here The error that is shown is this Traceback (most recent call last): File "C:\Python39\lib\runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Python39\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "D:\Machine Learning\sms-spam-classifier\venv\Scripts\streamlit.exe\__main__.py", line 4, in <module> File "d:\machine learning\sms-spam-classifier\venv\lib\site-packages\streamlit\__init__.py", line 70, in <module> from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator File "d:\machine learning\sms-spam-classifier\venv\lib\site-packages\streamlit\delta_generator.py", line 90, in <module> from streamlit.elements.arrow_altair import ArrowAltairMixin File "d:\machine learning\sms-spam-classifier\venv\lib\site-packages\streamlit\elements\arrow_altair.py", line 35, in <module> from altair.vegalite.v4.api import Chart ModuleNotFoundError: No module named 'altair.vegalite.v4' PS D:\Machine Learning\sms-spam-classifier> | Did you try pip install altair | 3 | 1 |
76,233,164 | 2023-5-12 | https://stackoverflow.com/questions/76233164/how-to-add-hatches-to-histplot-bars-and-legend | I created a bar plot with hatches using seaborn. I was also able to add a legend that included the hatch styles, as shown in the MWE below: import matplotlib.pyplot as plt import seaborn as sns tips = sns.load_dataset("tips") hatches = ['\\\\', '//'] fig, ax = plt.subplots(figsize=(6,3)) sns.barplot(data=tips, x="day", y="total_bill", hue="time") # loop through days for hues, hatch in zip(ax.containers, hatches): # set a different hatch for each time for hue in hues: hue.set_hatch(hatch) # add legend with hatches plt.legend().loc='best' plt.show() However, when I try to create a histogram on seaborn with a legend, the same code does not work; I get the following error: No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument. I've looked for an answer online, but I haven't been successful at finding one. How can I add the hatches to the legend of a histogram for the MWE below? import matplotlib.pyplot as plt import seaborn as sns tips = sns.load_dataset("tips") hatches = ['\\\\', '//'] fig, ax = plt.subplots(figsize=(6,3)) sns.histplot(data=tips, x="total_bill", hue="time", multiple='stack') # loop through days for hues, hatch in zip(ax.containers, hatches): # set a different hatch for each time for hue in hues: hue.set_hatch(hatch) # add legend with hatches plt.legend().loc='best' # this does not work plt.show() | The issue is ax1.get_legend_handles_labels() returns empty lists for seaborn.histplot. Refer to this answer. Use the explicit interface by adding ax=ax to seaborn.histplot(...). Use ax.get_legend().legend_handles (.legendHandles is deprecate) to get the handles for the legend, and add hatches with set_hatch(). ax.get_legend().legend_handles returns the handles in the reverse order compared to the container, so the order can be reversed with [::-1]. Tested in python 3.11.2, matplotlib 3.7.1, seaborn 0.12.2 import matplotlib.pyplot as plt import seaborn as sns tips = sns.load_dataset("tips") hatches = ['\\\\', '//'] fig, ax = plt.subplots(figsize=(6, 3)) sns.histplot(data=tips, x="total_bill", hue="time", multiple='stack', ax=ax) # added ax=ax # iterate through each container, hatch, and legend handle for container, hatch, handle in zip(ax.containers, hatches, ax.get_legend().legend_handles[::-1]): # update the hatching in the legend handle handle.set_hatch(hatch) # iterate through each rectangle in the container for rectangle in container: # set the rectangle hatch rectangle.set_hatch(hatch) plt.show() | 3 | 3 |
76,222,239 | 2023-5-10 | https://stackoverflow.com/questions/76222239/pip-install-gymnasiumbox2d-not-working-on-google-colab | I have been working with the gymnasium environment for some weeks now and I had no problems with it in Google Colab by using this command in the notebook: pip3 install gymnasium[box2d] However, without changing anything I try to run the command once again and it suddenly stopped installing Box2d properly. I get the following output: Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Collecting gymnasium[box2d] Downloading gymnasium-0.28.1-py3-none-any.whl (925 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 925.5/925.5 kB 19.7 MB/s eta 0:00:00 Requirement already satisfied: numpy>=1.21.0 in /usr/local/lib/python3.10/dist-packages (from gymnasium[box2d]) (1.22.4) Collecting jax-jumpy>=1.0.0 (from gymnasium[box2d]) Downloading jax_jumpy-1.0.0-py3-none-any.whl (20 kB) Requirement already satisfied: cloudpickle>=1.2.0 in /usr/local/lib/python3.10/dist-packages (from gymnasium[box2d]) (2.2.1) Requirement already satisfied: typing-extensions>=4.3.0 in /usr/local/lib/python3.10/dist-packages (from gymnasium[box2d]) (4.5.0) Collecting farama-notifications>=0.0.1 (from gymnasium[box2d]) Downloading Farama_Notifications-0.0.4-py3-none-any.whl (2.5 kB) Collecting box2d-py==2.3.5 (from gymnasium[box2d]) Downloading box2d-py-2.3.5.tar.gz (374 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 374.4/374.4 kB 11.0 MB/s eta 0:00:00 Preparing metadata (setup.py) ... done Collecting pygame==2.1.3 (from gymnasium[box2d]) Downloading pygame-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.7 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.7/13.7 MB 28.2 MB/s eta 0:00:00 Collecting swig==4.* (from gymnasium[box2d]) Downloading swig-4.1.1-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 13.0 MB/s eta 0:00:00 Building wheels for collected packages: box2d-py error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. Building wheel for box2d-py (setup.py) ... error ERROR: Failed building wheel for box2d-py Running setup.py clean for box2d-py Failed to build box2d-py ERROR: Could not build wheels for box2d-py, which is required to install pyproject.toml-based projects I tried changing versions of gymnasium and even installing simply from gym, but the error persists. I don't know what's going on because literally yesterday I was not having any problems with this. | I was running into the same problem. After some digging and quite a lot of trial and error, I was able to get it to work by first running pip install swig. I hope that helps. | 9 | 36 |
76,238,080 | 2023-5-12 | https://stackoverflow.com/questions/76238080/in-python-merge-two-dataframes-with-the-merge-key-of-one-dataframe-contained-in | I would like to merge two dataframes df1 and df2 in order to compare two values info 1 and info 2. The key to merge them is hidden in the name columns. Df1 is 'clean' as it has a first name column and a last name column. Df2, however, is tricky. There is only a name column and the names can be given in different ways. The standard case is first and last name but as shown in the picture below it can contain two names separated by an 'and' or a '&' or it can even be something totally different like a school. Here is the dummy data in code: data1 = [['Anna','Tessmann',10], ['Ben','Fachmann',20], ['John','Smith',10]] df1 = pd.DataFrame(data1, columns=['FirstName','LastName','Info1']) data2 = [['Ben Fachmann',30], ['School AAA',40], ['John and Melissa Smith',50], ['Bob & Anna Tessmann',20]] df2= pd.DataFrame(data2, columns=['Name','Info2']) Would anyone know an efficient way to merge these two? Is there the possibility to merge on st like 'df2.Name contains df1.Lastname'? Or I was looking into trying to parse df2.Name, I found nameparser import HumanName but I think it can't deal with 'and' and '&'. I apologize if something is unclear. Thanks a lot for any help in advance! | You can use a double substring merge: import re pattern1 = '|'.join(map(re.escape, df1['FirstName'])) pattern2 = '|'.join(map(re.escape, df1['LastName'])) match1 = df2['Name'].str.extractall(f'(?P<FirstName>{pattern1})').droplevel(1) match2 = df2['Name'].str.extractall(f'(?P<LastName>{pattern2})').droplevel(1) out = df1.merge(df2.join(match1).join(match2), on=['FirstName', 'LastName']) Output: FirstName LastName Info1 Name Info2 0 Anna Tessmann 10 Bob & Anna Tessmann 20 1 Ben Fachmann 20 Ben Fachmann 30 2 John Smith 10 John and Melissa Smith 50 | 4 | 4 |
76,235,874 | 2023-5-12 | https://stackoverflow.com/questions/76235874/downgrade-python-from-3-11-2-to-3-10-in-specifc-environment | I am using Python's virtual environment 'venv'. My current version is 3.11.2, and I need to downgrade it. I have already tried the following: pip3 install python==3.10.10 and got the following error: ERROR: Could not find a version that satisfies the requirement python==3.10.10 (from versions: none) ERROR: No matching distribution found for python==3.10.10 I also tried lower versions like 3.8.0, 3.9.0, ... Always the same error. Thanks | To answer your question directly: try python.org. Head straight to https://www.python.org/downloads/ to download the distribution you want. If I am to guess what problem you are facing, here are some details. When you are running your command prompt, make sure you know which Python you are executing. Example: PS C:\Users\jli8\pythonWork\dpr-data-presenter> conda activate p310 (p310) PS C:\Users\jli8\pythonWork\dpr-data-presenter> .\venv\Scripts\activate (venv) (p310) PS C:\Users\jli8\pythonWork\dpr-data-presenter> where.exe python C:\Users\jli8\pythonWork\dpr-data-presenter\venv\Scripts\python.exe C:\Users\jli8\Anaconda3\envs\p310\python.exe C:\Users\jli8\AppData\Local\Programs\Python\Python311\python.exe C:\Users\jli8\AppData\Local\Microsoft\WindowsApps\python.exe (venv) (p310) PS C:\Users\jli8\pythonWork\dpr-data-presenter> From my terminal above, I actually have three Pythons: Python 3.11 installed in AppData\Local\Programs\Python\Python311\python.exe, Python 3.10 installed in the Anaconda env "p310" (which I used to create the venv), and finally the venv in use, which is in my working folder dpr-data-presenter\venv\Scripts\python.exe. From this limited context, the most straightforward answer would be to just download Python 3.10.10 from python.org. Make sure you understand how to run that Python 3.10.10 once it is installed, and create your venv from it. Some helpful reference for you: https://realpython.com/python-virtual-environments-a-primer/ | 8 | 2 |
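A quick check that complements the answer above: before and after creating the new virtual environment, confirm which interpreter is actually running. This is a minimal sketch; the printed version and path will of course differ on your machine.

    import sys

    # Confirm which interpreter this environment is running and where it lives.
    print(sys.version_info)  # expect major=3, minor=10 once the venv is built from Python 3.10
    print(sys.executable)    # path of the interpreter backing the active environment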
76,234,354 | 2023-5-12 | https://stackoverflow.com/questions/76234354/how-does-poetry-associate-a-project-to-its-virtual-environment | How does Poetry associate a project to its virtual environment in ~/.cache/pypoetry/virtualenvs? I can't find any link inside the project, e.g. grep -ie NKJBdMnE . returns nothing. poetry env info: Virtualenv Python: 3.10.11 Implementation: CPython Path: /home/lddpro/.cache/pypoetry/virtualenvs/lxxo-NKJBdMnE-py3.10 Executable: /home/lddpro/.cache/pypoetry/virtualenvs/lxxo-NKJBdMnE-py3.10/bin/python Valid: True System Platform: linux OS: posix Python: 3.10.11 Path: /var/lang Executable: /var/lang/bin/python3.10 poetry config --list: [lddpro@0a0aecf400ca lddpro-bff]$ poetry config --list cache-dir = "/home/lddpro/.cache/pypoetry" experimental.new-installer = true experimental.system-git-client = false installer.max-workers = null installer.no-binary = null installer.parallel = true virtualenvs.create = true virtualenvs.in-project = null virtualenvs.options.always-copy = false virtualenvs.options.no-pip = false virtualenvs.options.no-setuptools = false virtualenvs.options.system-site-packages = false virtualenvs.path = "{cache-dir}/virtualenvs" # /home/lddpro/.cache/pypoetry/virtualenvs virtualenvs.prefer-active-python = false virtualenvs.prompt = "{project_name}-py{python_version}" https://python-poetry.org/docs/configuration/ | The path to the virtual environment is generated on the fly. It is not stored anywhere. The 8 letters NKJBdMnE are a part of the SHA256 of the working directory of you project. If you move your project into a different directory, poetry will use a different virtual environment as the SHA256 will be different. You can see the exact algorithm used in the poetry source code: env.py#L1196-L1204 | 3 | 6 |
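A rough sketch of how that suffix is derived, based on the answer above: Poetry hashes the project's directory path and keeps a short prefix of the digest. The exact path normalisation and encoding details vary between Poetry versions, so treat this as illustrative only; the directory used here is hypothetical.

    import base64
    import hashlib

    def env_suffix(project_dir: str) -> str:
        # SHA256 of the project directory, then a short URL-safe base64 prefix of the digest.
        digest = hashlib.sha256(project_dir.encode()).digest()
        return base64.urlsafe_b64encode(digest).decode()[:8]

    print(env_suffix("/home/lddpro/lxxo-bff"))  # prints an 8-character suffix in the same style as 'NKJBdMnE'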
76,234,312 | 2023-5-12 | https://stackoverflow.com/questions/76234312/importerror-cannot-import-name-is-categorical-from-pandas-api-types | I want to convert file1.hic file into .cool format using hic2cool, which is written in Python. I converted the files using command line: hic2cool convert file1.hic file1.cool -r 10000 Traceback: Traceback (most recent call last): File "/home/melchua/.local/bin/hic2cool", line 5, in <module> from hic2cool.__main__ import main File "/home/melchua/.local/lib/python3.9/site-packages/hic2cool/__init__.py", line 2, in <module> from .hic2cool_utils import ( File "/home/melchua/.local/lib/python3.9/site-packages/hic2cool/hic2cool_utils.py", line 27, in <module> import cooler File "/home/melchua/.local/lib/python3.9/site-packages/cooler/__init__.py", line 14, in <module> from .api import Cooler, annotate File "/home/melchua/.local/lib/python3.9/site-packages/cooler/api.py", line 12, in <module> from .core import (get, region_to_offset, region_to_extent, RangeSelector1D, File "/home/melchua/.local/lib/python3.9/site-packages/cooler/core.py", line 3, in <module> from pandas.api.types import is_categorical ImportError: cannot import name 'is_categorical' from 'pandas.api.types' (/home/melchua/.local/lib/python3.9/site-packages/pandas/api/types/__init__.py) | The latest version of pandas 2.0.1 seems to not have is_categorical but instead it has is_categorical_dtype. Seems that in hic2cool, pandas version is not pinned to the one that has that. I suggest installing a previous version of pandas before the changes took place. Or install a newer version of cooler as they updated their library to use is_categorical_dtype | 5 | 4 |
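A quick way to confirm what the answer above describes — on pandas 2.x the old name is gone and only is_categorical_dtype remains — before deciding whether to pin pandas below 2.0 or upgrade cooler:

    import pandas.api.types as ptypes

    # On pandas 2.x the first check prints False; the second name is the one newer code should use.
    print(hasattr(ptypes, "is_categorical"))
    print(hasattr(ptypes, "is_categorical_dtype"))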
76,233,950 | 2023-5-12 | https://stackoverflow.com/questions/76233950/new-discord-username-system | So currently I have this Python bot where, when a user tries to store something, the author name along with the discriminator ('#') is stored as well: return "**Hello** **" + message.author.name + "#" + message.author.discriminator + "** ** :wave: " So my question is: when they remove the discriminator and apply the new username system, how can I change my bot so that it stores the username? Also, am I going to need to upgrade my discord Python package, and will they add something like this: message.author.username? Because when I currently try to call message.author.username, I get this: 'User' object has no attribute 'username' | The Discord.py package will be updated sometime after Discord applies the new username system. For now, you can't use the new system, as the developers of Discord.py need to see how the new system works before they implement changes. As for your question: you will have to update the Discord.py package when an update eventually comes out, otherwise you will not be able to use the updated methods. We also don't know how storing the username will be implemented, but message.author.username seems like a good guess of what the method will look like. | 4 | 4 |
76,231,723 | 2023-5-11 | https://stackoverflow.com/questions/76231723/pyspark-pandas-vectorized-udfs | I am trying to convert this udf into this pandas udf, in order to avoid creating two pandas udfs. Convert this: @udf("string") def splitEmailUDF(email: str, position: int) -> str: return email.split("@")[position] into this in one pandas udf --- position ??? Datatype or something else! from pyspark.sql.functions import pandas_udf, PandasUDFType @pandas_udf("string") def splitEmailUDFVec(email: pd.Series, position: ???????) -> pd.Series: return email.str.split("@").str[position] Of course I can always create two pandas_udfs: from pyspark.sql.functions import pandas_udf @pandas_udf("string") def splitFirstNameUDFVec(email: pd.Series) -> pd.Series: return email.str.split("@").str[0] @pandas_udf("string") def splitDomainUDFVec(email: pd.Series) -> pd.Series: return email.str.split("@").str[1] Any help will be appreciated! | Setup df.show() +------------+ | email| +------------+ | [email protected]| |[email protected]| +------------+ Define a wrapper function which takes email and pos as arguments and returns the underlying pandas udf function def split(email, pos): @F.pandas_udf('string') def _split(email: pd.Series) -> pd.Series: return email.str.split('@').str[pos] return _split(email) df = df.withColumn('firstname', split('email', 1)) Result df.show() +------------+--------+ | email| domain| +------------+--------+ | [email protected]| bar.com| |[email protected]|spam.com| +------------+--------+ Alternatively a better/efficient approach is to use regex extraction if your goal is to only split the email address into its name and domain components. f = lambda n: F.regexp_extract('email', '(.*)@(.*)', n) df = df.select('*', f(1).alias('firstname'), f(2).alias('domain')) Result df.show() +------------+---------+--------+ | email|firstname| domain| +------------+---------+--------+ | [email protected]| foo| bar.com| |[email protected]| baz|spam.com| +------------+---------+--------+ | 3 | 2 |
76,231,351 | 2023-5-11 | https://stackoverflow.com/questions/76231351/how-to-apply-enum-nonmember | I was trying to come up with a use case for the new @enum.nonmember decorator in Python 3.11. The docs clearly mention it is a decorator meant to be applied to members. However, when I tried literally decorating a member directly: import enum class MyClass(enum.Enum): A = 1 B = 2 @enum.nonmember C = 3 this results in an error as: Traceback (most recent call last): File "C:\Program Files\Python311\Lib\code.py", line 63, in runsource code = self.compile(source, filename, symbol) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\Python311\Lib\codeop.py", line 153, in __call__ return _maybe_compile(self.compiler, source, filename, symbol) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\Python311\Lib\codeop.py", line 73, in _maybe_compile return compiler(source, filename, symbol) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\Python311\Lib\codeop.py", line 118, in __call__ codeob = compile(source, filename, symbol, self.flags, True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<input>", line 9 C = 3 ^ SyntaxError: invalid syntax However, if I had declared an atribute as a property or a descriptor it also wouldn't become an Enum member... So how, when and why do you use @enum.nonmember? | You would use it like so: import enum class MyClass(enum.Enum): A = 1 B = 2 C = enum.nonmember(3) As far as I can tell, the only reason why it is called a decorator, is because of nested classes. Currently, class MyClass(enum.Enum): A = 1 B = 2 class MyNestedClass: pass makes MyClass.MyNestedClass into one of the members of MyClass. This will change in 3.13. So if you want the new behaviour now, you can use: class MyClass(enum.Enum): A = 1 B = 2 @enum.nonmember class MyNestedClass: pass In 3.13, if you want the current behaviour of making nested classes members, you can use class MyClass(enum.Enum): A = 1 B = 2 @enum.member class MyNestedClass: pass There is no reason to use enum.nonmember as a decorator on a method, since methods are already excluded from being members, but I think you could use enum.member on one to be able to define a method member if you wanted. Not sure why you would, though. | 7 | 9 |
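A short demonstration (Python 3.11+) of the behaviour the accepted answer describes: the wrapped value stays on the class as a plain attribute and never becomes a member.

    import enum

    class MyClass(enum.Enum):
        A = 1
        B = 2
        C = enum.nonmember(3)

    print(list(MyClass))  # [<MyClass.A: 1>, <MyClass.B: 2>] -- C is not a member
    print(MyClass.C)      # 3 -- C is left as a regular class attribute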