Dataset columns:
question_id: int64 (values 59.5M – 79.4M)
creation_date: string (length 8 – 10)
link: string (length 60 – 163)
question: string (length 53 – 28.9k)
accepted_answer: string (length 26 – 29.3k)
question_vote: int64 (values 1 – 410)
answer_vote: int64 (values -9 – 482)
79,323,172
2025-1-2
https://stackoverflow.com/questions/79323172/django-request-get-adds-and-extra-quote-to-the-data
When I pass my parameters via Django request.GET I get an extra comma in the dictionary that I do not need. Encoded data that I redirect to the endpoint: /turnalerts/api/v2/statuses?statuses=%5B%7B%27conversation%27%3A+%7B%27expiration_timestamp%27%3A+%271735510680%27%2C+%27id%27%3A+%2757f7d7d4d255f4c7987ac3557bf536e3%27%2C+%27origin%27%3A+%7B%27type%27%3A+%27service%27%7D%7D%2C+%27id%27%3A+%27wamid.HBgNMjM0OTAzOTc1NjYyOBUCABEYEjdCMTJFNUZDNzNFQjkxQ0IyRQA%3D%27%2C+%27pricing%27%3A+%7B%27billable%27%3A+True%2C+%27category%27%3A+%27service%27%2C+%27pricing_model%27%3A+%27CBP%27%7D%2C+%27recipient_id%27%3A+%272349039756628%27%2C+%27status%27%3A+%27sent%27%2C+%27timestamp%27%3A+%271735424268%27%7D%5D The request: <rest_framework.request.Request: GET '/turnalerts/api/v2/statuses?statuses=%5B%7B%27conversation%27%3A+%7B%27expiration_timestamp%27%3A+%271735510680%27%2C+%27id%27%3A+%2757f7d7d4d255f4c7987ac3557bf536e3%27%2C+%27origin%27%3A+%7B%27type%27%3A+%27service%27%7D%7D%2C+%27id%27%3A+%27wamid.HBgNMjM0OTAzOTc1NjYyOBUCABEYEjdCMTJFNUZDNzNFQjkxQ0IyRQA%3D%27%2C+%27pricing%27%3A+%7B%27billable%27%3A+True%2C+%27category%27%3A+%27service%27%2C+%27pricing_model%27%3A+%27CBP%27%7D%2C+%27recipient_id%27%3A+%272349039756628%27%2C+%27status%27%3A+%27sent%27%2C+%27timestamp%27%3A+%271735424268%27%7D%5D'> Data after request.GET: {'statuses': "[{'conversation': {'expiration_timestamp': '1735510680', 'id': '57f7d7d4d255f4c7987ac3557bf536e3', 'origin': {'type': 'service'}}, 'id': 'wamid.HBgNMjM0OTAzOTc1NjYyOBUCABEYEjdCMTJFNUZDNzNFQjkxQ0IyRQA=', 'pricing': {'billable': True, 'category': 'service', 'pricing_model': 'CBP'}, 'recipient_id': '2349039756628', 'status': 'sent', 'timestamp': '1735424268'}]"} The issue here is the double quote in the list as in "[{'conversation'... Expected result is: {'statuses': [{'conversation': {'expiration_timestamp': '1735510680', 'id': '57f7d7d4d255f4c7987ac3557bf536e3', 'origin': {'type': 'service'}}, 'id': 'wamid.HBgNMjM0OTAzOTc1NjYyOBUCABEYEjdCMTJFNUZDNzNFQjkxQ0IyRQA=', 'pricing': {'billable': True, 'category': 'service', 'pricing_model': 'CBP'}, 'recipient_id': '2349039756628', 'status': 'sent', 'timestamp': '1735424268'}]} Here's my view: class StatusesPayloadLayerView(generics.GenericAPIView): permission_classes = (permissions.AllowAny,) def get(self, request, *args, **kwargs): payload = request.GET data = payload.dict() try: error_code = data['statuses'][0]['errors'][0]['code'] except KeyError: error_code = None From the url encoding reference here, the encoded data looks fine as it doesn't appear to have a double quote before the list which would be "%22" notation so I can't quite understand where the double quote is coming from. I try to do a try-except but because it's a string, I end up with a type error. How can I represent the data properly so it's not a string with no double quote.
The issue here is the double quote in the list. Nope. This is not part of the content of the string. This is because Python's repr(…) function [python-doc] tries to print the value as a Python literal expression, for example {'a': 'b'} prints single quotes around 'a' and 'b', but these are not part of the content of the string. You can convert the repr of Python literals (and a combination of these) back to Python with ast.literal_eval(…) [python-doc]; from ast import literal_eval class StatusesPayloadLayerView(generics.GenericAPIView): permission_classes = (permissions.AllowAny,) def get(self, request, *args, **kwargs): payload = request.GET data = payload.dict() statuses = literal_eval(data['statuses']) print(statuses) # … and this will print: [{'conversation': {'expiration_timestamp': '1735510680', 'id': '57f7d7d4d255f4c7987ac3557bf536e3', 'origin': {'type': 'service'}}, 'id': 'wamid.HBgNMjM0OTAzOTc1NjYyOBUCABEYEjdCMTJFNUZDNzNFQjkxQ0IyRQA=', 'pricing': {'billable': True, 'category': 'service', 'pricing_model': 'CBP'}, 'recipient_id': '2349039756628', 'status': 'sent', 'timestamp': '1735424268'}]
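Building on the answer above, the parsed value can then be indexed the way the question's view attempts. Below is a minimal sketch (not part of the original answer; the helper name extract_error_code is made up for illustration) that also guards against malformed input, which literal_eval reports as ValueError or SyntaxError:

```python
from ast import literal_eval


def extract_error_code(query_dict):
    """Parse the 'statuses' query parameter and pull out the first error code, if any."""
    try:
        # query_dict is expected to be request.GET (or its .dict())
        statuses = literal_eval(query_dict.get("statuses", ""))
    except (ValueError, SyntaxError):
        return None  # parameter missing or not a valid Python literal
    try:
        return statuses[0]["errors"][0]["code"]
    except (KeyError, IndexError, TypeError):
        return None  # payload contains no error entries
```

Inside the view this would be called as extract_error_code(request.GET), replacing the try/except block that currently fails with a TypeError because data['statuses'] is still a string.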
2
1
79,322,543
2025-1-2
https://stackoverflow.com/questions/79322543/find-the-closest-converging-point-of-a-group-of-vectors
I am trying to find the point that is closest to a group of vectors. For context, the vectors are inverted rays emitted from center of aperture stop after exiting a lens, this convergence is meant to locate the entrance pupil. The backward projection of the exiting rays, while not converging at one single point due to spherical aberration, is quite close to converging toward a point, as illustrated in the figure below. (For easier simulation the positive z is pointing down) I believe that to find the closest point, I would be finding a point that has the shortest distance to all these lines. And wrote the method as follow: def FindConvergingPoint(position, direction): A = np.eye(3) * len(direction) - np.dot(direction.T, direction) b = np.sum(position - np.dot(direction, np.dot(direction.T, position)), axis=0) return np.linalg.pinv(A).dot(b) For the figure above and judging visually, I would have expected the point to be something around [0, 0, 20] However, this is not the case. The method yielded a result of [ 0., 188.60107764, 241.13690715], which is far from the converging point I was expecting. Is my algorithm faulty or have I missed something about python/numpy? Attached are the data for the vectors: position = np.array([ [0, 0, 0], [0, -1.62, 0.0314], [0, -3.24, 0.1262], [0, -4.88, 0.2859], [0, -6.53, 0.5136], [0, -8.21, 0.8135], [0, -9.91, 1.1913], [0, -11.64, 1.6551], [0, -13.43, 2.2166], [0, -15.28, 2.8944], [0, -17.26, 3.7289] ]) direction = np.array([ [0, 0, 1], [0, 0.0754, 0.9972], [0, 0.1507, 0.9886], [0, 0.2258, 0.9742], [0, 0.3006, 0.9537], [0, 0.3752, 0.9269], [0, 0.4494, 0.8933], [0, 0.5233, 0.8521], [0, 0.5969, 0.8023], [0, 0.6707, 0.7417], [0, 0.7459, 0.6661] ])
I'm assuming the answer to my question in the comment is that you have symmetry such that the "closest point" must lie on the z-axis. Furthermore, I'm assuming you are somewhat flexible about the notion of "closest". First, let's remove the zeroth point from position and direction, since that will pass through all points on the z-axis (and cause numerical problems). Next, let's find the distance from each position along direction that brings us back to the z-axis. # transpose and remove the x-axis for simplicity position = position.T[1:] direction = direction.T[1:] # more carefully normalize direction vectors direction /= np.linalg.norm(direction, axis=0) # find the distance from `position` along `direction` that # brings us back to the z-axis. distance = position[0] / direction[0] # All these points should be on the z-axis point_on_z = position - distance * direction # array([[ 0. , 0. , 0. , 0. , 0. , # 0. , 0. , 0. , 0. , 0. ], # [21.45665199, 21.380772 , 21.34035527, 21.23103513, 21.09561354, # 20.89001607, 20.608748 , 20.26801397, 19.79193392, 19.14234148]]) Indeed, the y-coordinate of all these points is 0, and the z-coordinate is nearly 20 for all of them, as you expected. If you are flexible about your notion of "closest", the mean of the z-coordinates will minimize the sum of square of distances to the point where the rays intersect the z-axis (but not necessarily the minimum distances between your point and the rays). This might not solve exactly the problem you were hoping to solve, but hopefully you will find it "useful". If it's not spot on, let me know what assumptions I've made that I shouldn't, and we can try to find a solution to the refined question.
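For the general "point closest to a set of lines" least-squares problem the question states (without relying on the symmetry assumption), the standard formulation applies a per-ray projector I - d_i d_iᵀ to each p_i individually; the question's b term instead mixes contributions across rays via direction @ (direction.T @ position), which is likely why the result drifts so far off. The sketch below is offered under that interpretation and is not taken from the answer above; for the question's data it should land near the z-axis around z ≈ 20, which can be compared with the per-ray intersections computed above:

```python
import numpy as np


def closest_point_to_lines(position, direction):
    """Least-squares point minimizing the summed squared distances to lines p_i + t*d_i."""
    d = direction / np.linalg.norm(direction, axis=1, keepdims=True)  # normalize each ray
    # Per-ray projector onto the plane orthogonal to d_i: P_i = I - d_i d_i^T
    projectors = np.eye(3)[None, :, :] - d[:, :, None] * d[:, None, :]
    A = projectors.sum(axis=0)                        # sum_i P_i
    b = np.einsum("nij,nj->i", projectors, position)  # sum_i P_i p_i
    # lstsq is used instead of inv() so a (near-)singular A degrades gracefully
    return np.linalg.lstsq(A, b, rcond=None)[0]
```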
1
1
79,322,091
2025-1-1
https://stackoverflow.com/questions/79322091/unable-to-acquire-impersonated-credentials
Im trying to generate signed urls, so i followed the official guide but im getting this error: google.auth.exceptions.TransportError: Error calling sign_bytes: {'error': {'code': 403, 'message': "Permission 'iam.serviceAccounts.signBlob' denied on resource (or it may not exist).", 'status': 'PERMISSION_DENIED', 'details': [{'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'IAM_PERMISSION_DENIED', 'domain': 'iam.googleapis.com', 'metadata': {'permission': 'iam.serviceAccounts.signBlob'}}]}} I have done: Enable the Service Account Credentials API Create a Service Account and grant Storage Object User role and Service Account Token Creator role. Set up authentication for Cloud Storage and Service account impersonation gcloud auth application-default login --impersonate-service-account=virtu-backend@hip-apricot-446418-f2.iam.gserviceaccount.com Run the example code import datetime from google.cloud import storage storage_client = storage.Client() bucket = storage_client.bucket('virtu_users_uploaded_pictures') blob = bucket.blob('test1') url = blob.generate_signed_url( version="v4", # This URL is valid for 15 minutes expiration=datetime.timedelta(minutes=15), # Allow GET requests using this URL. method="GET", ) print("Generated GET signed URL:") print(url) print("You can use this URL with any user agent, for example:") print(f"curl '{url}'") I also checked the assigned permissions of Service Account Token Creator UPDATE: I tried a more minimal case: from google.cloud import storage storage_client = storage.Client() bucket = storage_client.bucket('virtu_users_uploaded_pictures') print(bucket.exists()) And i get this error: google.auth.exceptions.RefreshError: ('Unable to acquire impersonated credentials', '{\n "error": {\n "code": 403,\n "message": "Permission \'iam.serviceAccounts.getAccessToken\' denied on resource (or it may not exist).",\n "status": "PERMISSION_DENIED",\n "details": [\n {\n "@type": "type.googleapis.com/google.rpc.ErrorInfo",\n "reason": "IAM_PERMISSION_DENIED",\n "domain": "iam.googleapis.com",\n "metadata": {\n "permission": "iam.serviceAccounts.getAccessToken"\n }\n }\n ]\n }\n}\n') The bucket exists and there is no typo error
Per the documentation, Service Account Token Creator (roles/iam.serviceAccountTokenCreator): this role is required for generating short-lived credentials for a service account when a private key file is not provided locally. This role should be granted to the principal that will create the signed URL. This means you have to assign the role to the user (the person/human being) who is invoking the call to impersonate the service account from your CLI. For example, if you're the one doing it, you have to assign it to yourself, not to the service account.
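Once the Token Creator role is bound to your own user account, the gcloud impersonation set up in the question should work unchanged. As a side note, the impersonation can also be wired up directly in Python with google-auth's impersonated_credentials; this is a rough sketch rather than part of the accepted answer, and the scope and lifetime values are assumptions (only the service-account email and bucket come from the question):

```python
import datetime

import google.auth
from google.auth import impersonated_credentials
from google.cloud import storage

# Caller credentials (your user account), e.g. from `gcloud auth application-default login`
source_credentials, _ = google.auth.default()

# Impersonate the service account that will sign the URL
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="virtu-backend@hip-apricot-446418-f2.iam.gserviceaccount.com",
    target_scopes=["https://www.googleapis.com/auth/devstorage.read_write"],  # assumed scope
    lifetime=300,  # seconds; assumed
)

storage_client = storage.Client(credentials=target_credentials)
blob = storage_client.bucket("virtu_users_uploaded_pictures").blob("test1")
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="GET",
)
print(url)
```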
1
1
79,321,029
2025-1-1
https://stackoverflow.com/questions/79321029/python-unit-test-get-requests-assertion-error-not-picking-up-call
I'm testing basic GET requests under from the requests module. I'm using requests.Session to create a session instance which is later passed into the function. I'm mocking the the function calls. I've patched the session.get call within the function, however I get the following error: AssertionError: get('https://example-name/v1/0001') call not found I'm assuming I haven't patched the location correctly but I'm not sure where specifically I am going wrong. main.py import requests base_url = "https://example-name/v1/" session = requests.Session() def fetch_data_with_session(session, base_url, endpoint): response = session.get(base_url + endpoint) return response.json() test_main.py import unittest from requests import Session from main import fetch_data_with_session from unittest.mock import Mock, patch class TestingGet(unittest.TestCase): @patch.object(Session, 'get') def test_fetch(self, mock_get): data = { "id": "0001", "name": "John Doe", } mock_response = Mock() mock_response.json.return_value = data mock_get.return_value = mock_response mock_session = Mock() retrieved_get_data = fetch_data_with_session(mock_session, "https://example-name/v1/", "0001") mock_get.assert_any_call("https://example-name/v1/0001") self.assertEqual = (outages_data, data) if __name__ == '__main__': unittest.main() I also tried using the mock.get.assert_called_with and got a similar error. assert_called_with raise AssertionError(error_message) AssertionError: expected call not found. Expected: get('https://example-name/v1/0001') Actual: not called. Again, I'm assuming I'm not patching the GET call correctly, but not sure how to correct it. EDIT: Thanks @Guy who provided the answer. The updated module code is below. Please see @Guy's answer for the explanation @patch.object(requests.Session, 'get') def test_fetch(self, mock_get): data = { "id": "exampleID", "name": "example", "devices": [ { "id": "0001", "name": "Device A" } ] } mock_response = Mock() mock_response.json.return_value = data mock_get.return_value = mock_response outages_data = fetch_data_with_session(mock_get, "https://example-name/v1/", "0001") mock_get.get.assert_called_with("https://example-name/v1/0001") self.assertEqual = (outages_data, data)
There are two issues in your code: you use mock_session for the API call but test mock_get for it, and mock_get and mock_session are the session.get function, not a Session instance. Simplified test with two options: @patch.object(Session, 'get') def test_fetch(self, mock_get): fetch_data_with_session(mock_get, "https://example-name/v1/", "0001") mock_get.assert_any_call("https://example-name/v1/0001") # or mock_session = Mock() fetch_data_with_session(mock_session, "https://example-name/v1/", "0001") mock_session.assert_any_call("https://example-name/v1/0001") and main.py def fetch_data_with_session(get, base_url, endpoint): response = get(base_url + endpoint) return response.json()
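If you would rather keep the original fetch_data_with_session(session, base_url, endpoint) signature from the question, another option (a sketch, not part of the answer above) is to patch Session.get and pass in a real Session instance, so the method the function calls is exactly the patched mock:

```python
import unittest
from unittest.mock import Mock, patch

from requests import Session

from main import fetch_data_with_session


class TestingGet(unittest.TestCase):
    @patch.object(Session, "get")
    def test_fetch(self, mock_get):
        data = {"id": "0001", "name": "John Doe"}
        mock_response = Mock()
        mock_response.json.return_value = data
        mock_get.return_value = mock_response

        # A real Session: its get() is now the patched mock_get
        session = Session()
        result = fetch_data_with_session(session, "https://example-name/v1/", "0001")

        mock_get.assert_any_call("https://example-name/v1/0001")
        self.assertEqual(result, data)


if __name__ == "__main__":
    unittest.main()
```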
4
3
79,321,224
2025-1-1
https://stackoverflow.com/questions/79321224/cpython-pyatomic-gcc-h-is-not-a-file-or-a-directory
Im trying to use python in C++ but I get an error while trying to import Python : Error message : cmd /c chcp 65001>nul && C:\msys64\ucrt64\bin\g++.exe -fdiagnostics-color=always -g C:\Users\21211433\Desktop\Code\C++\First\src\python.c -o C:\Users\21211433\Desktop\Code\C++\First\src\python.exe In file included from C:/Users/21211433/AppData/Local/Programs/Python/Python313/include/pyatomic.h:9, from C:/Users/21211433/AppData/Local/Programs/Python/Python313/include/Python.h:70, from C:\Users\21211433\Desktop\Code\C++\First\src\python.c:2: C:/Users/21211433/AppData/Local/Programs/Python/Python313/include/cpython/pyatomic.h:532:12: fatal error: cpython/pyatomic_gcc.h: No such file or directory 532 | # include "cpython/pyatomic_gcc.h" | ^~~~~~~~~~~~~~~~~~~~~~~~ While including "Python.h" The file 'pyatomic_gcc.h' seems to not be recognized even if it is inside of cpython Python.cpp : #define PY_SSIZE_T_CLEAN #include <C:/Users/21211433/AppData/Local/Programs/Python/Python313/include/Python.h> int main() { return 0; } pyatomic_gcc.h : // This is the implementation of Python atomic operations using GCC's built-in // functions that match the C+11 memory model. This implementation is preferred // for GCC compatible compilers, such as Clang. These functions are available // in GCC 4.8+ without needing to compile with --std=c11 or --std=gnu11. #ifndef Py_ATOMIC_GCC_H # error "this header file must not be included directly" #endif // --- _Py_atomic_add -------------------------------------------------------- static inline int _Py_atomic_add_int(int *obj, int value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline int8_t _Py_atomic_add_int8(int8_t *obj, int8_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline int16_t _Py_atomic_add_int16(int16_t *obj, int16_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline int32_t _Py_atomic_add_int32(int32_t *obj, int32_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline int64_t _Py_atomic_add_int64(int64_t *obj, int64_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline intptr_t _Py_atomic_add_intptr(intptr_t *obj, intptr_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline unsigned int _Py_atomic_add_uint(unsigned int *obj, unsigned int value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline uint8_t _Py_atomic_add_uint8(uint8_t *obj, uint8_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline uint16_t _Py_atomic_add_uint16(uint16_t *obj, uint16_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline uint32_t _Py_atomic_add_uint32(uint32_t *obj, uint32_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline uint64_t _Py_atomic_add_uint64(uint64_t *obj, uint64_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline uintptr_t _Py_atomic_add_uintptr(uintptr_t *obj, uintptr_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } static inline Py_ssize_t _Py_atomic_add_ssize(Py_ssize_t *obj, Py_ssize_t value) { return __atomic_fetch_add(obj, value, __ATOMIC_SEQ_CST); } // --- _Py_atomic_compare_exchange ------------------------------------------- static inline int _Py_atomic_compare_exchange_int(int *obj, int *expected, int desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } 
static inline int _Py_atomic_compare_exchange_int8(int8_t *obj, int8_t *expected, int8_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_int16(int16_t *obj, int16_t *expected, int16_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_int32(int32_t *obj, int32_t *expected, int32_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_int64(int64_t *obj, int64_t *expected, int64_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_intptr(intptr_t *obj, intptr_t *expected, intptr_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_uint(unsigned int *obj, unsigned int *expected, unsigned int desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_uint8(uint8_t *obj, uint8_t *expected, uint8_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_uint16(uint16_t *obj, uint16_t *expected, uint16_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_uint32(uint32_t *obj, uint32_t *expected, uint32_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_uint64(uint64_t *obj, uint64_t *expected, uint64_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_uintptr(uintptr_t *obj, uintptr_t *expected, uintptr_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_ssize(Py_ssize_t *obj, Py_ssize_t *expected, Py_ssize_t desired) { return __atomic_compare_exchange_n(obj, expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } static inline int _Py_atomic_compare_exchange_ptr(void *obj, void *expected, void *desired) { return __atomic_compare_exchange_n((void **)obj, (void **)expected, desired, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); } // --- _Py_atomic_exchange --------------------------------------------------- static inline int _Py_atomic_exchange_int(int *obj, int value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline int8_t _Py_atomic_exchange_int8(int8_t *obj, int8_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline int16_t _Py_atomic_exchange_int16(int16_t *obj, int16_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline int32_t _Py_atomic_exchange_int32(int32_t *obj, int32_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline int64_t _Py_atomic_exchange_int64(int64_t *obj, int64_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline intptr_t _Py_atomic_exchange_intptr(intptr_t *obj, intptr_t value) { 
return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline unsigned int _Py_atomic_exchange_uint(unsigned int *obj, unsigned int value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline uint8_t _Py_atomic_exchange_uint8(uint8_t *obj, uint8_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline uint16_t _Py_atomic_exchange_uint16(uint16_t *obj, uint16_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline uint32_t _Py_atomic_exchange_uint32(uint32_t *obj, uint32_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline uint64_t _Py_atomic_exchange_uint64(uint64_t *obj, uint64_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline uintptr_t _Py_atomic_exchange_uintptr(uintptr_t *obj, uintptr_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline Py_ssize_t _Py_atomic_exchange_ssize(Py_ssize_t *obj, Py_ssize_t value) { return __atomic_exchange_n(obj, value, __ATOMIC_SEQ_CST); } static inline void * _Py_atomic_exchange_ptr(void *obj, void *value) { return __atomic_exchange_n((void **)obj, value, __ATOMIC_SEQ_CST); } // --- _Py_atomic_and -------------------------------------------------------- static inline uint8_t _Py_atomic_and_uint8(uint8_t *obj, uint8_t value) { return __atomic_fetch_and(obj, value, __ATOMIC_SEQ_CST); } static inline uint16_t _Py_atomic_and_uint16(uint16_t *obj, uint16_t value) { return __atomic_fetch_and(obj, value, __ATOMIC_SEQ_CST); } static inline uint32_t _Py_atomic_and_uint32(uint32_t *obj, uint32_t value) { return __atomic_fetch_and(obj, value, __ATOMIC_SEQ_CST); } static inline uint64_t _Py_atomic_and_uint64(uint64_t *obj, uint64_t value) { return __atomic_fetch_and(obj, value, __ATOMIC_SEQ_CST); } static inline uintptr_t _Py_atomic_and_uintptr(uintptr_t *obj, uintptr_t value) { return __atomic_fetch_and(obj, value, __ATOMIC_SEQ_CST); } // --- _Py_atomic_or --------------------------------------------------------- static inline uint8_t _Py_atomic_or_uint8(uint8_t *obj, uint8_t value) { return __atomic_fetch_or(obj, value, __ATOMIC_SEQ_CST); } static inline uint16_t _Py_atomic_or_uint16(uint16_t *obj, uint16_t value) { return __atomic_fetch_or(obj, value, __ATOMIC_SEQ_CST); } static inline uint32_t _Py_atomic_or_uint32(uint32_t *obj, uint32_t value) { return __atomic_fetch_or(obj, value, __ATOMIC_SEQ_CST); } static inline uint64_t _Py_atomic_or_uint64(uint64_t *obj, uint64_t value) { return __atomic_fetch_or(obj, value, __ATOMIC_SEQ_CST); } static inline uintptr_t _Py_atomic_or_uintptr(uintptr_t *obj, uintptr_t value) { return __atomic_fetch_or(obj, value, __ATOMIC_SEQ_CST); } // --- _Py_atomic_load ------------------------------------------------------- static inline int _Py_atomic_load_int(const int *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline int8_t _Py_atomic_load_int8(const int8_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline int16_t _Py_atomic_load_int16(const int16_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline int32_t _Py_atomic_load_int32(const int32_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline int64_t _Py_atomic_load_int64(const int64_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline intptr_t _Py_atomic_load_intptr(const intptr_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline uint8_t _Py_atomic_load_uint8(const uint8_t 
*obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline uint16_t _Py_atomic_load_uint16(const uint16_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline uint32_t _Py_atomic_load_uint32(const uint32_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline uint64_t _Py_atomic_load_uint64(const uint64_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline uintptr_t _Py_atomic_load_uintptr(const uintptr_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline unsigned int _Py_atomic_load_uint(const unsigned int *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline Py_ssize_t _Py_atomic_load_ssize(const Py_ssize_t *obj) { return __atomic_load_n(obj, __ATOMIC_SEQ_CST); } static inline void * _Py_atomic_load_ptr(const void *obj) { return (void *)__atomic_load_n((void * const *)obj, __ATOMIC_SEQ_CST); } // --- _Py_atomic_load_relaxed ----------------------------------------------- static inline int _Py_atomic_load_int_relaxed(const int *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline int8_t _Py_atomic_load_int8_relaxed(const int8_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline int16_t _Py_atomic_load_int16_relaxed(const int16_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline int32_t _Py_atomic_load_int32_relaxed(const int32_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline int64_t _Py_atomic_load_int64_relaxed(const int64_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline intptr_t _Py_atomic_load_intptr_relaxed(const intptr_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline uint8_t _Py_atomic_load_uint8_relaxed(const uint8_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline uint16_t _Py_atomic_load_uint16_relaxed(const uint16_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline uint32_t _Py_atomic_load_uint32_relaxed(const uint32_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline uint64_t _Py_atomic_load_uint64_relaxed(const uint64_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline uintptr_t _Py_atomic_load_uintptr_relaxed(const uintptr_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline unsigned int _Py_atomic_load_uint_relaxed(const unsigned int *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline Py_ssize_t _Py_atomic_load_ssize_relaxed(const Py_ssize_t *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } static inline void * _Py_atomic_load_ptr_relaxed(const void *obj) { return (void *)__atomic_load_n((void * const *)obj, __ATOMIC_RELAXED); } static inline unsigned long long _Py_atomic_load_ullong_relaxed(const unsigned long long *obj) { return __atomic_load_n(obj, __ATOMIC_RELAXED); } // --- _Py_atomic_store ------------------------------------------------------ static inline void _Py_atomic_store_int(int *obj, int value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_int8(int8_t *obj, int8_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_int16(int16_t *obj, int16_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_int32(int32_t *obj, int32_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_int64(int64_t *obj, int64_t value) { __atomic_store_n(obj, value, 
__ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_intptr(intptr_t *obj, intptr_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_uint8(uint8_t *obj, uint8_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_uint16(uint16_t *obj, uint16_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_uint32(uint32_t *obj, uint32_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_uint64(uint64_t *obj, uint64_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_uintptr(uintptr_t *obj, uintptr_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_uint(unsigned int *obj, unsigned int value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_ptr(void *obj, void *value) { __atomic_store_n((void **)obj, value, __ATOMIC_SEQ_CST); } static inline void _Py_atomic_store_ssize(Py_ssize_t *obj, Py_ssize_t value) { __atomic_store_n(obj, value, __ATOMIC_SEQ_CST); } // --- _Py_atomic_store_relaxed ---------------------------------------------- static inline void _Py_atomic_store_int_relaxed(int *obj, int value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_int8_relaxed(int8_t *obj, int8_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_int16_relaxed(int16_t *obj, int16_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_int32_relaxed(int32_t *obj, int32_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_int64_relaxed(int64_t *obj, int64_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_intptr_relaxed(intptr_t *obj, intptr_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_uint8_relaxed(uint8_t *obj, uint8_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_uint16_relaxed(uint16_t *obj, uint16_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_uint32_relaxed(uint32_t *obj, uint32_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_uint64_relaxed(uint64_t *obj, uint64_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_uintptr_relaxed(uintptr_t *obj, uintptr_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_uint_relaxed(unsigned int *obj, unsigned int value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_ptr_relaxed(void *obj, void *value) { __atomic_store_n((void **)obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_ssize_relaxed(Py_ssize_t *obj, Py_ssize_t value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } static inline void _Py_atomic_store_ullong_relaxed(unsigned long long *obj, unsigned long long value) { __atomic_store_n(obj, value, __ATOMIC_RELAXED); } // --- _Py_atomic_load_ptr_acquire / _Py_atomic_store_ptr_release ------------ static inline void * _Py_atomic_load_ptr_acquire(const void *obj) { return (void *)__atomic_load_n((void * const *)obj, __ATOMIC_ACQUIRE); } static inline uintptr_t _Py_atomic_load_uintptr_acquire(const uintptr_t *obj) { 
return (uintptr_t)__atomic_load_n(obj, __ATOMIC_ACQUIRE); } static inline void _Py_atomic_store_ptr_release(void *obj, void *value) { __atomic_store_n((void **)obj, value, __ATOMIC_RELEASE); } static inline void _Py_atomic_store_uintptr_release(uintptr_t *obj, uintptr_t value) { __atomic_store_n(obj, value, __ATOMIC_RELEASE); } static inline void _Py_atomic_store_int_release(int *obj, int value) { __atomic_store_n(obj, value, __ATOMIC_RELEASE); } static inline void _Py_atomic_store_ssize_release(Py_ssize_t *obj, Py_ssize_t value) { __atomic_store_n(obj, value, __ATOMIC_RELEASE); } static inline int _Py_atomic_load_int_acquire(const int *obj) { return __atomic_load_n(obj, __ATOMIC_ACQUIRE); } static inline void _Py_atomic_store_uint32_release(uint32_t *obj, uint32_t value) { __atomic_store_n(obj, value, __ATOMIC_RELEASE); } static inline void _Py_atomic_store_uint64_release(uint64_t *obj, uint64_t value) { __atomic_store_n(obj, value, __ATOMIC_RELEASE); } static inline uint64_t _Py_atomic_load_uint64_acquire(const uint64_t *obj) { return __atomic_load_n(obj, __ATOMIC_ACQUIRE); } static inline uint32_t _Py_atomic_load_uint32_acquire(const uint32_t *obj) { return __atomic_load_n(obj, __ATOMIC_ACQUIRE); } static inline Py_ssize_t _Py_atomic_load_ssize_acquire(const Py_ssize_t *obj) { return __atomic_load_n(obj, __ATOMIC_ACQUIRE); } // --- _Py_atomic_fence ------------------------------------------------------ static inline void _Py_atomic_fence_seq_cst(void) { __atomic_thread_fence(__ATOMIC_SEQ_CST); } static inline void _Py_atomic_fence_acquire(void) { __atomic_thread_fence(__ATOMIC_ACQUIRE); } static inline void _Py_atomic_fence_release(void) { __atomic_thread_fence(__ATOMIC_RELEASE); } Include path : ${workspaceFolder}/** C:\Users\21211433\AppData\Local\Programs\Python\Python313\include I set this in VScode from the C/C++ extension inside of IntelliSense Configurations I use Windows 11 Python 3.13.1 gcc & g++ version 14.2.0 with mysys2 VScode version 1.95.3 Note that I didn't install vscode with the installer but installed it from a .zip file with the entire application I tried changing #include <C:/Users/21211433/AppData/Local/Programs/Python/Python313/include/Python.h> to #include <Python.h> but this error occured: Error message : cmd /c chcp 65001>nul && C:\msys64\ucrt64\bin\g++.exe -fdiagnostics-color=always -g C:\Users\21211433\Desktop\Code\C++\First\src\python.cpp -o C:\Users\21211433\Desktop\Code\C++\First\src\python.exe C:\Users\21211433\Desktop\Code\C++\First\src\python.cpp:2:10: fatal error: Python.h: No such file or directory 2 | #include <Python.h> | ^~~~~~~~~~ compilation terminated. Build finished with error(s). I tried removing the line that includes "pyatomic_gcc.h" from pyatomic.h but that caused more errors and I undid the change
You seem to think that the compiler should know where to find an included header file based on the location of a previously included header file. That is not how it works. You need to specify to the compiler where to look for header files. In this case it seems you need to add 'C:/Users/21211433/AppData/Local/Programs/Python/Python313/include' to your include path on the command line with the -I option. So change the command line to something like C:\msys64\ucrt64\bin\g++.exe -fdiagnostics-color=always -g -IC:/Users/21211433/AppData/Local/Programs/Python/Python313/include python.cpp -o python.exe and change your include statement to #include <Python.h> I don't know what you are indicating with your include path above. I guess it is from your c_cpp_properties.json file, in which case it is irrelevant to this issue. The compiler does not use that file.
3
3
79,352,553
2025-1-13
https://stackoverflow.com/questions/79352553/beautifulsoup-prettify-changes-content-not-just-layout
BeautifulSoup prettify() modifies significant whitespace even if the attribute xml:space is set to "preserve". Example xml file with significant whitespace: <svg viewBox="0 0 160 50" xmlns="http://www.w3.org/2000/svg"> <text y="20" xml:space="default"> Default spacing</text> <text y="40" xml:space="preserve"> <tspan>reserved spacing</tspan></text> </svg> Code: from bs4 import BeautifulSoup xml_string_with_significant_whitespace =''' <svg viewBox="0 0 160 50" xmlns="http://www.w3.org/2000/svg"> <text y="20" xml:space="default"> Default spacing</text> <text y="40" xml:space="preserve"> <tspan>reserved spacing</tspan></text> </svg> ''' soup = BeautifulSoup(xml_string_with_significant_whitespace, "xml") # no modifications made print(soup.prettify()) # modifies significant whitespace # print(str(soup)) # doesn't modify significant whitespace Output: <svg viewBox="0 0 160 50" xmlns="http://www.w3.org/2000/svg"> <text xml:space="default" y="20"> Default spacing </text> <text xml:space="preserve" y="40"> <tspan> reserved spacing </tspan> </text> </svg> Text will be moved due to modified whitespace. How do I prevent prettify() from changing the meaning of the xml file, instead of just changing the layout?
As MendelG expained in his answer BeautifulSoup prettify() does change the meaning of documents and prettify() is only meant as an aid for readability. I wanted a solution that would reformat the document without changing its meaning and with the least amount of changes. The following code prettifies xml without modifying significant whitespace (xml:space="preserve"), CDATA and XML declarations: import xml.sax from io import StringIO import re import sys class Prettifier(xml.sax.ContentHandler, xml.sax.handler.LexicalHandler): def __init__(self, print_method=None): self.level = -1 self.preserve_space_stack=[False] self.last_was_opening_tag = False #tags without content or empty content don't need closing tag on next line self.indent = " "*4 self.first_tag = True self.external_print_method = print_method self.string="" self.CDATA = False def get_string(self): return self.string def print_method(self, text="", end="\n"): if self.external_print_method: self.external_print_method(text, end) self.string += text + end # Call when an element starts def startElement(self, tag, attributes): self.level += 1 if 'xml:space' in attributes: self.preserve_space_stack.append(attributes['xml:space'] == 'preserve') else: self.preserve_space_stack.append(self.preserve_space_stack[-1]) attributes_string = " ".join([f'{key}="{value}"' for key,value in attributes.items()]) if self.preserve_space_stack[-1]: if self.preserve_space_stack[-2] == False: self.print_method() self.print_method(self.indent*self.level, end="") self.print_method(f"<{tag}", end="") self.print_method(" "*(len(attributes_string)!=0), end="") self.print_method(attributes_string, end="") self.print_method(">", end="") else: if not self.first_tag: self.print_method() if len(attributes_string) > 60: attributes_string = f"\n{self.indent*self.level + ' '*(len(tag)+2)}".join([f'{key}="{value}"' for key,value in attributes.items()]) self.print_method(self.indent*self.level, end="") self.print_method(f"<{tag}", end="") self.print_method(" "*(len(attributes_string)!=0), end="") self.print_method(attributes_string, end="") self.print_method(">", end="") self.last_was_opening_tag = True self.first_tag = False # Call when an elements ends def endElement(self, tag): if self.preserve_space_stack[-1]: self.print_method(f"</{tag}", end="") self.print_method(">", end="") else: if not self.last_was_opening_tag: self.print_method() self.print_method(self.indent*self.level, end="") self.print_method(f"</{tag}", end="") self.print_method(">",end="") self.level -= 1 self.preserve_space_stack.pop() self.last_was_opening_tag = False # Call when a character is read def characters(self, content): if self.CDATA: self.print_method(content, end="") else: empty_content = False if self.preserve_space_stack[-1]: self.print_method(content, end="") empty_content = content == "" else: empty_content = content.strip() == "" if not empty_content: self.print_method() self.print_method(self.indent*(self.level+1), end="") self.print_method(content.strip(), end="") self.last_was_opening_tag = self.last_was_opening_tag and empty_content # lexical handler methods: def comment(self, content): if not self.preserve_space_stack[-1]: self.print_method() self.print_method(self.indent*(self.level+1), end="") self.print_method(f"<!--{content}-->", end="") def startCDATA(self): #The contents of the CDATA marked section will be reported through the characters handler. 
if not self.preserve_space_stack[-1]: self.print_method() self.print_method(self.indent*(self.level+1), end="") self.print_method("<![CDATA[", end="") self.CDATA = True def endCDATA(self): self.print_method("]]>", end="") self.CDATA = False def process_xml_declaration(xml_string): declaration = "" regex = r"^\s*(<\?xml [^\?>]*\?>)" matches = re.search(regex, xml_string) if matches: declaration = matches.group(1) + "\n" return declaration def prettify_file(file_name): Handler = Prettifier() parser = xml.sax.make_parser() parser.setFeature(xml.sax.handler.feature_namespaces, 0)# turn off namespaces parser.setContentHandler(Handler) parser.setProperty(xml.sax.handler.property_lexical_handler, Handler) parser.parse(file_name) declaration = "" with open(file_name, 'r', encoding='utf8') as f: declaration = process_xml_declaration(f.read()) return declaration + Handler.get_string() def prettify_string(xml_string): Handler = Prettifier() parser = xml.sax.make_parser() parser.setFeature(xml.sax.handler.feature_namespaces, 0)# turn off namespaces parser.setContentHandler(Handler) parser.setProperty(xml.sax.handler.property_lexical_handler, Handler) xml_string_stream = StringIO(xml_string) parser.parse(xml_string_stream) declaration = process_xml_declaration(xml_string) return declaration + Handler.get_string() if __name__ == "__main__": if len(sys.argv) > 1: for bad_image_path in sys.argv[1:]: svg_string_pretty = prettify_file(bad_image_path) with open(bad_image_path, 'w', encoding='utf8', newline='\n') as f: f.write(svg_string_pretty) else: pass
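A quick usage check with the SVG string from the question, assuming the code above is saved as sax_prettify.py (the module name is only illustrative):

```python
from sax_prettify import prettify_string  # hypothetical module name for the code above

xml_string_with_significant_whitespace = '''
<svg viewBox="0 0 160 50" xmlns="http://www.w3.org/2000/svg">
  <text y="20" xml:space="default">    Default spacing</text>
  <text y="40" xml:space="preserve">    <tspan>reserved spacing</tspan></text>
</svg>
'''

pretty = prettify_string(xml_string_with_significant_whitespace)
print(pretty)  # indentation should only be added outside the xml:space="preserve" element
```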
3
0
79,355,866
2025-1-14
https://stackoverflow.com/questions/79355866/optimizing-the-exact-prime-number-theorem
For example, given this sequence of the first 499 primes, can you predict the next prime? 2,3,5,7,...,3541,3547,3557,3559 The 500th prime is 3571. Prime Number Theorem The Prime Number Theorem (PNT) provides an approximation for the n-th prime: Computing p_500 ≈ 3107 takes microseconds! Exact Prime Number Theorem My experimental Exact Prime Number Theorem (EPNT) computes the exact n-th prime: Computing p_500 = 3571 takes 25 minutes! Question So far, the EPNT correctly predicts the first 500 primes. Unfortunately, numerically verifying the formula for higher primes is extremely slow! Are there any optimization tips to improve the EPNT computational speed? Perhaps Do not use Python Add multiple threads Implement a faster math precision library Modify the decimal precision mp.dps at runtime Use a math computing engine like WolframAlpha Here's the current Python code: import time from mpmath import ceil, ln, mp, mpf, exp, fsum, power, zeta from sympy import symbols, Eq, pprint, prime N=500 # <--- Compute the N-th prime. mp.dps = 20000 primes = [] def vengy_prime(k): # Compute the k-th prime deterministically s = ceil(k * ln(k * ln(k))) # Determine the dynamic Rosser (1941) upper bound N = int(ceil(k * (ln(k) + ln(ln(k))))) # Compute finite summation to N print(f"Computing {N} zeta terms ...") start_time = time.time() sum_N = fsum([1 / power(mpf(n), s) for n in range(1, N)]) end_time = time.time() print(f"Time taken: {end_time - start_time:.6f} seconds") # Compute the product term involving the first k-1 primes print(f"Computing product of {k-1} previous primes ...") start_time = time.time() prod = exp(fsum([ln(1 - power(p, -s)) for p in primes[:k-1]])) end_time = time.time() print(f"Time taken: {end_time - start_time:.6f} seconds") # Compute next prime p_k p_k=ceil((1 - 1 / (sum_N * prod)) ** (-1 / s)) return p_k # Generate the previous known k-1 primes print("\nListing", N-1, "known primes:") for k in range(1, N): p = prime(k) primes.append(p) print(primes) primes.append(vengy_prime(N)) pprint(Eq(symbols(f'p_{N}'), int(primes[-1]))) Update Wow! Running Jérôme Richard's new optimized code only took 10 seconds! Computing 4021 zeta terms ... Time taken: 7.968423 seconds Computing product of 499 previous primes ... Time taken: 1.960771 seconds p₅₀₀ = 3571 The old code timings were 1486 seconds: Computing 4021 zeta terms ... Time taken: 1173.899538 seconds Computing product of 499 previous primes ... Time taken: 313.833039 seconds p₅₀₀ = 3571 The optimized code computed the 4000th prime in 45 minutes: N = 4000, precision = 700000
TL;DR: the code can be optimized with gmpy2 and accelerated with multiple threads, but the main issue is that this formula is simply a very inefficient way of finding the next prime number. Implement a faster math precision library mpmath is indeed a bit slow. You can just use gmpy2 instead! It is a bit faster. gmpy2 is one of the fastest library I am aware of (for large numbers). Note that the very last digits of the two modules can differ (due to rounding and the accuracy of the math functions). Do not use Python Native languages will not make this code significantly faster. Indeed, with gmpy2, most of the time should be clearly spent in the GMP library written in C and highly optimized. Thus, Python is fine here. Add multiple threads Indeed, we can easily spawn CPython processes here since the operation is compute bound. This can easily be done with joblib. However, there is a catch: we need to reset the mpmath/gmpy2 context for each process (i.e. the precision) so to compute the numbers correctly. Here is the code applying all optimizations so far: import time import gmpy2 as gmp from gmpy2 import log as ln, ceil, mpfr, exp from sympy import symbols, Eq, pprint, prime from joblib import Parallel, delayed N = 500 # <--- Compute the N-th prime. precision = 66432 # 20_000 decimals ~= 66432 bits gmp.get_context().precision = precision primes = [] # TODO: use a pair-wise sum for better precision or even a Kahan sum! fsum = sum def vengy_prime(k): # Compute the k-th prime deterministically s = ceil(k * ln(k * ln(k))) # Determine the dynamic Rosser (1941) upper bound N = int(ceil(k * (ln(k) + ln(ln(k))))) ms = -s parallel = Parallel(-1) # Compute finite summation to N print(f"Computing {N} zeta terms ...") start_time = time.time() def compute(n): gmp.get_context().precision = precision return n ** ms lst = parallel(delayed(compute)(n) for n in range(1, N)) sum_N = fsum(lst) #sum_N = fsum([n**ms for n in range(1, N)]) end_time = time.time() print(f"Time taken: {end_time - start_time:.6f} seconds") # Compute the product term involving the first k-1 primes print(f"Computing product of {k-1} previous primes ...") start_time = time.time() def compute(p): gmp.get_context().precision = precision return ln(1-p**ms) lst = parallel(delayed(compute)(p) for p in primes[:k-1]) prod = exp(fsum(lst)) #prod = exp(fsum([ln(1 - p**ms) for p in primes[:k-1]])) end_time = time.time() print(f"Time taken: {end_time - start_time:.6f} seconds") # Compute next prime p_k p_k=ceil((1 - 1 / (sum_N * prod)) ** (-1 / s)) return p_k # Generate the previous known k-1 primes print("\nListing", N-1, "known primes:") for k in range(1, N): p = prime(k) primes.append(p) print(primes) primes.append(vengy_prime(N)) pprint(Eq(symbols(f'p_{N}'), int(primes[-1]))) On my i5-9600KF CPU (6 cores), it takes 18.3 seconds compared to 128.1 seconds. This means the optimized code is 7 times faster. Modify the decimal precision mp.dps at runtime This is a good idea. In fact, you do not need the value to be exact! if the last digit is wrong this is not a problem since you can test if a number is a prime in polynomial time and even relatively quickly for pretty big numbers. For example, you can use the Miller Rabin primality test to check that. Note that there are deterministic algorithms not doing any assumptions on non-proven hypothesis like AKS (AFAIK significantly slower in practice and more complex to implement though). There is a catch tough: you need to quantify the error for big numbers so to know the exact range of number to search. 
This can be done by repeating the same algorithm multiple times while changing the rounding (and possibly the algorithms, so as to ensure the min/max values are actually an upper/lower bound). In the end, I think it is faster to use a primality test on numbers following primes[i] so as to quickly find primes[i+1] (much more efficiently than your method). For relatively small numbers like the ones you test (or even ones <10_000_000), a basic sieve of Eratosthenes is much faster. Root of the problem The heart of the problem with your formula is the following: First, it requires numbers to be very precise for this to work, and this high precision must increase when the searched number also increases (IDK exactly how much). Moreover, for a required precision p, the (best known) complexity of operations like multiplication/division is Θ(p log p) and the one of the exponentiation (in finite fields) is roughly Θ(p²). See this article for more information about the complexity of mathematical operations. Last but not least, it requires Θ(N) iterations (where N is the number of primes) so it should take about Θ(N p²) operations. This is not great, especially if p ~ Ω(N) (rather likely since you use a >66400-bit number when N is set to only 500), which would result in Ω(N**3) complexity. With p ~ Ω(N²), it would even be Ω(N**5), which is pretty bad. Remember that a sieve of Eratosthenes roughly runs in Θ(N √N) †. In practice, you already have all the previous prime numbers. Thus, finding the next one is cheaper than a sieve of Eratosthenes. Indeed, the average distance between two prime numbers is log(N log N) ≈ log(N), and testing if a number is prime can be done in O(√(N log N) / log N) with the trial division algorithm (much less with the Miller–Rabin test). This means the complexity of using trial division to check the next numbers in order to find the next prime number is O(√(N log N)). This is much better than your algorithm, which has a lower bound of Ω(N). Put shortly, this formula is a very inefficient way of finding the next prime number. † This formula is simplified for the sake of clarity. Indeed, I assume that the number of prime numbers in a range containing n numbers is O(n) while it is not. However, I think a O(1/(ln n)) factor is not very important here and IMHO it makes it harder to compare the complexities. The correct complexity for the sieve of Eratosthenes is Θ(N √N log N). I also assume that all numbers are small enough to fit in native 64-bit numbers (computed in constant time). For larger numbers, the sieve of Eratosthenes would take far too much memory to run (like your algorithm, which requires all the previous prime numbers to be stored).
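As a concrete point of comparison for the "just test the next candidates" suggestion, here is a small sketch using sympy's primality test; finding p_500 from p_499 this way is essentially instantaneous, in line with the complexity argument above:

```python
from sympy import isprime, prime


def next_prime_after(n):
    """Return the smallest prime strictly greater than n (linear scan + primality test)."""
    candidate = n + 1
    while not isprime(candidate):
        candidate += 1
    return candidate


p_499 = prime(499)              # 3559, the last of the question's known primes
print(next_prime_after(p_499))  # 3571
```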
2
1
79,352,480
2025-1-13
https://stackoverflow.com/questions/79352480/force-direction-of-line-vector
I have a line vector where each segment has a randomly assigned direction. The image shows an example (which consists of two connected lines), How to set the direction of each line so that it is consistent? That is, connected lines have the same direction? The line vector does not have a direction attribute. This information is somehow stored in the GPKG file in some other way.
You can use line_merge. For it to work you need each cluster of connected lines to be a multiline, which can be done using GeoPandas: from shapely.wkt import loads import geopandas as gpd data = [[1, 'LineString (543125 6941963, 544217 6941907)'], [2, 'LineString (544957 6941417, 544217 6941907)'], [3, 'LineString (544957 6941417, 545151 6942222)'], [4, 'LineString (545882 6941574, 545854 6940094)'], [5, 'LineString (545531 6942647, 546567 6943424)'], [6, 'LineString (546548 6944201, 546567 6943424)'], [7, 'LineString (546548 6944201, 547297 6944248)']] geoms = [loads(x[1]) for x in data] df = gpd.GeoDataFrame(data=data, geometry=geoms, columns=["line_id", "wkt"], crs=3006) #Create a column with buffered lines (polygon geometries) df["buffered"] = df.buffer(100) #Dissolve them into three polygons, each connecting a number of lines poly_df = df.set_geometry("buffered").dissolve().explode() poly_df["poly_id"] = range(poly_df.shape[0]) #Give each polygon a unique id ax = poly_df.plot(column="poly_id", zorder=0, cmap="Pastel2") df.plot(column="line_id", ax=ax, cmap="tab10", zorder=1) #Join the polygon id to the connected lines, # to give connected lines the same poly_id df["poly_id"] = df.sjoin(poly_df)["poly_id"] #Dissolve by polygon id to union connected lines into multilines, # line_merge to align the line directions, explode back to individual lines df = df.dissolve(by="poly_id").line_merge().explode()
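To confirm that the directions really are consistent after line_merge, one can inspect the vertex order of each resulting geometry. This small check assumes the result produced by the last line above, and that each merged part is a plain LineString:

```python
# Within each merged line the vertices now run in one consistent direction,
# so consecutive segments share their end/start points.
for geom in df.geometry:
    coords = list(geom.coords)
    print("start:", coords[0], "end:", coords[-1], "vertices:", len(coords))
```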
1
1
79,345,986
2025-1-10
https://stackoverflow.com/questions/79345986/fastest-exponentiation-of-numpy-3d-matrix
Q is a 3D matrix and could for example have the following shape: (4000, 25, 25) I want to raise Q to the power n for n in {0, 1, ..., k-1} and sum it all. Basically, I want to calculate \sum_{n=0}^{k-1}Q^n I have the following function that works as expected: def sum_of_powers(Q: np.ndarray, k: int) -> np.ndarray: Qs = np.sum([ np.linalg.matrix_power(Q, n) for n in range(k) ], axis=0) return Qs Is it possible to speed up my function or is there a faster method to obtain the same output?
We can perform this calculation in O(log k) matrix operations. Let M(k) represent the k'th power of the input, and S(k) represent the sum of those powers from 0 to k. Let I represent an appropriate identity matrix. Approach 1 If you expand the product, you'll find that (M(1) - I) * S(k) = M(k+1) - I. That means we can compute M(k+1) using a standard matrix power (which takes O(log k) matrix multiplications), and compute S(k) by using numpy.linalg.solve to solve the equation (M(1) - I) * S(k) = M(k+1) - I: import numpy.linalg def option1(Q, k): identity = numpy.eye(Q.shape[-1]) A = Q - identity B = numpy.linalg.matrix_power(Q, k+1) - identity return numpy.linalg.solve(A, B) This does depend on M(1) - I being invertible, though. Fortunately, there's another approach that doesn't have that limitation. Approach 2 The standard exponentation by squaring algorithm computes M(2*k) as M(k)*M(k) and M(2*k+1) as M(2*k)*M(1). We can alter the algorithm to track both S(k-1) and M(k), by computing S(2*k-1) as S(k-1)*M(k) + S(k-1) and S(2*k) as S(2*k-1) + M(2*k): import numpy def option2(Q, k): identity = numpy.eye(Q.shape[-1]) if k == 0: res = numpy.empty_like(Q) res[:] = identity return res power = Q sum_of_powers = identity # Looping over a string might look dumb, but it's actually the most efficient option, # as well as the simplest. (It wouldn't be the bottleneck even if it wasn't efficient.) for bit in bin(k+1)[3:]: sum_of_powers = (sum_of_powers @ power) + sum_of_powers power = power @ power if bit == "1": sum_of_powers += power power = power @ Q return sum_of_powers
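A quick sanity check comparing both approaches against the question's direct summation on a small random batch (this assumes sum_of_powers from the question and option1/option2 from above are in scope; note the different index convention, since option1/option2 sum powers 0..k inclusive while the question sums 0..k-1):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.random((10, 5, 5)) / 5.0  # small batch, scaled down so Q - I stays well-conditioned
k = 7

direct = sum_of_powers(Q, k)  # question's O(k) reference: powers 0..k-1
fast1 = option1(Q, k - 1)
fast2 = option2(Q, k - 1)

print(np.allclose(direct, fast1), np.allclose(direct, fast2))  # expected: True True
```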
3
4
79,356,690
2025-1-14
https://stackoverflow.com/questions/79356690/how-to-set-a-column-which-suffix-name-is-based-on-a-value-in-another-column
#Column X contains the suffix of one of the V* columns. Need to set column V{X} to 9 if X > 1. #But my code created a new column 'VX' instead of updating one of the V* columns import pandas as pd df = pd.DataFrame({'EMPLID': [12, 13, 14, 15, 16, 17, 18], 'V1': [2,3,4,50,6,7,8], 'V2': [3,3,3,3,3,3,3], 'V3': [7,15,8,9,10,11,12], 'X': [2,3,1,3,3,1,2] }) # Expected output: # EMPLID V1 V2 V3 X # 12 2 9 7 2 # 13 3 3 9 3 # 14 4 3 8 1 # 15 50 3 9 3 # 16 6 3 9 3 # 17 7 3 11 1 # 18 8 9 12 2 My code created a new column 'VX' instead of updating one of the V* columns: df.loc[(df['X'] > 1), f"V{'X'}"] = 9 Any suggestion is appreciated. Thank you.
# your dataframe df = pd.DataFrame({'EMPLID': [12, 13, 14, 15, 16, 17, 18], 'V1': [2,3,4,50,6,7,8], 'V2': [3,3,3,3,3,3,3], 'V3': [7,15,8,9,10,11,12], 'X': [2,3,1,3,3,1,2] }) First, we get the columns names that we want to change and their indices on the original dataframe. # column name x = df['X'][(df['X'] > 1)] # column names mapped to your scenario columns = [f'V{v}' for v in x] # desired indexes positions = x.index.values #Then we convert the column names to indices and use these indices to update the positions matching the conditions. column_indices = [df.columns.get_loc(col) for col in columns] Now, we can use two approaches here. A vectorized approach Convert the dataframe to numpy array, change the desired positions all at once and change the result back to a dataframe. import numpy as np # the original dataframe column names column_names = df.columns # convert the dataframe to numpy (this will 'remove' the column names) df_array = df.values # put the columns back (axis=0 will stack the columns at the top of the array) df_array = np.concatenate([[column_names], df_array], axis=0) # position+1 because when using pandas to get the row index, we ignore the columns (which would have index 0) df_array[positions+1, column_indices] = 9 # convert the result back to a dataframe df = pd.DataFrame(df_array[1:], columns=column_names) print(df) Output: EMPLID V1 V2 V3 X 0 12 2 9 7 2 1 13 3 3 9 3 2 14 4 3 8 1 3 15 50 3 9 3 4 16 6 3 9 3 5 17 7 3 11 1 6 18 8 9 12 2 A loop approach The easiest way would be just to loop over the rows and columns, changing one value at a time. for row, column in zip(positions, column_indices): df.iat[row,column] = 9 If your dataframe is small, the vectorized approach won't have as much of an advantage over the for loop.
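Another compact option, not shown in the answer above, is to loop over the distinct suffix values instead of the rows, assigning one whole column slice per suffix; with the question's data there are only two such values, so this stays fast and avoids rebuilding the dataframe:

```python
import pandas as pd

df = pd.DataFrame({'EMPLID': [12, 13, 14, 15, 16, 17, 18],
                   'V1': [2, 3, 4, 50, 6, 7, 8],
                   'V2': [3, 3, 3, 3, 3, 3, 3],
                   'V3': [7, 15, 8, 9, 10, 11, 12],
                   'X': [2, 3, 1, 3, 3, 1, 2]})

# One assignment per distinct suffix value greater than 1 (here: 2 and 3)
for suffix in df.loc[df['X'] > 1, 'X'].unique():
    df.loc[df['X'] == suffix, f'V{suffix}'] = 9

print(df)  # matches the expected output in the question
```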
1
1
79,356,521
2025-1-14
https://stackoverflow.com/questions/79356521/is-it-possible-to-create-groups-layers-in-method-chaining-in-python
I am aware that I can use method chaining by simply having methods return self, e.g. object.routine1().routine2().routine3() But is it possible to organize methods into layers or groups when applying method chaining? e.g. object.Layer1.routine1().routine2().Layer2.routine3() The context is that I am trying to build a text analytics pipeline and the different layers would correspond to text level, sentence level and token level preprocessing steps. So what I am trying to build is something like this text = "This is an example foo text with some special characters!!!! And some sentences" pr = TextPreprocessor(text) processed_text = ( pr.text_level.lower_case() .sentence_level.split_sentences().remove_special_characters() .token_level.tokenize() .text ) This is the code that almost (!) gets the text processing example to work: import re class TextLevelPreprocessor: def __init__(self, parent): self.parent = parent def lower_case(self): self.parent.text = self.parent.text.lower() return self.parent class SentenceLevelPreprocessor: def __init__(self, parent): self.parent = parent def split_sentences(self): self.parent.text = self.parent.text.split('. ') return self.parent def remove_special_characters(self): self.parent.text = [re.sub('[!@#$]', '', s) for s in self.parent.text] return self.parent class TokenLevelPreprocessor: def __init__(self, parent): self.parent = parent def tokenize(self): self.parent.text = [t.split() for t in self.parent.text] return self.parent class TextPreprocessor: def __init__(self, text): self.text = text self.text_level = TextLevelPreprocessor(self) self.sentence_level = SentenceLevelPreprocessor(self) self.token_level = TokenLevelPreprocessor(self) However here only this syntax would work pr = TextPreprocessor(text) processed_text = ( pr.text_level.lower_case() .sentence_level.split_sentences(). .sentence_level.remove_special_characters() .token_level.tokenize() .text ) which would mean that one would have to add the "Layer" or "Group" everytime one uses a method, which seems verbose.
You can make each level-specific object a proxy object to the parent object so that it has access to both level-specific methods and parent-specific attributes, and as a bonus level-specific methods can then reference self.text instead of self.parent.text: class TextPreprocessorLevel: def __init__(self, parent): self.__dict__['parent'] = parent def __getattr__(self, name): return getattr(self.parent, name) def __setattr__(self, name, value): setattr(self.parent, name, value) class TextLevelPreprocessor(TextPreprocessorLevel): def lower_case(self): self.text = self.text.lower() return self class SentenceLevelPreprocessor(TextPreprocessorLevel): def split_sentences(self): self.text = self.text.split('. ') return self def remove_special_characters(self): self.text = [re.sub('[!@#$]', '', s) for s in self.text] return self class TokenLevelPreprocessor(TextPreprocessorLevel): def tokenize(self): self.text = [t.split() for t in self.text] return self class TextPreprocessor: def __init__(self, text): self.text = text self.text_level = TextLevelPreprocessor(self) self.sentence_level = SentenceLevelPreprocessor(self) self.token_level = TokenLevelPreprocessor(self) so that method chaining can work on level-specific instances while allowing access to attributes of the other levels: text = "This is an example foo text with some special characters. And some sentences" pr = TextPreprocessor(text) processed_text = ( pr.text_level.lower_case() .sentence_level.split_sentences().remove_special_characters() .token_level.tokenize() .text ) print(processed_text) This outputs: [['this', 'is', 'an', 'example', 'foo', 'text', 'with', 'some', 'special', 'characters'], ['and', 'some', 'sentences']] Demo here The downside of a proxy object, however, is that attributes are dynamically delegated and therefore linters and static type checkers will likely complain about references to these attributes being undefined, so if you want to make them happy you can explicitly define the delegated attributes as properties instead: class TextPreprocessorLevel: def __init__(self, parent): self.parent = parent @property def text(self): return self.parent.text @text.setter def text(self, value): self.parent.text = value @property def text_level(self): return self.parent.text_level @property def sentence_level(self): return self.parent.sentence_level @property def token_level(self): return self.parent.token_level Demo here
2
1
79,344,159
2025-1-9
https://stackoverflow.com/questions/79344159/disable-pyspark-to-print-info-when-running
I have started to use PySpark. The version of PySpark is 3.5.4 and it's installed via pip. This is my code: from pyspark.sql import SparkSession pyspark = SparkSession.builder.master("local[8]").appName("test").getOrCreate() df = pyspark.read.csv("test.csv", header=True) print(df.show()) Every time I run the program using: python test_01.py It prints all this info about pyspark (in yellow): How can I disable it, so that it is not printed?
Different lines are coming from different sources. Windows ("SUCCESS: ..."), spark launcher shell/batch scripts (":: loading settings ::...") core spark code logging using log4j2 core spark code printing using System.out.println() Different lines are written to different fds (std-out, std-error, log4j log file) Spark offers different "scripts" (pyspark, spark-submit, spark-shell, ...) for different purposes. You're probably using the wrong one here. It's very tedious to pick and choose to disable lines from specific sources going to specific fds. Easiest of course is to control the core logs using log4j2, which can be done as described in wiltonsr's answer or a little more detailed here. Based on what you're looking to do, simplest is to use spark-submit, which is meant to be used for headless execution: CMD> cat test.py from pyspark.sql import SparkSession spark = SparkSession.builder \ .config('spark.jars.packages', 'io.delta:delta-core_2.12:2.4.0') \ # just to produce logs .getOrCreate() spark.createDataFrame(data=[(i,) for i in range(5)], schema='id: int').show() CMD> spark-submit test.py +---+ | id| +---+ | 0| | 1| | 2| | 3| | 4| +---+ CMD> To understand who is writing what to which fd is a tedious process, it might even change with platform (Linux/Windows/Mac). I would not recommend it. But if you really want, here are a few hints: From your original code: print(df.show()) df.show() prints df to stdout and returns None. print(df.show()) prints None to stdout. Running using python instead of spark-submit: CMD> python test.py :: loading settings :: url = jar:file:/C:/My/.venv/Lib/site-packages/pyspark/jars/ivy-2.5.1.jar!/org/apache/ivy/core/settings/ivysettings.xml Ivy Default Cache set to: C:\Users\e679994\.ivy2\cache The jars for the packages stored in: C:\Users\e679994\.ivy2\jars io.delta#delta-core_2.12 added as a dependency :: resolving dependencies :: org.apache.spark#spark-submit-parent-499a6ac1-b961-44da-af58-de97e4357cbf;1.0 confs: [default] found io.delta#delta-core_2.12;2.4.0 in central found io.delta#delta-storage;2.4.0 in central found org.antlr#antlr4-runtime;4.9.3 in central :: resolution report :: resolve 171ms :: artifacts dl 8ms :: modules in use: io.delta#delta-core_2.12;2.4.0 from central in [default] io.delta#delta-storage;2.4.0 from central in [default] org.antlr#antlr4-runtime;4.9.3 from central in [default] --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 3 | 0 | 0 | 0 || 3 | 0 | --------------------------------------------------------------------- :: retrieving :: org.apache.spark#spark-submit-parent-499a6ac1-b961-44da-af58-de97e4357cbf confs: [default] 0 artifacts copied, 3 already retrieved (0kB/7ms) +---+ | id| +---+ | 0| | 1| | 2| | 3| | 4| +---+ CMD> SUCCESS: The process with PID 38136 (child process of PID 38196) has been terminated. SUCCESS: The process with PID 38196 (child process of PID 35316) has been terminated. SUCCESS: The process with PID 35316 (child process of PID 22336) has been terminated. 
CMD> Redirecting stdout (fd=1) to a file: CMD> python test.py > out.txt 2> err.txt CMD> CMD> cat out.txt :: loading settings :: url = jar:file:/C:/My/.venv/Lib/site-packages/pyspark/jars/ivy-2.5.1.jar!/org/apache/ivy/core/settings/ivysettings.xml +---+ | id| +---+ | 0| | 1| | 2| | 3| | 4| +---+ SUCCESS: The process with PID 25080 (child process of PID 38032) has been terminated. SUCCESS: The process with PID 38032 (child process of PID 21176) has been terminated. SUCCESS: The process with PID 21176 (child process of PID 38148) has been terminated. SUCCESS: The process with PID 38148 (child process of PID 32456) has been terminated. SUCCESS: The process with PID 32456 (child process of PID 31656) has been terminated. CMD> Redirecting stderr (fd=2) to a file: CMD> cat err.txt Ivy Default Cache set to: C:\Users\kash\.ivy2\cache The jars for the packages stored in: C:\Users\kash\.ivy2\jars io.delta#delta-core_2.12 added as a dependency :: resolving dependencies :: org.apache.spark#spark-submit-parent-597f3c82-718d-498b-b00e-7928264c307a;1.0 confs: [default] found io.delta#delta-core_2.12;2.4.0 in central found io.delta#delta-storage;2.4.0 in central found org.antlr#antlr4-runtime;4.9.3 in central :: resolution report :: resolve 111ms :: artifacts dl 5ms :: modules in use: io.delta#delta-core_2.12;2.4.0 from central in [default] io.delta#delta-storage;2.4.0 from central in [default] org.antlr#antlr4-runtime;4.9.3 from central in [default] --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 3 | 0 | 0 | 0 || 3 | 0 | --------------------------------------------------------------------- :: retrieving :: org.apache.spark#spark-submit-parent-597f3c82-718d-498b-b00e-7928264c307a confs: [default] 0 artifacts copied, 3 already retrieved (0kB/5ms) CMD> SUCCESS: The process with PID Note how this is printed AFTER CMD>. I.e. it's printed by "Windows" after it completes execution of python You won't see it on Linux. E.g. 
from my linux box: kash@ub$ python test.py 19:15:50.037 [main] WARN org.apache.spark.util.Utils - Your hostname, ub resolves to a loopback address: 127.0.1.1; using 192.168.177.129 instead (on interface ens33) 19:15:50.049 [main] WARN org.apache.spark.util.Utils - Set SPARK_LOCAL_IP if you need to bind to another address :: loading settings :: url = jar:file:/home/kash/workspaces/spark-log-test/.venv/lib/python3.9/site-packages/pyspark/jars/ivy-2.5.0.jar!/org/apache/ivy/core/settings/ivysettings.xml Ivy Default Cache set to: /home/kash/.ivy2/cache The jars for the packages stored in: /home/kash/.ivy2/jars io.delta#delta-core_2.12 added as a dependency :: resolving dependencies :: org.apache.spark#spark-submit-parent-7d38e7a2-a0e5-47fa-bfda-2cb5b8b443e0;1.0 confs: [default] found io.delta#delta-core_2.12;2.4.0 in spark-list found io.delta#delta-storage;2.4.0 in spark-list found org.antlr#antlr4-runtime;4.9.3 in spark-list :: resolution report :: resolve 390ms :: artifacts dl 10ms :: modules in use: io.delta#delta-core_2.12;2.4.0 from spark-list in [default] io.delta#delta-storage;2.4.0 from spark-list in [default] org.antlr#antlr4-runtime;4.9.3 from spark-list in [default] --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 3 | 0 | 0 | 0 || 3 | 0 | --------------------------------------------------------------------- :: retrieving :: org.apache.spark#spark-submit-parent-7d38e7a2-a0e5-47fa-bfda-2cb5b8b443e0 confs: [default] 0 artifacts copied, 3 already retrieved (0kB/19ms) Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). +---+ | id| +---+ | 0| | 1| | 2| | 3| | 4| +---+ kash@ub$ Anyway that I can disable that on Windows 10/11 This part SUCCESS: The process with PID 5552 (child process of PID 4668) has been terminated. ... When you run it with python. –IGRACH Would not recommend. Seems like it's coming from java_gateway.py. You can add stdout=PIPE to the Popen call in your local installation and the output of taskkill will be suppressed. if on_windows: # In Windows, the child process here is "spark-submit.cmd", not the JVM itself # (because the UNIX "exec" command is not available). This means we cannot simply # call proc.kill(), which kills only the "spark-submit.cmd" process but not the # JVMs. Instead, we use "taskkill" with the tree-kill option "/t" to terminate all # child processes in the tree (http://technet.microsoft.com/en-us/library/bb491009.aspx) def killChild(): Popen(["cmd", "/c", "taskkill", "/f", "/t", "/pid", str(proc.pid)]) Change last line to: Popen(["cmd", "/c", "taskkill", "/f", "/t", "/pid", str(proc.pid)], stdout=PIPE)
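If all you need is to quiet Spark's own log4j output from Python, one minimal option (a hedged sketch; it only affects messages emitted after the session exists, so the launcher/ivy lines discussed above are not covered) is to lower the log level right after creating the session:

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[8]").appName("test").getOrCreate()
spark.sparkContext.setLogLevel("ERROR")   # hide INFO/WARN from Spark's logger from here on

df = spark.read.csv("test.csv", header=True)
df.show()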
4
4
79,353,260
2025-1-13
https://stackoverflow.com/questions/79353260/tkinter-with-turtle
I am creating a tkinter/turtle program similar to MS Paint. I have the barebones turtle part finished, but I am unsure how to embed the turtle into tkinter as a sort of window (with tkinter as the main application window and the turtle drawing area inside it acting as a widget, much like a label, checkbox or button would). How would I implement this, given the code: import turtle import tkinter as tk from tkinter import ttk root = tk.Tk() root.title("") frame = tk.Frame(root, padx=20, pady=20) frame.pack(padx=10, pady=10) label = tk.Label(frame, text="", font=("Arial", 16, "bold")) label.pack() # Setup turtle t = turtle.Turtle() t.shape("square") t.fillcolor("") t.up() # Pen is up by default t.turtlesize(1000000, 1000000) # Hide the turtle border turtle.tracer(False) turtle.hideturtle() # Functions def draw(x, y): t.down() # Pen down to draw t.goto(x, y) turtle.update() def move(x, y): t.up() # Pen up to move without drawing t.goto(x, y) turtle.update() # Bind events t.ondrag(draw) # Draw when dragging turtle.onscreenclick(move) # Move when clicking # Press Space to Save turtle.listen() turtle.done() root.mainloop()
You can use RawTurtle() to define your turtle; from there you can use ScrolledCanvas() and TurtleScreen() to create a screen that is embedded in tkinter, and then update that screen instead of the turtle module itself. from turtle import RawTurtle, TurtleScreen, ScrolledCanvas import tkinter as tk from tkinter import ttk root = tk.Tk() root.title("Drawing in turtle with TKinter") # Create a ScrolledCanvas for the turtle canvas = ScrolledCanvas(root, width=800, height=600) # Canvas with scrollbars canvas.pack(fill=tk.BOTH, expand=True) # Make the canvas expand to fill the window # Create a TurtleScreen from the canvas screen = TurtleScreen(canvas) # TurtleScreen is needed for turtle graphics screen.bgcolor("white") # Set the background color of the screen frame = tk.Frame(root, padx=20, pady=20) frame.pack(padx=10, pady=10) label = tk.Label(frame, text="", font=("Arial", 16, "bold")) label.pack() # Setup turtle t = RawTurtle(screen) # Create a turtle object t.shape("square") # Set the shape of the turtle t.fillcolor("") # No fill color for the turtle t.pencolor("black") # Set the default pen color t.pensize(10) t.up() # Pen is up by default (no drawing when moving) t.turtlesize(1000000, 1000000) # Hide the turtle border by making it extremely large screen.tracer(False) # Disable automatic screen updates for smoother drawing # Functions def draw(x, y): t.down() # Pen down to draw t.goto(x, y) screen.update() def move(x, y): t.up() # Pen up to move without drawing t.goto(x, y) screen.update() # Bind events t.ondrag(draw) # Draw when dragging screen.onscreenclick(move) # Move when clicking # Press Space to Save root.mainloop()
2
0
79,355,428
2025-1-14
https://stackoverflow.com/questions/79355428/inheriting-str-and-enum-why-is-the-output-different
I have this python code. But why does it print "NEW" in the first case, and "Status.NEW" in the second case? import enum class Status(str, enum.Enum): """Status options.""" NEW = "NEW" EXCLUDED = "EXCLUDED" print("" + Status.NEW) print(Status.NEW)
This is a quirk of multiple inheritance (one of the reasons why a lot of people choose to shun it). print("" + Status.NEW) Here you're using the + operator on your Status.NEW object. Since Status inherits from str, it inherits the __add__ method from there. str.__add__ does string concatenation and uses its raw string value. print(Status.NEW) Here there's no string concatenation, but print calls the __str__ method on any object you pass to it. Your Status.NEW object inherits its __str__ method from the Enum side of the family, so in that context it gets printed as an Enum value instead of a string. If you're in Python 3.11 or greater, you can do this: import enum class Status(enum.StrEnum): """Status options.""" NEW = "NEW" EXCLUDED = "EXCLUDED" print("" + Status.NEW) print(Status.NEW) That avoids ambiguity about whether your objects are treated as Enums or strings. Otherwise, you just have to be careful.
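If you are stuck on a Python version older than 3.11 (no enum.StrEnum), one widely used workaround, sketched here as an alternative rather than the only option, is to keep the str mixin but borrow str's __str__ so printing is unambiguous:

import enum

class Status(str, enum.Enum):
    """Status options."""
    NEW = "NEW"
    EXCLUDED = "EXCLUDED"

    # functions/descriptors in an Enum body become methods, not members,
    # so this simply overrides the Enum __str__ with str's implementation
    __str__ = str.__str__

print("" + Status.NEW)  # NEW
print(Status.NEW)       # NEW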
13
19
79,344,960
2025-1-10
https://stackoverflow.com/questions/79344960/extracting-vendor-info-from-probe-request-using-scapy
I am trying to extract the vendor information (Apple, Samsung, etc.) from probe requests coming from mobile phones, but so far no luck. I am not sure what corrections need to be made to get this info. Here is my code: import codecs from scapy.all import * from netaddr import * def handler(p): if not (p.haslayer(Dot11ProbeResp) or p.haslayer(Dot11ProbeReq) or p.haslayer(Dot11Beacon)): return rssi = p[RadioTap].dBm_AntSignal dst_mac = p[Dot11].addr1 src_mac = p[Dot11].addr2 ap_mac = p[Dot11].addr2 global macf maco = EUI(src_mac) try: macf = maco.oui.registration().org except NotRegisteredError: macf = "Not available" info = f"rssi={rssi:2}dBm, dst={dst_mac}, src={src_mac}, ap={ap_mac}, manf= {macf}" if p.haslayer(Dot11ProbeReq): stats = p[Dot11ProbeReq].network_stats() ssid = str(stats['ssid']) channel = None if "channel" in stats: channel = stats['channel'] print(f"[ProbReq ] {info}") print(f"ssid = {ssid}, channel ={channel}") #rate= {rates} sniff(iface="wlan1", prn=handler, store=0)
There are a few things that should be taken into consideration when dealing with your problem. First, the OUI data used by the netaddr 1.3.0 package is outdated. I have an iPhone 16 with OUI 0C-85-E1. You can check directly in IEEE or here that it is a valid OUI, but it's not updated in the netaddr source. You can solve this problem using another approach to get OUI info from the web. oui = src_mac[:8].upper().replace(":", "-") try: response = requests.get(f"https://api.macvendors.com/{oui}") if response.status_code == 200: macf = response.text else: macf = "Not available" except Exception as e: macf = "Not available" But here there is a second problem. Apple uses a Private Wi-Fi Address security feature that prevents the real OUI from being shown in any request, including probe requests. Check here when this option is off: And when it's on: You can check that the OUI 6E-BA-4F is invalid. Android has a similar function too, so you will have the same problem. If your clients use this function, there is no way to determine the vendor based on the OUI from probe requests.
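You can at least flag those randomized addresses before doing a lookup: private/randomized MACs have the locally administered bit (bit 1 of the first octet) set. A small sketch of the idea (the helper name is made up here):

def is_randomized_mac(mac: str) -> bool:
    """True if the locally administered bit is set (private / randomized address)."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

print(is_randomized_mac("6e:ba:4f:12:34:56"))  # True  -> OUI lookup is pointless
print(is_randomized_mac("0c:85:e1:12:34:56"))  # False -> a real vendor OUI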
5
1
79,345,392
2025-1-10
https://stackoverflow.com/questions/79345392/running-functions-in-parallel-and-seeing-their-progress
I am using joblib to run four processes on four cores in parallel. I would like to see the progress of the four processes separately on different lines. However, what I see is the progress being written on top of each other to the same line until the first process finishes. from math import factorial from decimal import Decimal, getcontext from joblib import Parallel, delayed from tqdm import trange import time def calc(n_digits): # number of iterations n = int(n_digits+1/14.181647462725477) n = n if n >= 1 else 1 # set the number of digits for our numbers getcontext().prec = n_digits+1 t = Decimal(0) pi = Decimal(0) deno = Decimal(0) for k in trange(n): t = ((-1)**k)*(factorial(6*k))*(13591409+545140134*k) deno = factorial(3*k)*(factorial(k)**3)*(640320**(3*k)) pi += Decimal(t)/Decimal(deno) pi = pi * Decimal(12) / Decimal(640320 ** Decimal(1.5)) pi = 1/pi # no need to round return pi def parallel_with_joblib(): # Define the number of cores to use n_cores = 4 # Define the tasks (e.g., compute first 100, 200, 300, 400 digits of pi) tasks = [1200, 1700, 900, 1400] # Run tasks in parallel results = Parallel(n_jobs=n_cores)(delayed(calc)(n) for n in tasks) if __name__ == "__main__": parallel_with_joblib() I would also like the four lines to be labelled "Job 1 of 4", "Job 2 of 4" etc. Following the method of @Swifty and changing the number of cores to 3 and the number of tasks to 7 and changing leave=False to leave=True I have this code: from math import factorial from decimal import Decimal, getcontext from joblib import Parallel, delayed from tqdm import trange import time def calc(n_digits, pos, total): # number of iterations n = int(n_digits + 1 / 14.181647462725477) n = n if n >= 1 else 1 # set the number of digits for our numbers getcontext().prec = n_digits + 1 t = Decimal(0) pi = Decimal(0) deno = Decimal(0) for k in trange(n, position=pos, desc=f"Job {pos + 1} of {total}", leave=True): t = ((-1) ** k) * (factorial(6 * k)) * (13591409 + 545140134 * k) deno = factorial(3 * k) * (factorial(k) ** 3) * (640320 ** (3 * k)) pi += Decimal(t) / Decimal(deno) pi = pi * Decimal(12) / Decimal(640320 ** Decimal(1.5)) pi = 1 / pi # no need to round return pi def parallel_with_joblib(): # Define the number of cores to use n_cores = 3 # Define the tasks (e.g., compute first 100, 200, 300, 400 digits of pi) tasks = [1200, 1700, 900, 1400, 800, 600, 500] # Run tasks in parallel results = Parallel(n_jobs=n_cores)(delayed(calc)(n, pos, len(tasks)) for (pos, n) in enumerate(tasks)) if __name__ == "__main__": parallel_with_joblib() I have change it to leave=True as I don't want the blank lines that appear otherwise. This however gives me: and then at the end it creates even more mess: How can this be fixed?
My idea was to create all the task bars in the main process and to create a single multiprocessing queue that each pool process would have access to. Then when calc completed an iteration it would place on the queue an integer representing its corresponding task bar. The main process would continue to get these integers from the queue and update the correct task bar. Each calc instance would place a sentinel value on the queue telling the main process that it had no more updates to enqueue. With a multiprocessing.pool.Pool instance we can use a "pool initializer" function to initialize a global variable queue in each pool process, which will be accessed by calc. Unfortunately, joblib provides no authorized equivalent pool initializer. I tried various workarounds mentioned on the web, but none worked. So if you can live with not using joblib, then try this: from math import factorial from decimal import Decimal, getcontext from multiprocessing import Pool, Queue from tqdm import tqdm import time def init_pool(_queue): global queue queue = _queue def calc(n_digits, pos): # number of iterations n = int(n_digits + 1 / 14.181647462725477) n = n if n >= 1 else 1 # set the number of digits for our numbers getcontext().prec = n_digits + 1 t = Decimal(0) pi = Decimal(0) deno = Decimal(0) for k in range(n): t = ((-1) ** k) * (factorial(6 * k)) * (13591409 + 545140134 * k) deno = factorial(3 * k) * (factorial(k) ** 3) * (640320 ** (3 * k)) pi += Decimal(t) / Decimal(deno) # Tell the main process to update the appropriate bar: queue.put(pos) pi = pi * Decimal(12) / Decimal(640320 ** Decimal(1.5)) pi = 1 / pi # no need to round queue.put(None) # Let updater know we have no more updates return pi def parallel_with_pool(): # Define the number of cores to use n_cores = 4 # Define the tasks (e.g., compute first 100, 200, 300, 400 digits of pi) tasks = [1200, 1700, 900, 1400] # Edit to make code for longer n_tasks = len(tasks) queue = Queue() LEAVE_PROGRESS_BAR = False # Create the bars: pbars = [ tqdm(total=tasks[idx], position=idx, desc=f"Job {idx + 1} of {n_tasks}", leave=LEAVE_PROGRESS_BAR ) for idx in range(n_tasks) ] # Run tasks in parallel with Pool(n_cores, initializer=init_pool, initargs=(queue,)) as pool: # This doesn't block and allows us to retrieve items from the queue: async_result = pool.starmap_async(calc, zip(tasks, range(n_tasks))) n = n_tasks while n: pos = queue.get() # Is this a sentinel value? if pos is None: n -= 1 # One less task to await else: pbars[pos].update() # We have no more updates to perform, so wait for the results: results = async_result.get() # Cause the bars to be removed before we display results # (See following Notes): for pbar in pbars: pbar.close() # So that the next print call starts at the start of the line # (required if leave=False is specified): if not LEAVE_PROGRESS_BAR: print('\r') for result in results: print(result) if __name__ == "__main__": parallel_with_pool() Notes In the above code the progress bars are instantiated with the argument leave=False signifying that we do not want the bars to remain. Consider the following code: from tqdm import tqdm import time with tqdm(total=10, leave=False) as pbar: for _ in range(10): pbar.update() time.sleep(.5) print('Done!') When the with block is terminated, the progress bar will disappear as a result of the implicit call to pbar.__exit__ that occurs. 
But if we had instead: pbar = tqdm(total=10, leave=False) for _ in range(10): pbar.update() time.sleep(.5) print('Done') We would see instead: C:\Ron\test>test.py 100%|██████████████████████| 10/10 [00:04<00:00, 2.03it/s]Done Since, in the posted answer we are not using the progress bar as context manager the progress bar are not immediately erased and if we had a print statement to output the actual results of our PI calculations, we would have the problem. The solution is to explicitly call close() on each progress bar: ... def parallel_with_pool(): ... # We have no more updates to perform, so wait for the results: results = async_result.get() # Cause the bars to be removed before we display results. for pbar in pbars: pbar.close() # So that the next print call starts at the start of the line # (required if leave=False is specified): print('\r') for result in results: print(result) If you want the progress bars to remain even after they have completed, then specify leave=True as follows: pbars = [ tqdm(total=tasks[idx], position=idx, desc=f"Job {idx + 1} of {n_tasks}", leave=True ) for idx in range(n_tasks) ] It is no longer necessary to call close for each bar, but it does not hurt to do so. Update Instead of using a multiprocessing.Queue instance to communicate we can instead create a multiprocessing.Array instance (which uses shared memory) of N counters where N is the number of progress bars whose progress is being tracked. Every iteration of calc will include an increment of the appropriate counter. The main process now has to periodically (say every .1 seconds) check the counters and update the progress bar accordingly: from math import factorial from decimal import Decimal, getcontext from multiprocessing import Pool, Array from tqdm import tqdm import time def init_pool(_progress_cntrs): global progress_cntrs progress_cntrs = _progress_cntrs def calc(n_digits, pos): # number of iterations n = int(n_digits + 1 / 14.181647462725477) n = n if n >= 1 else 1 # set the number of digits for our numbers getcontext().prec = n_digits + 1 t = Decimal(0) pi = Decimal(0) deno = Decimal(0) for k in range(n): t = ((-1) ** k) * (factorial(6 * k)) * (13591409 + 545140134 * k) deno = factorial(3 * k) * (factorial(k) ** 3) * (640320 ** (3 * k)) pi += Decimal(t) / Decimal(deno) progress_cntrs[pos] += 1 pi = pi * Decimal(12) / Decimal(640320 ** Decimal(1.5)) pi = 1 / pi return pi def parallel_with_pool(): # Define the number of cores to use n_cores = 4 # Define the tasks (e.g., compute first 100, 200, 300, 400 digits of pi) tasks = [1200, 1700, 900, 1400] # Edit to make code for longer n_tasks = len(tasks) progress_cntrs = Array('i', [0] * n_tasks, lock=False) LEAVE_PROGRESS_BAR = True # Create the bars: pbars = [ tqdm(total=tasks[idx], position=idx, desc=f"Job {idx + 1} of {n_tasks}", leave=LEAVE_PROGRESS_BAR ) for idx in range(n_tasks) ] # Run tasks in parallel with Pool(n_cores, initializer=init_pool, initargs=(progress_cntrs,)) as pool: # This doesn't block and allows us to retrieve items form the queue: async_result = pool.starmap_async(calc, zip(tasks, range(n_tasks))) n = n_tasks while n: time.sleep(.1) for idx in range(n_tasks): ctr = progress_cntrs[idx] if ctr != -1: # This bar isn't complete pbars[idx].n = ctr pbars[idx].refresh() if ctr == tasks[idx]: # This bar is now complete progress_cntrs[idx] = -1 # So we do not process this bar again n -= 1 # We have no more updates to perform, so wait for the results: results = async_result.get() # Cause the bars to be removed before we display 
results # (See following Notes): for pbar in pbars: pbar.close() # So that the next print call starts at the start of the line # (required if leave=False is specified) if not LEAVE_PROGRESS_BAR: print('\r') for result in results: print(result) if __name__ == '__main__': parallel_with_pool()
6
5
79,355,881
2025-1-14
https://stackoverflow.com/questions/79355881/how-to-extinguish-cycle-in-my-code-when-calculating-emwa
I'm calculating EWMA values for an array of streamflow data, and my code is shown below: import polars as pl import numpy as np streamflow_data = np.arange(0, 20, 1) adaptive_alphas = np.concatenate([np.repeat(0.3, 10), np.repeat(0.6, 10)]) streamflow_series = pl.Series(streamflow_data) ewma_data = np.zeros_like(streamflow_data) for i in range(1, len(streamflow_series)): current_alpha = adaptive_alphas[i] ewma_data[i] = streamflow_series[:i+1].ewm_mean(alpha=current_alpha)[-1] # When I set the dtype of ewma_data to float when initializing it, the output is like this Output: [0 0.58823529 1.23287671 1.93051717 2.67678771 3.46668163, 4.29488309 5.1560635 6.04512113 6.95735309 9.33379473 10.33353466, 11.33342058 12.33337091 13.33334944 14.33334021 15.33333625 16.33333457, 17.33333386 18.33333355] # When I don't specify the dtype of ewma_data and the dtype of streamflow_data is int, the output is floored Output: [0 0 1 1 2 3 4 5 6 6 9 10 11 12 13 14 15 16 17 18] But when the length of streamflow_data is very big (such as >100000), this code becomes very slow. So how can I eliminate the for loop in my code without changing its result? Hope for your reply.
If you have only few alpha values and/or have some condition on which alpha should be used for which row, you could use pl.coalesce(), pl.when() and pl.Expr.ewm_mean(): df = pl.DataFrame({ "adaptive_alpha": np.concatenate([np.repeat(0.3, 10), np.repeat(0.6, 10)]), "streamflow": np.arange(0, 20, 1) }) df.with_columns( pl.coalesce( pl.when(pl.col.adaptive_alpha == alpha) .then(pl.col.streamflow.ewm_mean(alpha = alpha)) for alpha in df["adaptive_alpha"].unique() ).alias("ewma") ).with_columns(ewma_int = pl.col.ewma.cast(pl.Int32)) shape: (20, 4) ┌────────────────┬────────────┬───────────┬──────────┐ │ adaptive_alpha ┆ streamflow ┆ ewma ┆ ewma_int │ │ --- ┆ --- ┆ --- ┆ --- │ │ f64 ┆ i64 ┆ f64 ┆ i32 │ ╞════════════════╪════════════╪═══════════╪══════════╡ │ 0.3 ┆ 0 ┆ 0.0 ┆ 0 │ │ 0.3 ┆ 1 ┆ 0.588235 ┆ 0 │ │ 0.3 ┆ 2 ┆ 1.232877 ┆ 1 │ │ 0.3 ┆ 3 ┆ 1.930517 ┆ 1 │ │ 0.3 ┆ 4 ┆ 2.676788 ┆ 2 │ │ … ┆ … ┆ … ┆ … │ │ 0.6 ┆ 15 ┆ 14.33334 ┆ 14 │ │ 0.6 ┆ 16 ┆ 15.333336 ┆ 15 │ │ 0.6 ┆ 17 ┆ 16.333335 ┆ 16 │ │ 0.6 ┆ 18 ┆ 17.333334 ┆ 17 │ │ 0.6 ┆ 19 ┆ 18.333334 ┆ 18 │ └────────────────┴────────────┴───────────┴──────────┘
3
1
79,356,278
2025-1-14
https://stackoverflow.com/questions/79356278/merging-lists-of-dictionaries-based-on-nested-list-values
I'm struggling to create a new list based on two input lists. Here's an example: data_1 = [ { "title": "System", "priority": "medium", "subtitle": "mason", "files": [ {"name": "mason", "path": "/tmp/mason/mason.json"}, {"name": "mason", "path": "/tmp/mason/build.json"} ]}, { "title": "System", "priority": "medium", "subtitle": "kylie", "files": [ {"name": "kylie", "path": "/tmp/kylie/build.tar"}, {"name": "kylie", "path": "/tmp/kylie/kylie.json"} ]} ] data_2 = [ { "title": "System", "priority": "medium", "files": [ {"name": "build", "path": "/tmp/kylie/build.tar"}, {"name": "kylie", "path": "/tmp/kylie/kylie.json"}, {"name": "mason", "path": "/tmp/mason/mason.json"}, {"name": "build", "path": "/tmp/mason/build.json"} ]} ] merged = [ { "title": "System", "priority": "medium", "subtitle": "mason", "files": [ {"name": "mason", "path": "/tmp/mason/mason.json"}, {"name": "build", "path": "/tmp/mason/build.json"} ]}, { "title": "System", "priority": "medium", "subtitle": "kylie", "files": [ {"name": "build", "path": "/tmp/kylie/build.tar"}, {"name": "kylie", "path": "/tmp/kylie/kylie.json"} ]} ] Basically, it would be OK to keep data from data_1 list and just replace files.name with the names coming from data_2 list, having the same path. Or multiply the element in data_2, assigning a subtitle and deleting all data_2.files items that don't match the paths specified in data_1 list. Anyway, I started doing something like: for d1 in data_1: for d2 in data_2: d1_key = (d1.get("title"), d1.get("priority")) d2_key = (d1.get("title"), d1.get("priority")) if d1_key == d2_key: for df in d1.get("files"): if any(df.get("path") in entry.get("path") for entry in d2.get("files")): print("Help! There must be easier way!") And I don't think it's clear enough to get into this further. Would you suggest any other way to get the merged list created?
You can create a reverse mapping that maps paths in data_2 to names, and iteratively modify all names in data_1 with the paths mapped with the mapping: from itertools import chain from operator import itemgetter name_of = dict( map( itemgetter('path', 'name'), chain.from_iterable(map(itemgetter('files'), data_2)) ) ) for record in data_1: for file in record['files']: file['name'] = name_of[file['path']] data_1 becomes: [{'title': 'System', 'priority': 'medium', 'subtitle': 'mason', 'files': [{'name': 'mason', 'path': '/tmp/mason/mason.json'}, {'name': 'build', 'path': '/tmp/mason/build.json'}]}, {'title': 'System', 'priority': 'medium', 'subtitle': 'kylie', 'files': [{'name': 'build', 'path': '/tmp/kylie/build.tar'}, {'name': 'kylie', 'path': '/tmp/kylie/kylie.json'}]}] You can deepcopy data_1 to a new list named merged first if you prefer not to modify data_1. Demo here
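And if data_1 should stay untouched, the deepcopy variant mentioned above would look roughly like this (reusing the name_of mapping built earlier):

from copy import deepcopy

merged = deepcopy(data_1)
for record in merged:
    for file in record['files']:
        file['name'] = name_of[file['path']]
# data_1 is unchanged; merged holds the corrected names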
1
1
79,356,647
2025-1-14
https://stackoverflow.com/questions/79356647/python-accessing-eo-edmund-optics-camera
I have a camera (EO Edmund Optics Camera, Model UI-154xSE-M) connected to a Windows computer via USB. When I open IDS Camera Manager, The camera is "configured correctly and can be opened". I am trying to write a code in Python to have the camera capture an image every x minutes for a period of y total minutes using the cv2 package. Unfortunately, I am getting an error while doing this. Please find my code below and the result: import cv2 import os import time from datetime import datetime, timedelta output_directory = camera = cv2.VideoCapture(0) if not camera.isOpened(): print("Error: Could not access the camera.") else: print("Camera found and initialized. Starting photo capture...") end_time = datetime.now() + timedelta(hours=24) capture_interval = 10 * 60 # Capture every 10 minutes (600 seconds) try: while datetime.now() < end_time: ret, frame = camera.read() print(f"Capture status: {ret}") # Print whether the capture was successful if not ret: print("Error: Failed to capture image.") continue # Skip this iteration and try again timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") file_path = os.path.join(output_directory, f"photo_{timestamp}.jpg") cv2.imwrite(file_path, frame) print(f"Photo captured and saved at {file_path}") time.sleep(capture_interval) except KeyboardInterrupt: print("Photo capture interrupted by user.") finally: camera.release() print("Camera released. Photo capture ended.") "Error: Failed to capture image." prints for the time frame I specify. Although "Camera found and initialized. Starting photo capture..." prints as well. I am very unfamiliar with accessing/controlling external devices with Python code. I am open to alternate ways to do this besides Python. I am new to coding and devices.
So isOpened() says True, but the read() call returns False? It might be worth trying the read call several times, instead of failing at the first sign of trouble. Some cameras are weird like that. OpenCV also has this quirk where it tries to set 640 x 480 resolution. If a camera doesn't support that, the interaction will fail. Try setting a supported resolution, e.g. the sensor's preferred video mode resolution (1280 x 1024 was it?) OpenCV struggles because that camera doesn't appear to support normal camera APIs, such as USB Unified Video Class (UVC). That is the interface supported by all webcams and most modern frame grabbers and capture cards. Your camera, being an industrial camera, probably supports either "GigE Vision" or "USB3 Vision", which are standards for industrial cameras. OpenCV might support that, if you compile it to do that. To my knowledge, no (official or third-party) binary build of OpenCV for Python has such support. The manufacturer's own site links to a Python library you can use to access your camera: https://pypi.org/project/pyueye I just checked the spec sheet for the part number you gave. The model has been discontinued. You might be out of luck in finding any support for it. The spec sheet doesn't speak about USB 3, so it's got to be USB 2, and that never, to my knowledge, carried any data conforming to an industrial vision API.
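A minimal sketch of those two suggestions (retrying the first reads and requesting a resolution the sensor actually supports); the 1280 x 1024 value is an assumption taken from the spec discussion above, and the retry count is arbitrary:

import cv2

camera = cv2.VideoCapture(0)
# ask for a mode the sensor supports instead of OpenCV's 640 x 480 default
camera.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 1024)

frame = None
for attempt in range(10):        # some cameras need a few reads to deliver a frame
    ret, frame = camera.read()
    if ret:
        break

if frame is None:
    print("Still no frame - the camera likely needs the vendor SDK (pyueye) instead.")
else:
    cv2.imwrite("test_frame.jpg", frame)
camera.release()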
2
4
79,355,047
2025-1-14
https://stackoverflow.com/questions/79355047/multiprocessing-with-tkinter-progress-bar-minimal-example
I'm looking for a way to track a multiprocessing task with Tkinter progress bar. This is something that can be done very straightforwardly with tqdm for display in the terminal. Instead of using tqdm I'd like to use ttk.Progressbar, but all attempts I have made at this, the tasks block on trying to update the progressbar (e.g. using update_idletasks and similar). Below is a template of the kind of solution I'm looking for: import time from multiprocessing import Pool from tqdm import tqdm import tkinter as tk import tkinter.ttk as ttk def task(x): time.sleep(0.1) return x * x def start_task(): num_processes = 12 num_tasks = 100 with Pool(processes=num_processes) as pool: with tqdm(total=num_tasks, desc="Processing") as pbar: def update_progress(_): # <Insert update to tk progress bar here> pbar.update(1) for i in range(num_tasks): pool.apply_async(task, args=(i,), callback=update_progress) pool.close() pool.join() if __name__ == "__main__": root = tk.Tk() root.title("Task Progress") progress_bar = ttk.Progressbar(root, maximum=100, length=300) progress_bar.pack(pady=20) button = tk.Button(text="Start", command=start_task) button.pack(fill="x", padx=10, pady=10) root.mainloop() In the solution I'd also like to get the output of the task (in this case a list of x*x). If another multiprocessing structure would work better please feel free to adjust (pool just seemed the simplest for demonstration). This is a question that has been asked on Stack Overflow before, but all the previous answers I've found have not been minimal examples and I didn't find them very helpful.
Calling pool.join() blocks the main thread until all the tasks are done, which causes Tkinter to hang. To get around this, you can call start_task in a thread. Running the thread with .start() (instead of .join()) will make it run in the background so it doesn't block the main thread. You can pass lists to the thread for keeping track of progress and results. These lists can be updated by the update_progress function, which takes the return value of task as an argument. To update the progress bar I've added the update_bar function. This removes each value from progress using pop, then increases the progress bar by that amount with step. This is run every 100ms while the thread is alive. Once it finishes, use_results is called, which can then do something with the result (I've made it display the sum of values as an example). Finally, I've introduced start_start_task (which could probably be named better but you get the idea). This is called when the button is clicked and initialises the lists, starts the thread and calls the update_bar function. import time from multiprocessing import Pool from threading import Thread import tkinter as tk import tkinter.ttk as ttk def task(x): time.sleep(0.1) return x*x def start_task(progress, results): num_processes = 12 num_tasks = 100 with Pool(processes=num_processes) as pool: def update_progress(result): progress.append(1) results.append(result) for i in range(num_tasks): pool.apply_async(task, args=(i,), callback=update_progress) pool.close() pool.join() def update_bar(thread, progress, results): # check for progress while progress: progress_bar.step(progress.pop()) # if the tasks are still running then call the update function after 100ms if thread.is_alive(): root.after(100, lambda: update_bar(thread, progress, results)) # if the tasks are done call the result handler else: use_results(results) def use_results(results): # do whatever you want with the result result_label.config(text = f"Sum of results: {sum(results)}") def start_start_task(): progress = [] results = [] thread = Thread(target=start_task, args=(progress, results)) thread.start() update_bar(thread, progress, results) if __name__ == "__main__": root = tk.Tk() root.title("Task Progress") progress_bar = ttk.Progressbar(root, maximum=100, length=300) progress_bar.pack(pady=20) button = tk.Button(root, text="Start", command=start_start_task) button.pack(fill="x", padx=10, pady=10) result_label = tk.Label(root, text="Press Start to get result") result_label.pack() root.mainloop()
2
2
79,356,143
2025-1-14
https://stackoverflow.com/questions/79356143/how-to-narrow-types-in-python-with-enum
In python, consider the following example from enum import StrEnum from typing import Literal, overload class A(StrEnum): X = "X" Y = "Y" class X: ... class Y: ... @overload def enum_to_cls(var: Literal[A.X]) -> type[X]: ... @overload def enum_to_cls(var: Literal[A.Y]) -> type[Y]: ... def enum_to_cls(var: A) -> type[X] | type[Y]: match var: case A.X: return X case A.Y: return Y case _: raise ValueError(f"Unknown enum value: {var}") When I attempt to call enum_to_cls, I get a type error, with the following case: selected_enum = random.choice([x for x in A]) enum_to_cls(selected_enum) # Argument of type "A" cannot be assigned to parameter "var" of type "Literal[A.Y]" in # function "enum_to_cls" # "A" is not assignable to type "Literal[A.Y]" [reportArgumentType] I understand the error and it makes sense, but I wanted to know, if there is any way to avoid this error. I know I can avoid this error, creating a branch for each enum case but then I am back to square one of why I wanted to created the function enum_to_cls.
The simple workaround is to include the implementation's signature as a third overload: (playgrounds: Pyright, Mypy) @overload def enum_to_cls(var: Literal[A.X]) -> type[X]: ... @overload def enum_to_cls(var: Literal[A.Y]) -> type[Y]: ... @overload def enum_to_cls(var: A) -> type[X] | type[Y]: ... def enum_to_cls(var: A) -> type[X] | type[Y]: # Implementation goes here selected_enum = random.choice([x for x in A]) reveal_type(enum_to_cls(selected_enum)) # type[X] | type[Y] This is a workaround rather than a definitive solution, because the enum A is supposed to be expanded to the union of its members during overload evaluation. Indeed, if the argument were declared to be of the type Literal[A.X, A.Y], there would be no error: (playgrounds: Pyright, Mypy) @overload def enum_to_cls(var: Literal[A.X]) -> type[X]: ... @overload def enum_to_cls(var: Literal[A.Y]) -> type[Y]: ... def enum_to_cls(var: A) -> type[X] | type[Y]: # Implementation goes here selected_enum: Literal[A.X, A.Y] = ... reveal_type(enum_to_cls(selected_enum)) # type[X] | type[Y] Type expansion during overload evaluation is part of a recent proposed addition to the specification. Pyright has yet to conform to this, but it will once the proposal is accepted.
4
4
79,355,830
2025-1-14
https://stackoverflow.com/questions/79355830/python-sqlalchemy-mapping-column-names-to-attributes-and-vice-versa
I have a problem where I need to access a MS SQL DB, so naturally its naming convention is different: TableName(Id, ColumnName1, ColumnName2, ABBREVIATED,...) Here is how I constructed the model in Python: class TableName(Base): __tablename__ = 'TableName' id = Column('Id', BigInteger, primary_key=True, autoincrement=True) order_id = Column('OrderId', BigInteger, ForeignKey('Orders.Id'), nullable=False) column_name1 = Column('ColumnName1', String(50), nullable=True) column_name2 = Column('ColumnName2', String(50), nullable=True) abbreviated= Column('ABBREVIATED', String(50), nullable=False) ... Then, I have a Repository: class TableNameRepository: def __init__(self, connection_string: str): self.engine = create_engine(connection_string) self.Session = sessionmaker(bind=self.engine) def get_entry(self, order_id: int, column_name1: str) -> Optional[TableName]: with self.Session() as session: return session.query(TableName).filter_by( order_id=order_id, column_name1=column_name1 ).first() This works; however, now to my actual problem: I have a template text file with placeholders using the SQL column names, meaning there are Id, ColumnName1, ColumnName2, ABBREVIATED strings and I need to replace them with values from the model. Here is how I tried to approach this: retrieved_table_entity = self.repository.get_entry(order_id, something) attr_to_column = { column.key: column.name for column in retrieved_table_entity.__table__.columns } The problem is that this doesn't work: it still creates keys that look like: {'Id': 'Id', 'OrderId': 'OrderId', 'ColumnName1': 'ColumnName1', ...} And so, the next step, which is: for attr_name, column_name in attr_to_column.items(): value = getattr(retrieved_table_entity, attr_name, None) logging.debug(f"Accessing {attr_name}: {value}") reg_dict[column_name] = '' if value is None else str(value) doesn't work because the keys are not the Python attribute names; when testing in debug mode, if I manually call getattr(retrieved_table_entity, 'id'), it returns the value. To summarize: I need a way to map, so that I have a key/value dictionary like: { Id: 1 OrderID: 123 ColumnName1: 'SomeValue' } And then I could simply do something like self.template.format(**keys_to_values_dict) I have tried consulting with LLMs, and they keep circling around without suggesting a way to do this; I don't want to manually map the model again in this service function. I also reviewed the Mapping API and can't find anything relevant: https://docs.sqlalchemy.org/en/14/orm/mapping_api.html For reference, my current dependencies: sqlalchemy==2.0.36 pymssql==2.2.11
Untested but something like this: from sqlalchemy import inspect old_to_new = {c.name: k for k, c in inspect(TableName).columns.items()} https://docs.sqlalchemy.org/en/20/orm/mapping_api.html#sqlalchemy.orm.Mapper.columns https://docs.sqlalchemy.org/en/20/orm/mapping_styles.html#inspection-of-mapper-objects
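Applied to the template use case from the question, that mapping could be used roughly like this (an untested sketch inside the service method, with self.repository and self.template as in the question; column_name1 stands in for whatever filter value you pass):

from sqlalchemy import inspect

entity = self.repository.get_entry(order_id, column_name1)   # existing query
reg_dict = {}
for attr, column in inspect(TableName).columns.items():
    value = getattr(entity, attr)           # attr is the Python name, column.name the SQL name
    reg_dict[column.name] = '' if value is None else str(value)
filled = self.template.format(**reg_dict)   # placeholders use the SQL column names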
1
1
79,352,803
2025-1-13
https://stackoverflow.com/questions/79352803/how-to-detect-most-mortared-stones-with-opencv-findcontours
I need to correctly outline as many as possible of the mortared stones in a street zone. The code below correctly detects some of them in the stones image "in.jpg", but it is not obvious why many remain undetected or only partly outlined. I'd also like to fix cases like contour 56, 57 or 92 by exploiting the fact that the stones are aligned and mostly oval image with detected contours "out.jpg". I know how to remove the obvious outliers, but before that I need to improve the detection method. import cv2 img = cv2.imread('in.jpg') gray = cv2.bitwise_not(img) gray = cv2.cvtColor(gray, cv2.COLOR_BGR2GRAY) gray = cv2.GaussianBlur(gray, (113, 113), 0) thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 61, 2) thresh = cv2.dilate(thresh, None, iterations=5) thresh = cv2.erode(thresh, None, iterations=1) cnts, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) print(f'Found {len(cnts)} contours') cv2.drawContours(img, cnts, -1, (0, 255, 0), 2) centroids = [cv2.moments(c) for c in cnts] for i, c in enumerate(centroids): cv2.putText(img, str(i), (int(c['m10'] / c['m00']), int(c['m01'] / c['m00']) - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2) cv2.imwrite('out.jpg', img) The various parameters (blur 113, etc.) have been obtained by trial and error. It is not enough to just count the stones, I need to get their shape as closely as possible. Thank you for your suggestions! Contributing to Markus' question: here is an example of a mixed zone:
You can use cv2.moments(contour) for calculating the ratio between the contour length and the contour area. This will let you rule out non oval contours. Having said that, this is a relatively difficult problem for a classical CV approach. A neural network will do a better job (given enough training data) than findContours. This is mainly because the stones are quite different in their appearance and modeling them using a classical model requires many parameters that will capture this variance. Neural networks are naturally better suited for that kind of task. There are off the shelf segmenters like the SAM model that can do the job and a quick search shows there are also models designed specifically for stones segmentation. Here is an example of using SAM2 segmentation: You can find a colab here
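One way to realize that length/area ratio idea is a circularity filter, sketched here using cv2.arcLength together with the m00 moment for the area; the thresholds are guesses you would have to tune:

import cv2
import numpy as np

def keep_oval_contours(cnts, min_area=500, min_circularity=0.4):
    kept = []
    for c in cnts:
        area = cv2.moments(c)['m00']
        if area < min_area:            # drop tiny specks before dividing
            continue
        perimeter = cv2.arcLength(c, True)
        # circularity is 1.0 for a perfect circle, lower for ragged / elongated shapes
        circularity = 4 * np.pi * area / (perimeter * perimeter)
        if circularity >= min_circularity:
            kept.append(c)
    return kept

cnts = keep_oval_contours(cnts)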
2
1
79,355,372
2025-1-14
https://stackoverflow.com/questions/79355372/how-to-get-the-day-month-name-of-a-column-in-polars
I have a polars dataframe df which has a datetime column date. I'm trying to get the name of the day and month of that column. Consider the following example. import polars as pl from datetime import datetime df = pl.DataFrame({ "date": [datetime(2024, 10, 1), datetime(2024, 11, 2)] }) I was hoping that I could pass parameter to month() or weekday() to get the desired format. df.with_columns( pl.col("date").dt.month().alias("Month"), pl.col("date").dt.weekday().alias("Day") ) shape: (2, 3) ┌─────────────────────┬───────┬─────┐ │ date ┆ Month ┆ Day │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ i8 ┆ i8 │ ╞═════════════════════╪═══════╪═════╡ │ 2024-10-01 00:00:00 ┆ 10 ┆ 2 │ │ 2024-11-02 00:00:00 ┆ 11 ┆ 6 │ └─────────────────────┴───────┴─────┘ However, this does not seem to be the case. My desired output looks as follows. shape: (2, 3) ┌─────────────────────┬───────┬──────────┐ │ date ┆ Month ┆ Day │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ str ┆ str │ ╞═════════════════════╪═══════╪══════════╡ │ 2024-10-01 00:00:00 ┆ Oct ┆ Tuesday │ │ 2024-11-02 00:00:00 ┆ Nov ┆ Saturday │ └─────────────────────┴───────┴──────────┘ How can I extract the day and month name from the date column?
You can use pl.Expr.dt.strftime to convert a date / time / datetime column into a string column of a given format. The format can be specified using the chrono strftime format. In your specific example, the following specifiers might be of interest: %B for the full month name. %b for the abbreviated month name. Always 3 letters. %A for the full weekday name. %a for the abbreviated weekday name. Always 3 letters. df.with_columns( pl.col("date").dt.strftime("%B").alias("month"), pl.col("date").dt.strftime("%b").alias("month (short)"), pl.col("date").dt.strftime("%A").alias("day"), pl.col("date").dt.strftime("%a").alias("day (short)"), ) shape: (2, 5) ┌─────────────────────┬──────────┬───────────────┬──────────┬─────────────┐ │ date ┆ month ┆ month (short) ┆ day ┆ day (short) │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ str ┆ str ┆ str ┆ str │ ╞═════════════════════╪══════════╪═══════════════╪══════════╪═════════════╡ │ 2024-10-01 00:00:00 ┆ October ┆ Oct ┆ Tuesday ┆ Tue │ │ 2024-11-02 00:00:00 ┆ November ┆ Nov ┆ Saturday ┆ Sat │ └─────────────────────┴──────────┴───────────────┴──────────┴─────────────┘
5
7
79,354,633
2025-1-14
https://stackoverflow.com/questions/79354633/reshape-dictionary-to-make-violin-plot
I have some data that is saved in a dictionary of dataframes. The real data is much bigger with index up to 3000 and more columns. In the end I want to make a violinplot of two of the columns in the dataframes but for multiple dictionary entries. The dictionary has a tuple as a key and I want to gather all entries which first number is the same. import pandas as pd import numpy as np import matplotlib.pyplot as plt data_dict = { (5, 1): pd.DataFrame({"Data_1": [0.235954, 0.739301, 0.443639], "Data_2": [0.069884, 0.236283, 0.458250], "Data_3": [0.170902, 0.496346, 0.399278], "Data_4": [0.888658, 0.591893, 0.381895]}), (5, 2): pd.DataFrame({"Data_1": [0.806812, 0.224321, 0.504660], "Data_2": [0.070355, 0.943047, 0.579285], "Data_3": [0.526866, 0.251339, 0.600688], "Data_4": [0.283107, 0.409486, 0.307315]}), (7, 3): pd.DataFrame({"Data_1": [0.415159, 0.834547, 0.170972], "Data_2": [0.125926, 0.401789, 0.759203], "Data_3": [0.398494, 0.587857, 0.130558], "Data_4": [0.202393, 0.395692, 0.035602]}), (7, 4): pd.DataFrame({"Data_1": [0.923432, 0.622174, 0.185039], "Data_2": [0.759154, 0.126699, 0.783596], "Data_3": [0.075643, 0.287721, 0.939428], "Data_4": [0.983739, 0.738550, 0.108639]}) } My idea was that I could re-arrange it into a different dictionary and then plot the violinplot. Say that 'Data_1' and 'Data_4' are of interest. So then I loop over the keys in dict as below. new_dict = {} for col in ['Data_1','Data_4']: df = pd.DataFrame() for i in [5,7]: temp = [] for key, value in dict.items(): if key[0]==i: temp.extend(value[col]) df[i] = temp new_dict[col] = df This then make the following dict. new_dict = {'Data_1': 5 7 0 0.235954 0.415159 1 0.739301 0.834547 2 0.443639 0.170972 3 0.806812 0.923432 4 0.224321 0.622174 5 0.504660 0.185039, 'Data_4': 5 7 0 0.888658 0.202393 1 0.591893 0.395692 2 0.381895 0.035602 3 0.283107 0.983739 4 0.409486 0.738550 5 0.307315 0.108639} Which I then loop over to make the violin plots for Data_1and Data_4. for key, value in new_dict.items(): fig, ax = plt.subplots() ax.violinplot(value, showmeans= True) ax.set(title = key, xlabel = 'Section', ylabel = 'Value') ax.set_xticks(np.arange(1,3), labels=['5','7']) While I get the desired result it's very cumbersome to re-arrange the dictionary. Could this be done in a faster way? Since it's the same column I want for each dictionary entry I feel that it should.
You could minimize the reshaping by using concat+melt and a higher level plotting library like seaborn: import seaborn as sns sns.catplot(data=pd.concat(data_dict, names=['section', None]) [['Data_1', 'Data_4']] .melt(ignore_index=False, var_name='dataset') .reset_index(), row='dataset', x='section', y='value', kind='violin', ) Output: Another approach to reshape: tmp = (pd .concat(data_dict, names=['section', None]) [['Data_1', 'Data_4']] .pipe(lambda x: x.set_axis(pd.MultiIndex.from_arrays([x.index.get_level_values('section'), x.groupby('section').cumcount()]))) .T.stack() ) # then access the datasets tmp.loc['Data_1'] # section 5 7 # 0 0.235954 0.415159 # 1 0.739301 0.834547 # 2 0.443639 0.170972 # 3 0.806812 0.923432 # 4 0.224321 0.622174 # 5 0.504660 0.185039
2
2
79,354,459
2025-1-14
https://stackoverflow.com/questions/79354459/apply-operation-to-all-elements-in-matrix-skipping-numpy-nan
I have an array filled with data only in lower triangle spaces, the rest is np.nan. I want to do some operations on this matrix, more precisely- with data elements, not nans, because I expect the behaviour when nans elements are skipped in vectorized operation to be much quicker. I have two test arrays: arr = np.array([ [1.111, 2.222, 3.333, 4.444, 5.555], [6.666, 7.777, 8.888, 9.999, 10.10], [11.11, 12.12, 13.13, 14.14, 15.15], [16.16, 17.17, 18.18, 19.19, 20.20], [21.21, 22.22, 23.23, 24.24, 25.25] ]) arr_nans = np.array([ [np.nan, np.nan, np.nan, np.nan, np.nan], [6.666, np.nan, np.nan, np.nan, np.nan], [11.11, 12.12, np.nan, np.nan, np.nan], [16.16, 17.17, 18.18, np.nan, np.nan], [21.21, 22.22, 23.23, 24.24, np.nan] ]) Thats the way I test them: test = timeit.timeit('arr * 5 / 2.123', globals=globals(), number=1000) test_nans = timeit.timeit('arr_nans * 5 / 2.123', globals=globals(), number=1000) masked_arr_nans = np.ma.array(arr_nans, mask=np.isnan(arr_nans)) test_masked_nans = timeit.timeit('masked_arr_nans * 5 / 2.123', globals=globals(), number=1000) print(test) # 0.0017232997342944145s print(test_nans) # 0.0017070993781089783s print(test_masked_nans) # 0.052730199880898s I have created a mask array masked_arr_nans and masked all nans. But this way is far slower then the first two. I dont understand why. The main question is- which is the quckest way to operate with arrays like arr_nans containing a lot of nans, probably there is a qicker approach then the ones I mentioned. Side question is- why masked array works much slower?
I think this hypothesis is incorrect: "I expect the behaviour when nans elements are skipped in vectorized operation to be much quicker." In your array the data is contiguous, which is, among other things, why vectorization is fast. Using a masked array doesn't change that fact: there is just as much data, and the masked portions still need to be ignored during processing, which adds the extra cost of checking which elements are masked. The skipping still has to happen inside the masked array. Quite often, with vectorized operations, it is more efficient to perform extra operations and handle the data as contiguous values rather than trying to minimize the number of operations. If you really need to perform several operations or complex/expensive computations on a subset of the data, I would advise creating a new array with just this data. The cost of selecting the data is only paid once, or will be lower than the cost of the computations:
idx = np.tril_indices_from(arr, k=-1)
tril_arr = arr[idx]
# do several things with tril_arr
# restore a rectangular form
out = np.full_like(arr, np.nan)
out[idx] = tril_arr
Example: let's take your input array and perform repeated operations on it (for each operation we compute arr = 1 - arr), applying the operation either on the full array or on the flattened lower triangle. The cost of selecting the subset is not worth it if we only perform a few operations; after enough intermediate operations the two approaches become identical in speed. Now let's use a more complex/expensive computation (arr = log(exp(arr))). Here we see two things: after a threshold it is faster to subset the data, and the threshold at which the two approaches (subset vs full) reach the same speed is not the same as with the arr = 1 - arr example. As a rule of thumb, if the operation you want to perform on the non-masked values is cheap or not repeated, don't bother and apply it on the whole thing; if the operation is complex/expensive/repeated, then consider subsetting the data. (The plots in the original answer compare the subset and full-array timings, in relative form, for the arr = 1 - arr and arr = log(exp(arr)) examples.)
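As a rough illustration of the trade-off (the array size and repeat count below are arbitrary, chosen only for this sketch):
import numpy as np
import timeit

n = 1000
arr = np.random.rand(n, n)
arr[np.triu_indices_from(arr)] = np.nan      # keep only the strict lower triangle as data

idx = np.tril_indices_from(arr, k=-1)
tril = arr[idx]                              # contiguous 1D copy of the valid values

t_full = timeit.timeit(lambda: np.log(np.exp(arr)), number=100)
t_tril = timeit.timeit(lambda: np.log(np.exp(tril)), number=100)
print(f"full array:     {t_full:.3f} s")
print(f"lower triangle: {t_tril:.3f} s")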
1
2
79,353,825
2025-1-14
https://stackoverflow.com/questions/79353825/2d-how-to-make-bezier-curve-line-has-width-by-using-python
If I have a Bezier curve, how could I make it have width, and how to get vertices of its contour? My attempt: I have plot a Bezier curve by using one start point, two control points and one end points, their coordinates are: p0_ = [11, -0.45] p1_ = [13.5, -0.45] p2_ = [13.5, -4] p3_ = [16, -4] the figure is as shown below: But I want to make it have width, lets say width=0.7, then I change the original coordinates into: width = 0.7 p01_ = [11, -0.45 + width / 2] p11_ = [13.5, -0.45 + width / 2] p21_ = [13.5, -4 + width / 2] p31_ = [16, -4 + width / 2] p02_ = [11, -0.45 - width / 2] p12_ = [13.5, -0.45 - width / 2] p22_ = [13.5, -4 - width / 2] p32_ = [16, -4 - width / 2] And then I plot new figure, I find it looks quite strange. It is not uniform. Obviously, this is not the one I want, I guess the coordinates are wrong, But I do not know how to get correct one? What I want is like the one shown below: You see, everywhere is uniform.
This follows the prescription in MBo's comment. For each point on the Bezier curve you compute the normal and then go width/2 along that normal to each side of the Bezier curve. To compute the normals you could take successive line segments along your discretised Bezier curve. However, I think you will get a smoother answer (especially at the end points of each Bezier) if you obtain the normals by differentiating the original Bezier curve r(t) to get a tangent dr/dt and then get a normal as k cross dr/dt, where k is a unit vector in the z direction. This is what is done below. If you want a filled or hatched region then take the left (xL,yL) and right (xR,yR) vertices and create a polygon, which you can then fill. (You will have to reverse either the left or right vertices so that you continue to traverse the polygon in the same sense.) import numpy as np import matplotlib.pyplot as plt #----------------------------- class Bezier: '''class for Cubic Bezier curve''' def __init__( self, q0, q1, q2, q3 ): # control points self.p0 = np.array( q0 ) self.p1 = np.array( q1 ) self.p2 = np.array( q2 ) self.p3 = np.array( q3 ) # normals (z cross p) self.n0 = np.array( [-q0[1], q0[0] ] ) self.n1 = np.array( [-q1[1], q1[0] ] ) self.n2 = np.array( [-q2[1], q2[0] ] ) self.n3 = np.array( [-q3[1], q3[0] ] ) def pt( self, t ): return (1-t)**3 * self.p0 + 3*t*(1-t)**2 * self.p1 + 3*t**2*(1-t) * self.p2 + t**3 * self.p3 def sides( self, t, width ): normal = -3*(1-t)**2 * self.n0 + (3*(1-t)**2-6*t*(1-t)) * self.n1 + (6*t*(1-t)-3*t**2) * self.n2 + 3*t**2 * self.n3 normal /= np.linalg.norm( normal ) point = self.pt( t ) return point + ( width / 2 ) * normal, point - ( width / 2 ) * normal #----------------------------- def getCurves( BezierData, n, width ): x = []; y = []; xL = []; yL = []; xR = []; yR = [] for B in BezierData: arc = Bezier( B[0], B[1], B[2], B[3] ) for t in np.linspace( 0.0, 1.0, n ): P = arc.pt( t ); x.append( P[0] ); y.append( P[1] ) L, R = arc.sides( t, width ); xL.append( L[0] ); yL.append( L[1] ); xR.append( R[0] ); yR.append( R[1] ) return x, y, xL, yL, xR, yR #----------------------------- data = [ [ [11,-0.45], [13.5, -0.45], [13.5, -4], [16, -4] ] ] n = 100 # points per Bezier width = 0.7 # width x, y, xL, yL, xR, yR = getCurves( data, n, width ) plt.plot( x , y , 'k:' ) plt.plot( xL, yL, 'b-' ) plt.plot( xR, yR, 'b-' ) plt.show()
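To get the filled ribbon mentioned above, you can close the left and right offset curves into a single polygon. A small sketch that reuses the x, y, xL, yL, xR, yR lists returned by getCurves; the right-hand side is reversed so the polygon is traversed in one consistent direction:
poly_x = xL + xR[::-1]
poly_y = yL + yR[::-1]
plt.fill(poly_x, poly_y, color='lightblue')
plt.plot(x, y, 'k:')
plt.axis('equal')
plt.show()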
2
2
79,354,062
2025-1-14
https://stackoverflow.com/questions/79354062/how-to-concantenate-elements-of-a-binary-column
I have a DataFrame with a binary column that represents the hexadecimal encoding of an initial string: random_id random_id_cesu8 123456789012 [31 32 33 34 35 36 37 38 39 30 31 32] The random_id_cesu8 column contains the binary representation of the random_id string encoded in UTF-8 and displayed as a list of byte values in hexadecimal format. I want to transform the random_id_cesu8 column into a single concatenated hexadecimal string: 313233343536373839303132, which is the concatenation of each individual byte value from the random_id_cesu8 list. I’ve tried multiple approaches, but they all result in the original random_id value (123456789012) instead of the desired concatenated hexadecimal string (313233343536373839303132). How can I correctly achieve this transformation?
Here's a solution using PySpark to convert the binary column to its hexadecimal representation: from pyspark.sql import functions as F # Create DataFrame with the initial value df = spark.createDataFrame([("123456789012",)], ["value"]) # Convert the value to UTF-8 encoding and then to hex df = df.withColumn( "encoded_hex", F.hex(F.encode(F.col("value"), 'utf-8')), ) # Display the resulting DataFrame df.show(truncate=False) Explanation: encode(df.value, 'utf-8'): This function encodes the string value into a binary format using UTF-8 encoding. hex(): This function converts the binary data into its hexadecimal representation. The result will be a DataFrame with the encoded_hex column containing the desired hexadecimal string 313233343536373839303132. +------------+------------------------+ |value |encoded_hex | +------------+------------------------+ |123456789012|313233343536373839303132| +------------+------------------------+
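If random_id_cesu8 is already stored as a BinaryType column, you should not need to re-encode the original string at all: applying hex directly to the binary column gives the same concatenated string. A sketch, assuming that column name and type:
df = df.withColumn("encoded_hex", F.hex(F.col("random_id_cesu8")))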
1
1
79,354,405
2025-1-14
https://stackoverflow.com/questions/79354405/polars-schema-typeerror-dtypes-must-be-fully-specified-got-datetime
Hi I want to define a polars schema. It works fine without a datetime format. However it fails with pl.Datetime. import polars as pl testing_schema: pl.Schema = pl.Schema( { "date": pl.Datetime, "some_int": pl.Int64, "some_str": pl.Utf8, "some_cost": pl.Float64, }, ) The error: lib/python3.11/site-packages/polars/schema.py", line 47, in _check_dtype raise TypeError(msg) TypeError: dtypes must be fully-specified, got: Datetime Unfortunately I do not have any idea on how to fix it.
If you look at pl.Datetime, you'll find that it is initialized with parameters: class polars.datatypes.Datetime( time_unit: TimeUnit = 'us', time_zone: str | timezone | None = None ) Unlike, say, pl.Int64. Hence, you need to add the parentheses at the end, (), to use the defaults, or to pass arguments and override them. import polars as pl testing_schema: pl.Schema = pl.Schema( { "date": pl.Datetime(), "date2": pl.Datetime(time_unit="ns", time_zone="UTC"), "some_int": pl.Int64, }, )
3
4
79,354,192
2025-1-14
https://stackoverflow.com/questions/79354192/safe-eval-by-explitily-whitelisting-builtins-and-bailing-on-dunders
I know it's inadvisable to use eval() on untrusted input, but I want to see where this sanitiser fails. It uses a whitelist to only allow harmless builtins, and it immediately bails if there are any dunder properties called. (Note: the reason it string searches .__ and not just __ is because I want to allow things like foo.bar__baz). def safe_eval(code: str) -> Any | None: if '.__' in code: raise ValueError allowed_builtins = [ 'abs', 'all', 'any', 'ascii', 'bin', 'bool', 'bytearray', 'bytes', 'callable', 'chr', 'complex', 'dict', 'divmod', 'enumerate', 'filter', 'float', 'format', 'frozenset', 'hasattr', 'hash', 'hex', 'int', 'isinstance', 'issubclass', 'iter', 'len', 'list', 'map', 'max', 'min', 'next', 'object', 'oct', 'ord', 'pow', 'range', 'repr', 'reversed', 'round', 'set', 'slice', 'sorted', 'str', 'sum', 'tuple', 'zip' ] return eval( code, globals={ '__builtins__': { builtin : getattr(__builtins__, builtin) for builtin in allowed_builtins } }, locals={}, ) Re: "Why do you want to do this?" I want to be able to filter python objects based on user input. These objects are "payloads" for "sessions" in the application, so allowing users to filter sessions based on arbitrary expressions on payload contents - e.g. len(payload.things) > 12 - would be a useful feature. So my question is: what input string would allow an attacker access data "outside" the eval, i.e. variables in the script or access to the OS?
Just put a space in: . __ and the standard bypasses work fine. Like this one: [x for x in (). __class__. __base__. __subclasses__() if x. __name__ == 'Quitter'][0]. __init__. __globals__['__builtins__']['__import__']( 'os').system('install ransomware or something') Seriously, don't use eval. These ad-hoc sanitizers never cover the full attack surface, and even if one miraculously does, the attack surface expands with every Python version.
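If the real goal is just filtering on payload attributes, a stricter direction is to validate the expression's AST against a whitelist of node types before evaluating it. The sketch below only illustrates that idea; it narrows the attack surface considerably but is not a vetted sandbox, and it still deserves careful review before being exposed to untrusted users:
import ast
import builtins

ALLOWED_NODES = (
    ast.Expression, ast.BoolOp, ast.Compare, ast.BinOp, ast.UnaryOp,
    ast.Name, ast.Load, ast.Constant, ast.Attribute, ast.Call,
    ast.And, ast.Or, ast.Not, ast.USub,
    ast.Eq, ast.NotEq, ast.Lt, ast.LtE, ast.Gt, ast.GtE,
    ast.Add, ast.Sub, ast.Mult, ast.Div,
)
ALLOWED_CALLS = {"len", "min", "max", "abs"}

def checked_filter(expr: str):
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED_NODES):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            raise ValueError("dunder access not allowed")
        if isinstance(node, ast.Call) and not (
            isinstance(node.func, ast.Name) and node.func.id in ALLOWED_CALLS
        ):
            raise ValueError("only whitelisted calls are allowed")
    return compile(tree, "<filter>", "eval")

def run_filter(expr: str, payload):
    code = checked_filter(expr)
    env = {"__builtins__": {name: getattr(builtins, name) for name in ALLOWED_CALLS}}
    return eval(code, env, {"payload": payload})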
1
2
79,350,300
2025-1-12
https://stackoverflow.com/questions/79350300/cant-change-recursion-limit-python
sys.setrecursionlimit is set to 20000, but I get error because of "996 more times", despite setting limit to a bigger number import sys from functools import * sys.setrecursionlimit(20000) @lru_cache(None) def f(n): if n <= 3: return n - 1 if n > 3: if n % 2 == 0: return f(n - 2) + (n / 2) - f(n - 4) else: return f(n - 1) * n + f(n - 2) result = f(4952) + 2 * f(4958) + f(4964) print(result) error: Traceback (most recent call last): File "-", line 21, in <module> result = f(4952) + 2 * f(4958) + f(4964) ^^^^^^^ File "-", line 14, in f return f(n - 2) + (n / 2) - f(n - 4) ^^^^^^^^ File "-", line 14, in f return f(n - 2) + (n / 2) - f(n - 4) ^^^^^^^^ File "-", line 14, in f return f(n - 2) + (n / 2) - f(n - 4) ^^^^^^^^ [Previous line repeated 996 more times] RecursionError: maximum recursion depth exceeded the answer should be 9200 Why doesn't sys.setrecursionlimit change the limit?
First, your code, as shown above, works fine on my system. Even when I reduce the recursion limit from 20000 down to 4000. However, if you rewrite your expressions to invoke f() on smaller numbers before larger numbers, then I'm able to run it with a recursion limit around 1250: import sys from functools import lru_cache sys.setrecursionlimit(1250) @lru_cache(None) def f(n): if n <= 3: return n - 1 if n % 2 == 0: return - f(n - 4) + f(n - 2) + n/2 return f(n - 2) + f(n - 1) * n result = f(4952) + 2 * f(4958) + f(4964) print(result) The problem you describe sounds similar to this issue 3.12 setrecursionlimit is ignored in connection with @functools.cache which may also be Windows specific. Try an older or newer Python, possibly on a different architecture.
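Separately from the recursion-limit issue, you can sidestep the limit entirely by filling the values bottom-up instead of recursing. A sketch using the same recurrence as in the question (it produces whatever value the cached recursive version would):
def f_iterative(n_max):
    f = {n: n - 1 for n in range(4)}          # base cases f(0)..f(3), matching "n <= 3: return n - 1"
    for n in range(4, n_max + 1):
        if n % 2 == 0:
            f[n] = f[n - 2] + n / 2 - f[n - 4]
        else:
            f[n] = f[n - 1] * n + f[n - 2]
    return f

f = f_iterative(4964)
result = f[4952] + 2 * f[4958] + f[4964]
print(result)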
1
1
79,353,015
2025-1-13
https://stackoverflow.com/questions/79353015/a-function-to-modify-other-already-defined-functions
Is there a way to write a function that takes some parameter, to modify multiple existing functions? For example, if I have: def add(a, b): return a + b def minus(a, b): return a - b def offset_b(func, n): (??) func(a, b - n) then I want to execute below: offset_b(add(1, 2), 0.5) offset_b(minus(1, 2), 0.5) I know one way to do it is: def new_b(b): return b-0.5 add(1, new_b(b)) minus(1, new_b(b)) Is there a way to use a wrapper or something to take the func (add/minus) and n, so that every time if I input some add(1, 2) it actually does add(1, 2 - 0.5) instead of the original version?
Define a function that accepts an existing function with known parameters, and returns a new function that wraps it:
import functools  # To borrow wrapped function's docs nicely

def offset_b(func, n):
    @functools.wraps(func)
    def wrapped(a, b):
        return func(a, b - n)
    return wrapped
You can then define new versions of add and minus that apply the offset:
add_with_offset_1 = offset_b(add, 1)
minus_with_offset_1 = offset_b(minus, 1)
Or, if you want a proper decorator that applies this mutation to a new function which should only ever be called with the offset applied, you can increase the nesting to produce a decorator with parameters, so that the first call takes the offset and returns a function that, when called with another function, produces the wrapped function:
def offset_b(n):
    def do_wrap(func):
        @functools.wraps(func)
        def wrapped(a, b):
            return func(a, b - n)
        return wrapped
    return do_wrap
which allows you to use it as before, but in a different ordering:
offset_by_one = offset_b(1)
add_with_offset_1 = offset_by_one(add)    # Or as a one-liner: add_with_offset_1 = offset_b(1)(add)
minus_with_offset_1 = offset_by_one(minus)
or use it as a decorator to modify the wrapped function immediately:
@offset_b(0.5)
def add(a, b):
    return a + b

@offset_b(0.5)
def minus(a, b):
    return a - b
so add and minus exist immediately with the offset of 0.5 applied to all calls (confusing in this case, but useful in others).
1
4
79,352,669
2025-1-13
https://stackoverflow.com/questions/79352669/how-why-are-2-3-10-and-x-3-10-with-x-2-ordered-differently
Sets are unordered, or rather their order is an implementation detail. I'm interested in that detail. And I saw a case that surprised me: print({2, 3, 10}) x = 2 print({x, 3, 10}) Output (Attempt This Online!): {3, 10, 2} {10, 2, 3} Despite identical elements written in identical order, they get ordered differently. How does that happen, and is that done intentionally for some reason, e.g., for optimizing lookup speed? My sys.version and sys.implementation: 3.13.0 (main, Nov 9 2024, 10:04:25) [GCC 14.2.1 20240910] namespace(name='cpython', cache_tag='cpython-313', version=sys.version_info(major=3, minor=13, micro=0, releaselevel='final', serial=0), hexversion=51183856, _multiarch='x86_64-linux-gnu')
It's a function of a couple things: Hash bucket collisions - For the smallest set size, 8 (implementation detail of CPython), 2 and 10 collide on their cutdown hash codes (which, again implementation detail, are 2 and 10; mod 8, they're both 2). Whichever one is inserted first "wins" and gets bucket index 2, the other gets moved by the probing operation. The probing operation (again, CPython implementation detail) initially checks linearly adjacent buckets for an empty bucket (because it usually finds one, and better memory locality improves cache performance), and only if it doesn't find one does it begin the randomized jumping about algorithm to find an empty bucket (it can't do pure linear probing, because that would make it far too easy to trigger pathological cases that change set operations from amortized average-case O(1) to O(n)). Compile-time optimizations: In modern CPython, sets and lists of constant literals that are at least three elements long are constructed at compile time as an immutable container (frozenset and tuple respectively). At runtime, it builds an empty set/list, then updates/extends it with the immutable container, rather than performing individual loads and adds/appends for each element. This means that when you build with s = {2, 3, 10}, you're actually doing s = set(), s.update(frozenset({2, 3, 10})) (with the frozenset pulled from cache), while s = {x, 3, 10} is building by loading x, 3 and 10 on the stack, then building the set as a single operation. The two of these mean that you're actually building it differently; {x, 3, 10} is inserting 2, then 3, then 10, so buckets 2 and 3 are filled, and 10 gets relocated (the probing strategy clearly puts it in bucket 0 or 1, before bucket 2). When you do {2, 3, 10}, at compile-time it's making a frozenset({3, 10, 2}), then at runtime, it's creating the empty set, then updating it by iterating that frozenset, which has already reordered the elements, so now they're no longer being added in 2, 3, 10 order, and the race for "preferred" buckets is won by different elements. In summary, the behavior of {x, 3, 10} is equivalent to: s = set() s.add(x) s.add(3) s.add(10) which predictably gives buckets 2 and 3 to 2 and 3 themselves, with 10 being displaced to bucket 0 or 1. By contrast, {2, 3, 10} builds a frozenset({3, 10, 2}) (note: it's in that order after conversion to frozenset; if you tried to run that exact line and print it, you'd see a different order), then updates an empty set with it. There is an optimized code path for populating an empty set from another set/frozenset that just copies the contents directly (rather than iterating and inserting piecemeal), so the {3, 10, 2} ordering in the cached frozenset is preserved in each set created from it, the same as as if you'd run: s = set() s.update(frozenset({2, 3, 10})) but more performant (because the frozenset is created once at compile time and loaded cheaply for each new set to initialize).
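You can observe both pieces directly with a quick sketch (the exact bytecode and constant ordering vary between CPython versions, so treat the comments as indicative rather than exact):
import dis

code_const = compile("{2, 3, 10}", "<test>", "eval")
print(code_const.co_consts)   # includes a cached frozenset, e.g. (frozenset({3, 10, 2}),)
dis.dis(code_const)           # BUILD_SET 0, LOAD_CONST frozenset(...), SET_UPDATE

code_var = compile("{x, 3, 10}", "<test>", "eval")
dis.dis(code_var)             # LOAD_NAME x, then the constants 3 and 10, then BUILD_SET 3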
20
28
79,352,276
2025-1-13
https://stackoverflow.com/questions/79352276/apply-function-for-lower-triangle-of-2-d-array
I have an array: U = np.array([3, 5, 7, 9, 11]) I want to get a result like: result = np.array([ [ np.nan, np.nan, np.nan, np.nan, np.nan], [U[0] - U[1], np.nan, np.nan, np.nan, np.nan], [U[0] - U[2], U[1] - U[2], np.nan, np.nan, np.nan], [U[0] - U[3], U[1] - U[3], U[2] - U[3], np.nan, np.nan], [U[0] - U[4], U[1] - U[4], U[2] - U[4], U[3] - U[4], np.nan] ]) I can use np.tril_indices(4, k=-1) to get indices of lower triangle without diagonal, but what is next?
A naive approach that does more work than necessary is to compute the entire difference and select the elements you need (note the operand order, so that element (i, j) is U[j] - U[i] as in the expected output):
np.where(np.arange(U.size)[:, None] > np.arange(U.size), U - U[:, None], np.nan)
This is one of the times where np.where is actually useful over a simple mask, although it can be done with a mask as well:
result = np.full((U.size, U.size), np.nan)
index = np.arange(U.size)
mask = index[:, None] > index
result[mask] = (U - U[:, None])[mask]
A more efficient approach might be to use the indices more directly to index into the source:
result = np.full((U.size, U.size), np.nan)
r, c = np.tril_indices(U.size, k=-1)
result[r, c] = U[c] - U[r]
2
2
79,350,683
2025-1-12
https://stackoverflow.com/questions/79350683/how-to-prevent-python-telebot-from-duplicating-messages
I've got a problem with Python Telebot, here is my code: import telebot token = 'token' bot = telebot.TeleBot(token) @bot.message_handler(func=lambda message: True) def start(message): bot.send_message(message.chat.id,"message") bot.register_next_step_handler(message, nextstep) def nextstep(message): bot.send_message(message.chat.id,"anothermessage") bot.register_next_step_handler(message, finalstep) def finalstep(message): bot.send_message(message.chat.id,"in previous step you sent: " + str(message.text) + ". Now let's start again") start(message) print ("running") bot.infinity_polling(skip_pending=True) This code is running ok and looks like this in Telegram chat: But if I send two messages it starts looking like this: And ends up like this: After this the bot keeps to duplicate answers to my messages, as well as multiply triggering the functions. Like the code is double-running itself from the point when bot receives two or more messages. After restart it's ok again, until the next double message. What is causing duplicate messages? How to prevent it?
Because your code really does run twice: at the end of the first turn, finalstep calls start, but then the bot's message handler calls start again. Furthermore, bot.register_next_step_handler() just adds a callable to the bot's list of handlers, so that list grows after each message and the bot runs all of those handlers at every next step. I think your nextstep and finalstep should be like:
def nextstep(message):
    bot.send_message(message.chat.id, "anothermessage")
    bot.clear_step_handler(message)
    bot.register_next_step_handler(message, finalstep)

def finalstep(message):
    bot.send_message(message.chat.id, "in previous step you sent: " + str(message.text) + ". Now let's start again")
    bot.clear_step_handler(message)
2
1
79,350,990
2025-1-13
https://stackoverflow.com/questions/79350990/how-can-i-scrape-data-from-a-website-into-a-csv-file-using-python-playwright-or
I'm trying to scrape data from this website using Python and Playwright, but I'm encountering a few issues. The browser runs in non-headless mode, and the process is very slow. When I tried other approaches, like using requests and BeautifulSoup, I ran into access issues, including 403 Forbidden and 404 Not Found errors. My goal is to scrape all pages efficiently and save the data into a CSV file. Here’s the code I’m currently using: import asyncio from playwright.async_api import async_playwright import pandas as pd from io import StringIO URL = "https://www.coingecko.com/en/coins/1/markets/spot" async def fetch_page(page, url): print(f"Fetching: {url}") await page.goto(url) await asyncio.sleep(5) return await page.content() async def scrape_all_pages(url, max_pages=10): async with async_playwright() as p: browser = await p.chromium.launch(headless=False, slow_mo=2000) context = await browser.new_context(viewport={"width": 1280, "height": 900}) page = await context.new_page() markets = [] for page_num in range(1, max_pages + 1): html = await fetch_page(page, f"{url}?page={page_num}") dfs = pd.read_html(StringIO(html)) # Parse tables markets.extend(dfs) await page.close() await context.close() await browser.close() return pd.concat(markets, ignore_index=True) def run_async(coro): try: loop = asyncio.get_running_loop() except RuntimeError: loop = None if loop and loop.is_running(): return asyncio.create_task(coro) else: return asyncio.run(coro) async def main(): max_pages = 10 df = await scrape_all_pages(URL, max_pages) df = df.dropna(how='all') print(df) run_async(main()) The primary issues are the slow speed of scraping and the access errors when using alternatives to Playwright. I'm looking for advice on how to improve this approach, whether it’s by optimizing the current code, handling access restrictions like user-agent spoofing or proxies, or switching to a different library entirely. Any suggestions on how to make the process faster and more reliable would be greatly appreciated. Thank you.
Before even writing your scraping code, always take the time to understand the webpage. In this case, based on viewing the page source and looking through the network tab in dev tools, there's nothing dynamic at all here. My first instinct was to use simple HTTP requests with a user agent, but these get blocked by the server, so Playwright is a reasonable option. But since the data is static, there's no need to sleep, and you can also go headless (with a user agent), disable JS and use the fastest navigation predicate, "commit". Here's an initial rewrite: import asyncio import pandas as pd # 2.2.2 from io import StringIO from playwright.async_api import async_playwright # 1.48.0 URL = "<Your URL>" UA = ( "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) " "Chrome/130.0.0.0 Mobile Safari/537.3" ) async def scrape_all_pages(base_url, max_pages=10): markets = [] async with async_playwright() as p: browser = await p.chromium.launch() context = await browser.new_context(user_agent=UA, java_script_enabled=False) page = await context.new_page() for page_num in range(1, max_pages + 1): url = f"{base_url}?page={page_num}" await page.goto(url, wait_until="commit") html = await page.content() markets.extend(pd.read_html(StringIO(html))) return pd.concat(markets, ignore_index=True) async def main(): df = await scrape_all_pages(URL, max_pages=10) df = df.dropna(how="all") print(df) asyncio.run(main()) Original time: real 1m17.661s user 0m3.348s sys 0m1.190s Rewrite time: real 0m6.912s user 0m1.417s sys 0m0.785s We have a 13x speedup. The biggest improvement was removing the unnecessary sleeps. If you're scraping more pages than this, adding a task queue to increase parallelism can help. But for only 10 pages the overhead and code complexity of adding parallelism isn't worth it, so I'll skip that for now. Also, you may have added sleeps to avoid server rate limiting. That's a good reason to sleep, but then you won't be able to get a speedup. Using a residential proxy cluster would bypass this, but it's a lot of hassle to set up, also out of scope of the question.
1
1
79,350,912
2025-1-12
https://stackoverflow.com/questions/79350912/nearest-neighbor-interpolation
Say that I have an array: arr = np.arange(4).reshape(2,2) The array arr contains the elements array([[0, 1], [2, 3]]) I want to increase the resolution of the array in such a way that the following is achieved: np.array([0,0,1,1], [0,0,1,1], [2,2,3,3], [2,2,3,3]]) what is this operation called? Nearest-neighbor interpolation? It is possible to get my desired output with the following np.concat(np.repeat(arr,4).reshape(-1,2,2,2), axis=-1).reshape(4,4) Is there a more general way of doing this for any kind of matrix?
You're looking for "nearest-neighbour upsampling", instead of "interpolation". A concise and efficient way to do this in numpy: import numpy as np arr = np.arange(6).reshape(2, 3) upsampled = np.repeat(np.repeat(arr, 2, axis=0), 2, axis=1) print("Original Array:") print(arr) print("Upsampled Array:") print(upsampled) Note: I changed your sample data to (2,3) to show that the original shape does not affect the function. Result: Original Array: [[0 1 2] [3 4 5]] Upsampled Array: [[0 0 1 1 2 2] [0 0 1 1 2 2] [3 3 4 4 5 5] [3 3 4 4 5 5]] If you need it to be more memory efficient, you can also use: upsampled2 = np.kron(arr, np.ones((2, 2), dtype=int)) print("Upsampled Array using Kronecker product:") print(upsampled2) Result: Upsampled Array using Kronecker product: [[0 0 1 1 2 2] [0 0 1 1 2 2] [3 3 4 4 5 5] [3 3 4 4 5 5]] Libraries like scipy may offer even more efficient methods of doing the same, if your data is very large.
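If scipy is an option, the same nearest-neighbour upsampling can be expressed with scipy.ndimage.zoom, where order=0 selects nearest-neighbour interpolation. A sketch; for an integer zoom factor it should reproduce the block-replicated result above:
from scipy.ndimage import zoom

upsampled3 = zoom(arr, 2, order=0)
print(upsampled3)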
1
4
79,350,780
2025-1-12
https://stackoverflow.com/questions/79350780/cannot-connect-to-vpn-using-python-openvpn-client-api
I have a pyhton program from which I am using python-openvpn-client API to connect to a vpn server using an .ovpn configuration file. I have installed python 3.13.1 from the official website. Then I have created a virtual python environment to use in my python project. I have successfully installed the python-openvpn-client (version 0.0.1) package which says it requires python >= 3.9. I have used below command to install it in my virtual environment: pip install python-openvpn-client I installed OpenVpn from here. I am using version OpenVPN 2.6.12 - Released 18 July 2024. Then I do below: from openvpnclient import OpenVPNClient def connect_to_vpn(config_path): vpn = OpenVPNClient(config_path) try: vpn.connect() while not vpn.is_connected(): print(f"Status: {vpn.status}") sleep(2) print("Connection successfully established.") return vpn except Exception as e: print(f"Error connecting to VPN server: {e}") return None config_path is the full path to an .ovpn file. When executing the line of code vpn.connect() an exception is thrown: module 'signal' has no attribute 'SIGUSR1' If I import the same .ovpn file from OpenVPN app and connect to the the VPN server, then is working but not from my python program. My platform is Windows 10 Pro. So what am I doing wrong?
The python-openvpn-client 0.0.1 package is for Unix-like systems such as Linux, macOS or BSD; its page says it was tested on macOS and Linux. Windows does not support POSIX signals like SIGUSR1, which is why you get this error: SIGUSR1 is simply not defined there. POSIX signals are only available on Unix-like systems. You could try openvpn-api instead.
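A rough Windows-side workaround (a sketch only — the executable path is an assumption that depends on your install, and OpenVPN usually needs an elevated prompt) is to drive the openvpn.exe command-line client that ships with the Windows installer via subprocess:
import subprocess

OPENVPN_EXE = r"C:\Program Files\OpenVPN\bin\openvpn.exe"   # assumed install location

def connect_to_vpn(config_path):
    # --config is a standard OpenVPN option; keep the process handle to stop it later
    return subprocess.Popen([OPENVPN_EXE, "--config", config_path])

proc = connect_to_vpn(r"C:\path\to\client.ovpn")
# ... later, to disconnect:
# proc.terminate()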
1
2
79,348,318
2025-1-11
https://stackoverflow.com/questions/79348318/how-do-i-resolve-installation-errors-for-installing-climada-using-mamba
I am trying to install Climada on my Mac. I have anaconda installed and python is as saved in /opt/anaconda3/bin/python (I believe it's python 3.11.8. The Climada documentation says to create a virtual environment using Mamba. mamba create -n climada_env -c conda-forge climada Then it says you can activate the environment and start working. I've tried this two ways, and failed both times. First attempt: create a new virtual environment, install Mamba there. So far so good. Then I try to install Climada within that virtual environment using: mamba install -c conda-forge climada The error I get is: Pinned packages: - python=3.13 error libmamba Could not solve for environment specs The following packages are incompatible ├─ climada =* * is installable with the potential options │ ├─ climada [3.3.2|4.0.0|4.0.1|4.1.0] would require │ │ └─ python =3.9 *, which can be installed; │ └─ climada [4.1.0|5.0.0] would require │ └─ python >=3.9,<3.12 * with the potential options │ ├─ python [3.10.0|3.10.1|...|3.11.9], which can be installed; │ ├─ python 3.12.0rc3 would require │ │ └─ _python_rc =* *, which does not exist (perhaps a missing channel); │ └─ python [3.9.0|3.9.1|...|3.9.9], which can be installed; └─ pin on python 3.13.* =* * is not installable because it requires └─ python =3.13 *, which conflicts with any installable versions previously reported. critical libmamba Could not solve for environment specs Second attempt: First I install Mamba in my base folder using conda install -c conda-forge mamba and then I create the new virtual env using the mamba create command above, taken from the Climada installation instructions. This time the error I get is below. (base) donfry@DONs-Air ~ % mamba create -n climada_env_2 -c conda-forge climada *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFString stringByStandardizingPath]: unrecognized selector sent to instance 0x600003714330' *** First throw call stack: ( 0 CoreFoundation 0x000000018c242ccc __exceptionPreprocess + 176 1 libobjc.A.dylib 0x000000018bd2a788 objc_exception_throw + 60 2 CoreFoundation 0x000000018c2f502c -[NSObject(NSObject) __retain_OA] + 0 3 CoreFoundation 0x000000018c1accdc ___forwarding___ + 1580 4 CoreFoundation 0x000000018c1ac5f0 _CF_forwarding_prep_0 + 96 5 Foundation 0x000000018d2abcd4 -[NSProcessInfo arguments] + 188 6 CoreFoundation 0x000000018c2beaac __getDefaultArguments_block_invoke + 96 7 libdispatch.dylib 0x000000018bf3e3e8 _dispatch_client_callout + 20 8 libdispatch.dylib 0x000000018bf3fc68 _dispatch_once_callout + 32 9 CoreFoundation 0x000000018c2be448 _addBackstopValuesForIdentifierAndSource + 652 10 CoreFoundation 0x000000018c1783dc __81-[_CFXPreferences(SourceAdditions) withNamedVolatileSourceForIdentifier:perform:]_block_invoke + 144 11 CoreFoundation 0x000000018c2be0e4 -[_CFXPreferences withNamedVolatileSourceForIdentifier:perform:] + 272 12 CoreFoundation 0x000000018c17e764 -[CFPrefsSearchListSource addNamedVolatileSourceForIdentifier:] + 136 13 CoreFoundation 0x000000018c2fd54c __108-[_CFXPreferences(SearchListAdditions) withSearchListForIdentifier:container:cloudConfigurationURL:perform:]_block_invoke.155 + 296 14 CoreFoundation 0x000000018c2fd1f4 -[_CFXPreferences withSearchLists:] + 84 15 CoreFoundation 0x000000018c179cb4 __108-[_CFXPreferences(SearchListAdditions) withSearchListForIdentifier:container:cloudConfigurationURL:perform:]_block_invoke + 300 16 CoreFoundation 0x000000018c2fd3a0 -[_CFXPreferences 
withSearchListForIdentifier:container:cloudConfigurationURL:perform:] + 384 17 CoreFoundation 0x000000018c1795d8 -[_CFXPreferences copyAppValueForKey:identifier:container:configurationURL:] + 156 18 CoreFoundation 0x000000018c179500 _CFPreferencesCopyAppValueWithContainerAndConfiguration + 112 19 SystemConfiguration 0x000000018cf2a818 SCDynamicStoreCopyProxiesWithOptions + 180 20 libcurl.4.dylib 0x0000000103308624 Curl_macos_init + 16 21 libcurl.4.dylib 0x00000001032e4ae8 global_init + 172 22 libcurl.4.dylib 0x00000001032e4a2c curl_global_init + 68 23 libmamba.2.0.0.dylib 0x0000000102f6cfac _GLOBAL__sub_I_singletons.cpp + 24 24 dyld 0x000000018bd8105c ___ZZNK5dyld46Loader25findAndRunAllInitializersERNS_12RuntimeStateEENK3$_0clEv_block_invoke + 168 25 dyld 0x000000018bdbf0d4 ___ZNK5dyld313MachOAnalyzer18forEachInitializerER11DiagnosticsRKNS0_15VMAddrConverterEU13block_pointerFvjEPKv_block_invoke.202 + 172 26 dyld 0x000000018bdb299c ___ZNK5dyld39MachOFile14forEachSectionEU13block_pointerFvRKNS0_11SectionInfoEbRbE_block_invoke + 496 27 dyld 0x000000018bd622fc _ZNK5dyld39MachOFile18forEachLoadCommandER11DiagnosticsU13block_pointerFvPK12load_commandRbE + 300 28 dyld 0x000000018bdb1930 _ZNK5dyld39MachOFile14forEachSectionEU13block_pointerFvRKNS0_11SectionInfoEbRbE + 192 29 dyld 0x000000018bdb4208 _ZNK5dyld39MachOFile32forEachInitializerPointerSectionER11DiagnosticsU13block_pointerFvjjRbE + 160 30 dyld 0x000000018bdbedc8 _ZNK5dyld313MachOAnalyzer18forEachInitializerER11DiagnosticsRKNS0_15VMAddrConverterEU13block_pointerFvjEPKv + 432 31 dyld 0x000000018bd7d070 _ZNK5dyld46Loader25findAndRunAllInitializersERNS_12RuntimeStateE + 524 32 dyld 0x000000018bd83614 _ZNK5dyld416JustInTimeLoader15runInitializersERNS_12RuntimeStateE + 36 33 dyld 0x000000018bd7d45c _ZNK5dyld46Loader23runInitializersBottomUpERNS_12RuntimeStateERN5dyld35ArrayIPKS0_EE + 220 34 dyld 0x000000018bd7d400 _ZNK5dyld46Loader23runInitializersBottomUpERNS_12RuntimeStateERN5dyld35ArrayIPKS0_EE + 128 35 dyld 0x000000018bd810ec _ZZNK5dyld46Loader38runInitializersBottomUpPlusUpwardLinksERNS_12RuntimeStateEENK3$_1clEv + 116 36 dyld 0x000000018bd7d628 _ZNK5dyld46Loader38runInitializersBottomUpPlusUpwardLinksERNS_12RuntimeStateE + 380 37 dyld 0x000000018bda04d8 _ZN5dyld44APIs25runAllInitializersForMainEv + 464 38 dyld 0x000000018bd66f7c _ZN5dyld4L7prepareERNS_4APIsEPKN5dyld313MachOAnalyzerE + 3156 39 dyld 0x000000018bd65edc start + 1844 ) libc++abi: terminating due to uncaught exception of type NSException Any idea what is going with either/both errors? And more importantly, what I can do to fix these?
You're trying to install an old package into a new interpreter. Set the controls of the way-back machine to an earlier era. Using the same interpreter that the Climada documentation author used should work smoothly. I believe it's python 3.11.8. That sounds like a good plan. However the "Pinned packages: - python=3.13" output suggests that your mamba install is trying to use a quite recently released interpreter. Climada released 5.0.0 back in summer, and then interpreter 3.13 was released a few months later. Conda is an effective but slow means of dealing with the binary dependencies of complex builds. Mamba was an attempt to do the same thing faster, and then some of its improvements were folded back into conda. Nowadays many people prefer uv (e.g. uv pip install ...) due to its blinding speed and ease of specifying the desired interpreter version. You may be able to get away with just doing this: which python && python --version python -m pip install climada The first line verifies the activated interpreter is coming from where you think it is, running 3.11 which should be fine. And the second line grabs the package from pypi. uv solution I found this worked smoothly, under MacOS sequoia 15.1. In a fresh project directory, create a three-line requirements.txt file which mentions a pair of related deps: climada dask[dataframe] fiona Then ensure uv is available, set up a 3.11 interpreter, freeze the dep versions, install deps, and verify a good import. which uv || curl -LsSf https://astral.sh/uv/install.sh | sh uv venv --python=python3.11 source .venv/bin/activate && uv pip compile --upgrade --quiet requirements.txt -o requirements.lock source .venv/bin/activate && uv pip install -r requirements.lock source .venv/bin/activate && python -c 'import climada' Repeating with interpreter 3.12 also works smoothly. Repeating with 3.13 reports "Failed to build llvmlite==0.43.0" ... "Could not find a llvm-config binary.", which is explaining a need to build from source since some binary wheels are not yet available for 3.13. Using uv in this way will create .venv/ in the current directory, as we can see from those activate commands. As you experiment with it, discarding with rm -rf and redoing it is safe. You can get it back with that requirements file. (I used make to run these commands, so those last three bash invocations did three independent activates.) The core difficulty you encountered is you lost track of what python interpreter, and version, you were using. Some of your favorite libraries do not (yet) work with 3.13. As a rule of thumb, always activate a project environment before running python, or running a jupyter lab kernel, or installing a library. Avoid putting new libraries under /usr/local, or /usr, or the miniconda base environment, as it will only lead to confusion.
3
1
79,350,430
2025-1-12
https://stackoverflow.com/questions/79350430/check-if-all-values-of-dataframe-are-true
How can I check if all values of a polars DataFrame, containing only boolean columns, are True? Example df: df = pl.DataFrame({"a": [True, True, None], "b": [True, True, True], }) The reason for my question is that sometimes I want to check if all values of a df fulfill a condition, like in the following: df = pl.DataFrame({"a": [1, 2, None], "b": [4, 5, 6], }).select(pl.all() >= 1) By the way, I didn't expect that .select(pl.all() >= 1) keeps the null (None) in last row of column "a", maybe that's worth noting.
As of the date of this answer, I found the following snippet most appropriate for polars:
df.fill_null(False).min_horizontal().min()
If no null values exist in df, one could omit .fill_null(False). Credit goes to roman; the min_horizontal().min() logic was first described by him in an answer to a similar question about any.
Example with the df from above:
>>> df.fill_null(False).min_horizontal().min()
False
>>> df = pl.DataFrame({"a": [True, True, True],
...                    "b": [True, True, True],
...                    })
...
... df.fill_null(False).min_horizontal().min()
True
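An equivalent expression on recent Polars versions might be the following (hedged — the null handling and horizontal-aggregation APIs have changed between releases, so verify on your version):
df.fill_null(False).select(pl.all_horizontal(pl.all()).all()).item()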
3
1
79,349,155
2025-1-12
https://stackoverflow.com/questions/79349155/how-do-i-multiply-the-values-of-an-array-contained-in-a-column-in-a-dataframe-b
I have tried to do this in order to create a new column, with each row being an array containing the values of column b multiplied by column a. data = {'a': [3, 2], 'b': [[4], [7, 2]]} df = pd.DataFrame(data) df['c'] = df.apply(lambda row: [row['a'] * x for x in row['b']]) The final result should look like this a b c 3 [4] [12] 2 [7, 2] [14, 4]
Your approach would have been correct with axis=1 (= row-wise, the default apply is column-wise): df['c'] = df.apply(lambda row: [row['a'] * x for x in row['b']], axis=1) Using apply is however quite slow since pandas creates an intermediate Series for each row. It will be more efficient to use pure python: a list comprehension is well suited. df['c'] = [[a * x for x in b] for a, b in zip(df['a'], df['b'])] Output: a b c 0 3 [4] [12] 1 2 [7, 2] [14, 4] Comparison of timings (on 200k rows) # list comprehension # df['c'] = [[a * x for x in b] for a, b in zip(df['a'], df['b'])] 98.3 ms ± 3.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) # conversion to numpy arrays # df['c'] = df['a'] * df['b'].map(np.array) 371 ms ± 75.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # apply with axis=1 # df['c'] = df.apply(lambda row: [row['a'] * x for x in row['b']], axis=1) 1.65 s ± 65.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
2
3
79,344,524
2025-1-10
https://stackoverflow.com/questions/79344524/errorit-looks-like-you-are-using-playwright-sync-api-inside-the-asyncio-loop
I have this setup File 1 from playwright.sync_api import sync_playwright class A: def __init__(self,login_dict): self.start = sync_playwright().start() self.browser = self.start.chromium.launch() self.context = self.browser.new_context() self.page = self.context.new_page() self.login_dict = login_dict File 2 import file_1.py class B(A): def __init__(self): super().__init__() from file_1 import A from file_2 import B a = A(some_login_dict) b = B() I get this error at the super init of B class It looks like you are using Playwright Sync API inside the asyncio loop. Please use the Async API instead. I do not understand why is this happening, can someone explain? Is there a way to avoid this?
The files and classes in your code are obscuring the root cause. Nonetheless, it's good to see your intended use case, since you'd normally start Playwright with a with context manager, which isn't as obvious in the class setup. The minimal trigger is simply starting Playwright's sync API twice, which it wasn't designed to do: from playwright.sync_api import sync_playwright # 1.48.0 browser = sync_playwright().start().chromium.launch() browser = sync_playwright().start().chromium.launch() # raises 'It looks like...' The error message is not very clear. The solution is to only .start() Playwright once for the application: from playwright.sync_api import sync_playwright sync_playwright = sync_playwright().start() browser = sync_playwright.chromium.launch() browser = sync_playwright.chromium.launch() In your code, the error occurs when you initialize both A() and B() (removing one initializer or the other works). Since you're working with classes, you can move playwright = sync_playwright().start() to the module-level scope of a shared, one-off module, which is then imported from any other file that needs to launch a browser. An alternative approach is to use the async API, which doesn't seem to mind being started multiple times: import asyncio from playwright.async_api import async_playwright async def run(): pw1 = await async_playwright().start() pw2 = await async_playwright().start() browser1 = await pw1.chromium.launch() browser2 = await pw2.chromium.launch() page1 = await browser1.new_page() page2 = await browser2.new_page() await page1.goto("https://www.example.com") await page2.goto("https://en.wikipedia.org") print(await page1.title()) print(await page2.title()) await pw1.stop() await pw2.stop() asyncio.run(run()) Remember to clean up with .stop() when not using context-managed Playwright. See also Playwright issue #1391 on GitHub.
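A minimal sketch of the shared-module approach mentioned above (the module and file names here are just placeholders):
# playwright_shared.py  (hypothetical shared module)
from playwright.sync_api import sync_playwright

pw = sync_playwright().start()   # started exactly once, on first import

# file_1.py
from playwright_shared import pw

class A:
    def __init__(self, login_dict):
        self.browser = pw.chromium.launch()
        self.context = self.browser.new_context()
        self.page = self.context.new_page()
        self.login_dict = login_dict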
3
1
79,339,486
2025-1-8
https://stackoverflow.com/questions/79339486/finding-loops-between-numbers-in-a-list-of-sets
Given a list of sets like: sets=[{1,2},{2,3},{1,3}] the product (1,2,3) will be generated twice in itertools.product(*sets), as the literals (1,2,3) and (2,3,1), because there is a loop. If there is no loop there will be no duplication, even though there might be lots of commonality between sets. A loop is formed to A in a set when you travel to B in the same set and then to B in another set that has A or to B in another set with C which connects to a set with A. e.g. 1>2--2>3--3>1 where '--' indicates movement between sets and '>' indicates movement within the set. The smallest loop would involve a pair of numbers in common between two sets, e.g. a>b--b>a. {edit: @ravenspoint's notation is nice, I suggest using {a}-b-{a} instead of the above.} Loops in canonical form should not have a bridging value used more than once: either this represents a case where the loop traced back on itself (like in a cursive "i") or there is a smaller loops that could be made (like the outer and inner squares on the Arch de Triumph](https://commons.wikimedia.org/wiki/File:Paris_Arc_de_Triomphe.jpg). What type of graph structure could I be using to represent this? I have tried representing each set as a node and then indicating which sets are connected to which, but this is not right since for [{1,2},{1,3},{1,4}], there is a connection between all sets -- the common 1-- but there is no loop. I have also tried assigning a letter to each number in each set, but that doesn't seem right, either, since then I don't know how to discriminate against loops within a set. This was motivated by this question about generating unique products. Sample sets like the following (which has the trivial loop 4>17--17>4 and longer loops like 13>5--5>11--11>13) [{1, 13, 5}, {11, 13}, {17, 11, 4, 5}, {17, 4, 1}] can be generated as shown in the docstring of core. Alternate visualization analogy Another way to visualize the "path/loop" is to think of coloring points on a grid: columns contain elements of the sets and equal elements are in the same row. A loop is a path that starts at one point and ends at the same point by travelling vertically or horizontally from point to point and must include both directions of motion. A suitable permutation of rows and columns would reveal a staircase polygon.
To form a loop you must travel from one number ( A ) in a set to another number ( B ) in the set, then to the same number ( B ) in another set So, we cannot just connect pairs of sets that share one or more numbers. We must connect a pair of sets that share one or more numbers with one or more edges that are labelled with the number that is used to connect them. Now we insist that when we travel through a node, we cannot use an out edge with the same label as the in edge that was used. Here is the code to implement this graph generator https://codeberg.org/JamesBremner/so79339486/src/commit/f53b941cd6561f01c81af1194ff5922289174157/src/main.cpp#L32-L50 void genGraph() { raven::set::cRunWatch aWatcher("genGraph"); theGD.g.clear(); for (int is1 = 0; is1 < theSets.size(); is1++) { for (int is2 = is1 + 1; is2 < theSets.size(); is2++) { for (int n1 : theSets[is1]) for (int n2 : theSets[is2]) if (n1 == n2) theGD.setEdgeWeight( theGD.g.add( "set" + std::to_string(is1), "set" + std::to_string(is2)), n1); } } } for the sample D = [{1, 2, 3, 4, 5, 6, 7, 8}, ... the graph looks like: The graph generation is done in less than a millisecond raven::set::cRunWatch code timing profile Calls Mean (secs) Total Scope 1 0.000856 0.000856 genGraph The algorithm to find cycles ( == loops ) in this problem is a modified depth first search. Two modifications are required. The BFS cannot travel along two successive edges with the same label When a previously visited vertex is reached, the Dijsktra algorithm is applied to find the path that forms the shortest cycle that leads back to the back edge of the previously visited vertex. Implementing this sort of thing in Python is not recommended - the performance is painfully slow. Here is some C++ code to implement the modified DFS to prevent using edges with the same label twice in a row. https://codeberg.org/JamesBremner/so79339486/src/branch/main/src/cycleFinder.cpp The output, when run on example D, is 24 loops found set0 -1- set9 -0- set2 -0- set7 -8- set0 set0 -1- set9 -0- set2 -0- set3 -31- set6 -5- set0 set0 -1- set9 -0- set2 -0- set5 -6- set0 set7 -0- set2 -0- set5 -52- set7 set9 -0- set2 -0- set5 -48- set9 set0 -1- set9 -0- set2 -0- set4 -7- set0 set7 -0- set2 -0- set4 -15- set7 set8 -0- set9 -0- set2 -0- set4 -41- set8 set9 -0- set2 -0- set4 -40- set9 set0 -1- set9 -0- set2 -0- set3 -3- set0 set7 -0- set2 -0- set3 -34- set7 set8 -0- set9 -0- set2 -0- set3 -37- set8 set8 -0- set9 -0- set2 -23- set8 set0 -1- set9 -0- set2 -11- set1 -4- set0 set4 -0- set2 -0- set9 -1- set0 -4- set1 -15- set4 set5 -0- set2 -0- set9 -1- set0 -4- set1 -16- set5 set7 -0- set2 -0- set9 -1- set0 -4- set1 -15- set7 set8 -0- set9 -1- set0 -4- set1 -14- set8 set9 -1- set0 -4- set1 -17- set9 set4 -0- set2 -0- set3 -32- set4 set4 -0- set2 -0- set5 -39- set4 set6 -5- set0 -1- set9 -0- set2 -0- set5 -47- set6 set6 -5- set0 -1- set9 -0- set2 -0- set7 -58- set6 set8 -0- set9 -0- set2 -0- set7 -63- set8 raven::set::cRunWatch code timing profile Calls Mean (secs) Total Scope 1 0.0088616 0.0088616 cycleFinder 1 0.0009284 0.0009284 genGraph -N- means that the loop has taken N as the common number to travel between sets. The cycle finder takes 10 milliseconds on this problem. That seems plenty fast enough, but a simple optimization might be well worthwhile if you have problems larger than this. I believe that the presence of one loop means that the number sets fail. If so, then we could abort the loop finding as soon as one loop is found. 
Another small test problem {1, 2, 4}, {2, 3}, {1, 3, 5}, {4, 5} 2 loops found set0 -2- set1 -3- set2 -1- set0 set3 -4- set0 -1- set2 -5- set3 raven::set::cRunWatch code timing profile Calls Mean (secs) Total Scope 1 0.0007017 0.0007017 cycleFinder 1 0.0003599 0.0003599 genGraph
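For reference, the labelled graph built by genGraph above can be sketched in Python with networkx. This only builds the graph — the label-aware DFS/cycle search still has to be implemented on top of it — and it assumes sets is the list of Python sets from the question:
import networkx as nx
from itertools import combinations

def gen_graph(sets):
    g = nx.MultiGraph()
    g.add_nodes_from(range(len(sets)))
    for i, j in combinations(range(len(sets)), 2):
        for value in sets[i] & sets[j]:
            g.add_edge(i, j, label=value)   # one edge per shared number
    return g

g = gen_graph([{1, 2, 4}, {2, 3}, {1, 3, 5}, {4, 5}])
print(list(g.edges(data=True)))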
4
2
79,347,737
2025-1-11
https://stackoverflow.com/questions/79347737/django-view-is-rendering-404-page-instead-of-given-html-template
I'm working on a wiki project with django. I'm trying to render 'add.html' with the view add, but it sends me to 404 instead. All the other views are working fine. How should I fix add? views.py from django.shortcuts import render from django.http import HttpResponseRedirect, HttpResponse from django.urls import reverse from django import forms import re from . import util def index(request): return render(request, "encyclopedia/index.html", { "entries": util.list_entries() }) def detail(request, entry): #if search based on title returns result if util.get_entry(entry): content = util.get_entry(entry) return render(request, "encyclopedia/details.html", { "title": entry, "content": content }) else: return render(request, "encyclopedia/404.html", { 'entry': entry }) def add(request): return render(request, "encyclopedia/add.html") urls.py: from django.urls import path from . import views app_name = "wiki" urlpatterns = [ path("", views.index, name="index"), path("<str:entry>/", views.detail, name="entry"), path("add/", views.add, name="add"), ]
In your urls.py, the <str:entry>/ path is defined before the add/ path. This causes Django to interpret add/ as a dynamic entry parameter for the detail view instead of routing it to the add view. See the Django URL dispatcher documentation.
urlpatterns = [
    path("", views.index, name="index"),
    path("add/", views.add, name="add"),  # place "add/" before "<str:entry>/"
    path("<str:entry>/", views.detail, name="entry"),
]
1
3
79,345,299
2025-1-10
https://stackoverflow.com/questions/79345299/best-default-location-for-shared-object-files
I have compiled C code to be called by a Python script. Of course I can include it with cdll.LoadLibrary("./whatever.so"), but I would prefer it to be accessible to all Python scripts in different folders. The idea is that I use default paths for shared objects and do not change environment variables or system files to do that. According to one of the answers on Why can't Python find shared objects that are in directories in sys.path?, /usr/local/lib should work. Namely, /etc/ld.so.conf.d/libc.conf includes that folder. So I used sudo cp -a whatever.so /usr/local/lib and sudo ldconfig. However, cdll.LoadLibrary("whatever.so") does not find the file. Following other suggestions, I have run python -m site, and /usr/local/lib is unfortunately not on the list. Probably the third element, /usr/lib/python3.9, is the best choice, but how can I automatically select it on the cp command? To summarise, is there a good default place to put shared objects (.so) without having to change environment variables and/or system files, and how can I choose it automatically. [I want to write a such Makefile code that puts compiled shared object into path.]
If you do things right, then a shared library libfoo.so[.x.y.z] placed in /usr/local/lib (or in any of the directories listed in files /etc/ld.so.conf.d/*.conf) will be found in python3 by: cdll.LoadLibrary('libfoo.so.[x.y.z]'). For a shared library of your own making, /usr/local/lib is the appropriate place. For example: $ cat foo.c int foo(void) { return 42; } $ gcc -shared -o libfoo.so foo.c $ sudo cp libfoo.so /usr/local/lib/ $ sudo ldconfig $ python3 Python 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from ctypes import cdll >>> cdll.LoadLibrary('libfoo.so') <CDLL 'libfoo.so', handle 2dbdb10 at 0x769975b3a930> >>> works, and for an existing system library: $ find /usr/lib/ -name libgmp.so.* /usr/lib/x86_64-linux-gnu/libgmp.so.10.5.0 /usr/lib/x86_64-linux-gnu/libgmp.so.10 $ python3 Python 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from ctypes import cdll >>> cdll.LoadLibrary('libgmp.so.10') <CDLL 'libgmp.so.10', handle aaae150 at 0x70b0c6938590> >>> works. I surmise that your mistake must lie in the actual name that you have given to your whatever.so. You cannot call a shared library whatever you like if it is to be found by the dynamic linker (or the static linker) in their usual ways. A shared library needs to have a name of the form lib<name>.so[.x.y.z]. If not, one of the snags that result is that ldconfig will ignore it. Then the dynamic linker will not find it, hence neither will cdll.LoadLibrary. As you can see if we try to repeat the libfoo.so experiment after removing lib from its name: $ mv libfoo.so foo.so $ sudo cp foo.so /usr/local/lib $ sudo ldconfig $ sudo ldconfig --print-cache | grep foo.so libfoo.so (libc6,x86-64) => /usr/local/lib/libfoo.so libfoo.so is still in the ldconfig cache since the first experiment, but foo.so has not been added, and the dynamic linker can't find it: $ python3 Python 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from ctypes import cdll >>> cdll.LoadLibrary('foo.so') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.12/ctypes/__init__.py", line 460, in LoadLibrary return self._dlltype(name) ^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/ctypes/__init__.py", line 379, in __init__ self._handle = _dlopen(self._name, mode) ^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: foo.so: cannot open shared object file: No such file or directory >>> The requirement for shared library names to begin with lib is documented in man ldconfig ldconfig will look only at files that are named lib*.so* (for regular shared objects) or ld-.so (for the dynamic loader itself). Other files will be ignored.
2
1
79,342,159
2025-1-9
https://stackoverflow.com/questions/79342159/finding-solutions-to-linear-system-of-equations-with-integer-constraint-in-scipy
I have a system of equations where each equation is a linear equation with boolean constraints. For example: x1 + x2 + x3 = 2 x1 + x4 = 1 x2 + x1 = 1 And each x_i is either 0 or 1. Sometimes there might be a small positive (<5) coefficient (for example x1 + 2 * x3 + x4 = 3. Basically a standard linear programming task. What I need to do is to find all x_i which are guaranteed to be 0 and all x_j which are guaranteed to be 1. Sorry if my terminology is not correct here but by guaranteed I mean that if you generate all possible solutions you in all of them all x_i will be 0 and in all of them x_j will be 1. For example my equation has only 2 solutions: 1, 0, 1, 0 0, 1, 1, 1 So you do not have guaranteed 0 and have x_3 as a guaranteed 1. I know how to solve this problem with or-tools by generating all solutions and it works for my usecases (equations are pretty constrained so usually there are < 500 solutions although the number of variables is big enough to make the whole combinatorial search impossible). The big problem is that I can't use that library (system restrictions above my control) and only libraries available in my case are numpy and scipy. I found that scipy has scipy.optimize.linprog. It seems like I have found a way to generate one solution import numpy as np from scipy.optimize import linprog A_eq = np.array([ [1, 1, 1, 0], # x1 + x2 + x3 = 2 [1, 0, 0, 1], # x1 + x4 = 1 [1, 1, 0, 0] # x1 + x2 = 1 ]) b_eq = np.array([2, 1, 1]) c = np.zeros(4) bounds = [(0, 1)] * 4 res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method='highs-ipm') if res.success: print(res.x) But I can't find a way to generate all solutions. Also I am not sure whether there is a better way to do it as all I need to know is to find guaranteed values P.S. this problem is important to me. I guarantee to add a 500 bounty on it, but system prevents me from doing it until 2 days will pass.
You don't need to (fully) brute-force, and you don't need to find all of your solutions. You just need to find solutions for which each of your variables meets each of their extrema. The following is a fairly brain-off LP approach with 2n² columns and 2mn rows. It's sparse, and for your inputs does not need to be integral. That said, I somewhat doubt it will be the most efficient method possible. import numpy as np from scipy.optimize import milp, Bounds, LinearConstraint import scipy.sparse as sp lhs = np.array(( (1, 1, 1, 0), (1, 0, 0, 1), (1, 1, 0, 0), )) rhs = np.array((2, 1, 1)) m, n = lhs.shape # Variables: n * 2 (minimize, maximize) * n c = sp.kron( sp.eye_array(n), np.array(( (+1,), (-1,), )), ) b = np.tile(rhs, 2*n) system_constraint = LinearConstraint( A=sp.kron(sp.eye_array(2*n), lhs, format='csc'), lb=b, ub=b, ) result = milp( c=c.toarray().ravel(), # must be dense integrality=0, bounds=Bounds(lb=0, ub=1), constraints=system_constraint, ) assert result.success extrema = result.x.reshape((n, 2, n)) mins = extrema[:, 0] maxs = extrema[:, 1] vmins = np.diag(mins) vmaxs = np.diag(maxs) print('Solutions for minima on the diagonal:') print(mins) print('Solutions for maxima on the diagonal:') print(maxs) print('Variable minima:', vmins) print('Variable maxima:', vmaxs) print('Guaranteed 0:', vmaxs < 0.5) print('Guaranteed 1:', vmins > 0.5) Solutions for minima on the diagonal: [[-0. 1. 1. 1.] [ 1. 0. 1. -0.] [ 1. 0. 1. -0.] [ 1. 0. 1. -0.]] Solutions for maxima on the diagonal: [[ 1. 0. 1. -0.] [-0. 1. 1. 1.] [ 1. 0. 1. -0.] [-0. 1. 1. 1.]] Variable minima: [-0. 0. 1. -0.] Variable maxima: [1. 1. 1. 1.] Guaranteed 0: [False False False False] Guaranteed 1: [False False True False] There is a variant on this idea where rather than using sparse modelling, you just loop don't use LP at all fix each variable at each of its extrema, and iteratively column-eliminate from the left-hand side attempt a least-squares solution of the linear system, and infer a high residual to mean that there is no solution This somewhat naively assumes that all solutions will see integer values, and (unlike milp) does not have the option to set integrality=1. For demonstration I was forced to add a row to get a residual. import numpy as np lhs = np.array(( (1, 1, 1, 0), (1, 0, 0, 1), (1, 1, 0, 0), (0, 0, 1, 1), )) rhs = np.array((2, 1, 1, 1)) m, n = lhs.shape epsilon = 1e-12 lhs_select = np.ones(n, dtype=bool) for i in range(n): lhs_select[i] = False x0, (residual,), rank, singular = np.linalg.lstsq(lhs[:, lhs_select], rhs) zero_solves = residual < epsilon x1, (residual,), rank, singular = np.linalg.lstsq(lhs[:, lhs_select], rhs - lhs[:, i]) one_solves = residual < epsilon lhs_select[i] = True if zero_solves and not one_solves: print(f'x{i}=0, solution {x0.round(12)}') elif one_solves and not zero_solves: print(f'x{i}=1, solution {x1.round(12)}') x0=1, solution [-0. 1. 0.] x1=0, solution [ 1. 1. -0.] x2=1, solution [1. 0. 0.] x3=0, solution [ 1. -0. 1.]
13
9
79,344,467
2025-1-10
https://stackoverflow.com/questions/79344467/is-my-time-complexity-analysis-for-finding-universal-words-om-k2-nk-corr
I’m given two string arrays, words1 and words2. A string b is a subset of string a if all characters in b appear in a with at least the same frequency. I need to find all strings in words1 that are universal, meaning every string in words2 is a subset of them. I need to return those universal strings from words1 in any order. I've written a function to find universal words from two lists, words1 and words2. The function uses a hashmap to store character frequencies from words2 and then checks each word in words1 against these frequencies. The code passed all the test cases but I am confused about analyzing time complexity. For time complexity: Processing all words in words2 takes O(m * k), where m = length of words2 and k = average word length. Checking all words in words1 involves O(n * 26), where n = length of words1, and 26 is the max number of hashmap keys (a-z). If I consider the count() in both the loops it will make whole time complexity O(m * k^2 + n*k). Is this the correct analysis? def wordSubsets(self, words1: List[str], words2: List[str]) -> List[str]: ans = [] hmap = {} for i in words2: for j in i: if j in hmap.keys(): hmap[j] = max(hmap[j], i.count(j)) else: hmap[j] = i.count(j) for j in words1: flag = True for k in hmap.keys(): if hmap[k] > j.count(k): flag = False break if flag: ans.append(j) return ans
Indeed, the call of count represents O(𝑘) time complexity, as it needs to scan all letters in the given word. So this makes the first loop's time complexity O(𝑚𝑘²). Similarly, the second loop has a complexity of O(26𝑛𝑘) = O(𝑛𝑘), if indeed your assumption about the range of the characters (a-z) is correct. You can avoid using count and so reduce the complexity of the first loop, by just increasing a count by 1 as you iterate the letters. For instance, like this: def wordSubsets(self, words1: List[str], words2: List[str]) -> List[str]: # A helper function to count the frequency of each letter in a single string def get_freq(word): # O(k) hmap = {} for ch in word: hmap[ch] = hmap.get(ch, 0) + 1 return hmap hmap = {} # O(mk): for hmap2 in map(get_freq, words2): for ch, freq in hmap2.items(): hmap[ch] = max(hmap.get(ch, 0), freq) res = [] # O(nk) for hmap1, word1 in zip(map(get_freq, words1), words1): if all(hmap1.get(ch, 0) >= freq2 for ch, freq2 in hmap.items()): res.append(word1) return res So now the complexity is O((𝑛+𝑚)𝑘). We can further adapt and make use of Counter from the collections module, and make use of more comprehension-syntax: def wordSubsets(self, words1: List[str], words2: List[str]) -> List[str]: hmap = Counter() for word2 in words2: hmap |= Counter(word2) return [word1 for word1 in words1 if Counter(word1) >= hmap] The time complexity remains the same.
1
3
79,342,896
2025-1-9
https://stackoverflow.com/questions/79342896/how-can-i-override-settings-for-code-ran-in-urls-py-while-unit-testing-django
my django app has a env var DEMO which, among other thing, dictate what endpoints are declared in my urls.py file. I want to unit tests these endpoints, I've tried django.test.override_settings but I've found that urls.py is ran only once and not once per unit test. My code look like this: # settings.py DEMO = os.environ.get("DEMO", "false") == "true" # urls.py print(f"urls.py: DEMO = {settings.DEMO}") if settings.DEMO: urlpatterns += [ path('my_demo_endpoint/',MyDemoAPIView.as_view(),name="my-demo-view") ] # test.test_my_demo_endpoint.py class MyDemoEndpointTestCase(TestCase): @override_settings(DEMO=True) def test_endpoint_is_reachable_with_demo_equals_true(self): print(f"test_endpoint_is_reachable_with_demo_equals_true: DEMO = {settings.DEMO}") response = self.client.get("/my_demo_endpoint/") # this fails with 404 self.assertEqual(response.status_code, 200) @override_settings(DEMO=False) def test_endpoint_is_not_reachable_with_demo_equals_false(self): print(f"test_endpoint_is_not_reachable_with_demo_equals_false: DEMO = {settings.DEMO}") response = self.client.get("/my_demo_endpoint/") self.assertEqual(response.status_code, 404) when running this I get: urls.py: DEMO = False test_endpoint_is_reachable_with_demo_equals_true: DEMO = True <test fails with 404> test_endpoint_is_not_reachable_with_demo_equals_false: DEMO = False <test succeed> urls.py is ran only once before every test, however I want to test different behavious of urls.py depending on settings Using a different settings file for testing is not a solution because different tests requires different settings. Directly calling my view in the unit test means that the urls.py code stays uncovered and its behaviour untested so this is also not what I want. How can I override settings for code ran in urls.py? Thank you for your time.
Vitaliy Desyatka's answer nearly works other than the fact that Django caches the URL resolver. This can be seen in Django's code, specifically the function django.urls.resolvers._get_cached_resolver is doing this. You can modify Vitaliy Desyatka's answer to add clearing of the cache to it as well like so: import importlib from django.urls import clear_url_caches class MyDemoEndpointTestCase(TestCase): def reload_urls(self): import myproject.urls # Adjust to your `urls.py` location importlib.reload(myproject.urls) clear_url_caches() The above will get your tests to pass with the caveats that: You'll be reloading the URL Conf and hence repeating any side effects (which ideally the file shouldn't have any) that are present in the module. An alternative would be for you to have a dedicated URLconf for when DEMO = True and instead of overriding DEMO during the tests just override ROOT_URLCONF (or override both). So your settings can look something like: DEMO = os.environ.get("DEMO", "false") == "true" ROOT_URLCONF = 'myproject.urls' # Your default URLConf when DEMO is False if DEMO: ROOT_URLCONF = 'myproject.demo_urls' And then alongside your urls.py file you can have a demo_urls.py file like so: from .urls import urlpatterns as base_urlpatterns urlpatterns = base_urlpatterns + [ # Your demo urls go here ] Now when you want to test your demo URLs you can override the URLConf as: @override_settings(ROOT_URLCONF='myproject.demo_urls') def test_foo(self): pass
1
1
79,342,183
2025-1-9
https://stackoverflow.com/questions/79342183/duckdbpyrelation-from-python-dict
In Polars / pandas / PyArrow, I can instantiate an object from a dict, e.g. In [12]: pl.DataFrame({'a': [1,2,3], 'b': [4,5,6]}) Out[12]: shape: (3, 2) ┌─────┬─────┐ │ a ┆ b │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 1 ┆ 4 │ │ 2 ┆ 5 │ │ 3 ┆ 6 │ └─────┴─────┘ Is there a way to do that in DuckDB, without going via pandas / pyarrow / etc.?
duckdb features a function duckdb.read_json which should do this by simply streaming the dict as a json string, but no combination of its various parameters will make it read that dict the same way polars does unfortunately. You can rearrange the dict to match the structure expected for a "unstructured" json format in their documentation pretty easily and then use the duckdb.read_json function to load it the same way as polars, which I think the closest you can get with the library as it currently stands. Here is a demonstration which shows the polars interpretation we expected, the naive duckdb interpretation we probably didn't and the transformation necessary to load it like polars did: import polars import duckdb import io import json # Silent requirement for fsspec data = {'a': [1,2,3], 'b': [4,5,6]} polars_data = polars.DataFrame(data) print('polars\' interpretation:') print(polars_data) duckdb_data = duckdb.read_json(io.StringIO(json.dumps(data))) print('duckdb \'s naive interpretation:') print(duckdb_data) def transform_to_duckjson(dictdata: dict): return [{fieldname: fieldrecords[recordnum] for fieldname, fieldrecords in dictdata.items()} for recordnum in range(len(dictdata[next(iter(dictdata.keys()))]))] duckdb_data2 = duckdb.read_json(io.StringIO(json.dumps(transform_to_duckjson(data)))) print('duckdb \'s interpretation after adjustment:') print(duckdb_data2) Which gives me the following output: polars' interpretation: shape: (3, 2) ┌─────┬─────┐ │ a ┆ b │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 1 ┆ 4 │ │ 2 ┆ 5 │ │ 3 ┆ 6 │ └─────┴─────┘ duckdb 's naive interpretation: ┌───────────┬───────────┐ │ a │ b │ │ int64[] │ int64[] │ ├───────────┼───────────┤ │ [1, 2, 3] │ [4, 5, 6] │ └───────────┴───────────┘ duckdb 's interpretation after adjustment: ┌───────┬───────┐ │ a │ b │ │ int64 │ int64 │ ├───────┼───────┤ │ 1 │ 4 │ │ 2 │ 5 │ │ 3 │ 6 │ └───────┴───────┘ As a note: Here is the python api reference (scroll down to "read_csv(..." for the relevant information) which doesn't provide any help at all for using the function unfortunately. Let me know if you have any questions.
1
2
79,343,703
2025-1-9
https://stackoverflow.com/questions/79343703/generalized-kronecker-product-with-different-type-of-product-in-numpy-or-scipy
Consider two boolean arrays import numpy as np A = np.asarray([[True, False], [False, False]]) B = np.asarray([[False, True], [True, True]]) I want to take the kronecker product of A and B under the xor operation. The result should be: C = np.asarray([[True, False, False, True], [False, False, True, True], [False, True, False, True], [True, True, True, True]]) More generally, is there a simple way to implement the Kronecker product with some multiplication operator distinct from the operator *, in this instance the xor operator ^?
You could use broadcasting and reshaping: m, n = A.shape p, q = B.shape C = (A[:, None, :, None] ^ B[None, :, None, :]).reshape(m*p, n*q) Simplified: C = (A[:, None, :, None] ^ B[None, :, None, :] ).reshape(A.shape[0]*B.shape[0], -1) Also equivalent to: C = (np.logical_xor.outer(A, B) .swapaxes(1, 2) .reshape(A.shape[0]*B.shape[0], -1) ) Or with explicit alignment using repeat/tile without reshaping: p, q = B.shape C = np.repeat(np.repeat(A, p, axis=0), q, axis=1) ^ np.tile(B, A.shape) Output: array([[ True, False, False, True], [False, False, True, True], [False, True, False, True], [ True, True, True, True]]) ND-generalization for N dimensional inputs, one could follow the same logic by expanding the dimensions in an interleaved fashion with expand_dims, before reshaping to the element-wise product of the dimensions: C = ( np.expand_dims(A, tuple(range(1, A.ndim*2, 2))) ^ np.expand_dims(B, tuple(range(0, A.ndim*2, 2))) ).reshape(np.multiply(A.shape, B.shape)) Interestingly, this is how kron is actually implemented in numpy (with some extra checks in place). Variant with outer: C = (np.logical_xor.outer(A, B) .transpose(np.arange(A.ndim+B.ndim) .reshape(-1, 2, order='F') .ravel()) .reshape(np.multiply(A.shape, B.shape)) )
2
4
79,344,035
2025-1-9
https://stackoverflow.com/questions/79344035/how-to-add-requirements-txt-to-uv-environment
I am working with uv for the first time and have created a venv to manage my dependencies. Now, I'd like to install some dependencies from a requirements.txt file. How can this be achieved with uv? I already tried manually installing each requirement using uv pip install .... However, this gets tedious for a large list of requirements.
You can install the dependencies to the virtual environment managed by uv using: uv pip install -r requirements.txt When working with a project (application or library) managed by uv, the following command might be used instead: uv add -r requirements.txt This will also add the requirements to the project's pyproject.toml.
3
4
79,343,784
2025-1-9
https://stackoverflow.com/questions/79343784/pyspark-issue-in-converting-hex-to-decimal
I am facing an issue while converting hex to decimal (learned from here) in pyspark. from pyspark.sql.functions import col, sha2, conv, substring # User data with ZIPs user_data = [ ("100052441000101", "21001"), ("100052441000102", "21002"), ("100052441000103", "21002"), ("user1", "21001"), ("user2", "21002") ] df_users = spark.createDataFrame(user_data, ["user_id", "ZIP"]) # Generate SHA-256 hash from the user_id df_users = df_users.withColumn("hash_key", sha2(col("user_id"), 256)) # Convert the hexadecimal hash (sha2 output) to decimal df_users = df_users.withColumn("hash_substr1", substring(col('hash_key'), 1, 16)) df_users = df_users.withColumn("hash_substr2", substring(col('hash_key'), 1, 15)) df_users = df_users.withColumn("hash_int1", conv(col('hash_substr1'), 16, 10).cast("bigint")) df_users = df_users.withColumn("hash_int2", conv(col('hash_substr2'), 16, 10).cast("bigint")) df_users.show() The output I get is: +---------------+-----+--------------------+----------------+---------------+-------------------+-------------------+ | user_id| ZIP| hash_key| hash_substr1| hash_substr2| hash_int1| hash_int2| +---------------+-----+--------------------+----------------+---------------+-------------------+-------------------+ |100052441000101|21001|3cf4b90397964f6b2...|3cf4b90397964f6b|3cf4b90397964f6|4392338961672327019| 274521185104520438| |100052441000102|21002|e18aec7bb2a60b62d...|e18aec7bb2a60b62|e18aec7bb2a60b6| null|1015753888833888438| |100052441000103|21002|e55127f9f61bbe433...|e55127f9f61bbe43|e55127f9f61bbe4| null|1032752028895525860| | user1|21001|0a041b9462caa4a31...|0a041b9462caa4a3|0a041b9462caa4a| 721732164412679331| 45108260275792458| | user2|21002|6025d18fe48abd451...|6025d18fe48abd45|6025d18fe48abd4|6928174017724202309| 433010876107762644| +---------------+-----+--------------------+----------------+---------------+-------------------+-------------------+ Note that hash_int1 is null for 2nd and 3rd records. However, when I try to get the corresponding int using python, I get some value: hexes = ["e18aec7bb2a60b62", "e18aec7bb2a60b6", "e55127f9f61bbe43", "e55127f9f61bbe4"] [int(h, 16) for h in hexes] [16252062221342215010, 1015753888833888438, 16524032462328413763, 1032752028895525860] The values are same when they are not null. The final objective is to generate replicable random values df_users = df_users.withColumn("random_value", (col("hash_int1") % 10**12) / 10**12)
Spark's LongType range is -9223372036854775808 to 9223372036854775807. However, your value 16524032462328413763 is outside of this range, so it cannot be stored as a LongType. If you remove .cast("bigint"), you can see that your values are no longer null and have the correct values.
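As a minimal sketch of that fix, reusing the column names from the question: keep conv()'s string output as-is, or cast it to a sufficiently wide DecimalType instead of bigint if a numeric column is needed (decimal(38,0) comfortably holds any 16-hex-digit value).
from pyspark.sql.functions import col, conv

df_users = (
    df_users
    # conv() returns a string, so this never overflows
    .withColumn("hash_int1_str", conv(col("hash_substr1"), 16, 10))
    # cast to a wide decimal if a numeric type is required downstream
    .withColumn("hash_int1_dec", conv(col("hash_substr1"), 16, 10).cast("decimal(38,0)"))
)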
1
3
79,343,521
2025-1-9
https://stackoverflow.com/questions/79343521/issues-generating-barcode-in-dataimage-pngbase64-format-with-custom-size-and-n
I’m working on a Python project where my goal is to generate barcodes in the data:image/png;base64 format, without any human-readable footer text. Additionally, I need to adjust the size (height and width) and DPI (dots per inch) of the barcode, but I'm encountering some difficulties. Specifically, I have two issues: I cannot remove the human-readable footer text that appears below the barcode. I am unable to customize the size (height, width) and DPI of the generated barcode. I am using the python-barcode library along with the ImageWriter to generate the barcode image. I have tried using options like text=None to remove the text and various writer_options to control the size and resolution, but nothing seems to work as expected. Here’s what I have tried so far: import barcode from barcode.writer import ImageWriter from io import BytesIO import base64 def generate_barcode_without_text(serial_no): barcode_instance = barcode.Code128(serial_no, writer=ImageWriter()) writer_options = { 'text': None, # Disable text under the barcode 'module_width': 0.2, # Adjust the width of the barcode 'module_height': 15, # Adjust the height of the barcode 'dpi': 300, # Set the DPI for the image 'quiet_zone': 6 # Set the quiet zone around the barcode } image_stream = BytesIO() barcode_instance.write(image_stream, writer_options=writer_options) image_stream.seek(0) barcode_base64 = base64.b64encode(image_stream.read()).decode('utf-8') return barcode_base64
I think the docs are slightly off for the current latest version 0.15-1. I was able to change the size and remove the human readable barcode text from under the image with the following code: def generate_barcode_without_text(serial_no): barcode_instance = barcode.Code128(serial_no, writer=ImageWriter()) writer_options = { 'write_text': False, # Disable text under the barcode 'module_width': 0.8, # Adjust the width of the barcode 'module_height': 25, # Adjust the height of the barcode 'dpi': 300, # Set the DPI for the image 'quiet_zone': 6 # Set the quiet zone around the barcode } image_stream = BytesIO() barcode_instance.write(image_stream, options=writer_options) image_stream.seek(0) barcode_base64 = base64.b64encode(image_stream.read()).decode('utf-8') return barcode_base64 The option write_text seems to control whether the text under the barcode is written or not. The current docs don't mention this option and say setting the font_size to 0 suppresses the text, but this yields an error for me. The parameter name for passing the writer options to the barcode class is options instead of writer_options too. Here is the resulting barcode:
2
1
79,342,508
2025-1-9
https://stackoverflow.com/questions/79342508/inverse-fast-fourier-transform-ifft2-of-scipy-not-working-for-fourier-optics
I'm following a tutorial on youtube on Fourier Optics in python, to simulate diffraction of light through a slit. The video in question Source Code of video Now, I'm trying to implement the get_U(z, k) function and then display the corresponding plot below it, as shown in the video (I've got barebones knowledge about this topic), however, i just can't seem to get the plot working (white plot is visible the entire time). Upon inspection, I've discovered that the U variable just consists of a bunch of (nan+nanj) values, which I think shouldn't be the case. I've crosschecked the formula and it looks perfect. I also realise that, sometimes the np.sqrt() has to deal with negative values, but adding neither a np.abs() nor a np.where()(to convert negatives to zero) gives me the intended output. My code: import numpy as np import scipy as sp from scipy.fft import fft2, ifft2, fftfreq, fftshift import matplotlib.pyplot as plt import pint plt.style.use(['grayscale']) u = pint.UnitRegistry() D = 0.1 * u.mm lam = 660 * u.mm x = np.linspace(-2, 2, 1600) * u.mm xv, yv = np.meshgrid(x, x) U0 = (np.abs(xv) < D/2) * (np.abs(yv) < 0.5 * u.mm) U0 = U0.astype(float) A = fft2(U0) kx = fftfreq(len(x), np.diff(x)[0]) * 2 * np.pi kxv, kyv = np.meshgrid(kx, kx) def get_U(z, k): return ifft2(A*np.exp(1j*z.magnitude*np.sqrt(k.magnitude**2 - kxv.magnitude**2 - kyv.magnitude**2))) k = 2*np.pi/(lam) d = 3 * u.cm U = get_U(d, k) plt.figure(figsize=(5, 5)) plt.pcolormesh(xv, yv, np.abs(U), cmap='inferno') plt.xlabel('$x$ [mm]') plt.ylabel('$y$ [mm]') plt.title('Single slit diffraction') plt.show()
Your units of lam are wrong - if you intend to use pint (but I suggest that you don't) then they should be in nm, not mm. When you have made that change I suggest that you remove all reference to pint and mixed units and work entirely in a single set of length units (here, m). This is because units appear to be stripped when creating some of the numpy arrays. You can use scientific notation (e.g. 1e-9) to imply the units. Then you get what I think you require. import numpy as np import scipy as sp from scipy.fft import fft2, ifft2, fftfreq, fftshift import matplotlib.pyplot as plt plt.style.use(['grayscale']) D = 0.1 * 1e-3 lam = 660 * 1e-9 x = np.linspace(-2, 2, 1600) * 1e-3 xv, yv = np.meshgrid(x, x) U0 = (np.abs(xv) < D/2) * (np.abs(yv) < 0.5 * 1e-3) U0 = U0.astype(float) A = fft2(U0) kx = fftfreq(len(x), np.diff(x)[0]) * 2 * np.pi kxv, kyv = np.meshgrid(kx, kx) def get_U(z, k): return ifft2(A*np.exp(1j*z*np.sqrt(k**2 - kxv**2 - kyv**2))) k = 2*np.pi/(lam) d = 3 * 1e-2 U = get_U(d, k) plt.figure(figsize=(5, 5)) plt.pcolormesh(xv*1e3, yv*1e3, np.abs(U), cmap='inferno') plt.xlabel('$x$ [mm]') plt.ylabel('$y$ [mm]') plt.title('Single slit diffraction') plt.show()
3
5
79,342,389
2025-1-9
https://stackoverflow.com/questions/79342389/numpy-grayscale-image-to-black-and-white
I use the MNIST dataset that contains 28x28 grayscale images represented as numpy arrays with 0-255 values. I'd like to convert images to black and white only (0 and 1) so that pixels with a value over 128 will get the value 1 and pixels with a value under 128 will get the value 0. Is there a simple method to do so?
Yes. Use (arr > 128) to get a boolean mask array of the same shape as your image, then .astype(int) to cast the bools to ints: >>> import numpy as np >>> arr = np.random.randint(0, 255, (5, 5)) >>> arr array([[153, 167, 141, 79, 58], [184, 107, 152, 215, 69], [221, 90, 172, 147, 125], [ 93, 35, 125, 186, 187], [ 19, 72, 28, 94, 132]]) >>> (arr > 128).astype(int) array([[1, 1, 1, 0, 0], [1, 0, 1, 1, 0], [1, 0, 1, 1, 0], [0, 0, 0, 1, 1], [0, 0, 0, 0, 1]])
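If you prefer to do the comparison and the 0/1 mapping in a single call, np.where gives the same result as the mask-then-cast approach above:
import numpy as np

arr = np.random.randint(0, 255, (28, 28))   # stand-in for an MNIST image
bw = np.where(arr > 128, 1, 0)              # 1 where the pixel exceeds 128, else 0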
1
4
79,340,547
2025-1-8
https://stackoverflow.com/questions/79340547/how-to-import-nested-modules-with-uv
I am managing a project with uv. My project includes

- src
  - app.py
  - constants.py
- notebooks
  - testing.ipynb
- pyproject.toml

where my pyproject.toml is

[project]
name = "benchmark-extractor"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
    "anthropic>=0.42.0",
    "ipykernel>=6.29.5",
    "markitdown>=0.0.1a3",
    "pandera>=0.22.1",
    "tabula-py>=2.10.0",
]

I want to import app.py and constants.py into notebooks/testing.ipynb, for experimentation purposes. When I do this naively, e.g. import src.app, I get a ModuleNotFoundError. I believe I could approach this using uv pip install -e . (e.g. Sibling package imports), but when I try that, I get ERROR: Package 'benchmark-extractor' requires a different Python: 3.11.3 not in '>=3.13'.
Add this section into pyproject.toml: [tool.uv] package = true Then run uv sync Now you should be able to import app in notebooks/testing https://docs.astral.sh/uv/concepts/projects/config/#project-packaging
1
3
79,341,984
2025-1-9
https://stackoverflow.com/questions/79341984/converting-a-column-to-date-in-pandas
I'm having difficulty with Pandas when trying to convert this column to a date. The table doesn't include a year, so I think that's making the conversion difficult. 28 JUL Unnamed: 0 Alura *Alura - 7/12 68,00 0 28 JUL NaN Passei Direto S/A. - 3/12 19,90 1 31 JUL NaN Drogarias Pacheco 25,99 2 31 JUL NaN Mundo Verde - Rj - Sho 5,90 3 31 JUL NaN Paypal *99app 4,25 4 04 AGO NaN Saldo em atraso 1.091,17 5 04 AGO NaN Crédito de atraso 1.091,17 6 06 AGO NaN Apple.Com/Bill 34,90 7 07 AGO NaN Pagamento em 07 AGO 1.091,17 8 07 AGO NaN Juros de atraso 16,86 9 07 AGO NaN IOF de atraso 4,43 10 07 AGO NaN Multa de atraso 21,91 11 08 AGO NaN Apple.Com/Bill 21,90 12 09 AGO NaN Google Youtubepremium 20,90 13 10 AGO NaN Amazon.Com.Br 41,32 14 12 AGO NaN Uber *Uber *Trip 17,91 15 12 AGO NaN Uber *Uber *Trip 16,94 16 12 AGO NaN Mia Cookies 47,50 17 13 AGO NaN Uber *Uber *Trip 16,96 18 13 AGO NaN Uber *Uber *Trip 19,98 19 16 AGO NaN Uber *Uber *Trip 11,93 20 16 AGO NaN Uber *Uber *Trip 9,97 21 18 AGO NaN Uber *Uber *Trip 9,91 22 22 AGO NaN Uber *Uber *Trip 9,96 23 23 AGO NaN Amazonprimebr 14,90 24 27 AGO NaN Paypal *Sacocheiotv 15,00 25 27 AGO NaN Pag*Easymarketpleno 6,50 I tried to transform it using this code, but it's not working: df["Data"] = pd.to_datetime(df["Data"], format="%d %b", errors="coerce") Incorrect output: Data Local Valor 0 1900-07-28 Alura *Alura - 7/12 68,00 1 1900-07-28 Passei Direto S/A. - 3/12 19,90 2 1900-07-31 Drogarias Pacheco 25,99 3 1900-07-31 Mundo Verde - Rj - Sho 5,90 4 1900-07-31 Paypal *99app 4,25 7 NaT Apple.Com/Bill 34,90 9 NaT Juros de atraso 16,86 10 NaT IOF de atraso 4,43 11 NaT Multa de atraso 21,91 12 NaT Apple.Com/Bill 21,90 13 NaT Google Youtubepremium 20,90 14 NaT Amazon.Com.Br 41,32 15 NaT Uber *Uber *Trip 17,91 16 NaT Uber *Uber *Trip 16,94 17 NaT Mia Cookies 47,50 18 NaT Uber *Uber *Trip 16,96 19 NaT Uber *Uber *Trip 19,98 20 NaT Uber *Uber *Trip 11,93 21 NaT Uber *Uber *Trip 9,97 22 NaT Uber *Uber *Trip 9,91 23 NaT Uber *Uber *Trip 9,96 24 NaT Amazonprimebr 14,90 25 NaT Paypal *Sacocheiotv 15,00 26 NaT Pag*Easymarketpleno 6,50 Could someone help me with this?
This looks like Brazilian Portuguese, you should install the pt_BR locale on your machine, then run: import locale locale.setlocale(locale.LC_ALL, 'pt_BR.UTF-8') df['Data_converted'] = pd.to_datetime(df['Data'], format='%d %b', errors='coerce') Output: Data Data_converted 0 28 JUL 1900-07-28 1 04 AGO 1900-08-04 And, if you want to force the year: df['Data_converted'] = pd.to_datetime('2025 ' + df['Data'], format='%Y %d %b', errors='coerce') Output: Data Data_converted 0 28 JUL 2025-07-28 1 04 AGO 2025-08-04
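If installing the pt_BR locale is not an option on your system, a workable alternative is to map the Portuguese month abbreviations yourself; this is a sketch assuming the Data column from the question (values like "28 JUL", "04 AGO"):
import pandas as pd

months = {'JAN': '01', 'FEV': '02', 'MAR': '03', 'ABR': '04', 'MAI': '05', 'JUN': '06',
          'JUL': '07', 'AGO': '08', 'SET': '09', 'OUT': '10', 'NOV': '11', 'DEZ': '12'}

# split "28 JUL" into day and month abbreviation, then build an ISO date string
day_month = df['Data'].str.split(expand=True)
df['Data_converted'] = pd.to_datetime(
    '2025-' + day_month[1].map(months) + '-' + day_month[0],
    format='%Y-%m-%d', errors='coerce'
)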
1
4
79,340,487
2025-1-8
https://stackoverflow.com/questions/79340487/properly-re-expose-submodule-or-is-this-a-bug-in-pylance
I am working on a python package chemcoord with several subpackages, some of whom should be exposed to the root namespace. The repository is here, the relevant __init__.py file is here. For example there is a chemcoord.cartesian_coordinates.xyz_functions that should be accessible as chemcoord.xyz_functions Accessible, in particular, means that the user should be able to write: from chemcoord.xyz_functions import allclose If I write in my __init__.py import chemcoord.cartesian_coordinates.xyz_functions as xyz_functions then I can use chemcoord.xyz_functions in the code, but I cannot do from chemcoord.xyz_functions import allclose If I do the additional ugly/hacky (?) trick of modifying sys.modules in the __init__.py as in import sys sys.modules["chemcoord.xyz_functions"] = xyz_functions then I can write from chemcoord.xyz_functions import allclose But it feels ugly and hacky. Recently I got warnings from PyLance about Import "chemcoord.xyz_functions" could not be resolved Which leads to my two questions: Is my approach of reexposing the submodule correct, or is there a cleaner way? If the answer to question 1 is solved and I still get warnings from PyLance, is there a bug in PyLance?
nit: # -*- coding: utf-8 -*- hasn't been needed in python source files for a very very long time, given that it is default starting with interpreter 3.0. And 3.8 went EOL last year, so really you only need to worry about 3.9 and later. And these seem odd: ... import Cartesian as Cartesian, ... import Zmat as Zmat sys.modules I completely agree with you that the sys.modules["chemcoord.xyz_functions"] assignment is not appropriate. And I agree that apps consuming the Public API should be able to use a short from chemcoord.xyz_functions import allclose, dot. Organizing everything under src/ is very nice and I thank you for that. The entire cartesian_coordinates/ folder keeps things tidy for the package developer and makes good sense. However, you might possibly wish to "hide" a code module by renaming it to _xyz_functions.py, with leading _ underscore. (I don't know what promises you've made to app developers in v2.1.2 and previous, which might require you to still expose some or all of that.) new module Focusing on the OP question, you complain that an app author currently cannot do from chemcoord.xyz_functions import allclose Honestly, it's very simple. You're just a little hung up on that src/chemcoord/cartesian_coordinates/xyz_functions.py filename. You're thinking that is the "right" location, but it's just a private implementation detail, separate from the Public API you choose to expose, which plays into why we might want to hide it. I claim that what you want to do, to answer the original question, is instead of editing an __init__.py module, you want to create a new src/chemcoord/xyz_functions.py module. It can pull in allclose(), dot(), etc., and make them visible with conveniently short names for app authors. Yes, there's a cleaner way, just define a new public module. No, there's no trouble with pylance here. (Also, you might prefer to run $ pyright . .)
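For concreteness, a minimal sketch of what that new public module could look like; the names allclose and dot are the ones mentioned above, so adjust the import list to whatever the implementation module really exposes:
# src/chemcoord/xyz_functions.py
"""Public re-export of the Cartesian-coordinate xyz functions."""

from chemcoord.cartesian_coordinates.xyz_functions import (
    allclose,
    dot,
)

__all__ = ["allclose", "dot"]
Because chemcoord.xyz_functions is now a real module on disk, both the runtime from chemcoord.xyz_functions import allclose and Pylance's static resolution work without touching sys.modules.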
1
2
79,339,965
2025-1-8
https://stackoverflow.com/questions/79339965/why-does-pyrtools-imshow-print-a-value-range-thats-different-from-np-min-an
quick problem description: pyrtools imshow function giving me different and negative ranges details: im following the tutorial at https://pyrtools.readthedocs.io/en/latest/tutorials/01_tools.html since i dont have .pgm image, i'm using the below .jpg image. here is the modified python code import matplotlib.pyplot as plt import pyrtools as pt import numpy as np import cv2 # Load the JPG image oim = cv2.imread('/content/ein.jpg') print(f"input image shape: {oim.shape}") # Convert to grayscale if it's an RGB image if len(oim.shape) == 3: # Check if the image has 3 channels (RGB) oim_gray = cv2.cvtColor(oim, cv2.COLOR_BGR2GRAY) else: oim_gray = oim # Already grayscale print(f"grayscale image shape: {oim_gray.shape}") # Check the range of the oim_gray print(f"value range of oim_gray: {np.min(oim_gray), np.max(oim_gray)}") # Subsampling imSubSample = 1 im = pt.blurDn(oim_gray, n_levels=imSubSample, filt='qmf9') # Check the range of the subsampled image print(f"value range of im: {np.min(im), np.max(im)}") # Display the original and subsampled images pt.imshow([oim_gray, im], title=['original (grayscale)', 'subsampled'], vrange='auto2', col_wrap=2); and here is the output input image shape: (256, 256, 3) grayscale image shape: (256, 256) value range of oim_gray: (4, 245) value range of im: (5.43152380173187, 251.90158197570906) with the given image as you can see, from the printouts, both the images oim_gray and im are in positive range. there aren't any negative values on neither im and oim_gray. but when checking the output image, i see the range -38 & 170 (please check the text over the output image). this doesn't make any sense, and don't understand. can you help on this to understand?
That is the behavior of pyrtools, specifically determined by vrange='auto2'. 'auto2': all images have same vmin/vmax, which are the mean (across all images) minus/plus 2 std dev (across all images) (documentation in source) So the range printed over the image is the mean ± 2 standard deviations, not the raw min/max of the data, which is why it can be negative even though the array values are all positive. If you don't like that library's behavior, you can file a bug at https://github.com/LabForComputationalVision/pyrtools/issues
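If you simply want the displayed range to reflect the raw minimum and maximum of each image (matching the np.min/np.max printouts), one option is to plot the arrays with matplotlib directly; this is a sketch that assumes the oim_gray and im arrays from the question:
import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, img, title in zip(axes, [oim_gray, im], ['original (grayscale)', 'subsampled']):
    vmin, vmax = np.min(img), np.max(img)
    ax.imshow(img, cmap='gray', vmin=vmin, vmax=vmax)
    ax.set_title(f'{title}  [{vmin:.1f}, {vmax:.1f}]')
    ax.axis('off')
plt.tight_layout()
plt.show()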
2
2
79,340,441
2025-1-8
https://stackoverflow.com/questions/79340441/python-polars-expression-list-product
In Python-Polars, it is easy to calculate the Sum of all the lists in an array with polars.Expr.list.sum. See the example below for the sum: df = pl.DataFrame({"values": [[[1]], [[2, 3], [5,6]]]}) df.with_columns( sum=pl.concat_list(pl.col("values")).list.eval( pl.element().list.sum())) shape: (2, 2) ┌──────────────────┬───────────┐ │ values ┆ sum │ │ --- ┆ --- │ │ list[list[i64]] ┆ list[i64] │ ╞══════════════════╪═══════════╡ │ [[1]] ┆ [1] │ │ [[2, 3], [5, 6]] ┆ [5, 11] │ └──────────────────┴───────────┘ I am trying to define the same logic for the product and the division. Since it is not available in the current version of Polars (1.19). To do this, I am using pl.reduce, but it does not seem to work as expected: df.with_columns( sum=pl.concat_list(pl.col("values")).list.eval( pl.reduce(lambda e1, e2: e1*e2,pl.element()))) shape: (2, 2) ┌──────────────────┬──────────────────┐ │ values ┆ sum │ │ --- ┆ --- │ │ list[list[i64]] ┆ list[list[i64]] │ ╞══════════════════╪══════════════════╡ │ [[1]] ┆ [[1]] │ │ [[2, 3], [5, 6]] ┆ [[2, 3], [5, 6]] │ └──────────────────┴──────────────────┘ Would you have any suggestion on how to implement the above using a single expression context?
pl.Expr.list.eval() to get into list context. pl.element() to get access to element within list context. pl.Expr.product() to calculate product. pl.Expr.list.first() to get the result as scalar. df.with_columns( product = pl.col.values.list.eval( pl.element().list.eval( pl.element().product() ).list.first() ) ) shape: (2, 2) ┌──────────────────┬───────────┐ │ values ┆ product │ │ --- ┆ --- │ │ list[list[i64]] ┆ list[i64] │ ╞══════════════════╪═══════════╡ │ [[1]] ┆ [1] │ │ [[2, 3], [5, 6]] ┆ [6, 30] │ └──────────────────┴───────────┘
2
2
79,339,647
2025-1-8
https://stackoverflow.com/questions/79339647/why-does-scraping-followers-count-from-instagram-fails
I'm trying to scrape the number of followers of an array of username. I'm using BeautifulSoup. The code I'm using is the following import requests from bs4 import BeautifulSoup def instagram_followers(username): headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3' } response = requests.get(f'https://www.instagram.com/{username}/') soup = BeautifulSoup(response.text, 'html.parser') info = soup.find('meta', property='og:description') if info: followers = info['content'].split(" ")[0] return followers else: return -1 The function always returns -1
The code works fine as far as your question goes, so the issue is not reproducible without additional information. Check the following:
response.status_code as a first indicator; maybe you are scraping too aggressively and the server handles this by blocking your IP.
Also actually pass your headers; they are not used in your code:
import requests
from bs4 import BeautifulSoup

def instagram_followers(username):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
    }
    response = requests.get(f'https://www.instagram.com/{username}/', headers=headers)

    if response.status_code == 200:
        soup = BeautifulSoup(response.text, 'html.parser')
        info = soup.find('meta', property='og:description')

        if info:
            followers = info['content'].split(" ")[0]
            return followers
        else:
            return -1
    else:
        print('Something went wrong with your request: ' + str(response.status_code))

instagram_followers('thestackoverflow')
Output:
52K
2
0
79,338,219
2025-1-8
https://stackoverflow.com/questions/79338219/panda-iterate-rows-and-multiply-nth-row-values-to-nextn1-row-value
I am trying to iterate over multiple columns, multiply the nth row by the (n+1)th row, and after that add the resulting columns. I tried the code below and it's working fine. Is there any simpler way to achieve the subtraction and multiplication parts together?
import pandas as pd

df = pd.DataFrame({'C': ["Spark","PySpark","Python","pandas","Java"], 'F' : [2,4,3,5,4], 'D':[3,4,6,5,5]})
df1 = pd.DataFrame({'C': ["Spark","PySpark","Python","pandas","Java"], 'F': [1,2,1,2,1], 'D':[1,2,2,2,1]})

df = pd.merge(df, df1, on="C")

df['F_x-F_y'] = df['F_x'] - df['F_y']
df['D_x-D_y'] = df['D_x'] - df['D_y']

for index, row in df.iterrows():
    df['F_mul'] = df['F_x-F_y'].mul(df['F_x-F_y'].shift())
    df['D_mul'] = df['D_x-D_y'].mul(df['D_x-D_y'].shift())
    df['F+D'] = df['F_mul'] + df['D_mul']

Output -

         C  F_x  D_x  F_y  D_y  F_x-F_y  D_x-D_y  F_mul  D_mul   F+D
0    Spark    2    3    1    1        1        2    NaN    NaN   NaN
1  PySpark    4    4    2    2        2        2    2.0    4.0   6.0
2   Python    3    6    1    2        2        4    4.0    8.0  12.0
3   pandas    5    5    2    2        3        3    6.0   12.0  18.0
4     Java    4    5    1    1        3        4    9.0   12.0  21.0
First remove the iteration with iterrows; then it is possible to simplify and generalize the solution by:
cols = ['F','D']
for col in cols:
    s = df[f'{col}_x'].sub(df[f'{col}_y'])
    df[f'{col}_mul'] = s.mul(s.shift())

df['+'.join(cols)] = df.filter(like='mul').sum(axis=1, min_count=1)
print (df)

         C  F_x  D_x  F_y  D_y  F_mul  D_mul   F+D
0    Spark    2    3    1    1    NaN    NaN   NaN
1  PySpark    4    4    2    2    2.0    4.0   6.0
2   Python    3    6    1    2    4.0    8.0  12.0
3   pandas    5    5    2    2    6.0   12.0  18.0
4     Java    4    5    1    1    9.0   12.0  21.0
Another idea is to process all columns together - the advantage is that you don't need to specify the columns for processing:
df1 = (df.filter(like='x').rename(columns=lambda x: x.replace('x','mul'))
         .sub(df.filter(like='y').rename(columns=lambda x: x.replace('y','mul'))))

df2 = df1.mul(df1.shift())
df = df.join(df2)
df['+'.join(x.replace('_mul','') for x in df2.columns)] = df2.sum(axis=1, min_count=1)
print (df)

         C  F_x  D_x  F_y  D_y  F_mul  D_mul   F+D
0    Spark    2    3    1    1    NaN    NaN   NaN
1  PySpark    4    4    2    2    2.0    4.0   6.0
2   Python    3    6    1    2    4.0    8.0  12.0
3   pandas    5    5    2    2    6.0   12.0  18.0
4     Java    4    5    1    1    9.0   12.0  21.0
2
2
79,372,122
2025-1-20
https://stackoverflow.com/questions/79372122/how-to-check-if-a-cuboid-is-inside-camera-frustum
I want to check if an object (defined by four corners in 3D space) is inside the Field of View of a camera pose. I saw this solution and tried to implement it, but I missed something, can you please tell me how to fix it? the provided 4 points are 2 inside, 2 outside camera frustum. import numpy as np from typing import Tuple class CameraFrustum: def __init__( self, d_dist: float = 0.3, fov: Tuple[float, float] = (50, 40) ): self.d_dist = d_dist self.fov = fov self.frustum_vectors = None self.n_sight = None self.u_hvec = None self.v_vvec = None def compute_frustum_vectors(self, cam_pose: np.ndarray): fov_horizontal, fov_vertical = np.radians(self.fov[0] / 2), np.radians( self.fov[1] / 2 ) self.cam_position = cam_pose[:3, 3] cam_orientation = cam_pose[:3, :3] base_vectors = np.array( [ [np.tan(fov_horizontal), np.tan(fov_vertical), 1], [-np.tan(fov_horizontal), np.tan(fov_vertical), 1], [-np.tan(fov_horizontal), -np.tan(fov_vertical), 1], [np.tan(fov_horizontal), -np.tan(fov_vertical), 1], ] ) base_vectors /= np.linalg.norm(base_vectors, axis=1, keepdims=True) self.frustum_vectors = np.dot(base_vectors, cam_orientation.T) self.n_sight = np.mean(self.frustum_vectors, axis=0) self.u_hvec = np.cross( np.mean(self.frustum_vectors[:2], axis=0), self.n_sight ) self.v_vvec = np.cross( np.mean(self.frustum_vectors[1:3], axis=0), self.n_sight ) def project_point( self, p_point: np.ndarray, cam_orientation: np.ndarray ) -> bool: if self.frustum_vectors is None: self.compute_frustum_vectors(cam_orientation) # p_point_vec = p_point - self.cam_position p_point_vec /= np.linalg.norm(p_point_vec, axis=-1, keepdims=True) # d_prime = np.dot(p_point_vec, self.n_sight) if abs(d_prime) < 1e-6: print("point is not in front of the camera") return False elif d_prime < self.d_dist: print("point is too close to camera") return False # p_prime_vec = self.d_dist *( p_point_vec / d_prime ) - self.d_dist * self.n_sight u_prime = np.dot(p_prime_vec, self.u_hvec) v_prime = np.dot(p_prime_vec, self.v_vvec) # width = 2 * self.d_dist * np.tan(np.radians(self.fov[0]) / 2) height = 2 * self.d_dist * np.tan(np.radians(self.fov[1]) / 2) u_min, u_max = -width / 2, width / 2 v_min, v_max = -height / 2, height / 2 if not (u_min < u_prime < u_max): return False if not (v_min < v_prime < v_max): return False return True cam_frustum = CameraFrustum() pts = np.array( [ [1.54320189, -0.35068437, -0.48266792], [1.52144436, 0.44898697, -0.48990338], [0.32197813, 0.41622155, -0.50429738], [0.34373566, -0.38344979, -0.49706192], ] ) cam_pose = np.array( [ [-0.02719692, 0.9447125, -0.3271947, 1.25978471], [0.99958918, 0.02274412, 0.0, 0.03276859], [-0.00904433, -0.32711006, -0.94495695, 0.4514743], [0.0, 0.0, 0.0, 1.0], ] ) for pt in pts: res = cam_frustum.project_point(pt, cam_pose) print(res) Can you please tell me how can I fix this? thanks. I tried to implement this as follows
EDIT: pending a response from the OP. There is a problem with your cam_pose matrix. The [0:3,0:3] components (first three rows and first three columns) should be a rotation matrix. However, it isn't: the first and third columns aren't orthogonal. Well, no matter how I try to do it, I think all those points lie outside the frustum (really a pyramid). Could you check that those are the particular points that you intended (because the picture in your post suggests that you also tried other camera locations). It would be really good if somebody else looked at this independently. Maybe there's some distinction between the camera position and the focal point that I don't know about. (I'm currently making no distinction.) I tried two things - Method 1: fix your code; Method 2: reverse the transformation and calculate the angles to compare with FOV/2. I also cobbled together something to plot the points. Method 1: Try to fix your code. You calculate a local basis triad, n_sight, u_hvec and v_vvec. These need to be normalised, because you use projections onto them to calculate the relevant coordinate lengths d_prime, u_prime and v_prime. Your method of finding self.h_uvec and self.v_vvec is unnecessarily complicated and hard to envisage (even though correct). I've tried to indicate where these are relative to the local frustrum points below. You don't need self.dist in find u_prime and v_prime (despite what your link says) because you have already got a cross-wise plane at distance d_prime (or you would have if n_sight was normalised to unit length). Your projection plane is at distance n.p from the camera and the vector p_prime_vec is the components of the original displacement perpendicular to n_sight. You shouldn't normalise p_point_vec (or you will get distance d_prime wrong). I've left some debugging print statements in the code below: if you know the order of points (I eventually worked them out) they may help you. 
import numpy as np import math from typing import Tuple import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D class CameraFrustum: def __init__( self, d_dist: float = 0.3, fov: Tuple[float, float] = (50, 40) ): # original self.d_dist = d_dist self.fov = fov self.frustum_vectors = None self.n_sight = None self.u_hvec = None self.v_vvec = None def compute_frustum_vectors(self, cam_pose: np.ndarray): fov_horizontal, fov_vertical = np.radians(self.fov[0] / 2), np.radians( self.fov[1] / 2 ) self.cam_position = cam_pose[:3, 3] cam_orientation = cam_pose[:3, :3] base_vectors = np.array( [ [np.tan(fov_horizontal), np.tan(fov_vertical), 1], [-np.tan(fov_horizontal), np.tan(fov_vertical), 1], [-np.tan(fov_horizontal), -np.tan(fov_vertical), 1], [np.tan(fov_horizontal), -np.tan(fov_vertical), 1], ] ) base_vectors /= np.linalg.norm(base_vectors, axis=1, keepdims=True) self.frustum_vectors = np.dot(base_vectors, cam_orientation.T) self.n_sight = np.mean(self.frustum_vectors, axis=0) self.u_hvec = self.frustum_vectors[0] - self.frustum_vectors[1] ##### much easier self.v_vvec = self.frustum_vectors[0] - self.frustum_vectors[3] self.n_sight /= np.linalg.norm( self.n_sight ) ##### normalise basis vectors to unit length self.u_hvec /= np.linalg.norm( self.u_hvec ) self.v_vvec /= np.linalg.norm( self.v_vvec ) print( 'n_sight = ', self.n_sight ) # check unit-vector directions print( 'u_hvec = ', self.u_hvec ) print( 'v_vvec = ', self.v_vvec ) def project_point( self, p_point: np.ndarray, cam_orientation: np.ndarray ) -> bool: if self.frustum_vectors is None: self.compute_frustum_vectors(cam_orientation) p_point_vec = p_point - self.cam_position d_prime = np.dot(p_point_vec, self.n_sight) # p.n = plane distance from the camera if abs(d_prime) < 1e-6: print("Point is not in front of the camera") return False elif d_prime < self.d_dist: print("Point is too close to the camera") return False p_prime_vec = p_point_vec - self.n_sight * d_prime # p - (p.n)n = displacement from centreline u_prime = np.dot(p_prime_vec, self.u_hvec) v_prime = np.dot(p_prime_vec, self.v_vvec) u_max = d_prime * np.tan(np.radians(self.fov[0]) / 2); u_min = -u_max v_max = d_prime * np.tan(np.radians(self.fov[1]) / 2); v_min = -v_max print( "u_prime, v_prime, u_max, v_max=", u_prime, v_prime, u_max, v_max ) # check rotated coordinates if not (u_min < u_prime < u_max): return False if not (v_min < v_prime < v_max): return False return True cam_frustum = CameraFrustum() pts = np.array( [ [1.54320189, -0.35068437, -0.48266792], [1.52144436, 0.44898697, -0.48990338], [0.32197813, 0.41622155, -0.50429738], [0.34373566, -0.38344979, -0.49706192] ] ) cam_pose = np.array( [ [-0.02719692, 0.9447125, -0.3271947, 1.25978471], [ 0.99958918, 0.02274412, 0.0, 0.03276859], [-0.00904433,-0.32711006,-0.94495695, 0.4514743 ], [0.0, 0.0, 0.0, 1.0], ] ) for pt in pts: res = cam_frustum.project_point(pt, cam_pose) print(res) Output n_sight = [-0.3271947 0. 
-0.94495695] u_hvec = [-0.02719692 0.99958918 -0.00904433] v_vvec = [ 0.9447125 0.02274412 -0.32711006] u_prime, v_prime, u_max, v_max= -0.3963363671027199 0.5645937705948938 0.3683791237780479 0.2875334205549486 False u_prime, v_prime, u_max, v_max= 0.40342016151710725 0.5645937726848527 0.37488698183988656 0.29261304252107834 False u_prime, v_prime, u_max, v_max= 0.3963363671027199 -0.5645937705948936 0.5642361967573459 0.44040705127553315 False u_prime, v_prime, u_max, v_max= -0.4034201615171073 -0.5645937726848521 0.5577283386955073 0.4353274293094035 False Method 2 - Invert the translation and rotation You can simply reverse the transformation and calculate the relevant angles, confirming if abs(angle) < FOV/2 in the relevant direction. import numpy as np import math pts = np.array( [ [1.54320189, -0.35068437, -0.48266792], [1.52144436, 0.44898697, -0.48990338], [0.32197813, 0.41622155, -0.50429738], [0.34373566, -0.38344979, -0.49706192] ] ) cam_pose = np.array( [ [-0.02719692, 0.9447125, -0.3271947, 1.25978471], [ 0.99958918, 0.02274412, 0.0, 0.03276859], [-0.00904433,-0.32711006,-0.94495695, 0.4514743 ], [0.0, 0.0, 0.0, 1.0], ] ) fov = ( 50, 40 ) origin = cam_pose[0:3,3] rotate = cam_pose[0:3,0:3] reverse = np.linalg.inv( rotate ) print( 'Angles' ) for p in pts: q = np.dot( p - origin, reverse.T ) angle1 = math.atan2( q[0], q[2] ) * 180 / np.pi angle2 = math.atan2( q[1], q[2] ) * 180 / np.pi print( f'{angle1:7.2f} {angle2:7.2f} ', abs( angle1 ) < fov[0] / 2 and abs( angle2 ) < fov[1] / 2 ) Output: Angles -26.45 35.32 False 26.86 35.32 False 18.24 -25.14 False -18.54 -25.14 False Plotting If anyone wants to try it you can have a go with what I cobbled together to plot it. (You need to rotate it by hand in 3d). I haven't done much more than apply the same rotation and camera (or, at least, focal point) as you have. import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D pts = np.array( [ [1.54320189, -0.35068437, -0.48266792], [1.52144436, 0.44898697, -0.48990338], [0.32197813, 0.41622155, -0.50429738], [0.34373566, -0.38344979, -0.49706192] ] ) cam_pose = np.array( [ [-0.02719692, 0.9447125, -0.3271947, 1.25978471], [ 0.99958918, 0.02274412, 0.0, 0.03276859], [-0.00904433,-0.32711006,-0.94495695, 0.4514743 ], [0.0, 0.0, 0.0, 1.0], ] ) fov = ( 50, 40 ) origin = cam_pose[0:3,3] rotate = cam_pose[0:3,0:3] fov_horizontal, fov_vertical = np.radians(fov[0]/2), np.radians(fov[1]/2) base_vectors = np.array( [ [np.tan(fov_horizontal), np.tan(fov_vertical), 1], [-np.tan(fov_horizontal), np.tan(fov_vertical), 1], [-np.tan(fov_horizontal), -np.tan(fov_vertical), 1], [np.tan(fov_horizontal), -np.tan(fov_vertical), 1], ] ) base_vectors /= np.linalg.norm(base_vectors, axis=1, keepdims=True) frustum_vectors = np.dot(base_vectors, rotate.T) pp = frustum_vectors + origin fig = plt.figure(figsize=(10, 8)) ax = fig.add_subplot(111, projection='3d') for xyz in pts: ax.scatter( xyz[0], xyz[1], xyz[2], color='r' ) ax.scatter( origin[0], origin[1], origin[2], color='b' ) for p in pp: ax.plot( [origin[0],p[0]], [origin[1],p[1]], [origin[2],p[2]], color='g' ) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('Z') plt.show() Seen from one direction parallel to one face of the frustum - two points are outside this face. Looking parallel to the opposite face - the other two points are outside.
4
4
79,375,777
2025-1-21
https://stackoverflow.com/questions/79375777/fourier-series-implementation-cannot-approximate-batman-shape
I tried to implement a formula, from which a coefficients of Fourier Series could be calculated. (I used 3B1B's video about it: Video) and writing code for that, my first test subject was singular contour of batman logo, I first take a binary picture of batman logo and use marching squares algorithm to find contour of it. after that i rescale values and get this results: And Here is Code for creating this points: (Contour_Classifier.py) import numpy as np import matplotlib.pyplot as plt from skimage import measure, draw def read_binary_image(file_path): # Open the file and read line by line with open(file_path, 'r') as file: lines = file.readlines() height, width = len(lines), len(lines[0]) print(height, width) # Process lines into a 2D numpy array image_data = [] for i in range(height + 2): arr = [] for j in range(width + 2): arr.append(0) image_data.append(arr) for i in range(2, height + 1): for j in range(2, width + 1): if(lines[i - 2][j - 2] != '1'): image_data[i][j] = 0 else: image_data[i][j] = 1 # Convert list to numpy array for easier manipulation image_array = np.array(image_data) return image_array def display_image(image_array): # Display the binary image using matplotlib plt.imshow(image_array, cmap="gray") plt.axis('off') # Hide axes plt.show() # Example usage file_path = 'KOREKT\images\sbetmeni.txt' # Replace with the path to your file image_array = read_binary_image(file_path) #display_image(image_array) #---------------------------------------------------------------------------------------------------------- #-------------------------------------------Finding Contours----------------------------------------------- #---------------------------------------------------------------------------------------------------------- contours = measure.find_contours(image_array, level=0.5, positive_orientation='high') fixed_contours = [] for contour in contours: fixed_contour = np.column_stack((contour[:, 1], contour[:, 0])) # Swap (row, column) to (column, row) fixed_contour[:, 1] = image_array.shape[0] - fixed_contour[:, 1] # Invert the y-axis # Normalize coordinates between [0, 1] fixed_contour[:, 0] /= image_array.shape[1] # Normalize x (width) fixed_contour[:, 1] /= image_array.shape[0] # Normalize y (height) fixed_contour[:, 0] *= 250 # Normalize x (width) fixed_contour[:, 1] *= 250 # Normalize y (height) fixed_contours.append(fixed_contour) contours = fixed_contours print(fixed_contours[0]) def visualize_colored_contours(contours, title="Colored Contours"): # Create a plot plt.figure(figsize=(8, 8)) for i, contour in enumerate(contours): # Extract X and Y coordinates x, y = zip(*contour) # Plot the points with a unique color plt.plot(x, y, marker='o', label=f'Contour {i+1}') plt.title(title) plt.xlabel("X") plt.ylabel("Y") plt.legend() plt.grid(True) plt.axis("equal") plt.show() # Visualize the normalized contours visualize_colored_contours(contours) Now we go to the main part, where we implement the fourier series algorithm. I divide the time interal (t) into the amount of points provided and i make assumtion that all of that points relative to t have same distances between eachother. I use approximation of integral as the sum of the points as provided into the formula. 
And Here is code implementing it (Fourier_Coefficients.py): import numpy as np def calculate_Fourier(points, num_coefficients): complex_points = [] for point in points: complex_points.append(point[0] + 1j * point[1]) t = np.linspace(0, 1, len(complex_points), endpoint=False) c_k = np.zeros(num_coefficients, dtype=np.complex128) for i in range(num_coefficients): c_k[i] = np.sum(complex_points * np.exp(-2j * np.pi * i * t) * t[1]) return c_k (NOTE: For this code t1 is basically deltaT, because it equals to 1/len(complex_points) And Now, in the next slide i animate whole process, where i also wrote additional code snippet for creating a gif. If my implementation were correct it shouldn't have anu difficulty creating a batman shape, but we can observe really weird phenomenons throught the gif. this is code snippet for this part import numpy as np import matplotlib.pyplot as plt import imageio from Fourier_Coefficients import calculate_Fourier from Countour_Classifier import contours # List to store file names for GIF creation png_files = [] # Generate plots iteratively for i in range(len(contours[0])): contour_coefficients = [] for contour in contours: contour_coefficients.append(calculate_Fourier(contour, i)) # Fourier coefficients (complex numbers) and frequencies coefficients = contour_coefficients[0] # First contour frequencies = np.arange(len(coefficients)) # Time parameters t = np.linspace(0, 1, len(coefficients)) # One period curve = np.zeros(len(t), dtype=complex) # Use the first (i + 1) coefficients for j in range(len(coefficients)): c, f = coefficients[j], frequencies[j] curve += c * np.exp(1j * 2 * np.pi * f * t) # Plotting plt.figure(figsize=(8, 8)) plt.plot(curve.real, curve.imag, label="Trajectory", color="blue") plt.scatter(0, 0, color="black", label="Origin") plt.axis("equal") plt.title(f"Fourier Series with {i + 1} Coefficients") plt.xlabel("Real Part (X)") plt.ylabel("Imaginary Part (Y)") plt.legend() plt.text(-0.5, -0.5, f"Using {i + 1} coefficients", fontsize=12, color="red") # Save the figure as a PNG file filename = f"fourier_{i + 1}_coefficients.png" plt.savefig(filename) plt.close() # Append the file name to the list png_files.append(filename) # Create a GIF from the PNG files gif_filename = "fourier_series.gif" with imageio.get_writer(gif_filename, mode='I', duration=0.5) as writer: for filename in png_files: image = imageio.imread(filename) writer.append_data(image) print("Plots saved as PNG files and GIF created as 'fourier_series.gif'.") Now this is the result GIF Observation #1 when coefficients number is 0, 1, 2 or 3 it doesnt draw anything. Observation #2 As coefficients number raises, we get the wobbly circular shape, where the lower part of the image is slightly more identical tot he original imagine, but messes up on its wings Observation #3 As we get closer to the len(complex_numbers), the situacion changes and we get this weird shapes, different from circular Observation #4 When we surpass the len(complex_number), it draws a random gibberish Observation #5 When the number of the divisions inside the t value in animation.py code is altered we get completely different images. EDIT 1 here is actual .txt data provided for further testing. https://pastebin.com/Q51pT09E After all of this information given, can you guys help me out whats wrong with my code
In the definition of the Fourier series, you can see that n goes from negative infinity to positive infinity. The issue in your code is that you forgot to compute the coefficients associated with negative values of n. Here is a simple example that shows how to compute the coefficients (from -50 to 50) associated with an ellipse, and build a curve from them: import numpy as np import matplotlib.pyplot as plt def get_ellipse(): t = np.linspace(0, 1, 100) X = 2 * np.cos(2 * np.pi * t) Y = np.sin(2 * np.pi * t) return (X, Y) def calculate_Fourier(X, Y, N): complex_points = [complex(x, y) for x, y in zip(X, Y)] t = np.linspace(0, 1, len(complex_points), endpoint=False) coefficients = np.zeros(2 * N + 1, dtype=complex) for i in range(len(coefficients)): n = i - N coefficients[i] = np.sum(complex_points * np.exp(-2j * np.pi * n * t) * t[1]) return coefficients def build_curve(coefficients, num_points): N = (len(coefficients) - 1) / 2 t = np.linspace(0, 1, num_points) curve = np.zeros(num_points, dtype=complex) for i in range(len(coefficients)): c = coefficients[i] n = i - N curve += c * np.exp(2j * np.pi * n * t) return curve X, Y = get_ellipse() coefficients = calculate_Fourier(X, Y, 50) curve = build_curve(coefficients, 50) plt.plot(curve.real, curve.imag, color="blue") plt.show() Result: Remark: if the number of coefficients is too high, you will get numerical instability.
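A compact statement of the approximation that both the question's calculate_Fourier and the answer's code rely on (my notation, not from either post): with M complex contour samples f_k taken at t_k = k/M,

c_n = \int_0^1 f(t)\, e^{-2\pi i n t}\, dt \;\approx\; \frac{1}{M} \sum_{k=0}^{M-1} f_k\, e^{-2\pi i n t_k},
\qquad
f(t) \;\approx\; \sum_{n=-N}^{N} c_n\, e^{2\pi i n t}.

The 1/M factor is the t[1] (delta t) used in the code, and the reconstruction sum runs over negative as well as non-negative n, which is exactly the part the original test code left out.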
9
5
79,372,057
2025-1-20
https://stackoverflow.com/questions/79372057/aggregate-3d-array-using-zone-and-time-index-arrays
Using the small example below, I'm seeking to aggregate (sum) the values in the 3D dat_arr array using two other arrays to guide the grouping. The first index of dat_arr is related to time. The second and third indices are related to spatial (X, Y) locations. How can I sum values in dat_arr such that the temporal binning is guided by the contents of tim_idx (same length as the first dimension of dat_arr) and the spatial binning uses zon_arr (has same dimensions as the last two indices of dat_arr)? import numpy as np import matplotlib.pyplot as plt zon_arr = np.zeros((3,5)) tim_idx = np.array([0,0,1,1,2,2,3,3]) # set up arbitrary zones zon_arr[1, :3] = 1 zon_arr[1, 3:] = 2 # plt.imshow(zon_arr) # plt.show() # generate arbitrary array with data # first index = time; last 2 indices represent X-Y pts in space # last two indices must have same dims as zon_arr np.random.seed(100) dat_arr = np.random.rand(8, 3, 5) So the output I'm expecting would give me the sum of values contained in dat_arr for each unique value in tim_idx and zon_arr. In other words, I would expect output that has 3 value (corresponding to 3 zones) for each of the 4 unique time values in tim_idx?
First, let's compute this with a loop to get a sense of the potential output: sums = {} # for each combination of coordinates for i in range(len(tim_idx)): for j in range(zon_arr.shape[0]): for k in range(zon_arr.shape[1]): # add the value to the (time, zone) key combination key = (tim_idx[i], zon_arr[j, k]) sums[key] = sums.get(key, 0) + dat_arr[i, j, k] which gives us: {(0, 0): 8.204124414317336, (0, 1): 3.8075543426771645, (0, 2): 1.2233223229754382, (1, 0): 7.920231812858928, (1, 1): 4.150642040307019, (1, 2): 2.4211020016615836, (2, 0): 10.363684964675313, (2, 1): 3.06163710842573, (2, 2): 1.9547272492467518, (3, 0): 10.841595367423158, (3, 1): 2.6617183569891893, (3, 2): 2.0222766813453674} Now we can leverage numpy indexing to perform the same thing in a vectorial way. meshgrid to generate the indexer, unique to get the unique combinations, and bincount to compute the sum per group: # create the indexer from the combination of time/zone i, j = np.meshgrid(tim_idx, zon_arr, indexing='ij') coord = np.c_[i.ravel(), j.ravel()] # alternatively # coord = np.c_[np.repeat(tim_idx, zon_arr.size), # np.tile(zon_arr.flat, len(tim_idx))] # identify the unique combinations for later aggregation keys, idx = np.unique(coord, return_inverse=True, axis=0) # compute the counts per key sums = np.bincount(idx, dat_arr.ravel()) Output: # keys array([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1], [2, 2], [3, 0], [3, 1], [3, 2]]) # sums array([ 8.20412441, 3.80755434, 1.22332232, 7.92023181, 4.15064204, 2.421102 , 10.36368496, 3.06163711, 1.95472725, 10.84159537, 2.66171836, 2.02227668])
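As a small follow-up (an assumption on top of the answer's code, not part of it): if every (time, zone) combination occurs at least once, as in this example, the flat sums vector can be viewed as a (n_times, n_zones) table, because np.unique returns the keys in lexicographic (time, zone) order:

n_times = len(np.unique(tim_idx))        # 4
n_zones = len(np.unique(zon_arr))        # 3
table = sums.reshape(n_times, n_zones)   # table[t, z] = sum for time bin t and zone z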
2
2
79,373,051
2025-1-21
https://stackoverflow.com/questions/79373051/how-to-handle-inconsistent-columns-ragged-rows-in-a-delimited-file-using-polar
I am working with a legacy system that generates delimited files (e.g., CSV), but the number of columns in these files is inconsistent across rows (ragged rows). I am reading the file from ADLS with Polars, but I'm encountering an issue depending on the structure of the second row in the file. pl.read_csv('sample.csv', has_header=False, skip_rows=1, infer_schema=False, infer_schema_length=None, ignore_errors=True) If the second row has more columns than subsequent rows, Polars reads the file successfully and fills the missing values in subsequent rows with null. However, if subsequent rows have more columns than the second row, I get the following exception ComputeError: found more fields than defined in 'Schema' Consider setting 'truncate_ragged_lines=True'. Is there a way to handle such cases dynamically in Polars, or do I need to preprocess the file to fix these inconsistencies before reading? Any alternative approaches or solutions to this problem would be appreciated! Example Data - Failure ID,Name,Age 1,John,28 2,Jane,35,California,USA 3,Emily,22 4,Michael,40,Australia,Melbourne Example Data - Success ID,Name,Age 2,Jane,35,California,USA 1,John,28 3,Emily,22 4,Michael,40,Australia,Melbourne
Read it in as a single column by setting the separator to (hopefully) an unused utf8 character with no header and then use .str.split.list.to_struct followed by unnest to allow a dynamic number of columns. Then you have to rename the columns and slice out the first row. import polars as pl import io from warnings import catch_warnings, filterwarnings input_file = io.StringIO("""ID,Name,Age 1,John,28 2,Jane,35,California,USA 3,Emily,22 4,Michael,40,Australia,Melbourne""" ) input_file.seek(0) with catch_warnings(): filterwarnings("ignore") ## this suppresses the warning from `to_struct` which wants explicit field names. df = ( pl.read_csv(input_file, separator="\x00", has_header=False) .with_columns( pl.col("column_1") .str.split(",") .list.to_struct(n_field_strategy="max_width") ) .unnest("column_1") ) df = df.rename({x:y for x,y in zip(df.columns, df.row(0)) if y is not None}) df = df.slice(1,) Now you've got a df of all strings. You could try to do a for loop with all the columns, trying to cast them but turns out that is slower (at least in a few tests that I did) than writing the existing df to a csv and then rereading it to force polars's auto-infer mechanism. from tempfile import NamedTemporaryFile with NamedTemporaryFile() as ff: df.write_csv(ff) ff.seek(0) df= pl.read_csv(ff) If you've got enough memory then replacing the tempfile with an io.BytesIO() will be even faster.
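A preprocessing alternative, sketched under the assumptions that the file has no quoted commas and that the first line is the header: pad every data row to the widest row before handing the text to read_csv, and name the overflow columns explicitly (the extra_i names are placeholders):

import io
import polars as pl

with open("sample.csv") as f:
    lines = f.read().splitlines()

width = max(line.count(",") + 1 for line in lines)   # widest row in the file
names = lines[0].split(",") + [f"extra_{i}" for i in range(width - lines[0].count(",") - 1)]
padded = "\n".join(line + "," * (width - line.count(",") - 1) for line in lines[1:])

df = pl.read_csv(io.StringIO(padded), has_header=False, new_columns=names)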
4
1
79,371,384
2025-1-20
https://stackoverflow.com/questions/79371384/cartopy-doesnt-render-left-and-right-longitude-labes
I'm using cartopy to draw a geomap. This is how I set up graticule rendering: if graticule: gl = ax.gridlines( draw_labels=True, linewidth=0.8, color='gray', alpha=0.5, linestyle='--', x_inline=False, y_inline=True ) gl.xlocator = mticker.FixedLocator(np.arange(-180, 181, 10)) gl.ylocator = mticker.FixedLocator(np.arange(0, 91, 10)) gl.top_labels = False gl.bottom_labels = True gl.left_labels = True gl.right_labels = True gl.xlabel_style = {'size': 10, 'color': 'gray'} gl.ylabel_style = {'size': 10, 'color': 'gray'} cartopy doesn't draw longitude labels to the left and to the right of the map. Only bottom and top (if on) labels are drawn: With ax.set_extent([0, 40, 75, 85], crs=ccrs.PlateCarree()) added: Latitude labels are okay being inline.
For the draw_labels parameter, instead of setting it to True you can pass a dictionary stating, for each of the 4 sides, which coordinate you wish to see: gl = ax.gridlines( draw_labels={"bottom": "x", "left": "x", "right": "x"}, linewidth=0.8, color='gray', alpha=0.5, linestyle='--', x_inline=False, y_inline=True ) You will also need to remove these 4 lines: # gl.top_labels = False # gl.bottom_labels = True # gl.left_labels = True # gl.right_labels = True
1
2
79,375,373
2025-1-21
https://stackoverflow.com/questions/79375373/how-to-align-split-violin-plots-with-seaborn
I am trying to plot split violin plots with Seaborn, i.e. a pair of KDE plots stacked against each other, typically to see the difference between distributions. My use case is very similar to the docs except I would like to superimpose custom box plots on top (as in this tutorial) However, I am having a strange alignment issue with the violin plots with respect to the X axis and I don't understand what I am doing differently from the docs... Here's code for a MRE with only split violins: import numpy as np import pandas as pd from matplotlib import pyplot as plt import seaborn as sns sns.set_theme() data1 = np.random.normal(0, 1, 1000) data2 = np.random.normal(1, 2, 1000) data = pd.concat( [ pd.DataFrame({"column": "1", "data1": data1, "data2": data2}), pd.DataFrame({"column": "2", "data3": data2, "data4": data1}), ], axis="rows", ) def mkplot(): fig, violin_ax = plt.subplots() sns.violinplot( data=data.melt(id_vars="column"), y="value", split=True, hue="variable", x="column", ax=violin_ax, palette="Paired", bw_method="silverman", inner=None, ) plt.show() mkplot() This produces split violins whose middle is not aligned with the X axis label: mis-aligned violins (this is also true when "column" is of numeric type rather than str) It seems that box plots are also mis-aligned, but not with the same magnitude; you can use the function below with the same data def mkplot2(): fig, violin_ax = plt.subplots() sns.violinplot( data=data.melt(id_vars="column"), y="value", split=True, hue="variable", x="column", ax=violin_ax, palette="Paired", bw_method="silverman", inner=None, ) sns.boxplot( data=data.melt(id_vars="column"), y="value", hue="variable", x="column", ax=violin_ax, palette="Paired", width=0.3, flierprops={"marker": "o", "markersize": 3}, legend=False, dodge=True, ) plt.show() mkplot2() mis-aligned violins+boxes How can I solve this ?
The issue is due to the NaNs that you have after melting. This makes 4 groups and thus the violins are shifted to account for those. You could plot the groups independently: data_flat = data.melt('column').dropna(subset='value') violin_ax = plt.subplot() pal = sns.color_palette('Paired') for i, (name, g) in enumerate(data_flat.groupby('column')): sns.violinplot( data=g, y='value', split=True, hue='variable', x='column', ax=violin_ax, palette=pal[2*i:2*i+2], bw_method='silverman', inner=None, ) Output:
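A quick diagnostic (my addition, using the question's own data) that shows why the violins shift: melting produces four hue levels and a block of NaN values for the columns that do not apply to each group:

melted = data.melt(id_vars="column")
print(melted["variable"].unique())   # ['data1' 'data2' 'data3' 'data4'] -> 4 hue levels
print(melted["value"].isna().sum())  # NaN rows created for the non-matching columns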
3
2
79,375,192
2025-1-21
https://stackoverflow.com/questions/79375192/how-to-define-the-search-space-for-a-simple-equation-optimization
I'm trying to learn skopt, but I'm struggling to get even a simple multivariate minimization to run. import skopt def black_box_function(some_x, some_y): return -some_x + 2 - (some_y - 1) ** 2 + 1 BOUNDS = [(0, 100.0), (0, 100.0)] result = skopt.dummy_minimize(func=black_box_function, dimensions=BOUNDS) When I run this, it seems to figure out that I want the search space for some_x to lie between 0 and 100, but it returns this error: TypeError: black_box_function() missing 1 required positional argument: 'some_y'. How can I define the search space for both some_x and some_y?
Quoting the documentation Function to minimize. Should take a single list of parameters and return the objective value. So black_box_function should not have two parameters some_x, some_y, but a single parameter some_xy, that is a list of those two def black_box_function(some_xy): some_x, some_y = some_xy return -some_x + 2 - (some_y - 1) ** 2 + 1
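With that change, the original call works unchanged; a minimal usage sketch showing where the best point ends up (dummy_minimize only samples the search space at random):

def black_box_function(some_xy):
    some_x, some_y = some_xy
    return -some_x + 2 - (some_y - 1) ** 2 + 1

result = skopt.dummy_minimize(func=black_box_function, dimensions=BOUNDS)
print(result.x)    # best [some_x, some_y] found
print(result.fun)  # objective value at that point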
4
3
79,374,797
2025-1-21
https://stackoverflow.com/questions/79374797/how-to-calculate-horizontal-median
How to calculate horizontal median for numerical columns? df = pl.DataFrame({"ABC":["foo", "bar", "foo"], "A":[1,2,3], "B":[2,1,None], "C":[1,2,3]}) print(df) shape: (3, 4) ┌─────┬─────┬──────┬─────┐ │ ABC ┆ A ┆ B ┆ C │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪══════╪═════╡ │ foo ┆ 1 ┆ 2 ┆ 1 │ │ bar ┆ 2 ┆ 1 ┆ 2 │ │ foo ┆ 3 ┆ null ┆ 3 │ └─────┴─────┴──────┴─────┘ I want to achieve the same as with the below pl.mean_horizontal, but get median instead of the mean. I did not find existing expression for this. print(df.with_columns(pl.mean_horizontal(pl.col(pl.Int64)).alias("Horizontal Mean"))) shape: (3, 5) ┌─────┬─────┬──────┬─────┬─────────────────┐ │ ABC ┆ A ┆ B ┆ C ┆ Horizontal Mean │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 ┆ f64 │ ╞═════╪═════╪══════╪═════╪═════════════════╡ │ foo ┆ 1 ┆ 2 ┆ 1 ┆ 1.333333 │ │ bar ┆ 2 ┆ 1 ┆ 2 ┆ 1.666667 │ │ foo ┆ 3 ┆ null ┆ 3 ┆ 3.0 │ └─────┴─────┴──────┴─────┴─────────────────┘
There's no median_horizontal() at the moment, but you could use pl.concat_list() to create list column out of all pl.Int64 columns. pl.Expr.list.median() to calculate median. df.with_columns( pl.concat_list(pl.col(pl.Int64)).list.median().alias("Horizontal Median") ) shape: (3, 5) ┌─────┬─────┬──────┬─────┬───────────────────┐ │ ABC ┆ A ┆ B ┆ C ┆ Horizontal Median │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 ┆ f64 │ ╞═════╪═════╪══════╪═════╪═══════════════════╡ │ foo ┆ 1 ┆ 2 ┆ 1 ┆ 1.0 │ │ bar ┆ 2 ┆ 1 ┆ 2 ┆ 2.0 │ │ foo ┆ 3 ┆ null ┆ 3 ┆ 3.0 │ └─────┴─────┴──────┴─────┴───────────────────┘ Or you can use numpy integration (but this will probably be slower): import numpy as np df.with_columns( pl.Series("Horizontal Median", np.nanmedian(df.select(pl.col(pl.Int64)), axis=1)) ) shape: (3, 5) ┌─────┬─────┬──────┬─────┬───────────────────┐ │ ABC ┆ A ┆ B ┆ C ┆ Horizontal Median │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 ┆ f64 │ ╞═════╪═════╪══════╪═════╪═══════════════════╡ │ foo ┆ 1 ┆ 2 ┆ 1 ┆ 1.0 │ │ bar ┆ 2 ┆ 1 ┆ 2 ┆ 2.0 │ │ foo ┆ 3 ┆ null ┆ 3 ┆ 3.0 │ └─────┴─────┴──────┴─────┴───────────────────┘
5
2
79,374,674
2025-1-21
https://stackoverflow.com/questions/79374674/pandas-dataframe-update-with-filter-func
I have two dataframes with identical shape and want to update df1 with df2 if some conditions are met import pandas as pd from typing import Any df1 = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) print(df1, "\n") df2 = pd.DataFrame({"A": [7, 8, 9], "B": [10, 3, 12]}) print(df2, "\n") # Define a condition function def condition(x: Any) -> bool: """condition function to update only cells matching the conditions""" return True if x in [2, 7, 9] else False df1.update(df2) print(df1) but if I use filter_func df1.update(df2,filter_func=condition) it fails with ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() Unfortunately the docu is not very verbose. How to update a dataframe with conditions correctly?
Your function will receive a 1D numpy array (per column), it should be vectorized and return a boolean 1D array (callable(1d-array) -> bool 1d-array). Use numpy.isin to test membership: def condition(x): """condition function to update only cells matching the conditions""" return np.isin(x, [2, 7, 9]) df1.update(df2, filter_func=condition) with a lambda: df1.update(df2, filter_func=lambda x: np.isin(x, [2, 7, 9])) Alternatively, if you really can't vectorize with pure numpy functions (this should not be done here!), decorate with numpy.vectorize: @np.vectorize def condition(x: Any) -> bool: """condition function to update only cells matching the conditions""" return True if x in [2, 7, 9] else False df1.update(df2, filter_func=condition) Updated df1: A B 0 1 4 1 8 5 2 3 6
1
3
79,374,415
2025-1-21
https://stackoverflow.com/questions/79374415/matplotlib-multiple-axes-mixups
I have a problem with a multi axis matplotlib plot. The code is close to what I want but somehow axes are getting mixed up. The ticks are missing on ax4 aka the green y-axis but only show up on ax2 (the red one) and the labels are duplicated and appear on both axes, ax2 and ax4. import numpy as np import matplotlib.pyplot as plt # Generate fake data distance = np.logspace(2, 4, num=50) a_detector = 10**4 / distance**1.5 b_detector = 10**5 / distance**1.6 c_detector = 10**3 / distance**1.4 d_detector = 10**2 / distance**1.2 # Create figure and axes fig, ax1 = plt.subplots(figsize=(20, 10)) ax1.plot(distance, a_detector, 'bo-', label='A') ax1.set_xlabel('Shower Plane Distance [meter]') ax1.set_ylabel('signal type I', color='blue') ax1.set_xscale('log') ax1.set_yscale('log') ax1.tick_params(axis='y', labelcolor='blue') ax2 = ax1.twinx() ax2.plot(distance, b_detector, 'ro-', label='B') ax2.set_ylabel('signal type II', color='red') ax2.set_yscale('log') ax2.tick_params(axis='y', labelcolor='red') ax3 = ax1.twinx() ax3.spines['right'].set_position(('outward', 90)) ax3.plot(distance, c_detector, 'ks-', label='C') ax3.set_ylabel('signal type III', color='black') ax3.set_yscale('log') ax3.tick_params(axis='y', labelcolor='black') ax4 = ax1.twinx() ax4.spines['left'].set_position(('outward', 90)) ax4.plot(distance, d_detector, 'g^-', label='D') ax4.set_ylabel('signal type IV', color='green') ax4.set_yscale('log') ax4.tick_params(axis='y', labelcolor='green') ax4.yaxis.set_label_position('left') ax4.yaxis.set_tick_params(labelleft=True) fig.legend(loc='upper right', bbox_to_anchor=(0.89, 0.86)) plt.show()
You just need to replace: ax4.yaxis.set_tick_params(labelleft=True) by: ax4.yaxis.set_tick_params(which='major', left=True, right=False, labelleft=True, labelright=False) ax4.yaxis.set_tick_params(which='minor', left=True, right=False, labelleft=True, labelright=False) You moved the y-axis tick labels to the left, but you also need to remove them from their original position with labelright=False, move the tick marks themselves with left=True, right=False, and apply all of this to the minor ticks as well with which='minor', since set_tick_params only affects the major ticks by default.
2
0
79,371,127
2025-1-20
https://stackoverflow.com/questions/79371127/module-does-not-explicitly-export-attribute-attr-defined
In bar.py foo is imported # bar.py from path import foo In my current file bar is imported and I use the get_id function of foo: from path import bar bar.foo.get_id() Mypy is complaining error: Module "bar" does not explicitly export attribute "foo" [attr-defined]
Is it your own module? Use one of these two options that are suggested in the mypy docs; the __all__ variant is generally preferred: # This will re-export it as bar and allow other modules to import it from foo import bar as bar # This will also re-export bar from foo import bar __all__ = ['bar'] If it's a 3rd-party module (this also applies to your own modules): not all modules are designed to follow these suggestions, so you can disable or skip the check for this error, but first ask whether it is more valid to use foo directly: from path import foo foo.get_id() I argue that the usage of module.non_child_module.function is often not justified. Also, when writing your own modules, it's more likely to create import cycles when you put an unnecessary intermediate module in between. If you think keeping bar.foo is more valid than using foo directly, you can either add a # type: ignore[attr-defined] or not check for the error at all by not enabling --no-implicit-reexport / implicit_reexport = False (the strict-mode setting that triggers this check), as many projects do.
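Applied to the question's own names, the explicit re-export goes in bar.py (path here is whatever package foo actually lives in, as in the question):

# bar.py
from path import foo as foo   # explicit re-export; satisfies --no-implicit-reexport

# or, equivalently:
from path import foo
__all__ = ["foo"]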
1
1
79,372,830
2025-1-20
https://stackoverflow.com/questions/79372830/add-business-days-including-weekends
I'm trying to adjust a date by adding a specified number of business days but I would like to adjust for weekends. The weekend days, however, could change depending on the record. So if my data set looks like this: ┌────────────┬────────┬──────────┬──────────┐ │ DT ┆ N_DAYS ┆ WKND1 ┆ WKND2 │ │ --- ┆ --- ┆ --- ┆ --- │ │ date ┆ i64 ┆ str ┆ str │ ╞════════════╪════════╪══════════╪══════════╡ │ 2025-01-02 ┆ 2 ┆ Saturday ┆ Sunday │ │ 2025-01-09 ┆ 2 ┆ Friday ┆ Saturday │ │ 2025-01-10 ┆ 2 ┆ Saturday ┆ null │ │ 2025-01-15 ┆ 1 ┆ Saturday ┆ Sunday │ └────────────┴────────┴──────────┴──────────┘ I can apply: df = df.with_columns(pl.col('DT').dt.add_business_days(pl.col('N_DAYS')).alias('NEW_DT')) ┌────────────┬────────┬──────────┬──────────┬────────────┐ │ DT ┆ N_DAYS ┆ WKND1 ┆ WKND2 ┆ NEW_DT │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ date ┆ i64 ┆ str ┆ str ┆ date │ ╞════════════╪════════╪══════════╪══════════╪════════════╡ │ 2025-01-02 ┆ 2 ┆ Saturday ┆ Sunday ┆ 2025-01-06 │ │ 2025-01-09 ┆ 2 ┆ Friday ┆ Saturday ┆ 2025-01-13 │ │ 2025-01-10 ┆ 2 ┆ Saturday ┆ null ┆ 2025-01-14 │ │ 2025-01-15 ┆ 1 ┆ Saturday ┆ Sunday ┆ 2025-01-16 │ └────────────┴────────┴──────────┴──────────┴────────────┘ However, I've been trying to generate a week_mask tuple for each of the records based on columns WKND1, WKND2 and apply it as part of my transformation so for the first record, the tuple should be: (True, True, True, True, True, False, False) Second Record would be: (True, True, True, True, False, False, True) and so on. Based on the example below the actual response should be: ┌────────────┬────────┬──────────┬──────────┬────────────┐ │ DT ┆ N_DAYS ┆ WKND1 ┆ WKND2 ┆ NEW_DT │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ date ┆ i64 ┆ str ┆ str ┆ date │ ╞════════════╪════════╪══════════╪══════════╪════════════╡ │ 2025-01-02 ┆ 2 ┆ Saturday ┆ Sunday ┆ 2025-01-06 │ │ 2025-01-09 ┆ 2 ┆ Friday ┆ Saturday ┆ 2025-01-14 │ │ 2025-01-10 ┆ 2 ┆ Saturday ┆ null ┆ 2025-01-13 │ │ 2025-01-15 ┆ 1 ┆ Saturday ┆ Sunday ┆ 2025-01-16 │ └────────────┴────────┴──────────┴──────────┴────────────┘ How can I generate the tuple based on column values and apply it dynamically? I tried creating a new column containing a list and using something like this: df = df.with_columns(pl.col('DT').dt.add_business_days(pl.col('N_DAYS'), week_mask=pl.col('W_MASK')).alias('NEW_DT')) but getting: TypeError: argument 'week_mask': 'Expr' object cannot be converted to 'Sequence'
week_mask supposed to be be Iterable, so it seems you can't pass expression there. You can iterate over different masks though: pl.DataFrame.partition_by() to split DataFrame into dict of dataframes. process dataframes, creating week_mask out of partition key. pl.concat() to concat result dataframes together. weekdays = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] pl.concat([ v.with_columns( pl.col('DT').dt.add_business_days( pl.col('N_DAYS'), week_mask=[x not in k for x in weekdays] ).alias('NEW_DT') ) for k, v in df.partition_by('WKND1','WKND2', as_dict = True).items() ]).sort('DT') shape: (4, 5) ┌────────────┬────────┬──────────┬──────────┬────────────┐ │ DT ┆ N_DAYS ┆ WKND1 ┆ WKND2 ┆ NEW_DT │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ date ┆ i64 ┆ str ┆ str ┆ date │ ╞════════════╪════════╪══════════╪══════════╪════════════╡ │ 2025-01-02 ┆ 2 ┆ Saturday ┆ Sunday ┆ 2025-01-06 │ │ 2025-01-09 ┆ 2 ┆ Friday ┆ Saturday ┆ 2025-01-13 │ │ 2025-01-10 ┆ 2 ┆ Saturday ┆ null ┆ 2025-01-13 │ │ 2025-01-15 ┆ 1 ┆ Saturday ┆ Sunday ┆ 2025-01-16 │ └────────────┴────────┴──────────┴──────────┴────────────┘
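For reference, the week_mask built inside that comprehension is exactly the tuple the question asks how to generate; for a partition key k of (WKND1, WKND2) values:

weekdays = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']

k = ('Saturday', 'Sunday')
print([d not in k for d in weekdays])   # [True, True, True, True, True, False, False]

k = ('Friday', 'Saturday')
print([d not in k for d in weekdays])   # [True, True, True, True, False, False, True]

k = ('Saturday', None)                  # WKND2 may be null
print([d not in k for d in weekdays])   # [True, True, True, True, True, False, True]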
4
2
79,373,355
2025-1-21
https://stackoverflow.com/questions/79373355/how-to-use-vectorized-calculations-in-pandas-to-find-out-where-a-value-or-catego
With a dataset with millions of records, I have items with various categories and measurements, and I'm trying to figure out how many of the records have changed, in particular when the category or measurement goes to NaN (or NULL from the database query) during the sequence. In SQL, I'd use some PARTITION style OLAP functions to do this, but seems like it should fairly straightforward in Python with Pandas, but I can't quite wrap my head around the vectorized notation. I've tried various df.groupby clauses and lambda functions but nothing quite gets it in the required format - basically, the df.groupby('item')['measure'] in this example, the first row of the grouped subset of item & measure always returns True, where I'd like to it to be False or NaN. Simply put, they are false positives. I understand from pandas' perspective, it's a change since the first x.shift() would be NaN, but I can't figure out how to filter that or handle it in the lambda function. Sample Code: import pandas as pd import numpy as np test_df = pd.DataFrame({'item': [20, 20, 20, 20, 20, 20, 20, 20, 30, 30, 30, 30, 30, 30, 30, 30, 40, 40, 40, 40, 40, 40, 40, 40 ], 'measure': [1, 1, 1, 3, 3, 3, 3, 3, 6, 6, 6, 6, 6, 7, 7, 7, 10, 10, 10, 10, 10, 10, 10, 10 ], 'cat': ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c', 'c', 'd', 'd', 'd', 'e', 'e', 'e', 'e', 'e', 'e', 'e', 'e']}) test_df['measure_change'] = test_df.groupby('item')['measure'].transform(lambda x: x.shift() != x) test_df['cat_change'] = test_df.groupby('item')['cat'].transform(lambda x: x.shift() != x) In the output below, as an example, rows 0, 8, and 16, the measure_change should be False. So all of item 40 would have measure_change == False and that would indicate no changes with that item. Any & all suggestions are appreciated. (cat_change set up the same way) # item measure measure_change 0 20 1 True 1 20 1 False 2 20 1 False 3 20 3 True 4 20 3 False 5 20 3 False 6 20 3 False 7 20 3 False 8 30 6 True 9 30 6 False 10 30 6 False 11 30 6 False 12 30 6 False 13 30 7 True 14 30 7 False 15 30 7 False 16 40 10 True 17 40 10 False 18 40 10 False 19 40 10 False 20 40 10 False 21 40 10 False 22 40 10 False 23 40 10 False
You can use a combination of groupby.diff and fillna to achieve this. We compare the row difference with 0 to find any rows where measure changed: test_df['measure_change'] = test_df.groupby('item')['measure'].diff().fillna(0) != 0 Result: item measure measure_change 0 20 1 False 1 20 1 False 2 20 1 False 3 20 3 True 4 20 3 False 5 20 3 False 6 20 3 False 7 20 3 False 8 30 6 False 9 30 6 False 10 30 6 False 11 30 6 False 12 30 6 False 13 30 7 True 14 30 7 False 15 30 7 False 16 40 10 False 17 40 10 False 18 40 10 False 19 40 10 False 20 40 10 False 21 40 10 False 22 40 10 False 23 40 10 False Alternativly, if you have strings to compare as well you can add a secondary condition checking the shift value for nans: x.shift().notna(). test_df['measure_change'] = test_df.groupby('item')['measure'].transform(lambda x: (x != x.shift()) & (x.shift().notna()))
2
3
79,371,872
2025-1-20
https://stackoverflow.com/questions/79371872/how-to-accelerate-the-cross-correlation-computation-of-two-2d-matrices-in-python
I am using Python to compute the cross-correlation of two 2D matrices, and I have implemented three different methods. Below is my experimental code along with their execution times: import numpy as np from scipy.signal import fftconvolve # Randomly generate 2D matrices a = np.random.randn(64, 4200) b = np.random.randn(64, 4200) # Using np.correlate def test_1(a, b): return np.array([np.correlate(s_row, t_row, mode="full") for s_row, t_row in zip(a, b)])[:, len(a[0]) - 1:].sum(axis=0) # Using scipy.signal.fftconvolve def test_2(a, b): return np.array([fftconvolve(s_row, t_row[::-1], mode="full") for s_row, t_row in zip(a, b)])[:, len(a[0]) - 1:].sum(axis=0) # Using scipy.signal.fftconvolve but for 2D def test_3(a, b): return fftconvolve(a, np.fliplr(b), mode="full")[:, len(a[0]) - 1:].sum(axis=0) # Performance testing for i in range(100): x = test_1(a, b) # 19.8 seconds for i in range(100): x = test_2(a, b) # 1.1 seconds for i in range(100): x = test_3(a, b) # 3.8 seconds Even if I reverse b in advanced, the cost of time remains unaltered. My computer configuration is: Operating System: Windows 11 CPU: i7-12700F Memory: 16GB I have already used from concurrent.futures import ThreadPoolExecutor, as_completed to speed up the multiple loops. I am looking for further ways to accelerate this computation. Are there any other algorithm optimization suggestions that could help reduce the computation time? Thank you.
I do not think you can find a much faster implementation than that using a sequential code on CPU. This answer explains why. It also provide a slightly-faster experimental FFTW-based implementation which is up to 40% faster, and alternative possibly-faster solutions. Profiling and analysis The FFT library used by Scipy, which is pypocketfft is surprisingly pretty fast (faster than the FFTW library in this case). test_2 spend most of its time in this native library so using a native code or even vectorization would not make the code much faster. Indeed, 60~65% of the time is spent in the function pocketfft::detail::rfftp::exec, 15~20% in Python overheads, ~10% in pocketfft::detail::general_r2c/pocketfft::detail::general_c2r, and ~10% in other operations like copies and the main multiplication. As a result, writing your own specialized native module directly using pypocketfft would make this computation only 15~20% faster at best. I do not think it is worth the effort. Low-level profiling results show that the computing function test_2 is clearly compute bound. The same thing is true for the provided FFTW functions in this answer. That being said, the library pypocketfft only use scalar instructions. Alternative libraries like the FFTW can theoretically benefit from SIMD units. However, in practice, this is far from being easy to write an implementation both using an efficient algorithm and SIMD instructions. The Intel MKL (alternative library) certainly does that. The FFTW only uses SIMD instructions in the complex-to-complex FFTs. This is why the function test_fftw_2 (provided in at the end of this answer) is slower than the other FFTW-based functions (also provided). Faster implementation with the FFTW I tried the FFTW (fully vectorized code), but it is significantly slower in this case (despite the name of the library meaning "Fastest Fourier Transform in the West"). This is because 4200*2-1 has 2 big prime factors and the FFTW underlying algorithm (AFAIK Cooley-Tukey) is inefficient for length having such big prime factors (it is much better for power of 2 length). In fact, the computation is about 8 times faster for a length of 8192 instead of 8399 (your use-case). Note I tried two versions: one in-place with only complex-to-complex FFTs and one out-of-place with real-to-complex/complex-to-real FFTs. The later was supposed to be faster but it is slower in practice... Maybe I missed something critical in my implementations. We can cheat by using bigger array for the FFT, that is a kind of padding. Indeed, using an array size which is divisible by a big power of two can mitigate the aforementioned performance issue (related to prime factors). For example, here we can use an array size of 1024*9=9216 (the smallest number divisible by 1024 bigger than n*2-1). While, we could use a smaller power-of-two base to be as close as possible to n*2-1 (increasing the efficiency by computing useful numbers), this also cause the algorithm the be less efficient (due to a smaller power-of-two factor). In this case, the FFTW implementation is about as fast as the Scipy code with a FFTW_MEASURE plan (this results in a 7% faster code which is a tiny performance boost). With the FFTW_PATIENT, it is actually faster by about 25%! The thing is FFTW_PATIENT plans can be pretty slow to make. The initialization time can only be amortized with at least several thousands of iterations and not just 100. Note that pre-allocating all arrays and plan once before the loop results in a few percent faster execution. 
I also found out that the FFTW is sub-optimal and computing chunks of line is ~10% faster than computing the whole array in one row. As a result, the final implementation test_fftw_4 is about 40% faster than test_2 (still without considering the first execution slowed down due to the planning time). This implementation is still a bit faster than test_2 without expensive planning (e.g. FFTW_MEASURE). This last implementation efficiently use a CPU core: it massively use the SIMD units, it is pretty cache-friendly and spend nearly all its time in the native FFTW library. It can also be easily parallelized on few cores (regarding the chunk size). Here is the different FFTW functions I tested: import pyfftw import numpy as np # Initial FFTW implementation (slower than Scipy/pypocketfft) def test_fftw_1(a, b): m = a.shape[0] n = a.shape[1] assert b.shape[1] == n # Input/output FFT temporary arrays a2 = pyfftw.empty_aligned((m, 2*n-1), dtype='complex128') b2 = pyfftw.empty_aligned((m, 2*n-1), dtype='complex128') r2 = pyfftw.empty_aligned((m, 2*n-1), dtype='complex128') a2[:,:n] = a a2[:,n:] = 0 b2[:,:n] = b[:,::-1] b2[:,n:] = 0 # See FFTW_ESTIMATE, FFTW_MEASURE or FFTW_PATIENT for varying # performance at the expense of a longuer plannification time axes = (1,) flags = ['FFTW_MEASURE'] a2_fft_plan = pyfftw.FFTW(a2, a2, axes, 'FFTW_FORWARD', flags) b2_fft_plan = pyfftw.FFTW(b2, b2, axes, 'FFTW_FORWARD', flags) r2_ifft_plan = pyfftw.FFTW(r2, r2, axes, 'FFTW_BACKWARD', flags) # Actual execution of the plans a2_fft_plan() b2_fft_plan() np.multiply(a2, b2, out=r2) r2_ifft_plan() return r2.real[:, n-1:].sum(axis=0) # Supposed to be faster, but slower in practice # (because it seems not to use SIMD instructions in practice) def test_fftw_2(a, b): m = a.shape[0] n = a.shape[1] assert b.shape[1] == n # Input FFT temporary arrays a2 = pyfftw.empty_aligned((m, 2*n-1), dtype='float64') b2 = pyfftw.empty_aligned((m, 2*n-1), dtype='float64') a2[:,:n] = a a2[:,n:] = 0 b2[:,:n] = b[:,::-1] b2[:,n:] = 0 # Temporary and output arrays a2_fft = pyfftw.empty_aligned((m, n), dtype='complex128') b2_fft = pyfftw.empty_aligned((m, n), dtype='complex128') r2_fft = pyfftw.empty_aligned((m, n), dtype='complex128') r2 = pyfftw.empty_aligned((m, 2*n-1), dtype='float64') # Same thing for planning axes = (1,) flags = ['FFTW_MEASURE'] a2_fft_plan = pyfftw.FFTW(a2, a2_fft, axes, 'FFTW_FORWARD', flags) b2_fft_plan = pyfftw.FFTW(b2, b2_fft, axes, 'FFTW_FORWARD', flags) r2_ifft_plan = pyfftw.FFTW(r2_fft, r2, axes, 'FFTW_BACKWARD', flags) a2_fft_plan() b2_fft_plan() np.multiply(a2_fft, b2_fft, out=r2_fft) r2_ifft_plan() return r2.real[:, n-1:].sum(axis=0) # Pretty-fast FFTW-based implementation (faster than Scipy/pypocketfft without considering the planning time) # However, the first call is pretty slow (e.g. 
dozens of seconds) def test_fftw_3(a, b): m = a.shape[0] n = a.shape[1] assert b.shape[1] == n # Input/output FFT temporary arrays block_size = 1024 size = (2*n-1 + block_size-1) // block_size * block_size a2 = pyfftw.empty_aligned((m, size), dtype='complex128') b2 = pyfftw.empty_aligned((m, size), dtype='complex128') r2 = pyfftw.empty_aligned((m, size), dtype='complex128') a2[:,:n] = a a2[:,n:] = 0 b2[:,:n] = b[:,::-1] b2[:,n:] = 0 axes = (1,) flags = ['FFTW_PATIENT'] a2_fft_plan = pyfftw.FFTW(a2, a2, axes, 'FFTW_FORWARD', flags) b2_fft_plan = pyfftw.FFTW(b2, b2, axes, 'FFTW_FORWARD', flags) r2_ifft_plan = pyfftw.FFTW(r2, r2, axes, 'FFTW_BACKWARD', flags) # Actual execution of the plans a2_fft_plan() b2_fft_plan() np.multiply(a2, b2, out=r2) r2_ifft_plan() diff = size - (2*n-1) return r2.real[:, -diff-n:-diff].sum(axis=0) # Fastest FFTW-based implementation def test_fftw_4(a, b): return sum([test_fftw_3(a[i*8:(i+1)*8], b[i*8:(i+1)*8]) for i in range(8)]) Alternative solutions The Intel MKL is certainly faster than pypocketfft, but this certainly requires a native C/C++ code wrapped to be callable from Python (not so simple to do). I think this is the best option in sequential on CPU. An alternative solution is to use GPUs. For example, CuPy supports FFW computations though this requires Nvidia GPUs (due to CUDA). There is also the quite-new portable pyvkfft module based on VkFFT (supported by AMD). Be aware that double-precision computations are only fast on (expensive) server-side GPUs, not client-side ones (i.e. personal computers). Note using single-precision should results in faster execution timings on both CPU and GPU (especially on client-side GPUs). References You can find a list of interesting C++ library and a benchmark here.
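A rough sketch of the GPU route mentioned in the alternatives above (untested; it assumes an NVIDIA GPU with CuPy installed and mirrors test_3 rather than the FFTW variants):

import cupy as cp
from cupyx.scipy.signal import fftconvolve as cp_fftconvolve

def test_gpu(a, b):
    a_d, b_d = cp.asarray(a), cp.asarray(b)   # move inputs to the GPU
    r = cp_fftconvolve(a_d, cp.fliplr(b_d), mode="full")[:, a.shape[1] - 1:].sum(axis=0)
    return cp.asnumpy(r)                      # copy the result back to host memory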
3
5
79,371,928
2025-1-20
https://stackoverflow.com/questions/79371928/multiple-overlapping-seaborn-violin-plots-split-by-hue
I am trying to create overlapping and transparent violin plots split by one variable using seaborn in python. My dataset looks like this: The variable "names" are "one" to "nine", "distance" is from 0 to 1, condition is either "healthy" or "disease", and "sample_id" is 1 to 16. Each "condition" has 8 sample_ids. Please see my current result below: As you can see, the problem is that the two halves of the violin plot are wrong orientation for each of the "name" variables, and the legend contains disease/healthy "condition" variable for each of the 16 sample_ids. The code that I am using for this is: my_ids=my_dataset.sample_id.unique() my_condition_palette={"disease": "darkorange","healthy":"steelblue"} fig, ax = plt.pyplot.subplots() for sample_id in my_ids: sns.violinplot(data=my_dataset[my_dataset.sample_id==sample_id], x="name", y="distance", hue="condition", hue_order=["disease", "healthy"], palette=my_condition_palette, cut=0, linewidth=0, inner=None, split=True,density_norm="count",common_norm=False, gap=0.1) for violin in ax.collections: violin.set_alpha(1/8) Does anyone know what I am doing wrong here? Or perhaps there is a better way of plotting this? Thank you!
With density_norm="count", the width of the violin for the x-value with the highest count (for the given sample_id) is maximized. The width of the other violins is shrunk relative to their count. In the given dataset, it seems that each sample_id is either fully 'healthy' or fully 'disease'. When drawing one sample_id, seaborn thinks there is only one hue value active, which will occupy the full width for each of the x-values. You can use dodge=True to force the violin to be reduced and put on the correct side. For the legend, you can set legend=False for all except one of the sample_ids. The following code creates reproducible test data and shows how everything could work. order= sets the order of the x values. from matplotlib import pyplot as plt import seaborn as sns import pandas as pd import numpy as np # first, create some dummy test data np.random.seed(20250120) df = pd.DataFrame({'sample_id': np.repeat(np.arange(1, 17), 100)}) names = ['one', 'two', 'three', 'four', 'five', 'six'] prob = np.random.rand(len(names)) ** 2 + 0.1 # use different probabilities for each 'name' prob /= prob.sum() # the probabilities need to sum to 1 df['name'] = np.random.choice(names, len(df), p=prob) df['distance'] = np.random.rand(len(df)) df['condition'] = np.where(df['sample_id'] % 2 == 1, 'disease', 'healthy') my_ids = df.sample_id.unique() my_condition_palette = {"disease": "darkorange", "healthy": "steelblue"} fig, ax = plt.subplots() for sample_id in my_ids: sns.violinplot(data=df[df['sample_id'] == sample_id], x="name", y="distance", order=names, hue="condition", hue_order=["disease", "healthy"], palette=my_condition_palette, cut=0, linewidth=0, inner=None, split=True, density_norm="count", common_norm=False, gap=0.1, dodge=True, legend=sample_id == my_ids[0]) for violin in ax.collections: violin.set_alpha(1 / 8) sns.despine() sns.move_legend(ax, loc="upper left", bbox_to_anchor=(1, 1)) ax.set_xlabel('') # remove superfluous x label plt.tight_layout() plt.show() PS: This is how the plot looks without dodge=True, and plotting only the first sample. The "half" violins are rescaled to occupy the full width (default 0.8 wide) for each x value.
1
2
79,365,212
2025-1-17
https://stackoverflow.com/questions/79365212/how-to-filter-a-lot-of-colors-out-of-an-image-the-numpy-way
I have an image, from which I would like to filter some colors: if a given pixel is in my set of colors, I would like to replace it by a white pixel. With an image called original of shape (height, width, 3) and a color like ar=np.array([117,30,41]), the following works fine: mask = (original == ar).all(axis=2) original[mask] = [255, 255, 255] The trouble is, the set of colors I want to exclude is quite big (~37000). Using the previous code in a loop (for ar in colors) works again, but is quite slow. Is there a faster way to do it? For now, my set of colors is in a numpy array of shape (37000, 3). I'm sure that all of these colors are present on my image, and I'm also sure that they are unique.
A simple way to solve this would be a look up table. A look up table with a boolean for every color would only cost 256 * 256 * 256 * 1 bytes = 16 MiB, and would enable you to determine if a color is in your list of disallowed colors in constant time. Here is an example. This code generates an image with multiple colors. It filters out some of those colors using two approaches. The first approach is the one you describe in the question. The second approach is the lookup table. import numpy as np # Only used for generating image. Skip this if you already have an image. image_colors = np.array([ (100, 100, 100), (200, 200, 200), (255, 255, 0), (255, 0, 0), ]) image_colors_to_remove = [ (255, 255, 0), (255, 0, 0), ] # Generate image resolution = (800, 600) np.random.seed(42) image = np.random.randint(0, len(image_colors), size=resolution) image = np.array(image_colors)[image].astype(np.uint8) # image = np.random.randint(0, 256, size=(*resolution, 3)) # Slow approach def remove_colors_with_for(image, image_colors_to_remove): image = image.copy() for c in image_colors_to_remove: mask = (image == c).all(axis=2) image[mask] = [255, 255, 255] return image # Fast approach def remove_colors_with_lookup(image, image_colors_to_remove): image = image.copy() colors_remove_lookup = np.zeros((256, 256, 256), dtype=bool) image_colors_to_remove = np.array(image_colors_to_remove).T colors_remove_lookup[tuple(image_colors_to_remove)] = 1 image_channel_first = image.transpose(2, 0, 1) mask = colors_remove_lookup[tuple(image_channel_first)] image[mask] = [255, 255, 255] return image new_image = remove_colors_with_for(image, image_colors_to_remove) new_image2 = remove_colors_with_lookup(image, image_colors_to_remove) print("Same as for loop?", np.all(new_image2 == new_image))
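An alternative that avoids allocating the 16 MiB table is to pack each RGB triple into a single 24-bit integer and use np.isin; a sketch assuming uint8 channels and an (N, 3) colour array like the one in the question:

def remove_colors_packed(image, colors_to_remove):
    # pack (R, G, B) into one integer per pixel and per colour
    img24 = (image[..., 0].astype(np.uint32) << 16) | (image[..., 1].astype(np.uint32) << 8) | image[..., 2]
    col24 = (colors_to_remove[:, 0].astype(np.uint32) << 16) | (colors_to_remove[:, 1].astype(np.uint32) << 8) | colors_to_remove[:, 2]
    out = image.copy()
    out[np.isin(img24, col24)] = [255, 255, 255]
    return out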
2
2
79,370,632
2025-1-20
https://stackoverflow.com/questions/79370632/asyncio-future-running-future-objects
I have a code: import asyncio as aio async def coro(future: aio.Future): print('Coro start') await aio.sleep(3) print('Coro finish') future.set_result('coro result') async def main(): future = aio.Future() aio.create_task(coro(future)) await future coro_result = future.result() print(coro_result) aio.run(main()) In main() I create an empty aio.Future object, then I create a task with aio.create_task(coro(future)) using coroutine which takes aio.Future object. Then I 'run' the empty future with await future. Somehow this line runs the task instead of running the empty coroutine! I don't understand how it works and why it goes like this, because I expect the line await future to run the empty future, not task! If I reorganize my main() like this: import asyncio as aio async def coro(future: aio.Future): print('Coro start') await aio.sleep(3) print('Coro finish') future.set_result('coro result') async def main(): future = aio.Future() await aio.create_task(coro(future)) # await future coro_result = future.result() print(coro_result) aio.run(main()) I get the same result but the code behaviour becomes much more explicit for me.
First, let's clear up some terminology. You said, "Then I 'run' the empty future with await future ..." A future is not "run". A future represents a value that will be set in the future. If you await the future, there has to be some other task that calls set_result on the future before your await is satisfied. Then you said, "Somehow this line (await future) runs the task instead of running the empty coroutine!" I don't know what you mean by an "empty coroutine". Let's see what is actually happening: In main you create a task with aio.create_task(coro(future)). First, you should ideally assign the task instance that was created to some variable so that a reference to the task exists preventing the task from being prematurely garbage collected (and thus terminated). For example, task = aio.create_task(coro(future)) Now that you have created a task, it will potentially execute (depending on what other tasks exist) as soon as main either executes an await statement or returns. Thus the mere fact that you execute await future is sufficient to cause function coro to start running. coro sets a result in the future and when it issues an await or returns, then another task gets a chance to run. In this case coro returns and the await issued on the future by main completes. Your second example is less than ideal. main wants to wait for the future to be set with a value. This setting is being done by coro so clearly if you wait for coro to complete you will discover that your future has been set. But what if coro is a very long running task and sets a value in the future long before it terminates? In this case main will be waiting an unnecessarily long period of time since the future it is interested in was set long before coro ever terminated. Your code should therefore be: import asyncio as aio async def coro(future: aio.Future): print('Coro start') # For demo purposes we set the future right away: future.set_result('coro result') await aio.sleep(3) print('Coro finish') async def main(): future = aio.Future() task = aio.create_task(coro(future)) # We are interested in examining the future as soon # as it gets a result, which may be before coro terminates: await future # Now we can call `result` on the future even though coro will # not terminate for 3 more seconds: coro_result = future.result() print(coro_result) await task # Give coro a chance to finish aio.run(main()) Prints: Coro start coro result Coro finish
1
3
79,371,681
2025-1-20
https://stackoverflow.com/questions/79371681/matplotlib-colobar-with-wrong-range-in-3d-surface
I'm trying to plot a value around the unit sphere using surface plot and facecolors in matplotlib, but my colorbar shows the normalized values instead of the real values. How can I fix this so the colorbar has the right range? import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl from matplotlib import cm fig, ax = plt.subplots(subplot_kw={"projection": "3d"}, figsize = (10, 14)) # Make data n_points = 500 r = 1 u = np.linspace(0, 2 * np.pi, n_points) v = np.linspace(0, np.pi, n_points) x = r * np.outer(np.cos(u), np.sin(v)) y = r * np.outer(np.sin(u), np.sin(v)) z = r * np.outer(np.ones(np.size(u)), np.cos(v)) ax.plot_wireframe(x, y, z, color="grey", alpha = 0.1) data = np.random.uniform(0.2, 0.5, n_points) heatmap = np.array(np.meshgrid(data, data))[1] ax.set_aspect("equal") ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z') colormap = cm.viridis normaliser = mpl.colors.Normalize(vmin=np.min(heatmap), vmax=np.max(heatmap)) print(np.min(heatmap)) print(np.max(heatmap)) surf = ax.plot_surface( x, y, z, facecolors=colormap(normaliser(heatmap)), shade=False) fig.colorbar(surf, shrink=0.5, aspect=10, label="Singlet yield", pad = 0.05, norm = normaliser) plt.show() This outputs 0.20009725794516225 and 0.49936395079063567 as min and max in the prints, but you can see the range of the colorbar is 0 to 1 in the following image. How can I fix this issue and make it so the colorbar has the appropriate colors?
The colorbar function itself doesn't have a norm argument according to the documentation for this function. For minimal alteration, you can pass a matplotlib.cm.ScalarMappable as the first argument of the colorbar call and it works as expected (presuming you also pass the appropriate ax argument). Here is a fully runnable code demonstrating this: import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl from matplotlib import cm fig, ax = plt.subplots(subplot_kw={"projection": "3d"}, figsize=(10, 14)) # Make data n_points = 500 r = 1 u = np.linspace(0, 2 * np.pi, n_points) v = np.linspace(0, np.pi, n_points) x = r * np.outer(np.cos(u), np.sin(v)) y = r * np.outer(np.sin(u), np.sin(v)) z = r * np.outer(np.ones(np.size(u)), np.cos(v)) ax.plot_wireframe(x, y, z, color="grey", alpha=0.1) data = np.random.uniform(0.2, 0.5, n_points) heatmap = np.array(np.meshgrid(data, data))[1] ax.set_aspect("equal") ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z') colormap = cm.viridis normaliser = mpl.colors.Normalize(vmin=np.min(heatmap), vmax=np.max(heatmap)) print(np.min(heatmap)) print(np.max(heatmap)) surf = ax.plot_surface( x, y, z, facecolors=colormap(normaliser(heatmap)), shade=False) mappable = cm.ScalarMappable(norm=normaliser, cmap=colormap) fig.colorbar(mappable, ax=ax, shrink=0.5, aspect=10, label="Singlet yield", pad=0.05) plt.show() Here is the output it generates:
1
2
79,357,840
2025-1-15
https://stackoverflow.com/questions/79357840/extracting-credentials-from-1password-using-onepassword-python-library
I am using this python library ("OnePassword python client") in which to interact with my 1Password instance and extract API credentials from a vault named "Employee". Python Function: def authenticate(base_endpoint, api_key, api_secret): op = OnePassword() available_vaults = op.list_vaults() employee_vault = next((vault for vault in available_vaults if vault["name"] == "Employee"), None) if not employee_vault: print("No Employee vault found") items = op.list_items(vault=employee_vault["id"]) addepar_item = next((item for item in items if item["title"] == "Addepar API"), None) if not addepar_item: print("Please check that Addepar API item exists in 1Password") item_details = op.get_item(uuid=addepar_item["id"], fields=["username", "credential"]) if not item_details: print("Please check that Addepar API item credentials exists in 1Password") api_key = item_details["username"] api_secret = item_details["credential"] base_endpoint = "https://myfirm.addepar.com/api/v1/" This function works flawlessly on a Mac, however, PC users are getting the following error: PC User error: C:\Users\User\PyCharmMiscProject\.venv\Scripts\python.exe C:\Users\User\PyCharmMiscProject\Main.py ⚠ Please enter an Entity ID: 123 Traceback (most recent call last): File "C:\Users\User\PyCharmMiscProject\Main.py", line 180, in <module> entity_attributes = get_entity_attributes(entity_id) File "C:\Users\User\PyCharmMiscProject\Main.py", line 32, in get_entity_attributes base_endpoint, api_key, api_secret = authenticate(base_endpoint="", api_key="", api_secret="") ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\User\PyCharmMiscProject\Main.py", line 11, in authenticate op = OnePassword() File "C:\Users\User\PyCharmMiscProject\.venv\Lib\site-packages\onepassword.py", line 52, in __init__ raise MissingCredentials() onepassword.MissingCredentials Process finished with exit code 1 My Thoughts: I was able to trace the error back to the onepassword library, specifically, the following file. I think this has something to do with bin_path, however, as Mac user, I am having challenges working out what is going on: class OnePassword(object): def __init__(self, secret=None, token=None, shorthand=None, bin_path=""): self.op = os.path.join(bin_path, "op") if secret is not None: self.shorthand = str(uuid4()) self.session_token = self.get_access_token(secret, shorthand=self.shorthand) elif token is not None and shorthand is not None: self.shorthand = shorthand self.session_token = token else: raise MissingCredentials() I would appreciate any guidance on what might not be configured correctly that might be causing this issue.
The Wandera/1password-client [GIT] [PyPI] library simply does not support Windows. From the Operating systems part of the README (as of Jan 2025): The library is split into two parts: installation and client in which we are slowly updating to cover as many operating systems as possible the following table should ensure users understand what this library can and can't do at time of install. MacOS Linux Fully supported Y Y ... (Note: Redacted irrelevant rows from the table) Besides 'Add Windows functionality' still being the first item on the roadmap; There is still an open issue #10 talking about Windows support, but the last reply from 2023 only talks about the 'windows' branch which hasn't seen a commit in the last 2 years. As an alternative, looking at the official 1Password SDK page, there is an official Python SDK with support for MacOS, Linux and Windows. Example provided by 1Password/onepassword-sdk-python/ import asyncio import os from onepassword.client import Client async def main(): # Gets your service account token from the OP_SERVICE_ACCOUNT_TOKEN environment variable. token = os.getenv("OP_SERVICE_ACCOUNT_TOKEN") # Connects to 1Password. Fill in your own integration name and version. client = await Client.authenticate(auth=token, integration_name="My 1Password Integration", integration_version="v1.0.0") # Retrieves a secret from 1Password. Takes a secret reference as input and returns the secret to which it points. value = await client.secrets.resolve("op://vault/item/field") # use value here if __name__ == '__main__': asyncio.run(main())
5
8
79,371,481
2025-1-20
https://stackoverflow.com/questions/79371481/caching-of-parameterized-nested-fixtures-in-pytest
I am trying to understand how and when return values from pytest fixtures are cached. In my understanding, the goal of fixtures (in particular session-scoped fixtures) is that they are called only once and that return values are cached for future calls. This does not seem to be the case for nested parameterized fixtures. The following code shows the issue: from collections import Counter import pytest state = Counter() @pytest.fixture(scope="session", autouse=True) def setup_session(): yield None print(state) @pytest.fixture(scope="session", params=["A1", "A2", "A3"]) def first(request): state[request.param] += 1 return request.param @pytest.fixture(scope="session", params=["B1", "B2"]) def second(first, request): return first + request.param @pytest.fixture(scope="session", params=["C1", "C2"]) def third(second, request): return second + request.param def test_length(third): assert len(third) == 6 The output is Counter({'A1': 3, 'A2': 2, 'A3': 1}). So first is called three times for parameter value A1, two times for A2, and once for A3. Why? I am expecting to get Counter({'A1': 1, 'A2': 1, 'A3': 1}) - one call for each parameter value. In case that's relevant, I am using Python 3.12.3, pytest-8.3.4 PS: All 12 tests pass - that's fine. PPS: When removing one level of nesting, the problem disappears and first is called exactly once per parameter value.
From the docs (Note box) Pytest only caches one instance of a fixture at a time, which means that when using a parametrized fixture, pytest may invoke a fixture more than once in the given scope. If you look closely at the console output you will see that you count the changes in the returned value from first test_length[C1-B1-A1] -> {'A1': 1, 'A2': 0, 'A3': 0} test_length[C1-B1-A2] -> {'A1': 1, 'A2': 1, 'A3': 0} test_length[C1-B2-A2] test_length[C2-B2-A2] test_length[C2-B2-A1] -> {'A1': 2, 'A2': 1, 'A3': 0} test_length[C2-B2-A3] -> {'A1': 2, 'A2': 1, 'A3': 1} test_length[C2-B1-A3] test_length[C1-B2-A3] test_length[C1-B1-A3] test_length[C2-B1-A2] -> {'A1': 2, 'A2': 2, 'A3': 1} test_length[C2-B1-A1] -> {'A1': 3, 'A2': 2, 'A3': 1} test_length[C1-B2-A1] PPS: reducing the nesting might give you the expected results, but not necessarily def test_length(second): output was Counter({'A1': 2, 'A2': 1, 'A3': 1})
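If the fixture does expensive work, one common workaround (a sketch, not something pytest itself provides) is to memoise per parameter value, so the repeated invocations pytest makes for a parametrized session fixture stay cheap; this would replace the question's first fixture:

_first_cache = {}

@pytest.fixture(scope="session", params=["A1", "A2", "A3"])
def first(request):
    if request.param not in _first_cache:
        state[request.param] += 1                    # expensive setup runs once per value
        _first_cache[request.param] = request.param
    return _first_cache[request.param]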
2
2
79,370,497
2025-1-20
https://stackoverflow.com/questions/79370497/type-annotate-inside-loop
The mypy error is Need type annotation for "args" [var-annotated] Need type annotation for "kwargs" [var-annotated] and here is the piece of code expected_args: Optional[Sequence[Tuple[Any, ...]]] expected_kwargs: Optional[Sequence[Dict[str, Any]]] ... expected_args_iter = iter(expected_args or ()) expected_kwargs_iter = iter(expected_kwargs or ()) ... for args, kwargs in itertools.zip_longest( expected_args_iter, expected_kwargs_iter, fillvalue={}) Where can I annotate "args" and "kwargs"?
This appears to be a bug in mypy v1.14.0. For some reason, any iteration over itertools.zip_longest() with an empty sequence* in fillvalue will cause a similar issue. This is the simplest example I could construct: import itertools seq_a: list seq_b: list for a, b in itertools.zip_longest(seq_a, seq_b, fillvalue=[]): ... This is specific to both mypy and empty sequences. Non-empty sequences, non-sequences, and using Pyright are all fine. In your particular case, you can work around this issue by leaving fillvalue as its default (None) and hard-coding an exception into your comprehension. pretty_unused_args = [ ', '.join(itertools.chain( (repr(a) for a in args) if args is not None else [], ('%s=%r' % kwarg for kwarg in kwargs.items()) if kwargs is not None else [])) for args, kwargs in itertools.zip_longest( expected_args_iter, expected_kwargs_iter) ] *Besides (), but including tuple()
2
2
79,369,085
2025-1-19
https://stackoverflow.com/questions/79369085/does-order-of-transforms-applied-for-data-augmentation-matter-in-torchvision-tra
I have the following Custom dataset class for an image segmentation task. class LoadDataset(Dataset): def __init__(self, img_dir, mask_dir, apply_transforms = None): self.img_dir = img_dir self.mask_dir = mask_dir self.transforms = apply_transforms self.img_paths, self.mask_paths = self.__get_all_paths() self.__pil_to_tensor = transforms.PILToTensor() self.__float_tensor = transforms.ToDtype(torch.float32, scale = True) self.__grayscale = transforms.Grayscale() def __get_all_paths(self): img_paths = [os.path.join(self.img_dir, img_name.name) for img_name in os.scandir(self.img_dir) if os.path.isfile(img_name)] mask_paths = [os.path.join(self.mask_dir, mask_name.name) for mask_name in os.scandir(self.mask_dir) if os.path.isfile(mask_name)] img_paths = sorted(img_paths) mask_paths = sorted(mask_paths) return img_paths, mask_paths def __len__(self): return len(self.img_paths) def __getitem__(self, index): img_path, mask_path = self.img_paths[index], self.mask_paths[index] img_PIL = Image.open(img_path) mask_PIL = Image.open(mask_path) img_tensor = self.__pil_to_tensor(img_PIL) img_tensor = self.__float_tensor(img_tensor) mask_tensor = self.__pil_to_tensor(mask_PIL) mask_tensor = self.__float_tensor(mask_tensor) mask_tensor = self.__grayscale(mask_tensor) if self.transforms: img_tensor, mask_tensor = self.transforms(img_tensor, mask_tensor) return img_tensor, mask_tensor When I am applying the following transformation: transforms.RandomHorizontalFlip() either the image or the mask is being flipped. But if I change the order of the transformations in __getitem__ to the following, then it works fine. def __getitem__(self, index): img_path, mask_path = self.img_paths[index], self.mask_paths[index] img_PIL = Image.open(img_path) mask_PIL = Image.open(mask_path) if self.transforms: img_PIL, mask_PIL = self.transforms(img_PIL, mask_PIL) img_tensor = self.__pil_to_tensor(img_PIL) mask_tensor = self.__pil_to_tensor(mask_PIL) img_tensor = self.__float_tensor(img_tensor) mask_tensor = self.__float_tensor(mask_tensor) mask_tensor = self.__grayscale(mask_tensor) return img_tensor, mask_tensor Does the order transformation matter? I'm using torchvision.transforms.v2 for all the transformations.
Yes, the order of transformations matters. In this case, the conversion to tensors makes the difference. When v2.RandomHorizontalFlip is given two plain tensors, the flip will be applied to each of them independently. However, when two PIL images are given, the same transform will be applied to both, which keeps the image and the mask aligned. For more consistent handling, you can try using TVTensors for the data augmentation: with these you declare the type of each input before transforming it. For example:
from torchvision import tv_tensors

img_tensor = tv_tensors.Image(img_tensor)
mask_tensor = tv_tensors.Mask(mask_tensor)
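A sketch of how this could look inside the question's __getitem__, assuming torchvision's v2 transforms as used in the question (variable names follow the question's code):

from torchvision import tv_tensors

# ... after img_tensor and mask_tensor have been built as float tensors ...
img_tensor = tv_tensors.Image(img_tensor)
mask_tensor = tv_tensors.Mask(mask_tensor)
if self.transforms:
    # A single call transforms both inputs with the same random parameters,
    # so a random flip is applied to the image and the mask together.
    img_tensor, mask_tensor = self.transforms(img_tensor, mask_tensor)
return img_tensor, mask_tensor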
2
0
79,369,363
2025-1-19
https://stackoverflow.com/questions/79369363/scipy-minimise-to-find-inverse-function
I have a (non-invertible) function ak([u,v,w]) This takes a point on the surface of the unit octahedron (p: such that |u|+|v|+|w| = 1) and returns a point on the surface of the unit sphere. The function isn't perfect but the intention is to keep the distance between points authalic. I was thinking of using SciPy minimize to provide a numerical inverse, but I cannot wrap my head around it. input: spherical pt [x,y,z], output octahedral pts [u,v,w] such that ak([u,v,w])=[x,y,z] My function ak() is defined like this: def ak(p): # Convert point on octahedron onto the sphere. # Credit to Anders Kaseorg: https://math.stackexchange.com/questions/5016695/ # input: oct_pt is a Euclidean point on the surface of a unit octagon. # output: UVW on a unit sphere. a = 3.227806237143884260376580641604959964752197265625 # 𝛂 - vis. Kaseorg. p1 = (np.pi * p) / 2.0 tp1 = np.tan(p1) xu, xv, xw = tp1[0], tp1[1], tp1[2] u2, v2, w2 = xu ** 2, xv ** 2, xw ** 2 y0p = xu * (v2 + w2 + a * w2 * v2) ** 0.25 y1p = xv * (u2 + w2 + a * u2 * w2) ** 0.25 y2p = xw * (u2 + v2 + a * u2 * v2) ** 0.25 pv = np.array([y0p, y1p, y2p]) return pv / np.linalg.norm(pv, keepdims=True) This function is based on a post that I made on the Math StackExchange. Any hints?
One idea I found that helped significantly was to convert this from a minimization problem to a root-finding problem. The root-finders don't support constraints, so you need to change the objective function to force x to be L1 normalized before converting into Cartesian coordinates. def fn_root(op, tx): # octa_point, target_sphere_point norm = np.linalg.norm(op, ord=1) return ak(op / norm) - tx Once you do that, you can find the point where ak(op / norm) - tx = 0. # Inverse function using numerical optimization def inverse_ak_root(tsp): initial_guess = np.arcsin(tsp) / (np.pi / 2) result = root(fn_root, initial_guess, args=(tsp,), method='hybr', tol=1e-12) assert result.success, result result.x /= np.linalg.norm(result.x, ord=1) return result (I also changed the initial guess from the center of the octahedron's face to np.arcsin(tsp) / (np.pi / 2). I found this reduced how long it took to converge.) I found that this reduced the error of your current solution by about a factor of 10^7, while using fewer calls to ak() than your previous solution. Full testing code import numpy as np from scipy.optimize import minimize, root def xyz_ll(spt): x, y, z = spt[:, 0], spt[:, 1], spt[:, 2] lat = np.degrees(np.arctan2(z, np.sqrt(x**2. + y**2.))) lon = np.degrees(np.arctan2(y, x)) return np.stack([lat, lon], axis=1) def ll_xyz(ll): # convert to radians. phi, theta = np.radians(ll[:, 0]), np.radians(ll[:, 1]) x = np.cos(phi) * np.cos(theta) y = np.cos(phi) * np.sin(theta) z = np.sin(phi) # z is 'up' return np.stack([x, y, z], axis=1) def ak(p): # Convert point on octahedron onto the sphere. # Credit to Anders Kaseorg: https://math.stackexchange.com/questions/5016695/ # input: oct_pt is a Euclidean point on the surface of a unit octagon. # output: UVW on a unit sphere. a = 3.227806237143884260376580641604959964752197265625 # 𝛂 - vis. Kaseorg. p1 = (np.pi * p) / 2.0 tp1 = np.tan(p1) xu, xv, xw = tp1[0], tp1[1], tp1[2] u2, v2, w2 = xu ** 2, xv ** 2, xw ** 2 y0p = xu * (v2 + w2 + a * w2 * v2) ** 0.25 y1p = xv * (u2 + w2 + a * u2 * w2) ** 0.25 y2p = xw * (u2 + v2 + a * u2 * v2) ** 0.25 pv = np.array([y0p, y1p, y2p]) return pv / np.linalg.norm(pv, keepdims=True) def fn(op, tx): # octa_point, target_sphere_point return np.sum((ak(op) - tx) ** 2.) def fn_root(op, tx): # octa_point, target_sphere_point norm = np.linalg.norm(op, ord=1) return ak(op / norm) - tx def constraint(op): # Constraint: |u|+|v|+|w|=1 (surface of the unit octahedron) return np.sum(np.abs(op)) - 1 # Inverse function using numerical optimization def inverse_ak(tsp): initial_guess = np.sign(tsp) * [0.5, 0.5, 0.5] # the centre of the current side # initial_guess = np.arcsin(tsp) / (np.pi / 2) constraints = {'type': 'eq', 'fun': constraint} result = minimize( # maxiter a bit ott maybe, but works. 
fn, initial_guess, args=(tsp,), constraints=constraints, bounds=[(-1., 1.), (-1., 1.), (-1., 1.)], method='SLSQP', options={'maxiter': 1000, 'ftol': 1e-15, 'disp': False} ) # assert result.success, result return result # Inverse function using numerical optimization def inverse_ak_root(tsp): initial_guess = np.arcsin(tsp) / (np.pi / 2) result = root(fn_root, initial_guess, args=(tsp,), method='hybr', tol=1e-12) assert result.success, result result.x /= np.linalg.norm(result.x, ord=1) return result if __name__ == '__main__': N = 50 np.random.seed(42) lat = np.random.uniform(-90, 90, N) lon = np.random.uniform(-180, 180, N) all_points = np.column_stack([lat, lon]) for i in range(len(all_points)): et = np.array([all_points[i]]) sph = ll_xyz(et) result = inverse_ak(sph[0]) octal = result.x result2 = inverse_ak_root(sph[0]) octal2 = result2.x spherical = ak(octal) rx = xyz_ll(np.array([spherical])) spherical2 = ak(octal2) rx2 = xyz_ll(np.array([spherical2])) print(f'Old Approach') print(f'Original Pt:{et}') print(f'Via inverse:{rx}') print(f'Difference:{rx-et}') print(f'Calls: {result.nfev}') print(f'New Approach') print(f'Via inverse:{rx2}') print(f'Difference:{rx2-et}') print(f'Ratio: {np.linalg.norm(rx-et) / (np.linalg.norm(rx2-et) + 1e-15):g}') print(f'Calls: {result2.nfev}') print() Note: The parameter 𝛂 has many more digits than necessary. When Python does math on this number, it ignores all digits that do not fit in a double-precision float, which means the number is converted to 3.2278062371438843 internally. Other approaches tried I also tried coming up with an analytic Jacobian for this function using SymPy, but had too much trouble getting automatic differentiation to work. If you were able find this, you'd be able to differentiate the function more quickly and accurately. I also tried a solution based on scipy.interpolate.RbfInterpolator. It got similar accuracy and speed to the minimize method.
3
3
79,366,429
2025-1-18
https://stackoverflow.com/questions/79366429/matplotlib-legend-not-respecting-content-size-with-lualatex
I need to generate my matplotlib plots using lualatex instead of pdflatex. Among other things, I am using fontspec to change the document fonts. Below I am using this as an example and set lmroman10-regular.otf as the font. This creates a few issues. One is that the handles in the legends are not fully centered and are followed by some whitespace before the right border: The python code generating the intermediate .pgf file looks like this: import matplotlib import numpy from matplotlib import pyplot x = numpy.linspace(-1, 1) y = x ** 2 matplotlib.rcParams["figure.figsize"] = (3, 2.5) matplotlib.rcParams["font.family"] = "serif" matplotlib.rcParams["font.size"] = 10 matplotlib.rcParams["legend.fontsize"] = 8 matplotlib.rcParams["pgf.texsystem"] = "lualatex" PREAMBLE = r"""\usepackage{ifluatex} \ifluatex \usepackage{fontspec} \setmainfont{lmroman10-regular.otf} \fi """ matplotlib.rcParams["text.latex.preamble"] = PREAMBLE matplotlib.rcParams["pgf.preamble"] = PREAMBLE matplotlib.rcParams["text.usetex"] = True pyplot.plot(x, y, label="this is the data") pyplot.legend() pyplot.xlabel("xlabel") pyplot.tight_layout() pyplot.savefig("lualatex_test.pgf") The .pgf file is then embedded in a latex document. It seems it is not possible to directly compile to .pdf since then the document font will not be the selected font which can for example be seen by setting the font to someting more different like AntPoltExpd-Italic.otf. Also note note that the \ifluatex statement has to be added around the lualatex-only code since matplotlib uses pdflatex to determine the dimensions of text fragements, as will be seen below. For the sake of this simple example, the .tex file to render the .pgf may be just: \documentclass[11pt]{scrbook} \usepackage{pgf} \usepackage{fontspec} \setmainfont{lmroman10-regular.otf} \newcommand{\mathdefault}[1]{#1} \begin{document} \input{lualatex_test.pgf} \end{document} which can be typeset using lualatex <filename> and results in the figure shown above (without the red arrow). I thought I had identified the reason for this but it seems I missed something. As mentioned above, matplotlib computes the dimensions of the text patches by actually placing them in a latex templated and compiling it using pdflatex. This happens in the matplotlib.texmanager.TexManager class in the corresponding file on the main branch for example. I thought I could fix it like this: class TexManager: ... @classmethod def get_text_width_height_descent(cls, tex, fontsize, renderer=None): """Return width, height and descent of the text.""" if tex.strip() == '': return 0, 0, 0 dvifile = cls.make_dvi(tex, fontsize) dpi_fraction = renderer.points_to_pixels(1.) if renderer else 1 with dviread.Dvi(dvifile, 72 * dpi_fraction) as dvi: page, = dvi # A total height (including the descent) needs to be returned. w = page.width # !!! if tex == "this is the data": w /= 1.14 print("fixed width") # !!! return w, page.height + page.descent, page.descent which, to my understanding, should trick matplotlib into thinking the text is shorter by a factor of 1.14 (just a guess, should be adapted once the solution works). The code definitely gets called, since "fixed width" gets printed. But the gap is not fixed: How can I fix this issue? How is matplotlib computing the legend content's width and can I maybe patch this to account for the correct width? Let's assume I know that the width error factor for the font size 8 is approximately 1.14. I can easily determine this for other fonts and font sizes.
It seems that when using custom font settings, you have to set the following rcParam:
matplotlib.rcParams["pgf.rcfonts"] = False
Otherwise, if I understand it correctly, the font settings are applied from the rcParams. This solves the spacing issue in your example. For more details, see also the documentation of the pgf backend, where this parameter is explained.
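A minimal sketch of where this could go, next to the other pgf settings from the question:

import matplotlib
matplotlib.rcParams["pgf.texsystem"] = "lualatex"
matplotlib.rcParams["pgf.rcfonts"] = False      # don't let the backend re-apply fonts from the rcParams
matplotlib.rcParams["pgf.preamble"] = PREAMBLE  # PREAMBLE as defined in the question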
3
2
79,369,295
2025-1-19
https://stackoverflow.com/questions/79369295/convert-a-pdf-to-a-png-with-transparency
My goal is to obtain a PNG file with a transparent background from a PDF file. The convert tool can do the job: $ convert test.pdf test.png $ file test.png test.png: PNG image data, 595 x 842, 8-bit gray+alpha, non-interlaced But I would like to do it programmatically in python without relying on convert or any other tool. I came up with the pdf2image package but I could not figure out how to get transparency. Here is my attempt: import pdf2image with open("test.pdf", "rb") as fd: pdf = pdf2image.convert_from_bytes(fd.read(), transparent=True) pdf[0].save("test.png") Unfortunately I lose transparency: $ python test.py $ file test.png test.png: PNG image data, 1654 x 2339, 8-bit/color RGB, non-interlaced Is there any way to do this without relying on an external tool using pdf2image or any other package ?
With PyMuPDF, you can do this:
import pymupdf

doc = pymupdf.open("test.pdf")
for page in doc:
    pix = page.get_pixmap(alpha=True, dpi=150)
    pix.save(f"{doc.name}-{page.number}.png")
This results in transparent PNG images named "test.pdf-0.png", etc. In the above case the images have a resolution of 150 DPI. Note: I am a maintainer and the original creator of PyMuPDF.
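If only the first page is needed, as in the question's pdf2image snippet, a sketch could be:

import pymupdf

doc = pymupdf.open("test.pdf")
pix = doc[0].get_pixmap(alpha=True, dpi=150)  # page 0 only, with an alpha channel
pix.save("test.png")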
1
3
79,368,759
2025-1-19
https://stackoverflow.com/questions/79368759/tensorflow-probability-mixturenormal-layer-example-not-working-as-in-example
Tensorflow version is 2.17.1 Tensoflow probability version is 0.24.0 Example from the documentation https://www.tensorflow.org/probability/api_docs/python/tfp/layers/MixtureNormal?hl=en is the following: import numpy as np import tensorflow as tf import tensorflow_probability as tfp tfd = tfp.distributions tfpl = tfp.layers tfk = tf.keras tfkl = tf.keras.layers # Load data -- graph of a [cardioid](https://en.wikipedia.org/wiki/Cardioid). n = 2000 t = tfd.Uniform(low=-np.pi, high=np.pi).sample([n, 1]) r = 2 * (1 - tf.cos(t)) x = r * tf.sin(t) + tfd.Normal(loc=0., scale=0.1).sample([n, 1]) y = r * tf.cos(t) + tfd.Normal(loc=0., scale=0.1).sample([n, 1]) # Model the distribution of y given x with a Mixture Density Network. event_shape = [1] num_components = 5 params_size = tfpl.MixtureNormal.params_size(num_components, event_shape) model = tfk.Sequential([ tfkl.Dense(12, activation='relu'), tfkl.Dense(params_size, activation=None), tfpl.MixtureNormal(num_components, event_shape) ]) # Fit. batch_size = 100 model.compile(optimizer=tf.train.AdamOptimizer(learning_rate=0.02), loss=lambda y, model: -model.log_prob(y)) model.fit(x, y, batch_size=batch_size, epochs=20, steps_per_epoch=n // batch_size) This ends up with the error ValueError: Only instances of `keras.Layer` can be added to a Sequential model. Received: <tensorflow_probability.python.layers.distribution_layer.MixtureNormal object at 0x7c9076269a50> (of type <class 'tensorflow_probability.python.layers.distribution_layer.MixtureNormal'>)
Taking a look at the release notes of TensorFlow Probability: "NOTE: In TensorFlow 2.16+, tf.keras (and tf.initializers, tf.losses, and tf.optimizers) refers to Keras 3. TensorFlow Probability is not compatible with Keras 3 -- instead TFP is continuing to use Keras 2, which is now packaged as tf-keras and tf-keras-nightly and is imported as tf_keras. When using TensorFlow Probability with TensorFlow, you must explicitly install Keras 2 along with TensorFlow (or install tensorflow-probability[tf] or tfp-nightly[tf] to automatically install these dependencies.)" Try and follow the instructions above.
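A minimal sketch of what the fix could look like, assuming Keras 2 is installed as the tf-keras package (for example via pip install "tensorflow-probability[tf]", which pulls it in), so that the Sequential model is built from tf_keras instead of tf.keras (Keras 3):

import tf_keras as tfk               # Keras 2, which TFP is built against
import tensorflow_probability as tfp

tfkl = tfk.layers
tfpl = tfp.layers

event_shape = [1]
num_components = 5
params_size = tfpl.MixtureNormal.params_size(num_components, event_shape)

# Building the model from tf_keras avoids the
# "Only instances of `keras.Layer` can be added" error.
model = tfk.Sequential([
    tfkl.Dense(12, activation='relu'),
    tfkl.Dense(params_size, activation=None),
    tfpl.MixtureNormal(num_components, event_shape),
])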
2
2
79,367,831
2025-1-18
https://stackoverflow.com/questions/79367831/python-3d-surface-interpolation-from-2d-simulation-data
I’m working with a 3D dataset in Python where some (x,y) points create frame-like structures, some of them with multiple z-values per (x,y) pair, which seems to lead to a problem with griddata. The provided code produces the interpolation between 2 data sets as a working example, and then tries to interpolate 2 data sets, where one has multiple z values. The data and the interpolation are plotted for each case. I also provide an image with a marking, what seems to create the problem afaik. How can I interpolate such data with griddata or is there another way? Code contains part of the original (simulation) data. import numpy as np import matplotlib.pyplot as plt from scipy.spatial import distance from scipy.interpolate import griddata from scipy.ndimage import gaussian_filter from mpl_toolkits.mplot3d.art3d import Poly3DCollection from matplotlib.colors import Normalize from matplotlib import cm import matplotlib.tri as mtri from scipy.interpolate import LinearNDInterpolator x1 = np.array([ 0, 0.08317108166803, 0.16393703317322, 0.24010197356808, 0.31189968667059, 0.37964309047995, 0.44362962064596, 0.50416941327588, 0.56154833462750, 0.61603270573008, 0.66785467575344, 0.71722870327800, 0.76434124363645, 0.80936126105512, 0.85243882934910, 0.89371020366228, 0.93330023252176, 0.97132031393031, 1.0078733278234, 1.04305186588065, 1.07694164659622, 1.10962198351184, 1.14116475234496, 1.17163490787683, 1.20109277181813, 1.22959493653164, 1.25719327772516, 1.28393541084346, 1.30986533211137, 1.33502459428484, 1.3594517119059, 1.38318253432743, 1.40625054167016, 1.428687092237, 1.45052182005806, 1.47178239513572, 1.4924946731886, 1.51268282564772, 1.53236959683959, 1.55157657215197, 1.5703240685486, 1.58863120287582, 1.60651600994447, 1.62399550071956, 1.64108576853907, 1.65780207082065, 1.67415889980995, 1.69017008507776, 1.70584884180465, 1.72120783285127, 1.73625905724844, 1.75101389043042, 1.76548311976239, 1.77967698639756, 1.79360519288652, 1.80727689980216, 1.8207008202846, 1.83388518878035, 1.84683777315861, 1.85956584789141, 1.87207616663999, 1.88437496210868, 1.89646789288332, 1.90835992552183, 1.92005539714572, 1.9315578245162, 1.94286977773435, 1.95399269320514, 1.96492669014037, 1.97567031873621, 1.98622026624029, 1.99657105652974, 2.00671480587649, 2.01664080378909, 2.02633512036696, 2.03578034488985, 2.04495553215686, 2.05383630364206, 2.0623951776226, 2.0706018164916, 2.07842481734728, 2.0858329876037, 2.09279775917234, 2.09929633971377, 2.10531538227125, 2.11085508513429, 2.11593338459342, 2.12058937638755, 2.12488319044411, 2.12889284005344, 2.13270923022591, 2.13642524129613 ]) z1 = np.array([ 28.6619989743589, 28.6049608580482, 28.5450559014246, 28.4794835365082, 28.4092022241029, 28.3350387336261, 28.2576380039013, 28.1775367384564, 28.0951654495223, 28.0108638015799, 27.9249204615456, 27.8375597971889, 27.7489735572062, 27.6593178994861, 27.5687237423079, 27.4773041964674, 27.385152945665, 27.2923533725037, 27.1989741902862, 27.1050785554844, 27.0107190614389, 26.9159392616942, 26.8207798998579, 26.7252795524418, 26.6294706791138, 26.5333790187747, 26.4370286448011, 26.3404423792824, 26.2436415876413, 26.1466433615791, 26.0494627200553, 25.9521139974702, 25.8546109832136, 25.7569659319296, 25.6591888403289, 25.5612891089313, 25.4632757574251, 25.3651575801692, 25.2669423848755, 25.168636129068, 25.0702443614344, 24.9717723691753, 24.8732252294088, 24.7746078436806, 24.6759249761952, 24.5771812831842, 24.4783811991822, 24.3795278208977, 24.2806239242195, 24.1816722476486, 
24.082675412506, 23.9836359405196, 23.8845562687947, 23.7854387639056, 23.6862857298263, 23.5870993764416, 23.4878813512533, 23.3886330916743, 23.2893560024275, 23.1900514136945, 23.0907205842034, 22.9913647072391, 22.8919849096176, 22.792582243255, 22.6931576855678, 22.5937118769204, 22.4942453558146, 22.3947585791552, 22.2952518558259, 22.1957253129467, 22.0961789025235, 21.9966124171542, 21.8970253898987, 21.7974170708687, 21.6977864181452, 21.5981324310853, 21.4984538542985, 21.398749114135, 21.2990163991982, 21.1992542481573, 21.0994617682189, 20.9996381351822, 20.8997827371874, 20.799896002592, 20.6999792732694, 20.6000347502029, 20.5000657435458, 20.4000764848403, 20.3000713432661, 20.2000540343164, 20.1000287433225, 20 ]) y1 = np.full(len(x1),1) x2 = np.array([ 0, 0.09970480868419, 0.19934990930324, 0.29892560671855, 0.39845053029212, 0.49791294847979, 0.59729126792395, 0.69654894215390, 0.79562230931157, 0.89441829468223, 0.99278184799703, 1.09041596346573, 1.18680564093363, 1.2811247216627, 1.37173821384615, 1.45615592046124, 1.53140709459726, 1.59481447147833, 1.64540118865725, 1.68420020117436, 1.7139567901805, 1.73739377080012, 1.75669514508293, 1.77341533375185, 1.78855241978336, 1.80270054725639, 1.81619925811863, 1.82923458834867, 1.84190868011591, 1.8542787909028, 1.86637895717989, 1.87823123253782, 1.8898513429009, 1.90125155489232, 1.91244211394041, 1.92343198232874, 1.93422925599876, 1.94484137307261, 1.95527523060825, 1.96553726952681, 1.97563352932633, 1.98556968830044, 1.99535109412568, 2.00498278811171, 2.01446952528177, 2.02381579222306, 2.03302582656997, 2.04210365316528, 2.05105308938297, 2.05987776623044, 2.06858114998522, 2.07716655334668, 2.08563714669058, 2.09399597192344, 2.1022459544216, 2.11038991079032, 2.11843055346434, 2.12637049464471, 2.13421224945851, 2.14195824744614, 2.14961083302462, 2.15717226465044, 2.16464472287464, 2.17203031834828, 2.17933109740207, 2.18654904729014, 2.19368610417827, 2.20074416242255, 2.20772507771543, 2.21463066429355, 2.2214626950629, 2.22822290393084, 2.23491298448034, 2.24153458547506, 2.24808930743357, 2.25457870003345, 2.26100425466765, 2.26736739519304, 2.27366947406393, 2.27991176845626, 2.28609547194039, 2.29222169050455, 2.29829141992375, 2.30430552303668, 2.31026470050494, 2.31616945318094, 2.32202003758885, 2.32781640900641, 2.33355814983544, 2.33924438051412, 2.3448736508916, 2.35044381228348, 2.3559518778997, 2.36139385612463, 2.36676455738366, 2.37205741259143, 2.37726426201264, 2.38237517331566, 2.38737831946074, 2.39225993069978, 2.39700436076639, 2.40159431859356, 2.40601130725183, 2.41023630806784, 2.4142507588751, 2.418037990112, 2.42158487083293, 2.42488372545427, 2.42793436504128, 2.43074593332474, 2.4333379706335, 2.43574001568792, 2.4379889754186, 2.44012113735539, 2.44214574160271 ]) z2 = np.array([ 30.0078688964104, 30.0065512265173, 30.0029356360187, 29.9976735322442, 29.9915262693748, 29.9844424862537, 29.9762659464609, 29.9667364364951, 29.9554600427241, 29.9419800400481, 29.9256478232344, 29.9054159211507, 29.8799561681087, 29.8476996253689, 29.8061730864421, 29.7532731986494, 29.6880451094337, 29.6112448542866, 29.5253946922475, 29.4335819554567, 29.3384411867613, 29.2415312959595, 29.1437049711797, 29.04539962879, 28.9468401079325, 28.8481308935213, 28.7493306654707, 28.6504694225423, 28.5515612207417, 28.4526148172972, 28.3536354145385, 28.2546261943038, 28.1555895272291, 28.0565273761794, 27.9574414945533, 27.8583335076939, 27.7592047062979, 27.6600561309973, 27.5608887508238, 
27.4617034796921, 27.3625011864055, 27.2632827022544, 27.1640488269094, 27.0648003331593, 26.9655379708194, 26.866262470073, 26.7669745291472, 26.6676746328858, 26.5683632230419, 26.4690407437848, 26.3697076224236, 26.2703642715209, 26.171011090825, 26.0716484692661, 25.9722767866544, 25.8728964148864, 25.7735077188227, 25.6741110570318, 25.5747068020388, 25.4752952472731, 25.3758766209457, 25.2764511425401, 25.1770190239312, 25.077580470411, 24.978135681488, 24.8786848515835, 24.7792281708969, 24.6797658262875, 24.5802980016905, 24.4808248780774, 24.381346633632, 24.2818634440495, 24.1823754825859, 24.0828829198801, 23.9833859238618, 23.8838846597107, 23.7843792895138, 23.6848699718101, 23.5853568615186, 23.4858401098541, 23.3863198605794, 23.2867961336504, 23.1872689742316, 23.0877384678003, 22.9882046876333, 22.8886676931036, 22.7891275276662, 22.6895842161987, 22.5900377616683, 22.4904881410033, 22.3909353002525, 22.2913791491932, 22.1918194461756, 22.0922558825952, 21.9926883788735, 21.8931167722048, 21.7935407990426, 21.6939600883707, 21.5943741599893, 21.4947824294466, 21.395184222527, 21.2955788018895, 21.1959654064542, 21.0963434196656, 20.9967131851386, 20.8970742537356, 20.7974262724062, 20.6977693400472, 20.5981040167542, 20.4984312619284, 20.3987523065558, 20.2990684973495, 20.1993811617366, 20.099691467103, 20 ]) y2 = np.full(len(x2),2) x3 = np.array([ 0, 0.106091997247487, 0.212282862868987, 0.318600341722176, 0.425123104761527, 0.531901352872626, 0.638980673349671, 0.746408888895399, 0.85423835573243, 0.962522013543275, 1.07131093710695, 1.18064439993891, 1.29054534660613, 1.40102082494537, 1.51205952335725, 1.62363164712265, 1.73568405272677, 1.84814589283292, 1.96093277164309, 2.07395098084042, 2.18706544796369, 2.30006049798178, 2.41251892919309, 2.52351045392648, 2.63096976610101, 2.73100121086301, 2.77148016441521, 2.78453458174498, 2.79186010794094, 2.79751816975908, 2.80236248394865, 2.80670132496914, 2.81067659404222, 2.81436810315714, 2.81782835677761, 2.81838473210957, 2.82109652791738, 2.82420427153188, 2.82717868109766, 2.83004342751941, 2.83281945446213, 2.83552519787279, 2.83817680575457, 2.84078819902162, 2.84337124785453, 2.84593589783908, 2.84849035993836, 2.85104128587469, 2.85359396537556, 2.85615250287907, 2.85872000627528, 2.86129873724824, 2.86389026304064, 2.86649557135293, 2.86911518641933, 2.8717492484006, 2.87439759958004, 2.87705983816952, 2.87973537526801, 2.88242347197052, 2.88512327706806, 2.88783385231766, 2.88845703622116, 2.89055419723002, 2.89328326484414, 2.89601997738632, 2.89876323695298, 2.90151193463858, 2.90426495815636, 2.90702119863042, 2.90977955571619, 2.91253894330014, 2.91529829288703, 2.91805655711182, 2.92081271230053, 2.92356576021251, 2.92631472933472, 2.92905867580467, 2.93179668334584, 2.93452786251032, 2.93725134958514, 2.93965292016822, 2.93996630533669, 2.94267191310587, 2.94536737736262, 2.94805192150669, 2.95072478659992, 2.95338522921948, 2.9560325199586, 2.95866594104728, 2.96128478419988, 2.96388834746919, 2.96647593163284, 2.96904683568692, 2.97160035145282, 2.97413575724706, 2.97425125556441, 2.97665231021929, 2.97914923756346, 2.98162572566976, 2.98408090824307, 2.98651385151949, 2.98892353757412, 2.99130884415201, 2.99366851923393, 2.99600114768388, 2.9962606601005, 2.99830510879971, 3.00057851901249, 3.00281914620488, 3.00502428120326, 3.00719053828585, 3.00931351650196, 3.00952011645174, 3.01138721018009, 3.0134029388599, 3.01534737564675, 3.01699921085196, 3.01719889804888, 3.01892089400833, 
3.02044973125651, 3.02078939388317, 3.02167377283328, 3.02228920892799, 3.02239824880744 ]) z3 = np.array([ 30.1186241174416, 30.1184849219899, 30.1171034471136, 30.1152951272103, 30.1133655615441, 30.1113195993005, 30.1091433108655, 30.1068210776114, 30.1043270725803, 30.1016326866963, 30.0986949871154, 30.095463219881, 30.091866695563, 30.087823561571, 30.0832240264785, 30.0779411584852, 30.0718042824719, 30.064598064254, 30.0560271138372, 30.0456642379577, 30.0328664792409, 30.0166381634173, 29.995383512023, 29.9666311419869, 29.9269062544625, 29.8724446142301, 20, 20.0992866487574, 20.1987778658839, 20.2983155533319, 20.3978803414887, 20.4974615256815, 20.5970538814448, 20.6966543922422, 20.796260963899, 29.8010645620936, 20.8958720196992, 20.9954865830927, 21.0951039613303, 21.1947236485541, 21.2943452750121, 21.3939685837807, 21.493593496968, 21.5932201164118, 21.6928485704969, 21.7924790313127, 21.8921116969745, 21.9917467869875, 22.0913845424964, 22.1910252330904, 22.2906691240923, 22.3903164843267, 22.4899675841224, 22.5896226941457, 22.6892820856006, 22.7889460266206, 22.888614783746, 22.988288620861, 23.0879678014928, 23.1876525873797, 23.287343240002, 23.3870400197103, 29.7142868959217, 23.486743187404, 23.5864530033367, 23.6861697290352, 23.7858936263979, 23.885624958565, 23.9853639878029, 24.0851109788725, 24.1848661985415, 24.2846299172733, 24.3844024085183, 24.4841839497909, 24.5839748236319, 24.6837753181654, 24.7835857287758, 24.8834063591202, 24.9832375223667, 25.0830795430067, 25.1829327578276, 29.6171448515155, 25.2827975190689, 25.3826741941813, 25.4825631716602, 25.5824648589361, 25.6823796912861, 25.7823081276471, 25.8822506637417, 25.9822078278468, 26.0821801933113, 26.1821683788585, 26.2821730607998, 26.3821949758609, 26.482234934283, 26.5822938276285, 29.5148759694037, 26.6823726493158, 26.7824724970099, 26.8825946040959, 26.9827403478161, 27.0829112850487, 27.1831091758242, 27.283336026885, 27.3835941282532, 27.4838861281617, 29.4108704451581, 27.5842150845333, 27.6845846449453, 27.7849990464475, 27.8854632367103, 27.9859831360428, 28.0865659498792, 29.3067361736195, 28.1872205245328, 28.2879579643369, 28.3887923780716, 29.2030641097122, 28.4897421297855, 28.5908314496745, 28.6920929845482, 29.0999759013086, 28.7935707828626, 28.997430828688, 28.8953250739844 ]) y3 = np.full(len(x3),3) def filter_data_by_indices(data, indices_to_keep): # Convert 1-based indices to 0-based indices for Python indexing indices_to_keep = [i - 1 for i in indices_to_keep] # Filter the data by the specified indices filtered_data = [data[i] for i in indices_to_keep if 0 <= i < len(data)] return filtered_data def mirror_data(data): mirrored_data = [] for x,y,z in data: # Combine positive and negative r values x_combined = np.concatenate([-x[::-1], x]) z_combined = np.concatenate( [z[::-1], z ])-20 #offset of sim raw data y_combined = np.full_like(x_combined, y[0]) mirrored_data.append((x_combined, y_combined, z_combined)) return mirrored_data def nearest_neighbor_sort(data): sorted_data = [] for x, y, z in data: # Stack x and z to form (x, z) pairs xz_pairs = np.column_stack((x, z)) # Nearest neighbor sorting sorted_indices = [0] # Start with the first point remaining_indices = list(range(1, len(xz_pairs))) while remaining_indices: last_index = sorted_indices[-1] distances = [ distance.euclidean(xz_pairs[last_index], xz_pairs[i]) for i in remaining_indices ] nearest_index = remaining_indices[np.argmin(distances)] sorted_indices.append(nearest_index) 
remaining_indices.remove(nearest_index) # Apply the sorted indices to x and z, y remains unchanged x_sorted = x[sorted_indices] z_sorted = z[sorted_indices] sorted_data.append((x_sorted, y, z_sorted)) return sorted_data def plot_data(filtered_data): fig = plt.figure() ax = fig.add_subplot(111, projection='3d') all_x, all_y, all_z = [], [], [] for x, y, z in filtered_data: # log_t = np.log10(t) all_x.extend(x) all_y.extend(y) all_z.extend(z) ax.plot(x, y, z, color='black', alpha=0.8, linewidth=1.5) plt.show() # Interpolate surface X, Y, Z = interpolate_surface(all_x, all_y, all_z) X_flat, Y_flat, Z_flat = X.flatten(), Y.flatten(), Z.flatten() triangles = mtri.Triangulation(X_flat, Y_flat) norm = Normalize(vmin=min(Y_flat), vmax=max(Y_flat)) cmap = cm.gnuplot colors = cmap(norm(Y_flat)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') # Build Poly3DCollection poly3d = [] for tri in triangles.triangles: poly3d.append([(X_flat[tri[0]], Y_flat[tri[0]], Z_flat[tri[0]]), (X_flat[tri[1]], Y_flat[tri[1]], Z_flat[tri[1]]), (X_flat[tri[2]], Y_flat[tri[2]], Z_flat[tri[2]])]) poly_collection = Poly3DCollection(poly3d, facecolors=colors[triangles.triangles].mean(axis=1), edgecolors='black', linewidths=0.06, alpha=1) ax.add_collection3d(poly_collection) plt.show() def interpolate_surface(all_r, all_log_t, all_z, num_points=100, sigma=1): r_vals = np.linspace(min(all_r), max(all_r), num_points) t_vals = np.linspace(min(all_log_t), max(all_log_t), num_points) R, T = np.meshgrid(r_vals, t_vals) points = np.vstack((all_r, all_log_t)).T values = np.array(all_z) Z = griddata(points, values, (R, T), method='linear') # 'cubic' or 'nearest' ## Interpolate the data with linearNDInterpolator # interpolator = LinearNDInterpolator(points, values) # Z = interpolator(np.array([R.flatten(), T.flatten()]).T).reshape(R.shape) Z = np.nan_to_num(Z, nan=0) # Z = gaussian_filter(Z, sigma=sigma) return R, T, Z data = [(x1,y1,z1),(x2,y2,z2),(x3,y3,z3)] # plot first 2 data sets filtered_data = filter_data_by_indices(data, [1,2]) sorted_data = nearest_neighbor_sort(filtered_data) mirrored_data = mirror_data(sorted_data) plot_data(mirrored_data) # plot last 2 data sets (here interpolation is wrong) filtered_data = filter_data_by_indices(data, [2,3]) sorted_data = nearest_neighbor_sort(filtered_data) mirrored_data = mirror_data(sorted_data) plot_data(mirrored_data)
Don't interpolate in a rectilinear parameter space. Instead, you need to Transform to a semipolar (cylindrical) coordinate space with y as linear, and xz radius and xz angle as the other axes. If I had to guess a safe x-z origin, it would be (0, 0). Interpolate on that. Prior to graphing, transform back to xyz. This is the safest "dumb" solution that will fit your data. If your data have a different hidden parameter space that produce unique, somewhat uniform dependent coordinates then you should use that instead. I also don't understand you're making your own triangle mesh. It causes problems even with a well-parameterized dataset. Just use plot_surface instead: import typing import matplotlib import numpy as np import matplotlib.pyplot as plt import scipy from scipy.spatial.distance import euclidean from mpl_toolkits.mplot3d.art3d import Poly3DCollection def mirror_data( data: list[tuple[np.ndarray, np.ndarray, np.ndarray]], ) -> list[np.ndarray]: mirrored_data = [] for x, y, z in data: # Combine positive and negative r values x_combined = np.concatenate([-x[::-1], x]) z_combined = np.concatenate([z[::-1], z]) - 20 # offset of sim raw data y_combined = np.full_like(x_combined, y[0]) mirrored_data.append(np.array((x_combined, y_combined, z_combined))) return mirrored_data def nearest_neighbor_sort(data: typing.Sequence[np.ndarray]) -> list[np.ndarray]: sorted_data = [] for x, y, z in data: # Stack x and z to form (x, z) pairs xz_pairs = np.column_stack((x, z)) # Nearest neighbor sorting sorted_indices = [0] # Start with the first point remaining_indices = list(range(1, len(xz_pairs))) while remaining_indices: last_index = sorted_indices[-1] distances = [ euclidean(xz_pairs[last_index], xz_pairs[i]) for i in remaining_indices ] nearest_index = remaining_indices[np.argmin(distances)] sorted_indices.append(nearest_index) remaining_indices.remove(nearest_index) # Apply the sorted indices to x and z, y remains unchanged x_sorted = x[sorted_indices] z_sorted = z[sorted_indices] sorted_data.append(np.array((x_sorted, y, z_sorted))) return sorted_data def plot_data(filtered_data: list[np.ndarray]) -> None: fig, ax = plt.subplots(subplot_kw={'projection': '3d'}) ax.set_xlabel('R') ax.set_ylabel('T') ax.set_zlabel('Z') all_x, all_y, all_z = np.concatenate(filtered_data, axis=-1) for x, y, z in filtered_data: # log_t = np.log10(t) ax.plot(x, y, z, color='black', alpha=0.8, linewidth=1.5) # Interpolate surface X, Y, Z = interpolate_surface(all_x, all_y, all_z, num_points=50) fig, ax = plt.subplots(subplot_kw={'projection': '3d'}) ax.set_xlabel('R') ax.set_ylabel('T') ax.set_zlabel('Z') ax.plot_surface(X, Y, Z, cmap=matplotlib.cm.plasma) def interpolate_surface( all_r: np.ndarray, all_log_t: np.ndarray, all_z: np.ndarray, num_points: int = 100, sigma: float = 1., gaussian: bool = False, ) -> tuple[ np.ndarray, # R np.ndarray, # T np.ndarray, # Z ]: # r_vals = np.linspace(all_r.min(), all_r.max(), num_points) t_vals = np.linspace(all_log_t.min(), all_log_t.max(), num_points) phi_vals = np.linspace(0, np.pi, num_points) P, T = np.meshgrid(phi_vals, t_vals) # t is kept as linear. Transform r,z to a cylindrical system. 
rmax = np.abs(all_r).max() zmax = np.abs(all_z).max() rscale = all_r/rmax zscale = all_z/zmax radius = np.hypot(zscale, rscale) angle = np.atan2(zscale, rscale) points = np.stack((angle, all_log_t), axis=-1) grid_radius = scipy.interpolate.griddata(points, radius, (P, T), method='linear') # 'cubic' or 'nearest' if gaussian: grid_radius = scipy.ndimage.gaussian_filter(grid_radius, sigma=sigma) # Transform from cylindrical angle (P), T, grid_radius to R, T, Z R = rmax * grid_radius * np.cos(P) Z = zmax * grid_radius * np.sin(P) return R, T, Z def demo() -> None: x1 = np.array(( 0, 0.08317108166803, 0.16393703317322, 0.24010197356808, 0.31189968667059, 0.37964309047995, 0.44362962064596, 0.50416941327588, 0.56154833462750, 0.61603270573008, 0.66785467575344, 0.71722870327800, 0.76434124363645, 0.80936126105512, 0.85243882934910, 0.89371020366228, 0.93330023252176, 0.97132031393031, 1.0078733278234, 1.04305186588065, 1.07694164659622, 1.10962198351184, 1.14116475234496, 1.17163490787683, 1.20109277181813, 1.22959493653164, 1.25719327772516, 1.28393541084346, 1.30986533211137, 1.33502459428484, 1.3594517119059, 1.38318253432743, 1.40625054167016, 1.428687092237, 1.45052182005806, 1.47178239513572, 1.4924946731886, 1.51268282564772, 1.53236959683959, 1.55157657215197, 1.5703240685486, 1.58863120287582, 1.60651600994447, 1.62399550071956, 1.64108576853907, 1.65780207082065, 1.67415889980995, 1.69017008507776, 1.70584884180465, 1.72120783285127, 1.73625905724844, 1.75101389043042, 1.76548311976239, 1.77967698639756, 1.79360519288652, 1.80727689980216, 1.8207008202846, 1.83388518878035, 1.84683777315861, 1.85956584789141, 1.87207616663999, 1.88437496210868, 1.89646789288332, 1.90835992552183, 1.92005539714572, 1.9315578245162, 1.94286977773435, 1.95399269320514, 1.96492669014037, 1.97567031873621, 1.98622026624029, 1.99657105652974, 2.00671480587649, 2.01664080378909, 2.02633512036696, 2.03578034488985, 2.04495553215686, 2.05383630364206, 2.0623951776226, 2.0706018164916, 2.07842481734728, 2.0858329876037, 2.09279775917234, 2.09929633971377, 2.10531538227125, 2.11085508513429, 2.11593338459342, 2.12058937638755, 2.12488319044411, 2.12889284005344, 2.13270923022591, 2.13642524129613 )) z1 = np.array(( 28.6619989743589, 28.6049608580482, 28.5450559014246, 28.4794835365082, 28.4092022241029, 28.3350387336261, 28.2576380039013, 28.1775367384564, 28.0951654495223, 28.0108638015799, 27.9249204615456, 27.8375597971889, 27.7489735572062, 27.6593178994861, 27.5687237423079, 27.4773041964674, 27.385152945665, 27.2923533725037, 27.1989741902862, 27.1050785554844, 27.0107190614389, 26.9159392616942, 26.8207798998579, 26.7252795524418, 26.6294706791138, 26.5333790187747, 26.4370286448011, 26.3404423792824, 26.2436415876413, 26.1466433615791, 26.0494627200553, 25.9521139974702, 25.8546109832136, 25.7569659319296, 25.6591888403289, 25.5612891089313, 25.4632757574251, 25.3651575801692, 25.2669423848755, 25.168636129068, 25.0702443614344, 24.9717723691753, 24.8732252294088, 24.7746078436806, 24.6759249761952, 24.5771812831842, 24.4783811991822, 24.3795278208977, 24.2806239242195, 24.1816722476486, 24.082675412506, 23.9836359405196, 23.8845562687947, 23.7854387639056, 23.6862857298263, 23.5870993764416, 23.4878813512533, 23.3886330916743, 23.2893560024275, 23.1900514136945, 23.0907205842034, 22.9913647072391, 22.8919849096176, 22.792582243255, 22.6931576855678, 22.5937118769204, 22.4942453558146, 22.3947585791552, 22.2952518558259, 22.1957253129467, 22.0961789025235, 21.9966124171542, 21.8970253898987, 
21.7974170708687, 21.6977864181452, 21.5981324310853, 21.4984538542985, 21.398749114135, 21.2990163991982, 21.1992542481573, 21.0994617682189, 20.9996381351822, 20.8997827371874, 20.799896002592, 20.6999792732694, 20.6000347502029, 20.5000657435458, 20.4000764848403, 20.3000713432661, 20.2000540343164, 20.1000287433225, 20 )) x2 = np.array(( 0, 0.09970480868419, 0.19934990930324, 0.29892560671855, 0.39845053029212, 0.49791294847979, 0.59729126792395, 0.69654894215390, 0.79562230931157, 0.89441829468223, 0.99278184799703, 1.09041596346573, 1.18680564093363, 1.2811247216627, 1.37173821384615, 1.45615592046124, 1.53140709459726, 1.59481447147833, 1.64540118865725, 1.68420020117436, 1.7139567901805, 1.73739377080012, 1.75669514508293, 1.77341533375185, 1.78855241978336, 1.80270054725639, 1.81619925811863, 1.82923458834867, 1.84190868011591, 1.8542787909028, 1.86637895717989, 1.87823123253782, 1.8898513429009, 1.90125155489232, 1.91244211394041, 1.92343198232874, 1.93422925599876, 1.94484137307261, 1.95527523060825, 1.96553726952681, 1.97563352932633, 1.98556968830044, 1.99535109412568, 2.00498278811171, 2.01446952528177, 2.02381579222306, 2.03302582656997, 2.04210365316528, 2.05105308938297, 2.05987776623044, 2.06858114998522, 2.07716655334668, 2.08563714669058, 2.09399597192344, 2.1022459544216, 2.11038991079032, 2.11843055346434, 2.12637049464471, 2.13421224945851, 2.14195824744614, 2.14961083302462, 2.15717226465044, 2.16464472287464, 2.17203031834828, 2.17933109740207, 2.18654904729014, 2.19368610417827, 2.20074416242255, 2.20772507771543, 2.21463066429355, 2.2214626950629, 2.22822290393084, 2.23491298448034, 2.24153458547506, 2.24808930743357, 2.25457870003345, 2.26100425466765, 2.26736739519304, 2.27366947406393, 2.27991176845626, 2.28609547194039, 2.29222169050455, 2.29829141992375, 2.30430552303668, 2.31026470050494, 2.31616945318094, 2.32202003758885, 2.32781640900641, 2.33355814983544, 2.33924438051412, 2.3448736508916, 2.35044381228348, 2.3559518778997, 2.36139385612463, 2.36676455738366, 2.37205741259143, 2.37726426201264, 2.38237517331566, 2.38737831946074, 2.39225993069978, 2.39700436076639, 2.40159431859356, 2.40601130725183, 2.41023630806784, 2.4142507588751, 2.418037990112, 2.42158487083293, 2.42488372545427, 2.42793436504128, 2.43074593332474, 2.4333379706335, 2.43574001568792, 2.4379889754186, 2.44012113735539, 2.44214574160271 )) z2 = np.array(( 30.0078688964104, 30.0065512265173, 30.0029356360187, 29.9976735322442, 29.9915262693748, 29.9844424862537, 29.9762659464609, 29.9667364364951, 29.9554600427241, 29.9419800400481, 29.9256478232344, 29.9054159211507, 29.8799561681087, 29.8476996253689, 29.8061730864421, 29.7532731986494, 29.6880451094337, 29.6112448542866, 29.5253946922475, 29.4335819554567, 29.3384411867613, 29.2415312959595, 29.1437049711797, 29.04539962879, 28.9468401079325, 28.8481308935213, 28.7493306654707, 28.6504694225423, 28.5515612207417, 28.4526148172972, 28.3536354145385, 28.2546261943038, 28.1555895272291, 28.0565273761794, 27.9574414945533, 27.8583335076939, 27.7592047062979, 27.6600561309973, 27.5608887508238, 27.4617034796921, 27.3625011864055, 27.2632827022544, 27.1640488269094, 27.0648003331593, 26.9655379708194, 26.866262470073, 26.7669745291472, 26.6676746328858, 26.5683632230419, 26.4690407437848, 26.3697076224236, 26.2703642715209, 26.171011090825, 26.0716484692661, 25.9722767866544, 25.8728964148864, 25.7735077188227, 25.6741110570318, 25.5747068020388, 25.4752952472731, 25.3758766209457, 25.2764511425401, 25.1770190239312, 25.077580470411, 
24.978135681488, 24.8786848515835, 24.7792281708969, 24.6797658262875, 24.5802980016905, 24.4808248780774, 24.381346633632, 24.2818634440495, 24.1823754825859, 24.0828829198801, 23.9833859238618, 23.8838846597107, 23.7843792895138, 23.6848699718101, 23.5853568615186, 23.4858401098541, 23.3863198605794, 23.2867961336504, 23.1872689742316, 23.0877384678003, 22.9882046876333, 22.8886676931036, 22.7891275276662, 22.6895842161987, 22.5900377616683, 22.4904881410033, 22.3909353002525, 22.2913791491932, 22.1918194461756, 22.0922558825952, 21.9926883788735, 21.8931167722048, 21.7935407990426, 21.6939600883707, 21.5943741599893, 21.4947824294466, 21.395184222527, 21.2955788018895, 21.1959654064542, 21.0963434196656, 20.9967131851386, 20.8970742537356, 20.7974262724062, 20.6977693400472, 20.5981040167542, 20.4984312619284, 20.3987523065558, 20.2990684973495, 20.1993811617366, 20.099691467103, 20 )) x3 = np.array(( 0, 0.106091997247487, 0.212282862868987, 0.318600341722176, 0.425123104761527, 0.531901352872626, 0.638980673349671, 0.746408888895399, 0.85423835573243, 0.962522013543275, 1.07131093710695, 1.18064439993891, 1.29054534660613, 1.40102082494537, 1.51205952335725, 1.62363164712265, 1.73568405272677, 1.84814589283292, 1.96093277164309, 2.07395098084042, 2.18706544796369, 2.30006049798178, 2.41251892919309, 2.52351045392648, 2.63096976610101, 2.73100121086301, 2.77148016441521, 2.78453458174498, 2.79186010794094, 2.79751816975908, 2.80236248394865, 2.80670132496914, 2.81067659404222, 2.81436810315714, 2.81782835677761, 2.81838473210957, 2.82109652791738, 2.82420427153188, 2.82717868109766, 2.83004342751941, 2.83281945446213, 2.83552519787279, 2.83817680575457, 2.84078819902162, 2.84337124785453, 2.84593589783908, 2.84849035993836, 2.85104128587469, 2.85359396537556, 2.85615250287907, 2.85872000627528, 2.86129873724824, 2.86389026304064, 2.86649557135293, 2.86911518641933, 2.8717492484006, 2.87439759958004, 2.87705983816952, 2.87973537526801, 2.88242347197052, 2.88512327706806, 2.88783385231766, 2.88845703622116, 2.89055419723002, 2.89328326484414, 2.89601997738632, 2.89876323695298, 2.90151193463858, 2.90426495815636, 2.90702119863042, 2.90977955571619, 2.91253894330014, 2.91529829288703, 2.91805655711182, 2.92081271230053, 2.92356576021251, 2.92631472933472, 2.92905867580467, 2.93179668334584, 2.93452786251032, 2.93725134958514, 2.93965292016822, 2.93996630533669, 2.94267191310587, 2.94536737736262, 2.94805192150669, 2.95072478659992, 2.95338522921948, 2.9560325199586, 2.95866594104728, 2.96128478419988, 2.96388834746919, 2.96647593163284, 2.96904683568692, 2.97160035145282, 2.97413575724706, 2.97425125556441, 2.97665231021929, 2.97914923756346, 2.98162572566976, 2.98408090824307, 2.98651385151949, 2.98892353757412, 2.99130884415201, 2.99366851923393, 2.99600114768388, 2.9962606601005, 2.99830510879971, 3.00057851901249, 3.00281914620488, 3.00502428120326, 3.00719053828585, 3.00931351650196, 3.00952011645174, 3.01138721018009, 3.0134029388599, 3.01534737564675, 3.01699921085196, 3.01719889804888, 3.01892089400833, 3.02044973125651, 3.02078939388317, 3.02167377283328, 3.02228920892799, 3.02239824880744 )) z3 = np.array(( 30.1186241174416, 30.1184849219899, 30.1171034471136, 30.1152951272103, 30.1133655615441, 30.1113195993005, 30.1091433108655, 30.1068210776114, 30.1043270725803, 30.1016326866963, 30.0986949871154, 30.095463219881, 30.091866695563, 30.087823561571, 30.0832240264785, 30.0779411584852, 30.0718042824719, 30.064598064254, 30.0560271138372, 30.0456642379577, 30.0328664792409, 
30.0166381634173, 29.995383512023, 29.9666311419869, 29.9269062544625, 29.8724446142301, 20, 20.0992866487574, 20.1987778658839, 20.2983155533319, 20.3978803414887, 20.4974615256815, 20.5970538814448, 20.6966543922422, 20.796260963899, 29.8010645620936, 20.8958720196992, 20.9954865830927, 21.0951039613303, 21.1947236485541, 21.2943452750121, 21.3939685837807, 21.493593496968, 21.5932201164118, 21.6928485704969, 21.7924790313127, 21.8921116969745, 21.9917467869875, 22.0913845424964, 22.1910252330904, 22.2906691240923, 22.3903164843267, 22.4899675841224, 22.5896226941457, 22.6892820856006, 22.7889460266206, 22.888614783746, 22.988288620861, 23.0879678014928, 23.1876525873797, 23.287343240002, 23.3870400197103, 29.7142868959217, 23.486743187404, 23.5864530033367, 23.6861697290352, 23.7858936263979, 23.885624958565, 23.9853639878029, 24.0851109788725, 24.1848661985415, 24.2846299172733, 24.3844024085183, 24.4841839497909, 24.5839748236319, 24.6837753181654, 24.7835857287758, 24.8834063591202, 24.9832375223667, 25.0830795430067, 25.1829327578276, 29.6171448515155, 25.2827975190689, 25.3826741941813, 25.4825631716602, 25.5824648589361, 25.6823796912861, 25.7823081276471, 25.8822506637417, 25.9822078278468, 26.0821801933113, 26.1821683788585, 26.2821730607998, 26.3821949758609, 26.482234934283, 26.5822938276285, 29.5148759694037, 26.6823726493158, 26.7824724970099, 26.8825946040959, 26.9827403478161, 27.0829112850487, 27.1831091758242, 27.283336026885, 27.3835941282532, 27.4838861281617, 29.4108704451581, 27.5842150845333, 27.6845846449453, 27.7849990464475, 27.8854632367103, 27.9859831360428, 28.0865659498792, 29.3067361736195, 28.1872205245328, 28.2879579643369, 28.3887923780716, 29.2030641097122, 28.4897421297855, 28.5908314496745, 28.6920929845482, 29.0999759013086, 28.7935707828626, 28.997430828688, 28.8953250739844 )) y1 = np.full_like(x1, 1) y2 = np.full_like(x2, 2) y3 = np.full_like(x3, 3) data = ( np.array((x1, y1, z1)), np.array((x2, y2, z2)), np.array((x3, y3, z3)), ) # plot first 2 data sets sorted_data = nearest_neighbor_sort(data[:2]) mirrored_data = mirror_data(sorted_data) plot_data(mirrored_data) # plot last 2 data sets (here interpolation is wrong) sorted_data = nearest_neighbor_sort(data[1:]) mirrored_data = mirror_data(sorted_data) plot_data(mirrored_data) plt.show() if __name__ == '__main__': demo()
3
6
79,367,208
2025-1-18
https://stackoverflow.com/questions/79367208/why-does-my-finite-difference-weights-calculation-for-taylor-series-give-incorre
I'm trying to calculate the weights for a finite-difference approximation of the first derivative f′(x) using the Taylor series expansion. I'm solving for weights a, b, c, d, e such that a·f(x+2Δx) + b·f(x+Δx) + c·f(x) + d·f(x−Δx) + e·f(x−2Δx) approximates f′(x). Here's what I did: I used the Taylor series expansion for f(x ± kΔx), where k = 1, 2. I built a system of linear equations to enforce the following conditions:
Coefficients of f(x): a+b+c+d+e=0
Coefficients of f′(x): 2a+b−d−2e=1
Coefficients of f′′(x): 4a+b+d+4e=0
Coefficients of f⁽³⁾(x): 8a+b−d−8e=0
Coefficients of f⁽⁴⁾(x): 16a+b+d+16e=0
I implemented the matrix equation A·z = b in Python:
import numpy as np

A = np.array([
    [1, 1, 1, 1, 1],    # Coefficients of f(x)
    [2, 1, 0, -1, -2],  # Coefficients of f'(x)
    [4, 1, 0, 1, 4],    # Coefficients of f''(x)
    [8, 1, 0, -1, 8],   # Coefficients of f'''(x)
    [16, 1, 0, 1, 16]   # Coefficients of f''''(x)
])
b = np.array([0, 1, 0, 0, 0])  # Targeting the first derivative
z = np.linalg.solve(A, b)
print(z)
The Issue: The output I'm getting is [0.25, 0, -0, 0, -0.25]. However, the expected weights for the first derivative should be something like [-1/12, 2/3, 0, -2/3, 1/12].
What I Tried: Double-checked the coefficients in matrix A to ensure they match the Taylor series expansion. Verified that the right-hand side vector b = [0, 1, 0, 0, 0] is correct for approximating f′(x). Despite this, the weights are incorrect. Am I missing something in the matrix setup or the Python implementation?
Expected Behavior: I want the solution to match the theoretical weights for a central finite difference approximation of the first derivative f′(x) using five points.
You have a typo in the 4th line of the matrix A, the last element should be -8 instead of 8: A = np.array([ [1, 1, 1, 1, 1], # Coefficients of f(x) [2, 1, 0, -1, -2], # Coefficients of f'(x) [4, 1, 0, 1, 4], # Coefficients of f''(x) [8, 1, 0, -1, -8], # Coefficients of f'''(x) [16, 1, 0, 1, 16] # Coefficients of f''''(x) ])
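A quick check (sketch): with the corrected matrix, the solve returns the classic five-point central-difference weights for f′(x):

import numpy as np

A = np.array([
    [1, 1, 1, 1, 1],
    [2, 1, 0, -1, -2],
    [4, 1, 0, 1, 4],
    [8, 1, 0, -1, -8],
    [16, 1, 0, 1, 16],
])
b = np.array([0, 1, 0, 0, 0])
print(np.linalg.solve(A, b))  # approximately [-1/12, 2/3, 0, -2/3, 1/12]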
2
2
79,366,590
2025-1-18
https://stackoverflow.com/questions/79366590/how-to-correctly-implement-fermats-factorization-in-python
I am trying to implement efficient prime factorization algorithms in Python. This is not homework or work related, it is completely out of curiosity. I have learned that prime factorization is very hard: I want to implement efficient algorithms for this as a self-imposed challenge. I have set to implement Fermat's factorization method first as it seems simple enough. Python code directly translated from the pseudocode: def Fermat_Factor(n): a = int(n ** 0.5 + 0.5) b2 = abs(a**2 - n) while int(b2**0.5) ** 2 != b2: a += 1 b2 = a**2 - n return a - b2**0.5, a + b2**0.5 (I have to use abs otherwise b2 will easily be negative and int cast will fail with TypeError because the root is complex) As you can see, it returns two integers whose product equals the input, but it only returns two outputs and it doesn't guarantee primality of the factors. I have no idea how efficient this algorithm is, but factorization of semiprimes using this method is much more efficient than the trial division method used in my previous question: Why factorization of products of close primes is much slower than products of dissimilar primes. In [20]: %timeit FermatFactor(3607*3803) 2.1 μs ± 28.2 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [21]: FermatFactor(3607*3803) Out[21]: [3607, 3803] In [22]: %timeit FermatFactor(3593 * 3671) 1.69 μs ± 31 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) In [23]: FermatFactor(3593 * 3671) Out[23]: [3593, 3671] In [24]: %timeit FermatFactor(7187 * 7829) 4.94 μs ± 47.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [25]: FermatFactor(7187 * 7829) Out[25]: [7187, 7829] In [26]: %timeit FermatFactor(8087 * 8089) 1.38 μs ± 12.9 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) In [27]: FermatFactor(8087 * 8089) Out[27]: [8087, 8089] So I want to use this algorithm to generate all prime factors of a any given integer (of course I know this only works with odd integers, but that is not an issue since powers of two can be trivially factored out using bit hacking). The easiest way I can think of is to recursively call Fermat_Factor until n is a prime. I don't know how to check if a number is prime in this algorithm, but I noticed something: In [3]: Fermat_Factor(3) Out[3]: (1.0, 3.0) In [4]: Fermat_Factor(5) Out[4]: (1.0, 3.0) In [5]: Fermat_Factor(7) Out[5]: (1.0, 7.0) In [6]: Fermat_Factor(11) Out[6]: (1.0, 11.0) In [7]: Fermat_Factor(13) Out[7]: (1.0, 13.0) In [8]: Fermat_Factor(17) Out[8]: (3.0, 5.0) In [9]: Fermat_Factor(19) Out[9]: (1.0, 19.0) In [10]: Fermat_Factor(23) Out[10]: (1.0, 23.0) In [11]: Fermat_Factor(29) Out[11]: (3.0, 7.0) In [12]: Fermat_Factor(31) Out[12]: (1.0, 31.0) In [13]: Fermat_Factor(37) Out[13]: (5.0, 7.0) In [14]: Fermat_Factor(41) Out[14]: (1.0, 41.0) The first number in the output of this algorithm for many primes is 1, but not all, as such it cannot be used to determine when the recursion should stop. I learned it the hard way. So I just settled to use membership checking of a pregenerated set of primes instead. Naturally this will cause RecursionError: maximum recursion depth exceeded when the input is a prime larger than the maximum of the set. As I don't have infinite memory, this is to be considered implementation detail. 
So I have implemented a working version (for some inputs), but for some valid inputs (products of primes within the limit) somehow the algorithm doesn't give the correct output: import numpy as np from itertools import cycle TRIPLE = ((4, 2), (9, 6), (25, 10)) WHEEL = ( 4, 2, 4, 2, 4, 6, 2, 6 ) def prime_sieve(n): primes = np.ones(n + 1, dtype=bool) primes[:2] = False for square, double in TRIPLE: primes[square::double] = False wheel = cycle(WHEEL) k = 7 while (square := k**2) <= n: if primes[k]: primes[square::2*k] = False k += next(wheel) return np.flatnonzero(primes) PRIMES = list(map(int, prime_sieve(1048576))) PRIME_SET = set(PRIMES) TEST_LIMIT = PRIMES[-1] ** 2 def FermatFactor(n): if n > TEST_LIMIT: raise ValueError('Number too large') if n in PRIME_SET: return [n] a = int(n ** 0.5 + 0.5) if a ** 2 == n: return FermatFactor(a) + FermatFactor(a) b2 = abs(a**2 - n) while int(b2**0.5) ** 2 != b2: a += 1 b2 = a**2 - n return FermatFactor(factor := int(a - b2**0.5)) + FermatFactor(n // factor) It works for many inputs: In [18]: FermatFactor(255) Out[18]: [3, 5, 17] In [19]: FermatFactor(511) Out[19]: [7, 73] In [20]: FermatFactor(441) Out[20]: [3, 7, 3, 7] In [21]: FermatFactor(3*5*823) Out[21]: [3, 5, 823] In [22]: FermatFactor(37*333667) Out[22]: [37, 333667] In [23]: FermatFactor(13 * 37 * 151 * 727 * 3607) Out[23]: [13, 37, 727, 151, 3607] But not all: In [25]: FermatFactor(5 * 53 * 163) Out[25]: [163, 13, 2, 2, 5] In [26]: FermatFactor(3*5*73*283) Out[26]: [17, 3, 7, 3, 283] In [27]: FermatFactor(3 * 11 * 29 * 71 * 137) Out[27]: [3, 11, 71, 61, 7, 3, 3] Why is it this case? How can I fix it?
You're supposed to start with a ← ceiling(sqrt(N)), not a = int(n ** 0.5 + 0.5). At the very least use a = math.ceil(n ** 0.5) instead; then Fermat_Factor(17) already gives (1.0, 17.0) instead of (3.0, 5.0). But it is better still to stay away from floats entirely and use math.isqrt. And of course you don't need abs if you actually compute the ceiling.
from math import isqrt

def Fermat_Factor(n):
    a = 1 + isqrt(n - 1)
    b2 = a**2 - n
    while isqrt(b2) ** 2 != b2:
        a += 1
        b2 = a**2 - n
    return a - isqrt(b2), a + isqrt(b2)
Attempt This Online!
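For example, with the isqrt version above:

print(Fermat_Factor(17))           # (1, 17) – only the trivial factorisation, as expected for a prime
print(Fermat_Factor(3607 * 3803))  # (3607, 3803)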
2
2
79,366,678
2025-1-18
https://stackoverflow.com/questions/79366678/attributeerror-figurecanvasinteragg-object-has-no-attribute-tostring-rgb-d
#AttributeError: 'FigureCanvasInterAgg' object has no attribute 'tostring_rgb'. Did you mean: 'tostring_argb'? #import matplotlib.pyplot as plt #======================== # This can work # import matplotlib # matplotlib.use('TkAgg') # import matplotlib.pyplot as plt #========================= with open('notebook.txt', encoding='utf-8') as file: # contents = file.read() # print(contents) # for line in file: # print('line:', line) contents = file.readlines() print(contents) newList = [] for content in contents: newContent = content.replace('\n', '') money = newContent.split(':')[-1] newList.append(int(money)) # 6月: 9000 # contents = content.replace('\n', '') print(newList) x = [1, 2, 3, 4, 5, 6] y = newList plt.plot(x, y, 'r') plt.xlabel('month') plt.ylabel('money') plt.legend() plt.show() 1月: 7000 2月: 10000 3月: 15000 4月: 12000 5月: 13000 6月: 9000 I am learning to draw graphs with matplotlib, but with import matplotlib.pyplot as plt the plotting fails with the AttributeError shown above. I have pip installed matplotlib, but I suspect it is not installed in the right path. Is there any way to solve this problem?
The following code runs successfully on my computer, and my matplotlib version is 3.7.1. If you don't know your matplotlib version, you can press the Windows key and 'r', type "cmd" to open a terminal, and run "pip list"; there you can find your matplotlib version. import matplotlib.pyplot as plt from matplotlib import rcParams # configure a font that supports Chinese characters rcParams['font.sans-serif'] = ['SimHei'] # use the SimHei font rcParams['axes.unicode_minus'] = False # display minus signs correctly with open('notebook.txt', encoding='utf-8') as file: contents = file.readlines() # read the file line by line newList = [] for content in contents: newContent = content.replace('\n', '') # strip the newline character money = newContent.split(':')[-1].strip() # extract the amount after ':' and strip whitespace newList.append(int(money)) x = [1, 2, 3, 4, 5, 6] plt.plot(x, newList, 'r', label='收入') # draw a red line and set the legend label ('收入' = income) plt.xlabel('月份') # set the x-axis label ('月份' = month) plt.ylabel('金额') # set the y-axis label ('金额' = amount) plt.legend() # show the legend plt.show() # display the figure
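One note on the AttributeError from the title (my addition, not part of the original answer): the question's commented-out lines already hint at the likely fix, namely selecting a GUI backend such as TkAgg before importing pyplot, since the "FigureCanvasInterAgg" in the traceback points at the IDE's interactive backend rather than at the plotting code. A minimal sketch along those lines, using the values from notebook.txt (requires a working Tk installation):

import matplotlib
matplotlib.use('TkAgg')             # pick the backend before importing pyplot
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5, 6]
y = [7000, 10000, 15000, 12000, 13000, 9000]   # the amounts from notebook.txt
plt.plot(x, y, 'r')
plt.xlabel('month')
plt.ylabel('money')
plt.show()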
3
0
79,366,298
2025-1-17
https://stackoverflow.com/questions/79366298/how-to-check-for-specific-structure-in-nested-list-python
Suppose we have the list: mylist = [ [ "Hello", [ "Hi" ] ] ] How do I check that a list containing "Hello" and "Hi" exists in mylist, specifically in this structure, without flattening it? All the solutions I found flatten the list, but I need to check for this exact nested structure: an element of the outer list that is "Hello" followed by a sublist containing "Hi", i.e. ["Hello", ["Hi"]].
You can just ask whether it's in there: ["Hello", ["Hi"]] in mylist Attempt This Online!
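To make the structural nature of the test explicit, here is a small self-contained example (my addition): list equality in Python is recursive, so the membership check only succeeds when both the values and the nesting match.

mylist = [["Hello", ["Hi"]]]

print(["Hello", ["Hi"]] in mylist)   # True  - the exact nested structure is present
print(["Hello", "Hi"] in mylist)     # False - the flattened variant does not match
print("Hello" in mylist)             # False - "Hello" sits one level deeper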
1
1
79,365,680
2025-1-17
https://stackoverflow.com/questions/79365680/how-to-explain-pandas-higher-performances-compared-to-numpy-with-500k-rows
In some sources, I found that pandas works faster than numpy with 500k rows or more. Can someone explain this to me? Pandas have a better performance when the number of rows is 500K or more. — Difference between Pandas VS NumPy - GeeksforGeeks If the number of rows of the dataset is more than five hundred thousand (500K), then the performance of Pandas is better than NumPy. — Pandas Vs NumPy: What’s The Difference? [2023] - InterviewBit [...] Pandas generally performs better than numpy for 500K rows or more [...] — PandaPy - firmai on GitHub I tried to find where this fact came from. I couldn't figure it out and couldn't see any information from the documentation.
Adding to the discussion, here are those tests in the linked page reproduced with some minor changes to see if anything has changed since that original post was made almost 8 years ago and python and many of its libraries have upgraded quite a bit since then. According to python.org the newest version of python available at the time of his post was 3.6 . Here is the source code, copied from the linked page and updated to be runnable as posted here, plus a few minor changes for convenience. import pandas import matplotlib.pyplot as plt import seaborn import numpy import sys import time NUMBER_OF_ITERATIONS = 10 FIGURE_NUMBER = 0 def bench_sub(mode1_inputs: list, mode1_statement: str, mode2_inputs: list, mode2_statement: str) -> tuple[bool, list[float], list[float]]: mode1_results = [] mode1_times = [] mode2_results = [] mode2_times = [] for inputs, statementi, results, times in ( (mode1_inputs, mode1_statement, mode1_results, mode1_times), (mode2_inputs, mode2_statement, mode2_results, mode2_times) ): for inputi in inputs: ast = compile(statementi, '<string>', 'exec') ast_locals = {'data': inputi} start_time = time.perf_counter_ns() for _ in range(NUMBER_OF_ITERATIONS): exec(ast, locals=ast_locals) end_time = time.perf_counter_ns() results.append(ast_locals['res']) times.append((end_time - start_time) / 10 ** 9 / NUMBER_OF_ITERATIONS) passing = True for results1, results2 in zip(mode1_results, mode2_results): if not passing: break try: if type(results1) in [pandas.Series, numpy.ndarray] and type(results2) in [pandas.Series, numpy.ndarray]: if type(results1[0]) is str: isclose = set(results1) == set(results2) else: isclose = numpy.isclose(results1, results2).all() else: isclose = numpy.isclose(results1, results2) if not isclose: passing = False break except (ValueError, TypeError): print(type(results1)) print(results1) print(type(results2)) print(results2) raise return passing, mode1_times, mode2_times def bench_sub_plot(mode1_inputs: list, mode1_statement: str, mode2_inputs: list, mode2_statement: str, title: str, label1: str, label2: str, save_fig: bool = True) -> tuple[bool, list[float], list[float]]: passing, mode1_times, mode2_times = bench_sub(mode1_inputs, mode1_statement, mode2_inputs, mode2_statement) fig, ax = plt.subplots(2, dpi=100, figsize=(8, 6)) mode1_x = [len(x) for x in mode1_inputs] mode2_x = [len(x) for x in mode2_inputs] ax[0].plot(mode1_x, mode1_times, marker='o', markerfacecolor='none', label=label1) ax[0].plot(mode2_x, mode2_times, marker='^', markerfacecolor='none', label=label2) ax[0].set_xscale('log') ax[0].set_yscale('log') ax[0].legend() ax[0].set_title(title + f' : {"PASS" if passing else "FAIL"}') ax[0].set_xlabel('Number of records') ax[0].set_ylabel('Time [s]') if mode1_x == mode2_x: mode_comp = [x / y for x, y in zip(mode1_times, mode2_times)] ax[1].plot(mode1_x, mode_comp, marker='o', markerfacecolor='none', label=f'{label1} / {label2}') ax[1].plot([min(mode1_x), max(mode1_x)], [1.0, 1.0], linestyle='dashed', color='#AAAAAA', label='parity') ax[1].set_xscale('log') ax[1].legend() ax[1].set_title(title + f' (ratio)\nValues <1 indicate {label1} is faster than {label2}') ax[1].set_xlabel('Number of records') ax[1].set_ylabel(f'{label1} / {label2}') plt.tight_layout() # plt.show() if save_fig: global FIGURE_NUMBER # https://stackoverflow.com/a/295152 clean_title = ''.join([x for x in title if (x.isalnum() or x in '_-. 
')]) fig.savefig(f'outputs/{FIGURE_NUMBER:06}_{clean_title}.png') FIGURE_NUMBER += 1 return passing, mode1_times, mode2_times def _print_result_comparison(success: bool, times1: list[float], times2: list[float], input_lengths: list[int], title: str, label1: str, label2: str): print(title) print(f' Test result: {"PASS" if success else "FAIL"}') field_width = 15 print(f'{"# of records":>{field_width}} {label1 + " [ms]":>{field_width}} {label2 + " [ms]":>{field_width}} {"ratio":>{field_width}}') for input_length, time1, time2 in zip(input_lengths, times1, times2): print(f'{input_length:>{field_width}} {time1 * 1000:>{field_width}.03f} {time2 * 1000:>{field_width}.03f} {time1 / time2:>{field_width}.03f}') print() def bench_sub_plot_print(mode1_inputs: list, mode1_statement: str, mode2_inputs: list, mode2_statement: str, title: str, label1: str, label2: str, all_lengths: list[int], save_fig: bool = True) -> tuple[bool, list[float], list[float]]: success, times1, times2 = bench_sub_plot( mode1_inputs, mode1_statement, mode2_inputs, mode2_statement, title, label1, label2, True ) _print_result_comparison(success, times1, times2, all_lengths, title, label1, label2) return success, times1, times2 def _main(): start_time = time.perf_counter_ns() # In [2]: iris = seaborn.load_dataset('iris') # In [3]: data_pandas: list[pandas.DataFrame] = [] data_numpy: list[numpy.rec.recarray] = [] all_lengths = [10_000, 100_000, 500_000, 1_000_000, 5_000_000, 10_000_000, 15_000_000] # all_lengths = [10_000, 100_000, 500_000] #, 1_000_000, 5_000_000, 10_000_000, 15_000_000] for total_len in all_lengths: data_pandas_i = pandas.concat([iris] * (total_len // len(iris))) data_pandas_i = pandas.concat([data_pandas_i, iris[:total_len - len(data_pandas_i)]]) data_pandas.append(data_pandas_i) data_numpy.append(data_pandas_i.to_records()) # In [4]: print('Input sizes [count]:') print(f'{"#":>4} {"pandas":>9} {"numpy":>9}') for i, (data_pandas_i, data_numpy_i) in enumerate(zip(data_pandas, data_numpy)): print(f'{i:>4} {len(data_pandas_i):>9} {len(data_numpy_i):>9}') print() # In [5]: mb_size_in_bytes = 1024 * 1024 print('Data sizes [MB]:') print(f'{"#":>4} {"pandas":>9} {"numpy":>9}') for i, (data_pandas_i, data_numpy_i) in enumerate(zip(data_pandas, data_numpy)): print(f'{i:>4} {int(sys.getsizeof(data_pandas_i) / mb_size_in_bytes):>9} {int(sys.getsizeof(data_numpy_i) / mb_size_in_bytes):>9}') print() # In [6]: print(data_pandas[0].head()) print() # In [7]: # ... 
# In [8]: success, times_pandas, times_numpy = bench_sub_plot_print( data_pandas, 'res = data.loc[:, "sepal_length"].mean()', data_numpy, 'res = numpy.mean(data.sepal_length)', 'Mean on Unfiltered Column', 'pandas', 'numpy', all_lengths, True ) # In [9]: success, times_pandas, times_numpy = bench_sub_plot_print( data_pandas, 'res = numpy.log(data.loc[:, "sepal_length"])', data_numpy, 'res = numpy.log(data.sepal_length)', 'Vectorised log on Unfiltered Column', 'pandas', 'numpy', all_lengths, True ) # In [10]: success, times_pandas, times_numpy = bench_sub_plot_print( data_pandas, 'res = data.loc[:, "species"].unique()', data_numpy, 'res = numpy.unique(data.species)', 'Unique on Unfiltered String Column', 'pandas', 'numpy', all_lengths, True ) # In [11]: success, times_pandas, times_numpy = bench_sub_plot_print( data_pandas, 'res = data.loc[(data.sepal_width > 3) & (data.petal_length < 1.5), "sepal_length"].mean()', data_numpy, 'res = numpy.mean(data[(data.sepal_width > 3) & (data.petal_length < 1.5)].sepal_length)', 'Mean on Filtered Column', 'pandas', 'numpy', all_lengths, True ) # In [12]: success, times_pandas, times_numpy = bench_sub_plot_print( data_pandas, 'res = numpy.log(data.loc[(data.sepal_width > 3) & (data.petal_length < 1.5), "sepal_length"])', data_numpy, 'res = numpy.log(data[(data.sepal_width > 3) & (data.petal_length < 1.5)].sepal_length)', 'Vectorised log on Filtered Column', 'pandas', 'numpy', all_lengths, True ) # In [13]: success, times_pandas, times_numpy = bench_sub_plot_print( data_pandas, 'res = data[data.species == "setosa"].sepal_length.mean()', data_numpy, 'res = numpy.mean(data[data.species == "setosa"].sepal_length)', 'Mean on (String) Filtered Column', 'pandas', 'numpy', all_lengths, True ) # In [14]: success, times_pandas, times_numpy = bench_sub_plot_print( data_pandas, 'res = data.petal_length * data.sepal_length + data.petal_width * data.sepal_width', data_numpy, 'res = data.petal_length * data.sepal_length + data.petal_width * data.sepal_width', 'Vectorized Math on Unfiltered Column', 'pandas', 'numpy', all_lengths, True ) # In [16]: success, times_pandas, times_numpy = bench_sub_plot_print( data_pandas, 'res = data.loc[data.sepal_width * data.petal_length > data.sepal_length, "sepal_length"].mean()', data_numpy, 'res = numpy.mean(data[data.sepal_width * data.petal_length > data.sepal_length].sepal_length)', 'Vectorized Math in Filtering Column', 'pandas', 'numpy', all_lengths, True ) end_time = time.perf_counter_ns() print(f'Total run time: {(end_time - start_time) / 10 ** 9:.3f} s') if __name__ == '__main__': _main() Here is the console output it generates: Input sizes [count]: # pandas numpy 0 10000 10000 1 100000 100000 2 500000 500000 3 1000000 1000000 4 5000000 5000000 5 10000000 10000000 6 15000000 15000000 Data sizes [MB]: # pandas numpy 0 0 0 1 9 4 2 46 22 3 92 45 4 464 228 5 928 457 6 1392 686 sepal_length sepal_width petal_length petal_width species 0 5.1 3.5 1.4 0.2 setosa 1 4.9 3.0 1.4 0.2 setosa 2 4.7 3.2 1.3 0.2 setosa 3 4.6 3.1 1.5 0.2 setosa 4 5.0 3.6 1.4 0.2 setosa Mean on Unfiltered Column Test result: PASS # of records pandas [ms] numpy [ms] ratio 10000 0.061 0.033 1.855 100000 0.160 0.148 1.081 500000 0.653 1.074 0.608 1000000 1.512 2.440 0.620 5000000 11.633 12.558 0.926 10000000 23.954 25.360 0.945 15000000 35.362 40.108 0.882 Vectorised log on Unfiltered Column Test result: PASS # of records pandas [ms] numpy [ms] ratio 10000 0.124 0.056 2.190 100000 0.507 0.493 1.029 500000 3.399 3.441 0.988 1000000 5.396 6.867 0.786 5000000 
27.187 38.121 0.713 10000000 55.497 72.609 0.764 15000000 88.406 112.199 0.788 Unique on Unfiltered String Column Test result: PASS # of records pandas [ms] numpy [ms] ratio 10000 0.332 1.742 0.191 100000 2.885 21.833 0.132 500000 14.769 125.961 0.117 1000000 29.687 264.521 0.112 5000000 147.359 1501.378 0.098 10000000 295.118 3132.478 0.094 15000000 444.365 4882.316 0.091 Mean on Filtered Column Test result: PASS # of records pandas [ms] numpy [ms] ratio 10000 0.355 0.130 2.719 100000 0.522 0.672 0.777 500000 1.797 4.824 0.372 1000000 4.602 10.827 0.425 5000000 22.116 57.945 0.382 10000000 43.076 116.028 0.371 15000000 68.893 177.658 0.388 Vectorised log on Filtered Column Test result: PASS # of records pandas [ms] numpy [ms] ratio 10000 0.361 0.128 2.821 100000 0.576 0.758 0.760 500000 2.066 5.199 0.397 1000000 5.259 11.523 0.456 5000000 22.785 59.581 0.382 10000000 47.527 121.882 0.390 15000000 75.080 187.954 0.399 Mean on (String) Filtered Column Test result: PASS # of records pandas [ms] numpy [ms] ratio 10000 0.636 0.192 3.304 100000 4.068 1.743 2.334 500000 20.954 9.306 2.252 1000000 41.938 18.522 2.264 5000000 217.254 97.929 2.218 10000000 434.242 197.289 2.201 15000000 657.205 297.919 2.206 Vectorized Math on Unfiltered Column Test result: PASS # of records pandas [ms] numpy [ms] ratio 10000 0.168 0.049 3.415 100000 0.385 0.338 1.140 500000 3.193 5.018 0.636 1000000 6.028 9.539 0.632 5000000 32.640 48.235 0.677 10000000 69.748 99.893 0.698 15000000 107.528 159.040 0.676 Vectorized Math in Filtering Column Test result: PASS # of records pandas [ms] numpy [ms] ratio 10000 0.350 0.234 1.500 100000 0.926 2.494 0.371 500000 6.093 15.007 0.406 1000000 12.641 30.021 0.421 5000000 71.714 163.060 0.440 10000000 145.373 326.206 0.446 15000000 227.817 490.991 0.464 Total run time: 183.198 s And here are the plots it generated: These results were generated with Windows 10, Python 3.13, on i9-10900K, and never got close to running out of memory so swap should not be a factor.
2
1
79,363,079
2025-1-16
https://stackoverflow.com/questions/79363079/what-is-causing-some-points-to-fail-sampling-an-enclosing-mesh-how-do-i-preve
I am using the PyVista package to resample a .vtk mesh of nonconforming rectangular prisms onto a grid of points. However, some of the points fail to sample the dataset, which results in a "0" value in the index of that point in pointset_sample.point_data['vtkValidPointMask'] and either 0 or some NaN color value when plotting the points. Here's a minimal reproducible example: mesh.vtk: # vtk DataFile Version 2.0 MeshData ASCII DATASET UNSTRUCTURED_GRID POINTS 207 float -2500 -2500 -400 -1500 -2500 -400 -500 -2500 -400 500 -2500 -400 1500 -2500 -400 2500 -2500 -400 -2500 -1500 -400 -1500 -1500 -400 -500 -1500 -400 500 -1500 -400 1500 -1500 -400 2500 -1500 -400 -2500 -500 -400 -1500 -500 -400 -500 -500 -400 500 -500 -400 1500 -500 -400 2500 -500 -400 -2500 500 -400 -1500 500 -400 -500 500 -400 500 500 -400 1500 500 -400 2500 500 -400 -2500 1500 -400 -1500 1500 -400 -500 1500 -400 500 1500 -400 1500 1500 -400 2500 1500 -400 -2500 2500 -400 -1500 2500 -400 -500 2500 -400 500 2500 -400 1500 2500 -400 2500 2500 -400 -2500 -2500 0 -1500 -2500 0 -500 -2500 0 500 -2500 0 1500 -2500 0 2500 -2500 0 -2500 -1500 0 -1500 -1500 0 -500 -1500 0 500 -1500 0 1500 -1500 0 2500 -1500 0 -2500 -500 0 -1500 -500 0 -500 -500 0 500 -500 0 1500 -500 0 2500 -500 0 -2500 500 0 -1500 500 0 -500 500 0 500 500 0 1500 500 0 2500 500 0 -2500 1500 0 -1500 1500 0 -500 1500 0 500 1500 0 1500 1500 0 2500 1500 0 -2500 2500 0 -1500 2500 0 -500 2500 0 500 2500 0 1500 2500 0 2500 2500 0 -2500 -2500 1600 -1500 -2500 1600 -500 -2500 1600 500 -2500 1600 1500 -2500 1600 2500 -2500 1600 -2500 -1500 1600 -1500 -1500 1600 -500 -1500 1600 500 -1500 1600 1500 -1500 1600 2500 -1500 1600 -2500 -500 1600 -1500 -500 1600 -500 -500 1600 500 -500 1600 1500 -500 1600 2500 -500 1600 -2500 500 1600 -1500 500 1600 -500 500 1600 500 500 1600 1500 500 1600 2500 500 1600 -2500 1500 1600 -1500 1500 1600 -500 1500 1600 500 1500 1600 1500 1500 1600 2500 1500 1600 -2500 2500 1600 -1500 2500 1600 -500 2500 1600 500 2500 1600 1500 2500 1600 2500 2500 1600 -1000 -1500 -400 -1500 -1000 -400 -1000 -1000 -400 -500 -1000 -400 -1000 -500 -400 -1000 -1500 0 -1500 -1000 0 -1000 -1000 0 -500 -1000 0 -1000 -500 0 0 -1500 -400 0 -1000 -400 500 -1000 -400 0 -500 -400 0 -1500 0 0 -1000 0 500 -1000 0 0 -500 0 1000 -1500 -400 1000 -1000 -400 1500 -1000 -400 1000 -500 -400 1000 -1500 0 1000 -1000 0 1500 -1000 0 1000 -500 0 -1500 0 -400 -1000 0 -400 -500 0 -400 -1000 500 -400 -1500 0 0 -1000 0 0 -500 0 0 -1000 500 0 0 0 -400 500 0 -400 0 500 -400 0 0 0 500 0 0 0 500 0 1000 0 -400 1500 0 -400 1000 500 -400 1000 0 0 1500 0 0 1000 500 0 -1500 1000 -400 -1000 1000 -400 -500 1000 -400 -1000 1500 -400 -1500 1000 0 -1000 1000 0 -500 1000 0 -1000 1500 0 0 1000 -400 500 1000 -400 0 1500 -400 0 1000 0 500 1000 0 0 1500 0 1000 1000 -400 1500 1000 -400 1000 1500 -400 1000 1000 0 1500 1000 0 1000 1500 0 -1000 -1500 1600 -1500 -1000 1600 -1000 -1000 1600 -500 -1000 1600 -1000 -500 1600 0 -1500 1600 0 -1000 1600 500 -1000 1600 0 -500 1600 1000 -1500 1600 1000 -1000 1600 1500 -1000 1600 1000 -500 1600 -1500 0 1600 -1000 0 1600 -500 0 1600 -1000 500 1600 0 0 1600 500 0 1600 0 500 1600 1000 0 1600 1500 0 1600 1000 500 1600 -1500 1000 1600 -1000 1000 1600 -500 1000 1600 -1000 1500 1600 0 1000 1600 500 1000 1600 0 1500 1600 1000 1000 1600 1500 1000 1600 1000 1500 1600 CELLS 104 936 8 0 1 7 6 36 37 43 42 8 1 2 8 7 37 38 44 43 8 2 3 9 8 38 39 45 44 8 3 4 10 9 39 40 46 45 8 4 5 11 10 40 41 47 46 8 6 7 13 12 42 43 49 48 8 10 11 17 16 46 47 53 52 8 12 13 19 18 48 49 55 54 8 16 17 23 22 52 53 59 58 8 
18 19 25 24 54 55 61 60 8 22 23 29 28 58 59 65 64 8 24 25 31 30 60 61 67 66 8 25 26 32 31 61 62 68 67 8 26 27 33 32 62 63 69 68 8 27 28 34 33 63 64 70 69 8 28 29 35 34 64 65 71 70 8 36 37 43 42 72 73 79 78 8 37 38 44 43 73 74 80 79 8 38 39 45 44 74 75 81 80 8 39 40 46 45 75 76 82 81 8 40 41 47 46 76 77 83 82 8 42 43 49 48 78 79 85 84 8 46 47 53 52 82 83 89 88 8 48 49 55 54 84 85 91 90 8 52 53 59 58 88 89 95 94 8 54 55 61 60 90 91 97 96 8 58 59 65 64 94 95 101 100 8 60 61 67 66 96 97 103 102 8 61 62 68 67 97 98 104 103 8 62 63 69 68 98 99 105 104 8 63 64 70 69 99 100 106 105 8 64 65 71 70 100 101 107 106 8 7 108 110 109 43 113 115 114 8 108 8 111 110 113 44 116 115 8 109 110 112 13 114 115 117 49 8 110 111 14 112 115 116 50 117 8 8 118 119 111 44 122 123 116 8 118 9 120 119 122 45 124 123 8 111 119 121 14 116 123 125 50 8 119 120 15 121 123 124 51 125 8 9 126 127 120 45 130 131 124 8 126 10 128 127 130 46 132 131 8 120 127 129 15 124 131 133 51 8 127 128 16 129 131 132 52 133 8 13 112 135 134 49 117 139 138 8 112 14 136 135 117 50 140 139 8 134 135 137 19 138 139 141 55 8 135 136 20 137 139 140 56 141 8 14 121 142 136 50 125 145 140 8 121 15 143 142 125 51 146 145 8 136 142 144 20 140 145 147 56 8 142 143 21 144 145 146 57 147 8 15 129 148 143 51 133 151 146 8 129 16 149 148 133 52 152 151 8 143 148 150 21 146 151 153 57 8 148 149 22 150 151 152 58 153 8 19 137 155 154 55 141 159 158 8 137 20 156 155 141 56 160 159 8 154 155 157 25 158 159 161 61 8 155 156 26 157 159 160 62 161 8 20 144 162 156 56 147 165 160 8 144 21 163 162 147 57 166 165 8 156 162 164 26 160 165 167 62 8 162 163 27 164 165 166 63 167 8 21 150 168 163 57 153 171 166 8 150 22 169 168 153 58 172 171 8 163 168 170 27 166 171 173 63 8 168 169 28 170 171 172 64 173 8 43 113 115 114 79 174 176 175 8 113 44 116 115 174 80 177 176 8 114 115 117 49 175 176 178 85 8 115 116 50 117 176 177 86 178 8 44 122 123 116 80 179 180 177 8 122 45 124 123 179 81 181 180 8 116 123 125 50 177 180 182 86 8 123 124 51 125 180 181 87 182 8 45 130 131 124 81 183 184 181 8 130 46 132 131 183 82 185 184 8 124 131 133 51 181 184 186 87 8 131 132 52 133 184 185 88 186 8 49 117 139 138 85 178 188 187 8 117 50 140 139 178 86 189 188 8 138 139 141 55 187 188 190 91 8 139 140 56 141 188 189 92 190 8 50 125 145 140 86 182 191 189 8 125 51 146 145 182 87 192 191 8 140 145 147 56 189 191 193 92 8 145 146 57 147 191 192 93 193 8 51 133 151 146 87 186 194 192 8 133 52 152 151 186 88 195 194 8 146 151 153 57 192 194 196 93 8 151 152 58 153 194 195 94 196 8 55 141 159 158 91 190 198 197 8 141 56 160 159 190 92 199 198 8 158 159 161 61 197 198 200 97 8 159 160 62 161 198 199 98 200 8 56 147 165 160 92 193 201 199 8 147 57 166 165 193 93 202 201 8 160 165 167 62 199 201 203 98 8 165 166 63 167 201 202 99 203 8 57 153 171 166 93 196 204 202 8 153 58 172 171 196 94 205 204 8 166 171 173 63 202 204 206 99 8 171 172 64 173 204 205 100 206 CELL_TYPES 104 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 CELL_DATA 104 SCALARS BlockID int LOOKUP_TABLE default 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 SCALARS 
Resistivity[Ohm-m] float LOOKUP_TABLE default 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 1e+10 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 SCALARS ElemID int LOOKUP_TABLE default 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 SCALARS Level int LOOKUP_TABLE default 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 SCALARS Volume float LOOKUP_TABLE default 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 1.6 1.6 1.6 1.6 1.6 1.6 1.6 1.6 1.6 1.6 1.6 1.6 1.6 1.6 1.6 1.6 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 POINT_DATA 207 SCALARS NodeID int LOOKUP_TABLE default 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 array in points.py: import numpy as np points = np.array([[-1.912e+03, -1.912e+03, -2.000e+02], [-1.912e+03, -1.912e+03, 8.000e+02], [-1.030e+03, -1.912e+03, -2.000e+02], [-1.030e+03, -1.912e+03, 8.000e+02], [-5.890e+02, -1.912e+03, -2.000e+02], [-5.890e+02, -1.912e+03, 8.000e+02], [-2.950e+02, -1.912e+03, -2.000e+02], [-2.950e+02, -1.912e+03, 8.000e+02], [-1.000e+00, -1.912e+03, -2.000e+02], [-1.000e+00, -1.912e+03, 8.000e+02], [ 2.930e+02, -1.912e+03, -2.000e+02], [ 2.930e+02, -1.912e+03, 8.000e+02], [ 5.870e+02, -1.912e+03, -2.000e+02], [ 5.870e+02, -1.912e+03, 8.000e+02], [ 1.028e+03, -1.912e+03, -2.000e+02], [ 1.028e+03, -1.912e+03, 8.000e+02], [ 1.910e+03, -1.912e+03, -2.000e+02], [ 1.910e+03, -1.912e+03, 8.000e+02], [-1.912e+03, -1.030e+03, -2.000e+02], [-1.912e+03, -1.030e+03, 8.000e+02], [-1.030e+03, -1.030e+03, -2.000e+02], [-1.030e+03, -1.030e+03, 8.000e+02], [-5.890e+02, -1.030e+03, -2.000e+02], [-5.890e+02, -1.030e+03, 8.000e+02], [-2.950e+02, -1.030e+03, -2.000e+02], [-2.950e+02, -1.030e+03, 8.000e+02], [-1.000e+00, -1.030e+03, -2.000e+02], [-1.000e+00, -1.030e+03, 8.000e+02], [ 2.930e+02, -1.030e+03, -2.000e+02], [ 2.930e+02, 
-1.030e+03, 8.000e+02], [ 5.870e+02, -1.030e+03, -2.000e+02], [ 5.870e+02, -1.030e+03, 8.000e+02], [ 1.028e+03, -1.030e+03, -2.000e+02], [ 1.028e+03, -1.030e+03, 8.000e+02], [ 1.910e+03, -1.030e+03, -2.000e+02], [ 1.910e+03, -1.030e+03, 8.000e+02], [-1.912e+03, -5.890e+02, -2.000e+02], [-1.912e+03, -5.890e+02, 8.000e+02], [-1.030e+03, -5.890e+02, -2.000e+02], [-1.030e+03, -5.890e+02, 8.000e+02], [-5.890e+02, -5.890e+02, -2.000e+02], [-5.890e+02, -5.890e+02, 8.000e+02], [-2.950e+02, -5.890e+02, -2.000e+02], [-2.950e+02, -5.890e+02, 8.000e+02], [-1.000e+00, -5.890e+02, -2.000e+02], [-1.000e+00, -5.890e+02, 8.000e+02], [ 2.930e+02, -5.890e+02, -2.000e+02], [ 2.930e+02, -5.890e+02, 8.000e+02], [ 5.870e+02, -5.890e+02, -2.000e+02], [ 5.870e+02, -5.890e+02, 8.000e+02], [ 1.028e+03, -5.890e+02, -2.000e+02], [ 1.028e+03, -5.890e+02, 8.000e+02], [ 1.910e+03, -5.890e+02, -2.000e+02], [ 1.910e+03, -5.890e+02, 8.000e+02], [-1.912e+03, -2.950e+02, -2.000e+02], [-1.912e+03, -2.950e+02, 8.000e+02], [-1.030e+03, -2.950e+02, -2.000e+02], [-1.030e+03, -2.950e+02, 8.000e+02], [-5.890e+02, -2.950e+02, -2.000e+02], [-5.890e+02, -2.950e+02, 8.000e+02], [-2.950e+02, -2.950e+02, -2.000e+02], [-2.950e+02, -2.950e+02, 8.000e+02], [-1.000e+00, -2.950e+02, -2.000e+02], [-1.000e+00, -2.950e+02, 8.000e+02], [ 2.930e+02, -2.950e+02, -2.000e+02], [ 2.930e+02, -2.950e+02, 8.000e+02], [ 5.870e+02, -2.950e+02, -2.000e+02], [ 5.870e+02, -2.950e+02, 8.000e+02], [ 1.028e+03, -2.950e+02, -2.000e+02], [ 1.028e+03, -2.950e+02, 8.000e+02], [ 1.910e+03, -2.950e+02, -2.000e+02], [ 1.910e+03, -2.950e+02, 8.000e+02], [-1.912e+03, -1.000e+00, -2.000e+02], [-1.912e+03, -1.000e+00, 8.000e+02], [-1.030e+03, -1.000e+00, -2.000e+02], [-1.030e+03, -1.000e+00, 8.000e+02], [-5.890e+02, -1.000e+00, -2.000e+02], [-5.890e+02, -1.000e+00, 8.000e+02], [-2.950e+02, -1.000e+00, -2.000e+02], [-2.950e+02, -1.000e+00, 8.000e+02], [-1.000e+00, -1.000e+00, -2.000e+02], [-1.000e+00, -1.000e+00, 8.000e+02], [ 2.930e+02, -1.000e+00, -2.000e+02], [ 2.930e+02, -1.000e+00, 8.000e+02], [ 5.870e+02, -1.000e+00, -2.000e+02], [ 5.870e+02, -1.000e+00, 8.000e+02], [ 1.028e+03, -1.000e+00, -2.000e+02], [ 1.028e+03, -1.000e+00, 8.000e+02], [ 1.910e+03, -1.000e+00, -2.000e+02], [ 1.910e+03, -1.000e+00, 8.000e+02], [-1.912e+03, 2.930e+02, -2.000e+02], [-1.912e+03, 2.930e+02, 8.000e+02], [-1.030e+03, 2.930e+02, -2.000e+02], [-1.030e+03, 2.930e+02, 8.000e+02], [-5.890e+02, 2.930e+02, -2.000e+02], [-5.890e+02, 2.930e+02, 8.000e+02], [-2.950e+02, 2.930e+02, -2.000e+02], [-2.950e+02, 2.930e+02, 8.000e+02], [-1.000e+00, 2.930e+02, -2.000e+02], [-1.000e+00, 2.930e+02, 8.000e+02], [ 2.930e+02, 2.930e+02, -2.000e+02], [ 2.930e+02, 2.930e+02, 8.000e+02], [ 5.870e+02, 2.930e+02, -2.000e+02], [ 5.870e+02, 2.930e+02, 8.000e+02], [ 1.028e+03, 2.930e+02, -2.000e+02], [ 1.028e+03, 2.930e+02, 8.000e+02], [ 1.910e+03, 2.930e+02, -2.000e+02], [ 1.910e+03, 2.930e+02, 8.000e+02], [-1.912e+03, 5.870e+02, -2.000e+02], [-1.912e+03, 5.870e+02, 8.000e+02], [-1.030e+03, 5.870e+02, -2.000e+02], [-1.030e+03, 5.870e+02, 8.000e+02], [-5.890e+02, 5.870e+02, -2.000e+02], [-5.890e+02, 5.870e+02, 8.000e+02], [-2.950e+02, 5.870e+02, -2.000e+02], [-2.950e+02, 5.870e+02, 8.000e+02], [-1.000e+00, 5.870e+02, -2.000e+02], [-1.000e+00, 5.870e+02, 8.000e+02], [ 2.930e+02, 5.870e+02, -2.000e+02], [ 2.930e+02, 5.870e+02, 8.000e+02], [ 5.870e+02, 5.870e+02, -2.000e+02], [ 5.870e+02, 5.870e+02, 8.000e+02], [ 1.028e+03, 5.870e+02, -2.000e+02], [ 1.028e+03, 5.870e+02, 8.000e+02], [ 1.910e+03, 5.870e+02, -2.000e+02], [ 
1.910e+03, 5.870e+02, 8.000e+02], [-1.912e+03, 1.028e+03, -2.000e+02], [-1.912e+03, 1.028e+03, 8.000e+02], [-1.030e+03, 1.028e+03, -2.000e+02], [-1.030e+03, 1.028e+03, 8.000e+02], [-5.890e+02, 1.028e+03, -2.000e+02], [-5.890e+02, 1.028e+03, 8.000e+02], [-2.950e+02, 1.028e+03, -2.000e+02], [-2.950e+02, 1.028e+03, 8.000e+02], [-1.000e+00, 1.028e+03, -2.000e+02], [-1.000e+00, 1.028e+03, 8.000e+02], [ 2.930e+02, 1.028e+03, -2.000e+02], [ 2.930e+02, 1.028e+03, 8.000e+02], [ 5.870e+02, 1.028e+03, -2.000e+02], [ 5.870e+02, 1.028e+03, 8.000e+02], [ 1.028e+03, 1.028e+03, -2.000e+02], [ 1.028e+03, 1.028e+03, 8.000e+02], [ 1.910e+03, 1.028e+03, -2.000e+02], [ 1.910e+03, 1.028e+03, 8.000e+02], [-1.912e+03, 1.910e+03, -2.000e+02], [-1.912e+03, 1.910e+03, 8.000e+02], [-1.030e+03, 1.910e+03, -2.000e+02], [-1.030e+03, 1.910e+03, 8.000e+02], [-5.890e+02, 1.910e+03, -2.000e+02], [-5.890e+02, 1.910e+03, 8.000e+02], [-2.950e+02, 1.910e+03, -2.000e+02], [-2.950e+02, 1.910e+03, 8.000e+02], [-1.000e+00, 1.910e+03, -2.000e+02], [-1.000e+00, 1.910e+03, 8.000e+02], [ 2.930e+02, 1.910e+03, -2.000e+02], [ 2.930e+02, 1.910e+03, 8.000e+02], [ 5.870e+02, 1.910e+03, -2.000e+02], [ 5.870e+02, 1.910e+03, 8.000e+02], [ 1.028e+03, 1.910e+03, -2.000e+02], [ 1.028e+03, 1.910e+03, 8.000e+02], [ 1.910e+03, 1.910e+03, -2.000e+02], [ 1.910e+03, 1.910e+03, 8.000e+02]]) Example code: import pyvista as pv import numpy as np from points import points mesh = pv.read("mesh.vtk") mesh.set_active_scalars('Resistivity[Ohm-m]') pointset = pv.PointSet(points) pointset_sample = pointset.sample(mesh) print(pointset_sample.point_data['vtkValidPointMask']) plot = pv.Plotter() plot.add_mesh(pointset_sample, scalars = 'Resistivity[Ohm-m]', cmap = 'turbo_r', render_points_as_spheres=True, point_size=20, log_scale=True, clim=[5e0, 5e2]) plot.add_mesh(mesh, scalars = 'Resistivity[Ohm-m]', cmap = 'turbo_r', opacity=0.3, show_edges=True, line_width=2, edge_opacity=1.0, log_scale=True, clim=[5e0, 5e2]) plot.show() This results in this plot, where the 12 points that are missed in this example are on the red end of the color ramp, while all others are in agreement with the cells that contain them and are either green or purple, and a vtkValidPointMask of: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1] What is causing some points to fail sampling an enclosing mesh? How do I prevent points from failing to sample an enclosing mesh?
I can reproduce your issue and I don't know why it happens, but frankly I've never quite understood the subtleties of sample/probe/interpolate and friends, despite multiple attempts of more knowledgeable people to explain these to me :) So I also don't know if any of the other similar filters could be applicable. It might be worth opening an issue for. What I do know is that mesh.find_closest_cell() seems to work fine for your example. Since you just want to pick out the constant cell scalar for each point (at least based on your example), you can find the closest cell to each point and query the cell data. This might be feasible depending on the size of your actual problem. Here's the corresponding change: import pyvista as pv import numpy as np from points import points mesh = pv.read("mesh.vtk") mesh.set_active_scalars('Resistivity[Ohm-m]') pointset = pv.PointSet(points) closest_cell_inds = mesh.find_closest_cell(points) pointset['Resistivity[Ohm-m]'] = mesh.cell_data['Resistivity[Ohm-m]'][closest_cell_inds] plot = pv.Plotter() plot.add_mesh(pointset, scalars='Resistivity[Ohm-m]', cmap='turbo_r', render_points_as_spheres=True, point_size=20, log_scale=True, clim=[5e0, 5e2]) plot.add_mesh(mesh, scalars='Resistivity[Ohm-m]', cmap='turbo_r', opacity=0.3, show_edges=True, line_width=2, edge_opacity=1.0, log_scale=True, clim=[5e0, 5e2]) plot.show() Some interactive work on your original sampled data shows that this does the right thing: >>> closest_cell_inds = mesh.find_closest_cell(pointset_sample.points) >>> sampling_works = pointset_sample['Resistivity[Ohm-m]'] == mesh.cell_data['Resistivity[Ohm-m]'][closest_cell_inds] >>> sampling_works pyvista_ndarray([ True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True]) >>> (sampling_works == pointset_sample['vtkValidPointMask']).all() np.True_
3
1
79,365,706
2025-1-17
https://stackoverflow.com/questions/79365706/why-factorization-of-products-of-close-primes-is-much-slower-than-products-of-di
This is a purely academic question without any practical consideration. This is not homework, I dropped out of high school long ago. I am just curious, and I can't sleep well without knowing why. I was messing around with Python. I decided to factorize big integers and measure the runtime of calls for each input. I used a bunch of numbers and found that some numbers take much longer to factorize than others. I then decided to investigate further, I quickly wrote a prime sieve function to generate primes for testing. I found out that a product of a pair of moderately large primes (two four-digit primes) take much longer to be factorized than a product of one very large prime (six-digits+) and a small prime (<=three-digits). At first I thought my first simple function for testing is inefficient, that is indeed the case, so I wrote a second function that pulled primes directly from pre-generated list of primes, the second function was indeed more efficient, but strangely it exhibits the same pattern. Here are some numbers that I used: 13717421 == 3607 * 3803 13189903 == 3593 * 3671 56267023 == 7187 * 7829 65415743 == 8087 * 8089 12345679 == 37 * 333667 38760793 == 37 * 1047589 158202851 == 151 * 1047701 762312571 == 727 * 1048573 Code: import numpy as np from itertools import cycle def factorize(n): factors = [] while not n % 2: factors.append(2) n //= 2 i = 3 while i**2 <= n: while not n % i: factors.append(i) n //= i i += 2 return factors if n == 1 else factors + [n] TRIPLE = ((4, 2), (9, 6), (25, 10)) WHEEL = ( 4, 2, 4, 2, 4, 6, 2, 6 ) def prime_sieve(n): primes = np.ones(n + 1, dtype=bool) primes[:2] = False for square, double in TRIPLE: primes[square::double] = False wheel = cycle(WHEEL) k = 7 while (square := k**2) <= n: if primes[k]: primes[square::2*k] = False k += next(wheel) return np.flatnonzero(primes) PRIMES = list(map(int, prime_sieve(1048576))) TEST_LIMIT = PRIMES[-1] ** 2 def factorize_sieve(n): if n > TEST_LIMIT: raise ValueError('Number too large') factors = [] for p in PRIMES: if p**2 > n: break while not n % p: factors.append(p) n //= p return factors if n == 1 else factors + [n] Test result: In [2]: %timeit factorize(13717421) 279 μs ± 4.29 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [3]: %timeit factorize(12345679) 39.6 μs ± 749 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [4]: %timeit factorize_sieve(13717421) 64.1 μs ± 688 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [5]: %timeit factorize_sieve(12345679) 12.6 μs ± 146 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [6]: %timeit factorize_sieve(13189903) 64.6 μs ± 964 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [7]: %timeit factorize_sieve(56267023) 117 μs ± 3.88 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [8]: %timeit factorize_sieve(65415743) 130 μs ± 1.38 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [9]: %timeit factorize_sieve(38760793) 21.1 μs ± 232 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [10]: %timeit factorize_sieve(158202851) 21.4 μs ± 385 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [11]: %timeit factorize_sieve(762312571) 22.1 μs ± 409 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) As you can clearly see, factorization of two medium primes on average takes much longer than two extremes. Why is it this case?
I don't see how this question is related to programming. Your algorithm executes trial divisions (starting with the smallest candidates) and terminates at the square root of the remaining number, so the worst case is actually trying to factorize the square of a prime (the largest possible smallest factor). Encountering a low factor speeds up the process, since you immediately divide n, and there is a chance that the square of the current trial factor already exceeds the reduced n.
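To make the cost visible, here is a small counting sketch (my addition) that mirrors the trial-division loop from the question's factorize and tallies how many candidate divisors get tested:

def count_trial_divisions(n):
    count = 0
    while n % 2 == 0:
        n //= 2
    i = 3
    while i * i <= n:
        count += 1          # one odd candidate divisor tested
        while n % i == 0:
            n //= i
        i += 2
    return count

print(count_trial_divisions(13717421))  # 3607 * 3803 -> about 1800 candidates
print(count_trial_divisions(12345679))  # 37 * 333667 -> about 290 candidates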
1
3
79,365,035
2025-1-17
https://stackoverflow.com/questions/79365035/using-sympy-replace-and-wild-symbols-to-match-and-substitute-arbitrary-functions
Is it possible with sympy Wild symbols and replace to match arbitrary function applications? What I would ideally like to do is the following: x = Symbol('x') expr1 = sin(x) expr2 = exp(x) F = Wild('F') #or maybe WildFunction('F')? result1 = expr1.replace(F(x), lambda F: F(tan(x))) #expected: sin(tan(x)) result2 = expr2.replace(F(x), lambda F: F(tan(x))) #expected: exp(tan(x)) Unfortunately this does not work: it throws a TypeError since Wild symbols are not callable. So is there a way to make this work? Note that I really don't want to match and replace specific functions, nor do I want to match and replace symbolic functions like Function('f'). I want to match and replace arbitrary (sympy?) functions like sin, exp, im, tan, re, conjugate and so on. What does work is F = WildFunction('F') result1 = expr1.replace(F, lambda F: F.func(*F.args)) But it feels a little unnatural and fragile.
Yes, but you're looking for Wild's properties argument and can use the type() of the match to nest a function call >>> F = Wild("F", properties=[lambda F: F.is_Function]) >>> expr1.replace(F, lambda F: type(F)(tan(x))) sin(tan(x)) Or to be more picky about which functions are replaced >>> F = Wild("F", properties=[lambda F: type(F) in [sin,tan,cos]]) >>> (sin(x) + cosh(x) + cos(x)).replace(F, lambda F: type(F)(csc(x))) sin(csc(x)) + cos(csc(x)) + cosh(x) A somewhat shameless plug, but check out my other Answer about how .replace et al. behave (also links to more on Wild)!
2
1
79,365,086
2025-1-17
https://stackoverflow.com/questions/79365086/can-i-have-different-virtual-environments-in-a-project-managed-by-uv
On a Windows machine, I'm developing a Python project that I manage using uv. I run the unit tests with uv run pytest, and uv automatically creates a virtual environment in .venv. So far, so good. But every now and then, I want to run the unit tests - or other commands - in Linux (from the same project source directory). In my case, this means WSL, but it could also be a VM using a shared folder or a network share. The problem is that the virtual environment is platform specific, so uv run pytest reports an error that the virtual environment is invalid. Is it possible to configure uv to use a different name for the project's virtual environment - e. g., .venv_linux? As a workaround, I could move the .venv folder out of the way and let uv on Linux create its own .venv. But I'd have to do that every time I switch between the two, and that would be cumbersome. I also couldn't do this while a command from the project is still running.
To support different virtual environments in a uv-managed project, the environment variable UV_PROJECT_ENVIRONMENT might be used. UV_PROJECT_ENVIRONMENT: Specifies the path to the directory to use for a project virtual environment. See the project documentation for more details. You could set it to .venv_windows or .venv_linux before running uv sync / uv run [...].
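For example, a minimal sketch (UV_PROJECT_ENVIRONMENT is the documented variable; the .venv_linux / .venv_windows directory names are just placeholders):

# In WSL / Linux (bash):
export UV_PROJECT_ENVIRONMENT=.venv_linux
uv run pytest

# On Windows (PowerShell), the equivalent would be:
# $env:UV_PROJECT_ENVIRONMENT = ".venv_windows"
# uv run pytest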
2
1
79,358,200
2025-1-15
https://stackoverflow.com/questions/79358200/position-of-robotic-arm-base-in-gymnasium-robotics-fetch-environments-off-cente
I am trying to solve the Farama gymnasium-robotics Fetch environments, specifically the "FetchReachDense-v3" problem. When running the simulation, the base of the robotic arm seems to be misplaced: Firstly, this looks weird and does not match the gymnasium-robotics documentation or other example solutions of this problem, and I think it messes up the task, as I'm not sure the arm can even reach some of the goal positions like this. Running this code reproduces the problem (at least for me): import gymnasium as gym import gymnasium_robotics import numpy as np # Creating environment gym.register_envs(gymnasium_robotics) env = gym.make("FetchReachDense-v3", render_mode="human") observation, info = env.reset(seed=42) print("Simulating with completely random actions") # loop summed_reward = 0 for _ in range(2000): action = np.random.uniform(-1, 1,4) # random action observation, reward, terminated, truncated, info = env.step(action) # Calculate next step of simulation. summed_reward += reward # sum rewards if terminated or truncated: print("summed reward =", summed_reward) summed_reward = 0 observation, info = env.reset() env.close() There also seems to be something else wrong. When I instead run 'FetchPickAndPlaceDense-v3' (where the base is also misplaced), the object that is supposed to be picked up is never generated. That makes the whole environment quite useless. I run python==3.11 with gymnasium==1.0.0, gymnasium-robotics=1.3.1, mujoco==3.2.7, and numpy==2.2.1. The problem also occurred with numpy=2.1.3 and with an older version of mujoco (not sure about the exact version anymore). Do you have any ideas what I'm doing wrong here? Or is this actually an issue with gymnasium-robotics? Thanks in advance, and if you need any other info, let me know.
I found the problem. Apparently the gymnasium-robotics version (1.3.1) that is accessible through pip / PyPI has versions v1 and v3 of the FetchReach environments (for me, it was not possible to run v2 even though it is mentioned in the documentation). When I went through the code on GitHub I saw that there is also a version 4 of this environment: Version History * v4: Fixed bug where initial state did not match initial state description in documentation. Fetch environments' initial states after reset now match the documentation (related [GitHub issue](https://github.com/Farama-Foundation/Gymnasium-Robotics/issues/251)). * v3: Fixed bug: `env.reset()` not properly resetting the internal state. Fetch environments now properly reset their state (related [GitHub issue](https://github.com/Farama-Foundation/Gymnasium-Robotics/issues/207)). * v2: the environment depends on the newest [mujoco python bindings](https://mujoco.readthedocs.io/en/latest/python.html) maintained by the MuJoCo team in Deepmind. * v1: the environment depends on `mujoco_py` which is no longer maintained. For me, it was not possible to run version 4 with the installation through pip. After installing directly from GitHub (which, weirdly, also reports version 1.3.1 of gymnasium-robotics) I can run version 4 and the problem is fixed. To install from GitHub, run the following in bash: git clone https://github.com/Farama-Foundation/Gymnasium-Robotics.git cd Gymnasium-Robotics pip install -e .
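After the GitHub install, switching the environment id to v4 should be the only change needed in the question's script. A sketch (the "FetchReachDense-v4" id is my assumption, based on the v3 id from the question and the version history quoted above):

import gymnasium as gym
import gymnasium_robotics

gym.register_envs(gymnasium_robotics)
# v4 contains the initial-state fix, so the arm base should be placed as documented
env = gym.make("FetchReachDense-v4", render_mode="human")
observation, info = env.reset(seed=42)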
2
3
79,365,194
2025-1-17
https://stackoverflow.com/questions/79365194/numerically-obtaining-response-of-a-damped-driven-oscillator-gives-peak-at-wrong
I am trying to plot the response of a periodically-driven damped oscillator whose dynamics is governed by x'' + 2Gx' + f0^2 x = F cos(ft) where the constants denote the following. G: Damping coefficient f0: Natural frequency f: Driving frequency F: Strength of the drive To do so, I solved the above differential equation for x(t). Next, I extracted the steady-state part from x(t), took its Fourier transform, and plotted its magnitude to visualize the response of the oscillator. Here is the code that attempts to achieve it. import numpy as np import matplotlib.pyplot as plt from scipy.fft import fft, fftfreq G=1.0 f0=2 f1=5 F=1 N=500000 T=50 dt=T/N t=np.linspace(0,T,N) u=np.zeros(N,dtype=float) # Position v=np.zeros(N,dtype=float) # Velocity u[0]=0 v[0]=0.5 for i in range(N-1): u[i+1] = u[i] + v[i]*dt v[i+1] = v[i] - 2*G*v[i]*dt - (f0*f0)*u[i]*dt + F*np.cos(f1*t[i])*dt slice_index=int(20/dt) U=u[slice_index:] X_f = fft(U) frequencies = fftfreq(len(U), dt) psd = np.abs(X_f) positive_freqs = frequencies[frequencies > 0] plt.plot(positive_freqs, psd[frequencies > 0], label="Simulated PSD") plt.plot(frequencies, psd) Since the oscillator is forced and reaches a steady state, I expect the response to peak around the driving frequency. However, the above code gives a peak located nowhere near f. What am I doing wrong?
Your frequencies f0 and f1 are applied in the finite-difference model in rad/s. This may or may not have been your intention. However, your frequencies from the FFT are in cycles/s. Since you are using the symbol f, rather than omega, I would guess that you want them in cycles/s. In your finite-difference model then you would have to use 2.PI.f in both locations where you put an f before. Specifically in the line v[i+1] = v[i] - 2*G*v[i]*dt - (2 * np.pi * f0 ) ** 2 * u[i]*dt + F*np.cos( 2 * np.pi * f1*t[i] ) * dt Then you get peak energy at a frequency of 5 Hz. (Trim the x-axis scale.) You are very heavily damped, BTW. Also, you aren't strictly plotting PSD. import numpy as np import matplotlib.pyplot as plt from scipy.fft import fft, fftfreq G=1.0 f0=2 f1=5 F=1 N=500000 T=50 dt=T/N t=np.linspace(0,T,N) u=np.zeros(N,dtype=float) # Position v=np.zeros(N,dtype=float) # Velocity u[0]=0 v[0]=0.5 for i in range(N-1): u[i+1] = u[i] + v[i]*dt v[i+1] = v[i] - 2*G*v[i]*dt - (2 * np.pi * f0 ) ** 2 * u[i]*dt + F*np.cos(2 * np.pi * f1*t[i] ) * dt slice_index=int(20/dt) U=u[slice_index:] X_f = fft(U) frequencies = fftfreq(len(U), dt) psd = np.abs(X_f) positive_freqs = frequencies[frequencies > 0] plt.plot(positive_freqs, psd[frequencies > 0], label="Simulated PSD") plt.xlim(0,10) plt.show()
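As an extra cross-check (my addition, not part of the original answer): the analytic steady-state amplitude of x'' + 2G x' + w0^2 x = F cos(w t) is F / sqrt((w0^2 - w^2)^2 + (2 G w)^2), with w0 = 2*pi*f0 and w = 2*pi*f1 once the frequencies are treated as Hz. The time-domain amplitude of the sliced signal from the corrected script should come out close to this value.

import numpy as np

G, f0, f1, F = 1.0, 2, 5, 1                 # same constants as in the script above
w0, w = 2 * np.pi * f0, 2 * np.pi * f1      # convert Hz to rad/s, as in the answer
analytic = F / np.sqrt((w0**2 - w**2)**2 + (2 * G * w)**2)
print(analytic)                             # roughly 1.2e-3
# With the corrected script's variables in scope, np.max(np.abs(U)) should be
# close to this value, and the FFT peak sits at the 5 Hz driving frequency.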
4
4