Dataset columns: question_id (int64, 59.5M to 79.4M), creation_date (string, length 8 to 10), link (string, length 60 to 163), question (string, length 53 to 28.9k), accepted_answer (string, length 26 to 29.3k), question_vote (int64, 1 to 410), answer_vote (int64, -9 to 482).
77,643,432
2023-12-12
https://stackoverflow.com/questions/77643432/why-is-pd-get-dummies-returning-boolean-values-instead-of-the-binaries-of-0-1
I don't know why my one-hot encoding code, pd.get_dummies, is returning Boolean values instead of binary 0/1. After writing the following line of code: df = pd.get_dummies(df) and also trying: df = pd.get_dummies(df, columns=['column_a', 'column_b', 'column_c']) the returned values of both were the booleans True and False instead of 0 and 1.
By default, pd.get_dummies returns boolean columns; try: df = pd.get_dummies(df, dtype=int)
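A minimal sketch of the difference, using a toy DataFrame with a made-up categorical column for illustration:

```python
import pandas as pd

df = pd.DataFrame({"column_a": ["x", "y", "x"]})

print(pd.get_dummies(df))             # column_a_x / column_a_y filled with True/False (recent pandas defaults to bool)
print(pd.get_dummies(df, dtype=int))  # same columns, but filled with 0/1
```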
3
12
77,640,545
2023-12-11
https://stackoverflow.com/questions/77640545/how-to-retrieve-the-line-number-where-a-c-function-is-called-from-python-using
I'm trying to make a C++ logger class for embedded python script with pybind11. How can I retrieve the line number where a C++ function is called from python? I have something like this in C++: class PythonLogger { public: PythonLogger(const std::string& filename) { /* opens log file */ } ~PythonLogger() { /* closes log file */ } void log(const std::string& msg) { // writes formated log message to log file: // [<current time>] [<script file name>:<line number>]: msg // "line number" here should be the line number where // this function is called inside python script } private: <some file class> log_file; }; typedef std::shared_ptr<PythonLogger> PythonLoggerPtr; PYBIND11_EMBEDDED_MODULE(PythonLogger, m) { py::class_<PythonLogger, PythonLoggerPtr>(m, "Logger") .def("debug", &PythonLogger::debug); } int main() { py::scoped_interpreter guard{}; PythonLoggerPtr logger = std::make_shared<PythonLogger>("python_log.log"); try { auto script = py::module_::import("script"); script.import("PythonLogger"); script.attr("logger") = logger; auto func = script.attr("func"); func(); } catch (const std::exception& e) { std::print("{}\n", e.what()); } } Please ignore that I didn't actually include any headers in this code. In script.py: def func(): logger.debug("debug message") And if I run this code, it should write this to the log file: [<current time>] [script.py:2]: debug message
One possibility is to inspect the Python call stack from the C++ function and grab the info about the caller from there. This approach might involve a noticeable overhead -- I haven't measured it, but measuring it first would be a good idea before you use this in production. You could do this using the standard inspect module, for example by calling inspect.stack(). py::module inspect_mod = py::module::import("inspect"); py::list frames = inspect_mod.attr("stack")(); This function returns a list of frame information objects, and we're interested in the first one. py::object calling_frame = frames[0]; Now, we want to grab attributes filename (a string) and lineno (an integer). py::str filename_py = calling_frame.attr("filename"); py::int_ line_no_py = calling_frame.attr("lineno"); Next, cast them into C++ types. auto const filename = filename_py.cast<std::string>(); auto const line_no = line_no_py.cast<uint32_t>(); And now you can generate your desired log message. Example code: #include <chrono> #include <cstdint> #include <iomanip> #include <iostream> #include <string> #include <pybind11/pybind11.h> #include <pybind11/embed.h> namespace py = pybind11; PYBIND11_EMBEDDED_MODULE(testmodule, m) { m.def("test_log", [](py::str message) { py::module inspect_mod = py::module::import("inspect"); py::list frames = inspect_mod.attr("stack")(); py::object calling_frame = frames[0]; py::str filename_py = calling_frame.attr("filename"); py::int_ line_no_py = calling_frame.attr("lineno"); auto const filename = filename_py.cast<std::string>(); auto const line_no = line_no_py.cast<uint32_t>(); using std::chrono::system_clock; auto const timestamp = system_clock::to_time_t(system_clock::now()); std::cout << "[" << std::put_time(std::localtime(&timestamp), "%FT%T%z") << "] [" << filename << ":" << line_no << "]: " << message.cast<std::string>() << "\n"; }); } int main() { py::scoped_interpreter guard{}; try { py::exec(R"(\ import testmodule import test_script test_script.foo() testmodule.test_log("From embedded code fragment.") )"); } catch (py::error_already_set& e) { std::cerr << e.what() << "\n"; } } Python script test_script.py used by the above example: import testmodule def foo(): testmodule.test_log("On line 4 in foo().") testmodule.test_log("On line 6.") Example output: g:\example>so07.exe [2023-12-11T18:31:39+0100] [g:\example\test_script.py:6]: On line 6. [2023-12-11T18:31:39+0100] [g:\example\test_script.py:4]: On line 4 in foo(). [2023-12-11T18:31:39+0100] [<string>:7]: From embedded code fragment. Notes One improvement would be to cache the inspect.stack function in a persistent logger object, so you don't need to fetch it for every message. Another would be to rewrite it to directly use the Python C API to extract the relevant frame info without a round-trip to the inspect implementation. On some further inspection, this might be a moving target relying on implementation details that can (and do) change over time. Some avenues for research nevertheless: https://docs.python.org/3/c-api/frame.html#c.PyFrameObject https://docs.python.org/3/c-api/code.html#c.PyCodeObject Alternate Approach After reading through the code of the inspect module, I've arrived at the following approach using sys._getframe and the frame object directly: py::module sys_mod = py::module::import("sys"); py::object calling_frame = sys_mod.attr("_getframe")(0); py::str filename_py = calling_frame.attr("f_code").attr("co_filename"); py::int_ line_no_py = calling_frame.attr("f_lineno"); The rest would be the same.
Cache the result of sys_mod.attr("_getframe") to avoid fetching it every time. py::object getframe_fn = py::module::import("sys").attr("_getframe"); // ... sometime later ... py::object calling_frame = getframe_fn(0); However, if you do cache that function, you will probably have to make sure the cached object's lifetime doesn't exceed the lifetime of the interpreter. I'll leave it up to the reader to figure that out and handle it.
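For reference, here is a pure-Python sketch of the same frame lookup the C++ code performs through pybind11; it only illustrates which attributes are read (sys._getframe, f_code.co_filename, f_lineno). The helper names are made up and it is not part of the original answer:

```python
import sys

def caller_info(depth: int = 1) -> tuple[str, int]:
    # depth=1 means "the caller of this function", mirroring what
    # _getframe(0) sees from inside the bound C++ function
    frame = sys._getframe(depth)
    return frame.f_code.co_filename, frame.f_lineno

def demo():
    filename, lineno = caller_info()
    print(f"[{filename}:{lineno}]")

demo()  # prints this file's name and the line number of the call inside demo()
```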
3
3
77,639,642
2023-12-11
https://stackoverflow.com/questions/77639642/the-attempt-to-terminate-a-function-in-a-thread-using-signals-fails-in-pyqt6
I have a time-consuming thread operation, but it cannot emit progress during the processing. So, I use another thread to simulate its progress. When the time-consuming operation is completed, it emits an end signal and simultaneously signals the end of the simulation process. However, the actual function operation in the simulated thread is not controlled. In the example below, self.thread_two_stop_signal.emit(), however, handle_two() is still running. import sys import time from PyQt6.QtCore import QObject, pyqtSignal, QThread from PyQt6.QtWidgets import QApplication, QMainWindow, QProgressBar, QPushButton class ThreadOne(QObject): done_signal = pyqtSignal() finished_signal = pyqtSignal() def __init__(self): super().__init__() def run(self): for i in range(100): time.sleep(0.01) self.done_signal.emit() def finished(self): self.finished_signal.emit() class ThreadTwo(QObject): finished_signal = pyqtSignal() progress_signal = pyqtSignal(int) def __init__(self): self.if_finished = False super().__init__() def run(self): i = 0 while True: if self.if_finished or i == 99: self.progress_signal.emit(i) return i += 1 self.progress_signal.emit(i) time.sleep(0.1) def finished(self): self.finished_signal.emit() def reset(self): self.if_finished = False def stop(self): print("stop") self.if_finished = True class MainWindow(QMainWindow): thread_one_do_signal = pyqtSignal() thread_one_finished_signal = pyqtSignal() thread_two_do_signal = pyqtSignal() thread_two_reset_signal = pyqtSignal() thread_two_stop_signal = pyqtSignal() thread_two_finished_signal = pyqtSignal() def __init__(self): super().__init__() self.setWindowTitle("My PyQt6 App") self.setGeometry(100, 100, 400, 200) self.btn = QPushButton("Start", self) self.btn.setGeometry(150, 25, 50, 50) self.bar = QProgressBar(self) self.bar.setGeometry(50, 100, 300, 20) self.btn.clicked.connect(self.start) self.proxy_thread_one = QThread() self.thread_one = ThreadOne() self.thread_one.moveToThread(self.proxy_thread_one) self.thread_one_do_signal.connect(self.thread_one.run) self.thread_one_finished_signal.connect(self.thread_one.finished) self.thread_one.done_signal.connect(self.handle_one) self.thread_one.finished_signal.connect(self.proxy_thread_one.quit) self.thread_one.finished_signal.connect(self.proxy_thread_one.wait) self.proxy_thread_two = QThread() self.thread_two = ThreadTwo() self.thread_two.moveToThread(self.proxy_thread_two) self.thread_two_do_signal.connect(self.thread_two.run) self.thread_two_stop_signal.connect(self.thread_two.stop) self.thread_two_reset_signal.connect(self.thread_two.reset) self.thread_two_finished_signal.connect(self.thread_two.finished) self.thread_two.progress_signal.connect(self.handle_two) self.thread_two.finished_signal.connect(self.proxy_thread_two.quit) self.thread_two.finished_signal.connect(self.proxy_thread_two.wait) def start(self): self.proxy_thread_one.start() self.proxy_thread_two.start() self.thread_one_do_signal.emit() self.thread_two_reset_signal.emit() self.thread_two_do_signal.emit() def handle_one(self): self.thread_two_stop_signal.emit() self.thread_two_finished_signal.emit() self.thread_one_finished_signal.emit() self.bar.setValue(100) def handle_two(self, value): self.bar.setValue(value) if __name__ == "__main__": app = QApplication(sys.argv) main_window = MainWindow() main_window.show() sys.exit(app.exec()) After I disconnect the singal, it started working. 
def handle_one(self): self.thread_two.progress_signal.disconnect(self.handle_two) self.thread_two_stop_signal.emit() self.thread_two_finished_signal.emit() self.thread_one_finished_signal.emit() self.bar.setValue(100) I want to understand why my previous approach didn't work and if there is a solution for it. Additionally, I would like to inquire about my usage of threads in PyQt6. Is my approach correct? Can it exit gracefully? Are there any potential thread safety issues?
The main problem with the example is that it uses blocking loops within each thread, which will prevent immediate processing of thread-local events. When a signal is emitted across threads, an event will be posted to the event-loop of the receiving thread. But if a blocking loop is being executed within the receiving worker thread, it will freeze event-processing in exactly the same way as it would within the main GUI thread. So steps must be taken to explicitly enforce processing of such thread-local events. The simplest way to achieve this is to call QApplication.processEvents(), like so: class ThreadTwo(QObject): ... def run(self): i = 0 while True: QApplication.processEvents() if self.if_finished or i == 99: self.progress_signal.emit(i) return ... As for the more general question of whether threads are used "correctly" in the example: this is largely a matter of opinion/taste. The problem mentioned above could be completely bypassed by removing most of the custom signals and calling stop() directly instead. Strictly speaking, this would mean that modifying the if_finished attribute was no longer thread-safe - but given that only one thread ever needs to read the value of if_finished, this would make no measurable difference to the reliability of the code. In fact, it could be argued that the resulting simplification would make the code easier to understand and maintain, and thus qualitatively more reliable in that sense. To give some idea of how such code might look, try the re-written example below: import sys, random from PyQt6.QtCore import QObject, pyqtSignal, QThread from PyQt6.QtWidgets import ( QApplication, QMainWindow, QProgressBar, QPushButton, QWidget, QHBoxLayout, ) class WorkerOne(QObject): finished = pyqtSignal() def run(self): delay = random.randint(25, 50) for i in range(100): QThread.msleep(delay) self.finished.emit() class WorkerTwo(QObject): progress = pyqtSignal(int) def run(self): self._stopped = False for i in range(1, 101): QThread.msleep(50) if not self._stopped: self.progress.emit(i) else: self.progress.emit(100) break def stop(self): print('stop') self._stopped = True class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("My PyQt6 App") self.setGeometry(600, 200, 400, 50) widget = QWidget() layout = QHBoxLayout(widget) self.btn = QPushButton("Start") self.bar = QProgressBar() layout.addWidget(self.bar) layout.addWidget(self.btn) self.setCentralWidget(widget) self.btn.clicked.connect(self.start) self.thread_one = QThread() self.worker_one = WorkerOne() self.worker_one.moveToThread(self.thread_one) self.thread_one.started.connect(self.worker_one.run) self.worker_one.finished.connect(self.handle_finished) self.thread_two = QThread() self.worker_two = WorkerTwo() self.worker_two.moveToThread(self.thread_two) self.thread_two.started.connect(self.worker_two.run) self.worker_two.progress.connect(self.bar.setValue) def start(self): if not (self.thread_one.isRunning() or self.thread_two.isRunning()): self.thread_one.start() self.thread_two.start() def handle_finished(self): self.worker_two.stop() self.reset() def reset(self): self.thread_one.quit() self.thread_two.quit() self.thread_one.wait() self.thread_two.wait() def closeEvent(self, event): self.reset() if __name__ == "__main__": app = QApplication(sys.argv) main_window = MainWindow() main_window.show() sys.exit(app.exec())
3
2
77,641,087
2023-12-11
https://stackoverflow.com/questions/77641087/fft-values-computed-using-python-and-matlab-dont-match
I have a super simple test code to compute FFT in MATLAB, which I am trying to convert to Python but the computed values do not match. MATLAB Code: rect=zeros(100,1); ffrect=zeros(100,1); for j=45:55 rect(j,1)=1; end frect=fft(rect); Python Code import numpy as np import matplotlib.pyplot as plt from scipy.fft import fft, ifft, fftshift, ifftshift rect = np.zeros((100, 1)) for j in range(44, 55): rect[j] = 1 frect = fft(rect) Problem: The values of frect computed using MATLAB and Python do not match (or even look similar). The values computed by Python have only zeros for the complex component and 1s for the real part.
scipy.fft.fft computes the FFT over the last axis unless you specify otherwise with the axis parameter. In the Python version, your input array is of shape (100, 1), so you're computing 100 different 1-point FFTs. To compute a single 100-point FFT, either reshape rect to have its 100 entries in the last dimension (for example, by making it a 1D vector with shape (100,), or a 2D single-row array with shape (1, 100)), or pass axis=0 when calling scipy.fft.fft fft(rect.reshape(-1)) # or fft(rect, axis=0)
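A short sketch of the shape issue, using the same pulse as in the question:

```python
import numpy as np
from scipy.fft import fft

rect = np.zeros((100, 1))
rect[44:55] = 1                   # same pulse as in the question

wrong = fft(rect)                 # 100 separate 1-point FFTs -> each just returns its input
right_1d = fft(rect.ravel())      # one 100-point FFT, shape (100,)
right_ax = fft(rect, axis=0)      # one 100-point FFT, shape (100, 1)

print(np.allclose(wrong, rect))                 # True: nothing was actually transformed
print(np.allclose(right_1d, right_ax.ravel()))  # True: both fixes agree
```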
2
3
77,634,955
2023-12-10
https://stackoverflow.com/questions/77634955/why-is-the-simpler-loop-slower
Called with n = 10**8, the simple loop is consistently significantly slower for me than the complex one, and I don't see why: def simple(n): while n: n -= 1 def complex(n): while True: if not n: break n -= 1 Some times in seconds: simple 4.340795516967773 complex 3.6490490436553955 simple 4.374553918838501 complex 3.639145851135254 simple 4.336690425872803 complex 3.624480724334717 Python: 3.11.4 (main, Sep 9 2023, 15:09:21) [GCC 13.2.1 20230801] Here's the looping part of the bytecode as shown by dis.dis(simple): 6 >> 6 LOAD_FAST 0 (n) 8 LOAD_CONST 1 (1) 10 BINARY_OP 23 (-=) 14 STORE_FAST 0 (n) 5 16 LOAD_FAST 0 (n) 18 POP_JUMP_BACKWARD_IF_TRUE 7 (to 6) And for complex: 10 >> 4 LOAD_FAST 0 (n) 6 POP_JUMP_FORWARD_IF_TRUE 2 (to 12) 11 8 LOAD_CONST 0 (None) 10 RETURN_VALUE 12 >> 12 LOAD_FAST 0 (n) 14 LOAD_CONST 2 (1) 16 BINARY_OP 23 (-=) 20 STORE_FAST 0 (n) 9 22 JUMP_BACKWARD 10 (to 4) So it looks like the complex one does more work per iteration (two jumps instead of one). Then why is it faster? Seems to be a Python 3.11 phenomenon, see the comments. Benchmark script (Attempt This Online!): from time import time import sys def simple(n): while n: n -= 1 def complex(n): while True: if not n: break n -= 1 for f in [simple, complex] * 3: t = time() f(10**8) print(f.__name__, time() - t) print('Python:', sys.version)
I checked the source code of the bytecode (python 3.11.6) and found that in the decompiled bytecode, it seems that only JUMP_BACKWARD will execute a warmup function, which will trigger specialization in python 3.11 when executed enough times: PyObject* _Py_HOT_FUNCTION _PyEval_EvalFrameDefault(PyThreadState *tstate, _PyInterpreterFrame *frame, int throwflag) { /* ... */ TARGET(JUMP_BACKWARD) { _PyCode_Warmup(frame->f_code); JUMP_TO_INSTRUCTION(JUMP_BACKWARD_QUICK); } /* ... */ } static inline void _PyCode_Warmup(PyCodeObject *code) { if (code->co_warmup != 0) { code->co_warmup++; if (code->co_warmup == 0) { _PyCode_Quicken(code); } } } Among all bytecodes, only JUMP_BACKWARD and RESUME will call _PyCode_Warmup(). Specialization appears to speed up multiple bytecodes used, resulting in a significant increase in speed: void _PyCode_Quicken(PyCodeObject *code) { /* ... */ switch (opcode) { case EXTENDED_ARG: /* ... */ case JUMP_BACKWARD: /* ... */ case RESUME: /* ... */ case LOAD_FAST: /* ... */ case STORE_FAST: /* ... */ case LOAD_CONST: /* ... */ } /* ... */ } After executing once, the bytecode of complex changed, while simple did not: In [_]: %timeit -n 1 -r 1 complex(10 ** 8) 2.7 s Β± 0 ns per loop (mean Β± std. dev. of 1 run, 1 loop each) In [_]: dis(complex, adaptive=True) 5 0 RESUME_QUICK 0 6 2 NOP 7 4 LOAD_FAST 0 (n) 6 POP_JUMP_FORWARD_IF_TRUE 2 (to 12) 8 8 LOAD_CONST 0 (None) 10 RETURN_VALUE 9 >> 12 LOAD_FAST__LOAD_CONST 0 (n) 14 LOAD_CONST 2 (1) 16 BINARY_OP_SUBTRACT_INT 23 (-=) 20 STORE_FAST 0 (n) 6 22 JUMP_BACKWARD_QUICK 10 (to 4) In [_]: %timeit -n 1 -r 1 simple(10 ** 8) 4.78 s Β± 0 ns per loop (mean Β± std. dev. of 1 run, 1 loop each) In [_]: dis(simple, adaptive=True) 1 0 RESUME 0 2 2 LOAD_FAST 0 (n) 4 POP_JUMP_FORWARD_IF_FALSE 9 (to 24) 3 >> 6 LOAD_FAST 0 (n) 8 LOAD_CONST 1 (1) 10 BINARY_OP 23 (-=) 14 STORE_FAST 0 (n) 2 16 LOAD_FAST 0 (n) 18 POP_JUMP_BACKWARD_IF_TRUE 7 (to 6) 20 LOAD_CONST 0 (None) 22 RETURN_VALUE >> 24 LOAD_CONST 0 (None) 26 RETURN_VALUE
63
66
77,639,326
2023-12-11
https://stackoverflow.com/questions/77639326/nested-dictionary-with-class-and-instance-attributes
I store some configuration details across several class attributes and one main class which references each of them. E.g. class A: a = 1 class B: b = 2 def __init__(self): self.a_ = A() x = B() I would like to display all class (and instance) attributes in a dictionary, i.e. {'b': 2, 'a_': {'a': 1}} I understood __dict__ does not access class attributes. x.__dict__ # {'a_': <__main__.A at 0x1b888d9e530>} x.__dict__['a_'].__dict__ # {} -- why not {'a': 1}? So I tried below class methods but the result is still not right: @classmethod def to_dict(cls): return vars(cls) @classmethod def to_dict2(cls): return cls.__dict__
You can implement a Serializable class that both A and B inherit from that has a custom method to_dict() to achieve your desired output: class Serializable: def to_dict(self): d = {} for key, value in self.__class__.__dict__.items(): if not key.startswith('__') and not callable(value): d[key] = value for key, value in self.__dict__.items(): if hasattr(value, 'to_dict'): d[key] = value.to_dict() else: d[key] = value return d class A(Serializable): a = 1 class B(Serializable): b = 2 def __init__(self): self.a_ = A() x = B() print(x.to_dict()) Output for above: {'b': 2, 'a_': {'a': 1}}. Note this approach will have issues if your classes have circular references, non-serializable objects, attributes with leading underscores, or dynamically added attributes. But it works for your given contrived example!
3
2
77,637,539
2023-12-11
https://stackoverflow.com/questions/77637539/why-beautifulsoup-cant-find-this-supposed-to-be-xbrl-related-ix-tag
It turns out that the tag name should be: "ix:nonfraction" This does not work. No "ix" tag is found. from bs4 import BeautifulSoup text = """ <td style="BORDER-BOTTOM:0.75pt solid #7f7f7f;white-space:nowrap;vertical-align:bottom;text-align:right;">$ <ix:nonfraction name="ecd:AveragePrice" contextref="P01_01_2022To12_31_2022" unitref="Unit_USD" decimals="2" scale="0" format="ixt:num-dot-decimal">97.88</ix:nonfraction> </td> """ soup = BeautifulSoup(text, 'lxml') print(soup) ix_tags = soup.find_all('ix') print(ix_tags) But the following works. I don't see a difference. Why is that? Thanks a lot! html_content = """ <html> <body> <ix>Tag 1</ix> <ix>Tag 2</ix> <ix>Tag 3</ix> <p>Not an ix tag</p> </body> </html> """ soup = BeautifulSoup(html_content, 'lxml') ix_tags = soup.find_all('ix') for tag in ix_tags: print(tag.text)
The issue here arises from how BeautifulSoup handles namespaced tags like <ix:nonfraction>. With the lxml parser, namespaced tags might not be correctly parsed or recognized. In the XML you provided, ix is the namespace, and nonfraction is the local name of the element. In XML, a namespace is a method to avoid name conflicts by differentiating elements or attributes within XML documents. The ix:nonfraction tag indicates that the nonfraction element is part of the ix namespace. To correctly find namespaced tags like <ix:nonfraction> when using the lxml parser, you should use the exact tag name in your find_all call: ix_tags = soup.find_all('ix:nonfraction') If you want to find the tags without providing the namespace, then you can use the xml parser which handles namespaced tags much more gracefully. soup = BeautifulSoup(text, 'xml') ix_tags = soup.find_all('nonfraction') Sample run: from bs4 import BeautifulSoup text = """ <td style="BORDER-BOTTOM:0.75pt solid #7f7f7f;white-space:nowrap;vertical-align:bottom;text-align:right;">$ <ix:nonfraction name="ecd:AveragePrice" contextref="P01_01_2022To12_31_2022" unitref="Unit_USD" decimals="2" scale="0" format="ixt:num-dot-decimal">97.88</ix:nonfraction> </td> """ soup = BeautifulSoup(text, 'lxml') ix_tags = soup.find_all('ix:nonfraction') print(ix_tags) soup = BeautifulSoup(text, 'xml') ix_tags = soup.find_all('nonfraction') print(ix_tags) Output: [<ix:nonfraction contextref="P01_01_2022To12_31_2022" decimals="2" format="ixt:num-dot-decimal" name="ecd:AveragePrice" scale="0" unitref="Unit_USD">97.88</ix:nonfraction>] [<nonfraction contextref="P01_01_2022To12_31_2022" decimals="2" format="ixt:num-dot-decimal" name="ecd:AveragePrice" scale="0" unitref="Unit_USD">97.88</nonfraction>]
2
1
77,634,598
2023-12-10
https://stackoverflow.com/questions/77634598/how-to-get-file-type-from-complex-image-url-in-python
I want to get image file extensions from image URLs like below: from os.path import splitext image = ['ai','bmp','gif','ico','jpeg','jpg','png','ps','psd','svg','tif','tiff','webp'] def splitext_(path, extensions): for ext in extensions: if path.endswith(ext): return path[:-len(ext)], path[-len(ext):] return splitext(path) val = "https://dkstatics-public.digikala.com/digikala-products/9f4cb4e049e7a5d48c7bc22257b5031ee9a5eae8_1602179467.jpg?x-oss-process=image/resize,m_lfit,h_300,w_300/quality,q_80" #val = "https://www.needmode.com/wp-content/uploads/2023/04/%D9%84%D9%88%D8%A7%D8%B2%D9%85-%D8%AA%D8%AD%D8%B1%DB%8C%D8%B1.webp" ex_filename, ext = splitext_(val,image) ex_extension = ext.replace(".", "", 1) im_extension = ex_extension.lower() print(im_extension) The problem is this method not working on URLs like below https://dkstatics-public.digikala.com/digikala-products/9f4cb4e049e7a5d48c7bc22257b5031ee9a5eae8_1602179467.jpg?x-oss-process=image/resize,m_lfit,h_300,w_300/quality,q_80 The result is nothing for the example image URL, but it's working on normal URLs.
Edit: here is how to manage multiple extensions. For this, it's better to use the @Andrej Kesely answer for parsing the URL. Working on the URL as a plain string means you end up splitting off the host yourself, which is harder to manage (you would effectively be rewriting urlparse). from urllib.parse import urlparse val = "https://dkstatics-public.digikala.com/digikala-products/9f4cb4e049e7a5d48c7bc22257b5031ee9a5eae8_1602179467.tar.gz?x-oss-process=image/resize,m_lfit,h_300,w_300/quality,q_80" parsed_url = urlparse(val) extension = parsed_url.path.rsplit(".")[1:] print(extension) Original answer. Here is how you can do it: val = "https://dkstatics-public.digikala.com/digikala-products/9f4cb4e049e7a5d48c7bc22257b5031ee9a5eae8_1602179467.jpg?x-oss-process=image/resize,m_lfit,h_300,w_300/quality,q_80" print(val.split("?")[0].split(".")[-1]) You split first on the question mark and keep the URL part, not the parameters. Then you split on the dot and keep the last part, which is the extension. It won't work with multiple extensions like tar.gz; you would only get gz.
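Building on the urlparse idea, here is a small sketch that also copes with multi-part extensions such as .tar.gz by using pathlib's suffixes (the helper name is made up for illustration):

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

def url_extensions(url: str) -> list[str]:
    # urlparse(...).path drops the query string and fragment;
    # .suffixes splits ".tar.gz" into [".tar", ".gz"]
    return [s.lstrip(".").lower() for s in PurePosixPath(urlparse(url).path).suffixes]

val = ("https://dkstatics-public.digikala.com/digikala-products/"
       "9f4cb4e049e7a5d48c7bc22257b5031ee9a5eae8_1602179467.jpg"
       "?x-oss-process=image/resize,m_lfit,h_300,w_300/quality,q_80")
print(url_extensions(val))  # ['jpg']
```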
2
2
77,631,313
2023-12-9
https://stackoverflow.com/questions/77631313/python-root-logger-handlers-do-not-get-named-loggers-records
I have the following setup: root logger to log everywhere my programm adds a new handler after startup via an callback (for example a database) with CallbackHandler named logger in my modules do not call into the root-logger Online Python Compiler <- you need to have main.py selected to Run main.py in there main.py import logging import logging.config import MyLogger from MyApp import MyApp MyLogger.init() _logger = logging.getLogger() # root def main() : _logger.error( "main - root logger" ) app = MyApp() # setup app and attach CallbackHandler to root logger app.testLog() # call named logger - should call root logger & callback handler if __name__ == "__main__" : main() MyLogger.py import logging from logging import LogRecord import logging.config import os from typing import Callable LOG_PATH = "./logs" LOGGING_CONFIG : dict = { "version" : 1 , 'formatters': { 'simple': { 'format': '%(name)s %(message)s' }, }, "handlers" : { "ConsoleHandler" : { "class" : "logging.StreamHandler" , "formatter" : "simple" , } , } , "root" : { "handlers" : [ "ConsoleHandler" , ] , "level" : "DEBUG" , } } def init() : os.makedirs( LOG_PATH , exist_ok = True ) logging.config.dictConfig( LOGGING_CONFIG ) class CallbackHandler( logging.Handler ) : def __init__( self , level = logging.DEBUG , callback : Callable = None ) : super().__init__( level ) self._callback = callback def emit( self , record : LogRecord ) : if self._callback is not None : self._callback( record.name + " | " + record.msg ) MyApp.py import logging from MyLogger import CallbackHandler _logger = logging.getLogger( __name__ ) class MyApp : def __init__( self ) : rootLogger = logging.getLogger() rootLogger.addHandler( CallbackHandler( callback = self.myCallback ) ) def myCallback( self , msg : str ) : print( "CALLBACK: " + msg ) def testLog( self ) : _logger.error( "MyApp.testLog() - named logger" ) The docs say, named loggers do not inherit the parents handlers. But they propagate their log messages to the parent/root logger - which has handlers attached. However they do not get called with a named logger. The Problem: CallbackHandler.emit() is not called (if I remove the __name__ in MyApp.py: logging.getLogger(), the root logger gets referenced and the Callback-Handler is called) How do I : initialize the root logger later in my program attach a custom Handler to the root logger use named loggers in my program propagate the the logs from named loggers to the root logger such that the logs use the custom root-logger-handler
How to fix: add the following line to your LOGGING_CONFIG dict: "disable_existing_loggers" : False, The problem is that the child logger is created before the logging gets its configuration and the default behaviour when configuring the logging is to disable existing loggers. Link to the docs (it's the last item in that section).
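Applied to the LOGGING_CONFIG from the question, the dict would then look like this (only the added key is new):

```python
LOGGING_CONFIG: dict = {
    "version": 1,
    "disable_existing_loggers": False,  # keep module-level loggers created before dictConfig() runs
    "formatters": {
        "simple": {"format": "%(name)s %(message)s"},
    },
    "handlers": {
        "ConsoleHandler": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
        },
    },
    "root": {
        "handlers": ["ConsoleHandler"],
        "level": "DEBUG",
    },
}
```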
2
1
77,632,067
2023-12-9
https://stackoverflow.com/questions/77632067/generate-html-page-with-specific-tags-from-another-page-using-beautifulsoup
I'm exploring BeautifulSoup and aiming to retain only specific tags in an HTML file to create a new one. I can successfully achieve this with the following program. However, I believe there might be a more suitable and natural approach without the need to manually append the strings. from bs4 import BeautifulSoup #soup = BeautifulSoup(page.content, 'html.parser') with open('P:/Test.html', 'r') as f: contents = f.read() soup= BeautifulSoup(contents, 'html.parser') NewHTML = "<html><body>" NewHTML+="\n"+str(soup.find('title')) NewHTML+="\n"+str(soup.find('p', attrs={'class': 'm-b-0'})) NewHTML+="\n"+str(soup.find('div', attrs={'id' :'right-col'})) NewHTML+= "</body></html>" with open("output1.html", "w") as file: file.write(NewHTML)
You can have a list of desired tags, iterate through them, and use Beautiful Soup's append method to selectively include corresponding elements in the new HTML structure. from bs4 import BeautifulSoup with open('Test.html', 'r') as f: contents = f.read() soup = BeautifulSoup(contents, 'html.parser') new_html = BeautifulSoup("<html><body></body></html>", 'html.parser') tags_to_keep = ['title', {'p': {'class': 'm-b-0'}}, {'div': {'id': 'right-col'}}] # Iterate through the tags to keep and append them to the new HTML for tag in tags_to_keep: # If the tag is a string, find it in the original HTML # and append it to the new HTML if isinstance(tag, str): new_html.body.append(soup.find(tag)) # If the tag is a dictionary, extract tag name and attributes, # then find them in the original HTML and append them to the new HTML elif isinstance(tag, dict): tag_name = list(tag.keys())[0] tag_attrs = tag[tag_name] new_html.body.append(soup.find(tag_name, attrs=tag_attrs)) with open("output1.html", "w") as file: file.write(str(new_html)) Assuming you have an HTML document like the one below (which would have been helpful to include for reproducibility's sake): <!DOCTYPE html> <head> <title>Test Page</title> </head> <body> <p class="m-b-0">Paragraph with class 'm-b-0'.</p> <div id="right-col"> <p>Paragraph inside the 'right-col' div.</p> </div> <p>Paragraph outside the targeted tags.</p> </body> </html> the resulting output1.html will contain the following content: <html> <body> <title>Test Page</title> <p class="m-b-0">Paragraph with class 'm-b-0'.</p> <div id="right-col"> <p>Paragraph inside the 'right-col' div.</p> </div> </body> </html>
4
7
77,630,264
2023-12-9
https://stackoverflow.com/questions/77630264/could-not-install-packages-due-to-an-oserror-while-trying-to-download-python-p
I've been using Python 3.12 for the past month without any problem, but now out of nowhere I'm unable to download any packages at all. pip gives a long list of warnings and then an error; the same certificate warning and retry message are repeated for every retry, with the retry counter counting down from 4 to 0: > WARNING: Certificate did not match expected hostname: files.pythonhosted.org. Certificate: {'subject': ((('commonName', 'r.shared-319-default.ssl.fastly.net'),),), 'issuer': ((('countryName', 'BE'),), (('organizationName', 'GlobalSign nv-sa'),), (('commonName', 'GlobalSign Atlas R3 DV TLS CA 2023 Q1'),)), 'version': 3, 'serialNumber': '01D06257899F0DD2481ECE65A0533F7E', 'notBefore': 'Apr 10 04:55:11 2023 GMT', 'notAfter': 'May 11 04:55:10 2024 GMT', 'subjectAltName': (('DNS', 'r.shared-319-default.ssl.fastly.net'),), 'OCSP': ('http://ocsp.globalsign.com/ca/gsatlasr3dvtlsca2023q1',), 'caIssuers': ('http://secure.globalsign.com/cacert/gsatlasr3dvtlsca2023q1.crt',), 'crlDistributionPoints': ('http://crl.globalsign.com/ca/gsatlasr3dvtlsca2023q1.crl',)} > WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(CertificateError("hostname 'files.pythonhosted.org' doesn't match 'r.shared-319-default.ssl.fastly.net'"))': /packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl.metadata > ERROR: Could not install packages due to an OSError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl.metadata (Caused by SSLError(CertificateError("hostname 'files.pythonhosted.org' doesn't match 'r.shared-319-default.ssl.fastly.net'"))) I looked and tried every solution I found on the internet, but none worked for me. And I couldn't access https://files.pythonhosted.org/ from the browser either. It shows a "Your connection is not private" error and says the connection is not secure. But it's only my laptop that is showing this error, so I think it's not a network problem?
I encountered a similar issue and found that an entry in the hosts file for pythonhosted.org was causing the 'Misdirected Request' error. Here's how I resolved it: Open the Hosts File: On Windows: Run Notepad as an administrator, then open C:\Windows\System32\drivers\etc\hosts. On macOS/Linux: Open Terminal and use sudo nano /etc/hosts or a similar command to edit the file with administrative privileges. Modify the Hosts File: Search for entries with pythonhosted.org. Either delete these lines or comment them out by adding # at the beginning of each line. Save and Restart: In Windows Notepad: Save the changes by clicking File -> Save. In macOS/Linux Nano Editor: Press Ctrl + O to write the changes, then Ctrl + X to exit. Restart your computer. Test Your Access: Try accessing https://files.pythonhosted.org/ again.
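If you want to check for such an entry without opening the file manually, here is a small hypothetical helper; the path shown is the Windows default, so adjust it for macOS/Linux:

```python
from pathlib import Path

hosts = Path(r"C:\Windows\System32\drivers\etc\hosts")  # use Path("/etc/hosts") on macOS/Linux
for line in hosts.read_text().splitlines():
    stripped = line.strip()
    # any uncommented line that redirects pythonhosted.org is suspect
    if "pythonhosted.org" in stripped and not stripped.startswith("#"):
        print("Suspicious hosts entry:", stripped)
```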
4
8
77,629,866
2023-12-9
https://stackoverflow.com/questions/77629866/playwright-page-pdf-only-gets-one-page
I have been trying to convert html to pdf. I have tried a lot of tools but none of them work. Now I am using playwright, it is converting the Page to PDF but it only gets the first screen view. From that page the content from right is trimmed. import os import time import pathlib from playwright.sync_api import sync_playwright filePath = os.path.abspath("Lab6.html") fileUrl = pathlib.Path(filePath).as_uri() fileUrl = "file://C:/Users/PMYLS/Desktop/Code/ScribdPDF/Lab6.html" with sync_playwright() as p: browser = p.chromium.launch() page = browser.new_page() page.goto(fileUrl) for i in range(5): #(The scroll is not working) page.mouse.wheel(0, 15000) time.sleep(2) page.wait_for_load_state('networkidle') page.emulate_media(media="screen") page.pdf(path="sales_report.pdf") browser.close() Html View PDF file after running script I have tried almost every tool available on the internet. I also used selenium but same results. I thought it was due to page not loaded properly, I added wait and manually scrolled the whole page to load the content. All giving same results. The html I am converting https://drive.google.com/file/d/16jEq52iXtAMCg2FDt3VbQN0dCQmdTip_/view?usp=sharing
Here's a somewhat dirty solution that worked on my end. The sleep and scroll isn't great and can probably be improved, but I'll leave this as a starter and see if I have time to tighten it up later (feel free to do the same). from playwright.sync_api import sync_playwright # 1.37.0 from time import sleep with open("index.html") as f: html = f.read() with sync_playwright() as p: browser = p.chromium.launch() page = browser.new_page() page.set_content(html) # focus inside the annoying border to enable scroll page.click(".document_container") for i in range(10): page.mouse.wheel(0, 2500) sleep(0.5) # strip out the annoying border that messes up PDF generation page.evaluate("""() => { const el = document.querySelector(".document_scroller"); el.parentElement.appendChild(el.querySelector(".document_container")); el.remove(); }""") page.emulate_media(media="screen") page.pdf(path="sales_report.pdf") browser.close() Two tricks: Clicking inside the border area enables scrolling, which appears necessary to get everything to load. Ripping out the annoying border allows the PDF generation to capture all pages. When the border is present, there's no scroll on the main body, only on the interior container, which the PDF capture doesn't seem to understand.
3
3
77,625,508
2023-12-8
https://stackoverflow.com/questions/77625508/how-to-activate-verbosity-in-langchain
I'm using Langchain 0.0.345. I cannot get a verbose output of what's going on under the hood using the LCEL approach to chain building. I have this code: from langchain.chat_models import ChatOpenAI from langchain.prompts import ChatPromptTemplate from langchain.schema.output_parser import StrOutputParser from langchain.globals import set_verbose set_verbose(True) prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}") model = ChatOpenAI() output_parser = StrOutputParser() chain = prompt | model | output_parser chain.invoke({"topic": "ice cream"}) According to the documentation using set_verbose is the way to have a verbose output showing intermediate steps, prompt builds etc. But the output of this script is just a string without any intermediate steps. Actually, the module langchain.globals does not appear even mentioned in the API documentation. I have also tried setting the verbose=True parameter in the model creation, but it also does not work. This used to work with the former approach building with classes and so. How is the recommended and current approach to have the output logged so you can understand what's going on? Thanks!
You can add a callback handler to the invoke method's configuration. Like this: from langchain.callbacks.tracers import ConsoleCallbackHandler # ...your code chain.invoke({"topic": "ice cream"}, config={'callbacks': [ConsoleCallbackHandler()]}) Code with change incorporated: from langchain.chat_models import ChatOpenAI from langchain.prompts import ChatPromptTemplate from langchain.schema.output_parser import StrOutputParser from langchain.callbacks.tracers import ConsoleCallbackHandler prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}") model = ChatOpenAI() output_parser = StrOutputParser() chain = prompt | model | output_parser chain.invoke({"topic": "ice cream"}, config={'callbacks': [ConsoleCallbackHandler()]}) The output isn't the same as the original "verbose mode", but this is the closest alternative. Alternatives For more targeted output or less "verbosity" Try attaching a callback handler to specific objects. For example: ChatOpenAI().with_config({'callbacks': [ConsoleCallbackHandler()]}) You can learn more about customizing callbacks here For high verbosity Global debug still works with LCEL: from langchain.globals import set_debug set_debug(True) # your code For a GUI you can use weights and biases or langsmith
10
13
77,629,234
2023-12-8
https://stackoverflow.com/questions/77629234/how-to-get-second-pandas-dataframe-showing-net-trade-based-on-first-pandas-dataf
I have a pandas dataframe df1 as shown below: It shows exports volume from A to B, B to A and A to C in three rows. Trade is possible in both directions. df1.to_dict() returns {'Country1': {0: 'A', 1: 'B', 2: 'A'}, 'Country2': {0: 'B', 1: 'A', 2: 'C'}, 'Value': {0: 3, 1: 5, 2: 3}} I want a second dataframe df2 based on df1 which shows the net trade volume between countries. For example, A to C has a net trade volume of 3 units, and B to A has a net trade volume of 2 units (5-3). This needs to be reflected in the second dataframe as shown below: How can I automate creating df2 based on df1? I have large number of countries, so I want to automate this process.
You could swap the names, merge and filter: val = (df[['Country1', 'Country2']] .merge(df.rename(columns={'Country1': 'Country2', 'Country2': 'Country1'}), how='left')['Value'] .rsub(df['Value'], fill_value=0) ) out = (df.assign(**{'Net Value': val}) .query('`Net Value` >= 0') .drop(columns='Value') ) Output: Country1 Country2 Net Value 1 B A 2.0 2 A C 3.0
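If the swap-and-merge version feels terse, here is an alternative sketch of the same idea: give each row a sign relative to the alphabetically sorted pair, sum per pair, then put the net exporter first. Column names follow the question; this is just one possible approach, not the only one:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Country1': ['A', 'B', 'A'],
                    'Country2': ['B', 'A', 'C'],
                    'Value': [3, 5, 3]})

# canonical (alphabetically sorted) pair per row, plus a +1/-1 sign for the direction
lo = df1[['Country1', 'Country2']].apply(min, axis=1)
hi = df1[['Country1', 'Country2']].apply(max, axis=1)
sign = np.where(df1['Country1'] == lo, 1, -1)

net = (pd.DataFrame({'lo': lo, 'hi': hi, 'Net': df1['Value'] * sign})
         .groupby(['lo', 'hi'], as_index=False)['Net'].sum())

# put the net exporter in Country1 and report the absolute net volume
out = pd.DataFrame({
    'Country1': np.where(net['Net'] >= 0, net['lo'], net['hi']),
    'Country2': np.where(net['Net'] >= 0, net['hi'], net['lo']),
    'Net Value': net['Net'].abs(),
})
print(out)
#   Country1 Country2  Net Value
# 0        B        A          2
# 1        A        C          3
```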
2
2
77,628,451
2023-12-8
https://stackoverflow.com/questions/77628451/cannot-unify-float64-and-arrayfloat64-1d-c-for-mv2-3-defined-at-c
I tried to solve this problem by changing a lot of variables but nothing it's working. Here is my code: import numpy as np import matplotlib.pyplot as plt import math as mt from numba import njit N=1000 J=1 h=0.5 plt.rcParams['figure.dpi']=100 plt.xlabel('T', fontsize=14) ns=100000000 s=np.empty([N,0]) for i in range(N): s[i]=np.random.choice([-1,1]) E=0 M=0 for i in range(-1,N-1): E=E-J*(s[i]*s[i+1]+s[i]*s[i-1]) M=M+s[i] Energy=np.empty([0,0]) C=np.empty([0,0]) Magne=np.empty([0,0]) chi=np.empty([0,0]) @njit def average(k , E, M, ns, x, Energy, C, Magne, chi, s): Ev=0. Ev2=0. Mv=0. Mv2=0. for z in range(1,ns+1): i=np.random.randint(-1, high=N-1) dE=2*J*(s[i]*s[i+1]+s[i]*s[i-1]) dM=2*h*s[i] pace=1./(1+mt.exp(k*(dE+dM))) #pace=min(1,mt.exp(-k*dE)) if np.random.random()<pace: s[i]=-s[i] E=E+dE M=M-dM/h if z>x: Ev=Ev+(E-h*M) Ev2=Ev2+(E-h*M)**2 Mv=Mv +M Mv2=Mv2 +M**2 Ev=Ev/(ns-x) Ev2=Ev2/(ns-x) varE=(Ev2-Ev**2) Energy=np.append(Energy,Ev) C=np.append(C,varE*(k**2)) Mv=Mv/(ns-x) Mv2=Mv2/(ns-x) varM=(Mv2-Mv**2) Magne=np.append(Magne,Mv) chi=np.append(chi,varM*k) return E, M, Energy, C, s, Magne, chi b=np.arange(0.3,1.,0.01) x=0.75*ns for k in b: E, M, Energy, C, s, Magne, chi =average(k,E,M,ns, x,Energy,C, Magne, chi, s) plt.plot(b,Energy, '.', color='r',linestyle='--') plt.plot(b,C, '.', color='b',linestyle='--') plt.title('Energy and heat capacity') plt.legend(['E','C']) plt.show() plt.rcParams['figure.dpi']=100 plt.xlabel(r'$\beta$', fontsize=14) plt.plot(b,Magne, '.',linestyle='--', color='r') plt.plot(b,chi, '.', color='b',linestyle='--') plt.title('Magnetization and susceptibility') plt.legend(['M',r'$\chi$']) plt.show() I keep getting the error: Cannot unify float64 and array(float64, 1d, C) for 'Mv2.3', defined at c:\users\usuario\documents\python scripts\ex13hw7.py (65) File "ex13hw7.py", line 65: def average(k , E, M, ns, x, Energy, C, Magne, chi, s): <source elided> if z>x: Ev=Ev+(E-h*M) ^ During: typing of assignment at c:\users\usuario\documents\python scripts\ex13hw7.py (65) Does anyone know what is the issue? I tried changing the variable's name and including/deleting parameters from the function.
Here is fixed version of the code that compiles with numba: import math as mt import matplotlib.pyplot as plt import numpy as np from numba import njit N = 1000 J = 1 h = 0.5 plt.rcParams["figure.dpi"] = 100 plt.xlabel("T", fontsize=14) ns = 100000000 s = np.empty(N) # <-- don't use np.empty([N, 0]) for i in range(N): s[i] = np.random.choice([-1, 1]) E = 0 M = 0 for i in range(-1, N - 1): E = E - J * (s[i] * s[i + 1] + s[i] * s[i - 1]) M = M + s[i] Energy = np.array([]) # <-- don't use np.empty([0, 0]) C = np.array([]) Magne = np.array([]) chi = np.array([]) @njit def average(k, E, M, ns, x, Energy, C, Magne, chi, s): Ev = 0.0 Ev2 = 0.0 Mv = 0.0 Mv2 = 0.0 for z in range(1, ns + 1): i = np.random.randint(-1, high=N - 1) dE = 2 * J * (s[i] * s[i + 1] + s[i] * s[i - 1]) dM = 2 * h * s[i] pace = 1.0 / (1 + mt.e ** (k * (dE + dM))) # <-- use math.e ** x if np.random.random() < pace: s[i] = -s[i] E = E + dE M = M - dM / h if z > x: Ev = Ev + (E - h * M) Ev2 = Ev2 + (E - h * M) ** 2 Mv = Mv + M Mv2 = Mv2 + M**2 Ev = Ev / (ns - x) Ev2 = Ev2 / (ns - x) varE = Ev2 - Ev**2 Energy = np.append(Energy, Ev) C = np.append(C, varE * (k**2)) Mv = Mv / (ns - x) Mv2 = Mv2 / (ns - x) varM = Mv2 - Mv**2 Magne = np.append(Magne, Mv) chi = np.append(chi, varM * k) return E, M, Energy, C, s, Magne, chi b = np.arange(0.3, 1.0, 0.1) x = 0.75 * ns for k in b: E, M, Energy, C, s, Magne, chi = average(k, E, M, ns, x, Energy, C, Magne, chi, s) print("Finished!") plt.plot(b, Energy, ".", color="r", linestyle="--") plt.plot(b, C, ".", color="b", linestyle="--") plt.title("Energy and heat capacity") plt.legend(["E", "C"]) plt.show() plt.rcParams["figure.dpi"] = 100 plt.xlabel(r"$\beta$", fontsize=14) plt.plot(b, Magne, ".", linestyle="--", color="r") plt.plot(b, chi, ".", color="b", linestyle="--") plt.title("Magnetization and susceptibility") plt.legend(["M", r"$\chi$"]) plt.show() Shows these two graphs:
2
2
77,626,069
2023-12-8
https://stackoverflow.com/questions/77626069/how-to-query-a-jsonb-column-that-has-deeply-nested-objects-in-python-fastapi-sq
In a PostgreSQL table "private_notion", I have a JSONB column "record_map" that may or may not contain nested objects, E.g. { "blocks": { "7a9abf0d-a066-4466-a565-4e6d7a960a37": { "name": "block1", "value": 1, "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a37": { "name": "block2", "value": 2, "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a37": { "name": "block3", "value": 3 } } }, "7a9abf0d-a066-4466-a565-4e6d7a960a38": { "name": "block4", "value": 4, "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a39": { "name": "block5", "value": 5, "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a40": { "name": "block6", "value": 6 } } } } }, } } } } To retrieve data, We don't know which block has the data we want, we only have the key. Let's assume we are looking for the object with this key "7a9abf0d-a066-4466-a565-4e6d7a960a40", but we don't know that it is located in child block6 of parent block4 and block5. Another request might look for the parent block4 and so on, and I must find the block by it's key. The entire code looks like this; async def get_private_notion_page( site_uuid: str, page_id: str, db_session: AsyncSession ) -> PrivateNotionPage: page_id_path = f"{page_id}" # page_id looks like this 7a9abf0d-a066-4466-a565-4e6d7a960a37 path = f"$.** ? (@.{page_id_path})" stmt = text( f""" SELECT jsonb_path_query(record_map, {path}) FROM private_notion WHERE site_id = {site_uuid} """ ) result = await db_session.execute(stmt) result = result.scalars().first() if result: return result else: raise PrivateNotionSiteWasNotFound So I came up with the following query statements which use sqlalchemy "text" method to accept raw SQL query, but jsonb_path_query_array and jsonb_path_query throw similar errors; syntax error at or near "$". page_id_path = f"{page_id}" path = f"$.** ? (@.{page_id_path})" stmt = text( f""" SELECT jsonb_path_query(record_map, {path}) FROM private_notion WHERE site_id = {site_uuid} """ ) Error: sqlalchemy.exc.ProgrammingError: (sqlalchemy.dialects.postgresql.asyncpg.ProgrammingError) <class 'asyncpg.exceptions.PostgresSyntaxError'>: syntax error at or near "$" [SQL: SELECT jsonb_path_query(record_map, $.** ? (@.7a9abf0d-a066-4466-a565-4e6d7a960a37)) FROM private_notion WHERE site_id = 26f52d8e-a380-46ab-9131-e6f7f62c528f ] I would later learn that "The $** operator is not valid in a SQL query. Instead, you can use the jsonb_path_query_array function to search recursively through all levels of the JSONB object." Apparently I got the same error after refactoring the code. page_id_path = f"{page_id}" path = f"$[*] ? (@ like_regex {page_id_path})" stmt = text( f""" SELECT jsonb_path_query_array(record_map -> 'block', {path}) FROM private_notion WHERE site_id = {site_uuid} """ ) Error: sqlalchemy.exc.ProgrammingError: (sqlalchemy.dialects.postgresql.asyncpg.ProgrammingError) <class 'asyncpg.exceptions.PostgresSyntaxError'>: syntax error at or near "$" [SQL: SELECT jsonb_path_query_array(record_map -> 'block', $[*] ? (@ like_regex 7a9abf0d-a066-4466-a565-4e6d7a960a37)) FROM private_notion WHERE site_id = 26f52d8e-a380-46ab-9131-e6f7f62c528f ] My question is two-pronged, what is the error all about? And is there a better way to retrieve a nested object by key in a JSONB column? Thank you for your time.
This extracts entire objects at any level that have your target uuid-based key in them: demo at db<>fiddle SELECT jsonb_path_query(record_map, 'strict $.**?(@.keyvalue().key==$target_id)', jsonb_build_object('target_id', '7a9abf0d-a066-4466-a565-4e6d7a960a37')) FROM private_notion WHERE site_id = '45bf37be-ca0a-45eb-838b-015c7a89d47b'; jsonb_path_query { "7a9abf0d-a066-4466-a565-4e6d7a960a37": { "name": "block1", "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a37": { "name": "block2", "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a37": { "name": "block3", "value": 3 } }, "value": 2 }, "7a9abf0d-a066-4466-a565-4e6d7a960a38": { "name": "block4", "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a39": { "name": "block5", "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a40": { "name": "block6", "value": 6 } }, "value": 5 } }, "value": 4 } }, "value": 1 }} { "7a9abf0d-a066-4466-a565-4e6d7a960a37": { "name": "block2", "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a37": { "name": "block3", "value": 3 } }, "value": 2 }, "7a9abf0d-a066-4466-a565-4e6d7a960a38": { "name": "block4", "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a39": { "name": "block5", "child": { "7a9abf0d-a066-4466-a565-4e6d7a960a40": { "name": "block6", "value": 6 } }, "value": 5 } }, "value": 4 }} { "7a9abf0d-a066-4466-a565-4e6d7a960a37": { "name": "block3", "value": 3 }} Note the object duplication through unnesting: they appear both alone as well as inside each matched parent structure. JSONPath expression needs to be single-quoted. This gets rid of the syntax error: ERROR: syntax error at or near "$" LINE 2: $.**.7a9abf0d-a066-4466-a565-4e6d7a9... ^ Your uuid-based key inside the JSONPath needs to be double-quoted. This will get rid of a problem inside the expression that would soon follow: ERROR: trailing junk after numeric literal at or near ".7a" of jsonpath input LINE 2: '$.**.7a9abf0d-a066-4466-a565-4e6d7a... ^ When using .** accessor, default to using strict mode. You can use the SQLAlchemy JSONPath type to pass the expression.
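On the SQLAlchemy side, the original syntax error came from interpolating the JSONPath and the UUIDs directly into the SQL string. Here is a hedged sketch of how the same query might be issued with the path single-quoted and the values passed as bound parameters; the function and variable names follow the question and this is untested against the real schema:

```python
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

async def find_block(db_session: AsyncSession, site_uuid: str, page_id: str):
    stmt = text(
        """
        SELECT jsonb_path_query(
                   record_map,
                   'strict $.** ? (@.keyvalue().key == $target_id)',
                   jsonb_build_object('target_id', CAST(:page_id AS text))
               )
        FROM private_notion
        WHERE site_id = CAST(:site_id AS uuid)
        """
    )
    result = await db_session.execute(stmt, {"page_id": page_id, "site_id": site_uuid})
    return result.scalars().first()
```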
2
1
77,628,661
2023-12-8
https://stackoverflow.com/questions/77628661/how-to-print-out-another-column-after-a-value-counts-in-dataframe
I am learning pandas and python. I have this dataframe: dfsupport = pd.DataFrame({'Date': ['8/12/2020','8/12/2020','13/1/2020','24/5/2020','31/10/2020','11/7/2020','11/7/2020','4/4/2020','1/2/2020'], 'Category': ['Table','Chair','Cushion','Table','Chair','Mats','Mats','Large','Large'], 'Sales': ['1 table','3chairs','8 cushions','3Tables','12 Chairs','12Mats','4Mats','13 Chairs and 2 Tables', '3 mats, 2 cushions 4@chairs'], 'Paid': ['Yes','Yes','Yes','Yes','No','Yes','Yes','No','Yes'], 'Amount': ['93.78','$51.99','44.99','38.24','Β£29.99','29 21 only','18','312.8','63.77' ] }) which produces: Date Category Sales Paid Amount 0 8/12/2020 Table 1 table Yes 93.78 1 8/12/2020 Chair 3chairs Yes 51.99 2 13/1/2020 Cushion 8 cushions Yes 44.99 3 24/5/2020 Table 3Tables Yes 38.24 4 31/10/2020 Chair 12 Chairs No 29.99 5 11/7/2020 Mats 12Mats Yes 29.21 6 11/7/2020 Mats 4Mats Yes 18 7 4/4/2020 Large 13 Chairs and 2 Tables No 312.8 8 1/2/2020 Large 3 mats, 2 cushions 4@chairs Yes 63.77 I want to find the date with the most sale, so I ran: print("######\n",dfsupport['Date'].value_counts().max()) which gives: 2 What I would now like to do is to unpack that 2 and find out which dates that was for and also which "Sales" occurred in each of those instances. I'm stuck and don't know how to print out those columns. Would appreciate some guidance.
Another possible solution, which uses pandas.DataFrame.groupby, pandas.DataFrame.transform and boolean indexing: s = dfsupport.groupby('Date')['Date'].transform(len) dfsupport[s.eq(s.max())] Output: Date Category Sales Paid Amount 0 8/12/2020 Table 1 table Yes 93.78 1 8/12/2020 Chair 3chairs Yes $51.99 5 11/7/2020 Mats 12Mats Yes 29 21 only 6 11/7/2020 Mats 4Mats Yes 18
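The same idea can also be spelled out with value_counts itself, which may be closer to what you already had (assuming dfsupport from the question):

```python
counts = dfsupport['Date'].value_counts()
top_dates = counts[counts == counts.max()].index   # every date that reaches the max count
print(dfsupport.loc[dfsupport['Date'].isin(top_dates), ['Date', 'Sales']])
```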
2
2
77,628,455
2023-12-8
https://stackoverflow.com/questions/77628455/mypy-unreachable-on-guard-clause
I have a problem where when I try to check if the given value's type is not what I expect I'll log it and raise an error. However, mypy is complaining. What I'm doing wrong? Simplified example: from __future__ import annotations from typing import Union from logging import getLogger class MyClass: def __init__(self, value: Union[float, int]) -> None: self.logger = getLogger("dummy") self.value = value def __add__(self, other: Union[MyClass, float, int]) -> MyClass: if not isinstance(other, (MyClass, float, int)): self.logger.error("Other must be either MyClass, float or int") # error: Statement is unreachable [unreachable] raise NotImplementedError return self.add(other) def add(self, other: Union[MyClass, float, int]) -> MyClass: if isinstance(other, MyClass): return MyClass(self.value + other.value) return MyClass(self.value + other) Please notice it does not complain when I run it on mypy-play.net but locally it raises: main.py:13: error: Statement is unreachable [unreachable] Found 1 error in 1 file (checked 1 source file)
Mypy is complaining because, given the parameter annotation Union[MyClass, float, int] and your condition if not isinstance(other, (MyClass, float, int)):, the body of that if can never be reached as long as callers pass the declared types. Mypy assumes that everybody using your code sends correct argument types (that is what the annotations promise). You can silence the warning either with a # type: ignore[unreachable] comment on that line or by disabling the warn-unreachable option in your local mypy configuration. The mypy playground does not enable that option by default, which is why it does not complain there; you would have to activate it explicitly.
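A minimal sketch of both options, meant to be read inside the MyClass definition from the question (it assumes mypy runs with warn-unreachable enabled):

# Option 1: keep the runtime guard and silence mypy on that one statement.
def __add__(self, other: Union[MyClass, float, int]) -> MyClass:
    if not isinstance(other, (MyClass, float, int)):
        self.logger.error("Other must be either MyClass, float or int")  # type: ignore[unreachable]
        raise NotImplementedError
    return self.add(other)

# Option 2 (alternative to Option 1, not in addition to it): widen the annotation
# to object, so the branch is genuinely reachable and isinstance narrows the type.
def __add__(self, other: object) -> MyClass:
    if not isinstance(other, (MyClass, float, int)):
        self.logger.error("Other must be either MyClass, float or int")
        raise NotImplementedError
    return self.add(other)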
2
2
77,623,684
2023-12-8
https://stackoverflow.com/questions/77623684/strange-warning-for-validationerror-in-pydantic-v2
I updated my FastAPI to Pydantic 2.5.2 and I suddenly get the following warning in the logs. /usr/local/lib/python3.12/site-packages/pydantic/_migration.py:283: UserWarning: `pydantic.error_wrappers:ValidationError` has been moved to `pydantic:ValidationError`. warnings.warn(f'`{import_path}` has been moved to `{new_location}`.') Is that problematic and do you know how to fix it?
Just use from pydantic import ValidationError instead of from pydantic.error_wrappers import ValidationError. For now your code works correctly and it's just a warning, but in future versions of Pydantic it will become an import error. If you don't import ValidationError in your own code, the import is probably done by one of the libraries you use; in that case you can simply ignore the warning.
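A tiny sketch of the updated import in use (the model here is illustrative):

from pydantic import BaseModel, ValidationError

class Item(BaseModel):
    name: str
    price: float

try:
    Item(name="widget", price="not a number")
except ValidationError as exc:
    print(exc.errors())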
2
3
77,621,060
2023-12-7
https://stackoverflow.com/questions/77621060/add-annotations-to-plotly-candlestick-chart
I have been using plotly to create charts using OHLC data in a dataframe. The chart contains candlesticks on the top and volume bars at the bottom: I want to annotate the candlestick chart (not the volume chart) but cannot work out how to do it. This code works to create the charts: # Plot chart # Create subplots and mention plot grid size fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.03, row_width=[0.2, 0.7]) # Plot OHLC on 1st row fig.add_trace(go.Candlestick(x=df.index, open=df["Open"], high=df["High"], low=df["Low"], close=df["Close"], name="OHLC"), row=1, col=1 ) # Bar trace for volumes on 2nd row without legend fig.add_trace(go.Bar(x=df.index, y=df['Volume'], showlegend=False), row=2, col=1) fig.update_layout(xaxis_rangeslider_visible=False, title_text=f'{ticker}') fig.write_html(fr"E:\Documents\PycharmProjects\xxxxxxxx.html") And I tried adding the following after the candlestick add_trace but it doesn't work: fig.add_annotation(x=i, y=df["Close"], text="Test text", showarrow=True, arrowhead=1) What am I doing wrong?
The issue is that you pass the whole df["Close"] series where you should pass only the value at index i, that is df.loc[i, "Close"]. This should work: fig.add_annotation(x=i, y=df.loc[i, "Close"], text="Test text", showarrow=True, arrowhead=1)
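For completeness, a sketch of how that call could be placed in a loop and pinned to the candlestick subplot rather than the volume one (add_annotation accepts row/col for figures built with make_subplots; annotating every 20th bar is just an example choice):

for i in df.index[::20]:
    fig.add_annotation(x=i, y=df.loc[i, "Close"],
                       text="Test text",
                       showarrow=True, arrowhead=1,
                       row=1, col=1)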
2
2
77,621,095
2023-12-7
https://stackoverflow.com/questions/77621095/how-to-rename-row-string-based-on-another-row-string
Imagine I have a dataframe like this: import pandas as pd df = pd.DataFrame({"a":["","DATE","01-01-2012"], "b":["","ID",18], "c":["CLASS A","GOLF",3], "d":["","HOCKEY",4], "e":["","BASEBALL",2], "f":["CLASS B","GOLF",15], "g":["","HOCKEY",2], "h":["","BASEBALL",3] }) Out[33]: a b c d e f g h 0 CLASS A CLASS B 1 DATE ID GOLF HOCKEY BASEBALL GOLF HOCKEY BASEBALL 2 01-01-2012 18 3 4 2 15 2 3 I would like to add the strings in the first row to the names of those sports on the row below, but only before the beginning of the next "Class". Does anyone know how can I do that? So the result should be like this: a b c ... f g h 0 CLASS A ... CLASS B 1 DATE ID CLASS A GOLF ... CLASS B GOLF CLASS B HOCKEY CLASS B BASEBALL 2 01-01-2012 18 3 ... 15 2 3 Later I will make the row 1 to be my header names, but this part I know how to do. I already tried to use df.iterrows but I got confused with the workflow.
Using replace+ffill to forward the CLASS, and a boolean mask to change the strings by boolean indexing: s = df.loc[0].replace('', np.nan).ffill() m = s.notna() df.loc[1, m] = s[m]+' '+df.loc[1, m] Output: a b c d e f g h 0 CLASS A CLASS B 1 DATE ID CLASS A GOLF CLASS A HOCKEY CLASS A BASEBALL CLASS B GOLF CLASS B HOCKEY CLASS B BASEBALL 2 01-01-2012 18 3 4 2 15 2 3 Side note: it might be better to have those two rows as a MultiIndex rather than rows with strings. This would enable you to benefit from vectorized operations on your numeric data.
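A sketch of the MultiIndex alternative mentioned in the side note, rebuilding the frame from the question so the snippet runs on its own:

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": ["", "DATE", "01-01-2012"], "b": ["", "ID", 18],
                   "c": ["CLASS A", "GOLF", 3], "d": ["", "HOCKEY", 4],
                   "e": ["", "BASEBALL", 2], "f": ["CLASS B", "GOLF", 15],
                   "g": ["", "HOCKEY", 2], "h": ["", "BASEBALL", 3]})

# forward-fill the class labels, then pair them with the sport names as a 2-level header
classes = df.loc[0].replace("", np.nan).ffill().fillna("")
header = pd.MultiIndex.from_arrays([classes.tolist(), df.loc[1].tolist()])
data = df.loc[2:].set_axis(header, axis=1).reset_index(drop=True)
print(data)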
2
2
77,620,231
2023-12-7
https://stackoverflow.com/questions/77620231/why-does-my-listbox-print-the-whole-list-in-one-line
I create a list by appending dictionary entries containing display_name, browse_name and node_id of OPCUA server nodes. When I print the list, all the elements are on one line. I have no idea why. Please help! # Code for inserting elements into the Listbox def display_nodes(self, nodes_list): # Code to display the nodes in the GUI i = 0 print(nodes_list,sep=" ") display_text="" self.nodes_listbox.delete(0,tk.END) for node in nodes_list: display_text=str(node) self.nodes_listbox.insert(tk.END,display_text) pass # Code for reading nodes and adding them to the List def read_nodes(self,node): # Code to read the nodes from the server for childId in node.get_children(): ch = self.client.get_node(childId) print(ch.get_node_class()) if ch.get_node_class() == ua.NodeClass.Variable: #if str(ch.get_browse_name()).find("QualifiedName(1:") != -1: if (str(ch.nodeid)).find("ns=1")!= -1: node_data={ 'display_name':ch.get_display_name(), 'browse_name':ch.get_browse_name(), 'node_id':str(ch.nodeid), } print(node_data) self.nodes_list.append(node_data) else: #if str(ch.get_browse_name).find("QualifiedName(1:") != -1: node_data={ 'display_name':ch.get_display_name(), 'browse_name':ch.get_browse_name(), 'node_id':str(ch.nodeid), #"data_type":ch.get_data_type_as_variant_type(), } print(node_data) self.read_nodes(ch) return[self.nodes_list]
When I print the list, all the elements are in one line. The problem can be fixed by using an asterisk (*). Change this: self.nodes_listbox.insert(tk.END,display_text) to: self.nodes_listbox.insert(tk.END, *display_text)
2
0
77,620,439
2023-12-7
https://stackoverflow.com/questions/77620439/list-to-csv-python
When I try to save a Python list in a csv, the csv have the items that I want to save separated by each character. I have a list like this with links: links = ['https://www.portalinmobiliario.com/MLC-2150551226-departamento-los-talaveras-id-117671-_JM#position=1&search_layout=grid&type=item&tracking_id=01bab66e-7cd3-43ce-b3d7-8389260b443d', 'https://www.portalinmobiliario.com/MLC-2148268902-departamento-los-espinos-id-116373-_JM#position=2&search_layout=grid&type=item&tracking_id=01bab66e-7cd3-43ce-b3d7-8389260b443d'] Im trying to save this to a csv with this code: with open('links.csv', 'w', newline='') as f: writer = csv.writer(f) writer.writerows(links) The result I get from the list is the link in a row but each character in a column. How can I get the links separated by rows but in the same column?
writer.writerows expects the parameter to be an iterable of row lists (quoting the docs: "A row must be an iterable of strings or numbers for Writer objects"); right now it's interpreting your link strings as rows of 1-character columns (since a string is indeed an iterable of strings). In short, you'll need to wrap each link in a list (a 1-tuple would do too), e.g. with a generator: writer.writerows([link] for link in links)
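Putting it together, a runnable sketch (the two example URLs stand in for the real links):

import csv

links = ["https://example.com/listing-1", "https://example.com/listing-2"]  # stand-ins

with open("links.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows([link] for link in links)
    # equivalent: for link in links: writer.writerow([link])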
2
5
77,619,384
2023-12-7
https://stackoverflow.com/questions/77619384/how-to-load-multiple-files-with-custom-process-for-each-of-them
I have several CSVfiles with the same structure: data_product_1.csv data_product_2.csv data_product_3.csv etc. It is clear to me that to obtain a dataframe with all the data concatted together with polars, I can do something like: import polars as pl df = pl.read_csv("data_*.csv") What I would like to do is to add an extra column to the final dataframe containing the name of the product, e.g. data value product_code 2000-01-01 1 product_1 2000-01-02 2 product_1 2000-01-01 3 product_2 2000-01-02 4 product_2 2000-01-01 5 product_3 I'm aware I can load the files one by one, add the extra column and concat them together afterwards but I was wondering if I'm missing some other way to take advantage of polars performances here.
It seems you're wanting the filename added as a column, e.g. duckdb.sql(""" from read_csv_auto('data_*.csv', filename = true) """) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ data β”‚ value β”‚ filename β”‚ β”‚ date β”‚ int64 β”‚ varchar β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ 2000-01-01 β”‚ 1 β”‚ data_product_1.csv β”‚ β”‚ 2000-01-02 β”‚ 2 β”‚ data_product_1.csv β”‚ β”‚ 2000-01-01 β”‚ 3 β”‚ data_product_2.csv β”‚ β”‚ 2000-01-02 β”‚ 4 β”‚ data_product_2.csv β”‚ β”‚ 2000-01-01 β”‚ 4 β”‚ data_product_3.csv β”‚ β”‚ 2000-01-02 β”‚ 5 β”‚ data_product_3.csv β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ This has been requested a few times but is yet to be added to Polars: https://github.com/pola-rs/polars/issues/9096 You can replace read_csv with scan_csv which delays reading the file and returns a LazyFrame instead. The frames can be combined with concat which (by default) "computes" LazyFrames in parallel. from pathlib import Path # lazyframes csvs = [ pl.scan_csv(f).with_columns(product_code=pl.lit(f.name)) for f in Path().glob("data_*.csv") ] # inputs are read in parallel df = pl.concat(csvs).collect() shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ data ┆ value ┆ product_code β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ════════════════════║ β”‚ 2000-01-01 ┆ 1 ┆ data_product_1.csv β”‚ β”‚ 2000-01-02 ┆ 2 ┆ data_product_1.csv β”‚ β”‚ 2000-01-01 ┆ 3 ┆ data_product_2.csv β”‚ β”‚ 2000-01-02 ┆ 4 ┆ data_product_2.csv β”‚ β”‚ 2000-01-01 ┆ 4 ┆ data_product_3.csv β”‚ β”‚ 2000-01-02 ┆ 5 ┆ data_product_3.csv β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
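If the desired column is the bare product code (product_1, product_2, ...) rather than the whole filename, it can be derived from the path before building each lazy frame; a sketch assuming the same data_*.csv naming (str.removeprefix needs Python 3.9+):

import polars as pl
from pathlib import Path

csvs = [
    pl.scan_csv(f).with_columns(
        product_code=pl.lit(f.stem.removeprefix("data_"))  # "data_product_1" -> "product_1"
    )
    for f in Path().glob("data_*.csv")
]
df = pl.concat(csvs).collect()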
4
4
77,615,883
2023-12-6
https://stackoverflow.com/questions/77615883/attributeerror-flags-object-has-no-attribute-c-contiguous
I am following the Hands-On Machine Learning book by Aurélien Géron and running into the following error. Code: y_train_large = (y_train.astype("int") >= 7) y_train_odd = (y_train.astype("int") % 2 == 1) y_multilabel = np.c_[y_train_large, y_train_odd] #model knn_clf = KNeighborsClassifier() knn_clf.fit(X_train, y_multilabel) y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3) The last line produces the following error: { AttributeError: 'Flags' object has no attribute 'c_contiguous' } Since I am following the book, I expected this code to work. I have tried solutions from the Google Bard and Claude AI chatbots but with no success.
There seems to be a bug report for this in Scikit-learn 1.3.0 (although it seems to have been fixed in the nightly builds). Try downgrading to version 1.2.2: pip uninstall scikit-learn pip install scikit-learn==1.2.2
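After the downgrade, a quick sanity check that the notebook actually picked up the pinned version (Jupyter usually needs a kernel restart after reinstalling a package):

import sklearn
print(sklearn.__version__)  # expected to show 1.2.2 after the downgrade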
2
2
77,613,936
2023-12-6
https://stackoverflow.com/questions/77613936/how-to-create-a-vector-search-index-in-azure-ai-search-using-v11-4-0
I want to create an Azure AI Search index with a vector field using the currently latest version of azure-search-documents v11.4.0. Here is my code: from azure.core.credentials import AzureKeyCredential from azure.search.documents import SearchClient from azure.search.documents.indexes import SearchIndexClient from langchain.embeddings import AzureOpenAIEmbeddings from langchain.text_splitter import TokenTextSplitter from azure.search.documents.indexes.models import ( SearchIndex, SearchField, SearchFieldDataType, SimpleField, SearchableField, SearchIndex, SemanticConfiguration, SemanticField, SearchField, SemanticSearch, VectorSearch, VectorSearchAlgorithmConfiguration, HnswAlgorithmConfiguration ) index_name = AZURE_COGNITIVE_SEARCH_INDEX_NAME key = AZURE_COGNITIVE_SEARCH_KEY credential = AzureKeyCredential(key) def create_index(): # Define the index fields client = SearchIndexClient(service_endpoint, credential) fields = [ SimpleField(name="chunk_id", type=SearchFieldDataType.String, key=True, sortable=True, filterable=True, facetable=True), SimpleField(name="file_name", type=SearchFieldDataType.String), SimpleField(name="url_name", type=SearchFieldDataType.String), SimpleField(name="origin", type=SearchFieldDataType.String, sortable=True, filterable=True, facetable=True), SearchableField(name="content", type=SearchFieldDataType.String), SearchField(name="content_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=1536, vector_search_configuration="my-vector-config"), ] vector_search=VectorSearch( algorithms=[ HnswAlgorithmConfiguration( name="my-vector-config", kind="hnsw", parameters={ "m": 4, "efConstruction":400, "efSearch":500, "metric":"cosine" } ) ] ) # Create the search index with the semantic settings index = SearchIndex(name=index_name, fields=fields, vector_search=vector_search) return client, index search_client, search_index = create_index() result = search_client.create_or_update_index(search_index) print(f"{result.name} created") This gives me the following error: Message: The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Exception Details: (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition Code: InvalidField Message: The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition I tried to copy exact solution provided here: https://learn.microsoft.com/en-us/answers/questions/1395031/how-to-configure-vectorsearchconfiguration-for-a-s which gives me same error as above. I also tried this sample which is part of the official documentation (linked on the pypi page): https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/samples/sample_vector_search.py But here I get this error: Code: InvalidRequestParameter Message: The request is invalid. Details: definition : The field 'contentVector' uses a vector search algorithm configuration 'my-algorithms-config' which is not defined. Exception Details: (UnknownVectorAlgorithmConfiguration) The field 'contentVector' uses a vector search algorithm configuration 'my-algorithms-config' which is not defined. Parameters: definition Code: UnknownVectorAlgorithmConfiguration Message: The field 'contentVector' uses a vector search algorithm configuration 'my-algorithms-config' which is not defined. 
Parameters: definition And I also found this other example notebooks from Microsoft about AI-Search: https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-custom-vectorization-sample.ipynb This code also gave me the exact same error as my initial code. I'm trying to get this working for 2 days now and I'm about to give up. There are several different documentations/examples in various different places and every code looks different. Apparently Microsoft changes the function names constantly with almost every package update so most of the examples are probably outdated by now. I have no idea where to find the "latest" documentation that actually provides working code as all examples I tested did not work for me. This has to be the worst python documentation I have ever seen in my life. Even Langchain documenation is great compared to this... EDIT: I just checked the source code of the "SearchField". It takes the following arguments: def __init__(self, **kwargs): super(SearchField, self).__init__(**kwargs) self.name = kwargs["name"] self.type = kwargs["type"] self.key = kwargs.get("key", None) self.hidden = kwargs.get("hidden", None) self.searchable = kwargs.get("searchable", None) self.filterable = kwargs.get("filterable", None) self.sortable = kwargs.get("sortable", None) self.facetable = kwargs.get("facetable", None) self.analyzer_name = kwargs.get("analyzer_name", None) self.search_analyzer_name = kwargs.get("search_analyzer_name", None) self.index_analyzer_name = kwargs.get("index_analyzer_name", None) self.synonym_map_names = kwargs.get("synonym_map_names", None) self.fields = kwargs.get("fields", None) self.vector_search_dimensions = kwargs.get("vector_search_dimensions", None) self.vector_search_profile_name = kwargs.get("vector_search_profile_name", None) You can see that there is no "vector_search_configuration" nor "vectorSearchConfiguration" argument. I think they renamed it to "vector_search_profile_name" for some reason. Therefore I assume that the sample in the official documentation is the correct one and the other 2 are indeed outdated. But even so I'm still getting an error due to the "my-algorithms-config" not being defined.
I finally found the answer. Turns out at this moment there is not a single correct sample from Microsoft to properly create an index with a vector field. They renamed a few function names and argument names which makes most other answers (e.g. on Microsoft support pages) outdated. The sample in the official GitHub repo generally uses the correct function and argument names but it is still wrong as they pass the wrong value. A GitHub issue was opened by someone else for this exact problem. The issue got closed after someone claimed he fixed it, even though nothing was fixed. The issue was then reopened 3 weeks ago and as of today 07/12/2023 the issue is still open and the documentation is still incorrect. Long story short this is how to properly define an index with a vector field in azure-search-documents v.11.4.0: from azure.core.credentials import AzureKeyCredential from azure.search.documents import SearchClient from azure.search.documents.indexes import SearchIndexClient from azure.search.documents.indexes.models import ( SearchIndex, SearchField, SearchFieldDataType, SimpleField, SearchableField, SearchIndex, SearchField, VectorSearch, VectorSearchProfile, HnswAlgorithmConfiguration ) service_endpoint = AZURE_COGNITIVE_SEARCH_ENDPOINT index_name = AZURE_COGNITIVE_SEARCH_INDEX_NAME key = AZURE_COGNITIVE_SEARCH_KEY credential = AzureKeyCredential(key) def create_index(): # Define the index fields client = SearchIndexClient(service_endpoint, credential) fields = [ SimpleField(name="chunk_id", type=SearchFieldDataType.String, key=True, sortable=True, filterable=True, facetable=True), SimpleField(name="file_name", type=SearchFieldDataType.String), SimpleField(name="url_name", type=SearchFieldDataType.String), SimpleField(name="origin", type=SearchFieldDataType.String, sortable=True, filterable=True, facetable=True), SearchableField(name="content", type=SearchFieldDataType.String), SearchField(name="content_vector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=1536, vector_search_profile_name="my-vector-config"), ] vector_search = VectorSearch( profiles=[VectorSearchProfile(name="my-vector-config", algorithm_configuration_name="my-algorithms-config")], algorithms=[HnswAlgorithmConfiguration(name="my-algorithms-config")], ) index = SearchIndex(name=index_name, fields=fields, vector_search=vector_search) return client, index
4
7
77,615,967
2023-12-6
https://stackoverflow.com/questions/77615967/converting-pandas-dataframe-to-float32-changes-value-of-low-precision-number-by
I have a very large dataset with values that don't require a lot of decimal point precision. In one test scenario, my dataframe is 102 MB, and all columns have a float64 datatype. I was hoping to reduce the memory usage, and potentially the output file sizes by changing my pandas dataframe to hold float32 values. With this dataframe, I am creating these files: .csv (with df.to_csv) .xlsx (with xlsxwriter) .html (with plotly fig.to_html or write_html) After adding one line casting my datatype to float32 via df = df.astype('float32'), I was surprised to find that some of my files were quite a bit larger than they were before. One .html file went from 30 MB to 44 MB. One xlsx file went from 31 MB to ~39 MB. When I look at the data I stored, I see more - and inaccurate - digits after the decimal point: Digging into this more, I am finding unexpected behavior in how Pandas downcasts to float32 - or maybe it's how various methods and functions represent a float32 datatype vs a float64. Given a simple csv file: 59.11,59,59.86,59.86,59.0839945 60.28,59.7817845,59.75,59.75,59 A simple script: import pandas as pd df = pd.read_csv('float_test.csv', header=None) s = df.iloc[0,:] print(df.info()) print(df) print(f's dtype: {s.dtypes}') # Show Datatype print(s) # Show the data as pandas prints it print(f's to list: {s.to_list()}') # Show data convertered to list s32 = s.astype('float32') print(s32) print(f's32 to list: {s32.to_list()}') # Convert to float32 and print as list Looking at the output from this script, I am confused by what is happening. Each column of the dataframe is a "float64". When I print the dataframe, it shows decimal values padded to the most precise float in any column (to a max of 6 digits after the decimal). Similarly when I grab just the first row - it is then treated as a series - and that is represented with padding to 6 decimal points. Now once I convert the series to a float32, I see the value of index spot 2 and 3 change. Instead of 59.86 or 59.860000, it becomes 59.860001. When I use to_list(), I find the original series has the correct values from the CSV files (with ".0" added to indicate float). But the float32 series, has .00000061035156 added to some of the values, and the last value has a similar (but not the same) value added. There is something happening here that I am just not understanding. Why would a float32 of "59.11", not just be "59.11000"? (<< pretend that's the right # of zeros) I understanding loosing precision when down casting from 64 to 32. But I don't understand how a number that should be exact in float32 is becoming non-exact. And while this seems to be a Pandas issue, I am finding the bloating in storage space related to other libraries (xlsxwriter and plotly). I'm guessing this is because a float32 is forcing the data to be kept to a certain decimal point, yet somehow the float64 is okay with 59.11. However this could also be because the number itself is changing slightly (at the most granular level), forcing that many decimal points to be kept.
I'm going to flag your question as a duplicate of this question, but to help understand why I will also submit this answer. Part 1: Why would a float32 of "59.11", not just be "59.11000"? Answer: there is no way to represent exactly 59.11 as a binary floating point number (float). The float representation of 59.11 is some other number really close to 59.11, but not exactly equal. The exact number depends on the machine's implementation of floating point numbers, but in any case if you look at enough decimal places to get beyond the floating point's precision then you could see "garbage". For float32, there are about 7 precise digits (including those left of the .), and the rest is "garbage". For a float64, there are about 15 precise digits before the "garbage" starts. To prove this to yourself, try configuring pandas to print an absurd number of decimal places, and then look at your data again. This way you are looking at exactly the number that is stored in memory, not some rounded version of it. import pandas as pd data = [ [59.11,59,59.86,59.86,59.0839945], [60.28,59.7817845,59.75,59.75,59], ] df = pd.DataFrame(data) s = df.iloc[0,:] pd.set_option("display.precision", 18) print(s) s32 = s.astype('float32') print(s32) Which prints: 0 59.109999999999999432 1 59.000000000000000000 2 59.859999999999999432 3 59.859999999999999432 4 59.083994500000002859 Name: 0, dtype: float64 0 59.110000610351562500 1 59.000000000000000000 2 59.860000610351562500 3 59.860000610351562500 4 59.083995819091796875 Name: 0, dtype: float32 Part 2: I was surprised to find that some of my files were quite a bit larger than they were before. I am guessing that pandas is printing the binary floating point representation of the number after it has been rounded to a certain number of decimal points. For a float64, the float is very close so even rounding it to 6 or 7 decimal places has only trailing 0s after the data you want to see, and those trailing 0s would be omitted. For a float32, the "garbage" starts early enough that it isn't hidden by the rounding, and therefore takes up more characters in the text version of the result. If you wanted to store the data as numbers directly on the disk instead of storing a text version of the rounded floating point numbers, you could try saving to a binary format like a .pkl (see DataFrame.to_pickle). Just keep in mind that this file would not be human-readable.
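A small additional check that shows the exact stored values without touching pandas display options; Decimal prints the full decimal expansion of a binary float:

import numpy as np
from decimal import Decimal

print(Decimal(float(np.float32(59.11))))  # 59.1100006103515625, the exact float32 value
print(Decimal(59.11))                     # 59.10999999999999943..., the exact float64 value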
2
2
77,614,679
2023-12-6
https://stackoverflow.com/questions/77614679/django-get-a-certin-value-from-a-dict-with-a-for-key-value-inside-of-a-template
Sorry if my title is a bit cryptic, but this is the problem: I have a list of dicts data = [{"a": 1, "b": 2},{"a": 3, "b": 4} ] and a list with keys = ["a","b"] I want to do this in a template: for dat in data: <tr> for k in keys: <th> dat[k]</th> </tr> to get this: <tr> <th>1</th> <th>2</th> </tr> <tr> <th>3</th> <th>4</th> </tr>
Use one of these solutions if you want to keep the order given by the keys list. Result will be different with keys = ["a", "b"] VS keys = ["b", "a"]. Solution 1 - Prepare data in the view Process the data in the view. Create a list of list instead of dictionary to keep the order of your keys list. def home(request): data = [{"a": 1, "b": 2},{"a":3, "b":4}] data_to_render = [ [] for _i in range(len(data)) ] keys = ['a', 'b'] for i in range(len(data)): for k in keys: data_to_render[i] += [data[i].get(k)] context = { "data_to_render": data_to_render } return render(request, 'index.html', context) And your template index.html may look like: <table> {% for l in data_to_render %} <tr> {% for value in l %} <th> {{ value }} </th> {% endfor %} </tr> {% endfor %} </table> Solution 2 - Use a custom tag Your view might look like… def home(request): context = { "data" : [{"a": 1, "b": 2},{"a":3, "b":4}], "keys": ["a","b"] } return render(request, 'index.html', context) Create a custom tag getval() at templatetags/extras.py: from django import template register = template.Library() @register.simple_tag def getval(dic, key): return dic.get(key) Finally, your template might look like: {% load extras %} <table> {% for d in data %} <tr> {% for k in keys %} <th> {% getval d k %} </th> {% endfor %} </tr> {% endfor %} </table>
2
2
77,615,257
2023-12-6
https://stackoverflow.com/questions/77615257/avoid-runtimewarning-using-where
I want to apply a function to a numpy array, which goes through infinity to arrive at the correct values: def relu(x): odds = x / (1-x) lnex = np.log(np.exp(odds) + 1) return lnex / (lnex + 1) x = np.linspace(0,1,10) np.where(x==1,1,relu(x)) correctly computes array([0.40938389, 0.43104202, 0.45833921, 0.49343414, 0.53940413, 0.60030842, 0.68019731, 0.77923729, 0.88889303, 1. ]) but also issues warnings: 3478817693.py:2: RuntimeWarning: divide by zero encountered in divide odds = x / (1-x) 3478817693.py:4: RuntimeWarning: invalid value encountered in divide return lnex / (lnex + 1) How do I avoid the warnings? Please note that performance is of critical importance here, so I would rather avoid creating intermediate arrays.
Another possible solution, based on np.divide, to avoid division by zero. This solution is inspired by @hpaulj's comment. def relu(x): odds = np.divide(x, 1-x, out=np.zeros_like(x), where=x!=1) lnex = np.log(np.exp(odds) + 1) return lnex / (lnex + 1) x = np.linspace(0,1,10) np.where(x==1,1,relu(x)) Output: array([0.40938389, 0.43104202, 0.45833921, 0.49343414, 0.53940413, 0.60030842, 0.68019731, 0.77923729, 0.88889303, 1. ])
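If you prefer to keep the original expression untouched (and avoid the extra where mask inside relu), the warnings can also be silenced locally with numpy's errstate context manager; the inf/nan values it lets through are then overwritten by the outer np.where exactly as in the question:

import numpy as np

def relu(x):
    # suppress the divide-by-zero and invalid-value warnings only inside this block
    with np.errstate(divide="ignore", invalid="ignore"):
        odds = x / (1 - x)
        lnex = np.log(np.exp(odds) + 1)
        return lnex / (lnex + 1)

x = np.linspace(0, 1, 10)
result = np.where(x == 1, 1, relu(x))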
2
2
77,611,459
2023-12-6
https://stackoverflow.com/questions/77611459/pandas-vectorized-operation-making-counting-function-that-resets-when-threshol
I am quite new to the programming and Im struggling to this matter. Any help is appreciated! I have a dataframe of stocks including the prices and the signal if it will be up (1) or down (-1). I want to count the sequence of repetition into another column 'count'. So, when there is a sequence of 1,1,1; then the count will be 1,2,3. If its -1,-1,-1; then the count will be 1,2,3 too. Additionally, when a threshold value reaches 5, the counting resets. Doesn't matter if it's 1 or -1. So, what I have is: price sign 0 13 1 1 12 1 2 11 -1 3 12 -1 4 13 1 5 14 1 6 14 1 7 14 1 8 14 1 9 14 1 10 14 1 . . . And what I want is: price sign count 0 13 1 1 1 12 1 2 2 11 -1 1 3 12 -1 2 4 13 1 1 5 14 1 2 6 14 1 3 7 14 1 4 8 14 1 5 9 14 1 1 10 14 1 2 . . . I already have this code in normal python code. But I cannot do this in Pandas Vectorized Operation! Help me, please!
Use GroupBy.cumcount by consecutive values of sign with modulo 5: df['count'] = df.groupby(df['sign'].ne(df['sign'].shift()).cumsum()).cumcount() % 5 + 1 print (df) price sign count 0 13 1 1 1 12 1 2 2 11 -1 1 3 12 -1 2 4 13 1 1 5 14 1 2 6 14 1 3 7 14 1 4 8 14 1 5 9 14 1 1 10 14 1 2 Detail: print (df.assign(consecutive=df['sign'].ne(df['sign'].shift()).cumsum(), counter=df.groupby(df['sign'].ne(df['sign'].shift()).cumsum()).cumcount(), count = df.groupby(df['sign'].ne(df['sign'].shift()).cumsum()).cumcount() % 5 + 1)) price sign consecutive counter count 0 13 1 1 0 1 1 12 1 1 1 2 2 11 -1 2 0 1 3 12 -1 2 1 2 4 13 1 3 0 1 5 14 1 3 1 2 6 14 1 3 2 3 7 14 1 3 3 4 8 14 1 3 4 5 9 14 1 3 5 1 10 14 1 3 6 2
2
2
77,584,118
2023-12-1
https://stackoverflow.com/questions/77584118/python-fastapi-how-to-return-a-response-with-unicode-or-non-ascii-characters-en
I am creating a FastAPI application that triggers file downloading through the StreamingResponse class (see FastAPI docs). This part is actually ok. My problem is that when the file contains accent (e.g., Γ©) or another special character, it seems to not encode it well. For example, when there is a Γ©, in a CSV it will be transformed to é, and in a JSON to \u00e9. My code looks something like this: For JSON # API CONTENT # ... return StreamingResponse(io.StringIO(json.dumps(data)), headers={"Content-Disposition": "filename=filename.json") For CSV # API CONTENT # ... return StreamingResponse(io.StringIO(pandas.DataFrame(data).to_csv(index=False)), headers={"Content-Disposition": f"filename=filename.csv"}) In order to fix the encoding, I also tried to: Add media_type="text/csv; charset=utf-8" in the CSV part but without success. Add "Content-Type": "application/octet-stream; charset=utf-8" in the header part but without success too. Tried to replace StreamingResponse by Response . Has somebody already faced this kind of problem? I would like to note that the content before adding it to StreamingResponse is well encoded. Here is a sample of code to test: from fastapi import FastAPI from fastapi.responses import StreamingResponse import pandas as pd import io import json app = FastAPI() data = [ {"éète": "test", "age": 10}, {"éète": "test2", "age": 5}, ] @app.get("/download_json") async def download_json(): return StreamingResponse(io.StringIO(json.dumps(data)), headers={"Content-Disposition": "filename=data.json"}) @app.get("/download_csv") async def download_csv(): return StreamingResponse(io.StringIO(pd.DataFrame(data).to_csv(index=False)), headers={"Content-Disposition": "filename=data.csv"})```
Python's json module, by default, converts non-ASCII and Unicode characters into the \u escape sequence. To avoid having non-ASCII or Unicode characters converted in that way, when encoding your data into JSON, you could set the ensure_ascii flag of json.dumps() function to False. Similalry, when using Panda's DataFrame to_json()or to_csv() functions, you need to make sure to use them with force_ascii=False and encoding='utf-8' arguments, respectively. (Note that encoding='utf-8' is the default encoding for the to_csv() function regardless; hence, you might omit manually setting it). Regarding using StreamingResponse, I would suggest having a look at this answer and all the references included in it, in order to understand whether and when you should use it. In your case, as shown in the example given in your question, is not needed, but you should rather return a custom Response, as explained in the linked answer above. More related answers that you might find helpful can be found here, here, as well as here, here and here. I would also highly suggesting reading this answer, which would clear things up for you, regarding how FastAPI works inder the hood, when returning dictionary/JSON objects from an endpoint. To use faster JSON encoders than the standard json module, have a look at this answer, as well as this answer and this answer. Finally, as explained in this answer and this answer, you could define the Content-Disposition header, so that the data are either viewed in the browser or downloaded to the client's device, using either: headers = {'Content-Disposition': 'inline; filename="out.json"'} or headers = {'Content-Disposition': 'attachment; filename="out.json"'} Please have a look at the linked answers for more details. Working Example from fastapi import FastAPI, Response import pandas as pd import json app = FastAPI() # Exemple de donnΓ©es avec des caractΓ¨res spΓ©ciaux data = [ {"éète": "test", "age": 10}, {"éète": "test2", "age": 5}, ] @app.get("/1") def get_json(): headers = {"Content-Disposition": 'inline; filename="out.json"'} return Response( json.dumps(data, ensure_ascii=False), headers=headers, media_type="application/json", ) @app.get("/2") def get_json_from_df(): headers = {"Content-Disposition": 'inline; filename="out.json"'} return Response( pd.DataFrame(data).to_json(orient="records", force_ascii=False), headers=headers, media_type="application/json", ) # Note: "text/csv" would force the browser to download the data, regardless # of specifying `inline` in the `Content-Disposition` header. # Use `media_type="text/plain"` instead, in order to view the data in the browser. @app.get("/3") def get_csv_from_df(): headers = {"Content-Disposition": 'inline; filename="out.csv"'} return Response( pd.DataFrame(data).to_csv(index=False, encoding="utf-8"), headers=headers, media_type="text/csv; charset=utf-8", ) UPDATE - Downloading CSV file and making Excel automatically displaying UTF-8/UTF-16 data when double-clicking the file As noted in the example provided above, when calling /3 endpoint, while the file that gets downloaded is in utf-8 encodingβ€”one can confirm that by opening the file in Notepad++ and checking the encoding used, as well as finding that all unicode/non-ascii characters are displayed as expectedβ€”when, however, double-clicking on the file to open it in Excel, unicode/non-ascii characters, such as Γ© and Γ¨ in the example above, are not displayed correctly. 
As it turns out, this is a known issue with Excelβ€”have a look at this, this and this for more detailsβ€”and the way to overcome this is to prepend a Byte Order Mark (BOM) (i.e., \uFEFF) at the beginning of the file, which would result in Excel recognizing the file as UTF-8 or UTF-16. Depending on the Excel version one is using, they should use the appropriate encoding. In some older versions, one would need to use utf-16 encoding (or utf-16-le), as shown in the example below, while in newer versions utf-8 might work as well (in which case, one would need to replace utf-16 with utf-8 in the example below). Finally, make sure to use \t delimiter (i.e., , sep='\t'), in order for the data to be displayed in separate columns in the CSV file; otherwise, using , or ; for instance, all data might end up in a single column. Option 1 - Using Pandas DataFrame @app.get("/3") def get_csv_from_df(): headers = {"Content-Disposition": 'attachment; filename="out.csv"'} return Response( (u'\uFEFF' + pd.DataFrame(data).to_csv(index=False, sep='\t', encoding="utf-16")).encode('utf-16'), headers=headers, media_type="text/csv; charset=utf-16", ) Option 2 - Using Python's built-in csv module Alternatively, one could use Python's built-in csv module to convert the dictionary or list of dictionaries into csv data and have them sent to the client. To avoid, however, saving the data to a file on the disk, one could use Python's NamedTemporaryFile, as demonstrated here, here and here, which would be much faster, and have it deleted at the end (or in case an exception occurs when processing the data), as shown here and here. Using this option, one wouldn't have to prepend a BOM to the data, as described earlier when using Pandas DataFrame. from fastapi import BackgroundTasks, HTTPException from fastapi.responses import FileResponse from tempfile import NamedTemporaryFile import csv import os @app.get("/4") def get_csv(background_tasks: BackgroundTasks): headers = {"Content-Disposition": 'attachment; filename="out.csv"'} temp = NamedTemporaryFile(delete=False, mode='w', encoding='utf-16', newline='') try: with temp as f: keys = data[0].keys() w = csv.DictWriter(f, fieldnames=keys, delimiter='\t') w.writeheader() w.writerows(data) except Exception: os.remove(temp.name) raise HTTPException(detail='There was an error processing the data', status_code=400) background_tasks.add_task(os.remove, temp.name) return FileResponse(temp.name, headers=headers, media_type='text/csv; charset=utf-16') or, a variant of the above (remember to call .seek(0) to reset the cursor back to the start of the file, before reading the contents from itβ€”see here for more details): from fastapi import HTTPException from tempfile import NamedTemporaryFile import csv import os @app.get("/5") def get_csv(): headers = {"Content-Disposition": 'attachment; filename="out.csv"'} temp = NamedTemporaryFile(delete=False, mode='w+', encoding='utf-16', newline='') try: keys = data[0].keys() w = csv.DictWriter(temp, fieldnames=keys, delimiter='\t') w.writeheader() w.writerows(data) temp.seek(0) return Response(temp.read().encode('utf-16'), headers=headers, media_type='text/csv; charset=utf-16') except Exception: raise HTTPException(detail='There was an error processing the data', status_code=400) finally: temp.close() os.remove(temp.name) Note 1 it should also be noted that if one didn't use any of the options provided above, they could still open the file downloaded using the standard Notepad application and change the encoding to Unicode by clicking on "File 
> Save as > Encoding: Unicode", and then save it. In Notepad++, you could do that by clicking on the "Encoding" tab and selecting UTF-16 LE BOM, then save it. In both cases, double-click on the file after channging the encoding should display unicode characters as expected. There is also the option of opening the Excel application first, and then importing the csv file, which would let you specify the encoding (lokk for guidelines online on how to do that). Regardless, using one of the approaches demonstrated earlier, the contents of the csv file would be shown as expected, when double-clicking the file to open it in Excel. Note 2 In every option given above, the endpoints were defined with normal def, instead of async def, as I/O operations of Pandas DataFrame, Python's built-in json and csv modules, as well as NamedTemporaryFile, are all blocking operations. Thus, depending on the size of data to be written/read, as well as the number of users that might need concurrent access to your API, you might otherwise noticed delays, when you (or some other user) attempted to access the same or a different endpoint, while a request was already being processed. If an endpoint was defined with async def and you had to process (read/write) large size data, and didn't await for some async function inside the endpointβ€”in order to return control back to the event loop and allow other tasks/requests waiting in the event loop to runβ€”every request to such an endpoint would have to be completely finished (i.e., exit the endpoint), before letting other tasks/requests in the event loop to run. When, instead, defining an endpoint with normal def, FastAPI will run that endpoint in an external ThreadPool that is then awaited; hence, avoiding blocking the event loop. Please have a look at this answer for more details and examples on that subject. That answer also provides solutions, when one needs to define their endpoint with async def, as they would have to await for some async function inside it, as well as, at the same time, they would have to execute blocking operations inside that endpoint. Such solutions include running blocking opertions in an external threadpool or processpool and awaiting it. Please have a look at the linked answer above for furhter details. If you would also like using an async version of NamedTemporaryFile, please take a look at Option 2 of this answer, which uses aiofiles. If the size of the data you had to process was small, and you didn't expect a large amount of clients/users to use your API at the same time, it would be just fine to define the endpoint with async def (even without having blocking operations, such as json.dumps(), Pandas.DataFrame.to_csv(), etc., run in external threadpool or processpool). It all depends on your needs.
4
5
77,576,750
2023-11-30
https://stackoverflow.com/questions/77576750/futurewarning-dataframe-swapaxes-is-deprecated-and-will-be-removed-in-a-futur
Looks like numpy is using the deprecated function DataFrame.swapaxes in fromnumeric.py. Anaconda3\lib\site-packages\numpy\core\fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead. return bound(*args, **kwds) I am getting this warning from the following line of code in Jupyter Notebook: train, val, test = np.split(df.sample(frac=1), [int(0.8*len(df)), int(0.9*len(df))]) This is the structure of the dataframe I am using (screenshot omitted). What exactly is raising this warning and what should I change in my code to get rid of this warning? I also found that this is currently an open numpy issue on GitHub. It would be great if anybody could help. Thanks in advance.
According to the numpy issue on github, this "bug" will not be fixed in numpy. The official statement is that np.split should not be used to split pandas DataFrames anymore. Instead, iloc should be used to split DataFrames as it is described in this answer. As it looks that you are splitting the DataFrame for machine learning reasons, please be aware that there are built-in methods in many machine learning libraries doing the test/train(/validation) split for you. Here the example of sklearn.
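A sketch of the iloc-based split mentioned above, applied to the same line from the question (it assumes the df from the question; random_state just makes the shuffle reproducible):

shuffled = df.sample(frac=1, random_state=42)
n = len(shuffled)
train = shuffled.iloc[: int(0.8 * n)]
val = shuffled.iloc[int(0.8 * n) : int(0.9 * n)]
test = shuffled.iloc[int(0.9 * n) :]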
7
7
77,577,864
2023-11-30
https://stackoverflow.com/questions/77577864/issue-with-hierarchical-lucas-kanade-method-on-optical-flow
Issue with Hierarchical Lucas-Kanade method on optical flow I am implementing the hierarchical Lucas-Kanade method in Python based on this tutorial. However, when applying the method to a rotating sphere, I am encountering unexpected results. The data can be found here. Algorithm explained The overall structure of the algorithm is explained below. Notice that the equations in this tutorial are using the convention where the top-left corner is (0, 0) and the bottom-right corner is (width-1, height-1), while the implementation provided here swaps the x and y axes. In other words, the coordinate for the top-left corner is (0, 0) and the bottom-right corner is (height-1, width-1). Basic Lucas-Kanade Incorporating equations 19, 20, 23, 29, and 28, the basic Lucas-Kanade method is implemented as follows: def lucas_kanade(img1, img2): img1 = np.copy(img1).astype(np.float32) img2 = np.copy(img2).astype(np.float32) # Change the window size based on the image's size due to downsampling. window_size = min(max(3, min(img1.shape[:2]) / 6), 31) window_size = int(2 * (window_size // 2) + 1) print("window size: ", window_size) # Compute image gradients Ix = np.zeros(img1.shape, dtype=np.float32) Iy = np.zeros(img1.shape, dtype=np.float32) Ix[1:-1, 1:-1] = (img1[1:-1, 2:] - img1[1:-1, :-2]) / 2 # pixels on boundry are 0. Iy[1:-1, 1:-1] = (img1[2:, 1:-1] - img1[:-2, 1:-1]) / 2 # Compute temporal gradient It = np.zeros(img1.shape, dtype=np.float32) It = img1 - img2 # Define a (window_size, window_size) kernel for the convolution kernel = np.ones((window_size, window_size), dtype=np.float32) # kernel = create_gaussian_kernel(window_size, sigma=1) # Use convolution to calculate the sum of the window for each pixel Ix2 = convolve2d(Ix**2, kernel, mode="same", boundary="fill", fillvalue=0) Iy2 = convolve2d(Iy**2, kernel, mode="same", boundary="fill", fillvalue=0) Ixy = convolve2d(Ix * Iy, kernel, mode="same", boundary="fill", fillvalue=0) Ixt = convolve2d(Ix * It, kernel, mode="same", boundary="fill", fillvalue=0) Iyt = convolve2d(Iy * It, kernel, mode="same", boundary="fill", fillvalue=0) # Compute optical flow parameters det = Ix2 * Iy2 - Ixy**2 # Avoid division by zero u = np.where((det > 1e-6), (Iy2 * Ixt - Ixy * Iyt) / det, 0) v = np.where((det > 1e-6), (Ix2 * Iyt - Ixy * Ixt) / det, 0) optical_flow = np.stack((u, v), axis=2) return optical_flow.astype(np.float32) Generate Gaussian Pyramid The Gaussian pyramid is generated as follow. def gen_gaussian_pyramid(im, max_level): # Return `max_level+1` arrays. gauss_pyr = [im] for i in range(max_level): gauss_pyr.append(cv2.pyrDown(gauss_pyr[-1])) return gauss_pyr Upsample the flow The processing is conducted from the roughest image to the finest image in the pyramid. Thus, we also need to upsample the flow. def expand(img, dst_size, interpolation=None): # Increase dimension. height, width = dst_size[:2] return cv2.GaussianBlur( cv2.resize( # dim: (width, height) img, (width, height), interpolation=interpolation or cv2.INTER_LINEAR ), (5, 5), 0, ) Warp the image by the flow in previous level In the equation 12, the right image needs to be shifted based on the number of pixels in the previous loop. I choose to use opencv.remap function to warp the left image to be aligned with the right image. 
def remap(a, flow): height, width = flow.shape[:2] # Create a grid of coordinates using np.meshgrid y, x = np.meshgrid(np.arange(height), np.arange(width), indexing="ij") # Create flow_map by adding the flow vectors flow_map = np.column_stack( # NOTE: minus sign on flow (x.flatten() + -flow[:, :, 0].flatten(), y.flatten() + -flow[:, :, 1].flatten()) ) # Reshape flow_map to match the original image dimensions flow_map = flow_map.reshape((height, width, 2)) # Ensure flow_map values are within the valid range flow_map[:, :, 0] = np.clip(flow_map[:, :, 0], 0, width - 1) flow_map[:, :, 1] = np.clip(flow_map[:, :, 1], 0, height - 1) # Convert flow_map to float32 flow_map = flow_map.astype(np.float32) # Use cv2.remap for remapping warped = cv2.remap(a, flow_map, None, cv2.INTER_LINEAR) return warped Putting it all together After defining all the basics, we can put them all together. Here, g_L and d_L are the variable in equation 7. def hierarchical_lucas_kanade(im1, im2, max_level): # max_level = 4 gauss_pyr_1 = gen_gaussian_pyramid(im1, max_level) # from finest to roughest gauss_pyr_2 = gen_gaussian_pyramid(im2, max_level) # from finest to roughest g_L = [0 for _ in range(max_level + 1)] # Every slot will be (h, w, 2) array. d_L = [0 for _ in range(max_level + 1)] # Every slot will be (h, w, 2) array. assert len(g_L) == 5 # 4 + 1 (base) # Initialzie g_L[0] as (h, w, 2) zeros array g_L[max_level] = np.zeros(gauss_pyr_1[-1].shape[:2] + (2,)).astype(np.float32) for level in range(max_level, -1, -1): # 4, 3, 2, 1, 0 # Warp image 1 by previous flow. warped = remap(gauss_pyr_1[level], g_L[level]) # Run Lucas-Kanade on warped image and right image. d_L[level] = lucas_kanade(warped, gauss_pyr_2[level]) # Expand/Upsample the flow so that the dimension can match the finer result. g_L[level - 1] = 2.0 * expand( g_L[level] + d_L[level], gauss_pyr_2[level - 1].shape[:2] + (2,), interpolation=cv2.INTER_LINEAR, ) return g_L[0] + d_L[0] Visualization After downloading the data, you can run it with the code: sphere_seq = [] for fname in natsorted(Path("./input/sphere/").rglob("*.ppm")): sphere_seq.append(cv2.imread(str(fname), cv2.IMREAD_GRAYSCALE)) flows = [] for i in range(len(sphere_seq) - 1): flows.append(hierarchical_lucas_kanade(sphere_seq[i], sphere_seq[i + 1], max_level=4)) show_flow(sphere_seq[i], flows[i], f"./output/sphere/flow-{i}.png") The result looks like below: Specific Question: There are several problems in the result: It looks like the flow's x direction is correct but the y direction is not. It could be my my visulization code is wrong. Here is the code: def show_flow(img, flow, filename=None): x = np.arange(0, img.shape[1], 1) y = np.arange(0, img.shape[0], 1) x, y = np.meshgrid(x, y) plt.figure(figsize=(10, 10)) fig = plt.imshow(img, cmap="gray", interpolation="bicubic") plt.axis("off") fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) num_points_per_axis = 32 step = int(img.shape[0] / num_points_per_axis) plt.quiver( x[::step, ::step], y[::step, ::step], # reverse order? flow[::step, ::step, 0], flow[::step, ::step, 1], # reverse sign? color="r", pivot="tail", headwidth=2, headlength=3, ) if filename is not None: plt.savefig(filename, bbox_inches="tight", pad_inches=0) There are non-zero flow on the stationary pixels. Any insights or suggestions on resolving this issue would be greatly appreciated. Thank you!
Optical Flow Analysis using the Lucas-Kanade Method Simple Lucas-Kanade (without hierarchical approach) Here's a program based off of the code you provide. Below is just one example of the kind of parameter optimization which could be experimented with to fine-tune the accuracy of the optical flow analysis results. The principal changes introduced were: As mentioned, fine-tuning of the handful of parameters (defined by all caps variable names in the code below; perhaps the most important is the "minimum window size" which determines the size of the local region the Lucas-Kanade algorithm analyzes for determining flow) Implementation of a flow magnitude threshold/cutoff to eliminate the plotted quivers (arrows) on stationary regions of pixels Correction of the y-direction flow values by reversing the sign of the vector import cv2 import imageio import itertools as itl import matplotlib.pyplot as plt import numpy as np from natsort import natsorted from pathlib import Path INPUT_DATA = # [path to input directory containing .ppm image files] # Variable analysis parameters: MIN_WIN_SIZE = [6, 7, 8] NUM_AXIS_PTS = [32, 50, 74] ARROW_SCALE = [50] FLOW_CUTOFF = [0.25] combinations = [ *itl.product(MIN_WIN_SIZE, NUM_AXIS_PTS, ARROW_SCALE, FLOW_CUTOFF) ] print(len(combinations)) for MIN_WIN_SIZE, NUM_AXIS_PTS, ARROW_SCALE, FLOW_CUTOFF in combinations: def lucas_kanade(img1, img2): """Compute optical flow using Lucas-Kanade method. Args: img1 (numpy.ndarray): First input image. img2 (numpy.ndarray): Second input image. Returns: numpy.ndarray: Computed optical flow. """ img1 = np.copy(img1).astype(np.float32) img2 = np.copy(img2).astype(np.float32) # Change the window size based on the image's size due to downsampling. window_size = min(max(3, min(img1.shape[:2]) / 6), MIN_WIN_SIZE) window_size = int(2 * (window_size // 2) + 1) # Compute image gradients Ix = np.zeros(img1.shape, dtype=np.float32) Iy = np.zeros(img1.shape, dtype=np.float32) Ix[1:-1, 1:-1] = (img1[1:-1, 2:] - img1[1:-1, :-2]) / 2 Iy[1:-1, 1:-1] = (img1[2:, 1:-1] - img1[:-2, 1:-1]) / 2 # Compute temporal gradient It = img1 - img2 # Define a (window_size, window_size) kernel for the convolution kernel = np.ones((window_size, window_size), dtype=np.float32) # Use convolution to calculate the sum of the window for each pixel Ix2 = cv2.filter2D(Ix ** 2, -1, kernel) Iy2 = cv2.filter2D(Iy ** 2, -1, kernel) Ixy = cv2.filter2D(Ix * Iy, -1, kernel) Ixt = cv2.filter2D(Ix * It, -1, kernel) Iyt = cv2.filter2D(Iy * It, -1, kernel) # Compute optical flow parameters det = Ix2 * Iy2 - Ixy ** 2 # Avoid division by zero and handle invalid values u = np.where((det > 1e-6), (Iy2 * Ixt - Ixy * Iyt) / (det + 1e-6), 0) v = np.where((det > 1e-6), (Ix2 * Iyt - Ixy * Ixt) / (det + 1e-6), 0) optical_flow = np.stack((u, v), axis=2) return optical_flow.astype(np.float32) def apply_magnitude_threshold(flow, threshold): """Apply magnitude thresholding to filter out small flows. Args: flow (numpy.ndarray): Input flow array. threshold (float): Magnitude threshold value. Returns: numpy.ndarray: Thresholded flow array. """ magnitude = np.linalg.norm( flow, axis=-1 ) # Compute magnitude of flow vectors magnitude = magnitude.reshape( magnitude.shape + (1,) ) # Reshape to match flow shape thresholded_flow = np.where(magnitude < threshold, 0, flow) return thresholded_flow def show_flow(img, flow, filename=None): """Visualize the flow on the input image. Args: img (numpy.ndarray): Input image. flow (numpy.ndarray): Flow array to be visualized. 
filename (str, optional): Output filename to save the visualization. """ x = np.arange(0, img.shape[1], 1) y = np.arange(0, img.shape[0], 1) x, y = np.meshgrid(x, y) plt.figure(figsize=(10, 10)) fig = plt.imshow(img, cmap="gray", interpolation="bicubic") plt.axis("off") fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) num_points_per_axis = NUM_AXIS_PTS step = int(img.shape[0] / num_points_per_axis) print(step) scale_factor = ( ARROW_SCALE # Adjust arrow scale factor for better visualization ) plt.quiver( x[::step, ::step], y[::step, ::step], flow[::step, ::step, 0], -flow[::step, ::step, 1], # Reverse sign for correct direction color="r", pivot="tail", headwidth=2, headlength=3, scale=scale_factor, ) if filename is not None: plt.savefig(filename, bbox_inches="tight", pad_inches=0) # Read sphere data (PPM images) sphere_seq = [] for fname in natsorted(Path(INPUT_DATA).rglob("*.ppm")): sphere_seq.append(cv2.imread(str(fname), cv2.IMREAD_GRAYSCALE)) # Compute optical flow and visualize flows = [] for i in range(len(sphere_seq) - 1): # Compute optical flow using Lucas-Kanade method flow = lucas_kanade(sphere_seq[i], sphere_seq[i + 1]) # Apply magnitude thresholding to filter out small flows thresholded_flow = apply_magnitude_threshold( flow, threshold=FLOW_CUTOFF ) # Visualize the thresholded flow show_flow( sphere_seq[i], thresholded_flow, f"{INPUT_DATA}/sphere{i:02}_thresholded.png", ) # List all image files in the output folder image_files = sorted(Path(INPUT_DATA).glob("*.png")) # Create GIF animation from the image files with imageio.get_writer( f"{INPUT_DATA}/sphere_animation_MINWIN={MIN_WIN_SIZE}_NUMAXSPTS={NUM_AXIS_PTS}.gif", mode="I", ) as writer: for image_file in image_files: image = imageio.imread(str(image_file)) writer.append_data(image) print("GIF animation created successfully!") Using this approach, for example, produced results such as: MINWIN=10 NUMAXSPTS=6: MINWIN=9 NUMAXSPTS=50: MINWIN=6 NUMAXSPTS=32: MINWIN=5 NUMAXSPTS=50: Hierarchical Lucas-Kanade method As an update to this answer, below is a modification of the above analysis program which implements the hierarchical functions mentioned in the question and in the reference paper (e.g., pyramidal feature tracking, etc.). """Perform optical flow analysis using hierarchical Lucas-Kanade method and create GIF animation from rotating sphere image sequence. 
""" import cv2 import imageio import itertools as itl import matplotlib.pyplot as plt import numpy as np from natsort import natsorted from pathlib import Path INPUT_DATA = # [path to directory containing .ppm input data files] MAX_LEVEL = [3, 5] NUM_AXIS_PTS = [16, 32, 64] ARROW_SCALE = [1] MIN_WIN_SIZE = [31] FLOW_CUTOFF = [1e-2, 2.5e-2] combinations = [ *itl.product( MAX_LEVEL, NUM_AXIS_PTS, ARROW_SCALE, MIN_WIN_SIZE, FLOW_CUTOFF ) ] print(len(combinations)) print(combinations) for ( MAX_LEVEL, NUM_AXIS_PTS, ARROW_SCALE, MIN_WIN_SIZE, FLOW_CUTOFF, ) in combinations: def gen_gaussian_pyramid(im, max_level): """Generate Gaussian pyramid from the input image.""" gauss_pyr = [im] for i in range(max_level): gauss_pyr.append(cv2.pyrDown(gauss_pyr[-1])) return gauss_pyr def expand(img, dst_size, interpolation=None): """Upsample the flow to match the dimensions of another image.""" height, width = dst_size[:2] return cv2.GaussianBlur( cv2.resize( img, (width, height), interpolation=interpolation or cv2.INTER_LINEAR, ), (5, 5), 0, ) def remap(a, flow): """Warp the image by the flow in previous level.""" height, width = flow.shape[:2] y, x = np.meshgrid(np.arange(height), np.arange(width), indexing="ij") flow_map = np.column_stack( ( x.flatten() + -flow[:, :, 0].flatten(), y.flatten() + -flow[:, :, 1].flatten(), ) ) flow_map = flow_map.reshape((height, width, 2)) flow_map[:, :, 0] = np.clip(flow_map[:, :, 0], 0, width - 1) flow_map[:, :, 1] = np.clip(flow_map[:, :, 1], 0, height - 1) flow_map = flow_map.astype(np.float32) warped = cv2.remap(a, flow_map, None, cv2.INTER_LINEAR) return warped def hierarchical_lucas_kanade(im1, im2, max_level): """Compute optical flow using hierarchical Lucas-Kanade method.""" gauss_pyr_1 = gen_gaussian_pyramid(im1, max_level) gauss_pyr_2 = gen_gaussian_pyramid(im2, max_level) g_L = [0 for _ in range(max_level + 1)] d_L = [0 for _ in range(max_level + 1)] g_L[max_level] = np.zeros(gauss_pyr_1[-1].shape[:2] + (2,)).astype( np.float32 ) for level in range(max_level, -1, -1): warped = remap(gauss_pyr_1[level], g_L[level]) d_L[level] = lucas_kanade(warped, gauss_pyr_2[level]) g_L[level - 1] = 2.0 * expand( g_L[level] + d_L[level], gauss_pyr_2[level - 1].shape[:2] + (2,), interpolation=cv2.INTER_LINEAR, ) return g_L[0] + d_L[0] def lucas_kanade(img1, img2): """Compute optical flow using Lucas-Kanade method.""" img1 = np.copy(img1).astype(np.float32) img2 = np.copy(img2).astype(np.float32) window_size = MIN_WIN_SIZE Ix = cv2.Sobel(img1, cv2.CV_64F, 1, 0, ksize=5) Iy = cv2.Sobel(img1, cv2.CV_64F, 0, 1, ksize=5) It = img1 - img2 Ix2 = cv2.GaussianBlur(Ix ** 2, (window_size, window_size), 0) Iy2 = cv2.GaussianBlur(Iy ** 2, (window_size, window_size), 0) Ixy = cv2.GaussianBlur(Ix * Iy, (window_size, window_size), 0) Ixt = cv2.GaussianBlur(Ix * It, (window_size, window_size), 0) Iyt = cv2.GaussianBlur(Iy * It, (window_size, window_size), 0) det = Ix2 * Iy2 - Ixy ** 2 u = np.where((det > 1e-6), (Iy2 * Ixt - Ixy * Iyt) / (det + 1e-6), 0) v = np.where((det > 1e-6), (Ix2 * Iyt - Ixy * Ixt) / (det + 1e-6), 0) optical_flow = np.stack((u, v), axis=2) return optical_flow.astype(np.float32) def apply_magnitude_threshold(flow, threshold): """Apply magnitude thresholding to filter out small flows. Args: flow (numpy.ndarray): Input flow array. threshold (float): Magnitude threshold value. Returns: numpy.ndarray: Thresholded flow array. 
""" magnitude = np.linalg.norm( flow, axis=-1 ) # Compute magnitude of flow vectors magnitude = magnitude.reshape( magnitude.shape + (1,) ) # Reshape to match flow shape thresholded_flow = np.where(magnitude < threshold, 0, flow) return thresholded_flow def show_flow(img, flow, filename=None): """Visualize the flow on the input image.""" x = np.arange(0, img.shape[1], 1) y = np.arange(0, img.shape[0], 1) x, y = np.meshgrid(x, y) plt.figure(figsize=(10, 10)) fig = plt.imshow(img, cmap="gray", interpolation="bicubic") plt.axis("off") fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) num_points_per_axis = NUM_AXIS_PTS step = int(img.shape[0] / num_points_per_axis) scale_factor = ARROW_SCALE plt.quiver( x[::step, ::step], y[::step, ::step], flow[::step, ::step, 0], -flow[::step, ::step, 1], # Reverse sign for correct direction color="r", pivot="tail", headwidth=2, headlength=3, scale=scale_factor, ) if filename is not None: plt.savefig(filename, bbox_inches="tight", pad_inches=0) # Read sphere data (PPM images) sphere_seq = [] for fname in natsorted(Path(INPUT_DATA).rglob("*.ppm")): sphere_seq.append(cv2.imread(str(fname), cv2.IMREAD_GRAYSCALE)) # Compute optical flow and visualize flows = [] for i in range(len(sphere_seq) - 1): flow = hierarchical_lucas_kanade( sphere_seq[i], sphere_seq[i + 1], MAX_LEVEL ) # Apply magnitude thresholding to filter out small flows thresholded_flow = apply_magnitude_threshold( flow, threshold=FLOW_CUTOFF ) # Visualize the thresholded flow show_flow( sphere_seq[i], thresholded_flow, f"{INPUT_DATA}/sphere{i:02}.png", ) # Create GIF animation from the image files image_files = sorted(Path(INPUT_DATA).glob("*.png")) with imageio.get_writer( f"{INPUT_DATA}/sphere_animation_MAXLEVEL={MAX_LEVEL}" \ "_MINWINSIZE={MIN_WIN_SIZE}_NUMAXSPTS={NUM_AXIS_PTS}" \ "_ARROWSCALE={ARROW_SCALE}_FLOWCUTOFF={FLOW_CUTOFF}.gif", mode="I", ) as writer: for image_file in image_files: image = imageio.imread(str(image_file)) writer.append_data(image) There appears to be, in both this hierarchical and the previous simpler method shown above, a tradeoff in varying specifically the "minimum window size" parameter which is proving tricky to optimize: the smaller the window size, the more accurate the detection of flow but the less accurate the assessment of the direction of the flow. And vice versa - a larger window size (e.g., MIN_WIN_SIZE=31 [Note: the value for the parameter must be odd]) results in more accurate visualization of the direction of the flow in the images, but at the cost of introducing apparent noisiness in the flow detected (e.g., flow arrows appearing outside the bounds of the rotating sphere). Below are some representative examples selected from the more numerous combinatorial set of outputs generated by this version of the analysis (implementing the hierarchical adaptation of the Lucas-Kanade method). MAXLEVEL=5 MINWINSIZE=25 NUMAXSPTS=16 ARROWSCALE=2 FLOWCUTOFF=0.01 (i.e., β‰ˆNone)ΒΉ MAXLEVEL=3 MINWINSIZE=31 NUMAXSPTS=16 ARROWSCALE=1 FLOWCUTOFF=0.025 (Note: Depicted flow is [almost] all contained accurately within the sphere now, however with a concomitant loss in the sensitivity of detection (some peripheral areas are now missing flow quivers).) 
MAXLEVEL=4 MINWINSIZE=25 NUMAXSPTS=50 ARROWSCALE=2 FLOWCUTOFF=0.01 (i.e., β‰ˆNone) MAXLEVEL=4 MINWINSIZE=31 NUMAXSPTS=74 ARROWSCALE=70 FLOWCUTOFF=0.01 (i.e., β‰ˆNone) MAXLEVEL=5 MINWINSIZE=25 NUMAXSPTS=74 ARROWSCALE=2 FLOWCUTOFF=0.01 (i.e., β‰ˆNone) MAXLEVEL=5 MINWINSIZE=5 NUMAXSPTS=74 ARROWSCALE=2 FLOWCUTOFF=0.01 (i.e., β‰ˆNone) ΒΉ Note: Setting the parameter FLOW_CUTOFF=0.01 is essentially equivalent to having no threshold cutoff for the magnitudes of detected flow β€” all detected flow is passed through at this low-level value for the parameter.
2
2
77,586,285
2023-12-1
https://stackoverflow.com/questions/77586285/how-can-i-get-a-pytorch-tensor-containing-some-other-tensors-size-or-shape-wi
In the context of exporting pytorch code to ONNX, I get this warning: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. Here is the offending line: text_lengths = torch.tensor([text_inputs.shape[1]]).long().to(text_inputs.device) text_inputs is a torch.Tensor of shape torch.Size([1, 81]) And the warning is spot on and cannot be ignored, because the shape of text_inputs is supposed to be dynamic. I need text_lengths to be a torch.Tensor that contains the number 81 coming from the shape of text_inputs. The "offending line" from above succeeds in doing that, but we actually make a round trip from pytorch to a Python int and back to pytorch, because the elements in the torch.Size objects are Python ints. This is (1) somewhat weird, (2) probably inefficient in terms of GPU -> CPU -> GPU and, as stated above, an actual problem in the ONNX exporting context. Is there some other way how I can use a tensor's shape in torch computations, without "leaving" the torch world?
EDIT As was pointed out in the comments, the original answer was incorrect. Using torch.tensor will lead to a constant value in the exported graph. When tracing, outputs of torch.tensor.shape and torch.tensor.size should return tensors (instead of Python integers), so the code above should export as intended, with text_lengths = text_inputs.shape[1] returning a LongTensor with the correct value. It seems that the code doesn't behave as intended because of the brackets in the constructor: the shape inside of [text_inputs.shape[1]] gets interpreted as an integer inside a Python list, and is consequently saved as a constant during the trace. Dropping the extra brackets when creating the tensor will export correctly as a scalar defined at runtime by the shape of text_inputs: text_lengths = torch.LongTensor(text_inputs.size(1))
3
2
77,578,724
2023-11-30
https://stackoverflow.com/questions/77578724/conformal-prediction-intervals-insample-data-nixtla
Given the documentation of nixtla, I don't find any way to compute prediction intervals for in-sample predictions (training data), only for future predictions. Below is an example of what I can achieve, but it only predicts the future. from statsforecast.models import SeasonalExponentialSmoothing, ADIDA, ARIMA from statsforecast.utils import ConformalIntervals # Create a list of models and instantiation parameters intervals = ConformalIntervals(h=24, n_windows=2) models = [ SeasonalExponentialSmoothing(season_length=24,alpha=0.1, prediction_intervals=intervals), ADIDA(prediction_intervals=intervals), ARIMA(order=(24,0,12), season_length=24, prediction_intervals=intervals), ] sf = StatsForecast( df=train, models=models, freq='H', ) levels = [80, 90] # confidence levels of the prediction intervals forecasts = sf.forecast(h=24, level=levels) forecasts = forecasts.reset_index() forecasts.head() So my goal would be to do something like: forecasts = sf.forecast(df_x, level=levels) so that we can get prediction intervals on the training set.
You can access the in-sample forecast with a conformal prediction interval using the forecast_fitted_values method. Your selected models need to support in-sample fitted values; from your example code, SeasonalExponentialSmoothing and ADIDA do not currently support them. You can find the list of supported models in the official documentation. You need to specify the fitted=True argument in the forecast step; then you can retrieve the in-sample forecasts with forecast_fitted_values, as shown below. import pandas as pd from statsforecast import StatsForecast from statsforecast.models import SeasonalExponentialSmoothing, ADIDA, ARIMA from statsforecast.utils import ConformalIntervals train = pd.read_csv('https://auto-arima-results.s3.amazonaws.com/M4-Hourly.csv') test = pd.read_csv('https://auto-arima-results.s3.amazonaws.com/M4-Hourly-test.csv').rename(columns={'y': 'y_test'}) n_series = 1 uids = train['unique_id'].unique()[:n_series] # select first n_series of the dataset train = train.query('unique_id in @uids') test = test.query('unique_id in @uids') # Create a list of models and instantiation parameters intervals = ConformalIntervals(h=24, n_windows=2) models = [ ARIMA(order=(24,0,12), season_length=24, prediction_intervals=intervals), ] sf = StatsForecast( df=train, models=models, freq='H', n_jobs=-1 ) levels = [80, 90] # confidence levels of the prediction intervals forecasts = sf.forecast(h=24, level=levels, fitted=True) # Add fitted=True to store in-sample predictions. insample_forecasts = sf.forecast_fitted_values() # Access insample predictions
5
3
77,594,674
2023-12-3
https://stackoverflow.com/questions/77594674/azure-ml-deploymentidentityerror-failed-to-create-kubernetes-deployment-identi
I'm using Azure Machine Learning v2 SDK to create a model deployment on a kubernetes compute attached to an AML workspace. I'm able to deploy it locally as part of testing before deploying online. However, when tried to deploy online using KubernetesOnlineDeplyoment, I received DeploymentIdentityError: Failed to create Kubernetes deployment identity, Reason:RefreshExtensionIdentityNotSet. (More detailed error below) I'm provisioned the AKS cluster using terraform. I referred this official tutorial notebook as well. I've tried the local deployment flow mentioned in the tutorial and it works fine. In the tutorial, section 4.3 Attach Arc Cluster, I modified the compute_params dict to include identity as well. Below is the code I used to attach the cluster: compute = "testfooamlXXXX-c" from azure.ai.ml import load_compute compute_params = [ {"name": compute}, {"type": "kubernetes"}, { "resource_id": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/test-foo-aml/providers/Microsoft.ContainerService/managedClusters/testfooamlXXXX", }, {"identity": {"type":"SystemAssigned"}}, # This is the line I added ] k8s_compute = load_compute(source=None, params_override=compute_params) Below is how I'm creating an endpoint first: endpoint = KubernetesOnlineEndpoint( name=endpoint_name, compute=compute, description="this is a sample online endpoint", auth_mode="key", tags={"foo": "bar"}, ) ml_client.begin_create_or_update(endpoint).result() Then I created the deployment object, blue_deployment = KubernetesOnlineDeployment( name="blue", endpoint_name=endpoint_name, model=Model(path=str(model_path)), environment=Environment(name=env_name, version=env_version), code_configuration=CodeConfiguration( code=str(model_script_path.parent), scoring_script=model_script_path.name ), instance_count=1, ) Finally, it's the below line that causes the issue: ml_client.begin_create_or_update(blue_deployment).result() The Error: --------------------------------------------------------------------------- OperationFailed Traceback (most recent call last) File ~/miniconda3/envs/rishabh/lib/python3.11/site-packages/azure/core/polling/base_polling.py:757, in LROBasePolling.run(self) 756 try: --> 757 self._poll() 759 except BadStatus as err: File ~/miniconda3/envs/rishabh/lib/python3.11/site-packages/azure/core/polling/base_polling.py:789, in LROBasePolling._poll(self) 788 if _failed(self.status()): --> 789 raise OperationFailed("Operation failed or canceled") 791 final_get_url = self._operation.get_final_get_url(self._pipeline_response) OperationFailed: Operation failed or canceled The above exception was the direct cause of the following exception: HttpResponseError Traceback (most recent call last) /home/rishabh/aml/test-foo-model-deployments.ipynb Cell 51 line 1 ----> 1 ml_client.begin_create_or_update(blue_deployment).result() File ~/miniconda3/envs/rishabh/lib/python3.11/site-packages/azure/core/polling/_poller.py:251, in LROPoller.result(self, timeout) 242 def result(self, timeout: Optional[float] = None) -> PollingReturnType_co: 243 """Return the result of the long running operation, or 244 the result available after the specified timeout. 245 (...) 249 :raises ~azure.core.exceptions.HttpResponseError: Server problem with the query. 
250 """ --> 251 self.wait(timeout) 252 return self._polling_method.resource() File ~/miniconda3/envs/rishabh/lib/python3.11/site-packages/azure/core/tracing/decorator.py:78, in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs) 76 span_impl_type = settings.tracing_implementation() 77 if span_impl_type is None: ---> 78 return func(*args, **kwargs) 80 # Merge span is parameter is set, but only if no explicit parent are passed 81 if merge_span and not passed_in_parent: File ~/miniconda3/envs/rishabh/lib/python3.11/site-packages/azure/core/polling/_poller.py:270, in LROPoller.wait(self, timeout) 266 self._thread.join(timeout=timeout) 267 try: 268 # Let's handle possible None in forgiveness here 269 # https://github.com/python/mypy/issues/8165 --> 270 raise self._exception # type: ignore 271 except TypeError: # Was None 272 pass File ~/miniconda3/envs/rishabh/lib/python3.11/site-packages/azure/core/polling/_poller.py:185, in LROPoller._start(self) 181 """Start the long running operation. 182 On completion, runs any callbacks. 183 """ 184 try: --> 185 self._polling_method.run() 186 except AzureError as error: 187 if not error.continuation_token: File ~/miniconda3/envs/rishabh/lib/python3.11/site-packages/azure/core/polling/base_polling.py:772, in LROBasePolling.run(self) 765 raise HttpResponseError( 766 response=self._pipeline_response.http_response, 767 message=str(err), 768 error=err, 769 ) from err 771 except OperationFailed as err: --> 772 raise HttpResponseError(response=self._pipeline_response.http_response, error=err) from err HttpResponseError: (None) DeploymentIdentityError: Failed to create Kubernetes deployment identity, Reason:RefreshExtensionIdentityNotSet Details:Managed identity of AzureML extension is not assigned to the node pool of 'aks-default-XXXXXXXX-vmss000000'. The identity is used to give access for user container, such as pull image from ACR. Please see troubleshooting guide, available here: https://aka.ms/amlarc-tsg Code: None Message: DeploymentIdentityError: Failed to create Kubernetes deployment identity, Reason:RefreshExtensionIdentityNotSet Details:Managed identity of AzureML extension is not assigned to the node pool of 'aks-default-XXXXXXXX-vmss000000'. The identity is used to give access for user container, such as pull image from ACR. Please see troubleshooting guide, available here: https://aka.ms/amlarc-tsg Exception Details: (None) DeploymentIdentityError: Failed to create Kubernetes deployment identity, Reason:RefreshExtensionIdentityNotSet Details:Managed identity of AzureML extension is not assigned to the node pool of 'aks-default-XXXXXXXX-vmss000000'. The identity is used to give access for user container, such as pull image from ACR. Please see troubleshooting guide, available here: https://aka.ms/amlarc-tsg Code: None Message: DeploymentIdentityError: Failed to create Kubernetes deployment identity, Reason:RefreshExtensionIdentityNotSet Details:Managed identity of AzureML extension is not assigned to the node pool of 'aks-default-XXXXXXXX-vmss000000'. The identity is used to give access for user container, such as pull image from ACR. Please see troubleshooting guide, available here: https://aka.ms/amlarc-tsg Regarding the error, quoting the official documentation: ERROR: RefreshExtensionIdentityNotSet This error occurs when the extension is installed but the extension identity is not correctly assigned. You can try to reinstall the extension to fix it. 
I tried re-installing the extension and deploying but got the same error.
It turned out that the Azure ML extension's deployment identity-controller was being interfered with by aad-pod-identity. Removing aad-pod-identity from the cluster resolved the issue.
4
0
77,594,625
2023-12-3
https://stackoverflow.com/questions/77594625/how-can-i-fix-my-perceptron-to-recognize-numbers
My exercise is to train 10 perceptrons to recognize numbers (0 - 9). Each perceptron should learn a single digit. As training data, I've created 30 images (5x7 bmp). 3 variants per digit. I've got a perceptron class: import numpy as np def unit_step_func(x): return np.where(x > 0, 1, 0) def sigmoid(x): return 1 / (1 + np.exp(-x)) class Perceptron: def __init__(self, learning_rate=0.01, n_iters=1000): self.lr = learning_rate self.n_iters = n_iters self.activation_func = unit_step_func self.weights = None self.bias = None #self.best_weights = None #self.best_bias = None #self.best_error = float('inf') def fit(self, X, y): n_samples, n_features = X.shape self.weights = np.zeros(n_features) self.bias = 0 #self.best_weights = self.weights.copy() #self.best_bias = self.bias for _ in range(self.n_iters): for x_i, y_i in zip(X, y): linear_output = np.dot(x_i, self.weights) + self.bias y_predicted = self.activation_func(linear_output) update = self.lr * (y_i - y_predicted) self.weights += update * x_i self.bias += update #current_error = np.mean(np.abs(y - self.predict(X))) #if current_error < self.best_error: # self.best_weights = self.weights.copy() # self.best_bias = self.bias # self.best_error = current_error def predict(self, X): linear_output = np.dot(X, self.weights) + self.bias y_predicted = self.activation_func(linear_output) return y_predicted I've tried both, unit_step_func and sigmoid, activation functions, and pocketing algorithm to see if there's any difference. I'm a noob, so I'm not sure if this is even implemented correctly. This is how I train these perceptrons: import numpy as np from PIL import Image from Perceptron import Perceptron import os def load_images_from_folder(folder, digit): images = [] labels = [] for filename in os.listdir(folder): img = Image.open(os.path.join(folder, filename)) if img is not None: images.append(np.array(img).flatten()) label = 1 if filename.startswith(f"{digit}_") else 0 labels.append(label) return np.array(images), np.array(labels) digits_to_recognize = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] perceptrons = [] for digit_to_recognize in digits_to_recognize: X, y = load_images_from_folder("data", digit_to_recognize) p = Perceptron() p.fit(X, y) perceptrons.append(p) in short: training data filename is in the format digit_variant. As I said before, each digit has 3 variants, so for digit 0 it is 0_0, 0_1, 0_2, for digit 1 it's: 1_0, 1_1, 1_2, and so on... load_images_from_folder function loads 30 images and checks the name. If digit part of the name is the same as digit input then it appends 1 in labels, so that the perceptron knows that it's the desired digit. I know that it'd be better to load these images once and save them in some array of tuples, for example, but I don't care about the performance right now (I won't care later either). for digit 0 labels array is [1, 1, 1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0] for digit 1 labels array is [0,0,0, 1, 1, 1, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0] and so on... then I train 10 perceptrons using this data. This exercise also requires to have some kind of GUI that allows me to draw a number. I've choosen pygame, I could use pyQT, it actually does not matter. 
This is the code, you can skip it, it's not that important (except for on_rec_button function, but I'll address on it): import pygame import sys pygame.init() cols, rows = 5, 7 square_size = 50 width, height = cols * square_size, (rows + 2) * square_size screen = pygame.display.set_mode((width, height)) pygame.display.set_caption("Zad1") rec_button_color = (0, 255, 0) rec_button_rect = pygame.Rect(0, rows * square_size, width, square_size) clear_button_color = (255, 255, 0) clear_button_rect = pygame.Rect(0, (rows + 1) * square_size + 1, width, square_size) mouse_pressed = False drawing_matrix = np.zeros((rows, cols), dtype=int) def color_square(x, y): col = x // square_size row = y // square_size if 0 <= row < rows and 0 <= col < cols: drawing_matrix[row, col] = 1 def draw_button(color, rect): pygame.draw.rect(screen, color, rect) def on_rec_button(): np_array_representation = drawing_matrix.flatten() for digit_to_recognize in digits_to_recognize: p = perceptrons[digit_to_recognize] predicted_number = p.predict(np_array_representation) if predicted_number == digit_to_recognize: print(f"Image has been recognized as number {digit_to_recognize}") def on_clear_button(): drawing_matrix.fill(0) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() elif event.type == pygame.MOUSEBUTTONDOWN and event.button == 3: mouse_pressed = True elif event.type == pygame.MOUSEBUTTONUP and event.button == 3: mouse_pressed = False elif event.type == pygame.MOUSEMOTION: mouse_x, mouse_y = event.pos if mouse_pressed: color_square(mouse_x, mouse_y) elif event.type == pygame.MOUSEBUTTONDOWN and event.button == 1: if rec_button_rect.collidepoint(event.pos): on_rec_button() if clear_button_rect.collidepoint(event.pos): on_clear_button() for i in range(rows): for j in range(cols): if drawing_matrix[i, j] == 1: pygame.draw.rect(screen, (255, 0, 0), (j * square_size, i * square_size, square_size, square_size)) else: pygame.draw.rect(screen, (0, 0, 0), (j * square_size, i * square_size, square_size, square_size)) draw_button(rec_button_color, rec_button_rect) draw_button(clear_button_color, clear_button_rect) pygame.display.flip() so, now that I run the app, draw the digit 3, and click the green button that runs on_rec_button function, I expected to see Image has been recognized as number 3, but I get Image has been recognized as number 0. This is what I draw: These are training data: These are very small because of the resolution 5x7 that was required in the exercise. When I draw the digit 1 then I get 2 results: Image has been recognized as number 0 Image has been recognized as number 1 What should I do to make it work the way I want? I don't expect this to work 100% accurate but I guess it could be better.
There seem to be a few issues in the code; I will try to address them: It's missing the backpropagation function derivatives, as mentioned in the comments! Those are very important because they are the ones that guide the correction in the right direction (based on the gradient). Similarly, the bias is not calculated correctly. Here is working code: def sigmoid(x): return 1 / (1 + np.exp(-x)) def sigmoid_derivative(x): return x * (1 - x) class Perceptron: def __init__(self, learning_rate=0.01, n_iters=1000): self.lr = learning_rate self.n_iters = n_iters self.weights = None self.bias = None def fit(self, X, y): n_samples, n_features = X.shape self.bias = 0 self.weights = np.zeros(n_features) for _ in range(self.n_iters): for x_i, y_i in zip(X, y): linear_output = np.dot(x_i, self.weights) + self.bias y_predicted = sigmoid(linear_output) error = y_i - y_predicted output_error = error * sigmoid_derivative(y_predicted) self.weights += x_i.T.dot(output_error) * self.lr self.bias += np.sum(output_error, axis=0, keepdims=True) * self.lr def predict(self, X): linear_output = np.dot(X, self.weights) + self.bias y_predicted = sigmoid(linear_output) return y_predicted As the main question is about the perceptron, I preferred to skip the pygame code. I used from keras.datasets import mnist to mock the images. The results correlate, given that I didn't change the Perceptron class signature or main functionality. Here is the testing code: from keras.datasets import mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images_resized = np.zeros((train_images.shape[0], 784)) test_images_resized = np.zeros((test_images.shape[0], 784)) for i in range(train_images.shape[0]): train_images_resized[i] = np.resize(train_images[i]/np.max(train_images[i]), 784).flatten() for i in range(test_images.shape[0]): test_images_resized[i] = np.resize(test_images[i]/np.max(train_images[i]), 784).flatten() desired_digit = 1 train_labels = [ 1 if label == desired_digit else 0 for label in train_labels] test_labels = [ 1 if label == desired_digit else 0 for label in test_labels] digits_to_recognize = [desired_digit] X, y = train_images_resized,train_labels p = Perceptron(learning_rate=0.05,n_iters=100000) p.fit(X, y) Note that I had to normalize (divide by the max value of each image) the input data so that the sigmoid function doesn't get saturated, which would make the derivative function = 0. Results! p.predict(test_images_resized) array([0.004823, 0.531128, 0.94834 , 0.000155, 0.002682, 0.981524, 0.008962, 0.067788, 0.017121, 0.00063 ]) test_labels [0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
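If you want a single accuracy number instead of eyeballing the raw sigmoid outputs, a minimal sketch (reusing test_images_resized, test_labels and the trained p from the testing code above) is to threshold the predictions at 0.5:

import numpy as np

# Probabilities above 0.5 are treated as "this is the desired digit"
probs = p.predict(test_images_resized)
preds = (probs > 0.5).astype(int)

# .ravel() guards against any extra singleton dimension in the output
accuracy = np.mean(preds.ravel() == np.array(test_labels))
print(f"Accuracy for digit {desired_digit}: {accuracy:.3f}")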
2
1
77,592,100
2023-12-2
https://stackoverflow.com/questions/77592100/how-to-show-axis-labels-of-all-subplots-when-the-labels-are-strings
Problem summary Whenever I try to create a plot with plotly express 5.18.0 containing a subplot and axes labels that are not numbers, I only get labels for the first subplot subsequent subplots show empty axis labels. How can I ensure that all subplots show their respective axes labels, even if they contain strings? Example data import numpy as np import pandas as pd import plotly.express as px N = 100 food = ["Dim sum", "Noodles", "Burger", "Pizza", "Pancake"] drink = ["Beer", "Wine", "Soda", "Water", "Fruit juice", "Coffee", "Tea"] df = pd.DataFrame( { "age": np.random.randint(8, 99, N), "favourite_food": np.random.choice(food, N, replace=True), "favourite_drink": np.random.choice(drink, N, replace=True), "max_running_speed": np.random.random(N)*20, "number_of_bicycles": np.random.randint(0, 5, N) } ) df.age.replace({range(0, 19): "Kid", range(19, 100): "Adult"}, inplace=True) Random 5 rows: age favourite_food favourite_drink max_running_speed number_of_bicycles 0 Adult Dim sum Wine 8.57536 2 65 Kid Pizza Water 9.45698 1 57 Kid Pancake Beer 11.1445 0 84 Adult Dim sum Soda 8.80699 0 45 Adult Pizza Fruit juice 17.7258 4 Demonstration of problem If I now create a figure with two subplots: First subplot contains the distribution of the max. running speed (a number) Second subplot contains the distribution of the number of bicycles (a number) For convenience I use the facet_col argument in combination with the wide-form support of plotly express and the formatting updates I found in this related Q&A): px.histogram( df, x=["max_running_speed", "number_of_bicycles"], facet_col="variable", color="age", barmode="group", histnorm="percent", text_auto=".2r", ).update_xaxes(matches=None, showticklabels=True).update_yaxes(matches=None, showticklabels=True) All works as it should βœ…: I get separate ranges x- and y-axes and I get separate labels on the x-axes. Now I do the same, but for the columns with text data: px.histogram( df, x=["favourite_food", "favourite_drink"], facet_col="variable", color="age", barmode="group", histnorm="percent", text_auto=".2r", ).update_xaxes(matches=None, showticklabels=True).update_yaxes(matches=None, showticklabels=True) Now there's a problem ❌: The x-axis of the right plot does not show the names of the favourite drinks. What I've tried I checked the underlying data JSON object, as I noticed that when I hover over the bars of the right plot, the "value" field is empty: But when I inspect the JSON object in the .data key of the figure, I see that x-values are present for both histograms:
One of the plotly contributors suggested I use .update_traces(bingroup=None) on my figure. This indeed shows the missing categories on the right plot and is a viable workaround for now.
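For reference, here is the workaround applied to the second example from the question; the only change is the trailing .update_traces(bingroup=None):

px.histogram(
    df,
    x=["favourite_food", "favourite_drink"],
    facet_col="variable",
    color="age",
    barmode="group",
    histnorm="percent",
    text_auto=".2r",
).update_xaxes(
    matches=None, showticklabels=True
).update_yaxes(
    matches=None, showticklabels=True
).update_traces(
    bingroup=None  # stop the facets from sharing one bin group
)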
3
1
77,593,997
2023-12-3
https://stackoverflow.com/questions/77593997/efficiently-compute-item-colaborating-filtering-similarity-using-numba-polars-a
Disclaimer The question is part of a thread including those two SO questions (q1, q2) The data resemble movie ratings from the ratings.csv file (~891mb) of ml-latest dataset. Once I read the csv file with polars library like: movie_ratings = pl.read_csv(os.path.join(application_path + data_directory, "ratings.csv")) Let's assume we want to compute the similarity between movies seen by user=1 (so for example 62 movies) with the rest of the movies in the dataset. FYI, the dataset has ~83,000 movies so for each other_movie (82,938) compute a similarity with each movie seen by user 1 (62 movies). The complexity is 62x82938 (iterations). For this example the benchmarks reported are only for 400/82,938 other_movies To do so, I create two polars dataframes. One dataframe with the other_movies (~82,938 row) and a second dataframe with only the movies seen by the user (62 rows). user_ratings = movie_ratings.filter(pl.col("userId")==input_id) #input_id = 1 (data related to user 1) user_rated_movies = list(user_ratings.select(pl.col("movieId")).to_numpy().ravel()) #movies seen by user1 potential_movies_to_recommend = list( movie_ratings.select("movieId").filter( ~(pl.col("movieId").is_in(user_rated_movies)) ).unique().sort("movieId").to_numpy().ravel() ) items_metadata = ( movie_ratings.filter( ~pl.col("movieId").is_in(user_rated_movies) #& pl.col("movieId").is_in(potential_movie_recommendations[:total_unseen_movies]) ) .group_by("movieId").agg( users_seen_movie=pl.col("userId").unique(), user_ratings=pl.col("rating") ) ) target_items_metadata = ( movie_ratings.filter( pl.col("movieId").is_in(user_rated_movies) #& pl.col("movieId").is_in(potential_movie_recommendations[:total_unseen_movies]) ).group_by("movieId").agg( users_seen_movie=pl.col("userId").unique(), user_ratings=pl.col("rating") ) ) The result are two polars dataframes with rows(movies) and columns(users seen the movies & the ratings from each user). The first dataframe contains only other_movies that we can potentially recommend to user1 seen he/she has not seen them. The second dataframe contains only the movies seen by the user. Next my approach is to iterate over each row of the first dataframe by applying a UDF function. item_metadata_similarity = ( items_metadata.with_columns( similarity_score=pl.struct(pl.all()).map_elements( lambda row: item_compute_similarity_scoring_V2(row, similarity_metric, target_items_metadata), return_dtype=pl.List(pl.List(pl.Float64)), strategy="threading" ) ) ) , where item_compute_similarity_scoring_V2 is defined as: def item_compute_similarity_scoring_V2( row, target_movies_metadata:pl.DataFrame ): users_item1 = np.asarray(row["users_seen_movie"]) ratings_item1 = np.asarray(row["user_ratings"]) computed_similarity: list=[] for row2 in target_movies_metadata.iter_rows(named=True): #iter over each row from the second dataframe with the movies seen by the user. 
users_item2=np.asarray(row2["users_seen_movie"]) ratings_item2=np.asarray(row2["user_ratings"]) r1, r2 = item_ratings(users_item1, ratings_item1, users_item2, ratings_item2) if r1.shape[0] != 0 and r2.shape[0] != 0: similarity_score = compute_similarity_score(r1, r2) if similarity_score > 0.0: #filter out negative or zero similarity scores computed_similarity.append((row2["movieId"], similarity_score)) most_similar_pairs = sorted(computed_similarity, key=lambda x: x[1], reverse=True) return most_similar_pairs , item_ratings & compute_similarity_score defined as def item_ratings(u1:np.ndarray, r1:np.ndarray, u2:np.ndarray, r2:np.ndarray) -> (np.ndarray, np.ndarray): common_elements, indices1, indices2 = np.intersect1d(u1, u2, return_indices=True) sr1 = r1[indices1] sr2 = r2[indices2] assert len(sr1)==len(sr2), "ratings don't have same lengths" return sr1, sr2 @jit(nopython=True, parallel=True) def compute_similarity_score(array1:np.ndarray, array2:np.ndarray) -> float: assert(array1.shape[0] == array2.shape[0]) a1a2 = 0 a1a1 = 0 a2a2 = 0 for i in range(array1.shape[0]): a1a2 += array1[i]*array2[i] a1a1 += array1[i]*array1[i] a2a2 += array2[i]*array2[i] cos_theta = 1.0 if a1a1!=0 and a2a2!=0: cos_theta = float(a1a2/np.sqrt(a1a1*a2a2)) return cos_theta The function basically, iterates over each row of the second dataframe and for each row computes the similarity between other_movie and the movie seen by the user. Thus, for 400 movies we do 400*62 iterations, generating 62 similarity scores per other_movie. The result from each computation is an array with schema [[1, 0.20], [110, 0.34]]... (length 62 pairs per other_movie) Benchmarks for 400 movies INFO - Item-Item: Computed similarity scores for 400 movies in: 0:05:49.887032 ~2 minutes. ~5gb of RAM used. I would to identify how can I improve the computations by using native polars commands or exploiting the numba framework for parallelism. Update - 2nd approach using to_numpy() operations without iter_rows() and map_elements() user_ratings = movie_ratings.filter(pl.col("userId")==input_id) #input_id = 1 user_rated_movies = user_ratings.select(pl.col("movieId")).to_numpy().ravel() potential_movies_to_recommend = list( movie_ratings.select("movieId").filter( ~(pl.col("movieId").is_in(user_rated_movies)) ).unique().sort("movieId").to_numpy().ravel() ) items_metadata = ( movie_ratings.filter( ~pl.col("movieId").is_in(user_rated_movies) ) ) # print(items_metadata.head(5)) target_items_metadata = ( movie_ratings.filter( pl.col("movieId").is_in(user_rated_movies) ) ) # print(target_items_metadata.head(5)) With this second approach items_metadata and target_items_metadata are two large polars tables. Then my next step is to save both tables into numpy.ndarrays with the to_numpy() command. 
items_metadata_array = items_metadata.to_numpy() target_items_metadata_array = target_items_metadata.to_numpy() computed_similarity_scores:dict = {} for i, other_movie in enumerate(potential_movies_to_recommend[:400]): #take the first 400 unseen movies by user 1 mask = items_metadata_array[:, 1] == other_movie other_movies_chunk = items_metadata_array[mask] u1 = other_movies_chunk[:,0].astype(np.int32) r1 = other_movies_chunk[:,2].astype(np.float32) computed_similarity: list=[] for i, user_movie in enumerate(user_rated_movies): print(user_movie) mask = target_items_metadata_array[:, 1] == user_movie target_movie_chunk = target_items_metadata_array[mask] u2 = target_movie_chunk[:,0].astype(np.int32) r2 = target_movie_chunk[:,2].astype(np.float32) common_r1, common_r2 = item_ratings(u1, r1, u2, r2) if common_r1.shape[0] != 0 and common_r2.shape[0] != 0: similarity_score = compute_similarity_score(common_r1, common_r2) if similarity_score > 0.0: computed_similarity.append((user_movie, similarity_score)) most_similar_pairs = sorted(computed_similarity, key=lambda x: x[1], reverse=True)[:k_similar_user] computed_similarity_scores[str(other_movie)] = most_similar_pairs Benchmarks of the second approach (8.50 minutes > 6 minutes of the first approach) Item-Item: Computed similarity scores for 400 movies in: 0:08:50.537102 Update - 3rd approach using iter_rows() operations In my third approach, I have better results from the previous two methods, getting results in approximately 2 minutes for user 1 and 400 movies. items_metadata = ( movie_ratings.filter( ~pl.col("movieId").is_in(user_rated_movies) ) .group_by("movieId").agg( users_seen_movie=pl.col("userId").unique(), user_ratings=pl.col("rating") ) ) target_items_metadata = ( movie_ratings.filter( pl.col("movieId").is_in(user_rated_movies) ).group_by("movieId").agg( users_seen_movie=pl.col("userId").unique(), user_ratings=pl.col("rating") ) ) items_metadata is the metadata of other_movies not seen by the user 1. target_items_metadata the metadata of the movies rated by user 1. By the term metadata I refer to the two aggregated .agg() columns, users_seen_movie and user_ratings Finally, I create two for loops using iter_rows() method from polars def cosine_similarity_score(array1:np.ndarray, array2:np.ndarray) -> float: assert(array1.shape[0] == array2.shape[0]) a1a2 = 0 a1a1 = 0 a2a2 = 0 for i in range(array1.shape[0]): a1a2 += array1[i]*array2[i] a1a1 += array1[i]*array1[i] a2a2 += array2[i]*array2[i] # cos_theta = 1.0 cos_theta = 0.0 if a1a1!=0 and a2a2!=0: cos_theta = float(a1a2/np.sqrt(a1a1*a2a2)) return max(0.0, cos_theta) for row1 in item_metadata.iter_rows(): computed_similarity: list= [] for row2 in target_items_metadata.iter_rows(): r1, r2 = item_ratings(np.asarray(row1[1]), np.asarray(row1[2]), np.asarray(row2[1]), np.asarray(row2[2])) if r1.shape[0]!=0 and r2.shape[0]!=0: similarity_score = cosine_similarity_score(r1, r2) computed_similarity.append((row2[0], similarity_score if similarity_score > 0 else 0)) computed_similarity_scores[str(row1[0])] = sorted(computed_similarity, key=lambda x: x[1], reverse=True)[:k_similar_user] Benchmarks for 400 movies INFO - Item-Item: Computed similarity scores for 400 movies in: 0:01:50 ~2 minutes. ~4.5gb of RAM used.
I'm not too familiar with numba, so before trying to compare timings, the first thing I would try to do is create a "fully native" Polars approach: This is a direct translation of the current approach (i.e. it still contains the "double for loop") so it just serves as a baseline attempt. Because it uses the Lazy API, nothing in the loops is computed. That is all done when .collect() is called (which allows Polars to parallelize the work). The > 0.0 filtering for the similarity_score would be done after the results are collected. input_id = 1 is_user_rating = pl.col("userId") == input_id can_recommend = ( pl.col("movieId").is_in(pl.col("movieId").filter(is_user_rating)).not_() ) cosine_similarity = ( pl.col('rating').dot('rating_right') / ( pl.col('rating').pow(2).sum().sqrt() * pl.col('rating_right').pow(2).sum().sqrt() ) ) user_rated_movies = movie_ratings.filter(is_user_rating).select("movieId").to_series() potential_movies_to_recommend = ( movie_ratings.filter(can_recommend).select(pl.col("movieId").unique().sort()) ) # use the Lazy API so we can compute in parallel df = movie_ratings.lazy() computed_similarity_scores = [] for other_movie in potential_movies_to_recommend.head(1).to_series(): # .head(N) potential movies for user_movie in user_rated_movies: score = ( df.filter(pl.col("movieId") == user_movie) .join( df.filter(pl.col("movieId") == other_movie), on = "userId" ) .select(cosine = cosine_similarity) .select(user_movie=user_movie, other_movie=other_movie, similarity_score="cosine") ) computed_similarity_scores.append(score) # All scores are computed in parallel computed_similarity_scores_polars = pl.concat(computed_similarity_scores).collect() shape: (62, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ user_movie ┆ other_movie ┆ similarity_score β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i32 ┆ i32 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════β•ͺ══════════════════║ β”‚ 1 ┆ 2 ┆ 0.95669 β”‚ β”‚ 110 ┆ 2 ┆ 0.950086 β”‚ β”‚ 158 ┆ 2 ┆ 0.957631 β”‚ β”‚ 260 ┆ 2 ┆ 0.945542 β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ 49647 ┆ 2 ┆ 0.9411 β”‚ β”‚ 52458 ┆ 2 ┆ 0.955353 β”‚ β”‚ 53996 ┆ 2 ┆ 0.930388 β”‚ β”‚ 54259 ┆ 2 ┆ 0.95469 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Testing .head(100) I get 58s runtime compared to 111s runtime for your example, memory consumption is the same. 
duckdb As a comparison, duckdb with .head(400) runs in 5s import duckdb df = duckdb.sql(""" with df as (from 'imdb.parquet'), user as (from df where movieId in (from df select movieId where userId = 1)), movies as (from df where movieId not in (from df select movieId where userId = 1)), other as (from df where movieId in (from movies select distinct movieId order by movieId limit 400)) from user join other using (userId) select user.movieId user_movie, other.movieId other_movie, list_cosine_similarity( list(user.rating), list(other.rating) ) similarity_score group by user_movie, other_movie order by user_movie, other_movie """).pl() shape: (24_764, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ user_movie ┆ other_movie ┆ similarity_score β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════β•ͺ══════════════════║ β”‚ 1 ┆ 2 ┆ 0.95669 β”‚ β”‚ 1 ┆ 3 ┆ 0.941348 β”‚ β”‚ 1 ┆ 4 ┆ 0.92169 β”‚ β”‚ 1 ┆ 5 ┆ 0.943999 β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ 54259 ┆ 407 ┆ 0.941241 β”‚ β”‚ 54259 ┆ 408 ┆ 0.934745 β”‚ β”‚ 54259 ┆ 409 ┆ 0.937361 β”‚ β”‚ 54259 ┆ 410 ┆ 0.94937 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Elapsed time: 5.02638 seconds
3
3
77,609,841
2023-12-5
https://stackoverflow.com/questions/77609841/fastest-way-to-construct-sparse-block-matrix-in-python
I want to construct a matrix of shape (N,2N) in Python. I can construct the matrix as follows import numpy as np N = 10 # 10,100,1000, whatever some_vector = np.random.uniform(size=N) some_matrix = np.zeros((N, 2*N)) for i in range(N): some_matrix[i, 2*i] = 1 some_matrix[i, 2*i + 1] = some_vector[i] So the result is a matrix of mostly zeros, but where on row i, the 2*i and 2*i + 1 columns are populated. Is there a faster way to construct the matrix without the loop? Feels like there should be some broadcasting operation... Edit: There have been some very fast, great answers! I am going to extend the question a touch to my actual use case. Now suppose some_vector has a shape (N,T). I want to construct a matrix of shape (N,2*N,T) analogously to the previous case. The naive approach is: N = 10 # 10,100,1000, whatever T = 500 # or whatever some_vector = np.random.uniform(size=(N,T)) some_matrix = np.zeros((N, 2*N,T)) for i in range(N): for t in range(T): some_matrix[i, 2*i,t] = 1 some_matrix[i, 2*i + 1,t] = some_vector[i,t] Can we extend the previous answers to this new case?
Variant 1 You can replace the loop with a broadcasting assignment, which interleaves the columns of an identity matrix with the columns of a diagonal matrix: a = np.eye(N) b = np.diag(some_vector) c = np.empty((N, 2*N)) c[:, 0::2] = a c[:, 1::2] = b This is concise, but not optimal. This requires allocating some unnecessary intermediate matrices (a and b), which becomes increasing expensive as N grows larges. This also involves performing random-access assignments to every position in c, instead of just the non-zero entries. This may be faster or slower than the implementation in the question for different values of N. Variant 2 Similar idea Variant 1, but only performs a single allocation and avoids unnecessary assignments to zero entries. We create a single vector of size 2*N**2, which essentially represents the rows of some_matrix concatenated with one another. The non-zero positions are populated using broadcasting assignments. We then create an (N, 2*N) view into this vector using np.ndarray.reshape. some_matrix = np.zeros(2 * n**2) step = 2 * (n + 1) some_matrix[::step] = 1 some_matrix[1::step] = some_vector some_matrix = some_matrix.reshape(n, 2 * n) Performance Comparison For relatively small N (N=100): Variant 1 is about 40% faster than the Python loop (12 us vs 21 us) Variant 2 is about 80% faster than the Python loop (4 us vs 21 us) For large N (N=10_000): Variant 1 is slower than the Python loop. Variant 2 is about 20% faster than the Python loop (10 us vs 13 us) Setup: import numpy as np def original(n): some_vector = np.random.uniform(size=n) some_matrix = np.zeros((n, 2 * n)) for i in range(n): some_matrix[i, 2 * i] = 1 some_matrix[i, 2 * i + 1] = some_vector[i] def variant_1(n): some_vector = np.random.uniform(size=n) a = np.eye(n) b = np.diag(some_vector) c = np.empty((n, 2*n)) c[:, 0::2] = a c[:, 1::2] = b def variant_2(n): some_vector = np.random.uniform(size=n) some_matrix = np.zeros(2 * n**2) step = 2 * (n + 1) some_matrix[::step] = 1 some_matrix[1::step] = some_vector some_matrix = some_matrix.reshape(n, 2*n) Timing at N=100: %timeit -n1000 original(100) 21.1 Β΅s Β± 2.22 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) %timeit -n1000 variant_1(100) 12.2 Β΅s Β± 431 ns per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) %timeit -n1000 variant_2(100) 4.37 Β΅s Β± 21.4 ns per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) Timing at N=1_000 %timeit -n100 original(1_000) 631 Β΅s Β± 98.5 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) %timeit -n100 variant_1(1_000) 5.24 ms Β± 157 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) %timeit -n100 variant_2(1_000) 408 Β΅s Β± 12.8 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) Timing at N=10_000 %timeit -n100 original(10_000) 12.6 ms Β± 225 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) %timeit -n100 variant_2(10_000) 10.1 ms Β± 109 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) Tested using Python 3.10.12 and Numpy v1.26.0.
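For the (N, 2*N, T) case added in the question's edit, one option is plain integer-array indexing, which removes both loops. This is only a sketch of the idea (a different trick from Variant 2, and not benchmarked against the variants above):

import numpy as np

N, T = 10, 500
some_vector = np.random.uniform(size=(N, T))

idx = np.arange(N)
some_matrix = np.zeros((N, 2 * N, T))
# Set position (i, 2*i, t) to 1 for every t in one shot
some_matrix[idx, 2 * idx, :] = 1
# Set position (i, 2*i + 1, t) to some_vector[i, t]; shapes (N, T) match
some_matrix[idx, 2 * idx + 1, :] = some_vector

The same indexing also works for the original 2-D case by dropping the trailing axis, if you prefer one pattern for both shapes.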
2
3
77,608,616
2023-12-5
https://stackoverflow.com/questions/77608616/merge-python-logging-handler-output
I am implementing python.logging for a project that relies on a 3rd-party software that also uses logging. The issue is we want console output, and the logs from the two are getting double printed. Is there a way to set up a handler to "combine" the output of sub-handlers? import logging import sys def third_party_use(): print_log = logging.StreamHandler(stream=sys.stdout) print_log.setLevel(logging.INFO) print_log.name = "SubPrintLog" print_log.setFormatter(logging.Formatter("SubPrintLog %(name)s - %(levelname)s: %(message)s")) third_party_logger = logging.getLogger("ThirdParty") third_party_logger.addHandler(print_log) third_party_logger.info("I don't want this to print twice") if __name__ == '__main__': print_log = logging.StreamHandler(stream=sys.stdout) print_log.setLevel(logging.INFO) print_log.name = "MyPrintLog" print_log.setFormatter(logging.Formatter("MyPrintLog %(name)s - %(levelname)s: %(message)s")) logging.basicConfig(level=0, handlers=[print_log]) logger = logging.getLogger(__name__) logger.debug("This shouldn't appear") logger.info("This should") logger.warning("This definitely should") third_party_use() logger.info("Just my stuff again") yields: MyPrintLog __main__ - INFO: This should MyPrintLog __main__ - WARNING: This definitely should SubPrintLog ThirdParty - INFO: I don't want this to print twice MyPrintLog ThirdParty - INFO: I don't want this to print twice MyPrintLog __main__ - INFO: Just my stuff again Is there a way to set up MyPrintLog to not reproduce output from ThirdParty (assume we have no access to modify the configuration of ThirdParty. I was hoping that the propagate flag would fix this problem, but it flows the other direction and we can't modify ThirdParty) One solution that "works" is to effectively disable MyPrintLog when calling ThirdParty: def turn_off_console(): for idx, handler in enumerate(logger.root.handlers): if handler.name == "MyPrintLog": old_level = handler.level handler.setLevel(logging.CRITICAL) return idx, old_level def turn_on_console(handle_index, level): logger.root.handlers[handle_index].setLevel(level) And then wrapping each access to the third party: idx, old_level = turn_off_console() third_party_use() turn_on_console(idx, old_level) Which yields the correct output: MyPrintLog __main__ - INFO: This should MyPrintLog __main__ - WARNING: This definitely should SubPrintLog ThirdParty - INFO: I don't want this to print twice MyPrintLog __main__ - INFO: Just my stuff again But, that means I have to toggle the logger every time, which is prone to error. EDIT:::SOLVED class NoThirdParty(logging.Filter): def filter(self, record): return not record.name == "ThirdParty" if __name__ == '__main__': print_log = logging.StreamHandler(stream=sys.stdout) print_log.setLevel(logging.INFO) print_log.addFilter(NoThirdParty()) # Do Not Reproduce Third Party! print_log.name = "MyPrintLog" print_log.setFormatter(logging.Formatter("MyPrintLog %(name)s - %(levelname)s: %(message)s")) ... yields: MyPrintLog __main__ - INFO: This should MyPrintLog __main__ - WARNING: This definitely should SubPrintLog ThirdParty - INFO: I don't want this to print twice MyPrintLog __main__ - INFO: Just my stuff again This Solution solves the same problem by applying the filter to the 3rd Party logger
class NoThirdParty(logging.Filter): def filter(self, record): return not record.name == "ThirdParty" if __name__ == '__main__': print_log = logging.StreamHandler(stream=sys.stdout) print_log.setLevel(logging.INFO) print_log.addFilter(NoThirdParty()) # Do Not Reproduce Third Party! print_log.name = "MyPrintLog" print_log.setFormatter(logging.Formatter("MyPrintLog %(name)s - %(levelname)s: %(message)s")) ... yields: MyPrintLog __main__ - INFO: This should MyPrintLog __main__ - WARNING: This definitely should SubPrintLog ThirdParty - INFO: I don't want this to print twice MyPrintLog __main__ - INFO: Just my stuff again
2
1
77,608,504
2023-12-5
https://stackoverflow.com/questions/77608504/convert-numpy-array-in-column-vector
I am new to python programming so excuse me if the question may seem silly or trivial. So for a certain function I need to do a check in case x is a vector and (in that case) it must be a column vector. However I wanted to make it so that the user could also pass it a row vector and turn it into a column vector. However at this point how do I make it not do the check if x is a scalar quantity. I attach a code sketch to illustrate the problem. Thanks in advance to whoever responds import numpy as np x = np.arange(80, 130, 10) # x: float = 80.0 print('array before:\n', x) if x is not np.array: x = np.array(x).reshape([len(x), 1]) print('array after:\n', x)
You can check the type using if not isinstance(x, np.ndarray). After that check you can check the number of dimensions of the array and convert it to a columnar array. In the code below, the first if block checks the type and converts the input to an array. Then we get how many dimensions are missing, i.e. a scalar value is missing 2 dimensions, a flat list is missing 1 dimension. Finally we iterate over the number of missing dimensions, up-converting the array's dimensions with each step. def to_column_array(x): if not isinstance(x, np.ndarray): x = np.array(x) missing_dims = 2 - x.ndim if missing_dims < 0: raise ValueError('Your array has too many dimensions') for _ in range(missing_dims): x = x.reshape(-1, 1) return x So if a user has just a number, this converts it to an array of size (1, 1). to_column_array(10) # returns: array([[10]]) Passing in a list or list of lists converts to an array as well. to_column_array([3, 6, 9]) # returns: array([[3], [6], [9]]) to_column_array([[1, 2], [3, 4], [5, 6]]) # returns: array([[1, 2], [3, 4], [5, 6]])
2
1
77,607,499
2023-12-5
https://stackoverflow.com/questions/77607499/remove-warning-when-using-concat-with-empty-dataframes
My old code concats some dataframes, some may be empty. I now receive two future warnings regarding this. My goal is to have the old logic but without any warnings. Mainly I need to retain all the column names without empty rows. I wrote the code below (fixed 1 of the 2 warning) but it still gives me FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. import io import pandas as pd df_list = ['RevisionTime,Data,2019/Q2,2019/Q3,2019/Q4\r\n', 'RevisionTime,Data,2019/Q3\r\n2019-08-17,10.5,10.5\r\n', 'RevisionTime,Data,2019/Q3\r\n2019-09-18 08:10:00,51.0,51.0\r\n', 'RevisionTime,Data,2019/Q3\r\n2019-10-18 08:10:00,111.5,111.5\r\n', 'RevisionTime,Data,2019/Q3,2019/Q4\r\n2019-11-15 22:31:00,182.0,111.5,70.5\r\n'] # list with dataframes df_list = [pd.read_csv(io.StringIO(df)) for df in df_list] # to avoid 'The behaviour of array concatenation with empty entries is deprecated.' # and to retain all column names # https://stackoverflow.com/questions/63970182/concat-list-of-dataframes-containing-empty-dataframes for i, df in enumerate(df_list): col_length = len(df.columns) template = pd.DataFrame(data=[[pd.NA] * col_length], columns=df.columns) df_list[i] = df if not df.empty else template # res_df = pd.concat(df_list) # warning here res_df = res_df.dropna(how='all') # remove empty rows print(res_df) Output dataframe should look like: RevisionTime Data 2019/Q2 2019/Q3 2019/Q4 0 2019-08-17 10.5 NaN 10.5 NaN 0 2019-09-18 08:10:00 51.0 NaN 51.0 NaN 0 2019-10-18 08:10:00 111.5 NaN 111.5 NaN 0 2019-11-15 22:31:00 182.0 NaN 111.5 70.5 Basically, help me fix the code to remove the warning.
You can create an Index with all possible column names and then reindex the final dataframe: # create `column_names` index with all possible column names column_names = pd.Index([]) for df in df_list: column_names = column_names.union(df.columns) res_df = pd.concat([df for df in df_list if not df.empty]) # reindex the final dataframe (this adds NaN to missing columns) res_df = res_df.reindex(column_names, axis=1) print(res_df) Prints: 2019/Q2 2019/Q3 2019/Q4 Data RevisionTime 0 NaN 10.5 NaN 10.5 2019-08-17 0 NaN 51.0 NaN 51.0 2019-09-18 08:10:00 0 NaN 111.5 NaN 111.5 2019-10-18 08:10:00 0 NaN 111.5 70.5 182.0 2019-11-15 22:31:00
2
2
77,601,398
2023-12-4
https://stackoverflow.com/questions/77601398/aws-lambda-async-invocation-issue-function-getting-timed-out-intermittently-whe
I am attempting to invoke an AWS lambda function asynchronously within another Lambda function using the boto3 SDK. The invocation is done using the following code snippet: lambda_client = boto3.client('lambda') response = lambda_client.invoke( FunctionName='async_function:alias', InvocationType="Event", Payload=json.dumps({'id': '101932092', 'type': 'type', 'sub_type': 'subtype'}) ) The issues I am encountering is that the invoking function sometimes times out(15 minutes) at the above code block. The behavior occurs intermittently and there are no clear patterns. I have ruled out concurrency and throttling issues on the invoked function by checking the relevant metrics. However, even though the invoke call is supposed to put the event in an event queue for asynchronous processing (as per AWS Lambda documentation), the invoking function times out without providing a success or error response. Any insights or suggestions for trouble shooting this would be greatly appreciated.
The most likely reason for this intermittent connectivity is that your Lambda function has been configured for VPC access and you have chosen a mix of private and public subnets. The fix is to configure the Lambda function for private subnets only or, if your Lambda functions only need to reach AWS services, then configure VPC Endpoints for the AWS services that you need access to. The reason that the Lambda function fails intermittently is that it runs in a private subnet sometimes and in a public subnet at other times, depending on placement decisions made by the Lambda service. When the Lambda function executes in a public subnet, it has no network route to the internet or to AWS services. The reasons for this are: the Lambda function has a private IP but does not have a public IP the default route for traffic in a public subnet is the Internet Gateway, which drops traffic from private IPs (because they're not routable on the internet) the default route for traffic in a private subnet, if you set it up correctly to reach the internet, is a NAT or NAT gateway which allows private IP traffic to be NATed to a public IP (the public IP of the NAT device) and hence that traffic can reach the internet Also, see Why can't an AWS lambda function inside a public subnet in a VPC connect to the internet?
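One way to confirm this from the same boto3 setup as the question is to list the subnets the function is attached to and check whether each one has a NAT route or only an Internet Gateway route. This is only a sketch (the function name is a placeholder, and subnets that fall back to the VPC's main route table won't show an explicit association here):

import boto3

lambda_client = boto3.client("lambda")
ec2 = boto3.client("ec2")

# Placeholder name: replace with the invoking function's name
cfg = lambda_client.get_function_configuration(FunctionName="my-invoking-function")
subnet_ids = cfg.get("VpcConfig", {}).get("SubnetIds", [])

for subnet_id in subnet_ids:
    tables = ec2.describe_route_tables(
        Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
    )["RouteTables"]
    routes = [r for t in tables for r in t["Routes"]]
    has_nat = any("NatGatewayId" in r for r in routes)
    has_igw = any(r.get("GatewayId", "").startswith("igw-") for r in routes)
    # A subnet with an IGW default route and no NAT is a public subnet:
    # the Lambda has no path out when it lands there
    print(subnet_id, "NAT route:", has_nat, "IGW route:", has_igw)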
2
3
77,605,089
2023-12-5
https://stackoverflow.com/questions/77605089/init-in-overridden-classmethod
I have a small class hierarchy with similar methods and __init__(), but slightly different static (class) read() methods. Specifically will the child class need to prepare the file name a bit before reading (but the reading itself is the same): class Foo: def __init__(self, pars): self.pars = pars @classmethod def read(cls, fname): pars = some_method_to_read_the_file(fname) return cls(pars) class Bar(Foo): @classmethod def read(cls, fname): fname0 = somehow_prepare_filename(fname) return cls.__base__.read(fname0) The problem here is that Bar.read(…) returns a Foo object, not a Bar object. How can I change the code so that an object of the correct class is returned?
cls.__base__.read binds the read method explicitly to the base class. You should use super().read instead to bind the read method of the parent class to the current class. Change: return cls.__base__.read(fname0) to: return super().read(fname0)
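Putting it together with the classes from the question (some_method_to_read_the_file and somehow_prepare_filename are the question's own placeholders), a minimal sketch:

class Bar(Foo):
    @classmethod
    def read(cls, fname):
        fname0 = somehow_prepare_filename(fname)
        # super() in a classmethod keeps cls bound to the class the call
        # started on, so Foo.read's `return cls(pars)` now builds a Bar.
        return super().read(fname0)

obj = Bar.read("some_file")
print(type(obj))  # <class '__main__.Bar'>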
2
3
77,601,477
2023-12-4
https://stackoverflow.com/questions/77601477/multipleobjectsreturned-or-objectdoesnotexist-error-when-accessing-google-pro
I'm encountering an issue when trying to access the 'google' provider in Django-allauth. I'm getting either a MultipleObjectsReturned or ObjectDoesNotExist exception. I have followed the documentation and tried various troubleshooting steps, but the problem persists. Here is the code snippet from my views.py file: from allauth.socialaccount.models import SocialApp from django.core.exceptions import MultipleObjectsReturned, ObjectDoesNotExist def home(request): try: google_provider = SocialApp.objects.get(provider='google') except MultipleObjectsReturned: print("Multiple 'google' providers found:") for provider in SocialApp.objects.filter(provider='google'): print(provider.client_id) raise except ObjectDoesNotExist: print("No 'google' provider found.") raise return render(request, 'home.html', {'google_provider': google_provider}) I have also imported the necessary modules SocialApp and MultipleObjectsReturned/ObjectDoesNotExist from allauth.socialaccount.models and django.core.exceptions respectively. I have checked my database, and there is only one entry for the 'google' provider in the SocialApp table. The client_id is correctly set for this provider. Despite taking these steps, I am still encountering the mentioned errors. I have made sure to restart my Django server after any code or database changes.
You probably have two instances of your Google provider set up: one in your settings.py and the other in your Django admin console. Delete whichever one you don't need, then run makemigrations and migrate. #settings.py (you could delete this and leave the one in the admin) SOCIALACCOUNT_PROVIDERS = { 'google': { # For each OAuth based provider, either add a ``SocialApp`` # (``socialaccount`` app) containing the required client # credentials, or list them here: 'APP': { 'client_id': '123', 'secret': '456', 'key': '' } } }
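A small, hedged sketch of how you could check what is actually registered in the database from the Django shell (python manage.py shell); <id_to_keep> is a placeholder for whichever entry you decide to keep:

from allauth.socialaccount.models import SocialApp

# List the Google apps currently stored in the database
for app in SocialApp.objects.filter(provider="google"):
    print(app.id, app.name, app.client_id)

# If settings.py also defines an 'APP' block for Google, either remove that block
# or delete the database rows you don't need, e.g.:
# SocialApp.objects.filter(provider="google").exclude(id=<id_to_keep>).delete()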
6
5
77,594,086
2023-12-3
https://stackoverflow.com/questions/77594086/how-to-run-a-nlptransformers-llm-on-low-memory-gpus
I am trying to load an AI pre-trained model, from intel on hugging face, I have used Colab its resources exceeded, used Kaggle resources increased, used paperspace, which showing me an error: The kernel for Text_Generation.ipynb appears to have died. It will restart automatically. this is the model load script: import transformers model_name = 'Intel/neural-chat-7b-v3-1' model = transformers.AutoModelForCausalLM.from_pretrained(model_name) tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) def generate_response(system_input, user_input): # Format the input using the provided template prompt = f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n" # Tokenize and encode the prompt inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False) # Generate a response outputs = model.generate(inputs, max_length=1000, num_return_sequences=1) response = tokenizer.decode(outputs[0], skip_special_tokens=True) # Extract only the assistant's response return response.split("### Assistant:\n")[-1] # Example usage system_input = "You are a math expert assistant. Your mission is to help users understand and solve various math problems. You should provide step-by-step solutions, explain reasonings and give the correct answer." user_input = "calculate 100 + 520 + 60" response = generate_response(system_input, user_input) print(response) # expected response """ To calculate the sum of 100, 520, and 60, we will follow these steps: 1. Add the first two numbers: 100 + 520 2. Add the result from step 1 to the third number: (100 + 520) + 60 Step 1: Add 100 and 520 100 + 520 = 620 Step 2: Add the result from step 1 to the third number (60) (620) + 60 = 680 So, the sum of 100, 520, and 60 is 680. """ My purpose is to load this pretrained model, I have done some research on my end I have find some solutions but not working with me, download packages using cuda instead of pip
I would recommend looking into model quantization as this is one of the approaches which specifically addresses this type of problem, of loading a large model for inference. TheBloke has provided a quantized version of this model which is available here: neural-chat-7B-v3-1-AWQ. To use this, you'll need to use AutoAWQ, and as per Hugging Face in this notebook, for Colab you need to install an earlier version given Colab's CUDA version. You should also make sure your model is using GPU, not CPU, by adding .cuda() to the input tensor after it is generated: !pip install -q transformers accelerate !pip install -q -U https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl import torch from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name = 'TheBloke/neural-chat-7B-v3-1-AWQ' ### Use AutoAWQ and from quantized instead of transformers here model = AutoAWQForCausalLM.from_quantized(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) def generate_response(system_input, user_input): # Format the input using the provided template prompt = f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n" ### ADD .cuda() inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False).cuda() # Generate a response outputs = model.generate(inputs, max_length=1000, num_return_sequences=1) response = tokenizer.decode(outputs[0], skip_special_tokens=True) # Extract only the assistant's response return response.split("### Assistant:\n")[-1]
2
3
77,595,180
2023-12-3
https://stackoverflow.com/questions/77595180/list-all-files-containing-a-string-between-two-specific-strings-not-on-the-same
I'd like to recursively find all .md files of the current directory that contain the β€œNarrow No-Break Space” U+202F Unicode character between the two strings \begin{document} and \end{document}, possibly (and in fact essentially) not on the same line as U+202F. A great addition would be to replace such U+202Fs by normal spaces. I already find a way to extract text between \begin{document} and \end{document} with a Python regexp (which I used to find easier for multi-line substitutions. I tried to use it just to list files with this pattern (planning to afterwards chain with grep to at least get the files where this pattern contains U+202F) but my attempts with: def finds_files_whose_contents_match_a_regex(filename): textfile = open(filename, 'r') filetext = textfile.read() textfile.close() matches = re.findall("\\begin{document}\s*(.*?)\s*\\end{document}", filetext) for root, dirs, files in os.walk("."): for filename in files: if filename.endswith(".md"): filename=os.path.join(root, filename) finds_files_whose_contents_match_a_regex(filename) but I got unintelligible (for me) errors: Traceback (most recent call last): File "./test-bis.py", line 14, in <module> finds_files_whose_contents_match_a_regex(filename) File "./test-bis.py", line 8, in finds_files_whose_contents_match_a_regex matches = re.findall("\\begin{document}\s*(.*?)\s*\\end{document}", filetext) File "/usr/lib64/python3.10/re.py", line 240, in findall return _compile(pattern, flags).findall(string) File "/usr/lib64/python3.10/re.py", line 303, in _compile p = sre_compile.compile(pattern, flags) File "/usr/lib64/python3.10/sre_compile.py", line 788, in compile p = sre_parse.parse(p, flags) File "/usr/lib64/python3.10/sre_parse.py", line 955, in parse p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0) File "/usr/lib64/python3.10/sre_parse.py", line 444, in _parse_sub itemsappend(_parse(source, state, verbose, nested + 1, File "/usr/lib64/python3.10/sre_parse.py", line 526, in _parse code = _escape(source, this, state) File "/usr/lib64/python3.10/sre_parse.py", line 427, in _escape raise source.error("bad escape %s" % escape, len(escape)) re.error: bad escape \e at position 27
Assuming you are correctly reading or decoding an encoded file... I would do something along these lines. from pathlib import Path import re p=Path('/tmp') # Use your root path here def replace_non_break_spaces(fn): with open(fn,"r") as f: cont=f.read() cont_update=re.sub(r"\\begin{document}[\s\S]*?\\end{document}", lambda m: m.group(0).replace("\u202F", "!"), cont) if cont!=cont_update: # at this point, write 'cont_update' back to the same file. # File is only updated if the re.sub changes the string pass for fn in (x for x in p.glob("**/*.md") if x.is_file()): replace_non_break_spaces(fn) Given your example on regex101 (which I modified as seen): \documentclass{article} \usepackage[width=7cm]{geometry} <=there are u202F there \pagestyle{empty} \begin{document} Du texte alignΓ© Γ  droite : <=there are u202F there \raggedleft cet exemple ne brille sans doute pas par sa complexitΓ©. Clair, non ? \end{document} The result is: \documentclass{article} \usepackage[width=7cm]{geometry} <=there are u202F there \pagestyle{empty} \begin{document} Du texte alignΓ© Γ  droite : !!<=there are u202F there \raggedleft cet exemple ne brille sans doute pas par sa complexitΓ©. Clair, non!? \end{document} (The non-breaking spaces are replaced with ! for clarity...) From the comment running python test.py doesn't change test.md: from pathlib import Path import re p=Path('/tmp') # Use your root path here def replace_non_break_spaces(fn): with open(fn,"r") as f: cont=f.read() cont_update=re.sub(r"\\begin{document}[\s\S]*?\\end{document}", lambda m: m.group(0).replace("\u202F", "!"), cont) if cont!=cont_update: print(f"Updating {fn}") # make a backup... with open(f"{fn}.bak", "w") as f: f.write(cont) with open(fn,"w") as f: f.write(cont_update) for fn in (x for x in p.glob("**/*.md") if x.is_file()): replace_non_break_spaces(fn) CAREFUL!!! This code will recursively change every .md file in a tree (it does make backups as updated.)
3
2
77,596,271
2023-12-3
https://stackoverflow.com/questions/77596271/i-want-to-merge-my-peft-adapter-model-with-the-base-model-and-make-a-fully-new-m
As the title said, I want to merge my PEFT LoRA adapter model (ArcturusAI/Crystalline-1.1B-v23.12-tagger) that I trained before with the base model (TinyLlama/TinyLlama-1.1B-Chat-v0.6) and make a fully new model. And I got this code from ChatGPT: from transformers import AutoModel, AutoConfig # Load the pretrained model and LoRA adapter pretrained_model_name = "TinyLlama/TinyLlama-1.1B-Chat-v0.6" pretrained_model = AutoModel.from_pretrained(pretrained_model_name) lora_adapter = AutoModel.from_pretrained("ArcturusAI/Crystalline-1.1B-v23.12-tagger") # Assuming the models have the same architecture (encoder, decoder, etc.) # Get the weights of each model pretrained_weights = pretrained_model.state_dict() lora_adapter_weights = lora_adapter.state_dict() # Combine the weights (adjust the weights based on your preference) combined_weights = {} for key in pretrained_weights: combined_weights[key] = 0.8 * pretrained_weights[key] + 0.2 * lora_adapter_weights[key] # Load the combined weights into the pretrained model pretrained_model.load_state_dict(combined_weights) # Save the integrated model pretrained_model.save_pretrained("ArcturusAI/Crystalline-1.1B-v23.12-tagger-fullmodel") And I got this error: --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-1-d2120d727884> in <cell line: 6>() 4 pretrained_model_name = "TinyLlama/TinyLlama-1.1B-Chat-v0.6" 5 pretrained_model = AutoModel.from_pretrained(pretrained_model_name) ----> 6 lora_adapter = AutoModel.from_pretrained("ArcturusAI/Crystalline-1.1B-v23.12-tagger") 7 8 # Assuming the models have the same architecture (encoder, decoder, etc.) 1 frames /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs) 3096 ) 3097 else: -> 3098 raise EnvironmentError( 3099 f"{pretrained_model_name_or_path} does not appear to have a file named" 3100 f" {_add_variant(WEIGHTS_NAME, variant)}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME} or" OSError: ArcturusAI/Crystalline-1.1B-v23.12-tagger does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack. I have no idea what I did wrong there, I would appreciate it if anyone could teach me how to fix it, or am I going in a completely wrong direction? Thank you. I tried using transformers and pytorch, I expect them to merge both models and create a new model out of it.
The adapter can't be loaded with AutoModel from transformers and also the suggestion from ChatGPT of merging won't work. Luckily you don't need to rely on AI for that. The peft library has everything ready for you with merge_and_unload: from peft import AutoPeftModelForCausalLM # Local path, check post scriptum for explanation model_id = "./ArcturusAI/Crystalline-1.1B-v23.12-tagger" peft_model = AutoPeftModelForCausalLM.from_pretrained(model_id) print(type(peft_model)) merged_model = peft_model.merge_and_unload() # The adapters are merged now and it is transformers class again print(type(merged_model)) Output: <class 'peft.peft_model.PeftModelForCausalLM'> <class 'transformers.models.llama.modeling_llama.LlamaForCausalLM'> You can now save merged_model with save_pretrained or do with it whatever you want. Please note that this is only the model and not the tokenizer. You still need to load the tokenizer from the TinyLlama/TinyLlama-1.1B-Chat-v0.6 repo and save it with save_pretrained locally to have everything in one place: from transformers import AutoTokenizer t = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.6") P.S.: I noticed that you have trained the model with a different version of peft. Hence I downloaded it locally and removed the following keys from the adapter_config.json: loftq_config, megatron_config, megatron_core to be able to load it with peft==0.6.2.
6
4
77,595,144
2023-12-3
https://stackoverflow.com/questions/77595144/pygame-not-processing-input-fast-enough
I am making the game snake in pygame and have completed the game. I have one problem that does not happen every time, but every once in a while, it does not process the keys being pressed if I press them quickly. For instance if the apple is next to a wall and I need to turn really quickly to get it, I sometimes just run into the wall. I have clock.tick set to 8 in order to get the slow 'jumps' that snake usually has. I'm wondering if this means it only checks if I have pressed something 8 times per second. If so, is there a way to make event checking much faster while also keeping the frame rate relatively slow? Here is the main code: import random import pygame from settings import WINDOW_SIZE, BACKGROUND_COLOR, FRAME_RATE from snake import Snake from apple import Apple from high_score import high_score # Initialize pygame and the screen and make a clock, so we can control the framerate pygame.init() screen = pygame.display.set_mode(WINDOW_SIZE) clock = pygame.time.Clock() pygame.display.set_caption("Snake") # This is our great snake that will be doing all the work snake = Snake() # Our apple will never actually change. It will always be the same object # but once they hit it, it will just register that it hit and movel locations apple = pygame.sprite.GroupSingle(Apple()) apple.sprites()[0].place() # If the screen should be up or not running = True # If we are playing or not (this will be false once the score is displayed) playing = True # Main game loop while running: # Check if the user wants to quit for event in pygame.event.get(): # Pressing the close button the window shuts it down if event.type == pygame.QUIT: running = False elif event.type == pygame.KEYDOWN: # Pressing escape shuts it down too if event.key == pygame.K_ESCAPE: running = False elif event.key == pygame.K_w or event.key == pygame.K_UP: snake.up() elif event.key == pygame.K_d or event.key == pygame.K_RIGHT: snake.right() elif event.key == pygame.K_s or event.key == pygame.K_DOWN: snake.down() elif event.key == pygame.K_a or event.key == pygame.K_LEFT: snake.left() elif event.key == pygame.K_RETURN and not playing: snake = Snake() apple = pygame.sprite.GroupSingle(Apple()) apple.sprites()[0].place() playing = True # If we are still playing, update things if playing: # Update the apple and then the snake (the apple may be under the snake) apple.update() snake.update() # If the snake has hit the apple, add a new body part and place the apple elsewhere if snake.body[0].rect.colliderect(apple.sprites()[0].rect): snake.add_new_body_part() apple.sprites()[0].place() # If the snake is dead then we are done if snake.is_dead(): playing = False # Fill the background screen.fill(BACKGROUND_COLOR) # Draw apple apple.draw(screen) # Draw snake snake.draw(screen) # And if we are not playing we need to display a score if not playing: high_score_file = open("high_score.py", "w") font = pygame.font.SysFont("feesanbolt.ttf", 100) # If we have beaten the high score, display a happy message! 
if snake.length() >= high_score: high_score = snake.length() high_score_text = font.render("New High Score!", True, (80, 80, 80), BACKGROUND_COLOR) high_score_text_rect = high_score_text.get_rect() high_score_text_rect.center = (WINDOW_SIZE.width // 2, WINDOW_SIZE.height // 4) screen.blit(high_score_text, high_score_text_rect) # Update high_score (if snake.length < high_score, the file won't change) high_score_file.write(f"high_score = {high_score}") # Display the score score_text = font.render(str(snake.length()), True, (80, 80, 80), BACKGROUND_COLOR) score_text_rect = score_text.get_rect() score_text_rect.center = (WINDOW_SIZE.width // 2, WINDOW_SIZE.height // 2) screen.blit(score_text, score_text_rect) # Update screen pygame.display.flip() # We want a slow framerate so we can get the slow and discrete snake moving in jumps clock.tick(FRAME_RATE) pygame.quit() I am really not sure what to do. I am new to pygame so am not sure the relationship between the events and the framerate. I tried looking for some information in the documentation and on stack overflow, but could not find anything that specifically talked about this. Any direction would be helpful. Thank you in advance!
I suggest using a higher frame rate but a timer event for the movement and update (see Do something every x (milli)seconds in pygame): FRAME_RATE = 60 update_interval = 100 # 0.1 seconds update_event_id = pygame.USEREVENT + 1 pygame.time.set_timer(update_event_id, update_interval) while running: # Check if the user wants to quit for event in pygame.event.get(): # Pressing the close button the window shuts it down if event.type == pygame.QUIT: running = False # [...] elif event.type == update_event_id: if playing: # Update the apple and then the snake (the apple may be under the snake) apple.update() snake.update() # [...] clock.tick(FRAME_RATE)
2
3
77,590,669
2023-12-2
https://stackoverflow.com/questions/77590669/drawmarker-in-cv2-aruco-not-found
I'm trying to use the function drawMarker() from OpenCV documentation in ArucoMarkers and for some reason it keeps giving me an error: AttributeError: module 'cv2.aruco' has no attribute 'drawMarker' If it helps, I am using VS code and coding in Python. I'm not sure why I don't have "drawMarker()" in my library. I'm trying to print aruco markers in a matplotlib plot. This is my code for that. import cv2 import numpy as np import matplotlib.pyplot as plt # Create a predefined dictionary object for ArUco markers dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250) # Generate an ArUco marker image marker_id = 23 marker_size = 200 # No need to pass an image to fill marker_image = cv2.aruco.drawMarker(dictionary, marker_id, marker_size) # Display the marker using matplotlib plt.imshow(marker_image, cmap='gray') plt.axis('off') plt.show() This is my version (which I tried to make it as up to date as possible): /Users/[OMITTED]/Desktop/test/aruco_markers.py 4.8.1 When I ran print(dir(cv2.aruco)): rtcbhiv01a02:python_gui [OMITTED]$ /usr/local/bin/python3 /Users/[OMITTED]/Desktop/test/aruco_markers.py ['ARUCO_CCW_CENTER', 'ARUCO_CW_TOP_LEFT_CORNER', 'ArucoDetector', 'Board', 'CORNER_REFINE_APRILTAG', 'CORNER_REFINE_CONTOUR', 'CORNER_REFINE_NONE', 'CORNER_REFINE_SUBPIX', 'CharucoBoard', 'CharucoDetector', 'CharucoParameters', 'DICT_4X4_100', 'DICT_4X4_1000', 'DICT_4X4_250', 'DICT_4X4_50', 'DICT_5X5_100', 'DICT_5X5_1000', 'DICT_5X5_250', 'DICT_5X5_50', 'DICT_6X6_100', 'DICT_6X6_1000', 'DICT_6X6_250', 'DICT_6X6_50', 'DICT_7X7_100', 'DICT_7X7_1000', 'DICT_7X7_250', 'DICT_7X7_50', 'DICT_APRILTAG_16H5', 'DICT_APRILTAG_16h5', 'DICT_APRILTAG_25H9', 'DICT_APRILTAG_25h9', 'DICT_APRILTAG_36H10', 'DICT_APRILTAG_36H11', 'DICT_APRILTAG_36h10', 'DICT_APRILTAG_36h11', 'DICT_ARUCO_MIP_36H12', 'DICT_ARUCO_MIP_36h12', 'DICT_ARUCO_ORIGINAL', 'DetectorParameters', 'Dictionary', 'Dictionary_getBitsFromByteList', 'Dictionary_getByteListFromBits', 'EstimateParameters', 'GridBoard', 'RefineParameters', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_native', 'calibrateCameraAruco', 'calibrateCameraArucoExtended', 'calibrateCameraCharuco', 'calibrateCameraCharucoExtended', 'detectCharucoDiamond', 'detectMarkers', 'drawCharucoDiamond', 'drawDetectedCornersCharuco', 'drawDetectedDiamonds', 'drawDetectedMarkers', 'drawPlanarBoard', 'estimatePoseBoard', 'estimatePoseCharucoBoard', 'estimatePoseSingleMarkers', 'extendDictionary', 'generateImageMarker', 'getBoardObjectAndImagePoints', 'getPredefinedDictionary', 'interpolateCornersCharuco', 'refineDetectedMarkers', 'testCharucoCornersCollinear']
OpenCV 4.7.0 brought a few changes to the API: generateImageMarker() should be used instead of drawMarker(). The original C++ tutorial was updated with: cv::Mat markerImage; cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250); cv::aruco::generateImageMarker(dictionary, 23, 200, markerImage, 1); cv::imwrite("marker23.png", markerImage); The corresponding working solution in Python: import cv2 dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250) marker_id = 23 marker_size = 200 marker_image = cv2.aruco.generateImageMarker(dictionary, marker_id, marker_size) Plotting will show you the desired marker: import matplotlib.pyplot as plt plt.imshow(marker_image, cmap='gray') plt.axis('off') plt.show()
3
7
77,592,974
2023-12-3
https://stackoverflow.com/questions/77592974/create-new-rows-based-on-missing-grouped-by-values
Given the below dataframe, if rows are grouped by first name and last name, how can I find and create new rows for a group that does not have a row for every type in the types list. So in the example below, two new rows would be created for Bob Jack that are missing from the original dataframe: one with type 'DA' and another with type 'FA', the value columns can be set to 0. data = { 'First Name': ['Alice', 'Alice', 'Alice', 'Alice', 'Bob', 'Bob'], 'Last Name': ['Johnson', 'Johnson', 'Johnson', 'Johnson', 'Jack', 'Jack'], 'Type': ['CA', 'DA', 'FA', 'GCA', 'CA', 'GCA'], 'Value': [25, 30, 35, 40, 50, 37] } types = ['CA', 'DA', 'FA', 'GCA'] df = pd.DataFrame(data)
One way to do this is to create a dataframe which is all the combinations of names and types, then left join that to the original dataframe. This will create a df with all combinations, with NaN values where there was a missing entry in the original data. That can then be filled with 0. Note that because the value column gets NaN values in it, it is converted to type float. You can convert that back to int if desired using astype({'Value': int}) in the chain: out = (df[['First Name', 'Last Name']] .drop_duplicates() .merge(pd.Series(types, name='Type'), how='cross') .merge(df, on=['First Name', 'Last Name', 'Type'], how='left') .fillna(0) # use this astype if you need `Value` to be an int .astype({'Value': int}) ) Output (with the astype to convert Value back to int): First Name Last Name Type Value 0 Alice Johnson CA 25 1 Alice Johnson DA 30 2 Alice Johnson FA 35 3 Alice Johnson GCA 40 4 Bob Jack CA 50 5 Bob Jack DA 0 6 Bob Jack FA 0 7 Bob Jack GCA 37
5
5
77,591,847
2023-12-2
https://stackoverflow.com/questions/77591847/how-can-i-draw-a-spiral-pattern-using-python-matrices
This is not a homework assignment, just me preparing for my final programming exam. We were introduced this problem just to think, not to solve, but I would like to know how to deal with this type of problems (using matrices to draw specific patterns). I'm asked to write a program that prints β€œspirals” of size n Γ— n. Input consists of a sequence of strictly positive natural numbers, ended with zero. And, for each n, I must print a spiral of size n Γ— n. Note that in the bottom row and in the right column there are only β€˜X’s. Also, I need to print an empty line after each spiral. This is an Input/Output example: Input: 4 6 7 0 Output: .XXX .X.X ...X XXXX .XXXXX .X...X .X.X.X .XXX.X .....X XXXXXX .XXXXXX .X....X .X.XX.X .X..X.X .XXXX.X ......X XXXXXXX I'm stuck at this point, I printed bottom and right 'X' lines since they will always be there. Then, I tried to make iterations to draw each 'L' of the spiral. So, first an L was entered (bottom right). Then, an inverse L, etc. This is my code till now: import sys def spirals(n): M = [['.' for _ in range(n)] for _ in range(n)] for row in range(n): M[row][n-1] = 'X' for col in range(n): M[n-1][col] = 'X' spirals_rec(M, 0, n-1) return M def spirals_rec(M, i, j): for _ in range(n//2): pos = 1 while pos < j: M[i][pos] = 'X' pos += 1 prev1_pos = pos pos = 1 while pos < j-1: M[pos][i+1] = 'X' pos += 1 prev2_pos = pos i = prev2_pos-1 j = prev1_pos-1 n = int(sys.stdin.readline()) for row in spirals(n): print(row) I'm just trying to make it work for a single integer input. Then, I will adjust to the statement requirements. This is what my code returns, given the example input: .XXX .X.X ...X XXXX .XXXXX .X.XXX .XX.XX .XXX.X .....X XXXXXX .XXXXXX .X..XXX .X..XXX .XXX.XX .XXXX.X ......X XXXXXXX I have not been taught to solve this type of problems, but I heard my professor puts this type of exercises for exams. With this post I would like to obtain some ideas on how I have to solve this, if I'm on the correct way or I should reformulate all again. I just can use import sys (to read input, in this case, and integer), not other packages. I appreciate any comment or ideas. Thanks. I forgot to add this observation of the statement: A matrix is not needed to solve this problem, but use it for simplicity. I think for me, a matrix would be indeed easier. However, if there is any other easier way of solving this, I'll be grateful of reading it.
This problem repeats itself after every four edges have been drawn. For example, in the spiral below for n=9, you can clearly see the n=5 spiral (which I've modified to have Os instead of Xs, for contrast), set in from the edges of the grid by two spaces, and the n=1 "spiral" drawn with a single X at the very center, offset by four spaces from the outside of the grid: . X X X X X X X X . X . . . . . . X . X . O O O O . X . X . O . . O . X . X . O X . O . X . X . . . . O . X . X O O O O O . X . . . . . . . . X X X X X X X X X X So I'd write code that draws four sides of one loop of a spiral, and which can handle being offset into a larger matrix. Make sure you fail safe for small sizes where you don't actually need all four of the sides! Here's my code, that loops over the offset into the grid: def spiral(n): M = [['.' for _ in range(n)] for _ in range(n)] for offset in range(0, (n+1)//2, 2): for i in range(offset, n-offset): # bottom side M[n-offset-1][i] = 'X' for i in range(offset, n-offset-1): # right side M[i][n-offset-1] = 'X' for i in range(offset+1, n-offset-1): # top side M[offset][i] = 'X' for i in range(offset+1, n-offset-2): # left side M[i][offset+1] = 'X' return M Because a range that has reversed or overlapping end points is "empty" but still iterable, this code handles the inner parts of the spiral that don't need all four sides without needing any explicit special cases (some of the loops just won't do anything).
2
3
77,577,973
2023-11-30
https://stackoverflow.com/questions/77577973/malloc-double-free-error-on-m3-macbook-pro
I am working on a Django python project with a postgres db hosted with render.com. The code works fine on server and my imac. I recently got a Macbook Pro M3 (running sonoma). I have replicated the exact same setup and environment however when I try to run the code locally, I get Python(40505,0x1704f7000) malloc: double free for ptr 0x1368be800 Python(40505,0x1704f7000) malloc: *** set a breakpoint in malloc_error_break to debug The setup is the exact same on the other device and it works fine there. Here is the link to repo https://github.com/moreshk/django-postgres Any assistance would be useful. Setup https://github.com/moreshk/django-postgres on my new device. Setup virtual environment, any dependencies and installed requirements and .env file. Would have expected it to run fine locally. Other Django Python projects seem to work fine, except this one which has a postgres db with render.com When I try to run the code locally, I get the below error: python manage.py runserver Watching for file changes with StatReloader Performing system checks... System check identified no issues (0 silenced). Python(40505,0x1704f7000) malloc: double free for ptr 0x1368be800 Python(40505,0x1704f7000) malloc: *** set a breakpoint in malloc_error_break to debug
If you have installed PostgreSQL with Homebrew, run brew upgrade postgresql and ensure you have version 14.10_1. This issue is tracked at https://github.com/Homebrew/homebrew-core/issues/155651#issuecomment-1827988313 where the same error is reported: psql(6636,0x10f1de600) malloc: *** error for object 0x7f916b00bc00: pointer being freed was not allocated psql(6636,0x10f1de600) malloc: *** set a breakpoint in malloc_error_break to debug
7
21
77,589,004
2023-12-2
https://stackoverflow.com/questions/77589004/problem-execute-calculations-in-a-nested-loop-typeerror-numpy-float64-object
I'm trying to calculate the sum of squared errors and i'm using a nested loop. I'm new to Python and i apologize, but i encounter the error: File "...", line 13, in <module> for y in values_subtraction_mean: TypeError: 'numpy.float64' object is not iterable The problem is with the second loop, when i have to calculate result in: for y in values_subtraction_mean: result = sum(math.sqrt(y)) In the second loop, it should show all values of values_subtraction_mean, so it should show 2.2, -0.8, 0.2, 1.2, -2.8. Next for each value above, a sqrt should be calculated and get 4.84, 0.64, 0.04, 1.44, 7.84. In the end you have to sum all these numbers and get 14.8 What am I doing wrong? from numpy import mean import math values = [5, 2, 3, 4, 0] mean = mean(values) for x in values: values_subtraction_mean = x - mean print(values_subtraction_mean) #2.2, -0.8, 0.2, 1.2, -2.8 for y in values_subtraction_mean: result = sum(math.sqrt(y)) print(result) #sqrt: 4.84, 0.64, 0.04, 1.44, 7.84 #sum and result: 14.8 I tried using this, but it doesn't solve the problem: import numpy as np values = np.array([5, 2, 3, 4, 0]) I tried not using numpy, calculating the mean with: sum(values) / len(values), but it doesn't work either and i get error: TypeError: 'float' object is not iterable
You seem to have problems understanding basic python iteration, let alone numpy. So lets look at your code in detail In [1]: import numpy as np ...: import math ...: ...: values = [5, 2, 3, 4, 0] ...: ...: mean = np.mean(values) ...: ...: for x in values: ...: values_subtraction_mean = x - mean ...: print(values_subtraction_mean) ...: ...: for y in values_subtraction_mean: ...: result = sum(math.sqrt(y)) ...: print(result) ...: 2.2 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[1], line 12 9 values_subtraction_mean = x - mean 10 print(values_subtraction_mean) ---> 12 for y in values_subtraction_mean: 13 result = sum(math.sqrt(y)) 14 print(result) TypeError: 'numpy.float64' object is not iterable So the error is in that for y in values...: line. The for x in values: works because values is a list. Inside that loop it calculates the mean (with a numpy mean function): In [2]: mean Out[2]: 2.8 In [3]: type(mean) Out[3]: numpy.float64 In [4]: values_subtraction_mean Out[4]: 2.2 In [5]: x Out[5]: 5 You substracted the mean from one element of the list, and then tried to iterate on that value. The y loop is nested inside the x loop. Plus the x loop isn't accumulating any of those x-mean values. The following line will have problems as well sum(math.sqrt(y)) math functions only work on scalars, not lists or arrays. But python sum requires an iterable (e.g. a list). So I don't quite get what you intend here. Corrected iteration - sort of The standard way to iterate on a list is to append the results to a list. Changing your code: In [9]: values = [5, 2, 3, 4, 0] ...: ...: mean = np.mean(values) ...: values_sublist = [] ...: for x in values: ...: values_sublist.append(x - mean) ...: print(values_sublist) ...: result = [] ...: for y in values_sublist: ...: result.append(math.sqrt(y)) ...: print(result) [2.2, -0.7999999999999998, 0.20000000000000018, 1.2000000000000002, -2.8] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[9], line 10 8 result = [] 9 for y in values_sublist: ---> 10 result.append(math.sqrt(y)) 11 print(result) ValueError: math domain error values_sublist is all the list values minus their mean. So the y loop works. But it runs into another problem - it can't take the sqrt of a negative number! numpy version In [10]: values Out[10]: [5, 2, 3, 4, 0] In [11]: mean=np.mean(values) In [12]: mean Out[12]: 2.8 In [13]: np.array(values)-mean Out[13]: array([ 2.2, -0.8, 0.2, 1.2, -2.8]) In [14]: np.sqrt(np.array(values)-mean) C:\Users\14256\AppData\Local\Temp\ipykernel_8020\509743582.py:1: RuntimeWarning: invalid value encountered in sqrt np.sqrt(np.array(values)-mean) Out[14]: array([1.4832397 , nan, 0.4472136 , 1.09544512, nan]) It's possible to do this without the python loops, if values is a numpy array. Now the negative values produce a nan value, and a warning. There's no point to doing the sum on the array with nan. complex values We can deal with the sqrt issue by making values a complex dtype array: In [15]: values_c = np.array(values,complex); values_c Out[15]: array([5.+0.j, 2.+0.j, 3.+0.j, 4.+0.j, 0.+0.j]) In [16]: np.mean(values_c) Out[16]: (2.8000000000000003+0j) In [17]: values_c - np.mean(values_c) Out[17]: array([ 2.2+0.j, -0.8+0.j, 0.2+0.j, 1.2+0.j, -2.8+0.j]) In [18]: np.sqrt(values_c - np.mean(values_c)) Out[18]: array([1.4832397 +0.j , 0. +0.89442719j, 0.4472136 +0.j , 1.09544512+0.j , 0. 
+1.67332005j]) In [19]: np.sum(_) Out[19]: (3.0258984079294224+2.5677472440680673j) Though I'm not sure that's what you need. square instead of sqrt But wait, the "sqrt" values that you want are actually the square values, not square root. In [20]: values = [5, 2, 3, 4, 0] ...: mean = np.mean(values) ...: values_sublist = [] ...: for x in values: ...: values_sublist.append(x - mean) ...: print(values_sublist) ...: result = [] ...: for y in values_sublist: ...: result.append(y**2) ...: print(result); print(sum(result)) [2.2, -0.7999999999999998, 0.20000000000000018, 1.2000000000000002, -2.8] [4.840000000000001, 0.6399999999999997, 0.04000000000000007, 1.4400000000000004, 7.839999999999999] 14.8 Now your two loops work just fine. And written as one numpy expression: In [21]: np.sum((np.array(values)-np.mean(values))**2) Out[21]: 14.8 We could also use a list comprehension in place of the for loops - though we still have to use np.mean, or a loop equivalent. So it isn't a pure list calculation: In [22]: values Out[22]: [5, 2, 3, 4, 0] In [23]: mean Out[23]: 2.8 In [24]: sum((x-mean)**2 for x in values) Out[24]: 14.8 That sqrt and complex code may be a distraction, but I'll leave that in because I think it may be instructive. It's part of debugging your code.
2
1
77,590,201
2023-12-2
https://stackoverflow.com/questions/77590201/issue-with-triangles-borders-in-matplotlib
I am facing an issue with drawing triangle borders using Matplotlib in Python. I want to create a specific pattern, but I'm encountering unexpected behavior. I need assistance in identifying and resolving the problem. this is my code import numpy as np import matplotlib.pyplot as plt N = 5 A = np.array([(x, y) for y in range(N, -1, -1) for x in range(N + 1)]) t = np.array([[1, 1], [-1, 1]]) A = np.dot(A, t) # I have defined a triangle fig = plt.figure(figsize=(10, 10)) triangle = fig.add_subplot(111) X = A[:, 0].reshape(N + 1, N + 1) Y = A[:, 1].reshape(N + 1, N + 1) for i in range(1, N + 1): for j in range(i): line_x = np.array([X[i, j + 1], X[i, j], X[i - 1, j]]) line_y = np.array([Y[i, j + 1], Y[i, j], Y[i - 1, j]]) triangle.plot(line_y,line_x, color='black', linewidth=1) plt.show() but I am getting this image, as u can see, At corner extra lines are coming, as i encircled it. I dont want this extra line, i tried to solved it using loop, eventhough one extra line will keep remain for i in range(6): if i == N-1 : for j in range(i-1): line_x = np.array([X[i , j+1], X[i, j],X[i-1, j]]) line_y = np.array([Y[i, j+1], Y[i, j], Y[i-1, j]]) triangle.plot(line_y, line_x, color='black', linewidth=1) pass else: for j in range(i): line_x = np.array([X[i , j+1], X[i, j],X[i-1, j]]) line_y = np.array([Y[i, j+1], Y[i, j], Y[i-1, j]]) triangle.plot(line_y,line_x, color='black', linewidth=1) pass plt.show() kindly resolve the issue
The lines you want to delete belong to the triangules generated in the first (i = 1, j = 0) and last (i = N, J = N - 1) iterations of the nested for loops: import numpy as np import matplotlib.pyplot as plt N = 5 A = np.array([(x, y) for y in range(N, -1, -1) for x in range(N + 1)]) t = np.array([[1, 1], [-1, 1]]) A = np.dot(A, t) # I have defined a triangle fig = plt.figure(figsize=(10, 10)) triangle = fig.add_subplot(111) X = A[:, 1].reshape(N + 1, N + 1) Y = A[:, 0].reshape(N + 1, N + 1) for i in range(1, N + 1): for j in range(i): if i == 1: # Lower right triangle line_x = np.array([X[i, j + 1], X[i, j]]) line_y = np.array([Y[i, j + 1], Y[i, j]]) elif j == N - 1: # Upper right triangle line_x = np.array([X[i, j], X[i - 1, j]]) line_y = np.array([Y[i, j], Y[i - 1, j]]) else: line_x = np.array([X[i, j + 1], X[i, j], X[i - 1, j]]) line_y = np.array([Y[i, j + 1], Y[i, j], Y[i - 1, j]]) triangle.plot(line_x, line_y, color='black', linewidth=1) plt.show()
2
1
77,579,324
2023-11-30
https://stackoverflow.com/questions/77579324/how-to-register-multiple-pluggy-plugins-with-setuptools
Problem I can successfully register a single plugin in pluggy using load_setuptools_entrypoints, but I can only register one. If two different plugins try to register themselves, the last one registered will be the only one that runs. I think this is not how pluggy is intended to work, and that I am making a configuration mistake, but I don't know what to do differently. I think pluggy is supposed to allow multiple plugins to register and run in serial when that hook is called. Project Structure . β”œβ”€β”€ pluggable β”‚ β”œβ”€β”€ pluggable.py β”‚ └── pyproject.toml β”œβ”€β”€ plugin_a β”‚ β”œβ”€β”€ a.py β”‚ └── pyproject.toml └── plugin_b β”œβ”€β”€ b.py └── pyproject.toml with: # pluggable/pluggable.py import pluggy NAME = "pluggable" impl = pluggy.HookimplMarker(NAME) @pluggy.HookspecMarker(NAME) def run_plugin(): pass def main(): m = pluggy.PluginManager(NAME) m.load_setuptools_entrypoints(NAME) m.hook.run_plugin() if __name__ == "__main__": main() and # pluggable/pyproject.toml [project] name = "pluggable" version = "1.0.0" dependencies = ["pluggy==1.3.0"] Both plugin_a/a.py and plugin_b/b.py have the same contents: # plugin_a/a.py import pluggy from pluggable import impl @impl def run_plugin(): print(f"run from {__name__}") if __name__ == "__main__": run_plugin() plugin_a/pyproject.toml and plugin_b/pyproject.toml register their respective modules as pluggy plugins: # plugin_a/pyproject.toml [project] name = "plugin_a" version = "1.0.0" dependencies = ["pluggy==1.3.0", "pluggable"] [project.entry-points.pluggable] run_plugin = "a" plugin_b/pyproject.toml is the same, except that run_plugin = "b". Demo run $ python -m venv venv ... $ venv/bin/pip install -e pluggable -e plugin_a ... Successfully installed pluggable-1.0.0 plugin-a-1.0.0 $ venv/bin/python pluggable/pluggable.py run from a $ venv/bin/pip install -e plugin_b ... Successfully installed plugin-b-1.0.0 $ venv/bin/python pluggable/pluggable.py run from b Expected outcome What I'd like to see at the end is run from b run from a Further analysis From reading the load_setuptools_entrypoints code it's pretty clear what's happening. As soon as one plugin is registered to a name, PluginManager.get_plugin(name) returns that value, so other plugins will not be registered at the same name. But what I am hoping to get help to understand is what is the correct way to configure a plugin system with pluggy so that two different python packages can register themselves with the same spec and hook, such that pluggy will run them in serial?
As you also observed, there can be exactly one plugin with a specific name. When using entry points, the name of the plugin is the name of the entry point. As far as I understand the docs, hooks are matched by the spec/impl name and signature, and the plugin name does not matter in this process (plugins are just named collections of related hook impls). From these two, it follows that the entry point name should match the plugin name (and should be unique); see the plugin setup.py example. So, the pyproject.toml entry point would look something like a = "a". (An unrelated possible issue with the code above is that the spec should be added to the manager with add_hookspecs(), example. Given your code works without it, it's probably not mandatory, but it allows the manager to validate the impls against the spec, and all the examples seem to use it.)
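Concretely, a hedged sketch based on the question's own code: in plugin_a/pyproject.toml the entry point would become a = "a" and in plugin_b/pyproject.toml it would become b = "b", so each plugin registers under its own unique name. The host can also let the manager validate impls against the spec with add_hookspecs, along these lines:

# pluggable/pluggable.py -- sketch of the host side
import sys
import pluggy

NAME = "pluggable"
impl = pluggy.HookimplMarker(NAME)

@pluggy.HookspecMarker(NAME)
def run_plugin():
    pass

def main():
    m = pluggy.PluginManager(NAME)
    m.add_hookspecs(sys.modules[__name__])   # validate hook impls against the spec
    m.load_setuptools_entrypoints(NAME)      # now registers both plugins "a" and "b"
    m.hook.run_plugin()                      # calls every registered impl in turn

if __name__ == "__main__":
    main()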
3
2
77,587,845
2023-12-1
https://stackoverflow.com/questions/77587845/modify-regex-capturing-group-in-column
How can I modify the capturing group in pandas df.replace()? I try to add thousands separators to the numbers within the string of each cell. This should happen in a method chain. Here is the code I have so far: import pandas as pd df = pd.DataFrame({'a_column': ['1000 text', 'text', '25000 more text', '1234567', 'more text'], "b_column": [1, 2, 3, 4, 5]}) df = (df.reset_index() .replace({"a_column": {"(\d+)": r"\1"}}, regex=True)) The problem is that I don't know how to do something with r"\1", e.g., str(float(r"\1")) doesn't work. Expected output: index a_column b_column 0 0 1,000 text 1 1 1 text 2 2 2 25,000 more text 3 3 3 1,234,567 4 4 4 more text 5
You can use replace in your pipe, looking for a point preceded by a digit and followed by some multiple of 3 digits using this regex: (?<=\d)(?=(?:\d{3})+\b) That can then be replaced by a comma (,). df = (df .reset_index() .replace({ 'a_column' : { r'(?<=\d)(?=(?:\d{3})+\b)' : ',' } }, regex=True) ) Output: index a_column b_column 0 0 1,000 text 1 1 1 text 2 2 2 25,000 more text 3 3 3 1,234,567 4 4 4 more text 5 5 5 563 and 45 and 9 text 6 Note I added an extra row to your df to show that you don't get commas where you shouldn't.
2
1
77,585,972
2023-12-1
https://stackoverflow.com/questions/77585972/python-nested-lists-search-optimization
I have a search and test problem : in the list of prime numbers from 2 to 100k, we're searching the first set of 5 with the following criteria : p1 < p2 < p3 < p4 < p5 any combination of 2 primes from the solution (3 and 7 => 37 and 73) must also be a prime sum(p1..p5) is the smallest possible sum of primes satisfying criteria, and is above 100k I can totally code such a thing, but i have a severe optimization problem : my code is super duper slow. I have a list of primes under 100k, a list of primes over 100k, and a primality test which works well, but i do not see how to optimize that to obtain a result in a correct time. For a basic idea : the list of all primes under 100k contains 9592 items the list of all primes under 1 billion contains approximately 51 million lines i have the list of all primes under 1 billion, by length Thanks for the help
Here is a numba version that computes the minimal combination of the prime numbers with restrictions as stated in the question. The majority of runtime is spent in pre-computing all valid combinations of prime numbers. On my computer (AMD 5700X) this runs in 1 minute 20 seconds: import numpy as np from numba import njit, prange @njit def prime(a): if a < 2: return False for x in range(2, int(a**0.5) + 1): if a % x == 0: return False return True @njit def str_to_int(s): final_index, result = len(s) - 1, 0 for i, v in enumerate(s): result += (ord(v) - 48) * (10 ** (final_index - i)) return result @njit def generate_primes(n): out = [] for i in range(3, n + 1): if prime(i): out.append(i) return out @njit(parallel=True) def get_comb(n=100_000): # generate all primes < n primes = generate_primes(n) n_primes = len(primes) # generate all valid combinations of primes combs = np.zeros((n_primes, n_primes), dtype=np.uint8) for i in prange(n_primes): for j in prange(i + 1, n_primes): p1, p2 = primes[i], primes[j] c1 = str_to_int(f"{p1}{p2}") c2 = str_to_int(f"{p2}{p1}") if not prime(c1) or not prime(c2): continue combs[i, j] = 1 all_combs = [] for i_p1 in prange(0, n_primes): for i_p2 in prange(i_p1 + 1, n_primes): if combs[i_p1, i_p2] == 0: continue for i_p3 in prange(i_p2 + 1, n_primes): if combs[i_p1, i_p3] == 0: continue if combs[i_p2, i_p3] == 0: continue for i_p4 in prange(i_p3 + 1, n_primes): if combs[i_p1, i_p4] == 0: continue if combs[i_p2, i_p4] == 0: continue if combs[i_p3, i_p4] == 0: continue for i_p5 in prange(i_p4 + 1, n_primes): if combs[i_p1, i_p5] == 0: continue if combs[i_p2, i_p5] == 0: continue if combs[i_p3, i_p5] == 0: continue if combs[i_p4, i_p5] == 0: continue p1, p2, p3, p4, p5 = ( primes[i_p1], primes[i_p2], primes[i_p3], primes[i_p4], primes[i_p5], ) ccomb = np.array([p1, p2, p3, p4, p5], dtype=np.int64) if np.sum(ccomb) < n: continue all_combs.append(ccomb) print(ccomb) break return all_combs all_combs = np.array(get_comb()) print() print("Minimal combination:") print(all_combs[np.sum(all_combs, axis=1).argmin()]) Prints: [ 3 28277 44111 70241 78509] [ 7 61 25939 26893 63601] [ 7 61 25939 61417 63601] [ 7 61 25939 61471 86959] [ 7 2467 24847 55213 92593] [ 7 3361 30757 49069 57331] ... [ 1993 12823 35911 69691 87697] [ 2287 4483 6793 27823 67723] [ 3541 9187 38167 44257 65677] Minimal combination: [ 13 829 9091 17929 72739] real 1m20,599s user 0m0,011s sys 0m0,008s
3
3
77,588,263
2023-12-1
https://stackoverflow.com/questions/77588263/how-to-make-session-state-persist-after-button-click-in-streamlit
I'm encountering an issue with Streamlit where I'm trying to allow the user to modify text using st.text_input and then display the modified text when a button is clicked. However, the modified text is not persisting as expected in the session state. Here's a simplified version of the code: import streamlit as st # Initialize session state if 'text' not in st.session_state: st.session_state.text = "original" if st.button("show"): # Allow the user to modify the text st.session_state.text = st.text_input("Edit Text", value=st.session_state.text) # Display the modified text st.markdown(st.session_state.text) if st.button("show again"): # Display the modified text st.markdown(st.session_state.text) Despite using st.text_input to modify the text, the "show again" button still displays the original text, not the modified one. I've tried using st.text_area as well, but the issue persists. Why is the modified text not persisting in the session state as expected? How to make it persist as expected?
The problem is happening due to two things: The script is rerun every time a button is clicked 'text' is changed when the first button is clicked, so the update lags behind in the session state. This is even listed as a case in the documentation under Buttons to modify st.session_state section. There are two ways to fix the issue. Option 1: Change the way text is initialized in the session state We can use get since st.session_state is like a dict (or an if-else block) to initialize the text key so that on script re-run, the text value persists across "sessions"; if it's the first time, its value will be 'original' but for the subsequent runs, it will be whatever it was in the previous session. import streamlit as st # Initialize session state st.session_state.text = st.session_state.get('text', 'original') if st.button("show"): # Allow the user to modify the text st.text_input("Edit Text", key='text') # <--- set using the key kwarg # Display the modified text st.markdown(st.session_state.text) if st.button("show again"): # Display the modified text st.markdown(st.session_state.text) Option 2: Use callback function Another option is to define a callback function which is activated on button click (to show again). The function updates the session state with the current value of text, so that it persists through the subsequent script re-runs. import streamlit as st def update_text(value): # <--- define callback function st.session_state.text = value # Initialize session state if 'text' not in st.session_state: st.session_state.text = 'original' if st.button("show"): # Allow the user to modify the text st.text_input("Edit Text", key='text') # Display the modified text st.markdown(st.session_state.text) if st.button("show again", on_click=update_text, args=[st.session_state.text]): # <--- invoke the callback # Display the modified text st.markdown(st.session_state.text)
2
2
77,586,290
2023-12-1
https://stackoverflow.com/questions/77586290/how-to-remove-a-tag-from-an-element-of-beautifulsoup
I have a page like this: ... <div class="myclass"> <p> text 1 to keep<span>text 1 to remove</span>and keep this too. </p> <p> text 2 to keep<span>text 2 to remove</span>and keep this too. </p> <div> I.e.: I want to remove all <span> tags from any <p> element from bs4 (BeautifulSoup in Python3). Currently this is my code: from bs4 import BeautifulSoup ... text = "" for tag in soup.find_all(attrs={"class": "myclass"}): text += tag.p.text And of course I get all text in spans too... I read I should use unwrap() or decompose() but I really do not understand how to use them in practice in my use-case... All similar Q/A do not help...
You can try: from bs4 import BeautifulSoup html_text = """\ <div class="myclass"> <p> text 1 to keep<span>text 1 to remove</span>and keep this too. </p> <p> text 2 to keep<span>text 2 to remove</span>and keep this too. </p> <div>""" soup = BeautifulSoup(html_text, "html.parser") for span in soup.select("p span"): span.replace_with(" ") # or span.extract() soup.smooth() print(soup.prettify()) Prints: <div class="myclass"> <p> text 1 to keep and keep this too. </p> <p> text 2 to keep and keep this too. </p> <div> </div> </div>
2
2
77,587,016
2023-12-1
https://stackoverflow.com/questions/77587016/keyerror-when-applying-with-columns-iteratively-over-different-columns-when-usin
I have the following issue with Polars's LazyFrame "Structs" (pl.struct) and "apply" (a.k.a. map_elements) in with_columns The idea here is trying to apply a custom logic to a group of values that belong to more than one column I have been able to achieve this using DataFrames; however, when switching to LazyFrames, a KeyError is raised whenever I try to access a column in the dictionary sent by the struct to the function. I'm looping through columns, one by one, in order to apply different functions (mapped elsewhere to their names, but in the examples below I'll just use the same one for simplicity) Working DataFrame implementation my_df = pl.DataFrame( { "foo": ["a", "b", "c", "d"], "bar": ["w", "x", "y", "z"], "notes": ["1", "2", "3", "4"] } ) print(my_df) cols_to_validate = ("foo", "bar") def validate_stuff(value, notes): # Any custom logic if value not in ["a", "b", "x"]: return f"FAILED {value} - PREVIOUS ({notes})" else: return notes for col in cols_to_validate: my_df = my_df.with_columns( pl.struct([col, "notes"]).map_elements( lambda row: validate_stuff(row[col], row["notes"]) ).alias("notes") ) print(my_df) Broken LazyFrame implementation my_lf = pl.DataFrame( { "foo": ["a", "b", "c", "d"], "bar": ["w", "x", "y", "z"], "notes": ["1", "2", "3", "4"] } ).lazy() def validate_stuff(value, notes): # Any custom logic if value not in ["a", "b", "x"]: return f"FAILED {value} - PREVIOUS ({notes})" else: return notes cols_to_validate = ("foo", "bar") for col in cols_to_validate: my_lf = my_lf.with_columns( pl.struct([col, "notes"]).map_elements( lambda row: validate_stuff(row[col], row["notes"]) ).alias("notes") ) print(my_lf.collect()) (Ah, yeah, do notice that individually executing each iteration does work, so it's not making any sense to me why the for loop breaks) my_lf = my_lf.with_columns( pl.struct(["foo", "notes"]).map_elements( lambda row: validate_stuff(row["foo"], row["notes"]) ).alias("notes") ) my_lf = my_lf.with_columns( pl.struct(["bar", "notes"]).map_elements( lambda row: validate_stuff(row["bar"], row["notes"]) ).alias("notes") ) I have found a workaround using pl.col instead to achieve my desired result, but I would like to know whether Structs can be used the same way with LazyFrames right as I did with DataFrames, or it's actually a bug in this Polars version I'm using Polars 0.19.13, BTW. Thank you for your attention
It's more of a general "gotcha" with Python itself: Official Python FAQ It breaks because col ends up with the same value for every lambda One approach is to use a named/keyword arg: lambda row, col=col: validate_stuff(row[col], row["notes"]) shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ foo ┆ bar ┆ notes β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═══════════════════════════════════║ β”‚ a ┆ w ┆ FAILED w - PREVIOUS (1) β”‚ β”‚ b ┆ x ┆ 2 β”‚ β”‚ c ┆ y ┆ FAILED y - PREVIOUS (FAILED c - … β”‚ β”‚ d ┆ z ┆ FAILED z - PREVIOUS (FAILED d - … β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
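An alternative to the default-argument trick, using the question's own names, is to bind col with functools.partial so each iteration gets its own bound value (a sketch assuming validate_stuff, cols_to_validate and my_lf are defined as in the question):

import functools
import polars as pl

def validate_row(row, col):
    # row is the struct value (a dict-like mapping of field name to value)
    return validate_stuff(row[col], row["notes"])

for col in cols_to_validate:
    my_lf = my_lf.with_columns(
        pl.struct([col, "notes"])
        .map_elements(functools.partial(validate_row, col=col))
        .alias("notes")
    )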
2
5
77,580,121
2023-11-30
https://stackoverflow.com/questions/77580121/are-there-any-ways-to-actually-stop-awaited-long-running-asyncio-task
If there is a long-running background task, cancel() method does not work as I expected (does not work at all). And I cannot find a way to actually stop the task, is that even possible or am I missing something about asyncio work? Documentation says: "Task.cancel() does not guarantee that the Task will be cancelled." But as far as I understand it is due to possibility of coroutine handling the CancelledError exception and suppressing cancelation which is not my case. I tried to run sample code beneath and got infinite loop running forever, cancel() there does not work. My understanding here is that hitting that await asyncio.sleep(1) on the next asyncio event loop iteration should trigger task cancellation, but it is not happening. How can I make already awaited background_task stop? async def background_task(): while True: print('doing something') await asyncio.sleep(1) async def main(): task = asyncio.create_task(background_task()) await task task.cancel() print('Done!') asyncio.run(main()) The output would be: doing something doing something doing something doing something doing something doing something ...
Turns out it's not possible in my case. If you await a long-running task, the caller will wait for task completion and therefore block (thanks to @mkrieger1 for the explanation in the comments). To actually stop the task, you should either modify it and add some event or flag like @zShadowSkilled mentioned, or run it with asyncio.wait_for() by specifying a timeout.
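A minimal sketch of the timeout option applied to the question's code; the 3-second limit is an arbitrary value chosen for the example:

import asyncio

async def background_task():
    while True:
        print('doing something')
        await asyncio.sleep(1)

async def main():
    task = asyncio.create_task(background_task())
    try:
        # Give the task at most 3 seconds; wait_for cancels it when the timeout expires.
        await asyncio.wait_for(task, timeout=3)
    except asyncio.TimeoutError:
        print('Done!')

asyncio.run(main())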
2
0
77,583,454
2023-12-1
https://stackoverflow.com/questions/77583454/pip-install-pycryptodome-returns-is-not-a-supported-wheel-on-this-platform
I am trying to install pycryptodome-3.19.0 and pycryptodomex-3.19.0 on Windows 11 and Python 3.10 in venv. Whl files were downloaded from pypi manually: pycryptodome-3.19.0-pp310-pypy310_pp73-win_amd64.whl pycryptodomex-3.19.0-pp310-pypy310_pp73-win_amd64.whl Trying to install either of them I get an error: ERROR: pycryptodome(x)-3.19.0-pp310-pypy310_pp73-win_amd64.whl is not a supported wheel on this platform But why? These seem to be the most suitable whls available on PyPI: the whls are for Windows and for Python 3.10. Why can't they be installed?
You've downloaded a PyPy wheel, but you're trying to install it on CPython. Look for a wheel tagged with cp instead of pp. Run pip debug -v to see a list of supported compatibility tags for your Python installation / platform.
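If you want to check the accepted tags from Python instead of via pip, a small sketch using the packaging library (pip install packaging) works too:

from packaging.tags import sys_tags

# On CPython 3.10 for 64-bit Windows this prints tags such as
# cp310-cp310-win_amd64 -- note "cp", not the "pp" of a PyPy wheel.
for tag in sys_tags():
    print(tag)

The simplest fix, though, is usually just pip install pycryptodome and let pip pick a compatible wheel from PyPI automatically.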
3
2
77,582,315
2023-11-30
https://stackoverflow.com/questions/77582315/rankn-type-equivalent-for-mypy-in-python
In Haskell we can use rankN types like so: rankN :: (forall n. Num n => n -> n) -> (Int, Double) rankN f = (f 1, f 1.0) Is the same thing possible in python with mypy? I tried the following code in python 3.10.2 with mypy 1.7.1: I = TypeVar("I", int, float) def rankN(f: Callable[[I], I]) -> tuple[int, float]: return (f(1), f(1.0)) This produces the following errors, implying that f is specializing to float: Incompatible return value type (got "tuple[float, float]", expected "tuple[int, float]") [return-value] Argument 1 has incompatible type "float"; expected "int" [arg-type] I'm not necessarily expecting this to work since the magic syntax in the Haskell case is the nested forall, but I don't know if there is a similar way to convey this to mypy if it is possible at all.
I’m not familiar with mypy, but my guess is that you can (and must) represent this as a protocol with a generic method, in order to scope the type variable to be per method call, rather than per invocation of rankN. from typing import Protocol, TypeVar class UnaryNumeric(Protocol): I = TypeVar("I", int, float) def __call__(self, input: I) -> I: pass def rankN(f: UnaryNumeric) -> tuple[int, float]: return (f(1), f(1.0)) Now rankN(lambda x: abs(x)) is accepted, while rankN(lambda x: str(x)) is rejected. I was surprised that rankN(lambda x: x / 2) is also accepted, since int isn’t really a subtype of float, but I guess it floats like a duck.
4
4
77,581,214
2023-11-30
https://stackoverflow.com/questions/77581214/produce-this-list-0-2-6-12-20-30-42-56-72-90-using-list-comprehension
I can produce the list [0, 2, 6, 12, 20, 30, 42, 56, 72, 90] using the following code: x = [] y = 0 for i in range(2,21,2): x.append(y) y += i However I'm not sure how to convert this into list comprehension syntax of the form [expression for value in iterable if condition ]
You can assign to y inside the comprehension, using an assignment expession, i.e. using :=: y = 0 x = [y := y + i for i in range(0,20,2)] Alternatively, you can make use of the fact that these are doubles of triangular numbers, and then you don't need y (but multiplication): x = [i * (i + 1) for i in range(10)]
3
6
77,580,556
2023-11-30
https://stackoverflow.com/questions/77580556/importing-data-from-two-xml-parent-nodes-to-a-pandas-dataframe-using-read-xml
I am having trouble in importing an XML file to Pandas where I need to grab data from two parent nodes. One parent node (AgentID) has data directly in it, and the other (Sales) has child nodes (Location, Size, Status) that contain data, as given below. test_xml = '''<TEST_XML> <Sales> <AgentID>0001</AgentID> <Sale> <Location>0</Location> <Size>1000</Size> <Status>Available</Status> </Sale> <Sale> <Location>1</Location> <Size>500</Size> <Status>Unavailable</Status> </Sale> </Sales> </TEST_XML>''' when I try to import this to a Pandas Dataframe below is the only way I was able to grab data under the Sale tag. import pandas as pd df = pd.read_xml(test_xml, xpath='//Sale') which gives me a dataframe like the one below: Location Size Status 0 0 1000 Available 1 1 500 Unavailable What I need is including the AgentID tag in the DataFrame too, to get the following, but I was unsuccessful. Expected output is given below for clarity: AgentID Location Size Status 0 0001 0 1000 Available 1 0001 1 500 Unavailable Is there a way to manipulate the xpath parameter to include the data inside the AgentID tag as well, or is it impossible to do it using Pandas' read_xml function? I tried passing a list like xpath=['//AgentID', '//Sale'] but of course, it did not work...
I don't think you can get the desired output using just read_xml(); however, it's possible by manipulating it a bit. Essentially, the idea is to get everything from the xml using a generic xpath, select the required columns, populate the AgentID column to corresponding to Sale columns; then remove redundant rows. df = pd.read_xml(io.StringIO(test_xml), xpath='//*', dtype=str)[['AgentID', 'Location', 'Size', 'Status']] df['AgentID'] = df['AgentID'].ffill() df = df.dropna(how='any').astype({'Location': int, 'Size': int}).reset_index(drop=True) An "easier" solution to get a parent node (although not related to the exact question in the OP) is to convert the XML into a Python dictionary and normalize it into dataframe using pd.json_normalize. This works because meta fields (in this case AgentID) can be specified here. However, we need to install a third-party library (xmltodict) to achieve the first step. !pip install xmltodict import xmltodict df = ( pd.json_normalize(xmltodict.parse(test_xml)['TEST_XML']['Sales'], record_path=['Sale'], meta=['AgentID']) [['AgentID', 'Location', 'Size', 'Status']] )
2
1
77,580,911
2023-11-30
https://stackoverflow.com/questions/77580911/fast-way-to-check-values-of-one-dataframe-against-another-dataframe-in-pandas
I have two dataframes. df1: Date High Mid Low 1 2023-08-03 00:00:00 29249.8 29136.6 29152.3 4 2023-08-03 12:00:00 29395.8 29228.1 29105.0 10 2023-08-04 12:00:00 29305.2 29250.1 29137.1 13 2023-08-05 00:00:00 29099.9 29045.3 29073.0 18 2023-08-05 20:00:00 29061.6 29047.1 29044.0 .. ... ... ... ... 696 2023-11-26 20:00:00 37732.1 37469.9 37370.0 703 2023-11-28 00:00:00 37341.4 37138.2 37254.1 707 2023-11-28 16:00:00 38390.7 38137.2 37534.4 711 2023-11-29 08:00:00 38419.0 38136.3 38112.0 716 2023-11-30 04:00:00 38148.9 37800.1 38040.0 and df2: Start Top Bottom 0 2023-11-28 00:00:00 37341.4 37138.2 1 2023-11-24 12:00:00 38432.9 37894.4 I need to check if the values in the first dataframe fall within the range of the values in a row of a second dataframe and store the number of matches in a column. I can do it using iteration like this: for idx in df1.index: df2.loc[ (df2.Start != df1.at[idx, 'Date']) & (df2.Bottom < df1.at[idx, 'High']) & (df2.Top > df1.loc[idx, ['Mid', 'Low']].max()), 'Match'] += 1 But this way is slow. Is there a faster way to do it without iteration?
If you have enough memory (depends on df1 and df2), you can use a cross merge: df2['Match'] = (df2.reset_index() .merge(df1, how='cross') .loc[lambda x: (x.Start != x.Date) & (x.Bottom < x.High) & (x.Top > x[['Mid', 'Low']].max(axis=1))] .value_counts('index').reindex(df2.index, fill_value=0)) Output: >>> df2 Start Top Bottom Match 0 2023-11-28 00:00:00 37341.4 37138.2 0 1 2023-11-24 12:00:00 38432.9 37894.4 3
2
2
77,580,216
2023-11-30
https://stackoverflow.com/questions/77580216/starred-unpacking-in-subscription-index
Consider the following code: class A: def __getitem__(self, key): print(key) a = A() a[*(1,2,3)] With python 3.10.6, I get a SyntaxError : invalid syntax at the starred unpacking on the last line. On python 3.11.0, however, the code works fine and prints (1,2,3), as one could maybe expect. As far as I can tell, there is no difference in the language reference between the two versions regarding the syntax of subscriptions (sections 6.3.2 of the references are identical), and no mention of this change appears in the 3.11 changelog. Also, looking at these reference the behavior of 3.10.6 seem consistent (the expression between brackets should be an expression_list, which does not allow for starred unpacking). Is this a bug in the 3.11 version, or is this syntax change documented somewhere?
It appears that this change was introduced in 3.11 as a part of grammar changes for PEP 646. Relevant quote: To put it another way, note that x[..., *a, ...] produces the same result as x[(..., *a, ...)] (with any slices i:j in ... replaced with slice(i, j), with the one edge case that x[*a] becomes x[(*a,)]). The relevant PR appears to be bpo-43224 (#31018). The changeset was bpo-43224 (#87390). That this is an implication of PEP 646 is not obvious to me (until you read around a bit). PEP 637 was rejected but it appears to deal with this more directly. In the rejection discussion for PEP 637, Guido van Rossum says: Let me clarify what the typing-sig folks wanted out of this PEP. We only care about adding support for x[*y] (including things like x[a, *b, c]). We'll just update PEP 646 to add that explicitly there and hope that PEP 646 fares better than PEP 637. One last bit of trivia: I found this by looking through the pyright repo, which initially added this support in response to PEP 637 (commit). It appears that this was subsequently updated to reflect the actual changes that went into 3.11 (commit).
2
3
77,579,387
2023-11-30
https://stackoverflow.com/questions/77579387/playwright-how-to-handle-new-windows
I'm trying to log in with Steam on https://buff.163.com. My current code looks like this: from playwright.sync_api import sync_playwright with sync_playwright() as p: browser = p.chromium.launch(headless=False, slow_mo=50) page = browser.new_page() context = browser.new_context() page.goto("https://buff.163.com/market/csgo#tab=selling&page_num=1") page.locator("xpath=/html/body/div[1]/div/div[3]/ul/li/a").click() popup = page.wait_for_event("popup") page.get_by_text("Other login methods").click() After it clicks the last button a new window opens. How can I access elements in there?
Your sequence seems to be a bit out of order. There's a modal after clicking the Login button, then a popup after clicking Other login methods. Try this flow: from playwright.sync_api import sync_playwright # 1.37.0 with sync_playwright() as p: browser = p.chromium.launch(headless=False, slow_mo=50) page = browser.new_page() page.goto("<Your URL>") page.get_by_role("link", name="Login/Register").click() with page.expect_popup() as popup_info: page.get_by_text("Other login methods").click() popup = popup_info.value popup.wait_for_load_state() # interact with the popup... print(popup.title())
2
2
77,579,754
2023-11-30
https://stackoverflow.com/questions/77579754/identifying-all-the-entries-within-7-days-of-some-dates-pandas
I have two dataframes. One records the date of trades: trade = pd.DataFrame({'date': ['2019-08-31', '2019-09-01', '2019-09-04'], 'person': [1, 1, 2], 'code': [123, 123, 456], 'value1': [1, 2, 3]}) And the other records the dates of browsing history: view = pd.DataFrame({'date': ['2019-08-29', '2019-08-29', '2019-08-30', '2019-08-31', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-02', '2019-09-03'], 'person': [1, 1, 1, 2, 1, 2, 2, 1, 2], 'code': [123, 456, 123, 456, 123, 123, 456, 123, 456], 'value': [1, 2, 3, 4, 5, 6, 7, 8, 9] }) My desired outcome is to generate a list of the related browsing history (occurred within 7 days prior to the trade) for each trade. out = pd.DataFrame({'date': ['2019-08-31', '2019-09-01', '2019-09-04'], 'person': [1, 1, 2], 'code': [123, 123, 456], 'value1': [1, 2, 3], 'view_dates': [['2019-08-29', '2019-08-29'], ['2019-08-29', '2019-08-30', '2019-09-01'], ['2019-08-31', '2019-09-01', '2019-09-03']], 'view_values':[[1, 3], [1, 3, 5], [4, 7, 9]]}) I tried to use merge_asof to match the views with the trades. And then generate the list for each trade based on the merged outcome. merge_out = pd.merge_asof(view, trade, tolerance=pd.Timedelta('7d'), direction='forward', on='date', by = ['person', 'code']) However, merge_asof uses each view entry only once. It assigns views [1, 3] to trade 1, and views [5] to trade 2. I want to assign views [1, 3] to trade 1, and views [1, 3, 5] to trade 2. I am wondering whether there is a way to identify all the items in a time range in view for each trade. Thanks!
There is no highly efficient way to do this in pure pandas, you can however use janitor's conditional_join with a helper column, then groupby.agg: import janitor trade['date'] = pd.to_datetime(trade['date']) view['date'] = pd.to_datetime(view['date']) out = (trade .assign(start_date=lambda d: d['date'].sub(pd.DateOffset(days=7))) .conditional_join(view.rename(columns={'date': 'view_dates', 'value': 'view_values'}), ('start_date', 'view_dates', '<='), ('date', 'view_dates', '>='), ('person', 'person', '=='), ('code', 'code', '=='), right_columns=['view_dates', 'view_values'] ) .drop(columns='start_date') .assign(view_dates=lambda d: d['view_dates'].dt.strftime('%Y-%m-%d')) .groupby(list(trade), as_index=False).agg(list) ) A pure pandas solution would be less efficient as one would need to merge all combinations of the same person/code: out = (trade .merge(view.rename(columns={'date': 'view_dates', 'value': 'view_values'}), on=['person', 'code']) .loc[lambda d: d['date'].gt(d['view_dates']) & d['date'].sub(pd.DateOffset(days=7)).le(d['view_dates']) ] .assign(view_dates=lambda d: d['view_dates'].dt.strftime('%Y-%m-%d')) .groupby(list(trade), as_index=False).agg(list) ) Output: date person code value1 view_dates view_values 0 2019-08-31 1 123 1 [2019-08-29, 2019-08-30] [1, 3] 1 2019-09-01 1 123 2 [2019-08-29, 2019-08-30, 2019-09-01] [1, 3, 5] 2 2019-09-04 2 456 3 [2019-08-31, 2019-09-01, 2019-09-03] [4, 7, 9]
2
2
77,579,302
2023-11-30
https://stackoverflow.com/questions/77579302/python-assignment-statement
I was going through the Python assignment statement docs. Python uses the Backus–Naur form below for assignment statements: assignment_stmt ::= (target_list "=")+ (starred_expression | yield_expression) target_list ::= target ("," target)* [","] target ::= identifier | "(" [target_list] ")" | "[" [target_list] "]" | attributeref | subscription | slicing | "*" target The Backus–Naur form of starred_expression is starred_expression ::= expression | (starred_item ",")* [starred_item] starred_item ::= assignment_expression | "*" or_expr and the Backus–Naur form of yield_expression is yield_atom ::= "(" yield_expression ")" yield_expression ::= "yield" [expression_list | "from" expression] After recursively going through all the related Backus–Naur forms of each sub-expression given above, I am still scratching my head over how a simple assignment like a=9 can fit into the above Backus–Naur form. Especially, how does the 9 on the RHS of the given statement fall into yield_expression or starred_expression?
Isn't it right here? starred_expression ::= expression | … A starred_expression can be just an expression. It must be the case that expression encompasses numeric literals like 9. (Edited for clarity following comments.) UPDATE Here is the full line from starred_expression to 9. starred_expression ::= expression | (starred_item ",")* [starred_item] expression ::= conditional_expression | lambda_expr conditional_expression ::= or_test ["if" or_test "else" expression] or_test ::= and_test | or_test "or" and_test and_test ::= not_test | and_test "and" not_test not_test ::= comparison | "not" not_test comparison ::= or_expr (comp_operator or_expr)* or_expr ::= xor_expr | or_expr "|" xor_expr xor_expr ::= and_expr | xor_expr "^" and_expr and_expr ::= shift_expr | and_expr "&" shift_expr shift_expr ::= a_expr | shift_expr ("<<" | ">>") a_expr a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr m_expr ::= u_expr | m_expr "*" u_expr | m_expr "@" m_expr | m_expr "//" u_expr | m_expr "/" u_expr | m_expr "%" u_expr u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr power ::= (await_expr | primary) ["**" u_expr] primary ::= atom | attributeref | subscription | slicing | call atom ::= identifier | literal | enclosure literal ::= stringliteral | bytesliteral | integer | floatnumber | imagnumber integer ::= decinteger | bininteger | octinteger | hexinteger decinteger ::= nonzerodigit (["_"] digit)* | "0"+ (["_"] "0")* nonzerodigit ::= "1"..."9" What makes it confusing is that for every element from conditional_expression down to power, the thing that makes it look like the "thing it is" is optional! For instance, in power, the ** operator is actually not even required. So we think of 2**16 as a power, but 2 also qualifies as a power. Similarly for or_test, an or keyword is not actually required. It works like that all the way up. For every line, 9 satisfies the simplest version of the syntactic element with none of the optional parts included.
3
4
77,578,698
2023-11-30
https://stackoverflow.com/questions/77578698/selenium-not-working-with-correct-chromedriver-version-and-chrome-version
I want to just write a hello world with selenium and have the following code: from selenium import webdriver driver = webdriver.Chrome('C:/Users/[...]/chromedriver/chromedriver.exe') driver.get("https://www.google.com") But I keep getting the following error: Traceback (most recent call last): File "C:\Users\Username\AppData\Local\Programs\Python\Python312\Lib\site-packages\selenium\webdriver\common\driver_finder.py", line 38, in get_path path = SeleniumManager().driver_location(options) if path is None else path ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Username\AppData\Local\Programs\Python\Python312\Lib\site-packages\selenium\webdriver\common\selenium_manager.py", line 75, in driver_location browser = options.capabilities["browserName"] ^^^^^^^^^^^^^^^^^^^^ AttributeError: 'str' object has no attribute 'capabilities' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Users\[...]\ets.py", line 3, in <module> driver = webdriver.Chrome('C:/Users/[...]/chromedriver/chromedriver.exe') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Username\AppData\Local\Programs\Python\Python312\Lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 45, in __init__ super().__init__( File "C:\Users\Username\AppData\Local\Programs\Python\Python312\Lib\site-packages\selenium\webdriver\chromium\webdriver.py", line 51, in __init__ self.service.path = DriverFinder.get_path(self.service, options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Username\AppData\Local\Programs\Python\Python312\Lib\site-packages\selenium\webdriver\common\driver_finder.py", line 40, in get_path msg = f"Unable to obtain driver for {options.capabilities['browserName']} using Selenium Manager." ^^^^^^^^^^^^^^^^^^^^ I am using Chrome version: 119.0.6045.200 And ChromeDriver: 119.0.6045.105 I cant find anything on it, thanks for the help
You can simply get the driver by: from selenium import webdriver driver = webdriver.Chrome() then: driver.get("https://www.google.com") You don't need driver.exe anymore doc: https://www.selenium.dev/blog/2022/introducing-selenium-manager/
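If you still want to point Selenium at a specific chromedriver binary, here is a sketch using the Selenium 4 Service object (the path is the elided one from the question and is only illustrative):
    from selenium import webdriver
    from selenium.webdriver.chrome.service import Service

    # In Selenium 4 the first positional argument of webdriver.Chrome() is `options`,
    # not a driver path, which is why passing a path string raised the error above.
    driver = webdriver.Chrome(service=Service("C:/Users/[...]/chromedriver/chromedriver.exe"))
    driver.get("https://www.google.com")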
2
1
77,578,241
2023-11-30
https://stackoverflow.com/questions/77578241/how-to-get-anscombe-data-wide-format-from-long-format
I have the follwing anscombe data in long format import seaborn as sns # Load the example dataset for Anscombe's quartet anscombe_long = sns.load_dataset("anscombe") anscombe_long dataset x y 0 I 10.0 8.04 1 I 8.0 6.95 2 I 13.0 7.58 3 I 9.0 8.81 4 I 11.0 8.33 5 I 14.0 9.96 6 I 6.0 7.24 ... ... I wanted to convert this into wide format where the column names x and y should have suffixes of the set number. expected output: anscombe_wide #> x1 x2 x3 x4 y1 y2 y3 y4 #> 1 10 10 10 8 8.04 9.14 7.46 6.58 #> 2 8 8 8 8 6.95 8.14 6.77 5.76 #> 3 13 13 13 8 7.58 8.74 12.74 7.71 #> 4 9 9 9 8 8.81 8.77 7.11 8.84 #> 5 11 11 11 8 8.33 9.26 7.81 8.47 #> 6 14 14 14 8 9.96 8.10 8.84 7.04 #> 7 6 6 6 8 7.24 6.13 6.08 5.25 #> 8 4 4 4 19 4.26 3.10 5.39 12.50 #> 9 12 12 12 8 10.84 9.13 8.15 5.56 #> 10 7 7 7 8 4.82 7.26 6.42 7.91 #> 11 5 5 5 8 5.68 4.74 5.73 6.89
Use GroupBy.cumcount with DataFrame.pivot: out = (anscombe_long.assign(g = anscombe_long.groupby('dataset').cumcount()) .pivot(index='g', columns='dataset')) print (out) x y dataset I II III IV I II III IV g 0 10.0 10.0 10.0 8.0 8.04 9.14 7.46 6.58 1 8.0 8.0 8.0 8.0 6.95 8.14 6.77 5.76 2 13.0 13.0 13.0 8.0 7.58 8.74 12.74 7.71 3 9.0 9.0 9.0 8.0 8.81 8.77 7.11 8.84 4 11.0 11.0 11.0 8.0 8.33 9.26 7.81 8.47 5 14.0 14.0 14.0 8.0 9.96 8.10 8.84 7.04 6 6.0 6.0 6.0 8.0 7.24 6.13 6.08 5.25 7 4.0 4.0 4.0 19.0 4.26 3.10 5.39 12.50 8 12.0 12.0 12.0 8.0 10.84 9.13 8.15 5.56 9 7.0 7.0 7.0 8.0 4.82 7.26 6.42 7.91 10 5.0 5.0 5.0 8.0 5.68 4.74 5.73 6.89 Then convert roman numbers to integers in list comprehension: #pip install roman import roman out.columns=[f'{a}{roman.fromRoman(b)}' for a, b in out.columns] print (out) x1 x2 x3 x4 y1 y2 y3 y4 g 0 10.0 10.0 10.0 8.0 8.04 9.14 7.46 6.58 1 8.0 8.0 8.0 8.0 6.95 8.14 6.77 5.76 2 13.0 13.0 13.0 8.0 7.58 8.74 12.74 7.71 3 9.0 9.0 9.0 8.0 8.81 8.77 7.11 8.84 4 11.0 11.0 11.0 8.0 8.33 9.26 7.81 8.47 5 14.0 14.0 14.0 8.0 9.96 8.10 8.84 7.04 6 6.0 6.0 6.0 8.0 7.24 6.13 6.08 5.25 7 4.0 4.0 4.0 19.0 4.26 3.10 5.39 12.50 8 12.0 12.0 12.0 8.0 10.84 9.13 8.15 5.56 9 7.0 7.0 7.0 8.0 4.82 7.26 6.42 7.91 10 5.0 5.0 5.0 8.0 5.68 4.74 5.73 6.89 Solution if always know number of datesets by mapping dictionary: d = {'I':1, 'II':2, 'III':3, 'IV':4} out.columns=[f'{a}{d[b]}' for a, b in out.columns] print (out) x1 x2 x3 x4 y1 y2 y3 y4 g 0 10.0 10.0 10.0 8.0 8.04 9.14 7.46 6.58 1 8.0 8.0 8.0 8.0 6.95 8.14 6.77 5.76 2 13.0 13.0 13.0 8.0 7.58 8.74 12.74 7.71 3 9.0 9.0 9.0 8.0 8.81 8.77 7.11 8.84 4 11.0 11.0 11.0 8.0 8.33 9.26 7.81 8.47 5 14.0 14.0 14.0 8.0 9.96 8.10 8.84 7.04 6 6.0 6.0 6.0 8.0 7.24 6.13 6.08 5.25 7 4.0 4.0 4.0 19.0 4.26 3.10 5.39 12.50 8 12.0 12.0 12.0 8.0 10.84 9.13 8.15 5.56 9 7.0 7.0 7.0 8.0 4.82 7.26 6.42 7.91 10 5.0 5.0 5.0 8.0 5.68 4.74 5.73 6.89
2
2
77,577,590
2023-11-30
https://stackoverflow.com/questions/77577590/whats-the-pythonic-way-to-pass-a-variable-into-another-class
I have different classes which calculate different positions. Let's say ClassX provides the function get_xpos() and ClassY accordingly get_ypos(). For the calculation inside ClassY I need the x_pos. I can't pass the value in the __init__ function because it changes every cycle. In C++ I would do this by passing a pointer. Moving everything into one class is not possible for different reasons. Right now I'm doing it like this: class ClassY: y_pos = 0 def simulate(self, x_pos): self.calc_y(x_pos) Every class has a simulate function to calculate the new values and a getter for the positions. main: x = ClassX() y = ClassY() while True: x.simulate() y.simulate(x.get_xpos()) Note: the classes are in different files.
You can pass a reference to an instance of the ClassX to the constructor of ClassY. Then you can access the x_pos inside the simulate method of y without passing anything. Here's an example with random x and y=2x. import random class ClassX: def __init__(self): self.x_pos = 0 def calc_x(self): self.x_pos = random.randint(0, 10) def simulate(self): self.calc_x() class ClassY: def __init__(self, x_instance): self.x_instance = x_instance self.y_pos = 0 def calc_y(self, x_pos): self.y_pos = x_pos * 2 def simulate(self): self.calc_y(self.x_instance.x_pos) if __name__ == "__main__": x = ClassX() y = ClassY(x) for i in range(10): x.simulate() y.simulate() print(f"[{x.x_pos}, {y.y_pos}]")
3
2
77,574,103
2023-11-29
https://stackoverflow.com/questions/77574103/how-to-make-the-nested-for-loop-execute-faster-in-python
Here is my script: for a in range(-100, 101): for b in range(-100, 101): for c in range(-100, 101): for d in range(-100, 101): if abs(2**a*3**b*5**c*7**d-0.3048) <= 10**(-6): print('a=',a, ', b=', b, ', c=', c,', d=', d,', the number=', 2**a*3**b*5**c*7**d, ', error=', abs(2**a*3**b*5**c*7**d-.3048)) It took 27 mins and 15 seconds to execute the above script in python. I know that it goes through 201^4 expression evaluations, but I need to run these kinds of calculations faster (because I want to try range(-200,201) and so on). I'm wondering if it is possible to make the above code execute faster. I think using numpy arrays would help, but not sure how to apply this, and whether it is actually effective.
For these kind of computations you can try numba JIT: from numba import njit @njit def fn(): for a in range(-100, 101): for b in range(-100, 101): for c in range(-100, 101): for d in range(-100, 101): n = (2.0**a) * (3.0**b) * (5.0**c) * (7.0**d) v = n - 0.3048 if abs(v) <= 1e-06: print( "a=", a, ", b=", b, ", c=", c, ", d=", d, ", the number=", n, ", error=", abs(n - 3.048), ) fn() Running this code on my machine (AMD 5700X) takes ~57 seconds (that's with compilation step included). In comparison, without the @njit (just plain Python) this takes exactly 4 minutes. a= -78 , b= -89 , c= -14 , d= 89 , the number= 0.3047994427888104 , error= 2.7432005572111895 a= -78 , b= -57 , c= 50 , d= 18 , the number= 0.30479915330101043 , error= 2.7432008466989894 a= -69 , b= -85 , c= 87 , d= 0 , the number= 0.3047993420932106 , error= 2.7432006579067894 a= -63 , b= 42 , c= -99 , d= 80 , the number= 0.3048005478488736 , error= 2.7431994521511265 a= -63 , b= 74 , c= -35 , d= 9 , the number= 0.3048002583600241 , error= 2.743199741639976 a= -54 , b= 14 , c= -62 , d= 62 , the number= 0.3048007366419375 , error= 2.7431992633580626 a= -54 , b= 46 , c= 2 , d= -9 , the number= 0.30480044715290866 , error= 2.7431995528470914 a= -54 , b= 78 , c= 66 , d= -80 , the number= 0.3048001576641548 , error= 2.7431998423358452 a= -45 , b= -14 , c= -25 , d= 44 , the number= 0.30480092543511833 , error= 2.7431990745648815 a= -45 , b= 18 , c= 39 , d= -27 , the number= 0.3048006359459102 , error= 2.7431993640540897 a= -36 , b= -10 , c= 76 , d= -45 , the number= 0.30480082473902875 , error= 2.7431991752609712 a= 5 , b= -44 , c= -72 , d= 82 , the number= 0.30479914163960603 , error= 2.743200858360394 a= 14 , b= -72 , c= -35 , d= 64 , the number= 0.304799330431799 , error= 2.743200669568201 a= 14 , b= -40 , c= 29 , d= -7 , the number= 0.3047990409441057 , error= 2.743200959055894 a= 23 , b= -100 , c= 2 , d= 46 , the number= 0.30479951922410875 , error= 2.7432004807758914 a= 23 , b= -68 , c= 66 , d= -25 , the number= 0.30479922973623635 , error= 2.7432007702637637 a= 29 , b= 91 , c= -56 , d= -16 , the number= 0.30480014600271205 , error= 2.743199853997288 a= 38 , b= 31 , c= -83 , d= 37 , the number= 0.30480062428444915 , error= 2.743199375715551 a= 38 , b= 63 , c= -19 , d= -34 , the number= 0.30480033479552704 , error= 2.743199665204473 a= 47 , b= 3 , c= -46 , d= 19 , the number= 0.30480081307756046 , error= 2.7431991869224395 a= 47 , b= 35 , c= 18 , d= -52 , the number= 0.30480052358845894 , error= 2.743199476411541 a= 56 , b= 7 , c= 55 , d= -70 , the number= 0.3048007123815079 , error= 2.7431992876184923 a= 65 , b= -21 , c= 92 , d= -88 , the number= 0.3048009011746738 , error= 2.7431990988253263 a= 97 , b= -27 , c= -93 , d= 57 , the number= 0.3047990292827057 , error= 2.7432009707172944 real 0m57,939s user 0m0,009s sys 0m0,009s Looking at your code, you can use parallel range (prange) to speed up things even further: from numba import njit, prange @njit(parallel=True) def fn(): for a in prange(-100, 101): i_a = 2.0**a for b in prange(-100, 101): i_b = i_a * 3.0**b for c in prange(-100, 101): i_c = i_b * 5.0**c for d in prange(-100, 101): n = i_c * (7.0**d) v = n - 0.3048 if abs(v) <= 1e-06: print( "a=", a, ", b=", b, ", c=", c, ", d=", d, ", the number=", n, ", error=", abs(n - 3.048), ) fn() Takes on my 8C/16T machine just ~2.7 seconds. @EDIT: Added storing intermediate results. Thanks @yotheguitou
2
5
77,570,553
2023-11-29
https://stackoverflow.com/questions/77570553/conditionally-required-value-in-pydantic-v2-model
I'm working with an API that accepts a query parameter, which selects the values the API will return. Therefore, when parsing the API response, all attributes of the Pydantic model used for validation must be optional: class InvoiceItem(BaseModel): """ Pydantic model representing an Invoice """ id: PositiveInt | None = None org: AnyHttpUrl | None = None relatedInvoice: AnyHttpUrl | None = None quantity: PositiveInt | None = None However, when creating an object using the API, some of the attributes are required. How can I make attributes to be required in certain conditions (in Pydantic v1 it was possible to use metaclasses for this)? Examples could be to somehow parameterise the model (as it wouldn't know without external input how its being used) or to create another model InvoiceItemCreate inheriting from InvoiceItem and make the attributes required without re-defining them.
Inspired by Marks answer I ended up using something like this: Mixin generator pattern: from typing import Self from pydantic import BaseModel, model_validator def required_mixin(required_attributes: list[str | list[str]]): class SomeRequired(BaseModel): @model_validator(mode="after") def required_fields(self) -> Self: for v in required_attributes: if isinstance(v, str): if getattr(self, v) is None: raise ValueError(f"{v} attribute is required but is None") else: if not any(getattr(self, attr) is not None for attr in v): raise ValueError(f"One of {v} is required.") return self return SomeRequired Actual class class InvoiceItem(BaseModel): """ Pydantic model representing an Invoice """ id: PositiveInt | None = None quantity: PositiveInt | None = None unitPrice: float | None = None totalPrice: float | None = None title: str | None = None description: str | None = None class InvoiceItemCreate( InvoiceItem, required_mixin(["title", "quantity", ["unitPrice", "totalPrice"]]), ): """ Pydantic model representing an Invoice """ This allows me to re-use the definitions of the actual class and create an additional model based on the old one. The model validator allows to define certain attributes that are required when using the InvoiceItemCreate model, and even allows little more complex scenarios like "either unitPrice or totalPrice is required.
3
0
77,574,303
2023-11-29
https://stackoverflow.com/questions/77574303/get-a-dictionary-of-related-model-values
I have a model Post with some fields. Aside from that I have some models which have Post as a ForeignKey. Some examples are: class ViewType(models.Model): post = models.ForeignKey( Post, on_delete=models.CASCADE, related_name="view_types", verbose_name=_("Post"), ) view = models.CharField( max_length=20, choices=VIEW_TYPE_CHOICES, verbose_name=_("View") ) ... class HeatType(models.Model): post = models.ForeignKey( Post, on_delete=models.CASCADE, related_name="heat_types", verbose_name=_("Post"), ) heat = models.CharField( max_length=30, choices=HEAT_TYPE_CHOICES, verbose_name=_("Heat") ) ... So what I want to do here is somehow get a dictionary with all the values of those fields in my view. For example instead of doing this for all of the models that have Post as a ForeignKey: heat_type = HeatType.objects.filter(post=post) view_type = ViewType.objects.filter(post=post) dict = { "view_type": view_type.view, "heat_type": heat_type.heat, etc... } get all the relevant related fields in one go. Is there a simpler solution for that? Or do I have to manually get all queries for each model? Thanks in advance
With some trepidation.... from django.db.models.fields.related_descriptors import ReverseManyToOneDescriptor class Post(models.Model): def dump(self): mydict = {} for k, v in Post.__dict__.items(): # find the attributes that represent the reverse foreign keys if type(v) == ReverseManyToOneDescriptor: print(k) mydict[k] = getattr(self, k).all() print(mydict) With that solution, each value in mydict has a list of instances, but not the values of heat and view respectively. What you might do is add a method to HeatType and ViewType with the same name, like so class ViewType(models.Model): def dump(self): return self.view class HeatType(models.Model): def dump(self): return self.heat Now you can dump each related object into mydict with if type(v) == ReverseManyToOneDescriptor: print(k) mydict[k] = [instance.dump() for instance in getattr(self, k).all()]
2
2
77,570,976
2023-11-29
https://stackoverflow.com/questions/77570976/cygpath-not-found-exec-cmd-not-found-for-pyenv-on-windows-wsl-2
I have been trying to get Tensorflow to recognize my GPU within WSL 2. However, I believe that is largely irrelevant for the problem I am having right now. Whenever I try to run the pyenv command within WSL I get the following error: /mnt/c/Users/USER/.pyenv/pyenv-win/bin/pyenv: 3: cygpath: not found /mnt/c/Users/USER/.pyenv/pyenv-win/bin/pyenv: 3: exec: cmd: not found Pyenv does work outside of the WSL environment. Here are the details of my environment: pyenv - 3.1.1 WSL Kernel - 5.15.133.1-1 WSL version - 2.0.9.0 Windows 10 Windows Version - 10.0.19045.3693 What can I do to make pyenv work inside WSL?
It seemed that my WSL environment was referring to the pyenv version installed on windows and not the pyenv version installed within WSL (ubuntu). Installing pyenv in WSL and setting the correct path should help. It can be done like this: curl https://pyenv.run | bash Then add the next bit of code to your ~/.bashrc and change the {USER} variable to your own username. export PATH="/home/USER/.pyenv/bin:$PATH" eval "$(pyenv init -)" eval "$(pyenv virtualenv-init -)" For more details on installing pyenv in WSL Ubuntu, see here. p.s. If you get a 'permission denied' error with cygpath and exec:cmd, see here.
2
4
77,570,302
2023-11-29
https://stackoverflow.com/questions/77570302/how-can-i-pass-a-keyword-argument-to-a-function-when-the-name-contains-a-dot
Given a function that accepts "**kwargs", e.g., def f(**kwargs): print(kwargs) how can I pass a key-value pair if the key contains a dot/period (.)? The straightforward way results in a syntax error: In [46]: f(a.b=1) Cell In[46], line 1 f(a.b=1) ^ SyntaxError: expression cannot contain assignment, perhaps you meant "=="?
Python functions only accepts valid python names (letters, underscore, and digits except for the first character), a dot is not allowed. If you want to have a string a.b as parameter, then you must use a dictionary f(**{'a.b': 1}) # {'a.b': 1} You can combine this with other parameters: f(x=2, **{'a.b': 1}) # {'x': 2, 'a.b': 1}
2
3
77,568,371
2023-11-29
https://stackoverflow.com/questions/77568371/how-to-display-value-of-another-fields-of-related-field-in-odoo-form-views
Is it possible to display the value of another field of a related field? For example, by default, in Sale Order, the displayed value of partner_id is the value of partner_id.name. What if I want to display the value of partner_id.mobile instead of the default? I've tried explicitly declaring "partner_id.{FIELD}" like the example below, but the SO model always reports that those fields are not available in the model: <?xml version="1.0" encoding="utf-8"?> <odoo> <record id="sale_order_form_inherit" model="ir.ui.view"> <field name="name">sale.order.form.inherit</field> <field name="model">sale.order</field> <field name="inherit_id" ref="sale.sale_order_form"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="BSP Prinsipal" name="bsp_prinsipal_page"> <group> <field name="partner_id.cp_logistik"/> <field name="partner_id.cp_finance"/> <field name="partner_id.cp_marketing"/> </group> </page> </notebook> </field> </record> </odoo> Thanks in advance, by the way!
You can't use dotted field names in the form view. You can use a related field and remove partner_id from the field name Example: Inherit sale order model: class SaleOrder(models.Model): _inherit = 'sale.order' cp_logistik = fields.Float(related="partner_id.cp_logistik") cp_finance = fields.Float(related="partner_id.cp_finance") cp_marketing = fields.Float(related="partner_id.cp_marketing") Use related field names in form view: <record id="sale_order_form_inherit" model="ir.ui.view"> <field name="name">sale.order.form.inherit</field> <field name="model">sale.order</field> <field name="inherit_id" ref="sale.sale_order_form"/> <field name="arch" type="xml"> <notebook position="inside"> <page string="BSP Prinsipal" name="bsp_prinsipal_page"> <group> <field name="cp_logistik"/> <field name="cp_finance"/> <field name="cp_marketing"/> </group> </page> </notebook> </field> </record> . You may need to modify the field types
2
3
77,531,208
2023-11-22
https://stackoverflow.com/questions/77531208/python-3-12-syntaxwarning-invalid-escape-sequence-on-triple-quoted-string-d
After updating to Python 3.12, I get warnings about invalid escape sequence on some triple-quotes comments. Is this a new restriction? I have the habit of documenting code using triple-quoted string, but this has never been a problem prior to Python 3.12. python3 --version Python 3.12.0 $ ./some_script.py /some_script.py:123: SyntaxWarning: invalid escape sequence '\d' """ I tried replacing all lines with \d: 20230808122708.445|INFO|C:\dist\work\trk-fullstack-test\namespaces.py with \\d: 20230808122708.445|INFO|C:\\dist\work\trk-fullstack-test\namespaces.py The warning disappears. Suppressing the warning do not seem to work: import warnings warnings.filterwarnings('ignore', category=SyntaxWarning) Any pointers on how to do this correctly? I hope I do not have to escape all Windows paths documented in triplequotes in our code.
Back in Python 3.6, using invalid escape sequences in string literals was deprecated (bpo-27364). Since then, attempting to use an invalid escape sequence has emitted a DeprecationWarning. This can often go unnoticed if you don't run Python with warnings enabled. DeprecationWarnings are silenced by default. Python 3.12 upgraded the DeprecationWarning to a SyntaxWarning. SyntaxWarnings are emitted by the compiler when the code is parsed, not when it's being run, so they cannot be ignored using a runtime warning filter. Unlike DeprecationWarnings, SyntaxWarnings are displayed by default, which is why you're seeing it now. This increase in visibility was intentional. In a future version of Python, using invalid escape sequences in string literals is planned to eventually become a hard SyntaxError. The simplest solution would be to use # comments for comments instead of string literals. Unlike string literals, comments aren't required to follow any special syntax rules. See also the discussion in Python comments: # vs. strings for more on the drawbacks of using string literals as comments. To address this warning in general, you can make the string literal a raw string literal r"...". Raw string literals do not process escape sequences. For example, the string "\n" contains a single newline character, whereas the string r"\n" contains the two characters \ and n.
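As a concrete sketch of both fixes applied to a note like the one in the question (the log line is the one quoted there; the function name is just a placeholder):
    # Option 1: a plain comment needs no escaping at all
    # 20230808122708.445|INFO|C:\dist\work\trk-fullstack-test\namespaces.py

    def example():
        # Option 2: a raw triple-quoted string keeps backslashes literally, so \d no longer warns
        r"""
        20230808122708.445|INFO|C:\dist\work\trk-fullstack-test\namespaces.py
        """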
16
31
77,555,527
2023-11-27
https://stackoverflow.com/questions/77555527/how-to-effectively-create-duplicate-rows-in-polars
I am trying to transfer my pandas code into polars but I have a difficulties with duplicating lines (I need it for my pyvista visualizations). In pandas I did the following: df = pd.DataFrame({ "key": [1, 2, 3], "value": [4, 5, 6] }) df["key"] = df["key"].apply(lambda x: 2*[x]) df = df.explode("key", ignore_index=False ) In polars I tried df = pl.DataFrame({ "key": [1, 2, 3], "value": [4, 5, 6] }) df.with_columns( (pl.col("key").map_elements(lambda x: [x]*2)) .explode() ) but it raises: ShapeError: unable to add a column of length 6 to a DataFrame of height 3 I also tried to avoid map_elements using df.with_columns( (pl.col("key").cast(pl.List(float))*2) .explode() ) but it only raises: InvalidOperationError: can only do arithmetic operations on Series of the same size; got 3 and 1 Any idea how to do this?
You can use .repeat_by() and .flatten() df = pl.DataFrame({ "key": [1, 2, 3], "value": [4, 5, 6] }) df.select(pl.all().repeat_by(2).flatten()) shape: (6, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ key ┆ value β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═══════║ β”‚ 1 ┆ 4 β”‚ β”‚ 1 ┆ 4 β”‚ β”‚ 2 ┆ 5 β”‚ β”‚ 2 ┆ 5 β”‚ β”‚ 3 ┆ 6 β”‚ β”‚ 3 ┆ 6 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
2
1
77,544,923
2023-11-24
https://stackoverflow.com/questions/77544923/aggregate-column-with-list-of-string-with-intersection-of-the-elements-with-pola
I'm trying to aggregate some rows in my dataframe with a list[str] column. For each id I need the intersection of all the lists in the group. Not sure if I'm just overthinking it but I can't provide a solution right now. Any help please? df = pl.DataFrame( {"id": [1,1,2,2,3,3], "values": [["A", "B"], ["B", "C"], ["A", "B"], ["B", "C"], ["A", "B"], ["B", "C"]] } ) Expected output shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ idx ┆ values β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ list[str] β”‚ β•žβ•β•β•β•β•β•ͺ═══════════║ β”‚ 1 ┆ ["B"] β”‚ β”‚ 2 ┆ ["B"] β”‚ β”‚ 3 ┆ ["B"] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I've tried some stuff without success df.group_by("id").agg( pl.reduce(function=lambda acc, x: acc.list.set_intersection(x), exprs=pl.col("values")) ) # shape: (3, 2) # β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ id ┆ values β”‚ # β”‚ --- ┆ --- β”‚ # β”‚ i64 ┆ list[list[str]] β”‚ # β•žβ•β•β•β•β•β•ͺ══════════════════════════║ # β”‚ 1 ┆ [["A", "B"], ["B", "C"]] β”‚ # β”‚ 3 ┆ [["A", "B"], ["B", "C"]] β”‚ # β”‚ 2 ┆ [["A", "B"], ["B", "C"]] β”‚ # β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Another one df.group_by("id").agg( pl.reduce(function=lambda acc, x: acc.list.set_intersection(x), exprs=pl.col("values").explode()) ) # shape: (3, 2) # β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ id ┆ values β”‚ # β”‚ --- ┆ --- β”‚ # β”‚ i64 ┆ list[str] β”‚ # β•žβ•β•β•β•β•β•ͺ══════════════════════║ # β”‚ 3 ┆ ["A", "B", "B", "C"] β”‚ # β”‚ 1 ┆ ["A", "B", "B", "C"] β”‚ # β”‚ 2 ┆ ["A", "B", "B", "C"] β”‚ # β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
I'm not sure if this is as simple as it may first seem. You could get rid of the lists and use "regular" Polars functionality. One way to check if a value is contained in each row of the id group is to count the number of unique (distinct) row numbers per id, values group. (df.with_columns(group_len = pl.len().over("id")) .with_row_index() .explode("values") .with_columns(n_unique = pl.col.index.n_unique().over("id", "values")) ) shape: (12, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ index ┆ id ┆ values ┆ group_len ┆ n_unique β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ u32 ┆ i64 ┆ str ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═════β•ͺ════════β•ͺ═══════════β•ͺ══════════║ β”‚ 0 ┆ 1 ┆ A ┆ 2 ┆ 1 β”‚ β”‚ 0 ┆ 1 ┆ B ┆ 2 ┆ 2 β”‚ # index = [0, 1] β”‚ 1 ┆ 1 ┆ B ┆ 2 ┆ 2 β”‚ β”‚ 1 ┆ 1 ┆ C ┆ 2 ┆ 1 β”‚ β”‚ 2 ┆ 2 ┆ A ┆ 2 ┆ 1 β”‚ β”‚ 2 ┆ 2 ┆ B ┆ 2 ┆ 2 β”‚ # index = [2, 3] β”‚ 3 ┆ 2 ┆ B ┆ 2 ┆ 2 β”‚ β”‚ 3 ┆ 2 ┆ C ┆ 2 ┆ 1 β”‚ β”‚ 4 ┆ 3 ┆ A ┆ 2 ┆ 1 β”‚ β”‚ 4 ┆ 3 ┆ B ┆ 2 ┆ 2 β”‚ # index = [4, 5] β”‚ 5 ┆ 3 ┆ B ┆ 2 ┆ 2 β”‚ β”‚ 5 ┆ 3 ┆ C ┆ 2 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ You can filter those, and rebuild the lists with .group_by() (df.with_columns(pl.len().over("id").alias("group_len")) .with_row_index() .explode("values") .filter( pl.col.index.n_unique().over("id", "values") == pl.col.group_len ) .group_by("id", maintain_order=True) .agg(pl.col.values.unique()) ) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ idx ┆ values β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ list[str] β”‚ β•žβ•β•β•β•β•β•ͺ═══════════║ β”‚ 1 ┆ ["B"] β”‚ β”‚ 2 ┆ ["B"] β”‚ β”‚ 3 ┆ ["B"] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
5
5
77,527,847
2023-11-22
https://stackoverflow.com/questions/77527847/jax-vmap-limit-memory
I'm wondering if there is a good way to limit the memory usage for Jax's VMAP function? Equivalently, to vmap in batches at a time if that makes sense? In my specific use case, I have a set of images and I'd like to calculate the affinity between each pair of images; so ~order((num_imgs)^2 * (img shape)) bytes of memory used all at once if I'm understanding vmap correctly (which gets huge since in my real example I have 10,000 100x100 images). A basic example is: def affininty_matrix_ex(n_arrays=10, img_size=5, key=jax.random.PRNGKey(0), gamma=jnp.array([0.5])): arr_of_imgs = jax.random.normal(jax.random.PRNGKey(0), (n_arrays, img_size, img_size)) arr_of_indices = jnp.arange(n_arrays) inds_1, inds_2 = zip(*combinations(arr_of_indices, 2)) v_cPA = jax.vmap(calcPairAffinity2, (0, 0, None, None), 0) affinities = v_cPA(jnp.array(inds_1), jnp.array(inds_2), arr_of_imgs, gamma) print() print(jax.make_jaxpr(v_cPA)(jnp.array(inds_1), jnp.array(inds_2), arr_of_imgs, gamma)) affinities = affinities.reshape(-1) arr = jnp.zeros((n_arrays, n_arrays), dtype=jnp.float16) arr = arr.at[jnp.triu_indices(arr.shape[0], k=1)].set(affinities) arr = arr + arr.T arr = arr + jnp.identity(n_arrays, dtype=jnp.float16) return arr def calcPairAffinity2(ind1, ind2, imgs, gamma): #Returns a jnp array of 1 float, jnp.sum adds all elements together image1, image2 = imgs[ind1], imgs[ind2] diff = jnp.sum(jnp.abs(image1 - image2)) normed_diff = diff / image1.size val = jnp.exp(-gamma*normed_diff) val = val.astype(jnp.float16) return val I suppose I could just say something like "only feed into vmap X pairs at a time, and loop through n_chunks = n_arrays/X, appending each groups results to a list" but that doesn't seem to be ideal. My understanding is vmap does not like generators, not sure if that would be an alternative way around the issue.
Edit, Aug 13 2024 As of JAX version 0.4.31, what you're asking for is possible using the batch_size argument of lax.map. For an iterable of size N, this will perform a scan with N // batch_size steps, and within each step will vmap the function over the batch. lax.map has less flexible semantics than jax.vmap, but for the simplest cases they look relatively similar. Here's an example using your calcPairAffinity function: For example import jax import jax.numpy as jnp def calcPairAffinity(ind1, ind2, imgs, gamma=0.5): image1, image2 = imgs[ind1], imgs[ind2] diff = jnp.sum(jnp.abs(image1 - image2)) normed_diff = diff / image1.size val = jnp.exp(-gamma*normed_diff) val = val.astype(jnp.float16) return val imgs = jax.random.normal(jax.random.key(0), (100, 5, 5)) inds = jnp.arange(imgs.shape[0]) inds1, inds2 = map(jnp.ravel, jnp.meshgrid(inds, inds)) def f(inds): return calcPairAffinity(*inds, imgs, 0.5) result_vmap = jax.vmap(f)((inds1, inds2)) result_batched = jax.lax.map(f, (inds1, inds2), batch_size=1000) assert jnp.allclose(result_vmap, result_batched) Original answer This is a frequent request, but unfortunately there's not yet (as of JAX version 0.4.20) any built-in utility to do chunked/batched vmap (xmap does have some functionality along these lines, but is experimental/incomplete and I wouldn't recommend relying on it). Adding chunking to vmap is tracked in https://github.com/google/jax/issues/11319, and there's some code there that does a limited version of what you have in mind. Hopefully something like what you describe will be possible with JAX's built-in vmap soon. In the meantime, you might think about applying vmap to chunks manually in the way you describe in your question.
4
3
77,549,493
2023-11-25
https://stackoverflow.com/questions/77549493/modulenotfounderror-no-module-named-jupyter-server-contents
I got this error: Traceback (most recent call last): File "C:\ProgramData\anaconda3\Lib\site-packages\notebook\traittypes.py", line 235, in _resolve_classes klass = self._resolve_string(klass) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Cristian Valiante\AppData\Roaming\Python\Python311\site-packages\traitlets\traitlets.py", line 2025, in _resolve_string return import_item(string) ^^^^^^^^^^^^^^^^^^^ File "C:\Users\Cristian Valiante\AppData\Roaming\Python\Python311\site-packages\traitlets\utils\importstring.py", line 31, in import_item module = __import__(package, fromlist=[obj]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'jupyter_server.contents' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\ProgramData\anaconda3\Scripts\jupyter-notebook-script.py", line 10, in sys.exit(main()) ^^^^^^ File "C:\Users\Cristian Valiante\AppData\Roaming\Python\Python311\site-packages\jupyter_core\application.py", line 280, in launch_instance super().launch_instance(argv=argv, **kwargs) File "C:\Users\Cristian Valiante\AppData\Roaming\Python\Python311\site-packages\traitlets\config\application.py", line 1051, in launch_instance app = cls.instance(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Cristian Valiante\AppData\Roaming\Python\Python311\site-packages\traitlets\config\configurable.py", line 575, in instance inst = cls(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Cristian Valiante\AppData\Roaming\Python\Python311\site-packages\traitlets\traitlets.py", line 1311, in __new__ inst.setup_instance(*args, **kwargs) File "C:\Users\Cristian Valiante\AppData\Roaming\Python\Python311\site-packages\traitlets\traitlets.py", line 1354, in setup_instance super(HasTraits, self).setup_instance(*args, **kwargs) File "C:\Users\Cristian Valiante\AppData\Roaming\Python\Python311\site-packages\traitlets\traitlets.py", line 1330, in setup_instance init(self) File "C:\ProgramData\anaconda3\Lib\site-packages\notebook\traittypes.py", line 226, in instance_init self._resolve_classes() File "C:\ProgramData\anaconda3\Lib\site-packages\notebook\traittypes.py", line 238, in _resolve_classes warn(f"{klass} is not importable. Is it installed?", ImportWarning) TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' Thank you for help ! :) I have tried installing and uninstalling and it didnt work.
Edit: https://github.com/jupyter/notebook/issues/7048#issuecomment-1724637960 https://github.com/jupyter/notebook/issues/7048#issuecomment-1720815902 pip install notebook==6.5.6 Or as @West commented, use: pip install --upgrade --no-cache-dir notebook==6.* Old Answer : The Workaround: Uninstall the Recent Problematic Release (v5.10.0) and Install the Prior Version (v5.9.0). Command Line: pip uninstall traitlets pip install traitlets==5.9.0 Git links: https://github.com/microsoft/azuredatastudio/issues/24436#issuecomment-1723328100 https://github.com/jupyter/notebook/issues/7048
17
30
77,553,886
2023-11-26
https://stackoverflow.com/questions/77553886/pytorch-distributed-from-two-ec2-instances-hangs
# env_vars.sh on rank 0 machine #!/bin/bash export MASTER_PORT=23456 export MASTER_ADDR=... # same as below, private ip of machine 0 export WORLD_SIZE=2 export GLOO_SOCKET_IFNAME=enX0 export RANK=0 # env_vars.sh on rank 1 machine #!/bin/bash export MASTER_PORT=23456 export MASTER_ADDR=... # same as above export WORLD_SIZE=2 export GLOO_SOCKET_IFNAME=enX0 export RANK=1 # on rank 0 machine $ ifconfig enX0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001 inet ... netmask 255.255.240.0 broadcast ... inet6 ... prefixlen 64 scopeid 0x20<link> ether ... txqueuelen 1000 (Ethernet) RX packets 543929 bytes 577263126 (550.5 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 203942 bytes 21681067 (20.6 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 12 bytes 1020 (1020.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 12 bytes 1020 (1020.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 $ conda activate pytorch_env $ . env_vars.sh $ python >>> import torch.distributed >>> torch.distributed.init_process_group('gloo') # Do the same on rank 0 machine After 30 seconds or so, machine 0 outputs the following, and machine 1 just continues to hang. [E ProcessGroupGloo.cpp:138] Gloo connectFullMesh failed with [/opt/conda/conda-bld/pytorch_1699449045860/work/third_party/gloo/gloo/transport/tcp/pair.cc:144] no error Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ec2-user/miniconda3/envs/pytorch_env/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 74, in wrapper func_return = func(*args, **kwargs) File "/home/ec2-user/miniconda3/envs/pytorch_env/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1155, in init_process_group default_pg, _ = _new_process_group_helper( File "/home/ec2-user/miniconda3/envs/pytorch_env/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1293, in _new_process_group_helper backend_class = ProcessGroupGloo(backend_prefix_store, group_rank, group_size, timeout=timeout) RuntimeError: Gloo connectFullMesh failed with [/opt/conda/conda-bld/pytorch_1699449045860/work/third_party/gloo/gloo/transport/tcp/pair.cc:144] no error I can connect to the rank 0 machine from the rank 1 machine: # rank 0 machine nc -lk 23456 # rank 1 machine telnet … 23456 # use private ip address of rank 0 machine Trying ... Connected to … Escape character is '^]'. ping # rank 0 machine ping If I run all the same commands from two shells of the rank 0 machine (modifying one of them with export RANK=1), init_process_group completes execution as expected. A user posted here about the same error, which they said they solved by resetting GLOO_SOCKET_IFNAME and TP_SOCKET_IFNAME. Trying to do a similar thing on my machine didn't succeed.
I solved this problem by enabling All Traffic between my nodes. Initially, I was just allowing the MASTER_PORT and that was not enough.
3
1
77,566,275
2023-11-28
https://stackoverflow.com/questions/77566275/how-to-use-sqlalchemys-on-conflict-do-update-returning-to-return-updated-values
I am trying to do an upsert statement and have the query return the updated values. On inserts, it works fine because there is no data but when there is an update, the query returns the old data that is getting updated. This is my upsert statement- def upsert(model, data, constraints): insert_stmt: Insert = insert(model).values(data) do_update_stmt = insert_stmt.on_conflict_do_update( index_elements=constraints, set_=data, ) return do_update_stmt I execute it and get the values like this- upsert_query = upsert( model, new_data, ["constraint"], ) try: upsert_response = db.session.execute( upsert_query.returning(model) ) updated_model = upsert_response.fetchone()[0] The issue is that updated_model here returns the old data. I can commit the query and it would give me the updated data as the model is changed but in my specific use case, I don't want to commit until more code after the above is ran and if the following code fails to execute I want to rollback. Unfortunately, I can't seem to get the rollback to happen after I commit. My question is; is there a way to get the updated data from the response here instead on updates? If not, how can I rollback this commit? db is a SqlAlchemy() instance that is shared across the app.
When returning ORM objects you have to populate existing objects otherwise they will not be updated. There is an example here: using-returning-with-upsert-statements Here is another example I made The key line is res = session.execute(q, execution_options={"populate_existing": True}).fetchone()[0] import sys from sqlalchemy import ( create_engine, Integer, String, ) from sqlalchemy.orm import Session, declarative_base, mapped_column from sqlalchemy.sql import select from sqlalchemy.dialects import postgresql username, password, db = sys.argv[1:4] engine = create_engine(f"postgresql+psycopg2://{username}:{password}@/{db}", echo=True) Base = declarative_base() Base.metadata.create_all(engine) class User(Base): __tablename__ = "users" id = mapped_column(Integer, primary_key=True) name = mapped_column(String, unique=True) beverage = mapped_column(String, nullable=False) Base.metadata.create_all(engine) with Session(engine) as session: u1 = User(id=1, name="user1", beverage="coffee") u2 = User(id=2, name="user2", beverage="tea") session.add_all([u1, u2]) session.commit() def upsert(model, insert_data, update_data, index_elements): insert_stmt = postgresql.insert(model).values(insert_data) do_update_stmt = insert_stmt.on_conflict_do_update( index_elements=index_elements, set_=update_data, ) return do_update_stmt with Session(engine) as session: u1 = session.scalars(select(User).where(User.id == 1)).first() assert u1.beverage != 'water', "This should be the old value." data = dict(id=1, name="user1", beverage="water") q = upsert(User, data, dict((k, v) for (k, v) in data.items() if k != "id"), ["id"]).returning(User) res = session.execute(q, execution_options={"populate_existing": True}).fetchone()[0] assert res.beverage == 'water', "This should be the new value." assert u1.beverage == 'water', "Our pre-existing object should be the same object but check anyways." session.commit() # Now read it back and check again after commit. u1 = session.scalars(select(User).where(User.id == 1)).first() print (u1.beverage == 'water')
2
5
77,555,312
2023-11-27
https://stackoverflow.com/questions/77555312/langchain-chromadb-why-does-vectorstore-return-so-many-duplicates
import os from langchain.llms import OpenAI import bs4 import langchain from langchain import hub from langchain.document_loaders import UnstructuredFileLoader from langchain.embeddings import OpenAIEmbeddings from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.vectorstores import Chroma os.environ["OPENAI_API_KEY"] = "KEY" loader = UnstructuredFileLoader( 'path_to_file' ) docs = loader.load() text_splitter = RecursiveCharacterTextSplitter( chunk_size=1000, chunk_overlap=200, add_start_index=True ) all_splits = text_splitter.split_documents(docs) vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings()) retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 6}) retrieved_docs = retriever.get_relevant_documents( "What is X?" ) This returns: [Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}), Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}), Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}), Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}), Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}), Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932})] Which is all seemingly the same document. When I first ran this code in Google Colab/Jupyter Notebook, it returned different documents...as I ran it more, it started returning the same documents. Makes me feel like this is a database issue, where the same entry is being inserted into the database with each run. How do I return 6 different unique documents?
The issue is here: Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings()). Every time you execute the file, you are inserting the same documents into the database. You could comment out that part of the code if you are inserting from the same file, or you could detect the similar vectors using EmbeddingsRedundantFilter, a filter that drops redundant documents by comparing their embeddings.
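As an illustration of the "don't re-insert on every run" idea (a sketch, not part of the original answer), one common pattern is to persist the Chroma collection to disk on the first run and reload it afterwards. The persist_directory path is an assumption, and all_splits is the list produced by the question's text splitter:

import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

PERSIST_DIR = "./chroma_db"  # hypothetical location for the persisted collection
embeddings = OpenAIEmbeddings()

if os.path.exists(PERSIST_DIR):
    # Later runs: reload the stored vectors instead of embedding and inserting again
    vectorstore = Chroma(persist_directory=PERSIST_DIR, embedding_function=embeddings)
else:
    # First run only: embed the splits and write them to disk
    vectorstore = Chroma.from_documents(
        documents=all_splits,
        embedding=embeddings,
        persist_directory=PERSIST_DIR,
    )

retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 6})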
7
11
77,546,864
2023-11-25
https://stackoverflow.com/questions/77546864/connexion-3-0-2-modulenotfounderror-please-install-connexion-using-the-flask
Problem I use connexion with Flask. Today I upgraded connexion from 2.14.2 to 3.0.2 and see ModuleNotFoundError: Please install connexion using the 'flask' extra. https://connexion.readthedocs.io/en/latest/quickstart.html I checked the official documentation, which says "To leverage the FlaskApp, make sure you install connexion using the flask extra." Question How can I install connexion using the flask extra? The documentation says the command is pip install connexion[<extra>], but I see an error message "no matches found: connexion[flask]". % pip install connexion[flask] zsh: no matches found: connexion[flask] Environment Python 3.12.0 Flask 3.0.0 Connexion 3.0.2
https://github.com/spec-first/connexion/issues/779#issuecomment-441081238 I found that this error was caused by zsh, which treats the square brackets as a glob pattern. pip install "connexion[flask]" worked. (Double quotes are needed.)
7
12
77,542,619
2023-11-24
https://stackoverflow.com/questions/77542619/what-is-the-exceptiontable-in-the-output-of-dis
In python3.13, when I try to disassemble [i for i in range(10)], the result is as below: >>> import dis >>> >>> dis.dis('[i for i in range(10)]') 0 RESUME 0 1 LOAD_NAME 0 (range) PUSH_NULL LOAD_CONST 0 (10) CALL 1 GET_ITER LOAD_FAST_AND_CLEAR 0 (i) SWAP 2 L1: BUILD_LIST 0 SWAP 2 L2: FOR_ITER 4 (to L3) STORE_FAST_LOAD_FAST 0 (i, i) LIST_APPEND 2 JUMP_BACKWARD 6 (to L2) L3: END_FOR L4: SWAP 2 STORE_FAST 0 (i) RETURN_VALUE -- L5: SWAP 2 POP_TOP 1 SWAP 2 STORE_FAST 0 (i) RERAISE 0 ExceptionTable: L1 to L4 -> L5 [2] At the end of the output, there's something ExceptionTable. It does not exist in the previous versions of python. Python 3.10.0b1 (default, May 4 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import dis >>> >>> dis.dis('[i for i in range(10)]') 1 0 LOAD_CONST 0 (<code object <listcomp> at 0x7f3d412503a0, file "<dis>", line 1>) 2 LOAD_CONST 1 ('<listcomp>') 4 MAKE_FUNCTION 0 6 LOAD_NAME 0 (range) 8 LOAD_CONST 2 (10) 10 CALL_FUNCTION 1 12 GET_ITER 14 CALL_FUNCTION 1 16 RETURN_VALUE Disassembly of <code object <listcomp> at 0x7f3d412503a0, file "<dis>", line 1>: 1 0 BUILD_LIST 0 2 LOAD_FAST 0 (.0) >> 4 FOR_ITER 4 (to 14) 6 STORE_FAST 1 (i) 8 LOAD_FAST 1 (i) 10 LIST_APPEND 2 12 JUMP_ABSOLUTE 2 (to 4) >> 14 RETURN_VALUE I can't understand what that means, also I couldn't find any document for this.
ExceptionTable determines where to jump to when an exception is raised(it was implemented in python-3.11). Prior version uses separate opcodes to handle this. The advantage of this approach is that entering and leaving a try block normally does not execute any code, making execution faster. To access this table, you can do so by accessing the co_exceptiontable attribute of the code object. The ExceptionTable representation returned by dis is the result of parsing this table. >>> def foo(): ... c = 1 + 2 ... return c ... >>> >>> foo.__code__.co_exceptiontable b'' >>> def foo(): ... try: ... 1/0 ... except: ... pass ... >>> foo.__code__.co_exceptiontable b'\x82\x05\x08\x00\x88\x02\x0c\x03' >>> >>> from dis import _parse_exception_table >>> >>> _parse_exception_table(foo.__code__) [_ExceptionTableEntry(start=4, end=14, target=16, depth=0, lasti=False), _ExceptionTableEntry(start=16, end=20, target=24, depth=1, lasti=True)] A detailed explanation about ExceptionTable is available here. Quoting from the same link: python-3.11 uses what is known as "zero-cost" exception handling. Prior to python-3.11, exceptions were handled by a runtime stack of "blocks". In zero-cost exception handling, the cost of supporting exceptions is minimized. In the common case (where no exception is raised) the cost is reduced to zero (or close to zero). The cost of raising an exception is increased, but not by much. The following code: def f(): try: g(0) except: return "fail" compiles as follows in 3.10: 2 0 SETUP_FINALLY 7 (to 16) 3 2 LOAD_GLOBAL 0 (g) 4 LOAD_CONST 1 (0) 6 CALL_NO_KW 1 8 POP_TOP 10 POP_BLOCK 12 LOAD_CONST 0 (None) 14 RETURN_VALUE 4 >> 16 POP_TOP 18 POP_TOP 20 POP_TOP 5 22 POP_EXCEPT 24 LOAD_CONST 3 ('fail') 26 RETURN_VALUE Note the explicit instructions to push and pop from the "block" stack: SETUP_FINALLY and POP_BLOCK. In 3.11, the SETUP_FINALLY and POP_BLOCK are eliminated, replaced with a table to determine where to jump to when an exception is raised. 1 0 RESUME 0 2 2 NOP 3 4 LOAD_GLOBAL 1 (g + NULL) 16 LOAD_CONST 1 (0) 18 PRECALL 1 22 CALL 1 32 POP_TOP 34 LOAD_CONST 0 (None) 36 RETURN_VALUE >> 38 PUSH_EXC_INFO 4 40 POP_TOP 5 42 POP_EXCEPT 44 LOAD_CONST 2 ('fail') 46 RETURN_VALUE >> 48 COPY 3 50 POP_EXCEPT 52 RERAISE 1 ExceptionTable: 4 to 32 -> 38 [0] 38 to 40 -> 48 [1] lasti (Note this code is from python-3.11, later versions may have slightly different bytecode.) If an instruction raises an exception then its offset is used to find the target to jump to. For example, the CALL at offset 22, falls into the range 4 to 32. So, if g() raises an exception, then control jumps to offset 38.
3
5
77,567,521
2023-11-28
https://stackoverflow.com/questions/77567521/optimize-computation-of-similarity-scores-by-executing-native-polars-command-ins
Disclaimer (1): This question is supportive to this SO. After a request from two users to elaborate on my case. Disclaimer (2) - added 29/11: I have seen two solutions so far (proposed in this SO and the supportive one), that utilize the explode() functionality. Based on some benchmarks I did on the whole (~3m rows data) the RAM literally explodes, thus I will test the function on a sample of the dataset and if it works I will accept the solutions of explode() method for those who might experiment on smaller tables. The input dataset (~3m rows) is the ratings.csv from the ml-latest dataset of 80_000 IMDb movies and respective ratings from 330_000 users (you may download the CSV file from here - 891mb). I load the dataset using polars like movie_ratings = pl.read_csv(os.path.join(application_path + data_directory, "ratings.csv")), application_path and data_directory is a parent path on my local server. Having read the dataset my goal is to generate the cosine similarity of a user between all the other users. To do so, first I have to transform the ratings table (~3m rows) to a table with 1 row per user. Thus, I run the following query ## 1st computation bottleneck using UDF functions (2.5minutes for 250_000 rows) users_metadata = movie_ratings.filter( (pl.col("userId") != input_id) #input_id is a random userId. I prefer to make my tests using userId '1' so input_id=1 in this case. ).group_by("userId")\ .agg( pl.col("movieId").unique().alias("user_movies"), pl.col("rating").alias("user_ratings") )\ .with_columns( pl.col("user_movies").map_elements( lambda row: sorted( list(set(row).intersection(set(user_rated_movies))) ), return_dtype=pl.List(pl.Int64) ).alias("common_movies") )\ .with_columns( pl.col("common_movies").map_elements( lambda row: len(row), return_dtype=pl.Int64 ).alias("common_movies_frequency") ) similar_users = ( users_metadata.filter( (pl.col("common_movies_frequency").le(len(user_rated_movies))) & (pl.col("common_movies_frequency").gt(0)) # we don't want the users that don't have seen any movies from the ones seen/rated by the target user. ) .sort("common_movies_frequency", descending=True) ) ## 2nd computation bottleneck using UDF functions similar_users = ( similar_users.with_columns( pl.struct(pl.all()).map_elements( get_common_movie_ratings, #asked on StackOverflow return_dtype=pl.List(pl.Float64), strategy="threading" ).alias("common_movie_ratings") ).with_columns( pl.struct(["common_movies"]).map_elements( lambda row: get_target_movie_ratings(row, user_rated_movies, user_ratings), return_dtype=pl.List(pl.Float64), strategy="threading" ).alias("target_user_common_movie_ratings") ).with_columns( pl.struct(["common_movie_ratings","target_user_common_movie_ratings"]).map_elements( lambda row: compute_cosine(row), return_dtype=pl.Float64, strategy="threading" ).alias("similarity_score") ) ) The code snippet above groups the table by userId and computes some important metadata about them. Specifically, user_movies, user_ratings per user common_movies = intersection of the movies seen by the user that are the same as seen by the input_id user (thus user 1). Movies seen by the user 1 are basically user_rated_movies = movie_ratings.filter(pl.col("userId") == input_id).select("movieId").to_numpy().ravel() common_movies_frequency = The length of the columncommon_movies per user. NOT a fixed length per user. 
common_movie_ratings = The result of the function I asked here target_user_common_movie_ratings = The ratings of the target user (user1) that match the indexes of the common movies with each user. similarity_score = The cosine similarity score. Screenshot of the table (don't give attention to column potential recommendations) Finally, I filter the table users_metadata by all the users with less than or equal common_movies_frequency to the 62 (len(user_rated_movies)) movies seen by user1. Those are a total of 250_000 users. This table is the input dataframe for the UDF function I asked in this question. Using this dataframe (~250_000 users) I want to calculate the cosine similarity of each user with user 1. To do so, I want to compare their rating similarity. So on the movies commonly rated by each user, compute the cosine similarity among two arrays of ratings. Below are the three UDF functions I use to support my functionality. def get_common_movie_ratings(row) -> pl.List(pl.Float64): common_movies = row['common_movies'] user_ratings = row['user_ratings'] ratings_for_common_movies = [user_ratings[list(row['user_movies']).index(movie)] for movie in common_movies] return ratings_for_common_movies def get_target_movie_ratings(row, target_user_movies:np.ndarray, target_user_ratings:np.ndarray) -> pl.List(pl.Float64): common_movies = row['common_movies'] target_user_common_ratings = [target_user_ratings[list(target_user_movies).index(movie)] for movie in common_movies] return target_user_common_ratings def compute_cosine(row)->pl.Float64: array1 = row["common_movie_ratings"] array2 = row["target_user_common_movie_ratings"] magnitude1 = norm(array1) magnitude2 = norm(array2) if magnitude1 != 0 or magnitude2 != 0: #avoid division with 0 norms/magnitudes score: float = np.dot(array1, array2) / (norm(array1) * norm(array2)) else: score: float = 0.0 return score Benchmarks Total execution time for 1 user is ~4 minutes. If I have to compute this over an iteration per user (1 dataframe per user) that will be approximately4 minutess * 330_000 users. 3-5Gb of RAM while computing the polars df for 1 user. The main question is how can I transform those 3 UDF functions into native polars commands. logs from a custom logger I made 2023-11-29 13:40:24 - INFO - Computed potential similar user metadata for 254188 users in: 0:02:15.586497 2023-11-29 13:40:51 - INFO - Computed similarity scores for 194943 users in: 0:00:27.472388 We can conclude that the main bottleneck of the code is when creating the user_metadata table.
CSV pl.read_csv loads everything into memory. pl.scan_csv() returns a LazyFrame instead. Parquet faster to read/write pl.scan_csv("imdb.csv").sink_parquet("imdb.parquet") imdb.csv = 891mb / imdb.parquet = 202mb Example: In the hopes of making things simpler for replicating results, I've filtered the dataset pl.col("userId").is_between(1, 3) and removed the timestamp column: movie_ratings = pl.read_csv( b'userId,movieId,rating\n1,1,4.0\n1,110,4.0\n1,158,4.0\n1,260,4.5\n1,356,5.0\n1,381,3.5\n1,596,4.0\n1,1036,5.0\n1,1049,' b'3.0\n1,1066,4.0\n1,1196,3.5\n1,1200,3.5\n1,1210,4.5\n1,1214,4.0\n1,1291,5.0\n1,1293,2.0\n1,1376,3.0\n1,1396,3.0\n1,153' b'7,4.0\n1,1909,3.0\n1,1959,4.0\n1,1960,4.0\n1,2028,5.0\n1,2085,3.5\n1,2116,4.0\n1,2336,3.5\n1,2571,2.5\n1,2671,4.0\n1,2' b'762,5.0\n1,2804,3.0\n1,2908,4.0\n1,3363,3.0\n1,3578,5.0\n1,4246,4.0\n1,4306,4.0\n1,4699,3.5\n1,4886,5.0\n1,4896,4.0\n1' b',4993,4.0\n1,4995,5.0\n1,5952,4.5\n1,6539,4.0\n1,7064,3.5\n1,7122,4.0\n1,7139,3.0\n1,7153,5.0\n1,7162,4.0\n1,7366,3.5' b'\n1,7706,3.5\n1,8132,5.0\n1,8533,5.0\n1,8644,3.5\n1,8961,4.5\n1,8969,4.0\n1,8981,3.5\n1,33166,5.0\n1,33794,3.0\n1,40629' b',4.5\n1,49647,5.0\n1,52458,5.0\n1,53996,5.0\n1,54259,4.0\n2,1,5.0\n2,2,3.0\n2,6,4.0\n2,10,3.0\n2,11,3.0\n2,17,5.0\n2,1' b'9,3.0\n2,21,5.0\n2,25,3.0\n2,31,3.0\n2,34,5.0\n2,36,5.0\n2,39,3.0\n2,47,5.0\n2,48,2.0\n2,50,4.0\n2,52,3.0\n2,58,3.0\n2' b',95,2.0\n2,110,5.0\n2,111,3.0\n2,141,5.0\n2,150,5.0\n2,151,5.0\n2,153,3.0\n2,158,3.0\n2,160,1.0\n2,161,3.0\n2,165,4.0' b'\n2,168,3.0\n2,172,2.0\n2,173,2.0\n2,185,3.0\n2,186,3.0\n2,204,3.0\n2,208,3.0\n2,224,3.0\n2,225,3.0\n2,231,4.0\n2,235,3' b'.0\n2,236,2.0\n2,252,3.0\n2,253,2.0\n2,256,3.0\n2,261,4.0\n2,265,2.0\n2,266,4.0\n2,282,1.0\n2,288,1.0\n2,292,3.0\n2,29' b'3,3.0\n2,296,5.0\n2,300,4.0\n2,315,3.0\n2,317,3.0\n2,318,5.0\n2,333,3.0\n2,337,3.0\n2,339,5.0\n2,344,3.0\n2,349,4.0\n2' b',350,3.0\n2,356,5.0\n2,357,5.0\n2,364,4.0\n2,367,4.0\n2,377,4.0\n2,380,4.0\n2,420,2.0\n2,432,3.0\n2,434,4.0\n2,440,3.0' b'\n2,442,3.0\n2,454,3.0\n2,457,5.0\n2,480,3.0\n2,500,4.0\n2,509,3.0\n2,527,5.0\n2,539,5.0\n2,553,3.0\n2,586,4.0\n2,587,' b'4.0\n2,588,4.0\n2,589,4.0\n2,590,5.0\n2,592,3.0\n2,593,5.0\n2,595,4.0\n2,597,5.0\n2,786,4.0\n3,296,5.0\n3,318,5.0\n3,8' b'58,5.0\n3,2959,5.0\n3,3114,5.0\n3,3751,5.0\n3,4886,5.0\n3,6377,5.0\n3,8961,5.0\n3,60069,5.0\n3,68954,5.0\n3,69844,5.0' b'\n3,74458,5.0\n3,76093,5.0\n3,79132,5.0\n3,81834,5.0\n3,88125,5.0\n3,99114,5.0\n3,109487,5.0\n3,112556,5.0\n3,115617,5.' 
b'0\n3,115713,4.0\n3,116797,5.0\n3,119145,5.0\n3,134853,5.0\n3,152081,5.0\n3,176101,5.0\n3,177765,5.0\n3,185029,5.0\n3,1' b'87593,3.0\n' ) We will assume input_id == 1 One possible approach for gathering all the needed information: # Finding the intersection first seems to use ~35% less RAM # than the previous join / anti-join approach intersection = ( movie_ratings .filter( (pl.col("userId") == 1) | ((pl.col("userId") != 1) & (pl.col("movieId").is_in(pl.col("movieId").filter(pl.col("userId") == 1)))) ) ) (intersection.filter(pl.col("userId") == 1) .join( intersection.filter(pl.col("userId") != 1), on = "movieId" ) .group_by(pl.col("userId_right").alias("other_user")) .agg( target_user = pl.first("userId"), common_movies = "movieId", common_movies_frequency = pl.count(), target_user_ratings = "rating", other_user_ratings = "rating_right", ) ) shape: (2, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ other_user ┆ target_user ┆ common_movies ┆ common_movies_frequency ┆ target_user_ratings ┆ other_user_ratings β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ list[i64] ┆ u32 ┆ list[f64] ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════β•ͺ════════════════════β•ͺ═════════════════════════β•ͺ══════════════════════β•ͺ══════════════════════║ β”‚ 3 ┆ 1 ┆ [4886, 8961] ┆ 2 ┆ [5.0, 4.5] ┆ [5.0, 5.0] β”‚ β”‚ 2 ┆ 1 ┆ [1, 110, 158, 356] ┆ 4 ┆ [4.0, 4.0, 4.0, 5.0] ┆ [5.0, 5.0, 3.0, 5.0] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Lazy API There may be a better strategy to parallelize the work, but a baseline attempt could simply loop through each userID movie_ratings = pl.scan_parquet("imdb.parquet") user_ids = movie_ratings.select(pl.col("userId").unique()).collect().to_series() for user_id in user_ids: result = ( movie_ratings .filter(pl.col("userId") == user_id) ... ) print(result.collect()) DuckDB I was curious, so decided to check duckdb for a comparison. 
import duckdb duckdb.sql(""" with db as (from movie_ratings) from db target, db other select target.userId target_user, other.userId other_user, list(other.movieId) common_movies, count(other.movieId) common_movies_frequency, list(target.rating) target_user_ratings, list(other.rating) other_user_ratings, where target_user = 1 and other_user != 1 and target.movieId = other.movieId group by target_user, other_user """).pl() shape: (2, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ target_user ┆ other_user ┆ common_movies ┆ common_movies_frequency ┆ target_user_ratings ┆ other_user_ratings β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ list[i64] ┆ i64 ┆ list[f64] ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ════════════════════β•ͺ═════════════════════════β•ͺ══════════════════════β•ͺ══════════════════════║ β”‚ 1 ┆ 3 ┆ [4886, 8961] ┆ 2 ┆ [5.0, 4.5] ┆ [5.0, 5.0] β”‚ β”‚ 1 ┆ 2 ┆ [1, 110, 356, 158] ┆ 4 ┆ [4.0, 4.0, 5.0, 4.0] ┆ [5.0, 5.0, 5.0, 3.0] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ RAM Usage Running both examples against the full dataset (runtime is basically the same) I get: import rich.filesize print("duckdb:", rich.filesize.decimal(223232000)) print("polars:", rich.filesize.decimal(1772072960)) duckdb: 223.2 MB polars: 1.8 GB So it seems there is potential room for improvement on the Polars side.
5
5
77,542,502
2023-11-24
https://stackoverflow.com/questions/77542502/incorrect-image-matching-results-despite-differences-human-fingerprints
I want to use Python to compare two images to check whether they are the same or not. I want to use this for fingerprint functionality in a Django app to validate whether the provided fingerprint matches the one stored in the database. I have decided to use OpenCV for this purpose, utilizing ORB_create with detectAndCompute and providing the provided fingerprint to the BFMatcher. However, with the code below, when attempting to match the images, it consistently returns True, even though the provided images are not the same, along with the print statement "Images are the same". def compared_fingerprint(image1, image2): finger1 = cv2.imread(image1, cv2.IMREAD_GRAYSCALE) finger2 = cv2.imread(image2, cv2.IMREAD_GRAYSCALE) orb = cv2.ORB_create() keypoints1, descriptors1 = orb.detectAndCompute(finger1, None) keypoints2, descriptors2 = orb.detectAndCompute(finger2, None) bf = cv2.BFMatcher() matches = bf.match(descriptors1, descriptors2) threshold = 0.7 similar = len(matches) > threshold * len(keypoints1) if similar: print('Images are the same') return similar else: print('Images are not the same') return similar result = compared_fingerprint('c.jpg', 'a.jpg') print(result) With the provided images, the function is supposed to return the second statement, since they are not the same. I thought it was the threshold assigned to 0.7, and when I increased the threshold to 1.7, it returned the second statement saying: "Images are not the same" False. But when I make the two images the same, I mean: result = compared_fingerprint('a.jpg', 'a.jpg'), it still returns "Images are not the same" False. a.jpg c.jpg
Fingerprints are matched using features specific to fingerprints. Fingerprints are mostly just ridges running in parallel, so that's boring. The interesting and identifying features are swirls (ridges curve around), ridge ends, short "island" segments (and their lengths), forks, ... https://en.wikipedia.org/wiki/Fingerprint#Fingerprint_verification Generic local feature descriptors may work but are not specific to fingerprints. You'll need to do the literature research to learn what feature descriptors have performed well on fingerprints. Features then need to match in a spatially consistent manner. Rotation and translation don't matter. Some stretching and even less shearing are tolerable. Perspective components are extremely unlikely. If you estimate a transform, you'll need to assess how much it contorts.
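To make the "spatially consistent" point concrete, here is a hedged sketch — not a real fingerprint matcher, just generic ORB matches filtered with a ratio test and then checked against a RANSAC-estimated rotation/translation model, so only geometrically consistent matches are counted. The feature count, ratio, and reprojection threshold are assumptions:

import cv2
import numpy as np

def consistent_match_count(path1, path2, ratio=0.75):
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0
    # Ratio test: keep a match only if it is clearly better than the second-best candidate
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2) if m.distance < ratio * n.distance]
    if len(good) < 4:
        return 0
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Rotation + translation (+ uniform scale) model; RANSAC discards spatially inconsistent matches
    _, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC, ransacReprojThreshold=5.0)
    return 0 if inliers is None else int(inliers.sum())

# A decision would then compare the inlier count against a tuned cutoff,
# e.g. consistent_match_count('a.jpg', 'c.jpg') > 30  (the cutoff value is an assumption)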
2
4
77,544,825
2023-11-24
https://stackoverflow.com/questions/77544825/useless-parent-or-super-delegation-in-method-init
I'm working through the book Python Crash Course 2nd Edition, and I did what they outlined, but they did something that runs a warning in VS code (Useless parent or super() delegation in method '__init__'). They don't go over how to fix it, and I don't think it does anything (please tell me whether it does or not), but I'd like to not have the message, and I don't want to have the same thing happen in the future. Here is the code for the child class: class ElectricCar(Car): """A child class of the parent Car, specific to electric cars.""" def __init__(self, make, model, year): """Initialize attributes of the parent class.""" super().__init__(make, model, year) Here is the code for the parent class, if it's needed: class Car: """A class to represent a car.""" def __init__(self, make, model, year): """Initialize attributes to describe a car.""" self.make = make self.model = model self.year = year self.odometer_reading = 0 def get_descriptive_name(self): """Return a neatly formatted descriptive name.""" long_name = f"{self.year} {self.make} {self.model}" return long_name.title() def read_odometer(self): """Print a statement showing the car's mileage.""" print(f"This car has {self.odometer_reading} miles on it.") Thanks for any and all help. P.S. I apologise if the code blocks don't look like code. I clicked on the 'Code block' button when the code was selected, but while viewing, it didn't have syntax highlighting (please tell me whether I did it correctly or not, and if not, how to do it next time).
Let's say that you didn't add __init__ to your subclass at all. The parent __init__ has not been overridden and will be called when ElectricCar(...) is instantiated. Your ElectricCar.__init__ doesn't do anything that python wouldn't do anyway. You only need your own __init__ if you plan to do something different that the parent class. Just delete your __init__ method to make the warning go away.
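For illustration, a minimal sketch of both cases, assuming the Car class from the question (HybridCar and its battery_size parameter are made up for contrast):

class ElectricCar(Car):
    """A child class of the parent Car, specific to electric cars."""
    # No __init__ needed: Car.__init__ runs automatically when ElectricCar(...) is created.


class HybridCar(Car):
    """Hypothetical subclass that genuinely needs its own __init__."""

    def __init__(self, make, model, year, battery_size):
        super().__init__(make, model, year)  # delegate the shared attributes to Car
        self.battery_size = battery_size     # then do something the parent does not


my_car = ElectricCar("generic make", "generic model", 2023)
print(my_car.get_descriptive_name())  # the inherited behaviour still works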
3
4
77,567,405
2023-11-28
https://stackoverflow.com/questions/77567405/how-to-convert-bytes-to-a-float32-array-in-go
I am writing an array of float32 numbers from a Python script to an Elasticache Redis cluster in bytes format, then reading the bytes (as a string) from Elasticache in a Go script. How do I convert the bytes-as-string back to the original float32 array in the Go script? Python example: import numpy as np import redis a = np.array([1.1, 2.2, 3.3], dtype=np.float32) a_bytes = a.tobytes(order="C") #I have also tried order="F" with no luck print(a_bytes) #Output: b'\xcd\xcc\x8c?\xcd\xcc\x0c@33S@' redis_client = redis.cluster.RedisCluster(host=<elasticache config endpoint>, port=6379) redis_client.mset_nonatomic({"key1": a_bytes}) Here's an example of what I've tried in Go (playground) package main import ( "fmt" "math" "strconv" ) func main() { // aBytesStr is an example value retrieved from Elasticache // aBytesStr is type string, not raw bytes var aBytesStr string = "\xcd\xcc\x8c?\xcd\xcc\x0c@33S@" aHex := fmt.Sprintf("%X", aBytesStr) fmt.Println(aHex) // Output: CDCC8C3FCDCC0C4033335340 var aArr [3]float32 for i := 0; i < 3; i++ { aHex1 := aHex[i*8 : i*8+8] aParsed, err := strconv.ParseUint(aHex1, 16, 32) if err != nil { return } aArr[i] = math.Float32frombits(uint32(aParsed)) } fmt.Println(aArr) // Expected output: [1.1 2.2 3.3] // Actual output [-4.289679e+08 -4.2791936e+08 4.17524e-08] }
The example code you are using is to "convert hex, represented as strings"; you have the raw bytes (I think based on aHex: CDCC8C3FCDCC0C4033335340) so its simpler to convert directly (while you could convert the bytes to a hex string, and then convert that, doing so just adds unnecessary work/complexity). Drawing from this answer we get (playground): func GetFloatArray(aBytes []byte) []float32 { aArr := make([]float32, 3) for i := 0; i < 3; i++ { aArr[i] = BytesFloat32(aBytes[i*4:]) } return aArr } func BytesFloat32(bytes []byte) float32 { bits := binary.LittleEndian.Uint32(bytes) float := math.Float32frombits(bits) return float } Update ref comment: I convert the bytes-as-string to a hex string, to a []bytes, to []float32. Is there a way to directly convert the bytes-as-string to []bytes? I'm still a bit confused about what you are receiving so lets work through both possibilities. If the redis query is returning raw data (bytes) as a go string (i.e. "\xcd\xcc\x8c?\xcd\xcc\x0c@33S@") then you can just convert this to []byte (playground) func main() { var aBytesStr string = "\xcd\xcc\x8c?\xcd\xcc\x0c@33S@" fmt.Println(GetFloatArray([]byte(aBytesStr))) } If redis is returning a string containing an ASCII (/UTF-8) representation (i.e. CDCC = []byte{0x41, 0x44, 0x43, 0x43}) the simplest approach is probably to use encoding/hex to decode this (playground) func main() { aHex := "CDCC8C3FCDCC0C4033335340" b, err := hex.DecodeString(aHex) if err != nil { panic(err) } fmt.Println(GetFloatArray(b)) } Note that your original approach could work but, as pointed out in the comments above, you need to deal with Endianness so the following will work (playground - you could make this more efficient, I aimed for clarity): byte1 := aHex[i*8 : i*8+2] byte2 := aHex[i*8+2 : i*8+4] byte3 := aHex[i*8+4 : i*8+6] byte4 := aHex[i*8+6 : i*8+8] aParsed, err := strconv.ParseUint(byte4+byte3+byte2+byte1, 16, 32) However this makes assumptions about the CPU the code is running on which means the previous answer is preferable.
2
4
77,566,173
2023-11-28
https://stackoverflow.com/questions/77566173/is-there-anyway-to-run-brave-browser-with-seleniumbase
I'm trying to run a Brave browser with undetected_chrome on a Debian server. Attempt 1: Using undected_chrome library and binary_location Result: undected_chrome has problem with driver.quit() not working probably, while I need to close and reopen the browser every minutes. People suggest using Seleniumbase instead. Attempt 2: Using Seleniumbase and binary_location Result: Seleniumbase say that brave.exe is not a valid binary. Attempt 3: Changing constants.py in Seleniumbase to include brave.exe valid_chrome_binaries_on_windows = [ "chrome.exe", "chromium.exe", "brave.exe", ] then using binary_location to run on Windows with the code below: from seleniumbase import Driver brave_path = r'C:\Program Files\BraveSoftware\Brave-Browser\Application\brave.exe' driver = Driver(uc=True, proxy=webdriver_proxy, incognito=True, page_load_strategy=load_strategy, block_images=True, binary_location=brave_path) Result: Everything works well Attempt 4: Repeat attempt 3 but on a Debian server Change constants.py in Seleniumbase to include brave-browser valid_chrome_binaries_on_linux = [ "google-chrome", "google-chrome-stable", "chrome", "chromium", "chromium-browser", "google-chrome-beta", "google-chrome-dev", "google-chrome-unstable", "brave-browser", "brave-browser-stable", ] then run this code on a Debian server: brave_path = "/usr/bin/brave-browser" driver = Driver(uc=True, proxy=webdriver_proxy, incognito=True, page_load_strategy=load_strategy, block_images=True, binary_location=brave_path) Result: It works somewhat, it can access webpage but for some reason it misses some headers which trigger bot detection from sites. Using whatismyheader.com, here are a normal Brave header and the test code header side by side: Normal Brave Header(using Brave with undetected_chrome library): <html><head></head><body>GET / HTTP/1.1 Host: whatismyheader.com Connection: keep-alive Sec-Ch-Ua: "Brave";v="119", "Chromium";v="119", "Not?A_Brand";v="24" Sec-Ch-Ua-Mobile: ?0 Sec-Ch-Ua-Platform: "Linux" Upgrade-Insecure-Requests: 1 User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8 Sec-Gpc: 1 Accept-Language: en-US,en;q=0.8 Sec-Fetch-Site: none Sec-Fetch-Mode: navigate Sec-Fetch-User: ?1 Sec-Fetch-Dest: document Accept-Encoding: gzip, deflate, br Test code header (using Brave with modified Seleniumbase): <html><head></head><body>GET / HTTP/1.1 Host: whatismyheader.com Connection: keep-alive Sec-Ch-Ua: Sec-Ch-Ua-Mobile: ?0 Sec-Ch-Ua-Platform: "" Upgrade-Insecure-Requests: 1 User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.1.60.118 Safari/537.36 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8 Sec-Gpc: 1 Accept-Language: en-US,en;q=0.6 Sec-Fetch-Site: none Sec-Fetch-Mode: navigate Sec-Fetch-User: ?1 Sec-Fetch-Dest: document Accept-Encoding: gzip, deflate, br Anyone has any insight? Thanks for the help.
Upgrade to seleniumbase 4.21.6 (or newer) so that you can use Brave or Opera. (https://github.com/seleniumbase/SeleniumBase/issues/2324) (Set via binary_location). Eg. On a Mac: pytest basic_test.py --binary-location="/Applications/Opera.app/Contents/MacOS/Opera" pytest basic_test.py --binary-location="/Applications/Brave Browser.app/Contents/MacOS/Brave Browser" For the formats that use the Driver() or SB() managers, the binary_location arg should be set. (Note that browser="chrome" should still be used for this, as this will invoke chromedriver with default options.) If your issue is with the user-agent, you can set it with the agent arg. For me, I didn't need to change the user agent at all for UC Mode to work. I used default UC Mode options while setting the updated binary location.
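A hedged sketch of the Driver() form on Linux, assuming Brave is installed at /usr/bin/brave-browser (adjust the path for your machine; the target URL is just an example):

from seleniumbase import Driver

brave_path = "/usr/bin/brave-browser"  # assumed install location

driver = Driver(uc=True, browser="chrome", binary_location=brave_path)
try:
    driver.get("https://whatismyheader.com")
    print(driver.page_source[:500])
finally:
    driver.quit()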
2
2
77,564,155
2023-11-28
https://stackoverflow.com/questions/77564155/how-can-one-plot-a-3d-surface-in-matplotlib-by-points-coordinates
After a whole day of searching, in desperation, I turn to you, my dear fellows. I want to draw a 3D surface of a human head, for which I have found a nice set of 3D coordinates (you can download it from my Google Drive here). Using a 3D scatter plot, everything looks beautiful: For my further purposes, I'd like to plot it as a 3D surface, as it looks in real life. As far as I can tell, the best way to do so in matplotlib is the plot_trisurf function, where one can explicitly pass the points' coordinates and enjoy the result; however, for the same coordinates as in the scatter plot above, I receive the following ugly result: The problem seems to be that triangles are plotted from random triplets of points rather than the closest ones. I tried to sort the points to minimize the distance between neighbouring ones, which had no effect. I also tried to reproduce the meshgrid-based surface plotting that is basically used in all of the matplotlib examples. None of that worked. I also considered using other libraries, but most of them are either matplotlib-based or interactive. The latter is overkill, since I'm going to use it in real-time calculations, so sticking to the matplotlib API is prioritized. The idea seems pretty basic: plot a surface from 3D coordinates by colouring the triangles of the three closest points, although I didn't manage to find such a function. Hopefully, some of you know what can be done to overcome this issue. Thanks in advance.
It is possible to plot the 3D surface over your scatter plot using the plt.plot_trisurf(...) function as long as you find the right ordering of vertices for the triangles. There is a function from SciPy called ConvexHull which finds the simplices of the points on the outside of the data set. This is very handy, but does not immediately work on this example because your data set is not convex! The solution is to make the data convex by expanding the points away from the center until they form a sphere. See below for a visualization of this. After turning the head into a sphere it is now possible to call ConvexHull(...) to get the desired triangulation. This triangulation can be applied to the spherical head first (see below). Then, the head can be shrunk back into its original form, and the triangulation's vertices are still valid! This is the final product! Code import numpy as np import matplotlib.pyplot as plt import csv from scipy.spatial import KDTree from scipy.spatial import ConvexHull from matplotlib import cm from matplotlib import animation plt.style.use('dark_background') # Data reader from a .csv file def getData(file): lstX = [] lstY = [] lstZ = [] with open(file, newline='\n') as f: reader = csv.reader(f, quoting=csv.QUOTE_NONNUMERIC) for row in reader: lstX.append(row[0]) lstY.append(row[1]) lstZ.append(row[2]) return lstX, lstY, lstZ # This function gets rid of the triangles at the base of the neck # It just filters out any triangles which have at least one side longer than toler def removeBigTriangs(points, inds, toler=35): newInds = [] for ind in inds: if ((np.sqrt(np.sum((points[ind[0]]-points[ind[1]])**2, axis=0))<toler) and (np.sqrt(np.sum((points[ind[0]]-points[ind[2]])**2, axis=0))<toler) and (np.sqrt(np.sum((points[ind[1]]-points[ind[2]])**2, axis=0))<toler)): newInds.append(ind) return np.array(newInds) # this calculates the location of each point when it is expanded out to the sphere def calcSpherePts(points, center): kdtree = KDTree(points) # tree of nearest points # d is an array of distances, i is array of indices d, i = kdtree.query(center, points.shape[0]) spherePts = np.zeros(points.shape, dtype=float) radius = np.amax(d) for p in range(points.shape[0]): spherePts[p] = points[i[p]] *radius /d[p] return spherePts, i # points and the indices for where they were in the original lists x,y,z = getData(".\coords3Ddetailed.csv") pts = np.stack((x,y,z), axis=1) # generating data spherePts, sphereInd = calcSpherePts(pts, [0,0,0]) hull = ConvexHull(spherePts) triangInds = hull.simplices # returns the list of indices for each triangle triangInds = removeBigTriangs(pts[sphereInd], triangInds) # plotting! fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.scatter3D(pts[:,0], pts[:,1], pts[:,2], s=2, c='r', alpha=1.0) ax.plot_trisurf(pts[sphereInd,0], pts[sphereInd,1], pts[sphereInd,2], triangles=triangInds, cmap=cm.Blues, alpha=1.0) plt.show()
5
4
77,567,508
2023-11-28
https://stackoverflow.com/questions/77567508/filter-pandas-dataframe-for-rows-with-a-specific-date
I am new to python (I have used R in the past). I have a pandas data frame with one column containing dates. I would like to filter for observations occurring on one specific date. ## Create the pandas DataFrame with column named purchase-date data = ['2023-11-25', '2023-11-24', '2023-11-25', '2023-11-23'] df = pd.DataFrame(data, columns=['purchase-date']) ## convert string format to datetime df['purchase-date'] = pd.to_datetime(df['purchase-date']).dt.date ## Attempt 1: Try setting purchase date as the index and using .loc df.set_index('purchase-date') df.loc['2023-11-24':'2023-11-24'] ## Attempt 2: Filter without making purchase date the index df[(df['purchase-date'] > '2023-11-23') & (df['purchase-date'] < '2023-11-25')] This is the error I got: TypeError: '>' not supported between instances of 'datetime.date' and 'str'
The issue you're encountering is related to the fact that you're trying to compare a datetime.date object with a string in your second attempt. To filter based on dates, you need to compare datetime.date objects: import pandas as pd data = ['2023-11-25', '2023-11-24', '2023-11-25', '2023-11-23'] df = pd.DataFrame(data, columns=['purchase-date']) df['purchase-date'] = pd.to_datetime(df['purchase-date']).dt.date # Filter without making purchase date the index start_date = pd.to_datetime('2023-11-23').date() end_date = pd.to_datetime('2023-11-25').date() filtered_df = df[(df['purchase-date'] > start_date) & (df['purchase-date'] < end_date)] print(filtered_df)
2
1
77,567,658
2023-11-28
https://stackoverflow.com/questions/77567658/finding-the-max-or-min-of-values-in-local-sets-of-rows-of-a-pandas-dataframe
So for example, I have a dataframe like this Value Placement 0 12 high 1 15 high 2 18 high 3 14 high 4 4 low 5 5 low 6 9 high 7 11 high 8 2 low 9 1 low 10 3 low 11 2 low I want to create a second dataframe that contains the highest value in the "Value" column for each set of consecutive rows with "high" placement, and the lowest value in the "Value" column for each set of consecutive rows with "low" placement. So something like Value Placement 0 18 high 1 4 low 2 11 high 3 1 low I also don't want to change the order of the rows, as the order of the "highs" and "lows" is critical to the functionality of the project. I could just iterate through the original dataframe and keep track of the numbers in "Value" until a change in "Placement" is detected, but I've heard dataframe iteration is very slow and should be avoided if possible. Is there some way to do this without iteration? TIA
Group by consecutive values, swap the sign for Placement that match "low", and get the idxmax per group, then keep the selected rows with loc: # group consecutive rows group = df['Placement'].ne(df['Placement'].shift()).cumsum() # invert the low values, get idxmax per group keep = (df['Value'] .mul(df['Placement'].map({'low': -1, 'high': 1})) .groupby(group, sort=False).idxmax() ) out = df.loc[keep] If efficiency is a concern, and since groupby is based on a python loop, another approach (that is potentially faster for many groups) would be to stable-sort the rows by value and group (using numpy.lexsort) and keep the highest (after sign swap for "low") using drop_duplicates: group = df['Placement'].ne(df['Placement'].shift()).cumsum() s = df['Value'].mul(df['Placement'].map({'low': -1, 'high': 1})) keep = (group .iloc[np.lexsort([s, group])] .drop_duplicates(keep='last') .index ) out = df.loc[keep] Note that despite the sorting step, this strategy will maintain the relative original order of the rows. Output: Value Placement 2 18 high 4 4 low 7 11 high 9 1 low Comparison of timing:
2
2
77,549,857
2023-11-25
https://stackoverflow.com/questions/77549857/iterate-over-a-list-of-lists-assert-multiple-conditions-and-render-when-true-in
I have the following variables to use in my Jinja template: list_python_version = [3, 9] all_python_version = [ [3, 8], [3, 9], [3, 10], [3, 11], [3, 12] ] Is there a way to use a combination of Jinja filters and tests so it iterates over all_python_version, checks that both the first and second elements of the list are greater or equal than the elements of list_python_version, and when the conditions are met, it generates the string for MAJOR.MINOR version and joins all that are valid in a single string? This way, considering the variables above, the rendered template should give 3.9, 3.10, 3.11, 3.12? I have tried the following expression: {{ all_python_version | select('[0] >= minimal_python_version[0] and [1] >= minimal_python_version[1]') | join('.') | join(', ') }} But it will fail since the select filter asks for a function, and so far I have not found in Jinja's documentation any hint as to how we can use conditionals to filter values inside an expression. Alternatively, the solution could encompass a for loop in Jinja, but I need the string to be rendered in one line, and if we do: {% for version in all_python_version %} {% if version[0] >= list_python_version[0] and version[1] >= list_python_version[1] %} {{ version[0] ~ '.' ~ version[1] ~ ',' }} {% endif %} {% endfor %} Each version will render in its own line, with the last , also being rendered. Is there a way to do get the versions to be rendered in a single line in pure Jinja and its plugins?
Conveniently enough, since Jinja allows you to access elements of a list via both list[0] and list.0, this means that you can actually use a dictionary filter on a list. And selectattr is just the filter we need here, since it allows to select items out of a list of dictionaries based on a property of those dictionaries. Still, we cannot properly fit a logical and in there, so we'll need to split the logic in two: all Python versions where the major version is strictly equal to our comparison point and the minor version is greater or equal all Python versions where the major version is greater than our comparison point, regardless of the minor version So, we end up with a quite lengthy: {{ all_python_version | selectattr(0, '==', list_python_version.0) | selectattr(1, '>=', list_python_version.1) | list + all_python_version | selectattr(0, '>', list_python_version.0) | list }} The snippet {%- set list_python_version = [3, 9] -%} {%- set all_python_version = [ [3, 8], [3, 9], [3, 10], [3, 11], [3, 12], ] -%} {{ all_python_version | selectattr(0, '==', list_python_version.0) | selectattr(1, '>=', list_python_version.1) | list + all_python_version | selectattr(0, '>', list_python_version.0) | list }} Would yield: [[3, 9], [3, 10], [3, 11], [3, 12]] Then to join everything together, use a combination of map and join in the list of lists and a simple join in the resulting list: {{ ( all_python_version | selectattr(0, '==', list_python_version.0) | selectattr(1, '>=', list_python_version.1) | list + all_python_version | selectattr(0, '>', list_python_version.0) | list ) | map('join', '.') | join(', ') }} Giving 3.9, 3.10, 3.11, 3.12
2
1