question_id: int64, values 59.5M to 79.4M
creation_date: string, lengths 8 to 10
link: string, lengths 60 to 163
question: string, lengths 53 to 28.9k
accepted_answer: string, lengths 26 to 29.3k
question_vote: int64, values 1 to 410
answer_vote: int64, values -9 to 482
76,503,643
2023-6-19
https://stackoverflow.com/questions/76503643/how-to-change-traceback-back-to-normal
My most recent env prints the traceback like this ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ which is beyond useless. I've already looked at this How to make typer traceback look normal but it doesn't help. My hunch is it may be about Huggingface, but maybe something else like datasets or evaluate; I can't find anything useful so far. How do I make the stacktrace print everything it needs to again?
It is the rich library that causes the issue. So pip uninstall rich solved everything. Edit July 3rd, 2023: This seems to be an issue with accelerate, as I originally anticipated. In this case it defaults to using rich when available. So ACCELERATE_DISABLE_RICH=1 should fix the issue in case other libs require rich for something else.
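A minimal sketch of the environment-variable fix (assuming, as the answer implies, that the variable must be set before accelerate is imported; the final import is just a placeholder for whatever entry point you actually use):

import os

# Tell accelerate not to install rich's fancy traceback handler.
# This must run before accelerate (or anything that imports it) is loaded.
os.environ["ACCELERATE_DISABLE_RICH"] = "1"

import accelerate  # placeholder import; your real entry point may differ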
3
2
76,469,795
2023-6-14
https://stackoverflow.com/questions/76469795/does-anyone-see-where-the-error-in-the-following-gekko-ipopt-nonlinear-optimizat
In my code, I get the following error when running: Exception: @error: Equation Definition Equation without an equality (=) or inequality (>,<) ((((((((((-cos(v4)))*(sin(v5)))-((((sin(v4))*(cos(v5))))*(cos(v3)))))*(((sqrt(( 398574405096000.0/((v1)*((1-((v2)^(2))))))))*([(-sin(v6))(v2+cos(v6))0])))))^(2 ))+((((((((-sin(v4)))*(sin(v5)))+((((cos(v4))*(cos(v5))))*(cos(v3)))))*(((sqrt( (398574405096000.0/((v1)*((1-((v2)^(2))))))))*([(-sin(v6))(v2+cos(v6))0])))))^( 2)))+((((((cos(v5))*(sin(v3))))*(((sqrt((398574405096000.0/((v1)*((1-((v2)^(2)) ))))))*([(-sin(v6))(v2+cos(v6))0])))))^(2))) STOPPING... I tried searching my code for variations of the math functions (sqrt, cos, etc.) to see if I could find something that looked like the above equation, but I cannot find it in any way. I assume GEKKO manipulated some things around to get it, likely as part of the solver. My thinking is that the 'v' values are equivalent to my orbital elements, and I see that the value of mu is expressed. I'm hoping someone can put another set of eyes on my code and maybe help me out. Here is my code: from gecko import GEKKO import numpy as np import matplotlib.pyplot as plt import math def oe2rv(oe): a = oe[0] e = oe[1] i = oe[2] Om = oe[3] om = oe[4] nu = oe[5] p = a * (1 - e**2) r = p/(1 + e * m.cos(nu)) rv = np.array([r * m.cos(nu), r * m.sin(nu), 0]) vv = m.sqrt(mu/p) * np.array([-m.sin(nu), e + m.cos(nu), 0]) cO = m.cos(Om) sO = m.sin(Om) co = m.cos(om) so = m.sin(om) ci = m.cos(i) si = m.sin(i) R = np.array([[cO * co - sO * so * ci, -cO * so - sO * co * ci, sO * si], [sO * co + cO * so * ci, -sO * so + cO * co * ci,-cO * si], [so * si, co * si, ci]]) ri = R * rv vi = R * vv return ri, vi def TwoBody(ri, vi): ri_dot[0] = vi[0] ri_dot[1] = vi[1] ri_dot[2] = vi[2] r_mag = m.sqrt(ri[0]**2 + ri[1]**2 + ri[2]**2) r3 = r_mag**3 c = -mu/r3 vi_dot[0] = c * ri[0] vi_dot[1] = c * ri[1] vi_dot[2] = c * ri[2] return ri_dot, vi_dot def Euler(ri, vi): ri_dot, vi_dot = TwoBody(ri, vi) for i in range(0, 3): ri_new[i] = ri[i] + ri_dot[i] * dt vi_new[i] = vi[i] + vi_dot[i] * dt return ri_new, vi_new def trap(E_en, E_ex, e): dE = (E_en - E_ex)/20 E = np.linspace(E_en, E_ex, 20) nu_new = m.acos((m.cos(E[0]) - e)/(1 - e * m.cos(E[0]))) for i in range(1, 19): nu_new = nu_new + 2 * m.acos((m.cos(E[i]) - e)/(1 - e * m.cos(E[i]))) nu_new = m.acos((m.cos(E[19]) - e)/(1 - e * m.cos(E[19]))) nu_new = (dE/2) * nu_new return nu_new def propagate(a, e, i, Om, om, nu, mass): oe = np.array([a, e, i, Om, om, nu]) ri, vi = oe2rv(oe) r = m.sqrt(ri[0]**2 + ri[1]**2 + ri[2]**2) v = m.sqrt(vi[0]**2 + vi[1]**2 + vi[2]**2) h = np.cross(ri, vi) d1 = m.sqrt(4 * ((al_a * a * v**2)/mu + l_e * (e + m.cos(nu)))**2 + l_e**2 * (r**2/a**2) * m.sin(nu)**2) s_a = (-l_e * (r/a) * m.sin(nu))/d1 c_a = (-2 * (((al_a * a * v**2)/mu) + l_e * (e + m.cos(nu))))/d1 d2 = m.sqrt(l_i**2 * ((r**2 * v**2)/h**2) * (m.cos(om + nu))**2 + (4 * al_a**2 * a**2 * v**4 * c_a**2)/mu**2 + l_e**2 * ((2 * (e + m.cos(nu)) * c_a + r/a * m.sin(nu) * s_a)**2)) s_b = (-l_i * ((r * v)/h) * m.cos(om + nu))/d2 c_b = (((-al_a * 2 * a * v**2)/mu) * c_a - l_e * (2 * (e + m.cos(nu)) * c_a + (r/a) * m.sin(nu) * s_a))/d2 a_n = aT * s_a * c_b a_t = aT * c_a * c_b a_h = aT * s_b n = m.sqrt(mu/a**3) Om_J2 = ((-3 * n * R_E**2 * J2)/(2 * a**2 * (1 - e**2)**2)) * m.cos(i) om_J2 = ((3 * n * R_E**2 * J2)/(4 * a**2 * (1 - e**2)**2)) * (4 - 5 * (m.sin(i))**2) nu_new = trap(E_en, E_ex, e) da_dt = a_t * (2 * a**2 * v)/mu de_dt = (1/v) * (2 * (e + m.cos(nu)) * a_t + (r/a) * a_n * m.sin(nu)) di_dt = (r/h) * a_h * 
m.cos(om + nu) dOm_dt = (r/(h * m.sin(i))) * a_h * m.sin(om + nu) + Om_J2 dom_dt = (1/(e * v)) * (2 * a_t * m.sin(nu) - (2 * e + (r/a) * m.cos(nu)) * a_n) - (r/(h * m.sin(i))) * a_h * m.sin(om + nu) * m.cos(i) + om_J2 dnu_dt = nu_new - nu dm_dt = (-2 * eta * P)/pow((g * Isp), 2) dt_dE = r/(n * a) Tp = (2 * math.pi/m.sqrt(mu)) * a**(3/2) deltas = np.array([da_dt, de_dt, di_dt, dOm_dt, dom_dt, dnu_dt, dm_dt, dt_dE]) return deltas, Tp #initialize model m = GEKKO() #optional solver settings with APOPT Nsim = 100 #number of steps with constant thrust m.time = np.linspace(0, 0.2, Nsim) #constants mu = 3.98574405096E14 g = 9.81 R_E = 6.2781E6 J2 = 1.08262668E-3 P = 10E3 eta = 0.65 Isp = 3300 m0 = 1200 aT = (2 * eta * P)/(m0 * g * Isp) delta_t = 3600 t_max = 86400 * 200 E_en = math.pi E_ex = -math.pi oe_i = np.array([6927000, 0, math.radians(28.5), 0, 0, 0]) oe_f = np.array([42164000, 0, 0, 0, 0, 0]) v_i = m.sqrt(mu/oe_i[0]) v_f = m.sqrt(mu/oe_f[0]) dv = abs(v_i - v_f) dm = (2 * eta * P)/pow((g * Isp), 2) m_f = m0 * m.exp(-dv/(g * Isp)) #manipulating variables and initial guesses al_a = m.MV(value = -1, lb = -2, ub = 2) al_a.STATUS = 1 l_e = m.MV(value = 0.001, lb = 0, ub = 10**6) l_e.STATUS = 1 l_i = m.MV(value = 1, lb = 0, ub = 10**6) l_i.STATUS = 1 #variables and initial guesses a = m.Var(value = oe_i[0], lb = oe_i[0] - 6378000, ub = oe_f[0] + 6378000) e = m.Var(value = oe_i[1], lb = 0, ub = 1) i = m.Var(value = oe_i[2], lb = 0, ub = math.radians(90)) Om = m.Var(value = oe_i[3], lb = 0, ub = math.radians(360)) om = m.Var(value = oe_i[4], lb = 0, ub = math.radians(360)) nu = m.Var(value = oe_i[5], lb = 0, ub = math.radians(360)) mass = m.Var(value = m0, lb = 0, ub = m0) #objective function tf = m.FV(value = 1.2 * ((m0 - m_f)/dm), lb = 0, ub = t_max) tf.STATUS = 1 #propagation deltas, Tp = propagate(a, e, i, Om, om, nu, mass) m.Equation(a.dt() == (deltas[0] * delta_t * deltas[7])/Tp) m.Equation(e.dt() == (deltas[1] * delta_t * deltas[7])/Tp) m.Equation(i.dt() == (deltas[2] * delta_t * deltas[7])/Tp) m.Equation(Om.dt() == (deltas[3] * delta_t * deltas[7])/Tp) m.Equation(om.dt() == (deltas[4] * delta_t * deltas[7])/Tp) m.Equation(nu.dt() == deltas[5] * delta_t) m.Equation(mass.dt() == (deltas[6] * delta_t * deltas[7])/Tp) #starting constraints m.fix(a, pos = 0, val = oe_i[0]) m.fix(e, pos = 0, val = oe_i[1]) m.fix(i, pos = 0, val = oe_i[2]) m.fix(Om, pos = 0, val = oe_i[3]) m.fix(om, pos = 0, val = oe_i[4]) m.fix(nu, pos = 0, val = oe_i[5]) m.fix(mass, pos = 0, val = m0) #boundary constraints m.fix(a, pos = len(m.time) - 1, val = oe_f[0]) m.fix(e, pos = len(m.time) - 1, val = oe_f[1]) m.fix(i, pos = len(m.time) - 1, val = oe_f[2]) m.fix(Om, pos = len(m.time) - 1, val = oe_f[3]) m.fix(om, pos = len(m.time) - 1, val = oe_f[4]) m.fix(nu, pos = len(m.time) - 1, val = oe_f[5]) m.fix(mass, pos = len(m.time) - 1, val = 0) m.Obj(tf) #minimize final time m.options.IMODE = 6 # non-linear model m.options.SOLVER = 3 # solver (IPOPT) m.options.MAX_ITER = 15000 m.options.RTOL = 1e-7 m.options.OTOL = 1e-7 m.solve(disp=True, debug=True) # Solve print('Optimal time: ' + str(tf.value[0])) m.solve(disp=True) m.open_folder(infeasibilities.txt) After doing some playing around, I believe the issue is that I am using the manipulating variables ('al_a', 'l_e' and 'l_i') in the 'propagate' function. Does that make sense as a possible problem? If that is the problem, is it possible to use the values of those variables in that function - and, if so, how?
The square brackets indicate that a list or numpy array was used instead of a scalar value in one of the expressions. Adding names (e.g. name='a') to the variables helps with a more readable model apm file that is in the local run directory m.path. Open the directory with m.open_folder(). #variables and initial guesses a = m.Var(value = oe_i[0], lb = oe_i[0] - 6378000, ub = oe_f[0] + 6378000,name='a') e = m.Var(value = oe_i[1], lb = 0, ub = 1,name='e') i = m.Var(value = oe_i[2], lb = 0, ub = math.radians(90),name='i') Om = m.Var(value = oe_i[3], lb = 0, ub = math.radians(360),name='om1') om = m.Var(value = oe_i[4], lb = 0, ub = math.radians(360),name='om2') nu = m.Var(value = oe_i[5], lb = 0, ub = math.radians(360),name='nu') mass = m.Var(value = m0, lb = 0, ub = m0,name='mass') This gives an updated version of the error with: @error: Equation Definition Equation without an equality (=) or inequality (>,<) ((((((((((-cos(om1)))*(sin(om2)))-((((sin(om1))*(cos(om2))))*(cos(i)))))*(((sqr t((398574405096000.0/((a)*((1-((e)^(2))))))))*([(-sin(nu))(e+cos(nu))0])))))^(2 ))+((((((((-sin(om1)))*(sin(om2)))+((((cos(om1))*(cos(om2))))*(cos(i)))))*(((sq rt((398574405096000.0/((a)*((1-((e)^(2))))))))*([(-sin(nu))(e+cos(nu))0])))))^( 2)))+((((((cos(om2))*(sin(i))))*(((sqrt((398574405096000.0/((a)*((1-((e)^(2)))) ))))*([(-sin(nu))(e+cos(nu))0])))))^(2))) STOPPING... One of the errors is in the propagate function where h is a (3,3) numpy array but is used in equations. h = np.cross(ri, vi) s_b = (-l_i * ((r * v)/h) * m.cos(om + nu))/d2 After the equations are corrected, one additional suggestion is to move Tp to the right-hand side of the equation to avoid potential divide-by-zero. m.Equation(Tp*a.dt() == (deltas[0] * delta_t * deltas[7])) m.Equation(Tp*e.dt() == (deltas[1] * delta_t * deltas[7])) m.Equation(Tp*i.dt() == (deltas[2] * delta_t * deltas[7])) m.Equation(Tp*Om.dt() == (deltas[3] * delta_t * deltas[7])) m.Equation(Tp*om.dt() == (deltas[4] * delta_t * deltas[7])) m.Equation(nu.dt() == deltas[5] * delta_t) m.Equation(Tp*mass.dt() == (deltas[6] * delta_t * deltas[7]))
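To make the failure mode concrete, here is a minimal sketch (not the poster's model; the variable and constants are made up) of the kind of mixing between Gekko expressions and raw numpy arrays that produces the bracketed terms in the error, plus a scalar-only alternative:

from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)
x = m.Var(value=1.0, name='x')

# Problematic pattern: mixing a Gekko expression with a raw numpy array like this
# can leave bracketed array text ("[...]") inside the symbolic equation sent to
# the APM backend, which it cannot parse.
bad = m.sqrt(x) * np.array([1.0, 2.0, 3.0])

# Safer pattern: keep every component as its own scalar expression and
# combine them with scalar operations only.
vec = [m.sqrt(x) * c for c in (1.0, 2.0, 3.0)]
mag2 = sum(v**2 for v in vec)
m.Equation(mag2 >= 1.0)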
3
0
76,509,707
2023-6-19
https://stackoverflow.com/questions/76509707/plotly-sankey-diagram-how-to-display-the-value-for-each-links-and-node-on-the-l
In the Plotly Sankey diagram, you are able to see the 'value' of a link/node by hovering over it. I want the image to display the values without hovering though. I've looked through the documentation and see virtually no way of doing this besides replacing the labels themselves with the desired value. That is not a good option, as nothing would then be labeled. Short of making dynamic labels that include both name and value, I'm not sure how to approach this. Examples below... Sample Sankey Diagram (source): import plotly.graph_objects as go fig = go.Figure(data=[go.Sankey( node = dict( pad = 15, thickness = 20, line = dict(color = "black", width = 0.5), label = ["A1", "A2", "B1", "B2", "C1", "C2"], customdata = ["Long name A1", "Long name A2", "Long name B1", "Long name B2", "Long name C1", "Long name C2"], hovertemplate='Node %{customdata} has total value %{value}<extra></extra>', color = "blue" ), link = dict( source = [0, 1, 0, 2, 3, 3], target = [2, 3, 3, 4, 4, 5], value = [8, 4, 2, 8, 4, 2], customdata = ["q","r","s","t","u","v"], hovertemplate='Link from node %{source.customdata}<br />'+ 'to node %{target.customdata}<br />has value %{value}'+ '<br />and data %{customdata}<extra></extra>', ))]) fig.update_layout(title_text="Basic Sankey Diagram", font_size=10) fig.show() Actual Output: Desired Output:
The node positions are determined by non-trivial algorithms and I am afraid that Plotly does not make the coordinates explicit, as of now, see Extract X and Y coordinates from Plotly Sankey diagram. I think you can: Pass locations computed on your own (may be tricky!), and use the locations to anchor custom annotations Plotly: how to write a text over my Sankey diagram columns? Inject your information into displayable labels (what you described as an alternative), see Add node count to Plotly Sankey diagram
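For the last option (injecting the values into the labels), a minimal sketch based on the question's example; the node totals are simply recomputed from the link values here, on the assumption that the number you want per node is max(inflow, outflow), so adjust that line to whatever definition of a node's value you need:

import plotly.graph_objects as go

names = ["A1", "A2", "B1", "B2", "C1", "C2"]
source = [0, 1, 0, 2, 3, 3]
target = [2, 3, 3, 4, 4, 5]
value = [8, 4, 2, 8, 4, 2]

# Recompute per-node totals from the links so they can go into the label text.
inflow = [0] * len(names)
outflow = [0] * len(names)
for s, t, v in zip(source, target, value):
    outflow[s] += v
    inflow[t] += v
labels = [f"{n}: {max(i, o)}" for n, i, o in zip(names, inflow, outflow)]

fig = go.Figure(go.Sankey(
    node=dict(label=labels),
    link=dict(source=source, target=target, value=value),
))
fig.update_layout(title_text="Basic Sankey Diagram", font_size=10)
fig.show()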
4
4
76,515,800
2023-6-20
https://stackoverflow.com/questions/76515800/the-difference-between-poetry-add-and-poetry-install
I thought that poetry add package would simply add the package to pyproject.toml, but it seems it doesn't just add it but also installs it in a virtual environment. But what does poetry install do? When I run it after I added the deps with add, I am getting the following message: Installing dependencies from lock file No dependencies to install or update Note that I started a project from scratch with mkdir new_dir; cd new_dir; poetry init.
poetry add library_name installs the library and adds it to the pyproject.toml file. Note: it both installs the library and adds it to the file. poetry install is used when you've directly edited the pyproject.toml file and added the dependency names manually. In that case, they aren't installed yet, so poetry install takes care of that.
8
16
76,495,086
2023-6-17
https://stackoverflow.com/questions/76495086/not-able-to-override-unittest-starttest-and-stoptest-isn-t-working-correctly
I have overridden startTest and stopTest from the unittest TextTestResult class and used them in my custom test runner. It's working correctly in the normal scenario but not when used with the --parallel flag. I tried to debug and found the time elapsed between startTest and stopTest comes out very, very small, like 4.8000000000492093e-05, which is incorrect. Would someone please tell me whether these are the right hooks for the --parallel flag or whether I have to use another hook? Steps to reproduce: Let's assume the django project name is project and the app is app. Create custom_test_runner.py in your django project directory and add the below code: from time import perf_counter from unittest import TextTestResult, TextTestRunner from django.test.runner import DiscoverRunner class CustomTestRunnerTextTestResult(TextTestResult): def __init__(self, stream, descriptions, verbosity): super(CustomTestRunnerTextTestResult, self).__init__(stream, descriptions, verbosity) def startTest(self, test): self.start_time = perf_counter() super(CustomTestRunnerTextTestResult, self).startTest(test) def stopTest(self, test): super(CustomTestRunnerTextTestResult, self).stopTest(test) print(f"Time elapsed {str(test)} for {perf_counter() - self.start_time}") class CustomTestRunnerTextTestRunner(TextTestRunner): def __init__(self, **kwargs): super().__init__(**kwargs) resultclass = CustomTestRunnerTextTestResult class CustomTestRunnerTestRunner(DiscoverRunner): def __init__(self, **kwargs): super(CustomTestRunnerTestRunner, self).__init__(**kwargs) test_runner = CustomTestRunnerTextTestRunner Create test_custom_test_runner.py in the tests folder and add the below code: import time from django.test import TestCase class Test1(TestCase): def test_1(self): time.sleep(2) class Test2(TestCase): def test_2(self): time.sleep(2) In your settings.py add the below code: TEST_RUNNER = "project.custom_test_runner.CustomTestRunnerTestRunner" Now, run python manage.py test app.tests.test_custom_test_runner --parallel You'll get a very small time interval like this: .Time elapsed test_1 (app.tests.test_custom_test_runner.Test1) for 0.00034683300000004635 .Time elapsed test_2 (app.tests.test_custom_test_runner.Test2) for 0.0002126670000004438 ---------------------------------------------------------------------- Ran 2 tests in 2.095s But if you run it without the --parallel flag you'll get: .Time elapsed test_1 (app.tests.test_custom_test_runner.Test1) for 2.005363042 .Time elapsed test_2 (app.tests.test_custom_test_runner.Test2) for 2.0043795000000006 ---------------------------------------------------------------------- Ran 2 tests in 4.027s Django Version: 3.2.14
test_runner.resultclass (CustomTestRunnerTextTestResult) methods are not called in real-time if using --parallel flag. Rather, parallel_test_suite.runnerclass.resultclass (RemoteTestResult) methods are called, which queue events (method calls to test_runner.resultclass) to be dispatched together by parallel_test_suite (ParallelTestSuite) after each test is completed. Django ticket, Oct 2020: https://code.djangoproject.com/ticket/32140 (closed as needsinfo) Here's one way to support your requirement: Instead of calling perf_counter() directly, implement and call self._get_time(), which will try to return the time marked by a RemoteTestResult subclass. Provide a method for the RemoteTestResult subclass to mark the time via events. class CustomTestRunnerTextTestResult(TextTestResult): def __init__(self, stream, descriptions, verbosity): super(CustomTestRunnerTextTestResult, self).__init__(stream, descriptions, verbosity) self._marked_remote_time = None # Add this def mark_remote_time(self, _test, remote_time): # For CustomTestRunnerRemoteTestResult events # Add this method self._marked_remote_time = remote_time def _get_time(self): # Add this method return self._marked_remote_time or perf_counter() def startTest(self, test): # self.start_time = perf_counter() # Change this self.start_time = self._get_time() # to this super(CustomTestRunnerTextTestResult, self).startTest(test) def stopTest(self, test): super(CustomTestRunnerTextTestResult, self).stopTest(test) # print(f"Time elapsed {str(test)} for {perf_counter() - self.start_time}") # Change this print(f"Time elapsed {str(test)} for {self._get_time() - self.start_time}") # to this Implement the RemoteTestResult subclass, which marks the time via events. from django.test.runner import RemoteTestResult class CustomTestRunnerRemoteTestResult(RemoteTestResult): def startTest(self, test): self.events.append(("mark_remote_time", self.test_index, perf_counter())) super().startTest(test) # Does self.events.append(("startTest", self.test_index)) def stopTest(self, test): self.events.append(("mark_remote_time", self.test_index, perf_counter())) super().stopTest(test) # Does self.events.append(("stopTest", self.test_index)) Pass the RemoteTestResult subclass to parallel_test_suite.runnerclass constructor. from functools import partial class CustomTestRunnerTestRunner(DiscoverRunner): def __init__(self, **kwargs): super(CustomTestRunnerTestRunner, self).__init__(**kwargs) self.parallel_test_suite.runner_class = partial( # Add this self.parallel_test_suite.runner_class, resultclass=CustomTestRunnerRemoteTestResult, ) test_runner = CustomTestRunnerTextTestRunner
3
1
76,517,805
2023-6-20
https://stackoverflow.com/questions/76517805/is-there-a-difference-between-permission-classes-a-b-permission-classes
What is the difference between permission_classes = [A, B] permission_classes = [A & B] ? I understand that the first one processes permissions sequentially, and the second one does it at once. Is there a difference in the result? Also, I wonder which method is preferred and why.
In short: two or more elements in a list or tuple is advisable. Semantically the two are nearly the same. Indeed, Django checks the permissions in the .check_permissions(โ€ฆ) method [GitHub]: def check_permissions(self, request): """ Check if the request should be permitted. Raises an appropriate exception if the request is not permitted. """ for permission in self.get_permissions(): if not permission.has_permission(request, self): self.permission_denied( request, message=getattr(permission, 'message', None), code=getattr(permission, 'code', None), ) It thus enumerates over the items in the list and when one of the permissions fails, it invokes the .permission_denied(โ€ฆ) method with the message and code of the permission that failed. If you use the & operator on the other hand, it will construct a new permission, with the .__and__(โ€ฆ) method [GitHub]: class OperationHolderMixin: def __and__(self, other): return OperandHolder(AND, self, other) # โ€ฆ class OperandHolder(OperationHolderMixin): def __init__(self, operator_class, op1_class, op2_class): self.operator_class = operator_class self.op1_class = op1_class self.op2_class = op2_class def __call__(self, *args, **kwargs): op1 = self.op1_class(*args, **kwargs) op2 = self.op2_class(*args, **kwargs) return self.operator_class(op1, op2) # โ€ฆ class AND: def __init__(self, op1, op2): self.op1 = op1 self.op2 = op2 def has_permission(self, request, view): return self.op1.has_permission(request, view) and self.op2.has_permission( request, view ) def has_object_permission(self, request, view, obj): return self.op1.has_object_permission( request, view, obj ) and self.op2.has_object_permission(request, view, obj) This is thus essentially just some meta-programming logic to first run the .has_permission(โ€ฆ) on the first operand and if that succeeds, on the second operand. So it seems that the two are equivalent? Well not exactly, by using an &, the .message and .code are gone, so one can not trace back which permission check exactly has been denied. While most (all) builtin permissions have no message or code, if you thus specify one yourself for a custom permission check, and that permission check is used in an operator, the result of that expression loses the code and message. So while this is likely, a small detail, there is a small advantage in using [perm1, perm2] over [perm1 & perm2]
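To make the message/code point concrete, a short illustration (the custom permission below and its attributes are invented for this example, not taken from the question):

from rest_framework.permissions import BasePermission, IsAuthenticated

class HasPaidPlan(BasePermission):  # hypothetical custom permission
    message = 'A paid plan is required for this endpoint.'
    code = 'paid_plan_required'

    def has_permission(self, request, view):
        return getattr(request.user, 'has_paid_plan', False)

# permission_classes = [IsAuthenticated, HasPaidPlan]
#   -> when HasPaidPlan fails, its .message and .code end up in the 403 response
# permission_classes = [IsAuthenticated & HasPaidPlan]
#   -> the combined AND object exposes no .message or .code, so DRF falls back
#      to its generic "You do not have permission to perform this action." detail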
2
4
76,512,183
2023-6-20
https://stackoverflow.com/questions/76512183/efficient-python-function-to-get-value-of-specific-key-in-nested-dict-without-an
The initial situation is the following: I am looking for a custom function that will extract a corresponding value from a nested dict and return it, without external libs and without knowing the whole static path to the corresponding key. The function "search path" (dict key) should be similar to a CSS or XPath selector, e.g. getValue(nestedDict, "[subkey1][subkey42InSubkey1]") # "[subkey1][subkey42InSubkey1]" = "search path" This function (getValue()) should search within the nestedDict for the key subkey1 and within that for subkey42InSubkey1 and then return the value if found or None. However, the function should be so dynamic that the depth of the nested dict doesn't matter. In addition, the "search path" should be specified relatively, i.e. the absolute path through the whole nested dict doesn't have to be known. Question: Can you please help me to create such a function? Should such a function be solved via recursion to be more efficient than loops? Thank you very much for your help! Python Code test_dict = { "a" : "1", "b" : { "1" : 2, "2" : 4711, "3" : { "b31" : 31 }, "4" : 4 }, "c" : "3", "d" : { "1" : 5, "2" : 9, "3" : { "c31" : 55 } } } test_result = 55 # get value in nested dict like a CSS or XPath selector def getValue(nestedDict, key): #TODO result = None return result #################################################################################### if __name__ == '__main__': result = getValue(test_dict, "[3][c31]") # should return 55 as a result # the following call should yield the same result! result2 = getValue(test_dict, "[d][3][c31]") # should return 55 as a result too assert result == test_result print(result) I have a "non-clean code" solution that I am unhappy with myself, so I refrain from posting it here so as not to create a bias in answering the question unintentionally. Thank you for understanding!
One possible approach: Recursive generator to find all values for a single key anywhere within a nested dict: def find(nested_dict, key): if key in nested_dict: yield nested_dict[key] for v in nested_dict.values(): if isinstance(v, dict): yield from find(v, key) find(test_dict, "3") # {"b31" : 31} # {"c31" : 55} Helper to access a concrete path of keys inside a dict: def access(obj, bits): for bit in bits: obj = obj[bit] # this can cause errors: obj not a dict or key missing return obj access(test_dict, ["d", "3", "c31"]) # 55 access(test_dict, ["b", "4"]) # 4 Final value collection: def get_values(nested_dict, search_path): # e.g. search_path "[d][3][c31]" start, *tail = search_path.strip("[]").split("][") # start: "d" # tail: ["3", "c31"] for d in find(nested_dict, start): # e.g. all occurrences of "d" try: yield access(d, tail) # then access e.g. ["3", "c31"] inside of them except (KeyError, TypeError): pass # key missing, d or any value down the path not a dict >>> list(get_values(test_dict, "[d][3][c31]")) [55] >>> list(get_values(test_dict, "[3][c31]")) [55] >>> list(get_values(test_dict, "[c31]")) [55] >>> list(get_values(test_dict, "[2]")) [4711, 9] >>> list(get_values(test_dict, "[b][2]")) [4711] This returns a list because there could be more than 1 match. It can easily be modified to just return the first one by changing yield to return in the get_values function.
3
2
76,518,869
2023-6-20
https://stackoverflow.com/questions/76518869/tweepy-errors-forbidden-403-forbidden-issue-with-twitter-api-authentication-u
I'm encountering tweepy.errors.Forbidden: 403 Forbidden When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal. while trying to run the following code that fetches a user's post history using the Twitter API and Tweepy: client = tweepy.Client(bearer_token=bearer_token) tweets = client.search_recent_tweets(query=f'from:{user_handle}') My app does seem to be connected to a project (see img) I have come across some links that suggest it might be a Twitter issue related to API authentication. However, I would like to confirm if this is indeed the case and if there are any possible solutions or workarounds for this problem. Links indicating it might be a Twitter issue: https://github.com/twitterdev/Twitter-API-v2-sample-code/issues/58 https://twittercommunity.com/t/when-authenticating-requests-to-the-twitter-api-v2-endpoints-you-must-use-keys-and-tokens-from-a-twitter-developer-app-that-is-attached-to-a-project-you-can-create-a-project-via-the-developer-portal/189699 I would greatly appreciate any insights, explanations, or potential solutions to resolve this issue.
Your app has the Free access tier which only allows: Posting tweets with the Twitter API v2 Media Upload and Login With Twitter with the Twitter API v1.1 To do anything else, you need at least the Basic access tier, which costs some money. The access tiers are documented in https://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api And of course you should use a recent version of Tweepy that supports the V2 API.
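As a rough illustration of the tier difference (the credentials are placeholders, and the commented-out call at the end is the one from the question that returns 403 Forbidden on the Free tier):

import tweepy

# OAuth 1.0a user-context credentials from your developer app (placeholders)
client = tweepy.Client(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

# Posting a tweet (v2 "manage Tweets") is allowed on the Free tier
client.create_tweet(text="hello from the API")

# Search endpoints need at least the Basic tier:
# client = tweepy.Client(bearer_token="...")
# client.search_recent_tweets(query="from:some_handle")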
5
9
76,517,809
2023-6-20
https://stackoverflow.com/questions/76517809/how-can-i-make-this-function-more-numerically-stable
The following function is supposed to work similarly to pow(x, 1/k) but to be symmetric around the line y = 1 - x as well as not having a 0 or 1 slope at either end of [0, 1]: def sym_gamma(x, k): if k == 1.0: return x a = 1.0 / k - 1.0 b = 1.0 / a c = k + 1.0 / k - 2.0 return 1.0 / (a - c * x) - b As can be seen, it is not defined when k = 1, so when that is the case, I simply return x. However, this special case handling is not enough since the function also behaves poorly when k is not equal to but very close to 1.0. For example sym_gamma(0.5, 1.00000001) yields 0.0 while it's supposed to return something very close to 0.5. How can I achieve the same thing without the poor stability? I know that I can introduce a tolerance with respect to k equaling 1.0 but it feels like a hack and I would also want to make sure that the function is perfectly smooth with regard to k.
Simplifying your expression seems to help with the precision. Numerical errors tend to accumulate in each operation. Thus, reducing the number of operations will reduce the chance of numerical errors. We can notice that: a = (1 - k) / k b = k / (1 - k) c = (1 - k) ** 2 / k a - c * x = (1 - k) * (1 + x*k - x) / k 1.0 / (a - c * x) - b = x*k / (1 - x * (1 - k)) Then you can simply rewrite your method: def sym_gamma(x, k): return x*k / (1 - x * (1 - k)) Instead of performing several divisions, only one division is computed. This method returns 0.5000000025 for sym_gamma(0.5, 1.00000001).
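A quick sanity check of the simplified closed form (the spot-check values are mine apart from the 0.5000000025 figure quoted above; the symmetry test uses the fact that reflecting a point across y = 1 - x maps (x, y) to (1 - y, 1 - x)):

def sym_gamma(x, k):
    return x * k / (1 - x * (1 - k))

# k == 1 no longer needs a special case
assert sym_gamma(0.5, 1.0) == 0.5

# stable near k = 1 (the case that previously returned 0.0)
print(sym_gamma(0.5, 1.00000001))  # ~0.5000000025

# symmetry about the line y = 1 - x
x, k = 0.3, 2.5
y = sym_gamma(x, k)
assert abs((1 - sym_gamma(1 - y, k)) - x) < 1e-12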
3
5
76,488,582
2023-6-16
https://stackoverflow.com/questions/76488582/python-proper-way-to-run-an-async-routine-in-a-pytest-fixture
The test below passes, but I have doubts that I am using asyncio correctly: The code mixes asyncio and threading The test is passing but never exits (probably because the "loop.run_until_complete" never ends) import asyncio import threading import pytest import websockets async def echo(websocket): async for message in websocket: await websocket.send(message) async def websocket_server(): async with websockets.serve(echo, "localhost", 8765): await asyncio.Future() def _run_server(): loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.run_until_complete(websocket_server()) loop.close() @pytest.fixture def run_server(): thread = threading.Thread(target=_run_server) thread.start() yield thread # no idea how to stop the loop here thread.join() @pytest.mark.asyncio async def test_websocket(run_server): async with websockets.connect("ws://localhost:8765") as websocket: await websocket.send("Hello!") response = await websocket.recv() assert response == "Hello!" (note: for stopping the loop I attempted the solution proposed here (How to stop websocket server created with websockets.serve()?) but this resulted in the server not starting)
You need some other code in the thread running the server to receive a signal from the main thread and shut itself down. Fortunately, due to asyncio nature, this control can be built in a separate function, without interfering at all with the function implementing the server itself. Only the function that creates the loop and calls the server task have to arrange for some code that will check for this signal from the other thread to arrive, in another task - asyncio will take care that both tasks run in turn. The proper way to communicate across threads is to use a queue - though in this case, even a module level (global) variable would work. Note that even though there are "asyncio queues" - in this case we want to send a message from one thread to another, and there are no two async tasks trying to read it in parallel, so we use the "traditional" multi-threading Queue class in the queue module. Also, not related, but I changed the code starting the asyncio loop to the new way, using asyncio.run, without all the boilerplate that was needed in the first Python versions featuring asyncio. import asyncio import threading import pytest import websockets from queue import Queue, Empty async def echo(websocket): async for message in websocket: await websocket.send(message) async def websocket_server(): async with websockets.serve(echo, "localhost", 8765): await asyncio.Future() async def coordinate(q): server = asyncio.create_task(websocket_server()) while True: await asyncio.sleep(0) # this is necessary to allow the asyncio loop to switch tasks. try: q.get_nowait() except Empty: pass else: # block will run whenever there is _any_ message in the queue. server.cancel() return server.cancel() def _run_server(q): asyncio.run(coordinate(q)) @pytest.fixture def run_server(): command = Queue() thread = threading.Thread(target=_run_server, args=(command,)) thread.start() yield thread command.put("quit") thread.join() @pytest.mark.asyncio async def test_websocket(run_server): async with websockets.connect("ws://localhost:8765") as websocket: await websocket.send("Hello!") response = await websocket.recv() assert response == "Hello!" A second method, without the need for the message-monitoring code in the server thread, is simply to make a call to cancel the server task from the thread running the tests. Asyncio has a prevision for that in the call loop.call_soon_threadsafe - we just need a reference to the loop and the server task (so we can get its .cancel method) in the original thread - which can be done with module level (global) variables. The "run_server" function won't return, so the global variables are needed as their values can be checked in the parent thread as soon as they are set. Otherwise, if you don't want to resort to these due to their global state the threading queue could be used to post the "loop" and "server" objects from the child thread to the fixture code, just as well. Using global variables would prevent the tests from running in parallel properly. 
import asyncio import threading import pytest import websockets async def echo(websocket): async for message in websocket: await websocket.send(message) async def websocket_server(): async with websockets.serve(echo, "localhost", 8765): await asyncio.Future() def _run_server(): global loop, server loop = asyncio.new_event_loop() server = loop.create_task(websocket_server()) try: loop.run_until_complete(server) except asyncio.CancelledError: pass loop.close() @pytest.fixture def run_server(): thread = threading.Thread(target=_run_server) thread.start() yield thread loop.call_soon_threadsafe(server.cancel) thread.join() @pytest.mark.asyncio async def test_websocket(run_server): async with websockets.connect("ws://localhost:8765") as websocket: await websocket.send("Hello!") response = await websocket.recv() assert response == "Hello!" This time around we need an explicit reference to the asyncio loop object itself, so instead of calling asyncio.run, we do the "create_loop", "run_until_complete" calls. (Thanks for providing the complete, self-contained, executable, minimal example - without which I would not had spent time with this question)
6
3
76,501,267
2023-6-18
https://stackoverflow.com/questions/76501267/randomly-generate-all-unique-pair-wise-combination-of-elements-between-two-list
I have two lists: a = [1, 2, 3, 5] b = ["a", "b", "c", "d"] and would like to generate all possible combinations with a python generator. I know I could be doing: combinations = list(itertools.product(a,b)) random.shuffle(combinations) But that one has an extreme memory cost, as I would have to hold all possible combinations in memory even if I only wanted two random unique combinations. My target is to get a python generator whose memory cost increases as more iterations are requested from it, reaching the same O memory cost as itertools at max iterations. I have this for now: def _unique_combinations(a: List, b: List): """ Creates a generator that yields unique combinations of elements from a and b in the form of (a_element, b_element) tuples in a random order. """ len_a, len_b = len(a), len(b) generated = set() for i in range(len_a): for j in range(len_b): while True: # choose random elements from a and b element_a = random.choice(a) element_b = random.choice(b) if (element_a, element_b) not in generated: generated.add((element_a, element_b)) yield (element_a, element_b) break But it's flawed, as it can theoretically run forever if the random.choice lines are unlucky. I'm looking to modify that existing generator so it generates the indexes randomly within a fixed amount of time; it will be okay to keep track of them, as this will be a linear increase in memory cost and not an exponential one. How could I modify that random index generator to be bound in time?
We create a sequence using a prime number and one of its primitive roots modulo n that visits each number in an interval exactly once. More specifically we are looking for a generator of the multiplicative group of integers modulo n. We have to pick our prime number a little larger than the product len(a)*len(b), so we have to account for the cases in which we'd get index errors. import random from math import gcd import math def next_prime(number): if number < 0: raise ValueError('Negative numbers can not be primes') if number <= 1: return 2 if number % 2 == 0: number -= 1 while True: number += 2 max_check = int(math.sqrt(number)) + 2 for divider in range(3, max_check, 2): if number % divider == 0: break else: return number def is_primitive_root(a, n): phi = n - 1 factors = set() for i in range(2, int(phi ** 0.5) + 1): if phi % i == 0: factors.add(i) factors.add(phi // i) for factor in factors: if pow(a, factor, n) == 1: return False return True def find_random_primitive_root(n): while True: a = random.randint(2, n - 1) if gcd(a, n) == 1 and is_primitive_root(a, n): return a def sampler(l): close_prime = next_prime(l) state = root = find_random_primitive_root(close_prime) while state > l: state = (state * root) % close_prime # Inlining the computation leads to a 20% speed up yield state - 1 for i in range(l - 1): state = (state * root) % close_prime while state > l: state = (state * root) % close_prime yield state - 1 Then we use a mapping from 1D -> 2D to "translate" our sequence number into a tuple and yield the result. def _unique_combinations(a, b): cartesian_product_cardinality = len(a) * len(b) sequence = sampler(cartesian_product_cardinality) len_b = len(b) # Function calls are expensive in python and this line yields a 10% total speed up for state in sequence: yield a[state // len_b], b[state % len_b] from itertools import product a = [1, 2, 3, 5] b = ["a", "b", "c", "d"] u = _unique_combinations(a, b) assert sorted(u) == sorted(product(a, b)) I started benchmarking the various approaches. For merging two lists of length 1000, the divmod solution by @gog already underperforms terribly, so I'm going to exclude it from further testing: kelly took 0.9156949520111084 seconds divmod took 41.20149779319763 seconds prime_roots took 0.5146901607513428 seconds samwise took 0.698538064956665 seconds fisher_yates took 0.902874231338501 seconds For the remaining algorithms I benchmarked the following import pandas as pd import timeit import random from itertools import combinations from math import gcd # Define the list lengths to benchmark list_lengths = [10,20,30,100,300,500,1000,1500,2000,3000,5000] num_repetitions = 2 results_df = pd.DataFrame(columns=['Approach', 'List Length', 'Execution Time']) for approach, function in approaches.items(): for length in list_lengths: a = list(range(length)) b = list(range(length)) execution_time = timeit.timeit(lambda: list(function(a, b)), number=num_repetitions) results_df = results_df.append({ 'Approach': approach, 'List Length': length, 'Execution Time': execution_time }, ignore_index=True) All in all, I think all of the approaches are somewhat similar. All tested approaches fall in O(|a|*|b|) time-complexity wise. Memory-wise the prime roots approach wins just because all other approaches need to keep track of O(|a|*|b|) elements, whereas the prime roots approach doesn't require that. Distribution-wise the prime roots approach is absolutely the worst because it's not actually random but rather a difficult-to-predict deterministic sequence.
In practice the sequences should be "random" enough. Credit to this stack overflow answer which inspired the solution.
3
1
76,491,765
2023-6-16
https://stackoverflow.com/questions/76491765/docker-buildx-failing-with-problem-executing-scripts-aptupdatepost-invoke
I have a docker image building through a circle.ci pipeline, it's pulling an ECR image from AWS and hosted on EB/EC2 and its failing to build continuously with this error: #5 0.329 Get:1 http://deb.debian.org/debian bookworm InRelease [147 kB] #5 0.339 Get:2 http://deb.debian.org/debian bookworm-updates InRelease [52.1 kB] #5 0.339 Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB] #5 0.401 Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8904 kB] #5 0.548 Get:5 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [27.7 kB] #5 1.597 Fetched 9179 kB in 1s (7185 kB/s) #5 1.597 Reading package lists... #5 2.165 E: Problem executing scripts APT::Update::Post-Invoke 'rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true' #5 2.165 E: Sub-process returned an error code #5 ERROR: executor failed running [/bin/sh -c apt-get update]: exit code: 100 ------ > [dev 2/7] RUN apt-get update: ------ error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c apt-get update]: exit code: 100 My Docker file FROM python:3.9-slim as dev ENV DEBIAN_FRONTEND=noninteractive # Custom cache invalidation ARG CACHEBUST=1 RUN apt-get update RUN apt-get install -y pipenv default-libmysqlclient-dev libcurl4-openssl-dev libssl-dev COPY Pipfile* ./ RUN pipenv install --system --deploy --dev ENV APP_DIR=/app/withinhealth RUN mkdir -p ${APP_DIR} WORKDIR ${APP_DIR} FROM dev as prod COPY . ./
Looks like the version of docker being used needs updating. I was also having this problem using an old version of docker (20.10.6 - don't ask). There was a Debian release a few days ago. python:3.9-slim derives from this new Debian version and there are some issues with running images based on this Debian version with older versions of docker (certificates and/or keys, updated version of glibc perhaps). Minimal steps to reproduce (with docker version 20.10.6): $ docker run --rm -it python:3.9-slim bash root@f350ecdf0140:/# apt update Get:1 http://deb.debian.org/debian bookworm InRelease [147 kB] Get:2 http://deb.debian.org/debian bookworm-updates InRelease [52.1 kB] Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB] Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8904 kB] Get:5 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [28.3 kB] Fetched 9180 kB in 2s (5210 kB/s) Reading package lists... Done E: Problem executing scripts APT::Update::Post-Invoke 'rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true' E: Sub-process returned an error code Following upgrade of docker to 24.0.2: $ docker run --rm -it python:3.9-slim bash root@6824359f6240:/# apt update Get:1 http://deb.debian.org/debian bookworm InRelease [147 kB] Get:2 http://deb.debian.org/debian bookworm-updates InRelease [52.1 kB] Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB] Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8904 kB] Get:5 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [28.3 kB] Fetched 9180 kB in 2s (5248 kB/s) Reading package lists... Done Building dependency tree... Done Reading state information... Done All packages are up to date.
5
4
76,490,589
2023-6-16
https://stackoverflow.com/questions/76490589/valueerror-when-using-model-fit-even-with-the-vectors-being-aligned
I am attempting to build a naive Bayes model for text classification. Here is a sample of the data I'm working with: df_some_observations = filtered_training.sample(frac=0.0001) df_some_observations.to_dict() The output looks like this: {'Intitulé (Ce champ doit respecter la nomenclature suivante : Code action – Libellé)_x': {40219: 'aegua00268 format oper scad htbhta fonction avance', 16820: 'aeedf50490 sort conflit facon construct', 24771: '4022mps192 prepar a lhabilit electr boho indic v personnel non elec', 34482: '3095mceg73 affirmezvous relat professionnel bas ref 7114'}, 'Nœud parent au niveau N y compris moi-même.1': {40219: 'distribu electricit rel reseau electricit ecr exploit conduit reseau electricit', 16820: 'ct competent transvers rhu ressourc humain for pilotag gestion format', 24771: 'ss sant securit prevent prf prevent risqu professionnel hcp habilit certif perm prevent risqu meti', 34482: 'nan'}, 'Thème de formation (Chemin complet)': {40219: 'distribu electricit rel reseau electricit ecr exploit conduit reseau electricit', 16820: 'ct competent transvers rhu ressourc humain for pilotag gestion format', 24771: 'ss sant securit prevent prf prevent risqu professionnel hcp habilit certif perm prevent risqu meti', 34482: 'in ingenier esp equip sous pression'}, 'Description du champ supplémentaire : Objectifs de la formation': {40219: 'nan', 16820: 'nan', 24771: 'prepar a lhabilit electr boho indic v autoris special lissu cet format stagiair doit connaitr risqu electr savoir sen proteg doit etre capabl deffectu oper simpl dexploit suiv certain methodolog', 34482: 'nan'}, 'Objectifs': {40219: 'nan', 16820: 'nan', 24771: 'nan', 34482: 'nan'}, 'Programme de formation': {40219: 'nan', 16820: 'nan', 24771: 'notion elementair delectricit sensibilis risqu electr prevent risqu electr publiqu utec 18 510 definit oper lenviron intervent tbt b appareillag electr bt materiel protect individuel collect manoeuvr mesurag essais verif outillag electr portat a main mis situat coffret didact', 34482: 'nan'}, 'Populations concernées': {40219: 'nan', 16820: 'nan', 24771: 'personnel electricien effectu oper dordr electr', 34482: 'nan'}, 'Prérequis': {40219: 'nan', 16820: 'nan', 24771: 'personnel non electricien effectu oper simpl remplac fusibl rearm disjoncteur rel thermiqu', 34482: 'nan'}, "Description du champ supplémentaire : Commanditaire de l'action": {40219: 'nan', 16820: 'nan', 24771: 'nan', 34482: 'nan'}, "Organisme dispensant l'action": {40219: 'local sei', 16820: 'intern edf', 24771: 'intern edf', 34482: 'intern edf'}, 'Durée théorique (h)': {40219: 14.0, 24771: 11.0, 34482: 14.0}, 'Coût de la catégorie Coût pédagogique': {40219: 0.0, 16820: 0.0, 24771: 0.0, 34482: 0.0}, 'Coût de la catégorie Coût logistique': {40219: 0.0, 16820: 0.0, 24771: 0.0, 34482: 0.0}, I started by splitting the data after removing some unnecessary columns: (my target variable is in column 15) df_training = filtered_training.sample(frac=0.8, random_state=42) df_test = filtered_training.drop(df_training.index) X_train = df_training.iloc[:,:14] y_train = df_training.iloc[:,15] X_test = df_test.iloc[:,:14] y_test = df_test.iloc[:,15] When building the model with: model = make_pipeline(TfidfVectorizer(), MultinomialNB()) model.fit(X_train, y_train) predicted_categories = model.predict(X_test) I receive the following error when executing model.fit(X_train, y_train): ValueError: Found input variables with inconsistent numbers of samples: [14, 35478] Additional information that may be helpful: 
np.shape(X_train) #(35478, 14) np.shape(y_train) #(35478,) np.shape(X_test) #(8870, 14) np.shape(y_test) #(8870,)
I think that the main problem is that TfidfVectorizer is able to work with one-dimensional text data only (as I see it from here). That's why when it tries to convert several columns with text data it tries to do it for column names for some reason. In your case I see 2 ways to solve this problem: If you want to apply TfidfVectorizer for each column individually, it would be better to do it like this for example: from sklearn.compose import ColumnTransformer column_transformer = ColumnTransformer([(x, TfidfVectorizer(), x) for x in X_train.columns]) # make sure that all columns contain text data model = make_pipeline(column_transformer, MultinomialNB()) model.fit(X_train, y_train) predicted_categories = model.predict(X_test) But if you want to apply one vocabulary for your columns, then I would recommend doing it like this: nex_X_train = X_train.iloc[:,0] for x in X_train.columns[1:]: nex_X_train = nex_X_train + ' ' + X_train[x] nex_X_test = X_test.iloc[:,0] for x in X_test.columns[1:]: nex_X_test = nex_X_test + ' ' + X_test[x] model = make_pipeline(TfidfVectorizer(), MultinomialNB()) model.fit(nex_X_train, y_train) predicted_categories = model.predict(nex_X_test)
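If every feature column is (or can be coerced to) text, the per-row concatenation in the second approach can also be written more compactly; a small sketch, assuming NaN values should simply become empty strings:

# Join all text columns into a single document per row before vectorizing
new_X_train = X_train.fillna('').astype(str).agg(' '.join, axis=1)
new_X_test = X_test.fillna('').astype(str).agg(' '.join, axis=1)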
4
1
76,509,000
2023-6-19
https://stackoverflow.com/questions/76509000/large-matrix-multiplication-with-low-memory-usage-in-numpy
I have a complex matrix multiplication with several hundred thousand rows and columns. At some point the memory usage grows to 100% and then the computer freezes and I have to restart it manually. I have tried with Numba (writing the code inside a function with a decorator) and Dask (transforming the numpy arrays with da.from_array(var, chunk)) without success. I am not an expert in any of those. I have read plenty of similar questions to this without finding a good solution for my problem. A minimal reproducible example could be m = 100000 n = 100000 a1 = np.random.rand(m) a2 = np.random.rand(n) c = np.random.rand(m)+1j*np.random.rand(m) b = np.random.rand(n)+1j*np.random.rand(n) A = np.exp(1j*np.outer(a1,a2)) d = c*np.dot(A,b) What would be the best option of solving it in terms of memory usage? (not necessarily the fastest)
Main issue The main issue is that 1j*np.outer(a1,a2) takes 100_000 * 100_000 * (8 * 2) = 149 GiB. On top of that, np.exp needs to read this matrix and produce another one of the same size, so you need at least ~300 GiB of RAM just for this. This is HUGE and inefficient. You should avoid creating the matrix A at any price (including similar temporary matrices). Fast memory-efficient solution Numba can help in this case: you can compute the array d on the fly, avoiding huge temporary matrices. Here is an optimized Numba code doing this: import numba as nb import numpy as np @nb.njit('(float64[::1], float64[::1], complex128[::1], complex128[::1])', parallel=True) def compute(a1, a2, b, c): m, n = a1.size, a2.size assert n == m # seems already mandatory in the initial code tmpDot = np.zeros(n, dtype=np.complex128) for i in nb.prange(n): for j in range(n): tmpDot[i] += np.exp(1j * (a2[j] * a1[i])) * b[j] return c * tmpDot m = 100000 n = 100000 a1 = np.random.rand(m) a2 = np.random.rand(n) c = np.random.rand(m)+1j*np.random.rand(m) b = np.random.rand(n)+1j*np.random.rand(n) d = compute(a1, a2, b, c) This code only takes a very small amount of memory compared to the initial one: only a few MiB. So it takes about 100_000 times less memory! Besides, I also expect it to run significantly faster (because it is multi-threaded and makes better use of the CPU caches as well as the RAM). It takes only 17.1 seconds on my machine (while I cannot even run the initial code)!
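If Numba is not available, a different technique with the same goal (my suggestion, not part of the answer above) is to stream the computation in row blocks with plain NumPy, so only a small slice of the implicit matrix exists at any time; the block size is a tunable assumption, with memory per block roughly chunk * n * 16 bytes:

import numpy as np

def compute_chunked(a1, a2, b, c, chunk=256):
    # d = c * (exp(1j * outer(a1, a2)) @ b), computed one block of rows at a time
    out = np.empty(a1.size, dtype=np.complex128)
    for start in range(0, a1.size, chunk):
        block = np.exp(1j * np.outer(a1[start:start + chunk], a2))
        out[start:start + chunk] = block @ b
    return c * out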
5
4
76,509,992
2023-6-19
https://stackoverflow.com/questions/76509992/add-a-column-with-the-new-value-from-a-tuple-value-in-another-column
I have this df: df = pd.DataFrame( {'loss': [0.044, 0.044, 0.038, 0.037, 0.036], 'code': ["('ac',)", "('ac', 'be')", "('ab', 'ac', 'be')", "('ab', 'ac', 'be', 'fi')", "('ab', 'ac', 'be', 'de', 'fi')"]} ) df loss code 0 0.044 ('ac',) 1 0.044 ('ac', 'be') 2 0.038 ('ab', 'ac', 'be') 3 0.037 ('ab', 'ac', 'be', 'fi') 4 0.036 ('ab', 'ac', 'be', 'de', 'fi') Now I want to add a new column added-code containing the new value introduced in the code column. Expected results: loss code added-code 0 0.044 ('ac',) ac 1 0.044 ('ac', 'be') be 2 0.038 ('ab', 'ac', 'be') ab 3 0.037 ('ab', 'ac', 'be', 'fi') fi 4 0.036 ('ab', 'ac', 'be', 'de', 'fi') de
Assuming there is one new value per row, you can convert to tuples, explode and drop_duplicates: from ast import literal_eval df['added'] = (df['code'] .apply(literal_eval) .explode() .drop_duplicates() ) Output: loss code added 0 0.044 ('ac',) ac 1 0.044 ('ac', 'be') be 2 0.038 ('ab', 'ac', 'be') ab 3 0.037 ('ab', 'ac', 'be', 'fi') fi 4 0.036 ('ab', 'ac', 'be', 'de', 'fi') de Alternative, as suggested in comment, to use set operations: df['added'] = (df['code'] .apply(lambda x: set(literal_eval(x))) .diff() ) Output: loss code added 0 0.044 ('ac',) NaN 1 0.044 ('ac', 'be') {be} 2 0.038 ('ab', 'ac', 'be') {ab} 3 0.037 ('ab', 'ac', 'be', 'fi') {fi} 4 0.036 ('ab', 'ac', 'be', 'de', 'fi') {de}
2
5
76,509,045
2023-6-19
https://stackoverflow.com/questions/76509045/how-to-hide-the-errorbar-if-there-are-less-than-3-data-points-in-the-category
I want to have error bars in my bar plots when more than 3 data points are available (Condition A) but omit error bars when there are less than 3 data points for that specific condition (Condition B). I've only found options to show or hide error bars for all bars, not for specific conditions. import pandas as pd import seaborn as sns import numpy as np df = pd.DataFrame(np.random.randint(0,100,size=(15)), columns=["Value"]) df["Label"]="Condition A" df.Label[13:]="Condition B" sns.barplot(data=df, x="Label", y="Value", errorbar="sd") Actual Outcome: Error bars on all bars: Desired outcome: Error bars on condition A only:
You can use a custom errorbar function in sns.barplot. It should return a [y1, y2] iterable with the position of the min/max error: # defining a custom function to only compute # the error if there are at least 3 values def cust_error(s): if len(s)<3: return [None, None] else: avg = s.mean() std = s.std() return [avg-std, avg+std] sns.barplot(data=df, x="Label", y="Value", errorbar=cust_error) Another option could be to plot the error bars manually: import matplotlib.pyplot as plt ax = sns.barplot(data=df, x="Label", y="Value", errorbar=None) g = df.groupby('Label', sort=False) error = g['Value'].std().where(g.size()>=3) plt.errorbar(range(len(error)), g['Value'].mean(), error, ls='', color='#5F5F5F', lw=3) Output:
3
5
76,509,006
2023-6-19
https://stackoverflow.com/questions/76509006/typeerror-cannot-cast-datetimearray-to-dtype-datetime64d
I recently updated my python install from 3.11.2 to 3.11.3, pandas version is 2.0.2 I am now getting this error: TypeError: Cannot cast DatetimeArray to dtype datetime64[D] When I try to perform this: df = df[df['CancelDate'].astype('datetime64[D]') >= (datetime.now() - relativedelta(years=2))] On this dataframe: mydataset = { 'CancelDate': ["2021-09-07", "2021-07-26", "2021-11-01","2015-06-15"] } df = pandas.DataFrame(mydataset) Prior to the update, I was not getting the given error. Could anyone help me realize the error in my ways?
To solve this issue, you can use the pd.to_datetime function to convert the 'CancelDate' column to a DatetimeArray before performing the comparison. df['CancelDate'] = pd.to_datetime(df['CancelDate']) # Convert to DatetimeArray df = df[df['CancelDate'] >= (datetime.now() - relativedelta(years=2))]
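For completeness, a self-contained version of the fix with the imports the snippet assumes, using the sample data from the question:

import pandas as pd
from datetime import datetime
from dateutil.relativedelta import relativedelta

df = pd.DataFrame({'CancelDate': ["2021-09-07", "2021-07-26", "2021-11-01", "2015-06-15"]})
df['CancelDate'] = pd.to_datetime(df['CancelDate'])  # parse strings into datetime64[ns]
cutoff = datetime.now() - relativedelta(years=2)
df = df[df['CancelDate'] >= cutoff]  # no astype('datetime64[D]') cast needed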
5
3
76,489,928
2023-6-16
https://stackoverflow.com/questions/76489928/error-when-importing-pandas-importerror-cant-determine-version-for-numexpr
I am having problems with importing the pandas package. I used the following command to import it: import pandas as pd However, I receive the following error message: Traceback (most recent call last): Cell In[54], line 1 import pandas as pd File ~\AppData\Local\anaconda3\lib\site-packages\pandas\__init__.py:48 from pandas.core.api import ( File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\api.py:27 from pandas.core.arrays import Categorical File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\arrays\__init__.py:1 from pandas.core.arrays.arrow import ArrowExtensionArray File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\arrays\arrow\__init__.py:1 from pandas.core.arrays.arrow.array import ArrowExtensionArray File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\arrays\arrow\array.py:60 from pandas.core.arraylike import OpsMixin File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\arraylike.py:21 from pandas.core.ops.common import unpack_zerodim_and_defer File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\ops\__init__.py:38 from pandas.core.ops.array_ops import ( File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\ops\array_ops.py:57 from pandas.core.computation import expressions File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\computation\expressions.py:20 from pandas.core.computation.check import NUMEXPR_INSTALLED File ~\AppData\Local\anaconda3\lib\site-packages\pandas\core\computation\check.py:5 ne = import_optional_dependency("numexpr", errors="warn") File ~\AppData\Local\anaconda3\lib\site-packages\pandas\compat\_optional.py:157 in import_optional_dependency version = get_version(module_to_get) File ~\AppData\Local\anaconda3\lib\site-packages\pandas\compat\_optional.py:84 in get_version raise ImportError(f"Can't determine version for {module.__name__}") ImportError: Can't determine version for numexpr I am using the following version of Python: Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)] Is there any way to solve this problem? Some important information is maybe that this computer has remote access to the server that I'm using via VPN. So I should only have access to the program when I am logged into the VPN.
If you are on Ubuntu Linux, you can try sudo apt-get install python-numexpr. Refer to this answer - https://askubuntu.com/questions/446644/why-do-i-get-importerror-when-trying-to-import-pandas-python-module After installing numexpr and bottleneck, you can try pip install --force-reinstall pandas or pip install --upgrade --force-reinstall pandas to ensure pandas is installed properly.
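As a quick diagnostic (this check is mine, not from the answer): pandas raises this particular error when the optional module imports but exposes no usable version attribute, which you can verify directly:

import numexpr

# pandas' optional-dependency check reads __version__; if it is missing or
# empty you get "Can't determine version for numexpr"
print(getattr(numexpr, "__version__", None))
print(numexpr.__file__)  # confirms which installation is actually being imported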
4
2
76,499,565
2023-6-18
https://stackoverflow.com/questions/76499565/python-does-not-find-module-installed-with-pipx
Debian stable wants me to install Python modules using pipx. So I do $ pipx install auditwheel $ pipx ensurepath $ python3 -m pipx ensurepath $ python3 Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import auditwheel Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'auditwheel' >>> What am I doing wrong?
From Python 3.11 onward, Debian encourages users to create a separate Python virtual environment to install Python packages. Because Debian declares its Python install to be externally-managed, pip (and other installers) will refuse to install packages system-wide. Installation is only possible in virtual environments or separate Python installs. This is because Python package installers (like pip) are unaware of the constraints that APT-managed packages have on libraries and versions. See PEP-668 for a full discussion of the problems that can occur when multiple installers operate on the same Python install. Therefore, the optimal way is to create a virtual environment, say MyEnv, and install packages therein: $ mkdir -p $HOME/.venvs # create a folder for all virtual environments $ python3 -m venv $HOME/.venvs/MyEnv # create MyEnv This will create a directory $HOME/.venvs/MyEnv with a configuration file pyvenv.cfg which includes some details for this virtual environment, such as the Python executable and Python version. Verify the version of Python in the virtual environment: $HOME/.venvs/MyEnv/bin/python --version The executables of the created virtual environment are found under $HOME/.venvs/MyEnv/bin. To install a package into the virtual environment, use $HOME/.venvs/MyEnv/bin/python -m pip install <some-package> To 'activate' the virtual environment, i.e. add its executables and variables to the shell environment, use source $HOME/.venvs/MyEnv/bin/activate Consult Python's guide to virtualenv and pip at https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments.
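After activation, a small check from inside Python confirms that the virtual environment is really the one in use; this sketch relies only on the standard library:
    import sys
    print(sys.executable)                 # should point into $HOME/.venvs/MyEnv/bin
    print(sys.prefix != sys.base_prefix)  # True when running inside a virtual environment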
21
14
76,504,640
2023-6-19
https://stackoverflow.com/questions/76504640/pandas-group-by-find-the-difference-with-respect-to-flag-ids
I have the following dataframe: id flag col_1 col_2 name 0 1 1 11 13 a 1 2 0 62 14 b 2 1 0 13 15 a 3 2 1 74 16 b 4 3 1 25 17 c 5 3 0 22 18 c I need this as the output - id col_3 col_4 name 0 1 2 2 a 1 2 -12 -2 b 2 3 -3 1 c I need to group by id and name and, within each group, subtract the col_1 value where flag == 1 from the col_1 value where flag == 0 (and likewise for col_2). Thanks in advance.
Using simple indexing with a temporary index (set_index and reset_index) tmp = df.set_index(['flag', 'id', 'name']) out = (tmp.loc[0] - tmp.loc[1]).reset_index() Output: id name col_1 col_2 0 1 a 2 2 1 2 b -12 -2 2 3 c -3 1 Used input: df = pd.DataFrame({'id': [1, 2, 1, 2, 3, 3], 'flag': [1, 0, 0, 1, 1, 0], 'col_1': [11, 62, 13, 74, 25, 22], 'col_2': [13, 14, 15, 16, 17, 18], 'name': ['a', 'b', 'a', 'b', 'c', 'c']})
2
3
76,504,339
2023-6-19
https://stackoverflow.com/questions/76504339/pandas-update-multiple-rows-using-list
I am trying to update a pandas dataframe using lists. Dataframe with columns A, B, C A B C ------ 1 a F 2 b F 3 c F 4 d F 5 e F I have 2 lists: one contains the elements of column B that identify the rows to update, and the second contains the corresponding values to write into column C. Elements to match in column B names=['a', 'd', 'e'] Values to replace in column C values=['T', 'T', 'G'] Output after update A B C ------ 1 a T 2 b F 3 c F 4 d T 5 e G How to update the dataframe?
You can use boolean indexing combined with map: names = ['a', 'd', 'e'] values = ['T', 'T', 'G'] m = df['B'].isin(names) df.loc[m, 'C'] = df.loc[m, 'B'].map(dict(zip(names, values))) Less efficient alternatives: df['C'] = df['B'].map(dict(zip(names, values))).fillna(df['C']) df['C'] = df['C'].mask(df['B'].isin(names), df['B'].map(dict(zip(names, values)))) Output: A B C 0 1 a T 1 2 b F 2 3 c F 3 4 d T 4 5 e G
3
3
76,483,104
2023-6-15
https://stackoverflow.com/questions/76483104/why-does-calling-time-sleep-with-different-values-alter-the-execution-time-of-pa
I run this code multiple time with different SLEEP_TIME, for example SLEEP_TIME=0, SLEEP_TIME=1e-3, SLEEP_TIME=10e-3 and also omitted the time.sleep line altogether from the code. For every value of SLEEP_TIME the measured average work time changes, even though the sleep is outside the measured code. This makes zero sense to me - why would calling time.sleep change the way the process behaves even though the code absolutely does not depend on the sleep? I tested the following code with both linux and windows and the behavior is similar (though in windows omitting the sleep altogether causes the performance to degrade significantly). import numpy as np import multiprocessing import time SLEEP_TIME = 1e-3 def do_work(): total_time = 0 time_to_run = 500 for i in range(time_to_run): t0 = time.time() # start work nparr = np.ones((1000,100,30)) nparr[nparr == 0] = 1 sp = nparr.shape # to synchronize previous call # end work t1 = time.time() total_time += t1 - t0 time.sleep(SLEEP_TIME) # WHY DOES THIS MATTER???? THIS IS OUTSIDE THE WORK AND OUTSIDE MEASUREMENT print(f"avg work time: {1000 * total_time / time_to_run:.2f}ms") if __name__ == '__main__': p1 = multiprocessing.Process(target=do_work) p1.start() p2 = multiprocessing.Process(target=do_work) p2.start() p1.join() p2.join() Example results (on linux): No sleep (commenting out time.sleep) Output: avg work time: 4.50ms avg work time: 4.56ms SLEEP_TIME = 0 Output: avg work time: 4.46ms avg work time: 4.52ms SLEEP_TIME = 1e-3 Output: avg work time: 4.76ms avg work time: 4.82ms SLEEP_TIME = 10e-3 Output: avg work time: 7.05ms avg work time: 7.07ms What is happening here? Is the OS trying (and failing) to optimize my process? And how can I execute the work part as fast as possible regardless of the amount of previous sleep time? ChatGPT suggested I should add to the top of the file: import os os.environ["OMP_NUM_THREADS"] = "1" # or whatever number you choose While it improves the time of execution with large sleeps, the execution time still defers. EDIT: I fixed the join strategy like some have rightly suggested. Though it's doesn't affect the problem in question it is better to write the code correctly to avoid confusion.
I reproduced the behavior of your python script on my Ubuntu machine. In my case it was not specific to python, and I found similar performance degradation in a c++ program that sleeps between each computation. There are various mechanisms in Linux that reduce the frequency of the CPU(s) in order to save power when the system load is low. In my case, the "CPU frequency scaling governor" was set to "powersave" on all CPUs. You can check this by running: cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor In my case, changing to "performance" yields similar time measurements with and without sleep, and the measured time is now even lower than the measured time without sleeping before changing from "powersave". To change these settings, run: echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor Note that this will use more power and could lead to more heat generation, so you might want to monitor the temperature of your CPUs to make sure they don't overheat.
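If you prefer to inspect the governor from Python rather than the shell, here is a minimal read-only sketch; it assumes the standard Linux sysfs layout and does not change any settings:
    from pathlib import Path
    base = Path("/sys/devices/system/cpu")
    for gov_file in sorted(base.glob("cpu*/cpufreq/scaling_governor")):
        print(gov_file.parent.parent.name, gov_file.read_text().strip())  # e.g. "cpu0 powersave"
Changing the governor still requires root, so the echo ... | sudo tee command above remains the way to actually switch to performance.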
3
2
76,469,459
2023-6-14
https://stackoverflow.com/questions/76469459/firebase-cloud-functions-python-cannot-add-dependencies
I'm using python cloud functions in my firebase project. After initializing cloud functions, adding firebase-admin to the requirements.txt file worked, and I could test with firebase emulators:start and also successfully deploy with firebase deploy --only functions. The issue is when I try to add other packages. I added tldextract to requirements.txt and put import tldextract in main.py which causes ModuleNotFoundError: No module named 'tldextract' 127.0.0.1 - - [14/Jun/2023 00:24:10] "GET /__/functions.yaml HTTP/1.1" 500 - โฌข functions: Failed to load function definition from source: FirebaseError: Failed to parse build specification when I run firebase emulators:start or firebase deploy --only functions. It also seems like the venv folder is not being updated. I tried activating the venv and pip install -r requirements.txt which made the local execution work with firebase emulators:start, BUT after redeploying the functions, they're stilling failing in the cloud. I tried this with different packages to make sure it's not just this one specific package. But adding other pip packages to requirements.txt and importing them in main.py failed for all packages that I tested. What am I doing wrong?
The following solved the issue for me: Delete the venv folder created by firebase init functions. Create a new one as follows: python3.11 -m venv venv source venv/bin/activate pip3 install --upgrade pip python3.11 -m pip install -r requirements.txt Now deploy with firebase deploy --only functions
4
8
76,500,990
2023-6-18
https://stackoverflow.com/questions/76500990/why-is-beautifulsoup-returning-none-when-scraping-google-search-results
I'm trying to use BeautifulSoup to find the birth years of different authors. I'm working in VS Code, if that's relevant. This is my first attempt at web scraping so please explain things as clearly as possible For authors with wikipedia pages, I can successully find birth years using the following code: source_code = requests.get("a_wikipedia_url") plain_text = source_code.text soup = BeautifulSoup(plain_text, features="html.parser") finder = soup.find("span", {"class": "bday"}) if finder is not None: birth_year = finder.string[0:4] return birth_year However when I try the same thing with google search for authors with no (English) wikipedia page, I just get None. After reading this question https://stackoverflow.com/questions/62466340/cant-scrape-google-search-results-with-beautifulsoup I added a User Agent response header to requests.get (I'm using Chrome Version 114.0.5735.134 (Official Build) (64-bit) and Windows 11 Home), but all it did was print None instead of giving my AttributeError: 'NoneType' object has no attribute 'string', which is what I was getting before adding the header. This is my code: headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.5735.134 Safari/537.36"} source_code = requests.get("https://www.google.com/search?q=Guillermo+Saccomanno", headers=headers) plain_text = source_code.text soup = BeautifulSoup(plain_text, features="html.parser") google_finder = soup.find("span", {"class": "LrzXr kno-fv wHYlTd z8gr9e"}) print(google_finder.string) The result is just None - no error message, but no text. I also tried with the header Chrome version as Chrome/114.0.0.0, which is what I found online. Still gives None. I'm not sure where I'm going wrong as the syntax is identical and I copied the class name from the page source? For this particular author, I would expect google_finder.string to be "9 June 1948 (age 75 years)".
If you want to parse the birth date I'd choose a different strategy: Find a <span> tag with text "Born:" and then its next sibling. Also add the hl=en parameter to the URL to get English results: import requests from bs4 import BeautifulSoup url = 'https://www.google.com/search?q=Guillermo+Saccomanno&hl=en' headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/114.0'} soup = BeautifulSoup(requests.get(url, headers=headers).content, 'html.parser') born = soup.select_one('span:-soup-contains("Born:") + span') print(born.text) Prints: June 9, 1948 (age 75 years), Buenos Aires, Argentina
3
1
76,502,018
2023-6-18
https://stackoverflow.com/questions/76502018/tkinter-throws-importerror
I was trying to make my first Tkinter project in Python but it just shows me this: >>> from tkinter import * Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.11/tkinter/__init__.py", line 38, in <module> import _tkinter # If this fails your Python may not be configured for Tk ^^^^^^^^^^^^^^^ ImportError: libtk8.6.so: cannot open shared object file: No such file or directory I am using Arch. I have tried pip install tkinter but it didn't help. Pls help
Try # pacman -S tk I had the same problem and that fixed it.
3
2
76,498,784
2023-6-18
https://stackoverflow.com/questions/76498784/is-there-a-way-to-access-feature-names-labels-from-a-keras-model-alone
I'm trying to retrieve feature names from a Keras model in a generalized way. I want to load a pretrained model and obtain its feature names, like this: labels = model.get_feature_names() I'm looking for something that works with any Keras model, ideally as a method that takes a black box Keras model and returns the feature names that the model operates on. For example, in SKlearn I can do this: from sklearn.base import BaseEstimator if isinstance(model, BaseEstimator): return lambda m: { 'labels': m.feature_names_in_, } Is there a Keras/TensorFlow equivalent? Any insight is greatly appreciated.
No, unlike sklearn there isn't a Keras/Tensorflow equivalent for obtaining feature names. This is because tensorflow-keras models focus more on the shape and dimensions of the input tensors rather than the individual features. The model utilizes the input features as tensors, regardless of name. With some effort you might be able to track your individual features by values. However, if your end objective is building a classifier out of tabular data (and not some complex Computer-Vision or NLP tasks), you can consider using alternatives such as XGBoost (or perhaps other classifiers from sklearn itself). These classifiers are feature focused and you'll be able to work with the feature_names. They are also better suited to tabular data due to their versatility and ability to generalize well even with limited samples/features.
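What you can recover from an arbitrary Keras model are the names and shapes of its input tensors, which is not the same thing as per-column feature names but is sometimes enough for plumbing code. A hedged sketch (the input name "tabular_features" is just an illustrative choice made when building the model, not something Keras infers):
    import tensorflow as tf
    def get_input_info(model):
        # Input tensor names and shapes; not sklearn-style feature names.
        return {inp.name: tuple(inp.shape) for inp in model.inputs}
    inputs = tf.keras.Input(shape=(4,), name="tabular_features")
    outputs = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.Model(inputs, outputs)
    print(get_input_info(model))  # e.g. {'tabular_features': (None, 4)}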
3
1
76,500,916
2023-6-18
https://stackoverflow.com/questions/76500916/numba-crashes-python-with-parallel-true-flag-set
I'm trying to speed up some calculations using the Numba. The call of the nnleapfrog_integrate function lead to segmentation fault and crash of the Python process. The function works fine if the parallel=True flag is removed from its jit decorator. But then it runs in single thread. I want this fuction to run as fast as possible, thus I want it to run in multi threads to utilize all the CPU cores. from numba import jit, prange import numpy as np @jit('Tuple((f8[:,:,::1],f8[:,:,::1]))(f8[:,::1], f8[:,::1], f8[::1], i8, i8, i8, f8, f8)', nopython=True, parallel=True) def nnleapfrog_integrate(pos, vel, mass, i_steps, r_steps, dt, G, softening): N = pos.shape[0] pos_data = np.zeros((int(np.ceil(i_steps/r_steps)), N, 3)) vel_data = np.zeros((int(np.ceil(i_steps/r_steps)), N, 3)) data_idx = 0 acc = np.zeros((N,3)) for s in range(i_steps): vel += acc * dt/2.0 pos += vel * dt for i in prange(N): acc[i,0] = 0 acc[i,1] = 0 acc[i,2] = 0 for j in range(N): dx = pos[j,0] - pos[i,0] dy = pos[j,1] - pos[i,1] dz = pos[j,2] - pos[i,2] inv_r3 = (dx**2 + dy**2 + dz**2 + softening**2)**(-1.5) acc[i,0] += G * (dx * inv_r3) * mass[j] acc[i,1] += G * (dy * inv_r3) * mass[j] acc[i,2] += G * (dz * inv_r3) * mass[j] vel += acc * dt/2.0 if s % r_steps == 0: pos_data[data_idx] = pos vel_data[data_idx] = vel data_idx += 1 return pos_data, vel_data N = 10 dt = 60 pos = np.random.rand(N, 3) vel = np.random.rand(N, 3) m = np.random.rand(N) softening = 1e3 G = 6.67430e-11 t_max = 3600*24*30 i_steps = int(t_max/dt) r_steps = int(3600*24/dt) r_i, v_i = nnleapfrog_integrate(pos, vel, m, i_steps, r_steps, dt, G, softening) What I have already tried Because only the for i in prange(N): loop is suitable for parallelization, so I have separated it to the separate function getAcc which is works fine with the parallel=True flag and utilizes all the CPU cores. from numba import jit, prange import numpy as np @jit('f8[:, ::1](f8[:, ::1], f8[::1], f8, f8)', nopython=True, parallel=True) def getAcc( pos, mass, G, softening ): N = pos.shape[0] a = np.zeros((N,3)) for i in prange(N): for j in range(N): dx = pos[j,0] - pos[i,0] dy = pos[j,1] - pos[i,1] dz = pos[j,2] - pos[i,2] inv_r3 = (dx**2 + dy**2 + dz**2 + softening**2)**(-1.5) a[i,0] += G * (dx * inv_r3) * mass[j] a[i,1] += G * (dy * inv_r3) * mass[j] a[i,2] += G * (dz * inv_r3) * mass[j] return a @jit('Tuple((f8[:,:,::1],f8[:,:,::1]))(f8[:,::1], f8[:,::1], f8[::1], i8, i8, i8, f8, f8)', nopython=True) def nleapfrog_integrate(pos, vel, mass, i_steps, r_steps, dt, G, softening): N = pos.shape[0] pos_data = np.zeros((int(np.ceil(i_steps/r_steps)), N, 3)) vel_data = np.zeros((int(np.ceil(i_steps/r_steps)), N, 3)) data_idx = 0 acc = getAcc(pos, mass, G, softening) for i in range(i_steps): vel += acc * dt/2.0 pos += vel * dt acc = getAcc( pos, mass, G, softening ) vel += acc * dt/2.0 if i % r_steps == 0: pos_data[data_idx] = pos vel_data[data_idx] = vel data_idx += 1 return pos_data, vel_data N = 10 dt = 60 pos = np.random.rand(N, 3) vel = np.random.rand(N, 3) m = np.random.rand(N) softening = 1e3 G = 6.67430e-11 t_max = 3600*24*30 i_steps = int(t_max/dt) r_steps = int(3600*24/dt) r_i, v_i = nleapfrog_integrate(pos, vel, m, i_steps, r_steps, dt, G, softening) But it turned out to be more than 3 times slower than the single threaded version of the original function in which this cycle was inlined. In [4]: %timeit r_i, v_i = nleapfrog_integrate(pos, vel, m, i_steps, r_steps, dt, G, softening) 8.51 s ยฑ 46.4 ms per loop (mean ยฑ std. dev. 
of 7 runs, 1 loop each) In [5]: %timeit r_i, v_i = nnleapfrog_integrate(pos, vel, m, i_steps, r_steps, dt, G, softening) 2.53 s ± 18.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Therefore, for the best performance, I need the original function with the inlined for i in prange(N): loop to run in multiple threads.
The parallelisation of the i-based loop is not efficient because creating and synchronizing threads is expensive. Indeed, this overhead is usually at least dozens of microseconds on PC and often significantly even bigger on computing server (mainly because of the additional cores). The thing is there is i_steps=43200 iteration so this overhead will result in few seconds. There is not enough work to use threads efficiently with N=10. Besides, note that there is a bug in Numba 0.57.0 causing a segmentation fault on this code so I am not sure this is even safe to parallelize it. Fortunately, the serial code can be optimized: x * y**(-1.5) is not efficient because Numba use the expensive exponential function pow to compute it. You can use x / (y * sqrt(y)) instead. This is significantly faster because most CPUs have an integrated hardware unit to compute square root and division relatively efficiently. The fastmath option is not enabled so Numba cannot assume that x*y is equal to y*x preventing some optimizations. Enabling this flag can be dangerous, but the optimization can be done manually by pre-computing values in the inner loop. Here is the resulting optimized code: @jit('Tuple((f8[:,:,::1],f8[:,:,::1]))(f8[:,::1], f8[:,::1], f8[::1], i8, i8, i8, f8, f8)', nopython=True) def nnleapfrog_integrate(pos, vel, mass, i_steps, r_steps, dt, G, softening): N = pos.shape[0] pos_data = np.zeros((int(np.ceil(i_steps/r_steps)), N, 3)) vel_data = np.zeros((int(np.ceil(i_steps/r_steps)), N, 3)) data_idx = 0 acc = np.zeros((N,3)) for s in range(i_steps): vel += acc * dt/2.0 pos += vel * dt for i in range(N): acc[i,0] = 0.0 acc[i,1] = 0.0 acc[i,2] = 0.0 for j in range(N): dx = pos[j,0] - pos[i,0] dy = pos[j,1] - pos[i,1] dz = pos[j,2] - pos[i,2] tmp1 = dx**2 + dy**2 + dz**2 + softening**2 tmp2 = G * mass[j] / (tmp1 * np.sqrt(tmp1)) acc[i,0] += tmp2 * dx acc[i,1] += tmp2 * dy acc[i,2] += tmp2 * dz vel += acc * dt/2.0 if s % r_steps == 0: pos_data[data_idx] = pos vel_data[data_idx] = vel data_idx += 1 return pos_data, vel_data This code is about 5 times faster on my machine with a i5-9600KF processor. It runs in approximately 31 ms. This means every iteration of the encompassing loop takes only 0.72 ยตs (far smaller than the overhead of thread creation/synchronization). Further optimizations include pre-computing G * mass[j] and computing the division/sqrt using SIMD instruction. The former is easy to do and the later is a bit tricky, especially in Numba.
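As an illustration of the G * mass[j] pre-computation mentioned at the end, here is a sketch of just the acceleration kernel (not the full integrator), with the product hoisted out of the double loop; it assumes the same pos, mass, G and softening as above:
    import numpy as np
    from numba import njit
    @njit
    def acc_kernel(pos, mass, G, softening):
        N = pos.shape[0]
        Gm = G * mass                      # computed once instead of N*N times
        acc = np.zeros((N, 3))
        for i in range(N):
            for j in range(N):
                dx = pos[j, 0] - pos[i, 0]
                dy = pos[j, 1] - pos[i, 1]
                dz = pos[j, 2] - pos[i, 2]
                tmp1 = dx * dx + dy * dy + dz * dz + softening * softening
                tmp2 = Gm[j] / (tmp1 * np.sqrt(tmp1))
                acc[i, 0] += tmp2 * dx
                acc[i, 1] += tmp2 * dy
                acc[i, 2] += tmp2 * dz
        return acc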
4
3
76,501,256
2023-6-18
https://stackoverflow.com/questions/76501256/assign-different-color-to-each-plt-step-line
I have a code which draws lines for the teams according to their tournament position in each game week. Pretty much I managed to make it work, except 2 things: For some reason a 4th (violet) line is drawn (teams are only 3) which goes from the top to the bottom throughout each game week. As I found out this line is drawn for every iteration (for each team) when plotting the lines. But why? Lines are drawn from x = 0 starting point, thus not aligning with the points (which are drawn correctly). Lines should be drawn from x = 1 starting point as well (according to their Game_week value). Output: What have I missed? Example of the code: import pandas as pd import matplotlib.pyplot as plt import numpy as np df = pd.DataFrame([['Team1', 1, 1], ['Team1', 2, 2], ['Team1', 1, 3], ['Team1', 5, 4], ['Team1', 1, 5], ['Team2', 2, 1], ['Team2', 3, 2], ['Team2', 4, 3], ['Team2', 4, 4], ['Team2', 3, 5], ['Team3', 3, 1], ['Team3', 4, 2], ['Team3', 3, 3], ['Team3', 2, 4], ['Team3', 2, 5] ], columns=['Team', 'Position', 'Game_week']) positions = df['Position'] weeks = df['Game_week'] teams = df['Team'].unique() print(teams) # Coordinates: y = positions x = weeks print(y) print(x) fig, ax = plt.subplots() # Labels: plt.xlabel('Game weeks') plt.ylabel('Positions') plt.xlim(-0.2, 5.2) plt.ylim(0.8, 5.2) # Inverting the y-axis: plt.gca().invert_yaxis() # x, y ticks: xi = list(np.unique(x)) yi = list(np.unique(y)) plt.xticks(xi) plt.yticks(yi) # Colors for teams: colors = {'Team1': 'tab:red', 'Team2': 'tab:blue', 'Team3': 'blue'} # Points: plt.scatter(x, y, s=45, c=df['Team'].map(colors), zorder=2) # Lines between points: for i, (team, l) in enumerate(df.groupby('Team', sort=False)): plt.step(list(zip(l['Game_week'], l['Position'])), '-', color=colors[team], linewidth=8, alpha=0.2, zorder=1) print('step:', i, '; team:', [team]) print(l) plt.show() plt.close() Thank you!
The issue is with plt.step(), where you are using zip. As per the documentation here, you just need to give the x and y values. Updating that line as below... for i, (team, l) in enumerate(df.groupby('Team', sort=False)): plt.step(l['Game_week'], l['Position'], ## No zip '-', color=colors[team], linewidth=8, alpha=0.2, zorder=1) ...will give you this plot. Does this meet your needs? You may want to change xlim to start at 0.8 perhaps.
3
2
76,500,626
2023-6-18
https://stackoverflow.com/questions/76500626/pytest-with-multiprocessing-lock-not-working-as-expected-when-running-tests-in-p
I am trying to run my pytests in parallel using the pytest plugins parallel-0.1.1 and xdist-3.2.1 along with the --tests-per-worker n flag. I have a set of tests that require a preprocessing step which must be run in a critical section. This section is protected by a multiprocessing lock to avoid simultaneous execution by multiple workers. However, despite using the lock, the workers enter the critical section simultaneously, leading to synchronization problems. Here is a simplified version of the problematic code: Test code: import pytest @pytest.mark.parametrize("preprocess", ["config1"], indirect=True) def test_example1(preprocess): # Use the preprocessed data in the test print(f"Test using preprocessed data: {preprocess}") # do something with preprocess @pytest.mark.parametrize("preprocess", ["config2"], indirect=True) def test_example2(preprocess): # Use the preprocessed data in the test print(f"Test using preprocessed data: {preprocess}") # do something with preprocess conftest.py file: import pytest import multiprocessing lock = multiprocessing.Lock() @pytest.fixture def preprocess(request): with lock: # critical section Why is my lock not preventing simultaneous entry into the critical section when running the tests in parallel? How can I resolve this synchronization problem? I appreciate any assistance with this matter!
A library such as https://pypi.org/project/fasteners/ has better locking mechanisms for the goal you're trying to accomplish. You want a lock based around a file so that different processes don't create different locks. import fasteners lock = fasteners.InterProcessLock('path/to/lock.file') with lock: ... # exclusive access Then your code becomes thread-safe / process-safe.
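Applied to the fixture from the question, conftest.py could look like this sketch; the lock file path is an arbitrary choice and do_preprocessing is a hypothetical stand-in for the real critical-section work:
    # conftest.py
    import fasteners
    import pytest
    lock = fasteners.InterProcessLock("/tmp/pytest_preprocess.lock")
    @pytest.fixture
    def preprocess(request):
        with lock:
            # critical section: only one worker process runs the preprocessing at a time
            result = do_preprocessing(request.param)  # hypothetical helper
        return result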
3
2
76,499,319
2023-6-18
https://stackoverflow.com/questions/76499319/what-is-the-fastest-way-to-find-intersection-of-two-numpy-arrays-while-preservin
I have two one-dimensional NumPy arrays, A and B, of the same length. I want to find the intersection of the two arrays, meaning I want to find all the elements of A that are also present in B. The result should be a boolean array that is True when an element in array A at the index is also a member of array B, preserving the order so that I can use the result to index another array. If not for the boolean mask constraint, I would convert both arrays to sets and use the set intersection operator (&). However, I have tried using np.isin and np.in1d, and found that using plain Python list comprehension is much faster. Given the setup: import numba import numpy as np primes = np.array([ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997], dtype=np.int64) @numba.vectorize(nopython=True, cache=True, fastmath=True, forceobj=False) def reverse_digits(n, base): out = 0 while n: n, rem = divmod(n, base) out = out * base + rem return out flipped = reverse_digits(primes, 10) def set_isin(a, b): return a in b vec_isin = np.vectorize(set_isin) primes contains all prime numbers under 1000 with a total of 168. I chose it because it is of decent size and predetermined. I have performed various tests: In [2]: %timeit np.isin(flipped, primes) 51.3 ยตs ยฑ 1.55 ยตs per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each) In [3]: %timeit np.in1d(flipped, primes) 46.2 ยตs ยฑ 386 ns per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each) In [4]: %timeit setp = set(primes) 12.9 ยตs ยฑ 133 ns per loop (mean ยฑ std. dev. of 7 runs, 100,000 loops each) In [5]: %timeit setp = set(primes.tolist()) 6.84 ยตs ยฑ 175 ns per loop (mean ยฑ std. dev. of 7 runs, 100,000 loops each) In [6]: %timeit setp = set(primes.flat) 11.5 ยตs ยฑ 54.6 ns per loop (mean ยฑ std. dev. of 7 runs, 100,000 loops each) In [7]: setp = set(primes.tolist()) In [8]: %timeit [x in setp for x in flipped] 23.3 ยตs ยฑ 739 ns per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each) In [9]: %timeit [x in setp for x in flipped.tolist()] 12.1 ยตs ยฑ 76.6 ns per loop (mean ยฑ std. dev. of 7 runs, 100,000 loops each) In [10]: %timeit [x in setp for x in flipped.flat] 19.7 ยตs ยฑ 249 ns per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each) In [11]: %timeit vec_isin(flipped, setp) 40 ยตs ยฑ 317 ns per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each) In [12]: %timeit np.frompyfunc(lambda x: x in setp, 1, 1)(flipped) 25.7 ยตs ยฑ 418 ns per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each) In [13]: %timeit setf = set(flipped.tolist()) 6.51 ยตs ยฑ 44 ns per loop (mean ยฑ std. dev. of 7 runs, 100,000 loops each) In [14]: setf = set(flipped.tolist()) In [15]: %timeit np.array(sorted(setf & setp)) 9.42 ยตs ยฑ 78.9 ns per loop (mean ยฑ std. dev. 
of 7 runs, 100,000 loops each) setp = set(primes.tolist()); [x in setp for x in flipped.tolist()] takes about 19 microseconds, which is faster than NumPy methods. I am wondering why this is the case, and if there is a way to make it even faster. (I wrote all the code, and I used the AI suggested edit feature to edit the question)
Why the provided solutions are not efficient np.isin has two implementation. The first consists in sorting the two arrays (using a merge-sort) and then merge them. This solution runs in O(n log n + m log m + n+m) that is O(n log n + m log m). The other implementation is based on a lookup table. This second implementation create an array of boolean value based on the second array and then check if lookupTable[item] is set for each item of the first array. This second implementation can be faster for arrays containing small integers (this is a bit more complicated but explained in the documentation). This second solution runs in O(n + m + max(arr2)) (and even theoretically O(n + m) on some platforms with a big hidden constant). However, it can use much more memory. Numpy try to pick the best one by default. In your case, the two arrays are small and the integers inside are also relatively small so the two solution are relatively fast. For bigger arrays with small integers, the lookup table should be faster. The thing is Numpy is not efficient here because the overhead of calling a Numpy function like this is relatively big compared to the actual computation. Besides, the second array is already sorted so sorting it again is not efficient. Faster implementation One could just use a binary search to find the value of the first array in the second one without allocating any additional temporary array for exemple. You can use Numba so to reduce the overhead of calling Numpy several functions on small arrays and even fill the result faster using a jitted loop. Here is the final implementation: # Assume primes is sorted @numba.njit('bool_[:](int64[:],int64[:])') def compute(flipped, primes): assert primes.size > 0 and primes.size == flipped.size res = np.empty(flipped.size, dtype=np.bool_) idx = np.searchsorted(primes, flipped) for i in range(res.size): if idx[i] < len(primes) and primes[idx[i]] == flipped[i]: res[i] = True else: res[i] = False return res This solution is 15 times faster than np.isin(flipped, primes) on my machine, and faster than all other alternative (by a significant margin). It only takes about 2 ยตs on the provided input. It also scale relatively well. Fastest solution for large arrays For huge arrays, using a lookup table should be faster since the above solution runs in O(n log m) time while a lookup table implementation can run in linear time here. That being said, a lookup table also use significantly more memory. The best approach is to use a Bloom filter to make the lookup table much more compact (thanks to hashing). However, this solution is significantly more complex to implement. There is an exemple here for setdif1d. Fastest solutions often comes at the price of a significantly more complex code (there is no free lunch).
2
5
76,499,877
2023-6-18
https://stackoverflow.com/questions/76499877/create-subplot-by-overlapping-two-dataframes-for-every-group-id
I have the below two dataframe: #Load the required libraries import pandas as pd import matplotlib.pyplot as plt #Create dataset_1 data_set_1 = {'id': [1, 1, 1, 1, 1, 1,1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3,3, 4, 4, 4, 4, 4,4,], 'cycle': [0.0, 0.2,0.4, 0.6, 0.8, 1,1.2,1.4,1.6,1.8,2.0,2.2, 0.0, 0.2,0.4, 0.6,0.8,1.0,1.2, 0.0, 0.2,0.4, 0.6, 0.8,1.0,1.2,1.4, 0.0, 0.2,0.4, 0.6, 0.8,1.0,], 'Salary': [6, 7, 7, 7,8,9,10,11,12,13,14,15, 3, 4, 4, 4,4,5,6, 2, 8,9,10,11,12,13,14, 1, 8,9,10,11,12,], 'Children': ['Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','Yes', 'Yes', 'No','No', 'Yes','Yes', 'Yes', 'Yes', 'No','Yes', 'Yes','Yes',], 'Days': [141, 123, 128, 66, 66, 120, 141, 52,96, 120, 141, 52, 141, 96, 120,120, 141, 52,96, 141, 15,123, 128, 66, 120, 141, 141, 141, 141,123, 128, 66,67,], } #Convert to dataframe_1 df_1 = pd.DataFrame(data_set_1) print("\n df_1 = \n",df_1) #Create dataset_2 data_set_2 = {'id': [1, 1, 1, 1, 1, 1,1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3,3, 4, 4, 4, 4, 4,4,], 'cycle': [0.0, 0.2,0.4, 0.6, 0.8, 1,1.2,1.4,1.6,1.8,2.0,2.2, 0.0, 0.2,0.4, 0.6,0.8,1.0,1.2, 0.0, 0.2,0.4, 0.6, 0.8,1.0,1.2,1.4, 0.0, 0.2,0.4, 0.6, 0.8,1.0,], 'Salary': [7, 8, 8, 8,8,9,14,21,12,19,14,20, 1, 6, 3, 8,4,9,8, 6, 4,9,10,4,12,13,6, 1, 4,9,10,9,4,], 'Children': ['Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','Yes', 'Yes', 'No','No', 'Yes','Yes', 'Yes', 'Yes', 'No','Yes', 'Yes','Yes',], 'Days': [141, 123, 128, 66, 66, 120, 141, 52,96, 120, 141, 52, 141, 96, 120,120, 141, 52,96, 141, 15,123, 128, 66, 120, 141, 141, 141, 141,123, 128, 66,67,], } #Convert to dataframe_2 df_2 = pd.DataFrame(data_set_2) print("\n df_2 = \n",df_2) Now, here I wish to plot the cycle vs Salary, and overlap for two dataframes for every id, in one single plot. 
Thus I need to use subplot function as such: ## Plot for all id's plt_fig_verify = plt.figure(figsize=(10,8)) ## id1: plt.subplot(4,1,1) plt.plot(df_1.groupby(by="id").get_group(1)['cycle'], df_1.groupby(by="id").get_group(1)['Salary'], 'b', linewidth = '1', label ='id1: df_1') plt.plot(df_2.groupby(by="id").get_group(1)['cycle'], df_2.groupby(by="id").get_group(1)['Salary'], 'r', linewidth = '1', label ='id1: df_2') plt.xlabel('cycle') plt.ylabel('Salary') plt.legend() ## id2: plt.subplot(4,1,2) plt.plot(df_1.groupby(by="id").get_group(2)['cycle'], df_1.groupby(by="id").get_group(2)['Salary'], 'b', linewidth = '1', label ='id2: df_1') plt.plot(df_2.groupby(by="id").get_group(2)['cycle'], df_2.groupby(by="id").get_group(2)['Salary'], 'r', linewidth = '1', label ='id2: df_2') plt.xlabel('cycle') plt.ylabel('Salary') plt.legend() ## id3: plt.subplot(4,1,3) plt.plot(df_1.groupby(by="id").get_group(3)['cycle'], df_1.groupby(by="id").get_group(3)['Salary'], 'b', linewidth = '1', label ='id3: df_1') plt.plot(df_2.groupby(by="id").get_group(3)['cycle'], df_2.groupby(by="id").get_group(3)['Salary'], 'r', linewidth = '1', label ='id3: df_2') plt.xlabel('cycle') plt.ylabel('Salary') plt.legend() ## id4: plt.subplot(4,1,4) plt.plot(df_1.groupby(by="id").get_group(4)['cycle'], df_1.groupby(by="id").get_group(4)['Salary'], 'b', linewidth = '1', label ='id4: df_1') plt.plot(df_2.groupby(by="id").get_group(4)['cycle'], df_2.groupby(by="id").get_group(4)['Salary'], 'r', linewidth = '1', label ='id4: df_2') plt.xlabel('cycle') plt.ylabel('Salary') plt.legend() plt.show() The result looks as such: However, here I need to write the codes for the subplot function four times, i.e. for all four id's of the dataframe, and then overlap. Is there any way out, by which we can have some iterative function and write the subplot function only once and get all four overalapped subplots. Can somebody please let me know how to achieve this task in Python?
I would concat the two dataframes, then use a single groupby to make the subplots : colors = {"df_1": "blue", "df_2": "red"} df = pd.concat([df_1, df_2], keys=colors) fig, axs = plt.subplots(figsize=(10, 8), nrows=2, ncols=2) for (n, g), ax in zip(df.groupby("id"), axs.flatten()): for s in df.index.levels[0]: g.loc[s].plot( x="cycle", y="Salary", xlabel="Cycle", ylabel="Salary", label=f"id {n}: {s}", color=colors[s], ax=ax ) plt.tight_layout() plt.show(); Output : If you need a single col, you can update the subplots configuration this way : fig, axs = plt.subplots(figsize=(10, 8), nrows=len(df["id"].unique()))
3
2
76,478,913
2023-6-15
https://stackoverflow.com/questions/76478913/polars-is-much-slower-than-duckdb-in-conditional-join-group-by-agg-context
For the following example, where it involves a self conditional join and a subsequent groupby/aggregate operation. It turned out that in such case, DuckDB gives much better performance than Polars (~10x on a 32-core machine). My questions are: What could be the potential reason(s) for the slowness (relative to DuckDB) of Polars? Am I missing some other faster ways of doing the same thing in Polars? import time import duckdb import numpy as np import polars as pl ## example dataframe rng = np.random.default_rng(1) nrows = 5_000_000 df = pl.DataFrame( dict( id=rng.integers(1, 1_000, nrows), id2=rng.integers(1, 10, nrows), id3=rng.integers(1, 500, nrows), value=rng.normal(0, 1, nrows), ) ) ## polars start = time.perf_counter() res = ( df.lazy() .join(df.lazy(), on=["id", "id2"], how="left") .filter( (pl.col("id3") > pl.col("id3_right")) & (pl.col("id3") - pl.col("id3_right") < 30) ) .group_by(["id2", "id3", "id3_right"]) .agg(pl.corr("value", "value_right")) .collect(streaming=True) ) time.perf_counter() - start # 120.93155245436355 ## duckdb start = time.perf_counter() res2 = ( duckdb.sql( """ SELECT df.*, df2.id3 as id3_right, df2.value as value_right FROM df JOIN df as df2 ON (df.id = df2.id AND df.id2 = df2.id2 AND df.id3 > df2.id3 AND df.id3 - df2.id3 < 30) """ ) .aggregate( "id2, id3, id3_right, corr(value, value_right) as value", "id2, id3, id3_right", ) .pl() ) time.perf_counter() - start # 18.472263277042657
EDIT: 2023-7-18 The latest polars release has brought the difference down from 15x to 2x. polars v0.18.2 1125 polars v0.18.3 140 duckdb 0.8.2-dev1 75 Original answer Streaming engine The streaming API isn't as optimized yet. Polars is a younger project than DuckDB and we haven't got as many paid developers on the project. So give us time. The next release, 0.18.3, will land a PR that can make a streaming groupby over 3.5x faster https://github.com/pola-rs/polars/pull/9346. That just shows how much we still have on the table on the streaming engine. That same optimization we still have to do for streaming joins. In short: our streaming engine is in the alpha stage. It is a work in progress. Different algorithm Other than that, the duckdb query might also be using non-equi joins under the hood, which we don't have yet in polars, so this query might not be as optimal for polars.
3
4
76,496,565
2023-6-17
https://stackoverflow.com/questions/76496565/how-to-reverse-strings-in-a-numpy-array
I want to reverse the order of characters in each string element of a NumPy array. For example, given the following input: array(['2', '3', '5', '7', '11', '13', '17', '19', '23', '29', '31', '37', '41', '43', '47', '53', '59', '61', '67', '71', '73', '79', '83', '89', '97'], dtype='<U2') I want to obtain the following output (without using Python for loop): array(['2', '3', '5', '7', '11', '31', '71', '91', '32', '92', '13', '73', '14', '34', '74', '35', '95', '16', '76', '17', '37', '97', '38', '98', '79'], dtype='<U2') I know that I can use arr[::-1] to reverse the order of elements in a NumPy array, but that isn't the topic of this question, and np.array([e[::-1] for e in arr]) is inefficient and against the point of NumPy. The array was created using a vectorized version of the base conversion function np.vectorize(to_base_str). How can I reverse the order of characters in each string element of a NumPy array using vectorization? I have searched online but have not found a solution. Note that arr[..., ::-1] does not work for string elements in a NumPy array. (Code is mine, but I did use the "AI suggested edits" feature)
np.array([e[::-1] for e in arr]) is the straightforward way of doing this, and is NOT bad numpy. Or bypass numpy entirely with [e[::-1] for e in arr.tolist()]. You could also do something similar with np.vectorize or np.frompyfunc. These might scale a bit better. 'vectorize' in numpy means using compiled methods (and operators) to do the necessary iterations in compiled code. Those are nearly all numeric operations. For strings, numpy uses Python string methods. It does not have its own compiled string operations. Even the np.char functions use Python string methods. So there's no numpy equivalent to astr[::-1]. Some comparative times: In [16]: timeit np.array([s[::-1] for s in arr]) 36.1 µs ± 151 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [17]: timeit np.array([s[::-1] for s in arr.tolist()]) 21.1 µs ± 76.5 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [18]: timeit [s[::-1] for s in arr.tolist()] 8.29 µs ± 23.2 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [20]: timeit np.vectorize(lambda s: s[::-1])(arr) 65.9 µs ± 165 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [21]: timeit np.frompyfunc(lambda s: s[::-1],1,1)(arr) 20.3 µs ± 76.5 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
2
3
76,473,134
2023-6-14
https://stackoverflow.com/questions/76473134/how-can-i-use-duckdb-read-json-auto-in-python-without-creating-a-temporary-file
I have a simple function that inserts a Python dictionary into DuckDB. How can I insert it into my table without creating a temporary file? def save_to_duckdb(data): # Connect to the Duckdb database conn = duckdb.connect('nodes_log_duck.db') # Get the table name from the "name" field in the dictionary table_name = data.get('name') # Create a temp file file_name = table_name + str(int(time.time())) with open( file_name,"w") as file: json.dump(data,file) # Create the table if it doesn't exist conn.execute(f" CREATE TABLE IF NOT EXISTS {table_name} as SELECT * FROM read_json_auto({file_name});") # Insert the dictionary data into the table conn.execute(f"INSERT INTO {table_name} FROM (SELECT * FROM read_json_auto({file_name}))") # Commit the changes to the database and close the connection conn.commit() conn.close()
It seems there is no way to insert a Python dictionary into DuckDB 0.8.1. I use Polars DataFrame for this, and based on a GitHub discussion in the DuckDB repository, someone suggested using fsspec, which works fine. Although using read_json with fsspec creates better data types for DuckDB tables. **fsspec** def save_to_duckdb(data, db_name): with duckdb.connect(db_name) as conn: # Get the table name from the "name" field in the dictionary table_name = data.get('name') if not table_name: return # Create a memory filesystem and write the dictionary data to it with fsspec.filesystem('memory').open(f'{table_name}.json', 'w') as file: file.write(json.dumps(data)) # Register the memory filesystem and create the table conn.register_filesystem(fsspec.filesystem('memory')) conn.execute(f"CREATE TABLE IF NOT EXISTS {table_name} AS SELECT * FROM read_json_auto('memory://{table_name}.json')") # Insert the data into the table conn.execute(f"INSERT INTO {table_name} SELECT * FROM read_json_auto('memory://{table_name}.json')") **Plors** def save_to_duckdb(data, db_name): # Get the table name from the "name" field in the dictionary table_name = data.get('name') if table_name is None: return # Create a polars DataFrame from the data dictionary df = pl.DataFrame(data) # Connect to the Duckdb database and insert the DataFrame into the database with duckdb.connect(db_name) as con: con.execute(f"CREATE TABLE IF NOT EXISTS {table_name} AS SELECT * FROM df") con.execute(f"INSERT INTO {table_name} SELECT * FROM df") con.commit() "Notice that this code inserts data into the database twice for the first time."
3
1
76,493,809
2023-6-16
https://stackoverflow.com/questions/76493809/is-there-a-method-to-convert-a-metpy-output-to-numpy-variable
I calculated the wind direction using Metpy. How can I extract the values and use the values as input in another part of my program? In the sample code below, I would like to display the direction as a numpy variable, so I can use it in my program. import metpy.calc as mpcalc import numpy as np # Make some fake data for us to work with np.random.seed(19990503) # So we all have the same data u = np.random.randint(0, 15, 10) * units('m/s') v = np.random.randint(0, 15, 10) * units('m/s') direction = mpcalc.wind_direction(u, v) print(direction)
To convert a metpy result from a wind calculation to a numpy array, simply use the numpy np.array function. import metpy.calc as mpcalc from metpy.units import units import numpy as np np.random.seed(19990503) u = np.random.randint(0, 15, 10)*units("m/s") v = np.random.randint(0, 15, 10)*units("m/s") direction = mpcalc.wind_direction(u, v) direction_numpy = np.array(direction)
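MetPy calculation results are typically pint Quantities (arrays with attached units), so if that holds here you can also strip the units explicitly instead of going through np.array. A sketch, continuing from the snippet above and assuming direction is a pint Quantity:
    direction_numpy = direction.magnitude   # plain numpy array, units dropped
    direction_numpy = direction.m           # .m is the short alias for .magnitude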
2
2
76,492,575
2023-6-16
https://stackoverflow.com/questions/76492575/calculate-the-signed-area-of-piecewise-constant-functions-without-using-integrat
I have defined a step function in Python using the following code. The function takes in an array a and x values, applies some calculations, and returns a step function f. Additionally, I have defined two helper functions rect and psi_j_n. I'd like to calculate the signed area of the product of step_function and psi_j_n(x, -10, 0) without using the integral because it is a rectangle, and that's the area I'm looking for. My initial attempt: signed_area = 0 for x_values in x: signed_area += step_function(x_values, a) * psi_j_n(x_values, -10, 0) signed_area is incorrect because I am missing the length of the base of the rectangle. When I calculate the area by hand, I should get 0.012. Update I used the following code to define the step function: import numpy as np import matplotlib.pyplot as plt # Define the step function def step_function(x, a): def rect(x): return np.where((x >= 0) & (x < 1), 1, 0) f = np.sum([a[k-1] * rect(x - k) for k in range(1, len(a) + 1)], axis=0) return f # Set the random seed for reproducibility np.random.seed(42) # Generate random values for a_k N = 10 a = np.array([-0.25091976, 0.90142861, 0.46398788, 0.19731697, -0.68796272, -0.68801096, -0.88383278, 0.73235229, 0.20223002, 0.41614516]) #a = np.random.uniform(-1, 1, size=N) # Define the x-values for plotting x = np.arange(0, N + 1, 0.01) # Evaluate the step function at x y = step_function(x, a) # Plot the step function plt.plot(x, y) plt.xlabel('x') plt.ylabel('f(x)') plt.title('Step Function Plot') plt.grid(True) plt.show() It produces the picture The following code defines instead the function psi_j_n(x, j, n): def psi(x): if 0 <= x < 0.5: return 1 elif 0.5 <= x < 1: return -1 else: return 0 def psi_j_n(x, j, n): return 2**(j/2) * psi(2**j * x - n) Then, I would like to calculate the product of step_function(x, a) and psi_j_n(x, j, n).
As you said, the only missing thing in your code is the base of the rectangle. You chose it to be 0.01, so why not just multiply the result by 0.01? signed_area = 0 for x_values in x: signed_area += 0.01*step_function(x_values, a) * psi_j_n(x_values, -10, 0) signed_area Note that I could have multiplied the final result by 0.01 instead, which would be more efficient (but anyway that code is not efficient at all; see next how to make it so). But putting it there makes it easier to adapt the code to cases where the area of each rectangle is not constant. More efficient version What you need is a vectorized version of psi. As is, you can call it only with scalar values, and it returns a scalar value. psi(0.1) # 1 psi(0.6) # -1 psi(2) # 0 We would like to also be able to call psi([0.1, 0.2, 2]) # array([1,-1,0]) That can be done in many ways. The trick is to avoid explicit for loops at all costs. One way could be def psi(x): return ((x>=0)&(x<1))*(2*(x<0.5)-1) Or you could use np.where as you already did for another function. Now that psi is "vectorized", and so is therefore psi_j_n, you can just compute the multiplication of psi_j_n(x,-10,0) (which now makes sense, even if x is an array) and step_function(x,a) aka y in your code. So simply psi_j_n(x,-10,0)*y And then compute the integral of that (you said you didn't want any integrals, but since your attempt is computing that integral, I take it you meant that you didn't want to have to work out the formula for this integral on paper before coding). The simplest way to do that with the rectangle method is (psi_j_n(x,-10,0)*y).sum()*0.01 Which is indeed 0.012585459687500009
3
2
76,493,293
2023-6-16
https://stackoverflow.com/questions/76493293/iterate-over-numpy-array-to-get-sub-arrays
Given the following numpy array: arr = np.array([0, 1, 2, 3, 4, 5]) what iterable would return sub-arrays of length x from arr? (Given that len(arr) is a multiple of x) x = 2 sub_arrays = [sub_arr for sub_arr in iterable(arr, x)] sub_arrays = [ np.ndarray( [0, 1] ), np.ndarray( [2, 3] ), np.ndarray( [4, 5] ) ] I know that array slicing is possible with start, stop, and step arguments, but that returns individual elements: x = 2 sub_elements = [sub_elem for sub_elem in arr[::x]] sub_elements = [0, 2, 4]
To iterate over a numpy array and obtain sub-arrays of a specific length, you can use the numpy.reshape function. By reshaping the array with the desired shape, you can obtain sub-arrays of the specified length. Here's an example: import numpy as np arr = np.array([0, 1, 2, 3, 4, 5]) x = 2 sub_arrays = np.reshape(arr, (-1, x)) In this example, np.reshape is used to reshape the array arr into sub-arrays of length x. The parameter -1 in the reshape function allows numpy to automatically determine the appropriate size for that dimension. The resulting sub_arrays will be a 2D numpy array containing the sub-arrays: array([[0, 1], [2, 3], [4, 5]]) You can also convert each sub-array into a separate ndarray if desired: sub_arrays = [np.array(sub_arr) for sub_arr in sub_arrays] This will give you a list of numpy arrays as you specified: [array([0, 1]), array([2, 3]), array([4, 5])] Have a good coding!
2
3
76,491,552
2023-6-16
https://stackoverflow.com/questions/76491552/alternative-to-deprecated-makemixeddataframe-in-pandas
Until recently, it was possible to generate sample dataframes in Pandas using functionality of pd.util.testing module: In [22]: import pandas as pd In [23]: pd.util.testing.makeMixedDataFrame() Out[23]: A B C D 0 0.0 0.0 foo1 2009-01-01 1 1.0 1.0 foo2 2009-01-02 2 2.0 0.0 foo3 2009-01-05 3 3.0 1.0 foo4 2009-01-06 4 4.0 0.0 foo5 2009-01-07 (see https://stackoverflow.com/a/65592210/22084711 for more examples) However, pd.util.testing is being deprecated. As far as I can tell, this deprecation is in favor of pd.testing. It does not include any of the functionality used for generating sample dfs (makeMixedDataFrame, makeMissingDataframe, etc.). Is this functionality being transferred to some other module? I looked but couldn't find anywhere else. I'd like to have an alternative that comes with Pandas and does not require additional dependencies like Seaborn, or downloading the dataframe from somewhere else. (I was going to ask on pandas' Github, but they require that all questions are being asked on SO first.)
Actually, there are two different testing modules (if we can say so). An official one (which is documented in the API with only four available functions as of 2.0.0+) and a second one (for internal use). So, I guess you're looking for the latter (i.e. pandas._testing): import pandas as pd #pd.__version__ #2.0.2 df = pd._testing.makeMixedDataFrame() Output: print(df) A B C D 0 0.0 0.0 foo1 2009-01-01 1 1.0 1.0 foo2 2009-01-02 2 2.0 0.0 foo3 2009-01-05 3 3.0 1.0 foo4 2009-01-06 4 4.0 0.0 foo5 2009-01-07
7
5
76,489,981
2023-6-16
https://stackoverflow.com/questions/76489981/how-to-shift-a-pcolor-plot-along-the-x-axis
I'd like to shift a pcolor plot along the x direction. But I'm not sure how to do it, as it's not as simple as using plot with a vector that specifies the x values With this code: import matplotlib.pyplot as plt import numpy as np np.random.seed(0) Z = np.random.rand(6, 5) tk = list(range(0,10+1)) fig, ax = plt.subplots(figsize=(7.2, 2.3)) ax.pcolor(Z) ax.set_xticks(tk) plt.show() It produced this plot: However I want the heatmap shifted to the right to start at x = 2, for example, like the following plot: What am I missing??? As a side question, if I swap lines 11 and 12 to: ax.set_xticks(tk) ax.pcolor(Z) I get this plot with the x axis contracted to the range [0,5]. I'm not sure why setting ticks before adding pcolor would do that?
One 'simple' way of achieving that is by just adding 2 empty (NaN) columns at the start of Z: Z = np.insert(Z, 0, np.nan, axis=1) Z = np.insert(Z, 0, np.nan, axis=1) Gives:
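An alternative that avoids padding the data is to pass explicit cell-edge coordinates to pcolor so that the mesh itself starts at x = 2. A sketch reusing the Z and ax from the question (for the 6x5 Z there must be 6 x-edges and 7 y-edges):
    x_edges = np.arange(2, 2 + Z.shape[1] + 1)   # [2, 3, 4, 5, 6, 7]
    y_edges = np.arange(Z.shape[0] + 1)          # [0, 1, 2, 3, 4, 5, 6]
    ax.pcolor(x_edges, y_edges, Z)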
2
3
76,485,082
2023-6-15
https://stackoverflow.com/questions/76485082/package-and-find-non-python-files-in-a-python-package
I'm fairly new to python packaging and I'm trying to create a command line tool so that I can send to client to interact with my service in AWS. My goal is to have a command line tool to upload files that are in the folder resources to s3 that will later be used by other services. It's my first time using setuptools for that but I'm seem to be lost at some point. My project structure is something like: ProjectRoot โ”œโ”€โ”€ MANIFEST.in โ”œโ”€โ”€ Pipfile โ”œโ”€โ”€ Pipfile.lock โ”œโ”€โ”€ dist โ”‚ โ”œโ”€โ”€ myscript-0.0.1.whl โ”‚ โ””โ”€โ”€ myscript-0.0.1.tar.gz โ”œโ”€โ”€ pyproject.toml โ”œโ”€โ”€ resources โ”‚ โ”œโ”€โ”€ artifacts โ”‚ โ”‚ โ”œโ”€โ”€ code1.jar โ”‚ โ”‚ โ”œโ”€โ”€ code2.jar โ”‚ โ”‚ โ”œโ”€โ”€ api.keys โ”‚ โ”‚ โ”œโ”€โ”€ package1.tar.gz โ”‚ โ”‚ โ”œโ”€โ”€ install-linux.sh โ”‚ โ”‚ โ””โ”€โ”€ confs.yaml โ”‚ โ”œโ”€โ”€ recipe.template.yaml โ””โ”€โ”€ src โ””โ”€โ”€ code โ”œโ”€โ”€ __init__.py โ””โ”€โ”€ myscript.py I've tried to make setuptools add the files to the .tar package with the pyproject.toml with this: [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" [project] name = "myscript" version = "0.0.1" dependencies = [ 'Click', 'boto3', 'botocore', ] [project.scripts] myscript = "code.main:run" [tool.setuptools] include-package-data = true [tool.setuptools.packages.find] where = ["src","resources"] include = ["code*"] exclude = [] [tool.setuptools.package-data] "resources.artifacts" = ["*"] recipe = ["*.yaml"] After that I try to install the wheel generated file with pip install dist/generated_file.whl, but I can't find the resources/ folder anywhere during installation. ps.: I also got a little lost if I need the whl and the tar package together. I tried using relative paths to find the resources, but I saw they weren't installed in the sites_packages. My latest try was using from importlib_resources import files but it also can't seem to find the resources. I can't find the resources folder files.
Starting point With the given project structure 📁 <project root>/ ├─📄 pyproject.toml ├─📁 src/ │ └─📁 code/ │ ├─📄 __init__.py │ └─📄 myscript.py ├─📁 resources/ └─📁 artifacts/ └─📄 code1.jar and by specifying [tool.setuptools.packages.find] where = ["src","resources"] include = ["code*"] exclude = [] [tool.setuptools.package-data] "resources.artifacts" = ["*"] recipe = ["*.yaml"] you'll get a wheel with the following contents 📁 myscript-0.0.1-py3-none-any/ └─📁 code/ ├─📄 __init__.py └─📄 myscript.py The reason for this is that the only package found is code. A package is a folder with python (.py) files, and usually with an __init__.py file (if not talking about namespace packages, which are a bit of a special thing). What I would do First, rename your main package folder. You've called your project myscript in pyproject (so you would install it with pip install myscript), but then the file structure would imply that the import name is code; so you would need to do import code.myscript (code being the main package). In this example I'll change the project name to myproj, by changing the name in pyproject.toml and the ./code folder to ./myproj Second, the name of the "package-data" to me says that it is data inside a package. The ./resources is not a package as it does not contain any python files. If you add an empty __init__.py there, it will become a package. But there is another problem in pyproject.toml: your package should be found under a folder called ./resources, but that is actually in the root folder (.). Therefore, you should either change where = ['src', '.'] (which creates new problems) or move ./resources to ./resources/resources, but you could also do it more easily (third point) Third, you could simplify things by putting the data files inside your package (./myproj). That's a far more common practice, and also makes pip install the resources with your code, inside site-packages/myproj, which is nice (although, there are other possibilities). So, I propose these changes to pyproject.toml: [project] name = "myproj" # <-- this changed version = "0.0.1" [tool.setuptools.packages.find] where = ["src"] # <-- this changed [tool.setuptools.package-data] "*" = ["*.*"] # <-- this changed and then the folder structure to 📁 <project root>/ ├─📄 pyproject.toml └─📁 src/ └─📁 myproj/ ├─📄 __init__.py ├─📁 __assets__/ # non-code files here │ └─📄 code1.jar └─📄 myscript.py That will then create a wheel with the following folder structure: └─📁 myproj-0.0.1-py3-none-any/ └─📁 myproj/ ├─📄 __init__.py ├─📁 __assets__/ │ └─📄 code1.jar └─📄 myscript.py
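With the data files living inside the package like this, they can be located at runtime with importlib.resources; a sketch using the names from the layout above, assuming Python 3.9+ (on older versions the importlib_resources backport offers the same files() API):
    from importlib.resources import files
    jar_path = files("myproj") / "__assets__" / "code1.jar"
    data = jar_path.read_bytes()   # reads the file from wherever the package is installed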
6
11
76,487,970
2023-6-16
https://stackoverflow.com/questions/76487970/splitting-the-elements-of-a-list-by-some-separator-in-the-same-list
I have an array: array([nan, 'Stressful day', 'Drank coffee:Drank tea', 'Drank tea', 'Ate late:Drank coffee', 'Drank coffee:Drank tea:Worked out', 'Drank tea:Worked out', 'Drank coffee:Drank tea:Stressful day', 'Drank coffee', 'Drank coffee:Drank tea:Stressful day:Worked out', 'Drank coffee:Worked out', 'Ate late:Drank coffee:Drank tea', 'Ate late:Drank coffee:Drank tea:Worked out', 'Drank tea:Stressful day', 'Drank tea:Stressful day:Worked out', 'Drank coffee:Stressful day:Worked out', 'Drank coffee:Stressful day', 'Ate late:Drank coffee:Drank tea:Stressful day', 'Worked out', 'Ate late:Drank coffee:Worked out'], dtype=object) these are unique values from the column of a dataframe, as you can see they are combination of other values like 'Drank coffee:Drank tea' is a combination of 'Drank coffee' and 'Drank tea'. I want those unique elements for this list. What's the quickest way to create that list? Is there any inbuilt function in python libraries for this sort of thing? Expected output: array([nan, 'Stressful day', 'Drank coffee', 'Drank tea', 'Ate late', 'Worked out'], dtype=object)
Assuming a is the input array, you could use str.extractall:

out = pd.Series(a).str.extractall('([^:]+)')[0].unique()

From the original Series s:

out = s.drop_duplicates().str.extractall('([^:]+)')[0].unique()

Output:

array(['Stressful day', 'Drank coffee', 'Drank tea', 'Ate late', 'Worked out'], dtype=object)

Other options (maybe less efficient):

out = set(x for s in a if isinstance(s, str) for x in s.split(':'))

out = pd.Series(a).str.split(':').explode().unique()

keeping NaNs:

s = pd.Series(a)
out = np.concatenate([s[s.isna()].unique(), s.str.extractall('([^:]+)')[0].unique()])

Output:

array([nan, 'Stressful day', 'Drank coffee', 'Drank tea', 'Ate late', 'Worked out'], dtype=object)

Or:

out = set(x for s in a for x in (s.split(':') if isinstance(s, str) else [s]))

Output:

{'Drank coffee', 'Drank tea', nan, 'Stressful day', 'Worked out', 'Ate late'}
3
3
76,486,080
2023-6-15
https://stackoverflow.com/questions/76486080/pandas-read-json-script-that-used-to-work-now-produces-an-error
I have a script that up until recently worked fine, but is now producing an error. import requests import pandas as pd # Set the url to given endpoint url = "https://SomeURL/SomeEndpoint" print('URL set') # Connect to endpoint with credentials and put results in dictionary URLresponse = requests.get(url,auth=("SomeUser", "SomePassword"), verify=True) print('connection to endpoint') # Load the response as proper JSON into a var rawdata = (URLresponse.content) print(type(rawdata)) print('populating variable') # print(rawdata) # Load the var into a dataframe df = pd.read_json(rawdata) print('load variable into df') print(df) This used to work fine but now it is producing an error as following: File "C:\Program Files\Python310\lib\site-packages\pandas\io\common.py", line 901, in get_handle raise TypeError( TypeError: Expected file path name or file-like object, got <class 'bytes'> type How can I go ahead to troubleshoot this?
You can change df = pd.read_json(rawdata) to df = pd.read_json(io.StringIO(rawdata.decode('utf-8'))) You will need to include import io earlier in your file as well.
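As a side note (a small sketch of my own, not part of the original answer): since requests already decodes the response body for you, response.text can be wrapped directly, which avoids the manual utf-8 decode. The URL and credentials below are the question's placeholders.

import io
import requests
import pandas as pd

response = requests.get("https://SomeURL/SomeEndpoint",
                        auth=("SomeUser", "SomePassword"), verify=True)
# response.text is already a str, so no .decode() is needed
df = pd.read_json(io.StringIO(response.text))
print(df.head())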
2
6
76,485,237
2023-6-15
https://stackoverflow.com/questions/76485237/how-to-implement-multi-level-sorting-of-a-list-of-dictionaries-in-python
I am working with a list of dictionaries in Python that represents a set of data records. The data structure looks like this: data = [ {'Name': 'Tom', 'Age': 25, 'Score': 85}, {'Name': 'Alex', 'Age': 30, 'Score': 80}, {'Name': 'Tom', 'Age': 20, 'Score': 90}, {'Name': 'Alex', 'Age': 25, 'Score': 95}, {'Name': 'Tom', 'Age': 25, 'Score': 80}, {'Name': 'Alex', 'Age': 30, 'Score': 85} ] Each dictionary in the list represents a single record with 'Name', 'Age', and 'Score' as fields. My goal is to sort this list of dictionaries according to multiple fields. I want to sort it first by the 'Name' field in alphabetical order, then by the 'Age' field in ascending numerical order, and finally by the 'Score' field in descending numerical order. I have been trying to use the sorted() function in Python, but I can't figure out how to sort by multiple fields at once, and especially not how to sort by different fields in different directions. I would like to know the most efficient way to achieve this multi-level sort in Python. Any suggestions would be greatly appreciated.
You can use the key= parameter in sorted() or .sort(). The key function returns a 3-item tuple for each record, with the score negated (to get descending order):

data.sort(key=lambda d: (d["Name"], d["Age"], -d["Score"]))
print(data)

Prints:

[
    {"Name": "Alex", "Age": 25, "Score": 95},
    {"Name": "Alex", "Age": 30, "Score": 85},
    {"Name": "Alex", "Age": 30, "Score": 80},
    {"Name": "Tom", "Age": 20, "Score": 90},
    {"Name": "Tom", "Age": 25, "Score": 85},
    {"Name": "Tom", "Age": 25, "Score": 80},
]
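If a descending field were not numeric (so it cannot simply be negated), one alternative sketch (my addition, not from the original answer) is to rely on the stability of Python's sort and sort in two passes, least significant key first:

data = [
    {"Name": "Tom", "Age": 25, "Score": 85},
    {"Name": "Alex", "Age": 30, "Score": 80},
    {"Name": "Tom", "Age": 20, "Score": 90},
    {"Name": "Alex", "Age": 25, "Score": 95},
    {"Name": "Tom", "Age": 25, "Score": 80},
    {"Name": "Alex", "Age": 30, "Score": 85},
]

# Pass 1: the least significant key, descending.
data.sort(key=lambda d: d["Score"], reverse=True)
# Pass 2: the more significant keys, ascending; the stable sort keeps
# the Score order within equal (Name, Age) groups.
data.sort(key=lambda d: (d["Name"], d["Age"]))
print(data)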
3
4
76,484,652
2023-6-15
https://stackoverflow.com/questions/76484652/numpy-aggregate-across-multiple-axes
Let's say I have a 3d numpy array with shape (27,27,27). I want to compress this to (9,9,9) by averaging every 3 elements across every axis simultaneously (e.g. make 3x3x3 pixels into 1x1x1). The objective is to effectively compress by a single integer across all three axes simultaneously (with the assumption that any array will have a multiple of that integer for the shape of each axis). My initial attempt was to use np.apply_over_axes, though I'm worried it is not getting the cubic mean of all 3 axes but instead averaging each sequentially.

def mean_over(arr, axis):
    np.average(arr.reshape(-1, 3), axis=axis)

the_array_small = np.apply_over_axes(mean_over, the_array, [0,1,2])

However this returns an error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<__array_function__ internals>", line 180, in apply_over_axes
  File "/opt/homebrew/Caskroom/mambaforge/base/envs/seaborn/lib/python3.10/site-packages/numpy/lib/shape_base.py", line 496, in apply_over_axes
    if res.ndim == val.ndim:
AttributeError: 'NoneType' object has no attribute 'ndim'

I'm not convinced my apply_over_axes solution gets the aggregation I'm aiming for though. Ideally the mean of each (3,3,3) component is returned.
Another solution that takes advantage of as_strided:

a = np.arange(27**3).reshape(27, 27, 27)

tile_size = (3, 3, 3)
tile_shape = tuple(np.array(a.shape) // np.array(tile_size))
tile_strides = tuple(np.array(a.strides) * np.array(tile_size)) + tuple(a.strides)

tile_view = np.lib.stride_tricks.as_strided(
    a,
    shape=tile_shape + tile_size,
    strides=tile_strides,
    writeable=False,
)
result = np.mean(tile_view, axis=(-3, -2, -1))
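For comparison, a simpler reshape-based sketch (my addition, a different technique than the as_strided answer above) gives the same (9, 9, 9) block means, as long as each axis length is a multiple of the tile size:

import numpy as np

a = np.arange(27**3).reshape(27, 27, 27)
# Split every axis into (blocks, 3) and average over the three block axes.
result = a.reshape(9, 3, 9, 3, 9, 3).mean(axis=(1, 3, 5))
print(result.shape)  # (9, 9, 9)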
2
3
76,482,024
2023-6-15
https://stackoverflow.com/questions/76482024/how-to-get-more-detailed-results-sources-with-langchain
I am trying to put together a simple "Q&A with sources" using Langchain and a specific URL as the source data. The URL consists of a single page with quite a lot of information on it. The problem is that RetrievalQAWithSourcesChain is only giving me the entire URL back as the source of the results, which is not very useful in this case. Is there a way to get more detailed source info? Perhaps the heading of the specific section on the page? A clickable URL to the correct section of the page would be even more helpful! I am slightly unsure whether the generating of the result source is a function of the language model, URL loader or simply RetrievalQAWithSourcesChain alone. I have tried using UnstructuredURLLoader and SeleniumURLLoader with the hope that perhaps more detailed reading and input of the data would help - sadly not. Relevant code excerpt: llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo') chain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=VectorStore.as_retriever()) result = chain({"question": question}) print(result['answer']) print("\n Sources : ",result['sources'] )
ChatGPT is very flexible, and the more explicit you are, the better results you can get. This link shows the docs for the function you are using; there is a parameter for langchain.prompts.BasePromptTemplate that allows you to give ChatGPT more explicit instructions. It looks like the base prompt template is this:

Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:

You can add another sentence giving ChatGPT clearer instructions:

Please format the answer with JSON of the form { "answer": "{your_answer}", "relevant_quotes": ["list of quotes"] }.

Substitute your_answer with the answer to the question, but also include relevant quotes from the source material in the list. You may need to tweak it a little to get ChatGPT responding well, but then you should be able to parse it.

ChatGPT has 3 message types in the API:

User - a message from an end user to the model
model - a message from the model to the end user
system - a message from the prompt engineer to the model to add instructions. LangChain doesn't use this here since it's a one-shot prompt.

I strongly recommend these courses on ChatGPT since they are from Andrew Ng and very high quality.
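Another angle, as a sketch only (my addition, based on the 2023-era LangChain API rather than on the answer above): RetrievalQAWithSourcesChain reports whatever string is stored in each document's metadata["source"], so splitting the page into chunks and stamping each chunk with a more specific source before building the vector store can make the reported sources more granular. The splitter parameters and the "#chunk-i" suffix format here are assumptions.

from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)  # docs = output of your URL loader

for i, chunk in enumerate(chunks):
    # Make the reported source more specific than the bare URL.
    chunk.metadata["source"] = f"{chunk.metadata['source']}#chunk-{i}"

# Then build the vector store from `chunks` instead of `docs`
# and keep the rest of the chain as in the question.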
6
3
76,479,504
2023-6-15
https://stackoverflow.com/questions/76479504/poetry-add-using-a-caret-and-a-symbol
I am confused as to what the "@" operator actually does in poetry add pandas@^1.3.0. Both following commands install pandas version 1.5.3 and set the dependency in my pyproject.toml to pandas = "^1.3.0": poetry add pandas@^1.3.0 poetry add pandas^1.3.0 I have no other dependencies listed (aside from Python 3.8). I thought that using the "@" symbol signifies a strict requirement for a specific version and its compatible releases. With "pandas@^1.3.0," shouldn't Poetry install exactly the version 1.3.0 of the "pandas" package? The official documentation says: When adding dependencies via poetry add, you can use the @ operator. This is understood similarly to the == syntax, but also allows prefixing any specifiers that are valid in pyproject.toml. For example:
The "@" operator in the add command is a delimiter between the package name and the version. If the "@" operator is followed by its required version, e.g. poetry add [email protected] is the same as: poetry add pendulum==2.0.5 If you use caret, e.g.: poetry add requests@^2.13.0 Then you specify a version range. The "@" symbol signifies a strict requirement only if you don't use any other specifier afterwards. The poetry documentation for "@" operator is a bit confusing.
3
3
76,479,392
2023-6-15
https://stackoverflow.com/questions/76479392/identifying-ones-in-each-row-and-creating-a-list-in-python
I have an array A. I am identifying ones in each row except the row number itself and creating a list. For example, in A[0], the ones should be identified for locations 2,3,5 and not 0. I present the current and expected output. import numpy as np A=np.array([[1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0], [0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0], [1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0], [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0], [0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1], [0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1], [0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1]]) output = [] for i, row in enumerate(A): ones_indices = np.where(row == 1)[0] other_rows = np.arange(A.shape[0]) other_rows = np.delete(other_rows, i) output.append([[i], other_rows.tolist()]) print(output) The current output is [[[0], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]], [[1], [0, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]], [[2], [0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 11]], [[3], [0, 1, 2, 4, 5, 6, 7, 8, 9, 10, 11]], [[4], [0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 11]], [[5], [0, 1, 2, 3, 4, 6, 7, 8, 9, 10, 11]], [[6], [0, 1, 2, 3, 4, 5, 7, 8, 9, 10, 11]], [[7], [0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11]], [[8], [0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11]], [[9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11]], [[10], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11]], [[11], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]] The expected output is [[[0], [2,3,5]], [[1], [3,4,6]], [[2], [0,3,5]], [[3], [0, 1, 2, 4, 5, 6]], [[4], [1,3,6]], [[5], [0,2,3,7,8,10]], [[6], [1,3,4,8,9,11]], [[7], [5,8,10]], [[8], [5,6,7,9,10,11]], [[9], [6,8,11]], [[10], [5,7,8], [[11], [6,8,9]]]
The numpy approach would be to fill_diagonal, then to use where: np.fill_diagonal(A, 0) row, idx = np.where(A==1) # np.where(A) if only 0/1 Output: (array([ 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 8, 8, 8, 8, 8, 8, 9, 9, 9, 10, 10, 10, 11, 11, 11]), array([ 2, 3, 5, 3, 4, 6, 0, 3, 5, 0, 1, 2, 4, 5, 6, 1, 3, 6, 0, 2, 3, 7, 8, 10, 1, 3, 4, 8, 9, 11, 5, 8, 10, 5, 6, 7, 9, 10, 11, 6, 8, 11, 5, 7, 8, 6, 8, 9])) If you really want nested lists: np.fill_diagonal(A, 0) out = [[[i], np.where(a==1)[0].tolist()] for i, a in enumerate(A)] Or: row, idx = np.where(A==1) x = np.array_split(idx, np.where(row[1:]!=row[:-1])[0]+1) out = [[[i], j.tolist()] for i,j in enumerate(x)] Output: [[[0], [2, 3, 5]], [[1], [3, 4, 6]], [[2], [0, 3, 5]], [[3], [0, 1, 2, 4, 5, 6]], [[4], [1, 3, 6]], [[5], [0, 2, 3, 7, 8, 10]], [[6], [1, 3, 4, 8, 9, 11]], [[7], [5, 8, 10]], [[8], [5, 6, 7, 9, 10, 11]], [[9], [6, 8, 11]], [[10], [5, 7, 8]], [[11], [6, 8, 9]]]
2
4
76,474,969
2023-6-14
https://stackoverflow.com/questions/76474969/why-does-the-ks-test-give-a-p-value-of-1-if-the-distribution-is-different
Let's take two sets: a = [5,5,5,5,5,4,4,4,4,3,3,3,2,2,1] b = [5,4,3,2,1] We perform the KS-Test using Python: from scipy import stats stats.ks_2samp(b,a) KstestResult(statistic=0.2, pvalue=0.9979360165118678, statistic_location=2, statistic_sign=1) Why is the result a p-value of 0.9979? This means that the distribution of the values in the two sets is almost identical. But it's not! What do I missunderstand? Kind regards.
The observed value of the KS test statistic, namely 0.2, is actually relatively small, considering the distribution of the test statistic for a reasonable null hypothesis; I think this is where the surprise is coming from. As mentioned, the usual KS test assumes there are no ties, so we'll have to compute the p-value ourselves. We can make progress by assuming the null hypothesis is that both samples come from a uniform distribution and estimating the p-value by random sampling. (This null hypothesis is more restrictive than the conventional one which just assume the same distribution, not necessarily uniform.) Here are a few lines of R code to estimate the p-value. In the interest of brevity, it's specific to the problem as stated: a sample of size 5 and a sample of size 15, each one from a uniform distribution on the set { 1, 2, 3, 4, 5 }, and the observed KS test statistic is 0.2. my.ecdf <- function (x) cumsum (sapply (1:5, function (k) sum (x == k))/length(x)) R <- function (n) sample.int (5, size = n, replace = T) generate.ks.test.statistic <- function (n) sapply (1:n, function (k) max (abs (my.ecdf (R (15)) - my.ecdf (R (5))))) ks <- generate.ks.test.statistic (10000) sum (ks >= 0.2)/10000 For this last input, I get 0.8243. That's not as extreme as the value you reported (more than 0.99), but still enough to show that 0.2 is actually relatively small. You can look at hist(ks) to see what the distribution looks like.
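For readers who prefer Python, here is a rough translation of the same simulation using numpy (a sketch of my own, under the same null hypothesis of a uniform distribution on {1, ..., 5}); the estimated p-value will vary slightly from run to run but should land in the same ballpark as the 0.8243 reported above.

import numpy as np

rng = np.random.default_rng(0)

def ecdf(x):
    # empirical CDF of a sample on the support {1, 2, 3, 4, 5}
    return np.cumsum([np.sum(x == k) for k in range(1, 6)]) / len(x)

def ks_stat():
    big = rng.integers(1, 6, size=15)
    small = rng.integers(1, 6, size=5)
    return np.max(np.abs(ecdf(big) - ecdf(small)))

sims = np.array([ks_stat() for _ in range(10_000)])
print(np.mean(sims >= 0.2))  # estimated p-value for the observed statistic 0.2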
3
2
76,477,949
2023-6-14
https://stackoverflow.com/questions/76477949/attribute-error-str-object-has-no-attribute-ignore-local-proxy-with-chrom
I've just started with Selenium and I'm already stuck at the first step: setting up the driver. I keep getting this error: 'str' object has no attribute '_ignore_local_proxy'. Here's the code : from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager import requests driver = webdriver.Chrome(ChromeDriverManager().install()) And the whole traceback : AttributeError Traceback (most recent call last) Cell In[21], line 1 ----> 1 driver = webdriver.Chrome(ChromeDriverManager().install()) File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\selenium\webdriver\chrome\webdriver.py:49, in WebDriver.__init__(self, options, service, keep_alive) 45 self.keep_alive = keep_alive 47 self.service.path = DriverFinder.get_path(self.service, self.options) ---> 49 super().__init__( 50 DesiredCapabilities.CHROME["browserName"], 51 "goog", 52 self.options, 53 self.service, 54 self.keep_alive, 55 ) File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\selenium\webdriver\chromium\webdriver.py:60, in ChromiumDriver.__init__(self, browser_name, vendor_prefix, options, service, keep_alive) 51 self.service.start() 53 try: 54 super().__init__( 55 command_executor=ChromiumRemoteConnection( 56 remote_server_addr=self.service.service_url, 57 browser_name=browser_name, 58 vendor_prefix=vendor_prefix, 59 keep_alive=keep_alive, ... 63 ) 64 except Exception: 65 self.quit() AttributeError: 'str' object has no attribute '_ignore_local_proxy' I'm using VS Code with Python 3.11, if that can somehow help.
This is due to changes in selenium 4.10.0: https://github.com/SeleniumHQ/selenium/commit/9f5801c82fb3be3d5850707c46c3f8176e3ccd8e Note that the first argument is no longer executable_path, but options. (ChromeDriverManager().install() returns the path to the install location.) Since selenium manager is now included with selenium 4.10.0, you should no longer use ChromeDriverManager at all. from selenium import webdriver driver = webdriver.Chrome() # ... driver.quit() However, if you still want to pass in the executable_path to an existing driver, you must use the service arg now: from selenium import webdriver from selenium.webdriver.chrome.service import Service service = Service(executable_path="PATH_TO_DRIVER") options = webdriver.ChromeOptions() driver = webdriver.Chrome(service=service, options=options) # ... driver.quit()
4
2
76,475,409
2023-6-14
https://stackoverflow.com/questions/76475409/why-does-the-text-widget-event-modified-get-triggered-when-specifically-usin
I've come across a bug that I can't seem to understand. I have a tkinter Text widget that has a bind that triggers on text modification. For some reason this event gets triggered when I use the key combination even though it shouldn't, as it doesn't modify the contents of the Text widget. Here comes the weird part: this only occurs with <Control-o>. I have made a simple program to demonstrate the problem. Other than special preassigned key combinations such as <Control-i> that actually modify the content, no other combination behaves like this. Why does this occur for <Control-o> specifically? And how do I prevent it? import tkinter as tk root = tk.Tk() txt = tk.Text(root) txt.pack() root.bind("<Control-u>", lambda e: print("doesn't trigger")) root.bind("<Control-o>", lambda e: print("somehow triggers")) txt.bind("<<Modified>>", lambda e: print("text got modified!")) # (keep in mind that this will only get triggered once)
The default binding on the text widget for <Control-o> adds a newline. This is from the section bindings in the official Tcl/Tk documentation for the text widget: Control-o opens a new line by inserting a newline character in front of the insertion cursor without moving the insertion cursor. Returning the string "break" from any binding prevents any further processing of the event. So, you can add a binding on the text widget for control-o that returns the string "break". Since your binding is handled before the default bindings for the widget, this will effectively prevent the default binding from modifying the widget. txt.bind("<Control-o>", lambda e: "break")
3
7
76,470,779
2023-6-14
https://stackoverflow.com/questions/76470779/how-to-understand-the-following-fancy-index-behaviour-for-multi-dimensional-arra
We noticed that the mixed usage of fancy indexing and slicing is so confusing and undocumented for multi-dimensional arrays, for example: In [114]: x = np.arange(720).reshape((2,3,4,5,6)) In [115]: x[:,:,:,0,[0,1,2,4,5]].shape Out[115]: (2, 3, 4, 5) In [116]: x[:,:,0,:,[0,1,2,4,5]].shape Out[116]: (5, 2, 3, 5) I have read the usage of fancy indexing on https://numpy.org/doc/stable/user/basics.indexing.html and I can understand that x[:,0,:,[1,2]] = [x[:,0,:,1], x[:,0,:,2]]. However I cannot understand why the result for above Input [115] and Input [116] differ on the first dimension. Can someone point to where such broadcasting rules are documented? Thanks! I have tried searching the documentation for fancy indexing as well as posting issues to the numpy repo on Github.
Some additional insight into why there is ambiguity: In the latter case in the question, the 3rd and 5th axes are indexed, and thus disappear from the new array. A new axis (with shape equal to the broadcast of the indices) has to be added somewhere. If I were numpy and had to insert a shape (5,) array into the array with "shape" (2, 3, -, 5, -), would I place it in place of the first missing dimension? Or the second? Exactly because a slice separates two advanced indices, numpy cannot replace a consecutive set of axes, and thus cannot know whether to insert the new axis before or after the separating slice(s). As a result, the new axis is inserted at the front:

(5, 2, 3, 5)
 ^  ^^^^^^^--- old dimensions
 |
 new dimension

Only in the first case, where the disappearing axes are all adjacent ("shape" (2, 3, 4, -, -)), can the new axes be unambiguously inserted at the end. Note: Behind the scenes numpy always inserts the new axes at the start. It just (mostly for convenience, I believe) transposes the array to move the new axes into place when that is unambiguous. Also interesting is NumPy Enhancement Proposal 21
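To make the consequence concrete, a small sketch (my addition, not part of the original answer): numpy.moveaxis can move the broadcast dimension back to where you might have expected it.

import numpy as np

x = np.arange(720).reshape(2, 3, 4, 5, 6)

a = x[:, :, :, 0, [0, 1, 2, 4, 5]]
print(a.shape)   # (2, 3, 4, 5)  -> advanced indices adjacent, stays in place

b = x[:, :, 0, :, [0, 1, 2, 4, 5]]
print(b.shape)   # (5, 2, 3, 5)  -> new axis moved to the front

# If you want the index axis at the end instead:
print(np.moveaxis(b, 0, -1).shape)  # (2, 3, 5, 5)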
5
2
76,472,782
2023-6-14
https://stackoverflow.com/questions/76472782/how-to-make-color-bar-ticks-white-and-internal
I am drawing a heatmap and this is my MWE: import matplotlib import seaborn as sns import numpy as np matplotlib.rcParams.update({"figure.dpi": 96}) np.random.seed(7) A = np.random.randint(0,100, size=(20,20)) cmap = matplotlib.cm.get_cmap('viridis').copy() g = sns.heatmap(A, vmin=10, vmax=90, cmap=cmap, cbar_kws={}) # Get the colorbar cbar = g.collections[0].colorbar tick_locations = [*range(15, 86, 10)] # Set the tick positions and labels cbar.set_ticks(tick_locations) cbar.set_ticklabels(tick_locations) plt.show() This gives me: But I would like the little horizontal tick marks on the color bar to be white and inside the color bar as in: How can I do that? (What I am looking for seems to be the default in plotnine/ggplot.)
Add these two lines before plt.show():

cbar.ax.yaxis.set_ticks_position('both')
cbar.ax.tick_params(axis="y", direction="in", color='white')

Output plot
3
3
76,472,329
2023-6-14
https://stackoverflow.com/questions/76472329/numpy-vectorization-for-linear-combination-of-numpy-matrices
I have a numpy ndarray of shape (5,4,4) that is a set of 5 matrices 4x4. I would like to multiply that ndarray by a matrix of shape (3,5) and I would like to get a numpy ndarray of shape (3,4,4) where each matrix 4x4 in the result is the linear combination of the 5 4x4 matrices with the coefficients coming from the rows of the matrix (3,5). I have the following, very basic, code: import numpy as np np.random.seed(10) X = np.random.rand(5,4,4) A = np.random.rand(3,5) print( A*X ) which generates, a bit as I was expecting, the error 'operands could not be broadcast together with shapes (3,5) (5,4,4)' I was reading a bit about broadcasting but I can't seem to find a way to write the operation in such a way it can be vectorized by Numpy. Has anybody got a similar experience?
For this type of operation, np.einsum is perfect. Is the following what you want?

B = np.einsum('ij,jkl->ikl', A, X)
print(B.shape)  # (3,4,4)

In words, the string ij,jkl->ikl means:

A has indices (dimensions) i and j respectively
X has indices j, k and l respectively
->ikl: multiply A[i,j]*X[j,k,l] and sum over j
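If it is useful, the same contraction can also be written without einsum; a quick sketch (my addition) using tensordot or a plain matmul over a reshaped X:

import numpy as np

np.random.seed(10)
X = np.random.rand(5, 4, 4)
A = np.random.rand(3, 5)

B1 = np.einsum('ij,jkl->ikl', A, X)
B2 = np.tensordot(A, X, axes=(1, 0))          # contract A's axis 1 with X's axis 0
B3 = (A @ X.reshape(5, -1)).reshape(3, 4, 4)  # matmul on a flattened view

print(np.allclose(B1, B2), np.allclose(B1, B3))  # True True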
2
4
76,470,472
2023-6-14
https://stackoverflow.com/questions/76470472/guarantee-asyncio-execution-order
I have seen other answers on here stating that asyncio doesn't guarantee execution order, only that the order of the outputs will match that of the inputs. Is there a way I can guarantee the execution order For example, if I have a list of the functions I want to run, will calling create task on each of them before calling gather work? Is there something else I can do to achieve this?
You could do this with asyncio.Event:

async def perform_task(listen_event, push_event):
    await do_some_parallel_work()
    await listen_event.wait()
    await do_some_work_in_order()
    if push_event:
        push_event.set()

async def main():
    events = [asyncio.Event() for i in range(100)]
    tasks = [asyncio.create_task(perform_task(events[i], events[i + 1] if i < 99 else None))
             for i in range(100)]
    events[0].set()
    await asyncio.gather(*tasks)

This should let the parallel work run in parallel but then run the final statement in order.
4
3
76,452,551
2023-6-11
https://stackoverflow.com/questions/76452551/reference-polars-dataframe-height-in-with-columns
Take this example: df = (polars .DataFrame(dict( j=polars.datetime_range(datetime.date(2023, 1, 1), datetime.date(2023, 1, 3), '8h', closed='left', eager=True), )) .with_columns( k=polars.lit(numpy.random.randint(10, 99, 6)), ) ) j k 2023-01-01 00:00:00 47 2023-01-01 08:00:00 22 2023-01-01 16:00:00 82 2023-01-02 00:00:00 19 2023-01-02 08:00:00 85 2023-01-02 16:00:00 15 shape: (6, 2) Here, numpy.random.randint(10, 99, 6) uses hard-coded 6 as the height of DataFrame, so it won't work if I changed e.g. the interval from 8h to 4h (which would require changing 6 to 12). I know I can do it by breaking the chain: df = polars.DataFrame(dict( j=polars.datetime_range(datetime.date(2023, 1, 1), datetime.date(2023, 1, 3), '4h', closed='left', eager=True), )) df = df.with_columns( k=polars.lit(numpy.random.randint(10, 99, df.height)), ) j k 2023-01-01 00:00:00 47 2023-01-01 04:00:00 22 2023-01-01 08:00:00 82 2023-01-01 12:00:00 19 2023-01-01 16:00:00 85 2023-01-01 20:00:00 15 2023-01-02 00:00:00 89 2023-01-02 04:00:00 74 2023-01-02 08:00:00 26 2023-01-02 12:00:00 11 2023-01-02 16:00:00 86 2023-01-02 20:00:00 81 shape: (12, 2) Is there a way to do it (i.e. reference df.height or an equivalent) in one chained expression though?
You can use .pipe()

df = (
    pl.datetime_range(
        datetime.date(2023, 1, 1),
        datetime.date(2023, 1, 3),
        "4h",
        closed="left",
        eager=True
    )
    .alias("date")
    .to_frame()
)

df.pipe(lambda df:
    df.with_columns(pl.lit(np.random.randint(10, 99, df.height)).alias("rand"))
)

shape: (12, 2)
┌─────────────────────┬──────┐
│ date                ┆ rand │
│ ---                 ┆ ---  │
│ datetime[μs]        ┆ i64  │
╞═════════════════════╪══════╡
│ 2023-01-01 00:00:00 ┆ 39   │
│ 2023-01-01 04:00:00 ┆ 45   │
│ 2023-01-01 08:00:00 ┆ 95   │
│ 2023-01-01 12:00:00 ┆ 72   │
│ …                   ┆ …    │
│ 2023-01-02 08:00:00 ┆ 34   │
│ 2023-01-02 12:00:00 ┆ 42   │
│ 2023-01-02 16:00:00 ┆ 30   │
│ 2023-01-02 20:00:00 ┆ 83   │
└─────────────────────┴──────┘

As for the example task, perhaps .sample() could be used.

df.with_columns(
    pl.int_range(10, 100).sample(pl.len(), with_replacement=True).alias("rand")
)

shape: (12, 2)
┌─────────────────────┬──────┐
│ date                ┆ rand │
│ ---                 ┆ ---  │
│ datetime[μs]        ┆ i64  │
╞═════════════════════╪══════╡
│ 2023-01-01 00:00:00 ┆ 25   │
│ 2023-01-01 04:00:00 ┆ 27   │
│ 2023-01-01 08:00:00 ┆ 68   │
│ 2023-01-01 12:00:00 ┆ 95   │
│ 2023-01-01 16:00:00 ┆ 96   │
│ …                   ┆ …    │
│ 2023-01-02 04:00:00 ┆ 36   │
│ 2023-01-02 08:00:00 ┆ 25   │
│ 2023-01-02 12:00:00 ┆ 90   │
│ 2023-01-02 16:00:00 ┆ 92   │
│ 2023-01-02 20:00:00 ┆ 92   │
└─────────────────────┴──────┘
3
3
76,462,548
2023-6-13
https://stackoverflow.com/questions/76462548/in-polars-is-there-a-way-to-remove-character-accents-from-string-columns
I want to remove character accents from a text column, ex. convert Piña to Pina. This is how I would do it in pandas:

(names
 .str.normalize('NFKD')
 .str.encode('ascii', errors='ignore')
 .str.decode('utf-8'))

Polars has str.decode and str.encode but they don't seem to be what I'm looking for. Thanks!
To expand on @jqurious's comment you can do one of two things: map_elements/lambda like this: from unicodedata import normalize df.with_columns( a=pl.col('a') .map_elements(lambda x: normalize('NFKD',x) .encode('ascii', errors='ignore') .decode('utf-8'))) define function/map_batches like this: from unicodedata import normalize def custnorm(In_series): for i, x in enumerate(In_series): newvalue = normalize('NFKD',x).encode('ascii', errors='ignore').decode('utf-8') if newvalue != x: In_series[i]=newvalue return In_series then inside the df you can do df.with_columns(a=pl.col('a').map_batches(custnorm)) The difference between map_elements and map_batches is that map_elements tells polars to do the looping one row at a time whereas map_batches tells polars to feed the whole column as a Series to the function which must then return a Series of the same size.
2
3
76,455,828
2023-6-12
https://stackoverflow.com/questions/76455828/what-does-the-torch-gather-and-torch-index-select-do
Basically when to use torch.gather vs torch.index_select I have a scenario where I am using positional embedding (max_len, batch_size, embedding_dim). Here, I would like to select only particular indices from the max_len axis. I want the result to be (new_indices, batch_size, embedding_dim). Searching around I found two functions but there purpose seem to be the same. Can someone exaplain which will be ideal for my situation and why? I am just curious.
I made a post about this (because I also always kept forgetting). The gist is in this image, the full version is here if you're interested.
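Since the linked image does not survive in text form, here is a short sketch (my own, with made-up shapes matching the question's (max_len, batch_size, embedding_dim) scenario) of the practical difference:

import torch

max_len, batch_size, embedding_dim = 10, 4, 8
emb = torch.randn(max_len, batch_size, embedding_dim)
idx = torch.tensor([0, 3, 7])      # the positions along max_len you want to keep

# index_select: pick whole slices along one dim with a 1-D index tensor.
out = emb.index_select(0, idx)     # shape (3, batch_size, embedding_dim)
print(out.shape)

# gather: the index tensor must have the same number of dims as the input,
# so you have to expand it; it is useful when the picked position differs per batch.
idx_g = idx.view(-1, 1, 1).expand(-1, batch_size, embedding_dim)
out_g = emb.gather(0, idx_g)       # same result here
print(torch.equal(out, out_g))     # True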
4
3
76,461,596
2023-6-13
https://stackoverflow.com/questions/76461596/unable-to-use-selenium-webdriver-getting-two-exceptions
I am getting the following error when trying to create an object with Selenium Webdriver. "\selenium\webdriver\common\driver_finder.py", line 42, in get_path path = SeleniumManager().driver_location(options) if path is None else path "\selenium\webdriver\common\selenium_manager.py", line 74, in driver_location browser = options.capabilities["browserName"] AttributeError: 'str' object has no attribute 'capabilities' During handling of the above exception, another exception occurred: Traceback (most recent call last): "\selenium_webdriver_webscraping.py", line 4, in <module> driver = webdriver.Chrome(chrome_driver_path) "\selenium\webdriver\chrome\webdriver.py", line 47, in __init__ self.service.path = DriverFinder.get_path(self.service, self.options) "\selenium\webdriver\common\driver_finder.py", line 44, in get_path raise NoSuchDriverException(f"Unable to obtain {service.path} using Selenium Manager; {err}") selenium.common.exceptions.NoSuchDriverException: Message: Unable to obtain chromedriver using Selenium Manager; 'str' object has no attribute 'capabilities'; For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors/driver_location This is the code I used: from selenium import webdriver chrome_driver_path = <chrome drive .exe path> driver = webdriver.Chrome(chrome_driver_path)
If the Selenium version you are using is v4.6.0 or above (which I think it is as I see SeleniumManger in the error trace), then you don't really have to set the driver.exe path. Selenium can handle the browser and drivers by itself. So your code can be simplified as below: from selenium import webdriver driver = webdriver.Chrome() driver.get("https://www.google.com/") driver.quit() A few references: Purpose of webdriver manager Introducing Selenium Manager
22
63
76,458,771
2023-6-12
https://stackoverflow.com/questions/76458771/minecraft-proxy-in-python-using-socket-only-2-packages-get-sent
I'm trying to code a proxy in python for a Minecraft server that is hosted on my own computer. While I want to intercept and modify the packages that get sent between the client and the server, at first I just want to send all packages through without modifying them. The problem is that only 2 packages get sent: one from the server to the client and one from the client to the server. I'm using the socket and threading library in python. Also important to note is that in the server.properties file I have online_mode turned off, because when online_mode was turned on the server tried to encrypt the connection which led to Minecraft getting stuck at "Encrypting..." when connecting via the proxy. While I have tried this many times, I still get the same results, so here's an example from ChatGPT: import socket import threading def handle_client(client_socket, target_host, target_port): # Connect to the target server target_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) target_socket.connect((target_host, target_port)) # Relay data between client and target while True: data = client_socket.recv(4096) if len(data) == 0: print("Client connection closed.") break print(f'Received from client: {data}') target_socket.send(data) print("Sent to target.") response = target_socket.recv(4096) if len(response) == 0: print("Target connection closed.") break print(f'Received from target: {response}') client_socket.send(response) print("Sent to client.") # Close the connections client_socket.close() target_socket.close() def start_proxy(proxy_port, target_host, target_port): # Create a server socket server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server_socket.bind(('localhost', proxy_port)) server_socket.listen(5) print(f'Proxy server listening on port {proxy_port}') while True: client_socket, addr = server_socket.accept() print(f'Accepted connection from {addr[0]}:{addr[1]}') client_handler = threading.Thread( target=handle_client, args=(client_socket, target_host, target_port) ) client_handler.start() # Usage example proxy_port = 55555 target_host = "localhost" target_port = 25565 start_proxy(proxy_port, target_host, target_port) My Minecraft server runs on localhost (a.k.a 127.0.0.1) on port 25565 and the proxy listens on localhost:55555. When I run the code and then join the server via the proxy (localhost:55555) in Minecraft, I get the following printed messages: Proxy server listening on port 55555 Accepted connection from 127.0.0.1:52403 Received from client: b'\x10\x00\xfa\x05\tlocalhost\xd9\x03\x02"\x00\x0fBoterBramKroket\x01\x11Zu\x9eC\xa3N\xd6\xbf\x03\\\x00\xe6\x96\xcc\x00' Sent to target. Received from target: b'\x03\x03\x80\x02' Sent to client. Client connection closed. I don't understand why the other packages won't get sent. I hope someone has experience in coding minecraft proxies and can tell me more, thanks in advance!
Hi, I found out how to do it; here is the code. The problem in your code is that you need to handle the server side and the client side in separate threads: with both in the same while loop, recv blocks while waiting for data, which also blocks the client part. So use two threads, one that sends the client's data to the server and one that sends the server's data back to the client. Thanks for the help with the base code!

import socket
import threading

class handle_server(threading.Thread):
    def __init__(self, client_socket, target_host, target_port, target_socket):
        super().__init__()
        self.client_socket = client_socket
        self.target_host = target_host
        self.target_port = target_port
        self.target_socket = target_socket

    def run(self):
        client_socket = self.client_socket
        target_host = self.target_host
        target_port = self.target_port
        target_socket = self.target_socket
        while True:
            data = client_socket.recv(4096 * 8 * 8 * 8)
            if len(data) == 0:
                print("Client connection closed.")
                break
            print(f'Received from server: {data}')
            target_socket.send(data)
            print("Sent to target.")
        client_socket.close()
        target_socket.close()

class handle_client_Thread(threading.Thread):
    def __init__(self, client_socket, target_host, target_port, target_socket):
        self.client_socket = client_socket
        self.target_host = target_host
        self.target_port = target_port
        self.target_socket = target_socket
        super().__init__()

    def run(self):
        client_socket = self.client_socket
        target_host = self.target_host
        target_port = self.target_port
        target_socket = self.target_socket
        while True:
            response = target_socket.recv(4096 * 8 * 8 * 8)
            if len(response) == 0:
                print("Target connection closed.")
                break
            print(f'Received from client: {response}')
            client_socket.send(response)
            print("Sent to client.")
        client_socket.close()
        target_socket.close()

def handle_client(client_socket, target_host, target_port):
    # Connect to the target server
    target_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    target_socket.connect((target_host, target_port))
    handle_server(client_socket, target_host, target_port, target_socket).start()
    handle_client_Thread(client_socket, target_host, target_port, target_socket).start()

def start_proxy(proxy_port, target_host, target_port):
    # Create a server socket
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind(('localhost', proxy_port))
    server_socket.listen(5)
    print(f'Proxy server listening on port {proxy_port}')
    while True:
        client_socket, addr = server_socket.accept()
        print(f'Accepted connection from {addr[0]}:{addr[1]}')
        client_handler = threading.Thread(
            target=handle_client,
            args=(client_socket, target_host, target_port)
        )
        client_handler.start()

# Usage example
proxy_port = 55555
target_host = "localhost"
target_port = 25565

start_proxy(proxy_port, target_host, target_port)
3
0
76,423,510
2023-6-7
https://stackoverflow.com/questions/76423510/plotting-of-trendlines-with-certain-conditions-post-significant-pivot-point-dete
I'm trying to get a point which is higher in a range of points, i.e., pivot high, then among a range of pivot high I want to find a significant pivot high. For this I am trying to create a range which is not pre-defined but calculated on every go. It is being calculated by knee plot to identify the best parameters which gives the points above the range and points below the range. This works fine for a lot of data. If the loop is not able to find the optimal parameters, I'm manually assigning the optimal high and optimal low data. Also there is a range where we can check for the parameter values, and the lower parameter, has a condition that it cannot exceed a certain value. This is enough of the background and to make sure the code is understood well. Now I want to include a functionality that plots trend-lines to the plot containing the significant pivot high, significant pivot low and closing prices. The characteristic of the trend line should be such that, I am able to connect significant pivot lows with upward trendline on a price chart. The more significant pivot lows the line touches, the stronger is the trendline. Similar will be the case for downward trendline and the significant pivot low points. What my code plots currently is something like: The dotted red lines and the dotted green lines represent the current lines being plotted respectively. The black and blue connecting lines is something that I desire from my code. I think, I am not able to think of the logic correctly and once, that clears out I can write the algorithm clearly. Code: import os import pandas as pd import numpy as np import matplotlib.pyplot as plt from scipy.signal import argrelextrema def calculate_pivot_points(data): pivot_points = [] resistance_levels = [] support_levels = [] pivot_high_points = [] pivot_low_points = [] for i in range(len(data)): high = data.loc[i, 'high'] low = data.loc[i, 'low'] close = data.loc[i, 'close'] # Calculate Pivot Point pivot_point = (high + low + close) / 3 pivot_points.append(pivot_point) # Calculate Resistance Levels resistance1 = (2 * pivot_point) - low resistance2 = pivot_point + (high - low) resistance3 = high + 2 * (pivot_point - low) resistance_levels.append({'R1': resistance1, 'R2': resistance2, 'R3': resistance3}) # Calculate Support Levels support1 = (2 * pivot_point) - high support2 = pivot_point - (high - low) support3 = low - 2 * (high - pivot_point) support_levels.append({'S1': support1, 'S2': support2, 'S3': support3}) # Identify Pivot High Points using swing points if i > 0 and i < len(data) - 1: if high > data.loc[i-1, 'high'] and high > data.loc[i+1, 'high']: pivot_high_points.append({'index': i, 'value': high}) # Identify Pivot Low Points using swing points if i > 0 and i < len(data) - 1: if low < data.loc[i-1, 'low'] and low < data.loc[i+1, 'low']: pivot_low_points.append({'index': i, 'value': low}) return pivot_points, resistance_levels, support_levels, pivot_high_points, pivot_low_points # Create a list to store all the data frames data_frames = [] # Specify the folder path containing the CSV files folder_path = "./data_frames" # Iterate over each file in the folder for filename in os.listdir(folder_path): if filename.endswith(".csv"): file_path = os.path.join(folder_path, filename) # Read the data from the CSV file data = pd.read_csv(file_path) # Add the data frame to the list data_frames.append(data) # Extract the file name without the extension file_name = os.path.splitext(filename)[0] # Calculate pivot points and other parameters pivot_points, 
resistance_levels, support_levels, pivot_high_points, pivot_low_points = calculate_pivot_points(data) # Extract closing prices closing_prices = data['close'] # Define the range of parameter values to test parameter_range = range(1, 40) # Calculate scores for different parameter combinations parameter_scores = [] for high_parameter in parameter_range: for low_parameter in parameter_range: if low_parameter <= 8: # Add the condition here # Determine significant pivot high points using swing points significant_high_points = [] for point in pivot_high_points: if point['index'] > 0 and point['index'] < len(data) - 1: high_range = data.loc[point['index'] - high_parameter: point['index'] + low_parameter, 'high'] if point['value'] == high_range.max(): significant_high_points.append(point) # Determine significant pivot low points using swing points significant_low_points = [] for point in pivot_low_points: if point['index'] > 0 and point['index'] < len(data) - 1: low_range = data.loc[point['index'] - high_parameter: point['index'] + low_parameter, 'low'] if point['value'] == low_range.min(): significant_low_points.append(point) # Calculate the score as the difference between high and low point counts score = len(significant_high_points) - len(significant_low_points) parameter_scores.append((high_parameter, low_parameter, score)) # Convert the scores to a NumPy array for easier manipulation scores = np.array(parameter_scores) # Find the optimal parameter values using the knee point if len(scores) > 0: knee_index = argrelextrema(scores[:, 2], np.less)[0][-1] optimal_high_parameter, optimal_low_parameter, optimal_score = scores[knee_index] else: optimal_high_parameter = 16 # Manually assign the value optimal_low_parameter = 2 # Manually assign the value print("Optimal high parameter value:", optimal_high_parameter) print("Optimal low parameter value:", optimal_low_parameter) # Plot line chart for closing prices plt.plot(closing_prices, label='Closing Prices') # Calculate the trendlines for connecting the pivot high points trendlines_high = [] trendline_points_high = [] for i in range(0, len(significant_high_points) - 1): point1 = significant_high_points[i] point2 = significant_high_points[i+1] slope = (point2['value'] - point1['value']) / (point2['index'] - point1['index']) if slope > 0: if not trendline_points_high: trendline_points_high.append(point1) trendline_points_high.append(point2) else: if len(trendline_points_high) > 1: trendlines_high.append(trendline_points_high) trendline_points_high = [] if len(trendline_points_high) > 1: trendlines_high.append(trendline_points_high) # Calculate the trendlines for connecting the pivot low points trendlines_low = [] trendline_points_low = [] for i in range(0, len(significant_low_points) - 1): point1 = significant_low_points[i] point2 = significant_low_points[i+1] slope = (point2['value'] - point1['value']) / (point2['index'] - point1['index']) if slope < 0: if not trendline_points_low: trendline_points_low.append(point1) trendline_points_low.append(point2) else: if len(trendline_points_low) > 1: trendlines_low.append(trendline_points_low) trendline_points_low = [] if len(trendline_points_low) > 1: trendlines_low.append(trendline_points_low) # Plot the trendlines for positive slope for trendline_points_high in trendlines_high: x_values = [point['index'] for point in trendline_points_high] y_values = [point['value'] for point in trendline_points_high] plt.plot(x_values, y_values, color='red', linestyle='dashed') # Plot the significant pivot high points 
x_values = [point['index'] for point in significant_high_points] y_values = [point['value'] for point in significant_high_points] plt.scatter(x_values, y_values, color='red', label='Significant Pivot High Points') # Plot the trendlines for positive slope for trendline_points_low in trendlines_low: x_values = [point['index'] for point in trendline_points_low] y_values = [point['value'] for point in trendline_points_low] plt.plot(x_values, y_values, color='green', linestyle='dashed') # Plot the significant pivot low points x_values = [point['index'] for point in significant_low_points] y_values = [point['value'] for point in significant_low_points] plt.scatter(x_values, y_values, color='green', label='Significant Pivot Low Points') # Set chart title and labels plt.title(f'Closing Prices with Trendlines and Significant Pivot Points ({file_name})') plt.xlabel('Index') plt.ylabel('Closing Price') # Show the chart for the current data frame plt.legend() plt.show() The data can be found at this drive link if you wish to attempt the code yourself: Link PS: In the current code, I'm just checking if the two points line on the same straight trendline. This is not going to be the case in a lot of time. So instead what I am thinking is we define a range and if firstly the slope between n and n+1th point is > or < 0 then we proceed to the next two points, i.e., n+1 and n+2th point. Here if the difference between the two slopes, i.e., slope between n and n+1th and n+1th and n+2th is within certain range, then we can shift the main slope variable to slope between n and n+2 and similarly run the loop. This will be a great start, but now I'm stuck with the coding part. If someone can help me code this out, that will be very helpful. import os import pandas as pd import numpy as np import matplotlib.pyplot as plt from scipy.signal import argrelextrema def calculate_pivot_points(data): pivot_points = [] resistance_levels = [] support_levels = [] pivot_high_points = [] pivot_low_points = [] for i in range(len(data)): high = data.loc[i, 'high'] low = data.loc[i, 'low'] close = data.loc[i, 'close'] # Calculate Pivot Point pivot_point = (high + low + close) / 3 pivot_points.append(pivot_point) # Calculate Resistance Levels resistance1 = (2 * pivot_point) - low resistance2 = pivot_point + (high - low) resistance3 = high + 2 * (pivot_point - low) resistance_levels.append({'R1': resistance1, 'R2': resistance2, 'R3': resistance3}) # Calculate Support Levels support1 = (2 * pivot_point) - high support2 = pivot_point - (high - low) support3 = low - 2 * (high - pivot_point) support_levels.append({'S1': support1, 'S2': support2, 'S3': support3}) # Identify Pivot High Points using swing points if i > 0 and i < len(data) - 1: if high > data.loc[i-1, 'high'] and high > data.loc[i+1, 'high']: pivot_high_points.append({'index': i, 'value': high}) # Identify Pivot Low Points using swing points if i > 0 and i < len(data) - 1: if low < data.loc[i-1, 'low'] and low < data.loc[i+1, 'low']: pivot_low_points.append({'index': i, 'value': low}) return pivot_points, resistance_levels, support_levels, pivot_high_points, pivot_low_points # Create a list to store all the data frames data_frames = [] # Specify the folder path containing the CSV files folder_path = "./data_frames" # Iterate over each file in the folder for filename in os.listdir(folder_path): if filename.endswith(".csv"): file_path = os.path.join(folder_path, filename) # Read the data from the CSV file data = pd.read_csv(file_path) # Add the data frame to the list 
data_frames.append(data) # Extract the file name without the extension file_name = os.path.splitext(filename)[0] # Calculate pivot points and other parameters pivot_points, resistance_levels, support_levels, pivot_high_points, pivot_low_points = calculate_pivot_points(data) # Extract closing prices closing_prices = data['close'] # Define the range of parameter values to test parameter_range = range(1, 40) # Calculate scores for different parameter combinations parameter_scores = [] for high_parameter in parameter_range: for low_parameter in parameter_range: if low_parameter <= 8: # Add the condition here # Determine significant pivot high points using swing points significant_high_points = [] for point in pivot_high_points: if point['index'] > 0 and point['index'] < len(data) - 1: high_range = data.loc[point['index'] - high_parameter: point['index'] + low_parameter, 'high'] if point['value'] == high_range.max(): significant_high_points.append(point) # Determine significant pivot low points using swing points significant_low_points = [] for point in pivot_low_points: if point['index'] > 0 and point['index'] < len(data) - 1: low_range = data.loc[point['index'] - high_parameter: point['index'] + low_parameter, 'low'] if point['value'] == low_range.min(): significant_low_points.append(point) # Calculate the score as the difference between high and low point counts score = len(significant_high_points) - len(significant_low_points) parameter_scores.append((high_parameter, low_parameter, score)) # Convert the scores to a NumPy array for easier manipulation scores = np.array(parameter_scores) # Find the optimal parameter values using the knee point if len(scores) > 0: knee_index = argrelextrema(scores[:, 2], np.less)[0][-1] optimal_high_parameter, optimal_low_parameter, optimal_score = scores[knee_index] else: optimal_high_parameter = 16 # Manually assign the value optimal_low_parameter = 2 # Manually assign the value print("Optimal high parameter value:", optimal_high_parameter) print("Optimal low parameter value:", optimal_low_parameter) # Plot line chart for closing prices plt.plot(closing_prices, label='Closing Prices') slope_range = 1 # Adjust this range as per your requirement # Calculate the trendlines for connecting the pivot high points trendlines_high = [] trendline_points_high = [] for i in range(0, len(significant_high_points) - 2): point1 = significant_high_points[i] point2 = significant_high_points[i+1] slope1 = (point2['value'] - point1['value']) / (point2['index'] - point1['index']) point3 = significant_high_points[i+1] point4 = significant_high_points[i+2] slope2 = (point4['value'] - point3['value']) / (point4['index'] - point3['index']) slope_difference = abs(slope2 - slope1) if slope1 < 0: if not trendline_points_high: trendline_points_high.append(point1) if slope_difference <= slope_range: trendline_points_high.append(point2) else: if len(trendline_points_high) > 1: trendlines_high.append(trendline_points_high) trendline_points_high = [point2] # Start a new trendline with point2 if len(trendline_points_high) > 1: trendlines_high.append(trendline_points_high) # Calculate the trendlines for connecting the pivot low points trendlines_low = [] trendline_points_low = [] for i in range(0, len(significant_low_points) - 2): point1 = significant_low_points[i] point2 = significant_low_points[i+1] slope1 = (point2['value'] - point1['value']) / (point2['index'] - point1['index']) point3 = significant_low_points[i+1] point4 = significant_low_points[i+2] slope2 = (point4['value'] - 
point3['value']) / (point4['index'] - point3['index']) slope_difference = abs(slope2 - slope1) if slope1 > 0: if not trendline_points_low: trendline_points_low.append(point1) if slope_difference <= slope_range: trendline_points_low.append(point2) else: if len(trendline_points_low) > 1: trendlines_low.append(trendline_points_low) trendline_points_low = [point2] # Start a new trendline with point2 if len(trendline_points_low) > 1: trendlines_low.append(trendline_points_low) # Plot the trendlines for positive slope for trendline_points_high in trendlines_high: x_values = [point['index'] for point in trendline_points_high] y_values = [point['value'] for point in trendline_points_high] plt.plot(x_values, y_values, color='red', linestyle='dashed') # Plot the significant pivot high points x_values = [point['index'] for point in significant_high_points] y_values = [point['value'] for point in significant_high_points] plt.scatter(x_values, y_values, color='red', label='Significant Pivot High Points') # Plot the trendlines for positive slope for trendline_points_low in trendlines_low: x_values = [point['index'] for point in trendline_points_low] y_values = [point['value'] for point in trendline_points_low] plt.plot(x_values, y_values, color='green', linestyle='dashed') # Plot the significant pivot low points x_values = [point['index'] for point in significant_low_points] y_values = [point['value'] for point in significant_low_points] plt.scatter(x_values, y_values, color='green', label='Significant Pivot Low Points') # Set chart title and labels plt.title(f'Closing Prices with Trendlines and Significant Pivot Points ({file_name})') plt.xlabel('Index') plt.ylabel('Closing Price') # Show the chart for the current data frame plt.legend() plt.show() This is my new approach as per the logic I just stated, but still the plotting isn't anywhere near what we desire.
Here's an example which uses KMeans Clustering and Linear Regression techniques to plot optimized trendlines The number of clusters is hard-coded (and can be easily changed via the variable n_clusters); in a more sophisticated version an optimal number of clusters based on the data itself would be arrived at (e.g., this is why mean_squared_error is included but I ended up not using those in this demo; using that metric simply always chose the max number of clusters possible as the best - there would be a better method for finding the ideal number of clusters, but, parameterizing it by hand through a small round of trial and error runs is not too difficult or time-consuming). import matplotlib.pyplot as plt import numpy as np import pandas as pd import plotly.graph_objects as go from sklearn.cluster import KMeans from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error def fit_line_to_cluster(df, cluster_label): reg = LinearRegression().fit(df[["index"]], df["value"]) pred = reg.predict(df[["index"]]) mse = mean_squared_error(df["value"], pred) return reg, mse def calculate_pivot_points(data): pivot_points = [] pivot_high_points = [] pivot_low_points = [] for i in range(len(data)): high = data.loc[i, "high"] low = data.loc[i, "low"] close = data.loc[i, "close"] # Calculate Pivot Point pivot_point = (high + low + close) / 3 pivot_points.append(pivot_point) # Identify Pivot High Points using swing points if i > 0 and i < len(data) - 1: if ( high > data.loc[i - 1, "high"] and high > data.loc[i + 1, "high"] ): pivot_high_points.append({"index": i, "value": high}) # Identify Pivot Low Points using swing points if i > 0 and i < len(data) - 1: if low < data.loc[i - 1, "low"] and low < data.loc[i + 1, "low"]: pivot_low_points.append({"index": i, "value": low}) return ( pivot_points, pivot_high_points, pivot_low_points, ) def add_fitted_lines_to_plotly(fig, df, models, colors): for i, model in enumerate(models): x_vals = df[df["cluster"] == i]["index"].values y_vals = model.predict(x_vals.reshape(-1, 1)) fig.add_trace( go.Scatter( x=x_vals, y=y_vals, mode="lines", line=dict(color=colors[i]), ) ) ### Data Analysis df = pd.read_excel("~/Downloads/data3.xlsx") df["time"] = pd.to_datetime(df["timestamp"]) pivot_points, pivot_high_points, pivot_low_points = calculate_pivot_points(df) high_df = pd.DataFrame(pivot_high_points) low_df = pd.DataFrame(pivot_low_points) ## Clustering n_clusters = 20 optimal_high_models = [] optimal_low_models = [] # For high points kmeans_high = KMeans(n_clusters=n_clusters, random_state=0).fit( high_df[["index", "value"]] ) high_df["cluster"] = kmeans_high.labels_ for i in range(n_clusters): cluster_data = high_df[high_df["cluster"] == i] model, mse = fit_line_to_cluster(cluster_data, i) optimal_high_models.append(model) # For low points kmeans_low = KMeans(n_clusters=n_clusters, random_state=0).fit( low_df[["index", "value"]] ) low_df["cluster"] = kmeans_low.labels_ for i in range(n_clusters): cluster_data = low_df[low_df["cluster"] == i] model, mse = fit_line_to_cluster(cluster_data, i) optimal_low_models.append(model) closing_prices = df["close"].values ### Plotting fig = go.Figure() # Plot closing, high, and low points fig.add_trace( go.Scatter( x=list(range(len(closing_prices))), y=closing_prices, mode="lines", name="Closing Prices", line=dict(color="blue"), opacity=0.75, ) ) fig.add_trace( go.Scatter( x=high_df["index"], y=high_df["value"], mode="markers", name="High Points", marker=dict(color="red"), opacity=0.75, marker_size=5, 
) ) fig.add_trace( go.Scatter( x=low_df["index"], y=low_df["value"], mode="markers", name="Low Points", marker=dict(color="green"), opacity=0.75, marker_size=5, ) ) # Add optimal trendlines for high and low clusters add_fitted_lines_to_plotly( fig, high_df, optimal_high_models, ["magenta"] * n_clusters ) add_fitted_lines_to_plotly( fig, low_df, optimal_low_models, ["cyan"] * n_clusters ) # Set chart title and labels fig.update_layout( title="High and Low Pivot Points with Clustered Trendlines", xaxis_title="Time", yaxis_title="Price", ) fig.show() results in: using Plotly, which allows for zooming in on the data, e.g.:
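One way to automate the hard-coded cluster count mentioned above is a silhouette-score search. This is only a sketch: the pick_n_clusters helper, its candidate range, and the choice of silhouette score as the criterion are illustrative assumptions, not part of the demo above.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_n_clusters(points, candidates=range(5, 31, 5)):
    # Try each candidate cluster count and keep the one with the best
    # silhouette score (requires 2 <= n_clusters <= n_samples - 1).
    best_n, best_score = None, -1.0
    for n in candidates:
        labels = KMeans(n_clusters=n, random_state=0, n_init=10).fit_predict(points)
        score = silhouette_score(points, labels)
        if score > best_score:
            best_n, best_score = n, score
    return best_n

# e.g. replace the hard-coded value:
# n_clusters = pick_n_clusters(high_df[["index", "value"]])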
7
1
76,448,287
2023-6-10
https://stackoverflow.com/questions/76448287/how-can-i-solve-importerror-using-the-trainer-with-pytorch-requires-accele
I'm using the transformers library in Google colab, and When i am using TrainingArguments from transformers library i'm getting Import error with this code: from transformers import TrainingArguments training_args = TrainingArguments( output_dir = "/content/our-model", learning_rate=2e-5, per_device_train_batch_size= 64, per_device_eval_batch_size = 16, num_train_epochs = 2, weight_decay = 0.01, evaluation_strategy = "epoch", save_strategy = "epoch", load_best_model_at_end = True, push_to_hub = False ) This is the error i'm getting: <ipython-input-28-0518ea5ff407> in <cell line: 2>() 1 from transformers import TrainingArguments ----> 2 training_args = TrainingArguments( 3 output_dir = "/content/our-model", 4 learning_rate=2e-5, 5 per_device_train_batch_size= 64, 4 frames /usr/local/lib/python3.10/dist-packages/transformers/training_args.py in _setup_devices(self) 1670 if not is_sagemaker_mp_enabled(): 1671 if not is_accelerate_available(min_version="0.20.1"): -> 1672 raise ImportError( 1673 "Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`" 1674 ) ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U I already tried pip install for 0.20.1 version of accelerate and pip install transformers[torch] and both didn't worked.
If you're not particular about which transformers and accelerate version to tie to, then do this to use the most up-to-date version in Google Colab: ! pip install -U accelerate ! pip install -U transformers Then the issue you are having with accelerate should auto-resolve itself. Note: Underspecifying pip install -U transformers instead of pip install transformers[pytorch] might be easier since that's what most of the users do and the developers of the library will make sure that the basic pip works with the common functions and class like TrainingArguments Instead of specifying accelerate to the pip install accelerate>=0.20.1, if you have no particular need to fixed the version, automatically upgrading to the latest version might get you more stability when using the library, esp. with "hot"/"trending" libraries that are constantly changing (almost) daily. If further debugging is necessary, i.e. if the above didn't work. To check your transformers and accelerate version, do this: import accelerate accelerate.__version__ Most probably you might have an ImportError at the first line if accelerate is not already installed when you installed transformers. And then if the first line works and the 2nd line is not outputting a version >=0.20.1, then that is the cause of your issue. The current versions to-date (July 2023) are: import accelerate import transformers transformers.__version__, accelerate.__version__ [out]: ('4.30.1', '0.21.0') Here's an example notebook with the model that you wish to use as per the comments in your question, https://colab.research.google.com/drive/1D79AjHMeE6HAZC-g2S83baTgsHtDUu5i?usp=sharing If the error persist after the pip install ..., try restarting the runtime. If you can't find the buttons to press to restart, try this in the cell Restart kernel in Google Colab then re-run the cells for import ... import os os._exit(00)
26
37
76,450,609
2023-6-11
https://stackoverflow.com/questions/76450609/firebase-functions-gen2-python-init-does-not-work
I have only one python installed in my system: 3.10.10. it includes the latest pip: 23.1.2 and I installed the latest module of firebase_functions After I try to init firebase functions in my machine I follow the instructions and when it asks me to install dependencies I get this error: ERROR: To modify pip, please run the following command: C:\Users\XXX\functions\venv\Scripts\python.exe -m pip install --upgrade pip Error: An unexpected error has occurred. Next time I run the same process but this time I did not accept to install dependencies and it worked: Firebase initialization complete! Now this is the default code google provided: # Welcome to Cloud Functions for Firebase for Python! # To get started, simply uncomment the below code or create your own. # Deploy with `firebase deploy` from firebase_functions import https_fn from firebase_admin import initialize_app initialize_app() @https_fn.on_request() def on_request_example(req: https_fn.Request) -> https_fn.Response: return https_fn.Response("Hello world!") I have all dependencies installed. I made sure thousand times. When I run firebase deploy I get this error: i deploying functions i functions: preparing codebase default for deployment i functions: ensuring required API cloudfunctions.googleapis.com is enabled... i functions: ensuring required API cloudbuild.googleapis.com is enabled... i artifactregistry: ensuring required API artifactregistry.googleapis.com is enabled... + functions: required API cloudbuild.googleapis.com is enabled + artifactregistry: required API artifactregistry.googleapis.com is enabled + functions: required API cloudfunctions.googleapis.com is enabled Error: An unexpected error has occurred. And this is the log in the firebase-debug.log [debug] [2023-06-11T13:05:29.172Z] stderr: ModuleNotFoundError: No module named 'firebase_functions' [debug] [2023-06-11T13:05:29.182Z] Error: spawn "C:\Users\XXX\functions\venv\Scripts\activate.bat" ENOENT at notFoundError (C:\Users\XXX\AppData\Roaming\npm\node_modules\firebase-tools\node_modules\cross-spawn\lib\enoent.js:6:26) at verifyENOENT (C:\Users\XXX\AppData\Roaming\npm\node_modules\firebase-tools\node_modules\cross-spawn\lib\enoent.js:40:16) at cp.emit (C:\Users\XXX\AppData\Roaming\npm\node_modules\firebase-tools\node_modules\cross-spawn\lib\enoent.js:27:25) at ChildProcess._handle.onexit (node:internal/child_process:291:12) [error] Error: An unexpected error has occurred.
Apparently Firebase creates its own Python environment (stored in the venv folder), separate from the Python version installed on your machine. To make it work, follow these steps:
firebase init
Choose functions: Functions: Configure a Cloud Functions directory and its files
When it asks: Do you want to install dependencies now? (Y/n), choose No
Open cmd within the functions project and then:
cd functions\venv\Scripts
python.exe -m pip install --upgrade pip
python.exe -m pip install firebase_functions
cd ../../../
And now:
firebase init functions
Choose Overwrite, and then:
File functions/requirements.txt already exists. Overwrite? No
File functions/.gitignore already exists. Overwrite? No
File functions/main.py already exists. Overwrite? No
Do you want to install dependencies now? Yes
And now:
firebase deploy --only functions
And it should work perfectly.
3
4
76,465,343
2023-6-13
https://stackoverflow.com/questions/76465343/huggingface-transformers-model-config-reported-this-is-a-deprecated-strategy-to
I am training a sequence-to-sequence model using HuggingFace Transformers' Seq2SeqTrainer. When I execute the training process, it reports the following warning: /path/to/python3.9/site-packages/transformers/generation/utils.py:1219: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation) Note the HuggingFace documentation link is dead. I use the following codes: model = BartForConditionalGeneration.from_pretrained(checkpoint) model.config.output_attentions = True model.config.output_hidden_states = True training_args = Seq2SeqTrainingArguments( output_dir = "output_dir_here", evaluation_strategy = IntervalStrategy.STEPS, #"epoch", optim = "adamw_torch", # Use new PyTorch optimizer eval_steps = 1000, # New logging_steps = 1000, save_steps = 1000, learning_rate = 2e-5, per_device_train_batch_size = batch_size, per_device_eval_batch_size = batch_size, weight_decay = 0.01, save_total_limit = 3, num_train_epochs = 30, predict_with_generate=True, remove_unused_columns=True, fp16 = True, push_to_hub = True, metric_for_best_model = 'bleu', # New or "f1" load_best_model_at_end = True # New ) trainer = Seq2SeqTrainer( model = model, args = training_args, train_dataset = train_ds, eval_dataset = eval_ds, tokenizer = tokenizer, data_collator = data_collator, compute_metrics = compute_metrics, callbacks = [EarlyStoppingCallback(early_stopping_patience=3)] ) trainer.train() The training process can be completed without any problem, but I am concerned about the deprecation warning. How should I modify the codes to solve the problem? Version: Transformers 4.28.1 Python 3.9.7
Root-Cause This is a warning about using the API in the outdated manner (=unsupported soon). However, as of now, the code is fixing this on its own - hence only a warning not a breaking error. See these lines in the source code. Remedy The transformers library encourages the use of config files. In this case, we need to pass a GenerationConfig object early, rather than to set attributes. I will first share a clean, simple example: from transformers import AutoTokenizer, BartForConditionalGeneration model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn") tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn") ARTICLE_TO_SUMMARIZE = ( "PG&E stated it scheduled the blackouts in response to forecasts for high winds " "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were " "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." ) inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt") # change config and generate summary from transformers.generation import GenerationConfig model.config.max_new_tokens = 10 model.config.min_length = 1 gen_cfg = GenerationConfig.from_model_config(model.config) gen_cfg.max_new_tokens = 10 gen_cfg.min_length = 1 summary_ids = model.generate(inputs["input_ids"], generation_config=gen_cfg) tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] If you try to manipulate the config attributes directly and pass no config, you get a warning. If you pass a GenerationConfig, you are all good. This example is reproducible as a Colab notebook here. Now, to the original question. Note that, in general, changing architecture configs of pretrained models is not recommended for incompatibility reasons. This is sometimes possible with extra effort. 
However, certain config changes are possible upon initialization: model = BartForConditionalGeneration.from_pretrained( "facebook/bart-large-cnn", attention_dropout=0.123 ) Here is the fully-working code, corrected for reproducibility and see also this notebook from transformers import AutoTokenizer, BartForConditionalGeneration from transformers.generation import GenerationConfig from transformers import Trainer, TrainingArguments from transformers.models.bart.modeling_bart import shift_tokens_right from transformers import DataCollatorForSeq2Seq model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn", attention_dropout=0.123) tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn") seq2seq_data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) def get_features(batch): input_encodings = tokenizer(batch["text"], max_length=1024, truncation=True) with tokenizer.as_target_tokenizer(): target_encodings = tokenizer(batch["summary"], max_length=256, truncation=True) return {"input_ids": input_encodings["input_ids"], "attention_mask": input_encodings["attention_mask"], "labels": target_encodings["input_ids"]} dataset_ftrs = dataset.map(get_features, batched=True) columns = ['input_ids', 'labels', 'input_ids','attention_mask',] dataset_ftrs.set_format(type='torch', columns=columns) training_args = TrainingArguments( output_dir='./models/bart-summarizer', num_train_epochs=1, per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=500, weight_decay=0.01, logging_dir='./logs', ) model.config.output_attentions = True model.config.output_hidden_states = True training_args = TrainingArguments( output_dir='./models/bart-summarizer', num_train_epochs=1, warmup_steps=500, per_device_train_batch_size=1, per_device_eval_batch_size=1, weight_decay=0.01, logging_steps=10, push_to_hub=False, evaluation_strategy='steps', eval_steps=500, save_steps=1e6, gradient_accumulation_steps=16, ) trainer = Trainer( model=model, args=training_args, tokenizer=tokenizer, data_collator=seq2seq_data_collator, train_dataset=dataset_ftrs["train"], eval_dataset=dataset_ftrs["test"], ) assert model.config.attention_dropout==0.123 #trainer.train()
7
4
76,464,175
2023-6-13
https://stackoverflow.com/questions/76464175/setfit-training-with-a-pandas-dataframe
I would like to train a zero shot classifier on an annotated sample dataset. I am following some tutorials but as all use their own data and the same pretarined model, I am trying to confirm: Is this the best approach? Data example: import pandas as pd from datasets import Dataset # Sample feedback data, it will have 8 samples per label feedback_dict = [ {'text': 'The product is great and works well.', 'label': 'Product Performance'}, {'text': 'I love the design of the product.', 'label': 'Product Design'}, {'text': 'The product is difficult to use.', 'label': 'Usability'}, {'text': 'The customer service was very helpful.', 'label': 'Customer Service'}, {'text': 'The product was delivered on time.', 'label': 'Delivery Time'} ] # Create a DataFrame with the feedback data df = pd.DataFrame(feedback_dict) # convert to Dataset format df = Dataset.from_pandas(df) By having the previous data format, this is the approach for model finetunning: from setfit import SetFitModel, SetFitTrainer # Select a model model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2") # training with Setfit trainer = SetFitTrainer( model=model, train_dataset=df, # to keep the code simple I do not create the df_train eval_dataset=df, # to keep the code simple I do not create the df_eval column_mapping={"text": "text", "label": "label"} ) trainer.train() The issue here is that the process never ends after more than 500 hours in a laptop, and the dataset it is only about 88 records with 11 labels.
I tried to run the example you posted on Google Colab, it took 37 seconds to run the training. Here's you code with some tweak to make it work on Colab: ### Install libraries %%capture !pip install datasets setfit After installing the libraries, run the following code: ### Import dataset import pandas as pd from datasets import Dataset # Sample feedback data, it will have 8 samples per label feedback_dict = [ {'text': 'The product is great and works well.', 'label': 'Product Performance'}, {'text': 'I love the design of the product.', 'label': 'Product Design'}, {'text': 'The product is difficult to use.', 'label': 'Usability'}, {'text': 'The customer service was very helpful.', 'label': 'Customer Service'}, {'text': 'The product was delivered on time.', 'label': 'Delivery Time'} ] # Create a DataFrame with the feedback data df = pd.DataFrame(feedback_dict) # convert to Dataset format df = Dataset.from_pandas(df) ### Run training from setfit import SetFitModel, SetFitTrainer # Select a model model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2") # training with Setfit trainer = SetFitTrainer( model=model, train_dataset=df, # to keep the code simple I do not create the df_train eval_dataset=df, # to keep the code simple I do not create the df_eval column_mapping={"text": "text", "label": "label"} ) trainer.train() And finally, you can download the trained model on drive and then download it on you PC manually. ### Download model to drive from google.colab import drive drive.mount('/content/drive') trainer.model._save_pretrained('/content/drive/path/to/target/folder') If your main issue is the training time, this should fix it.
4
4
76,468,665
2023-6-13
https://stackoverflow.com/questions/76468665/why-does-object-new-accept-parameters
Besides the obvious asking "again" about __new__ and __init__ in Python - I can ensure, I know what it does. I'll demonstrate some strange and to my opinion undocumented behavior, for which I seek professional help :). Background I'm implementing several features like abstract methods, abstract classes, must-override methods, singletone behavior, slotted classes (automatic inference of __slots__) and mixin classes (deferred slots) using a user-defined meta-class called ExtendedType. The following code can be found as a whole at pyTooling/pyTooling on the development branch. Thus, the presented question is a stripdown and simplified variant demonstrating the strange behavior of object.__new__. Idea Depending on the internal algorithms of ExtendedType, it might decide a class A is abstract. If so, the __new__ method is replaced by a dummy method raising an exception (AbstractClassError). Later, when a class B(A) inherits from A, the meta-class might come to the decision, B isn't abstract anymore, thus we want to allow the object creation again and allow calling for the original __new__ method. Therefore, the original method is preserved as a field in the class. To simplify the internal algorithms for the abstractness decision, the meta-class implements a boolean named-parameter abstract. class AbstractClassError(Exception): pass class M(type): # staticmethod def __new__(cls, className, baseClasses, members, abstract): newClass = type.__new__(cls, className, baseClasses, members) if abstract: def newnew(cls, *_, **__): raise AbstractClassError(f"Class is abstract") # keep original __new__ and exchange it with a dummy method throwing an error newClass.__new_orig__ = newClass.__new__ newClass.__new__ = newnew else: # 1. replacing __new__ with original (preserved) method doesn't work newClass.__new__ = newClass.__new_orig__ return newClass class A(metaclass=M, abstract=True): pass class B(A, abstract=False): def __init__(self, arg): self.arg = arg b = B(5) When instantiating B we'll try two cases: with a single parameter: b = B(5) Error message: TypeError: object.__new__() takes exactly one argument (the type to instantiate) without a parameter: b = B() Error message: TypeError: B.__init__() missing 1 required positional argument: 'arg' The error message of the latter case is expected, because __init__ of B expects an argument arg. The strange behavior is in case 1, where it reports object.__new__() takes no additional parameters except of the type. So let's investigate if swapping methods worked correctly: print("object.__new__ ", object.__new__) print("A.__new_orig__ ", A.__new_orig__) print("A.__new__ ", A.__new__) print("B.__new__ ", B.__new__) Results: object.__new__ <built-in method __new__ of type object at 0x00007FFE30EDD0C0> A.__new_orig__ <built-in method __new__ of type object at 0x00007FFE30EDD0C0> A.__new__ <function M.__new__.<locals>.newnew at 0x000001CF11AE5A80> B.__new__ <built-in method __new__ of type object at 0x00007FFE30EDD0C0> So, the preserved method in __new_orig__ is identical to object.__new__ and is again the same after swapping back the __new__ method in class B. Comparing with Ordinary Classes Let's take two classes X and Y(X) and instantiate them: class X: pass class Y(X): def __init__(self, arg): self.arg = arg y = Y(3) Of cause this will work, but are the __new__ methods different? 
object.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0> A.__new_orig__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0> A.__new__ <function M.__new__.<locals>.newnew at 0x000001CD1FB459E0> B.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0> X.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0> Y.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0> Also X and Y use the same __new__ method as B or object. So let's instantiate Y and B and compare results: print("Y.__new__ ", Y.__new__) y = Y(3) print("y.arg ", y.arg) print("B.__new__ ", B.__new__) b = B(5) print("b.arg ", y.arg) Results: Y.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0> y.arg 3 B.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0> Traceback (most recent call last): File "C:\Temp\newIstKomisch.py", line 67, in <module> b = B(5) ^^^^ TypeError: object.__new__() takes exactly one argument (the type to instantiate) Question 1: Why does new accept parameters for Y, but not for B? Creating Objects When an object is created, the __call__ method of the meta-class is executed, which roughly translates to: class M(type): ... def __call__(cls, *args, **kwargs): inst = cls.__new__(cls, *args, **kwargs) inst.__init__(*args, **kwargs) return inst It first calls __new__ to create an instance and then it calls __init__ to initialize the object. One might argue and say: "maybe there is magic behavior in call" to check if a build-in or user-defined method is called"... Let's quickly check how object.__new__ behaves: o = object.__new__(object, 1) Result: TypeError: object() takes no arguments Observation: The error message is different then what we got before. This says "no arguments", the other says "exactly one argument". Alternatively, we can create an object by hand skipping the meta-class: y = Y.__new__(Y, 3) print("Y.__new__(Y, 3) ", y) y.__init__(3) print("y.__init__(3) ", y.arg) Result: Y.__new__(Y, 3) <__main__.Y object at 0x0000020ED770BD40> y.__init__(3) 3 Here we clearly see __new__ can accept additional parameters and ignore them. So let's compare to manual instance creation of B: b = B.__new__(B, 5) print("B.__new__(B, 5) ", b) b.__init__(5) print("b.__init__(5) ", b.arg) Result: Traceback (most recent call last): File "C:\Temp\newIstKomisch.py", line 51, in <module> b = B.__new__(B, 5) ^^^^^^^^^^^^^^^ TypeError: object.__new__() takes exactly one argument (the type to instantiate) Question 2: How can the same method have different behavior and exception handling? Additional notes: All behavior is implemented in M.__new__ or swapped XXX.__new__ methods instead of M.__call__, so the object creation time isn't influenced. Modifying the meta-classes call would have a huge performance impact. Attachements: Full reproducer file
That is sure a lot of research for a question. But the answer is more simple: objects __new__ and __init__ simply special case the "forgiveness of extra arguments" in a way that it feels natural to create new classes with a custom __init__ method, with no need to fiddle with __new__. So, in short, object new checks if the class it is instantiating have a custom __init__ and no custom __new__ - if so, it "forgives" extra args and kwargs. And object's default __init__ does the converse: it checks if the class it is "initting" have a custom __new__ and no custom __init__. If so it also forgives (and forgets) about any extra parameters. The "custom" verification here simply checks if there is a __new__ method present in any class' dict in the __mro__ - so that even setting the same object.__new__ class in a subclass won't work. This strange-sounding special case is needed, and is baked in since a long-time in Python, because without it, whenever creating a class with a __init__ method taking arguments, without implementing also a __new__ method that would fail - so the case for "simplify class customization" by having a simpler __init__ method rather than modifying __new__ would be moot. Here are a few examples on the REPL that make that clear: In [11]: class A(object): pass In [12]: b = A.__new__(A, 3) TypeError (...) TypeError: A() takes no arguments # There is no custom `__init__`, so it fails In [13]: class A(object): ...: def __init__(self, *args): ...: pass ...: In [14]: b = A.__new__(A, 3) # There was a custom `__init__` so, object.__new__ forgives us. # and finally your case, both a __new__ and __init__ even if `cls.__new__ is object.__new__` is true, errors: In [17]: class A(object): ...: def __new__(cls, *args): ...: raise NotImplementedError() ...: def __init__(self, *args): ...: pass ...: In [18]: class B(A): ...: __new__ = object.__new__ ...: In [19]: c = B() #<- works with no args In [20]: c = B(3) TypeError TypeError: object.__new__() takes exactly one argument (the type to instantiate) # And just one example with __init__ to show the converse case: In [28]: class A(object): ...: def __new__(cls, *args): ...: # I strip down the extra args: ...: return super().__new__(cls) ...: # no custom __init__ ...: In [29]: b = A(3) # <- works In [30]: class A(object): ...: def __new__(cls, *args): ...: # I strip down the extra args: ...: return super().__new__(cls) ...: # with a custom __init__ forwarding extra args ...: def __init__(self, *args): ...: print("init") ...: super().__init__(*args) ...: In [31]: b = A(3) # <- errors out init TypeError (...) Cell In[30], line 8, in A.__init__(self, *args) 6 def __init__(self, *args): 7 print("init") ----> 8 super().__init__(*args) TypeError: object.__init__() takes exactly one argument (the instance to initialize) And last but not least: for your case, you can't simply restore object.__new__ in a custom "granddaughter" class - you will need instead to check if __orig_new__ is object.__new__ and if so, use a custom __new__ that will strip extra arguments before calling object.__new__.
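A rough sketch of that closing suggestion, applied to the metaclass from the question (the __new_orig__ attribute and the abstract keyword follow the question's code; forwarding_new is just an illustrative name, and this is one possible workaround rather than the only one):
class AbstractClassError(Exception):
    pass

class M(type):
    def __new__(cls, className, baseClasses, members, abstract):
        newClass = type.__new__(cls, className, baseClasses, members)
        if abstract:
            def newnew(cls, *_, **__):
                raise AbstractClassError("Class is abstract")
            newClass.__new_orig__ = newClass.__new__
            newClass.__new__ = newnew
        else:
            orig = getattr(newClass, "__new_orig__", newClass.__new__)
            if orig is object.__new__:
                # Restoring object.__new__ directly is not enough (see above),
                # so strip the extra arguments before delegating to it.
                def forwarding_new(cls, *_, **__):
                    return object.__new__(cls)
                newClass.__new__ = forwarding_new
            else:
                newClass.__new__ = orig
        return newClass

class A(metaclass=M, abstract=True):
    pass

class B(A, abstract=False):
    def __init__(self, arg):
        self.arg = arg

b = B(5)        # no TypeError anymore
print(b.arg)    # 5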
5
3
76,440,090
2023-6-9
https://stackoverflow.com/questions/76440090/pinecone-maxretryerror-and-newconnectionerror
An application I've hosted online throws an error whenever it tries to query a pinecone database that I've set up. Whenever I run the same code (same pinecone environment and API key) on my local device, the queries go through just fine. Any ideas on what could be causing this issue? urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='aamcdoc-fb22780.svc.northamerica-northeast1-gcp.pinecone.io', port=443): Max retries exceeded with url: /query (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fd50980ce20>: Failed to establish a new connection: [Errno 111] Connection refused')) Edit: I found this on PythonAnywhere's Q&A (it's the service I'm using to host the web application) in regards to a similar question: "It appears that you are not configuring your code to use the proxy on PythonAnywhere, so you cannot connect out from your free account. Have a look at the documentation for the library you're using to see how to configure it to use the proxy." If that's the error, how would I go about fixing it?
I faced the exact same thing and just fixed it (also my first SO answer ever). Using Python 3.10 and the pinecone-client package, you have to pass PythonAnywhere's proxy server to Pinecone during init, like this:
from pinecone.core.client.configuration import Configuration as OpenApiConfiguration

openapi_config = OpenApiConfiguration.get_default_copy()
openapi_config.proxy = "http://proxy.server:3128"

pinecone.init(
    api_key="XXXXXXXXXXXXXXXXXXXXXXXXxx",
    environment="XXXXXXXXXXXXXXX",
    openapi_config=openapi_config
)
2
4
76,468,978
2023-6-13
https://stackoverflow.com/questions/76468978/problem-using-tweepy-the-error-403-forbidden-apeared-without-making-any-changes
Hello there, thanks for reading my post. I was using this same code yesterday and it was okay, but today it stopped working and I got this error:
raise Forbidden(response)
tweepy.errors.Forbidden: 403 Forbidden
When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal.
This is the code I implemented:
import tweepy
import csv

# Authorization
bearer_token = ""
client = tweepy.Client(bearer_token=bearer_token)

# Get the user id
user = ""
user_id = client.get_user(username=user).id

# Get all the tweets of a user
paginator = tweepy.Paginator(
    client.get_users_tweets,  # The method you want to use
    user_id,  # Some argument for this method
    #end_time=datetime.datetime(2022, 3, 3, 19, 4, 49),
    tweet_fields=["created_at", "public_metrics", "possibly_sensitive", "attachments", "in_reply_to_user_id"],
    max_results=100,  # How many tweets per page
    limit=40  # How many pages to retrieve
)

data = []
count = 0

# Create a csv file to store the data
filename = "@" + user + "_twitterdata.csv"
f = open(filename, 'a', newline='')
writer = csv.writer(f)

# Store the data
for tweet in paginator.flatten(limit=40000):  # Total number of tweets to retrieve
    count = count + 1
    # Check if the tweet has multimedia content
    if tweet.attachments != None: media = True
    else: media = False
    # Check if the tweet is a retweet
    Retweet = tweet.text[:2] == "RT"
    # Check if the tweet is a reply
    Reply = tweet.in_reply_to_user_id != None
    row = [tweet.id,
           tweet.created_at,
           tweet.public_metrics.get("retweet_count"),
           tweet.public_metrics.get("reply_count"),
           tweet.public_metrics.get("like_count"),
           tweet.public_metrics.get("quote_count"),
           tweet.public_metrics.get("impression_count"),
           tweet.possibly_sensitive,
           media,
           #tweet.text.replace(",", "" ),
           Retweet,
           Reply
           #tweet.context_annotations
           ]
    writer.writerow(row)

print("Retrieved ", count, " tweets")

# Write the data and close file
writer = csv.writer(f)
f.close()
I tried to generate new keys in the developer portal but that didn't help. I also tried to change the user authentication settings in the Twitter developer portal but that didn't work.
With the "free" plan of the Twitter API you can no longer lookup tweets, GET /2/tweets/:id is only available in the "basic" plan, see: https://developer.twitter.com/en/portal/products/free
3
2
76,459,034
2023-6-12
https://stackoverflow.com/questions/76459034/how-to-load-a-fine-tuned-peft-lora-model-based-on-llama-with-huggingface-transfo
I've followed this tutorial (colab notebook) in order to finetune my model. Trying to load my locally saved model model = AutoModelForCausalLM.from_pretrained("finetuned_model") yields Killed. Trying to load model from hub: yields import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "lucas0/empath-llama-7b" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained(cwd+"/tokenizer.model") # Load the Lora model model = PeftModel.from_pretrained(model, peft_model_id) yields AttributeError: /home/ubuntu/empath/lora/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats full stacktrace Model Creation: I have finetuned a model using PEFT and LoRa: model = AutoModelForCausalLM.from_pretrained( "decapoda-research/llama-7b-hf", torch_dtype=torch.float16, device_map='auto', ) I had to download and manually specify the llama tokenizer. tokenizer = LlamaTokenizer(cwd+"/tokenizer.model") tokenizer.pad_token = tokenizer.eos_token to the training: from peft import LoraConfig, get_peft_model config = LoraConfig( r=8, lora_alpha=16, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM" ) model = get_peft_model(model, config) data = pd.read_csv("my_csv.csv") dataset = Dataset.from_pandas(data) tokenized_dataset = dataset.map(lambda samples: tokenizer(samples["text"])) trainer = transformers.Trainer( model=model, train_dataset=tokenized_dataset, args=transformers.TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, warmup_steps=100, max_steps=100, learning_rate=1e-3, fp16=True, logging_steps=1, output_dir='outputs', ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False) ) model.config.use_cache = True # silence the warnings. Please re-enable for inference! trainer.train() and saved it locally with: trainer.save_model(cwd+"/finetuned_model") print("saved trainer locally") as well as to the hub: model.push_to_hub("lucas0/empath-llama-7b", create_pr=1) How can I load my finetuned model?
To load a fine-tuned peft/lora model, take a look at the guanco example, https://stackoverflow.com/a/76372390/610569 import torch from peft import PeftModel from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer model_name = "decapoda-research/llama-7b-hf" adapters_name = "lucas0/empath-llama-7b" print(f"Starting to load the model {model_name} into memory") m = AutoModelForCausalLM.from_pretrained( model_name, #load_in_4bit=True, torch_dtype=torch.bfloat16, device_map={"": 0} ) m = PeftModel.from_pretrained(m, adapters_name) m = m.merge_and_unload() tok = LlamaTokenizer.from_pretrained(model_name) tok.bos_token_id = 1 stop_token_ids = [0] print(f"Successfully loaded the model {model_name} into memory") You will need an A10g GPU runtime minimally to load the model properly. For more details see https://github.com/artidoro/qlora#tutorials-and-demonstrations Inference notebook: https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf?usp=sharing Training notebook: https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing
19
15
76,447,153
2023-6-10
https://stackoverflow.com/questions/76447153/how-to-use-a-llama-model-with-langchain-it-gives-an-error-pipeline-cannot-infe
finetuned a model (https://huggingface.co/decapoda-research/llama-7b-hf) using peft and lora and saved as https://huggingface.co/lucas0/empath-llama-7b. Now im getting Pipeline cannot infer suitable model classes from when trying to use it along with with langchain and chroma vectordb: from langchain.embeddings import HuggingFaceHubEmbeddings from langchain import PromptTemplate, HuggingFaceHub, LLMChain from langchain.chains import RetrievalQA from langchain.prompts import PromptTemplate from langchain.vectorstores import Chroma repo_id = "sentence-transformers/all-mpnet-base-v2" embedder = HuggingFaceHubEmbeddings( repo_id=repo_id, task="feature-extraction", huggingfacehub_api_token="XXXXX", ) comments = ["foo", "bar"] embeddings = embedder.embed_documents(texts=comments) docsearch = Chroma.from_texts(comments, embedder).as_retriever() #docsearch = Chroma.from_documents(texts, embeddings) llm = HuggingFaceHub(repo_id='lucas0/empath-llama-7b', huggingfacehub_api_token='XXXXX') qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=docsearch, return_source_documents=False) q = input("input your query:") result = qa.run(query=q) print(result["result"]) is anyone able to tell me how to fix this? Is it an issue with the model card? I was facing issues with the lack of the config.json file and ended up just placing the same config.json as the model I used as base for the lora fine-tuning. Could that be the origin of the issue? If so, how to generate the correct config.json without having to get the original llama weights? Also, is there a way of loading several sentences into a custom HF model (not only OpenAi, as the tutorial show) without using vector dbs? Thanks! The same issue happens when trying to run the API on the model's HF page:
Before using the langchain API to the huggingface model, you should try to load the model in Huggingface: from transformers import AutoModel model = AutoModel.from_pretrained('lucas0/empath-llama-7b') And that'll throw some errors: --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-2-1b9ce76f5421> in <cell line: 3>() 1 from transformers import AutoModel 2 ----> 3 model = AutoModel.from_pretrained('lucas0/empath-llama-7b') 1 frames /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2553 ) 2554 else: -> 2555 raise EnvironmentError( 2556 f"{pretrained_model_name_or_path} does not appear to have a file named" 2557 f" {_add_variant(WEIGHTS_NAME, variant)}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME} or" OSError: lucas0/empath-llama-7b does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack. Then looking into the model files, it looks like only the adapter model is saved and not the model, https://huggingface.co/lucas0/empath-llama-7b/tree/main, so the Automodel is throwing tantrums. To load an adapted model, you have to the base model and the peft (adapter model separated, first the installs (restart after installs, if needed): ! pip install -U peft accelerate ! pip install -U sentencepiece ! pip install -U transformers Then to load the model, take a look at the guanaco example, Trying to install guanaco (pip install guanaco) for a text classification model but getting error (You will need a GPU runtime) import torch from peft import PeftModel from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer model_name = "decapoda-research/llama-7b-hf" adapters_name = 'lucas0/empath-llama-7b' print(f"Starting to load the model {model_name} into memory") m = AutoModelForCausalLM.from_pretrained( model_name, #load_in_4bit=True, torch_dtype=torch.bfloat16, device_map={"": 0} ) m = PeftModel.from_pretrained(m, adapters_name) m = m.merge_and_unload() tok = LlamaTokenizer.from_pretrained(model_name) tok.bos_token_id = 1 stop_token_ids = [0] print(f"Successfully loaded the model {model_name} into memory") Now you can load the model that you've adapted/fine-tuned in Huggingface transformers, you can try it with langchain, before that we have to dig the langchain code, to use a prompt with HF model, users are told to do this: from langchain import PromptTemplate, LLMChain, HuggingFaceHub template = """ Hey llama, you like to eat quinoa. Whatever question I ask you, you reply with "Waffles, waffles, waffles!". Question: {input} Answer: """ prompt = PromptTemplate(template=template, input_variables=["input"]) model = HuggingFaceHub(repo_id="facebook/mbart-large-50", model_kwargs={"temperature": 0, "max_length":200}, chain = LLMChain(prompt=prompt, llm=model) But when we look at the HuggingFaceHub object it isn't just a vanilla AutoModel from transformers huggingface. When we look at https://github.com/hwchase17/langchain/blob/master/langchain/chains/llm.py, we see that it's trying to load the llm=... 
argument with some wrapper class, so we dig deeper into langchain's HuggingFaceHub object at https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_hub.py The HuggingFaceHub object wraps over the huggingface_hub.inference_api.InferenceApi for the text-generation, text2text-generation or summarization tasks And HuggingFaceHub looks like some spaghetti like object that inherits from LLM object https://github.com/hwchase17/langchain/blob/master/langchain/llms/base.py#L453 To summarize this a little, we want to: load a HuggingFaceHub with langchain API, and the HuggingFaceHub is actually a wrapper over the huggingface_hub.inference_api.InferenceApi and the HuggingFaceHub object is a subclass of llm.base.LLM Given that knowledge on the HuggingFaceHub object, now, we have several options: Opinion: The easiest way around it is to totally avoid langchain, since it's wrapper around things, you can write your customized wrapper that skip the levels of inheritance created in langchain to wrap around as many tools as it can/need Ideally: Ask the langchain developer/maintainer to load peft/adapter model and write another subclass for them Practical:* Lets hack the thing and write our own LLM subclass. Practical solution: Lets try to hack up a new LLM subclass from typing import Any, Dict, List, Mapping, Optional from pydantic import Extra, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain.llms.utils import enforce_stop_tokens from langchain import PromptTemplate, LLMChain class HuggingFaceHugs(LLM): pipeline: Any class Config: """Configuration for this pydantic object.""" extra = Extra.forbid def __init__(self, model, tokenizer, task="text-generation"): super().__init__() self.pipeline = pipeline(task, model=model, tokenizer=tokenizer) @property def _llm_type(self) -> str: """Return type of llm.""" return "huggingface_hub" def _call(self, prompt, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None,): # Runt the inference. text = self.pipeline(prompt, max_length=100)[0]['generated_text'] # @alvas: I've totally no idea what this in langchain does, so I copied it verbatim. if stop is not None: # This is a bit hacky, but I can't figure out a better way to enforce # stop tokens when making calls to huggingface_hub. text = enforce_stop_tokens(text, stop) print(text) return text[len(prompt):] template = """ Hey llama, you like to eat quinoa. Whatever question I ask you, you reply with "Waffles, waffles, waffles!". Question: {input} Answer: """ prompt = PromptTemplate(template=template, input_variables=["input"]) hf_model = HuggingFaceHugs(model=m, tokenizer=tok) chain = LLMChain(prompt=prompt, llm=hf_model) chain("Who is Princess Momo?") Phew, langchain didn't complain... and here's the output: {'input': 'Who is Princess Momo?', 'text': ' She is a princess. She is a princess. She is a princess. She is a princess. She is a princess. She is a princess. She is a princess. She is'} Epilogue: Apparently this llama model doesn't understand that all it needs to do is to reply Waffles, waffles, waffles TL;DR See https://colab.research.google.com/drive/1l2GiSSPbajVyp2Nk3CFT4t3uH6-5TiBe?usp=sharing
2
14
76,469,330
2023-6-13
https://stackoverflow.com/questions/76469330/using-cppyy-with-rvalue-pointers-and-maps
I would love to love cppyy. However the codebase I am using has heavy use of std.unique_ptr, rvalue pointers, and templates. I am confused about how to translate these into something I can call from python. For instance, I am stuck on how to create an std::map from classes. I understand that I can make an std::map by doing the following: test_map = Cpp.std.map[Cpp.std.string, Cpp.std.string]() test_string = "value" test_map["key"] = test_string print(test_map["key"]) However, when I do: test_map = Cpp.std.map[Cpp.std.string, Cpp.std.string]() test_string = Cpp.std.string("value") test_map["key"] = Cpp.std.move(test_string) print(test_map["key"]) I get --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[34], line 3 1 test_map = Cpp.std.map[Cpp.std.string, Cpp.std.string]() 2 test_string = Cpp.std.string("value") ----> 3 test_map["key"] = Cpp.std.move(test_string) 4 print(test_map["key"]) TypeError: none of the 2 overloaded methods succeeded. Full details: std::string& std::map<std::string,std::string>::operator[](std::map<std::string,std::string>::key_type&& __k) => logic_error: basic_string::_M_construct null not valid std::string& std::map<std::string,std::string>::operator[](const std::map<std::string,std::string>::key_type& __k) => logic_error: basic_string::_M_construct null not valid I am not sure why this fails. What I actually want to construct is a map from a string to a templated class, see: import cppyy import cppyy.gbl as Cpp cppyy.cppdef(r"""\ template<typename T> class MyClass { public: MyClass(T t) : m_data(t) {} T m_data; }; """) But when I try: test_map = Cpp.std.map[Cpp.std.string, Cpp.MyClass['double']]() myClass = Cpp.MyClass['double'](5.0) test_map["key"] = Cpp.std.move(myClass) print(test_map["key"]) I get a long error: input_line_50:6:86: error: no member named 'operator[]' in 'std::map<std::__cxx11::basic_string<char>, MyClass<double>, std::less<std::__cxx11::basic_string<char> >, std::allocator<std::pair<const std::__cxx11::basic_string<char>, MyClass<double> > > >' new (ret) (MyClass<double>*) (&((std::map<std::string,MyClass<double> >*)obj)->operator[]((std::string&&)*(std::string*)args[0])); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ input_line_50:10:55: error: no member named 'operator[]' in 'std::map<std::__cxx11::basic_string<char>, MyClass<double>, std::less<std::__cxx11::basic_string<char> >, std::allocator<std::pair<const std::__cxx11::basic_string<char>, MyClass<double> > > >' ((std::map<std::string,MyClass<double> >*)obj)->operator[]((std::string&&)*(std::string*)args[0]); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ input_line_51:6:86: error: no member named 'operator[]' in 'std::map<std::__cxx11::basic_string<char>, MyClass<double>, std::less<std::__cxx11::basic_string<char> >, std::allocator<std::pair<const std::__cxx11::basic_string<char>, MyClass<double> > > >' new (ret) (MyClass<double>*) (&((std::map<std::string,MyClass<double> >*)obj)->operator[]((const std::string&)*(const std::string*)args[0])); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ input_line_51:10:55: error: no member named 'operator[]' in 'std::map<std::__cxx11::basic_string<char>, MyClass<double>, std::less<std::__cxx11::basic_string<char> >, std::allocator<std::pair<const std::__cxx11::basic_string<char>, MyClass<double> > > >' ((std::map<std::string,MyClass<double> >*)obj)->operator[]((const std::string&)*(const std::string*)args[0]); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ 
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[45], line 3 1 test_map = Cpp.std.map[Cpp.std.string, Cpp.MyClass['double']]() 2 myClass = Cpp.MyClass['double'](5.0) ----> 3 test_map["key"] = Cpp.std.move(myClass) 4 print(test_map["key"]) TypeError: none of the 2 overloaded methods succeeded. Full details: MyClass<double>& std::map<std::string,MyClass<double> >::operator[](std::map<std::string,MyClass<double> >::key_type&& __k) => ReferenceError: none of the 2 overloaded methods succeeded. Full details: attempt to access a null-pointer attempt to access a null-pointer MyClass<double>& std::map<std::string,MyClass<double> >::operator[](const std::map<std::string,MyClass<double> >::key_type& __k) => TypeError: none of the 2 overloaded methods succeeded. Full details: MyClass<double>& MyClass<double>::operator=(MyClass<double>&&) => ValueError: could not convert argument 1 (object is not an rvalue) attempt to access a null-pointer What am I doing wrong?
You can insert Key/Value pairs using .emplace and you can lookup a value from a Key using .at. Example: #!/bin/python import cppyy import cppyy.gbl as Cpp # the class with an added operator<< overload to support printing: cppyy.cppdef(r"""\ template<typename T> class MyClass { public: MyClass(T t) : m_data(t) {} T m_data; friend std::ostream& operator<<(std::ostream& os, const MyClass& mc) { return os << mc.m_data; } }; """) from cppyy.gbl import MyClass test_map = Cpp.std.map[Cpp.std.string, MyClass['double']]() myObj = MyClass['double'](5.0) # add a key/value pair test_map.emplace("key", Cpp.std.move(myObj)) # print the value mapped to the key: print(test_map.at("key")) # loop and print keys and values for key, value in test_map: print(key, value) Output: 5 key 5
3
4
76,468,406
2023-6-13
https://stackoverflow.com/questions/76468406/create-many-confusion-like-matrices-concatenated-in-python
I have the following pandas dataframe import pandas as pd df = pd.DataFrame({'cl1': ['A','A','A','A', 'A','A','A','A', 'D','D','D','D', 'D','D','D','D'], 'cl2': ['C','C','C','C', 'B','B','B','B', 'C','C','C','C', 'B','B','B','B'], 'p1p2': ['00','01','10','11', '00','01','10','11', '00','01','10','11', '00','01','10','11'], 'val':[1,2,3,4, 10,20,30,40, 5,6,7,8, 50,60,70,80]}) df cl1 cl2 p1p2 val 0 A C 00 1 1 A C 01 2 2 A C 10 3 3 A C 11 4 4 A B 00 10 5 A B 01 20 6 A B 10 30 7 A B 11 40 8 D C 00 5 9 D C 01 6 10 D C 10 7 11 D C 11 8 12 D B 00 50 13 D B 01 60 14 D B 10 70 15 D B 11 80 And I would like to create a plot that looks like this How could I do that in python ?
Assuming that p1p2 only contains values 00, 01, 10, or 11, it is easy to use pivot table to get something like this: d = df.copy() d['p1'] = d['p1p2'].str[0] d['p2'] = d['p1p2'].str[1] counts = d.pivot_table(values = 'val', columns = ['cl1', 'p1'], index = ['cl2', 'p2']) counts # cl1 A D # p1 0 1 0 1 # cl2 p2 # B 0 10 30 50 70 # 1 20 40 60 80 # C 0 1 3 5 7 # 1 2 4 6 8 As for the plotting, here's something that is pretty close to what you are looking for: from matplotlib import pyplot as plt import seaborn as sns import numpy as np from matplotlib.colors import ListedColormap colors = pd.pivot_table(d, values = 'p1p2', columns = ['cl1', 'p1'], index = ['cl2', 'p2']).astype(int) colors = colors.astype(str).applymap(lambda val: int(val, 2)) color_labels = pd.pivot_table(d, values = 'val', columns = ['cl1', 'cl2'], index = ['p1p2']).index counts.index = counts.index.droplevel(-1) counts.columns = counts.columns.droplevel(-1) colors.index = colors.index.droplevel(-1) colors.columns = colors.columns.droplevel(-1) cmap = ListedColormap(['C0', 'C1', 'C2', 'C3']) ax = sns.heatmap(colors, cmap = cmap, annot = counts, cbar_kws = {"label": "p1p2 (legend)"}) colorbar = colorbar = ax.collections[0].colorbar colorbar.set_ticks(np.linspace(0, 3, 9)[1::2]) colorbar.set_ticklabels(color_labels) plt.show()
2
2
76,450,952
2023-6-11
https://stackoverflow.com/questions/76450952/two-conditional-clause-count-in-pandas
I have a df which looks like this: api_spec_id type_of_change label 213 Breaking NaN 213 Breaking major 213 Non-Breaking patch 345 Non-Breaking NaN 345 Non-Breaking patch 345 Non-Breaking patch 678 Breaking NaN 678 Breaking minor 678 Breaking major 123 Breaking NaN 123 Breaking NaN I want to calculate the unique number of api_spec_id where: all type_of_change are breaking, where expected output would be 2 (ids: 678,123) all type_of_change are non-breaking, where expected output would be 1 (id:345) at least one type_of_change is breaking, expected output: 3 (ids:213,678,123) at least one type_of_change is non-breaking, expected output: 2 (ids:213,345) I am not sure how I can achieve this, any suggestions or ideas would be greatly appreciated.
A possible solution : from functools import partial grp = df.groupby("api_spec_id") def detect(g, how, change): if how == "all": return g["type_of_change"].eq(change).all() elif how == "any": return g["type_of_change"].eq(change).any() def get_id(df): return df["api_spec_id"].unique().tolist() v1 = grp.filter(partial(detect, how="all", change="Breaking")).pipe(get_id) v2 = grp.filter(partial(detect, how="all", change="Non-Breaking")).pipe(get_id) v3 = grp.filter(partial(detect, how="any", change="Breaking")).pipe(get_id) v4 = grp.filter(partial(detect, how="any", change="Non-Breaking")).pipe(get_id) Output : print(v1) # [678, 123] print(v2) # [345] print(v3) # [213, 678, 123] print(v4) # [213, 345]
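For the same four counts, a shorter variant without filter/partial is also possible; a sketch, assuming df is the frame from the question (note that groupby sorts the ids, so the lists come out in a different order):
grp = df.groupby("api_spec_id")["type_of_change"]

all_breaking = grp.apply(lambda s: s.eq("Breaking").all())
all_non_breaking = grp.apply(lambda s: s.eq("Non-Breaking").all())
any_breaking = grp.apply(lambda s: s.eq("Breaking").any())
any_non_breaking = grp.apply(lambda s: s.eq("Non-Breaking").any())

print(all_breaking.sum(), all_breaking[all_breaking].index.tolist())
# 2 [123, 678]
print(all_non_breaking.sum(), all_non_breaking[all_non_breaking].index.tolist())
# 1 [345]
print(any_breaking.sum(), any_breaking[any_breaking].index.tolist())
# 3 [123, 213, 678]
print(any_non_breaking.sum(), any_non_breaking[any_non_breaking].index.tolist())
# 2 [213, 345]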
2
3
76,466,400
2023-6-13
https://stackoverflow.com/questions/76466400/why-are-docstrings-and-block-comments-suddenly-the-same-color-as-a-single-line-c
My Python docstrings & block comments in Visual Studio Code always used to be a different color to the single line comment. They use to be: docstrings & block comments: orange single line comments: green I did a reinstall of Visual Studio Code this morning and the block comments and docstrings are now the same color as the single line comments (so all comments are green). I have pasted an image (in case the colors here are different). I have always used the default dark mode (and never change the default settings). Has something changed or is there a setting that could change this back?
According to the Release notes for 1.79 (the pull request section), they changed the docstring comment colors with this pull request. You can set your own colors for this using the solution in the other answer.
5
5
76,466,694
2023-6-13
https://stackoverflow.com/questions/76466694/pandas-chaining-and-the-use-of-inplace-parameter
For pandas DataFrames in python, multiple member methods have an inplace parameter which purportedly allow you to NOT create a copy of the object, but rather to directly modify the original object*. [*Edited to add: however, this proves to not be the case as pointed out by @juanpa.arrivillaga. inplace=True DOES copy data and merely updates a pointer associated with the modified object, so has few advantages over a manual re-assignment to the name of the original object.] Examples that I have seen online for the use of inplace=True do not include examples where chaining is used. This comment in a related SO thread may be an answer to why I don't see such examples anywhere: you can't method chain and operate in-place. in-place ops return None and break the chain But, would "inplace chaining" work if you put an inplace=True in the last entry in the chain? [Edited to add: no] Or would that be equivalent to trying to change a copy created in an earlier link in the chain, which, as it is no longer your original object, is "lost" after the chain statement is complete? [Edited to add: yes; see answer here] The use of large data objects would seem to preclude the notion of chaining without the ability to do so in-place, at least insofar as desire to maintain a low memory overhead and high computational speed. Is there an alternate implementation of pandas or, e.g. an equivalent of R's data.table available in python that might be appropriate for my needs? Or are my only options to not chain (and compute quickly) or to chain but make redundant copies of the data, at least transiently?
Let's try it. import pandas as pd import numpy as np df = pd.DataFrame({'value' : [2, 2, 1, 1, 3, 4, 5, np.NaN]}) df.sort_values('value').drop_duplicates().dropna(inplace=True) Expect: value 2 1.0 0 2.0 4 3.0 5 4.0 6 5.0 Result: value 0 2.0 1 2.0 2 1.0 3 1.0 4 3.0 5 4.0 6 5.0 7 NaN Answer: No, inplace=True at the end of the chain does not modify the original dataframe.
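If the goal is a readable chain without inplace=True, the usual idiom is simply to bind the chained result back to the name; a small sketch with the same frame as above:
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': [2, 2, 1, 1, 3, 4, 5, np.nan]})

# Chain the steps and rebind the result to the original name.
df = (
    df.sort_values('value')
      .drop_duplicates()
      .dropna()
)
print(df)
#    value
# 2    1.0
# 0    2.0
# 4    3.0
# 5    4.0
# 6    5.0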
3
3
76,466,162
2023-6-13
https://stackoverflow.com/questions/76466162/python-pass-by-object-reference-in-memory-using-id
I am trying to test out some code in reference to Robert Heaton's article explaining the difference between the different concepts of passing parameters into functions. Why are list and myList stored in the same location in memory when, according to Heaton, they should be two completely separate variables. Here is what I have written: def main(): myList = [0] print(f"myList: {id(myList)}") print(f"[0]: {id([0])}") append(myList) print(f"After call: {myList}") def append(list): print(f"append's list: {id(list)}") list.append(1) According to Heaton, with pass-by-object-reference "the function provides its own box and creates a new variable for itself". As such, I expected that id(myList) and id(list) should be two different variables stored in different locations in memory. However, the output I receive is: myList: 1941960670528 [0]: 1941963219200 append's list: 1941960670528 After call: [0, 1]
You have to consider the entire paragraph. The article is not saying it should be 2 different objects. There is one list, but separate variables pointing to it. Consider the following (list variable changed to l because built-ins shouldn't be used for variable names.): def append(l): print(f"append's list: {id(l)}") l.append(1) l = [] print(f"append's list after reassigning: {id(l)}") myList = [0] print(f"myList: {id(myList)}") append(myList) print(f"After call: {myList}") print(f"myList id after call: {id(myList)}") We reassigned l to a new list, and yet myList outside is still pointing to the list containing [0, 1] What it's essentially trying to show, is that there are 3 parts to a variable - a name, reference and its value. Names - e.g. l, myList - is what we refer to in code. They, in turn get resolved to a reference - to simplify, the value given by id() - to find a place in memory where the actual value is stored. So, when we pass a list to a function, same reference is given a new name we can use inside the function. Changing the value related to this reference will be visible to the outside function, as the reference is the same. If we do something that changes the reference, e.g. assign a new list with l = [], this will not affect the outside list, as it still uses the old reference. To summarise, we've got 2 relationships: name -> reference and reference -> value When calling a function, we give the reference a new name, resulting in something like: name_outer -\ >- reference -- value name_inner -/ Changing the value results in something like: name_outer -\ >- reference -- new_value name_inner -/ While changing the reference results in: name_outer -- reference -- value name_inner -- new_reference -- new_value
3
3
76,464,908
2023-6-13
https://stackoverflow.com/questions/76464908/fillna-by-avoiding-row-wise-operation-in-pandas
I have a data frame in which there is a column containing several NaN values. The dataframe looks like this: col_1 col_2 2022-10-31 99.094 102.498 2022-11-30 99.001 101.880 2022-12-31 NaN 108.498 2023-01-31 NaN 100.500 I want to fill those NaN based on the simple calculation below: desired_val = (previous value in col_1 * current value in col_2) / previous value in col_2 which means, df.loc['2022-12-31', 'col_1'] should be = (99.001 * 108.498) / 101.880 = 105.432 and df.loc['2023-01-31', 'col_1'] should be = (105.432 * 100.500) / 108.498 = 97.660 I found solution by using row by row operation but it is slow when the dataset is big. I tried column wise operation by using this: df['col_1'] = df['col_1'].fillna( (df[col_1].shift(1) * df[col_2]) / df[col_2].shift(1) ) But it does work only for one row and then it does not go further. Is there any column wise pandas solution for that?
You can think of your operation and see that you multiply by x in one row and divide by x in the next row. Thus you can simplify the result to: col1_value = (last_valid_col1_value * current_col2_value) / col2_value_at_last_valid_col1_position Which can be translated as: # is the row a NA? m1 = df['col_1'].isna() # is the next row a NA? m2 = df['col_1'].shift(-1).isna() df.loc[m1, 'col_1'] = (df['col_1'].div(df['col_2']) .where(m2 & ~m1).ffill() .mul(df['col_2'])[m1] ) Output: col_1 col_2 2022-10-31 99.094000 102.498 2022-11-30 99.001000 101.880 2022-12-31 105.431984 108.498 2023-01-31 97.659997 100.500 Intermediates: col_1 col_2 m1 m2 m2&~m1 ffilled(col1/col2) result 2022-10-31 99.094 102.498 False False False NaN NaN 2022-11-30 99.001 101.880 False True True 0.971741 NaN 2022-12-31 NaN 108.498 True True False 0.971741 105.431984 2023-01-31 NaN 100.500 True True False 0.971741 97.659997
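The ffilled(col1/col2) intermediate suggests an equivalent, more compact formulation. This is my own sketch and assumes, as in the example, that every NaN has at least one valid col_1 value somewhere above it:

```
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"col_1": [99.094, 99.001, np.nan, np.nan],
     "col_2": [102.498, 101.880, 108.498, 100.500]},
    index=["2022-10-31", "2022-11-30", "2022-12-31", "2023-01-31"],
)

# forward-fill the ratio col_1/col_2 (known on valid rows only),
# then scale back by col_2 to fill the gaps
df["col_1"] = df["col_1"].fillna(
    df["col_1"].div(df["col_2"]).ffill().mul(df["col_2"])
)
print(df)
```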
3
4
76,459,471
2023-6-12
https://stackoverflow.com/questions/76459471/cant-create-tables-in-test-database-while-testing-with-pytest-postgresql
I'm trying to write a pytest for models and database in Postgres using fixtures and pytest_postgresql. Running test gives: FAILED tests/test_model_with_test_db.py::test_authors - sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "authors" does not exist Why it doesn't created all tables with model.Base.metadata.create_all(con)? My test code is following: import pytest from pytest_postgresql import factories from pytest_postgresql.janitor import DatabaseJanitor from sqlalchemy import create_engine, select from sqlalchemy.orm.session import sessionmaker import model test_db = factories.postgresql_proc(port=None, dbname="test_db") @pytest.fixture(scope="session") def db_session(test_db): pg_host = test_db.host pg_port = test_db.port pg_user = test_db.user pg_password = test_db.password pg_db = test_db.dbname with DatabaseJanitor(pg_user, pg_host, pg_port, pg_db, test_db.version, pg_password): connection_str = f"postgresql+psycopg2://{pg_user}:@{pg_host}:{pg_port}/{pg_db}" engine = create_engine(connection_str) with engine.connect() as con: model.Base.metadata.create_all(con) yield sessionmaker(bind=engine, expire_on_commit=False) @pytest.fixture(scope="module") def create_test_data(): authors = [ ["John", "Smith", "[email protected]"], ["Bill", "Miles", "[email protected]"], ["Frank", "James", "[email protected]"] ] return [model.Author(firstname=firstname, lastname=lastname, email=email) for firstname, lastname, email in authors] def test_persons(db_session, create_test_data): s = db_session() for obj in create_test_data: s.add(obj) s.commit() query_result = s.execute(select(model.Author)).all() s.close() assert len(query_result) == len(create_test_data) model.py: from sqlalchemy import create_engine, Column, Integer, String, DateTime, Text, ForeignKey from sqlalchemy.engine import URL from sqlalchemy.orm import declarative_base, relationship, sessionmaker from datetime import datetime Base = declarative_base() class Author(Base): __tablename__ = 'authors' id = Column(Integer(), primary_key=True) firstname = Column(String(100)) lastname = Column(String(100)) email = Column(String(255), nullable=False) joined = Column(DateTime(), default=datetime.now) articles = relationship('Article', backref='author') class Article(Base): __tablename__ = 'articles' id = Column(Integer(), primary_key=True) slug = Column(String(100), nullable=False) title = Column(String(100), nullable=False) created_on = Column(DateTime(), default=datetime.now) updated_on = Column(DateTime(), default=datetime.now, onupdate=datetime.now) content = Column(Text) author_id = Column(Integer(), ForeignKey('authors.id')) url = URL.create( drivername="postgresql", username="postgres", host="localhost", port=5433, database="andy" ) engine = create_engine(url) Session = sessionmaker(bind=engine)
Figured out myself. Modified db_session fixture like following: @pytest.fixture(scope="session") def db_session(test_db): pg_host = test_db.host pg_port = test_db.port pg_user = test_db.user pg_password = test_db.password pg_db = test_db.dbname with DatabaseJanitor(pg_user, pg_host, pg_port, pg_db, test_db.version, pg_password): connection_str = f"postgresql+psycopg2://{pg_user}:@{pg_host}:{pg_port}/{pg_db}" engine = create_engine(connection_str) model.Base.metadata.create_all(engine) yield sessionmaker(bind=engine, expire_on_commit=False) And test passes. But why it doesn't work with connection instead?
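A guess at the "why" (not verified against the asker's exact versions): with SQLAlchemy 1.4/2.0-style connections, DDL issued on a plain engine.connect() connection is not committed when the with block exits, so the tables are rolled back before the tests see them, whereas create_all(engine) manages and commits its own transaction. A sketch of a connection-based variant that should also work under that assumption:

```
from sqlalchemy import create_engine

import model

# connection_str built exactly as in the fixture above
engine = create_engine(connection_str)

# engine.begin() opens a transaction and commits it on exit,
# so the CREATE TABLE statements actually persist
with engine.begin() as con:
    model.Base.metadata.create_all(con)
```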
2
3
76,456,495
2023-6-12
https://stackoverflow.com/questions/76456495/what-function-would-best-fit-the-data-i-have-from-a-galaxy
I have the following set of data: surface_brightnesses_o2 = [12076.0616666451, 11850.730704516911, 10265.598145816548, 9120.859898168235, 7070.26133100111, 5636.138833975608, 3968.1608109082404, 2923.2839406153525, 1963.9315683870766, 1417.3534005331746, 953.9023540784231, 705.6331341427699, 494.19332394388607, 368.6833467905476, 266.41823769096874, 209.98748543636287, 162.17577134818487, 125.70474388251918, 99.72308185010249, 77.89696236284223, 53.44842864009773, 44.01192443651109, 35.52192383706094, 28.055033719366026] surface_brightnesses_o3 = [24172.942124480545, 23257.99074788583, 19560.86193185194, 16867.86523112749, 12362.182457744273, 9447.974865736134, 6155.667579526176, 4233.309154367383, 2589.6992946467008, 1744.3756532539348, 1096.6861498588305, 768.600975237508, 512.7340397075068, 378.58271663510016, 268.4441550825379, 206.52758729119557, 155.45645416835472, 124.71693391104529, 97.34230151849876, 79.90134896492059, 63.519334039447266, 52.12382464229779, 41.91733978896593, 37.68365343589249, 31.54091147651983, 25.80764998552268, 22.808177293717083, 20.4718551088832, 16.05156984850126, 15.497358990115051, 15.42389243808505, 13.54177847744223] radii_o2 = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0] radii_o3 = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14, 14.5, 15, 15.5, 16] surface_brightnesses_error_o2 = [109.89113552 85.30012943 80.8548183 76.55283021 66.49162753 58.35388488 49.4425817 43.48019603 36.48439283 32.13758154 28.57971998 26.30618542 24.27602806 23.10048171 22.01106869 21.3172123 20.77203895 20.41962288 20.12573286 19.8928839 19.84192745 19.80754151 19.6515864 19.60323267] surface_brightnesses_error_o3 = [155.47650023 84.28555314 74.17986129 66.93258861 54.67881726 46.5099896 36.86637245 30.71396278 25.45559327 22.40018842 19.83606727 18.43327984 16.94700871 16.13059484 15.55795461 15.155422 14.7707935 14.59604581 14.30144021 14.13502224 14.04555569 13.9530354 14.01473729 14.13623735 14.16959504 14.1342218 13.9836842 13.87870645 13.88701116 13.91734777 13.96048525 13.98621865] I am trying to plot a fit such that the yscale (surface brightnesses) is log and the xscale (radii) is linear. I would also like to incorporate the errors for O2 and O3 in the corresponding plots for the surface brightnesses of O2 and O3. I do not want to take log of the surface brightness values, I just want to plot the data as it is and set the yscale to log. However, I couldn't find a function that fits the data correctly. I would appreciate some input on what would be a good fit here, and how to code it in. I tried fitting a Sersic function, which is a brightness profile function used to study the surface brightness profiles of galaxies. 
fig, ax = plt.subplots(figsize=(10, 7)) # Define Sersic function def sersic(r, I_e, R_e, n): b_n = 1.9992*n - 0.3271 return I_e * np.exp(-b_n * ((r/R_e)**(1/n) - 1)) # Fit the model to the O2 data popt_o2, pcov_o2 = curve_fit(sersic, radii_o2, surface_brightnesses_o2, sigma=surface_brightnesses_error_o2, p0=[100000, 16, 2]) # Fit the model to the O3 data popt_o3, pcov_o3 = curve_fit(sersic, radii_o3, surface_brightnesses_o3, sigma=surface_brightnesses_error_o3, p0=[10000, 16, 2]) # O2 data with error bars and fitted line plt.errorbar(radii_o2, surface_brightnesses_o2, yerr=surface_brightnesses_error_o2, fmt='o', label='O2 data', capsize=4) plt.plot(radii_o2, sersic(radii_o2, *popt_o2), 'r-', label='O2 fit: I_e=%5.3f, R_e=%5.3f, n=%5.3f' % tuple(popt_o2), color = 'blue') # O3 data with error bars and fitted line plt.errorbar(radii_o3, surface_brightnesses_o3, yerr=surface_brightnesses_error_o3, fmt='o', label='O3 data', capsize=4) plt.plot(radii_o3, sersic(radii_o3, *popt_o3), 'b-', label='O3 fit: I_e=%5.3f, R_e=%5.3f, n=%5.3f' % tuple(popt_o3), color = 'red') plt.xlabel('Radii') plt.ylabel('Surface Brightness') plt.yscale('log') plt.ylim(1, 30000) # Adjust the y-axis limits here plt.title('Sersic Fit to Surface Brightness vs Radii for O2 and O3') plt.legend() plt.show() And then I tried fitting a log-Gaussian plot: # Define the log-Gaussian function to fit to the data def log_gaussian(x, amp, cen, wid): return amp * np.exp(-(np.log(x) - cen)**2 / wid**2) # Initial guess for parameters (necessary for log-Gaussian) popt_o2, pcov_o2 = curve_fit(power_law, radii_o2, surface_brightnesses_o2) popt_o3, pcov_o3 = curve_fit(power_law, radii_o3, surface_brightnesses_o3 # Fit the log-Gaussian model to the data params_o2, _ = curve_fit(log_gaussian, radii_o2, surface_brightnesses_o2, p0_o2) params_o3, _ = curve_fit(log_gaussian, radii_o3, surface_brightnesses_o3, p0_o3) # Generate points for the fitted log-Gaussian function fit_o2 = power_law(radii_smooth_o2, *popt_o2) fit_o3 = power_law(radii_smooth_o3, *popt_o3) # Create the plot plt.figure(figsize=(10, 6)) # Plot the original data plt.errorbar(radii_o2, surface_brightnesses_o2, yerr=surface_brightnesses_error_o2, fmt='o', label='Data O2', capsize=4) plt.errorbar(radii_o3, surface_brightnesses_o3, yerr=surface_brightnesses_error_o3, fmt='o', label='Data O3', capsize=4) # Plot the fitted log-Gaussian function plt.plot(radii_fit, fit_o2, label='Fit O2', color = 'blue') plt.plot(radii_fit, fit_o3, label='Fit O3', color = 'red') # Decorate the plot and set yscale to log plt.xlabel('Radii') plt.ylabel('Surface Brightnesses') plt.title('Surface Brightnesses vs Radii') plt.legend() plt.yscale('log') # Show the plot plt.show()
Use a different model, and when you do, perform a log-fit. You've applied your log on x when I believe you should apply it on y during fit. There's an infinite number of models to choose from; which are scientifically valid is up to you to determine. One that has a loosely reasonable fit is a generalized Gaussian with a linear decay term; there are others. import numpy as np from matplotlib import pyplot as plt from scipy.optimize import curve_fit surface_brightnesses_o2 = np.array([ 12076.0616666451, 11850.730704516911, 10265.598145816548, 9120.859898168235, 7070.26133100111, 5636.138833975608, 3968.1608109082404, 2923.2839406153525, 1963.9315683870766, 1417.3534005331746, 953.9023540784231, 705.6331341427699, 494.19332394388607, 368.6833467905476, 266.41823769096874, 209.98748543636287, 162.17577134818487, 125.70474388251918, 99.72308185010249, 77.89696236284223, 53.44842864009773, 44.01192443651109, 35.52192383706094, 28.055033719366026 ]) surface_brightnesses_o3 = np.array([ 24172.942124480545, 23257.99074788583, 19560.86193185194, 16867.86523112749, 12362.182457744273, 9447.974865736134, 6155.667579526176, 4233.309154367383, 2589.6992946467008, 1744.3756532539348, 1096.6861498588305, 768.600975237508, 512.7340397075068, 378.58271663510016, 268.4441550825379, 206.52758729119557, 155.45645416835472, 124.71693391104529, 97.34230151849876, 79.90134896492059, 63.519334039447266, 52.12382464229779, 41.91733978896593, 37.68365343589249, 31.54091147651983, 25.80764998552268, 22.808177293717083, 20.4718551088832, 16.05156984850126, 15.497358990115051, 15.42389243808505, 13.54177847744223 ]) radii_o2 = np.array([ 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0 ]) radii_o3 = np.array([ 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14, 14.5, 15, 15.5, 16 ]) surface_brightnesses_error_o2 = [ 109.89113552, 85.30012943, 80.85481830, 76.55283021, 66.49162753, 58.353884880, 49.44258170, 43.48019603, 36.48439283, 32.13758154, 28.579719980, 26.30618542, 24.27602806, 23.10048171, 22.01106869, 21.317212300, 20.77203895, 20.41962288, 20.12573286, 19.89288390, 19.841927450, 19.80754151, 19.65158640, 19.60323267] surface_brightnesses_error_o3 = [ 155.47650023, 84.28555314, 74.17986129, 66.93258861, 54.67881726, 46.50998960, 36.86637245, 30.71396278, 25.45559327, 22.40018842, 19.83606727, 18.43327984, 16.94700871, 16.13059484, 15.55795461, 15.15542200, 14.77079350, 14.59604581, 14.30144021, 14.13502224, 14.04555569, 13.95303540, 14.01473729, 14.13623735, 14.16959504, 14.13422180, 13.98368420, 13.87870645, 13.88701116, 13.91734777, 13.96048525, 13.98621865] def gaussian(x: np.ndarray, amp: float, cen: float, wid: float, pow: float, slope: float, off: float) -> np.ndarray: return amp * np.exp(-np.abs((x - cen)/wid)**pow) + slope*x + off def log_gaussian(x: np.ndarray, *params: float) -> np.ndarray: return np.log(gaussian(x, *params)) ax: plt.Axes fig, ax = plt.subplots() for title, brightness, radii, error, guess in ( ( 'O2', surface_brightnesses_o2, radii_o2, surface_brightnesses_error_o2, (1e4, 0, 1, 2, 0, 0), ), ( 'O3', surface_brightnesses_o3, radii_o3, surface_brightnesses_error_o3, (1e4, 0, 1, 2, 0, 0), ), ): ax.errorbar(radii, brightness, yerr=error, fmt='o', capsize=4, label=f'{title} data') # ax.plot(radii, gaussian(radii, *guess), label=f'{title} guess') fit, _ = curve_fit( f=log_gaussian, xdata=radii, ydata=np.log(brightness), 
p0=guess, bounds=( ( 1, -20, 0.01, 0.5, -1e6, -1e6), (1e9, 20, 10.00, 10.0, 1e6, 1e6), ), ) print(fit) plot_radii = np.linspace(start=fit[1], stop=max(radii_o2.max(), radii_o3.max()), num=200) ax.plot(plot_radii, gaussian(plot_radii, *fit), label=f'{title} fit') plt.title('Gaussian Fit to Surface Brightness vs Radii for O2 and O3') ax.set_xlabel('Radii') ax.set_ylabel('Surface brightness') ax.set_yscale('log') ax.set_ylim(1, 1e5) ax.legend() plt.show()
3
2
76,460,679
2023-6-12
https://stackoverflow.com/questions/76460679/read-csv-file-with-columns-of-varying-length-as-dictionary-in-python
How do I read in a .csv file in Python with columns of varying lengths? I want to create a dictionary from the .csv file, with the .csv columns as lists of dictionary values. I've figured out how to write the dictionary to a .csv file, but I need help reading in that same file. import csv import itertools path = 'C:/Users/.../test.csv' out_dict = { 'Class1': ['A', 'B'], 'Class2': ['C', 'D', 'E', 'F', 'G', 'H', 'I'], 'Class3': ['J', 'K', 'L', 'M', 'N']} # write dictionary to csv with open(path, 'wt', newline='') as csv_file: writer = csv.writer(csv_file) writer.writerow(out_dict.keys()) writer.writerows(itertools.zip_longest(*out_dict.values())) csv_file.close() # read csv as dictionary with open(path, 'rt') as csv_file: reader = csv.reader(csv_file); in_dict = ??? csv_file.close() print(in_dict) Desired Output: {'Class1': ['A', 'B'], 'Class2': ['C', 'D', 'E', 'F', 'G', 'H', 'I'], 'Class3': ['J', 'K', 'L', 'M', 'N']}
To read the CSV file back I recommend to use csv.DictReader: import csv import itertools path = '<PATH>' out_dict = { "Class1": ["A", "B"], "Class2": ["C", "D", "E", "F", "G", "H", "I"], "Class3": ["J", "K", "L", "M", "N"], } # write dictionary to csv with open(path, 'wt', newline='') as csv_file: writer = csv.writer(csv_file) writer.writerow(out_dict.keys()) writer.writerows(itertools.zip_longest(*out_dict.values())) # read csv as dictionary out = {} with open(path, 'rt') as csv_file: reader = csv.DictReader(csv_file) for row in reader: for k, v in row.items(): if v != '': out.setdefault(k, []).append(v) print(out) Prints: { "Class1": ["A", "B"], "Class2": ["C", "D", "E", "F", "G", "H", "I"], "Class3": ["J", "K", "L", "M", "N"], }
2
3
76,459,485
2023-6-12
https://stackoverflow.com/questions/76459485/replace-string-with-other-with-two-possible-patterns-in-python
I would like to replace everything in a string up until "err" or "error" appears with an empty string. So: "abc err efg" -> "efg", "abc error efg" -> "efg". How to do it with one pattern using re.sub? I tried this: lines_input = ['some line err hello', 'some line error hello'] rep = {r'^.*?err': '', r'^.*?error': ''} dict((re.escape(k), v) for k, v in rep.items()) pattern = re.compile("|".join(rep.keys())) for line in lines_input: print pattern.sub(lambda m: rep[re.escape(m.group(0))], line) and got a KeyError. Would like to have: hello hello
You don't need multiple regular expressions. The two patterns are the same except for the optional or at the end of error, so use an optional group. line = re.sub(r'.*err(or)?\s*', '', line)
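A quick check of the suggested pattern on the sample input (my own snippet; output shown in comments). Note that with the greedy .* everything up to the last err/error is removed; if only the text up to the first occurrence should go, a lazy ^.*?err(or)?\s* does that instead:

```
import re

lines_input = ['some line err hello', 'some line error hello']
for line in lines_input:
    print(re.sub(r'.*err(or)?\s*', '', line))
# hello
# hello
```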
2
5
76,450,603
2023-6-11
https://stackoverflow.com/questions/76450603/using-loop-run-in-executor-to-call-sync-functions-from-async-ones
I have 3 functions: func_1, func_2, and func_3. I would like to run these asynchronously, so that I do not have to wait for func_1 to finish before func_2 starts executing. The problem is, that the definition of func_1 for example looks something like this: async def func_1(a, b): x = some_sync_func(a) y = some_other_sync_func(b) z = another_sync_func(x, y) return yet_another_sync_func(z) The functions that I am calling within func_1 are all synchronous functions which are non-awaitable. Thus, they will block the execution of func_2 and func_3. I read here that loop.run_in_executor() can be used to call synchronous functions from asynchronous functions without blocking the execution. Thus, I modified the definition of func_1 as follows: async def func_1(a, b): loop = asyncio.get_event_loop() x = await loop.run_in_executor(None, some_sync_func, a) y = await loop.run_in_executor(None, some_other_sync_func, b) z = await loop.run_in_executor(None, lambda: another_sync_func(a,b)) w = await loop.run_in_executor(None, yet_another_sync_func, z) return w Is this the right way to deal with this problem? Am I using loop.run_in_executor() correctly? Here, the docs provide an example which seems to support this. I don't know what threads are, or what a "process pool" is, and haven't really been able to make much sense of the docs.
Almost right, but since you are awaiting eagerly at each function call, the next line of code in each case (after the await) will only run once the awaited line finishes. However, if you call func_1 in parallel from some other place, two instances of func_1 will work in parallel (I am almost sure that is not what you want). So, in order for these calls to actually run in parallel (in other threads), you have to create the task for each of them but not await it immediately; instead, you gather all the tasks you want to run in parallel and await them at once (usually with a function aptly named gather): ... async def func_1(a, b): loop = asyncio.get_event_loop() task_x = loop.run_in_executor(None, some_sync_func, a) task_y = loop.run_in_executor(None, some_other_sync_func, b) task_z = loop.run_in_executor(None, lambda: another_sync_func(a,b)) x, y, z = await asyncio.gather(task_x, task_y, task_z) # this depends on `z`, so it is not included in the gather. # if its return value is not important, you can omit the # await, return the task, and await it sometime later. w = await loop.run_in_executor(None, yet_another_sync_func, z) return w ...
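If the original goal is also to run func_1, func_2 and func_3 concurrently with each other, the same gather idea applies one level up. A minimal self-contained sketch (the function bodies are placeholders of my own):

```
import asyncio

async def func_1(a, b):
    loop = asyncio.get_running_loop()
    # offload blocking work to the default thread pool
    return await loop.run_in_executor(None, sum, [a, b])

async def func_2():
    await asyncio.sleep(0.1)
    return "func_2 done"

async def func_3():
    await asyncio.sleep(0.1)
    return "func_3 done"

async def main():
    # schedule all three and wait for them together
    results = await asyncio.gather(func_1(1, 2), func_2(), func_3())
    print(results)

asyncio.run(main())
```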
2
7
76,456,918
2023-6-12
https://stackoverflow.com/questions/76456918/typehint-method-as-returning-return-type-of-other-method-in-python
I have a base class: from abc import abstractmethod class Thing: @abstractmethod def _process(self): ... def process(self, x: int): self.pre_process(x) return self._process() How do I typehint process as returning the return type of _process? My first thought was something like: from abc import abstractmethod from typing import TypeVar class Thing: T = TypeVar("T") @abstractmethod def _process(self) -> T: ... def process(self, x: int) -> T: ... But mypy 1.3.0 complains quite rightly that T is only present once in the function signature: > mypy /tmp/t.py ... error: A function returning TypeVar should receive at least one argument containing the same TypeVar ...
You can make Thing inherit Generic[T]. from typing import TypeVar from typing import Generic from abc import abstractmethod T = TypeVar("T") class Thing(Generic[T]): @abstractmethod def _process(self) -> T: ... def process(self, x: int) -> T: return self._process() > mypy /tmp/t.py Success: no issues found in 1 source file Now you can inherit from Thing like this: class A(Thing[list]): def _process(self) -> list: return []
2
3
76,454,711
2023-6-12
https://stackoverflow.com/questions/76454711/display-only-existing-x-axis-values-for-each-facet-in-a-multi-faceted-bar-plot-u
For the following multifacet plot df = pd.DataFrame({ 'row': [0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1], 'col': [0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1 ], 'x_value': [1,2,3,4,1,2,3,4,1,2,3,4,1,2,3,4], 'count': [1,7,4,0,0,3,1,3,1,9,2,2,0,0,3,4] }) df = df.query('count != 0 ') fig = px.bar(df, x='x_value', y='count', facet_col='col', facet_row='row', template='simple_white') fig.for_each_xaxis(lambda xaxis: xaxis.update(showticklabels=True, title_font = dict(size =20), type = 'category')) fig.show() for each subplot, i want to show ONLY the x-ticks present in the data. The circled ticks should not be showing:
To get what you are looking for, you need to customize each x-axis (and y-axis, if required) to include matches=None. This will stop trying to match the columns across each of the subplots. So, replace the line... fig.for_each_xaxis(lambda xaxis: xaxis.update(showticklabels=True, title_font = dict(size =20), type = 'category')) with fig.for_each_xaxis(lambda xaxis: xaxis.update(showticklabels=True, matches=None, title_font = dict(size =20), type = 'category')) and you will get below plot. Hope this is what you are looking for...
2
3
76,452,656
2023-6-11
https://stackoverflow.com/questions/76452656/unpacking-and-assignment-oddity-between-3d-list-and-numpy-array
I create a normal python list a = [[[1,2],[3,4]],[[5,6],[7,8]],[[9,10],[11,12]]] Now I want to shift the first row of each 2x2 array to the previous 2x2 array, wrapping the first back to the last. I use the following unpacking assignment statement: a[0][0],a[1][0],a[2][0] = a[1][0],a[2][0],a[0][0] I get the following, which is what I want print(a) [[[5, 6], [3, 4]], [[9, 10], [7, 8]], [[1, 2], [11, 12]]] Now, I do the same thing, this time, using numpy arrays b = np.arange(1,13).reshape((3,2,2)) print(b) [[[ 1 2] [ 3 4]] [[ 5 6] [ 7 8]] [[ 9 10] [11 12]]] The shift is the same as before but using numpy indexing syntax b[0,0],b[1,0],b[2,0] = b[1,0],b[2,0],b[0,0] print(b) [[[ 5 6] [ 3 4]] [[ 9 10] [ 7 8]] [[ 5 6] [11 12]]] As you can see, this assignment left the first and last 2x2 arrays with the same first row. Would you expect this behavior difference? Why does numpy do this but not the regular list type? How could the numpy assignment be done to result in the same result as the list?
The way I reason this out is that since b is a (3,2,2) array, each of b[1,0], b[2,0], b[0,0] is a view of b. That is, 3 (2,)-shaped arrays, but each uses a different part of b's data buffer. During the assignment, b[0,0] is set to the values at b[1,0]. By the time it assigns to b[2,0], b[0,0] already has the new values, [5,6]. I've seen this on occasion before. It's another case where arrays aren't exactly the same as lists. Lists contain pointers to lists, whereas array indexing either produces copies or views. Views are convenient, but can mess with list/reference based intuitions. Fortunately arrays can assign several values at once, so we can do: b[[0,1,2],0] = b[[1,2,0],0] (the fancy-indexed right-hand side is a copy, so there is no aliasing), or equivalently b[:,0] = np.roll(b[:,0], -1, axis=0). Note that a plain reversed slice, b[:,0] = b[::-1,0], is not equivalent: reversing the three rows is not the same as rotating them by one. Earlier questions focus more on the correct way of doing the swap, and less on why there's a difference. So won't mark this as duplicate Swap slices of Numpy arrays Row exchange in Numpy
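A quick verification (my own) that the fancy-indexing form reproduces the list behaviour from the question:

```
import numpy as np

a = [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]
a[0][0], a[1][0], a[2][0] = a[1][0], a[2][0], a[0][0]

b = np.arange(1, 13).reshape((3, 2, 2))
# the fancy-indexed right-hand side is copied first, so no aliasing occurs
b[[0, 1, 2], 0] = b[[1, 2, 0], 0]
# equivalently: b[:, 0] = np.roll(b[:, 0], -1, axis=0)

print(np.array_equal(np.array(a), b))   # True
```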
2
3
76,443,511
2023-6-9
https://stackoverflow.com/questions/76443511/loss-of-data-in-short-time-fourier-transform-and-inverse-need-help-improving-au
I am currently developing my own audio library and have implemented the Short-Time Fourier Transform (STFT) and its inverse as part of the signal processing pipeline. However, I have noticed that the STFT and its inverse operations seem to be causing a significant loss of data, resulting in very poor audio quality. First I divide the audio signal into overlapping frames, and for each frame, I apply a window function to reduce spectral leakage. Then, I perform the Fourier Transform on each frame to obtain the frequency-domain representation. Then as a test, I perform the inverse to get the audio back. I have plotted these results below and you can see the degradation. Here are the functions: class AudioLib: '''Library of audio processing functions.''' def __init__(self, blocksize=1024 * 2): self.blocksize = blocksize self.window = np.hanning(blocksize) def stft(self, audio): '''Compute the short-time Fourier transform of the audio.''' # Split the audio into overlapping blocks num_blocks = len(audio) // self.blocksize blocks = np.reshape(audio[:num_blocks * self.blocksize], (num_blocks, self.blocksize)) # Apply the windowing function to each block windowed_blocks = blocks * self.window[np.newaxis, :] # Compute the Fourier transform of each block spectrum = np.fft.fft(windowed_blocks, axis=1) return spectrum def istft(self, spectrum): '''Compute the inverse short-time Fourier transform of the spectrum.''' # Compute the inverse Fourier transform of each block windowed_blocks = np.fft.ifft(spectrum, axis=1).real # Apply overlap-and-add to reconstruct the output signal output = np.zeros(len(spectrum) * self.blocksize) for i, block in enumerate(windowed_blocks): output[i * self.blocksize : (i + 1) * self.blocksize] += block return output I have tried changing the block size and while reducing the size improves the audio somewhat, it is still not perfect and I feel as though my implementation is incorrect. Any help regarding this would be greatly appreciated!
Like Christoph Rackwitz said, the problem with this STFT implementation is that the blocks are non-overlapping. For invertibility, you want that each block has 50% overlap with the next block. Here is a possible simple implementation for extracting and overlap-adding the blocks: # Copyright 2023 Google LLC. # SPDX-License-Identifier: Apache-2.0 def extract_blocks(audio: np.ndarray, blocksize: int) -> np.ndarray: """Extracts blocks with 50% overlap.""" hop_step = blocksize // 2 blocks = [] offset = 0 while offset + blocksize <= len(audio): blocks.append(audio[offset:(offset + blocksize)]) offset += hop_step return np.column_stack(blocks) def overlap_add_blocks(blocks: np.ndarray) -> np.ndarray: """Overlap-adds blocks with 50% overlap.""" blocksize, num_blocks = blocks.shape hop_step = blocksize // 2 output = np.zeros(blocksize + (num_blocks - 1) * hop_step) offset = 0 for i in range(num_blocks): output[offset:(offset + blocksize)] += blocks[:, i] offset += hop_step return output If you want do to this more efficiently, check out numpy stride_tricks. Another detail: for accurate invertibility, the window should be such that adding 50%-overlapped translated copies produces 1.0. To do this, np.hanning needs a small correction. Change self.window = np.hanning(blocksize) to self.window = np.hanning(blocksize + 1)[:blocksize] This plot shows the difference for blocksize = 32. The thick line is the sum of the windows. Notice that thick line is wiggly for np.hanning(blocksize) but perfectly flat and equal to 1.0 for np.hanning(blocksize + 1)[:blocksize].
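A rough round-trip check of the pieces above (my own sketch, reusing extract_blocks, overlap_add_blocks and the corrected window). With 50% overlap the periodic Hann windows sum to 1, so the interior of the signal is reconstructed exactly; the first and last half-block are only covered by a single window and are excluded from the comparison:

```
import numpy as np

blocksize = 32
hop = blocksize // 2
window = np.hanning(blocksize + 1)[:blocksize]   # periodic Hann

rng = np.random.default_rng(0)
x = rng.standard_normal(10 * blocksize)

blocks = extract_blocks(x, blocksize) * window[:, None]   # analysis windowing
recon = overlap_add_blocks(blocks)                        # overlap-add

print(np.allclose(recon[hop:-hop], x[hop:len(recon) - hop]))   # True
```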
2
3
76,450,635
2023-6-11
https://stackoverflow.com/questions/76450635/how-to-hash-a-ring-buffer-of-integers
I am using Python. I have some lists containing integers, but they are actually ring buffers. The following are rules by examples: We do not add new elements or modify any elements. These rings are immutable. No repetitive elements in a ring. If two lists have different lengths, they are not the same ring. Between two lists of the same length, if one list, after arbitrary times of ring shift or reversion, can be identical to the other, the two rings are equal. For example, [1, 7, 9, 2, 5] and [7, 9, 2, 5, 1](ring shifted) are equal, [1, 7, 9, 2, 5] and [1, 5, 2, 9, 7](ring shifted and reversed) are equal, but [1, 7, 9, 2, 5] and [7, 1, 9, 2, 5] are not equal. I want to quickly identify whether two rings are equal. One method is to compare their elements, another method is to find a good hashing method. I tried shifting two lists to their normal state and compare if their elements are identical (or identical after reversed), but it's too slow. I think hashing is a better choice. So what hashing method is good for this kind of ring buffers? The following is what I currently have: import random from time import perf_counter from typing import List, Tuple class Ring: def __init__(self, ids:List[int]) -> None: self.ids = ids def get_shifted(self, n:int) -> 'Ring': result_list = self.ids.copy() for i in range(n): head = result_list[0] result_list.remove(head) result_list.append(head) return Ring(result_list) def get_normalized(self) -> 'Ring': min_i = self.ids.index(min(self.ids)) shifted = self.get_shifted(min_i) return shifted def get_reversed(self) -> 'Ring': result_list = self.ids.copy() result_list.reverse() return Ring(result_list) def __eq__(self, other: 'Ring') -> bool: if len(self.ids) != len(other.ids): return False normalized1 = tuple(self.get_normalized().ids) normalized2 = tuple(self.get_reversed().get_normalized().ids) normalized_other = tuple(other.get_normalized().ids) return normalized1 == normalized_other or normalized2 == normalized_other @staticmethod def Random() -> 'Ring': unduplicated = set() while len(unduplicated) < ring_capacity: unduplicated.add(random.randint(0, 20)) return Ring(list(unduplicated)) if __name__ == '__main__': random.seed(1) ring_capacity = 5 num_rings = 2000 ring_set = [] random_rings = [Ring.Random() for _ in range(num_rings)] start = perf_counter() for ring in random_rings: if ring not in ring_set: ring_set.append(ring) end = perf_counter() print(end - start) print(f'{len(ring_set)} out of {num_rings} unduplicated rings')
It's faster and simpler to just compute a normalized form at construction: class Ring: def __init__(self, ids:List[int]) -> None: self.ids = ids i = ids.index(min(ids)) self.normed = min( ids[i:] + ids[:i], ids[i::-1] + ids[:i:-1] ) def __eq__(self, other: 'Ring') -> bool: return self.normed == other.normed Output with yours: 15.41106627508998 1896 out of 2000 unduplicated rings Output with mine: 0.38525977171957493 1896 out of 2000 unduplicated rings Output with the below (Attempt This Online!): 0.0007639830000698566 1896 out of 2000 unduplicated rings (Hmm, I just realized that by moving the normalization into construction, I moved it out of the timing. But if I include the random_rings = [Ring.Random() for _ in range(num_rings)] in the timing, the whole thing still only takes ~0.03 seconds.) The last is modifying your using code to make your ring_set an actual set instead of a misnamed list: start = perf_counter() ring_set = set(random_rings) end = perf_counter() With the Ring class providing meaningful hashes: class Ring: def __init__(self, ids:List[int]) -> None: self.ids = ids i = ids.index(min(ids)) self.normed = min( ids[i:] + ids[:i], ids[i::-1] + ids[:i:-1] ) self.hash = hash(tuple(self.normed)) def __eq__(self, other: 'Ring') -> bool: return self.normed == other.normed def __hash__(self): return self.hash
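A small usage check (mine) with the rings from the question, using the hash-enabled version of the class above:

```
r1 = Ring([1, 7, 9, 2, 5])
r2 = Ring([7, 9, 2, 5, 1])    # shifted
r3 = Ring([1, 5, 2, 9, 7])    # shifted and reversed
r4 = Ring([7, 1, 9, 2, 5])    # a genuinely different ring

print(r1 == r2 == r3)         # True
print(r1 == r4)               # False
print(len({r1, r2, r3, r4}))  # 2 -- equal rings collapse to one set entry
```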
2
2
76,446,124
2023-6-10
https://stackoverflow.com/questions/76446124/best-line-detector-algorithm-for-a-specific-content-bounding-box-measurement
The purpose of the algorithm is to auto-align sheet music pages based on the staves/systems content. The algorithm needs to detect the bounding box so that the left/right and top/bottom margins can easily be computed for whole pages. Current approaches such as morphological operations in OpenCV (using cv2.HoughLinesP for example) fail. I need to find the most precise coordinates of at least the top (first) or bottom (last) staff line, and at least the left (systems) lines, to compute the bounding box (red lines at the left of the picture). What is the state-of-the-art algorithm for this kind of document? The full resolution is 600 dpi (5078 x 6566 pixels) if that can help (without downsizing). Thank you very much.
You already looked into "staff-line removers". For this task, you need a part of that: the part that identifies staff lines. Assuming the scan is upright, not rotated by a few degrees, that is usually done with morphology operations that use a "line" shaped kernel. To also drop the short thick vertical bars on the left, your suggestion of a column histogram allows you to overcome the short horizontal stubs that connected those bars to the long vertical line. threshold level, mask = cv.threshold( src=im, thresh=128, maxval=255, type=cv.THRESH_BINARY_INV | cv.THRESH_OTSU) morphology hkernel = np.ones((1, 3), dtype=np.uint8) vkernel = np.ones((3, 1), dtype=np.uint8) horizontals = cv.morphologyEx(src=mask, op=cv.MORPH_OPEN, kernel=hkernel, iterations=50) verticals = cv.morphologyEx(src=mask, op=cv.MORPH_OPEN, kernel=vkernel, iterations=40) combined = horizontals | verticals Tweak the kernel sizes and number of iterations. bounding box bbox = cv.boundingRect(combined) canvas = cv.cvtColor(im >> 1, cv.COLOR_GRAY2BGR) cv.rectangle(img=canvas, rec=bbox, color=(0,255,0), thickness=1) Includes the short stubs on the left edge. column histogram to start the bounding box at the first long vertical colhist = combined.mean(axis=0) / 255 colmax = colhist.max() # option: find global maximum left = colhist.argmax() # option: find first above threshold indices, = np.where(colhist >= 0.5 * colmax) left = indices[0] fix the box up: x0,y0,w,h = bbox x1 = x0 + w # right edge stays where it is bbox = (left, y0, x1 - left, h) shift the image so that the box is centered x0,y0,w,h = bbox bcenter = np.array([x0 + (w-1)/2, y0 + (h-1)/2]) icenter = np.array([(iw-1)/2, (ih-1)/2]) # image size T = np.eye(3) T[0:2, 2] = (+icenter - bcenter).round() canvas = cv.warpAffine( src=canvas, M=T[:2], dsize=(iw, ih), flags=cv.INTER_NEAREST, borderMode=cv.BORDER_REPLICATE) This will be confused if there are any other long lines that aren't staff lines. That scan shows the bottom edge of the sheet of paper. You should make sure that either doesn't show up in the scan, or you need to erase image content near the edges (which is what I did here, without detailing it).
3
5
76,444,601
2023-6-10
https://stackoverflow.com/questions/76444601/playwright-get-by-role-using-nth-selector
I'm writing a Playwright test in Python. I have a table that I want to grab every row from to perform some actions within a for loop but I want to skip the first row. I am able to grab every row within a table by doing something as simple as the following: table = page.get_by_role("row").all() for row in table: print(row) But I want it to skip the first row. So I know the css selector for that is :nth-child(n+1) but I'm not sure how to use this in Playwright. I tried doing something like: table = page.get_by_role("row:nth-child(n+1)") But I got the error: unexpected symbol ":" Then I tried: table = page.get_by_role("row").nth("n+1") But that also didn't work. Finally, tried this: table = page.locator("[role=row]:nth-child(n+1)") But I got the error: ERROR test_table.py - TypeError: 'Locator' object is not iterable How do I properly query all the rows skipping the first one?
Rather than manipulating the selector, you simply start the for loop at the second element, like this: table = page.get_by_role("row").all() # Using slice notation for row in table[1:]: print(row)
2
3
76,446,783
2023-6-10
https://stackoverflow.com/questions/76446783/question-about-fastapis-dependency-injection-and-its-reusability
from fastapi import Depends, FastAPI class MyDependency: def __init__(self): # Perform initialization logic here pass def some_method(self): # Perform some operation pass def get_dependency(): # Create and return an instance of the dependency return MyDependency() app = FastAPI() @app.get("/example") def example(dependency: MyDependency = Depends(get_dependency)): dependency.some_method() For the code snippet above, does subsequent visits to /example create a new instance of the MyDependency object each time? If so, how can I avoid that?
Yes, each request will receive a new instance. If you don't want that to happen, use a cache decorator, such as the built-in lru_cache in functools. The dependency is just a regular function, so any decorators will still be invoked (since they replace the original function with a new one which wraps the old one): from functools import lru_cache ... @lru_cache def get_dependency(): # Create and return an instance of the dependency return MyDependency() However, if you use the same dependency in multiple places in the hierarchy (for the same request), the same value will be re-used. If one of your dependencies is declared multiple times for the same path operation, for example, multiple dependencies have a common sub-dependency, FastAPI will know to call that sub-dependency only once per request.
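A small check (my own, assuming FastAPI's test dependencies such as httpx are installed) that the cached dependency really is shared across requests; this also means the instance must be safe to reuse concurrently:

```
from functools import lru_cache

from fastapi import Depends, FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

class MyDependency:
    pass

@lru_cache
def get_dependency():
    return MyDependency()

@app.get("/example")
def example(dependency: MyDependency = Depends(get_dependency)):
    # id() is stable for the lifetime of the object
    return {"instance_id": id(dependency)}

client = TestClient(app)
assert client.get("/example").json() == client.get("/example").json()
```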
4
4
76,444,617
2023-6-10
https://stackoverflow.com/questions/76444617/cast-pl-date-to-unix-epoch
Trying to convert a pl.Date column to UNIX epoch as is, without any timezone offset: import datetime import polars as pl df = pl.DataFrame( {'Date': [datetime.datetime.now().date()]} ) Correct time (00:00:00) when converted to Datetime: df.with_columns( pl.col("Date").cast(pl.Datetime) ) โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”‚ Date โ”‚ โ”‚ --- โ”‚ โ”‚ datetime[ฮผs] โ”‚ โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก โ”‚ 2023-06-10 00:00:00 โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ Incorrect time when casting to timestamp: datetime.datetime.fromtimestamp( df.with_columns( pl.col("Date").cast(pl.Datetime).dt.timestamp("ms").truediv(1_000) ).item() ) datetime.datetime(2023, 6, 10, 8, 0) # (08:00:00) As suggested, without casting to Datetime also produces the incorrect time. (08:00:00) pl.col("Date").dt.timestamp("ms").truediv(1_000)
Note that vanilla Python datetime defaults to local time if you don't set a time zone (naive datetime). In contrast, polars assumes naive datetime to resemble UTC (as pandas does as well). Keep it consistent by setting the time zone, e.g. UTC: from datetime import datetime, timezone import polars as pl df = pl.DataFrame( {'Date': [datetime.now(timezone.utc).date()]} ) df = df.with_columns( pl.col("Date").cast(pl.Datetime).dt.timestamp("ms").truediv(1_000).alias("Unix") ) print(df) # shape: (1, 2) # โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” # โ”‚ Date โ”† Unix โ”‚ # โ”‚ --- โ”† --- โ”‚ # โ”‚ date โ”† f64 โ”‚ # โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ชโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ก # โ”‚ 2023-06-10 โ”† 1.6864e9 โ”‚ # โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ print(datetime.fromtimestamp(df["Unix"][0], timezone.utc)) # 2023-06-10 00:00:00+00:00
2
3
76,444,501
2023-6-10
https://stackoverflow.com/questions/76444501/typeerror-init-got-multiple-values-for-argument-options
What could be the reason for this error being thrown: Traceback (most recent call last): File "/Users/me/sc/sc.py", line 30, in <module> driver = Chrome(ChromeDriverManager().install(), options=chrome_options) TypeError: __init__() got multiple values for argument 'options' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/me/sc/sc.py", line 34, in <module> driver = Chrome("./chromedriver", options=chrome_options) TypeError: __init__() got multiple values for argument 'options' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/me/sc/sc.py", line 36, in <module> driver = Chrome("chromedriver.exe", options=chrome_options) TypeError: __init__() got multiple values for argument 'options' List item For this code: from time import time, sleep from selenium.webdriver import Chrome from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.remote.webelement import WebElement from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.wait import WebDriverWait from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.action_chains import ActionChains from args_parser import ArgsParser from downloader import download_file args = ArgsParser() def print_if_verbose(val): if args.output_verbose: print(val) WAITING_TIMEOUT = 180 chrome_options = Options() driver_user_agent = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' '(KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36') chrome_options.add_argument(f'user-agent={driver_user_agent}') if not args.display_browser: chrome_options.add_argument('--headless') try: driver = Chrome(ChromeDriverManager().install(), options=chrome_options) except Exception as e: print(e) try: driver = Chrome("./chromedriver", options=chrome_options) except Exception: driver = Chrome("chromedriver.exe", options=chrome_options) BTW, I have Chrome 114 on macOS M2 silicon, and using Python 3.9
This is due to changes in selenium 4.10.0: https://github.com/SeleniumHQ/selenium/commit/9f5801c82fb3be3d5850707c46c3f8176e3ccd8e Note that the first argument is no longer executable_path, but options. (That's why it complains that you're passing it in twice.) If you want to pass in an executable_path, you'll have to use the service arg now. Example: from selenium import webdriver from selenium.webdriver.chrome.service import Service service = Service(executable_path=r'./chromedriver') options = webdriver.ChromeOptions() options.add_argument('--headless') driver = webdriver.Chrome(service=service, options=options) # ... driver.quit() Also note that a driver manager is now built-in to selenium, so you no longer need to use the separate webdriver_manager. The Selenium Team talked about that here: https://www.linkedin.com/pulse/selenium-manager-best-tool-from-you-can-forget-david-burns/
2
4
76,434,535
2023-6-8
https://stackoverflow.com/questions/76434535/attributeerror-super-object-has-no-attribute-init
I was making a personal assistant. I got an error in starting code: import pyttsx3 engine = pyttsx3.init() engine.say('How are you today?') engine.runAndWait() Error: /usr/local/lib/python3.11/site-packages/pyttsx3/drivers/nsss.py:12: ObjCSuperWarning: Objective-C subclass uses super(), but super is not objc.super class NSSpeechDriver(NSObject): Traceback (most recent call last): File "/usr/local/lib/python3.11/site-packages/pyttsx3/__init__.py", line 20, in init eng = _activeEngines[driverName] ~~~~~~~~~~~~~~^^^^^^^^^^^^ File "/usr/local/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/weakref.py", line 136, in __getitem__ o = self.data[key]() ~~~~~~~~~^^^^^ KeyError: None During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/anshtyagi/Documents/personal assistant/main.py", line 5, in <module> engine = pyttsx3.init() ^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pyttsx3/__init__.py", line 22, in init eng = Engine(driverName, debug) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pyttsx3/engine.py", line 30, in __init__ self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pyttsx3/driver.py", line 52, in __init__ self._driver = self._module.buildDriver(weakref.proxy(self)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pyttsx3/drivers/nsss.py", line 9, in buildDriver return NSSpeechDriver.alloc().initWithProxy(proxy) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pyttsx3/drivers/nsss.py", line 15, in initWithProxy self = super(NSSpeechDriver, self).init() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'super' object has no attribute 'init' sys:1: UninitializedDeallocWarning: leaking an uninitialized object of type NSSpeechDriver I don't know what is the problem. One more thing: due to some issue I had to uninstall old python version on Mac and installed new one using Homebrew. Mac OS ventura 13.4 Python 3.11
This turns out to be a little tricky, and this is a workaround; hope it works for you. Under the hood, the pyttsx3 module uses PyObjC as a bridge between Python and Objective-C. Step 1: Check that pyobjc is installed (pip show pyobjc); if not, install it with pip install pyobjc. Step 2: Open the file /usr/local/lib/python3.11/site-packages/pyttsx3/drivers/nsss.py and change the following: comment out the line self = super(NSSpeechDriver, self).init() and add this line in its place: self = objc.super(NSSpeechDriver, self).init() Note: from Foundation import * imports NSObject and objc from Foundation, which is what the driver relies on here. After the change, the following program runs okay: import pyttsx3 engine = pyttsx3.init() engine.say('How are you today?') engine.runAndWait()
7
28
76,443,923
2023-6-9
https://stackoverflow.com/questions/76443923/create-data-frame-with-month-start-and-end-in-python
I want to create a pandas dataframe from a given start and end date: import pandas as pd from pandas.tseries.offsets import MonthEnd start_date = "2020-05-17" end_date = "2020-07-23" For each row in this dataframe, I should have the start day and end day of the month, so the expected output is: start end month year 2020-05-17 2020-05-31 May 2020 2020-06-01 2020-06-30 June 2020 2020-07-01 2020-07-23 July 2020 I know I have to loop over each month between the interval created by start_date and end_date. While I know how to extract the last day in a date: def last_day(date: str): return pd.Timestamp(date) + MonthEnd(1) I'm stuck over how to run this over the interval. Any suggestion will be appreciated.
You can use pd.date_range and pd.to_datetime: start = pd.to_datetime([start_date] + pd.date_range(start_date, end_date, freq='MS').tolist()) end = pd.to_datetime(pd.date_range(start_date, end_date, freq='M').tolist() + [end_date]) month = start.strftime('%B') year = start.year df = pd.DataFrame({'start': start, 'end': end, 'month': month, 'year': year}) Output: >>> df start end month year 0 2020-05-17 2020-05-31 May 2020 1 2020-06-01 2020-06-30 June 2020 2 2020-07-01 2020-07-23 July 2020
3
2
76,443,854
2023-6-9
https://stackoverflow.com/questions/76443854/how-to-find-a-number-of-occurrences-of-every-element-of-a-numpy-array
Given an array of integers, I would like to obtain an array of the same size where every value is a number of occurrences of a corresponding element in the original array. For example given the following array: a = np.array([1, 1, 4, 10, 5, 3, 5, 5, 8, 9]) This should be the result: array([2, 2, 1, 1, 3, 1, 3, 3, 1, 1]) Even though it is straightforward to achieve this via collections.Counter or built-in list.count(), I'm looking for a more performant way to work with large lists.
You could use np.unique and use the parameter return_inverse and return_counts. Use return_inverse to index return_counts to get desired results. return_inverse bool, optional If True, also return the indices of the unique array (for the specified axis, if provided) that can be used to reconstruct ar. return_counts bool, optional If True, also return the number of times each unique item appears in ar. _, idx, c = np.unique(a, return_inverse=True, return_counts=True) c[idx] # array([2, 2, 1, 1, 3, 1, 3, 3, 1, 1])
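If the values are known to be small non-negative integers, np.bincount is another option worth considering (my addition, not part of the answer above):

```
import numpy as np

a = np.array([1, 1, 4, 10, 5, 3, 5, 5, 8, 9])

# counts indexed by value, then looked up per element
counts = np.bincount(a)[a]
print(counts)   # [2 2 1 1 3 1 3 3 1 1]
```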
4
6
76,442,097
2023-6-9
https://stackoverflow.com/questions/76442097/how-to-assign-a-color-to-a-specific-value-on-a-heatmap
I am making a heatmap in seaborn. I am using 'viridis', but I modify it slightly so some of the values get particular colors. In my MWE, .set_over is used to set the values above 90 to 'black', and .set_under is used to set the values below 10 to 'white'. I also mask out part of the heatmap. This all works fine. How can I also map a middle range value, 20, to 'orange', and without effecting the current colorbar appearance? As you can see, .set_over, and .set_under do not change the colorbar appearance. import matplotlib import seaborn as sns import numpy as np np.random.seed(7) A = np.random.randint(0,100, size=(20,20)) mask_array = np.zeros((20, 20), dtype=bool) mask_array[:, :5] = True cmap = matplotlib.colormaps["viridis"] # Set the under color to white cmap.set_under("white") # Set the voer color to white cmap.set_over("black") # Set the background color g = sns.heatmap(A, vmin=10, vmax=90, cmap=cmap, mask=mask_array) # Set color of masked region g.set_facecolor('lightgrey') I have seen Map value to specific color in seaborn heatmap, but I am not sure how I can use it to solve my problem.
Pulling from this answer, here is a solution that uses a mask rather than a custom colorbar: import matplotlib import seaborn as sns import numpy as np from matplotlib.colors import ListedColormap np.random.seed(7) A = np.random.randint(0,100, size=(20,20)) mask_array = np.zeros((20, 20), dtype=bool) mask_array[:, :5] = True # cmap = matplotlib.colormaps["viridis"] cmap = matplotlib.cm.get_cmap('viridis') # Set the under color to white cmap.set_under("white") # Set the voer color to white cmap.set_over("black") # Set the background color g = sns.heatmap(A, vmin=10, vmax=90, cmap=cmap, mask=mask_array) # Set color of masked region g.set_facecolor('lightgrey') special_data = np.ma.masked_where(A==20, A) sns.heatmap(special_data, cmap=ListedColormap(['orange']), mask=(special_data != 1), cbar=False)
3
2
76,432,343
2023-6-8
https://stackoverflow.com/questions/76432343/auto-switching-python-virtual-environments-in-visual-studio-code-per-directory-w
I am working on a project in VSCode that has multiple directories, each of which requires a different Python virtual environment. My virtual environments are located in the ~/.virtualenvs directory and my workspace is structured like this: ~/.virtualenvs/ โ”‚ โ”œโ”€โ”€ venv_A/ โ”‚ โ””โ”€โ”€ venv_B/ my_workspace/ โ”‚ โ”œโ”€โ”€ project_A/ โ”‚ โ””โ”€โ”€ script_A.py โ”‚ โ””โ”€โ”€ project_B/ โ””โ”€โ”€ script_B.py I want VSCode to automatically switch to the appropriate virtual environment (venv_A for project_A, and venv_B for project_B) located in ~/.virtualenvs when I open a Python file from each directory within the workspace. Currently, I have to manually select the virtual environment through the command palette each time. I have tried looking through the VSCode documentation and searched for guides or tutorials on how to achieve this functionality, but I haven't found anything that addresses this specific issue. I expected there to be some configuration options either through the .vscode/settings.json file or the workspace settings that would allow me to specify which virtual environment should be used for each directory, and how Pylance, pylint, and yapf should adapt accordingly. I am aware that VSCode has support for workspaces and .env files, but I'm not sure how to configure it to auto-switch virtual environments based on the directory, and to have Pylance, pylint, and yapf adapt accordingly. I also found the issue Select pyenv environment based on folder .python-version file that is not closed.
The easiest way is to open project_A and project_B as separate workspaces and select an interpreter for each workspace. VS Code will remember your choice and will use the previously selected interpreter the next time that workspace is opened. Another approach is to use Multi-root Workspaces: open a new window and use Add Folder to Workspace... to add both folders to the current workspace. Then create separate virtual environments for both folders using Ctrl+Shift+P --> Python: Create Environment... and select the interpreter for the respective folder. You can also select the .py file in the corresponding folder, and then click the Python version in the lower right corner to switch. These documents may also be useful: https://code.visualstudio.com/docs/python/python-tutorial#_select-a-python-interpreter https://code.visualstudio.com/docs/python/environments#_working-with-python-interpreters https://code.visualstudio.com/docs/editor/profiles
4
2
76,434,311
2023-6-8
https://stackoverflow.com/questions/76434311/how-to-get-the-logits-of-the-model-with-a-text-classification-pipeline-from-hugg
I need to use pipeline in order to get the tokenization and inference from the distilbert-base-uncased-finetuned-sst-2-english model over my dataset. My data is a list of sentences, for recreation purposes we can assume it is: texts = ["this is the first sentence", "of my data.", "In fact, thats not true,", "but we are going to assume it", "is"] Before using pipeline, I was getting the logits from the model outputs like this: with torch.no_grad(): logits = model(**tokenized_test).logits Now I have to use pipeline, so this is the way I'm getting the model's output: selected_model = "distilbert-base-uncased-finetuned-sst-2-english" tokenizer = AutoTokenizer.from_pretrained(selected_model) model = AutoModelForSequenceClassification.from_pretrained(selected_model, num_labels=2) classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) print(classifier(text)) which gives me: [{'label': 'POSITIVE', 'score': 0.9746173024177551}, {'label': 'NEGATIVE', 'score': 0.5020197629928589}, {'label': 'NEGATIVE', 'score': 0.9995120763778687}, {'label': 'NEGATIVE', 'score': 0.9802979826927185}, {'label': 'POSITIVE', 'score': 0.9274746775627136}] And I cant get the 'logits' field anymore. Is there a way to get the logits instead of the label and score? Would a custom pipeline be the best and/or easiest way to do it?
When you use the default pipeline, the postprocess function will usually take the softmax, e.g. from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english') model = AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english') classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) text = ['hello this is a test', 'that transforms a list of sentences', 'into a list of list of sentences', 'in order to emulate, in this case, two batches of the same lenght', 'to be tokenized by the hf tokenizer for the defined model'] classifier(text, batch_size=2, truncation="only_first") [out]: [{'label': 'NEGATIVE', 'score': 0.9379090666770935}, {'label': 'POSITIVE', 'score': 0.9990271329879761}, {'label': 'NEGATIVE', 'score': 0.9726701378822327}, {'label': 'NEGATIVE', 'score': 0.9965035915374756}, {'label': 'NEGATIVE', 'score': 0.9913086891174316}] So what you want is to overload the postprocess logic by inheriting from the pipeline. To check which pipeline the classifier inherits from, do this: type(classifier) [out]: transformers.pipelines.text_classification.TextClassificationPipeline Now that you know the parent class of the task pipeline you want to use, you can do this and still enjoy the perks of the precoded batching from TextClassificationPipeline: from transformers import TextClassificationPipeline class MarioThePlumber(TextClassificationPipeline): def postprocess(self, model_outputs): best_class = model_outputs["logits"] return best_class pipe = MarioThePlumber(model=model, tokenizer=tokenizer) pipe(text, batch_size=2, truncation="only_first") [out]: [tensor([[ 1.5094, -1.2056]]), tensor([[-3.4114, 3.5229]]), tensor([[ 1.8835, -1.6886]]), tensor([[ 3.0780, -2.5745]]), tensor([[ 2.5383, -2.1984]])]
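For reference (my own check), the pipeline's score is just a softmax over these logits, so the two outputs are consistent:

```
import torch

logits = torch.tensor([[1.5094, -1.2056]])    # first sentence's logits above
probs = torch.softmax(logits, dim=-1)
print(probs)   # tensor([[0.9379, 0.0621]]) -> the 0.9379 NEGATIVE score
```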
8
10
76,435,305
2023-6-8
https://stackoverflow.com/questions/76435305/convert-python-list-of-dicts-to-mapping-of-key-to-rest-of-dict
Let's say I have the following list:

l = [
    {"a": 10, "b": 100, "c": 100},
    {"a": 20, "b": 100, "c": 100},
    {"a": 30, "b": 100, "c": 100},
]

I know "a" is unique in each item:

assert len({x["a"] for x in l}) == len(l)

I want to generate a mapping of the "a" value to the rest of each item so my end result is the following dictionary:

{
    10: {"b": 100, "c": 100},
    20: {"b": 100, "c": 100},
    30: {"b": 100, "c": 100},
}

So far I've come up with the following:

{x["a"]: {k: v for k, v in x.items() if k != "a"} for x in l}

Is this the best way to write this? Or is there a better way or a built in function that I'm missing?
Maybe dict.pop is what you want?

out = {d.pop('a'): d for d in l}
print(out)

Prints:

{10: {'b': 100, 'c': 100}, 20: {'b': 100, 'c': 100}, 30: {'b': 100, 'c': 100}}
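One caveat worth noting (my addition, not part of the accepted answer): d.pop('a') mutates the dictionaries inside l, so the 'a' keys are gone from the original list afterwards. If that matters, a non-mutating sketch could copy each dict first:

# copy each dict so the originals in l keep their 'a' keys
out = {c.pop('a'): c for c in (dict(d) for d in l)}
print(out)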
2
4
76,435,071
2023-6-8
https://stackoverflow.com/questions/76435071/how-can-vectorization-be-used-for-row-dependent-functions-in-pandas
And sorry if this has been asked before (I could only find approaches that worked on previous rows and not the rest of the dataframe). I'm currently trying to switch out my iterative approach for a problem to a more Pandas (and time) friendly version. The problem is as follows: I have two columns, "A" and "B", that are players. At each time, "A" and "B" take on different arbitrary values. I want to add a third column that has a value of either "A wins!" or "B wins!" based on the rows beneath the values at that row.

To determine when 'A wins!' for a certain row number, I want to compare the value in column "A" at that row with each value in column 'B' that is beneath this row. To determine when 'B wins!', I want to do the same thing: take the value in row "B" and compare it to each entry in column "A" beneath this entry. Whichever is first to "match" with a value in the other column will be the winner. Here's an example:

Time  A   B   Winner
1     2   4   A wins!
2     3   5   B wins!
3     5   2   A wins!
4     6   5   None
5     2   10  B wins!
6     10  7   None

At time 1, A wins because at time 3 "B" takes on the value 2 before "A" can take on 4. At time 2, "B" wins because "A" in the row below takes on the value 5 before "B" took on the value 3. This is similar for times 3 and 5, and at times 4 & 6 there is no winner because the opposing players do not happen to take on each other's values in the later rounds.

Right now, I have a working solution by just using df.iterrows(). I have a pretty large dataset, so I would like to speed this up, but I can't think of any simple Pandas functions because they usually isolate by row. All my attempts with apply and map have not worked because of the dependence on the rows below, so I'm looking for a solution that might cut down time and not have to use explicit iteration. Any and all help would be appreciated, thank you!!

EDIT: Here's my working iterative solution. I feed a DataFrame into find_winner which calls find_winner_row on each row.

def find_winner_row(df, row, result):
    A_val = df['A'][row]  # Player A
    B_val = df['B'][row]  # Player B
    potentials_B = np.where(df['A'][row+1:] == B_val)[0]  # [row+1:] slices and only considers the future values of A
    potentials_A = np.where(df['B'][row+1:] == A_val)[0]

    # below logic is just to handle the case when there are no matching values
    if potentials_B.size == 0:
        B_switch_time = len(df.columns) + 1
    else:
        B_switch_time = potentials_B[0]

    if potentials_A.size == 0:
        A_switch_time = len(df.columns) + 1
    else:
        A_switch_time = potentials_A[0]

    # now which is first?
    if B_switch_time < A_switch_time:
        result[row] = "B"
    elif B_switch_time > A_switch_time:
        result[row] = "A"
    else:
        result[row] = "None"


def find_winner(df):
    result_series = pd.Series(np.zeros(len(df.columns)))
    for num, (index, row) in enumerate(df.iterrows()):
        find_winner_row(df, num, result_series)
    df.loc[:, 'Winner'] = result_series.values
    return df


## So with our given example above, we can run the following and see we get the expected result
demo_df = pd.DataFrame([[2, 4], [3, 5], [5, 2], [6, 5], [2, 10], [10, 7]], columns=['A', 'B'])
find_winner(demo_df)
Started writing this before you added your code -- but figure it might still be helpful. I was able to write one function that, based on the logic, returns a string of who wins given a row index and a DataFrame, with minimal (internal) iteration:

# Sample data
import pandas as pd

data = {"Time": [1, 2, 3, 4, 5, 6], "A": [2, 3, 5, 6, 2, 10], "B": [4, 5, 2, 5, 10, 7], "Winner": ["A", "B", "A", "None", "B", "None"]}
df = pd.DataFrame(data)

def FindWinner(row_index, dataframe=df):
    # Record the initial value in each indicated column
    A_initial = dataframe.iloc[row_index]["A"]
    B_initial = dataframe.iloc[row_index]["B"]

    # Convert data underneath this row into a pair of lists
    rowsUnderA = list(dataframe.iloc[row_index+1:]["A"])
    rowsUnderB = list(dataframe.iloc[row_index+1:]["B"])

    # Use .index() to find when the initial value appears next in the other list
    try:
        rowsUntilA_initial = rowsUnderB.index(A_initial)
    except ValueError:
        rowsUntilA_initial = "DOES_NOT_APPEAR"
    try:
        rowsUntilB_initial = rowsUnderA.index(B_initial)
    except ValueError:
        rowsUntilB_initial = "DOES_NOT_APPEAR"

    # Set win conditions --> first handle scenarios where one or both values do not appear
    if rowsUntilB_initial == "DOES_NOT_APPEAR" and rowsUntilA_initial == "DOES_NOT_APPEAR":
        return "No one wins :("
    elif rowsUntilB_initial == "DOES_NOT_APPEAR" and rowsUntilA_initial != "DOES_NOT_APPEAR":
        return "A wins!"
    elif rowsUntilB_initial != "DOES_NOT_APPEAR" and rowsUntilA_initial == "DOES_NOT_APPEAR":
        return "B wins!"
    # If A appears first, A wins ... vice versa
    elif rowsUntilA_initial < rowsUntilB_initial:
        return "A wins!"
    elif rowsUntilB_initial < rowsUntilA_initial:
        return "B wins!"
    # What if they are the same?
    elif rowsUntilA_initial == rowsUntilB_initial:
        return "... what happens if they're the same?"

Based on a quick test this returns the expected output for the sample data.

Using this function it should be possible to make/map a new column (see the usage sketch below), or rather even iterate once through each row and create a new column (which is what map would be doing anyway). I understand the objective here is to minimize iteration, but without referencing each row individually in some capacity I'm not sure there's a way to compute and display the winners. The logic here and in your sample code seems similar, but I would be interested to know if the differences affect runtime. I do not have access to your dataset so I'm unable to determine that myself, but figured it might be worthwhile anyway to try.
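A usage sketch (my addition, not part of the original answer), assuming the df and FindWinner defined above; note the returned strings ("A wins!", "No one wins :(") differ slightly from the labels in the sample "Winner" column ("A", "None"), so you would need to align them for a direct comparison:

# build a winner label for every row position
df["Computed"] = [FindWinner(i, df) for i in range(len(df))]
print(df[["Time", "A", "B", "Winner", "Computed"]])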
3
2
76,434,987
2023-6-8
https://stackoverflow.com/questions/76434987/why-does-this-simple-python-https-request-throw-an-ssl-error-even-when-given-an
I've recently started an internship at a company with pretty strict IT policies. I'm the only developer at the company and it is clear that I'm running into problems that most likely don't affect anyone else here, which makes them difficult to resolve. IT has inserted their own SSL certificate at the proxy level (probably to monitor network traffic), which has led to problems making HTTPS requests. I have found what I believe to be the correct certificate and it has worked to allow me to install packages with conda. However, HTTPS requests made with python's requests package still error out even when they are given the correct certificate.

Here's a really simple attempt at an HTTPS request to https://example.com:

import requests

url = "https://example.com"
response = requests.get(url, cert = "/temp/certs/certificate.pem")

And all of the output:

python : Traceback (most recent call last):
At line:1 char:1
+ python main.py 2>%1 > bruh.txt
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (Traceback (most recent call last)::String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError

  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\urllib3\connectionpool.py", line 1042, in _validate_conn
    conn.connect()
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\urllib3\connection.py", line 419, in connect
    self.sock = ssl_wrap_socket(
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\urllib3\util\ssl_.py", line 418, in ssl_wrap_socket
    context.load_cert_chain(certfile, keyfile)
ssl.SSLError: [SSL] PEM lib (_ssl.c:3921)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\requests\adapters.py", line 487, in send
    resp = conn.urlopen(
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\urllib3\util\retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='example.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(9, '[SSL] PEM lib (_ssl.c:3921)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "(file location)", line 27, in <module>
    response = requests.get(url, cert = "/temp/certs/certificate.pem")
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\requests\api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\requests\sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\requests\sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\J1P\AppData\Local\anaconda3\envs\schedule_tracker\lib\site-packages\requests\adapters.py", line 518, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='example.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(9, '[SSL] PEM lib (_ssl.c:3921)')))

This is different than the error produced when no certificate is provided, which references a self-signed certificate.

Edit: The request does go through when SSL verification is set to false, but obviously that isn't really a solution.
The cert option of requests is meant for client certificate authentication. What I believe you are trying to do is add a trusted CA for the request. For this, use the verify option:

response = requests.get(url, verify="/temp/certs/certificate.pem")

You can find the difference in the documentation.
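As an aside (my addition, assuming the asker wants every request in the project to trust the corporate CA): the bundle can also be configured once instead of per call, either through the REQUESTS_CA_BUNDLE environment variable or on a Session. A sketch using the asker's example path:

import os
import requests

# option 1: requests picks this up automatically for subsequent calls in this process
os.environ["REQUESTS_CA_BUNDLE"] = "/temp/certs/certificate.pem"

# option 2: set verify once on a Session and reuse it for all requests made through it
session = requests.Session()
session.verify = "/temp/certs/certificate.pem"
response = session.get("https://example.com")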
3
6