question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
78,267,511 | 2024-4-3 | https://stackoverflow.com/questions/78267511/how-to-use-curve-fit-of-scipy-with-constrain-where-the-fitted-curve-is-always-un | I'm trying to fit a signal with an exponential decay curve. I would like to constrain the fitted curve to be always under the signal. How can I add such a constraint? I tried something with a residual function with a penalization but the fit is not good Here a minimal example import matplotlib.pyplot as plt import numpy as np from scipy.optimize import curve_fit,leastsq y = np.array([0.13598974610162404,0.14204518683071268,0.12950580786633123,0.11907324299581903,0.10128368784179803,0.09801605741178761,0.08384607033484785,0.080831165652505,0.08320697432504208,0.0796448643292049,0.08036960780924939,0.07794871929139761,0.06684868128842808,0.08473240868175465,0.12911858937102086,0.2643875667237164,0.35984364939831903,0.2193622531576059,0.11434823952113388,0.07542004424929072,0.05811782617304745,0.05244297390163204,0.046658695718735835,0.04848192538027753,0.04720951580680828,0.043285109240216044,0.04182209865781944,0.039844899409411334,0.03462168053862101,0.03378305258506322,0.03533297573624328,0.03434759644082368,0.033784129758841895,0.030419029760045915,0.028085746545496386,0.02614296782807577,0.024221565132520304,0.022189741126251487,0.02093159168492871,0.02041496822457043,0.021031182865802436,0.024510234374072886,0.023307213889378165,0.0267484745286596,0.02258945483736504,0.014891232218542747,0.01151363712852099,0.010139967470707011,0.009769727537338574,0.009323591440734363,0.008852570111374145,0.008277064263333187,0.007088585763561308,0.00607584327561278,0.005423044957885124,0.005017536008889349,0.005194048550726604,0.005066069823795679,0.004923514285732114,0.0053721924337601975,0.005156078360383089,0.004962157137571195,0.0045958264654801136,0.0043323942880189766,0.004310971039183395,0.004733498071711899,0.005238905827304569,0.005180319290046715,0.0050892994891999395,0.005323200339923676,0.005430819354625569,0.0051261318575094965,0.004608215352126279,0.0042522740751442835,0.003964475580118653,0.004281845094328685,0.003932866994198572,0.003751478035379218,0.003988758544406512,0.00366304957414055,0.0030455636180720283,0.0027753884456863088,0.0025920006620398267,0.00253411154251131,0.0024133671863316246,0.0020164600081521793,0.002294208143652257,0.0021879013667402856,0.00213873257081609,0.0019997327222615736,0.00195034020886016,0.0022503784328324725,0.003038201783164678,0.003603415824772916,0.003642976691503975,0.003263887163622944,0.0035506429555724373,0.0047798428190157045,0.0040553738896165386,0.002473176007612183,0.0025941258844692236,0.0018292994313265358,0.00209892075806378,0.0023955564365646335,0.0020375114833779307,0.002260575557815427,0.0022985835848993693,0.002099406433733155,0.0018586368200849512,0.0016053613868235123,0.001438613175578214,0.00143049357541102,0.0013095127315154774,0.001262471540939509,0.0013514522407795408,0.001605619634800475,0.001961075896285937,0.001865266816887284,0.0023526578031602017,0.00246341280674717,0.0025884459641316543,0.0025289043233280195,0.0027480853600970576,0.003160811294269662,0.003061310957205347,0.0034708227008575852,0.0027193887970078795,0.0025019043062104967,0.001721602287020676,0.0014938287993981696,0.001379701311142287,0.001482278335951954,0.0017739654977338047,0.0016173740322614279,0.0014568993700072393,0.001561687803455451,0.0016478201019948435,0.001296045775857753,0.001237797494806695,0.0014233100660923912,0.001327643348684166,0.0012058468589450113,0.001326993796471779,0.0015302363900395407,
0.0019691433239499958,0.001914607620254396,0.0017054233649494027,0.001999944948934884,0.001586257522693384,0.0017888302317418617,0.0024194552369763127,0.002602486169233071,0.0023322619326367703,0.002188641252143114,0.002160637896948486,0.0017183240941773745,0.0013791696278384316,0.0013010975606518034,0.0012917607493148195,0.0014473287423454842,0.0011277134770190562,0.0009788023156115833,0.0011624520875172602,0.0011529250281587956,0.0011286272690398862,0.0011650110432320925,0.0011670732824154513,0.0012701258601414223,0.0010863631780132393,0.001151403997327795,0.001261531100583112,0.0014433469612850924,0.0012625181480229021,0.0013366719381237742,0.0013129577294860868,0.0010799358566476144,0.0012361331567450533,0.0013155633998451644,0.0017427549165517102,0.0017117554798138019,0.0014424582600283703,0.0014934381441740442,0.001320132472902865,0.0010134949123866623,0.0009392144030905535,0.0008956207514417853,0.0009483482891766875,0.0007118586291810097,0.0006572633034661715,0.0006246206878692327]) x = np.array([1.1,1.2000000000000002,1.3,1.4000000000000001,1.5,1.6,1.7000000000000002,1.8,1.9000000000000001,2.0,2.1,2.2,2.3000000000000003,2.4000000000000004,2.5,2.6,2.7,2.8000000000000003,2.9000000000000004,3.0,3.1,3.2,3.3000000000000003,3.4000000000000004,3.5,3.6,3.7,3.8000000000000003,3.9000000000000004,4.0,4.1000000000000005,4.2,4.3,4.4,4.5,4.6000000000000005,4.7,4.800000000000001,4.9,5.0,5.1000000000000005,5.2,5.300000000000001,5.4,5.5,5.6000000000000005,5.7,5.800000000000001,5.9,6.0,6.1000000000000005,6.2,6.300000000000001,6.4,6.5,6.6000000000000005,6.7,6.800000000000001,6.9,7.0,7.1000000000000005,7.2,7.300000000000001,7.4,7.5,7.6000000000000005,7.7,7.800000000000001,7.9,8.0,8.1,8.200000000000001,8.3,8.4,8.5,8.6,8.700000000000001,8.8,8.9,9.0,9.1,9.200000000000001,9.3,9.4,9.5,9.600000000000001,9.700000000000001,9.8,9.9,10.0,10.100000000000001,10.200000000000001,10.3,10.4,10.5,10.600000000000001,10.700000000000001,10.8,10.9,11.0,11.100000000000001,11.200000000000001,11.3,11.4,11.5,11.600000000000001,11.700000000000001,11.8,11.9,12.0,12.100000000000001,12.200000000000001,12.3,12.4,12.5,12.600000000000001,12.700000000000001,12.8,12.9,13.0,13.100000000000001,13.200000000000001,13.3,13.4,13.5,13.600000000000001,13.700000000000001,13.8,13.9,14.0,14.100000000000001,14.200000000000001,14.3,14.4,14.5,14.600000000000001,14.700000000000001,14.8,14.9,15.0,15.100000000000001,15.200000000000001,15.3,15.4,15.5,15.600000000000001,15.700000000000001,15.8,15.9,16.0,16.1,16.2,16.3,16.400000000000002,16.5,16.6,16.7,16.8,16.900000000000002,17.0,17.1,17.2,17.3,17.400000000000002,17.5,17.6,17.7,17.8,17.900000000000002,18.0,18.1,18.2,18.3,18.400000000000002,18.5,18.6,18.7,18.8,18.900000000000002,19.0,19.1,19.200000000000003,19.3,19.400000000000002,19.5,19.6,19.700000000000003,19.8,19.900000000000002,20.0]) def funcExp(x, a, b, c): return a * np.exp(-b * x) + c # here you include the penalization factor def residuals(p, x, y): est = funcExp(x, p[0], p[1], p[2]) penalization = y - funcExp(x, p[0], p[1], p[2] ) penalization[penalization<0] = 0 penaliz = np.abs(np.sum(penalization)) return y - funcExp(x, p[0], p[1], p[2] ) - penalization popt, pcov = curve_fit(funcExp, x, y,p0=[y[0], 1, y[-1]]) popt2, pcov2 = leastsq(func=residuals, x0=(y[0], 1, y[-1]), args=(x, y)) fig, ax = plt.subplots() ax.plot(x,y ) ax.plot(x,funcExp(x, popt[0], popt[1], popt[2]),'r' ) ax.plot(x,funcExp(x, popt2[0], popt2[1], popt2[2]),'g' ) plt.show() which gives : | This is an exponential function, so use the logarithmic scale for both least-squared 
error and plotting. Use a lower-envelope constraint; works fine - import matplotlib.pyplot as plt import numpy as np from scipy.optimize import minimize, Bounds, NonlinearConstraint y_exper = np.array([0.13598974610162404,0.14204518683071268,0.12950580786633123,0.11907324299581903,0.10128368784179803,0.09801605741178761,0.08384607033484785,0.080831165652505,0.08320697432504208,0.0796448643292049,0.08036960780924939,0.07794871929139761,0.06684868128842808,0.08473240868175465,0.12911858937102086,0.2643875667237164,0.35984364939831903,0.2193622531576059,0.11434823952113388,0.07542004424929072,0.05811782617304745,0.05244297390163204,0.046658695718735835,0.04848192538027753,0.04720951580680828,0.043285109240216044,0.04182209865781944,0.039844899409411334,0.03462168053862101,0.03378305258506322,0.03533297573624328,0.03434759644082368,0.033784129758841895,0.030419029760045915,0.028085746545496386,0.02614296782807577,0.024221565132520304,0.022189741126251487,0.02093159168492871,0.02041496822457043,0.021031182865802436,0.024510234374072886,0.023307213889378165,0.0267484745286596,0.02258945483736504,0.014891232218542747,0.01151363712852099,0.010139967470707011,0.009769727537338574,0.009323591440734363,0.008852570111374145,0.008277064263333187,0.007088585763561308,0.00607584327561278,0.005423044957885124,0.005017536008889349,0.005194048550726604,0.005066069823795679,0.004923514285732114,0.0053721924337601975,0.005156078360383089,0.004962157137571195,0.0045958264654801136,0.0043323942880189766,0.004310971039183395,0.004733498071711899,0.005238905827304569,0.005180319290046715,0.0050892994891999395,0.005323200339923676,0.005430819354625569,0.0051261318575094965,0.004608215352126279,0.0042522740751442835,0.003964475580118653,0.004281845094328685,0.003932866994198572,0.003751478035379218,0.003988758544406512,0.00366304957414055,0.0030455636180720283,0.0027753884456863088,0.0025920006620398267,0.00253411154251131,0.0024133671863316246,0.0020164600081521793,0.002294208143652257,0.0021879013667402856,0.00213873257081609,0.0019997327222615736,0.00195034020886016,0.0022503784328324725,0.003038201783164678,0.003603415824772916,0.003642976691503975,0.003263887163622944,0.0035506429555724373,0.0047798428190157045,0.0040553738896165386,0.002473176007612183,0.0025941258844692236,0.0018292994313265358,0.00209892075806378,0.0023955564365646335,0.0020375114833779307,0.002260575557815427,0.0022985835848993693,0.002099406433733155,0.0018586368200849512,0.0016053613868235123,0.001438613175578214,0.00143049357541102,0.0013095127315154774,0.001262471540939509,0.0013514522407795408,0.001605619634800475,0.001961075896285937,0.001865266816887284,0.0023526578031602017,0.00246341280674717,0.0025884459641316543,0.0025289043233280195,0.0027480853600970576,0.003160811294269662,0.003061310957205347,0.0034708227008575852,0.0027193887970078795,0.0025019043062104967,0.001721602287020676,0.0014938287993981696,0.001379701311142287,0.001482278335951954,0.0017739654977338047,0.0016173740322614279,0.0014568993700072393,0.001561687803455451,0.0016478201019948435,0.001296045775857753,0.001237797494806695,0.0014233100660923912,0.001327643348684166,0.0012058468589450113,0.001326993796471779,0.0015302363900395407,0.0019691433239499958,0.001914607620254396,0.0017054233649494027,0.001999944948934884,0.001586257522693384,0.0017888302317418617,0.0024194552369763127,0.002602486169233071,0.0023322619326367703,0.002188641252143114,0.002160637896948486,0.0017183240941773745,0.0013791696278384316,0.0013010975606518034,0.0012917607493148195,0.00144732874
23454842,0.0011277134770190562,0.0009788023156115833,0.0011624520875172602,0.0011529250281587956,0.0011286272690398862,0.0011650110432320925,0.0011670732824154513,0.0012701258601414223,0.0010863631780132393,0.001151403997327795,0.001261531100583112,0.0014433469612850924,0.0012625181480229021,0.0013366719381237742,0.0013129577294860868,0.0010799358566476144,0.0012361331567450533,0.0013155633998451644,0.0017427549165517102,0.0017117554798138019,0.0014424582600283703,0.0014934381441740442,0.001320132472902865,0.0010134949123866623,0.0009392144030905535,0.0008956207514417853,0.0009483482891766875,0.0007118586291810097,0.0006572633034661715,0.0006246206878692327]) x_exper = np.array([1.1,1.2000000000000002,1.3,1.4000000000000001,1.5,1.6,1.7000000000000002,1.8,1.9000000000000001,2.0,2.1,2.2,2.3000000000000003,2.4000000000000004,2.5,2.6,2.7,2.8000000000000003,2.9000000000000004,3.0,3.1,3.2,3.3000000000000003,3.4000000000000004,3.5,3.6,3.7,3.8000000000000003,3.9000000000000004,4.0,4.1000000000000005,4.2,4.3,4.4,4.5,4.6000000000000005,4.7,4.800000000000001,4.9,5.0,5.1000000000000005,5.2,5.300000000000001,5.4,5.5,5.6000000000000005,5.7,5.800000000000001,5.9,6.0,6.1000000000000005,6.2,6.300000000000001,6.4,6.5,6.6000000000000005,6.7,6.800000000000001,6.9,7.0,7.1000000000000005,7.2,7.300000000000001,7.4,7.5,7.6000000000000005,7.7,7.800000000000001,7.9,8.0,8.1,8.200000000000001,8.3,8.4,8.5,8.6,8.700000000000001,8.8,8.9,9.0,9.1,9.200000000000001,9.3,9.4,9.5,9.600000000000001,9.700000000000001,9.8,9.9,10.0,10.100000000000001,10.200000000000001,10.3,10.4,10.5,10.600000000000001,10.700000000000001,10.8,10.9,11.0,11.100000000000001,11.200000000000001,11.3,11.4,11.5,11.600000000000001,11.700000000000001,11.8,11.9,12.0,12.100000000000001,12.200000000000001,12.3,12.4,12.5,12.600000000000001,12.700000000000001,12.8,12.9,13.0,13.100000000000001,13.200000000000001,13.3,13.4,13.5,13.600000000000001,13.700000000000001,13.8,13.9,14.0,14.100000000000001,14.200000000000001,14.3,14.4,14.5,14.600000000000001,14.700000000000001,14.8,14.9,15.0,15.100000000000001,15.200000000000001,15.3,15.4,15.5,15.600000000000001,15.700000000000001,15.8,15.9,16.0,16.1,16.2,16.3,16.400000000000002,16.5,16.6,16.7,16.8,16.900000000000002,17.0,17.1,17.2,17.3,17.400000000000002,17.5,17.6,17.7,17.8,17.900000000000002,18.0,18.1,18.2,18.3,18.400000000000002,18.5,18.6,18.7,18.8,18.900000000000002,19.0,19.1,19.200000000000003,19.3,19.400000000000002,19.5,19.6,19.700000000000003,19.8,19.900000000000002,20.0]) def funcExp(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray: return a*np.exp(-b*x) + c def log_residuals(params: np.ndarray) -> float: y = funcExp(x_exper, *params) error = np.log(y) - np.log(y_exper) return error.dot(error) # least squares def lower_envelope(params: np.ndarray) -> np.ndarray: y = funcExp(x_exper, *params) return y_exper - y x0 = y_exper[0]*np.exp(x_exper[0]), 1, y_exper[-1] result = minimize( fun=log_residuals, x0=x0, bounds=Bounds( lb=(0.01, 0.1, -0.1), ub=(0.5, 20, 0.1), ), constraints=NonlinearConstraint( fun=lower_envelope, lb=0, ub=np.inf, ), ) assert result.success, result.message print(result.x) fig, ax = plt.subplots() ax.semilogy(x_exper, y_exper, label='experiment') ax.semilogy(x_exper, funcExp(x_exper, *x0), label='guess') ax.semilogy(x_exper, funcExp(x_exper, *result.x), label='fit') ax.legend() plt.show() [0.2157369 0.5899542 0.000623 ] This will perform better with a more sensible initial guess and defined Jacobians: from functools import partial import matplotlib.pyplot as plt import numpy as np from 
scipy.optimize import check_grad, minimize, Bounds, NonlinearConstraint def load_data() -> tuple[np.ndarray, np.ndarray]: x_exper = np.arange(1.1, 20.05, 0.1) y_exper = np.array([ 0.13598974610162404, 0.14204518683071268, 0.12950580786633123, 0.11907324299581903, 0.10128368784179803, 0.09801605741178761, 0.08384607033484785, 0.080831165652505, 0.08320697432504208, 0.0796448643292049, 0.08036960780924939, 0.07794871929139761, 0.06684868128842808, 0.08473240868175465, 0.12911858937102086, 0.2643875667237164, 0.35984364939831903, 0.2193622531576059, 0.11434823952113388, 0.07542004424929072, 0.05811782617304745, 0.05244297390163204, 0.046658695718735835, 0.04848192538027753, 0.04720951580680828, 0.043285109240216044, 0.04182209865781944, 0.039844899409411334, 0.03462168053862101, 0.03378305258506322, 0.03533297573624328, 0.03434759644082368, 0.033784129758841895, 0.030419029760045915, 0.028085746545496386, 0.02614296782807577, 0.024221565132520304, 0.022189741126251487, 0.02093159168492871, 0.02041496822457043, 0.021031182865802436, 0.024510234374072886, 0.023307213889378165, 0.0267484745286596, 0.02258945483736504, 0.014891232218542747, 0.01151363712852099, 0.010139967470707011, 0.009769727537338574, 0.009323591440734363, 0.008852570111374145, 0.008277064263333187, 0.007088585763561308, 0.00607584327561278, 0.005423044957885124, 0.005017536008889349, 0.005194048550726604, 0.005066069823795679, 0.004923514285732114, 0.0053721924337601975, 0.005156078360383089, 0.004962157137571195, 0.0045958264654801136, 0.0043323942880189766, 0.004310971039183395, 0.004733498071711899, 0.005238905827304569, 0.005180319290046715, 0.0050892994891999395, 0.005323200339923676, 0.005430819354625569, 0.0051261318575094965, 0.004608215352126279, 0.0042522740751442835, 0.003964475580118653, 0.004281845094328685, 0.003932866994198572, 0.003751478035379218, 0.003988758544406512, 0.00366304957414055, 0.0030455636180720283, 0.0027753884456863088, 0.0025920006620398267, 0.00253411154251131, 0.0024133671863316246, 0.0020164600081521793, 0.002294208143652257, 0.0021879013667402856, 0.00213873257081609, 0.0019997327222615736, 0.00195034020886016, 0.0022503784328324725, 0.003038201783164678, 0.003603415824772916, 0.003642976691503975, 0.003263887163622944, 0.0035506429555724373, 0.0047798428190157045, 0.0040553738896165386, 0.002473176007612183, 0.0025941258844692236, 0.0018292994313265358, 0.00209892075806378, 0.0023955564365646335, 0.0020375114833779307, 0.002260575557815427, 0.0022985835848993693, 0.002099406433733155, 0.0018586368200849512, 0.0016053613868235123, 0.001438613175578214, 0.00143049357541102, 0.0013095127315154774, 0.001262471540939509, 0.0013514522407795408, 0.001605619634800475, 0.001961075896285937, 0.001865266816887284, 0.0023526578031602017, 0.00246341280674717, 0.0025884459641316543, 0.0025289043233280195, 0.0027480853600970576, 0.003160811294269662, 0.003061310957205347, 0.0034708227008575852, 0.0027193887970078795, 0.0025019043062104967, 0.001721602287020676, 0.0014938287993981696, 0.001379701311142287, 0.001482278335951954, 0.0017739654977338047, 0.0016173740322614279, 0.0014568993700072393, 0.001561687803455451, 0.0016478201019948435, 0.001296045775857753, 0.001237797494806695, 0.0014233100660923912, 0.001327643348684166, 0.0012058468589450113, 0.001326993796471779, 0.0015302363900395407, 0.0019691433239499958, 0.001914607620254396, 0.0017054233649494027, 0.001999944948934884, 0.001586257522693384, 0.0017888302317418617, 0.0024194552369763127, 0.002602486169233071, 0.0023322619326367703, 
0.002188641252143114, 0.002160637896948486, 0.0017183240941773745, 0.0013791696278384316, 0.0013010975606518034, 0.0012917607493148195, 0.0014473287423454842, 0.0011277134770190562, 0.0009788023156115833, 0.0011624520875172602, 0.0011529250281587956, 0.0011286272690398862, 0.0011650110432320925, 0.0011670732824154513, 0.0012701258601414223, 0.0010863631780132393, 0.001151403997327795, 0.001261531100583112, 0.0014433469612850924, 0.0012625181480229021, 0.0013366719381237742, 0.0013129577294860868, 0.0010799358566476144, 0.0012361331567450533, 0.0013155633998451644, 0.0017427549165517102, 0.0017117554798138019, 0.0014424582600283703, 0.0014934381441740442, 0.001320132472902865, 0.0010134949123866623, 0.0009392144030905535, 0.0008956207514417853, 0.0009483482891766875, 0.0007118586291810097, 0.0006572633034661715, 0.0006246206878692327]) return x_exper, y_exper def func_exp(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray: return a*np.exp(-b*x) + c def log_residuals( params: np.ndarray, x_exper: np.ndarray, y_exper: np.ndarray, ) -> float: error = np.log(y_exper) - np.log(func_exp(x_exper, *params)) return error.dot(error) # least squares def jac_residuals( params: np.ndarray, x: np.ndarray, y: np.ndarray, ) -> tuple[float, float, float]: a, b, c = params aenbx = a*np.exp(-b*x) + c cepbx = c*np.exp(b*x) + a jac_prior_dot = 2*np.stack(( np.log(aenbx/y)/cepbx, a*x*np.log(y/aenbx)/cepbx, -np.log(y/aenbx)/aenbx, )) return jac_prior_dot.sum(axis=1) def lower_envelope(params: np.ndarray, x: np.ndarray) -> np.ndarray: return func_exp(x, *params) def jac_lower_envelope( params: np.ndarray, x: np.ndarray, ) -> np.ndarray: a, b, c = params return np.stack(( np.exp(-b*x), -a*x*np.exp(-b*x), np.ones_like(x), ), axis=1) def estimate(x_exper: np.ndarray, y_exper: np.ndarray) -> tuple[float, float, float]: c0 = y_exper.min() b0 = 0.5 a0 = ((y_exper - c0)*np.exp(b0*x_exper))[:-1].min() return a0, b0, c0 def solve( x_exper: np.ndarray, y_exper: np.ndarray, x0: tuple[float, float, float], ) -> np.ndarray: error = check_grad( log_residuals, jac_residuals, x0, x_exper, y_exper, ) / np.abs(jac_residuals(x0, x_exper, y_exper)).sum() assert error < 1e-4 error = check_grad( lower_envelope, jac_lower_envelope, x0, x_exper, ) assert error < 1e-6 result = minimize( fun=log_residuals, jac=jac_residuals, args=(x_exper, y_exper), x0=x0, bounds=Bounds( lb=(0.01, 0.1, -0.1), ub=(0.5, 20, 0.1), ), constraints=NonlinearConstraint( fun=partial(lower_envelope, x=x_exper), # confuses the optimizer; cannot make any progress # jac=partial(jac_lower_envelope, x=x_exper), lb=-np.inf, ub=y_exper, ), ) if not result.success: raise ValueError(result.message) return result.x def plot( x_exper: np.ndarray, y_exper: np.ndarray, x0: tuple[float, float, float], xopt: np.ndarray, ) -> plt.Figure: fig, ax = plt.subplots() ax.semilogy(x_exper, y_exper, label='experiment') ax.semilogy(x_exper, func_exp(x_exper, *x0), label='guess') ax.semilogy(x_exper, func_exp(x_exper, *xopt), label='fit') ax.legend() return fig def main() -> None: x_exper, y_exper = load_data() x0 = estimate(x_exper, y_exper) xopt = solve(x_exper, y_exper, x0) print(xopt) plot(x_exper, y_exper, x0, xopt) plt.show() if __name__ == '__main__': main() [0.21573704 0.5899543 0.000623 ] | 3 | 3 |
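For comparison, the penalty idea from the question can also be made to work as a soft constraint with `scipy.optimize.least_squares`. This is only an illustrative sketch, not the answer's method: `x` and `y` are assumed to be the arrays defined in the question, the starting point is taken roughly from the answer's fitted values, and `weight` is an arbitrary tuning knob. Unlike the `NonlinearConstraint` approach above, a penalty only discourages overshoot; it does not strictly forbid it.

```python
import numpy as np
from scipy.optimize import least_squares

def funcExp(x, a, b, c):
    return a * np.exp(-b * x) + c

def penalized_residuals(p, x, y, weight=1e3):
    model = funcExp(x, *p)
    # ordinary residuals plus a one-sided penalty wherever the model
    # rises above the data (violating the "always under" requirement)
    overshoot = np.maximum(model - y, 0.0)
    return np.concatenate([y - model, weight * overshoot])

# x, y: the arrays from the question; x0 is an assumed rough starting point
res = least_squares(penalized_residuals, x0=[0.2, 0.6, 1e-3], args=(x, y))
print(res.x)
```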
78,268,766 | 2024-4-3 | https://stackoverflow.com/questions/78268766/fastest-way-to-extract-relevant-rows-pandas | I've got a very large pandas df that has the fields group, id_a, id_b and score, ordered by score so the highest is at the top. There is a row for every possible combination of id_a and id_b. I want to extract rows so that there is only one row per id_a and id_b, which reflects the highest score possible without repeating IDs. Showing an example of what this might look like - the resulting df has 3 rows, with all ids in id_a and id_b appearing once each. In the case of A2/B2 and A1/B1, the row with the best score for both IDs has been used. In the case of A3, the best score related to a row with B1, which had already been used, so the next best score combined with B3 is used. Input table Desired result To achieve this, I've got a loop iterating through the original df. This is incredibly slow with a large dataset. I've tried coming up with alternatives but I am struggling, for e.g.: Once a row is identified for a pair of IDs, I could remove those IDs from the original df, but I'm not sure how to do this without restarting the loop I could split things by group (the example only has 1 group but there would be lots more, with IDs unique across groups) - however this doesn't seem to save time Can anybody offer any other approaches? Thank you! import pandas as pd # Create sample df group = [1, 1, 1, 1, 1, 1, 1, 1, 1] id_a = ['A2', 'A1', 'A3', 'A3', 'A2', 'A1', 'A1', 'A2', 'A3'] id_b = ['B2', 'B1', 'B1', 'B3', 'B1', 'B2', 'B3', 'B3', 'B2'] score = [0.99, 0.98, 0.97, 0.96, 0.93, 0.5, 0.41, 0.4, 0.2] df = pd.DataFrame({'group': group, 'id_a': id_a, 'id_b': id_b, 'score': score}) result = pd.DataFrame(columns=df.columns) # Extract required rows for i, row in df.iterrows(): if len(result) == 0: result = row.to_frame().T else: if ((row['id_a'] in result['id_a'].tolist()) or (row['id_b'] in result['id_b'].tolist())): continue else: result = pd.concat([result, row.to_frame().T[result.columns]]) | This looks to me line a linear_sum_assignment problem (i.e. maximizing the sum of score for unique pairs of id_a/id_b), you could use a pivot per group to do this: from scipy.optimize import linear_sum_assignment out = {} for group, g in (df.pivot(index=['group', 'id_a'], columns='id_b', values='score') .groupby(level=0)): idx, col = linear_sum_assignment(g, maximize=True) out[group] = pd.DataFrame({'id_a': g.index.get_level_values(1)[idx], 'id_b': g.columns[col], 'score': g.values[idx, col]}) out = pd.concat(out, names=['group']).reset_index(0) Alternative code (does the same thing with a different order of the steps). This way is preferred if you don't have the same id_a/id_b across groups (this avoids building a large pivoted intermediate): from scipy.optimize import linear_sum_assignment import numpy as np out = [] for group, g in df.sort_values(by=['id_a', 'id_b']).groupby('group'): tmp = g.pivot(index='id_a', columns='id_b', values='score') idx, col = linear_sum_assignment(tmp, maximize=True) out.append(g.iloc[np.ravel_multi_index((idx, col), tmp.shape)]) out = pd.concat(out) Output: group id_a id_b score 0 1 A1 B1 0.98 1 1 A2 B2 0.99 2 1 A3 B3 0.96 | 4 | 2 |
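As a minimal illustration of what `linear_sum_assignment` itself returns, here is the question's sample group written out as the A-by-B score matrix that the pivot builds:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Score matrix for the question's sample group: rows A1-A3, columns B1-B3.
scores = np.array([
    [0.98, 0.50, 0.41],   # A1
    [0.93, 0.99, 0.40],   # A2
    [0.97, 0.20, 0.96],   # A3
])
rows, cols = linear_sum_assignment(scores, maximize=True)
print(rows, cols)              # [0 1 2] [0 1 2] -> A1-B1, A2-B2, A3-B3
print(scores[rows, cols])      # [0.98 0.99 0.96], matching the output above
```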
78,268,740 | 2024-4-3 | https://stackoverflow.com/questions/78268740/import-from-confusion | I read the Python3 references for the Import statement here, in which it said: The from form uses a slightly more complex process: find the module specified in the from clause, loading and initializing it if necessary; for each of the identifiers specified in the import clauses: check if the imported module has an attribute by that name if not, attempt to import a submodule with that name and then check the imported module again for that attribute if the attribute is not found, ImportError is raised. otherwise, a reference to that value is stored in the local namespace, using the name in the as clause if it is present, otherwise using the attribute name For the quoted block in above, I cannot figure out what does it really say about the form: "from A import B". In my understanding, in that case, if B is not an attribute of A, then the process will search for a submodule named as "B", if that is found, the process would search an attribute named "B" in B submodule(from "then check the imported module again for that attribute"), is my understanding to that correct or not ? Need your help. | It's checking the first module again, not the submodule. So for example, from foo import bar will find the foo module and check it for a bar attribute. If there's no such attribute there, it will attempt to import a foo.bar submodule. If this new import succeeds, it should ordinarily result in the submodule getting set as a bar attribute on foo. The first import will then check foo again for the bar attribute it expects to now be there. (The description in the docs leaves out one last step - if there's still no bar attribute on foo, then the import will try to retrieve foo.bar directly from sys.modules. This matters in some circular import cases, due to how the initialization order works out in circular imports.) | 2 | 5 |
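A concrete standard-library case of this fallback is `urllib`, whose `__init__.py` defines essentially nothing, so `from urllib import parse` has to go through the submodule-import step described in the answer. The snippet below assumes a fresh interpreter; the first check can already be True if something else imported `urllib.parse` earlier.

```python
import sys, urllib

# urllib/__init__.py is essentially empty, so right after "import urllib"
# there is usually no "parse" attribute on it yet:
print(hasattr(urllib, "parse"))               # False in a fresh interpreter

from urllib import parse                      # attribute missing -> submodule import

print(hasattr(urllib, "parse"))               # True: importing urllib.parse set it on urllib
print(sys.modules["urllib.parse"] is parse)   # True
```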
78,237,117 | 2024-3-28 | https://stackoverflow.com/questions/78237117/jupyter-doesnt-let-me-type-the-left-square-bracket | Recently I have installed Jupyter Notebook on my Mac for didactical use and I've noticed a problem: doing list = Jupyter doesn't let me type the left square bracket, and if I type that in a new line it allows me to type, for example: [] I've just tried to reinstall completely Jupyter from the prompt using Homebrew and it doesn't worked. How can I do? | This is a known issue: https://github.com/jupyterlab/jupyterlab/issues/15744 You can do: Settings β Settings Editor β JSON Settings Editor β Keyboard Shortcuts β User β paste β save paste: { "shortcuts": [ { "args": {}, "command": "inline-completer:next", "keys": [ "Alt ]" ], "selector": ".jp-mod-completer-enabled", "disabled": true }, { "args": {}, "command": "inline-completer:previous", "keys": [ "Alt [" ], "selector": ".jp-mod-completer-enabled", "disabled": true }, { "args": {}, "command": "inline-completer:invoke", "keys": [ "Alt \\" ], "selector": ".jp-mod-completer-enabled", "disabled": true } ] } https://github.com/jupyterlab/jupyterlab/issues/15744#issuecomment-1999433752 As @everson commented please restart your notebook after changes. | 4 | 8 |
78,251,318 | 2024-3-31 | https://stackoverflow.com/questions/78251318/optuna-hyperband-algorithm-not-following-expected-model-training-scheme | I have observed an issue while using the Hyperband algorithm in Optuna. According to the Hyperband algorithm, when min_resources = 5, max_resources = 20, and reduction_factor = 2, the search should start with an initial space of 4 models for bracket 1, with each model receiving 5 epochs in the first round. Subsequently, the number of models is reduced by a factor of 2 in each round and search space should also reduced by factor of 2 for next brackets i.e bracket 2 will have initial search space of 2 models, and the number of epochs for the remaining models is doubled in each subsequent round. so total models should be 11 is expected but it is training lot's of models. link of the article:- https://arxiv.org/pdf/1603.06560.pdf import optuna import numpy as np import pandas as pd from tensorflow.keras.layers import Dense,Flatten,Dropout import tensorflow as tf from tensorflow.keras.models import Sequential # Toy dataset generation def generate_toy_dataset(): np.random.seed(0) X_train = np.random.rand(100, 10) y_train = np.random.randint(0, 2, size=(100,)) X_val = np.random.rand(20, 10) y_val = np.random.randint(0, 2, size=(20,)) return X_train, y_train, X_val, y_val X_train, y_train, X_val, y_val = generate_toy_dataset() # Model building function def build_model(trial): model = Sequential() model.add(Dense(units=trial.suggest_int('unit_input', 20, 30), activation='selu', input_shape=(X_train.shape[1],))) num_layers = trial.suggest_int('num_layers', 2, 3) for i in range(num_layers): units = trial.suggest_int(f'num_layer_{i}', 20, 30) activation = trial.suggest_categorical(f'activation_layer_{i}', ['relu', 'selu', 'tanh']) model.add(Dense(units=units, activation=activation)) if trial.suggest_categorical(f'dropout_layer_{i}', [True, False]): model.add(Dropout(rate=0.5)) model.add(Dense(1, activation='sigmoid')) optimizer_name = trial.suggest_categorical('optimizer', ['adam', 'rmsprop']) if optimizer_name == 'adam': optimizer = tf.keras.optimizers.Adam() else: optimizer = tf.keras.optimizers.RMSprop() model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy', tf.keras.metrics.AUC(name='val_auc')]) return model def objective(trial): model = build_model(trial) # Assuming you have your data prepared # Modify the fit method to include AUC metric history = model.fit(X_train, y_train, validation_data=(X_val, y_val), verbose=1) # Check if 'val_auc' is recorded auc_key = None for key in history.history.keys(): if key.startswith('val_auc'): auc_key = key print(f"auc_key is {auc_key}") break if auc_key is None: raise ValueError("AUC metric not found in history. Make sure it's being recorded during training.") # Report validation AUC for each model if auc_key =="val_auc": step=0 else: step = int(auc_key.split('_')[-1]) auc_value=history.history[auc_key][0] trial.report(auc_value, step=step) print(f"prune or not:-{trial.should_prune()}") if trial.should_prune(): raise optuna.TrialPruned() return history.history[auc_key] # Optuna study creation study = optuna.create_study( direction='maximize', pruner=optuna.pruners.HyperbandPruner( min_resource=5, max_resource=20, reduction_factor=2 ) ) # Start optimization study.optimize(objective) | You are using the default value of the parameter n_trials in the study.optimize function, which is None. 
According to the documentation, that means that it will stop evaluating configurations when it "times out". Optuna's Hyperband implementation is not identical to what was described in the original article. It has some tweaks to make the algorithm compatible with Optuna's inner workings. You can check the number of successive halving brackets like this: study.pruner._n_brackets. And you can check the allocated budget to each bracket like this: study.pruner._trial_allocation_budgets. What I am still trying to figure out is how the n_trials plays into defining the number of configurations that will be examined at each bracket. | 2 | 1 |
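A minimal sketch of how those two points could be applied to the question's study follows; `objective` is the function defined in the question, `n_trials=11` is just an example cap matching the expected model count, and the two pruner attributes are private, as noted above.

```python
# Sketch only: `objective` is the function defined in the question.
import optuna

study = optuna.create_study(
    direction="maximize",
    pruner=optuna.pruners.HyperbandPruner(
        min_resource=5, max_resource=20, reduction_factor=2
    ),
)

# Cap the number of configurations explicitly instead of relying on the
# default n_trials=None (which only stops on timeout or interruption).
study.optimize(objective, n_trials=11)

# Private attributes mentioned in the answer: useful for inspection, but
# not part of Optuna's public API, so they may change between versions.
print(study.pruner._n_brackets)
print(study.pruner._trial_allocation_budgets)
```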
78,250,734 | 2024-3-31 | https://stackoverflow.com/questions/78250734/operator-class-gin-trgm-ops-does-not-exist-for-access-method-gin | psycopg2.errors.UndefinedObject: operator class "gin_trgm_ops" does not exist for access method "gin" Hello everybody, this is the whole message when I try to run pytest on my project which is written in Python/Django + db is postgresql and all sitting inside docker. I build a project on docker with django-cookiecutter template, all settings are default. I put a gin index to one of my string fields, migrations running successfully, pg_trgm extention is being created successfully, but if I try to test my project with pytest I got this error. Here is my pytest.ini [pytest] DJANGO_SETTINGS_MODULE = config.settings.test Here is my test settings file's database configuration: test.py DATABASES['test'] = { # noqa 'ENGINE': 'django.contrib.gis.db.backends.postgis', 'NAME': 'test', 'PASSWORD': 'test', 'USER': 'test', 'HOST': 'localhost', 'PORT': 5454, } This is part of the migration which creates the extention pg_trgm and puts an index to the field given migrations.AddIndex( model_name='<model_name>', index=django.contrib.postgres.indexes.GinIndex(fields=['field_name'], name='field_name_gin_idx', opclasses=['gin_trgm_ops']), ), And this is the whole traceback which I am getting: self = <django.db.backends.utils.CursorWrapper object at 0xffff7244dbb0> sql = 'CREATE INDEX "bank_name_gin_idx" ON "financing_graincreditagrobankworksheet" USING gin ("bank_name" gin_trgm_ops)', params = None ignored_wrapper_args = (False, {'connection': <django.contrib.gis.db.backends.postgis.base.DatabaseWrapper object at 0xffff7d096b80>, 'cursor': <django.db.backends.utils.CursorWrapper object at 0xffff7244dbb0>}) def _execute(self, sql, params, *ignored_wrapper_args): self.db.validate_no_broken_transaction() with self.db.wrap_database_errors: if params is None: # params default might be backend specific. > return self.cursor.execute(sql) E psycopg2.errors.UndefinedObject: operator class "gin_trgm_ops" does not exist for access method "gin" /usr/local/lib/python3.9/site-packages/django/db/backends/utils.py:82: UndefinedObject The above exception was the direct cause of the following exception: request = <SubRequest '_django_setup_unittest' for <TestCaseFunction test_correct_insurance_company_name>> django_db_blocker = <pytest_django.plugin._DatabaseBlocker object at 0xffff7fe702b0> @pytest.fixture(autouse=True, scope="class") def _django_setup_unittest( request, django_db_blocker: "_DatabaseBlocker", ) -> Generator[None, None, None]: """Setup a django unittest, internal to pytest-django.""" if not django_settings_is_configured() or not is_django_unittest(request): yield return # Fix/patch pytest. 
# Before pytest 5.4: https://github.com/pytest-dev/pytest/issues/5991 # After pytest 5.4: https://github.com/pytest-dev/pytest-django/issues/824 from _pytest.unittest import TestCaseFunction original_runtest = TestCaseFunction.runtest def non_debugging_runtest(self) -> None: self._testcase(result=self) try: TestCaseFunction.runtest = non_debugging_runtest # type: ignore[assignment] > request.getfixturevalue("django_db_setup") /usr/local/lib/python3.9/site-packages/pytest_django/plugin.py:490: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.9/site-packages/pytest_django/fixtures.py:122: in django_db_setup db_cfg = setup_databases( /usr/local/lib/python3.9/site-packages/django/test/utils.py:179: in setup_databases connection.creation.create_test_db( /usr/local/lib/python3.9/site-packages/django/db/backends/base/creation.py:74: in create_test_db call_command( /usr/local/lib/python3.9/site-packages/django/core/management/__init__.py:181: in call_command return command.execute(*args, **defaults) /usr/local/lib/python3.9/site-packages/django/core/management/base.py:398: in execute output = self.handle(*args, **options) /usr/local/lib/python3.9/site-packages/django/core/management/base.py:89: in wrapped res = handle_func(*args, **kwargs) /usr/local/lib/python3.9/site-packages/django/core/management/commands/migrate.py:244: in handle post_migrate_state = executor.migrate( /usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py:117: in migrate state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial) /usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py:147: in _migrate_all_forwards state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial) /usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py:227: in apply_migration state = migration.apply(state, schema_editor) /usr/local/lib/python3.9/site-packages/django/db/migrations/migration.py:126: in apply operation.database_forwards(self.app_label, schema_editor, old_state, project_state) /usr/local/lib/python3.9/site-packages/django/db/migrations/operations/models.py:761: in database_forwards schema_editor.add_index(model, self.index) /usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/schema.py:218: in add_index self.execute(index.create_sql(model, self, concurrently=concurrently), params=None) /usr/local/lib/python3.9/site-packages/django/db/backends/base/schema.py:145: in execute cursor.execute(sql, params) /usr/local/lib/python3.9/site-packages/django/db/backends/utils.py:66: in execute return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) /usr/local/lib/python3.9/site-packages/django/db/backends/utils.py:75: in _execute_with_wrappers return executor(sql, params, many, context) /usr/local/lib/python3.9/site-packages/django/db/backends/utils.py:84: in _execute return self.cursor.execute(sql, params) /usr/local/lib/python3.9/site-packages/django/db/utils.py:90: in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <django.db.backends.utils.CursorWrapper object at 0xffff7244dbb0> sql = 'CREATE INDEX "bank_name_gin_idx" ON 
"financing_graincreditagrobankworksheet" USING gin ("bank_name" gin_trgm_ops)', params = None ignored_wrapper_args = (False, {'connection': <django.contrib.gis.db.backends.postgis.base.DatabaseWrapper object at 0xffff7d096b80>, 'cursor': <django.db.backends.utils.CursorWrapper object at 0xffff7244dbb0>}) def _execute(self, sql, params, *ignored_wrapper_args): self.db.validate_no_broken_transaction() with self.db.wrap_database_errors: if params is None: # params default might be backend specific. > return self.cursor.execute(sql) E django.db.utils.ProgrammingError: operator class "gin_trgm_ops" does not exist for access method "gin" /usr/local/lib/python3.9/site-packages/django/db/backends/utils.py:82: ProgrammingError I want to run tests with pytest and got this problem which is actually not causing any problem while project is running, gin index is actually working. But only when I run a test no matter its django's manage.py test or pytest it fails with the error above. If I comment out the part of migration which should generate an index it works. | Try adding to your migration: from django.contrib.postgres.operations import TrigramExtension operations = [ TrigramExtension(), ... ] | 2 | 3 |
78,238,232 | 2024-3-28 | https://stackoverflow.com/questions/78238232/python-3-7-4-and-3-10-6-asyncio-creating-multiple-tasks-in-vs-code-debug-call-st | When using asyncio and aiohttp, faced this issue and was wondering if it was an occurance limited to VSC only? This one keeps leaving residues from the previous loops. while True: loop.run_until_complete(get_data()) This runs without any residual tasks. while True: asyncio.run(get_data()) session = aiohttp.ClientSession() async def fetch(url, params=None): async with semaphore: async with session.get(url, params=params) as response: return await response.json() I tried moving the while loop inside the called method itself but still the same result. Also tried using semaphores with no better luck. I was trying to manage multiple rest api call methods using aiohttp from within a single loop but with this happening, guess I am stuck with having to use asyncio.run separately for each api related method. So the only way to avoid this right now is to keep closing and reopening separate loops? Could this possibly be a VSC related issue? Edit: Tried something with Python 3.10.6 and VSCodium. I don't know if it's because this one shows tasks/loops instead of threadpool here but the same issue is not replicated. The increment issue still persists when using Python 3.10.6 with VS Code. So could this be either 1) VSC issue as suspected or 2) just that the call stacks are being treated different between the two IDEs? while True: asyncio.run(get_data()) Another thing to note is that the ThreadPoolExecutor stacking when using VSC, stops exactly at the 20th stack: ThreadPoolExecutor-0_19. Then no more increments and no crashes. | This worked. Still not sure as to the different results between VSC and VSCodium with the initial code but nonetheless, for anyone else who stumbles through. async def fetch(url, params=None): if not self.session: self.session = aiohttp.ClientSession() async with semaphore: async with session.get(url, params=params) as response: return await response.json() async def get_data(self): json_response = await self.fetch("https://jsonplaceholder.typicode.com/todos/1") while True: asyncio.run(get_data()) | 2 | 0 |
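A sketch of the more conventional structure is shown below: a single `asyncio.run` call with the loop inside it, so one `ClientSession` is created on the running event loop, reused for every request, and closed cleanly. This is not the answer author's exact code; the URL is the placeholder endpoint from the answer and the sleep is arbitrary pacing.

```python
import asyncio
import aiohttp

async def fetch(session: aiohttp.ClientSession, url: str, params=None):
    async with session.get(url, params=params) as response:
        return await response.json()

async def main() -> None:
    # One session, created inside the running loop and reused for every
    # request, then closed when the loop ends.
    async with aiohttp.ClientSession() as session:
        while True:
            data = await fetch(session, "https://jsonplaceholder.typicode.com/todos/1")
            print(data)
            await asyncio.sleep(1)   # placeholder pacing between polls

if __name__ == "__main__":
    asyncio.run(main())
```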
78,263,269 | 2024-4-2 | https://stackoverflow.com/questions/78263269/scrapy-and-great-expectations-great-expectations-not-working-together | I am trying to use the packages scrapy and great_expectations within the same virtual environment. There seems to be an issue with the compatibility between the two packages, depending on the order in which I import them in. Example: I created a virtual environment and pip installed the latest version of each package. Does work: import great_expectations import scrapy print("done") Does not work: import scrapy import great_expectations print("done") Error: Traceback (most recent call last): File "/Users/grant/vs_code_projects/grants_projects/test_environment.py", line 2, in <module> import great_expectations File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/__init__.py", line 32, in <module> register_core_expectations() File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/registry.py", line 187, in register_core_expectations from great_expectations.expectations import core # noqa: F401 File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/core/__init__.py", line 1, in <module> from .expect_column_distinct_values_to_be_in_set import ( File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/core/expect_column_distinct_values_to_be_in_set.py", line 12, in <module> from great_expectations.expectations.expectation import ( File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/expectation.py", line 2350, in <module> class BatchExpectation(Expectation, ABC): File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/expectation.py", line 287, in __new__ newclass._register_renderer_functions() File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/expectation.py", line 369, in _register_renderer_functions attr_obj: Callable = getattr(cls, candidate_renderer_fn_name) AttributeError: __provides__ Edit: I have tried all major versions of Scrapy (current version, 2.0.0, 1.0.0) with all major versions of Great Expectations and it is the same result. | This has been fixed as of 0.18.13 https://github.com/great-expectations/great_expectations/releases/tag/0.18.13 | 4 | 2 |
78,253,818 | 2024-4-1 | https://stackoverflow.com/questions/78253818/how-to-specify-column-data-type | I have the following code: import polars as pl from typing import NamedTuple class Event(NamedTuple): name: str description: str def event_table(num) -> list[Event]: events = [] for i in range(num): events.append(Event("name", "description")) return events data = {"events": [1, 2]} df = pl.DataFrame(data).select(events=pl.col("events").map_elements(event_table)) """ shape: (2, 1) βββββββββββββββββββββββββββββββββββββ β events β β --- β β list[struct[2]] β βββββββββββββββββββββββββββββββββββββ‘ β [{"name","description"}] β β [{"name","description"}, {"name"β¦ β βββββββββββββββββββββββββββββββββββββ """ But if the first list is empty, I get a list[list[str]] instead of the list[struct[2]] that I need: data = {"events": [0, 1, 2]} df = pl.DataFrame(data).select(events=pl.col("events").map_elements(event_table)) print(df) """ shape: (3, 1) βββββββββββββββββββββββββββββββββββββ β events β β --- β β list[list[str]] β βββββββββββββββββββββββββββββββββββββ‘ β [] β β [["name", "description"]] β β [["name", "description"], ["nameβ¦ β βββββββββββββββββββββββββββββββββββββ """ I tried using the return_dtype of the map_elements function like: data = {"events": [0, 1, 2]} df = pl.DataFrame(data).select( events=pl.col("events").map_elements( event_table, return_dtype=pl.List(pl.Struct({"name": pl.String, "description": pl.String})), ) ) but this failed with: Traceback (most recent call last): File "script.py", line 18, in <module> df = pl.DataFrame(data).select( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.11/site-packages/polars/dataframe/frame.py", line 8193, in select return self.lazy().select(*exprs, **named_exprs).collect(_eager=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".venv/lib/python3.11/site-packages/polars/lazyframe/frame.py", line 1943, in collect return wrap_df(ldf.collect()) ^^^^^^^^^^^^^ polars.exceptions.SchemaError: expected output type 'List(Struct([Field { name: "name", dtype: String }, Field { name: "description", dtype: String }]))', got 'List(List(String))'; set `return_dtype` to the proper datatype How can I get this to work? i need the type of this column to be list[struct[2]] event if the first list is empty. | Quick fix right now Here's a map_batches implementation that should be at least marginally faster. def event_table(col: pl.Series) -> pl.Series: return pl.Series( [ [ Event("name", "description")._asdict() #note ._asdict() for _ in range(num) ] for num in col ] ) It uses nested list comprehensions which ought to be a bit faster than appending to a list in an explicit for loop but that is a python optimization not polars. pl.DataFrame(data).select(events=pl.col("events").map_batches(event_table)) shape: (3, 1) βββββββββββββββββββββββββββββββββββββ β events β β --- β β list[struct[2]] β βββββββββββββββββββββββββββββββββββββ‘ β [] β β [{"name","description"}] β β [{"name","description"}, {"name"β¦ β βββββββββββββββββββββββββββββββββββββ You actually just need to use _asdict() rather than relying on polars to infer what a NamedTuple ought to be. Medium to Long term fix The issue is here specifically that in certain paths, it treats tuples and lists the same and since a NamedTuple is a tuple, that's why it gets returned as a list. This PR makes it check for the _asdict method and shifts to treating it as a dict/struct. 
With this PR you can do class Event(NamedTuple): name: str description: str def event_table(num: int) -> list[Event]: return [Event("name", "desc") for _ in range(num)] data = {"events": [0, 1, 2]} pl.DataFrame(data).select( events=pl.col("events").map_elements( event_table, return_dtype=pl.List( pl.Struct({"name": pl.String, "description": pl.String}) ), ) ) shape: (3, 1) βββββββββββββββββββββββββββββββββββββ β events β β --- β β list[struct[2]] β βββββββββββββββββββββββββββββββββββββ‘ β [] β β [{"name","desc"}] β β [{"name","desc"}, {"name","desc"β¦ β βββββββββββββββββββββββββββββββββββββ | 5 | 3 |
78,239,484 | 2024-3-28 | https://stackoverflow.com/questions/78239484/why-does-dataframe-to-sql-slow-down-after-certain-amount-of-rows | I have a very large Pandas Dataframe ~9 million records, 56 columns, which I'm trying to load into a MSSQL table, using Dataframe.to_sql(). Importing the whole Dataframe in one statement often leads to errors, relating to memory. To cope with this, I'm looping through the Dataframe in batches of 100K rows, and importing a batch at a time. This way I no longer get any errors, but the code slows down dramatically after about 5.8 million records. The code I'm using: maxrow = df.shape[0] stepsize = 100000 for i in range(0, maxrow, stepsize): batchstart = datetime.datetime.now() if i == 0: if_exists = 'replace' else: if_exists = 'append' df_import = df.iloc[i:i+stepsize] df_import.to_sql('tablename', engine, schema='tmp', if_exists=if_exists, index=False, dtype=dtypes ) I've timed the batches, and there is a clear breaking point in speed: These results are basically the same for batches of 50k, 100k and 200k rows. it takes about 40 minutes to upload 6 million records, and another 2 hours and 20 minutes to upload the next 3 million. My thinking was that it was either due to the size of the MSSQL table, or something being cached/saved after each upload. Because of that I've tried pushing the Dataframe to two different tables. I've also tried something like expunge_all() on the SQLALchemy session, after each upload. Both to no effect. Manually stopping imports after 5 million records and restarting from 5 million with a new engine object also hasn't helped. I'm all out of ideas what might be the cause of the process slowing down so drastically, and would really appreciate help. UPDATE As a last resort I've reversed the loop, uploading parts of the Dataframe starting at the highest index, looping down. This has basically reversed the times per batch. So it seems it is the data itself that is different/bigger further down the Dataframe. Not the connection being overloaded or the SQL table getting to large. Thanks to everyone trying to help, but it seems I need to go through the data to see what causes this. | I've found a solution, which might be useful for someone else looking to speed up a slow(ing) Dataframe.to_sql() operation, and has already tried things like the chunksizes and setting up the SQLALchemy connection with fast_executemany=True. Not sure what mechanism causes this change in performance, but it's worked for me, and I haven't seen it mentioned anywhere else. So I hope it helps someone else. Sorting the original Dataframe with DataFrame.sort_values() has cut the time needed by roughly 2/3. It seems to be very important on which columns the values are sorted though. The speed is best when the Dataframe is sorted by a column which is unique and acts like a nested/clustered sort. Sorting by this column implicitly sorts by about 20 columns. It's sorting addresses, which also sorts street, postal code, city, region etc. That also means that a lot of rows within a batch share a lot of values within the rows. When the Dataframe is sorted by another column which has no connection to other columns it actually slows down the Dataframe.to_sql() quite dramatically. | 2 | 1 |
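A runnable sketch of the batching loop with the sort applied first follows; the toy DataFrame, the sort column and the in-memory SQLite engine are stand-ins for the real 9-million-row frame and the MSSQL connection (which would ideally also use `fast_executemany=True`).

```python
import datetime
import numpy as np
import pandas as pd
import sqlalchemy

# Toy stand-ins so the sketch runs as-is.
df = pd.DataFrame({
    "address": np.random.choice(["1 Main St", "2 Oak Ave", "3 Elm Rd"], size=10_000),
    "value": np.random.rand(10_000),
})
engine = sqlalchemy.create_engine("sqlite://")

df = df.sort_values("address")        # the change that cut the runtime by roughly 2/3

stepsize = 2_000
for i in range(0, df.shape[0], stepsize):
    if_exists = "replace" if i == 0 else "append"
    start = datetime.datetime.now()
    df.iloc[i:i + stepsize].to_sql(
        "tablename", engine, if_exists=if_exists, index=False
    )
    print(i, datetime.datetime.now() - start)
```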
78,264,205 | 2024-4-2 | https://stackoverflow.com/questions/78264205/not-allowed-to-access-non-ipm-folder | I've been using exchangelib library in python for a long time now to access emails in an email account. It's been working amazing. Out of the blue today, I get an error saying, KeyError: 'folders' During handling of the above exception, another exception occured: exchangelib.errors.ErrorAccessDenied: Not allowed to access Non IPM folder. The line of code this is happening on is right here. msg_folder= my_account.root / 'Top of Information Store' / 'my_subfolder' Like I mentioned, this has been working great for over a year now. Double checked that access for the microsoft application is correct, and it is. It also looks like the latest release of exchangelib was 3/8/24 and it's been working since then, so it can't be that. The other weird thing, is a separate script is able to access messages in the inbox, it's just this sibling folder to the inbox that is throwing the error. I'm not finding anything on this error though. Any ideas on how to fix this? | This is caused by a recent change in O365. We may be able to find a fix for it in exchangelib. Until then, a workaround is to navigate to folders using double slashes: msg_folder = my_account.root // 'Top of Information Store' // 'my_subfolder' This works by not collecting the full folder hierarchy first and navigating the client-side folder cache, but rather asking the server for the specific child folder each time we reach a new // level. UPDATE: fix provided in https://github.com/ecederstrand/exchangelib/issues/1290 | 3 | 5 |
78,232,655 | 2024-3-27 | https://stackoverflow.com/questions/78232655/how-to-configure-ray-to-use-standard-python-logger-for-multiprocessing | I am trying to use Ray to improve the speed of a process and I want the log messages to be passed to the standard Python logger. This way, the application can handle formatting, filtering, and saving the log messages. However, when I use Ray, the log messages are not formatted according to my logger configuration and are not passed back to the root logger. I tried setting log_to_driver=True and configure_logging=True in ray.init() , but it didn't solve the problem. How can I configure Ray to use the standard Python logger for multiprocessing?" Here is an example that should demonstrate the issue: from ray.util.multiprocessing import Pool import pathlib import logging import json def setup_logging(config_file: pathlib.Path): with open(config_file) as f_in: config = json.load(f_in) logging.config.dictConfig(config) logger = logging.getLogger(__name__) config_file = pathlib.Path(__file__).parent / "log_setup/config_logging.json" setup_logging(config_file=config_file) def f(index): logger.warning(f"index: {index}") return (index, "model") if __name__ == "__main__": logger.warning("Starting") pool = Pool(1) results = pool.map(f, range(10)) print(list(results)) where I have the config of the logger as: { "version": 1, "disable_existing_loggers": false, "formatters": { "detailed": { "format": "[%(levelname)s|%(name)s|%(module)s|L%(lineno)d] %(asctime)s: %(message)s", "datefmt": "%Y-%m-%dT%H:%M:%S%z" } }, "handlers": { "stdout": { "class": "logging.StreamHandler", "level": "INFO", "formatter": "detailed" } }, "loggers": { "root": { "level": "DEBUG", "handlers": [ "stdout" ] } } } If I just use the python map, I would get the following printed: [WARNING|__main__|ray_trial|L28] 2024-03-27T15:14:21+0100: Starting [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 0 [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 1 [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 2 [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 3 [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 4 [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 5 [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 6 [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 7 [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 8 [WARNING|__main__|ray_trial|L22] 2024-03-27T15:14:21+0100: index: 9 But when I use Ray I get: 2024-03-27 14:54:55,064 INFO worker.py:1743 -- Started a local Ray instance. View the dashboard at 127.0.0.1:8265 (PoolActor pid=43261) index: 0 (PoolActor pid=43261) index: 1 (PoolActor pid=43261) index: 2 (PoolActor pid=43261) index: 3 (PoolActor pid=43261) index: 4 (PoolActor pid=43261) index: 5 (PoolActor pid=43261) index: 6 (PoolActor pid=43261) index: 7 (PoolActor pid=43261) index: 8 (PoolActor pid=43261) index: 9 | As per the Ray documentation, every worker sets up its own logging, and so if you want to change the logging, you need to initialize it for every worker. The simplest way to do so is also given in the Ray documentation: # driver.py def logging_setup_func(): logger = logging.getLogger("ray") logger.setLevel(logging.DEBUG) warnings.simplefilter("always") ray.init(runtime_env={"worker_process_setup_hook": logging_setup_func}) logging_setup_func() You can then use ray's implementation of Pool as normally. | 3 | 2 |
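One possible way to wire the question's Pool example to that hook is sketched below (untested); `setup_logging`, `config_file` and `f` are the objects defined in the question, and the hook itself is just a small wrapper that reruns the same dictConfig on each worker.

```python
# Sketch: run the question's dictConfig setup on every Ray worker before
# the Pool executes anything.
import ray
from ray.util.multiprocessing import Pool

def worker_logging_setup() -> None:
    setup_logging(config_file=config_file)   # same dictConfig as the driver

if __name__ == "__main__":
    ray.init(runtime_env={"worker_process_setup_hook": worker_logging_setup})
    pool = Pool(1)
    results = pool.map(f, range(10))
    print(list(results))
```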
78,264,508 | 2024-4-2 | https://stackoverflow.com/questions/78264508/fastest-way-to-extract-moving-dynamic-crop-from-video-using-ffmpeg | I'm working on an AI project that involves object detection and action recognition on roughly 30 minute videos. My pipeline is the following: determine crops using object detection model extract crops using Python and write them to disk as individual images for each frame. use action recognition model by inputting a series of the crops. The models are fast but actual writing of the crops to disk is slow. Sure, using an SSD would speed it up but I'm sure ffmpeg would greatly speed it up. Some of the challenges with the crops: the output size is always 128x128 the input size is variable the crop moves on every frame My process for extracting crops is simple using cv2.imwrite(output_crop_path, crop) in a for loop. I've done experiments trying to use sndcommand and filter_complex. I tried this https://stackoverflow.com/a/67508233/4447761 but it outputs an image with black below the crop and the image gets wrapped around on the x axis. | Don't store the pictures. Store just the sequence of bounding boxes. Then, for whatever you wanted to do with that mountain of individual images, instead decode the video and read the sequence of boxes, and take your crops out of it like that, on the fly. I'd recommend using PyAV for video reading. It gives you the presentation timestamps reliably. You shouldn't even need them, but having them along in the file can be very helpful for debugging. Maybe you want to use pandas to write and read such a file. CSV is a popular format. Another advantage of keeping the boxes like that: you can work on this sequence, maybe smooth it, or mix in the results of an optical tracker (sticks like glue but also creeps over time). If you really do need to write a video, then write a video. Individual frames as files are a serious burden on most file systems, let alone file managers. Again I would recommend PyAV to write a video. It gives you all kinds of control over how to do it, and the basic code isn't all that much. PyAV comes with examples. | 2 | 2 |
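A sketch of the decode-and-crop-on-the-fly idea with PyAV is shown below; `video.mp4` and a `boxes.csv` with `frame_index,x,y,w,h` columns are assumed inputs, and the 128x128 output size comes from the question.

```python
# Sketch of the "store boxes, not crops" idea.
import av
import cv2
import pandas as pd

boxes = pd.read_csv("boxes.csv").set_index("frame_index")

with av.open("video.mp4") as container:
    stream = container.streams.video[0]
    for i, frame in enumerate(container.decode(stream)):
        if i not in boxes.index:
            continue
        x, y, w, h = boxes.loc[i, ["x", "y", "w", "h"]].astype(int)
        img = frame.to_ndarray(format="bgr24")        # HxWx3, OpenCV-style
        crop = cv2.resize(img[y:y + h, x:x + w], (128, 128))
        # feed `crop` straight into the action-recognition batch here
```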
78,264,069 | 2024-4-2 | https://stackoverflow.com/questions/78264069/mypy-cannot-infer-type-argument-difference-between-list-and-iterable | T = TypeVar("T", bound=Union[str, int]) def connect_lists(list_1: list[T], list_2: list[T]) -> list[T]: out: list[T] = [] out.extend(list_1) out.extend(list_2) return out connect_lists([1, 2], ["a", "b"]) mypy: error: Cannot infer type argument 1 of "connect_lists" [misc] T = TypeVar("T", bound=Union[str, int]) def connect_lists(list_1: Iterable[T], list_2: Iterable[T]) -> list[T]: out: list[T] = [] out.extend(list_1) out.extend(list_2) return out connect_lists([1, 2], ["a", "b"]) Now mypy doesn't raise an error. What is the difference between List and Iterable in this case? | Iterable is covariant - an Iterable[int] is also an Iterable[int|str]. list is not covariant - a list[int] is not a list[int|str], because you can add strings to a list[int|str], which you can't do with a list[int]. mypy infers the types of [1, 2] and ["a", "b"] as list[int] and list[str] respectively. With the first definition of connect_objects, there is no choice of T that will make the call valid. But with the second definition, a list[int] is an Iterable[int], which is an Iterable[int|str], and a list[str] is similarly also an Iterable[int|str], so T is inferred as int|str. I don't think there's actually a spec yet for how type inference works. There may never be such a spec. Future mypy versions might perform this inference differently, for example, performing context-sensitive inference to infer a type of list[int|str] for both input lists, making the first version of the code pass type checking. | 8 | 4 |
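A small example of the variance difference described above; mypy should accept the `Iterable` call and reject the `list` call:

```python
# Illustration: list is invariant, Iterable is covariant.
from typing import Iterable, Union

def takes_list(x: list[Union[int, str]]) -> None: ...
def takes_iterable(x: Iterable[Union[int, str]]) -> None: ...

ints: list[int] = [1, 2, 3]

takes_iterable(ints)  # OK: list[int] is an Iterable[int], which is an Iterable[int | str]
takes_list(ints)      # mypy error: list[int] is not a list[int | str]
```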
78,263,331 | 2024-4-2 | https://stackoverflow.com/questions/78263331/python-module-with-name-main-py-doesnt-run-while-imported | Somebody asked me a question and I honestly didn't try it before, So it was interesting to know what exactly happens when we name a module __main__.py. So I named a module __main__.py, imported it in another file with name test.py. Surprisingly when I tried to run test.py it prints nothing and none of the functions of __main__.py are available in test.py. Here is the contents of these files : Here is the contents of __main__.py : def add(a,b): result = a+b return result print(__name__) if __name__=='__main__': print(add(1,2)) Here is the contents of test.py : import __main__ Why isn't the print statement from __main__.py reached? Although when I rename the __main__.py with some other name such as func.py, the program runs correctly and the line that prints module's name works fine. | When you run python test.py, the test.py module itself is already present in sys.modules with the key "__main__" (see top-level code environment in the docs). The import __main__ will just return this existing cache hit, so the presence of a __main__.py file on sys.path is irrelevant. These modifications to test.py will prove that point: import sys print(sys.modules['__main__']) # it's already in there before the import import __main__ print(__main__.__file__) # this should be the abspath of test.py | 2 | 3 |
78,262,945 | 2024-4-2 | https://stackoverflow.com/questions/78262945/how-to-merge-two-lists-and-get-names-of-lists-with-the-highest-value-for-each-in | I am trying to compare two lists of odds from two bookmakers. They look like this: List1 = ['2.66', '3.79', '1.88', '1.61', '2.51', '1.29', '2.29', '2.56', '3.16', '2.05', '2.95', '2.64', '2.26', '3.17', '2.64', '2.25'] List2 = ['2.70', '4.40', '1.87', '1.56', '2.50', '1.26', '2.33', '2.60', '3.20', '2.04', '3.00', '2.65', '2.25', '3.20', '2.65', '2.22'] I need to merge them and get the highest odds. I already did this with numpy: numpy.array([List1, List2]).astype(float).max(axis = 0) FinalList = [2.7 4.4 1.88 1.61 2.51 1.29 2.33 2.6 3.2 2.05 3.2 2.65 2.26 3.2 2.65 2.25] The problem is that I can't know to which list each index belongs to. In this example what I need to get is: NamesLists = [List2, List2, List1, List1, List1, List1, List2, List2, List2, List1, List2, List2, List1, List2, List2, List1] But I really have no idea how to do this. | You can combine argmax and take_along_axis: import numpy List1 = ['2.66', '3.79', '1.88', '1.61', '2.51', '1.29', '2.29', '2.56', '3.16', '2.05', '2.95', '2.64', '2.26', '3.17', '2.64', '2.25'] List2 = ['2.70', '4.40', '1.87', '1.56', '2.50', '1.26', '2.33', '2.60', '3.20', '2.04', '3.00', '2.65', '2.25', '3.20', '2.65', '2.22'] tmp = numpy.array([List1, List2]).astype(float) idx = tmp.argmax(axis=0) FinalList = numpy.take_along_axis(tmp, idx[None], axis=0)[0] # or: FinalList = tmp[idx[None], numpy.arange(tmp.shape[1])][0] # array([2.7 , 4.4 , 1.88, 1.61, 2.51, 1.29, 2.33, 2.6 , 3.2 , 2.05, 3. , # 2.65, 2.26, 3.2 , 2.65, 2.25]) NamesLists = numpy.array(['List1', 'List2'])[idx] # array(['List2', 'List2', 'List1', 'List1', 'List1', 'List1', 'List2', # 'List2', 'List2', 'List1', 'List2', 'List2', 'List1', 'List2', # 'List2', 'List1'], dtype='<U5') Note that idx is of the form: array([1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0]) which might be easier to use than ['List2', 'List2', 'List1', ...] | 2 | 4 |
78,262,629 | 2024-4-2 | https://stackoverflow.com/questions/78262629/why-is-libopenblas-from-numpy-so-big | We are deploying an open source application based on numpy that includes libopenblas.{cryptic string}.gfortran-win32.dll. It is part of the Python numpy package. This dll is over 27MB in size. I'm curious why it is so big and where I can find the source for it to see for myself. Ultimately I'd like to see if it can be more limited in size for my application. Thanks | OpenBLAS includes many optimized kernels for different CPU architectures and instruction sets, which is why the DLL is so large. Alternatives such as BLIS and Intel's Math Kernel Library (MKL) exist, and you can try them if they fit your needs. You may also be able to build the library from source and leave out the parts you don't need. | 2 | 2 |
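To see which BLAS/LAPACK build a given NumPy install actually bundles, `numpy.show_config()` prints the detected configuration:

```python
import numpy as np

# Prints which BLAS/LAPACK libraries NumPy was built against,
# e.g. the bundled OpenBLAS and its build options.
np.show_config()
```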
78,262,552 | 2024-4-2 | https://stackoverflow.com/questions/78262552/how-to-get-genericalias-super-types-in-python | Say I have a class defined as follows: class MyList(list[int]): ... I'm looking for a method that will return list[int] when I give it MyList, e.g.: >>> inspect.getsupers(MyList) [list[int]] The trouble is that no such method exists, as far as I can find. There is inspect.getmro(...), but this only returns list, not list[int]. Is there anything that I can reuse or implement that will give me this, short of some magic involving inspect.getsource(...)? The context is that I'm trying to write a generic issubtype(t1, t2) method that works for arbitrary classes and GenericAliases. This is one issue I've run into in writing this method that I'm not sure how to solve. | Aha! This information is stored in .__orig_bases__. So: >>> MyList.__orig_bases__ (list[int],) And on 3.12+, the canonical way would be to use: >>> import types >>> types.get_original_bases(MyList) (list[int],) | 2 | 4 |
78,262,291 | 2024-4-2 | https://stackoverflow.com/questions/78262291/importerror-cannot-import-name-float-from-numpy-a-problem-with-abydos | I am using a library that requires abydos/distance/_aline.py to be imported. But as can be seen from the source code, it uses the old np_float: 25 from numpy import float as np_float which throws an error ImportError: cannot import name 'float' from 'numpy' (/usr/local/lib/python3.10/dist-packages/numpy/__init__.py) I have seen this error before with old versions of packages, so upgrading them usually solves it. This is from the latest installation of the library (abydos==0.5.0). I don't really want to downgrade my numpy installation. Is there a way to get around this? | Found a github issue from abydos right after posting this. Installing v0.6.0b from source solves the issue: !pip install git+https://github.com/chrislit/abydos.git Hope this helps anyone trying to use the library. The 0.5.0 version requirement seems to be everywhere. | 2 | 2 |
78,236,708 | 2024-3-28 | https://stackoverflow.com/questions/78236708/custom-json-encoder-not-being-called | I tried to apply martineau solution in this post to a slightly different case, but it seems that for some obscure (at least to me) reason, the custom encoder isn't called from the json.dump() method. from collections.abc import MutableMapping import json import numpy as np class JSONSerializer(json.JSONEncoder): def encode(self, obj): # Convert dictionary keys that are tuples into strings. if isinstance(obj, MutableMapping): for key in list(obj.keys()): if isinstance(key, tuple): strkey = "%d:%d" % key obj[strkey] = obj.pop(key) return super().encode(obj) class Agent(object): def __init__(self, states, alpha=0.15, random_factor=0.2): self.state_history = [((0, 0), 0)] # state, reward self.alpha = alpha self.random_factor = random_factor # start the rewards table self.G = {} self.init_reward(states) def init_reward(self, states): for i, row in enumerate(states): for j, col in enumerate(row): self.G[(j,i)] = np.random.uniform(high=1.0, low=0.1) def memorize(self): with open("memory.json", "w") as w: json.dump(self.G, w, cls=JSONSerializer) if __name__ == "__main__": robot = Agent(states=np.zeros((6, 6)), alpha=0.1, random_factor=0.25) print(robot.G) robot.memorize() When testing the string encoding it seems to return what I'm expecting, but when the call to json.dump happens, the custom encoder doesn't seem to be called at all. Do you have any idea why? Thanks | This should probably be considered an implementation detail of the json package, and also it might be a Python version thing. In any case, it becomes crucial with your implementation: The dump() function internally calls the iterencode() method on your encoder (see lines 169 and 176 in the actual source code. Yet, the dumps() function internally calls encode() (see lines 231 and 238). You can verify this by adjusting encode() and overriding iterencode() in your JSONSerializer like so: class JSONSerializer(json.JSONEncoder): def encode(self, obj): print("encode called") ... # Your previous code here return super().encode(obj) def iterencode(self, *args, **kwargs): print("iterencode called") return super().iterencode(*args, **kwargs) β¦ and you will see that only "iterencode called" will be printed with your test code, but not "encode called". The other Stack Overflow question that you linked in your question seems to have the same issue by the way, at least when using a rather recent version of Python (I am currently on 3.11 for writing this) β see my comment to the corresponding answer. I have two solutions: Either use dumps() in your Agent.memorize() method, e.g. like so: def memorize(self): with open("memory.json", "w") as w: w.write(json.dumps(self.G, cls=JSONSerializer)) Or move your own implementation from encode() to iterencode(), e.g. like so: class JSONSerializer(json.JSONEncoder): def iterencode(self, obj, *args, **kwargs): if isinstance(obj, MutableMapping): for key in list(obj.keys()): if isinstance(key, tuple): strkey = "%d:%d" % key obj[strkey] = obj.pop(key) yield from super().iterencode(obj, *args, **kwargs) This 2nd solution seems to have the benefit that it works with both dump() and dumps() (see note below). A note for completion: The dumps() function later seems to result in a call of iterencode(), as well (I did not track the source code so far as to see where exactly that happens, but from the printouts that I added it definitely happens). 
This has the following effects: (1) In the 1st proposed solution, as encode() is called first and we can make all adjustments for making our data JSON-serializable there, at this later point, calling iterencode() will not result in an error, any more. (2) In the 2nd proposed solution, as we reimplemented iterencode(), our data will be made JSON-serializable at this point. Update: There are actually two more solutions. First, thanks to the comment of @AbdulAzizBarkat: we can override the default() method. However, we have to make sure that we hand over an object type for serialization that is not handled by the regular encoder, or otherwise we will never reach the default() method with it. In the given code, we can for example pass the Agent instance itself to dump() or dumps(), rather than its dictionary field G. So we have to make two adjustments: Adjust memorize() to pass self, rather than self.G, e.g. like so: def memorize(self): with open("memory.json", "w") as w: json.dump(self, w, cls=JSONSerializer) Adjust JSONSerializer.default() to handle the Agent instance, e.g. like so (we won't need encode() and iterencode(), any more): class JSONSerializer(json.JSONEncoder): def default(self, obj): if isinstance(obj, Agent): new_obj = {} for key in obj.G.keys(): new_key = ("%d:%d" % key) if isinstance(key, tuple) else key new_obj[new_key] = obj.G[key] return new_obj # Return the adjusted dictionary return super().default(obj) Second, arguably the most simple solution is not using a custom JSONEncoder at all, but providing json.dump() with a JSON-serializable object directly. We can do this, for example, by moving the preprocessing of keys to Agent.memorize(): def memorize(self): obj = {} for key in self.G.keys(): new_key = ("%d:%d" % key) if isinstance(key, tuple) else key obj[new_key] = self.G[key] with open("memory.json", "w") as w: json.dump(obj, w) Final side note: In the first two solutions, the keys of the original dictionary are altered, following the code in your question. You probably don't want to have this though, as actually your instance should not change just because you dump a copy of it. So you might rather want to create a new dictionary with the altered keys and original values. I took care of this in the 3rd and 4th solution. | 2 | 2 |
78,260,128 | 2024-4-2 | https://stackoverflow.com/questions/78260128/django-cte-gives-queryset-object-has-no-attribute-with-cte | I have records in below format: | id | name | created | ----------------------------------------------- |1 | A |2024-04-10T02:49:47.327583-07:00| |2 | A |2024-04-01T02:49:47.327583-07:00| |3 | A |2024-03-01T02:49:47.327583-07:00| |4 | A |2024-02-01T02:49:47.327583-07:00| |5 | B |2024-02-01T02:49:47.327583-07:00| Model: class Model1(model.Models): name = models.CharField(max_length=100) created = models.DateTimeField(auto_now_add=True) I want to perform a group by in django with month from field created and get latest record from that month. Expected output: | id | name | created | ----------------------------------------------- |1 | A |2024-04-10T02:49:47.327583-07:00| |3 | A |2024-03-01T02:49:47.327583-07:00| |4 | A |2024-02-01T02:49:47.327583-07:00| I am using django-cte to perform the above action from django.db.models.functions import DenseRank, ExtractMonth from django_cte import With m = Model1.objects.get(id=1) cte = With( Model1.objects.filter(name=m.name) rank=Window( expression=DenseRank(), partition_by=[ExtractMonth("created")], order_by=F("created").desc(), ) ) qs = cte.queryset().with_cte(cte).filter(rank=1) But the above give error: qs = cte.queryset().with_cte(cte).filter(rank=1) ^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'QuerySet' object has no attribute 'with_cte' Please help! | You need to mix in the CTEManager, otherwise you get a "vanilla" QuerySet: from django_cte import CTEManager class Model1(model.Models): name = models.CharField(max_length=100) created = models.DateTimeField(auto_now_add=True) objects = CTEManager() | 3 | 3 |
78,258,488 | 2024-4-2 | https://stackoverflow.com/questions/78258488/cant-use-gekko-equation | i want to define an equation in gekko, but than comes the error: Traceback (most recent call last): File "/Users/stefantomaschko/Desktop/Bundeswettbewerb Informatik/2.Runde/PΓ€ckchen/paeckchen_gurobi.py", line 131, in <module> m.solve() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gekko/gekko.py", line 2166, in solve raise Exception(apm_error) Exception: @error: Equation Definition Equation without an equality (=) or inequality (>,<) true STOPPING... my Code (not all functions, but the most important): def fitness(var): anzahl_sorte_stil = {} boxes = convert(var) print(boxes) stile = set() for var_box in var: #anzahl_box = [] for v in var_box: try: value = int(str(v.value)) except: value = int(float(str(v.value[0])))# if v.value != None else 0 info = info_name(v) stile.add(info[1]) if info not in anzahl_sorte_stil: anzahl_sorte_stil[info] = value else: anzahl_sorte_stil[info] += value #anzahl_box.append(value) # if min(anzahl_box) == 0: gruppen = finde_gruppen(stile) for g in gruppen: if g not in kombinierbar: return unmoeglich #print(anzahl_sorte_stil) uebrig = 0 for kleid in kleider: dif = kleid[2] - anzahl_sorte_stil[(kleid[0],kleid[1])] if dif<0: print("ZU OFT VERWENDET!") return unmoeglich else: uebrig += dif return uebrig sorten,stile,kombinierbar,kleider,gesamt = read_data("paeckchen0.txt") unmoeglich = gesamt+1 min_boxen,max_boxen = get_min_max_boxen(sorten,kleider) print("Min. Anzahl Boxen: ", min_boxen) print("Max. Anzahl Boxen: ", max_boxen) m = GEKKO(remote=False) m.options.max_time = 1000 m.options.max_iter = 1000 m.options.max_memory = 1000 var = [[] for _ in range(max_boxen)] for i,var_box in enumerate(var): for kleid in kleider: #print(kleid[:2]) var_box.append(m.Var(0,lb=1,ub=min((kleid[2],3)),integer=True,name=f"{kleid[0]}_{kleid[1]}_{i}"))#wie oft ist Kleid {kleid[:2]} in Box {i} #m.Equation(fitness(var) < gesamt) m.Minimize(fitness(var)) m.Equation(fitness(var) <= unmoeglich) m.options.SOLVER=1 m.solve() in the docmentation i didnt found anything related to that. can anybody help me to change it. I would also be happy about some looks at my code and why it did not find the right solution. now i want to implement the equation to not even allow incorrect solutions. | Gekko optimization models can be defined by function calls but then the model is compiled into byte-code. The function fitness() is called only twice with the definition of the objective function and the equation: m.Minimize(fitness(var)) m.Equation(fitness(var) <= unmoeglich) You can inspect the model in the folder that is opened with m.open_folder() in the file gk0_model.apm as a text file. The fitness() function overwrites the Gekko variable and returns a scalar value. That is why a True is returned as the equation because Gekko operator overloading is not used to evaluate the inequality constraint. It becomes a simple Python expression that evaluates to True. To remedy this problem, use Gekko expressions that provide continuous first and second derivative information to the gradient-based solvers such as m.if3(condition,val1,val2) or m.max3(0,expression). From the constraints and variable names, I'm guessing that the application is a packing optimization problem with categories and combinations to evaluate a packing configuration and calculate a score based on the number of items, the types, and how they are distributed among the boxes. 
The objective appears to be a cost or inefficiency metric. Feasibility constraints in the solver should work to find a solution without needing to help with if statements. Here is an example packing optimization in Gekko: from gekko import GEKKO m = GEKKO(remote=False) # 5 types of items, and a maximum of 10 boxes to use num_items = 5 max_boxes = 10 # size of each item type (units of volume or weight) item_sizes = [3, 4, 2, 5, 3] # number of each item type available item_counts = [10, 6, 8, 5, 7] # max capacity of each box box_capacity = [10,10,10,15,15,15,15,20,20,20] # number of each item type in each box item_in_box = [[m.Var(lb=0, ub=item_counts[i], integer=True) for i in range(num_items)] for _ in range(max_boxes)] # Objective: minimize the number of boxes used # could also minimize unused volume or another metric boxes_used = m.Array(m.Var,max_boxes,lb=0,ub=1,integer=True) m.Minimize(sum(boxes_used)) # total size of items in each box does not exceed box capacity for box in range(max_boxes): m.Equation(m.sum([item_in_box[box][i] * item_sizes[i] for i in range(num_items)]) <= box_capacity[box] * boxes_used[box]) # all items are packed for i in range(num_items): m.Equation(m.sum([item_in_box[box][i] for box in range(max_boxes)])==item_counts[i]) # Solve the problem with APOPT solver m.options.SOLVER = 1 m.solve(disp=True) # Output the solution print("Optimal Packing Solution:") for box in range(max_boxes): if boxes_used[box].value[0] > 0.5: # box is used print(f"Box {box + 1}:") for i in range(num_items): iib = int(item_in_box[box][i].value[0]) if iib > 0: print(f" - Item Type {i+1}: {iib}") The optimizer chooses the largest boxes because the objective is to minimize the number of boxes used. Here is the solution: --------------------------------------------------- Solver : APOPT (v1.0) Solution time : 2.23269999999320 sec Objective : 7.00000000000000 Successful solution --------------------------------------------------- Optimal Packing Solution: Box 4: - Item Type 2: 1 - Item Type 3: 1 - Item Type 4: 1 - Item Type 5: 1 Box 5: - Item Type 1: 1 - Item Type 3: 2 - Item Type 4: 1 - Item Type 5: 1 Box 6: - Item Type 1: 1 - Item Type 2: 1 - Item Type 3: 1 - Item Type 5: 1 Box 7: - Item Type 1: 2 - Item Type 2: 1 - Item Type 3: 1 - Item Type 5: 1 Box 8: - Item Type 1: 2 - Item Type 2: 1 - Item Type 3: 1 - Item Type 4: 1 - Item Type 5: 1 Box 9: - Item Type 1: 2 - Item Type 2: 1 - Item Type 3: 1 - Item Type 4: 1 - Item Type 5: 1 Box 10: - Item Type 1: 2 - Item Type 2: 1 - Item Type 3: 1 - Item Type 4: 1 - Item Type 5: 1 The objective could be adjusted to minimize unused volume or another metric. There is a related circle packing optimization in the Design Optimization course. | 2 | 0 |
78,257,779 | 2024-4-1 | https://stackoverflow.com/questions/78257779/can-pandas-groupby-split-into-just-2-bins | Imagine I have this table: Col-1 | Col-2 A | 2 A | 3 B | 1 B | 4 C | 7 Groupby on Col-1 with a sum aggregation on Col-2 will sum A to 5, B to 5, and C to 7. What I want to know is if there is a baked in feature that allows aggregation on a target value in a column and then groups all other entries into another bin. For example, if I wanted to groupby on Col-1 targeting A and grouping all other entries into a label named other, I would end up with A as 5 and Other as 12. Does that make sense? I know I could do some filtering sorcery and merging datasets back together, but figured there had to be a cleaner, more Pythonic way I am missing. I have tried going through the documentation, but nothing jumped out at me. | One solution is to make pd.Categorical from the Column 1 -> with two categories A for string A and Other for other strings. Then group by this categorical: tmp = pd.Categorical(df["Col1"], categories=["A", "Other"]).fillna("Other") out = df.groupby(tmp, observed=False)["Col2"].sum() print(out) Prints: A 5 Other 12 Name: Col2, dtype: int64 Another solution, group by boolean mask: out = ( df.groupby(df["Col1"].eq("A"))["Col2"] .sum() .rename(index={True: "A", False: "Other"}) ) print(out) Prints: Col1 Other 12 A 5 Name: Col2, dtype: int64 | 2 | 5 |
78,253,701 | 2024-4-1 | https://stackoverflow.com/questions/78253701/python-illegal-instruction-core-dumped-when-importing-certain-libraries-bea | I am using Python3.10 on Ubuntu 22.04.4, and I am trying to run code that I originally wrote on a Windows 11 machine. Whenever I run this script--main.py--it always stops at the import stages, and fails to import beautifulsoup4 and yfinance in particular: print("Starting imports.") import pandas print("Imported pandas...") from bs4 import BeautifulSoup print("Imported beautifulsoup4...") import yfinance as yf print("Imported yfinance...") # Rest of code... When I run this script in the terminal, it only ever prints Starting imports. Imported pandas.... Following that print statement, it spits out Illegal instruction (core dumped): (myenv) minime@Shaguar:~/Coding/WebScraping$ python main.py Starting imports. Imported pandas... Illegal instruction (core dumped) I have done some research into this "Illegal instruction" error and have come across failed imports of TensorFlow, numpy, and others because of incompatible cpu architecture, yet I do not know how it would apply to beautifulsoup4 in this situation. How could I successfully import beautifulsoup4 and yfinance without the Illegal instruction (core dumped) error? (Architecture output of lscpu): Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 36 bits physical, 48 bits virtual Byte Order: Little Endian | I have solved it! After further inspection of /var/log/syslog--the Ubuntu error/warning logfile--I found that there was a problem with some etree thing: kernel: [ 3887.864653] traps: python3[5907] trap invalid opcode ip:74ab4c0807c0 sp:7fffee156560 error:0 in etree.cpython-310-x86_64-linux-gnu.so[74ab4c04e000+329000] After some research on etree and what it is used for, I found that a library shared by both yfinance and beautifulsoup4 that runs on etree was the cause of the core dumpage: lxml. I looked at the lxml PyPI page and saw that the latest version (lxml 5.20) did not support CPU architectures that pre-dated 2011; my Ubuntu machine is from 2009. So, I uninstalled and downgraded my lxml version to v5.0 using pip and everything worked smoothly. Key Takeaway If you are having a similar issue, it is probably due to incompatible CPU architecture with regards to a library or a library's underlying dependencies. Check your system logfiles after a core dump. Do some research, especially on CPU compatability, on any specific keywords in that logfile--in my case it was etree. Lastly, try to downgrade the suspected problem library--in my case it was lxml--to a version that supports your architecture. Everything above is what worked for me. | 2 | 5 |
78,256,965 | 2024-4-1 | https://stackoverflow.com/questions/78256965/how-to-change-columns-valueslist-using-another-data-frame-in-python | I have two data frame, I need to change column values of first data frame that are in list, using second data frame. df1 = pd.DataFrame({'title':['The Godfather','Fight Club','The Empire'], 'genre_ids':[[18, 80],[18],[12, 28, 878]]}) title genre_ids 0 The Godfather [18, 80] 1 Fight Club [18] 2 The Empire [12, 28, 878] df2 = pd.DataFrame({'id':[18,80,12,28,878,99],'name':['Action','Adventure','Adventure','Animation','Comedy','Documentary']}) id name 0 18 Action 1 80 Horror 2 12 Adventure 3 28 Animation 4 878 Comedy 5 99 Documentary How can I assign genere_ids like this using df2 in python title genre_ids 0 The Godfather [Action, Horror] 1 Fight Club [Action] 2 The Empire [Adventure, Animation, Comedy] | You can also first map the genre IDs in df1 to their corresponding names using df2, and then replacing the genre IDs with the mapped names, like the following: import pandas as pd df1 = pd.DataFrame({'title':['The Godfather','Fight Club','The Empire'], 'genre_ids':[[18, 80],[18],[12, 28, 878]]}) df2 = pd.DataFrame({'id':[18,80,12,28,878,99],'name':['Action','Adventure','Adventure','Animation','Comedy','Documentary']}) genre_map = dict(zip(df2['id'], df2['name'])) df1['genre_ids'] = df1['genre_ids'].apply(lambda x: [genre_map[id] for id in x]) print(df1) Output: title genre_ids 0 The Godfather [Action, Adventure] 1 Fight Club [Action] 2 The Empire [Adventure, Animation, Comedy] | 3 | 0 |
78,257,104 | 2024-4-1 | https://stackoverflow.com/questions/78257104/isinstance-fails-on-an-object-contained-in-a-list-after-using-dill-dump-and-d | Is this expected behaviour (and if so, can someone explain why)? This only happens when using dill, not pickle. from pathlib import Path import dill class MyClass: def __init__(self) -> None: pass path = Path('test/test.pkl') # create parent directory if it does not exist path.parent.mkdir(exist_ok=True) x = [ MyClass() ] dill.dump(x, path.open('wb')) y = dill.load(path.open('rb')) print(isinstance(x[0], MyClass)) # True print(isinstance(y[0], MyClass)) # False ??? I was expecting True. | The reason or this is that dill is pickling and re-creating the MyClass class object when deserializing your object. Hence MyClass (also x[0].__class__) is a different object compared to the deserialized y[0].__class__ object, which causes the isinstance check to fail against MyClass. print(id(MyClass)) # 140430969773264 print(id(x[0].__class__)) # same as above # 140430969773264 print(id(y[0].__class__)) # different # 140430969780544 By contrast, the stdlib pickle module will use a reference to the class instead, which results in the behavior you expect because it will import the class by reference rather than creating a new class when deserializing your object. To make dill use references, set the byref setting to True with byref=True, dill to behave a lot more like pickle with certain objects (like modules) pickled by reference as opposed to attempting to pickle the object itself. dill.settings['byref'] = True x = [ MyClass() ] dill.dump(x, path.open('wb')) y = dill.load(path.open('rb')) print(isinstance(x[0], MyClass)) # True print(isinstance(y[0], MyClass)) # True Alternatively, you can just use pickle from the stdlib instead of dill: pickle.dump(x, path.open('wb')) y = pickle.load(path.open('rb')) print(isinstance(x[0], MyClass)) # True print(isinstance(y[0], MyClass)) # True | 3 | 3 |
78,256,559 | 2024-4-1 | https://stackoverflow.com/questions/78256559/attributeerror-module-flax-traverse-util-has-no-attribute-unfreeze | I'm trying to run a model written in jax, https://github.com/lindermanlab/S5. However, I ran into some error that says Traceback (most recent call last): File "/Path/run_train.py", line 101, in <module> train(parser.parse_args()) File "/Path/train.py", line 144, in train state = create_train_state(model_cls, File "/Path/train_helpers.py", line 135, in create_train_state params = variables["params"].unfreeze() AttributeError: 'dict' object has no attribute 'unfreeze' I tried to replicate this error by import jax import jax.numpy as jnp import flax from flax import linen as nn model = nn.Dense(features=3) params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 2))) params_unfrozen = flax.traverse_util.unfreeze(params) And the error reads: Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'flax.traverse_util' has no attribute 'unfreeze' I'm using: flax 0.7.4 jax 0.4.13 jaxlib 0.4.13+cuda12.cudnn89 I think this is an issue relating to the version of flax, but does anyone know what exactly is going on? Any help is appreciated. Let me know if you need any further information | unfreeze is a method of Flax's FrozenDict class: (See FrozenDict.unfreeze). It appears that you have passed a Python dict where a FrozenDict is expected. To fix this, you should ensure that variables['params'] is a FrozenDict, not a dict. Regarding the error in your attempted replication: flax.traverse_util does not define an unfreeze function, but this seems unrelated to the original problem. | 2 | 1 |
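A short sketch of the fix suggested above, assuming `flax.core.freeze` is used to turn the plain dict returned by `model.init` into a `FrozenDict` before calling `.unfreeze()`:

```python
# Sketch: make sure you have a FrozenDict before calling .unfreeze().
import jax
import jax.numpy as jnp
from flax import linen as nn
from flax.core import freeze

model = nn.Dense(features=3)
variables = model.init(jax.random.PRNGKey(0), jnp.ones((1, 2)))

# Recent Flax returns a plain dict here, so .unfreeze() is not defined on it.
frozen = freeze(variables)            # FrozenDict
params = frozen["params"].unfreeze()  # works: FrozenDict has .unfreeze()

# If variables is already a plain dict, the params need no unfreezing at all:
params = variables["params"]
```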
78,254,889 | 2024-4-1 | https://stackoverflow.com/questions/78254889/how-to-use-asyncio-serve-forever-without-freezing-gui | I am trying to create GUI with pyqt5 for tcp communication. I have 2 different .py file one for GUI and one for TCP. tcp.py contains tcpClients and tcpServer classes which has a function called tcp_server_connect (can be found below) to create connection between client and server. GUI has a button called connect and i want to call the function tcp_server_connect when it is pressed. When I press it GUI freezes and not respond. There is no error. I think it is because of server.serve_forever() loop but I am not sure how to solve this problem. async def tcp_server_connect(): server = await asyncio.start_server( handle_client, '127.0.0.1', 8888) async with server: await server.serve_forever() | The issue is that Qt runs on the main thread, and trying to run an asyncio event loop on the same thread won't work. Both Qt and the event loop want to and need to control the entire thread. As a note, Qt did recently announce an asyncio module for Python, but that is still in technical preview. The below example, which uses PySide6 (as I have that already installed), has a Qt window with buttons to start a TCP server and then a string entry control and a button to send the message to the TCP server. There is an asyncio event loop running on a separate thread that listens to messages and then performs the required actions. Try it out. All you need to install is PySide6. # Core dependencies import asyncio import sys from threading import Thread # Package dependencies from PySide6.QtWidgets import ( QApplication, QWidget, QPushButton, QVBoxLayout, QLineEdit, ) class MainWindow(QWidget): def __init__(self) -> None: super().__init__() # The the `asyncio` queue and event loop are created here, in the GUI thread (main thread), # but they will be passed into a new thread that will actually run the event loop. # Under no circumstances should the `asyncio.Queue` be used outside of that event loop. It # is only okay to construct it outside of the event loop. self._async_queue = asyncio.Queue() self._asyncio_event_loop = asyncio.new_event_loop() self.initialize() def initialize(self) -> None: """Initialize the GUI widgets""" self.setWindowTitle("PySide with asyncio server and client") # Create layout main_layout = QVBoxLayout() self.setLayout(main_layout) button_start_server = QPushButton(text="Start server") line_edit_message = QLineEdit() button_send_message = QPushButton(text="Send message") main_layout.addWidget(button_start_server) main_layout.addWidget(line_edit_message) main_layout.addWidget(button_send_message) button_start_server.pressed.connect( lambda: self.send_message_to_event_loop("start_server") ) button_send_message.pressed.connect( lambda: self.send_message_to_event_loop(line_edit_message.text()) ) # Disable the user being able to resize the window by setting a fixed size self.setFixedWidth(500) self.setFixedHeight(200) # Show the window self.show() def send_message_to_event_loop(self, message: str) -> None: """Send the `asyncio` event loop's queue a message by using the coroutine `put` and sending it to run on the `asyncio` event loop, putting the message on the queue inside the event loop. This must be done because `asyncio.Queue` is not threadsafe. 
""" asyncio.run_coroutine_threadsafe( coro=self._async_queue.put(message), loop=self._asyncio_event_loop, ) async def handle_client( reader: asyncio.StreamReader, writer: asyncio.StreamWriter ) -> None: data = await reader.read(100) message = data.decode() addr = writer.get_extra_info("peername") print(f"Received {message!r} from {addr!r}") writer.close() await writer.wait_closed() async def run_server(): server = await asyncio.start_server(handle_client, "127.0.0.1", 8888) async with server: print("Started server") await server.serve_forever() async def send_message_to_server(message: str) -> None: # Lazily create a new connection every time just for demonstration # purposes _, writer = await asyncio.open_connection("127.0.0.1", 8888) writer.write(message.encode()) await writer.drain() writer.close() await writer.wait_closed() async def read_messages(queue: asyncio.Queue) -> None: server_task = None while True: message = await queue.get() match message: case "start_server": server_task = asyncio.create_task(run_server()) case msg: await send_message_to_server(msg) async def async_main(queue: asyncio.Queue): # Launch the tasks to sit around and listen to messages await asyncio.gather(read_messages(queue)) def start_asyncio_event_loop(loop: asyncio.AbstractEventLoop) -> None: """Starts the given `asyncio` loop on whatever the current thread is""" asyncio.set_event_loop(loop) loop.set_debug(enabled=True) loop.run_forever() def run_event_loop(queue: asyncio.Queue, loop: asyncio.AbstractEventLoop) -> None: """Runs the given `asyncio` loop on a separate thread, passing the queue to the event loop for any other thread to send messages to the event loop. The main coroutine that is launched on the event loop is `async_main`. """ thread = Thread(target=start_asyncio_event_loop, args=(loop,), daemon=True) thread.start() asyncio.run_coroutine_threadsafe(async_main(queue), loop=loop) def run_application(application: QApplication): application.exec() if __name__ == "__main__": application = QApplication(sys.argv) window = MainWindow() async_queue = window._async_queue asyncio_event_loop = window._asyncio_event_loop run_event_loop(queue=async_queue, loop=asyncio_event_loop) sys.exit(run_application(application)) | 2 | 1 |
78,251,324 | 2024-3-31 | https://stackoverflow.com/questions/78251324/odoo-16-make-fields-readonly-using-xpath | i am currently using odoo 16, where i have created a custom field and trying to make that field readonly. The readonly field should appear only if a non-administrator user is logged in This is my initial code:- <odoo> <data> <record id="product_template_only_form_view" model="ir.ui.view"> <field name="name">product.template.product.form</field> <field name="model">product.template</field> <field name="inherit_id" ref="product.product_template_only_form_view"/> <field name="arch" type="xml"> <xpath expr="//group[@name='group_general']" position="inside"> <field name="recommanded_price" string="Recommended Price" readonly="1" groups="base.group_system"/> </xpath> </field> </record> </data> </odoo> What i tried? <odoo> <data> <record id="product_template_only_form_view" model="ir.ui.view"> <field name="name">product.template.product.form</field> <field name="model">product.template</field> <field name="inherit_id" ref="product.product_template_only_form_view"/> <field name="arch" type="xml"> <xpath expr="//group[@name='group_general']" position="inside"> <field name="recommanded_price" string="Recommended Price" attrs="{'readonly': [('groups', 'not in', ['base.group_system'])]}" /> </xpath> </field> </record> </data> </odoo> | Try this: for base.group_user, the field will become read-only, and for base.group_system, it will become editable. <record id="product_template_only_form_view" model="ir.ui.view"> <field name="name">product.template.product.form</field> <field name="model">product.template</field> <field name="inherit_id" ref="product.product_template_only_form_view" /> <field name="groups_id" eval="[(6, 0, [ref('base.group_system') ])]" /> <field name="arch" type="xml"> <field name="name" position="attributes"> <attribute name="readonly">0</attribute> </field> </field> </record> <record id="product_template_only_form_view" model="ir.ui.view"> <field name="name">product.template.product.form</field> <field name="model">product.template</field> <field name="inherit_id" ref="product.product_template_only_form_view" /> <field name="groups_id" eval="[(6, 0, [ref('base.group_user') ])]" /> <field name="arch" type="xml"> <field name="name" position="attributes"> <attribute name="readonly">1</attribute> </field> </field> </record> | 2 | 1 |
78,253,534 | 2024-4-1 | https://stackoverflow.com/questions/78253534/why-does-np-exp1000-give-an-overflow-warning-but-np-exp-100000-not-give-an-u | On running: >>> import numpy as np >>> np.exp(1000) <stdin>:1: RuntimeWarning: overflow encountered in exp Shows an overflow warning. But then why does the following not give an underflow warning? >>> np.exp(-100000) 0.0 | By default, underflow errors are ignored. The current settings can be checked as follows: print(np.geterr()) {'divide': 'warn', 'over': 'warn', 'under': 'ignore', 'invalid': 'warn'} To issue a warning for underflows just like overflows, you can use np.seterr like this: np.seterr(under="warn") np.exp(-100000) # RuntimeWarning: underflow encountered in exp Alternatively, you can use np.errstate like this: import numpy as np with np.errstate(under="warn"): np.exp(-100000) # RuntimeWarning: underflow encountered in exp | 4 | 3 |
78,246,770 | 2024-3-30 | https://stackoverflow.com/questions/78246770/how-to-automating-code-formatting-in-vscode-for-jupyter-notebooks-with-black-for | I've been enjoying the convenience of the Black Formatter extension in Visual Studio Code, especially its "Format on Save" feature for Python files. Being able to automatically format my code upon saving with Ctrl+S has significantly streamlined my workflow. However, I've encountered a limitation when working with Jupyter notebooks (.ipynb files) in VSCode. While the Black Formatter seamlessly formats .py files on save, I've noticed that formatting code within Jupyter notebooks requires manually triggering the format command for each cell using Alt+Shift+F. This inconsistency in the user experience between .py files and .ipynb files disrupts the workflow and diminishes the efficiency gained from the "Format on Save" feature for Python scripts. I'm reaching out to see if anyone has found a solution or a workaround to extend the "Format on Save" functionality to Jupyter notebooks within VSCode. Ideally, I'm looking for a method to automatically format all code cells in a notebook when saving the notebook file, similar to how it works for .py files. Has anyone else experienced this issue or found a way to make code formatting as effortless in .ipynb files as it is in .py files within VSCode? Any advice, plugins, or settings recommendations that could help achieve this would be greatly appreciated. In my quest to streamline my development workflow, I've successfully configured the Black Formatter extension in VSCode to automatically format Python (.py) files on save, using the following settings in my settings.json: "[python]": { "editor.defaultFormatter": "ms-python.black-formatter", "editor.formatOnSave": true } This setup works perfectly for Python scripts, automatically formatting them each time I press Ctrl+S, aligning with my expectations for a seamless and efficient coding experience. Transitioning to Jupyter notebooks within VSCode, my expectation was to replicate this level of automation. Given the widespread use of notebooks for data science and machine learning projects, automating code formatting within these notebooks would greatly enhance productivity. I anticipated that either the same settings would apply or there would be a straightforward alternative for .ipynb files. What I tried was to apply the same logic and settings, hoping VSCode would interpret and extend the "Format on Save" feature to the cells of Jupyter notebooks. I explored the VSCode and Black Formatter extension settings but found no direct reference or solution for applying automatic formatting to .ipynb files upon saving, similar to .py files. What I'm seeking is either a confirmation that this functionality currently doesn't exist for .ipynb files in VSCode or guidance on a workaround. Perhaps a different configuration or an extension that bridges this functionality gap? Any shared experience, advice, or solution that would allow for automatic formatting of Jupyter notebook cells on save, enhancing the consistency and efficiency of the development process within VSCode, would be invaluable. | After setting up Black formatter, you could search for Notebook: formatOnSave and Notebook: formatOnCellExecution in settings(Ctrl+,). Check these options, You can make formatter work when you save the file in jupyter notebook. | 4 | 5 |
78,248,902 | 2024-3-30 | https://stackoverflow.com/questions/78248902/typing-a-function-decorator-with-conditional-output-type-in-python | I have a set of functions which all accept a value named parameter, plus arbitrary other named parameters. I have a decorator: lazy. Normally the decorated functions return as normal, but return a partial function if value is None. How do I type-hint the decorator, whose output depends on the value input? from functools import partial def lazy(func): def wrapper(value=None, **kwargs): if value is not None: return func(value=value, **kwargs) else: return partial(func, **kwargs) return wrapper @lazy def test_multiply(*, value: float, multiplier: float) -> float: return value * multiplier @lazy def test_format(*, value: float, fmt: str) -> str: return fmt % value print('test_multiply 5*2:', test_multiply(value=5, multiplier=2)) print('test_format 7.777 as .2f:', test_format(value=7.777, fmt='%.2f')) func_mult_11 = test_multiply(multiplier=11) # returns a partial function print('Type of func_mult_11:', type(func_mult_11)) print('func_mult_11 5*11:', func_mult_11(value=5)) I'm using mypy and I've managed to get most of the way using mypy extensions, but haven't got the value typing working in wrapper: from typing import Callable, TypeVar, ParamSpec, Any, Optional from mypy_extensions import DefaultNamedArg, KwArg R = TypeVar("R") P = ParamSpec("P") def lazy(func: Callable[P, R]) -> Callable[[DefaultNamedArg(float, 'value'), KwArg(Any)], Any]: def wrapper(value = None, **kwargs: P.kwargs) -> R | partial[R]: if value is not None: return func(value=value, **kwargs) else: return partial(func, **kwargs) return wrapper How can I type value? And better still, can I do this without mypy extensions? | I see two possible options here. First is "more formally correct", but way too permissive, approach relying on partial hint: from __future__ import annotations from functools import partial from typing import Callable, TypeVar, ParamSpec, Any, Optional, Protocol, overload, Concatenate R = TypeVar("R") P = ParamSpec("P") class YourCallable(Protocol[P, R]): @overload def __call__(self, value: float, *args: P.args, **kwargs: P.kwargs) -> R: ... @overload def __call__(self, value: None = None, *args: P.args, **kwargs: P.kwargs) -> partial[R]: ... def lazy(func: Callable[Concatenate[float, P], R]) -> YourCallable[P, R]: def wrapper(value: float | None = None, *args: P.args, **kwargs: P.kwargs) -> R | partial[R]: if value is not None: return func(value, *args, **kwargs) else: if args: raise ValueError("Lazy call must provide keyword arguments only") return partial(func, **kwargs) return wrapper # type: ignore[return-value] @lazy def test_multiply(value: float, *, multiplier: float) -> float: return value * multiplier @lazy def test_format(value: float, *, fmt: str) -> str: return fmt % value print('test_multiply 5*2:', test_multiply(value=5, multiplier=2)) print('test_format 7.777 as .2f:', test_format(value=7.777, fmt='%.2f')) func_mult_11 = test_multiply(multiplier=11) # returns a partial function print('Type of func_mult_11:', type(func_mult_11)) print('func_mult_11 5*11:', func_mult_11(value=5)) func_mult_11(value=5, multiplier=5) # OK func_mult_11(value='a') # False negative: we want this to fail Last two calls show hat is good and bad about this approach. partial accepts any input arguments, so is not sufficiently safe. If you want to override the arguments provided to lazy callable initially, this is probably the best solution. 
Note that I slightly changed signatures of the input callables: without that you will not be able to use Concatenate. Note also that KwArg, DefaultNamedArg and company are all deprecated in favour of protocols. You cannot use paramspec with kwargs only, args must also be present. If you trust your type checker, it is fine to use kwarg-only callables, all unnamed calls will be rejected at the type checking phase. However, I have another alternative to share if you do not want to override default args passed to the initial callable, which is fully safe, but emits false positives if you try to. from __future__ import annotations from functools import partial from typing import Callable, TypeVar, ParamSpec, Any, Optional, Protocol, overload, Concatenate _R_co = TypeVar("_R_co", covariant=True) R = TypeVar("R") P = ParamSpec("P") class ValueOnlyCallable(Protocol[_R_co]): def __call__(self, value: float) -> _R_co: ... class YourCallableTooStrict(Protocol[P, _R_co]): @overload def __call__(self, value: float, *args: P.args, **kwargs: P.kwargs) -> _R_co: ... @overload def __call__(self, value: None = None, *args: P.args, **kwargs: P.kwargs) -> ValueOnlyCallable[_R_co]: ... def lazy_strict(func: Callable[Concatenate[float, P], R]) -> YourCallableTooStrict[P, R]: def wrapper(value: float | None = None, *args: P.args, **kwargs: P.kwargs) -> R | partial[R]: if value is not None: return func(value, *args, **kwargs) else: if args: raise ValueError("Lazy call must provide keyword arguments only") return partial(func, **kwargs) return wrapper # type: ignore[return-value] @lazy_strict def test_multiply_strict(value: float, *, multiplier: float) -> float: return value * multiplier @lazy_strict def test_format_strict(value: float, *, fmt: str) -> str: return fmt % value print('test_multiply 5*2:', test_multiply_strict(value=5, multiplier=2)) print('test_format 7.777 as .2f:', test_format_strict(value=7.777, fmt='%.2f')) func_mult_11_strict = test_multiply_strict(multiplier=11) # returns a partial function print('Type of func_mult_11:', type(func_mult_11_strict)) print('func_mult_11 5*11:', func_mult_11_strict(value=5)) func_mult_11_strict(value=5, multiplier=5) # False positive: OK at runtime, but not allowed by mypy. E: Unexpected keyword argument "multiplier" for "__call__" of "ValueOnlyCallable" [call-arg] func_mult_11_strict(value='a') # Expected. E: Argument "value" to "__call__" of "ValueOnlyCallable" has incompatible type "str"; expected "float" [arg-type] You can also mark value kw-only in ValueOnlyCallable definition if you'd like, I just don't think it is reasonable for a function with only one argument. You can compare both approaches in playground. If you do not want to use an ignore comment, the verbose option below should work. However, I do not think that verbosity is worth removing one ignore comment - it's up to you to decide. def lazy_strict(func: Callable[Concatenate[float, P], R]) -> YourCallableTooStrict[P, R]: @overload def wrapper(value: float, *args: P.args, **kwargs: P.kwargs) -> R: ... @overload def wrapper(value: None = None, *args: P.args, **kwargs: P.kwargs) -> ValueOnlyCallable[R]: ... def wrapper(value: float | None = None, *args: P.args, **kwargs: P.kwargs) -> R | ValueOnlyCallable[R]: if value is not None: return func(value, *args, **kwargs) else: if args: raise ValueError("Lazy call must provide keyword arguments only") return partial(func, **kwargs) return wrapper Here's also Pyright playground, because mypy failed to find a mistake in my original answer and Pyright did. 
| 2 | 1 |
78,249,645 | 2024-3-30 | https://stackoverflow.com/questions/78249645/polars-asof-join-on-next-available-date | I have a frame (events) which I want to join into another frame (fr), joining on Date and Symbol. There aren't necessarily any date overlaps. The date in events would match with the first occurrence only on the same or later date in fr, so if the event date is 2010-12-01, it would join on the same date or if not present then the next available date (2010-12-02). I've tried to do this using search_sorted and join_asof but I'd like to group by the Symbol column and also this isn't a proper join. This somewhat works for a single Symbol only. fr = pl.DataFrame( { 'Symbol': ['A']*5, 'Date': ['2010-08-29', '2010-09-01', '2010-09-05', '2010-11-30', '2010-12-02'], } ).with_columns(pl.col('Date').str.to_date('%Y-%m-%d')).with_row_index().set_sorted("Date") events = pl.DataFrame( { 'Symbol': ['A']*3, 'Earnings_Date': ['2010-06-01', '2010-09-01', '2010-12-01'], 'Event': [1, 4, 7], } ).with_columns(pl.col('Earnings_Date').str.to_date('%Y-%m-%d')).set_sorted("Earnings_Date") idx = fr["Date"].search_sorted(events["Earnings_Date"], "left") fr = fr.with_columns( pl.when( pl.col("index").is_in(idx) ) .then(True) .otherwise(False) .alias("Earnings") ) fr = fr.join_asof(events, by="Symbol", left_on="Date", right_on="Earnings_Date") fr = fr.with_columns( pl.when( pl.col("Earnings") == True ) .then(pl.col("Event")) .otherwise(False) .alias("Event") ) | It sounds like you are on the right track using pl.DataFrame.join_asof. To group by the symbol the by parameter can be used. ( fr .join_asof( events, left_on="Date", right_on="Earnings_Date", by="Symbol", ) ) shape: (5, 5) βββββββββ¬βββββββββ¬βββββββββββββ¬ββββββββββββββββ¬ββββββββ β index β Symbol β Date β Earnings_Date β Event β β --- β --- β --- β --- β --- β β u32 β str β date β date β i64 β βββββββββͺβββββββββͺβββββββββββββͺββββββββββββββββͺββββββββ‘ β 0 β A β 2010-08-29 β 2010-06-01 β 1 β β 1 β A β 2010-09-01 β 2010-09-01 β 4 β β 2 β A β 2010-09-05 β 2010-09-01 β 4 β β 3 β A β 2010-11-30 β 2010-09-01 β 4 β β 4 β A β 2010-12-02 β 2010-12-01 β 7 β βββββββββ΄βββββββββ΄βββββββββββββ΄ββββββββββββββββ΄ββββββββ Now, I understand that you'd like each event to be matched at most once. I don't believe this is possible with join_asof alone. However, we can set all event rows that equal the previous row to Null. For this, an pl.when().then() construct can be used. ( fr .join_asof( events, left_on="Date", right_on="Earnings_Date", by="Symbol", ) .with_columns( pl.when( pl.col("Earnings_Date", "Event").is_first_distinct() ).then( pl.col("Earnings_Date", "Event") ).over("Symbol") ) ) shape: (5, 5) βββββββββ¬βββββββββ¬βββββββββββββ¬ββββββββββββββββ¬ββββββββ β index β Symbol β Date β Earnings_Date β Event β β --- β --- β --- β --- β --- β β u32 β str β date β date β i64 β βββββββββͺβββββββββͺβββββββββββββͺββββββββββββββββͺββββββββ‘ β 0 β A β 2010-08-29 β 2010-06-01 β 1 β β 1 β A β 2010-09-01 β 2010-09-01 β 4 β β 2 β A β 2010-09-05 β null β null β β 3 β A β 2010-11-30 β null β null β β 4 β A β 2010-12-02 β 2010-12-01 β 7 β βββββββββ΄βββββββββ΄βββββββββββββ΄ββββββββββββββββ΄ββββββββ | 3 | 2 |
78,252,692 | 2024-3-31 | https://stackoverflow.com/questions/78252692/why-numpy-vectorize-calls-vectorized-function-more-times-than-elements-in-the-ve | When we call vectorized function for some vector, for some reason it is called twice for the first vector element. What is the reason, and can we get rid of this strange effect (e.g. when this function needs to have some side effect, e.g. counts some sum etc) Example: import numpy @numpy.vectorize def test(x): print(x) test([1,2,3]) Result: 1 1 2 3 numpy 1.26.4 | This is well defined in the vectorize documentation: If otypes is not specified, then a call to the function with the first argument will be used to determine the number of outputs. If you don't want this, you can define otypes: import numpy def test(x): print(x) test = numpy.vectorize(test, otypes=[float]) test([1,2,3]) 1 2 3 In any case, be aware that vectorize is just a convenience around a python loop. If you need to have side effects you might rather want to use an explicit python loop. | 2 | 5 |
78,252,285 | 2024-3-31 | https://stackoverflow.com/questions/78252285/attributeerror-module-numba-has-no-attribute-generated-jit | I am trying to run from ydata_profiling import ProfileReport profile = ProfileReport(merged_data) profile.to_notebook_iframe() in jupyter notebook. But I am getting an error: AttributeError: module 'numba' has no attribute 'generated_jit' I am running jupyter notebook in Docker container with requirements listed below: numpy==1.24.3 pandas==1.4.1 scikit-learn==1.4.1.post1 pyyaml==6.0 dvc==3.48.4 mlflow==2.11.1 seaborn==0.11.2 matplotlib==3.5.1 boto3==1.18.60 jupyter==1.0.0 pandoc==2.3 ydata-profiling==4.7.0 numba==0.59.1 I am using WSL Ubuntu in Visual Studio Code. Tried to build Docker image several times now with different versions of libraries. EDIT: Adding Traceback: AttributeError Traceback (most recent call last) Cell In[3], line 1 ----> 1 from ydata_profiling import ProfileReport 3 profile = ProfileReport(merged_data) 4 profile.to_notebook_iframe() File /usr/local/lib/python3.11/site-packages/ydata_profiling/__init__.py:14 10 warnings.simplefilter("ignore", category=NumbaDeprecationWarning) 12 import importlib.util # isort:skip # noqa ---> 14 from ydata_profiling.compare_reports import compare # isort:skip # noqa 15 from ydata_profiling.controller import pandas_decorator # isort:skip # noqa 16 from ydata_profiling.profile_report import ProfileReport # isort:skip # noqa File /usr/local/lib/python3.11/site-packages/ydata_profiling/compare_reports.py:12 10 from ydata_profiling.model import BaseDescription 11 from ydata_profiling.model.alerts import Alert ---> 12 from ydata_profiling.profile_report import ProfileReport 15 def _should_wrap(v1: Any, v2: Any) -> bool: 16 if isinstance(v1, (list, dict)): File /usr/local/lib/python3.11/site-packages/ydata_profiling/profile_report.py:25 23 from tqdm.auto import tqdm 24 from typeguard import typechecked ---> 25 from visions import VisionsTypeset 27 from ydata_profiling.config import Config, Settings, SparkSettings 28 from ydata_profiling.expectations_report import ExpectationsReport File /usr/local/lib/python3.11/site-packages/visions/__init__.py:4 1 """Core functionality""" 3 from visions import types, typesets, utils ----> 4 from visions.backends import * 5 from visions.declarative import create_type 6 from visions.functional import ( 7 cast_to_detected, 8 cast_to_inferred, 9 detect_type, 10 infer_type, 11 ) File /usr/local/lib/python3.11/site-packages/visions/backends/__init__.py:9 6 try: 7 import pandas as pd ----> 9 import visions.backends.pandas 10 from visions.backends.pandas.test_utils import pandas_version 12 if pandas_version[0] < 1: File /usr/local/lib/python3.11/site-packages/visions/backends/pandas/__init__.py:2 1 import visions.backends.pandas.traversal ----> 2 import visions.backends.pandas.types File /usr/local/lib/python3.11/site-packages/visions/backends/pandas/types/__init__.py:3 1 import visions.backends.pandas.types.boolean 2 import visions.backends.pandas.types.categorical ----> 3 import visions.backends.pandas.types.complex 4 import visions.backends.pandas.types.count 5 import visions.backends.pandas.types.date File /usr/local/lib/python3.11/site-packages/visions/backends/pandas/types/complex.py:7 5 from visions.backends.pandas.series_utils import series_not_empty, series_not_sparse 6 from visions.backends.pandas.types.float import string_is_float ----> 7 from visions.backends.shared.parallelization_engines import pandas_apply 8 from visions.types.complex import Complex 9 from 
visions.types.string import String File /usr/local/lib/python3.11/site-packages/visions/backends/shared/__init__.py:1 ----> 1 from . import nan_handling, parallelization_engines, utilities File /usr/local/lib/python3.11/site-packages/visions/backends/shared/nan_handling.py:34 30 # TODO: There are optimizations here, just have to define precisely the desired missing ruleset in the 31 # generated jit 32 if has_numba: ---> 34 @nb.generated_jit(nopython=True) 35 def is_missing(x): 36 """ 37 Return True if the value is missing, False otherwise. 38 """ 39 if isinstance(x, nb.types.Float): AttributeError: module 'numba' has no attribute 'generated_jit' | The top-level API function numba.generated_jit was deprecated and has been removed from numba version >= 0.59.0. I suggest installing the last version that still provides numba.generated_jit, which is numba==0.58.1 | 4 | 7 |
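As a forward-looking alternative to pinning numba, code that relied on @numba.generated_jit can usually be ported to numba.extending.overload, which dispatches on argument types in a similar way. The sketch below mirrors the is_missing function from the traceback; it is only an illustration of the migration pattern and has not been tested against visions itself.

```python
import numba as nb
import numpy as np
from numba.extending import overload

def is_missing(x):
    # plain-Python fallback, used when called outside of jitted code
    return x != x if isinstance(x, float) else False

@overload(is_missing)
def _ol_is_missing(x):
    # type-dispatched implementations, the documented replacement for generated_jit
    if isinstance(x, nb.types.Float):
        return lambda x: np.isnan(x)
    return lambda x: False

@nb.njit
def count_missing(arr):
    n = 0
    for v in arr:
        if is_missing(v):
            n += 1
    return n

print(count_missing(np.array([1.0, np.nan, 3.0])))  # 1
```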
78,251,979 | 2024-3-31 | https://stackoverflow.com/questions/78251979/what-is-the-algorithm-behind-math-gcd-and-why-it-is-faster-euclidean-algorithm | Tests shows that Python's math.gcd is one order faster than naive Euclidean algorithm implementation: import math from timeit import default_timer as timer def gcd(a,b): while b != 0: a, b = b, a % b return a def main(): a = 28871271685163 b = 17461204521323 start = timer() print(gcd(a, b)) end = timer() print(end - start) start = timer() print(math.gcd(a, b)) end = timer() print(end - start) gives $ python3 test.py 1 4.816000000573695e-05 1 8.346003596670926e-06 e-05 vs e-06. I guess there is some optimizations or some other algorithm? | math.gcd() is certainly a Python shim over a library function that is running as machine code (i.e. compiled from "C" code), not a function being run by the Python interpreter. See also: Where are math.py and sys.py? This should be it (for CPython): math_gcd(PyObject *module, PyObject * const *args, Py_ssize_t nargs) in mathmodule.c and it calls _PyLong_GCD(PyObject *aarg, PyObject *barg) in longobject.c which apparently uses Lehmer's GCD algorithm The code is smothered in housekeeping operations and handling of special case though, increasing the complexity considerably. Still, quite clean. PyObject * _PyLong_GCD(PyObject *aarg, PyObject *barg) { PyLongObject *a, *b, *c = NULL, *d = NULL, *r; stwodigits x, y, q, s, t, c_carry, d_carry; stwodigits A, B, C, D, T; int nbits, k; digit *a_digit, *b_digit, *c_digit, *d_digit, *a_end, *b_end; a = (PyLongObject *)aarg; b = (PyLongObject *)barg; if (_PyLong_DigitCount(a) <= 2 && _PyLong_DigitCount(b) <= 2) { Py_INCREF(a); Py_INCREF(b); goto simple; } /* Initial reduction: make sure that 0 <= b <= a. */ a = (PyLongObject *)long_abs(a); if (a == NULL) return NULL; b = (PyLongObject *)long_abs(b); if (b == NULL) { Py_DECREF(a); return NULL; } if (long_compare(a, b) < 0) { r = a; a = b; b = r; } /* We now own references to a and b */ Py_ssize_t size_a, size_b, alloc_a, alloc_b; alloc_a = _PyLong_DigitCount(a); alloc_b = _PyLong_DigitCount(b); /* reduce until a fits into 2 digits */ while ((size_a = _PyLong_DigitCount(a)) > 2) { nbits = bit_length_digit(a->long_value.ob_digit[size_a-1]); /* extract top 2*PyLong_SHIFT bits of a into x, along with corresponding bits of b into y */ size_b = _PyLong_DigitCount(b); assert(size_b <= size_a); if (size_b == 0) { if (size_a < alloc_a) { r = (PyLongObject *)_PyLong_Copy(a); Py_DECREF(a); } else r = a; Py_DECREF(b); Py_XDECREF(c); Py_XDECREF(d); return (PyObject *)r; } x = (((twodigits)a->long_value.ob_digit[size_a-1] << (2*PyLong_SHIFT-nbits)) | ((twodigits)a->long_value.ob_digit[size_a-2] << (PyLong_SHIFT-nbits)) | (a->long_value.ob_digit[size_a-3] >> nbits)); y = ((size_b >= size_a - 2 ? b->long_value.ob_digit[size_a-3] >> nbits : 0) | (size_b >= size_a - 1 ? (twodigits)b->long_value.ob_digit[size_a-2] << (PyLong_SHIFT-nbits) : 0) | (size_b >= size_a ? (twodigits)b->long_value.ob_digit[size_a-1] << (2*PyLong_SHIFT-nbits) : 0)); /* inner loop of Lehmer's algorithm; A, B, C, D never grow larger than PyLong_MASK during the algorithm. 
*/ A = 1; B = 0; C = 0; D = 1; for (k=0;; k++) { if (y-C == 0) break; q = (x+(A-1))/(y-C); s = B+q*D; t = x-q*y; if (s > t) break; x = y; y = t; t = A+q*C; A = D; B = C; C = s; D = t; } if (k == 0) { /* no progress; do a Euclidean step */ if (l_mod(a, b, &r) < 0) goto error; Py_SETREF(a, b); b = r; alloc_a = alloc_b; alloc_b = _PyLong_DigitCount(b); continue; } /* a, b = A*b-B*a, D*a-C*b if k is odd a, b = A*a-B*b, D*b-C*a if k is even */ if (k&1) { T = -A; A = -B; B = T; T = -C; C = -D; D = T; } if (c != NULL) { assert(size_a >= 0); _PyLong_SetSignAndDigitCount(c, 1, size_a); } else if (Py_REFCNT(a) == 1) { c = (PyLongObject*)Py_NewRef(a); } else { alloc_a = size_a; c = _PyLong_New(size_a); if (c == NULL) goto error; } if (d != NULL) { assert(size_a >= 0); _PyLong_SetSignAndDigitCount(d, 1, size_a); } else if (Py_REFCNT(b) == 1 && size_a <= alloc_b) { d = (PyLongObject*)Py_NewRef(b); assert(size_a >= 0); _PyLong_SetSignAndDigitCount(d, 1, size_a); } else { alloc_b = size_a; d = _PyLong_New(size_a); if (d == NULL) goto error; } a_end = a->long_value.ob_digit + size_a; b_end = b->long_value.ob_digit + size_b; /* compute new a and new b in parallel */ a_digit = a->long_value.ob_digit; b_digit = b->long_value.ob_digit; c_digit = c->long_value.ob_digit; d_digit = d->long_value.ob_digit; c_carry = 0; d_carry = 0; while (b_digit < b_end) { c_carry += (A * *a_digit) - (B * *b_digit); d_carry += (D * *b_digit++) - (C * *a_digit++); *c_digit++ = (digit)(c_carry & PyLong_MASK); *d_digit++ = (digit)(d_carry & PyLong_MASK); c_carry >>= PyLong_SHIFT; d_carry >>= PyLong_SHIFT; } while (a_digit < a_end) { c_carry += A * *a_digit; d_carry -= C * *a_digit++; *c_digit++ = (digit)(c_carry & PyLong_MASK); *d_digit++ = (digit)(d_carry & PyLong_MASK); c_carry >>= PyLong_SHIFT; d_carry >>= PyLong_SHIFT; } assert(c_carry == 0); assert(d_carry == 0); Py_INCREF(c); Py_INCREF(d); Py_DECREF(a); Py_DECREF(b); a = long_normalize(c); b = long_normalize(d); } Py_XDECREF(c); Py_XDECREF(d); simple: assert(Py_REFCNT(a) > 0); assert(Py_REFCNT(b) > 0); /* Issue #24999: use two shifts instead of ">> 2*PyLong_SHIFT" to avoid undefined behaviour when LONG_MAX type is smaller than 60 bits */ #if LONG_MAX >> PyLong_SHIFT >> PyLong_SHIFT /* a fits into a long, so b must too */ x = PyLong_AsLong((PyObject *)a); y = PyLong_AsLong((PyObject *)b); #elif LLONG_MAX >> PyLong_SHIFT >> PyLong_SHIFT x = PyLong_AsLongLong((PyObject *)a); y = PyLong_AsLongLong((PyObject *)b); #else # error "_PyLong_GCD" #endif x = Py_ABS(x); y = Py_ABS(y); Py_DECREF(a); Py_DECREF(b); /* usual Euclidean algorithm for longs */ while (y != 0) { t = y; y = x % y; x = t; } #if LONG_MAX >> PyLong_SHIFT >> PyLong_SHIFT return PyLong_FromLong(x); #elif LLONG_MAX >> PyLong_SHIFT >> PyLong_SHIFT return PyLong_FromLongLong(x); #else # error "_PyLong_GCD" #endif error: Py_DECREF(a); Py_DECREF(b); Py_XDECREF(c); Py_XDECREF(d); return NULL; } | 2 | 7 |
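One caveat about the measurement in the question: it times a single call and includes print() inside the timed region. A repeated measurement with timeit (sketch below; absolute numbers depend on the machine) isolates the algorithm cost and still shows math.gcd well ahead of the pure-Python loop.

```python
import math
import timeit

def gcd_euclid(a, b):
    # plain Euclidean algorithm in pure Python
    while b:
        a, b = b, a % b
    return a

a = 28871271685163
b = 17461204521323

# time only the gcd computation itself (no print inside the timed code)
t_py = timeit.timeit(lambda: gcd_euclid(a, b), number=100_000)
t_c = timeit.timeit(lambda: math.gcd(a, b), number=100_000)
print(f"pure Python Euclid: {t_py:.3f} s for 100k calls")
print(f"math.gcd (C/Lehmer): {t_c:.3f} s for 100k calls")
```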
78,251,275 | 2024-3-31 | https://stackoverflow.com/questions/78251275/numpy-array-methods-are-faster-than-numpy-functions | I have to work with the learning history of a Keras model. This is a basic task, but I've measured the performance of the Python built-in min() function, the numpy.min() function, and the numpy ndarray.min() function for list and ndarray. The performance of the built-in Python min() function is nothing compared to that of Numpy for ndarray - numpy is 10 times faster (for list numpy is almost 6 times slower, but this is not the case of this question). However, the ndarray.min() method is almost twice as fast as numpy.min(). The ndarray.min() documentation refers to the numpy.amin() documentation, which according to the numpy.amin docs, is an alias for numpy.min(). Therefore, I assumed that numpy.min() and ndarray.min() would have the same performance. However, why is the performance of these functions not equal? from timeit import default_timer import random a = random.sample(range(1,1000000), 10000) b = np.array(random.sample(range(1,1000000), 10000)) def time_mgr(func): tms = [] for i in range(3, 6): tm = default_timer() for j in range(10**i): func() tm = (default_timer()-tm) / 10**i * 10e6 tms.append(tm) print(func.__name__, tms) @time_mgr def min_list(): min(a) @time_mgr def np_min_list(): np.min(a) @time_mgr def min_nd(): min(b) @time_mgr def np_min_nd(): np.min(b) @time_mgr def np_nd_min(): b.min() output, time in mks: min_list [520.7690014503896, 515.3326001018286, 516.221239999868] np_min_list [2977.614998817444, 3009.602500125766, 3014.1312699997798] min_nd [2270.1649996452034, 2195.6873999442905, 2155.1631700014696] np_min_nd [22.295000962913033, 21.675399970263243, 22.30485000181943] np_nd_min [14.261999167501926, 12.929399963468313, 12.935079983435571] | Basically your observations are correct. Here's my timings and notes Create 2 arrays, one much larger, and a list: In [254]: a = np.random.randint(0,1000,1000); b = a.tolist() In [255]: aa = np.random.randint(0,1000,100000) The method is faster, by about 7Β΅s in both cases - that's basially the overhead of the function delegating the job to the method: In [256]: timeit a.min() 7.15 Β΅s Β± 16 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each) In [257]: timeit np.min(a) 14.7 Β΅s Β± 204 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each) In [258]: timeit aa.min() 49.4 Β΅s Β± 174 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) In [259]: timeit np.min(aa) 57.4 Β΅s Β± 141 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) The extra time for calling np.min on the list is the time required to convert the list to an array: In [260]: timeit np.min(b) 142 Β΅s Β± 446 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) In [261]: timeit np.array(b) 120 Β΅s Β± 161 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) The native Python function does reasonably well with a list: In [262]: timeit min(b) 40.7 Β΅s Β± 92 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) It is slower when applied to the array. The extra time is basically the time it takes to iterate through array, treating it as a list: In [263]: timeit min(a) 127 Β΅s Β± 675 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) In [264]: timeit min(list(a)) 146 Β΅s Β± 1.43 Β΅s per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) tolist is a faster way to create a list from an array: In [265]: timeit min(a.tolist()) 77.1 Β΅s Β± 82 ns per loop (mean Β± std. dev. 
of 7 runs, 10,000 loops each) In general, when there's a numpy function with the same name as a method, it is doing two things: converting the argument(s) to array, if necessary delegating the actual calculation to the method. Converting a list to an array takes time. Whether that extra time is worth it depends on the following task. Conversely, treating an array as a list, is usually slower. In [270]: timeit [i for i in b] 50 Β΅s Β± 203 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) In [271]: timeit [i for i in a] 126 Β΅s Β± 278 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) The actual item created in an iteration is different: In [275]: type(b[0]), type(a[0]) Out[275]: (int, numpy.int32) b[0] is the same object that is in b. That is, b contains references to int objects. a[0] is a new np.int32 object each time it's called, with a new id. That 'unboxing' takes time. In sum, if you already have an array, it's fastest to use the method. But if for clarity or generality, using the function instead is no big deal, especially if the array is large. Treating an array as a list is usually slower. If you are starting with list, using the native python function is usually the best - if available. | 3 | 2 |
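A quick way to see that the function-versus-method gap is mostly a fixed per-call dispatch cost, rather than a difference in the reduction itself, is to time both across array sizes (a sketch; the exact numbers vary by machine):

```python
import numpy as np
import timeit

for n in (10, 1_000, 100_000):
    a = np.random.randint(0, 1000, n)
    t_method = timeit.timeit(a.min, number=10_000)            # ndarray.min()
    t_func = timeit.timeit(lambda: np.min(a), number=10_000)  # np.min() delegating to the method
    # the gap stays roughly constant per call, so it matters less as the array grows
    print(f"n={n:>7}: method {t_method:.4f}s  function {t_func:.4f}s")
```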
78,251,320 | 2024-3-31 | https://stackoverflow.com/questions/78251320/asyncio-how-to-read-stdout-from-subprocess | I have stuck with a pretty simple problem - I can't communicate with process' stdout. The process is a simple stopwatch, so I'd be able to start it, stop and get current time. The code of stopwatch is: import argparse import time def main() -> None: parser = argparse.ArgumentParser() parser.add_argument('start', type=int, default=0) start = parser.parse_args().start while True: print(start) start += 1 time.sleep(1) if __name__ == "__main__": main() And its manager is: import asyncio class RobotManager: def __init__(self): self.cmd = ["python", "stopwatch.py", "10"] self.robot = None async def start(self): self.robot = await asyncio.create_subprocess_exec( *self.cmd, stdout=asyncio.subprocess.PIPE, ) async def stop(self): if self.robot: self.robot.kill() stdout = await self.robot.stdout.readline() print(stdout) await self.robot.wait() self.robot = None async def main(): robot = RobotManager() await robot.start() await asyncio.sleep(3) await robot.stop() await robot.start() await asyncio.sleep(3) await robot.stop() asyncio.run(main()) But stdout.readline returns an empty byte string every time. When changing stdout = await self.robot.stdout.readline() to stdout, _ = await self.robot.communicate(), the result is still an empty byte string. When adding await self.robot.stdout.readline() to the end of the RobotManager.start method, it hangs forever. However, when removing stdout=asyncio.subprocess.PIPE and all readline calls, the subprocess prints to the terminal as expected. How do I read from the subprocess stdout correctly? | In this case "proc.communicate" cannot be used; it's not suitable for the purpose, since the OP wants to interrupt a running process. The sample code in the Python docs also shows how to directly read the piped stdout in these cases, so there is in principle nothing wrong with doing that. The main problem is that the stopwatch process is buffering the output. Try using as command: ["python", "-u", "stopwatch.py", "3"] For debugging it will also help to add some prints indicating when the robot started and ended. The following works for me: class RobotManager: def __init__(self): self.cmd = ["python", "-u", "stopwatch.py", "3"] self.robot = None async def start(self): print("======Starting======") self.robot = await asyncio.create_subprocess_exec( *self.cmd, stdout=asyncio.subprocess.PIPE, ) async def stop(self): if self.robot: self.robot.kill() stdout = await self.robot.stdout.readline() while stdout: print(stdout) stdout = await self.robot.stdout.readline() await self.robot.wait() print("======Terminated======") self.robot = None | 2 | 2 |
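An equivalent fix on the child side, instead of passing -u on the interpreter command line, is to flush each line explicitly. A sketch of the modified stopwatch.py:

```python
# stopwatch.py -- same as before, but flushing stdout on every line
import argparse
import time

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument('start', type=int, default=0)
    start = parser.parse_args().start
    while True:
        print(start, flush=True)   # flush so the parent sees each line immediately
        start += 1
        time.sleep(1)

if __name__ == "__main__":
    main()
```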
78,244,861 | 2024-3-29 | https://stackoverflow.com/questions/78244861/launching-python-debug-with-arguments-messes-with-file-path | I'm using VSCode on Windows, with the GitBash as integrated terminal. When I launch the Python Debugger with default configurations, it works fine, and I get this command executed on the terminal: /usr/bin/env c:\\Users\\augus\\.Apps\\anaconda3\\envs\\muskit-env\\python.exe \ c:\\Users\\augus\\.vscode\\extensions\\ms-python.debugpy-2024.2.0-win32-x64\\bundled\\libs\\debugpy\\adapter/../..\\debugpy\\launcher \ 53684 -- E:\\muskit\\QuantumSoftwareTestingTools\\Muskit\\Muskit\\CommandMain.py Notice the \\ in the file path. Again, above works just fine. The problem is when I add the args property to my launch.json configuration. launch.json { "configurations": [ { "name": "Python Debugger: Current File with Arguments", "type": "debugpy", "request": "launch", "program": "${file}", "console": "integratedTerminal", "args": "foo" } ] } The following command is executed on the terminal: $ /usr/bin/env c:\Users\augus\.Apps\anaconda3\envs\muskit-env\python.exe \ c:\Users\augus\.vscode\extensions\ms-python.debugpy-2024.2.0-win32-x64\bundled\libs\debugpy\adapter/../..\debugpy\launcher \ 53805 -- E:\muskit\QuantumSoftwareTestingTools\Muskit\Muskit\CommandMain.py foo /usr/bin/env: βc:Usersaugus.Appsanaconda3envsmuskit-envpython.exeβ: No such file or directory Notice that, instead of \\. it uses \, which causes the "No such file or directory". Is this a bug, or am I missing something? | Going through the issues on vscode-python repository, it is being mentioned in multiple issues that git bash is not officially supported. For example here: Note Gitbash isn't supported by Python extension, so use Select default profile to switch to cmd or powershell if need be. Possibly it is a bug and it will be better to use cmd or powershell, since you can run into issues in the future as well. Some related issues which mention the same https://github.com/Microsoft/vscode-python/issues/3035 https://github.com/microsoft/vscode-python/issues/23008 https://github.com/microsoft/vscode-python/issues/22957 | 3 | 1 |
78,248,526 | 2024-3-30 | https://stackoverflow.com/questions/78248526/how-to-create-vector-embeddings-using-sentencetransformers | I found this code: https://github.com/pixegami/langchain-rag-tutorial/blob/main/create_database.py It takes the document, splits it into chunks, creates vector embeddings for each chunk, and saves those into Chroma Database. However, the source code uses OpenAI key to create embeddings. Since I don't have access to OpenAI I tried to use free alternative, - SentenceTransformers. But I wasn't able to rewrite the code properly. Here is my curent attempt: from langchain_community.document_loaders import DirectoryLoader from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.schema import Document from sentence_transformers import SentenceTransformer from langchain.vectorstores.chroma import Chroma import os import shutil CHROMA_PATH = "chroma" DATA_PATH = "data/books" embedder = SentenceTransformer("all-MiniLM-L6-v2") def main(): generate_data_store() def generate_data_store(): documents = load_documents() chunks = split_text(documents) save_to_chroma(chunks) def load_documents(): loader = DirectoryLoader(DATA_PATH, glob="*.md") documents = loader.load() return documents def split_text(documents: list[Document]): text_splitter = RecursiveCharacterTextSplitter( chunk_size=300, chunk_overlap=100, length_function=len, add_start_index=True, ) chunks = text_splitter.split_documents(documents) print(f"Split {len(documents)} documents into {len(chunks)} chunks.") document = chunks[10] print(document.page_content) print(document.metadata) return chunks def save_to_chroma(chunks: list[Document]): # Clear out the database first. if os.path.exists(CHROMA_PATH): shutil.rmtree(CHROMA_PATH) # Create a new DB from the documents. db = Chroma.from_documents( chunks, embedder.encode, persist_directory=CHROMA_PATH ) db.persist() print(f"Saved {len(chunks)} chunks to {CHROMA_PATH}.") if __name__ == "__main__": main() This is the error that I get: Traceback (most recent call last): File "/media/andrew/Simple Tom/Robotics/Crew_AI/langchain-rag-tutorial/create_database.py", line 56, in <module> main() File "/media/andrew/Simple Tom/Robotics/Crew_AI/langchain-rag-tutorial/create_database.py", line 15, in main generate_data_store() File "/media/andrew/Simple Tom/Robotics/Crew_AI/langchain-rag-tutorial/create_database.py", line 20, in generate_data_store save_to_chroma(chunks) File "/media/andrew/Simple Tom/Robotics/Crew_AI/langchain-rag-tutorial/create_database.py", line 49, in save_to_chroma db = Chroma.from_documents( File "/home/andrew/.local/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 778, in from_documents return cls.from_texts( File "/home/andrew/.local/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 736, in from_texts chroma_collection.add_texts( File "/home/andrew/.local/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 275, in add_texts embeddings = self._embedding_function.embed_documents(texts) AttributeError: 'function' object has no attribute 'embed_documents' I'm not a programmer. If someone could point out if it's at all possible to achive this without OpenAI key and if yes, then where my mistake is, that'd be great. 
| Define your embedding model, with HuggingFaceEmbeddings from langchain_community.embeddings import HuggingFaceEmbeddings embedder = HuggingFaceEmbeddings( model_name = "sentence-transformers/all-MiniLM-L6-v2" ) Then embed the chunks into vectorDB db = Chroma.from_documents( documents=chunks, embedding=embedder, persist_directory=CHROMA_PATH ) db.persist() | 2 | 2 |
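To check that the persisted store is usable, a minimal follow-up sketch loads it back and runs a similarity search. The query string here is invented, and the embedding model must be the same one that was used when the database was created.

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores.chroma import Chroma

CHROMA_PATH = "chroma"

embedder = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma(persist_directory=CHROMA_PATH, embedding_function=embedder)

# hypothetical query against the indexed books
results = db.similarity_search("How does Alice meet the Mad Hatter?", k=3)
for doc in results:
    print(doc.metadata, doc.page_content[:100])
```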
78,248,879 | 2024-3-30 | https://stackoverflow.com/questions/78248879/remove-gaps-between-subplots-mosaic-in-matplotlib | How do I remove the gaps between the subplots on a mosaic? The traditional way does not work with mosaics: plt.subplots_adjust(wspace=0, hspace=0) I also tried using gridspec_kw, but no luck. import matplotlib.pyplot as plt import numpy as np ax = plt.figure(layout="constrained").subplot_mosaic( """ abcde fghiX jklXX mnXXX oXXXX """, empty_sentinel="X", gridspec_kw={ "wspace": 0, "hspace": 0, }, ) for k,ax in ax.items(): print(ax) #ax.text(0.5, 0.5, k, transform=ax.transAxes, ha="center", va="center", fontsize=8, color="darkgrey") ax.set_xticklabels([]) ax.set_yticklabels([]) ax.tick_params(length = 0) The code generates: | This is not caused by subplot_mosaic but because a layout was specified. If you use constrained or tight layout, the layout manager will supersede the custom adjustments. Remove the layout manager and either method will work: gridspec_kw fig = plt.figure() # without layout param ax = fig.subplot_mosaic( """ abcde fghiX jklXX mnXXX oXXXX """, empty_sentinel="X", gridspec_kw={ "wspace": 0, "hspace": 0, }, ) subplots_adjust fig = plt.figure() # without layout param ax = fig.subplot_mosaic( """ abcde fghiX jklXX mnXXX oXXXX """, empty_sentinel="X", ) fig.subplots_adjust(wspace=0, hspace=0) | 2 | 4 |
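For completeness, a self-contained version that combines the fix (no layout manager) with the tick-hiding loop from the question:

```python
import matplotlib.pyplot as plt

fig = plt.figure()                      # note: no layout="constrained" here
axd = fig.subplot_mosaic(
    """
    abcde
    fghiX
    jklXX
    mnXXX
    oXXXX
    """,
    empty_sentinel="X",
    gridspec_kw={"wspace": 0, "hspace": 0},
)
for k, ax in axd.items():
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.tick_params(length=0)
plt.show()
```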
78,249,223 | 2024-3-30 | https://stackoverflow.com/questions/78249223/scraping-all-links-using-beautifulsoup | I am trying to scrape all match reports links from the page but there is 'load more' button, and I don't want to use selenium. Is there any solution to collect all links without selenium. Thanks in advance. Here what I tried: from bs4 import BeautifulSoup as bs import requests r=requests.get('https://www.iplt20.com/news/match-reports') soup = bs(r.text,'lxml') for match in soup.find_all('div',class_='latest-slider-wrap position-relative'): links = match.find('a') print(links['href']) | Try: import requests from bs4 import BeautifulSoup url = "https://www.iplt20.com/news/match-reports" soup = BeautifulSoup(requests.get(url).content, "html.parser") for a in soup.select("#div-match-report a:has(li)"): print(a["href"]) Prints: https://www.iplt20.com/news/4014/tata-ipl-2024-match-11-lsg-vs-pbks-match-report https://www.iplt20.com/news/4012/tata-ipl-2024-match-10-rcb-vs-kkr-match-report https://www.iplt20.com/news/4011/tata-ipl-2024-match-09-rr-vs-dc-match-report https://www.iplt20.com/news/4009/tata-ipl-2024-match-08-srh-vs-mi-match-report https://www.iplt20.com/news/4007/tata-ipl-2024-match-07-csk-vs-gt-match-report https://www.iplt20.com/news/4006/tata-ipl-2024-match-06-rcb-vs-pbks-match-report https://www.iplt20.com/news/4004/tata-ipl-2024-match-05-gt-vs-mi-match-report https://www.iplt20.com/news/4003/tata-ipl-2024-match-04-rr-vs-lsg-match-report https://www.iplt20.com/news/4001/tata-ipl-2024-match-03-kkr-vs-srh-match-report https://www.iplt20.com/news/4000/tata-ipl-2024-match-02-pbks-vs-dc-match-report https://www.iplt20.com/news/3999/tata-ipl-2024-match-01-csk-vs-rcb-match-report https://www.iplt20.com/news/3976/tata-ipl-2023-final-csk-vs-gt-match-report https://www.iplt20.com/news/3974/tata-ipl-2023-qualifier-2-gt-vs-mi-match-report https://www.iplt20.com/news/3973/tata-ipl-2023-eliminator-lsg-vs-mi-match-report https://www.iplt20.com/news/3972/tata-ipl-2023-qualifier-1-gt-vs-csk-match-report https://www.iplt20.com/news/3971/tata-ipl-2023-match-70-rcb-vs-gt-match-report https://www.iplt20.com/news/3970/tata-ipl-2023-match-69-mi-vs-srh-match-report https://www.iplt20.com/news/3969/tata-ipl-2023-match-68-kkr-vs-lsg-match-report https://www.iplt20.com/news/3968/tata-ipl-2023-match-67-dc-vs-csk-match-report https://www.iplt20.com/news/3967/tata-ipl-2023-match-66-pbks-vs-rr-match-report https://www.iplt20.com/news/3966/tata-ipl-2023-match-65-srh-vs-rcb-match-report EDIT: To get all links you can use their Ajax pagination API: import requests api_url = "https://www.iplt20.com/add-more-match-report?page={page}&type=match-reports" for page in range(1, 4): # <-- adjust number of pages here print(f"{page=}") data = requests.get(api_url.format(page=page)).json() for d in data["newsResponce"]["data"]: print(f'https://www.iplt20.com/news/{d["id"]}/{d["titleUrlSegment"]}') Prints: ... 
page=2 https://www.iplt20.com/news/3964/tata-ipl-2023-match-64-pbks-vs-dc-match-report https://www.iplt20.com/news/3963/tata-ipl-2023-match-63-lsg-vs-mi-match-report https://www.iplt20.com/news/3962/tata-ipl-2023-match-62-gt-vs-srh-match-report https://www.iplt20.com/news/3960/tata-ipl-2023-match-61-csk-vs-kkr-match-report https://www.iplt20.com/news/3959/tata-ipl-2023-match-60-rr-vs-rcb-match-report https://www.iplt20.com/news/3958/tata-ipl-2023-match-59-dc-vs-pbks-match-report https://www.iplt20.com/news/3956/tata-ipl-2023-match-58-srh-vs-lsg-match-report https://www.iplt20.com/news/3955/tata-ipl-2023-match-57-mi-vs-gt-match-report https://www.iplt20.com/news/3953/tata-ipl-2023-match-56-kkr-vs-rr-match-report https://www.iplt20.com/news/3952/tata-ipl-2023-match-55-csk-vs-dc-match-report https://www.iplt20.com/news/3951/tata-ipl-2023-match-54-mi-vs-rcb-match-report https://www.iplt20.com/news/3947/tata-ipl-2023-match-53-kkr-vs-pbks-match-report https://www.iplt20.com/news/3946/tata-ipl-2023-match-52-rr-vs-srh-match-report https://www.iplt20.com/news/3945/tata-ipl-2023-match-51-gt-vs-lsg-match-report https://www.iplt20.com/news/3944/tata-ipl-2023-match-50-dc-vs-rcb-match-report https://www.iplt20.com/news/3943/tata-ipl-2023-match-49-csk-vs-mi-match-report https://www.iplt20.com/news/3942/tata-ipl-2023-match-48-rr-vs-gt-match-report https://www.iplt20.com/news/3940/tata-ipl-2023-match-47-srh-vs-kkr-match-report https://www.iplt20.com/news/3938/tata-ipl-2023-match-46-pbks-vs-mi-match-report https://www.iplt20.com/news/3937/tata-ipl-2023-match-45-lsg-vs-csk-match-report https://www.iplt20.com/news/3936/tata-ipl-2023-match-44-gt-vs-dc-match-report page=3 https://www.iplt20.com/news/3934/tata-ipl-2023-match-43-lsg-vs-rcb-match-report https://www.iplt20.com/news/3932/tata-ipl-2023-match-42-mi-vs-rr-match-report https://www.iplt20.com/news/3931/tata-ipl-2023-match-41-csk-vs-pbks-match-report https://www.iplt20.com/news/3930/tata-ipl-2023-match-40-dc-vs-srh-match-report ... | 2 | 3 |
78,248,599 | 2024-3-30 | https://stackoverflow.com/questions/78248599/export-pandas-to-csv-data-but-before-doing-so-sort-date-by-least-to-greatest | Looking around online, there seem to be numerous ways of doing this, but I am not sure they fit what I need. I would still like my dates in this format: "3/30/24". I am extracting a large amount of data and everything works as expected, but when I tried to sort my dates they were sorted lexicographically. That was only when reading the csv back rather than writing, because I just wanted to test whether it could even sort in the first place after writing to it. data_to_export = { "Company Name": ["Mcdonalds","Burgerking"], "Delivery Address":["123 lake rd", "124 west rd"], "Date": ["3/30/24", "1/23/24"], "Customer Name": ["Zack", "Peter"],} df = pd.DataFrame(data_to_export) df.to_csv(join_move_to, index=False) Above is some dummy data so you can get a quick example, and below is how I save it. I had some sort methods before this, but they didn't work, so I deleted them. Before writing, how can I tell it that I want to sort by date, keeping in mind I still want my format as "(Month/Day/Year)"? I understand this is a string, so it must be converted to some sort of date and then sorted, but I cannot find a way to do that. Here is what I tried, just so you can see: df = pd.DataFrame(data_to_export) df["Date"] = pd.to_datetime(df["Date"],format="%m/%d/%y") df.sort_values(by="Date", inplace=True) df.to_csv(path, index=False) | You should pass a custom key with to_datetime to sort_values; this will use the defined logic to sort while leaving the data unchanged: (df.sort_values(by='Date', key=lambda x: pd.to_datetime(x, format='%m/%d/%y')) .to_csv(path, index=False) ) Output csv: Company Name,Delivery Address,Date,Customer Name Burgerking,124 west rd,1/23/24,Peter Mcdonalds,123 lake rd,3/30/24,Zack | 2 | 1 |
78,247,930 | 2024-3-30 | https://stackoverflow.com/questions/78247930/rotate-a-multipolygon-without-changing-the-inner-spatial-relation | I have a multipolygon shapefile that i want to rotate. I can do the rotation but the problem is that the roatation changes the inner vertices. This creates overlap of polygon which i dont want. This is what i have tried. import geopandas as gpd input_poly_path = "poly3.shp" gdf = gpd.read_file(input_poly_path) explode = gdf.explode(ignore_index=True) ex = explode.rotate(-90, origin="centroid") g = gpd.GeoDataFrame(columns=['geometry'], geometry='geometry') g["geometry"] = ex https://drive.google.com/drive/folders/1HJpnNL-iXU_rReQzVcDGuyWZ8IjciKP8?usp=drive_link Link to the polygon | IIUC, you need to pass the centroid of the unary_union as the origin of the rotation : out = gpd.GeoDataFrame( geometry=gdf.rotate( angle=-90, origin=list(gdf.unary_union.centroid.coords)[0] ) ) NB: There is no need to explode the geometry, because you do not have MultiPolygons. Used input (gdf) : import geopandas as gpd # download from Google Drive input_poly_path = ( "C:/Users/Timeless/Downloads/" "share-20240330T125957Z-001.zip!share" ) gdf = gpd.read_file(input_poly_path, engine="pyogrio") | 2 | 3 |
78,247,747 | 2024-3-30 | https://stackoverflow.com/questions/78247747/tkinter-menu-spontaneously-adding-extra-item | I'm writing a Tkinter program that so far creates a window with a menu bar, a File menu, and a single item. The menu is successfully created, but with two items, the first being one that I did not specify, whose name is "-----". If I don't add an item, the spontaneous one is still added. This still happens if I specify tearoff=0. Any idea why this is happening? Windows 11, Python 3.12.2, Tkinter and Tcl 8.6. import tkinter as tk window = tk.Tk() window.geometry("800x600") menubar = tk.Menu(window) window.config(menu=menubar) fileMenu = tk.Menu(menubar) fileMenu.add_command( label="Exit", command=window.destroy, ) menubar.add_cascade(label="File", menu=fileMenu, underline=0) window.mainloop() | In that way it works. I think u put tearoff=0 in menubar instead of fileMenu. If you put your tearoff=0 in menubar it won't affect fileMenu. So, u need to specifically put tearoff=0 in specific tk.Menu() import tkinter as tk window = tk.Tk() window.geometry("800x600") menubar = tk.Menu(window) window.config(menu=menubar) fileMenu = tk.Menu(menubar,tearoff=0) fileMenu.add_command( label="Exit", command=window.destroy, ) menubar.add_cascade(label="File", menu=fileMenu, underline=0) window.mainloop() | 2 | 3 |
78,246,775 | 2024-3-30 | https://stackoverflow.com/questions/78246775/how-can-i-change-the-groupby-column-to-find-the-first-row-that-meets-the-conditi | This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'main': ['x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'y', 'y', 'y'], 'sub': ['c', 'c', 'c', 'd', 'd', 'e', 'e', 'e', 'e', 'f', 'f', 'f', 'f', 'g', 'g', 'g'], 'num_1': [10, 9, 80, 80, 99, 101, 110, 222, 90, 1, 7, 10, 2, 10, 95, 10], 'num_2': [99, 99, 99, 102, 102, 209, 209, 209, 209, 100, 100, 100, 100, 90, 90, 90] } ) And This is my expected output. I want to add column result: main sub num_1 num_2 result 0 x c 10 99 101 1 x c 9 99 101 2 x c 80 99 101 3 x d 80 102 110 4 x d 99 102 110 5 x e 101 209 222 6 x e 110 209 222 7 x e 222 209 222 8 x e 90 209 222 9 y f 1 100 NaN 10 y f 7 100 NaN 11 y f 10 100 NaN 12 y f 2 100 NaN 13 y g 10 90 95 14 y g 95 90 95 15 y g 10 90 95 The mask is: mask = (df.num_1 > df.num_2) The process starts like this: a) The groupby column is sub b) Finding the first row that meets the condition of the mask for each group. c) Put the value of num_1 in the result If there are no rows that meets the condition of the mask, then the groupby column is changed to main to find the first row of mask. There is condition for this phase: The previous subs should not be considered when using main as the groupby column. An example of the above steps for group d in the sub column: a) sub is the groupby column. b) There are no rows in the d group that df.num_1 > df.num_2 So now for group d, its main group is searched. However group c is also in this main group. Since it is before group d, group c should not count for this step. In this image I have shown where those values come from: And this is my attempt. It partially solves the issue for some groups but not all of them: def step_a(g): mask = (g.num_1 > g.num_2) g.loc[mask.cumsum().eq(1) & mask, 'result'] = g.num_1 g['result'] = g.result.ffill().bfill() return g a = df.groupby('sub').apply(step_a) | IIUC, you can use numpy broadcasting to form a mask per "main" and use this to find the first num1>num2 while considering only the next groups: def find(g): # get sub as 0,1,2β¦ sub = pd.factorize(g['sub'])[0] # convert inputs to numpy n1 = g['num_1'].to_numpy() n2 = g.loc[~g['sub'].duplicated(), 'num_2'].to_numpy() # form mask # (n1[:, None] > n2) -> num_1 > num_2 # (sub[:, None] >= np.arange(len(n2))) -> exclude previous groups m = (n1[:, None] > n2) & (sub[:, None] >= np.arange(len(n2))) # find first True per column return pd.Series(np.where(m.any(0), n1[m.argmax(0)], np.nan)[sub], index=g.index) df['result'] = df.groupby('main', group_keys=False).apply(find) Note that you can easily tweak the masks to perform other logics (search in next n groups, exclude all previous groups except the immediate previous one, etc.). 
Outputs: # example 1 # example 2 (from comments) main sub num_1 num_2 result main sub num_1 num_2 result 0 x c 10 99 101.0 0 x d 10 102 110.0 1 x c 9 99 101.0 1 x d 9 102 110.0 2 x c 80 99 101.0 2 x c 80 99 101.0 3 x d 80 102 110.0 3 x c 80 99 101.0 4 x d 99 102 110.0 4 x c 99 99 101.0 5 x e 101 209 222.0 5 x e 101 209 222.0 6 x e 110 209 222.0 6 x e 110 209 222.0 7 x e 222 209 222.0 7 x e 222 209 222.0 8 x e 90 209 222.0 8 x e 90 209 222.0 9 y f 1 100 NaN 9 y f 1 100 NaN 10 y f 7 100 NaN 10 y f 7 100 NaN 11 y f 10 100 NaN 11 y f 10 100 NaN 12 y f 2 100 NaN 12 y f 2 100 NaN 13 y g 10 90 95.0 13 y g 10 90 95.0 14 y g 95 90 95.0 14 y g 95 90 95.0 15 y g 10 90 95.0 15 y g 10 90 95.0 Intermediate masks m, here for the second example: # main == 'x' # 99 102 209 array([[False, False, False], # 10 [False, False, False], # 9 [False, False, False], # 80 [False, False, False], # 80 [False, False, False], # 99 [False, True, False], # 101 [ True, True, False], # 110 [ True, True, True], # 222 [False, False, False]]) # 90 # out: 110 101 222 # main == 'y' # 100 90 array([[False, False], # 1 [False, False], # 7 [False, False], # 10 [False, False], # 1 [False, False], # 10 [False, True], # 95 [False, False]]) # 90 # out: NaN 95 | 2 | 1 |
78,246,703 | 2024-3-30 | https://stackoverflow.com/questions/78246703/getting-a-function-to-call-an-equation | I am creating a function to create an approximate solution for differential equations using Euler's method. I can get the code to work, but ran into trouble when trying to convert this into a function. Namely, I am having difficulty getting my function to correctly call the formula. This original code worked well: #define your inital values y = 1 x = 0 #h represents step size (incremental shift over the directional field) h = 0.1 solutions = [] x_values = [] #generates a list of 10 solutions, increase the number in the range to get more solutions for s in range (0, 10): #enter your differential equation diff_eq = x+y #Euler's method to approximate the solutions s = y + h*(diff_eq) solutions.append(s) #replace initial values with the new values y = s x = x + h #create a list of the x values x_values.append(x) #creating a dataframe from the solutions Euler_values = pd.DataFrame(list(zip(x_values, solutions)), columns = ['xβ', 'yβ']) #create an index for the dataframe starting at one and increasing by one Euler_values.index = Euler_values.index + 1 Euler_values.rename_axis('n', inplace = True) Euler_values This is the start of turning the above code into a function: #eulers method to approximate the solutions function def eulers_method(x, y, diff_eq, h, n): #empty solution and x_value lists to be used in the euler function solutions = [] x_values = [] for s in range (0, n): # eq = diff_eq <-- this does not work # if I call the equation here it does work, but I want it to be entered into the function eq = x + y s = y + h*(eq) #replace initial values with the new values and adds them to the solution and x_value lists y = s solutions.append(s) x = x + h x_values.append(x) #creates a dataframe from the solutions Euler_values = pd.DataFrame(list(zip(x_values, solutions)), columns = ['xβ', 'yβ']) #creates an index for the dataframe starting at one and increasing by one Euler_values.index = Euler_values.index + 1 Euler_values.rename_axis('n', inplace = True) return Euler_values #enter an initial x value, initial y value, the differential equation, the step size, and the number of solutions to generate # the diff_eq entry is giving me trouble eulers_method(0,1, x+y, 0.1, 10) | There are a couple of ways of dealing with your problem. The first (and IMO preferable) solution is just to pass a function to your eulers_method function i.e. 
def eulers_method(x, y, diff_eq, h, n): #empty solution and x_value lists to be used in the euler function solutions = [] x_values = [] for s in range (0, n): # compute the function eq = diff_eq(x, y) s = y + h*(eq) #replace initial values with the new values and adds them to the solution and x_value lists y = s solutions.append(s) x = x + h x_values.append(x) #creates a dataframe from the solutions Euler_values = pd.DataFrame(list(zip(x_values, solutions)), columns = ['xβ', 'yβ']) #creates an index for the dataframe starting at one and increasing by one Euler_values.index = Euler_values.index + 1 Euler_values.rename_axis('n', inplace = True) return Euler_values #enter an initial x value, initial y value, the differential equation, the step size, and the number of solutions to generate eulers_method(0, 1, lambda x, y:x+y, 0.1, 10) Output: xβ yβ n 1 0.1 1.100000 2 0.2 1.220000 3 0.3 1.362000 4 0.4 1.528200 5 0.5 1.721020 6 0.6 1.943122 7 0.7 2.197434 8 0.8 2.487178 9 0.9 2.815895 10 1.0 3.187485 The second, and not really desirable method is to pass a string version of the function, and eval it in the eulers_method function: def eulers_method(x, y, diff_eq, h, n): #empty solution and x_value lists to be used in the euler function solutions = [] x_values = [] for s in range (0, n): # compute the function eq = eval(diff_eq) s = y + h*(eq) #replace initial values with the new values and adds them to the solution and x_value lists y = s solutions.append(s) x = x + h x_values.append(x) #creates a dataframe from the solutions Euler_values = pd.DataFrame(list(zip(x_values, solutions)), columns = ['xβ', 'yβ']) #creates an index for the dataframe starting at one and increasing by one Euler_values.index = Euler_values.index + 1 Euler_values.rename_axis('n', inplace = True) return Euler_values #enter an initial x value, initial y value, the differential equation, the step size, and the number of solutions to generate eulers_method(0, 1, 'x+y', 0.1, 10) The output is the same. | 2 | 3 |
78,245,576 | 2024-3-29 | https://stackoverflow.com/questions/78245576/how-to-make-regex-code-apply-only-to-empty-target-cells | An example of my data StreetAddress City State Zip 1 Main St 01123 Winsted CT 1 Main St Winsted CT 01123 I am trying to use regex and pandas to clean a spreadsheet that I have. The problem I am running into is that my regex code is replacing every cell in the entire column even if there is valid data in it. I tried df['Zip'] = df['StreetAddress'].str.extract(r'(\d{5})') df['StreetAddress'] = df['StreetAddress'].str.replace(r'(\d{5})', '', regex=True) which gives me StreetAddress City State Zip 1 Main St Winsted CT 01123 1 Main St Winsted CT I was hoping for something more like this StreetAddress City State Zip 1 Main St Winsted CT 01123 1 Main St Winsted CT 01123 | I would use a boolean mask, this will avoid overwriting existing data, and also be more efficient since only the relevant rows will be evaluated: add = df['StreetAddress'].str.extract(r'(\d{5})', expand=False) m = add.notna() df.loc[m, 'Zip'] = add[m] df.loc[m, 'StreetAddress'] = (df.loc[m, 'StreetAddress'] .str.replace(r' *\d{5}', '', regex=True) ) Alternatively: df['Zip'] = df['StreetAddress'].str.extract(r'(\d{5})', expand=False).fillna(df['Zip']) Or, as suggested by @ouroboros1, to keep the original Zip: df['Zip'].fillna(df['StreetAddress'].str.extract(r'(\d{5})', expand=False)) Output: StreetAddress City State Zip 0 1 Main St Winsted CT 01123 1 1 Main St Winsted CT 01123 | 2 | 2 |
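A self-contained version of the masked approach, using the two rows from the question, for easy verification:

```python
import pandas as pd

df = pd.DataFrame({
    "StreetAddress": ["1 Main St 01123", "1 Main St"],
    "City": ["Winsted", "Winsted"],
    "State": ["CT", "CT"],
    "Zip": [None, "01123"],
})

add = df["StreetAddress"].str.extract(r"(\d{5})", expand=False)
m = add.notna()
df.loc[m, "Zip"] = add[m]
df.loc[m, "StreetAddress"] = df.loc[m, "StreetAddress"].str.replace(r" *\d{5}", "", regex=True)
print(df)   # both rows end with StreetAddress "1 Main St" and Zip "01123"
```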
78,240,251 | 2024-3-28 | https://stackoverflow.com/questions/78240251/python-script-using-bluetooth-running-on-windows-11-vs-raspberry-pi4 | I've run into an issue with a python script which performs differently when run on Windows 11 (through VS) versus running on a Raspberry Pi4 The script has been modified from a CLi script found to interact with Victron Energy hardware. Thanks to @ukbaz, the script ran and returned data from a bluetooth connected Victron device in my van. The script used a line: loop.run_forever() Great! works perfectly fine on both machines. I don't want to run forever, just for a 5 second duration, long enough to read the BT device and write to a txt file for future use. After introducing a change: the_end = time.time() + 5 while time.time() < the_end: loop.stop() loop.run_forever() This will run fine on Windows, read BT device, write to file. On the Rpi, script runs with no errors but doesn't read BT or write to file. Here is the full script from __future__ import annotations import inspect import json import logging import time from enum import Enum from typing import Set import asyncio from typing import List, Tuple from threading import Thread from bleak import BleakScanner from bleak.backends.device import BLEDevice from bleak.backends.scanner import AdvertisementData from victron_ble.devices import Device, DeviceData, detect_device_type from victron_ble.exceptions import AdvertisementKeyMissingError, UnknownDeviceError logger = logging.getLogger(__name__) class BaseScanner: def __init__(self) -> None: """Initialize the scanner.""" self._scanner: BleakScanner = BleakScanner( detection_callback=self._detection_callback ) self._seen_data: Set[bytes] = set() def _detection_callback(self, device: BLEDevice, advertisement: AdvertisementData): # Filter for Victron devices and instant readout advertisements data = advertisement.manufacturer_data.get(0x02E1) if not data or not data.startswith(b"\x10") or data in self._seen_data: return # De-duplicate advertisements if len(self._seen_data) > 1000: self._seen_data = set() self._seen_data.add(data) self.callback(device, data) def callback(self, device: BLEDevice, data: bytes): raise NotImplementedError() async def start(self): await self._scanner.start() async def stop(self): await self._scanner.stop() # An ugly hack to print a class as JSON class DeviceDataEncoder(json.JSONEncoder): def default(self, obj): if issubclass(obj.__class__, DeviceData): data = {} for name, method in inspect.getmembers(obj, predicate=inspect.ismethod): if name.startswith("get_"): value = method() if isinstance(value, Enum): value = value.name.lower() if value is not None: data[name[4:]] = value return data class Scanner(BaseScanner): def __init__(self, device_keys: dict[str, str] = {}): super().__init__() self._device_keys = {k.lower(): v for k, v in device_keys.items()} self._known_devices: dict[str, Device] = {} async def start(self): logger.info(f"Reading data for {list(self._device_keys.keys())}") await super().start() def get_device(self, ble_device: BLEDevice, raw_data: bytes) -> Device: address = ble_device.address.lower() if address not in self._known_devices: advertisement_key = self.load_key(address) device_klass = detect_device_type(raw_data) if not device_klass: raise UnknownDeviceError( f"Could not identify device type for {ble_device}" ) self._known_devices[address] = device_klass(advertisement_key) return self._known_devices[address] def load_key(self, address: str) -> str: try: return self._device_keys[address] except KeyError: 
raise AdvertisementKeyMissingError(f"No key available for {address}") def callback(self, ble_device: BLEDevice, raw_data: bytes): logger.debug( f"Received data from {ble_device.address.lower()}: {raw_data.hex()}" ) try: device = self.get_device(ble_device, raw_data) except AdvertisementKeyMissingError: return except UnknownDeviceError as e: logger.error(e) return parsed = device.parse(raw_data) blob = { "name": ble_device.name, "address": ble_device.address, "rssi": ble_device.rssi, "payload": parsed, } print(json.dumps(blob, cls=DeviceDataEncoder, indent=1)) ve_string = json.dumps(blob, cls=DeviceDataEncoder, indent=1) print(ve_string) #MAC_filename = "this_device" + ".txt" #print(f"MAC filename: {MAC_filename}") this_file = open("this_device.txt", "w") this_file.write(ve_string) this_file.close() print("file written") time.sleep(3) class DiscoveryScanner(BaseScanner): def __init__(self) -> None: super().__init__() self._seen_devices: Set[str] = set() def callback(self, device: BLEDevice, advertisement: bytes): if device.address not in self._seen_devices: logger.info(f"{device}") self._seen_devices.add(device.address) class DebugScanner(BaseScanner): def __init__(self, address: str): super().__init__() self.address = address async def start(self): logger.info(f"Dumping advertisements from {self.address}") await super().start() def callback(self, device: BLEDevice, data: bytes): if device.address.lower() == self.address.lower(): logger.info(f"{time.time():<24}: {data.hex()}") def my_scan(device_keys: List[Tuple[str, str]]): loop = asyncio.get_event_loop() async def scan(keys): scanner = Scanner(keys) await scanner.start() asyncio.ensure_future(scan({k: v for k, v in device_keys})) the_end = time.time() + 5 while time.time() < the_end: loop.stop() loop.run_forever() if __name__ == '__main__': my_scan([("d5:55:aa:4d:99:33","149c3c2865054b71962dcb06866524a9")]) I have tried moving from run_forever to run_until_complete This method failed spectacularly on the Rpi :( so reverted back Why would the script behave differently between 2 machines and how can I replicate on RPi? Many thanks for any help | I suspect that what you want to do is only exit after the file has been written. Maybe you could use the asyncio event capability? https://docs.python.org/3/library/asyncio-sync.html#event Create an event in the global scope by placing the following at the top of the file after the imports. file_written_event = asyncio.Event() Then add file_written_event.set() after the file is written. e.g. this_file = open("this_device.txt", "w") this_file.write(ve_string) this_file.close() print("file written") file_written_event.set() The bottom of your file would change the most. It does mostly the same, but has a while loop waiting for the event to happen before exiting. async def my_scan(device_keys: List[Tuple[str, str]]): async def scan(keys): scanner = Scanner(keys) await scanner.start() asyncio.ensure_future(scan({k: v for k, v in device_keys})) while not file_written_event.is_set(): await asyncio.sleep(0.1) if __name__ == '__main__': logging.basicConfig() logger.setLevel(logging.DEBUG) asyncio.run(my_scan([("d5:55:aa:4d:99:33","149c3c2865054b71962dcb06866524a9")])) | 2 | 1 |
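The same pattern, reduced to a hardware-free sketch (the names and the 10-second timeout are arbitrary), can be useful for testing the control flow without the BLE hardware: a worker task sets an asyncio.Event when its work is done, and the main coroutine waits on it with asyncio.wait_for instead of polling in a while loop.

```python
import asyncio

async def worker(done: asyncio.Event):
    await asyncio.sleep(1)       # stands in for "scan until the advertisement arrives"
    print("file written")        # stands in for writing this_device.txt
    done.set()

async def main():
    done = asyncio.Event()
    task = asyncio.create_task(worker(done))
    try:
        await asyncio.wait_for(done.wait(), timeout=10)   # safety net if nothing arrives
    except asyncio.TimeoutError:
        print("no data received before the timeout")
    finally:
        task.cancel()

asyncio.run(main())
```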
78,244,496 | 2024-3-29 | https://stackoverflow.com/questions/78244496/scraping-mlb-daily-lineups-from-rotowire-using-python | I am trying to scrape the MLB daily lineup information from here: https://www.rotowire.com/baseball/daily-lineups.php I am trying to use python with requests, BeautifulSoup and pandas. My ultimate goal is to end up with two pandas data frames. First is a starting pitching data frame: date game_time pitcher_name team lineup_throws 2024-03-29 1:40 PM ET Spencer Strider ATL R 2024-03-29 1:40 PM ET Zack Wheeler PHI R Second is a starting batter data frame: date game_time batter_name team pos batting_order lineup_bats 2024-03-29 1:40 PM ET Ronald Acuna ATL RF 1 R 2024-03-29 1:40 PM ET Ozzie Albies ATL 2B 2 S 2024-03-29 1:40 PM ET Austin Riley ATL 3B 3 R 2024-03-29 1:40 PM ET Kyle Schwarber PHI DH 1 L 2024-03-29 1:40 PM ET Trea Turner PHI SS 2 R 2024-03-29 1:40 PM ET Bryce Harper PHI 1B 3 L This would be for all game for a given day. I've tried adapting this answer to my needs but can't seem to get it to quite work: Scraping Web data using BeautifulSoup Any help or guidance is greatly appreciated. Here is the code from the link I am trying to adapt, but can't seem to make progress: import pandas as pd import requests from bs4 import BeautifulSoup url = "https://www.rotowire.com/baseball/daily-lineups.php" soup = BeautifulSoup(requests.get(url).content, "html.parser") weather = [] for tag in soup.select(".lineup__bottom"): header = tag.find_previous(class_="lineup__teams").get_text( strip=True, separator=" vs " ) rain = tag.select_one(".lineup__weather-text > b") forecast_info = rain.next_sibling.split() temp = forecast_info[0] wind = forecast_info[2] weather.append( {"Header": header, "Rain": rain.text.split()[0], "Temp": temp, "Wind": wind} ) df = pd.DataFrame(weather) print(df) The information I want seems to be contained in lineup__main and not in lineup__bottom. | You have to iterate the boxes and select all your expected features. import pandas as pd import requests from bs4 import BeautifulSoup url = "https://www.rotowire.com/baseball/daily-lineups.php" soup = BeautifulSoup(requests.get(url).content, "html.parser") data_pitiching = [] data_batter = [] team_type = '' for e in soup.select('.lineup__box ul li'): if team_type != e.parent.get('class')[-1]: order_count = 1 team_type = e.parent.get('class')[-1] if e.get('class') and 'lineup__player-highlight' in e.get('class'): data_pitiching.append({ 'date': e.find_previous('main').get('data-gamedate'), 'game_time': e.find_previous('div', attrs={'class':'lineup__time'}).get_text(strip=True), 'pitcher_name':e.a.get_text(strip=True), 'team':e.find_previous('div', attrs={'class':team_type}).next.strip(), 'lineup_throws':e.span.get_text(strip=True) }) elif e.get('class') and 'lineup__player' in e.get('class'): data_batter.append({ 'date': e.find_previous('main').get('data-gamedate'), 'game_time': e.find_previous('div', attrs={'class':'lineup__time'}).get_text(strip=True), 'pitcher_name':e.a.get_text(strip=True), 'team':e.find_previous('div', attrs={'class':team_type}).next.strip(), 'pos': e.div.get_text(strip=True), 'batting_order':order_count, 'lineup_bats':e.span.get_text(strip=True) }) order_count+=1 df_pitching = pd.DataFrame(data_pitiching) df_batter = pd.DataFrame(data_batter) date game_time pitcher_name team lineup_throws 0 2024-03-29 1:40 PM ET Freddy Peralta Brewers R 1 2024-03-29 1:40 PM ET Jose Quintana Mets L .. 
19 2024-03-29 10:10 PM ET Bobby Miller Dodgers R date game_time pitcher_name team pos batting_order lineup_bats 0 2024-03-29 1:40 PM ET J. Chourio Brewers RF 1 R 1 2024-03-29 1:40 PM ET W. Contreras Brewers C 2 R ... 178 2024-03-29 10:10 PM ET E. Hernandez Dodgers CF 8 R 179 2024-03-29 10:10 PM ET Gavin Lux Dodgers 2B 9 L | 2 | 2 |
78,240,915 | 2024-3-28 | https://stackoverflow.com/questions/78240915/saving-a-scipy-sparse-matrix-directly-as-a-regular-txt-file | I have a scipy.sparse matrix (csr_matrix()). But I need to save it to a file not in the .npz format but as a regular .txt or .csv file. My problem is that I don't have enough memory to convert the sparse matrix into a regular np.array() and then save it to a file. Is there a way to have the data as a sparse matrix in memory but save it directly as a regular matrix in the form: 0 0 0 0 1 0 1 0 1 to the disk? Or is there a way to "unzip" a .npz file without loading it into memory inside Python? (like for example gunzip or unzip in Bash). | Answer to new question: import numpy as np from scipy import sparse, io A = sparse.eye(5, format='csr') * np.pi np.set_printoptions(precision=16, linewidth=1000) with open('matrix.txt', 'a') as f: for row in A: f.write(str(row.toarray()[0])) f.write('\n') # [3.141592653589793 0. 0. 0. 0. ] # [0. 3.141592653589793 0. 0. 0. ] # [0. 0. 3.141592653589793 0. 0. ] # [0. 0. 0. 3.141592653589793 0. ] # [0. 0. 0. 0. 3.141592653589793] And with begin/end brackets: import numpy as np from scipy import sparse, io A = sparse.eye(5, format='csr') * np.pi np.set_printoptions(precision=16, linewidth=1000) with open('matrix.txt', 'a') as f: for i, row in enumerate(A): f.write('[' if (i == 0) else ' ') f.write(str(row.toarray()[0])) f.write(']' if (i == A.shape[0] - 1) else '\n') # [[3.141592653589793 0. 0. 0. 0. ] # [0. 3.141592653589793 0. 0. 0. ] # [0. 0. 3.141592653589793 0. 0. ] # [0. 0. 0. 3.141592653589793 0. ] # [0. 0. 0. 0. 3.141592653589793]] You may have to fiddle with set_printoptions depending on your data. Answer to original question, which did not require that the matrix be written as dense. Harwell-Boeing format is plain text: import numpy as np from scipy import sparse, io A = sparse.eye(3, format='csr') * np.pi # Default title 0 # 3 1 1 1 # RUA 3 3 3 0 # (40I2) (40I2) (3E25.16) # 1 2 3 4 # 1 2 3 # 3.1415926535897931E+00 3.1415926535897931E+00 3.1415926535897931E+00 io.hb_write('matrix.txt', A) # saves as matrix.txt A2 = io.hb_read('matrix.txt') assert not (A2 != A).nnz # efficient check for equality So is Matrix Market: io.mmwrite('matrix', A) # saves as matrix.mtx # %%MatrixMarket matrix coordinate real symmetric # % # 3 3 3 # 1 1 3.141592653589793e+00 # 2 2 3.141592653589793e+00 # 3 3 3.141592653589793e+00 A2 = io.mmread('matrix') assert not (A2 != A).nnz If you want an even simpler format, although it involves more code: import numpy as np from scipy import sparse A = sparse.eye(10, format='csr')*np.pi np.savetxt('data.txt', A.data) np.savetxt('indices.txt', A.indices, fmt='%i') np.savetxt('indptr.txt', A.indptr, fmt='%i') To load: data = np.loadtxt('data.txt') indices = np.loadtxt('indices.txt', dtype=np.int32) indptr = np.loadtxt('indptr.txt', dtype=np.int32) A2 = sparse.csr_matrix((data, indices, indptr)) assert not (A2 != A).nnz But the important idea is that all you need to save are the data, indices, and indptr attributes of the csr_matrix. | 2 | 2 |
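If plain numeric rows (rather than the bracketed repr) are wanted, a memory-light alternative sketch is to hand np.savetxt an open file handle and densify only a block of rows at a time; the block size here is arbitrary.

```python
import numpy as np
from scipy import sparse

A = sparse.eye(5, format='csr') * np.pi
block = 1000                                   # number of rows densified at a time

with open('matrix.txt', 'w') as f:
    for start in range(0, A.shape[0], block):
        dense_rows = A[start:start + block].toarray()   # small dense slab only
        np.savetxt(f, dense_rows, fmt='%.16g')
```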
78,244,582 | 2024-3-29 | https://stackoverflow.com/questions/78244582/parsing-strings-with-numbers-and-si-prefixes-in-polars | Say I have this dataframe: >>> import polars >>> df = polars.DataFrame(dict(j=['1.2', '1.2k', '1.2M', '-1.2B'])) >>> df shape: (4, 1) ┌───────┐ │ j │ │ --- │ │ str │ ╞═══════╡ │ 1.2 │ │ 1.2k │ │ 1.2M │ │ -1.2B │ └───────┘ How would I go about parsing the above to get: >>> df = polars.DataFrame(dict(j=[1.2, 1_200, 1_200_000, -1_200_000_000])) >>> df shape: (4, 1) ┌───────────┐ │ j │ │ --- │ │ f64 │ ╞═══════════╡ │ 1.2 │ │ 1200.0 │ │ 1.2e6 │ │ -1.2000e9 │ └───────────┘ >>> | You can use str.extract() and str.strip_chars() to split the parts and then get the resulting number by using Expr.replace() + Expr.pow(): df.with_columns( pl.col('j').str.strip_chars('kMB').cast(pl.Float32) * pl.lit(10).pow( pl.col('j').str.extract(r'(k|M|B)').replace(['k','M','B'],[3,6,9]).fill_null(0) ) ) ┌──────────────┐ │ j │ │ --- │ │ f64 │ ╞══════════════╡ │ 1.2 │ │ 1200.000048 │ │ 1.2000e6 │ │ -1.2000e9 │ └──────────────┘ | 4 | 3 |
78,238,201 | 2024-3-28 | https://stackoverflow.com/questions/78238201/why-does-it-provide-two-different-outputs-with-if2-if3 | I am testing the m.if3 function in gekko by using conditional statements of if-else, but I get two different outputs. The optimal number I get from the code below is 12. I plug that into the next code with the if-else statement to ensure that the cost matches up, but it does not. Am I using if3/if2 incorrectly? The rate is 0.1 for the first 5 days and it switches to 0.3 for the remaining 45 days. I get different outputs even though I am doing the same thing in both of them. I tried everything from using if-else statements to using if2 statements. | The if2 or if3 function isn't needed because the switching argument duration-5 is a constant value that is not a function of a Gekko variable. Just like the validation script, the two segments can be calculated separately and added together to get a total cost and patient count. from gekko import GEKKO m = GEKKO(remote=False) # parameters cost_p = 12 cost_s = 9 var1 = 50 duration = 50 x = m.Var(integer=True, lb=1) rate1 = 0.3 rate2 = 0.1 cost1 = m.Intermediate((rate1 * cost_p * 5 + cost_s) * x) cost2 = m.Intermediate((rate2 * cost_p * (duration-5) + cost_s) * x) cost = m.Intermediate(cost1+cost2) countp1 = m.Intermediate(rate1 * 5 * x) countp2 = m.Intermediate(rate2 * (duration-5) * x) p_count = m.Intermediate(countp1+countp2) m.Minimize(cost) m.Equation(p_count >= var1) m.options.SOLVER = 1 # for MINLP solution m.solve(disp=False) num_s = x.value[0] print(f'num_s = {num_s}') print(f'cost: {cost.value[0]}') print(f'p_count: {p_count.value[0]}') The optimal solution is: num_s = 9.0 cost: 810.0 p_count: 54.0 The solution validation agrees with this answer: # Solution validation # Parameters cost_s = 9 cost_p = 12 num_p = 50 duration = 50 if duration > 5: rate = 0.1 else: rate = 0.3 x = 9 cost1 = (0.1 * cost_p * 45 + cost_s) * x cost2 = (0.3 * cost_p * 5 + cost_s) * x cost = cost1 + cost2 countp1 = 0.3 * 5 * x countp2 = 0.1 * 45 * x countp = countp1 + countp2 print(f'cost (validation): {cost}') print(f'count (validation): {countp}') | 2 | 0 |
78,233,914 | 2024-3-27 | https://stackoverflow.com/questions/78233914/calculate-relative-volume-ratio-indicator-in-pandas-data-frame-and-add-the-indic | I know there have been a few posts on this, but my case is a little bit different and I wanted to get some help on this. I have a pandas dataframe symbol_df with 1 min bars in the below format for each stock symbol: id Symbol_id Date Open High Low Close Volume 1 1 2023-12-13 09:15:00 4730.95 4744.00 4713.95 4696.40 2300 2 1 2023-12-13 09:16:00 4713.20 4723.70 4717.85 4702.55 1522 3 1 2023-12-13 09:17:00 4716.40 4718.55 4701.00 4701.00 909 4 1 2023-12-13 09:18:00 4700.15 4702.80 4696.70 4696.00 715 5 1 2023-12-13 09:19:00 4696.70 4709.90 4702.00 4696.10 895 ... ... ... ... ... ... ... ... 108001 1 2024-03-27 13:44:00 6289.95 6291.95 6289.00 6287.55 989 108002 1 2024-03-27 13:45:00 6288.95 6290.85 6289.00 6287.75 286 108003 1 2024-03-27 13:46:00 6291.25 6293.60 6292.05 6289.10 1433 108004 1 2024-03-27 13:47:00 6295.00 6299.00 6293.20 6293.15 2702 108005 1 2024-03-27 13:48:00 6292.05 6296.55 6291.95 6291.95 983 I would like to calculate the "Relative Volume Ratio" indicator and add this calculated value to the symbol_df as a new column on a rolling basis. "Relative volume ratio" indicator calculated as below: So far today's Volume is compared with the mean volume of the last 10 days of the same period. To get the ratio value, we simply divide "today so far volume" by "mean volume of the last 10 days of the same period". For example..the current bar time is now 13:48. cumulativeVolumeOfToday = Volume of 1 minuite bars between 00:00 -13:48 today added up avergeVolumeOfPreviousDaysOfSamePeriod = Average accumulation of volume from the same period(00:00 - 13:48) over the last 10 days. relativeVolumeRatio = CumulativeVolumeOfToday/AvergeVolumeOfPrevious10DaysOfSamePeriod Add this value as a new column to the dataframe. Sample data download for the test case: import yfinance as yf #pip install yfinance from datetime import datetime import pandas as pd symbol_df = yf.download(tickers="AAPL", period="7d", interval="1m")["Volume"] symbol_df=symbol_df.reset_index(inplace=False) #symbol_df['Datetime'] = symbol_df['Datetime'].dt.strftime('%Y-%m-%d %H:%M') symbol_df = symbol_df.rename(columns={'Datetime': 'Date'}) #We can only download 7 days sample data. So 5 days mean for calculations How can I do this in Pandas? | TL;DR from yfinance import download # Prepare data similar to the original symbol_df = ( download(tickers="AAPL", period="7d", interval="1m") .rename_axis(index='Date') .reset_index() ) # Calculate Relative Volume Ratio volume = symbol_df.set_index('Date')['Volume'] dts = volume.index cum_volume = volume.groupby(dts.date, sort=False).cumsum() prev_mean = lambda days: ( cum_volume .groupby(dts.time, sort=False) .rolling(days, closed='left') .mean() .reset_index(0, drop=True) # drop the level with dts.time ) rvr = cum_volume / prev_mean(5) # Assign the output to the initial data symbol_df = symbol_df.join(rvr.rename('Relative volume ratio'), on='Date') Explanation Based on the provided description, you need to perform several transformations on the aggregated data. First is to cumulatively summarize the data for each day. Then run a [ten]-day window over the data grouped by time of day to calculate the average. And at the end, actually divide the former by the latter. 
Let's say, you have the following test data, where "Date" is a column of type datetime: from yfinance import download symbol_df = ( download(tickers="AAPL", period="7d", interval="1m") .rename_axis(index='Date') .reset_index() ) To calculate the Relative Volume Ratio values, we will use "Volume" as a separate sequence with date-time stamps "Date" as its index: volume = symbol_df.set_index('Date')['Volume'] dts = volume.index # date-time stamps for convenient grouping Let's create a sequence of cumulative volumes for each day. For this, we group volume by its date (the year, month and day values with no time) and apply cumsum to a group (use sort=False in hopes to speed up calculations): cum_volume = volume.groupby(dts.date, sort=False).cumsum() To calculate the mean of cumulative volumes at the same time of day in the given number of previous days, we group cum_volume by its time (hours and minutes with no year, month, day values), and apply rolling calculations to each group to obtain averages over windows. Note that here we need the source data to be sorted by date-time stamps since only business days are taken into account and we can't use a non-fixed frequency of "10B" as a window value. To calculate means for exactly the previous days excluding the current one, we pass closed='left' (see DataFrameGroupBy.rolling docs for details): prev_mean = lambda days: ( cum_volume .groupby(dts.time, sort=False) .rolling(days, closed='left') .mean() .reset_index(0, drop=True) ) Now the final touch with the window of 5 days: rvr = cum_volume / prev_mean(5) Comparison Compared to Andrei Kesely's solution, this one wins in speed (on Intel Core i3-2100, for example, processing the data offered there will take over 1 minute versus 300-400 ms with the code above). The calculation result is the same for timestamps after the first 10 days. But in the beginning, when there's less then 10 previous days, calculation of mean in rolling windows is made as if there's always 10 items (missing values are set to nan). Whereas in the case of the Kesely's solution, we obtain average values only for the available cumulative volumes. | 2 | 1 |
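A quick spot check of what the ratio in the answer above means at the most recent bar, assuming volume, cum_volume and rvr from that code are in scope and at least five earlier sessions contain the same minute:

    ts = cum_volume.index[-1]
    same_minute = cum_volume[cum_volume.index.time == ts.time()]
    manual = cum_volume.loc[ts] / same_minute.iloc[-6:-1].mean()
    print(manual, rvr.loc[ts])  # the two numbers should agree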
78,243,747 | 2024-3-29 | https://stackoverflow.com/questions/78243747/how-to-use-two-key-functions-when-sorting-a-multiindex-dataframe | In this call to df.sort_index() on a MultiIndex dataframe, how to use func_2 for level two? func_1 = lambda s: s.str.lower() func_2 = lambda x: np.abs(x) m_sorted = df_multi.sort_index(level=['one', 'two'], key=func_1) The documentation says "For MultiIndex inputs, the key is applied per level", which is ambiguous. import pandas as pd import numpy as np np.random.seed(3) # Create multiIndex choice = lambda a, n: np.random.choice(a, n, replace=True) df_multi = pd.DataFrame({ 'one': pd.Series(choice(['a', 'B', 'c'], 8)), 'two': pd.Series(choice([1, -2, 3], 8)), 'A': pd.Series(choice([2,6,9,7] ,8)) }) df_multi = df_multi.set_index(['one', 'two']) # Sort MultiIndex func_1 = lambda s: s.str.lower() func_2 = lambda x: np.abs(x) m_sorted = df_multi.sort_index(level=['one'], key=func_1) | sort_index takes a unique function as key that would be used for all levels. That said, you could use a wrapper function to map the desired sorting function per level name: def sorter(level, default=lambda x: x): return { 'one': lambda s: s.str.lower(), 'two': np.abs, }.get(level.name, default)(level) df_multi.sort_index(level=['one', 'two'], key=sorter) NB. in case of no match a default function is used that returns the level unchanged. Another option with numpy.lexsort instead of sort_index: # levels, functions in desired sorting order sorters = [('one', lambda s: s.str.lower()), ('two', np.abs)] out = df_multi.iloc[np.lexsort([f(df_multi.index.get_level_values(lvl)) for lvl, f in sorters[::-1]])] lexsort uses the major keys last, thus the [::-1] Output: A one two a 1 6 -2 2 3 7 B 1 6 -2 7 -2 7 3 2 3 6 | 5 | 5 |
78,243,115 | 2024-3-29 | https://stackoverflow.com/questions/78243115/calculating-based-on-rows-conditions-in-pandas | I encountered the following problem: I have a pandas dataframe that looks like this. id_tranc sum bid 1 4000 2.3% 1 20000 3.5% 2 100000 if >=100 000 - 1.6%, if < 100 000 - 100$ 3 30000 if >=100 000 - 1.6%, if < 100 000 - 100$ 1 60000 500$ code_to_create_dataset: dataframe = pd.DataFrame({ 'id_tranc': [1, 1, 2, 3, 1], 'sum': [4000, 20000, 100000, 30000, 60000], 'bid': ['2.3%', '3.5%', 'if >=100 000 - 1.6%, if < 100 000 - 100$', 'if >=100 000 - 1.6%, if < 100 000 - 100$', '500$']}) Necessary to calculated 'commission', depending columns 'sum' and 'bid'. Final dataframe should be look like: id_tranc sum bid comission 1 4000 2.3% 92 1 20000 3.5% 700 2 100000 if >=100 000 - 1.6%, if < 100 000 - 100$ 1600 3 30000 if >=100 000 - 1.6%, if < 100 000 - 100$ 100 1 60000 500$ 500 If calculated with df['commission'] = df['sum'] * df['bid'] - getting result only for first 2 record. Please tell me how to do this correctly. | I would write a small parser based on a regex and operator: from operator import ge, lt, gt, le import re def logic(value, bid): # define operators, add other ones if needed ops = {'>=': ge, '>': gt, '<': lt, '<=': le} # remove spaces, split conditions on comma conditions = bid.replace(' ', '').split(',') # then loop over them, the first match will be used for cond in conditions: # extract operator, threshold, commission, unit m = re.search('(?:if(\W+)(\d+)-)?(\d+\.?\d*)([%$])', cond) if not m: # if no match, ignore continue op, thresh, com, unit = m.groups() # if no condition or condition is valid if (not op) or (op and ops[op](value, float(thresh))): if unit == '%': # handle % case return value * float(com)/100 elif unit == '$': # handle fixed com return float(com) df['comission'] = [logic(val, bid) for val, bid in zip(df['sum'], df['bid'])] # or with apply, which is less efficient # df['comission'] = df.apply(lambda row: logic(row['sum'], row['bid']), axis=1) Output: id_tranc sum bid comission 0 1 4000 2.3% 92.0 1 1 20000 3.5% 700.0 2 2 100000 if >=100 000 - 1.6%, if < 100 000 - 100$ 1600.0 3 3 30000 if >=100 000 - 1.6%, if < 100 000 - 100$ 100.0 4 1 60000 500$ 500.0 Regex: regex demo (?:if(\W+)(\d+)-)? # optionally match a condition (operator and threshold) (\d+\.?\d*) # match the value of the commission ([%$]) # match type of commission (% or $) Reproducible input: df = pd.DataFrame({'id_tranc': [1, 1, 2, 3, 1], 'sum': [4000, 20000, 100000, 30000, 60000], 'bid': ['2.3%', '3.5%', 'if >=100 000 - 1.6%, if < 100 000 - 100$', 'if >=100 000 - 1.6%, if < 100 000 - 100$', '500$']}) | 2 | 2 |
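A few quick checks of the logic() parser from the answer above on hypothetical bid strings, to confirm each branch (percentage, flat fee, and both sides of the threshold):

    cases = [("2.3%", 4000), ("500$", 60000),
             ("if >=100 000 - 1.6%, if < 100 000 - 100$", 99999),
             ("if >=100 000 - 1.6%, if < 100 000 - 100$", 250000)]
    for bid, value in cases:
        print(bid, value, logic(value, bid))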
78,240,481 | 2024-3-28 | https://stackoverflow.com/questions/78240481/i-am-encountering-an-f2py-dimension-error-when-passing-numpy-array-to-fortran | I have been trying to wrap a fortran module that takes several 1-dimensional arrays and returns calculated values CTP and HILOW. subroutine ctp_hi_low ( nlev_in, tlev_in, qlev_in, plev_in, & t2m_in , q2m_in , psfc_in, CTP, HILOW , missing ) implicit none ! ! Input/Output Variables ! integer, intent(in ) :: nlev_in ! *** # of pressure levels real(4), intent(in ) :: missing ! *** missing value - useful for obs real(4), intent(in ), dimension(nlev_in) :: tlev_in ! *** air temperature at pressure levels [K] real(4), intent(in ), dimension(nlev_in) :: qlev_in ! *** specific humidity levels [kg/kg] real(4), intent(in ), dimension(nlev_in) :: plev_in ! *** pressure levels [Pa] real(4), intent(in ) :: psfc_in ! *** surface pressure [Pa] real(4), intent(in ) :: t2m_in ! *** 2-meter temperature [K] real(4), intent(in ) :: q2m_in ! *** 2-meter specific humidity [kg/kg] real(4), intent(out) :: CTP ! *** Convective Triggering Potential [K] real(4), intent(out) :: HILOW ! *** Low-level humidity [K] <internal variables and calculations> The wrapping was successful, but when running the program I receive the following error. I have been passing dummy data to ensure that all arrays have the same shape. Traceback (most recent call last): File "/mnt/d/ResearchProjects/NSF-Drought/coupling-metrics/minimal/test.py", line 47, in <module> conv_trig_pot_mod.ctp_hi_low(10, arr, arr, arr, 1,1, 1) ValueError: ctp_hilow.ctp_hilow.conv_trig_pot_mod.ctp_hi_low: failed to create array from the 2nd argument `qlev_in` -- 0-th dimension must be fixed to 1 but got 10 I have been trying to pass my 1-D arrays in varying dimensions, but all of them produce the same error. arr = np.ones([10,], order='F') arr = np.ones([10,1], order='F') arr = np.ones([1,10], order='F') I also have been trying to pass a list. I simply cannot figure out how to satisfy f2py's shape requirement. Any help is appreciated | It is better to call fortran subroutine from python with explicit argument name: conv_trig_pot_mod.ctp_hi_low(nlev_in=10, tlev_in=arr, qlev_in=arr, plev_in=arr, t2m_in=1, q2m_in=1, psfc_in=????, missing=1) The order of arguments differs between Fortran and Python. you can check the new arguments list and order __doc__ in python: # import may be different for different python version from xxxx import yyyy as ctp_module print(ctp_module.ctp_hi_low.__doc__) | 2 | 1 |
78,240,642 | 2024-3-28 | https://stackoverflow.com/questions/78240642/python-global-variables-in-recursion-get-different-result | I have this code, which prints 1: s = 0 def dfs(n): global s if n > 10: return 0 s += dfs(n + 1) return n dfs(0) print(s) If I modify dfs like thisοΌ def dfs(n): global s if n > 10: return 0 i = dfs(n + 1) s += i return n it will print 55 I know what is a better way to write the dfs. I just want to know why the value of s is different after the call of two dfs | Python is interpreted and executed from top to bottom, so in the first version you have: s += dfs(n + 1) which is exact as: s = s + dfs(n + 1) So when you do this recursively you have on stack these commands: s = 0 + dfs(1) # <-- dfs(1) will return 1 s = 0 + dfs(2) # <-- dfs(2) will return 2 ... s = 0 + dfs(9) # <-- dfs(9) will return 9 s = 0 + dfs(10) # <-- dfs(10) will return 10 s = 0 + dfs(11) # <-- dfs(11) will return 0 So when you observe, the last step is s = 0 + 1 and you see the 1 as final result. The second version i = dfs(n + 1) s += i You assign to s after dfs(n + 1) is evaluated so you see the final answer 55 NOTE: If you rewrite the first version s += dfs(n + 1) to s = dfs(n + 1) + s you will see the result 55 too. | 4 | 8 |
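One way to see the evaluation order described in the answer above is to disassemble a tiny function: the bytecode loads s before calling g, so the pre-call value of s is what gets added (sketch for any CPython 3.x):

    import dis

    def f():
        s = 0
        s = s + g(1)  # g is only looked up at run time, so it need not exist here

    dis.dis(f)  # the LOAD of s appears before the CALL, then the add, then the store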
78,238,674 | 2024-3-28 | https://stackoverflow.com/questions/78238674/how-to-convert-rs-tukeys-hsd-table-into-correlation-matrix-in-python-using-pan | I have recently exported a table from R's TukeyHSD test to obtain the p-values for various time groups (0, 5, 10, 20, 30, 40, 50, 60). I'm curious if there's a method to transform this into a correlation matrix, where each axis represents the time groups and corresponds to the respective p-value. The table includes an index indicating the correspondence between the different time groups (e.g., 5-10 or 10-50). I've imported it as a dataframe into Python. Is there a way to rearrange the dataframe as depicted below? p adj Groups 50-0 2.815526e-13 60-0 2.855494e-13 20-0 4.764197e-08 50-5 1.712389e-05 50-10 1.483440e-04 50-40 1.643480e-04 60-5 5.873007e-04 60-10 5.218047e-03 60-40 5.613566e-03 10-0 6.878476e-03 40-0 1.270855e-02 20-5 7.380859e-02 50-20 1.574372e-01 40-20 3.264569e-01 20-10 3.369147e-01 5-0 3.816166e-01 60-50 7.301423e-01 60-20 8.503578e-01 10-5 9.731384e-01 40-5 9.820983e-01 40-10 1.000000e+00 I want it to be something like: 0 5 10 20 ... 0 ... ... ... ... 5 ... ... ... ... 10 ... ... ... ... 20 ... ... ... ... ... I haven't found anything similar online, so I don't know where to start. | Try: df[["x", "y"]] = df.index.str.split("-", expand=True).to_frame().astype(int).values print(pd.crosstab(df["x"], df["y"], df["p adj"], aggfunc="first")) Prints: y 0 5 10 20 40 50 x 5 3.816166e-01 NaN NaN NaN NaN NaN 10 6.878476e-03 0.973138 NaN NaN NaN NaN 20 4.764197e-08 0.073809 0.336915 NaN NaN NaN 40 1.270855e-02 0.982098 1.000000 0.326457 NaN NaN 50 2.815526e-13 0.000017 0.000148 0.157437 0.000164 NaN 60 2.855494e-13 0.000587 0.005218 0.850358 0.005614 0.730142 | 2 | 1 |
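If a fully symmetric matrix is wanted, one hedged extension of the crosstab result in the answer above is to mirror it across the diagonal:

    m = pd.crosstab(df["x"], df["y"], df["p adj"], aggfunc="first")
    full = m.combine_first(m.T)  # fills the other triangle from the transpose
    print(full)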
78,239,380 | 2024-3-28 | https://stackoverflow.com/questions/78239380/polars-apply-same-custom-function-to-multiple-columns-in-group-by | What's the best way to apply a custom function to multiple columns in Polars? Specifically I need the function to reference another column in the dataframe. Say I have the following: df = pl.DataFrame({ 'group': [1,1,2,2], 'other': ['a', 'b', 'a', 'b'], 'num_obs': [10, 5, 20, 10], 'x': [1,2,3,4], 'y': [5,6,7,8], }) And I want to group by group and calculate an average of x and y, weighted by num_obs. I can do something like this: variables = ['x', 'y'] df.group_by('group').agg((pl.col(var) * pl.col('num_obs')).sum()/pl.col('num_obs').sum() for var in variables) but I'm wondering if there's a better way. Also, I don't know how to add other aggregations to this approach, but is there a way that I could also add pl.sum('n_obs')? | You can just pass a list of columns into pl.col(): df.group_by('group').agg( (pl.col('x','y') * pl.col('num_obs')).sum() / pl.col('num_obs').sum(), pl.col('num_obs').sum() ) ┌───────┬──────────┬──────────┬─────────┐ │ group │ x │ y │ num_obs │ │ --- │ --- │ --- │ --- │ │ i64 │ f64 │ f64 │ i64 │ ╞═══════╪══════════╪══════════╪═════════╡ │ 1 │ 1.333333 │ 5.333333 │ 15 │ │ 2 │ 3.333333 │ 7.333333 │ 30 │ └───────┴──────────┴──────────┴─────────┘ | 2 | 2
78,232,295 | 2024-3-27 | https://stackoverflow.com/questions/78232295/manually-set-values-shown-in-legend-for-continuous-variable-of-seaborn-matplotli | Is there a way to manually set the values shown in the legend of a seaborn (or matplotlib) scatterplot when the legend contains a continuous variable (hue)? For example, in the plot below I might like to show the colors corresponding to values of [0, 1, 2, 3] rather than [1.5, 3, 4.5, 6, 7.5] np.random.seed(123) x = np.random.randn(500) y = np.random.randn(500) z = np.random.exponential(1, 500) fig, ax = plt.subplots() hue_norm = (0, 3) sns.scatterplot( x=x, y=y, hue=z, hue_norm=hue_norm, palette='coolwarm', ) ax.grid() ax.set(xlabel="x", ylabel="y") ax.legend(title="z") sns.despine() | Seaborn creates its scatterplot a bit different than matplotlib. That way, the scatterplot can be customized in more ways. For the legend, Seaborn 0.13 employs custom Line2D elements (older Seaborn versions use PathCollections). The following approach: replaces Seaborn's hue_norm=(0, 3) with an equivalent matplotlib norm creates dummy Line2D elements to serve as legend handles copies all properties (size, edgecolor, ...) of the legend handle created by Seaborn then changes the marker color depending on the norm and colormap The approach might need some tweaks if your scatterplot differs. The code has been tested with Matplotlib 3.8.3 and Seaborn 0.13.2 (and 0.12.2). import matplotlib.pyplot as plt import seaborn as sns import numpy as np from matplotlib.lines import Line2D np.random.seed(123) x = np.random.randn(500) y = np.random.randn(500) z = np.random.exponential(1, 500) fig, ax = plt.subplots() hue_norm = plt.Normalize(vmin=0, vmax=3) sns.scatterplot(x=x, y=y, hue=z, hue_norm=hue_norm, palette='coolwarm', ax=ax) legend_keys = [0, 1, 2, 3] handles = [Line2D([], []) for _ in legend_keys] cmap = plt.get_cmap('coolwarm') for h, key in zip(handles, legend_keys): if type(ax.legend_.legend_handles[0]) == Line2D: h.update_from(ax.legend_.legend_handles[0]) else: h.set_linestyle('') h.set_marker('o') h.set_markeredgecolor(ax.legend_.legend_handles[0].get_edgecolor()) h.set_markeredgewidth(ax.legend_.legend_handles[0].get_linewidth()) h.set_markerfacecolor(cmap(hue_norm(key))) h.set_label(f'{key}') ax.legend(handles=handles, title='z') sns.despine() plt.show() | 2 | 2 |
78,237,884 | 2024-3-28 | https://stackoverflow.com/questions/78237884/python-convert-markdown-to-html-with-codeblocks-like-in-stackoverflow | I have been trying to convert a Markdown file into HTML code with Python for a few days - without success. The markdown file contains inline code and code blocks. However, I can't find a solution to display the code blocks in HTML like StackOverflow or GitHub does. In other words, a rectangle containing the code. The Markdown file looks like this: ### Output of `df` '''Bash Filesystem 1K-blocks Used Available Use% Mounted on tmpfs 809224 1552 807672 1% /run /dev/sda3 61091660 37443692 20512276 65% / tmpfs 4046120 54604 3991516 2% /dev/shm ''' The corresponding part of my HTML version looks like this: <h3>Output of <code>df</code></h3> <div class="codehilite"> <pre><span></span><code> Filesystem<span class="w"> </span>1K-blocks<span class="w"> </span>Used<span class="w"> </span>Available<span class="w"> </span>Use%<span class="w"> </span>Mounted<span class="w"> </span>on tmpfs<span class="w"> </span><span class="m">809224</span><span class="w"> </span><span class="m">1552</span><span class="w"> </span><span class="m">807672</span><span class="w"> </span><span class="m">1</span>%<span class="w"> </span>/run /dev/sda3<span class="w"> </span><span class="m">61091660</span><span class="w"> </span><span class="m">37443692</span><span class="w"> </span><span class="m">20512276</span><span class="w"> </span><span class="m">65</span>%<span class="w"> </span>/ tmpfs<span class="w"> </span><span class="m">4046120</span><span class="w"> </span><span class="m">54604</span><span class="w"> </span><span class="m">3991516</span><span class="w"> </span><span class="m">2</span>%<span class="w"> </span>/dev/shm </code></pre> </div> I use the following code to convert: converted = markdown2.markdown(lfmd, extras=['fenced-code-blocks']) If I now open the HTML version in the browser, I have the code in a different font, but that's it. But I need these nice code blocks. Another problem is that this HTML code is then sent as an HTML e-mail. That's why playing around with Javascript etc. doesn't work. I have tried to change the text background with CSS. This works quite well for inline code. However, code blocks then look bad because they are aligned on the left and then fray on the right. What my crappy CSS looks like: This is my HTML/CSS code, as I have no control over the HTML body: <head> <style> body {font-family: TeleNeo Office, sans-serif; color: #000000;} body {background-color: #ffffff;} code { font-family: Source Sans Pro, monospace; font-weight: normal; font-size: small; color: black; background-color: #E3E6E8; border-radius: 5px; height: fit-content; width: auto; padding: 3px 10px 3px 10px; } </style> </head> Can anyone give me a tip on how I could implement this instead? | The background-color and related styles need to be defined on the pre tag (or its parent div), actually, if you want the same styles applied to both code blocks and code spans, then you will need to define the styles for both. And as the background color is the same, there is no need to undo the styles for the code elements inside the pre blocks. So just define for both: pre, code { font-family: Source Sans Pro, monospace; font-weight: normal; font-size: small; color: black; background-color: #E3E6E8; border-radius: 5px; height: fit-content; width: auto; padding: 3px 10px 3px 10px; } | 2 | 2 |
78,235,551 | 2024-3-28 | https://stackoverflow.com/questions/78235551/looking-for-regex-pattern-to-return-similar-results-to-my-current-function | I have some pascal-cased text that I'm trying to split into separate tokens/words. For example, "Hello123AIIsCool" would become ["Hello", "123", "AI", "Is", "Cool"]. Some Conditions "Words" will always start with an upper-cased letter. E.g., "Hello" A contiguous sequence of numbers should be left together. E.g., "123" -> ["123"], not ["1", "2", "3"] A contiguous sequence of upper-cased letters should be kept together except when the last letter is the start to a new word as defined in the first condition. E.g., "ABCat" -> ["AB", "Cat"], not ["ABC", "at"] There is no guarantee that each condition will have a match in a string. E.g., "Hello", "HelloAI", "HelloAIIsCool" "Hello123", "123AI", "AIIsCool", and any other combination I haven't provided are potential candidates. I've tried a couple regex variations. The following two attempts got me pretty close to what I want, but not quite. Version 0 import re def extract_v0(string: str) -> list[str]: word_pattern = r"[A-Z][a-z]*" num_pattern = r"\d+" pattern = f"{word_pattern}|{num_pattern}" extracts: list[str] = re.findall( pattern=pattern, string=string ) return extracts string = "Hello123AIIsCool" extract_v0(string) ['Hello', '123', 'A', 'I', 'Is', 'Cool'] Version 1 import re def extract_v1(string: str) -> list[str]: word_pattern = r"[A-Z][a-z]+" num_pattern = r"\d+" upper_pattern = r"[A-Z][^a-z]*" pattern = f"{word_pattern}|{num_pattern}|{upper_pattern}" extracts: list[str] = re.findall( pattern=pattern, string=string ) return extracts string = "Hello123AIIsCool" extract_v1(string) ['Hello', '123', 'AII', 'Cool'] Best Option So Far This uses a combination of regex and looping. It works, but is this the best solution? Or is there some fancy regex that can do it? import re def extract_v2(string: str) -> list[str]: word_pattern = r"[A-Z][a-z]+" num_pattern = r"\d+" upper_pattern = r"[A-Z][A-Z]*" groups = [] for pattern in [word_pattern, num_pattern, upper_pattern]: while string.strip(): group = re.search(pattern=pattern, string=string) if group is not None: groups.append(group) string = string[:group.start()] + " " + string[group.end():] else: break ordered = sorted(groups, key=lambda g: g.start()) return [grp.group() for grp in ordered] string = "Hello123AIIsCool" extract_v2(string) ['Hello', '123', 'AI', 'Is', 'Cool'] | Based on your Version 1: import re def extract_v1(string: str) -> list[str]: word_pattern = r"[A-Z][a-z]+" num_pattern = r"\d+" upper_pattern = r"[A-Z]+(?![a-z])" # Fixed pattern = f"{word_pattern}|{num_pattern}|{upper_pattern}" extracts: list[str] = re.findall( pattern=pattern, string=string ) return extracts string = "Hello123AIIsCool" extract_v1(string) Result: ['Hello', '123', 'AI', 'Is', 'Cool'] The fixed upper_pattern will match as many uppercased letters as possible, and will stop one before a lowercased letter if it exists. | 6 | 3 |
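A quick check of the fixed pattern from the answer above against the other cases listed in the question (run after defining extract_v1):

    for s in ["Hello123AIIsCool", "ABCat", "HelloAI", "Hello123", "123AI", "AIIsCool"]:
        print(s, "->", extract_v1(s))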
78,233,403 | 2024-3-27 | https://stackoverflow.com/questions/78233403/attributeerror-module-keras-tf-keras-keras-has-no-attribute-internal | Trying to install top2vec on colab ,and install everything that other people mentioned, but still get this error, have no idea how to solve,anybody knows? really appreciate! error screenshot !pip install top2vec !pip install top2vec[sentence_transformers] !pip install top2vec[sentence_encoders] from top2vec import Top2Vec import pandas as pd ============================================================ AttributeError: module 'keras._tf_keras.keras' has no attribute 'internal' | This is a known issue: The recent release of Keras 3 breaks TensorFlow Probability at import. installation of tensorflow v2.15.0, tensorflow-probability v0.23.0, and keras v3 causes a AttributeError: module 'keras._tf_keras.keras' has no attribute '__internal__' Please see these posts: https://github.com/tensorflow/probability/issues/1774#issuecomment-1979642276 First install tensorflow-probability version 0.24.0 then install tensorflow-keras https://huggingface.co/google/gemma-7b-it/discussions/71 | 3 | 3 |
78,232,294 | 2024-3-27 | https://stackoverflow.com/questions/78232294/why-regular-operations-are-not-based-on-their-in-place-corresponding-operation | To me, the only difference is that the regular operation needs one more instantiation, and the result is held by this new instance. And thus the regular implementation should call the other. But : these (in-place) methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, or if that method returns NotImplemented, the augmented assignment falls back to the normal methods. Here, i understand that the standard way is the opposite of mine : __iadd__ relies on __add__ EDIT in the sense that __add__ will be called in last resort. So why __add__ is the default way to achieve the addition and not __iadd__, whereas it should require less processing ? END EDITING A bit of context : the question came while implementing a Polynomial class, for learning purpose. I have written: class A: ... def __iadd__(self, other): "processing resulting in modification of attributes of self" return self def __add__(self, other): res = self.copy() # A.copy() being implemented as well res += other return res | Inplace operations likely modify their operands. In the example you give, you use A.copy() to avoid that. Some types do not allow modifications (tuples, for example, are immutable). Thus they don't allow inplace operations. Also, inplace operations are not exactly intuitive to someone new to programming, or even to someone new to python. It's a nice feature of the language, but that's all. Overall, regular operations are the 'normal' form of operations. If you're writing a new class and want to support operators, you'll probably think of the regular operators first, and the inplace ones later, or not at all. With your way, inplace would not work if only regular was defined. But sometimes performance is indeed an issue, and inplace would allow for a more efficient implementation. That's why you have the possibility to define the inplace operation rather then just have it call the regular one. | 3 | 1 |
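A small sketch of the fallback described above: with only __add__ defined, += still works but rebinds the name to a new object:

    class Only:
        def __init__(self, v):
            self.v = v
        def __add__(self, other):
            return Only(self.v + other.v)

    a, b = Only(1), Only(2)
    before = id(a)
    a += b                       # no __iadd__, so Python falls back to a = a + b
    print(a.v, id(a) == before)  # 3 False -> a is a brand new object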
78,231,207 | 2024-3-27 | https://stackoverflow.com/questions/78231207/skip-level-in-nested-json-and-convert-to-pandas-dataframe | I have json data that is structured like this, which I want to turn into a data frame: { "data": { "1": { "Conversion": { "id": "1", "datetime": "2024-03-26 08:30:00" } }, "50": { "Conversion": { "id": "50", "datetime": "2024-03-27 09:00:00" } } } } My usual approach would be to use json_normalize, like this: df = pd.json_normalize(input['data']) My goal is to have a table/dataframe with just the columns "id" and "datetime". How do I skip the numbering level below data and go straight to Conversion? I would imagine something like this (which clearly doesn't work): df = pd.json_normalize(input['data'][*]['Conversion']) What is the best way to achieve this? Any hints are greatly appreciated! | You have to manually change data in double list comprehension: L = [b['Conversion'] for k, v in input['data'].items() for a, b in v.items()] print (L) [{'id': '1', 'datetime': '2024-03-26 08:30:00'}, {'id': '50', 'datetime': '2024-03-27 09:00:00'}] out = pd.json_normalize(L) print (out) id datetime 0 1 2024-03-26 08:30:00 1 50 2024-03-27 09:00:00 Here is json_normalize not necessary, working DataFrame constructor: out = pd.DataFrame(L) print (out) id datetime 0 1 2024-03-26 08:30:00 1 50 2024-03-27 09:00:00 Thank you chepner for another idea with .values: out = pd.json_normalize((b['Conversion'] for v in input['data'].values() for b in v.values())) print (out) id datetime 0 1 2024-03-26 08:30:00 1 50 2024-03-27 09:00:00 out = pd.DataFrame((b['Conversion'] for v in input['data'].values() for b in v.values())) print (out) id datetime 0 1 2024-03-26 08:30:00 1 50 2024-03-27 09:00:00 In json_normalize is parameter max_level, but working different: Max number of levels(depth of dict) to normalize. if None, normalizes all levels. out = pd.json_normalize(input['data'], max_level=1) print (out) data.1 \ 0 {'Conversion': {'id': '1', 'datetime': '2024-0... data.50 0 {'Conversion': {'id': '50', 'datetime': '2024-... out = pd.json_normalize(input['data'], max_level=2) print (out) data.1.Conversion \ 0 {'id': '1', 'datetime': '2024-03-26 08:30:00'} data.50.Conversion 0 {'id': '50', 'datetime': '2024-03-27 09:00:00'} out = pd.json_normalize(input['data'], max_level=3) print (out) data.1.Conversion.id data.1.Conversion.datetime data.50.Conversion.id \ 0 1 2024-03-26 08:30:00 50 data.50.Conversion.datetime 0 2024-03-27 09:00:00 | 2 | 3 |
78,230,664 | 2024-3-27 | https://stackoverflow.com/questions/78230664/python-tkinter-resize-all-ttkbootstrap-or-ttk-button-padding-for-a-specific-styl | I want to alter padding for all buttons using a particular style (danger). For some reason this change is only applied to the currently active theme, switching themes reverts the Button padding to default. You can see the issue by running the following and switching themes ... import tkinter as tk from tkinter import ttk import ttkbootstrap as tb def change_theme(theme, style): style.theme_use(theme.lower().replace(" ", "")) def display_text(label_text, entry_text): label_text.set(entry_text.get()) def setup_ui(style): root = style.master danger = tb.Style() danger.configure('danger.TButton', padding=0) # Why does this only apply to the first theme? theme_names_titlecase = [name.replace('_', ' ').title() for name in style.theme_names() if name.lower() in ['darkly', 'simplex']] default_theme = 'darkly' current_theme = tk.StringVar(value=default_theme.capitalize()) theme_combo = ttk.Combobox(root, textvariable=current_theme, values=theme_names_titlecase, width=50) theme_combo.pack(pady=0, side=tk.TOP) theme_combo.bind("<<ComboboxSelected>>", lambda e: change_theme(current_theme.get(), style)) tb.Button(root, text='Text', bootstyle='danger.TButton').pack(side=tk.TOP, padx=0, pady=0) tb.Button(root, text='Text', bootstyle='info.TButton').pack(side=tk.TOP, padx=0, pady=0) return root if __name__ == "__main__": default_theme = 'darkly' style = tb.Style(theme=default_theme) root = setup_ui(style) root.mainloop() What I want to know is : Why are my changes to 'danger.TButton' only applied to the current theme? Can I fix this so all 'danger.TButton' s have no padding regardless of theme? Note: using all ttk widgets and Styles has the same result so the answer relates to ttk not ttkbootstrap particularly. Many thanks. | The main reason is this part and configuring tb.Style() within the setup_ui(). It means that the configuration you're setting for danger.TButton is only associated with the instance of tb.Style(), So,when u try to change your theme, a new instance of tb.Style() is created internally by ttkbootstrap, and your custom configuration is not carried over to this new instance. danger = tb.Style() danger.configure('danger.TButton', padding=0) # Why does this only apply to the first theme? By configuring danger.TButton directly on the main instance of tb.Style(), your custom configuration will be applied consistently across all themes. 
Here is solution for your code: import tkinter as tk from tkinter import ttk import ttkbootstrap as tb def change_theme(theme, style):style.theme_use(theme.lower().replace(" ", "")) def display_text(label_text, entry_text):label_text.set(entry_text.get()) def setup_ui(style): root = style.master theme_names_titlecase = [name.replace('_', ' ').title() for name in style.theme_names() if name.lower() in ['darkly', 'simplex']] default_theme = 'darkly' current_theme = tk.StringVar(value=default_theme.capitalize()) theme_combo = ttk.Combobox(root, textvariable=current_theme, values=theme_names_titlecase, width=50) theme_combo.pack(pady=0, side=tk.TOP) theme_combo.bind("<<ComboboxSelected>>", lambda e: change_theme(current_theme.get(), style)) tb.Button(root, text='Text', bootstyle='danger.TButton').pack(side=tk.TOP, padx=0, pady=0) tb.Button(root, text='Text', bootstyle='info.TButton').pack(side=tk.TOP, padx=0, pady=0) return root if __name__ == "__main__": default_theme = 'darkly' style = tb.Style(theme=default_theme) style.configure('danger.TButton', padding=0) # Why does this only apply to the first theme? root = setup_ui(style) root.mainloop() | 3 | 0 |
78,202,760 | 2024-3-21 | https://stackoverflow.com/questions/78202760/polars-groupby-describe-extension | df is a demo Polars DataFrame: df = pl.DataFrame( { "groups": ["A", "A", "A", "B", "B", "B"], "values": [1, 2, 3, 4, 5, 6], } ) The current group_by.agg() approach is a bit inconvenient for creating descriptive statistics: print( df.group_by("groups").agg( pl.len().alias("count"), pl.col("values").mean().alias("mean"), pl.col("values").std().alias("std"), pl.col("values").min().alias("min"), pl.col("values").quantile(0.25).alias("25%"), pl.col("values").quantile(0.5).alias("50%"), pl.col("values").quantile(0.75).alias("75%"), pl.col("values").max().alias("max"), pl.col("values").skew().alias("skew"), pl.col("values").kurtosis().alias("kurtosis"), ) ) out: shape: (2, 11) ┌────────┬───────┬──────┬─────┬───┬─────┬─────┬──────┬──────────┐ │ groups │ count │ mean │ std │ … │ 75% │ max │ skew │ kurtosis │ │ --- │ --- │ --- │ --- │ │ --- │ --- │ --- │ --- │ │ str │ u32 │ f64 │ f64 │ │ f64 │ i64 │ f64 │ f64 │ ╞════════╪═══════╪══════╪═════╪═══╪═════╪═════╪══════╪══════════╡ │ B │ 3 │ 5.0 │ 1.0 │ … │ 6.0 │ 6 │ 0.0 │ -1.5 │ │ A │ 3 │ 2.0 │ 1.0 │ … │ 3.0 │ 3 │ 0.0 │ -1.5 │ └────────┴───────┴──────┴─────┴───┴─────┴─────┴──────┴──────────┘ I want to write a customized group_by extension module that allows me to achieve the same results by calling: df.describe(by="groups", percentiles=[xxx], skew=True, kurt=True) or df.group_by("groups").describe(percentiles=....) | Calling this will output the same as what you mentioned in the question: import polars as pl class DescribeAccessor: def __init__(self, df: pl.DataFrame): self._df = df def __call__( self, by: str, percentiles: list = [0.25, 0.5, 0.75], skew: bool = True, kurt: bool = True, ) -> pl.DataFrame: percentile_exprs = [ pl.col("values").quantile(p).alias(f"{int(p * 100)}%") for p in percentiles ] aggs = [ pl.len().alias("count"), pl.col("values").mean().alias("mean"), pl.col("values").std().alias("std"), pl.col("values").min().alias("min"), *percentile_exprs, pl.col("values").max().alias("max"), ] if skew: aggs.append(pl.col("values").skew().alias("skew")) if kurt: aggs.append(pl.col("values").kurtosis().alias("kurtosis")) return self._df.group_by(by).agg(aggs) pl.DataFrame.describe = property(lambda self: DescribeAccessor(self)) df = pl.DataFrame( { "groups": ["A", "A", "A", "B", "B", "B"], "values": [1, 2, 3, 4, 5, 6], } ) print(df.describe(by="groups", percentiles=[0.25, 0.5, 0.75], skew=True, kurt=True)) | 3 | 4
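An alternative sketch for the row above that avoids shadowing the built-in DataFrame.describe, assuming a recent polars that provides the pl.api.register_dataframe_namespace extension hook:

    import polars as pl

    @pl.api.register_dataframe_namespace("stats")
    class Stats:
        def __init__(self, df: pl.DataFrame):
            self._df = df

        def describe(self, by: str, value: str = "values",
                     percentiles=(0.25, 0.5, 0.75)) -> pl.DataFrame:
            qs = [pl.col(value).quantile(p).alias(f"{int(p * 100)}%") for p in percentiles]
            return self._df.group_by(by).agg(
                pl.len().alias("count"), pl.col(value).mean().alias("mean"), *qs
            )

    df = pl.DataFrame({"groups": ["A", "A", "B", "B"], "values": [1, 2, 3, 4]})
    print(df.stats.describe(by="groups"))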
78,227,144 | 2024-3-26 | https://stackoverflow.com/questions/78227144/simulation-of-a-pendulum-hanging-on-a-spinning-disk | Can anybody get this code to run? I know, that it is very long and maybe not easy to understand, but what I am trying to do is to write a simulation for a problem, that I have already posted here: https://math.stackexchange.com/questions/4876146/pendulum-hanging-on-a-spinning-disk I try to make a nice simulation, that would look like the one someone answered the linked question with. The picture in the answer is written in mathematica and I had no idea how to translate it. Hope you can help me finish this up. There are two elements of code. One calculating the ODE second degree and one plotting it 3 times. When you plot the ODE, you can see, that the graph line is not doing, what it is supposed to. I don't know, where the mistake is but hopefully you can help. Here are the two snippets: import numpy as np import sympy as sp import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from IPython.display import display from sympy.vector import CoordSys3D from scipy.integrate import solve_ivp def FindDGL(): N = CoordSys3D('N') e = N.i + N.j + N.k t = sp.symbols('t') x = sp.symbols('x') y = sp.symbols('y') z = sp.symbols('z') x = sp.Function('x')(t) y = sp.Function('y')(t) z = sp.Function('z')(t) p = x*N.i + y*N.j + z*N.k m = sp.symbols('m') g = sp.symbols('g') r = sp.symbols('r') omega = sp.symbols('omega') q0 = sp.symbols('q0') A = sp.symbols('A') l = sp.symbols('l') xl = sp.symbols('xl') yl = sp.symbols('yl') zl = sp.symbols('zl') dpdt = sp.diff(x,t)*N.i + sp.diff(y,t)*N.j + sp.diff(z,t)*N.k #Zwang = ((p-q)-l) Zwang = (p.dot(N.i)**2*N.i +p.dot(N.j)**2*N.j +p.dot(N.k)**2*N.k - 2*r*(p.dot(N.i)*N.i*sp.cos(omega*t)+p.dot(N.j)*N.j*sp.sin(omega*t))-2*q0*(p.dot(N.k)*N.k) + r**2*(N.i*sp.cos(omega*t)**2+N.j*sp.sin(omega*t)**2)+q0**2*N.k) - l**2*N.i - l**2*N.j -l**2*N.k display(Zwang) dpdtsq = dpdt.dot(N.i)**2*N.i + dpdt.dot(N.j)**2*N.j + dpdt.dot(N.k)**2*N.k #La = 0.5 * m * dpdtsq - m * g * (p.dot(N.k)*N.k) + (ZwangA*A) L = 0.5 * m * dpdtsq + m * g * (p.dot(N.k)*N.k) - Zwang*A #display(La) display(L) Lx = L.dot(N.i) Ly = L.dot(N.j) Lz = L.dot(N.k) Elx = sp.diff(sp.diff(Lx,sp.Derivative(x,t)), t) + sp.diff(Lx,x) Ely = sp.diff(sp.diff(Ly,sp.Derivative(y,t)), t) + sp.diff(Ly,y) Elz = sp.diff(sp.diff(Lz,sp.Derivative(z,t)), t) + sp.diff(Lz,z) display(Elx) display(Ely) display(Elz) ZwangAV = (sp.diff(Zwang, t, 2))/2 display(ZwangAV) ZwangA = ZwangAV.dot(N.i)+ZwangAV.dot(N.j)+ZwangAV.dot(N.k) display(ZwangA) Eq1 = sp.Eq(Elx,0) Eq2 = sp.Eq(Ely,0) Eq3 = sp.Eq(Elz,0) Eq4 = sp.Eq(ZwangA,0) LGS = sp.solve((Eq1,Eq2,Eq3,Eq4),(sp.Derivative(x,t,2),sp.Derivative(y,t,2),sp.Derivative(z,t,2),A)) #display(LGS) #display(LGS[sp.Derivative(x,t,2)].free_symbols) #display(LGS[sp.Derivative(y,t,2)].free_symbols) #display(LGS[sp.Derivative(z,t,2)]) XS = LGS[sp.Derivative(x,t,2)] YS = LGS[sp.Derivative(y,t,2)] ZS = LGS[sp.Derivative(z,t,2)] #t_span = (0, 10) dxdt = sp.symbols('dxdt') dydt = sp.symbols('dydt') dzdt = sp.symbols('dzdt') #t_eval = np.linspace(0, 10, 100) XSL = XS.subs({ sp.Derivative(y,t):dydt, sp.Derivative(z,t):dzdt, sp.Derivative(x,t):dxdt, x:xl , y:yl , z:zl}) YSL = YS.subs({ sp.Derivative(y,t):dydt, sp.Derivative(z,t):dzdt, sp.Derivative(x,t):dxdt, x:xl , y:yl , z:zl}) ZSL = ZS.subs({ sp.Derivative(y,t):dydt, sp.Derivative(z,t):dzdt, sp.Derivative(x,t):dxdt, x:xl , y:yl , z:zl}) #display(ZSL.free_symbols) XSLS = str(XSL) YSLS = str(YSL) ZSLS = str(ZSL) replace = 
{"xl":"x","yl":"y","zl":"z","cos":"np.cos", "sin":"np.sin",} for old, new in replace.items(): XSLS = XSLS.replace(old, new) for old, new in replace.items(): YSLS = YSLS.replace(old, new) for old, new in replace.items(): ZSLS = ZSLS.replace(old, new) return[XSLS,YSLS,ZSLS] Result = FindDGL() print(Result[0]) print(Result[1]) print(Result[2]) Here is the second one: import numpy as np import matplotlib.pyplot as plt from scipy.integrate import solve_ivp from mpl_toolkits.mplot3d import Axes3D def Q(t): omega = 1 return r * (np.cos(omega * t) * np.array([1, 0, 0]) + np.sin(omega * t) * np.array([0, 1, 0])) + np.array([0, 0, q0]) def equations_of_motion(t, state, r, omega, q0, l): x, y, z, xp, yp, zp = state dxdt = xp dydt = yp dzdt = zp dxpdt = dxdt**2*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dxdt**2*x/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dxdt*omega*r**2*np.sin(omega*t)*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dxdt*omega*r*x*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dydt**2*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dydt**2*x/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dydt*omega*r**2*np.cos(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dydt*omega*r*x*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dzdt**2*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dzdt**2*x/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + g*q0*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*q0*x/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*r*z*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + g*x*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*r**2*x*np.cos(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*r**2*y*np.sin(omega*t)*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*x**2*np.cos(omega*t)/(q0**2 - 2.0*q0*z + 
r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*x*y*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) dypdt = dxdt**2*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dxdt**2*y/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dxdt*omega*r**2*np.sin(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dxdt*omega*r*y*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dydt**2*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dydt**2*y/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dydt*omega*r**2*np.sin(omega*t)*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dydt*omega*r*y*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dzdt**2*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dzdt**2*y/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + g*q0*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*q0*y/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*r*z*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + g*y*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*r**2*x*np.sin(omega*t)*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*r**2*y*np.sin(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*x*y*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*y**2*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) dzpdt = dxdt**2*q0/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + 
x**2 + y**2 + z**2) - dxdt**2*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dxdt*omega*q0*r*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dxdt*omega*r*z*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dydt**2*q0/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dydt**2*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - 2.0*dydt*omega*q0*r*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*dydt*omega*r*z*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + dzdt**2*q0/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - dzdt**2*z/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*r**2*np.sin(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*r**2*np.cos(omega*t)**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*g*r*x*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + 2.0*g*r*y*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*x**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - g*y**2/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*q0*r*x*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) + omega**2*q0*r*y*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*x*z*np.cos(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) - omega**2*r*y*z*np.sin(omega*t)/(q0**2 - 2.0*q0*z + r**2*np.sin(omega*t)**2 + r**2*np.cos(omega*t)**2 - 2.0*r*x*np.cos(omega*t) - 2.0*r*y*np.sin(omega*t) + x**2 + y**2 + z**2) return [dxdt, dydt, dzdt, dxpdt, dypdt, dzpdt] r = 0.5 omega = 1.2 q0 = 1 l = 1 g = 9.81 #{x[0] == r, y[0] == x'[0] == y'[0] == z'[0] == 0, z[0] == q0 - l} initial_conditions = [r, 0, 0, 0, 0, q0-l] tmax = 200 solp = solve_ivp(equations_of_motion, [0, tmax], initial_conditions, args=(r, omega, q0, l), 
dense_output=True)#, method='DOP853') t_values = np.linspace(0, tmax, 1000) p_values = solp.sol(t_values) print(p_values.size) d =0.5 Qx = [Q(ti)[0] for ti in t_values] Qy = [Q(ti)[1] for ti in t_values] Qz = [Q(ti)[2] for ti in t_values] fig = plt.figure(figsize=(20, 16)) ax = fig.add_subplot(111, projection='3d') ax.plot(p_values[0], p_values[1], p_values[2], color='blue') ax.scatter(r, 0, q0-l, color='red') ax.plot([0, 0], [0, 0], [0, q0], color='green') ax.plot(Qx, Qy, Qz, color='purple') #ax.set_xlim(-d, d) #ax.set_ylim(-d, d) #ax.set_zlim(-d, d) ax.view_init(30, 45) plt.show() | Edited to include the direct solution in Cartesians, since this was the original direction taken by the OP. See the bottom of this answer. Edited again to provide an alternative derivation of the Cartesian equations from Newton's Second Law (F=ma) rather than Lagrangian Mechanics. (This has the side benefit that it also finds the tension in the pendulum rod.) Lagrangian Equations (i) Angle Coordinates This method, related to, but not quite the same as, the replies in your Stack Exchange post is quite appealing because it treats your 2-degree-of-freedom problem with two independent coordinates, so not having to deal with the constraint (that the length of pendulum is fixed). Let Ο be the angle between the vertical plane containing the string and the x-axis. Let ΞΈ be the angle made by the string with the downward vertical. Then the coordinates of the bob (relative to the centre of the ring) are: Differentiate with respect to time and we get velocity components Summing, squaring and simplifying using trig formulae: The Lagrangian (strictly, Lagrangian divided by mass, but mass would cancel in the analysis that follows) is whence From the Lagrangian equations for a conservative system we get (after a LOT of algebra!) the key equations for our degrees of freedom: The code sample below solves these using solve_ivp. Note that the denominator of the second equation means that you shouldnβt start with the pendulum vertical, as sinβ‘ΞΈ would be 0. Then your animation. I use FuncAnimation from matplotlib.animation. The objective here is to update any elements of your plot (in this case the ends of the pendulum string) in each frame. The code plots an animation by default, but you can remove the subsequent comment to store it as a (rather large) graphics file. If you want the trajectory rather than the animation then re-comment the commands at the bottom to choose plot_figure instead. Note that the system is chaotic and very sensitive to the relative sizes of disk and pendulum and the angular velocity Ο. If I do a dimensional analysis, the trajectory shape should be a function of g/Ο^2 L (or the ratios of the periods of the disk and the stand-alone pendulum), R/L, and the initial angles. 
CODE import math import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from scipy.integrate import solve_ivp from mpl_toolkits.mplot3d import Axes3D g = 9.81 def plot_animation( t, qx, qy, qz, x, y, z ): plotInterval = 1 fig = plt.figure( figsize=(4,4) ) ax = fig.add_subplot(111, projection='3d' ) ax.view_init(30, 45) ax.set_xlim( -1.0, 1.0 ); ax.set_ylim( -1.0, 1.0 ); ax.set_aspect('equal') ax.plot( qx, qy, qz, 'k' ) # ring a = ax.plot( [qx[0],x[0]], [qy[0],y[0]], [qz[0],z[0]], 'g' ) # pendulum string b = ax.plot( [x[0]], [y[0]], [z[0]], 'ro' ) # pendulum bob def animate( i ): a[0].set_data_3d( [qx[i],x[i]], [qy[i],y[i]], [qz[i],z[i]] ) # update anything that has changed b[0].set_data_3d( [x[i]], [y[i]], [z[i]] ) ani = animation.FuncAnimation( fig, animate, interval=4, frames=len( t ) ) plt.show() # ani.save( "demo.gif", fps=50 ) def plot_figure( qx, qy, qz, x, y, z ): fig = plt.figure(figsize=(8,8)) ax = fig.add_subplot(111, projection='3d') ax.plot( qx, qy, qz, 'k' ) # disk ax.plot( x, y, z, 'b' ) # bob trajectory ax.plot( ( qx[-1], x[-1] ), ( qy[-1], y[-1] ), ( qz[-1], z[-1] ), 'g' ) # final line ax.plot( x[-1], y[-1], z[-1], 'ro' ) # final bob ax.view_init(30, 45) ax.set_xlim( -1.0, 1.0 ); ax.set_ylim( -1.0, 1.0 ); ax.set_aspect('equal') plt.show() def deriv( t, Y, R, omega, L ): theta, phi, thetaprime, phiprime = Y ct, st = math.cos( theta ), math.sin( theta ) phase = omega * t - phi thetaprime2 = ( phiprime ** 2 * L * st * ct + omega ** 2 * R * ct * math.cos( phase ) - g * st ) / L phiprime2 = ( -2 * thetaprime * phiprime * L * ct + omega ** 2 * R * math.sin( phase ) ) / ( L * st ) return [ thetaprime, phiprime, thetaprime2, phiprime2 ] R = 0.5 omega = 2.0 L = 1.0 Y0 = [ 0.01, 0.01, 0.0, 0.0 ] period = 2 * np.pi / omega tmax = 5 * period solution = solve_ivp( deriv, [0, tmax], Y0, args=(R,omega,L), rtol=1.0e-6, dense_output=True ) t = np.linspace( 0, tmax, 1000 ) Y = solution.sol( t ) theta = Y[0,:] phi = Y[1,:] # Position on disk qx = R * np.cos( omega * t ) qy = R * np.sin( omega * t ) qz = np.zeros_like( qx ) # Trajectory x = qx + L * np.sin( theta ) * np.cos( phi ) y = qy + L * np.sin( theta ) * np.sin( phi ) z = - L * np.cos( theta ) #plot_figure( qx, qy, qz, x, y, z ) plot_animation( t, qx, qy, qz, x, y, z ) Trajectory: Lagrangian Equations (ii) Cartesian Coordinates The difficulty with Cartesian coordinates is that you are using 3 coordinates to solve a 2-degree-of-freedom problem, and so the constraint (length of pendulum being fixed) must be dealt with through a Lagrange multiplier Ξ». The following vastly simplifies the solution given in Stack Exchange. The Lagrangian is where x is the location of the bob and "q"=(R cosβ‘Οt,R sinβ‘Οt,0). The three Lagrangian equations, rewritten in vector form give Ξ» must be found from the constraint equation Differentiating twice, and noting that together with the expression for the acceleration above, gives the expression for 2Ξ» below. Thus, your equation set is This is far simpler than the expression given in Stack Exchange, but probably equivalent. It is shown in code below. The only real difference from the first code is the routine deriv(), since you now have different dependent variables. The other problem with a constrained Lagrangian is that the initial conditions (position and velocity) must satisfy both the constraint equation AND its derivative. i.e. the string length is L and it is not initially lengthening. Code with Cartesian dependent variables. 
import math import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from scipy.integrate import solve_ivp from mpl_toolkits.mplot3d import Axes3D g = 9.81 def plot_animation( t, qx, qy, qz, x, y, z ): plotInterval = 1 fig = plt.figure( figsize=(4,4) ) ax = fig.add_subplot(111, projection='3d' ) ax.view_init(30, 45) ax.set_xlim( -1.0, 1.0 ); ax.set_ylim( -1.0, 1.0 ); ax.set_aspect('equal') ax.plot( qx, qy, qz, 'k' ) # ring a = ax.plot( [qx[0],x[0]], [qy[0],y[0]], [qz[0],z[0]], 'g' ) # pendulum string b = ax.plot( [x[0]], [y[0]], [z[0]], 'ro' ) # pendulum bob def animate( i ): a[0].set_data_3d( [qx[i],x[i]], [qy[i],y[i]], [qz[i],z[i]] ) # update anything that has changed b[0].set_data_3d( [x[i]], [y[i]], [z[i]] ) ani = animation.FuncAnimation( fig, animate, interval=4, frames=len( t ) ) plt.show() # ani.save( "demo.gif", fps=50 ) def plot_figure( qx, qy, qz, x, y, z ): fig = plt.figure(figsize=(8,8)) ax = fig.add_subplot(111, projection='3d') ax.plot( qx, qy, qz, 'k' ) # disk ax.plot( x, y, z, 'b' ) # bob trajectory ax.plot( ( qx[-1], x[-1] ), ( qy[-1], y[-1] ), ( qz[-1], z[-1] ), 'g' ) # final line ax.plot( x[-1], y[-1], z[-1], 'ro' ) # final bob ax.view_init(30, 45) ax.set_xlim( -1.0, 1.0 ); ax.set_ylim( -1.0, 1.0 ); ax.set_aspect('equal') plt.show() def deriv( t, Y, R, omega, L ): x, y, z, xdot, ydot, zdot = Y qx, qy = R * math.cos( omega * t ), R * math.sin( omega * t ) qxdot, qydot = -omega * R * math.sin( omega * t ), omega * R * math.cos( omega * t ) twoLambda = ( g * z + ( R ** 2 - qx * x - qy * y ) * omega ** 2 - ( xdot - qxdot ) ** 2 - ( ydot -qydot ) ** 2 - zdot ** 2 ) / L ** 2 xddot = twoLambda * ( x - qx ) yddot = twoLambda * ( y - qy ) zddot = twoLambda * z - g return [ xdot, ydot, zdot, xddot, yddot, zddot ] R = 0.5 omega = 2.0 L = 1.0 Y0 = [ R, 0.0, -L, 0.0, 0.0, 0.0 ] period = 2 * np.pi / omega tmax = 5 * period solution = solve_ivp( deriv, [0, tmax], Y0, args=(R,omega,L), rtol=1.0e-6, dense_output=True ) t = np.linspace( 0, tmax, 1000 ) Y = solution.sol( t ) x = Y[0,:] y = Y[1,:] z = Y[2,:] # Position on disk qx = R * np.cos( omega * t ) qy = R * np.sin( omega * t ) qz = np.zeros_like( qx ) #plot_figure( qx, qy, qz, x, y, z ) plot_animation( t, qx, qy, qz, x, y, z ) Newtonian Mechanics Your more recent post (on elastic pendulums) has prompted me to ask how the problem would be solved by Newtonian mechanics, F=ma, and not with Lagrangians. The pendulum bob experiences two forces: an (unknown) tension T along the vector from bob to tether point and the weight of the bob, mg. Dividing by mass we get acceleration We need to find the tension T. We do this by applying the constraint that the pendulum has a fixed length: Differentiating twice and dividing by 2 we get Multiplying out, using the expression for acceleration above and noting that gives whence we get an expression for the tension T: Finally, substituting this back in the equation for the acceleration (and flipping signs to multiply by x-q rather than q-x) gives after a bit of rearrangement: which is the same as the equation derived by Lagrangian mechanics above. | 2 | 7 |
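The Cartesian formulation above enforces the fixed rod length only through the multiplier term, so a quick numerical check that the bob stays at distance L from the tether point is a useful sanity test on the integration. A minimal sketch, assuming the arrays x, y, z, qx, qy and the constant L from the second listing above are in scope:

# Sanity check for the Cartesian solution: the rod length should stay equal to L.
import numpy as np

rod_length = np.sqrt((x - qx)**2 + (y - qy)**2 + z**2)
max_drift = np.max(np.abs(rod_length - L))
print(f"max deviation from L over the trajectory: {max_drift:.2e}")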
78,215,243 | 2024-3-24 | https://stackoverflow.com/questions/78215243/python-app-keeps-oom-crashing-on-pandas-merge | I have a ligh Python app which should perform a very simple task, but keeps crashing due to OOM. What app should do Loads data from .parquet in to dataframe Calculate indicator using stockstats package Merge freshly calculated data into original dataframe to have both OHCL + SUPERTREND inside one dataframe -> here is crashes Store dataframe as .parquet Where is crashes df = pd.merge(df, st, on=['datetime']) Using Python 3.10 pandas~=2.1.4 stockstats~=0.4.1 Kubernetes 1.28.2-do.0 (running in Digital Ocean) Here is the strange thing, the dataframe is very small (df.size is 208446, file size is 1.00337 MB, mem usage is 1.85537 MB). Measured import os file_stats = os.stat(filename) file_size = file_stats.st_size / (1024 * 1024) # 1.00337 MB df_mem_usage = dataframe.memory_usage(deep=True) df_mem_usage_print = round(df_mem_usage.sum() / (1024 * 1024), 6 # 1.85537 MB df_size = dataframe.size # 208446 Deployment info App is deployed into Kubernetes using Helm with following resources set resources: limits: cpu: 1000m memory: 6000Mi requests: cpu: 1000m memory: 1000Mi I am using nodes with 4vCPU + 8 GB memory and the node not under performance pressure. I have created dedicated node pool with 8 vCPU + 16 GB nodes, but same issue. kubectl top node test-pool NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% test-pool-j8t3y 38m 0% 2377Mi 17% Pod info kubectl describe pod xxx ... State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: OOMKilled Exit Code: 137 Started: Sun, 24 Mar 2024 16:08:56 +0000 Finished: Sun, 24 Mar 2024 16:09:06 +0000 ... Here is CPU and memory consumption from Grafana. I am aware that very short Memory or CPU spikes will be hard to see, but from long term perspective, the app does not consume a lot of RAM. On the other hand, from my experience we are using the same pandas operations on containers with less RAM and dataframes are much much bigger with not problems. How should I fix this? What else should I debug in order to prevent OOM? Data and code example Original dataframe (named df) datetime open high low close volume 0 2023-11-14 11:15:00 2.185 2.187 2.171 2.187 19897.847314 1 2023-11-14 11:20:00 2.186 2.191 2.183 2.184 8884.634728 2 2023-11-14 11:25:00 2.184 2.185 2.171 2.176 12106.153954 3 2023-11-14 11:30:00 2.176 2.176 2.158 2.171 22904.354082 4 2023-11-14 11:35:00 2.171 2.173 2.167 2.171 1691.211455 New dataframe (named st). 
Note: If trend_orientation = 1 => st_lower = NaN, if -1 => st_upper = NaN datetime supertrend_ub supertrend_lb trend_orientation st_trend_segment 0 2023-11-14 11:15:00 0.21495 NaN -1 1 1 2023-11-14 11:20:00 0.21495 NaN -10 1 2 2023-11-14 11:25:00 0.21495 NaN -11 1 3 2023-11-14 11:30:00 0.21495 NaN -12 1 4 2023-11-14 11:35:00 0.21495 NaN -13 1 Code example import pandas as pd import multiprocessing import numpy as np import stockstats def add_supertrend(market): try: # Read data from file df = pd.read_parquet(market, engine="fastparquet") # Extract date columns date_column = df['datetime'] # Convert to stockstats object st_a = stockstats.wrap(df.copy()) # Generate supertrend st_a = st_a[['supertrend', 'supertrend_ub', 'supertrend_lb']] # Add back datetime columns st_a.insert(0, "datetime", date_column) # Add trend orientation using conditional columns conditions = [ st_a['supertrend_ub'] == st_a['supertrend'], st_a['supertrend_lb'] == st_a['supertrend'] ] values = [-1, 1] st_a['trend_orientation'] = np.select(conditions, values) # Remove not required supertrend values st_a.loc[st_a['trend_orientation'] < 0, 'st_lower'] = np.NaN st_a.loc[st_a['trend_orientation'] > 0, 'st_upper'] = np.NaN # Unwrap back to dataframe st = stockstats.unwrap(st_a) # Ensure correct date types are used st = st.astype({ 'supertrend': 'float32', 'supertrend_ub': 'float32', 'supertrend_lb': 'float32', 'trend_orientation': 'int8' }) # Add trend segments st_to = st[['trend_orientation']] st['st_trend_segment'] = st_to.ne(st_to.shift()).cumsum() # Remove trend value st.drop(columns=['supertrend'], inplace=True) # Merge ST with DF df = pd.merge(df, st, on=['datetime']) # Write back to parquet df.to_parquet(market, compression=None) except Exception as e: # Using proper logger in real code print(e) pass def main(): # Using fixed market as example, in real code market is fetched market = "BTCUSDT" # Using multiprocessing to free up memory after each iteration p = multiprocessing.Process(target=add_supertrend, args=(market,)) p.start() p.join() if __name__ == "__main__": main() Dockerfile FROM python:3.10 ENV PYTHONFAULTHANDLER=1 \ PYTHONHASHSEED=random \ PYTHONUNBUFFERED=1 \ PYTHONPATH=. # Adding vim RUN ["apt-get", "update"] # Get dependencies COPY requirements.txt . RUN pip3 install -r requirements.txt # Copy main app ADD . . CMD main.py Possible solutions / tried approaches β: tried; not worked π‘: and idea I am going to test π: did not completely solved the problem, but helped towards the solution β
: working solution Lukasz Tracewskis suggestion Use Node-pressure Eviction in order to test whether pod even can allocate enough memory on nodes I have done: created new node pool: 8vCPU + 16 GB RAM ensured that only my pod (and some system ones) will be deployed on this node (using tolerations and affinity) run a stress test with no OOM or other errors ... image: "polinux/stress" command: ["stress"] args: ["--vm", "1", "--vm-bytes", "5G", "--vm-hang", "1"] ... kubectl top node test-pool-j8t3y NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% test-pool-j8t3y 694m 8% 7557Mi 54% Node description Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system cilium-24qxl 300m (3%) 0 (0%) 300Mi (2%) 0 (0%) 43m kube-system cpc-bridge-proxy-csvvg 100m (1%) 0 (0%) 75Mi (0%) 0 (0%) 43m kube-system csi-do-node-tzbbh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m kube-system disable-systemd-upgrade-timer-mqjsk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m kube-system do-node-agent-dv2z2 102m (1%) 0 (0%) 80Mi (0%) 300Mi (2%) 43m kube-system konnectivity-agent-wq5p2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m kube-system kube-proxy-gvfrv 0 (0%) 0 (0%) 125Mi (0%) 0 (0%) 43m scanners data-gw-enrich-d5cff4c95-bkjkc 100m (1%) 1 (12%) 1000Mi (7%) 6000Mi (43%) 2m33s The pod did not crash due to OOM. So it is very likely that the issue will be inside code, somewhere. Detailed memory monitoring I have inserted memory measurement into multiple points. I am measuring both dataframe size and memory usage using psutil. import psutil total = round(psutil.virtual_memory().total / 1000 / 1000, 4) used = round(psutil.virtual_memory().used / 1000 / 1000, 4) pct = round(used / total * 100, 1) logger.info(f"[Current memory usage is: {used} / {total} MB ({pct} %)]") Memory usage prior read data from file RAM: 938.1929 MB after df loaded df_mem_usage: 1.947708 MB RAM: 954.1181 MB after ST generated df_mem_usage of ST df: 1.147757 MB RAM: 944.9226 MB line before df merge df_mem_usage: 945.4223 MB β Not using multiprocessing In order to "reset" memory every iteration, I am using multiprocessing. However I wanted to be sure that this does not cause troubles. I have removed it and called the add_supertrend directly. But it ended up in OOM, so I do not think this is the problem. Real data As suggested by Lukasz Tracewski, I am sharing real data which are causing the OOM crash. Since they are in parquet format, I cannot use services like pastebin and I am using GDrive instead. I will use this folder to share any other stuff related to this question/issue. GDrive folder β Upgrade pandas to 2.2.1 Sometimes plain pacakge upgrade might help, so I have decide to try using upgrading pandas to 2.2.1 and also fastparquet to 2024.2.0 (newer pandas required newer fastparquet). pyarrow was also updated to 15.0.0. It seemed to work during first few iterations, but than crashed with OOM again. β Using Dask I remembered that when I used to solve complex operations with dataframes, I used dask. So I tried to use it in this case as well. Without success. OOM again. Using dask 2024.3.1. import dask.dataframe as dd # mem usage 986.452 MB ddf1 = dd.from_pandas(df) # mem usage 1015.37 MB ddf2 = dd.from_pandas(st) # mem usage 1019.50 MB df_dask = dd.merge(ddf1, ddf2, on='datetime') # mem usage 1021.56 MB df = df_dask.compute() <- here it crashes Β―\_(γ)_/Β― π‘ Duplicated datetimes During investigating data with dask, I have noticed that there are duplicate records for datetime columns. 
This is definitely wrong, datetime has to be unique. I think this might cause the issue. I will investigate that further. df.tail(10) datetime open high low close volume 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 I have implemented a fix which removes duplicate records in the other component that prepares data. Fix looks like this and I will monitor whether this will help or not. # Append gathered data to df and write to file df = pd.concat([df, fresh_data]) # Drop duplicates df = df.drop_duplicates(subset=["datetime"]) | In order to close this question, I have figured out that the issue was caused by duplicated datetimes in dataframe. This caused some weird bugs on dataframe merge on datetime column. So I have fixed the data and it works fine now. Duplicated datetimes During investigating data with dask, I have noticed that there are duplicate records for datetime columns. This is definitely wrong, datetime has to be unique. I think this might cause the issue. I will investigate that further. df.tail(10) datetime open high low close volume 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408 | 2 | 1 |
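The root cause identified above - duplicated join keys - is easy to reproduce in isolation: with k copies of a key on each side, pd.merge produces k*k output rows for that key. A small self-contained sketch (toy data, not the question's files) showing the blow-up and the drop_duplicates fix:

import pandas as pd

# Toy frames with the same timestamp repeated 1000 times on each side.
ts = pd.to_datetime(["2024-02-26 02:55:00"] * 1000)
left = pd.DataFrame({"datetime": ts, "close": 0.2347})
right = pd.DataFrame({"datetime": ts, "trend_orientation": 1})

merged = pd.merge(left, right, on="datetime")
print(len(merged))    # 1000000 rows: every copy of the key pairs with every copy

# De-duplicating the key first keeps the merge size proportional to the inputs.
merged_ok = pd.merge(left.drop_duplicates("datetime"),
                     right.drop_duplicates("datetime"), on="datetime")
print(len(merged_ok))  # 1 row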
78,225,920 | 2024-3-26 | https://stackoverflow.com/questions/78225920/why-nextitertrain-dataloader-takes-long-execution-time-in-pytorch | I am trying to load a local dataset with images (around 225 images in total) using the following code: # Set the batch size BATCH_SIZE = 32 # Create data loaders train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders( train_dir=train_dir, test_dir=test_dir, transform=manual_transforms, # use manually created transforms batch_size=BATCH_SIZE ) # Get a batch of images image_batch, label_batch = next(iter(train_dataloader)) # why it takes so much time? what can I do about it? My question concerns the last line of the code and the iteration in the train_dataloader which takes long execution time. Why is this the case? I have only 225 images. Edit: The code for the dataloader can be found in the following link. import os from torchvision import datasets, transforms from torch.utils.data import DataLoader import pdb NUM_WORKERS = os.cpu_count() def create_dataloaders( train_dir: str, test_dir: str, transform: transforms.Compose, batch_size: int, num_workers: int=NUM_WORKERS ): # Use ImageFolder to create dataset(s) train_data = datasets.ImageFolder(train_dir, transform=transform) test_data = datasets.ImageFolder(test_dir, transform=transform) # Get class names class_names = train_data.classes # Turn images into data loaders train_dataloader = DataLoader( train_data, batch_size=batch_size, shuffle=True, num_workers=num_workers, pin_memory=True, ) test_dataloader = DataLoader( test_data, batch_size=batch_size, shuffle=False, # don't need to shuffle test data num_workers=num_workers, pin_memory=True, ) return train_dataloader, test_dataloader, class_names | The main reason the next(iter(train_dataloader) call is slow is due to multiprocessing - or to the pittfalls of multiprocessing. When num_workers > 0, the call to iter(train_dataloader) will fork the main Python process (the current script), which means that any time-consuming code that occurs during import before the call to iter(...), such as any kind of file loading that happens in global scope (!), will cause an extra slow down. That is, extra on top of the process creation time and on top of the serialization and deserialization of data that needs to happen when next(iter(...)) is called. You can verify this by adding time.sleep(5) in global scope anywhere before calling next(iter(train_dataloader)). You'll then see that the call will be 5 sec slower than it already was. Unfortunately, I don't know how to fix this for the torch DataLoader, apart from either (1) set num_workers=0, or (2) make sure you don't have time-consuming code during the import of the main script, or (3) don't use the torch DataLoader, but use the HuggingFace dataset interfaces. Update: There does not seem to be a work-around here. If you have the following code (in the same script): dataloader = create_dataloader(...) # similar to the OPs code for x in dataloader: ... or also if you initialized the dataloader in some other module and use something like from other_module import dataloader a, b = next(iter(dataloader)) then the fork (that is triggered by starting to iterate) will cause re-initialization of the dataloader (and its underlying datasets, reading everything from disk again). So, it appears that it only makes sense to use num_workers=1 (or higher) if data actually needs to be downloaded from remote servers. 
If all the data is already on the local machine then, as I understand it, it never makes sense to set num_workers=1 (or higher) in this API. (I'm not totally sure here, since I'm not familiar with the underlying torch implementation. Conceivably it could also make sense when the transform method is much slower than the serialization/deserialization part of the code.) | 2 | 2 |
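A rough way to see the worker start-up cost described above is to time the first batch with and without workers on a toy dataset. A minimal sketch (the function, tensor sizes and batch size are my own choices; the exact timings depend on the platform and multiprocessing start method):

import time
import torch
from torch.utils.data import DataLoader, TensorDataset

def time_first_batch(num_workers):
    # Small synthetic image-like dataset, so the only real cost is the loader itself.
    data = TensorDataset(torch.randn(512, 3, 64, 64), torch.zeros(512, dtype=torch.long))
    loader = DataLoader(data, batch_size=32, shuffle=True, num_workers=num_workers)
    start = time.perf_counter()
    next(iter(loader))
    return time.perf_counter() - start

if __name__ == "__main__":   # the guard matters when workers are spawned
    for nw in (0, 2):
        print(f"num_workers={nw}: first batch in {time_first_batch(nw):.3f}s")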
78,192,426 | 2024-3-20 | https://stackoverflow.com/questions/78192426/how-to-use-solr-as-retriever-in-rag | I want to build a RAG (Retrieval Augmented Generation) service with LangChain and for the retriever I want to use Solr. There is already a python package eurelis-langchain-solr-vectorstore where you can use Solr in combination with LangChain but how do I define server credentials? And my embedding model is already running on a server. I thought something like this but I don't know import requests from eurelis_langchain_solr_vectorstore import Solr embeddings_model = requests.post("http://server-insight/embeddings/") solr = Solr(embeddings_model, core_kwargs={ 'page_content_field': 'text_t', # field containing the text content 'vector_field': 'vector', # field containing the embeddings of the text content 'core_name': 'langchain', # core name 'url_base': 'http://localhost:8983/solr' # base url to access solr }) # with custom default core configuration retriever = solr.as_retriever() | For the first question: For basic credentials you can send them in the url with the login:password@ pattern http://localhost:8983/solr => http://login:password@localhost:8983/solr For the second one: to use your embeddings server you need to provide the Solr vector store with a class inheriting from langchain_core.embeddings.embeddings.Embeddings it must then implement both def embed_documents(self, texts: List[str]) -> List[List[float]]: """Embed search docs.""" and def embed_query(self, text: str) -> List[float]: """Embed query text.""" in both methods you can use your http://server-insight/embeddings/ endpoint. First method is used at indexing time and is intended to work with a list of text and return a list of embeddings second one is used at query time and is intended to work with a single text and return a single embedding (a single list of float) | 2 | 1 |
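A sketch of the wrapper class the answer describes, assuming (hypothetically) that the http://server-insight/embeddings/ endpoint accepts JSON of the form {"texts": [...]} and returns {"embeddings": [...]} - the question does not say what the real request/response shape is:

from typing import List
import requests
from langchain_core.embeddings import Embeddings

class RemoteEmbeddings(Embeddings):
    """Wraps a remote embedding endpoint; the JSON contract used here is assumed."""

    def __init__(self, url: str):
        self.url = url

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # Assumed contract: POST {"texts": [...]} -> {"embeddings": [[...], ...]}
        resp = requests.post(self.url, json={"texts": texts}, timeout=60)
        resp.raise_for_status()
        return resp.json()["embeddings"]

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]

# embeddings_model = RemoteEmbeddings("http://server-insight/embeddings/")
# solr = Solr(embeddings_model, core_kwargs={...})  # as in the question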
78,211,526 | 2024-3-23 | https://stackoverflow.com/questions/78211526/pytorch-attributeerror-torch-dtype-object-has-no-attribute-itemsize | I am trying to follow this article on Medium (Article). I had a few problems with it, so the remaining change I made was to the TrainingArguments object: I added gradient_checkpointing_kwargs={'use_reentrant':False}. So now I have the following objects: peft_training_args = TrainingArguments( output_dir = output_dir, warmup_steps=1, per_device_train_batch_size=1, gradient_accumulation_steps=4, max_steps=100, #1000 learning_rate=2e-4, optim="paged_adamw_8bit", logging_steps=25, logging_dir="./logs", save_strategy="steps", save_steps=25, evaluation_strategy="steps", eval_steps=25, do_eval=True, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant':False}, report_to="none", overwrite_output_dir = 'True', group_by_length=True, ) peft_model.config.use_cache = False peft_trainer = transformers.Trainer( model=peft_model, train_dataset=train_dataset, eval_dataset=eval_dataset, args=peft_training_args, data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) And when I call peft_trainer.train() I get the following error: AttributeError: 'torch.dtype' object has no attribute 'itemsize' I'm using Databricks, and my pytorch version is 2.0.1+cu118 | I was able to recreate your problem on Databricks with the following cluster: Runtime: 14.1 ML (includes Apache Spark 3.5.0, GPU, Scala 2.12) Worker Type: Standard_NC16as_T4_v3 / Standard_NC6s_vs Driver Type: Standard_NC16as_T4_v3 / Standard_NC6s_vs And then, building on top of all the answers here already, I was able to overcome your problem by doing the following: 1. Upgrade your transformers library via: !pip install --upgrade git+https://github.com/huggingface/transformers 2. Upgrade your torch version via: !pip install --upgrade torch torchvision 3. Upgrade your accelerate version via: !pip install --upgrade accelerate 4. Use a specific version of the datasets library via: !pip install datasets==2.16.0 I'm not sure if it matters, but the order in which I ran the commands above is: 4 >> 1 >> 3 >> 2 This makes your problem go away and works with both transformers.Trainer and also SFTTrainer, which I saw imported in your article but never used. | 2 | 1 |
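Before re-running training it can help to confirm what actually got installed; the traceback complains about torch.dtype.itemsize, which as far as I know only exists from PyTorch 2.1 onwards. A small check (the package names are real; the interpretation of the missing attribute is mine):

import importlib.metadata as md
import torch

print("torch:", torch.__version__)
for pkg in ("transformers", "accelerate", "datasets"):
    try:
        print(pkg + ":", md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg + ": not installed")

# torch.dtype.itemsize is what the traceback complains about; it is missing on 2.0.x.
print("dtype.itemsize available:", hasattr(torch.float32, "itemsize"))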
78,221,046 | 2024-3-25 | https://stackoverflow.com/questions/78221046/simultaneous-spacing-and-duration-constraints-with-time-gaps-in-gekko | I'm trying to simultaneously enforce sequential duration and spacing constraints to vector solution output in Gekko. Normally, this would be fairly straightforward using window logic, but my time array (in weeks) has gaps per the "week" array below (e.g., it goes [13, 14, 17...]). I was able to get the spacing requirement (s) in weeks to work by looking up the index of the next sequential week using the code below (seems to work for all values of "s"), but I'm unsure how to factor in "d" (the number of consecutive weeks that must be run) into the existing solution (complete reproducible example below). The general solution output I'm looking for would look like this for s=1 and d=2: [13, 14, 17, 18, 33, 34, 50, 51...] import numpy as np import pandas as pd from gekko import GEKKO m = GEKKO(remote=False) m.options.NODES = 3 m.options.IMODE = 3 m.options.MAX_ITER = 1000 lnuc_weeks = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0] min_promo_price = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,3] max_promo_price = [3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5,3.5, 3.5, 3.5, 3.5, 3.5, 3.5] base_srp = [3.48, 3.48, 3.48, 3.48, 3.0799, 3.0799, 3.0799, 3.0799,3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799] lnuc_min_promo_price = 1.99 lnuc_max_promo_price = 1.99 coeff_fedi = [0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589,0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589] coeff_feao = [0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995] coeff_diso = [0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338] sumproduct_base = [0.20560305, 0.24735297, 0.24957423, 0.23155435, 0.23424058,0.2368096 , 0.27567109, 0.27820648, 0.2826393 , 0.28660598, 0.28583971, 0.30238505, 0.31726649, 0.31428312, 0.31073792, 0.29036779, 0.32679041, 0.32156337, 0.24633734] neg_ln = [[0.14842000515],[0.14842000512],[0.14842000515],[0.14842000512],[-0.10407483058],[0.43676249024],[0.43676249019],[0.43676249024],[0.43676249019],[0.43676249024],[0.43676249019], [0.026284840258],[0.026284840291],[0.026284840258],[0.026284840291], [0.026185109811],[0.026284840258],[0.026284840291],[0.026284840258]] neg_ln_ppi_coeff = [1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879,1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879,1.22293879, 1.22293879, 1.22293879, 1.22293879] base_volume = [124.38, 193.2, 578.72, 183.88, 197.42, 559.01, 67.68, 110.01,60.38, 177.11, 102.65, 66.02, 209.83, 81.22, 250.44, 206.44, 87.99, 298.95, 71.07] week = pd.Series([13, 14, 17, 18, 19, 26, 28, 33, 34, 35, 39, 42, 45, 46, 47, 48, 50, 51, 52]) n = 19 x1 = m.Array(m.Var,(n), integer=True) #LNUC weeks i = 0 for xi in x1: xi.value = lnuc_weeks[i] xi.lower = 0 xi.upper = lnuc_weeks[i] i += 1 x2 = m.Array(m.Var,(n)) #Blended SRP i = 0 for xi in x2: xi.value = 5 m.Equation(xi >= m.if3((x1[i]) - 0.5, min_promo_price[i], lnuc_min_promo_price)) 
m.Equation(xi <= m.if3((x1[i]) - 0.5, max_promo_price[i], lnuc_max_promo_price)) i += 1 x3 = m.Array(m.Var,(n), integer=True) #F&D x4 = m.Array(m.Var,(n), integer=True) #FO x5 = m.Array(m.Var,(n), integer=True) #DO x6 = m.Array(m.Var,(n), integer=True) #TPR #Default to F&D i = 0 for xi in x3: xi.value = 1 xi.lower = 0 xi.upper = 1 i += 1 i = 0 for xi in x4: xi.value = 0 xi.lower = 0 xi.upper = 1 i += 1 i = 0 for xi in x5: xi.value = 0 xi.lower = 0 xi.upper = 1 i += 1 i = 0 for xi in x6: xi.value = 0 xi.lower = 0 xi.upper = 1 i += 1 x7 = m.Array(m.Var,(n), integer=True) #Max promos i = 0 for xi in x7: xi.value = 1 xi.lower = 0 xi.upper = 1 i += 1 x = [x1,x2,x3,x4,x5,x6,x7] neg_ln=[m.Intermediate(-m.log(x[1][i]/base_srp[i])) for i in range(n)] total_vol_fedi =[m.Intermediate(coeff_fedi[0]+ sumproduct_base[i] + (neg_ln[i]*neg_ln_ppi_coeff[0])) for i in range(n)] total_vol_feao =[m.Intermediate(coeff_feao[0]+ sumproduct_base[i] + (neg_ln[i]*neg_ln_ppi_coeff[0])) for i in range(n)] total_vol_diso =[m.Intermediate(coeff_diso[0]+ sumproduct_base[i] + (neg_ln[i]*neg_ln_ppi_coeff[0])) for i in range(n)] total_vol_tpro =[m.Intermediate(sumproduct_base[i] + (neg_ln[i]*neg_ln_ppi_coeff[0])) for i in range(n)] simu_total_volume = [m.Intermediate(( (m.max2(0,base_volume[i]*(m.exp(total_vol_fedi[i])-1)) * x[2][i] + m.max2(0,base_volume[i]*(m.exp(total_vol_feao[i])-1)) * x[3][i] + m.max2(0,base_volume[i]*(m.exp(total_vol_diso[i])-1)) * x[4][i] + m.max2(0,base_volume[i]*(m.exp(total_vol_tpro[i])-1)) * x[5][i]) + base_volume[i]) * x[6][i]) for i in range(n)] [m.Equation(x3[i] + x4[i] + x5[i] + x6[i] == 1) for i in range(i)] #Limit max promos m.Equation(sum(x7)<=10) #Enforce spacing and duration d=2 s=1 for s2 in range(1, s+1): for i in range(0, n-s2): f = week[week == week[i] + s2].index if len(f) > 0: m.Equation(x7[i] + x7[f[0]]<=1) m.Maximize(m.sum(simu_total_volume)) m.options.SOLVER=1 m.solve(disp = True) df = pd.concat([pd.Series(week), pd.Series([i[0] for i in x7]), pd.Series([i[0] for i in simu_total_volume])], axis=1) df.columns = ['week', 'x7', 'total_volume'] df[df['x7']>0] | Below is a minimal example of duration and spacing constraints that may help. The decision variable is when to start the promo. The promo selection is post-processed after st is optimized. from gekko import GEKKO import numpy as np # Initialize the model m = GEKKO(remote=False) # Define weeks and parameters weeks = np.arange(1,11) # Week numbers [1,2,...,9,10] n_weeks = len(weeks) # Number of weeks d = 3 # Duration constraint: number of consecutive weeks s = 2 # Spacing constraint: number of weeks between events # Define variables # start promo location st = m.Array(m.Var,n_weeks,integer=True,lb=0,ub=1) # Objective Function m.Maximize(sum(st)) # Always start on first week available m.fix(st[0],1) # Don't start at the end if duration constraint doesn't allow it for i in range(n_weeks-d+1,n_weeks): m.Equation(st[i]==0) # Spacing Constraint for i in range(n_weeks-d-s): m.Equation(sum(st[i:i+d+s]) <= 1) # Solve the problem m.options.SOLVER=1 m.solve(disp=True) # Output the solution print("Weeks:", weeks) print("Start Promo:", [int(si.VALUE[0]) for si in st]) x = np.zeros(n_weeks) for i in range(0,n_weeks-d+1): if (int(st[i].VALUE[0])==1): x[i:i+d]=1 print("Promo occurrence:", [int(xi) for xi in x]) The solution respects the duration (d=3) and spacing (s=2) constraints. The start of a promo is not scheduled unless there is sufficient duration to meet the minimum duration constraint. 
Weeks: [ 1 2 3 4 5 6 7 8 9 10] Start Promo: [1, 0, 0, 0, 0, 1, 0, 0, 0, 0] Promo occurrence: [1, 1, 1, 0, 0, 1, 1, 1, 0, 0] | 2 | 1 |
78,210,393 | 2024-3-23 | https://stackoverflow.com/questions/78210393/cannot-import-name-linear-util-from-jax | I'm trying to reproduce the experiments of the S5 model, https://github.com/lindermanlab/S5, but I encountered some issues when solving the environment. When I'm running the shell script./run_lra_cifar.sh, I get the following error Traceback (most recent call last): File "/Path/S5/run_train.py", line 3, in <module> from s5.train import train File "/Path/S5/s5/train.py", line 7, in <module> from .train_helpers import create_train_state, reduce_lr_on_plateau,\ File "/Path/train_helpers.py", line 6, in <module> from flax.training import train_state File "/Path/miniconda3/lib/python3.12/site-packages/flax/__init__.py", line 19, in <module> from . import core File "/Path/miniconda3/lib/python3.12/site-packages/flax/core/__init__.py", line 15, in <module> from .axes_scan import broadcast File "/Path/miniconda3/lib/python3.12/site-packages/flax/core/axes_scan.py", line 22, in <module> from jax import linear_util as lu ImportError: cannot import name 'linear_util' from 'jax' (/Path/miniconda3/lib/python3.12/site-packages/jax/__init__.py) I'm running this on an RTX4090 and my CUDA version is 11.8. My jax version is 0.4.25 and jaxlib version is 0.4.25+cuda11.cudnn86 I first tried to install the dependencies using the author's pip install -r requirements_gpu.txt However, this doesn't seem to work in my case since I can't evenimport jax. So I installed jax according to the instructions on https://jax.readthedocs.io/en/latest/installation.html by typing pip install --upgrade pip pip install --upgrade "jax[cuda11_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html So far I've tried: Using a older GPU(3060 and 2070) Downgrading python to 3.9 Does anyone know what could be wrong? Any help is appreciated | jax.linear_util was deprecated in JAX v0.4.16 and removed in JAX v0.4.24. It appears that flax is the source of the linear_util import, meaning that you are using an older flax version with a newer jax version. To fix your issue, you'll either need to install an older version of JAX which still has jax.linear_util, or update to a newer version of flax which is compatible with more recent JAX versions. | 3 | 1 |
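A small guard that makes the version mismatch the answer describes visible at runtime; the 0.4.24 cutoff comes from the answer, the reporting code is just an illustration:

import importlib.metadata as md

try:
    from jax import linear_util  # removed in jax 0.4.24
    print("jax.linear_util is still available")
except ImportError:
    print("jax", md.version("jax"), "no longer exposes jax.linear_util;")
    print("flax", md.version("flax"), "is too old for it - upgrade flax or pin jax<0.4.24")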
78,202,488 | 2024-3-21 | https://stackoverflow.com/questions/78202488/combination-of-non-overlapping-interval-pairs | I recently did a coding challenge where I was tasked to return the number of unique interval pairs that do not overlap when given the starting points in one list and the ending points in one list. I was able to come up with an n^2 solution and eliminated duplicates by using a set to hash each entry tuple of (start, end). I was wondering if there was a more efficient way of approaching this, or this is the best I could do: def paperCuttings(starting, ending): # Pair each start with its corresponding end and sort intervals = sorted(zip(starting, ending), key=lambda x: x[1]) non_overlaps = set() print(intervals) # Store valid combinations for i in range(len(intervals)): for j in range(i+1, len(intervals)): # If the ending of the first is less than the starting of the second, they do not overlap if intervals[i][1] < intervals[j][0]: non_overlaps.add((intervals[i], intervals[j])) return len(non_overlaps) starting = [1,1,6,7] ending = [5,3,8,10] print(paperCuttings(starting, ending)) # should return 4 starting2 = [3,1,2,8,8] ending2 = [5, 3, 7, 10, 10] print(paperCuttings(starting2, ending2)) # should return 3 I ask because I timed out in some hidden test cases | This is a O(n*log n) solution in Ruby (n being the number of intervals). I will include a detailed explanation that should make conversion of the code to Python straightforward. I assume that non-overlapping intervals have no points in common, not even endpoints1. def paperCuttings(starting, ending) # Compute an array of unique intervals sorted by the beginning # of each interval intervals = starting.zip(ending).uniq.sort n = intervals.size count = 0 # Loop over the indices of all but the last interval. # The interval at index i of intervals will be referred to # below as "interval i" (0..n-2).each do |i| # intervals[i] is interval i, an array containing its two # endpoints. Extract the second endpoint to the variable endpoint _, endpoint = intervals[i] # Employ a binary search to find the index of the first # interval j > i for which intervals[j].first > endpoint, # where intervals[j].first is the beginning of interval j k = (i+1..n-1).bsearch { |j| intervals[j].first > endpoint } # k equals nil if no such interval is found, in which case # continue the loop the next interval i next if k == nil # As intervals i and k are non-overlapping, interval i is # non-overlapping with all intervals l, k <=l<= n-1, of which # there are n-k, so add n-k to count count = count + n - k end # return count count end Try it. starting = [1, 1, 6, 7] ending = [5, 3, 8, 10] paperCuttings(starting, ending) #=> 4 starting = [3, 1, 2, 8, 8] ending = [5, 3, 7, 10, 10] paperCuttings(starting, ending) #=> 3 Here I will explain the calculation intervals = starting.zip(ending).uniq.sort for starting = [3, 1, 2, 8, 8] ending = [5, 3, 7, 10, 10] a = starting.zip(ending) #=> [[3, 5], [1, 3], [2, 7], [8, 10], [8, 10]] b = a.uniq #=> [[3, 5], [1, 3], [2, 7], [8, 10]] b.sort #=> [[1, 3], [2, 7], [3, 5], [8, 10]] The removal of duplicates is required by the statement of the problem. The elements of b are sorted by their first elements. Had there been two arrays with the same first element the second element would be used as the tie-breaker, though that's not important here. The documentation for Ruby's binary search method (over a range) is here. 
Binary searches have a time complexity of O(log n), which accounts for the log term in the overall time complexity of O(n*log n). 1. If intervals that share only a single endpoint are regarded as non-overlapping, change intervals[j].first > endpoint to intervals[j].first >= endpoint in the binary search. | 2 | 4 |
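As the answer notes, the Ruby converts directly to Python; a sketch using bisect with the same strict non-overlap convention, checked against both examples from the question:

from bisect import bisect_right

def paper_cuttings(starting, ending):
    intervals = sorted(set(zip(starting, ending)))   # unique intervals, sorted by start
    starts = [s for s, _ in intervals]
    n, count = len(intervals), 0
    for i, (_, end) in enumerate(intervals):
        k = bisect_right(starts, end, lo=i + 1)      # first later interval starting strictly after `end`
        count += n - k                                # it and everything after it are non-overlapping with i
    return count

print(paper_cuttings([1, 1, 6, 7], [5, 3, 8, 10]))        # 4
print(paper_cuttings([3, 1, 2, 8, 8], [5, 3, 7, 10, 10])) # 3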
78,227,090 | 2024-3-26 | https://stackoverflow.com/questions/78227090/get-own-pyproject-toml-dependencies-programatically | I use a pyproject.toml file to list a package's dependencies: [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" [project] name = "foobar" version = "1.0" requires-python = ">=3.8" dependencies = [ "requests>=2.0", "numpy", "tomli;python_version<'3.11'", ] Is is possible, from within the package, to get the list of its own dependencies as strings? In the above case, it should give ["requests", "numpy"] if used with Python>=3.11, and ["requests", "numpy", "tomli"] otherwise. | Something along the lines of the following should do the trick: import importlib.metadata import packaging.requirements def _get_dependencies(name): rd = metadata(name).get_all('Requires-Dist') deps = [] for req in rd: req = packaging.requirements.Requirement(req) if req.marker is not None and not req.marker.evaluate(): continue deps.append(req.name) return deps References: https://docs.python.org/3/library/importlib.metadata.html#distribution-metadata https://importlib-metadata.readthedocs.io/en/latest/api.html#importlib_metadata.metadata https://packaging.pypa.io/en/stable/requirements.html | 2 | 1 |
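The snippet above calls metadata(name) without importing that name into scope; a fully qualified, runnable variant with example usage, assuming the package from the question's pyproject.toml is installed under the name foobar:

from importlib.metadata import metadata
from packaging.requirements import Requirement

def get_dependencies(name):
    deps = []
    for req_str in metadata(name).get_all("Requires-Dist") or []:
        req = Requirement(req_str)
        # keep a requirement if it has no marker, or its marker matches this interpreter
        if req.marker is None or req.marker.evaluate():
            deps.append(req.name)
    return deps

# On Python >= 3.11 this prints ['requests', 'numpy']; otherwise it also includes 'tomli'.
print(get_dependencies("foobar"))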
78,228,212 | 2024-3-26 | https://stackoverflow.com/questions/78228212/python-converting-dateprice-list-to-new-rows | I am trying to convert the following column into new rows: Id Prices 001 ["March:59", "April:64", "May:62"] 002 ["Jan:55", ETC] to id date price 001 March 59 001 April 64 001 May 62 002 Jan 55 The date:price pairs aren't stored in a traditional dictionary format like the following solution: Convert dictionary keys to rows and show all the values in single column using Pandas I managed to get the key:value pairs into individual rows like: Id Prices 001 March:59 001 April:64 And could split these into two columns using string manipulation but this feels inefficient instead of actually using the key:value pairs. Can anyone help please? | If you have valid lists, explode and split: df = pd.DataFrame({'Id': ['001', '002'], 'Prices': [["March:59", "April:64", "May:62"], ["Jan:55"]]}) out = df.explode('Prices') out[['date', 'price']] = out.pop('Prices').str.split(':', expand=True) If you have strings, str.extractall with a regex and join: df = pd.DataFrame({'Id': ['001', '002'], 'Prices': ['["March:59", "April:64", "May:62"]', '["Jan:55"]']}) out = (df.drop(columns='Prices') .join(df['Prices'].str.extractall(r'(?P<date>[^":]+):(?P<price>[^":]+)') .droplevel('match')) ) Output: Id date price 0 001 March 59 0 001 April 64 0 001 May 62 1 002 Jan 55 regex demo for the second approach. | 2 | 3 |
78,227,479 | 2024-3-26 | https://stackoverflow.com/questions/78227479/add-a-column-to-a-polars-lazyframe-based-on-a-group-by-aggregation-of-another-co | I have a LazyFrame of time, symbols and mid_price: Example: time symbols mid_price datetime[ns] str f64 2024-03-01 00:01:00 "PERP_SOL_USDT@β¦ 126.1575 2024-03-01 00:01:00 "PERP_WAVES_USDβ¦ 2.71235 2024-03-01 00:01:00 "SOL_USDT@BINANβ¦ 126.005 2024-03-01 00:01:00 "WAVES_USDT@BINβ¦ 2.7085 2024-03-01 00:02:00 "PERP_SOL_USDT@β¦ 126.3825 I want to perform some aggregations over the time dimension (ie: group by symbol): aggs = ( df .group_by('symbols') .agg([ pl.col('mid_price').diff(1).alias("change"), ]) ) I get back a list of each value per unique symbols value: >>> aggs.head().collect() symbols change str list[f64] "SOL_USDT@BINANβ¦ [null, 0.25, β¦ -0.55] "PERP_SOL_USDT@β¦ [null, 0.225, β¦ -0.605] "WAVES_USDT@BINβ¦ [null, -0.002, β¦ -0.001] "PERP_WAVES_USDβ¦ [null, -0.00255, β¦ 0.0001] I would now like to join this back onto my original dataframe: df = df.join( aggs, on='symbols', how='left', ) This now results in each row getting the full list of change, rather then the respective value. >>> df.head().collect() time symbols mid_price change datetime[ns] str f64 list[f64] 2024-03-01 00:01:00 "PERP_SOL_USDT@β¦ 126.1575 [null, 0.225, β¦ -0.605] 2024-03-01 00:01:00 "PERP_WAVES_USDβ¦ 2.71235 [null, -0.00255, β¦ 0.0001] 2024-03-01 00:01:00 "SOL_USDT@BINANβ¦ 126.005 [null, 0.25, β¦ -0.55] 2024-03-01 00:01:00 "WAVES_USDT@BINβ¦ 2.7085 [null, -0.002, β¦ -0.001] 2024-03-01 00:02:00 "PERP_SOL_USDT@β¦ 126.3825 [null, 0.225, β¦ -0.605] I have 2 questions please: How do I unstack/explode the lists returned from my group_by when joining them back into the original dataframe? Is this the recommended way to add a new column to my original dataframe from a group_by (that is: group_by followed by join)? | It sounds like you don't want to actually aggregate anything (and get a single value per symbol), but instead want to compute "change" but independently for each symbol. In polars, this kind of behaviour, similar to window functions in PostgreSQL, can be achieved with pl.Expr.over. df.with_columns( pl.col("mid_price").diff(1).over("symbol").alias("change") ) On some example data, the resolt looks as follows. import polars as pl import numpy as np import datetime df = pl.DataFrame({ "symbol": ["A"] * 3 + ["B"] * 3 + ["C"] * 3, "time": [datetime.datetime(2024, 3, 1, hour) for hour in range(3)] * 3, "mid_price": np.random.randn(9), }) df.with_columns( pl.col("mid_price").diff(1).over("symbol").alias("change") ) shape: (9, 4) ββββββββββ¬ββββββββββββββββββββββ¬ββββββββββββ¬ββββββββββββ β symbol β time β mid_price β change β β --- β --- β --- β --- β β str β datetime[ΞΌs] β f64 β f64 β ββββββββββͺββββββββββββββββββββββͺββββββββββββͺββββββββββββ‘ β A β 2024-03-01 00:00:00 β -0.349863 β null β β A β 2024-03-01 01:00:00 β 0.093732 β 0.443595 β β A β 2024-03-01 02:00:00 β -1.262064 β -1.355796 β β B β 2024-03-01 00:00:00 β 1.953929 β null β β B β 2024-03-01 01:00:00 β 0.637582 β -1.316348 β β B β 2024-03-01 02:00:00 β 1.009401 β 0.37182 β β C β 2024-03-01 00:00:00 β 0.75864 β null β β C β 2024-03-01 01:00:00 β -0.866227 β -1.624867 β β C β 2024-03-01 02:00:00 β -0.674938 β 0.191289 β ββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββ΄ββββββββββββ | 2 | 3 |
78,225,953 | 2024-3-26 | https://stackoverflow.com/questions/78225953/why-is-if-x-is-none-pass-faster-than-x-is-none-alone | Timing results in Python 3.12 (and similar with 3.11 and 3.13 on different machines): When x = None: 13.8 ns x is None 10.1 ns if x is None: pass When x = True: 13.9 ns x is None 11.1 ns if x is None: pass How can doing more take less time? Why is if x is None: pass faster, when it does the same x is None check and then additionally checks the truth value of the result (and does or skips the pass)? Times on other versions/machines: Python 3.11: (12.4, 9.3) and (12.0, 8.8) Python 3.13: (12.7, 9.9) and (12.7, 9.6) Benchmark script (Attempt This Online!): from timeit import repeat import sys for x in None, True: print(f'When {x = }:') for code in ['x is None', 'if x is None: pass'] * 2: t = min(repeat(code, f'{x=}', repeat=100)) print(f'{t*1e3:4.1f} ns ', code) print() print('Python:', sys.version) | Look at the disassembled code: >>> import dis >>> dis.dis('if x is None: pass') 0 0 RESUME 0 1 2 LOAD_NAME 0 (x) 4 POP_JUMP_IF_NOT_NONE 1 (to 8) 6 RETURN_CONST 0 (None) >> 8 RETURN_CONST 0 (None) >>> dis.dis('x is None') 0 0 RESUME 0 1 2 LOAD_NAME 0 (x) 4 LOAD_CONST 0 (None) 6 IS_OP 0 8 RETURN_VALUE The if case has a special POP_JUMP_IF_NOT_NONE operation, which is faster than a LOAD_CONST plus IS_OP. You can read the detailed discussion about it here: https://github.com/faster-cpython/ideas/discussions/154. | 5 | 16 |
78,225,397 | 2024-3-26 | https://stackoverflow.com/questions/78225397/replace-chars-in-existing-column-names-without-creating-new-columns | I am reading a csv file and need to normalize the column names as part of a larger function chaining operation. I want to do everything with function chaining. When using the recommended name.map function for replacing chars in columns like: import polars as pl df = pl.DataFrame( {"A (%)": [1, 2, 3], "B": [4, 5, 6], "C (Euro)": ["abc", "def", "ghi"]} ).with_columns( pl.all().name.map( lambda c: c.replace(" ", "_") .replace("(%)", "pct") .replace("(Euro)", "euro") .lower() ) ) df.head() I get shape: (3, 6) βββββββββ¬ββββββ¬βββββββββββ¬ββββββββ¬ββββββ¬βββββββββ β A (%) β B β C (Euro) β a_pct β b β c_euro β β --- β --- β --- β --- β --- β --- β β i64 β i64 β str β i64 β i64 β str β βββββββββͺβββββ|βββββββββββ‘ββββββββ‘ββββββ‘βββββββββ‘ β 1 β 4 β "abc" β 1 β 4 β "abc" β β 2 β 5 β "def" β 2 β 5 β "def" β β 3 β 6 β "ghi" β 3 β 6 β"ghi" β βββββββββ΄ββββββ΄βββββββββββ΄ββββββββ΄ββββββ΄βββββββββ instead of the expected shape: (3, 3) βββββββββ¬ββββββ¬βββββββββ β a_pct β b β c_euro β β --- β --- β --- β β i64 β i64 β str β βββββββββͺβββββ|βββββββββ‘ β 1 β 4 β "abc" β β 2 β 5 β "def" β β 3 β 6 β "ghi" β βββββββββ΄ββββββ΄βββββββββ ? How can I replace specific chars in existing column names with function chaining without creating new columns? | You could simply replace DataFrame.with_columns() with DataFrame.select() method: df = pl.DataFrame( {"A (%)": [1, 2, 3], "B": [4, 5, 6], "C (Euro)": ["abc", "def", "ghi"]} ).select( pl.all().name.map( lambda c: c.replace(" ", "_") .replace("(%)", "pct") .replace("(Euro)", "euro") .lower() ) ) βββββββββ¬ββββββ¬βββββββββ β a_pct β b β c_euro β β --- β --- β --- β β i64 β i64 β str β βββββββββͺββββββͺβββββββββ‘ β 1 β 4 β abc β β 2 β 5 β def β β 3 β 6 β ghi β βββββββββ΄ββββββ΄βββββββββ IT would be important to say (as Dean MacGregor mentioned in the comments), that DataFrame.with_columns() always adds columns to the dataframe. The column names might be the same as the ones in the original dataframe, but in that case original columns will be replaced with the new ones. You can see it in the documentation: Add columns to this DataFrame. Added columns will replace existing columns with the same name. DataFrame.select(), on the other hand, selects existing columns of the dataframe. Additionally, if you just want to rename all the columns, it's probably more natural to use DataFrame.rename() instead: ... .rename( lambda c: c.replace(" ", "_") .replace("(%)", "pct") .replace("(Euro)", "euro") .lower() ) βββββββββ¬ββββββ¬βββββββββ β a_pct β b β c_euro β β --- β --- β --- β β i64 β i64 β str β βββββββββͺββββββͺβββββββββ‘ β 1 β 4 β abc β β 2 β 5 β def β β 3 β 6 β ghi β βββββββββ΄ββββββ΄βββββββββ | 3 | 3 |
78,225,517 | 2024-3-26 | https://stackoverflow.com/questions/78225517/how-do-i-update-a-dataframe-column-with-a-formula-that-uses-values-from-other-co | I have a dataset of investments that shows a value for the investment, a coupon rate, and an annual income. Some investments generate no income, and some investments have a coupon rate but the annual income is not computed. How do I update the income column when there is a valid coupon but no number in the income column? Here is some sample data: import pandas as pd import numbers data = {'value':[100, 150, 200, 250], 'coupon':[3,'-',4,5], 'income':[3,'-',8,'-']} df = pd.DataFrame(data) Rows 0 and 2 are complete, and don't need updating. Row 1 has no value for income, and no value for coupon, so it also doesn't need updating. Only the income value in the bottom row should be updated, to 12.5. Like in my dummy data set, the cells without meaningful numbers have dashes in them, they're not empty. I wrote a function to execute this algorithm, but I don't know how to apply it to the dataframe: def filler(value, coupon, income): if ~isinstance(income, numbers.Number) and isinstance(coupon, numbers.Number): income = (coupon * value)/100 return(income) else: pass | You should not use a loop, not use - for missing values. Remove the -, use NaNs, and vectorize your code: df[['coupon', 'income']] = df[['coupon', 'income']].apply(pd.to_numeric, errors='coerce') df['income'] = df['income'].fillna(df['coupon'].mul(df['value']).div(100)) Output: value coupon income 0 100 3.0 3.0 1 150 NaN NaN 2 200 4.0 8.0 3 250 5.0 12.5 | 3 | 3 |
78,224,406 | 2024-3-26 | https://stackoverflow.com/questions/78224406/why-is-the-value-and-the-times-which-the-function-is-called-different-and-how-d | Updated: May I know how recursion works for this similar code? def fib( n ): global cnt cnt += 1 #global cnt is assigned and changed if n <= 2: return 1 return fib( n - 1 ) + fib( n - 2 ) cnt = 0 print(fib( 10 )) print("fib is called", cnt, "times") #109 times! Old question: def power_of_two(n): global cnt cnt += 1 if n == 0: return 1 return 2 * power_of_two(n-1) cnt = 0 print(power_of_two(3)) print("power_of_two is called", cnt, "times") Question: What is the value of cnt and how many times is the function power_of_two() called when the code above is executed? A. 8, 4 times B. 4, 3 times C. 6, 4 times D. 3, 3 times I ran the code and it shows that cnt is 4, so should it be answer B? But for answer B, it says that the code is called 3 times but shouldn't it be 4? If it is 3, may I know why then? I thought of choosing answer A too since it makes more sense, but it asks for the value of cnt but not the value of (power_of_two(3)). Or did I misintepret the question/ did my teacher write the question wrongly? I know this is a basic programming question but thanks a lot for your help :) | it will be 4 times for n=3, Steps for your code. for n==3 your cnt ==1 return 2* #Recursion step 1 for n==2 it will be cnt ==2 return 2*2* #Recursion step 2 for n==1 it will be cnt==3 return 2*2*2* #Recursion step 3 and for last step is the stop recursion with if n==0 statement, so cnt==4 return 2*2*2*1 #Recursion step 4 result will be A) 8, 4 times | 2 | 1 |
78,224,121 | 2024-3-26 | https://stackoverflow.com/questions/78224121/annotate-at-the-top-of-a-marker-with-varying-sizes-in-matplotlib | Can I get the coordinates of markers to move the annotation to the top of the triangle? import matplotlib.pyplot as plt X = [1,2,3,4,5] Y = [1,1,1,1,1] labels = 'ABCDE' sizes = [1000, 1500, 2000, 2500, 3000] fig, ax = plt.subplots() ax.scatter(X, Y, s= sizes, marker = 10) for x, y, label, size in zip(X, Y, labels, sizes): print(x,y) ax.annotate(label, (x, y), fontsize=12) plt.show() Which gives me: | You need to (a) move the y coordinate by something proportional to the height of the marker; (b) root the text at bottom centre ("center" in American!). Note that the "size" of a marker is proportional to its AREA. So its height is proportional to sqrt(size). A certain amount of trial and error produced this. The height scaling probably depends on the type of marker. import math import matplotlib.pyplot as plt X = [1,2,3,4,5] Y = [1,1,1,1,1] labels = 'ABCDE' sizes = [1000, 1500, 2000, 2500, 3000] fig, ax = plt.subplots() ax.scatter(X, Y, s= sizes, marker = 10) for x, y, label, size in zip(X, Y, labels, sizes): print(x,y,size) ax.annotate(label, (x, y + math.sqrt( size ) / 3000 ), horizontalalignment='center', fontsize=12) plt.show() | 2 | 3 |
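For the updated fib part of the question: the call count satisfies calls(n) = 1 + calls(n-1) + calls(n-2) with calls(1) = calls(2) = 1, which reproduces the 109 the question reports. A short check:

from functools import lru_cache

@lru_cache(maxsize=None)
def calls(n):
    # one call for this frame, plus the calls made by the two recursive branches
    return 1 if n <= 2 else 1 + calls(n - 1) + calls(n - 2)

print(calls(10))  # 109, matching the cnt printed after fib(10)
print(calls(3))   # 3 -> power-of-two-style reasoning: one frame per level plus the base case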
78,194,686 | 2024-3-20 | https://stackoverflow.com/questions/78194686/how-to-web-scrape-google-news-headline-of-a-particular-year-e-g-news-from-2020 | I've been exploring web scraping techniques using Python and RSS feed, but I'm not sure how to narrow down the search results to a particular year on Google News. Ideally, I'd like to retrieve headlines, publication dates, and possibly summaries for news articles from a specific year (such as 2020). With the code provided below, I can scrape the current data, but if I try to look for news from a specific year, it isn't available. Even when I use the Google articles search box, the filter only shows results from the previous year. However, when I scroll down, I can see articles from 2013 and 2017. Could someone provide me with a Python script or pointers on how to resolve this problem? Here's what I've attempted so far: import feedparser import pandas as pd from datetime import datetime class GoogleNewsFeedScraper: def __init__(self, query): self.query = query def scrape_google_news_feed(self): formatted_query = '%20'.join(self.query.split()) rss_url = f'https://news.google.com/rss/search?q={formatted_query}&hl=en-IN&gl=IN&ceid=IN%3Aen' feed = feedparser.parse(rss_url) titles = [] links = [] pubdates = [] if feed.entries: for entry in feed.entries: # Title title = entry.title titles.append(title) # URL link link = entry.link links.append(link) # Date pubdate = entry.published date_str = str(pubdate) date_obj = datetime.strptime(date_str, "%a, %d %b %Y %H:%M:%S %Z") formatted_date = date_obj.strftime("%Y-%m-%d") pubdates.append(formatted_date) else: print("Nothing Found!") data = {'URL link': links, 'Title': titles, 'Date': pubdates} return data def convert_data_to_csv(self): d1 = self.scrape_google_news_feed() df = pd.DataFrame(d1) csv_name = self.query + ".csv" csv_name_new = csv_name.replace(" ", "_") df.to_csv(csv_name_new, index=False) if __name__ == "__main__": query = 'forex rate news' scraper = GoogleNewsFeedScraper(query) scraper.convert_data_to_csv() | You can use date filters in your rss_url. modify the query part in the below format Format: q=query+after:yyyy-mm-dd+before:yyyy-mm-dd Example: https://news.google.com/rss/search?q=forex%20rate%20news+after:2023-11-01+before:2023-12-01&hl=en-IN&gl=IN&ceid=IN:en The URL above returns articles related to forex rate news that were published between November 1st, 2023, and December 1st, 2023. Please refer to this article for more information. | 2 | 2 |
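A sketch of how the after:/before: operators from the answer can be slotted into the scraper's URL building; the URL pattern comes from the answer, the helper name is my own, and whether Google keeps honouring these operators is outside the code's control:

def build_rss_url(query, start_date, end_date):
    # Mirrors the answer's format: q=query+after:yyyy-mm-dd+before:yyyy-mm-dd
    q = "%20".join(query.split()) + f"+after:{start_date}+before:{end_date}"
    return f"https://news.google.com/rss/search?q={q}&hl=en-IN&gl=IN&ceid=IN:en"

print(build_rss_url("forex rate news", "2020-01-01", "2020-12-31"))
# then, inside the scraper class: feed = feedparser.parse(build_rss_url(...))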
78,218,094 | 2024-3-25 | https://stackoverflow.com/questions/78218094/how-to-mock-a-python-function-so-that-it-wont-be-called-during-import | I am writing some unit tests (using pytest) for someone else's code which I am not allowed to change or alter in any way. This code has a global variable, that is initialized with a function return outside of any function and it calls a function which (while run locally) raises an error. I cannot share that code, but I've coded a simple file that has the same problem: def annoying_function(): '''Does something that generates exception due to some hardcoded cloud stuff''' raise ValueError() # Simulate the original function raising error due to no cloud connection annoying_variable = annoying_function() def normal_function(): '''Works fine by itself''' return True And this is my test function: def test_normal_function(): from app.annoying_file import normal_function assert normal_function() == True Which fails due to ValueError from annoying_function, because it is still called during the module import. Here's the stack trace: failed: def test_normal_function(): > from app.annoying_file import normal_function test\test_annoying_file.py:6: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ app\annoying_file.py:6: in <module> annoying_variable = annoying_function() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def annoying_function(): '''Does something that generates exception due to some hardcoded cloud stuff''' > raise ValueError() E ValueError app\annoying_file.py:3: ValueError I have tried mocking this annoying_function like this: def test_normal_function(mocker): mocker.patch("app.annoying_file.annoying_function", return_value="foo") from app.annoying_file import normal_function assert normal_function() == True But the result is the same. 
Here's the stack trace: failed: thing = <module 'app' (<_frozen_importlib_external._NamespaceLoader object at 0x00000244A7C72FE0>)> comp = 'annoying_file', import_path = 'app.annoying_file' def _dot_lookup(thing, comp, import_path): try: > return getattr(thing, comp) E AttributeError: module 'app' has no attribute 'annoying_file' ..\..\..\..\.pyenv\pyenv-win\versions\3.10.5\lib\unittest\mock.py:1238: AttributeError During handling of the above exception, another exception occurred: mocker = <pytest_mock.plugin.MockerFixture object at 0x00000244A7C72380> def test_normal_function(mocker): > mocker.patch("app.annoying_file.annoying_function", return_value="foo") test\test_annoying_file.py:5: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .venv\lib\site-packages\pytest_mock\plugin.py:440: in __call__ return self._start_patch( .venv\lib\site-packages\pytest_mock\plugin.py:258: in _start_patch mocked: MockType = p.start() ..\..\..\..\.pyenv\pyenv-win\versions\3.10.5\lib\unittest\mock.py:1585: in start result = self.__enter__() ..\..\..\..\.pyenv\pyenv-win\versions\3.10.5\lib\unittest\mock.py:1421: in __enter__ self.target = self.getter() ..\..\..\..\.pyenv\pyenv-win\versions\3.10.5\lib\unittest\mock.py:1608: in <lambda> getter = lambda: _importer(target) ..\..\..\..\.pyenv\pyenv-win\versions\3.10.5\lib\unittest\mock.py:1251: in _importer thing = _dot_lookup(thing, comp, import_path) ..\..\..\..\.pyenv\pyenv-win\versions\3.10.5\lib\unittest\mock.py:1240: in _dot_lookup __import__(import_path) app\annoying_file.py:6: in <module> annoying_variable = annoying_function() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def annoying_function(): '''Does something that generates exception due to some hardcoded cloud stuff''' > raise ValueError() E ValueError app\annoying_file.py:3: ValueError Also moving the import statement around doesn't affect my result. From what I've read this happens because the mocker (I'm using pytest-mock) has to import the file with the function it is mocking, and during the import of this file the line annoying_variable = annoying_function() runs and in result fails the mocking process. The only way that I've found to make this sort-of work is by mocking the cloud stuff that's causing the error in the original code, but I want to avoid this as my tests kind of stop being unit tests then. Again, I can't modify or alter the original code. I'll be gratefull for any ideas or advice. | As other commenters already noted, the problem that you attempt to solve hints at bigger issues with the code to be tested, so probably giving an answer to how the specific problem can be solved is actually the wrong thing to do. That said, here is a bit of an unorthodox and messy way to do so. It is based on the following ideas: Adjust the source code of annoying_file.py dynamically before importing it, so that annoying_function() will not be called. Given your code example, we can achieve this, for example, by replacing annoying_variable = annoying_function() with annoying_variable = None in the actual source code. Import the dynamically adjusted module rather than the original module. Test normal_function() in the dynamically adjusted module. In the following code, I assume that a module called annoying_file.py contains the annoying_function(), annoying_variable, and normal_function() from your question, both annoying_file.py and the module that contains the code below live in the same folder. 
from ast import parse, unparse, Assign, Constant from importlib.abc import SourceLoader from importlib.util import module_from_spec, spec_from_loader def patch_annoying_variable_in(module_name: str) -> str: """Return patched source code, where `annoying_variable = None`""" with open(f"{module_name}.py", mode="r") as f: tree = parse(f.read()) for stmt in tree.body: # Assign None to `annoying_variable` if (isinstance(stmt, Assign) and len(stmt.targets) == 1 and stmt.targets[0].id == "annoying_variable"): stmt.value = Constant(value=None) break return unparse(tree) def import_from(module_name: str, source_code: str): """Load and return a module that has the given name and holds the given code.""" # Following https://stackoverflow.com/questions/62294877/ class SourceStringLoader(SourceLoader): def get_data(self, path): return source_code.encode("utf-8") def get_filename(self, fullname): return f"{module_name}.py (patched)" loader = SourceStringLoader() mod = module_from_spec(spec_from_loader(module_name, loader)) loader.exec_module(mod) return mod def test_normal_function(): module_name = "annoying_file" patched_code = patch_annoying_variable_in(module_name) mod = import_from(module_name, patched_code) assert mod.normal_function() == True The code achieves the following: With patch_annoying_variable_in(), the original code of annoying_file is parsed. The assignment to annoying_variable is replaced, so that annoying_function() will not be executed. The resulting adjusted source code is returned. With import_from(), the adjusted source code is loaded as a module. Finally, test_normal_function() makes use of the previous two functions to test the dynamically adjusted module. | 3 | 3 |
78,221,500 | 2024-3-25 | https://stackoverflow.com/questions/78221500/python-pandas-subset-dataframe-based-on-non-missing-values-from-a-column | I have a pd dataframe: import pandas as pd column1 = [None,None,None,4,8,9,None,None,None,2,3,5,None] column2 = [None,None,None,None,5,1,None,None,6,3,3,None,None] column3 = [None,None,None,3,None,7,None,None,7,None,None,1,None] df = pd.DataFrame(np.column_stack([column1, column2,column3]),columns=['column1', 'column2', 'column3']) print(df) column1 column2 column3 0 None None None 1 None None None 2 None None None 3 4 None 3 4 8 5 None 5 9 1 7 6 None None None 7 None None None 8 None 6 7 9 2 3 None 10 3 3 None 11 5 None 1 12 None None None I want to subset the rows between the values in column 3, and get rid of all empty rows. My desired outcomes are: print (df1) column1 column2 column3 0 4 None 3 1 8 5 None 2 9 1 7 print(df2) column1 column2 column3 0 None 6 7 1 2 3 None 2 3 3 None 3 5 None 1 I don't care about the actual values column3. Column 3 values are used to indicate "start" and "stop". | You can find the non-na value, then perform a cumulative sum, then mod 2 to get the "groups" of start and one-less-than stop positions. Shifting this by 1, adding to the original, and clipping to (0, 1) gets clumps of the start and stop points. To label the groups, you can take a diff of 1, then clip to (0, 1) again, and cum sum, then multiply those two together. g_small = (~df.column3.isna()).cumsum().mod(2) g = (g_small + g_small .shift(1, fill_value=0)).clip(0,1) groups = g.diff(1).fillna(0).clip(0,1).cumsum().astype(int) * g You can then do a groupby operation on the data frame: dfs = {i: g for i, g in df.groupby(groups) if i > 0} dfs # returns: {1: column1 column2 column3 3 4 None 3 4 8 5 None 5 9 1 7, 2: column1 column2 column3 8 None 6 7 9 2 3 None 10 3 3 None 11 5 None 1} | 2 | 1 |
78,211,119 | 2024-3-23 | https://stackoverflow.com/questions/78211119/how-to-tackle-statement-is-unreachable-unreachable-with-mypy-when-setting-at | Problem description Suppose the following test class Foo: def __init__(self): self.value: int | None = None def set_value(self, value: int | None): self.value = value def test_foo(): foo = Foo() assert foo.value is None foo.set_value(1) assert isinstance(foo.value, int) assert foo.value == 1 # unreachable The test: First, it checks that foo.value is None Then, it sets the value using a method. Then it checks that foo.value has changed. When running the test with mypy version 1.9.0 (latest at the time of writing), and having warn_unreachable set to True, one gets: (venv) niko@niko-ubuntu-home:~/code/myproj$ python -m mypy tests/test_foo.py tests/test_foo.py:16: error: Statement is unreachable [unreachable] Found 1 error in 1 file (checked 1 source file) What I have found There is an open issue in the mypy GitHub: https://github.com/python/mypy/issues/11969 One comment said to use safe-assert, but after rewriting the test as from safe_assert import safe_assert def test_foo(): foo = Foo() safe_assert(foo.value is None) foo.set_value(1) safe_assert(isinstance(foo.value, int)) assert foo.value == 1 the problem persists (safe-assert 0.4.0). This time, both mypy and VS Code Pylance think that foo.set_value(1) two lines above is not reachable. Question How can I tell mypy that foo.value has changed to int and that it should continue checking everything under the assert isinstance(foo.value, int) line? | You can explicitly control type narrowing with the TypeGuard special form (PEP 647). Although normally you would use TypeGuard to narrow a type further than what has already been inferred, you can use it to 'narrow' to whatever type you choose, even if it is different or broader than the type checker has already inferred. In this case, we'll write a function _value_is_set which is annotated with a return type of TypeGuard[int] such that type checkers like mypy will infer a type of int for values 'type guarded' under calls to this function (e.g., an assert or if expression). from typing import TypeGuard, Any # ... def _value_is_set(value: Any) -> TypeGuard[int]: if isinstance(value, int): return True return False def test_foo(): foo = Foo() assert foo.value is None foo.set_value(1) assert _value_is_set(foo.value) # the next line is redundant now, but can be kept without issue assert isinstance(foo.value, int) assert foo.value == 1 # now reachable, according to mypy Normally, mypy should treat assert isinstance(...) or if isinstance(...) in a similar way. But for whatever reason, it doesn't in this case. Using TypeGuard, we can coerce type checkers into doing the correct thing. With this change applied, mypy will not think this code is unreachable. | 3 | 1
78,215,074 | 2024-3-24 | https://stackoverflow.com/questions/78215074/stacked-subplots-with-same-legend-color-and-labels | I have been trying to plot a stacked plot with the same legend color and non-duplicate labels, without much of a success. import plotly.graph_objects as go from plotly.subplots import make_subplots # Sample data x = [1, 2, 3, 4, 5] y1 = [1, 2, 4, 8, 16] y2 = [1, 3, 6, 10, 15] y3 = [1, 4, 8, 12, 16] # Create subplot figure with two subplots fig = make_subplots(rows=1, cols=2, subplot_titles=('Subplot 1', 'Subplot 2')) # Add stacked area plot to subplot 1 fig.add_trace(go.Scatter(x=x, y=y1, mode='lines', stackgroup='one', name='A'), row=1, col=1) fig.add_trace(go.Scatter(x=x, y=y2, mode='lines', stackgroup='one', name='B'), row=1, col=1) # Add stacked area plot to subplot 2 fig.add_trace(go.Scatter(x=x, y=y1, mode='lines', stackgroup='two', name='A'), row=1, col=2) fig.add_trace(go.Scatter(x=x, y=y3, mode='lines', stackgroup='two', name='C'), row=1, col=2) # Update layout fig.update_layout(title_text='Stacked Area Plots on Subplots', showlegend=True) # Show figure fig.show() The resulting plot is as follows: Any help is appreciated. | You can set the fill color directly for each trace like so: from plotly.subplots import make_subplots import plotly.graph_objects as go # Sample data x = [1, 2, 3, 4, 5] y1 = [1, 2, 4, 8, 16] y2 = [1, 3, 6, 10, 15] y3 = [1, 4, 8, 12, 16] # Create subplot figure with two subplots fig = make_subplots(rows=1, cols=2, subplot_titles=("Subplot 1", "Subplot 2")) # Add stacked area plot to subplot 1 fig.add_trace(go.Scatter(x=x, y=y1, line={"color":"blue"}, fillcolor="blue", mode="lines", stackgroup="one", name="A", legendgroup="A"), row=1, col=1) fig.add_trace(go.Scatter(x=x, y=y2, line={"color":"red"}, fillcolor="red", mode="lines", stackgroup="one", name="B"), row=1, col=1) # Add stacked area plot to subplot 2 fig.add_trace(go.Scatter(x=x, y=y1, line={"color":"blue"}, fillcolor="blue", mode="lines", stackgroup="two", name="A", legendgroup="A", showlegend=False), row=1, col=2) fig.add_trace(go.Scatter(x=x, y=y3, line={"color":"red"}, fillcolor="red", mode="lines", stackgroup="two", name="C"), row=1, col=2) # Update layout fig.update_layout(title_text="Stacked Area Plots on Subplots", showlegend=True) # Show figure fig.show(renderer="browser") # or any other renderer Or you can make a mapping like described [here].(https://community.plotly.com/t/automatically-pick-colors-when-using-add-trace/59075/2) I've used the argument legendgroup to group them and showlegend=False to hide one. The argument for line makes the color of the line match the fill. | 2 | 1 |
78,217,152 | 2024-3-25 | https://stackoverflow.com/questions/78217152/created-nested-json-from-dataframe | data={'category':['Medical','Medical','Research','Medical','Research'], 'countrycode':['US','CAN','US','CAN','US'], 'stateCode':['AK','AB','MO','NT','OK'], 'statecount':[600,100,200,760,90]} df=pd.DataFrame(data) Given is my input dataframe. I want to produce a nested json output as follows : { 'Medical' : { 'US' : {'AK':600,'OK':90} 'CAN': {'AB':160, 'NT':760} }, 'Research' : { 'US' : {'MO':200,'OK':90} } } I have been trying to play around with the following code, but cannot seem to get countrycode as second level. df.groupby('category').apply(lambda x: x.groupby('countrycode').apply(lambda y:y[['stateCode','statecount']]).to_json(orient='records')) Any help or direction is appreciated, new to python. | One efficient option using groupby: out = {} for (k1,k2), g in df.groupby(['category', 'countrycode']): out.setdefault(k1, {})[k2] = g.set_index('stateCode')['statecount'].to_dict() Alternatively, less efficient but maybe more flexible: {k1: {k2: g2.set_index('stateCode')['statecount'].to_dict() for k2, g2 in g.groupby('countrycode')} for k1, g in df.groupby('category')} Output: {'Medical': {'CAN': {'AB': 100, 'NT': 760}, 'US': {'AK': 600}}, 'Research': {'US': {'MO': 200, 'OK': 90}}} | 2 | 1 |
78,216,573 | 2024-3-25 | https://stackoverflow.com/questions/78216573/question-about-difference-between-two-expressions | if any([x % 2 for x in result]): print("good") and if any(x % 2 for x in result): print("good") I'm studying Python, but not sure what is difference between two expressions shown above. Does the first expression check each element in list? I try to code myself, to solve this problem, but I don't get it why those two expressions are different and what they do. | To see what's going on, let's run the following program: def test(i): print(i) if i == 5: return True return False if any([test(i) for i in range(10)]): print("Done") if any(test(i) for i in range(10)): print("Done") 0 1 2 3 4 5 6 7 8 9 Done 0 1 2 3 4 5 Done The first version creates a list using list comprehension, and after the entire list is created, it is then checked for True/False values. You can see the entire list being created, because all numbers up to 10 are printed. However, in the second version, you'll see that the output only goes up to 5. This is because the second is a generator expression, which the any function than evaluates one-by-one. For example, as soon as i == 5, the any function returns true and stops iterating the generator. Let's replicate the any function in the same way it works in the second if-statement. def any(gen): for i in gen: if i: return True return False This is pretty similar to how the builtin any function works. It looks at each element, checks if it is True, and returns before the loop is completed if a True value is found. The reason generator expressions are more efficient is because values are being created one-by-one, rather than all at once with the list comprehension. | 2 | 3 |
78,216,029 | 2024-3-24 | https://stackoverflow.com/questions/78216029/loop-through-each-customer-records-to-get-the-first-last-channel-they-came-from | I have customer visit records with the channel they came from. I want to have one record per customer where I have the first channel they came from and the last channel they came from. Another logic I need to add is that if the first channel is "Direct", then do not take it and look at the next record. If that next record is also "Direct", then do not take it either and look at the next record. If you finished all records for the customer and they were all "Direct", then you can make first channel "Direct". I need the same logic for the last channel. Get the channel, if it is "Direct", then go to the previous one. If it is "Direct", then go through the previous record, until you finish all records. The input data looks like this: Customer ID Date Channel 1 1/1/24 Email 1 1/2/24 Search 1 1/3/24 Direct 2 1/5/24 Direct 2 1/6/24 Paid 2 1/7/24 Email 3 1/8/24 Direct 3 1/9/24 Direct 3 1/10/24 Direct 3 1/11/24 Direct And the output I need is this: Customer ID First Channel Last Channel 1 Email Search 2 Paid Email 3 Direct Direct How can I do that in my DataFrame? Probably I need to create two For loops, one to loop through all records, and inside it another loop to go through each customer record. It assigns first channel, then it checks if it is "Direct", then it goes to the next record, until the end of the loop. | Convert Date to datetime so you can sort by Date (use format="%m/%d/%y" if your dates are in this format instead). Define condition_i and condition_ii to account for your new logic: you want to keep for each Customer ID: (i) all rows if Channel is Direct for all rows; or (ii) only rows where Channel is not Direct, if not all rows for Channel are Direct. Group on Customer ID then aggregate by first and last, respectively. Merge the dataframes resulting from the groupbys. df["Date"] = pd.to_datetime(df["Date"], format="%d/%m/%y") df = df.sort_values(by="Date") condition_i = df.groupby("Customer ID")["Channel"].transform( lambda x: all(x == "Direct") ) condition_ii = ( df.groupby("Customer ID")["Channel"] .transform(lambda x: any(x != "Direct")) .loc[df["Channel"] != "Direct"] ) first = ( df[(condition_i) | (condition_ii)] .groupby("Customer ID", as_index=False) .first() .rename(columns={"Channel": "Fist Channel"}) ) last = ( df[(condition_i) | (condition_ii)] .groupby("Customer ID", as_index=False) .last() .rename(columns={"Channel": "Last Channel"}) ) out = pd.merge( first[["Customer ID", "Fist Channel"]], last[["Customer ID", "Last Channel"]] ) Customer ID Fist Channel Last Channel 0 1 Email Search 1 2 Paid Email 2 3 Direct Direct | 2 | 1 |
78,214,477 | 2024-3-24 | https://stackoverflow.com/questions/78214477/how-to-make-black-borders-around-certain-markers-in-a-seaborn-pairplot | I have the following code: import seaborn as sns import pandas as pd import numpy as np Data = pd.DataFrame(columns=['x1','x2','x3','label']) for i in range(100): Data.loc[len(Data.index)] = [np.random.rand(),np.random.rand(),np.random.rand(),'1'] Data.loc[len(Data.index)] = [np.random.rand(),np.random.rand(),np.random.rand(),'2'] Data.loc[len(Data.index)] = [np.random.rand(),np.random.rand(),np.random.rand(),'3'] Data.loc[len(Data.index)] = [np.random.rand(),np.random.rand(),np.random.rand(),'4'] Data.loc[len(Data.index)] = [np.random.rand(),np.random.rand(),np.random.rand(),'5'] sns.pairplot(Data,vars=['x1','x2','x3'],hue='label',markers=['o','s','s','s','s'],corner=True) Which gives the following output: I want to put black borders only around the square markers to make them more visible, but I don't know how to do that. I tried to add: grid_kws={fillstyles:['none','full','full','full','full']} as an argument to sns.pairplot, but I just got the following error: Traceback (most recent call last): File ~/anaconda3/lib/python3.10/site-packages/spyder_kernels/py3compat.py:356 in compat_exec exec(code, globals, locals) File ~/Dokument/Python/MasterProjectCoCalc/SNmasterproject/untitled0.py:21 sns.pairplot(Data,vars=['x1','x2','x3'],hue='label',markers=['o','s','s','s','s'],corner=True,grid_kws={fillstyles:['none','full','full','full','full']}) NameError: name 'fillstyles' is not defined I also tried to add: plot_kws={'edgecolor':'black'} to the sns.pairplot function, but then all the points have a black border. How do I get only black borders around the square markers? | The scatter dots are stored in ax.collections[0]. To avoid the colors of later hue values always ending up on top, seaborn keeps the dots in the order they appear in the dataframe. You can use .set_edgecolors() to set the edge color of each individual dot. For the legend, the dots are stored in its handles as line objects, which you can change via .set_markeredgecolor(...). Here is how the code could look: import seaborn as sns import pandas as pd import numpy as np Data = pd.DataFrame(columns=['x1', 'x2', 'x3', 'label']) for i in range(100): Data.loc[len(Data.index)] = [np.random.rand(), np.random.rand(), np.random.rand(), '1'] Data.loc[len(Data.index)] = [np.random.rand(), np.random.rand(), np.random.rand(), '2'] Data.loc[len(Data.index)] = [np.random.rand(), np.random.rand(), np.random.rand(), '3'] Data.loc[len(Data.index)] = [np.random.rand(), np.random.rand(), np.random.rand(), '4'] Data.loc[len(Data.index)] = [np.random.rand(), np.random.rand(), np.random.rand(), '5'] g = sns.pairplot(Data, vars=['x1', 'x2', 'x3'], hue='label', markers=['o', 's', 's', 's', 's'], corner=True) edge_colors = ['none' if l == '1' else 'k' for l in Data['label']] for ax in g.axes.flat: if ax is not None and len(ax.collections) > 0: ax.collections[0].set_edgecolors(edge_colors) for h in g.legend.legend_handles[1:]: h.set_markeredgecolor('k') | 5 | 3
78,214,361 | 2024-3-24 | https://stackoverflow.com/questions/78214361/how-to-handle-inf-and-nans-in-great-table | I've got a dataframe that I want to format which includes inf and nan. The dict for it is: df = pd.DataFrame({'Foodbank': {0: 'study', 1: 'generation', 2: 'near', 3: 'sell', 4: 'former', 5: 'line', 6: 'ok', 7: 'field', 8: 'last', 9: 'really', 10: 'particularly', 11: 'must', 12: 'drive', 13: 'herself', 14: 'learn'}, '%(LY)': {0: -20.93, 1: -19.23, 2: -26.09, 3: 150.0, 4: 90.24, 5: -23.85, 6: nan, 7: inf, 8: inf, 9: inf, 10: inf, 11: -35.48, 12: nan, 13: nan, 14: -1.3}}) from great_tables import GT GT(df) It looks like this: What I want is to have a dash or n/a to highlight it rather than inf which won't mean anything to an audience. | A possible solution: df['%(LY)'] = df['%(LY)'].replace(np.inf, np.nan) Output: Foodbank %(LY) 0 study -20.93 1 generation -19.23 2 near -26.09 3 sell 150.00 4 former 90.24 5 line -23.85 6 ok NaN 7 field NaN 8 last NaN 9 really NaN 10 particularly NaN 11 must -35.48 12 drive NaN 13 herself NaN 14 learn -1.30 | 3 | 3 |
78,212,524 | 2024-3-23 | https://stackoverflow.com/questions/78212524/strategies-for-enhancing-algorithm-efficiency | I have this task: Mother Anna opened a package of candies, which she wants to distribute to her children as a reward. So that they are not clashes between them, so of course the one who finished in a better place in the competition cannot get less candies than the one that ended up in a worse place. How many ways can Anna divide C candies among D children? The task: For the given numbers D and C, find the number of possible distributions of candies. Entrance: In the first line of the file there is a number Q indicating the number of sets. There are Q rows with a pair of numbers D and C. 1 β€ Q β€ 1000 1 β€ D β€ 100 1 β€ C β€ 5,000 Output: The output of the program is the result for each set modulo (2^30)-1 on a separate line. Example Input: 3 1 10 2 4 3 8 Output: 1 3 10 I made this code and it works, but when i have 1000 inputs i get **Timelimit ** in evaluator, can you help me make code that will work faster? def generate_combinations(d, c, current_combination=None, combinations=None): if current_combination is None: current_combination = [] if combinations is None: combinations = [] if d == 0: if c == 0: if all(current_combination[i] <= current_combination[i + 1] for i in range(len(current_combination) - 1)): combinations.append(list(current_combination)) return for i in range(c + 1): generate_combinations(d - 1, c - i, current_combination + [i], combinations) return combinations c = 8 d = 3 pocet = int(input()) for i in range(pocet): d, c = map(int, input().split()) print(len(generate_combinations(d, c))) I also have version using dynamic programming def dynamic_programming(d, c): dp = [[0] * (c + 1) for _ in range(d + 1)] dp[0][0] = 1 for i in range(1, d + 1): for j in range(c + 1): dp[i][j] = dp[i - 1][j] if j >= i: dp[i][j] += dp[i][j - i] dp[i][j] %= (2 ** 30 - 1) b = dp[d][c] return dp[d][c] q = int(input().strip()) for i in range(q): d, c = map(int, input().split()) print(dynamic_programming(d, c)) For better understending here is the example if we want to divide 8 candies among 3 children we have 21 possibilities: [0, 0, 8] [0, 1, 7] [0, 2, 6] [0, 3, 5] [0, 4, 4] [0, 5, 3] [0, 6, 2] [0, 7, 1] [0, 8, 0] [1, 0, 7] [1, 1, 6] [1, 2, 5] [1, 3, 4] [1, 4, 3] [1, 5, 2] [1, 6, 1] [1, 7, 0] [2, 0, 6] [2, 1, 5] [2, 2, 4] [2, 3, 3] [2, 4, 2] [2, 5, 1] [2, 6, 0] [3, 0, 5] [3, 1, 4] [3, 2, 3] [3, 3, 2] [3, 4, 1] [3, 5, 0] [4, 0, 4] [4, 1, 3] [4, 2, 2] [4, 3, 1] [4, 4, 0] [5, 0, 3] [5, 1, 2] [5, 2, 1] [5, 3, 0] [6, 0, 2] [6, 1, 1] [6, 2, 0] [7, 0, 1] [7, 1, 0] [8, 0, 0] And there is only 10 possibilities that fit for this task: [0, 0, 8] [0, 1, 7] [0, 2, 6] [0, 3, 5] [0, 4, 4] [1, 1, 6] [1, 2, 5] [1, 3, 4] [2, 2, 4] [2, 3, 3] | Consider that each assignment of candies is an integer partition of n candies into k parts. Also note that for each partition of n candies, there is a single unique assignment candies to children that is valid (e.g. for n=7 the partition 3 3 1 can only be mapped to children in one way, [0, 3, 3], anything else would be invalid). With this, we just need to count the number of k-partitions of n, which has a nice recursive formula: p(0, 0) = 1 p(n, k) = 0 if n <= 0 or k <= 0 p(n, k) = p(n - k, k) + p(n - 1, k - 1) Now, we need to be careful- this formula counts non-empty partitions, but we allow giving a child 0 candies. 
We could alter the recurrence relation, but there's a simpler approach; if we just add 1 additional candy per each child at the start, and then imagine we just remove that one candy from their part afterwards, it'll be just as if we allowed empty partitions. Implemented in python: from functools import cache @cache def count_parts(candies, children): if candies == children == 0: return 1 if candies <= 0 or children <= 0: return 0 return count_parts(candies - children, children) + count_parts(candies - 1, children - 1) Example usage: test_cases = [(10, 1), (4, 2), (8, 3)] for candies, children in test_cases: print(count_parts(candies + children, children)) Output: 1 3 10 This will be O(C*D) time and space, A bottom-up dynamic programming solution would have the same time complexity but would be slightly faster in practice and could be made to have a linear (in terms of C) space complexity. The details are escaping me at the moment, but I believe there's also an early-exit shortcut calculation for p(2*k, k) that could be used for further speedup as well. | 2 | 2 |
78,213,315 | 2024-3-24 | https://stackoverflow.com/questions/78213315/aggregate-in-polars-by-appending-lists | In Python Polars, how can I aggregate by concatenating lists, rather than creating a nested list? For example, I'd like to aggregate this dataframe on id import polars as pl df = pl.DataFrame({ 'id': [1, 1], 'name': [["Bob"], ["Mary", "Sue"]], }) id name 1 ["Bob"] 1 ["Mary", "Sue"] and get this result id name 1 ["Bob", "Mary", "Sue"] If I use df.group_by('id').agg("name"), I get a nested list, which I don't want: id name 1 [["Bob"], ["Mary", "Sue"]] | Try using explode on your name column. result_df = df.group_by('id').agg(pl.col('name').explode()) | 2 | 2 |
78,208,864 | 2024-3-22 | https://stackoverflow.com/questions/78208864/time-based-spacing-constraints-in-gekko | I'm trying to constrain the vector output of "simu_total_volume" below by requiring that solution output elements (x7=1) be spaced apart by s records (weeks) while also controlling for the maximum number of times x7 can be = 1 in total. The code below seems to work but I'm noticing a reduction in the sum of x7 from 10 (without the spacing requirement) to 8 (with the spacing requirement) despite there being enough space for sum(x7) to = 10 given the constraints. I can also manually arrange the full solution space and here and come up with a more optimal solution in Excel, so I'm not sure why Gekko isn't finding it. Here are the full details to reproduce locally (tested for accuracy): import numpy as np from gekko import GEKKO m = GEKKO(remote=False) m.options.NODES = 3 m.options.IMODE = 3 m.options.MAX_ITER = 1000 lnuc_weeks = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0] min_promo_price = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,3] max_promo_price = [3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5,3.5, 3.5, 3.5, 3.5, 3.5, 3.5] base_srp = [3.48, 3.48, 3.48, 3.48, 3.0799, 3.0799, 3.0799, 3.0799,3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799, 3.0799] lnuc_min_promo_price = 1.99 lnuc_max_promo_price = 1.99 coeff_fedi = [0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589,0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589, 0.022589] coeff_feao = [0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995, 0.02929995] coeff_diso = [0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338, 0.05292338] sumproduct_base = [0.20560305, 0.24735297, 0.24957423, 0.23155435, 0.23424058,0.2368096 , 0.27567109, 0.27820648, 0.2826393 , 0.28660598, 0.28583971, 0.30238505, 0.31726649, 0.31428312, 0.31073792, 0.29036779, 0.32679041, 0.32156337, 0.24633734] neg_ln = [[0.14842000515],[0.14842000512],[0.14842000515],[0.14842000512],[-0.10407483058],[0.43676249024],[0.43676249019],[0.43676249024],[0.43676249019],[0.43676249024],[0.43676249019], [0.026284840258],[0.026284840291],[0.026284840258],[0.026284840291], [0.026185109811],[0.026284840258],[0.026284840291],[0.026284840258]] neg_ln_ppi_coeff = [1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879,1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879, 1.22293879,1.22293879, 1.22293879, 1.22293879, 1.22293879] base_volume = [124.38, 193.2, 578.72, 183.88, 197.42, 559.01, 67.68, 110.01,60.38, 177.11, 102.65, 66.02, 209.83, 81.22, 250.44, 206.44, 87.99, 298.95, 71.07] week = pd.Series([13, 14, 17, 18, 19, 26, 28, 33, 34, 35, 39, 42, 45, 46, 47, 48, 50, 51, 52]) n = 19 x1 = m.Array(m.Var,(n), integer=True) #LNUC weeks i = 0 for xi in x1: xi.value = lnuc_weeks[i] xi.lower = 0 xi.upper = lnuc_weeks[i] i += 1 x2 = m.Array(m.Var,(n)) #Blended SRP i = 0 for xi in x2: xi.value = 5 m.Equation(xi >= m.if3((x1[i]) - 0.5, min_promo_price[i], lnuc_min_promo_price)) m.Equation(xi <= m.if3((x1[i]) - 0.5, max_promo_price[i], lnuc_max_promo_price)) i += 1 x3 
= m.Array(m.Var,(n), integer=True) #F&D x4 = m.Array(m.Var,(n), integer=True) #FO x5 = m.Array(m.Var,(n), integer=True) #DO x6 = m.Array(m.Var,(n), integer=True) #TPR #Default to F&D i = 0 for xi in x3: xi.value = 1 xi.lower = 0 xi.upper = 1 i += 1 i = 0 for xi in x4: xi.value = 0 xi.lower = 0 xi.upper = 1 i += 1 i = 0 for xi in x5: xi.value = 0 xi.lower = 0 xi.upper = 1 i += 1 i = 0 for xi in x6: xi.value = 0 xi.lower = 0 xi.upper = 1 i += 1 x7 = m.Array(m.Var,(n), integer=True) #Max promos i = 0 for xi in x7: xi.value = 1 xi.lower = 0 xi.upper = 1 i += 1 x = [x1,x2,x3,x4,x5,x6,x7] neg_ln=[m.Intermediate(-m.log(x[1][i]/base_srp[i])) for i in range(n)] total_vol_fedi =[m.Intermediate(coeff_fedi[0]+ sumproduct_base[i] + (neg_ln[i]*neg_ln_ppi_coeff[0])) for i in range(n)] total_vol_feao =[m.Intermediate(coeff_feao[0]+ sumproduct_base[i] + (neg_ln[i]*neg_ln_ppi_coeff[0])) for i in range(n)] total_vol_diso =[m.Intermediate(coeff_diso[0]+ sumproduct_base[i] + (neg_ln[i]*neg_ln_ppi_coeff[0])) for i in range(n)] total_vol_tpro =[m.Intermediate(sumproduct_base[i] + (neg_ln[i]*neg_ln_ppi_coeff[0])) for i in range(n)] simu_total_volume = [m.Intermediate(( (m.max2(0,base_volume[i]*(m.exp(total_vol_fedi[i])-1)) * x[2][i] + m.max2(0,base_volume[i]*(m.exp(total_vol_feao[i])-1)) * x[3][i] + m.max2(0,base_volume[i]*(m.exp(total_vol_diso[i])-1)) * x[4][i] + m.max2(0,base_volume[i]*(m.exp(total_vol_tpro[i])-1)) * x[5][i]) + base_volume[i]) * x[6][i]) for i in range(n)] [m.Equation(x3[i] + x4[i] + x5[i] + x6[i] == 1) for i in range(i)] #Limit max promos m.Equation(sum(x7)<=10) #Enforce spacing s=1 for s2 in range(1, s+1): for i in range(0, n-s2): f = week[week == week[i] + s2].index if len(f)>0: m.Equation(x7[i] + x7[f[0]]<=1) m.Maximize(m.sum(simu_total_volume)) m.options.SOLVER=1 m.solve(disp = True) | Enforce a spacing constraint with a summation over a subset of the periods with a moving window such as: m.Equation(sum(x[0:3])<=1) m.Equation(sum(x[1:4])<=1) m.Equation(sum(x[2:5])<=1) Here is a test that shows solutions with different spacing constraints with a maximum of 4 out of the 5 selected. The spacing constraints are successively [0,1,2,3]: from gekko import GEKKO m = GEKKO(remote=False) for s in [0,1,2,3]: n = 5 x = m.Array(m.Var,n,integer=True,value=1,lb=0,ub=1) m.Equation(sum(x)<=4) for i in range(0,n-s): m.Equation(sum(x[i:i+s+1])<=1) m.Maximize(sum(x)) m.options.SOLVER=1 m.solve(disp=False) print(f'spacing: {s} solution: {x}') The solution is: spacing: 0 solution: [[0.0] [1.0] [1.0] [1.0] [1.0]] spacing: 1 solution: [[1.0] [0.0] [1.0] [0.0] [1.0]] spacing: 2 solution: [[0.0] [1.0] [0.0] [0.0] [1.0]] spacing: 3 solution: [[1.0] [0.0] [0.0] [0.0] [1.0]] There are multiple solutions for the case with spacing 0 and 2 and unique solutions for spacing 1 and 3. The solver returns just one of the solutions for each case. You may need to add an additional objective if there is a preference to select earlier slots. | 3 | 1 |
78,212,052 | 2024-3-23 | https://stackoverflow.com/questions/78212052/stop-ode-integration-when-a-condition-is-satisfied | I'm simulating double pendulum fliptimes using scipy.integrate.odeint. The way my code is structured, odeint solves the system over a fixed time interval; then I have to check the result to see if a flip occurred. Instead, I want to check after each step whether the condition is satisfied, and if so, stop. Then, there is no need to solve the rest of time interval. Here is my current code: from matplotlib.colors import LogNorm, Normalize from scipy.integrate import odeint import matplotlib.pyplot as plt from tqdm import tqdm import seaborn as sns import numpy as np def ode(y, t, length_1, length_2, mass_1, mass_2, gravity): angle_1, angle_1_d, angle_2, angle_2_d = y angle_1_dd = (-gravity * (2 * mass_1 + mass_2) * np.sin(angle_1) - mass_2 * gravity * np.sin(angle_1 - 2 * angle_2) - 2 * np.sin(angle_1 - angle_2) * mass_2 * (angle_2_d ** 2 * length_2 + angle_1_d ** 2 * length_1 * np.cos(angle_1 - angle_2))) / (length_1 * (2 * mass_1 + mass_2 - mass_2 * np.cos(2 * angle_1 - 2 * angle_2))) angle_2_dd = (2 * np.sin(angle_1 - angle_2) * (angle_1_d ** 2 * length_1 * (mass_1 + mass_2) + gravity * (mass_1 + mass_2) * np.cos(angle_1) + angle_2_d ** 2 * length_2 * mass_2 * np.cos(angle_1 - angle_2))) / (length_2 * (2 * mass_1 + mass_2 - mass_2 * np.cos(2 * angle_1 - 2 * angle_2))) return [angle_1_d, angle_1_dd, angle_2_d, angle_2_dd] def double_pendulum(length_1, length_2, mass_1, mass_2, angle_1_init, angle_2_init, angle_1_d_init, angle_2_d_init, gravity, dt, num_steps): time_span = np.linspace(0, dt*num_steps, num_steps) y0 = [np.deg2rad(angle_1_init), np.deg2rad(angle_1_d_init), np.deg2rad(angle_2_init), np.deg2rad(angle_2_d_init)] sol = odeint(ode, y0, time_span, args=(length_1, length_2, mass_1, mass_2, gravity)) return sol def flip(length_1, length_2, mass_1, mass_2, angle_1_init, angle_2_init, angle_1_d_init, angle_2_d_init, gravity, dt, num_steps): solution = double_pendulum(length_1, length_2, mass_1, mass_2, angle_1_init, angle_2_init, angle_1_d_init, angle_2_d_init, gravity, dt, num_steps) #angle_1 = solution[:, 0] angle_2 = solution[:, 2] for index, angle2 in enumerate(angle_2): if abs(angle2 - angle_2_init) >= 2*np.pi: return index*dt return dt*num_steps angle_1_range = np.arange(-172, 172, 1) angle_2_range = np.arange(-172, 172, 1) fliptime_matrix = np.zeros((len(angle_1_range), len(angle_2_range))) for i, angle_1 in tqdm(enumerate(angle_1_range), desc='angle_1'): for j, angle_2 in tqdm(enumerate(angle_2_range), desc='angle_2', leave=False): fliptime = flip(1, 1, 1, 1, angle_1, angle_2, 0, 0, 9.81, 0.01, 10000) fliptime_matrix[i, j] = fliptime sns.heatmap(fliptime_matrix, square=True, cbar_kws={'label': 'Divergence'}, norm=LogNorm()) plt.xlabel('Angle 2 (degrees)') plt.ylabel('Angle 1 (degrees)') plt.title('Fliptime Heatmap') plt.gca().invert_yaxis() plt.show() How do I stop the integration as soon as the flip condition (change in angle2 is greater than 2*pi) is satisfied? | In the language of initial value problems, you want to detect an "event". scipy.integrate.odeint does not provide an interface for that. This is one of the reasons the documentation suggests: For new code, use scipy.integrate.solve_ivp to solve a differential equation. Once you convert your code to use solve_ivp, you can use the events parameter to terminate integration once an event is detected. 
The event function would look something like:

def event(t, y):
    angle2 = y[2]
    return abs(angle2 - angle_2_init) - 2*np.pi

event.terminal = True | 2 | 2
78,210,383 | 2024-3-23 | https://stackoverflow.com/questions/78210383/odoo16-where-is-the-docs-variable-used-in-the-template-defined | I have created 2 recrods, action report and template for pdf, but I have never defined the docs variable in the model, so how can Odoo understand it, or is it defined by default somewhere? So what if I want to define additional variables like "text1" : "hello" to use in the template? Thanks All | The docs variable is set while rendering the report,in _get_rendering_context function 1/ You can pass them through data to report_action Example: (from employees summary report) def print_report(self): self.ensure_one() [data] = self.read() data['emp'] = self.env.context.get('active_ids', []) employees = self.env['hr.employee'].browse(data['emp']) datas = { 'ids': [], 'model': 'hr.employee', 'form': data } return self.env.ref('hr_holidays.action_report_holidayssummary').report_action(employees, data=datas) 2/ You can also use a custom report and define additional variables in _get_report_values Example: (from holidays summary report) @api.model def _get_report_values(self, docids, data=None): if not data.get('form'): raise UserError(_("Form content is missing, this report cannot be printed.")) holidays_report = self.env['ir.actions.report']._get_report_from_name('hr_holidays.report_holidayssummary') holidays = self.env['hr.leave'].browse(self.ids) return { 'doc_ids': self.ids, 'doc_model': holidays_report.model, 'docs': holidays, 'get_header_info': self._get_header_info(data['form']['date_from'], data['form']['holiday_type']), 'get_day': self._get_day(data['form']['date_from']), 'get_months': self._get_months(data['form']['date_from']), 'get_data_from_report': self._get_data_from_report(data['form']), 'get_holidays_status': self._get_holidays_status(), } | 2 | 3 |
78,202,681 | 2024-3-21 | https://stackoverflow.com/questions/78202681/explode-a-dataframe-into-a-range-of-another-dataframe | I have some data in 2 dataframes that look like: import polars as pl data = {"channel": [0, 1, 2, 1, 2, 0, 1], "time": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]} time_df = pl.DataFrame(data) data = { "time": [10.0, 10.5], "event_table": [["start_1", "stop_1", "start_2", "stop_2"], ["start_3"]], } events_df = pl.DataFrame(data) where channel 0 in the time_df means that a new "event table" start here. I want to explode each row of the event_table starting at the channels 0 in the event_df and have a result like: data = { "channel": [1, 2, 1, 2, 1], "time": [0.2, 0.3, 0.4, 0.5, 0.7], "event": ["start_1", "stop_1", "start_2", "stop_2", "start_3"], } result_df = pl.DataFrame(data) What I am currently doing is to remove all channel 0 from the first dataframe, explode the second data frame, and use hstack to combine both dataframes. This works OK if my data is perfect. In reality, the event table can have more (or less) events. In these cases I want to "truncate" the explosion (or fill with nulls) e.g. import polars as pl data = {"channel": [0, 1, 2, 1, 2, 0, 1], "time": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]} time_df = pl.DataFrame(data) data = { "time": [10.0, 10.5], "event_table": [["start_1", "stop_1", "start_2"], ["start_3", "stop_3"]], } events_df = pl.DataFrame(data) data = { "channel": [1, 2, 1, 2, 1], "time": [0.2, 0.3, 0.4, 0.5, 0.7], "event": ["start_1", "stop_1", "start_2", None, "start_3"], } result_df = pl.DataFrame(data) I appreciate any help. | Not sure if I'm overcomplicating things here but: The initial approach that comes to mind is to number the events so they align with the row index from the channels. shape: (2, 2) ββββββββββββββββββββββββββββββββββββ¬ββββββββ β event_table β index β β --- β --- β β list[str] β u32 β ββββββββββββββββββββββββββββββββββββͺββββββββ‘ β ["start_1", "stop_1", "start_2"] β 0 β # [1, 2, 3] β ["start_3", "stop_3"] β 5 β # [6, 7] ββββββββββββββββββββββββββββββββββββ΄ββββββββ 1. Run-length encoding We find the start index and run-length of each channel 0: rle_df = ( time_df .with_row_index("start") .filter(channel = 0) .select( "start", len = pl.col("start").diff().shift(-1) - 1 ) .with_row_index() ) shape: (2, 3) βββββββββ¬ββββββββ¬βββββββ β index β start β len β β --- β --- β --- β β u32 β u32 β i64 β βββββββββͺββββββββͺβββββββ‘ β 0 β 0 β 4 β β 1 β 5 β null β βββββββββ΄ββββββββ΄βββββββ If len < event_len we want to use that as the new length. 2. 
Truncate We join the run-lengths to events_df and use .min_horizontal() to give use the "least" length truncate with .list.head() events_with_index = ( events_df .with_row_index() .join(rle_df, on="index", how="left") .select( pl.col("event_table").list.head( pl.min_horizontal(pl.col("len"), pl.col("event_table").list.len()) ), index = pl.col("start") ) .explode("event_table") .with_columns( pl.col("index") + pl.col("index").cum_count().over("index") ) ) After the .explode() we have: βββββββββββββββ¬ββββββββ β event_table β index β β --- β --- β β str β u32 β βββββββββββββββͺββββββββ‘ β start_1 β 0 β β stop_1 β 0 β β start_2 β 0 β β start_3 β 5 β β stop_3 β 5 β βββββββββββββββ΄ββββββββ The .cum_count() addition then gives us: shape: (5, 2) βββββββββββββββ¬ββββββββ β event_table β index β β --- β --- β β str β u32 β βββββββββββββββͺββββββββ‘ β start_1 β 1 β # index of 1st channel 0 (plus 1) β stop_1 β 2 β β start_2 β 3 β β start_3 β 6 β # index of 2nd channel 0 (plus 1) β stop_3 β 7 β βββββββββββββββ΄ββββββββ 3. Join With the indexes aligned, we can join and filter out the channel 0 rows. A left-join gives us the "fill with nulls" behaviour. (time_df .with_row_index() .join( events_with_index, on = "index", how = "left" ) .filter(pl.col("channel") != 0) ) shape: (5, 4) βββββββββ¬ββββββββββ¬βββββββ¬ββββββββββββββ β index β channel β time β event_table β β --- β --- β --- β --- β β u32 β i64 β f64 β str β βββββββββͺββββββββββͺβββββββͺββββββββββββββ‘ β 1 β 1 β 0.2 β start_1 β β 2 β 2 β 0.3 β stop_1 β β 3 β 1 β 0.4 β start_2 β β 4 β 2 β 0.5 β null β β 6 β 1 β 0.7 β start_3 β βββββββββ΄ββββββββββ΄βββββββ΄ββββββββββββββ | 2 | 1 |