Dataset schema (column: type, value/length range):
question_id: int64 (59.5M to 79.4M)
creation_date: string (8 to 10 characters)
link: string (60 to 163 characters)
question: string (53 to 28.9k characters)
accepted_answer: string (26 to 29.3k characters)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
79,153,992
2024-11-4
https://stackoverflow.com/questions/79153992/do-dictionaries-have-the-same-implicit-line-continuation-as-parentheses
I was surprised that this seems to work without parentheses: dict = { "a": 1 if True else 2, "b": 2 } I know that dictionaries have implicit line continuation, but I presumed that it only lets you split between commas. Does it simply function like parentheses, where everything is treated as one line?
Yes, implicit line continuation/joining takes place within parentheses, square brackets and curly braces in Python as documented: Expressions in parentheses, square brackets or curly braces can be split over more than one physical line without using backslashes.
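A minimal illustration of that documented rule (my own example, not part of the original answer; the variable names are arbitrary):

# Implicit joining: any expression inside (), [] or {} may span physical lines.
config = {
    "a": 1
         if True      # a conditional expression split across lines inside the braces
         else 2,
    "b": 2,
}
values = [1,
          2,
          3]           # the same rule applies to square brackets
total = (1 +
         2)            # and to parentheses
print(config, values, total)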
1
3
79,153,372
2024-11-3
https://stackoverflow.com/questions/79153372/how-where-does-pytorch-max-documentation-show-that-you-can-pass-in-2-tensors-for
I am learning PyTorch and deep learning. The documentation for torch.max doesn't make sense in that it looks like we can compare 2 tensors, but I don't see where in the documentation I could have determined this. I had this code at first, where I wanted to check ReLU values against the maximum. I thought that 0 could be broadcast for h1, which has h1.shape=torch.Size([10000, 128]). h1 = torch.max(h1, 0) y = h1 @ W2 + b2 However, I got this error: TypeError: unsupported operand type(s) for @: 'torch.return_types.max' and 'Tensor' I managed to fix this when I changed the max call to use a tensor instead of 0. h1 = torch.max(h1, torch.tensor(0)) y = h1 @ W2 + b2 1. Why does this fix the error? This is when I checked the documentation again and realized that nothing mentions a collection like a tuple or list for multiple tensors, or even a *input for iterable unpacking. Here are the 2 versions: 1st torch.max version: torch.max(input) → Tensor Returns the maximum value of all elements in the input tensor. Warning This function produces deterministic (sub)gradients unlike max(dim=0) 2nd version of torch.max torch.max(input, dim, keepdim=False, *, out=None) Returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax). If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input. 2. What is tensor(0) according to this documentation?
Check the bottom of the documentation page you linked where it says: torch.max(input, other, *, out=None) → Tensor See torch.maximum(). When the second argument is a tensor, torch.max computes torch.maximum
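A short sketch (my addition, not part of the answer) showing the overloads side by side; torch.max(h1, 0) hits the (input, dim) overload, which is why the question's first attempt got back a namedtuple:

import torch

h1 = torch.randn(4, 3)

# (input, other): element-wise maximum, documented as equivalent to torch.maximum
a = torch.max(h1, torch.tensor(0.0))
b = torch.maximum(h1, torch.tensor(0.0))
c = torch.clamp(h1, min=0.0)                 # ReLU-style clamping gives the same values here
print(torch.equal(a, b), torch.equal(a, c))  # True True

# (input, dim): a reduction that returns a namedtuple (values, indices)
values, indices = torch.max(h1, dim=0)
print(values.shape, indices.shape)           # torch.Size([3]) torch.Size([3])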
1
4
79,153,401
2024-11-3
https://stackoverflow.com/questions/79153401/difference-between-time-sleep-and-threading-timer-accuracy-and-efficiency-in
Let's say I have a Python script which is designed to wait for a specific amount of time in milliseconds and then press a key on the keyboard. Should I use threading.Timer() or should I just do it with time.sleep()? Which one is more accurate in terms of pressing the key on time and which one is more efficient? Also, please let me know if there are better and more accurate ways to do this.
threading.Timer() runs in a separate thread, which means it won't block your main program. However, it can be less precise due to thread scheduling overhead and context switching. time.sleep() is more accurate for short durations (milliseconds range) since it doesn't involve thread creation overhead. However, it blocks the entire program while waiting. A third option worth considering is using Python's asyncio, which can provide good timing accuracy without blocking.
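A rough way to compare the two yourself (my addition; the key-press call is a placeholder, and the exact numbers depend on the OS scheduler):

import time
import threading

DELAY = 0.050  # target wait of 50 ms (arbitrary value for the demo)

def press_key(started):
    # stand-in for the real key press (e.g. via pynput or pyautogui)
    print(f"fired after {(time.perf_counter() - started) * 1000:.2f} ms")

# Blocking wait: simplest, usually closest to the target for short delays
t0 = time.perf_counter()
time.sleep(DELAY)
press_key(t0)

# Timer thread: the main thread stays free, at the cost of scheduling overhead
t0 = time.perf_counter()
timer = threading.Timer(DELAY, press_key, args=(t0,))
timer.start()
timer.join()   # only so the demo waits for the callback before exiting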
1
3
79,153,289
2024-11-3
https://stackoverflow.com/questions/79153289/generate-a-stacked-bar-chart-in-python-out-of-groupby-based-on-multi-index
I would like to generate a stacked bar chart (ideally in seaborn), but I am happy to go with the native pandas plotting functionality. Let me introduce some test data to make things clear. In [353]: import pandas as pd In [354]: import seaborn as sns In [355]: df = pd.DataFrame({"Cat":["A", "B","A","B","A","B","A","B","C"], "Time":[0,0,1,1,0,0,1,1,1], "ID":[0,0,0,0,1,1,1,1,1]}) In [356]: df Out[356]: Cat Time ID 0 A 0 0 1 B 0 0 2 A 1 0 3 B 1 0 4 A 0 1 5 B 0 1 6 A 1 1 7 B 1 1 8 C 1 1 In [357]: df.groupby(["ID","Cat"]).count() Out[357]: Time ID Cat 0 A 2 B 2 1 A 2 B 2 C 1 In [358]: What I would like to see here is, on the x-axis, the IDs, and on the y-axis the count (column Time) stacked by the variable Cat; e.g. for ID 1, I want to see a stacked coloured bar where the sizes are 2, 2 and 1. I've tried the following without succeeding: df.groupby(["ID","Cat"]).count().plot(kind="bar", stacked=True, x="ID") as it seems it can't handle the MultiIndex. Any help much appreciated! EDIT This EDIT is to add the trouble I'm having with the legend box being placed outside of the plotting window. The real code I'm using: p = so.Plot(df.astype({"Time": "category"}),x='Time', color='Category').add(so.Bar(), so.Count(), so.Stack()) fig, ax = plt.subplots(figsize=(2560/120, 1335/120)) today = dt.datetime.today().strftime("%Y%m%d") plt.grid() p.on(ax).save(f"{today}_barchart_sources_{c}.png") plt.close()
You would need to unstack the Cat (also only aggregate Time): (df.groupby(['ID', 'Cat'])['Time'].count().unstack('Cat') .plot(kind='bar', stacked=True) ) Output: Alternatively, with seaborn's object interface, for which you don't need to pre-aggregate the data. See Stack for more examples: import seaborn.objects as so (so.Plot(df.astype({'ID': 'category'}), x='ID', color='Cat') .add(so.Bar(), so.Count(), so.Stack()) ) NB. converting ID to category to avoid having a numeric x-axis. Output:
2
2
79,148,566
2024-11-1
https://stackoverflow.com/questions/79148566/with-python-how-to-apply-vector-operations-to-a-neighborhood-in-an-n-d-image
I have a 3D image with vector components (i.e., a mapping from R3 to R3). My goal is to replace each vector with the vector of maximum norm within its 3x3x3 neighborhood. This task is proving to be unexpectedly challenging. I attempted to use scipy.ndimage.generic_filter, but despite its name, this filter only handles scalar inputs and outputs. I also briefly explored skimage and numpy's sliding_window_view, but neither seemed to provide a straightforward solution. What would be the correct way to implement this? Here's what I ended up writing. It's not very elegant and pretty slow, but should help understand what I'm trying to do. import numpy as np import matplotlib.pyplot as plt def max_norm_vector(data): """Return the vector with the maximum norm.""" data = data.reshape(-1, 3) norms = np.linalg.norm(data, axis=-1) idx_max = np.argmax(norms) return data[idx_max] if __name__ == '__main__': # Load the image range_ = np.linspace(-5, 5, 30) x, y, z = np.meshgrid(range_, range_, range_, indexing='ij') data = 1 - (x ** 2) # Compute the gradient grad = np.gradient(data) grad = np.stack(grad, axis=-1) # Stack gradients along a new last axis # grad = grad[:5, :5, :5, :] # Crop the gradient for testing max_grad = np.zeros_like(grad) for i in range(1,grad.shape[0]-1): for j in range(1,grad.shape[1]-1): for k in range(2,grad.shape[2]-1): max_grad[i, j, k] = max_norm_vector(grad[i-1:i+2, j-1:j+2, k-1:k+2,:]) # Visualization fig = plt.figure(figsize=(12, 6)) # Plot original data ax1 = fig.add_subplot(121, projection='3d') ax1.scatter(x.ravel(), y.ravel(), z.ravel(), c=data.ravel(), cmap='viridis', alpha=0.5) ax1.set_title('Original Data') # Plot maximum gradient vectors ax2 = fig.add_subplot(122, projection='3d') # Downsample for better performance step = 3 x_down = x[::step, ::step, ::step] y_down = y[::step, ::step, ::step] z_down = z[::step, ::step, ::step] max_grad_down = max_grad[::step, ::step, ::step] ax2.quiver(x_down.ravel(), y_down.ravel(), z_down.ravel(), max_grad_down[:, :, :, 0].ravel(), max_grad_down[:, :, :, 1].ravel(), max_grad_down[:, :, :, 2].ravel(), length=0.1, color='red', alpha=0.7) ax2.set_title('Maximum Gradient Vectors') plt.tight_layout() plt.show()
DIPlib has this function implemented: dip.SelectionFilter(). This is how you'd use it: grad = ... # OP's grad array norm = dip.Norm(grad) out = dip.SelectionFilter(grad, norm, dip.Kernel(3, "rectangular"), mode="maximum") You can cast the dip.Image object out to a NumPy array with np.asarray(out) (no copy of the data will be made). NumPy functions will accept the dip.Image object as input, but many functions in scikit-image expect the input array to have a .shape method or similar, which will fail if you don't do the cast explicitly. Install the package with pip install diplib. Disclaimer: I'm an author of DIPlib, but I didn't implement this function.
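For completeness, and not part of this answer: the sliding_window_view route mentioned in the question can also be made to work in plain NumPy. A rough sketch (my addition; it handles interior voxels only and materialises a 27x larger intermediate, so it trades memory for speed):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

grad = np.random.rand(30, 30, 30, 3)            # stand-in for the question's gradient field

# Window only the three spatial axes; the component axis (size 3) is left alone.
# Resulting shape: (Nx-2, Ny-2, Nz-2, 3, 3, 3, 3) = spatial..., components, 3x3x3 window
win = sliding_window_view(grad, (3, 3, 3), axis=(0, 1, 2))

norms = np.linalg.norm(win, axis=3)             # norm of every neighbour vector
flat = win.reshape(*win.shape[:4], -1)          # (..., 3, 27)
best = norms.reshape(*norms.shape[:3], -1).argmax(-1)

max_grad = np.zeros_like(grad)
max_grad[1:-1, 1:-1, 1:-1] = np.take_along_axis(
    flat, best[..., None, None], axis=-1)[..., 0]   # pick the max-norm vector per neighbourhood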
3
3
79,147,443
2024-11-1
https://stackoverflow.com/questions/79147443/is-it-more-efficient-to-highlight-cells-in-qabstracttablemodels-data-method-or
I'm building a Qt table in Python to display a large pandas DataFrame. The table uses a custom PandasTableModel (subclassing QAbstractTableModel) to connect to the DataFrame, and I want to highlight cells conditionally—e.g., red for False values and green for True. I have found two ways of doing this: Using the data method in the model: Returning a specific background color for certain cells based on their value. class PandasTableModel(QtCore.QAbstractTableModel): ... @override def data(self, index, role): if not index.isValid(): return None value = self._dataframe.iloc[index.row(), index.column()] if role == QtCore.Qt.ItemDataRole.DisplayRole: return str(value) elif role == QtCore.Qt.ItemDataRole.BackgroundRole: if value == "True": return QtGui.QColor(0, 255, 0, 100) elif value == "False": return QtGui.QColor(255, 0, 0, 100) return None Using a custom delegate's paint method: Setting the cell's background color in the paint method of a delegate (subclassed from QItemDelegate): class PandasItemDelegate(QtWidgets.QItemDelegate): ... @override def paint(self, painter, option, index): super().paint(painter, option, index) value = index.data(QtCore.Qt.ItemDataRole.DisplayRole) if value in ["True", "False"]: color = QtGui.QColor(0, 255, 0, 100) if value == "True" else QtGui.QColor(255, 0, 0, 100) painter.fillRect(option.rect, color) Which approach would be more efficient, especially for a large DataFrame, in terms of memory and processing speed?
Premise: the proper comparison First of all, there is no such thing as more/better/optimal/faster/etc. in their absolute meaning. What's "best" is up to your requirements and context: it may be good for you and terrible for others. And that doesn't even consider user expectations in most cases. Then, before considering "efficiency", you should also think about consistency, effectiveness, and the implications of each approach you may choose. Finally, while the two concepts you're suggesting are relatively similar, their results certainly are not, especially because your implementations are potentially quite different. Distinction between model data and view display An important thing to remember about item/views is that the model is always expected to reliably return consistent data (no matter "what" shows that data), while the view may choose to display that data in its own way. In the model data() case, you let the view (and therefore, its delegate) draw the item background based on the BackgroundRole, using a context to define what color to return. The delegate will theoretically fill the cell with that background, and then draw the remaining "display" contents (possibly some text, an icon and/or a checkbox, if they exist) above that. The "theoretically" has an important meaning, because it may be completely ignored in some cases. In the delegate case, you're specifically drawing the color depending on the given field, but above the "display" contents: how that is drawn may change because of composition (see below). The two common Qt delegates (QItemDelegate and QStyledItemDelegate) override the background color using the Highlight palette role when an item is selected, meaning that in the "item data" case the user will probably never be able to see the background of a selected item. Furthermore, if the view uses a QStyledItemDelegate and a proper Qt Style Sheet is set (for instance, using the ::item { background: <color>; } selector/property combination), the background is possibly completely ignored to begin with. This means that if you follow the model approach, the background will probably never be shown for any selected item at all. If showing that color is of utmost importance, you cannot rely on the model approach only. Which brings us to an appropriate delegate approach, but that's not the only issue. Color composition As explained above, there is an important difference in how you applied those methods: the first draws the background first (before anything else), the other after that (above what has been drawn before). You probably did not notice or consider the difference enough, but it is there, because drawing above something is achieved through composition, and the order on which things are drawn depends on the opacity of the colors involved. Drawing with an opaque color above something, always clears what was "behind" it with the new color, but if you use a partially transparent color, then it's blended with what was previously drawn. This is fundamentally identical to physical painting: if the paint you're using is too diluted (as in a not fully opaque color), the first pass will just blend what's behind with the new color; draw a line using a yellow marker and then draw another line with a blue one intersecting the first one: what color do you get? 
See the difference using the background role, in the model data approach (the text is in opaque color and is drawn on a semi transparent background): And then drawing the background above, in your delegate attempt (the opposite): The difference in the text is quite visible. It's also important to notice that the "background" is not identical: in the first case, the delegate uses the given color as a reference, eventually basing the painted rectangle on that color (the style decides what to do with it); in the second case, the color is the same, because you explicitly filled the rectangle with that color. What's "best" Performance is important, but should always be balanced with actual efficiency: not only "how fast" an approach can be, but also how effective it may be. The fact that the model data may be large must be carefully considered: it's not always relevant. Qt item views are relatively smart, and they query the model only about what they need; some roles are only requested upon drawing, others for related aspects (eg: to know how wide a column should be): background/foreground roles are only relevant when displaying items, therefore if the view has a size that can only show 10 rows and 10 columns, it really doesn't matter if the model has one hundred or one million items, at least for what concerns those roles: you can rest assured that data() will be called with those roles only for the items that are currently visible. Besides the paint composition issues noted above (regarding your approaches), the only difference is that you're using a custom delegate that draws things on its own, and it does it from Python, which introduces a relatively important bottle neck: using the default delegate (that would "do things" on the "C++ side" of things, therefore much faster) against relying it on the more efficient C++ and precompiled approach. If your view normally shows a reasonable amount of items ("how much reasonable" is up to you to decide, but if you're under the count of 1000 on a decent computer, that should be fine) that shouldn't be an issue, especially if you're using the simpler QItemDelegate. QItemDelegate and QStyledItemDelegate While they have the same purpose and achieve similar results, the two standard delegates Qt provides are different. QItemDelegate is the simplest one, it's generally faster and less "fancy", as it doesn't rely too much on the QStyle. The background and selection colors, if any, are shown as plain colors, without applying 3D/glow/hover gradients. Even though those are done on the "C++ side", filling dozens of rectangles with a given color is obviously simpler than computing those gradients for each one of them. Private subclasses of that delegate are used for QComboBox popups, QColumnView and the internal QTableView of QCalendarWidget. QStyledItemDelegate is the default for all standard Qt item views. It's normally more compliant with the OS, but its complexity may add some significant overhead if you're going to show lots of items at the same time (in the order of many thousands) and the model is also large: the problem will be mostly visible during scrolling. Considerations about possible approaches and optimizations As said above, any Python implementation may add a significant bottleneck, so you need to carefully choose which delegate to use (considering how much display matters) and how to implement it. Knowing which, when and how often each function is called, is mandatory for proper comparison in each consideration. 
For instance, QItemDelegate provides drawBackground(), which would be nice to implement; unfortunately, it is not a "virtual" function, meaning that trying to override it is pointless unless it's called explicitly. Using QStyledItemDelegate and overriding initStyleOption() to set the option.backgroundBrush may be a possibility, but consider that that function is also called a lot, even when nothing is being painted yet (sizeHint() calls it). It also is important to know if the background color should be applied to any item in the view, or only for specific rows or columns. For obvious reasons, if you have a lot of columns and you only need this behavior for just one or a few of them, then you shall create a single delegate for the view and call setItemDelegateForColumn() for those columns. If you don't care much about the selection overriding the background color and text/decoration display, using the model data approach with QItemDelegate might be one good performant choice. Yet, there's still room for improvement. For instance, there's little point in creating new QColors every time in the model depending on the cell value. A possible solution could be to use a basic dictionary: class PandasTableModel(QtCore.QAbstractTableModel): boolColors = { "True": QColor(0, 255, 0, 100), "False": QColor(255, 0, 0, 100), } ... @override def data(self, index, role): if not index.isValid(): return if role == QtCore.Qt.ItemDataRole.DisplayRole: return str(self._dataframe.iloc[index.row(), index.column()]) elif role == QtCore.Qt.ItemDataRole.BackgroundRole: return self.boolColors.get( self._dataframe.iloc[index.row(), index.column()]) My changes/removal in the return None cases are irrelevant in efficiency and just up to personal style (I don't particularly agree with the PEP-8 on this). Note that the above syntax is not that elegant, but if the model is quite large and the view may show thousands of items at any given time, its efficiency may be preferred: since data() is called very often, and lots of those calls may end up in unused roles, it's better to get the value only when actually required; local variable allocation (value = ...), even though relatively fast, should be done if the variable is actually needed more than once within the same block. Then, let's consider the composition issue: as said above, you shall not draw the background above everything. If you want to display both the background and the selection, you need to do composition appropriately, meaning that you shall consider both colors. There are various options for this. As said above, which one to choose is up to your specific needs. For instance, you may still choose the simpler QItemDelegate, but rely on its drawBackground() behavior, which is either based on the palette (when selected) or BackgroundRole, considering the option state.
The following is a possible example, without the model data() implementation of the background role: class BackgroundDelegate(QItemDelegate): boolColors = { # Each state has two color values, depending on the selection "True": (QColor(0, 255, 0, 100), QColor(0, 255, 0)), "False": (QColor(255, 0, 0, 100), QColor(255, 0, 0)), } transparent = QColor(Qt.transparent) def paint(self, qp, opt, index): bgColors = self.boolColors.get(index.data()) if bgColors: # always draw the background if opt.state & QStyle.State.State_Selected: qp.fillRect(opt.rect, bgColors[1]) # override the background color for the "Highlight" palette # role by making it transparent, so that nothing will be # drawn in drawBackground(); yet, keep the "option" state, # which may be relevant for other aspects decided by the # style, such as the font weight, italic, etc. opt.palette.setColor(QPalette.ColorRole.Highlight, self.transparent) else: qp.fillRect(opt.rect, bgColors[0]) super().paint(qp, opt, index) Final considerations The above should clearly summarize how no best/more/better/etc. option could ever exist without a proper context. What has been shown above is just a fragment of possible options, and should also explain how every possible implementation should be carefully considered: in your case, while appropriate in principle, your two options were actually resulting in two quite different and inconsistent results, meaning that none of them were appropriate for reasonable comparisons. Finally, ensure that code consistency (including function signature) is also appropriate: some of your overrides are not. For instance, data() expects that the role is keyworded (as in optional) defaulting to Qt.DisplayRole. That may not be so relevant for performance or efficiency, but it's still important for consistency, especially in subclassing. The @override decorator isn't mandatory for functionality, yet you've considered adding it as more relevant than the expected signature (which does not require the role argument). Check your priorities: while typing may be relevant, it cannot be more important than the expected signature syntax.
1
3
79,151,731
2024-11-2
https://stackoverflow.com/questions/79151731/error-computing-phase-angle-between-two-time-series-using-hilbert-transform
I'm trying to compute the phase angle between two time-series of real numbers. To check if my function is working without errors I have created two sine waves with a phase of 17 degrees. However, when I compute the phase angle between those two sine waves I do not get the 17 degrees. Here's my script: import numpy as np from scipy.signal import hilbert import matplotlib.pyplot as plt def coupling_angle_hilbert(x, y, datatype, center=True, pad=True): """ Compute the phase angle between two time series using the Hilbert transform. Parameters: - x: numpy array Time series data for the first signal. - y: numpy array Time series data for the second signal. - center: bool, optional If True, center the amplitude of the data around zero. Default is True. - pad: bool, optional If True, perform data reflection to address issues arising with data distortion. Default is True. - unwrap: bool, optional If True, unwrap the phase angle to avoid phase wrapping. Default is True. Returns: - phase_angle: numpy array Phase angle between the two signals. """ # Convert input data to radians if specified as degrees if datatype.lower().strip() == 'degs': x = np.radians(x) y = np.radians(y) # Center the signals if the 'center' option is enabled if center: # Adjust x to be centered around zero: subtract minimum, then offset by half the range x = x - np.min(x) - ((np.max(x) - np.min(x))/2) # Adjust y to be centered around zero: subtract minimum, then offset by half the range y = y - np.min(y) - ((np.max(y) - np.min(y))/2) # Reflect and pad the data if padding is enabled if pad: # Number of padding samples equal to signal length # Ensure that the number of pads is even npads = x.shape[0] // 2 * 2 # Ensure npads is even # Reflect data at the beginning and end to create padding for 'x' and 'y' x_padded = np.concatenate((x[:npads][::-1], x, x[-npads:][::-1])) y_padded = np.concatenate((y[:npads][::-1], y, y[-npads:][::-1])) else: # If padding not enabled, use original signals without modification x_padded = x y_padded = y # Apply the Hilbert transform to the time series data hilbert_x = hilbert(x_padded) hilbert_y = hilbert(y_padded) # Calculate the phase of each signal by using arctan2 on imaginary and real parts phase_angle_x = np.arctan2(hilbert_x.imag, x_padded) phase_angle_y = np.arctan2(hilbert_y.imag, y_padded) # Calculate the phase difference between y and x phase_angle = phase_angle_y - phase_angle_x # Trim the phase_angle to match the shape of x or y if pad: # Remove initial and ending padding to return only the original signal's phase angle difference phase_angle = phase_angle[npads:npads + x.shape[0]] return phase_angle # input data angles = np.radians(np.arange(0, 360, 1)) phase_offset = np.radians(17) wav1 = np.sin(angles) wav2 = np.sin(angles + phase_offset) # Compute phase_angle usig Hilbert transform ca_hilbert = coupling_angle_hilbert(wav1, wav2, 'rads', center=True, pad=True) plt.plot(np.degrees(ca_hilbert)) plt.show() Thank you n advance for any help.
Instead of using arctan2(hilbert.imag, x), you can use np.angle(), which returns the argument (angle) of a complex number (always in the range [-π, π]). It's essentially doing arctan2(y, x) for a complex number x + iy. Also, after phase_angle = phase_y - phase_x, we need to ensure again that it lies in [-π, π], so we do phase_angle = np.angle(np.exp(1j * phase_angle)), following the documentation. Thus, your function becomes: import matplotlib.pyplot as plt import numpy as np from scipy.signal import hilbert def coupling_angle_hilbert(x, y, datatype="rads", center=True, pad=True): """ Compute the phase angle between two time series using the Hilbert transform. Parameters: - x: numpy array Time series data for the first signal. - y: numpy array Time series data for the second signal. - datatype: str, optional Specify if input is in 'rads' or 'degs'. Default is 'rads'. - center: bool, optional If True, center the amplitude of the data around zero. Default is True. - pad: bool, optional If True, perform data reflection to address issues arising with data distortion. Default is True. Returns: - phase_angle: numpy array Phase angle between the two signals in radians. """ if datatype.lower().strip() == "degs": x = np.radians(x) y = np.radians(y) # Center the signals if the 'center' option is enabled if center: # Adjust x to be centered around zero: subtract minimum, then offset by half the range x = x - np.min(x) - ((np.max(x) - np.min(x)) / 2) # Adjust y to be centered around zero: subtract minimum, then offset by half the range y = y - np.min(y) - ((np.max(y) - np.min(y)) / 2) # Reflect and pad the data if padding is enabled if pad: # Number of padding samples equal to signal length # Ensure that the number of pads is even npads = x.shape[0] // 2 * 2 # Ensure npads is even # Reflect data at the beginning and end to create padding for 'x' and 'y' x_padded = np.concatenate((x[:npads][::-1], x, x[-npads:][::-1])) y_padded = np.concatenate((y[:npads][::-1], y, y[-npads:][::-1])) else: # If padding not enabled, use original signals without modification x_padded = x y_padded = y # Apply the Hilbert transform to the time series data hilbert_x = hilbert(x_padded) hilbert_y = hilbert(y_padded) # Calculate the instantaneous phases phase_x = np.angle(hilbert_x) phase_y = np.angle(hilbert_y) # Calculate the phase difference phase_angle = phase_y - phase_x # Ensure phase angle is in [-π, π] phase_angle = np.angle(np.exp(1j * phase_angle)) # Trim the phase_angle to match the shape of x or y if pad: # Remove initial and ending padding to return only the original signal's phase angle difference phase_angle = phase_angle[npads : npads + x.shape[0]] return phase_angle With that, I get 17.41 as phase difference.
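A quick usage check (my addition), run right after the function above, against the sine waves from the question; the per-sample differences near the edges are less reliable, so a central average is the fairest summary:

angles = np.radians(np.arange(0, 360, 1))
wav1 = np.sin(angles)
wav2 = np.sin(angles + np.radians(17))

phase = coupling_angle_hilbert(wav1, wav2, 'rads', center=True, pad=True)
print(np.degrees(np.mean(phase[30:-30])))   # should land close to the 17 degree offset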
2
3
79,151,303
2024-11-2
https://stackoverflow.com/questions/79151303/how-to-add-the-plane-y-x-to-a-3d-surface-plot-in-plotly
I am currently working on a 3D surface plot using Plotly in Python. Below is the code I have so far: import numpy as np import plotly.graph_objects as go # Definition of the domain x = np.linspace(-5, 5, 100) y = np.linspace(-5, 5, 100) X, Y = np.meshgrid(x, y) # Definition of the function, avoiding division by zero Z = np.where(X**2 + Y**2 != 0, (X * Y) / (X**2 + Y**2), 0) # Creation of the interactive graph fig = go.Figure(data=[go.Surface(z=Z, x=X, y=Y, colorscale='Viridis')]) # Add title and axis configurations fig.update_layout( title='Interactive graph of f(x, y) = xy / (x^2 + y^2)', scene=dict( xaxis_title='X', yaxis_title='Y', zaxis_title='f(X, Y)' ), ) # Show the graph fig.show() I would like to add the plane (y = x) to this plot. However, I am having trouble figuring out how to do this. Can anyone provide guidance on how to add this plane to my existing surface plot? Any help would be greatly appreciated!
To add '𝑦=𝑥' to your 3D surface plot in Plotly, you can define a separate surface for this plane and add it to the figure using another 'go.Surface' object. In Plotly, the Surface plot requires explicit values for 𝑍 to display a plane. To achieve the 𝑦=𝑥 plane across a given 𝑍 range, we need to set both 𝑋 and 𝑌 to represent 𝑦=𝑥 over a flat range of 𝑍. Here is the new code that I modified, import numpy as np import plotly.graph_objects as go # Definition of the domain x = np.linspace(-5, 5, 100) y = np.linspace(-5, 5, 100) X, Y = np.meshgrid(x, y) # Definition of the function, avoiding division by zero Z = np.where(X**2 + Y**2 != 0, (X * Y) / (X**2 + Y**2), 0) # Creation of the main surface plot fig = go.Figure(data=[go.Surface(z=Z, x=X, y=Y, colorscale='Viridis')]) # Define the y = x plane x_plane = np.linspace(-5, 5, 100) y_plane = x_plane # y = x X_plane, Z_plane = np.meshgrid(x_plane, np.linspace(-1, 1, 2)) # Z range can be adjusted as needed Y_plane = X_plane # Since y = x # Add the y = x plane to the plot fig.add_trace(go.Surface(z=Z_plane, x=X_plane, y=Y_plane, colorscale='Reds', opacity=0.5)) # Add title and axis configurations fig.update_layout( title='3D Surface Plot with Plane y = x', scene=dict( xaxis_title='X', yaxis_title='Y', zaxis_title='f(X, Y)' ), ) # Show the graph fig.show() and the output is
1
1
79,151,228
2024-11-2
https://stackoverflow.com/questions/79151228/how-can-i-specify-the-directory-i-want-to-get-using-the-os-library
I have a directory called "Data" and inside it I have 35 other directories with another bunch of directories each. I need to check if these last directories have .txt files and, if so, I want to get the name of the specific directory that is one of the aforementioned 35. After this, I want to use the pandas library to generate a "yes/no" spreadsheet with "yes" for the directories (one of the 35) that have .txt files and "no" for the directories (one of the 35) that do not have .txt files. For now, I could write the following as a test: import os w=[] w=os.listdir(r'C:\Users\Name\New\Data') tot=len(w) a=0 while a!=tot: print(w[a]) a=a+1 Which gives me the names of the 35 main directories I am interested in (Folder1, Folder2, Folder3, ..., Folder35) AND for root, dirs, files in os.walk(r'C:\Users\Name\New\Data'): for file in files: if file.endswith('.txt'): print(root) But it results in a list with the whole path, like "C:\Users\Name\New\Data\Folder1\Folder1-1", and what I really need is to compare the name "Folder1" to the entries of the aforementioned list. How can I check if the element in "w[]" corresponds to the name in "root"?
To check if each main directory contains any .txt files in its subdirectories, we can combine the logic you've started with and streamline it to match only the main directory name (one of the 35). Here's the code to achieve this and generate a yes/no spreadsheet using pandas. import os import pandas as pd main_dir = r'C:\Users\Name\New\Data' main_folders = os.listdir(main_dir) results = {folder: "No" for folder in main_folders} for root, dirs, files in os.walk(main_dir): if any(file.endswith('.txt') for file in files): main_folder_name = os.path.basename(os.path.dirname(root)) if main_folder_name in results: results[main_folder_name] = "Yes" df = pd.DataFrame(list(results.items()), columns=['Folder', 'Has_txt_file']) output_path = r'C:\Users\Name\New\output.xlsx' df.to_excel(output_path, index=False) I hope this will help you a little.
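Note that os.path.basename(os.path.dirname(root)) assumes the .txt files sit exactly one level below the 35 folders. If the nesting can be deeper, a pathlib-based sketch (my addition, with the same hypothetical paths) works at any depth:

from pathlib import Path
import pandas as pd

main_dir = Path(r'C:\Users\Name\New\Data')

# rglob searches every nesting level below each of the 35 folders
results = {
    folder.name: "Yes" if any(folder.rglob("*.txt")) else "No"
    for folder in main_dir.iterdir() if folder.is_dir()
}

df = pd.DataFrame(list(results.items()), columns=['Folder', 'Has_txt_file'])
df.to_excel(r'C:\Users\Name\New\output.xlsx', index=False)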
2
1
79,151,259
2024-11-2
https://stackoverflow.com/questions/79151259/django-model-filter-for-string
I have table which has string such as id | url 1 | /myapi/1241/ 2 | /myapi/ 3 | /myapi/1423/ 4 | /myapi/ Now I want to filter them like this below myModel.Objects.filter(url="/myapi/****/") Is it possible , or is there any method to do this?
Filter with the __regex lookup [Django-doc]: MyModel.objects.filter(url__regex=r'/myapi/\d+/')
2
2
79,151,289
2024-11-2
https://stackoverflow.com/questions/79151289/too-large-dataframe-python-spider
I tried to load an Excel file into Pandas and process it. I work with large Excel files. For example, I'm working on a file with 50 columns. I'd like to see it in Spyder, but when I use the print function, every time a part of the columns is hidden behind three dots. See the attached picture. I saw someone suggest using Jupyter; is this the only solution? I tried to use print, display and tabulate but nothing worked. I'm hoping for a simple suggestion; I'm not a professional developer. #EXAMPLE FOR QUESTION import pandas as pd df = pd.DataFrame({'Jan21': [22, 20, 25, 30, 4, 8, 12, 10]}) df['Feb21'] = df.Jan21*3 df['Mar21'] = df.Jan21*0.2 df['Apr21'] = df.Jan21*1.23 df['Mag21'] = df.Jan21*2.5 df['Giu21'] = df.Jan21*3 df['Lug21'] = df.Jan21*4.2 df['Ago21'] = df.Jan21*0.25 df['Set21'] = df.Jan21*2.5 df['Ott21'] = df.Jan21*2.7 df['Nov21'] = df.Jan21*3 df['Dic21'] = df.Jan21*0.33 df['Jan22'] = df.Jan21*1.89 df['Feb22'] = df.Jan21*2.65 print ("Monthly sales") print (df) For example, if I run this simple code, you will see that in the end only a few columns of the dataframe are visible. I tried Jupyter but it also had the same problem.
To see all columns, you can adjust the pandas display settings to avoid truncated output. 1. Option 1: Adjust pandas display options You can change the display option for maximum columns. pd.set_option("display.max_columns", None) print("Monthly sales") print(df) 2. Option 2: Use to_string() for full dataframe display You can use to_string() to show entire dataframe without truncated columns. print("Monthly sales") print(df.to_string()) I hope this will help you a little.
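If the frame still wraps or gets truncated in the Spyder console after that, the width-related display options matter as well; an extra tweak (my addition, not from the answer, using df from the question):

import pandas as pd

pd.set_option("display.max_columns", None)          # never hide columns behind "..."
pd.set_option("display.width", 200)                 # allow a wider line before wrapping
pd.set_option("display.expand_frame_repr", False)   # keep all columns on a single row
print(df)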
2
0
79,150,393
2024-11-2
https://stackoverflow.com/questions/79150393/how-can-i-do-one-hot-encoding-from-multiple-columns
When I search for this topic, I get answers that do not match what I want to do. Let's say I have a table like this: Item N1 N2 N3 N4 Item1 1 2 4 8 Item2 2 3 6 7 Item3 4 5 7 9 Item4 1 5 6 7 Item5 3 4 7 8 I would like to one-hot encode this to get: Item 1 2 3 4 5 6 7 8 9 Item1 1 1 0 1 0 0 0 1 0 Item2 0 1 1 0 0 1 1 0 0 Item3 0 0 0 1 1 0 1 0 1 Item4 1 0 0 0 1 1 1 0 0 Item5 0 0 1 1 0 0 1 1 0 Is this feasible at all? I am now in the process of coding some sort of loop to go through each line but I decided to ask if anyone knows a more efficient way to do this.
Use melt and crosstab. tmp = df.melt('Item') result = pd.crosstab(tmp['Item'], tmp['value']).reset_index().rename_axis(None, axis=1) Item 1 2 3 4 5 6 7 8 9 0 Item1 1 1 0 1 0 0 0 1 0 1 Item2 0 1 1 0 0 1 1 0 0 2 Item3 0 0 0 1 1 0 1 0 1 3 Item4 1 0 0 0 1 1 1 0 0 4 Item5 0 0 1 1 0 0 1 1 0
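An alternative sketch (my addition) with get_dummies instead of crosstab, in case you prefer to build the indicators column-wise and collapse them per Item:

out = (pd.get_dummies(df.set_index('Item').stack())  # one indicator column per value 1-9
         .groupby(level=0).max()                     # collapse N1..N4 back to one row per Item
         .astype(int)
         .reset_index())
print(out)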
3
2
79,147,681
2024-11-1
https://stackoverflow.com/questions/79147681/how-to-separate-the-string-by-specific-symbols-and-write-it-to-list
I have the following string: my_string='11AB2AB33' I'd like to write this string in a list, so 'AB' is a single element of this list in the following way: ['1', '1', 'AB', '2', 'AB', '3', '3'] I tried to do it by list(my_string) but the result wasn't what I expected: ['1', '1', 'A', 'B', '2', 'A', 'B', '3', '3'] I also tried partition method: list(my_string.partition('AB')) and also didn't get expected result ['11', 'AB', '2AB33']
You can use re.findall with an alternation using the pipe | matching either AB or a non whitespace character \S If you also want to match spaces you can use a . instead of \S You can see the matches here on regex101. my_string='11AB2AB33' print(re.findall(r'AB|\S', my_string)) Output ['1', '1', 'AB', '2', 'AB', '3', '3'] If you want to use split, and you only have characters A-Z or digits 0-9, you could use non word boundary to get a positon where directly on the right is either AB or a digit. You can see the matches here on regex101 my_string='11AB2AB33' print(re.split(r"\B(?=AB|\d)", my_string)) Output ['1', '1', 'AB', '2', 'AB', '3', '3']
2
2
79,149,913
2024-11-2
https://stackoverflow.com/questions/79149913/cannot-install-python-packages-because-of-urllib3
I am trying to run a Python script: python main.py obtaining this output: Traceback (most recent call last): File "main.py", line 8, in <module> import stateful File "/path/to/s.py", line 599, in <module> import policy as offloading_policy File "/path/to/p.py", line 2, in <module> from pacsltk import perfmodel ModuleNotFoundError: No module named 'pacsltk' Process finished with exit code 1 ... but not only pacsltk is already installed: when I try to install it anyways (pip install pacsltk), I obtain: Defaulting to user installation because normal site-packages is not writeable Collecting pacsltk Downloading pacsltk-0.2.0-py3-none-any.whl.metadata (2.3 kB) Collecting boto3>=1.11.5 (from pacsltk) Downloading boto3-1.35.54-py3-none-any.whl.metadata (6.7 kB) Requirement already satisfied: numpy>=1.18.1 in ./.local/lib/python3.8/site-packages (from pacsltk) (1.24.4) Requirement already satisfied: scipy>=1.4.1 in ./.local/lib/python3.8/site-packages (from pacsltk) (1.10.1) Collecting botocore<1.36.0,>=1.35.54 (from boto3>=1.11.5->pacsltk) Downloading botocore-1.35.54-py3-none-any.whl.metadata (5.7 kB) Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in ./.local/lib/python3.8/site-packages (from boto3>=1.11.5->pacsltk) (1.0.1) Collecting s3transfer<0.11.0,>=0.10.0 (from boto3>=1.11.5->pacsltk) Downloading s3transfer-0.10.3-py3-none-any.whl.metadata (1.7 kB) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in ./.local/lib/python3.8/site-packages (from botocore<1.36.0,>=1.35.54->boto3>=1.11.5->pacsltk) (2.8.2) Collecting urllib3<1.27,>=1.25.4 (from botocore<1.36.0,>=1.35.54->boto3>=1.11.5->pacsltk) Downloading urllib3-1.26.20-py2.py3-none-any.whl.metadata (50 kB) Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.36.0,>=1.35.54->boto3>=1.11.5->pacsltk) (1.14.0) WARNING: No metadata found in ./.local/lib/python3.8/site-packages Downloading pacsltk-0.2.0-py3-none-any.whl (9.6 kB) Downloading boto3-1.35.54-py3-none-any.whl (139 kB) Downloading botocore-1.35.54-py3-none-any.whl (12.7 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.7/12.7 MB 901.6 kB/s eta 0:00:00 Downloading s3transfer-0.10.3-py3-none-any.whl (82 kB) Downloading urllib3-1.26.20-py2.py3-none-any.whl (144 kB) WARNING: Error parsing dependencies of attrs: [Errno 2] File o directory non esistente: '/home/username/.local/lib/python3.8/site-packages/attrs-23.1.0.dist-info/METADATA' WARNING: Error parsing dependencies of urllib3: [Errno 2] File o directory non esistente: '/home/username/.local/lib/python3.8/site-packages/urllib3-2.0.4.dist-info/METADATA' Installing collected packages: urllib3, botocore, s3transfer, boto3, pacsltk Attempting uninstall: urllib3 WARNING: No metadata found in ./.local/lib/python3.8/site-packages Found existing installation: urllib3 2.0.4 error: uninstall-no-record-file × Cannot uninstall urllib3 2.0.4 ╰─> The package's contents are unknown: no RECORD file was found for urllib3. 
hint: You might be able to recover from this via: pip install --force-reinstall --no-deps urllib3==2.0.4 Furthermore, when I try to run the suggested command pip install --force-reinstall --no-deps urllib3==2.0.4, I am only able to obtain just the same error: Defaulting to user installation because normal site-packages is not writeable Collecting urllib3 Downloading urllib3-2.2.3-py3-none-any.whl.metadata (6.5 kB) WARNING: No metadata found in ./.local/lib/python3.8/site-packages Downloading urllib3-2.2.3-py3-none-any.whl (126 kB) Installing collected packages: urllib3 Attempting uninstall: urllib3 WARNING: No metadata found in ./.local/lib/python3.8/site-packages Found existing installation: urllib3 2.0.4 error: uninstall-no-record-file × Cannot uninstall urllib3 2.0.4 ╰─> The package's contents are unknown: no RECORD file was found for urllib3. hint: You might be able to recover from this via: pip install --force-reinstall --no-deps urllib3==2.0.4 Similar error when trying to pip uninstall urllib3. Also, sudo apt-get autoremove/purge applied to python-urllib3 just says that the package is not installed. The same error with urllib3 seems to be applied with whatever python package I want to install. I have looked to several other questions with this error without getting to solve this problem. What can I do? EDIT Tried @x1337Loser's answer: now everything runs smoothly, but I had the following output after the pip command: Defaulting to user installation because normal site-packages is not writeable (... some `Requirement already satisfied` row...) WARNING: Error parsing dependencies of attrs: [Errno 2] File o directory non esistente: '/home/username/.local/lib/python3.8/site-packages/attrs-23.1.0.dist-info/METADATA'` I am glad my program is now running, but what is that warning?
A few days ago I faced a similar problem with python-requests, and these steps worked for me. (Don't install with sudo: it may seem to work once you switch to sudo, but installing a package with sudo is risky, since you never know whether what you're installing is malware in package form.) Remove the package manually from the site-packages directory: cd ~/.local/lib/python3.8/site-packages/ rm -rf urllib3* Now install your package: pip install pacsltk --upgrade Let me know if this solves your problem.
2
4
79,149,707
2024-11-2
https://stackoverflow.com/questions/79149707/how-to-obtain-n-th-even-triangle-number-using-recursive-algorithm
I know there is a formula for n-th even triangle, but it's just a matter of interest for me to try to write a recursive algorithm. def even_triangle(n): value = n * (n + 1) // 2 if value % 2 == 0: return value return even_triangle(n + 1) for i in range(1, 12): print(f"{i})", even_triangle(i)) I tried to write a function which could compute the n-th even triangle number (OEIS A014494), but it turned out that it returns the previous result if the n-th triangle number is odd, but not the next even triangle number as expected. Output: 1) 6 2) 6 3) 6 4) 10 5) 28 6) 28 7) 28 8) 36 9) 66 10) 66 11) 66 What I expect: 1) 6 2) 10 3) 28 4) 36 5) 66 6) 78 7) 120 8) 136 9) 190 10) 210 11) 276
The reason you get duplicates follows from your recursive call: return even_triangle(n + 1) When this gets executed, then consequently even_triangle(n) === even_triangle(n + 1), which is not what you ever want to have (since it enforces the duplicate value). When using recursion, you would typically make a recursive call on a simpler version of the problem (so with a lesser value of n) and then "upgrade" that result with some manipulation to make it suitable for the current value of n. In this particular case we can see that results come in pairs of standard triangular numbers, then skipping two triangular numbers, and then again a pair, ...etc. That makes me look at a solution where you determine whether the n-th even triangular number is the first one of such a pair, or the second one. If it is the second one, the difference with the preceding n-th even triangular number (got from recursion) is the distance between normal triangular numbers, otherwise the distance is greater. In short, we can find this recursive relation: def even_triangle(n): if n == 0: return 0 elif n % 2 == 0: return even_triangle(n - 1) + 2 * n else: return even_triangle(n - 1) + 6 * n As the difference between the two recursive calls is just the factor 2 versus 6, you could produce that factor from n % 2 (i.e. 2 + (n % 2) * 4) and merge those two recursive return statements. The base case can also be merged into it with an and operator: def even_triangle(n): return n and even_triangle(n - 1) + (2 + (n % 2) * 4) * n As you have already noted yourself, there is a closed formula for this, so using recursion is not the most efficient here.
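A quick sanity check (my addition), using either definition of even_triangle above, against the expected values listed in the question:

expected = [6, 10, 28, 36, 66, 78, 120, 136, 190, 210, 276]
print([even_triangle(i) for i in range(1, 12)] == expected)   # True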
3
3
79,149,709
2024-11-2
https://stackoverflow.com/questions/79149709/efficient-way-to-delete-columns-and-rows-from-a-numpy-array-using-slicing-and-no
Would it be possible given an array A, bad row indices and bad column indices to use slicing to make a new array that does not have these rows or columns? This can be done with np.delete as follows: import numpy as np A=np.random.rand(20,16) bad_col=np.arange(0,A.shape[1],4)[1:] bad_row=np.arange(0,A.shape[0],4)[1:] Anew=np.delete(np.delete(A,bad_row,0),bad_col,1) print('old shape ',A.shape) print('new shape ',Anew.shape) I also know that you can use slicing to select certain columns and rows from an array. But I'm wondering if it can be used to exclude certain column and rows? and if not what the best way besides np.delete to do that. EDIT: Based on comments, it might not be possible with slicing in place. How about creating a new array with advanced indexing? It can be done with the following code but slow, looking for a faster alternative: good_col = [i for i in range(A.shape[1]) if i==0 or i % 4 != 0] good_row=[ i for i in range(A.shape[0]) if i==0 or i % 4 != 0] Anew2=A[good_row,:][:,good_col] print('new shape ',Anew2.shape) Thank you
You cannot remove items of an array without either moving all items (which is slow for large arrays) or creating a new one. There is no other solution. In Numba or Cython, you can directly create a new array with one operation instead of two, so it should be about twice as fast for large arrays. It should be even faster for small arrays because Numpy functions have a significant overhead for small arrays. Numpy views are either contiguous or strided. There is no way to use a variable stride along a given axis. This has been defined that way for the sake of performance. Thus, if you want to select only columns and rows with an even ID, you can (because there is a constant stride for each axis that can be set for the resulting view). However, you cannot select all rows/columns with an ID that is not divisible by 4, for example (because no view can be built with a constant stride). Note that if you try to cheat by creating a new dimension and then flattening the view, Numpy will create a copy (because there is no other way).
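Since a copy is unavoidable, the practical goal is to make only one. A sketch (my addition, not part of the answer) that replaces the chained A[good_row,:][:,good_col] from the question with a single advanced-indexing pass via boolean masks and np.ix_:

import numpy as np

A = np.random.rand(20, 16)
bad_col = np.arange(0, A.shape[1], 4)[1:]
bad_row = np.arange(0, A.shape[0], 4)[1:]

# Build keep-masks once, then index rows and columns together
keep_row = np.ones(A.shape[0], dtype=bool)
keep_row[bad_row] = False
keep_col = np.ones(A.shape[1], dtype=bool)
keep_col[bad_col] = False

Anew = A[np.ix_(keep_row, keep_col)]   # one copy instead of two chained ones
print(Anew.shape)                      # (16, 13)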
5
3
79,149,626
2024-11-1
https://stackoverflow.com/questions/79149626/how-to-fold-python-code-using-sed-and-or-awk
I have a large Python code base. I want to get a feel for how a BaseClass is subclassed by 'grepping out' the name of the subclass and the functions in the class, but only for classes that inherit from SomeBaseClass. So if we have multiple files that have multiple classes, and some of the classes look like: class SubClass_A(BaseClass): def foo(self): ... async def goo(self): ... class AnotherClass: ... class SubClass_B(BaseClass): def foo(self): ... async def goo(self): ... If we apply some sort of script to this code, it would output: File: /path/to/file class SubClass_A(BaseClass): def foo(self): async def goo(self): class SubClass_B(BaseClass): def foo(self): async def goo(self): There can be many classes in the same file, and some of them may be BaseClass subclasses. Essentially, this is like folding lines of code in an IDE. Now, I tried using sed to do this, but I'm not an expert in it, and sed won't be able to print the file path. However, I know awk can print the file path, but I don't know awk! Argh. I thought about writing a Python program to do this, but this problem seems like something a nifty sed/awk program can do. Thanks for the help.
Using any awk, this may be what you're trying to do: $ awk '/^class/{ f=/\(BaseClass\)/ } f && $1 ~ /^(class|def|async)$/' file class SubClass_A(BaseClass): def foo(self): async def goo(self): class SubClass_B(BaseClass): def foo(self): async def goo(self):
2
4
79,149,172
2024-11-1
https://stackoverflow.com/questions/79149172/python-replace-period-if-its-the-only-value-in-column
How do I replace '.' in a dataframe if it's the only value in a cell without also replacing it if it's part of a decimal number? Here's the dataframe id 2.2222 . . 3.2 1.0 I tried this but it removes all decimals df = pd.DataFrame({'id':['2.2222','.','.','3.2','1.0']}) df['id'] = df['id'].str.replace('.','',regex=False) Unwanted result: id 22222 32 10 This is the desired result: id 2.2222 3.2 1.0
You could use .loc to do the replace: df.loc[df['id'] == '.', 'id'] = ''
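Applied to the sample frame (my addition), plus an equivalent anchored-regex form in case you prefer Series.replace:

import pandas as pd

df = pd.DataFrame({'id': ['2.2222', '.', '.', '3.2', '1.0']})

df.loc[df['id'] == '.', 'id'] = ''                   # only whole-cell periods are touched
# or, anchored so decimals like 2.2222 are left alone:
# df['id'] = df['id'].replace(r'^\.$', '', regex=True)
print(df)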
2
4
79,148,924
2024-11-1
https://stackoverflow.com/questions/79148924/how-to-replace-xml-special-characters-from-text-within-html-tags-using-python
I am quite new to Python. I've been working on a web-scraping project that extracts data from various web pages, constructs a new HTML page using the data, and sends the page to a document management system The document management system has some XML-based parser for validating the HTML. It will reject it if XML special characters appear in text within HTML tags. For example: <p>The price of apples & oranges in New York is > the price of apples and oranges in Chicago</p> will get rejected because of the & and the >. I considered using String.replace() on the HTML doc before sending it, but it is not broad enough, and I don't want to remove valid occurrences of characters like & and >, such as when they form part of a tag or an attribute Could someone please suggest a solution to replacing the XML special characters with, for example, their english word equivalents (eg: & -> and)? Any help you can provide would be much appreciated
BeautifulSoup tames unruly HTML and presents it as unbroken HTML. You can use it to fix references like this. from bs4 import BeautifulSoup doc = """<body> <p>The price of apples & oranges in New York is > the price of apples and oranges in Chicago</p> </body>""" soup = BeautifulSoup(doc, features="lxml") print(soup.prettify()) Outputs <html> <body> <p> The price of apples &amp; oranges in New York is &gt; the price of apples and oranges in Chicago </p> </body> </html> Note that HTML itself is not necessarily XML compliant and there may be other reasons why an HTML document would not pass an XML validator.
1
3
79,147,445
2024-11-1
https://stackoverflow.com/questions/79147445/pandas-pivot-data-fill-mult-index-column-horizontally
i have the following code: import pandas as pd data = { 'name': ['Comp1', 'Comp1', 'Comp2', 'Comp2', 'Comp3'], 'entity_type': ['type1', 'type1', 'type2', 'type2', 'type3'], 'code': ['code1', 'code2', 'code3', 'code1', 'code2'], 'date': ['2024-01-31', '2024-01-31', '2024-01-29', '2024-01-31', '2024-01-29'], 'value': [10, 10, 100, 10, 200], 'source': [None, None, 'Estimated', None, 'Reported'] } df = pd.DataFrame(data) pivot_df = df.pivot(index='date', columns=['name', 'entity_type', 'source', 'code'], values='value').rename_axis([('name', 'entity_type', 'source', 'date')]) df = pivot_df.reset_index() df This produces the following: I am having trouble with the following: I would like to remove the first column I would like to fill the first 3 rows horizontally. so, for example, the blank cells above 'code2' should be Comp1, type1, NaN would be nice to replace those nans in the column headers with empty string Any help would be appreciated. EDIT - working hack as this data i want to end up as an array that looks exactly like it does in the dataframe to insert into a spreadsheet..this works. In this case however, there would not be any meaningful 'columns'.. out = (df.pivot(index='date', columns=['name', 'entity_type', 'source', 'code'], values='value') .rename_axis([('name', 'entity_type', 'source', 'date')]) .reset_index() .fillna('') ) out.columns.names = [None, None, None, None] columns_df = pd.DataFrame(out.columns.tolist()).T out = pd.concat([columns_df, pd.DataFrame(out.values)], ignore_index=True) out
You could (1) reset_index with drop=True, (2) set the display.multi_sparse to False (with pandas.option_context) and (3) fillna with '': df = pd.DataFrame(data) out = (df.pivot(index='date', columns=['name', 'entity_type', 'source', 'code'], values='value') .rename_axis([('name', 'entity_type', 'source', 'date')]) .reset_index(drop=True) .fillna('') ) with pd.option_context('display.multi_sparse', False): print(out) Output: name Comp1 Comp1 Comp2 Comp2 Comp3 entity_type type1 type1 type2 type2 type3 source NaN NaN Estimated NaN Reported code code1 code2 code3 code1 code2 0 100.0 200.0 1 10.0 10.0 10.0 Printing without the index: out = (df.pivot(index='date', columns=['name', 'entity_type', 'source', 'code'], values='value') .fillna('') ) with pd.option_context('display.multi_sparse', False): print(out.to_string(index=False)) Output: Comp1 Comp1 Comp2 Comp2 Comp3 type1 type1 type2 type2 type3 NaN NaN Estimated NaN Reported code1 code2 code3 code1 code2 100.0 200.0 10.0 10.0 10.0 Hiding the indices: out = (df.pivot(index='date', columns=['name', 'entity_type', 'source', 'code'], values='value') .rename_axis(None) .rename(lambda x: '') .fillna('') ) with pd.option_context('display.multi_sparse', False): print(out) Output: name Comp1 Comp1 Comp2 Comp2 Comp3 entity_type type1 type1 type2 type2 type3 source NaN NaN Estimated NaN Reported code code1 code2 code3 code1 code2 100.0 200.0 10.0 10.0 10.0 Updated answer: header as rows out = (df.pivot(index='date', columns=['name', 'entity_type', 'source', 'code'], values='value') .rename_axis(index=None, columns=('name', 'entity_type', 'source', 'date')) .fillna('') .T.reset_index().T .reset_index() ) Output: index 0 1 2 3 4 0 name Comp1 Comp1 Comp2 Comp2 Comp3 1 entity_type type1 type1 type2 type2 type3 2 source NaN NaN Estimated NaN Reported 3 date code1 code2 code3 code1 code2 4 2024-01-29 100.0 200.0 5 2024-01-31 10.0 10.0 10.0
2
4
79,148,025
2024-11-1
https://stackoverflow.com/questions/79148025/create-random-partition-inside-a-pandas-dataframe-and-create-a-field-that-identi
I have created the following pandas dataframe: ds = {'col1':[1.0,2.1,2.2,3.1,41,5.2,5.0,6.1,7.1,10]} df = pd.DataFrame(data=ds) The dataframe looks like this: print(df) col1 0 1.0 1 2.1 2 2.2 3 3.1 4 41.0 5 5.2 6 5.0 7 6.1 8 7.1 9 10.0 I need to create a random 80% / 20% partition of the dataset and I also need to create a field (called buildFlag) which shows whether a record belongs to the 80% partition (buildFlag = 1) or belongs to the 20% partition (buildFlag = 0). For example, the resulting dataframe would like like: col1 buildFlag 0 1.0 1 1 2.1 1 2 2.2 1 3 3.1 0 4 41.0 1 5 5.2 0 6 5.0 1 7 6.1 1 8 7.1 1 9 10.0 1 The buildFlag values are assigned randomly. Can anyone help me, please?
SOLUTION (PANDAS + NUMPY) A possible solution: first, np.random.choice randomly chooses 80% of the df indices without replacement. The df.index.isin function then checks each row's index to see if it was selected. Finally, np.where assigns a 1 to the Flag column for selected indices and a 0 for the others. df.assign(Flag=np.where( df.index.isin(np.random.choice( df.index, size=int(0.8 * len(df)), replace=False)), 1, 0)) SOLUTION (PANDAS + SKLEARN) Alternatively, we can use scikit-learn's train_test_split function: first, it randomly splits the df's indices into two groups: 80% for training and 20% for testing, as specified by test_size=0.2. The training indices are extracted using [0]. The df.index.isin method then checks which indices belong to the training set, producing a boolean array. Finally, this boolean array is converted to integers (1 for True and 0 for False) using .astype(int). from sklearn.model_selection import train_test_split df.assign(Flag = df.index.isin( train_test_split(df.index, test_size=0.2, random_state=42)[0]).astype(int)) Output: col1 Flag 0 1.0 0 1 2.1 1 2 2.2 1 3 3.1 1 4 41.0 1 5 5.2 1 6 5.0 1 7 6.1 1 8 7.1 1 9 10.0 0
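A pandas-only variant (my addition), using DataFrame.sample and the buildFlag column name the question asks for, with df as defined in the question:

build_idx = df.sample(frac=0.8, random_state=42).index    # 80% of the rows, chosen at random
df['buildFlag'] = df.index.isin(build_idx).astype(int)    # 1 = build partition, 0 = holdout
print(df)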
2
2
79,147,500
2024-11-1
https://stackoverflow.com/questions/79147500/glxplatform-object-has-no-attribute-osmesa
I am testing whether OSMesa functions properly, but I encountered the following error. How can this error be resolved? The full error message: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[1], line 2 1 from OpenGL import GL ----> 2 from OpenGL import osmesa 4 ctx = osmesa.OSMesaCreateContext(osmesa.OSMESA_RGBA, None) 5 if ctx: File ~/anaconda3/envs/flame/lib/python3.8/site-packages/OpenGL/osmesa/__init__.py:2 1 from OpenGL.raw.osmesa._types import * ----> 2 from OpenGL.raw.osmesa.mesa import * File ~/anaconda3/envs/flame/lib/python3.8/site-packages/OpenGL/raw/osmesa/mesa.py:34 30 OSMesaGetCurrentContext = _p.GetCurrentContext 32 @_f 33 @_p.types(OSMesaContext,GLenum, OSMesaContext) ---> 34 def OSMesaCreateContext(format,sharelist): pass 36 @_f 37 @_p.types(OSMesaContext,GLenum, GLint, GLint, GLint, OSMesaContext) 38 def OSMesaCreateContextExt(format, depthBits, stencilBits,accumBits,sharelist ): pass File ~/anaconda3/envs/flame/lib/python3.8/site-packages/OpenGL/raw/osmesa/mesa.py:9, in _f(function) 7 def _f( function ): 8 return _p.createFunction( ----> 9 function,_p.PLATFORM.OSMesa, 10 None, 11 error_checker=None 12 ) AttributeError: 'GLXPlatform' object has no attribute 'OSMesa' Minimal reproducible example: from OpenGL import osmesa ctx = osmesa.OSMesaCreateContext(osmesa.OSMESA_RGBA, None) My environment as follows: pyrender==0.1.45 OpenGL==3.1.0
To resolve this, the environment variable PYOPENGL_PLATFORM should be set to osmesa before importing pyrender. The pyrender documentation suggests to do this either in the shell when executing the program: PYOPENGL_PLATFORM=osmesa python render.py or by adding the following lines at the top of the program: import os os.environ["PYOPENGL_PLATFORM"] = "osmesa" Otherwise, it is as the error indicates: the GLX platform doesn't provide OSMesa functionality. Full code example: import os os.environ["PYOPENGL_PLATFORM"] = "osmesa" from OpenGL import osmesa ctx = osmesa.OSMesaCreateContext(osmesa.OSMESA_RGBA, None) if ctx: print("OSMesa is working correctly")
2
0
79,146,569
2024-10-31
https://stackoverflow.com/questions/79146569/modify-numpy-array-of-arrays
I have a numpy array of numpy arrays and I need to add a leading zero to each of the inner arrays: a = [[1 2] [3 4] [5 6]] --> b = [[0 1 2] [0 3 4] [0 5 6]] Looping through like this: for item in a: item = np.insert(item, 0, 0) doesn't help. Numpy.put() flattens the array, which I don't want. Any suggestions how I accomplish this? Thanks
np.insert - insert values along the given axis before the given indices. If axis is None then array is flattened first. import numpy as np a = np.array([[1, 2], [3, 4], [5, 6]]) b = np.insert(a, 0, 0, axis=1) print(b) Result: [[0 1 2] [0 3 4] [0 5 6]]
3
4
79,145,336
2024-10-31
https://stackoverflow.com/questions/79145336/stripna-pandas-dropna-but-behaves-like-strip
It is a pretty common occurrence to have leading and trailing NaN values in a table or DataFrame. This is particularly true after joins and in timeseries data. import numpy as np import pandas as pd df1 = pd.DataFrame({ 'a': [1, 2, 3, 4, 5, 6, 7], 'b': [np.NaN, 2, np.NaN, 4, 5, np.NaN, np.NaN], }) Out[0]: a b 0 1 NaN 1 2 2.0 2 3 NaN 3 4 4.0 4 5 5.0 5 6 NaN 6 7 NaN Let's remove these with dropna. df1.dropna() Out[1]: a b 1 2 2.0 3 4 4.0 4 5 5.0 Oh no!! We lost all the missing values that showed in column b. I want to keep the middle (inner) ones. How do I strip the rows with leading and trailing NaN values in a quick, clean and efficient way? The results should look as follows: df1.stripna() # obviously I'm not asking you to create a new pandas method... # I just thought it was a good name. Out[3]: a b 1 2 2.0 2 3 NaN 3 4 4.0 4 5 5.0 Some of the answers so far are pretty nice but I think this is important enough functionality that I raised a feature request with Pandas here if anyone is interested. Let's see how it goes!
Another way, that might be a little more readable, is using pd.Series.first_valid_index and pd.Series.last_valid_index with index slicing via loc:
df1.loc[df1['b'].first_valid_index():df1['b'].last_valid_index()]
Output:
   a    b
1  2  2.0
2  3  NaN
3  4  4.0
4  5  5.0
And, this should be really fast, using @LittleBobbyTables' input dataframe:
%timeit df1.loc[df1['b'].ffill().notna()&df1['b'].bfill().notna()]
24.2 ms ± 610 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
versus:
%timeit df1.loc[df1['b'].first_valid_index():df1['b'].last_valid_index()]
1.43 ms ± 34.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
3
4
79,145,689
2024-10-31
https://stackoverflow.com/questions/79145689/check-if-value-is-in-enum-fails
I am super confused how neither of these work. Can someone help me understand what's going on and why it prints "BAD" and "Value does not exist"? from enum import Enum class EventType(Enum): USER_LOGIN = 1, USER_LOGOUT = 2, @classmethod def has_value(cls, value): return value in cls._value2member_map_ eventType = 2 if not EventType.has_value(eventType): print("BAD") else: print("GOOD") if eventType in EventType.__members__.values(): print("Value exists") else: print("Value does not exist")
As @msanford said in the comments, remove the trailing commas from your values -- they are creating tuples.
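For reference, a minimal sketch of the corrected enum (same members as in the question, just without the trailing commas):
from enum import Enum

class EventType(Enum):
    USER_LOGIN = 1    # no trailing comma -> the value is the int 1, not the tuple (1,)
    USER_LOGOUT = 2

    @classmethod
    def has_value(cls, value):
        return value in cls._value2member_map_

print(EventType.has_value(2))             # True -> "GOOD"
print(2 in [m.value for m in EventType])  # True -> "Value exists"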
2
4
79,142,218
2024-10-30
https://stackoverflow.com/questions/79142218/how-to-calculate-transfer-function-with-iout-iin-instead-for-parallel-rlc-tank
I am trying to calculate the RLC tank transfer function using LCAPY. I know what the answer should be but I want to do it with code. The issue is there isn't a current() or voltage() function in LCAPY and only impedance() and transfer(). Here is the code so far: import lcapy from lcapy import s, expr from IPython.display import display, Math from sympy import symbols, collect, sqrt import sympy as sp # Define the circuit netlist with components netlist = """ Iin 1 0 Cp 1 0 C Rp 1 0 R Lp 1 0 L """ # Create a Circuit object from the netlist circuit = lcapy.Circuit(netlist) s_model = circuit.s_model() # Calculate the transfer function H = I_out / I_in # H(s) = transfer from node 1 (current source) to ground (node 0) # Calculate the output current I_out I_out = s_model.current(1, 0) # Current through resistor Rp at node 1 # Calculate the input current I_in from the current source I_in = s_model.Iin(1) # Current from the current source at node 1 # H(s) = I_out / I_in H_final = I_out / I_in # Display the transfer function H display(Math(sp.latex(H_final))) I was hoping to get the following:
You don't need a voltage here, since both currents will be proportional to it. You just need the resistance (effectively the impedance of the resistor) and the total impedance (resistor+capacitor+inductor) between the two lines. If you want your frequency response then you will need to specify this explicitly. Since they end at the same nodes and I(in)=I(total), writing Z for combined impedance of resistor, capacitor, inductor: V = I(total).Z = I(resistor).R Hence, H = abs( I(resistor) / I(total) ) = abs( Z / R ) Run the following in, e.g., a jupyter notebook: import lcapy from lcapy import s, j, omega from IPython.display import display, Math, Latex import sympy as sp # Define the circuit netlist with components netlist = """ Iin 1 0 Cp 1 0 C Rp 1 0 R Lp 1 0 L """ circuit = lcapy.Circuit(netlist) S = circuit.s_model() # Calculate the transfer function # H = S.resistance( 1, 0 ) / S.impedance( 1, 0 ) # Note that resistance() actually returns abs(Z*Z)/R H = S.impedance( 1, 0 ) / circuit.Rp.R # H = I(resistor) / I(total) = Z(total) / Z(resistor) A = H( j * omega ).magnitude display(Math(sp.latex( A ))) Output: Note: in the (commented-out) version of H, resistance() does not return R; it appears to actually return what I would call abs(Z2)/R, where Z is the complex impedance (which is returned correctly); hence the transfer function looks "the wrong way up".
3
4
79,141,599
2024-10-30
https://stackoverflow.com/questions/79141599/with-pandas-how-do-i-use-np-where-with-nullable-datetime-colums
np.where is great for pandas since it's a vectorized way to change the column based on a condition. But while it seems to work great with np native types, it doesn't play nice with dates. This works great: >>> df1 = pd.DataFrame([["a", 1], ["b", np.nan]], columns=["name", "num"]) >>> df1 name num 0 a 1.0 1 b NaN >>> np.where(df1["num"] < 2, df1["num"], np.nan) array([ 1., nan]) But this doesn't: >>> df2 = pd.DataFrame([["a", datetime.datetime(2024,1,2)], ["b", np.nan]], columns=["name", "date"]) >>> df2 name date 0 a 2024-01-02 1 b NaT >>> np.where(df2["date"] < datetime.datetime(2024,3,1), df2["date"], np.nan) Traceback (most recent call last): File "<python-input-10>", line 1, in <module> np.where(df2["date"] < datetime.datetime(2024,3,1), df2["date"], np.nan) ~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ numpy.exceptions.DTypePromotionError: The DType <class 'numpy.dtypes.DateTime64DType'> could not be promoted by <class 'numpy.dtypes._PyFloatDType'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is `object`. The full list of DTypes is: (<class 'numpy.dtypes.DateTime64DType'>, <class 'numpy.dtypes._PyFloatDType'>) >>> What is the proper vectorized way to do the latter operation?
The best answer is to use Series.where:
df2['out'] = df2['date'].where(df2["date"] < datetime.datetime(2024,3,1))
As a second best answer, you can use NaT. numpy.where returns an array with a single dtype, so you should not use NaN as the empty value but NaT:
np.where(df2["date"] < datetime.datetime(2024,3,1), df2["date"], pd.NaT)
Output:
array([1704153600000000000, NaT], dtype=object)
Note that the output values are integers (datetimes are stored as integers). If you want to assign a column:
df2['out'] = pd.to_datetime(np.where(df2["date"] < datetime.datetime(2024,3,1), df2["date"], pd.NaT))
Output:
  name       date        out
0    a 2024-01-02 2024-01-02
1    b        NaT        NaT
2
3
79,142,186
2024-10-30
https://stackoverflow.com/questions/79142186/how-do-i-flatten-the-elements-of-a-column-of-type-list-of-lists-so-that-it-is-a
Consider the following example: import polars as pl pl.DataFrame(pl.Series("x", ["1, 0", "2,3", "5 4"])).with_columns( pl.col("x").str.split(",").list.eval(pl.element().str.split(" ")) ) shape: (3, 1) ┌────────────────────┐ │ x │ │ --- │ │ list[list[str]] │ ╞════════════════════╡ │ [["1"], ["", "0"]] │ │ [["2"], ["3"]] │ │ [["5", "4"]] │ └────────────────────┘ I want to flatten the elements of the column, so instead of being a nested list, the elements are just a list. How do I do that?
You can use Expr.explode(), Expr.list.explode(), or Expr.flatten() to return one row for each list element, and using it inside of Expr.list.eval() lets you expand each row's nested lists instead of exploding the series itself. import polars as pl df = pl.DataFrame(pl.Series("x", ["1, 0", "2,3", "5 4"])) print(df.with_columns( pl.col("x") .str.split(",") .list.eval(pl.element().str.split(" ")) .list.eval(pl.element().flatten()) )) # You can also combine it with your existing .list.eval() print(df.with_columns( pl.col("x") .str.split(",") .list.eval(pl.element().str.split(" ").flatten()) ))
2
3
79,144,429
2024-10-31
https://stackoverflow.com/questions/79144429/seaborn-can-i-add-a-second-hue-or-similar-within-a-stripplot-with-dodge-tru
Let's say I have a plot that looks like so:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = sns.load_dataset('iris')
dfm = pd.melt(df, id_vars=["species"])
dfm = dfm.query('variable in ["sepal_length", "sepal_width"]')
sns.stripplot(data=dfm, x="species", y="value", hue="variable", dodge=True)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
Let's also say there's another column with important info in my data, such as "potency".
dfm['potency'] = np.random.randint(1, 6, dfm.shape[0])
I would like to highlight the potency corresponding to each point in my plot with darkening colors (high potency -> darker). Is this possible? I know that hue='potency' would do just this, but then I could not use dodge to separate the data into sepal_width and sepal_length chunks.
You could overlay multiple stripplots with different alphas: start, end = dfm['potency'].agg(['min', 'max']) for k, v in dfm.groupby('potency'): sns.stripplot(data=v.assign(variable=v['variable']+f' / potency={k}'), x="species", y="value", hue="variable", dodge=True, alpha=(k-start+1)/(end-start+1) ) plt.legend(bbox_to_anchor=(1.05, 1), loc=2) Output: A variant with multiple colors in a paired palette: pal = sns.color_palette("Paired") for i, (k, v) in enumerate(dfm.groupby('potency')): sns.stripplot(data=v.assign(variable=v['variable']+f' / potency={k}'), x="species", y="value", hue="variable", dodge=True, palette=pal[2*i:2*(i+1)] ) plt.legend(bbox_to_anchor=(1.05, 1), loc=2) Output:
2
1
79,139,603
2024-10-30
https://stackoverflow.com/questions/79139603/how-can-i-keep-track-of-which-function-returned-which-value-with-concurrent-fut
My code is as follows:
with concurrent.futures.ThreadPoolExecutor() as executor:
    length = len(self.seq)
    futures = [None] * length
    results = []
    for i in range(length):
        f, args, kwargs = self.seq[i]
        future = executor.submit(f, *args, **kwargs)
        futures[i] = future
    for f in concurrent.futures.as_completed(futures):
        results.append(f.result())
I'd like the return values in results to be ordered by the order of the functions they were returned from in self.seq, but my code orders them in the order of whichever returns first. How can I do this? For clarification, .as_completed() is not what I'm looking for. I want to maintain the order of the original function list.
The simplest answer is to pass a counter argument along with the rest of your args and kwargs, where you set counter to be i. You then need to modify f to return the counter along with whatever other values it's supposed to return. Your results can then be sorted by the counter. Alternatively, create a results array of the right size, and as each result comes back, the counter value tells you where to put it in the array (see the sketch below). If you do not have access to f, then just write a wrapper around it:
def f_wrapper(*args, counter, **kwargs):  # counter is keyword-only, since it follows *args
    result = f(*args, **kwargs)
    return counter, result
Likewise, you'll write
future = executor.submit(f_wrapper, *args, **kwargs | dict(counter=i))
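For illustration, a minimal self-contained sketch of the second approach (a results array indexed by the counter); the task function and its inputs here are made up for the example:
import concurrent.futures

def task(x, *, counter):            # hypothetical task already wrapped as described above
    return counter, x * x

inputs = [3, 1, 4, 1, 5]
results = [None] * len(inputs)

with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = [executor.submit(task, x, counter=i) for i, x in enumerate(inputs)]
    for fut in concurrent.futures.as_completed(futures):
        counter, value = fut.result()
        results[counter] = value    # slot the result back into submission order

print(results)  # [9, 1, 16, 1, 25] -- same order as inputs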
2
0
79,141,781
2024-10-30
https://stackoverflow.com/questions/79141781/get-maximum-previous-nonmissing-value-within-group-in-pandas-dataframe
I have a pandas dataframe with a group structure where the value of interest, val, is guaranteed to be sorted within the group. However, there are missing values in val which I need to bound. The data I have looks like this: group_id id_within_group val 1 1 3.2 1 2 4.8 1 3 5.2 1 4 NaN 1 5 7.5 2 1 1.8 2 2 2.8 2 3 NaN 2 4 5.4 2 5 6.2 I now want to create a lower bound, max_prev which is the maximum value within the group for the rows before the current row, whereas min_next is the minimum value within the group for the rows after the current row. It is not possible to just look one row back and ahead, because there could be clusters of NaN. I don't need to take care of the edge cases of the first and last row within group. The desired output would hence be group_id id_within_group val max_prev min_next 1 1 3.2 NaN 4.8 1 2 4.8 3.2 5.2 1 3 5.2 4.8 7.5 1 4 NaN 5.2 7.5 1 5 7.5 5.2 NaN 2 1 1.8 NaN 2.8 2 2 2.8 1.8 5.4 2 3 NaN 2.8 5.4 2 4 5.4 2.8 6.2 2 5 6.2 5.4 NaN How can I achieve this in a reasonable fast way?
You could use a custom groupby.transform with ffill/bfill+shift: g = df.groupby('group_id')['val'] df['max_prev'] = g.transform(lambda x: x.ffill().shift()) df['min_next'] = g.transform(lambda x: x[::-1].ffill().shift()) # or df['min_next'] = g.transform(lambda x: x.bfill().shift(-1)) If your values are not sorted, add a cummax/cummin: g = df.groupby('group_id')['val'] df['max_prev'] = g.transform(lambda x: x.ffill().cummax().shift()) df['min_next'] = g.transform(lambda x: x[::-1].ffill().cummin().shift()) Output: group_id id_within_group val max_prev min_next 0 1 1 3.2 NaN 4.8 1 1 2 4.8 3.2 5.2 2 1 3 5.2 4.8 7.5 3 1 4 NaN 5.2 7.5 4 1 5 7.5 5.2 NaN 5 2 1 1.8 NaN 2.8 6 2 2 2.8 1.8 5.4 7 2 3 NaN 2.8 5.4 8 2 4 5.4 2.8 6.2 9 2 5 6.2 5.4 NaN
1
4
79,141,475
2024-10-30
https://stackoverflow.com/questions/79141475/how-to-project-a-3d-point-from-a-3d-world-with-a-specific-camera-position-into-a
I have a hard time understanding how a 3D point from a scene can be translated into a 2D image. I have created a scene in Blender where my camera is positioned at P_cam(0|0|0) and is looking between the x and y axis (rotation of x=90° and z=-45°). I spawned a test cube at pos_c (5|5|0). The img I see looks like this:
I now want to create the same img with native python. Therefore I use opencv to map 3D points to 2D points. I created my camera matrix (with (0|0) referring to the middle of the img)
cx = width/2 #principal point is our image center
cy = height/2

camera_matrix = np.array([[fx, 0, cx],
                          [0, fy, cy],
                          [0, 0, 1]], np.float32)
I then use OpenCV's projectPoints method to transform my cube's world coordinates into 2d img points. I apply no transformation in form of the tvec. But I know that I need the rotation vector and that's the tricky part for me. I tried to use the vector with the same values as I did in blender, but it did not work.
cam_rot_in_deg = (90, 0, -45)
cam_rot_in_rad = np.radians(cam_rot_in_deg)
rvec = np.array([[cam_rot_in_rad]], np.float32).reshape((3,1))
The image I render looks like this:
Within the image I also visualized the coordinate axes, so I can see that the cube is drawn at the correct position within the world, but my camera perspective is completely off. What am I missing? Thank you so much.
I tried numerous rotation angles, but I did not come across the desired solution.
Your issue is here: cam_rot_in_deg = (90, 0, -45) cam_rot_in_rad = np.radians(cam_rot_in_deg) rvec = np.array([[cam_rot_in_rad]], np.float32).reshape((3,1)) OpenCV's "rvec" is an axis-angle representation, not Euler angles. What you did there is Euler angles. That's no good. Instead of rvecs (and tvecs), you can just deal with matrices. I'd recommend 4x4 for 3D affine transforms. They're trivial to compose using matrix multiplication, which is @ for numpy arrays. That way you can compose any individual primitive rotations around any axes you like. OpenCV has the cv.Rodrigues() function that converts back and forth between axis-angle vector and (3x3) rotation matrix. For convenience, you should define utility functions that build matrices for translation (translation along each axis), scaling (factor, or factor per dimension), rotation (from axis and angle). Another convenience function you'd probably need will take a 4x4 transform matrix apart into its rotation and translation parts. Since slicing the rotation part is trivial already, that function should also call Rodrigues because then the function can turn a transform matrix into rvec and tvec, which is commonly used within OpenCV. A function to do the forward thing (rtvec to 4x4) just does the opposite. Euler angles are generally unwieldy (gimbal lock), so they aren't used as a medium for anything, merely for initialization. They're confusing too, because everyone has their own conventions for the order of the rotations, and even which axes there are.
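For illustration, a minimal sketch of such helper functions (the names are my own, not from any library); the final composition/inversion step still depends on Blender's conventions (e.g. Blender cameras look down -Z while OpenCV cameras look down +Z), so treat it as a starting point rather than a drop-in fix:
import cv2 as cv
import numpy as np

def rotation(axis, angle_deg):
    """4x4 transform rotating by angle_deg around the given unit axis (axis-angle via Rodrigues)."""
    T = np.eye(4)
    rvec = (np.asarray(axis, dtype=np.float64) * np.radians(angle_deg)).reshape(3, 1)
    T[:3, :3], _ = cv.Rodrigues(rvec)
    return T

def translation(t):
    """4x4 transform translating by vector t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def to_rtvec(T):
    """Split a 4x4 transform into OpenCV-style rvec/tvec."""
    rvec, _ = cv.Rodrigues(np.ascontiguousarray(T[:3, :3]))
    return rvec, T[:3, 3].copy()

# Compose primitive rotations with matrix multiplication, e.g. 90 deg about X, then -45 deg about Z:
cam_to_world = rotation((0, 0, 1), -45) @ rotation((1, 0, 0), 90)
# projectPoints wants the world->camera transform, i.e. the inverse of the camera pose:
rvec, tvec = to_rtvec(np.linalg.inv(cam_to_world))
print(rvec.ravel(), tvec)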
2
1
79,141,089
2024-10-30
https://stackoverflow.com/questions/79141089/saving-numpy-array-after-indexing-is-much-slower
I am running into an issue that saving a numpy array after indexing results in much slower saving. A minimal reproducible example can be seen below: import time import numpy as np def mre(save_path): array = np.zeros((245, 233, 6)) start = time.time() for i in range(1000): with open(save_path + '/array1_' + str(i), "wb") as file: np.save(file, array) end = time.time() print(f"No indexing: {end - start}s") array2 = array[:,:,[0,1,2,3,4,5]] start = time.time() for i in range(1000): with open(save_path + '/array2_' + str(i), "wb") as file: np.save(file, array2) end = time.time() print(f"With indexing: {end - start}s") print("Arrays are equal: ", np.array_equal(array, array2)) Which results in: No indexing: 2.9975574016571045s With indexing: 10.408239126205444s Arrays are equal: True So according to numpy the arrays are equal, but still the resulting saving times are significantly slower. Does anyone have an idea as to why this is?
Have you tried the numpy.ascontiguousarray() function? It returns a C-contiguous copy of the array, which helps when an array has a non-contiguous memory layout, since the data then gets stored in contiguous memory locations before saving (a quick check of the layout is shown below).
Example
array2 = np.ascontiguousarray(array[:,:,[0,1,2,3,4,5]])
Output
No indexing: 6.80817985534668s
With indexing (contiguous copy): 6.550203800201416s
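As a quick way to confirm whether this is what is happening on your machine, you can print the contiguity flags of the two arrays (a minimal sketch using the shapes from the question):
import numpy as np

array = np.zeros((245, 233, 6))
array2 = array[:, :, [0, 1, 2, 3, 4, 5]]   # mixed basic/advanced indexing

print(array.flags['C_CONTIGUOUS'])                         # True
print(array2.flags['C_CONTIGUOUS'])                        # typically False where the slowdown reproduces
print(np.ascontiguousarray(array2).flags['C_CONTIGUOUS'])  # True again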
2
1
79,140,661
2024-10-30
https://stackoverflow.com/questions/79140661/how-to-sum-values-based-on-a-second-index-array-in-a-vectorized-manner
Let' say I have a value array values = np.array([0.0, 1.0, 2.0, 3.0, 4.0]) and an index array indices = np.array([0,1,0,2,2]) Is there a vectorized way to sum the values for each unique index in indices? I mean a vectorized version to get sums in this snippet: sums = np.zeros(np.max(indices)+1) for index, value in zip(indices, values): sums[index] += value Bonus points, if the solution allows values (and in consequence sums)to be multi-dimensional. EDIT: I benchmarked the posted solutions: import numpy as np import time import pandas as pd values = np.arange(1_000_000, dtype=float) rng = np.random.default_rng(0) indices = rng.integers(0, 1000, size=1_000_000) N = 100 now = time.time_ns() for _ in range(N): sums = np.bincount(indices, weights=values, minlength=1000) print(f"np.bincount: {(time.time_ns() - now) * 1e-6 / N:.3f} ms") now = time.time_ns() for _ in range(N): sums = np.zeros(1 + np.amax(indices), dtype=values.dtype) np.add.at(sums, indices, values) print(f"np.add.at: {(time.time_ns() - now) * 1e-6 / N:.3f} ms") now = time.time_ns() for _ in range(N): pd.Series(values).groupby(indices).sum().values print(f"pd.groupby: {(time.time_ns() - now) * 1e-6 / N:.3f} ms") now = time.time_ns() for _ in range(N): sums = np.zeros(np.max(indices)+1) for index, value in zip(indices, values): sums[index] += value print(f"Loop: {(time.time_ns() - now) * 1e-6 / N:.3f} ms") Results: np.bincount: 1.129 ms np.add.at: 0.763 ms pd.groupby: 5.215 ms Loop: 196.633 ms
Another possible solution, which:
First creates an array of zeros b whose length is one more than the maximum value in the indices array.
It then uses the np.add.at function to accumulate the values from the values array into the corresponding positions in b, as specified by the indices array.
b = np.zeros(1 + np.amax(indices), dtype=values.dtype)
np.add.at(b, indices, values)
Output:
array([2., 1., 7.])
3
3
79,139,406
2024-10-30
https://stackoverflow.com/questions/79139406/executing-an-scheduled-task-if-the-script-is-run-too-late-and-after-the-schedule
Imagine I want to run this function:
def main():
    pass
at these scheduled times (not every random 3 hours):
import schedule

schedule.every().day.at("00:00").do(main)
schedule.every().day.at("03:00").do(main)
schedule.every().day.at("06:00").do(main)
schedule.every().day.at("09:00").do(main)
schedule.every().day.at("12:00").do(main)
schedule.every().day.at("15:00").do(main)
schedule.every().day.at("18:00").do(main)
schedule.every().day.at("21:00").do(main)

while True:
    schedule.run_pending()
    time.sleep(1)
How can I make this run main() immediately when the script is run at some time between the scheduled start times, say at 19:42? This works fine, but if something happens, like the power goes out and I'm not there to restart the script and I miss a scheduled run, I'd like it to execute the function as soon as the script is run again, even if it's not at one of the scheduled times. This is a Windows machine, if it matters.
I think you just need to add a main() call before the while loop, with a check of the current time:
import datetime

# other schedules ...
schedule.every().day.at("21:00").do(main)

now = datetime.datetime.now()
if 0 <= now.hour <= 20:
    main()  # <-- this will be called once, at script start, if the current time is between 00:00 and 21:00

while True:
    schedule.run_pending()
    time.sleep(1)
2
0
79,139,273
2024-10-29
https://stackoverflow.com/questions/79139273/django-warning-accessing-the-database-during-app-initialization-is-discourage
Recently, I’ve updated Django to the latest version, 5.1.2, and since then, every time I start the server, I get this warning: RuntimeWarning: Accessing the database during app initialization is discouraged. To fix this warning, avoid executing queries in AppConfig.ready() or when your app modules are imported. From what I’ve searched so far, my apps.py file is causing this due to the operation it’s executing on the database: from django.apps import AppConfig class TenantsConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'tenants' def ready(self): self.load_additional_databases() def load_additional_databases(self): from django.conf import settings from .models import DatabaseConfig for config in DatabaseConfig.objects.all(): if config.name not in settings.DATABASES: db_settings = settings.DATABASES['default'].copy() db_settings.update({ 'ENGINE': config.engine, 'NAME': config.database_name, 'USER': config.user, 'PASSWORD': config.password, 'HOST': config.host, 'PORT': config.port, }) settings.DATABASES[config.name] = db_settings My settings.py has two main databases (default and tenants) hard-coded and the other configurations should be updated with data from the model DatabaseConfig when I start the server. The problem is that I need this exact behavior, but the solution I’ve found so far, is to use connection_created which makes this run for every database query. This is the implementation using the signal: def db_config(**kwargs): from django.conf import settings from .models import DatabaseConfig for config in DatabaseConfig.objects.all(): if config.name not in settings.DATABASES: db_settings = settings.DATABASES['default'].copy() db_settings.update({ 'ENGINE': config.engine, 'NAME': config.database_name, 'USER': config.user, 'PASSWORD': config.password, 'HOST': config.host, 'PORT': config.port, }) settings.DATABASES[config.name] = db_settings class TenantsConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'tenants' def ready(self): connection_created.connect(db_config) Is there another way to achieve the previous behavior in the ready method? Or is it really that bad to keep using it with the warning? For what I understand from the docs, this should be avoided, but is not necessarily bad. The docs mention the save() and delete() methods, but here I’m just updating the settings.py file. Any help would be appreciated.
Because I'm only updating my settings and not making any changes at the database level, I concluded that, in my case, it's safe to keep using this. To silence the warning, I added the following at the beginning of settings.py: import warnings warnings.filterwarnings( 'ignore', message='Accessing the database during app initialization is discouraged', category=RuntimeWarning ) Just keep in mind that, depending on the use case, this could be problematic. For reference, this is the warning in the docs, linked in the question: Although you can access model classes as described above, avoid interacting with the database in your ready() implementation. This includes model methods that execute queries (save(), delete(), manager methods etc.), and also raw SQL queries via django.db.connection. Your ready() method will run during startup of every management command. For example, even though the test database configuration is separate from the production settings, manage.py test would still execute some queries against your production database!
3
0
79,126,171
2024-10-25
https://stackoverflow.com/questions/79126171/contextvar-set-and-reset-in-the-same-function-fails-created-in-a-different-con
I have this function: async_session = contextvars.ContextVar("async_session") async def get_async_session() -> AsyncGenerator[AsyncSession, None]: async with async_session_maker() as session: try: _token = async_session.set(session) yield session finally: async_session.reset(_token) This fails with: ValueError: <Token var=<ContextVar name='async_session' at 0x7e1470e00e00> at 0x7e14706d4680> was created in a different Context How can this happen? AFAICT the only way for the Context to be changed is for a whole function call. So how can the context change during a yield? This function is being used as a FastAPI Depends in case that makes a difference - but I can't see how it does. It's running under Python 3.8 and the version of FastAPI is equally ancient - 0.54.
For anyone who comes after me, the actual problem is that my generator function is not async (the example in the question is misleading). FastAPI runs async and sync dependencies in different contexts to avoid the sync dependencies holding up the async loop thread, which is why it is then cleaned up in the wrong context.
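A self-contained sketch of the mechanism described above (this is not FastAPI's actual code, just an illustration): for a sync generator dependency, the setup and teardown halves each run in their own copy of the context, so the reset token no longer matches.
import contextvars

var = contextvars.ContextVar("async_session")
state = {}

def setup():
    state["token"] = var.set("session")   # runs in context copy #1

def teardown():
    var.reset(state["token"])             # runs in context copy #2 -> ValueError

contextvars.copy_context().run(setup)
try:
    contextvars.copy_context().run(teardown)
except ValueError as e:
    print(e)   # "... was created in a different Context"

# With an async dependency, both halves run in the same request context,
# so set()/reset() pair up correctly.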
2
0
79,126,228
2024-10-25
https://stackoverflow.com/questions/79126228/pyinstaller-doesnt-launch-my-exe-no-module-pydantic-deprecated-decorator
I compiled my code with pyinstaller to make a .exe but when I launch it, it told me the module 'pydantic.deprecated.decorator' wasn't found. that seems logic because I have nothing with that name. So I don't know what to do to solve this issue I already tried to reinstal pydantic Traceback (most recent call last): File "pydantic\_internal\_validators.py", line 99, in _import_string_logic File "importlib\__init__.py", line 126, in import_module File "<frozen importlib._bootstrap>", line 1204, in _gcd_import File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked ModuleNotFoundError: No module named 'pydantic.deprecated.decorator' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "pydantic\_internal\_validators.py", line 62, in import_string File "pydantic\_internal\_validators.py", line 108, in _import_string_logic ImportError: No module named 'pydantic.deprecated.decorator' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "UX.py", line 9, in <module> File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 384, in exec_module File "RAG_modif_pour_UX.py", line 18, in <module> File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 384, in exec_module File "langchain_ollama\__init__.py", line 3, in <module> File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 384, in exec_module File "langchain_ollama\chat_models.py", line 39, in <module> File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 384, in exec_module File "langchain_core\tools\__init__.py", line 22, in <module> File "<frozen importlib._bootstrap>", line 1176, in _find_and_load File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 384, in exec_module File "langchain_core\tools\base.py", line 27, in <module> File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist File "pydantic\__init__.py", line 402, in __getattr__ File "pydantic\_migration.py", line 287, in wrapper File "pydantic\_internal\_validators.py", line 64, in import_string pydantic_core._pydantic_core.PydanticCustomError: Invalid python path: No module named 'pydantic.deprecated.decorator'
I have faced a similar error message. What fixed it for me was adding hidden imports. With the following hidden imports, that problem went away:
pyinstaller --hidden-import=pydantic --hidden-import=pydantic-core --hidden-import=pydantic.deprecated.decorator app.py
The langchain modules use pydantic under the hood, and without the hidden imports PyInstaller will not be able to find the modules. At least that is my understanding at this point.
2
3
79,135,993
2024-10-29
https://stackoverflow.com/questions/79135993/convert-easting-northing-coordinates-to-latitude-and-longitude-in-scala-spark-wi
I’m working on a project in Scala and Apache Spark where I need to convert coordinates from Easting/Northing (EPSG:27700) to Latitude/Longitude (EPSG:4326). I have a Python script that uses in built libraries pyproj (transformer) to handle this, but I haven’t found an equivalent way to do it in Scala/Spark. Here’s the Python code I’m currently using: from pyproj import Transformer import pandas as pd data = { 'node_id': ['94489', '94555', '94806', '99118', '76202'], 'easting': [276164, 428790, 357501, 439545, 357353], 'northing': [84185, 92790, 173246, 336877, 170708] } df = pd.DataFrame(data) transformer = Transformer.from_crs("epsg:27700", "epsg:4326") lat, lon = transformer.transform(df['easting'].values, df['northing'].values) df['longitude'] = lon df['latitude'] = lat print(df) The output DataFrame should looks like this: node_id easting northing longitude latitude 94489 276164 84185 -3.752811 50.644154 94555 428790 92790 -1.593413 50.734016 94806 357501 173246 -2.613059 51.456587 99118 439545 336877 -1.413188 52.927852 76202 357353 170708 -2.614883 51.433757
@MartinHH Thanks for providing the reference. Looks like it is possible to create same code in scala/spark package com.test.job.function_testing import org.apache.spark.sql.SparkSession import geotrellis.proj4.CRS import geotrellis.proj4.Transform object TestCode { def main(args: Array[String]) = { val runLocally = true val jobName = "Test Spark Logging Case" implicit val spark: SparkSession = Some(SparkSession.builder.appName(jobName)) .map(sparkSessionBuilder => if (runLocally) sparkSessionBuilder.master("local[2]") else sparkSessionBuilder ) .map(_.getOrCreate()) .get import spark.implicits._ val columns = Seq("node_id", "easting", "northing") val data = Seq( (94489, 276164, 84185), (94555, 428790, 92790), (94806, 357501, 173246), (99118, 439545, 336877), (76202, 357353, 170708)) val df = data.toDF(columns: _*) val eastingNorthing = CRS.fromEpsgCode(27700) val latLong = CRS.fromEpsgCode(4326) val transform = Transform(eastingNorthing, latLong) import org.apache.spark.sql.functions._ def transformlatlong = udf((easting: Int, northing: Int) => { val (long, lat) = transform(easting, northing) (long,lat) } ) val newdf = df.withColumn("latlong",transformlatlong(df("easting"),df("northing"))) newdf.select(col("node_id"),col("easting"),col("northing"),col("latlong._1").as("longitude"),col("latlong._2").as("latitude")).show() } } And here is the output dataframe +-------+-------+--------+-------------------+------------------+ |node_id|easting|northing| longitude| latitude| +-------+-------+--------+-------------------+------------------+ | 94489| 276164| 84185| -3.752810925839862|50.644154475886154| | 94555| 428790| 92790|-1.5934125598396651| 50.73401609723385| | 94806| 357501| 173246|-2.6130593045676984| 51.45658738605824| | 99118| 439545| 336877| -1.413187622652739| 52.92785156624134| | 76202| 357353| 170708| -2.614882589162872| 51.43375699275326| +-------+-------+--------+-------------------+------------------+ And added below library to the build.sbt libraryDependencies += "org.locationtech.geotrellis" %% "geotrellis-raster" % "3.5.2"
2
1
79,136,294
2024-10-29
https://stackoverflow.com/questions/79136294/how-to-run-tests-using-each-package-pyproject-toml-configuration-instead-of-the
I'm using uv workspace with a structure that looks like the following example from the doc. albatross ├── packages │ ├── bird-feeder │ │ ├── pyproject.toml │ │ └── src │ │ └── bird_feeder │ │ ├── __init__.py │ │ └── foo.py │ └── seeds │ ├── pyproject.toml │ └── src │ └── seeds │ ├── __init__.py │ └── bar.py ├── pyproject.toml ├── README.md ├── uv.lock └── src └── albatross └── main.py How do I tell pytest to use the configuration in each package pyproject.toml while running pytest from the root package(albatross in this case) directory?
You can tell pytest the directory: uv run pytest packages/bird-feeder uv run --package bird-feeder doesn't work: uv run --package bird-feeder pytest doesn't pass pyproject.toml info to pytest. pytest merely infers dir=pathlib.Path.cwd(). uv run --directory packages/bird-feeder works: uv run --directory packages/bird-feeder pytest Related: Using uv run as a task runner #5903 may be supported in future.
2
1
79,138,896
2024-10-29
https://stackoverflow.com/questions/79138896/how-to-conditionally-update-a-column-in-a-polars-dataframe-with-values-from-a-li
I am trying to update specific rows in a python-polars DataFrame where two columns ("Season" and "Wk") meet certain conditions, using values from a list or Series that should align with the filtered rows. In pandas, I would use .loc[] to do this, but I haven't found a way to achieve the same result with Polars. import polars as pl # Sample DataFrame df = pl.DataFrame({ "Season": [2024, 2024, 2024, 2024], "Wk": [28, 28, 29, 29], "position": [1, 2, 3, 4] }) # List of new values for the filtered rows new_positions = [10, 20] # Example values aligned with the filtered rows # Condition to update only where Season is 2024 and Wk is 29 df = df.with_columns( # Expected update logic here ) I've tried approaches with when().then() and zip_with(), but they either don't accept lists/Series directly or don't align the new values correctly with the filtered rows. Is there a recommended way to update a column conditionally in python-polars with values from a list or Series in the same order as the filter?
You could generate the indices with when/then and .cum_sum() df.with_columns( idx = pl.when(Season=2024, Wk=29).then(1).cum_sum() - 1 ) shape: (4, 4) ┌────────┬─────┬──────────┬──────┐ │ Season ┆ Wk ┆ position ┆ idx │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i32 │ ╞════════╪═════╪══════════╪══════╡ │ 2024 ┆ 28 ┆ 1 ┆ null │ │ 2024 ┆ 28 ┆ 2 ┆ null │ │ 2024 ┆ 29 ┆ 3 ┆ 0 │ │ 2024 ┆ 29 ┆ 4 ┆ 1 │ └────────┴─────┴──────────┴──────┘ Which you can .get() and then .fill_null() the remaining original values. df.with_columns( pl.lit(pl.Series(new_positions)) .get(pl.when(Season=2024, Wk=29).then(1).cum_sum() - 1) .fill_null(pl.col.position) .alias("position") ) shape: (4, 3) ┌────────┬─────┬──────────┐ │ Season ┆ Wk ┆ position │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞════════╪═════╪══════════╡ │ 2024 ┆ 28 ┆ 1 │ │ 2024 ┆ 28 ┆ 2 │ │ 2024 ┆ 29 ┆ 10 │ │ 2024 ┆ 29 ┆ 20 │ └────────┴─────┴──────────┘ Or perhaps .replace_strict() df.with_columns( (pl.when(Season=2024, Wk=29).then(1).cum_sum() - 1) .replace_strict(dict(enumerate(new_positions)), default=pl.col.position) .alias("position") )
2
1
79,138,384
2024-10-29
https://stackoverflow.com/questions/79138384/use-np-where-to-create-a-list-with-same-number-of-elements-but-different-conten
I have a pandas dataframe where a value sometimes gets NA. I want to fill this column with a list of strings with the same length as another column:
import pandas as pd
import numpy as np

df = pd.DataFrame({"a": ["one", "two"],
                   "b": ["three", "four"],
                   "c": [[1, 2], [3, 4]],
                   "d": [[5, 6], np.nan]})
     a      b       c       d
   one  three  [1, 2]  [5, 6]
   two   four  [3, 4]     NaN
and I want this to become
     a      b       c                     d
   one  three  [1, 2]                [5, 6]
   two   four  [3, 4]  [no_value, no_value]
I tried
df["d"] = np.where(df.d.isna(), ['no_value' for element in df.c], df.d)
and
df["d"] = np.where(df.d.isna(), ['no_value'] * len(df.c), df.d)
but neither works. Does anyone have an idea?
SOLUTION: I adjusted PaulS' answer a little to:
df['d'] = np.where(df.d.isna(), pd.Series([['no_value'] * len(lst) for lst in df.c]), df.d)
A possible solution consists in using np.where. df.assign(d = np.where( df['d'].isna(), pd.Series([['no_value'] * len(lst) for lst in df['c']]), df['d'])) Another possible solution, which uses: apply on df, iterating through each row with a lambda function that checks whether the value in column d is NaN. If the condition is met, the function generates a new list filled with the string 'no_value' repeated to match the length of the list in column c. If x['d'] does not meet these conditions, the original value in x['d'] is retained. df['d'] = df.apply( lambda x: ['no_value'] * len(x['c']) if (x['d'] is np.nan) else x['d'], axis=1) Output: a b c d 0 one three [1, 2] [5, 6] 1 two four [3, 4] [no_value, no_value]
3
1
79,137,381
2024-10-29
https://stackoverflow.com/questions/79137381/how-to-split-an-audio-file-to-have-chunks-that-are-less-than-a-certain-dimension
I need to divide an audio file into chunks that are less than 25 MB each. I would like to not have to save the file on the disk. This is the code I have for now, but it is not working as expected, as it splits the audio into chunks of around 2 MB:
from pydub import AudioSegment

def audio_splitter(audio_file):
    audio = AudioSegment.from_file(audio_file)

    # Set the chunk size and overlap
    target_chunk_size = 20 * 1024 * 1024  # Target chunk size in bytes (20 MB)
    # Overlap in milliseconds (10 seconds)
    overlap_duration = 10 * 1000

    # Estimate the number of bytes per millisecond in the audio
    bytes_per_ms = len(audio.raw_data) / len(audio)

    # Calculate duration of each chunk in milliseconds
    chunk_duration = int(target_chunk_size / bytes_per_ms)

    chunks = []
    start = 0
    while start < len(audio):
        end = start + chunk_duration
        chunk = audio[start:end]
        chunks.append(chunk)
        start += chunk_duration - overlap_duration

    for i, chunk in enumerate(chunks):
        chunk.export(f"chunk_{i + 1}.mp3", format="mp3")
I think there is a problem with len(audio.raw_data), as it seems not to return the correct byte size. Is there a better method altogether to approach this problem?
You can use an io.BytesIO-based estimation. In my test case I used a 1/2 MB limit and the resulting files were 512 kB, 512 kB, 426 kB and 71.1 kB.
from pydub import AudioSegment
import io
import sys
import os

def audio_splitter(audio_file):
    audio = AudioSegment.from_file(audio_file)

    test = audio[0:len(audio)]
    test_io = io.BytesIO()
    test.export(test_io, format="mp3")
    test_size = sys.getsizeof(test_io)

    # Set the chunk size and overlap
    target_chunk_size = 20 * 1024 * 1024  # Target chunk size in bytes (20 MB)
    # Overlap in milliseconds (10 seconds)
    overlap_duration = 10 * 1000

    # Estimate the number of bytes per millisecond in the audio
    bytes_per_ms = test_size/len(audio)  # Estimation

    # Calculate duration of each chunk in milliseconds
    chunk_duration = int(target_chunk_size / bytes_per_ms)

    chunks = []
    start = 0
    while start < len(audio):
        end = start + chunk_duration
        chunk = audio[start:end]
        chunks.append(chunk)
        start += chunk_duration - overlap_duration

    for i, chunk in enumerate(chunks):
        chunk.export(f"chunk_{i + 1}.mp3", format="mp3")

if __name__ == "__main__":
    audio_splitter(*sys.argv[1:])
In bytes_per_ms I am taking an average over the whole file; you could even recompute this for every chunk, but it will cost you memory, so it depends on your accuracy-vs-speed criteria. Mine is almost accurate for real-time audio (roughly the same byte distribution everywhere on average) and fast.
2
0
79,125,305
2024-10-25
https://stackoverflow.com/questions/79125305/how-to-measure-a-server-process-maximum-memory-usage-from-launch-to-sigint-using
I have a server program (let's call it my-server) I cannot alter and I am interested in measuring its maximum memory usage from when it starts to when a certain external event occurs. I am able to do it manually on my shell by: running /usr/bin/time -f "%M" my-server triggering the external event sending an INT signal to the process by pressing CTRL + C reading the standard error I'd like to do this programmatically using Python. If not possible I'd like to find an alternative simple way to achieve the same result. I tried launching the same command as a subprocess and sending an INT signal to it: import signal import subprocess proc = subprocess.Popen( ["/usr/bin/time", "-f", "%M", "my-server"], stderr=subprocess.PIPE, stdout=subprocess.PIPE, text=True ) # ...wait for external event proc.send_signal(signal.SIGINT) out, err = proc.communicate() # both out and err are "" however both out and err are empty. The same script works fine for processes that terminate: proc = subprocess.Popen( ["/usr/bin/time", "-f", "%M", "echo", "hello"], stderr=subprocess.PIPE, stdout=subprocess.PIPE, text=True ) out, err = proc.communicate() # out is "hello", err is "1920" but for processes that need to be terminated via signal I am not sure how to retrieve stderr after termination was issued. (or even before it, despite not useful) The following somewhat equivalent example may be useful for testing: terminate-me.py import signal import sys def signal_handler(sig, frame): print('Terminated', file=sys.stderr) # Need to read this sys.exit(0) signal.signal(signal.SIGINT, signal_handler) print('Send SIGINT to terminate') signal.pause() main.py import signal import subprocess proc = subprocess.Popen( ["python", "terminate-me.py"], stderr=subprocess.PIPE, stdout=subprocess.PIPE, text=True ) # No need to wait for any external event proc.send_signal(signal.SIGINT) out, err = proc.communicate() print(out) print(err)
It turns out I misunderstood /usr/bin/time's behavior. I thought that when receiving an INT signal, time would relay it to the sub-process being measured, wait for it to terminate, and then terminate normally, outputting its results. What happens instead is that time simply terminates before outputting any result. When executing within a shell, pressing CTRL + C actually sends an INT signal to both, but the sub-process terminates before time does, so time is able to complete its execution and it appears as if time awaited the sub-process. Therefore in my script I was able to make it work by only sending the INT signal to the sub-process (actually, to all of its sub-processes, for simplicity):
import signal
import subprocess

import psutil

proc = subprocess.Popen(
    ["/usr/bin/time", "-f", "%M", "my-server"],
    stderr=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True
)

# Wait for external event...

children = psutil.Process(proc.pid).children()
for child in children:
    child.send_signal(signal.SIGINT)

out, err = proc.communicate()
3
1
79,135,329
2024-10-28
https://stackoverflow.com/questions/79135329/why-is-pandas-itertuples-slower-than-iterrows-on-dataframes-with-many-100-col
In the unfortunate situation where looping over the rows of a Pandas dataframe is the only way to proceed, it's usually mentioned that itertuples() is to be preferred over iterrows() in terms of computational speed. This assertion appears valid for dataframes with few columns ('narrow dataframes'), but doesn't seem to be the case for wide dataframes with hundreds of columns. Is this normal ? Shouldn't itertuples behave like iterrows in terms of scaling with column number, i.e. constant time rather than linear time ? Attached below is the code snippet showing the cross-over from itertuples() to iterrows() been the fastest way to iterate when the dataframe width increases. import pandas as pd import time from pylab import * size_range = [100, 300, 600, 1200] nrows = 100000 compute_time_rows = zeros(len(size_range)) compute_time_tuples = zeros(len(size_range)) for s, size in enumerate(size_range): x = pd.DataFrame(randn(nrows, size)) start = time.time() for idx, row in x.iterrows(): z = sum(row) stop = time.time() compute_time_rows[s] = stop - start start = time.time() for row in x.itertuples(index=False): z = sum(row) stop = time.time() compute_time_tuples[s] = stop - start xlabel('Dataframe width') ylabel('Computation time [s]') pd.Series(compute_time_rows, index=size_range).plot(grid=True, label='iterrows') pd.Series(compute_time_tuples, index=size_range).plot(grid=True, label='itertuples') title(f'Iteration over a {nrows} rows Pandas dataframe') legend() [Row iteration speed vs dataframe width] https://i.sstatic.net/65QuPfjB.png
First of all, you should not use either of them. Iterating through a dataframe in a Python loop is painfully slow no matter how you do it. You can only use them if you don't care about the performance. That being said, I will answer your question based on my technical curiosity. Both APIs are helper functions implemented in Python, so you can have a look. https://github.com/pandas-dev/pandas/blob/v2.2.3/pandas/core/frame.py#L1505-L1557 https://github.com/pandas-dev/pandas/blob/v2.2.3/pandas/core/frame.py#L1559-L1641 The following are the relevant parts. You can check that these show the same pattern as the original if you want to. import pandas as pd def iterrows(df: pd.DataFrame): for row in df.values: yield pd.Series(row) def itertuples(df: pd.DataFrame): cols = [df.iloc[:, k] for k in range(len(df.columns))] return zip(*cols) From the above code, we can find two major factors. The first thing is that iterrows does not make copies (in this case). df.values returns a single numpy array. If you iterate over it, each row is a view, not a copy, of the original array. And when you pass a numpy array to pd.Series, it uses it as an internal buffer, so it's also a view. Therefore, iterrows yields a views of the dataframe. On the other hand, itertuples creates views of columns first, but leaves the rest to Python's builtin zip function, which makes a copy. Second, since ziping columns is equivalent to creating one iterator per column, the iteration overhead is multiplied (compared to iterrows). So itertuples is not only linear, it is also simply slow. To be fair, I don't think itertuples was designed to handle such a large number of columns, considering that it is much faster for small numbers of columns. To confirm this hypothesis, I will show two examples. The first, in iterrows, yields a tuple instead of a pd.Series. This makes iterrows linear, since it forces a copy. def iterrows_2(df: pd.DataFrame): for row in df.values: # yield pd.Series(row) yield tuple(row) With this simple change, the performance characteristics of iterrows are now very similar to those of itertuples (but somehow faster). The second, in itertuples, converts to a single numpy array instead of using zip. def itertuples_2(df: pd.DataFrame): cols = [df.iloc[:, k] for k in range(len(df.columns))] # return zip(*cols) return np.array(cols).T # This is only possible for the single dtype dataframe. Both creating cols and converting it to a numpy array take linear time, so it's still linear in overall, but much faster. In other words, you can see how slow zip(*cols) is. Here is the benchmark code: import time import matplotlib.pyplot as plt import numpy as np import pandas as pd def iterrows(df: pd.DataFrame): for row in df.values: yield pd.Series(row) def iterrows_2(df: pd.DataFrame): for row in df.values: # yield pd.Series(row) yield tuple(row) def itertuples(df: pd.DataFrame): cols = [df.iloc[:, k] for k in range(len(df.columns))] return zip(*cols) def itertuples_2(df: pd.DataFrame): cols = [df.iloc[:, k] for k in range(len(df.columns))] # return zip(*cols) return np.array(cols).T # This is only possible for the single dtype dataframe. 
def benchmark(candidates): size_range = [100, 300, 600, 900, 1200] nrows = 100000 times = {k: [] for k in candidates} for size in size_range: for func in candidates: x = pd.DataFrame(np.random.randn(nrows, size)) start = time.perf_counter() for row in func(x): pass stop = time.perf_counter() times[func].append(stop - start) # Plot plt.figure() for func in candidates: s = pd.Series(times[func], index=size_range, name=func.__name__) plt.plot(s.index, s.values, marker="o", label=s.name) plt.title(f"iterrows vs itertuples ({nrows} rows)") plt.xlabel("Columns") plt.ylabel("Time [s]") plt.legend() plt.grid(True) plt.tight_layout() plt.show() benchmark([iterrows, itertuples, iterrows_2]) benchmark([iterrows, itertuples, itertuples_2]) FYI, note that iterrows yields a view in your benchmark, but this is not always the case. For example, if you convert the first column to a string type, it will make a copy, and iterrows will take linear time. x = pd.DataFrame(np.random.randn(nrows, size)) x[0] = x[0].astype(str)
2
4
79,120,600
2024-10-24
https://stackoverflow.com/questions/79120600/passing-a-python-function-in-a-container-to-a-c-object-and-pybind-wrappers
I am writing a set of Python wrappers via pybind11 for an optimization library whose heavy-lifting code was written in C++. The abstract class hierarchy of my C++ code to be wrapped currently looks something like this (multivariate.h): typedef std::function<double(const double*)> multivariate; struct multivariate_problem { // objective multivariate _f; int _n; // bound constraints double *_lower; double *_upper; multivariate_problem(const multivariate f, const int n, double *lower, double *upper): _f(f), _n(n), _lower(lower), _upper(upper)) { } }; struct multivariate_solution { ... // some code here for return a solution (seems to work ok) }; class MultivariateOptimizer { public: virtual ~MultivariateOptimizer() { } // initialize the optimizer virtual void init(const multivariate_problem &f, const double *guess) = 0; // perform one step of iteration virtual void iterate() = 0; // retrieve current solution virtual multivariate_solution solution() = 0; // this essentially calls init(), performs a number of iterate() calls, returns solution() virtual multivariate_solution optimize(const multivariate_problem &f, const double *guess) = 0; }; Now, my pybind code looks something like this currently (multivariate_py.cpp): // wrap the function expressions typedef std::function<double(const py::array_t<double>&)> py_multivariate; // wrap the multivariable optimizer void build_multivariate(py::module_ &m) { // wrap the solution object py::class_<multivariate_solution> solution(m, "MultivariateSolution"); ... // wrap the solver py::class_<MultivariateOptimizer> solver(m, "MultivariateSearch"); // corresponds to MultivariateSearch::optimize() solver.def("optimize", [](MultivariateOptimizer &self, py_multivariate py_f, py::array_t<double> &py_lower, py::array_t<double> &py_upper, py::array_t<double> &py_guess) { const int n = py_lower.size(); double *lower = static_cast<double*>(py_lower.request().ptr); double *upper = static_cast<double*>(py_upper.request().ptr); const multivariate &f = [&py_f, &n](const double *x) -> double { const auto &py_x = py::array_t<double>(n, x); return py_f(py_x); }; const multivariate_problem &prob { f, n, lower, upper }; double *guess = static_cast<double*>(py_guess.request().ptr); return self.optimize(prob, guess); }, "f"_a, "lower"_a, "upper"_a, "guess"_a, py::call_guard<py::scoped_ostream_redirect, py::scoped_estream_redirect>()); // corresponds to MultivariateSearch::init() solver.def("initialize", [](MultivariateOptimizer &self, py_multivariate py_f, py::array_t<double> &py_lower, py::array_t<double> &py_upper, py::array_t<double> &py_guess) { const int n = py_lower.size(); double *lower = static_cast<double*>(py_lower.request().ptr); double *upper = static_cast<double*>(py_upper.request().ptr); const multivariate &f = [&py_f, &n](const double *x) -> double { const auto &py_x = py::array_t<double>(n, x); return py_f(py_x); }; const multivariate_problem &prob { f, n, lower, upper }; double *guess = static_cast<double*>(py_guess.request().ptr); return self.init(prob, guess); }, "f"_a, "lower"_a, "upper"_a, "guess"_a, py::call_guard<py::scoped_ostream_redirect, py::scoped_estream_redirect>()); // corresponds to MultivariateSearch::iterate() solver.def("iterate", &MultivariateOptimizer::iterate); // corresponds to MultivariateSearch::solution() solver.def("solution", &MultivariateOptimizer::solution); // put algorithm-specific bindings here build_acd(m); build_amalgam(m); build_basin_hopping(m); ... 
As you can see, I internally wrap the python function to a C++ function, then wrap the function inside a multivariate_problem object to pass to my C++ backend. Which would then be called by pybind with the following entry point (mylibrary.cpp): namespace py = pybind11; void build_multivariate(py::module_&); PYBIND11_MODULE(mypackagename, m) { build_multivariate(m); ... } pybind11 will compile this without errors through setup.py via typical pip install . commands. In fact, most of the code is working correctly for both initialize() and optimize() calls. For example, the following Python code will run fine (assuming mylibrary is replaced with the name of my package in setup.py, and MySolverName is a particular instance of MultivariateSearch: import numpy as np from mylibrary import MySolverName # function to optimize def fx(x): total = 0.0 for x1, x2 in zip(x[:-1], x[1:]): total += 100 * (x2 - x1 ** 2) ** 2 + (1 - x1) ** 2 return total n = 10 # dimension of problem alg = MySolverName(some_args_to_pass...) sol = alg.optimize(fx, lower=-10 * np.ones(n), upper=10 * np.ones(n), guess=np.random.uniform(low=-10., high=10., size=n)) print(sol) However, here is where I am currently stuck. I would also like the user to have an option to run the solvers interactively, which is where the initialize and iterate functions come into play. However, the binding provided above, which works for optimize, does not work for iterate. To illustrate a use case, the following code does not run: import numpy as np from mylibrary import MySolverName # function to optimize def fx(x): ... return result here for np.ndarray x n = 10 # dimension of problem alg = MySolverName(some_args_to_pass...) alg.initialize(f=fx, lower=-10 * np.ones(n), upper=10 * np.ones(n), guess=np.random.uniform(low=-10., high=10., size=n)) alg.iterate() print(alg.solution()) When the iterate command is executed, the script hangs and then terminates after a while, typically with no error, and it does not print the last line. In rare cases, it produces a crash on the Python side that the "dimension cannot be negative", but there is no guidance on which memory this pertains to. Without the iterate() line, the code will successfully print the last line as expected. When I std::cout inside the C++ iterate() function, it always hangs at the point right before any calls to the function fx and does not print anything after the call, so I have narrowed down the problem to the Python function not persisting correctly with iterate() as the entry point. However, optimize() calls iterate() internally within the C++ code, and the previous example is working successfully, so the error is rather odd. I've tried to go through the documentation for pybind, but I could not find an example that achieves precisely what I am trying to do above. I have tried other solutions like keep_alive<>, to no avail. Can you help me modify the pybind wrappers, such that the internally stored Python function will persist inside the C++ object, and execute correctly even after the control is again restored to the Python interpreter between iterate() calls? Thanks for your help!
If you would like to wrap your C-style interfaces in modern c++ compatible with pybind11, you can do like this: Considering that we have some dummy impl of your optimizer: typedef std::function<double(const double *)> multivariate; struct multivariate_problem { // objective multivariate _f; int _n; // bound constraints double *_lower; double *_upper; multivariate_problem() : multivariate_problem(nullptr, 0, nullptr, nullptr) {} multivariate_problem(const multivariate f, const int n, double *lower, double *upper) : _f(f), _n(n), _lower(lower), _upper(upper) {} }; struct multivariate_solution { // some code here for return a solution (seems to work ok) }; class MultivariateOptimizer { public: virtual ~MultivariateOptimizer() {} // store the args virtual void init(const multivariate_problem &f, const double *guess) { _problem = f; _guess = guess; } // just dbg print to check if all is ok virtual void iterate() { double result = _problem._f(_guess); std::cout << "iterate, f(guess): " << result << '\n'; } virtual multivariate_solution solution() { return multivariate_solution{}; } // this essentially calls init(), performs a number of iterate() calls, // returns solution() virtual multivariate_solution optimize(const multivariate_problem &f, const double *guess) { init(f, guess); // WARN: after `optimize` returns, guess will dangle, so don't call `iterate` without calling `init` first iterate(); iterate(); return multivariate_solution{}; } private: multivariate_problem _problem; const double *_guess; }; we can add the following 2 wrappers around the problem definition and the optimizer class: typedef std::function<double(std::span<const double>)> Multivariate; class MultiVarProblemWrapper { public: friend void swap(MultiVarProblemWrapper &first, MultiVarProblemWrapper &second) { using std::swap; swap(first._f, second._f); swap(first._n, second._n); swap(first._lower, second._lower); swap(first._upper, second._upper); swap(first._problem, second._problem); } MultiVarProblemWrapper() = default; MultiVarProblemWrapper(Multivariate f, size_t n, std::vector<double> lower, std::vector<double> upper) : _f{f}, _n{n}, _lower{lower}, _upper{upper}, _problem{[this, f, n](const double *arg) { std::span<const double> sp{arg, n}; return f(sp); }, static_cast<int>(n), _lower.data(), _upper.data()} {} // need to implement copy-CTor, because default version would be incorrect MultiVarProblemWrapper &operator=(MultiVarProblemWrapper rhs) { swap(*this, rhs); return *this; } MultiVarProblemWrapper(const MultiVarProblemWrapper &other) : MultiVarProblemWrapper(other._f, other._n, other._lower, other._upper) { } MultiVarProblemWrapper(MultiVarProblemWrapper &&) = default; MultiVarProblemWrapper &operator=(MultiVarProblemWrapper &&) = default; multivariate_problem getProblem() const { return _problem; } private: Multivariate _f; size_t _n; std::vector<double> _lower; std::vector<double> _upper; multivariate_problem _problem; }; class MultiVarOptWrapper { public: MultiVarOptWrapper() : _impl{new MultivariateOptimizer} {} void init(const MultiVarProblemWrapper &f, std::vector<double> guess) { _problem = f; _guess = guess; _impl->init(_problem.getProblem(), _guess.data()); } void iterate() { _impl->iterate(); } multivariate_solution solution() { return _impl->solution(); } multivariate_solution optimize(const MultiVarProblemWrapper &f, std::vector<double> guess) { return _impl->optimize(f.getProblem(), guess.data()); } private: std::unique_ptr<MultivariateOptimizer> _impl; MultiVarProblemWrapper _problem; 
std::vector<double> _guess; }; and then the python bindings: PYBIND11_MODULE(test_binding, m) { typedef std::function<double(const py::array_t<double> &)> py_multivariate; py::class_<multivariate_solution>(m, "multivariate_solution") .def(py::init<>()); py::class_<MultiVarProblemWrapper>(m, "MultiVarProblemWrapper") .def(py::init<>()) // explicit conversion from span to array_t; hopefully this will be supported out-of-the-box in the future pybind11 versions .def(py::init([](py_multivariate py_f, size_t n, std::vector<double> lower, std::vector<double> upper) { auto f = [py_f](std::span<const double> sp) { py::array_t<double> arr(sp.size(), sp.data()); return py_f(arr); }; return MultiVarProblemWrapper{f, n, lower, upper}; })); py::class_<MultiVarOptWrapper>(m, "MultiVarOptWrapper") .def(py::init<>()) .def("init", &MultiVarOptWrapper::init) .def("iterate", &MultiVarOptWrapper::iterate) .def("solution", &MultiVarOptWrapper::solution) .def("optimize", &MultiVarOptWrapper::optimize); } And then you can test it in python: def foo(data): print(data) return data[0] + data[1] problem = MultiVarProblemWrapper(foo, 2, [1, 2], [3,4]) optimizer = MultiVarOptWrapper() optimizer.init(problem, [6.9,9.6]) optimizer.iterate() In MultiVarOptWrapper default constructor I hardcoded dummy algorithm implementation. Of course you can create a parametric MultiVarOptWrapper constructor were you pass arguments needed to create a real solver, like in your python example alg = MySolverName(some_args_to_pass...). In MultiVarProblemWrapper I used copy and swap idiom.
2
1
79,136,314
2024-10-29
https://stackoverflow.com/questions/79136314/how-to-let-pip-install-show-progress
In the past, I installed my personal package with setup.py: python setup.py install Now this method is deprecated, and I can only use pip: python -m pip install . However, the method with setup.py can show install messages, but the pip method cannot. For example, when there is C++ code which requires compiling the source code, the setup.py method can print warnings to the screen, but the pip method only lets you wait until everything is done. Is there any method that lets me see more messages with pip, like setup.py did in the past?
You can force pip to be more verbose using pip -v install. The option is additive, and can be used up to 3 times to increase verbosity: pip -v install pip -vv install pip -vvv install
2
2
79,136,400
2024-10-29
https://stackoverflow.com/questions/79136400/filter-rows-by-condition-on-columns-with-certain-names
I have a dataframe: df = pd.DataFrame({"ID": ["ID1", "ID2", "ID3", "ID4", "ID5"], "Item": ["Item1", "Item2", "Item3", "Item4","Item5"], "Catalog1": ["cat1", "1Cat12", "Cat35", "1cat3","Cat5"], "Catalog2": ["Cat11", "Cat12", "Cat35", "1Cat1","2cat5"], "Catalog3": ["cat6", "Ccat2", "1Cat9", "1cat3","Cat7"], "Price": ["716", "599", "4400", "150","139"]}) I need to find all rows that contain the string "cat1" or "Cat1" in any column whose name starts with Catalog (the number of these columns may vary, so I can't just list them). I tried: filter_col = [col for col in df if col.startswith('Catalog')] df_res = df.loc[(filter_col.str.contains('(?i)cat1'))] But I get an error: AttributeError: 'list' object has no attribute 'str'
In your code, filter_col is a list. You can't use str with it. You can make use of pandas functions to do the operations faster. Here's the code to solve it: import pandas as pd # Create the DataFrame df = pd.DataFrame({"ID": ["ID1", "ID2", "ID3", "ID4", "ID5"], "Item": ["Item1", "Item2", "Item3", "Item4","Item5"], "Catalog1": ["cat1", "1Cat12", "Cat35", "1cat3","Cat5"], "Catalog2": ["Cat11", "Cat12", "Cat35", "1Cat1","2cat5"], "Catalog3": ["cat6", "Ccat2", "1Cat9", "1cat3","Cat7"], "Price": ["716", "599", "4400", "150","139"]}) # Define the search strings search_strings = ["cat1", "Cat1"] # Filter the DataFrame filtered_df = df[df.filter(like='Catalog').apply(lambda row: row.str.contains('|'.join(search_strings), case=False).any(), axis=1)] print(filtered_df)
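If the frame is large, a column-wise variant avoids calling str.contains once per row; a sketch assuming the same df and the same case-insensitive search:

catalog_cols = df.filter(like='Catalog')
mask = catalog_cols.apply(lambda col: col.str.contains('cat1', case=False)).any(axis=1)
filtered_df = df[mask]
print(filtered_df)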
4
4
79,136,350
2024-10-29
https://stackoverflow.com/questions/79136350/numpy-array-slice-then-assign-to-itself
Why does the following code not return [1,4,3,4]? Hasn't a already changed during the reversed-order assignment? a=np.array([1,2,3,4]) a[1::]=a[:0:-1] The result is: array([1, 4, 3, 2])
You're right that a changes during the assignment, which in turn affects the view whose elements you're assigning into a. If NumPy didn't have special handling for this case, you could indeed see array([1, 4, 3, 4]) as a result. However, NumPy checks for this case. If NumPy detects that the RHS of the assignment may share memory with the array being assigned to, it makes a copy of the RHS first, to avoid this kind of problem: if (tmp_arr && solve_may_share_memory(self, tmp_arr, 1) != 0) { Py_SETREF(tmp_arr, (PyArrayObject *)PyArray_NewCopy(tmp_arr, NPY_ANYORDER)); }
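You can observe that overlap check from Python with numpy.shares_memory; a small illustration (not the internal code path itself):

import numpy as np

a = np.array([1, 2, 3, 4])
lhs = a[1:]         # the slice being assigned to
rhs = a[:0:-1]      # the reversed view on the right-hand side
print(np.shares_memory(lhs, rhs))  # True, so NumPy copies the RHS before assigning
a[1:] = a[:0:-1]
print(a)                           # [1 4 3 2]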
2
6
79,126,205
2024-10-25
https://stackoverflow.com/questions/79126205/how-to-split-a-pyspark-dataframe-taking-a-portion-of-data-for-each-different-id
I'm working with a pyspark dataframe (in Python) containing time series data. Data got a structure like this: event_time variable value step ID 1456942945 var_a 123.4 1 id_1 1456931076 var_b 857.01 1 id_1 1456932268 var_b 871.74 1 id_1 1456940055 var_b 992.3 2 id_1 1456932781 var_c 861.3 2 id_1 1456937186 var_c 959.6 3 id_1 1456934746 var_d 0.12 4 id_1 1456942945 var_a 123.4 1 id_2 1456931076 var_b 847.01 1 id_2 1456932268 var_b 871.74 1 id_2 1456940055 var_b 932.3 2 id_2 1456932781 var_c 821.3 3 id_2 1456937186 var_c 969.6 4 id_2 1456934746 var_d 0.12 4 id_2 For each id i got each variable's value at a specific "step". I need to subset this dataframe like this: for each id take all the rows corresponding to steps 1, 2, 3 and a portion of step 4 data starting from the first_event time value of step 4, let's say first 25%. This portioning is to be done with respect to event time. I'm able to do it for a single id, after having subset the DF based on that id: # single step partitioning threshold_value = DF.selectExpr(f"percentile_approx({"event_time"}, {0.25}) as threshold").collect()[0]["threshold"] partitioned_df= DF.filter(col(column_name) <= threshold_value) # First 3 steps first_3_steps_df = DF.filter((col("step").isin([1,2,3]))) And then i would concat the partitioned_df and first_3_steps_df to obtain the desidered output for 1 specific id. I'm stuck at iterating this kind of partitioning for each id in DF without actually iterating that process for each id separately. I'm also able to do it in pandas, but the DF is huge and i really need to stick to Pyspark, so no Pandas answers, please.
Group the data by ID and use percentile_approx as aggregation function to calculate the threshold for step=4. Then create a where clause with these values to filter the data: from pyspark.sql import functions as F df = ... threshold = df.where('step = 4') \ .groupBy('ID') \ .agg(F.percentile_approx('event_time', 0.25)) \ .collect() threshold = [(r[0],r[1]) for r in threshold] whereStmt = 'step=1 or step=2 or step=3' for r in threshold: whereStmt = whereStmt + f' or (step=4 and ID={r[0]} and event_time<={r[1]})' df.where(F.expr(whereStmt)).show()
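If there are many distinct IDs, collecting the thresholds to the driver can be avoided by using a join instead; a sketch against the same df (percentile_approx as a DataFrame function needs a reasonably recent Spark):

from pyspark.sql import functions as F

thresholds = (df.where('step = 4')
                .groupBy('ID')
                .agg(F.percentile_approx('event_time', 0.25).alias('threshold')))

result = (df.join(thresholds, on='ID', how='left')
            .where(F.col('step').isin(1, 2, 3) |
                   ((F.col('step') == 4) & (F.col('event_time') <= F.col('threshold'))))
            .drop('threshold'))
result.show()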
3
1
79,131,057
2024-10-27
https://stackoverflow.com/questions/79131057/why-do-i-get-a-networkxerror-node-has-no-position-when-trying-to-draw-a-graph
I'm trying to create a simulation (powered by python) to analyze density and curvature of Hyperbolic Lattices and also their realtions with Anti-de Sitter (AdS) Spaces and AdS/CFT Correspondence for a research. I'm using matplotlib, numpy, networkx, random and collections. While everything seems fine when I try to run the code, I get an error about the values. I tried to change the node_size, edge_color and with_labels values, but the error still didn't vanish. Here is the code: import numpy as np import matplotlib.pyplot as plt import networkx as nx import random from collections import deque # Parameters for hyperbolic lattice p = 7 # Sides per polygon (heptagon) q = 3 # Polygons per vertex depth = 4 # Depth of recursion # Initialize graph for lattice G = nx.Graph() # Define recursive function to create the lattice def hyperbolic_polygon_vertices(sides, center, radius): angles = np.linspace(0, 2 * np.pi, sides, endpoint=False) vertices = [] for angle in angles: x = center[0] + radius * np.cos(angle) y = center[1] + radius * np.sin(angle) vertices.append((x, y)) return vertices def add_hyperbolic_lattice(G, center, radius, depth, max_depth): if depth >= max_depth: return # Add current center node G.add_node(center, pos=(radius * np.cos(center[0]), radius * np.sin(center[1]))) # Generate neighboring nodes angle_offset = 2 * np.pi / q for i in range(q): angle = i * angle_offset new_radius = radius * 0.8 # Adjust radius to create depth effect new_center = (center[0] + new_radius * np.cos(angle), center[1] + new_radius * np.sin(angle)) # Add edge to graph and recursive call G.add_edge(center, new_center) add_hyperbolic_lattice(G, new_center, new_radius, depth + 1, max_depth) # Initialize lattice with central node initial_center = (0, 0) initial_radius = 0.3 add_hyperbolic_lattice(G, initial_center, initial_radius, 0, depth) # Plotting the hyperbolic lattice pos = nx.get_node_attributes(G, 'pos') nx.draw(G, pos, node_size=10, edge_color="blue", with_labels=False) plt.title("Hyperbolic Lattice (Poincaré Disk)") plt.show() # Function to calculate density by distance def calculate_density_by_distance(G, center, max_distance): distances = nx.single_source_shortest_path_length(G, center) density_by_distance = {} for distance in range(1, max_distance + 1): nodes_in_ring = [node for node, dist in distances.items() if dist == distance] if nodes_in_ring: area = np.pi * ((distance + 1)**2 - distance**2) # Approximate area for each ring density_by_distance[distance] = len(nodes_in_ring) / area return density_by_distance # Calculate and plot density vs. 
distance center_node = initial_center density_data = calculate_density_by_distance(G, center_node, max_distance=5) plt.plot(density_data.keys(), density_data.values()) plt.xlabel("Distance from Center") plt.ylabel("Density") plt.title("Density vs Distance in Hyperbolic Lattice") plt.show() # Analyzing boundary connectivity (AdS/CFT analogy) distances = nx.single_source_shortest_path_length(G, center_node) max_distance = max(distances.values()) boundary_nodes = [node for node, dist in distances.items() if dist == max_distance] boundary_connectivity = np.mean([G.degree(node) for node in boundary_nodes]) print("Average connectivity at boundary (AdS/CFT analogy):", boundary_connectivity) # Simulating network flow to boundary (black hole thermodynamics analogy) boundary_node = random.choice(boundary_nodes) try: flow_value, _ = nx.maximum_flow(G, center_node, boundary_node) print("Simulated flow value (energy transfer to boundary):", flow_value) except: print("Error calculating flow, nodes may not be fully connected in hyperbolic space.") And here is the error: Traceback (most recent call last): File "C:\Users\zorty\Masaüstü\.venv\Lib\site-packages\networkx\drawing\nx_pylab.py", line 445, in draw_networkx_nodes xy = np.asarray([pos[v] for v in nodelist]) ~~~^^^ KeyError: (np.float64(0.70848), np.float64(0.0)) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "c:\Users\zorty\Masaüstü\Project.py", line 51, in <module> nx.draw(G, pos, node_size=10, edge_color="blue", with_labels=False) File "C:\Users\zorty\Masaüstü\.venv\Lib\site-packages\networkx\drawing\nx_pylab.py", line 126, in draw draw_networkx(G, pos=pos, ax=ax, **kwds) File "C:\Users\zorty\Masaüstü\.venv\Lib\site-packages\networkx\drawing\nx_pylab.py", line 314, in draw_networkx draw_networkx_nodes(G, pos, **node_kwds) File "C:\Users\zorty\Masaüstü\.venv\Lib\site-packages\networkx\drawing\nx_pylab.py", line 447, in draw_networkx_nodes raise nx.NetworkXError(f"Node {err} has no position.") from err networkx.exception.NetworkXError: Node (np.float64(0.70848), np.float64(0.0)) has no position.
Your current code does not assign positions to the nodes at depth == max_depth, resulting in nodes without defined positions. Switching the first two statements in add_hyperbolic_lattice resolves the issue. def add_hyperbolic_lattice(G, center, radius, depth, max_depth): # Add current center node G.add_node(center, pos=(radius * np.cos(center[0]), radius * np.sin(center[1]))) if depth >= max_depth: return ...
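To guard against this class of error in general, you can check for position-less nodes before drawing; a small optional sanity check:

pos = nx.get_node_attributes(G, 'pos')
missing = [n for n in G.nodes if n not in pos]
assert not missing, f"Nodes without a position: {missing}"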
2
0
79,119,390
2024-10-23
https://stackoverflow.com/questions/79119390/micropython-aioble-esp32-charactrictic-written-never-returns-even-upon-a-wri
I am working with ESP32-C3. I created a BLE GATT Server, I want to exchange data with it bidirectionally. I am using nRF Connect android app for debugging. Charactrictic #1 is used to send data from ESP to nRF Connect. It works fine. Charactrictic #2 is used to receive data from nRF Connect on ESP32. It doesn't work: connection, data = await recv_char.written(timeout_ms=_ADV_INTERVAL_MS) awaits infinitely and never returns. Even when I make a write in NRF Connect. Below is my code, mostly copy-pasted from examples. Please help me find the mistake in data reception on the ESP32 side. import sys sys.path.append('') import asyncio import aioble import bluetooth import machine import time led = machine.Pin(8, machine.Pin.OUT) GATT_UUID = bluetooth.UUID(0x1802) GATT_CHAR_UUID = bluetooth.UUID("90D3D001-C950-4DD6-9410-2B7AEB1DD7D8") RECV_CHAR_UUID = bluetooth.UUID("90D3D002-C950-4DD6-9410-2B7AEB1DD7D8") _ADV_INTERVAL_MS = 250_000 # Register GATT server, the service and characteristics ble_service = aioble.Service(GATT_UUID) sensor_characteristic = aioble.Characteristic(ble_service, GATT_CHAR_UUID, read=True, notify=True, write=True, capture=True) recv_char = aioble.Characteristic(ble_service, RECV_CHAR_UUID, write=True, notify=True, capture=True) # Register service(s) aioble.register_services(ble_service) def _decode_data(data): try: if data is not None: # Decode the UTF-8 data number = int.from_bytes(data, 'big') return number except Exception as e: print("Error decoding temperature:", e) return None async def peripheral_task(): while True: try: async with await aioble.advertise(_ADV_INTERVAL_MS, name="MYNAME", services=[GATT_UUID], ) as connection: print("Connection from", connection.device) await connection.disconnected() except asyncio.CancelledError: # Catch the CancelledError print("Peripheral task cancelled") except Exception as e: print("Error in peripheral_task:", e) finally: # Ensure the loop continues to the next iteration await asyncio.sleep_ms(100) async def wait_for_write(): while True: try: connection, data = await recv_char.written(timeout_ms=_ADV_INTERVAL_MS) # Here it fails ^ print('OLOLO') print(data) # print(type) data = _decode_data(data) print('Connection: ', connection) print('Data: ', data) if data == 1: print('Turning LED ON') led.value(1) elif data == 0: print('Turning LED OFF') led.value(0) else: print('Unknown command') except asyncio.CancelledError: # Catch the CancelledError print("Wait4Write task cancelled") except Exception as e: print("Error in Wait4write_task:", e) finally: # Ensure the loop continues to the next iteration await asyncio.sleep_ms(500) def _encode_data(data): return str(data).encode('utf-8') async def sensor_task(): while True: value = b'Hello World!' sensor_characteristic.write(_encode_data(value), send_update=True) # print('New value written: ', value) await asyncio.sleep_ms(1000) # Run tasks async def ble_main(): t1 = asyncio.create_task(peripheral_task()) t2 = asyncio.create_task(sensor_task()) t3 = asyncio.create_task(wait_for_write()) await asyncio.gather(t1, t2, t3) I tried various methods of sending data in nRF Connect, also tried to send and receive data in one charactristic. Then I split sending and receiving into 2 distinct characteristics. No effect.
I solved my own problem. The mistake was: GATT_UUID = bluetooth.UUID(0x1802) I changed that UUID to a 128-bit unique one, and it worked. Hope this post helps someone.
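For reference, the change amounts to something like the following; the 128-bit value is only an example in the same family as the question's characteristic UUIDs, so generate your own:

# GATT_UUID = bluetooth.UUID(0x1802)  # 16-bit SIG-assigned UUID (caused the problem)
GATT_UUID = bluetooth.UUID("90D3D000-C950-4DD6-9410-2B7AEB1DD7D8")  # custom 128-bit UUID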
2
2
79,132,812
2024-10-28
https://stackoverflow.com/questions/79132812/find-intersection-of-dates-in-grouped-polars-dataframe
Consider the following pl.DataFrame: import polars as pl data = { "symbol": ["AAPL"] * 5 + ["GOOGL"] * 3 + ["MSFT"] * 4, "date": [ "2023-01-01", "2023-01-02", "2023-01-03", "2023-01-04", "2023-01-05", # AAPL has 5 dates "2023-01-01", "2023-01-02", "2023-01-03", # GOOGL has 3 dates "2023-01-01", "2023-01-02", "2023-01-03", "2023-01-04", # MSFT has 4 dates ], } df = pl.DataFrame(data) with pl.Config(tbl_rows=-1): print(df) shape: (12, 2) ┌────────┬────────────┐ │ symbol ┆ date │ │ --- ┆ --- │ │ str ┆ str │ ╞════════╪════════════╡ │ AAPL ┆ 2023-01-01 │ │ AAPL ┆ 2023-01-02 │ │ AAPL ┆ 2023-01-03 │ │ AAPL ┆ 2023-01-04 │ │ AAPL ┆ 2023-01-05 │ │ GOOGL ┆ 2023-01-01 │ │ GOOGL ┆ 2023-01-02 │ │ GOOGL ┆ 2023-01-03 │ │ MSFT ┆ 2023-01-01 │ │ MSFT ┆ 2023-01-02 │ │ MSFT ┆ 2023-01-03 │ │ MSFT ┆ 2023-01-04 │ └────────┴────────────┘ I need to make each group's dates (grouped_by symbol) consistent accross all groups. Therefore, I need to identify the common dates across all groups (probably using join) and subsequently filter the dataframe accordingly. It might be related to Find intersection of columns from different polars dataframes. I am looking for a generalized solution. In the above example the resulting pl.DataFrame should look as follows: shape: (9, 2) ┌────────┬────────────┐ │ symbol ┆ date │ │ --- ┆ --- │ │ str ┆ str │ ╞════════╪════════════╡ │ AAPL ┆ 2023-01-01 │ │ AAPL ┆ 2023-01-02 │ │ AAPL ┆ 2023-01-03 │ │ GOOGL ┆ 2023-01-01 │ │ GOOGL ┆ 2023-01-02 │ │ GOOGL ┆ 2023-01-03 │ │ MSFT ┆ 2023-01-01 │ │ MSFT ┆ 2023-01-02 │ │ MSFT ┆ 2023-01-03 │ └────────┴────────────┘
You could count the number of unique (n_unique) symbols over date and filter the rows that have all symbols: df.filter(pl.col('symbol').n_unique().over('date') .eq(pl.col('symbol').n_unique())) Output: ┌────────┬────────────┐ │ symbol ┆ date │ │ --- ┆ --- │ │ str ┆ str │ ╞════════╪════════════╡ │ AAPL ┆ 2023-01-01 │ │ AAPL ┆ 2023-01-02 │ │ AAPL ┆ 2023-01-03 │ │ GOOGL ┆ 2023-01-01 │ │ GOOGL ┆ 2023-01-02 │ │ GOOGL ┆ 2023-01-03 │ │ MSFT ┆ 2023-01-01 │ │ MSFT ┆ 2023-01-02 │ │ MSFT ┆ 2023-01-03 │ └────────┴────────────┘ Intermediates: ┌────────┬────────────┬───────────────────┬───────────────────┐ │ symbol ┆ date ┆ nunique_over_date ┆ eq_symbol_nunique │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ u32 ┆ bool │ ╞════════╪════════════╪═══════════════════╪═══════════════════╡ │ AAPL ┆ 2023-01-01 ┆ 3 ┆ true │ │ AAPL ┆ 2023-01-02 ┆ 3 ┆ true │ │ AAPL ┆ 2023-01-03 ┆ 3 ┆ true │ │ AAPL ┆ 2023-01-04 ┆ 2 ┆ false │ │ AAPL ┆ 2023-01-05 ┆ 1 ┆ false │ │ GOOGL ┆ 2023-01-01 ┆ 3 ┆ true │ │ GOOGL ┆ 2023-01-02 ┆ 3 ┆ true │ │ GOOGL ┆ 2023-01-03 ┆ 3 ┆ true │ │ MSFT ┆ 2023-01-01 ┆ 3 ┆ true │ │ MSFT ┆ 2023-01-02 ┆ 3 ┆ true │ │ MSFT ┆ 2023-01-03 ┆ 3 ┆ true │ │ MSFT ┆ 2023-01-04 ┆ 2 ┆ false │ └────────┴────────────┴───────────────────┴───────────────────┘
2
6
79,130,996
2024-10-27
https://stackoverflow.com/questions/79130996/programmatically-change-components-of-pytorch-model
I am training a model in pytorch and would like to be able to programmatically change some components of the model architecture to check which works best without any if-blocks in the forward(). Consider a toy example: import torch class Model(torch.nn.Model): def __init__(self, layers: str, d_in: int, d_out: int): super().__init__() self.layers = layers linears = torch.nn.ModuleList([ torch.nn.Linear(d_in, d_out), torch.nn.Linear(d_in, d_out), ]) def forward(x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor: if self.layers == "parallel": x1 = self.linears[0](x1) x2 = self.linears[1](x2) x = x1 + x2 elif self.layers == "sequential": x = x1 + x2 x = self.linears[0](x) x = self.linears[1](x) return x My first intution was to provide external functions, e.g. def parallel(x1, x2): x1 = self.linears[0](x1) x2 = self.linears[1](x2) return x1 + x2 and provide them to the model, like class Model(torch.nn.Model): def __init__(self, layers: str, d_in: int, d_out: int, fn: Callable): super().__init__() self.layers = layers linears = torch.nn.ModuleList([ torch.nn.Linear(d_in, d_out), torch.nn.Linear(d_in, d_out), ]) self.fn = fn def forward(x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor: x = self.fn(x1, x2) but of course the function's scope does not know self.linears and I would also like to avoid having to pass each and every architectural element to the function. Do I wish for too much? Do I have to "bite the sour apple" as it says in German and either have larger function signatures, or use if-conditions, or something else? Or is there a solution to my problem?
You could just use the if statement in the init function or in another function, for example:

from enum import Enum

import torch


class ModelType(Enum):
    Parallel = 1
    Sequential = 2


class Model(torch.nn.Module):
    def __init__(self, layers: str, d_in: int, d_out: int, model_type: ModelType):
        super().__init__()
        self.layers = layers
        self.linears = torch.nn.ModuleList([
            torch.nn.Linear(d_in, d_out),
            torch.nn.Linear(d_in, d_out),
        ])
        self.model_type = model_type
        self.initialize()

    def initialize(self):
        if self.model_type == ModelType.Parallel:
            self.fn = self.parallel
        elif self.model_type == ModelType.Sequential:
            self.fn = self.sequential

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        x = self.fn(x1, x2)
        return x

    def parallel(self, x1, x2):
        x1 = self.linears[0](x1)
        x2 = self.linears[1](x2)
        x = x1 + x2
        return x

    def sequential(self, x1, x2):
        x = x1 + x2
        x = self.linears[0](x)
        x = self.linears[1](x)
        return x

I hope it helps.
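A quick usage sketch, assuming the class above (the sizes and batch are made-up values):

model = Model(layers="parallel", d_in=16, d_out=16, model_type=ModelType.Parallel)
out = model(torch.randn(4, 16), torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 16])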
2
1
79,129,975
2024-10-27
https://stackoverflow.com/questions/79129975/how-to-sum-delimited-text-in-python-in-excel
I slowly started using Python in Excel. Somehow I am managing the code, however this time the output data I received is delimited with ",". How can I do this with the PY option in Excel? Based on the pic below, the sum of aa,dd,ee is 18, bb,gg is 9 and so on. I can sum them individually but I am not sure how to deal with the delimited text. Any help would be appreciated. Thank you.
Enter in E2 with =PY: df_nv = xl("A1:B10", headers=True) names = xl("D2:D5").iloc[:,0] nv = dict(zip(df_nv["Name"],df_nv["Value"])) def sum_split(a): return sum(map(lambda a: nv[a], a.split(","))) list(map(sum_split, names)) Siddharth Rout's answer has better checks - for example xl returns dataFrame only for multi-cell inputs.
2
3
79,130,783
2024-10-27
https://stackoverflow.com/questions/79130783/difficulty-to-scrape-html-page-from-a-dynamic-generated-website-with-python
I'm trying to retrieve some data from a website with python. The website seems to generate its content with Javascript so I cannot use the standard requests library. I tried the module requests-html and Selenium that both handle javascript content, but the problem is that I still cannot get the html page of this website. https://www.lvmh.com/en/join-us/our-job-offers?PRD-en-us-timestamp-desc%5BrefinementList%5D%5Bmaison%5D%5B0%5D=Kendo I'm expecting to get the exact same view as what I have when I'm inspecting the page with my browser. For instant, I can clearly see all the information about the open positions. But when I fetch the page source with requests-html or Selenium, I get a page without any information of the open position. For instance, if I want to retrieve the name of the open position, it is located in the span with the class "ais-Highlight-nonHighlighted". I can see it in my browser, but I am not able to get this data with python. HTML page when inspecting through my browser, showing the data to retrieve (the job position name) What I want is to get the html of the webpage, just like requests, and then process is with BeautifulSoup. I tried with requests-html : from requests_html import HTMLSession url = "https://www.lvmh.com/en/join-us/our-job-offers?PRD-en-us-timestamp-desc%5BrefinementList%5D%5Bmaison%5D%5B0%5D=Kendo" session = HTMLSession() r = session.get(url) r.html.render(wait=5) print(r.html.html) print(r.html.text) print(r.text) job_name = r.html.find('.ais-Highlight-nonHighlighted') session.close() --> print does not display the job position name and job_name is empty I tried with Selenium : from selenium import webdriver from selenium.webdriver.common.by import By url = "https://www.lvmh.com/en/join-us/our-job-offers?PRD-en-us-timestamp-desc%5BrefinementList%5D%5Bmaison%5D%5B0%5D=Kendo" driver = webdriver.Safari() driver.get(url) data_source = driver.page_source data_execute = driver.execute_script("return document.body.innerHTML") driver.quit() --> data_source and data_execute both does not include the job position name None worked... if anyone can help me on that it will be grateful.
Your approach using selenium is not optimal. Don't try to get the page source but rather use selenium's built-in functionality for navigation. For example: from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver import ChromeOptions def text(e): if r := e.text: return r return e.get_attribute("textContent") options = ChromeOptions() options.add_argument("--headless=true") url = "https://www.lvmh.com/en/join-us/our-job-offers?PRD-en-us-timestamp-desc%5BrefinementList%5D%5Bmaison%5D%5B0%5D=Kendo" with webdriver.Chrome(options) as driver: driver.get(url) wait = WebDriverWait(driver, 10) selector = By.CSS_SELECTOR, "span.ais-Highlight-nonHighlighted" for span in wait.until(EC.presence_of_all_elements_located(selector)): print(text(span)) Output (partial): ... COPY OF DIRECTOR, VISUAL DESIGN KENDO San Francisco United States Permanent Job Minimum 10 years Full Time PACKAGING CREATIVE DIRECTOR KENDO San Francisco United States Permanent Job Minimum 10 years Full Time
2
1
79,117,084
2024-10-23
https://stackoverflow.com/questions/79117084/using-python-to-imageclip-an-image-in-autocad
I want to automatize the creation of a Clipping Boundary of an Image (references externes) in Autocad using known coordinates. I'm able to do it manually using the native function IMAGECLIP in Autocad by drawing a polygon. It seems the key parameter is ClipBoundary But I'm not able to use it proprely. For your information, the ObjectName of the image is AcDbRasterImage and it has beed add via AddRaster(image_path, insertion_point, 1, 0) (pyautocad). https://help.autodesk.com/view/OARX/2022/ENU/?guid=GUID-D9612F57-7F1F-4CFD-B804-838B826C59FC I've try this code but it doesn't work : import comtypes.client # Connexion à l'application AutoCAD active acad = comtypes.client.GetActiveObject("AutoCAD.Application") doc = acad.ActiveDocument modelspace = doc.ModelSpace # Appliquer la boundary de clip à l'image raster def apply_clip_boundary(raster, points): raster.ClippingEnabled = True # Activer le clipping # Debugging: Output the clip points to verify the correct format print("Points to be passed to ClipBoundary:", list(points)) print(raster.ClipBoundary) # Appliquer la ClipBoundary raster.ClipBoundary = points # Assurez-vous que les points sont bien formatés print("Clip boundary applied and enabled.") print(raster.ClipBoundary) # Créer les points 2D initiaux (x, y) points = [ (325100.2407, 7682500.6218), (325100.6019, 7682500.6218), (325100.6019, 7682500.3992), (325100.2407, 7682500.3992), (325100.2407, 7682500.6218) ] # Sélectionner un raster par son nom def get_raster_by_name(name): for item in modelspace: if item.ObjectName == "AcDbRasterImage" and item.Name == name: return item return None # Obtenir le raster et appliquer la boundary raster = get_raster_by_name("photo") if raster: apply_clip_boundary(raster, points) else: print("Raster not found.") The result of the script is : no error message, but nothing happens This is the result when I use : print(raster.ClipBoundary) before passing points <bound method ClipBoundary of <POINTER(IAcadRasterImage) ptr=0x214be61a5d8 at 214c65509c0>> and this is the result of the print(raster.ClipBoundary) after passing points: [(325100.2407, 7682500.6218), (325100.6019, 7682500.6218), (325100.6019, 7682500.3992), (325100.2407, 7682500.3992), (325100.2407, 7682500.6218)] My conclusion is either I don't pass the good kind of datas (array of points?) or I do not know how to use ClipBoundary.... it seems ClipBoundary is not anymore a method after passing parameters points While raster.ClipBoundary = points doesn't work, raster.ClippingEnabled = True does work ( I noticed the parameter in Autocad UI changed whenever I pass a parameter True or False for ClippingEnabled) I'm in nativ WCS (I did not change it). Maybe it's a problem of coordinate system? Does ClipBoundary need points to be passed in local (relativ from the image, from the insert_point of the image? from the boundingbox of the image)? maybe I should use ClipBoundary like a fonction() ? 
I also tried to pass directly a polygon, but it doesn't work: from pyautocad import Autocad, APoint, aDouble import comtypes.client from ctypes import c_double # Connexion à AutoCAD via pyautocad acad = Autocad(create_if_not_exists=True) def apply_clip_boundary(raster, points): # Vérifiez si le raster est bien trouvé avant de continuer if not raster: print("Raster not found.") return # Créer une polyligne 2D avec les points fournis (pas de Z, donc uniquement X et Y) polyline_points = aDouble(*[coord for point in points for coord in point]) # Ajouter uniquement X et Y # Utilisation de AddLightWeightPolyline pour créer une polyline 2D pl = acad.model.AddLightWeightPolyline(polyline_points) # Clôturer la polyligne pl.Closed = True # Activer la boundary de clip et ajouter la géométrie au raster raster.ClipBoundary = pl raster.ClippingEnabled = True return pl def get_raster_by_name(name): # Rechercher le raster par son nom dans le modèle for item in acad.iter_objects(): if item.ObjectName == "AcDbRasterImage" and item.Name == name: return item return None # Points définissant la boundary de découpe points = [ (325100.2407, 7682500.6218), (325100.6019, 7682500.6218), (325100.6019, 7682500.3992), (325100.2407, 7682500.3992) ] # Obtenir le raster et appliquer la boundary raster = get_raster_by_name("photo") if raster: apply_clip_boundary(raster, points) else: print("Raster not found.") Even if I would like to avoid "SendCommand" method, as much as possible. I will accept this solution.
Note: I don't have AutoCAD installed, so I can't do any testing to support my statement(s). But as the very URL from the question ([AutoDesk.Help]: ClipBoundary Method (ActiveX)) states, ClipBoundary is a method, which means it has to be invoked. raster.ClipBoundary = points doesn't make any sense, instead you should: raster.ClipBoundary(points)
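An untested sketch of what the invocation could look like, reusing the question's points and pyautocad's aDouble helper (already imported in the question's second attempt) to build the flat array of doubles the ActiveX method expects:

from pyautocad import aDouble

points = [
    (325100.2407, 7682500.6218),
    (325100.6019, 7682500.6218),
    (325100.6019, 7682500.3992),
    (325100.2407, 7682500.3992),
    (325100.2407, 7682500.6218),
]
clip_array = aDouble(*[coord for point in points for coord in point])  # 2D WCS coords, flattened
raster.ClipBoundary(clip_array)   # invoke the method instead of assigning to it
raster.ClippingEnabled = True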
4
0
79,129,809
2024-10-27
https://stackoverflow.com/questions/79129809/use-beautiful-soup-to-count-title-links
I am attempting to write a code that keeps track of the text for the links in the left handed gray box on this webpage. In this case the code should return Valykrie, The Acid Baby Here is the code I am trying to use: import requests from bs4 import BeautifulSoup url = 'https://www.mountainproject.com/area/109928429/aasgard-sentinel' page = requests.get(url) soup = BeautifulSoup(page.text, "html.parser") for link in soup.findAll('a', class_= 'new-indicator'): print(link) It is not working (otherwise I wouldn't be here!) I'm pretty new to BeautifulSoup, and coding in general. No matter how much I inspect the page source I can't seem to figure out the inputs to the findAll to get it to return what I want!
Search for the table with a specific id, then the rows: import requests from bs4 import BeautifulSoup url = 'https://www.mountainproject.com/area/109928429/aasgard-sentinel' page = requests.get(url) soup = BeautifulSoup(page.text, "html.parser") table = soup.find(lambda tag: tag.name=='table' and tag.has_attr('id') and tag['id']=="left-nav-route-table") rows = table.findAll(lambda tag: tag.name=='a') for member in rows: print(member.text) Output: Acid Baby Valkyrie, The If you would also like to see the href link, add print(member.get('href'))
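The same lookup can also be written more compactly, since find accepts the id directly; this sketch prints the link targets as well:

table = soup.find('table', id='left-nav-route-table')
for a in table.find_all('a'):
    print(a.text, a.get('href'))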
2
1
79,128,733
2024-10-26
https://stackoverflow.com/questions/79128733/hiding-grouped-slash-commands-in-dms
How can I hide grouped slash commands in DMs? I provided a little python code sample below, with a normal slash command (bot.tree) and a grouped slash command (class TestGroup). @discord.app_commands.guild_only() works perfectly fine with the hidden-test command but not the other one, I've tried many approaches to the situation and nothing seems to work for me. import discord from discord.ext import commands # Setup intents = discord.Intents.all() bot = commands.Bot(command_prefix="!", intents=intents) # Simple login confirmation @bot.event async def on_ready(): await bot.tree.sync() print(f"{bot.user} is online!") # @discord.app_commands.guild_only() hides the command if the user try's to use it in DMs. @bot.tree.command(name="hidden-test", description="This command can only be used in guilds!") @discord.app_commands.guild_only() async def hidden_command(interaction: discord.Interaction): await interaction.response.send_message("This command can only be used in guilds!", ephemeral=True) # This is a discord slash command group, I tried testing "@discord.app_commands.guild_only()" here, it doesn't work. class TestGroup(discord.app_commands.Group): def __init__(self): super().__init__(name="hidden", description="These command can only be used in guilds!") @discord.app_commands.command(name="true", description="This command can only be used in guilds!") @discord.app_commands.guild_only() async def true_command(self, interaction: discord.Interaction): await interaction.response.send_message("This command can only be used in guilds! (not really :d)", ephemeral=True) bot.tree.add_command(TestGroup()) # Replace with your bot token, if you're going to test it. bot.run("YOUR-TOKEN") Tried using the app_commands decorator any many more things. Result: Slash command showing up in my DM with the bot. Expected Result: Slash command not showing up in my DMs with the bot, just like the "hidden-test" command.
You were using the decorator @discord.app_commands.guild_only() on group made using subclass method, however it applies only on individual commands-group or GroupCog method, instead use guild_only=True in super().__init__() for your method of making group,not the decorator. That's the only change I have made import discord from discord import app_commands from discord.ext import commands # Setup intents = discord.Intents.all() bot = commands.Bot(command_prefix="!", intents=intents) # Simple login confirmation @bot.event async def on_ready(): await bot.tree.sync() print(f"{bot.user} is online!") # @discord.app_commands.guild_only() hides the command if the user try's to use it in DMs. @bot.tree.command(name="hidden-test", description="This command can only be used in guilds!") @discord.app_commands.guild_only() async def hidden_command(interaction: discord.Interaction): await interaction.response.send_message("This command can only be used in guilds!", ephemeral=True) # This is a discord slash command group, I tried testing "@discord.app_commands.guild_only()" here, it doesn't work. class TestGroup(discord.app_commands.Group): def __init__(self): super().__init__(name="hidden", description="These command can only be used in guilds!",guild_only=True) @discord.app_commands.command(name="true", description="This command can only be used in guilds!") @discord.app_commands.guild_only() async def true_command(self, interaction: discord.Interaction): await interaction.response.send_message("This command can only be used in guilds! (not really :d)", ephemeral=True) bot.tree.add_command(TestGroup()) # Replace with your bot token, if you're going to test it. bot.run("token") References: https://fallendeity.github.io/discord.py-masterclass/slash-commands/#app_commandsguild_only https://fallendeity.github.io/discord.py-masterclass/slash-commands/#__tabbed_8_3
2
0
79,129,581
2024-10-26
https://stackoverflow.com/questions/79129581/how-to-determine-if-a-large-integer-is-a-power-of-3-in-python
I'm trying to determine if a given positive integer N is a power of 3, i.e., if there exists a nonnegative integer x such that 3^x = N. The challenge is that N can be extremely large, with up to 10^7 digits. Here's the logic I want to implement: If N is less than or equal to 0, return -1. Use logarithmic calculations to determine if N is a power of 3. If it is a power of 3, print the value of x; otherwise, print -1. I've tried the following code, but I'm concerned about precision issues with large integers: import math def is_power_of_three(N): if N <= 0: return -1 log3_N = math.log10(N) / math.log10(3) if abs(log3_N - round(log3_N)) < 1e-10: return int(round(log3_N)) else: return -1 # Example usage: N = int(input("Enter a number: ")) print(is_power_of_three(N)) Question: Is there a more efficient way to check if N is a power of 3, especially for very large values? How can I handle precision issues in Python when calculating logarithms for such large integers? Are there alternative methods to achieve the same goal?
Any approach that involves repeated division is going to be slow with numbers that large. Instead, consider using the bit length of the number (effectively the ceiling of its base 2 logarithm) to approximate the corresponding power of three, then check to see if it is indeed equal: import math def what_power_of_3(n): if n <= 0: return -1 k = math.floor(n.bit_length() * (math.log(2) / math.log(3))) if n == 3**k: return k else: return -1 This is fast: the test requires only a single exponentiation, so for e.g. 3**10000000, it requires only a few seconds on my smartphone. A brief sketch to show why this is correct: Because of the equality check, if n is not a power of 3, the answer will always be correct (because n == 3**k cannot be true). So it suffices to prove that this answer always finds the correct k when n == 3 ** k. Let k > 0, n = 3 ** k, and t = n.bit_length(). Then, 3 ** k < 2 ** t < 3 ** (k + 1) by definition of bit_length(). Thus k < t * log(2) / log(3) < k + 1, and so n.bit_length() * (math.log(2) / math.log(3)) is going to be a fractional value that lies strictly between k and k+1; thus, the floor of that value will be exactly k.
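A quick sanity check of the function above:

assert what_power_of_3(1) == 0
assert what_power_of_3(10) == -1
assert what_power_of_3(3**100000) == 100000
assert what_power_of_3(3**100000 + 1) == -1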
8
13
79,129,491
2024-10-26
https://stackoverflow.com/questions/79129491/writing-interdependent-if-else-statements
Is there any advantage to one of these over the other? Also, is there any better code than these to achieve the goal? My intuition is that in number 2, since it has already checked for x or y, the check for y is more efficient?

# 1
if x or y:
    do some stuff
if y:
    do some OTHER stuff

# 2
if x or y:
    do some stuff
    if y:
        do some OTHER stuff
The second one is marginally more efficient, because it won't redundantly test for y again in the case where x and y are both False. In cases where x and y are often False (i.e. the first if is rarely entered), this results in a slight improvement in performance. However, it should be noted that this improvement is very slight, and there is a tradeoff to making this change: the code becomes less readable due to the extra nesting level of if y. If y is expensive to compute, it will be better to pre-compute (cache) the condition once rather than evaluating it twice. That will improve performance much more than micro-optimizing the arrangement of these if statements.
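A small sketch of the caching idea; expensive_x, expensive_y, do_some_stuff and do_some_other_stuff are hypothetical stand-ins for however the conditions and actions are really computed:

x_ok = expensive_x()
y_ok = expensive_y()   # evaluated exactly once

if x_ok or y_ok:
    do_some_stuff()
    if y_ok:
        do_some_other_stuff()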
2
3
79,127,647
2024-10-26
https://stackoverflow.com/questions/79127647/tensorflow-docker-not-using-gpu
I'm trying to get Tensorflow working on my Ubuntu 24.04.1 with a GPU. According to this page: Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA® driver So I'm trying to use Docker. I'm checking to ensure my GPU is working with Docker by running docker run --gpus all --rm nvidia/cuda:12.6.2-cudnn-runtime-ubuntu24.04 nvidia-smi. The output of that is: ========== == CUDA == ========== CUDA Version 12.6.2 Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. This container image and its contents are governed by the NVIDIA Deep Learning Container License. By pulling and using the container, you accept the terms and conditions of this license: https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience. Sat Oct 26 01:16:50 2024 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 560.35.03 Driver Version: 560.35.03 CUDA Version: 12.6 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA TITAN RTX Off | 00000000:01:00.0 Off | N/A | | 41% 40C P8 24W / 280W | 1MiB / 24576MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | No running processes found | +-----------------------------------------------------------------------------------------+ (Side note, I'm not using the command they suggest because docker run --gpus all --rm nvidia/cuda nvidia-smi doesn't work due to nvidia/cuda not having a latest tag anymore) So it looks to be working. However when I run: docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu \ python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))" The output is: 2024-10-26 01:20:51.021242: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered WARNING: All log messages before absl::InitializeLog() is called are written to STDERR E0000 00:00:1729905651.033544 1 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered E0000 00:00:1729905651.037491 1 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2024-10-26 01:20:51.050486: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. W0000 00:00:1729905652.350499 1 gpu_device.cc:2344] Cannot dlopen some GPU libraries. 
Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... [] Which indicates that there is no GPU detected by Tensorflow. What am I doing wrong here?
I don't think you're doing anything wrong, but I'm concerned that the image may be a "pip install" short of a complete image. I'm running a different flavor of linux, but to start off with I had to make sure I had my gpu available to docker (see here Add nvidia runtime to docker runtimes ) and I upgraded my cuda version to the latest. Even after doing all this I had the same error as you. So I logged into the container as follows: docker run -it --rm --runtime=nvidia --gpus all tensorflow/tensorflow:latest-gpu /bin/bash and ran pip install tensorflow[and-cuda] Some of the dependencies were there and some or the dependencies had to be installed because they were missing. This is undesireable because you'd expect everything necessary to be there to run (maybe they'll fix the image in the future) After it finished I ran python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))" and it finally found my GPU You're going to want to create your own docker image using their docker image as a base. So your dockerfile may look like something like: # Use the official TensorFlow GPU base image FROM tensorflow/tensorflow:latest-gpu # Install TensorFlow with CUDA support RUN pip install tensorflow[and-cuda] # Shell CMD ["bash"]
2
1
79,126,303
2024-10-25
https://stackoverflow.com/questions/79126303/how-to-efficiently-multiply-all-non-diagonal-elements-by-a-constant-in-a-pandas
I have a square cost matrix stored as a pandas DataFrame. Rows and columns represent positions [i, j], and I want to multiply all off-diagonal elements (where i != j) by a constant c, without using any for loops for performance reasons. Is there an efficient way to achieve this in pandas or do I have to switch to numpy and then back to pandas to perform this task? Example import pandas as pd # Sample DataFrame cost_matrix = pd.DataFrame([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]) # Constant c = 4 # Desired output # 1 8 12 # 16 5 24 # 28 16 9
Build a boolean mask with numpy.identity and update the underlying array in place: cost_matrix.values[np.identity(n=len(cost_matrix))==0] *= c output: 0 1 2 0 1 8 12 1 16 5 24 2 28 32 9 Intermediate: np.identity(n=len(cost_matrix))==0 array([[False, True, True], [ True, False, True], [ True, True, False]]) NB. for .values to be a view of the underlying array, the DataFrame must have been constructed from an homogeneous block. If not, it should be converted to one using cost_matrix = cost_matrix.copy(). Alternative @PaulS suggested to modify all the values and restore the diagonal. I would use: d = np.diag(cost_matrix) cost_matrix *= c np.fill_diagonal(cost_matrix.values, d) Timings The mask approach seems to be faster on small/medium size inputs, and the diagonal restoration faster on large inputs. (My previous timings were performed online and I don't reproduce the results with perfplot). NB. the timings below were computed with c=1 or c=-1 to avoid increasing the values exponentially during the timing.
4
7
79,126,042
2024-10-25
https://stackoverflow.com/questions/79126042/how-to-efficiently-remove-overlapping-circles-from-the-dataset
I have a dataset of about 20,000 records that represent global cities of population > 20,000. I have estimated radius which more or less describes the size of the city. It's not exactly accurate but for my purposes it will be enough. that I'm loading it into my Panda dataframe object. Below is the sample name_city,country_code,latitude,longitude,geohash,estimated_radius,population Vitry-sur-Seine,FR,48.78716,2.40332,u09tw9qjc3v3,1000,81001 Vincennes,FR,48.8486,2.43769,u09tzkx5dr13,500,45923 Villeneuve-Saint-Georges,FR,48.73219,2.44925,u09tpxrmxdth,500,30881 Villejuif,FR,48.7939,2.35992,u09ttdwmn45z,500,48048 Vigneux-sur-Seine,FR,48.70291,2.41357,u09tnfje022n,500,26692 Versailles,FR,48.80359,2.13424,u09t8s6je2sd,1000,85416 Vélizy-Villacoublay,FR,48.78198,2.19395,u09t9bmxdspt,500,21741 Vanves,FR,48.82345,2.29025,u09tu059nwwp,500,26068 Thiais,FR,48.76496,2.3961,u09tqt2u3pmt,500,29724 Sèvres,FR,48.82292,2.21757,u09tdryy15un,500,23724 Sceaux,FR,48.77644,2.29026,u09tkp7xqgmw,500,21511 Saint-Mandé,FR,48.83864,2.41579,u09tyfz1eyre,500,21261 Saint-Cloud,FR,48.84598,2.20289,u09tfhhh7n9u,500,28839 Paris,FR,48.85341,2.3488,u09tvmqrep8n,12000,2138551 Orly,FR,48.74792,2.39253,u09tq6q1jyzt,500,20528 Montrouge,FR,48.8162,2.31393,u09tswsyyrpr,500,38708 Montreuil,FR,48.86415,2.44322,u09tzx7n71ub,2000,111240 Montgeron,FR,48.70543,2.45039,u09tpf83dnpn,500,22843 Meudon,FR,48.81381,2.235,u09tdy73p38y,500,44652 Massy,FR,48.72692,2.28301,u09t5yqqvupx,500,38768 Malakoff,FR,48.81999,2.29998,u09tsr6v13tr,500,29420 Maisons-Alfort,FR,48.81171,2.43945,u09txtbkg61z,1000,53964 Longjumeau,FR,48.69307,2.29431,u09th0q9tq1s,500,20771 Le Plessis-Robinson,FR,48.78889,2.27078,u09te9txch23,500,22510 Le Kremlin-Bicêtre,FR,48.81471,2.36073,u09ttwrn2crz,500,27867 Le Chesnay,FR,48.8222,2.12213,u09t8rc3cjwz,500,29154 La Celle-Saint-Cloud,FR,48.85029,2.14523,u09tbufje6p6,500,21539 Ivry-sur-Seine,FR,48.81568,2.38487,u09twq8egqrc,1000,57897 Issy-les-Moulineaux,FR,48.82104,2.27718,u09tezd5njkr,1000,61447 Fresnes,FR,48.75568,2.32241,u09tkgenkj6r,500,24803 Fontenay-aux-Roses,FR,48.79325,2.29275,u09ts4t92cn3,500,24680 Clamart,FR,48.80299,2.26692,u09tes6dp0dn,1000,51400 Choisy-le-Roi,FR,48.76846,2.41874,u09trn12bez7,500,35590 Chevilly-Larue,FR,48.76476,2.3503,u09tmmr7mfns,500,20125 Châtillon,FR,48.8024,2.29346,u09tshnn96xx,500,32383 Châtenay-Malabry,FR,48.76507,2.26655,u09t7t6mn7yj,500,32715 Charenton-le-Pont,FR,48.82209,2.41217,u09twzu3r9hq,500,30910 Cachan,FR,48.79632,2.33661,u09tt5j7nvqd,500,26540 Bagnolet,FR,48.86667,2.41667,u09tyzzubrxb,500,33504 Bagneux,FR,48.79565,2.30796,u09tsdbx727w,500,38900 Athis-Mons,FR,48.70522,2.39147,u09tn6t2mr16,500,31225 Alfortville,FR,48.80575,2.4204,u09txhf6p7jp,500,37290 Quinze-Vingts,FR,48.84656,2.37439,u09tyh0zz6c8,500,26265 Croulebarbe,FR,48.81003,2.35403,u09tttd5hc5f,500,20062 Gare,FR,48.83337,2.37513,u09ty1cdbxcq,1000,75580 Maison Blanche,FR,48.82586,2.3508,u09tv2rz1xgx,1000,64302 Below is the visual representation of the data sample: My goal is to find an efficient algorithm that would remove the intersecting circles and only keep the one with largest population. My initial approach was to determine which circles are intersecting using haversine formula. Problem was that to check every record for intersections with others, it needs to traverse entire dataset. The time complexity of this was too high. 
My second approach was to segregate the dataset by country code, and run the comparisons by chunks: def _remove_intersecting_circles_for_country(df_country): """Helper function to remove intersections within a single country.""" indices_to_remove = set() for i in range(len(df_country)): for j in range(i + 1, len(df_country)): distance = haversine(df_country['latitude'].iloc[i], df_country['longitude'].iloc[i], df_country['latitude'].iloc[j], df_country['longitude'].iloc[j]) if distance < df_country['estimated_radius'].iloc[i] + df_country['estimated_radius'].iloc[j]: if df_country['population'].iloc[i] < df_country['population'].iloc[j]: indices_to_remove.add(df_country.index[i]) else: indices_to_remove.add(df_country.index[j]) return indices_to_remove all_indices_to_remove = set() for country_code in df['country_code'].unique(): df_country = df[df['country_code'] == country_code] indices_to_remove = _remove_intersecting_circles_for_country(df_country) all_indices_to_remove.update(indices_to_remove) new_df = df.drop(index=all_indices_to_remove) return new_df This has significantly improved the performance because to check every record we only need to check against all the records with the same country_code. But that still makes a lot of unnecessary comparisons
Once you have the circles as polygons, determining intersections between polygons is very fast if you use a spatial index to do so. So, you can: buffer the points to circles. In WGS84 this would be imprecise, so the buffering needs to be done via a sidestep to an equidistant projection. calculating which circles intersect can be done very fast using a spatial index. E.g. geopandas.sjoin uses a spatial index under the hood. now you can determine which circles intersect a circle representing a city with a larger population and remove those. Result (red: original cities, blue: retained cities): Code: from pathlib import Path import geopandas as gpd from matplotlib import pyplot as plt from pyproj import CRS, Transformer from shapely.geometry import Point from shapely.ops import transform def geodesic_point_buffer(lat, lon, distance): # Azimuthal equidistant projection aeqd_proj = CRS.from_proj4( f"+proj=aeqd +lat_0={lat} +lon_0={lon} +x_0=0 +y_0=0") tfmr = Transformer.from_proj(aeqd_proj, aeqd_proj.geodetic_crs) buf = Point(0, 0).buffer(distance) # distance in metres return transform(tfmr.transform, buf) # Read the csv file with data #csv_path = Path(__file__).resolve().with_suffix(".csv") csv_path = "https://raw.githubusercontent.com/theroggy/pysnippets/refs/heads/main/pysnippets/stackoverflow_questions/2024/Q4/2024-10-25_circles.csv" points_df = gpd.read_file(csv_path, autodetect_type=True) # Convert the points to circles by buffering them points_buffer_gdf = gpd.GeoDataFrame( points_df, geometry=points_df.apply( lambda row : geodesic_point_buffer(row.latitude, row.longitude, row.estimated_radius), axis=1 ), crs=4326, ) # Determine the intersecting city buffers (result includes self-intersections) intersecting_gdf = points_buffer_gdf.sjoin(points_buffer_gdf) # Get all city buffers that intersect a city with a larger population intersecting_larger_population_df = intersecting_gdf.loc[ intersecting_gdf.population_left < intersecting_gdf.population_right ] # Remove the city buffers that intersect with a larger population city buffer result_gdf = points_buffer_gdf[ ~points_buffer_gdf.index.isin(intersecting_larger_population_df.index) ] # Plot result ax = points_buffer_gdf.boundary.plot(color="red") result_gdf.boundary.plot(color="blue", ax=ax) plt.show()
3
4
79,127,523
2024-10-25
https://stackoverflow.com/questions/79127523/singleton-different-behavior-when-using-class-and-dict-to-store-the-instance
Why do these two base classes result in the child objects having different behavior? class Base: _instance: "Base" = None def __new__(cls) -> "Base": if cls._instance is None: cls._instance = super().__new__(cls) return cls._instance class A(Base): def foo(self): return "foo" class B(Base): def quz(self): return "quz" a = A() b = B() print(id(a)) print(id(b)) 140035075937792 140035075948400 On the other hand from typing import Dict class Base: _instances: Dict[int, "Base"] = {} def __new__(cls) -> "Base": if 0 not in cls._instances: cls._instances[0] = super().__new__(cls) return cls._instances[0] class A(Base): def foo(self): return "foo" class B(Base): def quz(self): return "quz" a = A() b = B() print(id(a)) print(id(b)) 140035075947296 140035075947296
When a = A() is executed, the __new__ method is called with the class A as its argument. This sets the value of the class attribute A._instance. Likewise, b = B() sets the value of B._instance. In the first case, the original value of Base._instance, A._instance and B._instance is None, which is an immutable object; the assignment cls._instance = ... therefore creates a new attribute on A (or B) that shadows Base._instance, so changing it in A or B does not affect the other two classes. In the second case, A._instances, B._instances and Base._instances all point to the same dictionary. Since a dictionary is a mutable object, modifying that dictionary via one class affects all three classes.
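If what you actually want is one instance per subclass while keeping the dictionary approach, a minimal sketch (my own variation, not from the question) is to key the shared dictionary by the class itself instead of a constant:

from typing import Dict

class Base:
    _instances: Dict[type, "Base"] = {}

    def __new__(cls) -> "Base":
        # use the concrete class as the key, not a fixed key like 0
        if cls not in cls._instances:
            cls._instances[cls] = super().__new__(cls)
        return cls._instances[cls]

class A(Base):
    pass

class B(Base):
    pass

assert A() is A() and B() is B()
assert A() is not B()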
3
3
79,126,854
2024-10-25
https://stackoverflow.com/questions/79126854/how-to-hint-argument-to-a-function-as-dictionary-with-parent-class-in-python
I would like to hint a function as a mapping between instances of a class or its children and a value. takes_mapping below is an example. However, I am getting a static typing error when I use the following: from collections.abc import Mapping class Parent: pass class Child(Parent): pass assert issubclass(Child, Parent) def takes_mapping(mapping: Mapping[Parent, int]): return child = Child() my_dict: dict[Child, int] = {child: 1} my_mapping: Mapping[Child, int] = {child: 1} takes_mapping(my_dict) # typing error... takes_mapping(my_mapping) # same basic error, involving invariance (see below) Pyright generates the following error: Argument of type "dict[Child, int]" cannot be assigned to parameter "mapping" of type "Mapping[Parent, int]" in function "takes_mapping" "dict[Child, int]" is not assignable to "Mapping[Parent, int]" Type parameter "_KT@Mapping" is invariant, but "Child" is not the same as "Parent" reportArgumentType How can I hint the argument to take mapping in such a way that the keys may be an instance of Parent or any of its children (without typing errors)? In my use case, we may introduce additional children of Parent and it would be nice we didn't have to couple the hint to the hierarchy, i.e., Union will not really express what's desired since it depends on the specific unioned types.
The problem is that if the function is typed as taking a Mapping[Parent, ...] the body of the function is expected to try to access that dict with Parent keys, and that's likely not going to work if you pass in a dict[Child, ...]. (It could work depending how you implement __hash__, and you can make an argument for why mypy should allow it, but you can see why it also might make sense for it to be skittish.) To fix this I think you want to use a TypeVar, which allows the type to be narrowed within the bound of Parent: from collections.abc import Mapping from typing import TypeVar class Parent: pass class Child(Parent): pass assert issubclass(Child, Parent) _Parent = TypeVar("_Parent", bound=Parent) def takes_mapping(mapping: Mapping[_Parent, int]): # do stuff with the mapping, with some potentially-narrowed type for the mapping keys return child = Child() my_dict: dict[Child, int] = {child: 1} my_mapping: Mapping[Child, int] = {child: 1} takes_mapping(my_dict) # ok (_Parent is bound to the Child type) takes_mapping(my_mapping) # ok (same)
2
2
79,123,288
2024-10-24
https://stackoverflow.com/questions/79123288/how-can-i-reliably-get-the-module-and-class-of-the-current-class-method-in-pytho
I've encountered situations where standard methods like __module__ and __class__ become unreliable due to inheritance hierarchies or metaclass-based class creation. I need a robust approach that can accurately identify the module and class, regardless of the complexity of the class structure. Here's an example of a potential issue: class BaseClass: @classmethod def mymethod(cls): print(cls.__module__, cls.__name__) class DerivedClass(BaseClass): pass DerivedClass.mymethod() This will output __main__ DerivedClass, which might not be the desired result if you need to differentiate between the base class and derived class methods. I'm looking for a solution that can handle scenarios like these and provide accurate module and class information, even in the presence of inheritance, metaclasses, and other advanced Python constructs.
To get the module and class of the defining type, you can look at the qualified name. With this approach, the derived class could be defined in a separate module entirely. import logging class BaseClass: @classmethod def mymethod(cls): _, _, methodname, _ = logging.getLogger().findCaller() func = getattr(cls, methodname) print("module:", func.__module__) print("method name:", methodname) print("qualname:", func.__qualname__) print("defining class:", func.__qualname__.split(".")[0]) class DerivedClass(BaseClass): pass DerivedClass.mymethod() Note that direct invocations of findCaller() were bugged until Python 3.11. A somewhat deviant approach which still works in older Python versions: class BaseClass: @classmethod def mymethod(cls): methodname = (lambda:0).__qualname__.split(".")[1] func = getattr(cls, methodname) print("module:", func.__module__) print("method name:", methodname) print("qualname:", func.__qualname__) print("defining class:", func.__qualname__.split(".")[0]) class DerivedClass(BaseClass): pass DerivedClass.mymethod()
3
5
79,126,618
2024-10-25
https://stackoverflow.com/questions/79126618/formatting-nested-square-roots-in-sympy
I'm working with sympy to obtain symbolic solutions of equation. This is my code: import sympy as sp # Define the symbolic variable x = sp.symbols('x') # Define f(x) f = ((x**2 - 2)**2 - 2)**2 - 2 # Solve the equation f(x) = 0 solutions = sp.solve(f, x) # Filter only the positive solutions positive_solutions = [sol for sol in solutions if sol.is_real and sol > 0] # Print the positive solutions print("The positive solutions of the equation f(x) = 0 are:") for sol in positive_solutions: print(sol) I'm using SymPy to solve the equation and I am able to obtain the positive solutions. However, the solutions are returned with nested square roots in a way that makes the inner roots appear on the left side, like this: sqrt(2 - sqrt(2 - sqrt(2))) sqrt(2 - sqrt(sqrt(2) + 2)) sqrt(sqrt(2 - sqrt(2)) + 2) sqrt(sqrt(sqrt(2) + 2) + 2) I would like the nested square roots to be formatted in a way that they all appear to the right, like this: sqrt(2 - sqrt(2 - sqrt(2))) sqrt(2 - sqrt(sqrt(2) + 2)) sqrt(2 + sqrt(2 - sqrt(2))) sqrt(2 + sqrt(sqrt(2) + 2)) Is there a way to adjust the output formatting of the solutions so that the nested square roots appear as desired? Any help would be greatly appreciated! Thank you!
Using your current code (applying reverse lexicographic order) print("The positive solutions of the equation f(x) = 0 are:") for sol in positive_solutions: pretty_print(sol, order='rev-lex') Resulting The positive solutions of the equation f(x) = 0 are: ________________ ╱ ________ ╲╱ 2 - ╲╱ 2 - √2 ________________ ╱ ________ ╲╱ 2 - ╲╱ 2 + √2 ________________ ╱ ________ ╲╱ 2 + ╲╱ 2 - √2 ________________ ╱ ________ ╲╱ 2 + ╲╱ 2 + √2 For further details sorting docs
3
1
79,127,002
2024-10-25
https://stackoverflow.com/questions/79127002/what-kind-of-sequence-are-range-and-bytearray
In the "Fluent Python" book by Luciano Ramalho (2nd ed.) he defines the concepts of container sequences and flat sequences: A container sequence holds references to the objects it contains, which may be of any type, while a flat sequence stores the value of its contents in its own memory space, not as distinct Python objects. Can we say that objects of type range and bytearray are flat sequences (I think that these objects can't contain references but not sure)? And is there a simple way to test whether a sequence is flat or not? I just found this post that gives a quote from the 1st edition of the "Fluent Python" book (I personally only have the 2nd edition): Flat sequences: str, bytes, bytearray, memoryview, and array.array hold items of one type. So, it seems that bytearray is a flat sequence. In the 2nd edition (p.22) the author says only that Flat sequences – Hold items of one simple type. Some examples: str, bytes, and array.array. Container sequences – Can hold items of different types, including nested containers. Some examples: list, tuple, and collections.deque.
You have it right about bytearray. It references an internal mutable memory space to hold objects of just one simple type (bytes), so it's a flat sequence, by the author's definition. A range object is a bit more tricky though. I'd say it doesn't match either of the criteria given by the author, though it's certainly a sequence. Rather than containing any values however, it just stores the start, stop and step arguments it's constructed with, and calculates what values it should contain on the fly, as they're needed. It doesn't reference them, as Python objects or otherwise, until then. This means you can create a range of any size in constant time, since nothing but a few integers needs to be allocated. A final note: The definitions of "flat sequence" and "container sequence" are not widely used in the Python development community. They may be useful in the context of your book to explain some differences in container behavior, but they won't be understood by Python programmers more widely. In general, the details of how a given container stores its contents are not usually something you need to care about. Leaving implementation details unspecified is often a deliberate choice by Python's design team, so that if a more efficient implementation becomes available, the code can be switched to it. This happened not long ago with Python's dictionaries, which became ordered (by insertion order) as a side effect of an implementation change. Because many of the previous implementation's details were unspecified, there was no break in backwards compatibility, since all the documented interfaces still worked the same way.
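As a quick illustration of the point about range (not from the book, just a sanity check you can run yourself): the object only stores start, stop and step, so the range object itself stays the same small size no matter how many values it describes, and indexing computes values on demand:

import sys

small = range(10)
huge = range(10**12)

print(sys.getsizeof(small))   # same small object size...
print(sys.getsizeof(huge))    # ...even for a range describing a trillion values
print(huge[10**11])           # values are computed on demand, in constant time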
3
5
79,123,379
2024-10-24
https://stackoverflow.com/questions/79123379/read-multiple-excel-files-to-dataframes-using-for-loop-by-reading-the-month-in-e
I have 12 Excel files. Each is based on a month of the year and therefore each filename ends in 'Month YYYY'. For example, the file for March of 2021 ends in 'March 2021.xlsx'. I want to read each Excel file, select certain columns, drop empty rows, then merge each file into one excel file as a named worksheet. However, I want to search the file's name, identify the month and then rename the second column to say that month. How do I add a code to have the month of each file be used as the 'new name' for the second column of each df? Here's an example using two months: File one: January 2021.xlsx A B 1 x 3 x File three: February 2021.xlsx A B 3 x 5 x I want to rename B to represent the month of the respective excel file and then merge to get: A January February 1 x 0 3 x x 5 0 x This is what I have done so far. #Store Excel files in a python list excel_files = list(Path(DATA_DIR).glob('*.xlsx')) #Read each file into a dataframe dfs = [] for excel_file in excel_files: df = pd.read_excel(excel_file,sheet_name='Sheet1',header=5,usecols='A,F',skipfooter=8) df.dropna(how='any', axis=0, inplace = True) df.rename(columns={'old-name': 'new-name'}, inplace=True) dfs.append(df) #Compile the list of dataframes to merge data_frames = [dfs[0], dfs[1],dfs[2] ... dfs[11]] #Merge all dataframes df_merged = reduce(lambda left,right: pd.merge(left,right,on=['A'], how='outer'), data_frames).fillna(0) I need help adding the code to have the month of each file be used as the 'new name' for the second column of each df?
I think your question is similar to this: Extract month, day and year from date regex. An advanced way to do this would be using regex, which is laid out a little in that prior post. A simpler way would be to split (or rsplit) the filename on spaces (' '), assuming that there is a space before the month as well as after it, e.g. excel_file = "first bit of names MONTH 2021.xlsx":
for excel_file in excel_files:
    new_name = str(excel_file).rsplit(' ', 2)[-2]  # creates a list ['first bit of names...', 'MONTH', '2021.xlsx'] and takes the 2nd-to-last element
    df = pd.read_excel(excel_file, sheet_name='Sheet1', header=5, usecols='A,F', skipfooter=8)
    df.dropna(how='any', axis=0, inplace=True)
    df.rename(columns={'old-name': new_name}, inplace=True)
    dfs.append(df)
I think this answers the question, but you may have another problem getting the "old name" in the way you propose. Hope this helps *edited to match comment
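If the filenames turn out to be less uniform (extra spaces, different prefixes), the regex route is a bit more robust. A small sketch, assuming every file ends in "<Month> <Year>.xlsx" (the helper name is just illustrative):

import re

def month_from_filename(excel_file):
    # capture the word immediately before the 4-digit year and the .xlsx extension
    match = re.search(r'([A-Za-z]+)\s+\d{4}\.xlsx$', str(excel_file))
    return match.group(1) if match else None

month_from_filename('sales data March 2021.xlsx')  # 'March'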
2
1
79,123,556
2024-10-24
https://stackoverflow.com/questions/79123556/seeking-clarification-on-the-sql-alchemy-connection-pool-status
I am running a python (v3.9.16) application from a main thread while a separate worker thread runs an asyncio loop that makes SQL queries to a database (using aioodbc v0.5.0). Currently there are 4 asyncio tasks running in the worker thread. With the create_async_engine command, I have configured the connection pool size to 8 and the max overflow to 10. I added support to monitor the connection pool status with the following code: pool_status = session.get_bind().pool.status() Attached below is a snippet from the log of the pool_status, which is displayed during each query made to the database. The Pool size is 8, which makes sense given my create_async_engine configuration. Can someone please provide clarification on the meaning / behavior of other three components of the pool_status: Connections in pool, Current Overflow, Checked out connections? For example, the Current Overflow is confusing since it shows a negative value. Pool size: 8 Connections in pool: 1 Current Overflow: -7 Current Checked out connections: 0 Pool size: 8 Connections in pool: 1 Current Overflow: -7 Current Checked out connections: 0 Pool size: 8 Connections in pool: 0 Current Overflow: -7 Current Checked out connections: 1 Pool size: 8 Connections in pool: 2 Current Overflow: -6 Current Checked out connections: 0 Pool size: 8 Connections in pool: 2 Current Overflow: -6 Current Checked out connections: 0 Pool size: 8 Connections in pool: 2 Current Overflow: -6 Current Checked out connections: 0 Pool size: 8 Connections in pool: 2 Current Overflow: -6 Current Checked out connections: 0 Pool size: 8 Connections in pool: 1 Current Overflow: -6 Current Checked out connections: 1 Pool size: 8 Connections in pool: 2 Current Overflow: -6 Current Checked out connections: 0 Pool size: 8 Connections in pool: 1 Current Overflow: -6 Current Checked out connections: 1 Pool size: 8 Connections in pool: 1 Current Overflow: -6 Current Checked out connections: 1 Pool size: 8 Connections in pool: 1 Current Overflow: -6 Current Checked out connections: 1 Pool size: 8 Connections in pool: 2 Current Overflow: -5 Current Checked out connections: 1 Pool size: 8 Connections in pool: 2 Current Overflow: -5 Current Checked out connections: 1 Pool size: 8 Connections in pool: 3 Current Overflow: -5 Current Checked out connections: 0 Pool size: 8 Connections in pool: 3 Current Overflow: -5 Current Checked out connections: 0 Pool size: 8 Connections in pool: 3 Current Overflow: -5 Current Checked out connections: 0 Pool size: 8 Connections in pool: 1 Current Overflow: -5 Current Checked out connections: 2
First, the easy ones:
Pool size: the maximum number of connections the pool will keep open without going into overflow.
Connections in pool: the number of idle connections (available for new tasks) currently sitting in the pool. A connection returns to the pool once the task using it has finished.
Current Checked out connections: the number of connections currently in use by tasks, i.e. the connections that are unavailable for new tasks.
Current Overflow: the number of connections created beyond the configured pool size because of demand (when the number is positive). If it is negative there may be several reasons for it: negative values can sometimes appear in the log output, especially if connections are not being returned correctly, or if there is a misconfiguration between pool size and max overflow.
To fix it:
=> Make sure that all connections are properly released back to the pool after each use. Connections should be closed or returned to the pool even if exceptions occur in the code.
=> Ensure aioodbc compatibility with the pool settings, as some async drivers may have internal pool management conflicts when used alongside certain pool configurations in SQLAlchemy.
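A minimal sketch of the "make sure connections are returned" point, assuming the SQLAlchemy 2.0 asyncio API (DB_URL and run_query are placeholders, not names from your code): acquiring sessions through async context managers guarantees the connection is checked back into the pool even if a query raises:

from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker

engine = create_async_engine(DB_URL, pool_size=8, max_overflow=10)  # DB_URL: your aioodbc connection string
Session = async_sessionmaker(engine, expire_on_commit=False)

async def run_query(stmt):
    # the context manager checks a connection out and always checks it back in
    async with Session() as session:
        result = await session.execute(stmt)
        return result.all()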
2
0
79,121,678
2024-10-24
https://stackoverflow.com/questions/79121678/how-to-run-computations-on-other-rows-efficiently
I am working with a Polars DataFrame and need to perform computations on each row using values from other rows. Currently, I am using the map_elements method, but it is not efficient. In the following example, I add two new columns to a DataFrame: sum_lower: The sum of all elements that are smaller than the current element. max_other: The maximum value from the DataFrame, excluding the current element. Here is my current implementation: import polars as pl COL_VALUE = "value" def fun_sum_lower(current_row, df): tmp_df = df.filter(pl.col(COL_VALUE) < current_row[COL_VALUE]) sum_lower = tmp_df.select(pl.sum(COL_VALUE)).item() return sum_lower def fun_max_other(current_row, df): tmp_df = df.filter(pl.col(COL_VALUE) != current_row[COL_VALUE]) max_other = tmp_df.select(pl.col(COL_VALUE)).max().item() return max_other if __name__ == '__main__': df = pl.DataFrame({COL_VALUE: [3, 7, 1, 9, 4]}) df = df.with_columns( pl.struct([COL_VALUE]) .map_elements(lambda row: fun_sum_lower(row, df), return_dtype=pl.Int64) .alias("sum_lower") ) df = df.with_columns( pl.struct([COL_VALUE]) .map_elements(lambda row: fun_max_other(row, df), return_dtype=pl.Int64) .alias("max_other") ) print(df) The output of the above code is: shape: (5, 3) ┌───────┬───────────┬───────────┐ │ value ┆ sum_lower ┆ max_other │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═══════╪═══════════╪═══════════╡ │ 3 ┆ 1 ┆ 9 │ │ 7 ┆ 8 ┆ 9 │ │ 1 ┆ 0 ┆ 9 │ │ 9 ┆ 15 ┆ 7 │ │ 4 ┆ 4 ┆ 9 │ └───────┴───────────┴───────────┘ While this code works, it is not efficient due to the use of lambdas and row-wise operations. Is there a more efficient way to achieve this in Polars, without using lambdas, iterating over rows, or running Python code? I also tried using Polars methods: cum_sum, group_by_dynamic, and rolling, but I don't think those can be used for this task.
For your specific use case you don't really need join, you can calculate values with window functions. pl.Expr.shift() to exclude current row. pl.Expr.cum_sum() to calculate sum of all elements up to the current row. pl.Expr.max() to calculate max. pl.Expr.bottom_k() to calculate 2 largest elements so then we can take pl.Expr.min() as second largest. ( df .sort("value") .with_columns( sum_lower = pl.col.value.shift(1).cum_sum().fill_null(0), max_other = pl.when(pl.col.value.max() != pl.col.value) .then(pl.col.value.max()) .otherwise(pl.col.value.bottom_k(2).min()) ) ) shape: (5, 3) ┌───────┬───────────┬───────────┐ │ value ┆ sum_lower ┆ max_other │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═══════╪═══════════╪═══════════╡ │ 1 ┆ 0 ┆ 9 │ │ 3 ┆ 1 ┆ 9 │ │ 4 ┆ 4 ┆ 9 │ │ 7 ┆ 8 ┆ 9 │ │ 9 ┆ 15 ┆ 7 │ └───────┴───────────┴───────────┘ You can also use pl.DataFrame.with_row_index() to keep current order so you can revert to it at the end with pl.DataFrame.sort(). ( df.with_row_index() .sort("value") .with_columns( sum_lower = pl.col.value.shift(1).cum_sum().fill_null(0), max_other = pl.when(pl.col.value.max() != pl.col.value) .then(pl.col.value.max()) .otherwise(pl.col.value.bottom_k(2).min()) ) .sort("index") .drop("index") ) Another possible solution would be to use DuckDB integration with Polars. Using window functions, getting advantage of excellent DuckDB windows framing options. max(arg, n) to calculate top 2 largest elements. import duckdb duckdb.sql(""" select d.value, coalesce(sum(d.value) over( order by d.value rows unbounded preceding exclude current row ), 0) as sum_lower, max(d.value) over( rows between unbounded preceding and unbounded following exclude current row ) as max_other from df as d """).pl() shape: (5, 3) ┌───────┬───────────────┬───────────┐ │ value ┆ sum_lower ┆ max_other │ │ --- ┆ --- ┆ --- │ │ i64 ┆ decimal[38,0] ┆ i64 │ ╞═══════╪═══════════════╪═══════════╡ │ 1 ┆ 0 ┆ 9 │ │ 3 ┆ 1 ┆ 9 │ │ 4 ┆ 4 ┆ 9 │ │ 7 ┆ 8 ┆ 9 │ │ 9 ┆ 15 ┆ 7 │ └───────┴───────────────┴───────────┘ Or using lateral join: import duckdb duckdb.sql(""" select d.value, coalesce(s.value, 0) as sum_lower, m.value as max_other from df as d, lateral (select sum(t.value) as value from df as t where t.value < d.value) as s, lateral (select max(t.value) as value from df as t where t.value != d.value) as m """).pl() shape: (5, 3) ┌───────┬───────────┬───────────┐ │ value ┆ sum_lower ┆ max_other │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═══════╪═══════════╪═══════════╡ │ 3 ┆ 1 ┆ 9 │ │ 7 ┆ 8 ┆ 9 │ │ 1 ┆ 0 ┆ 9 │ │ 9 ┆ 15 ┆ 7 │ │ 4 ┆ 4 ┆ 9 │ └───────┴───────────┴───────────┘ duplicate values pure polars solution above works well if there're no duplicate values, but if there are, you can also work around it. Here're 2 examples depending on whether you want to keep original order or not: # not keeping original order ( df .select(pl.col.value.value_counts()).unnest("value") .sort("value") .with_columns( sum_lower = pl.col.value.shift(1).cum_sum().fill_null(0), max_other = pl.when(pl.col.value.max() != pl.col.value) .then(pl.col.value.max()) .otherwise(pl.col.value.bottom_k(2).min()), value = pl.col.value.repeat_by("count") ).drop("count").explode("value") ) # keeping original order ( df.with_row_index() .group_by("value").agg("index") .sort("value") .with_columns( sum_lower = pl.col.value.shift(1).cum_sum().fill_null(0), max_other = pl.when(pl.col.value.max() != pl.col.value) .then(pl.col.value.max()) .otherwise(pl.col.value.bottom_k(2).min()) ) .explode("index") .sort("index") .drop("index") )
8
4
79,125,764
2024-10-25
https://stackoverflow.com/questions/79125764/find-intersection-of-columns-from-different-polars-dataframes
I have a variable number of pl.DataFrames which share some columns (e.g. symbol and date). Each pl.DataFrame has a number of additional columns, which are not important for the actual task. The symbol columns do have exactly the same content (the different str values exist in every dataframe). The date columns are somewhat different in the way that they don't have the exact same dates in every pl.DataFrame. The actual task is to find the common dates per grouping (i.e. symbol) and filter each pl.DataFrame accordingly. Here are three example pl.DataFrames: import polars as pl df1 = pl.DataFrame( { "symbol": ["AAPL"] * 4 + ["GOOGL"] * 3, "date": [ "2023-01-01", "2023-01-02", "2023-01-03", "2023-01-04", "2023-01-02", "2023-01-03", "2023-01-04", ], "some_other_col": range(7), } ) df2 = pl.DataFrame( { "symbol": ["AAPL"] * 3 + ["GOOGL"] * 5, "date": [ "2023-01-02", "2023-01-03", "2023-01-04", "2023-01-01", "2023-01-02", "2023-01-03", "2023-01-04", "2023-01-05", ], "another_col": range(8), } ) df3 = pl.DataFrame( { "symbol": ["AAPL"] * 4 + ["GOOGL"] * 2, "date": [ "2023-01-02", "2023-01-03", "2023-01-04", "2023-01-05", "2023-01-03", "2023-01-04", ], "some_col": range(6), } ) DataFrame 1: shape: (7, 3) ┌────────┬────────────┬────────────────┐ │ symbol ┆ date ┆ some_other_col │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════════╪════════════════╡ │ AAPL ┆ 2023-01-01 ┆ 0 │ │ AAPL ┆ 2023-01-02 ┆ 1 │ │ AAPL ┆ 2023-01-03 ┆ 2 │ │ AAPL ┆ 2023-01-04 ┆ 3 │ │ GOOGL ┆ 2023-01-02 ┆ 4 │ │ GOOGL ┆ 2023-01-03 ┆ 5 │ │ GOOGL ┆ 2023-01-04 ┆ 6 │ └────────┴────────────┴────────────────┘ DataFrame 2: shape: (8, 3) ┌────────┬────────────┬─────────────┐ │ symbol ┆ date ┆ another_col │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════════╪═════════════╡ │ AAPL ┆ 2023-01-02 ┆ 0 │ │ AAPL ┆ 2023-01-03 ┆ 1 │ │ AAPL ┆ 2023-01-04 ┆ 2 │ │ GOOGL ┆ 2023-01-01 ┆ 3 │ │ GOOGL ┆ 2023-01-02 ┆ 4 │ │ GOOGL ┆ 2023-01-03 ┆ 5 │ │ GOOGL ┆ 2023-01-04 ┆ 6 │ │ GOOGL ┆ 2023-01-05 ┆ 7 │ └────────┴────────────┴─────────────┘ DataFrame 3: shape: (6, 3) ┌────────┬────────────┬──────────┐ │ symbol ┆ date ┆ some_col │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════════╪══════════╡ │ AAPL ┆ 2023-01-02 ┆ 0 │ │ AAPL ┆ 2023-01-03 ┆ 1 │ │ AAPL ┆ 2023-01-04 ┆ 2 │ │ AAPL ┆ 2023-01-05 ┆ 3 │ │ GOOGL ┆ 2023-01-03 ┆ 4 │ │ GOOGL ┆ 2023-01-04 ┆ 5 │ └────────┴────────────┴──────────┘ Now, the first step would be to find the common dates for every symbol. AAPL: ["2023-01-02", "2023-01-03", "2023-01-04"] GOOGL: ["2023-01-03", "2023-01-04"] That means, each pl.DataFrame needs to be filtered accordingly. 
The expected result looks like this: DataFrame 1 filtered: shape: (5, 3) ┌────────┬────────────┬────────────────┐ │ symbol ┆ date ┆ some_other_col │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════════╪════════════════╡ │ AAPL ┆ 2023-01-02 ┆ 1 │ │ AAPL ┆ 2023-01-03 ┆ 2 │ │ AAPL ┆ 2023-01-04 ┆ 3 │ │ GOOGL ┆ 2023-01-03 ┆ 5 │ │ GOOGL ┆ 2023-01-04 ┆ 6 │ └────────┴────────────┴────────────────┘ DataFrame 2 filtered: shape: (5, 3) ┌────────┬────────────┬─────────────┐ │ symbol ┆ date ┆ another_col │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════════╪═════════════╡ │ AAPL ┆ 2023-01-02 ┆ 0 │ │ AAPL ┆ 2023-01-03 ┆ 1 │ │ AAPL ┆ 2023-01-04 ┆ 2 │ │ GOOGL ┆ 2023-01-03 ┆ 5 │ │ GOOGL ┆ 2023-01-04 ┆ 6 │ └────────┴────────────┴─────────────┘ DataFrame 3 filtered: shape: (5, 3) ┌────────┬────────────┬──────────┐ │ symbol ┆ date ┆ some_col │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════════╪══════════╡ │ AAPL ┆ 2023-01-02 ┆ 0 │ │ AAPL ┆ 2023-01-03 ┆ 1 │ │ AAPL ┆ 2023-01-04 ┆ 2 │ │ GOOGL ┆ 2023-01-03 ┆ 4 │ │ GOOGL ┆ 2023-01-04 ┆ 5 │ └────────┴────────────┴──────────┘
You can use pl.DataFrame.join() with how="semi" parameter: semi Returns rows from the left table that have a match in the right table. on = ["symbol","date"] df1.join(df2, on=on, how="semi").join(df3, on=on, how="semi") df2.join(df1, on=on, how="semi").join(df3, on=on, how="semi") df3.join(df1, on=on, how="semi").join(df2, on=on, how="semi") shape: (5, 3) ┌────────┬────────────┬────────────────┐ │ symbol ┆ date ┆ some_other_col │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════════╪════════════════╡ │ AAPL ┆ 2023-01-02 ┆ 1 │ │ AAPL ┆ 2023-01-03 ┆ 2 │ │ AAPL ┆ 2023-01-04 ┆ 3 │ │ GOOGL ┆ 2023-01-03 ┆ 5 │ │ GOOGL ┆ 2023-01-04 ┆ 6 │ └────────┴────────────┴────────────────┘ shape: (5, 3) ┌────────┬────────────┬─────────────┐ │ symbol ┆ date ┆ another_col │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════════╪═════════════╡ │ AAPL ┆ 2023-01-02 ┆ 0 │ │ AAPL ┆ 2023-01-03 ┆ 1 │ │ AAPL ┆ 2023-01-04 ┆ 2 │ │ GOOGL ┆ 2023-01-03 ┆ 5 │ │ GOOGL ┆ 2023-01-04 ┆ 6 │ └────────┴────────────┴─────────────┘ shape: (5, 3) ┌────────┬────────────┬──────────┐ │ symbol ┆ date ┆ some_col │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞════════╪════════════╪══════════╡ │ AAPL ┆ 2023-01-02 ┆ 0 │ │ AAPL ┆ 2023-01-03 ┆ 1 │ │ AAPL ┆ 2023-01-04 ┆ 2 │ │ GOOGL ┆ 2023-01-03 ┆ 4 │ │ GOOGL ┆ 2023-01-04 ┆ 5 │ └────────┴────────────┴──────────┘ Or you could probably generalize it a bit: on = ["symbol","date"] dfs = [df1, df2, df3] # filter first dataframe on all others for df in dfs[1:]: dfs[0] = dfs[0].join(df, on=on, how="semi") # then filter all others on first one for i, df in enumerate(dfs[1:]): dfs[i] = df.join(dfs[0], on=on, how="semi") for df in dfs: print(df)
2
1
79,125,266
2024-10-25
https://stackoverflow.com/questions/79125266/pandas-html-generation-reproducible-output
I am writing a Pandas dataframe as HTML using this code import pandas as pd df = pd.DataFrame({ "a": [1] }) print(df.style.to_html()) I ran it once and it produced this output <style type="text/css"> </style> <table id="T_f9297"> <thead> <tr> <th class="blank level0" >&nbsp;</th> <th id="T_f9297_level0_col0" class="col_heading level0 col0" >a</th> </tr> </thead> <tbody> <tr> <th id="T_f9297_level0_row0" class="row_heading level0 row0" >0</th> <td id="T_f9297_row0_col0" class="data row0 col0" >1</td> </tr> </tbody> </table> But when I run the same program again a moment later it gives <style type="text/css"> </style> <table id="T_d628d"> <thead> <tr> <th class="blank level0" >&nbsp;</th> <th id="T_d628d_level0_col0" class="col_heading level0 col0" >a</th> </tr> </thead> <tbody> <tr> <th id="T_d628d_level0_row0" class="row_heading level0 row0" >0</th> <td id="T_d628d_row0_col0" class="data row0 col0" >1</td> </tr> </tbody> </table> I would like to get the same output each time. That is, the T_f9297 and T_d628d identifiers shouldn't change from one run to the next. How can I get that? I believe that I could generate HTML without any CSS styling and without the identifiers, but I do want CSS (I just omitted it from my example) and I'm happy to have the identifiers, as long as I get the same output given the same input data. I am using Python 3.11.7 and Pandas 2.1.4.
pandas.io.formats.style.Styler.to_html has a table_uuid parameter: table_uuid str, optional: Id attribute assigned to the <table> HTML element in the format: <table id="T_<table_uuid>" ..> If not provided it generates a random uuid. If set it will use the uuid provided: print(df.style.to_html(table_uuid="my_table_id")) Output: <style type="text/css"> </style> <table id="T_my_table_id"> <thead> <tr> <th class="blank level0" >&nbsp;</th> <th id="T_my_table_id_level0_col0" class="col_heading level0 col0" >a</th> </tr> </thead> <tbody> <tr> <th id="T_my_table_id_level0_row0" class="row_heading level0 row0" >0</th> <td id="T_my_table_id_row0_col0" class="data row0 col0" >1</td> </tr> </tbody> </table>
2
1
79,118,378
2024-10-23
https://stackoverflow.com/questions/79118378/how-to-save-and-load-spacy-encodings-in-a-polars-dataframe
I want to use Spacy to generate embeddings of text stored in a polars DataFrame and store the results in the same DataFrame. Next, I want to save this DataFrame to the disk and be able to load again as a polars DataFrame. The backtransformation from pandas to polars results in an error. This is the error message: ArrowInvalid: Could not convert Hello with type spacy.tokens.doc.Doc: did not recognize Python value type when inferring an Arrow data type Here is my code: from io import StringIO import polars as pl import pandas as pd import spacy nlp = spacy.load("de_core_news_sm") json_str = '[{"foo":"Hello","bar":6},{"foo":"What a lovely day","bar":7},{"foo":"Nice to meet you","bar":8}]' #Initalize and store DataFrame df = pl.read_json(StringIO(json_str)) df = df.with_columns(pl.col("foo").map_elements(lambda x: nlp(x)).alias("encoding")) df.to_pandas().to_pickle('pickled_df.pkl') #Load DataFrame df_loaded_pd = pd.read_pickle('pickled_df.pkl') df_loaded_pl = pl.from_pandas(df_loaded_pd) These are the package versions I used: # Name Version Build Channel pandas 2.2.3 py312hf9745cd_1 conda-forge polars 1.9.0 py312hfe7c9be_0 conda-forge spacy 3.7.2 py312h6db74b5_0 spacy-curated-transformers 0.2.2 pypi_0 pypi spacy-legacy 3.0.12 pyhd8ed1ab_0 conda-forge spacy-loggers 1.0.5 pyhd8ed1ab_0 conda-forge Thank you for your help!
SpaCy objects inside a Polars DataFrame can be serialized and deserialized by using spaCy's native DocBin class. The following code generates doc objects, saves them locally, and successfully loads them afterwards.
from io import StringIO
from spacy.tokens import DocBin
import polars as pl
import spacy

nlp = spacy.load("de_core_news_md")
json_str = '[{"foo":"Hello","bar":6},{"foo":"What a lovely day","bar":7},{"foo":"Nice to meet you","bar":8}]'
doc = nlp("some text")

#Serialize polars DataFrame
df = pl.read_json(StringIO(json_str))
df = df.with_columns(pl.col("foo").map_elements(lambda x: DocBin(docs=[nlp(x)]).to_bytes()).alias('binary_embbeding'))
df.write_parquet('saved.pq')

#Deserialize polars DataFrame
df_loaded = pl.read_parquet('saved.pq')
df_loaded = df_loaded.with_columns(pl.col('binary_embbeding').map_elements(lambda x: list(DocBin().from_bytes(x).get_docs(nlp.vocab))[0]).alias("spacy_embedding"))

#Calculate similarity
df_loaded.with_columns(pl.col("spacy_embedding").map_elements(lambda x: doc.similarity(x), return_dtype=pl.Float64).alias('Score'))
Applying functions to deserialized SpaCy objects
Whether the deserialized spaCy objects can still be used after a round trip through native Polars functions (such as df.write_parquet()) heavily depends on the model used. In the above case the similarity calculation only works when using a spaCy language model that contains word vectors.
nlp = spacy.load("de_core_news_sm") # Line 20 does not work
nlp = spacy.load("de_core_news_md") # Line 20 works
nlp = spacy.load("de_core_news_lg") # Line 20 works
nlp = spacy.load("de_dep_news_trf") # Line 20 does not work
4
1
79,119,492
2024-10-23
https://stackoverflow.com/questions/79119492/bar-chart-with-multiple-bars-using-xoffset-when-the-x-axis-is-temporal
Here's a small example: import altair as alt import polars as pl source = pl.DataFrame( { "Category": list("AAABBBCCC"), "Value": [0.1, 0.6, 0.9, 0.7, 0.2, 1.1, 0.6, 0.1, 0.2], "Date": [f"2024-{m+1}-1" for m in range(3)] * 3, } ).with_columns(pl.col("Date").str.to_date()) bars = alt.Chart(source).mark_bar().encode( x=alt.X("Date:T"), xOffset="Category:N", y="Value:Q", color="Category:N", ) bars If I set x="Date:N", then the example behaves as I'd like, but without the benefits of temporal formatting for the x-axis: Is there any way in which I can have xOffset work for the case where x="Date:T"?
If you use the ordinal or nominal data type, you can supply a timeUnit to get date formatting. There are many options depending on what kind of data you are working with. import altair as alt import polars as pl source = pl.DataFrame( { "Category": list("AAABBBCCC"), "Value": [0.1, 0.6, 0.9, 0.7, 0.2, 1.1, 0.6, 0.1, 0.2], "Date": [f"2024-{m+1}-1" for m in range(3)] * 3, } ).with_columns(pl.col("Date").str.to_date()) bars = alt.Chart(source.to_pandas()).mark_bar().encode( x=alt.X("Date:O", timeUnit="yearmonthdate"), xOffset="Category:N", y="Value:Q", color="Category:N", ) bars
3
3
79,123,305
2024-10-24
https://stackoverflow.com/questions/79123305/numpy-array-does-not-correctly-update-in-gaussian-elimination-program
I am trying to write a function gaussian_elim which takes in an n x n numpy array A and an n x 1 numpy array b and performs Gaussian elimination on the augmented matrix [A|b]. It should return an n x (n+1) matrix of the form M = [U|c], where U is an n x n upper triangular matrix. However, when I test my code on a simple 2x2 matrix, it seems that the elimination step is not being performed properly. I have inserted print statements to illustrate how the matrix is not being updated properly. def gaussian_elim(A,b): """ A: n x n numpy array b: n x 1 numpy array Applies Gaussian Elimination to the system Ax = b. Returns a matrix of the form M = [U|c], where U is upper triangular. """ n = len(b) b = b.reshape(-1, 1) # Ensure b is a column vector of shape (n, 1) M = np.hstack((A,b)) #Form the n x (n+1) augmented matrix M := [A|b] #For each pivot: for j in range(n-1): #j = 0,1,...,n-2 #For each row under the pivot: for i in range(j+1,n): #i = j + 1, j + 2,..., n-1 if (M[j,j] == 0): print("Error! Zero pivot encountered!") return #The multiplier for the the i-th row m = M[i,j] / M[j,j] print("M[i,:] = ", M[i,:]) print("M[j,:] = ", M[j,:]) print("m = ", m) print("M[i,:] - m*M[j,:] = ", M[i,:] - m*M[j,:]) #Eliminate entry M[i,j] (the first nonzero entry of the i-th row) M[i,:] = M[i,:] - m*M[j,:] print("M[i,:] = ", M[i,:]) #Make sure that i-th row of M is correct (it's not!) return M Testing with a 2x2 matrix A = np.array([[3,-2],[1,5]]) b = np.array([1,1]) gaussian_elim(A,b) yields the following output: M[i,:] = [1 5 1] M[j,:] = [ 3 -2 1] m = 0.3333333333333333 M[i,:] - m*M[j,:] = [0. 5.66666667 0.66666667] <-- this is correct! M[i,:] = [0 5 0] <--why is this not equal to the above line??? array([[ 3, -2, 1], [ 0, 5, 0]]) The output I expected is array([[ 3, -2, 1],[0. 5.66666667 0.66666667]]). Why did the second row not update properly?
Because you build A and b as NumPy arrays of integers, the augmented matrix M is also an integer array, so every value assigned back into it is cast (truncated) to an integer. You need to define A and b as float arrays:
A = np.array([[3,-2],[1,5]], dtype=np.float64)
b = np.array([1,1], dtype=np.float64)
Doing so allows for float values in your matrices.
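If you prefer not to rely on callers passing float arrays, a small defensive variant (just a sketch) is to cast inside the function when building the augmented matrix:

M = np.hstack((A, b)).astype(np.float64)  # in-place row updates now keep their fractional parts

With that change, gaussian_elim(np.array([[3, -2], [1, 5]]), np.array([1, 1])) produces the expected second row [0, 5.66666667, 0.66666667].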
3
4
79,122,612
2024-10-24
https://stackoverflow.com/questions/79122612/unable-to-use-list-for-positional-arguements
So I am working on a small project and I can't for the life of me figure out why this doesn't work... I am using a list for positional arguments, yet it complains that parameters are missing. I know it's probably something basic but I can't seem to figure it out. If I just write the values out directly in the call it works, but it doesn't seem to want to work with the contestants list. Hoping someone can help here!
class Tester():
    def __init__(self, first: int, second: int, third: int) -> None:
        self.first = first
        self.second = second
        self.third = third

contestants = [54, 56, 32]
print(Tester(contestants))
You need to unpack the list to pass the arguments: class Tester: def __init__(self, first: int, second: int, third: int) -> None: self.first = first self.second = second self.third = third def __str__(self) -> str: return f"Tester(first={self.first}, second={self.second}, third={self.third})" contestants = [54, 56, 32] print(Tester(*contestants)) Output: Tester(first=54, second=56, third=32)
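For completeness (a related pattern, not something the question asked for): if the values live in a dict keyed by parameter name, the same idea works with ** unpacking:

contestants = {"first": 54, "second": 56, "third": 32}
print(Tester(**contestants))  # Tester(first=54, second=56, third=32)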
2
3
79,122,259
2024-10-24
https://stackoverflow.com/questions/79122259/python-flask-app-doesnt-serve-images-used-in-react-app
I have a react project that is hosted in flask app. This is the project folders: /simzol/ └── backend/ ├── app.py └── build/ ├── index.html ├── images/ ├── static/ │ ├── css/ │ └── js/ └── other_files... PS. the build folder is generated by react using the command "yarn build" here's the flask app initializer: app = Flask(__name__, static_url_path="/static", static_folder="build/static", template_folder="build") that's how I serve the main route and any other path that isn't an API path (which should be handled by the react app. If the page doesn't exist, react should return the appropriate 404 not found page that I wrote): @app.route("/") @app.route('/<path:path>') def home(path=None): try: return render_template("index.html"), 200 except Exception as e: return handle_exception(e) When I run the app and go to the main route, I can't see my images load up. What I tried: move the images folder into the static folder. didn't change anything change the static_url_path to empty string ("") and the static_folder to "build". That solves the problem but I encounter another problem when I surf any page that is not the root page (like /contactus) directly through the browser's url input field, I get 404 error from flask (not react) I use relative image path in react src attributes, maybe changing that could fix the problem but if that solution works, I don't like it, because that makes the developing in react more complicated
I wanted the images to load up both in the Flask app in production and in the React dev server (using npm start) during development. At the same time, I wanted unknown pages to be handled by React. The most convenient solution I found is to put all the images under public/static/images instead of public/images and to change the src paths accordingly. Then I can run yarn build and copy the resulting build folder into the Flask app folder (backend). That worked perfectly.
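For reference, a minimal sketch of the Flask side that goes with this layout (same settings as in the question, trimmed down): the static folder still points at build/static, so everything under public/static/ in React is served by Flask at /static/... after yarn build, while every non-static path falls through to index.html for React to handle:

from flask import Flask, render_template

app = Flask(__name__,
            static_url_path="/static",
            static_folder="build/static",
            template_folder="build")

@app.route("/")
@app.route("/<path:path>")
def home(path=None):
    return render_template("index.html"), 200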
3
0
79,119,480
2024-10-23
https://stackoverflow.com/questions/79119480/conda-cannot-find-a-specific-package-dedalus-in-conda-forge-that-is-explicitly
The package in question is the Dedalus spectral CFD library, here. For posterity I'll also link the project homepage. When I run conda search --channel conda-forge dedalus or variations with options I cannot find the dedalus package. This isn't a general issue with conda because I can find all default channels packages and every other conda-forge package I tested (most recently related libraries like fipy or gmsh, or general stuff like mamba). To be absolutely precise, this is what the terminal might look like on my end. C:\Users\FirstName LastName>conda search --override-channels --channel conda-forge dedalus Loading channels: done No match found for: dedalus. Search: *dedalus* PackagesNotFoundError: The following packages are not available from current channels: - dedalus Current channels: - https://conda.anaconda.org/conda-forge/win-64 - https://conda.anaconda.org/conda-forge/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. I note that pip can locate the associated pypi project, but dedalus is very much a full-stack affair and so is best installed as a conda environment. I note also that my version of conda is 23.7.4. I note lastly that when I look for dedalus on an HPC server I SSH'd into I can find it, as one would hope. The server's version of conda is 4.8.3. I'm not sure why there's such a wide version difference, suggesting that maybe I'm on some deprecated system, but my listed build version is newer (3.26.1 vs 3.18.11). I would appreciate any assistance in troubleshooting this problem. I'm at a complete loss as to what my next steps should be.
It's not available for win-64; check the OS tags on the conda-forge package page, which only include linux-64, osx-64 and osx-arm64.
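If you want to confirm this from the command line rather than the website, conda can search a specific platform (as I understand the current conda docs, --subdir selects the target platform; treat the exact flag as an assumption and check conda search --help on your version):

conda search --override-channels --channel conda-forge --subdir linux-64 dedalus

This returns results, while the same command with --subdir win-64 finds nothing, matching what you see on your Windows machine versus the Linux HPC server.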
2
1
79,117,805
2024-10-23
https://stackoverflow.com/questions/79117805/silence-mypy-arg-type-error-when-using-stategy-pattern
Minimal example: from typing import overload, TypeVar, Generic class EventV1: pass class EventV2: pass class DataGathererV1: def process(self, event: EventV1): pass def process2(self, event: EventV1): pass class DataGathererV2: def process(self, event: EventV2): pass def process2(self, event: EventV2): pass class Dispatcher: def __init__(self): self.worker_v1: DataGathererV1 = DataGathererV1() self.worker_v2: DataGathererV2 = DataGathererV2() def dispatch(self, event: EventV1 | EventV2): handler: DataGathererV1 | DataGathererV2 = self.worker_v1 if isinstance(event, EventV1) else self.worker_v2 handler.process(event) # Common logic handler.process2(event) # Common logic handler.process(event) # etc... In the code above, I'm using a sort of "strategy" pattern to process events. I would like to avoid splitting my dispatch method in two methods as I don't think it makes sense and will generate code duplication. Mypy gives me the following errors and I don't know how to properly type my code to avoid such errors. example.py:36: error: Argument 1 to "process" of "DataGathererV1" has incompatible type "EventV1 | EventV2"; expected "EventV1" [arg-type] example.py:36: error: Argument 1 to "process" of "DataGathererV2" has incompatible type "EventV1 | EventV2"; expected "EventV2" [arg-type] example.py:40: error: Argument 1 to "process2" of "DataGathererV1" has incompatible type "EventV1 | EventV2"; expected "EventV1" [arg-type] example.py:40: error: Argument 1 to "process2" of "DataGathererV2" has incompatible type "EventV1 | EventV2"; expected "EventV2" [arg-type] example.py:44: error: Argument 1 to "process" of "DataGathererV1" has incompatible type "EventV1 | EventV2"; expected "EventV1" [arg-type] example.py:44: error: Argument 1 to "process" of "DataGathererV2" has incompatible type "EventV1 | EventV2"; expected "EventV2" [arg-type] I tried to add an overload function to retrieve both my handler and my event at the same time to force the type to be "linked" (see below), however it did not solve my problem. @overload def __handler(self, event: EventV1) -> tuple[DataGathererV1, EventV1]: ... @overload def __handler(self, event: EventV2) -> tuple[DataGathererV2, EventV2]: ... def __handler(self, event: EventV1 | EventV2) -> tuple[DataGathererV1 | DataGathererV2, EventV1 | EventV2]: return ( self.worker_v1 if isinstance(event, EventV1) else self.worker_v2, event ) My "dream" solution to indicate to mypy that the type of event and the type of handler are "linked" together. What I would like to avoid is to set the input type of my event in each DataGathererVX to be EventV1 | EventV2 and add an assert isinstance(event, EventVX) at the beginning of each method as my aim is to have errors when these methods are called with the incorrect type of event.
The issue here is that I think there are two different types being conflated. Consider the protocol: T = TypeVar("T", contravariant=True) class DataGatherer(Generic[T], Protocol): def process(self, event: T): pass def process2(self, event: T): pass We can say DataGathererV1 is a subtype of DataGatherer[EventV1], and DataGathererV2 is a subtype of DataGatherer[EventV2]. So far so good. The type we need in the dispatch method however, is actually DataGatherer[EventV1 | EventV2] (as this takes both arguments). Since: DataGatherer[EventV1 | EventV2] != DataGatherer[EventV1] | DataGatherer[EventV2] ...we have an issue. A Solution By using a cast, we can convert handler to the type we desire: from typing import Protocol, overload, TypeVar, Generic, cast class EventV1: pass class EventV2: pass T = TypeVar("T", contravariant=True) class DataGatherer(Generic[T], Protocol): def process(self, event: T): pass def process2(self, event: T): pass class DataGathererV1(DataGatherer[EventV1]): def process(self, event: EventV1): pass def process2(self, event: EventV1): pass class DataGathererV2(DataGatherer[EventV2]): def process(self, event: EventV2): pass def process2(self, event: EventV2): pass class Dispatcher: def __init__(self): self.worker_v1 = DataGathererV1() self.worker_v2 = DataGathererV2() def dispatch(self, event: EventV1 | EventV2): handler = cast( DataGatherer[EventV1 | EventV2], self.worker_v1 if isinstance(event, EventV1) else self.worker_v2 ) handler.process(event) # Common logic handler.process2(event) # Common logic handler.process(event) # etc... You could also use a slightly more generic type for worker_v1 and worker_v2 to circumvent this (although less explicitly): class Dispatcher: def __init__(self): self.worker_v1: DataGatherer = DataGathererV1() self.worker_v2: DataGatherer = DataGathererV2() def dispatch(self, event: EventV1 | EventV2): handler: DataGatherer[EventV1 | EventV2] = self.worker_v1 if isinstance(event, EventV1) else self.worker_v2 handler.process(event) # Common logic handler.process2(event) # Common logic handler.process(event) # etc... You could also split it into two if blocks, as previously suggested, but otherwise I'm not sure there's any other way to go about it. Hope this is useful!
2
1
79,120,587
2024-10-24
https://stackoverflow.com/questions/79120587/optimization-of-pyspark-code-to-do-comparisons-of-rows
I want to iteratively compare 2 sets of rows in a PySpark dataframe, and find the common values in another column. For example, I have the dataframe (df) below. Column1 Column2 abc 111 def 666 def 111 tyu 777 abc 777 def 222 tyu 333 ewq 888 The output I want is abc,def,CommonRow <-- because of 111 abc,ewq,NoCommonRow abc,tyu,CommonRow <-- because of 777 def,ewq,NoCommonRow def,tyu,NoCommonRow ewq,tyu,NoCommonRow The PySpark code that I'm currently using to do this is # "value_list" contains the unique list of values in Column 1 index = 0 for col1 in value_list: index += 1 df_col1 = df.filter(df.Column1 == col1) for col2 in value_list[index:]: df_col2 = df.filter(df.Column1 == col2) df_join = df_col1.join(df_col2, on=(df_col1.Column2 == df_col2.Column2), how="inner") if df_join.limit(1).count() == 0: # No common row print(col1,col2,"NoCommonRow") else: print(col1,col2,"CommonRow") However, I found that this takes a very long time to run (df has millions of rows). Is there anyway to optimize it to run faster, or is there a better way to do the comparisons?
You can do this without loops using self join as follows: from pyspark.sql import SparkSession from pyspark.sql import functions as F spark = SparkSession.builder.getOrCreate() data = [("abc", 111), ("def", 666), ("def", 111), ("tyu", 777), ("abc", 777), ("def", 222), ("tyu", 333), ("ewq", 888)] df = spark.createDataFrame(data, ["Column1", "Column2"]) result_df = ( df.alias("a") .join(df.alias("b"), F.col("a.Column1") < F.col("b.Column1"), "left") # this is more efficient than the "!=" approach used in other answer because using "<" creates less rows after join (almost half) .join(df.alias("c"), (F.col("a.Column2") == F.col("c.Column2")) & (F.col("b.Column2") == F.col("c.Column2")), "left") .where(F.col("a.Column1").isNotNull() & F.col("b.Column1").isNotNull()) .groupBy("a.Column1", "b.Column1") .agg( F.when(F.count("c.Column2") > 0, "CommonRow").otherwise("NoCommonRow").alias("CommonStatus") ) .orderBy("a.Column1", "b.Column1") # remove this if not required ) result_df.show(truncate=False) # Output: # +-------+-------+------------+ # |Column1|Column1|CommonStatus| # +-------+-------+------------+ # |abc |def |CommonRow | # |abc |ewq |NoCommonRow | # |abc |tyu |CommonRow | # |def |ewq |NoCommonRow | # |def |tyu |NoCommonRow | # |ewq |tyu |NoCommonRow | # +-------+-------+------------+ Also, please avoid the orderBy when you run this on your million-row df - use it only if necessary.
2
1
79,120,520
2024-10-24
https://stackoverflow.com/questions/79120520/fastest-way-to-combine-image-patches-given-as-4d-array-in-python
Given a 4D array of size (N,W,H,3), where N is the number of patches, W,H are the width and height of an image patch and 3 is the number of color channels. Assume that these patches were generated by taking and original image I and dividing it up into small squares. The order by which this division happen is row by row. So if we divide our image into 3x3 patches (9 total) each back is 10x10pixels, then the 4D array will be (9,10,10,3) and the order of element in it will be [patch11,patch12,patch13,patch21,patch22,patch23,patch31,patch32,patch33]. Now my question is about the most efficient way to combine these patches back to produce the original image in python only using simply python functions and numpy (no PIL or OpenCV). Thank you so much. I can write a double for loop that does the job as below, but I'm wondering if there is a better algorithm that can provide faster performance: import numpy as np def reconstruct_image(patches, num_rows, num_cols): # num_rows and num_cols are the number of patches in the rows and columns respectively patch_height, patch_width, channels = patches.shape[1], patches.shape[2], patches.shape[3] # Initialize the empty array for the full image full_image = np.zeros((num_rows * patch_height, num_cols * patch_width, channels), dtype=patches.dtype) # Iterate over the rows and columns of patches for i in range(num_rows): for j in range(num_cols): # Get the index of the current patch in the 4D array patch_index = i * num_cols + j # Place the patch in the appropriate position in the full image full_image[i*patch_height:(i+1)*patch_height, j*patch_width:(j+1)*patch_width, :] = patches[patch_index] return full_image N = 9 # Number of patches W, H, C = 10, 10, 3 # Patch dimensions (WxHxC) num_rows, num_cols = 3, 3 # Number of patches in rows and columns (3x3 patches) patches = np.random.rand(N, W, H, C) # Example patch data reconstructed_image = reconstruct_image(patches, num_rows, num_cols)
Here's a fast pure numpy 1-liner way to do it: def reconstruct_image_2(): return patches.reshape(num_rows, num_cols, W, H, C).swapaxes(1, 2).reshape(num_rows*W, num_cols*H, C) reconstructed_image_2 = reconstruct_image_2() assert np.all(reconstructed_image == reconstructed_image_2) # True Explanation: First reshape restructures your array as a "2D" array of patches, swapaxes makes your array (num_rows, W, num_cols, H, C), and finally the second and last reshape effectively concatenates the patches together in rows and columns. Timing comparison: import timeit %timeit reconstruct_image(patches, num_rows, num_cols) # 6.2 µs ± 16.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) %timeit reconstruct_image_2() # 1.56 µs ± 2.57 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
4
3
79,119,610
2024-10-23
https://stackoverflow.com/questions/79119610/getting-the-module-and-class-of-currently-executing-classmethod-in-python
For code that exists in a module named some.module and looks like this: class MyClass: @classmethod def method(cls): # do something pass I'd like to know in the block marked as "do something" what the module name, the class name, and the method name are. In the above example, that would be: module some.module class MyClass method method I know of inspect.stack() and I can get the function / method name from it, but I can't find the rest of the data.
These are the most direct ways: import inspect class MyClass: @classmethod def method(cls): print("module:", __name__) print("class:", cls.__name__) print("method:", inspect.currentframe().f_code.co_name) Doing the same from a utility function requires traversing "back" (up) in the call stack to find the calling frame: import inspect def dbg(): frame = inspect.currentframe().f_back print("module:", frame.f_globals.get("__name__")) print("cls:", frame.f_locals.get("cls")) print("method:", frame.f_code.co_name)
2
3
79,119,637
2024-10-23
https://stackoverflow.com/questions/79119637/what-is-the-equivalent-of-np-polyval-in-the-new-np-polynomial-api
I can't find a direct answer in NumPy documentation. This snippet will populate y with polynomial p values on domain x: p = [1, 2, 3] x = np.linspace(0, 1, 10) y = [np.polyval(p, i) for i in x] What is the new API equivalent when p = Polynomial(p)?
You can simply evaluate values with p(x). Documentation can be found on "Using the convenience classes" under "Evaluation": p = [1, 2, 3] p = np.polynomial.Polynomial(p) x = np.linspace(0, 1, 10) y = p(x) Note: Coefficients are in reverse order compared to legacy API i.e. coefficients go from lowest order to highest order such that p[i] is the ith-order term.
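If you want to double-check the equivalence with the legacy call from the question, reversing the coefficient list reproduces the old behaviour (a quick sanity check, not from the docs):

import numpy as np

p = [1, 2, 3]
x = np.linspace(0, 1, 10)

legacy = np.polyval(p, x)                   # 1*x**2 + 2*x + 3
new = np.polynomial.Polynomial(p[::-1])(x)  # same polynomial, coefficients reversed
assert np.allclose(legacy, new)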
2
3
79,118,016
2024-10-23
https://stackoverflow.com/questions/79118016/how-to-preserve-data-types-when-working-with-pandas-and-sklearn-transformers
While working with a large sklearn Pipeline (fit using a DataFrame) I ran into an error that lead back to a wrong data type of my input. The problem occurred on an a single observation coming from an API that is supposed to interface the model in production. Missing information in a single line makes pandas (obviously) incapable of inferring the correct dtype but I thought that my fit transformers will handle the conversions. Apparently, I am mistaken. import pandas as pd from sklearn.impute import SimpleImputer X_tr = pd.DataFrame({"A": [5, 8, None, 4], "B": [3, None, 9.9, 12]}) print(X_tr.dtypes) #>> A float64 #>> B float64 x = pd.DataFrame({"A": [10.1], "B": [None]}) print(x.dtypes) #>> A float64 #>> B object The above shows clearly that pandas infers the float64 types for column A and B in the training dataset, however (again obviously) for the single observation it doesn't know the dtype for the column B so it assigns object. No issue so far. But let's imagine a SimpleImputer somewhere within a Pipeline to replace the missing values: imputer = SimpleImputer( fill_value=0, strategy="constant", missing_values=pd.NA ).set_output(transform="pandas") X_tr_im = imputer.fit_transform(X_tr) # training print(X_tr_im.dtypes) #>> A float64 #>> B float64 x_im = imputer.transform(x) print(x_im.dtypes) #>> A object #>> B object The imputer does replace the None values with zeros in all cases, however, two things happened that I did not expect: Column B was NOT converted to the dtype that it was fit on Column A was converted to the unwanted dtype of object This creates two unwanted non-numeric data types that lead to errors further down the pipeline. Even if it is not the task of transformers to preserve dtypes, in my case it would still be very helpful. Am I doing something fundamentally wrong? Are there any solutions available?
The issue you are running into is how pandas handles None in a column. If the column has other float or integer values, the None is coerced to a numpy.nan which is an instance of float. This coercion maintains the column's type as a numeric column. However, if no other values are present in the column, just None values, pandas does NOT try to coerce the column to float, and instead keeps it as a column of Python object types, which is what you are seeing when you try to impute x. To ensure your datafame's columns are converted to a numeric type before being passed through the rest of the pipeline, you can use a sklearn.preprocessing.FunctionTransformer before the imputer, and use a function that forces the date type to np.float64 before any other computation happens. import pandas as pd from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline from sklearn.preprocessing import FunctionTransformer def df_to_float(x): return x.astype(np.float64) x_tr = pd.DataFrame({"A": [5, 8, None, 4], "B": [3, None, 9.9, 12]}) x = pd.DataFrame({"A": [10.1], "B": [None]}) float_xform = FunctionTransformer(df_to_float) imputer = SimpleImputer( fill_value=0, strategy="constant", missing_values=pd.NA ).set_output(transform="pandas") pipe = Pipeline([('float-transform', float_xform), ('impute-NA', imputer)]) print(pipe.fit_transform(x_tr).dtypes) # A float64 # B float64 print(pipe.transform(x).dtypes) # A float64 # B float64
3
3
79,117,673
2024-10-23
https://stackoverflow.com/questions/79117673/can-i-reuse-output-field-instance-in-django-orm-or-i-should-always-create-a-dupl
I have a Django codebase that does a lot of Case/When/ExpressionWrapper/Coalesce/Cast ORM functions and some of them sometimes need a field as an argument - output_field. from django.db.models import FloatField, F some_param1=Sum(F('one_value')*F('second_value'), output_field=FloatField()) some_param2=Sum(F('one_value')*F('second_value'), output_field=FloatField()) some_param3=Sum(F('one_value')*F('second_value'), output_field=FloatField()) some_param4=Sum(F('one_value')*F('second_value'), output_field=FloatField()) some_param5=Sum(F('one_value')*F('second_value'), output_field=FloatField()) Sometimes I find myself wondering why I am always creating the same instance of any Field subclass over and over again. Is there any difference if I just pass one instance and share it between expressions? E.g. from django.db.models import FloatField, F float_field = FloatField() some_param1=Sum(F('one_value')*F('second_value'), output_field=float_field) some_param2=Sum(F('one_value')*F('second_value'), output_field=float_field) some_param3=Sum(F('one_value')*F('second_value'), output_field=float_field) some_param4=Sum(F('one_value')*F('second_value'), output_field=float_field) some_param5=Sum(F('one_value')*F('second_value'), output_field=float_field) I couldn't find it in the documentation and the source code is not documented well regarding this parameter. P.S. The example is fake, just imagine a big annotate function that does a lot of processing using Case/When/ExpressionWrapper/Coalesce/Cast and has a lot of duplicated Field instances as output_field.
You can reuse the field. Using this output_field=… [Django-doc] serves two purposes: the type sometimes requires specific formatting, typically for GIS columns, since a point, polygon, etc. needs to be converted to text so that Django can understand it; and to know what lookups, transformations, etc. can be applied to it. Indeed, if we use: queryset = queryset.annotate( some_param1=Sum( F('one_value') * F('second_value'), output_field=CharField() ) ) then Django will assume that some_param1 is a CharField (here this does not make much sense), and thus you can use: queryset.filter(some_param1__lower='a') since __lower is defined as a lookup on a CharField. But for a FloatField, it does not make much sense. But the field is not specialized or altered. It is thus more of a "signal" object to specify what can be done with it. That being said, I don't see many reasons to convert code to prevent constructing a FloatField. If we use %timeit, we get: In [1]: %timeit FloatField() 3.82 µs ± 379 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) So the construction takes approximately 3.82 microseconds. Typically a view has a lot more work to do than that, so writing a query that is itself more efficient, or saving a roundtrip to the database, will (very) likely outperform any optimization with respect to saving a FloatField by a few orders of magnitude.
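Applied to the example from the question, reusing a single instance might look like the sketch below. This is only an illustration of the point above that the field acts as a type "signal" and is not mutated; the queryset and the (fake) field names are taken from the question.
from django.db.models import F, FloatField, Sum

FLOAT = FloatField()  # one shared output_field instance

queryset = queryset.annotate(
    some_param1=Sum(F('one_value') * F('second_value'), output_field=FLOAT),
    some_param2=Sum(F('one_value') * F('second_value'), output_field=FLOAT),
)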
2
1
79,117,121
2024-10-23
https://stackoverflow.com/questions/79117121/polars-get-all-possible-categories-as-physical-representation
Given a DataFrame with categorical column: import polars as pl df = pl.DataFrame({ "id": ["a", "a", "a", "b", "b", "b", "b"], "value": [1,1,1,6,6,6,6], }) res = df.with_columns(bucket = pl.col.value.cut([1,3])) shape: (7, 3) ┌─────┬───────┬───────────┐ │ id ┆ value ┆ bucket │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ cat │ ╞═════╪═══════╪═══════════╡ │ a ┆ 1 ┆ (-inf, 1] │ │ a ┆ 1 ┆ (-inf, 1] │ │ a ┆ 1 ┆ (-inf, 1] │ │ b ┆ 6 ┆ (3, inf] │ │ b ┆ 6 ┆ (3, inf] │ │ b ┆ 6 ┆ (3, inf] │ │ b ┆ 6 ┆ (3, inf] │ └─────┴───────┴───────────┘ How do I get all potential values of the categorical column? I can get them as strings with pl.Expr.cat.get_categories() as strings? res.select(pl.col.bucket.cat.get_categories()) shape: (3, 1) ┌───────────┐ │ bucket │ │ --- │ │ str │ ╞═══════════╡ │ (-inf, 1] │ │ (1, 3] │ │ (3, inf] │ └───────────┘ I can also get all existing values in their physical representation with pl.Expr.to_physical() res.select(pl.col.bucket.to_physical()) shape: (7, 1) ┌────────┐ │ bucket │ │ --- │ │ u32 │ ╞════════╡ │ 0 │ │ 0 │ │ 0 │ │ 2 │ │ 2 │ │ 2 │ │ 2 │ └────────┘ But how I can get all potential values in their physical representation? I'd expect output like: shape: (3, 1) ┌────────┐ │ bucket │ │ --- │ │ u32 │ ╞════════╡ │ 0 │ │ 1 │ │ 2 │ └────────┘ Or should I just assume that it's always encoded as range of integers without gaps?
I don't see any direct way. However, you could combine pl.Expr.cat.get_categories and pl.Expr.to_physical as follows. res.select( pl.col("bucket").cat.get_categories().cast(res.schema["bucket"]).to_physical() ) shape: (3, 1) ┌────────┐ │ bucket │ │ --- │ │ u32 │ ╞════════╡ │ 0 │ │ 1 │ │ 2 │ └────────┘ Here, it would be nice to have pl.Expr.meta.dtype implemented, such that accessing res again can be avoided.
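If you also want the mapping from each category label to its physical code, you can zip the two results this answer already computes. The sketch below only reuses the expressions above on the question's res frame (no new assumptions beyond that example data):
import polars as pl

cats = res.select(pl.col("bucket").cat.get_categories())["bucket"]
codes = res.select(
    pl.col("bucket").cat.get_categories().cast(res.schema["bucket"]).to_physical()
)["bucket"]

mapping = dict(zip(cats, codes))
# {'(-inf, 1]': 0, '(1, 3]': 1, '(3, inf]': 2}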
2
2
79,112,091
2024-10-22
https://stackoverflow.com/questions/79112091/how-to-highlight-values-per-column-in-polars
I have a Polars DataFrame, and I want to highlight the top 3 values for each column using the style and loc features in Polars. I can achieve this for individual columns, but my current approach involves a lot of repetition, which is not scalable to many variables. import polars as pl import polars.selectors as cs from great_tables import loc, style df = pl.DataFrame({ "id": [1, 2, 3, 4, 5], "variable1": [15, 25, 5, 10, 20], "variable2": [40, 30, 50, 10, 20], "variable3": [400, 100, 300, 200, 500] }) top3_var1 = pl.col("variable1").is_in(pl.col("variable1").top_k(3)) top3_var2 = pl.col("variable2").is_in(pl.col("variable2").top_k(3)) ( df .style .tab_style( style.text(weight="bold"), loc.body("variable1", top3_var1) ) .tab_style( style.text(weight="bold"), loc.body("variable2", top3_var2) ) ) This works, but it's not scalable for many columns since I have to manually define top3_var for each column. I’ve tried using pl.all().top_k(3) to make the process more automatic: ( df .style .tab_style( style.text(weight="bold", ), loc.body("variable1", top3_var1) ) .tab_style( style.text(weight="bold"), loc.body("variable2", top3_var2) ) ) However, I’m not sure how to apply the style and loc methods to highlight only the top 3 values in each column individually without affecting the entire row.
Update. Since the writing of my original answer, a mask parameter was added to great_tables.loc.body providing a native solution to the problem. import polars as pl import polars.selectors as cs from great_tables import loc, style df = pl.DataFrame({ "id": [1, 2, 3, 4, 5], "variable1": [15, 25, 5, 10, 20], "variable2": [40, 30, 50, 10, 20], "variable3": [400, 100, 300, 200, 500] }) ( df .style .tab_style( style.text(weight="bold"), locations=loc.body( mask=cs.exclude("id").is_in(cs.exclude("id").top_k(3)) ) ) ) Original. As outlined in the comments, there are already some discussions on GitHub regarding adding a loc.body(mask=...) argument suitable for the use-case. Until this feature is implemented, you could create a GT (Great Table) object and iteratively use gt.tab_style as follows. This avoids the manual chaining of tab_style calls. import polars as pl import polars.selectors as cs from great_tables import GT, loc, style df = pl.DataFrame({ "id": [1, 2, 3, 4, 5], "variable1": [15, 25, 5, 10, 20], "variable2": [40, 30, 50, 10, 20], "variable3": [400, 100, 300, 200, 500] }) gt = GT(df) for col in df.select(cs.exclude("id")).columns: gt = gt.tab_style( style.text(weight="bold"), loc.body(col, pl.col(col).is_in(pl.col(col).top_k(3))) ) gt
3
4
79,114,550
2024-10-22
https://stackoverflow.com/questions/79114550/is-mydict-getx-x-eqivalent-to-mydict-getx-or-x
When using a dictionary to occasionally replace values, are .get(x, x) and .get(x) or x equivalent? For example: def current_brand(brand_name): rebrands = { "Marathon": "Snickers", "Opal Fruits": "Starburst", "Jif": "Cif", "Thomson": "TUI", } return rebrands.get(brand_name, brand_name) # or return rebrands.get(brand_name) or brand_name # this is forbidden - cannot use `default` keyword argument here return rebrands.get(brand_name, default=brand_name) assert current_brand("Jif") == "Cif" assert current_brand("Boots") == "Boots" I think .get(x) or x is clearer, but that's pretty much a matter of opinion, so I'm curious if there's a technical benefit or drawback to one or the other I've not spotted. Edit: sorry, to be clear, I'm assuming that the dictionary does not contain falsey values as that would't make sense in this context (i.e. in the example above you're not recording that "Somerfield" rebranded as "")
In answer to your original question: or will short-circuit if your value is Falsey, so there are plenty of values where the two statements will behave differently. my_dict = { 'foo': 0, 'bar': "", 'baz': [], } x = 'foo' print(repr(my_dict.get(x, x))) print(repr(my_dict.get(x) or x)) In answer to your edit, if you can guarantee that your dict is definitely a dict, and that the values in your dict are always Truthy, then the two should behave equivalently. There may be some performance tradeoffs either way, but I generally don't bother to get hung up on that. If this code ends up in a hot loop, then profile it, write the faster version, then stick a comment nearby explaining that your code's weird because the weird way is faster. In my opinion the downside of your method is philosophical. To my eyes, .get(x, ...) communicates your intent as "retrieve key x, and if it's not there, default to ...". .get(x) or ... communicates "if it's not there, or is defined as a Falsey value, default to ...". The intent displayed in your question suggests you should use .get(x, x), but for a simple example like this it hardly matters.
2
5
79,095,041
2024-10-16
https://stackoverflow.com/questions/79095041/detectron2-installation-no-module-named-torch
I am trying to install detectron2 on ubuntu and face a weird python dependency problem. In short - pytorch is installed (with pip), torchvision is installed (with pip), but when I run pip install 'git+https://github.com/facebookresearch/detectron2.git' I get error ModuleNotFoundError: No module named 'torch' as for dependencies (detectron2_test) ubuntu@LAPTOP:~$ pip install torchvision Requirement already satisfied: torchvision in ./detectron2_test/lib/python3.12/site-packages (0.19.1+cu118) Requirement already satisfied: numpy in ./detectron2_test/lib/python3.12/site-packages (from torchvision) (1.26.3) Requirement already satisfied: torch==2.4.1 in ./detectron2_test/lib/python3.12/site-packages (from torchvision) (2.4.1+cu118) (...) (detectron2_test) ubuntu@LAPTOP:~$ which pip /home/ubuntu/detectron2_test/bin/pip (detectron2_test) ubuntu@LAPTOP:~$ which python /home/ubuntu/detectron2_test/bin/python (detectron2_test) ubuntu@LAPTOP:~$ which python3 /home/ubuntu/detectron2_test/bin/python3 Any suggestions are appreciated!
This is probably due to the isolation mechanism of the pip building process. Basically, the installation requires torch to be installed to work, but recent versions of pip use some isolation that does not allow the build process to access installed packages. You can disable that isolation by using this command: $ pip install --no-build-isolation 'git+https://github.com/facebookresearch/detectron2.git' This happens a lot for packages that need torch, probably because they tend to verify torch version and also import it to check for cuda and/or other capabilities, or to compile some kernels.
3
13
79,099,281
2024-10-17
https://stackoverflow.com/questions/79099281/firebase-firestore-client-cannot-be-deployed
I have an Firebase Cloud Functions codebase that uses Firestore database. Everything below works when I use it in local emulator using firebase emulators:start but when I need to deploy it to Firebase I got below error: Error: User code failed to load. Cannot determine backend specification main.py import json from firebase_functions import https_fn from firebase_admin import initialize_app, firestore import flask from enum import Enum from flask import g from endpoints.moon_phase import moon_phase_bp # Initialize Firebase app and Firestore initialize_app() db = firestore.client() app = flask.Flask(__name__) # Set up a before_request function to make db available in blueprints @app.before_request def before_request(): # g.db = db print("before_request") app.register_blueprint(moon_phase_bp) //Doesn't even use db, but in the future it will. # Firebase Function to handle requests @https_fn.on_request() def astro(req: https_fn.Request) -> https_fn.Response: with app.request_context(req.environ): return app.full_dispatch_request() If I update the db initialisation to db = firestore.client it deploys, but obviously, it's a function reference, so I cannot use Firestore db in my endpoints. This also means it's not related to my Firebase credentials or project setup. What might be the issue here?
It turns out the answer below from the Firebase repo fixes the issue: I've discovered if I move the db = firestore.client() into my cloud function, I'm able to deploy. https://github.com/firebase/firebase-functions-python/issues/126#issuecomment-1682542027 It's really weird that Google does not update the docs or fix the issue, but for now, this answer unblocks me; I hope it helps others too.
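Applied to the main.py from the question, the fix amounts to deferring the client creation until the function body runs. A trimmed sketch (the blueprint registration and before_request hook are left as in the question; making db available to blueprints, e.g. via flask.g, is not shown):
import flask
from firebase_functions import https_fn
from firebase_admin import initialize_app, firestore

initialize_app()
app = flask.Flask(__name__)

@https_fn.on_request()
def astro(req: https_fn.Request) -> https_fn.Response:
    db = firestore.client()  # created at request time, not at import time
    # use db here, or stash it somewhere the blueprints can reach (e.g. flask.g)
    with app.request_context(req.environ):
        return app.full_dispatch_request()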
4
0
79,102,534
2024-10-18
https://stackoverflow.com/questions/79102534/save-to-disk-training-dataset-and-validation-dataset-separately-in-pytorch
I want to save train dataset, test dataset, and validation dataset in 3 separate folders. Doing this for training and testing is easy # Get training and testing data all_training_data = getattr(datasets, config['Pytorch_Dataset']['dataset'])( root= path_to_data + "/data_all_train_" + config['Pytorch_Dataset']['dataset'], train=True, download=True, # If present it will not download the data again transform=ToTensor() ) test_data = getattr(datasets, config['Pytorch_Dataset']['dataset'])( root= path_to_data + "/data_test_" + config['Pytorch_Dataset']['dataset'], train=False, download=True, # If present it will not download the data again transform=ToTensor() ) This code makes use of torchvision.datasets to load and save to disk the dataset specified in config['Pytorch_Dataset']['dataset'] (e.g. MNIST). However there is no option to load a validation set this way, there is no validation=True option. I could split the train dataset into train and validation with torch.utils.data.random.split, but there are two main problems with this approach: I don't want to save the folder data_all_train, I want to save only 2 folders, one with the true training part and one with the validation part I would like PyTorch to understand if data_train and data_validation are present, and in this case it should not download again data_all_train, even if not present
You don't have to save the split results in separate folders to maintain reproducibility, which is what I am assuming you really care about. You could instead fix the seed before calling split like this: torch.manual_seed(42) data_train, data_val = torch.utils.data.random_split(data_all_train, (0.7, 0.3)) Then you get to maintain just the initial folders while also ensuring the train and val splits are consistent across trials. But the caveat to the above is that you are fixing the global seed, so you are also losing the randomness you might desire in the dataloader shuffling and such, which will end up identical per trial. To avoid that, you can narrow the scope in which you fix the seed by setting it only for the generator you pass to the split call: split_gen = torch.Generator() split_gen.manual_seed(42) data_train, data_val = torch.utils.data.random_split( data_all_train, (0.7, 0.3), generator=split_gen)
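If you do want something on disk so the exact same split can be reloaded later without re-running random_split, one option (not from the original answer) is to save the Subset indices rather than copies of the data; dataset names follow the question and split_indices.pt is an arbitrary filename:
import torch

# after: data_train, data_val = torch.utils.data.random_split(..., generator=split_gen)
torch.save({"train": data_train.indices, "val": data_val.indices}, "split_indices.pt")

# later, rebuild the exact same subsets from the original dataset
idx = torch.load("split_indices.pt")
data_train = torch.utils.data.Subset(data_all_train, idx["train"])
data_val = torch.utils.data.Subset(data_all_train, idx["val"])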
2
2
79,112,040
2024-10-21
https://stackoverflow.com/questions/79112040/dealing-with-interlacing-lock-in-python3
I am trying to implement the following logic in Python3: # Clarification: # function f() is the only function that would acquire both locks # It is protected by other locks so f() itself has no concurrency. # It always acquires lock1 first and then acquire lock2 inside lock1 # In other words, NO thread will own lock2 and wait for lock1 def f(): lock1.acquire() task_protected_by_lock1() # Might acquire lock2 internally lock2.acquire() task_protected_by_lock1_and_lock2() lock1.release() task_protected_by_lock2() # Might acquire lock1 internally lock2.release() However, I found it impossible to correctly handle SIGINT because it will raise a KeyBoardInterrupt exception at random location. I need to guarantee that lock1 and lock2 are both released when control flow exits f() (i.e. either normal return or unhandled exception). I am aware that SIGINT can be temporarily masked. However, correctly restoring the mask becomes another challenge because it might already been masked from outside. Also, the tasks performed between locks might also tweak signal masks. I believe there has to be a better solution. I am wondering if there exist a way for me to utilize context-manager (with statement) to achieve it. I've considered the following, but none would work for my use case: Approach 1 - single with statement def f(): with lock1, lock2: task_protected_by_lock1() # Bad: acquiring lock2 internally will cause deadlock task_protected_by_lock1_and_lock2() # Good task_protected_by_lock2() # Bad: acquiring lock1 internally will cause deadlock Approach 2 - nested with statement def f(): with lock1: task_protected_by_lock1() # Good with lock2: task_protected_by_lock1_and_lock2() # Good task_protected_by_lock2() # Bad: acquiring lock1 internally will cause deadlock Approach 3 - manual lock management def f(): flag1 = False flag2 = False try: lock1.acquire() # Bad: SIGINT might be raised here flag1 = True task_protected_by_lock1() lock2.acquire() # Bad: SIGINT might be raised here flag2 = True task_protected_by_lock1_and_lock2() lock1.release() # Bad: SIGINT might be raised here flag1 = False task_protected_by_lock2() lock2.release() # Bad: SIGINT might be raised here flag2 = False except Exception as e: if flag1: lock1.release() if flag2: lock2.release() raise e Approach 4 - similar to 3, but trickier def f(): try: lock1.acquire() task_protected_by_lock1() lock2.acquire() task_protected_by_lock1_and_lock2() lock1.release() # Suppose SIGINT happened here, just after another thread acquired lock1 task_protected_by_lock2() lock2.release() except Exception as e: if lock1.locked(): lock1.release() # Bad: lock1 is NOT owned by this thread! if lock2.locked(): lock2.release() raise e Approach 5 - breaks consistency & inefficient def f(): with lock1: task_protected_by_lock1() # Bad: other thread might acquire lock1 and modify protected resources. # This breaks data consistency between 1st and 2nd task. with lock1, lock2: task_protected_by_lock1_and_lock2() # Bad: other thread might acquire lock2 and modify protected resources. # This breaks data consistency between 2nd and 3rd task. with lock2: task_protected_by_lock2() Update: Pseudo code demonstrating why I need interlacing lock Here is the logic I am trying to implement. This logic is part of a utility library, therefore the behavior of task() is dependent how user implements it. You're welcome to provide a better solution that does not require interlacing lock while retaining the exact same behavior. 
lock1 = Lock() # Guards observation/assignment of `task` lock2 = Lock() # Guards execution of `task` # Executing it might acquire lock1 and change `task` task: callable | Generator def do_task(): lock1.acquire() if not validate_task(task): task = None # `task` can be modified here if task is None: # observation of task fails return lock1.release() lock2.acquire() # Execution lock acquired before observation lock released task_snapshot = task lock1.release() # Now other process may update `task`, # But since execution lock (lock2) is owned here, # updated task will not be executed till this one finishes. try: if callable(task_snapshot): task_snapshot() else: assert isinstance(task_snapshot, Generator) # This is why lock2 is needed. # Concurrent next() will throw "generator already executing" next(task_snapshot) except: with lock1: task = None lock2.release() Desired pattern (suppose the 2nd execution of task1 updates the task): task1, task1, task1, task2, task2, .... Bad pattern (suppose the same scenario as above): task1, task1, /task2/, task1, task2, ....
You can interleave context managers by using contextlib.ExitStack, with a "stack" of just one context manager, because it lets you exit it early with the close() method: with ExitStack() as es: es.enter_context(lock_a) protected_by_a() with lock_b: protected_by_a_and_b() es.close() protected_by_b() You can even push more context managers back on to it after close() if need be, allowing you to do more complex things: with ExitStack() as es_a: with ExitStack() as es_b: es_a.enter_context(lock_a) es_b.enter_context(lock_b) protected_by_a_and_b() es_b.close() protected_by_a() es_b.enter_context(lock_b) protected_by_a_and_b() es_a.close() protected_by_b() You can even pass the exit stack objects as parameters to other functions for them to close and relock. But then it's up to you to debug the deadlocks in the monster you've created!
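Applied to the f() from the question, the first pattern might look like the sketch below. It shares the usual caveat of any Python-level locking: a KeyboardInterrupt delivered in the tiny window between acquiring a lock and registering its release can still, in principle, leak it.
from contextlib import ExitStack

def f():
    with ExitStack() as es:
        es.enter_context(lock1)              # lock1 held from here
        task_protected_by_lock1()
        with lock2:                          # lock2 held from here
            task_protected_by_lock1_and_lock2()
            es.close()                       # release lock1 early, keep lock2
            task_protected_by_lock2()
        # lock2 is released by the inner with; lock1 by the ExitStack if still held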
3
2
79,115,545
2024-10-22
https://stackoverflow.com/questions/79115545/unexpected-output-in-python-repl-using-vs-code-on-windows
I have VS Code set up with Python extension, but without Jupyter extension. VS Code lets you send Python code from the editor to what it calls the "Native REPL" (a Jupyter-type interface without full notebook capabilities) or the "Terminal REPL" (the good old Python >>> prompt). See https://code.visualstudio.com/docs/python/run. I prefer working in the Terminal REPL. I have my keyboard shortcuts configured to use Ctrl+Enter (instead of Shift+Enter) to send the selection to the terminal: [ { "key": "ctrl+enter", "command": "python.execSelectionInTerminal", "when": "editorTextFocus && !findInputFocussed && !isCompositeNotebook && !jupyter.ownsSelection && !notebookEditorFocused && !replaceInputFocussed && editorLangId == 'python'" }, { "key": "shift+enter", "command": "-python.execSelectionInTerminal", "when": "editorTextFocus && !findInputFocussed && !isCompositeNotebook && !jupyter.ownsSelection && !notebookEditorFocused && !replaceInputFocussed && editorLangId == 'python'" } ] On Linux, this works as expected. Editor: print("Hello, World!") What happens in terminal: >>> print("Hello, world!") Hello, world! On Windows, the same actions (select line, hit Ctrl+Enter) produces the following: >>> KeyboardInterrupt >>> print("Hello, world!") Hello, world! This might be some harmless little visual noise, but it seems to be interfering with plotting. When I initialize a plot with: import matplotlib.pyplot as plt fig, ax = plt.subplots() print("Hello, world!") now generates the following: >>> print("Hello, world!") Traceback (most recent call last): File "<stdin>", line 0, in <module> KeyboardInterrupt Note that putting code directly into the terminal (instead of using VS Code's send-to-terminal feature) still functions normally. How can I fix the send-to-terminal feature?
Rolling back your Python extension to 14.0 and disabling auto-update will fix the issue. Reference: https://github.com/microsoft/vscode-python/issues/24251
2
0
79,114,440
2024-10-22
https://stackoverflow.com/questions/79114440/ttk-frames-not-filling-properly
I am making a python application that uses 4 ttk Frames within its main window. The first two frames should expand both vertically and horizontally to fill available space. Frames 3 and 4 should only expand horizontally and be a fixed height vertically. This is my code so far (minimum working example): import tkinter as tk from tkinter import ttk tk_root = tk.Tk() tk_root.geometry('500x500') frame_1 = ttk.Frame(tk_root, padding=10, borderwidth=3, relief=tk.GROOVE) frame_2 = ttk.Frame(tk_root, padding=10, borderwidth=3, relief=tk.GROOVE) frame_3 = ttk.Frame(tk_root, padding=10, borderwidth=3, relief=tk.GROOVE, height=50) frame_4 = ttk.Frame(tk_root, padding=10, borderwidth=3, relief=tk.GROOVE, height=50) frame_1.pack(fill=tk.BOTH, expand=True) frame_2.pack(fill=tk.BOTH, expand=True) frame_3.pack(fill=tk.X, expand=True) frame_4.pack(fill=tk.X, expand=True) tk_root.mainloop() When i run this the first two frames only expand to 50% of the window height and empty space is left around frames 3 and 4 (screenshot). I would like to have frames 3 and 4 be a constant height while frames 1 and 2 expand to fill all available space above frames 3 and 4.
acw1668 gave the answer in a comment: Remove expand=True for frame 3 and 4. That did the trick.
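For reference, here is the script from the question with only that change applied (a sketch; everything but the last two pack calls is unchanged):
import tkinter as tk
from tkinter import ttk

tk_root = tk.Tk()
tk_root.geometry('500x500')

frame_1 = ttk.Frame(tk_root, padding=10, borderwidth=3, relief=tk.GROOVE)
frame_2 = ttk.Frame(tk_root, padding=10, borderwidth=3, relief=tk.GROOVE)
frame_3 = ttk.Frame(tk_root, padding=10, borderwidth=3, relief=tk.GROOVE, height=50)
frame_4 = ttk.Frame(tk_root, padding=10, borderwidth=3, relief=tk.GROOVE, height=50)

frame_1.pack(fill=tk.BOTH, expand=True)  # expands to take the leftover vertical space
frame_2.pack(fill=tk.BOTH, expand=True)
frame_3.pack(fill=tk.X)                  # no expand=True: keeps its requested height of 50
frame_4.pack(fill=tk.X)

tk_root.mainloop()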
1
2
79,115,254
2024-10-22
https://stackoverflow.com/questions/79115254/raise-exception-in-map-elements
Update: This was fixed by pull/20417 in Polars 1.18.0 I'm using .map_elements to apply a complex Python function to every element of a polars series. This is a toy example: import polars as pl df = pl.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) def sum_cols(row): return row["A"] + row["B"] df.with_columns( pl.struct(pl.all()) .map_elements(sum_cols, return_dtype=pl.Int32).alias("summed") ) shape: (3, 3) ┌─────┬─────┬────────┐ │ A ┆ B ┆ summed │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i32 │ ╞═════╪═════╪════════╡ │ 1 ┆ 4 ┆ 5 │ │ 2 ┆ 5 ┆ 7 │ │ 3 ┆ 6 ┆ 9 │ └─────┴─────┴────────┘ However, when my function raises an exception, Polars silently uses Nulls as the output of the computation: def sum_cols(row): raise Exception return row["A"] + row["B"] df.with_columns( pl.struct(pl.all()) .map_elements(sum_cols, return_dtype=pl.Int32).alias("summed") ) shape: (3, 3) ┌─────┬─────┬────────┐ │ A ┆ B ┆ summed │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i32 │ ╞═════╪═════╪════════╡ │ 1 ┆ 4 ┆ null │ │ 2 ┆ 5 ┆ null │ │ 3 ┆ 6 ┆ null │ └─────┴─────┴────────┘ How can I make the Polars command fail when my function raises an exception?
I'm pretty sure this is a bug in Polars. https://github.com/pola-rs/polars/issues/19315 https://github.com/pola-rs/polars/issues/14821 As a workaround, you could use .map_batches() to pass the whole "column" instead: import polars as pl df = pl.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) def sum_cols(col): raise Exception return pl.Series(row["A"] + row["B"] for row in col) df.with_columns( pl.struct(pl.all()).map_batches(sum_cols) ) Which propagates exceptions as one would expect. # ComputeError: Exception:
2
0
79,115,080
2024-10-22
https://stackoverflow.com/questions/79115080/how-to-use-ruff-as-fixer-in-vim-with-ale
I'm using ale in vim, and I want to add ruff as a fixer for Python. So, in .vimrc, I added: let g:ale_fixers = { \ 'python': ['ruff'], \ 'javascript': ['eslint'], \ 'typescript': ['eslint', 'tsserver', 'typecheck'], \} Then, when executing the ALEFix command in vim, I've got this error: 117: Unknown function: ale#fixers#ruff#Fix Do you know how to integrate ruff as a fixer in vim with ALE?
https://docs.astral.sh/ruff/editors/setup/#vim , open "With the ALE plugin for Vim or Neovim." The docs shows " Linter let g:ale_linters = { "python": ["ruff"] } " Formatter let g:ale_fixers = { "python": ["ruff_format"] }
2
-1
79,115,170
2024-10-22
https://stackoverflow.com/questions/79115170/operation-on-all-columns-of-a-type-in-modern-polars
I have a piece of code that works in Polars 0.20.19, but I don't know how to make it work in Polars 1.10. The working code (in Polars 0.20.19) is very similar to the following: def format_all_string_fields_polars() -> pl.Expr: return ( pl.when( (pl.col(pl.Utf8).str.strip().str.lengths() == 0) | # ERROR ON THIS LINE (pl.col(pl.Utf8) == "NULL") ) .then(None) .otherwise(pl.col(pl.Utf8).str.strip()) .keep_name() ) df.with_columns(format_all_string_fields_polars()) I have converted the pl.Utf8 dtype to pl.String, but it keeps giving me the same error: AttributeError: 'ExprStringNameSpace' object has no attribute 'strip' The function is supposed to perform the When-Then operation on all string fields of the dataframe, in-place, but return all columns in the dataframe (including the non-string columns as well). How do I convert this function to a working piece of code in Polars 1.10?
pl.Utf8 was renamed to pl.String .str.strip() was renamed to .str.strip_chars() .str.lengths() was split into .str.len_chars() and .str.len_bytes() .keep_name() was renamed to .name.keep() def format_all_string_fields_polars() -> pl.Expr: return ( pl.when( (pl.col(pl.String).str.strip_chars().str.len_chars() == 0) | (pl.col(pl.String) == "NULL") ) .then(None) .otherwise(pl.col(pl.String).str.strip_chars()) .name.keep() )
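A quick sanity check of the rewritten function on a small made-up frame (data is not from the question):
import polars as pl

df = pl.DataFrame({"name": ["  foo  ", "NULL", "   "], "value": [1, 2, 3]})
print(df.with_columns(format_all_string_fields_polars()))
# "name" becomes ["foo", null, null]; the non-string "value" column is untouched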
2
2
79,111,779
2024-10-21
https://stackoverflow.com/questions/79111779/how-do-i-iterate-through-table-rows-in-python
How would I loop through HTML Table Rows in Python? Just to let y'all know, I am working on the website: https://schools.texastribune.org/districts/. What I'm trying to do is click each link in the table body (?) and extract the total number of students: What I have so far: response = requests.get("https://schools.texastribune.org/districts/") soup = BeautifulSoup(response.text) data = [] for a in soup.find_all('a', {'class': 'table table-striped'}): response = requests.get(a.get('href')) asoup = BeautifulSoup(response.text) data.append({ 'url': a.get('href'), 'title': a.h2.get_text(strip=True), 'content': asoup.article.get_text(strip=True) }) pd.DataFrame(data) This is my first ever time web scraping something.
You should not have class_="td" when finding the <td> elements, they don't have any class. There are no <ul> elements in the table, so view = match.find('ul',class_="tr") won't find anything. You need to find the <a> element, get its href, and load that page to get the total students. d = {} for match in soup.find_all('td'): link = match.find("a") if link and link.get("href"): school_page = requests.get("https://schools.texastribune.org" + link.get("href")) school_soup = BeautifulSoup(school_page.text, 'lxml') total_div = school_soup.find("div", class_="metric", text="Total students") if total_div: amount = total_div.find("p", class_="metric-value") d[link.text] = amount.text print(d)
2
2