Dataset columns:
question_id: int64 (59.5M to 79.4M)
creation_date: string (length 8 to 10)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
77,821,648
2024-1-15
https://stackoverflow.com/questions/77821648/managing-allowed-hosts-in-django-for-kubernetes-health-check
I have a Django application running on Kubernetes, using an API for health checks. The issue I'm facing is that every time the IP associated with Django in Kubernetes changes, I have to manually update ALLOWED_HOSTS. django code: class HealthViewSet(ViewSet): @action(methods=['GET'], detail=False) def health(self, request): try: return Response('OK', status=status.HTTP_200_OK) except Exception as e: print(e) return Response({'response': 'Internal server error'}, status=status.HTTP_500_INTERNAL_SERVER_ERROR) deployment code : livenessProbe: httpGet: path: /health/ port: 8000 initialDelaySeconds: 15 timeoutSeconds: 5 Error: Invalid HTTP_HOST header: '192.168.186.79:8000'. You may need to add '192.168.186.79' to ALLOWED_HOSTS. Traceback (most recent call last): File "/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner response = get_response(request) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/django/utils/deprecation.py", line 135, in __call__ response = self.process_request(request) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/django/middleware/common.py", line 48, in process_request host = request.get_host() ^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/django/http/request.py", line 148, in get_host raise DisallowedHost(msg) django.core.exceptions.DisallowedHost: Invalid HTTP_HOST header: '192.168.186.79:8000'. You may need to add '192.168.186.79' to ALLOWED_HOSTS. Bad Request: /health/ Is there a way to dynamically use ALLOWED_HOSTS and avoid manual updates? (Every deployment IP changed.) ALLOWED_HOSTS ALLOWED_HOSTS = [localhost', '127.0.0.1'] Any guidance or suggestions for the best solution in this regard would be appreciated.
The ALLOWED_HOSTS setting checks the Host header sent on an HTTP request, so you can simply configure the headers sent by your liveness probe: livenessProbe: httpGet: path: /health/ port: 8000 httpHeaders: - name: host value: your.hostname.here # Configure the appropriate host here initialDelaySeconds: 15 timeoutSeconds: 5
6
5
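A settings-side alternative, if changing the probe headers is not convenient, is to append the pod's own IP to ALLOWED_HOSTS at startup. This is a minimal sketch; DJANGO_ALLOWED_HOSTS is a hypothetical environment variable you would set in the Deployment spec, not something Django reads on its own.

import os
import socket

ALLOWED_HOSTS = ["localhost", "127.0.0.1"]

# Hypothetical env var with extra comma-separated hostnames from the Deployment.
ALLOWED_HOSTS += [h for h in os.environ.get("DJANGO_ALLOWED_HOSTS", "").split(",") if h]

try:
    # Inside a pod, the hostname resolves to the pod IP, which is what the
    # kubelet targets for httpGet probes.
    ALLOWED_HOSTS.append(socket.gethostbyname(socket.gethostname()))
except OSError:
    pass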
77,794,024
2024-1-10
https://stackoverflow.com/questions/77794024/searching-existing-chromadb-database-using-cosine-similarity
I have a preexisting database with around 15 PDFs stored. I want to be able to search the database so that I'm getting the X most relevant results back given a certain threshold using cosine similarity. Currently, I've defined a collection using this code: chroma_client = chromadb.PersistentClient(path="TEST_EMBEDDINGS/CHUNK_EMBEDDINGS") collection = chroma_client.get_or_create_collection(name="CHUNK_EMBEDDINGS") I've done a bit of research and it seems to me that while ChromaDB does not have a similarity search, FAISS does. However, the existing solutions online describe to do something along the lines of this: from langchain.vectorstores import Chroma db = Chroma.from_documents(texts, embeddings) docs_score = db.similarity_search_with_score(query=query, distance_metric="cos", k = 6) I am unsure how I can integrate this code or if there are better solutions.
ChromaDB does have similarity search. The default is L2, but you can change it as documented here. collection = client.create_collection( name="collection_name", metadata={"hnsw:space": "cosine"} # l2 is the default )
3
3
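A short follow-on sketch of querying a cosine-space collection with a similarity threshold. It assumes the collection was created (or re-ingested) with the metadata above and uses Chroma's default embedding function; the 0.3 cutoff is an arbitrary placeholder.

import chromadb

client = chromadb.PersistentClient(path="TEST_EMBEDDINGS/CHUNK_EMBEDDINGS")
collection = client.get_or_create_collection(
    name="CHUNK_EMBEDDINGS",
    metadata={"hnsw:space": "cosine"},  # only takes effect when the collection is first created
)

def search(query, k=6, max_distance=0.3):
    # Chroma returns cosine *distance* (1 - cosine similarity); smaller is closer.
    res = collection.query(query_texts=[query], n_results=k)
    return [
        (doc_id, dist, doc)
        for doc_id, dist, doc in zip(res["ids"][0], res["distances"][0], res["documents"][0])
        if dist <= max_distance
    ]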
77,790,973
2024-1-10
https://stackoverflow.com/questions/77790973/lengths-of-overlapping-time-ranges-listed-by-rows
I am using pandas version 1.0.5 The example dataframe below lists time intervals, recorded over three days, and I seek where some time intervals overlap every day. For example, one of the overlapping time across all the three dates (yellow highlighted) is 1:16 - 2:13. The other (blue highlighted) would be 18:45 - 19:00 So my expected output would be like: [57,15] because 57 - Minutes between 1:16 - 2:13. 15 - Minutes between 18:45 - 19:00 Please use this generator of the input dataframe: import pandas as pd dat1 = [ ['2023-12-27','2023-12-27 00:00:00','2023-12-27 02:14:00'], ['2023-12-27','2023-12-27 03:16:00','2023-12-27 04:19:00'], ['2023-12-27','2023-12-27 18:11:00','2023-12-27 20:13:00'], ['2023-12-28','2023-12-28 01:16:00','2023-12-28 02:14:00'], ['2023-12-28','2023-12-28 02:16:00','2023-12-28 02:28:00'], ['2023-12-28','2023-12-28 02:30:00','2023-12-28 02:56:00'], ['2023-12-28','2023-12-28 18:45:00','2023-12-28 19:00:00'], ['2023-12-29','2023-12-29 01:16:00','2023-12-29 02:13:00'], ['2023-12-29','2023-12-29 04:16:00','2023-12-29 05:09:00'], ['2023-12-29','2023-12-29 05:11:00','2023-12-29 05:14:00'], ['2023-12-29','2023-12-29 18:00:00','2023-12-29 19:00:00'] ] df = pd.DataFrame(dat1,columns = ['date','Start_tmp','End_tmp']) df["Start_tmp"] = pd.to_datetime(df["Start_tmp"]) df["End_tmp"] = pd.to_datetime(df["End_tmp"])
This solution uses: numpy, no uncommon Python modules, so using Python 1.0.5 you should, hopefully, be in the clear, no nested loops to care for speed issues with growing dataset, Method: Draw the landscape of overlaps Then select the overlaps corresponding to the number of documented days, Finally describe the overlaps in terms of their lengths Number of documented days: (as in Python: Convert timedelta to int in a dataframe) n = 1 + ( max(df['End_tmp']) - min(df['Start_tmp']) ).days n 3 Additive landscape: # initial flat whole-day landcape (height: 0) L = np.zeros(24*60, dtype='int') # add up ranges: (reused @sammywemmy's perfect formula for time of day in minutes) for start, end in zip(df['Start_tmp'].dt.hour.mul(60) + df['Start_tmp'].dt.minute, # Start_tmp timestamps expressed in minutes df['End_tmp'].dt.hour.mul(60) + df['End_tmp'].dt.minute): # End_tmp timestamps expressed in minutes L[start:end+1] += 1 plt.plot(L) plt.hlines(y=[2,3],xmin=0,xmax=1400,colors=['green','red'], linestyles='dashed') plt.xlabel('time of day (minutes)') plt.ylabel('time range overlaps') (Please excuse the typo: these are obviously minutes, not seconds) Keep only overlaps over all days: (red line, n=3) # Reduce heights <n to 0 because not overlaping every day L[L<n]=0 # Simplify all greater values to 1 because only their presence matters L[L>0]=1 # Now only overlaps are highlighted # (Actually this latest line is disposable, provided we filter all but the overlaps of rank n. Useful only if you were to include lower overlaps) Extract overlap ranges and their lengths # Highlight edges of overlaping intervals D = np.diff(L) # Describe overlaps as ranges R = list(zip([a[0] for a in np.argwhere(D>0)], # indices where overlaps *begin*, with scalar indices instead of arrays [a[0]-1 for a in np.argwhere(D<0)])) # indices where overlaps *end*, with scalar indices instead of arrays R [(75, 132), (1124, 1139)] # Finally their lengths [b-a for a,b in R] Final output: [57, 15]
5
3
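The steps above condensed into one function, for readers who want to drop it straight onto the question's dataframe. It sticks to numpy and pandas features available in the old pandas 1.0.5 the question mentions; on the sample data it returns [57, 15].

import numpy as np

def daily_overlap_lengths(df):
    n_days = df["date"].nunique()                        # number of documented days
    minutes = np.zeros(24 * 60, dtype=int)               # one bucket per minute of day
    starts = df["Start_tmp"].dt.hour * 60 + df["Start_tmp"].dt.minute
    ends = df["End_tmp"].dt.hour * 60 + df["End_tmp"].dt.minute
    for s, e in zip(starts, ends):
        minutes[s:e] += 1                                 # stack each interval onto the landscape
    present = (minutes >= n_days).astype(int)             # 1 where every day overlaps
    edges = np.diff(np.concatenate(([0], present, [0])))
    run_starts = np.flatnonzero(edges == 1)
    run_ends = np.flatnonzero(edges == -1)                # exclusive end of each run
    return list(run_ends - run_starts)

print(daily_overlap_lengths(df))   # [57, 15]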
77,818,902
2024-1-15
https://stackoverflow.com/questions/77818902/how-to-make-connections-between-two-wells
I have a list of wells that stores two wells and the contents of these wells in the form [depth interval from, depth interval to, soil description] wells = [ [[0, -4, 'soil'], [-4, -8, 'clay'], [-8, -12, ' gravel'], [-12, -20, 'sand'], [-20, -24, 'basalts']], [[0, -4, 'soil'], [-4, -16, 'sand'], [-16, -20, 'galka'], [-20, -32, 'pebble'], [-32, -36, 'basalts']] ] We need to draw lines between all the soil layers so that we can get a full geological section with all the layers This is what these two wells look like, where different or identical rocks are marked in color. you need to connect all the same rocks and correctly identify those rocks that are only in one well and this is what the final result of all connections looks like: all these connections can be made based on the data that I provided. import matplotlib.pyplot as plt wells = [ [[0, -4, 'soil'], [-4, -8, 'clay'], [-8, -12, ' gravel'], [-12, -20, 'sand'], [-20, -24, 'basalts']], [[0, -4, 'soil'], [-4, -16, 'sand'], [-16, -20, 'galka'], [-20, -32, 'pebble'], [-32, -36, 'basalts']] ] fig, ax = plt.subplots() depths_dict = {} colors = {'soil': 'red', 'sand': 'blue', 'gravel': 'green', 'pebble': 'orange', 'basalts': 'purple', 'clay': 'brown', 'galka': 'pink'} for well_index, well in enumerate(wells): for interval in well: start_depth, end_depth, description = interval color = colors.get(description, 'gray') if description in depths_dict: ax.plot([well_index - 1, well_index], [depths_dict[description][0], start_depth], color=color, linestyle='--') ax.plot([well_index - 1, well_index], [depths_dict[description][1], end_depth], color=color, linestyle='--') depths_dict[description] = (start_depth, end_depth) ax.plot([well_index, well_index], [start_depth, end_depth], color=color, label=description) ax.set_xlabel('Wells') ax.set_ylabel('Depth') ax.set_title('Profile') legend_handles = [plt.Line2D([0], [0], marker='o', color='w', markerfacecolor=colors[rock], markersize=10, label=rock) for rock in colors] ax.legend(handles=legend_handles) plt.show() but this code connects intervals with the same description
All i had to do was compare all the layers from top to bottom and draw connections so that the deeper layers lay lower than the shallower ones wells = [ [ [24, 20, 'basalts'], [20, 12, 'sand'], [12, 8, 'graviy'], [8, 4, 'clay'], [4, 0, 'soil'] ], [ [36, 32, 'basalts'], [32, 20, 'galka'], [20, 16, 'sheben'], [16, 4, 'sand'], [4, 0, 'soil'] ] ] def lith_layers_comparison(well_1, well_2): matching_points = [] i, j = 0, 0 while i < len(well_1) and j < len(well_2): p1, p2 = well_1[i], well_2[j] if p1[2] == p2[2]: matching_points.extend([[p1[0], p2[0]], [p1[1], p2[1]]]) i, j = i + 1, j + 1 elif p1[2] != p2[2] and p1[1] < p2[1]: matching_points.append([p1[0], p2[1]]) j += 1 elif p1[2] != p2[2] and p1[1] > p2[1]: matching_points.append([p1[1], p2[0]]) i += 1 elif p1[2] != p2[2] and p1[1] == p2[1] and p1[0] < p2[0]: matching_points.append([p1[0], p2[1]]) j += 1 elif p1[2] != p2[2] and p1[1] == p2[1] and p1[0] > p2[0]: matching_points.append([p1[1], p2[0]]) i += 1 elif p1[2] != p2[2] and p1[1] == p2[1] and p1[0] == p2[0]: if p1[3] > p2[3]: matching_points.append([p1[0], p2[1]]) j += 1 else: matching_points.append([p1[1], p2[0]]) i += 1 return matching_points reversed_connections = [] for i in range(len(wells) - 1): connections = lith_layers_comparison(wells[i], wells[i+1]) filtered_connections = [] for connection in connections: if connection not in filtered_connections: filtered_connections.append(connection) reversed_connections.append(filtered_connections) sorted_connections = [[list(map(lambda x: -x, sub)) for sub in sublist[::-1]] for sublist in reversed_connections] print(sorted_connections)
3
0
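A small, hypothetical plotting helper to visualise the result: it draws each well as a vertical column and every computed [left_depth, right_depth] pair as a dashed tie line. It assumes the negative-depth convention of the question's original wells list and the sorted_connections produced above.

import matplotlib.pyplot as plt

def plot_section(wells_neg, connections):
    fig, ax = plt.subplots()
    for x, well in enumerate(wells_neg):
        for top, bottom, name in well:
            ax.plot([x, x], [top, bottom], linewidth=6)                # the well column
            ax.text(x + 0.03, (top + bottom) / 2, name, va="center")   # layer label
    for left_depth, right_depth in connections:
        ax.plot([0, 1], [left_depth, right_depth], "k--", linewidth=0.8)  # tie line between wells
    ax.set_xlabel("Wells")
    ax.set_ylabel("Depth")
    ax.set_title("Profile")
    plt.show()

# e.g. with the question's original two wells and the first set of connections:
# plot_section(original_wells, sorted_connections[0])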
77,806,733
2024-1-12
https://stackoverflow.com/questions/77806733/download-oecd-api-data-using-python-and-sdmx
I'm trying to execute the following SDMX query using Python from the OECD's database. https://stats.oecd.org/restsdmx/sdmx.ashx/GetData/LAND_COVER_FUA/AUS+AUT.FOREST+GRSL+WETL+SHRUBL+SPARSE_VEGETATION+CROPL+URBAN+BARE+WATER.THOUSAND_SQKM+PCNT/all?startTime=1992&endTime=2019 I've looked around but can't get this to work with the existing packages and tutorials out there (I've looked at this link for example). Would you be able to help? I can't seem to get the various libraries (pandasdmx, cif, etc.) to work with it. Thanks so much in advance!
Using sdmx1: import sdmx OECD = sdmx.Client("OECD_JSON") key = dict( COU="AUS AUT".split(), FUA=[], VARIABLE="FOREST GRSL WETL SHRUBL SPARSE_VEGETATION CROPL URBAN BARE WATER".split(), MEAS="THOUSAND_SQKM PCNT".split(), ) # Assemble into a string key_str = ".".join("+".join(values) for values in key.values()) print(f"{key_str = }") # Commented: these keys are invalid # key_str = "AUS+AUT.FOREST+GRSL+WETL+SHRUBL+SPARSE_VEGETATION+CROPL+URBAN+BARE+WATER.THOUSAND_SQKM+PCNT" # key_str = "AUS+AUT.FOREST+GRSL+WETL+SHRUBL+SPARSE_VEGETATION+CROPL+URBAN+BARE+WATER.THOUSAND_SQKM+PCNT." # Retrieve a data message dm = OECD.data( "LAND_COVER_FUA", key=key_str, params=dict(startPeriod=1992, endPeriod=2019), ) # Retrieve the first data set in the message ds = dm.data[0] # Convert to pandas print(sdmx.to_pandas(ds)) This gives output like: $ python q.py key_str = 'AUS+AUT..FOREST+GRSL+WETL+SHRUBL+SPARSE_VEGETATION+CROPL+URBAN+BARE+WATER.THOUSAND_SQKM+PCNT' COU FUA VARIABLE MEAS TIME_PERIOD AUS AUS01 BARE THOUSAND_SQKM 1992 0.000618 2004 0.000618 2015 0.000618 2018 0.000541 2019 0.000541 ... AUT AT005L4 WATER THOUSAND_SQKM 1992 0.001314 2004 0.001314 2015 0.001314 2018 0.001314 2019 0.002319 Name: value, Length: 2160, dtype: float64 A few key points to understand: The URL you give (with /GetData/) indicates an SDMX 2.0 API. These are very old and few tools support them. OECD provides both an SDMX-ML 2.1 and an SDMX-JSON API. The package documentation above includes some notes on these. This particular data flow (LAND_COVER_FUA) does not appear to be available from the "OECD" (SDMX-ML) source, only from the "OECD_JSON" one. The package can only automatically construct a key from a dict when SDMX-ML is available. Since we have to use the JSON endpoint, we manually construct the key. Parameters like "startTime=1992" are outdated/incorrect; the current form for SDMX 2.1 and later is "startPeriod=1992". Your key only has 3 parts, but this data flow has 4 dimensions. If you uncomment the first line above that sets key_str explicitly, the web service responds: Semantic Error - Wrong number of non-time dimensions provided. Expected 4. Query contains 3 If we randomly guess and put an extra period at the end (second commented key_str), we get the error: Semantic Error - Dimension 'FUA' does not contain code(s) 'FOREST,GRSL,WETL,SHRUBL,SPARSE_VEGETATION,CROPL,URBAN,BARE,WATER' This indicates that (a) the missing dimension is the second one and (b) it has an id "FUA". So we insert this in key. Notice that the resulting, valid key string contains +AUT..FOREST+. These two consecutive periods (..) indicate "No specific labels/return all labels for this dimension."
3
2
77,815,914
2024-1-14
https://stackoverflow.com/questions/77815914/macos-sonoma-cron-job-doesnt-have-access-to-trash-even-though-it-has-full-sy
Edit: MacOS Sonoma Version 14.2.1 I am running a python script via crontab, and the script runs, but I get an error when trying to iterate the ~/.Trash directory: PermissionError: [Errno 1] Operation not permitted: '/Users/me/.Trash' I have enabled full disk access for: /usr/sbin/cron, /usr/bin/crontab, and terminal.app, but still have the same problem. If I run the command directly, it works fine, but when cron runs it, I get the error above. I have tried a few different crontab entries, but get the same result from all of them (I've ran each version directly and each works fine when not ran via cron). */5 * * * * /Users/me/miniforge3/envs/dev/bin/fclean >> /dev/null 2>&1 */5 * * * * /Users/me/miniforge3/envs/dev/bin/python /Users/me/miniforge3/envs/dev/bin/fclean >> /dev/null 2>&1 */5 * * * * /Users/me/miniforge3/envs/dev/bin/python /Users/me/path/to/file.py >> /dev/null 2>&1 if it's helpful the python function that's raising the permission issue is: def clean_folder(folder: Path, _time: int = days(30)) -> None: """ If a file in the specified path hasn't been accessed in the specified days; remove it. Args: folder (Path): Path to folder to iterate through _time (int): optional time parameter to pass as expiration time. Returns: None """ for file in folder.iterdir(): if expired(file, _time): try: rm_files(file) except PermissionError as permission: logging.exception(permission) continue except Exception as _err: logging.exception(_err) continue
I cross posted this issue in the apple developer forums: Here In one of the responses I was linked a really great thread that helps explain some of what is going on: Here Here's the snippet that's most applicable to the situation. Scripting MAC presents some serious challenges for scripting because scripts are run by interpreters and the system can’t distinguish file system operations done by the interpreter from those done by the script. For example, if you have a script that needs to manipulate files on your desktop, you wouldn’t want to give the interpreter that privilege because then any script could do that. The easiest solution to this problem is to package your script as a standalone program that MAC can use for its tracking. This may be easy or hard depending on the specific scripting environment. For example, AppleScript makes it easy to export a script as a signed app, but that’s not true for shell scripts. I was able to provide Full Disk Access to the python interpreter and that allows cron to run the python script and access the ~/.Trash directory. As @eskimo1 points out in the article - that means any script running in that environment has Full Disk Access. So I will be looking at creating a package for my script in the future.
4
1
77,823,058
2024-1-16
https://stackoverflow.com/questions/77823058/how-create-a-2-row-table-header-with-docutils
I wrote an extension for Sphinx to read code coverage files and present them as a table in a Sphinx generated HTML documentation. Currently the table has a single header row with e.g. 3 columns for statement related values and 4 columns for branch related data. I would like to create a 2 row table header, so multiple columns are grouped. In pure HTML it would be done by adding colspan=3. But how to solve that question with docutils? The full sources can be found here: https://github.com/pyTooling/sphinx-reports/blob/main/sphinx_reports/CodeCoverage.py#L169 Interesting code is this: def _PrepareTable(self, columns: Dict[str, int], identifier: str, classes: List[str]) -> Tuple[nodes.table, nodes.tgroup]: table = nodes.table("", identifier=identifier, classes=classes) tableGroup = nodes.tgroup(cols=(len(columns))) table += tableGroup tableRow = nodes.row() for columnTitle, width in columns.items(): tableGroup += nodes.colspec(colwidth=width) tableRow += nodes.entry("", nodes.paragraph(text=columnTitle)) tableGroup += nodes.thead("", tableRow) return table, tableGroup def _GenerateCoverageTable(self) -> nodes.table: # Create a table and table header with 5 columns table, tableGroup = self._PrepareTable( identifier=self._packageID, columns={ "Module": 500, "Total Statements": 100, "Excluded Statements": 100, "Covered Statements": 100, "Missing Statements": 100, "Total Branches": 100, "Covered Branches": 100, "Partial Branches": 100, "Missing Branches": 100, "Coverage in %": 100 }, classes=["report-doccov-table"] ) tableBody = nodes.tbody() tableGroup += tableBody
The magic of multiple cells spanning rows or columns is done by morerows and morecols. In addition, merged cells need to be set as None. I found it by investigating the code for the table parser. Like always with Sphinx and docutils, such features are not documented (but isn't docutils and Sphinx meant to document code/itself?). Anyhow, I created a helper method which returns a table node with header rows in it. I used a simple approach to describe header columns in the primary rows that are divided into more columns in a secondary row. Alternatively, @Bhav-Bhela demonstrated a description technique for deeper nesting. The method expects a list of primary column descriptions, which is a tuple of column title, optional list of secondary columns, column width. If the secondary column list is present, then no column width is needed for the primary row. In the secondary row, a tuple of title and column width is used. from typing import Optional as Nullable List[ Tuple[str, Nullable[List[ Tuple[str, int]] ], Nullable[int]] ] class BaseDirective(ObjectDescription): # ... def _PrepareTable(self, columns: List[Tuple[str, Nullable[List[Tuple[str, int]]], Nullable[int]]], identifier: str, classes: List[str]) -> Tuple[nodes.table, nodes.tgroup]: table = nodes.table("", identifier=identifier, classes=classes) hasSecondHeaderRow = False columnCount = 0 for groupColumn in columns: if groupColumn[1] is not None: columnCount += len(groupColumn[1]) hasSecondHeaderRow = True else: columnCount += 1 tableGroup = nodes.tgroup(cols=columnCount) table += tableGroup # Setup column specifications for _, more, width in columns: if more is None: tableGroup += nodes.colspec(colwidth=width) else: for _, width in more: tableGroup += nodes.colspec(colwidth=width) # Setup primary header row headerRow = nodes.row() for columnTitle, more, _ in columns: if more is None: headerRow += nodes.entry("", nodes.paragraph(text=columnTitle), morerows=1) else: morecols = len(more) - 1 headerRow += nodes.entry("", nodes.paragraph(text=columnTitle), morecols=morecols) for i in range(morecols): headerRow += None tableHeader = nodes.thead("", headerRow) tableGroup += tableHeader # If present, setup secondary header row if hasSecondHeaderRow: tableRow = nodes.row() for columnTitle, more, _ in columns: if more is None: tableRow += None else: for columnTitle, _ in more: tableRow += nodes.entry("", nodes.paragraph(text=columnTitle)) tableHeader += tableRow return table, tableGroup It's then used like that: class CodeCoverage(BaseDirective): # ... def _GenerateCoverageTable(self) -> nodes.table: # Create a table and table header with 10 columns table, tableGroup = self._PrepareTable( identifier=self._packageID, columns=[ ("Package", [ (" Module", 500) ], None), ("Statments", [ ("Total", 100), ("Excluded", 100), ("Covered", 100), ("Missing", 100) ], None), ("Branches", [ ("Total", 100), ("Covered", 100), ("Partial", 100), ("Missing", 100) ], None), ("Coverage", [ ("in %", 100) ], None) ], classes=["report-codecov-table"] ) tableBody = nodes.tbody() tableGroup += tableBody def run(self) -> List[nodes.Node]: self._CheckOptions() container = nodes.container() container += self._GenerateCoverageTable() return [container] The full code can be found here: Sphinx.py:BaseDirective._PrepareTable The result looks like this: Link to example: https://pytooling.github.io/sphinx-reports/coverage/index.html
2
2
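For reference, the span mechanics from the answer reduced to the smallest sketch I could make: one primary cell spanning both header rows (morerows=1), one spanning two columns (morecols=1), and the spanned-over slots appended as None exactly as in the code above. The column titles are placeholders.

from docutils import nodes

def two_row_header():
    table = nodes.table()
    tgroup = nodes.tgroup(cols=3)
    table += tgroup
    for width in (300, 100, 100):
        tgroup += nodes.colspec(colwidth=width)
    thead = nodes.thead()
    tgroup += thead
    row1 = nodes.row()
    row1 += nodes.entry("", nodes.paragraph(text="Module"), morerows=1)      # spans both header rows
    row1 += nodes.entry("", nodes.paragraph(text="Statements"), morecols=1)  # spans two columns
    row1 += None                                                             # slot covered by the span
    row2 = nodes.row()
    row2 += None                                                             # slot covered by "Module"
    row2 += nodes.entry("", nodes.paragraph(text="Total"))
    row2 += nodes.entry("", nodes.paragraph(text="Covered"))
    thead += row1
    thead += row2
    tgroup += nodes.tbody()
    return table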
77,819,010
2024-1-15
https://stackoverflow.com/questions/77819010/how-to-find-the-points-of-intersection-between-a-circle-and-an-ellipsem
So I have been trying to solve a question lately, given an ellipse and a circle such that the center of the circle lies on the ellipse. We need to find the area between the two curves. As for inputs we have a,b (The axes of the ellipse). R(The radius of the circle) and Theta(The angle that the circles center makes with the horizontal axis) Also the ellipse is centred at the origin. I figured out that we can get the co ordinates for the center of the circle and then solve the two equations giving me the necessary points which can be then used to get the correct area. However finding those points actually leads to a fourth order equation, and I am stuck at finding the necessary intersection points. I tried solving the equation using numpy but that also gave a huge overhead. Another approach that I was thinking of was trying to fit a convex polygon within the region and then return it's area however I am clueless with regards to the accuracy and complexity of this method. Thank you.
I think the easiest way to do this is simply by numerical integration. You can find the lower and upper curves for both circle and ellipse and hence the top and bottom parts of the area at any x value. You can also readily find the limits of integration. Note also that, for the purposes of finding the relevant area, you can rotate and reflect such that the circle centre is in the first quadrant of the ellipse. I think the below code is OK, but you'll have to count squares on the graph as a rough check. Note that the angle is specified in degrees, between 0 and 360. from math import cos, sin, sqrt, pi import matplotlib.pyplot as plt import numpy as np def getOverlap( a, b, R, phi_deg, N ): phi_rad = phi_deg * pi / 180.0 # Identify point of intersection (= centre of circle) scale = a / sqrt( cos( phi_rad ) ** 2 + ( a * sin( phi_rad ) / b ) ** 2 ) x0, y0 = scale * cos( phi_rad ), scale * sin( phi_rad ) # WLOG, work in first quadrant xc, yc = abs( x0 ), abs( y0 ) # Set maximum possible limits of numerical integration # Note: height of intersection may still be 0 between part of these bounds left, right = max( xc - R, -a ), min( xc + R, a ) # Numerical integration (mid-ordinate rule) dx = ( right - left ) / N area = 0.0 for i in range( N ): x = left + ( i + 0.5 ) * dx dy = sqrt( R ** 2 - ( x - xc ) ** 2 ) circle_top = yc + dy circle_bottom = yc - dy dy = b * sqrt( 1.0 - ( x / a ) ** 2 ) ellipse_top = dy ellipse_bottom = -dy top = min( circle_top , ellipse_top ) bottom = max( circle_bottom, ellipse_bottom ) if top > bottom: area += ( top - bottom ) * dx return area, x0, y0 a, b, R, phi_deg = 3.0, 1.0, 2.5, 30.0 N = 1000000 overlap, x0, y0 = getOverlap( a, b, R, phi_deg, N ) print( "Area of overlap = ", overlap ) npts = 1000 t = np.linspace( 0.0, 2 * pi, npts ) xellipse, yellipse = a * np.cos( t ), b * np.sin( t ) xcircle , ycircle = x0 + R * np.cos( t ), y0 + R * np.sin( t ) plt.plot( xcircle, ycircle ) plt.plot( xellipse, yellipse ) plt.axis( "square" ) plt.grid() plt.show() Output: Area of overlap = 6.213925590052305 Graph: An alternative case with 4 points of intersection instead of 2 would be a, b, R, phi_deg = 5.0, 1.0, 3.0, 90.0 whence Area of overlap = 10.49361094712482
3
2
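If the optional shapely package is available, the integral above can be cross-checked by polygonising both curves and intersecting them. This is only a sanity check under that extra dependency, not part of the answer's method.

from math import cos, sin, sqrt, pi
import numpy as np
from shapely.geometry import Polygon

def overlap_check(a, b, R, phi_deg, npts=4096):
    phi = phi_deg * pi / 180.0
    scale = a / sqrt(cos(phi) ** 2 + (a * sin(phi) / b) ** 2)
    x0, y0 = scale * cos(phi), scale * sin(phi)   # circle centre lies on the ellipse
    t = np.linspace(0.0, 2 * pi, npts, endpoint=False)
    ellipse = Polygon(np.column_stack([a * np.cos(t), b * np.sin(t)]))
    circle = Polygon(np.column_stack([x0 + R * np.cos(t), y0 + R * np.sin(t)]))
    return ellipse.intersection(circle).area

print(overlap_check(3.0, 1.0, 2.5, 30.0))   # ~6.2139, close to the integration result above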
77,826,724
2024-1-16
https://stackoverflow.com/questions/77826724/implementing-frustum-culling-in-python-ray-tracing-camera
I'm working on a ray tracing project in Python and have encountered a performance bottleneck. I believe implementing frustum culling in my Camera class could significantly improve rendering times. Camera: class Camera(): def __init__(self, look_from, look_at, screen_width = 400 ,screen_height = 300, field_of_view = 90., aperture = 0., focal_distance = 1.): self.screen_width = screen_width self.screen_height = screen_height self.aspect_ratio = float(screen_width) / screen_height self.look_from = look_from self.look_at = look_at self.camera_width = np.tan(field_of_view * np.pi/180 /2.)*2. self.camera_height = self.camera_width/self.aspect_ratio self.cameraFwd = (look_at - look_from).normalize() self.cameraRight = (self.cameraFwd.cross(vec3(0.,1.,0.))).normalize() self.cameraUp = self.cameraRight.cross(self.cameraFwd) self.lens_radius = aperture / 2. self.focal_distance = focal_distance self.x = np.linspace(-self.camera_width/2., self.camera_width/2., self.screen_width) self.y = np.linspace(self.camera_height/2., -self.camera_height/2., self.screen_height) xx,yy = np.meshgrid(self.x,self.y) self.x = xx.flatten() self.y = yy.flatten() def get_ray(self,n): x = self.x + (np.random.rand(len(self.x )) - 0.5)*self.camera_width /(self.screen_width) y = self.y + (np.random.rand(len(self.y )) - 0.5)*self.camera_height /(self.screen_height) r = np.sqrt(np.random.rand(x.shape[0])) phi = np.random.rand(x.shape[0])*2*np.pi ray_origin = self.look_from + self.cameraRight *r * np.cos(phi)* self.lens_radius + self.cameraUp *r * np.sin(phi)* self.lens_radius return Ray(origin=ray_origin, dir=(self.look_from + self.cameraUp*y*self.focal_distance + self.cameraRight*x*self.focal_distance + self.cameraFwd*self.focal_distance - ray_origin ).normalize(), depth=0, n=n, reflections = 0, transmissions = 0, diffuse_reflections = 0)
You just need to modify the Camera class as following: class Camera(): def __init__(self, look_from, look_at, screen_width = 400 ,screen_height = 300, field_of_view = 90., aperture = 0., focal_distance = 1.): self.screen_width = screen_width self.screen_height = screen_height self.aspect_ratio = float(screen_width) / screen_height self.look_from = look_from self.look_at = look_at self.camera_width = np.tan(field_of_view * np.pi / 180 / 2.) * 2. self.camera_height = self.camera_width / self.aspect_ratio self.cameraFwd = (look_at - look_from).normalize() self.cameraRight = (self.cameraFwd.cross(vec3(0., 1., 0.))).normalize() self.cameraUp = self.cameraRight.cross(self.cameraFwd) self.lens_radius = aperture / 2. self.focal_distance = focal_distance self.near = .1 self.far = 100. self.x = np.linspace(-self.camera_width / 2., self.camera_width / 2., self.screen_width) self.y = np.linspace(self.camera_height / 2., -self.camera_height / 2., self.screen_height) xx, yy = np.meshgrid(self.x, self.y) self.x = xx.flatten() self.y = yy.flatten() def get_ray(self,n): x = self.x + (np.random.rand(len(self.x)) - 0.5) * self.camera_width / (self.screen_width) y = self.y + (np.random.rand(len(self.y)) - 0.5) * self.camera_height / (self.screen_height) ray_origin = self.look_from + self.cameraRight * x * self.near + self.cameraUp * y * self.near return Ray(origin=ray_origin, dir=(self.look_from + self.cameraUp * y * self.focal_distance +self.cameraRight * x * self.focal_distance +self.cameraFwd * self.focal_distance - ray_origin).normalize(), depth=0, n=n, reflections=0, transmissions=0, diffuse_reflections=0)
2
6
77,817,356
2024-1-15
https://stackoverflow.com/questions/77817356/how-to-correctly-define-a-classmethod-that-accesses-a-value-of-a-mangled-child-a
In Python, how do I correctly define a classmethod of a parent class that references an attribute of a child class? from enum import Enum class LabelledEnum(Enum): @classmethod def list_labels(cls): return list(l for c, l in cls.__labels.items()) class Test(LabelledEnum): A = 1 B = 2 C = 3 __labels = { 1: "Label A", 2: "Custom B", 3: "Custom label for value C + another string", } print(Test.list_labels()) # expected output # ["Label A", "Custom B", "Custom label for value C + another string"] In the code above I expect that Test.list_labels() will correctly print out the labels, however because the __labels dictionary is defined with the double underscore, I cannot access it correctly. The reason I wanted to have double underscore is to make sure that the labels would not show up when iterating over the enumerator, e.g. list(Test) should not show the dictionary containing labels.
Note: This answer was originally a comment to the question. I strongly advise taking a different approach, like: Ethan Furman's answer (author of Python's Enum) Or juanpa.arrivillaga's answer Python 3.11+ I do not suggest using private names. That being said, if for some reason you must use private names and you can't use the @enum.nonmember decorator, which is a much better approach. Then the following will work in Python 3.11+. The _Private__names section in Enum HOWTO states: Private names are not converted to enum members, but remain normal attributes. You could do something really ugly like: getattr(cls, f"_{cls.__name__}__labels", {}) from enum import Enum class LabelledEnum(Enum): @classmethod def list_labels(cls): # account for private name mangling labels = getattr(cls, f"_{cls.__name__}__labels", {}) return list(l for c, l in labels.items()) class Test(LabelledEnum): A = 1 __labels = { 1: "Label A" } print(Test.list_labels()) # ['Label A'] Python < 3.11 In Python versions less than 3.11, __labels will become the _Test__labels enum member of Test. And the above code will raise an error, due to getattr returning the enum rather than a dict. print(Test.__members__) #{'A': <Test.A: 1>, '_Test__labels': <Test._Test__labels: {1: 'Label A'}>} print(type(Test._Test__labels)) #<enum 'Test'> Also, in Python 3.9 and 3.10, using private names in an enum class will cause a DeprecationWarning, similar to the following: DeprecationWarning: private variables, such as '_Test__labels', will be normal attributes in 3.10
4
2
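For completeness, a sketch of the nonmember route the answer recommends over private names (Python 3.11+ only): the mapping stays a plain class attribute, so it never shows up when iterating the enum and no name mangling is involved. The _labels name here is my own choice, not part of the original code.

import enum

class LabelledEnum(enum.Enum):
    @classmethod
    def list_labels(cls):
        labels = getattr(cls, "_labels", {})
        return [labels.get(member.value, member.name) for member in cls]

class Test(LabelledEnum):
    A = 1
    B = 2
    # nonmember keeps this out of the enum's members (Python 3.11+)
    _labels = enum.nonmember({1: "Label A", 2: "Custom B"})

print(Test.list_labels())  # ['Label A', 'Custom B']
print(list(Test))          # [<Test.A: 1>, <Test.B: 2>]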
77,827,942
2024-1-16
https://stackoverflow.com/questions/77827942/connection-and-cursor-still-usable-outside-with-block
My system: Windows 10 x64 Python version 3.9.4 SQLite3 module integrated in the standard library A DB connection and a cursor are both considered resources, so that I can use a with clause. I know that if I open a resource in a with block, it will be automatically closed outside it (files work in this way). If these assumptions are correct, why can I access a connection or a cursor even from outside the with block? Try this: import sqlite3 with sqlite3.connect('test.db') as conn: cur = conn.cursor() # some code here.... # Now we are outside with block but I can still use conn and cur cur.execute('''CREATE TABLE IF NOT EXISTS users (name TEST, surname TEXT) ''') cur2 = conn.cursor()
The context manager does not close the connection on exit; it either commits the last transaction (if no exception was raised) or rolls it back. From the documentation: Note The context manager neither implicitly opens a new transaction nor closes the connection. If you need a closing context manager, consider using contextlib.closing(). Related, there is no new scope associated with the with statement, so both conn and cur are still in scope following the statement. If you want the with statement to close the connection, do as the documentation suggests: from contextlib import closing with closing(sqlite3.connect('test.db')) as conn: ...
2
1
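Putting the two behaviours together, as a small sketch: closing() guarantees the connection is released, while the inner with conn: still wraps the writes in a transaction.

import sqlite3
from contextlib import closing

with closing(sqlite3.connect("test.db")) as conn:
    with conn:  # transaction scope: commit on success, rollback on exception
        conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, surname TEXT)")
        conn.execute("INSERT INTO users VALUES (?, ?)", ("Ada", "Lovelace"))

# The connection is closed here; calling conn.execute(...) now raises
# sqlite3.ProgrammingError: Cannot operate on a closed database.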
77,827,341
2024-1-16
https://stackoverflow.com/questions/77827341/searching-a-list-of-lists-with-unique-keys
I have a list of track information from Spotify: track_list = [[track_id1, track_title1, track_popularity1],[track_id2, track_title2, track_popularity2]] Each track_id is unique. Is there a way to find the entry that matches the track_id without iterating through the entire list every time? def find_track(track_id, track_list): for track in track_list: if track[0] == track_id: return track break return('Not Found') I know I could get the item I need by using a for loop, but I suspect there's a more efficient way that I've yet to discover.

The most efficient method for this purpose would be using a dictionary, where the keys are the track_ids and the values are the corresponding track information. This allows for constant-time lookups (O(1) complexity), which is significantly faster than iterating over the entire list (O(n) complexity). Here's an example: def list_to_dict(track_list): return {track[0]: track for track in track_list} def find_track(track_id, track_dict): return track_dict.get(track_id, 'Not Found') # Example usage: track_list = [ ['track_id1', 'track_title1', 'track_popularity1'], ['track_id2', 'track_title2', 'track_popularity2'] ] track_dict = list_to_dict(track_list) track = find_track('track_id1', track_dict) print(track)
2
4
77,825,686
2024-1-16
https://stackoverflow.com/questions/77825686/error-installing-faiss-cpu-no-module-named-swig
I'm trying to install faiss-cpu via pip (pip install faiss-cpu) and get the following error: × Building wheel for faiss-cpu (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [12 lines of output] running bdist_wheel running build running build_py running build_ext building 'faiss._swigfaiss' extension swigging faiss/faiss/python/swigfaiss.i to faiss/faiss/python/swigfaiss_wrap.cpp swig -python -c++ -Doverride= -I/usr/local/include -Ifaiss -doxygen -module swigfaiss -o faiss/faiss/python/swigfaiss_wrap.cpp faiss/faiss/python/swigfaiss.i Traceback (most recent call last): File "/Users/me/langchain/venv/bin/swig", line 5, in <module> from swig import swig ModuleNotFoundError: No module named 'swig' error: command '/Users/me/langchain/venv/bin/swig' failed with exit code 1 [end of output] I've searched around and tried various solutions but nothing has worked. I have the following setup: Python 3.12.1 pip 23.3.2 SWIG Version 4.1.1 Python swig package (via pip): 4.1.1.post1
faiss-cpu is not available in python-3.12. According to their page on pypi it's available from python-3.7 to python-3.11. You need to downgrade your python in order to install faiss-cpu in your system. Now you can either remove python-3.12 and install python-3.10 on your system. Or use conda to create a virtual env with python-3.10 like following: conda create -n myenv python=3.10 then to activate: conda activate myenv now install faiss-cpu: pip install faiss-cpu
3
4
77,808,226
2024-1-12
https://stackoverflow.com/questions/77808226/pydantic-pass-the-entire-dataset-to-a-nested-field
I am using django, django-ninja framework to replace some of my apis ( written in drf, as it is becoming more like a boilerplate codebase ). Now while transforming some legacy api, I need to follow the old structure, so the client side doesn't face any issue. This is just the backstory. I have two separate models. class Author(models.Model): username = models.CharField(...) email = models.CharField(...) ... # Other fields class Blog(models.Model): title = models.CharField(...) text = models.CharField(...) tags = models.CharField(...) author = models.ForeignKey(...) ... # Other fields The structure written in django rest framework serializer class BlogBaseSerializer(serializers.Serializer): class Meta: model = Blog exclude = ["author"] class AuthorSerializer(serializers.Serializer): class Meta: model = Author fields = "__all__" class BlogSerializer(serializers.Serializer): blog = BlogBaseSerializer(source="*") author = AuthorSerializer() In viewset the following queryset will be passed class BlogViewSet(viewsets.GenericViewSet, ListViewMixin): queryset = Blog.objects.all() serializer_class = BlogSerializer ... # Other config So, as I am switching to django-ninja which uses pydantic for schema generation. I have the following code for pydantic schema AuthorSchema = create_schema(Author, exclude=["updated", "date_joined"]) class BlogBaseSchema(ModelSchema): class Meta: model = Blog exclude = ["author", ] class BlogSchema(Schema): blog: BlogBaseSchema author: AuthorSchema But as you can see, drf serializer has a parameter called source, where source="*" means to pass the entire original dataset to the nested field serializer. Is there any option to do the exact same with pydantic? Except for creating a list of dictionaries [{author: blog.author, "blog": blog} for blog in queryset]
Resolved the problem with the following code class AuthorSchema(ModelSchema): class Meta: model = Author exclude=["updated", "date_joined"] class BlogBaseSchema(ModelSchema): class Meta: model = Blog exclude = ["author", ] class BlogSchema(Schema): blog: BlogBaseSchema author: AuthorSchema @staticmethod def resolve_blog(self, obj): return obj
4
2
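A hypothetical endpoint showing the schema above in use; the route path and name are placeholders. Because resolve_blog returns the Blog instance itself, each object fills both the nested blog and author fields, mirroring drf's source="*".

from typing import List
from ninja import Router

router = Router()

@router.get("/blogs", response=List[BlogSchema])
def list_blogs(request):
    # select_related avoids one extra query per author when the schema serialises it
    return Blog.objects.select_related("author")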
77,813,892
2024-1-14
https://stackoverflow.com/questions/77813892/what-is-the-minimal-gdscript-equivalent-of-python-websocket-client-code
I'm attempting to program a simple turn-based game in Godot which relies on a Python websocket server for rules and state updates. To that end, I'm trying to make a simple prototype where I can input text into a Godot client, send it to the websocket, and receive some data back - an 'echo' configuration. I have server code outlined already: # server.py import socket HOST = "127.0.0.1" PORT = 65432 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.bind((HOST, PORT)) s.listen() conn, addr = s.accept() with conn: print(f"Connected by {addr}") while True: data = conn.recv(1024) if not data: break conn.sendall(data) I could write a client in Python that sends data and receives the echo, but my efforts to do the same in Godot are sluggish at best. What might the minimum GDScript file for a client look like? Where should I look for tutorials on this sort of thing? I've looked at the Godot Official documentation in search of answers, then tutorials online. Maybe it's just my search engine acting up, but I haven't found anything that at once addresses my question and explains it in basic terms.
In Godot 4.2 WebSocketClient seems to not exist anymore, but in the editor I found the class WebSocketPeer with an example. I extended the example to show how to send a message to the socket server. extends Node # The URL we will connect to @export var websocket_url = "ws://localhost:8080" # Our WebSocketClient instance var socket = WebSocketPeer.new() func _ready(): var err = socket.connect_to_url(websocket_url) if err != OK: print("Unable to connect") set_process(false) else: print("connected") func _process(delta): socket.poll() var state = socket.get_ready_state() if state == WebSocketPeer.STATE_OPEN: while socket.get_available_packet_count(): _on_message_received(socket.get_packet()) elif state == WebSocketPeer.STATE_CLOSING: # Keep polling to achieve proper close. pass elif state == WebSocketPeer.STATE_CLOSED: var code = socket.get_close_code() var reason = socket.get_close_reason() _on_closed(code, reason) func _on_message_received(packet: PackedByteArray): var message = packet.get_string_from_utf8() print("WebSocket message received: %s" % message) func _on_closed(code, reason): print("WebSocket closed with code: %d, reason %s. Clean: %s" % [code, reason, code != -1]) set_process(false) # Stop processing. func _send_message(message : String): var state = socket.get_ready_state() if state == WebSocketPeer.STATE_OPEN: socket.send_text(message) Basically it's similar to the WebSocketClient, but without the signals. In the process you poll the socket and check the current state. If it's open you get the available packets. Same for sending. If you directly want to send a string you can use the send_text function as in the example. To send a byte array you can use send.
2
1
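One caveat on the server side: the socket-based echo server in the question speaks raw TCP, while WebSocketPeer expects a WebSocket handshake on ws://. A minimal echo server the GDScript client above can connect to, assuming the third-party websockets package (pip install websockets), might look like this:

import asyncio
import websockets

async def echo(websocket):
    async for message in websocket:
        await websocket.send(message)  # echo every frame back to the client

async def main():
    async with websockets.serve(echo, "localhost", 8080):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())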
77,824,830
2024-1-16
https://stackoverflow.com/questions/77824830/how-to-colour-the-outer-ring-like-a-doughnut-plot-in-a-radar-plot-according-to
I have data in this form : data = {'Letter': ['A', 'B', 'C', 'D', 'E'], 'Type': ['Apples', 'Apples', 'Oranges', 'Oranges', 'Bananas'], 'Value': [1, 2, 0, 5, 6]} df = pd.DataFrame(data) I want to combine a doughnut plot and a radar plot, where the outer ring will be coloured according to the column "Type". import numpy as np import matplotlib.pyplot as plt import pandas as pd data = {'Letter': ['A', 'B', 'C', 'D', 'E'], 'Type': ['Apples', 'Apples', 'Oranges', 'Oranges', 'Bananas'], 'Value': [1, 2, 0, 5, 6]} df = pd.DataFrame(data) num_categories = len(df) angles = np.linspace(0, 2 * np.pi, num_categories, endpoint=False).tolist() values = df['Value'].tolist() values += values[:1] angles += angles[:1] plt.figure(figsize=(8, 8)) plt.polar(angles, values, marker='o', linestyle='-', linewidth=2) plt.fill(angles, values, alpha=0.25) plt.xticks(angles[:-1], df['Letter']) types = df['Type'].unique() color_map = {t: i / len(types) for i, t in enumerate(types)} colors = df['Type'].map(color_map) plt.fill(angles, values, color=plt.cm.viridis(colors), alpha=0.25) plt.show() I want this to look like this :
You could use Wedge: import matplotlib.colors from matplotlib.patches import Wedge ax = plt.gca() cmap = plt.cm.viridis.resampled(df['Type'].nunique()) # group consecutive types g = df['Type'].ne(df['Type'].shift()).cumsum() # set up colors per type colors = dict(zip(df['Type'].unique(), map(matplotlib.colors.to_hex, cmap.colors))) # {'Apples': '#440154', 'Oranges': '#21918c', 'Bananas': '#fde725'} # radius of wedge y = 0.49 # loop over groups for (_, name), grp in df.reset_index(drop=True).groupby([g, 'Type']): factor = 360/len(df) start = (grp.index[0]-0.5)*factor end = (grp.index[-1]+0.5)*factor ax.add_artist(Wedge((0.5, 0.5), y, start, end, width=0.01, color=colors[name], transform=ax.transAxes) ) Output:
4
5
77,825,112
2024-1-16
https://stackoverflow.com/questions/77825112/filtering-data-based-on-boolean-columns-in-python
I have the following pandas dataframe and I would like a function that returns the ID's data with at least 1 True value in bool_1, 2 True values in bool_2 and 3 True values in bool_3 column, using the groupby function. index ID bool_1 bool_2 bool_3 0 7 True True True 1 7 False True True 2 7 False False True 3 8 True True True 4 8 True True True 5 8 False False True 6 9 True True True 7 9 True False True 8 9 True False True 9 9 True False False As output I would expect complete data for ID 7 and 8 to be returned, since 9 has only 1 True value for bool_2. Any idea for that function? Thank you!
You can specify the required number of True values per column in a dictionary, aggregate each group with sum, compare with DataFrame.ge (greater or equal), and then filter the original DataFrame by boolean indexing with Series.isin: d = {'bool_1':1, 'bool_2':2,'bool_3':3} ids = df.groupby('ID')[list(d.keys())].sum().ge(d).all(axis=1) print (ids) ID 7 True 8 True 9 False dtype: bool out = df[df['ID'].isin(ids.index[ids])] print (out) index ID bool_1 bool_2 bool_3 0 0 7 True True True 1 1 7 False True True 2 2 7 False False True 3 3 8 True True True 4 4 8 True True True 5 5 8 False False True Another idea is to use GroupBy.transform to create a boolean mask: d = {'bool_1':1, 'bool_2':2,'bool_3':3} mask = df.groupby('ID')[list(d.keys())].transform('sum').ge(d).all(axis=1) print (mask) 0 True 1 True 2 True 3 True 4 True 5 True 6 False 7 False 8 False 9 False dtype: bool out = df[mask] print (out) index ID bool_1 bool_2 bool_3 0 0 7 True True True 1 1 7 False True True 2 2 7 False False True 3 3 8 True True True 4 4 8 True True True 5 5 8 False False True
2
2
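The same condition also fits GroupBy.filter in a single call, at the cost of being slower on many groups; d is the threshold dictionary from the answer above.

d = {'bool_1': 1, 'bool_2': 2, 'bool_3': 3}
out = df.groupby('ID').filter(lambda g: all(g[c].sum() >= n for c, n in d.items()))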
77,817,726
2024-1-15
https://stackoverflow.com/questions/77817726/python-access-modifiers-as-a-decorator
Is it possible to implement protected and private access modifiers for classes with decorators in python? How? The functionality should be like the code below: class A: def public_func(self): self.protected_func() # Runs without warning and error (Because is called in the owner class) self.private_func() # Runs without warning and error (Because is called in the owner class) @protected def protected_func(self): print('protected is running') self.private_func() # Runs without warning and error (Because is called in the owner class) @private def private_func(self): print(f'private is running') a = A() a.public_func() # Runs without any warning and error (Because has no access modifier) a.protected_func() # Runs with protected warning a.private_func() # Raises Exception The idea for this question was being accessable private functions as below: class A: def __private_func(self): print('private is running') a = A() a._A__private_function() If we define private with decorator, then have not to define it with __name. So _A__private_function will not exist and the private function is really not inaccessible from outside of the owner class. Is the idea a True solution to solve the problem below? __name is not realy private
In the other answer of mine, in trying to determine whether a call is made from code defined in the same class as a protected method, I made use of the co_qualname attribute of a code object, which was only introduced in Python 3.11, making the solution incompatible with earlier Python versions. Moreover, using the fully qualified name of a function or code for a string-based comparison means that it would be difficult, should there be a need, to allow inheritance to work, where a method in a subclass should be able to call a protected method of the parent class without complaints. It would be difficult because the subclass would have a different name from that of the parent, and because the parent class may be defined in a closure, resulting in a fully qualified name that makes it difficult for string-based introspections to reliably work. To clarify, if we are to allow protected methods to work from a subclass, the following usage should run with the behaviors as commented: class C: def A_factory(self): class A: @protected def protected_func(self): print('protected is running') self.private_func() @private def private_func(self): print('private is running') return A a = C().A_factory() class B(a): def foo(self): super().private_func() b = B() b.foo() # Runs without complaint because of inheritance b.protected_func() # Runs with protected warning b.private_func() # Raises Exception We therefore need a different approach to determining the class in which a protected method is defined. One such approach is to recursively trace the referrer of objects, starting from a given code object, until we obtain a class object. Tracing recursively the referrers of an object can be potentially costly, however. Given the vast interconnectedness of Python objects, it is important to limit the recursion paths to only referrer types that can possibly lead to a class. Since we know that the code object of a method is always referenced by a function object, and that a function object is either referenced by the __dict__ of a class (whose type is a subclass of type) or a cell object in a tuple representing a function closure that leads to another function and so on, we can create a dict that maps the current object type to a list of possible referrer types, so that the function get_class, which searches for the class closest to a code object, can stay laser-focused: from gc import get_referrers from types import FunctionType, CodeType, CellType referrer_types = { CodeType: [FunctionType], FunctionType: [dict, CellType], CellType: [tuple], tuple: [FunctionType], dict: [type] } def get_class(obj): if next_types := referrer_types.get(type(obj)): for referrer in get_referrers(obj): if issubclass(referrer_type := type(referrer), type): return referrer if referrer_type in next_types and (cls := get_class(referrer)): return cls With this utility function in place, we can now create decorators that return a wrapper function that validates that the class defining the decorated function is within the method resolution order of the class defining the caller's code. 
Use a weakref.WeakKeyDictionary to cache the code-to-class mapping to avoid a potential memory leak: import sys import warnings from weakref import WeakKeyDictionary def make_protector(action): def decorator(func): def wrapper(*args, **kwargs): func_code = func.__code__ if func_code not in class_of: class_of[func_code] = get_class(func_code) caller_code = sys._getframe(1).f_code if caller_code not in class_of: class_of[caller_code] = get_class(caller_code) if not (class_of[caller_code] and class_of[func_code] in class_of[caller_code].mro()): action(func.__qualname__) return func(*args, **kwargs) class_of = WeakKeyDictionary() return wrapper return decorator @make_protector def protected(name): warnings.warn(f'{name} is protected.', stacklevel=3) @make_protector def private(name): raise Exception(f'{name} is private.') Demo: https://ideone.com/o5aQae
4
6
77,824,075
2024-1-16
https://stackoverflow.com/questions/77824075/changing-values-of-a-column-in-a-data-frame
I have a data frame exemplified as data = [['A', 10, {'Cc', 'Dd'}], ['B', 15, {'Aa', 'Dd', 'Cc', 'Ee'}], ['C', 14, {'Dd', 'Ee', 'Aa'}],['D', 3, {'Bb'}],['E', 3,{'Dd', 'Cc'}]] df = pd.DataFrame(data, columns=['type', 'val', 'others']) I would like to perform some data processing on this dataset. For the 'type' column, I have introduced a dictionary as dic_type ={'A':0, 'B':1, 'C':2, 'D':3, 'E':4} which brings the first column into a value using df["type"].replace(dic_type, inplace=True) This column then looks good to me to carry out the rest of the analysis. However, I am unsure how to bring the values of the 'others' column into numbers. Each cell of the last column contains a list with various numbers of elements. The permutation of these elements, which may appear in some cells, should not be counted. For instance, the values for the first and last cell of the 'others' should be identical. Do you have any suggestions on getting this enumeration? My data frame has over 6000 cells, and I should perform this task as efficiently as possible.
If I understand you correctly, you need to assign a numerical code to the column others - which contains sets of values. You can convert the column values to frozenset and then apply pd.Categorical to it: df["others_codes"] = pd.Categorical(df["others"].apply(frozenset)).codes print(df) Prints: type val others others_codes 0 A 10 {Dd, Cc} 0 1 B 15 {Dd, Aa, Ee, Cc} 2 2 C 14 {Dd, Ee, Aa} 1 3 D 3 {Bb} 3 4 E 3 {Dd, Cc} 0
2
1
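If stable category ordering does not matter, pandas.factorize gives the same kind of integer code in order of first appearance; a short alternative sketch:

import pandas as pd

codes, uniques = pd.factorize(df["others"].apply(frozenset))
df["others_codes"] = codes   # 0, 1, 2, 3, 0 for the sample data (order of first appearance)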
77,823,443
2024-1-16
https://stackoverflow.com/questions/77823443/pd-io-formats-excel-excelformatter-header-style-none-not-working
I have used the following code to remove the default formating from the header in pandas. pd.io.formats.excel.ExcelFormatter.header_style = None It was working previously, but now I am getting following error. pd.io.formats.excel.ExcelFormatter.header_style = None ^^^^^^^^^^^^^^^^^^^ AttributeError: module 'pandas.io.formats' has no attribute 'excel' What could be the possible reason and solution? Are there any alternate method for this? Thanks. I have tried updating Pandas. But that did not help.
I tried on pandas <2 as well as pandas >2, it is working in both pandas versions. This should be the correct import: import pandas.io.formats.excel pandas.io.formats.excel.ExcelFormatter.header_style = None related git : https://github.com/pandas-dev/pandas/issues/19386#issuecomment-851653196
5
4
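If relying on pandas internals feels fragile, a workaround sketch that skips the styled header entirely and writes the column names as plain cells (assuming the openpyxl engine; the file and sheet names are placeholders):

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

with pd.ExcelWriter("out.xlsx", engine="openpyxl") as writer:
    # write the data one row down, without the default (styled) header
    df.to_excel(writer, sheet_name="Sheet1", startrow=1, header=False, index=False)
    ws = writer.sheets["Sheet1"]
    for col_idx, name in enumerate(df.columns, start=1):
        ws.cell(row=1, column=col_idx, value=name)  # unstyled header cells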
77,822,142
2024-1-15
https://stackoverflow.com/questions/77822142/polars-execute-many-operations-over-same-grouping
Showing a toy example with K=2 but the question is mostly relevant for high g cardinality and K>>1: df = pl.DataFrame(dict( g=[1, 2, 1, 2, 1, 2], v=[1, 2, 3, 4, 5, 6], )) K = 2 df.with_columns((col.v.shift(k+1).over('g').alias(f's{k}') for k in range(K))) ╭─────┬─────┬──────┬──────╮ │ g ┆ v ┆ s0 ┆ s1 │ │ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪══════╪══════╡ │ 1 ┆ 1 ┆ null ┆ null │ │ 2 ┆ 2 ┆ null ┆ null │ │ 1 ┆ 3 ┆ 1 ┆ null │ │ 2 ┆ 4 ┆ 2 ┆ null │ │ 1 ┆ 5 ┆ 3 ┆ 1 │ │ 2 ┆ 6 ┆ 4 ┆ 2 │ ╰─────┴─────┴──────┴──────╯ How can I make sure the grouping by g is done only once? Polars does not seem to optimize for this in the query plan. I would expect it to run as fast as: df.group_by('g').agg((col.v.shift(k+1).alias(f's{k}') for k in range(K)))
Polars caches window expressions. While you may not see this represented in the query plan, the over grouping is only done once. Still, your over query will not run as fast as your group_by query. This is because over has to add the results back into the original data frame. A fairer comparison would be the query below, which will match the result of the over query. As you can see, an additional join is required. import polars as pl df = pl.DataFrame( { "g": [1, 2, 1, 2, 1, 2], "v": [1, 2, 3, 4, 5, 6], } ) K = 2 df_shift = ( df.group_by("g") .agg([pl.col("v")] + [pl.col("v").shift(k + 1).alias(f"s{k}") for k in range(K)]) .explode(["v"] + [f"s{k}" for k in range(K)]) ) result = df.join(df_shift, on=["g", "v"], how="left") print(result) shape: (6, 4) ┌─────┬─────┬──────┬──────┐ │ g ┆ v ┆ s0 ┆ s1 │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪══════╪══════╡ │ 1 ┆ 1 ┆ null ┆ null │ │ 2 ┆ 2 ┆ null ┆ null │ │ 1 ┆ 3 ┆ 1 ┆ null │ │ 2 ┆ 4 ┆ 2 ┆ null │ │ 1 ┆ 5 ┆ 3 ┆ 1 │ │ 2 ┆ 6 ┆ 4 ┆ 2 │ └─────┴─────┴──────┴──────┘
2
5
77,814,538
2024-1-14
https://stackoverflow.com/questions/77814538/cannot-import-publickey-from-solana-publickey
I'm developing a tracking script using Python and the Solana.py, installed via pip. The script uses websocket subscriptions to notify me when something happens on the Solana blockchain. My first approach was to watch a specific account: await websocket.account_subscribe(PublicKey('somekeyhere')) But when doing ... from solana.publickey import PublicKey I get the following error: from solana.publickey import PublicKey ModuleNotFoundError: No module named 'solana.publickey' Solana.py is the latest version (0.31.0). I'm on Ubuntu 22.04 LTS and using Python 3.10.12. I found a workaround, just using another subscription method, that does not need any parameters: await websocket.logs_subscribe() Then I analyze the output to my needs. That means, that I'm dealing with every single log on the Solana blockchains. It works, but it does not "feel right". And I would love to understand, what is going wrong when using the websocket.account_subscribe method. Any suggestions? Regards Kurt
Based on their github, it looks like what you need is Pubkey, not Publickey, taken from solders rather than solana. Have you tried this instead? from solders.pubkey import Pubkey I haven't used their SDK before, so this might not solve your issue, but it was something I noted given your issue. Ref: solana.py Github
4
4
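A minimal sketch of the original account_subscribe goal using solders' Pubkey, assuming a recent solana-py where the websocket helpers live in solana.rpc.websocket_api; the endpoint URL and address are placeholders and the exact notification format varies between versions.

import asyncio
from solders.pubkey import Pubkey
from solana.rpc.websocket_api import connect

async def watch(address: str):
    async with connect("wss://api.mainnet-beta.solana.com") as ws:
        await ws.account_subscribe(Pubkey.from_string(address))
        async for msg in ws:
            # the first message confirms the subscription, later ones are account updates
            print(msg)

asyncio.run(watch("ReplaceWithBase58Address"))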
77,820,120
2024-1-15
https://stackoverflow.com/questions/77820120/numpy-filter-data-based-on-multiple-conditions
Here's my question: I'm trying to filter an image based on the values of two coordinates. I can do this easily with a for loop: import numpy as np import matplotlib.pyplot as plt x = np.linspace(-1, 1, 101) y = np.linspace(-1, 1, 51) X, Y = np.meshgrid(x, y) Z = np.exp(X**2 + Y**2) newZ = np.zeros(Z.shape) for i, y_i in enumerate(y): for j, x_j in enumerate(x): if x_j > 0 and y_i > 0: newZ[i, j] = Z[i, j] else: newZ[i, j] = 0 plt.contourf(x, y, newZ) plt.show() but I'm pretty sure there should be a way to do it by indexing (as it should be faster) like: Z = Z[y>0, x>0] which doesn't work (IndexError: Shape mismatch) I assume I could do this with a mask using masked array perhaps (it seems they are doing something like that here), but I wonder if there is a simple one-liner in normal numpy that I can't seem to figure out. Thanks
Edit, you have to use numpy broadcasting: m = (y[:, None] > 0) | (x > 0) newZ = np.where(m, Z, 0) # OR m = (y[:, None] > 0) | (x > 0) newZ = np.zeros(Z.shape) newZ[m] = Z[m] Demo: x = np.linspace(-1, 1, 11) y = np.linspace(-1, 1, 6) Z = np.arange(len(y)*len(x)).reshape(len(y), len(x)) m = (y[:, None] > 0) | (x > 0) newZ = np.zeros(Z.shape, dtype=int) newZ[m] = Z[m] Output: >>> x array([-1. , -0.8, -0.6, -0.4, -0.2, 0. , 0.2, 0.4, 0.6, 0.8, 1. ]) >>> y array([-1. , -0.6, -0.2, 0.2, 0.6, 1. ]) >>> Z array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32], [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43], [44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54], [55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]]) >>> newZ array([[ 0, 0, 0, 0, 0, 0, 6, 7, 8, 9, 10], [ 0, 0, 0, 0, 0, 0, 17, 18, 19, 20, 21], [ 0, 0, 0, 0, 0, 0, 28, 29, 30, 31, 32], [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43], [44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54], [55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]]) With masked array: >>> np.ma.masked_array(Z, ~m, fill_value=0) masked_array( data=[[--, --, --, --, --, --, 6, 7, 8, 9, 10], [--, --, --, --, --, --, 17, 18, 19, 20, 21], [--, --, --, --, --, --, 28, 29, 30, 31, 32], [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43], [44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54], [55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]], mask=[[ True, True, True, True, True, True, False, False, False, False, False], [ True, True, True, True, True, True, False, False, False, False, False], [ True, True, True, True, True, True, False, False, False, False, False], [False, False, False, False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False, False, False, False]], fill_value=0) IIUC, you are looking for np.where: newZ = np.where(Z>0, Z, 0) Same as: m = Z>0 newZ = np.zeros(Z.shape) newZ[m] = Z[m] Alternative with np.clip: newZ = np.clip(Z, 0, np.inf)
2
2
77,821,018
2024-1-15
https://stackoverflow.com/questions/77821018/stackable-traits-in-python
I am implementing a hierarchy of classes in Python. The common base class exposes a few methods like an interface, there are a few abstract classes that should not be instantiated directly and each of the concrete subclasses can mixin a few of the abstract classes and provide additional behaviour. For example, the following code is a simple example of what I started from: class BaseEntity(ABC): @property @abstractmethod def f(self) -> List[str]: pass @property @abstractmethod def g(self) -> int: pass class AbstractEntity1(BaseEntity, ABC): # this should not be instantiable @property @abstractmethod def f(self) -> List[str]: return ["a", "b"] @property @abstractmethod def g(self) -> int: return 10 class AbstractEntity2(BaseEntity, ABC): # this should not be instantiable @property @abstractmethod def f(self) -> List[str]: return ["x"] @property @abstractmethod def g(self) -> int: return 3 class FinalEntity(AbstractEntity1, AbstractEntity2, BaseEntity): @property def f(self) -> List[str]: return ["C"] @property def g(self) -> int: return 10 I would like FinalEntity and all other concrete entities to behave as follows: when I call final_entity.f(), it should return ["a", "b", "x", "C"] (so the equivalent of calling the + operator on each of the mixins and the class itself); similarly, when I call final_entity.g(), it should return 10 + 3 + 10 (i.e. calling the + operator on each of the mixins and the class itself). The functions are obviously just an example and it won't always be +, so it will have to be defined for each of the functions. What is the best pythonic way to approach this problem?
Congratulations! You've discovered when not to use inheritance. Inheritance is a very specific tool in a programmer's toolbelt that solves a very specific problem, but there are lots of poor tutorials and poor instructors out there that try to hammer every nail in the software development world with subclasses. What you have is called a list, not a series of superclasses. If you want FinalEntity to have a bunch of traits, then that's a has-a relationship. We model has-a relationships with composition, not inheritance. That is, rather than having FinalEntity inherit from all of its traits, we have it contain them. class Trait1: def f(self): return ["a", "b"] class Trait2: def f(self): return ["x"] class Trait3: def f(self): return ["C"] Then your FinalEntity can simply be class FinalEntity: def __init__(self, traits): self.traits = traits def f(self): return [value for trait in self.traits for value in trait.f()] As a bonus, this is way easier to write tests for. Your proposed FinalEntity is tightly coupled to its parents, so you wouldn't be able to test it with different traits without mocking. But this FinalEntity has a constructor that directly plugs-and-plays different traits for free. Note that, if you really want to use inheritance, you just need to call super() in the subclasses, and Python's default method resolution order will take it the rest of the way. class BaseEntity: def f(self): return [] class AbstractEntity1(BaseEntity): def f(self): return super().f() + ["a", "b"] class AbstractEntity2(BaseEntity): def f(self): return super().f() + ["x"] class FinalEntity(AbstractEntity1, AbstractEntity2): def f(self): return super().f() + ["C"] But, again, this is going to result in more brittle code that's harder to test. For what it's worth, the feature you're looking for is called method combinations. This isn't a feature available in Python, but some languages, most notably Common Lisp, do support this out of the box. The equivalent to your code in Common Lisp would be (defclass base () ()) (defclass abstract1 (base) ()) (defclass abstract2 (base) ()) (defclass final (abstract1 abstract2) ()) (defgeneric f (value) (:method-combination append :most-specific-last)) (defmethod f append ((value abstract1)) (list "a" "b")) (defmethod f append ((value abstract2)) (list "x")) (defmethod f append ((value final)) (list "C")) (let ((instance (make-instance 'final))) (format t "~A~%" (f instance))) ;; Prints ("a" "b" "x" "C") But, again, that's not available in Python without a lot of clever reflection tricks.
2
3
77,820,136
2024-1-15
https://stackoverflow.com/questions/77820136/pandas-to-datetime-is-off-by-one-hour
I have some data recorded with timestamps using time.time(). I want to evaluate the data using pandas and convert the timestamps to datetime objects for better handling. However, when I try, all my timing data is off by one hour. This example reproduces the issue on my machine: import datetime as dt import pandas as pd origin = dt.datetime(2024, 1, 15).timestamp() timestamps = [origin + 3600 * i for i in range(10)] print([dt.datetime.fromtimestamp(t).isoformat() for t in timestamps]) print(pd.to_datetime(timestamps, unit='s')) Output: ['2024-01-15T00:00:00', '2024-01-15T01:00:00', '2024-01-15T02:00:00', '2024-01-15T03:00:00', '2024-01-15T04:00:00', '2024-01-15T05:00:00', '2024-01-15T06:00:00', '2024-01-15T07:00:00', '2024-01-15T08:00:00', '2024-01-15T09:00:00'] DatetimeIndex(['2024-01-14 23:00:00', '2024-01-15 00:00:00', '2024-01-15 01:00:00', '2024-01-15 02:00:00', '2024-01-15 03:00:00', '2024-01-15 04:00:00', '2024-01-15 05:00:00', '2024-01-15 06:00:00', '2024-01-15 07:00:00', '2024-01-15 08:00:00'], dtype='datetime64[ns]', freq=None) I am guessing that this has something to do with my timezone (I'm in UTC+1) but I'm confused as to how I should deal with this. If possible, I want to avoid explicitly specifying timezones and such (though I will do it if necessary). I want to just get the same times as I get with dt.datetime.fromtimestamp(). How do I do this?
You can use tz_convert to shift your datetime: >>> pd.to_datetime(timestamps, unit='s', utc=True).tz_convert('Europe/Paris') DatetimeIndex(['2024-01-15 00:00:00+01:00', '2024-01-15 01:00:00+01:00', '2024-01-15 02:00:00+01:00', '2024-01-15 03:00:00+01:00', '2024-01-15 04:00:00+01:00', '2024-01-15 05:00:00+01:00', '2024-01-15 06:00:00+01:00', '2024-01-15 07:00:00+01:00', '2024-01-15 08:00:00+01:00', '2024-01-15 09:00:00+01:00'], dtype='datetime64[ns, Europe/Paris]', freq=None) You can replace Europe/Paris with Etc/GMT-1 or UTC+01:00 If you want to convert as naive timezone without lag, you can chain with tz_localize to remove timezone information: >>> (pd.to_datetime(timestamps, unit='s', utc=True) .tz_convert('UTC+01:00') .tz_localize(None)) DatetimeIndex(['2024-01-15 00:00:00', '2024-01-15 01:00:00', '2024-01-15 02:00:00', '2024-01-15 03:00:00', '2024-01-15 04:00:00', '2024-01-15 05:00:00', '2024-01-15 06:00:00', '2024-01-15 07:00:00', '2024-01-15 08:00:00', '2024-01-15 09:00:00'], dtype='datetime64[ns]', freq=None) It all depends on the meaning you give to your timestamps (naive or timezone aware) If time.time() doesn't return the same value as pd.Timestamp().now(), you have to consider to use timezone to get the local time: >>> pd.to_datetime(time.time(), unit='s') 2024-01-15 16:12:49.276476672 >>> print(pd.Timestamp.now()) 2024-01-15 17:12:49.276847
3
2
77,817,609
2024-1-15
https://stackoverflow.com/questions/77817609/cloud-function-cant-call-postgresql-multiple-times-at-once-unless-test-queries
Edit: The answer to this was a bit complicated. The tl;dr is make sure you do Lazy Loading properly; many of the variables declared in the code below were declared and set globally, but your global variables should be set to None and only changed in your actual API call! I'm going bonkers. Here is my full main.py. It can be run locally via functions-framework --target=api or on Google Cloud directly: import functions_framework import sqlalchemy import threading from google.cloud.sql.connector import Connector, IPTypes from sqlalchemy.orm import sessionmaker, scoped_session Base = sqlalchemy.orm.declarative_base() class TestUsers(Base): __tablename__ = 'TestUsers' uuid = sqlalchemy.Column(sqlalchemy.String, primary_key=True) cloud_sql_connection_name = "myproject-123456:asia-northeast3:tosmedb" connector = Connector() def getconn(): connection = connector.connect( cloud_sql_connection_name, "pg8000", user="postgres", password="redacted", db="tosme", ip_type=IPTypes.PUBLIC, ) return connection def init_pool(): engine_url = sqlalchemy.engine.url.URL.create( "postgresql+pg8000", username="postgres", password="redacted", host=cloud_sql_connection_name, database="tosme" ) engine = sqlalchemy.create_engine(engine_url, creator=getconn) # Create tables if they don't exist Base.metadata.create_all(engine) return engine engine = init_pool() # Prepare a thread-safe Session maker Session = scoped_session(sessionmaker(bind=engine)) print("Database initialized") def run_concurrency_test(): def get_user(): with Session() as session: session.query(TestUsers).first() print("Simulating concurrent reads...") threads = [] for i in range(2): thread = threading.Thread(target=get_user) threads.append(thread) thread.start() # Wait for all threads to complete for thread in threads: thread.join() print(f"Thread {thread.name} completed") print("Test passed - Threads all completed!\n") run_concurrency_test() @functions_framework.http def api(request): print("API hit - Calling run_concurrency_test()...") run_concurrency_test() return "Success" requirements.txt: functions-framework==3.* cloud-sql-python-connector[pg8000]==1.5.* SQLAlchemy==2.* pg8000==1.* It's super simple - and it works! As long as you have a PostgreSQL instance, it will create the TestUsers table as needed, query it twice (at the same time via threads!), and every time you curl it, it works as well. Here's some example output: Database initialized Simulating concurrent reads... Thread Thread-4 (get_user) completed Thread Thread-5 (get_user) completed Test passed - Threads all completed! API hit - Calling run_concurrency_test()... Simulating concurrent reads... Thread Thread-7 (get_user) completed Thread Thread-8 (get_user) completed Test passed - Threads all completed! However, if I comment out the first call to run_concurrency_test() (i.e. the one that's not inside the api(request)), run it and curl, I get this: Database initialized API hit - Calling run_concurrency_test()... Simulating concurrent reads... Thread Thread-4 (get_user) completed It gets stuck! Specifically, it gets stuck at session.query(TestUsers).first(). It didn't get stuck when I ran the concurrency test outside the api() first. To the best of my knowledge, my code is stateless, and thread safe. So what is going on here that makes it suddenly not work?
Please see this other SO post for the proper detailed usage of the Cloud SQL Python Connector with Cloud Functions. The reason for the error here has to do with initializing the Connector as a global var outside of the Cloud Function request context. Cloud Functions only have access to compute and resources when requests are made, otherwise they scale down. The Connector has background tasks that run in order to make successful connections to Cloud SQL when the time comes to connect; these background tasks are being throttled and causing your error because you are attempting to initialize the Connector globally when no CPU is allocated to your function. Cloud Functions recommends lazy initializing global variables for this exact reason, which is what the linked post above showcases. NOTE: Initializing the Connector inside of getconn as another answer mentions is not recommended and will introduce more bugs into your code when attempting to scale traffic. It works because it guarantees the Connector is initialized within the Cloud Function request context, but it will create a new Connector on each db connection. The Connector is meant to be shared across connections to allow for scalable solutions, which is why having it as a lazy global var is the recommended approach.
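For reference, here is a minimal sketch of that lazy-initialization pattern, reusing the names from the question; the connection settings are placeholders and this is only an illustration of the idea, not a definitive implementation:
import functions_framework
import sqlalchemy
from google.cloud.sql.connector import Connector, IPTypes

# Lazily initialized globals: left as None at import time and created on the first
# request, when the function actually has CPU allocated to it.
connector = None
engine = None

def get_engine():
    global connector, engine
    if engine is None:
        connector = Connector()  # shared across all connections of this instance
        def getconn():
            return connector.connect(
                "myproject-123456:asia-northeast3:tosmedb",  # connection name from the question
                "pg8000",
                user="postgres",
                password="redacted",
                db="tosme",
                ip_type=IPTypes.PUBLIC,
            )
        engine = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)
    return engine

@functions_framework.http
def api(request):
    with get_engine().connect() as conn:
        conn.execute(sqlalchemy.text("SELECT 1"))
    return "Success"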
2
3
77,819,377
2024-1-15
https://stackoverflow.com/questions/77819377/attributeerror-module-streamlit-has-no-attribute-chat-input
I'm trying to run a simple Streamlit app in my conda env. When I'm running the following app.py file: # Streamlit app import streamlit as st # prompt = st.chat_input("Say something") if prompt: st.write(f"User has sent the following prompt: {prompt}") It returns the following error when running streamlit run app.py: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 556, in _run_script exec(code, module.__dict__) File "/Users/quinten/Documents/app.py", line 11, in <module> prompt = st.chat_input("Say something") AttributeError: module 'streamlit' has no attribute 'chat_input' I don't understand why this error happens. I used the newest Streamlit version. Also I don't understand why the error uses python3.9 while I use 3.12 in my environment. I check this blog, but this doesn't help unfortunately. So I was wondering if anyone knows why this happens? I'm using the following versions: streamlit 1.30.0 And python: python --version Python 3.12.0
Create a new virtual environment: python -m venv venv Activate the virtual environment: source venv/bin/activate Then install streamlit: pip install streamlit Then you can run your app: streamlit run app.py That way you can make sure that you are using the right version of streamlit. You can also use conda to manage environments.
2
2
77,819,360
2024-1-15
https://stackoverflow.com/questions/77819360/python-pandas-select-rows-that-match-more-than-one-condition-on-group
Given the sample data: data = [['john', 'test_1', pd.NA], ['john', 'test_2', 'fail'], ['john', 'test_3', 'fail'], ['mary', 'test_1', pd.NA], ['mary', 'test_2', 'fail'], ['mary', 'test_3', pd.NA], ['nick', 'test_1', pd.NA], ['liam', 'test_1', pd.NA], ['liam', 'test_2', pd.NA], ['jane', 'test_1', pd.NA], ['jane', 'test_2', 'pass'], ['jane', 'test_3', pd.NA], ['emma', 'test_1', pd.NA], ['emma', 'test_2', pd.NA], ['emma', 'test_3', 'fail']] df = pd.DataFrame(data, columns=['name', 'test', 'result']) I'm having trouble trying to select the people that didn't failed test_2. df = df.groupby('name').filter(lambda x : ((x.result != 'fail')).all()) The code above partially works by removing john and mary, the problem is that also removes emma because the condition doesn't apply to test_2 only. The following code also does not work for my desired output: df = df.groupby('name').filter(lambda x : ((x.test == 'test_2') & (x.result != 'fail')).all()) I'm sure my issue is more logic related, but I've have spent some hours trying to figure it out without success.
Determine who failed the test with boolean indexing and drop them with another boolean indexing and isin: # identify which names failed "test_2" drop = df.loc[df['test'].eq('test_2') & df['result'].eq('fail'), 'name'].unique() # ['john', 'mary'] # select the other names out = df[~df['name'].isin(drop)] Your code was incorrect since you were filtering for the users that took "test_2" but didn't fail, for all rows in the group. The correct code with filter would have been: # do NOT keep users that ever took "test_2" AND failed out = (df.groupby('name') .filter(lambda x : not ( (x.test == 'test_2') & (x.result == 'fail') ).any()) ) Or using De Morgan's laws: # keep users for whom every row is either not "test_2" OR not a fail # (equivalent to: never took "test_2" AND failed) out = (df.groupby('name') .filter(lambda x : ( (x.test != 'test_2') | (x.result != 'fail') ).all()) ) Output: name test result 6 nick test_1 <NA> 7 liam test_1 <NA> 8 liam test_2 <NA> 9 jane test_1 <NA> 10 jane test_2 pass 11 jane test_3 <NA> 12 emma test_1 <NA> 13 emma test_2 <NA> 14 emma test_3 fail
3
2
77,818,764
2024-1-15
https://stackoverflow.com/questions/77818764/how-can-i-select-all-dataframe-entries-between-two-times-when-the-time-is-a-ser
Here's some data: my_dataframe = pd.DataFrame({'time': ["2024-1-1 09:00:00", "2024-1-1 15:00:00", "2024-1-1 21:00:00", "2024-1-2 09:00:00", "2024-1-2 15:00:00", "2024-1-2 21:00:00"], 'assists': [5, 7, 7, 9, 12, 9], 'rebounds': [11, 8, 10, 6, 6, 5], 'blocks': [4, 7, 7, 6, 5, 8]}) I want to select all data between noon and 10pm, i.e. rows 1-2 & 4-5. How can I do that? I tried using between_time, but it does not seem to work because (as far as I can tell) the time is a series in the data and not a timestamp. That suggests I need to first convert the series to a timestamp, but datetime.strptime does not seem to work because the time is not a string, and neither does series.to_timestamp (my_dataframe.iloc[:,0].to_timestamp() seems to raise an "unsupported Type RangeIndex" error).
You could convert your Series (converted to_datetime) to a DatetimeIndex to use DatetimeIndex.indexer_between_time to get the positions of the matching rows, and iloc to select them: keep = (pd.DatetimeIndex(pd.to_datetime(my_dataframe['time'])) .indexer_between_time('12:00', '22:00') ) # array([1, 2, 4, 5]) out = my_dataframe.iloc[keep] Output: time assists rebounds blocks 1 2024-1-1 15:00:00 7 8 7 2 2024-1-1 21:00:00 7 10 7 4 2024-1-2 15:00:00 12 6 5 5 2024-1-2 21:00:00 9 5 8 Or, use the datetime as index and between_time: out = (my_dataframe.set_index(pd.to_datetime(my_dataframe['time'])) .between_time('12:00', '22:00') ) Output: time assists rebounds blocks time 2024-01-01 15:00:00 2024-1-1 15:00:00 7 8 7 2024-01-01 21:00:00 2024-1-1 21:00:00 7 10 7 2024-01-02 15:00:00 2024-1-2 15:00:00 12 6 5 2024-01-02 21:00:00 2024-1-2 21:00:00 9 5 8
2
2
77,818,180
2024-1-15
https://stackoverflow.com/questions/77818180/is-it-pythonic-to-use-a-shallow-copy-to-update-an-objects-attribute
I found this code in the wild (simplified here), and I was wondering whether this is considered a good practice or not, that is, making a shallow copy (of an object's attribute in this case) for the sole purpose of updating the attribute (and reduce verbosity?). I ask because it is not the first time I see this kind of pattern and it bothers me a bit. class DrawingBoard: def __init__(self): self.raw_points = [] def add_point(self, x, y): Point = collections.namedtuple('Point', ['x', 'y']) points = self.raw_points points.append(Point(x, y)) if len(points) > ......: # DO SOMETHING board = DrawingBoard() for x in np.arange(0, 10, 0.1): board.add_point(x, np.sin(x)) For context, the actual class is performing some action when a certain amount of points have been added. On my first read, I miss this update was happening (the code is a fair bit more complex). My question is: Is it just me not used to this pattern, and this is a mere question of less verbosity vs clarity? or is there something else I am missing? I can see that if we were updating multiple attributes, it could get quite verbose, but I find calling self.raw_points.append(Points(x,y)) instead of points.append(Points(x,y))so much clearer.
There is no copy of any kind, shallow or otherwise, made in your example. points = self.raw_points means points is self.raw_points. It makes the name points an additional reference to the same object that self.raw_points refers to, so that: points.append(Points(x,y)) is functionally equivalent to: self.raw_points.append(Points(x,y)) without the overhead of an additional attribute lookup in self.raw_points, which is why this is generally considered a good practice if self.raw_points is to be referenced repeatedly.
2
5
77,792,759
2024-1-10
https://stackoverflow.com/questions/77792759/how-to-rotatescale-pdf-pages-around-the-center-with-pypdf
I would like to rotate PDF pages around the center (other than just multiples of 90°) in a PDF document and optionally scale them to fit into the original page. Here on StackOverflow, I found a few similar questions, however, mostly about the outdated PyPDF2. And in the latest pypdf documentation, I could not find (or overlooked) a recipe to rotate pages around the center, e.g. for slightly tilted scanned documents, which require rotation of a few degrees. I know that there is the Transformation Class, but the standard rotation is around the lower left corner and documentation is not explaining in detail what the matrix elements actually are. How to rotate a PDF page around the center and optionally scale it that it fits into the original page?
It took me some time to figure out how to rotate a page around the center and scale it to fit the original page. Although, it just requires a matrix with the correct elements at the right place and some trigonometric functions, it was not too obvious for me. Hence, I post the script I ended up with for my own memories and maybe it is helpful to others not having to re-invent the wheel. After all, you can also achieve it somehow via a combination of rotate(), translate(), and scale(). If anybody has ideas to simplify or improve the following script, please let me know. Script: ### rotate around center (and optional scale) pdf pages from pypdf import PdfReader, PdfWriter, Transformation from math import sin, cos, atan, sqrt, radians, pi pdf_input = PdfReader("Test.pdf") pdf_output = PdfWriter() rotation_angle = 7.3 # in degrees shrink_page = True for page in pdf_input.pages: x0 = (page.mediabox.right - page.mediabox.left)/2 y0 = (page.mediabox.top - page.mediabox.bottom)/2 a = radians(rotation_angle) a0 = atan(max(x0,y0)/min(x0,y0)) s0 = min(x0,y0)/cos(a0 - abs((a-pi/2)%pi - pi/2))/sqrt(x0**2 + y0**2) if shrink_page else 1 rotate_center = Transformation(( s0*cos(a), s0*sin(a), -s0*sin(a), s0*cos(a), s0*(-x0*cos(a)+y0*sin(a))+x0, s0*(-x0*sin(a)-y0*cos(a))+y0)) pdf_output.add_page(page).add_transformation(rotate_center) pdf_output.write("Test_out.pdf") ### end of script Alternatively, as @MartinThoma commented, instead of using the transformation matrix (or tuple) use translate(), rotate() and again translate() and optionally scale(), but in the right order with the proper numbers. This means, replace the line rotate_center = ... with the following: rotate_center = Transformation().translate(tx=-x0,ty=-y0).scale(sx=s0,sy=s0).rotate(rotation_angle).translate(tx=x0,ty=y0) Result: (screenshot from Test_out.pdf)
2
1
77,813,766
2024-1-14
https://stackoverflow.com/questions/77813766/how-does-range-allocate-memory-in-python-3
I'm a beginner programmer, and I took a class in C and got (what I believe and hope to be) a really good understanding of how different functions and data types allocate memory differently. So, with that in mind, could someone explain how range() in Python uses memory, please? I know range() in Python 2 would create a list of values, so that's pretty straightforward to understand, but in Python 3, I keep seeing people say it "creates a new object" or makes an "iterator object", but what does the computer do internally with regards to memory? Also, from what I understand, for-loops work the way for-each loops work in Java, so how does a function like for n in range(6) work if it's not iterating through a list of 6 numbers from 0-5? Just to be clear, I know that, as a beginner, memory usage shouldn't be a concern, but I just like knowing how things work under the hood.
The Python 3 range() object does not allocate memory for its values when you create it (other than the heap memory for the object itself, of course); it is a sequence object. All it contains is the start, stop and step values. It 'generates' numbers on demand; in this case, the 'demand' is the loop for n in range(6):. As you 'iterate' over the object, the next integer is calculated on each iteration. The range object returns an iterator when __iter__() is called on it. The for loop automatically calls this function. We can get the next value by calling next() on it. For example: r = range(6) i = r.__iter__() print(next(i)) # 0 print(next(i)) # 1 print(next(i)) # 2 print(next(i)) # 3 print(next(i)) # 4 print(next(i)) # 5 print(next(i)) # raises 'StopIteration' The loop automatically stops when next() raises StopIteration. A possible implementation would be: def range_func(stop): i = 0 while i < stop: yield i i += 1 Note: This is not the real implementation of range; the real one is way more complicated than this. The above function does not allocate any memory for all of the numbers that it generates. It can be used like range(). for n in range_func(6): print(n)
5
2
77,790,822
2024-1-10
https://stackoverflow.com/questions/77790822/having-relevant-so-and-binaries-inside-the-venv
I installed OpenCV using Anaconda, with the following command. mamba create -n opencv -c conda-forge opencv matplotlib I know that the installation is fully functional because the below works: import cv2 c = cv2.imread("microphone.png") cv2.imwrite("microphone.jpg",c) import os os.getpid() # returns 13249 Now I try to do the same using C++. #include <opencv2/core.hpp> #include <opencv2/imgcodecs.hpp> #include <iostream> using namespace cv; int main() { std::string image_path = "microphone.png"; Mat img = imread(image_path, IMREAD_COLOR); if(img.empty()) { std::cout << "Could not read the image: " << image_path << std::endl; return 1; } imwrite("microphone.JPG", img); return 0; } And the compilation: > g++ --version g++ (conda-forge gcc 12.3.0-3) 12.3.0 Copyright (C) 2022 Free Software Foundation, Inc. ... > export PKG_CONFIG_PATH=/home/stetstet/mambaforge/envs/opencv/lib/pkgconfig > g++ opencv_test.cpp `pkg-config --cflags --libs opencv4` When I run the above, g++ complains that I am missing an OpenGL. /home/stetstet/mambaforge/envs/opencv/bin/../lib/gcc/x86_64-conda-linux-gnu/12.3.0/../../../../x86_64-conda-linux-gnu/bin/ld: warning: libGL.so.1, needed by /home/stetstet/mambaforge/envs/opencv/lib/libQt5Widgets.so.5, not found (try using -rpath or -rpath-link) After some experimentation I discover that some of the libraries must be from /usr/lib/x86_64-linux-gnu, while others must be used from /home/stetstet/mambaforge/envs/opencv/lib/ (opencv is the name of the venv in use). The following yields an a.out which does what was intended: > /usr/bin/g++ opencv_test.cpp `pkg-config --cflags --libs opencv4` -lpthread -lrt The /usr/bin/g++ so that it can actually find libGL.so.1 as well as libglapi.so.0, libselinux.so.1, libXdamage.so.1, and libXxf86vm.so.1. Also, without -lpthread -lrt these libraries are used from the venv, which causes "undefined reference to `h_errno@GLIBC_PRIVATE'" Now, I am very bothered by the fact that I now need to know which one of which library (and g++/ld) I should use. I thought package managers were supposed to handle the dependency mess for us! Would there be any way to make the compilation command into something like > g++ opencv_test.cpp `pkg-config --cflags --libs opencv4` i.e. have all relevant files or binaries inside the venv? For example, is there a way to modify the mamba create command (see top) so that this condition is satisfied? Note: I am tagging both Anaconda, Linux, and OpenCV because I have absolutely no idea what I can use to reach a solution.
You need pkg-config to also be in the env. The following should work: mamba create -n opencv -c conda-forge opencv matplotlib pkg-config mamba activate opencv g++ opencv_test.cpp `pkg-config --cflags --libs opencv4`
2
2
77,814,530
2024-1-14
https://stackoverflow.com/questions/77814530/is-there-a-way-to-get-hotspot-information-from-cur-file-using-pyqt
I am working on a PyQt application and one of the functions is to change the cursor style when user opens the app. It is easy to make this work, the only problem is the hotspot information is default to half of the image's width and height and there is not a certainty that all cursor image have their hotspot info just locate on the center of the image. So I want to get these info from a cur file and set these info by calling QWidget's setCursor method. I have no idea how to get those position information using PyQt. My code is like this: @staticmethod def setCursor(widget: QWidget, cursorIconPath: str): widget.setCursor(QCursor(QPixmap(cursorIconPath))) Please note that the cursor resource file is a .cur file, and technically there is a hotspot info in this file. I also found that QCursor have a method hotSpot() to get this info as a QPoint. I know there may be some way to get it out of PyQt, like a image editor, but it is troublesome because I need to set this hotspot info in my PyQt application every time I want to change the cursor file. Is there any way to solve my problem? Any help would be appreciated!
When creating the cursor via QPixmap, any hotspot information in the file will be lost, since Qt will treat it as an ICO image (which has an almost identical format). The QCursor.hotSpot() method can only ever return the values you supply in the constructor - or a generic default calculated as roughly width [or height] / 2 / device-pixel-ratio. So your only option here is to extract the values directly from the file and then supply them in the QCursor constructor. Fortunately, this is quite easy to do as the structure of the file is quite easy to parse. Below is a basic demo which shows how to achieve this. (Click on the items to test the cursors). from PyQt5 import QtCore, QtGui, QtWidgets def create_cursor(path): curfile = QtCore.QFile(path) if curfile.open(QtCore.QIODevice.ReadOnly): pixmap = QtGui.QPixmap.fromImage( QtGui.QImage.fromData(curfile.readAll(), b'ICO')) if not pixmap.isNull(): curfile.seek(10) stream = QtCore.QDataStream(curfile) stream.setByteOrder(QtCore.QDataStream.LittleEndian) hx = stream.readUInt16() hy = stream.readUInt16() return QtGui.QCursor(pixmap, hx, hy) class Window(QtWidgets.QWidget): def __init__(self): super().__init__() self.button = QtWidgets.QPushButton('Choose Cursors') self.button.clicked.connect(self.handleButton) self.view = QtWidgets.QListWidget() layout = QtWidgets.QVBoxLayout(self) layout.addWidget(self.view) layout.addWidget(self.button) self.view.itemClicked.connect(self.handleItemClicked) def handleButton(self): files = QtWidgets.QFileDialog.getOpenFileNames( self, 'Choose Cursors', QtCore.QDir.homePath(), 'Cursor Files (*.cur)')[0] if files: self.view.clear() for filepath in files: cursor = create_cursor(filepath) if cursor is not None: item = QtWidgets.QListWidgetItem(self.view) item.setIcon(QtGui.QIcon(cursor.pixmap())) item.setText(QtCore.QFileInfo(filepath).baseName()) item.setData(QtCore.Qt.UserRole, cursor) def handleItemClicked(self, item): self.setCursor(item.data(QtCore.Qt.UserRole)) if __name__ == '__main__': app = QtWidgets.QApplication(['Test']) window = Window() window.setGeometry(600, 100, 250, 350) window.show() app.exec()
2
3
77,815,755
2024-1-14
https://stackoverflow.com/questions/77815755/modifying-element-with-index-in-python
I am a beginner in Python. I am currently learning how to modify an element with its index. My code: data = [4, 6, 8] for index, value in enumerate(data): data[index] = data[index] + 1 value = value * 2 print(data) print(value) Terminal Output: [5, 7, 9] 16 Terminal Output Expectation: [5, 7, 9] [8, 12, 16] Why didn’t I get my expected output? Now I have tried to define an empty_list before my FOR loop and then add the VALUE variable to it, then print(the_empty_list) Here: data = [4, 6, 8] empty_list = [] for index, value in enumerate(data): data[index] = data[index] + 1 value = value * 2 **(How to add VALUE to the empty_list)** print(data) print(empty_list) I am finding the asterisked part difficult. Who knows what I should do here?
If your list is empty you can't access it with an index. Use append method to add new elements: data = [4, 6, 8] empty_list = [] for index, value in enumerate(data): data[index] = data[index] + 1 value = value * 2 empty_list.append(value) print(data) print(empty_list) Output: [5, 7, 9] [8, 12, 16]
2
1
77,812,049
2024-1-13
https://stackoverflow.com/questions/77812049/openai-api-error-choice-object-has-no-attribute-text
I created a Python bot a few months ago, and it worked perfectly, but now, after the OpenAI SDK update, I have some problems with it. As I don't know Python very well, I need your help. This is the code: from openai import OpenAI import time import os import csv import logging # Your OpenAI API key api_key = "MY-API-KEY" client = OpenAI(api_key=api_key) # Path to the CSV file containing city names csv_file = "city.csv" # Directory where generated content files will be saved output_directory = "output/" # Initialize the OpenAI API client # Configure logging to save error messages logging.basicConfig( filename="error_log.txt", level=logging.ERROR, format="%(asctime)s [%(levelname)s]: %(message)s", datefmt="%Y-%m-%d %H:%M:%S", ) # Read city names from the CSV file def read_city_names_from_csv(file_path): city_names = [] with open(file_path, "r") as csv_file: csv_reader = csv.reader(csv_file) for row in csv_reader: if row: city_names.append(row[0]) return city_names # Generate content for a given city name and save it to a file def generate_and_save_content(city_name): prompt_template = ( ".... Now Write An Article On This Topic {city_name}" ) messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt_template.format(city_name=city_name)}, ] try: response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages, max_tokens=1000) choices = response.choices chat_completion = choices[0] content = chat_completion.text output_file = os.path.join(output_directory, city_name + ".txt") with open(output_file, "w", encoding="utf-8") as file: file.write(content) return True except Exception as e: error_message = f"Error generating content for {city_name}: {str(e)}" print(error_message) logging.error(error_message) return False # Main function def main(): # Create the output directory if it doesn't exist if not os.path.exists(output_directory): os.makedirs(output_directory) city_names = read_city_names_from_csv(csv_file) successful_chats = 0 unsuccessful_chats = 0 for city_name in city_names: print(f"Generating content for {city_name}...") success = generate_and_save_content(city_name) if success: successful_chats += 1 else: unsuccessful_chats += 1 # Add a delay to avoid API rate limits time.sleep(2) print("Content generation completed.") print(f"Successful chats: {successful_chats}") print(f"Unsuccessful chats: {unsuccessful_chats}") if __name__ == "__main__": main() Currently, I'm getting this error: 'Choice' object has no attribute 'text' and couldn't fix it at all. Would you please tell me how I can fix this? Also, if there is any other problem with the code, please guide me on how to fix it. Thanks. I tried many things using Bard and ChatGPT, but none of them helped.
You're trying to extract the response incorrectly. Change this... choices = response.choices chat_completion = choices[0] content = chat_completion.text # Wrong (this works with the Completions API) ...to this. choices = response.choices chat_completion = choices[0] content = chat_completion.message.content # Correct (this works with the Chat Completions API) Or, if you want to have everything in one line, change this... content = response.choices[0].text # Wrong (this works with the Completions API) ...to this. content = response.choices[0].message.content # Correct (this works with the Chat Completions API)
2
3
77,813,986
2024-1-14
https://stackoverflow.com/questions/77813986/string-replace-replacing-all-occurences
I'm trying to make string.replace replace all words except ones starting with a certain character, e.g. ~, but if i have a word with only one letter, that letter gets deleted from the words i want to stay Code import re st1 = "dolphin fish ~shark eel octopus ~squid a" for i in re.findall(r"\b(?<!~)\w+", st1): st1 = st1.replace(i, "") print(st1) What I want to happen: Input: "dolphin fish ~shark eel octopus ~squid a" Output: "~shark ~squid" What happens: Input: "dolphin fish ~shark eel octopus ~squid a" Output: "~shrk ~squid" I think this is happening because it's replacing all instances of 'a', including the ones within other words. How do I ensure it only deletes the instance intended?
I would actually suggest using a regex find all approach here: st1 = "dolphin fish ~shark eel octopus ~squid a" matches = re.findall(r'~\w+', st1) output = " ".join(matches) print(output) # ~shark ~squid
2
2
77,810,920
2024-1-13
https://stackoverflow.com/questions/77810920/how-to-find-pair-of-subarrays-with-maximal-sum
Given an array of integers, I can find the maximal subarray sum using Kadane's algorithm. In code this looks like: def kadane(arr, n): # initialize subarray_sum, max_subarray_sum and subarray_sum = 0 max_subarray_sum = np.int32(-2**31) # Just some initial value to check # for all negative values case finish = -1 # local variable local_start = 0 for i in range(n): subarray_sum += arr[i] if subarray_sum < 0: subarray_sum = 0 local_start = i + 1 elif subarray_sum > max_subarray_sum: max_subarray_sum = subarray_sum start = local_start finish = i # There is at-least one # non-negative number if finish != -1: return max_subarray_sum, start, finish # Special Case: When all numbers in arr[] are negative max_subarray_sum = arr[0] start = finish = 0 # Find the maximum element in array for i in range(1, n): if arr[i] > max_subarray_sum: max_subarray_sum = arr[i] start = finish = i return max_subarray_sum, start, finish This is fast and works well. However, I would like to find a pair of subarrays with maximal sum. Take this example input. arr = [3, 3, 3, -8, 3, 3, 3] The maximal subarray is the entire array with sum 10. But if I am allowed to take two subarrays they can be [3, 3, 3] and [3, 3, 3] which has sum 18. Is there a fast algorithm to compute the maximal pair of subarrays? I am assuming the two subarrays will not overlap.
Kadane's algorithm is used to find the maximal subarray sum in an array of integers. However, to find the maximal pair of non-overlapping subarrays with maximal sum, a different approach is needed. One way to solve this is by using the concept of prefix and suffix sums. Here's a high-level overview of the algorithm: Compute the prefix sum and suffix sum arrays for the given array. Use Kadane's algorithm to find the maximum sum subarray in the prefix sum array, which gives the maximal subarray ending at each position. Use Kadane's algorithm in reverse to find the maximum sum subarray in the suffix sum array, which gives the maximal subarray starting at each position. Iterate through the array and find the pair of non-overlapping subarrays with maximal sum by combining the results from steps 2 and 3. This approach has a time complexity of O(n) (and space complexity of O(n)) and can efficiently find the maximal pair of subarrays with maximal sum, where n is the size of the input array. def max_subarray(arr, rev=False): best_sum = [(arr[0], 0, 0)] current_sum = (arr[0], 0, 0) for j in range(1, len(arr)): current_sum = max( (arr[j], j, j), (current_sum[0] + arr[j], current_sum[1], j) ) best_sum.append(max(best_sum[-1], current_sum)) return [ (t[0], len(arr) - t[2] - 1, len(arr) - t[1] - 1) for t in reversed(best_sum) ] if rev else best_sum def max_pair_subarray(arr): max_sub = max_subarray(arr) max_sub_rev = max_subarray(list(reversed(arr)), rev=True) max_sum = (max_sub[0][0] + max_sub_rev[1][0], max_sub[0], max_sub_rev[1]) for j in range(1, len(arr) - 2): max_sum = max( max_sum, (max_sub[j][0] + max_sub_rev[j + 1][0], max_sub[j], max_sub_rev[j + 1]), ) return ( max_sum[0], (max_sum[1][1], max_sum[1][2]), (max_sum[2][1], max_sum[2][2]), ) max_sum, (start_idx_1, end_idx_1), (start_idx_2, end_idx_2) = max_pair_subarray(arr)
2
2
77,812,803
2024-1-13
https://stackoverflow.com/questions/77812803/efficient-rendering-optimization-in-ray-tracing-avoiding-full-scene-rendering
I'm currently working on a ray tracing project using Python and have encountered performance issues with rendering the entire scene each time. I want to implement a more efficient rendering approach similar to how Unreal Engine handles it. Specifically, I'm looking for guidance on implementing the following optimizations: Frustum Culling: I want to avoid rendering objects that are outside the camera's frustum. What is the best way to implement frustum culling in my ray tracing code? Dynamic Resolution Scaling: I'm interested in rendering each object at a specific resolution based on its distance from the camera. How can I implement dynamic resolution scaling to optimize rendering performance? I've found a ray tracing code on GitHub, and while it provides a solid foundation, I'm struggling to integrate these optimization techniques into my existing code. Could someone provide guidance or code snippets for achieving these optimizations in a ray tracing context?
To 1: Frustum culling does not make sense in ray tracing. Frustum culling makes sense for rasterization. Rasterization is a top-down approach: for instance, you want to render a cube. In rasterization you say, in a top-down approach - to render a cube I just must render its faces. To render a face (quad) of a cube you just need to render 2 triangles. To render a triangle you must just project its vertices via a projection matrix and then do some clipping. After projection and clipping, to render a 2D triangle you must draw its fragments. To draw a fragment you need a Z-Buffer and a Z-Buffer test, maybe some alpha blending. You do rendering in rasterization from the top (a cube) down to the fragment/pixel level. When it comes to frustum culling you simply say: if the cube (its 8 vertices) is not within the view frustum, I can skip the whole top-down approach of projecting individual triangles, clipping, etc. Usually you use bounding boxes for more complicated objects to quickly reject objects outside of the view frustum. Ray tracing is bottom-up. You start at the pixel level - you ask, for one pixel, where its color contribution comes from. You trace a ray from the pixel into the scene. Usually, you have a bounding volume hierarchy (BVH). You find out that the ray hits the bounding box of your scene. You go down the hierarchy and find out the ray intersects a triangle. You find out that the triangle belongs to an object (e.g. a cube) which has some specific BRDF. You sample the BRDF and get the color contribution. Frustum culling does not make sense here since rays can bounce around the whole scene (a ray hitting a reflective cube can bounce toward an object outside of your frustum). I think what you mean is not frustum culling, but a BVH. There are different ways to implement BVHs. For instance you can use octrees: https://book.vertexwahn.de/docs/rendering/octree/ To 2: This also comes from the rasterization domain. A translation to ray tracing would be to sample objects that are far away less than objects close to the viewer.
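As a rough, hedged sketch of both points in Python (names and numbers are illustrative assumptions, not taken from the linked code): a slab-method ray/AABB test that a BVH traversal uses to skip whole groups of objects, and a crude distance-based sample budget:
import math

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    # Slab method: intersect the ray with the three axis-aligned slabs of the box;
    # the ray hits the box only if the resulting t-intervals overlap in front of the origin.
    # inv_dir holds 1/direction per axis and is assumed to have no zero components.
    t_near, t_far = -math.inf, math.inf
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0

def samples_for_object(distance, base_samples=64, min_samples=4):
    # "Dynamic resolution" translated to ray tracing: spend fewer samples on distant objects.
    return max(min_samples, int(base_samples / max(distance, 1.0)))

# In a BVH traversal you would call ray_hits_aabb on a node's bounding box and,
# if it returns False, skip every triangle stored under that node.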
2
1
77,810,699
2024-1-13
https://stackoverflow.com/questions/77810699/can-numpy-replace-these-list-comprehensions-to-make-it-run-faster
Can this matrix math be done faster? I'm using Python to render 3D points in perspective. Speed is important, because it will translate directly to frame rate at some point down the line. I tried to use NumPy functions, but I couldn't clear out two pesky list comprehensions. 90% of my program's runtime is during the list comprehensions, which makes sense since they contain all of the math, so I want to find a faster method if possible. The first list comprehension happens when making pos- it does a sum and matrix multiplication for every individual row of vert_array The second one, persp, multiplies the x and y values of each row based on that specific row's z value. Can replace those list comprehensions with something from NumPy? I read about numpy.einsum and numpy.fromfunction, but I was struggling to understand if they're even relevant to my problem. Here is the function that does the main rendering calculations: I want to make pos and persp faster: import time from random import randint import numpy as np def render_all_verts(vert_array): """ :param vert_array: a 2-dimensional numpy array of float32 values and size 3 x n, formatted as follows, where each row represents one vertex's coordinates in world-space coordinates: [[vert_x_1, vert_y_1, vert_z_1], [vert_x_2, vert_y_2, vert_z_2], ... [vert_x_n, vert_y, vert_z]] :return: a 2-dimensional numpy array of the same data type, size and format as vert_array, but in screen-space coordinates """ # Unit Vector is a 9 element, 2D array that represents the rotation matrix # for the camera after some rotation (there's no rotation in this example) unit_vec = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype='float32') # Raw Shift is a 3 element, 1D array that represents the position # vector (x, y, z) of the camera in world-space coordinates shift = np.array([0, 0, 10], dtype='float32') # PURPOSE: This converts vert_array, with its coordinates relative # to the world-space axes and origin, into coordinates relative # to camera-space axes and origin (at the camera). # MATH DESCRIPTION: For every row, raw_shift is added, then matrix # multiplication is performed with that sum (1x3) and unit_array (3x3). pos = np.array([np.matmul(unit_vec, row + shift) for row in vert_array], dtype='float32') # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ # This is a constant used to solve for the perspective focus = 5 # PURPOSE: This calculation does the math to change the vertex coordinates, # which are relative to the camera, into a representation of how they'll # appear on screen in perspective. The x and y values are scaled based on # the z value (distance from the camera) # MATH DESCRIPTION: Each row's first two columns are multiplied # by a scalar, which is derived from that row's third column value. persp = np.array([np.multiply(row, np.array([focus / abs(row[2]), focus / abs(row[2]), 1])) for row in pos]) # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ return persp I wrote this code to time render_all_verts and generate an list of random vertex coordinates to run through it repeatedly. # TESTING RENDERING SPEED start_time = time.time() # The next few lines make an array, similar to the 3D points I'd be rendering. # It contains n vertices with random coordinate values from -m to m n = 1000 m = 50 example_vertices = np.array([(randint(-m, m), randint(-m, m), randint(-m, m)) for i in range(n)]) # This empty array, which is the same shape as example_vertices. The results are saved here. 
rendered_verts = np.empty(example_vertices.shape) print('Example vertices:\n', example_vertices) # This loop will render the example vertices many times render_times = 2000 for i in range(render_times): rendered_verts = render_all_verts(example_vertices) print('\n\nLast calculated render of vertices:\n', rendered_verts) print(f'\n\nThis program took an array of {n} vertices with randomized coordinate') print(f'values between {-m} and {m} and rendered them {render_times} times.') print(f'--- {time.time() - start_time} seconds ---') Finally, here's one instance of the terminal output: C:\...\simplified_demo.py Example vertices: [[-45 4 -43] [ 42 27 28] [-33 24 -18] ... [ -5 48 5] [-17 -17 29] [ -5 -46 -24]] C:\...\simplified_demo.py:45: RuntimeWarning: divide by zero encountered in divide persp = np.array([np.multiply(row, np.array([focus / abs(row[2]), focus / abs(row[2]), 1])) Last calculated render of vertices: [[ -6.81818182 0.60606061 -33. ] [ 5.52631579 3.55263158 38. ] [-20.625 15. -8. ] ... [ -1.66666667 16. 15. ] [ -2.17948718 -2.17948718 39. ] [ -1.78571429 -16.42857143 -14. ]] This program took an array of 1000 vertices with randomized coordinate values between -50 and 50 and rendered them 2000 times. --- 15.910243272781372 seconds --- Process finished with exit code 0 P.S. NumPy seems to handle division by zero and overflow values fine for now, so I'm not worried about the Runtimewarning. I replacced my file paths with ... P.P.S. Yes, I know I could just use OpenGL or any other existing rendering engine that would already handle all this math, but I'm more interested in reinventing this wheel. It's mostly an experiment for me to study Python and NumPy.
An intial speedup can be made by using vectorization def render_all_verts(vert_array): """ :param vert_array: a 2-dimensional numpy array of float32 values and size 3 x n, formatted as follows, where each row represents one vertex's coordinates in world-space coordinates: [[vert_x_1, vert_y_1, vert_z_1], [vert_x_2, vert_y_2, vert_z_2], ... [vert_x_n, vert_y, vert_z]] :return: a 2-dimensional numpy array of the same data type, size and format as vert_array, but in screen-space coordinates """ # Unit Vector is a 9 element, 2D array that represents the rotation matrix # for the camera after some rotation (there's no rotation in this example) unit_vec = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype='float32') # Raw Shift is a 3 element, 1D array that represents the position # vector (x, y, z) of the camera in world-space coordinates shift = np.array([0, 0, 10], dtype='float32') # PURPOSE: This converts vert_array, with its coordinates relative # to the world-space axes and origin, into coordinates relative # to camera-space axes and origin (at the camera). # MATH DESCRIPTION: For every row, raw_shift is added, then matrix # multiplication is performed with that sum (1x3) and unit_array (3x3). pos2 = np.matmul(unit_vec, (vert_array + shift).T).T """ pos = np.array([np.matmul(unit_vec, row + shift) for row in vert_array], dtype='float32') print(vert_array.shape, unit_vec.shape) assert pos2.shape == pos.shape, (pos2.shape, pos.shape) assert np.all(pos2 == pos), np.sum(pos - pos2) """ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ # This is a constant used to solve for the perspective focus = 5 # PURPOSE: This calculation does the math to change the vertex coordinates, # which are relative to the camera, into a representation of how they'll # appear on screen in perspective. The x and y values are scaled based on # the z value (distance from the camera) # MATH DESCRIPTION: Each row's first two columns are multiplied # by a scalar, which is derived from that row's third column value. x = focus / np.abs(pos2[:,2]) persp2 = np.multiply(pos2, np.dstack([x, x, np.ones(x.shape)])) """ persp = np.array([np.multiply(row, np.array([focus / abs(row[2]), focus / abs(row[2]), 1])) for row in pos2]) assert persp.shape == persp2.shape, (persp.shape, persp2.shape) assert np.all(persp == persp2), np.sum(persp - persp2) """ # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ return persp2 this runs in 0.18s then we can remove the un-needed transpose parts for 50% more perf def render_all_verts_2(vert_array): unit_vec = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype='float32') shift = np.array([0, 0, 10], dtype='float32') pos2 = np.matmul(unit_vec, (vert_array + shift).T) focus = 5 x = focus / np.abs(pos2[2]) persp2 = np.multiply(pos2, np.vstack([x, x, np.ones(x.shape)])) return persp2.T[np.newaxis,] this version runs in 0.12 seconds on my system finally we can use numba for a 4x speedup to 0.03s from numba import njit @njit def render_all_verts_2(vert_array): unit_vec = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32) shift = np.array([0, 0, 10], dtype=np.float32) pos2 = np.dot(unit_vec, (vert_array + shift).T) focus = 5 x = focus / np.abs(pos2[2]) persp2 = np.multiply(pos2, np.vstack((x, x, np.ones(x.shape, dtype=np.float32)))) return persp2.T.reshape((1, persp2.shape[1], persp2.shape[0])) this is 800 times faster than the 26 seconds it was taking before on my machine. 
Even more optimized as suggested by @Nin17 (but I couldn't see any impact) Also removed the 3d reshape. Somewhere while testing I asserted that the shape should match 3d, this is changed. # Best njit version with transpose - 0.03 @njit def render_all_verts(vert_array): unit_vec = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32) shift = np.array([0, 0, 10], dtype=np.float32) focus = 5 data = (vert_array + shift).T data = np.dot(unit_vec, data) data[:2] *= focus / np.abs(data[2:3]) return data.T # Best normal version wiht transpose - 0.10 def render_all_verts(vert_array): unit_vec = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32) shift = np.array([0, 0, 10], dtype=np.float32) focus = 5 data = (vert_array + shift).T data = np.dot(unit_vec, data) data[:2] *= focus / np.abs(data[2:3]) return data.T # Without transpose is slower, probably because of BLAS implementation / memory contiguity etc. # Without transpose normal - 0.14s (slowest) def render_all_verts(vert_array): unit_vec = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32) shift = np.array([0, 0, 10], dtype=np.float32) focus = 5 pos2 = (vert_array + shift) pos2 = np.dot(pos2, unit_vec) pos2[:,:2] *= focus/np.abs(pos2[:,2:3]) return pos2 # njit without transpose (second fastest) - 0.06s @njit def render_all_verts_2(vert_array): unit_vec = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32) shift = np.array([0, 0, 10], dtype=np.float32) focus = 5 pos2 = (vert_array + shift) pos2 = np.dot(pos2, unit_vec) pos2[:,:2] *= focus/np.abs(pos2[:,2:3]) return pos2 also fastest implementation in plain python that I could do (using numba non supported features) - it is basically on par with numba for larger vertices, and faster than numba for even larger vertices. def render_all_verts(vert_array): unit_vec = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32) shift = np.array([0, 0, 10], dtype=np.float32) focus = 5 data = vert_array.T.astype(np.float32, order='C') data += shift[:,np.newaxis] np.dot(unit_vec, data, out=data) data[:2] *= focus / np.abs(data[2:3]) return data.T
2
1
77,812,375
2024-1-13
https://stackoverflow.com/questions/77812375/pytorch-error-on-mps-apple-silicon-metal
When I use PyTorch on the CPU, it works fine. When I try to use the mps device it fails. I'm using miniconda for osx-arm64, and I've tried both python 3.8 and 3.11 and both the stable and nightly PyTorch installs. According to the website (https://pytorch.org/get-started/locally/) mps acceleration is available now without nightly. The code I've written is as follows: import torch mps_device = torch.device("mps") float_32_tensor1 = torch.tensor([3.0, 6.0, 9.0], dtype=torch.float32, device=mps_device, requires_grad=False) float_32_tensor2 = torch.tensor([3.0, 6.0, 9.0], dtype=torch.float32, device=mps_device, requires_grad=False) print(float_32_tensor1.mul(float_32_tensor2)) This results in the following (fairly long) error: https://pastebin.com/svwZj8Ke First line of error is: RuntimeError: Failed to create indexing library, error: Error Domain=MTLLibraryErrorDomain Code=3 "program_source:168:1: error: type 'const constant ulong3 *' is not valid for attribute 'buffer' How would I go about solving this? edit: meta says pastebin shouldn't be used but the error is too long to include in the question edit 2: Note that torch.backends.mps.is_available() returns true edit 3: seems to work normally on the console but Jupyter has this error
Jupyter was using the old kernel (the dev one) even though I switched interpreters. Restarting Jupyter with the new anaconda environment (python 3.8 and the release version of PyTorch) works.
2
1
77,811,790
2024-1-13
https://stackoverflow.com/questions/77811790/what-is-the-role-of-base-class-used-in-the-built-in-module-unittest-mock
While taking a deep dive into how the built-in unittest.mock was designed, I ran into these lines in the official source code of mock.py:

class Base(object):
    _mock_return_value = DEFAULT
    _mock_side_effect = None

    def __init__(self, /, *args, **kwargs):
        pass


class NonCallableMock(Base):
    ...
    # and later in the init
    def __init__(...):
        ...
        _safe_super(NonCallableMock, self).__init__(
            spec, wraps, name, spec_set, parent, _spec_state
        )

What is the role of the Base class besides providing the common class attributes _mock_return_value and _mock_side_effect, as it has just an empty __init__? And why does NonCallableMock have to call _safe_super(NonCallableMock, self).__init__(), which I believe is exactly just the empty __init__ method of the Base class?
Thanks a lot for your answers in advance. I tried to understand the code but couldn't see the rationale behind the design.
It's for multiple inheritance, which unittest.mock uses for classes like class MagicMock(MagicMixin, Mock):. Despite the name, super doesn't mean "call the superclass method". Instead, it finds the next method implementation in a type's method resolution order, which might not come from a parent of the current class when multiple inheritance is involved. When you don't know what method super will call, you have to be a lot more careful about what arguments you pass to that method. Most of the classes in this class hierarchy forward arguments like spec and wraps to the next __init__ implementation when they call _safe_super(...).__init__(...). Even if none of their ancestors need those arguments, a sibling implementation could still need them. If all of the classes were to forward these arguments, then object.__init__ would eventually receive those arguments, and object.__init__ would throw an exception. Some class has to handle the job of specifically not passing those arguments to object.__init__, and that class has to sit at the root of the class hierarchy, right below object itself. Thus, this Base class: a class for all the other classes in this multiple inheritance hierarchy to inherit from, existing almost solely for the purpose of preventing object.__init__ from complaining.
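Here is a minimal sketch, outside of mock.py and with made-up class names, of why a root class that swallows arguments matters once cooperative super() calls and multiple inheritance are involved:

class Root:
    def __init__(self, /, *args, **kwargs):
        # Deliberately does NOT forward anything to object.__init__,
        # which would raise TypeError if it received extra arguments.
        pass

class A(Root):
    def __init__(self, spec=None, **kwargs):
        print("A sees spec =", spec)
        super().__init__(spec, **kwargs)   # the "next" class here is B, not Root

class B(Root):
    def __init__(self, spec=None, **kwargs):
        print("B sees spec =", spec)
        super().__init__(spec, **kwargs)

class C(A, B):
    pass

C(spec="something")  # both A and B get to see spec; Root silently absorbs it
# If A and B inherited from object directly, B's super().__init__(spec)
# would reach object.__init__ and raise TypeError.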
2
3
77,811,473
2024-1-13
https://stackoverflow.com/questions/77811473/how-to-draw-a-arc-in-a-surface
I want to draw a semicircle inside a surface, so I thought to use pygame.draw.arc(), but I can't figure out how to draw the arc inside of a surface. I have seen the official docs and am using math.radians() to convert degrees to radians, but I can't see the arc. A circle is being drawn though...

import pygame
import math

pygame.init()
width, height = 800, 600
screen = pygame.display.set_mode((width, height))
surf = pygame.Surface((100, 100))
surf.fill("green")
rect = surf.get_rect(center=(100, 100))

run = 1
while run:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            run = 0

    screen.fill((255, 202, 135))
    pygame.draw.circle(surf, (255, 255, 255), (30,30), 25)
    pygame.draw.arc(surf, (255, 255, 255), rect, math.radians(270), math.radians(90))
    screen.blit(surf, rect)
    pygame.display.flip()

pygame.quit()

Thanks in advance
The position of the circle segment must of course be relative to the Surface on which the circle segment is drawn, and not the absolute position at which the Surface is then placed on the screen. Replace

pygame.draw.arc(surf, (255, 255, 255), rect, math.radians(270), math.radians(90))

with

pygame.draw.arc(surf, (255, 255, 255), (0, 0, 100, 100), math.radians(270), math.radians(90))
5
3
77,809,700
2024-1-12
https://stackoverflow.com/questions/77809700/should-name-mangling-be-used-in-classes-that-are-inherited-by-other-subclasses
I'm trying to create classes and subclasses in python with "protected" attributes using name mangling (__). However, when I try to extend the functionality of classes further with different subclasses, I get errors. So when is the appropriate time to use name mangling when the expectation is that both the parent class and the subclasses have unique use cases? Here is a simple toy example to illustrate the question. Let's say the Parent is a realist, so they see the value as it is. The PositiveChild sees things in a better light, so they see the value as a positive. The NegativeChild is a pessimist, so they see the value as a negative. However, both PositiveChild and NegativeChild understand how their parent feels, so they have access to the Parent's function. class Parent(): def __init__(self, value: int) -> None: self.__value = value def how_parent_feels(self) -> None: print(self.__value) class PositiveChild(Parent): def __init__(self, value: int) -> None: super().__init__(value) def how_child_feels(self) -> None: print(abs(self.__value)) class NegativeChild(Parent): def __init__(self, value: int) -> None: super().__init__(value) def how_child_feels(self) -> None: print(abs(self.__value) * -1) if __name__ == "__main__": # This is our function. val = 10 # Here is our parent. mom = Parent(val) # How does she feel? mom.how_parent_feels() >>> 10 # Here is her kids. kid1 = PositiveChild(val) kid2 = NegativeChild(val) # Do they know how the mom feels? kid1.how_parent_feels() >>> 10 kid2.how_parent_feels() >>> 10 # But how do THEY feel? kid1.how_child_feels() >>> AttributeError: 'PositiveChild' object has no attribute '_PositiveChild__value' kid2.how_child_feels() >>> AttributeError: 'NegativeChild' object has no attribute '_NegativeChild__value' I understand that the name mangling is making both PositiveChild and NegativeChild to have a _Parent__value attribute, and this is what the inherited how_parent_feels() method expects and so this works fine. But the respective how_child_feels() methods are expecting a _PositiveChild__value or _NegativeChild__value attribute. So what are your options for re-using name mangled inherited attributes within subclass specific methods? Is this a misuse of "protected" python attributes and should be avoided altogether?
The entire point of using double-underscore name-mangling is to prevent name collisions in subclasses. That's it. So this is precisely the behavior that is intended, what you have demonstrated is the canonical use case for double-underscore name mangling! Or to be a little more precise, it is to prevent accidental name collisions. The whole point is to allow subclasses to define their own self.__value variable without having to worry if the parent class is using a name like that! It is to reproduce this one feature of "private" in languages with access modifiers, that is, no accidental name collisions. It is crucial to understand, though, that isn't private though, since these variables are "publicly" accessible. If you really wanted to use double-underscore name-mangling this way, the straightforward solution is to use the correct variable, which is _Parent__value #in a subclass class PositiveChild(Parent): def __init__(self, value: int) -> None: super().__init__(value) def how_child_feels(self) -> None: # see, all variables are always public in Python # this only prevents *accidental* name collisions # nothing is stopping you from using this variable print(abs(self._Parent__value)) If you want to mark a variable as restricted to internal use, you would commonly just use a single leading underscore. Both of these approaches are equally as "private". Python doesn't have private! Note, in languages with access modifiers, like Java, if you wanted a variable that wasn't exposed publicly, but was exposed to subclasses, you would use protected, so sometimes that terminology is used to refer to this idea in Python.
2
4
77,808,706
2024-1-12
https://stackoverflow.com/questions/77808706/update-the-values-in-the-df-based-on-the-column-name
I have the following pandas DataFrame:

x_1  x_2  x_3  x_4  col_to_replace  cor_factor
1    2    3    4    x_2             1
3    3    5    1    x_1             6
2    2    0    0    x_3             0
...

I want to update the column whose name is saved in col_to_replace with the value from cor_factor, and save the result in the corresponding column and also in the cor_factor column.
Some (ugly) solution could be:

for i in range(df.shape[0]):
    col = df['col_to_replace'].iloc[i]
    df[col].iloc[i] = df[col].iloc[i] - df['cor_factor'].iloc[i]
    df['cor_factor'].iloc[i] = df['cor_factor'].iloc[i] - df[col].iloc[i]

This way is absolutely not time efficient. I'm looking for a faster solution.
The output for the DF should be in this case:

x_1  x_2  x_3  x_4  col_to_replace  cor_factor
1    1    3    4    x_2             -1
-3   3    5    1    x_1             3
2    2    0    0    x_3             0
...
Use a pivot to correct the x_ values and an indexing lookup to correct the last column. Make sure to make a copy before modification since the values change: # perform indexing lookup # save the value for later idx, cols = pd.factorize(df['col_to_replace']) corr = df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx] # pivot and subtract the factor # ensure original order of the columns cols = df.columns.intersection(cols, sort=False) df[cols] = df[cols].sub(df.pivot(columns='col_to_replace', values='cor_factor'), fill_value=0).convert_dtypes() # correct with the saved "corr" df['cor_factor'] -= corr Output: x_1 x_2 x_3 x_4 col_to_replace cor_factor 0 1 1 3 4 x_2 -1 1 -3 3 5 1 x_1 3 2 2 2 0 0 x_3 0
2
3
77,808,253
2024-1-12
https://stackoverflow.com/questions/77808253/how-to-replace-counter-to-use-numpy-code-only
I have this code: from collections import Counter import numpy as np def make_data(N): np.random.seed(40) g = np.random.randint(-3, 4, (N, N)) return g N = 100 g = make_data(N) n = g.shape[0] sum_dist = Counter() for i in range(n): for j in range(n): dist = i**2 + j**2 sum_dist[dist] += g[i, j] sorted_dists = sorted(sum_dist.keys()) for i in range(1, len(sorted_dists)): sum_dist[sorted_dists[i]] += sum_dist[sorted_dists[i-1]] # print(sum_dist) print(max(sum_dist, key=sum_dist.get)) The output is 7921. I want to convert it into numpy only code and get rid of Counter. How can I do that?
Can you just make sum_dist into an array, since you know its maximum index? sum_dist = np.zeros(2 * N * N, dtype=int) for i in range(n): for j in range(n): dist = i**2 + j**2 sum_dist[dist] += g[i, j] print(np.argmax(np.cumsum(sum_dist)))
6
5
77,808,235
2024-1-12
https://stackoverflow.com/questions/77808235/pandas-remove-characters-from-a-column-of-strings
I have a dataframe with a Date column consisting of strings in this format. I need to strip the end of the string so that I can convert to a datetime object.

"20231101 05:00:00 America/New_York"
"20231101 06:00:00 America/New_York"

I have tried these approaches unsuccessfully.

df['Date'] = df['Date'].replace('^.*\]\s*', '', regex=True)
df['Date'] = df['Date'].str.strip(' America/New_York')
df['Date'] = df['Date'].map(lambda x: x.rstrip(' America/NewYork'))

as well as a couple of others based on my searches. Is there an easy way to do this, or should I write a function to slice the string by grabbing the first 17 characters and assigning the result back to the df?
Note the string could be of the form '20231101 05:00:00 America/Central'
Thanks for any and all assistance.
If you want to remove a particular suffix, then I recommend str.removesuffix rather than str.strip. Notice that you sometimes write New_York with an underscore and sometimes NewYork without an underscore. If you ask to remove 'NewYork' then 'New_York' won't be removed. After the edit in your question, the suffixes all start with ' America' but differ afterwards; in this case you could use str.split(' America').str[0] to keep everything before ' America'. import pandas as pd df = pd.DataFrame({ 'Name': ['Alice', 'Bob', 'Charlie'], 'Date': ["20231101 05:00:00 America/New_York", "20231101 06:00:00 America/New_York", "20231101 07:00:00 America/Central"] }) # df['Date'] = df['Date'].str.removesuffix(' America/New_York') df['Date'] = df['Date'].str.split(' America').str[0] print(df) # Name Date # 0 Alice 20231101 05:00:00 # 1 Bob 20231101 06:00:00 # 2 Charlie 20231101 07:00:00
2
1
77,802,033
2024-1-11
https://stackoverflow.com/questions/77802033/c-program-and-subprocess
I wrote this simple C program to explain a harder problem with the same characteristics.

#include <stdio.h>

int main(int argc, char *argv[]) {
    int n;
    while (1){
        scanf("%d", &n);
        printf("%d\n", n);
    }
    return 0;
}

and it works as expected. I also wrote a subprocess script to interact with this program:

from subprocess import Popen, PIPE, STDOUT

process = Popen("./a.out", stdin=PIPE, stdout=PIPE, stderr=STDOUT)

# sending a byte
process.stdin.write(b'3')
process.stdin.flush()

# reading the echo of the number
print(process.stdout.readline())

process.stdin.close()

The problem is that, if I run my python script, the execution freezes on the readline(). In fact, if I interrupt the script I get:

/tmp » python script.py
^CTraceback (most recent call last):
  File "/tmp/script.py", line 10, in <module>
    print(process.stdout.readline())
          ^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt

If I change my python script to:

from subprocess import Popen, PIPE, STDOUT

process = Popen("./a.out", stdin=PIPE, stdout=PIPE, stderr=STDOUT)

with process.stdin as pipe:
    pipe.write(b"3")
    pipe.flush()

# reading the echo of the number
print(process.stdout.readline())

# sending another num:
pipe.write(b"4")
pipe.flush()

process.stdin.close()

I got this output:

» python script.py
b'3\n'
Traceback (most recent call last):
  File "/tmp/script.py", line 13, in <module>
    pipe.write(b"4")
ValueError: write to closed file

That means that the first input is sent correctly, and the read was also done. I really can't find something that explains this behaviour; can someone help me understand? Thanks in advance

[EDIT]: since there are many points to clarify, I added this edit. I'm training on exploitation of a buffer overflow vuln using the rop technique and I'm writing a python script to achieve that. To exploit this vuln, because of ASLR, I need to discover the libc address and make the program restart without terminating. Since the script will be executed on a target machine, I don't know which libraries will be available, so I'm going to use subprocess because it's built into python. Without going into details, the attack sends a sequence of bytes to the first scanf, aimed at leaking the libc base address and restarting the program; then a second payload is sent to obtain a shell with which I will communicate in interactive mode. That's why:

- I can only use built-in libraries
- I have to send bytes and cannot append an ending \n: my payload would not be aligned or may lead to failures
- I need to keep stdin open
- I cannot change the C-code
Change these: Send a separator between the numbers read by the C program. scanf(3) accepts any non-digit byte as separator. For easiest buffering, send a newline (e.g. .write(b'42\n')) from Python. Without a separator, scanf(3) will wait for more digits indefinitely. After each write (both in C and Python), flush the output. This works for me: #include <stdio.h> int main(int argc, char *argv[]) { int n; while (1){ scanf("%d", &n); printf("%d\n", n); fflush(stdout); /* I've added this line only. */ } return 0; } import subprocess p = subprocess.Popen( ('./a.out',), stdin=subprocess.PIPE, stdout=subprocess.PIPE) try: print('A'); p.stdin.write(b'42 '); p.stdin.flush() print('B'); print(repr(p.stdout.readline())); print('C'); p.stdin.write(b'43\n'); p.stdin.flush() print('D'); print(repr(p.stdout.readline())); finally: print('E'); print(p.kill()) The reason why your original C program works when run interactively within the terminal window is that in C the output is automatically flushed when a newline (\n) is written to the terminal. Thus printf("%d\n", n); does an implicit fflush(stdout); in the end. The reason why your original C program doesn't work when run from Python with subprocess is that it writes its output to a pipe (rather than to a terminal), and there is no autoflush to a pipe. What happens is that the Python program is waiting for bytes, and the C program doesn't write those bytes to the pipe, but it is waiting for more bytes (in the next scanf), so both programs are waiting for the other indefinitely. (However, there would be a partial autoflush after a few KiB (typically 8192 bytes) of output. But a single decimal number is too short to trigger that.) If it's not possible to change the C program, then you should use a terminal device instead of a pipe for communication between the C and the Python program. The pty Python module can create the terminal device, this works for me with your original C program: import os, pty, subprocess master_fd, slave_fd = pty.openpty() p = subprocess.Popen( ('./a.out',), stdin=slave_fd, stdout=slave_fd, preexec_fn=lambda: os.close(master_fd)) try: os.close(slave_fd) master = os.fdopen(master_fd, 'rb+', buffering=0) print('A'); master.write(b'42\n'); master.flush() print('B'); print(repr(master.readline())); print('C'); master.write(b'43\n'); master.flush() print('D'); print(repr(master.readline())); finally: print('E'); print(p.kill()) If you don't want to send newlines from Python, here is a solution without them, it works for me: import os, pty, subprocess, termios master_fd, slave_fd = pty.openpty() ts = termios.tcgetattr(master_fd) ts[3] &= ~(termios.ICANON | termios.ECHO) termios.tcsetattr(master_fd, termios.TCSANOW, ts) p = subprocess.Popen( ('./a.out',), stdin=slave_fd, stdout=slave_fd, preexec_fn=lambda: os.close(master_fd)) try: os.close(slave_fd) master = os.fdopen(master_fd, 'rb+', buffering=0) print('A'); master.write(b'42 '); master.flush() print('B'); print(repr(master.readline())); print('C'); master.write(b'43\t'); master.flush() print('D'); print(repr(master.readline())); finally: print('E'); print(p.kill())
4
2
77,807,818
2024-1-12
https://stackoverflow.com/questions/77807818/trouble-updating-a-rectangle-using-a-custom-property-in-pygame
I am debugging one of my programs where I am trying to assign a custom variable to a rectangle to update it's position. Here is my code: import os ; os.environ['PYGAME_HIDE_SUPPORT_PROMPT']='False' import pygame, random pygame.init() display = pygame.display.set_mode((401, 401)) display.fill("white") ; pygame.display.flip() class MyRect(pygame.Rect): def __setattr__(self, attr, value): # Sets a custom attribute to a rectangle super().__setattr__(attr, value) if attr == 'xValue': pygame.Rect.move(self, (value-self.centerx), 0) # Move the rectangle according to the xValue def contains(self, coords): return self.collidepoint(coords) square = MyRect(175, 175, 50, 50) pygame.draw.rect(display, 'steelBlue', square) # Draw the rectangle to the screen square.xValue = 200 while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() ; exit() elif event.type == pygame.MOUSEBUTTONDOWN: if square.contains(pygame.mouse.get_pos()): square.xValue = random.randint(0, display.get_width()) # Update the square.xValue property pygame.display.flip() # Update the screen When I execute the program, the square.xValue property is changing, but the position of the square on the screen is not. What did I miss?
You have to redraw the scene. You have to redraw the rectangle after it has been changed. Clear the display and draw the rectangle in the application loop: while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() ; exit() elif event.type == pygame.MOUSEBUTTONDOWN: if square.contains(pygame.mouse.get_pos()): square.xValue = random.randint(0, display.get_width()) # Update the square.xValue property display.fill("white"); pygame.draw.rect(display, 'steelBlue', square) pygame.display.flip() # Update the screen
2
1
77,806,510
2024-1-12
https://stackoverflow.com/questions/77806510/can-i-use-pydantic-to-deserialize-a-union-type-without-creating-another-basemod
I want to deserialize some data into a union type like so: from pydantic import BaseModel, Field from typing import Annotated, Union, Literal class Foo(BaseModel): type: Literal["foo"] = "foo" x: int class Bar(BaseModel): type: Literal["bar"] = "bar" x: float Baz = Annotated[Union[Foo, Bar], Field(discriminator="type")] d = { "type": "bar", "x": 10.1 } # Deserialize Baz here This could be done by making another base model and deserializing it there, but I was wondering if there was a way to use a function within Pydantic to do it without this? class Config(BaseModel): baz: Baz model = Config.model_validate({"baz": d}) print(model) # Outputs: baz=Bar(type='bar', x=10.1)
You can use pydantic.TypeAdapter from pydantic import TypeAdapter model = TypeAdapter(Baz).validate_python(d)
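A quick round-trip check using the Foo/Bar/Baz definitions from the question (pydantic v2):

from pydantic import TypeAdapter

adapter = TypeAdapter(Baz)
model = adapter.validate_python({"type": "bar", "x": 10.1})
print(type(model).__name__)        # Bar, picked via the discriminator
print(adapter.dump_python(model))  # {'type': 'bar', 'x': 10.1}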
7
6
77,802,979
2024-1-11
https://stackoverflow.com/questions/77802979/how-to-draw-a-horizontal-line-at-y-0-in-an-altair-line-chart
I'm creating a line chart using Altair. I have a DataFrame where my y-values move up and down around 0, and I'd like to add a phat line to mark y=0. Sounds easy enough, so I tried this: # Add a horizontal line at y=0 to clearly distinguish between positive and negative values. y_zero = alt.Chart().mark_rule().encode( y=alt.value(0), color=alt.value('black'), size=alt.value(10), ) This indeed draws a horizontal line, but it gets drawn at the top very top of my chart. It seems Altair uses a coordinate system where (0,0) is at the top-left corner. How do I move my line to my data's y=0 position? Thanks!
Using alt.datum instead of alt.value will draw the line at the data value 0, instead of at a pixel value of 0. You can read more and see examples in the docs here: https://altair-viz.github.io/user_guide/encodings/index.html#datum-and-value
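For reference, a minimal sketch with a made-up DataFrame (not the asker's data) showing the rule anchored at the data value y=0:

import altair as alt
import pandas as pd

df = pd.DataFrame({"x": range(10), "y": [v - 5 for v in range(10)]})

line = alt.Chart(df).mark_line().encode(x="x", y="y")

# alt.datum(0) pins the rule to the data value 0 on the y scale,
# whereas alt.value(0) would pin it to pixel 0 (the top of the chart)
y_zero = alt.Chart(df).mark_rule(color="black", size=10).encode(y=alt.datum(0))

chart = line + y_zero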
3
2
77,802,627
2024-1-11
https://stackoverflow.com/questions/77802627/is-it-really-good-idea-to-use-asyncio-with-files
I've been reading and watching videos a lot about asyncio in python, but there's something I can't wrap my head around. I understand that asyncio is great when dealing with databases or http requests, because database management systems and http servers can handle multiple rapid requests, but... how can it be good for files, writing in particular? Let's discuss this case with this code that is running on a server as a FastAPI web app.

@app.get('/end_point')
async def function():
    do_something()
    await write_to_a_file()
    do_more_stuff_and_return()

Now, as a server, it may get like 50, 100, or even more rapid requests that will trigger this function. Yes, await will pause the next instruction inside function(), but if there are multiple requests coming, this function will start so many times and execute multiple write attempts at once to the file, which may corrupt the data inside. That's why I can't really understand how it is recommended to use asyncio for files. Am I getting something wrong here? Is there a way to instruct the event_loop to give higher priority to that function event, or some other way to avoid this?
What I'm doing now is delaying writing files until I have X amount of data to write [write in bursts], but I still think that any function that interacts with files shouldn't be an async function. But then, I don't know how I am going to deal with tons of requests....
Your thoughts will be highly appreciated as I'm having a hard time with this. thanks, have a good day.
I'm trying to write data to file, but the server will surely receive bursts of requests. So in a way I don't know how to handle this situation, and I fear the corruption of data.
I'm trying to write data to file, but the server will surely receive bursts of requests. So in a way I don't know how to handle this situation, and I fear the corruption of data.

Yes, corruption could happen from time to time, especially if you use aiofiles. If I'm not wrong, it uses threads to achieve this. I think you have two options here:

- Using a lock and background tasks: in FastAPI (since you mentioned it) you can do the writing in the background (after sending the response to the user), and there, acquire a lock object for writing to the file.
- Queue the messages for writing: instead of directly writing to the file, put the messages on a queue and let another task/process do the writing part.

In both cases the user won't wait for the writing to finish and gets the response immediately, while the writes themselves happen one at a time.
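A minimal sketch of the first option; the app object, file path and payload are assumptions, not the asker's real code:

import asyncio
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()
write_lock = asyncio.Lock()      # shared by all requests in this process
LOG_PATH = "data.log"            # hypothetical file

async def append_line(line: str) -> None:
    async with write_lock:       # only one writer at a time, so no interleaving
        # plain blocking I/O kept short; it could also be moved to a thread
        with open(LOG_PATH, "a") as f:
            f.write(line + "\n")

@app.get("/end_point")
async def endpoint(background_tasks: BackgroundTasks):
    background_tasks.add_task(append_line, "some data")
    return {"status": "accepted"}  # the caller does not wait for the write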
2
2
77,801,556
2024-1-11
https://stackoverflow.com/questions/77801556/how-can-i-create-a-list-of-range-of-numbers-as-a-column-of-dataframe
My DataFrame is:

import pandas as pd
df = pd.DataFrame(
    {
        'a': [20, 100],
        'b': [2, 3],
        'dir': ['long', 'short']
    }
)

Expected output: Creating column x:

     a  b    dir             x
0   20  2   long  [22, 24, 26]
1  100  3  short  [97, 94, 91]

Steps: x is a list of length 3. a is the starting point of x and b is the step by which it increases/decreases, depending on dir. If df.dir == 'long', x ascends; otherwise it descends.
My attempt, based on this answer:

df['x'] = np.arange(0, 3) * df.b + df.a

Which does not produce the expected output.
If you want a vectorial approach with numpy # create 1 for long, else -1 d = np.where(df['dir'].eq('long'), 1, -1)[:,None] # convert a and b to numpy as a column vector a = df['a'].to_numpy()[:,None] b = df['b'].to_numpy()[:,None] # combine N = 3 x = np.arange(1, N+1) df['x'] = list(a + d*b*x) Output: a b dir x 0 20 2 long [22, 24, 26] 1 100 3 short [97, 94, 91]
3
2
77,800,927
2024-1-11
https://stackoverflow.com/questions/77800927/how-to-combine-fixture-with-parametrize
I have comfortably been using @pytest.mark.parametrize() in many tests now. I have not, however, succeeded in combining it with @pytest.fixture() and I cannot find a question answering this issue. This example applies @pytest.fixture() succesfully (copied from another question that I cannot find anymore): import pytest def foo(data, key): return data[key] @pytest.fixture() def dict_used_by_many_tests(): return {"name": "Dionysia", "age": 28, "location": "Athens"} def test_foo_one(dict_used_by_many_tests): actual = foo(dict_used_by_many_tests, "name") expected = "Dionysia" assert actual == expected Now, in practice I want to use @pytest.mark.parametrize(). @pytest.fixture() def dict_used_by_many_tests(): return {"name": "Dionysia", "age": 28, "location": "Athens"} @pytest.mark.parametrize( "dict_used_by_many_tests", [ (dict_used_by_many_tests), ], ) def test_foo_one(dict_used_by_many_tests): actual = foo(dict_used_by_many_tests, "name") expected = "Dionysia" assert actual == expected This results in the error: TypeError: 'function' object is not subscriptable. I tried calling dict_used_by_many_tests() to work with its return value instead of the function object. This resulted in a Fixtures are not meant to be called directly error, however.
One method is request.getfixturevalue: import pytest @pytest.fixture() def dict_used_by_many_tests(): return { "name": "Dionysia", "age": 28, "location": "Athens", } @pytest.fixture() def dict_used_by_some_tests(): return { "name": "Medusa, maybe?", "age": 2 << 32, "location": "Underworld?", } @pytest.mark.parametrize( "fixture_name", [ "dict_used_by_many_tests", "dict_used_by_some_tests", ], ) def test_foo_one(request, fixture_name): d = request.getfixturevalue(fixture_name) # ... Another is indirect parametrization: import pytest @pytest.fixture() def character_dict(request): if request.param == 1: return {"name": "Dionysia", "age": 28, "location": "Athens"} elif request.param == 2: return {"name": "Medusa", "age": 2 << 32, "location": "Underworld"} raise NotImplementedError(f"No such dict: {request.param!r}") @pytest.mark.parametrize( "character_dict", [1, 2], indirect=True, ) def test_foo_one(character_dict): assert isinstance(character_dict, dict)
2
4
77,796,639
2024-1-10
https://stackoverflow.com/questions/77796639/combining-two-dataframes-in-which-the-order-of-entries-dont-match
I have 2 dataframes called df and df2; a small example of both is shown below. I want to join the two into a combined dataframe by matching the 'formula' column of df2 with the 'filename' column of df, since the order of entries in the 2 dataframes is not the same. An example of the final ideal dataframe is also provided below.

df = pd.DataFrame({'filename': ['Fe12Co4BN.cif', 'Fe16N2.cif', 'Fe15CoBN.cif'],
                   'jml_bp_mult_atom_rad': [170.11, 154.73, 172.43],
                   'jml_hfus_add_bp': [45.13, 41.90, 47.23]})

df2 = pd.DataFrame({'formula': ['Fe12Co4BN','Fe15CoBN','Fe16N2'],
                    'nsites': [8, 8, 8],
                    'number': [139, 139, 140]})

The combined dataframe should look like this:

df3 = pd.DataFrame({'formula': ['Fe12Co4BN','Fe15CoBN','Fe16N2'],
                    'jml_bp_mult_atom_rad': [170.11, 172.43, 154.73],
                    'jml_hfus_add_bp': [45.13, 47.23, 41.90],
                    'nsites': [8, 8, 8],
                    'number': [139, 139, 140]})

The order of the columns matters to me, so the 'formula' column should be first, followed by the columns in df and then df2. Thanks in advance.
Create a temporary column in first dataframe df and use merge: out = df2.merge( df.assign(formula=df["filename"].str.split(".").str[0]), on="formula" ).drop(columns="filename") print(out) Prints: formula nsites number jml_bp_mult_atom_rad jml_hfus_add_bp 0 Fe12Co4BN 8 139 170.11 45.13 1 Fe15CoBN 8 139 172.43 47.23 2 Fe16N2 8 140 154.73 41.90
2
3
77,794,386
2024-1-10
https://stackoverflow.com/questions/77794386/compute-the-max-sum-circular-area
I have an n by n matrix of integers and I want to find the circular area, with origin at the top left corner, with maximum sum. Consider the following grid with a circle imposed on it. This is made with: import matplotlib.pyplot as plt from matplotlib.patches import Circle import numpy as np plt.yticks(np.arange(0, 10.01, 1)) plt.xticks(np.arange(0, 10.01, 1)) plt.xlim(0,9) plt.ylim(0,9) plt.gca().invert_yaxis() # Set aspect ratio to be equal plt.gca().set_aspect('equal', adjustable='box') plt.grid() np.random.seed(40) square = np.empty((10, 10), dtype=np.int_) for x in np.arange(0, 10, 1): for y in np.arange(0, 10, 1): plt.scatter(x, y, color='blue', s=2, zorder=2, clip_on=False) for x in np.arange(0, 10, 1): for y in np.arange(0, 10, 1): value = np.random.randint(-3, 4) square[int(x), int(y)] = value plt.text(x-0.2, y-0.2, str(value), ha='center', va='center', fontsize=8, color='black') r1 = 3 circle1 = Circle((0, 0), r1, color="blue", alpha=0.5, ec='k', lw=1) plt.gca().add_patch(circle1) In this case the matrix is: [[ 3 0 2 -3 -3 -1 -2 1 -1 0] [-1 0 0 0 -2 -3 -2 2 -2 -3] [ 1 3 3 1 1 -3 -1 -1 3 0] [ 0 0 -2 0 2 1 2 2 -1 -1] [-1 0 3 1 1 3 -2 0 0 -1] [-1 -1 1 2 -3 -2 1 -2 0 0] [-3 2 2 3 -2 0 -1 -1 3 -2] [-2 0 2 1 2 2 1 -1 -3 -3] [-2 -2 1 -3 -2 -1 3 2 3 -3] [ 2 3 1 -1 0 1 -1 3 -2 -1]] When the circle has radius 3, there are 11 points in the grid within the circle. As the radius increases, more and more points fall into the circle. I am looking for a fast way to find a radius which maximizes the sum of the integers of grid points within it. The radius will not be unique so any one that maximizes the sum is ok. I will ultimately want to do this with much larger matrices. This question is related but I am not sure how to extend it to my new question.
Same O(n^2 log n) method as others: accumulate weight sums by distance, then compute cumulative sums. But in pure python which should be trivial to translate to C++. Also PyPy may run surprisingly fast. from collections import Counter g = [[-3, 2, 2], [ 2, 0, 3], [-1, -2, 0]] n = 3 sum_dist = Counter() for i in range(n): for j in range(n): dist = i**2 + j**2 sum_dist[dist] += g[i][j] sorted_dists = sorted(sum_dist.keys()) for i in range(1, len(sorted_dists)): sum_dist[sorted_dists[i]] += sum_dist[sorted_dists[i-1]] print(sum_dist) print(max(sum_dist, key=sum_dist.get))
2
1
77,795,362
2024-1-10
https://stackoverflow.com/questions/77795362/how-to-migrate-from-typing-typealias-to-type-statements
I have created a type alias for defining member variables in a dataclass through type annotations: >>> from typing import TypeAlias >>> Number: TypeAlias = int | float >>> n, x = 1, 2.5 >>> isinstance(n, Number) True >>> isinstance(x, Number) True According to the docs, this syntax has been deprecated: Deprecated since version 3.12: TypeAlias is deprecated in favor of the type statement, which creates instances of TypeAliasType and which natively supports forward references. Note that while TypeAlias and TypeAliasType serve similar purposes and have similar names, they are distinct and the latter is not the type of the former. Removal of TypeAlias is not currently planned, but users are encouraged to migrate to type statements. I tried to use the new syntax, but I got an error: >>> type Number = int | float >>> isinstance(n, Number) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union How should I go about this?
If you need to use isinstance with the new syntax, you need to access the type alias's value (and force its evaluation) explicitly, through the __value__ attribute: isinstance(n, Number.__value__) Of course, this will only work if the value of the alias is compatible with isinstance, which rules out stuff like NewType or generics, or... most of the things you'd want to create a type alias for, but int | float should be fine. The old syntax had these limits too.
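Putting it together with the question's variables (requires Python 3.12+ for the type statement):

type Number = int | float

n, x = 1, 2.5
print(isinstance(n, Number.__value__))  # True
print(isinstance(x, Number.__value__))  # True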
2
5
77,794,522
2024-1-10
https://stackoverflow.com/questions/77794522/how-to-reformat-a-big-csv-file-putting-a-newline-every-9-delimiter
I have a big csv file of data delimited by ";" stored in a single huge row. The first 9 fields separated by a semicolon are the column names of the entire set. With Python, how could I reformat this csv file, rewriting it and adding a newline every 9 fields for a correct import in Excel or Calc? Should I import csv or pandas? Thanks a lot in advance.
Here's a quick regex hack that could do the job. This assumes that there are no quoted semicolons in the values: import re csv = "a;b;c;d;e;f;g;h;i;1;2;3;4;5;6;7;8;9;1;2;3;4;5;6;7;8;9;" wrapped = re.sub(r"((?:[^;]*;){9})", r"\1\n", csv) print(wrapped) This outputs: a;b;c;d;e;f;g;h;i; 1;2;3;4;5;6;7;8;9; 1;2;3;4;5;6;7;8;9;
2
2
77,792,596
2024-1-10
https://stackoverflow.com/questions/77792596/subsequent-if-statement-shorter-ways-to-assign-values-to-parameters-based-on-an
I want to create an array in which the values depends on one variables and two coefficients. The coefficient values depend on the variable value as well. A simple example: x_intervals = [2, 10] C1_values = [0.1, 0.5, -0.2] C2_values = [0.4,0.6, -0.8] Here I want C1 and C2: to be equal to the first array element if x<2 (so C1 = 0.1 and C2 = 0.4) to be equal to the second array element if x>2 and x>10 (so C1 = 0.5 and C2 = 0.6) to be equal to the third element if x>10 (so C1 = -0.2 and C2 = -0.8) Then I want to use these coefficients to find the values of y=C1*x^C2 for any number of x contained in a list. Currently I do this using subsequent if functions as: x = [0.1,1,1.5,2.2,5,9,12,20,30,60,70,100] y = [] for x_ in x: if x_<=x_intervals[0]: C1 = C1_values[0] C2 = C2_values[0] elif x_<=x_intervals[1]: C1 = C1_values[1] C2 = C2_values[1] else: C1 = C1_values[2] C2 = C2_values[2] y.append(C1*x_ **C2) This works but it is extremely dirty and manual, and if I increase the number of intervals it becomes extremely long to code. I am sure that there is a smarter and cleaner way to do this. Any idea? Thanks in advance (Using Python, but this is more of a logic/algorithm question) #################################################################### EDIT: Thank you all for your solutions. They all worked and are useful I found the numpy solution using searchsorted the most appropriate in my case. The code looks like this now (where Coeff_1 and Coeff_2 are two pandas dataframe where I store my coefficients C1 and C2): def calculate_curve_value(Re=10, df=pd.DataFrame()): MaxRe = df.loc["MaxRe"].values C1 = df.loc["C1"].values C2 = df.loc["C2"].values idx = np.searchsorted(MaxRe, Re) value = C1[idx]*Re**(C2[idx]) return value Re = [0.1,1,1.5,2.2,5,9,12,20,30,60,70,100] N = [] f = [] for i in Re: N.append(calculate_curve_value(Re=i, df=Coeff_1)) f.append(calculate_curve_value(Re=i, df=Coeff_2))
It would make a lot of sense to use numpy here. What you want is a searchsorted on x_intervals, then a vectorial operation: import numpy as np x_intervals = np.array([2, 10]) C1_values = np.array([0.1, 0.5, -0.2]) C2_values = np.array([0.4,0.6, -0.8]) x = [0.1,1,1.5,2.2,5,9,12,20,30,60,70,100] idx = np.searchsorted(x_intervals, x) y = C1_values[idx]*x**C2_values[idx] Output: array([ 0.03981072, 0.1 , 0.1176079 , 0.80246041, 1.3132639 , 1.86859641, -0.02739586, -0.01820564, -0.01316234, -0.00755978, -0.00668269, -0.00502377]) If you really want to use pure python, go with bisect: from bisect import bisect_left x_intervals = [2, 10] C1_values = [0.1, 0.5, -0.2] C2_values = [0.4,0.6, -0.8] x = [0.1,1,1.5,2.2,5,9,12,20,30,60,70,100] y = [] for x_ in x: idx = bisect_left(x_intervals, x_) y.append(C1_values[idx]*x_**C2_values[idx]) Output: [0.03981071705534973, 0.1, 0.11760790225246737, 0.8024604053351734, 1.3132639022018835, 1.8685964094232759, -0.027395863825287095, -0.0182056420302608, -0.013162336572232132, -0.007559777184220181, -0.006682693821222914, -0.005023772863019159]
2
1
77,792,098
2024-1-10
https://stackoverflow.com/questions/77792098/average-day-of-a-month-on-minute-precision
I have data with datetime index using minute resolution. I want to see what is the average 'profile' of one day in a month using minute resolution. The dataset format is like this: Power 2019-01-01 11:43:01+02:00 9.223261 2019-01-01 11:44:01+02:00 14.304057 2019-01-01 11:45:01+02:00 28.678970 2019-01-01 11:46:01+02:00 35.143512 2019-01-01 11:47:01+02:00 24.431278 ... ... 2019-12-31 15:05:14+02:00 -0.075000 2019-12-31 15:06:14+02:00 -0.075000 2019-12-31 15:07:14+02:00 -0.075000 2019-12-31 15:08:14+02:00 -0.075000 2019-12-31 15:09:14+02:00 -0.075000 To plot the average day of a month power profile on hourly basis I did the following plt.plot(df_jul.groupby(df_jul.index.hour)[['Power']].mean(), label=('July')) where df_jul is a subset of the data above, including only data from July. Power 2019-07-01 05:28:15+03:00 2.561204 2019-07-01 05:29:15+03:00 2.749837 2019-07-01 05:30:15+03:00 2.963823 2019-07-01 05:31:15+03:00 3.190177 2019-07-01 05:32:15+03:00 3.374277 ... ... 2019-07-31 21:12:02+03:00 2.311575 2019-07-31 21:13:02+03:00 2.310808 2019-07-31 21:14:02+03:00 2.415743 2019-07-31 21:15:02+03:00 2.485820 2019-07-31 21:16:02+03:00 1.874091 The resulting figure is like this: So, what is the best method to get the same profile plot as in the figure above, but using minute resolution? I tried to group it by minutes but that results in an average hour of the month. I also think I could just iterate through the dataframe and do the average calculations, but I feel like there is an easier way that I am missing. Here is the correct result plot
You should use resample with the desired frequency. If you want to plot an average day of the month, you can transform all days to the end of month (with pd.offsets.MonthEnd), then resample: (df_jul.set_axis(df_jul.index + pd.offsets.MonthEnd(0)) .resample('2min')['Power'].mean() .plot(marker='o') ) Or with a variant of your original groupby: df_jul.groupby(df_jul.index.floor('1min').time)['Power'].mean().plot() If you want to plot the full month: df_jul.resample('1min')['Power'].mean().plot(marker='o') Output:
2
4
77,790,923
2024-1-10
https://stackoverflow.com/questions/77790923/should-i-write-jupyter-server-or-jupyter-server-when-using-pip-install
When pip installing the jupyter server package, it seems that pip install jupyter_server and pip install jupyter-server do the same thing. Is that right? Why are the package names with underscore and hyphen both OK? The same goes to jupyter_client and jupyter-client.
Pip replaces underscore with dash by default, so you always install the same package (jupyter-server).
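For reference, a small sketch of the name normalization rule (PEP 503) that makes the two spellings equivalent: runs of -, _ and . collapse to a single - and the name is lowercased.

import re

def normalize(name: str) -> str:
    # PEP 503 normalization, as used by pip and PyPI
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize("jupyter_server"))  # jupyter-server
print(normalize("jupyter-server"))  # jupyter-server
print(normalize("Jupyter.Client"))  # jupyter-client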
2
2
77,790,846
2024-1-10
https://stackoverflow.com/questions/77790846/finding-the-maximum-value-between-two-columns-where-one-of-them-is-shifted
My DataFrame is: import pandas as pd df = pd.DataFrame( { 'a': [20, 9, 31, 40], 'b': [1, 10, 17, 30], } ) Expected output: Creating column c a b c 0 20 1 20 1 9 10 20 2 31 17 17 3 40 30 31 Steps: c is the maximum value between df.b and df.a.shift(1).bfill(). My attempt: df['temp'] = df.a.shift(1).bfill() df['c'] = df[['temp', 'b']].max(axis=1) Is it the cleanest way / best approach?
If you don't want the temporary column, then you can replace values on the shifted column using where() in a one-liner.

df['c'] = df['a'].shift(1).bfill().where(lambda x: x>df['b'], df['b'])

This is similar to the combine() method posted in the other answer, but this one does a vectorized comparison while combine() does it element-wise, so this should be much faster as the length of the dataframe increases.
2
1
77,790,217
2024-1-9
https://stackoverflow.com/questions/77790217/filter-for-specific-sequences-involving-multiple-columns-and-surrounding-rows
I have data that looks like this: It's standard financial price data (open, high, low, close). In addition, I run some calculations. 'major_check' occasionally returns 1 or 2 (which 'minor_check' will then also return). 'minor_check' also returns 1 or 2, but more frequently. the rest is filled with 0 or NaN. I'd like to test for specific patterns: Whenever there is a 2 in 'major_check', I want to see if I can find a 21212 pattern in 'minor_check', with 21 preceding the central 2 and 12 following it. If there is a 1 in 'major_check', I'd like to find a 12121 pattern in 'minor_check' I highlighted a 21212 pattern in the screenshot to give a better idea on what I am looking for. Once the 21212 or 12121 patterns are found, I'll check if specific rules applied on open/high/low/close (corresponding to the 5 rows constituting the pattern) are met or not. Of course, one could naively iterate through the dataframe but this doesn't sound like the Pythonic way to do it. I didn't manage to find a good way to do this, since a 21212 pattern can have some 0s inside it
As this answer by Timeless looked surprisingly complex, here is a quite simpler one. Method: Temporarily remove rows that are empty of test results (effectively skipping NaN and None), Search for patterns either with numpy.where and pandas.shift to check for patterns row-wise (faster), or preferrably with pandas.rolling -probably faster, more compact, but still readable. Finally report the findings into the original df. You haven't specified how to flag the findings. Here they get marked as a True in two new columns, one for each pattern, appended to the original dataframe, for whatever use you would plan for them. They are called "hit1" and "hit2". Input data No text input data in your post, so until then I came up with my own. It is designed to produce one hit for each pattern: "minor test" 12121 + "major test" 1 at index 14 "minor test" 21212 + "major test" 2 at index 12 import pandas as pd import numpy as np # Start dataframe df = pd.DataFrame({'minor': [None,2, 1, None,None,2, None,2, None,None,None, 1, 2, None,1, None,None,2, None,1, 2, None,None], 'major': [None,None,None,None,None,2, None,None,None,None,None, None,2, None,1, None,None,None, None,None,None,None,None]}) df minor major 0 NaN NaN 1 2.0 NaN 2 1.0 NaN 3 NaN NaN 4 NaN NaN 5 2.0 2.0 6 NaN NaN 7 2.0 NaN 8 NaN NaN 9 NaN NaN 10 NaN NaN 11 1.0 NaN 12 2.0 2.0 13 NaN NaN 14 1.0 1.0 15 NaN NaN 16 NaN NaN 17 2.0 NaN 18 NaN NaN 19 1.0 NaN 20 2.0 NaN 21 NaN NaN 22 NaN NaN Locate hits Skip rows without test results # Remove rows without minor test result df1 = df.dropna(axis=0,subset='minor').copy() # No reset_index because we'll use it to report back to df. # Patterns of 'minor test' minor_pat1 = [1,2,1,2,1] minor_pat2 = [2,1,2,1,2] Pattern search: alternative 1: .shift()and np.where # Deploy shift columns, looking 2 values backwards and 2 forward for i in range(-2,3): # i in [-2,-1, 0, 1, 2] df1[i] = df1['minor'].shift(i) # create a column named i # Test for both patterns and major test value df1['hit1'] = np.where(df1['major']==1, # case 12121 df1[list(range(-2,3))].eq(minor_pat1).all(axis=1), False) df1['hit2'] = np.where(df1['major']==2, # case 21212 df1[list(range(-2,3))].eq(minor_pat2).all(axis=1), False) df1 # Temporary dataframe with findings located minor major -2 -1 0 1 2 hit1 hit2 1 2.0 NaN 2.0 1.0 2.0 NaN NaN False False 2 1.0 NaN 2.0 2.0 1.0 2.0 NaN False False 5 2.0 2.0 1.0 2.0 2.0 1.0 2.0 False False 7 2.0 NaN 2.0 1.0 2.0 2.0 1.0 False False 11 1.0 NaN 1.0 2.0 1.0 2.0 2.0 False False 12 2.0 2.0 2.0 1.0 2.0 1.0 2.0 False True 14 1.0 1.0 1.0 2.0 1.0 2.0 1.0 True False 17 2.0 NaN 2.0 1.0 2.0 1.0 2.0 False False 19 1.0 NaN NaN 2.0 1.0 2.0 1.0 False False 20 2.0 NaN NaN NaN 2.0 1.0 2.0 False False alternative 2: one-liner with rolling, preferred: .rolling() was designed for that purpose exactly. Just too bad they haven't implemented .rolling().eq() yet (list of window functions). This is why we must resort to apply .eq() from inside a lambda function. 
df1['hit1'] = (df1['major']==1) & (df1['minor'] .rolling(window=5, center=True) .apply(lambda x : x.eq(minor_pat1).all())) df1['hit2'] = (df1['major']==2) & (df1['minor'] .rolling(window=5, center=True) .apply(lambda x : x.eq(minor_pat2).all())) minor major hit1 hit2 1 2.0 NaN False False 2 1.0 NaN False False 5 2.0 2.0 False False 7 2.0 NaN False False 11 1.0 NaN False False 12 2.0 2.0 False True 14 1.0 1.0 True False 17 2.0 NaN False False 19 1.0 NaN False False 20 2.0 NaN False False Finally report back to original df df.loc[df1.index,'hit1'] = df1['hit1'] df.loc[df1.index,'hit2'] = df1['hit2'] df minor major hit1 hit2 0 NaN NaN NaN NaN 1 2.0 NaN False False 2 1.0 NaN False False 3 NaN NaN NaN NaN 4 NaN NaN NaN NaN 5 2.0 2.0 False False 6 NaN NaN NaN NaN 7 2.0 NaN False False 8 NaN NaN NaN NaN 9 NaN NaN NaN NaN 10 NaN NaN NaN NaN 11 1.0 NaN False False 12 2.0 2.0 False True 13 NaN NaN NaN NaN 14 1.0 1.0 True False 15 NaN NaN NaN NaN 16 NaN NaN NaN NaN 17 2.0 NaN False False 18 NaN NaN NaN NaN 19 1.0 NaN False False 20 2.0 NaN False False 21 NaN NaN NaN NaN 22 NaN NaN NaN NaN
4
1
77,752,443
2024-1-3
https://stackoverflow.com/questions/77752443/abstractapp-call-missing-1-required-positional-argument-send-connexion
Recently upgraded from connexion version 2 to version 3. Based on the official documents at https://connexion.readthedocs.io/en/latest/index.html, a major change in architecture happened.
My problem: when using zappa to deploy a python lambda function in aws, this general error happens:

Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code.

In the CloudWatch logs I also see the general comment AbstractApp.__call__() missing 1 required positional argument: 'send'. Any idea?
Connexion 3 migrated from the WSGI to the ASGI interface, which does not seem to be supported by Zappa. You can either use an adapter middleware to run Connexion through WSGI (docs) or switch to a Zappa alternative like Mangum, which supports ASGI.
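For the Mangum route, here is a rough sketch; the AsyncApp and the spec file name are assumptions based on a typical Connexion 3 setup, and it is untested against the poster's deployment:

from connexion import AsyncApp
from mangum import Mangum

app = AsyncApp(__name__)
app.add_api("openapi.yaml")  # hypothetical spec file name

# Connexion 3 apps are ASGI callables, so Mangum can wrap one
# and expose it as an AWS Lambda handler.
handler = Mangum(app)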
3
1
77,781,734
2024-1-8
https://stackoverflow.com/questions/77781734/sagemaker-batch-transformer-with-my-own-pre-trained-model
I'm trying to run inference on demand for yolo-nas using sagemaker batch transformer. Using pre trained model with pre trained weights. But I am getting this error: python3: can't open file '//serve': [Errno 2] No such file or directory I have no idea what this '//serve' is. I have no ref or use of it at all, and can't find any docs about it Some more data about my case: Data are just images with single object in them (known as 'crop') seq-gil-transformer.py sits in s3://benny-test/indexer/yolo-nas/1/code /opt/ml/processing/input is not the right path. I know about it, I'm in the process of understanding which path I should use but the code crashes way before it... I have my own custom image as follows: FROM python:3.10-bullseye RUN apt update && \ apt install ffmpeg libsm6 libxext6 libgl1 -y && \ rm -rf /var/lib/apt/lists/* ADD req.txt req.txt RUN pip3 install -r req.txt ENTRYPOINT ["python3"] I have my own inference code known as 'seq-gil-transformer.py': (I know that I can improve it, for now trying to make the logic work & understand sagemaker and then I will improve the code) import super_gradients import torch from torch import nn import json from collections import OrderedDict from typing import List from pathlib import Path import PIL import numpy as np import time import cProfile import trace import io import psutil import numpy as np import matplotlib.pyplot as plt import sys import pandas as pd import argparse # getting images from local path def get_images(path, device): crops_path_list = Path(path).glob("*.png") images = [] for crop in crops_path_list: image = PIL.Image.open(str(crop)).convert("RGB") image = np.asarray(image) image = np.ascontiguousarray(image.transpose(2, 0, 1)) image = torch.from_numpy(np.array(image)).unsqueeze(0).to(dtype=torch.float32, device=device) images.append(image) return images # saving model def create_yolo_nas_indexer(device="cpu"): yolo_nas = super_gradients.training.models.get("yolo_nas_l", pretrained_weights="coco").to(device) torch.save(yolo_nas.backbone, f"yolo_nas_l_{device}.tar.gz") # loading model. # using our own module named 'modelwrapper', # you can see it in the end of this post def load_yolo_nas(device="cpu",output_layers=["stage2","stage3","context_module"]): yolo_nas = torch.load(f"yolo_nas_l_{device}.pth.tar").to(device) wraped_yolo_nas = ModelWrapper(yolo_nas,device=device).to(device) wraped_yolo_nas.add_output_layers(output_layers) return wraped_yolo_nas # making the inference using 'modelWrapper' # and the loaded yolo-nas super-gredients of # deci-ai model per image(crop) def primitive_c2v(crop:torch.Tensor, model)->list: output = model(crop)[0] return output if __name__ == "__main__": engine = "cpu" crops = get_images("/opt/ml/processing/input", engine) H = W = 64 C = 3 outputs = [] create_yolo_nas_indexer(engine) model = load_yolo_nas(engine) model.eval() list_o = [] for crop in crops: start_time = time.time() output = primitive_c2v(crop, model) output = [l.detach().numpy() for l in output] list_o.append(output) # printing a bit of the output just to be sure # everything went well. # in my data i got list of lists of numpy tensors. 
# each list in the grand list is of size of 4 for l in list_o: print(len(l)) And the sagemaker code itself is: import boto3 import sagemaker from sagemaker import get_execution_role sagemaker_session = sagemaker.Session() role = get_execution_role() bucket = 'benny-test' model = sagemaker.model.Model( source_dir="s3://benny-test/indexer/yolo-nas/1/code", entry_point="seq-gil-transformer.py", image_uri='*.dkr.ecr.eu-west-1.amazonaws.com/sagemaker:super-gredients-0.1', role=role, name="yolo-nas-cpu") transformer = model.transformer( instance_count=1, instance_type="ml.m5.4xlarge" ) transformer.transform( data="s3://benny-test/indexer/Raw/dummy_set/images" ) transformer.wait() Model wrapper ref: this code is not necessarily needed but I'm showing it here for reference and reproducibility :) def remove_all_hooks_recursive(model: nn.Module) -> None: for name, child in model.named_children(): if child is not None: if hasattr(child, "_forward_hooks"): child._forward_hooks = OrderedDict() elif hasattr(child, "_forward_pre_hooks"): child._forward_pre_hooks = OrderedDict() elif hasattr(child, "_backward_hooks"): child._backward_hooks = OrderedDict() remove_all_hooks_recursive(child) def add_all_modules_to_model_dict_recursive(model, module_dict, prefix=''): """Recursively adds all modules in a PyTorch model to a hierarchical dictionary.""" for name, module in model.named_children(): full_name = prefix + '.' + name if prefix else name full_name = full_name if full_name != "_model" else "" module_dict[full_name] = module if isinstance(module, nn.Module): add_all_modules_to_model_dict_recursive(module, module_dict, full_name) class StopModel(Exception): def __init__(self): super().__init__() def forward_hook(model_wrapper, layer_name, model=None): def hook(module, input, output): model_wrapper.selected_out[layer_name] = output if model is not None: _, code = model(output) model_wrapper.selected_out[f"code_{layer_name}"] = code if model_wrapper.stop_at_last_hook and layer_name == model_wrapper.last_layer: raise StopModel() return hook class ModelWrapper(nn.Module): def __init__(self, model, stop_at_last_hook=False, device=None): super().__init__() self.stop_at_last_hook = stop_at_last_hook self.model = model self.model.eval() self.output_layers = [] self.selected_out = OrderedDict() self.fhooks = [] self.layer_size_dict = {} self.layer_stride_dict = {} self.model_dict = self.add_all_modules_to_model_dict() self.device = device or torch.device('cuda' if torch.cuda.is_available() else 'cpu') self.last_layer = None @classmethod def from_cfg(cls, cfg_path: str): with open(cfg_path, "r") as f: cfg_dict = json.load(f) cls(**cfg_dict) def add_output_layers(self, output_layers: List[str]): self.last_layer = output_layers[-1] for output_layer in output_layers: if output_layer not in self.model_dict: raise ValueError(f"Model does not have layer: {output_layer}") self.output_layers = output_layers for layer_name, module in self.model_dict.items(): if layer_name in self.output_layers: self.fhooks.append(module.register_forward_hook(forward_hook(self, layer_name))) self.compute_output_layer_parameters() def compute_output_layer_parameters(self): random_input = torch.rand(1, 3, 64, 64) #TODO: CHANGE TO ZEROS and avoid seed usage random_input = random_input.to(self.device) self.forward(random_input) for layer_name, output_value in self.selected_out.items(): if isinstance(output_value, (list, tuple)): self.layer_size_dict[layer_name] = None self.layer_stride_dict[layer_name] = None else: 
self.layer_size_dict[layer_name] = output_value.shape[1] self.layer_stride_dict[layer_name] = int(64 / output_value.shape[2]) def print_all_modules(self, print_module_str=False): for layer_name, module in self.model_dict.items(): layer_txt = layer_name if print_module_str: layer_txt += f": {str(module)}" if layer_name in self.output_layers: layer_txt += " (SET AS AN OUTPUT LAYER)" print(layer_txt) def forward(self, x): # TODO: find a way to run the model only for the selected out if self.stop_at_last_hook: try: self.model(x) except Exception as e: if not isinstance(e, StopModel): raise e out = None else: out = self.model(x) return out, self.selected_out def inference(self, image, name): bbox_list = self.model.inference(image, name) return bbox_list, self.selected_out def add_all_modules_to_model_dict(self): model_dict = {} add_all_modules_to_model_dict_recursive(self.model, model_dict) return model_dict def remove_all_hooks(self): remove_all_hooks_recursive(self.model) self.selected_out = OrderedDict()
Sagemaker batch inference and endpoints work in the same way. They expect a web server with GET [ping] and POST [invocations] endpoints to start working. The thing is, when sagemaker runs, it runs a file named "serve", which was missing in my case. To be clear, as far as I understood, "my case" is when we don't use an estimator before the inference. You can see that I'm defining my own pre-trained model like that:

model = sagemaker.model.Model(
    image_uri='*.dkr.ecr.eu-west-1.amazonaws.com/sagemaker:yolo-nas-cpu-infra-0.1.28',
    role=role,
    name="yolo-nas-cpu-infra-v0-1-28-dt"+str(date_time_in_numbers),
)

transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.4xlarge",
)

transformer.transform(
    data="s3://benny-test/indexer/yolo-nas/sagemaker-bs/",
    content_type="application/json"
)

TL;DR: I used flask, nginx and gunicorn, with a serve file configuring and running nginx and gunicorn.
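For illustration, a minimal sketch of the container contract; the route names and port are SageMaker's convention, while the inference helper is a placeholder, not the poster's code:

from flask import Flask, Response, request

app = Flask(__name__)

def run_inference(payload: bytes) -> str:
    # placeholder: the real version would call the loaded yolo-nas model
    return '{"ok": true}'

@app.route("/ping", methods=["GET"])
def ping():
    # health check: SageMaker calls this to decide if the container is up
    return Response(status=200)

@app.route("/invocations", methods=["POST"])
def invocations():
    payload = request.get_data()
    result = run_inference(payload)
    return Response(result, status=200, mimetype="application/json")

if __name__ == "__main__":
    # in a real container this is usually started by the "serve" script,
    # e.g. through gunicorn behind nginx, listening on port 8080
    app.run(host="0.0.0.0", port=8080)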
2
0
77,763,904
2024-1-5
https://stackoverflow.com/questions/77763904/runtimeerror-event-loop-is-closed-when-using-unit-isolatedasynciotestcase-to
Consider this mcve: requirements.txt: fastapi httpx motor pydantic[email] python-bsonjs uvicorn==0.24.0 main.py: import asyncio import unittest from typing import Optional import motor.motor_asyncio from bson import ObjectId from fastapi import APIRouter, Body, FastAPI, HTTPException, Request, status from fastapi.testclient import TestClient from pydantic import BaseModel, ConfigDict, EmailStr, Field from pydantic.functional_validators import BeforeValidator from typing_extensions import Annotated # -------- Model -------- PyObjectId = Annotated[str, BeforeValidator(str)] class ItemModel(BaseModel): id: Optional[PyObjectId] = Field(alias="_id", default=None) name: str = Field(...) email: EmailStr = Field(...) model_config = ConfigDict( populate_by_name=True, arbitrary_types_allowed=True, json_schema_extra={ "example": {"name": "Jane Doe", "email": "[email protected]"} }, ) # -------- Router -------- mcve_router = APIRouter() @mcve_router.post( "", response_description="Add new item", response_model=ItemModel, status_code=status.HTTP_201_CREATED, response_model_by_alias=False, ) async def create_item(request: Request, item: ItemModel = Body(...)): db_collection = request.app.db_collection new_bar = await db_collection.insert_one( item.model_dump(by_alias=True, exclude=["id"]) ) created_bar = await db_collection.find_one({"_id": new_bar.inserted_id}) return created_bar @mcve_router.get( "/{id}", response_description="Get a single item", response_model=ItemModel, response_model_by_alias=False, ) async def show_item(request: Request, id: str): db_collection = request.app.db_collection if (item := await db_collection.find_one({"_id": ObjectId(id)})) is not None: return item raise HTTPException(status_code=404, detail=f"item {id} not found") if __name__ == "__main__": app = FastAPI() app.include_router(mcve_router, tags=["item"], prefix="/item") app.db_client = motor.motor_asyncio.AsyncIOMotorClient( "mongodb://127.0.0.1:27017/?readPreference=primary&appname=MongoDB%20Compass&ssl=false" ) app.db = app.db_client.mcve_db app.db_collection = app.db.get_collection("bars") class TestAsync(unittest.IsolatedAsyncioTestCase): async def asyncSetUp(self): self.client = TestClient(app) async def asyncTearDown(self): self.client.app.db_client.close() def run_async_test(self, coro): loop = asyncio.get_event_loop() return loop.run_until_complete(coro) def test_show_item(self): bar_data = {"name": "John Doe", "email": "[email protected]"} create_response = self.client.post("/item", json=bar_data) self.assertEqual(create_response.status_code, 201) created_item_id = create_response.json().get("id") self.assertIsNotNone(created_item_id) response = self.client.get(f"/item/{created_item_id}") self.assertEqual(response.status_code, 200) unittest.main() When I try to run it I'll get this crash: (venv) d:\mcve>python mcve.py E ====================================================================== ERROR: test_show_item (__main__.TestBarRoutesAsync.test_show_item) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\software\python\3.11.3-amd64\Lib\unittest\async_case.py", line 90, in _callTestMethod if self._callMaybeAsync(method) is not None: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\software\python\3.11.3-amd64\Lib\unittest\async_case.py", line 112, in _callMaybeAsync return self._asyncioRunner.run( ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\software\python\3.11.3-amd64\Lib\asyncio\runners.py", line 118, in run return self._loop.run_until_complete(task) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\software\python\3.11.3-amd64\Lib\asyncio\base_events.py", line 653, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "d:\mcve\mcve.py", line 87, in test_show_item response = self.client.get(f"/item/{created_item_id}") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\starlette\testclient.py", line 502, in get return super().get( ^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 1055, in get return self.request( ^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\starlette\testclient.py", line 468, in request return super().request( ^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 828, in request return self.send(request, auth=auth, follow_redirects=follow_redirects) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 915, in send response = self._send_handling_auth( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 943, in _send_handling_auth response = self._send_handling_redirects( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 980, in _send_handling_redirects response = self._send_single_request(request) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\httpx\_client.py", line 1016, in _send_single_request response = transport.handle_request(request) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\starlette\testclient.py", line 344, in handle_request raise exc File "D:\mcve\venv\Lib\site-packages\starlette\testclient.py", line 341, in handle_request portal.call(self.app, scope, receive, send) File "D:\mcve\venv\Lib\site-packages\anyio\from_thread.py", line 288, in call return cast(T_Retval, self.start_task_soon(func, *args).result()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\software\python\3.11.3-amd64\Lib\concurrent\futures\_base.py", line 456, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "D:\software\python\3.11.3-amd64\Lib\concurrent\futures\_base.py", line 401, in __get_result raise self._exception File "D:\mcve\venv\Lib\site-packages\anyio\from_thread.py", line 217, in _call_func retval = await retval_or_awaitable ^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\fastapi\applications.py", line 1054, in __call__ await super().__call__(scope, receive, send) File "D:\mcve\venv\Lib\site-packages\starlette\applications.py", line 116, in __call__ await self.middleware_stack(scope, receive, send) File "D:\mcve\venv\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__ raise exc File "D:\mcve\venv\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__ await self.app(scope, receive, _send) File "D:\mcve\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 62, in __call__ await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File "D:\mcve\venv\Lib\site-packages\starlette\_exception_handler.py", line 55, in wrapped_app raise exc File "D:\mcve\venv\Lib\site-packages\starlette\_exception_handler.py", line 44, in wrapped_app await app(scope, receive, sender) File "D:\mcve\venv\Lib\site-packages\starlette\routing.py", line 746, in __call__ await route.handle(scope, receive, send) File "D:\mcve\venv\Lib\site-packages\starlette\routing.py", line 288, in handle await self.app(scope, receive, send) File 
"D:\mcve\venv\Lib\site-packages\starlette\routing.py", line 75, in app await wrap_app_handling_exceptions(app, request)(scope, receive, send) File "D:\mcve\venv\Lib\site-packages\starlette\_exception_handler.py", line 55, in wrapped_app raise exc File "D:\mcve\venv\Lib\site-packages\starlette\_exception_handler.py", line 44, in wrapped_app await app(scope, receive, sender) File "D:\mcve\venv\Lib\site-packages\starlette\routing.py", line 70, in app response = await func(request) ^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\fastapi\routing.py", line 299, in app raise e File "D:\mcve\venv\Lib\site-packages\fastapi\routing.py", line 294, in app raw_response = await run_endpoint_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\fastapi\routing.py", line 191, in run_endpoint_function return await dependant.call(**values) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "d:\mcve\mcve.py", line 57, in show_item if (item := await db_collection.find_one({"_id": ObjectId(id)})) is not None: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\motor\metaprogramming.py", line 75, in method return framework.run_on_executor( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\mcve\venv\Lib\site-packages\motor\frameworks\asyncio\__init__.py", line 85, in run_on_executor return loop.run_in_executor(_EXECUTOR, functools.partial(fn, *args, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\software\python\3.11.3-amd64\Lib\asyncio\base_events.py", line 816, in run_in_executor self._check_closed() File "D:\software\python\3.11.3-amd64\Lib\asyncio\base_events.py", line 519, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed ---------------------------------------------------------------------- Ran 1 test in 0.074s FAILED (errors=1) The line that's producing that crash is response = self.client.get(f"/item/{created_item_id}") but I don't understand what's the issue. Btw, not interested on using pytest at all, the main purpose of this question is to figure what's wrong and how to fix the current mcve Thanks in advance!
From what I understand from your question, you are also facing issues while running FastAPI. To solve the unit test issue, try creating a test_app.py file in your directory and paste the following code: import asyncio import unittest from fastapi.testclient import TestClient from abhyas import app class TestAsync(unittest.IsolatedAsyncioTestCase): async def asyncSetUp(self): self.client = TestClient(app) async def asyncTearDown(self): self.client.app.db_client.close() async def run_async_test(self, coro): return asyncio.run(coro) async def test_show_item(self): async def test_logic(): bar_data = {"name": "John Doe", "email": "[email protected]"} create_response = await self.client.post("/item", json=bar_data) self.assertEqual(create_response.status_code, 201) created_item_id = create_response.json().get("id") self.assertIsNotNone(created_item_id) response = await self.client.get(f"/item/{created_item_id}") self.assertEqual(response.status_code, 200) self.run_async_test(test_logic) if __name__ == "__main__": unittest.main() It directly utilizes asyncio.run for running asynchronous coroutines, and the test methods themselves are marked as asynchronous. After separating the unit test code from the FastAPI application. The mcve.py will look like this: from typing import Optional import motor.motor_asyncio import uvicorn from bson import ObjectId from fastapi import APIRouter, Body, FastAPI, HTTPException, Request, status from pydantic import BaseModel, ConfigDict, EmailStr, Field from pydantic.functional_validators import BeforeValidator from typing_extensions import Annotated # -------- Model -------- PyObjectId = Annotated[str, BeforeValidator(str)] class ItemModel(BaseModel): id: Optional[PyObjectId] = Field(alias="_id", default=None) name: str = Field(...) email: EmailStr = Field(...) model_config = ConfigDict( populate_by_name=True, arbitrary_types_allowed=True, json_schema_extra={ "example": {"name": "Jane Doe", "email": "[email protected]"} }, ) # -------- Router -------- mcve_router = APIRouter() @mcve_router.post( "", response_description="Add new item", response_model=ItemModel, status_code=status.HTTP_201_CREATED, response_model_by_alias=False, ) async def create_item(request: Request, item: ItemModel = Body(...)): db_collection = request.app.db_collection new_bar = await db_collection.insert_one( item.model_dump(by_alias=True, exclude=["id"]) ) created_bar = await db_collection.find_one({"_id": new_bar.inserted_id}) return created_bar @mcve_router.get( "/{id}", response_description="Get a single item", response_model=ItemModel, response_model_by_alias=False, ) async def show_item(request: Request, id: str): db_collection = request.app.db_collection if (item := await db_collection.find_one({"_id": ObjectId(id)})) is not None: return item raise HTTPException(status_code=404, detail=f"item {id} not found") app = FastAPI() app.include_router(mcve_router, tags=["item"], prefix="/item") app.db_client = motor.motor_asyncio.AsyncIOMotorClient( "mongodb://127.0.0.1:27017/?readPreference=primary&appname=MongoDB%20Compass&ssl=false" ) app.db = app.db_client.mcve_db app.db_collection = app.db.get_collection("bars") if __name__ == '__main__': uvicorn.run("mcve:app", host="0.0.0.0", port=8000, reload=True) If you need help setting up MongoDB locally, you can refer to this page.
2
1
77,785,399
2024-1-9
https://stackoverflow.com/questions/77785399/show-a-wolfram-mathematica-plot-from-within-python
I need to show a plot created by the Wolfram Mathematica application, used from within a Python script. So far I have written the following code, but I do not know how I should show the information stored in the wlplot variable. Is it even possible in Python? The plot.txt file contains the command I need. I checked that the command is correct and that it is properly executed in the Wolfram Alpha itself. session = WolframLanguageSession() plot_file = open("plot.txt") wl_comm = plot_file.read() plot_file.close() wlplot = session.evaluate(wlexpr(wl_comm)) #??? session.terminate() So the question is: If it is possible, what should I write within the Python script in order to show the plot created by the Wolfram Engine?
The working code follows (it reuses the session and wlplot from the question; wl comes from wolframclient.language, and path is the file name to export the plot to). from wolframclient.language import wl from PIL import Image png_export = wl.Export(path, wlplot, "PNG") session.evaluate(png_export) img = Image.open(path) img.show()
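Put together with the question's session setup, the whole flow looks like this; the "plot.png" output path is an arbitrary choice:

from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wl, wlexpr
from PIL import Image

session = WolframLanguageSession()
with open("plot.txt") as plot_file:
    wl_comm = plot_file.read()

# Evaluate the plotting command and export the resulting graphics to a PNG file
wlplot = session.evaluate(wlexpr(wl_comm))
path = "plot.png"
session.evaluate(wl.Export(path, wlplot, "PNG"))
session.terminate()

# Display the exported image from Python
Image.open(path).show()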
4
1
77,768,273
2024-1-6
https://stackoverflow.com/questions/77768273/instaloader-json-query-to-explore-tags-hashtag-404-not-found
I am constantly getting an error where when I try to scrape posts from a specific hashtag using instaloader, I am getting the error: JSON Query to explore/tags/hashtag/: 404 Not Found Here is my script: from itertools import islice import instaloader username = '' password = '' hashtag = 'food' L = instaloader.Instaloader() L.login(username, password) posts = L.get_hashtag_posts(hashtag) for post in posts: print(post.url) and here is the full error messsage: JSON Query to explore/tags/hashtag/: 404 Not Found [retrying; skip with ^C] JSON Query to explore/tags/hashtag/: 404 Not Found [retrying; skip with ^C] Traceback (most recent call last): File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/instaloadercontext.py", line 405, in get_json raise QueryReturnedNotFoundException("404 Not Found") instaloader.exceptions.QueryReturnedNotFoundException: 404 Not Found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/instaloadercontext.py", line 405, in get_json raise QueryReturnedNotFoundException("404 Not Found") instaloader.exceptions.QueryReturnedNotFoundException: 404 Not Found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/instaloadercontext.py", line 405, in get_json raise QueryReturnedNotFoundException("404 Not Found") instaloader.exceptions.QueryReturnedNotFoundException: 404 Not Found The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/marcus/Documents/python/instascraper/main.py", line 12, in <module> posts = L.get_hashtag_posts('hashtag') File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/instaloader.py", line 1204, in get_hashtag_posts return Hashtag.from_name(self.context, hashtag).get_posts_resumable() File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/structures.py", line 1662, in from_name hashtag._obtain_metadata() File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/structures.py", line 1676, in _obtain_metadata self._node = self._query({"__a": 1, "__d": "dis"}) File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/structures.py", line 1671, in _query json_response = self._context.get_json("explore/tags/{0}/".format(self.name), params) File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/instaloadercontext.py", line 435, in get_json return self.get_json(path=path, params=params, host=host, session=sess, _attempt=_attempt + 1, File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/instaloadercontext.py", line 435, in get_json return self.get_json(path=path, params=params, host=host, session=sess, _attempt=_attempt + 1, File "/Users/marcus/opt/anaconda3/lib/python3.9/site-packages/instaloader/instaloadercontext.py", line 423, in get_json raise QueryReturnedNotFoundException(error_string) from err instaloader.exceptions.QueryReturnedNotFoundException: JSON Query to explore/tags/hashtag/: 404 Not Found Any help is appreciated.
It seems like Instagram has temporarily blocked your IP address due to the unusual activity; you'd need to change your IP address or use a quality VPN.
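If switching networks is not immediately possible, one way to make the script fail more gracefully is to catch the exception shown in the traceback and back off before retrying; the wait times below are arbitrary and do not guarantee the block is lifted:

import time
from instaloader.exceptions import QueryReturnedNotFoundException

for attempt in range(3):
    try:
        for post in L.get_hashtag_posts(hashtag):
            print(post.url)
        break
    except QueryReturnedNotFoundException:
        # likely rate-limited or blocked; wait before trying again
        time.sleep(600 * (attempt + 1))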
3
-1
77,756,336
2024-1-4
https://stackoverflow.com/questions/77756336/better-way-to-access-a-nested-foreign-key-field-in-django
Consider the following models class A(models.Model): id field1 field2 class B(models.Model): id field3 field_a (foreign_key to Class A) class C(models.model): id field4 field5 field_b (foreign_key to Class B) @property def nested_field(self): return self.field_b.field_a Now here that property in class C, would trigger additional SQL queries to be fetched. Is there an optimized or better way to get nested foreign key fields? I have basically tried searching and finding regarding this and couldn't find a better solution, that addresses this problem.
Turning my comments into an answer: select_related() is one of the go-to tools. As you noticed, the model instance needs to have been fetched by a queryset that has made the appropriate call to select_related(). queryset = C.objects.select_related('field_b__field_a') obj = queryset.first() print(obj.nested_field) # Shouldn't cost additional queries The main reason I turned this into an answer is to mention a tradeoff: select_related() does the equivalent of SELECT * on the related model(s). It's convenient when you want to have the related model instance available, and not much of a problem for most small models; however, if you have a related model with a lot of columns, or several distantly nested models, this can cause unnecessary overhead if you don't need to use all the fields that are being retrieved. You can optimize this using only(), but what I've used more often is annotate(). I feel like annotate() gets overlooked for these kinds of common foreign key traversals, but it can give you a similar interface to using model properties (and it took me way too long into my coding career to figure that out): from django.db.models import F queryset = C.objects.annotate(nested_field_1=F('field_b__field_a__field1')) obj = queryset.first() print(obj.nested_field_1) # similar, though it's per-field annotate() does the equivalent of SELECT AS here, and it can accomplish many of the same things as a Model property. Here, nested_field_1 is available as an attribute on each object in the query result. To make this more reusable, these kinds of calls to annotate() can be added to a custom Manager: from django.db.models import F, Manager, Model class ModelCManager(Manager): def get_queryset(self): return ( super().get_queryset() .annotate(nested_field_1=F('field_b__field_a__field1')) ) class C(Model): ... objects = Manager() with_nested = ModelCManager() queryset = C.with_nested.all() obj = queryset.first() print(obj.nested_field_1) In a production application where I had a LOT of normalized, nested foreign keys, I would more frequently extend QuerySet so that I could chain additional methods together: from django.db.models import F, Model, QuerySet class ModelCQuerySet(QuerySet): def annotate_a_fields(self): return self.annotate( a_field_1=F('field_b__field_a__field1'), a_field_2=F('field_b__field_a__field2') ) def annotate_b_fields(self): return self.annotate( b_field_1=F('field_b__field1') ) class C(Model): ... objects = ModelCQuerySet.as_manager() queryset = ( C.objects .filter(field_b__field4=42) .annotate_a_fields() .annotate_b_fields() ) obj = queryset.first() print(obj.a_field_1) With the above, you have a lot of control over the interface you create, and it makes the queries involved to get the data you want obvious. Model properties are still super useful for local column operations, like joining strings or formatting values - but I've been burned enough times by surprise 9000+ implicit queries that I avoid defining any model properties that have to traverse a foreign key. Moving those data retrieval concerns to the QuerySet helped guard against some of those accidents.
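For completeness, a small sketch of the only() route mentioned above, using the field names from the question's models; it keeps the single-query behaviour of select_related() while limiting which columns are pulled in:

queryset = (
    C.objects
    .select_related('field_b__field_a')
    .only('field4', 'field_b__field3', 'field_b__field_a__field1')
)
obj = queryset.first()
# Still one query; only the listed columns (plus primary keys) were selected
print(obj.field_b.field_a.field1)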
2
1
77,789,573
2024-1-9
https://stackoverflow.com/questions/77789573/vscode-pytest-discovers-but-wont-run
In VS Code, the pytest tests are discovered. I'm using a conda env and it is the selected Python interpreter. However, when I try to run one or all of the tests, it just says "Finished running tests!". I can't get into debug. It doesn't give green checkmarks or red x's. If I run pytest in the terminal, it works just fine. I have reinstalled everything and started from scratch. I have no idea what is going on. Please help! :) I added the Python Test Log; this is the output when I discover and then try to run a test.
having the following in my .vscode/setting.json fixed the problem: { "python.testing.pytestEnabled": true, "python.testing.unittestEnabled": false, "python.testing.pytestArgs": [ "${workspaceFolder}", "--rootdir=${workspaceFolder}" ], }
2
1
77,788,310
2024-1-9
https://stackoverflow.com/questions/77788310/vs-code-jupyter-notebook-output-cell-word-wrapping-not-working
I'm selecting text data from an SFrame and printing it. The text is really long, and the cell gets a horizontal scrollbar to view it. I would like to have it wrap to a newline and fit in my window, not to have a horizontal scrollbar. I tried enabling/disabling the vscode command View: Toggle Word Wrap, but that didn't change the output, even upon rerunning the script.
Unfortunately, VS Code doesn't currently support word wrap in the output or terminal windows. You can try using the textwrap package manually. Add the following code to your script: import textwrap wrapped_text = textwrap.fill(text, width=80) # text is the string you want to print print(wrapped_text)
6
2
77,785,036
2024-1-9
https://stackoverflow.com/questions/77785036/iteratively-convert-an-arbitrary-depth-list-to-a-dict
The input has a pattern: every element in the list is a dict and has fixed keys [{'key': 'a', 'children': [{'key': 'a1', 'children': [{'key': 'a11', 'children': []}]}]}, {'key': 'b', 'children': [{'key': 'b1', 'children': [{'key': 'b11', 'children': []}]}]},] expected output {'a': {'a1': {'a11': ''}}, 'b': {'b1': {'b11': ''}}} I want to do it in an iterative way; currently I'm able to get all the values of the fixed key 'key', but I failed to compose the targeted dict def get_res(stack, key='key'): result = [] while stack: elem = stack.pop() if isinstance(elem, dict): for k, v in elem.items(): if k == key: # breakpoint() result.append(v) stack.append(v) elif isinstance(elem, list): stack.extend(elem) print(result) return result I'm also stuck with the recursive approach def gen_x(stack): for bx in stack: if 'children' not in bx: return {bx['key']: ''} tem_ls = bx['children'] xs = gen_x(tem_ls) print(xs)
You can perform a breadth-first traversal with a queue instead: from collections import deque from operator import itemgetter def convert_tree(lst): tree = {} queue = deque([(lst, tree)]) while queue: entries, branch = queue.popleft() for key, children in map(itemgetter('key', 'children'), entries): if children: queue.append((children, branch.setdefault(key, {}))) else: branch[key] = '' return tree Demo: https://ideone.com/hvfqLj
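Usage with the input from the question, reproducing the expected output:

lst = [{'key': 'a', 'children': [{'key': 'a1', 'children': [{'key': 'a11', 'children': []}]}]},
       {'key': 'b', 'children': [{'key': 'b1', 'children': [{'key': 'b11', 'children': []}]}]}]
print(convert_tree(lst))
# {'a': {'a1': {'a11': ''}}, 'b': {'b1': {'b11': ''}}}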
2
1
77,789,971
2024-1-9
https://stackoverflow.com/questions/77789971/convert-dataframe-from-datetimens-to-datetimeus
I'm trying to convert a column of my pandas dataframe from datetime['ns'] to datetime['us']. I've tried using astype but the column type doesn't seem to change. I need the type of the column itself to update due to some downstream libraries checking for a specific column type. >> data[column].dtype.name 'datetime64[ns]' >> data[column] = data[column].astype('datetime64[us]') >> data[column].dtype.name 'datetime64[ns]'
I can reproduce the behaviour you describe in 1.5.3. So you're most likely using a similar version, or at least one older than 2.0.0. Why? Backwards incompatible API changes: In past versions, when constructing a Series or DataFrame and passing a datetime64 or timedelta64 dtype with unsupported resolution (i.e. anything other than "ns"), pandas would silently replace the given dtype with its nanosecond analogue. To get your cast to work, you need to update your pandas version: pip install pandas==2.0.0 # or pip install -U pandas # latest Minimal Reproducible Example: import pandas as pd # >= 2.0.0 data = pd.DataFrame({"dt": pd.date_range("20240101", freq="T", periods=3)}) data["dt"].dtype.name # 'datetime64[ns]' data["dt"].astype("datetime64[us]").dtype.name # 'datetime64[us]'
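If the code may run under a pandas install you do not control, a small guard (a minimal sketch) makes the silent pre-2.0 downcast explicit instead of letting the astype call pass without effect:

import pandas as pd

if int(pd.__version__.split(".")[0]) < 2:
    raise RuntimeError(
        f"pandas {pd.__version__} silently keeps datetime64[ns]; upgrade to >= 2.0 for datetime64[us]"
    )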
2
1
77,776,128
2024-1-8
https://stackoverflow.com/questions/77776128/adding-submenu-to-a-qcombobox
How can I go about creating sub-menus in a QComboBox? I am currently using the default layout offered by the widget but this creates a lengthy pull-down list as seen in the attached image.
QComboBox normally uses a QListView as its popup. While it is possible to change it to a QTreeView by calling setView(), its result is sometimes cumbersome and often requires further adjustments to make it usable. Most importantly, the view will not use furhter popups, which can become an issue if the whole structure requires too space horizontally or vertically, or the hierarchy is too complex. A simpler solution for such cases is to use a QToolButton with actual menus. Using some custom functions and signals, you can get a behavior similar to that of a QComboBox, getting the text of the currently selected option (even in its full hierarchy). class FakeCombo(QToolButton): currentItemChanged = pyqtSignal(str) currentItemPathChanged = pyqtSignal(list) _currentAction = None def __init__(self, data=None, *args, **kwargs): super().__init__(*args, **kwargs) self.setPopupMode(self.MenuButtonPopup) menu = QMenu(self) self.setMenu(menu) menu.triggered.connect(self.setCurrentAction) self.pressed.connect(self.showMenu) if data: self.setData(data) def _currentActionPath(self): if not self._currentAction: return [] action = self._currentAction path = [action] while action.parent() != self.menu(): action = action.parent().menuAction() path.append(action) return reversed(path) def _getActionsRecursive(self, parent): for action in parent.actions(): if action.menu(): yield from self._getActionsRecursive(action.menu()) else: yield action def _rebuildList(self): self._actions = tuple(self._getActionsRecursive(self.menu())) def currentItem(self): if not self._currentAction: return '' return self._currentAction.text() def currentItemPath(self): return [a.text() for a in self._currentActionPath()] def setCurrentAction(self, action): if self._currentAction == action: return if not isinstance(action, QAction): action = None self._currentAction = action if action is None: self.currentItemChanged.emit('') self.currentItemPathChanged.emit([]) return path = self.currentItemPath() self.setText(': '.join(path)) self.currentItemChanged.emit(path[-1]) self.currentItemPathChanged.emit(path) def setData(self, data): menu = self.menu() menu.clear() if not data: self.setCurrentAction(None) return for item in data: self.addItem(item, menu) self._rebuildList() self.setCurrentAction(self._actions[0]) def addItem(self, item, parent): if isinstance(item, str): action = QAction(item, parent) elif isinstance(item, (tuple, list)): main, subitems = item action = parent.addAction(main) menu = QMenu() action.setMenu(menu) for other in subitems: self.addItem(other, menu) action.destroyed.connect(menu.clear) parent.addAction(action) return action def mousePressEvent(self, event): if self.menu().actions(): QAbstractButton.mousePressEvent(self, event) def keyPressEvent(self, event): if self.menu().actions() or event.key() != Qt.Key_Space: super().keyPressEvent(event) # simple example of data structure made of tuples: # - "final" items are simple strings # - groups are tuples made of a string and another tuple DATA = ( 'Top level item', ('Group #1', ( 'sub item #1 ', 'sub item #2 ', ('Sub group', ( 'sub-sub item #1', 'sub-sub item #2', )), )), ('Group #2', ( 'sub item #3', )), ) app = QApplication([]) win = QWidget() box = FakeCombo(DATA) itemField = QLineEdit(readOnly=True) pathField = QLineEdit(readOnly=True) layout = QFormLayout(win) layout.addRow('Options:', box) layout.addRow('Current item:', itemField) layout.addRow('Current path:', pathField) def updateCurrent(item): itemField.setText(item) pathField.setText(', 
'.join(box.currentItemPath())) box.currentItemChanged.connect(updateCurrent) updateCurrent(box.currentItem()) win.show() app.exec() There's obviously some margin of improvement, for instance allowing wheel and arrow key navigation, highlight of the current item on popup etc.
2
0
77,786,853
2024-1-9
https://stackoverflow.com/questions/77786853/how-to-print-timedelta-consistently-i-e-formatted
I have this code that prints the time difference in milliseconds. #!/usr/bin/python import datetime import sys date1= datetime.datetime.strptime('20231107-08:52:53.539', '%Y%m%d-%H:%M:%S.%f') date2= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f') diff = date1-date2 sys.stdout.write(str(diff) + '\t') date1= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f') date2= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f') diff = date1-date2 sys.stdout.write(str(diff) + '\t') date1= datetime.datetime.strptime('20231107-08:52:53.532', '%Y%m%d-%H:%M:%S.%f') date2= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f') diff = date1-date2 sys.stdout.write(str(diff) + '\n') And it prints this, without consistency $ python ./prntdatetest.py 0:00:00.002000 <tab> 0:00:00 <tab> -1 day, 23:59:59.995000 I want this to be printed like this 00:00:00.002 <tab> 00:00:00.000 <tab> -0:00:00.005 I do not want to use the print but i want to use stdout.write How can I do this?
You can define the following formatting function: def format_diff(diff): sign = '-' if diff < datetime.timedelta(0) else '' seconds = abs(diff.total_seconds()) return f'{sign}{seconds//60//60:02.0f}:{seconds//60%60:02.0f}:{seconds%60:06.03f}' #!/usr/bin/python import datetime import sys date1= datetime.datetime.strptime('20231107-08:52:53.539', '%Y%m%d-%H:%M:%S.%f') date2= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f') diff = date1-date2 sys.stdout.write(format_diff(diff) + '\t') date1= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f') date2= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f') diff = date1-date2 sys.stdout.write(format_diff(diff) + '\t') date1= datetime.datetime.strptime('20231107-08:52:53.532', '%Y%m%d-%H:%M:%S.%f') date2= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f') diff = date1-date2 sys.stdout.write(format_diff(diff) + '\n') Output: 00:00:00.002 00:00:00.000 -00:00:00.005 Note: that handles values up to one day. For >= 1 day deltas, you need to update the function.
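A possible extension for >= 1 day deltas along the same lines; the 'N day(s), HH:MM:SS.mmm' layout is an assumption about the desired output:

def format_diff_days(diff):
    # split off whole days first, then reuse the HH:MM:SS.mmm formatting
    sign = '-' if diff < datetime.timedelta(0) else ''
    seconds = abs(diff.total_seconds())
    days, seconds = divmod(seconds, 86400)
    hhmmss = f'{seconds//3600:02.0f}:{seconds//60%60:02.0f}:{seconds%60:06.03f}'
    return f'{sign}{days:.0f} day(s), {hhmmss}' if days else f'{sign}{hhmmss}'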
2
1
77,786,441
2024-1-9
https://stackoverflow.com/questions/77786441/using-pd-cut-with-duplicate-bins-and-labels
I'm using pd.cut with the keyword argument duplicates='drop'. However, this gives errors when you combine it with the keyword argument labels. The question is similar to this question, but that ignores the label part. Does not work: pd.cut(pd.Series([0, 1, 2, 3, 4, 5]), bins=[0, 1, 1, 2]) Works: pd.cut(pd.Series([0, 1, 2, 3, 4, 5]), bins=[0, 1, 1, 2], duplicates='drop') Does not work: pd.cut(pd.Series([0, 1, 2, 3, 4, 5]), bins=[0, 1, 1, 2], duplicates='drop', labels=[0, 1, 1, 2]) Wouldn't we expect it to drop the label corresponding to the duplicate entry?
No, the cut documentation is pretty clear, it only concerns the bins: duplicates {default ‘raise’, ‘drop’}, optional If bin edges are not unique, raise ValueError or drop non-uniques. Also, in any case the labels must be one value less than the bins, so dropping the labels based on the bins would be ambiguous. This works if you have the correct final number of labels: pd.cut(pd.Series([0, 1, 2, 3, 4, 5]), bins=[0, 1, 1, 2], labels=['a', 'b'], duplicates='drop' ) Or using a weird programmatic alternative: bins = pd.Series([0, 1, 1, 2]) labels = pd.Series(['a', 'b', 'c']) pd.cut(pd.Series([0, 1, 2, 3, 4, 5]), bins=[0, 1, 1, 2], labels=labels[~bins.duplicated()[:-1]], duplicates='drop' ) Output: 0 NaN 1 a 2 b 3 NaN 4 NaN 5 NaN dtype: category Categories (2, object): ['a' < 'b']
2
2
77,785,794
2024-1-9
https://stackoverflow.com/questions/77785794/importerror-cannot-import-name-checkpoint-from-ray-air
I'm trying to follow this tutorial to tune hyperparameters in PyTorch using Ray, copy-pasted everything but I get the following error: ImportError: cannot import name 'Checkpoint' from 'ray.air' from this line of import: from ray.air import Checkpoint I installed ray using pip install -U "ray[tune]" as suggested on the official website. After getting the error, to be sure, I also tried a more general pip install ray, which did not fix anything. I have version ray==2.9.0 installed. Any help, please?
Try installing the older version 2.7.0: pip install ray[tune]==2.7.0 Update: For the newest version, the Ray AIR session is replaced with a Ray Train context object. You can import Checkpoint using: from ray.train import Checkpoint You need to adjust your code as follows: from ray import air, train # Ray Train methods and classes: air.session.report -> train.report air.session.get_dataset_shard -> train.get_dataset_shard air.session.get_checkpoint -> train.get_checkpoint air.Checkpoint -> train.Checkpoint air.Result -> train.Result # Ray Train configurations: air.config.CheckpointConfig -> train.CheckpointConfig air.config.FailureConfig -> train.FailureConfig air.config.RunConfig -> train.RunConfig air.config.ScalingConfig -> train.ScalingConfig # Ray TrainContext methods: air.session.get_experiment_name -> train.get_context().get_experiment_name air.session.get_trial_name -> train.get_context().get_trial_name air.session.get_trial_id -> train.get_context().get_trial_id air.session.get_trial_resources -> train.get_context().get_trial_resources air.session.get_trial_dir -> train.get_context().get_trial_dir air.session.get_world_size -> train.get_context().get_world_size air.session.get_world_rank -> train.get_context().get_world_rank air.session.get_local_rank -> train.get_context().get_local_rank air.session.get_local_world_size -> train.get_context().get_local_world_size air.session.get_node_rank -> train.get_context().get_node_rank For more information, see: Refining the Ray AIR Surface API 2.7 Migration Guide
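For the tutorial's pattern of reporting metrics and checkpoints from inside the training function, the new-style calls look roughly like this; the directory name and the loss value are illustrative placeholders, not part of the tutorial:

from ray import train
from ray.train import Checkpoint

def train_fn(config):
    # ... run a training step and write model state into "my_checkpoint_dir" ...
    checkpoint = Checkpoint.from_directory("my_checkpoint_dir")
    # report the metrics for this iteration together with the checkpoint
    train.report({"loss": 0.1}, checkpoint=checkpoint)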
2
5
77,780,873
2024-1-8
https://stackoverflow.com/questions/77780873/removing-certain-end-standing-values-from-list-in-python
Is there an elegant Pythonic way to perform something like rstrip() on a list? Imagine, I have different lists: l1 = ['A', 'D', 'D'] l2 = ['A', 'D'] l3 = ['D', 'A', 'D', 'D'] l4 = ['A', 'D', 'B', 'D'] I need a function that will remove all end-standing 'D' elements from a given list (but not those that come before or in between other elements!). for mylist in [l1, l2, l3, l4]: print(mylist, ' => ', remove_end_elements(mylist, 'D')) So the desired output would be: ['A', 'D', 'D'] => ['A'] ['A', 'D'] => ['A'] ['D', 'A', 'D', 'D'] => ['D', 'A'] ['A', 'D', 'B', 'D'] => ['A','D','B'] One implementation that does the job is this: def remove_end_elements(mylist, myelement): counter = 0 for element in mylist[::-1]: if element != myelement: break counter -= 1 return mylist[:counter] Is there a more elegant / efficient way to do it? To answer comment-questions: Either a new list or modifying the original list is fine (although the above implementation has creating a new list in mind). The real lists contain multi-character-strings (lines from a text file). What I'm actually trying to strip away are lines that fulfill certain criteria for "empty" (no characters OR only whitespace OR only whitespace and commas). I have that check implemented elsewhere. These empty lines can be an arbitrary number at the end of the list, but in most cases will be 1. I timed the different solutions offered so far, with simulated data close to my actual use case, and the actual is_empty_line() function that I'm using: Kelly Bundy's solution: 0.029670200019609183 Guy's solution: 0.038380099984351546 my original solution: 0.03837349999230355 cards' solution: 0.0408437000005506 Timeless' solution: 0.08083210000768304 Which one performs better does seem to depend on the complexity of the is_empty_line() function (except for Timeless' solution, which is consistently slower than everything else, and KellyBundy's solution, which is consistently faster).
def remove_end_elements(mylist, myelement): while myelement in mylist[-1:]: mylist.pop() return mylist
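Applied to the lists from the question (note that this variant trims the list in place and also returns it):

for mylist in [['A', 'D', 'D'], ['A', 'D'], ['D', 'A', 'D', 'D'], ['A', 'D', 'B', 'D']]:
    print(remove_end_elements(mylist, 'D'))
# ['A']
# ['A']
# ['D', 'A']
# ['A', 'D', 'B']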
5
1
77,785,263
2024-1-9
https://stackoverflow.com/questions/77785263/deprecating-a-function-that-is-being-replaced-with-a-property
I am refactoring parts of an API and wish to add deprecation warnings to parts that will eventually be removed. However I have stumbled into an issue where I would like to replace a function call with a property sharing a name. Is there a hack where I can support both calling the .length as a property and as a function? I have thinkered with __getattribute__ and __getattr__ and can't think of a way. import warnings class A: @property def length(self): return 1 def length(self): warnings.warn(".length function is deprecated. Use the .length property", DeprecationWarning) return 1 P.S. Preferably I would like the solution to be python 2.7 compatible. Additional context The only "kind of" solution I have thought of is to overwrite the return value and skip the properties for now and add them in later when the deprecation warnings are removed. This solution would work for my case, if there really isn't any other way, but I would prefer a solution that is a lot less hacky. import warnings class F(float): def __init__(self, v): self.v = v def __new__(cls, value): return float.__new__(cls, value) def __call__(self, *args, **kwargs): warnings.warn(".length function is deprecated. Use the .length property", DeprecationWarning) return self.v class A(object): def __getattribute__(self, item): if item == "length": # This is a hack to enable a deprecation warning when calling .length() # Remove this in favor for the @property, when the deprecation warnings are removed. return F(1) return super(A, self).__getattribute__(item) # @property # def length(self): # # type: () -> float # return 1.0
One workaround is to make the property return a proxy object of a subtype of the value to be returned. The proxy object can then produce the warning when called: import warnings def warning_property(message, warning_type=DeprecationWarning): class _property(property): def __get__(self, obj, obj_type=None): value = super().__get__(obj, obj_type) class _proxy(type(value)): def __call__(self): warnings.warn(message, warning_type) return value return _proxy(value) return _property so that: class A: @warning_property(".length function is deprecated. Use the .length property") def length(self): return 1 print(A().length) print(A().length()) outputs: 1 1 DeprecationWarning: .length function is deprecated. Use the .length property Demo: https://ideone.com/PWkLN9 Note that the above assumes that the constructor of the type of the returning value of the property can take an instance as an argument, which is the case for all built-in types. If the constructor has a different signature then you should modify return _proxy(value) accordingly.
4
4
77,766,048
2024-1-5
https://stackoverflow.com/questions/77766048/getting-a-very-simple-stablebaselines3-example-to-work
I tried to model the simplest coin flipping game where you have to predict if it is going to be a head. Sadly it won't run, given me: Using cpu device Traceback (most recent call last): File "/home/user/python/simplegame.py", line 40, in <module> model.learn(total_timesteps=10000) File "/home/user/python/mypython3.10/lib/python3.10/site-packages/stable_baselines3/ppo/ppo.py", line 315, in learn return super().learn( File "/home/user/python/mypython3.10/lib/python3.10/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 264, in learn total_timesteps, callback = self._setup_learn( File "/home/user/python/mypython3.10/lib/python3.10/site-packages/stable_baselines3/common/base_class.py", line 423, in _setup_learn self._last_obs = self.env.reset() # type: ignore[assignment] File "/home/user/python/mypython3.10/lib/python3.10/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py", line 77, in reset obs, self.reset_infos[env_idx] = self.envs[env_idx].reset(seed=self._seeds[env_idx], **maybe_options) TypeError: CoinFlipEnv.reset() got an unexpected keyword argument 'seed' Here is the code: import gymnasium as gym import numpy as np from stable_baselines3 import PPO from stable_baselines3.common.vec_env import DummyVecEnv class CoinFlipEnv(gym.Env): def __init__(self, heads_probability=0.8): super(CoinFlipEnv, self).__init__() self.action_space = gym.spaces.Discrete(2) # 0 for heads, 1 for tails self.observation_space = gym.spaces.Discrete(2) # 0 for heads, 1 for tails self.heads_probability = heads_probability self.flip_result = None def reset(self): # Reset the environment self.flip_result = None return self._get_observation() def step(self, action): # Perform the action (0 for heads, 1 for tails) self.flip_result = int(np.random.rand() < self.heads_probability) # Compute the reward (1 for correct prediction, -1 for incorrect) reward = 1 if self.flip_result == action else -1 # Return the observation, reward, done, and info return self._get_observation(), reward, True, {} def _get_observation(self): # Return the current coin flip result return self.flip_result # Create the environment with heads probability of 0.8 env = DummyVecEnv([lambda: CoinFlipEnv(heads_probability=0.8)]) # Create the PPO model model = PPO("MlpPolicy", env, verbose=1) # Train the model model.learn(total_timesteps=10000) # Save the model model.save("coin_flip_model") # Evaluate the model obs = env.reset() for _ in range(10): action, _states = model.predict(obs) obs, rewards, dones, info = env.step(action) print(f"Action: {action}, Observation: {obs}, Reward: {rewards}") What am I doing wrong? This is in version 2.2.1.
The gymnasium.Env class has the following signature which divers from the one by DummyVecEnv which takes no arguments. Env.reset(self, *, seed: int | None = None, options: dict[str, Any] | None = None) → tuple[ObsType, dict[str, Any]] in other words seed and options are keyword-only which your own reset function needs to implement. It returns the observation, info tuple in the end. The problems to note: Signature of reset does not match, needs seed and options Return signature of reset does not match. It needs to return a valid observation (ObsType) and a dictionary Return signature of step does not match, needs to say if result is truncated / model went out of bounds. (see below) def reset(self, *, seed=None, options=None): # Fix input signature # Reset the environment self.flip_result = 0 # None is not a valid Observation return self.flip_result, {} # Fix return signature If you return None, as underlying numpy arrays are used array([0])[0]=obs <- None would throw another error. step needs to have five returns parameters observation, reward, terminated, truncated, info def step(self, action): # Perform the action (0 for heads, 1 for tails) self.flip_result = int(np.random.rand() < self.heads_probability) # Compute the reward (1 for correct prediction, -1 for incorrect) reward = 1 if self.flip_result == action else -1 # Return the observation, reward, done, truncated, and info return self._get_observation(), reward, True, False, {} Now the models trains: ----------------------------- | time/ | | | fps | 5608 | | iterations | 1 | | time_elapsed | 0 | | total_timesteps | 2048 | ----------------------------- ----------------------------------------- | time/ | | | fps | 3530 | | iterations | 2 | | time_elapsed | 1 | | total_timesteps | 4096 | | train/ | | | approx_kl | 0.020679139 | | clip_fraction | 0.617 | | clip_range | 0.2 | | entropy_loss | -0.675 | | explained_variance | 0 | | learning_rate | 0.0003 | | loss | 0.38 | | n_updates | 10 | | policy_gradient_loss | -0.107 | | value_loss | 1 | ----------------------------------------- ----------------------------------------- | time/ | | | fps | 3146 | | iterations | 3 | | time_elapsed | 1 | | total_timesteps | 6144 | | train/ | | | approx_kl | 0.032571375 | | clip_fraction | 0.628 | | clip_range | 0.2 | | entropy_loss | -0.599 | | explained_variance | 0 | | learning_rate | 0.0003 | | loss | 0.392 | | n_updates | 20 | | policy_gradient_loss | -0.104 | | value_loss | 0.987 | ----------------------------------------- --------------------------------------- | time/ | | | fps | 2984 | | iterations | 4 | | time_elapsed | 2 | | total_timesteps | 8192 | | train/ | | | approx_kl | 0.0691616 | | clip_fraction | 0.535 | | clip_range | 0.2 | | entropy_loss | -0.417 | | explained_variance | 0 | | learning_rate | 0.0003 | | loss | 0.335 | | n_updates | 30 | | policy_gradient_loss | -0.09 | | value_loss | 0.941 | --------------------------------------- ---------------------------------------- | time/ | | | fps | 2898 | | iterations | 5 | | time_elapsed | 3 | | total_timesteps | 10240 | | train/ | | | approx_kl | 0.12130852 | | clip_fraction | 0.125 | | clip_range | 0.2 | | entropy_loss | -0.189 | | explained_variance | 0 | | learning_rate | 0.0003 | | loss | 0.536 | | n_updates | 40 | | policy_gradient_loss | -0.0397 | | value_loss | 0.806 | ---------------------------------------- Action: [1], Observation: [0], Reward: [1.] Action: [1], Observation: [0], Reward: [-1.] Action: [1], Observation: [0], Reward: [-1.] Action: [1], Observation: [0], Reward: [1.] 
Action: [1], Observation: [0], Reward: [1.] Action: [1], Observation: [0], Reward: [-1.] Action: [1], Observation: [0], Reward: [1.] Action: [1], Observation: [0], Reward: [-1.] Action: [1], Observation: [0], Reward: [1.] Action: [1], Observation: [0], Reward: [1.]
2
1
77,767,421
2024-1-5
https://stackoverflow.com/questions/77767421/plot-an-bended-arrow-in-matploblib-with-gradient-color-from-head-to-tail
I am trying to plot an arrow in Matplotlib as shown in the following picture. The arrow's color changes gradually from black to blue from the tail to the head. I searched the existing answers, and I found a few relevant posts: Using Colormap with Annotate Arrow in Matplotlib, Matplotlib: How to get a colour-gradient as an arrow next to a plot?, Arrow with color gradient in matplotlib [duplicate] However, the arrow plotted in the previous answer is straight. Here, I need an arrow with the arc. I think normally this is achieved by using connectionstyle options with arc or rad. Unfortunately, it seems neither matplotlib.patches.FancyArrowPatch nor matplotlib.pyplot.annotate supports the color to be defined by colormap directly. Could you please tell me how I can do this?
Plotting multiple overlapping arrows of different lengths and colors might work. You can adjust their length by changing the tail shrinking factors like this: import matplotlib.pyplot as plt import matplotlib.cm as cm from matplotlib.colors import LinearSegmentedColormap, ListedColormap import numpy as np def annotation_with_arc_arrow(ax, text, xy, xytext, color, headwidth, shrinkA, text_alpha=1): """Annotates a certain point on a plot with a simple arrow of a specified color, head width, shrinkA factor, and text alpha value (to be able to generate transparent text)""" text_props = {'alpha': text_alpha} ax.annotate(text, xy=xy, xycoords='data', xytext=xytext, textcoords='data', size=20, arrowprops=dict(facecolor=color, ec='none', arrowstyle='simple, head_width={0}'.format(headwidth), shrinkA=shrinkA, connectionstyle="arc3,rad=0.3" ), **text_props ) def gradient_annotation(ax, text, xy, xytext, cmap): """Annotates a certain point on a plot using a bent arrow with gradient color from tail to head""" # draw a headless arrow of the very first color from the map, with text annotation_with_arc_arrow(ax, text, xy, xytext, cmap(0), 0, 0) # draw many overlapping headless arrows of varying colors and shrinkA factors with transparent text last_cmap_index = cmap.N-1 for i in range(1, last_cmap_index): annotation_with_arc_arrow(ax, text, xy, xytext, cmap(i), 0, i, 0) # finally, draw an arrow of the very last color and shrinkA factor having a head of size 0.5 and transparent text annotation_with_arc_arrow(ax, text, xy, xytext, cmap(last_cmap_index), 0.5, last_cmap_index, 0) ax = plt.subplot(111) # generate a custom blue-black colormap N = 256 vals = np.ones((N, 4)) vals[:, 0] = np.linspace(0, 0, N) vals[:, 1] = np.linspace(0, 0, N) vals[:, 2] = np.linspace(0, 1, N) cmap = ListedColormap(vals) gradient_annotation(ax, 'Test', (0.2, 0.2), (0.8, 0.8), cmap) plt.show() The text annotation and arrow's head should be drawn only once to have a relatively clean result: Unfortunately, many other colormaps and arrow styles produce a bit dirty result with this solution. Perhaps you should try various ways of changing colors, setting shrinking factors, and choosing the "drawing direction" of the arrow (from tail to head or from head to tail). 
UPDATE: Alternatively, you could define arrow lengths by drawing a transparent head patch (patchB) of a gradually increasing radius each time you draw an arrow: import matplotlib.pyplot as plt from matplotlib.colors import LinearSegmentedColormap, ListedColormap import numpy as np import math import matplotlib.patches as mpatches def annotation_with_arc_arrow(ax, text, xy, xytext, color, headwidth, patchB_radius, text_alpha=1): """Annotates a certain point on a plot with a simple arrow of a specified color, head width, patchB radius, and text alpha value (to be able to generate transparent text)""" text_props = {'alpha': text_alpha} # transparent circle, limiting the arrow patchB = mpatches.Circle(xy, patchB_radius, alpha=0) ax.add_artist(patchB) ax.annotate(text, xy=xy, xycoords='data', xytext=xytext, textcoords='data', size=20, arrowprops=dict(facecolor=color, ec='none', arrowstyle='simple, head_width={0}'.format(headwidth), patchB=patchB, connectionstyle="arc3,rad=0.3" ), **text_props ) def gradient_annotation(ax, text, xy, xytext, cmap): """Annotates a certain point on a plot using a bent arrow with gradient color from tail to head""" # get the absolute differences in coordinates of the annotated point and the text dx = abs(xy[0] - xytext[0]) dy = abs(xy[1] - xytext[1]) # make those differences slightly smaller to compute a slightly smaller patch radius dx = dx - dx/50 dy = dy - dy/50 # get a radius, which is slightly smaller than the distance between the annotated point and the text r = math.sqrt(dx**2 + dy**2) # draw an arrow of the very first color from the map, with a head and text annotation_with_arc_arrow(ax, text, xy, xytext, cmap(0), 0.5, 0) # draw many overlapping headless arrows of varying colors and patchB radii, with transparent text # transparent patchB with a gradually increasing radius limits the arrow size for i in range(1, cmap.N): # compute a fraction of the maximum patchB radius r_i = r * (i/cmap.N) annotation_with_arc_arrow(ax, text, xy, xytext, cmap(i), 0, r_i, 0) ax = plt.subplot(111) # generate a custom blue-black colormap N = 256 vals = np.ones((N, 4)) vals[:, 0] = np.linspace(0, 0, N) vals[:, 1] = np.linspace(0, 0, N) vals[:, 2] = np.linspace(1, 0, N) cmap = ListedColormap(vals) gradient_annotation(ax, 'Test', (0.2, 0.2), (0.8, 0.8), cmap) plt.show() This code produces a nearly identical picture: But it allows you to safely rescale the picture without losing the original gradient color. The result might look a bit ugly if the arrow is very wide though. You might need to adjust the distribution of colors in the color map.
2
1
77,775,267
2024-1-7
https://stackoverflow.com/questions/77775267/combine-argparse-metavartypehelpformatter-argparse-argumentdefaultshelpformatte
I want to display default values, argument type, and big spacing for --help. But if I do import argparse class F(argparse.MetavarTypeHelpFormatter, argparse.ArgumentDefaultsHelpFormatter, lambda prog: argparse.HelpFormatter(prog, max_help_position = 52)): pass parser = argparse.ArgumentParser( prog = 'junk', formatter_class = F) It gives the following error TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases Does anyone know how to combine these three formatters correctly?
Separating the lambda part works: In [122]: class F(argparse.MetavarTypeHelpFormatter, argparse.ArgumentDefaultsHelpFormatter): pass ...: F1 = lambda prog: F(prog, max_help_position=52) ...: parser = argparse.ArgumentParser( ...: prog = 'junk', ...: formatter_class = F1) In [123]: parser.print_help() usage: junk [-h] options: -h, --help show this help message and exit The lambda expression isn't used to make a new subclass, but to modify how the class is called. In [136]: parser = argparse.ArgumentParser( ...: prog = 'junk', ...: formatter_class = F1) ...: parser.add_argument('--long_name', type=float, default='123.213', help='help line'); In [137]: parser.print_help() usage: junk [-h] [--long_name float] options: -h, --help show this help message and exit --long_name float help line (default: 123.213) On the argparse bug/issues board it has been suggested that you can subclass the helpformatter by inheriting from several of the provided subclasses. But no one has tested all combinations. The provided subclasses just modify one or more of the formatter methods to produce the desired change. You could make those changes directly in your class; read the argparse.py code to see how. On using the lambda formatter: in Explain lambda argparse.HelpFormatter(prog, width) I show how you can do it by subclassing, but the lambda expression is simpler. See also: where is the documentation for the python argparse helpformatter class?
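A rough sketch of the "make those changes directly in your class" route, overriding the same helper methods the stock subclasses override; these are private methods of the CPython implementation, so treat this as version-dependent:

import argparse

class CombinedFormatter(argparse.HelpFormatter):
    def __init__(self, prog):
        # wider help column, as in the lambda version
        super().__init__(prog, max_help_position=52)

    # roughly what ArgumentDefaultsHelpFormatter adds
    def _get_help_string(self, action):
        help_text = action.help or ''
        if '%(default)' not in help_text and action.default is not argparse.SUPPRESS:
            help_text += ' (default: %(default)s)'
        return help_text

    # roughly what MetavarTypeHelpFormatter adds (requires type= on each argument)
    def _get_default_metavar_for_optional(self, action):
        return action.type.__name__

    def _get_default_metavar_for_positional(self, action):
        return action.type.__name__

parser = argparse.ArgumentParser(prog='junk', formatter_class=CombinedFormatter)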
3
2
77,774,217
2024-1-7
https://stackoverflow.com/questions/77774217/how-to-extract-several-rotated-shapes-from-an-image
I had taken an online visual IQ test, in it a lot of questions are like the following: The addresses of the images are: [f"https://www.idrlabs.com/static/i/eysenck-iq/en/{i}.png" for i in range(1, 51)] In these images there are several shapes that are almost identical and of nearly the same size. Most of these shapes can be obtained from the others by rotation and translation, but there is exactly one shape that can only be obtained from the others with reflection, this shape has a different chirality from the others, and it is "the odd man". The task is to find it. The answers here are 2, 1, and 4, respectively. I would like to automate it. And I nearly succeeded. First, I download the image, and load it using cv2. Then I threshold the image and invert the values, and then find the contours. I then find the largest contours. Now I need to extract the shapes associated with the contours and make the shapes stand upright. And this is where I stuck, I nearly succeeded but there are edge cases. My idea is simple, find the minimal area bounding box of the contour, then rotate the image to make the rectangle upright (all sides are parallel to grid-lines, longest sides are vertical), and then calculate the new coordinates of the rectangle, and finally using array slicing to extract the shape. I have achieved what I have described: import cv2 import requests import numpy as np img = cv2.imdecode( np.asarray( bytearray( requests.get( "https://www.idrlabs.com/static/i/eysenck-iq/en/5.png" ).content, ), dtype=np.uint8, ), -1, ) def get_contours(image): gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) _, thresh = cv2.threshold(gray, 128, 255, 0) thresh = ~thresh contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) return contours def find_largest_areas(contours): areas = [cv2.contourArea(contour) for contour in contours] area_ranks = [(area, i) for i, area in enumerate(areas)] area_ranks.sort(key=lambda x: -x[0]) for i in range(1, len(area_ranks)): avg = sum(e[0] for e in area_ranks[:i]) / i if area_ranks[i][0] < avg * 0.95: break return {e[1] for e in area_ranks[:i]} def find_largest_shapes(image): contours = get_contours(image) area_ranks = find_largest_areas(contours) contours = [e for i, e in enumerate(contours) if i in area_ranks] rectangles = [cv2.minAreaRect(contour) for contour in contours] rectangles.sort(key=lambda x: x[0]) return rectangles def rotate_image(image, angle): size_reverse = np.array(image.shape[1::-1]) M = cv2.getRotationMatrix2D(tuple(size_reverse / 2.0), angle, 1.0) MM = np.absolute(M[:, :2]) size_new = MM @ size_reverse M[:, -1] += (size_new - size_reverse) / 2.0 return cv2.warpAffine(image, M, tuple(size_new.astype(int))) def int_sort(arr): return np.sort(np.intp(np.floor(arr + 0.5))) RADIANS = {} def rotate(x, y, angle): if pair := RADIANS.get(angle): cosa, sina = pair else: a = angle / 180 * np.pi cosa, sina = np.cos(a), np.sin(a) RADIANS[angle] = (cosa, sina) return x * cosa - y * sina, y * cosa + x * sina def new_border(x, y, angle): nx, ny = rotate(x, y, angle) nx = int_sort(nx) ny = int_sort(ny) return nx[3] - nx[0], ny[3] - ny[0] def coords_to_pixels(x, y, w, h): cx, cy = w / 2, h / 2 nx, ny = x + cx, cy - y nx, ny = int_sort(nx), int_sort(ny) a, b = nx[0], ny[0] return a, b, nx[3] - a, ny[3] - b def new_contour_bounds(pixels, w, h, angle): cx, cy = w / 2, h / 2 x = np.array([-cx, cx, cx, -cx]) y = np.array([cy, cy, -cy, -cy]) nw, nh = new_border(x, y, angle) bx, by = pixels[..., 0] - cx, cy - pixels[..., 1] nx, ny = rotate(bx, by, angle) 
return coords_to_pixels(nx, ny, nw, nh) def extract_shape(rectangle, image): box = np.intp(np.floor(cv2.boxPoints(rectangle) + 0.5)) h, w = image.shape[:2] angle = -rectangle[2] x, y, dx, dy = new_contour_bounds(box, w, h, angle) image = rotate_image(image, angle) shape = image[y : y + dy, x : x + dx] sh, sw = shape.shape[:2] if sh < sw: shape = np.rot90(shape) return shape rectangles = find_largest_shapes(img) for rectangle in rectangles: shape = extract_shape(rectangle, img) cv2.imshow("", shape) cv2.waitKeyEx(0) But it doesn't work perfectly: As you can see, it includes everything in the bounding rectangle, not just the main shape in bounded by the contour, there are some extra bits sticking in. I want the shape to contain only areas bound by the contour. And then, the more serious problem, somehow the bounding box doesn't always align with the principal axis of the contour, as you can see in the last image it doesn't stand upright and there are black areas. How to fix these problems?
This is my take on the problem. It basically involves working with the contour themselves, instead of the actual raster images. Use the Hu moments as shape descriptors, you can compute moments on the array of points directly. Get two vector/arrays: One "objective/reference" contour and compare it amongst the "target" contours, looking for maximum (Euclidean) distance between the two arrays. The contour that produces the maximum distance is the mismatched contour. Keep in mind that the contours are stored un-ordered, this means that the objective contour (contour number 0 – the reference) might be the mismatched shape since the beginning. In this case we already have "maximum distance" between this contour and the rest, we need have to handle this case. Hint: check the distribution of the distances. If there’s a real maximum distance, this will be larger than the rest and might be identified as an outlier. Let’s check the code. The first step is what you already have – look for target contours applying a filter area. Let’s keep the largest contours: import numpy as np import cv2 import math # Set image path directoryPath = "D://opencvImages//shapes//" imageNames = ["01", "02", "03", "04", "05"] # Loop through the image file names: for imageName in imageNames: # Set image path: imagePath = directoryPath + imageName + ".png" # Load image: inputImage = cv2.imread(imagePath) # To grayscale: grayImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY) # Otsu: binaryImage = cv2.threshold(grayImage, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV)[1] # Contour list: # Store here all contours of interest (large area): contourList = [] # Find the contour on the binary image: contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) for i, c in enumerate(contours): # Get blob area: currentArea = cv2.contourArea(c) # Set min area: minArea = 1000 if currentArea > minArea: # Approximate the contour to a polygon: contoursPoly = cv2.approxPolyDP(c, 3, True) # Get the polygon's bounding rectangle: boundRect = cv2.boundingRect(contoursPoly) # Get contour centroid: cx = int(int(boundRect[0]) + 0.5 * int(boundRect[2])) cy = int(int(boundRect[1]) + 0.5 * int(boundRect[3])) # Store in dict: contourDict = {"Contour": c, "Rectangle": tuple(boundRect), "Centroid": (cx, cy)} # Into the list: contourList.append(contourDict) Until this point, I’ve filtered all contour above a minimum area. I’ve stored the following information in a dictionary: The contour itself (the array of points), its bounding rectangle and its centroid. The later two will come handy while checking out results. Now, let’s compute hu moments and their distances. I’ll set the first contour at index 0 as objective/reference and the rest as targets. The main takeaway from this is that we are looking for maximal distance between the reference’s hu moments array and the targets – that’s the one that identifies the mismatched shaped. One note of caution, though, the scale of the feature moments varies wildly. Whenever you are comparing things for similarity is advisable to have all the features scaled in the same range. 
In this particular case I’ll apply a power transform (log) to all features to scale them down: # Get total contours in the list: totalContours = len(contourList) # Deep copies of input image for results: inputCopy = inputImage.copy() contourCopy = inputImage.copy() # Set contour 0 as objetive: currentDict = contourList[0] # Get objective contour: objectiveContour = currentDict["Contour"] # Draw objective contour in green: cv2.drawContours(contourCopy, [objectiveContour], 0, (0, 255, 0), 3) # Draw contour index on image: center = currentDict["Centroid"] cv2.putText(contourCopy, "0", center, cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2) # Store contour distances here: contourDistances = [] # Calculate log-scaled hu moments of objective contour: huMomentsObjective = getScaledMoments(objectiveContour) # Start from objectiveContour+1, get target contour, compute scaled moments and # get Euclidean distance between the two scaled arrays: for i in range(1, totalContours): # Set target contour: currentDict = contourList[i] # Get contour: targetContour = currentDict["Contour"] # Draw target contour in red: cv2.drawContours(contourCopy, [targetContour], 0, (0, 0, 255), 3) # Calculate log-scaled hu moments of target contour: huMomentsTarget = getScaledMoments(targetContour) # Compute Euclidean distance between the two arrays: contourDistance = np.linalg.norm(np.transpose(huMomentsObjective) - np.transpose(huMomentsTarget)) print("contourDistance:", contourDistance) # Store distance along contour index in distance list: contourDistances.append([contourDistance, i]) # Draw contour index on image: center = currentDict["Centroid"] cv2.putText(contourCopy, str(i), center, cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2) # Show processed contours: cv2.imshow("Contours", contourCopy) cv2.waitKey(0) That’s a big snippet. A couple of things: I’ve defined the function getScaledMoments to compute the scaled array of hu moments, I show this function at the end of the post. From here, I’m storing the distance between the objective contour and the target in the contourDistances list, along the index of the original target contour. This will later help me identify the mismatched contour. I’m also using the centroids to label each contour. You can see the process of distance calculation in the following animation; The reference contour is drawn in green, while each target gets drawn in red: Next, let’s get the maximum distance. We have to handle the case when the reference contour was the mismatched shape all along. For this, I’ll apply a very crude outlier detector. The idea is that a distance from the contourDistances list has to be larger than the rest, while the rest are more or less the same. The key word here is variation. 
I’ll use the standard deviation of the distance list and look for maximum distance only if the standard deviation is above one sigma: # Get maximum distance, # List to numpy array: distanceArray = np.array(contourDistances) # Get distance mean and std dev: mean = np.mean(distanceArray[:, 0:1]) stdDev = np.std(distanceArray[:, 0:1]) print("M:", mean, "Std:", stdDev) # Set contour 0 (default) as the contour that is different from the rest: contourIndex = 0 # Sigma minimum threshold: minSigma = 1.0 # If std dev from the distance array is above a minimum variation, # there's an outlier (max distance) in the array, thus, the real different # contour we are looking for: if stdDev > minSigma: # Get max distance: maxDistance = np.max(distanceArray[:, 0:1]) # Set contour index (contour at index 0 was the objective!): contourIndex = np.argmax(distanceArray[:, 0:1]) + 1 print("Max:", maxDistance, "Index:", contourIndex) # Fetch dissimilar contour, if found, # Get boundingRect: boundingRect = contourList[contourIndex]["Rectangle"] # Get the dimensions of the bounding rect: rectX = boundingRect[0] rectY = boundingRect[1] rectWidth = boundingRect[2] rectHeight = boundingRect[3] # Draw dissimilar (mismatched) contour in red: color = (0, 0, 255) cv2.rectangle(inputCopy, (int(rectX), int(rectY)), (int(rectX + rectWidth), int(rectY + rectHeight)), color, 2) cv2.imshow("Mismatch", inputCopy) cv2.waitKey(0) At the end, I just use the bounding rectangle I stored at the beginning of the script to identify the mismatched shape: This is the getScaledMoments function: def getScaledMoments(inputContour): """Computes log-scaled hu moments of a contour array""" # Calculate Moments moments = cv2.moments(inputContour) # Calculate Hu Moments huMoments = cv2.HuMoments(moments) # Log scale hu moments for m in range(0, 7): huMoments[m] = -1 * math.copysign(1.0, huMoments[m]) * math.log10(abs(huMoments[m])) return huMoments Here are some results: The following result is artificial. I manually modified the image to have a case where the reference contour (the one at index = 0) is the mismatched contour, so there's no need to look for a maximum distance (since we already have the result):
4
5
77,770,099
2024-1-6
https://stackoverflow.com/questions/77770099/pairwise-conditional-averages-in-pandas
Consider the following table. All columns are to be considered boolean except days_old and SKU, which are to be considered integer: pd.DataFrame({ 'SKU': [10,11,12,13,14,15], 'frozen':[1,0,1,1,1,0], 'vegetable':[1,1,0,1,1,0], 'microwaveable':[1,0,0,0,0,1], 'days_old':[21,9,11,2,6,14], }) I want a co-occurrence table that, for each pair of column and index variables, gives the average of days_old over the rows where both are 1: Desired output: AVERAGE DAYS OLD: frozen | vegetable | microwaveable frozen 10 | 9.6 | 21 vegetable 9.6 | 9.5 | 21 microw... 21 | 21 | 17.5
One option is to use dot. The numerator df2.T.dot(df2.mul(df['days_old'], axis=0)) sums days_old over the rows where both flags are 1, and the denominator df2.T.dot(df2) counts those rows, so their ratio is the pairwise conditional average (e.g. frozen and vegetable co-occur on SKUs 10, 13 and 14, giving (21 + 2 + 6) / 3 = 9.67): df2 = df[['frozen','vegetable','microwaveable']] df2.T.dot(df2.mul(df['days_old'],axis=0)).div(df2.T.dot(df2)).round(2) Output: frozen vegetable microwaveable frozen 10.00 9.67 21.0 vegetable 9.67 9.50 21.0 microwaveable 21.00 21.00 17.5
3
1
77,773,655
2024-1-7
https://stackoverflow.com/questions/77773655/prisoners-dilemma-strange-results
I tried to implement a prisoner's dilemma in Python, but my results, instead of showing that tit for tat is a better solution, it is showing that defecting is giving better results. Can someone look at my code, and tell me what I have done wrong here? import random from colorama import Fore, Style import numpy as np # Define the actions COOPERATE = 'cooperate' DEFECT = 'defect' # Define the strategies def always_cooperate(history): return COOPERATE def always_defect(history): return DEFECT def random_choice_cooperate(history): return COOPERATE if random.random() < 0.75 else DEFECT def random_choice_defect(history): return COOPERATE if random.random() < 0.25 else DEFECT def random_choice_neutral(history): return COOPERATE if random.random() < 0.5 else DEFECT def tit_for_tat(history): if not history: # If it's the first round, cooperate return COOPERATE opponent_last_move = history[-1][1] # Get the opponent's last move return opponent_last_move # Mimic the opponent's last move def tat_for_tit(history): if not history: # If it's the first round, cooperate return DEFECT opponent_last_move = history[-1][1] # Get the opponent's last move return opponent_last_move # Mimic the opponent's last move def tit_for_two_tats(history): if len(history) < 2: # If it's the first or second round, cooperate return COOPERATE opponent_last_two_moves = history[-2:] # Get the opponent's last two moves if all(move[1] == DEFECT for move in opponent_last_two_moves): # If the opponent defected in the last two rounds return DEFECT return COOPERATE # Define the payoff matrix payoff_matrix = { (COOPERATE, COOPERATE): (3, 3), (COOPERATE, DEFECT): (0, 5), (DEFECT, COOPERATE): (5, 0), (DEFECT, DEFECT): (1, 1) } # Define the players players = [always_cooperate, always_defect, random_choice_defect, tit_for_tat, tit_for_two_tats, random_choice_cooperate, tat_for_tit, random_choice_neutral] # Assign a unique color to each player player_colors = { 'always_cooperate': Fore.GREEN, 'always_defect': Fore.RED, 'tit_for_tat': Fore.BLUE, 'random_choice_cooperate': Fore.MAGENTA, 'random_choice_defect': Fore.LIGHTRED_EX, 'tat_for_tit': Fore.LIGHTYELLOW_EX, 'random_choice_neutral': Fore.WHITE, 'tit_for_two_tats': Fore.LIGHTBLACK_EX, } def tournament(players, rounds=100): total_scores = {player.__name__: 0 for player in players} for i in range(len(players)): for j in range(i+1, len(players)): player1 = players[i] player2 = players[j] history1 = [] history2 = [] match_scores = {player1.__name__: 0, player2.__name__: 0} # print(f"\n{player1.__name__} vs {player2.__name__}") for round in range(rounds): move1 = player1(history1) move2 = player2(history2) score1, score2 = payoff_matrix[(move1, move2)] match_scores[player1.__name__] += score1 match_scores[player2.__name__] += score2 total_scores[player1.__name__] += score1 total_scores[player2.__name__] += score2 history1.append((move1, move2)) history2.append((move2, move1)) # print(f"{player1.__name__} moves: {''.join([Fore.GREEN+'O'+Style.RESET_ALL if move[0]==COOPERATE else Fore.RED+'X'+Style.RESET_ALL for move in history1])}") # print(f"{player2.__name__} moves: {''.join([Fore.GREEN+'O'+Style.RESET_ALL if move[0]==COOPERATE else Fore.RED+'X'+Style.RESET_ALL for move in history2])}") # print(f"Match scores: {player1.__name__} {match_scores[player1.__name__]}, {player2.__name__} {match_scores[player2.__name__]}") sorted_scores = sorted(total_scores.items(), key=lambda item: item[1], reverse=True) return sorted_scores # Run the tournament # for player, score in tournament(players): # 
print(f'\nFinal score: {player}: {score}') num_tournaments = 1000 results = {player.__name__: [] for player in players} for _ in range(num_tournaments): for player, score in tournament(players): results[player].append(score) # Calculate the median score for each player and store them in a list of tuples medians = [(player, np.median(scores)) for player, scores in results.items()] # Sort the list of tuples based on the median score sorted_medians = sorted(medians, key=lambda x: x[1]) num_players = len(sorted_medians) # Print the sorted median scores with gradient color for i, (player, median_score) in enumerate(sorted_medians): # Calculate the ratio of green and red based on the player's position green_ratio = i / (num_players - 1) red_ratio = 1 - green_ratio # Calculate the green and red components of the color green = int(green_ratio * 255) red = int(red_ratio * 255) # Create the color code color_code = f'\033[38;2;{red};{green};0m' player_color = player_colors.get(player, Fore.RESET) # Print the player name and median score with the color print(f'{player_color}{player}: {median_score} coins') The code itself creates matches of 100 rounds each, and then iterates over 1000 tournaments to get the median score per player. Here is the output of the results: always_cooperate: 1347.024 coins random_choice_cooperate: 1535.651 coins tit_for_two_tats: 1561.442 coins tit_for_tat: 1609.444 coins tat_for_tit: 1619.43 coins random_choice_neutral: 1663.855 coins always_defect: 1711.764 coins random_choice_defect: 1726.992 coins In the latest Veritasium video the dilemma is presented with this reward matrix, but Tit for Tat is presented as the most effective strategy. I cannot replicate that result, and that is why I'm opening this question.
I think the problem lies in the setup of your tournament. It is set up in such a way that always_defect never has to play against always_defect. So a player of any type never plays against a player of the same type. It seems to be an advantage to be the only always_defect in the group. Modifying the lines for i in range(len(players)): for j in range(i+1, len(players)): to for i in range(len(players)): for j in range(i, len(players)): makes it so that an always_defect also has to play against an always_defect, which changes the picture. However, I am not 100% sure that the accounting is done correctly for the case that a player type plays against a player of the same type.
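As a rough, untested sketch (to be merged into the round loop of the tournament function): when i == j the same strategy occupies both seats, and both sides' points go to the same key in total_scores, so that player banks roughly twice as many rounds as everyone else. One way to keep the accounting comparable is to credit only one side of a self-play match, treating the second seat as a throwaway copy:

if i == j:
    # Same strategy on both sides: count only one side's points,
    # so self-play contributes the same number of rounds as any other pairing.
    total_scores[player1.__name__] += score1
else:
    total_scores[player1.__name__] += score1
    total_scores[player2.__name__] += score2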
2
5
77,754,131
2024-1-3
https://stackoverflow.com/questions/77754131/macos-tkinter-app-terminating-invalid-parameter-not-satisfying-astring-ni
When im launching my app via CLI, it works without issue ./org_chart.app/Contents/MacOS/org_chart however when I launch via double click I met with the error *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: aString != nil' I used py2app to build the app. Im not sure where to begin debugging this, if someone could point me in the right direction? Thanks for your help! here's the full code for the small app import os, shutil import tkinter as tk from tkinter import filedialog, messagebox, Tk, Canvas, Entry, Text, Button, PhotoImage from tkinter import font as tkFont def build_org_chart(): print("im making a chart") return 'Done chart created!' if __name__ == "__main__": window = Tk() window.title("Org Chart Spreadsheet Generator") # Variables to store file paths window.geometry("1012x506") window.configure(bg = "#00403D") # Define the font properties my_font = tkFont.Font(family="Montserrat SemiBold", size=16, weight="normal") canvas = Canvas( window, bg = "#00403D", height = 506, width = 1012, bd = 0, highlightthickness = 0, relief = "ridge" ) canvas.place(x = 0, y = 0) canvas.create_rectangle( 308.0, 0.0, 1012.0, 506.0, fill="#FFFFFF", outline="") canvas.create_text( 320.0, 18.0, anchor="nw", text="Org Chart", fill="#000000", font=("Montserrat Bold", 64 * -1) ) window.resizable(False, False) window.mainloop() Also now my app is so small its still crashing im thinking it could be something in the setup file too so ive added that code below import os from setuptools import setup def list_files(directory): base_path = os.path.abspath(directory) paths = [] for root, directories, filenames in os.walk(base_path): for filename in filenames: # Exclude .DS_Store files if you are on macOS if filename != '.DS_Store': paths.append(os.path.join(root, filename)) return paths # Your assets folder assets_folder = 'assets' # Listing all files in the assets folder assets_files = list_files(assets_folder) APP = ['org_chart_min.py'] DATA_FILES = [('assets', assets_files)] OPTIONS = { 'argv_emulation': True, 'packages': ['pandas', 'openpyxl','xlsxwriter'], 'plist': { 'CFBundleName': '_org_chart', 'CFBundleDisplayName': ' Org Chart', 'CFBundleGetInfoString': "Create a spreadsheet that populates our Lucid org chart template", 'CFBundleIdentifier': 'com.yourdomain.orgchart', 'CFBundleVersion': '0.1', 'CFBundleShortVersionString': '0.1', 'NSRequiresAquaSystemAppearance': True }, 'iconfile': 'org_chart.icns', } setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], )
I suggest you use PyInstaller. It works on macOS and compiles a Unix executable. Just run pip install pyinstaller, and after that, in the directory of your script, run pyinstaller -F main.py. Rename your file to main.py and it should compile into a single Unix executable. The -F flag bundles everything into one file. It works perfectly!
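Since this is a Tkinter GUI app, you may also want PyInstaller's --windowed flag so no terminal window opens, and --icon to keep your .icns icon; per the PyInstaller documentation something like pyinstaller --onefile --windowed --icon org_chart.icns org_chart_min.py should work (adjust the file names to your project).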
2
1
77,772,560
2024-1-7
https://stackoverflow.com/questions/77772560/merge-value-lists-of-two-dict-in-python
I have several dictionaries with equal keys. The values are lists. Now I want to merge the value lists. Input, e.g.: dict_1 = {"a":["1"], "b":["3"]} dict_2 = {"a":["2"], "b":["3"]} Required output: new_dict = {'a':["1","2"], 'b':["3","3"]} What is the fastest, pythonic way to get this result? I found this, but it doesn't fulfill my needs: merged_dic = {**dict_1, **dict_2} I tried other approaches too, but nothing solves my problem. Is there a built-in way that avoids looping over each element? I have a lot of dictionaries, and they are more complex than the example above. Thanks for any help!
you can use defaultdict with extend like this: from collections import defaultdict new_dict = defaultdict(list) for d in [dict_1, dict_2]: for key, value in d.items(): new_dict[key].extend(value)
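With the two example dicts this yields {'a': ['1', '2'], 'b': ['3', '3']} (as a defaultdict). If you need a plain dict afterwards, just convert it:

new_dict = dict(new_dict)  # {'a': ['1', '2'], 'b': ['3', '3']}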
2
4
77,771,673
2024-1-7
https://stackoverflow.com/questions/77771673/merging-dataframe-columns-in-python
I have a special dataframe called df; here is how it looks: RepID +Col01 +Col02 +Col03 -Col01 +Col04 +Col05 -Col03 -Col04 +Col06 -Col07 1 5 7 9 8 3 8 1 9 4 6 2 1 3 3 3 1 2 2 3 6 0 3 9 8 0 9 4 9 5 1 2 0 4 3 1 0 5 8 7 1 0 9 2 5 0 7 1 2 0 0 2 9 2 1 All values in the data are positive, but notice that each column name starts with either + or -. Some of these columns have a + version with no matching - column (such as +Col06). Some have a - version with no matching + column (such as -Col07). Some have both (such as +Col01 and -Col01). I want to normalise this dataset by subtracting the values in the - columns from the + columns, and change the column names so they have no + or - at the beginning, so the end table will look like this: RepID Col01 Col02 Col03 Col04 Col05 Col06 Col07 1 -3 7 8 -6 8 4 -6 2 -2 3 1 -2 2 6 -0 3 0 8 -5 3 9 2 0 4 -2 1 -1 8 7 9 -2 5 -2 7 -1 -9 0 2 -1 Is there any way I can do that?
No need for groupby, just extract the + and - columns separately, subtract with fill_value=0, concat to the ID: out = pd.concat([df[['RepID']], df.filter(regex='^\+') .rename(columns=lambda x: x[1:]) .sub(df.filter(regex='^-') .rename(columns=lambda x: x[1:]), fill_value=0) ], axis=1) Output: RepID Col01 Col02 Col03 Col04 Col05 Col06 Col07 0 1 -3 7.0 8 -6 8.0 4.0 -6.0 1 2 -2 3.0 1 -2 2.0 6.0 0.0 2 3 0 8.0 -5 3 9.0 2.0 0.0 3 4 -2 1.0 -1 8 7.0 9.0 -2.0 4 5 -2 7.0 -1 -9 0.0 2.0 -1.0
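For comparison, here is a rough, untested sketch of the groupby route (same idea: apply the sign from the prefix, then sum the columns that share a stripped name):

vals = df.drop(columns='RepID')
# multiply each column by +1/-1 according to its prefix
signed = vals.mul([-1 if c.startswith('-') else 1 for c in vals.columns], axis=1)
# sum columns that share the same name once the +/- prefix is stripped
out = signed.T.groupby(vals.columns.str.lstrip('+-')).sum().T
out.insert(0, 'RepID', df['RepID'])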
4
2
77,766,228
2024-1-5
https://stackoverflow.com/questions/77766228/insert-a-column-with-a-conditional-function-into-a-dataframe
I want to calculate the difference between the open and close price, taking into account whether a trade is a Buy or a Sell. I have the conditional function below, but I do not know how to insert a new column with the results into the original dataframe (which has a list of trades), and I am getting an error with this function. def pl_gap(type): if type in mt_trades['type'] == 'BUY': mt_trades['close_price'] - mt_trades['open_price'] else: mt_trades['open_price'] - mt_trades['close_price'] mt_trades['pl_gap'] = mt_trades.apply(pl_gap)
I also found out that this solution works: mt_trades["pl_gap"] = mt_trades.apply( lambda row: ( row["close_price"] - row["open_price"] if row["type"] == "BUY" else row["open_price"] - row["close_price"] ), axis=1, )
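If the frame is large, a vectorized variant avoids the row-wise apply entirely; a quick sketch using numpy.where, assuming the same column names:

import numpy as np

sign = np.where(mt_trades["type"] == "BUY", 1, -1)
mt_trades["pl_gap"] = sign * (mt_trades["close_price"] - mt_trades["open_price"])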
2
0
77,771,119
2024-1-6
https://stackoverflow.com/questions/77771119/gurobi-unsupported-operand-types-for-int-and-tupledict
I have this constraint with a big-M parameter and an auxiliary binary variable w: for i in customers: for j in customers: if i != j: mdl.addConstr(y[j] + z[j] <= y[i] + z[i] - df.demand[j]*(x1[i,j] + x2[i,j]) + 100000 * (1 - w), name= 'C8') When I run the code, I get the following error: TypeError: unsupported operand type(s) for -: 'int' and 'tupledict' w is defined as follows: w = mdl.addVars(0,1,vtype=GRB.BINARY, name='w') I couldn't figure out what the problem is. Is it a problem with how w is defined? Thank you
w is a tupledict because it was defined using mdl.addVars. As w is a single variable, you need to define it using mdl.addVar instead: w = mdl.addVar(0,1,vtype=GRB.BINARY, name='w')
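Since w is binary, the 0/1 bounds are already implied by the variable type, so this shorter form should be equivalent:

w = mdl.addVar(vtype=GRB.BINARY, name='w')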
2
4
77,769,625
2024-1-6
https://stackoverflow.com/questions/77769625/how-to-generate-a-standalone-swagger-ui-docs-page-for-each-endpoint-in-fastapi
Is there a way to generate each endpoint on its own page instead of generating the documentation for all of the endpoints on a single page like below: I would like to show, let's say, the GET endpoint for books on its own dedicated page without all the other endpoints. I want this because I am using an iframe tag to embed the UI for a specific endpoint.
You could achieve having separate Swagger UI (OpenAPI) autodocs generated by using Sub applications. In the following example, you could access the Swagger UI autodocs for the main API at http://127.0.0.1:8000/docs, and the docs for the sub API at http://127.0.0.1:8000/subapi/docs. Example from fastapi import FastAPI app = FastAPI() @app.get("/app") def read_main(): return {"message": "Hello World from main app"} subapi = FastAPI() @subapi.get("/sub") def read_sub(): return {"message": "Hello World from sub API"} app.mount("/subapi", subapi) If you instead would like to group/sort the endpoints in a single /docs page, please have a look at this answer.
2
1
77,766,938
2024-1-5
https://stackoverflow.com/questions/77766938/is-it-possible-to-build-a-tree-form-expanded-airflow-dag-tasks-dynamic-task-ma
I want to generate dynamic tasks from the dynamic task output. Each mapped task returns a list, and I'd like to create a separate mapped task for each of the element of the list so the process will look like this: Is it possible to expand on the output of the dynamically mapped task so it will result in a sequence of map operations instead of a map and then reduce? What I tried: In my local environment, I'm using: Astronomer Runtime 9.6.0 based on Airflow 2.7.3+astro.2 Git Version: .release:9fad9363bb0e7520a991b5efe2c192bb3405b675 For the sake of the experiment, I'm using three tasks with a single string as an input and a string list as an output. 1. Expand over a group with expanded task (map over a group with mapped tasks): import datetime import logging from airflow.decorators import dag, task, task_group @dag(schedule_interval=None, start_date=datetime.datetime(2023, 9, 27)) def try_dag3(): @task def first() -> list[str]: return ["0", "1"] first_task = first() @task_group def my_group(input: str) -> list[str]: @task def second(input: str) -> list[str]: logging.info(f"input: {input}") result = [] for i in range(3): result.append(f"{input}_{i}") # ['0_0', '0_1', '0_2'] # ['1_0', '1_1', '1_2'] return result second_task = second.expand(input=first_task) @task def third(input: str, input1: str = None): logging.info(f"input: {input}, input1: {input1}") return input third_task = third.expand(input=second_task) my_group.expand(input=first_task) try_dag3() but it causes NotImplementedError: operator expansion in an expanded task group is not yet supported 2. expand over the expanded task result (map over a mapped tasks): import datetime import logging from airflow.decorators import dag, task @dag(start_date=datetime.datetime(2023, 9, 27)) def try_dag1(): @task def first() -> list[str]: return ["0", "1"] first_task = first() @task def second(input: str) -> list[str]: logging.info(f"source: {input}") result = [] for i in range(3): result.append(f"{input}_{i}") # ['0_0', '0_1', '0_2'] # ['1_0', '1_1', '1_2'] return result # this expands fine into two tasks from the list returned by first_task second_task = second.expand(input=first_task) @task def third(input: str): logging.info(f"source: {input}") return input # this doesn't expand - there are two mapped tasks, and the input value is a list, not a string third_task = third.expand(input=second_task) try_dag1() but the result of second dag is not expanded, and third task input is a string list instead: third[0] task log: [2024-01-05, 11:40:30 UTC] {try_dag1.py:30} INFO - source: ['0_0', '0_1', '0_2'] 3. Expand over the expanded task with const input (to test if the structure is possible): import datetime import logging from airflow.decorators import dag, task @dag(start_date=datetime.datetime(2023, 9, 27)) def try_dag0(): @task def first() -> list[str]: return ["0", "1"] first_task = first() @task def second(input: str) -> list[str]: logging.info(f"input: {input}") result = [] for i in range(3): result.append(f"{input}_{i}") # ['0_0', '0_1', '0_2'] # ['1_0', '1_1', '1_2'] return result second_task = second.expand(input=first_task) @task def third(input: str, input1: str = None): logging.info(f"input: {input}, input1: {input1}") return input third_task = third.expand(input=second_task, input1=["a", "b", "c"]) try_dag0() It looks like the mapped tasks can be expanded over a constant list passed to input1, but input value is a nonexpanded list: third[0] task log: [2024-01-05, 12:51:39 UTC] {try_dag0.py:33} INFO - input: ['0_0', '0_1', '0_2'], input1: a
You would need to add a task which collects and flattens the result of second (note that chain has to be imported from itertools). from itertools import chain @task def first() -> list[str]: return ['1', '2'] @task def second(input: str) -> list[str]: return [f"{input}_{i}" for i in ['1', '2', '3']] @task def second_collect(input: list[list[str]]) -> list[str]: return list(chain.from_iterable(input)) @task def third(input: str) -> str: return f"Result: {input}!" sc = second_collect(second.expand(input=first())) third.expand(input=sc) Result of second_collect is ['1_1', '1_2', '1_3', '2_1', '2_2', '2_3'] (flattened result of mapped tasks). Results of third mapped tasks are: Result: 1_1! Result: 1_2! ...
2
4
77,756,723
2024-1-4
https://stackoverflow.com/questions/77756723/type-hints-for-class-decorator
I have a class decorator which removes one method and adds another to a class. How could I provide type hints for that? I've obviously tried to research this myself, to no avail. Most people claim this requires an intersection type. Is there any recommended solution? Something I'm missing? Example code: class MyProtocol(Protocol): def do_check(self) -> bool: raise NotImplementedError def decorator(clazz: type[MyProtocol]) -> ???: do_check: Callable[[MyProtocol], bool] = getattr(clazz, "do_check") def do_assert(self: MyProtocol) -> None: assert do_check(self) delattr(clazz, "do_check") setattr(clazz, "do_assert", do_assert) return clazz @decorator class MyClass(MyProtocol): def do_check(self) -> bool: return False mc = MyClass() mc.do_check() # hints as if exists, but doesn't mc.do_assert() # no hints, but works I guess what I'm looking for is the correct return type for decorator.
There is no type annotation which can do what you want. Even with intersection types, there won't be a way to express the action of deleting an attribute - the best you can do is making an intersection with a type which overrides do_check with some kind of unusable descriptor. What you're asking for can be instead done with a mypy plugin. The result can look like this after a basic implementation: from package.decorator_module import MyProtocol, decorator @decorator class MyClass(MyProtocol): def do_check(self) -> bool: return False >>> mc = MyClass() # mypy: Cannot instantiate abstract class "MyClass" with abstract attribute "do_check" [abstract] >>> mc.do_check() # raises `NotImplementedError` at runtime >>> mc.do_assert() # OK Note that mc.do_check exists but is detected to be an abstract method by the plugin. This matches the runtime implementation, as delattr deleting MyClass.do_check merely exposes the parent MyProtocol.do_check instead, and non-overridden methods on a typing.Protocol are abstract methods and you can't instantiate the class without overriding them. Here's a basic implementation. Use the following directory structure: project/ mypy.ini mypy_plugin.py test.py package/ __init__.py decorator_module.py Contents of mypy.ini [mypy] plugins = mypy_plugin.py Contents of mypy_plugin.py from __future__ import annotations import typing_extensions as t import mypy.plugin import mypy.plugins.common import mypy.types if t.TYPE_CHECKING: import collections.abc as cx import mypy.nodes def plugin(version: str) -> type[DecoratorPlugin]: return DecoratorPlugin class DecoratorPlugin(mypy.plugin.Plugin): # See https://mypy.readthedocs.io/en/stable/extending_mypy.html#current-list-of-plugin-hooks # Since this is a class definition modification with a class decorator # and the class body should have been semantically analysed by the time # the class definition is to be manipulated, we choose # `get_class_decorator_hook_2` def get_class_decorator_hook_2( self, fullname: str ) -> cx.Callable[[mypy.plugin.ClassDefContext], bool] | None: if fullname == "package.decorator_module.decorator": return class_decorator_hook return None def class_decorator_hook(ctx: mypy.plugin.ClassDefContext) -> bool: mypy.plugins.common.add_method_to_class( ctx.api, cls=ctx.cls, name="do_assert", args=[], # Instance method with (1 - number of bound params) arguments, i.e. 0 arguments return_type=mypy.types.NoneType(), self_type=ctx.api.named_type(ctx.cls.fullname), ) del ctx.cls.info.names["do_check"] # Remove `do_check` from the class return True # Returns whether class is fully defined or needs another round of semantic analysis Contents of test.py from package.decorator_module import MyProtocol, decorator @decorator class MyClass(MyProtocol): def do_check(self) -> bool: return False mc = MyClass() # mypy: Cannot instantiate abstract class "MyClass" with abstract attribute "do_check" [abstract] mc.do_check() # raises `NotImplementedError` at runtime mc.do_assert() # OK Contents of package/decorator_module.py from __future__ import annotations import typing_extensions as t if t.TYPE_CHECKING: import collections.abc as cx _T = t.TypeVar("_T") class MyProtocol(t.Protocol): def do_check(self) -> bool: raise NotImplementedError # The type annotations here don't mean anything for the mypy plugin, # which does its own magic when it sees `@package.decorator_module.decorator`. 
def decorator(clazz: type[_T]) -> type[_T]: do_check: cx.Callable[[_T], bool] = getattr(clazz, "do_check") def do_assert(self: _T) -> None: assert do_check(self) delattr(clazz, "do_check") setattr(clazz, "do_assert", do_assert) return clazz
2
2
77,767,405
2024-1-5
https://stackoverflow.com/questions/77767405/creating-a-new-column-when-values-in-another-column-is-not-duplicate
This is my DataFrame: import pandas as pd import numpy as np df = pd.DataFrame( { 'a': [98, 97, 100, 101, 103, 110, 108, 109, 130, 135], 'b': [3, 3, 3, 3, 3, 3, 3, 3, 3, 3], 'c': [np.nan, np.nan, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0], 'd': [92, 92, 92, 92, 92, 92, 92, 92, 92, 92], } ) And this is the expected output. I want to to create column x: a b c d x 0 98 3 NaN 92 92 1 97 3 NaN 92 92 2 100 3 1.0 92 94 3 101 3 1.0 92 94 4 103 3 1.0 92 94 5 110 3 2.0 92 104 6 108 3 2.0 92 104 7 109 3 2.0 92 104 8 130 3 3.0 92 124 9 135 3 3.0 92 124 Steps: a) When c is not duplicated, df['x'] = df.a - (df.b * 2) b) If df.c == np.nan, df['x'] = df.d For example: The first new value in c is row 2. So df['x'] = 100 - (3 * 2) which is 94 and df['x'] = 94 until a new value in c appears which is row 5. For row 5, df['x'] = 110 - (3 * 2) which is 104. And the logic continues. This is what I have tried: df['x'] = df.a - (df.b * 2) df.loc[df.c.isna(), 'x'] = df.d df['x'] = df.x.cummax()
You can use duplicated, mask, groupby.ffill and fillna: # identify duplicated "c" m = df['c'].duplicated() # compute a-(2*b) # mask the duplicated "c" # ffill per group # replace NaN with "d" df['x'] = (df['a'].sub(df['b'] * 2) .mask(m) .groupby(df['c']).ffill() .fillna(df['d']) ) Variant to work by groups of successive identical "c": g = df['c'].ne(df['c'].shift()).cumsum() m = g.duplicated() df['x'] = (df['a'].sub(df['b'] * 2) .mask(m) .groupby(g).ffill() .where(df['c'].notna(), df['d']) ) Output: a b c d x 0 98 3 NaN 92 92.0 1 97 3 NaN 92 92.0 2 100 3 1.0 92 94.0 3 101 3 1.0 92 94.0 4 103 3 1.0 92 94.0 5 110 3 2.0 92 104.0 6 108 3 2.0 92 104.0 7 109 3 2.0 92 104.0 8 130 3 3.0 92 124.0 9 135 3 3.0 92 124.0
2
2
77,766,887
2024-1-5
https://stackoverflow.com/questions/77766887/merge-3-dataframes-with-different-timesteps-10min-and-15min-and-30min-using-pa
The goal is to merge three different dataframes having different timesteps (10min, 15min and 30min). The code must recognize which timestep to consider first and identify the next available timestep. In this example 2019/04/02 10:40:00 does not exist in the dataframes dataset, therefore the next timestep to consider after 2019/04/02 10:30:00 would be 2019/04/02 10:45:00. df1: Timestamp data1 2019/04/02 10:00:00 1 2019/04/02 10:10:00 1 2019/04/02 10:20:00 1 2019/04/02 10:30:00 1 df2: Timestamp data2 2019/04/02 10:00:00 2 2019/04/02 10:15:00 22 2019/04/02 10:30:00 222 2019/04/02 10:45:00 2222 2019/04/02 11:00:00 22222 df3: Timestamp data3 2019/04/02 10:00:00 3 2019/04/02 10:30:00 33 2019/04/02 11:00:00 333 2019/04/02 11:30:00 3333 desired result: Timestamp data1 data2 data3 2019/04/02 10:00:00 1 2 3 2019/04/02 10:10:00 1 NaN NaN 2019/04/02 10:15:00 NaN 22 NaN 2019/04/02 10:20:00 1 NaN NaN 2019/04/02 10:30:00 1 222 33 2019/04/02 10:45:00 NaN 2222 NaN 2019/04/02 11:00:00 NaN 22222 333 2019/04/02 11:30:00 NaN NaN 3333 I used the pandas concat function and the merge function, but they did not deliver the desired result.
# Convert Timestamp columns to datetime df1['Timestamp'] = pd.to_datetime(df1['Timestamp']) df2['Timestamp'] = pd.to_datetime(df2['Timestamp']) df3['Timestamp'] = pd.to_datetime(df3['Timestamp']) # Sort the DataFrames based on Timestamp df1 = df1.sort_values('Timestamp') df2 = df2.sort_values('Timestamp') df3 = df3.sort_values('Timestamp') # Merge using merge_asof result = pd.merge_asof(df1, df2, on='Timestamp', direction='nearest') result = pd.merge_asof(result, df3, on='Timestamp', direction='nearest') EDIT: IF YOU WANT TO KEEP NAN VALUES # Convert Timestamp columns to datetime df1['Timestamp'] = pd.to_datetime(df1['Timestamp']) df2['Timestamp'] = pd.to_datetime(df2['Timestamp']) df3['Timestamp'] = pd.to_datetime(df3['Timestamp']) # Merge using merge with how='outer' result = pd.merge(df1, df2, on='Timestamp', how='outer') result = pd.merge(result, df3, on='Timestamp', how='outer') # Sort the result based on Timestamp result = result.sort_values('Timestamp')
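If you only need the union of timestamps (keeping NaN where a frame has no reading), a single concat on the Timestamp index is an equivalent, shorter sketch:

result = (pd.concat([d.set_index('Timestamp') for d in (df1, df2, df3)], axis=1)
            .sort_index()
            .reset_index())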
4
1
77,766,721
2024-1-5
https://stackoverflow.com/questions/77766721/dataframe-convert-the-columns-in-a-df-to-list-of-row-in-python-dataframe
I have a pandas dataframe like the one below: id name value1 value2 value3 Type ======================================================= 1 AAA 1.0 1.5 1.8 NEW 2 BBB 2.0 2.3 2.5 NEW 3 CCC 3.0 3.6 3.7 NEW I have to convert the above df into something like the table below, so that I can join it to another df based on name (which will be unique in my case): Type AAA BBB CCC ================================================================ NEW [1.0, 1.5, 1.8] [2.0, 2.3, 2.5] [3.0, 3.6, 3.7] Is there any way to achieve this without too many looping statements? Any help is much appreciated. Thanks,
A possible solution, which first adds column value containing the list and then uses pivot: (df.assign(value=df.loc[:, 'value1':'value3'].apply(list, axis=1)) .pivot(index='Type', columns='name', values='value') .rename_axis(None, axis=1).reset_index()) Output: Type AAA BBB CCC 0 NEW [1.0, 1.5, 1.8] [2.0, 2.3, 2.5] [3.0, 3.6, 3.7]
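An alternative, untested sketch that builds the lists row-wise and spreads them with unstack (same output, different spelling):

out = (df.set_index(['Type', 'name'])[['value1', 'value2', 'value3']]
         .apply(list, axis=1)
         .unstack('name')
         .reset_index()
         .rename_axis(None, axis=1))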
2
4
77,752,955
2024-1-3
https://stackoverflow.com/questions/77752955/dynamically-discounted-cumulative-sum-in-numpy
I have a frequently occurring problem where I have two arrays of the same length: one with values and one with dynamic decay factors; and wish to calculate a vector of the decayed cumulative sum at each position. Using a Python loop to express the desired recurrence we have the following: c = np.empty_like(x) c[0] = x[0] for i in range(1, len(x)): c[i] = c[i-1] * d[i] + x[i] The Python code is very clear and readable, but slows things down significantly. I can get around this by using Numba to JIT-compile it, or Cython to precompile it. If the discount factors were a fixed number (which is not the case), I could have used SciPy's signals library and do an lfilter (see https://stackoverflow.com/a/47971187/1017986). Is there a more "Numpythonic" way to express this without sacrificing clarity or efficiency?
Mathematically speaking, this is a first-order non-homogeneous recurrence relation with variable coefficients. Please note that the coefficients (in your case, the values in d[1:]) must be different from 0. Here is a way to solve your recurrence relation using NumPy functions. Note that d[0] is never used by your algorithm, so it is forced to 1 before taking the cumulative product. In this solution, as you can see, the values in c do not depend on previous values in c. I.e., c[i] does not depend on c[i-1]. g = np.insert(x[1:], 0, 0) # d[0] is ignored, as in the loop version: pd = np.cumprod(np.concatenate(([1.0], d[1:]))) c = pd * (x[0] + np.cumsum(g / pd)) Let's look at an example. Here I define two functions, one that computes the recurrence relation with your code, and one that uses NumPy: def rec(d, x): c = [x[0]] for i in range(1, len(x)): c.append(d[i] * c[-1] + x[i]) return np.array(c) def num(d, x): g = np.insert(x[1:], 0, 0) pd = np.cumprod(np.concatenate(([1.0], d[1:]))) return pd * (x[0] + np.cumsum(g / pd)) Here are some usage examples: >>> rec(np.array([3, 4, 5.2, 6.1]), np.array([2, 3.2, 4, 5.7])) array([ 2. , 11.2 , 62.24 , 385.364]) >>> num(np.array([3, 4, 5.2, 6.1]), np.array([2, 3.2, 4, 5.7])) array([ 2. , 11.2 , 62.24 , 385.364]) >>> rec(np.array([3, 4.3, 5.2]), np.array([6.7, 7.1, 1.2])) array([ 6.7 , 35.91 , 187.932]) >>> num(np.array([3, 4.3, 5.2]), np.array([6.7, 7.1, 1.2])) array([ 6.7 , 35.91 , 187.932]) If you're interested in the mathematical details behind this answer (which are out of scope here), check this Wikipedia page. Edit This solution, while mathematically correct, might introduce numerical issues during the computation (because of the numpy.cumprod function), especially if d contains many values that are close to 0.
3
1
77,765,051
2024-1-5
https://stackoverflow.com/questions/77765051/what-is-the-proper-replacement-for-scipy-interpolate-interp1d
The class scipy.interpolate.interp1d has a message that reads: Legacy This class is considered legacy and will no longer receive updates. This could also mean it will be removed in future SciPy versions. I'm not sure why this is marked for deprecation, but I use it heavily in my code to return an interpolating function, which numpy.interp() does not handle. What is the proper replacement for scipy.interpolate.interp1d?
According to the documentation: interp1d is considered legacy API and is not recommended for use in new code. Consider using more specific interpolators instead. https://docs.scipy.org/doc/scipy/tutorial/interpolate/1D.html#legacy-interface-for-1-d-interpolation-interp1d it is asking to: Consider using more specific interpolators instead It also says The ‘cubic’ kind of interp1d is equivalent to make_interp_spline, and the ‘linear’ kind is equivalent to numpy.interp while also allowing N-dimensional y arrays. Another set of interpolations in interp1d is nearest, previous, and next, where they return the nearest, previous, or next point along the x-axis. Nearest and next can be thought of as a special case of a causal interpolating filter.
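For example, a minimal sketch of the suggested replacement: make_interp_spline returns a callable (a BSpline), much like interp1d did, so the usage is nearly identical.

import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0, 10, 20)
y = np.sin(x)

f_linear = make_interp_spline(x, y, k=1)  # roughly interp1d(..., kind='linear')
f_cubic = make_interp_spline(x, y, k=3)   # roughly interp1d(..., kind='cubic')

x_new = np.linspace(0, 10, 200)
y_new = f_cubic(x_new)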
6
1
77,764,733
2024-1-5
https://stackoverflow.com/questions/77764733/how-to-set-zorder-across-axis-in-matplotlib
I want to make a graph that contains bars, the value of each bar noted above it, and a line on the secondary axis, but I can't configure the order of the elements the way I want: the bars should be the furthest element, followed by the line, and then the text. So far I can only change the position of an entire axis, and either the line goes over the text or it ends up behind the bars. import matplotlib.pyplot as plt import seaborn as sns years = [2019, 2020, 2021, 2022, 2023] values1 = [1350, 1360, 1420, 1480, 1650] values2 = [57, 62, 60.5, 59.7, 62.3] fig, ax = plt.subplots(figsize=(6.2, 7.36), dpi = 150) bars = ax.bar(years, values1) heights = [] for bar in bars: bar.set_zorder(2) height = bar.get_height() heights.append(height) ax.annotate('{}'.format(round(height, 1)), xy=(bar.get_x() + bar.get_width() / 2, height), xytext=(0, 10), # 3 points vertical offset textcoords="offset points", ha='center', va='bottom', fontsize = 20).set_zorder(1) ax2 = ax.twinx() lineplot = sns.lineplot(x = years, y = values2, ax = ax2, color="gray") The graph that the code produces: graph with the line in front of the text annotations. I tried changing the zorder on both axes and directly on the plot elements.
Unfortunately respecting zorder across twinned axes is not possible. A workaround for annotations is to add them to the second axes, but using the data transform of the first. See the xycoords parameter added in the call to annotate here: import matplotlib.pyplot as plt years = [2019, 2020, 2021, 2022, 2023] values1 = [1350, 1360, 1420, 1480, 1650] values2 = [57, 62, 60.5, 59.7, 62.3] fig, ax = plt.subplots(figsize=(6.2, 7.36), dpi = 150) bars = ax.bar(years, values1) ax2 = ax.twinx() lineplot = ax2.plot(years, values2, ls="-", color="gray") for bar in bars: height = bar.get_height() ax2.annotate('{}'.format(round(height, 1)), xy=(bar.get_x() + bar.get_width() / 2, height), xytext=(0, 10), # 3 points vertical offset textcoords="offset points", ha='center', va='bottom', fontsize = 20, xycoords=ax.transData) plt.show() Note that this can be simplified by using the bar_label method, which is a convenience wrapper for adding annotations to bars: import matplotlib.pyplot as plt years = [2019, 2020, 2021, 2022, 2023] values1 = [1350, 1360, 1420, 1480, 1650] values2 = [57, 62, 60.5, 59.7, 62.3] fig, ax = plt.subplots(figsize=(6.2, 7.36), dpi = 150) bars = ax.bar(years, values1) ax2 = ax.twinx() lineplot = ax2.plot(years, values2, ls="-", color="gray") ax2.bar_label(bars, fontsize=20, padding=10, xycoords=ax.transData) plt.show()
2
1
77,764,710
2024-1-5
https://stackoverflow.com/questions/77764710/how-to-set-multiple-values-in-a-pandas-dataframe-at-once
I have a large dataframe and a list of many locations I need to set to a certain value. Currently I'm iterating over the locations to set the values one by one: import pandas as pd import numpy as np #large dataframe column_names = np.array(range(100)) np.random.shuffle(column_names) row_names = np.array(range(100)) np.random.shuffle(row_names) df = pd.DataFrame(columns=column_names, index=row_names) #index values to be set ix = np.random.randint(0, 100,1000) #column values to be set iy = np.random.randint(0, 100,1000) #setting the specified locations to 1, one by one for k in range(len(ix)): df.loc[ix[k], iy[k]]=1 This appears to be prohibitively slow. For the above example, on my machine, the last for loop takes 0.35 seconds. On the other hand, if I do df.loc[ix, iy]=1 only takes 0.035 seconds so it is ten times faster. Unfortunately, it does not give the correct result, as it sets all combinations of elements of ix and iy to 1. I was wondering whether there is a similarly fast way to set values of many locations at once, avoiding the iteration over the locations?
You can access the underlying numpy array with .values and use the position indices after conversion: cols = pd.Series(range(df.shape[1]), index=df.columns) idx = pd.Series(range(df.shape[0]), index=df.index) df.values[idx.reindex(ix), cols.reindex(iy)] = 1 Minimal example: # input df = pd.DataFrame(index=['a', 'b', 'c'], columns=['A', 'B', 'C']) ix = ['a', 'b', 'c'] iy = ['A', 'C', 'A'] # output A B C a 1 NaN NaN b NaN NaN 1 c 1 NaN NaN previous answer df.values[ix, iy] = 1 Minimal example: df = pd.DataFrame(index=range(5), columns=range(5)) ix = [1, 2, 4] iy = [1, 3, 2] df.values[ix, iy] = 1 Output: 0 1 2 3 4 0 NaN NaN NaN NaN NaN 1 NaN 1 NaN NaN NaN 2 NaN NaN NaN 1 NaN 3 NaN NaN NaN NaN NaN 4 NaN NaN 1 NaN NaN
2
2