question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
77,566,724 | 2023-11-28 | https://stackoverflow.com/questions/77566724/handle-circular-import-of-subclasses-types-in-abstract-class | I am facing a circular import error when I define type aliases for the subclasses of an abstract class. This is an example of what I'm trying to achieve: #abstract_file_builder.py from abc import ABC, abstractmethod from typing import Generic, MutableSequence, TypeVar from mymodule.type_a_file_builder import TypeARow from mymodule.type_b_file_builder import TypeBRow GenericRow = TypeVar("GenericRow", TypeARow, TypeBRow) class AbstractFileBuilder(ABC, Generic[GenericRow]): ... @abstractmethod def generate_rows( self, ) -> MutableSequence[GenericRow]: pass #type_a_file_builder.py from typing import Any, MutableSequence from mymodule.abstract_file_builder import AbstractFileBuilder TypeARow = MutableSequence[Any] class TypeAFileBuilder(AbstractFileBuilder[TypeARow]): ... def generate_rows( self, ) -> MutableSequence[TypeARow]: ... # Code logic for TypeA return rows #type_b_file_builder.py from typing import MutableSequence, Union from mymodule.abstract_file_builder import AbstractFileBuilder TypeBRow = MutableSequence[Union[int, float]] class TypeBFileBuilder(AbstractFileBuilder[TypeBRow]): ... def generate_rows( self, ) -> MutableSequence[TypeBRow]: ... # Code logic for TypeB return rows What is the most pythonic way to solve this? I know I can use the TYPE_CHECKING variable to avoid runtime imports, but that feels like a patch instead of a good solution. Another thing that can solve the problem is to define the type aliases in the abstract class, but that would ruin the whole purpose of having an abstract class and not having to know what's implemented below. I'm not sure however if I can do some form of an "abstract" type alias inside the abstract_file_builder.py file, and then declare the TypeARow and TypeBRow types as children of that abstract type. I must note that the solution must work with, at least, Python 3.9. If it supports versions back to 3.7 it'll be better, but not super necessary. | By placing the following line in your abstract class you kind of add information about your concrete subclasses to your abstract class: GenericRow = TypeVar("GenericRow", TypeARow, TypeBRow) That doesn't look right to me. I think you should rather remove this dependency, because in consequence it would mean that you potentially would have to change the definition of the return type of your abstract method again in case you add another subclass later. I would only consider this in case you think your subclasses are pretty fixed and neither the return type changes nor new subclasses are added. In that case, you could just put the three classes in one file and put the definitions of the return types before the definition of your abstract class. But because we usually do not know if we add other subclasses later (why would we care about creating abstract superclasses otherwise), I think the superclass should not have knowledge about the subclasses. In that case I would rather type the abstract method with the broader type, e.g. like this: #abstract_file_builder.py from abc import ABC, abstractmethod from typing import Any, MutableSequence class AbstractFileBuilder(ABC): ... @abstractmethod def generate_rows( self, ) -> MutableSequence[Any]: pass | 2 | 2 |
77,566,151 | 2023-11-28 | https://stackoverflow.com/questions/77566151/pymunk-body-object-being-problematic | I am using this video tutorial: Video I am making a ball (with 0 mass at the moment) in pymunk and trying to show it on pygame but it is not working. I tried doing this to make a ball in pymunk and pygame and I was expecting a ball with no movement (I will make it move later on): import pymunk import pygame pygame.init() space=pymunk.Space() FPS=60 clock = pygame.time.Clock() body = pymunk.Body() body.position = 400,400 shape=pymunk.Circle(body,10) space.add(body,shape) (width, height) = (600, 400) display = pygame.display.set_mode((width, height)) def main(): while True: for event in pygame.event.get(): if event.type == pygame.QUIT: return display.fill((255,255,255)) x,y=body.position pygame.draw.circle(display,(255,0,0),(int(x),int(y)),10) clock.tick(FPS) space.step(1/FPS) main() pygame.display.quit() pygame.quit() I got an error saying that the position is Vec2d(Nan,Nan) Then, I ran this: print(body.position) Output:Vec2d(nan, nan) body.position = 400, 400 print(body.position) Output:Vec2d(nan, nan) | When creating the body, you must specify a mass and a moment not equal to 0 (see pymunk.Body): body = pymunk.Body() body = pymunk.Body(1, 1) In addition, pygame.display.flip() is missing after drawing the objects in the scene and if the body should fall, you must also specify the gravity (e.g. space.gravity = (0, 100)) | 2 | 2 |
77,559,175 | 2023-11-27 | https://stackoverflow.com/questions/77559175/how-can-i-change-the-size-of-a-checkbox-in-pysimplegui | I made some checkboxes with PySimpleGUI like so: import PySimpleGUI as sg WINDOW_SIZE = (1280, 710) CHECKBOX_SIZE = (1000, 40) sg.theme('Dark Blue 3') indicator_column = [ [sg.Checkbox("First", size=CHECKBOX_SIZE)], [sg.Checkbox("Second")], # in the actual code there are more checkboxes ] layout = [ [sg.Column(indicator_column)] ] window = sg.Window('Window Title', layout, finalize=True, size=WINDOW_SIZE) while True: event, values = window.read() print(event, values) if event == sg.WIN_CLOSED or event == 'Exit': break window.close() I want to adjust the size of the checkbox by using its size property, but this seems to have no effect. Why doesn't the first checkbox have a custom size when set as in the above code? How can I change the size? | You'll find one solution posted on Trinket where you can execute it. It's based on a PySimpleGUI Demo Program, also posted in the PSG Repo and in the psgdemos PyPI release. Like the normal Checkbox Element, you can click the box or the text next to it. This bit of code also flips the text, something not normally done so you can remove that line of code if desired. Another way is to use Text Elements instead of Image Elements, swapping Base64 images with UNICODE chars. To get the value of a checkbox 1: window[window[('-IMAGE-', 1)].metadata Or you can add to your event loop right after your read: values['-CB 1-'] = window[('-IMAGE-', 1)].metadata values['-CB 2-'] = window[('-IMAGE-', 2)].metadata Like this event, values = window.read() print(event) if event == sg.WIN_CLOSED or event == 'Exit': break values['-CB 1-'] = window[('-IMAGE-', 1)].metadata values['-CB 2-'] = window[('-IMAGE-', 2)].metadata and then reference the checkboxes using the values dictionary just as if they were PySimpleGUI Checkbox Elements: if values['-CB 1-']: # your checkbox is true import PySimpleGUI as sg """ Demo - Custom Checkboxes done simply The Base64 Image encoding feature of PySimpleGUI makes it possible to create beautiful GUIs very simply These 2 checkboxes required 3 extra lines of code than a normal checkbox. 1. Keep track of the current value using the Image Element's Metadata 2. Change / Update the image when clicked 3. The Base64 image definition Enable the event on the Image with the checkbox so that you can take action (flip the value) """ def main(): layout = [[sg.Text('Fancy Checkboxes... 
Simply')], [sg.Image(checked, key=('-IMAGE-', 1), metadata=True, enable_events=True), sg.Text(True, enable_events=True, k=('-TEXT-', 1))], [sg.Image(unchecked, key=('-IMAGE-', 2), metadata=False, enable_events=True), sg.Text(False, enable_events=True, k=('-TEXT-', 2))], [sg.Button('Go'), sg.Button('Exit')]] window = sg.Window('Custom Checkboxes', layout, font="_ 14") while True: event, values = window.read() print(event, values) if event == sg.WIN_CLOSED or event == 'Exit': break # if a checkbox is clicked, flip the vale and the image if event[0] in ('-IMAGE-', '-TEXT-'): cbox_key = ('-IMAGE-', event[1]) window[cbox_key].metadata = not window[cbox_key].metadata window[cbox_key].update(checked if window[cbox_key].metadata else unchecked) # Update the string next to the checkbox (optional) window[('-TEXT-', event[1])].update(window[cbox_key].metadata) window.close() if __name__ == '__main__': checked = b'iVBORw0KGgoAAAANSUhEUgAAAB4AAAAeCAYAAAA7MK6iAAAKMGlDQ1BJQ0MgUHJvZmlsZQAAeJydlndUVNcWh8+9d3qhzTAUKUPvvQ0gvTep0kRhmBlgKAMOMzSxIaICEUVEBBVBgiIGjIYisSKKhYBgwR6QIKDEYBRRUXkzslZ05eW9l5ffH2d9a5+99z1n733WugCQvP25vHRYCoA0noAf4uVKj4yKpmP7AQzwAAPMAGCyMjMCQj3DgEg+Hm70TJET+CIIgDd3xCsAN428g+h08P9JmpXBF4jSBInYgs3JZIm4UMSp2YIMsX1GxNT4FDHDKDHzRQcUsbyYExfZ8LPPIjuLmZ3GY4tYfOYMdhpbzD0i3pol5IgY8RdxURaXky3iWyLWTBWmcUX8VhybxmFmAoAiie0CDitJxKYiJvHDQtxEvBQAHCnxK47/igWcHIH4Um7pGbl8bmKSgK7L0qOb2doy6N6c7FSOQGAUxGSlMPlsult6WgaTlwvA4p0/S0ZcW7qoyNZmttbWRubGZl8V6r9u/k2Je7tIr4I/9wyi9X2x/ZVfej0AjFlRbXZ8scXvBaBjMwDy97/YNA8CICnqW/vAV/ehieclSSDIsDMxyc7ONuZyWMbigv6h/+nwN/TV94zF6f4oD92dk8AUpgro4rqx0lPThXx6ZgaTxaEb/XmI/3HgX5/DMISTwOFzeKKIcNGUcXmJonbz2FwBN51H5/L+UxP/YdiftDjXIlEaPgFqrDGQGqAC5Nc+gKIQARJzQLQD/dE3f3w4EL+8CNWJxbn/LOjfs8Jl4iWTm/g5zi0kjM4S8rMW98TPEqABAUgCKlAAKkAD6AIjYA5sgD1wBh7AFwSCMBAFVgEWSAJpgA+yQT7YCIpACdgBdoNqUAsaQBNoASdABzgNLoDL4Dq4AW6DB2AEjIPnYAa8AfMQBGEhMkSBFCBVSAsygMwhBuQIeUD+UAgUBcVBiRAPEkL50CaoBCqHqqE6qAn6HjoFXYCuQoPQPWgUmoJ+h97DCEyCqbAyrA2bwAzYBfaDw+CVcCK8Gs6DC+HtcBVcDx+D2+EL8HX4NjwCP4dnEYAQERqihhghDMQNCUSikQSEj6xDipFKpB5pQbqQXuQmMoJMI+9QGBQFRUcZoexR3qjlKBZqNWodqhRVjTqCakf1oG6iRlEzqE9oMloJbYC2Q/ugI9GJ6Gx0EboS3YhuQ19C30aPo99gMBgaRgdjg/HGRGGSMWswpZj9mFbMecwgZgwzi8ViFbAGWAdsIJaJFWCLsHuxx7DnsEPYcexbHBGnijPHeeKicTxcAa4SdxR3FjeEm8DN46XwWng7fCCejc/Fl+Eb8F34Afw4fp4gTdAhOBDCCMmEjYQqQgvhEuEh4RWRSFQn2hKDiVziBmIV8TjxCnGU+I4kQ9InuZFiSELSdtJh0nnSPdIrMpmsTXYmR5MF5O3kJvJF8mPyWwmKhLGEjwRbYr1EjUS7xJDEC0m8pJaki+QqyTzJSsmTkgOS01J4KW0pNymm1DqpGqlTUsNSs9IUaTPpQOk06VLpo9JXpSdlsDLaMh4ybJlCmUMyF2XGKAhFg+JGYVE2URoolyjjVAxVh+pDTaaWUL+j9lNnZGVkLWXDZXNka2TPyI7QEJo2zYeWSiujnaDdob2XU5ZzkePIbZNrkRuSm5NfIu8sz5Evlm+Vvy3/XoGu4KGQorBToUPhkSJKUV8xWDFb8YDiJcXpJdQl9ktYS4qXnFhyXwlW0lcKUVqjdEipT2lWWUXZSzlDea/yReVpFZqKs0qySoXKWZUpVYqqoypXtUL1nOozuizdhZ5Kr6L30GfUlNS81YRqdWr9avPqOurL1QvUW9UfaRA0GBoJGhUa3RozmqqaAZr5ms2a97XwWgytJK09Wr1ac9o62hHaW7Q7tCd15HV8dPJ0mnUe6pJ1nXRX69br3tLD6DH0UvT2693Qh/Wt9JP0a/QHDGADawOuwX6DQUO0oa0hz7DecNiIZORilGXUbDRqTDP2Ny4w7jB+YaJpEm2y06TX5JOplWmqaYPpAzMZM1+zArMus9/N9c1Z5jXmtyzIFp4W6y06LV5aGlhyLA9Y3rWiWAVYbbHqtvpobWPNt26xnrLRtImz2WczzKAyghiljCu2aFtX2/W2p23f2VnbCexO2P1mb2SfYn/UfnKpzlLO0oalYw7qDkyHOocRR7pjnONBxxEnNSemU73TE2cNZ7Zzo/OEi55Lsssxlxeupq581zbXOTc7t7Vu590Rdy/3Yvd+DxmP5R7VHo891T0TPZs9Z7ysvNZ4nfdGe/t57/Qe9lH2Yfk0+cz42viu9e3xI/mF+lX7PfHX9+f7dwXAAb4BuwIeLtNaxlvWEQgCfQJ3BT4K0glaHfRjMCY4KLgm+GmIWUh+SG8oJTQ29GjomzDXsLKwB8t1lwuXd4dLhseEN4XPRbhHlEeMRJpEro28HqUYxY3qjMZGh0c3Rs+u8Fixe8V4jFVMUcydlTorc1ZeXaW4KnXVmVjJWGbsyTh0XETc0bgPzEBmPXM23id+X/wMy421h/Wc7cyuYE9xHDjlnIkEh4TyhMlEh8RdiVNJTkmVSdNcN24192Wyd3Jt8lxKYMrhlIXUiNTWNFxaXNopngwvhde
TrpKekz6YYZBRlDGy2m717tUzfD9+YyaUuTKzU0AV/Uz1CXWFm4WjWY5ZNVlvs8OzT+ZI5/By+nL1c7flTuR55n27BrWGtaY7Xy1/Y/7oWpe1deugdfHrutdrrC9cP77Ba8ORjYSNKRt/KjAtKC94vSliU1ehcuGGwrHNXpubiySK+EXDW+y31G5FbeVu7d9msW3vtk/F7OJrJaYllSUfSlml174x+6bqm4XtCdv7y6zLDuzA7ODtuLPTaeeRcunyvPKxXQG72ivoFcUVr3fH7r5aaVlZu4ewR7hnpMq/qnOv5t4dez9UJ1XfrnGtad2ntG/bvrn97P1DB5wPtNQq15bUvj/IPXi3zquuvV67vvIQ5lDWoacN4Q293zK+bWpUbCxp/HiYd3jkSMiRniabpqajSkfLmuFmYfPUsZhjN75z/66zxailrpXWWnIcHBcef/Z93Pd3Tvid6D7JONnyg9YP+9oobcXtUHtu+0xHUsdIZ1Tn4CnfU91d9l1tPxr/ePi02umaM7Jnys4SzhaeXTiXd272fMb56QuJF8a6Y7sfXIy8eKsnuKf/kt+lK5c9L1/sdek9d8XhyumrdldPXWNc67hufb29z6qv7Sern9r6rfvbB2wGOm/Y3ugaXDp4dshp6MJN95uXb/ncun572e3BO8vv3B2OGR65y747eS/13sv7WffnH2x4iH5Y/EjqUeVjpcf1P+v93DpiPXJm1H2070nokwdjrLHnv2T+8mG88Cn5aeWE6kTTpPnk6SnPqRvPVjwbf57xfH666FfpX/e90H3xw2/Ov/XNRM6Mv+S/XPi99JXCq8OvLV93zwbNPn6T9mZ+rvitwtsj7xjvet9HvJ+Yz/6A/VD1Ue9j1ye/Tw8X0hYW/gUDmPP8uaxzGQAAAp1JREFUeJzFlk1rE1EUhp9z5iat9kMlVXGhKH4uXEo1CoIKrnSnoHs3unLnxpW7ipuCv0BwoRv/gCBY2/gLxI2gBcHGT9KmmmTmHBeTlLRJGquT+jJ3djPPfV/OPefK1UfvD0hIHotpsf7jm4mq4k6mEsEtsfz2gpr4rGpyPYjGjyUMFy1peNg5odkSV0nNDNFwxhv2JAhR0ZKGA0JiIAPCpgTczaVhRa1//2qoprhBQdv/LSKNasVUVAcZb/c9/A9oSwMDq6Rr08DSXNW68TN2pAc8U3CLsVQ3bpwocHb/CEs16+o8ZAoVWKwZNycLXD62DYDyUszbLzW2BMHa+lIm4Fa8lZpx6+QEl46OA1CaX+ZjpUFeV0MzAbecdoPen1lABHKRdHThdcECiNCx27XQxTXQufllHrxaIFKItBMK6xSXCCSeFsoKZO2m6AUtE0lvaE+wCPyKna055erx7SSWul7pes1Xpd4Z74OZhfQMrwOFLlELYAbjeeXuud0cKQyxZyzHw9efGQ6KStrve8WrCpHSd7J2gL1Jjx0qvxIALh4aIxJhulRmKBKWY+8Zbz+nLXWNWgXqsXPvxSfm5qsAXDg4yu3iLn7Gzq3Jv4t3XceQxpSLQFWZelnmztldnN43wvmDoxyeGGLvtlyb0z+Pt69jSItJBfJBmHpZXnG+Gtq/ejcMhtSBCuQjYWqmzOyHFD77oZo63WC87erbudzTGAMwXfrM2y81nr+rIGw83nb90XQyh9Ccb8/e/CAxCF3aYOZgaB4zYDSffvKvN+ANz+NefXvg4KykbmabDXU30/yOguKbyHYnNzKuwUnmhPxpF3Ok19UsM2r6BEpB6n7NpPFU6smpuLpoqCgZFdCKBDC3MDKmntNSVEuu/AYecjifoa3JogAAAABJRU5ErkJggg==' unchecked = 
b'iVBORw0KGgoAAAANSUhEUgAAAB4AAAAeCAYAAAA7MK6iAAAKMGlDQ1BJQ0MgUHJvZmlsZQAAeJydlndUVNcWh8+9d3qhzTAUKUPvvQ0gvTep0kRhmBlgKAMOMzSxIaICEUVEBBVBgiIGjIYisSKKhYBgwR6QIKDEYBRRUXkzslZ05eW9l5ffH2d9a5+99z1n733WugCQvP25vHRYCoA0noAf4uVKj4yKpmP7AQzwAAPMAGCyMjMCQj3DgEg+Hm70TJET+CIIgDd3xCsAN428g+h08P9JmpXBF4jSBInYgs3JZIm4UMSp2YIMsX1GxNT4FDHDKDHzRQcUsbyYExfZ8LPPIjuLmZ3GY4tYfOYMdhpbzD0i3pol5IgY8RdxURaXky3iWyLWTBWmcUX8VhybxmFmAoAiie0CDitJxKYiJvHDQtxEvBQAHCnxK47/igWcHIH4Um7pGbl8bmKSgK7L0qOb2doy6N6c7FSOQGAUxGSlMPlsult6WgaTlwvA4p0/S0ZcW7qoyNZmttbWRubGZl8V6r9u/k2Je7tIr4I/9wyi9X2x/ZVfej0AjFlRbXZ8scXvBaBjMwDy97/YNA8CICnqW/vAV/ehieclSSDIsDMxyc7ONuZyWMbigv6h/+nwN/TV94zF6f4oD92dk8AUpgro4rqx0lPThXx6ZgaTxaEb/XmI/3HgX5/DMISTwOFzeKKIcNGUcXmJonbz2FwBN51H5/L+UxP/YdiftDjXIlEaPgFqrDGQGqAC5Nc+gKIQARJzQLQD/dE3f3w4EL+8CNWJxbn/LOjfs8Jl4iWTm/g5zi0kjM4S8rMW98TPEqABAUgCKlAAKkAD6AIjYA5sgD1wBh7AFwSCMBAFVgEWSAJpgA+yQT7YCIpACdgBdoNqUAsaQBNoASdABzgNLoDL4Dq4AW6DB2AEjIPnYAa8AfMQBGEhMkSBFCBVSAsygMwhBuQIeUD+UAgUBcVBiRAPEkL50CaoBCqHqqE6qAn6HjoFXYCuQoPQPWgUmoJ+h97DCEyCqbAyrA2bwAzYBfaDw+CVcCK8Gs6DC+HtcBVcDx+D2+EL8HX4NjwCP4dnEYAQERqihhghDMQNCUSikQSEj6xDipFKpB5pQbqQXuQmMoJMI+9QGBQFRUcZoexR3qjlKBZqNWodqhRVjTqCakf1oG6iRlEzqE9oMloJbYC2Q/ugI9GJ6Gx0EboS3YhuQ19C30aPo99gMBgaRgdjg/HGRGGSMWswpZj9mFbMecwgZgwzi8ViFbAGWAdsIJaJFWCLsHuxx7DnsEPYcexbHBGnijPHeeKicTxcAa4SdxR3FjeEm8DN46XwWng7fCCejc/Fl+Eb8F34Afw4fp4gTdAhOBDCCMmEjYQqQgvhEuEh4RWRSFQn2hKDiVziBmIV8TjxCnGU+I4kQ9InuZFiSELSdtJh0nnSPdIrMpmsTXYmR5MF5O3kJvJF8mPyWwmKhLGEjwRbYr1EjUS7xJDEC0m8pJaki+QqyTzJSsmTkgOS01J4KW0pNymm1DqpGqlTUsNSs9IUaTPpQOk06VLpo9JXpSdlsDLaMh4ybJlCmUMyF2XGKAhFg+JGYVE2URoolyjjVAxVh+pDTaaWUL+j9lNnZGVkLWXDZXNka2TPyI7QEJo2zYeWSiujnaDdob2XU5ZzkePIbZNrkRuSm5NfIu8sz5Evlm+Vvy3/XoGu4KGQorBToUPhkSJKUV8xWDFb8YDiJcXpJdQl9ktYS4qXnFhyXwlW0lcKUVqjdEipT2lWWUXZSzlDea/yReVpFZqKs0qySoXKWZUpVYqqoypXtUL1nOozuizdhZ5Kr6L30GfUlNS81YRqdWr9avPqOurL1QvUW9UfaRA0GBoJGhUa3RozmqqaAZr5ms2a97XwWgytJK09Wr1ac9o62hHaW7Q7tCd15HV8dPJ0mnUe6pJ1nXRX69br3tLD6DH0UvT2693Qh/Wt9JP0a/QHDGADawOuwX6DQUO0oa0hz7DecNiIZORilGXUbDRqTDP2Ny4w7jB+YaJpEm2y06TX5JOplWmqaYPpAzMZM1+zArMus9/N9c1Z5jXmtyzIFp4W6y06LV5aGlhyLA9Y3rWiWAVYbbHqtvpobWPNt26xnrLRtImz2WczzKAyghiljCu2aFtX2/W2p23f2VnbCexO2P1mb2SfYn/UfnKpzlLO0oalYw7qDkyHOocRR7pjnONBxxEnNSemU73TE2cNZ7Zzo/OEi55Lsssxlxeupq581zbXOTc7t7Vu590Rdy/3Yvd+DxmP5R7VHo891T0TPZs9Z7ysvNZ4nfdGe/t57/Qe9lH2Yfk0+cz42viu9e3xI/mF+lX7PfHX9+f7dwXAAb4BuwIeLtNaxlvWEQgCfQJ3BT4K0glaHfRjMCY4KLgm+GmIWUh+SG8oJTQ29GjomzDXsLKwB8t1lwuXd4dLhseEN4XPRbhHlEeMRJpEro28HqUYxY3qjMZGh0c3Rs+u8Fixe8V4jFVMUcydlTorc1ZeXaW4KnXVmVjJWGbsyTh0XETc0bgPzEBmPXM23id+X/wMy421h/Wc7cyuYE9xHDjlnIkEh4TyhMlEh8RdiVNJTkmVSdNcN24192Wyd3Jt8lxKYMrhlIXUiNTWNFxaXNopngwvhdeTrpKekz6YYZBRlDGy2m717tUzfD9+YyaUuTKzU0AV/Uz1CXWFm4WjWY5ZNVlvs8OzT+ZI5/By+nL1c7flTuR55n27BrWGtaY7Xy1/Y/7oWpe1deugdfHrutdrrC9cP77Ba8ORjYSNKRt/KjAtKC94vSliU1ehcuGGwrHNXpubiySK+EXDW+y31G5FbeVu7d9msW3vtk/F7OJrJaYllSUfSlml174x+6bqm4XtCdv7y6zLDuzA7ODtuLPTaeeRcunyvPKxXQG72ivoFcUVr3fH7r5aaVlZu4ewR7hnpMq/qnOv5t4dez9UJ1XfrnGtad2ntG/bvrn97P1DB5wPtNQq15bUvj/IPXi3zquuvV67vvIQ5lDWoacN4Q293zK+bWpUbCxp/HiYd3jkSMiRniabpqajSkfLmuFmYfPUsZhjN75z/66zxailrpXWWnIcHBcef/Z93Pd3Tvid6D7JONnyg9YP+9oobcXtUHtu+0xHUsdIZ1Tn4CnfU91d9l1tPxr/ePi02umaM7Jnys4SzhaeXTiXd272fMb56QuJF8a6Y7sfXIy8eKsnuKf/kt+lK5c9L1/sdek9d8XhyumrdldPXWNc67hufb29z6qv7Sern9r6rfvbB2wGOm/Y3ugaXDp4dshp6MJN95uXb/ncun572e3BO8vv3B2OGR65y747eS/13sv7WffnH2x4iH5Y/EjqUeVjpcf1P+v93DpiPXJm1H2070nokwdjrLHnv2T+8mG88Cn5aeWE6kTTpPnk6SnPqRvPVjwbf57xfH666FfpX/e90H3xw2/Ov/XNRM6Mv+S/XPi99JXCq8OvLV93zwbNPn6T9mZ+rvitwtsj7xjvet9HvJ+Yz/6A/VD1Ue9j1ye/Tw8X0hYW/gUDmPP8uaxzGQAAAPFJREFUeJzt1
01KA0EQBeD3XjpBCIoSPYC3cPQaCno9IQu9h+YauYA/KFk4k37lYhAUFBR6Iko/at1fU4uqbp5dLg+Z8pxW0z7em5IQgaIhEc6e7M5kxo2ULxK1njNtNc5dpIN9lRU/RLZBpZPofJWIUePcBQAiG+BAbC8gwsHOjdqHO0PquaHQ92eT7FZPFqUh2/v5HX4DfUuFK1zhClf4H8IstDp/DJd6Ff2dVle4wt+Gw/am0Qhbk72ZEBu0IzCe7igF8i0xOQ46wFJz6Uu1r4RFYhvnZnfNNh+tV8+GKBT+s4EAHE7TbcVYi9FLPn0F1D1glFsARrAAAAAASUVORK5CYII=' main() Using Text elements instead of Image elements If you don't want to include the custom images, you can change them into simple UNICODE characters and change the Image elements into Text elements. The PySimpleGUI element parameters are often the same and in this care they are. So all that you have to change in the code is the value of the two variables and change the word Image to Text in the layout. import PySimpleGUI as sg """ Demo - Custom Checkboxes done simply The Base64 Image encoding feature of PySimpleGUI makes it possible to create beautiful GUIs very simply These 2 checkboxes required 3 extra lines of code than a normal checkbox. 1. Keep track of the current value using the Image Element's Metadata 2. Changle / Update the image when clicked 3. The Base64 image definition Enable the event on the Image with the checkbox so that you can take action (flip the value) Copyright 2022 PySimpleGUI """ def main(): layout = [[sg.Text('Fancy Checkboxes... Simply')], [sg.Text(checked, key=('-IMAGE-', 1), metadata=True, enable_events=True), sg.Text(True, enable_events=True, k=('-TEXT-', 1))], [sg.Text(unchecked, key=('-IMAGE-', 2), metadata=False, enable_events=True), sg.Text(False, enable_events=True, k=('-TEXT-', 2))], [sg.Button('Go'), sg.Button('Exit')]] window = sg.Window('Custom Checkboxes', layout, font="_ 14") while True: event, values = window.read() print(event, values) if event == sg.WIN_CLOSED or event == 'Exit': break # if a checkbox is clicked, flip the vale and the image if event[0] in ('-IMAGE-', '-TEXT-'): cbox_key = ('-IMAGE-', event[1]) text_key = ('-TEXT-', event[1]) window[cbox_key].metadata = not window[cbox_key].metadata window[cbox_key].update(checked if window[cbox_key].metadata else unchecked) # Update the string next to the checkbox window[text_key].update(window[cbox_key].metadata) window.close() if __name__ == '__main__': checked = '☑' unchecked = '☐' main() One advantage of using the Text element addresses your question about size more directly. You can change the size of the checkbox by changing the font size for the Text element. I changed it to a size 20 font. The window has a size of 14 and I left the other checkbox a size 14. This is the difference: sg.Text(checked, key=('-IMAGE-', 1), metadata=True, enable_events=True, font='_ 20') size parameter footnote.... The size parameter you'll find explained in the API call reference as part of the docs. All of the elements' parameters are explained in the documentation. For nearly all elements, it specifies the size of the entire element in characters wide and rows high. For the Checkbox, it's only the width that's being used. If you have another element next to it, then the width will push that other element over if it's longer than the text of the Checkbox. | 2 | 1 |
77,557,429 | 2023-11-27 | https://stackoverflow.com/questions/77557429/validate-a-function-call-without-calling-a-function | I'd like to know if I can use pydantic to check whether arguments would be fit for calling a type-hinted function, but without calling the function. For example, given kwargs = {'x': 1, 'y': 'hi'} def foo(x: int, y: str, z: Optional[list] = None): pass I want to check whether foo(**kwargs) would be fine according to the type hints. So basically, what pydantic.validate_call does, without calling foo. My research led me to this github issue. The solution works, but it relies on validate_arguments, which is deprecated. It cannot be switched for validate_call, because its return value doesn't have a vd attribute, making the line validated_function = f.vd fail. | You can extract the annotations attribute from the function and use it to build a pydantic model using the type constructor: import collections.abc import pydantic def form_validator_model(func: collections.abc.Callable) -> type[pydantic.BaseModel]: ann = func.__annotations__.copy() ann.pop('return', None) # Remove the return value annotation if it exists. return type(f'{func.__name__}_Validator', (pydantic.BaseModel,), {'__annotations__': ann}) def func(a: str, b: int) -> str: return a * b model = form_validator_model(func) model(a='hi', b='bye') # raises ValidationError The downside is you can't pass the arguments positionally. | 4 | 4 |
77,564,964 | 2023-11-28 | https://stackoverflow.com/questions/77564964/checking-if-a-tensors-values-are-contained-in-another-tensor | I have a torch tensor like so: a=[1, 234, 54, 6543, 55, 776] and other tensors like so: b=[234, 54] c=[55, 776] I want to create a new mask tensor where the entries are True if the corresponding values of a appear in another tensor (b or c). For example, in the tensors we have above I would like to create the following masking tensor: a_masked =[False, True, True, False, True, True] # The first two True values correspond to tensor `b` while the last two True values correspond to tensor `c`. I have seen other methods to check whether a full tensor is contained in another but this isn't the case here. Is there a torch way to do this efficiently? Thanks! | Based on the answers on the PyTorch forum here, you could explicitly use a for loop, e.g., import torch a = torch.tensor([1, 234, 54, 6543, 55, 776]) b = torch.tensor([234, 54]) c = torch.tensor([55, 776]) a_masked = sum(a == i for i in b).bool() + sum(a == i for i in c).bool() print(a_masked) tensor([False, True, True, False, True, True]) However, there is actually a PyTorch isin function, for which you could do: a_masked = torch.isin(a, torch.cat([b, c])) This is several times faster than the sum method. | 2 | 2 |
77,541,498 | 2023-11-24 | https://stackoverflow.com/questions/77541498/applying-python-udf-function-per-row-in-a-polars-dataframe-throws-unexpected-exc | I have the following polars DF in Python df = pl.DataFrame({ "user_movies": [[7064, 7153, 78009], [6, 7, 1042], [99, 110, 3927], [2, 11, 152081], [260, 318, 195627]], "user_ratings": [[5.0, 5.0, 5.0], [4.0, 2.0, 4.0], [4.0, 4.0, 3.0], [3.5, 3.0, 4.0], [1.0, 4.5, 0.5]], "common_movies": [[7064, 7153], [7], [110, 3927], [2], [260, 195627]] }) print(df.head()) I want to create a new column named "common_movie_ratings" that will take from each rating list only the index of the movie rated in the common movies. For example, for the first row, I should return only the ratings for movies [7064, 7153,], for the second row the ratings for the movie [7], and so on and so forth. For this reason, I created the following function: def get_common_movie_ratings(row): #Each row is a tuple of arrays. common_movies = row[2] #the index of the tuple denotes the 3rd array, which represents the common_movies column. user_ratings = row[1] ratings_for_common_movies= [user_ratings[list(row[0]).index(movie)] for movie in common_movies] return ratings_for_common_movies Finally, I apply the UDF function on the dataframe like df["common_movie_ratings"] = df.apply(get_common_movie_ratings, return_dtype=pl.List(pl.Float64)) Every time I apply the function, on the 3rd iteration/row I receive the following error expected tuple, got list I have also tried a different approach for the UDF function like def get_common_movie_ratings(row): common_movies = row[2] user_ratings = row[1] ratings = [user_ratings[i] for i, movie in enumerate(row[0]) if movie in common_movies] return ratings But again on the 3rd iteration, I received the same error. Update - Data input and scenario scope (here) | What went wrong with your approach Ignoring performance penalties for python UDFs, there are two things that went wrong in your approach. apply which is now map_rows in the context that you're trying to use it is expecting the output to be a tuple where each element of the tuple is an output column. Your function doesn't output a tuple. If you change the return line to return (ratings_for_common_movies,) then it outputs a tuple and will work. You can't add columns to polars dataframes with square bracket notation. The only thing that can be on the left side of the = is a df, never df['new_column']=<something>. If you're using an old version that does allow it then you shouldn't, in part, because new versions don't allow it. That means you have to do something like df.with_columns(new_column=<some_expression>) In the case adding a column to an existing df while using map_rows you can use hstack like: df=df.hstack(df.map_rows(get_common_movie_ratings)).rename({'column_0':'common_movie_ratings'}) The above is really an anti-pattern as using any of the map_rows, map_elements, etc when a native approach could work will be slower and less efficient. Scroll to the bottom for a map_elements approach. Native solution preamble If we assume the lists are always 3 long then you could do this... # this is the length of the user_movies lists n_count=3 df.with_columns( # first gather the items from user_movies based on (yet to be created) # indices list pl.col('user_movies').list.gather( # use walrus operator to create new list which is the indices where # user_movies are in common_movies this works by looping through # each element and checking if it's in common_movies. 
When it is in common_movies # then it stores its place in the loop n variable. The n_count is the list size (indices:=pl.concat_list( pl.when( pl.col('user_movies').list.get(n).is_in(pl.col('common_movies')) ) .then(pl.lit(n)) for n in range(n_count) ).list.drop_nulls()) ), # use the same indicies list to gather the corresponding elements from user_ratings pl.col('user_ratings').list.gather(indices) ) Note that we're generating the indices list by looping through a range from 0 to the length of the list as n and when the item associated the nth position of user_movies is in common_movies then that n is put in the indices list. There is unfortunately not a .index like method in polars for list type columns so, without exploding the lists, this is the best way I can think of to create that indices list. Native solution answer Polars itself can't recursively set n_count so we need to do it manually. By using lazy evaluation this is faster than other approaches as it can compute each n_count case in parallel. ( pl.concat([ # this is a list comprehension # From here to the "for n_count..." line is the same as the previous code # snippet except that, here, it's called inner_df and it's being # made into a lazy frame inner_df.lazy().with_columns( pl.col('user_movies').list.gather( (indices:=pl.concat_list( pl.when( pl.col('user_movies').list.get(n).is_in(pl.col('common_movies')) ) .then(pl.lit(n)) for n in range(n_count) ).list.drop_nulls()) ), pl.col('user_ratings').list.gather(indices) ) # this is the iterating part of the list comprehension # it takes the original df, creates a column which is # a row index, then it creates a column which is the # length of the list, it then partitions up the df into # multiple dfs where each of the inner_dfs only has rows # where the list length is the same. By using as_dict=True # and .items(), it gives a convenient way to unpack the # n_count (length of the list) and the inner_df for n_count, inner_df in ( df .with_row_count('i') # original row position .with_columns(n_count=pl.col('user_movies').list.len()) .partition_by('n_count', as_dict=True, include_key=False) .items()) ]) .sort('i') # sort by original row position .drop('i') # drop the row position column .collect() # run all of the queries in parallel ) shape: (5, 3) ┌───────────────┬──────────────┬───────────────┐ │ user_movies ┆ user_ratings ┆ common_movies │ │ --- ┆ --- ┆ --- │ │ list[i64] ┆ list[f64] ┆ list[i64] │ ╞═══════════════╪══════════════╪═══════════════╡ │ [7064, 7153] ┆ [5.0, 5.0] ┆ [7064, 7153] │ │ [7] ┆ [2.0] ┆ [7] │ │ [110, 3927] ┆ [4.0, 3.0] ┆ [110, 3927] │ │ [2] ┆ [3.5] ┆ [2] │ │ [260, 195627] ┆ [1.0, 0.5] ┆ [260, 195627] │ └───────────────┴──────────────┴───────────────┘ By converting to lazy in the first part of the concat it allows for each frame to be calculated in parallel where each frame is a subset based on the length of the list. It also allows for the indices to become a CSER which means it only calculates it once even though there are 2 references to it. Incidentally, for less code but more processing/time, you could simply set n_counts in the preamble section to n_count=df.select(n_count=pl.col('user_movies').list.len().max()).item() and then just run the rest of what's in that section. That approach will be much slower than this one as, for every row it iterates through elements up to the max list length which adds unnecessary checks. It also doesn't get the same parallelism. In other words, it's doing more work with fewer CPU cores working on it. 
Benchmarks Fake data creation n=10_000_000 df = ( pl.DataFrame({ 'user':np.random.randint(1,int(n/10),size=n), 'user_movies':np.random.randint(1,50,n), 'user_ratings':np.random.uniform(1,5, n), 'keep':np.random.randint(1,100,n) }) .group_by('user') .agg( pl.col('user_movies'), pl.col('user_ratings').round(1), common_movies=pl.col('user_movies').filter(pl.col('keep')>75) ) .filter(pl.col('common_movies').list.len()>0) .drop('user') ) print(df.head(10)) shape: (5, 3) ┌────────────────┬───────────────────┬───────────────┐ │ user_movies ┆ user_ratings ┆ common_movies │ │ --- ┆ --- ┆ --- │ │ list[i64] ┆ list[f64] ┆ list[i64] │ ╞════════════════╪═══════════════════╪═══════════════╡ │ [23, 35, … 22] ┆ [3.4, 1.6, … 4.0] ┆ [35] │ │ [30, 18, … 26] ┆ [4.9, 1.9, … 2.3] ┆ [10] │ │ [25, 19, … 29] ┆ [1.7, 1.7, … 1.1] ┆ [18, 40, 38] │ │ [31, 15, … 42] ┆ [2.9, 1.8, … 4.3] ┆ [31, 4, … 42] │ │ [36, 16, … 49] ┆ [1.0, 2.0, … 4.2] ┆ [36] │ └────────────────┴───────────────────┴───────────────┘ My method (16 threads): 1.92 s ± 195 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) My method (8 threads): 2.31 s ± 175 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) My method (4 threads): 3.14 s ± 221 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) @jqurious: 2.73 s ± 130 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) map_rows: 9.12 s ± 195 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Preamble with big n_count: 9.77 s ± 1.61 s per loop (mean ± std. dev. of 7 runs, 1 loop each) My method and the map_rows each use about 1 GB of RAM but the explody one is closer to 3 GB. struct and map_elements Instead of using map_rows which is, imo, really clunky, you can instead use map_elements. It has its own clunkiness as you often need to wrap your input in a struct but you can add columns more cleanly and you don't have to rely on column position. For instance you can define your function and use it as follows: def get_common_movie_ratings(row): #Each row is a tuple of arrays. common_movies = row['common_movies'] #the index of the tuple denotes the 3rd array, which represents the common_movies column. user_ratings = row['user_ratings'] ratings_for_common_movies= [user_ratings[list(row['user_movies']).index(movie)] for movie in common_movies] return ratings_for_common_movies df.with_columns(user_ratings=pl.struct(pl.all()).map_elements(get_common_movie_ratings)) What's happening here is that map_elements can only be invoked from a single column so if your custom function needs multiple inputs you can wrap them in a struct. The struct will get turned into a dict where the keys have the name of the columns. This approach doesn't have any inherent performance benefit relative to the map_rows, it's just, imo, better syntax. Lastly As @jqurious mentioned in comments of his answer, this could almost certainly be streamlined in terms of both syntax and performance by incorporating this logic with the formation of these lists. In other words, you have step 1: ______ step 2: this question. While I can only guess at what's happening in step 1 it is very likely that combining the two steps would be a worthwhile endeavor. | 4 | 5 |
77,564,682 | 2023-11-28 | https://stackoverflow.com/questions/77564682/pandas-groupby-transform-with-custom-condition | Suppose I have the following table: import pandas as pd data = pd.DataFrame({ 'Group':['A','A','A','A','B','B'] , 'Month':[1,2,3,4,1,2] , 'Value':[100,300,700,750, 200,400] }) I would like to use groupby and transform functions in pandas to create a new column that is equal to the value of each group in month 2. Here's how the result should look: import pandas as pd data = pd.DataFrame({ 'Group':['A','A','A','A','B','B'] , 'Month':[1,2,3,4,1,2] , 'Value':[100,300,700,750, 200,400] , 'Desired_Result':[300,300,300,300,400,400] }) It seems like there should be a straightforward way of accomplishing this with groupby and transform, but haven't found it yet. | Use Series.map with filtered rows in boolean indexing: s = data[data['Month'].eq(2)].set_index('Group')['Value'] data['Desired_Result'] = data['Group'].map(s) print (data) Group Month Value Desired_Result 0 A 1 100 300 1 A 2 300 300 2 A 3 700 300 3 A 4 750 300 4 B 1 200 400 5 B 2 400 400 With GroupBy.transform is possible replace non matched values by NaNs and use first: s = data['Value'].where(data['Month'].eq(2)) data['Desired_Result'] = s.groupby(data['Group']).transform('first') print (data) Group Month Value Desired_Result 0 A 1 100 300.0 1 A 2 300 300.0 2 A 3 700 300.0 3 A 4 750 300.0 4 B 1 200 400.0 5 B 2 400 400.0 | 3 | 6 |
77,547,060 | 2023-11-25 | https://stackoverflow.com/questions/77547060/fail-to-start-flask-connexion-swagger | Problem I initiated a Flask app (+ Connexion and Swagger UI) and tried to open http://127.0.0.1:5000/api/ui. The browser showed starlette.exceptions.HTTPException: 404: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again. Setup % pip install "connexion[flask, swagger-ui]" % export FLASK_APP="app" (Prepare files) % flask run --debug (Access http://127.0.0.1:5000/api/ui) Result Python 3.12.0 connexion 3.0.2 Flask 3.0.0 swagger_ui_bundle 1.1.0 Werkzeug 3.0.1 Files Directory structure app/ __init__.py openapi.yaml hello.py __init__.py from connexion import FlaskApp from flask.app import Flask from pathlib import Path BASE_DIR = Path(__file__).parent.resolve() def create_app() -> Flask: flask_app: FlaskApp = FlaskApp(__name__) app: Flask = flask_app.app flask_app.add_api("openapi.yaml") return app openapi.yaml openapi: 3.0.3 info: title: "test" description: "test" version: "1.0.0" servers: - url: "/api" paths: /hello: get: summary: "hello" description: "hello" operationId: "hello.say_hello" responses: 200: description: "OK" content: text/plain: schema: type: string example: "hello" hello.py def say_hello() -> str: return 'Hello, world!' Error message Based on these settings, I believe I can see Swagger UI at http://127.0.0.1:5000/api/ui. However, I faced the error message below. Traceback (most recent call last): File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 867, in full_dispatch_request rv = self.dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 841, in dispatch_request self.raise_routing_exception(req) File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 450, in raise_routing_exception raise request.routing_exception # type: ignore File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/ctx.py", line 353, in match_request result = self.url_adapter.match(return_rule=True) # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/werkzeug/routing/map.py", line 624, in match raise NotFound() from None werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 1478, in __call__ return self.wsgi_app(environ, start_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 1458, in wsgi_app response = self.handle_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 1455, in wsgi_app response = self.full_dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 869, in full_dispatch_request rv = self.handle_user_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/flask/app.py", line 759, in handle_user_exception return self.ensure_sync(handler)(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/myusername/tmp/.venv/lib/python3.12/site-packages/connexion/apps/flask.py", line 245, in _http_exception raise starlette.exceptions.HTTPException(exc.code, detail=exc.description) starlette.exceptions.HTTPException: 404: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again. | TL:DR: You need an ASGI server to run your application. A similar problem on the connexion github issue tracker. This will only lead you to the documentation on running your application which can be found here. Bearing the above in mind, I managed to create my own solution: __init__.py: from connexion import FlaskApp def create_app(): app = FlaskApp(__name__) app.add_api("openapi.yaml", validate_responses=True) return app app = create_app() openapi.yaml: openapi: 3.0.3 info: title: test description: test version: 1.0.0 paths: /hello: get: summary: hello description: hello operationId: app.hello.say_hello responses: 200: description: OK content: text/plain: schema: type: string example: hello For additional assistance, please see below the docker setup I used: docker-compose.yaml: version: '3.8' services: test-api: build: context: . dockerfile: Dockerfile restart: unless-stopped env_file: - .env ports: - '8000:8000' volumes: - ./app:/usr/src/app/app Dockerfile: FROM public.ecr.aws/lambda/python:3.10 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY requirements.txt /usr/src/app/ RUN pip3 install --no-cache-dir -r requirements.txt ENV FLASK_RUN_HOST=0.0.0.0 ENV FLASK_RUN_PORT=8000 EXPOSE 8000 COPY entrypoint.sh /usr/src/app/entrypoint.sh RUN chmod +x /usr/src/app/entrypoint.sh ENTRYPOINT ["/usr/src/app/entrypoint.sh"] entrypoint.sh: #!/bin/sh gunicorn -w 1 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 app:app --reload # Keep the script running to keep the container alive tail -f /dev/null requirements.txt: connexion[flask, swagger-ui, uvicorn]==3.0.2 gunicorn==21.2.0 Werkzeug==3.0.1 Flask==3.0.0 setuptools >= 21.0.0 swagger-ui-bundle==1.1.0 project structure: project-root/ |-- app/ | |-- __init__.py | |-- hello.py | |-- openapi.yaml |-- Dockerfile |-- docker-compose.yml |-- entrypoint.sh |-- requirements.txt |-- .env Hope this helps! I'd recommend playing around from this working point with specific settings such as CORS, db setup, migrations, alterations to the docker setup etc. | 3 | 2 |
77,561,415 | 2023-11-28 | https://stackoverflow.com/questions/77561415/how-to-include-a-text-summary-within-a-printed-table | Say I have two dataframes: data1 = { 'Name': ['Alice', 'Bob', 'Charlie'], 'Age': [25, 30, 35], 'Height': [165, 182, 177] } df1 = pd.DataFrame(data1) data2 = { 'Summary': ["Alice is the youngest", "Bob is the tallest"] } df2 = pd.DataFrame(data2) Is there a way I could vertically concatenate the two dfs in such a way that the result looks like this: +---+---------+-----+----------+ | | Name | Age | Height | +---+---------+-----+----------+ | 0 | Alice | 25 | 165 | | 1 | Bob | 30 | 182 | | 2 | Charlie | 35 | 177 | +---+---------+-----+----------+ | | Summary | +---+--------------------------+ | 0 | "Alice is the youngest" | | 1 | "Bob is the tallest" | +---+--------------------------+ Everything I have attempted so far groups all four columns together at the top. | You can use tabulate and post-process the outputs: import tabulate as t t.PRESERVE_WHITESPACE = True width1 = 10 width2 = width1 * df1.shape[1] + df1.shape[1]*2 str1 = t.tabulate(df1.applymap(f'{{:^{width1}}}'.format), list(df1), tablefmt='outline', stralign='center', numalign='center') str2 = t.tabulate(df2.applymap(f'{{:^{width2}}}'.format), df2.columns, tablefmt='outline', stralign='center', numalign='center') print(str1 + '\n' + '\n'.join(str2.splitlines()[1:])) Output: +----+------------+------------+------------+ | | Name | Age | Height | +====+============+============+============+ | 0 | Alice | 25 | 165 | | 1 | Bob | 30 | 182 | | 2 | Charlie | 35 | 177 | +----+------------+------------+------------+ | | Summary | +====+======================================+ | 0 | Alice is the youngest | | 1 | Bob is the tallest | +----+--------------------------------------+ Or to adjust df2 dynamically from df1's width: import tabulate as t t.PRESERVE_WHITESPACE = True str1 = t.tabulate(df1, list(df1), tablefmt='outline', stralign='center', numalign='center') H = str1.split('\n', 1)[0] L = len(H)-H[1:].index('+')-5 str2 = t.tabulate(df2.applymap(f'{{:^{L}}}'.format), list(df2), tablefmt='outline', stralign='center', numalign='center') print(str1 + '\n' + '\n'.join(str2.splitlines()[1:])) Output: +----+---------+-------+----------+ | | Name | Age | Height | +====+=========+=======+==========+ | 0 | Alice | 25 | 165 | | 1 | Bob | 30 | 182 | | 2 | Charlie | 35 | 177 | +----+---------+-------+----------+ | | Summary | +====+============================+ | 0 | Alice is the youngest | | 1 | Bob is the tallest | +----+----------------------------+ | 3 | 1 |
77,534,017 | 2023-11-23 | https://stackoverflow.com/questions/77534017/pythons-built-in-sum-function-taking-forever-to-compute-sums-of-very-large-rang | This works well and returns 249999500000: sum([x for x in range(1_000_000) if x%2==0]) This is slower but still returns 24999995000000: sum([x for x in range(10_000_000) if x%2==0]) However, larger range of values such as 1_000_000_000 takes a very long time to compute. In fact, this returns an error: sum([x for x in range(10_000_000_000) if x%2==0]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 1, in <listcomp> MemoryError. I've seen posts saying that the built-in sum function is the fastest but this doesn't look like being fast to me. My question then is, do we have a faster Pythonic approach to compute large sums like this? Or is it perhaps a hardware limitation? | After I posted the below code based on the Gauss Sum, Kelly Bundy pointed out a simplification of the math and a practical simplification making use of properties of a range object. He suggested I add this here. An arbitrary range can be visualized as a trapezoid. That there is the sum for the area of a trapezoid. It also uses the ability of a range to know its length. That simplifies my (stop - start + step-1) // step expression. def sum_range(*args): rg = range(*args) return (rg[0] + rg[-1]) * len(rg) // 2 if rg else 0 Following is my original answer. I've gridsearched both functions for being identical to sum_naive, given argument ranges that cover all the cases. for arbitrary start, stop, step: #def sum_naive(start, stop, step): return sum(range(start, stop, step)) def sum_gauss(start, stop, step=1): if step == 0: raise ValueError("range() arg 3 must not be zero") if step < 0: step = -step shift = 1 + (start - stop - 1) % step (start, stop) = (stop+shift, start+1) if start >= stop: return 0 # triangle on a base n = (stop - start + step-1) // step base = start * n triangle = n * (n - 1) // 2 * step return base + triangle A "geometric" interpretation should make these expressions self-evident. ∎ SCNR The negative step case is rewritten into a positive step case using equivalent start and stop. The sums of even and odd integers in a given range start, stop are simply those: one_half = sum_gauss(start, stop, 2) other_half = sum_gauss(start+1, stop, 2) # no need to know which are which assert one_half + other_half == sum_gauss(start, stop, 1) | 2 | 2 |
77,562,118 | 2023-11-28 | https://stackoverflow.com/questions/77562118/how-to-calculate-time-between-rows-for-each-id-in-a-pandas-dataframe-using-polar | I am working on a task where I have a Pandas DataFrame using Polars library in Python, containing columns for 'ID' and 'Timestamp'. Each row represents the end of a session identified by the 'Timestamp'. I am trying to create a new column called 'time_since_last_session', which should contain the time duration between sessions for each unique 'ID'. I have been able to compute the time difference between sessions for a specific filtered 'ID' using the following code: import polars as pl # DataFrame: sessions_features # Columns: 'ID', 'Timestamp' filtered_id = "BBIISSIOTNIFSIDYIUSA" time_diff = sessions_features.filter(pl.col("ID") == filtered_id)["Timestamp"].diff().dt.total_seconds() However, I'm struggling to perform this calculation for each 'ID' in the DataFrame using a group_by() operation or similar. I have attempted to use map_groups() but haven't been successful. Could someone please guide me on how to perform this operation efficiently for each 'ID' using Polars? A minimal reproducible example would be this: import polars as pl import pandas as pd # Creating a sample DataFrame data = { 'ID': ['A', 'A', 'A', 'B', 'B', 'B'], 'Timestamp': ['2023-01-01 10:00:00', '2023-01-01 10:30:00' ,'2023-01-01 11:00:00', '2023-01-01 12:00:00', '2023-01-01 12:30:00', '2023-01-01 13:00:00'] } df = pd.DataFrame(data) # Converting to Polars DataFrame sessions_features = pl.from_pandas(df) sessions_features = sessions_features.with_columns( pl.col("Timestamp").str.to_datetime() ) print(sessions_features.filter(pl.col("ID") == "A")["Timestamp"].diff().dt.total_seconds()) This example creates a sample DataFrame and calculates the time difference between sessions for a specific 'ID'. However, the goal is to perform this calculation for each unique 'ID' in the DataFrame efficiently using Polars. Any help or insights would be greatly appreciated! the expected result for the final df in the minimum example would be: ┌─────┬─────────────────────┬───────────────────────┐ │ ID ┆ Timestamp ┆ time_between_sessions │ │ --- ┆ --- ┆ --- │ │ str ┆ datetime[μs] ┆ i64 │ ╞═════╪═════════════════════╪═══════════════════════╡ │ A ┆ 2023-01-01 10:00:00 ┆ 0 │ │ A ┆ 2023-01-01 10:30:00 ┆ 1800 │ │ A ┆ 2023-01-01 11:00:00 ┆ 1800 │ │ B ┆ 2023-01-01 12:00:00 ┆ 0 │ │ B ┆ 2023-01-01 12:30:00 ┆ 1800 │ │ B ┆ 2023-01-01 13:00:00 ┆ 1800 │ └─────┴─────────────────────┴───────────────────────┘ | You can group by ID and then apply the rolling diff to each group: df.group_by("ID").map_groups( lambda g: g.with_columns( pl.col("Timestamp").diff().dt.total_seconds().fill_null(0).alias("Diff") ) ) | 3 | -2 |
77,561,341 | 2023-11-28 | https://stackoverflow.com/questions/77561341/getting-gps-boundaries-for-each-hexbin-in-a-python-plotly-hexbin-mapbox-heat-m | I have created a hexbin "heat map" in Python using plotly by mapping a number of locations (using GPS latitude / longitude), along with the value of each location. See code below for sample df and hexbin figure plot. Data Desired When I mouse-over each hexbin, I can see the average value contained within that hexbin. But what I want is a way to download into a pandas df the following info for each hexbin: Average value in each hexbin (already calculated per the code below, but currently only accessible to me by mousing over each and every hexbin; I want to be able to download it into a df) Centroid GPS coordinate for each hexbin GPS coordinates for each corner of the hexbin (i.e., latitude and longitude for each of the six corners of each hexbin) My Question How can I download the data described in the bullets above into a pandas df? Code example # Import dependencies import pandas as pd import numpy as np import plotly.figure_factory as ff import plotly.express as px # Create a list of GPS coordinates gps_coordinates = [[32.7792, -96.7959, 10000], [32.7842, -96.7920, 15000], [32.8021, -96.7819, 12000], [32.7916, -96.7833, 26000], [32.7842, -96.7920, 51000], [32.7842, -96.7920, 17000], [32.7792, -96.7959, 25000], [32.7842, -96.7920, 19000], [32.7842, -96.7920, 31000], [32.7842, -96.7920, 40000]] # Create a DataFrame with the GPS coordinates df = pd.DataFrame(gps_coordinates, columns=['LATITUDE', 'LONGITUDE', 'Value']) # Print the DataFrame display(df) # Create figure using 'df_redfin_std_by_year_and_acreage_bin' data fig = ff.create_hexbin_mapbox( data_frame=df, lat='LATITUDE', lon='LONGITUDE', nx_hexagon=2, opacity=0.2, labels={"color": "Dollar Value"}, color='Value', agg_func=np.mean, color_continuous_scale="Jet", zoom=14, min_count=1, # This gets rid of boxes for which we have no data height=900, width=1600, show_original_data=True, original_data_marker=dict(size=5, opacity=0.6, color="deeppink"), ) # Create the map fig.update_layout(mapbox_style="open-street-map") fig.show() | You can extract the coordinates of the six corners of each hexbin as well as the values from fig.data[0]. However, I am not sure where the centroids information is stored in the figure object, but we can create a geopandas dataframe from this data, and get the directly get the centroids attribute of the geometry column: import geopandas as gpd from shapely.geometry import LineString coordinates = [feature['geometry']['coordinates'] for feature in fig.data[0].geojson['features']] values = fig.data[0]['z'] hexbins_df = pd.DataFrame({'coordinates': coordinates, 'values': values}) hexbins_df['geometry'] = hexbins_df['coordinates'].apply(lambda x: LineString(x[0])) hexbins_gdf = gpd.GeoDataFrame(hexbins_df, geometry='geometry') hexbins_gdf['centroid'] = hexbins_gdf['geometry'].centroid corners_df = hexbins_gdf['coordinates'].apply(lambda x: pd.Series(x[0])).rename(columns=lambda x: f'corner_{x+1}') hexbins_df = pd.concat([hexbins_df, corners_df], axis=1).drop(columns='corner_7') # we drop corner_7 since that is the same as the starting corner The resulting geopandas dataframe looks something like this: >>> hexbins_df coordinates values ... corner_5 corner_6 0 [[[-96.7889, 32.78215666477984], [-96.78539999... 28833.333333 ... [-96.792400000007, 32.7872532054738] [-96.792400000007, 32.78385554412095] 1 [[[-96.792400000007, 32.777059832108314], [-96... 17500.000000 ... 
[-96.79590000001399, 32.78215666477984] [-96.79590000001399, 32.77875880877266] 2 [[[-96.785399999993, 32.7872532054738], [-96.7... 26000.000000 ... [-96.7889, 32.79234945416662] [-96.7889, 32.788951987483806] 3 [[[-96.785399999993, 32.79744541083471], [-96.... 12000.000000 ... [-96.7889, 32.80254107545448] [-96.7889, 32.79914399815894] [4 rows x 21 columns] | 2 | 2 |
77,550,969 | 2023-11-26 | https://stackoverflow.com/questions/77550969/weighted-sum-of-pytrees-in-jax | I have a pytree represented by a list of lists holding parameter tuples. The sub-lists all have the same structure (see example). Now I would like to create a weighted sum so that the resulting pytree has the same structure as one of the sub-lists. The weights for each sub-list are stored in a separate array / list. So far I have the following code that seems to works but requires several steps and for-loop that I would like avoid for performance reasons. import jax import jax.numpy as jnp list_1 = [ [jnp.asarray([[1, 2], [3, 4]]), jnp.asarray([2, 3])], [jnp.asarray([[1, 2], [3, 4]]), jnp.asarray([2, 3])], ] list_2 = [ [jnp.asarray([[2, 3], [3, 4]]), jnp.asarray([5, 3])], [jnp.asarray([[2, 3], [3, 4]]), jnp.asarray([5, 3])], ] list_3 = [ [jnp.asarray([[7, 1], [4, 4]]), jnp.asarray([6, 2])], [jnp.asarray([[6, 4], [3, 7]]), jnp.asarray([7, 3])], ] weights = [1, 2, 3] pytree = [list_1, list_2, list_3] weighted_pytree = [jax.tree_map(lambda tree: weight * tree, tree) for weight, tree in zip(weights, pytree)] reduced = jax.tree_util.tree_map(lambda *args: sum(args), *weighted_pytree) | I think this will do what you have in mind: def wsum(*args, weights=weights): return jnp.asarray(weights) @ jnp.asarray(args) reduced = jax.tree_util.tree_map(wsum, *pytree) For the edited question, where tree elements have more general shapes, you can define wsum like this instead: def wsum(*args, weights=weights): return sum(weight * arg for weight, arg in zip(weights, args)) | 2 | 2 |
77,559,112 | 2023-11-27 | https://stackoverflow.com/questions/77559112/django-duplicate-action-when-i-press-follow-or-unfollow-button | I am doing CS50w project4 problem, so I have to build a Social Media app. This problem has a part where in the profile of the user a follow and unfollow button has to be available. My problem is when I press the follow or unfollow button, this increase or decrease the followers and the people that the user follow. views.py def seguir(request): if request.method == 'POST': accion = request.POST.get('accion') usuario_accion = request.POST.get('usuario') usuario = request.user.username # Comprobación de si ya lo esta siguiendo y recuperación de información de usuarios perfil_usuario = User.objects.get(username=usuario) perfil_usuario_accion = User.objects.get(username=usuario_accion) esta_siguiendo = perfil_usuario.seguidos.filter(username=usuario_accion).exists() if accion == 'follow': # Si no lo esta siguiendo se añade a la lista de seguidos de uno y de seguidores del otro if esta_siguiendo == False: perfil_usuario_accion.seguidores.add(perfil_usuario) perfil_usuario.seguidos.add(perfil_usuario_accion) # Redirección a la página del usuario seguido return render(request, "network/profile.html", { "profile_user": perfil_usuario_accion, "logged_user": 0 }) else: # Comprobación de que lo siga if esta_siguiendo: perfil_usuario.seguidos.remove(perfil_usuario_accion) perfil_usuario_accion.seguidores.remove(perfil_usuario) return render(request, "network/profile.html", { "profile_user": perfil_usuario_accion, "logged_user": 0 }) profile.html {% extends "network/layout.html" %} {% block body %} <h1 id="titulo-perfil">{{ profile_user.username }} profile page</h1> <div id="contenedor-seguir"> <div id="seguidores">Followers: {{ profile_user.seguidores.count }}</div> <div id="seguidos">Following: {{ profile_user.seguidos.count }}</div> </div> {% if logged_user == 0 %} <form id="contenedor-botones" action="{% url 'follow' %}" method="post"> {% csrf_token %} <input type="hidden" name="usuario" value="{{ profile_user.username }}"> <button type="submit" class="btn btn-primary" name="accion" value="follow">Follow</button> <button type="submit" class="btn btn-primary", name="accion" value="unfollow">Unfollow</button> </form> {% endif %} {% endblock %} models.py class User(AbstractUser): seguidores = models.ManyToManyField('self', symmetrical='false', related_name='tesiguen', blank=True) seguidos = models.ManyToManyField('self', symmetrical='false', related_name='siguiendo', blank=True) I want that when the user press the follow button, the follower of the people he are following increase and the followers counter of the other one increase in one too. | The main problem is that you wrote symmetrical='false' as a string literal, and the truthiness of that string is, well, True. You should use the False constant, so symmetrical=False. But you are overcomplicating things. Django is perfectly capable to handle a ManyToManyField [Django-doc] in the two directions. Indeed, we can define this as: class User(AbstractUser): seguidores = models.ManyToManyField( 'self', symmetrical=False, related_name='seguidos', blank=True ) So here if A adds B to the seguidores, then A will appear in the seguidos of B, so it is automatically reflecting that. 
So the view then is simplified to: def seguir(request): if request.method == 'POST': accion = request.POST.get('accion') usuario_accion = request.POST.get('usuario') usuario = request.user.username # Comprobación de si ya lo esta siguiendo y recuperación de información de usuarios perfil_usuario = User.objects.get(username=usuario) perfil_usuario_accion = User.objects.get(username=usuario_accion) esta_siguiendo = perfil_usuario.seguidos.filter( username=usuario_accion ).exists() if accion == 'follow': # Si no lo esta siguiendo se añade a la lista de seguidos de uno y de seguidores del otro if not esta_siguiendo: perfil_usuario_accion.seguidores.add(perfil_usuario) # Redirección a la página del usuario seguido return render( request, "network/profile.html", {"profile_user": perfil_usuario_accion, "logged_user": 0}, ) else: # Comprobación de que lo siga if esta_siguiendo: perfil_usuario.seguidos.remove(perfil_usuario_accion) perfil_usuario_accion.seguidores.remove(perfil_usuario) return render( request, "network/profile.html", {"profile_user": perfil_usuario_accion, "logged_user": 0}, ) and the template remains the same. | 2 | 2 |
77,558,127 | 2023-11-27 | https://stackoverflow.com/questions/77558127/pyspark-foreachpartition-with-additional-parameters | I'm trying to execute my function using spark_df.foreachPartition(), and I want to pass additional parameter but apparently the function supports only one parameter (the partition). I tried to play with it and do something like this : def my_function(row, index_name) : return True def partition_func(row): return my_function(row, "blabla") spark_df.foreachPartition(partition_func) However, I'm getting a serialization error : _pickle.PicklingError: Could not serialize object: TypeError: Cannot serialize socket object How can I make this work? I know I can add parameters to my Spark Dataframe, but I think it's an ugly solution, sending it in function parameter is so much better. | There might be other ways, but one simple approach could be to create a broadcast variable (or a container that holds any variables you may need), and then pass it to be used in your foreachPartition function. Something like this: def partition_func_with_var(partition, broadcast_var): for row in partition: print(str(broadcast_var.value) + row.desc) df = spark.createDataFrame([(1,"one"),(2,"two")],["id","desc"]) bv = spark.sparkContext.broadcast(" some extra variable ") df.foreachPartition(lambda p: partition_func_with_var(p,bv)) Note that "passing a variable" has a little murky meaning here, as it is actually a broadcast operation, with all its consequences and limitations (read-only, sent once, etc.) | 2 | 2 |
77,558,347 | 2023-11-27 | https://stackoverflow.com/questions/77558347/use-python-to-calculate-timing-between-bus-stops | The following is an example of the .csv file of thousands of lines I have for the different bus lines. Table: trip_id arrival_time departure_time stop_id stop_sequence stop_headsign 107_1_D_1 6:40:00 6:40:00 AREI2 1 107_1_D_1 6:40:32 6:40:32 JD4 2 107_1_D_1 6:41:27 6:41:27 PNG4 3 Raw Data: trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign 107_1_D_1,6:40:00,6:40:00,AREI2,1, 107_1_D_1,6:40:32,6:40:32,JD4,2, 107_1_D_1,6:41:27,6:41:27,PNG4,3, I want to create a table or dataframe that creates a line for each road segment and calculates the time between each arrival_time. Expected result: some other trip_id may share the same RoadSegment | I think in such cases you can use shift. Here is an example: df = pd.read_csv('...') result = pd.DataFrame() for _, trip_df in df.groupby('trip_id', sort=False): # type: str, pd.DataFrame trip_df = trip_df.sort_values('stop_sequence') trip_df['arrival_time'] = pd.to_timedelta(trip_df['arrival_time']) trip_df['departure_time'] = pd.to_timedelta(trip_df['departure_time']) trip_df['prev_arrival_time'] = trip_df['arrival_time'].shift() trip_df['prev_stop_id'] = trip_df['stop_id'].shift() trip_df['RoadSegment'] = trip_df['prev_stop_id'].str.cat(trip_df['stop_id'], sep='-') trip_df['planned_duration'] = trip_df['departure_time'] - trip_df['prev_arrival_time'] trip_df = trip_df.dropna(subset=['planned_duration']) trip_df['planned_duration'] = ( trip_df['planned_duration'] .apply(lambda x: x.total_seconds()) .astype(int) ) result = pd.concat( [result, trip_df[['RoadSegment', 'trip_id', 'planned_duration']]], sort=False, ignore_index=True, ) print(result) | 2 | 2 |
77,558,059 | 2023-11-27 | https://stackoverflow.com/questions/77558059/how-to-check-if-a-path-is-a-relative-symlink-in-python | os package has os.path.islink(path) to check if path is a symlink. How do I determine if a file located at path is a relative symlink? Note that path can be an absolute path (i.e. path == os.path.abspath(path)), e.g. /home/user/symlink that is a relative symlink to ../ which is relative. As opposed to a link to /home/user, which points to the same directory but is absolute. | I think you just have to read the actual target pathname and check it. def isrellink(path): return os.path.islink(path) and not os.path.isabs(os.readlink(path)) | 2 | 2 |
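A quick way to see this helper in action (a sketch assuming a POSIX filesystem; the temp directory and link names are made up for the demo):

import os
import tempfile

def isrellink(path):
    return os.path.islink(path) and not os.path.isabs(os.readlink(path))

d = tempfile.mkdtemp()
os.symlink("..", os.path.join(d, "rel_link"))  # symlink with a relative target
os.symlink(d, os.path.join(d, "abs_link"))     # symlink with an absolute target

print(isrellink(os.path.join(d, "rel_link")))  # True
print(isrellink(os.path.join(d, "abs_link")))  # False
print(isrellink(d))                            # False: a plain directory is not a symlink at all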
77,549,662 | 2023-11-25 | https://stackoverflow.com/questions/77549662/python-grpc-set-timeout-per-one-attempt | I wrote something like this: settings = { 'methodConfig': [ { 'name': [{}], 'timeout': '0.5s', 'retryPolicy': { 'maxAttempts': 5, 'initialBackoff': '0.1s', 'maxBackoff': '2s', 'backoffMultiplier': 2, 'retryableStatusCodes': [ 'UNAVAILABLE', 'INTERNAL', 'DEADLINE_EXCEEDED', ], }, }, ], } settings_as_json_string = json.dumps(settings) request = Request(...) async with grpc.aio.insecure_channel( host_port, options=(('grpc.service_config', settings),), ) as channel: stub = StubClass(channel=channel) await stub.SomeMethod( request=request, ) On the service side, I specifically slowed down call processing so that it takes about one second. When running the above code, I see that there are no 5 call attempts with a 0.5 second wait each time. There is only one attempt, ending with error DEADLINE_EXCEEDED after about 0.5 seconds. From here I conclude that timeout is not the waiting time for the service response in each attempt, but it is the maximum duration of interaction with the service. A similar behavior is observed when I remove timeout from the config and specify it as argument in the call SomeMethod: ... await stub.SomeMethod( request=request, timeout=0.5, ) My question is this: Is there a way to specify the timeout of waiting for the service response in each individual attempt? So after 0.5 seconds of waiting and some backoff, there was another attempt, during which we also waited for a response within 0.5 seconds, and so on until the max_attempts was reached. | There is no per-attempt timeout in the design of the retry functionality in gRPC. The general theory is that any attempt could succeed, and artificially cutting an attempt off early only reduces the chances of achieving any success at all. | 2 | 3 |
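Since the retry policy offers no per-attempt deadline, one option left to the caller is to do the retries at the application level so that each attempt gets its own timeout. This is only a sketch: it reuses the question's placeholder stub/method names and forfeits the built-in retryPolicy entirely, since the retries now live in the caller.

import asyncio
import grpc

async def call_with_per_attempt_timeout(stub, request, max_attempts=5,
                                        per_attempt_timeout=0.5, backoff=0.1):
    # Each attempt gets its own deadline instead of one overall deadline.
    for attempt in range(max_attempts):
        try:
            return await stub.SomeMethod(request=request, timeout=per_attempt_timeout)
        except grpc.aio.AioRpcError as err:
            retryable = err.code() in (grpc.StatusCode.DEADLINE_EXCEEDED,
                                       grpc.StatusCode.UNAVAILABLE,
                                       grpc.StatusCode.INTERNAL)
            if not retryable or attempt == max_attempts - 1:
                raise
            await asyncio.sleep(min(backoff * 2 ** attempt, 2.0))  # exponential backoff, capped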
77,556,822 | 2023-11-27 | https://stackoverflow.com/questions/77556822/how-to-ascribe-the-value-count-of-a-list-item-to-a-new-column-pandas | Imagine that I have a dataset df with one column containing a dictionary with two list types (list_A and list_B) as value: data = [{"list_A": [2.93, 4.18, 4.18, None, 1.57, 1.57, 3.92, 6.27, 2.09, 3.14, 0.42, 2.09], "list_B": [820, 3552, 7936, None, 2514, 4035, 6441, 15379, 2167, 6147, 3322, 1177]}, {"list_A": [2.51, 3.58, 3.58, None, 1.34, 1.34, 3.36, 5.37, 1.79, 2.69, 0.36, 1.79], "list_B": [820, 3552, 7936, None, 2514, 4035, 6441, 15379, 2167, 6147, 3322, 1177]}, {"list_A": [None, 5.94, 5.94, None, 2.23, 2.23, 5.57, 8.9, 2.97, 4.45, 0.59, 2.97], "list_B": [820, 3552, 7936, None, 2514, 4035, 6441, 15379, 2167, 6147, 3322, 1177]}] # Create a DataFrame with a column named "column_dic" df = pd.DataFrame({"column_dic": [data]}) Now, I want to create an additional column count_first_item that contains the count of non-Null values of the first item ([0]) of the lists that correspond to "List_A". The expected output of this is 2 (2.93 = +1; 2.51 = +1; None = 0). | Use list comprehension for get first values of list_A, test non missing values by notna and count Trues by sum: df['count_first_item'] = [pd.notna([y['list_A'][0] for y in x]).sum() for x in df['column_dic']] print (df) column_dic count_first_item 0 [{'list_A': [2.93, 4.18, 4.18, None, 1.57, 1.5... 2 Or use Series.explode, get values of lists by str or Series.str.get, get first values by indexing - str[0] and count non missing values by DataFrameGroupBy.count: df['count_first_item'] = (df['column_dic'].explode().str.get('list_A').str[0] .groupby(level=0).count()) print (df) column_dic count_first_item 0 [{'list_A': [2.93, 4.18, 4.18, None, 1.57, 1.5... 2 | 2 | 2 |
77,554,214 | 2023-11-27 | https://stackoverflow.com/questions/77554214/transpose-df-columns-based-on-the-column-names-in-pandas | I have a pandas dataframe as below and I would like to transpose the df based on the column Names I want transpose all the coumns that has _1,_2,_3 & _4. I have 100+ Values in my df, Below is the example data. import pandas as pd data = { # One Id Column 'id':[1], #Other columns 'c1':['1c'], 'c2':['2c'], 'c3':['3c'], #IC indicator 'oc_1':[1], 'oc_2':[0], 'oc_3':[1], 'oc_4':[1], #GC Indicator 'gc_1':['T1'], 'gc_2':['T2'], 'gc_3':['T3'], 'gc_4':['T4'], #PF Indicator 'pf_1':['PF1'], 'pf_2':['PF2'], 'pf_3':['PF3'], 'pf_4':['PF4'], #Values 'V1_1':[11], 'V1_2':[12], 'V1_3':[13], 'V1_4':[14], 'S1_1':[21], 'S1_2':[22], 'S1_3':[23], 'S1_4':[24] } df = pd.DataFrame(data) I need to transpose this df as below output I tried below code: standard_cols = ['id','c1','c2','c3'] value_cols = ['V1_1','V1_2','V1_3','V1_4','S1_1','S1_2','S1_3','S1_4'] result_cols = standard_cols+['OC','GC','PF','Var','value'] melted_df = pd.melt(df, id_vars=standard_cols + ['oc_1','oc_2','oc_3','oc_4','gc_1','gc_2','gc_3','gc_4','pf_1','pf_2','pf_3','pf_4'], value_vars=value_cols,var_name='Var',value_name='value') print(melted_df) | You can first reshape with wide_to_long, then melt: standard_cols = ['id','c1','c2','c3'] value_cols = ['V1','S1'] result_cols = ['OC','GC','PF'] result_cols = [c.lower() for c in result_cols] out = (pd .wide_to_long( df, i=standard_cols, stubnames=result_cols+value_cols, j='j', sep='_') .reset_index() .melt(standard_cols+result_cols+['j'], var_name='VAR', value_name='Value') .assign(VAR=lambda d: d['VAR']+'_'+d.pop('j').astype('str')) ) print(out) Output: id c1 c2 c3 oc gc pf VAR Value 0 1 1c 2c 3c 1 T1 PF1 V1_1 11 1 1 1c 2c 3c 0 T2 PF2 V1_2 12 2 1 1c 2c 3c 1 T3 PF3 V1_3 13 3 1 1c 2c 3c 1 T4 PF4 V1_4 14 4 1 1c 2c 3c 1 T1 PF1 S1_1 21 5 1 1c 2c 3c 0 T2 PF2 S1_2 22 6 1 1c 2c 3c 1 T3 PF3 S1_3 23 7 1 1c 2c 3c 1 T4 PF4 S1_4 24 | 2 | 5 |
77,539,458 | 2023-11-23 | https://stackoverflow.com/questions/77539458/using-pytest-and-hypothesis-how-can-i-make-a-test-immediately-return-after-disc | I'm developing a library, and I'm using hypothesis to test it. I usually sketch out a (buggy) implementation of a function, implement tests, then iterate by fixing errors and running tests. Usually these errors are very simple (e.g., a typo), and I don't need simplified test cases to figure out the issue. For example: def foo(value): return vslue + 1 # a silly typo @given(st.integers()) def test_foo(x): assert foo(x) == x + 1 How can I make hypothesis stop generating test cases as soon as it has found a single counterexample? Ideally, this would be using commandline flags to pytest. | Building on my comment from earlier - did you look at this? https://hypothesis.readthedocs.io/en/latest/settings.html#hypothesis.settings.phases Looks like there's some support for having different "Setting Profiles". I bet you could have one where you eliminate the Shrinking phase. Maybe that's the way to do. Yeah! That's probably it :) That gets you this command line option: pytest tests --hypothesis-profile <profile-name> So, in summary: In your conftest.py: from hypothesis import settings, Phase settings.register_profile("failfast", phases=[Phase.explicit, Phase.reuse, Phase.generate]) And then run it with: pytest tests --hypothesis-profile failfast | 2 | 3 |
77,548,545 | 2023-11-25 | https://stackoverflow.com/questions/77548545/adding-subttiles-to-plotly-dash-video-player-in-python | I am trying to add subtitles to my Plotly Dash video player, i.e. VTT captions overlay, in Python. I cannot find any examples or instruction on this. from dash import Dash, dcc, html, Input, Output, State import dash_player And in a html Div somewhere: dash_player.DashPlayer( id='vid1', controls = True, url = "http://86.47.173.33:1935/playlist.m3u8", width='100%', style={'margin-top':'20px','margin-bottom':'20px'}, playing= True, muted= True ) The DashPlayer object has no methods to handle a subtitle track in the documentation. Perhaps this is something that could be handled in CSS? To find some React player examples. | My strategy in the end was to create a div ('subtitle-container') as a sibling to the DashPlayer in the layout: dash_player.DashPlayer( id='vid', controls = True, url = None, style={'margin-top':'20px','margin-bottom':'20px', 'margin-left':'10px'}, playing= False, muted= False ), html.Div( id='subtitle-container', children=[html.P("subtitle text")], style={} ), I implemented my own logic to display the subtitles, in the subtitle-container using the DashPlayer's 'current-time' callback. @app.callback([Output('subtitle-container','children'),Output('subtitle-container', 'style')], [Input('vid', 'currentTime')]) def update_subtitles(current_time): subtitle_text = get_subtitle_for_time(current_time) # Determine whether subtitles are being displayed subtitles_displayed = bool(subtitle_text) # Set the alpha value based on whether subtitles are being displayed alpha_value = 0 if not subtitles_displayed else 0.5 # Update subtitle container style subtitle_style={ 'position':'absolute', 'bottom':'6%', 'left': '25%', 'width':'50%', 'background-color':f'rgba(30,30,30, {alpha_value})', 'padding':'5px', 'whiteSpace':'pre-wrap', 'color':'white', 'text-align':'center' } return subtitle_text, subtitle_style This callback returns the relevant caption based on the player's current time, to be displayed in the 'subtitle-container'. I also implement logic to determine the style for the subtitle, semi-transparent dark background when caption present, and no background (alpha = 0), when no caption is present. To gather all the captions initially, I parse an SRT file, to produce a list of caption items. Each caption item is a list of 3 items, start time, end time, and caption text. e.g [1185.65, 1193.74, "You're the one who was frowning.\nFrisco. San Francisco. Exactly.']. The timings have been converted from the original SRT timestamps. Getting the appropriate cue for the player's current time is simply a matter of: def get_subtitle_for_time(current_time): for caption in captions: if current_time <= caption[1]: if current_time >= caption[0]: return caption[2] return '' | 3 | 2 |
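The SRT-to-captions step mentioned in the answer is not shown there; a minimal sketch of that parsing could look like this (the file name is an assumption, and real SRT files may need more robust handling):

import re

def parse_srt(path):
    def to_seconds(ts):  # "HH:MM:SS,mmm" -> seconds as float
        h, m, s_ms = ts.split(":")
        s, ms = s_ms.split(",")
        return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000
    captions = []
    with open(path, encoding="utf-8") as f:
        # SRT blocks are separated by blank lines: index, timing line, then text lines
        for block in f.read().strip().split("\n\n"):
            lines = block.splitlines()
            if len(lines) < 3:
                continue
            m = re.match(r"(\S+)\s+-->\s+(\S+)", lines[1])
            if m:
                captions.append([to_seconds(m.group(1)),
                                 to_seconds(m.group(2)),
                                 "\n".join(lines[2:])])
    return captions

captions = parse_srt("subtitles.srt")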
77,539,758 | 2023-11-23 | https://stackoverflow.com/questions/77539758/asyncio-bufferedprotocol-doesnt-do-what-it-says-in-the-documentation-doesnt-a | According to the documentation: "BufferedProtocol implementations allow explicit manual allocation and control of the receive buffer. Event loops can then use the buffer provided by the protocol to avoid unnecessary data copies. This can result in noticeable performance improvement for protocols that receive big amounts of data. Sophisticated protocol implementations can significantly reduce the number of buffer allocations." Since my program reads large amounts of data from a socket connection and knows the amount of data it should receive; I was using a custom BufferedProtocol in the hopes of avoiding the data being copied around an unnecessary number of times, but an exception in my get_buffer() method led me to discover the ugly truth through the traceback: The buffer is not actually being used! instead the data is still copied, and is actually copied an extra time into the buffer I provided!. The get_buffer() and buffer_updated() methods are being called by protocols._feed_data_to_buffered_proto() which receives a bytes object and then feeds it into the buffer I provided, hence leading not to a reduction in the number of times the data is being copied but rather to the data being copied an extra time!. After digging further into the guts of asyncio I find that the actual reading of data from the raw socket is actually being done using socket.recv_into(), but it is just not being received directly into the buffer I provided, instead it is later COPIED to the buffer I provided. Is this correct? and if so WTF?! or am I completely missing something obvious? In case my understanding is correct, how do I fix it? what would it take to make asyncio actually avoid the extra copying by using my buffer directly in socket.recv_into()? can this be done without completely monkey-patching asyncio or rewriting the eventloop? ... Regarding calls for a "minimal reproducible example": Raise an exception inside BufferedProtocol.get_buffer(), connected using loop.create_connection(), and the traceback will lead you to the following function inside asyncio.protocols: def _feed_data_to_buffered_proto(proto, data): data_len = len(data) while data_len: buf = proto.get_buffer(data_len) buf_len = len(buf) if not buf_len: raise RuntimeError('get_buffer() returned an empty buffer') if buf_len >= data_len: buf[:data_len] = data proto.buffer_updated(data_len) return else: buf[:buf_len] = data[:buf_len] proto.buffer_updated(buf_len) data = data[buf_len:] data_len = len(data) Not only is this function not avoiding unnecessary copying, in the worst case it is actually drastically increasing the number of times the data is being copied! And no, data is not a memoryview as you might have hoped, it is a bytes object! And in case you don't believe me that this creates many more unnecessary copy-operations please see this SO question regarding slicing bytes objects. 
| After spending a lot of time reading through the asyncio code and related issues I think I can at least partially answer the question myself, although I would still appreciate anyone smarter than me chiming in and elaborating or correcting me if I'm wrong: The issue: There is an issue with the WSARecv() Windows function used by ProactorEventLoop which can cause data loss, and for reasons I don't fully understand this has so far prevented the buffer returned by get_buffer() from being used directly when the ProactorEventLoop is used, hence the helper function _feed_data_to_buffered_proto() was created. Workaround: If the event loop policy is instead set to WindowsSelectorEventLoopPolicy then the BufferedProtocol will behave as expected, i.e. the buffer returned by get_buffer() will be used directly in recv_into(). This however comes with some drawbacks (see Platform Support): the SelectorEventLoop on Windows doesn't work with pipes or subprocesses! In my case, since I'm not using pipes or subprocesses, I solved it by adding the line: asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) So it seems that my issue is mainly with Windows and not with Python, what a surprise. I still think it would be nice if this was mentioned in the docs so that I didn't have to find out myself the hard way. | 3 | 1 |
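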
77,552,595 | 2023-11-26 | https://stackoverflow.com/questions/77552595/regex-find-words-that-have-a-o-in-second-position-and-end-in-ions | The other day, I was trying to teach the principles of Regex to a student - I am far from an expert but wanted to show the principles of it. I have a text file containing all the words of the English language, and I wanted to define a series of rules in regex so that the only word that would come out would be "CONGRATULATIONS". We did it in Python code first, then using the re python library. The first two rules were to find words having O as the second letter, and words ending in IONS. While we first tried to find the words matching a rule and then the other in pure Python, we tried to do it in one go using the regex pattern ^.O[A-Z]*IONS$. We were surprised to see that the two methods differed by one word: IONS. I mean, it totally makes sense, as the regex pattern is looking, by definition, for words of at least 6 characters. But it got us wondering if there was a way to get the words that have O as a second letter, which can also be the O of IONS, in one single regex pattern. I know we can totally do it in two steps, first getting the words having an O as the second letter, and then getting the words ending in IONS among these words, but I just wondered if it was possible in one step alone. | Short answer is: ^([A-Z]O[A-Z]*)?IONS$ Explanation: Using a Venn diagram. Rule 1: words with O as the second letter: ^.O or ^.O.*$ Rule 2: words that end with IONS: IONS$ or ^.*IONS$ In the intersection of Rule 1 & 2 there are 2 disjoint cases: case 1: the shortest word is the word IONS itself; there is no 5-letter case, because the second letters contradict each other: "O" in rule 1 while "I" in rule 2. case 2: for 6 letters and above, your pattern suffices. The union of these 2 is IONS OR your pattern, which is ^(IONS|.O.*IONS)$, but as @InSync commented you can simplify it to ^(.O.*)?IONS$ How & why? Since: A: the word IONS satisfies both rules, so satisfying either one would match IONS. Let's keep the stricter rule 2: IONS$ as the 4-letter word: ^IONS$ B: there are 2 cases. We can make rule 1 optional (rule1)?, when matching: 6-letter words ^(.O)?IONS$ or 6+ letter words ^(.O.*)?IONS$ or even stricter as needed: if 'words' is A-Z only, ^([A-Z]O[A-Z]*)?IONS$ | 2 | 2 |
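A quick sanity check of the accepted pattern against a handful of words (the word list here is made up for the demo):

import re

pattern = re.compile(r"^([A-Z]O[A-Z]*)?IONS$")
for word in ["CONGRATULATIONS", "IONS", "POTIONS", "LIONS", "NATIONS"]:
    print(word, bool(pattern.match(word)))
# CONGRATULATIONS True, IONS True, POTIONS True (second letter O, ends in IONS),
# LIONS False, NATIONS False (second letter is not O)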
77,548,225 | 2023-11-25 | https://stackoverflow.com/questions/77548225/reduce-list-of-lists-in-jax | I have a list holding many lists of the same structure (Usually, there are much more than two sub-lists inside the list, the example shows two lists for the sake of simplicity). I would like to create the sum or product over all sub-lists so that the resulting list has the same structure as one of the sub-lists. So far I tried the following using the tree_reduce method but I get errors that I don't understand. I could need some guidance on how to use tree_reduce() in such a case. import jax import jax.numpy as jnp list_1 = [ [jnp.asarray([1]), jnp.asarray([2, 3])], [jnp.asarray([4]), jnp.asarray([5, 6])], ] list_2 = [ [jnp.asarray([7]), jnp.asarray([8, 9])], [jnp.asarray([10]), jnp.asarray([11, 12])], ] list_of_lists = [list_1, list_2] reduced = jax.tree_util.tree_reduce(lambda x, y: x + y, list_of_lists, 0, is_leaf=True) # Expected # reduced = [ # [jnp.asarray([8]), jnp.asarray([10, 12])], # [jnp.asarray([14]), jnp.asarray([16, 18])], # ] | You can do this with tree_map of a sum over the splatted list: reduced = jax.tree_util.tree_map(lambda *args: sum(args), *list_of_lists) print(reduced) [[Array([8], dtype=int32), Array([10, 12], dtype=int32)], [Array([14], dtype=int32), Array([16, 18], dtype=int32)]] | 2 | 2 |
77,544,934 | 2023-11-24 | https://stackoverflow.com/questions/77544934/why-do-i-need-the-chromedriver-in-selenium | I have following code: from selenium import webdriver driver = webdriver.Chrome() driver.get("https://google.com") This code works well except that it is really slow. I don´t use the ChromeDriver. Just the normal Chrome Browser on my Mac. Why do I have to download the ChromeDriver? It doesn´t speed up the code execution. I've already tried that. Also: Why do I have to close the driver at the end of my programm with driver.close()? The browser closes automatically when the code is finished. Thank you! | Your code is perfectly fine. selenium version > 4.12.0 There is no need to download the chromedriver manually, Selenium's new in-built tool Selenium Manager will automatically download and manage the drivers for you. https://www.selenium.dev/documentation/selenium_manager/ https://www.selenium.dev/blog/2023/status_of_selenium_manager_in_october_2023/ You do not have to use driver.close() with the above code. | 3 | 4 |
77,544,480 | 2023-11-24 | https://stackoverflow.com/questions/77544480/understanding-memory-leak-with-c-extension-for-python | I have been struggling to understand what I am doing wrong with the memory management of this this C++ function for a python module, but I don't have much experience in this regard. Every time I run this function, the memory consumption of the python interpreter increases. The function is supposed to take in two numpy arrays and create new numpy arrays with the desired output. extern "C" PyObject *radec_to_xyz(PyObject *self, PyObject *args) { // Parse the input arguments PyArrayObject *ra_arrobj, *dec_arrobj; if (!PyArg_ParseTuple(args, "O!O!", &PyArray_Type, &ra_arrobj, &PyArray_Type, &dec_arrobj)) { PyErr_SetString(PyExc_TypeError, "invalid arguments, expected two numpy arrays"); return nullptr; } // skipping checks that would ensure: // dtype==float64, dim==1, len()>0 and equal for inputs, data or contiguous npy_intp size = PyArray_SIZE(ra_arrobj); // create the output numpy array with the same size and datatype PyObject *x_obj = PyArray_EMPTY(1, &size, NPY_FLOAT64, 0); if (!x_obj) return nullptr; Py_XINCREF(x_obj); // get pointers to the arrays double *ra_array = static_cast<double *>(PyArray_DATA(ra_arrobj)); double *dec_array = static_cast<double *>(PyArray_DATA(dec_arrobj)); double *x_array = static_cast<double *>(PyArray_DATA(reinterpret_cast<PyArrayObject*>(x_obj))); // compute the new coordinates for (npy_intp i = 0; i < size; ++i) { double cos_ra = cos(ra_array[i]); double cos_dec = cos(dec_array[i]); // compute final coordinates x_array[i] = cos_ra * cos_dec; } // return the arrays holding the new coordinates return Py_BuildValue("O", x_obj); } I suspect that I am getting the reference counts wrong and therefore, the returned numpy arrays don't get garbage collected. I tried changing the reference counts, but this did not help. When I pass the X array as additional argument from the python interpreter instead of allocating it in the function, the memory leak is gone as expected. | Py_XINCREF(x_obj) is not needed because Py_BuildValue("O", x_obj) will increment the reference count for you. In other words, you're accidentally increasing the reference count too much by 1 | 2 | 2 |
77,543,244 | 2023-11-24 | https://stackoverflow.com/questions/77543244/logarithm-of-a-positive-number-result-in-minus-infinity-python | I have this image: HI00008918.png I want to apply a logarithmic function (f(x) = (1/a)*log(x + 1), where a = 0.01) on the image... So this is the code: import numpy as np import matplotlib.pyplot as plt import skimage.io as io car = io.imread('HI00008918.png') # plt.imshow(car, cmap='gray', vmin=0, vmax=255) a = 0.01 fnLog = lambda x : (1/a)*np.log(x + 1) # logarithmic function # the original image has white pixels (=0) and black pixels (=255) carLog = fnLog(car) # Applying the function fnLog print(car[0][0][-1]) print(carLog[0][0][-1]) print(fnLog(car[0][0][-1])) The output: 255 -inf 554.5177444479563 Look at one moment it results in -inf and at others it results in the correct value :( Now I will show the arrays: carLog = [[[277.2 277.2 277.2 -inf] [289. 289. 289. -inf] [304.5 304.5 304.5 -inf] ... [423.5 431.8 429. -inf] [422. 434.5 427.8 -inf] [437. 450. 440.5 -inf]] [[434.5 434.5 434.5 -inf] [433.2 433.2 433.2 -inf] [430.5 430.5 430.5 -inf] ... [422. 430.5 427.8 -inf] [420.2 429. 426.2 -inf] [433.2 444.2 438.2 -inf]]] car = [[[ 15 15 15 255] [ 17 17 17 255] [ 20 20 20 255] ... [148 138 149 255] [138 125 142 255] [148 134 151 255]] [[ 10 10 10 255] [ 14 14 14 255] [ 19 19 19 255] ... | It looks like np.log(x + 1) gives -Inf only where x is 255 in an array. Because the array x is uint8, adding 1 to 255 causes overflow, which wraps the result yielding 0. The log of 0 is -Inf. You might want to cast the image to a floating-point type before applying the function: carLog = fnLog(car.astype(np.float32)) When you apply the function to a value extracted from the image, you are working with a Python int, which doesn’t ever overflow. | 2 | 3 |
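The wrap-around is easy to reproduce in isolation, which also confirms the cast suggested in the answer:

import numpy as np

a = np.array([255], dtype=np.uint8)
print(a + 1)                              # [0]  -> uint8 overflow wraps around to 0
print(np.log(a + 1))                      # [-inf] plus a divide-by-zero RuntimeWarning
print(np.log(a.astype(np.float32) + 1))   # [5.5451775] -> correct after casting to float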
77,537,125 | 2023-11-23 | https://stackoverflow.com/questions/77537125/asan-memory-leaks-in-embedded-python-interpreter-in-c | Address Sanitizer is reporting a memory leak (multiple actually) originating from an embedded python interpreter when testing some python code exposed to c++ using pybind11. I have distilled the code down to nothing other than calling PyInitialize_Ex and then PyFinalizeEx #include <Python.h> int main() { Py_InitializeEx(0); Py_FinalizeEx(); return 0; } All the memory leaks originate from a call to Py_InitializeEx. Example: Direct leak of 576 byte(s) in 1 object(s) allocated from: #0 0x7f0d55ce791f in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:69 #1 0x7f0d55784a87 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x133a87) #2 0x7f0d55769e4f (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x118e4f) #3 0x7f0d5576a084 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x119084) #4 0x7f0d5576b0fa (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x11a0fa) #5 0x7f0d557974e6 in PyType_Ready (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x1464e6) #6 0x7f0d5577fea6 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x12eea6) #7 0x7f0d5584eda5 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x1fdda5) #8 0x7f0d559697b3 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x3187b3) #9 0x7f0d558524e8 in Py_InitializeFromConfig (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x2014e8) #10 0x7f0d558548fb in Py_InitializeEx (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x2038fb) #11 0x55fe54d040ce in main /home/steve/src/test.cpp:8 #12 0x7f0d54fe7d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 #13 0x7f0d54fe7e3f in __libc_start_main_impl ../csu/libc-start.c:392 #14 0x55fe54d04124 in _start (/home/steve/src/build/test+0x1124) Questions: How can I free up the memory so asan is happy? Failing that, is it safe just to suppress Py_InitializeEx in its entirety? | Py_Initialize has a long lasting known issue, whose origin is in C source codes. For C++ programmers I recommend boost.python as a replacement that uses C++ features to simplify interoperability between the 2 languages. | 4 | 1 |
77,542,762 | 2023-11-24 | https://stackoverflow.com/questions/77542762/in-python-typing-is-there-a-way-to-specify-combinations-of-allowed-generic-types | I have a pretty small but versatile (maybe too much) class in Python that effectively has two generic types that are constrained together. My code (yet poorly typed) code should show my intent: class ApplyTo: def __init__( self, *transforms: Callable[..., Any], to: Any | Sequence[Any], dispatch: Literal['separate', 'joint'] = 'separate', ): self._transform = TransformsPipeline(*transforms) self._to = to if isinstance(to, Sequence) else [to] self._dispatch = dispatch def __call__(self, data: MutableSequence | MutableMapping): if self._dispatch == 'separate': for key in self._to: data[key] = self._transform(data[key]) return data if self._dispatch == 'joint': args = [data[key] for key in self._to] transformed = self._transform(*args) for output, key in zip(transformed, self._to): data[key] = output return data assert False I have double checked that this works in runtime and is pretty straightforward, but the typing is really horrendous. So the idea is that when we set up to to be an int, then data should be MutableSequence | MutableMapping[int, Any]; when to is Hashable, then data should be MutableMapping[Hashable or whatever type of to is, Any]. I know that int is Hashable which doesn't make this easier. My very poor attempt at typing this looks like this: T = TypeVar('T', bound=Hashable | int) C = TypeVar('C', bound=MutableMapping[T, Any] | MutableSequence) class ApplyTo(Generic[C, T]): def __init__( self, *transforms: Callable[..., Any], to: T | Sequence[T], dispatch: Literal['separate', 'joint'] = 'separate', ): self._transform = TransformsPipeline(*transforms) self._to = to if isinstance(to, Sequence) else [to] self._dispatch = dispatch def __call__(self, data: C): if self._dispatch == 'separate': for key in self._to: data[key] = self._transform(input[key]) return input if self._dispatch == 'joint': args = [data[key] for key in self._to] transformed = self._transform(*args) for output, key in zip(transformed, self._to): data[key] = output return data assert False Which makes mypy complain (no surprise): error: Type variable "task_driven_sr.transforms.generic.T" is unbound [valid-type] note: (Hint: Use "Generic[T]" or "Protocol[T]" base class to bind "T" inside a class) note: (Hint: Use "T" in function signature to bind "T" inside a function) Is there any way to type hint this correctly and somehow bound the type of to and data together? Maybe my approach is flawed and I have reached a dead end. Edit: Fixed some code inside 'joint' branch of dispatch, it was not connected with the typing in question, nevertheless I have made it as it should be. 
| You can replace the union of MutableMapping and MutableSequence with a Protocol: import typing DispatchType = typing.Literal['separate', 'joint'] # `P` must be declared with `contravariant=True`, otherwise it errors with # 'Invariant type variable "P" used in protocol where contravariant one is expected' K = typing.TypeVar('K', contravariant=True) class Indexable(typing.Protocol[K]): def __getitem__(self, key: K): pass def __setitem__(self, key: K, value: typing.Any): pass # Accepts only hashable types (including `int`s) H = typing.TypeVar('H', bound=typing.Hashable) class ApplyTo(typing.Generic[H]): _to: typing.Sequence[H] _dispatch: DispatchType _transform: typing.Callable[..., typing.Any] # TODO Initialize `_transform` def __init__(self, to: typing.Sequence[H] | H, dispatch: DispatchType = 'separate') -> None: self._dispatch = dispatch self._to = to if isinstance(to, typing.Sequence) else [to] def __call__(self, data: Indexable[H]) -> typing.Any: if self._dispatch == 'separate': for key in self._to: data[key] = self._transform(data[key]) return data if self._dispatch == 'joint': args = [data[key] for key in self._to] return self._transform(*args) assert False Usage: def main() -> None: r0 = ApplyTo(to=0)([1, 2, 3]) # typechecks r0 = ApplyTo(to=0)({1: 'a', 2: 'b', 3: 'c'}) # typechecks r1 = ApplyTo(to='a')(['b', 'c', 'd']) # does not typecheck: Argument 1 to "__call__" of "Applier" has incompatible type "list[str]"; expected "Indexable[str]" r1 = ApplyTo(to='a')({'b': 1, 'c': 2, 'd': 3}) # typechecks | 3 | 2 |
77,541,867 | 2023-11-24 | https://stackoverflow.com/questions/77541867/class-type-of-class-that-was-created-with-metaclass | class Meta(type): def __new__(cls, name, bases, dct): new_class = type(name, bases, dct) new_class.attr = 100 # add some to class return new_class class WithAttr(metaclass=Meta): pass print(type(WithAttr)) # <class 'type'> Why does it print <class 'type'>, but not <class '__main__.Meta'> Am I right that class WithAttr is instance of Meta? | This is because you're making an explicit call to type(name, bases, dct), which in turn calls type.__new__(type, name, bases, dct), with the type class passed as the first argument to the type.__new__ method, effectively constructing an instance of type rather than Meta. You can instead call type.__new__(cls, name, bases, dct), passing the child class as an argument to contruct a Meta instance. In case __new__ is overridden in a parent class that is a child class of type, call super().__new__ instead of type.__new__ to allow the method resolution order to be followed. Change: new_class = type(name, bases, dct) to: new_class = super().__new__(cls, name, bases, dct) Demo: https://ideone.com/KIy2qG | 3 | 5 |
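Putting the accepted fix together, the class is now reported as an instance of Meta, which is what the question expected — a quick check:

class Meta(type):
    def __new__(cls, name, bases, dct):
        new_class = super().__new__(cls, name, bases, dct)
        new_class.attr = 100  # add an attribute to the class being created
        return new_class

class WithAttr(metaclass=Meta):
    pass

print(type(WithAttr))              # <class '__main__.Meta'>
print(isinstance(WithAttr, Meta))  # True
print(WithAttr.attr)               # 100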
77,542,112 | 2023-11-24 | https://stackoverflow.com/questions/77542112/vectorized-lookup-in-dataframe-using-numpy-array | Given a numpy array such as the one below, is it possible to use vectorization to return the associated value in column HHt without using a for loop? ex_arr = [2643, 2644, 2647] for i in ex_arr: h_p = df.at[i, "HHt"] Example df: HHt 2643 1 2644 2 2645 3 2646 4 2647 5 2648 6 2649 7 2650 8 Expected result: 1 2 5 | Use DataFrame.loc, if need list or array add Series.to_list or Series.to_numpy: print (df.loc[ex_arr,'HHt'].to_list()) [1, 2, 5] print (df.loc[ex_arr,'HHt'].to_numpy()) [1 2 5] | 2 | 3 |
77,539,424 | 2023-11-23 | https://stackoverflow.com/questions/77539424/breadth-first-search-bfs-in-python-for-path-traversed-and-shortest-path-taken | I tried implementing a bread-first search algorithm a lot, but I just don't get it right. The start node is S and the goal node is J. I'm just not getting J at all. This is the code that I'm using for printing the path being traversed, but I also need to find the shortest path: graph = { 'S' : ['A','B', 'C'], 'A' : ['D'], 'B' : ['E'], 'C' : ['F', 'J'], 'D' : ['G'], 'E' : ['I', 'J'], 'F' : ['S'], 'J' : [], 'G' : ['H'], 'I' : [], 'J' : [], 'H' : ['D'] } visited = [] # List for visited nodes. queue = [] #Initialize a queue def bfs(visited, graph, node): #function for BFS visited.append(node) queue.append(node) while queue: # Creating loop to visit each node m = queue.pop(0) print (m, end = " ") for neighbour in graph[m]: if neighbour not in visited: visited.append(neighbour) queue.append(neighbour) # Driver Code print("Following is the Breadth-First Search") bfs(visited, graph, 'S') # function calling | You'd need to tell your BFS function what your target node is, so that the BFS traversal can stop as soon as that target node is encountered. Some other comments: Your dict literal has a duplicate key 'J' — you should remove it. visited and queue only serve a single BFS call, and should start from scratch each time a BFS search is launched. So these should not be global names, but be defined in the scope of the function only. You print the nodes as they are visited, but this does not represent a path. It represents a level order traversal of the graph. So don't print this. Instead have the function return the path (see next point). The list visited is used to avoid revisiting the same node again, but it doesn't give you any information about how we got to a certain node. You could transform visited into a dictionary. Then, when you mark a node as visited, mark it with the node where you came from. This way you can reconstruct a path once you have found the target node — you can walk backwards back to the starting node. You've defined queue as a list, but a list's pop(0) method call is not efficient. Instead use a deque and its popleft() method. Here is your code with the above remarks taken into account. from collections import deque graph = { 'S' : ['A','B', 'C'], 'A' : ['D'], 'B' : ['E'], 'C' : ['F', 'J'], 'D' : ['G'], 'E' : ['I', 'J'], 'F' : ['S'], 'G' : ['H'], 'I' : [], 'J' : [], 'H' : ['D'] } def bfs(graph, node, target): # These lists should not be global. At each call of BFS, they should reset visited = {} # Use a dict so you can store where the visit came from queue = deque() # Use a deque to not lose efficiency with pop(0) visited[node] = None queue.append(node) while queue: m = queue.popleft() if m == target: # Bingo! 
# Extract path from visited information path = [] while m: path.append(m) m = visited[m] # Walk back return path[::-1] # Reverse it for neighbour in graph[m]: if neighbour not in visited: visited[neighbour] = m # Remember where we came from queue.append(neighbour) # Driver Code print("Following is the Breadth-First Search") print(bfs(graph, 'S', 'J')) Output: ['S', 'C', 'J'] If you want to see which nodes get visited, then you have some options: Show when a node is dequeued from the queue: Change: m = queue.popleft() to: m = queue.popleft() print (m, end = " ") or: Show when a node is enqueued to the queue: Change: queue.append(neighbour) to: queue.append(neighbour) print (neighbour, end = " ") and change: queue.append(node) to: queue.append(node) print (node, end = " ") The output is slightly different, and it depends on what you call "visited". The second one will output two more nodes that will never be popped from the queue. To separate the output of the visits from the output of the path, just print a newline character with print(). So, in the driver code do: path = bfs(graph, 'S', 'J') print() # Extra print to start a new line print(path) | 3 | 3 |
77,541,316 | 2023-11-24 | https://stackoverflow.com/questions/77541316/duckdb-how-to-iterate-over-the-result-returned-from-duckdb-sql-command | Using DuckDB Python client, I want to iterate over the results returned from query like import duckdb employees = duckdb.sql("select * from 'employees.csv' ") Using type(employees) returns 'duckdb.duckdb.DuckDBPyRelation'. I'm able to get the number of rows with len(employees), however for-loop iteration seems to not work. What is the correct way to process each row at a time? | Apparently, from DuckDB API, something like the following should work. I unfortunately cannot test it because DuckDB's package won't install for whatever reason. I hope this helps you get in the right path to finding your answer. import duckdb employees = duckdb.sql("select * from 'employees.csv'") rows = employees.fetchall() # Iterate over the rows for row in rows: print(row) | 2 | 3 |
77,528,356 | 2023-11-22 | https://stackoverflow.com/questions/77528356/gpiod-is-not-a-package | I'm trying to use libgpiod in python on my orangepi model 4B. I installed it with: sudo apt install libgpiod-dev python3-libgpiod Now I try to use it: from gpiod.line import Direction, Value But I get an error: ModuleNotFoundError: No module named 'gpiod.line'; 'gpiod' is not a package If I open python in terminal and import gpiod the autocomplete options for gpiod. are: gpiod.Chip( gpiod.LINE_REQ_FLAG_OPEN_DRAIN gpiod.ChipIter( gpiod.LINE_REQ_FLAG_OPEN_SOURCE gpiod.LINE_REQ_DIR_AS_IS gpiod.Line( gpiod.LINE_REQ_DIR_IN gpiod.LineBulk( gpiod.LINE_REQ_DIR_OUT gpiod.LineEvent( gpiod.LINE_REQ_EV_BOTH_EDGES gpiod.LineIter( gpiod.LINE_REQ_EV_FALLING_EDGE gpiod.find_line( gpiod.LINE_REQ_EV_RISING_EDGE gpiod.version_string( gpiod.LINE_REQ_FLAG_ACTIVE_LOW If I install gpiod through pip it says module 'gpiod' has no attribute 'Chip' when I try to use gpiod.Chip. What is wrong? Thanks in advance. | libgpiod-dev includes the libgpiod C lib and its header files, while python3-libgpiod includes the Python bindings to that lib. So with the command: sudo apt install libgpiod-dev python3-libgpiod you're installing the libgpiod lib, C headers and Python bindings. This lib is an abstraction layer over the GPIO character device in Linux. Examples for controlling GPIOs with these Python bindings can be found here. Try copy-pasting one of the examples and running it. Make sure the package is installed correctly and that the character device interface is properly configured (run ls /dev; you must see gpiochipX devices there). I just tested it and it worked as expected. NOTE: When you use pip install gpiod you can't use gpiod.Chip, but you can use gpiod.chip instead (note that the former is capitalized). This is because you are using an old version of the gpiod python package. You said that you're using gpiod version 1.5.4, so this is the corresponding documentation for that version. As you can see, it claims to be a pure Python library with no dependencies on other packages. It also includes a link to the proper documentation site, where you can see that the examples use gpiod.chip instead of gpiod.Chip. The new version of the gpiod package (2.1.3) is not a pure python package but official bindings to libgpiod, as you can see in the proper doc. This version DOES support the usage of gpiod.Chip, as you can see in its examples. So, how can you use gpiod.Chip through a pip installation? pip install gpiod==2.1.3 This guarantees you use the new version. | 2 | 3 |
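For completeness, a rough sketch of driving one line with the apt-installed v1 bindings discussed above. The chip name and line offset are assumptions for the example (check gpioinfo for your board), and note that the pip-installed gpiod 2.x API differs:

import gpiod

chip = gpiod.Chip("gpiochip0")                                 # assumed chip name
line = chip.get_line(17)                                       # assumed line offset
line.request(consumer="example", type=gpiod.LINE_REQ_DIR_OUT)  # request the line as output
line.set_value(1)                                              # drive the line high
line.release()
chip.close()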
77,537,779 | 2023-11-23 | https://stackoverflow.com/questions/77537779/how-do-i-use-pip-pipx-and-virtualenv-of-the-current-python-version-after-upgrad | This is the first time I am upgrading python (from 3.11.3 to 3.12.0) and some questions came up along the way. I think I have somehow understood how Path works on windows, but am not getting the complete picture right now. At the moment I have the issue, that python and pip still by default use the old python installation, so this led to a few questions: How do I use the correct version of pip? My understanding is, that the version of pip which is Called is determined by which Path entry of Python\Scripts is preferred by Windows. At the moment, python 3.11 and 3.9 are installed into C:\Program Files\, and their installation path is added to system PATH. 3.12 however is installed to C:\Users\...\AppData\Local\Programs\Python\Python312\ and the installation path is added to user PATH. Should I just delete the PATH entries directing to python 3.9 and 3.11? Does Windows prefer system path over user path? How do I ensure the right version of virtualenv and pipx are used? In the Scripts directory, I can't find entries for pipx and virtualenv, and there are no seperate entries for pipx and virtualenv in either PATH. How does my Terminal know, where to find the correct executables? Is this managed by pip? And how do I get my system to use the newly installed versions of virtualenv and pipx which use python3.12? Is there an easier way of upgrading my pipx installed tools than reinstalling all of them for a new python version? I have some tools installed via pipx, e.g. hatch, mypy, ipython, virtualenv. Do I need to reinstall all of those tools for every python upgrade I make? Or is there a way to tell pipx that I want it to use a new python version now? Edit: My Path entries System Path: C:\Program Files\Python311\Scripts\ C:\Program Files\Python311\ C:\Program Files\Python39\Scripts\ C:\Program Files\Python39\ User Path: C:\Users\UserName\AppData\Local\Programs\Python\Python312\Scripts\ C:\Users\UserName\AppData\Local\Programs\Python\Python312\ C:\Users\UserName\AppData\Local\Programs\Python\Launcher\ C:\users\UserName\appdata\roaming\python\python311\scripts | Through a lot of trial and error I figured it out, maybe it helps some people: How do I use the correct version of pip? In my case, I needed to remove python 3.9 and 3.11 (especially the Scripts folder, that is where pip.exe is located) from the system path. I also removed 3.11/Scripts from user path. For my terminal to notice the 3.12 path, I had to relog after installing python 3.12.This seems to be the case with all entries in user path. How do I ensure, that the right version of pipx and virtualenv are used? If installed through pip, pipx and virtualenv are specific to the python version they were installed with. So I need to reinstall pipx and virtualenv using the correct version of pip. Is there an easier way of upgrading my pipx installed tools than reinstalling all of them for a new python version? Yes there is. When installing pipx with pip from python3.12, it automatically recognises it's installations from the version installed with 3.11. So when upgrading the python version, all one has to do, is reinstalling pipx. If you want to upgrade the pipx packages to python3.12 aswell, you can use pipx reinstall-all. | 2 | 3 |
77,536,610 | 2023-11-23 | https://stackoverflow.com/questions/77536610/filling-nulls-after-merge-but-only-to-the-newly-merged-columns | I have two dataframes that I am merging: import pandas as pd df = pd.DataFrame({'ID': [1,2,3,4], 'Column A': [2, 3, 2, 3], 'Column B': [None, None, 3, None]}) df2 = pd.DataFrame({'ID': [1,2,4], 'Column C': [2, 3, 22]}) df = df.merge(df2, on='ID', how='left') df The output is: ID Column A Column B Column C 0 1 2 NaN 2.0 1 2 3 NaN 3.0 2 3 2 3.0 NaN 3 4 3 NaN 22.0 Position 2 on the new 'Column C' is Null because ID=3 does not exist in df2. How can I fill the nulls in the new columns only but leave the nulls in the old column intact? Expected output: ID Column A Column B Column C 0 1 2 NaN 2.0 1 2 3 NaN 3.0 2 3 2 3.0 0.0 3 4 3 NaN 22.0 | You can use the IDs to generate a mask for boolean indexing: out = df.merge(df2, on='ID', how='left') m = ~df['ID'].isin(df2['ID']) out[m] = out[m].fillna(0) Alternatively, reindex and concat: out = pd.concat([df.set_index('ID'), df2.set_index('ID') .reindex(df['ID'], fill_value=0)], axis=1) Another (not as nice) option, use a placeholder (e.g. object) for existing NaNs: out = (df.fillna(object) .merge(df2.fillna(object), on='ID', how='left') .fillna(0).replace({object: float('nan')}) ) Output: ID Column A Column B Column C 0 1 2 NaN 2.0 1 2 3 NaN 3.0 2 3 2 3.0 0.0 3 4 3 NaN 22.0 | 3 | 3 |
77,532,558 | 2023-11-22 | https://stackoverflow.com/questions/77532558/size-of-image-stream-in-opencv-for-python | In Python 3.7 I read an png image with opencv and convert it to an jpg image stream. How can I determine the number of bytes of the stream? image=cv2.imread('test.png',0) image_stream = io.BytesIO(cv2.imencode(".jpg",image)[1].tobytes()) | (success, data) = cv2.imencode(".jpg", image) assert success print(data.nbytes) This number of bytes is identical to the size of the bytes object returned by the .tobytes() call. Creating a BytesIO() also does not affect this size. | 3 | 4 |
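If the size is needed after the stream object has already been built, BytesIO can report it too; this should give the same number as data.nbytes above:

import io
import cv2

image = cv2.imread("test.png", 0)
success, data = cv2.imencode(".jpg", image)
image_stream = io.BytesIO(data.tobytes())
print(data.nbytes)                      # bytes in the encoded JPEG
print(image_stream.getbuffer().nbytes)  # same size, read back from the stream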
77,531,213 | 2023-11-22 | https://stackoverflow.com/questions/77531213/convert-pandas-column-names-from-snake-case-to-camel-case | I have a pandas dataframe where the column names are capital and snake case. I want to convert them into camel case with first world starting letter to be lower case. The following code is not working for me. Please let me know how to fix this. import pandas as pd # Sample DataFrame with column names data = {'RID': [1, 2, 3], 'RUN_DATE': ['2023-01-01', '2023-01-02', '2023-01-03'], 'PRED_VOLUME_NEXT_360': [100, 150, 200]} df = pd.DataFrame(data) # Convert column names to lowercase df.columns = df.columns.str.lower() # Convert column names to camel case with lowercase starting letter df.columns = [col.replace('_', ' ').title().replace(' ', '').replace(col[0], col[0].lower(), 1) for col in df.columns] # Print the DataFrame with updated column names print(df) I want to column names RID, RUN_DATE, PRED_VOLUME_NEXT_360 to be converted to rid, runDate, predVolumeNext360, but the code is giving Rid, RunDate and PredVolumeNext360. | You could use a regex to replace _x by _X: df.columns = (df.columns.str.lower() .str.replace('_(.)', lambda x: x.group(1).upper(), regex=True) ) Or with a custom function: def to_camel(s): l = s.lower().split('_') l[1:] = [x.capitalize() for x in l[1:]] return ''.join(l) df = df.rename(columns=to_camel) Output: rid runDate predVolumeNext360 0 1 2023-01-01 100 1 2 2023-01-02 150 2 3 2023-01-03 200 | 3 | 3 |
77,528,415 | 2023-11-22 | https://stackoverflow.com/questions/77528415/how-to-join-two-dataframes-for-which-column-values-are-within-a-certain-range-an | I have been reading this other post, since I am dealing with a similar situation. However, I have a problem. In my version of df_1, I have timestamps which are outside of the values of the time ranges presented in df_2. Let's say I have an extra row print df_1 timestamp A B 0 2016-05-14 10:54:33 0.020228 0.026572 1 2016-05-14 10:54:34 0.057780 0.175499 2 2016-05-14 10:54:35 0.098808 0.620986 3 2016-05-14 10:54:36 0.158789 1.014819 4 2016-05-14 10:54:39 0.038129 2.384590 5 2023-11-22 10:54:39 0.000500 6.258710 print df_2 start end event 0 2016-05-14 10:54:31 2016-05-14 10:54:33 E1 1 2016-05-14 10:54:34 2016-05-14 10:54:37 E2 2 2016-05-14 10:54:38 2016-05-14 10:54:42 E3 I need to know how I can modify the solution to the previous post df_2.index = pd.IntervalIndex.from_arrays(df_2['start'],df_2['end'],closed='both') df_1['event'] = df_1['timestamp'].apply(lambda x : df_2.iloc[df_2.index.get_loc(x)]['event']) so that I get a null value for the fifth row, since I am now getting an error | I would use janitor's conditional_join, which enables a left join easily and much more efficiently than using apply: import janitor out = df_1.conditional_join(df_2, ('timestamp', 'start', '>='), ('timestamp', 'end', '<='), right_columns=['event'], how='left') Or, if your intervals are non-overlapping and you expect a single match, using merge_asof and a mask: tmp = pd.merge_asof(df_1[['timestamp']], df_2.sort_values(by=['start', 'end']), left_on='timestamp', right_on='start') df_1['event'] = tmp['event'].where(tmp['timestamp'].between(tmp['start'], tmp['end'], inclusive='both')) Output: timestamp A B event 0 2016-05-14 10:54:33 0.020228 0.026572 E1 1 2016-05-14 10:54:34 0.057780 0.175499 E2 2 2016-05-14 10:54:35 0.098808 0.620986 E2 3 2016-05-14 10:54:36 0.158789 1.014819 E2 4 2016-05-14 10:54:39 0.038129 2.384590 E3 5 2023-11-22 10:54:39 0.000500 6.258710 NaN | 3 | 2 |
77,527,951 | 2023-11-22 | https://stackoverflow.com/questions/77527951/how-to-cancel-tasks-in-anyio-taskgroup-context | I write a script to find out the fastest one in a list of cdn hosts: #!/usr/bin/env python3.11 import time from contextlib import contextmanager from enum import StrEnum import anyio import httpx @contextmanager def timeit(msg: str): start = time.time() yield cost = time.time() - start print(msg, f"{cost = }") class CdnHost(StrEnum): jsdelivr = "https://cdn.jsdelivr.net/npm/[email protected]/swagger-ui.css" unpkg = "https://unpkg.com/[email protected]/swagger-ui.css" cloudflare = ( "https://cdnjs.cloudflare.com/ajax/libs/swagger-ui/5.9.0/swagger-ui.css" ) TIMEOUT = 5 LOOP_INTERVAL = 0.1 async def fetch(client, url, results, index): try: r = await client.get(url) except (httpx.ConnectError, httpx.ReadError): ... else: print(f"{url = }\n{r.elapsed = }") if r.status_code < 300: results[index] = r.content class StopNow(Exception): ... async def find_fastest_host(timeout=TIMEOUT, loop_interval=LOOP_INTERVAL) -> str: urls = list(CdnHost) results = [None] * len(urls) try: async with anyio.create_task_group() as tg: with anyio.move_on_after(timeout): async with httpx.AsyncClient() as client: for i, url in enumerate(urls): tg.start_soon(fetch, client, url, results, i) for _ in range(int(timeout / loop_interval) + 1): for res in results: if res is not None: raise StopNow await anyio.sleep(0.1) except ( StopNow, httpx.ReadError, httpx.ReadTimeout, httpx.ConnectError, httpx.ConnectTimeout, ): ... for url, res in zip(urls, results): if res is not None: return url return urls[0] async def main(): with timeit("Sniff hosts"): url = await find_fastest_host() print("cdn host:", CdnHost) print("result:", url) if __name__ == "__main__": anyio.run(main) There are three cdn hosts (https://cdn.jsdelivr.net, https://unpkg.com, https://cdnjs.cloudflare.com). I make three concurrent async task to fetch them by httpx. If one of them get a response with status_code<300, then stop all task and return the right url. But I don't know how to cancel tasks without using a custom exception (in the script is StopNow). | You can call the cancel method of the cancel_scope attribute of the task group to cancel all of its tasks: async with anyio.create_task_group() as tg: ... tg.cancel_scope.cancel() | 2 | 2 |
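Applied to the script in the question, the StopNow exception and the polling loop can be dropped by letting the winning fetch cancel its siblings. This is a sketch that assumes the CdnHost enum and TIMEOUT constant from the question are still in scope:

import anyio
import httpx

async def fetch(client, url, results, index, tg):
    try:
        r = await client.get(url)
    except httpx.HTTPError:  # covers connection/read errors and timeouts
        return
    if r.status_code < 300:
        results[index] = r.content
        tg.cancel_scope.cancel()  # stop the remaining fetches

async def find_fastest_host(timeout=TIMEOUT) -> str:
    urls = list(CdnHost)
    results = [None] * len(urls)
    async with httpx.AsyncClient(timeout=timeout) as client:
        async with anyio.create_task_group() as tg:
            for i, url in enumerate(urls):
                tg.start_soon(fetch, client, url, results, i, tg)
    for url, res in zip(urls, results):
        if res is not None:
            return url
    return urls[0]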
77,527,874 | 2023-11-22 | https://stackoverflow.com/questions/77527874/how-can-i-add-the-x-or-y-value-from-a-line-above-without-reading-lines-not-c | So I have a .csv with a simplified machine program that looks like this: X51.972,Y1433.401 LASER_ON X41.972 Y1438.401 X51.97 LASER_OFF X51.972,Y1382.401 LASER_ON X41.972 Y1377.401 X51.97 When there is only an X value, I want to write the Y value from the line above next to the X value, and when there is only a Y value I want it to write the X value next to it. lines: list[tuple[str, str]] = [] with open(output_csv_file) as input_file: for line in map(str.strip, input_file): if line: a, *b = line.split(",") if a[0] == "X": if b: lines.append((a, b[0])) else: lines.append((a, lines[-1][1])) else: assert a[0] == "Y" if b: lines.append((b[0], a)) else: lines.append((lines[-1][0], a)) with open(output_csv_file, 'w', newline ='') as output_file: csv_writer = csv.writer(output_file) for line in lines: csv_writer.writerow(line) I managed to do what I want with this script, but it only works when there are no lines that contain "LASER_ON" or "LASER_OFF". Can I modify this to skip the lines containing these words? | The key is that you have to track the last values of X and Y. If there's a LASER line, you leave those two values alone but save the record. Also, you can't really use the CSV module to write the data if the lines are variable in length. output_csv_file = 'x.csv' lines = [] X = 'X0' Y = 'Y0' with open(output_csv_file) as input_file: for line in map(str.strip, input_file): if not line: continue if line.startswith('LASER'): lines.append([line]) else: row = line.split(",") if len(row) == 2: X,Y = row elif row[0][0] == 'X': X = row[0] else: Y = row[0] lines.append((X,Y)) with open(output_csv_file, 'w') as output_file: for line in lines: print( ','.join(line), file=output_file ) Output: X51.972,Y1433.401 LASER_ON X41.972,Y1433.401 X41.972,Y1438.401 X51.97,Y1438.401 LASER_OFF X51.972,Y1382.401 LASER_ON X41.972,Y1382.401 X41.972,Y1377.401 X51.97,Y1377.401 | 2 | 2 |
77,507,520 | 2023-11-18 | https://stackoverflow.com/questions/77507520/cannot-import-name-langchainembedding-from-llama-index | I'm trying to build a simple RAG, and I'm stuck at this code: from langchain.embeddings.huggingface import HuggingFaceEmbeddings from llama_index import LangchainEmbedding, ServiceContext embed_model = LangchainEmbedding( HuggingFaceEmbeddings(model_name="thenlper/gte-large") ) service_context = ServiceContext.from_defaults( chunk_size=256, llm=llm, embed_model=embed_model ) index = VectorStoreIndex.from_documents(documents, service_context=service_context) where I get ImportError: cannot import name 'LangchainEmbedding' from 'llama_index' How can I solve? Is it related to the fact that I'm working on Colab? | INFO 2024: This answer worked in 2023 when question was asked but it seems they moved all code and now you have to use other answers. Not from llama_index import LangchainEmbedding but from llama_index.embeddings import LangchainEmbedding (See source code for llama_index/embeddings/__ init__.py) | 3 | 10 |
77,519,206 | 2023-11-20 | https://stackoverflow.com/questions/77519206/polars-equivalent-to-pandas-min-count-on-groupby | I'm trying to find the equivalent of a min_count param on polars groupby, such as in pandas.groupby(key).sum(min_count=N). Let's suppose the dataframe df = pl.from_repr(""" ┌───────┬───────┐ │ fruit ┆ price │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════╪═══════╡ │ a ┆ 1 │ │ a ┆ 3 │ │ a ┆ 5 │ │ b ┆ 10 │ │ b ┆ 10 │ │ b ┆ 10 │ │ b ┆ 20 │ └───────┴───────┘ """) How can I groupby through the fruit key with the constrain of the group having at least 4 values for the sum? So instead of ┌───────┬───────┐ │ fruit ┆ price │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════╪═══════╡ │ b ┆ 50 │ │ a ┆ 9 │ └───────┴───────┘ I'd have only fruit b on the output, since it's the only one with at least 4 elements ┌───────┬───────┐ │ fruit ┆ price │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════╪═══════╡ │ b ┆ 50 │ └───────┴───────┘ | I don't think there's a built-in min_count for this, but you can just filter: ( df.group_by("fruit") .agg(pl.col("price").sum(), pl.len()) .filter(pl.col("len") >= 4) .drop("len") ) shape: (1, 2) ┌───────┬───────┐ │ fruit ┆ price │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════╪═══════╡ │ b ┆ 50 │ └───────┴───────┘ | 4 | 5 |
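If the goal is pandas-like min_count semantics (keep the small group but return a null sum) rather than dropping it, a sketch along these lines should also work; the threshold of 4 and the sample data mirror the question, and the exact when/then layout is an assumption about recent polars versions:

```python
import polars as pl

df = pl.DataFrame({
    "fruit": ["a", "a", "a", "b", "b", "b", "b"],
    "price": [1, 3, 5, 10, 10, 10, 20],
})

# Null out sums computed from fewer than 4 rows instead of filtering the group away
out = df.group_by("fruit").agg(
    pl.when(pl.len() >= 4)
    .then(pl.col("price").sum())
    .otherwise(None)
    .alias("price")
)
print(out)
```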
77,494,964 | 2023-11-16 | https://stackoverflow.com/questions/77494964/why-list-comprehensions-create-a-function-internally | This is disassembly of a list comprehension in python-3.10: Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import dis >>> >>> dis.dis("[True for _ in ()]") 1 0 LOAD_CONST 0 (<code object <listcomp> at 0x7fea68e0dc60, file "<dis>", line 1>) 2 LOAD_CONST 1 ('<listcomp>') 4 MAKE_FUNCTION 0 6 LOAD_CONST 2 (()) 8 GET_ITER 10 CALL_FUNCTION 1 12 RETURN_VALUE Disassembly of <code object <listcomp> at 0x7fea68e0dc60, file "<dis>", line 1>: 1 0 BUILD_LIST 0 2 LOAD_FAST 0 (.0) >> 4 FOR_ITER 4 (to 14) 6 STORE_FAST 1 (_) 8 LOAD_CONST 0 (True) 10 LIST_APPEND 2 12 JUMP_ABSOLUTE 2 (to 4) >> 14 RETURN_VALUE From what I understand it creates a code object called listcomp which does the actual iteration and return the result list, and immediately call it. I can't figure out the need to create a separate function to execute this job. Is this kind of an optimization trick? | The main logic of creating a function is to isolate the comprehension’s iteration variablepeps.python.org. By creating a function, Comprehension iteration variables remain isolated and don’t overwrite a variable of the same name in the outer scope, nor are they visible after the comprehension However, this is inefficient at runtime. Due to this reason, python-3.12 implemented an optimization called comprehension inlining(PEP 709)peps.python.org which will no longer create a separate code objectpeps.python.org. Dictionary, list, and set comprehensions are now inlined, rather than creating a new single-use function object for each execution of the comprehension. This speeds up execution of a comprehension by up to two times. See PEP 709 for further details. Here is the output for the same code disassembled with python-3.12: >>> import dis >>> dis.dis("[True for _ in ()]") 0 0 RESUME 0 1 2 LOAD_CONST 0 (()) 4 GET_ITER 6 LOAD_FAST_AND_CLEAR 0 (_) 8 SWAP 2 10 BUILD_LIST 0 12 SWAP 2 >> 14 FOR_ITER 4 (to 26) 18 STORE_FAST 0 (_) 20 LOAD_CONST 1 (True) 22 LIST_APPEND 2 24 JUMP_BACKWARD 6 (to 14) >> 26 END_FOR 28 SWAP 2 30 STORE_FAST 0 (_) 32 RETURN_VALUE >> 34 SWAP 2 36 POP_TOP 38 SWAP 2 40 STORE_FAST 0 (_) 42 RERAISE 0 ExceptionTable: 10 to 26 -> 34 [2] As you can see, there is no longer a MAKE_FUNCTION opcode nor a separate code object. Instead python-3.12 uses LOAD_FAST_AND_CLEARdocs.python.org(at offset 6) and STORE_FAST(at offset 30) opcodes to provide the isolation for the iteration variable. Quoting from the Specification sectionpeps.python.org of the PEP 709: Isolation of the x iteration variable is achieved by the combination of the new LOAD_FAST_AND_CLEAR opcode at offset 6, which saves any outer value of x on the stack before running the comprehension, and 30 STORE_FAST, which restores the outer value of x (if any) after running the comprehension. In addition to that, in python-3.12 there is no longer a separate frame for the comprehension in tracebacks. 
Traceback in versions before python-3.12:
>>> [1 / 0 for i in range(10)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
ZeroDivisionError: division by zero
Traceback in python-3.12:
>>> [1 / 0 for i in range(10)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
And here are the benchmark results (measured with MacOS M2): $ python3.10 -m pyperf timeit -s 'l = [1]' '[x for x in l]' Mean +- std dev: 108 ns +- 3 ns $ python3.12 -m pyperf timeit -s 'l = [1]' '[x for x in l]' Mean +- std dev: 60.9 ns +- 0.3 ns | 49 | 69 |
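A tiny script makes the isolation described above concrete: the comprehension's iteration variable never leaks into the enclosing scope, with or without the PEP 709 inlining.

```python
x = "outer"
squares = [x * x for x in range(5)]
print(x)    # "outer" -- the comprehension did not overwrite it

for x in range(5):
    pass
print(x)    # 4 -- a plain for loop, by contrast, does leak its variable
```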
77,522,753 | 2023-11-21 | https://stackoverflow.com/questions/77522753/how-to-use-to-dict-and-orient-records-in-polars-that-is-being-used-in-pandas | Using polars, I am not getting the same output as pandas when calling to_dict. Pandas. df = pd.DataFrame({ 'column_1': [1, 2, 1, 4, 5], 'column_2': ['Alice', 'Bob', 'Alice', 'Tom', 'Tom'], 'column_3': ['Alice1', 'Bob', 'Alice2', 'Tom', 'Tom'] }) df.to_dict(orient='records') produces [{'column_1': 1, 'column_2': 'Alice', 'column_3': 'Alice1'}, {'column_1': 2, 'column_2': 'Bob', 'column_3': 'Bob'}, {'column_1': 1, 'column_2': 'Alice', 'column_3': 'Alice2'}, {'column_1': 4, 'column_2': 'Tom', 'column_3': 'Tom'}, {'column_1': 5, 'column_2': 'Tom', 'column_3': 'Tom'}] Polars. df = pl.DataFrame({ 'column_1': [1, 2, 1, 4, 5], 'column_2': ['Alice', 'Bob', 'Alice', 'Tom', 'Tom'], 'column_3': ['Alice1', 'Bob', 'Alice2', 'Tom', 'Tom'] }) df.to_dict(as_series=False) produces {'column_1': [1, 2, 1, 4, 5], 'column_2': ['Alice', 'Bob', 'Alice', 'Tom', 'Tom'], 'column_3': ['Alice1', 'Bob', 'Alice2', 'Tom', 'Tom']} Here, the first example is pandas and the output I got when using to_dict with orient='records'. I expected to have the same output in polars. How can I replicate pandas' behaviour in polars? | To replicate the behaviour of pandas' to_dict with orient="records", you can use pl.DataFrame.to_dicts (notice the extra s). df.to_dicts() Output. [{'column_1': 1, 'column_2': 'Alice', 'column_3': 'Alice1'}, {'column_1': 2, 'column_2': 'Bob', 'column_3': 'Bob'}, {'column_1': 1, 'column_2': 'Alice', 'column_3': 'Alice2'}, {'column_1': 4, 'column_2': 'Tom', 'column_3': 'Tom'}, {'column_1': 5, 'column_2': 'Tom', 'column_3': 'Tom'}] | 7 | 10 |
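Related, and worth knowing when the list of dicts is only being iterated: polars can also yield one dict per row lazily via iter_rows(named=True). A small sketch with the same kind of data:

```python
import polars as pl

df = pl.DataFrame({
    "column_1": [1, 2],
    "column_2": ["Alice", "Bob"],
})

# Yields {'column_1': 1, 'column_2': 'Alice'}, then {'column_1': 2, 'column_2': 'Bob'}
for row in df.iter_rows(named=True):
    print(row)
```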
77,507,580 | 2023-11-18 | https://stackoverflow.com/questions/77507580/userwarning-figurecanvasagg-is-non-interactive-and-thus-cannot-be-shown-plt-sh | I am using Windows 10 PyCharm 2021.3.3 Professional Edition python 3.11.5 matplotlib 3.8.1 How can I permanently resolve this issue in my development environment? import numpy as np import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt # Read data from file, skipping the first row (header) data = np.loadtxt('cm.dat', skiprows=1) # Initialize reference point x0, y0, z0 = data[0] # Compute squared displacement for each time step SD = [(x - x0)**2 + (y - y0)**2 + (z - z0)**2 for x, y, z in data] # Compute the cumulative average of SD to get MSD at each time step MSD = np.cumsum(SD) / np.arange(1, len(SD) + 1) # Generate time steps t = np.arange(1, len(SD) + 1) # Create a log-log plot of MSD versus t plt.figure(figsize=(8, 6)) plt.loglog(t, MSD, marker='o') plt.title('Mean Squared Displacement vs Time') plt.xlabel('Time step') plt.ylabel('MSD') plt.grid(True, which="both", ls="--") plt.show() C:\Users\pc\AppData\Local\Programs\Python\Python311\python.exe C:/git/RouseModel/tau_plot.py C:\git\RouseModel\tau_plot.py:29: UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown plt.show() Process finished with exit code 0 | I have the same issue. In my case, I installed the PyQt5==5.15.10. After that, I run my code successfully. pip install PyQt5==5.15.10 or pip install PyQt5 with python==3.11 But from 2024, you guys should install version PyQt6 or the last version with python==3.12 or later. | 49 | 73 |
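The warning in the question is triggered by the explicit matplotlib.use('Agg') call: Agg is a file-only backend, so plt.show() has nothing to display. A hedged sketch of the two usual fixes (the data and file name are placeholders):

```python
import matplotlib
# Option 1: keep the non-interactive Agg backend and save to a file instead of showing
matplotlib.use("Agg")
import matplotlib.pyplot as plt

plt.loglog([1, 2, 3], [1, 4, 9], marker="o")
plt.savefig("msd.png", dpi=150)

# Option 2: remove the matplotlib.use('Agg') line (or select an interactive backend such
# as "QtAgg", which is what installing PyQt5/PyQt6 enables) and plt.show() opens a window.
```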
77,517,922 | 2023-11-20 | https://stackoverflow.com/questions/77517922/subclassing-polygon-in-shapely | I'm working with Shapely in Python and trying to subclass the Polygon class. However, I'm encountering an error when trying to add a custom attribute during object creation. Could you please provide guidance on how to subclass the Polygon class in Shapely and add custom attributes without running into this error? This is what I tried so far: from shapely.geometry import Polygon class CustomPolygon(Polygon): def __init__(self, shell=None, holes=None, name=None): super().__init__(shell, holes) self._name = name @property def name(self): return self._name @name.setter def name(self, value): self._name = value polygon1 = CustomPolygon([(0, 0), (0, 1), (1, 1), (1, 0)], name="Polygon1") And this is the error I get: polygon1 = CustomPolygon([(0, 0), (0, 1), (1, 1), (1, 0)], name="Polygon1") TypeError: __new__() got an unexpected keyword argument 'name' | You need to downgrade shapely to version 1.7 unfortunately. They intentionally got rid of any subclassing capability in 1.8 and they explicitly say they aren't going to support it anymore. See the issue here: https://github.com/shapely/shapely/issues/1698 | 3 | 1 |
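On shapely versions where subclassing is no longer supported, composition is the usual workaround: wrap a Polygon and delegate to it. A minimal sketch (the class and attribute names are illustrative):

```python
from shapely.geometry import Polygon

class NamedPolygon:
    """Holds a shapely Polygon plus extra attributes, instead of subclassing it."""

    def __init__(self, shell, holes=None, name=None):
        self.geometry = Polygon(shell, holes)
        self.name = name

    def __getattr__(self, attr):
        # Delegate everything else (area, bounds, contains, ...) to the wrapped Polygon
        return getattr(self.geometry, attr)

polygon1 = NamedPolygon([(0, 0), (0, 1), (1, 1), (1, 0)], name="Polygon1")
print(polygon1.name, polygon1.area)
```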
77,502,027 | 2023-11-17 | https://stackoverflow.com/questions/77502027/infer-return-type-annotation-from-other-functions-annotation | I have a function with a complex return type annotation: from typing import (Union, List) # The -> Union[…] is actually longer than this example: def parse(what: str) -> Union[None, int, bool, float, complex, List[int]]: # do stuff and return the object as needed def parse_from_something(which: SomeType) -> ????: return parse(which.extract_string()) # … class SomeType: def extract_string(self) -> str: # do stuff and return str How can I type-annotate parse_from_something so that it is annotated to return the same types as parse, without repeating them? The problem I'm solving here is that one function is subject to change, but there's wrappers around it that will always return the identical set of types. I don't want to duplicate code, and because this is a refactoring and after-the-fact type annotation effort, I need to assume I'll remove possible return types from parse in the future, and a static type checker might not realize that parse_from_something can no longer return these. | What you describe is not possible by design. From PEP 484: It is recommended but not required that checked functions have annotations for all arguments and the return type. For a checked function, the default annotation for arguments and for the return type is Any. An exception is the first argument of instance and class methods. [...] So, if you omit the return type, it's Any for external observer. No othe treatment is possible. You have now two options. The first one (preferred, because it gives you type safety and gives your annotations better semantic meaning) is using an alias and just putting it everywhere when you want to say "same type as in X". It obviously scales not only to function return types, but also to class attributes, global constants, function and method arguments, etc. This approach is explained well in @erny answer. Another option is using some trick similar to this answer (but please rename it to snake_case for PEP8 compliance - it's still a function). It introduces a helper decorator to lie a little to your typechecker. When you apply this withParameterAndReturnTypesOf decorator, you basically say: "ignore what happens in function body, treat my func as a twin of the referenced one". It is a beautiful cheat, but this way you don't get the real safety: your implementation may start doing more work and deviate from the contract you signed with the typechecker. E.g. the following will pass: def parse(what: str) -> Union[None, int, bool, float, complex, List[int]]: # do stuff and return the object as needed @dangerous_with_parameters_and_return_type_of(parse) def parse_from_something(which: SomeType): if which is None: return NotImplemented # not covered by the union return parse(which.extract_string()) If you were using the aliased union, you'd get an error from mypy that return type is incompatible. | 2 | 1 |
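A short sketch of the alias approach the answer prefers: declare the union once and reuse it in every wrapper, so there is a single place to update when parse() changes (ParseResult is an illustrative name):

```python
from typing import List, Union

ParseResult = Union[None, int, bool, float, complex, List[int]]

def parse(what: str) -> ParseResult:
    ...

def parse_from_something(which: "SomeType") -> ParseResult:
    return parse(which.extract_string())

class SomeType:
    def extract_string(self) -> str:
        ...
```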
77,520,963 | 2023-11-21 | https://stackoverflow.com/questions/77520963/using-locally-deployed-llm-with-langchains-openai-llm-wrapper | I have deployed an LLM locally that follows the OpenAI API schema. Since its endpoint follows the OpenAI schema, I don't want to write a separate inference client. Is there any way to use LangChain's existing OpenAI wrapper to run inference against my localhost model? I checked LangChain's OpenAI adapter, but it seems to require a provider, for which I would again have to write a separate client. The overall goal is to avoid writing any redundant code, since the wrapper is already maintained by LangChain and may change over time. We can modify our API to match OpenAI's so it works out of the box. Your suggestions are appreciated. | It turns out you can use the existing ChatOpenAI wrapper from LangChain: set openai_api_base to the URL where your OpenAI-compatible LLM is running, set openai_api_key to any dummy value (it can be any random string, but it is required because the wrapper validates it), and finally set model_name to whatever model you've deployed. The remaining parameters are the same as with OpenAI and can be set if you need them. from langchain_community.chat_models import ChatOpenAI llm = ChatOpenAI( openai_api_base="http://<host-ip>:<port>/<v1>", openai_api_key="dummy_value", model_name="model_deployed") | 2 | 2 |
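A hedged usage sketch of the accepted answer's wrapper, assuming a recent LangChain where chat models expose .invoke(); the host, port and model name are placeholders:

```python
from langchain_community.chat_models import ChatOpenAI

llm = ChatOpenAI(
    openai_api_base="http://localhost:8000/v1",  # wherever the local OpenAI-compatible server listens
    openai_api_key="dummy_value",                # any non-empty string satisfies the validation
    model_name="my-local-model",                 # the model name the server expects
)

print(llm.invoke("Say hello in one short sentence.").content)
```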
77,494,543 | 2023-11-16 | https://stackoverflow.com/questions/77494543/seleniumbase-undetected-chrome-driver-how-to-set-request-header | I am using seleniumbase with Driver(uc=True), which works well for my specific scraping use case (and appears to be the only driver that consistently remains undetected for me). It is fine for everything that doesn't need specific header settings. For one particular type of scrape I need to set the Request Header (Accept -> application/json). This works fine, and consistently, done manually in Chrome via the Requestly extension, but I cannot work out how to put it in place for seleniumbase undetected Chrome. I tried using execute_cdp_cmd with Network.setExtraHTTPHeaders (with Network.enable first): this ran without error but the request appeared to ignore it. (I was, tbh, unconvinced that the uc=True support was handling this functionality properly, since it doesn't appear to have full Chromium driver capabilities.) Requestly has a selenium Python mechanism, but this has its own driver and I cannot see how it would integrate with seleniumbase undetected Chrome. The built-in seleniumbase wire=True support won't coexist with uc=True, as far as I can see. selenium-requests has an option to piggyback on an existing driver, but this is (to be honest) beyond my embryonic Python skills (though it does feel like this might be the answer if I knew how to put it in place). My scraping requires initial login, so I can't really swap from one driver to another in the course of the scraping session. | My code fragments from second effective solution derived from now deleted bountied answer (the .v2 was the piece I had not seen previously and which I think is what made it work): ... from seleniumwire import webdriver from selenium.webdriver.chrome.options import Options from seleniumwire.undetected_chromedriver.v2 import Chrome, ChromeOptions ... chrome_options = ChromeOptions() driver = Chrome(seleniumwire_options={'options': chrome_options}) driver.header_overrides = { 'Accept': 'application/json', } ... | 7 | 2 |
77,505,030 | 2023-11-17 | https://stackoverflow.com/questions/77505030/openai-api-error-you-tried-to-access-openai-chatcompletion-but-this-is-no-lon | I am currently working on a chatbot, and as I am using Windows 11 it does not let me migrate to newer OpenAI library or downgrade it. Could I replace the ChatCompletion function with something else to work on my version? This is the code: import openai openai.api_key = "private" def chat_gpt(prompt): response = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}] ) return response.choices[0].message['content'].strip() if __name__ == "__main__": while True: user_input = input("You: ") if user_input.lower() in ["quit", "exit", "bye"]: break response = chat_gpt(user_input) print("Bot:", response) And this is the full error: ... You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API. You can run openai migrate to automatically upgrade your codebase to use the 1.0.0 interface. Alternatively, you can pin your installation to the old version, e.g. <pip install openai==0.28> A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742 I tried both upgrading and downgrading through pip. | Try updating to the latest and using: from openai import OpenAI client = OpenAI( # defaults to os.environ.get("OPENAI_API_KEY") api_key="private", ) def chat_gpt(prompt): response = client.chat.completions.create( model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}] ) return response.choices[0].message.content.strip() Link EDIT: message.['content'] -> message.content on the return of this function, as a message object is not subscriptable error is thrown while using message.['content']. Also, update link from pointing to the README (subject to change) to migration guide specific to this code. | 20 | 34 |
77,510,558 | 2023-11-19 | https://stackoverflow.com/questions/77510558/why-do-different-devices-use-the-same-session-in-django | I use the session to count the number of failed logins. But when I am on Wi-Fi, the IP of my two devices (mobile and PC) is the same, and Django appears to use the same session for both. If I have two failed logins on the PC and two failed logins on the mobile, it adds them together. The interesting thing is that when I log into the account from one device, the second device is not logged in and must log into the account separately. That is, it seems to use two different storage mechanisms: one for logging into the account and one for counting unsuccessful logins. if 'RC' in request.session: VRC = request.session['RC'] else: VRC = 1 VRC += 1 request.session['RC'] = VRC | All devices using the same Wi-Fi have the same external IP. What you mentioned as a problem is actually an advantage that helps prevent malicious attacks on the site. | 2 | 2 |
77,484,926 | 2023-11-15 | https://stackoverflow.com/questions/77484926/how-to-scroll-in-a-nav-element-using-seleniumbase-in-python | <nav class="flex h-full w-full flex-col p-2 gizmo:px-3 gizmo:pb-3.5 gizmo:pt-0" aria-label="Menu"> This is the nav although, its a-lot longer its full of divs I just want to know how to scroll till the end of the menu. Edit: loading element that has to be accounted for <svg stroke="currentColor" fill="none" stroke-width="2" viewBox="0 0 24 24" stroke-linecap="round" stroke-linejoin="round" class="animate-spin text-center" height="1em" width="1em" xmlns="http://www.w3.org/2000/svg"> and under it there are many line elements top of nav xpath: /html/body/div[1]/div[1]/div[1]/div/div/div/div/nav top of svg xpath: /html/body/div[1]/div[1]/div[1]/div/div/div/div/nav/div[2]/div[2]/div[2]/svg do you need scrollbar html? | There are lots of specialized scroll methods in SeleniumBase: self.scroll_to(selector) self.slow_scroll_to(selector) self.scroll_into_view(selector) self.scroll_to_top() self.scroll_to_bottom() But you might not even need to scroll to the element. There's a self.js_click(selector) method, which lets you click on hidden elements that exist in the HTML. Here's an example test that uses that method to click a hidden logout menu item from the nav: from seleniumbase import BaseCase BaseCase.main(__name__, __file__) class SwagLabsLoginTests(BaseCase): def login_to_swag_labs(self): self.open("https://www.saucedemo.com") self.wait_for_element("div.login_logo") self.type("#user-name", "standard_user") self.type("#password", "secret_sauce") self.click('input[type="submit"]') def test_swag_labs_login(self): self.login_to_swag_labs() self.assert_element("div.inventory_list") self.assert_element('.inventory_item:contains("Backpack")') self.js_click("a#logout_sidebar_link") self.assert_element("div#login_button_container") | 4 | 1 |
77,519,280 | 2023-11-20 | https://stackoverflow.com/questions/77519280/pyspark-cumsum-with-salting-over-window-w-skew | How can I use salting to perform a cumulative sum window operation? While a tiny sample, my id column is heavily skewed, and I need to perform effectively this operation on it: window_unsalted = Window.partitionBy("id").orderBy("timestamp") # exected value df = df.withColumn("Expected", F.sum('value').over(window_unsalted)) However, I want to try salting because at the scale of my data, I cannot compute it otherwise. Consider this MWE. How can I replicate the expected value, 20, using salting techniques? from pyspark.sql import functions as F from pyspark.sql.window import Window data = [ (7329, 1636617182, 1.0), (7329, 1636142065, 1.0), (7329, 1636142003, 1.0), (7329, 1680400388, 1.0), (7329, 1636142400, 1.0), (7329, 1636397030, 1.0), (7329, 1636142926, 1.0), (7329, 1635970969, 1.0), (7329, 1636122419, 1.0), (7329, 1636142195, 1.0), (7329, 1636142654, 1.0), (7329, 1636142484, 1.0), (7329, 1636119628, 1.0), (7329, 1636404275, 1.0), (7329, 1680827925, 1.0), (7329, 1636413478, 1.0), (7329, 1636143578, 1.0), (7329, 1636413800, 1.0), (7329, 1636124556, 1.0), (7329, 1636143614, 1.0), (7329, 1636617778, -1.0), (7329, 1636142155, -1.0), (7329, 1636142061, -1.0), (7329, 1680400415, -1.0), (7329, 1636142480, -1.0), (7329, 1636400183, -1.0), (7329, 1636143444, -1.0), (7329, 1635977251, -1.0), (7329, 1636122624, -1.0), (7329, 1636142298, -1.0), (7329, 1636142720, -1.0), (7329, 1636142584, -1.0), (7329, 1636122147, -1.0), (7329, 1636413382, -1.0), (7329, 1680827958, -1.0), (7329, 1636413538, -1.0), (7329, 1636143610, -1.0), (7329, 1636414011, -1.0), (7329, 1636141936, -1.0), (7329, 1636146843, -1.0) ] df = spark.createDataFrame(data, ["id", "timestamp", "value"]) # Define the number of salt buckets num_buckets = 100 # Add a salted_id column to the dataframe df = df.withColumn("salted_id", (F.concat(F.col("id"), (F.rand(seed=42)*num_buckets).cast("int")).cast("string"))) # Define a window partitioned by the salted_id, and ordered by timestamp window = Window.partitionBy("salted_id").orderBy("timestamp") # Add a cumulative sum column df = df.withColumn("cumulative_sum", F.sum("value").over(window)) # Define a window partitioned by the original id, and ordered by timestamp window_unsalted = Window.partitionBy("id").orderBy("timestamp") # Compute the final cumulative sum by adding up the cumulative sums within each original id df = df.withColumn("final_cumulative_sum", F.sum("cumulative_sum").over(window_unsalted)) # exected value df = df.withColumn("Expected", F.sum('value').over(window_unsalted)) # incorrect trial df.agg(F.sum('final_cumulative_sum')).show() # expected value df.agg(F.sum('Expected')).show() | From what I see, the main issue here is that the timestamps must remain ordered for partial cumulative sums to be correct, e.g., if the sequence is 1,2,3 then 2 cannot go into different partition than 1 and 3. My suggestion is to use salt value based on timestamp that preserves the ordering. 
This will not completely remove skew, but you will still be able to partition within the same id: df = spark.createDataFrame(data, ["id", "timestamp", "value"]) bucket_size = 10000 # the actual size will depend on timestamp distribution # Add timestamp-based salt column to the dataframe df = df.withColumn("salt", F.floor(F.col("timestamp") / F.lit(bucket_size))) # Get partial cumulative sums window_salted = Window.partitionBy("id", "salt").orderBy("timestamp") df = df.withColumn("cumulative_sum", F.sum("value").over(window_salted)) # Get partial cumulative sums from previous windows df2 = df.groupby("id", "salt").agg(F.sum("value").alias("cumulative_sum_last")) window_full = Window.partitionBy("id").orderBy("salt") df2 = df2.withColumn("previous_sum", F.lag("cumulative_sum_last", default=0).over(window_full)) df2 = df2.withColumn("previous_cumulative_sum", F.sum("previous_sum").over(window_full)) # Join previous partial cumulative sums with original data df = df.join(df2, ["id", "salt"]) # maybe F.broadcast(df2) if it is small enough # Increase each cumulative sum value by final value of the previous window df = df.withColumn('final_cumulative_sum', F.col('cumulative_sum') + F.col('previous_cumulative_sum')) # expected value window_unsalted = Window.partitionBy("id").orderBy("timestamp") df = df.withColumn("Expected", F.sum('value').over(window_unsalted)) # new calculation df.agg(F.sum('final_cumulative_sum')).show() # expected value df.agg(F.sum('Expected')).show() | 2 | 2 |
77,490,435 | 2023-11-15 | https://stackoverflow.com/questions/77490435/attributeerror-cython-sources | I am using: python: 3.12 OS: Windows 11 Home I tried to install catboost==1.2.2 I am getting this error: C:\Windows\System32>py -3 -m pip install catboost==1.2.2 Collecting catboost==1.2.2 Downloading catboost-1.2.2.tar.gz (60.1 MB) ---------------------------------------- 60.1/60.1 MB 5.1 MB/s eta 0:00:00 Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [135 lines of output] Collecting setuptools>=64.0 Using cached setuptools-68.2.2-py3-none-any.whl (807 kB) Collecting wheel Using cached wheel-0.41.3-py3-none-any.whl (65 kB) Collecting jupyterlab Downloading jupyterlab-4.0.8-py3-none-any.whl (9.2 MB) ---------------------------------------- 9.2/9.2 MB 7.8 MB/s eta 0:00:00 Collecting conan<=1.59,>=1.57 Downloading conan-1.59.0.tar.gz (780 kB) -------------------------------------- 781.0/781.0 kB 4.9 MB/s eta 0:00:00 Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting async-lru>=1.0.0 (from jupyterlab) Downloading async_lru-2.0.4-py3-none-any.whl (6.1 kB) Collecting ipykernel (from jupyterlab) Downloading ipykernel-6.26.0-py3-none-any.whl (114 kB) -------------------------------------- 114.3/114.3 kB 6.5 MB/s eta 0:00:00 Collecting jinja2>=3.0.3 (from jupyterlab) Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB) -------------------------------------- 133.1/133.1 kB 7.7 MB/s eta 0:00:00 Collecting jupyter-core (from jupyterlab) Downloading jupyter_core-5.5.0-py3-none-any.whl (28 kB) Collecting jupyter-lsp>=2.0.0 (from jupyterlab) Downloading jupyter_lsp-2.2.0-py3-none-any.whl (65 kB) ---------------------------------------- 66.0/66.0 kB 3.7 MB/s eta 0:00:00 Collecting jupyter-server<3,>=2.4.0 (from jupyterlab) Downloading jupyter_server-2.10.1-py3-none-any.whl (378 kB) -------------------------------------- 378.6/378.6 kB 4.7 MB/s eta 0:00:00 Collecting jupyterlab-server<3,>=2.19.0 (from jupyterlab) Downloading jupyterlab_server-2.25.1-py3-none-any.whl (58 kB) ---------------------------------------- 59.0/59.0 kB 3.0 MB/s eta 0:00:00 Collecting notebook-shim>=0.2 (from jupyterlab) Downloading notebook_shim-0.2.3-py3-none-any.whl (13 kB) Collecting packaging (from jupyterlab) Downloading packaging-23.2-py3-none-any.whl (53 kB) ---------------------------------------- 53.0/53.0 kB 2.7 MB/s eta 0:00:00 Collecting tornado>=6.2.0 (from jupyterlab) Downloading tornado-6.3.3-cp38-abi3-win_amd64.whl (429 kB) -------------------------------------- 429.2/429.2 kB 9.1 MB/s eta 0:00:00 Collecting traitlets (from jupyterlab) Downloading traitlets-5.13.0-py3-none-any.whl (84 kB) ---------------------------------------- 85.0/85.0 kB 4.7 MB/s eta 0:00:00 Collecting requests<3.0.0,>=2.25 (from conan<=1.59,>=1.57) Downloading requests-2.31.0-py3-none-any.whl (62 kB) ---------------------------------------- 62.6/62.6 kB ? 
eta 0:00:00 Collecting urllib3<1.27,>=1.26.6 (from conan<=1.59,>=1.57) Downloading urllib3-1.26.18-py2.py3-none-any.whl (143 kB) -------------------------------------- 143.8/143.8 kB 4.3 MB/s eta 0:00:00 Collecting colorama<0.5.0,>=0.3.3 (from conan<=1.59,>=1.57) Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB) Collecting PyYAML<=6.0,>=3.11 (from conan<=1.59,>=1.57) Downloading PyYAML-6.0.tar.gz (124 kB) -------------------------------------- 125.0/125.0 kB 3.6 MB/s eta 0:00:00 Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'error' error: subprocess-exited-with-error Getting requirements to build wheel did not run successfully. exit code: 1 [54 lines of output] running egg_info writing lib\PyYAML.egg-info\PKG-INFO writing dependency_links to lib\PyYAML.egg-info\dependency_links.txt writing top-level names to lib\PyYAML.egg-info\top_level.txt Traceback (most recent call last): File "C:\Users\talta\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module> main() File "C:\Users\talta\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\talta\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\build_meta.py", line 355, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in _get_build_requires self.run_setup() File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in run_setup exec(code, locals()) File "<string>", line 288, in <module> File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\__init__.py", line 103, in setup return distutils.core.setup(**attrs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 185, in setup return run_commands(dist) ^^^^^^^^^^^^^^^^^^ File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands dist.run_commands() File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands self.run_command(cmd) File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\dist.py", line 989, in run_command super().run_command(command) File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command cmd_obj.run() File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 318, in run self.find_sources() File 
"C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 326, in find_sources mm.run() File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 548, in run self.add_defaults() File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 586, in add_defaults sdist.add_defaults(self) File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\command\sdist.py", line 113, in add_defaults super().add_defaults() File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\command\sdist.py", line 251, in add_defaults self._add_defaults_ext() File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\command\sdist.py", line 336, in _add_defaults_ext self.filelist.extend(build_ext.get_source_files()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<string>", line 204, in get_source_files File "C:\Users\talta\AppData\Local\Temp\pip-build-env-w9d6umo6\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 107, in __getattr__ raise AttributeError(attr) AttributeError: cython_sources [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Getting requirements to build wheel did not run successfully. exit code: 1 See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. [notice] A new release of pip is available: 23.1.2 -> 23.3.1 [notice] To update, run: C:\Users\talta\AppData\Local\Programs\Python\Python312\python.exe -m pip install --upgrade pip [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. Any workaround or solutions? Comments and answers are much appreciated. | Edit: Adding the work arounds which worked for people. Two workarounds exist: 1.Preinstall cython<3, then install pyyaml without build isolation, then install the rest of your dependencies "AttributeError: cython_sources" with Cython 3.0.0a10 #601 (comment) $ pip install "cython<3.0.0" wheel $ pip install "pyyaml==5.4.1" --no-build-isolation $ pip install -r requirements.txt 2.Use a constraints file to force pip to use cython<3 at build time "AttributeError: cython_sources" with Cython 3.0.0a10 #601 (comment) $ echo "cython<3" > /tmp/constraint.txt $ PIP_CONSTRAINT=/tmp/constraint.txt pip install -r requirements.txt Credit to @astrojuanlu https://github.com/yaml/pyyaml/issues/601#issuecomment-1813963845 Looks like there is an ongoing issue for installation of catboost==1.2.2 git links: https://github.com/catboost/catboost/issues/2520 https://github.com/catboost/catboost/issues/2469 PyYAML and Cython are the culprit. Here is the main git : https://github.com/yaml/pyyaml/issues/601 | 31 | 88 |
77,485,901 | 2023-11-15 | https://stackoverflow.com/questions/77485901/pytest-how-to-find-the-test-result-after-a-test | After a test has been executed I need to collect the result of that test, but I can't find the result in the FixtureRequest object. I can find the test name and some additional data, but nothing that shows whether the test passed or failed, nor whether it raised any exceptions. Example code: class TestSomething: @pytest.mark.test_case_id(99999) def test_example(self, api_interface) -> None: assert 5 == 5 and in some other file: @pytest.fixture def api_interface(request: FixtureRequest): I can see that the request has the test name and a node object, but nowhere do I see any result- or assertion-related data. | The way to achieve this is by using hooks. The hook adds the test result as an attribute of the test item, which you can then check afterwards to perform whatever action is needed. # example conftest.py import pytest # set up a hook to be able to check if a test has failed @pytest.hookimpl(tryfirst=True, hookwrapper=True) def pytest_runtest_makereport(item, call): outcome = yield rep = outcome.get_result() setattr(item, "rep_" + rep.when, rep) # check if a test has failed @pytest.fixture(scope="function", autouse=True) def test_failed_check(request): yield if request.node.rep_call.failed: # Do something if test has failed print('Test Failed') | 3 | 2 |
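With that conftest.py in place, a small illustrative test module shows what the fixture reacts to (the test names are made up):

```python
# test_example.py -- relies on the hook and fixture from the conftest.py above
def test_passes():
    assert 5 == 5   # request.node.rep_call.failed is False, so the fixture stays quiet

def test_fails():
    assert 5 == 6   # request.node.rep_call.failed is True, so the fixture prints "Test Failed"
```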
77,522,941 | 2023-11-21 | https://stackoverflow.com/questions/77522941/using-sample-weights-through-metadata-routing-in-scikit-learn-in-nested-cross-va | I am using the sklearn version "1.4.dev0" to weight samples in the fitting and scoring process as described in this post and in this documentation. https://scikit-learn.org/dev/metadata_routing.html sklearn GridSearchCV not using sample_weight in score function I am trying to use this in a nested cross validation scheme, where hyperparmeter are tuned in a inner loop using "GridSearchCV" and performance is evaluated in the outer loop using "cross_validate". In both loops, the samples should be weighted for fitting and scoring. I got confused because if I use or dont use sample_weights in the inner loop (thus, in GridSearchCV) seems not to have an effect on the results of crossvalidate, although the fitting time implicates that the two functions calls of cross_validate differ. Maybe I have mistaken something but for me this seems rather unexpected and not right. Here is a reproducable example. I would like to know If my suggestion is right that the weighted cross_validate scores of the weighted and unweighted gridsearch estimator should differ How could I implement it in a way that I get the expected difference in the cross_validate scores # sklearn version is 1.4.dev0 from sklearn.datasets import make_regression from sklearn.linear_model import Lasso from sklearn.model_selection import GridSearchCV, cross_validate, KFold import numpy as np np.random.seed(42) sklearn.set_config(enable_metadata_routing=True) X, y = make_regression(n_samples=100, n_features=5, noise=0.5) sample_weights = np.random.rand(len(y)) estimator = Lasso().set_fit_request(sample_weight=True) hyperparameter_grid = {'alpha': [0.1, 0.5, 1.0, 2.0]} scoring_inner_cv = 'neg_mean_squared_error' inner_cv = KFold(n_splits=5, shuffle=True, random_state=42) grid_search_weighted = GridSearchCV(estimator=estimator, param_grid=hyperparameter_grid, cv=inner_cv, scoring=scoring_inner_cv) grid_search_unweighted = GridSearchCV(estimator=estimator, param_grid=hyperparameter_grid, cv=inner_cv, scoring=scoring_inner_cv) grid_search_weighted.fit(X, y, sample_weight=sample_weights) grid_search_unweighted.fit(X, y) est_weighted = grid_search_weighted.best_estimator_ est_unweighted = grid_search_unweighted.best_estimator_ weighted_score = grid_search_weighted.best_score_ unweighted_score = grid_search_unweighted.best_score_ predictions_weighted = grid_search_weighted.best_estimator_.predict(X)[:5] # these are differents depending on the use of sample weights predictions_unweighted = grid_search_unweighted.best_estimator_.predict(X)[:5] print('predictions weighted:', predictions_weighted) print('predictions unweighted:', predictions_unweighted) print('best grid search score weighted:', weighted_score) print('best grid search score unweighted:', unweighted_score) # Setting up outer cross-validation outer_cv = KFold(n_splits=5, shuffle=True, random_state=43) scorers = {'mse': 'neg_mean_squared_error'} results_weighted = cross_validate(est_weighted.set_score_request(sample_weight=True), X, y, cv=outer_cv, scoring=scorers, return_estimator=True, params={"sample_weight": sample_weights}) results_unweighted = cross_validate(est_unweighted.set_score_request(sample_weight=True), X, y, cv=outer_cv, scoring=scorers, return_estimator=True, params={"sample_weight": sample_weights}) print('cv fit time weighted:', results_weighted['fit_time']) print('cv fit_time unweighted', results_unweighted['fit_time']) 
print('cv score weighted:', results_weighted['test_mse']) print('cv score unweighted:', results_unweighted['test_mse']) Out: predictions weighted: [ -56.75523055 -46.40853794 -257.61879983 115.33482089 -123.2799114 ] predictions unweighted: [ -56.80695125 -46.46115926 -257.55129719 115.29365222 -123.17923488] best grid search score weighted: -0.28206979708971763 best grid search score unweighted: -0.2959277881104643 cv fit time weighted: [0.00086832 0.00075293 0.00104165 0.00075936 0.000736 ] cv fit_time unweighted [0.00077033 0.00074911 0.00076008 0.00075603 0.00073433] cv score weighted: [-0.29977789 -0.19323401 -0.3599154 -0.29672299 -0.42656506] cv score unweighted: [-0.29977789 -0.19323401 -0.3599154 -0.29672299 -0.42656506] Edit: Sorry, still a bit sleepy, I corrected the code | cross_validate trains and scores the estimator, which means if the hyperparameters of the estimators are the same, then the trained versions inside cross_validate would also be the same, which is the case here since both est_weighted and est_unweighted use alpha=0.1. There are a few issues here though, first, you're not using sample_weight in your scorer, which you should if you're using sample_weight. Second, for a nested cross validation, you should pass the GridSearchCV object to cross_validate. Here's the updated script: import sklearn from sklearn.metrics import get_scorer from sklearn.datasets import make_regression from sklearn.linear_model import Lasso from sklearn.model_selection import GridSearchCV, cross_validate, KFold import numpy as np np.random.seed(42) sklearn.set_config(enable_metadata_routing=True) X, y = make_regression(n_samples=100, n_features=5, noise=0.5) sample_weights = np.random.rand(len(y)) estimator = Lasso().set_fit_request(sample_weight=True) hyperparameter_grid = {"alpha": [0.1, 0.5, 1.0, 2.0]} scoring_inner_cv = get_scorer("neg_mean_squared_error").set_score_request( sample_weight=True ) inner_cv = KFold(n_splits=5, shuffle=True, random_state=42) grid_search_weighted = GridSearchCV( estimator=estimator, param_grid=hyperparameter_grid, cv=inner_cv, scoring=scoring_inner_cv, ) grid_search_unweighted = GridSearchCV( estimator=estimator, param_grid=hyperparameter_grid, cv=inner_cv, scoring=scoring_inner_cv, ) grid_search_weighted.fit(X, y, sample_weight=sample_weights) grid_search_unweighted.fit(X, y) est_weighted = grid_search_weighted.best_estimator_ est_unweighted = grid_search_unweighted.best_estimator_ print("best estimator weighted:", est_weighted) print("best estimator unweighted:", est_unweighted) weighted_score = grid_search_weighted.best_score_ unweighted_score = grid_search_unweighted.best_score_ predictions_weighted = grid_search_weighted.best_estimator_.predict(X)[ :5 ] # these are different depending on the use of sample weights predictions_unweighted = grid_search_unweighted.best_estimator_.predict(X)[:5] print("predictions weighted:", predictions_weighted) print("predictions unweighted:", predictions_unweighted) print("best grid search score weighted:", weighted_score) print("best grid search score unweighted:", unweighted_score) # Setting up outer cross-validation outer_cv = KFold(n_splits=5, shuffle=True, random_state=43) scorers = { "mse": get_scorer("neg_mean_squared_error").set_score_request(sample_weight=True) } results_weighted = cross_validate( grid_search_weighted, X, y, cv=outer_cv, scoring=scorers, return_estimator=True, params={"sample_weight": sample_weights}, ) results_unweighted = cross_validate( grid_search_unweighted, X, y, cv=outer_cv, 
scoring=scorers, return_estimator=True, ) print("cv fit time weighted:", results_weighted["fit_time"]) print("cv fit_time unweighted", results_unweighted["fit_time"]) print("cv score weighted:", results_weighted["test_mse"]) print("cv score unweighted:", results_unweighted["test_mse"]) And the output: best estimator weighted: Lasso(alpha=0.1) best estimator unweighted: Lasso(alpha=0.1) predictions weighted: [ -56.75523055 -46.40853794 -257.61879983 115.33482089 -123.2799114 ] predictions unweighted: [ -56.80695125 -46.46115926 -257.55129719 115.29365222 -123.17923488] best grid search score weighted: -0.30694013415226884 best grid search score unweighted: -0.2959277881104613 cv fit time weighted: [0.02880669 0.02938795 0.02891922 0.02823281 0.02768564] cv fit_time unweighted [0.02526283 0.0255146 0.0250349 0.02500224 0.02558732] cv score weighted: [-0.34250528 -0.21293099 -0.41301416 -0.36952952 -0.43474412] cv score unweighted: [-0.28752003 -0.20898288 -0.40011525 -0.28467415 -0.41647231] | 2 | 3 |
77,526,562 | 2023-11-21 | https://stackoverflow.com/questions/77526562/is-there-a-function-i-can-use-to-replace-my-if-statements-and-variables | I'm trying to figure out how to make my code more readable and have less lines in my code. It contains a lot of if elif statements that seem like can be combined into a few. fl = input("file:").lower().strip() a = fl.endswith(".jpeg") b = fl.endswith(".jpg") c = fl.endswith(".txt") d = fl.endswith(".png") e = fl.endswith(".pdf") f = fl.endswith(".zip") g = fl.endswith(".gif") if a or b is True: print("image/jpeg") elif c is True: print("text/plain") elif d is True: print("image/png") elif e is True: print("application/pdf") elif f is True: print("application/zip") elif g is True: print("image/gif") else: print("application/octet-stream") I tried getting rid of the variables on the top by putting fl == fl.endswith(".filetype") inside of the if statements, and instead of printing each type it only printed my else statement. I also tried looking up other ways to get the end of the str in python docs but couldn't find anything. Don't give me too direct of a solution to the problem, would like to avoid cs50 academic honesty issues. Also still quite new to python | Use a dictionary to map extensions to content-types. import os extension_content_types = { ".jpeg": "image/jpeg", ".jpg": "image/jpeg", ".txt": "text/plain", ".png": "image/png", ".pdf": "application/pdf", ".zip": "application/zip", ".gif": "image/gif", } fl = input("file:").lower().strip() filename, ext = os.path.splitext(fl) content_type = extension_content_types.get(ext, 'application/octet-stream') print(fl, content_type) | 3 | 5 |
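For comparison only (the course exercise presumably expects the dictionary): the standard library's mimetypes module already ships essentially the same extension-to-type lookup:

```python
import mimetypes

fl = input("file:").lower().strip()
content_type, _ = mimetypes.guess_type(fl)
print(content_type or "application/octet-stream")
```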
77,522,301 | 2023-11-21 | https://stackoverflow.com/questions/77522301/how-to-change-sympy-plot-properties-in-jupyter-with-matplotlib-methods | The following code in a script works as expected, from sympy import * x = symbols('x') p = plot(x, x*(1-x), (x, 0, 1)) ax = p._backend.ax[0] ax.set_yticks((0, .05, .25)) p._backend.fig.savefig('Figure_1.png') but when I copy the code above in a notebook cell, this is what I get If it is possible to manipulate the (hidden) attributes of a Sympy's plot when one works in a Jupyter notebook, how can it be done? | As per this answer of Display two Sympy plots as two Matplotlib subplots, with inline mode in Jupyter, p.backend(p) must be used. The problem is that sympy.plotting.plot.plot(*args, show=True, **kwargs) creates its own figure and axes and plots.show() displays the plot immediately. Because of the way inline mode works, the plot is shown before the changes are implemented by ax.set_yticks((0, .05, .25)). Tested in python v3.12.0, matplotlib v3.8.1, sympy v1.11.1. from sympy import symbols, plot import matplotlib.pyplot as plt x = symbols('x') # note show=False, the default is True p = plot(x, x*(1-x), (x, 0, 1), show=False) fig, ax = plt.subplots() backend = p.backend(p) backend.ax = ax backend._process_series(backend.parent._series, ax, backend.parent) backend.ax.set_yticks((0, .05, .25)) plt.close(backend.fig) plt.show() In interactive mode %matplotlib qt, the code in the OP works fine. from sympy import symbols, plot import matplotlib.pyplot as plt %matplotlib qt # %matplotlib inline - to revert to inline x = symbols('x') p = plot(x, x*(1-x), (x, 0, 1)) ax = p._backend.ax[0] ax.set_yticks((0, .05, .25)) With a custom backend instance. This has as the advantage that matplotlib should not be explicitly imported. One should be aware of what you are doing: sympy provides a ready-to-use interface to plot functions on the fly, without the need to worry about (boring) mathematical technicality such as domain/range. If instead you try to act against such defaults you need to go back to the source code and do some reverse engineering. In the OP, custom ticks are required, but by default the MatplotlibBackend sets a linear scale, for both x & y, from the sympy.plotting.plot.Plot object, xscale='linear', yscale='linear', which interfere with the matplotlib.axes.Axes.set_yticks method. From the source code 1467 if parent.yscale and not isinstance(ax, Axes3D): 1468 ax.set_yscale(parent.yscale) By setting parent.yscale to False (or None, '') the condition will never be meet. super is built-in python method used for accessing inherited methods that have been overridden in a class. from sympy.plotting.plot import MatplotlibBackend from sympy import symbols class JupyterPlotter(MatplotlibBackend): def _process_series(self, series, ax, parent): parent.yscale = False ax.set_yticks((0, .05, .25)) # custom yticks super()._process_series(series, ax, parent) x = symbols('x') p = plot(x, x*(1-x), (x, 0, 1), backend=JupyterPlotter) Plots with special needs may require a special implementation, here is a different example. | 2 | 4 |
77,522,801 | 2023-11-21 | https://stackoverflow.com/questions/77522801/how-to-create-a-rotating-animation-of-a-scatter3d-plot-with-plotly-and-save-it-a | With plotly in jupyter I am creating a Scatter3D plot as follows: # Configure the trace. trace = go.Scatter3d( x=x, y=y, z=z, mode='markers', marker=dict(color=colors, size=1) ) # Configure the layout. layout = go.Layout( margin={'l': 0, 'r': 0, 'b': 0, 't': 0}, height = 1000, width = 1000 ) data = [trace] plot_figure = go.Figure(data=data, layout=layout) # Render the plot. plotly.offline.iplot(plot_figure) How can I rotate a plot generated like this in order to create a gif video out of it i.e. stored as a gif file like rotate.gif which shows an animation of the plot rotated? Based on the comments given I created this code (complete, working example): import plotly.graph_objects as go import numpy as np import plotly.io as pio # Helix equation t = np.linspace(0, 10, 50) x, y, z = np.cos(t), np.sin(t), t fig= go.Figure(go.Scatter3d(x=x, y=y, z=z, mode='markers')) x_eye = -1.25 y_eye = 2 z_eye = 0.5 fig.update_layout( title='Animation Test', width=600, height=600, scene_camera_eye=dict(x=x_eye, y=y_eye, z=z_eye), updatemenus=[dict(type='buttons', showactive=False, y=1, x=0.8, xanchor='left', yanchor='bottom', pad=dict(t=45, r=10), buttons=[dict(label='Play', method='animate', args=[None, dict(frame=dict(duration=5, redraw=True), transition=dict(duration=1), fromcurrent=True, mode='immediate' )] ) ] ) ] ) def rotate_z(x, y, z, theta): w = x+1j*y return np.real(np.exp(1j*theta)*w), np.imag(np.exp(1j*theta)*w), z frames=[] for k, t in enumerate(np.arange(0, 6.26, 0.1)): xe, ye, ze = rotate_z(x_eye, y_eye, z_eye, -t) newframe = go.Frame(layout=dict(scene_camera_eye=dict(x=xe, y=ye, z=ze))) frames.append(newframe) pio.write_image(newframe, f"images/images_{k+1:03d}.png", width=400, height=400, scale=1) fig.frames=frames fig.show() which runs without an error and does rotate the scenery when I press on Play, however the image that is saved just shows an empty 2D coordinate system: but not what I actually see rotating. It also seems those image are created when I execute the cell in the jupyter notebook and not after I press "Play". Seems that there are two figures, one that I can see rotating, and the image of an empty 2D coordinate system that gets saved to a file ... | Note pio.write_image() expects a Figure object or dict representing a figure as the 1st argument (we can't just pass an update object or a frame). The idea is precisely to apply the changes from each animation frame to the figure and export it at each point sequentially : frames=[] for k, t in enumerate(np.arange(0, 6.26, 0.1)): xe, ye, ze = rotate_z(x_eye, y_eye, z_eye, -t) newframe = go.Frame(layout=dict(scene_camera_eye=dict(x=xe, y=ye, z=ze))) frames.append(newframe) fig.update_layout(scene_camera_eye=dict(x=xe, y=ye, z=ze)) pio.write_image(fig, f"images/images_{k+1:03d}.png", width=400, height=400, scale=1) The animation frames describe only the changes to be applied at some point to the figure, asynchronously, using Plotly.js animate method (js runtime), but the image export is done synchronously (python runtime), before the figure is eventually rendered (because before fig.show()). | 3 | 2 |
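The script above only exports individual PNG frames; to produce the rotate.gif the question asks for, the frames can be stitched together afterwards, for example with Pillow. A sketch, assuming the images/images_XXX.png naming used above and an installed Pillow:

```python
import glob
from PIL import Image

frames = [Image.open(p) for p in sorted(glob.glob("images/images_*.png"))]
frames[0].save(
    "rotate.gif",
    save_all=True,
    append_images=frames[1:],
    duration=50,  # milliseconds per frame
    loop=0,       # loop forever
)
```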
77,518,816 | 2023-11-20 | https://stackoverflow.com/questions/77518816/can-i-install-a-python-package-in-a-conda-environment-while-a-script-is-currentl | I used a SLURM manager to submit a bunch of scripts that all run in one Conda environment. I would like to install a new Python package to this environment. Do I need to wait until all of my scripts are done running? Or can I install the package now without messing anything up? | Can you? Sure. Should you? No. It could lead to changing existing packages, which could potentially lead to problems (e.g., missing references, API changes) - really depends on how the scripts are written and the dynamics of library loading throughout the scripts. However, there is a larger issue of working reproducibly. May not apply here, but most SLURM users are doing scientific computing and scientific users should never mutate environments after using them to produce results. Intact environments are the scientific record and essential for reproducibility. If software requirements change, then create a new environment. Conda uses hardlinks to minimize disk usage, so one should be extremely liberal about creating new environments. | 2 | 5 |
77,522,071 | 2023-11-21 | https://stackoverflow.com/questions/77522071/error-numpy-core-multiarray-failed-to-import-pyhdf | I'm trying to open HDF files using a python code provided in this website (https://hdfeos.org/software/pyhdf.php). However, I get an error while importing the pyhdf package import os from pyhdf.SD import SD, SDC import numpy as np import rasterio The error : --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf . Check the section C-API incompatibility at the Troubleshooting ImportError section at https://numpy.org/devdocs/user/troubleshooting-importerror.html#c-api-incompatibility for indications on how to solve this problem . --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Input In [14], in <cell line: 2>() 1 import os ----> 2 from pyhdf.SD import SD, SDC 3 import rasterio File ~\Miniconda3\lib\site-packages\pyhdf\SD.py:1003, in <module> 38 """ 39 SD (scientific dataset) API (:mod:`pyhdf.SD`) 40 ============================================= (...) 999 1000 """ 1001 import os, sys, types -> 1003 from . import hdfext as _C 1004 from .six.moves import xrange 1005 from .error import _checkErr, HDF4Error File ~\Miniconda3\lib\site-packages\pyhdf\hdfext.py:10, in <module> 8 # Import the low-level C/C++ module 9 if __package__ or "." in __name__: ---> 10 from . import _hdfext 11 else: 12 import _hdfext ImportError: numpy.core.multiarray failed to import I tried upgrading numpy and pyhdf versions but it didn't work | As commented by @Cow, I am posting this as an answer: The following packages are required to build and install pyhdf: Python: Python 2.6 or newer for Python 2, or Python 3.2 or newer for Python 3. NumPy HDF4 libraries (to use their HDF4 binaries, you will also need szip, available from the same page) Compiler suite e.g. GCC. On Windows, you need to use a compatible Visual C++ compiler. zlib libjpeg https://fhs.github.io/pyhdf/install.html#requirements Pay special attention to Swig-generated interface files Interface files hdfext.py and hdfext_wrap.c (located under the pyhdf subdirectory) have been generated using the SWIG tool. Those two files should be usable as is on most environments. It could happen however that, for reasons related to your environment, your C compiler does not accept the ‘.c’ file and raises a compilation error. If so, the interface needs to be regenerated. To do so, install SWIG, then run: $ cd pyhdf $ swig -python hdfext.i SWIG should silently regenerate the two interface files, after which installation should proceed correctly. https://fhs.github.io/pyhdf/install.html#swig-generated-interface-files | 3 | 2 |
77,522,673 | 2023-11-21 | https://stackoverflow.com/questions/77522673/google-calendar-api-understanding-of-token-json | I am working with the Google Calendar API, the Python quickstart in particular, but the language does not matter. The example from https://developers.google.com/calendar/api/quickstart/python has: if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( "credentials.json", SCOPES ) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open("token.json", "w") as token: token.write(creds.to_json()) I am working on a website that is mostly server side. People will log in and be able to create a calendar; the server will allow them to create a calendar and will automatically add events depending on events that occur. Question 1: My question is about token.json: is that file shared between all users, or should a separate file be created for each person? Question 2: Should it be backed up, because if I lose the file, will everyone be logged out? | Question 1: My question is about token.json: is that file shared between all users, or should a separate file be created for each person? token.json is single-user. In fact, as your code is written, it is single-user. The first thing that sample does is check if the file exists (if os.path.exists("token.json"):), and if it does, it loads the credentials from that file. Question 2: Should it be backed up, because if I lose the file, will everyone be logged out? Yes, you should probably back it up, as the user who authorized the application will otherwise be prompted to authorize it again. Note this is authorization, not authentication; there is no log-out. Notes: The code you are following (Authorize credentials for a desktop application) is designed for a desktop application and, as written, it is single-user. It is also not going to work on a hosted web page, as flow = InstalledAppFlow.from_client_secrets_file( "credentials.json", SCOPES ) creds = flow.run_local_server(port=0) will run the authorization request on the machine it is running on; unless the user can log in to the web server, it is not going to work. | 2 | 3 |
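A minimal sketch of the "separate token file per person" idea (not from the post above; the `token_{user_id}.json` naming and the `load_user_creds` helper are assumptions for illustration, while the API calls mirror the quickstart code quoted in the question):

```python
import os

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/calendar"]

def load_user_creds(user_id: str) -> Credentials:
    """Load (or create) credentials stored in one token file per user."""
    token_path = f"token_{user_id}.json"  # hypothetical per-user naming scheme
    creds = None
    if os.path.exists(token_path):
        creds = Credentials.from_authorized_user_file(token_path, SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            # Desktop-style flow, as in the quickstart; a hosted site would
            # need a web-server OAuth flow here instead.
            flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
            creds = flow.run_local_server(port=0)
        with open(token_path, "w") as token:
            token.write(creds.to_json())
    return creds
```

For a hosted website the answer's caveat still applies: run_local_server() only makes sense on the user's own machine, so that branch would be replaced by a web-server flow.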
77,521,658 | 2023-11-21 | https://stackoverflow.com/questions/77521658/vscode-python-pylance-does-not-gray-out-unaccessed-variable | In the screenshot below, when hovering over the grayed-out variables, Pylance (correctly!) says they are not accessed, e.g. "_baz" is not accessed Pylance. My question is about waz, which is clearly not accessed in either tab, yet it is still not grayed out. Why isn't it grayed out? I thought maybe it was related to waz not being a "private" (underscore) variable, but it just doesn't make sense... | Global variables without an underscore are considered public, so they might still be used when imported from somewhere else. | 2 | 2 |
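A tiny illustration of the point above (file and variable names are made up): a leading underscore marks a module-level name as private, so Pylance can safely gray it out when unused, while a public name may be imported elsewhere.

```python
# module_a.py
_helper = 42       # leading underscore: module-private, can be grayed out if unused
public_value = 42  # no underscore: part of the module's public interface

# module_b.py
# Because public_value is importable from other modules, the analyzer cannot
# know it is unused just by looking at module_a.py.
from module_a import public_value
print(public_value)
```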
77,521,686 | 2023-11-21 | https://stackoverflow.com/questions/77521686/how-to-make-overlapping-windows-of-weeks-based-on-the-nearest-available-dates-in | Sorry guys for the title but it is really what I'm trying to do. Here is a table to explain that more. The bold lines makes the years and the the thin ones makes the weeks. For the expected output. It really doesn't matter its format. All I need is that if I ask for the dates of a pair YEAR/WEEK, I get the corresponding window of dates. For example, if I do some_window_function(2022, 5) I should have the result below (it correspond to the RED WINDOW) DATE YEAR WEEK 2020 30 Friday, July 24, 2020 2022 5 Wednesday, February 2, 2022 5 Thursday, February 3, 2022 5 Friday, February 4, 2022 7 Tuesday, February 15, 2022 And for example, if I do some_window_function(2022, 7) I should have the result below (it correspond to the BLUE WINDOW) DATE YEAR WEEK 2022 5 Friday, February 4, 2022 2022 7 Tuesday, February 15, 2022 7 Wednesday, February 16, 2022 7 Thursday, February 17, 2022 2023 44 Tuesday, October 31, 2023 The dataframe used is this : df = pd.DataFrame({'YEAR': [2020, 2020, 2020, 2020, 2020, 2020, 2020, 2022, 2022, 2022, 2022, 2022, 2022, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023], 'WEEK': [29, 29, 29, 30, 30, 30, 30, 5, 5, 5, 7, 7, 7, 44, 44, 44, 44, 45, 45, 45, 46, 46, 46, 46], 'DATE': ['Monday, July 13, 2020', 'Thursday, July 16, 2020', 'Friday, July 17, 2020', 'Monday, July 20, 2020', 'Tuesday, July 21, 2020', 'Thursday, July 23, 2020', 'Friday, July 24, 2020', 'Wednesday, February 2, 2022', 'Thursday, February 3, 2022', 'Friday, February 4, 2022', 'Tuesday, February 15, 2022', 'Wednesday, February 16, 2022', 'Thursday, February 17, 2022', 'Tuesday, October 31, 2023', 'Wednesday, November 02, 2023', 'Friday, November 03, 2023', 'Sunday, November 05, 2023', 'Monday, November 06, 2023', 'Tuesday, November 07, 2023', 'Wednesday, November 08, 2023', 'Monday, November 13, 2023', 'Tuesday, November 14, 2023', 'Wednesday, November 15, 2023', 'Thursday, November 16, 2023']}) I made the code below but it gives a similar dataframe to my input : def make_windows(group): if group.name == df.loc[df['YEAR'] == group.name, 'WEEK'].min(): group.at[group.index[-1]+1, 'DATE'] = df.at[group.index[-1]+1, 'DATE'] return group.ffill() elif group.name < df.loc[df['YEAR']== group.name, 'WEEK'].max(): group.at[group.index[-1]+1, 'DATE'] = df.at[group.index[-1]+1, 'DATE'] return group.iloc[1:].ffill() else: return group.iloc[1:].ffill() results = df.groupby('YEAR').apply(make_windows) | Looks like you could use a simple mask for the YEAR/WEEK and expand it one row above/below (assuming sorted dates): df = df.sort_values(by=['YEAR', 'WEEK']) def some_window_function(year, week): mask = df['YEAR'].eq(year) & df['WEEK'].eq(week) return df[mask|mask.shift()|mask.shift(-1)] some_window_function(2022, 5) Output: YEAR WEEK DATE 6 2020 30 Friday, July 24, 2020 7 2022 5 Wednesday, February 2, 2022 8 2022 5 Thursday, February 3, 2022 9 2022 5 Friday, February 4, 2022 10 2022 7 Tuesday, February 15, 2022 | 2 | 1 |
77,522,054 | 2023-11-21 | https://stackoverflow.com/questions/77522054/count-unique-value-with-prioritize-value-in-pandas | I have a simple data frame as below: import pandas as pd import numpy as np df = pd.DataFrame({'CUS_NO': ['900636229', '900636229', '900636080', '900636080', '900636052', '900636052', '900636053', '900636054', '900636055', '900636056'], 'indicator': ['both', 'left_only', 'both', 'left_only', 'both', 'left_only', 'both', 'left_only', 'both', 'left_only'], 'Nationality': ['VN', 'VN', 'KR', 'KR', 'VN', 'VN', 'KR', 'VN', 'KR', 'VN']}) CUS_NO indicator Nationality 0 900636229 both VN 1 900636229 left_only VN 2 900636080 both KR 3 900636080 left_only KR 4 900636052 both VN 5 900636052 left_only VN 6 900636053 both KR 7 900636054 left_only VN 8 900636055 both KR 9 900636056 left_only VN I want to count unique value of CUS_NO so I used pd.Series.nunique by below code: df2 = pd.pivot_table(df, values='CUS_NO', index='Nationality', columns='indicator', aggfunc=pd.Series.nunique, margins=True).reset_index() df2 And here is the result: indicator Nationality both left_only All 0 KR 3 1 3 1 VN 2 4 4 2 All 5 5 7 But I my expectation is if CUS_NO was same and indicator was different, I just need to count both indicator. So below is my expected Output: indicator Nationality both left_only All 0 KR 3 0 3 1 VN 2 2 4 2 All 5 2 7 Thank you. | You can sort_values to have "both" on top (if more categories, use a Categorical to define a custom order), then drop_duplicates: tmp = (df .sort_values(by='indicator') .drop_duplicates(subset=['CUS_NO', 'Nationality'], keep='first') ) df2 = pd.pivot_table(tmp, values='CUS_NO', index='Nationality', columns='indicator', aggfunc=pd.Series.nunique, margins=True, fill_value=0).reset_index() Output: indicator Nationality both left_only All 0 KR 3 0 3 1 VN 2 2 4 2 All 5 2 7 Intermediate tmp: CUS_NO indicator Nationality 0 900636229 both VN 2 900636080 both KR 4 900636052 both VN 6 900636053 both KR 8 900636055 both KR 7 900636054 left_only VN 9 900636056 left_only VN | 3 | 2 |
77,500,505 | 2023-11-17 | https://stackoverflow.com/questions/77500505/how-to-draw-the-radius-of-a-circle-within-a-cartopy-projection | I am trying to draw the radius of a circle on a Cartopy projection through one point. And I couldn't find anything related to this. Here is what I have so far: fig = plt.figure(figsize=(10, 6)) ax = fig.add_subplot(1, 1, 1, projection=ccrs.Mercator()) ax.set_extent([18, 28, 59.5, 64.1], crs=ccrs.PlateCarree()) ax.coastlines(linewidth=.5) # Add the radar distance circle lon_ika = 23.076 lat_ika = 61.76 radius = 250 n_samples = 80 circles = Polygon(Geodesic().circle(lon_ika, lat_ika, radius*1000., n_samples=n_samples)) feature = cfeature.ShapelyFeature(circles, ccrs.PlateCarree(), fc='None', ec="black", lw=1, linestyle="-") linestyle="--") circle = ax.add_feature(feature) # Adding red dot and name of the radar station to the plot ax.plot(lon_ika, lat_ika, "o", c='r', transform=ccrs.PlateCarree(), markersize=6, label="Ikaalinen") # Adding red cross and name of IOP location to the plot lon_hyy = 24.3 lat_hyy = 61.83 ax.plot(lon_hyy, lat_hyy, "x", c='r', transform=ccrs.PlateCarree(), markersize=6, label="Hyytiälä") # Add labels plt.legend(loc='upper left', fontsize=12, framealpha=1, edgecolor='black') plt.show() I haven't found anything related on the web so far :/ | This code should give the required radius line that passes the X mark. The code: from shapely.geometry import Polygon from cartopy.geodesic import Geodesic # from geographiclib import matplotlib.pyplot as plt import cartopy.crs as ccrs import cartopy import cartopy.feature as cfeature fig = plt.figure(figsize=(10, 6)) ax = fig.add_subplot(1, 1, 1, projection=ccrs.Mercator()) # Avoid error AttributeError: 'GeoAxesSubplot' object has no attribute '_autoscaleXon' ax._autoscaleXon = False ax._autoscaleYon = False ax.set_extent([18, 28, 59.5, 64.1], crs=ccrs.PlateCarree()) ax.coastlines(linewidth=.5) # Add the radar distance circle lon_ika = 23.076 lat_ika = 61.76 ika = [lon_ika, lat_ika] radius = 250 # km n_samples = 80 circles = Polygon(Geodesic().circle(lon_ika, lat_ika, radius*1000., n_samples=n_samples)) feature = cfeature.ShapelyFeature([circles], ccrs.PlateCarree(), fc='None', ec="black", lw=1, linestyle="-") circle = ax.add_feature(feature) # Adding red dot and name of the radar station to the plot ax.plot(lon_ika, lat_ika, "o", c='r', transform=ccrs.PlateCarree(), markersize=6, label="Ikaalinen", zorder=30) # Adding red cross and name of IOP location to the plot lon_hyy = 24.3 lat_hyy = 61.83 hyy = [lon_hyy, lat_hyy] # Get (geod_distance, forward and backward azimuth) between 2 points dist_m, fw_azim, bw_azim = Geodesic().inverse(ika, hyy).T # Get (long, lat, forward_azimuth) of target point using direct problem solver px_lon, px_lat, fwx_azim = Geodesic().direct(ika, fw_azim, radius*1000).T ax.plot(lon_hyy, lat_hyy, "x", c='r', transform=ccrs.PlateCarree(), markersize=6, label="Hyytiälä", zorder=10) # Plot the target point on the circle's perimeter ax.plot(px_lon, px_lat, "x", c='g', transform=ccrs.PlateCarree(), markersize=12, label="Target") # Plot great-circle arc from circle center to the target point ax.plot([ika[0], px_lon], [ika[1], px_lat], '.-', color='blue', transform=ccrs.Geodetic(), zorder=5 ) gl = ax.gridlines(draw_labels=True) #ax.set_aspect(1) # Add labels plt.legend(loc='upper left', fontsize=12, framealpha=1, edgecolor='black') plt.show() The output plot: | 2 | 3 |
77,521,378 | 2023-11-21 | https://stackoverflow.com/questions/77521378/confusing-output-with-pandas-rolling-window-with-datetime64us-dtype | I get confusing results from pandas.rolling() when the dtype is datetime64[us]. Pandas version is 2.1.1. Let df be the dataframe day x 0 2021-01-01 3 1 2021-01-02 2 2 2021-01-03 1 3 2021-01-05 4 4 2021-01-08 2 5 2021-01-14 5 6 2021-01-15 6 7 2021-01-16 1 8 2021-01-19 5 9 2021-01-20 2 Its dtypes are: day datetime64[ns] x int64 dtype: object We specify a rolling window of length 3 days: df.rolling("3d", on="day", center=True)["x"].sum() Output is as expected: 0 5.0 1 6.0 2 3.0 3 4.0 4 2.0 5 11.0 6 12.0 7 7.0 8 7.0 9 7.0 Name: x, dtype: float64 Let us repeat this after casting the dtype datetime64[ns] to datetime64[us]: df["day"] = df["day"].astype("datetime64[us]") Using the exact same code as above for the rolling window now gives: 0 31.0 1 31.0 2 31.0 3 31.0 4 31.0 5 31.0 6 31.0 7 31.0 8 31.0 9 31.0 Name: x, dtype: float64 Why? | I can't reproduce it on pandas 2.1.2, make sure to use a recent version: print(pd.__version__) 2.1.2 df["day"] = df["day"].astype("datetime64[us]") print(df.dtypes) day datetime64[us] x int64 dtype: object df.rolling("3d", on="day", center=True)["x"].sum() 0 5.0 1 6.0 2 3.0 3 4.0 4 2.0 5 11.0 6 12.0 7 7.0 8 7.0 9 7.0 Name: x, dtype: float64 | 3 | 2 |
77,516,280 | 2023-11-20 | https://stackoverflow.com/questions/77516280/install-youtokentome-in-poetry-requires-cython-unable-to-configre-correctly | I'm trying to convert whisper-diarization to Poetry. It goes well until I add nemo_toolkit[asr]==1.20.0, which depends on youtokentome (that name is well thought of, btw) File "/tmp/tmpexmdke23/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 507, in run_setup super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script) File "/tmp/tmpexmdke23/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in run_setup exec(code, locals()) File "<string>", line 5, in <module> ModuleNotFoundError: No module named 'Cython' So I tried adding cython to the dependencies. It works fine if I run poetry shell and execute cython, so it is available. My pyproject so far: ... [tool.poetry.dependencies] python = "^3.10" faster-whisper = "0.9.0" wget = "^3.2" transformers = ">=4.26.1" whisperx = {git = "https://github.com/m-bain/whisperX.git", rev = "49e0130e4e0c0d99d60715d76e65a71826a97109"} deepmultilingualpunctuation = "^1.0.1" cython = "^3.0.5" [build-system] requires = ["poetry-core", "cython"] build-backend = "poetry.core.masonry.api" I added cython to the requires section, but that doesn't resolve the error. | Building this package is done in an isolated environment, so it doesn't matter if Cython is installed in your current environment. youtokentome has to define its own build requirements according to PEP 518. There has been a pull request open for that for a long time: https://github.com/VKCOM/YouTokenToMe/pull/108 | 3 | 2 |
77,503,260 | 2023-11-17 | https://stackoverflow.com/questions/77503260/can-we-stop-the-dash-post-dash-dash-update-component-http-1-1-log-messages | Has anyone figured out how to stop the following Dash log messages? They clutter up my logs, and on a busy website, make it almost impossible to see actual, useful log messages when errors occur. [17/Nov/2023:16:28:10 +0000] "POST /dash/_dash-update-component HTTP/1.1" 204 0 "https://example.com/" "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1" I've tried the suggestion here by the creator of Dash. He says to try the following, but it doesn't do anything for me: import logging logging.getLogger('werkzeug').setLevel(logging.ERROR) Here's a fuller example from this link if you want to try it: import logging from dash import Dash from flask import Flask logging.getLogger('werkzeug').setLevel(logging.ERROR) URL_BASE_PATHNAME = '/'+'example/' server = Flask(__name__) app = Dash(name=__name__, server=server, url_base_pathname=URL_BASE_PATHNAME) if __name__ == "__main__": app.run() Here's more like what mine looks like in production with Docker Swarm: import logging import os import time from datetime import datetime, timezone from logging.handlers import RotatingFileHandler from pathlib import Path from typing import List from dotenv import load_dotenv from flask import Flask, current_app, redirect, render_template, request, url_for from flask.globals import _request_ctx_stack from flask.logging import default_handler from flask_assets import Environment from flask_bcrypt import Bcrypt from flask_bootstrap import Bootstrap from flask_caching import Cache from flask_flatpages import FlatPages from flask_htmlmin import HTMLMIN as HTMLMin from flask_login import LoginManager, current_user from flask_mail import Mail from flask_migrate import Migrate from flask_sqlalchemy import SQLAlchemy from werkzeug.exceptions import NotFound from werkzeug.middleware.proxy_fix import ProxyFix from app import databases from app.assets import compile_assets # Dictionary pointing to classes of configs from app.config import ( INSTANCE_PATH, PROJECT_FOLDER, ROLE_ID_CUSTOMER_ADMIN, ROLE_ID_IJACK_ADMIN, ROLE_ID_IJACK_SERVICE, STATIC_FOLDER, TEMPLATE_FOLDER, app_config, ) from app.dash_setup import register_dashapps from app.utils import error_send_email_w_details # Ensure the .env file doesn't contain copies of the variables in the .flaskenv file, or it'll get confusing... load_dotenv(PROJECT_FOLDER, override=True) # Set log level globally so other modules can import it log_level = None db = SQLAlchemy() login_manager = LoginManager() bcrypt = Bcrypt() cache = Cache() mail = Mail() pages = FlatPages() assets_env = Environment() # noqa: C901 def create_app(config_name=None): """Factory function that creates the Flask app""" app = Flask( __name__, instance_path=INSTANCE_PATH, static_folder=STATIC_FOLDER, template_folder=TEMPLATE_FOLDER, static_url_path="/static", ) # Import the config class from config.py (defaults to 'development' if not in the .env file) if config_name is None: config_name = os.getenv("FLASK_CONFIG", "development") config_obj = app_config[config_name] app.config.from_object(config_obj) app.config["SECRET_KEY"] = os.getenv("SECRET_KEY") # Set up logging global log_level log_level = app.config.get("LOG_LEVEL", logging.INFO) app.logger.setLevel(log_level) # The default handler is a StreamHandler that writes to sys.stderr at DEBUG level. 
default_handler.setLevel(log_level) # Change default log format log_format = ( "[%(asctime)s] %(levelname)s: %(name)s: %(module)s: %(funcName)s: %(message)s" ) default_handler.setFormatter(logging.Formatter(log_format)) # Stop the useless 'dash-component-update' logging? (Unfortunately this doesn't seem to work...) # https://community.plotly.com/t/prevent-post-dash-update-component-http-1-1-messages/11132 # https://community.plotly.com/t/suppressing-component-update-output-message-in-the-terminal/7613 logging.getLogger("werkzeug").setLevel(logging.ERROR) # NOTE != means running the Flask application through Gunicorn in my workflow. if __name__ != "__main__" and not app.debug and app.env != "development": # Add a FileHandler to the Flask logger Path("logs").mkdir(exist_ok=True) file_handler = RotatingFileHandler( "logs/myijack.log", maxBytes=10240, backupCount=10 ) file_handler.setLevel(logging.ERROR) file_handler.setFormatter(logging.Formatter(log_format)) app.logger.addHandler(file_handler) app.logger.error( "Just testing Gunicorn logging in Docker Swarm service container ✅..." ) app.logger.info("myijack.com startup now...") app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_port=1) # Initialize extensions Bootstrap(app) db.init_app(app) # SQLAlchemy databases.init_app(app) # other custom database functions cache.init_app( app, config=config_obj.cache_config ) # Simple if dev, otherwise Redis for test/prod login_manager.init_app(app) Migrate(app, db) mail.init_app(app) bcrypt.init_app(app) pages.init_app(app) # By default, when a user attempts to access a login_required view without being logged in, # Flask-Login will flash a message and redirect them to the log in view. # (If the login view is not set, it will abort with a 401 error.) login_manager.login_view = "auth.login" # login_manager.login_message = "You must be logged in to access this page." # Register blueprints if ALL0_DASH1_FLASK2_ADMIN3 in (0, 2): pass app.logger.debug("Importing blueprint views...") from app.auth.oauth import azure_bp, github_bp, google_bp from app.auth.views import auth as auth_bp from app.dashapp.views import dash_bp from app.home.views import home as home_bp from app.pwa import pwa_bp app.logger.debug("Registering blueprint views...") app.register_blueprint(auth_bp) app.register_blueprint(home_bp) app.register_blueprint(pwa_bp) app.register_blueprint(dash_bp) app.register_blueprint(github_bp, url_prefix="/login") app.register_blueprint(azure_bp, url_prefix="/login") app.register_blueprint(google_bp, url_prefix="/login") # Register API for saving Flask-Admin views' metadata via JavaScript AJAX from app.api import api api.init_app(app) if ALL0_DASH1_FLASK2_ADMIN3 in (0, 3): pass # Setup Flask-Admin site app.logger.debug("Importing Flask-Admin views...") from app.flask_admin.views_admin import admin_views from app.flask_admin.views_admin_cust import admin_cust_views app.logger.debug("Adding Flask-Admin views...") admin_views(app, db) admin_cust_views(app, db) with app.app_context(): # Flask-Assets must come before the Dash app so it # can first render the {% assets %} blocks assets_env.init_app(app) compile_assets(assets_env, app) # HTMLMin must come after Dash for some reason... # app.logger.debug("Registering HTMLMin...") app.config["MINIFY_HTML"] = True HTMLMin( app, remove_comments=True, remove_empty_space=True, # This one can cause a bug... # disable_css_min=False, ) return app, dash_app The issue has been on Github since 2018 and apparently it's closed/fixed, but not for me... 
I'm using the following pyproject.toml in production: [tool.poetry.dependencies] python = ">=3.8,<3.12" dash = {extras = ["compress"], version = "^2.11.1"} scikit-learn = "1.1.3" pandas = "^1.5.3" flask-login = "^0.5.0" keras = "^2.4.3" joblib = "^1.2.0" boto3 = "^1.26.12" click = "^8.1.3" dash-bootstrap-components = "^1.4.2" dash-table = "^5.0.0" flask-caching = "2.0.1" flask-migrate = "^2.5.3" flask-sqlalchemy = "^2.4.4" flask-testing = "^0.8.0" gevent = "^22.10.2" greenlet = "^2.0.1" gunicorn = "^20.0.4" python-dotenv = "^0.19.2" python-dateutil = "^2.8.1" requests = "^2.24.0" email_validator = "^1.1.1" flask-redis = "^0.4.0" numexpr = "^2.7.1" flask-mail = "^0.9.1" python-jose = "^3.3.0" sqlalchemy = "^1.3" Flask-FlatPages = "^0.7.2" flask-bootstrap4 = "^4.0.2" colour = "^0.1.5" tenacity = "^6.3.1" psycopg2-binary = "^2.8.6" twilio = "^6.54.0" openpyxl = "^3.0.7" phonenumbers = "^8.12.29" celery = "^5.1.2" flower = "^1.0.0" Flask-Assets = "^2.0" webassets = "^2.0" cssmin = "^0.2.0" rjsmin = "^1.2.0" Flask-HTMLmin = "^2.2.0" ipinfo = "^4.2.1" dash-mantine-components = "^0.12.1" Flask = "^2.1.2" Flask-Bcrypt = "^1.0.1" Werkzeug = "2.0.3" Flask-WTF = "^1.0.1" flask-restx = "^0.5.1" flask-admin-plus = "^1.6.18" Pillow = "^9.2.0" multidict = "^6.0.2" gcld3 = "^3.0.13" plotly = "^5.14.1" flask-dance = "^7.0.0" blinker = "^1.6.2" [build-system] requires = ["poetry>=0.12"] build-backend = "poetry.masonry.api" Here's my gunicorn.conf.py: # -*- encoding: utf-8 -*- bind = "0.0.0.0:5005" # The Access log file to write to, same as --access-logfile # Using default "-" makes gunicorn log to stdout - perfect for Docker accesslog = "-" # Same as --log-file or --error-logfile. Default "-" goes to stderr for Docker. errorlog = "-" # We overwrite the below loglevel in __init__.py # loglevel = "info" # Redirect stdout/stderr to specified file in errorlog capture_output = True enable_stdio_inheritance = True # gevent setup # workers = 4 # 4 threads (2 per CPU) # threads = 2 # 2 CPUs # Typically Docker handles the number of workers, not Gunicorn workers = 1 threads = 2 worker_class = "gevent" # The maximum number of simultaneous clients. # This setting only affects the Eventlet and Gevent worker types. worker_connections = 20 # Timeout in seconds (default is 30) timeout = 30 # Directory to use for the worker heartbeat temporary file. # Use an in-memory filesystem to avoid hanging. # In AWS an EBS root instance volume may sometimes hang for half a minute # and during this time Gunicorn workers may completely block. # https://docs.gunicorn.org/en/stable/faq.html#blocking-os-fchmod worker_tmp_dir = "/dev/shm" Here's the Dockerfile I'm using in production: # Builder stage ############################################################################ # Build args available during build, but not when container runs. # They can have default values, and can be passed in at build time. ARG ENVIRONMENT=production FROM python:3.8.15-slim-buster AS builder ARG POETRY_VERSION=1.2.2 # Use Docker BuildKit for better caching and faster builds ARG DOCKER_BUILDKIT=1 ARG BUILDKIT_INLINE_CACHE=1 # Enable BuildKit for Docker-Compose ARG COMPOSE_DOCKER_CLI_BUILD=1 # Python package installation stuff ARG PIP_NO_CACHE_DIR=1 ARG PIP_DISABLE_PIP_VERSION_CHECK=1 ARG PIP_DEFAULT_TIMEOUT=100 # Don't write .pyc bytecode ENV PYTHONDONTWRITEBYTECODE=1 # Don't buffer stdout. 
Write it immediately to the Docker log ENV PYTHONUNBUFFERED=1 ENV PYTHONFAULTHANDLER=1 ENV PYTHONHASHSEED=random # Tell apt-get we're never going to be able to give manual feedback: ENV DEBIAN_FRONTEND=noninteractive WORKDIR /project RUN apt-get update && \ apt-get install -y --no-install-recommends gcc redis-server libpq-dev sass \ g++ protobuf-compiler libprotobuf-dev && \ # Clean up apt-get autoremove -y && \ apt-get clean -y && \ rm -rf /var/lib/apt/lists/* # The following only runs in the "builder" build stage of this multi-stage build. RUN pip3 install "poetry==$POETRY_VERSION" && \ # Use a virtual environment for easy transfer of builder packages python -m venv /venv && \ /venv/bin/pip install --upgrade pip wheel # Poetry exports the requirements to stdout in a "requirements.txt" file format, # and pip installs them in the /venv virtual environment. We need to copy in both # pyproject.toml AND poetry.lock for this to work! COPY pyproject.toml poetry.lock ./ RUN poetry config virtualenvs.create false && \ poetry export --no-interaction --no-ansi --without-hashes --format requirements.txt \ $(test "$ENVIRONMENT" != "production" && echo "--with dev") \ | /venv/bin/pip install -r /dev/stdin # Make sure our packages are in the PATH ENV PATH="/project/node_modules/.bin:$PATH" ENV PATH="/venv/bin:$PATH" COPY wsgi.py gunicorn.conf.py .env .flaskenv entrypoint.sh postcss.config.js ./ COPY assets assets COPY app app RUN echo "Building flask assets..." && \ # Flask assets "clean" command may fail, in which case just run "build" flask assets clean || true && \ flask assets build # Final stage of multi-stage build ############################################################ FROM python:3.8.15-slim-buster as production # For setting up the non-root user in the container ARG USERNAME=user ARG USER_UID=1000 ARG USER_GID=$USER_UID # Use Docker BuildKit for better caching and faster builds ARG DOCKER_BUILDKIT=1 ARG BUILDKIT_INLINE_CACHE=1 # Enable BuildKit for Docker-Compose ARG COMPOSE_DOCKER_CLI_BUILD=1 # Don't write .pyc bytecode ENV PYTHONDONTWRITEBYTECODE=1 # Don't buffer stdout. 
Write it immediately to the Docker log ENV PYTHONUNBUFFERED=1 ENV PYTHONFAULTHANDLER=1 ENV PYTHONHASHSEED=random # Tell apt-get we're never going to be able to give manual feedback: ENV DEBIAN_FRONTEND=noninteractive # Add a new non-root user and change ownership of the workdir RUN addgroup --gid $USER_GID --system $USERNAME && \ adduser --no-create-home --shell /bin/false --disabled-password --uid $USER_UID --system --group $USERNAME && \ # Get curl and netcat for Docker healthcheck apt-get update && \ apt-get -y --no-install-recommends install nano curl netcat g++ && \ apt-get clean && \ # Delete index files we don't need anymore: rm -rf /var/lib/apt/lists/* WORKDIR /project # Make the logs directory writable by the non-root user RUN mkdir -p /project/logs && \ chown -R $USER_UID:$USER_GID /project/logs # Copy in files and change ownership to the non-root user COPY --chown=$USER_UID:$USER_GID --from=builder /venv /venv # COPY --chown=$USER_UID:$USER_GID --from=builder /node_modules /node_modules COPY --chown=$USER_UID:$USER_GID --from=builder /project/assets assets COPY --chown=$USER_UID:$USER_GID app app COPY --chown=$USER_UID:$USER_GID tests tests COPY --chown=$USER_UID:$USER_GID wsgi.py gunicorn.conf.py .env .flaskenv entrypoint.sh ./ # Set the user so nobody can run as root on the Docker host (security) USER $USERNAME # Just a reminder of which port is needed in gunicorn.conf.py (in-container, in production) # EXPOSE 5005 # Make sure we use the virtualenv ENV PATH="/venv/bin:$PATH" RUN echo PATH = $PATH CMD ["/bin/bash", "/project/entrypoint.sh"] My entrypoint.sh file starts everything as follows: #!/bin/bash # Enable exit on non 0 set -euo pipefail # Finally, start the Gunicorn app server for the Flask app. # All config options are in the gunicorn.conf.py file. echo "Starting Gunicorn with gunicorn.conf.py configuration..." gunicorn --config /project/gunicorn.conf.py wsgi:app Here's the wsgi.py file to which the above entrypoint.sh is referring: print("Starting: importing app and packages...") try: from app import cli, create_app, db except Exception as err: print(f"ERROR: {err}") print("ERROR: Unable to import cli, create_app, and db from app. Exiting...") exit(1) print("Creating app...") try: app, _ = create_app() cli.register(app) except Exception as err: print(f"ERROR: {err}") print("ERROR: Unable to create app. Exiting...") exit(1) print("App is ready ✅") UPDATE Nov 20, 2023: I've added the following to the code, and it still outputs the useless Dash POST /dash/_dash-update-component logs... gunicorn_logger = logging.getLogger("gunicorn.error") gunicorn_logger.setLevel(logging.ERROR) dash_logger = logging.getLogger("dash") dash_logger.setLevel(logging.ERROR) | I finally solved the problem, I think. As @EricLavault suggested, it was a Gunicorn problem, not a Dash/Flask logging problem. What I was seeing were the "access logs", which looked like this: [17/Nov/2023:16:28:10 +0000] "POST /dash/_dash-update-component HTTP/1.1" 204 0 "https://example.com/" "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1" To remove the "access logs" from my Docker log output in production, I just changed my gunicorn.conf.py settings to the following, where accesslog = None is the key line: accesslog = None errorlog = "-" loglevel = "error" | 3 | 2 |
77,514,237 | 2023-11-20 | https://stackoverflow.com/questions/77514237/getting-numbers-from-an-array-using-mask-and-regex | Having this array with code and collection, where X is a mask that can be "any number": input_array = [{"code": "XXXX10", "collection": "one"}, {"code": "XXX610", "collection": "two"}, {"code": "XXXX20", "collection": "three"}] I want a function that given any 6 digit code, for example 000710 returns the value that matches the best code mask (for the example would be one). This is my try: def get_collection_from_code(analysis_code): for collection in input_array: actual_code = collection["code"] mask_to_re = actual_code.replace("X", "[\d\D]") pattern = re.compile("^" + mask_to_re + "$") if pattern.match(analysis_code): print("Found collection '" + str(collection["collection"]) + "' for code: " + str(analysis_code)) return collection["collection"] res = get_collection_from_code("010610") print(res) The problem here is that if I inpuit the code 010610 (and I want to return two), it returns one as also matches the pattern XXXX10 first. For better understanding, if I input there values, I would like to have those ouputs: 010610 > two 010010 > one 123420 > three | You could iterate the entire collection, saving the length of the "X" part of any match, and then return the shortest: input_array = [{"code": "XXXX10", "collection": "one"}, {"code": "XXX610", "collection": "two"}, {"code": "XXXX20", "collection": "three"}] def get_collection_from_code(analysis_code): results = {} for collection in input: actual_code = collection["code"] mask_to_re = actual_code.replace("X", "[\d\D]") pattern = re.compile("^" + mask_to_re + "$") if pattern.match(analysis_code): results[collection["collection"]] = actual_code.count('X') if len(results): best = sorted(results.items(), key=lambda i:i[1])[0] print("Found collection '" + str(best[0]) + "' for code: " + str(analysis_code)) return best[0] res = get_collection_from_code("010610") # Found collection 'two' for code: 010610 Note I've saved all the matches in case you want to process them in any way. Otherwise you could just check for the "best" match in each iteration and update that instead. | 2 | 1 |
77,515,427 | 2023-11-20 | https://stackoverflow.com/questions/77515427/new-pandas-dataframe-column-with-totals-from-a-different-dataframe-by-conditions | I have two tables: one ('sales') with sales data (types of goods, dates of sale, and quantity) and another ('ref') with types of goods and a reference date. I want to add a column to the second table that would show the total quantity of the corresponding item sold within seven days (give or take) from the reference date. Here's the sample data: sales = pd.DataFrame({'Fruit': {0: 'apples', 1: 'oranges', 2: 'pears', 3: 'apples', 4: 'apples', 5: 'bananas', 6: 'oranges', 7: 'pears', 8: 'pears', 9: 'oranges', 10: 'bananas', 11: 'apples', 12: 'pears', 13: 'pears', 14: 'apples', 15: 'pears', 16: 'oranges', 17: 'oranges', 18: 'pears'}, 'Date': {0: '2023-07-07', 1: '2023-02-05', 2: '2023-08-16', 3: '2023-07-26', 4: '2023-07-14', 5: '2024-02-01', 6: '2023-09-19', 7: '2023-04-08', 8: '2023-06-08', 9: '2023-05-15', 10: '2023-10-20', 11: '2023-07-25', 12: '2023-07-31', 13: '2023-10-08', 14: '2023-06-28', 15: '2023-08-15', 16: '2023-05-14', 17: '2023-07-28', 18: '2023-07-29'}, 'Quantity': {0: 18, 1: 10, 2: 10, 3: 20, 4: 16, 5: 14, 6: 18, 7: 18, 8: 14, 9: 19, 10: 16, 11: 16, 12: 17, 13: 10, 14: 16, 15: 15, 16: 18, 17: 20, 18: 19}}) sales['Date'] = pd.to_datetime(sales['Date']) ref = pd.DataFrame({'Fruit': {0: 'apples', 1: 'bananas', 2: 'oranges', 3: 'apples', 4: 'pears', 5: 'oranges', 6: 'bananas', 7: 'oranges', 8: 'oranges'}, 'Date': {0: '2023-07-25', 1: '2023-12-27', 2: '2023-07-13', 3: '2023-06-27', 4: '2023-07-08', 5: '2023-09-17', 6: '2023-10-25', 7: '2023-10-05', 8: '2023-04-14'}}) ref['Date'] = pd.to_datetime(ref['Date']) For example, the first row of ref should show 36 (20 apples from 2023-07-36 and 16 apples from 2023-07-25). If I were using Excel, I'd use this formula: =SUMIFS(sales.Quantity, sales.Fruit, ref.Fruit, sales.Date, ">="&ref.Date-7, sales.Date, "<="&ref.Date+7). In Python, I can get the results I needed on a single-item basis, like this: sales[(sales['Fruit']=='apples')& (sales['Date']>=pd.to_datetime('2023-07-25')-pd.to_timedelta(7, unit='d'))& (sales['Date']<=pd.to_datetime('2023-07-25')+pd.to_timedelta(7, unit='d'))]['Quantity'].sum() and using iloc: sales[(sales['Fruit']==ref.iloc[0,0])& (sales['Date']>=ref.iloc[0,1]-pd.to_timedelta(7, unit='d'))& (sales['Date']<=ref.iloc[0,1]+pd.to_timedelta(7, unit='d'))]['Quantity'].sum() but when I try to add a new column to ref with this calculation, I get 'ValueError: Can only compare identically-labeled Series objects'. ref['Total'] = sales[(sales['Fruit']==ref.iloc[ref.index,0])& (sales['Date']>=ref.iloc[ref.index,1]-pd.to_timedelta(7, unit='d'))& (sales['Date']<=ref.iloc[ref.index,1]+pd.to_timedelta(7, unit='d'))]['Quantity'].sum() I'm guessing I'm wrong to use ref.index in place of the 0 in iloc to get the number I need - what should I use instead? | You can use conditional_join from pyjanitor: ref['start'] = ref['Date'] - pd.to_timedelta(7, unit='d') ref['end'] = ref['Date'] + pd.to_timedelta(7, unit='d') out = (sales .conditional_join( ref, ('Date', 'end', '<='), ('Date', 'start', '>='), ('Fruit', 'Fruit', '=='), how='right', df_columns='Quantity') .groupby(['Fruit', 'Date'], as_index=False)['Quantity'] .sum()) print (out) Fruit Date Quantity 0 apples 2023-06-27 16.0 1 apples 2023-07-25 36.0 2 bananas 2023-10-25 16.0 3 bananas 2023-12-27 0.0 4 oranges 2023-04-14 0.0 5 oranges 2023-07-13 0.0 6 oranges 2023-09-17 18.0 7 oranges 2023-10-05 0.0 8 pears 2023-07-08 0.0 | 2 | 3 |
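For reference, a dependency-free sketch of the same SUMIFS-style logic in plain pandas (my own addition, not part of the accepted answer): it simply applies the single-row mask from the question to every row of ref. The helper name total_within_week is made up; the frame and column names follow the question.

```python
import pandas as pd

tol = pd.Timedelta(days=7)

def total_within_week(row):
    # Same three conditions as the asker's single-row example, per ref row.
    mask = (
        (sales['Fruit'] == row['Fruit'])
        & (sales['Date'] >= row['Date'] - tol)
        & (sales['Date'] <= row['Date'] + tol)
    )
    return sales.loc[mask, 'Quantity'].sum()

ref['Total'] = ref.apply(total_within_week, axis=1)
```

This is O(len(ref) × len(sales)), which is fine at this size; the conditional_join approach above scales better for large frames.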
77,517,147 | 2023-11-20 | https://stackoverflow.com/questions/77517147/in-python-if-i-make-a-list-of-optional-values-and-then-filter-out-none-how-do | In Python, when collecting a sequence of Optional values into an Iterable or List while filtering out None values, how do you express to the type checker that the element type of the resulting Iterable is no longer Optional and the list does not contain Nones? Take the following example: def a(i: InputType) -> Optional[ResultType]: ... def b(inputs: List[InputType]) -> Iterable[ResultType]: return filter(partial(is_not, None), (a(x) for x in inputs)) The type checker (Pyright in my case) complains that I can't put the potential None value in the Iterable[ResultType] return type. But since I'm filtering it out, I need some way to assert or prove to the type checker that the element type is no longer Optional[ResultType] but just ResultType. What's the cleanest way of doing this? | Following the suggestion of @MateenUlhaq in the comments, the problem you are facing is probably due to the use of filter(partial(is_not, None)), which makes it hard for the language server to infer the type of its return value. You can make it easier for it by filtering out the Nones at the beginning. The following is an example: def a(x: int) -> int | None: if x % 2 == 0: return x return None def b(xs: list[int]): print(sum(xs)) xs = [ax for x in range(10) if (ax:=a(x)) is not None] b(xs) With this, Pyright does not complain, since it can infer the type of xs to be list[int]. Applying this to your specific case you would get something like: def b(inputs: List[InputType]) -> Iterable[ResultType]: return (ax for x in inputs if (ax:=a(x)) is not None) The := operator requires Python 3.8+, and the int | None union syntax in annotations requires Python 3.10+. | 3 | 2 |
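As an additional option not covered in the accepted answer, a TypeGuard predicate can let filter() itself do the narrowing. This is a sketch reusing the question's InputType, ResultType and a() names, and it assumes a type checker and typeshed recent enough to ship the TypeGuard overload of filter:

```python
from typing import Iterable, List, Optional, TypeVar
from typing_extensions import TypeGuard  # or: from typing import TypeGuard (3.10+)

T = TypeVar("T")

def not_none(value: Optional[T]) -> TypeGuard[T]:
    return value is not None

# InputType, ResultType and a() are as defined in the question.
def b(inputs: List[InputType]) -> Iterable[ResultType]:
    # With the TypeGuard overload, the result should be inferred as an
    # iterator of ResultType rather than Optional[ResultType].
    return filter(not_none, (a(x) for x in inputs))
```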
77,514,680 | 2023-11-20 | https://stackoverflow.com/questions/77514680/use-textwrap-shorten-with-a-list | This code creates a table in an Excel file and a pie plot; the label fields, taken from a list of values, are way too long. plt.pie(df.Confronto.value_counts(), labels=textwrap.shorten(df.Confronto.value_counts().index.tolist(), width=10, placeholder="...")) plt.title("Confronto {} - {}".format(Mese,Mese2,)) plt.show() So I tried to shorten them using textwrap.shorten, but it gives me this error: labels=textwrap.shorten(df.Confronto.value_counts().index.tolist(), width=10, placeholder="...")) AttributeError: 'list' object has no attribute 'strip' I tried to cast it to a string, but then it gave me an error about the length of the labels: labels=textwrap.shorten(str(df.Confronto.value_counts().index.tolist()), width=10, placeholder="...") raise ValueError("'label' must be of length 'x'") ValueError: 'label' must be of length 'x' The image below shows the result of the code in its current state: the labels are so long that nothing is readable when the image is created. I wish I could put a maximum on the length of the labels. | I am not sure what you are trying to achieve with this code, so I'll just explain why the exception is raised. That happens because pandas.Index.tolist returns a list, and textwrap.shorten is used on strings. So it cannot strip the value when it is a list. I assume df.Confronto.value_counts().index.tolist() returns a list of int, like [1,2,3]. You may try joining the list first with label_string = ''.join(map(str,[1,2,3])) Out: '123' It first maps str() over your list of int to produce a list of str, then joins the list into a single string using '' (the empty string) as the separator. However, labels= takes a list of str, so you might want to turn it back into a list with label_list = [ch for ch in label_string] or even slice the original df.Confronto.value_counts().index.tolist() to reduce the number of elements, and then map str() over it. After clarification in the comments, I suggest a list comprehension to apply textwrap to each element of the given list: label_list = [textwrap.shorten(string, width=10, placeholder="...") for string in df.Confronto.value_counts().index.tolist()] | 3 | 3 |
77,508,717 | 2023-11-18 | https://stackoverflow.com/questions/77508717/where-to-find-the-code-for-esrk1-and-rswm1-in-the-julia-library-source-code | I'm trying to implement the SDE solver called ESRK1 and the adaptive stepsize algorithm called RSwM1 from Rackauckas & Nie (2017). I'm writing a python implementation, mainly to confirm to myself that I've understood the algorithm correctly. However, I'm running into a problem already at the implementation of ESRK1: When I test my implementation with shorter and shorter timesteps on a simple SDE describing geometric Brownian motion, the solution does not converge as dt becomes smaller, indicating that I have a mistake in my code. I believe this algorithm is implemented as part of the library DifferentialEquations.jl in Julia, so I thought perhaps I could find some help by looking at the Julia code. However, I have had some trouble locating the relevant code. If someone could point me to the implementation of ESRK1 and RSwM1 in the relevant Julia librar(y/ies) (or indeed any other readable and correct implementation) of these algorithms, I would be most grateful. I searched for ESRK and RSwM in the github repo of StochasticDiffEq.jl, but I didn't find anything I could really recognise as the method from the paper I'm reading: https://github.com/search?q=repo%3ASciML%2FStochasticDiffEq.jl+rswm&type=code Update: I found the code for ESRK1, as shown in my answer below, but I'm still unable to find the code for RSwM1. For completeness, here is my own not-yet-correct implementation of ESRK1 in python: def ESRK1(U, t, dt, f, g, dW, dZ): # Implementation of ESRK1, following Rackauckas & Nie (2017) # Eq. (2), (3) and (4) and Table 1 # Stochastic integrals, taken from Eqs. (25) - (30) in Rackauckas & Nie (2017) I1 = dW I11 = (I1**2 - dt) / 2 I111 = (I1**3 - 3*dt*I1) / 6 I10 = (I1 + dZ/np.sqrt(3))*dt / 2 # Coefficients, taken from Table 1 in Rackauckas & Nie (2017) # All coefficients not included below are zero c0_2 = 3/4 c1_2, c1_3, c1_4 = 1/4, 1, 1/4 A0_21 = 3/4 B0_21 = 3/2 A1_21 = 1/4 A1_31 = 1 A1_43 = 1/4 B1_21 = 1/2 B1_31 = -1 B1_41, B1_42, B1_43 = -5, 3, 1/2 alpha1, alpha2 = 1/2, 2/3 alpha_tilde1, alpha_tilde2 = 1/2, 1/2 beta1_1, beta1_2, beta1_3 = -1, 4/3, 2/3 beta2_1, beta2_2, beta2_3 = -1, 4/3, -1/3 beta3_1, beta3_2, beta3_3 = 2, -4/3, -2/3 beta4_1, beta4_2, beta4_3, beta4_4 = -2, 5/3, -2/3, 1 # Stages in the Runge-Kutta approximation # Eqs. (3) and (4) and Table 1 in Rackauckas & Nie (2017) # First stages H0_1 = U # H^(0)_1 H1_1 = U # Second stages H0_2 = U + A0_21 * f(t, H0_1)*dt + B0_21 * g(t, H1_1)*I10/dt H1_2 = U + A1_21 * f(t, H0_1)*dt + B1_21 * g(t, H1_1)*np.sqrt(dt) # Third stages H0_3 = U H1_3 = U + A1_31 * f(t, H0_1) * dt + B1_31 * g(t, H1_1) * np.sqrt(dt) # Fourth stages H0_4 = U H1_4 = U + A1_43 * f(t, H0_3) * dt + (B1_41 * g(t, H1_1) + B1_42 * g(t+c1_2*dt, H1_2) + B1_43 * g(t+c1_3*dt, H1_3)) * np.sqrt(dt) # Construct next position # Eq. (2) and Table 1 in Rackauckas & Nie (2017) U_ = U + (alpha1*f(t, H0_1) + alpha2*f(t+c0_2*dt, H0_2))*dt \ + (beta1_1*I1 + beta2_1*I11/np.sqrt(dt) + beta3_1*I10/dt ) * g(t, H1_1) \ + (beta1_2*I1 + beta2_2*I11/np.sqrt(dt) + beta3_2*I10/dt ) * g(t + c1_2*dt, H1_2) \ + (beta1_3*I1 + beta2_3*I11/np.sqrt(dt) + beta3_3*I10/dt ) * g(t + c1_3*dt, H1_3) \ + (beta4_4*I111/dt ) * g(t + c1_4*dt, H1_4) # Calculate error estimate # Eq. 
(9) and Table 1 in Rackauckas & Nie (2017) E = -dt*(f(t, H0_1) + f(t + c0_2*dt, H0_2))/6 \ + (beta3_1*I10/dt + beta4_1*I111/dt)*g(t, H1_1) \ + (beta3_2*I10/dt + beta4_2*I111/dt)*g(t + c1_2*dt, H1_2) \ + (beta3_3*I10/dt + beta4_3*I111/dt)*g(t + c1_3*dt, H1_3) \ + (beta4_4*I111/dt)*g(t + c1_4*dt, H1_4) # Return next position and error return U_, E Rackauckas & Nie (2017): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5844583/pdf/nihms920388.pdf | So, I guess I found the first half of the answer to my question: The code for the ESRK1 method appears to be found in two places in StochasticDiffEq.jl: The coefficients from the paper (although they are now called SRIW1 instead of ESRK1) here: https://github.com/SciML/StochasticDiffEq.jl/blob/9d8eb5503f1d78cdb0de76691af2a89c20085486/src/tableaus.jl#L40 and the method (which can also work with other coefficients) here: https://github.com/SciML/StochasticDiffEq.jl/blob/9d8eb5503f1d78cdb0de76691af2a89c20085486/src/perform_step/sri.jl#L58 It's not super readable, at least not to a non-Julia programmer like me, as it's making use of some advanced features, but I think I'll be able to work it out with a bit of patience. Update: By translating the Julia code into Python I was able to write an implementation that at least seems to work in the sense that I observe an order 1.5 convergence in the strong sense when I'm testing it as if it was a fixed-step integrator. In case it may be of use to anyone else, here is the line-for-line translation. # c₀ = [0; 3//4; 0; 0] c0 = np.array([0, 3/4, 0, 0]) # c₁ = [0; 1//4; 1; 1//4] c1 = np.array([0, 1/4, 1, 1/4]) # A₀ = [0 0 0 0 # 3//4 0 0 0 # 0 0 0 0 # 0 0 0 0] A0 = np.array([ [0, 0, 0, 0], [3/4, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], ]) # A₁ = [0 0 0 0 # 1//4 0 0 0 # 1 0 0 0 # 0 0 1//4 0] A1 = np.array([ [0, 0, 0, 0], [1/4, 0, 0, 0], [1, 0, 0, 0], [0, 0, 1/4, 0], ]) # B₀ = [0 0 0 0 # 3//2 0 0 0 # 0 0 0 0 # 0 0 0 0] B0 = np.array([ [0, 0, 0, 0], [3/2, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], ]) # B₁ = [0 0 0 0 # 1//2 0 0 0 # -1 0 0 0 # -5 3 1//2 0] B1 = np.array([ [0, 0, 0, 0], [1/2, 0, 0, 0], [-1, 0, 0, 0], [-5, 3, 1/2, 0], ]) # α = [1//3; 2//3; 0;0] alpha = np.array([ 1/3, 2/3, 0, 0]) # β₁ = [-1; 4//3; 2//3; 0] beta1 = np.array([-1, 4/3, 2/3, 0]) # β₂ = -[1; -4//3; 1//3; 0] beta2 = np.array([-1, 4/3, -1/3, 0]) # β₃ = [2; -4//3; -2//3; 0] beta3 = np.array([ 2, -4/3, -2/3, 0]) # β₄ = [-2; 5//3; -2//3; 1] beta4 = np.array([-2, 5/3, -2/3, 1]) def ESRK1(U, t, dt, f, g, dW=None, dZ=None): # sqrt3 = sqrt(3one(eltype(W.dW))) sqrt3 = np.sqrt(3) # chi1 = (W.dW.^2 - abs(dt))/2integrator.sqdt #I_(1,1)/sqrt(h) chi1 = (dW**2 - np.abs(dt)) / (2*np.sqrt(dt)) # chi2 = (W.dW + W.dZ/sqrt3)/2 #I_(1,0)/h chi2 = (dW + dZ/sqrt3)/2 # chi3 = (W.dW.^3 - 3W.dW*dt)/6dt #I_(1,1,1)/h chi3 = (dW**3 - 3*dW*dt) / (6*dt) # for i=1:stages # fill!(H0[i],zero(eltype(integrator.u))) # fill!(H1[i],zero(eltype(integrator.u))) # end stages = 4 H0 = np.zeros(stages) H1 = np.zeros(stages) # for i = 1:stages # fill!(A0temp,zero(eltype(integrator.u))) # fill!(B0temp,zero(eltype(integrator.u))) # fill!(A1temp,zero(eltype(integrator.u))) # fill!(B1temp,zero(eltype(integrator.u))) for i in range(stages): A0temp = 0.0 B0temp = 0.0 A1temp = 0.0 B1temp = 0.0 # for j = 1:i-1 # integrator.f(ftemp,H0[j],p,t + c₀[j]*dt) # integrator.g(gtemp,H1[j],p,t + c₁[j]*dt) # @.. A0temp = A0temp + A₀[j,i]*ftemp # @.. B0temp = B0temp + B₀[j,i]*gtemp # @.. A1temp = A1temp + A₁[j,i]*ftemp # @.. 
B1temp = B1temp + B₁[j,i]*gtemp # end for j in range(i): ftemp = f(H0[j], t+c0[j]*dt) gtemp = g(H1[j], t+c1[j]*dt) A0temp = A0temp + A0[i,j]*ftemp B0temp = B0temp + B0[i,j]*gtemp A1temp = A1temp + A1[i,j]*ftemp B1temp = B1temp + B1[i,j]*gtemp # @.. H0[i] = uprev + A0temp*dt + B0temp*chi2 # @.. H1[i] = uprev + A1temp*dt + B1temp*integrator.sqdt # end H0[i] = U[0] + A0temp*dt + B0temp*chi2 H1[i] = U[0] + A1temp*dt + B1temp*np.sqrt(dt) # fill!(atemp,zero(eltype(integrator.u))) # fill!(btemp,zero(eltype(integrator.u))) # fill!(E₂,zero(eltype(integrator.u))) # fill!(E₁temp,zero(eltype(integrator.u))) atemp = 0.0 btemp = 0.0 E2 = 0.0 E1temp = 0.0 # for i = 1:stages # integrator.f(ftemp,H0[i],p,t+c₀[i]*dt) # integrator.g(gtemp,H1[i],p,t+c₁[i]*dt) # @.. atemp = atemp + α[i]*ftemp # @.. btemp = btemp + (β₁[i]*W.dW + β₂[i]*chi1)*gtemp # @.. E₂ = E₂ + (β₃[i]*chi2 + β₄[i]*chi3)*gtemp # if i <= error_terms # @.. E₁temp += ftemp # end for i in range(stages): ftemp = f(H0[i], t+c0[i]*dt) gtemp = g(H1[i], t+c1[i]*dt) atemp = atemp + alpha[i]*ftemp btemp = btemp + (beta1[i]*dW + beta2[i]*chi1)*gtemp E2 = E2 + (beta3[i]*chi2 + beta4[i]*chi3)*gtemp # @.. u = uprev + (dt*atemp + btemp) + E₂ return U + (dt*atemp + btemp) + E2 | 3 | 1 |
77,511,828 | 2023-11-19 | https://stackoverflow.com/questions/77511828/qgraphicsview-dragmode-with-middle-mouse-button | I encountered the problem indicated in the title. The only solutions I found were quite old and perhaps something has changed since then and now it is possible to make the DragMode.ScrollHandDrag action for the middle mouse key? class InfiniteCanvas(QGraphicsView): def __init__(self, parent=None): super(InfiniteCanvas, self).__init__(parent) self.setAcceptDrops(True) self.setScene(QGraphicsScene(self)) self.setRenderHint(QPainter.Antialiasing) self.setRenderHint(QPainter.SmoothPixmapTransform) self.setDragMode(QGraphicsView.DragMode.ScrollHandDrag) self.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff) self.setVerticalScrollBarPolicy(Qt.ScrollBarAlwaysOff) self.left_mouse_active = False screen = QApplication.primaryScreen() screen_size = screen.size() width = screen_size.width() * 0.7 height = screen_size.height() * 0.7 self.setGeometry(self.x(), self.y(), width, height) self.setStyleSheet(f"background-color: #2a2a2a;") def mousePressEvent(self, event): if event.button() == Qt.MidButton: self.viewport().setCursor(Qt.ClosedHandCursor) self.original_event = event handmade_event = QMouseEvent(QEvent.MouseButtonPress,QPointF(event.pos()),Qt.LeftButton,event.buttons(),Qt.KeyboardModifiers()) self.mousePressEvent(handmade_event) def dragEnterEvent(self, event): if event.mimeData().hasUrls(): event.acceptProposedAction() def dragMoveEvent(self, event): if event.mimeData().hasUrls(): event.acceptProposedAction() if __name__ == "__main__": app = QApplication(sys.argv) view = InfiniteCanvas() view.show() sys.exit(app.exec()) This is the sample code I use in the mousePressEvent method however it completely rules out the possibility of using the left mouse button for anything else. | You are trying to call the default mousePressEvent with the synthesized left click event, but since you overrode that function without ever calling the base implementation, nothing will happen. When you do the following: def mousePressEvent(self, event): if event.button() == Qt.MidButton: self.viewport().setCursor(Qt.ClosedHandCursor) self.original_event = event handmade_event = QMouseEvent(...) self.mousePressEvent(handmade_event) then the same mousePressEvent() will be called again from itself, but since now the button is the left one, the block within if event.button() == Qt.MidButton is skipped and there's nothing else to do after that: the event is completely ignored, because you are overriding its expected behavior. Your attempt to fix that with dragEnterEvent() and dragMoveEvent() is pointless and wrong, because those handlers are related to actual drag and drop, which is a different type of event management, used to drag objects that may eventually be dropped onto other components, even between different applications. 
The correct procedure would be to add an else clause and call super() with the default event handler: def mousePressEvent(self, event): if event.button() == Qt.MouseButton.MiddleButton: handmade_event = QMouseEvent( QEvent.Type.MouseButtonPress, event.position(), Qt.MouseButton.LeftButton, event.buttons(), event.modifiers()) super().mousePressEvent(handmade_event) else: super().mousePressEvent(event) Note the changes done above: There is no need to change the cursor, since the call to mousePressEvent() with the left button will do that on its own when the ScrollHandDrag mode is set; Since Qt6, event.pos() has been deprecated, and you should use event.position() instead (which already is a QPointF); Since Qt6 both PySide and PyQt use real Python enums, which need their specific namespaces; PySide6 has a "forgiveness mode" that still allows the old syntax, but its usage is now discouraged (as annoying as it can be); MidButton has been virtually considered obsolete for years, and it's deprecated since 5.15; the correct name is MiddleButton; Then, QGraphicsView assumes the scrolling state set from the left button event, and it ignores the actual buttons() in mouseMoveEvent(), allowing scrolling movements even if the left button is not actually pressed while moving. In reality, though, we should still theoretically override that too, similarly to what done before. Be aware of that, in case it will stop working for you in a future Qt version. Finally, we must do the same in the mouseReleaseEvent() in order to tell the internal state of graphics view that the mouse button has been released (even if virtually), which will also restore the cursor. def mouseReleaseEvent(self, event): if event.button() == Qt.MouseButton.MiddleButton: handmade_event = QMouseEvent( QEvent.Type.MouseButtonRelease, event.position(), Qt.MouseButton.LeftButton, event.buttons(), event.modifiers()) super().mouseReleaseEvent(handmade_event) else: super().mouseReleaseEvent(event) Be aware that all the above will never work as soon as the scene rect shown on the the view can fit its geometry, because the ScrollHandDrag only works within the scroll bar range (the fact that you made them invisible is irrelevant). If you actually want to be able to scroll no matter the scene rect size (which is what you probably need, given the name of your class), you cannot do it with the above. Instead, you should set a scene rect on the view that is large enough to allow (virtually) infinite scrolling, ensure that you properly set the transformationAnchor to NoAnchor, and then call translate() in mouseMoveEvent() with the delta of the current mouse movement. from random import randrange from PyQt6.QtCore import * from PyQt6.QtGui import * from PyQt6.QtWidgets import * class InfiniteCanvas(QGraphicsView): _isScrolling = _isSpacePressed = False def __init__(self, parent=None): super(InfiniteCanvas, self).__init__(parent) scene = QGraphicsScene(self) for _ in range(10): item = scene.addRect(randrange(1000), randrange(1000), 50, 50) item.setFlag(item.GraphicsItemFlag.ItemIsMovable) self.setScene(scene) self.setRenderHints( QPainter.RenderHint.Antialiasing | QPainter.RenderHint.SmoothPixmapTransform ) # Important! Without this, self.translate() will not work! 
self.setTransformationAnchor(self.ViewportAnchor.NoAnchor) self.setHorizontalScrollBarPolicy(Qt.ScrollBarPolicy.ScrollBarAlwaysOff) self.setVerticalScrollBarPolicy(Qt.ScrollBarPolicy.ScrollBarAlwaysOff) screen = QApplication.screenAt(QCursor.pos()) geo = screen.availableGeometry() geo.setSize(geo.size() * .7) geo.moveCenter(screen.availableGeometry().center()) self.setGeometry(geo) self.setSceneRect(-32000, -32000, 64000, 64000) self.fitInView(scene.itemsBoundingRect()) def mousePressEvent(self, event): if ( event.button() == Qt.MouseButton.MiddleButton or self._isSpacePressed and event.button() == Qt.MouseButton.LeftButton ): self._isScrolling = True self.viewport().setCursor(Qt.CursorShape.ClosedHandCursor) self.scrollPos = event.position() else: super().mousePressEvent(event) def mouseMoveEvent(self, event): if self._isScrolling: newPos = event.position() delta = newPos - self.scrollPos t = self.transform() self.translate(delta.x() / t.m11(), delta.y() / t.m22()) self.scrollPos = newPos else: super().mouseMoveEvent(event) def mouseReleaseEvent(self, event): if self._isScrolling: self._isScrolling = False if self._isSpacePressed: self.viewport().setCursor(Qt.CursorShape.OpenHandCursor) else: self.viewport().unsetCursor() super().mouseReleaseEvent(event) def keyPressEvent(self, event): if event.key() == Qt.Key.Key_Space and not event.isAutoRepeat(): self._isSpacePressed = True self.viewport().setCursor(Qt.CursorShape.OpenHandCursor) else: super().keyPressEvent(event) def keyReleaseEvent(self, event): if event.key() == Qt.Key.Key_Space and not event.isAutoRepeat(): self._isSpacePressed = False if not self._isScrolling: self.viewport().unsetCursor() else: super().keyReleaseEvent(event) if __name__ == '__main__': import sys app = QApplication(sys.argv) view = InfiniteCanvas() view.show() sys.exit(app.exec()) As an unrelated note, you shall never, ever set generic QSS properties on complex widgets like scroll areas (which is the case of QGraphicsView) or combo boxes, as you did with this line: self.setStyleSheet(f"background-color: #2a2a2a;") Doing so will propagate that property to any child widget (including scroll bars or context menus), which is very problematic for complex widgets like scroll areas or comboboxes. You should always use proper selector types instead, unless you are completely sure that you're applying the QSS to a simple standalone widget (like a button or a label). Also see these related posts: How to pan beyond the scrollbar range in a QGraphicsview? Allow QGraphicsView to move outside scene | 2 | 4 |
77,502,026 | 2023-11-17 | https://stackoverflow.com/questions/77502026/maximum-subarray-sum-with-at-most-k-elements | Maximum subarray sum with at most K elements : Given an array of integers and a positive integer k, find the maximum sum of a subarray of size less than or equal to k. A subarray is a contiguous part of the array. For example, if the array is [5, -3, 5, 5, -3, 5] and k is 3, then the maximum subarray sum with at most k elements is 10, which is obtained by the subarray [5, 5] My initial thought was to use the Kadane's algorithm with a sliding window of K. Below is the code: maxi = nums[0] max_so_far = 0 prev = 0 for i in range(len(nums)): max_so_far += nums[i] if (i - prev) >= k: max_so_far -= nums[prev] prev += 1 maxi = max(maxi, max_so_far) if max_so_far < 0: max_so_far = 0 prev = i + 1 return maxi but this approach won't work for the test case - nums = [5, -3, 5, 5, -3, 5] k = 3 Edit: I found the solution - Prefix Sum + Monotonic Deque - O(n) Time complexity def maxSubarraySum(self, nums, k) -> int: prefix_sum = [0] * len(nums) prefix_sum[0] = nums[0] for i in range(1, len(nums)): prefix_sum[i] = prefix_sum[i-1] + nums[i] q = deque() for i in range(k): while len(q) > 0 and prefix_sum[i] >= prefix_sum[q[-1]]: q.pop() q.append(i) maxi = max(prefix_sum[:k]) for i in range(1, len(nums)): if q[0] < i: q.popleft() if i + k - 1 < len(nums): while len(q) > 0 and prefix_sum[i + k - 1] >= prefix_sum[q[-1]]: q.pop() q.append(i + k - 1) maxi = max(maxi, prefix_sum[q[0]] - prefix_sum[i-1]) return maxi | There is an O(n) solution at https://cs.stackexchange.com/a/151327/10147 We can have O(n log n) with divide and conquer. Consider left and right halves of the array: the solution is either in (1) the left half exclusively, (2) the right half exclusively, or (3) a suffix of the left combined with a prefix of the right. To solve (3) in O(n), iterate from the middle to the left, recording for each index the higher of the highest seen or the total sum. Then iterate to the right and add a similar record for prefix length l with the recorded value for index k - l (or the longest possible if k-l is out of bounds) in the first iteration. For the example, [5, -3, 5, 5, -3, 5] k = 3, we have: [5, -3, 5, 5, -3, 5] 7 5 5 <--- ---> 5 5 7 ^-----^ best = 5 + 5 = 10 | 3 | 1 |
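The answer above describes the divide-and-conquer approach only in prose, so here is a hedged Python sketch of it (my own illustration, not the answerer's code; the function name max_subarray_at_most_k is made up). It assumes a non-empty result and returns the best sum over windows of length 1 to k.

```python
def max_subarray_at_most_k(nums, k):
    # O(n log n) divide and conquer: the best window lies entirely in the left
    # half, entirely in the right half, or straddles the midpoint.
    def solve(lo, hi):
        if hi - lo == 1:
            return nums[lo]
        mid = (lo + hi) // 2
        best = max(solve(lo, mid), solve(mid, hi))

        # Cross case, first pass: best suffix sum of the left half using
        # at most l elements (at least 1), for l up to k - 1.
        left_len = mid - lo
        best_suffix = [float('-inf')] * (min(k - 1, left_len) + 1)
        running = 0
        for l in range(1, len(best_suffix)):
            running += nums[mid - l]
            best_suffix[l] = max(best_suffix[l - 1], running)

        # Second pass: extend each non-empty right prefix of length r with the
        # best left suffix that still keeps the total length at most k.
        prefix = 0
        for r in range(1, min(k - 1, hi - mid) + 1):
            prefix += nums[mid + r - 1]
            best = max(best, prefix + best_suffix[min(k - r, left_len)])
        return best

    return solve(0, len(nums))


print(max_subarray_at_most_k([5, -3, 5, 5, -3, 5], 3))  # 10
```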
77,509,605 | 2023-11-19 | https://stackoverflow.com/questions/77509605/mt19937-generator-in-c-and-numpy-generate-different-numbers | I am trying to reproduce some C++ code in Python that involves random number generations. The C++ code uses the MT19937 generator as follows: #include <random> #include <iostream> int main() { std::mt19937 generator(1234); std::uniform_real_distribution<double> distribution(0.0, 1.0); for (int i = 0; i < 10; ++i) { std::cout << distribution(generator) << std::endl; } return 0; } The Python version is (with NumPy 1.23.3) import numpy as np rng = np.random.Generator(np.random.MT19937(1234)) for _ in range(10): print(rng.random()) In both cases, the random seed is set to 1234. But the two produce different outputs on my machine (macOS 14.0 ARM). The C++ code outputs 0.497664 0.817838 0.612112 0.77136 0.86067 0.150637 0.198519 0.815163 0.158815 0.116138 while the Python code outputs 0.12038356302504949 0.4037014194964441 0.8777026256367374 0.9565788014497463 0.42646002242298486 0.28304326113156464 0.9009410688498408 0.830833142531224 0.6752899264264728 0.3977176012599666 Why do the two MT19937 generators produce different sequences despite the same seed? How (if possible) can I make them the same? | The Mersenne Twister generator has a defined sequence for any seed that you give it. There are also test values that you can use to verify that the generator you use is conforming. The distributions are on the other hand not standardized and may produce different values in different implementations. Remove the distribution to compare the generators. Note also that std::mt19937 is a 32 bit generator and it's not obvious (to me) if the numpy version is a 32 or 64 bit generator. You may want to compare std::mt19937_64 with the numpy implementation - still without involving a distribution of course. | 3 | 4 |
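To follow the answer's suggestion of comparing the generators without a distribution, the C++ side is simply `generator()` in a loop; a hedged sketch of the NumPy side is below. It assumes your NumPy version exposes `BitGenerator.random_raw` (a helper intended for testing). Note also that `np.random.MT19937(1234)` seeds its state through `SeedSequence`, whereas `std::mt19937(1234)` uses the reference MT seeding routine, so even the raw 32-bit streams are not expected to line up unless the same state is installed on both sides.

```python
import numpy as np

bg = np.random.MT19937(1234)
# random_raw() (if available) returns the raw words of the underlying
# generator with no distribution applied -- useful for comparing streams.
print(bg.random_raw(10))
```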
77,505,772 | 2023-11-18 | https://stackoverflow.com/questions/77505772/finding-the-largest-groups-with-conditions-after-using-groupby | This is my dataframe: import pandas as pd df = pd.DataFrame( { 'a': ['a', 'a', 'a', 'c', 'b', 'a', 'a', 'b', 'c'], 'b': [20, 20, 20,-70, 70, -10, -10, -1, -1], } ) And this is the output that I want. I want to create a dataframe with two rows: direction length sum 0 long 3 60 1 short 2 -20 I want to get the largest streak of positive (long) and negative(short) numbers in b. And then get the length and sum of b values in that streak and create a new dataframe. Note that if for example the largest streak is two and there are more than one streak with that size, I want the streak that its sum of b is more that the rest. In my df, the largest long streak is the first three values. And the largest short streak is two. Since there are more than one streak with that size, I want the one that its sum is more. So I want rows 5 and 6. This is what I have tried but I don't know how to follow up: df['streak'] = df['b'].ne(df['b'].shift()).cumsum() df['size'] = df.groupby('streak')['b'].transform('size') a b streak size 0 a 20 1 3 1 a 20 1 3 2 a 20 1 3 3 c -70 2 1 4 b 70 3 1 5 a -10 4 2 6 a -10 4 2 7 b -1 5 2 8 c -1 5 2 | You need to use several groupby operations, one to find the streaks, one to aggregate the length and sum, and one to filter the output with idxmax on the largest length: group = df['b'].ne(df['b'].shift()).cumsum() out = (df .assign(direction=np.sign(df['b'])) .replace({'direction': {1: 'long', -1: 'short'}}) .groupby(['a', 'direction', group], as_index=False) .agg(length=('b', 'count'), sum=('b', 'sum')) .sort_values(by='sum', key=abs, ascending=False) .loc[lambda d: d.groupby('direction')['length'].idxmax(), ['direction', 'length', 'sum']] ) Output: direction length sum 0 long 3 60 1 short 2 -20 Intermediate before the final loc: a direction length sum 0 a long 3 60 1 a short 2 -20 2 b long 1 70 3 b short 1 -1 4 c short 1 -70 5 c short 1 -1 | 2 | 4 |
77,503,300 | 2023-11-17 | https://stackoverflow.com/questions/77503300/cause-of-this-error-no-list-matches-the-given-query | The user can add that product to the watch list by clicking on the add button, and then the add button will change to Rimo. And this time by clicking this button, the product will be removed from the list. There should be a link on the main page that by clicking on it, all the products in the watch list will be displayed, and by clicking on the detail button, you can see the details of the product, and by clicking on the remove button, you can remove them from the list. models.py class User(AbstractUser): pass class List(models.Model): choice = ( ('d', 'Dark'), ('s', 'Sweet'), ) user = models.CharField(max_length=64) title = models.CharField(max_length=64) description = models.TextField() category = models.CharField(max_length=64) first_bid = models.IntegerField() image = models.ImageField(upload_to="img/", null=True) image_url = models.CharField(max_length=228, default = None, blank = True, null = True) status = models.CharField(max_length=1, choices= choice) active_bool = models.BooleanField(default = True) class Watchlist(models.Model): user = models.CharField(max_length=64) watch_list = models.ForeignKey(List, on_deleted= models.CASCADE) views.py: def product_detail(request, product_id): product = get_object_or_404(List, pk=product_id) comments = Comment.objects.filter(product=product) if request.method == 'POST': # comment if 'comment' in request.POST: user = request.user if user.is_authenticated: content = request.POST.get('content') comment = Comment(product=product, user=user, content=content) comment.save() context = { 'product': product, 'comments': comments, } return render(request, 'auctions/product_detail.html', context) @login_required(login_url="login") def watchlist(request, username): products = Watchlist.objects.filter(user = username) return render(request, 'auctions/watchlist.html', {'products': products}) @login_required(login_url="login") def add(request, productid): #watch = Watchlist.objects.filter(user = #request.user.username) watchlist_product = get_object_or_404(Watchlist, pk=productid) for items in watchlist_product: if int(items.watch_list.id) == int(productid): return watchlist(request, request.user.username) product = get_object_or_404(List, pk=watchlist_product.watch_list) new_watch = Watchlist(product, user = request.user.username) new_watch.save() messages.success(request, "Item added to watchlist") return product_detail(request, productid) @login_required(login_url="login") def remove(request, pid): #remove_id = request.GET[""] list_ = Watchlist.objects.get(pk = pid) messages.success(request, f"{list_.watch_list.title} is deleted from your watchlist.") list_.delete() return redirect("index") product_deyail.html(Only the parts related to the question): <form method= "get" action = "{% url 'add' product.id %}"> <button type = "submit" value = "{{ product.id }}" name = "productid" >Add to Watchlist</button> </form> watchlist.html: {% extends "auctions/layout.html" %} {% block body %} {% if products %} {% for product in products %} <img src= {{ product.image.url }} alt = " {{product.title}}"><br> <a><a>Product:</a>{{ product.title }}</a><br> <a><a>Category: </a>{{ product.category }}</a><br> <a><a>Frist Bid: </a> {{ product.first_bid }} $ </a> <br> <a href="{% url 'product_detail' product.id %}">View Product</a> <form action="{% url 'remove' product.id %}" method="post"> {% csrf_token %} <button type="submit">Remove</button> </form> {% 
endfor %} {% else %} <p>No products in watchlist</p> {% endif %} {% endblock %} layout.html(Only the parts related to the question & To display the linked name): <ul class="nav"> <li class="nav-item"> <a class="nav-link" href="{% url 'index' %}">Active Listings</a> </li> <li class="nav-item"> <a class="nav-link" href="{% url 'watchlist' user.username %}">My WatchList</a> </li> {% if user.is_authenticated %} <li class="nav-item"> <a class="nav-link" href="{% url 'logout' %}">Log Out</a> </li> {% else %} <li class="nav-item"> <a class="nav-link" href="{% url 'login' %}">Log In</a> </li> <li class="nav-item"> <a class="nav-link" href="{% url 'register' %}">Register</a> </li> {% endif %} urls.py: path('watchlist/<str:username>', views.watchlist, name='watchlist'), path('add/<int:productid>', views.add, name='add'), path('remove/<int:pid>', views.remove, name='remove'), error: Reverse for 'product_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['product_detail/(?P<product_id>[0-9]+)/\Z'] watchlist.html: <a href="{% url 'product_detail' product_id %}">View Product</a> | The reason for your initial error is that <form method= "get" action = "{% url 'add' %}"> is creating a url with the parameter productid, which your path, path('add/', views.add, name='add'), does not handle. You could change it to path('add/<int:productid>', views.add, name='add') and change the view to def add(request, productid): Then you wouldn't need the product_id = request.POST.get('productid', False). The reason you get "Reverse for 'product_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['product_detail/(?P<product_id>[0-9]+)/\\Z']" is most likely because the line <button type = "submit" value = {{ product.id }} name = "productid" >Add to Watchlist</button> has the value unquoted, so it may not read it, thus sending an empty string back. Change it to <button type = "submit" value = "{{ product.id }}" name = "productid" >Add to Watchlist</button> | 2 | 2 |
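Pulling the answer's pieces together, a minimal sketch of the suggested fix looks like this (module names are the asker's; the remaining step is quoting the button's value attribute as value="{{ product.id }}" in the template):

```python
# urls.py -- capture the product id in the URL itself
from django.urls import path
from . import views

urlpatterns = [
    path('add/<int:productid>', views.add, name='add'),
]

# views.py -- the captured id arrives as a keyword argument, already an int
from django.contrib.auth.decorators import login_required

@login_required(login_url="login")
def add(request, productid):
    ...  # no request.POST.get('productid') lookup needed
```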
77,503,593 | 2023-11-17 | https://stackoverflow.com/questions/77503593/python-tabulate-tablefmt-rounded-outline-prints-3-spaces-instead-of-1-space-betw | How can i avoid extra spaces in the tabulate grid? rows = [ ["A1", "B2"], ["C3", "D4"], ["E5", "E6"], ] print(tabulate(rows, headers="firstrow", tablefmt='rounded_outline')) gives me 2 extra spaces in every cell ╭──────┬──────╮ │ A1 │ B2 │ ├──────┼──────┤ │ C3 │ D4 │ │ E5 │ E6 │ ╰──────┴──────╯ how can i solve it to get ╭────┬────╮ │ A1 │ B2 │ ├────┼────┤ │ C3 │ D4 │ │ E5 │ E6 │ ╰────┴────╯ | You can leverage the MIN_PADDING constant. import tabulate rows = [ ["A1", "B2"], ["C3", "D4"], ["E5", "E6"], ] tabulate.MIN_PADDING = 0 print(tabulate.tabulate(rows, headers="firstrow", tablefmt="rounded_outline")) Outputs: ╭────┬────╮ │ A1 │ B2 │ ├────┼────┤ │ C3 │ D4 │ │ E5 │ E6 │ ╰────┴────╯ | 3 | 0 |
77,503,109 | 2023-11-17 | https://stackoverflow.com/questions/77503109/how-to-fix-invalidmanifesterror-for-pre-commit-with-black | Running python pre-commit with black latest version 23.11.0 leads to a wired InvalidManifestError. snippet from .pre-commit-config.yaml repos: - repo: https://github.com/psf/black rev: 23.11.0 hooks: - id: black types: [] files: ^.*.pyi?$ # format .py and .pyi files` output message: │ │ stdout = 'An error has occurred: InvalidManifestError: \n==> File │ │ │ │ /Users/robot/.cache/pre-c'+329 │ │ │ │ stdout_list = [ │ │ │ │ │ 'An error has occurred: InvalidManifestError: \n', │ │ │ │ │ '==> File │ │ │ │ /Users/robot/.cache/pre-commit/repoxhmwyits/.pre-commit-hooks.yaml\n', │ │ │ │ │ "==> At Hook(id='black')\n", │ │ │ │ │ '==> At key: stages\n', │ │ │ │ │ '==> At index 0\n', │ │ │ │ │ '=====> Expected one of commit, commit-msg, manual, merge-commit, │ │ │ │ post-checkout, '+86, │ │ │ │ │ 'Check the log at /Users/robot/.cache/pre-commit/pre-commit.log\n' │ │ │ │ ] | you're using an outdated version of pre-commit. you need to be using at least version 3.2.0 which introduced the renamed stages disclaimer: I wrote pre-commit | 6 | 8 |
77,502,500 | 2023-11-17 | https://stackoverflow.com/questions/77502500/is-it-possible-with-pathlib-to-set-file-modification-time | I need to rewrite a lot of files, but I need them to keep their modification time as I need them indexed by time. I am aware that I could use os, but as I like pathlib's object oriented way, I would prefer to use pathlib if that is an option. Pathlib-docs mention getting the st_mtime, but not setting it. I only found this description at tutorialspoint.com (last example) which uses an assignment method that seems not to work: filepath.Path.stat().st_ctime = timestamp Is it possible with pathlib or must I use os? | No, it's not. pathlib is for manipulating paths, not files. It's useful to think of it as a convenience wrapper around path string. Path.stat() is just a convenience method, doing nothing but calling os.stat() on path. So yes, you have to use os.utime(). | 3 | 2 |
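A minimal sketch of the os.utime() route the answer points to, keeping pathlib for the path handling (the file name and the rewrite step are hypothetical):

```python
import os
from pathlib import Path

path = Path("data.csv")               # hypothetical file being rewritten
stat = path.stat()
old_times = (stat.st_atime, stat.st_mtime)

path.write_text(path.read_text().replace("foo", "bar"))  # rewrite the file

os.utime(path, old_times)             # restore the original access/modification times
```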
77,502,055 | 2023-11-17 | https://stackoverflow.com/questions/77502055/calling-simple-hello-world-function-from-a-compiled-c-dll-from-python-results | The C code: // my_module.c #include <stdio.h> __declspec(dllexport) void hello() { printf("Hello World!\n"); } __declspec(dllexport) int add_numbers(int a, int b) { return a + b; } // entry point int main() { return 0; } The build script: # build.py from setuptools._distutils.ccompiler import new_compiler compiler = new_compiler() compiler.compile(["my_module.c"]) compiler.link_shared_lib(["my_module.obj"], "my_module") The main script: # main.py import ctypes my_module = ctypes.CDLL("./my_module.dll") my_module.add_numbers.argtypes = ctypes.c_int, ctypes.c_int my_module.add_numbers.restype = ctypes.c_int my_module.hello.argtypes = () my_module.hello.restype = None result = my_module.add_numbers(3, 4) print(type(result), result) my_module.hello() After running python build.py , the dll is created without issues. However, when running python main.py , the "add_numbers" function works, but calling the "hello" function results in "OSError: exception: access violation writing 0x0000000000002C44". Am I missing something? Do I somehow need to tell the compiler to include the "stdio.h" header? | it seems distutils is linking the msvc CRT incorrectly. you shouldn't import anything with an underscore such as _distutils, as it is not a part of the public API and you shouldn't use it. since this is a simple windows dll you could call cl.exe directly and compile it. (make sure you open x64 Native Tools Command Prompt for VS 2022 command prompt before you do) cl.exe /LD my_module.c which will work, but if you have more files then you should probably create a cmake project for it and use it to build your C dll from python. a quick look at the dependencies of the one generated from distutils. vs the one from just cl.exe directly. copying all extra dependencies from the windows sdk to the dll folder should get it to work, but this is not the correct approach. | 2 | 3 |
77,500,616 | 2023-11-17 | https://stackoverflow.com/questions/77500616/product-between-multiindex-and-a-list | I have a MultiIndex object with 2 levels: import pandas as pd mux = pd.MultiIndex.from_tuples([(1,1), (2,3)]) >>> MultiIndex([(1, 1), (2, 3)], ) I want to multiply it by a list l=[4,5], I tried pd.MultiIndex.from_product([mux.values, [4,5]]) >>> MultiIndex([((1, 1), 4), ((1, 1), 5), ((2, 3), 4), ((2, 3), 5)], ) I managed to get my expected result with from itertools import product pd.MultiIndex.from_tuples([(*a, b) for a, b in product(mux, [4,5])]) >>> MultiIndex([(1, 1, 4), (1, 1, 5), (2, 3, 4), (2, 3, 5)], ) Is there a better way to do this operation ? | I think your approach is quite reasonable. Another option using a cross-merge with an intermediate DataFrame format: pd.MultiIndex.from_frame(mux.to_frame().merge(pd.Series([4, 5], name=2), how='cross')) Output: MultiIndex([(1, 1, 4), (1, 1, 5), (2, 3, 4), (2, 3, 5)], names=[0, 1, 2]) | 3 | 3 |
77,493,160 | 2023-11-16 | https://stackoverflow.com/questions/77493160/why-we-cannot-use-alone-starred-target-while-assignment-in-python | I was going through the python docs on simple assignment. I found below from the docs. Assignment of an object to a target list, optionally enclosed in parentheses or square brackets, is recursively defined as follows. If the target list is a single target with no trailing comma, optionally in parentheses, the object is assigned to that target. Else: If the target list contains one target prefixed with an asterisk, called a “starred” target: The object must be an iterable with at least as many items as there are targets in the target list, minus one. The first items of the iterable are assigned, from left to right, to the targets before the starred target. The final items of the iterable are assigned to the targets after the starred target. A list of the remaining items in the iterable is then assigned to the starred target (the list can be empty). Else: The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets. What I understood is that if the target list contains starred target then object, on RHS, must be iterable. So while assignment, python first unpacks the object and assigns the items as per the above rule and then rest of the values is assigned to the starred target. Now, Considering above I have kept only starred target and was expecting the values on RHS ( which is a tuple) to be assigned to starred target. However, Python gives syntax error for this line. I am still trying to understand where I am lacking as it is nowhere mentioned that I can't use starred target alone. *a=1,2,3 But Below works. Please explain why here alone why alone starred target is working here ? [*a] = 1,2,3 | You can instead add a comma to make the unpacking assignment to an explicit tuple: *a, = 1, 2, 3 as pointed out in PEP-3132: It is also an error to use the starred expression as a lone assignment target, as in *a = range(5) This, however, is valid syntax: *a, = range(5) It is also pointed out in the same documentation that unpacking assignment to a list is semantically equivalent to unpacking assignment to a tuple: For example, if seq is a sliceable sequence, all the following assignments are equivalent if seq has at least two elements: a, b, c = seq[0], list(seq[1:-1]), seq[-1] a, *b, c = seq [a, *b, c] = seq You can also refer to the discussion with Guido van Rossum, the creator of Python, from the Python mailing list, for how the decision to disallow *a = range(5) was made: Also, what should this do? Perhaps the grammar could disallow it? *a = range(5) I say disallow it. That is ambiguous as to what your intentions are even if you know what '*' does for multiple assignment. My real point was that the PEP lacks precision here. It should list the exact proposed changes to Grammar/Grammar. -- --Guido van Rossum (home page: http://www.python.org/~guido/) as well as this follow-up discussion: Also, what should this do? Perhaps the grammar could disallow it? *a = range(5) I'm not so sure about the grammar, I'm currently catching it in the AST generation stage. Hopefully it's possible to only allow this if there's at least one comma? In any case the grammar will probably end up accepting *a in lots of places where it isn't really allowed and you'll have to fix all of those. That sounds messy; only allowing *a at the end seems a bit more manageable. 
But I'll hold off until I can shoot holes in your implementation. ;-) -- --Guido van Rossum (home page: http://www.python.org/~guido/) | 3 | 4 |
77,492,044 | 2023-11-16 | https://stackoverflow.com/questions/77492044/how-to-create-a-stripplot-with-multiple-subplots | This is my example data data = {'pro1': [1, 1, 1, 0], 'pro2': [0, 1, 1, 1], 'pro3': [0, 1, 0, 1], 'pro4': [0.2, 0.5, 0.3, 0.1]} df = pd.DataFrame(data) I want to make striplot in seaborn like this (but actually wrong when running): sns.stripplot(x=['pro1', 'pro2', 'pro3'], y='pro4', data=df) This is my alternative code: # Create a figure with two subplots that share the y-axis fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True) # List of column names l = ['pro1', 'pro2', 'pro3', 'pro4'] # Subplot 1: Positive values df1 = df.copy(deep=True) for i in l: # Set values to pro4 only when the corresponding pro1, pro2, pro3 is 1 df1[i] = df1.apply(lambda row: row['pro4'] if row[i] == 1 else None, axis=1) df1.drop(['pro4'], axis=1, inplace=True) sns.stripplot(data=df1, ax=ax1) ax1.set_title('Positive Values') # Add a title to the subplot # Subplot 2: Zero values df1 = df.copy(deep=True) for i in l: # Set values to pro4 only when the corresponding pro1, pro2, pro3 is 0 df1[i] = df1.apply(lambda row: row['pro4'] if row[i] == 0 else None, axis=1) df1.drop(['pro4'], axis=1, inplace=True) sns.stripplot(data=df1, ax=ax2) ax2.set_title('Zero Values') # Add a title to the subplot # Show the plots plt.show() Result: My questions are: "is there any more simple way to do for the same result like below?" sns.stripplot(x=['pro1', 'pro2', 'pro3'], y='pro4', hue = [0,1], data=df) | In my opinion, the easiest is to melt and catplot: import seaborn as sns sns.catplot(df.melt('pro4'), x='variable', y='pro4', hue='variable', col='value', kind='strip') Output: | 2 | 3 |
77,498,106 | 2023-11-16 | https://stackoverflow.com/questions/77498106/how-to-perform-a-left-join-on-two-pandas-dataframes-for-a-specific-use-case | Example Data: Here is a simplified representation of my dataframes: My first dataframe(df1) is like: col1 col2 col3 col4 col5 1 2 a 3 4 11 22 aa 33 44 111 222 aaa And my second dataframe(df2) is something like: col3 col4 col5 a 3 4 aa 332 442 aaa 333 444 And i want my merged dataframe (result_df) to look something like: col1 col2 col3 col4 col5 1 2 a 3 4 11 22 aa 33 44 11 22 aa 332 442 111 222 aaa 333 444 I attempted to use the pd.merge function with a left join: result_df = pd.merge(df1, df2, on='col3', how='left') result_df looks like: col1 col2 col3 col4_x col5_x col4_y col5_y 1 2 a 3 4 3 4 11 22 aa 33 44 332 442 111 222 aaa 333 444 I'm trying to understand why the resulting dataframe has additional columns and how to achieve the desired output. Any help or insights are greatly appreciated. | A proposition with merge/lreshape : mg = pd.merge(df1, df2, on="col3", how="left") grps = {c: [f"{c}_{s}" for s in ["x", "y"]] for c in df1.columns.intersection(df2.columns).drop("col3")} out = pd.lreshape(mg, grps).drop_duplicates().convert_dtypes() NB: The loop is really optional here and can be replaced with a hardcoded mapping of the common columns (except the one to join on, i.e, col3) between both DataFrames : grps = {'col4': ['col4_x', 'col4_y'], 'col5': ['col5_x', 'col5_y']} Output : print(out) col1 col2 col3 col4 col5 0 1 2 a 3 4 1 11 22 aa 33 44 3 11 22 aa 332 442 4 111 222 aaa 333 444 [4 rows x 5 columns] | 2 | 5 |
77,497,397 | 2023-11-16 | https://stackoverflow.com/questions/77497397/how-to-autogenerate-pydantic-field-value-and-not-allow-the-field-to-be-set-in-in | I want to autogenerate an ID field for my Pydantic model and I don't want to allow callers to provide their own ID value. I've tried a variety of approaches using the Field function, but the ID field is still optional in the initializer. class MyModel(BaseModel): item_id: str = Field(default_factory=id_generator, init_var=False, frozen=True) I've also tried using PrivateAttr instead of Field, but then the ID field doesn't show up when I call model_dump. This seems like a pretty common and simple use case, but I can't find anything in the docs for how to accomplish this. | Use a combination of a PrivateAttr field and a computed_field property: from uuid import uuid4 from pydantic import BaseModel, PrivateAttr, computed_field class MyModel(BaseModel): _id: str = PrivateAttr(default_factory=lambda: str(uuid4())) @computed_field @property def item_id(self) -> str: return self._id print(MyModel().model_dump()) | 3 | 5 |
77,490,803 | 2023-11-15 | https://stackoverflow.com/questions/77490803/draw-rectangles-in-a-circular-pattern-in-turtle-graphics | To make use of the turtle methods and functionalities, we need to import turtle. “turtle” comes packed with the standard Python package and need not be installed externally. The roadmap for executing a turtle program follows 3 steps: a- Import the turtle module. b- Create a turtle to control. c- Draw around using the turtle methods. i need to make this Instructions: You should use the shown colors for the rectangles. You can choose the side’s length/ width for the rectangle, keeping the general view of the output like the given diagram. Make sure to properly set the starting position (x & y) of your drawing, to maintain the above diagram (at the center). NOTE!! here is my code: from turtle import * # Set up the screen and turtle screen = Screen() t = Turtle() t.speed(1) # You can adjust the speed as needed # Define colors color_fill = "yellow" color_border = "blue" border_size = 5 gap_size = 10 rectangle_width = 50 # Adjust the width as needed rectangle_height = 100 # Adjust the height as needed circle_radius = 50 # Adjust the radius for the circular space # Function to draw a colored rectangle with a border def draw_rectangle_with_border(fill_color, border_color, border_size, width, height): # Draw the border t.pencolor(border_color) t.pensize(border_size) t.penup() t.goto(-width / 2, -height / 2) t.pendown() for _ in range(2): t.forward(width) t.left(90) t.forward(height) t.left(90) # Draw the fill t.fillcolor(fill_color) t.begin_fill() for _ in range(2): t.forward(width) t.left(90) t.forward(height) t.left(90) t.end_fill() # Set the starting position t.penup() t.goto(0, -rectangle_height / 2) # Draw the central circular space t.pencolor("white") # Set the color to match the background t.fillcolor("white") t.penup() t.goto(0, -circle_radius) t.pendown() t.begin_fill() t.circle(circle_radius) t.end_fill() # Calculate the total number of rectangles to form a complete circle num_rectangles = 8 angle = 360 / num_rectangles # Draw the circular pattern of rectangles around the central circular space for _ in range(num_rectangles): draw_rectangle_with_border(color_fill, color_border, border_size, rectangle_width, rectangle_height) t.penup() t.goto(0, -rectangle_height / 2) t.left(angle) t.forward(gap_size) # Close the window on click screen.exitonclick() and here is the output: the output i get and i am want to have this output : the wanted output | Your rectangles look pretty good, although you don't need to draw the fill and outline separately. 
All that's missing is moving forward and backward from the start point to the corner of each square: from turtle import Screen, Turtle def draw_rectangle_with_border(t, width, height): t.pendown() t.begin_fill() for _ in range(2): t.forward(height) t.left(90) t.forward(width) t.left(90) t.end_fill() t.penup() def draw_rectangles_in_circle(t): color_fill = "yellow" color_border = "blue" border_size = 9 rectangle_width = 60 rectangle_height = 90 circle_radius = 110 num_rectangles = 8 angle = 360 / num_rectangles t.pencolor(color_border) t.pensize(border_size) t.fillcolor(color_fill) t.penup() for _ in range(num_rectangles): t.forward(circle_radius) draw_rectangle_with_border(t, rectangle_width, rectangle_height) t.backward(circle_radius) t.left(angle) if __name__ == "__main__": turtle = Turtle() draw_rectangles_in_circle(turtle) Screen().exitonclick() Now, the actual image has a little bit of overlap from square to square, so you could do a mini-turn to make that happen and adjust to taste: # ... t.penup() t.right(5) for _ in range(num_rectangles): t.forward(circle_radius) t.left(5) draw_rectangle_with_border(t, rectangle_width, rectangle_height) t.right(5) t.backward(circle_radius) t.left(angle) # ... Note that the from turtle import * advice is questionable. This adds over 100 methods into the global namespace, making it easy to encounter aliasing bugs and confusion between instance and functional interfaces. You're only using Screen and Turtle from turtle, so it's easy to import what you need explicitly, as shown above. | 2 | 2 |
77,495,593 | 2023-11-16 | https://stackoverflow.com/questions/77495593/polars-how-to-bin-a-datetime-type-column | I am trying to bin a date column. I was able to do that fairly easy using Pandas library. But I am getting the following error with Polars: Traceback (most recent call last): File "\workspace\polars.py", line 4, in <module> pl.col("ts").cut( File "\venvs\polars\Lib\site-packages\polars\utils\deprecation.py", line 192, in wrapper return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "\venvs\polars\Lib\site-packages\polars\expr\expr.py", line 3716, in cut self._pyexpr.cut(breaks, labels, left_closed, include_breaks) TypeError: argument 'breaks': must be real number, not datetime.datetime My Code is: >>> df ┌────────────┐ │ ts │ │ --- │ │ date │ ╞════════════╡ │ 2021-10-01 │ │ 2021-11-01 │ │ 2022-03-01 │ └────────────┘ >>> start_year = df.select("ts").min().item() >>> end_year = df.select("ts").max().item() >>> df = df.with_columns( pl.col("ts").cut( pl.datetime_range(start_year, end_year, interval="3mo", eager=True), ).alias("period_bins") ) | This looks like a bug - do you want to report it to https://github.com/pola-rs/polars/issues ? For now, as a workaround, you could cast to int: breaks = pl.date_range(start_year, end_year, interval="3mo", eager=True) df.with_columns( pl.col("ts").cast(pl.Int64).cut( breaks.cast(pl.Int64), ).alias("period_bins") ) | 3 | 2 |
77,488,499 | 2023-11-15 | https://stackoverflow.com/questions/77488499/how-to-re-infer-datatypes-on-existing-polars-dataframe | I have the following problem: I have a csv-file with faulty values (strings instead of integers) in some rows. To remedy that, I read it into polars and filter the dataframe. To be able to read it, I have to set infer_schema_length = 0, since otherwise the read would fail. This reads every column as a string, though. How would I re-infer the data types/schema of the corrected dataframe? I'd like to try to avoid setting every column individually, as there are a lot. I unfortunately can't edit the csv itself. ids_df = pl.read_csv(dataset_path, infer_schema_length=0) filtered_df = ids_df.filter(~(pl.col("Label") == "Label")) filtered_df.dtypes [Utf8, Utf8, Utf8, Utf8, Utf8, Utf8, Utf8, Utf8, Utf8, Utf8, ... Thanks for your help. | I don't think Polars has this funtionality yet, but I think I found a valid way to solve your problem: from io import BytesIO import polars as pl dataset_path = "./test_data.csv" ids_df = pl.read_csv(dataset_path, infer_schema_length=0) print("ids_df",ids_df) filtered_df = ids_df.filter(~(pl.col("Label") == "Label")) print("filtered_df", filtered_df) # Save data to memory as a IO stream bytes_io = BytesIO() filtered_df.write_csv(bytes_io) # Read from IO stream with infer_schema_lenth != 0 new_df = pl.read_csv(bytes_io) print("new_df", new_df) bytes_io.close() | 2 | 4 |
77,492,161 | 2023-11-16 | https://stackoverflow.com/questions/77492161/is-there-a-way-to-handle-overlapping-keyword-bracketing-in-python | Suppose I have a list of keywords and an input string labelled section, I want my code to find those keywords within section and put them inside [] square brackets. However, my keywords sometimes overlap with each other. keywords = ["alpha", "alpha beta", "alpha beta charlie", "alpha beta charlie delta"] To fix that I sorted them by length so that the longer keywords would be prioritized. However, when I run my code, sometimes I would get double or nested brackets (I assume it is because it still detects those as valid keywords) I tried this: import re keywords = ["alpha", "alpha beta", "alpha beta charlie", "alpha beta charlie delta"] keywords.sort(key=len, reverse=True) section = "alpha alpha beta alpha beta charlie alpha beta charlie delta" section = section.replace("'", "’").replace("\"", "”") section_lines = section.split('\n') for i, line in enumerate(section_lines): if not line.startswith('#'): section_lines[i] = re.sub(r'-',' ',line) for x in range(4): section_lines[i] = re.sub(r'\b' + f"{keywords[x]}" + r'\b', f"[{keywords[x]}]", section_lines[i], flags=re.IGNORECASE) section_lines[i] = section_lines[i].replace("[[", "[").replace("]]", "]") section = '\n'.join(section_lines) section = section.replace(" "," ").replace(" "," ") print(section) Do not mind the line splits, it is for another part of to handle multiple lines. I wanted: [alpha] [alpha beta] [alpha beta charlie] [alpha beta charlie delta] but instead I got: [alpha] [alpha] beta] [alpha] beta] charlie] [alpha] beta] charlie] delta] | You can join the sorted keywords into an alternation pattern instead of substituting the string with different keywords 4 times, each time potentially substituting the replacement string from the previous iteration: import re keywords = ["alpha", "alpha beta", "alpha beta charlie", "alpha beta charlie delta"] keywords.sort(key=len, reverse=True) section = "alpha alpha beta alpha beta charlie alpha beta charlie delta" print(re.sub(rf"\b({'|'.join(keywords)})\b", r'[\1]', section)) This outpouts: [alpha] [alpha beta] [alpha beta charlie] [alpha beta charlie delta] | 3 | 4 |
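One caveat worth adding to this approach: if a keyword can contain regex metacharacters, escape each one before joining, otherwise the alternation pattern changes meaning. A small hedged sketch with a hypothetical keyword:

```python
import re

keywords = ["alpha", "alpha beta", "ver 2.0"]   # "ver 2.0" contains a metacharacter
keywords.sort(key=len, reverse=True)

pattern = rf"\b({'|'.join(map(re.escape, keywords))})\b"
section = "alpha ver 2.0 and ver 200"
print(re.sub(pattern, r'[\1]', section))
# [alpha] [ver 2.0] and ver 200  -- without re.escape, "ver 200" would match too
```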
77,489,704 | 2023-11-15 | https://stackoverflow.com/questions/77489704/groupby-giving-same-aggregate-value-for-all-groups | I am trying to take mean of each group and trying to assign those to a new column in another dataframe but the first group's mean value is populating across all groups. Below is my dataframe df1 level value CF 5 CF 4 CF 6 EL 2 EL 3 EL 1 EF 4 EF 3 EF 6 I am taking the mean of each group and saving it to a new column in another dataframe df2. df2['value'] = df1.groupby(['level'])['value'].transform('mean') But this is giving me below result level value CF 5.0 EL 5.0 EF 5.0 which should actually be level value CF 5.0 EL 2.0 EF 4.333333 I get expected result if I am not saving the values to new columnn. I am not sure if this is correct way of assigning group values to new column. | You should not use groupby.transform, this is mostly useful when you want to assign to the same dataframe. Here you need to map: df2['value'] = df2['level'].map(df1.groupby(['level'])['value'].mean()) | 2 | 2 |
77,487,996 | 2023-11-15 | https://stackoverflow.com/questions/77487996/using-glob-to-recursively-get-terminal-subdirectories | I have a series of subdirectories with files in them: /cars/ford/escape/sedan/ /cars/ford/escape/coupe/ /cars/ford/edge/sedan/ /cars/ferrari/testarossa/ /cars/kia/soul/coupe/ etcetera. I'd like to get all of these terminal subdirectory paths from the root, /cars/, using glob (within Python), but not include any of the files within them, nor any parents of a subdirectory. Each one contains only files, no further subdirectories. I tried using glob("**/"), but that also returns /cars/ford/, /cars/ford/escape/, /cars/ford/edge, /cars/ferrari/, etc. I do not want these. I also tried using rglob("*/") but that also returns all files inside the terminal subdirectories. I can get what I need by just globbing the files and making a set out of their parents, but I feel like there must be an elegant solution to this from the glob side of things. Unfortunately I can't seem to find the proper search terms to discover it. Thanks! | glob is the wrong tool for this job: traditional POSIX-y glob expressions don't support any kind of negative assertion (extglobs do, but it's still a restrictive kind of support -- making assertions about an individual name, not what does or doesn't exist on the same filesystem -- that doesn't apply to your use case, and Python doesn't support them anyhow). os.walk() and its newer children are better suited. Assuming you're on a new enough Python to support pathlib.Path.walk(): import pathlib def terminal_dirs(parent): for root, dirs, files in pathlib.Path(parent).walk(): if not dirs: yield root For older versions of Python, os.walk() can be used similarly: import os def terminal_dirs(parent): for dirpath, dirnames, filenames in os.walk(parent): if not dirnames: yield dirpath Both of these can of course be collapsed to one-liners if in a rush: result = [ r for (r,d,f) in os.walk('/cars') if not d ] | 2 | 3 |
77,444,332 | 2023-11-8 | https://stackoverflow.com/questions/77444332/openai-python-package-error-chatcompletion-object-is-not-subscriptable | After updating my OpenAI package to version 1.1.1, I got this error when trying to read the ChatGPT API response: 'ChatCompletion' object is not subscriptable Here is my code: messages = [ {"role": "system", "content": '''You answer question about some service''' }, {"role": "user", "content": 'The user question is ...'}, ] response = client.chat.completions.create( model=model, messages=messages, temperature=0 ) response_message = response["choices"][0]["message"]["content"] How can I resolve this error? | In the latest OpenAI package the response.choices object type is changed and in this way you must read the response: print(response.choices[0].message.content) The complete working code: from openai import OpenAI client = OpenAI(api_key='YourKey') GPT_MODEL = "gpt-4-1106-preview" #"gpt-3.5-turbo-1106" messages = [ {"role": "system", "content": 'You answer question about Web services.' }, {"role": "user", "content": 'the user message'}, ] response = client.chat.completions.create( model=GPT_MODEL, messages=messages, temperature=0 ) response_message = response.choices[0].message.content print(response_message ) See this example in the project README. | 51 | 98 |
77,469,097 | 2023-11-12 | https://stackoverflow.com/questions/77469097/how-can-i-process-a-pdf-using-openais-apis-gpts | The web interface for ChatGPT has an easy pdf upload. Is there an API from openAI that can receive pdfs? I know there are 3rd party libraries that can read pdf but given there are images and other important information in a pdf, it might be better if a model like GPT 4 Turbo was fed the actual pdf directly. I'll state my use case to add more context. I intent to do RAG. In the code below I handle the PDF and a prompt. Normally I'd append the text at the end of the prompt. I could still do that with a pdf if I extract its contents manually. The following code is taken from here https://platform.openai.com/docs/assistants/tools/code-interpreter. Is this how I'm supposed to do it? # Upload a file with an "assistants" purpose file = client.files.create( file=open("example.pdf", "rb"), purpose='assistants' ) # Create an assistant using the file ID assistant = client.beta.assistants.create( instructions="You are a personal math tutor. When asked a math question, write and run code to answer the question.", model="gpt-4-1106-preview", tools=[{"type": "code_interpreter"}], file_ids=[file.id] ) There is an upload endpoint as well, but it seems the intent of those endpoints are for fine-tuning and assistants. I think the RAG use case is a normal one and not necessarily related to assistants. | As of today (openai.__version__==1.42.0) using OpenAI Assistants + GPT-4o allows to extract content of (or answer questions on) an input pdf file foobar.pdf stored locally, with a solution along the lines of from openai import OpenAI from openai.types.beta.threads.message_create_params import ( Attachment, AttachmentToolFileSearch, ) import os filename = "foobar.pdf" prompt = "Extract the content from the file provided without altering it. Just output its exact content and nothing else." client = OpenAI(api_key=os.environ.get("MY_OPENAI_KEY")) pdf_assistant = client.beta.assistants.create( model="gpt-4o", description="An assistant to extract the contents of PDF files.", tools=[{"type": "file_search"}], name="PDF assistant", ) # Create thread thread = client.beta.threads.create() file = client.files.create(file=open(filename, "rb"), purpose="assistants") # Create assistant client.beta.threads.messages.create( thread_id=thread.id, role="user", attachments=[ Attachment( file_id=file.id, tools=[AttachmentToolFileSearch(type="file_search")] ) ], content=prompt, ) # Run thread run = client.beta.threads.runs.create_and_poll( thread_id=thread.id, assistant_id=pdf_assistant.id, timeout=1000 ) if run.status != "completed": raise Exception("Run failed:", run.status) messages_cursor = client.beta.threads.messages.list(thread_id=thread.id) messages = [message for message in messages_cursor] # Output text res_txt = messages[0].content[0].text.value print(res_txt) The prompt can of course be replaced with the desired user request and I assume that the openai key is stored in a env var named MY_OPENAI_KEY. Limitations: it's not (yet) possible to enforce JSON structure (other than with instructions in the prompt). This solution is inspired by https://medium.com/@erik-kokalj/effectively-analyze-pdfs-with-gpt-4o-api-378bd0f6be03. this relies on text content in the PDF (i.e. searchable text content), and the queries won't be able to access e.g. image content in the pdf. | 30 | 17 |
77,482,135 | 2023-11-14 | https://stackoverflow.com/questions/77482135/cant-import-name-cygrpc-from-grpc-cython | I'm trying to locally test a function i've written, but keep getting the error "cannot import name 'cygrpc' from 'grpc._cython" which causes attempts to run the function to fail I've made sure i'm running python 3.9 for compatibility, and the most recent version of azure functions core tools, as specified on the microsoft troubleshooting web page. Does anyone know of another work around for this? | What worked for me was running func init in the project (I'd cloned the project to a new machine and not run this step). After this it worked fine. | 3 | 0 |
77,455,386 | 2023-11-9 | https://stackoverflow.com/questions/77455386/how-to-format-polars-duration | This: df = (polars .DataFrame(dict( j=['2023-01-02 03:04:05.111', '2023-01-08 05:04:03.789'], k=['2023-01-02 03:04:05.222', '2023-01-02 03:04:05.456'], )) .select( polars.col('j').str.to_datetime(), polars.col('k').str.to_datetime(), ) .with_columns( l=polars.col('k') - polars.col('j'), ) ) print(df) produces: j (datetime[μs]) k (datetime[μs]) l (duration[μs]) 2023-01-02 03:04:05.111 2023-01-02 03:04:05.222 111ms 2023-01-08 05:04:03.789 2023-01-02 03:04:05.456 -6d -1h -59m -58s -333ms shape: (2, 3) Doing this afterwards: print(df .select( polars.col('l').dt.to_string('%s.%f') ) ) raises: polars.exceptions.InvalidOperationError: `to_string` operation not supported for dtype `duration[μs]` How would I go about formatting this to a string with a custom format like I can do with polars.Datetime? | It looks like you need to use pl.format and cobble together what you want with dt.total_seconds, dt.total_milliseconds, etc. These aren't intended to be used together so if you do it'll be (roughly) doubling (for 2 elements) the total time so, imo, best to use the most granular unit and do the math yourself since you've got to some some math anyways. This should work for the "%Ss%fms" case df.with_columns( z=pl.format("{}s{}ms", ((milli:=pl.col('l').dt.total_milliseconds()).abs().floordiv(1000))*milli.sign(), milli.abs().mod(1000)) ) shape: (2, 4) ┌─────────────────────────┬─────────────────────────┬──────────────────────────┬───────────────┐ │ j ┆ k ┆ l ┆ z │ │ --- ┆ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ datetime[μs] ┆ duration[μs] ┆ str │ ╞═════════════════════════╪═════════════════════════╪══════════════════════════╪═══════════════╡ │ 2023-01-02 03:04:05.111 ┆ 2023-01-02 03:04:05.222 ┆ 111ms ┆ 0s111ms │ │ 2023-01-08 05:04:03.789 ┆ 2023-01-02 03:04:05.456 ┆ -6d -1h -59m -58s -333ms ┆ -525598s333ms │ └─────────────────────────┴─────────────────────────┴──────────────────────────┴───────────────┘ | 2 | 3 |