question_id: int64 (range 59.5M to 79.4M)
creation_date: string date (2020-01-01 00:00:00 to 2025-02-10 00:00:00)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (range 1 to 410)
answer_vote: int64 (range -9 to 482)
73,475,379
2022-8-24
https://stackoverflow.com/questions/73475379/image-too-big-for-processing-when-converting-large-1-3-gb-dng-file-to-png-usin
I need to convert a DNG file to PNG using python. I found a post here how to convert DNG: Opencv Python open dng format The code I tried: #open dng and convert import rawpy import imageio import os os.chdir(r'C:\Path\to\dir') path = r'path\to\file' with rawpy.imread(path) as raw: rgb = raw.postprocess() rgb_img.save('image.png') However, it spits out the following errors: line 13, in <module> rgb = raw.postprocess() File "rawpy\_rawpy.pyx", line 887, in rawpy._rawpy.RawPy.postprocess File "rawpy\_rawpy.pyx", line 790, in rawpy._rawpy.RawPy.dcraw_process File "rawpy\_rawpy.pyx", line 438, in rawpy._rawpy.RawPy.ensure_unpack File "rawpy\_rawpy.pyx", line 432, in rawpy._rawpy.RawPy.unpack File "rawpy\_rawpy.pyx", line 936, in rawpy._rawpy.RawPy.handle_error rawpy._rawpy.LibRawTooBigError: b'Image too big for processing' Is there either an alternative to convert the DNG files, or a way to bypass this error? Info from exiftool: ExifTool Version Number : 11.88 File Name : DSCF0001.DNG Directory : . File Size : 1313 MB File Modification Date/Time : 2022:08:24 12:06:31+01:00 File Access Date/Time : 2022:08:25 08:17:15+01:00 File Inode Change Date/Time : 2022:08:24 15:19:04+01:00 File Permissions : rwxrwxrwx File Type : DNG File Type Extension : dng MIME Type : image/x-adobe-dng Exif Byte Order : Little-endian (Intel, II) Make : FUJIFILM Camera Model Name : GFX 100 Preview Image Start : 115208860 Orientation : Horizontal (normal) Rows Per Strip : 3000 Preview Image Length : 3011337 Software : FUJIFILM Pixel Shift Combiner 1.2.0.2 (Real Color + High Resolution mode) Modify Date : 2022:08:24 12:06:29 Artist : Subfile Type : Full-resolution image Image Width : 23296 Image Height : 17472 Bits Per Sample : 16 16 16 Compression : JPEG Photometric Interpretation : Linear Raw Samples Per Pixel : 3 Planar Configuration : Chunky Tile Width : 128 Tile Length : 96 Tile Offsets : (Binary data 341026 bytes, use -b option to extract) Tile Byte Counts : (Binary data 198743 bytes, use -b option to extract) Black Level : 256 255 256 White Level : 65535 65535 65535 Default Scale : 1 1 Default Crop Origin : 16 12 Default Crop Size : 23264 17448 Anti Alias Strength : 1 Best Quality Scale : 1 Opcode List 3 : WarpRectilinear, FixVignetteRadial Rating : 0 Copyright : Exposure Time : 1/125 F Number : 8.0 Exposure Program : Manual ISO : 100 Sensitivity Type : Standard Output Sensitivity Standard Output Sensitivity : 100 Exif Version : 0230 Date/Time Original : 2019:03:10 00:44:16 Create Date : 2019:03:10 00:44:16 Shutter Speed Value : 1/125 Aperture Value : 8.0 Brightness Value : 8.57 Exposure Compensation : 0 Max Aperture Value : 2.0 Metering Mode : Multi-segment Light Source : Unknown Flash : No Flash Focal Length : 110.0 mm Version : 0130 Internal Serial Number : Quality : NORMAL White Balance : Auto Saturation : 0 (normal) White Balance Fine Tune : Red +0, Blue +0 Noise Reduction : 0 (normal) Fuji Flash Mode : Manual Flash Exposure Comp : 0 Focus Mode : Manual AF Mode : No Focus Pixel : 2001 1501 AF-S Priority : Release AF-C Priority : Release Focus Mode 2 : AF-M AF Area Mode : Single Point AF Area Point Size : n/a AF Area Zone Size : n/a AF-C Setting : Set 1 (multi-purpose) AF-C Tracking Sensitivity : 2 AF-C Speed Tracking Sensitivity : 0 AF-C Zone Area Switching : Auto Slow Sync : Off Picture Mode : Manual Exposure Count : 1 Shadow Tone : 0 (normal) Highlight Tone : 0 (normal) Lens Modulation Optimizer : On Grain Effect : Off Color Chrome Effect : Off Crop Mode : n/a Color Chrome FX Blue : Off Shutter Type : 
Electronic Auto Bracketing : Unknown (6) Sequence Number : 1 Drive Mode : Single Drive Speed : n/a Blur Warning : None Focus Warning : Good Exposure Warning : Good Dynamic Range : Standard Film Mode : F0/Standard (Provia) Dynamic Range Setting : Manual Development Dynamic Range : 100 Min Focal Length : 110 Max Focal Length : 110 Max Aperture At Min Focal : 2 Max Aperture At Max Focal : 2 Image Stabilization : Sensor-shift; Off; 0 Image Generation : Original Image Image Count : 34 Flicker Reduction : Off (0x0002) Faces Detected : 0 Num Face Elements : 0 Color Space : Uncalibrated Focal Plane X Resolution : 5320 Focal Plane Y Resolution : 5320 Focal Plane Resolution Unit : cm File Source : Digital Camera Scene Type : Directly photographed Custom Rendered : Normal Exposure Mode : Auto Focal Length In 35mm Format : 87 mm Scene Capture Type : Standard Sharpness : Unknown (3) Subject Distance Range : Unknown (48) Serial Number : Lens Info : 110mm f/2 Lens Make : FUJIFILM Lens Model : GF110mmF2 R LM WR Lens Serial Number : DNG Version : 1.4.0.0 DNG Backward Version : 1.1.0.0 Unique Camera Model : FUJIFILM GFX 100 Color Matrix 1 : 1.7191 -1.1 0.1278 -0.3574 1.1733 0.2076 -0.0002 0.0497 0.654 Color Matrix 2 : 1.6212 -0.8423 -0.1583 -0.4336 1.2583 0.1937 -0.0195 0.0726 0.6199 Analog Balance : 1 1 1 As Shot Neutral : 0.5644 1 0.5153 Baseline Exposure : -0.01 Baseline Noise : 1 Baseline Sharpness : 1.33 Linear Response Limit : 1 Camera Serial Number : DNG Lens Info : 110mm f/2 Shadow Scale : 1 DNG Private Data : (Binary data 114927728 bytes, use -b option to extract) Calibration Illuminant 1 : Standard Light A Calibration Illuminant 2 : D65 Aperture : 8.0 Image Size : 23296x17472 Megapixels : 407.0 Preview Image : (Binary data 3011337 bytes, use -b option to extract) Scale Factor To 35 mm Equivalent: 0.8 Shutter Speed : 1/125 Circle Of Confusion : 0.038 mm Field Of View : 23.4 deg Focal Length : 110.0 mm (35 mm equivalent: 87.0 mm) Hyperfocal Distance : 39.81 m Light Value : 13.0
There seems to be a limit of 2GB for the fully expanded in-memory image. I don't mean the space your DNG requires on disk, I mean the following number: ImageHeight * ImageWidth * NumberOfChannels * BytesPerSample So it would be useful if you used exiftool to tell us those parameters, by clicking edit under your question and adding the output from: exiftool YOURIMAGE.DNG The limit is enforced in the variable imgdata.params.max_raw_memory_mb which is set to 2048 in the underlying libraw C code. I am not immediately sure how you could set that through the Python interface. You may have some success converting it to a PNG or a TIFF with ImageMagick as follows: magick YOURIMAGE.DNG converted.png If that works, you could use wand, which is a Python binding to ImageMagick. You could also try ufraw, along the lines of: ufraw-batch --out-type=png --out-depth=16 --output=result.png YOURIMAGE.dng
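A minimal sketch of the wand route mentioned above, assuming ImageMagick (with a raw/DNG delegate) and the wand package are installed; the file names are placeholders:

from wand.image import Image

# Convert the DNG to PNG via ImageMagick
with Image(filename="YOURIMAGE.DNG") as img:
    img.format = "png"
    img.save(filename="converted.png")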
3
4
73,492,285
2022-8-25
https://stackoverflow.com/questions/73492285/subclass-enum-to-add-validation
The Python docs for the enum module contains the following example of subclassing Enum. The resulting class can be used to create enums that also validate that they have no two members with the same value. >>> class DuplicateFreeEnum(Enum): ... def __init__(self, *args): ... cls = self.__class__ ... if any(self.value == e.value for e in cls): ... a = self.name ... e = cls(self.value).name ... raise ValueError( ... "aliases not allowed in DuplicateFreeEnum: %r --> %r" ... % (a, e)) However, as a method of adding validation, this method is inelegant and restrictive. __init__ is called once for each member, whereas in order to validate an enum as a whole, it makes more sense to look at every member of the enum together. For instance, how would I validate that an enum has precisely two members, as below? class PreciselyTwoEnum(Enum): ... # ??? class Allowed(PreciselyTwoEnum): FOO = 1 BAR = 2 class Disallowed(PreciselyTwoEnum): # Should raise an error BAZ = 3 Can this be accomplished with a clever implementation of __init__? Is there another method that could be used — perhaps one that is called on the enum after it has been fully created?
__init_subclass__ is what you are looking for [1]: class PreciselyTwoEnum(Enum): def __init_subclass__(cls): if len(cls.__members__) != 2: raise TypeError("only two members allowed") and in use: >>> class Allowed(PreciselyTwoEnum): ... FOO = 1 ... BAR = 2 ... >>> class Disallowed(PreciselyTwoEnum): # Should raise an error ... BAZ = 3 ... Traceback (most recent call last): ... TypeError: only two members allowed [1] __init_subclass__ for Enum only works correctly in Python 3.11 or later, or by using the external aenum library v3.0 or later. [2] [2] Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library.
4
2
73,471,981
2022-8-24
https://stackoverflow.com/questions/73471981/should-none-be-considered-a-data-type-python
I know this sounds stupid, but I'm reading a programming book and they talk about how print() can return nothing (None). They use this code to explain it. a = 10 b = 15 c = print('a =', a, 'b=', b) print(c) I get it, c isn't any data type that print() can take and, y'know, print it. c just has an empty value because it's not a valid data type. But what data type is c? What data type is None? If c isn't a string, integer, float, nor a boolean, what is it? Shouldn't None be its own data type? P.S. If I go to python and assign a variable None and print it, it recognises the data value and does not spit a name error. So theoretically, *None is its own data type, right? Oh, and why does Python not convert c to string and then print it?
None is (like literally everything else in Python besides keywords) an object. Meaning it is an instance of a class (or type if you will). None is an instance of NoneType, which you can find out if you do this: print(type(None)) # <class 'NoneType'> So yes, None has its own data type in Python. The class is special in the sense that it only ever has one instance: None. And yes, there is a string representation defined for the NoneType class. And the string representation is, well, "None", which you see when you do this: (all equivalent in this case) print(None) print(str(None)) print(repr(None)) PS: It is also worth stressing that None is in fact a singleton, meaning there only ever is exactly that one instance of the NoneType class. This is why you can do identity comparisons with None, i.e.: if my_variable is None: ... As opposed to equality comparisons (which of course work too) like this: if my_variable == None: ... And when any function (including built-ins like print) has no explicit return statement or an empty one (like return without anything after it), it implicitly always returns that one special None object.
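A small illustration of those points (not from the original answer): a function with no return statement hands back that single None object, which is why the identity check works:

def greet(name):
    print(f"Hello, {name}!")  # no return statement, so None is returned implicitly

result = greet("Ada")
print(type(result))    # <class 'NoneType'>
print(result is None)  # True -- the one and only None instance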
3
11
73,491,617
2022-8-25
https://stackoverflow.com/questions/73491617/convert-enum-to-literal-type-alias-in-python-typing
Is there a way to type annotate a function or variable in Python in such a way that it allows either an enum or a Literal formed from the attributes of the enum? from enum import Enum from typing import Literal class State(str, Enum): ENABLED = "enabled" DISABLED = "disabled" def is_enabled(state: State | Literal["enabled", "disabled"]) -> bool: if isinstance(state, str): state = State(state) return state == State.ENABLED In other words, is there a way to obtain the alias for Literal["enabled", "disabled"] without having to rewrite all the keys of the enum?
I'm afraid there's no such way. The first thing that comes to mind, iterating over the enum's values to build a Literal type, won't work, because Literal cannot contain arbitrary expressions. So, you cannot specify it explicitly: # THIS DOES NOT WORK def is_enabled(state: State | Literal[State.ENABLED.value, State.DISABLED.value]) -> bool: ... There's an open issue on GitHub with the related discussion. Basically, you could have hardcoded literals for every Enum, but in that case you need to update them in step with every Enum change, and one day it will get messy. So, I would either stick with the State | str annotation or just State and expect your function to accept only enums. Also, take into account that you do not need to explicitly create an Enum object to test its value; you can just write "enabled" == State.ENABLED as I mentioned in the comments.
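If you do keep a hardcoded Literal alongside the enum, one way to stop them drifting apart (a sketch, not part of the original answer) is a runtime assertion using typing.get_args:

from enum import Enum
from typing import Literal, get_args

class State(str, Enum):
    ENABLED = "enabled"
    DISABLED = "disabled"

StateLiteral = Literal["enabled", "disabled"]

# Fails at import time if the Literal no longer matches the enum values
assert set(get_args(StateLiteral)) == {member.value for member in State}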
18
13
73,472,218
2022-8-24
https://stackoverflow.com/questions/73472218/change-of-a-global-variable-gets-lost-when-importing-the-enclosing-namespace
I was playing with scopes and namespaces and I found a weird behaviour which I'm not sure how to explain. Say we have a file called new_script.py with the following inside: a = 0 def func(): import new_script #import itself new_script.a += 1 print(new_script.a) func() print(a) When executed it prints 1 1 2 0 I didn't expect the last print of the number 0. From what I understand, it prints the first two 1s while executing the self-import statement and incrementing the global a, then it prints 2 because it increments the global a again from inside the function, but then why is the last print 0 instead of 2?
Well, this has led me down a very interesting rabbit hole, so thank you for that. Here are the key points: Imports will not recurse. If it's imported once, it will execute the module-level code, but it will not execute again if it's imported again. Hence you only see 4 values. Imports are singletons. If you try this code: # singleton_test.py import singleton_test def func(): import singleton_test #import itself print(singleton_test.singleton_test == singleton_test) func() It will print: True True The imported singleton version of a module is different from the originally run version of the module. With this in mind, we can explore your code by enriching it with a few more comments, particularly using __name__, which contains the name of the current module and will be __main__ if the current module is what was run originally: a = 0 print("start", __name__) def func(): print("Do import", __name__) import new_script #import itself new_script.a += 1 print(new_script.a, "func", __name__) func() print(a, "outr", __name__) This will print start __main__ Do import __main__ start new_script Do import new_script 1 func new_script 1 outr new_script 2 func __main__ 0 outr __main__ This shows you quite well, given that the imported module is a singleton (but not the module that was run), that you first print 1 in the function after you incremented the value inside the imported module, then you print 1 at the end of the imported module, then you print 2 after incrementing the value on the singleton from the originally run code, and then finally you print 0 for the unchanged outer module that you originally ran, but have not touched.
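A common way to avoid this kind of double execution (a sketch, not part of the original answer) is to put the module-level calls behind the standard __main__ guard, so they only run for the module that was executed directly and not for the imported copy:

# new_script.py
a = 0

def func():
    import new_script  # import itself
    new_script.a += 1
    print(new_script.a)

if __name__ == "__main__":
    # Only runs when the file is executed directly, not when it is imported
    func()
    print(a)

With the guard in place, running the file prints 1 and then 0, because the imported copy no longer calls func() or prints at module level.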
3
2
73,482,110
2022-8-25
https://stackoverflow.com/questions/73482110/what-is-fastest-way-to-convert-pdf-to-jpg-image
I am trying to convert multiple PDFs (10k+) to JPG images and extract text from them. I am currently using the pdf2image Python library, but it is rather slow; is there any faster library than this? from pdf2image import convert_from_bytes images = convert_from_bytes(open(path,"rb").read()) Note: I am using Ubuntu 18.04 CPU: 4 core 8 thread (Ryzen 3 3100) memory: 8 GB
pyvips is a bit quicker than pdf2image. I made a tiny benchmark: #!/usr/bin/python3 import sys from pdf2image import convert_from_bytes images = convert_from_bytes(open(sys.argv[1], "rb").read()) for i in range(len(images)): images[i].save(f"page-{i}.jpg") With this test document I see: $ /usr/bin/time -f %M:%e ./pdf.py nipguide.pdf 1991624:4.80 So 2GB of memory and 4.8s of elapsed time. You could write this in pyvips as: #!/usr/bin/python3 import sys import pyvips image = pyvips.Image.new_from_file(sys.argv[1]) for i in range(image.get('n-pages')): image = pyvips.Image.new_from_file(sys.argv[1], page=i) image.write_to_file(f"page-{i}.jpg") I see: $ /usr/bin/time -f %M:%e ./vpdf.py nipguide.pdf[dpi=200] 676436:2.57 670MB of memory and 2.6s elapsed time. They are both using poppler behind the scenes, but pyvips calls directly into the library rather than using processes and temp files, and can overlap load and save. You can configure pyvips to use pdfium rather than poppler, though it's a bit more work, since pdfium is still not packaged by many distributions. pdfium can be perhaps 3x faster than poppler for some PDFs. You can use multiprocessing to get a further speedup. This will work better with pyvips because of the lower memory use, and the fact that it's not using huge temp files. If I modify the pyvips code to only render a single page, I can use GNU parallel to render each page in a separate process: $ time parallel ../vpdf.py us-public-health-and-welfare-code.pdf[dpi=150] ::: {1..100} real 0m1.846s user 0m38.200s sys 0m6.371s So 100 pages at 150dpi in 1.8s.
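A hedged sketch of the same page-at-a-time idea using Python's multiprocessing instead of GNU parallel (the dpi value and output names are assumptions, and the actual speedup depends on the machine):

import sys
from multiprocessing import Pool

import pyvips

def render_page(args):
    filename, page = args
    # Load and save a single page in this worker process
    img = pyvips.Image.new_from_file(filename, page=page, dpi=150)
    img.write_to_file(f"page-{page}.jpg")

if __name__ == "__main__":
    filename = sys.argv[1]
    n_pages = pyvips.Image.new_from_file(filename).get("n-pages")
    with Pool() as pool:
        pool.map(render_page, [(filename, p) for p in range(n_pages)])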
5
7
73,538,040
2022-8-30
https://stackoverflow.com/questions/73538040/cannot-add-conda-environment-to-pycharm-conda-executable-path-is-empty-even-wh
I am pretty proficient in PyCharm, but this is the first time I have stumbled into this problem. I created a conda environment, finding the conda executable, which for me is in /home/my_username/.miniconda3/envs/py39/bin/python Adding it to PyCharm results in: I tried to search for this issue and error but the results didn't help. I am using Fedora 36 if it is relevant. Edit: The output of which conda is: /home/my_username/.miniconda3/condabin/conda Then trying to add it as the interpreter as suggested in PyCharm: Conda executable path is empty:
Click 'Add Interpreter' > 'Add Local Interpreter'; click 'Conda Environment' in the left panel, then browse and select 'yourAnacondaDir\Scripts\conda.exe'; click 'Load Environment'; two options, 'Use existing environment' and 'Create new environment', then show up; click the first option, and in the 'Use existing environment' box select 'your_env'.
8
12
73,507,177
2022-8-26
https://stackoverflow.com/questions/73507177/aws-sam-dockerbuildargs-it-does-not-add-them-when-creating-the-lambda-image
I am trying to test a lambda function locally, the function is created from the public docker image from aws, however I want to install my own python library from my github, according to the documentation AWS sam Build I have to add a variable to be taken in the Dockerfile like this: Dockerfile FROM public.ecr.aws/lambda/python:3.8 COPY lambda_preprocessor.py requirements.txt ./ RUN yum install -y git RUN python3.8 -m pip install -r requirements.txt -t . ARG GITHUB_TOKEN RUN python3.8 -m pip install git+https://${GITHUB_TOKEN}@github.com/repository/library.git -t . And to pass the GITHUB_TOKEN I can create a .json file containing the variables for the docker environment. .json file named env.json { "LambdaPreprocessor": { "GITHUB_TOKEN": "TOKEN_VALUE" } } And simply pass the file address in the sam build: sam build --use-container --container-env-var-file env.json Or directly the value without the .json with the command: sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE My problem is that I don't get the GITHUB_TOKEN variable either with the .json file or by putting it directly in the command with --container-env-var GITHUB_TOKEN=TOKEN_VALUE Using sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE --debug shows that it doesn't take it when creating the lambda image. The only way that has worked for me is to put the token directly in the Dockerfile not as an build argument. Promt output Building image for LambdaPreprocessor function Setting DockerBuildArgs: {} for LambdaPreprocessor function Does anyone know why this is happening, am I doing something wrong? If you need to see the template.yaml this is the lambda definition. template.yaml LambdaPreprocessor: Type: AWS::Serverless::Function Properties: PackageType: Image Architectures: - x86_64 Timeout: 180 Metadata: Dockerfile: Dockerfile DockerContext: ./lambda_preprocessor DockerTag: python3.8-v1 I'm doing it with vscode and wsl 2 with ubuntu 20.04 lts on windows 10
I am having this issue too. What I have learned is that the Metadata field also accepts a DockerBuildArgs: key that you can add. Example: Metadata: DockerBuildArgs: MY_VAR: <some variable> When I add this, it does make it into the DockerBuildArgs dict.
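A hedged sketch of how that could look in the question's template.yaml, combining the existing Metadata block with DockerBuildArgs (the token value is a placeholder taken from the question):

LambdaPreprocessor:
  Type: AWS::Serverless::Function
  Properties:
    PackageType: Image
    Architectures:
      - x86_64
    Timeout: 180
  Metadata:
    Dockerfile: Dockerfile
    DockerContext: ./lambda_preprocessor
    DockerTag: python3.8-v1
    DockerBuildArgs:
      GITHUB_TOKEN: TOKEN_VALUE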
4
6
73,521,495
2022-8-28
https://stackoverflow.com/questions/73521495/how-to-stop-vs-code-from-removing-python-unused-imports-on-save
Looking for general advice, as I'm not completely sure what is causing this behavior which I did not encounter until recently. I'm finding it quite annoying because it can delete imports if I comment out a line during development.
Add the following to your settings.json file (you can access it on Windows with Ctrl+Shift+P followed by a search for "settings"): "editor.codeActionsOnSave": { ... [other settings] ... "source.organizeImports": false } Note: ensure that "editor.codeActionsOnSave" is not defined elsewhere (aside from language-specific places), as this prevented the fix from working the first time I tried it.
4
9
73,528,560
2022-8-29
https://stackoverflow.com/questions/73528560/django-create-a-custom-model-field-for-currencies
Here I my custom model field I created it class CurrencyAmountField(models.DecimalField): INTEGER_PLACES = 5 DECIMAL_PLACES = 5 DECIMAL_PLACES_FOR_USER = 2 MAX_DIGITS = INTEGER_PLACES + DECIMAL_PLACES MAX_VALUE = Decimal('99999.99999') MIN_VALUE = Decimal('-99999.99999') def __init__(self, verbose_name=None, name=None, max_digits=MAX_DIGITS, decimal_places=DECIMAL_PLACES, **kwargs): super().__init__(verbose_name=verbose_name, name=name, max_digits=max_digits, decimal_places=decimal_places, **kwargs) How can I show the numbers in a comma-separated mode in Django admin forms? Should I override some method here on this custom model field or there is another to do that? Should be: Update: Tried to use intcomma like this: {% extends "admin/change_form.html" %} {% load humanize %} {% block field_sets %} {% for fieldset in adminform %} <fieldset class="module aligned {{ fieldset.classes }}"> {% if fieldset.name %}<h2>{{ fieldset.name }}</h2>{% endif %} {% if fieldset.description %} <div class="description">{{ fieldset.description|safe }}</div> {% endif %} {% for line in fieldset %} <div class="form-row{% if line.fields|length_is:'1' and line.errors %} errors{% endif %}{% if not line.has_visible_field %} hidden{% endif %}{% for field in line %}{% if field.field.name %} field-{{ field.field.name }}{% endif %}{% endfor %}"> {% if line.fields|length_is:'1' %}{{ line.errors }}{% endif %} {% for field in line %} <div{% if not line.fields|length_is:'1' %} class="fieldBox{% if field.field.name %} field-{{ field.field.name }}{% endif %}{% if not field.is_readonly and field.errors %} errors{% endif %}{% if field.field.is_hidden %} hidden{% endif %}"{% elif field.is_checkbox %} class="checkbox-row"{% endif %}> {% if not line.fields|length_is:'1' and not field.is_readonly %}{{ field.errors }}{% endif %} {% if field.is_checkbox %} {{ field.field }}{{ field.label_tag }} {% else %} {{ field.label_tag }} {% if field.is_readonly %} <div class="readonly">{{ field.contents }}</div> {% else %} {{ field.field|intcomma }} {% endif %} {% endif %} {% if field.field.help_text %} <div class="help">{{ field.field.help_text|safe }}</div> {% endif %} </div> {% endfor %} </div> {% endfor %} </fieldset> {% endfor %} {% endblock %} As you can see I added intcomma like this: {{ field.field|intcomma }} But I get HTML codes on my admin page instead of the forms and labels. What's wrong here? My priority is to use the first method and 'CurrencyAmountField'.
While I have yet to manage to get the comma to display on input, I have managed to get the comma to display when viewing the saved model. This is what I have tried currently: # forms.py class ValuesForm(forms.ModelForm): class Meta: model = Values fields = ['value'] value = forms.DecimalField(localize=True) #settings.py USE_THOUSAND_SEPARATOR = True #admin.py class ValuesAdmin(admin.ModelAdmin): form = ValuesForm #models.py class Values(models.Model): value = models.DecimalField(max_digits=10, decimal_places=2) While the documentation suggests it's possible to do it on input, it might require a custom widget, so I'll investigate that shortly. As a working solution, I have manually replaced the comma with _ when inputting, e.g. 12_345_678, which gets displayed on save as 12,345,678
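A possible sketch for the question's CurrencyAmountField itself (an assumption, not part of the answer above): the model field can ask for a localized form field, so that with USE_THOUSAND_SEPARATOR = True in settings the admin form renders the separators without a custom form class:

from django.db import models

class CurrencyAmountField(models.DecimalField):
    def formfield(self, **kwargs):
        # Request a localized form field so thousand separators are applied
        # (requires USE_THOUSAND_SEPARATOR = True in settings.py)
        kwargs.setdefault("localize", True)
        return super().formfield(**kwargs)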
4
1
73,498,513
2022-8-26
https://stackoverflow.com/questions/73498513/how-to-regrid-efficiently-a-multi-spectral-image
Given a multi-spectral image with the following shape: a = np.random.random([240, 320, 30]) where the tail axis represents values at the following fractional wavelengths: array([395.13, 408.62, 421.63, 434.71, 435.64, 453.39, 456.88, 471.48, 484.23, 488.89, 497.88, 513.35, 521.38, 528.19, 539.76, 548.39, 557.78, 568.06, 577.64, 590.22, 598.63, 613.13, 618.87, 632.75, 637.5 , 647.47, 655.6 , 672.66, 681.88, 690.1 ]) What is the most efficient way, i.e. without iterating over every single wavelength, to regrid the data at integer wavelengths as follows: array([400, 410, 420, 430, 440, 450, 460, 470, 480, 490, 500, 510, 520, 530, 540, 550, 560, 570, 580, 590, 600, 610, 620, 630, 640, 650, 660, 670, 680, 690])
It depends on the interpolation method and Physics that you deem appropriate. From what you write, I would tend to assume that the error along the spatial dimensions is negligible compared to the error in the wavelength. If that is the case, an N-Dim interpolation is likely wrong as the pixel information should be independent. Instead what you would need to do is a 1D interpolation for all pixels. The simplest (and fastest) form of interpolation is with nearest neighbor. Now, if the new wavelength can be computed with np.round(decimals=-1). The data is already interpolated and you just need to update the wavelength values. If the new wavelength are not the old ones rounded, or if you do not need or want nearest neighbor interpolation, then you need to use a different approach, which at some point will involve looping through the pixels. SciPy offers scipy.interpolate.interp1d() which does exactly that in a vectorized fashion (i.e. the loop through the pixel is pushed outside of Python frames) and offers a variety of interpolation methods. For example, if samples are the measured wavelengths and new_samples contain the new ones, and arr contains the stacked images with the last axis running across wavelengths: import scipy.interpolate def interp_1d_last_sp(arr, samples, new_samples, kind="linear"): interpolator = scipy.interpolate.interp1d(samples, arr, axis=-1, kind=kind, fill_value="extrapolate") return interpolator(new_samples) Something similar can be computed manually with an explicit loop, at least for linear interpolation, using the much faster np.interp() function: import numpy as np def interp_1d_last_np(arr, samples, new_samples): shape = arr.shape k = shape[-1] arr = arr.reshape((-1, k)) result = np.empty_like(arr) n = arr.shape[0] for i in range(n): result[i, :] = np.interp(new_samples, samples, arr[i, :]) return result.reshape(shape) which does not support multidimensional input, but it can be accelerated with Numba: import numba as nb @nb.njit(parallel=True) def interp_1d_last_nb(arr, samples, new_samples): shape = arr.shape k = shape[-1] arr = arr.reshape((-1, k)) result = np.empty_like(arr) n = arr.shape[0] for i in nb.prange(n): result[i, :] = np.interp(new_samples, samples, arr[i, :]) return result.reshape(shape) While they all get to similar results (they deal slightly differently with extrapolation), the timings can be different: np.random.seed(0) a = np.random.random([240, 320, 30]) w_min = 400 w_max = 700 w_step = 10 w = np.arange(w_min, w_max, w_step) nw = w_step * (np.random.random(w.size) - 0.5) funcs = interp_1d_last_sp, interp_1d_last_np, interp_1d_last_nb base = funcs[0](a, w + nw, w) for func in funcs: res = func(a, w + nw, w) # the sum of the absolute difference with the non-interpolated array is reasonably the same is_good = np.isclose(np.sum(np.abs(base - a)), np.sum(np.abs(res - a))) print(f"{func.__name__:>12s} {is_good!s:>5} {np.sum(np.abs(res - a)):16.8f} ", end="") %timeit -n 2 -r 2 func(a, w + nw, w) # interp_1d_last_sp True 140136.05282911 90.8 ms ± 21.7 ms per loop (mean ± std. dev. of 2 runs, 2 loops each) # interp_1d_last_np True 140136.05282911 349 ms ± 14.2 ms per loop (mean ± std. dev. of 2 runs, 2 loops each) # interp_1d_last_nb True 140136.05282911 61.5 ms ± 1.83 ms per loop (mean ± std. dev. of 2 runs, 2 loops each)
4
4
73,548,604
2022-8-30
https://stackoverflow.com/questions/73548604/create-2d-matrix-of-ascending-integers-in-diagonal-triangle-like-order-with-nump
How do I create a matrix of ascending integers that are arrayed like this example of N=6? 1 3 6 2 5 0 4 0 0 Here is another example for N=13: 1 3 6 10 0 2 5 9 0 0 4 8 13 0 0 7 12 0 0 0 11 0 0 0 0 Also, the solution should perform well for large N values. My code: import numpy as np N = 13 array_dimension = 5 x = 0 y = 1 z = np.zeros((array_dimension,array_dimension)) z[0][0] = 1 for i in range(2, N+1): z[y][x] = i if y == 0: y = (x + 1) x = 0 else: x += 1 y -= 1 print(z) [[ 1. 3. 6. 10. 0.] [ 2. 5. 9. 0. 0.] [ 4. 8. 13. 0. 0.] [ 7. 12. 0. 0. 0.] [11. 0. 0. 0. 0.]] works, but there must be a more efficient way. Most likely via NumPy, but I cannot find a solution.
The assignment can be completed in one step by simply transforming the index of the lower triangle: def fill_diagonal(n): assert n > 0 m = int((2 * n - 1.75) ** 0.5 + 0.5) '''n >= ((1 + (m - 1)) * (m - 1)) / 2 + 1 => 2n - 2 >= m ** 2 - m => 2n - 7 / 4 >= (m - 1 / 2) ** 2 => (2n - 7 / 4) ** (1 / 2) + 1 / 2 >= m for n > 0 => m = floor((2n - 7 / 4) ** (1 / 2) + 1 / 2) or n <= ((1 + m) * m) / 2 => (2n + 1 / 4) ** (1 / 2) - 1 / 2 <= m for n > 0 => m = ceil((2n + 1 / 4) ** (1 / 2) - 1 / 2) ''' i, j = np.tril_indices(m) i -= j ret = np.zeros((m, m), int) ret[i[:n], j[:n]] = np.arange(1, n + 1) return ret Test: >>> for i in range(1, 16): ... print(repr(fill_diagonal(i)), end='\n\n') ... array([[1]]) array([[1, 0], [2, 0]]) array([[1, 3], [2, 0]]) array([[1, 3, 0], [2, 0, 0], [4, 0, 0]]) array([[1, 3, 0], [2, 5, 0], [4, 0, 0]]) array([[1, 3, 6], [2, 5, 0], [4, 0, 0]]) array([[1, 3, 6, 0], [2, 5, 0, 0], [4, 0, 0, 0], [7, 0, 0, 0]]) array([[1, 3, 6, 0], [2, 5, 0, 0], [4, 8, 0, 0], [7, 0, 0, 0]]) array([[1, 3, 6, 0], [2, 5, 9, 0], [4, 8, 0, 0], [7, 0, 0, 0]]) array([[ 1, 3, 6, 10], [ 2, 5, 9, 0], [ 4, 8, 0, 0], [ 7, 0, 0, 0]]) array([[ 1, 3, 6, 10, 0], [ 2, 5, 9, 0, 0], [ 4, 8, 0, 0, 0], [ 7, 0, 0, 0, 0], [11, 0, 0, 0, 0]]) array([[ 1, 3, 6, 10, 0], [ 2, 5, 9, 0, 0], [ 4, 8, 0, 0, 0], [ 7, 12, 0, 0, 0], [11, 0, 0, 0, 0]]) array([[ 1, 3, 6, 10, 0], [ 2, 5, 9, 0, 0], [ 4, 8, 13, 0, 0], [ 7, 12, 0, 0, 0], [11, 0, 0, 0, 0]]) array([[ 1, 3, 6, 10, 0], [ 2, 5, 9, 14, 0], [ 4, 8, 13, 0, 0], [ 7, 12, 0, 0, 0], [11, 0, 0, 0, 0]]) array([[ 1, 3, 6, 10, 15], [ 2, 5, 9, 14, 0], [ 4, 8, 13, 0, 0], [ 7, 12, 0, 0, 0], [11, 0, 0, 0, 0]]) For the case of large n, the performance is about 10 to 20 times that of the loop solution: %timeit fill_diagonal(10 ** 5) 1.63 ms ± 94.5 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) %timeit fill_diagonal_loop(10 ** 5) # OP's solution 25.1 ms ± 218 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
6
4
73,472,916
2022-8-24
https://stackoverflow.com/questions/73472916/error-cannot-import-name-wrappers-from-tensorflow-python-keras-layers
The code is giving the following error message Cannot import name 'wrappers' from 'tensorflow.python.keras.layers' - and ImportError: graphviz or pydot are not available. Even after installing the graphviz and pydot using the !apt-get -qq install -y graphviz && pip install pydot still not able to genrate model.png. i m running the following code in colab try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow as tf from tensorflow.python.keras.utils.vis_utils import plot_model import pydot from tensorflow.keras.models import Model def build_model_with_sequential(): # instantiate a Sequential class and linearly stack the layers of your model seq_model = tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation=tf.nn.relu), tf.keras.layers.Dense(10, activation=tf.nn.softmax)]) return seq_model def build_model_with_functional(): # instantiate the input Tensor input_layer = tf.keras.Input(shape=(28, 28)) # stack the layers using the syntax: new_layer()(previous_layer) flatten_layer = tf.keras.layers.Flatten()(input_layer) first_dense = tf.keras.layers.Dense(128, activation=tf.nn.relu)(flatten_layer) output_layer = tf.keras.layers.Dense(10, activation=tf.nn.softmax)(first_dense) # declare inputs and outputs func_model = Model(inputs=input_layer, outputs=output_layer) return func_model model = build_model_with_functional() #model = build_model_with_sequential() # Plot model graph plot_model(model, show_shapes=True, show_layer_names=True, to_file='model.png') ERROR --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-5-aaa8f6f7cc6e> in <module> 3 4 # Plot model graph ----> 5 plot_model(model, show_shapes=True, show_layer_names=True, to_file='model.png') 1 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/vis_utils.py in plot_model(model, to_file, show_shapes, show_dtype, show_layer_names, rankdir, expand_nested, dpi) 327 rankdir=rankdir, 328 expand_nested=expand_nested, --> 329 dpi=dpi) 330 to_file = path_to_string(to_file) 331 if dot is None: /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/vis_utils.py in model_to_dot(model, show_shapes, show_dtype, show_layer_names, rankdir, expand_nested, dpi, subgraph) 96 ImportError: if graphviz or pydot are not available. 97 """ ---> 98 from tensorflow.python.keras.layers import wrappers 99 from tensorflow.python.keras.engine import sequential 100 from tensorflow.python.keras.engine import functional ImportError: cannot import name 'wrappers' from 'tensorflow.python.keras.layers' (/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/__init__.py) --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. -------------------------------------------------------------------------
It's a version issue. According to the latest TensorFlow docs (at least from 2.8.1), there's no public tensorflow.python package to import from; the plot_model utility has been moved to tensorflow.keras.utils. So simply replacing from tensorflow.python.keras.utils.vis_utils import plot_model with from tensorflow.keras.utils import plot_model could solve the issue.
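For reference, a minimal corrected version of the imports and the call from the question, following the replacement described above (graphviz and pydot still need to be installed for plot_model to render the image):

import tensorflow as tf
from tensorflow.keras.utils import plot_model  # moved out of tensorflow.python

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

plot_model(model, show_shapes=True, show_layer_names=True, to_file="model.png")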
3
9
73,534,869
2022-8-29
https://stackoverflow.com/questions/73534869/vs-code-deactivate-venv-not-inside-workspace
I am using VS Code and have a venv folder for shared projects that lives outside of the workspace/project folders. I want to change my workspace to use the interpreter within my AppData\local... folder (system installation of Python). I have been reading up on this but have not found a solution. How would I do this, please? I should note that the virtual environment is not active, so I cannot do this. In the terminal I see this showing the workspace folder & the virtual environment. This is messing up my debugging and is very frustrating. Edit - to try to explain better: when opening my interpreter list (see image below), each workspace (red box) has the interpreter named under it (green box). I want to change the interpreter that is linked to the workspace. Edit2 - solution - I was able to fix this by doing something I had tried before, but for some reason this time it worked: selecting the 'Select at workspace level' option. Then I selected the interpreter, and this time it allowed me to browse for the interpreter on my computer, which then updated the interpreter. I don't see what the problem was, but it worked.
You can use shortcuts "ctrl+shift+P" and type "Python: Clear Workspace Interpreter Settings" AND "Python: Select Interpreter" to change the environment. By default, the Python extension looks for and uses the first Python interpreter it finds in the system path. To select a specific environment, use the Python: Select Interpreter command from the Command Palette (Ctrl+Shift+P). You can refer to the docs for more details.
4
12
73,484,988
2022-8-25
https://stackoverflow.com/questions/73484988/tqdm-notebook-bar-outputs-text-in-jupyter-lab
I am having a problem when using the tqdm.notebook progress bar in Jupyter (version 3.4.4). When I launch a for loop, instead of the progress bar, I get the following text as output: Input: from tqdm.notebook import tqdm for i in tqdm(range(100)): a = 1 Output: root: n: 0 total: 100 elapsed: 0.01399087905883789 ncols: null nrows: 29 prefix: "" ascii: false unit: "it" unit_scale: false rate: null bar_format: null postfix: null unit_divisor: 1000 initial: 0 colour: null This started happening after I updated Jupyter to its latest version. The usual solutions regarding Node.js and ipywidgets (see this one) didn't do the job. tqdm is also at its latest version (4.63.0).
I ran across this in a dockerized jupyterlab service. This fixed it for me: (Done in the Dockerfile): pip install -U jupyterlab-widgets==1.1.1 pip install -U ipywidgets==7.7.2
17
16
73,534,425
2022-8-29
https://stackoverflow.com/questions/73534425/remove-image-background-so-that-only-the-logo-usually-some-text-remains-as-png
I would like to extract logos from golf balls for further image processing. I have already tried different methods. I wanted to use the grayscale value of the images to locate the logo and then cut it out. Due to many different logos and a black border around the images, this method unfortunately failed. As my next approach, I thought I would first remove the black background and then repeat the procedure from 1, but also without success, because there is a dark shadow in the lower left corner and this is also recognized as the "logo" with the grayscale method. Covering the border further on the outside is not a solution, because otherwise logos that are on the border will also be cut away or only half of them will be detected. I used the Canny edge detection algorithm from the OpenCV library. The detection looked very promising, but I was not able to extract only the logo from the detection, because the edge of the golf ball was also recognized. Any solution is welcome. Please forgive my English. Also, I am quite a beginner in programming. Probably there is a very simple solution to my problem, but I thank you in advance for your help. Here are 2 example images: first the type of images from which the logos should be extracted, and then how the image should look after extraction. Thank you very much. Best regards T
This is essentially "adaptive" thresholding, except this approach doesn't need to threshold. It adapts to the illumination, leaving you with a perfectly fine grayscale image (or color, if extended to do that). The steps are: median blur (large kernel size) to estimate the ball/illumination; division to normalize illumination; normalized (and scaled a bit); thresholded with Otsu: import cv2 as cv import numpy as np def process(im, r=80): med = cv.medianBlur(im, 2*r+1) with np.errstate(divide='ignore', invalid='ignore'): normalized = np.where(med <= 1, 1, im.astype(np.float32) / med.astype(np.float32)) return (normalized, med) normalized, med = process(ball1, 80) # imshow(med) # imshow(normalized * 0.8) ret, thresh = cv.threshold((normalized.clip(0,1) * 255).astype('u1'), 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU) # imshow(thresh)
4
1
73,545,390
2022-8-30
https://stackoverflow.com/questions/73545390/monkey-patching-class-and-instance-in-python
I am confused by the following difference. Say I have this class with some use case: class C: def f(self, a, b, c=None): print(f"Real f called with {a=}, {b=} and {c=}.") my_c = C() my_c.f(1, 2, c=3) # Output: Real f called with a=1, b=2 and c=3. I can monkey patch it for the purpose of testing like this: class C: def f(self, a, b, c=None): print(f"Real f called with {a=}, {b=} and {c=}.") def f_monkey_patched(self, *args, **kwargs): print(f"Patched f called with {args=} and {kwargs=}.") C.f = f_monkey_patched my_c = C() my_c.f(1, 2, c=3) # Output: Patched f called with args=(1, 2) and kwargs={'c': 3}. So far so good. But I would like to patch only one single instance, and it somehow consumes the first argument: class C: def f(self, a, b, c=None): print(f"Real f called with {a=}, {b=} and {c=}.") def f_monkey_patched(self, *args, **kwargs): print(f"Patched f called with {args=} and {kwargs=}.") my_c = C() my_c.f = f_monkey_patched my_c.f(1, 2, c=3) # Output: Patched f called with args=(2,) and kwargs={'c': 3}. Why has the first argument been consumed as self instead of the instance itself?
Functions in Python are descriptors; when they're attached to a class, but looked up on an instance of the class, the descriptor protocol gets invoked, producing a bound method on your behalf (so my_c.f, where f is defined on the class, is distinct from the actual function f you originally defined, and implicitly passes my_c as self). If you want to make a replacement that shadows the class f only for a specific instance, but still passes along the instance as self like you expect, you need to manually bind the instance to the function to create the bound method using the (admittedly terribly documented) types.MethodType: from types import MethodType # The class implementing bound methods in Python 3 # ... Definition of C and f_monkey_patched unchanged my_c = C() my_c.f = MethodType(f_monkey_patched, my_c) # Creates a pre-bound method from the function and # the instance to bind to Being bound, my_c.f will now behave as a function that does not accept self from the caller, but when called self will be received as the instance bound to my_c at the time the MethodType was constructed. Update with performance comparisons: Looks like, performance-wise, all the solutions are similar enough as to be irrelevant performance-wise (Kedar's explicit use of the descriptor protocol and my use of MethodType are equivalent, and the fastest, but the percentage difference over functools.partial is so small that it won't matter under the weight of any useful work you're doing): >>> # ... define C as per OP >>> def f_monkey_patched(self, a): # Reduce argument count to reduce unrelated overhead ... pass >>> from types import MethodType >>> from functools import partial >>> partial_c, mtype_c, desc_c = C(), C(), C() >>> partial_c.f = partial(f_monkey_patched, partial_c) >>> mtype_c.f = MethodType(f_monkey_patched, mtype_c) >>> desc_c.f = f_monkey_patched.__get__(desc_c, C) >>> %%timeit x = partial_c # Swapping in partial_c, mtype_c or desc_c ... x.f(1) ... I'm not even going to give exact timing outputs for the IPython %%timeit magic, as it varied across runs, even on a desktop without CPU throttling involved. All I could say for sure is that partial was reliably a little slower, but only by a matter of ~1 ns (the other two typically ran in 56-56.5 ns, the partial solution typically took 56.5-57.5), and it took quite a lot of paring of extraneous stuff (e.g. switching from %timeit reading the names from global scope causing dict lookups to caching to a local name in %%timeit to use simple array lookups) to even get the differences that predictable. Point is, any of them work, performance-wise. I'd personally recommend either my MethodType or Kedar's explicit use of descriptor protocol approach (they are identical in end result AFAICT; both produce the same bound method class), whichever one looks prettier to you, as it means the bound method is actually a bound method (so you can extract .__self__ and .__func__ like you would on any bound method constructed the normal way, where partial requires you to switch to .args[0] and .func to get the same info).
4
4
73,532,164
2022-8-29
https://stackoverflow.com/questions/73532164/proper-data-encryption-with-a-user-set-password-in-python3
I have been looking for a proper data encryption library in Python for a long while; today I needed it once again and cannot find anything. Is there any way to encrypt data using a user-set password? If I find something it's usually insecure, and if I find a good solution it has no support for user-set passwords, meaning I'm stuck. Any way to do it? Here's some pseudocode: import encryption encryptor: encryption.Crypt = encryption.Crypt("my secret password") encryptor.encrypt("hello this is my very secret string") # => 9oe gyu yp9q*(Y 28j encryptor.decrypt("9oe gyu yp9q*(Y 28j") # => hello this is my very secret string I don't care if it's an object; for all I care it can also be a function which accepts the password: import encryption encryption.encrypt("hello this is my very secret string", "my secret password") # => 9oe gyu yp9q*(Y 28j encryption.decrypt("9oe gyu yp9q*(Y 28j", "my secret password") # => hello this is my very secret string I don't mind the way it's encrypted or decrypted, I just want to have a way to do it :). I also don't care about its output; it can be binary, an object, a string, anything.
Building on the answer from Sam Hartzog, below is an example which follows the logic described for PBES2 (Password Based Encryption Scheme 2) defined in RFC8018, Section 6.2. However, it stops short of encoding algorithm choices and parameters. #!/usr/bin/python import base64 import secrets from cryptography.fernet import Fernet from cryptography.hazmat.primitives import hashes from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC KDF_ALGORITHM = hashes.SHA256() KDF_LENGTH = 32 KDF_ITERATIONS = 120000 def encrypt(plaintext: str, password: str) -> (bytes, bytes): # Derive a symmetric key using the passsword and a fresh random salt. salt = secrets.token_bytes(16) kdf = PBKDF2HMAC( algorithm=KDF_ALGORITHM, length=KDF_LENGTH, salt=salt, iterations=KDF_ITERATIONS) key = kdf.derive(password.encode("utf-8")) # Encrypt the message. f = Fernet(base64.urlsafe_b64encode(key)) ciphertext = f.encrypt(plaintext.encode("utf-8")) return ciphertext, salt def decrypt(ciphertext: bytes, password: str, salt: bytes) -> str: # Derive the symmetric key using the password and provided salt. kdf = PBKDF2HMAC( algorithm=KDF_ALGORITHM, length=KDF_LENGTH, salt=salt, iterations=KDF_ITERATIONS) key = kdf.derive(password.encode("utf-8")) # Decrypt the message f = Fernet(base64.urlsafe_b64encode(key)) plaintext = f.decrypt(ciphertext) return plaintext.decode("utf-8") def main(): password = "aStrongPassword" message = "a secret message" encrypted, salt = encrypt(message, password) decrypted = decrypt(encrypted, password, salt) print(f"message: {message}") print(f"encrypted: {encrypted}") print(f"decrypted: {decrypted}") Output: message: a secret message encrypted: b'gAAAAABjDlH2eaRZmB4rduBdNHUOITV5q4oelpnLRUgI_uyQyNpUyW8h3c2lZYS1MwMpRWIZposcZvag9si1pc4IEK83_CzyBdXF27Aop9WWS6ybxTg9BSo=' decrypted: a secret message
5
5
73,493,910
2022-8-25
https://stackoverflow.com/questions/73493910/chunking-api-response-cuts-off-required-data
I am reading chunks of data from an API response using the following code: d = zlib.decompressobj(zlib.MAX_WBITS|16) # for gzip for i in range(0, len(data), 4096): chunk = data[i:i+4096] # print(chunk) str_chunk = d.decompress(chunk) str_chunk = str_chunk.decode() # print(str_chunk) if '"@odata.nextLink"' in str_chunk: ab = '{' + str_chunk[str_chunk.index('"@odata.nextLink"'):len(str_chunk)+1] ab = ast.literal_eval(ab) url = ab['@odata.nextLink'] return url An example of this working is: "@odata.nextLink":"someurl?$count=true It works in most cases, but sometimes this key-value pair gets cut off and it appears something like this: "@odata.nextLink":"someurl?$coun I can play around with the number of bytes in this line for i in range(0, len(data), 4096) but that doesn't guarantee the data won't be cut off, since the page size (data size) can be different for each page. How can I ensure that this key-value pair is never cut off? Also, note that this key-value pair is the last line/last key-value pair of the API response. P.S.: I can't play around with API request parameters. I even tried reading it backwards, but this gives an incorrect header issue: for i in range(len(data), 0, -4096): chunk = data[i -4096: i] str_chunk = d.decompress(chunk) str_chunk = str_chunk.decode() if '"@odata.nextLink"' in str_chunk: ab = '{' + str_chunk[str_chunk.index('"@odata.nextLink"'):len(str_chunk)+1] ab = ast.literal_eval(ab) url = ab['@odata.nextLink'] #print(url) return url The above produces the following error, which is really strange: str_chunk = d.decompress(chunk) zlib.error: Error -3 while decompressing data: incorrect header check
str_chunk is a contiguous sequence of bytes from the API response that can start anywhere in the response, and end anywhere in the response. Of course it will sometimes end in the middle of some semantic content. (New information from comment that OP neglected to put in question. In fact, still not in question. OP requires that entire uncompressed content not be saved in memory.) If "@odata.nextLink" is a reliable marker for what you're looking for, then keep the last two decompressed chunks, concatenate those, then look for that marker. Once found, continue to read more chunks, concatenating them, until you have the full content you're looking for.
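A rough sketch of the approach described above (not from the original answer): decompress chunk by chunk, keep the previous decompressed piece so the marker can never be split across a boundary, then keep appending until the value's end arrives. Treating the closing double quote as the end of the URL is an assumption about the response format:

import zlib

MARKER = b'"@odata.nextLink"'

def find_next_link(data: bytes, chunk_size: int = 4096) -> str | None:
    d = zlib.decompressobj(zlib.MAX_WBITS | 16)  # gzip
    prev = b""   # previous decompressed piece
    tail = b""   # collected bytes once the marker has been seen
    for i in range(0, len(data), chunk_size):
        piece = d.decompress(data[i:i + chunk_size])
        if not tail:
            window = prev + piece          # marker cannot be split across these two
            idx = window.find(MARKER)
            if idx != -1:
                tail = window[idx:]
            prev = piece
        else:
            tail += piece
        if tail:
            # the pair looks like "@odata.nextLink":"...url..."
            start = tail.find(b'":"')
            if start != -1:
                end = tail.find(b'"', start + 3)
                if end != -1:
                    return tail[start + 3:end].decode()
    return None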
4
2
73,545,218
2022-8-30
https://stackoverflow.com/questions/73545218/utf-8-encoding-exception-with-subprocess-run
I'm having a hard time using the subprocess.run function with a command that contains accentuated characters (like "é" for example). Consider this simple example : # -*- coding: utf-8 -*- import subprocess cmd = "echo é" result = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE) print("Output of subprocess.run : {}".format(result.stdout.hex())) print("é char encoded manually : {}".format("é".encode("utf-8").hex())) It gives the following output : Output of subprocess.run : 820d0a é char encoded manually : c3a9 I don't understand the value returned by subprocess.run, shouldn't it also be c3a9 ? I understand the 0d0a is CR+LF, but why 82 ? Because of this, when I try to run this line : output = result.stdout.decode("utf-8") I get a UnicodeDecodeError Exception with the following message : 'utf-8' codec can't decode byte 0x82 in position 0: invalid start byte I tried explicitly specifying the encoding format like this : result = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, encoding="utf-8") But this raises the same exception ('utf-8' codec can't decode byte 0x82 in position 0: invalid start byte) when subprocess.run is called. I'm running this on Windows 10 with Python3.8.5. I hope someone can help me with this, any hint ?
As a fix try cp437 decoding: print("Output of subprocess.run : {}".format(result.stdout.decode('cp437'))) # or result = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, text=True, encoding="cp437") print(f"Output of subprocess.run : {result.stdout}") From other Stack Overflow answers it seems that the Windows terminal code page issue is old and probably should be fixed by now, but it seems it is still present. https://stackoverflow.com/a/37260867/11815313 Anyway, I have no deeper understanding of Windows 10 terminal encoding, but cp437 worked for my Win10 system. However, the Python 3.9.13 documentation (3. Using Python on Windows, 3.7. UTF-8 mode) describes an option to temporarily or permanently change the encoding (note the caveat mentioned in the documentation).
4
3
73,531,279
2022-8-29
https://stackoverflow.com/questions/73531279/caching-at-queryset-level-in-django
I'm trying to get a queryset from the cache, but am unsure if this even has a point. I have the following method (simplified) inside a custom queryset: def queryset_from_cache(self, key: str=None, timeout: int=60): # Generate a key based on the query. if key is None: key = self.__generate_key # () # If the cache has the key, return the cached object. cached_object = cache.get(key, None) # If the cache doesn't have the key, set the cache, # and then return self (from DB) as cached_object if cached_object is None: cached_object = self cache.set(key, cached_object , timeout=timeout) return cached_object The usage is basically to append it to a django QuerySet method, for example: queryset = MyModel.objects.filter(id__range=[0,99]).queryset_from_cache() My question: Would usage like this work? Or would it call MyModel.objects.filter(id__range=[0,99]) from the database no matter what? Since normally caching would be done like this: cached_object = cache.get(key, None) if cached_object is None: cached_object = MyModel.objects.filter(id__range=[0,99]) #Only now call the query cache.set(key, cached_object , timeout=timeout) And thus the queryset filter() method only gets called when the key is not present in the cache, as opposed to always calling it, and then trying to get it from the cache with the queryset_from_cache method.
This is a really cool idea, but I'm not sure if you can Cache full-on Objects.. I think it's only attributes Now this having a point. Grom what I'm seeing from the limited code I've seen idk if it does have a point, unless filtering for Jane and John (and only them) is very common. Very narrow. Maybe just try caching ALL the users or just individual Users, and only the attributes you need Update Yes! you are completetly correct, you can cache full on objects- how cool! I don't think your example method of queryset = MyModel.objects.filter(id__range=[0,99]).queryset_from_cache() would work. but you can do something similar by using Model Managers and do something like: queryset = MyModel.objects.queryset_from_cache(filterdict) Models Natually you can return just the qs, this is just for the example to show it actually is from the cache from django.db import models class MyModelManager(models.Manager): def queryset_from_cache(self, filterdict): from django.core.cache import cache cachekey = 'MyModelCache' qs = cache.get(cachekey) if qs: d = { 'in_cache': True, 'qs': qs } else: qs = MyModel.objects.filter(**filterdict) cache.set(cachekey, qs, 300) # 5 min cache d = { 'in_cache': False, 'qs': qs } return d class MyModel(models.Model): name = models.CharField(max_length=200) # # other attributes # objects = MyModelManager() Example Use from app.models import MyModel filterdict = {'pk__range':[0,99]} r = MyModel.objects.queryset_from_cache(filterdict) print(r['qs']) While it's not exactly what you wanted, it might be close enough
4
5
73,539,271
2022-8-30
https://stackoverflow.com/questions/73539271/combining-several-sheets-into-one-excel
I am using this code to put all Excel files and sheets into one Excel file, and it works flawlessly. But on some occasions, I want to put them all into a single excel file, but to keep the sheets separate. I know there is "Copy sheet" in Excel, but I want to do it to multiple documents. I am sure pandas has such a function, so I do not have to do this manually. I also want to keep all the data as it was, with no added columns or rows, and keep the name of the sheet as well. If you have an idea, please help. Example: Workbook1: Sheet1, Sheet2, Sheet3 Workbook2: Sheet11, Sheet22 Apply code... FinalWorkbook: Sheet1, Sheet2, Sheet3, Sheet11, Sheet22, This code puts all the data into a single sheet. import os import pandas as pd print("Combine xls and xlsx") cwd = os.path.abspath('') files = os.listdir(cwd) ## get all sheets of a given file df_total = pd.DataFrame() for file in files: # loop through Excel files if file.endswith('.xls') or file.endswith('.xlsx'): excel_file = pd.ExcelFile(file) sheets = excel_file.sheet_names for sheet in sheets: # loop through sheets inside an Excel file print (file, sheet) df = excel_file.parse(sheet_name = sheet) df_total = df_total.append(df) print("Loaded, ENTER to combine:") dali=input() df_total.to_excel('Combined/combined_file.xlsx') print("Done") dali=input()
Here is the answer. I hope someone finds this useful. It combines all sheets from all Excel files (XLS or XLSX) into a single Excel file with all sheets. import os import pandas as pd import openpyxl print("Copying sheets from multiple files to one file") cwd = os.path.abspath('') files = os.listdir(cwd) df_total = pd.DataFrame() df_total.to_excel('Combined/combined_file.xlsx') #create a new file workbook=openpyxl.load_workbook('Combined/combined_file.xlsx') ss_sheet = workbook['Sheet1'] ss_sheet.title = 'TempExcelSheetForDeleting' workbook.save('Combined/combined_file.xlsx') for file in files: # loop through Excel files if file.endswith('.xls') or file.endswith('.xlsx'): excel_file = pd.ExcelFile(file) sheets = excel_file.sheet_names for sheet in sheets: # loop through sheets inside an Excel file print (file, sheet) df = excel_file.parse(sheet_name = sheet) with pd.ExcelWriter("Combined/combined_file.xlsx",mode='a') as writer: df.to_excel(writer, sheet_name=f"{sheet}", index=False) #df.to_excel("Combined/combined_file.xlsx", sheet_name=f"{sheet}") workbook=openpyxl.load_workbook('Combined/combined_file.xlsx') std=workbook["TempExcelSheetForDeleting"] workbook.remove(std) workbook.save('Combined/combined_file.xlsx') print("Loaded, press ENTER to end") dali=input() #df_total.to_excel('Combined/combined_file.xlsx') print("Done") dali=input()
4
2
73,539,783
2022-8-30
https://stackoverflow.com/questions/73539783/check-numerically-if-numbers-in-array-start-with-given-digits
I have a numpy array of integers a and an integer x. For each element in a I want to check whether it starts with x (so the elements of a usually have more digits than x, but that's not guaranteed for every element). I was thinking of converting the integers to strings and then checking it with pandas import pandas as pd import numpy as np a = np.array([4141, 4265, 4285, 4, 41656]) x = 42 pd.Series(a).astype(str).str.startswith(str(x)).values # .values returns numpy array instead of pd.Series [False True True False False] This works and is also quite performant, but for educational purpose I was wondering if there is also an elegant solution doing it numerically in numpy only.
You can get the number of digits using the log10, then divide as integer: # number of digits of x n = int(np.ceil(np.log10(x+1))) # number of digits in array n2 = np.ceil(np.log10(a+1)).astype(int) # get first n digits of numbers in array out = a//10**np.clip((n2-n), 0, np.inf) == x output: array([False, True, True, False, False])
3
4
73,496,946
2022-8-26
https://stackoverflow.com/questions/73496946/vscode-autocomplete-and-suggestion-intellisense-doesnt-work-for-tensorflow-an
The VSCode autocomplete option doesn't work for the tensorflow and keras libraries, even though I've installed the Python and Pylance extensions. Is there any solution to make it work without installing a new extension or an AI autocomplete tool such as Kite or Tabnine? For instance, here I'm trying to use layers or preprocessing from the keras API, but it doesn't show anything at all. Also notice here the tensorflow version and python version.
A potentially useful fix: try adding this to the bottom of your tensorflow/__init__.py # Explicitly import lazy-loaded modules to support autocompletion. # pylint: disable=g-import-not-at-top if _typing.TYPE_CHECKING: from tensorflow_estimator.python.estimator.api._v2 import estimator as estimator from keras.api._v2 import keras from keras.api._v2.keras import losses from keras.api._v2.keras import metrics from keras.api._v2.keras import optimizers from keras.api._v2.keras import initializers # pylint: enable=g-import-not-at-top Find the location of the tensorflow package and open the __init__.py file in the tensorflow folder, then add the above code at the bottom of the file. It should be noted that it is useful to import in the following way: import tensorflow as tf tf.keras Importing the previous way still won't give you IntelliSense.
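If you're not sure where that __init__.py lives, a quick way to locate it is to ask the installed package itself (a small sketch; run it in the same environment that VSCode's interpreter points to):

import tensorflow
print(tensorflow.__file__)   # prints something like .../site-packages/tensorflow/__init__.py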
3
5
73,507,463
2022-8-26
https://stackoverflow.com/questions/73507463/how-do-i-specify-a-custom-lookup-field-for-a-drf-action-on-a-viewset
I would like to specify a custom lookup field on the action (different from the viewset default "pk"), i.e. @action( methods=["GET"], detail=True, url_name="something", url_path="something", lookup_field="uuid", # this does not work unfortunately ) def get_something(self, request, uuid=None): pass But the router does not generate the correct urls: router = DefaultRouter() router.register(r"test", TestViewSet) router.urls yields url: '^test/(?P<pk>[^/.]+)/something/$' instead of '^test/(?P<uuid>[^/.]+)/something/$' I do not want to change the lookup field for the whole viewset though and have been unsuccessful in finding a way to do this for the action itself after debugging through the router url generation. I did notice that model viewsets have this method: get_extra_action_url_map(self) but am unsure how to get it to be called to generate custom urls or if it is even relevant. Any help would be great thanks!
I think it will create much confusion for your API consumers if you have two different resource identifiers for the same resource. You could name that action query_by_uuid, or just allow consumers to use the list view to filter by uuid if you only want to represent the object (so they can use /test/?uuid= to retrieve the data). But if you really want to do it, you can simply override the get_object method to filter for your custom action: def get_object(self): if self.action == 'do_something': return get_object_or_404(self.get_queryset(), uuid=self.kwargs['pk']) return super().get_object() And here is a slightly hacky solution for getting the uuid into the router URL with detail=False: @action(detail=False, url_path=r'(?P<uuid>[^/.]+)/do_something') def do_something(self, request, uuid=None): pass
3
1
73,534,908
2022-8-29
https://stackoverflow.com/questions/73534908/how-to-groupby-and-resample-data-in-pandas
I have sales data for different customers on different dates. But the dates are not continuous and I would like to resample the data to daily frequency. How can I do this? MWE import numpy as np import pandas as pd df = pd.DataFrame({'id': list('aababcbc'), 'date': pd.date_range('2022-01-01',periods=8), 'value':range(8)}).sort_values('id') df id date value 0 a 2022-01-01 0 1 a 2022-01-02 1 3 a 2022-01-04 3 2 b 2022-01-03 2 4 b 2022-01-05 4 6 b 2022-01-07 6 5 c 2022-01-06 5 7 c 2022-01-08 7 The required output is following id date value a 2022-01-01 0 a 2022-01-02 1 a 2022-01-03 0 ** there is no data for a in this day a 2022-01-04 3 b 2022-01-03 2 b 2022-01-04 0 ** there is no data for b in this day b 2022-01-05 4 b 2022-01-06 0 ** there is no data for b in this day b 2022-01-07 6 c 2022-01-06 5 c 2022-01-07 0 ** there is no data for c in this day c 2022-01-08 7 My attempt df.groupby(['id']).resample('D',on='date')['value'].sum().reset_index()
df["date"] = pd.to_datetime(df["date"]) df.set_index("date").groupby("id").resample("1d").sum()
3
4
73,532,990
2022-8-29
https://stackoverflow.com/questions/73532990/get-a-count-of-vaues-that-are-on-or-before-a-certain-date-from-a-pandas-datafram
I have a date 2020-05-31 and the following dataframe, where the column names are statuses: rejected revocation decision rfe interview premium received rfe_response biometrics withdrawal appeal 196 None None 2020-01-28 None None None 2020-01-16 None None None None 203 None None 2020-06-20 2020-04-01 None None 2020-01-03 2020-08-08 None None None 209 None None 2020-12-03 2020-06-03 None None 2020-01-03 None None None None 213 None None 2020-06-23 None None None 2020-01-27 None 2020-02-19 None None 1449 None None 2020-05-12 None None None 2020-01-06 None None None None 1660 None None 2021-09-23 2021-05-27 None None 2020-01-21 2021-08-17 None None None I want to get the latest step each row is in, such that the latest step is on or before the date mentioned above, 2020-05-31. So the output for this would be: 196: decision 203: rfe 209: received 213: biometrics 1449: decision 1660: received or even a count works: { rejected = 0, revocation = 0, decision = 2, rfe = 1, interview = 0, premium = 0, received = 2, rfe_response = 0, biometrics = 1, withdrawal = 0, appeal = 0 } Currently I am looping through each row, where I create a dict of {status: date}, then I sort by date, and take the key of the last value (which is a status). This is very slow and takes forever. Is there a simpler or cleaner way of doing it? NOTE: Each row will have at least one date, in the decision column
you can mask where the date is bigger than the chosen date, then use idxmax along the columns. dt_max = '2020-05-31' res = df.where(df.le(dt_max)).astype('datetime64[ns]')\ .dropna(how='all', axis=0).idxmax(axis=1) print(res) # 196 decision # 203 rfe # 209 received # 213 biometrics # 1449 decision # 1660 received # dtype: object And for the count, per status, then you can do with value_counts like dict_res = res.value_counts().reindex(df.columns, fill_value=0).to_dict() print(dict_res) #{'rejected': 0, 'revocation': 0, 'decision': 2, 'rfe': 1, 'interview': 0, 'premium': 0, # 'received': 2, 'rfe_response': 0, 'biometrics': 1, 'withdrawal': 0, 'appeal': 0} EDIT thanks to a comment from @mozway, I added dropna to create res to prevent the method to fail if none of the dates are under the threshold on a row
3
4
73,530,034
2022-8-29
https://stackoverflow.com/questions/73530034/allow-python-re-findall-to-find-overlapping-mathes-from-left-to-right
My requirement is very simple, but I just could not figure out how to reach it. This is the original string ACCCTNGGATGTGGGGGGATGTCCCCCATGTGCTCG, I want to find out all the sub-strings that only consist of [ACGT], end with ATGT, and have a length of at least 8. And what I expect is: GGATGTGGGGGGATGT GGATGTGGGGGGATGTCCCCCATGT With following code: import re seq = 'ACCCTNGGATGTGGGGGGATGTCCCCCATGTGCTCG' matches = re.findall("[ACGT]{4,}ATGT", seq) if matches: for match in matches: print(match) I got only GGATGTGGGGGGATGTCCCCCATGT The shorter one is missing. Then I realized that re.findall doesn't allow overlapping. I found this solution from How to use regex to find all overlapping matches, then I modified the code as: matches = re.findall("(?=([ACGT]{4,}ATGT))", seq) Then I got: GGATGTGGGGGGATGTCCCCCATGT GATGTGGGGGGATGTCCCCCATGT ATGTGGGGGGATGTCCCCCATGT TGTGGGGGGATGTCCCCCATGT GTGGGGGGATGTCCCCCATGT TGGGGGGATGTCCCCCATGT GGGGGGATGTCCCCCATGT GGGGGATGTCCCCCATGT GGGGATGTCCCCCATGT GGGATGTCCCCCATGT GGATGTCCCCCATGT GATGTCCCCCATGT ATGTCCCCCATGT TGTCCCCCATGT GTCCCCCATGT TCCCCCATGT CCCCCATGT CCCCATGT Then I realized that this searching starts from right to left. So how can I ask re.findall to search from left to right and also allow for overlapping?
You can use PyPI's regex module, utilizing reversed and overlapped matching with only a small addition to your initial pattern: (?r)[ACGT]{4,}ATGT For example: import regex as re seq = 'ACCCTNGGATGTGGGGGGATGTCCCCCATGTGCTCG' matches = re.findall(r'(?r)[ACGT]{4,}ATGT', seq, overlapped=True) print(matches) Prints: ['GGATGTGGGGGGATGTCCCCCATGT', 'GGATGTGGGGGGATGT']
3
5
73,485,081
2022-8-25
https://stackoverflow.com/questions/73485081/save-the-multiple-images-into-pdf-without-chainging-the-format-of-subplot
I have a df like the one shown below. What I'm doing is looping through the df column(s) with paths and printing the images as subplots, one column with image paths on axis 0 and the other column's paths in parallel on axis 1, as follows. identity VGG-Face_cosine img comment 0 ./clip_v4/3.png 1.110223e-16 .\clip_v3\0.png .\clip_v3\0.png is matched with ./clip_v4/3.png 0 ./clip_v4/2.png 2.220446e-16 .\clip_v3\1.png .\clip_v3\1.png is matched with ./clip_v4/2.png 1 ./clip_v4/4.png 2.220446e-16 .\clip_v3\1.png .\clip_v3\1.png is matched with ./clip_v4/4.png 2 ./clip_v4/5.png 2.220446e-16 .\clip_v3\1.png .\clip_v3\1.png is matched with ./clip_v4/5.png 0 ./clip_v4/2.png 2.220446e-16 .\clip_v3\2.png .\clip_v3\2.png is matched with I'm looping through these 2 columns, identity and img, and plotting as follows import pandas as pd import matplotlib.pyplot as plt import matplotlib.image as mpimg from matplotlib import rcParams df = df.iloc[1:] #merged_img = [] for index, row in df.iterrows(): # figure size in inches optional rcParams['figure.figsize'] = 11 ,8 # read images img_A = mpimg.imread(row['identity']) img_B = mpimg.imread(row['img']) # display images fig, ax = plt.subplots(1,2) ax[0].imshow(img_A) ax[1].imshow(img_B) sample output I got. ###Console output Up to now it's fine. My next idea is to save these images as they are, with subplots, to a PDF. I don't want to change the structure of the way it prints; I just want 2 images side by side in the PDF too. I've gone through many available solutions, but I can't relate my part of the code with the logic available in the documentation. Is there any way to achieve my goal? Any references would be helpful! Thanks in advance.
Use PdfPages from matplotlib.backends.backend_pdf to save figures one by one on separate pages of the same pdf-file: import pandas as pd import matplotlib.pyplot as plt import matplotlib.image as mpimg from matplotlib import rcParams from matplotlib.backends.backend_pdf import PdfPages df = df.iloc[1:] rcParams['figure.figsize'] = 11 ,8 pdf_file_name = 'my_images.pdf' with PdfPages(pdf_file_name) as pdf: for index, row in df.iterrows(): img_A = mpimg.imread(row['identity']) img_B = mpimg.imread(row['img']) fig, ax = plt.subplots(1,2) ax[0].imshow(img_A) ax[1].imshow(img_B) # save the current figure at a new page in pdf_file_name pdf.savefig() See also https://matplotlib.org/stable/api/backend_pdf_api.html
6
4
73,530,081
2022-8-29
https://stackoverflow.com/questions/73530081/convert-nested-dictionary-to-multilevel-column-dataframe
I have a dictionary which I want to convert to multilevel column dataframe and the index will be the most outer keys of the dictionary. my_dict = {'key1': {'sub-key1': {'sub-sub-key1':'a','sub-sub-key2':'b'}, 'sub-key2': {'sub-sub-key1':'aa','sub-sub-key2':'bb'}}, 'key2': {'sub-key1': {'sub-sub-key1':'c','sub-sub-key2':'d'}, 'sub-key2': {'sub-sub-key1':'cc','sub-sub-key2':'dd'}}} My desired output should look like: sub-key1 sub-key2 sub-sub-key1 sub-sub-key2 sub-sub-key1 sub-sub-key2 key1 a b aa bb key2 c d cc dd I tried to use concat with pd.concat({k: pd.DataFrame.from_dict(my_dict, orient='index') for k, v in d.items()}, axis=1) but the result is not as expected. I also tried to reform the dictionary. reformed_dict = {} for outerKey, innerDict in my_dict.items(): for innerKey, values in innerDict.items(): reformed_dict[(outerKey, innerKey)] = values pd.DataFrame(reformed_dict) Again the result was not ok. The highest level column and index are interchanged. Is there any other way to do this?
You were pretty close with concat; you just need to unstack afterwards, like so: res = pd.concat({k: pd.DataFrame.from_dict(v, orient='columns') for k, v in my_dict.items()} ).unstack() print(res) # sub-key1 sub-key2 # sub-sub-key1 sub-sub-key2 sub-sub-key1 sub-sub-key2 # key1 a b aa bb # key2 c d cc dd
3
3
73,514,548
2022-8-27
https://stackoverflow.com/questions/73514548/no-module-named-streamlit-cli
I am using MAC and downloaded the streamlit package using conda-forge. I am getting an error message as below. from streamlit.cli import main ModuleNotFoundError: No module named 'streamlit.cli' I have checked a stackoverflow post with the same issue, and it recommends installing networkx to fix this issue, but no help in my case. If you have any suggestions, please let me know. Edit I added some more information based on merv's suggestion. Q. commands used to install streamlit A. conda install -c conda-forge streamlit Q. how I activate the Conda environment A. conda create --name web-app python=3.9 conda activate web-app Q. how you start Python A. I used vs code and set the interpreter to the env that I created and also on the vscode terminal activated env that I created. Then I typed "streamlit hello"m and I was getting the error. Q. conda list output A. This file may be used to create an environment using: $ conda create --name --file platform: osx-64 abseil-cpp=20211102.0=h96cf925_1 altair=4.2.0=pyhd8ed1ab_1 appnope=0.1.3=pyhd8ed1ab_0 arrow-cpp=8.0.0=py39had1886b_0 asttokens=2.0.8=pyhd8ed1ab_0 attrs=22.1.0=pyh71513ae_1 aws-c-common=0.4.57=hb1e8313_1 aws-c-event-stream=0.1.6=h23ab428_5 aws-checksums=0.1.9=hb1e8313_0 aws-sdk-cpp=1.8.185=he271ece_0 backcall=0.2.0=pyh9f0ad1d_0 backports=1.0=py_2 backports.functools_lru_cache=1.6.4=pyhd8ed1ab_0 blinker=1.4=py_1 boost-cpp=1.80.0=h97e07a4_0 brotli=1.0.9=h5eb16cf_7 brotli-bin=1.0.9=h5eb16cf_7 brotlipy=0.7.0=py39h63b48b0_1004 bzip2=1.0.8=h0d85af4_4 c-ares=1.18.1=h0d85af4_0 ca-certificates=2022.6.15=h033912b_0 cachetools=5.2.0=pyhd8ed1ab_0 certifi=2022.6.15=pyhd8ed1ab_1 cffi=1.15.1=py39hae9ecf2_0 charset-normalizer=2.1.1=pyhd8ed1ab_0 click=8.1.3=py39h6e9494a_0 commonmark=0.9.1=py_0 cryptography=37.0.4=py39h9c2a9ce_0 dataclasses=0.8=pyhc8e2a94_3 debugpy=1.6.3=py39hd91caee_0 decorator=5.1.1=pyhd8ed1ab_0 entrypoints=0.4=pyhd8ed1ab_0 executing=0.10.0=pyhd8ed1ab_0 freetype=2.12.1=h3f81eb7_0 future=0.18.2=py39h6e9494a_5 gflags=2.2.2=hb1e8313_1004 gitdb=4.0.9=pyhd8ed1ab_0 gitpython=3.1.27=pyhd8ed1ab_0 glog=0.6.0=h8ac2a54_0 grpc-cpp=1.46.1=h067a048_0 icu=70.1=h96cf925_0 idna=3.3=pyhd8ed1ab_0 importlib-metadata=4.11.4=py39h6e9494a_0 importlib_resources=5.9.0=pyhd8ed1ab_0 ipykernel=6.15.1=pyh736e0ef_0 ipython=8.4.0=pyhd1c38e8_1 ipython_genutils=0.2.0=py_1 ipywidgets=8.0.1=pyhd8ed1ab_0 jedi=0.18.1=pyhd8ed1ab_2 jinja2=3.1.2=pyhd8ed1ab_1 jpeg=9e=hac89ed1_2 jsonschema=4.14.0=pyhd8ed1ab_0 jupyter_client=7.3.5=pyhd8ed1ab_0 jupyter_core=4.11.1=py39h6e9494a_0 jupyterlab_widgets=3.0.2=pyhd8ed1ab_0 krb5=1.19.3=hb49756b_0 lcms2=2.12=h577c468_0 lerc=4.0.0=hb486fe8_0 libblas=3.9.0=16_osx64_openblas libbrotlicommon=1.0.9=h5eb16cf_7 libbrotlidec=1.0.9=h5eb16cf_7 libbrotlienc=1.0.9=h5eb16cf_7 libcblas=3.9.0=16_osx64_openblas libcurl=7.83.1=h372c54d_0 libcxx=14.0.6=hce7ea42_0 libdeflate=1.13=h775f41a_0 libedit=3.1.20191231=h0678c8f_2 libev=4.33=haf1e3a3_1 libevent=2.1.10=h815e4d9_4 libffi=3.4.2=h0d85af4_5 libgfortran=5.0.0=10_4_0_h97931a8_25 libgfortran5=11.3.0=h082f757_25 liblapack=3.9.0=16_osx64_openblas libnghttp2=1.47.0=h7cbc4dc_1 libopenblas=0.3.21=openmp_h947e540_2 libpng=1.6.37=h5481273_4 libprotobuf=3.20.1=hfa58983_1 libsodium=1.0.18=hbcb3906_1 libsqlite=3.39.2=h5a3d3bf_1 libssh2=1.10.0=h7535e13_3 libthrift=0.15.0=h054ceb0_0 libtiff=4.4.0=h5e0c7b4_3 libwebp-base=1.2.4=h775f41a_0 libxcb=1.13=h0d85af4_1004 libzlib=1.2.12=hfe4f2af_2 llvm-openmp=14.0.4=ha654fa7_0 lz4-c=1.9.3=he49afe7_1 markupsafe=2.1.1=py39h63b48b0_1 matplotlib-inline=0.1.6=pyhd8ed1ab_0 
nbformat=5.4.0=pyhd8ed1ab_0 ncurses=6.3=h96cf925_1 nest-asyncio=1.5.5=pyhd8ed1ab_0 numpy=1.23.2=py39h62c883e_0 openjpeg=2.5.0=h5d0d7b0_1 openssl=1.1.1q=hfe4f2af_0 orc=1.7.4=h9274d09_0 packaging=21.3=pyhd8ed1ab_0 pandas=1.4.3=py39hf72b562_0 parso=0.8.3=pyhd8ed1ab_0 pexpect=4.8.0=pyh9f0ad1d_2 pickleshare=0.7.5=py_1003 pillow=9.2.0=py39h4d560c1_2 pip=22.2.2=pyhd8ed1ab_0 pkgutil-resolve-name=1.3.10=pyhd8ed1ab_0 prompt-toolkit=3.0.30=pyha770c72_0 protobuf=3.20.1=py39hd408605_0 psutil=5.9.1=py39h701faf5_0 pthread-stubs=0.4=hc929b4f_1001 ptyprocess=0.7.0=pyhd3deb0d_0 pure_eval=0.2.2=pyhd8ed1ab_0 pyarrow=8.0.0=py39h2202ef3_0 pycparser=2.21=pyhd8ed1ab_0 pydeck=0.7.1=pyh6c4a22f_0 pygments=2.13.0=pyhd8ed1ab_0 pympler=1.0.1=pyhd8ed1ab_0 pyopenssl=22.0.0=pyhd8ed1ab_0 pyparsing=3.0.9=pyhd8ed1ab_0 pyrsistent=0.18.1=py39h63b48b0_1 pysocks=1.7.1=pyha2e5f31_6 python=3.9.13=h57e37ff_0_cpython python-dateutil=2.8.2=pyhd8ed1ab_0 python-fastjsonschema=2.16.1=pyhd8ed1ab_0 python-tzdata=2022.2=pyhd8ed1ab_0 python_abi=3.9=2_cp39 pytz=2022.2.1=pyhd8ed1ab_0 pytz-deprecation-shim=0.1.0.post0=py39h6e9494a_2 pyyaml=6.0=py39h63b48b0_4 pyzmq=23.2.1=py39h74f9307_0 re2=2022.04.01=h96cf925_0 readline=8.1.2=h3899abd_0 requests=2.28.1=pyhd8ed1ab_0 rich=12.5.1=pyhd8ed1ab_0 semver=2.13.0=pyh9f0ad1d_0 setuptools=65.3.0=pyhd8ed1ab_1 six=1.16.0=pyh6c4a22f_0 smmap=3.0.5=pyh44b312d_0 snappy=1.1.9=h6e38e02_1 sqlite=3.39.2=hd9f0692_1 stack_data=0.5.0=pyhd8ed1ab_0 streamlit=1.12.2=pyhd8ed1ab_0 tk=8.6.12=h5dbffcc_0 toml=0.10.2=pyhd8ed1ab_0 toolz=0.12.0=pyhd8ed1ab_0 tornado=6.2=py39h701faf5_0 traitlets=5.3.0=pyhd8ed1ab_0 typing_extensions=4.3.0=pyha770c72_0 tzdata=2022c=h191b570_0 tzlocal=4.2=py39h6e9494a_1 urllib3=1.26.11=pyhd8ed1ab_0 utf8proc=2.6.1=h9ed2024_0 validators=0.18.2=pyhd3deb0d_0 watchdog=2.1.9=py39h0056ad7_0 wcwidth=0.2.5=pyh9f0ad1d_2 wheel=0.37.1=pyhd8ed1ab_0 widgetsnbextension=4.0.2=pyhd8ed1ab_0 xorg-libxau=1.0.9=h35c211d_0 xorg-libxdmcp=1.1.3=h35c211d_0 xz=5.2.6=h775f41a_0 yaml=0.2.5=h0d85af4_2 zeromq=4.3.4=he49afe7_1 zipp=3.8.1=pyhd8ed1ab_0 zlib=1.2.12=hfe4f2af_2 zstd=1.5.2=hfa58983_4
According to the response in github. Use streamlit.web.cli instead of streamlit.cli
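For illustration, a hedged sketch of what the adjusted import could look like if you launch Streamlit from your own Python entry point; the streamlit.web.cli path is the one referenced in the GitHub response, and the app file name is a placeholder:

import sys
from streamlit.web import cli as stcli   # was: from streamlit.cli import main

if __name__ == "__main__":
    sys.argv = ["streamlit", "run", "your_app.py"]   # placeholder script name
    sys.exit(stcli.main())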
6
7
73,521,602
2022-8-28
https://stackoverflow.com/questions/73521602/melt-function-duplicating-dataset
I have a table like this: id name doggo floofer puppo pupper 1 rowa NaN NaN NaN NaN 2 ray NaN NaN NaN NaN 3 emma NaN NaN NaN pupper 4 sophy doggo NaN NaN NaN 5 jack NaN NaN NaN NaN 6 jimmy NaN NaN puppo NaN 7 bingo NaN NaN NaN NaN 8 billy NaN NaN NaN pupper 9 tiger NaN floofer NaN NaN 10 lucy NaN NaN NaN NaN I want the (doggo, floofer, puppo, pupper) columns to be in a single category column (dog_type). Note: The NaN should also be NaN in the column since not all the dogs were categorized. But after using: df1 = df.melt(id_vars = ['id', 'name'], value_vars = ['doggo', 'floofer', 'pupper', 'puppo'], var_name = 'dog_types', ignore_index = True) The melted df is now duplicated to 40 rows: id name dog_types value 0 1 rowa doggo NaN 1 2 ray doggo NaN 2 3 emma doggo NaN 3 4 sophy doggo doggo 4 5 jack doggo NaN 5 6 jimmy doggo NaN 6 7 bingo doggo NaN 7 8 billy doggo NaN 8 9 tiger doggo NaN 9 10 lucy doggo NaN 10 1 rowa floofer NaN 11 2 ray floofer NaN 12 3 emma floofer NaN 13 4 sophy floofer NaN 14 5 jack floofer NaN 15 6 jimmy floofer NaN 16 7 bingo floofer NaN 17 8 billy floofer NaN 18 9 tiger floofer floofer 19 10 lucy floofer NaN 20 1 rowa pupper NaN 21 2 ray pupper NaN 22 3 emma pupper pupper 23 4 sophy pupper NaN 24 5 jack pupper NaN 25 6 jimmy pupper NaN 26 7 bingo pupper NaN 27 8 billy pupper pupper 28 9 tiger pupper NaN 29 10 lucy pupper NaN 30 1 rowa puppo NaN 31 2 ray puppo NaN 32 3 emma puppo NaN 33 4 sophy puppo NaN 34 5 jack puppo NaN 35 6 jimmy puppo puppo 36 7 bingo puppo NaN 37 8 billy puppo NaN 38 9 tiger puppo NaN 39 10 lucy puppo NaN How I do get the correct results without duplicates?
df['dog_types'] = (df['doggo'].fillna(df['floofer']) .fillna(df['puppo']) .fillna(df['pupper'])) id name doggo floofer puppo pupper dog_types 0 1 rowa NaN NaN NaN NaN NaN 1 2 ray NaN NaN NaN NaN NaN 2 3 emma NaN NaN NaN pupper pupper 3 4 sophy doggo NaN NaN NaN doggo 4 5 jack NaN NaN NaN NaN NaN 5 6 jimmy NaN NaN puppo NaN puppo 6 7 bingo NaN NaN NaN NaN NaN 7 8 billy NaN NaN NaN pupper pupper 8 9 tiger NaN floofer NaN NaN floofer 9 10 lucy NaN NaN NaN NaN NaN Afterwards you can drop redundant columns: df.drop(columns=['doggo', 'floofer', 'puppo', 'pupper'], inplace=True) id name dog_types 0 1 rowa NaN 1 2 ray NaN 2 3 emma pupper 3 4 sophy doggo 4 5 jack NaN 5 6 jimmy puppo 6 7 bingo NaN 7 8 billy pupper 8 9 tiger floofer 9 10 lucy NaN
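If the list of dog-type columns may grow, a more generic sketch of the same idea (assuming the four columns named above) is to back-fill across each row and take the first column:

cols = ['doggo', 'floofer', 'puppo', 'pupper']
df['dog_types'] = df[cols].bfill(axis=1).iloc[:, 0]   # first non-NaN value per row
df = df.drop(columns=cols)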
4
4
73,517,832
2022-8-28
https://stackoverflow.com/questions/73517832/how-to-make-an-color-picker-in-pygame
I am making an illustrator n pygame and now i need to color my shapes. I need to make an color picker like this: The idea is that the user will scroll on the bar and select an color then it will return the program an rgb value or you can say the color that will tell which color is selected. How can i make this possible. Should i have to use surface.set_at() and make this my setting the color of each pixel or there an alternate way of doing this. This is my code: import pygame import math pygame.init() HEIGHT = 700 WIDTH = 1000 WHITE = (255, 255, 255) BLACK = (0, 0, 0) AQUA = "aqua" shapes = [] window = pygame.display.set_mode((WIDTH, HEIGHT), pygame.RESIZABLE) pygame.display.set_caption("Illustrator") window.fill(WHITE) class Square: def __init__(self, square_x, square_y, square_width, square_height): self.square_x = square_x self.square_y = square_y self.square_width = square_width self.square_height = square_height class Circle: def __init__(self, circle_x, cirlce_y, circle_radius): self.circle_x = circle_x self.circle_y = cirlce_y self.circle_radius = circle_radius class Button: def __init__(self, x, y, width, height): self.x = x self.y = y self.width = width self.height = height def clicked(self): mouse_x, mouse_y = pygame.mouse.get_pos() if self.x <= mouse_x <= self.x + self.width and self.y <= mouse_y <= self.y + self.height: return True square_button = Button(10, 10, 50, 30) circle_button = Button(10, 50, 50, 30) def buttons(window): pygame.draw.rect(window, BLACK, pygame.Rect(square_button.x, square_button.y, square_button.width, square_button.height)) pygame.draw.rect(window, BLACK, pygame.Rect(circle_button.x, circle_button.y, circle_button.width, circle_button.height)) def draw_square(window, x, y, width, height): color = "black" pygame.draw.rect(window, color, pygame.Rect(x, y, width, height)) def draw_circle(window, x, y, radius): pygame.draw.circle(window, AQUA, (x, y), radius) def square_logic(mouse_x_button_down, mouse_y_button_down, mouse_x_button_up, mouse_y_button_up, delta_x, delta_y): if delta_x > 0 and delta_y > 0: shapes.append(Square(mouse_x_button_down, mouse_y_button_down, delta_x, delta_y)) elif delta_x < 0 and delta_y < 0 : shapes.append(Square(mouse_x_button_up, mouse_y_button_up, abs(delta_x), abs(delta_y))) elif delta_x < 0 and delta_y > 0: shapes.append(Square(mouse_x_button_up, mouse_y_button_down, abs(delta_x), delta_y)) elif delta_x > 0 and delta_y < 0: shapes.append(Square(mouse_x_button_down, mouse_y_button_up, delta_x, abs(delta_y))) def square_logic_for_drag(mouse_x_button_down, mouse_y_button_down, mouse_x, mouse_y, delta_x, delta_y): if delta_x > 0 and delta_y > 0: draw_square(window, mouse_x_button_down, mouse_y_button_down, delta_x, delta_y) elif delta_x < 0 and delta_y < 0 : draw_square(window, mouse_x, mouse_y, abs(delta_x), abs(delta_y)) elif delta_x < 0 and delta_y > 0: draw_square(window, mouse_x, mouse_y_button_down, abs(delta_x), delta_y) elif delta_x > 0 and delta_y < 0: draw_square(window, mouse_x_button_down, mouse_y, delta_x, abs(delta_y)) def circle_logic(delta_x, delta_y, mouse_x_button_down, mouse_y_button_down, radius): if delta_x > 0 and delta_y > 0 or delta_x < 0 and delta_y > 0 or delta_x > 0 and delta_y < 0 or delta_x < 0 and delta_y < 0: shapes.append(Circle(mouse_x_button_down + delta_x/2, mouse_y_button_down + delta_y/2, radius)) def circle_logic_for_drag(delta_x, delta_y, mouse_x_button_down, mouse_y_button_down, radius): if delta_x > 0 and delta_y > 0 or delta_x < 0 and delta_y > 0 or delta_x > 0 and delta_y < 0 or delta_x 
< 0 and delta_y < 0: draw_circle(window, mouse_x_button_down + delta_x/2, mouse_y_button_down + delta_y/2, radius) def draw_previous_shapes(): for j in range(len(shapes)): if type(shapes[j]) == Square: draw_square(window, shapes[j].square_x, shapes[j].square_y, shapes[j].square_width, shapes[j].square_height) if type(shapes[j]) == Circle: draw_circle(window, shapes[j].circle_x, shapes[j].circle_y, shapes[j].circle_radius) def main(): run = True mouse_button_down = False mouse_button_up = False shape = "square" while run: for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.MOUSEBUTTONDOWN: mouse_x_button_down, mouse_y_button_down = pygame.mouse.get_pos() print(mouse_x_button_down, mouse_y_button_down) mouse_button_down = True if square_button.clicked(): shape = "square" if circle_button.clicked(): shape = "circle" if event.type == pygame.MOUSEBUTTONUP: mouse_x_button_up, mouse_y_button_up = pygame.mouse.get_pos() print(mouse_x_button_up, mouse_y_button_up) mouse_button_up = True if event.type == pygame.KEYDOWN: if event.key == pygame.K_z and (pygame.key.get_mods() == 64 or pygame.key.get_mods() == 128): try: shapes.pop(len(shapes) - 1) window.fill(WHITE) draw_previous_shapes() except: pass buttons(window) if mouse_button_down == True : if mouse_button_up == True: mouse_button_down = False mouse_button_up = False delta_x = mouse_x_button_up - mouse_x_button_down delta_y = mouse_y_button_up - mouse_y_button_down radius = math.sqrt(delta_x*delta_x + delta_y*delta_y) /2 print( delta_x, delta_y) if shape == "square": square_logic(mouse_x_button_down, mouse_y_button_down, mouse_x_button_up, mouse_y_button_up, delta_x, delta_y) if shape == "circle": circle_logic(delta_x, delta_y, mouse_x_button_down, mouse_y_button_down, radius) else: window.fill(WHITE) buttons(window) draw_previous_shapes() mouse_x, mouse_y = pygame.mouse.get_pos() delta_x = mouse_x - mouse_x_button_down delta_y = mouse_y - mouse_y_button_down radius = math.sqrt(delta_x*delta_x + delta_y*delta_y) /2 if shape == "square": square_logic_for_drag(mouse_x_button_down, mouse_y_button_down, mouse_x, mouse_y, delta_x, delta_y) if shape == "circle": circle_logic_for_drag(delta_x, delta_y, mouse_x_button_down, mouse_y_button_down, radius) pygame.display.flip() pygame.quit() if __name__ == "__main__": main() For further explaination comment me?
The pygame.Color object can be used to convert between the RGB and [HSL/HSV](HSL and HSV) color schemes. The hsla property: The HSLA representation of the Color. The HSLA components are in the ranges H = [0, 360], S = [0, 100], V = [0, 100], A = [0, 100]. Create a pygame.Surface and use the function `hsla to create an image for the color picker: class ColorPicker: def __init__(self, x, y, w, h): self.rect = pygame.Rect(x, y, w, h) self.image = pygame.Surface((w, h)) self.image.fill((255, 255, 255)) self.rad = h//2 self.pwidth = w-self.rad*2 for i in range(self.pwidth): color = pygame.Color(0) color.hsla = (int(360*i/self.pwidth), 100, 50, 100) pygame.draw.rect(self.image, color, (i+self.rad, h//3, 1, h-2*h//3)) Calculation of the relative position in the coller picker when the mouse button is held down: class ColorPicker: # [...] def update(self): moude_buttons = pygame.mouse.get_pressed() mouse_pos = pygame.mouse.get_pos() if moude_buttons[0] and self.rect.collidepoint(mouse_pos): self.p = (mouse_pos[0] - self.rect.left - self.rad) / self.pwidth self.p = (max(0, min(self.p, 1))) Determine the color from the relative position in the color picker: class ColorPicker: # [...] def get_color(self): color = pygame.Color(0) color.hsla = (int(self.p * self.pwidth), 100, 50, 100) return color Minimal example: import pygame class ColorPicker: def __init__(self, x, y, w, h): self.rect = pygame.Rect(x, y, w, h) self.image = pygame.Surface((w, h)) self.image.fill((255, 255, 255)) self.rad = h//2 self.pwidth = w-self.rad*2 for i in range(self.pwidth): color = pygame.Color(0) color.hsla = (int(360*i/self.pwidth), 100, 50, 100) pygame.draw.rect(self.image, color, (i+self.rad, h//3, 1, h-2*h//3)) self.p = 0 def get_color(self): color = pygame.Color(0) color.hsla = (int(self.p * self.pwidth), 100, 50, 100) return color def update(self): moude_buttons = pygame.mouse.get_pressed() mouse_pos = pygame.mouse.get_pos() if moude_buttons[0] and self.rect.collidepoint(mouse_pos): self.p = (mouse_pos[0] - self.rect.left - self.rad) / self.pwidth self.p = (max(0, min(self.p, 1))) def draw(self, surf): surf.blit(self.image, self.rect) center = self.rect.left + self.rad + self.p * self.pwidth, self.rect.centery pygame.draw.circle(surf, self.get_color(), center, self.rect.height // 2) pygame.init() window = pygame.display.set_mode((500, 200)) clock = pygame.time.Clock() cp = ColorPicker(50, 50, 400, 60) run = True while run: clock.tick(100) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False cp.update() window.fill(0) cp.draw(window) pygame.display.flip() pygame.quit() exit()
3
4
73,517,571
2022-8-28
https://stackoverflow.com/questions/73517571/typevar-inference-broken-by-lru-cache-decorator
Python's TypeVar inference is broken when using the lru_cache decorator. For example, after running mypy on the following example, only the function with lru_cache causes an error like: main.py:14: error: Incompatible types in assignment (expression has type "T", variable has type "int") Found 1 error in 1 file (checked 1 source file) and pyright's editor support also warns about the same thing. Is this lru_cache's own limitation, or is there some good workaround? from functools import lru_cache from typing import TypeVar T = TypeVar("T") def working(foo: T) -> T: return foo @lru_cache(maxsize=None) def not_working(foo: T) -> T: return foo a: int = working(1) b: int = not_working(1)
Here's the relevant parts of the lru_cache type hints _T = TypeVar("_T") class _lru_cache_wrapper(Generic[_T]): __wrapped__: Callable[..., _T] def __call__(self, *args: Hashable, **kwargs: Hashable) -> _T: ... def lru_cache( maxsize: int | None = ..., typed: bool = ... ) -> Callable[[Callable[..., _T]], _lru_cache_wrapper[_T]]: ... so it appears that, in its attempts to allow for any argument set, it loses any connection between the input and output types and so is unable to refine T to int. You may have to wrap lru_cache locally to fix this. You may be able to use ParamSpec, but you might find difficulties with that, see the note below. If you only need it for a small set of function types (unary, binary, ternary), you could wrap it for those. Apparently they did actually fix this with ParamSpec but from a cursory reading it appears to break other things so they reverted it. This issue also discusses it.
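A minimal sketch of the 'wrap lru_cache locally' idea, assuming you are happy to tell the type checker that the cached wrapper keeps exactly the signature of the wrapped function (the helper name typed_lru_cache is made up):

from functools import lru_cache
from typing import Callable, TypeVar, cast

T = TypeVar("T")
F = TypeVar("F", bound=Callable)

def typed_lru_cache(func: F) -> F:
    # Cast the wrapper back to the original callable type so the
    # T -> T relationship of a generic function should survive.
    return cast(F, lru_cache(maxsize=None)(func))

@typed_lru_cache
def not_working(foo: T) -> T:
    return foo

b: int = not_working(1)   # should no longer be flagged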
4
6
73,517,189
2022-8-28
https://stackoverflow.com/questions/73517189/removing-n-from-columns-name-in-pandas-dataframe
I have a CSV file with line breaks in its column names. When I read the file with pd.read_csv(), it returns the column names like this: Violent\ncrime\nrate. How do I replace \n with "_" for all these columns?
Try: df.columns = [c.replace("\n", "_") for c in df.columns] print(df)
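An equivalent vectorised spelling, if you prefer to stay with pandas string methods (a small sketch assuming df came straight from pd.read_csv):

df.columns = df.columns.str.replace("\n", "_", regex=False)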
3
4
73,514,339
2022-8-27
https://stackoverflow.com/questions/73514339/django-admin-how-to-show-currency-numbers-in-comma-separated-format
In my models I have this: class Example(Basemodel): price = models.IntegerField(default=0) and in my admin I have this: @admin.register(Example) class ExampleAdmin(admin.ModelAdmin): list_display = ('price',) I want the price field to be shown in comma-separated format instead of the typical integer format, and I want to do that on the backend side. For example: 333222111 should be 333,222,111 Can anyone recommend a solution? It should look like:
You can work with a property instead, for example: from django.contrib import admin class Example(Basemodel): price = models.IntegerField(default=0) @property @admin.display(description='price', ordering='price') def price_formatted(self): return f'{self.price:,}' and use that property: @admin.register(Example) class ExampleAdmin(admin.ModelAdmin): list_display = ('price_formatted',)
5
6
73,514,027
2022-8-27
https://stackoverflow.com/questions/73514027/how-to-split-and-sort-content-of-a-list-in-python
I have the following list: list1 = ['# Heading', '200: Stop Engine', '', '20: Start Engine', '400: Do xy'] and I want to get: list2 = ['20: Start Engine', '200: Stop Engine', '400: Do xy'] So the empty list item and the ones starting with # should be deleted or ignored, and the rest should be sorted by the number. I tried to use split() to extract the numbers and the #: list2 = [i.split() for i in list1] but then I get a list of lists, which brings some other problems (I need to convert the content of the list to an int for the sorting, which only works if I have a string). The output would be: list2 = ['#', 'Heading', '200:', 'Stop', 'Engine', '', '20:', 'Start', 'Engine', '400:', 'Do', 'xy'] and if I split(':'), I can't delete the #. For the sorting I tried: list2.sort(key = lambda x: x[0]) to sort the items by the number. This only works if I can delete the # and the empty item and convert the string to an int. I hope someone can help me! Thanks in advance!
Just do all the things you say: Ignore all the items which don't start with a number, then sort by the number before the colon delimiter: def FilterAndSort(items): items = [item for item in items if item and item[0].isdigit()] return sorted(items, key=lambda item:int(item.split(':')[0])) print(FilterAndSort(list1)) Output as requested
4
2
73,513,150
2022-8-27
https://stackoverflow.com/questions/73513150/pairing-list-items-in-a-list
I am trying to pair elements of a list based on a condition. If two elements have a common item, I will merge them, and do this until no elements can be merged. Currently, my problem is that looping through the same elements gives the same merged result from different items. I have to check if a group has been added before, but as my array is empty in the beginning, I could not check whether an element is already in it with axis 1. I tried a recursive approach. Also, I am discarding a group if it has a length of less than three. pairs = [[1, 3], [1, 8], [2, 1], [2, 3], [3, 1], [3, 8], [4, 11], [4, 15], [7, 13], [9, 12], [9, 13], [10, 1], [10, 18], [10, 20], ...] def groupG(pairs): groups = [] if len(pairs) > 1: for i,pair in enumerate(pairs): try: if (any(point in pairs[i+1] for point in pair)): group = np.concatenate(( pair,pairs[i+1])) group = np.unique(group) groups.append(group) except IndexError: continue if len(groups) == 0 : groupsFiltered = np.array([row for row in pairs if len(row)>=3]) return groupsFiltered else: return groupG(groups) The expected result is: [[1,2,3,8,10,18,20],[4,11,15],[7,9,12,13]...] Is there a way to group these pairs with a while loop, do-while, or recursion?
Use networkx's connected_components: import networkx as nx pairs = [[1, 3], [1, 8], [2, 1], [2, 3], [3, 1], [3, 8], [4, 11], [4, 15], [7, 13], [9, 12], [9, 13], [10, 1], [10, 18], [10, 20]] out = list(nx.connected_components(nx.from_edgelist(pairs))) output: [{1, 2, 3, 8, 10, 18, 20}, {4, 11, 15}, {7, 9, 12, 13}]
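If you would rather avoid the networkx dependency and stay with a plain loop, as the question asks, here is a small dependency-free sketch of the same merging idea (the helper name merge_pairs is made up):

def merge_pairs(pairs):
    groups = []                                    # list of disjoint sets
    for a, b in pairs:
        hits = [g for g in groups if a in g or b in g]
        merged = {a, b}.union(*hits)               # merge every group this pair touches
        groups = [g for g in groups if g not in hits]
        groups.append(merged)
    return groups

print(merge_pairs(pairs))
# three groups: {1, 2, 3, 8, 10, 18, 20}, {4, 11, 15}, {7, 9, 12, 13}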
4
7
73,507,532
2022-8-27
https://stackoverflow.com/questions/73507532/why-is-the-return-value-for-clirunner-invoke-object-null
I'm using click v8.1.3 and I'm trying to create some pytests, but I'm not getting my expected return_value when using click.testing.CliRunner().invoke import click.testing import mycli def test_return_ctx(): @mycli.cli.command() def foo(): return "Potato" runner = click.testing.CliRunner() result = runner.invoke(mycli.cli, ["foo"]) assert result.return_value == "Potato" # this fails. b/c the actual value is None I tried updating the root command to return some random value as well to see if we get a value there # mycli import click @click.group() def cli(): return "Potato" But it didn't help. return_value for the Result object is still None Am I misunderstanding how I should return a value from a command? https://click.palletsprojects.com/en/8.1.x/api/#click.testing.Result.return_value
Click command handlers do not return a value unless you use: standalone_mode=False. You can do that during testing like: result = CliRunner().invoke(foo, standalone_mode=False) Test code: import click from click.testing import CliRunner def test_return_value(): @click.command() def foo(): return "bar" result = CliRunner().invoke(foo, standalone_mode=False) assert result.return_value == "bar" Test Results: ============================= test session starts ============================ collecting ... collected 1 item test_code.py::test_return_value PASSED [100%] ============================== 1 passed in 0.03s ==============================
5
7
73,480,501
2022-8-24
https://stackoverflow.com/questions/73480501/error-tcgetpgrp-failed-not-a-tty-using-python3-to-open-web-browser
Here's the breakdown of my Windows WSL environment: Windows 11 WSL version 2 Ubuntu version 20.04.3 LTS Python 3.8.10 I have a super simple Python program I'm using to open a web page in my default browser. Here is my code: import webbrowser webbrowser.open('https://github.com') When I run this from my terminal the webpage opens up as expected, but I also get this error in the terminal: tcgetpgrp failed: Not a tty When my terminal displays this message, the cursor goes down to the next line and it looks like a process is hung or something. To be able to use the terminal I have to Ctrl+C to get it to give me the command prompt. I looked for answers and everything I could find has to do with using Jupyter or PHP but I'm not using either of them, I'm just using plain old Python to try and open the browser. Can anyone tell me what the issue is here and how to fix this/prevent it from happening?
Yes, I can also reproduce it from the Python (and IPython) REPL on Ubuntu under WSL. I don't get the "lockup" that requires Ctrl+C when running interactively, at least. I'll theorize on the "why". Most of this I can confirm myself, but the last bullet below is still a bit of a mystery to me: webbrowser-open uses whatever browser is defined by the BROWSER environment variable first, but falls back to (I believe) xdg-open. xdg-open uses whatever browser is defined in the alternatives system for x-www-browser or www-browser. On Ubuntu 20.04 on WSL, the wslu package is installed by default (it is no longer a default package under 22.04, though). That package includes the wslview helper. From its manpage: [wslview] is a file viewer on WSL that allows you to open files and folders from WSL in Windows and a fake web browser that allows opening urls in your default browser on Windows 10. wslview is registered during the wslu installation as the alternative for both x-www-browser and www-browser. webbrowser.open doesn't just call xdg-open, but it attempts to get the process information of the resulting browser so that it can (at the least) raise the window if requested. Part of this is obtaining the process group via, apparently, the tcgetpgrp system call. According to the tcgetpgrp manpage: The function tcgetpgrp() returns the process group ID of the foreground process group on the terminal associated to fd, which must be the controlling terminal of the calling process. Here's where I have to "hand-wave" a bit -- Something in the hand-off from webbrowser.open to wslview to binfmt_misc (the kernel system that allows it to launch Windows executables) is "losing" or redirecting a file descriptor of the terminal, resulting in this message. It appears to me to be a bug (unintended side-effect?) of wslview, since making sure it isn't used will prevent the error from occurring. As a workaround, either: export BROWSER=/mnt/c/path/to/windows/browser before starting Python. Note that I'm not sure how to point to Edge, since it's there's no ".exe" for it that I'm aware of (it's a Universal/Modern/UWP/whatever app). Or, since you are on Windows 11, install a Linux browser. I used Vivaldi to test and confirm that it opened properly from Python under WSL. Note that you can't sudo apt install either Chromium or Firefox under WSL since they are both Snaps.
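As a variation on the first workaround, BROWSER can also be set from inside the script before webbrowser is used; the path below is only a placeholder for whichever Windows browser you want WSL to launch:

import os

# Placeholder path, as in the export example above -- point it at an actual browser binary.
os.environ["BROWSER"] = "/mnt/c/path/to/windows/browser.exe"

import webbrowser   # imported after BROWSER is set so the module picks it up
webbrowser.open("https://github.com")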
9
11
73,504,314
2022-8-26
https://stackoverflow.com/questions/73504314/type-hint-for-can-be-compared-objects
I am writing several functions that handle ordered datasets. Sometimes, there is an argument that can be an int, a float, a timestamp, or anything that supports comparison (larger than / smaller than) and that I can use for trimming data, for instance. Is there a way to type-hint such a parameter? The typing module doesn't seem to include this, but is there some other way?
There is no standard 'comparable' ABC, no, as the rich comparison methods are really very flexible and don't necessarily return booleans. The default built-in types return NotImplemented when applied to a type they can't be compared with, for example, while specialised libraries like SQLAlchemy and numpy use rich comparison methods to return completely different objects. See the documentation for the rich comparison methods for the details. But you should be able to define a Protocol subclass for specific expectations: from typing import Protocol, TypeVar T = TypeVar("T") class Comparable(Protocol[T]): def __eq__(self: T, other: T) -> bool: ... def __lt__(self: T, other: T) -> bool: ... # ... etc You may need to tweak the protocol to fit your exact expectations, and / or use a non-generic version that's specific to the types you use (perhaps with @overloaded definitions for specific types). For sorting with the builtin sorted() function, __eq__ and __lt__ suffice.
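For example, a hedged sketch of how the Comparable protocol above could annotate the kind of trimming helper described in the question (the function name trim_before is made up):

from typing import Iterable, List, TypeVar

C = TypeVar("C", bound=Comparable)

def trim_before(values: Iterable[C], threshold: C) -> List[C]:
    # Keep only the items that are not smaller than the threshold.
    return [v for v in values if not v < threshold]

trim_before([3, 1, 4, 1, 5], 2)         # works for ints
trim_before(["a", "c", "b"], "b")       # and for strings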
4
4
73,498,143
2022-8-26
https://stackoverflow.com/questions/73498143/checking-for-equality-if-either-input-can-be-str-or-bytes
I am trying to write a function that checks if two strings (with ASCII-only content) or bytes are equal. Right now I have: import typing as typ def is_equal_str_bytes( a: typ.Union[str, bytes], b: typ.Union[str, bytes], ) -> bool: if isinstance(a, str): a = a.encode() if isinstance(b, str): b = b.encode() return a == b This works with any combination of str or bytes types, while the == operator will (rightfully) return False if the two types differ. import itertools ss = "ciao", b"ciao" for a, b in itertools.product(ss, repeat=2): print(f"{a!r:<8} {b!r:<8} {is_equal_str_bytes(a, b)} {a == b}") # 'ciao' 'ciao' True True # 'ciao' b'ciao' True False # b'ciao' 'ciao' True False # b'ciao' b'ciao' True True Is there a simpler / faster way?
Some benchmarks with random equal strings/bytes of a million characters (on TIO with Python 3.8 pre-release, but I got similar times with 3.10.2): 186.88 us s.encode() 187.39 us s.encode("utf-8") 183.85 us s.encode("ascii") 94.62 us b.decode() 94.27 us b.decode("utf-8") 137.91 us b.decode("ascii") 79.93 us s == s2 82.69 us b == b2 182.72 us s + "a" 177.06 us b + b"a" 0.08 us len(s) 0.07 us len(b) 1.14 us s[:1000].encode() 0.97 us b[:1000].decode() 2.06 us s[::1000].encode() 1.45 us b[::1000].decode() 1.91 us hash(s) 1.56 us hash(b) 508.62 us hash(s2) 546.00 us hash(b2) 2.85 us str(s) 9142.59 us str(b) 13541.64 us repr(s) 9100.34 us repr(b) Thoughts based on that: I thought for simpler code, maybe we could apply str or repr to both of them and then somehow compare the resulting strings (like after removing b prefixes) but the benchmark shows that that would be very slow. Getting the lengths is very cheap, so I'd compare those first. Return False if different, otherwise continue. If you've hashed them already or are going to afterwards anyway, then you could compare the hashes (and return False if different, otherwise continue). See ASCII str / bytes hash collision for why equal ASCII string and ASCII bytes have the same hash. (But I'm not sure it's guaranteed by the language, so it might not be safe, I'm not sure). Note that hashing the first time is slow (see times for hashing s2/b2) but subsequent lookups of the stored hash is fast (see times for hashing s/b). Decoding seems faster than encoding, so do that instead. Only decode if the types differ (one is string and one is bytes), otherwise just use ==. It's wasteful to decode a million bytes if already the first one is a mismatch. So might be worth it to decode/compare chunks of shorter length instead of the whole thing, or test some short prefix or cross section before testing the whole thing. So here's some potentially faster one using the above optimizations (not tested/benchmarked, partly because it depends on your data): import typing as typ def is_equal_str_bytes( a: typ.Union[str, bytes], b: typ.Union[str, bytes], ) -> bool: if len(a) != len(b): return False if hash(a) != hash(b): return False if type(a) is type(b): return a == b if isinstance(a, bytes): # make a=str, b=bytes a, b = b, a if a[:1000] != b[:1000].decode(): return False if a[::1000] != b[::1000].decode(): return False return a == b.decode() My benchmark code: import os from timeit import repeat n = 10**6 b = bytes(x & 127 for x in os.urandom(n)) s = b.decode() assert hash(s) == hash(b) setup = ''' from __main__ import s, b s2 = b.decode() # Always fresh so it doesn't have a hash stored already b2 = s.encode() assert s2 is not s and b2 is not b ''' exprs = [ 's.encode()', 's.encode("utf-8")', 's.encode("ascii")', 'b.decode()', 'b.decode("utf-8")', 'b.decode("ascii")', 's == s2', 'b == b2', 's + "a"', 'b + b"a"', 'len(s)', 'len(b)', 's[:1000].encode()', 'b[:1000].decode()', 's[::1000].encode()', 'b[::1000].decode()', 'hash(s)', 'hash(b)', 'hash(s2)', 'hash(b2)', 'str(s)', 'str(b)', 'repr(s)', 'repr(b)', ] for _ in range(3): for e in exprs: number = 100 if exprs.index(e) < exprs.index('hash(s)') else 1 t = min(repeat(e, setup, number=number)) / number print('%8.2f us ' % (t * 1e6), e) print()
3
4
73,496,372
2022-8-26
https://stackoverflow.com/questions/73496372/is-there-any-way-to-capture-exact-line-number-where-exception-happened-in-python
Hi, is there any way to get the exact line number where the exception happened? I am using a wrapper method, and in the actual method there are many lines of code; I am getting a very generic exception and am not sure where exactly it is happening. Example code is below: import sys def test(**kwargs): print (kwargs) abc def wraper_test(**kwargs): try: test(**kwargs) except Exception as e: exception_type, exception_object, exception_traceback = sys.exc_info() print(exception_object.tfline_no) wraper_test(hello="test", value="lsdf") Now, the line number I am getting in the exception is for test(**kwargs), and not the exact location where the exception is generated, in this case "abc", which is inside the test method. Is there any way to capture the exact line number of the exception when we are using a wrapper method?
Try this: the traceback library allows you to get a longer stack trace with more line numbers (this shows the real error is on line 5). import sys, traceback def test(**kwargs): print (kwargs) abc def wrapper_test(**kwargs): try: test(**kwargs) except Exception as e: exception_type, exception_object, exception_traceback = sys.exc_info() traceback.print_tb(exception_traceback, limit=2, file=sys.stdout) wrapper_test(hello="test", value="lsdf")
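If you want the innermost line number as a value rather than a printed trace, here is a small sketch of the same wrapper using traceback.extract_tb (filename, lineno and line are standard FrameSummary attributes):

import traceback

def wrapper_test(**kwargs):
    try:
        test(**kwargs)
    except Exception as e:
        last = traceback.extract_tb(e.__traceback__)[-1]   # innermost frame
        print(last.filename, last.lineno, last.line)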
3
4
73,496,251
2022-8-26
https://stackoverflow.com/questions/73496251/find-all-combinations-of-tuples-inside-of-a-list
I am trying to find all permutations of the items inside the length-2 tuples in the list. The order of the tuples in relation to each other does not matter. perm = [(3, 6), (6, 8), (4, 1), (7, 4), (5, 3), (1, 9), (2, 5), (4, 8), (5, 1), (3, 7), (6, 9), (10, 2), (7, 10), (8, 2), (9, 10)] An example of one permutation of the above list would be: perm = [(6, 3), (6, 8), (4, 1), (7, 4), (5, 3), (1, 9), (2, 5), (4, 8), (5, 1), (3, 7), (6, 9), (10, 2), (7, 10), (8, 2), (9, 10)] Another example permutation would be: perm = [(6, 3), (8, 6), (1, 4), (4, 7), (3, 5), (9, 1), (5, 2), (8, 4), (1, 5), (7, 3), (9, 6), (2, 10), (10, 7), (2, 8), (10, 9)] In the end, the length of the list of permutations should be 32768, because each tuple is either swapped or not swapped, and 2^15 = 32768. I do not care about the order of the tuples in relation to each other, only the permutations of the items inside of the tuples. I have tried to use itertools permute, combinations, and product, but I haven't been able to get the desired result.
You can use product: from itertools import product lst = [(3, 6), (6, 8), (4, 1), (7, 4), (5, 3), (1, 9), (2, 5), (4, 8), (5, 1), (3, 7), (6, 9), (10, 2), (7, 10), (8, 2), (9, 10)] output = product(*([(x, y), (y, x)] for x, y in lst)) output = list(output) # if you want a list, rather than a generator print(len(output)) # 32768 print(output[0]) # ((3, 6), (6, 8), (4, 1), (7, 4), (5, 3), (1, 9), (2, 5), (4, 8), (5, 1), (3, 7), (6, 9), (10, 2), (7, 10), (8, 2), (9, 10)) print(output[-1]) # ((6, 3), (8, 6), (1, 4), (4, 7), (3, 5), (9, 1), (5, 2), (8, 4), (1, 5), (7, 3), (9, 6), (2, 10), (10, 7), (2, 8), (10, 9)) The key is to write something like output = product([(3,6), (6,3)], [(6,8), (8,6)], ..., [(9,10), (10,9)]) in a generic way so that any input list would work, which is done by the generator expression and unpacking (*).
3
4
73,493,393
2022-8-25
https://stackoverflow.com/questions/73493393/devcontainer-json-postcreatecommand-warns-running-pip-as-the-root-user
Question: How should I refactor my postCreateCommand so that project dependencies are not installed as root? Problem (research and solution attempt follow below): I run pip install -r requirements.txt within the postCreateCommand in my devcontainer.json. However, pip still complains about being run as root: "postCreateCommand": "pip3 install -r requirements.txt", Below is the output of my postCreateCommand: Running the postCreateCommand from devcontainer.json... [7619 ms] Start: Run in container: /bin/sh -c pip3 install --user -r requirements.txt Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu113 Collecting absl-py==1.0.0 Downloading absl_py-1.0.0-py3-none-any.whl (126 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 126.7/126.7 kB 8.4 MB/s eta 0:00:00 Collecting anndata==0.8.0 Downloading anndata-0.8.0-py3-none-any.whl (96 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.1/96.1 kB 8.2 MB/s eta 0:00:00 Collecting argon2-cffi==21.3.0 Downloading argon2_cffi-21.3.0-py3-none-any.whl (14 kB) Collecting argon2-cffi-bindings==21.2.0 Downloading argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (86 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.2/86.2 kB 8.0 MB/s eta 0:00:00 Collecting asttokens==2.0.5 Downloading asttokens-2.0.5-py2.py3-none-any.whl (20 kB) Collecting attrs==21.4.0 Downloading attrs-21.4.0-py2.py3-none-any.whl (60 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.6/60.6 kB 4.4 MB/s eta 0:00:00 Collecting backcall==0.2.0 Downloading backcall-0.2.0-py2.py3-none-any.whl (11 kB) Collecting beautifulsoup4==4.11.1 Downloading beautifulsoup4-4.11.1-py3-none-any.whl (128 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 128.2/128.2 kB 12.3 MB/s eta 0:00:00 Collecting bleach==5.0.0 Downloading bleach-5.0.0-py3-none-any.whl (160 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 160.3/160.3 kB 15.0 MB/s eta 0:00:00 Collecting cachetools==5.2.0 Downloading cachetools-5.2.0-py3-none-any.whl (9.3 kB) Collecting certifi==2022.5.18.1 Downloading certifi-2022.5.18.1-py3-none-any.whl (155 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 155.2/155.2 kB 12.9 MB/s eta 0:00:00 Collecting cffi==1.15.0 Downloading cffi-1.15.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (446 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 446.7/446.7 kB 16.9 MB/s eta 0:00:00 Collecting charset-normalizer==2.0.12 Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB) Collecting cycler==0.11.0 Downloading cycler-0.11.0-py3-none-any.whl (6.4 kB) Collecting debugpy==1.6.0 Downloading debugpy-1.6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 34.7 MB/s eta 0:00:00 Collecting decorator==5.1.1 Downloading decorator-5.1.1-py3-none-any.whl (9.1 kB) Collecting defusedxml==0.7.1 Downloading defusedxml-0.7.1-py2.py3-none-any.whl (25 kB) Collecting entrypoints==0.4 Downloading entrypoints-0.4-py3-none-any.whl (5.3 kB) Collecting executing==0.8.3 Downloading executing-0.8.3-py2.py3-none-any.whl (16 kB) Collecting fa2==0.3.5 Downloading fa2-0.3.5.tar.gz (435 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 435.4/435.4 kB 27.9 MB/s eta 0:00:00 Preparing metadata (setup.py) ... 
done Collecting fastjsonschema==2.15.3 Downloading fastjsonschema-2.15.3-py3-none-any.whl (22 kB) Collecting fonttools==4.33.3 Downloading fonttools-4.33.3-py3-none-any.whl (930 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 930.9/930.9 kB 43.2 MB/s eta 0:00:00 Collecting GEOparse==2.0.3 Downloading GEOparse-2.0.3.tar.gz (278 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 278.5/278.5 kB 23.1 MB/s eta 0:00:00 Preparing metadata (setup.py) ... done Collecting google-auth==2.6.6 Downloading google_auth-2.6.6-py2.py3-none-any.whl (156 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 156.7/156.7 kB 14.7 MB/s eta 0:00:00 Collecting google-auth-oauthlib==0.4.6 Downloading google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB) Collecting grpcio==1.46.3 Downloading grpcio-1.46.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.4 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.4/4.4 MB 57.8 MB/s eta 0:00:00 Collecting h5py==3.7.0 Downloading h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (4.5 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.5/4.5 MB 58.5 MB/s eta 0:00:00 Collecting idna==3.3 Downloading idna-3.3-py3-none-any.whl (61 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.2/61.2 kB 5.8 MB/s eta 0:00:00 Collecting igraph==0.9.10 Downloading igraph-0.9.10-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 51.5 MB/s eta 0:00:00 Collecting importlib-metadata==4.11.4 Downloading importlib_metadata-4.11.4-py3-none-any.whl (18 kB) Collecting importlib-resources==5.7.1 Downloading importlib_resources-5.7.1-py3-none-any.whl (28 kB) Collecting ipykernel==6.13.0 Downloading ipykernel-6.13.0-py3-none-any.whl (131 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 131.8/131.8 kB 12.2 MB/s eta 0:00:00 Collecting ipython==8.4.0 Downloading ipython-8.4.0-py3-none-any.whl (750 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 750.8/750.8 kB 39.4 MB/s eta 0:00:00 Collecting ipython-genutils==0.2.0 Downloading ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB) Collecting ipywidgets==7.7.0 Downloading ipywidgets-7.7.0-py2.py3-none-any.whl (123 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 123.4/123.4 kB 11.2 MB/s eta 0:00:00 Collecting jedi==0.18.1 Downloading jedi-0.18.1-py2.py3-none-any.whl (1.6 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 53.8 MB/s eta 0:00:00 Collecting Jinja2==3.1.2 Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.1/133.1 kB 12.9 MB/s eta 0:00:00 Collecting joblib==1.1.0 Downloading joblib-1.1.0-py2.py3-none-any.whl (306 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 307.0/307.0 kB 22.5 MB/s eta 0:00:00 Collecting jsonschema==4.5.1 Downloading jsonschema-4.5.1-py3-none-any.whl (72 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 72.9/72.9 kB 6.9 MB/s eta 0:00:00 Collecting jupyter-client==7.3.1 Downloading jupyter_client-7.3.1-py3-none-any.whl (130 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 131.0/131.0 kB 10.6 MB/s eta 0:00:00 Collecting jupyter-console==6.4.3 Downloading jupyter_console-6.4.3-py3-none-any.whl (22 kB) Collecting jupyter-core==4.10.0 Downloading jupyter_core-4.10.0-py3-none-any.whl (87 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.3/87.3 kB 7.9 MB/s eta 0:00:00 Collecting jupyterlab-pygments==0.2.2 Downloading jupyterlab_pygments-0.2.2-py2.py3-none-any.whl (21 kB) Collecting jupyterlab-widgets==1.1.0 Downloading jupyterlab_widgets-1.1.0-py3-none-any.whl (245 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 245.1/245.1 kB 20.5 MB/s eta 0:00:00 Collecting 
kiwisolver==1.4.2 Downloading kiwisolver-1.4.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 52.7 MB/s eta 0:00:00 Collecting leidenalg==0.8.10 Downloading leidenalg-0.8.10-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 54.2 MB/s eta 0:00:00 Collecting llvmlite==0.38.1 Downloading llvmlite-0.38.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.5 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 34.5/34.5 MB 43.2 MB/s eta 0:00:00 Collecting Markdown==3.3.7 Downloading Markdown-3.3.7-py3-none-any.whl (97 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.8/97.8 kB 8.9 MB/s eta 0:00:00 Collecting MarkupSafe==2.1.1 Downloading MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB) Collecting matplotlib==3.5.2 Downloading matplotlib-3.5.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (11.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.3/11.3 MB 62.6 MB/s eta 0:00:00 Collecting matplotlib-inline==0.1.3 Downloading matplotlib_inline-0.1.3-py3-none-any.whl (8.2 kB) Collecting mistune==0.8.4 Downloading mistune-0.8.4-py2.py3-none-any.whl (16 kB) Collecting natsort==8.1.0 Downloading natsort-8.1.0-py3-none-any.whl (37 kB) Collecting nbclient==0.6.3 Downloading nbclient-0.6.3-py3-none-any.whl (71 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.5/71.5 kB 6.8 MB/s eta 0:00:00 Collecting nbconvert==6.5.0 Downloading nbconvert-6.5.0-py3-none-any.whl (561 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 561.6/561.6 kB 33.8 MB/s eta 0:00:00 Collecting nbformat==5.4.0 Downloading nbformat-5.4.0-py3-none-any.whl (73 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 73.3/73.3 kB 6.9 MB/s eta 0:00:00 Collecting nest-asyncio==1.5.5 Downloading nest_asyncio-1.5.5-py3-none-any.whl (5.2 kB) Collecting networkx==2.8.2 Downloading networkx-2.8.2-py3-none-any.whl (2.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 34.1 MB/s eta 0:00:00 Collecting notebook==6.4.11 Downloading notebook-6.4.11-py3-none-any.whl (9.9 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.9/9.9 MB 69.6 MB/s eta 0:00:00 Collecting numba==0.55.1 Downloading numba-0.55.1-1-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.4 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.4/3.4 MB 51.1 MB/s eta 0:00:00 Collecting numpy==1.21.6 Downloading numpy-1.21.6-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.7/15.7 MB 60.2 MB/s eta 0:00:00 Collecting oauthlib==3.2.0 Downloading oauthlib-3.2.0-py3-none-any.whl (151 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 151.5/151.5 kB 13.7 MB/s eta 0:00:00 Collecting packaging==21.3 Downloading packaging-21.3-py3-none-any.whl (40 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.8/40.8 kB 3.5 MB/s eta 0:00:00 Collecting pandas==1.4.2 Downloading pandas-1.4.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.7/11.7 MB 66.7 MB/s eta 0:00:00 Collecting pandocfilters==1.5.0 Downloading pandocfilters-1.5.0-py2.py3-none-any.whl (8.7 kB) Collecting parso==0.8.3 Downloading parso-0.8.3-py2.py3-none-any.whl (100 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.8/100.8 kB 8.1 MB/s eta 0:00:00 Collecting patsy==0.5.2 Downloading patsy-0.5.2-py2.py3-none-any.whl (233 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 233.7/233.7 kB 19.1 MB/s eta 0:00:00 Collecting pexpect==4.8.0 Downloading pexpect-4.8.0-py2.py3-none-any.whl (59 kB) 
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 59.0/59.0 kB 5.6 MB/s eta 0:00:00 Collecting pickleshare==0.7.5 Downloading pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB) Collecting Pillow==9.1.1 Downloading Pillow-9.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.1/3.1 MB 62.2 MB/s eta 0:00:00 Collecting prometheus-client==0.14.1 Downloading prometheus_client-0.14.1-py3-none-any.whl (59 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 59.5/59.5 kB 5.6 MB/s eta 0:00:00 Collecting prompt-toolkit==3.0.29 Downloading prompt_toolkit-3.0.29-py3-none-any.whl (381 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 381.5/381.5 kB 24.8 MB/s eta 0:00:00 Collecting protobuf==3.20.1 Downloading protobuf-3.20.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 MB 32.9 MB/s eta 0:00:00 Collecting psutil==5.9.1 Downloading psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (284 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 284.7/284.7 kB 21.5 MB/s eta 0:00:00 Collecting ptyprocess==0.7.0 Downloading ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB) Collecting pure-eval==0.2.2 Downloading pure_eval-0.2.2-py3-none-any.whl (11 kB) Collecting pyasn1==0.4.8 Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 77.1/77.1 kB 7.5 MB/s eta 0:00:00 Collecting pyasn1-modules==0.2.8 Downloading pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 155.3/155.3 kB 13.1 MB/s eta 0:00:00 Collecting pycparser==2.21 Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 118.7/118.7 kB 11.3 MB/s eta 0:00:00 Collecting Pygments==2.12.0 Downloading Pygments-2.12.0-py3-none-any.whl (1.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 47.6 MB/s eta 0:00:00 Collecting pynndescent==0.5.7 Downloading pynndescent-0.5.7.tar.gz (1.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 54.2 MB/s eta 0:00:00 Preparing metadata (setup.py) ... 
done Collecting pyparsing==3.0.9 Downloading pyparsing-3.0.9-py3-none-any.whl (98 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.3/98.3 kB 9.7 MB/s eta 0:00:00 Collecting pyrsistent==0.18.1 Downloading pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (119 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 119.8/119.8 kB 10.9 MB/s eta 0:00:00 Collecting python-dateutil==2.8.2 Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 247.7/247.7 kB 22.2 MB/s eta 0:00:00 Collecting pytz==2022.1 Downloading pytz-2022.1-py2.py3-none-any.whl (503 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 503.5/503.5 kB 32.0 MB/s eta 0:00:00 Collecting pyzmq==23.0.0 Downloading pyzmq-23.0.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 48.0 MB/s eta 0:00:00 Collecting requests==2.27.1 Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.1/63.1 kB 5.9 MB/s eta 0:00:00 Collecting requests-oauthlib==1.3.1 Downloading requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB) Collecting rsa==4.8 Downloading rsa-4.8-py3-none-any.whl (39 kB) Collecting scanpy==1.9.1 Downloading scanpy-1.9.1-py3-none-any.whl (2.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 65.1 MB/s eta 0:00:00 Collecting scikit-learn==1.1.1 Downloading scikit_learn-1.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (31.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 31.2/31.2 MB 44.7 MB/s eta 0:00:00 Collecting scikit-misc==0.1.4 Downloading scikit_misc-0.1.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (8.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.8/8.8 MB 67.5 MB/s eta 0:00:00 Collecting scipy==1.8.1 Downloading scipy-1.8.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (41.6 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.6/41.6 MB 39.3 MB/s eta 0:00:00 Collecting seaborn==0.11.2 Downloading seaborn-0.11.2-py3-none-any.whl (292 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 292.8/292.8 kB 23.5 MB/s eta 0:00:00 Collecting Send2Trash==1.8.0 Downloading Send2Trash-1.8.0-py3-none-any.whl (18 kB) Collecting session-info==1.0.0 Downloading session_info-1.0.0.tar.gz (24 kB) Preparing metadata (setup.py) ... 
done Collecting six==1.16.0 Downloading six-1.16.0-py2.py3-none-any.whl (11 kB) Collecting soupsieve==2.3.2.post1 Downloading soupsieve-2.3.2.post1-py3-none-any.whl (37 kB) Collecting stack-data==0.2.0 Downloading stack_data-0.2.0-py3-none-any.whl (21 kB) Collecting statannotations==0.4.4 Downloading statannotations-0.4.4-py3-none-any.whl (31 kB) Collecting statsmodels==0.13.2 Downloading statsmodels-0.13.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (9.9 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.9/9.9 MB 62.4 MB/s eta 0:00:00 Collecting stdlib-list==0.8.0 Downloading stdlib_list-0.8.0-py3-none-any.whl (63 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.5/63.5 kB 5.8 MB/s eta 0:00:00 Collecting tensorboard==2.9.0 Downloading tensorboard-2.9.0-py3-none-any.whl (5.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.8/5.8 MB 66.0 MB/s eta 0:00:00 Collecting tensorboard-data-server==0.6.1 Downloading tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl (4.9 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 61.9 MB/s eta 0:00:00 Collecting tensorboard-plugin-wit==1.8.1 Downloading tensorboard_plugin_wit-1.8.1-py3-none-any.whl (781 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 781.3/781.3 kB 40.5 MB/s eta 0:00:00 Collecting terminado==0.15.0 Downloading terminado-0.15.0-py3-none-any.whl (16 kB) Collecting texttable==1.6.4 Downloading texttable-1.6.4-py2.py3-none-any.whl (10 kB) Collecting threadpoolctl==3.1.0 Downloading threadpoolctl-3.1.0-py3-none-any.whl (14 kB) Collecting tinycss2==1.1.1 Downloading tinycss2-1.1.1-py3-none-any.whl (21 kB) Collecting torch-tb-profiler==0.4.0 Downloading torch_tb_profiler-0.4.0-py3-none-any.whl (1.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 46.9 MB/s eta 0:00:00 Collecting tornado==6.1 Downloading tornado-6.1-cp38-cp38-manylinux2010_x86_64.whl (427 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 427.5/427.5 kB 29.1 MB/s eta 0:00:00 Collecting tqdm==4.64.0 Downloading tqdm-4.64.0-py2.py3-none-any.whl (78 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.4/78.4 kB 7.5 MB/s eta 0:00:00 Collecting traitlets==5.2.1.post0 Downloading traitlets-5.2.1.post0-py3-none-any.whl (106 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 106.6/106.6 kB 9.8 MB/s eta 0:00:00 Collecting typing_extensions==4.2.0 Downloading typing_extensions-4.2.0-py3-none-any.whl (24 kB) Collecting umap-learn==0.5.3 Downloading umap-learn-0.5.3.tar.gz (88 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 88.2/88.2 kB 8.4 MB/s eta 0:00:00 Preparing metadata (setup.py) ... 
done Collecting urllib3==1.26.9 Downloading urllib3-1.26.9-py2.py3-none-any.whl (138 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 139.0/139.0 kB 10.9 MB/s eta 0:00:00 Collecting wcwidth==0.2.5 Downloading wcwidth-0.2.5-py2.py3-none-any.whl (30 kB) Collecting webencodings==0.5.1 Downloading webencodings-0.5.1-py2.py3-none-any.whl (11 kB) Collecting Werkzeug==2.1.2 Downloading Werkzeug-2.1.2-py3-none-any.whl (224 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 224.9/224.9 kB 17.8 MB/s eta 0:00:00 Collecting widgetsnbextension==3.6.0 Downloading widgetsnbextension-3.6.0-py2.py3-none-any.whl (1.6 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 53.5 MB/s eta 0:00:00 Collecting zipp==3.8.0 Downloading zipp-3.8.0-py3-none-any.whl (5.4 kB) Collecting torch==1.11.0+cu113 Downloading https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp38-cp38-linux_x86_64.whl (1637.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 GB 2.4 MB/s eta 0:00:00 Collecting torchvision==0.12.0+cu113 Downloading https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp38-cp38-linux_x86_64.whl (22.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.3/22.3 MB 56.9 MB/s eta 0:00:00 Collecting torchaudio==0.11.0+cu113 Downloading https://download.pytorch.org/whl/cu113/torchaudio-0.11.0%2Bcu113-cp38-cp38-linux_x86_64.whl (2.9 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.9/2.9 MB 25.2 MB/s eta 0:00:00 Requirement already satisfied: setuptools>=18.5 in /usr/local/python/lib/python3.8/site-packages (from ipython==8.4.0->-r requirements.txt (line 33)) (56.0.0) Collecting wheel>=0.26 Downloading wheel-0.37.1-py2.py3-none-any.whl (35 kB) Using legacy 'setup.py install' for fa2, since package 'wheel' is not installed. Using legacy 'setup.py install' for GEOparse, since package 'wheel' is not installed. Using legacy 'setup.py install' for pynndescent, since package 'wheel' is not installed. Using legacy 'setup.py install' for session-info, since package 'wheel' is not installed. Using legacy 'setup.py install' for umap-learn, since package 'wheel' is not installed. 
Installing collected packages: webencodings, wcwidth, texttable, tensorboard-plugin-wit, stdlib-list, Send2Trash, pytz, pyasn1, pure-eval, ptyprocess, pickleshare, mistune, ipython-genutils, fastjsonschema, executing, backcall, zipp, wheel, Werkzeug, urllib3, typing_extensions, traitlets, tqdm, tornado, tinycss2, threadpoolctl, tensorboard-data-server, soupsieve, six, session-info, rsa, pyzmq, pyrsistent, pyparsing, Pygments, pycparser, pyasn1-modules, psutil, protobuf, prompt-toolkit, prometheus-client, Pillow, pexpect, parso, pandocfilters, oauthlib, numpy, networkx, nest-asyncio, natsort, MarkupSafe, llvmlite, kiwisolver, jupyterlab-widgets, jupyterlab-pygments, joblib, igraph, idna, fonttools, entrypoints, defusedxml, decorator, debugpy, cycler, charset-normalizer, certifi, cachetools, attrs, torch, terminado, scipy, scikit-misc, requests, python-dateutil, patsy, packaging, numba, matplotlib-inline, leidenalg, jupyter-core, Jinja2, jedi, importlib-resources, importlib-metadata, h5py, grpcio, google-auth, cffi, bleach, beautifulsoup4, asttokens, absl-py, torchvision, torchaudio, stack-data, scikit-learn, requests-oauthlib, pandas, matplotlib, Markdown, jupyter-client, jsonschema, fa2, argon2-cffi-bindings, statsmodels, seaborn, pynndescent, nbformat, ipython, google-auth-oauthlib, GEOparse, argon2-cffi, anndata, umap-learn, tensorboard, statannotations, nbclient, ipykernel, torch-tb-profiler, scanpy, nbconvert, jupyter-console, notebook, widgetsnbextension, ipywidgets Running setup.py install for session-info ... done Running setup.py install for fa2 ... done Running setup.py install for pynndescent ... done Running setup.py install for GEOparse ... done Running setup.py install for umap-learn ... done Successfully installed GEOparse-2.0.3 Jinja2-3.1.2 Markdown-3.3.7 MarkupSafe-2.1.1 Pillow-9.1.1 Pygments-2.12.0 Send2Trash-1.8.0 Werkzeug-2.1.2 absl-py-1.0.0 anndata-0.8.0 argon2-cffi-21.3.0 argon2-cffi-bindings-21.2.0 asttokens-2.0.5 attrs-21.4.0 backcall-0.2.0 beautifulsoup4-4.11.1 bleach-5.0.0 cachetools-5.2.0 certifi-2022.5.18.1 cffi-1.15.0 charset-normalizer-2.0.12 cycler-0.11.0 debugpy-1.6.0 decorator-5.1.1 defusedxml-0.7.1 entrypoints-0.4 executing-0.8.3 fa2-0.3.5 fastjsonschema-2.15.3 fonttools-4.33.3 google-auth-2.6.6 google-auth-oauthlib-0.4.6 grpcio-1.46.3 h5py-3.7.0 idna-3.3 igraph-0.9.10 importlib-metadata-4.11.4 importlib-resources-5.7.1 ipykernel-6.13.0 ipython-8.4.0 ipython-genutils-0.2.0 ipywidgets-7.7.0 jedi-0.18.1 joblib-1.1.0 jsonschema-4.5.1 jupyter-client-7.3.1 jupyter-console-6.4.3 jupyter-core-4.10.0 jupyterlab-pygments-0.2.2 jupyterlab-widgets-1.1.0 kiwisolver-1.4.2 leidenalg-0.8.10 llvmlite-0.38.1 matplotlib-3.5.2 matplotlib-inline-0.1.3 mistune-0.8.4 natsort-8.1.0 nbclient-0.6.3 nbconvert-6.5.0 nbformat-5.4.0 nest-asyncio-1.5.5 networkx-2.8.2 notebook-6.4.11 numba-0.55.1 numpy-1.21.6 oauthlib-3.2.0 packaging-21.3 pandas-1.4.2 pandocfilters-1.5.0 parso-0.8.3 patsy-0.5.2 pexpect-4.8.0 pickleshare-0.7.5 prometheus-client-0.14.1 prompt-toolkit-3.0.29 protobuf-3.20.1 psutil-5.9.1 ptyprocess-0.7.0 pure-eval-0.2.2 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycparser-2.21 pynndescent-0.5.7 pyparsing-3.0.9 pyrsistent-0.18.1 python-dateutil-2.8.2 pytz-2022.1 pyzmq-23.0.0 requests-2.27.1 requests-oauthlib-1.3.1 rsa-4.8 scanpy-1.9.1 scikit-learn-1.1.1 scikit-misc-0.1.4 scipy-1.8.1 seaborn-0.11.2 session-info-1.0.0 six-1.16.0 soupsieve-2.3.2.post1 stack-data-0.2.0 statannotations-0.4.4 statsmodels-0.13.2 stdlib-list-0.8.0 tensorboard-2.9.0 tensorboard-data-server-0.6.1 
tensorboard-plugin-wit-1.8.1 terminado-0.15.0 texttable-1.6.4 threadpoolctl-3.1.0 tinycss2-1.1.1 torch-1.11.0+cu113 torch-tb-profiler-0.4.0 torchaudio-0.11.0+cu113 torchvision-0.12.0+cu113 tornado-6.1 tqdm-4.64.0 traitlets-5.2.1.post0 typing_extensions-4.2.0 umap-learn-0.5.3 urllib3-1.26.9 wcwidth-0.2.5 webencodings-0.5.1 wheel-0.37.1 widgetsnbextension-3.6.0 zipp-3.8.0 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv [notice] A new release of pip available: 22.2.1 -> 22.2.2 [notice] To update, run: pip install --upgrade pip Done. Press any key to close the terminal Research on problem and attempt to correct: Consult the devcontainer.json docs postCreateCommand only runs after "the dev container has been assigned to a user for the first time" invocation of containerUser switches to root by default (1.1-2) imply that postCreateCommand just runs under the non-root user of my container I add the --user option to my pip call The full command is now "postCreateCommand": "pip3 install --user -r requirements.txt", (2.1) produces the same warning as the original call, indicating that postCreateCommand is still running as root
It turns out that my Dockerfile, in order to provide the correct environment for setup scripts, sets the user via ARG calls instead of the USER instruction, so the container user is implicitly root at the time postCreateCommand is invoked. It was sufficient to explicitly set remoteUser to the user created in my Dockerfile, before postCreateCommand in devcontainer.json: "remoteUser": "vscode", "postCreateCommand": "pip3 install --user -r requirements.txt",
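For reference, a minimal devcontainer.json sketch of how the two settings sit together (the image is a placeholder assumption; only remoteUser and postCreateCommand come from the fix above):
{
    "image": "mcr.microsoft.com/devcontainers/python:3.11",
    "remoteUser": "vscode",
    "postCreateCommand": "pip3 install --user -r requirements.txt"
}
With remoteUser set, the lifecycle commands such as postCreateCommand run as that user, so pip performs a per-user install instead of a root install.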
5
5
73,493,678
2022-8-25
https://stackoverflow.com/questions/73493678/ipython-deprecation-warning-when-importing-display
When I run: from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) I got /var/folders/6g/6gqq4lhx4jbcl4_tbrsxj3xr0000gq/T/ipykernel_5625/333572366.py:1: DeprecationWarning: Importing display from IPython.core.display is deprecated since IPython 7.14, please import from IPython display from IPython.core.display import display, HTML
Replace from IPython.core.display import display, HTML with from IPython.display import display, HTML, which is the import path the deprecation warning itself points to (see the IPython source for the deprecated module).
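Applied to the snippet in the question, the fixed version is:
from IPython.display import display, HTML

display(HTML("<style>.container { width:100% !important; }</style>"))
The call behaves the same; only the import location changes.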
4
12
73,492,654
2022-8-25
https://stackoverflow.com/questions/73492654/python-regex-find-all-caps-with-no-following-lowercase
How can I match runs of capital letters while excluding a trailing capital that is followed by lowercase characters? Consider this example: import re test1 = 'ThisIsATestTHISISATestTHISISATEST' re.findall(r'[A-Z]{2}[^a-z]+', test1) # ['THISISAT', 'THISISATEST'] Expectation: the first match should be 'THISISA' rather than 'THISISAT', since the trailing 'T' belongs to the following 'Test'.
Try (regex101): import re test1 = "ThisIsATestTHISISATestTHISISATEST" print(re.findall(r"[A-Z]{2}[A-Z]*(?![a-z])", test1)) Prints: ['THISISA', 'THISISATEST']
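For readers less familiar with lookaheads, here is a commented breakdown of the same pattern (the backtracking of [A-Z]* is what drops the trailing capital that starts the next word):
pattern = (
    r"[A-Z]{2}"   # at least two consecutive capitals to start a run
    r"[A-Z]*"     # then any further capitals (backtracks if needed)
    r"(?![a-z])"  # negative lookahead: the match must not be followed by a lowercase letter
)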
3
4
73,486,279
2022-8-25
https://stackoverflow.com/questions/73486279/how-to-find-element-by-attribute-and-text-in-a-singe-locator
How can I find an element with Playwright using a single locator expression? My element is: <div class="DClass">Hello</div> I wish to find the element by both its class and its text: myElement = self.page.locator('text="Hello",[class="DClass"]') Why does this not work?
If you separate the selectors with a comma, that acts as an OR, so the locator matches elements with either the text or the class. To require both, chain the selectors using >>. myElement = self.page.locator('text="Hello" >> [class="DClass"]')
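As an aside, and assuming a reasonably recent Playwright for Python release (not something stated in the question), the has_text argument offers another single-call way to combine a CSS selector with a text filter:
# hypothetical alternative; requires a Playwright version that supports has_text
myElement = self.page.locator('div.DClass', has_text="Hello")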
5
8
73,483,284
2022-8-25
https://stackoverflow.com/questions/73483284/how-to-quickly-fillna-with-a-sequence
I have a question about how to quickly fillna with a sequence in Python (pandas). I have a dataset like the following (the true dataset is longer):

Time Number
t0   NA
t1   NA
t2   NA
t3   0
t4   NA
t5   NA
t6   NA
t7   NA
t8   0
t9   NA

My requirement is to add numbers to the N lines before and after each non-blank line, where the sequence is range(-N, N+1). The interval between any two non-empty rows in the dataset is greater than a constant C, and N will be less than C, so there is no need to consider overlapping windows for the time being. Assuming N=2, the result I need is as follows:

Time Number
t0   NA
t1   -2
t2   -1
t3   0
t4   1
t5   2
t6   -2
t7   -1
t8   0
t9   1

At present, the only way I can think of is to use a loop, but the efficiency is low. Does pandas have a method to do this quickly?
There are still some unknowns in your question, like what happens if the intervals overlap. Here I will consider that a further interval overwrites the previous one (you can do the other way around with a change of code, see second part). Using rolling, groupby.cumcount, and a mask: s = df['Number'].notna().shift(-N, fill_value=False) m = s.rolling(2*N+1, min_periods=1).max().astype(bool) df['Number2'] = df.groupby(s.cumsum()).cumcount().sub(N).where(m) NB. I used a slightly different example to show the overlap. output: Time Number Number2 0 t0 NaN NaN 1 t1 NaN -2.0 2 t2 NaN -1.0 3 t3 0.0 0.0 4 t4 NaN 1.0 5 t5 NaN -2.0 # here we have an overlap, use latter value 6 t6 NaN -1.0 7 t7 0.0 0.0 8 t8 NaN 1.0 9 t9 NaN 2.0 10 t10 NaN NaN priority on first group s = df['Number'].notna().shift(N, fill_value=False)[::-1] m = s.rolling(2*N+1, min_periods=1).max().astype(bool) df['Number3'] = df.groupby(s.cumsum()).cumcount(ascending=False).rsub(N).where(m) output: Time Number Number2 Number3 0 t0 NaN NaN NaN 1 t1 NaN -2.0 -2.0 2 t2 NaN -1.0 -1.0 3 t3 0.0 0.0 0.0 4 t4 NaN 1.0 1.0 5 t5 NaN -2.0 2.0 # difference in behavior 6 t6 NaN -1.0 -1.0 7 t7 0.0 0.0 0.0 8 t8 NaN 1.0 1.0 9 t9 NaN 2.0 2.0 10 t10 NaN NaN NaN
4
4
73,484,719
2022-8-25
https://stackoverflow.com/questions/73484719/how-to-convert-list-to-list-of-list-for-adjacent-numbers
I have the list [31, 32,33, 1,2,3,4, 11,12,13,14] and I need to group adjacent (consecutive) numbers, i.e. i and i+1, into one sub-list. Expected output: [[1,2,3,4], [11,12,13,14], [31, 32,33]] l = [31, 32,33, 1,2,3,4, 11,12,13,14] l.sort() #sorted the items new_l = [] for i in l: temp_l = [] # temp list before appending to main list if i + 1 in l: # if i + 1 is present append to temp_list temp_l.append(i) new_l.append(temp_l) # temp_l has to append to main list My output is wrong: [[1], [2], [3], [], [11], [12], [13], [], [31], [32], []]
You can append an empty sub-list to the output list when the difference between the current number and the last number in the last sub-list in the output list is not 1, and keep appending the current number to the last sub-list of the output list: l = [31, 32,33, 1,2,3,4, 11,12,13,14] l.sort() output = [] for i in l: if not output or i - output[-1][-1] != 1: output.append([]) output[-1].append(i) output becomes: [[1, 2, 3, 4], [11, 12, 13, 14], [31, 32, 33]] Demo: https://replit.com/@blhsing/UnimportantValidTelecommunications
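As a side note, the same grouping is often written with itertools.groupby, keying each value by its offset from its position so that consecutive runs share a key. This is only an alternative sketch, not a change to the approach above:
from itertools import groupby

l = sorted([31, 32, 33, 1, 2, 3, 4, 11, 12, 13, 14])
output = [
    [value for _, value in group]  # keep the values of one consecutive run
    for _, group in groupby(enumerate(l), key=lambda p: p[1] - p[0])  # value minus index is constant within a run
]
print(output)  # [[1, 2, 3, 4], [11, 12, 13, 14], [31, 32, 33]]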
4
1
73,483,350
2022-8-25
https://stackoverflow.com/questions/73483350/why-defining-only-lt-makes-operation-possible
class Node: def __init__(self,a,b): self._a=a self._b=b def __lt__(self,other): return self._a<other._a a=Node(1,2) b=Node(0,4) print(a>b) The code above prints True. class Node: def __init__(self,a,b): self._a=a self._b=b def __lt__(self,other): return self._a<other._a def __eq__(self,other): return self._a==other._a a=Node(1,2) b=Node(0,4) print(a>=b) The code above raises TypeError: '>=' not supported between instances of 'Node' and 'Node'. Why does defining only __lt__ make the > (i.e. __gt__) operation possible? And why does defining both __lt__ and __eq__ not make >= (or <=) possible?
The Python documentation states: There are no swapped-argument versions of these methods (to be used when the left argument does not support the operation but the right argument does); rather, __lt__() and __gt__() are each other’s reflection, __le__() and __ge__() are each other’s reflection, and __eq__() and __ne__() are their own reflection. So if the left-hand-side argument doesn't implement a comparison operator while the right-hand-side implements its reflection, that reflection is called instead. That is why a > b works with only __lt__() defined: Python falls back to the reflected call b.__lt__(a). It also explains why a >= b fails even with __lt__() and __eq__() defined: the reflection of __ge__() is __le__(), which is not defined, and Python never combines __lt__() and __eq__() into __le__() on its own.
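As a related tip: if you want the full set of comparison operators without writing each one, the standard library's functools.total_ordering can derive the missing methods from __eq__ plus one ordering method. A minimal sketch using the question's class:
from functools import total_ordering

@total_ordering
class Node:
    def __init__(self, a, b):
        self._a = a
        self._b = b

    def __eq__(self, other):
        return self._a == other._a

    def __lt__(self, other):
        return self._a < other._a

a, b = Node(1, 2), Node(0, 4)
print(a >= b, a <= b, a > b)  # True False True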
3
5
73,481,220
2022-8-25
https://stackoverflow.com/questions/73481220/removing-index-from-pandas-data-frame-on-print
I'm really struggling to get this to print the way I want to. I've read through the documentation on removing index, but it seems like it still shows up. Here is my code: quotes = pd.read_csv("quotes.txt",header = None, index_col = False) quote_to_send = quotes.sample(ignore_index = True) print(quote_to_send) The text file isn't anything special, looks like this: "When you arise in the morning think of what a privilege it is to be alive, to think, to enjoy, to love..." - Marcus Aurelius "Either you run the day or the day runs you." - Jim Rohn .... The output of this looks like this: 0 0 You may have to fight a battle more than once ... How do I get rid of those random 0s?
The 0 on top is your column name, since you don't have one... The 0 on the left is your index, something that absolutely every dataframe needs. If you really want to see things without those essential pieces, you can use print(df.to_string(header=False, index=False)) When you arise in the morning think of what a privilege it is to be alive, to think, to enjoy, to love... - Marcus Aurelius Either you run the day or the day runs you. - Jim Rohn
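Applied to the example in the question, where the sampled dataframe is quote_to_send, that would be:
print(quote_to_send.to_string(header=False, index=False))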
3
4
73,479,715
2022-8-24
https://stackoverflow.com/questions/73479715/convert-local-file-url-to-file-path
I have a URL that points to a local file. 'file:///home/pi/Desktop/music/Radio%20Song.mp3' I need to somehow convert this into a traditional file path, like the os module employs. '/home/pi/Desktop/music/Radio Song.mp3' Right now I'm hacking it with the replace() method. path = file.replace('file://', '').replace('%20', ' ') I've looked at the os module, and it doesn't seem to have support for this. I've searched various ways of phrasing it, and I can't seem to find an answer. Am I just ignorant of the terminology? What's the proper way to do this?
The following would work: from urllib.request import url2pathname from urllib.parse import urlparse p = urlparse('file:///home/pi/Desktop/music/Radio%20Song.mp3') file_path = url2pathname(p.path) print(file_path) (thanks to user @MillerTime correctly pointing out that the solution will not remove file:// without the urlparse) Output: /home/pi/Desktop/music/Radio Song.mp3 urllib is a standard library, so no installation required. On a Windows machine, running just url2pathname would give you a valid file path right away, but it would be relative to the working directory of the script - e.g. running it from somewhere on D: drive: D:\home\pi\Desktop\music\Radio Song.mp3
5
5
73,477,369
2022-8-24
https://stackoverflow.com/questions/73477369/s3-bucket-sensor-for-new-file
I am working on an ETL pipeline using Dockerized Airflow. I want to trigger my pipeline whenever a new file is uploaded to an S3 bucket. Is there any S3 sensor in Airflow that checks for new files in a bucket? The sensor should ignore the files already in the location and only trigger when a new file is added to S3.
You have several options to achieve this goal: The best solution is creating S3 Event Notifications on object creation that send a message to SQS; in Airflow you can then create a sensor that checks for new messages and processes them. Alternatively, you can create a sensor which lists the files in the S3 bucket and adds them to a state store (a DB, for example) with state to_process; on the next run it compares the bucket listing against the files already in the state store to know whether there are new files, your DAG processes the records whose state is != done, and when processing finishes it updates the state to done. You can add other metadata such as created_at and processed_at, and other states such as error, to reprocess files in the next run or to alert your team. A rough sketch of the second approach is given below.
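As a rough, hedged sketch of the second approach (the class name, Variable key, bucket and prefix are illustrative assumptions, not an existing Airflow operator), a custom sensor could list the bucket with boto3 and only succeed when it sees keys it has not processed before:
import json

import boto3
from airflow.models import Variable
from airflow.sensors.base import BaseSensorOperator


class NewS3FileSensor(BaseSensorOperator):
    """Succeeds when the bucket/prefix contains keys not seen on previous runs."""

    def __init__(self, bucket: str, prefix: str = "", state_key: str = "s3_seen_keys", **kwargs):
        super().__init__(**kwargs)
        self.bucket = bucket
        self.prefix = prefix
        self.state_key = state_key  # Airflow Variable used as a simple state store

    def poke(self, context) -> bool:
        s3 = boto3.client("s3")
        # NOTE: list_objects_v2 returns at most 1000 keys per call; paginate for large buckets
        resp = s3.list_objects_v2(Bucket=self.bucket, Prefix=self.prefix)
        current = {obj["Key"] for obj in resp.get("Contents", [])}
        seen = set(json.loads(Variable.get(self.state_key, default_var="[]")))
        new_keys = current - seen
        if new_keys:
            # remember everything seen so far, so the next run only reacts to newer files
            Variable.set(self.state_key, json.dumps(sorted(current)))
            return True
        return False
The SQS-based option is usually cleaner, since S3 Event Notifications push one message per new object and the sensor only has to wait for messages instead of re-listing the bucket.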
3
4
73,477,197
2022-8-24
https://stackoverflow.com/questions/73477197/how-do-you-use-either-databricks-job-task-parameters-or-notebook-variables-to-se
The goal is to be able to use one script to create different reports based on a filter. I want my Databricks Job task parameters and notebook variables to share the same value for filtering purposes. This is how I declared the widget and stored its value in a variable: dbutils.widgets.text(name='field', defaultValue='', label='field') f1 = dbutils.widgets.get('field')
There are two methods to use with widgets: .text() and .get(). One creates the widget the first time and the other grabs the value from the widget. Here are some sample screenshots from a class I teach. The .text() method creates the widget and sets the value. It only has to be executed once and can be commented out afterwards; the widget has to be recreated if you move the code from one Databricks workspace to another. In this example, I have a notebook that reads a csv file and performs a full load of a delta table. The source file, destination path, debug flag, file schema and partition count are passed as parameters. I am using the Adventure Works data files. The parent notebook calls the child notebook 15 times to load and create hive tables for the Adventure Works schema. The same can be done for report file creation. I hope this explains and shows how to use widgets effectively and call a notebook using the dbutils.notebook.run() method. You can translate the .run() method calls to tasks in a Databricks job. What I do not like is the jobs interface: there is a lot of clicking and typing versus cut/paste when using a notebook, although you can use the JSON tab to the right. To be honest, I usually use ADF for scheduling since most of my clients' data is hybrid in nature. Here is a screenshot of a sample job with a task to load the accounts hive table. Last but not least, the job runs to success.
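To make the pattern concrete without the screenshots, here is a minimal sketch; the notebook path, table name and parameter value are assumptions for illustration only:
# child notebook: create the widget once, then read its value on every run
dbutils.widgets.text(name='field', defaultValue='', label='field')
f1 = dbutils.widgets.get('field')
report_df = spark.table('sales').where(f"region = '{f1}'")  # hypothetical filtering logic

# parent notebook (or a Databricks Job task): pass the same value through the 'field' parameter
dbutils.notebook.run('/Repos/reports/child_report', 600, {'field': 'EMEA'})
When the child notebook is scheduled as a job task instead, adding a task parameter named field feeds the same widget, so the one script serves both entry points.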
3
4
73,476,388
2022-8-24
https://stackoverflow.com/questions/73476388/creating-user-name-from-name-in-python
I have a spreadsheet with data. There I have a name like Roger Smith. I would like to have the user name rsmith. Therefore, the first letter of the first name followed by the family name. How can I do this in Python?
def make_username(full_name: str) -> str: first_names, last_name = full_name.lower().rsplit(maxsplit=1) return first_names[0] + last_name print(make_username("Roger M. Smith")) Output: rsmith The use of rsplit is to ensure that in case someone has more than one first name, the last name is still taken properly. I assume that last names will not have spaces in them. Note however that depending on your use case, you may need to perform additional operations to ensure that you don't get duplicates.
3
7
73,442,335
2022-8-22
https://stackoverflow.com/questions/73442335/how-to-upload-a-large-file-%e2%89%a53gb-to-fastapi-backend
I am trying to upload a large file (≥3GB) to my FastAPI server, without loading the entire file into memory, as my server has only 2GB of free memory. Server side: @app.post("/uploadfiles") async def uploadfiles(upload_file: UploadFile = File(...)): pass Client side: file_name="afd.tgz" m = MultipartEncoder(fields = {"upload_file":open(file_name,'rb')}) prefix = "http://xxx:5000" url = "{}/v1/uploadfiles".format(prefix) try: req = requests.post( url, data=m, verify=False, ) which returns the following 422 (Unprocessable entity) error: HTTP 422 {"detail":[{"loc":["body","upload_file"],"msg":"field required","type":"value_error.missing"}]} I am not sure what MultipartEncoder actually sends to the server, or why the request does not match what the endpoint expects. Any ideas?
With requests-toolbelt, you have to pass the filename as well, when declaring the field for upload_file, as well as set the Content-Type header—which is the main reason for the error you get, as you are sending the request without setting the Content-Type header to multipart/form-data, followed by the necessary boundary string—as shown in the docs. Example: filename = 'my_file.txt' m = MultipartEncoder(fields={'upload_file': (filename, open(filename, 'rb'))}) r = requests.post(url, data=m, headers={'Content-Type': m.content_type}) print(r.request.headers) # confirm that the 'Content-Type' header has been set However, I wouldn't recommend using requests-toolbelt, as it hasn't provided a new release for over three years now. I would suggest using Python requests instead, as demonstrated in this answer and this answer (also see Streaming Uploads and Chunk-Encoded Requests), or, preferably, use httpx, which supports sending requests asynchronously (if you had to send multiple requests simultaneously), as well as streaming File uploads by default, meaning that only one chunk at a time would be loaded into memory (see the docs). Option 1 (Simple and Fast) - Upload only File(s) using .stream() The example below demonstrates an approach, which was initially presented in this answer, on how to upload a file in a fast way compared to the one documented by FastAPI. As previously explained in the linked answer above, when you declare an UploadFile object, FastAPI/Starlette, under the hood, uses a SpooledTemporaryFile with the max_size attribute set to 1MB, meaning that the file data is spooled in memory, until the file size exceeds the max_size, at which point the contents will be written to disk; more specifically, the file data will be written to a temporary file on your OS's temporary directory—see this answer on how to find/change it—that you later need to read the data from, using the .read() method. Hence, this whole process makes uploading a file quite slow; especially, if it is a large one (as you'll see in Option 3 later on). To avoid that and speed up the process as well, as the linked answer above suggested, one could access the request body as a stream. As per Starlette documentation, if you use the request.stream() method, the (request) byte chunks are provided without storing the entire body into memory (and later to a temporary file, if the body size exceeds 1MB). This method allows you to read and process the byte chunks as they arrive. Even though the endpoint below is designed to only expect a single file, a client could make multiple calls to that endpoint (as demonstrated in the client examples below), in order to upload multiple files. Also, the endpoint below, compared to the ones from the other options later on, cannot accept Form data. One, however, could use the request headers—although it would be advisable to use the request body instead, as demonstrated in Options 2 and 3—in order to send some extra data (Note though that HTTP header values are restricted by server implementations; hence, be aware of the limits defined by the various web servers). The example below saves the incoming files to disk, but if one would like having them stored to RAM instead, see the "Update" section of the linked answer above, where this approach was first presented. Also, the filenames are encoded/quoted on client side and decoded/unquoted on server side. This is to ensure that the uploading wouldn't fail, if one tried uploading files that their name had non-ascii/unicode characters in it. 
app.py from fastapi import FastAPI, Request, HTTPException from fastapi.responses import HTMLResponse from fastapi.templating import Jinja2Templates from urllib.parse import unquote import aiofiles import os app = FastAPI() templates = Jinja2Templates(directory="templates") @app.post('/upload') async def upload(request: Request): try: filename = request.headers['filename'] filename = unquote(filename) filepath = os.path.join('./', os.path.basename(filename)) async with aiofiles.open(filepath, 'wb') as f: async for chunk in request.stream(): await f.write(chunk) except Exception: raise HTTPException(status_code=500, detail='Something went wrong') return {"message": f"Successfuly uploaded: {filename}"} @app.get("/", response_class=HTMLResponse) async def main(request: Request): return templates.TemplateResponse(request=request, name="index.html") templates/index.html <!DOCTYPE html> <html> <body> <label for="fileInput">Choose file(s) to upload</label> <input type="file" id="fileInput" name="fileInput" onchange="reset()" multiple><br> <input type="button" value="Submit" onclick="go()"> <p id="response"></p> <script> var resp = document.getElementById("response"); function reset() { resp.innerHTML = ""; } function go() { var fileInput = document.getElementById('fileInput'); if (fileInput.files[0]) { for (const file of fileInput.files) { let reader = new FileReader(); reader.onload = function () { uploadFile(reader.result, file.name); } reader.readAsArrayBuffer(file); } } } function uploadFile(contents, filename) { var headers = new Headers(); filename = encodeURI(filename); headers.append("filename", filename); fetch('/upload', { method: 'POST', headers: headers, body: contents, }) .then(response => response.json()) // or, response.text(), etc. .then(data => { resp.innerHTML += JSON.stringify(data); // data is a JSON object }) .catch(error => { console.error(error); }); } </script> </body> </html> test.py import httpx import time from urllib.parse import quote url = 'http://127.0.0.1:8000/upload' filename = 'bigFile.zip' headers = {'filename': quote(filename)} start = time.time() with open(filename, "rb") as f: r = httpx.post(url=url, data=f, headers=headers) end = time.time() print(f'Time elapsed: {end - start}s') print(r.json()) To upload multiple files, you could use: # ... import glob, os paths = glob.glob("big_files_dir/*", recursive=True) for p in paths: with open(p, "rb") as f: headers = {'filename': quote(os.path.basename(p))} # r = httpx... Option 2 (Fast) - Upload both File and Form data using .stream() The example below takes the suggested solution described above a step further, by using the streaming-form-data library, which provides a Python parser for parsing streaming multipart/form-data input chunks. This means that one not only can upload Form data along with File(s), but also the backend wouldn't have to wait for the entire request body to be received, in order to start parsing the data (as is the case with Option 3 below)—in other words, the parser would parse the data as they arrive, and that's what makes it fast. The way it is done is that you initialize the main parser class (passing the HTTP request headers that help determine the input Content-Type, and hence, the boundary used to separate each body part in the multipart payload, etc.), and associate one of the Target classes to define what should be done with a field, when it has been extracted out of the request body. 
For instance, FileTarget would stream the data to a file on disk, whereas ValueTarget would hold the data in memory (the ValueTarget class can be used for either Form or File data, if you don't need the file(s) saved to the disk). It is also possible to define your own custom Target classes. It should be noted that streaming-form-data does not currently support async calls to I/O operations, meaning that the writing of chunks is synchronous (within a def function). Though, as the endpoint in the example below uses .stream() (which is an async def function), it will give up time for other tasks/requests in the event loop to run, while waiting for data to become available from the stream. You could also run the function for parsing the received data in a separate thread and await it, using Starlette's run_in_threadpool()—e.g., await run_in_threadpool(parser.data_received, chunk)—which is internally used by FastAPI, when you make calls to the async methods of UploadFile, as shown here. For more details on def vs async def in FastAPI, see this answer. Using the solution below would also allow one to perform certain validation tasks, e.g., ensuring that the data size is not exceeding a certain limit—which couldn't be done with the UploadFile approach while data are streaming, as with UploadFile, the request gets inside the endpoint, after the file is fully uploaded. The solution below achieves that using MaxSizeValidator. However, as this would only be applied to File/Form fields that you had defined—and hence, it wouldn't prevent a malicious user from sending an extremely large request body (using random File/Form fields, for instance), which could result in consuming server resources in a way that the application may end up crashing or become unresponsive to legitimate users—the example below incorporates a custom MaxBodySizeValidator that could be used to ensure that the request body size does not exceed a pre-defined maximum value. Both validators desribed above solve the issue of limiting upload file size, as well as the entire request body size, in a likely better way than the one desribed here, which instead uses the UploadFile approach that requires the file to be entirely received and saved to the temporary directory, before performing any validation checks (not to mention that the approach described in that github post does not take into account the request body size at all, which makes the approach vulnerable to the attack mentioned earlier, where malicious actors may attempt to overload the server with excessively large requests). Using an ASGI middleware like this could be an alternative solution. Also, in case you are running Uvicorn with Gunicorn, you could also define limits, regarding the number of HTTP header fields in a request, the size of an HTTP request header field, etc. (see the docs). Similar limits could also be applied when using reverse proxy servers, such as Nginx (which also allows you to set the maximum request body size using the client_max_body_size directive). A few notes for the example below. Since this approach uses the Request object directly, and not UploadFile/Form, the endpoint won't be properly documented in the Swagger auto-generated docs at /docs (if that's important for your application at all). This also means that you have to perform some validation checks on your own, such as whether the required fields for the endpoint were received or not, and if yes, whether they were in the expected format. 
For instance, for the data field, you could check whether the data.value is empty or not (empty would mean that the user has either not included that field in the multipart/form-data, or sent an empty value), as well as if isinstance(data.value, str). As for the file(s), you could check whether file_.multipart_filename is not empty; however, since a filename could likely not be included in the Content-Disposition by the user in their client request, you would also may want to check if the file exists in the filesystem, using os.path.isfile(filepath), in order to ensure that a file has been indeed uploaded (Note: you need to make sure there is no pre-existing file with the same name in that specified location; otherwise, the aforementioned function would always return True, even when the user did not send the file. You could always generate unique UUIDs for the filenames, as suggested here and here). Regarding the applied size limits, the MAX_REQUEST_BODY_SIZE below must be larger than MAX_FILE_SIZE (plus all of the Form values size) you expcect to receive, as the raw request body (that you get from using the .stream() method) includes a few more bytes for the --boundary and Content-Disposition header for each of the fields in the body. Hence, you should add a few more bytes, depending on the Form values and the number of files you expect to receive (hence the MAX_FILE_SIZE + 1024 below). app.py from fastapi import FastAPI, Request, HTTPException, status from streaming_form_data import StreamingFormDataParser from streaming_form_data.targets import FileTarget, ValueTarget from streaming_form_data.validators import MaxSizeValidator import streaming_form_data from starlette.requests import ClientDisconnect from urllib.parse import unquote import os MAX_FILE_SIZE = 1024 * 1024 * 1024 * 4 # = 4GB MAX_REQUEST_BODY_SIZE = MAX_FILE_SIZE + 1024 app = FastAPI() class MaxBodySizeException(Exception): def __init__(self, body_len: str): self.body_len = body_len class MaxBodySizeValidator: def __init__(self, max_size: int): self.body_len = 0 self.max_size = max_size def __call__(self, chunk: bytes): self.body_len += len(chunk) if self.body_len > self.max_size: raise MaxBodySizeException(body_len=self.body_len) @app.post('/upload') async def upload(request: Request): body_validator = MaxBodySizeValidator(MAX_REQUEST_BODY_SIZE) filename = request.headers.get('filename') if not filename: raise HTTPException(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, detail='Filename header is missing') try: filename = unquote(filename) filepath = os.path.join('./', os.path.basename(filename)) file_ = FileTarget(filepath, validator=MaxSizeValidator(MAX_FILE_SIZE)) data = ValueTarget() parser = StreamingFormDataParser(headers=request.headers) parser.register('file', file_) parser.register('data', data) async for chunk in request.stream(): body_validator(chunk) parser.data_received(chunk) except ClientDisconnect: print("Client Disconnected") except MaxBodySizeException as e: raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE, detail=f'Maximum request body size limit ({MAX_REQUEST_BODY_SIZE} bytes) exceeded ({e.body_len} bytes read)') except streaming_form_data.validators.ValidationError: raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE, detail=f'Maximum file size limit ({MAX_FILE_SIZE} bytes) exceeded') except Exception: raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail='There was an error uploading the file') if not file_.multipart_filename: 
raise HTTPException(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, detail='File is missing') print(data.value.decode()) print(file_.multipart_filename) return {"message": f"Successfuly uploaded {filename}"} As mentioned earlier, to upload the data (on client side), you can use the HTTPX library, which supports streaming file uploads by default, and thus allows you to send large streams/files without loading them entirely into memory. You can pass additional Form data as well, using the data argument. Below, a custom header, i.e., filename, is used to pass the filename to the server, so that the server instantiates the FileTarget class with that name (you could use the X- prefix for custom headers, if you wish; however, it is not officially recommended anymore). test.py import httpx import time from urllib.parse import quote url ='http://127.0.0.1:8000/upload' filename = 'bigFile.zip' files = {'file': open(filename, 'rb')} headers = {'filename': quote(filename)} data = {'data': 'Hello World!'} with httpx.Client() as client: start = time.time() r = client.post(url, data=data, files=files, headers=headers) end = time.time() print(f'Time elapsed: {end - start}s') print(r.status_code, r.json(), sep=' ') Upload multiple Files and Form data using .stream() One way to upload multiple files would be to perform multiple HTTP requests to that endpoint, one for each file, as explained in Option 1 earlier. Another way, however, since the streaming-form-data package allows defining multiple files and form data, would be to use a header for each filename, or use random names on server side—the filename is needed, as the parser requires pre-defining the filepath for the FileTarget() class—and initialize the FileTarget class for each file (as explained earlier, if you don't need the file to be saved to disk, you could use ValueTarget instead). If you chose using random names, once the file is fully uploaded (when no more chunks left in request.stream()), you could optionally rename it to file_.multipart_filename (if available), using os.rename(). Regardless, in a real-world scenario, you should never trust the filename (or even the file extension) passed by the user, as it might be malicious, trying to extract or replace files in your system, and thus, it is always a good practice to add some random alphanumeric characters at the end/front of the filename, if not using a completely random name, for each file that is uploaded. On client side, in Python, you should pass a list of files, as described in the httpx's documentation. Note that you should use a different key/name for each file, so that they don't overlap when parsing them on server side, e.g., files = [('file0', open('bigFile.zip', 'rb')),('file1', open('otherBigFile.zip', 'rb'))]. You could also test the example below, using the HTML template at /, which uses JavaScript to prepare and send the request with multiple files. For simplicity purposes, the example below does not perform any validation checks on the body size; however, if you wish, you could still perform those checks, using the code provided in the previous example. 
app.py from fastapi import FastAPI, Request, HTTPException, status from fastapi.responses import HTMLResponse from fastapi.templating import Jinja2Templates from starlette.requests import ClientDisconnect from urllib.parse import unquote import streaming_form_data from streaming_form_data import StreamingFormDataParser from streaming_form_data.targets import FileTarget, ValueTarget import os app = FastAPI() templates = Jinja2Templates(directory="templates") @app.get("/", response_class=HTMLResponse) async def main(request: Request): return templates.TemplateResponse(request=request, name="index.html") @app.post('/upload') async def upload(request: Request): try: parser = StreamingFormDataParser(headers=request.headers) data = ValueTarget() parser.register('data', data) headers = dict(request.headers) filenames = [] i = 0 while True: filename = headers.get(f'filename{i}', None) if filename is None: break filename = unquote(filename) filenames.append(filename) filepath = os.path.join('./', os.path.basename(filename)) file_ = FileTarget(filepath) parser.register(f'file{i}', file_) i += 1 async for chunk in request.stream(): parser.data_received(chunk) except ClientDisconnect: print("Client Disconnected") except Exception: raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail='There was an error uploading the file') print(data.value.decode()) return {"message": f"Successfuly uploaded {filenames}"} templates/index.html <!DOCTYPE html> <html> <body> <input type="file" id="fileInput" name="files" onchange="reset()" multiple><br> <input type="button" value="Submit" onclick="submitUsingFetch()"> <p id="response"></p> <script> var resp = document.getElementById("response"); function reset() { resp.innerHTML = ""; } function submitUsingFetch() { var fileInput = document.getElementById('fileInput'); if (fileInput.files[0]) { var formData = new FormData(); var headers = new Headers(); formData.append("data", "Hello World!"); var i = 0; for (const file of fileInput.files) { filename = encodeURI(file.name); headers.append(`filename${i}`, filename); formData.append(`file${i}`, file, filename); i++; } fetch('/upload', { method: 'POST', headers: headers, body: formData, }) .then(response => response.json()) // or, response.text(), etc. .then(data => { resp.innerHTML = JSON.stringify(data); // data is a JSON object }) .catch(error => { console.error(error); }); } } </script> </body> </html> test.py To automatically load multiple files, see the clients in this answer and this answer. import httpx import time from urllib.parse import quote url ='http://127.0.0.1:8000/upload' filename0 = 'bigFile.zip' filename1 = 'otherBigFile.zip' headers = {'filename0': quote(filename0), 'filename1': quote(filename1)} files = [('file0', open(filename0, 'rb')), ('file1', open(filename1, 'rb'))] data = {'data': 'Hello World!'} with httpx.Client() as client: start = time.time() r = client.post(url, data=data, files=files, headers=headers) end = time.time() print(f'Time elapsed: {end - start}s') print(r.status_code, r.json(), sep=' ') Upload both Files and JSON body In case you would like to upload both file(s) and JSON instead of Form data, you could use the approach described in Method 3 of this answer, thus also saving you from performing manual checks on the received Form fields, as explained earlier (see the linked answer for more details). To that end, please make the following changes in the code above. For an HTML/JS example, please refer to this answer. app.py #... 
from fastapi import Form from pydantic import BaseModel, ValidationError from typing import Optional from fastapi.encoders import jsonable_encoder #... class Base(BaseModel): name: str point: Optional[float] = None is_accepted: Optional[bool] = False def checker(data: str = Form(...)): try: return Base.model_validate_json(data) except ValidationError as e: raise HTTPException(detail=jsonable_encoder(e.errors()), status_code=status.HTTP_422_UNPROCESSABLE_ENTITY) @app.post('/upload') async def upload(request: Request): #... # place the below after the try-except block in the example given earlier model = checker(data.value.decode()) print(dict(model)) test.py #... import json data = {'data': json.dumps({"name": "foo", "point": 0.13, "is_accepted": False})} #... Option 3 (Slow) - Upload both File and Form data using FastAPI's UploadFile and Form This option, for the reasons outlined in the beginning of this answer, is much slower than the previous two. This approach is similar to using await request.form(), as demonstrated in this answer and Option 1 of this answer. If you would like to use a normal def endpoint instead, see this answer. app.py from fastapi import FastAPI, File, UploadFile, Form, HTTPException, status import aiofiles import os CHUNK_SIZE = 1024 * 1024 # adjust the chunk size as desired app = FastAPI() @app.post("/upload") async def upload(file: UploadFile = File(...), data: str = Form(...)): try: filepath = os.path.join('./', os.path.basename(file.filename)) async with aiofiles.open(filepath, 'wb') as f: while chunk := await file.read(CHUNK_SIZE): await f.write(chunk) except Exception: raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail='There was an error uploading the file') finally: await file.close() return {"message": f"Successfuly uploaded {file.filename}"} As mentioned earlier, using this option would take longer for the file upload to complete, and as HTTPX uses a default timeout of 5 seconds, you will most likely get a ReadTimeout exception when the file is rather large, as the server will need some time to read the SpooledTemporaryFile in chunks and write the contents to a permanent location on the disk. Thus, you can configure the timeout (see the Timeout class in the source code too), and more specifically, the read timeout, which "specifies the maximum duration to wait for a chunk of data to be received (for example, a chunk of the response body)". If set to None instead of some positive numerical value, there will be no timeout on read. test.py import httpx import time url ='http://127.0.0.1:8000/upload' files = {'file': open('bigFile.zip', 'rb')} data = {'data': 'Hello World!'} timeout = httpx.Timeout(None, read=180.0) with httpx.Client(timeout=timeout) as client: start = time.time() r = client.post(url, data=data, files=files) end = time.time() print(f'Time elapsed: {end - start}s') print(r.status_code, r.json(), sep=' ')
16
48
73,433,322
2022-8-21
https://stackoverflow.com/questions/73433322/tqdm-progress-bar-with-docker-logs
I am using tqdm to display various progress bars for my Python console application. For the production deployment of the application, I use Docker. The progress bars work fine when running the Python application in a terminal. However, when it is Dockerized and the terminal output is accessed through docker logs, the progress bar does not function because, as far as I understand, it is not an interactive terminal. It looks like the progress gets rendered if docker logs is dumped after the progress bar has completed, but I am not sure whether there are other conditions for this to happen (output buffering?). I would like to modify my tqdm behavior so that it detects when it is run in a non-interactive Dockerized environment, and instead of displaying an interactive progress bar, it logs completion statements (10% done, X iterations/s) regularly. This way the progress durations and such would be more accessible when running the application in production. What would be the way to attach such custom behavior to tqdm?
The package tqdm_loggable is a drop-in replacement for tqdm that works well for this use case. To install: pip install tqdm-loggable Then just replace any imports of tqdm (from tqdm import tqdm) with: from tqdm_loggable.auto import tqdm Be sure to set the logging level to INFO to see the results in the logs: import logging logging.basicConfig(level=logging.INFO)
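Since it is a drop-in replacement, a minimal end-to-end sketch looks like plain tqdm usage (the loop body is just a stand-in for real work):
import logging
import time

from tqdm_loggable.auto import tqdm

logging.basicConfig(level=logging.INFO)

for _ in tqdm(range(100), desc="processing"):
    time.sleep(0.1)  # placeholder work; in docker logs the progress shows up as periodic log lines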
5
1
73,393,235
2022-8-17
https://stackoverflow.com/questions/73393235/polars-how-to-compute-rolling-ewm-grouped-by-column
What's the right way to perform a group_by + rolling aggregate operation in polars? For some reason performing an ewm_mean over a rolling groupby gives me the list of all the ewm's rolling by time. For example take the dataframe below: portfolios = pl.from_repr(""" ┌─────────────────────┬────────┬───────────┐ │ ts ┆ symbol ┆ signal_0 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ str ┆ f64 │ ╞═════════════════════╪════════╪═══════════╡ │ 2022-02-14 09:20:00 ┆ A ┆ -1.704301 │ │ 2022-02-14 09:20:00 ┆ AA ┆ -1.181743 │ │ 2022-02-14 09:50:00 ┆ A ┆ 1.040125 │ │ 2022-02-14 09:50:00 ┆ AA ┆ 0.776798 │ │ 2022-02-14 10:20:00 ┆ A ┆ 1.934686 │ │ 2022-02-14 10:20:00 ┆ AA ┆ 1.480892 │ │ 2022-02-14 10:50:00 ┆ A ┆ 2.073418 │ │ 2022-02-14 10:50:00 ┆ AA ┆ 1.623698 │ │ 2022-02-14 11:20:00 ┆ A ┆ 2.088835 │ │ 2022-02-14 11:20:00 ┆ AA ┆ 1.741544 │ └─────────────────────┴────────┴───────────┘ """) Here, I want to group by symbol and get the rolling mean for signal_0 at every timestamp. Unfortunately this doesn't work: portfolios.rolling("ts", group_by="symbol", period="1d").agg( pl.col("signal_0").ewm_mean(half_life=0.1).alias(f"signal_0_mean") ) shape: (10, 3) ┌────────┬─────────────────────┬─────────────────────────────────┐ │ symbol ┆ ts ┆ signal_0_mean │ │ --- ┆ --- ┆ --- │ │ str ┆ datetime[μs] ┆ list[f64] │ ╞════════╪═════════════════════╪═════════════════════════════════╡ │ A ┆ 2022-02-14 09:20:00 ┆ [-1.704301] │ │ A ┆ 2022-02-14 09:50:00 ┆ [-1.704301, 1.037448] │ │ A ┆ 2022-02-14 10:20:00 ┆ [-1.704301, 1.037448, 1.93381] │ │ A ┆ 2022-02-14 10:50:00 ┆ [-1.704301, 1.037448, … 2.0732… │ │ A ┆ 2022-02-14 11:20:00 ┆ [-1.704301, 1.037448, … 2.0888… │ │ AA ┆ 2022-02-14 09:20:00 ┆ [-1.181743] │ │ AA ┆ 2022-02-14 09:50:00 ┆ [-1.181743, 0.774887] │ │ AA ┆ 2022-02-14 10:20:00 ┆ [-1.181743, 0.774887, 1.480203… │ │ AA ┆ 2022-02-14 10:50:00 ┆ [-1.181743, 0.774887, … 1.6235… │ │ AA ┆ 2022-02-14 11:20:00 ┆ [-1.181743, 0.774887, … 1.7414… │ └────────┴─────────────────────┴─────────────────────────────────┘ If I wanted to do this in pandas, I would write: portfolios.to_pandas().set_index(["ts", "symbol"]).groupby(level=1)["signal_0"].transform( lambda x: x.ewm(halflife=10).mean() ) Which would yield: ts symbol 2022-02-14 09:20:00 A -1.704301 AA -1.181743 2022-02-14 09:50:00 A -0.284550 AA -0.168547 2022-02-14 10:20:00 A 0.507021 AA 0.419785 2022-02-14 10:50:00 A 0.940226 AA 0.752741 2022-02-14 11:20:00 A 1.202843 AA 0.978820 Name: signal_0, dtype: float64
You were close. Since ewm_mean produces an estimate for each observation in each window, you simply need to specify that you want the last calculated value in each rolling window. ( portfolios .rolling("ts", group_by="symbol", period="1d") .agg( pl.col("signal_0").ewm_mean(half_life=10).last().alias(f"signal_0_mean") ) .sort('ts', 'symbol') ) shape: (10, 3) ┌────────┬─────────────────────┬───────────────┐ │ symbol ┆ ts ┆ signal_0_mean │ │ --- ┆ --- ┆ --- │ │ str ┆ datetime[μs] ┆ f64 │ ╞════════╪═════════════════════╪═══════════════╡ │ A ┆ 2022-02-14 09:20:00 ┆ -1.704301 │ │ AA ┆ 2022-02-14 09:20:00 ┆ -1.181743 │ │ A ┆ 2022-02-14 09:50:00 ┆ -0.28455 │ │ AA ┆ 2022-02-14 09:50:00 ┆ -0.168547 │ │ A ┆ 2022-02-14 10:20:00 ┆ 0.507021 │ │ AA ┆ 2022-02-14 10:20:00 ┆ 0.419785 │ │ A ┆ 2022-02-14 10:50:00 ┆ 0.940226 │ │ AA ┆ 2022-02-14 10:50:00 ┆ 0.752741 │ │ A ┆ 2022-02-14 11:20:00 ┆ 1.202844 │ │ AA ┆ 2022-02-14 11:20:00 ┆ 0.97882 │ └────────┴─────────────────────┴───────────────┘
4
5
73,433,565
2022-8-21
https://stackoverflow.com/questions/73433565/how-to-run-multiple-camera-in-threading-using-python
Below is the code I used to play multiple videos in parallel using a multi-threading pool, but only one video is playing for each input. I want each video to open separately, not combined.

import concurrent.futures
import cv2

RTSP_URL = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4"

RTSP_List = [RTSP_URL, RTSP_URL, RTSP_URL, RTSP_URL]


def url_to_video(url):
    video = cv2.VideoCapture(url)
    while True:
        _, frame = video.read()
        cv2.imshow("RTSP", frame)
        k = cv2.waitKey(1)
        if k == ord('q'):
            break
    video.release()
    cv2.destroyAllWindows()

while True:
    with concurrent.futures.ThreadPoolExecutor() as executor:
        executor.map(url_to_video, RTSP_List)

How do I play each video separately?
You just need each thread to use a different name for the window in cv2.imshow, so that each thread generates a separate window. You should also place the windows somewhere distinct so that they don't appear one over the other; I just added an index to each entry so that each index gets its own position on screen and its own window title. Also, you shouldn't destroy all windows when one of them is done.

import concurrent.futures
import cv2

RTSP_URL = r"C:/Users/Ahmed/Desktop/test.mp4"

RTSP_List = [(RTSP_URL,0), (RTSP_URL,1), (RTSP_URL,2), (RTSP_URL,3)]


def url_to_video(tup):
    url,index = tup
    video = cv2.VideoCapture(url)
    while True:
        _, frame = video.read()
        cv2.imshow(f"RTSP {index}", frame)
        cv2.moveWindow(f"RTSP {index}", index*300, 0)
        k = cv2.waitKey(1)
        if k == ord('q'):
            break
    video.release()

while True:
    with concurrent.futures.ThreadPoolExecutor() as executor:
        executor.map(url_to_video, RTSP_List)
    cv2.destroyAllWindows()

This code works on Windows because the win32 backend allows each window to belong to a thread. For other operating systems and backends you can use the following non-threaded code.

import cv2
import asyncio

RTSP_URL = r"C:/Users/Ahmed/Desktop/test.mp4"

RTSP_List = [(RTSP_URL,0), (RTSP_URL,1), (RTSP_URL,2), (RTSP_URL,3)]
break_all = False

async def url_to_video(tup):
    global break_all
    url,index = tup
    video = cv2.VideoCapture(url)
    while True:
        result, frame = await asyncio.get_event_loop().run_in_executor(None, video.read)
        if not result:
            break
        cv2.imshow(f"RTSP {index}", frame)
        cv2.moveWindow(f"RTSP {index}", index*300, 0)
        k = cv2.waitKey(1)
        if k == ord('q'):
            break_all = True
        if break_all:
            break
    video.release()

async def main():
    global break_all
    break_all = False
    await asyncio.gather(*[url_to_video(x) for x in RTSP_List])

while True:
    asyncio.run(main())
    cv2.destroyAllWindows()
4
8
73,427,091
2022-8-20
https://stackoverflow.com/questions/73427091/polars-replace-part-of-string-in-column-with-value-of-other-column
So I have a Polars dataframe looking as such

df = pl.DataFrame(
    {
        "ItemId": [15148, 15148, 24957],
        "SuffixFactor": [19200, 200, 24],
        "ItemRand": [254, -1, -44],
        "Stat0": ['+5 Defense', '+$i Might', '+9 Vitality'],
        "Amount": ['', '7', '']
    }
)

I want to replace $i in the column "Stat0" with Amount whenever Stat0 contains $i. I have tried a couple of different things, such as:

df = df.with_columns(
    pl.col('Stat0').str.replace(r'\$i', pl.col('Amount'))
)

Expected result:

result = pl.DataFrame(
    {
        "ItemId": [15148, 15148, 24957],
        "SuffixFactor": [19200, 200, 24],
        "ItemRand": [254, -1, -44],
        "Stat0": ['+5 Defense', '+7 Might', '+9 Vitality'],
        "Amount": ['', '7', '']
    }
)

shape: (3, 5)
┌────────┬──────────────┬──────────┬─────────────┬────────┐
│ ItemId ┆ SuffixFactor ┆ ItemRand ┆ Stat0       ┆ Amount │
│ ---    ┆ ---          ┆ ---      ┆ ---         ┆ ---    │
│ i64    ┆ i64          ┆ i64      ┆ str         ┆ str    │
╞════════╪══════════════╪══════════╪═════════════╪════════╡
│ 15148  ┆ 19200        ┆ 254      ┆ +5 Defense  ┆        │
│ 15148  ┆ 200          ┆ -1       ┆ +7 Might    ┆ 7      │
│ 24957  ┆ 24           ┆ -44      ┆ +9 Vitality ┆        │
└────────┴──────────────┴──────────┴─────────────┴────────┘

But this doesn't seem to work. I hope someone can help. Best regards
As of Polars 0.14.4, the replace and replace_all expressions allow an Expression for the value parameter. Thus, we can solve this more simply as: df.with_columns( pl.col('Stat0').str.replace(r'\$i', pl.col('Amount')) ) shape: (3, 5) ┌────────┬──────────────┬──────────┬─────────────┬────────┐ │ ItemId ┆ SuffixFactor ┆ ItemRand ┆ Stat0 ┆ Amount │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ str ┆ str │ ╞════════╪══════════════╪══════════╪═════════════╪════════╡ │ 15148 ┆ 19200 ┆ 254 ┆ +5 Defense ┆ │ │ 15148 ┆ 200 ┆ -1 ┆ +7 Might ┆ 7 │ │ 24957 ┆ 24 ┆ -44 ┆ +9 Vitality ┆ │ └────────┴──────────────┴──────────┴─────────────┴────────┘
9
13
73,447,258
2022-8-22
https://stackoverflow.com/questions/73447258/filtering-selected-columns-based-on-column-aggregate
I wish to select only columns with fewer than 3 unique values. I can generate a boolean mask via pl.all().n_unique() < 3, but I don't know if I can use that mask via the polars API for this. Currently, I am solving it via python. Is there a more idiomatic way? import polars as pl, pandas as pd df = pl.DataFrame({"col1":[1,1,2], "col2":[1,2,3], "col3":[3,3,3]}) # target is: # df_few_unique = pl.DataFrame({"col1":[1,1,2], "col3":[3,3,3]}) # my attempt: mask = df.select(pl.all().n_unique() < 3).to_numpy()[0] cols = [col for col, m in zip(df.columns, mask) if m] df_few_unique = df.select(cols) df_few_unique Equivalent in pandas: df_pandas = df.to_pandas() mask = (df_pandas.nunique() < 3) df_pandas.loc[:, mask]
The selected answer, though syntactically clean, is inefficient. You can do a lot better. Let us first include at least two filters rather than just one. Problem: select only those columns where the number of unique values is between 1 and 200. The thing to consider is that you would need a pass over the data no matter what, so reading it in is the first step. Then, if you do

pl.select(
    [s for s in df if s.n_unique() < 200 and s.n_unique() > 1]
)

you are computing the filters in sequence and also keeping them in memory. htop confirms that this uses just one core of the machine. The ideal solution is to do it all in parallel. Let us do a few benchmarks. I am using a 32-core machine; parallelism would reduce the time further on machines with more cores. Set up the dataframes:

import polars as pl
import numpy as np

df = pl.DataFrame({f'a_{i}':np.random.choice(['a','b','c','d'], 10000000) for i in range(100)})

This would take up about 20 GiB of RAM, so be careful if you want to replicate. Selected solution (htop confirms that this solution uses only one core):

%%timeit
_df = pl.select(
    [s for s in df if s.n_unique() < 200 and s.n_unique() > 1]
)

output:

18.7 s ± 92.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Let us now try to run the filters in parallel (htop confirms):

%%timeit
_df = df.select((pl.all().n_unique() < 200) & (pl.all().n_unique() > 1))

output:

1.35 s ± 21.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

We are still computing every filter twice in the two .n_unique() calls above. Let us do it with just one call by using is_between (parallel execution - htop confirms):

%%timeit
_df = df.select((pl.all().n_unique().is_between(1,200)))

output:

708 ms ± 21.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Btw, if you don't want to remember APIs like is_between and also don't want to compute n_unique() twice, you can use the lazy semantics:

df_lazy = df.lazy()

Now, try the above solution:

%%timeit
_df = df_lazy.select((pl.all().n_unique() < 200) & (pl.all().n_unique() > 1)).collect()

output:

718 ms ± 15.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3
2
73,444,180
2022-8-22
https://stackoverflow.com/questions/73444180/what-does-runtimewarning-enable-tracemalloc-to-get-the-object-allocation-trace
When I call a coroutine without awaiting, in addition to a message warning me I have not awaited the coroutine, I also get the following warning message: RuntimeWarning: Enable tracemalloc to get the object allocation traceback I know how to fix this i.e. by awaiting the coroutine (and I do see a lot of questions about this warning, but all the answers are how to fix it; my goal is to understand it; please don't mark my question as duplicate if possible :) ); in particular, what I'd really like to understand is: What is tracemalloc? How do I enable it? What is the object allocation traceback? How is it related to coroutines? and, in particular Why is there a warning suggesting to enable tracemalloc when I don't await a coroutine? My goal in asking this is understanding the details and inner-workings of asyncio better.
What is tracemalloc? How do I enable it? tracemalloc is a module that is used to debug memory allocations in Python. You can enable it by setting the PYTHONTRACEMALLOC environment variable to 1. Check the Tracemalloc docs for more info. What is the object allocation traceback? It is the traceback showing where in your code an object was allocated; tracemalloc records this information while it is enabled. Why is there a warning suggesting to enable tracemalloc when I don't await a coroutine? I guess this is because memory is allocated for the coroutine object but never used properly, so we get a loss of resources, i.e. a memory leak.
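To see what the hint refers to, here is a minimal sketch (the coroutine names are made up for illustration); with tracemalloc enabled, the "never awaited" RuntimeWarning is followed by a traceback showing where the coroutine object was created:

import asyncio
import tracemalloc

# Enable tracemalloc programmatically; setting PYTHONTRACEMALLOC=1 or
# running `python -X tracemalloc script.py` has the same effect.
tracemalloc.start()

async def greet():
    return "hello"

async def main():
    greet()  # missing await -> "coroutine 'greet' was never awaited"

asyncio.run(main())

Without tracemalloc you only get the generic "Enable tracemalloc to get the object allocation traceback" hint; with it enabled, the warning includes a "Coroutine created at ..." traceback pointing at the offending call.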
3
7
73,436,440
2022-8-21
https://stackoverflow.com/questions/73436440/replace-and-aggregate-rows-in-pandas-according-to-condition
I have a dataframe: lft rel rgt num 0 t3 r3 z2 3 1 t1 r3 x1 9 2 x2 r3 t2 8 3 x4 r1 t2 4 4 t1 r1 z3 1 5 x1 r1 t2 2 6 x2 r2 t4 4 7 z3 r2 t4 5 8 t4 r3 x3 4 9 z1 r2 t3 4 And a reference dictionary: replacement_dict = { 'X1' : ['x1', 'x2', 'x3', 'x4'], 'Y1' : ['y1', 'y2'], 'Z1' : ['z1', 'z2', 'z3'] } My goal is to replace all occurrences of replacement_dict['X1'] with 'X1', and then compute the group-wise sum of num rows. For example, any instance of 'x1', 'x2', 'x3' or 'x4' will be replaced by 'X1', etc., and the total of the 'X1'-'r1'-'t2' group (which is created by the remapping above) is 6, etc. So my desired output is: lft rel rgt num 0 X1 r3 t2 8 1 X1 r1 t2 6 2 X1 r2 t4 4 3 t1 r3 X1 9 4 t4 r3 X1 4 I am working with a dataframe with 6 million rows and a replacement dictionary with 60,000 keys. This is taking forever using a simple row wise extraction and replacement. How can this (specifically the last part) be scaled efficiently? Is there a pandas trick that someone can recommend?
Reverse the replacement_dict mapping and map() this new mapping to each of lft and rgt columns to substitute certain values (e.g. x1->X1, y2->Y1 etc.). As some values in lft and rgt columns don't exist in the mapping (e.g. t1, t2 etc.), call fillna() to fill in these values.1 You may also stack() the columns whose values need to be replaced (lft and rgt), call map+fillna and unstack() back but because there are only 2 columns, it may not be worth the trouble for this particular case. The second part of the question may be answered by summing num values after grouping by lft, rel and rgt columns; so groupby().sum() should do the trick. # reverse replacement map reverse_map = {v : k for k, li in replacement_dict.items() for v in li} # substitute values in lft column using reverse_map df['lft'] = df['lft'].map(reverse_map).fillna(df['lft']) # substitute values in rgt column using reverse_map df['rgt'] = df['rgt'].map(reverse_map).fillna(df['rgt']) # sum values in num column by groups result = df.groupby(['lft', 'rel', 'rgt'], as_index=False)['num'].sum() 1: map() + fillna() may perform better for your use case than replace() because under the hood, map() implements a Cython optimized take_nd() method that performs particularly well if there are a lot of values to replace, while replace() implements replace_list() method which uses a Python loop. So if replacement_dict is particularly large (which it is in your case), the difference in performance will be huge, but if replacement_dict is small, replace() may outperform map(). See this answer, which includes different benchmarks showing interaction between dictionary size and dataframe length, to get an idea of when to use replace and when to use map+fillna.
11
10
73,464,511
2022-8-23
https://stackoverflow.com/questions/73464511/rich-prompt-confirm-not-working-in-rich-progress-context-python
I am working on an app that uses a rich.Progress for rendering progress bars. The problem is rich.prompt.Confirm just flashes instead of showing the message and asking for the confirmation while in the Progress context. Demo Code from rich.progress import Progress from rich.prompt import Confirm from time import sleep with Progress() as progress: task = progress.add_task('Cooking') while not progress.finished: if Confirm.ask('Should I continue', default=False): progress.update(task, advance=0.6) sleep(0.4) EDIT: I have seen git issues and researched a bit and it seems input(which the rich.Prompt uses) doesn't work on any thing that uses rich.Live(which the rich.Progress uses). So now my question is, How can you structure your code so that you don't put a prompt inside a rich.Progress context manager. Or any possible workarounds to this issue.
So, from the GitHub issue (that might be the one you talked about), there is now a workaround, thanks to Leonardo Cencetti. The solution is simple: it pauses the progress and clears the progress lines; when you are done, it starts the progress again. For future readers, here is his code:

from rich.progress import Progress

class PauseProgress:
    def __init__(self, progress: Progress) -> None:
        self._progress = progress

    def _clear_line(self) -> None:
        UP = "\x1b[1A"
        CLEAR = "\x1b[2K"
        for _ in self._progress.tasks:
            print(UP + CLEAR + UP)

    def __enter__(self):
        self._progress.stop()
        self._clear_line()
        return self._progress

    def __exit__(self, exc_type, exc_value, exc_traceback):
        self._progress.start()

And in your minimal example, it is going to be used like this:

from rich.progress import Progress
from rich.prompt import Confirm
from time import sleep

with Progress() as progress:
    task = progress.add_task('Cooking')
    while not progress.finished:
        with PauseProgress(progress):
            ok_to_go = Confirm.ask('Should I continue', default=False)
        if not ok_to_go:
            break
        progress.update(task, advance=0.6)
        sleep(1)
5
2
73,389,603
2022-8-17
https://stackoverflow.com/questions/73389603/pytorch-tensor-sort-rows-based-on-column
In a 2D tensor like so tensor([[0.8771, 0.0976, 0.8186], [0.7044, 0.4783, 0.0350], [0.4239, 0.8341, 0.3693], [0.5568, 0.9175, 0.0763], [0.0876, 0.1651, 0.2776]]) How do you sort the rows based off the values in a column? For instance if we were to sort based off the last column, I would expect the rows to be such... tensor([[0.7044, 0.4783, 0.0350], [0.5568, 0.9175, 0.0763], [0.0876, 0.1651, 0.2776], [0.4239, 0.8341, 0.3693], [0.8771, 0.0976, 0.8186]]) Values in the last column are now in ascending order.
t = torch.rand(5, 3) COL_INDEX_TO_SORT = 2 # sort() returns a tuple where first element is the sorted tensor # and the second is the indices of the sorted tensor. # The [1] at the end is used to select the second element - the sorted indices. sorted_indices = t[:, COL_INDEX_TO_SORT].sort()[1] t = t[sorted_indices]
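As a side note, the same row ordering can be obtained with argsort, which returns just the indices; a minimal sketch using the tensor from the question (the last column has index 2):

import torch

t = torch.tensor([[0.8771, 0.0976, 0.8186],
                  [0.7044, 0.4783, 0.0350],
                  [0.4239, 0.8341, 0.3693],
                  [0.5568, 0.9175, 0.0763],
                  [0.0876, 0.1651, 0.2776]])

# indices that put column 2 into ascending order, used to reindex the rows
t_sorted = t[t[:, 2].argsort()]
print(t_sorted)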
3
3
73,463,001
2022-8-23
https://stackoverflow.com/questions/73463001/how-to-skip-parametrized-tests-with-pytest
Is it possible to conditionally skip parametrized tests? Here's an example:

@pytest.mark.parametrize("a_date", a_list_of_dates)
@pytest.mark.skipif(a_date > date.today())
def test_something_using_a_date(self, a_date):
    assert <some assertion>

Of course I can do this inside the test method, but I'm looking for a structured way to do this with pytest.
If you create your own method, you can check the values at test collection time and run only the relevant tests.

a_list_of_dates = [date.today(), date(2024, 1, 1), date(2022, 1, 1)]

def get_dates():
    for d in a_list_of_dates:
        if d <= date.today():
            yield d

class TestSomething:
    @pytest.mark.parametrize("a_date", get_dates())
    def test_something_using_a_date(self, a_date):
        print(a_date)

Output

TestSomething::test_something_using_a_date[a_date0] PASSED   [ 50%]
2022-08-24
TestSomething::test_something_using_a_date[a_date1] PASSED   [100%]
2022-01-01

If you still want to see the skipped tests, you can add the skip marker to the relevant tests:

def get_dates():
    for d in a_list_of_dates:
        markers = []
        if d > date.today():
            markers.append(pytest.mark.skip(reason=f'{d} is after today'))
        yield pytest.param(d, marks=markers)

Output

TestSomething::test_something_using_a_date[a_date0] PASSED   [ 33%]
2022-08-24
TestSomething::test_something_using_a_date[a_date1] SKIPPED (2024-01-01 is after today)   [ 66%]
Skipped: 2024-01-01 is after today
TestSomething::test_something_using_a_date[a_date2] PASSED   [100%]
2022-01-01
3
3
73,391,230
2022-8-17
https://stackoverflow.com/questions/73391230/how-to-run-an-end-to-end-example-of-distributed-data-parallel-with-hugging-face
I've extensively look over the internet, hugging face's (hf's) discuss forum & repo but found no end to end example of how to properly do ddp/distributed data parallel with HF (links at the end). This is what I need to be capable of running it end to end: do we wrap the hf model in DDP? (script needs to know how to synchronize stuff at some point somehow somewhere, otherwise just launching torch.distributed from the command line) do we change the args to trainer or trainer args in anyway? wrap the optimizer in any distributed trainer (like cherry? cherry is a pytorch lib for things like this) do we do the usual init group that is usually needed for ddp? what is the role of local rank? terminal launch script e.g. python -m torch.distributed.launch --nproc_per_node=2 distributed_maml.py how do we use the world size to shard the data at each loop e.g. see https://github.com/learnables/learn2learn/blob/master/examples/vision/distributed_maml.py given answers to those I think I could write my own notebook and share it widely. This is my starter code that I want to complete but unsure if I am doing it right (especially since I don't know which args to trainer to change): """ - training on multiple gpus: https://huggingface.co/docs/transformers/perf_train_gpu_many#efficient-training-on-multiple-gpus - data paralelism, dp vs ddp: https://huggingface.co/docs/transformers/perf_train_gpu_many#data-parallelism - github example: https://github.com/huggingface/transformers/tree/main/examples/pytorch#distributed-training-and-mixed-precision - above came from hf discuss: https://discuss.huggingface.co/t/using-transformers-with-distributeddataparallel-any-examples/10775/7 ⇨ Single Node / Multi-GPU Model fits onto a single GPU: DDP - Distributed DP ZeRO - may or may not be faster depending on the situation and configuration used. ...https://huggingface.co/docs/transformers/perf_train_gpu_many#scalability-strategy python -m torch.distributed.launch \ --nproc_per_node number_of_gpu_you_have path_to_script.py \ --all_arguments_of_the_script python -m torch.distributed.launch --nproc_per_node 2 main_data_parallel_ddp_pg.py python -m torch.distributed.launch --nproc_per_node 2 ~/ultimate-utils/tutorials_for_myself/my_hf_hugging_face_pg/main_data_parallel_ddp_pg.py e.g. 
python -m torch.distributed.launch \ --nproc_per_node 8 pytorch/text-classification/run_glue.py \ --model_name_or_path bert-large-uncased-whole-word-masking \ --task_name mnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 8 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/mnli_output/ """ # %% # - init group # - set up processes a la l2l # local_rank: int = local_rank: int = int(os.environ["LOCAL_RANK"]) # get_local_rank() # print(f'{local_rank=}') ## init_process_group_l2l(args, local_rank=local_rank, world_size=args.world_size, init_method=args.init_method) # init_process_group_l2l bellow # if is_running_parallel(rank): # print(f'----> setting up rank={rank} (with world_size={world_size})') # # MASTER_ADDR = 'localhost' # MASTER_ADDR = '127.0.0.1' # MASTER_PORT = master_port # # set up the master's ip address so this child process can coordinate # os.environ['MASTER_ADDR'] = MASTER_ADDR # print(f"---> {MASTER_ADDR=}") # os.environ['MASTER_PORT'] = MASTER_PORT # print(f"---> {MASTER_PORT=}") # # # - use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends # if torch.cuda.is_available(): # backend = 'nccl' # # You need to call torch_uu.cuda.set_device(rank) before init_process_group is called. https://github.com/pytorch/pytorch/issues/54550 # torch.cuda.set_device( # args.device) # is this right if we do parallel cpu? # You need to call torch_uu.cuda.set_device(rank) before init_process_group is called. https://github.com/pytorch/pytorch/issues/54550 # print(f'---> {backend=}') # rank: int = torch.distributed.get_rank() if is_running_parallel(local_rank) else -1 # https://huggingface.co/docs/transformers/tasks/translation import datasets from datasets import load_dataset, DatasetDict books: DatasetDict = load_dataset("opus_books", "en-fr") print(f'{books=}') books: DatasetDict = books["train"].train_test_split(test_size=0.2) print(f'{books=}') print(f'{books["train"]=}') print(books["train"][0]) """ {'id': '90560', 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.', 'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'}} """ # - t5 tokenizer from transformers import AutoTokenizer, PreTrainedTokenizerFast, PreTrainedTokenizer tokenizer: PreTrainedTokenizerFast = AutoTokenizer.from_pretrained("t5-small") print(f'{isinstance(tokenizer, PreTrainedTokenizer)=}') print(f'{isinstance(tokenizer, PreTrainedTokenizerFast)=}') source_lang = "en" target_lang = "fr" prefix = "translate English to French: " def preprocess_function(examples): inputs = [prefix + example[source_lang] for example in examples["translation"]] targets = [example[target_lang] for example in examples["translation"]] model_inputs = tokenizer(inputs, max_length=128, truncation=True) with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=128, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs # Then create a smaller subset of the dataset as previously shown to speed up the fine-tuning: (hack to seep up tutorial) books['train'] = books["train"].shuffle(seed=42).select(range(100)) books['test'] = books["test"].shuffle(seed=42).select(range(100)) # # use 🤗 Datasets map method to apply a preprocessing function over the entire dataset: # tokenized_datasets = dataset.map(tokenize_function, batched=True, batch_size=2) # todo - would be nice to remove this since 
gpt-2/3 size you can't preprocess the entire data set...or can you? # tokenized_books = books.map(preprocess_function, batched=True, batch_size=2) from uutils.torch_uu.data_uu.hf_uu_data_preprocessing import preprocess_function_translation_tutorial preprocessor = lambda examples: preprocess_function_translation_tutorial(examples, tokenizer) tokenized_books = books.map(preprocessor, batched=True, batch_size=2) print(f'{tokenized_books=}') # - load model from transformers import AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("t5-small") # - to DDP # model = model().to(rank) # from torch.nn.parallel import DistributedDataParallel as DDP # ddp_model = DDP(model, device_ids=[rank]) # Use DataCollatorForSeq2Seq to create a batch of examples. It will also dynamically pad your text and labels to the # length of the longest element in its batch, so they are a uniform length. # While it is possible to pad your text in the tokenizer function by setting padding=True, dynamic padding is more efficient. from transformers import DataCollatorForSeq2Seq # Data collator that will dynamically pad the inputs received, as well as the labels. data_collator: DataCollatorForSeq2Seq = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model) """ At this point, only three steps remain: - Define your training hyperparameters in Seq2SeqTrainingArguments. - Pass the training arguments to Seq2SeqTrainer along with the model, dataset, tokenizer, and data collator. - Call train() to fine-tune your model. """ report_to = "none" if report_to != 'none': import wandb wandb.init(project="playground", entity="brando", name='run_name', group='expt_name') from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer # fp16 = True # cuda # fp16 = False # cpu import torch fp16 = torch.cuda.is_available() # True for cuda, false for cpu training_args = Seq2SeqTrainingArguments( output_dir="./results", evaluation_strategy="epoch", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, weight_decay=0.01, save_total_limit=3, num_train_epochs=1, fp16=fp16, report_to=report_to, ) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=tokenized_books["train"], eval_dataset=tokenized_books["test"], tokenizer=tokenizer, data_collator=data_collator, ) trainer.train() print('\n ----- Success\a') All references I consulted when writing this question: https://discuss.huggingface.co/t/using-transformers-with-distributeddataparallel-any-examples/10775/3 https://huggingface.co/docs/transformers/perf_train_gpu_many#scalability-strategy https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Trainer Setting Hugging Face dataloader_num_workers for multi-GPU training using huggingface Trainer with distributed data parallel Why, using Huggingface Trainer, single GPU training is faster than 2 GPUs? https://discuss.huggingface.co/t/lm-example-run-clm-py-isnt-distributing-data-across-multiple-gpus-as-expected/3239/6 https://discuss.huggingface.co/t/which-data-parallel-does-trainer-use-dp-or-ddp/16021/3 https://github.com/huggingface/transformers/tree/main/examples/pytorch#distributed-training-and-mixed-precision https://pytorch.org/tutorials/intermediate/ddp_tutorial.html dist maml: https://github.com/learnables/learn2learn/blob/master/examples/vision/distributed_maml.py cross: https://discuss.huggingface.co/t/how-to-run-an-end-to-end-example-of-distributed-data-parallel-with-hugging-faces-trainer-api-ideally-on-a-single-node-multiple-gpus/21750
You don't need to set up anything, just do:

python -m torch.distributed.launch --nproc_per_node 2 ~/src/main_debug.py

or

torchrun --nproc_per_node=2 --nnodes=2 --use_env ~/src/main_debug.py

then monitor the GPUs with nvidia-smi. Example from Alpaca:

torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \
    --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir <your_output_dir> \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --tf32 True

ref: https://github.com/tatsu-lab/stanford_alpaca#fine-tuning
7
2
73,445,422
2022-8-22
https://stackoverflow.com/questions/73445422/does-f-string-formatting-cast-a-variable-into-a-string
I was working with f-strings, and I am fairly new to Python. My question is: does f-string formatting cast a variable (an integer) into a string?

number = 10
print(f"{number} is not a string")

Is number cast into a string?
f"..." expressions format values to strings, integrate the result into a larger string and return that result. That's not quite the same as 'casting'*. number is an expression here, one that happens to produce an integer object. The integer is then formatted to a string, by calling the __format__ method on that object, with the first argument, a string containing a format specifier, left empty: >>> number = 10 >>> number.__format__('') '10' We'll get to the format specifier later. The original integer object, 10, didn't change here, it remains an integer, and .__format__() just returned a new object, a string: >>> f"{number} is not a string" '10 is not a string' >>> number 10 >>> type(number) int There are more options available to influence the output by adding a format specifier to the {...} placeholder, after a :. You can see what exact options you have by reading the Format Specification Mini Language documentation; this documents what the Python standard types might expect as format specifiers. For example, you could specify that the number should be left-aligned in a string that's at least 5 characters wide, with the specifier <5: >>> f"Align the number between brackets: [{number:<5}]" 'Align the number between brackets: [10 ]' The {number:<5} part tells Python to take the output of number.__format__("<5") and use that as the string value to combine with the rest of the string, in between the [ and ] parts. This still doesn't change what the number variable references, however. *: In technical terms, casting means changing the type of a variable. This is not something you do in Python because Python variables don't have a type. Python variables are references to objects, and in Python it is those objects that have a type. You can change what a variable references, e.g. number = str(number) would replace the integer with a string object, but that's not casting either.
3
8
73,461,385
2022-8-23
https://stackoverflow.com/questions/73461385/m1-mac-tensorflow-vs-code-rosetta2
I'm struggling to install tensorflow on an M1 Mac. I've got Python 3.9.7, Monterey 12.3, and Apple Silicon Visual Studio Code. There is an Apple solution involving miniconda, Apple dependencies, and tensorflow-macos and tensorflow-metal. However, this solution is not good for me as I have to use the Rosetta 2 emulator for multiple packages, including PyQt5 etc. I was wondering if anyone has been able to use their M1 Macs and pip-installed tensorflow in a venv Rosetta terminal. Thank you. Kevin
Running TensorFlow on miniforge + conda-forge (arm64) TensorFlow can run natively on M1 (arm64) Macs. A highly recommended, easy way to install TensorFlow on arm64 Macs is via conda-forge. You should install Python via miniforge or miniconda, because there is an arm64 (Apple Silicon) distribution. With this, as of today, you can install the latest version 2.10.0 of TensorFlow:

$ lipo -archs $(which python3)  # python3 is running natively as arm64
arm64
$ conda install -c conda-forge tensorflow

Note: tensorflow-macos 2.4.0 is obsolete, so you shouldn't be using that. But still want Rosetta 2? Try conda-forge. If you really need to have Python running on Rosetta 2 (x86_64) in cases where some packages do not support arm64, you can still install TensorFlow with a macOS x86_64 release via conda. Installing via pip and the PyPI repository won't work here, because you will run into an Illegal hardware instruction segfault: Google's official TF macos-x86_64 wheel releases on PyPI assume a target platform that has AVX instructions.

$ lipo -archs $(which python3)  # x86_64 means Rosetta 2
x86_64
$ conda install -c conda-forge tensorflow  # install via conda
$ python -c 'import tensorflow; print(tensorflow.__version__)'
3
5
73,449,968
2022-8-22
https://stackoverflow.com/questions/73449968/pydantic-model-parse-pascal-case-fields-to-snake-case
I have a Pydantic class model that represents a foreign API that looks like this: class Position(BaseModel): AccountID: str AveragePrice: str AssetType: str Last: str Bid: str Ask: str ConversionRate: str DayTradeRequirement: str InitialRequirement: str PositionID: str LongShort: str Quantity: int Symbol: str Timestamp: str TodaysProfitLoss: str TotalCost: str MarketValue: str MarkToMarketPrice: str UnrealizedProfitLoss: str UnrealizedProfitLossPercent: str UnrealizedProfitLossQty: str This is the names of the API endpoint that I need to point to. I simply want to change the pascal case fields to a pythonic design. What I want is to deserialize the foreign API and serialize it back using Pydantic's BaseModel class. My problem is that if I use Pydantic's class Fields like this: class Position(BaseModel): account_id: str = Field(alias='AccountID') average_price: str = Field(alias='AveragePrice') asset_type: str = Field(alias='AssetType') last: str = Field(alias='Last') bid: str = Field(alias='Bid') ask: str = Field(alias='Ask') conversion_rate: str = Field(alias='ConversionRate') day_trade_requirement: str = Field(alias='DayTradeRequirement') initial_requirement: str = Field(alias='InitialRequirement') position_id: str = Field(alias='PositionID') long_short: str = Field(alias='LongShort') quantity: int = Field(alias='Quantity') symbol: str = Field(alias='Symbol') timestamp: str = Field(alias='Timestamp') todays_profit_loss: str = Field(alias='TodaysProfitLoss') total_cost: str = Field(alias='TotalCost') market_value: str = Field(alias='MarketValue') mark_to_market_price: str = Field(alias='MarkToMarketPrice') unrealized_profit_loss: str = Field(alias='UnrealizedProfitLoss') unrealized_profit_loss_percent: str = Field(alias='UnrealizedProfitLossPercent') unrealized_profit_loss_qty: str = Field(alias='UnrealizedProfitLossQty') I can only deserialize it and not the other way around. Any way I can do it for both "directions"?
Yes, it's possible, use .dict(by_alias=True), see example: from pydantic import BaseModel, Field class Position(BaseModel): account_id: str = Field(alias='AccountID') pos2 = Position(AccountID='10') print(pos2.dict()) print(pos2.dict(by_alias=True)) Output: {'account_id': '10'} {'AccountID': '10'}
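If you also want to be able to construct the model from the snake_case field names (in addition to parsing the PascalCase payload), Pydantic v1 has a config flag for that; a minimal sketch with just one field, the rest following the same pattern:

from pydantic import BaseModel, Field

class Position(BaseModel):
    account_id: str = Field(alias='AccountID')

    class Config:
        # allow populating by either the field name or the alias
        allow_population_by_field_name = True

print(Position(AccountID='10').dict(by_alias=True))  # {'AccountID': '10'}
print(Position(account_id='10').dict())              # {'account_id': '10'}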
3
5
73,406,581
2022-8-18
https://stackoverflow.com/questions/73406581/python-manage-py-collectstatic-error-cannot-find-rest-framework-bootstrap-min-c
I am reading the book 'Django for APIs' from 'William S. Vincent' (current edition for Django 4.0) In chapter 4, I cannot run successfully the command python manage.py collectstatic. I get the following error: Traceback (most recent call last): File "/Users/my_name/Projects/django/django_for_apis/library/manage.py", line 22, in <module> main() File "/Users/my_name/Projects/django/django_for_apis/library/manage.py", line 18, in main execute_from_command_line(sys.argv) File "/Users/my_name/Projects/django/django_for_apis/library/.venv/lib/python3.10/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line utility.execute() File "/Users/my_name/Projects/django/django_for_apis/library/.venv/lib/python3.10/site-packages/django/core/management/__init__.py", line 440, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Users/my_name/Projects/django/django_for_apis/library/.venv/lib/python3.10/site-packages/django/core/management/base.py", line 402, in run_from_argv self.execute(*args, **cmd_options) File "/Users/my_name/Projects/django/django_for_apis/library/.venv/lib/python3.10/site-packages/django/core/management/base.py", line 448, in execute output = self.handle(*args, **options) File "/Users/my_name/Projects/django/django_for_apis/library/.venv/lib/python3.10/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 209, in handle collected = self.collect() File "/Users/my_name/Projects/django/django_for_apis/library/.venv/lib/python3.10/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 154, in collect raise processed whitenoise.storage.MissingFileError: The file 'rest_framework/css/bootstrap.min.css.map' could not be found with <whitenoise.storage.CompressedManifestStaticFilesStorage object at 0x102fa07f0>. The CSS file 'rest_framework/css/bootstrap.min.css' references a file which could not be found: rest_framework/css/bootstrap.min.css.map Please check the URL references in this CSS file, particularly any relative paths which might be pointing to the wrong location. I have the exact same settings like in the book in settings.py: STATIC_URL = "static/" STATICFILES_DIRS = [BASE_DIR / "static"] # new STATIC_ROOT = BASE_DIR / "staticfiles" # new STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage" # new I couldn't find any explanation for it. maybe someone can point me in the right direction.
Update: DRF 3.14.0 now supports Django 4.1. If you've added stubs to static as per below, be sure to remove them. This appears to be related to Django 4.1: either downgrade to Django 4.0 or simply create the following empty files in one of your static directories: static/rest_framework/css/bootstrap-theme.min.css.map static/rest_framework/css/bootstrap.min.css.map There's a recent change to ManifestStaticFilesStorage where it now attempts to replace source maps with their hashed counterparts. Django REST framework has only recently added the bootstrap css source maps but is not yet released.
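If you go the stub-file route, a quick way to create the empty files (assuming your project-level static directory is named static/, as in the question's STATICFILES_DIRS setting):

mkdir -p static/rest_framework/css
touch static/rest_framework/css/bootstrap-theme.min.css.map
touch static/rest_framework/css/bootstrap.min.css.map

After that, python manage.py collectstatic should complete without the MissingFileError.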
14
25
73,394,472
2022-8-17
https://stackoverflow.com/questions/73394472/how-do-you-obtain-underlying-failed-request-data-when-catching-requests-exceptio
I am using a somewhat standard pattern for putting retry behavior around requests requests in Python, import requests from requests.adapters import HTTPAdapter from requests.packages.urllib3.util.retry import Retry retry_strategy = Retry( total=HTTP_RETRY_LIMIT, status_forcelist=HTTP_RETRY_CODES, method_whitelist=HTTP_RETRY_METHODS, backoff_factor=HTTP_BACKOFF_FACTOR ) adapter = HTTPAdapter(max_retries=retry_strategy) http = requests.Session() http.mount("https://", adapter) http.mount("http://", adapter) ... try: response = http.get(... some request params ...) except requests.Exceptions.RetryError as err: # Do logic with err to perform error handling & logging. Unfortunately the docs on RetryError don't explain anything and when I intercept the exception object as above, err.response is None. While you can call str(err) to get the message string of the exception, this would require unreasonable string parsing to attempt to recover the specific response details and even if one is willing to try that, the message actually elides the necessary details. For example, one such response from a deliberate call on a site giving 400s (not that you would really retry on this but just for debugging) gives a message of "(Caused by ResponseError('too many 400 error responses'))" - which elides the actual response details, like the requested site's own description text for the nature of the 400 error (which could be critical to determining handling, or even just to pass back for logging the error). What I want to do is receive the response for the last unsuccessful retry attempt and use the status code and description of that specific failure to determine the handling logic. Even though I want to make it robust behind retries, I still need to know the underlying failure beyond "too many retries" when ultimately handling the error. Is it possible to extract this information from the exception raised for retries?
We can't get a response in every exception because a request may not have been sent yet, or a request or response may not have reached its destination. For example, these exceptions don't get a response:

urllib3.exceptions.ConnectTimeoutError
urllib3.exceptions.SSLError
urllib3.exceptions.NewConnectionError

There's a parameter in urllib3.util.Retry named raise_on_status which defaults to True. If it's made False, urllib3.exceptions.MaxRetryError won't be raised. And if no exceptions are raised, it is certain that a response has arrived. It now becomes easy to call response.raise_for_status() in the else block of the try block, wrapped in another try. I've changed except RetryError to except Exception to catch other exceptions.

import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from requests.exceptions import RetryError

# DEFAULT_ALLOWED_METHODS = frozenset({'DELETE', 'GET', 'HEAD', 'OPTIONS', 'PUT', 'TRACE'})
# Default methods to be used for allowed_methods
# RETRY_AFTER_STATUS_CODES = frozenset({413, 429, 503})
# Default status codes to be used for status_forcelist

HTTP_RETRY_LIMIT = 3
HTTP_BACKOFF_FACTOR = 0.2

retry_strategy = Retry(
    total=HTTP_RETRY_LIMIT,
    backoff_factor=HTTP_BACKOFF_FACTOR,
    raise_on_status=False,
)

adapter = HTTPAdapter(max_retries=retry_strategy)
http = requests.Session()
http.mount("https://", adapter)
http.mount("http://", adapter)

try:
    response = http.get("https://httpbin.org/status/503")
except Exception as err:
    print(err)
else:
    try:
        response.raise_for_status()
    except Exception as e:
        # Do logic with err to perform error handling & logging.
        print(response.reason)
        # Or
        # print(e.response.reason)
    else:
        print(response.text)

Test:

# https://httpbin.org/user-agent
➜ python requests_retry.py

{
  "user-agent": "python-requests/2.28.1"
}

# url = https://httpbin.org/status/503
➜ python requests_retry.py
SERVICE UNAVAILABLE
5
2
73,455,881
2022-8-23
https://stackoverflow.com/questions/73455881/controlling-where-sphinx-generated-rst-files-are-saved
Suppose the following documentation structure for sphinx: doc |_ _static |_ _templates |_ api |_ index.rst |_ classes.rst |_ functions.rst |_ index.rst |_ more_functions.rst |_ conf.py And that classes.rst, functions.rst and more_functions.rst have classes and functions to auto-document with autodoc/autosummary. The build will generate .rst files for those classes and functions in: doc/generated for more_functions.rst doc/api/generated for classes.rst and functions.rst Is there a way to control where those generated folders are created? I'm trying to get a unique generated folder in the end. In this case, with this structure: doc |_ generated |_ generated-from-more-functions.rst |_ api |_ generated-from-api/classes.rst |_ generated-from-api/functions-rst
I don't think there is such native functionality in sphinx. The fastest way to achieve this without many headaches is to create a shell script to run the build and then move (with mv or rm if you're on gnu/linux) the files according to your needs.
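For completeness, a minimal sketch of such a wrapper script; the sphinx-build invocation and the target directory below are assumptions based on the layout in the question:

#!/usr/bin/env bash
set -e
# build the docs as usual (autosummary writes its stub .rst files during this step)
sphinx-build -b html doc doc/_build/html
# consolidate the autosummary output into a single top-level folder
mkdir -p doc/generated
mv doc/api/generated/* doc/generated/

Whether moving the stubs afterwards is acceptable depends on what you need them for; if the built HTML has to reference them, you would need to adjust the toctree/autosummary configuration instead of moving files.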
5
1
73,462,684
2022-8-23
https://stackoverflow.com/questions/73462684/apply-function-to-dataframe-row-use-result-for-next-row-input
I am trying to create a rudimentary scheduling system. Here is what I have so far: I have a pandas dataframe job_data that looks like this: wc job start duration 1 J1 2022-08-16 07:30:00 17 1 J2 2022-08-16 07:30:00 5 2 J3 2022-08-16 07:30:00 21 2 J4 2022-08-16 07:30:00 12 It contains a wc (work center), job, a start date and duration for the job in hours. I have created a function add_hours that takes the following arguments: start (datetime), hours (int). It calculates the when the job will be complete based on the start time and duration. The code for add_hours is: def is_in_open_hours(dt): return ( dt.weekday() in business_hours["weekdays"] and dt.date() not in holidays and business_hours["from"].hour <= dt.time().hour < business_hours["to"].hour ) def get_next_open_datetime(dt): while True: dt = dt + timedelta(days=1) if dt.weekday() in business_hours["weekdays"] and dt.date() not in holidays: dt = datetime.combine(dt.date(), business_hours["from"]) return dt def add_hours(dt, hours): while hours != 0: if is_in_open_hours(dt): dt = dt + timedelta(hours=1) hours = hours - 1 else: dt = get_next_open_datetime(dt) return dt The code to calculate the end column is: df["end"] = df.apply(lambda x: add_hours(x.start, x.duration), axis=1) The result of function is the end column: wc job start duration end 1 J1 2022-08-16 07:30:00 17 2022-08-17 14:00:00 1 J2 2022-08-16 07:30:00 5 2022-08-17 10:00:00 2 J3 2022-08-16 07:30:00 21 2022-08-18 08:00:00 2 J4 2022-08-16 07:30:00 12 2022-08-18 08:00:00 Problem is, I need the start datetime in the second row to be the end datetime from the previous row instead of them all using the same start date. I also need to start this process over for each wc. So the desired output would be: wc job start duration end 1 J1 2022-08-16 07:30:00 17 2022-08-17 14:00:00 1 J2 2022-08-17 14:00:00 5 2022-08-17 19:00:00 2 J3 2022-08-16 07:30:00 21 2022-08-18 08:00:00 2 J4 2022-08-18 08:00:00 10 2022-08-18 18:00:00
I show an alternative method where you only need the first start date and then bootstrap the lists according to the job durations. # import required modules import io import pandas as pd from datetime import datetime from datetime import timedelta # make a dataframe # note: only the first start date is required x = ''' wc job start duration end 1 J1 2022-08-16 07:30:00 17 2022-08-17 14:00:00 1 J2 2022-08-16 07:30:00 5 2022-08-17 10:00:00 2 J3 2022-08-16 07:30:00 21 2022-08-18 08:00:00 2 J4 2022-08-16 07:30:00 12 2022-08-18 08:00:00 ''' data = io.StringIO(x) df = pd.read_csv(data, sep='\t') # construct start and end lists start = datetime.strptime(df['start'][0], '%Y-%m-%d %H:%M:%S') start_list = [start] end_list = [] for x in df['duration']: time_change = timedelta(hours=float(x)) new_time = start_list[-1] + time_change start_list.append(new_time) end_list.append(new_time) start_list.pop(-1) # add to dataframe df['start'] = start_list df['end'] = end_list # finished df The result is this:
4
1
73,458,847
2022-8-23
https://stackoverflow.com/questions/73458847/discord-py-error-message-discord-ext-commands-bot-privileged-message-content-i
Can someone help me? I keep getting this error message when I try to start up my discord bot. [2022-08-23 14:32:12] [WARNING ] discord.ext.commands.bot: Privileged message content intent is missing, commands may not work as expected. This is the code for the bot and after this is just commands and events and client.run(My_Token) import os import random import discord from discord.ext import commands from discord.ext import tasks from discord.ext.commands import has_permissions, MissingPermissions from discord.utils import get from itertools import cycle import json import random intents = discord.Intents.default() intents.members = True intents.typing = True intents.presences = True client = commands.Bot(command_prefix = "?", intents=intents) client.remove_command('help') status = cycle(["Minecraft", "Roblox", "Yo-Kai Watch"])
You've got to change intents = discord.Intents.default() to intents = discord.Intents.all() It was an unmentioned change in the v2.0 discord.py update. https://discordpy.readthedocs.io/en/latest/migrating.html
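If you would rather not enable every intent, it should also be enough to turn on just the privileged message content intent that the warning is about, keeping the rest of your current setup:

intents = discord.Intents.default()
intents.members = True
intents.presences = True
intents.message_content = True  # the privileged intent the warning refers to

client = commands.Bot(command_prefix = "?", intents=intents)

Either way, remember that privileged intents (members, presences, message content) also have to be enabled for the bot in the Discord Developer Portal.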
5
13
73,462,652
2022-8-23
https://stackoverflow.com/questions/73462652/how-to-create-a-new-sheet-within-a-spreadsheet-using-google-sheets-api
The official documentation shows how to create a spreadsheet, but I can't find how to create a sheet. How do I do it in Python?
@PCDSandwichMan's answer uses gspread, which is a very useful third-party library to simplify the Sheets API in Python. Not all of Google's APIs have libraries like this, though, so you may want to learn the regular way as well. As an alternative in case that you want to use Google's API you can check out the documentation for Google's Python API libraries. Most direct changes to a spreadsheet's properties are done with spreadsheets().batchUpdate(). Here's a sample based on Google's Python Quickstart that adds a new sheet. import os.path from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.errors import HttpError SCOPES = ['https://www.googleapis.com/auth/spreadsheets'] # The ID of the spreadsheet YOUR_SPREADSHEET = 'some-id' def main(): creds = None if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) with open('token.json', 'w') as token: token.write(creds.to_json()) try: sheetservice = build('sheets', 'v4', credentials=creds) body = { "requests":{ "addSheet":{ "properties":{ "title":"New Sheet" } } } } sheetservice.spreadsheets().batchUpdate(spreadsheetId=YOUR_SPREADSHEET, body=body).execute() except HttpError as err: print(err) Most of it is the authorization. The relevant part is within the try block. You pretty much just call the batchUpdate() method with the spreadsheet's ID and a body object with all the requests you want to make. sheetservice = build('sheets', 'v4', credentials=creds) body = { "requests":[{ "addSheet":{ "properties":{ "title":"New Sheet" } } }] } sheetservice.spreadsheets().batchUpdate(spreadsheetId=YOUR_SPREADSHEET, body=body).execute() Sources: Google Sheets API Python docs General info on how to use batchUpdate Google Python Quickstart
3
8
73,421,164
2022-8-19
https://stackoverflow.com/questions/73421164/pass-a-variable-between-multiple-custom-permission-classes-in-drf
I have a base permission class that two ViewSets are sharing and one other permission class each that is custom to each of the ViewSets, so 3 permissions all together, is there a way to pass a specific variable down from the base permission class to the other permission classes? My setup looks like this: class BasePerm(permissions.BasePermission): def has_permission(self, request, view): some_var = # call an API using request variable class Perm1(permissions.BasePermission): def has_permission(self, request, view): # get the value of some_var from BasePerm class Perm2(permissions.BasePermission): def has_permission(self, request, view): # get the value of some_var from BasePerm class MyViewSet1(mixins.CreateModelMixin, viewsets.GenericViewSet): permission_classes = [BasePerm, Perm1] class MyViewSet2(mixins.CreateModelMixin, viewsets.GenericViewSet): permission_classes = [BasePerm, Perm2]
I don't understand why you don't simply use inheritance here (a mixin-style base class). For what you ask:

class BasePerm(permissions.BasePermission):
    def has_permission(self, request, view):
        self.some_var = ...  # call an API using request variable
        return True

class Perm1(BasePerm):
    def has_permission(self, request, view):
        # get the value of some_var from BasePerm
        return super().has_permission(request, view) and some_staff_with(self.some_var)

class Perm2(BasePerm):
    def has_permission(self, request, view):
        # get the value of some_var from BasePerm
        return super().has_permission(request, view) and some_other_staff_with(self.some_var)

class MyViewSet1(mixins.CreateModelMixin, viewsets.GenericViewSet):
    permission_classes = [Perm1]

class MyViewSet2(mixins.CreateModelMixin, viewsets.GenericViewSet):
    permission_classes = [Perm2]
4
6
73,464,414
2022-8-23
https://stackoverflow.com/questions/73464414/why-are-generics-in-python-implemented-using-class-getitem-instead-of-geti
I was reading python documentation and peps and couldn't find an answer for this. Generics in python are implemented by subscripting class objects. list[str] is a list where all elements are strings. This behaviour is achieved by implementing a special (dunder) classmethod called __class_getitem__ which as the documentation states should return a GenericAlias. An example: class MyGeneric: def __class_getitem__(cls, key): # implement generics ... This seems weird to me because the documentation also shows some code similar to what the interpreter does when faced with subscripting objects and shows that defining both __getitem__ on object's metaclass and __class_getitem__ on the object itself always chooses the metaclass' __getitem__. This means that a class with the same functionality as the one above can be implemented without introducing a new special method into the language. An example of a class with identical behaviour: class GenericMeta(type): def __getitem__(self, key): # implement generics ... class MyGeneric(metaclass=GenericMeta): ... Later the documentation also shows an example of Enums using a __getitem__ of a metaclass as an example of a __class_getitem__ not being called. My question is why was the __class_getitem__ classmethod introduced in the first place? It seems to do the exact same thing as the metaclass' __getitem__ but with the added complexity and the need for extra code in the interpreter for deciding which method to call. All of this comes with no extra benefit as defining both will simply call the same one every time unless specifically calling dunder methods (which should not be done in general). I know that implementing generics this way is discouraged. The general approach is to subclass a class that already defines a __class_getitem__ like typing.Generic but I'm still curious as to why that functionality was implemented that way.
__class_getitem__ exists because using multiple inheritance where multiple metaclasses are involved is very tricky and sets limitations that can’t always be met when using 3rd-party libraries. Without __class_getitem__ generics requires a metaclass, as defining a __getitem__ method on a class would only handle attribute access on instances, not on the class. Normally, object[...] syntax is handled by the type of object, not by object itself. For instances, that's the class, but for classes, that's the metaclass. So, the syntax: ClassObject[some_type] would translate to: type(ClassObject).__getitem__(ClassObject, some_type) __class_getitem__ exists to avoid having to give every class that needs to support generics, a metaclass. For how __getitem__ and other special methods work, see the Special method lookup section in the Python Datamodel chapter: For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary. The same chapter also explicitly covers __class_getitem__ versus __getitem__: Usually, the subscription of an object using square brackets will call the __getitem__() instance method defined on the object’s class. However, if the object being subscribed is itself a class, the class method __class_getitem__() may be called instead. This section also covers what will happen if the class has both a metaclass with a __getitem__ method, and a __class_getitem__ method defined on the class itself. You found this section, but it only applies in this specific corner-case. As stated, using metaclasses can be tricky, especially when inheriting from classes with different metaclasses. See the original PEP 560 - Core support for typing module and generic types proposal: All generic types are instances of GenericMeta, so if a user uses a custom metaclass, then it is hard to make a corresponding class generic. This is particularly hard for library classes that a user doesn’t control. ... With the help of the proposed special attributes the GenericMeta metaclass will not be needed. When mixing multiple classes with different metaclasses, Python requires that the most specific metaclass derives from the other metaclasses, a requirement that can't easily be met if the metaclass is not your own; see the documentation on determining the appropriate metaclass. As a side note, if you do use a metaclass, then __getitem__ should not be a classmethod: class GenericMeta(type): # not a classmethod! `self` here is a class, an instance of this # metaclass. def __getitem__(self, key): # implement generics ... Before PEP 560, that's basically what the typing.GenericMeta metaclass did, albeit with a bit more complexity.
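To make the difference concrete, here is a small sketch of a class that supports subscription without any metaclass, returning the same kind of object that list[str] produces (the class name is made up for illustration):

import types

class MyGeneric:
    def __class_getitem__(cls, item):
        # types.GenericAlias is the type of objects like list[str]
        return types.GenericAlias(cls, item)

alias = MyGeneric[int]
print(alias)                             # __main__.MyGeneric[int]
print(alias.__origin__, alias.__args__)  # <class '__main__.MyGeneric'> (<class 'int'>,)

No GenericMeta-style metaclass is involved, so MyGeneric can freely be combined with base classes that bring their own metaclasses.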
9
13
73,394,537
2022-8-17
https://stackoverflow.com/questions/73394537/pip-freeze-throws-the-directory-name-is-invalid
Running pip freeze in the terminal throws the following error (full traceback): PS C:\Users\lhott> pip freeze ERROR: Exception: Traceback (most recent call last): File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\cli\base_command.py", line 167, in exc_logging_wrapper status = run_func(*args) File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\commands\freeze.py", line 87, in run for line in freeze( File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\operations\freeze.py", line 43, in freeze req = FrozenRequirement.from_dist(dist) File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\operations\freeze.py", line 237, in from_dist req, comments = _get_editable_info(dist) File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\operations\freeze.py", line 164, in _get_editable_info vcs_backend = vcs.get_backend_for_dir(location) File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\vcs\versioncontrol.py", line 238, in get_backend_for_dir repo_path = vcs_backend.get_repository_root(location) File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\vcs\git.py", line 501, in get_repository_root r = cls.run_command( File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\vcs\versioncontrol.py", line 650, in run_command return call_subprocess( File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\utils\subprocess.py", line 141, in call_subprocess proc = subprocess.Popen( File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 966, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Users\lhott\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1435, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, NotADirectoryError: [WinError 267] The directory name is invalid I have Python 3.10.2. pip freeze worked perfectly fine until today and I don't understand why. I have updated it recently but I don't know why that would've caused that. I also can install packages without a problem with pip install. Example: Solutions tried: I have tried restarting my laptop. Running the terminal with administrators privileges.
I actually found the answer. @Greg7000 saying "Maybe one of your dependencies is badly installed" actually gave me a hint. I had a dependency installed (a friend's package) that I had uninstalled manually by pressing delete on the corresponding folder instead of running pip uninstall. This is likely what created the "directory name is invalid" error. Running pip uninstall, even after deleting the corresponding folder manually, worked and fixed the problem.
3
2
73,449,754
2022-8-22
https://stackoverflow.com/questions/73449754/assigning-vs-defining-python-magic-methods
Consider the following abhorrent class: class MapInt: __call__ = int def __sub__(self, other): return map(self, other) __add__ = map One can then call map(int, lst) via MapInt() - lst, i.e. assert list(MapInt() - ['1','2','3']) == [1,2,3] # passes However, addition is not so cooperative: assert list(MapInt() + ['1','2','3']) == [1,2,3] # TypeError: map() must have at least two arguments. This strangeness can be resolved by invoking the magic method directly: assert list(MapInt.__add__(MapInt(), ['1','2','3'])) == [1,2,3] # passes assert list(MapInt().__add__(MapInt(), ['1','2','3'])) == [1,2,3] # passes So my question is, what gives? Assigning __add__ directly seems to "discard" the self argument, but invoking the method itself or defining it in the standard way works fine.
The transformation of instance methods is described in the Python Data Model (emphasis mine): Note that the transformation from function object to instance method object happens each time the attribute is retrieved from the instance [...] Also notice that this transformation only happens for user-defined functions; other callable objects (and all non-callable objects) are retrieved without transformation. Since map is a built-in, not a user-defined function, there is no transformation to an instance method, so the self argument is not added.
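A small sketch to make the difference visible, reusing the toy class from the question; the as_method wrapper name is made up here:

def as_method(self, other):
    return map(self, other)

print(hasattr(as_method, '__get__'))  # True  -> bound on attribute lookup, so self is passed
print(hasattr(map, '__get__'))        # False -> retrieved as-is, so no self is passed

class MapInt:
    __call__ = int
    __add__ = as_method   # a user-defined wrapper around map restores the binding

assert list(MapInt() + ['1', '2', '3']) == [1, 2, 3]   # passes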
24
19
73,395,718
2022-8-17
https://stackoverflow.com/questions/73395718/join-dataframes-and-rename-resulting-columns-with-same-names
Shortened example: vals1 = [(1, "a"), (2, "b"), ] columns1 = ["id","name"] df1 = spark.createDataFrame(data=vals1, schema=columns1) vals2 = [(1, "k"), ] columns2 = ["id","name"] df2 = spark.createDataFrame(data=vals2, schema=columns2) df1 = df1.alias('df1').join(df2.alias('df2'), 'id', 'full') df1.show() The result has one column named id and two columns named name. How do I rename the columns with duplicate names, assuming that the real dataframes have tens of such columns?
Another method, to rename only the intersecting columns: from typing import List from pyspark.sql import DataFrame def join_intersect(df_left: DataFrame, df_right: DataFrame, join_cols: List[str], how: str = 'inner'): intersected_cols = set(df_left.columns).intersection(set(df_right.columns)) cols_to_rename = [c for c in intersected_cols if c not in join_cols] for c in cols_to_rename: df_left = df_left.withColumnRenamed(c, f"{c}__1") df_right = df_right.withColumnRenamed(c, f"{c}__2") return df_left.join(df_right, on=join_cols, how=how) vals1 = [(1, "a"), (2, "b")] columns1 = ["id", "name"] df1 = spark.createDataFrame(data=vals1, schema=columns1) vals2 = [(1, "k")] columns2 = ["id", "name"] df2 = spark.createDataFrame(data=vals2, schema=columns2) df_joined = join_intersect(df1, df2, ['name']) df_joined.show()
6
3
73,457,345
2022-8-23
https://stackoverflow.com/questions/73457345/how-to-test-dataclass-that-can-be-initialized-with-environment-variables
I have the following dataclass: import os import dataclasses @dataclasses.dataclass class Example: host: str = os.environ.get('SERVICE_HOST', 'localhost') port: str = os.environ.get('SERVICE_PORT', 30650) How do I write a test for this? I tried the following which looks like it should work: from stackoverflow import Example import os def test_example(monkeypatch): # GIVEN environment variables are set for host and port monkeypatch.setenv('SERVICE_HOST', 'server.example.com') monkeypatch.setenv('SERVICE_PORT', '12345') # AND a class instance is initialized without specifying a host or port example = Example() # THEN the instance should reflect the host and port specified in the environment variables assert example.host == 'server.example.com' assert example.port == '12345' but this fails with: ====================================================================== test session starts ====================================================================== platform linux -- Python 3.8.12, pytest-7.1.2, pluggy-1.0.0 rootdir: /home/biogeek/tmp collected 1 item test_example.py F [100%] =========================================================================== FAILURES ============================================================================ _________________________________________________________________________ test_example __________________________________________________________________________ monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f39de559220> def test_example(monkeypatch): # GIVEN environment variables are set for host and port monkeypatch.setenv('SERVICE_HOST', 'server.example.com') monkeypatch.setenv('SERVICE_PORT', '12345') # AND a class instance is initialized without specifying a host or port example = Example() # THEN the instance should reflect the host and port specified in the environment variables > assert example.host == 'server.example.com' E AssertionError: assert 'localhost' == 'server.example.com' E - server.example.com E + localhost test_example.py:12: AssertionError ==================================================================== short test summary info ==================================================================== FAILED test_example.py::test_example - AssertionError: assert 'localhost' == 'server.example.com' ======================================================================= 1 failed in 0.05s =======================================================================
Your tests fail because your code loads the environment variables when you import the module. Module-level code is very hard to test, as the os.environ.get() calls to set the default values have already run before your test runs. You'd have to effectively delete your module from the sys.modules module cache, and only import your module after mocking out the os.environ environment variables to test what happens at import time. You could instead use a dataclass.field() with a default_factory argument; that executes a callable to obtain the default whenever you create an instance of your dataclass: import os from dataclasses import dataclass, field from functools import partial @dataclass class Example: host: str = field(default_factory=partial(os.environ.get, 'SERVICE_HOST', 'localhost')) port: str = field(default_factory=partial(os.environ.get, 'SERVICE_PORT', '30650')) I used the functools.partial() object to create a callable that'll call os.environ.get() with the given name and default. Note that I also changed the default value for SERVICE_PORT to a string; after all, the port field is annotated as a str, not an int. :-) If you must set these defaults from environment variables at import time, then you could have pytest mock out these environment variables in a conftest.py module; these are imported before your tests are imported, so you get a chance to tweak things before your module-under-test is imported. This won't let you run multiple tests with different defaults, however: # add to conftest.py at the same package level, or higher. @pytest.fixture(autouse=True, scope="session") def mock_environment(monkeypatch): monkeypatch.setenv('SERVICE_HOST', 'server.example.com') monkeypatch.setenv('SERVICE_PORT', '12345') The above fixture example, when placed in conftest.py, would automatically patch your environment before your tests are loaded, and so before your module is imported, and this patch is automatically undone at the end of the test session.
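As a sketch, with the default_factory version above the original test from the question should pass unchanged, because the environment is now read at instantiation time:

def test_example(monkeypatch):
    monkeypatch.setenv('SERVICE_HOST', 'server.example.com')
    monkeypatch.setenv('SERVICE_PORT', '12345')

    example = Example()

    assert example.host == 'server.example.com'
    assert example.port == '12345'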
4
11
73,457,379
2022-8-23
https://stackoverflow.com/questions/73457379/python-regex-and-leading-0-in-capturing-group
I'm writing a script in python 3 to automatically rename files. But I have a problem with the captured group in a regex. I have these kinds of files : test tome 01 something.cbz test tome 2 something.cbz test tome 20 something.cbz And I would like to have : test 001 something.cbz test 002 something.cbz test 020 something.cbz I tried several bits of code: Example 1: name = re.sub('tome [0]{0,1}(\d{1,})', str('\\1').zfill(3), name) The result is: test 01 something.cbz test 02 something.cbz test 020 something.cbz Example 2: name = re.sub('tome (\d{1,})', str('\\1').lstrip("0").zfill(3), name) The result is: test 001 something.cbz test 02 something.cbz test 020 something.cbz
You can run the zfill(3) on the .group(1) value after stripping the zeroes from the left side: import re s = ("test tome 01 something.cbz\n" "test tome 2 something.cbz\n" "test tome 20 something.cbz") result = re.sub( r'tome (\d+)', lambda x: x.group(1).lstrip("0").zfill(3), s ) print(result) Output test 001 something.cbz test 002 something.cbz test 020 something.cbz
9
7
73,428,753
2022-8-20
https://stackoverflow.com/questions/73428753/plotly-how-to-display-y-values-when-hovering-on-two-subplots-sharing-x-axis
I have two subplots sharing x-axis, but it only shows the y-value of one subplot not both. I want the hover-display to show y values from both subplots. Here is what is showing right now: But I want it to show y values from the bottom chart as well even if I am hovering my mouse on the top chart and vice versa. Here's my code: title = 'Price over time' err = 'Price' fig = make_subplots(rows=2, cols=1, vertical_spacing = 0.05, shared_xaxes=True, subplot_titles=(title,"")) # A fig.add_trace(go.Scatter(x= A_error['CloseDate'], y = A_error[err], line_color = 'green', marker_color = 'green', mode = 'lines+markers', showlegend = True, name = "A", stackgroup = 'one'), row = 1, col = 1, secondary_y = False) # B fig.add_trace(go.Scatter(x= B_error['CloseDate'], y = B_error[err], line_color = 'blue', mode = 'lines+markers', showlegend = True, name = "B", stackgroup = 'one'), row = 2, col = 1, secondary_y = False) fig.update_yaxes(tickprefix = '$') fig.add_hline(y=0, line_width=3, line_dash="dash", line_color="black") fig.update_layout(#height=600, width=1400, hovermode = "x unified", legend_traceorder="normal")
Edit: At this time, I don't think a unified hovermode across the subplots is provided. I got the rationale for this from here. It does affect some features, but the approach below can be applied to work around it. In your example, the horizontal line does not appear on both graphs, so I have added two horizontal lines in line mode as scatter traces to accommodate this. With the two stock prices I have set a threshold value for each; adjust them to match your objective. import plotly.express as px import plotly.graph_objects as go from plotly.subplots import make_subplots import yfinance as yf df = yf.download("AAPL MSFT", start="2022-01-01", end="2022-07-01", group_by='ticker') df.reset_index(inplace=True) title = 'Price over time' err = 'Price' fig = make_subplots(rows=2, cols=1, vertical_spacing = 0.05, shared_xaxes=True, subplot_titles=(title,"")) # AAPL fig.add_trace(go.Scatter(x = df['Date'], y = df[('AAPL', 'Close')], line_color = 'green', marker_color = 'green', mode = 'lines+markers', showlegend = True, name = "AAPL", stackgroup = 'one'), row = 1, col = 1, secondary_y = False) # AAPL $125 horizontal line fig.add_trace(go.Scatter(x=df['Date'], y=[125]*len(df['Date']), mode='lines', line_width=3, line_color='black', line_dash='dash', showlegend=False, name='AAPL' ), row=1, col=1, secondary_y=False) # MSFT fig.add_trace(go.Scatter(x= df['Date'], y = df[('MSFT', 'Close')], line_color = 'blue', mode = 'lines+markers', showlegend = True, name = "MSFT", stackgroup = 'one'), row = 2, col = 1, secondary_y = False) # MSFT $150 horizontal line fig.add_trace(go.Scatter(x=df['Date'], y=[150]*len(df['Date']), mode='lines', line_width=3, line_color='black', line_dash='dash', showlegend=False, name='MSFT' ), row=2, col=1, secondary_y=False) fig.update_yaxes(tickprefix = '$') fig.update_xaxes(type='date', range=[df['Date'].min(),df['Date'].max()]) #fig.add_hline(y=0, line_width=3, line_dash="dash", line_color="black") fig.update_layout(#height=600, width=1400, hovermode = "x unified", legend_traceorder="normal") fig.update_traces(xaxis='x2') fig.show()
7
8
73,437,156
2022-8-21
https://stackoverflow.com/questions/73437156/jupyter-notebook-multiprocessing-code-not-working
I am new to Python and I have Anaconda Python 3.9. I was studying multiprocessing. When I try this code from multiprocessing import Process # import the required library import time def subfunc1(): time.sleep(2) print("subfunc1: Baslatildi") time.sleep(2) print("subfunc1: Sonlandi") time.sleep(2) def subfunc2(): time.sleep(2) print("subfunc2: Baslatildi") time.sleep(2) print("subfunc2: Sonlandi") time.sleep(2) def mainfunc(): print("mainfunc: Baslatildi") pr1 = Process(target=subfunc1) pr2 = Process(target=subfunc2) pr1.start() pr2.start() print("mainfunc: Sonlandi") if __name__ == '__main__': # call the main function only when inside the main code block! mainfunc() the result is mainfunc: Baslatildi mainfunc: Sonlandi When I use Visual Studio Code with Python 3.9 in a virtual env, the code works! Visual Studio Code uses Anaconda's Python 3.9 within a virtual env! Could you please help me? Why doesn't this code work properly in Jupyter Notebook? Thanks
Am I correct in assuming you are running this on MS Windows or macOS? In that case, multiprocessing will not work in an interactive interpreter like IPython. This is covered in the documentation, see the "note": Functionality within this package requires that the __main__ module be importable by the children. This is covered in Programming guidelines however it is worth pointing out here. This means that some examples, such as the multiprocessing.pool.Pool examples will not work in the interactive interpreter. This is caused by the spawn start method used on these operating systems. One possible fix is to save your code in a script, and to make sure that creating multiprocessing objects is done within the __main__ block. Another is in the comment by Aaron below.
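A minimal sketch of a related workaround for notebooks: put the worker functions in a separate module (workers.py is a hypothetical file name) so that child processes started with spawn can import them.

# workers.py (hypothetical helper module saved next to the notebook)
# import time
# def subfunc1():
#     time.sleep(2)
#     print("subfunc1 done")
# def subfunc2():
#     time.sleep(2)
#     print("subfunc2 done")

# notebook cell
from multiprocessing import Process
from workers import subfunc1, subfunc2  # importable by the child processes

pr1 = Process(target=subfunc1)
pr2 = Process(target=subfunc2)
pr1.start()
pr2.start()
pr1.join()
pr2.join()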
4
5
73,422,130
2022-8-19
https://stackoverflow.com/questions/73422130/what-are-all-the-valid-strings-i-can-use-with-keras-model-compile
What strings are valid metrics with keras.model.compile? The following works, model.compile(optimizer='sgd', loss='mse', metrics=['acc']) but this does not work, model.compile(optimizer='sgd', loss='mse', metrics=['recall', 'precision'])
Check the keras.metrics module (and its get() function) to see which metric identifier strings are valid; check the docstrings there for details.
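A hedged sketch, assuming TensorFlow's bundled Keras (valid string aliases differ between versions, so when in doubt pass the metric classes explicitly):

from tensorflow import keras

# keras.metrics.get() resolves a string identifier, so it is a quick way to
# test whether your version accepts a given string (it raises if it doesn't):
print(keras.metrics.get('categorical_accuracy'))

# The metric classes always work, independent of string-alias support:
model.compile(optimizer='sgd', loss='mse',
              metrics=[keras.metrics.Recall(), keras.metrics.Precision()])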
4
2
73,413,556
2022-8-19
https://stackoverflow.com/questions/73413556/how-to-make-a-dataclass-like-decorator-friendly-for-pylance
I'm using pylance and enabled the strict mode, and hoping for better developing experience. It works well until I define some class decorator def struct(cls : Type[Any]) -> Type[Any]: # ... do some magic here ... return dataclass(frozen=True)(cls) @struct class Vec: x: int y: int print(Vec(1, "abc")) # no error msg here, no hints about constructor arguments also. Here, when I'm typing Vec(, there is no hints about types of constructor arguments, and when I'm typing Vec(1, "abc"), there is no type error occurs. And I find that defining @struct as generic function (instead of use Any) makes things even worse: A = TypeVar("A") def struct(cls : Type[A]) -> Type[A]: # ... do some magic here ... return dataclass(frozen=True)(cls) @struct class Vec: x: int y: int print(Vec(1, 2)) # type error here: Expected no arguments to Vec In this case, when I'm typing Vec(1, 2), a type error occurs, and it says "Expected no arguments to Vec", which is not expected. I hope that there is some way I can tell pylance (or other static linter) about the meta information of the returned class (maybe generated from the original class via typing.get_type_hints, but there is a promise from me that the metadata of the returned class is not dynamically modified after that). I noticed that pylance can deal with @dataclass very well, so I think there might be some mechanism to achieve that. Is there any way to do that? or @dataclass is just special processed by pylance?
If I understand your problem correctly, PEP 681 (Data Class Transforms) may be able to help you -- provided that you would be able to use Python 3.11 (at the time of writing, only pre-release versions of Python 3.11 are available) Data class transforms were added to allow library authors to annotate functions or classes which provide behaviour similar to dataclasses.dataclass. The intended use case seems to be exactly what you are describing: allow static type checkers to infer when code is generated dynamically in a "dataclass-like" way. The PEP introduces a single decorator, typing.dataclass_transform. This decorator can be used to mark functions or classes that dynamically generate "dataclass-like" classes. If necessary, the decorator also allows you to specify some details about the generated classes (e.g. whether __eq__ is implemented by default). For all details, you can checkout PEP 681 or the documentation. The most basic case would be changing your code to @dataclass_transform() # <-- from typing import dataclass_transform def struct(cls : Type[Any]) -> Type[Any]: # ... do some magic here ... return dataclass(frozen=True)(cls) If you now write print(Vec(1, "abc")) You will get an error from PyLance: Argument of type "Literal['2']" cannot be assigned to parameter "y" of type "int" in function "__init__" "Literal['2']" is incompatible with "int"PylancereportGeneralTypeIssues If I understand correctly, dataclass_transform should also fix your second case. Edit after a request for a more general mechanism Using inheritance and metaclasses, you can push the boundaries of dataclass_transform a bit further. The special thing about dataclass_transform is that is allows you to annotate something which you normally cannot annotate: it allows static type checkers to infer that methods are generated dynamically, with a (@dataclass compatible) signature. If you want all classes to have some common functionality, you can use inheritance in stead of a class decorator, like in the example below: @typing.dataclass_transform() class BaseClass: def shared_method(self): print('This method is shared by all subclasses!') class Vec(BaseClass): x: int y: int However, this is of course fairly limited. You probably want to add dynamically generated methods to your class. Luckily, we can do this too. However, we will need metaclass for this. Consider the following example: @typing.dataclass_transform() class Metaclass(type): def __new__(cls, name: str, bases: tuple[type], class_dict: dict[str, typing.Any], **kwargs: typing.Any): self = super().__new__(cls, name, bases, class_dict, **kwargs) annotations: dict[str, type] = getattr(self, '__annotations__', {}) if annotations: squares = '+'.join( f'self.{name}**2' for name, data_type in annotations.items() if issubclass(data_type, numbers.Number) ) source = f'def length(self): return math.sqrt({squares})' namespace = {} exec(source, globals(), namespace) setattr(self, 'length', namespace['length']) # You can also generate your own __init__ method here. # Sadly, PyLance does not agree with this line. # I do not know how to fix this. return dataclass(frozen=True)(self) class BaseClass(metaclass=Metaclass): def length(self) -> float: return 0.0 class Vec(BaseClass): x: int y: int print(Vec(1, 2).length()) The metaclass scans all annotations, and generates a method length. It assumes that all its child classes are vectors, where all fields annotated with a numerical type are entries of the vector. 
The generated length method then uses these entries to compute the length of the vector. The trouble we now face is how to make sure the PyLance knows that classes using this metaclass have a length method which returns a float. To achieve this, we first define a base class using this metaclass, which has a correctly annotated length method. All your vector classes can now inherit from this baseclass. They will have their length method generated automatically, and PyLance is happy too. There are still limitations to this. You cannot generate methods with a dynamic signature. (e.g. you cannot generate a method def change_values(self, x: int, y: int)). This is because there is simply no way to annotate that in the child class. That is part of the magic of dataclass_transform: the ability to annotate a dynamic signature for the __init__ method.
5
5
73,441,477
2022-8-22
https://stackoverflow.com/questions/73441477/attributeerror-module-emoji-has-no-attribute-get-emoji-regexp
This is the code I'm using in Google Colab import re from textblob import TextBlob import emoji def clean_tweet(text): text = re.sub(r'@[A-Za-z0-9]+', '', str(text)) # remove @mentions text = re.sub(r'#', '', str(text)) # remove the '#' symbol text = re.sub(r'RT[\s]+', '', str(text)) # remove RT text = re.sub(r'https?\/\/S+', '', str(text)) # remove the hyperlink text = re.sub(r'http\S+', '', str(text)) # remove the hyperlink text = re.sub(r'www\S+', '', str(text)) # remove the www text = re.sub(r'twitter+', '', str(text)) # remove the twitter text = re.sub(r'pic+', '', str(text)) # remove the pic text = re.sub(r'com', '', str(text)) # remove the com return text def remove_emoji(text): return emoji.get_emoji_regexp().sub(u'', text) When I make these calls tweets['cleaned_text']=tweets['text'].apply(clean_tweet) tweets['cleaned_text']=tweets['cleaned_text'].apply(remove_emoji) I'm getting the below error AttributeError Traceback (most recent call last) <ipython-input-20-9fe71f3cdb0c> in <module> 1 tweets['cleaned_text']=tweets['text'].apply(clean_tweet) ----> 2 tweets['cleaned_text']=tweets['cleaned_text'].apply(remove_emoji) 4 frames <ipython-input-19-8c0d6ba00a5b> in remove_emoji(text) 24 25 def remove_emoji(text): ---> 26 return emoji.get_emoji_regexp().sub(u'', text) AttributeError: module 'emoji' has no attribute 'get_emoji_regexp' This is very strange. I have never seen this issue before. Could someone help me with this? Am I doing something wrong here?
You get AttributeError: module 'emoji' has no attribute 'get_emoji_regexp' because the get_emoji_regexp() function was deprecated and subsequently removed in newer versions of the emoji package.
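A small sketch of a replacement, assuming emoji version 2.x (where the function was removed) and its replace_emoji() helper:

import emoji

def remove_emoji(text):
    # strips every emoji from the string
    return emoji.replace_emoji(text, replace='')

tweets['cleaned_text'] = tweets['cleaned_text'].apply(remove_emoji)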
4
2
73,433,750
2022-8-21
https://stackoverflow.com/questions/73433750/how-do-i-develop-a-negative-film-image-using-python
I have tried inverting a negative film image's color with the bitwise_not() function in Python, but it has this blue tint. I would like to know how I could develop a negative film image so that it looks somewhat good. Here's the outcome of what I did. (I just cropped the negative image for a new test I was doing, so don't mind that.)
If you don't use exact maximum and minimum, but 1st and 99th percentile, or something nearby (0.1%?), you'll get some nicer contrast. It'll cut away outliers due to noise, compression, etc. Additionally, you should want to mess with gamma, or scale the values linearly, to achieve white balance. I'll apply a "gray world assumption" and scale each plane so the mean is gray. I'll also mess with gamma, but that's just messing around. And... all of that completely ignores gamma mapping, both of the "negative" and of the outputs. import numpy as np import cv2 as cv import skimage im = cv.imread("negative.png") (bneg,gneg,rneg) = cv.split(im) def stretch(plane): # take 1st and 99th percentile imin = np.percentile(plane, 1) imax = np.percentile(plane, 99) # stretch the image plane = (plane - imin) / (imax - imin) return plane b = 1 - stretch(bneg) g = 1 - stretch(gneg) r = 1 - stretch(rneg) bgr = cv.merge([b,g,r]) cv.imwrite("positive.png", bgr * 255) b = 1 - stretch(bneg) g = 1 - stretch(gneg) r = 1 - stretch(rneg) # gray world b *= 0.5 / b.mean() g *= 0.5 / g.mean() r *= 0.5 / r.mean() bgr = cv.merge([b,g,r]) cv.imwrite("positive_grayworld.png", bgr * 255) b = 1 - np.clip(stretch(bneg), 0, 1) g = 1 - np.clip(stretch(gneg), 0, 1) r = 1 - np.clip(stretch(rneg), 0, 1) # goes in the right direction b = skimage.exposure.adjust_gamma(b, gamma=b.mean()/0.5) g = skimage.exposure.adjust_gamma(g, gamma=g.mean()/0.5) r = skimage.exposure.adjust_gamma(r, gamma=r.mean()/0.5) bgr = cv.merge([b,g,r]) cv.imwrite("positive_gamma.png", bgr * 255) Here's what happens when gamma is applied to the inverted picture... a reasonably tolerable transfer function results from applying the same factor twice, instead of applying its inverse. Trying to "undo" the gamma while ignoring that the values were inverted... causes serious distortions: And the min/max values for contrast stretching also affect the whole thing. A simple photo of a negative simply won't do. It'll include stray light that offsets the black point, at the very least. You need a proper scan of the negative.
4
10
73,435,918
2022-8-21
https://stackoverflow.com/questions/73435918/nicely-convert-a-txt-file-to-json-file
I have a data.txt file which I want to convert to a data.json file and nicely print the first 2 entries (data.txt contains 3 unique IDs). The data.txt can be found publicly here (this is a sample - the original file contains 10000 unique "linkedin_internal_id" values). I tried the following: with open("data.txt", "r") as f: content = f.read() data = json.dumps(content, indent=3) This code doesn't print the appropriate JSON format of data.txt (it also includes \\). Also, my Jupyter notebook gets stuck because of the large file size, so I want to nicely print only the first 2 entries.
It is called newline-delimited JSON, where each line is a valid JSON value and the line separator is '\n'. You can read it line by line and append each parsed object to a list, so it will be easy to iterate over and process further. See: ldjson import json with open("data.txt", "r") as f: contents = f.read() data = [json.loads(item) for item in contents.strip().split('\n')] print(data[0:2])
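Since the file is large and only the first two entries need to be shown, a sketch that avoids loading the whole file into memory:

import json
from itertools import islice

with open("data.txt", "r") as f:
    first_two = [json.loads(line) for line in islice(f, 2)]

print(json.dumps(first_two, indent=3))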
4
6
73,433,013
2022-8-21
https://stackoverflow.com/questions/73433013/expand-pandas-dataframe-from-row-wise-to-column-wise
I want to expand the columns of the following (toy example) pandas DataFrame, df = pd.DataFrame({'col1': ["A", "A", "A", "B", "B", "B"], 'col2': [1, 7, 3, 2, 9, 4], 'col3': [3, -1, 0, 5, -2, -3],}) col1 col2 col3 0 A 1 3 1 A 7 -1 2 A 3 0 3 B 2 5 4 B 9 -2 5 B 4 -3 such that it will become row-wise, col1 col2_1 col2_2 col2_3 col3_1 col3_2 col3_3 0 A 1 7 3 3 -1 0 1 B 2 9 4 5 -2 -3 I know that I shall use groupby('col1') but do not know how to achieve the desired DataFrame. Note: The number of elements in each group when we perform groupby('col1') are all equal (in this case we have three A's and three B's) Edit: I managed to do it by the following code, but it is not efficient, import pandas as pd from functools import partial def func(x, exclude_list): for col in x.columns: if col in exclude_list: continue for i, value in enumerate(x[col].values): x[f'{col}_{i+1}'] = value return x df = pd.DataFrame({'col1': ["A", "A", "A", "B", "B", "B"], 'col2': [1, 7, 3, 2, 9, 4], 'col3': [3, -1, 0, 5, -2, -3],}) exclude_list = ['col1'] columns_to_expand = ['col2', 'col3'] func2 = partial(func, exclude_list=exclude_list) df2 = df.groupby(exclude_list).apply(func2) df2.drop(columns_to_expand, axis=1, inplace=True) df3 = df2.groupby(exclude_list).tail(1).reset_index() df3.drop('index', axis=1, inplace=True) print(df3) which results in, col1 col2_1 col2_2 col2_3 col3_1 col3_2 col3_3 0 A 1 7 3 3 -1 0 1 B 2 9 4 5 -2 -3 Edit2: This code, based on ouroboros1 answer works efficiently, df_pivot = None for col in columns_to_expand: df['index'] = [f'{col}_{i}' for i in range(1,4)]*len(np.unique(df[exclude_list].values)) if df_pivot is None: df_pivot = df.pivot(index=exclude_list, values=col, columns='index').reset_index(drop=False) else: df_pivot = df_pivot.merge(df.pivot(index=exclude_list, values=col, columns='index').reset_index(drop=False))
Update: the question has been updated to expand multiple columns row-wise. This requires some refactoring of the initial answers that were tailored to the initial question, which only required the operation to take place on one column (col2). Note that the current refactored answers also work perfectly fine on a single column. However, since they are a little verbose for that situation, I'm keeping the original answers for just 1 column at the end. Answers for expanding multiple columns row-wise You could use df.pivot for this: import pandas as pd df = pd.DataFrame({'col1': ["A", "A", "A", "B", "B", "B"], 'col2': [1, 7, 3, 2, 9, 4], 'col3': [3, -1, 0, 5, -2, -3],}) cols = ['col2','col3'] # val count per unique val in col1: N.B. expecting all to have same count! vals_unique_col1 = df.col1.value_counts()[0]+1 # 3+1 (use in `range()`) len_unique_col1 = len(df.col1.unique()) # 2 # create temp cols [1,2,3] and store in new col df['my_index'] = [i for i in range(1,vals_unique_col1)]*len_unique_col1 df_pivot = df.pivot(index='col1',values=cols,columns='my_index')\ .reset_index(drop=False) # customize df cols by joining MultiIndex columns df_pivot.columns = ['_'.join(str(i) for i in x) for x in df_pivot.columns] df_pivot.rename(columns={'col1_':'col1'}, inplace=True) print(df_pivot) col1 col2_1 col2_2 col2_3 col3_1 col3_2 col3_3 0 A 1 7 3 3 -1 0 1 B 2 9 4 5 -2 -3 2 alternative solutions based on df.groupby could like this: Groupby solution 1 import pandas as pd df = pd.DataFrame({'col1': ["A", "A", "A", "B", "B", "B"], 'col2': [1, 7, 3, 2, 9, 4], 'col3': [3, -1, 0, 5, -2, -3],}) cols = ['col2','col3'] df_groupby = df.groupby('col1')[cols].agg(list)\ .apply(pd.Series.explode, axis=1).reset_index(drop=False) # same as in `pivot` method, this will be 3 len_cols = df.col1.value_counts()[0] # rename cols df_groupby.columns=[f'{col}_{(idx-1)%len_cols+1}' if col != 'col1' else col for idx, col in enumerate(df_groupby.columns)] Groupby solution 2 import pandas as pd import numpy as np df = pd.DataFrame({'col1': ["A", "A", "A", "B", "B", "B"], 'col2': [1, 7, 3, 2, 9, 4], 'col3': [3, -1, 0, 5, -2, -3],}) cols = ['col2','col3'] agg_lists = df.groupby('col1')[cols].agg(list) dfs = [pd.DataFrame(agg_lists[col].tolist(), index=agg_lists.index) for col in agg_lists.columns] df_groupby = pd.concat(dfs, axis=1) len_cols = df.col1.value_counts()[0] cols_rep = np.repeat(cols,len_cols) df_groupby.columns = [f'{col}_{str(i+1)}' for col, i in zip(cols_rep, df_groupby.columns)] df_groupby.reset_index(drop=False, inplace=True) (Original) answers for expanding single column row-wise You could use df.pivot for this: import pandas as pd df = pd.DataFrame({'col1': ["A", "A", "A", "B", "B", "B"], 'col2': [1, 7, 3, 2, 9, 4]}) # add col with prospective col names (`col1_1,*_2,*_3`) # and multiply by len unique values in `df.col1` df['index'] = [f'col2_{i}' for i in range(1,4)]*len(df.col1.unique()) df_pivot = df.pivot(index='col1',values='col2',columns='index')\ .reset_index(drop=False) print(df_pivot) index col1 col2_1 col2_2 col2_3 0 A 1 7 3 1 B 2 9 4 Alternative solution based on df.groupby could like this: import pandas as pd df = pd.DataFrame({'col1': ["A", "A", "A", "B", "B", "B"], \ 'col2': [1, 7, 3, 2, 9, 4]}) # create lists of values in `col2` per group in `col1`, # then expand into multiple cols with `apply(pd.Series), finally reset index df_groupby = df.groupby('col1').agg(list)['col2']\ .apply(pd.Series).reset_index(drop=False) # overwrite new cols (`0,1,2`) with desired col names `col2_1, etc.` 
df_groupby.columns=[f'col2_{col+1}' if col != 'col1' else col for col in list(df_groupby.columns)] print(df_groupby) col1 col2_1 col2_2 col2_3 0 A 1 7 3 1 B 2 9 4
3
3
73,430,919
2022-8-21
https://stackoverflow.com/questions/73430919/how-does-python-handle-list-unpacking-redefinition-and-reference
I am new to python and am trying to understand how it handles copies vs references in respect to list unpacking. I have a simple code snippet and am looking for an explanation as to why it is behaving the way it does. arr = [1, 2, 3, 4] [one, two, three, four] = arr print(id(arr[0]), arr[0]) print(id(one), one) one = 5 print(id(one), one) The output is: (16274840, 1) (16274840, 1) (16274744, 5) I am not sure why one is all the sudden moved to a different memory location when I try to modify its contents. I am using python version 2.7.18. This is my first post, so I apologize in advance if I am not adhering to the guidelines. Please let me know if I have violated them. Thank you for all the responses. They have helped me boil down my misunderstanding to this code: var = 1 print(id(var), var) var = 5 print(id(var), var) With output: (38073752, 1) (38073656, 5) Asking about lists and unpacking them was completely obfuscatory. This does a great job of explaining: http://web.stanford.edu/class/archive/cs/cs106a/cs106a.1212/handouts/mutation.html
The id/address is not associated with the variable/name; it's associated with the data that the variable is referring to. The 1 object is, in this instance, at address 16274840, and the 5 object is at address 16274744. one = 5 causes one to now refer to the 5 object which is at location 16274744. Just to rephrase this in terms of C, I think your question essentially boils down to "why does the following not modify the first element of arr?" (I'm ignoring unpacking since it isn't actually relevant to the question): arr = [1, 2, 3, 4] one = arr[0] one = 5 I would approximate that code to the following C which also does not modify arr: int internedFive = 5; int arr[4] = {1, 2, 3, 4}; int* one = &arr[0]; one = &internedFive; printf("%d", arr[0]); // Prints 1 one originally pointed to the first element of arr, but was reassigned to point to the 5. This reassignment of the pointer has no effect on the data location originally pointed to by one and arr[0].
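For completeness, a small sketch of the difference between rebinding the name and assigning through the list itself:

arr = [1, 2, 3, 4]
one = arr[0]
one = 5        # rebinds the name `one`; arr is untouched
arr[0] = 5     # rebinds the first slot inside the list object
print(arr)     # [5, 2, 3, 4]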
5
6
73,395,427
2022-8-17
https://stackoverflow.com/questions/73395427/python-pandas-dataframe-how-to-use-stylesheet-in-to-xml-function
I have a dataframe like this: col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... and i want to create an xml like this: <?xml version='1.0' encoding='utf-8'?> <root xmlns:xsi="http://www.example.com" xmlns="http://www.example.com"> <all> <col> <col1>...</col1> <col2>...</col2> <col3>...</col3> <col4>...</col4> <col5>...</col5> <col6>...</col6> <group1> <col7>...</col7> <col8>...</col8> </group1> <group2> <col9>...</col9> <col10>...</col10> </group2> </col> <col> <col1>...</col1> <col2>...</col2> <col3>...</col3> <col4>...</col4> <col5>...</col5> <col6>...</col6> <group1> <col7>...</col7> <col8>...</col8> </group1> <group2> <col9>...</col9> <col10>...</col10> </group2> </col> </all> </root> my solution is to use stylesheet in to_xml function like this: df.to_xml("example.xml", root_name='all', row_name='col', encoding='utf-8', xml_declaration=True, pretty_print=True, index=False, stylesheet='example.xslt') but i have no idea how to write example.xslt file and how to set to_xml function to get desired xml. I am looking for suggestions and examples of xslt that might work
Your setup for the to_xml function seems to be ok. In the code below I'm generating a DataFrame with 20 rows and 10 columns to emulate your example. You will find below an example of a xslt file that might work for the sample you have given and the XML output from that. import pandas as pd import numpy as np np.random.seed(42) df = pd.DataFrame(np.random.randint(0,100,size=(20, 10)), columns=[f'col{i+1}' for i in range(10)]) df.to_xml("example.xml", root_name='all', row_name='col', encoding='utf-8', xml_declaration=True, pretty_print=True, index=False, stylesheet='example.xslt') File example.xslt <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <root> <all> <xsl:for-each select="all/col"> <col> <col1><xsl:value-of select="col1"/></col1> <col2><xsl:value-of select="col2"/></col2> <col3><xsl:value-of select="col3"/></col3> <col4><xsl:value-of select="col4"/></col4> <col5><xsl:value-of select="col5"/></col5> <col6><xsl:value-of select="col6"/></col6> <group1> <col7><xsl:value-of select="col7"/></col7> <col8><xsl:value-of select="col8"/></col8> </group1> <group2> <col9><xsl:value-of select="col9"/></col9> <col10><xsl:value-of select="col10"/></col10> </group2> </col> </xsl:for-each> </all> </root> </xsl:template> </xsl:stylesheet> Output File example.xml <?xml version="1.0"?> <root> <all> <col> <col1>51</col1> <col2>92</col2> <col3>14</col3> <col4>71</col4> <col5>60</col5> <col6>20</col6> <group1> <col7>82</col7> <col8>86</col8> </group1> <group2> <col9>74</col9> <col10>74</col10> </group2> </col> <col> <col1>87</col1> <col2>99</col2> <col3>23</col3> <col4>2</col4> ... ... ...
6
4
73,423,759
2022-8-20
https://stackoverflow.com/questions/73423759/how-to-update-a-list-of-dictionaries-from-a-user-input
I'm new to Python and working on a task where I need to update a list of dictionaries from a customer input. I have a list as follows: drinks_info = [{'Pepsi': 2.0}, {'Coke': 2.0}, {'Solo': 2.50}, {'Mt Dew': 3.0}] If the user inputs: Pepsi: 3.0, Sprite: 2.50 Then the list should update to: [{'Pepsi': 3.0}, {'Coke': 2.0}, {'Solo': 2.50}, {'Mt Dew': 3.0}], {'Sprite': 2.50}] Any help is appreciated. Thanks in advance.
We can use a function to update list based on user inputs. This function would ask user for each list item separately. def update_drinks_info(drinks_info): for drink in drinks_info: for key, value in drink.items(): print(key, value) new_price = input("Enter new price for " + key + ": ") if new_price != "": drink[key] = float(new_price) return drinks_info This function asks user to enter the new price for the specified item, if user presses Enter, then the price stays same without any additional input. We can use a function to update list based on user specified keys and values. def update_drinks_info(drinks_info, drink, price): index = None for i in range(len(drinks_info)): if drink in drinks_info[i]: index = i if index is None: drinks_info.append({drink: price}) else: drinks_info[index] = {drink: price} return drinks_info def split_input(input): drinks = input.split(", ") for drink in drinks: drink = drink.split(":") drink[1] = float(drink[1]) update_drinks_info(drinks_info, drink[0], drink[1]) Using split_input function, we are dividing user input to separate arrays and update the drinks info using the specified drink and price. For example: split_input(input("Enter drink and price: ")) with the example input of Pepsi: 3.0, Sprite: 2.50 is going to update the price of both items.
3
5
73,427,383
2022-8-20
https://stackoverflow.com/questions/73427383/how-to-check-the-availability-of-environment-variables-correctly
I did a token check: if at least one token is missing, True is not returned. Now I need to report which variable is missing; how do I do that? PRACTICUM_TOKEN = os.getenv('PRACTICUM_TOKEN') TELEGRAM_TOKEN = os.getenv('TELEGRAM_TOKEN') TELEGRAM_CHAT_ID = os.getenv('TELEGRAM_CHAT_ID') def check_tokens(): """Checks the availability of environment variables.""" ENV_VARS = [PRACTICUM_TOKEN, TELEGRAM_TOKEN, TELEGRAM_CHAT_ID] if not all(ENV_VARS): print('Required environment variables are missing:', ...) else: return True
I might suggest putting these values inside a class. The check tokens method can be part of the class, and you can use __dict__ to dynamically get reference to all of the tokens you defined without having to duplicate code. class Environment: def __init__(self): self.PRACTICUM_TOKEN = os.getenv('PRACTICUM_TOKEN') self.TELEGRAM_TOKEN = os.getenv('TELEGRAM_TOKEN') self.TELEGRAM_CHAT_ID = os.getenv('TELEGRAM_CHAT_ID') def check_tokens(self): """Checks the availability of environment variables.""" missing_vars = [var for var, value in self.__dict__.items() if not value] if missing_vars: print('Required environment variables are missing:', *missing_vars) return False else: return True print(Environment().check_tokens())
4
8
73,426,545
2022-8-20
https://stackoverflow.com/questions/73426545/get-index-and-column-name-for-a-particular-value-in-pandas-dataframe
I have the following Pandas DataFrame: A B 0 Exporter Invoice No. & Date 1 ABC PVT LTD. ABC/1234/2022-23 DATED 20/08/2022 2 1234/B, XYZ, 3 ABCD, DELHI, INDIA Proforma Invoice No. Date. 4 AB/CDE/FGH/2022-23/1234 20.08.2022 5 Consignee Buyer (If other than consignee) 6 ABC Co. 8 P.O BOX NO. 54321 9 Berlin, Germany Now I want to search for a value in this DataFrame, and store the index and column name in 2 different variables. For example: If I search "Consignee", I should get index = 5 column = 'A'
Assuming you really want the index/column of the match, you can use a mask and stack: df.where(df.eq('Consignee')).stack() output: 5 A Consignee dtype: object As list: df.where(df.eq('Consignee')).stack().index.tolist() output: [(5, 'A')]
3
5
73,426,426
2022-8-20
https://stackoverflow.com/questions/73426426/understanding-the-logic-of-pandas-sort-values-in-python
Here is the pandas code that I wrote to understand how sort_values works for multiple columns. I thought it sorted the columns independently, but it did not work like that. df = pd.DataFrame({ 'col1' : ['A', 'Z', 'E', np.nan, 'D', 'C','B'], 'col2' : [2, 1, 9, 8, 7, 4,10], 'col3': [0, 1, 9, 4, 2, 3,1], 'col4': [11,12,12,13,14,55,56], }) df_sort1= df.sort_values(by=['col1', 'col2','col3']) df_sort2= df.sort_values(by=['col1']) # these also return the same result #df.sort_values(by=['col1', 'col2','col3','col4']) #df.sort_values(by=['col1', 'col2']) The output of df_sort1 and df_sort2 is the same. Could someone please explain how it works and what I did not understand here properly? Thanks in advance.
df_sort2 sorts the dataframe only on the col1 values, but df_sort1 sorts considering all three columns: if there is a tie, i.e. two rows have the same col1 value, it checks the value of col2, and if col2 also has the same value in both rows, it looks at the col3 value. Let's take an example: import pandas as pd import numpy as np df = pd.DataFrame({ 'col1' : ['A', 'A', 'E', np.nan, 'D', 'C','B'], 'col2' : [2, 1, 9, 8, 7, 4,10], 'col3': [0, 1, 9, 4, 2, 3,1], 'col4': [11,12,12,13,14,55,56], }) print(df.head()) col1 col2 col3 col4 0 A 2 0 11 1 A 1 1 12 2 E 9 9 12 3 NaN 8 4 13 4 D 7 2 14 df_sort1= df.sort_values(by=['col1', 'col2','col3']) print(df_sort1) col1 col2 col3 col4 1 A 1 1 12 0 A 2 0 11 6 B 10 1 56 5 C 4 3 55 4 D 7 2 14 2 E 9 9 12 3 NaN 8 4 13 df_sort2= df.sort_values(by=['col1']) print(df_sort2) col1 col2 col3 col4 0 A 2 0 11 1 A 1 1 12 6 B 10 1 56 5 C 4 3 55 4 D 7 2 14 2 E 9 9 12 3 NaN 8 4 13
4
4
73,425,359
2022-8-20
https://stackoverflow.com/questions/73425359/is-it-possible-to-compile-microbit-python-code-locally
I am running Ubuntu 22.04 with xorg. I need to find a way to compile microbit python code locally to a firmware hex file. Firstly, I followed the guide here https://microbit-micropython.readthedocs.io/en/latest/devguide/flashfirmware.html. After a lot of debugging, I got to this point: https://pastebin.com/MGShD31N However, the file platform.h does exist. sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ ls /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ At this point, I gave up on this and tried using Mu editor with the AppImage. However, Mu requires wayland, and I am on xorg. Does anyone have any idea if this is possible? Thanks.
Okay, so elaborating on Peter Till's answer. Firstly, you can use uflash: uflash path/to/your/code . Or, you can use microfs: ufs put path/to/main.py
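Both tools can also be called from Python; a sketch under the assumption that the installed versions expose these helpers (check help(uflash) and help(microfs) for your versions):

import uflash
uflash.flash('path/to/your/code.py')   # build a .hex with the script embedded and copy it to the micro:bit

import microfs
microfs.put('path/to/main.py')         # copy the file onto the micro:bit's filesystem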
4
1
73,424,696
2022-8-20
https://stackoverflow.com/questions/73424696/how-to-get-the-consecutive-items-from-string
I need to get the substrings where the same character repeats consecutively more than once. This is my code: l = [] p = 'abbdccc' for i in range(len(p)-1): m = '' if p[i] == p[i+1]: m +=p[i] l.append(m) print(l) My string is 'abbdccc'; b and c are repeated more than once, so the expected output is ['bb', 'ccc']. If my string is '34456788', then my output should be ['44', '88'].
Solution with groupby from itertools import groupby [v for _, g in groupby(s) if (v := ''.join(g)) and len(v) > 1] Sample run for input string s: # input: 'abbdccc' # output: ['bb', 'ccc'] # input: '34456788' # output: ['44', '88']
5
3