question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
77,899,043 | 2024-1-29 | https://stackoverflow.com/questions/77899043/how-to-catch-dateparseerror-in-pandas | I am running a script that uses pd.to_datetime() on inputs that are sometimes not able to be parsed. For example, if I try to run pd.to_datetime('yesterday') it results in an error DateParseError: Unknown datetime string format, unable to parse: yesterday, at position 0 I'd like to catch this exception and process it further in my code. I have tried: try: pd.to_datetime('yesterday') except pd.errors.ParserError: print('exception caught') but the exception is not caught. Does anyone know where DateParseError is defined and how I can catch it? Searching the pandas documentation doesn't yield any results | You can import it from pandas._libs.tslibs.parsing: from pandas._libs.tslibs.parsing import DateParseError try: pd.to_datetime('yesterday') except DateParseError: print('exception caught') | 5 | 6 |
77,897,621 | 2024-1-29 | https://stackoverflow.com/questions/77897621/map-each-row-into-a-list-and-collect-all | I have a dataframe: df = pl.DataFrame( { "t_left": [0.0, 1.0, 2.0, 3.0], "t_right": [1.0, 2.0, 3.0, 4.0], "counts": [1, 2, 3, 4], } ) And I want to map each row into a list, and then collect all values (to be passed into e.g. matplotlib.hist) I can do it by hand like: times = [] for t_left, t_right, counts in df.rows(): times.extend(np.linspace(t_left, t_right, counts + 1)[1:]) but this is painfully slow for my data set. I am completely new to both python and polars, so I was wondering if there is a better way to achieve this. EDIT Complete copy-paste example to reproduce. import polars as pl import numpy as np size = 1000000 df = pl.DataFrame( { "t_left": np.random.rand(size), "t_right": np.random.rand(size) + 1, "counts": [1] * size, } ) times = [] for t_left, t_right, counts in df.rows(): times.extend(np.linspace(t_left, t_right, counts + 1)[1:]) | As you need the final result as one big list, you don't really need to work with lists, you can just explode() your dataframe into multiple rows so then you can run vectorized operations on it: so, taking your example data: df = pl.DataFrame( { "t_left": [0.0, 1.0, 2.0, 3.0], "t_right": [1.0, 2.0, 3.0, 4.0], "counts": [1, 2, 3, 4], } ) ββββββββββ¬ββββββββββ¬βββββββββ β t_left β t_right β counts β β --- β --- β --- β β f64 β f64 β i64 β ββββββββββͺββββββββββͺβββββββββ‘ β 0.0 β 1.0 β 1 β β 1.0 β 2.0 β 2 β β 2.0 β 3.0 β 3 β β 3.0 β 4.0 β 4 β ββββββββββ΄ββββββββββ΄βββββββββ Now, let's create some columns we can use for our calculation. To calculate evenly distributed numbers you need a starting point - t_left, step - (t_right - t_left) / counts and then you need to repeat it counts times. # Thanks @Hericks for pointing out that code could be simplified even more times = df.select( start = pl.col('t_left'), step = ((pl.col("t_right") - pl.col("t_left")) / pl.col("counts")), i = pl.int_ranges(1, pl.col('counts') + 1) ).explode('i') # my old code # times = df.select( # start = pl.col('t_left').repeat_by('counts').explode(), # step = ((pl.col("t_right") - pl.col("t_left")) / pl.col("counts")).repeat_by('counts').explode(), # i = pl.int_ranges(1, pl.col('counts') + 1).explode() #) βββββββββ¬βββββββββββ¬ββββββ β start β step β i β β --- β --- β --- β β f64 β f64 β i64 β βββββββββͺβββββββββββͺββββββ‘ β 0.0 β 1.0 β 1 β β 1.0 β 0.5 β 1 β β 1.0 β 0.5 β 2 β β 2.0 β 0.333333 β 1 β β β¦ β β¦ β β¦ β β 3.0 β 0.25 β 1 β β 3.0 β 0.25 β 2 β β 3.0 β 0.25 β 3 β β 3.0 β 0.25 β 4 β βββββββββ΄βββββββββββ΄ββββββ now, it's just a matter of applying the formula and converting result to_list(): times.select( res = pl.col('start') + pl.col('step') * pl.col('i') )['res'].to_list() [1.0, 1.5, 2.0, 2.3333333333333335, 2.6666666666666665, 3.0, 3.25, 3.5, 3.75, 4.0] | 3 | 5 |
77,898,544 | 2024-1-29 | https://stackoverflow.com/questions/77898544/how-to-keep-substring-until-character-is-found-in-polars-dataframe | I am working with a Polars DataFrame that contains a column with values like this: shape: (3, 2) ββββββββββββββββ¬βββββββββββββββ β A β B β β --- β --- β β str β str β ββββββββββββββββΌβββββββββββββββ€ β A|B:0d:cs β C:2ew2 β ββββββββββββββββΌβββββββββββββββ€ β A:1ds0 β E|F:91we23 β ββββββββββββββββΌβββββββββββββββ€ β QW|P:3dwsd β A|Z:12w219 β ββββββββββββββββ΄βββββββββββββββ I want to extract the left substring from each cell in the DataFrame until I find the character ":". The desired result should look like this: shape: (3, 2) ββββββββββββββββ¬βββββββββββββββ β A β B β β --- β --- β β str β str β ββββββββββββββββΌβββββββββββββββ€ β A|B β C β ββββββββββββββββΌβββββββββββββββ€ β A β E|F β ββββββββββββββββΌβββββββββββββββ€ β QW|P β A|Z β ββββββββββββββββ΄βββββββββββββββ Thank you for your help in advance! | you can use str.split() and list.first(): df.select(pl.all().str.split(':').list.first()) βββββββββββ¬ββββββββ β integer β float β β --- β --- β β str β str β βββββββββββͺββββββββ‘ β A|B β C β β A β E|F β β QW|P β A|Z β βββββββββββ΄ββββββββ | 2 | 1 |
77,885,145 | 2024-1-26 | https://stackoverflow.com/questions/77885145/how-can-i-suppress-ruff-linting-on-a-block-of-code | I would like to disable/suppress the ruff linter (or certain linting rules) on a block of code. I know that I can do this for single lines (by using # noqa: <rule_code> at the end of the line) or for entire files/folder (#ruff: noqa <rule_code> at the top of the file). However, I would like to disable the linter ruff on one function or on a multi-line block of code, but not an entire file. Is there a way to do this, without adding noqa to the end of each line? | Not currently possible (as of 29 Jan 2024). See here for updates on the issue | 11 | 8 |
77,895,517 | 2024-1-28 | https://stackoverflow.com/questions/77895517/how-to-url-encode-special-character-in-python | I am trying to do URL encoding on the following string 'https://bla/ble:bli~' using the urllib standard library on Python 3.10. The problem is that the ~ character is not encoded to %7E as per URL encoding. Is there a way around it? I have tried from urllib.parse import quote input_string = 'https://bla/ble:bli~' url_encoded = quote(input_string) url_encoded takes the value 'https%3A//bla/ble%3Abli~' I have also tried: url_encoded = quote(input_string,safe='~') In this case the url_encoded takes the value 'https%3A%2F%2Fbla%2Fble%3Abli~' | The '~' character is not required to be encoded. (Though its encoded form is %7E). The tilde is one of the allowed characters in a URL. So you can use either of the following: from urllib.parse import quote input_string = 'https://bla/ble:bli~' url_encoded = quote(input_string) or you can use the requests module as well import requests input_string = 'https://bla/ble:bli~' url_encoded = requests.utils.requote_uri(input_string) Edit: As you specifically want URL encoding for all the special characters, use the following function: def url_encode(url): # add more characters into the list if required lst = ['[', '@', '_', '!', '#', '$', '%', '^', '&', '*', '(', ')', '<', '>', '?', '/', '\\', '|', '{', '}', '~', ':', ']'] for x in url: if x in lst: new = format(ord(x), '02X') url = url.replace(x, f'%{new}') return url | 2 | 5 |
77,895,351 | 2024-1-28 | https://stackoverflow.com/questions/77895351/how-can-i-fix-the-metaclass-conflict-in-django-rest-framework-serializer | I've written the following code in django and DRF: class ServiceModelMetaClass(serializers.SerializerMetaclass, type): SERVICE_MODELS = { "email": EmailSubscriber, "push": PushSubscriber } def __call__(cls, *args, **kwargs): service = kwargs.get("data", {}).get("service") cls.Meta.subscriber_model = cls.SERVICE_MODELS.get(service) return super().__call__(*args, **kwargs) class InterListsActionsSerializer(serializers.Serializer, metaclass=ServiceModelMetaClass): source_list_id = serializers.IntegerField() target_list_id = serializers.IntegerField() subscriber_ids = serializers.IntegerField(many=True, required=False) account_id = serializers.CharField() service = serializers.ChoiceField(choices=("email", "push")) class Meta: subscriber_model: Model = None def move(self): model = self.Meta.subscriber_model # Rest of the method code. The purpose of this code is that this serializer might need doing operation on different models based on the service that the user wants to use. So I wrote this metaclass to prevent writing duplicate code and simply change the subscriber_model based on user's needs. Now as you might know, serializers.Serializer uses a metaclass by its own, serializers.SerializerMetaclass. If I don't use this metaclass for creating my metaclass, it results in the following error: TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases. But when I try making my ServiceModelMetaClass metaclass to inherit from serializers.SerializerMetaclass it gives me this error: File "/project-root/segment_management/serializers/subscriber_list.py", line 33, in <module> class InterListsActionsSerializer(serializers.Serializer, metaclass=ServiceModelMetaClass): File "/project-root/segment_management/serializers/subscriber_list.py", line 36, in InterListsActionsSerializer subscriber_ids = serializers.IntegerField(many=True, required=False) File "/project-root/.venv/lib/python3.10/site-packages/rest_framework/fields.py", line 894, in __init__ super().__init__(**kwargs) TypeError: Field.__init__() got an unexpected keyword argument 'many' What should I do to fix this problem or maybe a better alternative approach that keeps the code clean without using metaclass? Thanks in advance. | The problem is not the metaclass, or at least it is not the primary problem. The main problem is the many=True for your IntegerField. I looked it up, and apparently a field can not have many=True, only a Serializer can. Indeed, if we inspect the source code, we see [GitHub]: def __new__(cls, *args, **kwargs): # We override this method in order to automatically create # `ListSerializer` classes instead when `many=True` is set. 
if kwargs.pop('many', False): return cls.many_init(*args, **kwargs) return super().__new__(cls, *args, **kwargs) and the .many_init(β¦) essentially creates the field as a child field, and wraps it in a ListSerializer [GitHub]: @classmethod def many_init(cls, *args, **kwargs): # β¦ child_serializer = cls(*args, **kwargs) list_kwargs = { 'child': child_serializer, } # β¦ meta = getattr(cls, 'Meta', None) list_serializer_class = getattr(meta, 'list_serializer_class', ListSerializer) return list_serializer_class(*args, **list_kwargs) But we can appy essentially the same trick with a field, by using a ListField [drf-doc]: class InterListsActionsSerializer( serializers.Serializer, metaclass=ServiceModelMetaClass ): source_list_id = serializers.IntegerField() target_list_id = serializers.IntegerField() subscriber_ids = serializers.ListField( child=serializers.IntegerField(), required=False ) account_id = serializers.CharField() service = serializers.ChoiceField(choices=('email', 'push')) class Meta: subscriber_model: Model = None def move(self): model = self.Meta.subscriber_model | 2 | 2 |
77,895,186 | 2024-1-28 | https://stackoverflow.com/questions/77895186/re-split-on-an-empty-string | I'm curious about the result of this Python code that does a split on an empty string '': import re x = re.split(r'\W*', '') y = re.split(r'(\W*)', '') Since the string is an empty string, I expect the result for x = re.split(r'\W*', '') to be an empty list and that for y = re.split(r'(\W*)', '') to be ['']. The actual result for x = re.split(r'\W*', '') is ['',''] and that for y = re.split(r'(\W*)', '') is ['','','']. I don't know what leads to these results. | Note that the regular expression \W* can match an empty string. Thus, while it's not useful, it's true that the empty string can be split in half to produce an empty string: '' = '' + '' + '' The '' that precedes the regular expression The '' that matches the regular expression The '' that follows the regular expression In the first case, you get strings 1 and 3. In the second case, you also get string 2. (In general, it's probably never a good idea to use a regular expression that can match the empty string as the first argument.) | 2 | 4 |
77,889,718 | 2024-1-27 | https://stackoverflow.com/questions/77889718/rolling-window-of-next-rows-in-polars | I want to create a new column that contains the max, min and average of the next X elements. After searching for a while the only solution I was able to find was: df = df.with_columns( pl.col("price").shift(-20).rolling_max(window_size=20).alias('max'), pl.col("price").shift(-20).rolling_min(window_size=20).alias('min') ) However for the first 20 rows there is no data and results in nulls because the .rolling_max function applies the windows of the previous rows and not the next. Is there a better way to achieve this? | @RomanPekar's solution works well for the creation of few columns. However, when working with multiple columns it could be preferable to use a full forward-looking rolling context using pl.DataFrame.group_by_dynamic as follows. Data. df = pl.DataFrame({ "price": [1, 2, 6, 1, 3, 5, 8, 7], }) Usage of pl.DataFrame.group_by_dynamic. I've added multiple aggregations of the rolling window and a separate window column to clarify the approach. ( df .group_by_dynamic( index_column=pl.int_range(0, pl.len()), every="1i", period="2i", offset="0i", ) .agg( pl.col("price").first(), pl.col("price").alias("window"), pl.col("price").max().alias("min"), pl.col("price").max().alias("max"), ) ) Output. shape: (8, 5) βββββββββββ¬ββββββββ¬ββββββββββββ¬ββββββ¬ββββββ β literal β price β window β min β max β β --- β --- β --- β --- β --- β β i64 β i64 β list[i64] β i64 β i64 β βββββββββββͺββββββββͺββββββββββββͺββββββͺββββββ‘ β 0 β 1 β [1, 2] β 2 β 2 β β 1 β 2 β [2, 6] β 6 β 6 β β 2 β 6 β [6, 1] β 6 β 6 β β 3 β 1 β [1, 3] β 3 β 3 β β 4 β 3 β [3, 5] β 5 β 5 β β 5 β 5 β [5, 8] β 8 β 8 β β 6 β 8 β [8, 7] β 8 β 8 β β 7 β 7 β [7] β 7 β 7 β βββββββββββ΄ββββββββ΄ββββββββββββ΄ββββββ΄ββββββ This approach also allows a separate by column to perform these aggregations within groups defined by a separate column. | 4 | 2 |
77,893,484 | 2024-1-28 | https://stackoverflow.com/questions/77893484/italicizing-letters-and-numbers-in-matplotlib | I am attempting to italicize text in my plot: import matplotlib.pyplot as plt plt.plot([1,2,3],[1,1,1], label = '$\it{ABC123}$') plt.legend() plt.show() as shown by Styling part of label in legend in matplotlib but this only italicized ABC, not 123, which was also noticed, but left unsolved, by Matplotlib italic font cannot be applied to numbers in the legend? I have also tried usetex italic symbols in matplotlib? but that changes the fonts which make this figure incompatible with other figures that I'm making. I'm on Python 3.10.12 and matplotlib 3.8.1 matplotlib-inline 0.1.3 how can I italicize both letters and numbers, e.g. in ABC123? | You can use the font_manager: import matplotlib.pyplot as plt import matplotlib.font_manager as font_manager font = font_manager.FontProperties(style='italic') plt.plot([1,2,3],[1,1,1], label = 'ABC123') plt.legend(prop=font) plt.show() To use it for x/y ticks or labels, you must use the fontproperties argument: plt.yticks(fontproperties=font) | 2 | 3 |
77,888,435 | 2024-1-26 | https://stackoverflow.com/questions/77888435/python-method-overriding-more-specific-arguments-in-derived-than-base-class | Let's say I want to create an abstract base class called Document. I want the type checker to guarantee that all its subclasses implement a class method called from_paragraphs, which constructs a document from a sequence of Paragraph objects. However, a LegalDocument should only be constructable from LegalParagraph objects, and an AcademicDocument - only from AcademicParagraph objects. My instinct is to do it like so: from abc import ABC, abstractmethod from typing import Sequence class Document(ABC): @classmethod @abstractmethod def from_paragraphs(cls, paragraphs: Sequence["Paragraph"]): pass class LegalDocument(Document): @classmethod def from_paragraphs(cls, paragraphs: Sequence["LegalParagraph"]): return # some logic here... class AcademicDocument(Document): @classmethod def from_paragraphs(cls, paragraphs: Sequence["AcademicParagraph"]): return # some logic here... class Paragraph: text: str class LegalParagraph(Paragraph): pass class AcademicParagraph(Paragraph): pass However, Pyright complains about this because from_paragraphs on the derived classes violates the Liskov substitution principle. How do I make sure that each derived class implements from_paragraphs for some kind of Paragraph? | Turns out this can be solved using generics: from abc import ABC, abstractmethod from typing import Generic, Sequence, TypeVar ParagraphType = TypeVar("ParagraphType", bound="Paragraph") class Document(ABC, Generic[ParagraphType]): @classmethod @abstractmethod def from_paragraphs(cls, paragraphs: Sequence[ParagraphType]): pass class LegalDocument(Document["LegalParagraph"]): @classmethod def from_paragraphs(cls, paragraphs): return # some logic here... class AcademicDocument(Document["AcademicParagraph"]): @classmethod def from_paragraphs(cls, paragraphs): return # some logic here... class Paragraph: text: str class LegalParagraph(Paragraph): pass class AcademicParagraph(Paragraph): pass Saying bound="Paragraph" guarantees that the ParagraphType represents a (subclass of) Paragraph, but the derived classes are not expected to implement from_paragraphs for all paragraph types, just for the one they choose. The type checker also automatically figures out the type of the argument paragraphs for LegalDocument.from_paragraphs, saving me some work :) | 7 | 1 |
77,892,148 | 2024-1-27 | https://stackoverflow.com/questions/77892148/python-concurrent-futures-wait-job-submission-order-not-preserved | Does python's concurrent.futures.wait() preserve the order of job submission? I submitted two jobs to ThreadPoolExecutor as follows: import concurrent.futures import time, random def mult(n): time.sleep(random.randrange(1,5)) return f"mult: {n * 2}" def divide(n): time.sleep(random.randrange(1,5)) return f"divide: {n // 2}" with concurrent.futures.ThreadPoolExecutor() as executor: mult_future = executor.submit(mult, 200) divide_future = executor.submit(divide, 200) # wait for both task to complete mult_task, divide_task = concurrent.futures.wait( [mult_future, divide_future], return_when=concurrent.futures.ALL_COMPLETED, ).done mult_result = mult_task.result() divide_result = divide_task.result() print(mult_result) print(divide_result) Sometimes I see divide: 50 mult: 400 and sometimes, mult: 400 divide: 50 shouldn't mult_task, divide_task always map to mult_future, divide_future ? python --version >> Python 3.8.16 | According to the doc that the function wait(): Returns a named 2-tuple of sets. By definition, sets are unordered. That means you cannot guarantee the order. Since you already have mult_future and divide_future, you can use them to guarantee the order. There is no need to call wait either. import concurrent.futures import random import time def mult(n): time.sleep(random.randrange(1, 5)) return f"mult: {n * 2}" def divide(n): time.sleep(random.randrange(1, 5)) return f"divide: {n // 2}" with concurrent.futures.ThreadPoolExecutor() as executor: mult_future = executor.submit(mult, 200) divide_future = executor.submit(divide, 200) # Print in order: mult div print(mult_future.result()) print(divide_future.result()) | 2 | 2 |
77,891,453 | 2024-1-27 | https://stackoverflow.com/questions/77891453/regex-string-parsing-pattern-starts-with-but-can-end-with | I am attempting to parse strings using Regex. The strings look like: Stack;O&verflow;i%s;the;best! I want to parse it to: Stack&verflow%sbest! So when we see a ; remove everything up until we see one of the following characters: [;,)%&@] (or replace with empty space ""). I am using re package in Python: string = re.sub('^[^-].*[)/]$', '', string) This is what I have right now: ^[^;].*[;,)%&@] Which as I understand it says: starting at the pattern with ;, read everything that matches in between ; and [;,)%&@] characters But the result is wrong and looks like: Stack;O&verflow;i%s;the; Demo here. What am I missing? EDIT: @InSync pointed out that there is a discrepancy if ; is in the end characters as well. As worded above, it should result inStack&verflow%s**;**best! but instead I want to see Stack&verflow%sbest!. Perhaps two regex lines are appropriate here, I am not sure; if you can get to Stack&verflow%s**;**best! then the rest is just simple replacement of all the remaining ;. EDIT2: The code I found that works was import re def remove_semicolons(name): name = re.sub(';.*?(?=[;,)%&@])', '', name) name = re.sub(';','',name) return name remove_semicolons('Stack;O&verflow;i%s;the;best!') Or if you feel like causing a headache to the next programmer who looks at your code: import re semicolon_string = 'Stack;O&verflow;i%s;the;best!' cleaned_string = re.sub(';','',re.sub(';.*?(?=[;,)%&@])', '', semicolon_string)) | Alright in my answer I assume you have a typo in your expected output. Remove everything starting with ; up to (;,)%&@) and so Stack ;O &verflow ;i %s ;the ;best! would become Stack&verflow%s;best! for the regex you want to start with ; then anything after 0 or more times .* (if you require a character change to .+) followed by your ending characters [;,)%&@]. To exclude them you need to add a positive lookahead ?(?=[;,)%&@]). This as the name suggests looks ahead one character and tries to match it to your sequence For a final regex: ;.*?(?=[;,)%&@]) or if you require characters in between: ;.+?(?=[;,)%&@]) | 2 | 2 |
77,890,197 | 2024-1-27 | https://stackoverflow.com/questions/77890197/fill-nulls-in-python-polars-lazyframe-by-groups-conditional-on-the-number-of-un | I have a large (~300M rows x 44 cols) dataframe and I need to fill in null values in certain ways depending on the characteristics of each group. For example, say we have lf = pl.LazyFrame( {'group':(1,1,1,2,2,2,3,3,3), 'val':('yes', None, 'no', '2', '2', '2', 'answer', None, 'answer') } ) βββββββββ¬βββββββββ β group β val β β --- β --- β β i64 β str β βββββββββͺβββββββββ‘ β 1 β yes β β 1 β null β β 1 β no β β 2 β 2 β β 2 β 2 β β 2 β 2 β β 3 β answer β β 3 β null β β 3 β answer β βββββββββ΄βββββββββ I want to fill in nulls if and only if the group contains a single non-null unique value in the other cells, since in my context that's the expectation of the data and the presense of more than one unique value (or all nulls) in the group signals another issue that will be handled differently. I'm able to fill null values for each group with the following: filled_lf = ( lf .with_columns( pl.col('val') .fill_null(pl.col('val').unique().first().over('group')).alias('filled_val') ) ) However, for one, it seems that pl.col('val').unique() includes 'null' as one of the values, and the ordering is stochastic so choosing the first value on the list has inconsistent results. Secondly, it doesn't include the condition I need. Desired result: βββββββββ¬βββββββββ¬βββββββββββββ β group β val β filled_val β β --- β --- β --- β β i64 β str β str β βββββββββͺβββββββββͺβββββββββββββ‘ β 1 β yes β yes β β 1 β null β null β β 1 β no β no β β 2 β 2 β 2 β β 2 β 2 β 2 β β 2 β 2 β 2 β β 3 β answer β answer β β 3 β null β answer β β 3 β answer β answer β βββββββββ΄βββββββββ΄βββββββββββββ Pandas 3.12 Polars 0.20.1 Thanks in advance for your advice! | You can add: .drop_nulls() .unique() with maintain_order=True when/then/otherwise to implement the conditional count/len logic unique = pl.col("val").drop_nulls().unique(maintain_order=True) df.with_columns( pl.when(unique.len().over("group") == 1) .then(pl.col("val").fill_null(unique.first().over("group"))) .otherwise(pl.col("val")) .alias("filled") ) shape: (9, 3) βββββββββ¬βββββββββ¬βββββββββ β group β val β filled β β --- β --- β --- β β i64 β str β str β βββββββββͺβββββββββͺβββββββββ‘ β 1 β yes β yes β β 1 β null β null β β 1 β no β no β β 2 β 2 β 2 β β 2 β 2 β 2 β β 2 β 2 β 2 β β 3 β answer β answer β β 3 β null β answer β β 3 β answer β answer β βββββββββ΄βββββββββ΄βββββββββ | 3 | 2 |
77,890,493 | 2024-1-27 | https://stackoverflow.com/questions/77890493/using-numba-for-scipy-fsolve-but-get-error | I want use numba for scipy.fsolve: from scipy.optimize import fsolve from numba import njit @njit def FUN12(): XGUESS=[8.0,7.0] X =[0.0,0.0] try: X = fsolve(FCN3, XGUESS) except: print("error") return X @njit def FCN3(X): F=[0.0,0.0] F[0]=4.*pow(X[0],2)-3.*pow(X[1],1)-7 F[1] = 5.*X[0] -2. * pow(X[1] , 2)+8 return F FUN12() I got this error for my code: numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Untyped global name 'fsolve': Cannot determine Numba type of <class 'function'> | Numba compatible code is only some subset of Python and Numpy functions. You should not run fsolve inside numba function. You should remove njit decorator from FUN12 function. You can keep it for FCN3. from scipy.optimize import fsolve from numba import njit # @njit remove this line, def FUN12(): XGUESS=[8.0,7.0] X =[0.0,0.0] try: X = fsolve(FCN3, XGUESS) except: print("error") return X @njit def FCN3(X): F=[0.0,0.0] F[0]=4.*pow(X[0],2)-3.*pow(X[1],1)-7 F[1] = 5.*X[0] -2. * pow(X[1] , 2)+8 return F FUN12() | 2 | 1 |
77,889,046 | 2024-1-26 | https://stackoverflow.com/questions/77889046/polars-read-csv-vs-polars-read-csv-batched-vs-polars-scan-csv | What is the difference between polars.read_csv vs polars.read_csv_batched vs polars.scan_csv ? polars.read_csv looks equivalent to pandas.read_csv as they have the same name. Which one to use in which scenario and how they are similar/different to pandas.read_csv? | polars.read_csv_batched is pretty equivalent to pandas.read_csv(iterator=True). polars.scan_csv doesn't do anything until you perform an operation on the dataframe like dask.dataframe.read_csv (lazy loading). Scenarios: I use pandas.read_csv when my data is messy or complex in structure and the data is not too large I use polars.read_csv when my data file is very large (> 10GB). This is an answer based solely on my (humble) opinion. | 4 | 1 |
77,888,782 | 2024-1-26 | https://stackoverflow.com/questions/77888782/wide-to-long-amid-merge | Hi I am trying to merge two dataset in the following way: df1=pd.DataFrame({'company name':['A','B','C'], 'analyst 1 name':['Tom','Mike',np.nan], 'analyst 2 name':[np.nan,'Alice',np.nan], 'analyst 3 name':['Jane','Steve','Alex']}) df2=pd.DataFrame({'company name':['A','B','C'], 'score 1':[3,5,np.nan], 'score 2':[np.nan,1,np.nan], 'score 3':[6,np.nan,11]}) df_desire=pd.DataFrame({'company name':['A','A','B','B','B','C'], 'analyst':['Tom','Jane','Mike','Alice','Steve','Alex'], 'score':[3,6,5,1,np.nan,11]}) Basically, df1 contains analyst name and df2 contains scores that analysts assign. I am trying to merge two into df_desire. The way to read two tables is: for company A, two person cover it, which are Tom and Jane, who assign 3 and 6 respectively. Noted that even though Steve covers company B but I deliberately assign the score to be NA for robustness purpose. What I have done is : pd.concat([df1.melt(id_vars='company name',value_vars=['analyst 1 name','analyst 2 name','analyst 3 name']),\ df2.melt(id_vars='company name',value_vars=['score 1','score 2','score 3'])],axis=1) I am looking for a more elegant solution. | Try: x = ( df1.set_index("company name") .stack(dropna=False) .reset_index(name="name") .drop(columns="company name") ) y = df2.set_index("company name").stack(dropna=False).reset_index(name="score") print( pd.concat([x, y], axis=1)[["company name", "name", "score"]] .dropna(subset=["name", "score"], how="all") .reset_index(drop=True) ) Prints: company name name score 0 A Tom 3.0 1 A Jane 6.0 2 B Mike 5.0 3 B Alice 1.0 4 B Steve NaN 5 C Alex 11.0 | 4 | 3 |
77,884,985 | 2024-1-26 | https://stackoverflow.com/questions/77884985/sql-injection-in-duckdb-queries-on-pandas-dataframes | In a project I am working with duckdb to perform some queries on dataframes. For one of the queries, I have some user-input that I need to add to the query. That is why I am wondering if SQL-Injection is possible in this case. Is there a way a user could harm the application or the system through the input? And if so, how could I prevent this case? It seems that duckdb has no PreparedStatement for queries on dataframes. I already looked up in the documentation (https://duckdb.org/docs/api/python/overview.html) but couldn't find anything useful. The method duckdb.execute(query, parameters) only seems to work on databases with a real sql-connection and not on dataframes. There is another question on stackoverflow (Syntax for Duckdb > Python SQL with Parameter\Variable) about this topic but the answer only works on real sql-connections and the version with f-strings seems insecure to me. Here is a small code sample to show what I mean: import duckdb import pandas as pd df_data = pd.DataFrame({'id': [1, 2, 3, 4], 'student': ['student_a', 'student_a', 'student_b', 'student_c']}) user_input = 3 # fetch some user_input here # How to prevent sql-injection, if its even possible in this case? result = duckdb.query("SELECT * FROM df_data WHERE id={}".format(user_input)) So how would you approach this problem? Is sql-injection even possible? Thanks for your help and feel free to ask for more details, if you need some more information! EDIT: Fixed a syntax error in the code | The method duckdb.execute(query, parameters) only seems to work on databases with a real sql-connection and not on dataframes. It seems it's possible: >>> duckdb.execute("""SELECT * FROM df_data WHERE id=?""", (user_input,)).df() id student 0 3 student_b | 2 | 3 |
77,886,016 | 2024-1-26 | https://stackoverflow.com/questions/77886016/label-a-list-following-the-unique-elements-appearing-in-it | Given a list of strings such as: foo = ['A', 'A', 'B', 'A', 'B', 'C', 'C', 'A', 'B', 'C', 'A'] How can we label them such that the output would be: output = ['A1', 'A2', 'B1', 'A3', 'B2', 'C1', 'C2', 'A4', 'B3', 'C3', 'A5'] (keeping the order of the original list) In the following case there are only 3 unique variables to look at, so the first thing I tried was looking at the unique elements: import numpy as np np.unique(foo) Output = ['A', 'B', 'C'] But then I get stuck when trying to find the proper loop to reach the desired output. | Using pure Python, take advantage of a dictionary to count the values: foo = ['A', 'A', 'B', 'A', 'B', 'C', 'C', 'A', 'B', 'C', 'A'] d = {} out = [] for val in foo: d[val] = d.get(val, 0)+1 out.append(f'{val}{d[val]}') If you can use pandas: import pandas as pd s = pd.Series(foo) out = s.add(s.groupby(s).cumcount().add(1).astype(str)).tolist() Output: ['A1', 'A2', 'B1', 'A3', 'B2', 'C1', 'C2', 'A4', 'B3', 'C3', 'A5'] | 2 | 3 |
77,883,233 | 2024-1-25 | https://stackoverflow.com/questions/77883233/cannot-import-langchain-vectorstores-faiss-only-langchain-community-vectorstore | I am in the process of building a RAG like the one in this video. However, I cannot import FAISS like this. from langchain.vectorstores import FAISS LangChainDeprecationWarning: Importing vector stores from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead: `from langchain_community.vectorstores import faiss`. However, it is possible to import faiss: from langchain_community.vectorstores import faiss But with this it is not possible to call faiss.from_texts(). vectorstore = faiss.from_texts(text=text_chunks, embeddings=embeddings) AttributeError: module 'langchain_community.vectorstores.faiss' has no attribute 'from_texts' Is it no longer possible to call .from_texts() with the current version? I didn't find anything about this in the documentation. Python=3.10.13 | The solution for me was to import the FAISS class directly from the langchain.vectorstores.faiss module and then use the from_documents method. from langchain_community.vectorstores.faiss import FAISS I had been importing the faiss module itself, rather than the FAISS class from the langchain.vectorstores.faiss module. The from_documents method is a class method of the FAISS class, not a function of the faiss module. | 4 | 2 |
77,882,709 | 2024-1-25 | https://stackoverflow.com/questions/77882709/update-all-values-within-each-subgroup-in-pandas-dataframe | I have a pandas dataframe which has 4 columns of note, ID, DATE, PRIMARY_INDICATOR, PHONE Each ID can have multiple rows in the table. The rows are guaranteed to be sorted in descending date order within each ID subgroup. for example: >>> df.head(10) ID DATE PRIMARY_INDICATOR PHONE 0 123 20230125 1 8071234 1 123 20230124 0 8079999 2 999 20230125 1 8074312 3 999 20230120 1 9087654 4 999 20230119 0 1235678 5 765 20230125 0 9990000 6 765 20230125 0 9999999 My goal is: If an ID group has a single primary_indicator as 1, nothing needs to be done (df.loc[df['ID'] == 123] in the above table is an example of this group) If an ID group has no primary indicators as 1, nothing needs to be done (df.loc[df['ID'] == 765] in the above table is an example of this group) If an ID group has more than 1 primary indicator as 1, set only the one with the most recent date to 1 and the rest to 0. For the group formed by df.loc[df['ID'] == 999] in the above example, the result would look like >>> df.head(10) ID DATE PRIMARY_INDICATOR PHONE 2 999 20230125 1 8074312 3 999 20230120 0 9087654 4 999 20230119 0 1235678 I have around 280000 unique IDs. I have tried adding the IDs to a set, then I pop from the set, create a subset of the dataframe via loc with the ID, and then iterate using itterrows and a bool flag. This approach works but is really slow. The only meaningfully slow part of the query was creating the subset dataframe for each pop'd record, idDataframe = df.loc[df['ID'] == currentId]. It was something like .016475 seconds per id, and the entire script took 80 minutes. >>> import pandas as pd >>> >>> pd.set_option('display.max_columns', None) >>> data = {'ID': [123, 123, 999, 999, 999, 765, 765], ... 'DATE': ['20230125', '20230124', '20230125', '20230120', '20230119', '20230125', '20230125'], ... 'PRIMARY_INDICATOR': [1, 0, 1, 1, 0, 0, 0], ... 'PHONE' : [8071234, 8079999, 8074312, 9087654, 1235678, 9990000, 9999999]} >>> df = pd.DataFrame.from_dict(data) >>> df.head(10) ID DATE PRIMARY_INDICATOR PHONE 0 123 20230125 1 8071234 1 123 20230124 0 8079999 2 999 20230125 1 8074312 3 999 20230120 1 9087654 4 999 20230125 0 1235678 5 765 20230125 0 9990000 6 765 20230125 0 9999999 import pandas as pd data = {'ID': [123, 123, 999, 999, 999, 765, 765], 'DATE': ['20230125', '20230124', '20230125', '20230120', '20230125', '20230125', '20230125'], 'PRIMARY_INDICATOR': [1, 0, 1, 1, 0, 0, 0], 'PHONE' : [8071234, 8079999, 8074312, 9087654, 1235678, 9990000, 9999999]} df = pd.DataFrame.from_dict(data) idSet = set(df.ID.unique()) while idSet : # pop one id from the set currentId = idSet .pop() # get a subset of the original dataframe which only shows the pop'd ids records idDataframe = df.loc[df['ID'] == currentId] idDataframe.drop_duplicates() # create the output row from each row in the subframe primaryPhoneIndicated = False for index, row in idDataframe.iterrows(): if not primaryPhoneIndicated and row['PRIMARY_INDICATOR'] == 1: PRIMARY_INDICATOR = 1 primaryPhoneIndicated = True else: PRIMARY_INDICATOR = 0 print([row['ID'], row['DATE'], PRIMARY_INDICATOR, row['PHONE']]) Is there a pandas-y way to do this, without the need to create a dataframe for each ID to apply the logic? 
| It can be done without groupby: idx = (df.loc[df['PRIMARY_INDICATOR'].eq(1)] .duplicated('ID') .loc[lambda x: x].index) df.loc[idx, 'PRIMARY_INDICATOR'] = 0 Output: >>> df ID DATE PRIMARY_INDICATOR PHONE 0 123 20230125 1 8071234 1 123 20230124 0 8079999 2 999 20230125 1 8074312 3 999 20230120 0 9087654 4 999 20230119 0 1235678 5 765 20230125 0 9990000 6 765 20230125 0 9999999 Step by step: # Select only rows where PRIMARY_INDICATOR==1 >>> out = df.loc[df['PRIMARY_INDICATOR'].eq(1)] ID DATE PRIMARY_INDICATOR PHONE 0 123 20230125 1 8071234 2 999 20230125 1 8074312 3 999 20230120 1 9087654 # Mark duplicated all oldest dates >>> out = out.duplicated('ID') 0 False 2 False 3 True dtype: bool # Extract index >>> idx = out.loc[lambda x: x].index Index([3], dtype='int64') | 2 | 2 |
77,882,214 | 2024-1-25 | https://stackoverflow.com/questions/77882214/airflow-dag-not-able-to-parse-ds | I have an Airflow DAG and I use {{ ds }} to get the logical date. As per the Airflow documentation, the template {{ ds }} returns the logical date as a string in YYYY-MM-DD format. So I am using the following code to manipulate the dates (datetime.strptime('{{ dag_run.logical_date|ds }}', '%Y-%m-%d') - timedelta(3)).strftime('%Y-%m-%d') but I am getting the following error Broken DAG: [/usr/local/airflow/dags/custom_dags/feature_store_daily.py] Traceback (most recent call last): File "/usr/lib/python3.10/_strptime.py", line 568, in _strptime_datetime tt, fraction, gmtoff_fraction = _strptime(data_string, format) File "/usr/lib/python3.10/_strptime.py", line 349, in _strptime raise ValueError("time data %r does not match format %r" % ValueError: time data '{{dag_run.logical_date|ds}}' does not match format '%Y-%m-%d' I am not able to figure out why I am getting this error. Please advise. | At first I thought the problem could be the lack of spaces before and after the "|" operator, but I've tested that and it works fine. In this case, the main problem is that you're not using the Jinja template where it is effectively parsed. Because of this, the error is that you don't have a date to format, but the string as it is. task_xx = SomeOperator( ... a="{{ dag_run.logical_date|ds }}", b="{{dag_run.logical_date | ds}}", c="{{ dag_run.logical_date | ds }}" ) And then, in the SomeOperator these attributes must be in template_fields so they can be parsed: class SomeOperator(BaseOperator): template_fields = ['a', 'b', 'c'] .... Now, when you use these attributes, you'll get what you want. I've tested this and it works fine. Or, you can also use Jinja templates with fields that already accept this. | 2 | 2 |
77,881,942 | 2024-1-25 | https://stackoverflow.com/questions/77881942/whats-the-polars-equivalent-to-the-pandas-iloc-method | I'm looking for the recommended way to select an individual row of a polars.DataFrame by row number: something largely equivalent to pandas.DataFrame's .iloc[[n]] method for a given integer n. For polars imported as pl and a polars DataFrame df, my current approach would be: # for example n = 3 # create row index, filter for individual row, drop the row index. new_df = ( df.with_row_index() .filter(pl.col('index') == n) .select(pl.exclude('index')) ) I'm migrating from Pandas, and I've read the Pandas-to-Polars migration guide, but a slick solution to this specific case wasn't addressed there. Edit: to clarify, I am looking for an approach that returns a polars.DataFrame object for the chosen row. Does anyone have something slicker? | This is a very good sheet by @Liam Brannigan; credit to them: https://www.rhosignal.com/posts/polars-pandas-cheatsheet/ (A screenshot excerpt from the sheet appeared here.) You can find information on filtering rows with iloc and its polars equivalent, along with other comparisons, in the sheet. | 10 | 21 |
77,880,757 | 2024-1-25 | https://stackoverflow.com/questions/77880757/polars-filter-dataframe-with-multilple-conditions | I've got this pandas code: df['date_col'] = pd.to_datetime(df['date_col'], format='%Y-%m-%d') row['date_col'] = pd.to_datetime(row['date_col'], format='%Y-%m-%d') df = df[(df['groupby_col'] == row['groupby_col']) & (row['date_col'] - df['date_col'] <= timedelta(days = 10)) & (row['date_col'] - df['date_col'] > timedelta(days = 0))] row['mean_col]' = df['price_col'].mean() The name row comes from the fact that this function was applied by a lambda construct. I'm subsetting df with 2 types of conditions: A condition on values equality on a column named "groupby_col", Multiple onditions on time ranges based on the "date_col" column that features timestamps. I'm pretty sure that filter is the correct module to use: df.filter(condition_1 & condition_2) but i'm struggling to write the conditions. In order to embed condition 1 do i have to nest a filter condition or a when is the correct choice? How do i translate the timedelta condition? How do i replicate the lambda approach? | It's a bit hard to understand your example without test data. But if I try to create some sample data import polars as pl import datetime df = pl.DataFrame({ "date_col": ['2023-01-01','2023-01-02', '2023-01-03'], "groupby_col": [1,2,3], }) row = pl.DataFrame({ "date_col": ['2023-01-07','2023-01-08', '2023-01-25'], "groupby_col": [1,2,3], }) df = df.with_columns(pl.col('date_col').str.to_datetime().cast(pl.Date)) row = row.with_columns(pl.col('date_col').str.to_datetime().cast(pl.Date)) then you can filter on multiple conditions by joining two dataframes first and then filtering: (df .join(row, on=["groupby_col"]) .filter( pl.col("date_col_right") - pl.col("date_col") >= datetime.timedelta(days=0), pl.col("date_col_right") - pl.col("date_col") < datetime.timedelta(days=10), ).drop('date_col_right') ) shape: (2, 2) ββββββββββββββ¬ββββββββββββββ β date_col β groupby_col β β --- β --- β β date β i64 β ββββββββββββββͺββββββββββββββ‘ β 2023-01-01 β 1 β β 2023-01-02 β 2 β ββββββββββββββ΄ββββββββββββββ | 4 | 2 |
77,873,328 | 2024-1-24 | https://stackoverflow.com/questions/77873328/how-can-i-substitute-a-variable-in-format-command-of-python | My code finds an unknown (0<alpha<1), its precision depends on another parameter (dm). Finally, alpha is determined with e.g. 20 digits but depending on dm, alpha should be kept until definite (which is known during execution=njj) digit. At the end, I need to substitute njj in format command as following: njj= 8 ;alpha= 0.0004258819580078126 # I need this alpha= 0.00042588 print('{:.njj f}'.format(alpha)) But it doesn't work! It shows: ValueError: Format specifier missing precision | this is quite a good question. i find that this works nicely: njj= 8 alpha= 0.0004258819580078126 # I need this alpha= 0.00042588 print(f'{alpha:.{njj}f}') which gives this: 0.00042588 but also, you could do this: print(round(alpha, njj)) update: As per the comments below from @Mark, the round() function produces a different number of decimal places compared to the formatted f-string method when there are trailing Zero's. So the user would need to decide which one they wanted to see. | 3 | 1 |
77,878,672 | 2024-1-25 | https://stackoverflow.com/questions/77878672/how-can-i-get-the-seed-of-a-numpy-generator | I am using rng = np.random.default_rng(seed=None) for testing purposes following documentation. My program is a scientific code, so it is good to have some random values to test it, but if I found a problem with the code's result, I would like to get the seed back and try again to find the problem. Is there any way to do that? Things like this question does not seem to work with a Generator: AttributeError: 'numpy.random._generator.Generator' object has no attribute 'get_state' Of course I can always try a set of predefined seeds, but that is not what I want. | Use __getstate__, __setstate__ and forget about seed... it's just a parameter for default_rng! # old instance rng = np.random.default_rng(seed=None) s_old = rng.__getstate__() # new instance rng = np.random.default_rng(seed=123123) # seed=anything, also None s_new = rng.__getstate__() print(s_old == s_new) #False # update "seed" rng.__setstate__(s_old) # pass the old "seed" s_recovered = rng.__getstate__() print(s_old == s_recovered) #True Then when you call methods to generate random numbers you will be able to replicate the randomness. Alternative of the dunder methods # get the state s_old = rng.bit_generator.state and # set the state of the new instance of rng rng.bit_generator.state = s_old | 4 | 2 |
77,875,969 | 2024-1-24 | https://stackoverflow.com/questions/77875969/azure-machine-learning-pin-python-version-on-notebook | Looking for a way to use Python 3.10 on a compute resource associated to Azure ML Studio It appears 3.10 is available with ML Studio and we select 3.10 with SDK2 from the kernel selection dropdown but when running a "python --version" on the terminal, it says v 3.8.5. Is there a way to get Azure ML Studio to use a different version of Python? We've tried the environment option but had no luck. Looks like the kernel selection option does not match what the OS is running, perhaps it's designed to work this way? | That is because of the selected conda environment. There will be multiple environments created when you create a compute instance. To see, run the conda command below: %%sh conda info --env This lists the environments present. However, if you run this in a notebook, you cannot see which environment you are in. So, execute these commands in the terminal. Open the compute and select Terminal. There, you will see the default is Python 3.8. You can also see this with the commands below: conda info --env or conda env list or echo $CONDA_DEFAULT_ENV Output: Whenever you select the kernel, it doesn't switch the environment in the terminal; that only changes in the notebook Python code. Whatever command you run in a subprocess or using %%sh, it takes the default Python environment. import sys, platform print(sys.version) print(platform.python_version()) | 3 | 5 |
77,878,728 | 2024-1-25 | https://stackoverflow.com/questions/77878728/polars-dataframe-overlapping-groups | I am currently converting code from pandas to polars as I really like the api. This question is a more generally question to a previous question of mine (see here) I have the following dataframe # Dummy data df = pl.DataFrame({ "Buy_Signal": [1, 0, 1, 0, 1, 0, 0], "Returns": [0.01, 0.02, 0.03, 0.02, 0.01, 0.00, -0.01], }) I want to ultimately do aggregations on column Returns conditional on different intervals - which are given by column Buy_Signal. In the above case the length is from each 1 to the end of the dataframe. The resulting dataframe should therefore look like this shape: (15, 2) βββββββββ¬ββββββββββ β group β Returns β β --- β --- β β i64 β f64 β βββββββββͺββββββββββ‘ β 1 β 0.01 β β 1 β 0.02 β β 1 β 0.03 β β 1 β 0.02 β β 1 β 0.01 β β 1 β 0.0 β β 1 β -0.01 β β 2 β 0.03 β β 2 β 0.02 β β 2 β 0.01 β β 2 β 0.0 β β 2 β -0.01 β β 3 β 0.01 β β 3 β 0.0 β β 3 β -0.01 β βββββββββ΄ββββββββββ One approach posted as an answer to my previous question is the following: # Build overlapping group index idx = df.select(index= pl.when(pl.col("Buy_Signal") == 1) .then(pl.int_ranges(pl.int_range(pl.len()), pl.len() )) ).explode(pl.col("index")).drop_nulls().cast(pl.UInt32) # Join index with original data df = (df.with_row_index() .join(idx, on="index") .with_columns(group = (pl.col("index") == pl.col("index").max()) .shift().cum_sum().backward_fill() + 1) .select(["group", "Returns"]) ) Question: Are there other solutions to this problem that are both readable and fast? My actual problem contains much larger datasets. Thanks | For completeness, here is an alternative solution that doesnt rely on experimental functionality. ( df .with_columns( pl.col("Buy_Signal").cum_sum().alias("group") ) .with_columns( pl.int_ranges(pl.col("group").min(), pl.col("group")+1) ) .explode("group") .sort("group") ) Output. shape: (15, 3) ββββββββββββββ¬ββββββββββ¬ββββββββ β Buy_Signal β Returns β group β β --- β --- β --- β β i64 β f64 β i64 β ββββββββββββββͺββββββββββͺββββββββ‘ β 1 β 0.01 β 1 β β 0 β 0.02 β 1 β β 1 β 0.03 β 1 β β 0 β 0.02 β 1 β β 1 β 0.01 β 1 β β β¦ β β¦ β β¦ β β 0 β 0.0 β 2 β β 0 β -0.01 β 2 β β 1 β 0.01 β 3 β β 0 β 0.0 β 3 β β 0 β -0.01 β 3 β ββββββββββββββ΄ββββββββββ΄ββββββββ | 2 | 2 |
77,876,224 | 2024-1-24 | https://stackoverflow.com/questions/77876224/calculating-inverse-laplace-transform-using-python-or-matlab | I'm trying to simulate simple closed-loop system with PID controller in Python or MATLAB. In both cases I have problems with calculating time domain response of the system using inverse Laplace transform. To ilustrate the problem better, I'm following the names on the picture below: Depending on the transfer function of the system the result either calculates only in MATLAB or doesn't calculate at all. Here is what I have: syms s t k_p T_i T_d D_d v real; % controller transfer function C_s = k_p * (1 + 1 / (T_i * s) + (T_d * s) / (D_d * s + 1)); % plant transfer function G_s = 1 / ((s + 1) * (s + 2)); % closed loop transfer function W_s = C_s * G_s / (1 + C_s * G_s); % input signal R_t = 4 * heaviside(t - 2) * (1 - heaviside(t - 5)) + ... 6 * heaviside(t - 5) * (1 - heaviside(t - 6)) + ... 2 * heaviside(t - 6); R_s = laplace(R_t, t, s, noconds=True) % PID-controller system response C_res_s = C_s * R_s; w_t = ilaplace(C_res_s , s, t) % Closed loop system response Y_s = W_s * R_s y_t_closed_loop = ilaplace(Y_s, s, t) % Parameters k_p_val = 15; T_i_val = 5; T_d_val = 2; D_d_val = 0.1; time = linspace(0, 10, 1000); % Convert symbolic expressions to MATLAB functions w_t_param = matlabFunction(w_t, 'Vars', {t, k_p, T_i, T_d, D_d}); matlabFunction(y_t_closed_loop, 'File', 'y_t_closed_loop_func', 'Vars', {t, k_p, T_i, T_d, D_d}); w_t_param_closed_loop = str2func('y_t_closed_loop_func'); R_t_lam = matlabFunction(R_t, 'Vars', {t}); % Calculate system responses solution = zeros(size(time)); solution_closed_loop = zeros(size(time)); for i = 1:length(time) solution(i) = w_t_param(time(i), k_p_val, T_i_val, T_d_val, D_d_val); solution_closed_loop(i) = w_t_param_closed_loop(time(i), k_p_val, T_i_val, T_d_val, D_d_val); end For G_s = 1 / ((s + 1) * (s + 2)) I have (seems fine): For G_s = 1 / ((s + 1)) I have (doesn't seem fine): For G_s = 1 / ((s + 1) * (s + 2) * (s + 3)) I have nothing since the inverse laplace transform never calculates. Here is the same code in Python: s, t, k_p, T_i, T_d, D_d, v = symbols("s t k_p T_i T_d D_d v", real=True) #controller transfer function C_s = k_p * (1 + 1 / (T_i * s) + (T_d * s) / (D_d * s + 1)) #plant transfer function G_s = 1 / ((s + 1) * (s + 2)) #closed loop transfer function W_s = C_s * G_s / (1 + C_s * G_s) #input signal R_t = 4 * Heaviside(t - 2) * (1 - Heaviside(t - 5)) + 6 * Heaviside(t - 5) * (1 - Heaviside(t - 6)) + 2 * Heaviside(t - 6) R_s = laplace_transform(R_t, t, s, noconds=True) # PID-controller system response C_res_s = C_s * R_s w_t = inverse_laplace_transform(C_res_s , s, t) # Closed loop system response Y_s = W_s * R_s y_t_closed_loop = inverse_laplace_transform(Y_s, s, t) Python code falls into endless loop and never finds a solution to y_t_closed_loop. Additionaly, for some reason the MATLAB code doesn't run on my machine with Windows. It only works on linux. Am I missing something? Is there any way I can be sure that inverse laplace transform is calculated? The transfer funstions seem pretty normal to me and yet I have a feeling that something is wrong. Any help very appreciated!! 
EDIT: I also tried using sympy and control library in python, hoping it will change something: import sympy import numpy import matplotlib.pyplot as plt from tbcontrol.loops import feedback s = sympy.Symbol('s') t = sympy.Symbol('t', positive=True) tau = sympy.Symbol('tau', positive=True) K_p = sympy.Symbol('K_p') T_i = sympy.Symbol('T_i') T_d = sympy.Symbol('T_d') D_d = sympy.Symbol('D_d') G_p = 1/(s+1) G_c = K_p * (1 + 1 / (T_i * s) + (T_d * s) / (D_d * s + 1)) G_OL = G_p*G_c G_CL = feedback(G_OL, 1).cancel() general_timeresponse = sympy.inverse_laplace_transform(sympy.simplify(G_CL/s), s, t) As before, general_timeresponse never executes. | NOTE: I can't share my code, but I can give you the steps to reproduce it (which requires a little bit of work). The advantage of this procedure, which uses sympy's inverse_laplace_transform, is that you don't have to deal with Pade approximation of time delays. The main disadvantage is that you need to spend some time coding and testing it. Once you have your output, Y_s, insert the numerical values: sd = { k_p: 15, T_i: 5, T_d: 2, D_d: 0.1 } tmp = Y_s.subs(sd).nsimplify().simplify().expand().together() tmp Note the last time delay, exp(-6*s), it is rendered on the numerator, but it is actually located on the denominator of the expression. Extract the numerator and denominator. Time delays should be all on the numerator: n, d = fraction(tmp) n = (n * exp(-6*s)).expand() d = d / exp(6*s) display(n, d) Notice the numerator is an addition. We can apply linearity and works on the single addends: args = [a / d for a in n.args] display(args) This is where you have to put in your work. We need to create a function, func(expr), that does the following steps: i. Given an addend, func(a), look if a contains a time delay. If it does, remove it. (You can use delay = a.find(exp).pop().... ii. Extract the numerator and denominator: n, d = fraction(a). iii. Extract the coefficients from the numerator and denominator. For example, for the numerator: cn = Poly(n, s).all_coeffs(). iv. Use the residue function from scipy.signal, which is going to create a numerical partial fraction expansion. v. Build a new symbolic expression from the residues, new_expr. This is likely the hardest step. vi. Compute the inverse laplace transform of new_expr. Be sure to use new_expr.nsimplify(). If you don't, the computation might takes much longer. vii. If a time delay was found on point i., apply the time delay with out.subs(t, t+time_delay), where time_delay=delay.args[0]/s. viii. Return the result. Loops over the addends and run the function created in step 3. outs = [func(a) for a in args]. Create the output signal, and plot it (I'm using SymPy Plotting Backed: from spb import plot out = sum(outs) plot((out, "closed loop res"), (R_t, "R_t"), (t, -1, 10)) | 3 | 1 |
77,873,084 | 2024-1-24 | https://stackoverflow.com/questions/77873084/calling-constructors-of-both-parents-in-multiple-inheritance-in-python-general | I'm trying to figure out the multiple inheritance in Python, but all articles I find are limited to simple cases. Let's consider the following example: class Vehicle: def __init__(self, name: str) -> None: self.name = name print(f'Creating a Vehicle: {name}') def __del__(self): print(f'Deleting a Vehicle: {self.name}') class Car(Vehicle): def __init__(self, name: str, n_wheels: int) -> None: super().__init__(name) self.wheels = n_wheels print(f'Creating a Car: {name}') def __del__(self): print(f'Deleting a Car: {self.name}') class Boat(Vehicle): def __init__(self, name: str, n_props: int) -> None: super().__init__(name) self.propellers = n_props print(f'Creating a Boat: {name}') def __del__(self): print(f'Deleting a Boat: {self.name}') class Amfibii(Car, Boat): def __init__(self, name: str, n_wheels: int, n_props: int) -> None: Car.__init__(self, name, n_wheels) Boat.__init__(self, name, n_props) print(f'Creating an Amfibii: {name}') def __del__(self): print(f'Deleting an Amfibii: {self.name}') my_vehicle = Amfibii('Mazda', 4, 2) I want to understand the order of calling constructors and destructors, as well as the correct and general use of the 'super' keyword. In the example above, I get the following error: super().__init__(name) TypeError: Boat.__init__() missing 1 required positional argument: 'n_props' How should I correctly call constructors of both parents, which have different sets of constructor arguments? | Thanks to the comments and the following article, I understood it. https://rhettinger.wordpress.com/2011/05/26/super-considered-super/ Regarding the constructors' arguments, they should be named. In this way, every level strips what it needs and passes it to the next one using super() Every class calls super().__init__() only once. The following classes in the MRO order are responsible for continuing the initialization. The updated example: class Vehicle: def __init__(self, name: str) -> None: self.name = name print(f'Creating a Vehicle: {name}') def go(self): print('Somehow moving...') assert not hasattr(super(), 'go') class Car(Vehicle): def __init__(self, n_wheels: int, **kwargs) -> None: super().__init__(**kwargs) self.wheels = n_wheels print(f'Creating a Car: {self.name}') def go(self): print('Riding...') super().go() class Boat(Vehicle): def __init__(self, n_props, **kwargs) -> None: super().__init__(**kwargs) self.propellers = n_props print(f'Creating a Boat: {self.name}') def go(self): print('Swimming...') super().go() class Amfibii(Car, Boat): def __init__(self, **kwargs) -> None: super().__init__(**kwargs) print(f'Creating an Amfibii: {self.name}') def go(self): print('Riding or swimming...') super().go() my_vehicle = Amfibii(name='Mazda', n_wheels=4, n_props=2) my_vehicle.go() It's a bit cumbersome (gently speaking), isn't it? EDIT: I updated the code according co the comments. | 5 | 1 |
77,876,253 | 2024-1-24 | https://stackoverflow.com/questions/77876253/sort-imports-alphabetically-with-ruff | Trying ruff for the first time and I'm not being able to sort imports alphabetically, using default settings. According to docs ruff should be very similar to isort. Here is a short example with unsorted imports import os import collections Run ruff command $ ruff format file.py 1 file left unchanged But if I run isort the imports are properly sorted $ isort file.py Fixing .../file.py What am I doing wrong? | According to https://github.com/astral-sh/ruff/issues/8926#issuecomment-1834048218: In Ruff, import sorting and re-categorization is part of the linter, not the formatter. The formatter will re-format imports, but it won't rearrange or regroup them, because the formatter maintains the invariant that it doesn't modify the program's AST (i.e., its semantics and behavior). To get isort-like behavior, you'd want to run ruff check --fix with --select I or adding extend-select = ["I"] to your pyproject.toml or ruff.toml. | 10 | 18 |
77,874,908 | 2024-1-24 | https://stackoverflow.com/questions/77874908/polars-valueerror-could-not-convert-value-unknown-as-a-literal-when-taking | βββββββββ¬ββββββ β group β var β β --- β --- β β i64 β str β βββββββββͺββββββ‘ β 1 β x β β 1 β x β β 2 β x β β 2 β y β β 3 β y β β 3 β y β βββββββββ΄ββββββ Suppose the above dataframe is called df and I want to get the percentage of variables named 'x' by group. The following gives me ValueError: could not convert value 'Unknown' as a Literal. Can someone explain why? df.group_by('group').agg( pl.mean((pl.col('var') == 'x').cast(pl.Int8)) ) | As mentioned in the documentation, pl.mean() is just a syntactic sugar for pl.col(columns).mean(). Your code doesn't work because pl.mean() expects *columns: str (column names) as input, not Expr. But you can use pl.col(columns).mean() instead: df = df.group_by('group', maintain_order=True).agg( (pl.col('var') == 'x').mean() ) so you get shape: (3, 2) βββββββββ¬ββββββ β group β var β β --- β --- β β i64 β f64 β βββββββββͺββββββ‘ β 1 β 1.0 β β 2 β 0.5 β β 3 β 0.0 β βββββββββ΄ββββββ | 3 | 3 |
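A self-contained version of the working pattern, using group values that match the example above, for copy-paste testing:

import polars as pl

df = pl.DataFrame({
    "group": [1, 1, 2, 2, 3, 3],
    "var": ["x", "x", "x", "y", "y", "y"],
})

# A Boolean expression aggregated with .mean() yields the share of True values,
# i.e. the fraction of rows equal to 'x' within each group.
out = df.group_by("group", maintain_order=True).agg(
    (pl.col("var") == "x").mean().alias("pct_x")
)
print(out)  # group 1 -> 1.0, group 2 -> 0.5, group 3 -> 0.0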
77,874,255 | 2024-1-24 | https://stackoverflow.com/questions/77874255/pyinstaller-modulenotfound-error-urllib3-packages-six-moves | I'm trying to package a script using pyinstaller to run as a windows service. When trying to run it I get the ModuleNotFoundError: No module named 'urllib3.packages.six.moves' error. I'm using the command pyinstaller.exe --onefile --runtime-tmpdir=. --hidden-import win32timezone --hidden-import urllib3 laps_service.py in a virtual environment. You'll see there that I added it as a --hidden-import but that didn't make any difference. After some googling I found that uninstalling and reinstalling urllib3 and requests could help, so I've done that - I explicitly installed requests==2.24.0 because servicemanager required 2.24.0; when this installed, pip automatically uninstalled urllib3-2.1.0 and installed the compatible version urllib3-1.25.11, but I'm still getting the same error. (Edit: I also installed six) From memory this script worked absolutely fine as a pyinstaller exe in a venv previously (on a laptop I no longer have), so I don't think there's any problem with the code and instead probably something with the versions of some package. I've run pip check but it found no broken dependencies. Full Traceback is below and I can give a list of imported packages if that's helpful. (Edit: I also have a working (but older and slightly different version) of the compiled .exe so if there's a way of getting into that to see the versions, that might help) PS C:\LAPS> .\laps_service.exe --startup delayed install Traceback (most recent call last): File "laps_service.py", line 29, in <module> File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module File "requests\__init__.py", line 43, in <module> File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module File "urllib3\__init__.py", line 7, in <module> File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module File "urllib3\connectionpool.py", line 11, in <module> File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module File "urllib3\exceptions.py", line 2, in <module> ModuleNotFoundError: No module named 'urllib3.packages.six.moves' [14276] Failed to execute script 'laps_service' due to unhandled exception! | In the end this was some kind of dependency conflict - I uninstalled urllib3 and requests, and then reinstalled requests - this triggered a reinstall of some urllib3 stuff. 
I don't quite know why, so here's the pip output for posterity: pip install requests Collecting requests Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB) Requirement already satisfied: charset-normalizer<4,>=2 in c:\nosync\projects\laps\.venv\lib\site-packages (from requests) (3.3.2) Requirement already satisfied: idna<4,>=2.5 in c:\nosync\projects\laps\.venv\lib\site-packages (from requests) (2.10) Collecting urllib3<3,>=1.21.1 (from requests) Using cached urllib3-2.1.0-py3-none-any.whl.metadata (6.4 kB) Requirement already satisfied: certifi>=2017.4.17 in c:\nosync\projects\laps\.venv\lib\site-packages (from requests) (2023.11.17) Using cached requests-2.31.0-py3-none-any.whl (62 kB) Using cached urllib3-2.1.0-py3-none-any.whl (104 kB) Installing collected packages: urllib3, requests Successfully installed requests-2.31.0 urllib3-2.1.0 I then tested it before freezing with pyinstaller, and it couldn't find the servicemanager module - turns out I had also installed a stand-alone servicemanager when I didn't need to because it's included with pywin32, and there was a conflict there. So I uninstalled servicemanager. At some point in all this it fixed itself, so my advice would be to either start fresh or systematically uninstall everything and reinstall one by one, running the script to check for errors. Not a completely satisfying answer, but an answer nonetheless. | 2 | 1
77,873,044 | 2024-1-24 | https://stackoverflow.com/questions/77873044/how-do-i-get-all-possible-orderings-of-9-zeros-and-9-ones-using-python | I want to end up with a list with 48 620 nested lists, containing 9 zeros and 9 ones in different orders. from itertools import permutations print(list(permutations('000000000111111111', r=18))) I assume the code above works, but every 0 and 1 is treated like an individual symbol, so for every ordering I get tons of repeats: ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ('0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '1', '1', '1', '1', '1', '1', '1', '1') ... So, basically, how do I shuffle a list in every possible way, excluding repeats? I tried to use every method from itertools, but I didn't find one that specifically does what i need. | Where 18 is the length of the list you want to generate and 9 is how many of them should be 0: for x in itertools.combinations(range(18),9): print(tuple(('0' if i in x else '1' for i in range(18)))) The idea here is to use combinations to choose the set of locations that will be 0 for each of (in your example's case) 48620 lists. | 2 | 5 |
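A quick sanity check of the approach above: the count matches C(18, 9) and there are no duplicates.

from itertools import combinations
from math import comb

n, k = 18, 9
seqs = [tuple('0' if i in idx else '1' for i in range(n))
        for idx in combinations(range(n), k)]

assert len(seqs) == comb(n, k) == 48620
assert len(set(seqs)) == len(seqs)  # every ordering appears exactly once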
77,871,144 | 2024-1-24 | https://stackoverflow.com/questions/77871144/typecheckerror-argument-config-file-none-did-not-match-any-element-in-the-u | I am getting an error when I tried to implement pandas profiling. Please find the code that I've tried, the error I got and the versions of the packages I used. Code: import pandas as pd from pandas_profiling import ProfileReport df = pd.read_csv("data.csv") profile = ProfileReport(df) profile Error: --------------------------------------------------------------------------- TypeCheckError Traceback (most recent call last) Cell In[18], line 1 ----> 1 profile = ProfileReport(df) 2 profile File ~\AppData\Local\anaconda3\lib\site-packages\pandas_profiling\profile_report.py:48, in ProfileReport.__init__(self, df, minimal, explorative, sensitive, dark_mode, orange_mode, tsmode, sortby, sample, config_file, lazy, typeset, summarizer, config, **kwargs) 45 _json = None 46 config: Settings ---> 48 def __init__( 49 self, 50 df: Optional[pd.DataFrame] = None, 51 minimal: bool = False, 52 explorative: bool = False, 53 sensitive: bool = False, 54 dark_mode: bool = False, 55 orange_mode: bool = False, 56 tsmode: bool = False, 57 sortby: Optional[str] = None, 58 sample: Optional[dict] = None, 59 config_file: Union[Path, str] = None, 60 lazy: bool = True, 61 typeset: Optional[VisionsTypeset] = None, 62 summarizer: Optional[BaseSummarizer] = None, 63 config: Optional[Settings] = None, 64 **kwargs, 65 ): 66 """Generate a ProfileReport based on a pandas DataFrame 67 68 Config processing order (in case of duplicate entries, entries later in the order are retained): (...) 82 **kwargs: other arguments, for valid arguments, check the default configuration file. 83 """ 85 if df is None and not lazy: File ~\AppData\Local\anaconda3\lib\site-packages\typeguard\_functions.py:138, in check_argument_types(func_name, arguments, memo) 135 raise exc 137 try: --> 138 check_type_internal(value, annotation, memo) 139 except TypeCheckError as exc: 140 qualname = qualified_name(value, add_class_prefix=True) File ~\AppData\Local\anaconda3\lib\site-packages\typeguard\_checkers.py:759, in check_type_internal(value, annotation, memo) 757 checker = lookup_func(origin_type, args, extras) 758 if checker: --> 759 checker(value, origin_type, args, memo) 760 return 762 if isclass(origin_type): File ~\AppData\Local\anaconda3\lib\site-packages\typeguard\_checkers.py:408, in check_union(value, origin_type, args, memo) 403 errors[get_type_name(type_)] = exc 405 formatted_errors = indent( 406 "\n".join(f"{key}: {error}" for key, error in errors.items()), " " 407 ) --> 408 raise TypeCheckError(f"did not match any element in the union:\n{formatted_errors}") TypeCheckError: argument "config_file" (None) did not match any element in the union: pathlib.Path: is not an instance of pathlib.Path str: is not an instance of str Versions: pandas==1.5.3 pandas-profiling==3.6.6 Couldn't find any resource to debug this. Tried updating the versions of pandas and pandas-profiling, but still couldn't succeed. | This is a known issue: Install ydata-profiling with pip pip install ydata-profiling Just add import and use the existing code as is. import pandas as pd from ydata_profiling import ProfileReport df = pd.read_csv('file.csv') profile_report = ProfileReport(df) Link to document : https://pypi.org/project/pandas-profiling/ | 3 | 2 |
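If the report should also be written to disk rather than only displayed in a notebook, ProfileReport provides to_file; a small sketch, assuming the same data.csv as in the question:

import pandas as pd
from ydata_profiling import ProfileReport

df = pd.read_csv("data.csv")
profile = ProfileReport(df)
profile.to_file("report.html")  # standalone HTML report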
77,847,983 | 2024-1-19 | https://stackoverflow.com/questions/77847983/processing-requests-in-fastapi-sequentially-while-staying-responsive | My server exposes an API for a resource-intensive rendering work. The job it does involves a GPU and as such the server can handle only a single request at a time. Clients should submit a job and receive 201 - ACCEPTED - as a response immediately after. The processing can take up to a minute and there can be a few dozens of requests scheduled. Here's what I came up with, boiled to a minimal reproducible example: import time import asyncio from fastapi import FastAPI, status app = FastAPI() fifo_queue = asyncio.Queue() async def process_requests(): while True: name = await fifo_queue.get() # Wait for a request from the queue print(name) time.sleep(10) # A RESOURCE INTENSIVE JOB THAT BLOCKS THE THREAD fifo_queue.task_done() # Indicate that the request has been processed @app.on_event("startup") async def startup_event(): asyncio.create_task(process_requests()) # Start the request processing task @app.get("/render") async def render(name): fifo_queue.put_nowait(name) # Add the request parameter to the queue return status.HTTP_201_CREATED # Return a 201 status code The problem with this approach is that the server does not stay responsive. After sending the first request it gets busy full time with it and does not respond as I have hoped. curl http://127.0.0.1:8000/render\?name\=001 In this example simply replacing time.sleep(10) with await asyncio.sleep(10) solves the problem, but not in the real use case (though possibly offers a clue as for what I am doing incorrectly). Any ideas? | As you may have figured out already, the main issue in your example is that you run a synchronous blocking operation within an async def endpoint, which blocks the event loop (of the main thread), and hence, the entire server. As explained in this answer, if one has to use an async def endpoint, they could run such CPU-bound tasks in an external ProcessPool and then await it (using asyncio's loop.run_in_executor()), which would return control back to the event loop, thus allowing other tasks in the event loop to run, until that task is completedβplease have a look at the linked answer above, as well as this answer for more details. As explained in the linked answers, when using ProcessPoolExecutor on Windows, it is important to protect the entry point of the program to avoid recursive spawning of subprocesses, etc. Basically, your code must be under if __name__ == '__main__' (as shown in the example below). I would also suggest using a lifespan handler, as demonstrated in this answer and this answer, instead of the deprecated startup and shutdown event handlers, to start the process_requests function, as well as instantiate the asyncio.Queue() and the ProcessPoolExecutor, and then add them to the request.state, so that they can be shared and re-used by every request/endpoint (especially, in the case of ProcessPoolExecutor, to avoid creating a new ProcessPool every time, as the computational costs for setting up processes can become expensive, when creating and destroying a lot of processes over and over). Further, I would suggest creating a unique ID for every request arrived, and return that ID to the client, so that they can use it to check on the status of their request, i.e., whether is completed or still pending processing. 
You could save that ID to your database storage (or a Key-Value store, such as Redis), as explained in this answer; however, for simplicity and demo purposes, the example belows uses a dict object for that purpose. It should also be noted that in the example below the ID is expected as a query parameter to the /status endpoint, but in real-world scenarios, you should never pass sensitive information to the query string, as this would pose security/privacy risks (see Solution 1 of this answer, where some of the risks are outlined). You should instead pass sensitive information to the request body, and always use the HTTPS protocol. Working Example from fastapi import FastAPI, Request from fastapi.responses import JSONResponse from contextlib import asynccontextmanager from dataclasses import dataclass from concurrent.futures import ProcessPoolExecutor import time import asyncio import uuid @dataclass class Item: id: str name: str # Simulating a Computationally Intensive Task def cpu_bound_task(item: Item): print(f"Processing: {item.name}") time.sleep(15) return 'ok' async def process_requests(q: asyncio.Queue, pool: ProcessPoolExecutor): while True: item = await q.get() # Get a request from the queue loop = asyncio.get_running_loop() fake_db[item.id] = 'Processing...' r = await loop.run_in_executor(pool, cpu_bound_task, item) q.task_done() # tell the queue that the processing on the task is completed fake_db[item.id] = 'Done.' @asynccontextmanager async def lifespan(app: FastAPI): q = asyncio.Queue() # note that asyncio.Queue() is not thread safe pool = ProcessPoolExecutor() asyncio.create_task(process_requests(q, pool)) # Start the requests processing task yield {'q': q, 'pool': pool} pool.shutdown() # free any resources that the pool is using when the currently pending futures are done executing fake_db = {} app = FastAPI(lifespan=lifespan) @app.get("/add") async def add_task(request: Request, name: str): item_id = str(uuid.uuid4()) item = Item(item_id, name) request.state.q.put_nowait(item) # Add request to the queue fake_db[item_id] = 'Pending...' return item_id @app.get("/status") async def check_status(item_id: str): if item_id in fake_db: return {'status': fake_db[item_id]} else: return JSONResponse("Item ID Not Found", status_code=404) if __name__ == '__main__': import uvicorn uvicorn.run(app) Note In case you encountered a memory leak (i.e., memory that is no longer needed, but is not released), when re-using a ProcessPoolExecutorβfor any reason, e.g., likely due to issues with some third-party library that you might be usingβyou could instead create a new instance of the ProcessPoolExecutor class for every request that needs to be processed and have it terminated (using the with statement) right after the processing is completed. Note, however, that creating and destroying many processes over and over could become computationally expensive. Example: async def process_requests(q: asyncio.Queue): while True: # ... with ProcessPoolExecutor() as pool: r = await loop.run_in_executor(pool, cpu_bound_task, item) # ... | 11 | 13 |
77,861,518 | 2024-1-22 | https://stackoverflow.com/questions/77861518/polars-compute-row-wise-quantile-over-dataframe | I have some polars DataFrames over which I want to compute some row-wise statistics. For some there is a .list.func function which exists (eg list.mean), however, for those which don't have a dedicated function I believe I must use list.eval. For the following example data: df = pl.DataFrame({ 'a': [1,10,1,.1,.1, np.NAN], 'b': [2, 8,1,.2, np.NAN,np.NAN], 'c': [3, 6,2,.3,.2, np.NAN], 'd': [4, 4,3,.4, np.NAN,np.NAN], 'e': [5, 2,3,.5,.3, np.NAN], }, strict=False) I have managed to come up with the following expression. It seems that list.eval returns a list (which I suppose is more generic) so I need to call .explode on the resulting 1-element list to get back a single value. The resulting column takes the name of the first column, so I then need to call .alias to give it a more meaningful name. df.select( pl.concat_list( pl.all().fill_nan(None) ) .list.eval(pl.element().quantile(0.25)) .explode() .alias('q1') ) Is this the recommended way of computing row-wise? | I would unpivot and join here. It should be faster than .list.eval plus it let's you more easily add other row wise aggregations. Note I've added q2,q3,q4 to the agg ( (_df:=df.with_row_index('i')) .join( _df .unpivot(index='i') .group_by('i') .agg( pl.col('value').quantile(x).alias(q) for q,x in {'q1':0.25,'q2':0.50, 'q3':0.75, 'q4':1}.items() ), on='i' ) .sort('i') .drop('i') ) shape: (6, 9) ββββββββ¬ββββββ¬ββββββ¬ββββββ¬ββββ¬ββββββ¬ββββββ¬ββββββ¬βββββββ β a β b β c β d β β¦ β q1 β q2 β q3 β q4 β β --- β --- β --- β --- β β --- β --- β --- β --- β β f64 β f64 β f64 β f64 β β f64 β f64 β f64 β f64 β ββββββββͺββββββͺββββββͺββββββͺββββͺββββββͺββββββͺββββββͺβββββββ‘ β 1.0 β 2.0 β 3.0 β 4.0 β β¦ β 2.0 β 3.0 β 4.0 β 5.0 β β 10.0 β 8.0 β 6.0 β 4.0 β β¦ β 4.0 β 6.0 β 8.0 β 10.0 β β 1.0 β 1.0 β 2.0 β 3.0 β β¦ β 1.0 β 2.0 β 3.0 β 3.0 β β 0.1 β 0.2 β 0.3 β 0.4 β β¦ β 0.2 β 0.3 β 0.4 β 0.5 β β 0.1 β NaN β 0.2 β NaN β β¦ β 0.2 β 0.3 β NaN β NaN β β NaN β NaN β NaN β NaN β β¦ β NaN β NaN β NaN β NaN β ββββββββ΄ββββββ΄ββββββ΄ββββββ΄ββββ΄ββββββ΄ββββββ΄ββββββ΄βββββββ I used the walrus operator to create _df so as to not have to invoke .with_row_index twice. If you prefer you can just do df=df.with_row_index('i') first instead. | 3 | 1 |
77,852,033 | 2024-1-20 | https://stackoverflow.com/questions/77852033/enable-ruff-rules-only-on-specific-files | I work on a large project and I'd slowly like to enforce pydocstyle using ruff. However, many files will fail on e.g. D103 "undocumented public function". I'd like to start with enforcing it on a few specific files, so I'd like to write something like select = ["D"] [tool.ruff.ignore-except-per-file] # this config does not exist # ignore D103 but on all files, except the ones that pass "properly_formatted_module1.py" = ["D103"] "properly_formatted_module2.py" = ["D103"] I don't think this is possible; the only way I see is to explicitly write down ALL of the file names in a [tool.ruff.extend-per-file-ignores]. There are a few hundred of them so that's not really nice to do. | You can invert the file selection in the (extend-)per-file-ignores section to ignore, for example, D103 on all files except the ones that match the pattern. From the Ruff documentation for per-file-ignores: A list of mappings from file pattern to rule codes or prefixes to exclude, when considering any matching files. An initial '!' negates the file pattern. And the corresponding example in a pyproject.toml: # Ignore `D` rules everywhere except for the `src/` directory. "!src/**.py" = ["D"] | 6 | 3
77,869,420 | 2024-1-23 | https://stackoverflow.com/questions/77869420/how-to-connect-fastapi-jinja2-and-keycloak | I work for a company and I have a Python FastAPI project. This is something like a multi-page website, where each endpoint is a page of the site. This was done using Jinja2 - TemplateResponse(). I know that this is not the best solution for similar projects, such as Flask or Django, but there is no way to change it. I need to hide content using Keycloak authentication. I took the solution here: https://stackoverflow.com/a/77186511/21439459 Made same endpoint: @app.get("/secure") async def root(user: User = Depends(get_user_info)): return {"message": f"Hello {user.username} you have the following service: {user.realm_roles}"} When I run the page I get (Not authenticated): I don't understand how a user can authenticate. I expected a redirect to the Keycloak page like in other frameworks. Interesting point when running /docs Here, after entering the data, a correct redirect to Keycloak occurs and after entering the login/password, I can receive a response from the secure endpoint. I need help with a redirect when opening my website. I don't understand how to repeat authentication from the documentation. I tried the fastapi_keycloak library, but as I understand it, it does not work with the company's version of keycloak. I tried fastapi-keycloak-middleware, but I also received a 401 error and did not understand how to authenticate users. | Finally, my solution: import jwt import time import json import uvicorn import requests from fastapi import FastAPI, Request from fastapi.responses import RedirectResponse from fastapi.middleware.cors import CORSMiddleware from fastapi.templating import Jinja2Templates from fastapi.security.utils import get_authorization_scheme_param AUTH_URL = "... KeyCloak Auth URL..." # with "...&redirect_uri=http://127.0.0.1:8080/auth" TOKEN_URL = "...KeyCloak Token URL..." 
templates = Jinja2Templates(directory="templates") # paste your directory app = FastAPI() app.add_middleware( CORSMiddleware, allow_credentials=True, allow_methods=["*"], allow_headers=["*"] ) @app.get("/secure_method") def func(request: Request): authorization: str = request.cookies.get("Authorization") if not authorization: # You can add "&state=secure_method" to AUTH_URL for correct redirection after authentication return RedirectResponse(url=AUTH_URL) scheme, credentials = get_authorization_scheme_param(authorization) try: decoded = jwt.decode(jwt=credentials, options={'verify_signature': False}) except Exception as e return 0 # I send special error template # check expiration time if decoded['exp'] + 3600*24 < datetime.utcnow().timestamp(): return RedirectResponse(url=AUTH_URL) # generate data or make something return templates.TemplateResponse('secure.html', {"request": request}) app.get("/auth") def auth(code: str, state: str = "") -> RedirectResponse: payload = { 'grant_type': 'authorization_code', 'client_id': '...some client ID...', 'code': code, 'redirect_uri': 'http://127.0.0.1:8080/auth' } headers = {"Content-Type": "application/x-www-form-urlencoded"} token_response = requests.request("POST", TOKEN_URL, data=payload, headers=headers) token_body = json.loads(token_response.content) access_token = token_body.get("access_token") if not access_token: return {"ERROR": "access_token"} response = RedirectResponse(url="/" + state) response.set_cookie("Authorization", value=f"Bearer {access_token}") return response if __name__ == "__main__": uvicorn.run(app, host="127.0.0.1", port=8080) | 3 | 0 |
77,842,145 | 2024-1-18 | https://stackoverflow.com/questions/77842145/attributeerror-httpxbinaryresponsecontent-object-has-no-attribute-with-strea | I am using openai to convert text into audio, however this error always pops up: AttributeError: 'HttpxBinaryResponseContent' object has no attribute 'with_streaming_response' I have tried the stream_to_field() function but it doesn't work either def audio(prompt): speech_file_path=Path('C:\\Users\\beni7\\Downloads\\proyectos_programacion\\proyecto_shorts\\audio').parent / "speech.mp3" response = openai.audio.speech.create( model="tts-1", voice="alloy", input=prompt ) response.with_streaming_response.method(speech_file_path) | .with_streaming_response is accessed on speech itself (client.audio.speech.with_streaming_response), not on the response object returned by create(). You can try the following: def audio(prompt): speech_file_path="<file path>" with client.audio.speech.with_streaming_response.create( model="tts-1", voice="alloy", input=prompt, ) as response: response.stream_to_file(speech_file_path) | 2 | 5
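If streaming isn't needed, the plain create() call can be kept; as far as I recall the binary response object also offers a write_to_file helper, so a non-streaming variant along these lines should work (verify the method name against your installed openai version):

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def audio(prompt: str, out_path: Path = Path("speech.mp3")) -> None:
    # create() returns a binary response; write it straight to disk
    response = client.audio.speech.create(model="tts-1", voice="alloy", input=prompt)
    response.write_to_file(out_path)  # assumed helper on the binary response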
77,860,398 | 2024-1-22 | https://stackoverflow.com/questions/77860398/how-to-zip-multiple-csv-files-into-archive-and-return-it-in-fastapi | I'm attempting to compose a response to enable the download of reports. I retrieve the relevant data through a database query and aim to store it in memory to avoid generating unnecessary files on the server. My current challenge involves saving the CSV file within a zip file. Regrettably, I've spent several hours on this issue without finding a satisfactory solution, and I'm uncertain about the specific mistake I may be making. The CSV file in question is approximately 40 MB in size. This is my FastAPI code. I successfully saved the CSV file locally, and all the data within it is accurate. I also managed to correctly create a zip file containing the CSV. However, the FastAPI response is not behaving as expected. After downloading it returns me zip with error: The ZIP file is corrupted, or there's an unexpected end of the archive. from fastapi import APIRouter, Depends from sqlalchemy import text from libs.auth_common import veryfi_admin from libs.database import database import csv import io import zipfile from fastapi.responses import Response router = APIRouter( tags=['report'], responses={404: {'description': 'not found'}} ) @router.get('/raport', dependencies=[Depends(veryfi_admin)]) async def get_raport(): query = text( """ some query """ ) data_de = await database.fetch_all(query) csv_buffer = io.StringIO() csv_writer_de = csv.writer(csv_buffer, delimiter=';', lineterminator='\n') csv_writer_de.writerow([ "id", "name", "date", "stock", ]) for row in data_de: csv_writer_de.writerow([ row.id, row.name, row.date, row.stock, ]) csv_buffer.seek(0) zip_buffer = io.BytesIO() with zipfile.ZipFile(zip_buffer, 'w', zipfile.ZIP_DEFLATED) as zip_file: zip_file.writestr("data.csv", csv_buffer.getvalue()) response = Response(content=zip_buffer.getvalue()) response.headers["Content-Disposition"] = "attachment; filename=data.zip" response.headers["Content-Type"] = "application/zip" response.headers["Content-Length"] = str(len(zip_buffer.getvalue())) print("CSV Buffer Contents:") print(csv_buffer.getvalue()) return response Here is also the vue3 code const downloadReport = () => { loading.value = true; instance .get(`/raport`) .then((res) => { const blob = new Blob([res.data], { type: "application/zip" }); const link = document.createElement("a"); link.href = window.URL.createObjectURL(blob); link.download = "raport.zip"; link.click(); loading.value = false; }) .catch(() => (loading.value = false)); }; <button @click="downloadReport" :disabled="loading"> Download Report </button> Thank you for your understanding as I navigate through my first question on this platform. | Here's a working example on how to create multiple csv files, then compress them into a zip file, and finally, return the zip file to the user. This answer makes use of code and concepts previously discussed in the following answers: this, this and this. Thus, I would suggest having a look at those answers for more details. Also, since the zipfile module's operations are synchronous, you should define the endpoint with normal def instead of async def, unless you used some third-party library that provides an async API as well, or you had to await for some coroutine (async def function) inside the endpoint, in which case I would suggest running the zipfile's operations in an external ThreadPool (since they are blocking IO-bound operations). 
Please have a look at this answer for relevant solutions, as well as details on async / await and how FastAPI deals with async def and normal def API endpoints. Further, you don't really need using StreamingResponse, if the data are already loaded into memory, as shown in the example below. You should instead return a custom Response (see the example below, as well as this, this and this for more details). Note that the example below uses utf-16 encoding on the csv data, in order to make it compatible with data that include unicode or non-ASCII characters, as explained in this answer. If there's none of such characters in your data, utf-8 encoding could be used as well. Also, note that for demo purposes, the example below loops through a list of dict objects to write the csv data, in order to make it more easy for you to adapt it to your database query data case. Otherwise, one could also csv.DictWriter() and its writerows() method, as demosntrated in this answer, in order to write the data, instead of looping through the list. Working Example from fastapi import FastAPI, HTTPException, BackgroundTasks, Response import zipfile import csv import io app = FastAPI() fake_data = [ { "Id": "1", "name": "Alice", "age": "20", "height": "62", "weight": "120.6" }, { "Id": "2", "name": "Freddie", "age": "21", "height": "74", "weight": "190.6" } ] def create_csv(data: list): s = io.StringIO() try: writer = csv.writer(s, delimiter='\t') writer.writerow(data[0].keys()) for row in data: writer.writerow([row['Id'], row['name'], row['age'], row['height'], row['weight']]) s.seek(0) return s.getvalue().encode('utf-16') except: raise HTTPException(detail='There was an error processing the data', status_code=400) finally: s.close() @app.get('/') def get_data(): zip_buffer = io.BytesIO() try: with zipfile.ZipFile(zip_buffer, 'w', zipfile.ZIP_DEFLATED) as zip_file: for i in range(5): zip_info = zipfile.ZipInfo(f'data_{i}.csv') csv_data = create_csv(fake_data) zip_file.writestr(zip_info, csv_data) zip_buffer.seek(0) headers = {"Content-Disposition": "attachment; filename=files.zip"} return Response(zip_buffer.getvalue(), headers=headers, media_type="application/zip") except: raise HTTPException(detail='There was an error processing the data', status_code=400) finally: zip_buffer.close() | 4 | 2 |
77,841,985 | 2024-1-18 | https://stackoverflow.com/questions/77841985/python-requests-giving-me-missing-metadata-when-trying-to-upload-attachment | I am trying to upload an attachment via an API. I can make it work in the software's Swagger environment with this: curl -X 'POST' \ 'https://demo.citywidesolutions.com/v4_server/external/v1/maintenance/service_requests/9/attached_files' \ -H 'accept: */*' \ -H 'Content-Type: multipart/form-data' \ -H 'Authorization: Bearer 123456789' \ -F 'description=' \ -F 'data=@This is my file.pdf;type=application/pdf' When I try to do the same with python requests I get a 400 error with a message of missing metadata. Here is what I am trying to pass with Python: import requests Headers = {'accept': '*/*', 'Content-Type': 'multipart/form-data', 'Authorization': 'Bearer 123456789'} Attachment = {'description': '', 'data': open('C:/This is my file.pdf', 'rb')} Response = requests.post(url='https://demo.citywidesolutions.com/v4_server/external/v1/maintenance/service_requests/9/attached_files', headers=Headers, files=Attachment) From that I get a 400 response and the JSON says Missing Metadata. What am I missing here? | You could try explicitly setting the file type in your code; you do so in the curl method, but not in the Python script. Also, the requests library is able to handle 'Content-Type', so typically you don't need to set it. And the description parameter may be expected as a tuple. Here are all those changes; I have not tested since I do not have access to the endpoint: import requests headers = { 'accept': '*/*', 'Authorization': 'Bearer 123456789' } # excluded content-type files = { 'description': (None, ''), 'data': ('blank.pdf', open('blank.pdf', 'rb'), 'application/pdf') #added application/pdf format } url = 'https://demo.citywidesolutions.com/v4_server/external/v1/maintenance/service_requests/9/attached_files' response = requests.post(url, headers=headers, files=files) | 2 | 1 |
77,848,276 | 2024-1-19 | https://stackoverflow.com/questions/77848276/linear-regression-output-are-nonsensical | I have a dataset and am trying to fill in the missing values by utilizing a 2d regression to get the slope of the surrounding curves to approximate the missing value. I am not sure if this is the right approach here, but am open to listen to other ideas. However, here's my example: local_window = pd.DataFrame({102.5: {0.021917: 0.0007808776581961896, 0.030136: 0.0009108521507099643, 0.035616: 0.001109650616093018, 0.041095: 0.0013238862647034224, 0.060273: 0.0018552410055933753}, 105.0: {0.021917: 0.0008955896980595855, 0.030136: 0.001003244315807649, 0.035616: 0.0011852612740301449, 0.041095: 0.0013952857530607904, 0.060273: 0.0018525880756980716}, 107.5: {0.021917: np.nan, 0.030136: 0.0012354997955153118, 0.035616: 0.00140044893559622, 0.041095: 0.0015902024099268574, 0.060273: 0.001973254493672934}}) def predict_nan_local(local_window): if not local_window.isnull().values.any(): return local_window # Extract x and y values for the local window X_local = local_window.columns.values.copy() y_local = local_window.index.values.copy() # Create a meshgrid of x and y values X_local, y_local = np.meshgrid(X_local, y_local) # Flatten x and y for fitting the model X_local_flat = X_local.flatten() y_local_flat = y_local.flatten() values_local_flat = local_window.values.flatten() # Find indices of non-NaN values non_nan_indices = ~np.isnan(values_local_flat) # Filter out NaN values X_local_flat_filtered = X_local_flat[non_nan_indices] y_local_flat_filtered = y_local_flat[non_nan_indices] values_local_flat_filtered = values_local_flat[non_nan_indices] regressor = LinearRegression() regressor.fit(np.column_stack((X_local_flat_filtered, y_local_flat_filtered)), values_local_flat_filtered) nan_indices = np.argwhere(np.isnan(local_window.values)) X_nan = local_window.columns.values[nan_indices[:, 1]] y_nan = local_window.index.values[nan_indices[:, 0]] # Predict missing value predicted_values = regressor.predict(np.column_stack((X_nan, y_nan))) local_window.iloc[nan_indices[:, 0], nan_indices[:, 1]] = predicted_values return local_window The output - as you can see - doesn't make a whole lot of sense. Is there anything I am missing? 
| I made some modifications to your predict function: def predict_nan_local(local_window, degree=2): # Only proceed if there are NaNs to fill if not local_window.isnull().values.any(): return local_window # Create a meshgrid of x and y values x = local_window.columns.values y = local_window.index.values X, Y = np.meshgrid(x, y) # Flatten the grid for fitting X_flat = X.ravel() Y_flat = Y.ravel() Z_flat = local_window.values.ravel() # Filter out NaN values valid_mask = ~np.isnan(Z_flat) X_valid = X_flat[valid_mask] Y_valid = Y_flat[valid_mask] Z_valid = Z_flat[valid_mask] # Create polynomial features poly = PolynomialFeatures(degree=degree) XY_poly = poly.fit_transform(np.column_stack((X_valid, Y_valid))) # Fit the model regressor = LinearRegression() regressor.fit(XY_poly, Z_valid) # Predict missing values XY_all_poly = poly.transform(np.column_stack((X_flat, Y_flat))) Z_pred_flat = regressor.predict(XY_all_poly) # Fill in the missing values Z_flat[~valid_mask] = Z_pred_flat[~valid_mask] filled_local_window = pd.DataFrame(Z_flat.reshape(local_window.shape), index=y, columns=x) return filled_local_window Running this on your data: predict_nan_local(local_window) 102.5 105.0 107.5 0.021917 0.000781 0.000896 0.001089 0.030136 0.000911 0.001003 0.001235 0.035616 0.001110 0.001185 0.001400 0.041095 0.001324 0.001395 0.001590 0.060273 0.001855 0.001853 0.001973 So we have imputed 0.001089 for the missing value. Plotting this: import matplotlib.pyplot as plt import numpy as np import pandas as pd # Assuming local_window is your DataFrame with the same structure as in your example local_window = pd.DataFrame({ 102.5: {0.021917: 0.0007808776581961896, 0.030136: 0.0009108521507099643, 0.035616: 0.001109650616093018, 0.041095: 0.0013238862647034224, 0.060273: 0.0018552410055933753}, 105.0: {0.021917: 0.0008955896980595855, 0.030136: 0.001003244315807649, 0.035616: 0.0011852612740301449, 0.041095: 0.0013952857530607904, 0.060273: 0.0018525880756980716}, 107.5: {0.021917: 0.001089, 0.030136: 0.0012354997955153118, 0.035616: 0.00140044893559622, 0.041095: 0.0015902024099268574, 0.060273: 0.001973254493672934} }) # Transpose the DataFrame to plot each x-value (102.5, 105.0, 107.5) as a separate line local_window_transposed = local_window.T # Create the plot plt.figure(figsize=(10, 5)) # Plot each column as a separate line for column in local_window_transposed.columns: plt.plot(local_window_transposed.index, local_window_transposed[column], marker='o', label=f'y={column}') # Find missing data point(s) for col in local_window_transposed.columns: missing_data = local_window_transposed[col].isnull() if missing_data.any(): missing_x = local_window_transposed.index[missing_data] for mx in missing_x: plt.scatter(mx, local_window[col][mx], s=100, facecolors='none', edgecolors='r') # Add titles and labels plt.title('Chart Title') plt.xlabel('X-axis Label') plt.ylabel('Y-axis Label') plt.legend(title='Legend Title') # Show the plot plt.show() Is this closer to what you expected ? | 3 | 2 |
77,833,712 | 2024-1-17 | https://stackoverflow.com/questions/77833712/odoo-16-custom-website-pagination-problem | I paginated the search result. For the first, query set gives me perfect result , but the pagination set all pages, and I get all data from database by clicking on any page number. Here is my code: This is the controller which I added : class MyCustomWeb(http.Controller): @http.route(['/customer', '/customer/page/<int:page>'], type="http", auth="user", website=True) def customer_kanban(self, page=1, search=None, **post): domain = [] if search: domain.append(('name', 'ilike', search)) post["search"] = search customer_obj = request.env['res.partner'].sudo().search(domain) total = customer_obj.sudo().search_count([]) pager = request.website.pager( url='/customer', total=total, page=page, step=3, ) offset = pager['offset'] customer_obj = customer_obj[offset: offset + 5] return request.render('my_module.customer_form', { 'search': search, 'customer_details': customer_obj, 'pager': pager, }) This is the XML code : the template of customer website : <template id="customer_form" name="Customers"> <t t-call="website.layout"> <div> <div class="col-md-6"> <br/> <div> <form action="/customer" method="post"> <t t-call="website.website_search_box"> </t> <input type="hidden" name="csrf_token" t-att-value="request.csrf_token()"/> <div> <section> <div class="customer_details"> <center> <h3>Customers</h3> </center> </div> <br/> <div class="oe_product_cart_new row" style="overflow: hidden;"> <t t-foreach="customer_details" t-as="customers"> <div class="col-md-3 col-sm-3 col-xs-12" style="padding:1px 1px 1px 1px;"> <div style="border: 1px solid #f0eaea;width: 150px;height: auto;padding: 7% 0% 10% 0%; border-radius: 3px;overflow: hidden; margin-bottom: 44px !important;width: 100%;height: 100%;"> <div class="oe_product_image"> <center> <div style="width:100%;overflow: hidden;"> <img t-if="customers.image_1920" t-attf-src="/web/image/res.partner/#{customers.id}/image_1920" class="img oe_product_image" style="padding: 0px; margin: 0px; width:auto; height:100%;"/> </div> <div style="text-align: left;margin: 10px 15px 3px 15px;"> <t t-if="customers.name"> <span t-esc="customers.name" style="font-weight: bolder;color: #3e3b3b;"/> <br/> </t> </div> </center> </div> </div> </div> </t> </div> <div class="products_pager form-inline justify-content-center mt-3"> <t t-call="website.pager"> <t t-set="_classes">mt-2 ml-md-2</t> </t> </div> </section> <br/> <hr class="border-600 s_hr_1px w-100 mx-auto s_hr_dotted"/> </div> </form> </div> </div> </div> </t> </template> Any help please? Thanks. | Since you have not supplied a domain, customer_obj.sudo().search_count([]) will return the whole amount of records in the model. Alternatively, you can use len(customer_obj) or provide a domain inside the search_count as well. total = len(customer_obj) pager = request.website.pager( url='/customer', total=total, page=page, url_args={'search': search}, step=3, ) | 5 | 4 |
77,867,589 | 2024-1-23 | https://stackoverflow.com/questions/77867589/how-do-you-replace-set-tight-layout-with-set-layout-engine | When I call fig.set_tight_layout(True) on a Matplotlib figure, I receive this deprecation warning: The set_tight_layout function will be deprecated in a future version. Use set_layout_engine instead. How do I call set_layout_engine so as to match the current behavior as closely as possible? Environment: OS: Mac Python: 3.10.6 matplotlib: 3.7.2 | From: https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.set_layout_engine fig.set_layout_engine("tight") | 2 | 1 |
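In context, the replacement is a one-line swap:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.set_layout_engine("tight")  # replaces fig.set_tight_layout(True)
fig.savefig("plot.png")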
77,864,704 | 2024-1-23 | https://stackoverflow.com/questions/77864704/annotated-transformer-why-x-dropoutsublayerlayernormx | Please clarify if the Annotated Transformer Encoder LayerNorm implementation is correct. Transformer paper says the output of the sub layer is LayerNorm(x + Dropout(SubLayer(x))). LayerNorm should be applied after the DropOut(SubLayer(x)) as per the paper: However, the Annotated Transformer implementation does x + DropOut(SubLayer(LayerNorm(x))) where LayerNorm is applied before Sublayer, which is the other way around. class SublayerConnection(nn.Module): """ A residual connection followed by a layer norm. Note for code simplicity the norm is first as opposed to last. """ def __init__(self, size, dropout): super(SublayerConnection, self).__init__() self.norm = LayerNorm(size) self.dropout = nn.Dropout(dropout) def forward(self, x, sublayer): "Apply residual connection to any sublayer with the same size." return x + self.dropout(sublayer(self.norm(x))) # <--- LayerNorm before SubLayer | Original paper applied Dropout to the Sub-Layer (Multi Head Attention) before Residual Connection and Layer Normalization. This is called Post Normalization. dropout to the output of each sub-layer, before it is added to the sub-layer input (x) and (layer) normalized. However, recent approach is Pre Normalization where LayerNorm is applied to the input x into the sub-layer as explained in Let's build GPT: from scratch, in code, spelled out. Very few details about the Transformer have changed in the last five years, but there is something slightly departs from the original paper. You see that Add and Norm is applied after the transformation (Multi Head Attention). But now it is more common to apply LayerNorm before the transformation, so there is a reshuffling of the Layer Norm. This is called pre-norm formulation and that is the one we are going to implement as well. This is proposed in On Layer Normalization in the Transformer Architecture. The Annotated Transformer is also following this approach. | 3 | 5 |
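Side by side, the two variants differ only in where LayerNorm sits; a minimal sketch (not the Annotated Transformer's exact classes):

import torch
from torch import nn

class PostNormSublayer(nn.Module):
    """Paper's formulation: LayerNorm(x + Dropout(SubLayer(x)))."""
    def __init__(self, size, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        return self.norm(x + self.dropout(sublayer(x)))

class PreNormSublayer(nn.Module):
    """Annotated Transformer / GPT-style: x + Dropout(SubLayer(LayerNorm(x)))."""
    def __init__(self, size, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        return x + self.dropout(sublayer(self.norm(x)))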
77,869,443 | 2024-1-23 | https://stackoverflow.com/questions/77869443/create-custom-model-class-with-python | I have an example class as follows: class MLP(nn.Module): # Declare a layer with model parameters. Here, we declare two fully # connected layers def __init__(self): # Call the constructor of the `MLP` parent class `Module` to perform # the necessary initialization. In this way, other function arguments # can also be specified during class instantiation, such as the model # parameters, `params` (to be described later) super().__init__() self.hidden = nn.Linear(20, 256) # Hidden layer self.out = nn.Linear(256, 10) # Output layer # Define the forward propagation of the model, that is, how to return the # required model output based on the input `X` def forward(self, X): # Note here we use the funtional version of ReLU defined in the # nn.functional module. return self.out(torch.relu(self.hidden(X))) Calling the class is like this: net = MLP() net(X) Now, I need to create a similar class and function for a model with 4 layers: Layers Configuration Activation Function fully connected input size 128, output size 64 ReLU fully connected input size 64, output size 32 ReLU dropout probability 0.5 - fully connected input size 32, output size 1 Sigmoid I need to pass the following assertion: model = Net() assert model.fc1.in_features == 128 assert model.fc1.out_features == 64 assert model.fc2.in_features == 64 assert model.fc2.out_features == 32 assert model.fc3.in_features == 32 assert model.fc3.out_features == 1 x = torch.rand(2, 128) output = model.forward(x) assert output.shape == (2, 1), "Net() is wrong!" Here is what I have so far: class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(128, 64) self.fc2 = nn.Linear(64, 32) self.dropout = nn.Dropout(p=0.5) self.fc3 = nn.Linear(32, 1) def forward(self, x): return self.fc3(torch.sigmoid(self.dropout(self.fc2(torch.relu(self.fc1(torch.relu(X))))))) But I'm getting an error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x20 and 128x64) How to resolve it? | You have a capital X in the forward method, while the function expects lowercase x. You must have a tensor with shape (2, 20) assigned to variable capital X. | 2 | 2 |
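Beyond the X/x typo the answer points out, a forward pass that follows the layer table literally (ReLU after fc1 and fc2, then dropout, then sigmoid after fc3) would look like this sketch and passes the shape assertion:

import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(128, 64)
        self.fc2 = nn.Linear(64, 32)
        self.dropout = nn.Dropout(p=0.5)
        self.fc3 = nn.Linear(32, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))        # fully connected 128 -> 64, ReLU
        x = torch.relu(self.fc2(x))        # fully connected 64 -> 32, ReLU
        x = self.dropout(x)                # dropout p=0.5
        return torch.sigmoid(self.fc3(x))  # fully connected 32 -> 1, Sigmoid

model = Net()
assert model(torch.rand(2, 128)).shape == (2, 1)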
77,864,142 | 2024-1-23 | https://stackoverflow.com/questions/77864142/pass-socketio-server-instance-to-fastapi-routers-as-dependency | I'm using python-socketio and trying to pass the server instance to my app routers as dependency: main.py file: import socketio import uvicorn from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware from src.api.rest.api_v1.route_builder import build_routes app = FastAPI() app.add_middleware( CORSMiddleware, allow_credentials=True, allow_methods=["*"], ) sio = socketio.AsyncServer(cors_allowed_origins="*", async_mode="asgi") app = build_routes(app) sio_asgi_app = socketio.ASGIApp(sio, app) @app.get("/") async def root(): return {"message": "Hello from main server !"} def start_server(): api_port = "0.0.0.0" api_host = "8000" uvicorn.run( "src.api.main:sio_asgi_app", host=api_host, port=int(api_port), log_level="info", reload=True, ) if __name__ == "__main__": start_server() product_routes.py file: from typing import Annotated import socketio from fastapi import APIRouter, Depends router = APIRouter(prefix="/products") @router.get("/emit") async def emit_message( sio: Annotated[socketio.AsyncServer, Depends()] ) -> None: await sio.emit("reply", {"message": "Hello from server !"}) build_routes.py file: def build_routes(app: FastAPI) -> FastAPI: app.include_router( r_products, prefix="/v1", dependencies=[Depends(socketio.AsyncServer)], ) return app So at this point I'm not sure how to pass that dependency to the routes. When I hit the emit endpoint I get this error response: { "detail": [ { "type": "missing", "loc": [ "query", "kwargs" ], "msg": "Field required", "input": null, "url": "https://errors.pydantic.dev/2.5/v/missing" } ] } | You need to create function, where just return AsyncServer object. Then use this function as a dependency in your endpoint functions. It's better to create this dependency in separate file to avoid import errors. dependencies.py import socketio sio = socketio.AsyncServer(cors_allowed_origins="*", async_mode="asgi") def sio_dep() -> socketio.AsyncServer: return sio then pass this function as a parameter to Depends: from typing import Annotated from dependencies import sio_dep from fastapi import APIRouter, Depends router = APIRouter(prefix="/products") @router.get("/emit") async def emit_message( sio: Annotated[socketio.AsyncServer, Depends(sio_dep)] ) -> None: await sio.emit("reply", {"message": "Hello from server !"}) | 2 | 1 |
77,868,799 | 2024-1-23 | https://stackoverflow.com/questions/77868799/polars-dataframe-add-columns-conditional-on-other-column-yielding-different-len | I have the following dataframe in polars. df = pl.DataFrame({ "Buy_Signal": [1, 0, 1, 0, 0], "Returns": np.random.normal(0, 0.1, 5), }) Ultimately, I want to do aggregations on column Returns conditional on different intervals - which are given by column Buy_Signal. In the above case the length is from each 1 to the end of the dataframe. What is the most polar way to do this? My "stupid" solution (before I can apply aggregations) is as follows: df = (df .with_columns(pl.col("Buy_Signal").cum_sum().alias("Holdings")) # (1) Determine returns for each holding period (>=1, >=2) and unpivot into one column .with_columns( pl.when(pl.col("Holdings") >= 1).then(pl.col("Returns")).alias("Port_1"), pl.when(pl.col("Holdings") >= 2).then(pl.col("Returns")).alias("Port_2"), ) ) The solution is obviously not working for many "buy signals". So I wrote a separate function calc_port_returns which includes a for loop which is then passed to polar's pipe function. See here: def calc_port_returns(_df: pl.DataFrame) -> pl.DataFrame: n_portfolios = _df["Buy_Signal"].sum() data = pl.DataFrame() _df = _df.with_columns(pl.col("Buy_Signal").cum_sum().alias("Holdings")) for i in range(1, n_portfolios + 1): tmp = ( _df.with_columns( pl.when(pl.col("Holdings") >= i).then(pl.col("Returns")).alias(f"Port_{i}"), ) .select(pl.col(f"Port_{i}")) ) data = pl.concat([data, tmp], how="horizontal") _df = pl.concat([_df, data], how="horizontal") return _df df = pl.DataFrame({ "Buy_Signal": [1, 0, 1, 0, 0], "Returns": np.random.normal(0, 0.1, 5), }) df.pipe(calc_port_returns) What is the "polars" way to do this? In pandas I could imagine solving it using df.assign({f"Port_{i}": ... for i in range(1, ...)}) with a few prior extra columns / side calculations. Thanks for any suggestions. | It's pretty similar to what you were thinking... ( df .with_columns( Holdings = pl.col('Buy_Signal').cum_sum() ) .with_columns( pl.when(pl.col("Holdings")>=x) .then(pl.col("Returns")).alias(f"Port_{x}") for x in range(1,df['Buy_Signal'].sum()+1) ) ) shape: (5, 5) ββββββββββββββ¬ββββββββββββ¬βββββββββββ¬ββββββββββββ¬ββββββββββββ β Buy_Signal β Returns β Holdings β Port_1 β Port_2 β β --- β --- β --- β --- β --- β β i64 β f64 β i64 β f64 β f64 β ββββββββββββββͺββββββββββββͺβββββββββββͺββββββββββββͺββββββββββββ‘ β 1 β -0.05515 β 1 β -0.05515 β null β β 0 β 0.205705 β 1 β 0.205705 β null β β 1 β -0.068856 β 2 β -0.068856 β -0.068856 β β 0 β -0.141231 β 2 β -0.141231 β -0.141231 β β 0 β -0.028524 β 2 β -0.028524 β -0.028524 β ββββββββββββββ΄ββββββββββββ΄βββββββββββ΄ββββββββββββ΄ββββββββββββ | 2 | 3 |
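Since the stated end goal is aggregating Returns per holding interval, the same expression generator can feed straight into an aggregation; the mean-return summary below is purely illustrative:

import numpy as np
import polars as pl

df = pl.DataFrame({
    "Buy_Signal": [1, 0, 1, 0, 0],
    "Returns": np.random.normal(0, 0.1, 5),
})

n = df["Buy_Signal"].sum()
ports = df.with_columns(Holdings=pl.col("Buy_Signal").cum_sum()).with_columns(
    pl.when(pl.col("Holdings") >= x).then(pl.col("Returns")).alias(f"Port_{x}")
    for x in range(1, n + 1)
)

# Aggregate each holding period, e.g. mean return per portfolio column
summary = ports.select(pl.col(f"Port_{x}").mean() for x in range(1, n + 1))
print(summary)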
77,854,397 | 2024-1-21 | https://stackoverflow.com/questions/77854397/pandas-resample-signal-series-with-its-corresponding-label | I have this table with these columns: Seconds, Amplitude, Labels, Metadata. Basically, it's an ECG signal. You can download the csv here: https://tmpfiles.org/3951223/question.csv As you see, the second timestep is 0.004. How to resample that with the desired new timestep, such as 0.002, without destructing another column. Such as label_encoding, that column is intended for machine learning y label purpose, especially multiclassification problem; it's segmentation region. It's unique values are (24, 1, 27). While bound_or_peak is intended for displaying or plotting the purpose of the region. It consists of 3 bits (the maximum value is 7). If the most significant bit set, then it started region to plot (onset). If the second bit is set, then it must be a peak of the ECG signal wave. If the least significant bit is set, then it must be an offset region to plot. Here is the table produced by this code: %load_ext google.colab.data_table import numpy as np import pandas as pd # Create a NumPy matrix with row and column labels matrix_data = signal.signal_arr dtype_dict = {'seconds': float, 'amplitude': float, 'label_encoding': int, 'bound_or_peak': int} # Convert the NumPy matrix to a Pandas DataFrame with labels df = pd.DataFrame(matrix_data, columns=dtype_dict.keys()).astype(dtype_dict) # Display the DataFrame df[:250] What I mean with without destruction another column is: after resampled, another column such as labels and bound_or_peak are located as is following df before resampled. While amplitude should have interpolated, especially linear interpolated. Actually, I have an idea to ignore the seconds column. Instead, that column can be compressed into a single value, such as in frequency sampling. So converting timestep to frequency sampling is a good idea, I think. 0.004 means 1/0.004; therefore, the frequency sampling is 250. Now the problem is how to resample or interpolate the amplitude to another frequency sampling without destructing another column. 
Update: As the commentator said, I should have used textual representation to show the table instead of a picture: index seconds amplitude label_encoding bound_or_peak 0 0.0 0.035 0 0 1 0.004 0.06 0 0 2 0.008 0.065 0 0 3 0.012 0.075 0 0 4 0.016 0.085 0 0 5 0.02 0.075 0 0 6 0.024 0.065 0 0 7 0.028 0.065 0 0 8 0.032 0.065 0 0 9 0.036000000000000004 0.07 0 0 10 0.04 0.075 0 0 11 0.044 0.075 0 0 12 0.048 0.075 0 0 13 0.052000000000000005 0.07 0 0 14 0.056 0.065 0 0 15 0.06 0.065 0 0 16 0.064 0.065 0 0 17 0.068 0.065 0 0 18 0.07200000000000001 0.065 0 0 19 0.076 0.06 0 0 20 0.08 0.055 0 0 21 0.084 0.04 0 0 22 0.088 0.03 0 0 23 0.092 0.015 0 0 24 0.096 0.0 0 0 25 0.1 -0.01 0 0 26 0.10400000000000001 -0.02 0 0 27 0.108 -0.03 0 0 28 0.112 -0.04 0 0 29 0.116 -0.05 0 0 30 0.12 -0.06 0 0 31 0.124 -0.07 0 0 32 0.128 -0.08 0 0 33 0.132 -0.09 0 0 34 0.136 -0.095 0 0 35 0.14 -0.09 0 0 36 0.14400000000000002 -0.085 0 0 37 0.148 -0.085 0 0 38 0.152 -0.085 0 0 39 0.156 -0.09 0 0 40 0.16 -0.095 0 0 41 0.164 -0.09 0 0 42 0.168 -0.085 0 0 43 0.17200000000000001 -0.085 0 0 44 0.176 -0.085 0 0 45 0.18 -0.085 0 0 46 0.184 -0.08 0 0 47 0.188 -0.075 0 0 48 0.192 -0.075 0 0 49 0.196 -0.075 0 0 50 0.2 -0.075 0 0 51 0.20400000000000001 -0.075 0 0 52 0.20800000000000002 -0.075 0 0 53 0.212 -0.075 0 0 54 0.216 -0.07 0 0 55 0.22 -0.065 0 0 56 0.224 -0.06 0 0 57 0.228 -0.055 0 0 58 0.232 -0.055 0 0 59 0.23600000000000002 -0.055 0 0 60 0.24 -0.065 0 0 61 0.244 -0.075 0 0 62 0.248 -0.075 0 0 63 0.252 -0.075 0 0 64 0.256 -0.07 0 0 65 0.26 -0.065 0 0 66 0.264 -0.06 0 0 67 0.268 -0.06 0 0 68 0.272 -0.07 0 0 69 0.276 -0.075 0 0 70 0.28 -0.075 0 0 71 0.28400000000000003 -0.075 0 0 72 0.28800000000000003 -0.075 0 0 73 0.292 -0.07 0 0 74 0.296 -0.06 0 0 75 0.3 -0.06 0 0 76 0.304 -0.07 0 0 77 0.308 -0.075 0 0 78 0.312 -0.08 0 0 79 0.316 -0.085 0 0 80 0.32 -0.085 0 0 81 0.324 -0.085 0 0 82 0.328 -0.08 0 0 83 0.332 -0.075 0 0 84 0.336 -0.075 0 0 85 0.34 -0.08 0 0 86 0.34400000000000003 -0.085 24 4 87 0.34800000000000003 -0.08 24 0 88 0.352 -0.075 24 0 89 0.356 -0.06 24 0 90 0.36 -0.045 24 0 91 0.364 -0.035 24 0 92 0.368 -0.025 24 0 93 0.372 -0.025 24 0 94 0.376 -0.025 24 0 95 0.38 -0.02 24 0 96 0.384 -0.015 24 0 97 0.388 -0.01 24 0 98 0.392 -0.005 24 0 99 0.396 0.005 24 0 100 0.4 0.02 24 0 101 0.404 0.035 24 0 102 0.40800000000000003 0.045 24 2 103 0.41200000000000003 0.05 24 0 104 0.41600000000000004 0.055 24 0 105 0.42 0.05 24 0 106 0.424 0.035 24 0 107 0.428 0.015 24 0 108 0.432 -0.005 24 0 109 0.436 -0.035 24 0 110 0.44 -0.05 24 0 111 0.444 -0.065 24 1 112 0.448 -0.08 0 0 113 0.452 -0.09 0 0 114 0.456 -0.095 0 0 115 0.46 -0.09 0 0 116 0.464 -0.085 0 0 117 0.468 -0.09 0 0 118 0.47200000000000003 -0.095 0 0 119 0.47600000000000003 -0.095 0 0 120 0.48 -0.095 0 0 121 0.484 -0.1 0 0 122 0.488 -0.105 0 0 123 0.492 -0.105 0 0 124 0.496 -0.105 0 0 125 0.5 -0.105 0 0 126 0.504 -0.115 0 0 127 0.508 -0.115 0 0 128 0.512 -0.11 0 0 129 0.516 -0.105 0 0 130 0.52 -0.105 0 0 131 0.524 -0.105 0 0 132 0.528 -0.095 0 0 133 0.532 -0.085 0 0 134 0.536 -0.09 0 0 135 0.54 -0.095 0 0 136 0.544 -0.09 0 0 137 0.548 -0.085 0 0 138 0.552 -0.08 1 4 139 0.556 -0.075 1 0 140 0.56 -0.08 1 0 141 0.5640000000000001 -0.07 1 0 142 0.5680000000000001 -0.025 1 0 143 0.5720000000000001 0.075 1 0 144 0.5760000000000001 0.25 1 0 145 0.58 0.54 1 0 146 0.584 0.96 1 0 147 0.588 1.41 1 2 148 0.592 1.885 1 0 149 0.596 1.735 1 0 150 0.6 1.09 1 0 151 0.604 0.35 1 0 152 0.608 -0.455 1 0 153 0.612 -0.725 1 0 154 0.616 -0.705 1 0 155 0.62 -0.54 1 0 156 0.624 -0.315 1 0 157 0.628 
-0.195 1 0 158 0.632 -0.115 1 1 159 0.636 -0.09 0 0 160 0.64 -0.08 0 0 161 0.644 -0.075 0 0 162 0.648 -0.08 0 0 163 0.652 -0.085 0 0 164 0.656 -0.085 0 0 165 0.66 -0.085 0 0 166 0.664 -0.08 0 0 167 0.668 -0.08 0 0 168 0.672 -0.085 0 0 169 0.676 -0.085 0 0 170 0.68 -0.085 0 0 171 0.684 -0.075 0 0 172 0.6880000000000001 -0.065 0 0 173 0.6920000000000001 -0.07 0 0 174 0.6960000000000001 -0.075 0 0 175 0.7000000000000001 -0.07 0 0 176 0.704 -0.065 0 0 177 0.708 -0.06 0 0 178 0.712 -0.055 0 0 179 0.716 -0.05 0 0 180 0.72 -0.045 0 0 181 0.724 -0.04 0 0 182 0.728 -0.035 27 4 183 0.732 -0.035 27 0 184 0.736 -0.035 27 0 185 0.74 -0.035 27 0 186 0.744 -0.035 27 0 187 0.748 -0.03 27 0 188 0.752 -0.02 27 0 189 0.756 -0.01 27 0 190 0.76 -0.005 27 0 191 0.764 0.0 27 0 192 0.768 0.005 27 0 193 0.772 0.005 27 0 194 0.776 0.005 27 0 195 0.78 0.01 27 0 196 0.784 0.025 27 0 197 0.788 0.04 27 0 198 0.792 0.045 27 0 199 0.796 0.05 27 0 200 0.8 0.055 27 0 201 0.804 0.055 27 0 202 0.808 0.055 27 0 203 0.812 0.06 27 0 204 0.8160000000000001 0.065 27 0 205 0.8200000000000001 0.07 27 0 206 0.8240000000000001 0.085 27 0 207 0.8280000000000001 0.1 27 0 208 0.8320000000000001 0.105 27 0 209 0.836 0.105 27 0 210 0.84 0.11 27 0 211 0.844 0.115 27 0 212 0.848 0.12 27 0 213 0.852 0.125 27 0 214 0.856 0.12 27 2 215 0.86 0.115 27 0 216 0.864 0.115 27 0 217 0.868 0.115 27 0 218 0.872 0.115 27 0 219 0.876 0.115 27 0 220 0.88 0.115 27 0 221 0.884 0.115 27 0 222 0.888 0.115 27 0 223 0.892 0.115 27 0 224 0.896 0.11 27 0 225 0.9 0.105 27 0 226 0.904 0.1 27 0 227 0.908 0.09 27 0 228 0.912 0.07 27 0 229 0.916 0.05 27 0 230 0.92 0.035 27 0 231 0.924 0.015 27 0 232 0.928 -0.005 27 0 233 0.932 -0.02 27 0 234 0.936 -0.03 27 0 235 0.9400000000000001 -0.04 27 0 236 0.9440000000000001 -0.05 27 0 237 0.9480000000000001 -0.055 27 0 238 0.9520000000000001 -0.06 27 0 239 0.9560000000000001 -0.07 27 1 240 0.96 -0.08 0 0 241 0.964 -0.085 0 0 242 0.968 -0.085 0 0 243 0.972 -0.085 0 0 244 0.976 -0.085 0 0 245 0.98 -0.085 0 0 246 0.984 -0.08 0 0 247 0.988 -0.075 0 0 248 0.992 -0.08 0 0 249 0.996 -0.085 0 0 | This is a straightforward application of resample(), but you have to make some aggregation decisions. from io import StringIO import pandas as pd content = ''' index seconds amplitude label_encoding bound_or_peak 0 0.0 0.035 0 0 1 0.004 0.06 0 0 2 0.008 0.065 0 0 3 0.012 0.075 0 0 4 0.016 0.085 0 0 5 0.02 0.075 0 0 6 0.024 0.065 0 0 ... 245 0.98 -0.085 0 0 246 0.984 -0.08 0 0 247 0.988 -0.075 0 0 248 0.992 -0.08 0 0 249 0.996 -0.085 0 0 ''' with StringIO(content) as file: df = pd.read_csv(file, delim_whitespace=True) df['seconds'] *= pd.Timedelta(1, 's') df.set_index('seconds', drop=True, inplace=True) sampler = df.resample(rule='2ms') resampled = sampler.nearest()[['index', 'label_encoding']] resampled['amplitude'] = sampler.interpolate('time')['amplitude'] resampled['bound_or_peak'] = sampler.asfreq(fill_value=0)['bound_or_peak'] pd.options.display.width = 200 pd.options.display.max_columns = 10 print(resampled) index label_encoding amplitude bound_or_peak seconds 0 days 00:00:00 0 0 0.0350 0 0 days 00:00:00.002000 1 0 0.0475 0 0 days 00:00:00.004000 1 0 0.0600 0 0 days 00:00:00.006000 2 0 0.0625 0 0 days 00:00:00.008000 2 0 0.0650 0 ... ... ... ... ... 0 days 00:00:00.988000 247 0 -0.0750 0 0 days 00:00:00.990000 248 0 -0.0775 0 0 days 00:00:00.992000 248 0 -0.0800 0 0 days 00:00:00.994000 249 0 -0.0825 0 0 days 00:00:00.996000 249 0 -0.0850 0 [499 rows x 4 columns] | 2 | 1 |
77,866,314 | 2024-1-23 | https://stackoverflow.com/questions/77866314/converting-sklearn-model-to-core-ml-model-for-ios | I have created sklearn model for predicting ESG Scores. But for imputation reasons my input features matrix X were converted to np.ndarray. I found on apple developers website how to convert them, but there is optional kwargs input_features and output_features. import coremltools coreml_model = coremltools.converters.sklearn.convert(model, ["bedroom", "bath", "size"], "price") coreml_model.save('HousePricer.mlmodel') I am wondering is it necessary match names of features in X with value I am sending to convert() method with input_features argument? I.e. imputer = KNNImputer() # Impute missing values in X X_imputed = imputer.fit_transform(X) X_imputed Out: array([[ 5.00000e+00, 1.80000e+04, 3.27000e+08, ..., -2.20000e+07, 2.64000e+08, 2.64000e+08], [ 6.00000e+00, 1.32500e+05, 9.00000e+06, ..., 9.38000e+08, 1.90000e+07, 1.90000e+07], [ 2.00000e+00, 4.00000e+04, 2.37765e+08, ..., -1.88580e+07, 1.78696e+08, 1.78696e+08], ..., ]) So there are no labels now. The question is how to convert it to Core ML Model now, with no labels for input_features. Unfortunately I don't have access to macOS to try different approaches and test them. I would be grateful if you showed not only how to convert but also sample usage of the model with Swift. (Model based on Random Forest Regressor) | There is dedicated method called ct.converters.sklearn.convert for this goal. Here is a full example of the conversion: from sklearn.linear_model import LinearRegression import numpy as np import coremltools as ct # Load data X = np.random.rand(10,5).astype(np.ndarray) y = np.random.rand(10).astype(np.ndarray) # Train a model model = LinearRegression() model.fit(X, y) # save the scikit-learn model coreml_model = ct.converters.sklearn.convert(model) coreml_model.save('Predictor.mlmodel') Printing coreml_model you get: In [13]: coreml_model Out[13]: input { name: "input" type { multiArrayType { shape: 5 dataType: DOUBLE } } } output { name: "prediction" type { doubleType { } } } predictedFeatureName: "prediction" metadata { userDefined { key: "com.github.apple.coremltools.source" value: "scikit-learn==1.1.1" } userDefined { key: "com.github.apple.coremltools.version" value: "7.1" } } | 3 | 1 |
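The question also asked how the converted model is actually called. The accepted answer stops at conversion, so here is a minimal Python-side check — a sketch, assuming the default input/output names shown in the printed spec above ("input" / "prediction"); Core ML prediction itself only runs on macOS, and the Swift side is not sketched here:

```python
import numpy as np
import coremltools as ct

# Prediction requires macOS; conversion alone works on other platforms.
loaded = ct.models.MLModel("Predictor.mlmodel")

x = np.random.rand(5)                    # one sample with 5 features, matching "shape: 5"
result = loaded.predict({"input": x})    # the key must match the input name in the spec
print(result["prediction"])
```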
77,866,543 | 2024-1-23 | https://stackoverflow.com/questions/77866543/create-pydantic-computed-field-with-invalid-syntax-name | I have to model a pydantic class from a JSON object which contains some invalid syntax keys. As an example: example = { "$type": "Menu", "name": "lunch", "children": [ {"$type": "Pasta", "title": "carbonara"}, {"$type": "Meat", "is_vegetable": false}, ] } My pydantic classes at the moment looks like: class Pasta(BaseModel): title: str class Meat(BaseModel): is_vegetable: bool class Menu(BaseModel): name: str children: list[Pasta | Meat] Now, this work except for $type field. If the field was called "dollar_type", I would simply create the following TranslationModel base class and let Pasta, Meat and Menu inherit from TranslationModel: class TranslationModel(BaseModel): @computed_field def dollar_type(self) -> str: return self.__class__.__name__ so that by executing Menu(**example).model_dump() I get { 'dollar_type': 'Menu', 'name': 'lunch', 'children': [ {'dollar_type': 'Pasta', 'title': 'carbonara'}, {'dollar_type': 'Meat', 'is_vegetable': False} ] } But sadly I have to strictly follow the original json structure, so I have to use $type. I have tried using alias and model_validator by following the documentation but without success. How could I solve this? Thanks in advance | This very much looks like you would rather apply a discriminated union pattern. See the following example: from pydantic import BaseModel, Field from typing import Literal, Annotated example = { "$type": "Menu", "name": "lunch", "children": [ {"$type": "Pasta", "title": "carbonara"}, {"$type": "Meat", "is_vegetable": False}, ] } class Pasta(BaseModel): type: Literal["Pasta"] = Field("Pasta", alias="$type") title: str class Meat(BaseModel): type: Literal["Meat"] = Field("Meat", alias="$type") is_vegetable: bool AnyDish = Annotated[Pasta | Meat, Field(discriminator="type")] class Menu(BaseModel): name: str children: list[AnyDish] menu = Menu.model_validate(example) print(menu) menu.model_dump(by_alias=True) Which prints: name='lunch' children=[Pasta(type='Pasta', title='carbonara'), Meat(type='Meat', is_vegetable=False)] {'name': 'lunch', 'children': [{'$type': 'Pasta', 'title': 'carbonara'}, {'$type': 'Meat', 'is_vegetable': False}]} This pattern has multiple advantages: It decouples the class name from the tag. This is usually preferred, because class names might change. But you might still want to read old files. The pattern is extensible and explicit. You can easily add new types of dishes later and just include them in the AnyDish type. To reduce verbosity one can also introduce a short-cut like: class TypeLiteral: def __class_getitem__(cls, tag: str): return Annotated[Literal[tag], Field(default=tag, alias="$type")] class Pasta(BaseModel): type: TypeLiteral["Pasta"] title: str class Meat(BaseModel): type: TypeLiteral["Meat"] is_vegetable: bool You can find more about discriminated unions in the pydantic docs: https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions I hope this is useful! | 2 | 0 |
77,866,553 | 2024-1-23 | https://stackoverflow.com/questions/77866553/write-selected-columns-from-pandas-to-excel-with-xlsxwriter-efficiently | Iβm trying to write some of the columns of a pandas dataframe to an Excel file, but itβs taking too long. Iβm currently using xlsxwriter to write all of the data to the file, which is much faster. How can I write only some of the columns to the file without sacrificing performance? My code for writing selected columns so far: for col_num, col_name in enumerate(zip(column_indexes, selected_columns)): for row_num in range(2, max_rows): value = df.iloc[row-2][col] if pd.isna(value): value ='' worksheet.write(row_num-1, col_num, value ) | You can use the write_column method for col_num, col_name in enumerate(selected_columns): col_data = df[col_name].tolist() worksheet.write_column(1, col_num, col_data) You can take a look and learn more about here: https://xlsxwriter.readthedocs.io/worksheet.html | 2 | 1 |
77,866,010 | 2024-1-23 | https://stackoverflow.com/questions/77866010/typevar-in-a-nested-type-specification | Let's say I have a type defined with TypeVar, like so: T = TypeVar('T') MyType = Union[list[T], tuple[T]] def func(a: MyType[int]): pass I also want to have an optional version of it: MyTypeOpt = Optional[MyType] def opt_func(a: MyTypeOpt[int] = None): pass But this is not to mypy's liking and I'm getting an error: Bad number of arguments for type alias, expected: 0, given: 1 [type-arg]. Is there a way to make it work without writing Optional[MyType[...]] every time? | I think you are missing the type parameter for your generic type MyType. mypy is also complaining about that. from typing import Optional, Sequence, Tuple, Union from typing_extensions import TypeVar T = TypeVar('T') MyType = Union[Sequence[T], Tuple[T]] MyTypeOpt = Optional[MyType[T]] # <--------- def func(a: MyType[int]) -> None: pass def opt_func(a: MyTypeOpt[int] = None) -> None: pass def opt_func2(a: Optional[MyType[int]] = None) -> None: pass | 2 | 3 |
77,865,964 | 2024-1-23 | https://stackoverflow.com/questions/77865964/merge-resulted-column-from-pandas-dataframe-based-on-condition | I have two dataframes as below df1 = pd.DataFrame( { "names": ['alpha', 'bravo', 'charlie', 'delta', 'echo', 'foxtrot', 'golf'], "Debit": [0, 5000, 0, 5000, 3000, 0, 700], "Credit": [1000, 0, 2000, 0, 0, 8000, 0], } ) and df2 = pd.DataFrame( { "names": ['alpha', 'bravo', 'charlie', 'delta', 'echo', 'foxtrot'], "db_head": [1, 1, 1, 1, 1, 1], "cr_head": [2, 2, 2, 2, 2, 2], } ) The output I want is: names Debit Credit head 0 alpha 0 1000 2 1 bravo 5000 0 1 2 charlie 0 2000 2 3 delta 5000 0 1 4 echo 3000 0 1 5 foxtrot 0 8000 2 I tried to merge but did not understand how to get the value from two of the last columns based on the current df's column values if merging without condition print(df1.merge(df2, how="inner", on="names")) by simply merging both of the dataframes resulted like this names Debit Credit db_head cr_head 0 alpha 0 1000 1 2 1 bravo 5000 0 1 2 2 charlie 0 2000 1 2 3 delta 5000 0 1 2 4 echo 3000 0 1 2 5 foxtrot 0 8000 1 2 tried these two methods but both of methods are giving error df1['acchead'] = [df2[df2['names'] == x].db_head.item() if y > 0 else df2[df2['names'] == x].cr_head.item() for x, y in [df1['names'], df1['Debit']]] df1['acchead'] = [df2[df2['names'] == x].db_head.item() if df1.Debit.item() > 0 else df2[df2['names'] == x].cr_head.item() for x in [df1['names']]] Any Help would be appreciated. | You can merge, then post-process the output with pop and where: out = df1.merge(df2, on='names', how='inner') out['head'] = out.pop('db_head').where(out['Debit'].ne(0), out.pop('cr_head')) Output: names Debit Credit head 0 alpha 0 1000 2 1 bravo 5000 0 1 2 charlie 0 2000 2 3 delta 5000 0 1 4 echo 3000 0 1 5 foxtrot 0 8000 2 Other approach with a reshaping before the merge: (df1.assign(variable=np.where(df1['Debit'].ne(0), 'db_head', 'cr_head')) .merge(df2.melt('names', value_name='head'), on=['names', 'variable']) #.drop(columns='variable') ) Output: names Debit Credit variable head 0 alpha 0 1000 cr_head 2 1 bravo 5000 0 db_head 1 2 charlie 0 2000 cr_head 2 3 delta 5000 0 db_head 1 4 echo 3000 0 db_head 1 5 foxtrot 0 8000 cr_head 2 | 2 | 2 |
77,864,140 | 2024-1-23 | https://stackoverflow.com/questions/77864140/delete-2-column-headers-and-shift-the-whole-column-row-to-left-in-dataframe | Delete 2 column headers and shift the whole column row to left in dataframe. Below is my dataframe. trash1 trash2 name age 0 john 54 1 mike 34 2 suz 56 3 tin 78 4 yan 31 i need the final df as below name age 0 john 54 1 mike 34 2 suz 56 3 tin 78 4 yan 31 I tried all the commands but its deleting the whole column. Please help me in this. | You can slice with iloc and rename with set_axis: N = 2 out = (df.iloc[:, :-N] .set_axis(df.columns[N:], axis=1) ) Output: name age 0 john 54 1 mike 34 2 suz 56 3 tin 78 4 yan 31 Timing of answers for a generalized number of columns: Overall manually setting the name is reliably the fastest since the data is fully untouched, independently of the number of columns. The nice pop trick becomes very greedy after a threshold. Some of the approaches like shift+drop additionally cause the DataFrame to become fragmented | 2 | 2 |
77,864,953 | 2024-1-23 | https://stackoverflow.com/questions/77864953/how-to-add-a-column-or-change-data-in-each-group-after-using-group-by-in-pandas | I am now using Pandas to handle some data. After I used group by in pandas, the simplified DataFrame's format is [MMSI(Vessel_ID), BaseTime, Location, Speed, Course,...]. I use for MMSI, group in grouped_df: print(MMSI) print(group) to print the data. For example, one group of data is: MMSI BaseDateTime LAT LON SOG COG 1507 538007509.0 2022-12-08T00:02:25 49.29104 -123.19135 0.0 9.6 1508 538007509.0 2022-12-08T00:05:25 49.29102 -123.19138 0.0 9.6 I want to add a column which is the time difference of two points. Below is the Output I want MMSI BaseDateTime LAT LON SOG COG Time-diff 1507 538007509.0 2022-12-08T00:02:25 49.29104 -123.19135 0.0 9.6 0.05(hours) 1508 538007509.0 2022-12-08T00:05:25 49.29102 -123.19138 0.0 9.6 Na So I use the code below to try to get the result: for MMSI, group in grouped_df: group = group.sort_values(by='BaseDateTime') group['new-time'] = group.shift(-1)['BaseDateTime'] group.dropna() for x in group.index: group.loc[x,'time-diff'] = get_timediff(group.loc[x,'new-time'],group.loc[x,'BaseDateTime']) # A function to calculate the time difference group['GROUP'] = group['time-diff'].fillna(np.inf).ge(2).cumsum() # When Time-diff >= 2hours split them into different group I can use print to show group['GROUP'] and group['time-diff']. The result is not shown after I tried to visit grouped_df again. There's a warning showing that my group in grouped_df is just a copy of a slice from a DataFrame and it recommend me using .loc[row_indexer,col_indexer] = value instead. But in this case I don't know how to use .loc to visit the specific [row,col]. At the very beginning, I tried to use grouped_df['new-time'] = grouped_df.shift(-1)['BaseDateTime'] grouped_df.dropna() But it shows 'DataFrameGroupBy' object does not support item assignment Now my solution is create an empty_df and then concatenate the groups in grouped_df step by step like this: df['time-diff'] = pd.Series(dtype='float64') df['GROUP'] = pd.Series(dtype='int') grouped_df = df.groupby('MMSI') for MMSI, group in grouped_df: # ... as the same as the code above group = group.sort_values(by='BaseDateTime') group['new-time'] = group.shift(-1)['BaseDateTime'] group.dropna() for x in group.index: group.loc[x,'time-diff'] = get_timediff(group.loc[x,'new-time'],group.loc[x,'BaseDateTime']) # A function to calculate the time difference group['GROUP'] = group['time-diff'].fillna(np.inf).ge(2).cumsum() # ... as the same as the code above frame = [empty_df, group] empty_df = pd.concat(frames) I am not satisfied with this solution but I didn't find a proper way to change the value in grouped_df. I'm now trying to use the solution from this question to handle the DataFrame before group by. Can someone help me? 
| Don't use a loop, directly go with groupby.shift: s = pd.to_datetime(df['BaseDateTime']) df['Time-diff'] = (s.groupby(df['MMSI']).shift(-1) .sub(s).dt.total_seconds().div(3600) ) Or groupby.diff: s = pd.to_datetime(df['BaseDateTime']) df['Time-diff'] = (s.groupby(df['MMSI']).diff(-1) .mul(-1).dt.total_seconds().div(3600) ) Output: MMSI BaseDateTime LAT LON SOG COG Time-diff 1507 538007509.0 2022-12-08T00:02:25 49.29104 -123.19135 0.0 9.6 0.05 1508 538007509.0 2022-12-08T00:05:25 49.29102 -123.19138 0.0 9.6 NaN 1509 538007510.0 2022-12-08T00:02:25 49.29104 -123.19135 0.0 9.6 0.05 1510 538007510.0 2022-12-08T00:05:25 49.29102 -123.19138 0.0 9.6 NaN 1511 538007511.0 2022-12-08T00:02:25 49.29104 -123.19135 0.0 9.6 0.05 1523 538007511.0 2022-12-08T00:05:25 49.29102 -123.19138 0.0 9.6 NaN | 3 | 2 |
77,862,489 | 2024-1-22 | https://stackoverflow.com/questions/77862489/why-does-adding-the-decorator-lru-cachefrom-functools-break-this-function | The function is a part of the solution to the following problem: "Find all valid combinations of k numbers that sum up to n such that the following conditions are true: Only numbers 1 through 9 are used. Each number is used at most once. Return a list of all possible valid combinations. The list must not contain the same combination twice, and the combinations may be returned in any order." An example test case which fails with the code: import functools @functools.lru_cache(maxsize=None) def CombSum(k,n,num): if k<=0 or n<=0 or num>9: return [] if k==1 and n>=num and n<=9: return [[n]] ans=[] while num <= 9: for arr in CombSum(k-1,n-num,num+1): arr.append(num) ans.append(arr) num+=1 return ans print(CombSum(4,16,1)) produces [[9, 4, 2, 1], [8, 5, 2, 1], [7, 6, 2, 1], [8, 4, 3, 1], [7, 5, 3, 1], [6, 5, 4, 1, 5, 3, 2], [7, 4, 3, 2], [6, 5, 4, 1, 5, 3, 2]] The above result is incorrect and when I remove the decorator below, everything works fine: import functools def CombSum(k,n,num): if k<=0 or n<=0 or num>9: return [] if k==1 and n>=num and n<=9: return [[n]] ans=[] while num <= 9: for arr in CombSum(k-1,n-num,num+1): arr.append(num) ans.append(arr) num+=1 return ans print(CombSum(4,16,1)) produces [[9, 4, 2, 1], [8, 5, 2, 1], [7, 6, 2, 1], [8, 4, 3, 1], [7, 5, 3, 1], [6, 5, 4, 1], [7, 4, 3, 2], [6, 5, 3, 2]] This is the correct answer. Why is the decorator breaking the function? | You violated the decoratorβs contract. Youβre mutating elements of the returned list. The decorator should only be applied to Pure functions. You defined a Public API that returns a list of lists. Switch to returning an immutable tuple of tuples. Then your approach will be compatible with the LRU decorator. | 3 | 2 |
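A sketch of the change the answer describes — the question's recursion rewritten as a pure function that returns immutable tuples, so the values held in the cache can no longer be mutated by callers:

```python
import functools


@functools.lru_cache(maxsize=None)
def comb_sum(k, n, num):
    if k <= 0 or n <= 0 or num > 9:
        return ()
    if k == 1 and num <= n <= 9:
        return ((n,),)
    ans = []
    while num <= 9:
        for combo in comb_sum(k - 1, n - num, num + 1):
            # build a new tuple instead of appending to a (possibly cached) list
            ans.append(combo + (num,))
        num += 1
    return tuple(ans)


print(comb_sum(4, 16, 1))
```

With no in-place mutation anywhere, memoisation can no longer corrupt previously returned results.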
77,861,284 | 2024-1-22 | https://stackoverflow.com/questions/77861284/regex-algorithm-in-python-to-extract-comments-from-class-attributes | Given the code for a class definition, I am trying to extract all attributes and their comments ("" empty string if no comments). class Player(Schema): score = fields.Float() """ Total points from killing zombies and finding treasures """ name = fields.String() age = fields.Int() backpack = fields.Nested( PlayerBackpackInventoryItem, missing=[PlayerBackpackInventoryItem.from_name("knife")], ) """ Collection of items that a player can store in their backpack """ In the above example, we expected the parsed result to be: [ ("score", "Total points from killing zombies and finding treasures"), ("name", ""), ("age", ""), ("backpack", "Collection of items that a player can store in their backpack") ] In my attempt below, it is failing to extract the comments properly, giving an output: [ ('score', 'Total points from killing zombies and finding treasures'), ('name', ''), ('age', ''), ('backpack', '') ] How can the regex expression (or even the entire parsing logic) be fixed to handle the situations present in the example class code? Thanks import re code_block = '''class Player(Schema): score = fields.Float() """ Total points from killing zombies and finding treasures """ name = fields.String() age = fields.Int() backpack = fields.Nested( PlayerBackpackInventoryItem, missing=[PlayerBackpackInventoryItem.from_name("knife")], ) """ Collection of items that a player can store in their backpack """ ''' def parse_schema_comments(code): # Regular expression pattern to match field names and multiline comments pattern = r'(\w+)\s*=\s*fields\.\w+\([^\)]*\)(?:\n\s*"""\n(.*?)\n\s*""")?' # Find all matches using the pattern matches = re.findall(pattern, code, re.DOTALL) # Process the matches to format them as required result = [] for match in matches: field_name, comment = match comment = comment.strip() if comment else "" result.append((field_name, comment)) return result parsed_comments = parse_schema_comments(code_block) print(parsed_comments) | I might use ast, e.g.: import ast code_block = '''class Player(Schema): score = fields.Float() """ Total points from killing zombies and finding treasures """ name = fields.String() age = fields.Int() backpack = fields.Nested( PlayerBackpackInventoryItem, missing=[PlayerBackpackInventoryItem.from_name("knife")], ) """ Collection of items that a player can store in their backpack """ ''' doc = {} for x in ast.parse(code_block).body[0].body: if isinstance(x, ast.Assign): name = x.targets[0].id doc[name] = '' else: doc[name] = x.value.value.strip() print(list(doc.items())) Result (Attempt This Online!): [('score', 'Total points from killing zombies and finding treasures'), ('name', ''), ('age', ''), ('backpack', 'Collection of items that a player can store in their backpack')] | 2 | 2 |
77,861,508 | 2024-1-22 | https://stackoverflow.com/questions/77861508/python-break-a-timer-repeat-after-time-went-zero-on-raspberry-pi | i am programming a pythonscript with guizero. there is a PushButton and its command runs a function. This function ist build after a PomodoraTimer Tutorial i found on youtube. As far, it works, but i want it to stop, after the alarm is running. The Code is the following: #UserInterface, GPIO-Steuerung und Timer-Steuerung importieren from guizero import App, Text, TextBox, PushButton, Box, Window import RPi.GPIO as GPIO import time import datetime #this is what gets triggered by the Pushbutton def startTimer(): global timer if (set_time_input.value == "" and set_time_input_min.value == ""): currentTimeLeft = "3" #60 #set timer to 1 minuit if textfield empty else: if (set_time_input.value != "" and set_time_input_min.value == ""): currentTimeLeft = int(set_time_input.value) else: currentTimeLeft = int(set_time_input_min.value) * 60 #Countdownanzeige timer = Text(timer_box, text="\r"+str(currentTimeLeft), size=30, color="red") timerbottom = Text(timer_box, text="\n Sekunden.") timer.repeat(1000, reduceTime) def reduceTime(): global timer if int(timer.value) > 0: timer.value = int(timer.value) -1 else: start_alarm_1() #this triggers a GPIO on my Raspi for 3 seconds #exit() #####this is just the rest of the app if ou want to build it and test it: #GPIO Steuerung initialisieren GPIO.setmode(GPIO.BCM) GPIO.setwarnings(False) GPIO.cleanup() #Start Alarm -> i've got a remote alarm that works this way ;) def start_alarm_1(): GPIO.setup(20, GPIO.OUT) GPIO.output(20, GPIO.LOW) time.sleep(3) GPIO.output(20, GPIO.HIGH) #App-Fenster initialisieren app = App(title="OSCE Timer", width=800, height=400) #textfields to set timer time textfield_msg = Text(app, text="Timer in seconds:") set_time_input = TextBox(app) textfield_msg = Text(app, text="OR timer in Minutes:") set_time_input_min = TextBox(app) run_timer_button = PushButton(app, command=startTimer, text="start Countdown") #timer window in app timerTitle = Text(app, text="Time:\n") #Appwindow app.display() My problem is: if in the reduceTime() the timer.value gets "0", then the alarm starts over and over again. this is because the timer.repeat() function is triggering the reduceTime() function over and over again - for ever. Now i had multiple ideas: build a while-loop around the repeat() function, add return -1 statements to different places, but none of it worked. I would be so happy if anyone could help me! (the problem with the while statment: it doesnt display my timer anymore because the function is stucked in the while loop obv.) | Damn, after hours of trying i posted this - and few seconds later I found a solution! I had to add the following line insted of the '#exit()' statement i also worked with: time.cancel(reduceTime) | 3 | 1 |
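For reference, a drop-in sketch of `reduceTime` from the question with the cancellation added. It assumes guizero's `cancel()` is called on the same widget that scheduled the repeat (`timer` here), since the library pairs `repeat()` and `cancel()` on the widget:

```python
def reduceTime():
    global timer
    if int(timer.value) > 0:
        timer.value = int(timer.value) - 1
    else:
        start_alarm_1()
        # stop the callback scheduled by timer.repeat(1000, reduceTime),
        # otherwise the alarm keeps firing every second after reaching zero
        timer.cancel(reduceTime)
```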
77,847,952 | 2024-1-19 | https://stackoverflow.com/questions/77847952/find-a-substring-of-4-consecutive-digits-and-a-substring-of-3-consecutive-digits | I have strings of the type: 1432_ott_457_blusp 312_fooob_bork_1234 broz_901_6453 kkhas_1781_LET_GROK_234 1781_234_kkhas etc. In other words, each string contains multiple substrings delimited by _. The total number of substrings is variable. I look for a substring containing 4 digits, and another one containing 3 digits. As you can see, the two substrings can be located anywhere inside the string. A solution such as import re three_digits = re.findall('\d{3}') doesn't work, because it will match both the 3 digits substring and the first 3 digits of the 4 digits one. A solution which assumes that both the 3 digits and the 4 digits substring exist is fine, but one that checks this precondition would be even better. | If you want to stick with regular expressions, you're almost there: import re input = """1432_ott_457_blusp 312_fooob_bork_1234 broz_901_6453 kkhas_1781_LET_GROK_234 1781_234_kkhas""" for s in input.splitlines(): print(re.findall(r'\d{3,4}', s)) This doesn't take advantage of the underscore delimiter rule so if you have input like 123abc_000_1111 it will trigger on the 123 as well. If you need to handle such input the regular expression gets a little more complicated: >>> re.findall(r'(?:_|^)(\d{3,4})(?=_|$)', '999_123abc_abc234_000_1111') ['999', '000', '1111'] # (?:_|^)(\d{3,4})(?=_|$) # ^^^^^^^---------------- non-capturing group that ensures the # digits start with either '_' or start-of-line # ^^^^^^^^^------- capture group for the digits you're looking for # ^^^^^^^ lookahead to ensure the digits end with either # '_' or end-of-line The syntax used above (and much more!) is explained in the Regular Expression HOWTO guide. | 2 | 5 |
77,858,217 | 2024-1-22 | https://stackoverflow.com/questions/77858217/3d-antenna-radiation-plot-in-spherical-coorinates-problem | I want to plot 3D antenna radiation pattern from CSV file, after data loading i have got: Theta Phi Dir 0 0.000 0.0 8.272 1 5.000 0.0 8.221 2 10.000 0.0 8.064 3 15.000 0.0 7.804 4 20.000 0.0 7.444 ... ... ... ... 2659 160.000 355.0 -13.240 2660 165.000 355.0 -12.330 2661 170.000 355.0 -11.460 2662 175.000 355.0 -10.870 2663 180.000 355.0 -10.670 so Theta chages from 0 to 180 deg, and Phi 0 to 355. Then i run code: import pandas as pd import numpy as np from array import * import numpy as np import pandas as pd import plotly.graph_objects as go from plotly.offline import plot # Read data from CSV file df = pd.read_csv('output_file.csv') # reshaping data for plot theta1d = df['Theta'] theta1d = np.array(theta1d) theta1d_unique = np.unique(theta1d) angle_count_theta = len(theta1d_unique) phi1d = df['Phi'] phi1d = np.array(phi1d) phi1d_unique = np.unique(phi1d) angle_count_phi = len(phi1d_unique) theta2d = theta1d.reshape([angle_count_theta, angle_count_phi]) phi2d = phi1d.reshape([angle_count_theta, angle_count_phi]) dir1d = df['Dir'] dir1d = np.array(dir1d) R = np.empty((angle_count_theta, angle_count_phi)) row_count = 0 for j in range(angle_count_phi): for i in range(angle_count_theta): R[i,j] = dir1d[row_count * angle_count_theta + i] row_count += 1 THETA = np.deg2rad(theta2d) PHI = np.deg2rad(phi2d) THETA = THETA.reshape(R.shape[0],R.shape[1]) PHI = PHI.reshape(R.shape[0],R.shape[1]) # transformation of spherical data X = R * np.sin(THETA) * np.cos(PHI) Y = R * np.sin(THETA) * np.sin(PHI) Z = R * np.cos(THETA) min_X = np.min(X) max_X = np.max(X) min_Y = np.min(Y) max_Y = np.max(Y) layout = go.Layout(title="3D Radiation Pattern of 5G CW data", xaxis = dict(range=[min_X,max_X],), yaxis = dict(range=[min_Y,max_Y],)) fig = go.Figure(data=[go.Surface(x=X, y=Y, z=Z, surfacecolor=R, colorscale='mygbm', colorbar = dict(title = "Gain", thickness = 50, xpad = 500))], layout = layout) fig.update_layout(autosize = True, margin = dict(l = 50, r = 50, t = 250, b = 250)) plot(fig) Here are plots. What I have from python script: and From x-y side: (from 'top') we can see that there is no smooth connection between values, but sphere back to (0, 0, 0) point. The proper radiation pattern: How to solve this? | I think that you are just getting row/column order muddled up (multiple times). Theta changes fastest in your input. Note that you will also get a small gap in your plot unless your input data file includes the full-circle value for phi=360 degrees at the end. I don't have your data, so I had to generate my own (see the Fortran program at the bottom of the post - its much simpler than your actual data). However, see if this solution works for your file. import pandas as pd import numpy as np from array import * import numpy as np import pandas as pd import plotly.graph_objects as go from plotly.offline import plot # Read data from CSV file df = pd.read_csv('output_file.csv') # Separate columns theta1d = np.array( df['Theta'] ) phi1d = np.array( df['Phi'] ) dir1d = np.array( df['Dir'] ) # Reshape for plot. 
NOTE: theta varies fastest in your input file angle_count_theta = len( np.unique( theta1d ) ) angle_count_phi = len( np.unique( phi1d ) ) theta2d = theta1d.reshape( [ angle_count_phi, angle_count_theta ] ) phi2d = phi1d .reshape( [ angle_count_phi, angle_count_theta ] ) R = np.empty ( ( angle_count_phi, angle_count_theta ) ) for i in range( angle_count_phi ): for j in range( angle_count_theta ): R[i,j] = dir1d[ i * angle_count_theta + j ] THETA = np.deg2rad(theta2d) PHI = np.deg2rad(phi2d) X = R * np.sin( THETA ) * np.cos( PHI ) Y = R * np.sin( THETA ) * np.sin( PHI ) Z = R * np.cos( THETA ) min_X = np.min(X) max_X = np.max(X) min_Y = np.min(Y) max_Y = np.max(Y) layout = go.Layout(title="3D Radiation Pattern of 5G CW data", xaxis = dict(range=[min_X,max_X],), yaxis = dict(range=[min_Y,max_Y],)) fig = go.Figure(data=[go.Surface(x=X, y=Y, z=Z, surfacecolor=R, colorscale='mygbm', colorbar = dict(title = "Gain", thickness = 50, xpad = 500))], layout = layout) fig.update_layout(autosize = True, margin = dict(l = 50, r = 50, t = 250, b = 250)) plot(fig) Output: Some data generation: program test implicit none real, parameter :: PI = 4.0 * atan( 1.0 ) integer nphi, ntheta integer i, j real theta, phi, z real dtheta, dphi nphi = 73; ntheta = 37 dphi = 360.0 / ( nphi - 1 ) dtheta = 180.0 / ( ntheta - 1 ) open( 10, file="output_file.csv" ) write( 10, "( a, ',', a, ',', a )" ) "Theta", "Phi", "Dir" do j = 0, nphi - 1 phi = j * dphi do i = 0, ntheta - 1 theta = i * dtheta z = 10.0 * ( 1 + cos( theta * PI / 180.0 ) ) write( 10, "( 1x, f7.2, ',', f7.2, ',', f7.2 )" ) theta, phi, z end do end do close( 10 ) end program test | 2 | 1 |
77,860,036 | 2024-1-22 | https://stackoverflow.com/questions/77860036/why-reading-a-compressed-tar-file-in-reverse-order-is-100x-slower | First, let's generate a compressed tar archive: from io import BytesIO from tarfile import TarInfo import tarfile with tarfile.open('foo.tgz', mode='w:gz') as archive: for file_name in range(1000): file_info = TarInfo(str(file_name)) file_info.size = 100_000 archive.addfile(file_info, fileobj=BytesIO(b'a' * 100_000)) Now, if I read the archive contents in natural order: import tarfile with tarfile.open('foo.tgz') as archive: for file_name in archive.getnames(): archive.extractfile(file_name).read() and measure the execution time using the time command, I get less than 1 second on my PC: real 0m0.591s user 0m0.560s sys 0m0.011s But if I read the archive contents in reverse order: import tarfile with tarfile.open('foo.tgz') as archive: for file_name in reversed(archive.getnames()): archive.extractfile(file_name).read() the execution time is now around 120 seconds: real 2m3.050s user 2m0.910s sys 0m0.059s Why is that? Is there some bug in my code? Or is it some tar's feature? Is it documented somewhere? | A tar file is strictly sequential. You end up reading the beginning of the file 1000 times, rewinding between them, reading the second member 999 times, etc etc. Remember, the "tape archive" format was designed at a time when unidirectional tape reels on big spindles was the hardware they used. Having an index would only have wasted space on the tape, as you would literally have to read every byte between where you are and where you want to seek to on the tape anyway. In contrast, modern archive formats like .zip are designed for use on properly seekable devices, and typically contain an index which lets you quickly move to the position where a specific archive member can be found. | 3 | 7 |
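A practical consequence, sketched on the question's own archive: do the single sequential pass in natural order, cache the member contents, and only then iterate in reverse — the gzip stream is decompressed exactly once:

```python
import tarfile

with tarfile.open('foo.tgz') as archive:
    # one forward pass over the stream; iterating a TarFile yields TarInfo members in order
    contents = {member.name: archive.extractfile(member).read()
                for member in archive}

# "reverse order" is now plain dictionary access, with no rewinding or re-decompression
for name in reversed(list(contents)):
    data = contents[name]
```

The trade-off is memory: everything is held at once (roughly 100 MB for the example archive), which is usually acceptable when the alternative is quadratic re-reading.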
77,859,431 | 2024-1-22 | https://stackoverflow.com/questions/77859431/how-to-get-the-index-of-function-parameter-list-comprehension | Gurobipy can apparently read the index of a list comprehension formulated within the parentheses of a function. How does this work? Shouldn't this formulation pass a generator object to the function? How do you read the index from that? md = gp.Model() md.addConstrs(True for i in [1,2,5,3]) The output contains the indices that where used in the list comprehension formulation: {1: <gurobi.Constr *Awaiting Model Update*>, 2: <gurobi.Constr *Awaiting Model Update*>, 5: <gurobi.Constr *Awaiting Model Update*>, 3: <gurobi.Constr *Awaiting Model Update*>} | I am not sure if I understand your question correctly, but if you are wondering how you can retrieve the iterator from generator expression, then that's by accessing <generator>.gi_frame.f_locals. The gi_frame contains the frame object corresponds to the generator expression and it has f_locals attribute which denotes the local namespace seen by this frame. >>> my_gen = (True for i in [1,2,5,3]) >>> type(my_gen) <class 'generator'> >>> my_gen.gi_frame.f_locals {'.0': <tuple_iterator object at 0x1003cfa60>} >>> my_gen.gi_frame.f_locals['.0'] <tuple_iterator object at 0x1003cfa60> >>> list(my_gen.gi_frame.f_locals['.0']) [1, 2, 5, 3] You can even use the more direct API inspect.getgeneratorlocals. >>> import inspect >>> >>> inspect.getgeneratorlocals(my_gen) {'.0': <tuple_iterator object at 0x1003cfc70>} But please do note that: CPython implementation detail: This function relies on the generator exposing a Python stack frame for introspection, which isnβt guaranteed to be the case in all implementations of Python. In such cases, this function will always return an empty dictionary. | 5 | 6 |
77,844,870 | 2024-1-19 | https://stackoverflow.com/questions/77844870/google-gemini-api-response | I am using the Gemini API, and need some help. When I ran my code that I got from the docs it returned: <google.generativeai.types.generation_types.GenerateContentResponse> . Here is my code import pathlib import textwrap import google.generativeai as genai from IPython.display import display from IPython.display import Markdown def to_markdown(text): text = text.replace('β’', ' *') return Markdown(textwrap.indent(text, '> ', predicate=lambda _: True)) genai.configure(api_key='AN API KEY WAS HERE') model = genai.GenerativeModel('gemini-pro') response = model.generate_content("What is the meaning of life?") print(response) I ran this, and got the response above. | That's a google object of the response you are getting. You need to mention as response.text to get the actual text. Like this: print(response.text) | 2 | 3 |
77,838,537 | 2024-1-18 | https://stackoverflow.com/questions/77838537/consistently-compiling-c-code-with-fpic-for-a-rust-ffi-crate | My rust project uses a FFI to the C lib SuperLU, which is called superlu-sys. My rust code produces Python bindings with PyO3. As soon as the python bindings have a function calling SuperLU I get the following linker error on building: relocation R_X86_64_PC32 against symbol `stderr@@GLIBC_2.2.5' can not be used when making a shared object; recompile with -fPIC This is strange as -fPIC is enforced by build.rs of superlu-sys: run!(cmd!("make") .current_dir(&source.join("SRC")) .arg("NOOPTS=-O0 -fPIC -w") .arg("CFLAGS=-O3 -DNDEBUG -DPRNTlevel=0 -fPIC -w") .arg("DZAUX=") .arg("SCAUX=") .arg(&format!("SuperLUroot={}", source.display())) .arg(&format!( "SUPERLULIB={}", lib.join("libsuperlu.a").display() ))); Here is the full build.rs. On producing a minimum example I have found that it works as long as the PyO3 bindings are in the same crate in which SuperLu is built. I am under the impression that the adherence to the flags from build.rs is only given if it it the root of the crate tree. How can I modify build.rs to get consistent enforcement of -fPIC? This is a minimum example causing the issue: lib.rs: use pyo3::prelude::*; use std::os::raw::c_int; use superlu_sys; #[pyclass] pub struct Object {} #[pymethods] impl Object { pub fn test(&self) -> i32 { unsafe { *superlu_sys::intMalloc(1 as c_int) } } } #[allow(unused)] #[pymodule] fn bind_superlu(py: Python, m: &PyModule) -> PyResult<()> { m.add_class::<Object>()?; Ok(()) } cargo.toml: [package] name = "bind_superlu" version = "0.1.0" edition = "2021" [lib] name = "ress" crate-type = ["cdylib"] [dependencies] superlu-sys = "0.3.4" pyo3 = {version = "0.18.1", features = ["auto-initialize"]} | I switched the buid.rs script to using the cmake rust crate instead of invoking make as a subprocess. | 5 | 0 |
77,855,706 | 2024-1-21 | https://stackoverflow.com/questions/77855706/how-can-i-merge-two-dataframes-based-on-range-of-dates | I have two DataFrames: df1 and df2 import pandas as pd df1 = pd.DataFrame( { 'a': ['2024-01-01 04:00:00', '2023-02-02 20:00:00'], 'id':['a_1', 'a_2'] } ) df2 = pd.DataFrame( { 'a': [ '2024-01-01 4:00:00', '2024-01-01 05:00:00', '2024-01-01 06:00:00', '2024-01-01 07:00:00', '2024-01-01 08:00:00', '2024-01-01 09:00:00', '2023-02-02 21:00:00', '2023-02-02 23:00:00', ] } ) And this is the expected output. I want to merge id from df1 to df2: a id 0 2024-01-01 04:00:00 a_1 1 2024-01-01 05:00:00 a_1 2 2024-01-01 06:00:00 a_1 3 2024-01-01 07:00:00 a_1 4 2024-01-01 08:00:00 NaN 5 2024-01-01 09:00:00 NaN 6 2023-02-02 21:00:00 a_2 7 2023-02-02 23:00:00 a_2 If you are familiar with candlesticks, df1.a is 4 hour candlestick. For example df1.a.iloc[0]is: 2024-01-01 04:00:00 2024-01-01 05:00:00 2024-01-01 06:00:00 2024-01-01 07:00:00 Basically it is a range from df1.a.iloc[0] to df1.a.iloc[0] + pd.Timedelta(hours=3). And I want to merge these ids by the range of hours that they cover. This is my attempt but I don't know how to merge by range of dates: df1['a'] = pd.to_datetime(df1.a) df2['a'] = pd.to_datetime(df2.a) df1['b'] = df1.a + pd.Timedelta(hours=3) | If I understand correctly, you need a merge_asof with tolerance: df1['a'] = pd.to_datetime(df1['a']) df2['a'] = pd.to_datetime(df2['a']) out = (pd.merge_asof(df2.reset_index().sort_values(by='a'), df1.sort_values(by='a'), on='a', tolerance=pd.Timedelta(hours=3) ) .set_index('index').reindex(df2.index) ) Output: a id 0 2024-01-01 04:00:00 a_1 1 2024-01-01 05:00:00 a_1 2 2024-01-01 06:00:00 a_1 3 2024-01-01 07:00:00 a_1 4 2024-01-01 08:00:00 NaN 5 2024-01-01 09:00:00 NaN 6 2023-02-02 21:00:00 a_2 7 2023-02-02 23:00:00 a_2 | 3 | 2 |
77,851,866 | 2024-1-20 | https://stackoverflow.com/questions/77851866/custom-legend-for-the-plot-with-lines-changing-colour | I want to plot two error bars plots with lines changing colour, one going from pink to blue, and another from blue to pink. I did not find a way to do this with plt.errorbar(), but managed to find a workaround solution using LineCollection. I am plotting error bar plots with a dashed line, and then adding two lines that change colour using LineCollection. Each line is separated into segments coloured according to the range of colours from the chosen colormap. Now the problem is to create a legend for this plot. I am using a custom legend, but cannot figure out a way to have lines changing colour in the legend. Is there a way to do so? Here is a code to make the plot: import numpy as np import matplotlib.pyplot as plt from matplotlib.collections import LineCollection from matplotlib.colors import BoundaryNorm, ListedColormap from matplotlib.lines import Line2D fig, axs = plt.subplots(1, 1, figsize=(10 , 8)) T = 9 x = np.linspace(0, T-1, T) data = np.random.rand(T, 30) # plot with error bars Y_mean = np.mean(data, axis=1) # mean for every row Y_std = np.std(data, axis=1, ddof=1) # std plt.errorbar(x, Y_mean, yerr=Y_std, capsize=6, elinewidth=4, ecolor = "grey",linestyle = 'dotted',color='black') # plot a line going from blue to pink y = Y_mean y_col = np.linspace(-10, 10, 8) # Create a set of line segments so that we can color them individually # This creates the points as an N x 1 x 2 array so that we can stack points # together easily to get the segments. The segments array for line collection # needs to be (numlines) x (points per line) x 2 (for x and y) points = np.array([x, y]).T.reshape(-1, 1, 2) segments = np.concatenate([points[:-1], points[1:]], axis=1) # Create a continuous norm to map from data points to colors norm_cmap = plt.Normalize(y_col.min(), y_col.max()) lc = LineCollection(segments, cmap='cool', norm=norm_cmap) # Set the values used for colormapping lc.set_array(y_col) lc.set_linewidth(4) line = axs.add_collection(lc) # plot the second plot with error bars data = np.random.rand(T, 30) + 1 Y_mean = np.mean(data, axis=1) # mean Y_std = np.std(data, axis=1, ddof=1) # std plt.errorbar(x, Y_mean, yerr=Y_std, capsize=6, elinewidth=4, ecolor = "grey",linestyle = 'dotted',color='black') # plot the second line y = Y_mean y_col = np.linspace(10, -10, 8) points = np.array([x, y]).T.reshape(-1, 1, 2) segments = np.concatenate([points[:-1], points[1:]], axis=1) norm_cmap = plt.Normalize(y_col.min(), y_col.max()) lc = LineCollection(segments, cmap='cool', norm=norm_cmap) lc.set_array(y_col) lc.set_linewidth(4) line = axs.add_collection(lc) fig.suptitle("example plot") legend_elements = [Line2D([0], [0], color='black', linestyle = 'dotted', alpha = 1, label = r'line1', lw=4), Line2D([0], [0], color='black', linestyle = 'dotted', alpha = 1, label = r'line2', lw=4)] axs.legend(handles=legend_elements, bbox_to_anchor=(1.05, 1), loc="upper left", title_fontsize = 18); Here is the plot: I tried adding a custom legend using "colour patches" plotted with matplotlib.lines.Line2D, but it seems like it's only possible to specify one colour per patch. | this can be done by slightly adjusting the answer I posted to a similar question concerning custom legend handles for multi-color points (https://stackoverflow.com/a/67870930/9703451) The idea is to use a custom handler that will draw the legend-patch... 
this should do the job: # define an object that will be used by the legend class MulticolorPatch(object): def __init__(self, cmap, ncolors=100): self.ncolors = ncolors if isinstance(cmap, str): self.cmap = plt.get_cmap(cmap) else: self.cmap = cmap # define a handler for the MulticolorPatch object class MulticolorPatchHandler(object): def legend_artist(self, legend, orig_handle, fontsize, handlebox): n = orig_handle.ncolors width, height = handlebox.width, handlebox.height patches = [] for i, c in enumerate(orig_handle.cmap(i/n) for i in range(n)): patches.append( plt.Rectangle([width / n * i - handlebox.xdescent, -handlebox.ydescent], width / n, height, facecolor=c, edgecolor='none')) patch = PatchCollection(patches,match_original=True) handlebox.add_artist(patch) return patch # ------ create the legend handles = [ MulticolorPatch("cool"), MulticolorPatch("cool_r"), MulticolorPatch("viridis") ] labels = [ "a cool line", "a reversed cool line", "a viridis line" ] # ------ create the legend fig.legend(handles, labels, loc='upper left', handler_map={MulticolorPatch: MulticolorPatchHandler()}, bbox_to_anchor=(.125,.875)) | 3 | 2 |
77,855,094 | 2024-1-21 | https://stackoverflow.com/questions/77855094/how-to-get-the-session-id-of-windows-using-ctypes-in-python | I am attempting to retrieve the session ID of the current user session on Windows using ctypes in Python. My goal is to accomplish this without relying on external libraries. Here is the code I have so far: import ctypes from ctypes import wintypes def get_current_session_id(): WTS_CURRENT_SERVER_HANDLE = ctypes.wintypes.HANDLE(-1) WTS_CURRENT_SESSION = 0xFFFFFFFF session_id = ctypes.create_string_buffer(4) # Assuming a DWORD is 4 bytes server_handle = ctypes.windll.wtsapi32.WTSOpenServerW(None) if not server_handle: raise OSError("Failed to open server handle") try: result = ctypes.windll.wtsapi32.WTSQuerySessionInformationW( server_handle, WTS_CURRENT_SESSION, 0, ctypes.byref(session_id), ctypes.byref(ctypes.c_ulong()) ) if not result: raise OSError("Failed to query session information") finally: ctypes.windll.wtsapi32.WTSCloseServer(server_handle) session_id_value = struct.unpack("I", session_id.raw)[0] return session_id_value try: session_id = get_current_session_id() print("Current session ID:", session_id) except OSError as e: print(f"Error: {e}") However, I am encountering an access violation error (exception: access violation reading). I suspect there might be an issue with memory access or the usage of ctypes. Can someone help me identify and correct the issue in the code? Additionally, if there are alternative approaches to achieve the same result using ctypes, I would appreciate any guidance. I would appreciate any insights into what might be causing the access violation error and any suggestions for correcting the code. What I Tried: I attempted to retrieve the session ID of the current user session on Windows using the provided Python script that utilizes ctypes. What I Expected: I expected the script to successfully retrieve the session ID without encountering any errors, providing the correct session ID as output. Actual Result: However, during execution, an access violation error (exception: access violation reading) occurred. | PROBLEM SOLVED: import ctypes from ctypes import wintypes def get_current_session_id(): ProcessIdToSessionId = ctypes.windll.kernel32.ProcessIdToSessionId ProcessIdToSessionId.argtypes = [wintypes.DWORD, wintypes.PDWORD] ProcessIdToSessionId.restype = wintypes.BOOL process_id = wintypes.DWORD(ctypes.windll.kernel32.GetCurrentProcessId()) session_id = wintypes.DWORD() if ProcessIdToSessionId(process_id, ctypes.byref(session_id)): return session_id.value else: raise ctypes.WinError() current_session_id = get_current_session_id() print(f"Current session ID: {current_session_id}") | 2 | 0 |
77,854,931 | 2024-1-21 | https://stackoverflow.com/questions/77854931/how-improve-updating-multiple-columns-of-a-pandas-dataframe-when-the-columns | I have a DataFrame that has columns as datetime objects, and I would like to update specific values for a specific index based on a list of days. Here is an MRE that works: import pandas as pd start, stop = "2023-12-01", "2023-12-31" dates: pd.Index = pd.date_range(start, stop, freq="D") december = pd.DataFrame(columns = dates, index = ['ft', 'pt']) december.loc['ft', pd.to_datetime(['2023-12-04', '2023-12-06', '2023-12-11', '2023-12-12', '2023-12-27', '2023-12-28'])] = 'X' In this example, I update the columns for the following dates: ['2023-12-04', '2023-12-06', '2023-12-11', '2023-12-12', '2023-12-27', '2023-12-28'] But ideally I would pass a list of days like [4, 6, 11, 12, 27, 28] to the .loc call. | Just do that using the day attribute and isin: december.loc['ft', december.columns.day.isin([4, 6, 11, 12, 27, 28])] = 'X' If you also only want to target december (or a list of months): december.loc['ft', (december.columns.day.isin([4, 6, 11, 12, 27, 28]) &december.columns.month.isin([12]) )] = 'X' Output: 2023-12-01 2023-12-02 2023-12-03 2023-12-04 2023-12-05 2023-12-06 2023-12-07 2023-12-08 2023-12-09 2023-12-10 2023-12-11 2023-12-12 2023-12-13 2023-12-14 2023-12-15 2023-12-16 2023-12-17 2023-12-18 2023-12-19 2023-12-20 2023-12-21 2023-12-22 2023-12-23 2023-12-24 2023-12-25 2023-12-26 2023-12-27 2023-12-28 2023-12-29 2023-12-30 2023-12-31 ft NaN NaN NaN X NaN X NaN NaN NaN NaN X X NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN X X NaN NaN NaN pt NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN | 2 | 2 |
77,851,097 | 2024-1-20 | https://stackoverflow.com/questions/77851097/waterfall-plot-with-treeexplainer | Using TreeExplainer in SHAP, I could not plot the Waterfall Plot. Error Message: ---> 17 shap.plots.waterfall(shap_values[0], max_display=14) TypeError: The waterfall plot requires an `Explanation` object as the `shap_values` argument. Since my model is tree based, I use TreeExplainer (because of using xgb.XGBClassifier). If I use the Explainer instead TreeExplainer, I can plot Waterfall Plot. My code is given below: import pandas as pd data = { 'a': [1, 2, 3, 3, 2, 1, 4, 5, 6, 7, 8, 1, 2, 3, 3, 2, 1, 4, 5, 6, 7, 8], 'b': [2, 1, 2, 3, 4, 6, 5, 8, 7, 9, 10, 2, 1, 2, 3, 4, 6, 5, 8, 7, 9, 10], 'c': [1, 5, 2, 4, 3, 9, 6, 8, 7, 10, 1, 1, 5, 2, 4, 3, 9, 6, 8, 7, 10, 1], 'd': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1], 'e': [1, 2, 3, 4, 3, 2, 1, 5, 4, 2, 1, 1, 2, 3, 4, 3, 2, 1, 5, 4, 2, 1], 'f': [1, 1, 2, 1, 2, 2, 3, 3, 3, 2, 1, 1, 1, 2, 1, 2, 2, 3, 3, 3, 2, 1], 'g': [3, 3, 2, 1, 3, 2, 1, 1, 1, 2, 2, 3, 3, 2, 1, 3, 2, 1, 1, 1, 2, 2], 'h': [1, 2, 1, 2, 3, 4, 5, 3, 4, 5, 5, 1, 2, 1, 2, 3, 4, 5, 3, 4, 5, 5], 'i': [1, 2, 1, 2, 3, 4, 5, 6, 5, 4, 6, 1, 2, 1, 2, 3, 4, 5, 6, 5, 4, 6], 'j': [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6], 'k': [3, 3, 2, 1, 4, 3, 2, 2, 2, 1, 1, 3, 3, 2, 1, 4, 3, 2, 2, 2, 1, 1], 'r': [1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1] } df = pd.DataFrame(data) X = df.iloc[:,[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]] y = df.iloc[:,11] from sklearn.model_selection import train_test_split, GridSearchCV X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30, random_state = 42) import xgboost as xgb from sklearn.metrics import accuracy_score, confusion_matrix, classification_report from sklearn.model_selection import GridSearchCV param_grid = { 'max_depth' : [6], 'n_estimators' : [500], 'learning_rate' : [0.3] } grid_search_xgboost = GridSearchCV( estimator = xgb.XGBClassifier(), param_grid = param_grid, cv = 3, verbose = 2, n_jobs = -1 ) grid_search_xgboost.fit(X_train, y_train) print("Best Parameters:", grid_search_xgboost.best_params_) best_model_xgboost = grid_search_xgboost.best_estimator_ import shap explainer = shap.TreeExplainer(best_model_xgboost) shap_values = explainer.shap_values(X_train) shap.summary_plot(shap_values, X_train, plot_type="bar") shap.summary_plot(shap_values, X_train) for name in X_train.columns: shap.dependence_plot(name, shap_values, X_train) shap.force_plot(explainer.expected_value, shap_values[0], X_train.iloc[0], matplotlib=True) shap.decision_plot(explainer.expected_value, shap_values[:10], X_train.iloc[:10]) shap.plots.waterfall(shap_values[0], max_display=14) Where is the problem? 
| Instead of feeding shap values as numpy.ndarray try an Explanation object: import xgboost as xgb import shap from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.metrics import accuracy_score, confusion_matrix, classification_report from sklearn.model_selection import GridSearchCV data = { 'a': [1, 2, 3, 3, 2, 1, 4, 5, 6, 7, 8, 1, 2, 3, 3, 2, 1, 4, 5, 6, 7, 8], 'b': [2, 1, 2, 3, 4, 6, 5, 8, 7, 9, 10, 2, 1, 2, 3, 4, 6, 5, 8, 7, 9, 10], 'c': [1, 5, 2, 4, 3, 9, 6, 8, 7, 10, 1, 1, 5, 2, 4, 3, 9, 6, 8, 7, 10, 1], 'd': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1], 'e': [1, 2, 3, 4, 3, 2, 1, 5, 4, 2, 1, 1, 2, 3, 4, 3, 2, 1, 5, 4, 2, 1], 'f': [1, 1, 2, 1, 2, 2, 3, 3, 3, 2, 1, 1, 1, 2, 1, 2, 2, 3, 3, 3, 2, 1], 'g': [3, 3, 2, 1, 3, 2, 1, 1, 1, 2, 2, 3, 3, 2, 1, 3, 2, 1, 1, 1, 2, 2], 'h': [1, 2, 1, 2, 3, 4, 5, 3, 4, 5, 5, 1, 2, 1, 2, 3, 4, 5, 3, 4, 5, 5], 'i': [1, 2, 1, 2, 3, 4, 5, 6, 5, 4, 6, 1, 2, 1, 2, 3, 4, 5, 6, 5, 4, 6], 'j': [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6], 'k': [3, 3, 2, 1, 4, 3, 2, 2, 2, 1, 1, 3, 3, 2, 1, 4, 3, 2, 2, 2, 1, 1], 'r': [1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1] } df = pd.DataFrame(data) X = df.iloc[:,[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]] y = df.iloc[:,11] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30, random_state = 42) param_grid = { 'max_depth' : [6], 'n_estimators' : [500], 'learning_rate' : [0.3] } grid_search_xgboost = GridSearchCV( estimator = xgb.XGBClassifier(), param_grid = param_grid, cv = 3, verbose = 2, n_jobs = -1 ) grid_search_xgboost.fit(X_train, y_train) print("Best Parameters:", grid_search_xgboost.best_params_) best_model_xgboost = grid_search_xgboost.best_estimator_ explainer = shap.TreeExplainer(best_model_xgboost) exp = explainer(X_train) # <-- here print(type(exp)) shap.plots.waterfall(exp[0]) <class 'shap._explanation.Explanation'> Why? Because SHAP has 2 plotting interfaces: old and new one. The old one (your first 2 plots) expects shap values as numpy's ndarray. The new one expects an Explanation object (which is BTW clearly stated in the error message). | 2 | 2 |
77,854,185 | 2024-1-21 | https://stackoverflow.com/questions/77854185/how-to-create-one-dictonary-using-3-lists | I am new to Python and need help creating one dictionary file using three lists. example: list 1 = ["private", "private"] list 2 = ["10.1.1.1", "10.11.11.11"] list3 = [ "sfc", "gfc"] my code: dict = {} for s in range(len(list1)): dict[list1[s]] = {list2[s]: list3[s]} print(dict) I am getting output as {'private': {'10.1.1.1': 'sfc'}} but I want output as {'private': {'10.1.1.1': 'sfc'}, {'10.11.11.11': 'gfx'}} Could somebody help me on how to get in this format? | list1 = ["private", "private"] list2 = ["10.1.1.1", "10.11.11.11"] list3 = ["sfc", "gfc"] my_dict = {} for s in range(len(list1)): if list1[s] not in my_dict: my_dict[list1[s]] = [] my_dict[list1[s]].append({list2[s]: list3[s]}) print(my_dict) Output: {'private': [{'10.1.1.1': 'sfc'}, {'10.11.11.11': 'gfc'}]} It checks for element with same name basically key if present instead of overwriting it appends and update the old one To get the output for Outer key, Inner key and value: my_dict = {'private': [{'10.1.1.1': 'sfc'}, {'10.11.11.11': 'gfc'}]} for key, value_list in my_dict.items(): for value_dict in value_list: for inner_key, inner_value in value_dict.items(): print(f"Outer Key: {key}, Inner Key: {inner_key}, Value: {inner_value}") Output: Outer Key: private, Inner Key: 10.1.1.1, Value: sfc Outer Key: private, Inner Key: 10.11.11.11, Value: gfc Please change it as per your need | 2 | 2 |
77,854,089 | 2024-1-21 | https://stackoverflow.com/questions/77854089/how-to-unify-the-response-format-in-fastapi-while-preserving-pydantic-data-model | In FastAPI, I am using SQLAlchemy and Pydantic to return data. @router.get("/1", response_model=User) def read_user(db: Session = Depends(get_db)): db_user = user_module.get_user(db, user_id="1") if db_user is None: raise HTTPException(status_code=404, detail="User not found") return db_user This approach helps me standardize the returned model, but I want to unify the response format for all APIs to {"code": 0, "msg": "success", "data": {...}}, so that the User model from the original return model is placed within the "data" field, making it easier for frontend management. I attempted to use FastAPI middleware for implementation, but it doesn't recognize the User return model in Swagger and other documentation. If I redefine a generic Pydantic return model with nested models, I cannot manipulate the SQLAlchemy returned data model into the desired User model. Is there any way to solve my requirement or are there any better solutions? To unify the response format in FastAPI to {"code": 0, "msg": "success", "data": {...}}, while preserving Pydantic data models and ensuring proper recognition in Swagger and other documentation. | from pydantic import BaseModel, Field from typing import Generic, TypeVar, Type, Optional from fastapi import Depends, HTTPException, APIRouter from sqlalchemy.orm import Session T = TypeVar('T') class GenericResponse(BaseModel, Generic[T]): code: int = Field(default=0, example=0) msg: str = Field(default="success", example="success") data: Optional[T] router = APIRouter() @router.get("/1", response_model=GenericResponse[User]) def read_user(db: Session = Depends(get_db)): db_user = user_module.get_user(db, user_id="1") if db_user is None: raise HTTPException(status_code=404, detail="User not found") return GenericResponse(data=db_user) declaring a generic response might be able to help you fix the structure and standardize the response in your required type. | 3 | 2 |
77,852,109 | 2024-1-20 | https://stackoverflow.com/questions/77852109/using-ruff-linter-with-python-in-vs-code | I have installed the Ruff extension and enabled it in VS Code, but it doesn't seem to be underlining my code at all and providing suggestions like my previous linters did. I did a clean install of VS Code, so most of the Python/Ruff extension settings are default. Is there an additional step I need to take to get it to start underlining my code and providing recommendations? Here is a screenshot: It's highlighting the imports for not being used, but I would expect other things to be highlighted like the line length, the additional spaces at the end of the file, not having 2 spaces before function declaration, etc. Here is the sample code as requested: import pandas as pd import numpy as np print('kkkkkkkkjlskdjflksdjflsdjflkdsjflksdjflkjdslkfjsdlkjflsdjflsdjfldsjflsdkjflsdjflksdjflksdjflksdjflskdjflsdkjfklsdjkl') def test_func(x): y=x+1 return y | Take another look at the ruff documentation. You must enable or disable your desired linter rules and/or your formatting rules. For example, if you create a ruff.toml configuration in the root of your project with [lint] select = ["ALL"] the output looks more like what you expect. | 10 | 7 |
77,843,567 | 2024-1-19 | https://stackoverflow.com/questions/77843567/discord-py-interaction-already-acknowledged | The code runs fine up until I click the button and It gives an error (included at top). The script still runs with the error but I need to fix it for hosting purposes. Thanks! class CollabButton(discord.ui.View): def __init__(self): super().__init__(timeout=None) self.add_item( discord.ui.Button(label='Accept', style=discord.ButtonStyle.green)) @client.event async def on_interaction(interaction): if interaction.type == discord.InteractionType.component: if is_moderator(interaction.user): collab_button_view = CollabButton() if not interaction.response.is_done(): await interaction.response.send_message( f"Hey {interaction.user.mention}!\nAn admin has accepted the collaboration offer!" ) else: if not interaction.response.is_done(): await interaction.response.send_message( "You do not have permission to use this button.", ephemeral=True) def is_moderator(user): #Role name can be changed to anything return any(role.name == 'Cadly' for role in user.roles) I tried adding the if statement to verify if the response is done and I'm running out of ideas. Any insight helps! | This problem is happening because you are not filtering the desired interactions in the on_interaction event. As a result, interactions that are handled in other components are also being received by this event. It is highly recommended that you define a custom_id for the component you want to handle in this event and filter only interactions arising from it: class CollabButton(discord.ui.View): def __init__(self): super().__init__(timeout=None) button = discord.ui.Button( label='Accept', style=discord.ButtonStyle.green, custom_id="collab_button" ) self.add_item(button) @client.event async def on_interaction(interaction): item_id = interaction.data.get("custom_id") if item_id == "collab_button": if is_moderator(interaction.user): if not interaction.response.is_done(): await interaction.response.send_message( f"Hey {interaction.user.mention}!\nAn admin has accepted the collaboration offer!" ) else: if not interaction.response.is_done(): await interaction.response.send_message( "You do not have permission to use this button.", ephemeral=True) | 4 | 1 |
77,851,213 | 2024-1-20 | https://stackoverflow.com/questions/77851213/error-when-specifying-cmap-in-plt-matshow | I get an error every time when I try to define the cmap while calling sns.color_palette(). I think it has something to do with the definition of the color palette but I am not sure: colors = ["#1d4877", "#1b8a5a", "#fbb021", "#f68838", "#ee3e32"] sns.set_palette(colors) And when I try to set it: plt.matshow(num1.corr(), cmap=sns.color_palette()) plt.show() I get the following error: ValueError: [(0.11372549019607843, 0.2823529411764706, 0.4666666666666667), (0.10588235294117647, 0.5411764705882353, 0.35294117647058826), (0.984313725490196, 0.6901960784313725, 0.12941176470588237), (0.9647058823529412, 0.5333333333333333, 0.2196078431372549), (0.9333333333333333, 0.24313725490196078, 0.19607843137254902)] is not a valid value for cmap; supported values are 'Accent', 'Accent_r', 'Blues', 'Blues_r', 'BrBG', 'BrBG_r', 'BuGn', 'BuGn_r', 'BuPu', 'BuPu_r', 'CMRmap', 'CMRmap_r', 'Dark2', 'Dark2_r', 'GnBu', 'GnBu_r', 'Greens', 'Greens_r', 'Greys', 'Greys_r', 'OrRd', 'OrRd_r', 'Oranges', 'Oranges_r', 'PRGn', 'PRGn_r', 'Paired', 'Paired_r', 'Pastel1', 'Pastel1_r', 'Pastel2', 'Pastel2_r', 'PiYG', 'PiYG_r', 'PuBu', 'PuBuGn', 'PuBuGn_r', 'PuBu_r', 'PuOr', 'PuOr_r', 'PuRd', 'PuRd_r', 'Purples', 'Purples_r', 'RdBu', 'RdBu_r', 'RdGy', 'RdGy_r', 'RdPu', 'RdPu_r', 'RdYlBu', 'RdYlBu_r', 'RdYlGn', 'RdYlGn_r', 'Reds', 'Reds_r', 'Set1', 'Set1_r', 'Set2', 'Set2_r', 'Set3', 'Set3_r', 'Spectral', 'Spectral_r', 'Wistia', 'Wistia_r', 'YlGn', 'YlGnBu', 'YlGnBu_r', 'YlGn_r', 'YlOrBr', 'YlOrBr_r', 'YlOrRd', 'YlOrRd_r', 'afmhot', 'afmhot_r', 'autumn', 'autumn_r', 'binary', 'binary_r', 'bone', 'bone_r', 'brg', 'brg_r', 'bwr', 'bwr_r', 'cividis', 'cividis_r', 'cool', 'cool_r', 'coolwarm', 'coolwarm_r', 'copper', 'copper_r', 'crest', 'crest_r', 'cubehelix', 'cubehelix_r', 'flag', 'flag_r', 'flare', 'flare_r', 'gist_earth', 'gist_earth_r', 'gist_gray', 'gist_gray_r', 'gist_heat', 'gist_heat_r', 'gist_ncar', 'gist_ncar_r', 'gist_rainbow', 'gist_rainbow_r', 'gist_stern', 'gist_stern_r', 'gist_yarg', 'gist_yarg_r', 'gnuplot', 'gnuplot2', 'gnuplot2_r', 'gnuplot_r', 'gray', 'gray_r', 'hot', 'hot_r', 'hsv', 'hsv_r', 'icefire', 'icefire_r', 'inferno', 'inferno_r', 'jet', 'jet_r', 'magma', 'magma_r', 'mako', 'mako_r', 'nipy_spectral', 'nipy_spectral_r', 'ocean', 'ocean_r', 'pink', 'pink_r', 'plasma', 'plasma_r', 'prism', 'prism_r', 'rainbow', 'rainbow_r', 'rocket', 'rocket_r', 'seismic', 'seismic_r', 'spring', 'spring_r', 'summer', 'summer_r', 'tab10', 'tab10_r', 'tab20', 'tab20_r', 'tab20b', 'tab20b_r', 'tab20c', 'tab20c_r', 'terrain', 'terrain_r', 'turbo', 'turbo_r', 'twilight', 'twilight_r', 'twilight_shifted', 'twilight_shifted_r', 'viridis', 'viridis_r', 'vlag', 'vlag_r', 'winter', 'winter_r' I also tried calling the sns.color_palette() function like this: plt.matshow(num1.corr(), cmap=sns.color_palette(as_cmap=True)) plt.show() and it still produced the same error. | Just use matplotib's ListedColormap from matplotlib.colors import ListedColormap colors = ["#1d4877", "#1b8a5a", "#fbb021", "#f68838", "#ee3e32"] plt.matshow(np.arange(12).reshape(3, 4), cmap=ListedColormap(colors)) Note however, that this cmap is not suitable if you have continuous data. Output: | 2 | 2 |
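If the data is continuous (a correlation matrix usually is), a smooth gradient built from the same hex colors may read better than discrete bands. Here is a sketch using matplotlib's `LinearSegmentedColormap.from_list`; `num1` is not shown in the question, so random data stands in for `num1.corr()`.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

colors = ["#1d4877", "#1b8a5a", "#fbb021", "#f68838", "#ee3e32"]
# Interpolates smoothly between the listed colors instead of using hard bands
smooth_cmap = LinearSegmentedColormap.from_list("custom", colors)

data = np.random.uniform(-1, 1, (8, 8))   # stand-in for num1.corr()
plt.matshow(data, cmap=smooth_cmap, vmin=-1, vmax=1)
plt.colorbar()
plt.show()
```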
77,847,798 | 2024-1-19 | https://stackoverflow.com/questions/77847798/how-to-find-an-optimal-shape-with-sections-missing | Given an n by n matrix of integers, I want to find a part of the matrix that has the maximum sum with some restrictions. In the version I can solve, I am allowed to draw two lines and remove everything below and/or to the right. These can be one horizontal line and one diagonal line at 45 degrees (going up and right). For example with this 10 by 10 matrix: [[ 1, -3, -2, 2, -1, -3, 0, -2, 0, 0], [-1, 3, 3, -3, 0, -1, 0, 0, -2, -2], [-1, 0, -1, 0, 2, 1, 1, -3, 2, 1], [-3, 1, -3, -1, 1, -3, -2, -1, -3, 1], [ 1, -3, 1, -2, 2, 1, -3, 2, -3, 0], [-1, -2, 0, -2, 2, -3, 3, -1, -1, 2], [ 2, 2, -3, -1, 0, -1, 2, 0, 3, 0], [-1, 3, -1, 1, -1, 0, 0, 3, -3, 0], [ 3, 2, 1, 1, 2, 3, 0, 2, 0, -3], [ 0, 3, 2, 0, -1, -2, 3, -3, -3, 1]] An optimal sum is 3 which you get by the shaded area here: If square contains the 2d array then this code will find the location of the places where the bottom row should end to get the maximum sum: max_sums = np.empty_like(square, dtype=np.int_) max_sums[0] = np.cumsum(square[0]) for row_idx in range(1, dim): cusum = np.cumsum(square[row_idx]) for col_idx in range(dim): if col_idx < dim - 1: max_sums[row_idx, col_idx] = cusum[col_idx] + max_sums[row_idx - 1, col_idx + 1] else: max_sums[row_idx, col_idx] = cusum[col_idx] + max_sums[row_idx - 1, col_idx] maxes = np.argwhere(max_sums==max_sums.max()) # Finds all the locations of the max print(f"The coordinates of the maximums are {maxes} with sum {np.max(max_sums)}") Problem I would like to be able to leave out regions of the matrix defined by consecutive rows when finding the maximum sum. In the simpler version that I can solve I am only allowed to leave out rows at the bottom of the matrix. That is one region in the terminology we are using. The added restriction is that I am only allowed to leave out two regions at most. For the example given above, let us say we leave out the first two rows and rows 3 to 5 (indexing from 0) and rerun the code above. We then get a sum of 26. Excluded regions must exclude entire rows, not just parts of them. I would like to do this for much larger matrices. How can I solve this problem? | Here is a solution that will handle any number of alternating include/exclude regions. (But always in that order.) def optimize_exclude(matrix, blocks=5): """Finds the optimal exclusion pattern for a matrix. matrix is an nxm array of arrays. blocks is the number of include/exclude blocks. We start with an include block, so 5 blocks will include up to three stretches of rows and exclude the other 2. It returns (best_sum, diag, boundaries). So a return of (25, 15, [5, 7, 10, 12]) represents that we excluded all entries (i, j) where 5 <= i < 7, or 10 <= i < 12, or 15 < i+j. It will run in time O(n*(n+m)*blocks). """ sums = [] # We start with a best which is excluding everything. best = (0, -1, []) n = len(matrix) if 0 == n: # Handle trivial edge case. return best m = len(matrix[0]) for diag in range(n + m - 1): if diag < n: # Need to sum a new row. sums.append(0) # Add the diagonal of i+j = diag to sums. for i in range(max(0, diag-m+1), min(diag+1, n)): j = diag - i sums[i] += matrix[i][j] # Start dp. It will be an array of, # (total, (boundary, (boundary, ...()...))) with the # current block being the position. We include even # blocks, exclude odd ones. # # At first we can be on block 0, sum 0, no boundaries. 
dp = [(0, ())] if 1 < blocks: # If we can have a second block, we could start excluding. dp.append((0, (0, ()))) for i, s in enumerate(sums): # Start next_dp with basic "sum everything." # We do this so that we can assume next_dp[block] # is always there in the loop. next_dp = [(dp[0][0] + s, dp[0][1])] for block, entry in enumerate(dp): total, boundaries = entry # Are we including or excluding this block? if 0 == block%2: # Include row? if next_dp[block][0] < total + s: next_dp[block] = (total + s, boundaries) # Start new block? if block + 1 < blocks: next_dp.append((total, (i, boundaries))) else: # Exclude row? if next_dp[block][0] < total: next_dp[block] = entry # Start new block? if block + 1 < blocks: next_dp.append((total + s, (i, boundaries))) dp = next_dp # Did we improve best? for total, boundaries in dp: if best[0] < total: best = (total, diag, boundaries) # Undo the linked list for convenience. total, diag, boundary_link = best reversed_boundaries = [] while 0 < len(boundary_link): reversed_boundaries.append(boundary_link[0]) boundary_link = boundary_link[1] return (total, diag, list(reversed(reversed_boundaries))) And let's test it on your example. matrix = [[ 1, -3, -2, 2, -1, -3, 0, -2, 0, 0], [-1, 3, 3, -3, 0, -1, 0, 0, -2, -2], [-1, 0, -1, 0, 2, 1, 1, -3, 2, 1], [-3, 1, -3, -1, 1, -3, -2, -1, -3, 1], [ 1, -3, 1, -2, 2, 1, -3, 2, -3, 0], [-1, -2, 0, -2, 2, -3, 3, -1, -1, 2], [ 2, 2, -3, -1, 0, -1, 2, 0, 3, 0], [-1, 3, -1, 1, -1, 0, 0, 3, -3, 0], [ 3, 2, 1, 1, 2, 3, 0, 2, 0, -3], [ 0, 3, 2, 0, -1, -2, 3, -3, -3, 1]] for row in matrix: print(row) print(optimize_exclude(matrix)) As we hope, it finds the solution (26, 15, [0, 2, 3, 6]). Which corresponds to a sum of 26, a diagonal of 15 (so exclude when 15 < i+j - hits the end of the matrix by excluding from matrix[9][7] on), and excluding rows 0,1 then rows 3,4,5. All of which exactly matches your diagram. | 2 | 2 |
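For small inputs it can be worth cross-checking a dynamic program like this against an exhaustive search. The sketch below is my own addition, not part of the answer: it enumerates every diagonal cutoff and every choice of up to two excluded row intervals.

```python
from itertools import combinations

def brute_force(matrix, max_excluded_blocks=2):
    """Exhaustively try every diagonal cutoff and every set of excluded row intervals."""
    n = len(matrix)
    m = len(matrix[0]) if n else 0
    best = 0  # excluding everything is always allowed
    for diag in range(n + m - 1):
        # sum of row i restricted to entries with i + j <= diag
        row_sums = [sum(matrix[i][j] for j in range(m) if i + j <= diag)
                    for i in range(n)]
        for k in range(max_excluded_blocks + 1):
            # 2*k increasing boundaries give k disjoint half-open excluded intervals
            for cuts in combinations(range(n + 1), 2 * k):
                total = sum(row_sums)
                for a, b in zip(cuts[::2], cuts[1::2]):
                    total -= sum(row_sums[a:b])
                best = max(best, total)
    return best
```

On small random matrices `brute_force(matrix)` should agree with `optimize_exclude(matrix)[0]`, and on the 10x10 example above it should reproduce the total of 26.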
77,848,941 | 2024-1-19 | https://stackoverflow.com/questions/77848941/how-to-use-numpy-frombuffer-to-read-a-file-sent-using-fastapi | I'm trying to read data from a text file sent to my API built using fastapi. The files template is always the same and consists of three columns of numbers as shown in the picture below: I tried solving the problem with the following code using numpy: @app.post("/uploadfile/") async def create_upload_file(file: UploadFile): file_data = await file.read() print(len(file_data)) print(file_data) deserialized_bytes = np.frombuffer(file_data,float) print(deserialized_bytes) I got the following error: ValueError: buffer size must be a multiple of element size When printing file_data I got the following: b'0.01\t1.008298628\t-0.007582043\n0.012589254\t1.007411741\t-0.008969602\n0.015848932\t1.00632129\t-0.010491102\n0.019952623\t1.005019534\t-0.012152029\n0.025118864\t1.00349648\t-0.013967763\n0.031622777\t1.001734774\t-0.015961535\n0.039810717\t0.999706432\t-0.018160753\n0.050118723\t0.997371077\t-0.020592808\n0.063095734\t0.994675168\t-0.023280535\n0.079432823\t0.991552108\t-0.026237042\n0.1\t0.987923699\t-0.029459556\n0.125892541\t0.983703902\t-0.032922359\n0.158489319\t0.978806153\t-0.036569606\n0.199526231\t0.973155266\t-0.040309894\n0.251188643\t0.96670398\t-0.044015666\n0.316227766\t0.959452052\t-0.047531126\n0.398107171\t0.95146288\t-0.050691362\n0.501187234\t0.942870343\t-0.05335184\n0.630957344\t0.933869168\t-0.055422075\n0.794328235\t0.924687135\t-0.056892752\n1\t0.915545303\t-0.057845825\n1.258925412\t0.906618399\t-0.058443323\n1.584893192\t0.898007292\t-0.05889944\n1.995262315\t0.88972925\t-0.05944639\n2.511886432\t0.88172395\t-0.060304263\n3.16227766\t0.873868464\t-0.06166047\n3.981071706\t0.865994233\t-0.063658897\n5.011872336\t0.857901613\t-0.066395558\n6.309573445\t0.849370763\t-0.069916657\n7.943282347\t0.840170081\t-0.074215864\n10\t0.830064778\t-0.079229225\n12.58925412\t0.818828635\t-0.084828018\n15.84893192\t0.80626166\t-0.090811856\n19.95262315\t0.792215036\t-0.09690624\n25.11886432\t0.776622119\t-0.102770212\n31.6227766\t0.759530416\t-0.10801952\n39.81071706\t0.741125379\t-0.112267806\n50.11872336\t0.7217348\t-0.115182125\n63.09573445\t0.701805205\t-0.116541504\n79.43282347\t0.681849867\t-0.116282042\n100\t0.662379027\t-0.114513698\n125.8925412\t0.643830642\t-0.111502997\n158.4893192\t0.626519742\t-0.107628211\n199.5262315\t0.61061625\t-0.103322294\n251.1886432\t0.596150187\t-0.099019949\n316.227766\t0.583035191\t-0.095119782\n398.1071706\t0.571098913\t-0.091964781\n501.1872336\t0.560111017\t-0.089838356\n630.9573445\t0.549803457\t-0.08897027\n794.3282347\t0.539881289\t-0.089546481\n1000\t0.530024653\t-0.091717904\n1258.925412\t0.519883761\t-0.095604233\n1584.893192\t0.509069473\t-0.101289726\n1995.262315\t0.497142868\t-0.10880829\n2511.886432\t0.48360875\t-0.118115658\n3162.27766\t0.4679204\t-0.12904786\n3981.071706\t0.449505702\t-0.141268753\n5011.872336\t0.427826294\t-0.154216509\n6309.573445\t0.402477916\t-0.167069625\n7943.282347\t0.373326923\t-0.17876369\n10000\t0.340652906\t-0.188091157\n12589.25412\t0.305239137\t-0.193894836\n15848.93192\t0.268343956\t-0.195318306\n19952.62315\t0.231521442\t-0.192025496\n25118.86432\t0.196333603\t-0.184290646\n31622.7766\t0.164060493\t-0.17291366\n39810.71706\t0.135516045\t-0.15900294\n50118.72336\t0.111014755\t-0.143723646\n63095.73445\t0.090459787\t-0.128100619\n79432.82347\t0.073486071\t-0.11291483\n100000\t0.059599333\t-0.098684088\n125892.5412\t0.04827997\t-0.085696257\n158489.3192\t0.03904605\t-0
.074063717\n199526.2315\t0.031483308\t-0.063778419\n251188.6432\t0.025253597\t-0.054757832\n316227.766\t0.020091636\t-0.046879426\n398107.1706\t0.01579663\t-0.040005154\n501187.2336\t0.012222187\t-0.033998672\n630957.3445\t0.009265484\t-0.028737822\n794328.2347\t0.006855045\t-0.024123472\n1000000\t0.00493654\t-0.020083713\n' and the length of file_data is 2897 which doesn't divide by 8 as it should. Thinking that the problem originated from the Tab's and NewLine commands in the file, I tried removing the newLines, and replacing the tabs with spaces but I ended getting different numbers than the ones in the file. I don't quite understand how to convert file_data from bytes to a numpy array using the numpy library and not an entire function of my own which would be possible but much more complicated. What would be the right way to read the data into an array? If you can help me find a quick way to insert each column into a separate array automatically with no additional loop that would be great. | Why the error frombuffer is to read raw, "binary" data. So if you are trying to read float64, for examples, it just read packets of 64 bits (as the internal representation of float64) and fills a numpy array of float64 with it. For example np.frombuffer(b'\x00\x01\x02\x03', dtype=np.uint8) # β array([0,1,2,3], dtype=uint8) # because each byte is the representation of 1 uint8 integer np.frombuffer(b'\x00\x01\x02\x03', dtype=np.uint16) # β array([256,770], dtype=uint16) on my machine # because each pairs of bytes make the 16 bits of a uint16 16 bits integer # 0 1, 00000000 00000001 in binary, with a little endian machine # is 00000001 00000000 in binary = 256 in decimal. # (on a big endian machine it would have been 1) # then 2=00000010 3=00000011. So on my little endian machine that is # 00000011 00000010 = 512+256+2 = 770 # on a big endian machine, that would have been 00000010 00000011 = 512+2+1=515 Etc. I could continue with examples of float32 etc. But that would be longer to detail, and useless, since understanding frombuffer is not really what you want. Point is, it is not what you think it is. In practice, frombuffer is for reading numpy array from memory that was produced by a .tobytes() previously (np.array([256,770]).tobytes() = b'\x00\x01\x02\x03'). Or by some equivalent code for other libraries or language (for example if a C code fwrite the content of a float * array, then you could get the np.float32 back into a numpy array with .frombuffer. But of course, you can use .frombuffer only if you have a consistent number of bytes. So, for a uint16, the number of bits have to be a multiple of 16, so the number of bytes has to be a multiple of 2. And, in your case, for a float64, the number of bits has to be a multiple of 64, so number of bytes a multiple of 8. Which is not the case. Which is lucky for you. Because if your data happened to contain a multiple of 8 bytes (it has a 12.5% probability to happen), it would have worked without error, and you would have some hard time understanding why, with no error message at all, you end up with a numpy array containing numbers that are not the good ones. (Just had 3 spaces at the end of your file...) What to do then The bytes you are trying to parse are obviously in ascii format, containing decimal representation of real numbers, separated by tabulations (\t) and line feed (\n). Sometimes called tsv format. So what you need is a function that read and parse tsv format. 
Those are not "ready to use" bytes representing numbers: it is a human-readable format. numpy.loadtxt does exactly that. Its normal usage is to open files, but it can also parse data directly, as long as you feed it an array (or generator) of lines. So, your file_data is a bytestring containing lines (each of them holding numbers separated by tabs) separated by line feeds. Just split it on the b'\n' separator to get a list of lines, and give that list to np.loadtxt. tl;dr deserialized_bytes = np.loadtxt(file_data.split(b'\n')) is what you want | 3 | 2
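As a follow-up to the question's last ask (one array per column, no extra loop): np.loadtxt also accepts a file-like object, so wrapping the raw bytes in io.BytesIO avoids the manual split, and unpack=True transposes the result into one 1-D array per column. A small self-contained sketch with stand-in data shaped like the upload:

```python
import io
import numpy as np

# stand-in for `file_data = await file.read()` from the endpoint
file_data = b"0.01\t1.008298628\t-0.007582043\n0.012589254\t1.007411741\t-0.008969602\n"

# BytesIO makes the bytes file-like; unpack=True returns one array per column
col1, col2, col3 = np.loadtxt(io.BytesIO(file_data), delimiter="\t", unpack=True)
print(col1, col2, col3)
```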
77,849,025 | 2024-1-19 | https://stackoverflow.com/questions/77849025/avoiding-deprecationwarning-when-extracting-indices-to-subset-vectors | General idea: I want to take a slice of a 3D surface plot of a function of two variables f(x,y) at a given x = some value. The problem is that I have to know the index where x assumes this value after creating x as a vector with np.linspace, for instance. Finding this index turns out to be doable thanks to another post in SO. What I can't do is use this index as is returned to subset a different vector Z, because of the index is returned as a 1-element list (or tuple), and I need an integer. When I use int() I get a warning: import numpy as np lim = 10 x = np.linspace(-lim,lim,2000) y = np.linspace(-lim,lim,2000) X, Y = np.meshgrid(x, y) Z = X**2 + Y**2 def find_nearest(array, value): array = np.asarray(array) idx = (np.abs(array - value)).argmin() return array[idx] idx = int(np.where(x == find_nearest(x,0))[0]) print(idx) print(Z[idx,:].shape) Output with warning: <ipython-input-282-c2403341abda>:16: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.) idx = int(np.where(x == find_nearest(x,0))[0]) 1000 (2000,) The shape of the ultimate sub-setting of Z is (2000,), which allows me to plot it against x. However, if instead I just extract with [0] the value return by np.where(), the shape of the final sub-set Z, i.e. (1,2000) is not going to allow me to plot it against x: idx1 = np.where(x == find_nearest(x,0))[0] print(idx1) print(Z[idx1,:].shape) How can I extract the index corresponding to the value of x is want as an integer, and thus avoid the warning (see below)? DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.) idx = int(np.where(x == find_nearest(x,0))[0]) | Break down the resulting value you get from np.where: >>> np.where(x == find_nearest(x,0)) (array([1000]),) Then dereference the first element: >>> np.where(x == find_nearest(x,0))[0] array([1000]) Ok, same thing. Get first element: >>> np.where(x == find_nearest(x,0))[0][0] 1000 | 2 | 2 |
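A slightly shorter route to the same index, assuming the goal is just "the position in x closest to some value": argmin on the absolute difference already returns a scalar index, so no int() conversion (and no deprecation warning) is needed. A sketch:

```python
import numpy as np

lim = 10
x = np.linspace(-lim, lim, 2000)
X, Y = np.meshgrid(x, x)
Z = X**2 + Y**2

idx = np.abs(x - 0).argmin()   # scalar index of the value in x closest to 0
print(idx, x[idx])
print(Z[idx, :].shape)         # (2000,): a scalar index keeps the slice 1-D
```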
77,848,932 | 2024-1-19 | https://stackoverflow.com/questions/77848932/simplify-dataframe-by-combining-data-from-various-columns-together-for-same-inst | I have a dataframe after concatenting several dataframes df1 = instance scan comp sort 0 A 23:15:12 NaN NaN 1 B 23:17:12 NaN NaN 2 C 23:16:12 NaN NaN 0 A NaN 23:19:32 NaN 1 B NaN 23:19:32 NaN 2 C NaN 23:43:23 NaN 0 A NaN NaN 23:45:32 1 B NaN NaN 23:45:26 2 C NaN NaN 23:45:12 I need to simplify above and have all the columns for the same instance to be updated df2 = instance scan comp sort A 23:15:12 23:19:32 23:45:32 B 23:17:12 23:19:32 23:45:26 C 23:16:12 23:43:23 23:45:12 There can 100's of such instances and many columns for each instance such as scan, comp,sort etc. I have tried groupby(['instance'), but it doesnt seem to work and resulting in object errors. <pandas.core.groupby.generic.DataFrameGroupBy object at 0x7f72f595f650> | You can use groupby_first: df2 = df1.groupby('instance', as_index=False).first() print(df2) # Output instance scan comp sort 0 A 23:15:12 23:19:32 23:45:32 1 B 23:17:12 23:19:32 23:45:26 2 C 23:16:12 23:43:23 23:45:12 From the documentation, the function returns: First non-null of values within each group. Maybe I'm wrong but IIUC, your dataframes (before pd.concat) should look like: >>> df1 instance scan 0 A 23:15:12 1 B 23:17:12 2 C 23:16:12 >>> df2 instance comp 0 A 23:19:32 1 B 23:19:32 2 C 23:43:23 >>> df3 instance sort 0 A 23:45:32 1 B 23:45:26 2 C 23:45:12 So you can use pd.merge instead of pd.concat with reduce function from functools module: from functools import reduce out = reduce(lambda left, right: left.merge(right, on='instance', how='outer'), [df1, df2, df3]) print(out) # Output instance scan comp sort 0 A 23:15:12 23:19:32 23:45:32 1 B 23:17:12 23:19:32 23:45:26 2 C 23:16:12 23:43:23 23:45:12 | 2 | 4 |
77,848,288 | 2024-1-19 | https://stackoverflow.com/questions/77848288/how-to-permute-values-in-a-single-row-of-a-pandas-dataframe | I have some messy data that requires me to split it up as a combination of values. This is sort of what it looks like: import pandas as pd data = {"Name":["name1", "name2"], "A": [[1, 2, 3], [1]], "B": [["a", "b"], ["a"]]} df = pd.DataFrame(data) And I would like to permute the two columns with list data so that the resulting dataframe carries each combination of pairs in individual rows like so: {"Name": ["name1", "name1", "name1", "name1", "name1", "name1", "name2"], "A": [1, 1, 2, 2, 3, 3, 1], "B": ["a", "b", "a", "b", "a", "b", "a"]} How do I do this while keeping non combined columns along side them? | You can explode the two columns successively: df.explode('A').explode('B', ignore_index=True) Output: Name A B 0 name1 1 a 1 name1 1 b 2 name1 2 a 3 name1 2 b 4 name1 3 a 5 name1 3 b 6 name2 1 a If you have more than two columns, you could use functools.reduce: from functools import reduce df = pd.DataFrame({'Name': ['name1', 'name2'], 'A': [[1, 2, 3], [1]], 'B': [['a', 'b'], ['a']], 'C': [['w', 'x'], ['y', 'z']] }) out = reduce(lambda a, b: a.explode(b, ignore_index=True), ['A', 'B', 'C'], df) Or a python loop: out = df for col in ['A', 'B', 'C']: out = out.explode(col, ignore_index=True) Output: Name A B C 0 name1 1 a w 1 name1 1 a x 2 name1 1 b w 3 name1 1 b x 4 name1 2 a w 5 name1 2 a x 6 name1 2 b w 7 name1 2 b x 8 name1 3 a w 9 name1 3 a x 10 name1 3 b w 11 name1 3 b x 12 name2 1 a y 13 name2 1 a z | 2 | 2 |
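For comparison, a plain-Python sketch of the same expansion using itertools.product per row; the chained explode above is usually the more idiomatic pandas route, this just makes the "all combinations within a row" idea explicit.

```python
from itertools import product
import pandas as pd

df = pd.DataFrame({"Name": ["name1", "name2"],
                   "A": [[1, 2, 3], [1]],
                   "B": [["a", "b"], ["a"]]})

rows = [(name, a, b)
        for name, As, Bs in df.itertuples(index=False)
        for a, b in product(As, Bs)]         # every (A, B) pair within a row
out = pd.DataFrame(rows, columns=["Name", "A", "B"])
print(out)
```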
77,843,012 | 2024-1-18 | https://stackoverflow.com/questions/77843012/animating-circles-on-a-matplotlib-plot-for-orbit-simulation-in-python | Intro to the task I am working on a simulating the orbit of moon Phobos around the planet Mars. I have completed the task of using numerical integration to update the velocity and position of both bodies. My final task is to produce an animated plot of the orbit of Phobos, with Mars centred at its initial position. Code snippet # plot the animated orbit def plot_orbit(self): # load all the data from the json file all_data = self.load_data_for_all_bodies() # get the positions and velocities as 2d vectors positions, velocities = self.simulate() # Create a figure and axis fig = plt.figure() ax = plt.axes() mars_initial_pos_x = all_data["Mars"]["initial_position"][0] mars_initial_pos_y = all_data["Mars"]["initial_position"][1] print("mars: ", mars_initial_pos_x, mars_initial_pos_y) phobos_initial_pos_x = all_data["Phobos"]["initial_position"][0] phobos_initial_pos_y = all_data["Phobos"]["initial_position"][1] print("phobos: ", phobos_initial_pos_x, phobos_initial_pos_y) # Set the limits of the axes ax.set_xlim(-9377300, 9377300) # mars as a red circle mars_circle = plt.Circle( (mars_initial_pos_x, mars_initial_pos_y), 4, color="red", label="Mars", animated=True, ) ax.add_patch(mars_circle) # Phobos as a blue circle phobos_circle = plt.Circle( (phobos_initial_pos_x, phobos_initial_pos_y), 80, color="blue", label="Phobos", animated=True, ) ax.add_patch(phobos_circle) # Function to update the position of Phobos def animate(frame): phobos_circle.center = (positions[frame][0], positions[frame][1]) return (phobos_circle,) # Create the animation ani = FuncAnimation( fig, animate, frames=self.iteration_counts, interval=20, blit=True ) plt.title(f"Orbit of {self.name}") plt.xlabel("x (m)") plt.ylabel("y (m)") plt.legend() plt.show() Data file(JSON) { "Mars": { "mass": 6.4185e+23, "initial_position": [ 0, 0 ], "initial_velocity": [ 0, 0 ], "orbital_radius": null }, "Phobos": { "mass": 1.06e+16, "orbital_radius": 9377300.0, "initial_position": [ 9377300.0, 0 ], "initial_velocity": [ 0, 2137.326983980658 ] } } Output In the above output I can see that the positions and velocities array has been correctly filled because of my previous methods. However, the graph that is produced is a complete disaster. I ran the code in Jupyter notebook and VS code, Jupyter notebook does this and VS code prints nothing out on the screen. My first step is to get circles on the graph in the correct positions initially and then think about the animate function. Sadly, I can't see any circles in the output. I used very similar code that was given in the answer to a question about FuncAnimation on stack. Any help would be appreciated! 
Edit revised code(executable) # include neccesary libraries import numpy as np import matplotlib.pyplot as plt import json from matplotlib.animation import FuncAnimation # define constants timestep = 0.001 iterations = 1000 G = 6.674 * 10**-11 def main(): # make the modification modify_initial_velocity_of_phobos() # Example usage mars = Planet("Mars") phobos = Planet("Phobos") # Display information mars.display_info() phobos.display_info() # mars.acceleration() # phobos.acceleration() phobos_orbit = Simulate("Phobos", timestep, iterations) phobos_orbit.plot_orbit() # Function to calculate the initial velocity of a moon orbiting a planet def calculate_orbital_velocity(mass, radius): # use the formula given to calculate return (G * mass / radius) ** 0.5 # self explanatory function def modify_initial_velocity_of_phobos(): # Read data from the JSON file with open("planets.json", "r+") as file: celestial_data = json.load(file) # Extract necessary data for calculation mass_of_mars = celestial_data["Mars"]["mass"] orbital_radius_of_phobos = celestial_data["Phobos"]["orbital_radius"] # Calculate orbital velocity of Phobos velocity_of_phobos = calculate_orbital_velocity( mass_of_mars, orbital_radius_of_phobos ) # Update the initial_velocity for Phobos in the data celestial_data["Phobos"]["initial_velocity"] = [0, velocity_of_phobos] # Move the file pointer is back to the start of the file file.seek(0) # Write the updated data back to the JSON file json.dump(celestial_data, file, indent=4) # Truncate the file to remove any leftover data file.truncate() # create a class for the planet which calculates the net gravitational force and acceleration due to it acting on it class Planet(): def __init__(self, name): self.name = name body_data = self.load_data_for_main_body() # Initialize attributes with data from JSON or default values self.mass = body_data.get("mass") self.position = np.array(body_data.get("initial_position")) self.velocity = np.array(body_data.get("initial_velocity")) self.orbital_radius = body_data.get("orbital_radius", 0) # load main planet(body) data from the json file def load_data_for_main_body(self): with open("planets.json", "r") as file: all_data = json.load(file) body_data = all_data.get(self.name, {}) return body_data # load all data from the json file def load_data_for_all_bodies(self): with open("planets.json", "r") as file: all_data = json.load(file) return all_data # calculate the gravitational force between two bodies def force(self): all_bodies = self.load_data_for_all_bodies() # initialize the total force vector total_force = np.array([0.0, 0.0]) # iterate over all the bodies for body_name, body_data in all_bodies.items(): if body_name == self.name: continue # get the position of each body other_body_position = np.array(body_data["initial_position"]) # get the mass of each body other_body_mass = body_data["mass"] # calculate distance vector between the two bodies r = other_body_position - self.position # Calculate the distance between the two bodies mag_r = np.linalg.norm(r) # Normalize the distance vector r_hat = r / mag_r # Calculate the force vector between the two bodies force = (G * self.mass * other_body_mass) / (mag_r**2) * r_hat # Add the force vector to the total force vector total_force += force return total_force # calculate the acceleration due to the force def acceleration(self): # Calculate the force vector force = self.force() # Calculate the acceleration vector acceleration = force / self.mass return acceleration # update the position of the body using the 
velocity and time step def update_position(self): self.position = self.position + self.velocity * timestep return self.position # update the velocity of the body using the acceleration and time step def update_velocity(self): self.velocity = self.velocity + self.acceleration() * timestep return self.velocity def display_info(self): print(f"Name: {self.name}") print(f"Mass: {self.mass} kg") print(f"Position: {self.position} m") print(f"Velocity: {self.velocity} m/s") if self.orbital_radius is not None: print(f"Orbital Radius: {self.orbital_radius} m") else: print("Orbital Radius: Not applicable") # define class to simulate the orbit of the moon around the planet class Simulate(Planet): def __init__(self, name, delta_t, iteration_counts): super().__init__(name) self.delta_t = delta_t self.iteration_counts = iteration_counts def simulate(self): global timestep # initialize the arrays to store the positions and velocities as vectors positions = np.zeros((self.iteration_counts, 2)) velocities = np.zeros((self.iteration_counts, 2)) # iterate over the number of iterations for i in range(self.iteration_counts): # update the position positions[i] = self.update_position() # update the velocity velocities[i] = self.update_velocity() # update time timestep += self.delta_t print("pos: ", positions) print("vel: ", velocities) return positions, velocities # plot the animated orbit def plot_orbit(self): # load all the data from the json file all_data = self.load_data_for_all_bodies() # get the positions and velocities as vectors positions, velocities = self.simulate() # debug statements print("pos: ", positions) print("vel: ", velocities) # Create a figure and axis fig, ax = plt.subplots() mars_initial_pos_x = all_data["Mars"]["initial_position"][0] mars_initial_pos_y = all_data["Mars"]["initial_position"][1] print("mars: ", mars_initial_pos_x, mars_initial_pos_y) phobos_initial_pos_x = all_data["Phobos"]["initial_position"][0] phobos_initial_pos_y = all_data["Phobos"]["initial_position"][1] print("phobos: ", phobos_initial_pos_x, phobos_initial_pos_y) # Set the limits of the axes ax.set_xlim(-9377300, 9377300) # mars as a red circle mars_circle = plt.Circle( (mars_initial_pos_x, mars_initial_pos_y), 0.1, color="red", label="Mars", ec = "None" ) ax.add_patch(mars_circle) # Phobos as a blue circle phobos_circle = plt.Circle( (phobos_initial_pos_x, phobos_initial_pos_y), 0.1, color="blue", label="Phobos", ec = "None" ) ax.add_patch(phobos_circle) # Function to update the position of Phobos def animate(frame): phobos_circle.center = (positions[frame][0], positions[frame][1]) ax.draw_artist(phobos_circle) return phobos_circle, # Create the animation ani = FuncAnimation( fig = fig, func = animate, frames=self.iteration_counts, interval=20, blit=True ) plt.title(f"Orbit of {self.name}") plt.xlabel("x (m)") plt.ylabel("y (m)") plt.legend() plt.show() fig.savefig('MyFigure.png', dpi=200) # plot the orbit # plt.plot(positions[:, 0], positions[:, 1]) # plt.title(f"Orbit of {self.name}") # plt.xlabel("x(m)") # plt.ylabel("y(m)") # plt.show() if __name__ == "__main__": main() | ok... with the full code it was a lot easier to see where things go wrong :-) The main problems are: your circles are extremely small! you set an enormous x-extent but keep the y-extent to 0-1 (since your axis does not preserve the aspect-ratio (e.g. ax.set_aspect("equal")) this means your circles look like lines... 
your motion is extremely slow your object gets garbage-collected as soon as your main() function has resolved so the animation cannot run To fix your code, this is what you have to do: Give Mars and Phobos a proper radius mars_circle = plt.Circle( (mars_initial_pos_x, mars_initial_pos_y), 1e5, color="red", label="Mars", ec = "None" ) # Phobos as a blue circle phobos_circle = plt.Circle( (phobos_initial_pos_x, phobos_initial_pos_y), 1e5, color="blue", label="Phobos", ec = "None", ) make sure your axis-limits are properly set either ax.set_xlim(-9377300, 9377300) ax.set_ylim(-9377300, 9377300) or ax.set_xlim(-9377300, 9377300) ax.set_aspect("equal") reduce the interval to something you can actually see (otherwise your movement will be extremely slow) self.ani = FuncAnimation( fig = fig, func = animate, frames=self.iteration_counts, interval=.5, blit=True ) make sure the animation object is not garbage-collected # Create the animation self.ani = FuncAnimation( fig = fig, func = animate, frames=self.iteration_counts, interval=.5, blit=True ) and def main(): ... ... phobos_orbit = Simulate("Phobos", timestep, iterations) phobos_orbit.plot_orbit() return phobos_orbit and if __name__ == "__main__": phobos_orbit = main() ... and you apparently don't need an explicit call to ax.draw_artist()... If you change all that, it seems to do the job nicely: | 2 | 3 |
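The garbage-collection point above is also why nothing appeared when running as a plain script in VS Code: the FuncAnimation reference must stay alive, and the script needs a blocking plt.show() or an explicit save. A stripped-down, self-contained sketch (a toy unit circle, not the Phobos data) showing both:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.set_aspect("equal")
dot = plt.Circle((1, 0), 0.1, color="blue")
ax.add_patch(dot)

t = np.linspace(0, 2 * np.pi, 120)

def animate(frame):
    dot.center = (np.cos(t[frame]), np.sin(t[frame]))
    return dot,

# keep a reference, otherwise the animation is garbage-collected immediately
ani = FuncAnimation(fig, animate, frames=len(t), interval=20, blit=True)
# ani.save("orbit.gif", writer="pillow", fps=30)   # alternative to showing interactively
plt.show()
```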
77,845,596 | 2024-1-19 | https://stackoverflow.com/questions/77845596/i-want-to-check-that-a-certain-list-follows-a-pattern-in-python | I'm building a program that needs to have and input from the user where the user introduces a sequence of numbers separated with commas, example: 1,2,3,4. The sequence can be any amount of numbers, have any number in it and have spaces in between, the only thing that has to be followed is that there's a number then a coma and maybe in between somewhere a space I've done this until now `sec=input("Please insert a sequence of numbers separated by commas:") com = 0 num = 1 numlist=[] for x in sec: if x == "," or x==" ": None else: try: numlist.append(int(x)) except: print("Please just insert numbers and commas") if x == ",": com+=1 elif x == " ": None else: num+=1 if com <= num and (com>(num-3)): print(numlist) print(tuple(numlist)) else: print("Please enter a comma between the numbers")` The only thing I'm missing is to check the that the sequence is number comma number comma etc... Because with the code I have actually, you can introduce Number comma comma Number and it would work 1, 2, ,3 5 I was expecting this: Please enter a comma between the numbers Got this Please insert a sequence of numbers separated by commas:1, 2, ,3 5 [1, 2, 3, 5] (1, 2, 3, 5) | You can use regular expression re to match the string whether it follows your pattern or not. For example, if your requirement is: Any string where there is a single comma , between digits, can have multiple spaces in between and the string should end with a number or space (I am assuming you don't want to end with a comma), you can use below expression : ^(\s*\d+\s*\,)*(\s*\d+\s*)$ If you are new to regular expressions, see below explanation: In above expression, (\s*\d+\s*\,), means multiple spaces \s* (even 0) Followed by one or more digit \d+ Followed by again multiple spaces \s* Ending with a comma \,. The * after this group means above pattern can repeat zero or more time The last group (\s*\d+\s*) means a digit with leading and trailing spaces. Your string should end with this pattern. Below is the full code that may help you: import re sec=input("Please insert a sequence of numbers separated by commas:") # Check if your string matches your pattern if re.match(r'^(\s*\d+\s*\,)*(\s*\d+\s*)$', sec): # Split the string from comma # str.split("pattern") converts your string into a string of list # "1, 2, 4".split(",") -> ["1 ", "2 ", "4"] # "1@@ 2@@ abcd".split("@@") -> ["1 ", "2 ", "abcd"] arr = sec.split(",") # For each element in arr, use int() function # map(function, iterable): applies given function on each element # of the iterator. Here, we want to apply int() on each element numlist = list(map(int, arr)) print(numlist) print(tuple(numlist)) # Otherwise ask user to input proper string else: print("Please enter a comma between the numbers") Hope it fulfills your requirement. | 2 | 2 |
77,844,459 | 2024-1-19 | https://stackoverflow.com/questions/77844459/interpolate-points-over-a-surface | I have a function f(x,y) that I know for certain values of X and Y. Then, I have a new function x,y= g(t) that produces a new series of points X and Y. for those points, using the pre-calculated data I need interpolate. import numpy as np import timeit import matplotlib.pyplot as plt def f(X,Y): return np.sin(X)+np.cos(Y) def g(t): return 3*np.cos(t)-0.5, np.sin(5*t)-1 x= np.linspace(-4,4,100) y= np.linspace(-4,4,100) X, Y= np.meshgrid(x,y) Z= f(X,Y) t= np.linspace(0,1,50) x_t, y_t= g(t) plt.figure() plt.imshow(Z, extent=(-4, 4, -4, 4), origin='lower') plt.scatter(x_t,y_t) plt.show() In other words, I need to obtain the shown curve with the previously calculated Z values, doing interpolation since in real life I dont have access to the actual function Many thanks! EDIT: I found a function that does exactly what I want, but it produces the wrongs samples. import numpy as np import timeit import matplotlib.pyplot as plt from scipy.interpolate import interpn from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D def f(X,Y): return np.sin(X)+np.cos(Y) def g(t): return -np.cos(t), np.sin(2*t) x= np.linspace(-4,4,1001) y= np.linspace(-4,4,1001) X, Y= np.meshgrid(x,y) Z= f(X,Y) grid= (x,y) t= np.linspace(0,2*np.pi, 50) x_t, y_t= g(t) xy_t= np.stack([x_t, y_t], axis=-1) Z_real= f(x_t, y_t) Z_inter= interpn(grid, Z, xy_t) fig= plt.figure() gs= fig.add_gridspec(1,2, hspace=0.1, wspace=0.1) ax1= fig.add_subplot(gs[0,0],projection="3d") surf= ax1.plot_surface(X,Y,Z, cmap=cm.jet, alpha=0.3) ax1.scatter(x_t, y_t, Z_real) fig.colorbar(surf, shrink=0.5, aspect=5) ax2= fig.add_subplot(gs[0,1]) ax2.plot(t, Z_real) ax2.plot(t, Z_inter, '+') plt.show() Can anyone tell me what I am doing wrong? | I recently shared a solution to a comparable question that is applicable to your case as well. In brief, scipy.interpolate provides a range of 2D interpolation options. In this instance, I utilized RegularGridInterpolator or CloughTocher2DInterpolator, but you also have the option to employ another methods within scipy.interpolate. RegularGridInterpolator outperforms CloughTocher2DInterpolator in terms of speed. However, the latter does not necessitate data to be arranged on a grid. import scipy.interpolate # Option 1: RegularGridInterpolator z_reg_interpolator = RegularGridInterpolator( (x, y), f(*np.meshgrid(x, y, indexing='ij', sparse=True)) ) Z_reg_inter= z_reg_interpolator(np.array(g(t)).T) # Option 2: CloughTocher2DInterpolator with less datapoints to speed it up n = 101 x_n= np.linspace(-4, 4, n) y_n= np.linspace(-4, 4, n) XY_n = np.meshgrid(x_n,y_n) z_interpolator = scipy.interpolate.CloughTocher2DInterpolator( np.dstack(XY_n).reshape(-1,2), f(*XY_n).reshape(-1,1)) Z_clo_inter = z_interpolator(np.array(g(t)).T) | 3 | 1 |
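An alternative without regular expressions, in case that reads more naturally for a beginner: split on commas and let int() do the validation. This is a sketch with a hypothetical helper name, not the answer's code; it rejects the same "1, 2, ,3 5" input because both the empty piece and "3 5" fail int().

```python
def parse_sequence(text):
    """Return a tuple of ints, or None if the input is not comma-separated numbers."""
    try:
        return tuple(int(piece) for piece in text.split(","))
    except ValueError:
        return None

print(parse_sequence("1, 2, 3, 5"))   # (1, 2, 3, 5): int() tolerates surrounding spaces
print(parse_sequence("1, 2, ,3 5"))   # None
```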
77,841,192 | 2024-1-18 | https://stackoverflow.com/questions/77841192/replacing-values-from-a-given-column-with-the-modified-values-from-a-different-c | I need to update the values of a column (X) in correspondance with certain values ('v') of another column Y with values from another column (Z) times some numbers and then cast the obtained column to int: In pandas the code is as follow: df.loc[df["Y"] == "v", "X"] = ( df.loc[df["Y"] == "v", "Z"] * 1.341 * 15).astype(int) In polars i'm able to select values (e.g. nulls) from a column and replace them with values from another column: df=df.with_columns( pl.when(df['col'].is_null()) .then(df['other_col']) .otherwise(df['col']) .alias('alias_col') ) but i'm stuck at nesting the call to the (modified) third column. Any help would be very much appreciated! | First, we create a dataframe to operate on. I inferred the context and types from your pandas example. import polars as pl df = pl.DataFrame({ "X": [1, 1, 1, 1, 1], "Y": ["w", "w", "v", "w", "v"], "Z": [1, 2, 3, 4, 5], }) Output. shape: (5, 3) βββββββ¬ββββββ¬ββββββ β X β Y β Z β β --- β --- β --- β β i64 β str β i64 β βββββββͺββββββͺββββββ‘ β 1 β w β 1 β β 1 β w β 2 β β 1 β v β 3 β β 1 β w β 4 β β 1 β v β 5 β βββββββ΄ββββββ΄ββββββ Then, it looks like you want to do the following When column Y is equal to "v", then take the integer value of column Z times 3.41 x 15. Otherwise, take the value of columns X. Store the result again in column X. This can be achieved with polars' when-then-otherwise expression. df.with_columns( pl.when( pl.col("Y") == "v" ).then( (pl.col("Z") * 1.341 * 15).cast(pl.Int64) ).otherwise( pl.col("X") ).alias("X") ) Output. shape: (5, 3) βββββββ¬ββββββ¬ββββββ β X β Y β Z β β --- β --- β --- β β i64 β str β i64 β βββββββͺββββββͺββββββ‘ β 1 β w β 1 β β 1 β w β 2 β β 60 β v β 3 β β 1 β w β 4 β β 100 β v β 5 β βββββββ΄ββββββ΄ββββββ | 2 | 3 |
77,842,223 | 2024-1-18 | https://stackoverflow.com/questions/77842223/pivoting-based-on-multiple-columns-and-rearrange-data-in-python-dataframe | I have a python dataframe df1: Load Instance_name counter_name counter_value 0 A bytes_read 0 0 A bytes_written 90 0 B bytes_read 100 0 B bytes_Written 90 1 A bytes_read 10 1 A bytes_written 940 1 B bytes_read 1100 1 B bytes_written 910 To simplify the view, I need something like below, i.e. transform the counter_name column values into columns and rearrange the data: df2 = Load Instance_name bytes_read bytes_written 0 A 0 90 0 B 100 90 1 A 10 940 1 B 1100 910 I am new to python dataframe libraries and not sure of the right way to achieve this. | Try this: df.set_index(['Load', 'Instance_name', 'counter_name'])['counter_value'].unstack() Output: counter_name Load Instance_name bytes_Written bytes_read bytes_written 0 0 A NaN 0.0 90.0 1 0 B 90.0 100.0 NaN 2 1 A NaN 10.0 940.0 3 1 B NaN 1100.0 910.0 Note the typo in the input dataframe, bytes_Written vs bytes_written, which is why a third column appears. | 2 | 3
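On pandas 1.1+ the same reshape can be written with pivot directly, passing a list of index columns; this sketch assumes each (Load, Instance_name, counter_name) combination occurs once (otherwise pivot_table with an aggregation is needed).

```python
import pandas as pd

df = pd.DataFrame({
    "Load": [0, 0, 0, 0, 1, 1, 1, 1],
    "Instance_name": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "counter_name": ["bytes_read", "bytes_written"] * 4,
    "counter_value": [0, 90, 100, 90, 10, 940, 1100, 910],
})

out = (df.pivot(index=["Load", "Instance_name"],
                columns="counter_name",
                values="counter_value")
         .reset_index())
print(out)
```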
77,838,888 | 2024-1-18 | https://stackoverflow.com/questions/77838888/overriding-nested-values-in-polyfactory-with-pydantic-models | Is it possible to provide values for complex types generated by polyfactory? I use pydantic for models and pydantic ModelFactory. I noticed that build method supports kwargs that can provide values for constructed model, but I didn't figure if it's possible to provide values for nested fields. For example, if I have model A which is also the type for field a in model B, is ti possible to construct B via polyfactory and provide some values for field 'a'? I tried to call build with MyFactory.build(**{"a": {"nested_value": "b"}}) but it does not work. Is it possible to override nested values? | just add another Factory for 'b' example code: from pydantic_factories import ModelFactory from datetime import date, datetime from typing import List, Union, Dict from pydantic import BaseModel, UUID4 class B(BaseModel): k1: int k2: int class Person(BaseModel): id: UUID4 name: str hobbies: List[str] age: Union[float, int] birthday: Union[datetime, date] nested_model: B class PersonFactory(ModelFactory): __model__ = Person class KFactory(ModelFactory): __model__ = B result = PersonFactory.build(**{"name" :"test","hobbies" : [1,2],"nested_model" : KFactory.build(k1=1,k2=2)}) print(result) # same result result = PersonFactory.build(**{"name" :"test","hobbies" : [1,2],"nested_model" : KFactory.build(**{"k1":1,"k2":2})}) print(result) result: id=UUID('1ff7c9ed-223a-4f98-a95e-e3307426f54e') name='test' hobbies=['1', '2'] age=488202245.889748 birthday=datetime.date(2023, 3, 2) nested_model=B(k1=1, k2=2) | 4 | 3 |
77,836,174 | 2024-1-17 | https://stackoverflow.com/questions/77836174/how-can-i-add-a-progress-bar-status-when-creating-a-vector-store-with-langchain | Creating a vector store with the Python library langchain may take a while. How can I add a progress bar? Example of code where a vector store is created with langchain: import pprint from langchain_community.vectorstores import FAISS from langchain_community.embeddings import HuggingFaceEmbeddings from langchain.docstore.document import Document model = "sentence-transformers/multi-qa-MiniLM-L6-cos-v1" embeddings = HuggingFaceEmbeddings(model_name = model) def main(): doc1 = Document(page_content="The sky is blue.", metadata={"document_id": "10"}) doc2 = Document(page_content="The forest is green", metadata={"document_id": "62"}) docs = [] docs.append(doc1) docs.append(doc2) for doc in docs: doc.metadata['summary'] = 'hello' pprint.pprint(docs) db = FAISS.from_documents(docs, embeddings) db.save_local("faiss_index") new_db = FAISS.load_local("faiss_index", embeddings) query = "Which color is the sky?" docs = new_db.similarity_search_with_score(query) print('Retrieved docs:', docs) print('Metadata of the most relevant document:', docs[0][0].metadata) if __name__ == '__main__': main() Tested with Python 3.11 with: pip install langchain==0.1.1 langchain_openai==0.0.2.post1 sentence-transformers==2.2.2 langchain_community==0.0.13 faiss-cpu==1.7.4 The vector store is created with db = FAISS.from_documents(docs, embeddings). | Langchain does not natively support any progress bar for this at the moment with release of 1.0.0 I also had similar case, so instead of sending all the documents, I send independent document for ingestion and tracked progress at my end. This was helpful for me. You can do the ingestion in the following way with tqdm(total=len(docs), desc="Ingesting documents") as pbar: for d in docs: if db: db.add_documents([d]) else: db = FAISS.from_documents([d], embeddings) pbar.update(1) From what I checked from langchain code https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/retrievers.py#L31 they are making call to add_texts as well, so no major operation is being performed here other than parsing. I had simple documents, and I didn't observe much difference. Probably others who has tried on huge documents can add if it adds latency in their usecase. Below is your updated code import pprint from tqdm import tqdm from langchain_community.vectorstores import FAISS from langchain_community.embeddings import HuggingFaceEmbeddings from langchain.docstore.document import Document model = "sentence-transformers/multi-qa-MiniLM-L6-cos-v1" embeddings = HuggingFaceEmbeddings(model_name = model) def main(): doc1 = Document(page_content="The sky is blue.", metadata={"document_id": "10"}) doc2 = Document(page_content="The forest is green", metadata={"document_id": "62"}) docs = [] docs.append(doc1) docs.append(doc2) for doc in docs: doc.metadata['summary'] = 'hello' db = None with tqdm(total=len(docs), desc="Ingesting documents") as pbar: for d in docs: if db: db.add_documents([d]) else: db = FAISS.from_documents([d], embeddings) pbar.update(1) # pprint.pprint(docs) # db = FAISS.from_documents(docs, embeddings) db.save_local("faiss_index") new_db = FAISS.load_local("faiss_index", embeddings) query = "Which color is the sky?" docs = new_db.similarity_search_with_score(query) print('Retrieved docs:', docs) print('Metadata of the most relevant document:', docs[0][0].metadata) if __name__ == '__main__': main() | 6 | 7 |
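A small variation on the per-document loop, batching the add_documents calls to cut per-call overhead while still updating tqdm. The batch size of 64 is an arbitrary choice, and only the same FAISS.from_documents / add_documents calls as the answer are used; docs and embeddings are as defined in the answer's code above.

```python
from tqdm import tqdm
from langchain_community.vectorstores import FAISS

batch_size = 64  # arbitrary; tune to taste
db = None
with tqdm(total=len(docs), desc="Ingesting documents") as pbar:
    for start in range(0, len(docs), batch_size):
        batch = docs[start:start + batch_size]
        if db is None:
            db = FAISS.from_documents(batch, embeddings)
        else:
            db.add_documents(batch)
        pbar.update(len(batch))
```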
77,838,069 | 2024-1-18 | https://stackoverflow.com/questions/77838069/unexpected-method-call-order-in-python-multiple-inheritance | I have a child class named USA and it has two parent classes, A and B. Both parents have a method named change_to_4 and in B.__init__ I call the method, but instead of using the method that I defined in B it uses the A definition of the change_to_4 method. class A: def __init__(self) -> None: self.a = 1 super().__init__() def change_to_4(self): self.x = 4 class B: def __init__(self) -> None: self.b = 2 self.change_to_4() super().__init__() def change_to_4(self): self.b = 4 class USA(A, B): def __init__(self) -> None: super().__init__() print(f"A vars = {vars(A())}") print(f"B vars = {vars(B())}") print(f"USA vars = {vars(USA())}") print(f"USA mro -> {USA.__mro__}") I expect something like this: A vars = {'a': 1} B vars = {'b': 4} USA vars = {'a': 1, 'b': 4} USA mro -> (<class '__main__.USA'>, <class '__main__.A'>, <class '__main__.B'>, <class 'object'>) But the output is A vars = {'a': 1} B vars = {'b': 4} USA vars = {'a': 1, 'b': 2, 'x': 4} USA mro -> (<class '__main__.USA'>, <class '__main__.A'>, <class '__main__.B'>, <class 'object'>) | When the interpreter looks up an attribute of an instance, in this case self.change_to_4 on an instance of USA, it first tries to find 'change_to_4' in self.__dict__, and failing to find it there, the interpreter then follows the method resolution order of USA (USA, A, B, object as shown in the output) to find the change_to_4 attribute in the first base class that has it defined. Excerpt from the documentation of Custom classes: Class attribute references are translated to lookups in this dictionary, e.g., C.x is translated to C.__dict__["x"] (although there are a number of hooks which allow for other means of locating attributes). When the attribute name is not found there, the attribute search continues in the base classes. This search of the base classes uses the C3 method resolution order which behaves correctly even in the presence of βdiamondβ inheritance structures where there are multiple inheritance paths leading back to a common ancestor. In this case, A is the first base class of USA that defines change_to_4, so self.change_to_4() gets translated to A.change_to_4(self), resulting in self.x = 4 getting executed. That the call self.change_to_4() is made from a method in B does not change the fact that A comes before B in the method resolution order of a USA instance. | 2 | 3 |
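If the intent is that B.__init__ always runs B's own change_to_4 no matter what subclasses define, one option (a sketch; private name mangling with a double-underscore method is the other common idiom) is to call the method through the class explicitly, which bypasses the MRO lookup on self:

```python
class A:
    def __init__(self) -> None:
        self.a = 1
        super().__init__()

    def change_to_4(self):
        self.x = 4

class B:
    def __init__(self) -> None:
        self.b = 2
        B.change_to_4(self)   # explicit class call: always B's implementation
        super().__init__()

    def change_to_4(self):
        self.b = 4

class USA(A, B):
    def __init__(self) -> None:
        super().__init__()

print(vars(USA()))   # {'a': 1, 'b': 4}
```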
77,835,491 | 2024-1-17 | https://stackoverflow.com/questions/77835491/mypy-type-narrowing-with-recursive-generic-types | Let's say I make a generic class whose objects only contain one value (of type T). T = TypeVar('T') class Contains(Generic[T]): val: T def __init__(self, val: T): self.val = val Note that self.val can itself be a Contains object, so a recursive structure is possible. I want to define a function that will reduce such a structure to a single non-Contains object. def flatten(x): while isinstance(x, Contains): x = x.val return x What should the type signature of 'flatten' be? I tried to make a recursive Nested type Nested = T | Contains['Nested[T]'] but it confuses the type checker as T can also mean a Contains object. def flatten(x: Nested[T]) -> T: while isinstance(x, Contains): reveal_type(x) # reveals Contains[Unknown] | Contains[Nested] x = x.val reveal_type(x) # reveals object* | Unknown return x Another approach was to make a separate class class Base(Generic[T]): self.val: T def __init__(self, val): self.val = val Nested = Base[T] | Contains['Nested[T]'] def flatten(x: Nested[T]) -> T: while isinstance(x, Contains): x = x.val return x.val This works, but you have to wrap the argument in a Base object every time, which is cumbersome. Furthermore, Base has the same behaviour as Contains, it's the same thing written twice! I tried to use NewType instead, but it isn't subscriptable. Is there any nice (or, at least not too ugly) way to do it? | The problem is that static type checkers are typically unable to infer dynamic type changes from assignments such as: x = x.val As a workaround you can make flatten a recursive function instead to avoid an assignment to x: from typing import TypeVar, Generic, TypeAlias T = TypeVar('T') class Contains(Generic[T]): val: T def __init__(self, val: T): self.val = val Nested: TypeAlias = T | Contains['Nested[T]'] def flatten(x: Nested[T]) -> T: if isinstance(x, Contains): return flatten(x.val) reveal_type(x) # Type of "x" is "object*" return x Demo with PyRight here Demo with mypy here | 2 | 2 |
77,834,580 | 2024-1-17 | https://stackoverflow.com/questions/77834580/pyside-qt-align-text-in-vertical-center-for-qpushbutton | How can I vertically align the text in a large font QPushButton? For example this Python code creates a button where the text is not vertically aligned. I have tried everything I can think of to solve it but I can't get it working. from PySide2 import QtCore, QtGui, QtWidgets button = QtWidgets.QPushButton("+") # "a" or "A" button.setStyleSheet("font-size: 100px") layout = QtWidgets.QVBoxLayout() layout.addWidget(button) window = QtWidgets.QWidget() window.setLayout(layout) window.show()' Here is what the code above creates: Note that I am running this code in Maya but it should be the same problem in an QT environment I think. | For example this Python code creates a button where the text is not vertically aligned. In reality, the text is vertically aligned: you are not considering how text is normally rendered. Brief sum of vertical font elements When dealing with text drawing, the font metrics (see typeface anatomy) should always be considered. Vertically speaking, a typeface always has a fundamental height, and its drawing is always based on the base line. Consider the following image, which renders the text xgdΓ_+: The "baseline" is the vertical reference position from which every character is being drawn; you can consider it like the lines of a notebook: it's where you normally place the bottom part of the circle of "o", or the dot of a question mark, and the bottom of letters such as "g" are normally drawn underneath that line. Qt always uses the QFontMetrics of a given font, and it uses font metrics functions in order to properly display text; those functions always have some reference to the base line above. From the base line, then, we can get the following relative distances: descent(): which is the distance to the lowest point of the characters (the descender); xHeight(): the distance to the height of a lower case character that has no ascender, which normally is the height of the letter x (see x-height); capHeight(): the height of a standard upper case letter (see cap height); ascent(): the maximum upper extent of a font above the x-height, normally (but not always) allowing further space for letters such "d", diacritic marks for upper case letters, or fancy decorations; Finally, the whole font height() is the sum of the ascent and descent. From there, you can get the "center" (often coinciding but not to be confused with the median or mean line), which is where a Qt.AlignVCenter aligned text would normally have as its virtual middle line when showing vertically centered text in Qt. Simply put (as more complex text layouts may complicate things), when Qt draws some text, it uses the font metrics height() as a reference, and then aligns the text on the computed distances considering the overall height and/or the ascent and descent of the metrics. Once the proper base line has been found, the text is finally drawn based on it. Vertical alignment is wrong (or not?) When Qt aligns some text, it always considers the metrics of the font, which can be misleading. Consider the case above, and imagine that you wanted to use the underscore character (_) for your button, which clearly is placed way below the base line. The result would be something like this: This is clearly not "vertically aligned". It seems wrong, but is it? As you can see from the image above, the "+" symbol is not vertically aligned to the center in the font I'm using. 
But, even if it was, would it be valid? For instance, consider a button with "x" as its text, but using that letter as its mnemonic shortcut, which is normally underlined. Even assuming that you can center the x, would it be properly aligned when underlined? Some styles even show the underlined mnemonic only upon user interaction (usually by pressing Alt): should the text be vertically translated in that case? Possible implementation Now, it's clear that considering the overall vertical alignment including the mnemonic is not a valid choice. But there is still a possibility, using QStyle functions and some ingenuity. QPushButton draws its contents using a relatively simple paintEvent() override: void QPushButton::paintEvent(QPaintEvent *) { QStylePainter p(this); QStyleOptionButton option; initStyleOption( &option); p.drawControl(QStyle::CE_PushButton, option); } This can be ported in Python with the following: class CustomButton(QPushButton): def paintEvent(self, event): p = QStylePainter(self) option = QStyleOptionButton() self.initStyleOption(option) p.drawControl(QStyle.CE_PushButton, option) Since p.drawControl(QStyle.CE_PushButton, option) will use the option.text, we can draw a no-text button with the following: class CustomButton(QPushButton): def paintEvent(self, event): p = QStylePainter(self) option = QStyleOptionButton() option.text = '' self.initStyleOption(option) p.drawControl(QStyle.CE_PushButton, option) Now comes the problem: how to draw the text. While we could simply use QPainter functions such as drawText(), this won't be appropriate, as we should consider style aspects that use custom drawing: for instance, disabled buttons, or style sheets. A more appropriate approach should consider the offset between the center of the font metrics and the visual center of the character(s) we want to draw. QPainterPath allows us to add some text to a vector path and get its visual center based on its contents. So, we can use addText() and subtract the difference of its center from the center of the font metrics height(). Here is the final result: class CenterTextButton(QPushButton): def paintEvent(self, event): if not self.text().strip(): super().paintEvent(event) return qp = QStylePainter(self) # draw the button without text opt = QStyleOptionButton() self.initStyleOption(opt) text = opt.text opt.text = '' qp.drawControl(QStyle.CE_PushButton, opt) fm = self.fontMetrics() # ignore mnemonics temporarily tempText = text.replace('&', '') if '&&' in text: # consider *escaped* & characters tempText += '&' p = QPainterPath() p.addText(0, 0, self.font(), tempText) # the relative center of the font metrics height fontCenter = fm.ascent() - fm.height() / 2 # the relative center of the actual text textCenter = p.boundingRect().center().y() # here comes the magic... qp.translate(0, -(fontCenter + textCenter)) # restore the original text and draw it as a real button text opt.text = text qp.drawControl(QStyle.CE_PushButtonLabel, opt) Further notes The above implementation is not perfect. 
Most importantly: some styles draw a "focus outline", which normally is a dashed line around the text, and in that case a small dashed rectangle may be drawn at the center of the button; this can be completely disabled by doing opt.state &= ~QStyle.State_HasFocus before qp.drawControl(QStyle.CE_PushButton, opt), but may have effects in case stylesheets are being used; as said, mnemonics that show underlined characters may result in unexpected behavior; character display is not regular: the apparent visual center of a complex shape as that of one or more characters is not always the center of its bounding rectangle; fonts are not reliable: while most common fonts are well designed, there are plenty of faulty fonts that are inconsistent to some extent (and depending on the given size); even fonts have bugs; the above obviously does not consider buttons with icons; Finally, for simple symbols like in this case, using an icon is almost always the better choice. Luckily, Qt also has SVG file support for icons, and it also provides a QIconEngine API that allows further customization. | 2 | 4 |
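Since the answer closes by recommending an icon for simple symbols, a minimal sketch of that route might look like the following; the 64x64 pixmap size, the pen width, and the line coordinates are arbitrary choices for illustration, not values taken from the thread.

    from PySide2 import QtCore, QtGui, QtWidgets

    app = QtWidgets.QApplication([])

    # draw a centered plus sign on a transparent pixmap and use it as the button icon
    pm = QtGui.QPixmap(64, 64)
    pm.fill(QtCore.Qt.transparent)
    p = QtGui.QPainter(pm)
    pen = QtGui.QPen()
    pen.setWidth(8)
    p.setPen(pen)
    p.drawLine(12, 32, 52, 32)  # horizontal bar
    p.drawLine(32, 12, 32, 52)  # vertical bar
    p.end()

    button = QtWidgets.QPushButton()
    button.setIcon(QtGui.QIcon(pm))
    button.setIconSize(pm.size())
    button.show()
    app.exec_()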
77,835,940 | 2024-1-17 | https://stackoverflow.com/questions/77835940/is-there-a-way-to-add-only-specific-elements-of-two-numpy-arrays-together | I am attempting to add together specific elements of two numpy arrays. For example consider these two arrays: f1 = np.array([ [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]] ]) f2 = np.array([ [[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]], [[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]], [[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]] ]) I would like to create a new array that looks like this: [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]] The operation I would like to perform would look something like (f1[r][c][0]+f2[r][c][0])/2 for each array of three values in f1 and f2. I know that np.add(f1,f2)/2 would result in something close to what I'm looking for, except that it would perform the add and scale operations on every element of the array, as opposed to just the first element in each length 3 subarray. Is the best way to do this just breaking up each original array into three separate arrays, thus allowing me to use np.add(f1,f2)/2? Apologies if this is a duplicate, I couldn't seem to find anything about performing an operation like this. | You can index the desired position before computing the sum: (f1[...,0]+f2[...,0])/2 Output: array([[0.5, 0.5, 0.5], [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]) | 2 | 4 |
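If the intent is instead to keep the full 3x3x3 shape and only average the first element of each length-3 subarray (an assumption, since the question only shows the reduced 3x3 output), a copy-and-assign sketch would be:

    import numpy as np

    f1 = np.zeros((3, 3, 3))
    f2 = np.ones((3, 3, 3))

    out = f1.copy()
    out[..., 0] = (f1[..., 0] + f2[..., 0]) / 2  # only the first element of each triple is touched
    print(out[..., 0])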
77,834,081 | 2024-1-17 | https://stackoverflow.com/questions/77834081/pil-not-working-on-python-3-12-even-though-its-installed-and-up-to-date-and | I've upgraded from Python 3.8 32 bit to 3.12 64 bit because of memory limitations. However, the module PIL isn't being recognised, with this error: Traceback (most recent call last): File "C:\Users\rhann\Desktop\Scripts\langton's ant.py", line 2, in <module> from PIL import Image ModuleNotFoundError: No module named 'PIL' The version of Pillow is 10.2.0, and I installed it using the command pip3 install Pillow on the command line. The line causing the error is from PIL import Image. The line before it, import math, does not cause problems. I've looked up questions on this website, and some others, but none of them have helped. I've tried uninstalling and reinstalling Pillow. Let me know if you need any more details. | Each version of Python (and each virtualenv) has a separate set of installed packages -- so just running pip3 will install a package for one Python 3.x environment (usually, but not always, the same one that python3 will start an interpreter for), not every Python 3.x environment. As an alternative to invoking pip3 via its command wrapper, you can invoke pip as a module within the specific interpreter you want it to install a package for; that way it doesn't matter which interpreter's copy of pip3 comes first in your PATH. Installing Pillow for Python 3.12 on Windows this way might look like: py -3.12 -m pip install Pillow On more UNIX-y systems, this might instead be python3.12 -m pip install Pillow. | 3 | 5 |
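To confirm which interpreter a package actually landed in, a quick check run under the same launcher tag as above (for example via py -3.12) might look like this:

    import sys
    print(sys.executable)   # path of the interpreter that is actually running
    import PIL
    print(PIL.__version__)  # raises ModuleNotFoundError here if Pillow isn't installed for it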
77,833,699 | 2024-1-17 | https://stackoverflow.com/questions/77833699/invoking-function-after-keyboardinterrupt | Is is possible to invoke a function in a KeyboardInterrupt after pressing CTRL+C? I have following code: try: #other code except KeyboardInterrupt: GPIO.cleanup() finally: print(summary.getSummary()) I have also tried putting the print(summary.getSummary) before the GPIO.cleanup() inside the exception, but the program exits before executing the print command. Thanks in advance | Catching KeyboardInterrupt should work as expected: import time try: while True: print("Sleeping") time.sleep(1) except KeyboardInterrupt as ki: print("\nCaught KeyboardInterrupt") finally: print("Finally here") prints: ^C Sleeping Caught KeyboardInterrupt Finally here It is possible that the code in the try block is catching KeyboardInterrupt or BaseException, so the outer block cannot catch it. Also note, from the documentation: Catching a KeyboardInterrupt requires special consideration. Because it can be raised at unpredictable points, it may, in some circumstances, leave the running program in an inconsistent state. It is generally best to allow KeyboardInterrupt to end the program as quickly as possible or avoid raising it entirely. (See Note on Signal Handlers and Exceptions.) | 2 | 2 |
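If the try/except/finally ordering still exits before printing (for instance because code inside the try block swallows the exception), a signal-handler variant is another option. GPIO and summary from the question are replaced with stand-in prints here, since they are not importable outside the original script:

    import signal
    import sys
    import time

    def handle_sigint(signum, frame):
        # stand-ins for summary.getSummary() and GPIO.cleanup() from the question
        print("summary would be printed here")
        print("cleanup would run here")
        sys.exit(0)

    signal.signal(signal.SIGINT, handle_sigint)

    while True:
        print("Sleeping")
        time.sleep(1)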
77,831,595 | 2024-1-17 | https://stackoverflow.com/questions/77831595/ax-set-facecolor-between-angles-in-a-polar-plot | Say I have a standard matplotlib polar plot: import matplotlib.pyplot as plt import numpy as np r = np.arange(0, 2, 0.01) theta = 2 * np.pi * r fig, ax = plt.subplots(subplot_kw={'projection': 'polar'}) ax.plot(theta, r) ax.set_rmax(2) ax.set_rticks([0.5, 1, 1.5, 2]) # Less radial ticks ax.set_rlabel_position(-22.5) # Move radial labels away from plotted line ax.grid(True) ax.set_title("A line plot on a polar axis", va='bottom') plt.show() Which yields (source): In my case, the results are not trustworthy between certain angles (e.g. between 0º and 45º in the above figure), so I would like to change the plot background color, but only between those angles; i.e. apply ax.set_facecolor only between certain angles. Is it possible? I could not find online an example for that. | You can use axvspan(...) to color between two angles (in radians). import matplotlib.pyplot as plt import numpy as np r = np.arange(0, 2, 0.01) theta = 2 * np.pi * r fig, ax = plt.subplots(subplot_kw={'projection': 'polar'}) ax.plot(theta, r) ax.set_rmax(2) ax.set_rticks([0.5, 1, 1.5, 2]) # Less radial ticks ax.set_rlabel_position(-22.5) # Move radial labels away from plotted line ax.grid(True) startangle = 0 endangle = 45 ax.axvspan(np.deg2rad(startangle), np.deg2rad(endangle), facecolor='red', alpha=0.3) plt.show() | 2 | 2 |
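If several angular ranges are unreliable rather than just one, the same axvspan call can simply be repeated; a small, hedged extension of the accepted answer (the extra 180-200 degree range is invented for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    r = np.arange(0, 2, 0.01)
    theta = 2 * np.pi * r

    fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
    ax.plot(theta, r)

    # shade every untrusted angular range
    for start_deg, end_deg in [(0, 45), (180, 200)]:
        ax.axvspan(np.deg2rad(start_deg), np.deg2rad(end_deg), facecolor='red', alpha=0.3)

    plt.show()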
77,832,241 | 2024-1-17 | https://stackoverflow.com/questions/77832241/polars-dataframe-sorting-based-on-absolute-value-of-a-column | I would like to sort a polars dataframe based on absolute value of a column in either ascending or descending order. It is easy to do in Pandas, or using sorted function in python. Let's say I want to sort based on val column in the below dataframe. import numpy as np np.random.seed(42) import polars as pl df = pl.DataFrame({ "name": ["one", "one", "one", "two", "two", "two"], "id": ["C", "A", "B", "B", "C", "C"], "val": np.random.randint(-10, 10, 6) }) Returns: ββββββββ¬ββββββ¬ββββββ β name β id β val β β --- β --- β --- β β str β str β i32 β ββββββββͺββββββͺββββββ‘ β one β C β -4 β β one β A β 9 β β one β B β 4 β β two β B β 0 β β two β C β -3 β β two β C β -4 β ββββββββ΄ββββββ΄ββββββ Thanks! | You want to sort based on the absolute value of 'val'? Expressions are your friend, no need for temporary columns: In [33]: df.sort(pl.col('val').abs()) Out[33]: shape: (6, 3) ββββββββ¬ββββββ¬ββββββ β name β id β val β β --- β --- β --- β β str β str β i64 β ββββββββͺββββββͺββββββ‘ β two β B β 0 β β two β C β -3 β β one β C β -4 β β one β B β 4 β β two β C β -4 β β one β A β 9 β ββββββββ΄ββββββ΄ββββββ | 2 | 5 |
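For the descending case the question also asks about, recent polars releases accept a descending flag on sort (older releases spelled it reverse, so treat the keyword name as version-dependent); a sketch reusing the question's frame:

    import numpy as np
    import polars as pl

    np.random.seed(42)
    df = pl.DataFrame({
        "name": ["one", "one", "one", "two", "two", "two"],
        "id": ["C", "A", "B", "B", "C", "C"],
        "val": np.random.randint(-10, 10, 6),
    })

    # largest absolute value first
    print(df.sort(pl.col('val').abs(), descending=True))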
77,831,751 | 2024-1-17 | https://stackoverflow.com/questions/77831751/how-can-i-merge-two-dataframes-keeping-overlapping-values-and-nan-for-other-val | I've got two dataframes, one of which is a timestamp, and the other also has timestamps, but has gaps in it. The two dataframes have some overlap in timestamps, and some not. MWE: This code creates the first dataframe: daterange = pd.date_range(start='1/1/2023 09:30:00', end='1/3/2023 09:35:00', freq = 'min') daterange_keep = (pd.DatetimeIndex(pd.to_datetime(daterange)) .indexer_between_time('09:30', '09:35') ) firstdf= pd.DataFrame(daterange[daterange_keep]) firstdf.columns = ['timestamp'] firstdf This creates the following dataframe, with times from 9:30 to 9:33, 1 Jan to 3 Jan: The second dataframe looks like this: seconddf = pd.DataFrame({'timestamp': ['2023-01-01 09:30:00', '2023-01-01 09:32:00', '2023-01-01 09:34:00', '2023-02-01 09:30:00'], 'value': [3,5,7,9]}) seconddf I want to merge the two dataframes, keeping all the timestamps in the first dataframe and inserting NaNs for the missing data from the second dataframe, and dropping all the data in the second frame that isn't in the first frame. The desired output is: What is the best way to do this? (Ideally I'll also be able to rename the 'value' column but I assume I can do that independently of the merge.) The obvious way appears to be firstdf.merge(seconddf, how = 'inner'), but this yields an error that says I should use concat instead, and I can't figure out how concat can achieve this merge. | Do you missed to convert timestamp of your second dataframe to datetime64? seconddf['timestamp'] = pd.to_datetime(seconddf['timestamp']) out = firstdf.merge(seconddf, on='timestamp', how='left') Output: >>> out timestamp value 0 2023-01-01 09:30:00 3.0 1 2023-01-01 09:31:00 NaN 2 2023-01-01 09:32:00 5.0 3 2023-01-01 09:33:00 NaN 4 2023-01-01 09:34:00 7.0 5 2023-01-01 09:35:00 NaN 6 2023-01-02 09:30:00 NaN 7 2023-01-02 09:31:00 NaN 8 2023-01-02 09:32:00 NaN 9 2023-01-02 09:33:00 NaN 10 2023-01-02 09:34:00 NaN 11 2023-01-02 09:35:00 NaN 12 2023-01-03 09:30:00 NaN 13 2023-01-03 09:31:00 NaN 14 2023-01-03 09:32:00 NaN 15 2023-01-03 09:33:00 NaN 16 2023-01-03 09:34:00 NaN 17 2023-01-03 09:35:00 NaN You can also use: dmap = seconddf.set_index('timestamp')['value'] firstdf['value'] = firstdf['timestamp'].map(dmap) Using indexing: out = (seconddf.set_index('timestamp') .reindex(firstdf['timestamp']) .reset_index()) | 2 | 3 |
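For the renaming wish at the end of the question, one option is to rename the right-hand frame before merging; 'second_value' below is an invented column name, and the frames are rebuilt from the question's own construction:

    import pandas as pd

    daterange = pd.date_range('1/1/2023 09:30:00', '1/3/2023 09:35:00', freq='min')
    keep = pd.DatetimeIndex(daterange).indexer_between_time('09:30', '09:35')
    firstdf = pd.DataFrame({'timestamp': daterange[keep]})

    seconddf = pd.DataFrame({
        'timestamp': pd.to_datetime(['2023-01-01 09:30:00', '2023-01-01 09:32:00',
                                     '2023-01-01 09:34:00', '2023-02-01 09:30:00']),
        'value': [3, 5, 7, 9],
    })

    # rename while merging by renaming the right-hand frame first
    out = firstdf.merge(seconddf.rename(columns={'value': 'second_value'}),
                        on='timestamp', how='left')
    print(out.head())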
77,829,681 | 2024-1-17 | https://stackoverflow.com/questions/77829681/scikit-learn-and-scipy-yield-diverging-cosine-distances-after-removing-featur | When we take an N x M matrix with N observations and M features, a common task is to compute pairwise distances between the N observations, resulting in an N x N distance matrix. The popular Python libraries scipy and scikit-learn both provide methods for performing this task and we expect them to yield the same results for metrics that both have implemented. The following function tests the equivalence for a given matrix called arr: import numpy as np from sklearn.metrics import pairwise_distances from scipy.spatial.distance import pdist, squareform def test_equivalence(arr: np.array, metric="cosine") -> bool: scipy_result = squareform(pdist(arr, metric=metric)) sklearn_result = pairwise_distances(arr, metric=metric) return np.isclose(scipy_result, sklearn_result).all() Now I happen to have this 1219 x 37652 array arr where each row sums to 1 (normalized) and test_equivalence(arr) yields True, as expected. That is, the N x N cosine-distance matrices returned by both libraries can be used interchangeably. However, when I cull the last i columns, test_equivalence(arr[:, -i]) yields True only up to a certain value (which happens to be i = 25676). From this value onwards, the equivalence does not hold. I am completely short of ideas why this is, any guidance? I may share the array as .npz file for debugging if someone can advise how, but maybe someone already has a premonition. The ultimate question will be, of course, which implementation should I be using? I've also tested the failing arr[:, -25675] with these other metrics: ["braycurtis", "canberra", "chebyshev", "cityblock", "correlation", "euclidean", "hamming", "matching", "minkowski", "rogerstanimoto", "russellrao", "seuclidean", "sokalmichener", "sokalsneath", "sqeuclidean", "yule"] out of which all except "correlation" were equivalent. Edit: A reduced (1219 x 96) array that fails the equivalence test can be downloaded from https://drive.switch.ch/index.php/s/B19JbTL5aZ4pY3f/download and loaded via np.load("tf_matrix.npz")["arr_0"]. | There are a few diagnostics you can try in this situation. One diagnostic you can try is to plot the places where the two methods of computing distance disagree. # Modified version of test_equivalence() that returns boolean matrix of disagreements def test_equivalence(arr: np.array, metric="cosine"): scipy_result = squareform(pdist(arr, metric=metric)) sklearn_result = pairwise_distances(arr, metric=metric) return np.isclose(scipy_result, sklearn_result) plt.imshow(test_equivalence(arr)) This gives the following plot: Note that it is True everywhere except for a horizontal line around 1200, and a vertical line around 1200. So it's not that the two methods disagree about all vectors - they disagree in all comparisons involving a particular vector. Let's find out which column contains the vector that they disagree over: >>> row, col = np.where(~test_equivalence(arr)) >>> print(col[0]) 1168 Is there anything strange about vector 1168? 
>>> print(arr[1168]) [1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23 1.20000016e-23] This vector is very, very small. Is it unusually small compared to other vectors, though? You can test this by graphing the euclidean length of each vector by position within the array. plt.scatter(np.arange(len(arr)), np.linalg.norm(arr, axis=1)) plt.yscale('log') This plot shows that most vectors have a euclidean length of about 0.1, except one vector, which is 20 orders of magnitude smaller. It's vector 1168 again. To check this theory about small vectors causing the problem, here is an alternate way of showing the problem. I took your array, and repeatedly simplified it until I had a test case which was as simple as possible, but still showed the issue. arr_small = np.array([[1, 0], [1e-15, 1e-15]]) print(test_equivalence(arr_small)) print(squareform(pdist(arr_small, metric="cosine"))) print(pairwise_distances(arr_small, metric="cosine")) Output: [[ True False] [False True]] [[0. 0.29289322] [0.29289322 0. ]] [[0. 1.] [1. 0.]] I declare two vectors, one with coordinates (1, 0), and the other with coordinates (1e-15, 1e-15). These should have an angle of 45 degrees between them. In cosine distance terms, that should be 1 - cos(45 degrees) = 0.292. The pdist() function agrees with this calculation. However, pairwise_distances() says the distance is 1. In other words, it says that the two vectors are orthogonal. Why does it do this? Let's look at the definition of cosine distance to understand why. Image credit: SciPy documentation In this equation, if either u or v contain all zeros, then the denominator will be zero, and you'll have a division by zero, which is undefined. What pairwise_distances() does in this case is that in any case where the euclidean length of a vector is "too small," then the length of the vector is replaced with 1 instead to avoid the division by zero. This causes the numerator to be much smaller than the denominator, so the fraction is 0, and the distance becomes 1. More precisely, a vector is "too small" when the length of the vector is smaller than 10 times the machine epsilon for the relevant type, which is approximately 2.22e-15 for 64-bit floats. 
(Source.) In contrast, pdist() does not contain any code to avoid this division by zero. >>> print(squareform(pdist(np.array([[1, 0], [0, 0]]), metric="cosine"))) [[ 0. nan] [nan 0.]] | 2 | 2 |
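As a practical follow-up to the diagnosis above, screening out near-zero rows before computing distances sidesteps the disagreement entirely; the sketch below uses a random stand-in array (not the question's data) and mirrors the 10 x machine-epsilon cutoff quoted from scikit-learn:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.metrics import pairwise_distances

    rng = np.random.default_rng(0)
    arr = rng.random((50, 20))
    arr[7] = 1e-20  # inject one pathologically small row, like vector 1168 above

    norms = np.linalg.norm(arr, axis=1)
    mask = norms > 10 * np.finfo(arr.dtype).eps   # same cutoff scikit-learn uses internally
    arr_clean = arr[mask]

    a = squareform(pdist(arr_clean, metric="cosine"))
    b = pairwise_distances(arr_clean, metric="cosine")
    print(np.isclose(a, b).all())  # True once the tiny row is gone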
77,792,905 | 2024-1-10 | https://stackoverflow.com/questions/77792905/pyside2-load-svg-from-variable-rather-than-from-file | I want to display an svg file using PySide. But I would prefer not having to create a file.svg and then load it using QSvgWidget Rather I just have the file contents of the file.svg contents already stored in a variable as a string. How would I go about directly loading the SVG from the variable rather than form the file. Best | You need to convert the string to a QByteArray. From the QSvgWidget.load documentation: PySide6.QtSvgWidgets.QSvgWidget.load(contents) PARAMETERS: contents β PySide6.QtCore.QByteArray Here's a full demo: from PySide6.QtWidgets import QApplication from PySide6.QtSvgWidgets import QSvgWidget from PySide6.QtCore import QByteArray import sys svg_string="""<svg width="50" height="50"> <circle cx="25" cy="25" r="20"/> </svg>""" svg_bytes = QByteArray(svg_string) app = QApplication(sys.argv) svgWidget = QSvgWidget() svgWidget.renderer().load(svg_bytes) svgWidget.show() sys.exit(app.exec()) | 2 | 2 |
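Because the question targets PySide2 while the accepted answer shows PySide6, a hedged PySide2 translation could look like this; QSvgWidget lives in QtSvg there, and the string is encoded to bytes explicitly to stay on the safe side of the QByteArray constructor:

    import sys
    from PySide2.QtWidgets import QApplication
    from PySide2.QtSvg import QSvgWidget
    from PySide2.QtCore import QByteArray

    svg_string = """<svg width="50" height="50">
      <circle cx="25" cy="25" r="20"/>
    </svg>"""

    app = QApplication(sys.argv)
    svgWidget = QSvgWidget()
    svgWidget.load(QByteArray(svg_string.encode('utf-8')))  # load from memory, no file needed
    svgWidget.show()
    sys.exit(app.exec_())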
77,823,218 | 2024-1-16 | https://stackoverflow.com/questions/77823218/how-to-know-if-virtual-environment-is-working-in-vs-code | Iβve read through VS Codeβs documentation on creating a virtual environment and Iβve used the venv command to do so, but how do I know if my current Python files are working using that Virtual Environment? In the folder Iβm saving my .py files in, I see the .venv folder which was created. Is that it or is there more to it? Iβve used VS Codeβs terminal commands and followed the documentation instructions, but there is no documentation to tell me exactly what Iβm supposed to see when it works, so I donβt know if Iβm done. | You want to check if you are in the venv or not: Type which python into VS Code's integrated terminal. If you are using the venv, it will show the path to the python executable inside the venv. For example, for me, the .venv folder is located in /Users/lion/example_folder, but for you the path would be different. If the path does not include the .venv folder inside your Python Projects folder, then you are not inside the venv. For example, the path shown by the command is /usr/bin/python, but the .venv is at /user/Python Projects. You are not inside the venv and you want to activate it Inside VS Code's terminal, type source .venv/bin/activate (or specify the absolute path source ~/example_folder/.venv/bin/activate). | 3 | 3 |
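A shell-independent check from inside Python is also possible: sys.prefix differs from sys.base_prefix only when a virtual environment is active.

    import sys

    in_venv = sys.prefix != sys.base_prefix
    print("virtual env active:", in_venv)
    print("interpreter:", sys.executable)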
77,807,142 | 2024-1-12 | https://stackoverflow.com/questions/77807142/attributeerror-calling-operator-bpy-ops-import-scene-obj-error-could-not-be | I am trying to write a python script that will convert triangular-mesh objects to quad-mesh objects. For example, image (a) will be my input (.obj/.stl) file and image (b) will be the output. I am a noob with mesh-algorithms or how they work all together. So, far this is the script I have written: import bpy inp = 'mushroom-shelve-1-merged.obj' # Load the triangle mesh OBJ file bpy.ops.import_scene.obj(filepath=inp, use_smooth_groups=False, use_image_search=False) # Get the imported mesh obj = bpy.context.selected_objects[0] # Convert triangles to quads # The `beauty` parameter can be set to False if desired bpy.ops.object.mode_set(mode='EDIT') bpy.ops.mesh.select_all(action='SELECT') bpy.ops.mesh.tris_convert_to_quads(beauty=True) bpy.ops.object.mode_set(mode='OBJECT') # Export to OBJ with quads bpy.ops.export_scene.obj(filepath='quad_mesh.obj') This results in the following error: Traceback (most recent call last): File "/home/arrafi/mesh-convert-application/test.py", line 8, in <module> bpy.ops.import_scene.obj(filepath=inp, File "/home/arrafi/mesh-convert-application/venv/lib/python3.10/site-packages/bpy/4.0/scripts/modules/bpy/ops.py", line 109, in __call__ ret = _op_call(self.idname_py(), kw) AttributeError: Calling operator "bpy.ops.import_scene.obj" error, could not be found Any help with what I am doing wrong here would be greatly appreciated. Also please provide your suggestions for if you know any better way to convert triangular-mesh to quad-mesh with Python. If you guys know of any API that I can call with python to do the conversion, that would work too. | Turns out bpy.ops.import_scene.obj was removed at bpy==4 which is the latest blender-api for python, hence the error. In bpy>4 you have to use bpy.ops.wm.obj_import(filepath='') I just downgraded to bpy==3.60 to import object directly in the current scene. pip install bpy==3.6.0 I also modified my script to take input of .obj files in triangular-mesh and then convert the mesh to quadrilateral, then export as both stl and obj. Here's my working script: def convert_tris_to_quads(obj_path, export_folder): try: filename = os.path.basename(obj_path).split('.')[0] logging.info(f"Importing {obj_path}") bpy.ops.object.select_all(action='DESELECT') bpy.ops.object.select_by_type(type='MESH') bpy.ops.object.delete() bpy.ops.import_scene.obj(filepath=obj_path) print("current objects in the scene: ", [obj for obj in bpy.context.scene.objects]) for obj in bpy.context.selected_objects: bpy.context.view_layer.objects.active = obj logging.info("Converting mesh") bpy.ops.object.mode_set(mode='EDIT') bpy.ops.mesh.select_all(action='SELECT') bpy.ops.mesh.tris_convert_to_quads() bpy.ops.object.mode_set(mode='OBJECT') # Export to OBJ obj_export_path = export_folder + filename + '_quad.obj' logging.info(f"Exporting OBJ to {obj_export_path}") bpy.ops.export_scene.obj(filepath=obj_export_path, use_selection=True) # Export to STL stl_export_path = export_folder + filename + '_quad.stl' logging.info(f"Exporting STL to {stl_export_path}") bpy.ops.export_mesh.stl(filepath=stl_export_path, use_selection=True) except Exception as e: logging.error(f"Error processing {obj_path}: {e}") return False This still might not be the best approach to this, so do let me know if anyone know any better approach. | 10 | 10 |
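If staying on bpy 4.x were preferable to downgrading, the replacement operators mentioned in the answer would be used roughly as below; this is a sketch only, and the exact keyword arguments accepted by the 4.x importer/exporter may differ between builds, so check them in your installation before relying on it:

    import bpy

    # Blender/bpy 4.x replacements for the removed import_scene.obj / export_scene.obj
    bpy.ops.wm.obj_import(filepath='mushroom-shelve-1-merged.obj')

    obj = bpy.context.selected_objects[0]
    bpy.context.view_layer.objects.active = obj

    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.tris_convert_to_quads()
    bpy.ops.object.mode_set(mode='OBJECT')

    bpy.ops.wm.obj_export(filepath='quad_mesh.obj')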