question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
75,761,216 | 2023-3-16 | https://stackoverflow.com/questions/75761216/how-can-i-create-an-avro-schema-from-a-python-class | How can I transform my simple python class like the following into a avro schema? class Testo(SQLModel): name: str mea: int This is the Testo.schema() output { "title": "Testo", "type": "object", "properties": { "name": { "title": "Name", "type": "string" }, "mea": { "title": "Mea", "type": "integer" } }, "required": [ "name", "mea" ] } from here I would like to create an Avro record. This can be converted online on konbert.com (select JSON to AVRO Schema) and it results in the Avro schema below. (all valid despite the name field which should be "Testo" instead of "Record".) { "type": "record", "name": "Record", "fields": [ { "name": "title", "type": "string" }, { "name": "type", "type": "string" }, { "name": "properties.name.title", "type": "string" }, { "name": "properties.name.type", "type": "string" }, { "name": "properties.mea.title", "type": "string" }, { "name": "properties.mea.type", "type": "string" }, { "name": "required", "type": { "type": "array", "items": "string" } } ] } Anyhow, if they can do it, there certainly must be a way to convert it with current python libraries. Which library can do a valid conversion (and also complex python models/classes? If there is an opinion of that this is a wrong approach, that is also welcome - if - pointing out a better way how this translation process can be done. | It looks like there are a few libraries that aim to provide this kind of functionality: py-avro-schema has support for generic Python classes dataclasses-avroschema has support for dataclasses, pydantic models, and faust records pydantic-avro requires your Python class to inherit from pydantic.BaseModel | 3 | 3 |
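For illustration, here is a minimal sketch of the dataclasses-avroschema route mentioned in that answer. It swaps the SQLModel base for a plain dataclass purely to keep the example self-contained (an assumption, not part of the original question), and exact field-type mappings depend on the library version.

```python
# Sketch only: assumes the dataclasses-avroschema package is installed.
import dataclasses
from dataclasses_avroschema import AvroModel

@dataclasses.dataclass
class Testo(AvroModel):
    name: str
    mea: int

# Emits an Avro record schema whose name is "Testo", unlike the generic
# "Record" name produced by the online converter in the question.
print(Testo.avro_schema())
```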
75,785,663 | 2023-3-19 | https://stackoverflow.com/questions/75785663/mkdocs-mkdocstrings-add-link-back-to-github-source-code | I am making docs for my Python code with mkdocs and mkdocstrings. I would like to have a link from the docs to the source code (page and line number on GitHub). Is there a way to automatically add that with the function/class syntax (e.g., '::: identifier')? I am looking for something similar to the 'SciPy' docs, where they have a [source] button (to the right of the function heading). For example: Poisson distribution | You can find inspiration in this archived repository: https://github.com/AI2Business/mkdocstrings-sourcelink. As of now (2023/07/06) there's no built-in way to achieve that. VCS support might happen in the future :) (source: am maintainer). | 5 | 1
75,793,794 | 2023-3-20 | https://stackoverflow.com/questions/75793794/adding-getitem-accessor-to-python-class-method | I'm attempting to add an item getter (__getitem__, to provide the [] syntax) to a class method so that I can use some unique-ish syntax to provide types to functions outside the normal parentheses, like the following. The syntax on the last line (of this first snippet) is really the goal for this whole endeavor. class MyClass: @typedmethod def compute_typed_value(self, value, *args, **kwargs): print(self, args, kwargs) result = TypedMethod.requested_type(kwargs)(value) self.my_other_method() return result def my_other_method(self): print('Doing some other things!') return 3 a = MyClass() a.compute_typed_value[int]('12345') # returns int value 12345 Additionally, I'd like to retain the intuitive behavior that a defined function can be called like a function, potentially with a default value for the type, like so: a = MyClass() a.compute_typed_value('12345') # should return whatever the default type is, with the value of '12345', # or allow some other default behavior In a broader context, this would be implemented as a piece of an API adapter that implements a generic request processor, and I'd like the data to come out of the API adapter in a specific format. So the way that this might look in actual use could be something like the following: @dataclass class MyAPIData: property_a: int = 0 property_b: int = 0 class MyAPIAdapter: _session def __init__(self, token): self._init_session(token) @typedmethod def request_json(self, url, **kwargs): datatype = TypedMethod.requested_type(kwargs) response_data = self._session.get(url).json() if datatype: response_data = datatype(**response_data) return response_data def fetch_myapidata(self, search): return self.request_json[MyAPIData](f"/myapi?q={search}") I'm attempting to achieve this kind of behavior with a decorator that I can throw onto any function that I want to enable this behavior. Here is my current full implementation: from functools import partial class TypedMethod: _REQUESTED_TYPE_ATTR = '__requested_type' def __init__(self, method): self._method = method print(method) self.__call__ = method.__call__ def __getitem__(self, specified_type, *args, **kwargs): print(f'getting typed value: {specified_type}') if not isinstance(specified_type, type): raise TypeError("Only Type Accessors are supported - must be an instance of `type`") return partial(self.__call__, **{self.__class__._REQUESTED_TYPE_ATTR: specified_type}) def __call__(self, *args, **kwargs): print(args, kwargs) return self._method(self, *args, **kwargs) @classmethod def requested_type(cls, foo_kwargs): return foo_kwargs[cls._REQUESTED_TYPE_ATTR] if cls._REQUESTED_TYPE_ATTR in foo_kwargs else None def typedmethod(foo): print(f'wrapping {foo.__name__} with a Typed Method: {foo}') _typed_method = TypedMethod(foo) def wrapper(self, *args, **kwargs): print('WRAPPER', self, args, kwargs) return _typed_method(self, *args, **kwargs) _typed_method.__call__ = wrapper return _typed_method class MyClass: @typedmethod def compute_typed_value(self, value, *args, **kwargs): print(self, args, kwargs) result = TypedMethod.requested_type(kwargs)(value) print(result) self.my_other_method() return result def my_other_method(self): print('Doing some other things!') return 3 a = MyClass() a.compute_typed_value[int]('12345') If you run this code, it will fail stating that 'TypedMethod' object has no attribute 'my_other_method'. 
Further inspection reveals that the first line of compute_typed_value is not printing what one would intuitively expect from the code: <__main__.TypedMethod object at 0x10754e790> () {'__requested_type': <class 'int'>} Specifically, the first item printed, which is a TypedMethod instead of a MyClass instance Basically, the idea is use the __getitem__ callout to generate a functools.partial so that the subsequent call to the resulting function contains the __getitem__ key in a known "magic" kwargs value, which should hypothetically work, except that now the self reference that is available to MyClass.compute_typed_value is actually a reference to the TypedMethod instance generated by the wrapper instead of the expected MyClass instance. I've attempted a number of things to get the MyClass instance passed as self, but since it's implemented as a decorator, the instance isn't available at the time of decoration, meaning that somehow it needs to be a bound method at the time of function execution, I think. I know I could just pass this value in as like the first positional argument, but I want it to work with the square bracket annotation because I think it'd be cool and more readable. This is mostly a learning exercise to understand more of Python's inner workings, so the answer could ultimately be "no". | I never thought I would use a Python class this way, but if the pattern fits... I put a cleaned-up example of the below answer in this github gist that shows the same implementation and usage. The issue I was having was that the class instance reference (self) was being overridden due to the reassignment of the function itself via a decorator. My solution is basically to add another class-level decorator to manually put that reference back. This looks a little different than my originally asked question due to a change in naming, but the essence is still the same. We define a class that is itself used as the decorator. Python just allows this because class types, when called, are function calls themselves, to create and init a new instance of the class. This also allows us to provide a few more magic methods to make other quality of life items easier. 
Here's what that looks like: from functools import partial, wraps from types import MethodType from functools import partial, wraps from types import MethodType, FunctionType from typing import Any, Callable class subscriptable: _SUBSCRIPT_KEY = '___SUBSCRIPT_KEY' _HAS_SUBSCRIPTABLE_METHODS = '___HAS_SUBSCRIPTABLE_METHODS' _callout: Callable def __init__(self, callout: Callable, instance: Any = None): if instance is not None and isinstance(callout, FunctionType): self._callout = MethodType(callout, instance) else: self._callout = callout def bind(self, instance): return self.__class__(self._callout, instance=instance) def __getitem__(self, specified_type): return partial(self.__call__, **{self.__class__._SUBSCRIPT_KEY: specified_type}) def __call__(self, *args, **kwargs): """A transparent passthrough to the wrapped method""" return self._callout(*args, **kwargs) def __str__(self): return f"<{self.__class__.__name__} {self._callout}>" @classmethod def has_key(cls, foo_kwargs): """A utility method to determine whether the provided kwargs has the expected subscript key""" return cls._SUBSCRIPT_KEY in foo_kwargs @classmethod def key(cls, foo_kwargs:dict): """A utility method that allows the subscript key to be consumed by the wrapped method, without needing to know the inner workings""" return foo_kwargs.pop(cls._SUBSCRIPT_KEY, None) @classmethod def container(cls, clazz): """A decorator for classes containing `subscriptable` methods""" if not hasattr(clazz, cls._HAS_SUBSCRIPTABLE_METHODS): orig_init = clazz.__init__ @wraps(clazz.__init__) def __init__(self, *args, **kwargs): for attr_name in dir(self): attr_value = getattr(self, attr_name) if isinstance(attr_value, cls): setattr(self, attr_name, attr_value.bind(self)) orig_init(self, *args, **kwargs) clazz.__init__ = __init__ setattr(clazz, cls._HAS_SUBSCRIPTABLE_METHODS, True) return clazz As a bonus feature, As some previously written answers allude to, class functions are not the only place that something like this might be useful, so this also allows for standalone functions to exhibit the same behavior. See this example, and its output: @subscriptable def other_typed_value(value, **kwargs): subscript = subscriptable.key(kwargs) print(subscript, value) value = other_typed_value[int]('12345') value = other_typed_value('12345') print("Standard function str:", other_typed_value) Produces the output: <class 'int'> 12345 None 12345 Standard function str: <subscriptable <function other_typed_value at 0x000001DC809384C0>> And finally, the original point of the question, whether we can apply this pattern to class methods. The answer is yes, but with the assistance of yet another decorator. This is where subscriptable.container steps in. Since we can't access the instance at the time of class definition, I used an additional decorator to provide a pre-init hook that initializes all the functions so they are usable as expected (as properly bound class methods, even!), available even in the __init__ function. This kind of processing is probably pretty slow, but for my use case, it's mostly for singletons anyway. 
@subscriptable.container class MyClass: @subscriptable def compute_typed_value(self, value, *args, **kwargs): print(self, args, kwargs) if subscriptable.has_key(kwargs): value = subscriptable.key(kwargs)(value) self.my_other_method() return value def my_other_method(self): print('Doing some other things!') return 3 a = MyClass() value = a.compute_typed_value[int]('12345') print(value, type(value)) value = a.compute_typed_value('12345') print(value, type(value)) print("Class Method str:", a.compute_typed_value) Anyhoo, the above code yields the following output, which you'll notice has all the correct references in the places they were missing before. great success! Doing some other things! 12345 <class 'int'> <__main__.MyClass object at 0x000001DC808EB1C0> () {} Doing some other things! 12345 <class 'str'> Class Method str: <subscriptable <bound method MyClass.compute_typed_value of <__main__.MyClass object at 0x000001DC808EB1C0>>> I was hoping to do this without a second decorator, but a single class decorator to enable the desired behavior when I'm already using one is a price I'm willing to pay. | 3 | 0 |
75,804,182 | 2023-3-21 | https://stackoverflow.com/questions/75804182/valueerror-object-has-no-field-when-using-monkeypatch-setattr-on-a-pydantic-bas | I usually can patch methods of normal objects using pytest monkeypatch. However when I try with a pydantic.BaseModel it fails. from pydantic import BaseModel class Person(BaseModel): name: str age: int def intro(self) -> str: return f"I am {self.name}, {self.age} years old." def test_person_intro(monkeypatch): p = Person(name='Joe', age=20) monkeypatch.setattr(p, 'intro', lambda: 'patched intro') assert p.intro() == 'patched intro' Raises: monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x10f0d3040> def test_intro(monkeypatch): p = Person(name='Joe', age=20) > monkeypatch.setattr(p, 'intro', lambda: 'patched intro') - - - - - - - - - - - - - - - - - - - - - > ??? E ValueError: "Person" object has no field "intro" pydantic/main.py:422: ValueError | You could patch the type: def test_person_intro(monkeypatch): p = Person(name='Joe', age=20) monkeypatch.setattr(Person, 'intro', lambda self: 'patched intro') assert p.intro() == 'patched intro' Or you could set item in the instance dict: def test_person_intro(monkeypatch): p = Person(name='Joe', age=20) monkeypatch.setitem(p.__dict__, 'intro', lambda: 'patched intro') assert p.intro() == 'patched intro' The reason an instance setattr isn't working as usual is that pydantic.BaseModel overrides __setattr__ to only allow a restricted set of names. You will see the same error doing person.intro = "other" as well, it is unrelated to pytest's monkeypatch fixture. | 4 | 4 |
75,773,786 | 2023-3-18 | https://stackoverflow.com/questions/75773786/why-cant-i-access-gpt-4-models-via-api-although-gpt-3-5-models-work | I'm able to use the gpt-3.5-turbo-0301 model to access the ChatGPT API, but not any of the gpt-4 models. Here is the code I am using to test this (it excludes my openai API key). The code runs as written, but when I replace "gpt-3.5-turbo-0301" with "gpt-4", "gpt-4-0314", or "gpt-4-32k-0314", it gives me an error openai.error.InvalidRequestError: The model: `gpt-4` does not exist I have a ChatGPT+ subscription, am using my own API key, and can use gpt-4 successfully via OpenAI's own interface. It's the same error if I use gpt-4-0314 or gpt-4-32k-0314. I've seen a couple of articles claiming this or similar code works using 'gpt-4' as the model specification, and the code I pasted below is from one of them. Is it possible to access the gpt-4 model via Python + API, and if so, how? openai_key = "sk..." openai.api_key = openai_key system_intel = "You are GPT-4, answer my questions as if you were an expert in the field." prompt = "Write a blog on how to use GPT-4 with python in a jupyter notebook" # Function that calls the GPT-4 API def ask_GPT4(system_intel, prompt): result = openai.ChatCompletion.create(model="gpt-3.5-turbo-0301", messages=[{"role": "system", "content": system_intel}, {"role": "user", "content": prompt}]) print(result['choices'][0]['message']['content']) # Call the function above ask_GPT4(system_intel, prompt) | Currently the GPT-4 API is restricted, even to users with a ChatGPT Plus subscription. You may need to join the waitlist for the API. | 20 | 18
75,804,781 | 2023-3-21 | https://stackoverflow.com/questions/75804781/how-to-create-pyspark-dataframes-from-pandas-dataframes-with-pandas-2-0-0 | I normally use spark.createDataFrame() which used to throw me deprecated warnings about iteritems() call in earlier version of Pandas. With pandas 2.0.0 it doesn't work at all, resulting in an error below: AttributeError Traceback (most recent call last) File <command-2209449931455530>:64 61 df_train_test_p.loc[df_train_test_p.is_train=='N','preds']=preds_test 63 # save the original table and predictions into spark dataframe ---> 64 df_test = spark.createDataFrame(df_train_test_p.loc[df_train_test_p.is_train=='N']) 65 df_results = df_results.union(df_test) 67 # saving all relevant data File /databricks/spark/python/pyspark/instrumentation_utils.py:48, in _wrap_function.<locals>.wrapper(*args, **kwargs) 46 start = time.perf_counter() 47 try: ---> 48 res = func(*args, **kwargs) 49 logger.log_success( 50 module_name, class_name, function_name, time.perf_counter() - start, signature 51 ) 52 return res File /databricks/spark/python/pyspark/sql/session.py:1211, in SparkSession.createDataFrame(self, data, schema, samplingRatio, verifySchema) 1207 data = pd.DataFrame(data, columns=column_names) 1209 if has_pandas and isinstance(data, pd.DataFrame): 1210 # Create a DataFrame from pandas DataFrame. -> 1211 return super(SparkSession, self).createDataFrame( # type: ignore[call-overload] 1212 data, schema, samplingRatio, verifySchema 1213 ) 1214 return self._create_dataframe( 1215 data, schema, samplingRatio, verifySchema # type: ignore[arg-type] 1216 ) File /databricks/spark/python/pyspark/sql/pandas/conversion.py:478, in SparkConversionMixin.createDataFrame(self, data, schema, samplingRatio, verifySchema) 476 warn(msg) 477 raise --> 478 converted_data = self._convert_from_pandas(data, schema, timezone) 479 return self._create_dataframe(converted_data, schema, samplingRatio, verifySchema) File /databricks/spark/python/pyspark/sql/pandas/conversion.py:516, in SparkConversionMixin._convert_from_pandas(self, pdf, schema, timezone) 514 else: 515 should_localize = not is_timestamp_ntz_preferred() --> 516 for column, series in pdf.iteritems(): 517 s = series 518 if should_localize and is_datetime64tz_dtype(s.dtype) and s.dt.tz is not None: File /local_disk0/.ephemeral_nfs/envs/pythonEnv-fefe10af-04b7-4277-b395-2f16b77bd90b/lib/python3.9/site-packages/pandas/core/generic.py:5981, in NDFrame.__getattr__(self, name) 5974 if ( 5975 name not in self._internal_names_set 5976 and name not in self._metadata 5977 and name not in self._accessors 5978 and self._info_axis._can_hold_identifiers_and_holds_name(name) 5979 ): 5980 return self[name] -> 5981 return object.__getattribute__(self, name) AttributeError: 'DataFrame' object has no attribute 'iteritems' How can I solve this? I use Spark 3.3.2. I did find seemingly updated code with problematic call replaced here: https://github.com/apache/spark/blob/master/python/pyspark/sql/pandas/conversion.py but not sure which version it is and whether it is available. 
EDIT: sample code which reproduces the problem below: import pandas as pd from pyspark.sql import SparkSession # create a sample pandas dataframe data = {'name': ['John', 'Mike', 'Sara', 'Adam'], 'age': [25, 30, 18, 40]} df_pandas = pd.DataFrame(data) # convert pandas dataframe to PySpark dataframe spark = SparkSession.builder.appName('pandasToSpark').getOrCreate() df_spark = spark.createDataFrame(df_pandas) # show the PySpark dataframe df_spark.show() | This is currently a broken dependency. The issue was recently merged. It will be released in pyspark==3.4. Unfortunately, only pyspark==3.3.2 is available in pypi at the moment . But since pandas==2.0.0 was just released in pypi today (as of April 3, 2023), the current pyspark appears to be temporarily broken. The only way to make this work is to pin to the older pandas version as suggested, until the next pyspark release. Alternatively, you could attempt to pull the new pyspark from the release candidates. | 3 | 3 |
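Until the patched pyspark release is available, a commonly used stopgap (my own suggestion, not part of the accepted answer) is to alias the removed method back onto pandas before calling spark.createDataFrame; the traceback above shows pyspark 3.3.2 only needs DataFrame.iteritems, which pandas 2.0 removed in favor of the equivalent DataFrame.items.

```python
# Temporary shim for pandas>=2.0 with pyspark 3.3.2 (a workaround sketch,
# not an official fix): restore the removed iteritems alias that pyspark's
# conversion code still calls internally.
import pandas as pd

if not hasattr(pd.DataFrame, "iteritems"):
    pd.DataFrame.iteritems = pd.DataFrame.items
```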
75,797,278 | 2023-3-21 | https://stackoverflow.com/questions/75797278/playing-a-video-with-captions-in-jupyter-notebook | How to play a video with captions in Jupyter notebook? With code snippets from these post, I've tried to play a video inside jupyter notebook: How can I play a local video in my IPython notebook? how play mp4 video in google colab from IPython.display import HTML # Show video compressed_path = 'team-rocket.video-compressed.mp4' mp4 = open(compressed_path,'rb').read() data_url = "data:video/mp4;base64," + b64encode(mp4).decode() HTML(""" <video width=400 controls> <source src="%s" type="video/mp4"> </video> """ % data_url) [out]: When I tried to add a .vtt file as caption, the option appears from IPython.display import HTML # Show video compressed_path = 'team-rocket.video-compressed.mp4' mp4 = open(compressed_path,'rb').read() data_url = "data:video/mp4;base64," + b64encode(mp4).decode() HTML(""" <video width=400 controls> <source src="%s" type="video/mp4"> <track src="team-rocket.vtt" label="English" kind="captions" srclang="en" default > </video> """ % data_url) [out]: But the subtitles/captions are not present when playing the video. How to play a video with captions in Jupyter notebook? The files used in the examples above are on: team-rocket.vtt team-rocket.video-compressed.mp4 file can be found on https://colab.research.google.com/drive/1IHgo9HWRc8tGqjmwWGunzxXDVhPaGay_?usp=sharing | Try this: from IPython.display import HTML from base64 import b64encode video_path = 'team-rocket.video-compressed.mp4' captions_path = 'team-rocket.vtt' with open(video_path, 'rb') as f: video_data = f.read() video_base64 = b64encode(video_data).decode() with open(captions_path, 'r') as f: captions_data = f.read() captions_base64 = b64encode(captions_data.encode('utf-8')).decode() video_html = f""" <video width="640" height="360" controls> <source src="data:video/mp4;base64,{video_base64}" type="video/mp4"> <track src="data:text/vtt;base64,{captions_base64}" kind="captions" srclang="en" label="English" default> </video> """ HTML(video_html) For some reason, explicitly streaming in the video + captions/subtitle while specifying the encoding would bypass the security issues that @igrinis' answer pointed out. "data:video/mp4;base64,{video_base64}" "data:text/vtt;base64,{captions_base64}" | 4 | 5 |
75,804,180 | 2023-3-21 | https://stackoverflow.com/questions/75804180/how-to-do-a-qcut-by-group-in-polars | Consider the following example zz = pl.DataFrame({'group' : ['a','a','a','a','b','b','b'], 'col' : [1,2,3,4,1,3,2]}) zz Out[16]: shape: (7, 2)
┌───────┬─────┐
│ group ┆ col │
│ ---   ┆ --- │
│ str   ┆ i64 │
╞═══════╪═════╡
│ a     ┆ 1   │
│ a     ┆ 2   │
│ a     ┆ 3   │
│ a     ┆ 4   │
│ b     ┆ 1   │
│ b     ┆ 3   │
│ b     ┆ 2   │
└───────┴─────┘
I am trying to create a binned variable by group, essentially replicating a pandas qcut by group. This is easy in Pandas, as shown here: xx = pl.DataFrame({'group' : ['a','a','a','a','b','b','b'], 'col' : [1,2,3,4,1,3,2]}).to_pandas() xx.groupby('group').col.transform(lambda x: pd.qcut(x, q = 2, labels = False)) Out[18]: 0 0 1 0 2 1 3 1 4 0 5 1 6 0 Name: col, dtype: int64 But how to do this in Polars? Thanks! | Update: Series.qcut was added in polars version 0.16.15 As it's not available on expressions as of yet, you could .partition_by pl.concat( frame.get_column("col") .qcut([.5], maintain_order=True) .select(pl.col("category").to_physical()) for frame in df.partition_by("group") ) shape: (7, 1)
┌──────────┐
│ category │
│ ---      │
│ u32      │
╞══════════╡
│ 0        │
│ 0        │
│ 1        │
│ 1        │
│ 0        │
│ 1        │
│ 0        │
└──────────┘
Or .apply in a groupby context: df.with_columns( pl.col("col") .apply( lambda x: x.qcut([.5], maintain_order=True)["category"].to_physical()) .over("group") ) shape: (7, 2)
┌───────┬─────┐
│ group ┆ col │
│ ---   ┆ --- │
│ str   ┆ u32 │
╞═══════╪═════╡
│ a     ┆ 0   │
│ a     ┆ 0   │
│ a     ┆ 1   │
│ a     ┆ 1   │
│ b     ┆ 0   │
│ b     ┆ 1   │
│ b     ┆ 0   │
└───────┴─────┘
| 4 | 1
75,747,156 | 2023-3-15 | https://stackoverflow.com/questions/75747156/invalid-json-given-in-the-body-of-the-request-expected-a-map-when-using-rese | I am trying to change an existing job settings using the cli but when I invoke the reset_job method I am getting this error: Traceback (most recent call last): File "/home/vsts/work/1/s/S1.DataPlatform.DR/main.py", line 78, in <module> dr.experiment(host,token) File "/home/vsts/work/1/s/S1.DataPlatform.DR/main.py", line 58, in experiment jobs.reset_job(job_json) File "/home/vsts/.local/lib/python3.10/site-packages/databricks_cli/jobs/api.py", line 49, in reset_job return self.client.client.perform_query('POST', '/jobs/reset', data=json, headers=headers, File "/home/vsts/.local/lib/python3.10/site-packages/databricks_cli/sdk/api_client.py", line 174, in perform_query raise requests.exceptions.HTTPError(message, response=e.response) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://spg-sustainable1-qa.cloud.databricks.com/api/2.0/jobs/reset Response from server: { 'error_code': 'MALFORMED_REQUEST', 'message': 'Invalid JSON given in the body of the request - expected a map'} Here is the sample python code I am using: ... api_client = ApiClient(host=databricks_host, token=databricks_token) jobs = JobsApi(api_client) job_list = jobs.list_jobs()["jobs"] job_name = "DP DataSync Job" result_list = list( filter( lambda job: job['settings']['name'] == job_name, job_list) ) job = result_list[0] job_id = job["job_id"] job["settings"]["schedule"]["pause_status"] = "UNPAUSED" print(f"Resetting job with id: {job_id}") job_json = json.dumps(job) jobs.reset_job(job_json) Here is the json that gets passed to reset_job: { "job_id": 217841321277199, "creator_user_name": "...", "settings": { "name": "DP DataSync Job", "new_cluster": { "cluster_name": "", "spark_version": "10.4.x-scala2.12", "aws_attributes": { "first_on_demand": 1, "availability": "SPOT_WITH_FALLBACK", "zone_id": "us-east-1a", "spot_bid_price_percent": 100, "ebs_volume_count": 0 }, "node_type_id": "d3.4xlarge", "custom_tags": { "Owner": "[email protected]", "AppID": "appidhere", "Environment": "" }, "spark_env_vars": { "PYSPARK_PYTHON": "/databricks/python3/bin/python3" }, "enable_elastic_disk": false, "runtime_engine": "STANDARD", "autoscale": { "min_workers": 2, "max_workers": 16 } }, "libraries": [ { "jar": "DataSync-1.0-all.jar" } ], "email_notifications": { "on_start": [ "[email protected]" ], "on_success": [ "[email protected]" ], "on_failure": [ "[email protected]" ], "no_alert_for_skipped_runs": false }, "timeout_seconds": 0, "schedule": { "quartz_cron_expression": "35 0 21 * * ?", "timezone_id": "America/New_York", "pause_status": "UNPAUSED" }, "spark_jar_task": { "jar_uri": "", "main_class_name": "com.company.s.dp.datasync", "parameters": [ "Config.json" ], "run_as_repl": true }, "max_concurrent_runs": 1, "format": "SINGLE_TASK" }, "created_time": 1678272261985 } Databricks CLI version: 17.4 | The payload that you're using is only for the Job Get response - you can't use it as-is for resetting the job. If you look into the Job Reset API, you will see that the payload consists only of two fields: job_id - ID of the job to reset new_settings - settings to set for the job, while you use the settings. { "job_id": 11223344, "new_settings": { "name": "A multitask job", ... } } You also don't need to do json.dumps yourself - it will be done by the API client (see source code). 
So your code should be modified to the following: orig_job = result_list[0] job_id = orig_job["job_id"] job = {"job_id": job_id, "new_settings": orig_job["settings"]} job["new_settings"]["schedule"]["pause_status"] = "UNPAUSED" jobs.reset_job(job) | 3 | 3
75,795,170 | 2023-3-20 | https://stackoverflow.com/questions/75795170/how-to-design-codeforces-interactive-grader | I came across one interactive problem in Codeforces. I want to know how the grader or interactor (as per Codeforces' terms) might be designed. Let's say I want to create a grader for this problem: 1. Guess the Number. My solution to the above problem is stored in 1_Guess_the_Number.py file. It is a correct solution and is accepted by the CF grader. #!/usr/bin/env python3 l, r = 1, 1000000 while l != r: mid = (l + r + 1) // 2 print(mid, flush=True) response = input() if response == "<": r = mid - 1 else: l = mid print("!", l) I created the following grader.py file: #!/usr/bin/env python3 import sys INP = 12 def interactor(n): if n > INP: return "<" return ">=" while True: guess = input() if guess.startswith("!"): print(int(guess.split()[1]) == INP, flush=True) sys.exit() print(interactor(int(guess)), flush=True) So, when I run ./1_Guess_the_Number.py | ./grader_1.py, I expect it to work correctly. But in the terminal, the above command runs for an infinite time with only the following output: < I don't know what is going wrong. Also, it will be very helpful if someone can provide any other way. | The comment from @user2357112 describes correctly why it is not working. While your pipe is sending the output of the first script to the grader, you're not sending 'grader.py''s responses to the first script. So what we need to do is to establish a two way communication. Here is one way to do it. In the grader, call the script you want to test as subprocess and comunicate with it though pipes. I added additional explanation to the code. 1_Guess_the_Number.py is the same as yours: #!/usr/bin/env python3 l, r = 1, 1000000 while l != r: mid = (l + r + 1) // 2 print(mid, flush=True) response = input() if response == "<": r = mid - 1 else: l = mid print("!", l) grader.py takes the name of the test file as input and executes it using subprocess.Popen: # !/usr/bin/env python3 import sys import subprocess INP = 12 def interactor(n): if n > INP: return "<" return ">=" def decodeBytes(message): # Helperfunction to decode the message from stdin # guess has the format b'12345\r\n' # rstrip() removes the \r\n, then we decode it as ascii return message.rstrip().decode('ascii') if __name__ == '__main__': print(f'The secret answer is: {INP}') # get script name: name = sys.argv[1] print(f'calling script {name}') # start script as a subprocess p = subprocess.Popen(["python3", name], stdout=subprocess.PIPE, stdin=subprocess.PIPE) # create iterator to read subsequent guesses stdout_iterator = iter(p.stdout.readline, b"") # loop over all guesses for msg_in in stdout_iterator: #msg_in is a byte array, we need to decode it guess = decodeBytes(msg_in) print(f'got msg: {guess}') if guess.startswith("!"): final_answer = int(guess.split()[1]) print(f'The final answer is: {final_answer}') if final_answer == INP: print('Answer was correct!') break print('Answer is wrong!') break # we determine the message to send out msg_out = interactor(int(guess)) print(f'send msg: {msg_out}') # sending it to the scripts stdin. 
p.stdin.write((msg_out + '\n').encode()) # we need to flush stdin, to make sure the message is not stuck in a buffer p.stdin.flush() From the commandline you call the grader by python grader.py 1_Guess_the_Number.py The print-out is: The secret answer is: 12 calling script 1_Guess_the_Number.py got msg: 500001 send msg: < got msg: 250001 send msg: < got msg: 125001 send msg: < got msg: 62501 send msg: < got msg: 31251 send msg: < got msg: 15626 send msg: < got msg: 7813 send msg: < got msg: 3907 send msg: < got msg: 1954 send msg: < got msg: 977 send msg: < got msg: 489 send msg: < got msg: 245 send msg: < got msg: 123 send msg: < got msg: 62 send msg: < got msg: 31 send msg: < got msg: 16 send msg: < got msg: 8 send msg: >= got msg: 12 send msg: >= got msg: 14 send msg: < got msg: 13 send msg: < got msg: ! 12 The final answer is: 12 Answer was correct! | 7 | 3 |
75,749,584 | 2023-3-15 | https://stackoverflow.com/questions/75749584/how-can-i-convert-a-yolov8s-model-to-coreml-model-using-a-custom-dataset | I have trained a YOLOv8 object detection model using a custom dataset, and I want to convert it to a Core ML model so that I can use it on iOS. After exporting, I have a model converted to Core ML, but I need the coordinates or boxes of the detected objects as output in order to draw rectangular boxes around them. As a beginner in this area, I am unsure how to achieve this. Can anyone help me with this problem? Training model: !yolo task=detect mode=train model=yolov8s.pt data= data.yaml epochs=25 imgsz=640 plots=True Validation: !yolo task=detect mode=val model=runs/detect/train/weights/best.pt data=data.yaml Export this model to Core ML: !yolo mode=export model=runs/detect/train/weights/best.pt format=coreml How can I get the coordinate output? | To get the coordinates as output, use nms=True from ultralytics import YOLO model=YOLO('best.pt') model.export(format='coreml',nms=True) or yolo export model=path/to/best.pt format=coreml nms=True This will give an option to preview your model in Xcode, and the output will return coordinates | 8 | 6
75,747,955 | 2023-3-15 | https://stackoverflow.com/questions/75747955/transcription-via-openais-whisper-assertionerror-incorrect-audio-shape | I'm trying to use OpenAI's open source Whisper library to transcribe audio files. Here is my script's source code: import whisper model = whisper.load_model("large-v2") # load the entire audio file audio = whisper.load_audio("/content/file.mp3") #When i write that code snippet here ==> audio = whisper.pad_or_trim(audio) the first 30 secs are converted and without any problem they are converted. # make log-Mel spectrogram and move to the same device as the model mel = whisper.log_mel_spectrogram(audio).to(model.device) # detect the spoken language _, probs = model.detect_language(mel) print(f"Detected language: {max(probs, key=probs.get)}") # decode the audio options = whisper.DecodingOptions(fp16=False) result = whisper.decode(model, mel, options) # print the recognized text if available try: if hasattr(result, "text"): print(result.text) except Exception as e: print(f"Error while printing transcription: {e}") # write the recognized text to a file try: with open("output_of_file.txt", "w") as f: f.write(result.text) print("Transcription saved to file.") except Exception as e: print(f"Error while saving transcription: {e}") In here: # load the entire audio file audio = whisper.load_audio("/content/file.mp3") when I write below: " audio = whisper.pad_or_trim(audio) ", the first 30 secs of the sound file is transcribed without any problem and language detection works as well, but when I delete it and want the whole file to be transcribed, I get the following error: AssertionError: incorrect audio shape What should I do? Should I change the structure of the sound file? If yes, which library should I use and what type of script should I write? | I had the same problem and after some digging I found that whisper.decode is meant to extract metadata about the input, such as the language, and hence the limit to 30 seconds. (see source code for decode function here) In order to transcribe (even audio longer than 30 seconds) you can use whisper.transcribe as shown in the following snippet import whisper model = whisper.load_model("large-v2") # load the entire audio file audio = whisper.load_audio("/content/file.mp3") options = { "language": "en", # input language, if omitted is auto detected "task": "translate" # or "transcribe" if you just want transcription } result = whisper.transcribe(model, audio, **options) print(result["text"]) You can find some documentation of the transcribe method in the source code along with some documentation about the DecodingOptions structure | 4 | 6 |
75,803,317 | 2023-3-21 | https://stackoverflow.com/questions/75803317/what-is-the-most-efficient-way-of-computing-abs2-of-a-complex-numpy-ndarray | I'm looking for the most time-efficient way of computing the absolute squared value of a complex ndarray in python. arr = np.empty((8, 4000), dtype="complex128") # typical size I have tried these options: # numpy implementation def abs2_numpy(x): return x.real**2 + x.imag**2 # numba implementation @numba.vectorize([numba.float64(numba.complex128)]) def abs2_numba(x): return x.real**2 + x.imag**2 It turns out that the numba implementation is roughly 4x faster than numpy, buy I would like to know if there exist a faster method. I have read this question which mentions several methods, but the post is oriented to memory efficiency, which is not a constrain in my case. | The execution time of the provided function is very challenging. In this case, the best solution to find if there is a better possible implementation is to look the generated assembly code since trying to write many alternative function blindly is a wast of time if the generated assembly code is close to be optimal. In fact, this is the case here : the generated assembly code is very good regarding the input layout. Numba generates many way to compute the array internally and chose the best at runtime based on the input. When the input array is contiguous in memory, the best implementation use the following assembly hot loop: .LBB0_55: vmovupd -192(%r9,%rbp,2), %ymm0 vmovupd -160(%r9,%rbp,2), %ymm1 vmovupd -128(%r9,%rbp,2), %ymm2 vmovupd -96(%r9,%rbp,2), %ymm3 vmovupd -64(%r9,%rbp,2), %ymm4 vmovupd -32(%r9,%rbp,2), %ymm5 vmovupd (%r9,%rbp,2), %ymm6 vmulpd %ymm1, %ymm1, %ymm1 vmulpd %ymm0, %ymm0, %ymm0 vhaddpd %ymm1, %ymm0, %ymm0 vmovupd 32(%r9,%rbp,2), %ymm1 vmulpd %ymm3, %ymm3, %ymm3 vmulpd %ymm2, %ymm2, %ymm2 vhaddpd %ymm3, %ymm2, %ymm2 vmulpd %ymm5, %ymm5, %ymm3 vmulpd %ymm4, %ymm4, %ymm4 vhaddpd %ymm3, %ymm4, %ymm3 vmulpd %ymm1, %ymm1, %ymm1 vmulpd %ymm6, %ymm6, %ymm4 vhaddpd %ymm1, %ymm4, %ymm1 vpermpd $216, %ymm0, %ymm0 vpermpd $216, %ymm2, %ymm2 vpermpd $216, %ymm3, %ymm3 vpermpd $216, %ymm1, %ymm1 vmovupd %ymm0, (%r11,%rbp) vmovupd %ymm2, 32(%r11,%rbp) vmovupd %ymm3, 64(%r11,%rbp) vmovupd %ymm1, 96(%r11,%rbp) subq $-128, %rbp addq $-16, %rdx jne .LBB0_55 This loops is pretty efficient. Indeed, it is vectorized using the AVX-2 SIMD instruction set (the best in this case on my machine). This can be seen by looking the opcodes prefix/suffix and the operands : the v prefix is for AVX, the pd suffix is for packed double-precision operations, and the ymm are 256-bit wise AVX registers. The loop is also unrolled so the overhead of the loop iteration is negligible. I hardly believe any alternative Numba function can generate a better hot loop. In fact, I do not expect any auto-vectorized native code to be significantly faster either. While the loop is very good, I do not think it is optimal. Indeed, vhaddpd can certainly be replaced by vertical vaddpd combined with swizzle instructions (like vshufpd, vunpckhpd and vunpcklpd as well as possibly a vpermpd). That being said, the speed up will likely be small. Indeed, the input takes 64 KiB and the output takes 32 KiB. This means the operation will certainly be done in the L2 cache which is generally slower and may be the bottleneck on a more optimized code. In fact, on my machine, a function returning just x.real takes the same time! 
Not to mention that writing such code in assembly or using intrinsics in a native language (like C/C++) results in non-portable code and requires being highly skilled (in both C/C++ and assembly, with a very good knowledge of how x86-64 processors operate). Put shortly, I think this loop is the best you can get for a sequential Python code. Regarding your target platform, using multiple threads can be faster. Creating threads, sharing work and waiting for them take a significant time at this granularity so it can actually be slower than the computation on some computing machines (especially computing servers with many cores). On my machine the overhead is low (a few microseconds). Thus, using multiple threads is slightly faster. This is far from being great since using more threads is like wasting most of your cores due to a quite disappointing speed-up. On top of that, Numba fails to auto-vectorize the code using SIMD instructions with plain loops here (and vectorize seems not to support the parallel=True flag yet). We can speed up the function a bit more by preallocating the array once since temporary arrays are a bit slow to create (and fill the first time). Here is the resulting code: @numba.njit('(complex128[:,::1],float64[:,::1])', parallel=True) def abs2_numba(x, out): for i in numba.prange(x.shape[0]): tin = x[i, :] tout = out[i, :] for j in range(x.shape[1]): v = tin[j] real = v.real imag = v.imag tout[j] = real * real + imag * imag out = np.empty(arr.shape) %timeit -n 10_000 abs2_numba(x, out) This takes 9.8 µs on my 6-core i5-9600 machine while the initial code takes 12.9 µs. Most of the time is spent in the parallel overhead and the fact that the code is not as well vectorized as the other sequential code. I do not advise you to use this parallel implementation in your case unless every microsecond matters and you do not plan to run it on a machine with many cores. Instead, optimizing the code calling this function is certainly the best thing to do in your case. | 4 | 1
75,807,298 | 2023-3-21 | https://stackoverflow.com/questions/75807298/sympy-drop-terms-with-small-coefficients | Is it possible to drop terms with coefficients below a given, small number (say 1e-5) in a Sympy expression? I.e., such that 0.25 + 8.5*exp(-2.6*u) - 2.7e-17*exp(-2.4*u) + 1.2*exp(-0.1*u) becomes 0.25 + 8.5*exp(-2.6*u) + 1.2*exp(-0.1*u) for instance. | You can use a combination of coeffs, subs and Poly: u = Symbol('u') poly = Poly(0.25 + 8.5*exp(-2.6*u) - 2.7e-17*exp(-2.4*u) + 1.2*exp(-0.1*u)) threshold = 1e-5 to_remove = [abs(i) for i in poly.coeffs() if abs(i) < threshold] for i in to_remove: poly = poly.subs(i, 0) poly now yields the following: 0.25 + 8.5*exp(-2.6*u) + 1.2*exp(-0.1*u) | 3 | 2
75,806,691 | 2023-3-21 | https://stackoverflow.com/questions/75806691/python-groupby-columns-and-apply-function | I have a dataframe that looks like this which contains all divisions and both conferences from 2000-2022. Tm Conference Division W-L%. Year Bills AFC East 0.813 2022 Dolphins AFC East 0.529 2022 Patriots AFC East 0.471 2022 Jets AFC East 0.412 2022 Cowboys NFC East 0.706 2022 Giants NFC East 0.559 2022 Eagles NFC East 0.824 2022 Commanders NFC East 0.500 2022 I want to groupby Team, Conference, and year, and create a new column called 'Division W-L%' which finds the average W-L% for every team in a particular division, conference, and year except for the team we are calculating it on. I know the formula to find the Division W-L% which is: df['Division_W-L%'] = (df['W-L%'].sum() - df['W-L%']) / (len(df) -1). This is what I want the dataframe to look like. For example, for the 'Bills' we would calculate the Division W-L% by doing (0.529 + 0.471 + 0.412)/3 since those 3 teams are in the same conference, division, and year. Tm Conference Division W-L%. Year Division W-L% Bills AFC East 0.813 2022 0.470667 Dolphins AFC East 0.529 2022 0.565333 Patriots AFC East 0.471 2022 0.584667 Jets AFC East 0.412 2022 0.604333 Cowboys NFC East 0.706 2022 0.627667 Giants NFC East 0.559 2022 0.676667 Eagles NFC East 0.824 2022 0.588333 Commanders NFC East 0.500 2022 0.696333 I tried doing what I described above, which is grouping by those three categories, and then applying that formula to the W-L% column, however I continue to receive errors. All help is appreciated! | You can use transform instead of apply. Compute the sum for the group and subtract the W-L%. of the current row then divide by the size of the group minus 1 (because you want to exclude the row itself): df['Division W-L%'] = (df.groupby(['Conference', 'Division', 'Year'])['W-L%.'] .transform(lambda x: (x.sum() - x) / (len(x) - 1))) Output: >>> df Tm Conference Division W-L%. Year Division W-L% 0 Bills AFC East 0.813 2022 0.470667 1 Dolphins AFC East 0.529 2022 0.565333 2 Patriots AFC East 0.471 2022 0.584667 3 Jets AFC East 0.412 2022 0.604333 4 Cowboys NFC East 0.706 2022 0.627667 5 Giants NFC East 0.559 2022 0.676667 6 Eagles NFC East 0.824 2022 0.588333 7 Commanders NFC East 0.500 2022 0.696333 | 3 | 3 |
75,804,936 | 2023-3-21 | https://stackoverflow.com/questions/75804936/pytest-caplog-for-testing-logging-formatter | I'm using a logging formatter to redact passwords. I want to write a test to confirm that the logging redactor is effective. In this example I simplified the code to redact "foo". With this redactor code in the my my_logger.py module (simplified redaction of a specific word): class RedactFoo: def __init__(self, base_formatter): self.base_formatter = base_formatter def format(self, record): msg = self.base_formatter.format(record) return msg.replace("foo", "<<REDACTED>>") def __getattr__(self, attr): return getattr(self.orig_formatter, attr) Then I configure my logger and wrap the formatter for each handler: logging.config.dictConfig(logging_config) # Set the formatter on all handlers for h in logging.root.handlers: h.setFormatter(RedactFoo(h.formatter)) def get_logger(name): return logging.getLogger(name) If I run: logger = my_logger.get_logger("Test") logger.error(f"This message has foo.") the logger redacts the message. However, in Pytest test_my_logger.py: import my_logger def test_logging_redactor(caplog): logger = my_logger.get_logger("test") logger.error(f"This message has foo.") assert ""This message has <<REDACTED>>." in caplog.text This test does not pass because the Pytest logging configuration overwrites my custom config. How can I use Pytest and the caplog fixture to perform this test? I have seen the unittest solution here: How to test specific logging formatting in Python (using `pytest` and `unittest`), but I'm interested in doing this with Pytest. | The caplog fixture works by injecting its own LogCaptureHandler to the logging framework configuration. That's how it is able to intercept log records and provide the events at caplog.text, caplog.records, caplog.record_tuples etc which the user can make assertions against. Note that it is capturing stdlib LogRecord instances, i.e. the log events have not been rendered yet and your custom format method has not been called. If you want the formatted text redacted during test as well, you'll have to apply it also to the test logging configuration i.e. after the caplog fixture has been entered: @pytest.fixture(autouse=True) def redact_caplog_handlers(caplog): caplog.handler.setFormatter(RedactFoo(caplog.handler.formatter)) | 3 | 4 |
75,804,827 | 2023-3-21 | https://stackoverflow.com/questions/75804827/the-z-axis-label-is-not-showing-in-a-3d-plot | I have an issue with a 3d plot, at the moment of visualizing it. It appears without the z-axis label, but when I set a longer title, it appears: Is there any way to be able to "see" the z-axis label without modifying the title or another kind of solution to this issue? This is my code: mask1, mask2, mask3 shapes are (100,100) with values of 20, 32, 49 respectively fig = plt.figure(figsize=(12, 4)) ax = fig.add_subplot(projection='3d') x, y = np.meshgrid(np.arange(100)*20, np.arange(100)*20) s1 = ax.plot_surface(x, y, mask1*20, linewidth=0, label='Reflector 1') s1._facecolors2d = s1._facecolor3d s1._edgecolors2d = s1._edgecolor3d s2 = ax.plot_surface(x, y, mask2*20, linewidth=0, label='Reflector 2') s2._facecolors2d = s2._facecolor3d s2._edgecolors2d = s2._edgecolor3d s3 = ax.plot_surface(x, y, mask3*20, linewidth=0, label='Reflector 3') s3._facecolors2d = s3._facecolor3d s3._edgecolors2d = s3._edgecolor3d ax.set(xlim=[0, 2000], ylim=([0, 2000]), zlim=([1000, 0]), xlabel=(r' $x [m]$'), ylabel=(r'$y [m]$'), zlabel=(r'$z [m]$')) ax.legend() ax.set_title('a short title', fontsize=18) plt.show() | You can zoom out a bit to make the label visible. E.g.: import matplotlib.pyplot as plt import numpy as np fig = plt.figure(figsize=(12, 4)) ax = fig.add_subplot(projection='3d') x, y = np.meshgrid(np.arange(100)*20, np.arange(100)*20) mask = np.ones(x.shape) s1 = ax.plot_surface(x, y, mask*400, linewidth=0, label='Reflector 1') s1._facecolors2d = s1._facecolor3d s1._edgecolors2d = s1._edgecolor3d s2 = ax.plot_surface(x, y, mask*700, linewidth=0, label='Reflector 2') s2._facecolors2d = s2._facecolor3d s2._edgecolors2d = s2._edgecolor3d s3 = ax.plot_surface(x, y, mask*1000, linewidth=0, label='Reflector 3') s3._facecolors2d = s3._facecolor3d s3._edgecolors2d = s3._edgecolor3d ax.set(xlim=[0, 2000], ylim=([0, 2000]), zlim=([1000, 0]), xlabel=(r' $x [m]$'), ylabel=(r'$y [m]$'), zlabel=(r'$z [m]$')) ax.legend() ax.set_title('a short title', fontsize=18) ax.set_box_aspect(aspect=None, zoom=0.8) plt.show() Which creates this plot: | 3 | 3 |
75,802,643 | 2023-3-21 | https://stackoverflow.com/questions/75802643/is-double-close-in-strace-necessarily-bad | I am training a neural network. Somewhere in my code base, I have a code snippet like the following: def foo(): d = {} with PIL.Image.open(img_path) as img: d["img"] = torchvision.transforms.functional.to_tensor(img) return d This code doesn't cause any problems. However, when I run my program under strace, I see that there is a double call of the close syscall (below, I omit everything except relevant strace lines): openat(AT_FDCWD, "example.tif", O_RDONLY|O_CLOEXEC) = 3</tmp/example.tif> fstat(3</tmp/example.tif>, {st_mode=S_IFREG|0644, st_size=9274, ...}) = 0 lseek(3</tmp/example.tif>, 0, SEEK_CUR) = 0 lseek(3</tmp/example.tif>, 0, SEEK_SET) = 0 read(3</tmp/example.tif>, "II*\0\250#\0\0\200\0 P8$\26\r\7\204BaP\270d6\35\17\210DbQ8\244"..., 4096) = 4096 # here some more lseeks and reads are omitted mmap(NULL, 1765376, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7feb10bbd000 fcntl(3</tmp/example.tif>, F_DUPFD_CLOEXEC, 0) = 4</tmp/example.tif> lseek(4</tmp/example.tif>, 0, SEEK_SET) = 0 read(4</tmp/example.tif>, "II*\0\250#\0\0", 8) = 8 fstat(4</tmp/example.tif>, {st_mode=S_IFREG|0644, st_size=9274, ...}) = 0 mmap(NULL, 9274, PROT_READ, MAP_SHARED, 4</tmp/example.tif>, 0) = 0x7feb9e7a4000 munmap(0x7feb9e7a4000, 9274) = 0 close(4</tmp/example.tif>) = 0 close(4) = -1 EBADF (Bad file descriptor) close(3</tmp/example.tif>) = 0 I am worried about close(4) = -1 EBADF (Bad file descriptor). We can see that there is a duplicate call to the close syscall for some reason. The code works fine though. But still, does this necessarily mean that I have a bug in my code? Upd: if anyone is interested, I have compiled a minimal self-contained example and reported the bug at https://github.com/python-pillow/Pillow/issues/7042 | I would try to find out what causes this. In this particular example it seems OK, but there is room for very subtle errors. Imagine that you have a second thread, which open()s a file between the first and second close(4). It is likely, that this file descriptor get fd 4 assigned, and therefore will be closed immediately. This kind of error is a nightmare to debug. | 3 | 3 |
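To make the descriptor-reuse hazard from that answer concrete, here is a small self-contained Python sketch (the file name is an arbitrary placeholder, not from the question) showing that the kernel hands the same descriptor number out again after close, which is why a stray second close can silently hit an unrelated file instead of just failing with EBADF.

```python
import os

# Create a scratch file so the opens below succeed; the path is a placeholder.
path = "scratch.txt"
with open(path, "w") as f:
    f.write("data")

fd = os.open(path, os.O_RDONLY)
os.close(fd)                       # first close: fine
reused = os.open(path, os.O_RDONLY)
print(reused == fd)                # typically True: the lowest free fd number is reused
# A second os.close(fd) at this point would close `reused` by mistake rather
# than raise EBADF -- exactly the race the answer describes with a second thread.
os.close(reused)
os.remove(path)
```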
75,801,560 | 2023-3-21 | https://stackoverflow.com/questions/75801560/use-bytesio-instead-of-namedtemporaryfile-with-openpyxl | I understand that with openpyxl>=3.1 function save_virtual_workbook is gone and preferred solution is using NamedTemporaryFile. I want it to be done without writing to filesystem, only in memory, just like BytesIO did. Before update I had: wb = Workbook() populate_workbook(wb) return BytesIO(save_virtual_workbook(wb)) Now I need: wb = Workbook() populate_workbook(wb) tmpfile = NamedTemporaryFile() wb.save(tmpfile.name) return tmpfile As in https://openpyxl.readthedocs.io/en/3.1/tutorial.html?highlight=save#saving-as-a-stream As I'm using django probably I could use InMemoryUploadedFile but I don't want to. How can I use BytesIO or something similar in elegant fashion ie. without hacks and additional packages? | You're so close, dude. Just pass a io.BytesIO buffer to the wb.save method. import openpyxl, io # Create an in-memory bytes buffer, which is a file-like object buffer = io.BytesIO() # Create a workbook, do the shit you gotta do wb = openpyxl.Workbook() # Save the workbook to the buffer, as if it were a file on disk opened for writing bytes wb.save(buffer) # Get all the bytes in the buffer. # It's functionally the same as buffer.seek(0) then buffer.read() buffer.getvalue() | 3 | 2 |
75,799,722 | 2023-3-21 | https://stackoverflow.com/questions/75799722/how-to-deal-with-stack-expects-each-tensor-to-be-equal-size-eror-while-fine-tuni | I tried to fine tune a model with my personal information. So I can create a chat box where people can learn about me via chat gpt. However, I got the error of RuntimeError: stack expects each tensor to be equal size, but got [47] at entry 0 and [36] at entry 1 Because I have different length of input Here are 2 of my sample input What is the webisite of ABC company ? -> https://abcdef.org/ Do you know the website of ABC company ? -> It is https://abcdef.org/ Here is what I have tried so far import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel from torch.utils.data import Dataset, DataLoader class QADataset(Dataset): def __init__(self, questions, answers, tokenizer, max_length): self.questions = questions self.answers = answers self.tokenizer = tokenizer self.max_length = max_length # Add a padding token to the tokenizer self.tokenizer.add_special_tokens({'pad_token': '[PAD]'}) def __len__(self): return len(self.questions) def __getitem__(self, index): question = self.questions[index] answer = self.answers[index] input_text = f"Q: {question} A: {answer}" input_ids = self.tokenizer.encode(input_text, add_special_tokens=True, max_length=self.max_length, padding=True, truncation=True) if input_ids is None: return None input_ids = torch.tensor(input_ids, dtype=torch.long) print(f"Input ids size: {input_ids.size()}") return input_ids # Set up the tokenizer and model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2') # Load the question and answer data questions = ["What is the webisite of ABC company ?", "Do you know the website of ABC company ?"] answers = ["https://abcdef.org/", "It is https://abcdef.org/"] # Create the dataset and data loader max_length = 64 dataset = QADataset(questions, answers, tokenizer, max_length=max_length) data_loader = DataLoader(dataset, batch_size=8, shuffle=True) # Fine-tune the model on the QA dataset device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model.to(device) optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) criterion = torch.nn.CrossEntropyLoss() for epoch in range(3): running_loss = 0.0 for batch in data_loader: batch = batch.to(device) outputs = model(batch, labels=batch) loss, _ = outputs[:2] optimizer.zero_grad() loss.backward() optimizer.step() running_loss += loss.item() print(f"Epoch {epoch + 1} loss: {running_loss / len(data_loader)}") # Save the fine-tuned model model.save_pretrained("qa_finetuned_gpt2") I dont have a solid background of AI, it is more like reading references and try to implement it. | Yes seems like you didn't pad your inputs. The model expects the size to be the same for each text. So if it's too short, you pad it, and if it's too long, it should be truncated. See also https://huggingface.co/docs/transformers/pad_truncation How does max_length, padding and truncation arguments work in HuggingFace' BertTokenizerFast.from_pretrained('bert-base-uncased')? 
https://ai.stackexchange.com/questions/37624/why-do-transformers-have-a-fixed-input-length Try changing how the tokenizer process the inputs: # Define the data loading class class MyDataset(Dataset): def __init__(self, data_path, tokenizer): self.data_path = data_path self.tokenizer = tokenizer with open(self.data_path, 'r') as f: self.data = f.read().split('\n') def __len__(self): return len(self.data) def __getitem__(self, index): text = self.data[index] inputs = self.tokenizer.encode(text, add_special_tokens=True, truncation=True, max_length=80, padding="max_length") return torch.tensor(inputs) | 3 | 2 |
75,795,486 | 2023-3-20 | https://stackoverflow.com/questions/75795486/how-can-i-get-an-access-token-using-msal-in-python | I cannot seem to figure out how to acquire an access token using MSAL. I've spent time reading the source code and Microsoft documentation to no avail. I'd like to use the PublicClientApplication to acquire the token. Even when running proof of concepts with the QuickStarts using ConfidentialClientApplication I seem to only get an ID_Token, not an access token. Ultimately I'm trying to build a desktop/mobile app and want to be able to use MSAL for id token claims and access tokens for access to APIs. Also worth mentioning: I want to implement this using AAD B2C. Thanks! | To get your access token, use the acquire_token_silent or acquire_token_interactive methods of the PublicClientApplication class. Below is an example; replace the needed variables with your own. import msal app = msal.PublicClientApplication( "your_client_id", authority="https://yourtenant.b2clogin.com/yourtenant.onmicrosoft.com/B2C_1_signup_signin_policy", client_credential=None ) scopes = ["https://yourtenant.onmicrosoft.com/api/read"] result = app.acquire_token_interactive(scopes=scopes) print(result["access_token"]) | 3 | 4
75,794,069 | 2023-3-20 | https://stackoverflow.com/questions/75794069/in-python-how-to-create-multiple-dataclasses-instances-with-different-objects-in | I'm trying to write a parser and I'm missing something in the dataclasses usage. I'm trying to be as generic as possible and to do the logic in the parent class but every child has the sames values in the end. I'm confused with what dataclasse decorator do with class variables and instances variables. I should probably not use self.__dict__ in my post_init. How would you do to have unique instances using the same idea ? from dataclasses import dataclass class VarSlice: def __init__(self, start, end): self.slice = slice(start, end) self.value = None @dataclass class RecordParser(): line: str def __post_init__(self): for k, var in self.__dict__.items(): if isinstance(var, VarSlice): self.__dict__[k].value = self.line[var.slice] @dataclass class HeaderRecord(RecordParser): sender : VarSlice = VarSlice(3, 8) k = HeaderRecord(line="abcdefgh") kk = HeaderRecord(line="123456789") print(k.sender.value) print(kk.sender.value) Result : 45678 45678 Expected result is : abcde 45678 I tried changing VarSlice to a dataclass too but it changed nothing. | This curious behavior is observed, since when you do: sender: VarSlice = VarSlice(3, 8) The default value here is a specific instance VarSlice(3, 8) - which is shared between all HeaderRecord instances. This can be confirmed, by printing the id of the VarSlice object - if they are the same when constructing an instance of a RecordParser subclass more than once, then we have a problem: if isinstance(var, VarSlice): print(id(var)) ... This is very likely not what you want. The desired behavior is likely going to be create a new VarSlice(3, 8) instance, each time a new HeaderRecord object is instantiated. To resolve the issue, I would suggest to use default_factory instead of default, as this is the recommended (and documented) approach for fields with mutable default values. i.e., sender: VarSlice = field(default_factory=lambda: VarSlice(3, 8)) instead of: sender: VarSlice = VarSlice(3, 8) The above, being technically equivalent to: sender: VarSlice = field(default=VarSlice(3, 8)) Full code with example: from dataclasses import dataclass, field class VarSlice: def __init__(self, start, end): self.slice = slice(start, end) self.value = None @dataclass class RecordParser: line: str def __post_init__(self): for var in self.__dict__.values(): if isinstance(var, VarSlice): var.value = self.line[var.slice] @dataclass class HeaderRecord(RecordParser): sender: VarSlice = field(default_factory=lambda: VarSlice(3, 8)) k = HeaderRecord(line="abcdefgh") kk = HeaderRecord(line="123456789") print(k.sender.value) print(kk.sender.value) Now prints: defgh 45678 Improving Performance Though clearly this is not a bottleneck, when creating multiple instances of a RecordParser subclass, I note there could be areas for potential improvement. Reasons that performance could be (slightly) impacted: There currently exists a for loop on each instantiation to iterate over dataclass fields which are of a specified type VarSlice, where a loop could potentially be avoided. The __dict__ attribute on the instance is accessed each time, which can also be avoided. Note that using dataclasses.fields() instead is actually worse, as this value is not cached on a per-class basis. An isinstance check is run on each dataclass field, each time a subclass is instantiated. 
To resolve this, I could suggest improving performance by statically generating a __post__init__() method for the subclass via dataclasses._create_fn() (or copying this logic to avoid dependency on an "internal" function), and setting it on the subclass, i.e. before the @dataclass decorator runs for the subclass. An easy way could be to utilize the __init_subclass__() hook which runs when a class is subclassed, as shown below. # to test when annotations are forward-declared (i,e. as strings) # from __future__ import annotations from collections import deque from dataclasses import dataclass, field, _create_fn class VarSlice: def __init__(self, start, end): self.slice = slice(start, end) self.value = None @dataclass class RecordParser: line: str def __init_subclass__(cls, **kwargs): # list containing the (dynamically-generated) body lines of `__post_init__()` post_init_lines = deque() # loop over class annotations (this is a greatly "simplified" # version of how the `dataclasses` module does it) for name, tp in cls.__annotations__.items(): if tp is VarSlice or (isinstance(tp, str) and tp == VarSlice.__name__): post_init_lines.append(f'var = self.{name}') post_init_lines.append('var.value = line[var.slice]') # if there are no dataclass fields of type `VarSlice`, we are done if post_init_lines: post_init_lines.appendleft('line = self.line') cls.__post_init__ = _create_fn('__post_init__', ('self', ), post_init_lines) @dataclass class HeaderRecord(RecordParser): sender: VarSlice = field(default_factory=lambda: VarSlice(3, 8)) k = HeaderRecord(line="abcdefgh") kk = HeaderRecord(line="123456789") print(k.sender.value) print(kk.sender.value) | 3 | 3 |
75,793,007 | 2023-3-20 | https://stackoverflow.com/questions/75793007/what-is-the-benefit-of-using-complex-numbers-to-store-graph-coordinates | I am looking at a solution to an Advent of Code puzzle that stores coordinates as complex numbers: heightmap = { complex(x, y): c for y, ln in enumerate(sys.stdin.read().strip().split("\n")) for x, c in enumerate(ln) } Then accesses them later as follows: for xy, c in heightmap.items(): for d in (1, -1, 1j, -1j): if ord(heightmap.get(xy + d, "{")) <= ord(c) + 1: G.add_edge(xy, xy + d) I can see that this code makes the 'get neighbors' line easy to write/think about, but I don't see that it is worth the added complexity (no pun intended). Can someone explain why it's useful to store the grid coordinates as complex numbers? | Yes, because it's easy/less to write and think about. Also means less opportunity for typos :-) I've been doing that for years, ever since I saw someone else do that. Usually not even typing the deltas explicitly but calculating them. I.e., instead of for d in (1, -1, 1j, -1j): use(z + d) do: for i in range(4): use(z + 1j**i) Possible alternatives when using separate x and y variables: for dx, dy in ((1, 0), (0, 1), (-1, 0), (0, -1)): use(x+dx, y+dy) for x2, y2 in ((x+1, y), (x, y+1), (x-1, y), (x, y-1)): use(x2, y2) Ermahgerd, so frustrating. I actually did make several typos while writing these :-) (Tests at Attempt This Online!) | 4 | 5 |
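To make the trick concrete, a tiny self-contained example with made-up grid values, using complex keys and 1j**i neighbour offsets:

```python
# Grid cells keyed by complex numbers; the four neighbours are z + 1j**i.
grid_text = """\
21423
81412
73330"""

grid = {
    complex(x, y): int(c)
    for y, line in enumerate(grid_text.splitlines())
    for x, c in enumerate(line)
}

def neighbours(z):
    # up/down/left/right without any (dx, dy) tuples
    return [z + 1j**i for i in range(4)]

# e.g. find cells that are strictly lower than all of their neighbours
lows = [z for z, v in grid.items()
        if all(grid.get(n, 10) > v for n in neighbours(z))]
print(lows)
```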
75,786,523 | 2023-3-20 | https://stackoverflow.com/questions/75786523/how-to-limit-the-display-width-in-polars-so-that-wide-dataframes-are-printed-in | Consider the following example pd.set_option('display.width', 50) pl.DataFrame(data = np.random.randint(0,20, size = (10, 42)), columns = list('abcdefghijklmnopqrstuvwxyz123456789ABCDEFG')).to_pandas() You can see how nicely the columns are formatted, breaking a line after column k so that the full dataframe is printed in chunks in the console. This was controlled by the pandas width argument above. I was not able to reproduce this behavior using polars and all the format options. I have tried tweaking all possible settings: pl.Config.set_tbl_cols(10) pl.Config.set_fmt_str_lengths(10) pl.Config.set_tbl_width_chars(70) pl.Config.set_tbl_rows(2) pl.Config.set_tbl_formatting('NOTHING') pl.Config.set_tbl_column_data_type_inline(True) pl.Config.set_tbl_dataframe_shape_below(True) See below: Any ideas? Thanks! | You can display all frame columns like so... with pl.Config() as cfg: cfg.set_tbl_cols(-1) print(df) ...which will give you a good result on the given frame if you have sufficient terminal/console/output width available. If this isn't enough, I recommend making a feature request for this on the polars GitHub repository | 5 | 4 |
75,785,959 | 2023-3-20 | https://stackoverflow.com/questions/75785959/assertion-failed-when-using-pyswip | I have just installed pyswip and I was testing if it is working correctly, but I always get this error: Assertion failed: 0, file /home/swipl/src/swipl-devel/src/pl-fli.c, line 2637 I was using this example code: from pyswip import Prolog prolog = Prolog() prolog.assertz("father(michael,john)") I am using Windows 10, Python 3.11.1, and I have swipl in the path. I tested in WSL Ubuntu and get the same error. I'd like to know why this error happens and how to solve it. | I faced a similar issue when getting a neurosymbolic model called DeepProbLog to run on my Windows. It'd be helpful to know which version of swipl and pyswip you have. I personally use swipl version 8.4.2 for Windows 64 bits (https://www.swi-prolog.org/download/stable?show=all) and the fork of pyswip provided by the DeepProbLog repo: pip install git+https://github.com/ML-KULeuven/pyswip If you want to use WSL Ubuntu, you can keep reading DeepProbLog's github page as they also provide commands on working with Prolog in Linux. | 3 | 3 |
75,789,383 | 2023-3-20 | https://stackoverflow.com/questions/75789383/pylint-func-max-is-not-callable | In my python code, I import func... from sqlalchemy.sql.expression import func Then, during my code I select data from a database table... select(func.max(MyTable.my_datetime)) ...where my_datetime is a DateTime data type... from sqlalchemy.types import DateTime my_datetime = Column('my_datetime', DateTime) The code runs OK, but in the vscode editor I am getting the following error... func.max is not callable Pylint(E1102:not-callable) I don't want to ignore this if there is a genuine concern behind this Pylint error. Should I be concerned by this error or can I safely ignore it? | The Pylint error you have (func.max is not callable Pylint(E1102:not-callable)) is a false positive and you can ignore it in your case. Pylint is flagging func.max as not callable because it can't analyze the func object statically and determine that it has a max method. You can silence it by annotating it as func: Callable (after importing Callable from typing). | 9 | 11 |
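A short sketch of the two usual ways to quiet this false positive (the select statement and the model argument are illustrative, not from the question):

```python
# Option 1: annotate the attribute you use as Callable so linters treat it as
# callable. Option 2: disable the check only on the offending line.
from typing import Callable

from sqlalchemy import select
from sqlalchemy.sql.expression import func

max_: Callable = func.max          # option 1: call max_(...) instead of func.max(...)

def latest(model):                 # `model` stands in for your mapped class
    return select(func.max(model.my_datetime))  # pylint: disable=not-callable
```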
75,781,862 | 2023-3-19 | https://stackoverflow.com/questions/75781862/running-alembic-command-cause-importerror-cannot-import-name-bindparamclause | This happens whenever I ran any alembic command. I am using sqlalchemy version 2.0.3 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/__init__.py", line 8, in <module> from . import op # noqa File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/op.py", line 1, in <module> from .operations.base import Operations File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/operations/__init__.py", line 1, in <module> from .base import Operations, BatchOperations File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/operations/base.py", line 3, in <module> from .. import util File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/util/__init__.py", line 9, in <module> from .sqla_compat import ( # noqa File "/home/ado/anaconda3/lib/python3.8/site-packages/alembic/util/sqla_compat.py", line 8, in <module> from sqlalchemy.sql.expression import _BindParamClause ImportError: cannot import name '_BindParamClause' from 'sqlalchemy.sql.expression' (/home/***/anaconda3/lib/python3.8/site-packages/sqlalchemy/sql/expression.py) | Solved after uninstalling alembic and reinstalling it afresh I ran: pip3 uninstall alembic pip3 install alembic | 4 | 10 |
75,787,164 | 2023-3-20 | https://stackoverflow.com/questions/75787164/expected-type-iterablesupportslessthan-any-matched-generic-type-iterable | The two following blocks of code should be equivalent: Block 1 array_1 = [[12, 15], [10, 1], [5, 13]] print(array_1) """ output: [[12, 15], [10, 1], [5, 13]] """ print(sorted(array_1)) """ output: [[5, 13], [10, 1], [12, 15]] """ Block 2 import numpy as np np_array_1 = np.array([[12, 15], [10, 1], [5, 13]]) print(np_array_1) """ output: [[12 15] [10 1] [ 5 13]] """ array_1 = np_array_1.tolist() print(array_1) """ output: [[12, 15], [10, 1], [5, 13]] """ print(sorted(array_1)) """ output: [[5, 13], [10, 1], [12, 15]] """ But, in PyCharm, if I use: sorted(np_array_1.tolist()) # SEE BLOCK 2 I receive this only warning, but apparently without any issue: Expected type 'Iterable[SupportsLessThan | Any]' (matched generic type 'Iterable[SupportsLessThanT]'), got 'object' instead I would like to get clean code, thank you. | This is a bug in PyCharm. Please see: https://youtrack.jetbrains.com/issue/PY-45958/Type-error-sorting-Iterable-of-dataclassorderTrue-instances | 3 | 3 |
75,785,079 | 2023-3-19 | https://stackoverflow.com/questions/75785079/python-parent-class-information-do-not-display | import sqlite3 sqlite3.Cursor.__base__ ## output: <class 'object'> However, it clearly inherits from Iterator, as can be seen from the source code: class Cursor(Iterator[Any]): and issubclass(sqlite3.Cursor, Iterator) # previous edit was saying iterable, sorry ## output: True So, why does Python not properly display the parent class info? How can I retrieve it properly? | The cursor object is defined in C as far as I can tell, so it doesn't have a parent class. As to why issubclass returns that, issubclass does essentially a "duck typing check" in some cases, as hinted at in the ABC docs: Checking isinstance(obj, Iterable) detects classes that are registered as Iterable or that have an __iter__() method. . . It's talking about isinstance there, but issubclass works the same. You can see this with a simple test: from typing import Iterable class T: # Not an explicit subclass of "Iterable" def __iter__(self): pass print(issubclass(T, Iterable)) # True | 3 | 2 |
75,784,875 | 2023-3-19 | https://stackoverflow.com/questions/75784875/can-another-thread-release-a-lock-held-by-another-thread-in-python | I am trying to understand threading in Python via this website. Here, it has the following code for a single producer/consumer-type problem: import random SENTINEL = object() def producer(pipeline): """Pretend we're getting a message from the network.""" for index in range(10): message = random.randint(1, 101) logging.info("Producer got message: %s", message) pipeline.set_message(message, "Producer") # Send a sentinel message to tell consumer we're done pipeline.set_message(SENTINEL, "Producer") def consumer(pipeline): """Pretend we're saving a number in the database.""" message = 0 while message is not SENTINEL: message = pipeline.get_message("Consumer") if message is not SENTINEL: logging.info("Consumer storing message: %s", message) if __name__ == "__main__": format = "%(asctime)s: %(message)s" logging.basicConfig(format=format, level=logging.INFO, datefmt="%H:%M:%S") # logging.getLogger().setLevel(logging.DEBUG) pipeline = Pipeline() with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor: executor.submit(producer, pipeline) executor.submit(consumer, pipeline) class Pipeline: """ Class to allow a single element pipeline between producer and consumer. """ def __init__(self): self.message = 0 self.producer_lock = threading.Lock() self.consumer_lock = threading.Lock() self.consumer_lock.acquire() def get_message(self, name): logging.debug("%s:about to acquire getlock", name) self.consumer_lock.acquire() logging.debug("%s:have getlock", name) message = self.message logging.debug("%s:about to release setlock", name) self.producer_lock.release() logging.debug("%s:setlock released", name) return message def set_message(self, message, name): logging.debug("%s:about to acquire setlock", name) self.producer_lock.acquire() logging.debug("%s:have setlock", name) self.message = message logging.debug("%s:about to release getlock", name) self.consumer_lock.release() logging.debug("%s:getlock released", name) IIUC, the consumer lock is acquired and released in different threads (and the producer too). Is this possible? It feels like I'm missing something because then couldn't any thread release a lock held by another thread? | The answer is in the documentation: The class implementing primitive lock objects. Once a thread has acquired a lock, subsequent attempts to acquire it block, until it is released; any thread may release it. In fact this technique is frequently used to orchestrate the threads. If you look closely two Lock objects are created in the Pipeline class. The consumer_lock is acquired in initialization which means if two threads start concurrently, the consumer thread would hit its Lock object in self.consumer_lock.acquire() and gonna be blocked. It's gonna wait for producer thread to put something in there. There has to be something to be consumed after all. When producer puts something in, it releases the consumer_lock so that it can go ahead consumes it. | 3 | 2 |
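A minimal runnable demonstration of that documented behaviour — the lock acquired by the main thread is released by a worker thread (an RLock, by contrast, must be released by its owner):

```python
# A plain Lock acquired in one thread may be released by another thread.
import threading
import time

lock = threading.Lock()
lock.acquire()                      # main thread takes the lock

def releaser():
    time.sleep(0.5)
    print("worker: releasing the lock the main thread acquired")
    lock.release()                  # legal for a plain Lock

threading.Thread(target=releaser).start()

print("main: blocking until some other thread releases the lock")
lock.acquire()                      # unblocks once the worker releases it
print("main: reacquired the lock")
lock.release()
```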
75,771,154 | 2023-3-17 | https://stackoverflow.com/questions/75771154/is-it-necessary-to-add-a-constant-to-a-logit-model-run-on-categorical-variables | I have a dataframe that looks like this: And am running a logit model on fluid as dependent variable, and excluding vp and perip: model = smf.logit('''fluid ~ C(examq3_n, Treatment(reference = 2.0)) + C(pmhq3_n) + C(fluidq3_n) + C(mapq3_n, Treatment(reference = 3.0)) + C(examq6_n, Treatment(reference = 2.0)) + C(pmhq6_n) + C(fluidq6_n) + C(mapq6_n, Treatment(reference = 3.0)) + + C(case, Treatment(reference = 2))''', data = case1_2_vars).fit() print(model.summary()) I get the following results: I am wondering if I need to add a constant to the data and if so, how? I've tried adding a column to the dataframe called const which equals 1, but when I then add const to the logit equation I get LinAlgError: Singular Matrix, and I don't know how to add it using smf.add_constant() because I have had to specify the categorical variables and their respective reference numbers in the equation, rather than defining x and y separately and simply inputting those into the smf.logit() call. My questions are: a) do I need to add a constant, and b) how? There are some links online that seem to imply it might not be necessary for a categorical variable-based logit model, but I would rather do it if it's best practice. I'm also wondering, does statsmodels automatically include a constant? Because Intercept is listed in the results. | If you use formulas, then the formula handling by patsy adds automatically a constant/intercept. (when using e.g. smf.logit or sm.Logit.from_formula) If you create a model without formula using numpy arrays or pandas DataFrame, then the exog is not changed by statsmodels, i.e. users needs to add a constant themselves. The helper function is sm.add_constant which adds a column of ones to the array or DataFrame. (when using e.g. sm.Logit(y, x)) | 3 | 2 |
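A small sketch on synthetic data showing the difference described in that answer: the formula API adds an Intercept automatically, while the array API only gets a constant if you add it yourself:

```python
# Synthetic data; the point is only the presence/absence of the constant term.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.integers(0, 2, 500)})
df["fluid"] = (rng.random(500) < 0.3 + 0.2 * df["x1"]).astype(int)

# formula API: an Intercept is added automatically by the formula handling
m1 = smf.logit("fluid ~ C(x1)", data=df).fit(disp=0)

# array API: no constant unless you add it yourself
X = sm.add_constant(df[["x1"]].astype(float))
m2 = sm.Logit(df["fluid"], X).fit(disp=0)

print(m1.params)   # Intercept, C(x1)[T.1]
print(m2.params)   # const, x1 -- same fit, different labelling
```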
75,780,600 | 2023-3-19 | https://stackoverflow.com/questions/75780600/how-to-raise-exceptions-in-python-asyncio-background-task | Problem I have a few tasks that run continuously, but one of them occasionally needs to restart so this one is run in the background. How can exceptions from this background task be raised immediately? In the following example, the exception is not raised until the next attempt to restart, which can be very infrequent in real applications, so can undesirably go unnoticed for a long time. Example https://replit.com/@PatrickPei/how-to-raise-exceptions-in-python-asyncio-background-task This example runs 3 tasks: foo, which raises no exceptions bar, which raises an exception after 6 iterations on_interval, which restarts bar every 5 seconds import asyncio task = None i = 0 async def foo(): while True: print("foo") await asyncio.sleep(1) async def bar(): while True: global i i += 1 if i > 4: raise ValueError() print("bar", i) await asyncio.sleep(1) async def on_interval(n): while True: await asyncio.sleep(n) # Cancel task. global task print("Canceling bar") task.cancel() try: await task except asyncio.CancelledError: pass # Restart task. print("Restarting bar") task = asyncio.create_task(bar()) async def main(): # Start background task. print("Starting bar") global task task = asyncio.create_task(bar()) # Start other tasks. await asyncio.gather( foo(), on_interval(3), ) if __name__ == "__main__": asyncio.run(main()) Output bar iterates 4 times and raises an exception, which is not caught until the next restart, as shown by 3 foo iterations after bar 4. This is a problem when there is a lot of time in between restarts since exceptions go unnoticed for a long time. Starting bar bar 1 foo bar 2 foo bar 3 foo Canceling bar Restarting bar bar 4 foo foo foo Canceling bar Traceback (most recent call last): File "~/example.py", line 60, in <module> asyncio.run(main()) File "~/.pyenv/versions/3.10.5/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "~/.pyenv/versions/3.10.5/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete return future.result() File "~/example.py", line 53, in main await asyncio.gather( File "~/example.py", line 37, in on_interval await task File "~/example.py", line 22, in bar raise ValueError() ValueError Attempts Started another task to check asyncio.Task.exception, but this is cumbersome because every background task needs another busy loop to help raise its exceptions. Tried asyncio.Task.add_done_callback but since the background task is still not awaited until the next restart, it only logs the error and does not stop the other task foo. | In python 3.11, using async with the asynchronous context manager and asyncio.TaskGroup() solves this problem simply. 
import asyncio i = 0 async def foo(): while True: print("foo") await asyncio.sleep(1) async def bar(): while True: global i i += 1 if i > 14: raise ValueError() print("bar", i) await asyncio.sleep(1) async def on_interval(n): while True: async with asyncio.TaskGroup() as tg1: print("Restarting bar") task2 = tg1.create_task(bar()) await asyncio.sleep(n) print("Canceling bar") task2.cancel() async def main(): async with asyncio.TaskGroup() as tg: task1 = tg.create_task(foo()) task3 = tg.create_task(on_interval(3)) asyncio.run(main()) ------------------------- foo Restarting bar bar 1 foo bar 2 foo bar 3 Canceling bar foo Restarting bar bar 4 + Exception Group Traceback (most recent call last): | File "C:\Users\Ρ\PycharmProjects\tkinter\rrr.py", line 42, in <module> | asyncio.run(main()) | File "C:\Users\Ρ\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 190, in run | return runner.run(main) | ^^^^^^^^^^^^^^^^ | File "C:\Users\Ρ\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 118, in run | return self._loop.run_until_complete(task) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | File "C:\Users\Ρ\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 650, in run_until_complete | return future.result() | ^^^^^^^^^^^^^^^ | File "C:\Users\Ρ\PycharmProjects\tkinter\rrr.py", line 37, in main | async with asyncio.TaskGroup() as tg: | File "C:\Users\Ρ\AppData\Local\Programs\Python\Python311\Lib\asyncio\taskgroups.py", line 135, in __aexit__ | raise me from None | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception) +-+---------------- 1 ---------------- | Exception Group Traceback (most recent call last): | File "C:\Users\Ρ\PycharmProjects\tkinter\rrr.py", line 25, in on_interval | async with asyncio.TaskGroup() as tg1: | File "C:\Users\Ρ\AppData\Local\Programs\Python\Python311\Lib\asyncio\taskgroups.py", line 135, in __aexit__ | raise me from None | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception) +-+---------------- 1 ---------------- | Traceback (most recent call last): | File "C:\Users\Ρ\PycharmProjects\tkinter\rrr.py", line 17, in bar | raise ValueError() | ValueError +------------------------------------ foo Process finished with exit code 1 | 4 | 3 |
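For interpreters older than 3.11, where asyncio.TaskGroup is unavailable, a sketch of the same behaviour using wait_for() plus shield(); bar() here is a self-contained stand-in for the one in the question:

```python
# Supervise the background task with wait_for() + shield(): a timeout only
# triggers the scheduled restart, while any real exception from bar()
# propagates on the very next await instead of waiting for the next restart.
import asyncio
import contextlib

async def bar():
    for i in range(1, 100):
        if i > 4:
            raise ValueError("bar blew up")
        print("bar", i)
        await asyncio.sleep(1)

async def supervise(restart_every):
    while True:
        task = asyncio.create_task(bar())
        try:
            # shield() keeps wait_for's timeout from cancelling bar() itself...
            await asyncio.wait_for(asyncio.shield(task), timeout=restart_every)
        except asyncio.TimeoutError:
            # ...so the periodic restart is done explicitly here
            task.cancel()
            with contextlib.suppress(asyncio.CancelledError):
                await task
            print("restarting bar")
        # any other exception (e.g. ValueError) is re-raised immediately

asyncio.run(supervise(3))
```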
75,782,615 | 2023-3-19 | https://stackoverflow.com/questions/75782615/how-to-render-a-gym-env-in-test-but-not-in-learning | I want to render a gym env in test but not in learning. Here is my code: import gymnasium as gym import numpy as np env = gym.make('FrozenLake-v1') # initialize Q table Q = np.zeros([env.observation_space.n, env.action_space.n]) print(Q) # parameter lr = 0.8 gamma = 0.95 num_episodes = 2000 # learning for i in range(num_episodes): state = env.reset()[0] done = False while not done: # epsilon-greedy policy if np.random.uniform(0, 1) < 0.5: action = env.action_space.sample() else: action = np.argmax(Q[state,:]) next_state, reward, done, _, _ = env.step(action) Q[state, action] = (1 - lr) * Q[state, action] + \ lr * (reward + gamma * np.max(Q[next_state, :])) state = next_state print(Q) # test state = env.reset()[0] done = False while not done: action = np.argmax(Q[state, :]) next_state, reward, done, _, _ = env.step(action) state = next_state env.render() It doesn't render and give warning: WARN: You are calling render method without specifying any render mode. You can specify the render_mode at initialization, e.g. gym.make("FrozenLake-v1", render_mode="rgb_array") If I specify the render_mode to 'human', it will render both in learning and test, which I don't want. How should I do? | You can just recreate a new environment specifying the render mode. import gymnasium as gym import numpy as np env_train = gym.make('FrozenLake-v1') # initialize Q table Q = np.zeros([env_train.observation_space.n, env_train.action_space.n]) print(Q) # parameter lr = 0.8 gamma = 0.95 num_episodes = 2000 # learning for i in range(num_episodes): state = env_train.reset()[0] done = False while not done: # epsilon-greedy policy if np.random.uniform(0, 1) < 0.5: action = env_train.action_space.sample() else: action = np.argmax(Q[state,:]) next_state, reward, done, _, _ = env_train.step(action) Q[state, action] = (1 - lr) * Q[state, action] + \ lr * (reward + gamma * np.max(Q[next_state, :])) state = next_state print(Q) env_train.close() # test env_test = gym.make('FrozenLake-v1', render_mode="human") state = env_test.reset()[0] done = False while not done: action = np.argmax(Q[state, :]) next_state, reward, done, _, _ = env_test.step(action) state = next_state env_test.render() env_test.close() | 4 | 5 |
75,780,039 | 2023-3-19 | https://stackoverflow.com/questions/75780039/sum-up-rows-containing-exact-characters-in-dataframe | I have a dataframe like this. I want to sum up special rows containing exact characters that matched my targets. Ko_EC FPKM count 0 1.1.1.1 16.7 1 1 1.1.1.15 30.0 7 2 4.2.1.128 40.5 9 3 4.2.1.12 57.0 10 4 3.2.1.1 1.1.1.1 22.0 4 Here are my dataframe and my targets. # coding=utf-8 import pandas as pd import numpy as np ######### classes = [('1.1.1.1', 16.7, 1), ('1.1.1.15', 30, 7), ('4.2.1.128', 40.5, 9), ('4.2.1.12', 57, 10), ('3.2.1.1 1.1.1.1', 22, 4)] labels = ['Ko_EC','FPKM', 'count'] alls = pd.DataFrame.from_records(classes, columns=labels) target_list = ['1.1.1.1', '4.2.1.128', '4.2.1.12; 1.1.1.15; 3.2.1.1', '1.1.1.15'] I want to sum up the alls['Ko_EC'] rows containing exact characters that matched the unique target_list. Based on the answer to a previous question, I used this code: result = pd.DataFrame() for target in target_list: mask = alls.apply(lambda x: any([cls in target for cls in x['Ko_EC'].split(' ')]), axis=1) target_sum = alls.loc[mask,["FPKM", 'count']].sum().reset_index().rename(columns={0:target}).iloc[:,1:] result = pd.concat([result,target_sum], axis=1) #concat as result result = result.T result.columns = ['FPKM', 'Count'] It results like: FPKM Count 1.1.1.1 38.7 5.0 4.2.1.128 97.5 19.0 4.2.1.12; 1.1.1.15; 3.2.1.1 125.7 22.0 1.1.1.15 68.7 12.0 What I want is FPKM Count 1.1.1.1 38.7 5.0 4.2.1.128 40.5 9.0 4.2.1.12; 1.1.1.15; 3.2.1.1 109.0 21.0 1.1.1.15 30.0 7.0 Here, the 4.2.1.128 row FPKM Count only sum that of itself, and 4.2.1.12; 1.1.1.15; 3.2.1.1 rows sum that of 4.2.1.12, 1.1.1.15 and 3.2.1.1 1.1.1.1 in origin alls dataframe, but my code confused the characters 1.1.1.1/1.1.1.15 and 4.2.1.128/4.2.1.12, because of their similarity. Could anyone tell me how to do this and why my code cannot work? Thanks! | Using a double explode, a merge then groupby.agg: # create a reference from target_list ref = (pd.Series(target_list, name='Ko_EC') .str.split(r';\s*') .explode() .reset_index() ) # merge the exploded "alls" then aggregate out = ( ref.merge(alls.assign(Ko_EC=alls['Ko_EC'].str.split()).explode('Ko_EC'), how='left') .groupby('index') .agg({'Ko_EC': '; '.join, 'FPKM': 'sum', 'count': 'sum'}) ) print(out) NB. You can use a single pipeline, I split it in two for clarity. Output: Ko_EC FPKM count index 0 1.1.1.1; 1.1.1.1 38.7 5 1 4.2.1.128 40.5 9 2 4.2.1.12; 1.1.1.15; 3.2.1.1 109.0 21 3 1.1.1.15 30.0 7 | 4 | 1 |
75,773,085 | 2023-3-18 | https://stackoverflow.com/questions/75773085/subprocess-runhuggingface-cli-login-token-token-works-on-mac-but | I am trying to run subprocess.run(["huggingface-cli", "login", "--token", TOKEN]) in a Jupyter notebook, which works on Mac but gets the following error on Ubuntu. I checked that pip install huggingface_hub has been executed. subprocess.run(["git", "lfs", "install"]) works. !huggingface-cli login in the jupyter cell gets the error /bin/bash: huggingface-cli: command not found Can anyone help? | Your huggingface_hub is not in your PATH environment variable on your Ubuntu system; it is not the same between your Jupyter and your terminal session. Here is what you can do: get the path of the executable with pip show huggingface_hub | grep Location then update the path in your Jupyter notebook, like for example: import os import sys import subprocess huggingface_bin_path = "/home/user/.local/bin" os.environ["PATH"] = f"{huggingface_bin_path}:{os.environ['PATH']}" subprocess.run(["huggingface-cli", "login", "--token", TOKEN]) Replace /home/user/.local/bin and TOKEN with the correct ones, then it should work. | 4 | 1 |
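An alternative sketch that sidesteps PATH issues in notebooks entirely by logging in through the Python API (assumes a reasonably recent huggingface_hub):

```python
# Log in via the huggingface_hub Python API instead of the CLI executable.
from huggingface_hub import login, whoami

TOKEN = "hf_..."  # placeholder token

login(token=TOKEN)        # stores the token for subsequent hub calls
print(whoami()["name"])   # quick sanity check that authentication worked
```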
75,778,543 | 2023-3-18 | https://stackoverflow.com/questions/75778543/python-telnetlib3-examples | I would like to understand how to use telnetlib3 for a simple scenario. The longstanding telnetlib (not 3) has a simple example at https://docs.python.org/3/library/telnetlib.html where the python program connects to a telnet server, then looks for prompts and provides responses. One can readily see how to extend this example to different prompts, add timeouts, and add more prompt-response steps. import getpass import telnetlib HOST = "localhost" user = input("Enter your remote account: ") password = getpass.getpass() tn = telnetlib.Telnet(HOST) tn.read_until(b"login: ") tn.write(user.encode('ascii') + b"\n") if password: tn.read_until(b"Password: ") tn.write(password.encode('ascii') + b"\n") tn.write(b"ls\n") tn.write(b"exit\n") print(tn.read_all().decode('ascii')) However, telnetlib (not 3) is deprecated. The replacement, telnetlib3 ( https://telnetlib3.readthedocs.io/en/latest/intro.html#quick-example ) provides an example based on asyncio, and the async "shell" function (that interacts with the server) blocks waiting for prompt (rationale for async) and always responds to the server with 'y'. import asyncio, telnetlib3 async def shell(reader, writer): while True: # read stream until '?' mark is found outp = await reader.read(1024) if not outp: # End of File break elif '?' in outp: # reply all questions with 'y'. writer.write('y') # display all server output print(outp, flush=True) # EOF print() loop = asyncio.get_event_loop() coro = telnetlib3.open_connection('localhost', 6023, shell=shell) reader, writer = loop.run_until_complete(coro) loop.run_until_complete(writer.protocol.waiter_closed) I am a few clues short on how to get code that's structured this way to perform the more mainstream task that's demonstrated in the (literally!) straightforward telnetlib (not 3) example where the server provides a series of different prompts, and the python program is to provide corresponding responses. I suspect that this is partly down to my unfamiliarity with asyncio and what code patterns one should use to get an async function to carry out a series of steps. So it would be a great help to see the telnetlib (not 3) example implemented in this style. | I think it's a bit of a stretch to call telnetlib3 a "replacement" for telnetlib. I guess it's similar in that it allows you to write a telnet client (or server), but it's really an entirely different beast. For the sort of thing you're doing in the initial telnetlib example, I would generally reach for pexpect (or just, you know, normal expect). It looks like PEP 594 also points to Exscript as a solution, and that looks closer to telnetlib than telnetlib3. 
Here's an example I whipped up that uses telnetlib3 to connect to a local switch, login, send the enable command, and then log out: import asyncio import telnetlib3 async def shell(reader, writer): rules = [ ('User:', 'admin'), ('Password:', 'secret'), (') >', 'enable'), (') #', 'exit'), (') >', 'logout'), ] ruleiter = iter(rules) expect, send = next(ruleiter) while True: outp = await reader.read(1024) if not outp: break if expect in outp: writer.write(send) writer.write('\r\n') try: expect, send = next(ruleiter) except StopIteration: break # display all server output print(outp, flush=True) # EOF print() async def main(): reader, writer = await telnetlib3.open_connection('office-sw-0', 23, shell=shell) await writer.protocol.waiter_closed if __name__ == '__main__': asyncio.run(main()) I don't really like it. I'd much rather do something like this: import sys import pexpect rules = [ ("User:", "admin"), ("Password:", "secret"), (r"\) >", "enable"), (r"\) #", "exit"), (r"\) >", "logout"), (pexpect.EOF, None), ] client = pexpect.spawn("telnet office-sw-0") # This is so we can see what's happening. client.logfile = sys.stdout.buffer for expect, send in rules: client.expect(expect) if send is None: break client.sendline(send) | 4 | 5 |
75,774,350 | 2023-3-18 | https://stackoverflow.com/questions/75774350/special-case-when-for-string-concatenation-is-more-efficient-than | I have this code using python 3.11: import timeit code_1 = """ initial_string = '' for i in range(10000): initial_string = initial_string + 'x' + 'y' """ code_2 = """ initial_string = '' for i in range(10000): initial_string += 'x' + 'y' """ time_1 = timeit.timeit(code_1, number=100) time_2 = timeit.timeit(code_2, number=100) print(time_1) # 0.5770808999950532 print(time_2) # 0.08363639999879524 Why += is more efficient in this case? As far as I know, there is the same number of concatenation, and the order of execution doesn't change the result. Since strings are immutable, it's not because of inplace shinanigans, and the only thing I found about string concat is about .join efficiency, but I don't want the most efficient, just understand why += seems more efficient than =. With this code, performances between forms almost equals: import timeit code_1 = """ initial_string = '' for i in range(10000): initial_string = initial_string + 'x' """ code_2 = """ initial_string = '' for i in range(10000): initial_string += 'x' """ time_1 = timeit.timeit(code_1, number=100) time_2 = timeit.timeit(code_2, number=100) print(time_1) # 0.07953230000566691 print(time_2) # 0.08027460001176223 I noticed a difference using different Python version ('x' + 'y' form): Python 3.7 to 3.9: print(time_1) # ~0.6 print(time_2) # ~0.3 Python 3.10: print(time_1) # ~1.7 print(time_2) # ~0.8 Python 3.11 for comparison: print(time_1) # ~0.6 print(time_2) # ~0.1 Similar but not answering the question: How is the s=s+c string concat optimization decided? If s is a string, then s = s + 'c' might modify the string in place, while t = s + 'c' can't. But how does the operation s + 'c' know which scenario it's in? In a nutshell: Optimization occur when s = s + 'c', not when t = s + 'c' because python need to keep a ref to the first string and can't concatenate in-place. Here, we are always assigning using simple assignment or augmented assignment to the original string, so in-place concatenation should apply in both cases. | For a while now, CPython has had an optimization that tries to perform string concatenation in place where possible. The details vary between Python versions, sometimes a lot - for example, it doesn't work for globals on Python 3.11, and it used to be specific to bytestrings on Python 2, but it's specific to Unicode strings on Python 3. On Python 3.10, the optimization starts in unicode_concatenate, and it eventually hits a PyObject_Realloc inside resize_compact or resize_inplace, attempting to resize the left-hand operand in place. One fairly consistent thing about the optimization across Python versions is that it only works if the left-hand side of the concatenation has no other references, or if the only reference is a variable that the result of the concatenation will be assigned to. In your slow case: initial_string = initial_string + 'x' + 'y' the LHS of initial_string + 'x' is initial_string, and you're not going to assign the result back to initial_string - you're going to concatenate 'y' to the result first. Thus, the optimization can't kick in for initial_string + 'x'. (It kicks in for the + 'y' part, though.) For your other cases, the optimization works. For example, in initial_string += 'x' + 'y' instead of concatenating initial_string and 'x' and then appending 'y', you concatenate 'x' and 'y' and then concatenate initial_string and the result. 
The changed order of operations means that you're assigning the result of the initial_string concatenation back to initial_string, so the optimization applies. (Also the 'x' + 'y' gets constant-folded away, which helps a little but isn't the primary cause of the performance difference.) | 8 | 7 |
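For comparison, the usual recommendation of ''.join() does not depend on this in-place refcount optimization at all; a quick benchmark sketch (timings vary by machine and Python version):

```python
# Rough comparison of augmented concatenation against ''.join().
import timeit

stmt_aug = """
s = ''
for i in range(10000):
    s += 'x' + 'y'
"""

stmt_join = """
s = ''.join('xy' for i in range(10000))
"""

print("augmented +=:", timeit.timeit(stmt_aug, number=100))
print("str.join    :", timeit.timeit(stmt_join, number=100))
```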
75,774,668 | 2023-3-18 | https://stackoverflow.com/questions/75774668/how-could-i-pair-up-x-and-y-generated-by-np-meshgrid-using-python | I'm trying to generate a 2-dim coordinates matrix using python. I'm using x=np.linespace(min, max, step) y=np.linespace(min, max, step) X, Y = np.meshgrid(x, y) to generate x and y coordinates, where X like: [[0. 1. 2. 3. 4.] [0. 1. 2. 3. 4.] [0. 1. 2. 3. 4.] [0. 1. 2. 3. 4.] [0. 1. 2. 3. 4.] [0. 1. 2. 3. 4.] [0. 1. 2. 3. 4.] [0. 1. 2. 3. 4.]] and Y like: [[-2. -2. -2. -2. -2.] [-1. -1. -1. -1. -1.] [ 0. 0. 0. 0. 0.] [ 1. 1. 1. 1. 1.] [ 2. 2. 2. 2. 2.] [ 3. 3. 3. 3. 3.] [ 4. 4. 4. 4. 4.] [ 5. 5. 5. 5. 5.]] I want to get: [[[0, -2] [0, -1] [0, 0] [0, 1] [0, 2]] [[1, -2] [1, -1] [1, 0] [1, 1] [1, 2]] [[2, -2] [2, -1] [2, 0] [2, 1] [2, 2]] [[3, -2] [3, -1] [3, 0] [3, 1] [3, 2]] [[4, -2] [4, -1] [4, 0] [4, 1] [4, 2]]] (or its horizontal mirror) How to do that? | You can implement something like this: #!/usr/bin/env ipython # --------------------------- import numpy as np x0,x1 = -2, 2 y0,y1 = 0,4 x=np.arange(x0,x1, 1) y=np.arange(y0,y1, 1) X, Y = np.meshgrid(x, y) ny,nx = np.shape(X) # ----------------------------------------------------------- ans = [[[X[jj,ii],Y[jj,ii]] for ii in range(nx) ] for jj in range(ny)] I switched to np.arange instead of np.linspace. | 3 | 2 |
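The same pairing can also be done without an explicit Python loop by stacking X and Y along a new last axis:

```python
# Vectorized pairing of meshgrid outputs: shape (ny, nx, 2), pairs[j, i] == [x_i, y_j].
import numpy as np

x = np.arange(0, 5)    # 0..4
y = np.arange(-2, 6)   # -2..5
X, Y = np.meshgrid(x, y)

pairs = np.stack((X, Y), axis=-1)
print(pairs.shape)              # (8, 5, 2)
print(pairs[0, 0], pairs[2, 3]) # [ 0 -2] [3 0]
```

If the x-major layout shown in the question is preferred, np.meshgrid(x, y, indexing="ij") produces it directly.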
75,773,844 | 2023-3-18 | https://stackoverflow.com/questions/75773844/read-a-7z-file-in-memory-with-python-and-process-each-line-as-a-stream | I'm working with a huge .7z file that I need to process line by line. First I tried py7zr, but it only works by first decompressing the whole file into an object. This runs out of memory. Then libarchive is able to read block by block, but there's no straightforward way of splitting these binary blocks into lines. What can I do? Related questions I researched first: How to read contents of 7z file using python: The answers only decompress the whole file. How to read from a text file compressed with 7z?: Seeks Python 2.7 answers. Python: How can I read a line from a compressed 7z file in Python?: Focuses on a single line, no accepted answer - only answer posted 7 years ago. I'm looking for ways to improve the temporary solution I built myself - posted as an answer here. Thanks! | This solution goes through all available get_blocks(). If the last line doesn't end in \n, we keep the remaining bytes to be yield on the next block. import libarchive def process(my_file): data = '' with libarchive.file_reader(my_file) as e: for entry in e: for block in entry.get_blocks(): data += block.decode('ISO-8859-1') lines = data.splitlines() if not data.endswith('\n'): data = lines.pop() else: data = '' for line in lines: yield ({'l': line},) | 3 | 4 |
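Another option, assuming the 7-Zip command-line tool (7z) is installed and the archive holds a single text member, is to let 7z x -so stream the decompressed bytes to stdout and read them line by line; a sketch with a placeholder file name:

```python
# Stream a .7z member line by line via the 7z CLI; nothing is held in memory.
import subprocess

def stream_7z_lines(archive_path, encoding="ISO-8859-1"):
    proc = subprocess.Popen(
        ["7z", "x", "-so", archive_path],   # -so = write extracted data to stdout
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
    )
    assert proc.stdout is not None
    for raw_line in proc.stdout:
        yield raw_line.decode(encoding)
    proc.wait()

if __name__ == "__main__":
    for line in stream_7z_lines("my_file.7z"):  # placeholder file name
        pass  # process each line here
```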
75,768,651 | 2023-3-17 | https://stackoverflow.com/questions/75768651/assign-and-re-use-quantile-based-buckets-by-group-in-pandas | What I am trying to achieve I have a pandas DataFrame in a long format, containing values for different groups. I want to compute and apply a quantile based-binning (e.g. quintiles in this example) to each group of the DataFrame. I also need to be able to keep the bins edges for each group and apply the same labelling (via pd.cut) to a new DataFrame. E.g. for each of group, find the quintiles and assign them to a new column value_label. import numpy as np import pandas as pd df1 = pd.DataFrame({"group": "A", "val": np.random.normal(loc=10, scale=5, size=100)}) df2 = pd.DataFrame({"group": "B", "val": np.random.normal(loc=5, scale=3, size=100)}) df = pd.concat([df1, df2], ignore_index=True) # apply qcut labels_and_bins = df.groupby("group")["val"].apply( lambda x: pd.qcut(x, q=5, duplicates="drop", retbins=True) ) # where e.g. labels_and_bins["A"][0] # are the applied labels to all the rows in group "A" labels_and_bins["A"][1] # are the bin edges to apply the same segmentation going forward for group in df.group.unique(): df.loc[df["group"] == group, "value_label"] = labels_and_bins[group][0] When I try to run this, at the second iteration I get the following error: TypeError: Cannot set a Categorical with another, without identical categories So, essentially I need Pandas to accept extending the categories belonging to the column dtype. What I have tried/considered Transform Using .transform() would probably solve the issue of assigning the label on the first DataFrame, but it's not clear to me how I could re-use the identified bins in future iterations Unioning the categorical dtype I tried two approaches: add_categories() labels_and_bins['A'][0].cat.add_categories(labels_and_bins['B'][0].cat.as_unordered()) This results in ValueError: Categorical categories must be unique union_categoricals() pd.api.types.union_categoricals( [labels_and_bins["A"][0].cat.as_unordered(), labels_and_bins["B"][0].cat.as_unordered()].get_inde ) This results in InvalidIndexError: cannot handle overlapping indices; use IntervalIndex.get_indexer_non_unique One solution Get rid of the Interval object by calling qcut without labels, e.g.: labels_and_bins = df.groupby("group")["val"].apply( lambda x: pd.qcut(x, q=5, duplicates="drop", retbins=True, labels=False) ) However I would be interested in a way to keep the Interval if possible for better interpretability Overall this feels like a big anti-pattern, so I'm confident I'm missing a much more basic solution to this problem! Thanks in advance for your inputs! | You can use groupby(...).quantile to get your bins. Getting the labels is the tricky part, if you want to have the same type of labels that cut and qcut returns you can convert this result to an pandas.arrays.IntervalArray and grab the left edges from there. 
bins = ( df.groupby('group')['val'] .quantile([0, .2, .4, .6, .8, 1]) # all of the below is to get those silly labels .groupby('group').apply(lambda s: pd.Series(pd.arrays.IntervalArray.from_breaks(s)) ) .to_frame('labels') .assign(breaks=lambda d: d['labels'].array.left) ) print(bins) labels breaks group A 0 (1.0537779249398262, 6.012235091686201] 1.053778 1 (6.012235091686201, 8.290140550780626] 6.012235 2 (8.290140550780626, 10.486391253124191] 8.290141 3 (10.486391253124191, 14.187164706356482] 10.486391 4 (14.187164706356482, 30.41871778451676] 14.187165 B 0 (-2.223397003597773, 2.929644956582803] -2.223397 1 (2.929644956582803, 4.205189382479166] 2.929645 2 (4.205189382479166, 5.512744579486252] 4.205189 3 (5.512744579486252, 6.9496607395781504] 5.512745 4 (6.9496607395781504, 11.247907758459675] 6.949661 # Note that the 'labels' columns are not actually a bunch of strings, # but is a `Series` wrapped around the underlying `IntervalArray` # you can access the `IntervalArray` via the `.array` property From here you can use pd.merge_asof to align your bins to other datasets or even back to your original DataFrame! # all of the sorting is required for `merge_asof` result = pd.merge_asof( df.sort_values('val'), bins.sort_values('breaks'), left_on='val', right_on='breaks', by='group' ) print(result.sample(8, random_state=0)) group val labels breaks 18 B 2.323406 (-2.223397003597773, 2.929644956582803] -2.223397 170 A 12.238075 (10.486391253124191, 14.187164706356482] 10.486391 107 A 6.680556 (6.012235091686201, 8.290140550780626] 6.012235 98 A 6.499656 (6.012235091686201, 8.290140550780626] 6.012235 177 A 13.156868 (10.486391253124191, 14.187164706356482] 10.486391 182 A 14.734745 (14.187164706356482, 30.41871778451676] 14.187165 5 B 0.227872 (-2.223397003597773, 2.929644956582803] -2.223397 146 A 9.562343 (8.290140550780626, 10.486391253124191] 8.290141 | 3 | 1 |
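A compact variant of the same idea using pd.qcut(retbins=True) to learn the edges per group and pd.cut to re-apply them to new data; the data below is synthetic and the outer edges are stretched to infinity so unseen values still land in a bucket:

```python
# Learn quintile edges per group once, then reuse them on any new frame.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "val": np.r_[rng.normal(10, 5, 100), rng.normal(5, 3, 100)],
})

edges = {}
for g, s in df.groupby("group")["val"]:
    _, bin_edges = pd.qcut(s, q=5, retbins=True, duplicates="drop")
    bin_edges[0], bin_edges[-1] = -np.inf, np.inf   # catch unseen extremes
    edges[g] = bin_edges

def label(frame):
    out = pd.Series(index=frame.index, dtype="float")
    for g, sub in frame.groupby("group"):
        out.loc[sub.index] = pd.cut(sub["val"], bins=edges[g], labels=False)
    return out

df["value_label"] = label(df)

new = pd.DataFrame({"group": ["A", "B", "B"], "val": [11.0, 2.0, 99.0]})
new["value_label"] = label(new)
print(new)
```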
75,767,309 | 2023-3-17 | https://stackoverflow.com/questions/75767309/python-speeding-up-a-loop-for-text-comparisons | I have a loop which is comparing street addresses. It then uses fuzzy matching to tokenise the addresses and compare the addresses. I have tried this both with fuzzywuzzy and rapidfuzz. It subsequently returns how close the match is. The aim is to try and take all my street addresses 30k or so and match a variation of the street address to a structured street address in my dataset. The end result would be a reference table with two columns: Column A is the reference column address Column B is an address where the match is good enough to be associated with column A Column A can have many associated addresses. I am not a huge python user but do know that for loops are the last resort for most problems (third answer). With that in mind, I have used for loops. However my loops will take approx 235 hours which is sub-optimal to say the least. I have created a reproducible example below. Can anyone see where i can make any tweaks? I have added a progress bar to give you an idea of the speed. You can increase the number of addresses by changing the line for _ in range(20): import pandas as pd from tqdm import tqdm from faker import Faker from rapidfuzz import process, fuzz # GENERATE FAKE ADDRESSES FOR THE REPRODUCIBLE EXAMPLE ----------------------------------------------- fake = Faker() fake_addresses = pd.DataFrame() for _ in range(20): # Generate fake address d = {'add':fake.address()} df = pd.DataFrame(data = [d]) # Append it to the addresses dataframe fake_addresses = pd.concat([fake_addresses, df]) fake_addresses = fake_addresses.reset_index(drop=True) # COMPARE ADDRESSES --------------------------------------------------------------------------------- # Here we are making a "dictionary" of the addresses where we use left side as a reference address # We use the right side as all the different variations of the address. The addresses have to be # 0% similar. Normally this is 95% similarity reference = fake_addresses['add'].drop_duplicates() ref_addresses = pd.DataFrame() # This takes a long time. I have added tqdm to show how long when the number of addresses is increased dramatically for address in tqdm(reference): for raw_address in reference: result = fuzz.token_sort_ratio(address, raw_address) d = {'reference_address': address, 'matched_address': raw_address, 'matched_result': result} df = pd.DataFrame(data = [d]) if len(df.index) > 0: filt = df['matched_result'] >= 0 df = df.loc[filt] ref_addresses = pd.concat([ref_addresses, df], ignore_index=True) else: ref_addresses = ref_addresses | I would start by pre-calculating the sorted tokens once for each address so that you don't end up doing it n*n-1 times. This allows you to bypass the processor and avoid calling the sort method of the fuzzer. After that, I would take pandas out of the picture at least while doing these tests. This test runs through 1k faked address in about 2 seconds and 10k in about 4 1/2 minutes. 
At the moment if addr1 and addr2 are similar we record that both ways.: import faker import rapidfuzz import itertools import tqdm ## ----------------------- ## Generate some "fake" addresses ## apply the deault process once to each address ## sort the tokens once for each address ## ----------------------- faked_address_count = 1_000 make_fake = faker.Faker() fake_addresses = [make_fake.address() for _ in range(faked_address_count)] fake_addresses = { address: " ".join(sorted( x.strip() for x in rapidfuzz.utils.default_process(address).split(" ") if x.strip() )) for address in fake_addresses } ## ----------------------- ## ----------------------- ## Find similar addresses ## ----------------------- threshhold = 82 results = {} pairs = itertools.combinations(fake_addresses.items(), 2) for (addr1, processed_addr1), (addr2, processed_addr2) in tqdm.tqdm(pairs): similarity = rapidfuzz.fuzz.token_ratio( processed_addr1, processed_addr2, processor=None ) if similarity > threshhold: results.setdefault(addr1, []).append([addr2, similarity]) results.setdefault(addr2, []).append([addr1, similarity]) # also record 2 similar to 1? ## ----------------------- ## ----------------------- ## Print the final results ## ----------------------- import json print(json.dumps(results, indent=4)) ## ----------------------- generating a result like: { "PSC 0586, Box 8976\nAPO AE 52923": [ ["PSC 0586, Box 6535\nAPO AE 76148", 83.33333333333334] ], "PSC 0586, Box 6535\nAPO AE 76148": [ ["PSC 0586, Box 8976\nAPO AE 52923", 83.33333333333334] ], "USNS Brown\nFPO AE 20210": [ ["USNV Brown\nFPO AE 70242", 82.6086956521739] ], "USNV Brown\nFPO AE 70242": [ ["USNS Brown\nFPO AE 20210", 82.6086956521739] ] } To create a version that more directly aligns with what I think your inputs and outputs might be, you can take a look at. 
This will compare 1k standard addresses to 10k test addresses in about 40 seconds.: import faker import rapidfuzz import itertools import tqdm ## ----------------------- ## Generate sets of standard addresses and test addresses ## ----------------------- make_fake = faker.Faker() standard_addresses = [make_fake.address() for _ in range(1_000)] test_addresses = [make_fake.address() for _ in range(2_000)] ## ----------------------- ## ----------------------- ## pre-process the addresses ## ----------------------- def address_normalizer(address): return " ".join(sorted( x.strip() for x in rapidfuzz.utils.default_process(address).split(" ") if x.strip() )) standard_addresses = {address: address_normalizer(address) for address in standard_addresses} test_addresses = {address: address_normalizer(address) for address in test_addresses} ## ----------------------- ## ----------------------- ## Create a list to hold our results ## ----------------------- results = [] ## ----------------------- ## ----------------------- ## Find similar addresses ## ----------------------- threshhold = 82 pairs = itertools.product(standard_addresses.items(), test_addresses.items()) for standard_address_kvp, test_address_kvp in tqdm.tqdm(pairs): similarity = rapidfuzz.fuzz.token_ratio( standard_address_kvp[1], test_address_kvp[1], processor=None ) if similarity > threshhold: results.append([standard_address_kvp[0], test_address_kvp[0]]) ## ----------------------- for pair in results: print(pair) That will generate a result like: ['USS Collins\nFPO AE 04706', 'USNS Collins\nFPO AE 40687'] ['USS Miller\nFPO AP 15314', 'USS Miller\nFPO AP 91203'] ['Unit 4807 Box 9762\nDPO AP 67543', 'Unit 6542 Box 9721\nDPO AP 48806'] ['Unit 4807 Box 9762\nDPO AP 67543', 'Unit 9692 Box 6420\nDPO AP 46850'] | 3 | 1 |
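If the end goal is matching each raw address to its closest standard address, rapidfuzz can also build the whole score matrix in one call with process.cdist (rapidfuzz 2.x or later), which parallelises across cores; the addresses below are placeholders:

```python
# Score every raw address against every standard address in one C-level call.
from rapidfuzz import fuzz, process, utils

standard = ["12 Main Street Springfield", "99 High Road London"]   # placeholders
raw = ["main street 12, springfield", "99 high rd london", "totally different"]

scores = process.cdist(
    raw, standard,
    scorer=fuzz.token_sort_ratio,
    processor=utils.default_process,
    workers=-1,                      # use all cores
)

best = scores.argmax(axis=1)
for addr, idx, score in zip(raw, best, scores.max(axis=1)):
    match = standard[idx] if score >= 82 else None
    print(f"{addr!r} -> {match!r} ({score:.1f})")
```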
75,765,213 | 2023-3-17 | https://stackoverflow.com/questions/75765213/pytube-attributeerror-nonetype-object-has-no-attribute-span-cipher-py | Yesterday this worked fine; today I'm getting an error on my local machine, Colab notebook, and even on my VPS. /usr/local/lib/python3.9/dist-packages/pytube/cipher.py in get_throttling_plan(js) 409 match = plan_regex.search(raw_code) 410 --> 411 transform_plan_raw = find_object_from_startpoint(raw_code, match.span()[1] - 1) 412 413 # Steps are either c[x](c[y]) or c[x](c[y],c[z]) from pytube import YouTube def audio_download(video_url): audio_file = YouTube(video_url).streams.filter(only_audio=True).first().download(filename="audio.mp4") return 'ok' I expected it to download the audio. I even tried making changes in the cipher.py file as suggested in another solution, but it is not working. | Found a solution: in cipher.py, change line 411 from transform_plan_raw = find_object_from_startpoint(raw_code, match.span()[1] - 1) to transform_plan_raw = js | 4 | 7 |
75,763,556 | 2023-3-17 | https://stackoverflow.com/questions/75763556/sys-getrefcount-returning-very-large-reference-counts | In CPython 3.11, the following code returns very large reference counts for some objects. It seems to follow pre-cached objects like integers -5 to 256, but CPython 3.10 does not: Python 3.11.2 (tags/v3.11.2:878ead1, Feb 7 2023, 16:38:35) [MSC v.1934 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> for i in (-6, -5, 0, 255, 256, 257): ... print(i, sys.getrefcount(i)) ... -6 5 -5 1000000004 0 1000000535 255 1000000010 256 1000000040 257 5 Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> for i in (-6, -5, 0, 255, 256, 257): ... print(i, sys.getrefcount(i)) ... -6 5 -5 6 0 234 255 8 256 26 257 5 PEP 683 - Immortal Objects, Using a Fixed Refcount may be related, but it isn't mentioned in What's New in Python 3.11, nor is a change in sys.getrefcount() documented. Anybody have knowledge about this change? | This is not related to PEP 683, but it will probably be superseded by PEP 683 once PEP 683 has been implemented. This refcount change was first introduced in a commit on December 13, 2021, which introduced a _PyObject_IMMORTAL_INIT macro: #define _PyObject_IMMORTAL_INIT(type) \ { \ .ob_refcnt = 999999999, \ .ob_type = type, \ } Before this change, small int refcounts were initialized to 1. This change made it so their refcounts were instead initialized to 999999999. The relevant issue discussion is here, and the pull request for the commit is here. The value of 999999999 seems to have been copied from a fairly arbitrary decision Guido van Rossum made while writing an unrelated tool. Quoting a message by Guido in the issue discussion: I used 999999999 in deepfreeze.py to signify "immortal object". It has been copied by others (small integers are essentially immortal too). I wasn't too sure that the refcount wouldn't go below zero if the interpreter is repeatedly finalized and reinitialized. Once we have official immortal objects (see PEP-683) we should switch to that. | 8 | 8 |
75,756,665 | 2023-3-16 | https://stackoverflow.com/questions/75756665/how-to-make-pylance-understand-pydantics-allow-population-by-field-name-for-i | In my current project, we are using an OpenAPI-to-TypeScript-API generator, that generates automatically typed functions for calling API endpoints via Axios. In Python, we use snake_case for our class properties, while in TypeScript we use camelCase. Using this setup, we have found that the alias property (Field(..., alias="***")) is very helpful, combined with the allow_population_by_field_name = True property. Simple model class example: from pydantic import BaseModel, Field class MyClass(BaseModel): my_property: str = Field(..., alias="myProperty") class Config: allow_population_by_field_name = True if __name__ == "__main__": my_object = MyClass(my_property="banana") The question is: Why doesn't Pylance understand that I want to be able to write MyClass(my_property="banana")? See screenshot from vscode below: Is there any way to configure Pylance/Pydantic/vscode to better understand this? "Just living with" this is problematic because having red lines like this in our code promotes a mindset of "red is probably ok", so they lose their power. Edit May 8th, 2023: We found a way to "circumvent" this problem, by creating a wrapper class, containing an alias_generator. It doesn't seem like pylance picks up on this, and therefore it acts as if it doesn't exist. I'm sure using the system like this isn't something people do a lot, but this helped our team. Also it allows us to not write = Field(..., alias="camelCase") everywhere. | It seems like this is a known problem as you can see from issue #4936. It has to do with the way dataclass_transform (from PEP 681) handles the alias field attribute. There is currently no elegant solution for this, but there may be with Pydantic v2 at some point. I agree that you should not just passively ignore the warnings. It is always better to explicitly ignore them, if you did everything you could to fix the underlying issue. Therefore I propose this is a clear-cut use case for # type: ignore or more specifically # pyright: ignore as suggested here in the documentation. Sometimes you just hit the limits of the Python typing system. And when you do, explicit ignore-directives are the correct way to deal with that. | 4 | 5 |
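A sketch of the alias_generator workaround mentioned in the edit, in pydantic v1 style: since no field declares an explicit Field(alias=...), Pylance type-checks the keyword form cleanly, while the generated camelCase aliases still work at runtime:

```python
# pydantic v1-style sketch: camelCase aliases generated from snake_case names.
from pydantic import BaseModel

def to_camel(name: str) -> str:
    first, *rest = name.split("_")
    return first + "".join(part.capitalize() for part in rest)

class CamelModel(BaseModel):
    class Config:
        alias_generator = to_camel
        allow_population_by_field_name = True

class MyClass(CamelModel):
    my_property: str

print(MyClass(my_property="banana"))                      # by field name, no Pylance warning
print(MyClass(myProperty="banana"))                       # by generated alias
print(MyClass(my_property="banana").dict(by_alias=True))  # {'myProperty': 'banana'}
```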
75,739,583 | 2023-3-15 | https://stackoverflow.com/questions/75739583/is-there-a-way-to-detect-and-quantify-the-missing-area-of-objects-in-images | I am trying to calculate the percentage of leaf damage using R. I am able to do the image segmentation and area calculation using the pliman package, following the vignettes: https://cran.r-project.org/web/packages/pliman/vignettes/pliman_start.html I am using the following R code. I put a 1cm square as a reference scale. A sample segmented image can be downloaded here: #...........................Load image........................... library(pliman) img_segm <- pliman::image_import("C:/Users/sando/Desktop/test/imgseg/imgseg.jpg") #.........................Analize leaves......................... layout(t(1:2)) meas <- analyze_objects(img_segm, index = "BIM", marker = "id", invert= F,show_image=T, parallel=T, watershed= T, fill_hull= F, tolerance = 30, object_size = "large") # Measures area in pixels area1 <- get_measures(meas)[,c(1,4)] area1 # Correct the measures using an object in cm real.area <- get_measures(meas, id = 1, area ~ 1)[,c(1,4)] real.area #........................Analize contour......................... # Draw a convex hull around the objects. cont_meas <- analyze_objects(img_segm, watershed = T, marker = "id", show_chull = TRUE,index = "BIM", invert= F,show_image=T, parallel=T, fill_hull= F, tolerance = 30, object_size = "large") # shows the convex hull # Measures area real.area2 <- get_measures(cont_meas, id = 1, area ~ 1 )[,c(1,4)] However, I can't obtain the area damaged, and I can't use color segmentation because background is white. I would like: Identify each leaf individually Predict or select the missing leaf edges using some form of convex hull detection or ROI. Calculate the area of damage (in red) and total area (leaf + red). I know it's possible to do some binary transformation. So I've tried the following code: #### detect contours in red library(imager) img_segm2 <- imager::as.cimg(img_segm) plot(img_segm2) # isoblur, greyscale img_segm3 <- isoblur(grayscale(img_segm2),2) > .60 plot(img_segm3) px <- img_segm3 > 0.1 ct <- imager::contours(px,nlevels=3) plot(px) #Add contour lines purrr::walk(ct,function(v) lines(v$x,v$y,col="red",lwd=1.5)) Is there any method that allows me to obtain this in a semi-automatic way? I know it can be done in ImageJ and some software like leafbyte, or bioleaf, but I would like to be able to analyze these images in R or Python. Thank you for your time. | Alright, here's a start for you: https://github.com/brockbrownwork/leaves I'll polish up this answer later, but here's the main results: Input: Output: Fraying: 3.947% Percentage of non-fray holes: 5.589% Edit: If you have a few examples of the images you're going to use, that would be helpful. Mainly I want to know if convex hulls will accurately wrap around your leaves, otherwise we'll have to use something else. Are you looking for the stats of each individual leaf, or just the average of the whole? You may have to change the variable that represents the minimum area of a leaf in pixels. The hole detection wraps a little bit around the border of the leaf too, will fix that. | 4 | 1 |
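The answer above relies on wrapping each damaged leaf in its convex hull; since the linked repository's code is not reproduced here, the following is only a rough, hypothetical Python/OpenCV sketch of that idea (the file name, threshold, and minimum-area values are assumptions to tune for your images), and it only estimates edge damage, not interior holes:

```python
import cv2

img = cv2.imread("leaf.jpg")  # assumed: a segmented leaf image on a white background
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Leaf pixels are darker than the white background, so use an inverted threshold.
_, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
min_leaf_area = 1000  # assumed minimum leaf size in pixels

leaves = [c for c in contours if cv2.contourArea(c) > min_leaf_area]
for i, contour in enumerate(leaves):
    leaf_area = cv2.contourArea(contour)
    hull_area = cv2.contourArea(cv2.convexHull(contour))
    damage_pct = 100 * (hull_area - leaf_area) / hull_area
    print(f"leaf {i}: estimated edge damage {damage_pct:.2f}% of reconstructed area")
```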
75,760,261 | 2023-3-16 | https://stackoverflow.com/questions/75760261/most-efficient-way-to-continuously-find-the-median-of-a-stream-of-numbers-in-py | I'm trying to solve a problem which reads as follows: A queue of eager single-digits (your input) are waiting to enter an empty room. I allow one digit (from the left) to enter the room each minute. Each time a new digit enters the room I chalk up the median of all the digits currently in the room on the chalkboard. [The median is the middle digit when the digits are arranged in ascending order.]. If there are two median numbers (i.e. two middles) then rather than using the average, I chalk up the lower one of the two. I chalk new digits to the right of existing digits so my chalkboard number keeps getting longer. What number ends up on your chalkboard once all the digits are in the room? Consider the example input: 21423814127333 2 (the leftmost) is allowed into the room where it is the only digit so I write 2 on the chalkboard. 1 is then allowed into the room to join 2. The smaller one of these two is used as the median so I chalk up 1 to the right of 2 on the chalkboard (my number is now 21) 4 now enters the room. The median of 1, 2 and 4 is 2 so I add 2 to my chalkboard (my number is now 212) ...this process continues until the final 3 enters the room ... all the numbers are in the room now which, when sorted, are 1,1,1,2,2,2,3,3,3,3,4,7,8,8. There are two median digits but they are both 3 so I add 3 to my chalkboard and my final number is 21222222222233 My current solution: num = input() new = str(num[0]) whole = [num[0]] for i in range(1, len(num)): whole.append(num[i]) whole.sort() new += whole[i//2] print(new) The problem is that it takes too long - so it passes 6/10 (hidden) test cases but exceeds the time limit for the other 4. Any help would be greatly appreciated. | You are repeatedly sorting, with key comparison, so total cost is O(N * N log N), that is, it is at least quadratic. single-digits (your input) are waiting to enter The key to this problem is the range limit on input. We know that each input x is in this range: 0 <= x < 10 Use counters. We can easily allocate ten of them. Keep a running count of total number of digits that have been admitted to the room. Each time you have to report a median, compute sum of ordered counters, stopping when you get to half the total count. max_val = 10 counter = {i: 0 for i in range(max_val)} ... assert 0 <= input_val < max_val counter[input_val] += 1 cum_sum = 0 for i in range(max_val): cum_sum += counter[i] ... Since median is a robust statistic, typically there will be some stability in the median you report, e.g. "2, 1, 2, 2, 2, 2". You can use that to speed the computation still further, by incrementally computing the cumulative sum. It won't change the big-Oh complexity, though, as there's a constant number of counters. We're still looking at O(N), since we have to examine each of the N digits admitted to the room and then report the current median. This does beat the O(N log N) cost of an approach that relies on bisecting an ordered vector. | 5 | 4 |
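To make the counting approach above concrete, here is a small self-contained version (variable names are mine, not the answer author's); each incoming digit costs at most ten counter reads, which is the O(N) behaviour described:

```python
num = "21423814127333"
counts = [0] * 10              # one counter per possible digit 0-9
chalkboard = []
for n, digit in enumerate(num, start=1):
    counts[int(digit)] += 1
    target = (n + 1) // 2      # rank of the lower median among n digits
    cum_sum = 0
    for value in range(10):
        cum_sum += counts[value]
        if cum_sum >= target:
            chalkboard.append(str(value))
            break
print("".join(chalkboard))     # 21222222222233
```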
75,759,568 | 2023-3-16 | https://stackoverflow.com/questions/75759568/python-convert-a-dataframe-of-events-into-a-square-wave-time-serie | How would you transform a pandas dataframe composed of successive events: start_time | end_time | value 1671221209 1671234984 2000 1671240425 1671241235 1000 1671289246 1671289600 133 ... ... ... into a time series like this: time | value 1671221209 2000 1671234984 2000 1671234985 0 1671240424 0 1671240425 1000 1671241235 1000 1671241236 0 1671289245 0 1671289246 133 1671289600 133 1671289601 0 ... ... | You can use the melt function: result = ( df.melt(id_vars='value', var_name='time_type', value_name='time') .drop(columns=['time_type']) .sort_values(by='time') .reset_index(drop=True) ) result = result[['time', 'value']] | 3 | 2 |
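For the accepted answer above, here is a quick, self-contained check against the data from the question (a sketch only; note that it reproduces the start/end rows, while the zero-value rows one second outside each event shown in the desired output would still have to be added separately):

```python
import pandas as pd

df = pd.DataFrame({
    "start_time": [1671221209, 1671240425, 1671289246],
    "end_time":   [1671234984, 1671241235, 1671289600],
    "value":      [2000, 1000, 133],
})

# Unpivot start/end into a single "time" column, then order chronologically.
result = (
    df.melt(id_vars="value", var_name="time_type", value_name="time")
      .drop(columns=["time_type"])
      .sort_values(by="time")
      .reset_index(drop=True)
)[["time", "value"]]
print(result)
```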
75,745,438 | 2023-3-15 | https://stackoverflow.com/questions/75745438/how-to-print-polars-dataframe-without-the-leading-shape-information | When I print a Polars dataframe (e.g., from terminal or within Jupyter notebook), there is a leading string citing the shape of the resultant dataframe. I am looking for the method to not have this printed as part of the result. The simple example would be the following: >>> import polars as pl >>> df = pl.DataFrame() >>> df shape: (0, 0) ┌┐ ╞╡ └┘ >>> print(df) shape: (0, 0) ┌┐ ╞╡ └┘ >>> print(df[0, :]) shape: (0, 0) ┌┐ ╞╡ └┘ My question is how would I print only the dataframe without the lurking "shape: [n, m]" string? | I found a native way: you could use pl.Config.set_tbl_hide_dataframe_shape. This import polars as pl pl.Config.set_tbl_hide_dataframe_shape(True) df = pl.DataFrame({'col1': range(3), 'col2': ['a', 'b', 'c']}) print(df) results in ┌──────┬──────┐ │ col1 │ col2 │ │ --- │ --- │ │ i64 │ str │ ╞══════╪══════╡ │ 0 │ a │ │ 1 │ b │ │ 2 │ c │ └──────┴──────┘ | 4 | 7 |
75,754,219 | 2023-3-16 | https://stackoverflow.com/questions/75754219/how-to-put-the-max-of-3-separate-columns-in-a-new-column-in-python-pandas | For example, we have: a b c 1 1 3 2 4 1 3 2 1 And now, using Python, I'm trying to create this: a b c max 1 1 3 3c 2 4 1 4b 3 2 1 3a | If you need to match only the first maximal column, join the row-wise max (converted to string) with the column name returned by DataFrame.idxmax: cols = ['a','b','c'] df['max'] = df[cols].max(axis=1).astype(str).str.cat(df[cols].idxmax(axis=1)) print (df) a b c max 0 1 1 3 3c 1 2 4 1 4b 2 3 2 1 3a If multiple columns can share the maximum value and you need all matched column names, use the DataFrame.dot trick with , as the separator: print (df) a b c 0 3 1 3 1 2 4 1 2 3 2 1 cols = ['a','b','c'] max1 = df[cols].max(axis=1) s = df[cols].eq(max1, axis=0).dot(pd.Index(cols) + ',').str[:-1] df['max'] = max1.astype(str).str.cat(s) print (df) a b c max 0 3 1 3 3a,c 1 2 4 1 4b 2 3 2 1 3a | 3 | 3 |
75,747,252 | 2023-3-15 | https://stackoverflow.com/questions/75747252/using-sqlalchemy-orm-with-composite-primary-keys | Using sqlalchemy.orm I am trying to link two tables on a composite key, but keep getting an error. Unfortunately, the official docs provide an example that uses a single primary key (not composite), so I tried to come up with a basic example that reproduces the issue: from sqlalchemy import Column, ForeignKey, Integer, String from sqlalchemy.orm import DeclarativeBase, mapped_column, relationship class Base(DeclarativeBase): pass class Customer(Base): __tablename__ = "customer" id = mapped_column(String, primary_key=True) country = mapped_column(String, primary_key=True) billing_address_id = mapped_column(Integer, ForeignKey("address.idx")) shipping_address_id = mapped_column(Integer, ForeignKey("address.idx")) billing_address = relationship("Address", foreign_keys=[billing_address_id]) shipping_address = relationship("Address", foreign_keys=[shipping_address_id]) class Address(Base): __tablename__ = "address" idx = mapped_column(Integer, primary_key=True) address = mapped_column(String) customer_id = mapped_column(ForeignKey("customer.id")) customer_country = mapped_column(ForeignKey("customer.country")) customers_using_this_adress = relationship( "Customer", foreign_keys=[customer_id, customer_country] ) # trying to define a customer triggers an error c = Customer(id="A", country="B") I get the following error: AmbiguousForeignKeysError: Could not determine join condition between parent/child tables on relationship Address.customers_using_this_adress - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table. Here's the full traceback: --------------------------------------------------------------------------- AmbiguousForeignKeysError Traceback (most recent call last) File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/relationships.py:2421, in JoinCondition._determine_joins(self) 2420 if self.primaryjoin_initial is None: -> 2421 self.primaryjoin = join_condition( 2422 self.parent_persist_selectable, 2423 self.child_persist_selectable, 2424 a_subset=self.parent_local_selectable, 2425 consider_as_foreign_keys=consider_as_foreign_keys, 2426 ) 2427 else: File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/sql/util.py:123, in join_condition(a, b, a_subset, consider_as_foreign_keys) 101 """Create a join condition between two tables or selectables. 102 103 e.g.:: (...) 121 122 """ --> 123 return Join._join_condition( 124 a, 125 b, 126 a_subset=a_subset, 127 consider_as_foreign_keys=consider_as_foreign_keys, 128 ) File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/sql/selectable.py:1356, in Join._join_condition(cls, a, b, a_subset, consider_as_foreign_keys) 1355 if len(constraints) > 1: -> 1356 cls._joincond_trim_constraints( 1357 a, b, constraints, consider_as_foreign_keys 1358 ) 1360 if len(constraints) == 0: File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/sql/selectable.py:1501, in Join._joincond_trim_constraints(cls, a, b, constraints, consider_as_foreign_keys) 1500 if len(constraints) != 1: -> 1501 raise exc.AmbiguousForeignKeysError( 1502 "Can't determine join between '%s' and '%s'; " 1503 "tables have more than one foreign key " 1504 "constraint relationship between them. " 1505 "Please specify the 'onclause' of this " 1506 "join explicitly." 
% (a.description, b.description) 1507 ) AmbiguousForeignKeysError: Can't determine join between 'address' and 'customer'; tables have more than one foreign key constraint relationship between them. Please specify the 'onclause' of this join explicitly. The above exception was the direct cause of the following exception: AmbiguousForeignKeysError Traceback (most recent call last) Cell In[11], line 32 26 customer_country = mapped_column(ForeignKey("customer.country")) 27 customers_using_this_adress = relationship( 28 "Customer", foreign_keys=[customer_id, customer_country] 29 ) ---> 32 c = Customer(id="A", country="B") File <string>:4, in __init__(self, **kwargs) File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/state.py:570, in InstanceState._initialize_instance(*mixed, **kwargs) 567 self, instance, args = mixed[0], mixed[1], mixed[2:] # noqa 568 manager = self.manager --> 570 manager.dispatch.init(self, args, kwargs) 572 try: 573 manager.original_init(*mixed[1:], **kwargs) File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/event/attr.py:487, in _CompoundListener.__call__(self, *args, **kw) 485 fn(*args, **kw) 486 for fn in self.listeners: --> 487 fn(*args, **kw) File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/mapper.py:4308, in _event_on_init(state, args, kwargs) 4306 instrumenting_mapper = state.manager.mapper 4307 if instrumenting_mapper: -> 4308 instrumenting_mapper._check_configure() 4309 if instrumenting_mapper._set_polymorphic_identity: 4310 instrumenting_mapper._set_polymorphic_identity(state) File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/mapper.py:2374, in Mapper._check_configure(self) 2366 @util.langhelpers.tag_method_for_warnings( 2367 "This warning originated from the `configure_mappers()` process, " 2368 "which was invoked automatically in response to a user-initiated " (...) 2371 ) 2372 def _check_configure(self) -> None: 2373 if self.registry._new_mappers: -> 2374 _configure_registries({self.registry}, cascade=True) File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/mapper.py:4116, in _configure_registries(registries, cascade) 4110 Mapper.dispatch._for_class(Mapper).before_configured() # type: ignore # noqa: E501 4111 # initialize properties on all mappers 4112 # note that _mapper_registry is unordered, which 4113 # may randomly conceal/reveal issues related to 4114 # the order of mapper compilation -> 4116 _do_configure_registries(registries, cascade) 4117 finally: 4118 _already_compiling = False File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/mapper.py:4158, in _do_configure_registries(registries, cascade) 4156 if not mapper.configured: 4157 try: -> 4158 mapper._post_configure_properties() 4159 mapper._expire_memoizations() 4160 mapper.dispatch.mapper_configured(mapper, mapper.class_) File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/mapper.py:2391, in Mapper._post_configure_properties(self) 2388 self._log("initialize prop %s", key) 2390 if prop.parent is self and not prop._configure_started: -> 2391 prop.init() 2393 if prop._configure_finished: 2394 prop.post_instrument_class(self) File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/interfaces.py:544, in MapperProperty.init(self) 537 """Called after all mappers are created to assemble 538 relationships between mappers and perform other post-mapper-creation 539 initialization steps. 
540 541 542 """ 543 self._configure_started = True --> 544 self.do_init() 545 self._configure_finished = True File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/relationships.py:1634, in RelationshipProperty.do_init(self) 1632 self._setup_entity() 1633 self._setup_registry_dependencies() -> 1634 self._setup_join_conditions() 1635 self._check_cascade_settings(self._cascade) 1636 self._post_init() File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/relationships.py:1881, in RelationshipProperty._setup_join_conditions(self) 1880 def _setup_join_conditions(self) -> None: -> 1881 self._join_condition = jc = JoinCondition( 1882 parent_persist_selectable=self.parent.persist_selectable, 1883 child_persist_selectable=self.entity.persist_selectable, 1884 parent_local_selectable=self.parent.local_table, 1885 child_local_selectable=self.entity.local_table, 1886 primaryjoin=self._init_args.primaryjoin.resolved, 1887 secondary=self._init_args.secondary.resolved, 1888 secondaryjoin=self._init_args.secondaryjoin.resolved, 1889 parent_equivalents=self.parent._equivalent_columns, 1890 child_equivalents=self.mapper._equivalent_columns, 1891 consider_as_foreign_keys=self._user_defined_foreign_keys, 1892 local_remote_pairs=self.local_remote_pairs, 1893 remote_side=self.remote_side, 1894 self_referential=self._is_self_referential, 1895 prop=self, 1896 support_sync=not self.viewonly, 1897 can_be_synced_fn=self._columns_are_mapped, 1898 ) 1899 self.primaryjoin = jc.primaryjoin 1900 self.secondaryjoin = jc.secondaryjoin File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/relationships.py:2308, in JoinCondition.__init__(self, parent_persist_selectable, child_persist_selectable, parent_local_selectable, child_local_selectable, primaryjoin, secondary, secondaryjoin, parent_equivalents, child_equivalents, consider_as_foreign_keys, local_remote_pairs, remote_side, self_referential, prop, support_sync, can_be_synced_fn) 2305 self.support_sync = support_sync 2306 self.can_be_synced_fn = can_be_synced_fn -> 2308 self._determine_joins() 2309 assert self.primaryjoin is not None 2311 self._sanitize_joins() File ~/miniconda3/envs/excel/lib/python3.11/site-packages/sqlalchemy/orm/relationships.py:2465, in JoinCondition._determine_joins(self) 2453 raise sa_exc.AmbiguousForeignKeysError( 2454 "Could not determine join " 2455 "condition between parent/child tables on " (...) 2462 "parent and child tables." % (self.prop, self.secondary) 2463 ) from afe 2464 else: -> 2465 raise sa_exc.AmbiguousForeignKeysError( 2466 "Could not determine join " 2467 "condition between parent/child tables on " 2468 "relationship %s - there are multiple foreign key " 2469 "paths linking the tables. Specify the " 2470 "'foreign_keys' argument, providing a list of those " 2471 "columns which should be counted as containing a " 2472 "foreign key reference to the parent table." % self.prop 2473 ) from afe AmbiguousForeignKeysError: Could not determine join condition between parent/child tables on relationship Address.customers_using_this_adress - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table. | Unlike a composite primary key where you can add primary_key=True to columns you want, composite foreign keys do not work that way. What you have right now is two different foreign keys and not a single composite foreign key. 
You need to use ForeignKeyConstraint in the __table_args__ tuple like the following. __table_args__ = ( ForeignKeyConstraint( [customer_id, customer_country], [Customer.id, Customer.country] ), ) Full code from sqlalchemy import Column, ForeignKey, Integer, String, ForeignKeyConstraint from sqlalchemy.orm import DeclarativeBase, mapped_column, relationship class Base(DeclarativeBase): pass class Customer(Base): __tablename__ = "customer" id = mapped_column(String, primary_key=True) country = mapped_column(String, primary_key=True) billing_address_id = mapped_column(Integer, ForeignKey("address.idx")) shipping_address_id = mapped_column(Integer, ForeignKey("address.idx")) billing_address = relationship("Address", foreign_keys=[billing_address_id]) shipping_address = relationship("Address", foreign_keys=[shipping_address_id]) class Address(Base): __tablename__ = "address" idx = mapped_column(Integer, primary_key=True) address = mapped_column(String) customer_id = mapped_column(Integer) customer_country = mapped_column(Integer) customers_using_this_adress = relationship( "Customer", foreign_keys=[customer_id, customer_country] ) __table_args__ = ( ForeignKeyConstraint( [customer_id, customer_country], [Customer.id, Customer.country] ), ) c = Customer(id="A", country="B") | 4 | 5 |
75,744,991 | 2023-3-15 | https://stackoverflow.com/questions/75744991/is-there-any-pep-regarding-a-pipe-operator-in-python | It is often common to write this type of python code: def load_data(): df_ans = get_data() df_checked = check_data(df_ans) # returns df_ans or raises an error return_dict = format_data(df_ans) return return_dict One could write the above like this (but it's ugly in my opinion) def load_data(): return format_data(check_data(get_data())) If this were all pandas code, one could use the .pipe method to do as follows: def load_data(): return get_data().pipe(check_data).pipe(format_data) However, there seems to be no universal way to "pipe" things in Python, something like: def load_data(): return (get_data() |> check_data |> format_data) Is there any PEP proposal for this? I could not find one. I've seen some libraries that do operator overloading on | but I don't think that's what I'm looking for - I'm looking for x {operator} y to be the same as y(x) (which of course requires y to be callable) | I don't think a pipe operator in Python will be approved in the near future. To do that operation there is already the function pipe in the toolz library. It is not a Python built in library but is widely used. In case you are curious the implementation is just: def pipe(data, *funcs): for func in funcs: data = func(data) return data | 4 | 1 |
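To show how the toolz-style pipe reads in practice, here is a self-contained sketch reusing the same pipe implementation quoted in the answer; the get_data/check_data/format_data stubs are mine, purely so the example runs:

```python
def pipe(data, *funcs):        # same implementation as quoted from toolz above
    for func in funcs:
        data = func(data)
    return data

# Stand-ins for the functions from the question.
def get_data():
    return {"raw": [1, 2, 3]}

def check_data(df):
    if not df["raw"]:
        raise ValueError("empty data")
    return df

def format_data(df):
    return {"formatted": df["raw"]}

def load_data():
    return pipe(get_data(), check_data, format_data)

print(load_data())  # {'formatted': [1, 2, 3]}
```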
75,745,192 | 2023-3-15 | https://stackoverflow.com/questions/75745192/extract-digits-from-a-string-within-a-word | I want a regular expression that returns only the digits that are within a word, but I can only find expressions that return all digits in a string. I've used this example: text = 'I need this number inside my wor5d, but also this word3 and this 4word, but not this 1 and not this 555.' The following code returns all digits, but I am only interested in ['5', '3', '4'] import re print(re.findall(r'\d+', text)) Any suggestions? | You can use re.findall(r'(?<=[a-zA-Z])\d+|\d+(?=[a-zA-Z])', text) This regex will extract all chunks of one or more digits that are immediately preceded or followed by an ASCII letter. A fully Unicode version for Python re would look like (?<=[^\W\d_])\d+|\d+(?=[^\W\d_]) where [^\W\d_] matches any Unicode letter. See the regex demo for reference. | 4 | 1 |
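For completeness, the suggested pattern applied to the sample text from the question:

```python
import re

text = ("I need this number inside my wor5d, but also this word3 "
        "and this 4word, but not this 1 and not this 555.")
print(re.findall(r'(?<=[a-zA-Z])\d+|\d+(?=[a-zA-Z])', text))  # ['5', '3', '4']
```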
75,683,009 | 2023-3-9 | https://stackoverflow.com/questions/75683009/how-to-catch-the-error-message-from-chem-molfromsmilesformula | I'm new to rdkit; below is the code that I'm using to get the chemical image from the formula, from rdkit import Chem m = Chem.MolFromSmiles('OCC1OC(C(C(C1O)O)O)[C]1(C)(CO)CC(=O)C=C(C1CCC(=O)C)C') m if the code is correct, it displays the structure. The above code displays the error saying "[15:23:55] Explicit valence for atom # 11 C, 5, is greater than permitted" I tried adding try/except, but it does not help. I need to catch this exception/message and display it to the user. How can we do this? [screenshot of the issue] try: from rdkit import Chem m = Chem.MolFromSmiles('OCC1OC(C(C(C1O)O)O)[C]1(C)(CO)CC(=O)C=C(C1CCC(=O)C)C') m except Exception as e: print(e) | It is not an Error, it is a Warning. Your code does not break doing that assignment. So there is no Error to catch. But your solution lies within the return of Chem.MolFromSmiles. If it fails to build a mol object from your SMILES it returns None, whereas it returns the mol object when it manages to "deal with the SMILES properly". For sure you can raise an error yourself! from rdkit import Chem m = Chem.MolFromSmiles('OCC1OC(C(C(C1O)O)O)[C]1(C)(CO)CC(=O)C=C(C1CCC(=O)C)C') if m is None: print("hey user, there was an error!") # or even raise an error: if m is None: raise ValueError("Could not interpret SMILES to mol object") I know my answer is lacking. My solution is not forwarding the warning to the user. I feel like it is quite hard to do. If there is a solution it might be found utilizing the logging or warnings package. I would also be interested in that but couldn't manage so far myself. | 4 | 1 |
75,726,959 | 2023-3-13 | https://stackoverflow.com/questions/75726959/how-to-reroute-requests-to-a-different-url-endpoint-in-fastapi | I am trying to write a middleware in my FastAPI application, so that requests coming to endpoints matching a particular format will be rerouted to a different URL, but I am unable to find a way to do that since request.url is read-only. I am also looking for a way to update request headers before rerouting. Are these things even possible in FastAPI? Redirection is the best I could do so far: from fastapi import Request from fastapi.responses import RedirectResponse @app.middleware("http") async def redirect_middleware(request: Request, call_next): if matches_certain_format(request.url.path): new_url = create_target_url(request.url.path) return RedirectResponse(url=new_url) | To change the request's URL pathβin other words, reroute the request to a different endpointβone can simply modify the request.scope['path'] value inside the middleware, before processing the request, as demonstrated in Option 3 of this answer. If your API endpoints include path parameters (e.g., '/users/{user_id}'), then you mgiht want to have a look at this answer on how to extract that kind of path from the request object, and then compare it against a predefined list of routes_to_reroute, as shown below. As for updating the request headers, or adding new custom headers to the request, you can follow a similar approach to the one described here, which demonstrates how to modify the request.scope['headers'] value. Working Example If you would like to avoid maintaining a list of routes to reroute and performing checks inside the middleware, you could instead mount a sub-application, which will contain only the routes that require rerouting, and add the middleware to that sub-app, similar to Option 3 of this answer. from fastapi import FastAPI, Request app = FastAPI() routes_to_reroute = ['/'] @app.middleware('http') async def some_middleware(request: Request, call_next): if request.url.path in routes_to_reroute: request.scope['path'] = '/welcome' headers = dict(request.scope['headers']) headers[b'custom-header'] = b'my custom header' request.scope['headers'] = [(k, v) for k, v in headers.items()] return await call_next(request) @app.get('/') async def main(): return 'OK' @app.get('/welcome') async def welcome(request: Request): return {'msg': 'Welcome!', 'headers': request.headers} | 8 | 7 |
75,737,824 | 2023-3-14 | https://stackoverflow.com/questions/75737824/bases-must-be-type-error-when-running-from-google-cloud-import-bigquery-in-jup | I tried running the following: from google.cloud import bigquery But from some reason, I keep getting this "Bases must be types" error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-3-1035661e8528> in <module> ----> 1 from google.cloud import bigquery /opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/__init__.py in <module> 33 __version__ = bigquery_version.__version__ 34 ---> 35 from google.cloud.bigquery.client import Client 36 from google.cloud.bigquery.dataset import AccessEntry 37 from google.cloud.bigquery.dataset import Dataset /opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/client.py in <module> 58 59 import google.api_core.client_options ---> 60 import google.api_core.exceptions as core_exceptions 61 from google.api_core.iam import Policy 62 from google.api_core import page_iterator /opt/conda/lib/python3.7/site-packages/google/api_core/exceptions.py in <module> 27 import warnings 28 ---> 29 from google.rpc import error_details_pb2 30 31 try: /opt/conda/lib/python3.7/site-packages/google/rpc/error_details_pb2.py in <module> 18 # source: google/rpc/error_details.proto 19 """Generated protocol buffer code.""" ---> 20 from google.protobuf import descriptor as _descriptor 21 from google.protobuf import descriptor_pool as _descriptor_pool 22 from google.protobuf import message as _message /opt/conda/lib/python3.7/site-packages/google/protobuf/descriptor.py in <module> 38 import warnings 39 ---> 40 from google.protobuf.internal import api_implementation 41 42 _USE_C_DESCRIPTORS = False /opt/conda/lib/python3.7/site-packages/google/protobuf/internal/api_implementation.py in <module> 102 try: 103 # pylint: disable=g-import-not-at-top --> 104 from google.protobuf.pyext import _message 105 sys.modules['google3.net.proto2.python.internal.cpp._message'] = _message 106 _c_module = _message TypeError: bases must be types I'm not sure what I did wrong. Maybe I install or uninstalled a package somewhere. But now I can't get this simple command to work. Any help would be greatly appreciated. | Marking this as an answer for better visibility, as this resolved the question: pip installing protobuff v3.20.1 as @runner16 mentioned pip install protobuf==3.20.1 | 3 | 4 |
75,680,581 | 2023-3-9 | https://stackoverflow.com/questions/75680581/how-to-stream-from-a-generator-to-a-polars-dataframe-and-subsequent-lazy-plan | I have a very long generator function that I want to process as a column using Polars. Due to its size, I want to run it in lazy streaming mode using the generator as a source, but I have been unable to work out how to do it (if it is possible). Creating a normal dataframe and then converting to lazy obviously doesn't work since the generator is exhausted before the lazy plan is run with collect(). This also happens with the LazyFrame initialiser, which is just a shortcut to above. Are there any other options that don't involve writing then scanning a csv? Example code: import polars as pl def Generator(): yield 1 yield 2 yield 3 generator = Generator() df = pl.DataFrame({"a": generator}).lazy() print(df) # naive plan... print([i for i in generator]) # [] generator2 = Generator() df = pl.LazyFrame({"a": generator2}) print(df) # naive plan... print([i for i in generator2]) # [] | Creating a normal dataframe and then converting to lazy obviously doesn't work since the generator is exhausted before the lazy plan is run with collect(). This is a mistaken assumption. If you do: df = pl.DataFrame({"a": generator}).lazy() then, yes, the generator is exhausted but all of its data is in df so it doesn't matter that the generator is exhausted If you take that back a step and imagine you're just doing df = pl.DataFrame({"a": generator}) then clearly you've got the data in df. When you tack on .lazy() all you're doing is setting a flag that any operations shouldn't be computed until .collect (or similar) is invoked. If you've got the memory to do that then it is the best way to consume the generator. If you don't have the memory to consume the generator then you'd have to dump it into a file but that file doesn't have to be a csv. You can use pyarrow to dump it into a parquet or ipc file which you can subsequently scan with polars. That would look like this: import pyarrow as pa import polars as pl generator = Generator() with pa.ipc.new_file('somefile.ipc', pa.schema([('a',pa.int32())])) as writer: while True: try: writer.write( pa.record_batch([pa.array([next(generator)], pa.int32())], names=['a']) ) except StopIteration: break df=pl.scan_ipc('somefile.ipc') If you want to write a parquet file then you'd use import pyarrow.parquet as pq import pyarrow as pa import polars as pl generator = Generator() with pq.ParquetWriter('somefile.parquet', pa.schema([('a',pa.int32())])) as writer: while True: try: writer.write( pa.record_batch([pa.array([next(generator)], pa.int32())], names=['a']) ) except StopIteration: break dfpq=pl.scan_parquet('somefile.parquet') | 4 | 2 |
75,711,757 | 2023-3-12 | https://stackoverflow.com/questions/75711757/fastapi-get-endpoint-returns-405-method-not-allowed-response | A GET endpoint in FastAPI is returning correct result, but returns 405 method not allowed when curl -I is used. This is happening with all the GET endpoints. As a result, the application is working, but health check on application from a load balancer is failing. Any suggestions what could be wrong? code @app.get('/health') async def health(): """ Returns health status """ return JSONResponse({'status': 'ok'}) result curl http://172.xx.xx.xx:8080 return header curl -I http://172.xx.xx.xx:8080 | The curl -I option (which is used in the example you provided) is the same as using curl --head and performrs an HTTP HEAD request, in order to fetch the headers only (not the body/content of the resource): The HTTP HEAD method requests the headers that would be returned if the HEAD request's URL was instead requested with the HTTP GET method. For example, if a URL might produce a large download, a HEAD request could read its Content-Length header to check the filesize without actually downloading the file. The requested resource/endpoint you are trying to call supports only GET requests; hence, the 405 Method Not Allowed response status code, which indicates that the server knows the request method, but the target resource doesn't support this method. To demonstrate this, have a look at the example below: from fastapi import FastAPI app = FastAPI() @app.get('/') async def main(): return {'Hello': 'World'} Test using Python requests (similar result is obtained using curl -I http://127.0.0.1:8000) import requests # Making a GET request # r = requests.get('http://127.0.0.1:8000') # Making a HEAD request r = requests.head('http://127.0.0.1:8000') # check status code for response received print(r.status_code, r.reason) # print headers of request print(r.headers) # checking if request contains any content print(r.content) Output (indicating by the allow response header which request methods are supported by the requested resource): 405 Method Not Allowed {'date': 'Sun, 12 Mar 2023', 'server': 'uvicorn', 'allow': 'GET', 'content-length': '31', 'content-type': 'application/json'} b'' If, instead, you performed a GET request (in order to issue a GET request in the example above, uncomment the line for GET request and comment the one for HEAD request, or in curl use curl http://127.0.0.1:8000), the response would be as follows: 200 OK {'date': 'Sun, 12 Mar 2023', 'server': 'uvicorn', 'content-length': '17', 'content-type': 'application/json'} b'{"Hello":"World"}' Solutions To make a FastAPI endpoint supporting more than one HTTP request methods (e.g., both GET and HEAD requests), the following solutions are available. Solution 1 Cretate two different functions with a decorator refering to the request method for which they should be called, i.e., @app.head() and @app.get(), or, if the logic of the function is the same for both the request methods, then simply add a decorator for each request method that you would like the endpoint to support to the same function. For instance: from fastapi import FastAPI app = FastAPI() @app.head('/') @app.get('/') async def main(): return {'msg': 'Hello World'} Solution 2 Use the @app.api_route() decorator, which allows you to define the set of supported request methods for the endpoint. 
For example: from fastapi import FastAPI app = FastAPI() @app.api_route('/', methods=['GET', 'HEAD']) async def main(): return {'msg': 'Hello World'} Output Both solutions above would respond as follows (when a HEAD request is issued by a client): 200 OK {'date': 'Sun, 12 Mar 2023', 'server': 'uvicorn', 'content-length': '17', 'content-type': 'application/json'} b'' In both solutions, if you would like to distinguish between the methods used to call the endpoint, you could use the .method attribute of the Request object. For instance: from fastapi import FastAPI, Request app = FastAPI() @app.api_route('/', methods=['GET', 'HEAD']) async def main(request: Request): if request.method == 'GET': print('GET method was used') elif request.method == 'HEAD': print('HEAD method was used') return {'msg': 'Hello World'} Note 1: In the test example using Python requests above, please avoid calling the json() method on the response when performing a HEAD request, as a response to a HEAD method request does not normally have a body, and it would raise an exception, if attempted calling r.json()βsee this answer for more details. Instead, use print(r.content), as demonstrated in the example above on HEAD requests, if you would like to check whether or not there is a response body. Note 2: The 405 Method Not Allowed response may also be caused by other reasonsβsee related answers here and here, as well as here and here. | 6 | 4 |
75,715,657 | 2023-3-12 | https://stackoverflow.com/questions/75715657/getting-tox-to-use-the-python-version-set-by-pyenv | I can't seem to wrap my head around managing Python versions. When I run tox, I can immediately see that it's using Python 3.7.9: $ tox py39: commands[0]> coverage run -m pytest ================================================================================== test session starts ================================================================================== platform darwin -- Python 3.7.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /usr/local/bin/python3 But it's configured to use 3.9: [tox] envlist = py39,manifest,check-formatting,lint skipsdist = True usedevelop = True indexserver = spotify = https://artifactory.spotify.net/artifactory/api/pypi/pypi/simple [testenv] basepython = python3.9 deps = :spotify:-r{toxinidir}/dev-requirements.txt commands = coverage run -m pytest {posargs} allowlist_externals = coverage [testenv:manifest] ; a safety check for source distributions basepython = python3.9 deps = check-manifest skip_install = true commands = check-manifest Here's what I see with which: $ pyenv local 3.9.10 $ which python /Users/acheong/.pyenv/shims/python $ which python3 /Library/Frameworks/Python.framework/Versions/3.7/bin/python3 $ pyenv which python /Users/acheong/.pyenv/versions/3.9.10/bin/python pytest also uses the wrong version: $ pytest tests ================================================================================== test session starts ================================================================================== platform darwin -- Python 3.7.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Library/Frameworks/Python.framework/Versions/3.7/bin/python3 cachedir: .pytest_cache rootdir: /Users/acheong/src/spotify/protean/ezmode-cli, configfile: tox.ini, testpaths: tests plugins: mock-3.10.0, cov-2.10.0 But in this case I learned I can do this: $ pyenv exec pytest tests ================================================================================== test session starts ================================================================================== platform darwin -- Python 3.9.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Users/acheong/.pyenv/versions/3.9.10/bin/python cachedir: .pytest_cache rootdir: /Users/acheong/src/spotify/protean/ezmode-cli, configfile: tox.ini, testpaths: tests plugins: mock-3.10.0, cov-2.10.0 But when I try that with tox, I get an error: $ pyenv exec tox Traceback (most recent call last): File "/Users/acheong/.pyenv/versions/3.9.10/bin/tox", line 8, in <module> sys.exit(run()) File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/run.py", line 19, in run result = main(sys.argv[1:] if args is None else args) File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/run.py", line 38, in main state = setup_state(args) File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/run.py", line 53, in setup_state options = get_options(*args) File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/config/cli/parse.py", line 38, in get_options guess_verbosity, log_handler, source = _get_base(args) File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/config/cli/parse.py", line 61, in _get_base MANAGER.load_plugins(source.path) File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/plugin/manager.py", line 90, in load_plugins self._register_plugins(inline) File 
"/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/plugin/manager.py", line 38, in _register_plugins self.manager.load_setuptools_entrypoints(NAME) File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/pluggy/_manager.py", line 287, in load_setuptools_entrypoints plugin = ep.load() File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/importlib/metadata.py", line 77, in load module = import_module(match.group('module')) File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox_pyenv.py", line 48, in <module> from tox import hookimpl as tox_hookimpl ImportError: cannot import name 'hookimpl' from 'tox' (/Users/acheong/.pyenv/versions/3.9.10/lib/python3.9/site-packages/tox/__init__.py) I've tried a lot of things I found online but I'm afraid if anything I've only messed up my environment even more. What steps can I take to diagnose the problem and get tox to use my pyenv version? | Although this is an old question and it has already been marked as solved, I dare to give another answer since this post appears on the first page of Google search results of the "tox pyenv" query. As already noted, tox-pyenv is not compatible with tox version 4. Moreover, tox no longer discovers Python executables by itself, this job is now delegated to virtualenv. That being said, a special tox plugin like tox-pyenv is no longer needed, the discovery machinery is extended via virtualenv plugins, not tox plugins. Here is an explanation in the tox-pyenv issues, and here is a more detailed migration guide. For those who don't want to configure virtualenv and are looking for a simpler solution, there is another tox plugin β tox-pyenv-redux (not related to tox-pyenv). It is compatible with tox 4, uses virtualenv-pyenv under the hood, and does not require any configuration to work (though, it has a configuration setting). | 3 | 4 |
75,701,437 | 2023-3-10 | https://stackoverflow.com/questions/75701437/why-do-we-multiply-learning-rate-by-gradient-accumulation-steps-in-pytorch | Loss functions in pytorch use "mean" reduction. So it means that the model gradient will have roughly the same magnitude given any batch size. It makes sense that you want to scale the learning rate up when you increase batch size because your gradient doesn't become bigger as you increase batch size. For gradient accumulation in PyTorch, it will "sum" the gradient N times where N is the number of times you call backward() before you call step(). My intuition is that this would increase the magnitude of the gradient and you should reduce the learning rate, or at least not increase it. But I saw people wrote multiplication to gradient accumulation steps in this repo: if args.scale_lr: args.learning_rate = ( args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes ) I also see similar code in this repo: model.learning_rate = accumulate_grad_batches * ngpu * bs * base_lr I understand why you want to increase the learning rate by batch size. But I don't understand why they try to increase the learning rate by the number of accumulation steps. Do they divide the loss by N to reduce the magnitude of the gradient? Otherwise why do they multiply learning rate by the accumulation steps? How are gradients from different GPUs accumulated? Is it using mean or sum? If it's sum, why are they multiplying the learning rate by nGPUs? | I found that they indeed divided the loss by N (number of gradient accumulation steps). You can see sample code from accelerate package here: https://huggingface.co/docs/accelerate/usage_guides/gradient_accumulation Notice the following line of code from the guide above: loss = loss / gradient_accumulation_steps This is why you need to multiply the learning rate by gradient accumulation steps to cancel the above division. I assume that the same procedure also happens in PyTorch Lightning. I asked a related Lightning question at the github discussion here: https://github.com/Lightning-AI/lightning/discussions/17035 I hope that someone will answer later that Trainer in Lightning does the same division process. The evidence from accelerate package made me think that gradients from different GPUs are also averaged, not summed. If they are going to be summed, the loss on each GPU has to be divided by the number of GPUs. This leads to a simple intuition about gradients in most PyTorch training workflows: No matter how big or small the batch is, the gradient will always have roughly the same magnitude. If you check the magnitude of the gradient right before step() call, it should stay roughly the same even if you vary batch size, number of gradient accumulation steps, number of GPUs, or even number of computers. | 5 | 6 |
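A minimal PyTorch sketch of the pattern described in the answer (the toy model and numbers are mine, not from either repository): the loss is divided by the number of accumulation steps, so the summed gradient keeps its "mean" scale, and the learning rate is scaled back up by the same factor:

```python
import torch

model = torch.nn.Linear(10, 1)
accum_steps = 4
base_lr = 1e-3
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr * accum_steps)

optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(8, 10)
    y = torch.randn(8, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss / accum_steps).backward()  # keeps the accumulated gradient at "mean" scale
optimizer.step()
optimizer.zero_grad()
```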
75,709,118 | 2023-3-11 | https://stackoverflow.com/questions/75709118/how-are-attribute-names-with-underscore-managed-in-pydantic-models | Can anyone explain how Pydantic manages attribute names with an underscore? In Pydantic models, there is a weird behavior related to attribute naming when using the underscore. That behavior does not occur in python classes. The test results show some allegedly "unexpected" errors. The following code is catching some errors for simple class and instance operations: class PydanticClass(BaseModel): a: int = "AAA" _z: int = "_Z_Z_Z" @classmethod def get_a(cls): return cls.a @classmethod def get_z(cls): return cls._z class NonPydanticClass: a: int = "AAA" _z: int = "_Z_Z_Z" @classmethod def get_a(cls): return cls.a @classmethod def get_z(cls): return cls._z print ("PYDANTIC MODEL CLASS TEST:") pydantic_instance = PydanticClass() try: msg1, msg2 = "SUCCESS", f"{pydantic_instance.get_a()}" except Exception as e: msg1, msg2 = "ERROR", f"{e}" print('{0:<7} :: {1:<27} :: {2:<65} :: {3}'.format(msg1, "1 pydantic_instance.get_a()", "Accessing non-underscored attributes names in a class method", msg2)) try: msg1, msg2 = "SUCCESS", f"{pydantic_instance.get_z()}" except Exception as e: msg1, msg2 = "ERROR", f"{e}" print('{0:<7} :: {1:<27} :: {2:<65} :: {3}'.format(msg1, "2 pydantic_instance.get_z()", "Accessing underscored attributes names in a class method", msg2)) try: msg1, msg2 = "SUCCESS", f"{PydanticClass.a}" except Exception as e: msg1, msg2 = "ERROR", f"{e}" print('{0:<7} :: {1:<27} :: {2:<65} :: {3}'.format(msg1, "3 PydanticClass.a ","Accessing non-underscored attribute names as a 'static attribute'", msg2)) try: msg1, msg2 = "SUCCESS", f"{PydanticClass._z}" except Exception as e: msg1, msg2 = "ERROR", f"{e}" print('{0:<7} :: {1:<27} :: {2:<65} :: {3}'.format(msg1, "4 PydanticClass._z","Accessing underscored attributes names as a 'static attribute'", msg2)) print ("\nNON PYDANTIC CLASS TEST:") nom_pydantic_instance = NonPydanticClass() try: msg1, msg2 = "SUCCESS", f"{nom_pydantic_instance.get_a()}" except Exception as e: msg1, msg2 = "ERROR", f"{e}" print('{0:<7} :: {1:<27} :: {2:<65} :: {3}'.format(msg1, "1 class_instance.get_a()", "Accessing non-underscored attributes names in a class method", msg2)) try: msg1, msg2 = "SUCCESS", f"{nom_pydantic_instance.get_z()}" except Exception as e: msg1, msg2 = "ERROR", f"{e}" print('{0:<7} :: {1:<27} :: {2:<65} :: {3}'.format(msg1, "2 class_instance.get_z()", "Accessing underscored attributes names in a class method", msg2)) try: msg1, msg2 = "SUCCESS", f"{NonPydanticClass.a}" except Exception as e: msg1, msg2 = "ERROR", f"{e}" print('{0:<7} :: {1:<27} :: {2:<65} :: {3}'.format(msg1, "3 PydanticClass.a ","Accessing non-underscored attribute names as a 'static attribute'", msg2)) try: msg1, msg2 = "SUCCESS", f"{NonPydanticClass._z}" except Exception as e: msg1, msg2 = "ERROR", f"{e}" print('{0:<7} :: {1:<27} :: {2:<65} :: {3}'.format(msg1, "4 PydanticClass._z","Accessing underscored attributes names as a 'static attribute'", msg2)) Here are the results: PYDANTIC MODEL CLASS TEST: ERROR :: 1 pydantic_instance.get_a() :: Accessing non-underscored attributes names in a class method :: type object 'PydanticClass' has no attribute 'a' SUCCESS :: 2 pydantic_instance.get_z() :: Accessing underscored attributes names in a class method :: _Z_Z_Z ERROR :: 3 PydanticClass.a :: Accessing non-underscored attribute names as a 'static attribute' :: type object 'PydanticClass' has no attribute 'a' SUCCESS :: 4 PydanticClass._z :: 
Accessing underscored attributes names as a 'static attribute' :: _Z_Z_Z NON PYDANTIC CLASS TEST: SUCCESS :: 1 class_instance.get_a() :: Accessing non-underscored attributes names in a class method :: AAA SUCCESS :: 2 class_instance.get_z() :: Accessing underscored attributes names in a class method :: _Z_Z_Z SUCCESS :: 3 PydanticClass.a :: Accessing non-underscored attribute names as a 'static attribute' :: AAA SUCCESS :: 4 PydanticClass._z :: Accessing underscored attributes names as a 'static attribute' :: _Z_Z_Z | This different treatment of underscored variables is actually explicitly documented in the section "Automatically excluded attributes", although it is not worded clearly enough and could be better explained in my humble opinion. UPDATE (2023-08-16): In fairness, the Pydantic v2 docs do a better job by explicitly stating that they are not treated as fields. See below for more on what that means. One crucial thing to understand about why Pydantic models treat their namespace differently than "regular" Python classes is that by default Pydantic constructs a field for every name declared in its namespace. A Pydantic field is a special construct that behaves differently than regular class/instance attributes would by design. A field can only be written to and read from on instances of a model. Even though you can "assign a value" in the class namespace definition, under the hood that is treated as defining a default value for that field and the value is not retained as a class attribute (unlike with regular classes). This is done by the meta class (source) of BaseModel, whenever you subclass it: from pprint import pprint from pydantic import BaseModel class Model(BaseModel): a: int = "foo" _b: int = "spam" pprint(dict(Model.__dict__)) print(f"{hasattr(Model, 'a')=}") print(f"{hasattr(Model, '_b')=}") Output (reduced) for Pydantic v1: {# ... '__annotations__': {'_b': <class 'int'>, 'a': <class 'int'>}, # ... '__fields__': {'a': ModelField(name='a', type=int, required=False, default='foo')}, # ... '_b': 'spam'} hasattr(Model, 'a')=False hasattr(Model, '_b')=True As you can see by checking the class' __dict__, once the subclass is fully constructed, there is no attribute a left on Model, which is why you get an error, when you try to do get it via Model.a (or inside a classmethod via cls.a, which is equivalent). Instead it has a field a that can be accessed from instances of Model and that field will receive the value "foo", unless another value is passed to the constructor. Underscored names inside a model's definition are an exception to this behavior. Let me quote: Class variables which begin with an underscore [...] will be automatically excluded from the model. What "excluded from the model" means is that underscored names are actually treated like "normal" class attributes by Pydantic v1 models. Meaning if you assign a value to them in the definition, you can later access that from the class directly. But they do not get all the special bells and whistles that fields get. They are just plain old attributes. 
And while those attributes can be read from both the class and an instance of that class, they can not be re-assigned on an instance (neither from the outside nor in an instance method) in Pydantic v1: # Pydantic v1 from pydantic import BaseModel class Model(BaseModel): _b: str = "spam" obj = Model() print(obj._b) # spam try: obj._b = "eggs" except Exception as e: print(repr(e)) # ValueError('"Model" object has no field "_b"') The reason for that is that Pydantic v1 models also have a custom __setattr__ implementation, which by default raises a ValueError, if you try to assign to a non-field (source). If you need underscored (protected) attributes for instances of your model, in Pydantic v1 you need to define them accordingly, using either PrivateAttr or the Config setting to make underscored attributes private by default: # Pydantic v1 from pydantic import BaseModel, PrivateAttr class Model(BaseModel): _b: str = PrivateAttr("spam") obj = Model() print(obj._b) # spam obj._b = "eggs" print(obj._b) # eggs This behavior may seem weird, when you first notice it, if you assume fields behave like regular class/instance attributes, but you get used to it, once you understand that this different behavior is necessary to enable all the magic that Pydantic models provide. UPDATE: With Pydantic v2 this is no longer necessary because all single-underscored attributes are automatically converted to "private attributes" and can be set as you would expect with normal classes: # Pydantic v2 from pydantic import BaseModel class Model(BaseModel): _b: str = "spam" obj = Model() print(obj._b) # spam obj._b = "eggs" print(obj._b) # eggs This is arguably a less confusing, more consistent approach now. But the distinction between regular fields and private attributes is still present and important in v2. | 5 | 13 |
75,671,499 | 2023-3-8 | https://stackoverflow.com/questions/75671499/duckdb-binder-error-referenced-column-not-found-in-from-clause | I am working in DuckDB in a database that I read from json. Here is the json: [{ "account": "abcde", "data": [ { "name": "hey", "amount":1, "flow":"INFLOW" }, { "name": "hello", "amount":-2, "flow": null } ] }, { "account": "hijkl", "data": [ { "name": "bonjour", "amount":1, "flow":"INFLOW" }, { "name": "hallo", "amount":-3, "flow":"OUTFLOW" } ] } ] I am opening it in Python as follows: import duckdb duckdb.sql(""" CREATE OR REPLACE TABLE mytable AS SELECT * FROM "example2.json" """) This all works fine and I get a copy of my table, but then I try to update it: duckdb.sql(""" UPDATE mytable SET data = NULL WHERE account = "abcde" """) which crashes with --------------------------------------------------------------------------- BinderException Traceback (most recent call last) Cell In[109], line 1 ----> 1 duckdb.sql(""" 2 UPDATE mytable SET data = NULL WHERE account = "abcde" 3 """) 6 # duckdb.sql(""" 7 # DELETE FROM mytable WHERE account = "abcde" 8 # """) 10 duckdb.sql(""" 11 SELECT * FROM mytable 12 """) BinderException: Binder Error: Referenced column "abcde" not found in FROM clause! Candidate bindings: "mytable.data" LINE 2: ...mytable SET data = NULL WHERE account = "abcde" ^ I have searched the documentation and the error but I just can't find what I am doing wrong here. | I actually solved the issue. I had to use single quotes ' instead of double quotes " in the string comparison: in SQL, double quotes denote identifiers such as column names, while single quotes denote string literals, which is why "abcde" was being looked up as a column. Solution duckdb.sql(""" UPDATE mytable SET data = NULL WHERE account = 'abcde' """) correctly does ┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────┐ │ account │ data │ │ varchar │ struct("name" varchar, amount bigint, flow varchar)[] │ ├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────┤ │ hijkl │ [{'name': bonjour, 'amount': 1, 'flow': INFLOW}, {'name': hallo, 'amount': -3, 'flow': OUTFLOW}] │ │ abcde │ NULL │ └─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────┘ Interestingly, ChatGPT helped me spot this mistake. (There is a ban on posting AI answers, but it's OK if they are human-verified). | 3 | 8 |
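A hedged side note (not part of the accepted answer): binding the value as a query parameter avoids the quoting pitfall entirely. The tiny table below is a stand-in for illustration, not the JSON-backed table from the question:

```python
import duckdb

con = duckdb.connect()
con.sql("CREATE OR REPLACE TABLE mytable AS SELECT 'abcde' AS account, 1 AS amount")
# The ? placeholder is always treated as a value, so no quoting style can be mixed up.
con.execute("UPDATE mytable SET amount = NULL WHERE account = ?", ["abcde"])
print(con.sql("SELECT * FROM mytable"))
```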
75,728,233 | 2023-3-14 | https://stackoverflow.com/questions/75728233/catch-overflow-error-in-numba-integer-multiplication | I am using numba and I would like to know if an overflow has occurred when I multiply two integers. Say the integers are positive for simplicity. I have written the following function to try and achieve this: from numba import njit import numpy as np @njit def safe_mul(a, b): c = a * b print('a: ', a) print('b: ', b) print('c: ', c) print('c//a: ', c//a) print('0//a: ', 0//a) if c // a != b or c // b != a: # do something else or raise error raise ValueError() return c @njit def safe_mul_2(a, b): if (np.log2(np.abs(a)) + np.log2(np.abs(b))) >= 63: \# do something else or raise error raise ValueError() return a * b print(safe_mul(2**21, 2**51)) The code prints: a: 2097152 b: 2251799813685248 c: 0 c//a: 2251799813685248 0//a: 0 0 safe_mul does not catch the overflow when 2**21 and 2**51 are passed in. Perhaps numba is compiling out the integer division since it knows c has just been multiplied by what it is being divided by? I am not sure about this since when you enter something where the arguments are not both powers of 2 then the error is caught. safe_mul_2 does catch the error, and is surprisingly not much slower than safe_mul (when the prints are removed). I would like to know what is happening in safe_mul and if something that is faster than safe_mul_2 can be written. | On the Numba discourse site, the user sschaer was able to provide a very fast solution. See this: https://numba.discourse.group/t/catch-overflow-in-integer-multiplication/1827 for the original discussion. The solution is copied here with sschaer's permission. LLVM has integer operations that return an overflow bit and they are already known to Numba's intermediate representation. The following functions exposes the multiplication operations: import numpy as np from numba import njit, types from numba import TypingError from numba.extending import intrinsic @intrinsic def mul_with_overflow(typingctx, a, b): if not (isinstance(a, types.Integer) and isinstance(b, types.Integer)): raise TypingError("both arguments must be integers") if a.signed != b.signed: raise TypingError("can only multiply integers of equal signedness") if a.signed: ext = lambda builder, a, b: builder.sext(a, b) mul = lambda builder, a, b: builder.smul_with_overflow(a, b) else: ext = lambda builder, a, b: builder.zext(a, b) mul = lambda builder, a, b: builder.umul_with_overflow(a, b) retint_ty = max(a, b, key=lambda ty: ty.bitwidth) sig = types.Tuple([retint_ty, types.boolean])(a, b) def codegen(context, builder, signature, args): int_ty = context.get_value_type(retint_ty) a = ext(builder, args[0], int_ty) b = ext(builder, args[1], int_ty) prod_and_flag = mul(builder, a, b) return prod_and_flag return sig, codegen @njit def foo(a, b): return mul_with_overflow(a, b) foo(np.int8(1), np.int32(-2)) # returns int32 foo(np.uint8(1), np.uint16(2)) # returns uint16 foo(np.int32(2), np.int64(np.iinfo(np.int64).max)) # overflow foo(np.uint64(1), np.int64(2)) # error foo(np.uint64(1), np.float32(2)) # error | 5 | 0 |
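To connect the accepted intrinsic back to the question's original goal, here is a sketch of a checked multiply that raises instead of wrapping; it assumes the `mul_with_overflow` intrinsic defined in the answer above is in scope.

```python
from numba import njit

@njit
def safe_mul(a, b):
    # mul_with_overflow is the intrinsic from the answer above;
    # it returns the product plus an overflow flag
    result, overflowed = mul_with_overflow(a, b)
    if overflowed:
        raise ValueError("integer multiplication overflowed")
    return result

print(safe_mul(2**21, 2**31))  # 2**52 fits in int64, returned normally
# safe_mul(2**21, 2**51)       # would raise ValueError instead of silently wrapping
```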
75,734,368 | 2023-3-14 | https://stackoverflow.com/questions/75734368/what-is-the-grid-size-parameter-in-shapely-operations-do | In a practical sense, what does the grid_size parameter do for you? When/why would you change it from the default? I understand from testing that it imposes a discretization on the coordinates of the resulting geometries, e.g. with grid_size=0.01 the fractional part of the coordinates will be multiples of 0.01. Does this play into the logic of the algorithms or is just a convenience for applications where, in the end, the user is going to discretize the coordinates anyway? | A practical reason why I use it sometimes is to avoid having slivers after applying overlays (in non-topological data). The code sample below illustrates this: the intersection between the 2 polygons without grid_size results in a narrow polygon as intersection. the intersection between the 2 polygons with grid_size results in a line, which is easy to filter out. import shapely import shapely.plotting import matplotlib.pyplot as plt poly1 = shapely.Polygon([(0, 0), (0, 10), (10, 10), (5, 0), (0, 0)]) poly2 = shapely.Polygon([(5, 0), (8, 7), (10, 7), (10, 0), (5, 0)]) intersection_nogridsize = poly1.intersection(poly2) intersection_gridsize = poly1.intersection(poly2, grid_size=1) shapely.plotting.plot_polygon(poly1, color="green") shapely.plotting.plot_polygon(poly2, color="blue") shapely.plotting.plot_polygon(intersection_nogridsize, color="red") plt.show() shapely.plotting.plot_polygon(poly1, color="green") shapely.plotting.plot_polygon(poly2, color="blue") shapely.plotting.plot_line(intersection_gridsize, color="red") plt.show() Result without gridsize (red intersection is a polygon): Result with gridsize (red intersection is a line): | 3 | 3 |
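Following on from the answer's point that a snapped-away sliver comes back as a lower-dimensional geometry, a small sketch of filtering such results after the overlay (same polygons as in the answer, Shapely 2.x):

```python
import shapely

poly1 = shapely.Polygon([(0, 0), (0, 10), (10, 10), (5, 0), (0, 0)])
poly2 = shapely.Polygon([(5, 0), (8, 7), (10, 7), (10, 0), (5, 0)])

result = poly1.intersection(poly2, grid_size=1)
# Keep only genuine polygonal overlap; snapped slivers (lines/points) are dropped.
if result.geom_type not in ("Polygon", "MultiPolygon"):
    result = shapely.Polygon()  # empty polygon instead of a sliver artefact
print(result.is_empty)  # True for this pair
```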
75,690,124 | 2023-3-9 | https://stackoverflow.com/questions/75690124/find-a-specific-concordance-index-using-nltk | I use this code below to get a concordance from nltk and then show the indices of each concordance. And I get these results show below. So far so good. How do I look up the index of just one specific concordance? It is easy enough to match the concordance to the index in this small example, but if I have 300 concordances, I want to find the index for one. .index doesn't take multiple items in a list as an argument. Can someone point me to the command/structure I should be using to get the indices to display with the concordances? I've attached an example below of a more useful result that goes outside nltk to get a separate list of indices. I'd like to combine those into one result, but how do I get there? import nltk nltk.download('punkt') from nltk.tokenize import sent_tokenize, word_tokenize from nltk.text import Text moby = open('mobydick.txt', 'r') moby_read = moby.read() moby_text = nltk.Text(nltk.word_tokenize(moby_read)) moby_text.concordance("monstrous") moby_indices = [index for (index, item) in enumerate(moby_text) if item == "monstrous"] print(moby_indices) Displaying 11 of 11 matches: ong the former , one was of a most monstrous size . ... This came towards us , N OF THE PSALMS . `` Touching that monstrous bulk of the whale or ork we have r ll over with a heathenish array of monstrous clubs and spears . Some were thick d as you gazed , and wondered what monstrous cannibal and savage could ever hav that has survived the flood ; most monstrous and most mountainous ! That Himmal they might scout at Moby Dick as a monstrous fable , or still worse and more de of Radney . ' '' CHAPTER 55 Of the Monstrous Pictures of Whales . I shall ere l ing Scenes . In connexion with the monstrous pictures of whales , I am strongly ere to enter upon those still more monstrous stories of them which are to be fo ght have been rummaged out of this monstrous cabinet there is no telling . But e of Whale-Bones ; for Whales of a monstrous size are oftentimes cast up dead u [858, 1124, 9359, 9417, 32173, 94151, 122253, 122269, 162203, 205095] I'd ideally like to have something like this. Displaying 11 of 11 matches: [858] ong the former , one was of a most monstrous size . ... This came towards us , [1124] N OF THE PSALMS . `` Touching that monstrous bulk of the whale or ork we have r [9359] ll over with a heathenish array of monstrous clubs and spears . Some were thick [9417] d as you gazed , and wondered what monstrous cannibal and savage could ever hav [32173] that has survived the flood ; most monstrous and most mountainous ! That Himmal [94151] they might scout at Moby Dick as a monstrous fable , or still worse and more de [122253] of Radney . ' '' CHAPTER 55 Of the Monstrous Pictures of Whales . I shall ere l [122269] ing Scenes . In connexion with the monstrous pictures of whales , I am strongly [162203] ere to enter upon those still more monstrous stories of them which are to be fo [162203] ght have been rummaged out of this monstrous cabinet there is no telling . But [205095] e of Whale-Bones ; for Whales of a monstrous size are oftentimes cast up dead u | We can use concordance_list function (https://www.nltk.org/api/nltk.text.html) so that we can specify the width and number of lines, and then iterate over lines getting the 'offset' (i.e. line number) and adding surrounding brackets '[' ']' plus roi (i.e. 
'monstrous') between the left and right words (of each line): some_text = open('/content/drive/My Drive/Colab Notebooks/DATA_FOLDERS/TEXT/mobydick.txt', 'r') roi = 'monstrous' moby_read = some_text.read() moby_text = nltk.Text(nltk.word_tokenize(moby_read)) moby_text = moby_text.concordance_list(roi, width=22, lines=1000) for line in moby_text: print('[' + str(line.offset) + '] ' + ' '.join(line.left) + ' ' + roi + ' ' + ' '.join(line.right)) or if you find this more readable (import numpy as np): for line in moby_text: print('[' + str(line.offset) + '] ', np.append(' '.join(np.append(np.array(line.left), roi)), np.array(' '.join(line.right)))) Outputs (my line numbers don't match yours because I used this source: https://gist.github.com/StevenClontz/4445774 which just has different spacing/line numbers): [494] 306 LV . OF THE monstrous PICTURES OF WHALES . [1385] one was of a most monstrous size . * * [1652] the Psalms. ' Touching that monstrous bulk of the whale [9874] with a heathenish array of monstrous clubs and spears . [9933] gazed , and wondered what monstrous cannibal and savage could [32736] survived the Flood ; most monstrous and most mountainous ! [95115] scout at Moby-Dick as a monstrous fable , or still [121328] '' CHAPTER LV OF THE monstrous PICTURES OF WHALES I [121991] this bookbinder 's fish an monstrous PICTURES OF WHALES 333 [122749] same field , Desmarest , monstrous PICTURES OF WHALES 335 [123525] SCENES IN connection with the monstrous pictures of whales , [123541] enter upon those still more monstrous stories of them which If we want to consider punctuation and all that, we can do something like: for line in moby_text: left_words = [left_word for left_word in line.left] right_words = [right_word for right_word in line.right] return_text = '[' + str(line.offset) + '] ' for word in left_words: if any([word == '.', word == ',', word == ';', word == '!']): return_text += word else: return_text += ' ' + word if return_text[-1] != ' ' else word return_text += roi + ' ' for word in right_words: if any([word == '.', word == ',', word == ';', word == '!']): return_text += word else: return_text += ' ' + word if return_text[-1] != ' ' else word print(return_text) Outputs: [494] 306 LV. OF THE monstrous PICTURES OF WHALES. [1385] one was of a most monstrous size. * * [1652] the Psalms.' Touching that monstrous bulk of the whale [9874] with a heathenish array of monstrous clubs and spears. [9933] gazed, and wondered what monstrous cannibal and savage could [32736] survived the Flood; most monstrous and most mountainous! [95115] scout at Moby-Dick as a monstrous fable, or still [121328] '' CHAPTER LV OF THE monstrous PICTURES OF WHALES I [121991] this bookbinder 's fish an monstrous PICTURES OF WHALES 333 [122749] same field, Desmarest, monstrous PICTURES OF WHALES 335 [123525] SCENES IN connection with the monstrous pictures of whales, [123541] enter upon those still more monstrous stories of them which but you may have to tweak it as I didn't put a lot of thought into the different contexts that may arise (e.g. '*', numbers, chapter titles in ALL-CAPS, roman numerals, etc.) and this is more up to you for how you want the output text to look like--I'm just providing an example. 
Note: width in the concordance_list function refers to the max length of the next left (and right) word, so if we set it to 4 the first line would print: [494] THE monstrous because len('THE ') is 4, while setting it to 3 would cut off 'THE', the next left word of 'monstrous': [494] monstrous Meanwhile, lines in the concordance_list function refers to the max number of lines, so if we want only the first two lines containing 'monstrous' (i.e. moby_text.concordance_list(..., lines=2)): [494] 306 LV . OF THE monstrous PICTURES OF WHALES . [1385] one was of a most monstrous size . * * | 5 | 2 |
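Tying this back to the original question (looking up the index of one specific concordance), a short sketch that keeps the ConcordanceLine objects around and filters them. It assumes `text` is the nltk.Text built from the tokenized novel (i.e. before the reassignment in the snippet above), and picking out the hit followed by 'bulk' is just an example of selecting a single match.

```python
hits = text.concordance_list('monstrous', width=80, lines=1000)
offsets = [hit.offset for hit in hits]
print(offsets)  # every token offset at which 'monstrous' occurs

# Look up one specific concordance, e.g. the hit directly followed by 'bulk':
wanted = [hit.offset for hit in hits if hit.right and hit.right[0] == 'bulk']
print(wanted)
```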
75,671,456 | 2023-3-8 | https://stackoverflow.com/questions/75671456/error-installing-pyqt5-under-aarch64-architecture | I'm trying to install pyqt5 V5.15.2 on an emulate qemu aarch64 debian distro, but it fails with the following trace: root@debian-arm64:~# pip install pyqt5==5.15.2 --config-settings --confirm-license= --verbose Using pip 23.0.1 from /usr/local/lib/python3.9/dist-packages/pip (python 3.9) Collecting pyqt5==5.15.2 Using cached PyQt5-5.15.2.tar.gz (3.3 MB) Running command pip subprocess to install build dependencies Collecting sip<7,>=5.3 Using cached sip-6.7.7-cp37-abi3-linux_aarch64.whl Collecting PyQt-builder<2,>=1.6 Using cached PyQt_builder-1.14.1-py3-none-any.whl (3.7 MB) Collecting toml Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB) Collecting packaging Using cached packaging-23.0-py3-none-any.whl (42 kB) Collecting ply Using cached ply-3.11-py2.py3-none-any.whl (49 kB) Collecting setuptools Using cached setuptools-67.5.1-py3-none-any.whl (1.1 MB) Installing collected packages: ply, toml, setuptools, packaging, sip, PyQt-builder Successfully installed PyQt-builder-1.14.1 packaging-23.0 ply-3.11 setuptools-67.5.1 sip-6.7.7 toml-0.10.2 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv Installing build dependencies ... done Running command Getting requirements to build wheel Getting requirements to build wheel ... done Running command Preparing metadata (pyproject.toml) Querying qmake about your Qt installation... This is the GPL version of PyQt 5.15.2 (licensed under the GNU General Public License) for Python 3.9.2 on linux. Found the license file 'pyqt-gpl.sip'. Checking to see if the QtCore bindings can be built... Checking to see if the QtNetwork bindings can be built... Checking to see if the QtGui bindings can be built... Checking to see if the QtWidgets bindings can be built... Checking to see if the QtQml bindings can be built... Checking to see if the QAxContainer bindings can be built... Checking to see if the QtAndroidExtras bindings can be built... Checking to see if the QtBluetooth bindings can be built... Checking to see if the QtDBus bindings can be built... Checking to see if the QtDesigner bindings can be built... Checking to see if the Enginio bindings can be built... Checking to see if the QtHelp bindings can be built... Checking to see if the QtMacExtras bindings can be built... Checking to see if the QtMultimedia bindings can be built... Checking to see if the QtMultimediaWidgets bindings can be built... Checking to see if the QtNetworkAuth bindings can be built... Checking to see if the QtNfc bindings can be built... Checking to see if the QtOpenGL bindings can be built... Checking to see if the QtPositioning bindings can be built... Checking to see if the QtLocation bindings can be built... Checking to see if the QtPrintSupport bindings can be built... Checking to see if the QtQuick bindings can be built... Checking to see if the QtQuick3D bindings can be built... Checking to see if the QtQuickWidgets bindings can be built... Checking to see if the QtRemoteObjects bindings can be built... Checking to see if the QtSensors bindings can be built... Checking to see if the QtSerialPort bindings can be built... Checking to see if the QtSql bindings can be built... Checking to see if the QtSvg bindings can be built... Checking to see if the QtTest bindings can be built... 
Checking to see if the QtTextToSpeech bindings can be built... Checking to see if the QtWebChannel bindings can be built... Checking to see if the QtWebKit bindings can be built... Checking to see if the QtWebKitWidgets bindings can be built... Checking to see if the QtWebSockets bindings can be built... Checking to see if the QtWinExtras bindings can be built... Checking to see if the QtX11Extras bindings can be built... Checking to see if the QtXml bindings can be built... Checking to see if the QtXmlPatterns bindings can be built... Checking to see if the _QOpenGLFunctions_2_0 bindings can be built... Checking to see if the _QOpenGLFunctions_2_1 bindings can be built... Checking to see if the _QOpenGLFunctions_4_1_Core bindings can be built... Checking to see if the dbus-python support should be built... The dbus-python package does not seem to be installed. These bindings will be built: Qt, QtCore, QtNetwork, QtGui, QtWidgets, QtDBus, QtOpenGL, QtPrintSupport, QtSql, QtTest, QtXml, _QOpenGLFunctions_2_0, _QOpenGLFunctions_2_1, _QOpenGLFunctions_4_1_Core, pylupdate, pyrcc. Generating the Qt bindings... _in_process.py: /tmp/pip-install-m6oiyjbv/pyqt5_1965ddd1193045bab17bb1f59ff08aa1/sip/QtCore/qprocess.sip: line 99: column 5: 'Q_PID' is undefined error: subprocess-exited-with-error Γ Preparing metadata (pyproject.toml) did not run successfully. β exit code: 1 β°β> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. full command: /usr/bin/python3 /usr/local/lib/python3.9/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpo3qo2dlh cwd: /tmp/pip-install-m6oiyjbv/pyqt5_1965ddd1193045bab17bb1f59ff08aa1 Preparing metadata (pyproject.toml) ... error error: metadata-generation-failed Γ Encountered error while generating package metadata. β°β> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. The only 2 stuff I'm getting warn about are: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv The dbus-python package does not seem to be installed. Speaking about the first one, every package installation is printing it: I don't care, that's just an emulated test environment. Speaking about the second one, I've installed libdbus-1-3 and libdbus-1-dev and ran both pip install dbus-python (which seems to be a deprecated module) and dbus-next, so i don't know what still missing there. ISSUE Apparently a Q_PID var in qprocess.sip seems to be undefined _in_process.py: /tmp/pip-install-m6oiyjbv/pyqt5_1965ddd1193045bab17bb1f59ff08aa1/sip/QtCore/qprocess.sip: line 99: column 5: 'Q_PID' is undefined QUESTION What am I doing wrong? How should I fix this error? 
EXTRA INFOS List of installed packages: root@debian-arm64:~# pip list Package Version ------------------------- -------------- altgraph 0.17.3 certifi 2020.6.20 chardet 4.0.0 dbus-next 0.2.3 dbus-python 1.3.2 httplib2 0.18.1 idna 2.10 Mako 1.1.3 Markdown 3.3.4 MarkupSafe 1.1.1 packaging 23.0 pip 23.0.1 ply 3.11 pycurl 7.43.0.6 Pygments 2.7.1 pyinstaller 5.8.0 pyinstaller-hooks-contrib 2023.0 PySimpleSOAP 1.16.2 python-apt 2.2.1 python-debian 0.1.39 python-debianbts 3.1.0 PyYAML 5.3.1 reportbug 7.10.3+deb11u1 requests 2.25.1 setuptools 67.5.1 six 1.16.0 toml 0.10.2 urllib3 1.26.5 wheel 0.34.2 Python environment: root@debian-arm64:~# python3 --version Python 3.9.2 root@debian-arm64:~# pip --version pip 23.0.1 from /usr/local/lib/python3.9/dist-packages/pip (python 3.9) OS specs (debian-11.6.0-arm64): root@debian-arm64:~# uname -a Linux debian-arm64 5.10.0-21-arm64 #1 SMP Debian 5.10.162-1 (2023-01-21) aarch64 GNU/Linux Are you guys able to help there? Thanks in advance, Hele. | I founded the following solution that worked for me: Instead of using pip for installing PyQt5, a PyQt5 package exists. Installing the package from apt worked for me. sudo apt-get install python3-PyQt5 | 3 | 1 |
75,719,072 | 2023-3-13 | https://stackoverflow.com/questions/75719072/dynamically-set-sql-default-value-with-table-name-in-sqlmodel | I'm trying to create a base-class in SQLModel which looks like this: class BaseModel(SQLModel): @declared_attr def __tablename__(cls) -> str: return cls.__name__ guid: Optional[UUID] = Field(default=None, primary_key=True) class SequencedBaseModel(BaseModel): sequence_id: str = Field(sa_column=Column(VARCHAR(50), server_default=text(f"SELECT '{TABLENAME}_' + convert(varchar(10), NEXT VALUE FOR dbo.sequence)"))) so I got a table like this: class Project(SequencedBaseModel): ... where alembic would generate a migration for a table Project with columns guid and sequence_id. The default-value for sequence-id is a sequence which is generated with SELECT '{TABLENAME}_' + convert(varchar(10), NEXT VALUE FOR dbo.sequence) and should insert into project-table the values Project_1, Project_2, ... Any idea on how to set the tablename dynamically? I cannot use a constructor for setting the columns because alembic is ignoring them, I cannot access the __tablename__() function, or cls, because the columns are static... | Unfortunately, if you have an attribute that relies on a @declared_attr, it must also be a @declared_attr, since SqlAlchemy will wait until the whole mapping is completed and the classes get actual tables to be resolved (at least, that is my understanding). Now: declared_attr(s) are an SqlAlchemy concept, whereas the idea of Field is an SQLModel concept and they don't seem to talk to each other regarding this sort of """deferred""" attribute thingy ("""deferred""" in the sense that it won't be evaluated until the mapping is completed and the tables known, not deferred as in waiting until it's accessed)... at least, not that I know. You could (maybe? hopefully?) do something like what's recommended in this SQLModel GitHub issue: """Defer""" the SqlAlchemy Column and alias the SQLModel field: class SequencedBaseModel(BaseModel): sequence_id: str = Field(alias="sequence_id") @declared_attr def sequence_id(cls): return Column( 'sequence_id', VARCHAR(50), server_default=text(f"SELECT '{cls.__tablename__}_'" f" + convert(varchar(10), NEXT VALUE FOR dbo.sequence)")) class Project(SequencedBaseModel, table=True): pass Running alembic revision --autogenerate -m "init" will produce a migration file with the proper __tablename__ + '_' (meaning: Product_) expanded in the server_default's SELECT...: def upgrade() -> None: op.create_table('Project', sa.Column('guid', sqlmodel.sql.sqltypes.GUID(), nullable=False), sa.Column('sequence_id', sa.VARCHAR(length=50), server_default=sa.text("SELECT 'Project_' + convert(varchar(10), NEXT VALUE FOR dbo.sequence)"), nullable=True), sa.PrimaryKeyConstraint('guid'), # ... # ... This assumes your alembic environment has been properly configured. I can't help but pointing out that alembic will generate the migration using sqlmodel.sql.sqltypes.GUID() as the column type for the guid attribute, so you'll need to make sure the package sqlmodel is imported on each migration file. Probably by editing the template script.py.mako as described in this link where it shows that you must add import sqlmodel # NEW . β οΈ I tried to test this, but I don't exactly know where dbo.sequence comes from (a SQL server, maybe?). I used a PostgreSQL sequence (which I christened so75719072) to simulate it. This means that I can't confirm whether the syntax for the DEFAULT SELECT... will be valid in your situation. 
I'm quite suspicious about you being able to use the result of a SELECT as default value for a column but hopefully I'm wrong. import uuid from typing import Optional from uuid import UUID from sqlalchemy import Column, VARCHAR, text from sqlalchemy.orm import declared_attr from sqlmodel import SQLModel, Field, create_engine, Session, select class BaseModel(SQLModel): __table_args__ = {'schema': 'SO-75719072'} @declared_attr def __tablename__(cls) -> str: return cls.__name__ guid: Optional[UUID] = Field(default=None, primary_key=True) class SequencedBaseModel(BaseModel): sequence_id: str = Field(alias="sequence_id") @declared_attr def sequence_id(cls): return Column( 'sequence_id', VARCHAR(50), server_default=text(f"nextval('so75719072')")) class Project(SequencedBaseModel, table=True): pass if __name__ == "__main__": engine = create_engine( "postgresql+psycopg2://postgres:postgrespw@localhost:32768/stackoverflow") with Session(engine) as session: for i in range(3): proj1 = Project(guid=uuid.uuid4()) session.add(proj1) session.commit() with Session(engine) as session: statement = select(Project).where(Project.sequence_id.in_(["1", "2", "3"])) for project in session.exec(statement): print(f"guid: {project.guid}") Produces the following output: guid: c5e5902d-e224-48f1-95f5-fa47a73f7b05 guid: 1c25550b-258c-49c5-9acc-90ae7ad8460c guid: eb84e90c-9449-4974-8eb4-bad98728b0f9 Which came from the following table in Postgres: # select * from "SO-75719072"."Project"; guid | sequence_id --------------------------------------+------------- c5e5902d-e224-48f1-95f5-fa47a73f7b05 | 1 1c25550b-258c-49c5-9acc-90ae7ad8460c | 2 eb84e90c-9449-4974-8eb4-bad98728b0f9 | 3 (3 rows) | 5 | 4 |
75,727,494 | 2023-3-13 | https://stackoverflow.com/questions/75727494/how-can-i-code-vuongs-statistical-test-in-python | I need to implement Vuong's test for non-nested models. Specifically, I have logistic-regression models that I would like to compare. I have found implementations in R and STATA online, but unfortunately I work in Python and am not familiar with those frameworks/languages. Also unfortunate is that I have not been unable to find a Python implementation of the test (if someone knows of one that would be fantastic!). I asked on Cross Validated and was redirected to here. After scouring the original paper and a few other pages that I found (references below) I think I've managed to port the R implementation from the pscl. Example code is below. Is there someone who is fluent in both Python and R who could help confirm that the below code reproduces the R implementation? And, if not, help determine where I erred? In addition to helping with my immediate problem, perhaps this can help someone in the future. References consulted: Original Vuong Paper. See page 318 in particular Paper on Misuse R pscl Implementation Wikipedia Matlab User's Implementation Stata User's Implementation Example code with current implementation and use for logistic regression fits of dummy data. Note that this is a stock dataset and the predictor variables were arbitrarily selected. import numpy as np import pandas as pd import statsmodels.api as sm from scipy.stats import norm from sklearn.datasets import load_breast_cancer def vuong_test(mod1, mod2, correction=True): ''' mod1, mod2 - non-nested logitstic regression fit results from statsmodels ''' # number of observations and check of models N = mod1.nobs N2 = mod2.nobs if N != N2: raise ValueError('Models do not have the same number of observations') # extract the log-likelihood for individual points with the models m1 = mod1.model.loglikeobs(mod1.params) m2 = mod2.model.loglikeobs(mod2.params) # point-wise log likelihood ratio m = m1 - m2 # calculate the LR statistic LR = np.sum(m) # calculate the AIC and BIC correction factors -> these go to zero when df is same between models AICcor = mod1.df_model - mod2.df_model BICcor = np.log(N)*AICcor/2 # calculate the omega^2 term omega2 = np.var(m) # calculate the Z statistic with and without corrections Zs = np.array([LR,LR-AICcor,LR-BICcor]) Zs /= np.sqrt(N*omega2) # calculate the p-value ps = [] msgs = [] for Z in Zs: if Z>0: ps.append(1 - norm.cdf(Z)) msgs.append('model 1 preferred over model 2') else: ps.append(norm.cdf(Z)) msgs.append('model 2 preferred over model 1') # share information print('=== Vuong Test Results ===') labs = ['Uncorrected'] if AICcor!=0: labs += ['AIC Corrected','BIC Corrected'] for lab,msg,p,Z in zip(labs,msgs,ps,Zs): print(' -> '+lab) print(' -> '+msg) print(' -> Z: '+str(Z)) print(' -> p: '+str(p)) # load sample data X,y = load_breast_cancer( return_X_y=True, as_frame=True) # create data for modeling X1 = sm.add_constant( X.loc[:,('mean radius','perimeter error','worst symmetry')]) X2 = sm.add_constant( X.loc[:,('mean area','worst smoothness')]) # fit the models mod1 = sm.Logit( y, X1).fit() mod2 = sm.Logit( y, X2).fit() # run Vuong's Test vuong_test( mod1, mod2) Output Optimization terminated successfully. Current function value: 0.172071 Iterations 9 Optimization terminated successfully. 
Current function value: 0.156130 Iterations 9 === Vuong Test Results === -> Uncorrected -> model 2 preferred over model 1 -> Z: -0.7859538767172318 -> p: 0.21594725450542168 -> AIC Corrected -> model 2 preferred over model 1 -> Z: -0.8726045081599004 -> p: 0.19143934123980472 -> BIC Corrected -> model 2 preferred over model 1 -> Z: -1.0608044994241501 -> p: 0.14438937850119132 | In order to move forward, I learned enough R to be able to make my own comparison between R and python for Vuong's Test as below. In the end, my original implementation was close except for the subtle point of difference between how numpy and R calculate standard deviations by default. After correcting to now calculate the sample standard deviation (by changing np.var(m) to np.var(m,ddof=1)), I get a match between pscl and my own python code. Updated Python implementation import numpy as np import pandas as pd import statsmodels.api as sm from scipy.stats import norm from sklearn.datasets import load_breast_cancer def vuong_test(mod1, mod2, correction=True): ''' mod1, mod2 - non-nested logitstic regression fit results from statsmodels ''' # number of observations and check of models N = mod1.nobs N2 = mod2.nobs if N != N2: raise ValueError('Models do not have the same number of observations') # extract the log-likelihood for individual points with the models m1 = mod1.model.loglikeobs(mod1.params) m2 = mod2.model.loglikeobs(mod2.params) # point-wise log likelihood ratio m = m1 - m2 # calculate the LR statistic LR = np.sum(m) # calculate the AIC and BIC correction factors -> these go to zero when df is same between models AICcor = mod1.df_model - mod2.df_model BICcor = np.log(N)*AICcor/2 # calculate the omega^2 term omega2 = np.var(m, ddof=1) # calculate the Z statistic with and without corrections Zs = np.array([LR,LR-AICcor,LR-BICcor]) Zs /= np.sqrt(N*omega2) # calculate the p-value ps = [] msgs = [] for Z in Zs: if Z>0: ps.append(1 - norm.cdf(Z)) msgs.append('model 1 preferred over model 2') else: ps.append(norm.cdf(Z)) msgs.append('model 2 preferred over model 1') # share information print('=== Vuong Test Results ===') labs = ['Uncorrected'] if AICcor!=0: labs += ['AIC Corrected','BIC Corrected'] for lab,msg,p,Z in zip(labs,msgs,ps,Zs): print(' -> '+lab) print(' -> '+msg) print(' -> Z: '+str(Z)) print(' -> p: '+str(p)) # load sample data X,y = load_breast_cancer( return_X_y=True, as_frame=True) # create data for modeling X1 = sm.add_constant( X.loc[:,('mean radius','perimeter error','worst symmetry')]) X2 = sm.add_constant( X.loc[:,('mean area','worst smoothness')]) # fit the models mod1 = sm.Logit( y, X1).fit() mod2 = sm.Logit( y, X2).fit() # run Vuong's Test vuong_test( mod1, mod2) # save data for R test function pd.concat( [X,y], axis=1).to_csv('breast_cancer_data.csv') With output Optimization terminated successfully. Current function value: 0.172071 Iterations 9 Optimization terminated successfully. 
Current function value: 0.156130 Iterations 9 === Vuong Test Results === -> Uncorrected -> model 2 preferred over model 1 -> Z: -0.7852629281206245 -> p: 0.21614971334597216 -> AIC Corrected -> model 2 preferred over model 1 -> Z: -0.8718373831692781 -> p: 0.19164854872682913 -> BIC Corrected -> model 2 preferred over model 1 -> Z: -1.0598719238597758 -> p: 0.1446014350740505 Comparison R code # load library library(pscl) # load breast cancer data set bcdata <- read.csv('breast_cancer_data.csv') # build logistic regression models mod1 <- glm( target ~ mean.radius + perimeter.error + worst.symmetry, data=bcdata, family="binomial") mod2 <- glm( target ~ mean.area + worst.smoothness, data=bcdata, family="binomial") # compare the models vuong( mod1, mod2) With output Vuong Non-Nested Hypothesis Test-Statistic: (test-statistic is asymptotically distributed N(0,1) under the null that the models are indistinguishible) ------------------------------------------------------------- Vuong z-statistic H_A p-value Raw -0.7852629 model2 > model1 0.21615 AIC-corrected -0.8718374 model2 > model1 0.19165 BIC-corrected -1.0598719 model2 > model1 0.14460 Warning messages: 1: glm.fit: fitted probabilities numerically 0 or 1 occurred 2: glm.fit: fitted probabilities numerically 0 or 1 occurred | 3 | 3 |
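The only substantive change between the two Python implementations above is the variance estimator, so a tiny worked example of that difference may help (numbers chosen for easy arithmetic):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.var(x))          # 1.25               -- population variance (ddof=0), numpy's default
print(np.var(x, ddof=1))  # 1.6666666666666667 -- sample variance, what R's var() computes
```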
75,700,322 | 2023-3-10 | https://stackoverflow.com/questions/75700322/when-is-d1-d2-not-equivalent-to-d1-eq-d2 | According to the docs (in Python 3.8): By default, object implements __eq__() by using is, returning NotImplemented in the case of a false comparison: True if x is y else NotImplemented. And also: The correspondence between operator symbols and method names is as follows: [...] x==y calls x.__eq__(y) So I expect == to be equivalent to __eq__() and a custom class without an explicitly defined __eq__ to return NotImplemented when using == to compare two different instances of the class. Yet in the following, == comparison returns False, while __eq__() returns NotImplemented: class Dummy(): def __init__(self, a): self.a = a d1 = Dummy(3) d2 = Dummy(3) d1 == d2 # False d1.__eq__(d2) # NotImplemented Why? | == calls the right operand's .__eq__() if the left operand's .__eq__() returns NotImplemented and if both sides return NotImplemented, == will return false. You can see this behavior by changing your class: class Dummy(): def __eq__(self, o): print(f'{id(self)} eq') return NotImplemented d1 = Dummy() d2 = Dummy() print(d1 == d2) # 2180936917136 eq # 2180936917200 eq # False This is a common behavior for operators where Python test if the left operand have an implementation, if not, Python call the same or reflection (e.g. if x.__lt__(y) is not implemented y.__gt__(x) is called) operator of the right operand. class Dummy: def __eq__(self, o): print(f"{id(self)} eq") return NotImplemented def __lt__(self, o): """reflexion of gt""" print(f"{id(self)} lt") return NotImplemented def __gt__(self, o): """reflexion of lt""" print(f"{id(self)} gt") return False def __le__(self, o): """reflexion of ge""" print(f"{id(self)} le") return NotImplemented def __ge__(self, o): """reflexion of le""" print(f"{id(self)} ge") return False d1 = Dummy() d2 = Dummy() print(d1 == d2) # 2480053379984 eq # 2480053380688 eq # False print(d1 < d2) # 2480053379984 lt # 2480053380688 gt # False print(d1 <= d2) # 2480053379984 le # 2480053380688 ge # False ! Exception to the left before right: If the operands are of different types, and right operandβs type is a direct or indirect subclass of the left operandβs type, the reflected method of the right operand has priority, otherwise the left operandβs method has priority. | 4 | 2 |
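One more case worth illustrating: when both `__eq__` calls return NotImplemented, `==` makes a final fallback to an identity comparison, so an object still compares equal to itself even though calling `__eq__` directly never does.

```python
class Dummy:
    def __eq__(self, other):
        return NotImplemented

d1 = Dummy()
d2 = Dummy()
print(d1 == d2)       # False -- both sides returned NotImplemented and d1 is not d2
print(d1 == d1)       # True  -- the final fallback compares identity
print(d1.__eq__(d1))  # NotImplemented -- the direct call never reaches that fallback
```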
75,725,718 | 2023-3-13 | https://stackoverflow.com/questions/75725718/redirect-to-a-url-in-dash | I'm using dash to build a dashboard where I am creating a unique url whenever a particular data point is clicked, how can I redirect users to this created url? I'm using the below given code where whenever someone click on any data points, click event will trigger and callback function executes. app.layout = html.Div(children=[ html.H1(children='Plot'), dcc.Graph(id='example-graph', figure = plot()), html.Div(id='dummy-div') ]) @app.callback(Output('dummy-div', 'childern'), Input('graph', 'clickData')) def redirect_to_url(clickData): triggered_id = dash.callback_context.triggered[0]['prop_id'].split('.')[0] if triggered_id=='graph' and clickData is not None: url = 'https:www.example.com' # need to develop to a method to redirect to this url | With a clienside callback, using window.location : app.clientside_callback( """ function(clickData) { if (clickData?.points?.length) { const point = clickData['points'][0]; const path = point.customdata ?? 'fallbackPath'; const url = `https://example.com/${path}`; window.location = url; } } """, Output('dummy-div', 'children'), Input('example-graph', 'clickData'), prevent_initial_call=True # ) Nb. This option also allows to open the target page in a new tab, for this just replace the line window.location = url with : window.open(url, '_blank'); Or, by adding a dcc.Location() component in the layout, you can then update its href property using a basic callback : app.layout = html.Div(children=[ dcc.Location(id='location'), html.H1(children='Plot'), dcc.Graph(id='example-graph', figure=plot()), html.Div(id='dummy-div') ]) @app.callback(Output('location', 'href'), Input('example-graph', 'clickData'), prevent_initial_call=True) def redirect_to_url(clickData): # do something with clickData url = 'https:www.example.com' return url A few things to note : You don't need to check the trigger id if you have only one Input() (ie. the condition if triggered_id=='graph' is not necessary) Ensure your Input/Output ids match what you have in the layout ('graph' vs 'example-graph') Redirecting the user "whenever a particular data point is clicked" implies the condition clickData is not None instead of clickData is None | 3 | 4 |
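For the dcc.Location variant above, a hedged sketch of what the "# do something with clickData" step could look like; it assumes the `app` and layout from the answer, and the 'customdata' key only exists if the figure was built with per-point customdata, so the fallback to the point index is purely illustrative.

```python
from dash import Input, Output

@app.callback(Output('location', 'href'),
              Input('example-graph', 'clickData'),
              prevent_initial_call=True)
def redirect_to_url(clickData):
    point = clickData['points'][0]
    # assumes per-point customdata was set when building the figure;
    # otherwise fall back to the clicked point's index
    path = point.get('customdata', point.get('pointIndex', ''))
    return f'https://www.example.com/{path}'
```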
75,719,802 | 2023-3-13 | https://stackoverflow.com/questions/75719802/test-code-and-branch-coverage-simultanously-with-pytest | I am using pytest to test my Python code. To test for code coverage (C0 coverage) I run pytest --cov and I can specify my desired coverage in my pyproject.toml file like this: [tool.coverage.report] fail_under = 95 I get this result with a coverage a 96.30%: ---------- coverage: platform linux, python 3.8.13-final-0 ----------- Name Stmts Miss Cover ------------------------------------------------------------------------------------------------ ..................................... Required test coverage of 95.0% reached. Total coverage: 96.30% To test for branch coverage (C1 coverage) I run pytest --cov --cov-branch. I get this result with a coverage of 95.44%: ---------- coverage: platform linux, python 3.8.13-final-0 ----------- Name Stmts Miss Branch BrPart Cover -------------------------------------------------------------------------------------------------------------- ..................................... Required test coverage of 95.0% reached. Total coverage: 95.44% I get two different coverage values, so I am testing two different coverage instances. What I would like to do is be able to test for code coverage AND branch coverage with the same command, and also be able to specify two different required coverages. For now, all I can do is execute pytest two times, with two disadvantages: I have to run my tests 2 times, so it takes twice as long. I am limited to the same required coverage for both. | My final solution uses pytest to execute the tests, coverage to generate coverage reports and coverage-threshold to interpret the results. From coverage-theshold's README: A command line tool for checking coverage reports against configurable coverage minimums. Currently built for use around python's coverage. Tools to install are: pytest (you can also use another test runner), coverage, coverage-threshold. I execute: coverage run --branch -m pytest src/ # the --branch argument enables branch coverage coverage json coverage-threshold My pyproject.toml contains the following lines to configure: [coverage-threshold] line_coverage_min = 90 file_line_coverage_min = 90 branch_coverage_min = 80 file_branch_coverage_min = 80 In order: line_coverage_min sets a threshold for the line coverage. file_line_coverage_min sets a threshold for the line coverage to be respected by each file. branch_coverage_min sets a threshold for the branch coverage. file_branch_coverage_min sets a threshold for the branch coverage to be respected by each file. 
The CLI options are: > coverage-threshold --help usage: coverage-threshold [-h] [--line-coverage-min LINE_COVERAGE_MIN] [--branch-coverage-min BRANCH_COVERAGE_MIN] [--combined-coverage-min COMBINED_COVERAGE_MIN] [--file-line-coverage-min FILE_LINE_COVERAGE_MIN] [--file-branch-coverage-min FILE_BRANCH_COVERAGE_MIN] [--file-combined-coverage-min FILE_COMBINED_COVERAGE_MIN] [--coverage-json COVERAGE_JSON] [--config CONFIG] A command line tool for checking coverage reports against configurable coverage minimums optional arguments: -h, --help show this help message and exit --line-coverage-min LINE_COVERAGE_MIN minimum global average line coverage threshold --branch-coverage-min BRANCH_COVERAGE_MIN minimum global average branch coverage threshold --combined-coverage-min COMBINED_COVERAGE_MIN minimum global average combined line and branch coverage threshold --file-line-coverage-min FILE_LINE_COVERAGE_MIN the line coverage threshold for each file --file-branch-coverage-min FILE_BRANCH_COVERAGE_MIN the branch coverage threshold for each file --file-combined-coverage-min FILE_COMBINED_COVERAGE_MIN the combined line and branch coverage threshold for each file --coverage-json COVERAGE_JSON path to coverage json (default: ./coverage.json) --config CONFIG path to config file (default: ./pyproject.toml) | 5 | 5 |
75,727,685 | 2023-3-13 | https://stackoverflow.com/questions/75727685/how-do-i-get-a-list-of-table-like-objects-visible-to-duckdb-in-a-python-session | I like how duckdb lets me query DataFrames as if they were sql tables: df = pandas.read_parquet("my_data.parquet") con.query("select * from df limit 10").fetch_df() I also like how duckdb has metadata commands like SHOW TABLES;, like a real database. However, SHOW TABLES; doesn't show pandas DataFrames or other table-like objects. my question is: does duckdb offer something like SHOW TABLES; that includes both (1) real database tables and (2) table-like objects (e.g. pandas DataFrames) and their schemas? Thanks! | You can use the different metadata table functions duckdb_% as referred here For an equivalent of SHOW TABLES and convert it as a pandas dataframe import duckdb df = duckdb.sql("SELECT * FROM duckdb_tables;").df() print(df.dtypes) database_name object database_oid int64 schema_name object schema_oid int64 table_name object table_oid int64 internal bool temporary bool has_primary_key bool estimated_size int64 column_count int64 index_count int64 check_constraint_count int64 sql object dtype: object Note : I'm using the latest version of duckDB v0.7.1 | 3 | 8 |
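Regarding the second half of the question (seeing DataFrames too): explicitly registering a DataFrame exposes it to the catalog as a view, so it can be listed next to real tables. A sketch under that assumption; exact behaviour may vary with the DuckDB version.

```python
import duckdb
import pandas as pd

con = duckdb.connect()
con.sql("CREATE TABLE real_table (a INTEGER)")

df = pd.DataFrame({"x": [1, 2, 3]})
con.register("df_view", df)  # exposes the DataFrame to the catalog as a view

print(con.sql("SHOW TABLES").df())                   # should list real_table and df_view
print(con.sql("SELECT * FROM duckdb_views()").df())  # registered frames appear as views
```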
75,695,118 | 2023-3-10 | https://stackoverflow.com/questions/75695118/how-to-avoid-reading-half-written-arrays-spanning-multiple-chunks-using-zarr | In a multiprocess situation, I want to avoid reading arrays from a zarr group that haven't fully finished writing by the other process yet. This functionality does not seem to come out of the box with zarr. While chunk writing is atomic in zarr, array writing seems not to be (i.e. while you can never have a half-written chunk, you can have a half-written array if said array spans multiple chunks). In my concrete example, one process is writing to the position group. This group contains a 1D array with a chunksize of 100. All goes well if the array I'm writing is smaller than this chunksize. Larger arrays will be written into several chunks, but not all of them are written simultaneously. A parallel process may then try to read the array and find only a first chunk. Zarr then blithely returns an array of 100 elements. Milliseconds later, the 2nd chunk is written, and a subsequent opening of the group now yields 200 elements. I can identify a number of solutions: A store/group lock which must be acquired before writing or reading the entire array. This works, but makes concurrent writing and reading a lot harder because chunk-level locking is better than group/store-level locking. For simple 1D arrays that are write once/read many, that's enough. A store/group lock that does not allow reading the entire array while the array is write-locked. I don't know if such read/write locks exist in zarr, or if I should brew my own using the fasteners library. Again for more complex N-D arrays this means loss of performance. Adjust my write/read code to obtain a lock based on the region to write or read (the lock key could be composed of the indices to write or chunks to write). This would have better performance but it seems absurd that this isn't out-of-the-box supported by zarr. The zarr docs are a bit too succinct and don't delve very deep into the concept of synchronisation/locking, so maybe I'm just missing something. | You can't really get around using some form of synchronization. Your best bet is to communicate from the writer process to the reader processes that something is ready to be consumed. A simple multiprocessing.Queue would work if you are forking from the same process and each worker will operate on the array themselves. If multiple workers are coordinating to consume the array in parallel a multiprocessing.Barrier or multiprocessing.Event might work better. There does appear to be some built in synchronization in the zarr.core class, synchronization. Looking at the source code it appears that zarr locks per chunk, so you'll need to do something to coordinate your reads and writes. | 4 | 3 |
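An illustrative sketch of the Event-based coordination suggested above, with the zarr-specific reads and writes elided; any equivalent signal (Queue, Barrier) works the same way.

```python
import multiprocessing as mp

def writer(done):
    # ... write the whole array to the zarr group here ...
    done.set()   # signal that every chunk has been flushed

def reader(done):
    done.wait()  # block until the writer reports the array is complete
    # ... now it is safe to open the group and read the full array ...

if __name__ == "__main__":
    done = mp.Event()
    w = mp.Process(target=writer, args=(done,))
    r = mp.Process(target=reader, args=(done,))
    w.start(); r.start()
    w.join(); r.join()
```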
75,738,343 | 2023-3-14 | https://stackoverflow.com/questions/75738343/how-can-i-use-pip-install-when-using-pyscript | I am trying to code an html file that is able to contact with chat gpt. this is the start of the code: <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" /> <script defer src="https://pyscript.net/latest/pyscript.js"></script> <py-script> import openai </py-script> but when I run it it says: ModuleNotFoundError: No module named 'openai' any idea how I can install it just on the html file? I tried to find out how to use: pip install openai , but I dont know how to run bash commands from the python. I also tried to use <py-env>, but it had no effect on the code whatsoever. + it could be blocked by my proxy, + I'm a beginner to python, so please explain it in simple terms. | According to the Getting Started documentation, you import external modules using a <py-config> tag: <py-config> packages = ["openai"] </py-config> However, even with that, this code: <html> <head> <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" /> <script defer src="https://pyscript.net/latest/pyscript.js"></script> </head> <body> <py-config> packages = ["openai"] </py-config> <py-script> import openai </py-script> </body> </html> gives the following error message: (PY1001): Unable to install package(s) 'openai'. Reason: Can't find a pure Python 3 Wheel for package(s) 'openai'. See: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel for more information. At the Pyodide link above is the following: Why canβt I just use urllib or requests? We currently canβt use such packages since sockets are not available in Pyodide. See Write http.client in terms of Web APIs for more information. openai requires requests, so until sockets are implemented in Pyodide (which is used by PyScript), you won't be able to use the openai module, or many others, in PyScript. However, it is possible to make an async HTTP request using PyScript/Pyodide: see this part of the docs. | 3 | 3 |
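As a sketch of the async-HTTP route mentioned at the end of the answer, the code below (placed inside a <py-script> tag) uses Pyodide's pyfetch, which wraps the browser's fetch(). Note that it only reaches endpoints that permit cross-origin browser requests and does nothing to keep an API key secret, so the target URL here is just a placeholder.

```python
import asyncio
from pyodide.http import pyfetch

async def call_api():
    # pyfetch wraps the browser's fetch(), so it works without raw sockets
    response = await pyfetch("https://httpbin.org/get")
    data = await response.json()
    print(data)

asyncio.ensure_future(call_api())
```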
75,721,577 | 2023-3-13 | https://stackoverflow.com/questions/75721577/add-a-new-option-to-an-existing-selection-field | The following options are what I have available currently: I want to add another option called 'Solved', with blue colored circle. I did inherit the "project.task" model and added selection_add method and overridden the kanban_state selection field with the 'Solved' text: # Python File from odoo import models, fields, api, _ class ProjectTask(models.Model): _inherit = 'project.task' kanban_state = fields.Selection(selection_add=[('solved', 'Solved')], ondelete={'solved': 'cascade'}) However it shows as 'Blocked' value with red circle: I'm sure that I'm missing something here. | The selection state is designed to use red color for all new states. To use another color, you will need to override the selection state widget or create a custom selection widget Example: 1. Create a new selection state widget to set a custom color in solved state /** @odoo-module **/ import basicFields from "web.basic_fields"; import fieldRegistry from 'web.field_registry'; import { qweb } from 'web.core'; var CustomStateSelectionWidget = basicFields.StateSelectionWidget.extend({ _prepareDropdownValues: function () { var self = this; var _data = []; var current_stage_id = self.recordData.stage_id && self.recordData.stage_id[0]; var stage_data = { id: current_stage_id, legend_normal: this.recordData.legend_normal || undefined, legend_blocked : this.recordData.legend_blocked || undefined, legend_done: this.recordData.legend_done || undefined, legend_solved: this.recordData.legend_solved || undefined, }; _.map(this.field.selection || [], function (selection_item) { var value = { 'name': selection_item[0], 'tooltip': selection_item[1], }; if (selection_item[0] === 'normal') { value.state_name = stage_data.legend_normal ? stage_data.legend_normal : selection_item[1]; } else if (selection_item[0] === 'done') { value.state_class = 'o_status_green'; value.state_name = stage_data.legend_done ? stage_data.legend_done : selection_item[1]; } else if (selection_item[0] === 'solved') { value.state_class = 'o_status_blue'; value.state_name = stage_data.legend_solved ? stage_data.legend_solved : selection_item[1]; } else { value.state_class = 'o_status_red'; value.state_name = stage_data.legend_blocked ? stage_data.legend_blocked : selection_item[1]; } _data.push(value); }); return _data; }, _render: function () { var states = this._prepareDropdownValues(); // Adapt "FormSelection" // Like priority, default on the first possible value if no value is given. var currentState = _.findWhere(states, {name: this.value}) || states[0]; this.$('.o_status') .removeClass('o_status_red o_status_green o_status_blue') .addClass(currentState.state_class) .prop('special_click', true) .parent().attr('title', currentState.state_name) .attr('aria-label', this.string + ": " + currentState.state_name); // Render "FormSelection.Items" and move it into "FormSelection" var $items = $(qweb.render('FormSelection.items', { states: _.without(states, currentState) })); var $dropdown = this.$('.dropdown-menu'); $dropdown.children().remove(); // remove old items $items.appendTo($dropdown); // Disable edition if the field is readonly var isReadonly = this.record.evalModifiers(this.attrs.modifiers).readonly; this.$('a[data-toggle=dropdown]').toggleClass('disabled', isReadonly || false); }, }); fieldRegistry.add('custom_state_selection', CustomStateSelectionWidget); return CustomStateSelectionWidget; 2. 
Update models to use the legend field for solved state from odoo import models, fields, api, _ from odoo.addons import project project.models.project.PROJECT_TASK_READABLE_FIELDS.add('legend_solved') class ProjectTaskType(models.Model): _inherit = 'project.task.type' legend_solved = fields.Char( 'Blue Kanban Label', default=lambda s: _('Solved'), translate=True, required=True, help='Override the default value displayed for the solved state for kanban selection when the task or issue is in that stage.') class ProjectTask(models.Model): _inherit = 'project.task' kanban_state = fields.Selection(selection_add=[('solved', 'Solved')], ondelete={'solved': 'cascade'}) legend_solved = fields.Char(related='stage_id.legend_solved', string='Kanban Valid Explanation', readonly=True, related_sudo=False) @api.depends('stage_id', 'kanban_state') def _compute_kanban_state_label(self): for task in self: if task.kanban_state == 'normal': task.kanban_state_label = task.legend_normal elif task.kanban_state == 'blocked': task.kanban_state_label = task.legend_blocked elif task.kanban_state == 'solved': task.kanban_state_label = task.legend_solved else: task.kanban_state_label = task.legend_done 3. Alter the project kanban view to set the selection state widget <?xml version="1.0" encoding="UTF-8" ?> <odoo> <record id="view_task_kanban_custom_state_selection" model="ir.ui.view"> <field name="name">project.task.kanban.custom.state.selection</field> <field name="model">project.task</field> <field name="inherit_id" ref="project.view_task_kanban"/> <field name="arch" type="xml"> <field name="kanban_state" position="attributes"> <attribute name="widget">custom_state_selection</attribute> </field> </field> </record> </odoo> 4. Create an scss file to declare the o_status_blue class used in the custom widget .o_status { &.o_status_blue { @extend .bg-primary; } } To load the js and scss files add the file paths under assets/web.assets_backend and also add the view under data 'data': [ 'views/project_view.xml', ], 'assets': { 'web.assets_backend': [ 'MODULE_NAME/static/src/js/StateSelectionWidget.js', 'MODULE_NAME/static/src/css/StateSelectionWidget.scss', ], }, | 3 | 2 |
75,736,939 | 2023-3-14 | https://stackoverflow.com/questions/75736939/error-could-not-build-wheels-for-python-ldap-which-is-required-to-install-pypr | I was installing Odoo 15 inside a Python virtual environment on Ubuntu 20.04. I've downloaded Odoo from the official GitHub repository and use Nginx as a reverse proxy. after following the documentation to install and set up odoo in ubuntu 22.04, I did followed this how-to-do doc in this link I get this error the moment I do install pip packages using the command pip install -r requirement.txt any help please. Building wheels for collected packages: python-ldap Building wheel for python-ldap (pyproject.toml) ... error error: subprocess-exited-with-error Γ Building wheel for python-ldap (pyproject.toml) did not run successfully. β exit code: 1 β°β> [111 lines of output] /tmp/pip-build-env-rbfhnio_/overlay/lib/python3.10/site-packages/setuptools/config/setupcfg.py:516: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. warnings.warn(msg, warning_class) running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-cpython-310 copying Lib/ldapurl.py -> build/lib.linux-x86_64-cpython-310 copying Lib/ldif.py -> build/lib.linux-x86_64-cpython-310 creating build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/resiter.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/syncrepl.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/__init__.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/dn.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/async.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/sasl.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/asyncsearch.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/logger.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/functions.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/modlist.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/constants.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/ldapobject.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/filter.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/pkginfo.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/cidict.py -> build/lib.linux-x86_64-cpython-310/ldap copying Lib/ldap/compat.py -> build/lib.linux-x86_64-cpython-310/ldap creating build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/psearch.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/sessiontrack.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/simple.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/__init__.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/vlv.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/sss.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/libldap.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/pagedresults.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/pwdpolicy.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/readentry.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/deref.py -> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/ppolicy.py 
-> build/lib.linux-x86_64-cpython-310/ldap/controls copying Lib/ldap/controls/openldap.py -> build/lib.linux-x86_64-cpython-310/ldap/controls creating build/lib.linux-x86_64-cpython-310/ldap/extop copying Lib/ldap/extop/passwd.py -> build/lib.linux-x86_64-cpython-310/ldap/extop copying Lib/ldap/extop/__init__.py -> build/lib.linux-x86_64-cpython-310/ldap/extop copying Lib/ldap/extop/dds.py -> build/lib.linux-x86_64-cpython-310/ldap/extop creating build/lib.linux-x86_64-cpython-310/ldap/schema copying Lib/ldap/schema/__init__.py -> build/lib.linux-x86_64-cpython-310/ldap/schema copying Lib/ldap/schema/subentry.py -> build/lib.linux-x86_64-cpython-310/ldap/schema copying Lib/ldap/schema/tokenizer.py -> build/lib.linux-x86_64-cpython-310/ldap/schema copying Lib/ldap/schema/models.py -> build/lib.linux-x86_64-cpython-310/ldap/schema creating build/lib.linux-x86_64-cpython-310/slapdtest copying Lib/slapdtest/__init__.py -> build/lib.linux-x86_64-cpython-310/slapdtest copying Lib/slapdtest/_slapdtest.py -> build/lib.linux-x86_64-cpython-310/slapdtest running egg_info writing Lib/python_ldap.egg-info/PKG-INFO writing dependency_links to Lib/python_ldap.egg-info/dependency_links.txt writing requirements to Lib/python_ldap.egg-info/requires.txt writing top-level names to Lib/python_ldap.egg-info/top_level.txt reading manifest file 'Lib/python_ldap.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' no previously-included directories found matching 'Doc/.build' adding license file 'LICENCE' writing manifest file 'Lib/python_ldap.egg-info/SOURCES.txt' /tmp/pip-build-env-rbfhnio_/overlay/lib/python3.10/site-packages/setuptools/command/build_py.py:202: SetuptoolsDeprecationWarning: Installing 'slapdtest.certs' as data is deprecated, please list it in `packages`. !! ############################ # Package would be ignored # ############################ Python recognizes 'slapdtest.certs' as an importable package, but it is not listed in the `packages` configuration of setuptools. 'slapdtest.certs' has been automatically added to the distribution only because it may contain data files, but this behavior is likely to change in future versions of setuptools (and therefore is considered deprecated). Please make sure that 'slapdtest.certs' is included as a package by using the `packages` configuration field or the proper discovery methods (for example by using `find_namespace_packages(...)`/`find_namespace:` instead of `find_packages(...)`/`find:`). You can read more about "package discovery" and "data files" on setuptools documentation page. !! 
check.warn(importable) creating build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/README -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/ca.conf -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/ca.pem -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/client.conf -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/client.key -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/client.pem -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/gencerts.sh -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/gennssdb.sh -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/server.conf -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/server.key -> build/lib.linux-x86_64-cpython-310/slapdtest/certs copying Lib/slapdtest/certs/server.pem -> build/lib.linux-x86_64-cpython-310/slapdtest/certs running build_ext building '_ldap' extension creating build/temp.linux-x86_64-cpython-310 creating build/temp.linux-x86_64-cpython-310/Modules x86_64-linux-gnu-gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -DHAVE_SASL -DHAVE_TLS -DHAVE_LIBLDAP_R -DHAVE_LIBLDAP_R -DLDAPMODULE_VERSION=3.4.0 "-DLDAPMODULE_AUTHOR=python-ldap project" "-DLDAPMODULE_LICENSE=Python style" -IModules -I/opt/odoo15/odoo-venv/include -I/usr/include/python3.10 -c Modules/LDAPObject.c -o build/temp.linux-x86_64-cpython-310/Modules/LDAPObject.o In file included from Modules/LDAPObject.c:3: Modules/common.h:15:10: fatal error: lber.h: No such file or directory 15 | #include <lber.h> | ^~~~~~~~ compilation terminated. error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for python-ldap Failed to build python-ldap ERROR: Could not build wheels for python-ldap, which is required to install pyproject.toml-based projects | You need to install on your system the GCC compiler package and the package that includes the development libraries and header files needed to compile applications that use LDAP | 11 | 5 |
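A minimal sketch of the install step this answer describes, assuming a Debian/Ubuntu system (the package names are assumptions and differ on other distributions):
# GCC and the Python build tooling
sudo apt-get install -y build-essential python3-dev
# OpenLDAP and Cyrus SASL development headers (these provide lber.h and ldap.h)
sudo apt-get install -y libldap2-dev libsasl2-dev libssl-dev
After installing these system packages, re-run the pip install -r command inside the virtual environment.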
75,724,033 | 2023-3-13 | https://stackoverflow.com/questions/75724033/set-the-media-type-of-a-custom-error-response-via-a-pydantic-model-in-fastapi | In my FastAPI application I want to return my errors as RFC Problem JSON: from pydantic import BaseModel class RFCProblemJSON(BaseModel): type: str title: str detail: str | None status: int | None I can set the response model in the OpenAPI docs with the responses argument of the FastAPI class: from fastapi import FastAPI, status api = FastAPI( responses={ status.HTTP_401_UNAUTHORIZED: {'model': RFCProblemJSON}, status.HTTP_422_UNPROCESSABLE_ENTITY: {'model': RFCProblemJSON}, status.HTTP_500_INTERNAL_SERVER_ERROR: {'model': RFCProblemJSON} } ) However, I want to set the media type as 'application/problem+json'. I tried two methods, first just adding a 'media type' field on to the basemodel: class RFCProblemJSON(BaseModel): media_type = "application/problem+json" type: str title: str detail: str | None status: int | None and also, inheriting from fastapi.responses.Response: class RFCProblemJSON(Response): media_type = "application/problem+json" type: str title: str detail: str | None status: int | None However neither of these modify the media_type in the openapi.json file/the swagger UI. When you add the media_type field to the basemodel, the media type in the SwaggerUI is not modified:: And when you make the model inherit from Response, you just get an error (this was a long shot from working but tried it anyway). raise fastapi.exceptions.FastAPIError( fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that <class 'RoutingServer.RestAPI.schema.errors.RFCProblemJSON'> is a valid Pydantic field type. If you are using a return type annotation that is not a valid Pydantic field (e.g. Union[Response, dict, None]) you can disable generating the response model from the type annotation with the path operation decorator parameter response_model=None. Read more: https://fastapi.tiangolo.com/tutorial/response-model/ It is possible to get the swagger UI to show the correct media type if you manually fill out the OpenAPI definition: api = FastAPI( debug=debug, version=API_VERSION, title="RoutingServer API", openapi_tags=tags_metadata, swagger_ui_init_oauth={"clientID": oauth2_scheme.client_id}, responses={ status.HTTP_401_UNAUTHORIZED: { "content": {"application/problem+json": { "example": { "type": "string", "title": "string", "detail": "string" }}}, "description": "Return the JSON item or an image.", }, } ) However, I want to try and implement this with a BaseModel so that I can inherit from RFCProblemJSON and provide some optional extras for some specific errors. The minimal example to reproduce my problem is: from pydantic import BaseModel from fastapi import FastAPI, status, Response, Request from fastapi.exceptions import RequestValidationError from pydantic import error_wrappers import json import uvicorn from typing import List, Tuple, Union, Dict, Any from typing_extensions import TypedDict Loc = Tuple[Union[int, str], ...] 
class _ErrorDictRequired(TypedDict): loc: Loc msg: str type: str class ErrorDict(_ErrorDictRequired, total=False): ctx: Dict[str, Any] class RFCProblemJSON(BaseModel): type: str title: str detail: str | None status: int | None class RFCUnprocessableEntity(RFCProblemJSON): instance: str issues: List[ErrorDict] class RFCProblemResponse(Response): media_type = "application/problem+json" def render(self, content: RFCProblemJSON) -> bytes: return json.dumps( content.dict(), ensure_ascii=False, allow_nan=False, indent=4, separators=(", ", ": "), ).encode("utf-8") api = FastAPI( responses={ status.HTTP_422_UNPROCESSABLE_ENTITY: {'model': RFCUnprocessableEntity}, } ) @api.get("/{x}") def hello(x: int) -> int: return x @api.exception_handler(RequestValidationError) def format_validation_error_as_problem_json(request: Request, exc: error_wrappers.ValidationError): status_code = status.HTTP_422_UNPROCESSABLE_ENTITY content = RFCUnprocessableEntity( type="/errors/unprocessable_entity", title="Unprocessable Entity", status=status_code, detail="The request has validation errors.", instance=request.url.path, issues=exc.errors() ) return RFCProblemResponse(content, status_code=status_code) uvicorn.run(api) When you go to http://localhost:8000/hello, it will return as application/problem+json in the headers; however, if you go to the Swagger UI docs, the UI shows the response will be application/json. I don't know how to keep the style of my code, but update the OpenAPI definition to show that it will return as 'application/problem+json' in a nice way. Is this possible to do? | As described in FastAPI's documentation about Additional Responses in OpenAPI: You can pass to your path operation decorators a parameter responses. It receives a dict, the keys are status codes for each response, like 200, and the values are other dicts with the information for each of them. Each of those response dicts can have a key model, containing a Pydantic model, just like response_model. FastAPI will take that model, generate its JSON Schema and include it in the correct place in OpenAPI. Also, as described in Additional Response with model (see under Info): The model key is not part of OpenAPI. FastAPI will take the Pydantic model from there, generate the JSON Schema, and put it in the correct place. The correct place is: In the key content, that has as value another JSON object (dict) that contains: A key with the media type, e.g. application/json, that contains as value another JSON object, that contains: A key schema, that has as the value the JSON Schema from the model, here's the correct place. FastAPI adds a reference here to the global JSON Schemas in another place in your OpenAPI instead of including it directly. This way, other applications and clients can use those JSON Schemas directly, provide better code generation tools, etc. Hence, there doesn't currently seem to be a way to achieve what you are asking, i.e., adding a media_type field to the BaseModel in order to set the media type of an error response (e.g., 422 UNPROCESSABLE ENTITY) to application/problem+json, since the model key is only used to generate the schema. There has been an extensive discussion on github on a similar issue, where people provide a few solutions, which mainly focus on changing the 422 error response schema, similar to the one described in your question, but in a more elegant way (see this comment, for instance). The example below demonstrates a similar approach that can be easily adapted to your needs.
Working Example from fastapi import FastAPI, Response, Request, status from fastapi.exceptions import RequestValidationError from fastapi.openapi.constants import REF_PREFIX from fastapi.responses import JSONResponse from pydantic import BaseModel import json class Item(BaseModel): id: str value: str class SubMessage(BaseModel): msg: str class Message(BaseModel): msg: str sub: SubMessage class CustomResponse(Response): media_type = 'application/problem+json' def render(self, content: Message) -> bytes: return json.dumps( content.dict(), ensure_ascii=False, allow_nan=False, indent=4, separators=(', ', ': '), ).encode('utf-8') def get_422_schema(): return { 'model': Message, 'content': { 'application/problem+json': { 'schema': {'$ref': REF_PREFIX + Message.__name__} } }, } app = FastAPI(responses={status.HTTP_422_UNPROCESSABLE_ENTITY: get_422_schema()}) @app.exception_handler(RequestValidationError) async def validation_exception_handler(request: Request, exc: RequestValidationError): msg = Message(msg='main message', sub=SubMessage(msg='sub message')) return CustomResponse(content=msg, status_code=status.HTTP_422_UNPROCESSABLE_ENTITY) @app.post('/items') async def submit(item: Item): return item | 4 | 4 |
75,721,031 | 2023-3-13 | https://stackoverflow.com/questions/75721031/hide-legend-values-on-zoom-in-plotly-express | Is there a way to only show visible values in the legend in plotly express? So for example if this was my original scatter plot: and I zoom in on a few points like so: Is there a way to only show the legend values for the points that are currently visible, rather than showing all legend values? Code for reference: fig_2d = px.scatter( proj_2D, x=0, y=1, color=topic_nums.astype(str), labels={'color': 'topic'}, #color_continuous_scale="spectral" ) fig_2d.update_traces(marker=dict(size=5), selector=dict(mode='markers')) fig_2d.update_layout( autosize=False, width=1000, height=800, ) fig_2d.show() where proj_2D is a 2D matrix | You can create a JupyterDash app, and use a callback to read in the layout data from zoom events, then update the figure accordingly. In order to reset the default (with all of the points and traces in the legend), you can click Zoom Out tooltip in the figure. import json import numpy as np import pandas as pd import plotly.express as px from jupyter_dash import JupyterDash from dash import dcc, html, Input, Output # Sample Data np.random.seed(42) proj_2D = pd.DataFrame({ 'x': np.random.uniform(low=0.0, high=1.0, size=100), 'y': np.random.uniform(low=0.0, high=1.0, size=100), 'topic': list(range(1,11))*10 }) proj_2D['topic'] = proj_2D['topic'].astype(str) fig_2d = px.scatter( proj_2D, x='x', y='y', color='topic' ) fig_2d.update_traces(marker=dict(size=5), selector=dict(mode='markers')) fig_2d.update_layout( autosize=False, width=1000, height=800, ) # Build App app = JupyterDash(__name__, prevent_initial_callbacks=True) app.layout = html.Div([ html.H1("JupyterDash Scatter Zoom Update"), dcc.Graph(id='scatter-fig', figure=fig_2d), ]) @app.callback( Output('scatter-fig', 'figure'), Input('scatter-fig', 'relayoutData'), ) def display_relayout_data(relayoutData): xaxis_min, xaxis_max = relayoutData["xaxis.range[0]"], relayoutData["xaxis.range[1]"] yaxis_min, yaxis_max = relayoutData["yaxis.range[0]"], relayoutData["yaxis.range[1]"] ## subset data proj_2D_new = proj_2D[ proj_2D['x'].between(xaxis_min, xaxis_max) & proj_2D['y'].between(yaxis_min, yaxis_max) ] fig_2d = px.scatter( proj_2D_new, x='x', y='y', color='topic' ) fig_2d.update_traces(marker=dict(size=5), selector=dict(mode='markers')) fig_2d.update_layout( autosize=False, width=1000, height=800, ) return fig_2d # Run app and display result inline in the notebook app.run_server(mode='inline', debug=True, port=8000) | 3 | 2 |
75,733,809 | 2023-3-14 | https://stackoverflow.com/questions/75733809/how-to-crop-an-audio-file-based-on-the-timestamps-present-in-a-list | So, I have an audio file which is very long in duration. I have manual annotations (start and end duration in seconds) of the important parts which I need from the whole audio in a text file. I have converted this text file into a nested list where in each list has [start , end] The whole list looks like [[start1,end1],[start2,end2]......] what I need to do is go through my annotation list shown above, get one timestamp(start and end time sublist) and then crop this part from the whole original audio and then the next timestamp and crop that part out from the whole audio and so on. I understand that I need to make sure the reference for the timings must be in accordance with the first unedited original audio. note that, the timestamps are float values and its quite important to keep them as is. The next step would be to extract audio characteristics such as mfcc from the cropped audio file. fs1, y1 = scipy.io.wavfile.read(file_path) l1 = numpy.array(annotation_list) newWavFileAsList = [] for elem in l1: startRead = elem[0] endRead = elem[1] newWavFileAsList.extend(y1[startRead:endRead]) newWavFile = numpy.array(newWavFileAsList) scipy.io.wavfile.write(sample, fs1, newWavFile) I have tried it as above, however it shows an error that the indexes startRead and endRead must be integers. I understand referencing y1 using those indexes is completely dumb, but how can I relate the duration which I have in seconds to the indexes of the read audio file? How do you suggest I approach this? | Try out Pydub! :) from pydub import AudioSegment def trim_audio(intervals, input_file_path, output_file_path): # load the audio file audio = AudioSegment.from_file(input_file_path) # iterate over the list of time intervals for i, (start_time, end_time) in enumerate(intervals): # extract the segment of the audio segment = audio[start_time*1000:end_time*1000] # construct the output file path output_file_path_i = f"{output_file_path}_{i}.wav" # export the segment to a file segment.export(output_file_path_i, format='wav') # test it out print("Trimming audio...") trim_audio([[0, 1], [1, 2]], "test_input.wav", "test_output") print("...done! <3") This code works for me. Lmk if you encounter any problems. Edit: Just to let you know, I tried this out for floats, and it works just fine. I took a look at it and it seemed like it should behave oddly with floats, but it apparently works just fine. I tried long weird ones like 2.2352344, seems ok. Another edit: I just remembered you might need ffmpeg to be able to use Pydub. To install ffmpeg, go download it, extract it, then add the path of it to your Windows path variable. | 4 | 4 |
75,734,763 | 2023-3-14 | https://stackoverflow.com/questions/75734763/ordering-multi-indexed-pandas-dataframe-on-two-levels-with-different-criteria-f | Consider the dataframe df_counts, constructed as follows: df2 = pd.DataFrame({ "word" : ["AA", "AC", "AC", "BA", "BB", "BB", "BB"], "letter1": ["A", "A", "A", "B", "B", "B", "B"], "letter2": ["A", "C", "C", "A", "B", "B", "B"] }) df_counts = df2[["word", "letter1", "letter2"]].groupby(["letter1", "letter2"]).count() Output: What I would like to do from here, is to order first by letter1 totals, so the rows for letter1 == "B" appear first (there are four words starting with B, vs only three with A), and then ordered within each grouping of letter1 by the values in the word column. So the final output should be: word letter1 letter2 B B 3 A 1 A C 2 A 1 Is this possible to do? | When you have a complex sorting order, it's always easy to use numpy.lexsort: # minor sorting order first, major one last # - to inverse the order order = np.lexsort([-df_counts['word'], -df_counts.groupby('letter1')['word'].transform('sum')]) out = df_counts.iloc[order] The pandas equivalent would be: (df_counts .assign(total=df_counts.groupby('letter1')['word'].transform('sum')) .sort_values(by=['total', 'word'], ascending=False) .drop(columns='total') ) Output: word letter1 letter2 B B 3 A 1 A C 2 A 1 | 3 | 3 |
75,673,304 | 2023-3-8 | https://stackoverflow.com/questions/75673304/how-to-get-mastodon-direct-messages-using-mastodon-py | I am trying to get direct messages from my Mastodon account using Mastodon.py. I am able to get everything on the timeline, but can't seem to return direct messages. Since direct messages are separate from the timeline (on the web interface), I am assuming there is some function other than timeline() I should be using. I looked into the status() function but I would need to already have the ID of the message to use it. I have been searching the API documentation, but there doesn't appear to be any function or method that is obvious to me. My function currently looks like this: def get_direct_messages(mastodon): # Get all status messages from the user's timeline statuses = mastodon.timeline() print(statuses) # Filter the status messages to only include direct messages direct_messages = [status for status in statuses if status["visibility"] == "direct"] print (direct_messages) return direct_messages This will print out all the messages on the timeline, so I know the connection and everything is valid, but I'm just not doing something right. print(statuses) shows timeline messages only, so I am sure this isn't the right way to access direct messages. | Instead of mastodon.timeline() you want to call mastodon.conversations() To get the content of the conversations you can do conversations = mastodon.conversations() for c in conversations: print(c.last_status.content) https://mastodonpy.readthedocs.io/en/stable/07_timelines.html#mastodon.Mastodon.conversations | 4 | 2 |
75,729,285 | 2023-3-14 | https://stackoverflow.com/questions/75729285/pandas-find-the-left-most-value-in-a-pandas-dataframe-followed-by-all-1s | I have the following dataset data = {'ID': ['A', 'B', 'C', 'D'], '2012': [0, 1, 1, 1], '2013': [0, 0, 1, 1], '2014': [0, 0, 0, 1], '2015': [0, 0, 1, 1], '2016': [0, 0, 1, 0], '2017': [1, 0, 1,1]} df = pd.DataFrame(data) For each row I want to generate a new column - Baseline_Year - which assumes the name of the column with all values to the right that are equal to 1. In case there is not column with all the values equal to 1, I would like the Baseline_Year to be equal to missing. See the expected results data = {'ID': ['A', 'B', 'C', 'D', 'E'], '2012': [0, 1, 1, 1, 1], '2013': [0, 0, 1, 1, 1], '2014': [0, 0, 0, 1, 1], '2015': [0, 0, 1, 1, 1], '2016': [0, 0, 1, 0, 1], '2017': [1, 0, 1,1, 1], 'Baseline_Year': [np.nan, np.nan, '2015','2017', '2012'], } df_results = pd.DataFrame(data) df_results | I would use a boolean mask and idxmax: # get year columns, identify rightmost 1s m = (df.filter(regex=r'\d+') .loc[:, ::-1] .eq(1).cummin(axis=1) .loc[:, ::-1] ) df['Baseline_Year'] = m.idxmax(axis=1).where(m.any(axis=1)) Output: ID 2012 2013 2014 2015 2016 2017 Baseline_Year 0 A 0 0 0 0 0 1 2017 1 B 1 0 0 0 0 0 NaN 2 C 1 1 0 1 1 1 2015 3 D 1 1 1 1 0 1 2017 If you want a minimum number of 1s on the right: N = 2 df['Baseline_Year'] = m.idxmax(axis=1).where(m.sum(axis=1).ge(N)) Output: ID 2012 2013 2014 2015 2016 2017 Baseline_Year 0 A 0 0 0 0 0 1 NaN 1 B 1 0 0 0 0 0 NaN 2 C 1 1 0 1 1 1 2015 3 D 1 1 1 1 0 1 NaN Intermediate m: 2012 2013 2014 2015 2016 2017 0 False False False False False True 1 False False False False False False 2 False False False True True True 3 False False False False False True | 3 | 5 |
75,714,883 | 2023-3-12 | https://stackoverflow.com/questions/75714883/how-to-test-a-fastapi-endpoint-that-uses-lifespan-function | Could someone tell me how I can test an endpoint that uses the new lifespan feature from FastAPI? I am trying to set up tests for my endpoints that use resources from the lifespan function, but the test failed since the dict I set up in the lifespan function is not passed to the TestClient as part of the FastAPI app. My API looks as follows. from fastapi import FastAPI from contextlib import asynccontextmanager ml_model = {} @asynccontextmanager async def lifespan(app: FastAPI): predictor = Predictor(model_version) ml_model["predict"] = predictor.predict_from_features yield # Clean up the ML models and release the resources ml_model.clear() app = FastAPI(lifespan=lifespan) @app.get("/prediction/") async def get_prediction(model_input: str): prediction = ml_model["predict"](model_input) return prediction And the test code for the /prediction endpoint looks as follows: from fastapi.testclient import TestClient from app.main import app client = TestClient(app) def test_read_prediction(): model_input= "test" response = client.get(f"/prediction/?model_input={model_input}") assert response.status_code == 200 The test failed with an error message saying KeyError: 'predict', which shows that the ml_models dict was not passed with the app object. I also tried using app.state.ml_models = {}, but that didn't work either. I would appreciate any help! | Use the TestClient as a context manager. This triggers startup/shutdown events as well as lifespans. from fastapi.testclient import TestClient from app.main import app def test_read_prediction(): with TestClient(app) as client: model_input= "test" response = client.get(f"/prediction/?model_input={model_input}") assert response.status_code == 200 Here's some documentation about testing startup/shutdown events. It also applies to lifespans. | 18 | 45 |
75,690,059 | 2023-3-9 | https://stackoverflow.com/questions/75690059/pandas-interpolation-to-extend-data-is-giving-bad-results | I have a dataset with 'DEN' values as a function of 'Z', which goes to Z = ~425000, but I would like to extend it up to Z = 500000. I attempted to do this by adding a new data point to my pandas column at Z = 500000, and filling in the NaN values with spline and linear interpolation, but neither result gives a good fit. I tried the other interpolation methods, but none worked. This is the output: but it should look something more like this (with some curvature, probably) How could I get a better fit? Here is my code: import pandas as pd Z = [67.0016860961914, 202.4105987548828, 339.6864929199219, 478.540283203125, 618.98046875, 761.0699462890625, 904.8601684570312, 1050.3714599609375, 1197.6475830078125, 1346.7713623046875, 1498.6524658203125, 1653.7181396484375, 1837.8468017578125, 2079.58544921875, 2355.20263671875, 2638.4990234375, 2929.645751953125, 3229.349853515625, 3617.75537109375, 4104.705078125, 4617.30859375, 5158.63427734375, 5733.37353515625, 6345.8583984375, 6999.3095703125, 7698.5947265625, 8450.8916015625, 9409.703125, 10553.18359375, 11668.6767578125, 12741.990234375, 13772.39453125, 14763.3818359375, 15719.619140625, 16649.482421875, 17554.62109375, 18469.857421875, 19430.984375, 20422.037109375, 21439.216796875, 22486.515625, 23560.853515625, 24665.4140625, 25805.2890625, 26985.13671875, 28204.599609375, 29457.5390625, 30742.876953125, 32064.46875, 33431.6640625, 34851.78515625, 36325.08984375, 37850.5390625, 39429.359375, 41058.421875, 42756.5703125, 44542.91015625, 46396.62890625, 48294.97265625, 50223.65234375, 52059.734375, 53766.05859375, 55550.89453125, 57403.92578125, 59192.73828125, 60936.0234375, 62660.0703125, 64366.703125, 66034.6953125, 67662.28125, 69244.1484375, 70767.453125, 72257.0859375, 73747.015625, 75229.140625, 76701.421875, 78177.2734375, 79667.5859375, 81194.4921875, 82738.984375, 84261.1875, 85742.84375, 87180.859375, 88590.0078125, 89967.6328125, 91287.2109375, 92543.7890625, 93754.3515625, 94940.046875, 96156.3359375, 97419.6953125, 98704.9296875, 100037.609375, 101529.7890625, 103333.3046875, 105506.3125, 107998.2734375, 110736.3828125, 113655.7734375, 116720.3515625, 119921.03125, 123249.9140625, 126694.953125, 130245.1953125, 133898.578125, 137668.90625, 141590.484375, 145714.953125, 150101.5, 154805.265625, 159865.140625, 165296.765625, 171092.75, 177227.46875, 183663.90625, 190363.34375, 197291.578125, 204419.296875, 211723.71875, 219187.59375, 226796.78125, 234539.328125, 242404.578125, 250382.59375, 258463.765625, 266638.875, 274899.125, 283236.34375, 291643.1875, 300113.1875, 308640.875, 317221.75, 325852.28125, 334530.15625, 343254.53125, 352026.25, 360848.3125, 369726.125, 378668.3125, 387687.25, 396800.21875, 406031.03125, 415412.28125, 424988.75, 434822.46875] DEN = [2.934534393261856e-09, 3.046047858390466e-09, 3.287511374239216e-09, 3.5445970603120713e-09, 3.818016125478607e-09, 4.121767371856322e-09, 4.556317101389595e-09, 4.9088515474693395e-09, 5.256281188081857e-09, 5.6165689876763736e-09, 5.962251581337341e-09, 6.3753162748980685e-09, 7.051504713473378e-09, 7.8024715577385e-09, 8.947714569274012e-09, 1.0181534726427799e-08, 1.1241577446696738e-08, 1.2603742050032452e-08, 1.4735684672473326e-08, 1.801341120710731e-08, 2.2238978658606356e-08, 2.669004040001255e-08, 4.1519353288776983e-08, 5.4049081654738984e-08, 6.672090790971197e-08, 7.643878774388213e-08, 9.041653470376332e-08, 
1.5108118134321558e-07, 2.133168237605787e-07, 2.2585844305922365e-07, 2.736193494001782e-07, 2.0513878951078368e-07, 1.841667227608923e-07, 2.2165528434925363e-07, 1.9528846451066784e-07, 1.9578206433834566e-07, 2.8487770009633095e-07, 4.6579410195590754e-07, 8.215626507990237e-07, 1.3647811556438683e-06, 1.9769688606174896e-06, 2.7490045795275364e-06, 3.741817181435181e-06, 5.50590266357176e-06, 7.5475641097000334e-06, 1.0890928024309687e-05, 1.6043488358263858e-05, 2.4222534193540923e-05, 3.2964235288091004e-05, 4.440318662091158e-05, 6.032012606738135e-05, 8.014062041183934e-05, 0.00011465149145806208, 0.00018679998174775392, 0.0003073820553254336, 0.00041833153227344155, 0.0004992358153685927, 0.0005737273604609072, 0.000637566379737109, 0.0054977829568088055, 0.08780906349420547, 0.5919457674026489, 3.2492551803588867, 11.0116548538208, 26.446338653564453, 38.97719955444336, 49.074031829833984, 57.095115661621094, 65.73072814941406, 74.49889373779297, 82.38655853271484, 85.20193481445312, 87.75443267822266, 89.5878677368164, 87.19244384765625, 83.95445251464844, 80.79202270507812, 79.2406997680664, 85.03714752197266, 106.86959838867188, 131.66307067871094, 173.25582885742188, 192.48924255371094, 259.2572326660156, 283.16607666015625, 445.22332763671875, 874.3299560546875, 1519.5841064453125, 2273.568115234375, 2919.748046875, 3389.169677734375, 3628.605224609375, 3582.08447265625, 3295.89013671875, 2909.16015625, 2511.64208984375, 2176.353271484375, 1895.889892578125, 1657.74365234375, 1452.5299072265625, 1280.7451171875, 1150.204345703125, 1073.4234619140625, 1064.7406005859375, 1123.1546630859375, 1220.3154296875, 1320.48486328125, 1394.374267578125, 2001.9783935546875, 7956.9150390625, 24130.984375, 50260.73828125, 87648.0, 133744.84375, 179154.34375, 214849.5625, 237765.265625, 249376.375, 252877.265625, 251325.625, 246972.625, 241273.703125, 235049.90625, 228696.953125, 222232.765625, 215427.25, 207914.984375, 199298.984375, 189272.90625, 177776.140625, 165114.734375, 151878.109375, 138702.59375, 126065.34375, 114250.4921875, 103397.0546875, 93629.078125, 85087.1875, 77709.4296875, 71285.0859375, 65632.859375, 60621.0, 56143.3984375, 52112.625, 48511.3671875] dict = { 'Z' : Z, 'DEN': DEN } df = pd.DataFrame.from_dict(dict) df = df.append({'Z':500000}, ignore_index=True) df2 = df.interpolate(method='spline', order=3,limit=10,limit_direction='both', axis=0) df3 = df.interpolate(method='linear',limit=10,limit_direction='both', axis=0) plt.plot(df2['DEN'],df2['Z']) plt.plot(df3['DEN'],df3['Z']) plt.show() | The main issue is that your Z value makes a big jump from 434k to 500k. You should use Z as the index of df because the interpolate method is based on the index values. Method 1 - Linear extrapolation You can do it by adding a single new datapoint. df = pd.DataFrame.from_dict(dict) df_new = pd.DataFrame({'Z':[500000]}) df = pd.concat([df, df_new], ignore_index=True) # Append is deprecated, use concat instead df.set_index('Z', inplace=True) df1 = df.interpolate(method='spline', order=1, axis=0) plt.plot(df1['DEN'], df1.index, label='1 - Linear') plt.plot(df['DEN'], df.index, label='Initial data') # Initial data as a reference plt.legend(loc="lower right") plt.show() Output: Method 2 - Polynomial extrapolation You need to add more than 1 new datapoint to get a nice visual plot. 
df = pd.DataFrame.from_dict(dict) last_z = df.loc[len(df)-1,'Z'] # 434822.46875 # For example adding datapoints by steps of 1,000 until 500,000 df_new = pd.DataFrame({'Z' : list(range(int(last_z), 500_000, 1_000))}) df = pd.concat([df, df_new], ignore_index=True) # Append is deprecated, use concat instead df.set_index('Z', inplace=True) df1 = df.interpolate(method='spline', order=1, axis=0) # Linear extrapolation df2 = df.interpolate(method='spline', order=2, axis=0) # Quadratic extrapolation df3 = df.interpolate(method='spline', order=3, axis=0) # Cubic extrapolation df4 = df.interpolate(method='spline', order=4, axis=0) # Quartic extrapolation plt.plot(df1['DEN'], df1.index, label='1 - Linear') plt.plot(df2['DEN'], df2.index, label='2 - Quadratic') plt.plot(df3['DEN'], df3.index, label='3 - Cubic') plt.plot(df4['DEN'], df4.index, label='4 - Quartic') plt.plot(df['DEN'], df.index, label='Initial data') # Initial data as a reference plt.legend(loc="lower right") plt.show() Output: | 3 | 4 |
75,726,452 | 2023-3-13 | https://stackoverflow.com/questions/75726452/can-i-force-the-install-of-a-package-that-requires-a-newer-python-version-than-t | I know it isn't a correct thing to do, but I would like to try to install a package that requires Python 3.8, while my installed Python is 3.7. Is it possible using pip? Or must I clone the repository and change the setup.py? | You can use the --ignore-requires-python option. pip install --help --ignore-requires-python Ignore the Requires-Python information. You can try it with your package, or also with this minimal setup.py: from setuptools import setup, find_packages setup( name="foobar", version="1.0", packages=find_packages(), python_requires="<3.7" ) With Python 3.7, $ pip install . Processing /home/vvvvv/75726452 ERROR: Package 'foobar' requires a different Python: 3.7.11 not in '<3.7' $ pip install . --ignore-requires-python Processing /home/vvvvv/75726452 Installing collected packages: foobar Running setup.py install for foobar ... done Successfully installed foobar-1.0 However, as Mark Ransom suggested, there is probably a good reason the package you are trying to install requires a version of Python different from 3.7. | 8 | 15 |
75,726,719 | 2023-3-13 | https://stackoverflow.com/questions/75726719/confused-by-python-async-for-loop-executes-sequentially | I am new to asyncio and trying to understand basic for loop behavior. The code below executes sequentially, but my naive assumption was that while the sleeps are occurring, other items could be fetched via the for loop and start processing. But that doesn't seem to happen. For example, while the code is "doing something else with 1" it seems like it could fetch the next item from the loop and start working on it while waiting for the sleep to end on item 1. But when I run it, it executes sequentially with pauses for the sleeps like a non-async program. What am I missing here? import asyncio class CustomIterator(): def __init__(self): self.counter = 0 def __aiter__(self): return self async def __anext__(self): if self.counter >= 3: raise StopAsyncIteration await asyncio.sleep(1) self.counter += 1 return self.counter async def f(item): print(f"doing something with {item}") await asyncio.sleep(3) async def f2(item): print(f"doing something else with {item}") await asyncio.sleep(2) async def do_async_stuff(): async for item in CustomIterator(): print(f"got {item}") await f(item) await f2(item) if __name__ == '__main__': asyncio.run(do_async_stuff()) Output: got 1 doing something with 1 doing something else with 1 got 2 doing something with 2 doing something else with 2 got 3 doing something with 3 doing something else with 3 | I think you have a common misunderstanding of how async works. You have written your program to be synchronous. await foo() says to call foo(), and feel free to go do something else while we're waiting for foo to return with its answer. Likewise, getting the next element from your custom iterator says "get the next element of this iterator, but feel free to go do something else while waiting for the result". In both cases, you have nothing else to do, so your code waits. If it is safe for two things in your code to run at once, it is your job to say so, using appropriate primitives. | 4 | 3 |
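To make the accepted explanation concrete, here is a minimal sketch of one way to tell the event loop that the two coroutines may run at once; this rewrite is an illustration, not the only option, and the simple async generator below stands in for the question's CustomIterator:
import asyncio

async def gen():
    # stand-in for CustomIterator: yields 1, 2, 3 one second apart
    for i in range(1, 4):
        await asyncio.sleep(1)
        yield i

async def f(item):
    print(f"doing something with {item}")
    await asyncio.sleep(3)

async def f2(item):
    print(f"doing something else with {item}")
    await asyncio.sleep(2)

async def do_async_stuff():
    tasks = []
    async for item in gen():
        print(f"got {item}")
        # schedule the work instead of awaiting each call immediately
        tasks.append(asyncio.create_task(f(item)))
        tasks.append(asyncio.create_task(f2(item)))
    # wait for everything that was scheduled to finish
    await asyncio.gather(*tasks)

asyncio.run(do_async_stuff())
With this version the sleeps overlap, so the items are processed concurrently instead of strictly one after another.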
75,725,818 | 2023-3-13 | https://stackoverflow.com/questions/75725818/loading-hugging-face-model-is-taking-too-much-memory | I am trying to load a large Hugging Face model with code like below: model_from_disc = AutoModelForCausalLM.from_pretrained(path_to_model) tokenizer_from_disc = AutoTokenizer.from_pretrained(path_to_model) generator = pipeline("text-generation", model=model_from_disc, tokenizer=tokenizer_from_disc) The program is quickly crashing after the first line because it is running out of memory. Is there a way to chunk the model as I am loading it, so that the program doesn't crash? EDIT See cronoik's answer for the accepted solution, but here are the relevant pages on Hugging Face's documentation: Sharded Checkpoints: https://huggingface.co/docs/transformers/big_models#sharded-checkpoints:~:text=in%20the%20future.-,Sharded%20checkpoints,-Since%20version%204.18.0 Large Model Loading: https://huggingface.co/docs/transformers/main_classes/model#:~:text=the%20weights%20instead.-,Large%20model%20loading,-In%20Transformers%204.20.0 | You could try to load it with low_cpu_mem_usage: from transformers import AutoModelForCausalLM model_from_disc = AutoModelForCausalLM.from_pretrained(path_to_model, low_cpu_mem_usage=True) Please note that low_cpu_mem_usage requires: Accelerate >= 0.9.0 and PyTorch >= 1.9.0. | 7 | 9 |
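As a follow-up to this answer, the "Large model loading" page linked in the question also lets Accelerate spread the weights across GPU, CPU RAM and disk; a hedged sketch of that option (the offload folder name and dtype choice are assumptions, and it requires the accelerate package) could look like:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

path_to_model = "path/to/model"  # same placeholder path as in the question
model_from_disc = AutoModelForCausalLM.from_pretrained(
    path_to_model,
    low_cpu_mem_usage=True,     # avoid keeping a second full copy of the weights in RAM
    device_map="auto",          # let accelerate place layers on GPU/CPU/disk
    offload_folder="offload",   # assumed folder for weights that do not fit in memory
    torch_dtype=torch.float16,  # roughly halves the memory needed for the weights
)
tokenizer_from_disc = AutoTokenizer.from_pretrained(path_to_model)
generator = pipeline("text-generation", model=model_from_disc, tokenizer=tokenizer_from_disc)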
75,707,701 | 2023-3-11 | https://stackoverflow.com/questions/75707701/why-i-can-not-install-pyside2 | I want to install the PySide2 library, but apparently this library is not found. I tried this to install PySide2: pip3 install PySide2 But after executing this command, I encountered the following problem: ERROR: Could not find a version that satisfies the requirement PySide2 (from versions: none) ERROR: No matching distribution found for PySide2 I consulted https://pypi.org/project/PySide2/ to solve my problem, but I did not get any result. | According to the documentation, for versions of Python 3.7+ you need to do the following to install the module: pip install pyside6 | 5 | 9 |
75,723,227 | 2023-3-13 | https://stackoverflow.com/questions/75723227/pandas-select-rows-where-any-column-passes-condition | How can I return all rows where one of the columns passes a given condition, WITHOUT specifying any column? My situation is as follows: import pandas as pd print(pd.__version__) # 1.5.2 x = [ 1, 2, 3 ] y = [ 4, 5, 6 ] z = [ 7, 8, 9 ] df = pd.DataFrame({ "a": x, "b": y, "c": z }) I know you can apply a condition on a given column in order to return all the rows where that column passes the condition. df.loc[df.a > 1] # or df.loc[df["a"] > 1] Which would output: a b c 2 5 8 3 6 9 I want to select all rows from the data frame, where any of the columns passes the given condition. df.loc["*" > 1] Which should output: a b c 1 4 7 2 5 8 3 6 9 I have tried the following but it sets any non matching values to NaN: df[df > 1] Output: a b c NaN 4 7 2 5 8 3 6 9 | Use vectorial code with any and boolean indexing: df[df.gt(1).any(axis=1)] | 3 | 7 |
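For readers new to pandas, a short sketch of what the accepted one-liner does on the question's frame, step by step (the intermediate variable names are only for illustration):
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})

cell_mask = df.gt(1)              # element-wise comparison: a DataFrame of booleans
row_mask = cell_mask.any(axis=1)  # True for every row with at least one value > 1
print(df[row_mask])               # keeps whole rows, so no NaNs are introduced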
75,719,665 | 2023-3-13 | https://stackoverflow.com/questions/75719665/in-python-issubclass-unexpectedly-complains-about-protocols-with-non-method-m | I have tried the obvious way to check my protocol: from typing import Any, Protocol, runtime_checkable @runtime_checkable class SupportsComparison(Protocol): def __eq__(self, other: Any) -> bool: ... issubclass(int, SupportsComparison) Unfortunately the issubclass() call ends with an exception (Python 3.10.6 in Ubuntu 22.04): $ python3.10 protocol_test.py Traceback (most recent call last): File "protocol_test.py", line 8, in <module> issubclass(object, SupportsComparison) File "/usr/lib/python3.10/abc.py", line 123, in __subclasscheck__ return _abc_subclasscheck(cls, subclass) File "/usr/lib/python3.10/typing.py", line 1570, in _proto_hook raise TypeError("Protocols with non-method members" TypeError: Protocols with non-method members don't support issubclass() As you can see I added no non-method members to SupportsComparison. Is this a bug in the standard library? | From the documentation: A class that overrides __eq__() and does not define __hash__() will have its __hash__() implicitly set to None. Therefore, in your case, you have the implicit non-method member SupportsComparison.__hash__ = None. You can fix it by declaring __hash__ explicitly: from typing import Any, Protocol, runtime_checkable @runtime_checkable class SupportsComparison(Protocol): def __eq__(self, other: Any) -> bool: ... def __hash__(self) -> int: ... issubclass(int, SupportsComparison) | 3 | 4 |
75,719,006 | 2023-3-13 | https://stackoverflow.com/questions/75719006/in-python-for-parameter-typing-do-you-use-the-capital-versions-of-dict-and-list | Apologies, am new to Python so a very basic question. Below is an example line of my method definition. Where would I be using capital Dict/List and lowercase Dict/List? Thanks in advance! Example scenarios below def execute_signals(self, parameter1: dict[list], parameter2: dict) -> list[dict]: def execute_signals(self, parameter1: Dict[List], parameter2: Dict) -> List[Dict]: def execute_signals(self, parameter1: dict[List], parameter2: dict) -> List[dict]: | On Python 3.8 and earlier, the name of the collection type is capitalized, and the type is imported from the typing module. Python 3.5 - 3.8 (also works in Python 3.9+): from typing import List, Set, Dict, Tuple x: List[int] = [1] x: Set[int] = {6, 7} x: Dict[str, float] = {"field": 2.0} x: Tuple[int, str, float] = (3, "yes", 7.5) x: Tuple[int, ...] = (1, 2, 3) Python 3.9+: x: list[int] = [1] x: set[int] = {6, 7} x: dict[str, float] = {"field": 2.0} x: tuple[int, str, float] = (3, "yes", 7.5) x: tuple[int, ...] = (1, 2, 3) I recommend you read through the mypy's docs about type hints. | 10 | 6 |
75,674,773 | 2023-3-8 | https://stackoverflow.com/questions/75674773/creating-huggingface-dataset-to-train-an-bio-tagger | I have a list of dictionaries: sentences = [ {'text': ['I live in Madrid'], 'labels':[O, O, O, B-LOC]}, {'text': ['Peter lives in Spain'], 'labels':[B-PER, O, O, B-LOC]}, {'text': ['He likes pasta'], 'labels':[O, O, B-FOOD]}, ... ] I want to create a HuggingFace dataset object from this data so that I can later preprocess it and feed to a transformer model much more easily, but so far I have not found a viable way to do this. | First you'll need some extra libraries to use the metrics and datasets features. pip install -U transformers datasets evaluate seqeval To convert list of dict to Dataset object import pandas as pd from datasets import Dataset sentences = [ {'text': 'I live in Madrid', 'labels':['O', 'O', 'O', 'B-LOC']}, {'text': 'Peter lives in Spain', 'labels':['B-PER', 'O', 'O', 'B-LOC']}, {'text': 'He likes pasta', 'labels':['O', 'O', 'B-FOOD']}, ] ds = Dataset.from_pandas(pd.DataFrame(data=sentences)) Convert the dataset into a "Trainer-able" Dataset object from datasets import Dataset from datasets import ClassLabel # Define a Classlabel object to use to map string labels to integers. classmap = ClassLabel(num_classes=4, names=['B-LOC', 'B-PER', 'B-FOOD', 'O']) train_sentences = [ {'text': 'I live in Madrid', 'labels':['O', 'O', 'O', 'B-LOC']}, {'text': 'Peter lives in Spain', 'labels':['B-PER', 'O', 'O', 'B-LOC']}, {'text': 'He likes pasta', 'labels':['O', 'O', 'B-FOOD']}, ] # Map text to tokenizer ids. ds = ds.map(lambda x: tokenizer(x["text"], truncation=True)) # Map labels to label ids. ds = ds.map(lambda y: {"labels": classmap.str2int(y["labels"])}) To compute metrics with the labeled inputs that you have: import evaluate metric = evaluate.load("seqeval") def compute_metrics(p): predictions, labels = p predictions = predictions.argmax(axis=2) # Remove ignored index (special tokens) true_predictions = [ [label_list[p] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] true_labels = [ [label_list[l] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] results = metric.compute(predictions=true_predictions, references=true_labels) return { "precision": results["overall_precision"], "recall": results["overall_recall"], "f1": results["overall_f1"], "accuracy": results["overall_accuracy"], } To use with the Trainer object import pandas as pd import evaluate from datasets import Dataset from datasets import ClassLabel from transformers import AutoModelForTokenClassification, Trainer, AutoTokenizer, DataCollatorForTokenClassification # Define a Classlabel object to use to map string labels to integers. 
classmap = ClassLabel(num_classes=4, names=['B-LOC', 'B-PER', 'B-FOOD', 'O']) train_sentences = [ {'text': 'I live in Madrid', 'labels':['O', 'O', 'O', 'B-LOC']}, {'text': 'Peter lives in Spain', 'labels':['B-PER', 'O', 'O', 'B-LOC']}, {'text': 'He likes pasta', 'labels':['O', 'O', 'B-FOOD']}, ] eval_sentences = [ {"text": "I like pasta from Madrid , Spain", 'labels': ['O', 'O', 'B-FOOD', 'O', 'B-LOC', 'O', 'B-LOC']} ] ds_train = Dataset.from_pandas(pd.DataFrame(data=train_sentences)) ds_eval = Dataset.from_pandas(pd.DataFrame(data=eval_sentences)) model = AutoModelForTokenClassification.from_pretrained("distilbert-base-multilingual-cased", id2label={i:classmap.int2str(i) for i in range(classmap.num_classes)}, label2id={c:classmap.str2int(c) for c in classmap.names}, finetuning_task="ner") tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased") data_collator = DataCollatorForTokenClassification(tokenizer) ds_train = ds_train.map(lambda x: tokenizer(x["text"], truncation=True)) ds_eval = ds_eval.map(lambda x: tokenizer(x["text"], truncation=True)) ds_train = ds_train.map(lambda y: {"labels": classmap.str2int(y["labels"])}) ds_eval = ds_eval.map(lambda y: {"labels": classmap.str2int(y["labels"])}) metric = evaluate.load("seqeval") def compute_metrics(p): predictions, labels = p predictions = predictions.argmax(axis=2) # Remove ignored index (special tokens) true_predictions = [ [label_list[p] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] true_labels = [ [label_list[l] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] results = metric.compute(predictions=true_predictions, references=true_labels) return { "precision": results["overall_precision"], "recall": results["overall_recall"], "f1": results["overall_f1"], "accuracy": results["overall_accuracy"], } # Initialize our Trainer trainer = Trainer( model=model, train_dataset=ds_train, eval_dataset=ds_eval, data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics, ) trainer.train() | 4 | 8 |
75,717,408 | 2023-3-13 | https://stackoverflow.com/questions/75717408/python-dataframe-isocalendar-boolean-condition-not-producing-desired-result-wh | I am surprised that my simple boolean condition was producing a complete year result when I wanted only the first week's data of that year only. My code: # Some sample data df1 = pd.DataFrame([1596., 1537., 1482., 1960., 1879., 1824.],index=['2007-01-01 00:00:00', '2007-01-01 01:00:00', '2007-01-01 02:00:00', '2007-12-31 21:00:00', '2007-12-31 22:00:00', '2007-12-31 23:00:00']) df1.index = pd.to_datetime(df1.index,format = '%Y-%m-%d %H:%M:%S') # Consider and plot only the 2007 year and first week result year_plot = 2007 year_data = df1[(df1.index.year==year_plot)&(df1.index.isocalendar().week==1)] print(year_data) DAYTON_MW Datetime 2007-01-01 00:00:00 1596.0 2007-01-01 01:00:00 1537.0 2007-01-01 02:00:00 1482.0 2007-01-01 03:00:00 1422.0 2007-01-01 04:00:00 1402.0 ... ... 2007-12-31 19:00:00 2110.0 2007-12-31 20:00:00 2033.0 2007-12-31 21:00:00 1960.0 2007-12-31 22:00:00 1879.0 2007-12-31 23:00:00 1824.0 192 rows Γ 1 columns year_data.plot(figsize=(15, 5), title='Week Of Data') plt.show() I need your help to know where the problem is. Update: The problem has been found. Meantime, @J_H also found the issue. I am surprized why it is behaving like this, where, it is treating last days in 2007 year as week 1. Result: Based on the accepted answer, the solution is df1[(df1.index.isocalendar().year==year_plot)&(df1.index.isocalendar().week==1)]\ .plot(figsize=(15, 5), title='Week Of Data')# plt.savefig('oneweek.png') plt.show() | This is normal behavior. >>> df1.index.isocalendar().week 2007-01-01 00:00:00 1 2007-01-01 01:00:00 1 2007-01-01 02:00:00 1 2007-12-31 21:00:00 1 2007-12-31 22:00:00 1 2007-12-31 23:00:00 1 Name: week, dtype: UInt32 >>> >>> df1.index.isocalendar().year 2007-01-01 00:00:00 2007 2007-01-01 01:00:00 2007 2007-01-01 02:00:00 2007 2007-12-31 21:00:00 2008 2007-12-31 22:00:00 2008 2007-12-31 23:00:00 2008 Name: year, dtype: UInt32 Saying "January" is a bit vague, but "January 2007" describes a specific 31-day interval. Similarly, saying "week 1" is a bit vague. Typically we would pass around a 2-tuple of (iso_year, iso_week). The difficulty you're running into here is that all of these timestamps are in week 1, but some are week 1 of 2007 and some are week 1 of 2008. https://en.wikipedia.org/wiki/ISO_week_date An ISO week-numbering year (also called ISO year informally) has 52 or 53 full weeks. That is 364 or 371 days ... Weeks start on a Monday. December 31st was a Monday. Each week's year is the Gregorian year in which the Thursday falls. The first week of the year, hence, always contains 4 January. In the period 4 January to 28 December the ISO week year number is always equal to the Gregorian year number. The same is true for every Thursday. Monday the 31st of December 2007 satisfies neither of those. The code is doing what we asked of it. Recommend that you model time with (iso_year, iso_week) rather than just a single (iso_week) attribute. Carefully segregate these two identifiers: year iso_year The first refers to a Gregorian year, e.g. 2023 CE, the sort of thing that appears on your desk calendar. The second refers to "an ISO week-numbering year", which is quite a different concept. See the wikipedia page for a definition of what it describes. | 3 | 3 |
75,714,478 | 2023-3-12 | https://stackoverflow.com/questions/75714478/splitting-text-based-on-a-delimiter-in-python | I have just started learning python. I am working on netflix_tiles dataset downloaded from Kaggle. Some of the entries in the director column have multiple director names separated by column, i was trying to separate director names using the split function The following is one of the original values loaded from the file to dataframe s7 Movie My Little Pony: A New Generation, Robert Cullen, JosΓ© Luis Ucha Vanessa Hudgens, .. I am using the following code to do the split def strip(x): x = x.strip().split(',') return x director_counts = df["director"].apply(strip) After the above code executes the output is as follows s7 [Robert Cullen, JosΓ© Luis Ucha] The director name is not split based on comma and I am also seeing the index(s7) is also returned from the function when i passed just the director column to the function. Can anyone please tell me why is it behaving in this way? Edit: Tried this as well director_counts = df['director'].str.split(',\s*') Link to collab: https://colab.research.google.com/drive/1OXJ9XKCBVg4-6W8Hiqfy4ZTkgz0IVqbR?usp=sharing | Use str.strip(): df = pd.read_csv('/home/damien/Downloads/netflix_titles.csv.zip') directors = df['director'].str.split(',\s*') Output: >>> directors 0 [Kirsten Johnson] 1 NaN 2 [Julien Leclercq] 3 NaN 4 NaN ... 8802 [David Fincher] 8803 NaN 8804 [Ruben Fleischer] 8805 [Peter Hewitt] 8806 [Mozez Singh] Name: director, Length: 8807, dtype: object Update I was expecting it to come in two different rows Use explode: >>> directors.explode() 0 Kirsten Johnson 1 NaN 2 Julien Leclercq 3 NaN 4 NaN ... 8802 David Fincher 8803 NaN 8804 Ruben Fleischer 8805 Peter Hewitt 8806 Mozez Singh Name: director, Length: 9612, dtype: object # <- 9612 rows instead of 8807 To get count per director, use value_counts (drop nan by default): >>> directors.explode().value_counts() Rajiv Chilaka 22 Jan Suter 21 RaΓΊl Campos 19 Suhas Kadav 16 Marcus Raboy 16 .. Raymie Muzquiz 1 Stu Livingston 1 Joe Menendez 1 Eric Bross 1 Mozez Singh 1 Name: director, Length: 4993, dtype: int64 | 3 | 3 |
75,714,660 | 2023-3-12 | https://stackoverflow.com/questions/75714660/how-to-view-the-value-of-a-metashape-application-attribute-that-displays-in-angl | How do I access the value of an object attribute that displays in angle brackets like: <attribute 'version' of 'Metashape.Metashape.Application' objects>? Specifically, I am using the Metashape Python module and run the following lines within an interactive Python session: import Metashape a = Metashape.Application a.version and this is when I get <attribute 'version' of 'Metashape.Metashape.Application' objects> I've tried print(a.version) and get the same output. According to the module reference doc, this attribute should be a string, so I'm confused why it can't just be displayed as a string. | According to the docs: An instance of Application object can be accessed using Metashape.app attribute, so there is usually no need to create additional instances in the user code. so... import Metashape print(Metashape.app.version) If you want to do it your way, you need to instance Application. import Metashape app = Metashape.Application() print(app.version) | 3 | 4 |
75,713,961 | 2023-3-12 | https://stackoverflow.com/questions/75713961/python-requests-returns-403-even-with-headers | I'm trying to get content of website but my requests return me an 403 ERROR. After searching, I found Network>Headers section to add headers before GET request and tried these headers. from bs4 import BeautifulSoup as bs import requests url = "https://clutch.co/us/agencies/digital-marketing" HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"} ### Also tried "Referer" , "sec-ch-ua-platform" and "Origin" headers but nothing changed. html = requests.get(url,headers=HEADERS) print("RESULT:",html) But result didn't change. | You can try to load the page from the Google cache instead directly: import requests from bs4 import BeautifulSoup url = "https://clutch.co/us/agencies/digital-marketing" cache_URL = "https://webcache.googleusercontent.com/search?q=cache:" def get_data(link): hdr = { "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Mobile Safari/537.36" } req = requests.get(cache_URL + link, headers=hdr) content = req.text return content soup = BeautifulSoup(get_data(url), 'html.parser') for h3 in soup.select('h3.company_info'): print(h3.get_text(strip=True)) Prints: WebFX Ignite Visibility SmartSites Thrive Internet Marketing Agency Lilo Social NEWMEDIA.COM Funnel Boost Media Direct Online Marketing SeedX Inc. Impactable ... | 3 | 3 |
75,714,112 | 2023-3-12 | https://stackoverflow.com/questions/75714112/how-do-i-configure-pdm-for-a-src-based-python-repository | I have previously arranged a Python repository without a src folder, and got it running with: pdm install --dev pdm run mymodule I am failing to replicate the process in a repository with a src folder. How do I do it? pyproject.toml [project] name = "mymodule" version = "0.1.0" description = "Minimal Python repository with a src layout." requires-python = ">=3.10" [build-system] requires = ["pdm-pep517>=1.0.0"] build-backend = "pdm.pep517.api" [project.scripts] mymodule = "cli:invoke" src/mymodule/__init__.py Empty file. src/mymodule/cli.py def invoke(): print("Hello world!") if __name__ == "__main__": invoke() With the configuration above, I can pdm install --dev but pdm run mymodule fails with: Traceback (most recent call last): File "/home/user/Documents/mymodule/.venv/bin/mymodule", line 5, in <module> from cli import invoke ModuleNotFoundError: No module named 'cli' | You need to modify your pyproject.toml as follows: [project.scripts] mymodule = "mymodule.cli:invoke" For good measure, you may want to delete the .venv folder and .pdm.toml file before running pdm install --dev again. | 7 | 3 |
75,712,458 | 2023-3-12 | https://stackoverflow.com/questions/75712458/disable-postgres-index-updation-temporarily-and-update-the-indexes-manually-late | i have about 1300 CSV files with almost 40k of rows in each file, i have written a python code to read the file and convert all the 40k entries into single insert statement to insert in Postgres database. The psudocode is following for file in tqdm(files, desc="Processing files"): rows = ReadFile(file) # read all 40k rows from the file q = GenerateQuery(rows) # convert all rows into single bulk insert statement InsertData(q) # Execute the generated query to insert data The code is fine but there are performance issues, when I start the code with empty table it takes around 2 to 3it/s, but after 15 to 20 files it takes 10 to 12it/s, and then the performance drops exponentially with each 10 to 15 files processing, the per iteration time keeps on increasing even it reaches 40 to 50s/it, it's hilarious, according to my understanding I have developed the following hypothesis Since in start table is empty its very easy to update table indexes, so it takes no time for 40k bulk insert records to update indexes but with growing records it become harder and even harder to update indexes in the table with 10m+ records. My Question is can I temporarily disable index updation of the table, so that after complete data dump I will then manually update the indexes by calling some query in postgres which for now I don't know if it really exists. | No, you cannot disable indexes. If the table is empty or almost empty when you start loading, you can win by dropping the indexes before you start and creating them again afterwards. However, INSERT performance should stay constant over time. It is difficult to guess what might be the cause. You can try using auto_explain to capture EXPLAIN (ANALYZE, BUFFERS) output for a slow insert, that may give you a clue. For bulk loads, you will get the best performance with COPY. | 3 | 1 |
75,692,370 | 2023-3-10 | https://stackoverflow.com/questions/75692370/fastapi-how-to-customise-422-exception-for-specific-route | How to replace 422 standard exception with custom exception only for one route in FastAPI? I don't want to replace for the application project, just for one route. I read many docs, and I don't understand how to do this. Example of route that I need to change the 422 exception: from fastapi import APIRouter from pydantic import BaseModel router = APIRouter() class PayloadSchema(BaseModel): value_int: int value_str: str @router.post('/custom') async def custom_route(payload: PayloadSchema): return payload | You can register multiple error handlers with the router. You can re-declare the default handler, and then optionally call it depending on the path: class PayloadSchema(BaseModel): value_int: int value_str: str router = APIRouter() @router.post('/standard') async def standard_route(payload: PayloadSchema): return payload @app.exception_handler(RequestValidationError) async def standard_validation_exception_handler(request: Request, exc: RequestValidationError): return JSONResponse( status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, content=jsonable_encoder({"detail": exc.errors(), "body": exc.body}), ) @router.post('/custom') async def custom_route(payload: PayloadSchema): return payload @app.exception_handler(RequestValidationError) async def custom_exception_handler(request: Request, exc: RequestValidationError): if (request.url.path == '/custom'): return JSONResponse({"error": "Bad request, must be a valid PayloadSchema format"}, status_code=400) else: return await standard_validation_exception_handler(request, exc) app = FastAPI() app.include_router(router) app.add_exception_handler(RequestValidationError, custom_exception_handler) Or you can do it directly with the @app.exception_handler decorator without the router (see FastAPI docs). | 5 | 3 |
75,678,711 | 2023-3-8 | https://stackoverflow.com/questions/75678711/why-does-the-moon-spiral-towards-earth-in-simulation | I am trying to simulate a somewhat realistic system in which Earth and the Moon interact gravitationally with each other. The problem is that the Moon keeps spiraling towards Earth and I don't understand why. This is my code:
from math import sin,cos,sqrt,atan2,pi
import pygame
pygame.init()

class Planet:
    dt = 1/100
    G = 6.67428e-11  # G constant
    scale = 1/(1409466.667)  # 1 m = 1/1409466.667 pixels

    def __init__(self,x=0,y=0,radius=0,color=(0,0,0),mass=0,vx=0,vy=0):
        self.x = x  # x-coordinate in the pygame window
        self.y = y  # y-coordinate in the pygame window
        self.radius = radius
        self.color = color
        self.mass = mass
        self.vx = vx  # velocity in the x axis
        self.vy = vy  # velocity in the y axis

    def draw(self,screen):
        pygame.draw.circle(screen, self.color, (self.x, self.y), self.radius)

    def orbit(self,trace):
        pygame.draw.rect(trace, self.color, (self.x, self.y, 2, 2))

    def update_vel(self,Fnx,Fny):
        ax = Fnx/self.mass  # acceleration in the x- and y-axis for this body
        ay = Fny/self.mass
        self.vx -= ((ax * Planet.dt)/Planet.scale)
        self.vy -= ((ay * Planet.dt)/Planet.scale)
        self.update_pos()

    def update_pos(self):
        self.x += ((self.vx * Planet.dt))  # changes position according to each body's velocity
        self.y += ((self.vy * Planet.dt))

    def move(self,body):
        dx = (self.x - body.x)  # difference in the x- and y-axis between the bodies
        dy = (self.y - body.y)
        r = (sqrt((dy**2)+(dx**2)))  # distance between the bodies
        angle = atan2(dy, dx)  # angle between the bodies, via atan2
        if r < self.radius:  # if the distance is less than the body's radius, use Gauss's gravitational law for the force
            F = 4/3 * pi * r
            Fx = cos(angle) * F
            Fy = sin(angle) * F
        else:
            F = (Planet.G*self.mass*body.mass)/((r/Planet.scale)**2)  # Newton's gravitational formula
            Fx = cos(angle) * F
            Fy = sin(angle) * F
        return Fx,Fy

def motion():
    for i in range(0,len(bodies)):
        Fnx = 0  # net force
        Fny = 0
        for j in range(0,len(bodies)):
            if bodies[i] != bodies[j]:
                Fnx += (bodies[i].move(bodies[j]))[0]
                Fny += (bodies[i].move(bodies[j]))[1]
            elif bodies[i] == bodies[j]:
                continue
        bodies[i].update_vel(Fnx,Fny)
        bodies[i].draw(screen)
        bodies[i].orbit(trace)
        Fnx,Fny = 0,0

screen = pygame.display.set_mode([900,650])  # width - height
trace = pygame.Surface((900, 650))
pygame.display.set_caption("Moon simulation")
FPS = 150  # how many frames per second the game should update

earth = Planet(450,325,30,(0,0,255),5.97219*10**(24),-24.947719394204714/2)  # 450 = xpos, 325 = ypos, 30 = radius
luna = Planet(450,(575/11),10,(128,128,128),7.349*10**(22),1023)
bodies = [earth,luna]

running = True
clock = pygame.time.Clock()

while running:  # runs until the user closes the window
    clock.tick(FPS)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0,0,0))
    pygame.Surface.blit(screen, trace, (0, 0))
    motion()
    pygame.display.flip()

pygame.quit()
Once I get the Earth-Moon system to work I would like to expand on it and try three bodies (which is why there is so much otherwise "unnecessary" code). I am open to suggestions and/or advice! Thanks | You need to compute the forces in one go, not body by body. Instead of computing the forces during the update loop, while shifting positions:
def motion():
    for i in range(0,len(bodies)):
        Fnx = 0  # net force
        Fny = 0
        for j in range(0,len(bodies)):
            if bodies[i] != bodies[j]:
                Fnx += (bodies[i].move(bodies[j]))[0]
                Fny += (bodies[i].move(bodies[j]))[1]
            elif bodies[i] == bodies[j]:
                continue
        bodies[i].update_vel(Fnx,Fny)
        bodies[i].draw(screen)
        bodies[i].orbit(trace)
        Fnx,Fny = 0,0
compute the forces up front:
def motion():
    force = [
        (
            sum([(bodies[i].move(bodies[j]))[0] for j in range(0, len(bodies)) if i != j]),
            sum([(bodies[i].move(bodies[j]))[1] for j in range(0, len(bodies)) if i != j])
        )
        for i in range(0,len(bodies))
    ]
    for i in range(0,len(bodies)):
        Fnx = force[i][0]
        Fny = force[i][1]
        bodies[i].update_vel(Fnx,Fny)
        bodies[i].draw(screen)
        bodies[i].orbit(trace)
        Fnx,Fny = 0,0
(I don't usually write Python, so the style is not perfect.) The following text is from a previous answer. It may be helpful, but it is not required to solve the problem asked; you may stop reading here. You can reduce numeric truncation errors further with more elaborate methods, such as Runge-Kutta. For that, don't run update_vel and update_pos one after the other; instead, try to write an update_state method which combines both simultaneously. What matters is that the left-hand side of the equations is either the delta or the new state, while the right-hand side of the equations uses the old state exclusively (higher-order Runge-Kutta will have some intermediate states, at fractions of Planet.dt). If Runge-Kutta is too heavy to start with, consider MacCormack or Lax-Wendroff. For a Lax-Wendroff'ish way, instead of:
def update_vel(self,Fnx,Fny):
    ax = Fnx/self.mass
    ay = Fny/self.mass
    self.vx -= ((ax * Planet.dt)/Planet.scale)
    self.vy -= ((ay * Planet.dt)/Planet.scale)
    self.update_pos()

def update_pos(self):
    self.x += ((self.vx * Planet.dt))
    self.y += ((self.vy * Planet.dt))
try this:
def update_state(self,Fnx,Fny):
    ax = Fnx/self.mass
    ay = Fny/self.mass
    self.x += (self.vx * Planet.dt) - (ax/Planet.scale) * Planet.dt**2
    self.y += (self.vy * Planet.dt) - (ay/Planet.scale) * Planet.dt**2
    self.vx -= (ax/Planet.scale) * Planet.dt
    self.vy -= (ay/Planet.scale) * Planet.dt
| 5 | 4 |
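As one hedged, concrete way to act on the Runge-Kutta advice above (it is not the answer's own method), a velocity-Verlet style step for the whole system could look roughly like the sketch below. Here net_accel is a hypothetical helper, not in the original code, that returns for every body the net force from move() divided by the body's mass and by Planet.scale; the minus signs mirror the sign convention of the question's update_vel.

def verlet_step(bodies, net_accel):
    old = net_accel(bodies)  # accelerations at the current positions
    for b, (ax, ay) in zip(bodies, old):
        b.x += b.vx * Planet.dt - 0.5 * ax * Planet.dt**2
        b.y += b.vy * Planet.dt - 0.5 * ay * Planet.dt**2
    new = net_accel(bodies)  # accelerations at the moved positions
    for b, (ax_old, ay_old), (ax_new, ay_new) in zip(bodies, old, new):
        b.vx -= 0.5 * (ax_old + ax_new) * Planet.dt
        b.vy -= 0.5 * (ay_old + ay_new) * Planet.dt

Because every position is advanced before any velocity uses the new accelerations, this keeps the "forces in one go" property the answer insists on.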
75,701,208 | 2023-3-10 | https://stackoverflow.com/questions/75701208/how-to-apply-condition-on-grouped-records-in-a-pyspark-dataframe | I am looking for a solution to check multiple conditions within a group. First, I'm checking for overlaps between records (based on id), and second, I should make an exception for the highest-numbered record within the same contiguous overlap. On top of that, there can be multiple overlaps for the same id. The example:
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

data = [('A',1000,1,100),
        ('B',1001,0,10),
        ('B',1002,10,15),
        ('B',1002,20,22),
        ('B',1003,25,50),
        ('B',1004,50,55),
        ('B',1005,53,56),
        ('B',1006,60,100),
        ('C',1007,1,100)
       ]

schema = StructType([
    StructField("id",StringType(),True),
    StructField("tran",IntegerType(),True),
    StructField("start",IntegerType(),True),
    StructField("end",IntegerType(),True),
])

df = spark.createDataFrame(data=data,schema=schema)
df.show()

+---+----+-----+---+
| id|tran|start|end|
+---+----+-----+---+
| A|1000| 1|100|
| B|1001| 0| 10|
| B|1002| 10| 15|
| B|1003| 20| 22|
| B|1004| 25| 50|
| B|1005| 50| 55|
| B|1006| 53| 56|
| B|1007| 60|100|
| C|1008| 1|100|
+---+----+-----+---+
The desired dataframe should look like this:
| id|tran|start|end|valid|
+---+----+-----+---+-----+
| A|1000| 1|100| yes| # valid because, within this id, there is no overlap between start and end
| B|1001| 0| 10| no| # invalid because, within this id, it overlaps with the next record
| B|1002| 10| 15| yes| # it overlaps with the previous one but has the highest tran number of the two
| B|1003| 20| 22| yes| # yes because there is no overlap
| B|1004| 25| 50| no| # invalid because it overlaps and its tran is not the highest
| B|1005| 50| 55| no| # invalid because it overlaps and its tran is not the highest
| B|1006| 53| 56| yes| # it overlaps with the previous ones but has the highest tran number among the three contiguously overlapping records
| B|1007| 60|100| yes| # no overlap
| C|1008| 1|100| yes| # no overlap
+---+----+-----+---+-----+
Massive thanks to the legend who solves this :) | Join the dataframe with itself, group by, count how many records overlap, and derive the validation flag:
import pyspark.sql.functions as f

df.alias('a').join(
    df.alias('b'),
    (f.col('a.id') == f.col('b.id')) &
    (f.col('a.start') != f.col('b.start')) &
    (f.col('b.start').between(f.col('a.start'), f.col('a.end'))),
    'left'
) \
    .groupBy('a.id', 'a.tran', 'a.start', 'a.end') \
    .agg(f.count('b.id').alias('valid')) \
    .withColumn('valid', f.when(f.col('valid') == f.lit(0), 'yes').otherwise('no')) \
    .show()

+---+----+-----+---+-----+
| id|tran|start|end|valid|
+---+----+-----+---+-----+
| B|1001| 0| 10| no|
| A|1000| 1|100| yes|
| B|1002| 10| 15| yes|
| B|1002| 20| 22| yes|
| B|1003| 25| 50| no|
| B|1004| 50| 55| no|
| B|1005| 53| 56| yes|
| B|1006| 60|100| yes|
| C|1007| 1|100| yes|
+---+----+-----+---+-----+
| 3 | 2 |
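If, within each id, an overlap can only ever involve the immediately following record when the rows are ordered by start (which is true for the sample data), a window function is a possible self-join-free variant. This is a hedged alternative sketch, not part of the accepted answer, and it reproduces the same yes/no flags for the sample above:

from pyspark.sql import Window
import pyspark.sql.functions as f

w = Window.partitionBy('id').orderBy('start')
df.withColumn(
    'valid',
    # a row is invalid when the next row (by start) begins before this row ends
    f.when(f.lead('start').over(w) <= f.col('end'), 'no').otherwise('yes')
).show()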