question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
76,423,192 | 2023-6-7 | https://stackoverflow.com/questions/76423192/output-of-typeobject-not-returned-in-assert-message-databricks-python | When applying the Python function type() to a single object, the type of the object is returned. num = 5 type(num) Out[1]: int When embedding this output into a string and printing the result, this seems to behave as expected. num = 5 print(f"type of {num} is {type(num)}") type of 5 is <class 'int'> However, when using this exact message as an assertion error message, the type disappears from the message output. num = 5 assert isinstance(num,str), f"type of {num} is {type(num)}" AssertionError: type of 5 is I am running the code through a notebook on a Databricks cluster, which is displayed in a web browser. The expected output would be: AssertionError: type of 5 is <class 'int'>. What is the reason for this and how can it be avoided? Edit: As mentioned by several commenters, the code works just fine running outside of the Databricks environment. I have successfully verified this too using Python 3.10.6. | The character < in the output is being interpreted as the beginning of an HTML tag. By applying replace("<", "&lt;") to the f-string, this can be avoided. The full code then becomes: num = 5 assert isinstance(num,str), f"type of {num} is {type(num)}".replace("<", "&lt;") | 2 | 4 |
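A side note not part of the accepted answer above: instead of escaping only "<", the standard-library html module can escape every HTML-special character in the message. A minimal sketch, assuming the same notebook rendering issue:

```python
# Hypothetical alternative: escape all HTML-special characters ("<", ">", "&")
# in the assertion message before the notebook renders it as HTML.
import html

num = 5
assert isinstance(num, str), html.escape(f"type of {num} is {type(num)}")
```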
76,426,902 | 2023-6-7 | https://stackoverflow.com/questions/76426902/how-to-add-rows-on-a-dataset-based-on-a-date-condition | I'm having problems with a dataset I want to use to display on a PowerBI report, but in some registers it has a start time and an end time with a different date, which makes it hard to display on a daily basis. I want to divide the register automatically for each day. I have the following register, for example: Date_Start Date_End 18/04/2023 10:53:00 a. m. 20/04/2023 03:51:00 a. m. Since I am using the Date_Start column to create the report on a daily basis, with this register, I can't display the date 19/04/2023, since I don't have a register on Date_Start with that date. So, I want to process the register and divide it like this: Date_Start Date_End 18/04/2023 10:53:00 a. m. 18/04/2023 11:59:59 p. m. 19/04/2023 00:00:00 a. m. 19/04/2023 11:59:59 p. m. 20/04/2023 00:00:00 a. m. 20/04/2023 03:51:00 a. m. I am not sure if this is possible using PowerQuery or maybe using Python with the Pandas or Numpy libraries. Can you support me with this topic? I'd appreciate it. Thanks! :) | PowerQuery method let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content], #"Added Custom" = Table.AddColumn(Source, "Custom", each List.Transform({Number.IntegerDivide(Number.From([Date_Start]), 1)..Number.IntegerDivide(Number.From([Date_End]), 1)}, each Text.From(Date.From(_)))), #"Expanded Custom" = Table.ExpandListColumn(#"Added Custom", "Custom"), #"Added Index" = Table.AddIndexColumn(#"Expanded Custom", "Index", 0, 1, Int64.Type), #"Added Custom1" = Table.AddColumn(#"Added Index", "Date_Start.", each if [Index]=0 then [Custom] & " " & Text.From(DateTime.Time([Date_Start])) else [Custom] & " " &"00:00 AM"), #"Added Custom2" = Table.AddColumn(#"Added Custom1", "Date_End.", each if [Index]=List.Max(#"Added Index"[Index]) then [Custom] & " " & Text.From(DateTime.Time([Date_End])) else [Custom] & " " &"11:59 PM"), #"Removed Columns" = Table.RemoveColumns(#"Added Custom2",{"Date_Start", "Date_End", "Custom", "Index"}), #"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",{{"Date_Start.", type datetime}, {"Date_End.", type datetime}}) in #"Changed Type" | 2 | 1 |
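The question above also asks whether Pandas could do this; a minimal sketch of the same per-day split (not part of the accepted answer, with column names and the 23:59:59 cut-off assumed from the question):

```python
# Split each Date_Start/Date_End interval into one row per calendar day.
import pandas as pd

df = pd.DataFrame({
    "Date_Start": pd.to_datetime(["2023-04-18 10:53:00"]),
    "Date_End": pd.to_datetime(["2023-04-20 03:51:00"]),
})

rows = []
for start, end in zip(df["Date_Start"], df["Date_End"]):
    for day in pd.date_range(start.normalize(), end.normalize(), freq="D"):
        rows.append({
            "Date_Start": max(start, day),  # first day keeps the real start time
            "Date_End": min(end, day + pd.Timedelta(hours=23, minutes=59, seconds=59)),
        })

result = pd.DataFrame(rows)
print(result)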
76,421,664 | 2023-6-7 | https://stackoverflow.com/questions/76421664/automatically-merging-multiple-pydantic-models-with-overlapping-fields | It is kind of difficult to accurately phrase my question in one sentence. I have the following models: from pydantic import BaseModel class Detail1(BaseModel): round: bool volume: float class AppleData1(BaseModel): origin: str detail: Detail1 class Detail2(BaseModel): round: bool weight: float class AppleData2(BaseModel): origin: str detail: Detail2 Here AppleData1 has an attribute detail which is of the type Detail1. AppleData2 has an attribute detail which is of the type Detail2. I want to make an Apple class which contains all the attributes of AppleData1 and AppleData2. Question (How to implement the algorithm?) Do you have a generic approach to implement this algorithm: Whenever AppleData1 and AppleData2 have an attribute of the same name: If they are of the same type, use one of them. For example, AppleData1.origin and AppleData2.origin are both of the type str. So Apple.origin is also of type str. If they are of different types, merge them. For example, AppleData1.detail and AppleData2.detail, they are of type Detail1 and Detail2 respectively. So Apple.detail should contain all the inner attributes. Any common inner attribute is always for the same physical quantity. So overwriting is allowed. For example, Detail1.round and Detail2.round are both of type bool. So the resulting Apple.detail.round is also of type bool. Expect Results The end results should be equivalent to the Apple model below. (The definition of Detail class below is only used to make the code below complete. The generic approach should not hard-code the Detail class.) class Detail(BaseModel): round: bool volume: float weight: float class Apple(BaseModel): origin: str detail: Detail My Solution (bad example) class Detail(Detail1, Detail2): pass class Apple(AppleData1, AppleData2): origin: str detail: Detail print(Apple.schema_json()) This solution works but it is too-specific. Here I need to pin-point that detail attribute from AppleData1 and AppleData2, and specifically create the Detail class from specifically Detail1 and Detail2. I need to pin-point that origin is a common attribute of the same type (str). So I specifically hard-coded origin: str in the definition of the Apple class. 
| Simplified solution Implementing a custom recursive version of the create_model function to dynamically construct a "combined" model class should work: from typing import TypeGuard, TypeVar from pydantic import BaseModel, create_model from pydantic.fields import SHAPE_SINGLETON M = TypeVar("M", bound=BaseModel) def is_pydantic_model(obj: object) -> TypeGuard[type[BaseModel]]: return isinstance(obj, type) and issubclass(obj, BaseModel) def create_combined_model( __name__: str, /, model1: type[M], model2: type[M], ) -> type[M]: field_overrides = {} for name, field1 in model1.__fields__.items(): field2 = model2.__fields__.get(name) if field2 is None: continue if is_pydantic_model(field1.type_): assert field1.shape == SHAPE_SINGLETON, "No model collections allowed" assert is_pydantic_model(field2.type_), f"{name} with different types" sub_model = create_combined_model( f"Combined{field1.type_.__name__}{field2.type_.__name__}", field1.type_, field2.type_, ) field_overrides[name] = (sub_model, field1.field_info) else: assert field1.annotation == field2.annotation, f"Different types" return create_model(__name__, __base__=(model1, model2), **field_overrides) # type: ignore This incorporates your restrictions/assumptions about the models that can be combined that you elaborated on in your comments. It does not support combining fields that are annotated with C[M], where C is any generic collection type and M is a subclass of BaseModel. That is what the SHAPE_SINGLETON check assures. It would possible to incorporate logic that allows combining models and retaining the shape of the field (e.g. list[Detail1] and list[Detail2]), but I left that out because you did not ask for that explicitly and it is a bit more complicated. Demo from pydantic import BaseModel class AppleBase(BaseModel): foo: str class DetailBase(BaseModel): round: bool class Detail1(DetailBase): volume: float class AppleData1(AppleBase): bar: int detail: Detail1 class Detail2(DetailBase): weight: float class AppleData2(AppleBase): baz: float detail: Detail2 Apple = create_combined_model("Apple", AppleData1, AppleData2) print(Apple.schema_json(indent=4)) Output { "title": "Apple", "type": "object", "properties": { "foo": { "title": "Foo", "type": "string" }, "baz": { "title": "Baz", "type": "number" }, "detail": { "$ref": "#/definitions/CombinedDetail1Detail2" }, "bar": { "title": "Bar", "type": "integer" } }, "required": [ "foo", "baz", "detail", "bar" ], "definitions": { "CombinedDetail1Detail2": { "title": "CombinedDetail1Detail2", "type": "object", "properties": { "round": { "title": "Round", "type": "boolean" }, "weight": { "title": "Weight", "type": "number" }, "volume": { "title": "Volume", "type": "number" } }, "required": [ "round", "weight", "volume" ] } } } Caveats An obvious drawback to this solution is that because it dynamically creates the model class, it is impossible to properly convey the type of the resulting model in terms of static analysis. The way I wrote it now, the function is generic to the greatest extent possible in that the returned type will be inferred as either the joined or the union type, depending on the static type checker, of the two input models model1 and model2. In the demo example this means some type checkers like Mypy for example will infer the type of Apple to be AppleBase (join). This is of course not wrong, but it is not as specific as we might like because it fails to account for the existence of the bar, baz, and detail attributes. 
A type checker that uses unions instead might infer the type as AppleData1 | AppleData2 instead. (I have not tested it, but I believe Pyright does this.) This may or may not be preferable, because it would at least always cover the existence of a detail attribute (albeit with yet another union type of Detail1 | Detail2), but it would be ambiguous whether or not Apple has a bar or a baz attribute to such a type checker. The ideal solution would be to define the return type as the intersection of the two model types passed into it. But unfortunately we do not have that typing construct (yet). All of this has no effect on the runtime behavior of the constructed class of course, but it is not ideal for IDE auto-suggestions for example. Consequently, your initial explicit approach of using multiple inheritance for all the models involved is still something I would recommend, unless your models become very large/complex and numerous. | 4 | 4 |
76,426,124 | 2023-6-7 | https://stackoverflow.com/questions/76426124/is-vars-same-as-dict | I read that vars() is a built-in that returns the __dict__ attribute of a class, module, or object. But when I checked whether vars(Person) is Person.__dict__, it returned False (Person is the name of the class). class Person: def __init__(self, name, age): self.name = name self.age = age vars(Person) is Person.__dict__ # False | Classes create a new mappingproxy on every __dict__ access. You would see the exact same results from Person.__dict__ is Person.__dict__. | 2 | 4 |
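A short follow-up sketch (not part of the original answer) showing that the two objects compare equal even though each access returns a fresh mappingproxy wrapper:

```python
# Each attribute access on a class builds a new mappingproxy around the same
# underlying dict, so identity differs while equality (and vars()) agree.
class Person:
    pass

print(Person.__dict__ is Person.__dict__)   # False - two distinct wrapper objects
print(vars(Person) == Person.__dict__)      # True  - same underlying mapping
```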
76,380,172 | 2023-6-1 | https://stackoverflow.com/questions/76380172/polars-group-by-value-counts | I need some help with polars: I have a dataframe with a categorical values column ┌───────────────────┬──────────────┬────────┐ │ session_id ┆ elapsed_time ┆ fqid │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i32 ┆ cat │ ╞═══════════════════╪══════════════╪════════╡ │ 20090312431273200 ┆ 0 ┆ intro │ │ 20090312431273200 ┆ 1323 ┆ gramps │ │ 20090312431273200 ┆ 831 ┆ gramps │ │ 20090312431273200 ┆ 1147 ┆ gramps │ │ … ┆ … ┆ … │ │ 20090312431273200 ┆ 5197 ┆ teddy │ │ 20090312431273200 ┆ 6180 ┆ teddy │ │ 20090312431273200 ┆ 7014 ┆ teddy │ │ 20090312431273200 ┆ 7946 ┆ teddy │ └───────────────────┴──────────────┴────────┘ And I want to transform the fqid-column to look like this: ┌───────────────────┬─────────────┬────────────┬────────────┐ │ session_id ┆ fqid_gramps ┆ fqid_intro ┆ fqid_teddy │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i32 ┆ i32 ┆ i32 │ ╞═══════════════════╪═════════════╪════════════╪════════════╡ │ 20090312431273200 ┆ 1 ┆ 1 ┆ 4 │ └───────────────────┴─────────────┴────────────┴────────────┘ That is, I would like to: Group_by over session_id, Make a value_counts() over fqid, Rename columns so that it would be 'fqid_' + category, Turn them into columns (transpose), Add them to the result. Technically, I could achieve this without groupby by using something like column_values = train['fqid'].value_counts().with_columns(pl.concat_str(pl.lit('fqid' + '_').alias('fqid'), pl.col('fqid').cast(pl.String))).transpose() column_values = column_values.rename(column_values.head(1).to_dicts().pop()).slice(1) But when I am trying to make an aggregating function from this replacing train['fqid'] with pl.col('fqid') and making a group_by('session_id').aggregate(func('fqid')) it gives me nothing but errors like AttributeError: 'Expr' object has no attribute 'with_columns'. Could you kindly suggest a proper way of making this operation? | Starting from train=pl.from_repr( """┌───────────────────┬──────────────┬────────┐ │ session_id ┆ elapsed_time ┆ fqid │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i32 ┆ cat │ ╞═══════════════════╪══════════════╪════════╡ │ 20090312431273200 ┆ 0 ┆ intro │ │ 20090312431273200 ┆ 1323 ┆ gramps │ │ 20090312431273200 ┆ 831 ┆ gramps │ │ 20090312431273200 ┆ 1147 ┆ gramps │ │ 20090312431273200 ┆ 5197 ┆ teddy │ │ 20090312431273200 ┆ 6180 ┆ teddy │ │ 20090312431273200 ┆ 7014 ┆ teddy │ │ 20090312431273200 ┆ 7946 ┆ teddy │ └───────────────────┴──────────────┴────────┘""") we can do ( train .group_by( (piv_idx:='session_id'), (len_id:='fqid'), maintain_order=True) .len() .pivot(on=len_id, index=piv_idx, values='len', aggregate_function='first') .select( piv_idx, pl.exclude(piv_idx).name.prefix(f"{len_id}_") ) ) shape: (1, 4) ┌───────────────────┬────────────┬─────────────┬────────────┐ │ session_id ┆ fqid_intro ┆ fqid_gramps ┆ fqid_teddy │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ u32 ┆ u32 ┆ u32 │ ╞═══════════════════╪════════════╪═════════════╪════════════╡ │ 20090312431273200 ┆ 1 ┆ 3 ┆ 4 │ └───────────────────┴────────────┴─────────────┴────────────┘ Since you want the count (now len) of the fqids, you need to include that in the group_by. Next, we do a pivot to make the results wide. The output of pivot doesn't keep the original column name so we have to add that back manually. We do that in a select by first taking the session_id and then adding to that every column except session_id with the prefix 'fqid_' to get the final desired result. 
Incidentally, I'm not using value_counts because it returns a list of structs, so we can't do, for example, train.select(pl.col('fqid').value_counts().over('session_id')). I used the walrus operator to assign the column names to variables in the group_by so that you only need to change the columns in one place without repeating yourself in the pivot and select. | 4 | 6 |
76,404,811 | 2023-6-5 | https://stackoverflow.com/questions/76404811/attributeerror-dataframe-object-has-no-attribute-iteritems | I am using pandas to read a csv on my machine, then I create a pyspark dataframe from the pandas dataframe. df = spark.createDataFrame(pandas_df) I updated my pandas from version 1.3.0 to 2.0. Now, I am getting this error: AttributeError: 'DataFrame' object has no attribute 'iteritems' | Found an answer on github: https://github.com/YosefLab/Compass/issues/92 It is an ongoing issue: iteritems was removed in pandas 2.0. For now I need to downgrade pandas back to version 1.5.3. Edit: Other workarounds may be: Use the latest Spark (3.4.1) https://spark.apache.org/downloads.html For pandas >= 2.0 you can also assign DataFrame.items to DataFrame.iteritems: import pandas as pd pd.DataFrame.iteritems = pd.DataFrame.items https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.items.html?highlight=items#pandas.DataFrame.items | 10 | 28 |
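A small sketch (not from the linked issue) of the monkey-patch workaround above applied defensively before the Spark call:

```python
# Restore the removed alias only when it is missing (pandas >= 2.0), so the
# same code keeps working unchanged on older pandas versions as well.
import pandas as pd

if not hasattr(pd.DataFrame, "iteritems"):
    pd.DataFrame.iteritems = pd.DataFrame.items

# df = spark.createDataFrame(pandas_df)  # should no longer raise AttributeError
```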
76,380,381 | 2023-6-1 | https://stackoverflow.com/questions/76380381/create-virtualenv-for-python-2-7-with-python-3-10 | I am trying to create a virtual environment for python 2.7 on Ubuntu 22.04. I always receive an error as follows: RuntimeError: failed to query /usr/bin/python2.7 with code 1 err: ' File "/usr/local/lib/python3.10/dist-packages/virtualenv/discovery/py_info.py", line 152\n os.path.join(base_dir, exe) for exe in (f"python{major}", f"python{major}.{minor}")\n ^\nSyntaxError: invalid syntax\n' Here is a capture of my terminal for useful information: user@machine:~/environments$ ls /usr/bin/pytho* /usr/bin/python2 /usr/bin/python2.7 /usr/bin/python3 /usr/bin/python3-config /usr/bin/python3.10 /usr/bin/python3.10-config user@machine:~/environments$ pip --version pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10) user@machine:~/environments$ virtualenv --version virtualenv 20.23.0 from /usr/local/lib/python3.10/dist-packages/virtualenv/__init__.py user@machine:~/environments$ virtualenv -p /usr/bin/python2.7 py2_env RuntimeError: failed to query /usr/bin/python2.7 with code 1 err: ' File "/usr/local/lib/python3.10/dist-packages/virtualenv/discovery/py_info.py", line 152\n os.path.join(base_dir, exe) for exe in (f"python{major}", f"python{major}.{minor}")\n ^\nSyntaxError: invalid syntax\n' user@machine:~/environments$ Has anyone else had this problem, or successfully achieved this? | virtualenv versions >= 20.22.0 dropped support for creating Python environments for Python versions <= 3.6, so you'll need to downgrade virtualenv, e.g.: pip install virtualenv==20.21.1 | 6 | 16 |
76,391,344 | 2023-6-2 | https://stackoverflow.com/questions/76391344/implementing-name-synchronization-and-money-transfers-in-transactions-model-wit | I have the following models in my Django application: class Transaction (models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) account_number = models.IntegerField() name = models.CharField(max_length=50) amount = models.DecimalField(max_digits=5, decimal_places=2) created_on = models.DateTimeField() class Wallet(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) account_balance = models.DecimalField(max_digits=5, decimal_places=2, default=0) class AccountNum(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) account_number = models.IntegerField() slug = models.SlugField(unique=True) I want to implement a feature where the name field in the Transactions model gets synchronized with the account owner's name based on the provided account_number input. Additionally, I want to enable money transfers using the current user's wallet and the specified amount in the Transactions model. To provide some context, I have a post-save signal generate_account_number which generates a random 10-digit account number. What are some recommended techniques or approaches to achieve this synchronization of the name field with the account owner's name and enable money transfers using the wallet model and specified amount in the Transaction model? | Even though I failed to implement an account name based on a given account number, I'm happy to share the way we can send money from one account to another. The technical way to do so is by creating only two models, Account and Transaction models, and adding what is in the Wallet model to an Account model like this: class Account(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) account_number = models.IntegerField() account_balance = models.DecimalField(max_digits=12, decimal_places=6) To send funds from one account to another, we have to create sender and receiver fields and assign them to a CustomUser model with different related_name values in the Transaction model, just like this: import random def generate_random_number(): return random.randint(1, 30) class Transaction(models.Model): amount = models.DecimalField(max_digits=12, decimal_places=6) sender = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='transfer_sents') receiver = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='transfer_receives') account_number = models.IntegerField() name = models.CharField(max_length=50) refrence_number = models.CharField(max_length=50, default=generate_random_number) I've written a Django view function that's designed to handle financial transactions. Let me break down how it works: The view takes the amount and name values from the submitted form data, and it retrieves the sender and receiver account objects from the database using the Account model. The sender account is associated with the currently logged-in user, while the receiver account is identified by the account_number provided in the form data. To ensure there are sufficient funds, the view checks if the sender account balance can cover the transaction amount. If it can, the view deducts the 'amount' from the sender account balance and increases the receiver account balance by the same 'amount'. These changes are then saved to the database.
In the event of insufficient funds in the sender account, the view generates an error message using Django's messaging framework. The user is then redirected to the 'Transaction' page. views.py from decimal import Decimal from django.contrib import messages def create_transfer(request): if request.method == 'POST': amount = Decimal(request.POST.get('amount')) name = request.POST.get('name') sender_account = Account.objects.get(user=request.user) receiver_account = Account.objects.get(account_number=request.POST.get('account_number')) if sender_account.account_balance >= amount: sender_account.account_balance -= amount sender_account.save() receiver_account.account_balance += amount receiver_account.save() Transaction.objects.create( sender=sender_account.user, receiver=receiver_account.user, amount=amount, name=name, account_number=receiver_account.account_number ) else: messages.error(request, 'Insufficient Funds') return redirect('Transaction') return render(request, 'create_transfer.html') | 4 | 0 |
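A caveat worth noting (not in the original answer): the two balance updates above are independent saves, so a failure between them could deduct money without crediting it. A minimal sketch of guarding the transfer with a database transaction, reusing the answer's model fields:

```python
# Wrap both saves in one atomic block so they either both commit or both roll back.
from django.db import transaction

def transfer(sender_account, receiver_account, amount):
    with transaction.atomic():
        sender_account.account_balance -= amount
        sender_account.save()
        receiver_account.account_balance += amount
        receiver_account.save()
```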
76,379,924 | 2023-6-1 | https://stackoverflow.com/questions/76379924/create-dataclass-instance-from-union-type-based-on-string-literal | I'm trying to strongly type our code base. A big part of the code is handling events that come from external devices and forwarding them to different handlers. These events all have a value attribute, but this value can have different types. This value type is mapped per event name. So a temperature event always has an int value, and a register event always has RegisterInfo as its value. So I would like to map the event name to the value type. But we are struggling with the implementation. This setup comes the closest to what we want: @dataclass class EventBase: name: str value: Any value_type: str @dataclass class RegisterEvent(EventBase): value: RegisterInfo name: Literal["register"] value_type: Literal["RegisterInfo"] = "RegisterInfo" @dataclass class NumberEvent(EventBase): value: float | int name: Literal["temperature", "line_number"] value_type: Literal["number"] = "number" @dataclass class StringEvent(EventBase): value: str name: Literal["warning", "status"] value_type: Literal["string"] = "string" Events: TypeAlias = RegisterEvent | NumberEvent | StringEvent With this setup mypy will flag incorrect code like: def handle_event(event: Events): if event.name == "temperature": event.value.upper() (It sees that a temperature event should have value type int, and that doesn't have an upper() method) But creating the events becomes ugly this way. I don't want a big if statement that maps each event name to a specific event class. We have lots of different event types, and this mapping info is already inside these classes. Ideally I would like it to look like this: def handle_device_message(message_info): event_name = message_info["event_name"] event_value = message_info["event_value"] event = Events(event_name, event_value) Is a "one-liner" like this possible? I feel like we are kinda walking against a wall here, could it be that the code is architecturally wrong? | UPDATE: Using Pydantic v2 If you are willing to switch to Pydantic instead of dataclasses, you can define a discriminated union via typing.Annotated and use the TypeAdapter as a "universal" constructor that is able to discriminate between distinct Event subtypes based on the provided name string. Here is what I would suggest: from typing import Annotated, Any, Literal from pydantic import BaseModel, Field, TypeAdapter class EventBase(BaseModel): name: str value: Any class NumberEvent(EventBase): name: Literal["temperature", "line_number"] value: float class StringEvent(EventBase): name: Literal["warning", "status"] value: str Event = TypeAdapter(Annotated[ NumberEvent | StringEvent, Field(discriminator="name"), ]) event_temp = Event.validate_python({"name": "temperature", "value": 3.14}) event_status = Event.validate_python({"name": "status", "value": "spam"}) print(repr(event_temp)) # NumberEvent(name='temperature', value=3.14) print(repr(event_status)) # StringEvent(name='status', value='spam') An invalid name would of course cause a validation error, just like a completely wrong type for value (that cannot be coerced).
Example: from pydantic import ValidationError try: Event.validate_python({"name": "temperature", "value": "foo"}) except ValidationError as err: print(err.json(indent=4)) try: Event.validate_python({"name": "foo", "value": "bar"}) except ValidationError as err: print(err.json(indent=4)) Output: [ { "type": "float_parsing", "loc": [ "temperature", "value" ], "msg": "Input should be a valid number, unable to parse string as a number", "input": "foo", "url": "https://errors.pydantic.dev/2.1/v/float_parsing" } ] [ { "type": "union_tag_invalid", "loc": [], "msg": "Input tag 'foo' found using 'name' does not match any of the expected tags: 'temperature', 'line_number', 'warning', 'status'", "input": { "name": "foo", "value": "bar" }, "ctx": { "discriminator": "'name'", "tag": "foo", "expected_tags": "'temperature', 'line_number', 'warning', 'status'" }, "url": "https://errors.pydantic.dev/2.1/v/union_tag_invalid" } ] Original Answer: Using Pydantic v1 If you are willing to switch to Pydantic instead of dataclasses, you can define a discriminated union via typing.Annotated and use the parse_obj_as function as a "universal" constructor that is able to discriminate between distinct Event subtypes based on the provided name string. Here is what I would suggest: from typing import Annotated, Any, Literal from pydantic import BaseModel, Field, parse_obj_as class EventBase(BaseModel): name: str value: Any class NumberEvent(EventBase): name: Literal["temperature", "line_number"] value: float class StringEvent(EventBase): name: Literal["warning", "status"] value: str Event = Annotated[ NumberEvent | StringEvent, Field(discriminator="name"), ] event_temp = parse_obj_as(Event, {"name": "temperature", "value": "3.14"}) event_status = parse_obj_as(Event, {"name": "status", "value": -10}) print(repr(event_temp)) # NumberEvent(name='temperature', value=3.14) print(repr(event_status)) # StringEvent(name='status', value='-10') In this usage demo I purposefully used the "wrong" types for the respective value fields to show that Pydantic will automatically try to coerce them to the right types, once it determines the correct model based on the provided name. An invalid name would of course cause a validation error, just like a completely wrong and type for value (that cannot be coerced). Example: from pydantic import ValidationError try: parse_obj_as(Event, {"name": "temperature", "value": "foo"}) except ValidationError as err: print(err.json(indent=4)) try: parse_obj_as(Event, {"name": "foo", "value": "bar"}) except ValidationError as err: print(err.json(indent=4)) Output: [ { "loc": [ "__root__", "NumberEvent", "value" ], "msg": "value is not a valid float", "type": "type_error.float" } ] [ { "loc": [ "__root__" ], "msg": "No match for discriminator 'name' and value 'foo' (allowed values: 'temperature', 'line_number', 'warning', 'status')", "type": "value_error.discriminated_union.invalid_discriminator", "ctx": { "discriminator_key": "name", "discriminator_value": "foo", "allowed_values": "'temperature', 'line_number', 'warning', 'status'" } } ] Side notes An alias for a union of types like NumberEvent | StringEvent should still have a singular name, i.e. Event rather than Events because semantically the annotation e: Event indicates e should be an instance of one of those types, whereas e: Events would suggest e will be multiple instances (a collection) of either of those types. 
Also the union float | int is almost always equivalent to float because int is by convention considered a subtype of float by all type checkers. | 3 | 4 |
76,407,803 | 2023-6-5 | https://stackoverflow.com/questions/76407803/define-an-output-schema-for-a-nested-json-in-langchain | Whats the recommended way to define an output schema for a nested json, the method I use doesn't feel ideal. # adding to planner -> from langchain.experimental.plan_and_execute import load_chat_planner refinement_response_schemas = [ ResponseSchema(name="plan", description="""{'1': {'step': '','tools': [],'data_sources': [],'sub_steps_needed': bool}, '2': {'step': '','tools': [<empty list>],'data_sources': [<>], 'sub_steps_needed': bool},}"""),] #define json schema in description, works but doesn't feel proper refinement_output_parser = StructuredOutputParser.from_response_schemas(refinement_response_schemas) refinement_format_instructions = refinement_output_parser.get_format_instructions() refinement_output_parser.parse(output) gives: {'plan': {'1': {'step': 'Identify the top 5 strikers in La Liga', 'tools': [], 'data_sources': ['sports websites', 'official league statistics'], 'sub_steps_needed': False}, '2': {'step': 'Identify the top 5 strikers in the Premier League', 'tools': [], 'data_sources': ['sports websites', 'official league statistics'], 'sub_steps_needed': False}, ... '6': {'step': 'Given the above steps taken, please respond to the users original question', 'tools': [], 'data_sources': [], 'sub_steps_needed': False}}} it works but I want to know if theres a better way to go about this. | From what I can see the recommended approach is to use the pydantic output parser as opposed to the structured output parser... python.langchain.com/docs/modules/model_io/output_parsers/… (and dealing with nesting explained here... youtube.com/watch?v=yD_oDTeObJY). e.g. from langchain.output_parsers import PydanticOutputParser from pydantic import BaseModel, Field, validator from typing import List, Optional ... class PlanItem(BaseModel): step: str tools: Optional[str] = [] data_sources: Optional[str] = [] sub_steps_needed: str class Plan(BaseModel): plan: List[PlanItem] parser = PydanticOutputParser(pydantic_object=Plan) parser.get_format_instructions() | 6 | 8 |
76,379,440 | 2023-6-1 | https://stackoverflow.com/questions/76379440/how-to-see-the-embedding-of-the-documents-with-chroma-or-any-other-db-saved-in | I can see everything but the Embedding of the documents when I used Chroma with Langchain and OpenAI embeddings. It always show me None for that Here is the code: for db_collection_name in tqdm(["class1-sub2-chap3", "class2-sub3-chap4"]): documents = [] doc_ids = [] for doc_index in range(3): cl, sub, chap = db_collection_name.split("-") content = f"This is {db_collection_name}-doc{doc_index}" doc = Document(page_content=content, metadata={"chunk_num": doc_index, "chapter":chap, "class":cl, "subject":sub}) documents.append(doc) doc_ids.append(str(doc_index)) # # Initialize a Chroma instance with the original document db = Chroma.from_documents( collection_name=db_collection_name, documents=documents, ids=doc_ids, embedding=embeddings, persist_directory="./data") db.persist() when I do db.get(), I see everything as expected except embedding is None. {'ids': ['0', '1', '2'], 'embeddings': None, 'documents': ['This is class1-sub2-chap3-doc0', 'This is class1-sub2-chap3-doc1', 'This is class1-sub2-chap3-doc2'], 'metadatas': [{'chunk_num': 0, 'chapter': 'chap3', 'class': 'class1', 'subject': 'sub2'}, {'chunk_num': 1, 'chapter': 'chap3', 'class': 'class1', 'subject': 'sub2'}, {'chunk_num': 2, 'chapter': 'chap3', 'class': 'class1', 'subject': 'sub2'}]} My embeddings is also working fine as it returns: len(embeddings.embed_documents(["EMBED THIS"])[0]) >> 1536 also, in my ./data directory I have Embedding file as chroma-embeddings.parquet I tried the example with example given in document but it shows None too # Import Document class from langchain.docstore.document import Document # Initial document content and id initial_content = "This is an initial document content" document_id = "doc1" # Create an instance of Document with initial content and metadata original_doc = Document(page_content=initial_content, metadata={"page": "0"}) # Initialize a Chroma instance with the original document new_db = Chroma.from_documents( collection_name="test_collection", documents=[original_doc], embedding=OpenAIEmbeddings(), # using the same embeddings as before ids=[document_id], ) Here also new_db.get() gives me None | You just need to specify that you want the embeddings as well when using .get # Get all embeddings db._collection.get(include=['embeddings']) # Get embeddings by document_id db._collection.get(ids=['doc0', ..., 'docN'], include=['embeddings']) | 9 | 20 |
76,406,637 | 2023-6-5 | https://stackoverflow.com/questions/76406637/how-to-add-custom-html-content-to-fastapi-swagger-ui-docs | I need to add a custom button in Swagger UI of my FastAPI application. I found this answer which suggest a good solution to add custom javascript to Swagger UI along with this documentations from FastAPI. But this solution only works for adding custom javascript code. I tried to add some HTML code for adding a new button to it using the swagger UI Authorise button style: custom_html = '<div class="scheme-containerr"><section class="schemes wrapper block col-12"><div class="auth-wrapper"><button class="btn authorize"><span>Authorize Google</span><svg width="20" height="20"><use href="#unlocked" xlink:href="#unlocked"></use></svg></button></div></section></div>' @app.get("/docs", include_in_schema=False) async def custom_swagger_ui_html(): return get_swagger_ui_html( openapi_url=app.openapi_url, title=app.title + " - Swagger UI", oauth2_redirect_url=app.swagger_ui_oauth2_redirect_url, swagger_js_url="/static/swagger-ui-bundle.js", swagger_css_url="/static/swagger-ui.css", custom_js_url=google_custom_button, custom_html=custom_html, ) def get_swagger_ui_html( *, ... custom_html: Optional[str] = None, ) -> HTMLResponse: ... html = f""" <!DOCTYPE html> <html> <head> <link type="text/css" rel="stylesheet" href="{swagger_css_url}"> <link rel="shortcut icon" href="{swagger_favicon_url}"> <title>{title}</title> </head> <body> <div id="swagger-ui"> {custom_html if custom_html else ""} # <-- I added the HTML code here </div> """ .... But looks like whatever I put between <div id="swagger-ui"></div> gets overwritten somehow and won't make it in the Swagger UI. How to add custom HTML (in this case, buttons like Swagger's Authorise button) for specific needs in Swagger UI using FastAPI? Update If I add the custom HTML outside of the <div id="swagger-ui"></div> I can see my custom button in Swagger UI like this: But I would like to add my button where the original Authorise button is. | You could modify FastAPI's get_swagger_ui_html() function, in order to inject some custom JavaScript code, as described by @lunaa here, and create the custom HTML button programmatically through the custom_script.js. However, since the Authorize button element is created after the DOM/Window is loaded—and there doesn't seem to be a native way to run your JS code after is defined, even if you use Window.load event to run the JavaScript code—and you need to add your custom button next to it, you could simply wait for that element to be created, using the approach described here, and then create the custom button and add it to the DOM. 
Complete Working Example app.py from fastapi import FastAPI from fastapi import Depends from fastapi.security import OpenIdConnect from fastapi.staticfiles import StaticFiles from fastapi.openapi.docs import ( get_redoc_html, get_swagger_ui_oauth2_redirect_html, ) from custom_swagger import get_swagger_ui_html app = FastAPI(docs_url=None) app.mount("/static", StaticFiles(directory="static"), name="static") oidc_google = OpenIdConnect(openIdConnectUrl='https://accounts.google.com/.well-known/openid-configuration') @app.get("/docs", include_in_schema=False) async def custom_swagger_ui_html(): return get_swagger_ui_html( openapi_url=app.openapi_url, title="My API", oauth2_redirect_url=app.swagger_ui_oauth2_redirect_url, #swagger_js_url="/static/swagger-ui-bundle.js", # Optional #swagger_css_url="/static/swagger-ui.css", # Optional #swagger_favicon_url="/static/favicon-32x32.png", # Optional custom_js_url="/static/custom_script.js", ) @app.get('/') def main(token: str = Depends(oidc_google)): return "You are Authenticated" custom_swagger.py import json from typing import Any, Dict, Optional from fastapi.encoders import jsonable_encoder from fastapi.openapi.docs import swagger_ui_default_parameters from starlette.responses import HTMLResponse def get_swagger_ui_html( *, openapi_url: str, title: str, swagger_js_url: str = "https://cdn.jsdelivr.net/npm/swagger-ui-dist@4/swagger-ui-bundle.js", swagger_css_url: str = "https://cdn.jsdelivr.net/npm/swagger-ui-dist@4/swagger-ui.css", swagger_favicon_url: str = "https://fastapi.tiangolo.com/img/favicon.png", oauth2_redirect_url: Optional[str] = None, init_oauth: Optional[Dict[str, Any]] = None, swagger_ui_parameters: Optional[Dict[str, Any]] = None, custom_js_url: Optional[str] = None, ) -> HTMLResponse: current_swagger_ui_parameters = swagger_ui_default_parameters.copy() if swagger_ui_parameters: current_swagger_ui_parameters.update(swagger_ui_parameters) html = f""" <!DOCTYPE html> <html> <head> <link type="text/css" rel="stylesheet" href="{swagger_css_url}"> <link rel="shortcut icon" href="{swagger_favicon_url}"> <title>{title}</title> </head> <body> <div id="swagger-ui"> </div> """ if custom_js_url: html += f""" <script src="{custom_js_url}"></script> """ html += f""" <script src="{swagger_js_url}"></script> <!-- `SwaggerUIBundle` is now available on the page --> <script> const ui = SwaggerUIBundle({{ url: '{openapi_url}', """ for key, value in current_swagger_ui_parameters.items(): html += f"{json.dumps(key)}: {json.dumps(jsonable_encoder(value))},\n" if oauth2_redirect_url: html += f"oauth2RedirectUrl: window.location.origin + '{oauth2_redirect_url}'," html += """ presets: [ SwaggerUIBundle.presets.apis, SwaggerUIBundle.SwaggerUIStandalonePreset ], })""" if init_oauth: html += f""" ui.initOAuth({json.dumps(jsonable_encoder(init_oauth))}) """ html += """ </script> </body> </html> """ return HTMLResponse(html) static/custom_script.js function waitForElm(selector) { return new Promise(resolve => { if (document.querySelector(selector)) { return resolve(document.querySelector(selector)); } const observer = new MutationObserver(mutations => { if (document.querySelector(selector)) { resolve(document.querySelector(selector)); observer.disconnect(); } }); observer.observe(document.body, { childList: true, subtree: true }); }); } waitForElm('.auth-wrapper').then((elm) => { var authWrapper = document.getElementsByClassName("auth-wrapper")[0]; var btn = document.createElement("BUTTON"); btn.innerHTML = "Click me"; btn.id = "btn-id"; btn.onclick = function() 
{ alert("button is clicked"); }; authWrapper.append(btn); }); Instead of programmatically creating the button through JavaScript, you could load an external HTML file (using JavaScript), which would contain the HTML code for the button and any other elements you would possibly like to insert. Example below: static/custom_script.js function waitForElm(selector) { // same as in the previous code snippet } waitForElm('.auth-wrapper').then((elm) => { var authWrapper = document.getElementsByClassName("auth-wrapper")[0]; fetch('/static/button.html') .then(response => response.text()) .then(text => { const newDiv = document.createElement("div"); newDiv.innerHTML = text; authWrapper.append(newDiv); }); }); static/button.html <button onclick="alert('button is clicked');" class="btn authorize unlocked Google"> <span>Authorize Google</span> <svg width="20" height="20"> <use href="#unlocked" xlink:href="#unlocked"></use> </svg> </button> Adding Dynamic Custom Content In case you would like to add some dynamic content, instead of static JS/HTML file content, you could either pass the content directly as a string to the get_swagger_ui_html() function, or use a combination of static content with dynamic variables, which could be added using Jinja2 templates. Example is given below, demonstrating the changes to be made to the code provided earlier—rest of the code should remain the same as above. The dynamic variable in the exmaple below is msg. Example app.py # ... from jinja2 import Environment, FileSystemLoader def get_template(): env = Environment(loader=FileSystemLoader('./static')) template = env.get_template('custom_script.js') context = {'msg': 'button is clicked!'} html = template.render(context) return html @app.get("/docs", include_in_schema=False) async def custom_swagger_ui_html(): return get_swagger_ui_html( openapi_url=app.openapi_url, title="My API", oauth2_redirect_url=app.swagger_ui_oauth2_redirect_url, custom_js_content=get_template() ) custom_swagger.py def get_swagger_ui_html( *, # ... custom_js_content: Optional[str] = None, ) -> HTMLResponse: # ... if custom_js_content: html += f""" <script>{custom_js_content}</script> """ # ... static/custom_script.js function waitForElm(selector) { // ... } waitForElm('.auth-wrapper').then((elm) => { var authWrapper = document.getElementsByClassName("auth-wrapper")[0]; var btn = document.createElement("BUTTON"); btn.innerHTML = ` <span>Authorize Google</span> <svg width="20" height="20"> <use href="#unlocked" xlink:href="#unlocked"></use> </svg> `; btn.className = "btn authorize unlocked Google"; btn.onclick = function() { alert("{{msg}}"); }; authWrapper.append(btn); }); or static/custom_script.js function waitForElm(selector) { // ... } waitForElm('.auth-wrapper').then((elm) => { var authWrapper = document.getElementsByClassName("auth-wrapper")[0]; var html = ` <button onclick="alert('{{msg}}');" class="btn authorize unlocked Google"> <span>Authorize Google</span> <svg width="20" height="20"> <use href="#unlocked" xlink:href="#unlocked"></use> </svg> </button> `; var newDiv = document.createElement("div"); newDiv.innerHTML = html; authWrapper.append(newDiv); }); | 5 | 2 |
76,413,746 | 2023-6-6 | https://stackoverflow.com/questions/76413746/saving-a-model-i-get-module-tensorflow-python-saved-model-registration-has-no | When I try to save my ternsorflow model I get this error message. What is the problem here and how do I fix it? model = tf.keras.models.Sequential() # define the neural network architecture model.add( tf.keras.layers.Dense(50, input_dim=hidden_dim, activation="relu") ) model.add(tf.keras.layers.Dense(n_classes)) k += 1 model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=["mse", "accuracy"], ) history = model.fit( x_train, y_train, epochs=epochs, batch_size=batch_size, validation_data=(x_test, y_test), verbose=0, ) folder = "model_mlp_lm" file = f"m{k}_model" os.makedirs(folder, exist_ok=True) path = f"{folder}/{file}" if os.path.isfile(path) is False: model.save(path) module 'tensorflow.python.saved_model.registration' has no attribute 'get_registered_name' This is the stack trace: Traceback (most recent call last): File "D:\Anaconda\lib\runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "D:\Anaconda\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy\__main__.py", line 39, in <module> cli.main() File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 430, in main run() File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 284, in run_file runpy.run_path(target, run_name="__main__") File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path return _run_module_code(code, init_globals, run_name, File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code _run_code(code, mod_globals, init_globals, File "c:\Users\hijik\.vscode\extensions\ms-python.python-2023.10.0\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code exec(code, run_globals) File "D:\_lodestar\personality-prediction\finetune_models\MLP_LM.py", line 273, in <module> File "D:\Anaconda\lib\site-packages\tensorflow\python\saved_model\save.py", line 1450, in _build_meta_graph_impl object_graph_proto = _serialize_object_graph( File "D:\Anaconda\lib\site-packages\tensorflow\python\saved_model\save.py", line 1022, in _serialize_object_graph _write_object_proto(obj, obj_proto, asset_file_def_index, File "D:\Anaconda\lib\site-packages\tensorflow\python\saved_model\save.py", line 1061, in _write_object_proto registered_name = registration.get_registered_name(obj) AttributeError: module 'tensorflow.python.saved_model.registration' has no attribute 'get_registered_name' | Check if your tensorflow version is older or up-to-date. This seems to be a newer module https://www.tensorflow.org/api_docs/python/tf/keras/saving/get_registered_name Make sure you have this version of tensorflow installed in your environment pip install tensorflow==2.12.0 I don't know about the dataset so I assumed a small one. 
This is the code I ran import tensorflow as tf import numpy as np import os # Define placeholder values hidden_dim = 2 n_classes = 2 lr = 0.001 epochs = 10 batch_size = 32 # Create a simple dataset x_train = np.array([[1, 2], [3, 4], [5, 6], [7, 8]]) y_train = np.array([0, 1, 0, 1]) # Convert y_train to one-hot encoded format y_train = tf.keras.utils.to_categorical(y_train, num_classes=n_classes) model = tf.keras.models.Sequential() # Define the neural network architecture model.add( tf.keras.layers.Dense(50, input_dim=hidden_dim, activation="relu") ) model.add(tf.keras.layers.Dense(n_classes)) k = 0 # Initialize k k += 1 # Increment k model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=["mse", "accuracy"], ) history = model.fit( x_train, y_train, epochs=epochs, batch_size=batch_size, verbose=0, ) folder = "model_mlp_lm" file = f"m{k}_model" os.makedirs(folder, exist_ok=True) path = f"{folder}/{file}" if os.path.isfile(path) is False: model.save(path) And it ran fine | 4 | 1 |
76,418,777 | 2023-6-6 | https://stackoverflow.com/questions/76418777/kivy-is-there-a-list-of-all-color-names | In Kivy, the widgets' color property allows enter its value as a string of a color name, too, e.g. in .kv file: Label: color: "red" Is there a list of all possible color names? | TL;DR from kivy.utils import colormap # import dict with all CSS 3 Colors # like {'aliceblue':[0.9411764705882353, 0.9725490196078431, 1.0, 1.0]} 'aliceblue' in colormap # True For the plot, see the end of this answer. Dictionary of colors The kivy docs mention that colors referenced by name are retrieved from an object called colormap. This object resides in kivy.utils as a variable storing a dictionary comprehension that iterates over another dictionary named hex_colormap: colormap = {k: get_color_from_hex(v) for k, v in hex_colormap.items()} The ultimate source for these dictionaries is referenced only indirectly in the docs (link). A better reference would be: CSS 3 Colors (recommended by the W3C). At any rate, in order to retrieve all the valid color names, you can import either one of these objects: from kivy.utils import hex_colormap, colormap hex_colormap # name (key): hex (value) {'aliceblue': '#f0f8ff', 'antiquewhite': '#faebd7', ... 'yellow': '#ffff00', 'yellowgreen': '#9acd32'} print('aliceblue' in hex_colormap) # True (all colors in CSS3) colormap # name (key): rgba (value) {'aliceblue': [0.9411764705882353, 0.9725490196078431, 1.0, 1.0], 'antiquewhite': [0.9803921568627451, 0.9215686274509803, 0.8431372549019608, 1.0], ... 'yellow': [1.0, 1.0, 0.0, 1.0], 'yellowgreen': [0.6039215686274509, 0.803921568627451, 0.19607843137254902, 1.0]} print('rebeccapurple' in hex_colormap) # False (only color added in CSS4) * on "rebeccapurple": Changes from Colors 3. Plot The matplotlib docs contain a nice function (plot_colortable) that you can copy/paste to plot a list of named colors. You can pass either one of the dictionaries to this function to get a nice sorted list of colors and their names. plot_colortable(colormap) # add `sort_colors=False` for unsorted plot plt.show() Result: Of course, this plot (showing Kivy's 147 named colors) is just the same as the plot already shown in the docs for mcolors.CSS4_COLORS (containing CSS4's 148 colors), the only difference being that my plot is missing "rebeccapurple". | 3 | 2 |
76,394,246 | 2023-6-3 | https://stackoverflow.com/questions/76394246/streaming-openai-results-from-a-lambda-function-using-python | I'm trying to stream results from Open AI using a Lambda function on AWS using the OpenAI Python library. For the invoke mode I have: RESPONSE_STREAM. And, using the example provided for streaming, I can see the streamed results in the Function Logs (abbreviated below): Response null Function Logs START RequestId: 3e0148c3-1269-4e38-bd08-e29de5751f18 Version: $LATEST { "choices": [ { "finish_reason": null, "index": 0, "logprobs": null, "text": "\n" } ], "created": 1685755648, "id": "cmpl-7NALANaR7eLwIMrXTYJVxBpk6tiZb", "model": "text-davinci-003", "object": "text_completion" } { "choices": [ { "finish_reason": null, "index": 0, "logprobs": null, "text": "\n" } ],.... but, the Response is null. I've tested this by entering the URL in the browser and by performing a get request via cURL: both respond with null. Below is the exact code (with the secret key changed) that I used, but it can also be found on the link provided: import json import openai import boto3 def lambda_handler(event, context): model_to_use = "text-davinci-003" input_prompt="Write a sentence in 4 words." openai.api_key = 'some-secret key' response = openai.Completion.create( model=model_to_use, prompt=input_prompt, temperature=0, max_tokens=100, top_p=1, frequency_penalty=0.0, presence_penalty=0.0, stream=True ) for chunk in response: print(chunk) | You are having trouble because python runtimes do not currently support streaming responses. From 4/7/2023 AWS announcement of streaming responses: Response streaming currently supports the Node.js 14.x and subsequent managed runtimes. As of 6/8/2023 this is still true. | 2 | 1 |
76,404,464 | 2023-6-5 | https://stackoverflow.com/questions/76404464/vector-stores-storage-in-langchain | I am working with LangChain for the first time. Due to data security, I want to be sure about where LangChain's vector stores keep their data. I am using the HNSWLib vector store, whose documentation mentions it is an in-memory store. What does that mean? Do LangChain's vector stores store any data on their servers? https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/hnswlib https://github.com/nmslib/hnswlib | HNSWLib stores data on the server where the project is hosted. So if you host your project on Vercel, then your vector store runs in memory on the Vercel server. You can verify this when you execute await vectorStore.save(directory); which will generate vector files inside your project directory. | 2 | 4 |
76,413,508 | 2023-6-6 | https://stackoverflow.com/questions/76413508/why-keyword-argument-are-not-passed-into-init-subclass | Code: class ExternalMeta(type): def __new__(cls, name, base, dct, **kwargs): dct['district'] = 'Jiading' x = super().__new__(cls, name, base, dct) x.city = 'Shanghai' return x class MyMeta(ExternalMeta): def __new__(cls, name, base, dct, age=0, **kwargs): x = super().__new__(cls, name, base, dct) x.name = 'Jerry' x.age = age return x def __init__(self, name, base, dct, age=0, **kwargs): self.country = 'China' class MyClass(metaclass=MyMeta, age=10): def __init_subclass__(cls, say_hi, **kwargs): print(f'keyword arguments are: {kwargs}') super().__init_subclass__(**kwargs) cls.hello = say_hi class DerivedClass(MyClass, say_hi="hello"): pass this throws: Traceback (most recent call last): File "app2.py", line 27, in <module> class DerivedClass(MyClass, say_hi="hello"): File "app2.py", line 11, in __new__ x = super().__new__(cls, name, base, dct) File "app2.py", line 4, in __new__ x = super().__new__(cls, name, base, dct) TypeError: __init_subclass__() missing 1 required positional argument: 'say_hi' From the offical doc: classmethod object.__init_subclass__(cls) This method is called whenever the containing class is subclassed. cls is then the new subclass. If defined as a normal instance method, this method is implicitly converted to a class method. Keyword arguments which are given to a new class are passed to the parent’s class __init_subclass__. For compatibility with other classes using __init_subclass__, one should take out the needed keyword arguments and pass the others over to the base class, as in: https://docs.python.org/3/reference/datamodel.html#object.\_\_init_subclass\_\_ I try to print kwargs, it's {}, empty, so why my say_hi, arguments are not passed to __init_subclass__ method? edit: I read more materials and write an article with diagrams and tests about the creation of instance and classes in Python: https://shan-weiqiang.github.io/2023/06/24/Python-metaclass.html Reference to articles(including this page) are included in it, hopes it can help anyone. | Python exposes the mechanisms classes are created in the form of customizable meta-classes - and does not perform any magic beyond that. Which means: there is no "hidden channel" through which the keyword arguments of a class are passed to __init_subclass__ - that is done inside Python's type.__new__ call. When using __init_subclass__ one will typically do not combine it with a metaclass (one of the idea of the creation of the former was to reduce the need for metaclasses at all). In this case, you do use a metaclass, and you supress the keyword arguments in the call to type.__new__: there are no arguments it can convey to __init_subclass__. Simply pass the arguments there, and Python will be able to pass them down to __init_subclass__: ... class ExternalMeta(type): def __new__(cls, name, base, dct, **kwargs): dct['district'] = 'Jiading' # Forward the kwargs here: x = super().__new__(cls, name, base, dct, **kwargs) x.city = 'Shanghai' return x class MyMeta(ExternalMeta): def __new__(cls, name, base, dct, age=0, **kwargs): # and here: x = super().__new__(cls, name, base, dct, **kwargs) x.name = 'Jerry' x.age = age return x ... The extra kwargs arguments are also passed to the metaclass' __init__ method. But they follow a different "pipe". I will try to summarize the whole thing here: Keyword arguments are declared in the class statement. (ex. 
class MyClass(MyBase1, metaclass=MyMeta, extraarg="foobar"): The Python runtime will check the bases for the class and any "metaclass" argument to calculate the metaclass (if there is a metaclass conflict it will raise a typeerror) Python will then call the metaclass __prepare__ method, passing any extra arguments it got and use its return value as the namespace in which the class body will be executed (by default an ordinary dict) some special attributes (like __module__, __qualname__ and __anotations__ (with an empty dictionary)) are assigned in the namespace. the class body itself is executed: declared methods are created as functions and class attributes are assigned in the namespace provided by __prepare__. After the class body is executed, attribute annotations are assigned in the namespace __annotations__ entry, one by one. (not in any spec, could as well happen as each attribute is encountered, and is subject to change with PEP 649 implementation for Python 3.13) The Python runtime will call the metaclass ("MyMeta" in the example) passing it all arguments and keyword arguments, but for the argument named metaclass itself. Calling the metaclass means the __call__ method of the class of the metaclass will be executed. (The "twice removed metaclass" or "metametaclass"). This is ordinarily type itself, and is not usually customized, but for learning purposes. This __call__ method will get all the arguments and call the metaclass __new__ (this step is detailed bellow) and, if that returns an instance of the metaclass (i.e. an ordinary class), call the metaclass __init__ method - always passing the mandatory arguments + any named arguments. The __new__ method on the metaclass have, at some point, to call the super type.__new__ method: it is the only way for code written in Python (as opposed to code in an extension) to create a new class. In a custom metaclass, like in this example, the developer can remove, add or pre-process any extra arguments as desired - and forward them to type.__new__. type.__new__ in turn will: (1)calculate the class "MRO" (Method resolution order) (and yes, again - it was done prior to determine the metaclass, but the runtime has to confirm that a linearized MRO is possible here), including calling special methods for that, if one of the bases is a not a class, but features a __mro_entries__ special method, (2) create a new instance of the class,(3) call any existing descriptors in the passed namespace __set_name__ function, (4) call the class's most derived __init_subclass__ (i.e. the first __init_subclass__ it finds in a superclass), passing any extra arguments it got and (5) return the newly created class (to the "metametaclass" __call__) The metaclass __init__ is called, also with any extra arguments. By default it is type.__init__ which does nothing. The "metametaclass" __call__ returns the class that was returned by the metaclass __new__ call to the runtime if there are any class decorators, they are called with this returned class object the name given in the class statement is assigned to the class returned above in the context of the statement. 
class MetaMeta(type): def __call__(mcls, *args, **kwargs): print(f"Meta-meta-class __call__ with {mcls}, {args}, {kwargs}") result = super().__call__(*args, **kwargs) print("returning from meta-meta-class __call__") return result class Meta(type, metaclass=MetaMeta): @classmethod def __prepare__(mcls, *args, **kwargs): print(f"Metaclass __prepare__ with {mcls}, {args}, {kwargs}") class VerboseDict(dict): def __init__(self, name): self.name = name def __setitem__(self, name, value): print(f"{self.name} assignment {name}={value}") if name == "__annotations__": value = VerboseDict(" annotations") super().__setitem__(name, value) return VerboseDict("ns") def __new__(mcls, name, bases, ns, **kwargs): print(f"metaclass __new__ with {mcls}, {name}, {bases}, {ns}, {kwargs}") result = super().__new__(mcls, name, bases, ns, **kwargs) print(f"Returning from the metaclass `__new__`") return result def __init__(cls, *args, **kwargs): print(f"metaclass __init__ with {cls}, {args}, {kwargs}") return super().__init__(*args, **kwargs) def __call__(cls, *args, **kwargs): print(f"metaclass __call__ (creating an instance) with {cls}, {args}, {kwargs}") return super().__call__(*args, **kwargs) class Descriptor: def __get__(self, instance, owner): return 23 def __set_name__(self, owner, name): print(f"Descriptor __set_name__ with {self}, {owner}, {name}") def decorator(cls): print(f"class decorator with {cls}") return cls class Base: def __init_subclass__(cls, **kwargs): # one can't pass extra kwargs to this call: it will raise TypeError super().__init_subclass__() print(f"{__class__} __init_subclass__ with {cls}, {kwargs}") @decorator class MyClass(Base, metaclass=Meta, extra="foobar"): a = "foobar" b = Descriptor() c: int = 0 d: str def e(self): __class__ # triggers the creatin of "__classcell__" attr in the namespace Output: Metaclass __prepare__ with <class '__main__.Meta'>, ('MyClass', (<class '__main__.Base'>,)), {'extra': 'foobar'} ns assignment __module__=__main__ ns assignment __qualname__=MyClass ns assignment __annotations__={} ns assignment a=foobar ns assignment b=<__main__.Descriptor object at 0x7f47415f49e0> ns assignment c=0 annotations assignment c=<class 'int'> annotations assignment d=<class 'str'> ns assignment e=<function MyClass.e at 0x7f4740c27880> ns assignment __classcell__=<cell at 0x7f47415f4280: empty> Meta-meta-class __call__ with <class '__main__.Meta'>, ('MyClass', (<class '__main__.Base'>,), {'__module__': '__main__', '__qualname__': 'MyClass', '__annotations__': {'c': <class 'int'>, 'd': <class 'str'>}, 'a': 'foobar', 'b': <__main__.Descriptor object at 0x7f47415f49e0>, 'c': 0, 'e': <function MyClass.e at 0x7f4740c27880>, '__classcell__': <cell at 0x7f47415f4280: empty>}), {'extra': 'foobar'} metaclass __new__ with <class '__main__.Meta'>, MyClass, (<class '__main__.Base'>,), {'__module__': '__main__', '__qualname__': 'MyClass', '__annotations__': {'c': <class 'int'>, 'd': <class 'str'>}, 'a': 'foobar', 'b': <__main__.Descriptor object at 0x7f47415f49e0>, 'c': 0, 'e': <function MyClass.e at 0x7f4740c27880>, '__classcell__': <cell at 0x7f47415f4280: empty>}, {'extra': 'foobar'} Descriptor __set_name__ with <__main__.Descriptor object at 0x7f47415f49e0>, <class '__main__.MyClass'>, b <class '__main__.Base'> __init_subclass__ with <class '__main__.MyClass'>, {'extra': 'foobar'} Returning from the metaclass `__new__` metaclass __init__ with <class '__main__.MyClass'>, ('MyClass', (<class '__main__.Base'>,), {'__module__': '__main__', '__qualname__': 'MyClass', '__annotations__': 
{'c': <class 'int'>, 'd': <class 'str'>}, 'a': 'foobar', 'b': <__main__.Descriptor object at 0x7f47415f49e0>, 'c': 0, 'e': <function MyClass.e at 0x7f4740c27880>, '__classcell__': <cell at 0x7f47415f4280: Meta object at 0x1e57400>}), {'extra': 'foobar'} returning from meta-meta-class __call__ class decorator with <class '__main__.MyClass'> | 3 | 8 |
76,413,729 | 2023-6-6 | https://stackoverflow.com/questions/76413729/define-partial-views-on-a-sqlalchemy-model | We are dealing with a very large legacy SQLAlchemy model (and underlying table schema) aggregating many logically-separate models, that cannot, for practical reasons, be refactored. We would want to be able to design authorizations/permissions on sub-models that restrict read/write to a subset of attributes (and perhaps methods as well). What would be the best approach to design a "partial view" Mixin/Class/Metaclass that would allow: keeping the same original SQLAlchemy/flask-sqlalchemy model underneath (support .query etc) once loaded/inited, restrict access to a designated subset (stored at class level) of attributes In essence going from: class MyLargeModel(db.Model): id = db.Column(db.Integer, primary_key=True) foo = db.Column(db.Text, nullable=False) bar = db.Column(db.Text, nullable=False) baz = db.Column(db.Text, nullable=False) To something like: class ModelBase(db.Model): id = db.Column(db.Integer, primary_key=True) class FooView(ModelBase): foo = db.Column(db.Text, nullable=False) [??] class BarView(ModelBase): bar = db.Column(db.Text, nullable=False) [??] class BarbazView(BarView): baz = db.Column(db.Text, nullable=False) [??] class MyLargeModel(FooView, BarbazView): [??] Where each of the view class retain as much of the SQLA model properties as possible (loading, and ideally persisting). In a way, something similar to what SQLA-Marshmallow etc do for schemas (except while keeping an actual ORM model). Is there any known pattern that could help me here? [Edit] After further digging, it seems like SQLAlchemy's Single table inheritance might hold the key to what we are trying to do, but it seems to require a polymorphic_on column to split vertically on (I only want to split horizontally). | After a lot of experimenting, I finally found a solution that, while not extremely satisfying (it relies on a fairly ugly __table__ definition) seems to fit my bill exactly. Sharing for any future searches: from redatabase.database import db from redatabase.models import ValuationRequest from sqlalchemy.orm.relationships import RelationshipProperty class ModelViewMeta(type(db.Model)): def __new__(mcs, name, bases, attrs): parent_model = attrs.get('_' + name + '__parent_model', None) included_attributes = attrs.get('_' + name + '__included_attributes', []) if parent_model: attrs['__table__'] = parent_model.__table__.select().with_only_columns([ getattr(parent_model, attribute) for attribute in included_attributes if not isinstance(getattr(parent_model, attribute).property, RelationshipProperty) ]).subquery() for attribute in included_attributes: attrs[attribute] = getattr(parent_model, attribute) else: attrs['__abstract__'] = True return super().__new__(mcs, name, bases, attrs) class ModelView(db.Model, metaclass=ModelViewMeta): def __init_subclass__(cls, **kwargs): super().__init_subclass__(**kwargs) class FooModelView(ModelView): __parent_model = OriginalModel __included_attributes = ['id', 'foo'] The class FooModelView above will correctly expose only __included_attributes while still keeping all the properties expected from a SQLAlchemy model. | 3 | 0 |
76,414,514 | 2023-6-6 | https://stackoverflow.com/questions/76414514/cannot-import-name-default-ciphers-from-urllib3-util-ssl-on-aws-lambda-us | What I want to achieve To scrape an website using AWS Lambda and save the data on S3. The issues I'm having When I execute Lambda, the following error message appears. { "errorMessage": "Unable to import module 'lambda_function': cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' (/opt/python/urllib3/util/ssl_.py)", "errorType": "Runtime.ImportModuleError", "requestId": "fb66bea9-cbad-4bd3-bd4d-6125454e21be", "stackTrace": [] } Code The minimum Lambda code is as follows. import requests import boto3 def lambda_handler(event, context): s3 = boto3.client('s3') upload_res = s3.put_object(Bucket='horserace-dx', Key='/raw/a.html', Body='testtext') return event An layer was added to the Lambda. Files were save in python folder using the commands below , frozen in a zip file, then uploaded to AWS Lambda as a layer. !pip install requests -t ./python --no-user !pip install pandas -t ./python --no-user !pip install beautifulsoup4 -t ./python --no-user The bucket horserace-dx exists The folder raw exists The role of the Lambda is properly set. It can read from and write to S3 The runtime of the Lambda is Python 3.9. The python version of the local computer is 3.9.13. What I did so far I google "cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'" and found some suggestions. I made the layer with the following code and tried again in vain. !pip install requests -t ./python --no-user !pip install pandas -t ./python --no-user !pip install beautifulsoup4 -t ./python --no-user !pip install urllib3==1.26.15 -t ./python --no-user So what should I do to achieve what I want to achieve? Any suggestions would be greatly appreciated. | Execute the following commands. pip install requests==2.25.0 -t ./python --no-user pip install beautifulsoup4 -t ./python --no-user pip install pytz -t ./python --no-user On PyPI, download the following whl files from the pages of numpy and pandas numpy-1.24.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl pandas-2.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl Unzip the files and move the contents to the python folder. Zip the python folder and upload it to AWS Lambda Layer. Set the layer to the Lambda. Then the code runs without errors. | 69 | 4 |
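For the final "zip the python folder and upload it as a layer" step from the answer, a hedged boto3 sketch; it assumes the python/ folder was already zipped into layer.zip, and the layer name "requests-bs4-pandas-py39" is only an example.
import boto3

client = boto3.client("lambda")
with open("layer.zip", "rb") as f:
    response = client.publish_layer_version(
        LayerName="requests-bs4-pandas-py39",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.9"],
    )
print(response["LayerVersionArn"])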
76,414,998 | 2023-6-6 | https://stackoverflow.com/questions/76414998/what-does-a-solution-like-r5-or-c15415-mean-when-i-use-solve-from-sagemath-py | I implemented a system of complex-valued functions and solved it with SageMath's "solve" method. In my solutions array I find entries like x1 == r5, or in different cases x1 == c15403 (and so on with different numbers). Does r5 stand for an arbitrary rational number and c15403 for an arbitrary complex number? Maybe someone can give me a hint where to find this in the documentation. Thanks, and have a nice day. | Yes, 'c' stands for complex. I think that 'r' stands for real, 'z' stands for integer. I don't know if there is a symbol for rationals. I don't know where it's documented, but I believe this comes from Maxima, which Sage uses as a solver. See the documentation for new_variable at https://maxima.sourceforge.io/docs/manual/maxima_346.html#Functions-and-Variables-for-to_005fpoly_005fsolve, and in particular look at the first code example. The Sage reference manual mentions the r and z prefixes in the documentation of the solve method of symbolic relations. Slightly amended excerpt: If there is a parameter in the answer, that will show up as a new variable. In the following example, r1 is an arbitrary real (hence the r).
sage: forget()
sage: x, y = var('x, y')
sage: solve([x + y == 3, 2*x + 2*y == 6], x, y)
[[x == -r1 + 3, y == r1]]
sage: b, c = var('b, c')
sage: solve((b - 1)*(c - 1), [b, c])
[[b == 1, c == r...], [b == r..., c == 1]]
Especially with trigonometric functions, the dummy variable may be implicitly an integer (hence the z).
sage: solve(sin(x) == cos(x), x, to_poly_solve=True)
[x == 1/4*pi + pi*z...]
sage: solve([cos(x)*sin(x) == 1/2, x + y == 0], x, y)
[[x == 1/4*pi + pi*z..., y == -1/4*pi - pi*z...]] | 3 | 2 |
76,417,916 | 2023-6-6 | https://stackoverflow.com/questions/76417916/filtering-return-data-from-beautiful-soup | I'm trying to search for specific data on a page. I'm able to gather all the data from the page with: page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') #html.parser') results = soup.find_all('tr', {'class':"parent"}) It provides a number of lines that look like the following: <tr class="parent" data-row='{"call":{"symbol":"BTCC 231215C7.00","class_symbol":"BTCC","root_symbol":"BTCC","underlying_symbol":"BTCC.B","expiry_date":"2023-12-15","strike_price":7,"instrument_type":0,"last_price":0.15,"volume":0,"bid_price":0.11,"ask_price":0.15,"bid_size":25,"ask_size":50,"net_change":0,"settlement_price":0,"open_interest":235,"is_option":1,"is_weekly":0,"last_close_price":0.15,"open_price":0,"high_price":0,"low_price":0,"nb_trades":0,"volatility":36.842},"put":{"symbol":"BTCC 231215P7.00","class_symbol":"BTCC","root_symbol":"BTCC","underlying_symbol":"BTCC.B","expiry_date":"2023-12-15","strike_price":7,"instrument_type":0,"last_price":2.52,"volume":0,"bid_price":2.08,"ask_price":2.32,"bid_size":25,"ask_size":25,"net_change":0,"settlement_price":0,"open_interest":0,"is_option":1,"is_weekly":0,"last_close_price":2.52,"open_price":0,"high_price":0,"low_price":0,"nb_trades":0,"volatility":71.286}}' id="BTCC-20231215-700" tabindex="0"> <td class="expiry_date" data-sort="2023-12-15">December 15, 2023</td> <td class="call bid_price" data-sort="0.11">0.11</td> <td class="call ask_price" data-sort="0.15">0.15</td> <td class="call last_price" data-sort="0.15">0.15</td> <td class="call net_change" data-sort="0">0</td> <td class="call open_interest" data-sort="235">235</td> <td class="call volume" data-sort="0">0</td> <td class="strike_price" data-sort="7.0000">7.00</td> <td class="put bid_price" data-sort="2.08">2.08</td> <td class="put ask_price" data-sort="2.32">2.32</td> <td class="put last_price" data-sort="2.52">2.52</td> <td class="put net_change" data-sort="0">0</td> <td class="put open_interest" data-sort="0">0</td> <td class="put volume" data-sort="0">0</td> </tr> <tr class="parent" data-row='{"call":{"symbol":"BTCC 250117C5.00","class_symbol":"BTCC","root_symbol":"BTCC","underlying_symbol":"BTCC.B","expiry_date":"2025-01-17","strike_price":5,"instrument_type":0,"last_price":1.05,"volume":20,"bid_price":0.86,"ask_price":1.13,"bid_size":17,"ask_size":10,"net_change":0.17,"settlement_price":0,"open_interest":832,"is_option":1,"is_weekly":0,"last_close_price":0.88,"open_price":1.05,"high_price":1.06,"low_price":1.05,"nb_trades":3,"volatility":26.595},"put":{"symbol":"BTCC 250117P5.00","class_symbol":"BTCC","root_symbol":"BTCC","underlying_symbol":"BTCC.B","expiry_date":"2025-01-17","strike_price":5,"instrument_type":0,"last_price":1.49,"volume":0,"bid_price":1.13,"ask_price":1.38,"bid_size":10,"ask_size":17,"net_change":0,"settlement_price":0,"open_interest":678,"is_option":1,"is_weekly":0,"last_close_price":1.49,"open_price":0,"high_price":0,"low_price":0,"nb_trades":0,"volatility":62.77}}' id="BTCC-20250117-500" tabindex="0"> <td class="expiry_date" data-sort="2025-01-17">January 17, 2025</td> <td class="call bid_price" data-sort="0.86">0.86</td> <td class="call ask_price" data-sort="1.13">1.13</td> <td class="call last_price" data-sort="1.05">1.05</td> <td class="call net_change up" data-sort="0.17">0.17</td> <td class="call open_interest" data-sort="832">832</td> <td class="call volume" data-sort="20">20</td> <td class="strike_price" data-sort="5.0000">5.00</td> 
<td class="put bid_price" data-sort="1.13">1.13</td> <td class="put ask_price" data-sort="1.38">1.38</td> <td class="put last_price" data-sort="1.49">1.49</td> <td class="put net_change" data-sort="0">0</td> <td class="put open_interest" data-sort="678">678</td> <td class="put volume" data-sort="0">0</td> </tr> However I'm unable to select the specific line. I tried to get data from line so tried to narrow it down using the following code: for result in results: name = result.find('tr', attrs={'symbol':"BTCC 250117C5.00"}) unfortunately it doesn't find the line I'm looking for :( Once the line is selected I would like to retrieve the "bid_price" and "ask_price" for that specific symbol. | You can select <tr> that contains the symbol in data-row= attribute. Then select the correct cells: # html_doc contains the HTML snippet from the question soup = BeautifulSoup(html_doc, 'html.parser') row = soup.select_one('tr[data-row*="BTCC 250117C5.00"]') bid_price = row.select_one('.bid_price').text ask_price = row.select_one('.ask_price').text print(f'{bid_price=} {ask_price=}') Prints: bid_price='0.86' ask_price='1.13' | 3 | 1 |
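Because each <tr class="parent"> carries the whole quote as JSON in its data-row attribute, an alternative sketch is to parse that attribute directly instead of reading individual cells; as in the answer, html_doc is assumed to hold the HTML snippet from the question.
import json
from bs4 import BeautifulSoup

soup = BeautifulSoup(html_doc, "html.parser")
row = soup.select_one('tr[data-row*="BTCC 250117C5.00"]')
data = json.loads(row["data-row"])   # the attribute value is plain JSON
call = data["call"]
print(call["bid_price"], call["ask_price"])   # 0.86 1.13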
76,415,360 | 2023-6-6 | https://stackoverflow.com/questions/76415360/opencv-moments-returns-zero-causing-zerodivisionerror-in-calculation-of-objec | I am trying to compute the area and the center coordinates of objects in a binary mask using opencv. However, I noticed that in some cases I get the wrong result. For example, if I get the contours like this: import numpy as np import cv2 binary_mask = np.array([ [0, 0, 1, 1, 0, 0], [0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1]]) contours, _ = cv2.findContours( binary_mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) >>> contours (array([[[0, 3]], [[5, 3]]], dtype=int32), array([[[2, 0]], [[2, 1]], [[3, 1]], [[3, 0]]], dtype=int32)) Then I get ZeroDivisionError for the center calculation: def get_centroid_from_contour(contour): M = cv2.moments(contour) cX = int(M["m10"] / M["m00"]) cY = int(M["m01"] / M["m00"]) return (cX, cY) >>> get_centroid_from_contour(contour) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in get_centroid_from_contour_ ZeroDivisionError: float division by zero This I think is related to the fact that somehow opencv thinks the 2x2 squared object has zero area: >>> cv2.contourArea(contours[0]) 0.0 It seems something related to "open objects". Indeed the first contour only contains two points that do not close the polygon, but I have no idea how to fix this. I also tried closing the contour as suggested here but it doesn't work. | Looks like a bug in the implementation of moments(). It appears to not consider how contours are defined in OpenCV. The issue isn't with your 2x2 connected component, but with the 6x1 one. That one is returned first. Its contour consists of two points. For the contour of a 1-pixel line, moments() should have still given a non-zero m00 because that is how contours work in OpenCV. Since you say you got a ZeroDivisionError, it must have been zero. That is incorrect for OpenCV's notion of a contour, but correct for every other notion of a polygon/contour. In OpenCV, a contour describes the polygon that needs to be drawn to reproduce the picture. The line goes ON the outer edge pixels that are still inside the connected component. That is why the contour of your line CC becomes a 2-corner polygon of, strictly speaking, zero area. OpenCV could have defined contours to be zero-width lines that circumscribe the outer pixels of a connected component. That would have required: the drawing calls in OpenCV to not be as broken and neglected as they are (I'm being blunt but it's true) the conventions for a contour to allow non-integer coordinates Feel free to submit an issue on OpenCV's github page. The moments() implementation needs a fix. Addendum: the fix should probably be another flag that distinguishes "contours" from proper polygons, so the mathematically correct behavior isn't lost, but instead made optional. Everyone calls that function on contours, so that should be the default. | 2 | 3 |
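A practical workaround sketch (not a fix to moments() itself, which the answer says needs a patch in OpenCV): connectedComponentsWithStats counts areas in whole pixels and returns centroids directly, so 1-pixel-wide components do not collapse to zero area.
import numpy as np
import cv2

binary_mask = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1]], dtype=np.uint8)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary_mask)
for label in range(1, n):                      # label 0 is the background
    area = stats[label, cv2.CC_STAT_AREA]      # pixel count, never zero for a real component
    cx, cy = centroids[label]
    print(label, area, (cx, cy))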
76,416,898 | 2023-6-6 | https://stackoverflow.com/questions/76416898/pandas-doesnt-assign-values-to-dataframe | I wrote a function that creates a table of periods given the comparison of a value from a series as well as a specific minimum duration. Now I recently cleaned up my environment and somehow the same function doesn't assign values anymore as it did before. However, it still creates a dataframe of the right shape, it just doesn't assign values and I can't figure out why. Previous output: Current output: from datetime import datetime import pandas as pd pd.options.mode.chained_assignment = None import pandas_datareader.data as web import dataframe_image as dfi # getting the last periods of increasing interest rates (Federal Funds Effective MONTHLY Rate) from 1965 onwards fed_rates = web.DataReader("FEDFUNDS", "fred", 1965) fed_rates.set_index(pd.to_datetime(fed_rates.index.date), inplace=True) # test for empty values print(fed_rates.index.isna().sum()) def period_df(start, duration, fed_rates=fed_rates): fed_rates = fed_rates[fed_rates.index >= start] df = pd.DataFrame(columns=["Name", "Start", "Last"]) df.loc[df.shape[0]] = [None, None, None] period = 0 j = 0 for i in range(0, len(fed_rates) - 1): if (fed_rates.iloc[i + 1]["FEDFUNDS"] <= fed_rates.iloc[i]["FEDFUNDS"]) and ( i - j >= duration ): df.loc[period]["Last"] = datetime.strftime(fed_rates.index[i], "%Y-%m-%d") period += 1 if (fed_rates.index[-1] - fed_rates.index[i]).days >= 365: df.loc[len(df)] = [None, None, None] j = i elif (fed_rates.iloc[i + 1]["FEDFUNDS"] <= fed_rates.iloc[i]["FEDFUNDS"]) and ( i - j < duration ): df.iloc[period]["Name"] = "Period " + str(period + 1) df.loc[period]["Start"] = datetime.strftime(fed_rates.index[i], "%Y-%m-%d") j = i # add last date df.loc[period]["Last"] = datetime.strftime(fed_rates.index[-1], "%Y-%m-%d") # add duration column df["Duration"] = ( pd.to_datetime(df["Last"]) - pd.to_datetime(df["Start"]) ) / np.timedelta64(1, "M") # export df as image dfi.export( df.style.set_properties( **{"background-color": "white", "color": "black", "border-color": "#948b8b"} ), "periods" + start + ".png", ) return df period_df(start="1995", duration=9) | In pandas the loc/iloc operations, when they are not setting anything, just return a copy of the data. When you do something along the lines of df.loc[row][col] = value, it may look like the loc operation setting something, but this "assignment" happen in two stages: First, df.loc[row] retrieves a copy of the relevant row Then, the copy's col is set to value You can avoid this by passing row and col to loc/iloc or even use at/iat: df.loc[row, col] = value df.at[row, col] = value | 3 | 2 |
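A small self-contained sketch of the answer's point: on most pandas versions the chained form writes to a temporary copy of the row (and may emit a SettingWithCopyWarning), while the single .loc[row, col] call writes to the frame itself.
import pandas as pd

df = pd.DataFrame(columns=["Name", "Start", "Last"])
df.loc[0] = [None, None, None]

df.loc[0]["Last"] = "2020-01-31"     # two-step: typically modifies a temporary copy
print(df.loc[0, "Last"])             # still None

df.loc[0, "Last"] = "2020-01-31"     # one-step: modifies df itself
print(df.loc[0, "Last"])             # 2020-01-31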
76,415,555 | 2023-6-6 | https://stackoverflow.com/questions/76415555/write-test-for-dagster-asset-job | I am trying to write a simple test for a dagster job and I can't get it through... I am using dagster 1.3.6 So I have defined this job using the function dagster.define_asset_job from dagster import define_asset_job my_job: UnresolvedAssetJobDefinition = define_asset_job( name='name_for_my_job', selection=AssetSelection.assets( source_asset_1, source_asset_2, asset_1, asset_2 ) ) Intuitive try By reading the documentation, I figured that I had to call the execute_in_process method, which is defined in the JobDefinition class. from my_package import my_job def test_documentation(): result = my_job.execute_in_process() assert result.success But like I've highligted in the first code block, my_job is of type UnresolvedAssetJobDefinition. By digging a bit more in the code, I see that there is a resolve method, which returns a JobDefinition. So I wanted to do that, but I've seen that you can't call resolve without parameter; you are required to provide asset_graph. But it's exactly what I was trying to avoid. I don't want to provide the list of the assets/source assets, I want them to be deduced from the job definition. Journey I've seen that in addition to the UnresolvedAssetJobDefinition.resolve().execute_in_process(), I could look at the materialize_to_memory function; but I faced the same issue: I need to provide a list of assets. I spent some time trying to get the assets out of the UnresolvedAssetJobDefinition. I've seen that there is a .selection property that allows me to get a KeysAssetSelection, which basically contains a list of AssetKey. But I need a list of Union[AssetsDefinition, SourceAsset] and I don't know how to convert an AssetKey into an AssetDefinition. Last try Hereafter there is my last try, you can see that I am just trying to wire things together, as a admission of my weakness I am not even trying to use the job definition to get the assets. import pytest from my_package import my_job, source_asset_1, source_asset_2, asset_1, asset_2 from dagster._core.definitions.asset_graph import AssetGraph @pytest.fixture def test_resources() -> Mapping[str, object]: return { "parquet_io_manager": parquet_io_manager.configured({'data_path': DATA_FOLDER }), } def test_my_job( test_resources: Mapping[str, object], ): graph = AssetGraph.from_assets([source_asset_1, source_asset_2, asset_1, asset_2]) job = my_job.resolve(asset_graph=graph) result = job.execute_in_process(resources=test_resources) assert result.success but I can't quite get what I want. In the last example, I got this error dagster._core.errors.DagsterInvalidSubsetError: AssetKey(s) {AssetKey(['source_asset_1']), AssetKey(['source_asset_2']), AssetKey(['asset_1']), AssetKey(['asset_2'])} were selected, but no AssetsDefinition objects supply these keys. Make sure all keys are spelled correctly, and all AssetsDefinitions are correctly added to the Definitions. Help I know that I can test each asset by just importing and calling the function decorated by the @asset dagster keyword. But I want to be able to launch all the assets from the job, without having to rewrite this test function. Do you think that it's something possible? Am I doing something wrong? I must miss something obvious... any help would be appreciated. Have a nice day! | The object that's produced by define_asset_job does not include object references to the asset definitions selected in the job. 
This means that, to execute an asset job in-process, you need to somehow pass those asset definition object references. One way to do this is through a Definitions object:
from dagster import Definitions, define_asset_job, load_assets_from_modules
from my_package import my_module

my_job = define_asset_job("my_job", ...)
all_assets = load_assets_from_modules([my_module])
defs = Definitions(assets=all_assets, jobs=[my_job])
result = defs.get_job_def("my_job").execute_in_process() | 2 | 7 |
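A hedged pytest-style sketch that combines the answer's Definitions approach with the resources from the question; the import locations of my_module, parquet_io_manager and DATA_FOLDER are assumptions, and the job name "name_for_my_job" is the one declared in the question's define_asset_job call.
from dagster import Definitions, load_assets_from_modules
from my_package import my_module, my_job, parquet_io_manager, DATA_FOLDER


def test_my_job():
    defs = Definitions(
        assets=load_assets_from_modules([my_module]),
        jobs=[my_job],
        resources={
            "parquet_io_manager": parquet_io_manager.configured({"data_path": DATA_FOLDER}),
        },
    )
    result = defs.get_job_def("name_for_my_job").execute_in_process()
    assert result.success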
76,410,949 | 2023-6-6 | https://stackoverflow.com/questions/76410949/how-to-count-failure-occurrences-in-a-column-using-pandas | I need to use Python's pandas to tabulate test results stored in a CSV file. The result can be "passed" or sometimes "failed". After importing pandas as pd, my code is:
df = pd.read_csv('myfile.csv')
pass_res = df['Status'].value_counts()['passed']
fail_res = df['Status'].value_counts()['failed']
This code works if there IS at least one failure. However, when there is no failure, the last line raises an error. How do I check for failures so that the last line only runs when it is safe? | You can cast the column to a CategoricalDtype, so that value_counts returns a count for every declared category, even one that never occurs in the data:
# sample
df = pd.DataFrame({'Status': ['passed']*5 + ['other']*3})
status = pd.CategoricalDtype(['passed', 'failed'], ordered=True)
passed, failed = df['Status'].astype(status).value_counts().sort_index()
Output:
>>> passed
5
>>> failed
0
>>> df['Status'].astype(status).value_counts().sort_index()
Status
passed    5
failed    0
Name: count, dtype: int64
>>> df
   Status
0  passed
1  passed
2  passed
3  passed
4  passed
5   other
6   other
7   other | 2 | 2 |
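A simpler variant of the same idea, as a sketch: value_counts() returns a Series, so .get() with a default of 0 avoids the KeyError entirely when no row failed.
import pandas as pd

df = pd.DataFrame({'Status': ['passed'] * 5})    # no failures at all
counts = df['Status'].value_counts()
pass_res = counts.get('passed', 0)
fail_res = counts.get('failed', 0)               # 0 instead of a KeyError
print(pass_res, fail_res)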
76,398,598 | 2023-6-4 | https://stackoverflow.com/questions/76398598/streamlit-why-does-updating-the-session-state-with-form-data-require-submitting | I appear to fundamentally misunderstand how Streamlit's forms and session_state variable work. Form data is not inserted into the session_state upon submit. However, submitting a second time inserts the data. Updating session_state values always requires submitting the form 2 times. I'd like to know if this is expected behavior if I'm making a mistake if there is a workaround that allows immediate session_state updates on submit EXAMPLE 1: import streamlit as st # View all key:value pairs in the session state s = [] for k, v in st.session_state.items(): s.append(f"{k}: {v}") st.write(s) # Define the form with st.form("my_form"): st.session_state['name'] = st.text_input("Name") st.form_submit_button("Submit") When the page loads, the session state is empty: [] After submitting the form once, the session_state contains "name: ". The key has been added, but not the value. After pressing Submit a second time, the session_state now contains "name: Chris" EXAMPLE 2: Using a callback function import streamlit as st # View all key:value pairs in the session state s = [] for k, v in st.session_state.items(): s.append(f"{k}: {v}") st.write(s) # Define the form with st.form("my_form"): def update(): st.session_state['name'] = name name = st.text_input("Name") st.form_submit_button("Submit", on_click=update) When the page loads, the session state is empty: [] After submitting the form once, the session_state contains "name: ". The key has been added, but not the value. After pressing Submit a second time, the session_state now contains "name: Chris" | The critical part to think about is: where are you writing session state values relative to where the widget is? In particular, you are accessing/displaying session state values before the widget. Try this to see what's happening a bit clearer: import streamlit as st st.write(st.session_state) with st.form('my_form'): st.session_state.A = st.text_input('A') st.text_input('B', key='B') st.form_submit_button('Submit') st.write(st.session_state) You will note for widget A that session state does not update until after the widget. If you want to access data for widget A before widget A, then you end up with your "double submit" problem. For widget B, you can see that session state is correct right away, even when accessed before the widget. What is happening? For a widget (outside of form), when a user makes a change: User enters a new value Session state associated to the widget's key is updated The page reloads The output of the widget function shows the new value If you have the output of a widget assign a value to session state instead of setting the key for the widget directly, then that output data cannot be updated until the widget function is executed again and can output that new value. With a form, the same logic applies except that the new value is not registered until the submit button is clicked. Solutions You can assign a key to a widget directly so that the value is update upon the change, before the page loads. (Or) You can make sure to not access widget data before the widget. If needed you can use containers to allow you to have the widget logically first in the script but displayed later in the page. | 4 | 7 |
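A minimal sketch of the first solution listed in the answer (give the widget a key), so Streamlit itself writes the submitted value into session_state before the page re-runs; widget labels and the key name are only examples.
import streamlit as st

st.write(st.session_state)            # safe to read above the form now

with st.form("my_form"):
    st.text_input("Name", key="name")
    submitted = st.form_submit_button("Submit")

if submitted:
    st.write("Hello", st.session_state["name"])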
76,411,055 | 2023-6-6 | https://stackoverflow.com/questions/76411055/typeerror-dataframe-drop-takes-from-1-to-2-positional-arguments-but-3-were-gi | I have a large file that I'm trying to reduce using dataframe.drop Here's my code: probe_df = pd.read_csv(csv_file,header = 9233, nrows = 4608) # Get rid of stuff c_to_drop = 'Unnamed: ' + str(count_tp+1) probe_df = probe_df.set_index("Chamber ID").drop(c_to_drop,1) When I ran this, I got this error message: probe_df = probe_df.set_index("Chamber ID").drop(c_to_drop,1) TypeError: DataFrame.drop() takes from 1 to 2 positional arguments but 3 were given I don't understand why I got the error and how to fix it. Please help! I'm a newbie and I tried looking online for a solution but I'm still quite new and didn't see anything that matched my issue exactly. | According to the pandas documentation, the source for drop would be DataFrame.drop(labels=None, *, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise') The * represents the end of allowed positional arguments. This indicates that the positional arguments would be labels, and for python's error message this would likely be from the self positional argument that is not directly shown as it is implicitly supplied. Therefore, doing probe_df = probe_df.set_index("Chamber ID").drop(c_to_drop,1) would be feeding 2 positional arguments into a function which only takes 1 (not including self). By changing it to probe_df.set_index("Chamber ID").drop(c_to_drop, axis=1), we convert the 1 from a positional argument to a keyword argument as required by the function. | 10 | 16 |
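The fix from the answer spelled out on a tiny frame, as a sketch; the column names mirror the question and the data values are made up.
import pandas as pd

probe_df = pd.DataFrame({"Chamber ID": [1, 2], "Unnamed: 5": [0, 0], "value": [3, 4]})
c_to_drop = "Unnamed: 5"

probe_df = probe_df.set_index("Chamber ID").drop(c_to_drop, axis=1)
# equivalently, without axis at all:
# probe_df = probe_df.set_index("Chamber ID").drop(columns=c_to_drop)
print(probe_df)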
76,409,916 | 2023-6-5 | https://stackoverflow.com/questions/76409916/python-xarray-valueerror-unrecognized-chunk-manager-dask-must-be-one-of | I am using xarray for combining multiple netcdf files using xarray.open_mfdataset. But I get the error while running the command, below are the commands and error. nc_all = xarray.open_mfdataset(files,combine = 'nested', concat_dim="time") files = glob.glob("/filepath/*") I get the following error- Traceback (most recent call last): File "/home/lsrathore/GLEAM/GLEAM_HPC.py", line 85, in <module> nc_1980_90 = xarray.open_mfdataset(files[1:11],combine = 'nested', concat_dim="time") File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 1038, in open_mfdataset datasets = [open_(p, **open_kwargs) for p in paths] File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 1038, in <listcomp> datasets = [open_(p, **open_kwargs) for p in paths] File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 572, in open_dataset ds = _dataset_from_backend_dataset( File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 367, in _dataset_from_backend_dataset ds = _chunk_ds( File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/backends/api.py", line 315, in _chunk_ds chunkmanager = guess_chunkmanager(chunked_array_type) File "/home/lsrathore/.local/lib/python3.9/site-packages/xarray/core/parallelcompat.py", line 87, in guess_chunkmanager raise ValueError( ValueError: unrecognized chunk manager dask - must be one of: [] What is causing the problem? | The issue was resolved when I downgraded the xarray version to 0.21.1 from 2023.5.0 | 6 | 4 |
76,409,390 | 2023-6-5 | https://stackoverflow.com/questions/76409390/prevent-premature-wrapping-in-2-column-html-report-using-css | I'm building a two-column "report" in HTML and CSS (I'm new to both) and printing it to a PDF via Weasyprint in Python. My problem is that content in the first column is wrapping into the second column prematurely, ultimately resulting in a broken table that should remain in one column: The HTML file calls the CSS file: <html> <head> <meta charset="UTF-8"> <link href="report.css" rel="stylesheet"> <title>Report</title> <meta name="description" content="Report example"> </head> ... at some point, I create a page style in CSS called "satgeom": @page { @top-left { background: #FF874A; content: counter(page); height: 1cm; text-align: center; width: 1cm; } @top-center { background: #FF874A; content: ''; display: block; height: .05cm; opacity: .5; width: 100%; } @top-right { content: string(heading); font-size: 9pt; height: 1cm; vertical-align: middle; width: 100%; } } @page :blank { @top-left { background: none; content: '' } @top-center { content: none } @top-right { content: none } } @page no-chapter { @top-left { background: none; content: none } @top-center { content: none } @top-right { content: none } } @page :first { background: url(report_cover.png) no-repeat center; background-size: cover; margin: 0; } @page chapter { background: #FF874A; margin: 0; @top-left { content: none } @top-center { content: none } @top-right { content: none } } html { color: #393939; font-family: Montserrat; font-size: 11pt; font-weight: 300; line-height: 1.5; } h1 { color: #FF874A; font-size: 38pt; margin: 5cm 2cm 0 2cm; page: no-chapter; width: 100%; } h2, h3, h4 { color: black; font-weight: 400; } h2 { break-before: always; font-size: 28pt; string-set: heading content(); } h3 { font-weight: 300; font-size: 15pt; } h4 { font-size: 13pt; } .column { display: flex; flex-direction: column; flex-basis: 100%; flex: 1; } #satgeom section { columns: 2; column-gap: 1cm; } #satgeom section p { text-align: justify; } /* Table */ .tg {border-collapse:collapse;border-spacing:0;} .tg td{border-color:black;border-style:solid;border-width:1px;word-break:normal;} .tg th{border-color:black;border-style:solid;border-width:1px;word-break:normal;} .tg .tg-zv4m{border-color:#fcbb9a;text-align:left;vertical-align:top} .tg .tg-ofj5{border-color:#fcbb9a;text-align:right;vertical-align:top} and call this style in the HTML. The contents of this page contain a lengthy table and text. My problem is that the table is wrapping prematurely, and I cannot figure out why. Ideally, I would like to wrap the text into the second column after the first column fills up. A snippet of my HTML for the "satgeom" page is as follows: <article id="satgeom"> <h2 id="satgeom-title">Satellite geometry</h2> <h3>Satellite geometry, depiction, and description</h3> <section> <img src="./satellite.png" alt=""> <p> <table class="tg" style="table-layout: fixed; width: 300px"> <colgroup> <col style="width: 150px"> <col style="width: 150px"> </colgroup> <tr> <td class="tg-zv4m">Name</th> <td class="tg-ofj5">Uydu</th> </tr> <tr> <td class="tg-zv4m">Cost [$]</th> <td class="tg-ofj5">600,000,000</th> </tr> <tr> <td class="tg-zv4m">Manufacturer</td> <td class="tg-ofj5">TAI</td> </tr> <tr> <td class="tg-zv4m">Duration [years]</td> <td class="tg-ofj5">15</td> </tr> <tr> <td class="tg-zv4m">Orbit altitude [km]</td> <td class="tg-ofj5">35,785</td> </tr> <tr> <td class="tg-zv4m">Max. 
velocity [km/s]</td> <td class="tg-ofj5">11,051</td> </tr> <tr> <td class="tg-zv4m">Dy mass [kg]</td> <td class="tg-ofj5">1,577</td> </tr> <tr> <td class="tg-zv4m">NORAD ID</td> <td class="tg-ofj5"> - </td> </tr> <tr> <td class="tg-zv4m">Uplink [GHz]</td> <td class="tg-ofj5">7.3 - 18.10</td> </tr> <tr> <td class="tg-zv4m">Downlink [GHz]</td> <td class="tg-ofj5">11.70 - 12.75</td> </tr> <tr> <td class="tg-zv4m">Reference frame</td> <td class="tg-ofj5">Geocentric</td> </tr> <tr> <td class="tg-zv4m">Regime</td> <td class="tg-ofj5">Geostationary</td> </tr> </table> </p> <p> Launched in 2024, the Uydu satellite was manufactured by the Turkish Aerospace Industries for roughly $600,000,000. The satellite's mission is </p> <p> Construction-wise, the Uydu satellite comprises a main body and two solar panel arrays extending laterally to its side. For power consumption, the solar panels can be rotated to face the sun. </p> </section> </article> I've tried adding a div{} to my CSS file and messed with the nowrap property, modifying the CSS file, and have also done a number of Google / SO searches, but haven't found a solution. Honestly, I'm not sure I'm looking for the right phrases. Edit: Stefan's answer below resulted in the "baseball card" solution I was looking for: | Why not use a div with two divs inside it for each column. Something like this: <article id="satgeom"> <h2 id="satgeom-title">Satellite geometry</h2> <h3>Satellite geometry, depiction, and description</h3> <div style="display: flex;"> <div style="width: 50%; padding-right: 2em;"> <img style="width: 100%;" src="./satellite.png" alt=""> <table class="tg" style="table-layout: fixed; width: 100%"> <colgroup> <col style="width: 150px"> <col style="width: 150px"> </colgroup> <tr> <td class="tg-zv4m">Name</td> <td class="tg-ofj5">Uydu</td> </tr> <tr> <td class="tg-zv4m">Cost [$]</td> <td class="tg-ofj5">600,000,000</td> </tr> <tr> <td class="tg-zv4m">Manufacturer</td> <td class="tg-ofj5">TAI</td> </tr> <tr> <td class="tg-zv4m">Duration [years]</td> <td class="tg-ofj5">15</td> </tr> <tr> <td class="tg-zv4m">Orbit altitude [km]</td> <td class="tg-ofj5">35,785</td> </tr> <tr> <td class="tg-zv4m">Max. velocity [km/s]</td> <td class="tg-ofj5">11,051</td> </tr> <tr> <td class="tg-zv4m">Dy mass [kg]</td> <td class="tg-ofj5">1,577</td> </tr> <tr> <td class="tg-zv4m">NORAD ID</td> <td class="tg-ofj5"> - </td> </tr> <tr> <td class="tg-zv4m">Uplink [GHz]</td> <td class="tg-ofj5">7.3 - 18.10</td> </tr> <tr> <td class="tg-zv4m">Downlink [GHz]</td> <td class="tg-ofj5">11.70 - 12.75</td> </tr> <tr> <td class="tg-zv4m">Reference frame</td> <td class="tg-ofj5">Geocentric</td> </tr> <tr> <td class="tg-zv4m">Regime</td> <td class="tg-ofj5">Geostationary</td> </tr> </table> </div> <div style="width: 50%;"> <p> Launched in 2024, the Uydu satellite was manufactured by the Turkish Aerospace Industries for roughly $600,000,000. The satellite's mission is </p> <p> Construction-wise, the Uydu satellite comprises a main body and two solar panel arrays extending laterally to its side. For power consumption, the solar panels can be rotated to face the sun. </p> </div> </div> </article> | 2 | 3 |
76,397,149 | 2023-6-3 | https://stackoverflow.com/questions/76397149/how-can-i-wrap-subplot-columns | I've been struggling with visualizing subplots column wrapping in Seaborn histogram plots (kdeplot, histplot). Tried various things including fig, ax & enumerate(zip(df.columns, ax.flatten()). Here's the dataset for col in df.columns: plt.figure(figsize = (3,3)) sns.histplot(df, x = col, kde = True, bins = 40, hue = 'Dataset', fill = True) plt.show(); How can the plots be done with other seaborn plots or plots with facet wrap functionality? | seaborn.displot with kind='hist' can be used to create subplots / facets, where col_wrap specifies the number of columns. See How to plot in multiple subplots for specifying nrows and ncols when using axes-level plots. See Figure-level vs. axes-level functions The data for 'Female' and 'Male' should be shown separately, because gender statistics are often different, so presenting them together can skew the impression of the data. Plotting separate FacetGrids for each 'Gender' produces the best display option. seaborn histplot and displot output doesn't match Tested in python 3.11.3, pandas 2.0.1, matplotlib 3.7.1, seaborn 0.12.2 import pandas as pd import seaborn as sns # load the dataset downloaded from https://www.kaggle.com/uciml/indian-liver-patient-records df = pd.read_csv('d:/data/kaggle/indian_liver_patient.csv') # convert the data to a long form dfm = df.melt(id_vars=['Gender', 'Dataset']) # plot the data for each gender for gender, data in dfm.groupby('Gender'): g = sns.displot(kind='hist', data=data, x='value', col='variable', hue='Dataset', hue_order=[1, 2], common_norm=False, common_bins=False, multiple='dodge', kde=True, col_wrap=3, height=2.5, aspect=2, facet_kws={'sharey': False, 'sharex': False}, palette='tab10') fig = g.fig fig.suptitle(f'Gender: {gender}', y=1.02) fig.savefig(f'hist_{gender}.png', bbox_inches='tight') The only problem with this option is common_bins=False means the bins of the two hue groups don't align. However, setting it to True causes sharex=False to be ignored, so all of the x-axis limits will be 0 - 2000, as can be seen in this plot. The plot generated by the following code has too many columns col_wrap can't be used if row is also in use. g = sns.displot(kind='hist', data=dfm, x='value', row='Dataset', col='variable', hue='Gender', common_norm=False, common_bins=False, multiple='dodge', kde=True, facet_kws={'sharey': False, 'sharex': False}) g.fig.savefig('hist.png') The following plot does not separate the data by 'Gender'. g = sns.displot(kind='hist', data=dfm, x='value', col='variable', col_wrap=3, hue='Dataset', common_norm=False, common_bins=False, multiple='dodge', kde=True, height=2.5, aspect=2, facet_kws={'sharey': False, 'sharex': False}, palette='tab10') The following option correctly allows common_bins=True to be used. 
import seaborn as sns import numpy as np import pandas as pd # load the dataset df = pd.read_csv('d:/data/kaggle/indian_liver_patient.csv') # convert the data to a long form dfm = df.melt(id_vars=['Gender', 'Dataset']) # iterate through the data for each gender for gen, data in dfm.groupby('Gender'): # create the figure and axes fig, axes = plt.subplots(3, 3, figsize=(11, 5), sharex=False, sharey=False, tight_layout=True) # flatten the array of axes axes = axes.flatten() # iterate through each axes and variable category for ax, (var, sel) in zip(axes, data.groupby('variable')): sns.histplot(data=sel, x='value', hue='Dataset', hue_order=[1, 2], kde=True, ax=ax, common_norm=False, common_bins=True, multiple='dodge', palette='tab10') ax.set(xlabel='', title=var.replace('_', ' ').title()) ax.spines[['top', 'right']].set_visible(False) # remove all the legends except for Aspartate Aminotrnsferase, which will be move to used for the figure for ax in np.append(axes[:5], axes[6:]): ax.get_legend().remove() sns.move_legend(axes[5], bbox_to_anchor=(1, 0.5), loc='center left', frameon=False) fig.suptitle(f'Gender: {gen}', y=1.02) fig.savefig(f'hist_{gen}.png', bbox_inches='tight') Some columns in df have significant outliers. Removing them will improve the histogram visualization. from scipy.stats import zscore from typing import Literal def remove_outliers(data: pd.DataFrame, method: Literal['std', 'z'] = 'std') -> pd.DataFrame: # remove outliers with std or zscore if method == 'std': std = data.value.std() low = data.value.mean() - std * 3 high = data.value.mean() + std * 3 data = data[data.value.between(low, high)] else: data = data[(np.abs(zscore(data['value'])) < 3)] return data # iterate through the data for each gender for gen, data in dfm.groupby('Gender'): ... # iterate through each axes and variable category for ax, (var, sel) in zip(axes, data.groupby('variable')): # remove outliers of specified columns if var in df.columns[2:7]: sel = remove_outliers(sel) sns.histplot(data=sel, x='value', hue='Dataset', hue_order=[1, 2], kde=True, ax=ax, common_norm=False, common_bins=True, multiple='dodge', palette='tab10') .... .... | 2 | 3 |
76,403,216 | 2023-6-5 | https://stackoverflow.com/questions/76403216/how-can-i-generate-a-sine-wave-with-consistent-vibrato | I am trying to create a .wav file which contains a 440Hz sine wave tone, with 10Hz vibrato that varies the pitch between 430Hz and 450Hz. Something must be wrong with my approach, because when I listen to the generated .wav file, it sounds like the "amplitude" of the vibrato (e.g. the highest/lowest pitch reached by the peaks and troughs of the waveform of the vibrato) just progressively increases over time, instead of staying between 430-450Hz. What is wrong with my approach here? Here is some minimal python code which illustrates the issue: import math import wave import struct SAMPLE_RATE = 44100 NOTE_PITCH_HZ = 440.0 # Note pitch, Hz VIBRATO_HZ = 10.0 # Vibrato frequency, Hz VIBRATO_VARIANCE_HZ = 10.0 # Vibrato +/- variance from note pitch, Hz NOTE_LENGTH_SECS = 2.0 # Length of .wav file to generate, in seconds NUM_SAMPLES = int(SAMPLE_RATE * NOTE_LENGTH_SECS) # Generates a single point on a sine wave def _sine_sample(freq: float, sine_index: int): return math.sin(2.0 * math.pi * float(freq) * (float(sine_index) / SAMPLE_RATE)) samples = [] for i in range(NUM_SAMPLES): # Generate sine point for vibrato, map to range -VIBRATO_VARIANCE_HZ:VIBRATO_VARIANCE_HZ vibrato_level = _sine_sample(VIBRATO_HZ, i) vibrato_change = vibrato_level * VIBRATO_VARIANCE_HZ # Mofidy note pitch based on vibrato state note_pitch = NOTE_PITCH_HZ + vibrato_change sample = _sine_sample(note_pitch, i) * 32767.0 # Turn amplitude down to 80% samples.append(int(sample * 0.8)) # Create mono .wav file with a 2 second 440Hz tone, with 10Hz vibrato that varies the # pitch by +/- 10Hz (between 430Hz and 450Hz) with wave.open("vibrato.wav", "w") as wavfile: wavfile.setparams((1, 2, SAMPLE_RATE, NUM_SAMPLES, "NONE", "not compressed")) for sample in samples: wavfile.writeframes(struct.pack('h', sample)) | A more straight forward approach that will accomplish what you want is to use a phasor (linear ramp that goes from 0 to 1 then shoots back down to 0) to look up the sin of that value. Then, you can control the amount the phasor increments (the frequency of vibrato). Here is the code. 
I lowered the sampling rate to make it easier to look at: import math import matplotlib.pyplot as plt SAMPLE_RATE = 10000 NOTE_PITCH_HZ = 100.0 # Note pitch, Hz VIBRATO_HZ = 20.0 # Vibrato frequency, Hz VIBRATO_VARIANCE_HZ = 20.0 # Vibrato +/- variance from note pitch, Hz NOTE_LENGTH_SECS = 2.0 # Length of .wav file to generate, in seconds NUM_SAMPLES = int(SAMPLE_RATE * NOTE_LENGTH_SECS) # Generates a single point on a sine wave def _sine_sample(freq: float, sine_index: int): return math.sin(2.0 * math.pi * float(freq) * (float(sine_index) / SAMPLE_RATE)) phasor_state = 0 phasored_samples = [] samples = [] unmodulated_samples = [] for i in range(NUM_SAMPLES): # Generate sine point for vibrato, map to range -VIBRATO_VARIANCE_HZ:VIBRATO_VARIANCE_HZ vibrato_level = _sine_sample(VIBRATO_HZ, i) vibrato_change = vibrato_level * VIBRATO_VARIANCE_HZ # Mofidy note pitch based on vibrato state note_pitch = NOTE_PITCH_HZ + vibrato_change samples.append(_sine_sample(note_pitch, i)+5) unmodulated_samples.append(_sine_sample(NOTE_PITCH_HZ, i)) phasored_samples.append(math.sin(2*math.pi*phasor_state)+10) phasor_inc = note_pitch/SAMPLE_RATE phasor_state += phasor_inc if phasor_state>=1: phasor_state -=1 plt.plot(unmodulated_samples, label='unmodulated') plt.plot(samples, label='not working') plt.plot(phasored_samples, label='using phasor') plt.legend() plt.show() A zoom in on the output shows you the difference between these approaches: Keep in mind though, that this still isn't quite right. A violinist or vocalist will vibrate up and down in a more or less linear trajectory, not a sinusoidal one. To be more 'correct' (if that is what you are going for, that is) would be to compute the change in phase increment as a triangle wave, not a sinusoidal one. | 2 | 2 |
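The answer's closing remark suggests a linear (triangle-shaped) vibrato trajectory rather than a sinusoidal one; a sketch of that variant, keeping the same phasor lookup as the working code above, with the same example constants.
import math

SAMPLE_RATE = 10000
NOTE_PITCH_HZ = 100.0
VIBRATO_HZ = 20.0
VIBRATO_VARIANCE_HZ = 20.0
NUM_SAMPLES = SAMPLE_RATE * 2

def triangle(freq, i):
    # ramps linearly between -1 and 1 at the given frequency
    t = (freq * i / SAMPLE_RATE) % 1.0
    return 4.0 * abs(t - 0.5) - 1.0

phasor_state = 0.0
samples = []
for i in range(NUM_SAMPLES):
    note_pitch = NOTE_PITCH_HZ + triangle(VIBRATO_HZ, i) * VIBRATO_VARIANCE_HZ
    samples.append(math.sin(2 * math.pi * phasor_state))
    phasor_state = (phasor_state + note_pitch / SAMPLE_RATE) % 1.0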
76,402,412 | 2023-6-4 | https://stackoverflow.com/questions/76402412/how-do-i-calculate-the-total-number-of-likes-for-all-of-a-users-posts-in-django | I'm working on a Django project and I'm trying to calculate the total number of likes for a user based on their posts (i.e if a user has post with 10 likes and another with 5, the total likes must display 15) . I've tried various approaches, but I haven't been able to get it working. Here is my models.py: class User(AbstractUser): username = models.CharField(...) first_name = models.CharField(...) last_name = models.CharField(...) email = models.EmailField(...) bio = models.CharField(...) ... class Post(models.Model): author = models.ForeignKey(User, null=True, on_delete=models.SET_NULL) text = models.CharField(max_length=280) created_at = models.DateTimeField(auto_now_add=True) likes = models.ManyToManyField(User, related_name="user_post") def total_likes(self): return self.likes.count() class Meta: ordering = ["created_at"] And here I am trying to display the total number of likes for all posts from a specific user inside this html file (user profile): ... <p>Total likes: {{ post.likes.all.count }} </p> ... views.py @login_required def show_user(request, user_id): try: user = User.objects.get(id=user_id) posts = Post.objects.filter(author=user) except ObjectDoesNotExist: return redirect('user_list') else: return render(request, 'show_user.html', {'user': user , 'posts' : posts, 'request_user': request.user}) Here's what I've tried so far: Initially, I attempted to use the {{ post.likes.all.count }} template tag to display the total number of likes for each post. However, this didn't work as it didn't return any value on the screen. Next, I tried using the {% add %} template tag within a loop to iterate over all the posts and accumulate the likes count. However, this resulted in a TemplateSyntaxError. I am not sure what the best way to solve this issue is. I would greatly appreciate any guidance or suggestions on how to calculate and display the total number of likes for a user's from all posts in Django. | You can determine the total number of likes for an author with: from django.db.models import Count @login_required def show_user(request, user_id): user = get_object_or_404( User.objects.annotate(total_likes=Count('post__likes')), pk=user_id ) posts = Post.objects.filter(author=user) return render( request, 'show_user.html', {'user': user, 'posts': posts, 'request_user': request.user}, ) in the template you can then render the total likes of the user with: {{ user.total_likes }} You can also boost counting the number of likes per post with: from django.db.models import Count @login_required def show_user(request, user_id): user = get_object_or_404( User.objects.annotate(total_likes=Count('post__likes')), pk=user_id ) posts = Post.objects.annotate(num_likes=Count('likes')).filter(author=user) return render( request, 'show_user.html', {'user': user, 'posts': posts, 'request_user': request.user}, ) In the template you can then render this with: {% for post in posts %} {{ post }} (likes: {{ post.num_likes }}) {% endfor %} Note: It is often better to use get_object_or_404(…) [Django-doc], then to use .get(…) [Django-doc] directly. In case the object does not exists, for example because the user altered the URL themselves, the get_object_or_404(…) will result in returning a HTTP 404 Not Found response, whereas using .get(…) will result in a HTTP 500 Server Error. 
Note: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation. | 2 | 3 |
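A sketch of that last note applied to the question's Post model, referencing settings.AUTH_USER_MODEL instead of the User class directly; field names are the ones from the question.
from django.conf import settings
from django.db import models

class Post(models.Model):
    author = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=models.SET_NULL)
    text = models.CharField(max_length=280)
    created_at = models.DateTimeField(auto_now_add=True)
    likes = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name="user_post")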
76,399,078 | 2023-6-4 | https://stackoverflow.com/questions/76399078/creating-a-typeddict-with-enum-keys | I am trying to create a TypedDict for better code completion and am running into an issue. I want to have a fixed set of keys (an Enum) and the values to match a specific list of objects depending on the key. For example: from enum import Enum class OneObject: pass class TwoObject: pass class MyEnum(Enum): ONE: 1 TWO: 2 I am looking to have something like this: from typing import TypedDict class CustomDict(TypedDict): MyEnum.ONE: list[OneObject] MyEnum.TWO: list[TwoObject] However, I am getting Non-self attribute could not be type hinted and it doesn't really work. What are my options? | This is not compatible with the TypedDict specification as laid out in PEP 589. Let me quote: (emphasis mine) A TypedDict type represents dictionary objects with a specific set of string keys, and with specific value types for each valid key. So using arbitrary enum members for defining TypedDict keys is invalid. While TypedDict does also support an alternative, functional definition syntax and you could theoretically make your enum have the str data type by doing class MyEnum(str, Enum): ..., you would still probably not be able to define a TypedDict with those enum members in a way that your type checker understands. That is because only actual string literals are officially accepted as keys as mentioned in the section on the Use of Final Values and Literal Types. Quote: (again, emphasis mine) Type checkers are only expected to support actual string literals, not final names or literal types, for specifying keys in a TypedDict type definition. [...] The motivation for this is to make type declarations self-contained, and to simplify the implementation of type checkers. In other words, whether something like the following is supported depends entirely on any given type checker: from enum import Enum from typing import TypedDict class OneObject: pass class TwoObject: pass class MyEnum(str, Enum): ONE = "1" TWO = "2" CustomDict = TypedDict( "CustomDict", {MyEnum.ONE: list[OneObject], MyEnum.TWO: list[TwoObject]} ) Mypy (currently) does not and gives the output: error: Invalid TypedDict() field name. (By the way, I tested it with Final variables as keys and those are also rejected.) So depending on what your use case is, you will probably have to bite the bullet and explicitly type out the enum/key names again or just not use an enum for that in the first place, as suggested by @artem in his answer. | 8 | 3 |
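A sketch of the "bite the bullet" option from the end of the answer: spell the keys out as string literals, which is what the TypedDict spec supports; the literal key names "one"/"two" are only illustrative replacements for the enum members.
from typing import TypedDict

class OneObject: ...
class TwoObject: ...

class CustomDict(TypedDict):
    one: list[OneObject]
    two: list[TwoObject]

d: CustomDict = {"one": [OneObject()], "two": [TwoObject()]}
If you want to keep the enum as the key type, an ordinary annotation such as dict[MyEnum, list[OneObject] | list[TwoObject]] is also possible, at the cost of losing per-key value precision.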
76,398,117 | 2023-6-3 | https://stackoverflow.com/questions/76398117/how-to-override-the-default-200-response-in-fastapi-docs | I have this small fastapi application import uvicorn from fastapi import FastAPI, APIRouter from fastapi import Path from pydantic import BaseModel from starlette import status app = FastAPI() def test(): print("creating the resource") return "Hello world" router = APIRouter() class MessageResponse(BaseModel): detail: str router.add_api_route( path="/test", endpoint=test, methods=["POST"], responses={ status.HTTP_201_CREATED: {"model": MessageResponse} } ) app.include_router(router) def main(): uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001) if __name__ == "__main__": main() when I check the docs on http://127.0.0.1:8001/docs#/default/test_test_post, in the list of responses in the docs, I see two responses: 200 and 201 I don't have any 200 responses here. I don't want 200 to be shown for me in the docs. Here is the fast api auto-generated openapi.json file { "openapi": "3.0.2", "info": {"title": "FastAPI", "version": "0.1.0"}, "paths": {"/test": { "post": {"summary": "Test", "operationId": "test_test_post", "responses": { "200": { "description": "Successful Response", "content": {"application/json": {"schema": {}}} }, "201": { "description": "Created", "content": {"application/json": {"schema": {"$ref": "#/components/schemas/MessageResponse"}}}}}}} }, "components": {"schemas": { "MessageResponse": {"title": "MessageResponse", "required": ["detail"], "type": "object", "properties": {"detail": {"title": "Detail", "type": "string"}}}}}} I should not be seeing "description": "Successful Response", "content": {"application/json": {"schema": {}}} }, What should I do? UPDATE: this one also did not work import uvicorn from fastapi import FastAPI, APIRouter from pydantic import BaseModel from starlette import status from starlette.responses import Response app = FastAPI() def test(response: Response): print("creating the resource") response.status_code = 201 return "Hello world" router = APIRouter() class MessageResponse(BaseModel): detail: str router.add_api_route( path="/test", endpoint=test, methods=["POST"], response_model=None, responses={ 200: {}, status.HTTP_201_CREATED: {"model": MessageResponse} } ) app.include_router(router) def main(): uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001) if __name__ == "__main__": main() | The default response can be set with the status_code parameter, and the default response model can also be directly controlled by the return type. This example shows you how to do it with the decorator paradigm which is recommended over manually adding API routes. class MessageResponse(BaseModel): detail: str @router.post('/test', status_code=201) def test() -> MessageResponse: print("creating the resource") return "Hello world" If you really need to do it with your current structure, you can simply define status_code in the add_api_route function. router.add_api_route( path="/test", endpoint=test, methods=["POST"], status_code=201, responses={ status.HTTP_201_CREATED: {"model": MessageResponse} } ) | 2 | 6 |
76,397,098 | 2023-6-3 | https://stackoverflow.com/questions/76397098/how-to-make-a-variable-in-global-scope-in-robot-framework | I have create a small robot framework test suit which will communicate with trace32 Lauterbach. My idea is to run different functions name using a loop. Every loop, it will make a breakpoint in the Trace32 later batch. I have written a simple python script as library in the robot framework. test.robot file import os *** Settings *** Documentation simple test script to control Trace32 Library Collections Library can.Trace32 Suite Setup Suite Teardown *** Variables *** ${temp} 1 *** Test Cases *** Check Input and Output [Documentation] test script [Setup] #Retrive Data . This list has 5 values ${MainList} Create List #start debugger start Debugger #connect debugger Connect Debugger #Iterate 5 times FOR ${UserAttribute} IN @{MainList} #sleep 1 sec Sleep 1 seconds #call for breakpoint break Debugger ${temp} +=1 END Disconnect Debugger [Teardown] and the trace 32 script file: import time import ctypes from ctypes import c_void_p import enum T32_DEV = 1 class Trace32: def start_Debugger(self): self.t32api = ctypes.cdll.LoadLibrary('D:/test/api/python/t32api64.dll') self.t32api.T32_Config(b"NODE=",b"localhost") self.t32api.T32_Config(b"PORT=",b"20000") self.t32api.T32_Config(b"PACKLEN=",b"1024") rc = self.t32api.T32_GetChannelSize() ch1 = ctypes.create_string_buffer(rc) self.t32api.T32_GetChannelDefaults(ctypes.cast(ch1,ctypes.c_void_p)) ch2 = ctypes.create_string_buffer(rc) self.t32api.T32_GetChannelDefaults(ctypes.cast(ch2,ctypes.c_void_p)) self.t32api.T32_SetChannel(ctypes.cast(ch2,c_void_p)) def Connect_Debugger(self): rc = self.t32api.T32_Init() rc = self.t32api.T32_Attach(T32_DEV) def breakpoint_Debugger(self): rc = self.t32api.T32_Ping() time.sleep(2) rc = self.t32api.T32_Cmd(b"InterCom M7_0 Break") time.sleep(3) rc = self.t32api.T32_Cmd(b"InterCom M7_0 Go") time.sleep(2) rc = self.t32api.T32_Cmd(b"InterCom M7_0 break.Set My_func") time.sleep(2) def Disconnect_Debugger(self): rc = self.t32api.T32_Exit() In the robot file, I am calling start Debugger and Connect Debugger function to start and connect the debugger. I want self.t32api to be global. So that I can call break_Debugger many times to put a breakpoint. But Unfortunately, I can only put breakpoint in the first iteration. In second iteration, the breakpoint is not working. How can I make self.t32api global until the robot file executed completely? | Just initialize it in the constructor of the Trace32 class, so it will persist as long as Trace32 object exist, we can then also remove start_Debugger() class Trace32: def __init__(self): self.t32api = ctypes.cdll.LoadLibrary('D:/test/api/python/t32api64.dll') def start_Debugger(self): self.t32api.T32_Config(b"NODE=",b"localhost") self.t32api.T32_Config(b"PORT=",b"20000") self.t32api.T32_Config(b"PACKLEN=",b"1024") rc = self.t32api.T32_GetChannelSize() ch1 = ctypes.create_string_buffer(rc) self.t32api.T32_GetChannelDefaults(ctypes.cast(ch1,ctypes.c_void_p)) ch2 = ctypes.create_string_buffer(rc) self.t32api.T32_GetChannelDefaults(ctypes.cast(ch2,ctypes.c_void_p)) self.t32api.T32_SetChannel(ctypes.cast(ch2,c_void_p)) | 3 | 2 |
76,395,953 | 2023-6-3 | https://stackoverflow.com/questions/76395953/regex-to-catch-email-addresses-in-email-header | I'm trying to parse a To email header with a regex. If there are no <> characters then I want the whole string otherwise I want what is inside the <> pair. import re re_destinatario = re.compile(r'^.*?<?(?P<to>.*)>?') addresses = [ 'XKYDF/ABC (Caixa Corporativa)', 'Fulano de Tal | Atlantica Beans <[email protected]>' ] for address in addresses: m = re_destinatario.search(address) print(m.groups()) print(m.group('to')) But the regex is wrong: ('XKYDF/ABC (Caixa Corporativa)',) XKYDF/ABC (Caixa Corporativa) ('Fulano de Tal | Atlantica Beans <[email protected]>',) Fulano de Tal | Atlantica Beans <[email protected]> What am I missing? | You may use this regex: <?(?P<to>[^<>]+)>?$ RegEx Demo RegEx Demo: <?: Match an optional < (?P<to>[^<>]+): Named capture group to to match 1+ of any characters that are not < and > >?: Match an optional > $: End Code Demo Code: import re re_destinatario = re.compile(r'<?(?P<to>[^<>]+)>?$') addresses = [ 'XKYDF/ABC (Caixa Corporativa)', 'Fulano de Tal | Atlantica Beans <[email protected]>' ] for address in addresses: m = re_destinatario.search(address) print(m.group('to')) Output: XKYDF/ABC (Caixa Corporativa) [email protected] | 2 | 4 |
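If you'd rather not maintain the regex, the same "take what's inside <>, otherwise the whole string" rule can be written as a plain-Python helper; a small sketch using the sample strings from the question:

```python
def extract_to(value: str) -> str:
    # Take whatever sits between the last '<' and the following '>';
    # otherwise return the whole header value unchanged.
    start = value.rfind("<")
    end = value.rfind(">")
    if start != -1 and end > start:
        return value[start + 1:end]
    return value


addresses = [
    "XKYDF/ABC (Caixa Corporativa)",
    "Fulano de Tal | Atlantica Beans <[email protected]>",
]
for address in addresses:
    print(extract_to(address))
```

For well-formed headers, the standard library's email.utils.parseaddr / getaddresses are also worth a look, though their handling of free-form strings like the first sample differs from the rule above.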
76,371,195 | 2023-5-31 | https://stackoverflow.com/questions/76371195/how-to-make-a-json-post-request-from-java-client-to-python-fastapi-server | I send a post request from a java springboot application like this: String requestBody = gson.toJson(sbert); System.out.println(requestBody); // If I print this, and use this in postman it works! HttpRequest add_request = HttpRequest.newBuilder() .uri(URI.create("http://localhost:4557/sbert_similarity")) .POST(HttpRequest.BodyPublishers.ofString(requestBody)) .header("Content-Type", "application/json") .build(); HttpResponse<String> response = client.sendAsync(add_request, HttpResponse.BodyHandlers.ofString()).get(); This is essentially what my fastapi service looks like: from fastapi import FastAPI, Request from pydantic import BaseModel from sentence_transformers import SentenceTransformer, util model = SentenceTransformer('all-MiniLM-L6-v2') import pprint as pp import uvicorn from typing import Any app = FastAPI() class Item(BaseModel): user_input: str document: str class ResponseItem(BaseModel): similarity: float @app.post("/sbert_similarity") def check_similarity(item: Item) -> ResponseItem: pp.pprint(item) sentences1 = [item.user_input] sentences2 = [item.document] cosine_score = 0 embeddings1 = model.encode(sentences1, convert_to_tensor=True) embeddings2 = model.encode(sentences2, convert_to_tensor=True) cosine_score = util.cos_sim(embeddings1, embeddings2) return { "similarity" : cosine_score } if __name__=="__main__": uvicorn.run("similarity_server:app",host='0.0.0.0', port=4557, reload=True, workers=3) When I print out the returned json object I get: {"detail":[{"loc":["body"],"msg":"field required","type":"value_error.missing"}]} Which doesn't make sense considering the same json object I use for the post request works perfectly fine when I use it in postman. My fastapi server says: INFO: 127.0.0.1:65066 - "POST /sbert_similarity HTTP/1.1" 422 Unprocessable Entity Anyone know what's going on? Thanks! Edit: So just a quick update it seems like from this code: @app.post("/sbert_similarity_v3") async def check_similarity(request: Request) -> Any: content_type = request.headers.get('Content-Type') if content_type is None: return 'No Content-Type provided.' elif content_type == 'application/json': try: json = await request.json() return json except JSONDecodeError: return 'Invalid JSON data.' else: return 'Content-Type not supported.' there was a JSONDecodeError for all post requests sent from the Java Springboot application, I still don't know why that is though? Second Edit: So, now I have to ask why is the http client sending a null object as opposed to it's actual json object? | The example below deomonstrates how to make a JSON POST request using HttpURLConnection. The issue in your code might or might not be with Apache's HttpClient (you would need to test this on your own), but might be originated from the requestBody you send to the server; hence, I would suggest you manually specify a JSON string first (before using other Java serialization libraries, such as gson, to convert Java Objects into JSON), as shown below, and see how that works. 
Working Example Python FastAPI server (taken from this answer) from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class User(BaseModel): user: str @app.post('/') def main(user: User): return user Java Client example import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.io.OutputStream; import java.net.HttpURLConnection; import java.net.URL; import java.nio.charset.StandardCharsets; public class PostJSON { public static void main(String[] args) throws IOException { URL url = new URL("http://127.0.0.1:8000/"); HttpURLConnection con = (HttpURLConnection) url.openConnection(); con.setRequestMethod("POST"); con.setRequestProperty("Content-Type", "application/json"); con.setRequestProperty("Accept", "application/json"); con.setDoOutput(true); String jsonInputString = "{\"user\": \"foo\"}"; try (OutputStream os = con.getOutputStream()) { byte[] input = jsonInputString.getBytes(StandardCharsets.UTF_8); os.write(input, 0, input.length); } try (BufferedReader br = new BufferedReader(new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) { StringBuilder response = new StringBuilder(); String responseLine = null; while ((responseLine = br.readLine()) != null) { response.append(responseLine.trim()); } System.out.println(con.getResponseCode() + " " + response); } } } | 2 | 2 |
76,392,744 | 2023-6-2 | https://stackoverflow.com/questions/76392744/can-you-use-the-programs-you-pip-install-in-the-command-line | As a Python beginner, I was downloading the OpenAI's Whisper with the following command: pip install -U openai-whisper, and noticed that you can use Whisper in both Python and the Command-line. To my knowledge, pip install installs Python packages, so should only be available within Python, but it seems like you can use Whisper in the command line? In summary, why does pip install-ing Python packages let you use the package in the command line? | When you install something with pip, if the package defines an entry point, it creates a command-line wrapper, and adds it to your Python installation's bin. Your PATH environment variable should include the folder Python's bin, so you can run the package from the command-line. https://packaging.python.org/en/latest/specifications/entry-points/ | 2 | 5 |
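To make that concrete, here is a hedged sketch of how a package would declare such a console-script entry point with setuptools; the package and function names (mytool, main) are made up for illustration.

```python
# setup.py of a hypothetical package "mytool" whose mytool.py defines main()
from setuptools import setup

setup(
    name="mytool",
    version="0.1.0",
    py_modules=["mytool"],
    entry_points={
        "console_scripts": [
            # <command you type> = <module>:<function the wrapper will call>
            "mytool = mytool:main",
        ],
    },
)
```

After pip install ., pip writes a small mytool executable into the environment's bin/ (Scripts\ on Windows) that imports the module and calls main(); in modern projects the same mapping can be declared under [project.scripts] in pyproject.toml.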
76,389,832 | 2023-6-2 | https://stackoverflow.com/questions/76389832/polars-how-to-add-two-series-that-contain-lists-as-elements | Trying to add, subtract, two Series that contains datatype List[i64]. The operation seems to be not supported. a = pl.Series("a",[[1,2],[2,3]]) b = pl.Series("b",[[4,5],[6,7]]) c = a+b this gives the error: PanicException: `add` operation not supported for dtype `list[i64]` I would expect a element-wise sum, like would happen with numpy array for example: c = [[5,7],[8,10]] What's the correct syntax to add two series of lists? | you can do the following: c = (a.explode() + b.explode()).reshape((2,-1)).alias('c') shape: (2,) Series: 'a' [list[i64]] [ [5, 7] [8, 10] ] Final thoughts: if your list has a fixed size, then you might consider using the new Polars Array datatype. | 3 | 4 |
76,385,931 | 2023-6-1 | https://stackoverflow.com/questions/76385931/validate-csv-by-checking-if-enumeration-columns-contains-any-invalid-coded-value | We recieve many different csv files from external labs and centers. When recieving such a file, we first need to do some QA checks before further processing. So make sure the data is correct, at least on a technical level. We have some Python scripts to check the number of columns, check date values, min/max range etc. But now we also want to check wether the enumerated columns are correct. So for example, if a column visit is a coded value and may only contain baseline, fup_6_m, fup_12_m then it shouldn't contain anything else like fup_36_m. We have the metadata specifications, so the column names and the lists of coded values (aka enumeration) are known beforehand. This is the Python script I've got so far: # check if coded values are correct import pandas as pd import io ## load data from csv files ##df = pd.read_csv (r'patlist_mcl2017.csv', sep = ",", decimal=".") # TESTING: create data frame from text str_patients = """patid,dob,sex,height,score,visit 1072,16-01-1981,M,154,1,fup_12_m 1091,20-12-1991,M,168,4,baseline 1126,25-12-1999,M,181,3,fup_6_m 1139,14-04-1980,Y,165,1,baseline 1171,05-11-1984,M,192,2,fup_12_m 1237,17-08-1983,F,170,3,fup_6_m 1334,26-08-1985,F,160,5,fup_6_m 1365,14-09-1976,M,184,3,fup_24_m 1384,28-12-1993,F,152,1,baseline 1456,27-09-1998,F,164,5,fup_12_m """ df = pd.read_csv(io.StringIO(str_patients), sep = ",", decimal=".") print(df) # allowed values for enumeration columnms allowed_enum = { 'sex': ['M', 'F'], 'score': [0, 1, 2, 3, 4], 'visit': ['baseline', 'fup_6_m', 'fup_12_m'] } # check enumeration for column_name, allowed_values in allowed_enum.items(): df_chk = df[~df[column_name].isin(allowed_values)].groupby(column_name).size().reset_index(name='Count') if not df_chk.empty: print("Found invalid values for column '%s':" % column_name) print(df_chk) It works and the output is like this: Found invalid values for column 'sex': sex Count 0 Y 1 Found invalid values for column 'score': score Count 0 5 2 Found invalid values for column 'visit': visit Count 0 fup_24_m 1 But the different files can contain many columns, and for better reporting we'd like to get the output as one dataframe, so something like this: Column_name Invalid Count 0 Sex Y 1 1 Score 5 2 2 visit fup_24_m 1 So my question is: What is the best way to collect the invalid values in a dataframe, like above? Or, is there maybe a better way for checking/validating these kind of coded values? | You could try ... dfs = { column_name: df[~df[column_name].isin(allowed_values)] .value_counts(subset=column_name) .to_frame().reset_index(names="Invalid") for column_name, allowed_values in allowed_enum.items() } out = pd.concat(dfs, names=("Column_name", None)).droplevel(1) to get Invalid count Column_name sex Y 1 score 5 2 visit fup_24_m 1 for the sample dataframe (another .reset_index would give you the format in the question). Or, similiar to Zach Young's proposal, you could do ... columns = ( df.loc[~df[column_name].isin(allowed_values), column_name] for column_name, allowed_values in allowed_enum.items() ) out = pd.concat(columns, axis=1, sort=True) to get a sub-dataframe which contains only the invalid values sex score visit 3 Y NaN NaN 6 NaN 5.0 NaN 7 NaN NaN fup_24_m 9 NaN 5.0 NaN | 3 | 2 |
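If the groupby/concat pipelines above feel heavy, the same single report frame can be built with an explicit loop that collects plain records; a sketch that reuses df and allowed_enum as defined in the question:

```python
import pandas as pd


def invalid_value_report(df: pd.DataFrame, allowed_enum: dict) -> pd.DataFrame:
    records = []
    for column_name, allowed_values in allowed_enum.items():
        # rows whose value is not in the allowed list for this column
        bad = df.loc[~df[column_name].isin(allowed_values), column_name]
        for value, count in bad.value_counts().items():
            records.append(
                {"Column_name": column_name, "Invalid": value, "Count": count}
            )
    return pd.DataFrame(records, columns=["Column_name", "Invalid", "Count"])
```

Calling invalid_value_report(df, allowed_enum) should yield the Column_name / Invalid / Count layout asked for.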
76,389,849 | 2023-6-2 | https://stackoverflow.com/questions/76389849/pandas-drop-duplicates-with-a-tolerance-value-for-duplicates | What I have is two Pandas dataframes of coordinates in xyz-format. One of these contains points that should be masked in the other one, but the values are slightly offset from each other, meaning a direct match with drop_duplicates is not possible. My idea was to round the values to the nearest significant number, but this also does not always work, since if some values are rounded to different numbers, they won't match and won't be removed. For example, if one point lies at x = 149 and another at x = 151, rounding them to the nearest hundred gives different values. My code looks something like this: import pandas as pd import numpy as np df_test_1 = pd.DataFrame(np.array([[123, 449, 756.102], [406, 523, 543.089], [140, 856, 657.24], [151, 242, 124.42]]), columns = ['x', 'y', 'z']) df_test_2 = pd.DataFrame(np.array([[123, 451, 756.099], [404, 521, 543.090], [139, 859, 657.23], [633, 176, 875.76]]), columns = ['x', 'y', 'z']) df_test_3 = pd.concat([df_test_1, df_test_2]) df_test_3['xr'] = df_test_3.x.round(-2) df_test_3['yr'] = df_test_3.y.round(-2) df_test_3['zr'] = df_test_3.z.round(1) df_test_3 = df_test_3.drop_duplicates(subset=['xr', 'yr', 'zr'], keep=False) What I want is to remove duplicates if the columns 'xr' and 'yr' are duplicates +-100 and 'zr' duplicates +-0.1. For example, if two coordinates are rounded to (100, 300, 756.2) and (200, 400, 756.1), they should be considered duplicates and should be removed. Any ideas are appreciated, thanks! | You can numpy broadcasting: # Convert to numpy vals1 = df_test_1.values vals2 = df_test_2.values # Remove from df_test_1 arr1 = np.abs(vals1 - vals2[:, None]) msk1 = ~np.any(np.all(arr1 < [100, 100, 0.1], axis=2), axis=1) # Remove from df_test_2 arr2 = np.abs(vals2 - vals1[:, None]) msk2 = ~np.any(np.all(arr1 < [100, 100, 0.1], axis=2), axis=1) out = pd.concat([df_test_1[msk1], df_test_2[msk2]], ignore_index=True) Output: >>> out x y z 0 151.0 242.0 124.42 1 633.0 176.0 875.76 Comment of @James This removes left vs right and right vs left, but not duplicates within left vs left or right vs right. In this case: df_test_3 = pd.concat([df_test_1, df_test_2]) arr = df_test_3.values msk = np.abs(arr - arr[:, None]) < [100, 100, 0.1] out = df_test_3[np.sum(np.all(msk, axis=2), axis=1) == 1] print(out) # Output x y z 3 151.0 242.0 124.42 3 633.0 176.0 875.76 | 2 | 5 |
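For large tables the pairwise broadcast above needs O(n·m) memory. If SciPy is available, a k-d tree keeps this manageable: scale each column by its tolerance so a single radius covers all three, then drop rows in one frame that have a neighbour in the other. A hedged sketch reusing df_test_1 / df_test_2 from the question (note the tree test is "within tolerance", i.e. <= rather than the strict < used above):

```python
import numpy as np
from scipy.spatial import cKDTree

tol = np.array([100.0, 100.0, 0.1])                # per-column tolerances

a = df_test_1[["x", "y", "z"]].to_numpy() / tol    # rescale so radius 1 works
b = df_test_2[["x", "y", "z"]].to_numpy() / tol

tree = cKDTree(b)
# For every df_test_1 row, find df_test_2 rows inside the tolerance box;
# p=np.inf makes the radius a per-axis (Chebyshev) bound.
hits = tree.query_ball_point(a, r=1.0, p=np.inf)
keep = np.array([len(h) == 0 for h in hits])

unique_in_1 = df_test_1[keep]
```

The symmetric mask for df_test_2 follows by swapping the roles of a and b.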
76,389,663 | 2023-6-2 | https://stackoverflow.com/questions/76389663/algorithm-to-list-all-combinations-from-a-table-where-data-is-present-or-null | I have an Excel file (which can optionally be loaded into a database, and into an array of arrays of course) with values such as: A B C NULL NULL zxy xyz xzy NULL xyz xzy xyy yzy yyx yxy NULL NULL xyx xyz NULL yxx and so on. There are thousands of values. Is there any known algorithm to come up with all possible combinations of rows where values are not "NULL"? For example for the table above the result would be: A B C Number of occurrences NULL NULL * 2 * * NULL 1 * * * 2 * NULL * 1 I feel like it is a typical task, but cannot find the algorithm anywhere. Would appreciate your help a lot. | If you are tempted to use pandas : #pip install pandas import pandas as pd df = pd.read_excel("file.xlsx") out = ( df.replace(".+", "*", regex=True).fillna("NULL") .groupby(list(df), group_keys=False, sort=False) .size().reset_index(name="Number of occurrences") ) Output : print(out) A B C Number of occurrences 0 NULL NULL * 2 1 * * NULL 1 2 * * * 2 3 * NULL * 1 | 2 | 2 |
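The same tally can be produced without pandas at all; a small pure-Python sketch using collections.Counter, assuming the spreadsheet rows are already loaded as lists with missing cells as None (values taken from the question's example):

```python
from collections import Counter

rows = [
    [None, None, "zxy"],
    ["xyz", "xzy", None],
    ["xyz", "xzy", "xyy"],
    ["yzy", "yyx", "yxy"],
    [None, None, "xyx"],
    ["xyz", None, "yxx"],
]

# Map every row to its presence pattern: '*' if a value exists, 'NULL' otherwise.
patterns = Counter(
    tuple("NULL" if v is None else "*" for v in row) for row in rows
)

for pattern, count in patterns.items():
    print(*pattern, count)
```

This prints each pattern with its number of occurrences, matching the table asked for in the question.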
76,389,309 | 2023-6-2 | https://stackoverflow.com/questions/76389309/how-to-capture-words-with-letters-separated-by-a-consistent-symbol-in-python-reg | I am trying to write a Python regex pattern that will allow me to capture words in a given text that have letters separated by the same symbol or space. For example, in the text "This is s u p e r and s.u.p.e.r and s👌u👌p👌e👌r and s!u.p!e.r", my goal is to extract the words "s u p e r", "s.u.p.e.r", and s👌u👌p👌e👌r. However, I want to exclude "s!u.p!e.r" because it does not have the same consistent separating symbol within the word. I'm currently using the following: x="This is s u p e r and s.u.p.e.r and s👌u👌p👌e👌r and s!u.p!e.r" pattern = r"(?:\b\w[^\w\d]){2,}" re.findall(pattern, x) ['s u p e r ', 's.u.p.e.r ', 's👌u👌p👌e👌r ', 's!u.p!e.'] I'm just curious if it's possible to exclude the cases that do not have the same symbol. | You may consider using pattern = r"(?<!\S)\w(?=(\W))(?:\1\w)+(?!\S)" results = [m.group() for m in re.finditer(pattern, x)] See the Python demo and the regex demo. import re x="This is s u p e r and s.u.p.e.r and s👌u👌p👌e👌r and s!u.p!e.r" pattern = r"(?<!\S)\w(?=(\W))(?:\1\w)+(?!\S)" print([m.group() for m in re.finditer(pattern, x)]) # => ['s u p e r', 's.u.p.e.r', 's👌u👌p👌e👌r'] Pattern details (?<!\S) - left-hand whitespace boundary \w - a word char (?=(\W)) - a positive lookahead that requires the next char to e a non-word char capturing it into Group 1 (\1) (?:\1\w)+ - one or more repetitions of the same char as captured in Group 1 and then a single word char (?!\S) - right-hand whitespace boundary | 3 | 3 |
76,389,395 | 2023-6-2 | https://stackoverflow.com/questions/76389395/attributeerror-module-numpy-has-no-attribute-long | I am trying to find 9 raised to the power of 19 using numpy. I am using numpy 1.24.3 This is the code I am trying: import numpy as np np.long(9**19) This is the error I am getting: AttributeError: module 'numpy' has no attribute 'long' | Sadly, numpy.long was deprecated in numpy 1.20 and removed in numpy 1.24. If you want the result you have to use numpy.longlong import numpy as np np.longlong(9**19) #output 1350851717672992089 https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations | 8 | 14 |
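Worth adding that no special numpy type is needed for an exact result: Python ints are arbitrary precision, and 9**19 ≈ 1.35e18 still fits in a signed 64-bit integer, so np.int64 works as well. A quick sketch:

```python
import numpy as np

print(9**19)               # plain Python int, arbitrary precision
print(np.int64(9**19))     # fits: 9**19 < 2**63 - 1
print(np.longlong(9**19))  # the replacement suggested in the answer
```

All three print 1350851717672992089.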
76,388,105 | 2023-6-2 | https://stackoverflow.com/questions/76388105/efficient-algorithm-to-calculate-the-most-right-non-zero-digit-of-a-numbers-fac | Calculate the most right non-zero digit of n factorial efficiently I want to calculate the right most digit of a given number's factorial and print it. What I've done so far is: import math n = int(input()) fact = math.factorial(n) print(str(fact).rstrip('0')[-1]) but I still get time limits and I look for faster solutions. It's worth noting that I must use python to solve this problem. Also, I shall point out that n is from 1 to 65536, the time limit is 0.5 seconds and I have 256 megabytes of memory. | There is a neat recursive formula you can use: let D(n) be the last non-zero digit in n! If n<10, use a lookup table If the second last digit of n is odd, D(n) = 4 * D(n//5) * D(unit digit of n) If the second last digit of n is even, D(n) = 6 * D(n//5) * D(Unit digit of n) See this math stackexchange post for a proof. Translating it into code: def last_nonzero_factorial_digit(n): lookup = [1, 1, 2, 6, 4, 2, 2, 4, 2, 8] if n < 10: return lookup[n] if ((n // 10) % 10) % 2 == 0: return (6 * last_nonzero_factorial_digit(n // 5) * lookup[n % 10]) % 10 else: return (4 * last_nonzero_factorial_digit(n // 5) * lookup[n % 10]) % 10 On my laptop, this version runs ~14,000 times faster on a 5-digit number. | 5 | 5 |
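If you'd rather avoid recursion (for example to skip Python's call overhead in a tight loop), the same formula unrolls into a loop; a sketch that should be equivalent to the answer's function:

```python
def last_nonzero_factorial_digit(n: int) -> int:
    lookup = [1, 1, 2, 6, 4, 2, 2, 4, 2, 8]
    result = 1
    while n >= 10:
        # multiplier is 6 when the tens digit of n is even, otherwise 4
        multiplier = 6 if ((n // 10) % 10) % 2 == 0 else 4
        result = (result * multiplier * lookup[n % 10]) % 10
        n //= 5
    return (result * lookup[n]) % 10


print(last_nonzero_factorial_digit(10))  # 10! = 3628800 -> 8
```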
76,385,999 | 2023-6-1 | https://stackoverflow.com/questions/76385999/how-to-export-a-pydantic-model-instance-as-yaml-with-url-type-as-string | I have a Pydantic model with a field of type AnyUrl. When exporting the model to YAML, the AnyUrl is serialized as individual field slots, instead of a single string URL (perhaps due to how the AnyUrl.__repr__ method is implemented). For example: from pydantic import BaseModel, AnyUrl import yaml class MyModel(BaseModel): url: AnyUrl data = {'url': 'https://www.example.com'} model = MyModel.parse_obj(data) y = yaml.dump(model.dict(), indent=4) print(y) Produces: url: !!python/object/new:pydantic.networks.AnyUrl args: - https://www.example.com state: !!python/tuple - null - fragment: null host: www.example.com host_type: domain password: null path: null port: null query: null scheme: https tld: com user: null Ideally, I would like the serialized YAML to contain https://www.example.com instead of individual fields. I have tried to override the __repr__ method of AnyUrl to return the AnyUrl object itself, as it extends the str class, but no luck. | Unfortunately, the pyyaml documentation is just horrendous, so seemingly elemental things like customizing (de-)serialization are a pain to figure out properly. But there are essentially two ways you could solve this. Option A: Subclass YAMLObject You had the right right idea of subclassing AnyUrl, but the __repr__ method is irrelevant for YAML serialization. For that you need to do three things: Inherit from YAMLObject, define a custom yaml_tag, and override the to_yaml classmethod. Then pyyaml will serialize this custom class (that inherits from both AnyUrl and YAMLObject) in accordance with what you define in to_yaml. The to_yaml method always receives exactly two arguments: A yaml.Dumper instance with built-in capabilities to serialize standard types (via methods like represent_str for example) and the actual data to be serialized. To avoid adding/overriding additional methods, you can leverage the fact that AnyUrl inherits from string and the underlying str.__new__ method actually receives the full URL during construction. Therefore the str.__str__ method will return that "as is". from pydantic import AnyUrl, BaseModel from yaml import Dumper, ScalarNode, YAMLObject, dump, safe_load class Url(AnyUrl, YAMLObject): yaml_tag = "!Url" @classmethod def to_yaml(cls, dumper: Dumper, data: str) -> ScalarNode: return dumper.represent_str(str.__str__(data)) class MyModel(BaseModel): foo: int = 0 url: Url obj = MyModel.parse_obj({"url": "https://www.example.com"}) print(obj) serialized = dump(obj.dict()).strip() print(serialized) deserialized = MyModel.parse_obj(safe_load(serialized)) print(deserialized == obj and isinstance(deserialized.url, Url)) Output: foo=0 url=Url('https://www.example.com', scheme='https', host='www.example.com', tld='com', host_type='domain') foo: 0 url: https://www.example.com True Option B: Register a representer function for AnyUrl You can avoid defining your own subclass and instead globally register a function that defines how instances of AnyUrl should be serialized, by using the yaml.add_representer function. That function takes two mandatory arguments: The class for which you want to define your custom serialization behavior and the representer function that defines that serialization behavior. The representer function essentially has to have the same signature as the YAMLObject.to_yaml classmethod presented in option A, i.e. 
it takes a Dumper instance and the data to be serialized as arguments. from pydantic import AnyUrl, BaseModel from yaml import Dumper, ScalarNode, add_representer, dump, safe_load def url_representer(dumper: Dumper, data: AnyUrl) -> ScalarNode: return dumper.represent_str(str.__str__(data)) add_representer(AnyUrl, url_representer) class MyModel(BaseModel): foo: int = 0 url: AnyUrl obj = MyModel.parse_obj({"url": "https://www.example.com"}) print(obj) serialized = dump(obj.dict()).strip() print(serialized) deserialized = MyModel.parse_obj(safe_load(serialized)) print(deserialized == obj and isinstance(deserialized.url, AnyUrl)) Output is the same as with the code from option A. The benefit of this approach is that it involves less code and potential namespace collisions between the two parent classes in option A. A potential drawback is that it modifies a global setting for the entire runtime of the program, which can become less transparent, if your application becomes large and is just something to be aware of, in case you decide you want to serialize AnyUrl objects differently at some point. | 2 | 3 |
76,383,877 | 2023-6-1 | https://stackoverflow.com/questions/76383877/how-to-find-out-which-package-depends-on-futures-in-requirements-txt | I have defined many pip packages in a requirements.txt, but I have not define the "futures" package: ... future == 0.18.3 six == 1.16.0 joblib == 1.2.0 ... And then download all packages with the following command on Ubuntu 22.04: pip3.9 download -r "/home/requirements.txt" The above command exited with the following error: ... ... Collecting widgetsnbextension~=4.0.7 Downloading widgetsnbextension-4.0.7-py3-none-any.whl (2.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 3.9 MB/s eta 0:00:00 Collecting branca>=0.5.0 Downloading branca-0.6.0-py3-none-any.whl (24 kB) Collecting traittypes<3,>=0.2.1 Downloading traittypes-0.2.1-py2.py3-none-any.whl (8.6 kB) Collecting xyzservices>=2021.8.1 Downloading xyzservices-2023.5.0-py3-none-any.whl (56 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.5/56.5 KB 1.3 MB/s eta 0:00:00 Collecting futures Downloading futures-3.0.5.tar.gz (25 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [25 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 14, in <module> File "/python39/lib/python3.9/site-packages/setuptools/__init__.py", line 18, in <module> from setuptools.dist import Distribution File "/python39/lib/python3.9/site-packages/setuptools/dist.py", line 32, in <module> from setuptools.extern.more_itertools import unique_everseen File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 666, in _load_unlocked File "<frozen importlib._bootstrap>", line 565, in module_from_spec File "/python39/lib/python3.9/site-packages/setuptools/extern/__init__.py", line 52, in create_module return self.load_module(spec.name) File "/python39/lib/python3.9/site-packages/setuptools/extern/__init__.py", line 37, in load_module __import__(extant) File "/python39/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/__init__.py", line 1, in <module> from .more import * # noqa File "/python39/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/more.py", line 5, in <module> from concurrent.futures import ThreadPoolExecutor File "/tmp/pip-download-jelw4tc2/futures/concurrent/futures/__init__.py", line 8, in <module> from concurrent.futures._base import (FIRST_COMPLETED, File "/tmp/pip-download-jelw4tc2/futures/concurrent/futures/_base.py", line 357 raise type(self._exception), self._exception, self._traceback ^ SyntaxError: invalid syntax [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> futures note: This is an issue with the package mentioned above, not pip. hint: See above for details. How to find out which package depends on the "futures" from the "requirements.txt"? Here is the dummy code: # find_out_depends --requirement-file "/home/requirements.txt" --find-depends "futures" Is there any "find_out_depends" command for accepting requirements.txt as argument and then print out the whole dependencies tree? 
| Create a fresh Python 3.9 venv and install your requirements without dependencies: python3.9 -m pip install --no-deps -r requirements.txt Then run the pip check CLI: python3.9 -m pip check It will complain that some package(s) have unmet dependencies, and you should find futures somewhere in there. Not to be confused with future, which is cross-compat. | 3 | 1 |
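If you want something closer to the hypothetical find_out_depends command from the question, the standard library can report which installed distributions declare futures as a requirement. A rough sketch, to be run inside the environment where the requirements were installed (tools such as pipdeptree also offer a ready-made reverse dependency view):

```python
import re
from importlib import metadata

target = "futures"

for dist in metadata.distributions():
    for req in dist.requires or []:
        # requirement strings look like "futures>=3.0; extra == 'foo'";
        # grab just the leading project name
        name = re.match(r"[A-Za-z0-9._-]+", req)
        if name and name.group(0).lower() == target:
            print(f"{dist.metadata['Name']} depends on: {req}")
```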
76,376,575 | 2023-5-31 | https://stackoverflow.com/questions/76376575/python-setuptools-exclude-dependencies-when-installing | Python setuptools allows you to specify optional dependencies, but does it allow you to do something in the inverse? For example, let's say I have a list of dependencies in my_package like below: numpy pandas So if I installed the package with pip install my_package, it would also install these two dependencies. However, for certain use cases, a user may not need pandas. So I would want to do something like pip install my_package[~pandas] or something like that to instruct pip to not install pandas. Is this something that is currently supported? | It is not currently supported - extras are strictly additive. It has been proposed several times, but the discussions never seem to get anywhere. The latest discussion is here: Proposal - expanding optional dependencies to support opt-out of recommended/default installables. As a workaround, you can use: pip install --no-deps my_package But this would exclude every dependency, including pandas. You'd have to find and install the other dependencies manually. | 4 | 3 |
76,371,334 | 2023-5-31 | https://stackoverflow.com/questions/76371334/zlib-difference-in-size-for-level-0-between-python-3-9-and-3-10 | In this code that uses zlib to encode some data, but with level=0 so it's not actually compressed: import zlib print('zlib.ZLIB_VERSION', zlib.ZLIB_VERSION) total = 0 print('Total 1', total) compress_obj = zlib.compressobj(level=0, memLevel=9, wbits=-zlib.MAX_WBITS) total += len(compress_obj.compress(b'-' * 1000000)) print('Total 2', total) total += len(compress_obj.flush()) print('Total 3', total) Python 3.9.12 outputs zlib.ZLIB_VERSION 1.2.12 Total 1 0 Total 2 983068 Total 3 1000080 but Python 3.10.6 (and Python 3.11.0) outputs zlib.ZLIB_VERSION 1.2.13 Total 1 0 Total 2 1000080 Total 3 1000085 so both a different final size, and a different size along the way. Why? And how can I get them to be identical? (I'm writing a library where I would prefer identical behaviour between Python versions) | zlib 1.2.12 and 1.2.13 behave identically in this regard. The Python library must be making different deflate() calls with different amounts of data, and possibly introducing a flush in the later version. You can look in the Python source code to find out. You should be able to force identical output if you feed smaller amounts of data to .compress() each time, e.g. less than 64K-1, and use .flush() after each. The output will be larger, but should be identical across versions. A quick look turned up this commit, which is likely the culprit. | 3 | 4 |
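To make that workaround concrete: a hedged sketch that feeds the data in chunks well under 64 KiB - 1 and forces a sync flush after each one, so the stored-block framing no longer depends on how the Python wrapper batches its deflate() calls. The output is slightly larger, but should come out the same across versions (worth verifying on both interpreters).

```python
import zlib

def stable_compress(data: bytes, chunk_size: int = 32768) -> bytes:
    compress_obj = zlib.compressobj(level=0, memLevel=9, wbits=-zlib.MAX_WBITS)
    out = []
    for i in range(0, len(data), chunk_size):
        out.append(compress_obj.compress(data[i:i + chunk_size]))
        # Z_SYNC_FLUSH emits the pending stored block right here, pinning
        # the block boundaries to our chunking rather than internal buffering.
        out.append(compress_obj.flush(zlib.Z_SYNC_FLUSH))
    out.append(compress_obj.flush())  # default Z_FINISH ends the stream
    return b"".join(out)

print(len(stable_compress(b"-" * 1000000)))
```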
76,375,307 | 2023-5-31 | https://stackoverflow.com/questions/76375307/how-to-make-typer-traceback-look-normal | When using typer to parse CLI arguments, I get very verbose and colorful error messages. How can I get a normal Python traceback? See screenshot for an example traceback (just the first few lines) for illustration of the verbose style: ❯ python scripts/add_priors.py ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /Users/corneliusromer/code/nextclade_data_workflows/sars-cov-2/scripts/add_priors.py:26 in main │ │ │ │ 23 │ import polars as pl │ │ 24 │ │ │ 25 │ priors = ( │ │ ❱ 26 │ │ pl.scan_ndjson(ndjson, infer_schema_length=10000) │ │ 27 │ │ .select( │ │ 28 │ │ │ [ │ │ 29 │ │ │ │ pl.col("nearestNodes"), │ │ │ │ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │ │ │ json = <module 'json' from │ │ │ │ | You can disable it on a one-off basis by setting the environment variable _TYPER_STANDARD_TRACEBACK=1. Disabling rich exceptions is possible by passing the kwarg pretty_exceptions_enable=False when initializing typer: import typer app = typer.Typer(pretty_exceptions_enable=False) @app.command() def main(): raise Exception("test") if __name__ == "__main__": app() See the documentation for more options | 13 | 10 |
76,369,970 | 2023-5-31 | https://stackoverflow.com/questions/76369970/how-can-i-resolve-circular-references-between-two-instances-of-a-class-in-python | I have two instances of a class that compete in a simulation where they try to shoot at each other. The class contains a position variable and a target variable. The second instance's target variable references the first instance's position object, and vice-versa. I have a chicken-and-egg problem when creating the two instances, and when trying to bind the references after instantiating, they don't update properly This is a simplified version of my code: class Thing(): def __init__(self, position, target): self.position = position self.target = target def move(self): self.position += 10 ## thing1 = Thing(position = 0, target = thing2.position) # Ideally this line would work... thing1 = Thing(position = 0, target = 0) thing2 = Thing(position = 100, target = thing1.position) print(thing1.target) thing1.target = thing2.position print(thing1.target) thing2.move() print(thing1.target) The output I get is 0,100,100, and the output I want is 0,100,110. | I think there are two parts to your question: How to I get my references to another object's position to stay synced up, and how do you initialize objects that reference each other in a cycle. For the first part, I'd suggest a slightly simpler approach that the other answers: Don't reference the target position directly, target the Thing. You can get the position via self.target.position (or the equivalent) whenever you need it. For the second part, you need some way to set up the reference cycle. The simplest approach is to start the way you have so far, initializing one object without a reference to its target, and then passing a reference to this first object to the second object. Then in another step, give a reference to the second object to the first. You're kind of doing this amid your print calls where you do thing1.target = thing2.position, but because you're referencing the position directly, you don't see updates. I'd solve both problems like this: class Thing(): def __init__(self, position, target=None): # target is now optional self.position = position self.target = target def move(self): self.position += 10 thing1 = Thing(0) # no target passed, so it defaults to None (for now) thing2 = Thing(100, thing1) # initialize thing 2 to immediately target thing1 thing1.target = thing2 # update the target of thing1, now that thing2 exists print(thing1.target.position) # get thing2's position via thing1's target reference thing2.move() print(thing1.target.position) | 3 | 2 |
76,330,421 | 2023-5-25 | https://stackoverflow.com/questions/76330421/specifying-a-different-input-type-for-a-pydantic-model-field-comma-separated-st | Using Pydantic, how can I specify an attribute that has an input type different from its actual type? For example I have a systems field that contains a list of systems (so a list of strings) and the user can provide this systems list as a comma separated string (e.g. "system1,system2"); then I use a validator to split this string into a list of strings. The code below is doing that and it's working but the type hinting is wrong as the systems field is actually a list of strings, not a string; the validator is splitting the original string into a list of strings. How can I fix this? import typing from pydantic import BaseSettings, Field, validator class Config(BaseSettings): systems: str = Field([], description="list of systems as a comma separated list (e.g. 'sys1,sys2')") @validator("systems") def set_systems(cls, v) -> typing.List[str]: if v == "": return [] systems = list(filter(None, v.split(","))) return systems if __name__ == "__main__": c = Config(**{"systems": "foo,bar"}) print(c) | Always annotate model fields with the types you actually want in your schema! If you want the field systems to be a list of strings, then annotate it accordingly. A comma-separated string is the exception after all. To allow it, use a mode='before' validator to intercept that string before the default field validators get to it (and raise an error). Then you can split it and return the list as you wish: from pydantic import BaseSettings, Field, field_validator class Config(BaseSettings): systems: list[str] = Field(default_factory=list) @field_validator("systems", mode="before") @classmethod def split_comma_separated(cls, v: object) -> object: if isinstance(v, str): v = v.strip() return [] if v == "" else v.split(",") return v if __name__ == "__main__": print(Config.parse_obj({"systems": "foo,bar"})) print(Config.parse_obj({"systems": ""})) print(Config()) Output: systems=['foo', 'bar'] systems=[] systems=[] | 4 | 5 |
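One caveat with the snippet above: it mixes the v1-style "from pydantic import BaseSettings" with the v2-only field_validator. On Pydantic v2 the settings base class lives in the separate pydantic-settings package; a hedged sketch of the same idea for that layout:

```python
from pydantic import field_validator
from pydantic_settings import BaseSettings


class Config(BaseSettings):
    systems: list[str] = []

    @field_validator("systems", mode="before")
    @classmethod
    def split_comma_separated(cls, v: object) -> object:
        # intercept a comma-separated string before list validation runs
        if isinstance(v, str):
            v = v.strip()
            return [] if v == "" else v.split(",")
        return v


print(Config.model_validate({"systems": "foo,bar"}))
print(Config.model_validate({"systems": ""}))
```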
76,322,463 | 2023-5-24 | https://stackoverflow.com/questions/76322463/how-to-initialize-a-global-object-or-variable-and-reuse-it-in-every-fastapi-endp | I am having a class to send notifications. When being initialized, it involves making a connection to a notification server, which is time-consuming. I use a background task in FastAPI to send notifications, as I don't want to delay the response due to notification. Below is the sample code. file1.py: noticlient = NotificationClient() @app.post("/{data}") def send_msg(somemsg: str, background_tasks: BackgroundTasks): result = add_some_tasks(data, background_tasks, noticlient) return result file2.py: def add_some_tasks(data, background_tasks: BackgroundTasks, noticlient): background_tasks.add_task(noticlient.send, param1, param2) result = some_operation return result Here, notification client is declared globally. I could have it initialized in file2.py under add_some_tasks, but it would get initialized every time a request arrives, and that would require some time. Is there any way to use a middleware to re-use it every time a request arrives, so that it doesn' t need to be initialized every time. or Approach two: Initialize notification in class def file1.py: class childFastApi(FastAPI): noticlient = NotificationClient() app = childFastApi() @app.post("/{data}") def send_msg(somemsg: str, background_tasks: BackgroundTasks): result = add_some_tasks(data, background_tasks, app.noticlient) return result | Option 1 You could store the custom class object to the app instance, which allows you to store arbitrary extra state using the generic the app.state attribute, as demonstrated here, as well as here and here. To access the app.state attribute, and subsequently the object, outside the main file (for instance, from a routers submodule that uses APIRouter), you could use the Request object, as demonstrated in this answer (i.e., using request.app.state). You could either use a startup event (as shown here) to initialize the object, but since it is now deprecated (and might be removed in future versions), you could instead use a lifespan function. Example from fastapi import FastAPI, Request from contextlib import asynccontextmanager @asynccontextmanager async def lifespan(app: FastAPI): ''' Run at startup Initialize the Client and add it to app.state ''' app.state.n_client = NotificationClient() yield ''' Run on shutdown Close the connection Clear variables and release the resources ''' app.state.n_client.close() app = FastAPI(lifespan=lifespan) @app.get('/') async def main(request: Request): n_client = request.app.state.n_client # ... Option 2 Since the introduction of Starlette's lifespan handler, which, similar to startup and shutdown event handlers, allows one to define code that needs to run before the application starts up, or when the application is shutting down, one could also define objects to be accesible from the request.state. As per Starlette's documentation: The lifespan has the concept of state, which is a dictionary that can be used to share the objects between the lifespan, and the requests. The state received on the requests is a shallow copy of the state received on the lifespan handler. Hence, after instantiating the class object in the lifespan handler, you could then add it to the dictionary (i.e., the state), and access it within endpoints—even those defined in APIRouters outside the main application file— using request.state. 
Example from fastapi import FastAPI, Request from contextlib import asynccontextmanager @asynccontextmanager async def lifespan(app: FastAPI): ''' Run at startup Initialize the Client and add it to request.state ''' n_client = NotificationClient() yield {'n_client': n_client} ''' Run on shutdown Close the connection Clear variables and release the resources ''' n_client.close() app = FastAPI(lifespan=lifespan) @app.get('/') async def main(request: Request): n_client = request.state.n_client # ... | 20 | 36 |
76,338,261 | 2023-5-26 | https://stackoverflow.com/questions/76338261/polars-and-the-lazy-api-how-to-drop-columns-that-contain-only-null-values | I am working with Polars and need to drop columns that contain only null values during my data preprocessing. However, I am having trouble using the Lazy API to accomplish this. For instance, given the table below, how can I drop column "a" using Polars' Lazy API? df = pl.DataFrame( { "a": [None, None, None, None], "b": [1, 2, None, 1], "c": [1, None, None, 1], } ) df shape: (4, 3) ┌──────┬──────┬──────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ f64 ┆ i64 ┆ i64 │ ╞══════╪══════╪══════╡ │ null ┆ 1 ┆ 1 │ │ null ┆ 2 ┆ null │ │ null ┆ null ┆ null │ │ null ┆ 1 ┆ 1 │ └──────┴──────┴──────┘ I am aware of Issue #1613 and the solution of filtering columns where all values are null, but this is not Lazy API. FYI, # filter columns where all values are null df[:, [not (s.null_count() == df.height) for s in df]] I am also aware of the drop_nulls function in Polars, which can only drop all rows that contain null values, unlike the dropna function in Pandas that can take two arguments, axis and how. Can someone provide an example of how to drop columns with all null values in Polars using the Lazy API? | You can't, at least not in the way you want. polars doesn't know enough about the lazyframe to tell which columns are only nulls until you collect. That means you need a collect in order to get the columns you want and then another one to materialize the columns you wanted. Let's turn your df=df.lazy() Step 1: (df.select(pl.all().is_null().all()) .unpivot() .filter(pl.col('value')==False) .select('variable') .collect() .to_series() .to_list()) Those are your columns that have no nulls so now you wrap it in its own select Step 2: (df.select( df.select(pl.all().is_null().all()) .unpivot() .filter(pl.col('value')==False) .select('variable') .collect() .to_series() .to_list()) .collect()) | 4 | 5 |
76,351,947 | 2023-5-28 | https://stackoverflow.com/questions/76351947/polars-convert-string-of-digits-to-list | So I have a polars column/series that is strings of digits. s = pl.Series("a", ["111","123","101"]) s shape: (3,) Series: 'a' [str] [ "111" "123" "101" ] I would like to convert each string into a list of integers. I have found a working solution but I am not sure if it is optimal. s.str.split("").list.eval(pl.element().str.to_integer(base=10)) shape: (3,) Series: 'a' [list[i32]] [ [1, 1, 1] [1, 2, 3] [1, 0, 1] ] This seems to be working, but I'd like to know if there are better ways to do this or any of the individual steps. | Update: .str.split("") no longer inserts leading/trailing empty strings in the result. https://github.com/pola-rs/polars/pull/15922 So you can just .cast() the resulting list directly. s.str.split("").cast(pl.List(pl.Int64)) shape: (3,) Series: 'a' [list[i64]] [ [1, 1, 1] [1, 2, 3] [1, 0, 1] ] | 3 | 5 |
76,322,342 | 2023-5-24 | https://stackoverflow.com/questions/76322342/fastapi-sqlalchemy-cannot-convert-dictionary-update-sequence-element-0-to-a-seq | I'm trying to return list of operations and getting error @router.get("/") async def get_specific_operations(operation_type: str, session: AsyncSession = Depends(get_async_session)): query = select(operation).where(operation.c.type == operation_type) result = await session.execute(query) return result.all() Error: ValueError: [TypeError('cannot convert dictionary update sequence element #0 to a sequence'), TypeError('vars() argument must have __dict__ attribute')] | For some simple cases, if you want to return a list of Pydantic models, just use response_model: response_model receives the same type you would declare for a Pydantic model field, so, it can be a Pydantic model, but it can also be, e.g. a list of Pydantic models, like List[Item]. Just like @sergei klinov mentioned. For other cases, such as returning a JSONResponse that contains the query results from SQLAlchemy (sqlmodel): e.g., I used the below code (sqlmodel) to return custom data to the frontend for datatables: results = session.exec(statement).all() head= ["headerName1", "headerName2", "headerName3"] data = { "head": [{"title": column} for column in head], "rows": [list(result) for result in results] } in the above snippet code, in order to convert the SQLAlchemy Sequence object (results), we can use a List Comprehension (or a for loop) and convert the type to list using list(result). For cases such as dynamic Pydantic models that are defined at runtime (e.g. within the endpoint), or if you only want to return certain columns: e.g., In my case (sqlmodel), I only want to select certain columns (fields) and return them with some manipulation. results = session.exec(statement).all() if results: data = [] for r in results: data.append({ "full_name": f"{r.firstname} {r.lastname}", "value": r.value, "x": r.x, "y": r.y, } ) return JSONResponse(content=data, status_code=200) You can access the object's attributes and append to the list where you want to return. Note: I am using sqlmodel, as @Eugene said, replace .all() to .scalars().all(), see the difference here in the sqlmodel official documentation | 4 | 3 |
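To illustrate the response_model route for the original SQLAlchemy Core query: a sketch that reuses router, operation, get_async_session and the SQLAlchemy/FastAPI imports from the question; the id/type fields of the schema are assumptions about the operation table, not the asker's real columns.

```python
from typing import List

from pydantic import BaseModel

# router, operation, get_async_session, AsyncSession, Depends and select
# are the same objects as in the question.


class OperationRead(BaseModel):
    id: int      # assumed columns; adjust to the real operation table
    type: str

    class Config:
        orm_mode = True  # Pydantic v1; on v2 use model_config = {"from_attributes": True}


@router.get("/", response_model=List[OperationRead])
async def get_specific_operations(
    operation_type: str,
    session: AsyncSession = Depends(get_async_session),
):
    query = select(operation).where(operation.c.type == operation_type)
    result = await session.execute(query)
    return result.all()
```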
76,340,960 | 2023-5-26 | https://stackoverflow.com/questions/76340960/cuda-to-docker-container | I need to make docker of my server, but it works only with cuda, how can i add it in my Dockerfile? FROM python:3.10 ENV FLASK_RUN_PORT=5000 RUN sudo nvidia-ctk runtime configure # Here COPY . /app WORKDIR /app RUN pip install --no-cache-dir -r requirements.txt EXPOSE 5000 CMD ["python", "server.py"] I try to do it bellow, but it doesn't work, please, help | You can start with a CUDA Docker image and then install Python, for example: FROM nvidia/cuda:12.1.1-runtime-ubuntu20.04 # Install Python RUN apt-get update && \ apt-get install -y python3-pip python3-dev && \ rm -rf /var/lib/apt/lists/* Note: User @chronoclast has suggested additionally installing python-is-python3 to fix the broken symlink to the default Python, in which case the Python installation step would instead be: RUN apt-get update && \ apt-get install -y python3-pip python3-dev python-is-python3 && \ rm -rf /var/lib/apt/lists/* | 6 | 11 |
76,324,677 | 2023-5-24 | https://stackoverflow.com/questions/76324677/django-4-1-9-requires-system-checks-issue-with-manage-py-is-this-a-bug-or-not | Django 4.1.9 requires_system_checks issue with manage.py - is this a bug or not? We are upgrading our wagtail 4.2.2 app from django 3.1.9 to django 4.1.9 and getting the error TypeError: requires_system_checks must be a list or tuple. when running python manage.py runserver_plus 0.0.0.0:8020 --keep-meta-shutdown Is this a bug, and/or can it be worked around? Could I monkey patch BaseCommand.init() to set requires_system_checks = [] | In 4.1, support for setting a boolean to requires_system_checks on a management command was dropped (release notes). You'll need to check your django "app" dependencies to see which define management commands, and which could be affected. Here are some examples, but the list is not exhaustive: graphene-django fixed in 3.0.0 (technically, fixed in 3.0.0b8) django-extensions (noted by @DavidU) fixed in 3.2.1 | 2 | 1 |
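If the offending command turns out to be in-house rather than third-party, the fix is mechanical: replace the boolean with a list of check tags (or "__all__"). A sketch of a management command updated for Django 4.1:

```python
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    # Pre-3.2 style, now a TypeError:
    # requires_system_checks = False
    # requires_system_checks = True

    # Django 4.1 expects a list/tuple of check tags, or "__all__":
    requires_system_checks = []           # was False
    # requires_system_checks = "__all__"  # was True

    def handle(self, *args, **options):
        self.stdout.write("ok")
```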
76,363,168 | 2023-5-30 | https://stackoverflow.com/questions/76363168/openai-api-how-do-i-handle-errors-in-python | I tried using the below code, but the OpenAI API doesn't have the AuthenticationError method in the library. How can I effectively handle such error. import openai # Set up your OpenAI credentials openai.api_key = 'YOUR_API_KEY' try: # Perform OpenAI API request response = openai.some_function() # Replace with the appropriate OpenAI API function # Process the response # ... except openai.AuthenticationError: # Handle the AuthenticationError print("Authentication error: Invalid API key or insufficient permissions.") # Perform any necessary actions, such as displaying an error message or exiting the program | Error handling with the OpenAI Python SDK v1.0.0 or newer • If you don't want to handle error types individually: import os from openai import OpenAI, OpenAIError client = OpenAI() OpenAI.api_key = os.getenv('OPENAI_API_KEY') try: # Make your OpenAI API request here response = client.completions.create( model="gpt-3.5-turbo-instruct", prompt="Say this is a test" ) print(response) except OpenAIError as e: # Handle all OpenAI API errors print(f"Error: {e}") • If you want to handle error types individually: Note: Because there are a lot of classes for error handling, it might not be so elegant to import them individually. Instead, use import openai and all classes for error handling will be imported automatically. But the code is a bit different now. import os import openai # Import openai from openai import OpenAI # But don't import OpenAIError client = OpenAI() OpenAI.api_key = os.getenv('OPENAI_API_KEY') try: # Make your OpenAI API request here response = client.completions.create( model="gpt-3.5-turbo-instruct", prompt="Say this is a test" ) print(response) except openai.BadRequestError as e: # Don't forget to add openai # Handle error 400 print(f"Error 400: {e}") except openai.AuthenticationError as e: # Don't forget to add openai # Handle error 401 print(f"Error 401: {e}") except openai.PermissionDeniedError as e: # Don't forget to add openai # Handle error 403 print(f"Error 403: {e}") except openai.NotFoundError as e: # Don't forget to add openai # Handle error 404 print(f"Error 404: {e}") except openai.UnprocessableEntityError as e: # Don't forget to add openai # Handle error 422 print(f"Error 422: {e}") except openai.RateLimitError as e: # Don't forget to add openai # Handle error 429 print(f"Error 429: {e}") except openai.InternalServerError as e: # Don't forget to add openai # Handle error >=500 print(f"Error >=500: {e}") except openai.APIConnectionError as e: # Don't forget to add openai # Handle API connection error print(f"API connection error: {e}") See the official OpenAI GitHub Python repository. Error handling with the OpenAI Python SDK v0.28.0 Your code isn't correct. Change this... except openai.AuthenticationError ...to this. except openai.error.AuthenticationError | 5 | 11 |
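Building on the per-error handlers above: rate limits are usually worth retrying with backoff instead of just printing. A small hedged wrapper around the same completions call (the model and prompt are placeholders):

```python
import time

import openai
from openai import OpenAI

client = OpenAI()

def complete_with_retry(prompt: str, retries: int = 5) -> str:
    delay = 1.0
    for attempt in range(retries):
        try:
            response = client.completions.create(
                model="gpt-3.5-turbo-instruct", prompt=prompt
            )
            return response.choices[0].text
        except openai.RateLimitError:
            if attempt == retries - 1:
                raise               # give up after the last attempt
            time.sleep(delay)       # simple exponential backoff
            delay *= 2

print(complete_with_retry("Say this is a test"))
```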
76,367,218 | 2023-5-30 | https://stackoverflow.com/questions/76367218/how-do-i-make-a-time-delta-column-in-polars-from-two-datetime-columns | How would I make a column with the delta (in days) of two date columns. I thought I could just subtract the date objects, but I'm obviously missing something (pl.from_records([{'start': '2021-01-01', 'end': '2022-01-01'}]) .with_columns(pl.col(['start', 'end']).str.to_date('%Y-%m-%d')) .with_columns(delta = pl.col('end') - pl.col('start')) ) | You can try using the Expr.sub() function instead of the - operator: (pl.from_records([{'start': '2021-01-01', 'end': '2022-01-01'}]) .with_columns(pl.col(['start', 'end']).str.to_date('%Y-%m-%d')) .with_columns(delta = pl.col('end').sub(pl.col('start')))) | 4 | 2 |
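Since the question asks for the delta in days specifically: the subtraction produces a Duration column, which can then be converted to whole days. The exact accessor name depends on your Polars version (newer releases use dt.total_days(), older ones dt.days()), so treat the method below as an assumption to check against your install.

```python
import polars as pl

df = (
    pl.from_records([{"start": "2021-01-01", "end": "2022-01-01"}])
    .with_columns(pl.col(["start", "end"]).str.to_date("%Y-%m-%d"))
    .with_columns(delta=pl.col("end").sub(pl.col("start")))          # Duration column
    .with_columns(delta_days=pl.col("delta").dt.total_days())        # or .dt.days() on older Polars
)
print(df)
```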
76,358,689 | 2023-5-29 | https://stackoverflow.com/questions/76358689/uncrop-3d-plots-in-jupyter-notebook | I'm doing some 3d scatter plots with jupyter notebooks in VSCode, and they aren't showing properly. I went to the documentation in matlab and downloaded the jupyter notebook for the 3d scatter plot and tried running that in vscode, getting the same results, the z label gets cut off. I've seen a lot of questions about making the plot interactive with matplotlib magic, and some of those solutions (%matplotlib qt) do work (the image isn't cropped anymore, but gets created in a separate window. I want the plot to be inline, because I'm doing a lot of them and having one 40 windows created every time is a mess. I've tried the magic %matplotlib widget and %matplotlib notebook, as suggested here, and the %matplotlib ipympl as suggested here but when I use those the plot stops showing, appearing only after I change to %matplotlib inline and showing any plot I've done before at that point (all cropped). I've also checked the code in jupyter lab and it does not have this problem, the image shows completely fine, so it seems to be a problem with Jupyter notebooks in VsCode. I'm not trying to change the position of the z axis, It's fine where it is, I just want to make the image bigger so the z label is shown properly. Just in case, I've tried the comment of Trenton McKinney of doing ax.zaxis._axinfo['juggled'] = (1, 2, 2) to change the z-label to the other side, and it still gets cut, just in the other side of the image. So it's not an issue of where the z axes and label are. PS: As requested, I put the from the example here for ease of use. import matplotlib.pyplot as plt import numpy as np # Fixing random state for reproducibility np.random.seed(19680801) def randrange(n, vmin, vmax): """ Helper function to make an array of random numbers having shape (n, ) with each number distributed Uniform(vmin, vmax). """ return (vmax - vmin)*np.random.rand(n) + vmin fig = plt.figure() ax = fig.add_subplot(projection='3d') n = 100 # For each set of style and range settings, plot n random points in the box # defined by x in [23, 32], y in [0, 100], z in [zlow, zhigh]. for m, zlow, zhigh in [('o', -50, -25), ('^', -30, -5)]: xs = randrange(n, 23, 32) ys = randrange(n, 0, 100) zs = randrange(n, zlow, zhigh) ax.scatter(xs, ys, zs, marker=m) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') plt.show() Update: I've posted an issue in the github VSCode repo, link here Update on the update: The issue has been found to be a matplotlib/jupyter problem, so I've opened a new issue in the matplotlib repo, link here | So, after a pair of issues in github, I've finally found an answer. I'll leave here the answer in github for reference It appears the problem is caused by the default inline backend saving the figure with bbox_inches= 'tight' which causes the code to crop the image. this can be solved using the inline magic %config InlineBackend.print_figure_kwargs = {'bbox_inches':None} which overthrows the default and gives you a nice, big image with plenty of room for labels and everything else. In case this doesn't work (the image created is rather big) I leave here the link to the docs. Another option is to add padding, which can be done with the magic %config InlineBackend.print_figure_kwargs = {'pad_inches': .3} This option might need some trial and error to get the size right. In my case 0.3 worked as a charm. | 3 | 3 |
76,365,636 | 2023-5-30 | https://stackoverflow.com/questions/76365636/warning-news-is-an-entry-point-defined-in-pyproject-toml-but-its-not-instal | I am trying to run my poetry based python project inside docker using docker compose When I run the application, it works but it gives me this warning ch_news_dev_python | Warning: 'news' is an entry point defined in pyproject.toml, but it's not installed as a script. You may get improper `sys.argv[0]`. ch_news_dev_python | ch_news_dev_python | The support to run uninstalled scripts will be removed in a future release. ch_news_dev_python | ch_news_dev_python | Run `poetry install` to resolve and get rid of this message. My project structure news ├── docker │ ├── development │ │ ├── ... │ │ ├── python_server │ │ │ └── Dockerfile │ │ ├── .env │ │ └── docker-compose.yml │ ├── production │ │ └── ... │ └── test │ └── ... ├── src │ └── news │ ├── __init__.py │ ├── __main__.py │ ├── app.py │ └── ... ├── tests ├── .gitignore ├── pyproject.toml ├── poetry.lock └── ... My python_server/Dockerfile FROM python:3.10.11-slim ENV PYTHONDONTWRITEBYTECODE 1 \ PYTHONUNBUFFERED 1 RUN apt-get update \ && apt-get install --no-install-recommends -y gcc libffi-dev g++\ && apt-get clean \ && rm -rf /var/lib/apt/lists/* ENV POETRY_VERSION=1.5.0 RUN pip install "poetry==$POETRY_VERSION" RUN groupadd --gid 10000 ch_news \ && useradd --uid 10000 --gid ch_news --shell /bin/bash --create-home ch_news WORKDIR /home/ch_news COPY --chown=10000:10000 pyproject.toml poetry.lock ./ USER ch_news RUN poetry install --no-root --no-ansi --without dev COPY --chown=10000:10000 ./src ./ CMD ["poetry", "run", "news"] My docker-compose file version: '3.9' # optional since v1.27.0 name: ch_news_dev services: ... ch_news_dev_python: build: context: ../.. dockerfile: ./docker/development/python_server/Dockerfile container_name: ch_news_dev_python depends_on: ch_news_dev_postgres: condition: service_healthy env_file: - .env image: ch_news_dev_python_image networks: - network restart: 'always' volumes: - postgres_certs:/home/ch_news/certs networks: network: driver: bridge volumes: postgres_certs: driver: local postgres_data: driver: local My pyproject.toml file [tool.poetry] authors = ["..."] description = "..." 
name = "news" version = "0.1.0" [tool.poetry.dependencies] feedparser = "^6.0.10" python = "^3.10" aiohttp = "^3.8.4" python-dateutil = "^2.8.2" asyncpg = "^0.27.0" loguru = "^0.7.0" [tool.poetry.dev-dependencies] commitizen = "^3.2.2" pre-commit = "^3.3.2" pytest = "^7.3.1" pytest-cov = "^4.0.0" tox = "^4.5.1" bandit = "^1.7.5" black = "^23.3.0" darglint = "^1.8.1" flake8 = "^6.0.0" flake8-bugbear = "^23.5.9" flake8-docstrings = "^1.7.0" isort = "^5.12.0" mypy = "^1.3.0" pytest-clarity = "^1.0.1" pytest-sugar = "^0.9.7" typeguard = "^4.0.0" xdoctest = "^1.1.0" aioresponses = "^0.7.4" pytest-asyncio = "^0.21.0" types-python-dateutil = "^2.8.19" [tool.poetry.group.dev.dependencies] isort = "^5.12.0" types-python-dateutil = "^2.8.19.7" flake8-docstrings = "^1.7.0" xdoctest = "^1.1.1" pre-commit = "^3.3.2" commitizen = "^3.2.2" tox = "^4.5.1" mypy = "^1.3.0" pytest = "^7.3.1" flake8-bugbear = "^23.5.9" black = "^23.3.0" pytest-asyncio = "^0.21.0" bandit = "^1.7.5" typeguard = "^4.0.0" pytest-sugar = "^0.9.7" [tool.coverage.run] branch = true omit = ["src/news/__main__.py", "src/news/app.py"] source = ["news"] [tool.pytest.ini_options] pythonpath = "src" addopts = [ "--import-mode=importlib", ] [tool.coverage.report] fail_under = 95 [tool.isort] profile = "black" src_paths = ["src", "tests"] skip_gitignore = true force_single_line = true atomic = true color_output = true [tool.mypy] pretty = true show_column_numbers = true show_error_codes = true show_error_context = true ignore_missing_imports = true strict = true warn_unreachable = true [tool.poetry.scripts] news = "news.__main__:app" [tool.commitizen] name = "cz_conventional_commits" tag_format = "v$major.$minor.$patch$prerelease" version = "0.0.1" [build-system] build-backend = "poetry.core.masonry.api" requires = ["poetry-core>=1.0.0"] Can someone kindly tell me how to get rid of this warning? UPDATE 1 Getting the warning even after removing --no-root | the error is related to the fact that the entry point is declared in poetry in your file pyproject.toml : [tool.poetry.scripts] news = "news.__main__:app" after declaring the entry point, you must execute the command poetry install in your terminal | 13 | 2 |
76,365,797 | 2023-5-30 | https://stackoverflow.com/questions/76365797/how-do-i-get-airflow-to-work-with-sqlalchemy-2-0-2-when-it-has-a-1-4-48-version | I have some problems in my project: I use SQLalchemy 2.0.2 in modules for working with the database however i try to use Apache Airflow 2.6.1 which has sqlalchemy 1.4.48 dependencies. After I run the code, the interpreter either does not work correctly with the database functions (if sqlalchemy 1.4.48 is installed), or it throws an exception(if version is 2.0) like: TypeError: Invalid argument(s) 'encoding' sent to create_engine(), using configuration SQLiteDialect_pysqlite/QueuePool/Engine Step by step: I install sqlalchemy(2.0) into a clean virtual environment on my computer with Windows OS. Install apache-airflow with next command pip install "apache-airflow[postgres]==2.6.1" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.6.1/constraints-3.9.txt" Then i tried to start the code or use pip install --upgrade sqlalchemy==2.0 and then start the code. Please tell me what I'm doing wrong? The airflow documentation has a good description of this situation, but I have not been able to figure out how to solve this problem. I tried to uninstall SQLalchemy and apache-airflow and then tried to install with constrain. Tried to change some settings in airflow.cfg | Have you tried using the PythonVirtualEnvOperator ? It will allow you to install the library at runtime so you don't need to make changes on the server just for one job. To run a function called my_callable, simply use the following: my_task = PythonVirtualenvOperator( task_id="my_task ", requirements="sqlalchemy==2.0", python_callable=my_callable, ) Since the task runs in a virtual environment, it shouldn't be limited by Airflow dependencies. | 3 | 3 |
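A fuller sketch of the suggested operator, assuming Airflow 2.4+ and the virtualenv package available on the worker; the DAG id and callable are invented for illustration, and imports must live inside the callable because it executes in a freshly created virtualenv:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonVirtualenvOperator


def my_callable():
    # Runs inside its own virtualenv, so import the pinned library here.
    import sqlalchemy
    print("SQLAlchemy version:", sqlalchemy.__version__)


with DAG(
    dag_id="sqlalchemy_2_demo",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    run_with_sqlalchemy_2 = PythonVirtualenvOperator(
        task_id="run_with_sqlalchemy_2",
        requirements=["sqlalchemy==2.0.*"],  # list form of the requirements argument
        system_site_packages=False,          # keep Airflow's own SQLAlchemy 1.4 out of the venv
        python_callable=my_callable,
    )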
76,344,856 | 2023-5-27 | https://stackoverflow.com/questions/76344856/retaining-changes-for-streamlit-data-editor-when-hiding-or-switching-between-wid | I created this simple python example below. I used streamlit together with pandas. This example has an editable dataframe in each selectbox "A" and "B". When I hit the selectbox "A", and edit the table: for example, add a new row as "a4" and "4" as value, then hit the selectbox "B" and come back to selectbox "A", the df1 goes back to original dataframe because the whole funct1 is rerun from the start. How can the edited dataframe information be stored so the edited dataframe infromation wouldn't be lost? I don't want to @st.cache_data it as I want the dataframe to be editable continuously. import streamlit as st import pandas as pd page = st.sidebar.selectbox("Select: ", ("A","B")) ### Added code - but this doesn't work: st.session_state['page'] = page selected_app_mode = st.session_state.get('page') app_mode_ix = 0 if selected_app_mode: app_mode_ix = page.index(selected_app_mode) page = st.sidebar.selectbox(page, index=app_mode_ix) st.session_state['page'] = page ### End of added code def funct1(): df1 = pd.DataFrame({"col1": ["a1", "a2", "a3"], "Values": [1, 2, 3]}) edited_df1 = st.experimental_data_editor(df1, num_rows="dynamic") return df1 def funct2(): df2 = pd.DataFrame({"col1": ["b1", "b2", "b3"], "Values": [1, 2, 3]}) edited_df1 = st.experimental_data_editor(df2, num_rows="dynamic") return df2 if page == "A": funct1() elif page == "B": funct2() What I got (if I remove the added code): df1 a1 1 a2 2 a3 3 Expected to get: df1 a1 1 a2 2 a3 3 a4 4 | Comments in the code below. Something to keep in mind The data editor is a little different than other widgets; you can't "store" its state directly. However, widgets lose their information when they disappear from the screen. This creates a problem. For other widgets, you can save their value in session state (assigned to a different key than the widget's key) to keep their information while they are not displayed. When the widget comes back, you can assign it its previous state directly. However, because the data editor is the way it is, you can't directly save and assign its state. The best you can do is save the result of edits and then initialize a new editor that starts out where the previous one left off. A caution You don't want to feed a dataframe's edited result back into itself in real time. This will not work: st.session_state.df = st.experimental_data_editor(st.session_state.df) Such a pattern will cause the data editor to need each change entered twice to be reflected in the result. If an argument is changed in the creation of a widget, Streamlit thinks its a brand new widget and throws away any retained "memory" it had. The solution For each "page" you need to have two dataframes saved in session state: an original and an edited version. While on a page, you have a data editor based on the original and it saves the edited result directly into session state with each edit the user makes. When the page is changed, the edited version in session state is copied and overwrites the original one. Thus, when you return to the page, the data editor will start off where the last edit ended. 
import streamlit as st import pandas as pd # Initialize session state with dataframes # Include initialization of "edited" slots by copying originals if 'df1' not in st.session_state: st.session_state.df1 = pd.DataFrame({ "col1": ["a1", "a2", "a3"], "Values": [1, 2, 3] }) st.session_state.edited_df1 = st.session_state.df1.copy() st.session_state.df2 = pd.DataFrame({ "col1": ["b1", "b2", "b3"], "Values": [1, 2, 3] }) st.session_state.edited_df2 = st.session_state.df2.copy() # Save edits by copying edited dataframes to "original" slots in session state def save_edits(): st.session_state.df1 = st.session_state.edited_df1.copy() st.session_state.df2 = st.session_state.edited_df2.copy() # Sidebar to select page and commit changes upon selection page = st.sidebar.selectbox("Select: ", ("A","B"), on_change=save_edits) # Convenient shorthand notation df1 = st.session_state.df1 df2 = st.session_state.df2 # Page functions commit edits in real time to "editied" slots in session state def funct1(): st.session_state.edited_df1 = st.experimental_data_editor(df1, num_rows="dynamic") return def funct2(): st.session_state.edited_df2 = st.experimental_data_editor(df2, num_rows="dynamic") return if page == "A": st.header("Page A") funct1() elif page == "B": st.header("Page B") funct2() PS. Strictly speaking, you can get away without the .copy() methods since the data editor is not performing any modification in place to the dataframe it's given. I just left them in as a kind of conceptual nod. Edit: Further detailed explanation of the code per comment below There are two pieces to focus on in the script: page = st.sidebar.selectbox("Select: ", ("A","B"), on_change=save_edits) and for each dataframe: st.session_state.edited_df1 = st.experimental_data_editor(df1, num_rows="dynamic") Say you have a page displaying the data for df1 for the user. If the user is editing the dataframe, then withe each edit: User makes an edit The value of the widget in session state is updated (we didn't use a manually assigned key, so you can't see this) The page reloads When the script gets to the widget again, it outputs the new state This new output is saved to the edited_df1 key in session state. Repeat 1-5 for however many edits the user does. User changes to df2 on_change=save_edits executes before the new page load, hence st.session_state.edited_df1 is copied to st.session_state.df1 (same for df2 but it's trivial since they are the same) Page reloads User sees df2 Let's say the user immediately switches back to df1 Now the user sees the edited df1 because st.session_state.df1 was overwritten with the edited version when the user left that page/view | 5 | 6 |
76,368,961 | 2023-5-30 | https://stackoverflow.com/questions/76368961/how-do-you-detect-when-an-asyncio-tcp-connection-is-gone | I apologize if this is a repeat question: I have looked and haven't found any that would satisfy my question. I have a python script that allows my computer to connect to a piece of hardware using a static IP address and port. This piece of hardware only allows one connection at a time on this port. My first issue is that asyncio.open_connection() returns a successful connection status even if there is already another "user" connected to the device. When a true connection happens, the hardware sends a connection status message which, in my case, I do not receive until after the other "user" disconnects. While annoying, I can work around this issue by waiting for the status update message after "connecting" before allowing my script to proceed. My bigger issue is that I do not have a way of knowing when my physical connection has been removed. For instance, I am connected to the hardware using a USB connection. The hardware requires that I send a keep alive message every 5 seconds but it does not send a response to the keep alive messages. If I pull the USB cable out of the device I would expect to receive errors when writing the keep alive message but I do not. My script involves multiple concurrent asyncio tasks, but this simplified example should suffice. I would expect to receive an error when calling self.writer.write() or self.writer.drain() after I yank out the USB cable but I receive no indication of any change in the connection. My code just eats it and continues to send keep alive messages. What am I missing? import asyncio import logging from typing import TypeVar logger = logging.getLogger(__name__) host = '169.254.13.95' port = 51717 timeout_sec = 10 lock = asyncio.Lock() # if using 3.11 or greater this line is not needed Self = TypeVar("Self", bound="Foo") class TcpConnection: """A sample TCP connection class to demonstrate my point""" def __init__(self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None: self.reader: asyncio.StreamReader = reader self.writer: asyncio.StreamWriter = writer @classmethod async def connect(cls, host: str, port: int) -> Self | None: connection = None logger.info(f'Connecting to {host}:{port}') try: reader, writer = await asyncio.open_connection(host=host, port=port) logger.info('Connected') connection = TcpConnection(reader, writer) except ConnectionRefusedError: logger.info(f'Connect call refused ({host}:{port})') except OSError: logger.info(f'Connect call failed ({host}:{port})') except Exception as e: logger.warning(f'Unknown exception caught:\n{e}') finally: return connection def is_connected(self) -> bool: return self.writer.is_closing() == False async def keep_alive(self) -> None: logger.info('Starting keep alive task') keep_alive_msg = b'\x00' while self.is_connected(): async with lock: self.writer.write(keep_alive_msg) await self.writer.drain() logger.debug('Sent keep alive message') await asyncio.sleep(4.5) # don't wait the full 5 seconds just in case logger.info('Terminating keep alive task') async def main() -> None: while 1: tcp = await TcpConnection.connect(host, port) if tcp and tcp.is_connected(): try: # create a task to run the keep alive message keep_alive_task = asyncio.create_task(tcp.keep_alive()) await keep_alive_task except ConnectionError: logger.info('Client disconnected') logger.info(f'Waiting {timeout_sec} seconds before trying to reconnect') await asyncio.sleep(timeout_sec) if 
__name__ == '__main__': logging.basicConfig(format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S', level=logging.DEBUG) try: logger.info('Starting application') asyncio.run(main()) except KeyboardInterrupt: logger.info('Exiting application') | My first issue is that asyncio.open_connection() returns a successful connection status even if there is already another "user" connected to the device. Establishing a connection is done inside the OS kernel and the kernel can do this for many connections in parallel, even if the user space application handles only one connection at a time. There is no way around it. The hardware requires that I send a keep alive message every 5 seconds but it does not send a response to the keep alive messages. TCP is about reliability. It will try to retransmit the data and this retransmission attempts will only time out after a while. It will not immediately react to a broken link since it might not even notice or hope that the link gets re-established in time so that the data can get successfully retransmitted. If you want immediate notice then the peer would need to send some feedback that it received your data and you could react if you don't get this feedback. But this is not how keep alive seems to be designed in your case - it is just about keeping the connection alive (i.e. no state closing in firewalls because of idle connections) and not about immediately detecting broken links. would expect to receive an error when calling self.writer.write() Write just delivers the data to the local socket buffer. It can thus not provide any information if something went wrong when delivering the data. It will return an error if the socket was marked as broken when resubmissions of the previous data has ultimately failed, but this will take some time after the original data got written to the socket. | 2 | 3 |
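If you want the broken link to surface sooner than TCP's retransmission timeouts allow (short of adding an application-level acknowledgement protocol), one option is to turn on kernel TCP keepalive on the underlying socket. A minimal sketch, assuming Linux (the TCP_KEEP* constant names are platform-specific); once the probe sequence fails, later reads/writes on the connection raise an error:

import socket

def enable_tcp_keepalive(writer, idle=5, interval=3, probes=3):
    """Ask the kernel to probe an idle connection so a dead peer is detected."""
    sock = writer.get_extra_info('socket')
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific knobs: start probing after `idle` seconds of silence,
    # probe every `interval` seconds, give up after `probes` failed probes.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

# usage after asyncio.open_connection:
#   reader, writer = await asyncio.open_connection(host, port)
#   enable_tcp_keepalive(writer)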
76,368,086 | 2023-5-30 | https://stackoverflow.com/questions/76368086/search-for-imports-which-could-be-type-checking | I make heavy use of mypy static type checking. I have a large lib where I know I have many imports which are being done just for type hints and that could be guarded with an if TYPE_CHECKING block to speed things up. But searching for them all is proving difficult. Is there a way to identify these "unused" imports automatically so I can fix them? | You can use the flake8-type-checking library for this. It can be installed with pip install flake8-type-checking | 3 | 3 |
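For context, this is the pattern the plugin flags and nudges you toward; a small sketch with pandas standing in for any heavy dependency that is only needed for annotations:

from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by mypy / IDEs, never imported at runtime.
    import pandas as pd


def summarize(df: pd.DataFrame) -> str:
    # Annotations are not evaluated at runtime thanks to the __future__ import.
    return f"{len(df)} rows"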
76,362,805 | 2023-5-30 | https://stackoverflow.com/questions/76362805/why-is-np-zeros-faster-than-re-initializing-an-existing-array-in-numba-with-py | Why isnumpy.zeros() faster than re-initializing an existing array? I work with computer modeling and use numba for my work. Sometimes it is necessary to have a zeroed array to accumulate the results of some operation. In general, I suppose that zeroing an already allocated array cannot be slower than creating a new array filled with zeros, but it is not. I know about lazy selection (for example Why is the speed difference between Python's Numpy zeros and empty functions gone for larger array sizes? https://vorpus.org/blog/why-does-calloc-exist/), but it must take time to make it zeroed. As far as I know, np.zeros use calloc and all acceleration comes from this call and should be reproducible for other languages. Any guarantees, that's always the case? It the good practices or not? import numpy as np import numba as nb import benchit nb.set_num_threads(1) @nb.njit def numba_operation(in_arr, out): for i in range(out.shape[0]): for j in range(out.shape[1]): out[i,j] += in_arr[i,j] * 2 + 4 @nb.njit def numba_operation_with_zeros(in_arr, out): for i in range(out.shape[0]): for j in range(out.shape[1]): out[i,j] = 0 for i in range(out.shape[0]): for j in range(out.shape[1]): out[i,j] += in_arr[i,j] * 2 + 4 def every_time_generate_zeros(data): in_arr, out = data out = np.zeros(shape=(out.shape[0], out.shape[0])) numba_operation(in_arr, out) return out def make_zeros_numba(data): in_arr, out = data numba_operation_with_zeros(in_arr, out) def generate_arrays(n): in_arr = np.random.rand(2**n, 2**n) out = np.random.rand(2**n, 2**n) return in_arr, out t = benchit.timings([every_time_generate_zeros, make_zeros_numba], {n:generate_arrays(n) for n in np.arange(9, 15, 1)}, input_name='2^n') t.plot(modules=benchit.extract_modules_from_globals(globals())) Results: | TL;DR: the observed behaviour is due to a combination of several low-level effects related to CPU caches and virtual memory. For large arrays, np.zeros does not actually fill anything in physical memory on mainstream platforms. In this case, the calloc system call is used internally to reserve a zeroized memory space from the Operating System (OS). This memory space is virtually allocated, not physically. Virtual memory is split in small chunks called pages. Allocated pages are only filled during first touch on mainstream OS. Note that malloc (called by np.empty) also zeroize memory for sake of security (since no information should leak from one application to another). What this means is that np.zeros is cheap for large arrays (because of the lazy/deferred initialization) compared to manually filling arrays with zeros (much slower on mainstream OS). If you write into a newly allocated array, like in every_time_generate_zeros, pages needs to be zeroized. However, zeroing memory is performed on the fly, page per page. This is a huge difference with the make_zeros_numba implementation which first zeroize the whole array and then fill it again with non-zero values! Indeed, classical pages are typically few KiB wide (4 KiB on mainstream x86-64 platforms) so they can fit in the L1 CPU cache. When every_time_generate_zeros write a value in a virtually allocated page not yet filled with zeros, a page fault is triggered and the processor fills the whole page with zeros. The zeroized page is then in the cache so writing in it is much faster. 
This is why make_zeros_numba is slower in your case : the array needs to be stored to the DRAM twice because it likely do not fit in the (same) CPU cache (at least, not for n >= 2^12). What happens under the hood is pretty complex. In fact, there are few missing details making this even more complex, but I tried to make the explantation relatively simple to be quite easy to understand so far. If you want something fast, then you need to virtually split the array in chunks (ie. tile) filled/computed on the fly and also avoid creating temporary arrays. However, this is hard to do in non-trivial codes (in fact, not always possible). This is critical for performance because of the Memory Wall. Additional notes and explanation Note that page faults are also expensive. In fact they can be more expensive than reusing the same array on some system (typically servers having a big DRAM bandwidth). As a result, there are computing machine where make_zeros_numba can actually be faster! The behaviour is also dependent of the operating system and the standard C library implementation. Using multiple threads to fill the target arrays also often impact performance of the two approach differently. Indeed, page faults can barely scale on some system (eg. Windows) while they can scale well on some others (eg. Linux). In general, DRAM writes do not scale with the number of core : only few cores are enough on most machines to saturate the DRAM bandwidth. I deliberately not mentioned an important factor when it comes to filling memory with zeros. Modern x86-64 processors use a write-back CPU caches. This means data needs to be read from the DRAM for a cache-line to be written (possibly many times). The modified cache-lines are then written back later to the DRAM (typically during cache-misses). Reading DRAM to write zeros is inefficient (half the bandwidth is wasted). This is why modern x86-64 processors also have dedicated instruction to avoid this problem : non-temporal stores (NT-stores). The memcpy (and possibly memset) system calls generally use them when needed. NT-stores are only worth it for large arrays not fitting in RAM or ones that are never directly re-used (the later very hard to know in practice). Indeed, small arrays tends to fit in CPU caches so they do not need to be stored over and over to DRAM (much slower than CPU caches) This is why small array can behave very differently than large ones. Recent modern x86-64 processors even have special instructions to fill memory with zeros faster than usual instructions. Note that there are also huge pages which are much bigger than classical pages (eg. 2 MiB) so to reduce the overhead of the small classical pages (especially page faults). Using them can strongly impact performance since the L1 cache is generally sufficiently large to hold 1 huge page. In fact, this is often true for the L2 (if any). The LLC cache tends to be large enough, but it is also significantly slower than the L1/L2. Besides, huge pages can be automatically used by the OS. Finally, note that the Numba JIT can be clever enough to replace the zeroing loop with a memset which can be significantly faster because of NT-store. However, it turns out this is platform dependent so far. Additional related posts: How is memory handled once touched for the first time in numpy.zeros? Why is Numpy much faster at creating a Zero array compared to replacing the values of an existing array with zeros? | 5 | 5 |
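A small, machine-dependent illustration of the effect described above: np.zeros defers the zeroing to first touch, while refilling an already-materialized array has to stream the whole buffer through memory again. Exact numbers vary by machine:

import timeit
import numpy as np

n = 4096  # 4096 x 4096 float64 is ~128 MiB, far larger than typical CPU caches

t_zeros = timeit.timeit(lambda: np.zeros((n, n)), number=20)

buf = np.empty((n, n))
buf[:] = 1.0  # touch every page so the array is actually materialized

def refill():
    buf[:] = 0.0  # must write the full buffer to DRAM

t_refill = timeit.timeit(refill, number=20)
print(f"np.zeros: {t_zeros:.3f}s   refill existing array: {t_refill:.3f}s")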
76,368,145 | 2023-5-30 | https://stackoverflow.com/questions/76368145/filling-in-for-missing-values-by-name-pandas | I have a dataset where I need to fill in the blanks in the ID column. ID Name Adam 101 Adam Adam 101 Adam 102 Ben 102 Ben Cathy Cathy 103 Cathy What I need: ID Name 101 Adam 101 Adam 101 Adam 101 Adam 102 Ben 102 Ben 103 Cathy 103 Cathy 103 Cathy I tried using df['Name'].ffill() but it does not work when applied across multiple names. Any other suggestions? | Try with the 'first' aggregation: df['ID'] = df.groupby('Name')['ID'].transform('first') | 3 | 3 |
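The accepted one-liner as a self-contained script, assuming the blank IDs are read in as NaN (GroupBy's 'first' takes the first non-null ID per name and transform broadcasts it back):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ID":   [np.nan, 101, np.nan, 101, 102, 102, np.nan, np.nan, 103],
    "Name": ["Adam", "Adam", "Adam", "Adam", "Ben", "Ben", "Cathy", "Cathy", "Cathy"],
})
df["ID"] = df.groupby("Name")["ID"].transform("first")
print(df)  # every row now carries 101 / 102 / 103 next to its name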
76,366,233 | 2023-5-30 | https://stackoverflow.com/questions/76366233/context-based-regex-search | I need a RegEx to find a string based on its first letters. Hello. I have the string ABDADBADBADBABDABDA and I want the program to find a substring with the mask A*A*A*A..., where "*" is a group of any symbols except "A", but every "*" is the same group. I've tried /((A[^A]+)+A)/g but it matches the whole line. Example Input: AxxAxxAbxAx Output: AxxAxxA | You need to use backreferences for this. For the supplied example A(.*?)A(?:\1A)* should do the trick. Here: A matches A, (.*?) matches everything till the next A and puts it into group #1, (?:\1A)* matches the content of group #1 followed by A any number of times. Demo here. | 2 | 2 |
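Checking the suggested pattern against the example input with Python's re module:

import re

pattern = re.compile(r"A(.*?)A(?:\1A)*")
match = pattern.search("AxxAxxAbxAx")
print(match.group(0))  # AxxAxxA  -- the whole run with the repeated separator
print(match.group(1))  # xx       -- the separator captured and reused by \1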
76,365,558 | 2023-5-30 | https://stackoverflow.com/questions/76365558/match-at-whitespace-with-at-most-one-newline-in-regex | I would like to match a b if between a and b there is only whitespace with at most one newline. Python example: import re r = "a\s*b" # ? # should match: print(re.match(r, "ab")) print(re.match(r, "a b")) print(re.match(r, "a \n b")) # shouldn't match: print(re.match(r, "a\n\nb")) print(re.match(r, "a \n\n b")) | You need to exclude a newline from \s and then optionally match a newline with zero or more whitespace chars other than a newline: a[^\S\n]*(?:\n[^\S\n]*)?b See the regex demo. Details: a - an a letter [^\S\n]* - zero or more whitespace chars other than newline (?:\n[^\S\n]*)? - one or zero occurrences of \n - a newline char [^\S\n]* - zero or more whitespace chars other than newline b - a b letter. | 4 | 2 |
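Running the proposed pattern against the five test strings from the question:

import re

pattern = re.compile(r"a[^\S\n]*(?:\n[^\S\n]*)?b")
for s in ["ab", "a b", "a \n b", "a\n\nb", "a \n\n b"]:
    print(repr(s), "->", bool(pattern.match(s)))
# the first three print True, the two double-newline cases print False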
76,364,144 | 2023-5-30 | https://stackoverflow.com/questions/76364144/typeerror-histogram-got-an-unexpected-keyword-argument-normed | I am using numpy.histogram and I am getting this error: import numpy as np np.histogram(np.arange(4), bins=np.arange(5), normed=True) TypeError: histogram() got an unexpected keyword argument 'normed' I was expecting: (array([0.2,0.25,0.25]),array([0,1,2,3,4])) I am using numpy 1.24.3 | The normed parameter in the numpy.histogram function was deprecated in NumPy version 1.21.0 and removed in version 1.24.0. Example import numpy as np result = np.histogram(a=np.arange(4), bins=np.arange(5), density=True) print(result) # (array([0.25, 0.25, 0.25, 0.25]), array([0, 1, 2, 3, 4])) | 2 | 3 |
76,363,921 | 2023-5-30 | https://stackoverflow.com/questions/76363921/how-to-fix-pandas-v2-valueerror-cannot-convert-from-timedelta64ns-to-timedel | When upgrading from pandas version 1 to 2.0.0, I suddenly get a ValueError in a script that worked fine before upgrading pandas to version 2: ValueError: Cannot convert from timedelta64[ns] to timedelta64[D]. Supported resolutions are 's', 'ms', 'us', 'ns' This is a minimally reproducible example: import pandas as pd df = pd.DataFrame({'designation_date': ['2021-01-01', '2021-01-02']}) df['recency'] = pd.to_datetime('today') - pd.to_datetime(df['designation_date']) df['recency'] = df['recency'].astype('timedelta64[D]') What do I need to replace df['recency'].astype('timedelta64[D]') with so that the code works with pandas v2? Using astype('timedelta64[D]') is used quite a bit in answers across SO, e.g. here. | Use the .dt.days accessor instead of astype('timedelta64[D]): df['recency'] = df['recency'].dt.days The change in behaviour from v1 to v2 is documented here in the Pandas changelog. | 3 | 6 |
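The question's minimal example with the accepted fix applied, which behaves the same on pandas 1.x and 2.x:

import pandas as pd

df = pd.DataFrame({"designation_date": ["2021-01-01", "2021-01-02"]})
df["recency"] = pd.to_datetime("today") - pd.to_datetime(df["designation_date"])
df["recency"] = df["recency"].dt.days  # integer number of whole days
print(df)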
76,326,219 | 2023-5-24 | https://stackoverflow.com/questions/76326219/how-to-make-changes-in-a-built-in-library-file-in-chaquopy | I am facing a problem where I have to change a line in a built-in file in a particular library (installed using pip). I have located the file in app\build\pip\debug\common\<library folder> But every time I run the Gradle (for installing or creating APK), the entire folder is recreated, and hence, the file is again the same as previous. Is there any way to make the change permanent? | As mentioned in the comment by David K Hess, monkey patching may be the easiest solution. If monkey patching isn't suitable for your issue, then assuming the library is pure-Python, you can download it from PyPI, edit your local copy, and then install from that: For example, you could download a .whl file, edit the file inside it, and then add an install line pointing to the .whl. Or you could download an sdist (.tar.gz file), extract it to a directory, edit the file inside it, and then add an install line pointing to the directory. In both cases, the install line should probably come first in the requirements list, before anything else which may depend on the library. | 3 | 2 |
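To make the monkey-patching suggestion concrete, here is a generic sketch; json.dumps merely stands in for whichever third-party function you actually need to alter, and the patch should run once, early in app start-up:

import json

_original_dumps = json.dumps

def _patched_dumps(obj, **kwargs):
    # Change behaviour without editing the installed package on disk.
    kwargs.setdefault("indent", 2)
    return _original_dumps(obj, **kwargs)

json.dumps = _patched_dumps
print(json.dumps({"a": 1}))  # now pretty-printed everywhere json.dumps is called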
76,357,846 | 2023-5-29 | https://stackoverflow.com/questions/76357846/numbers-of-combinations-modulo-m-efficiently | First of all I'm solving a programming problem rather than a math problem now. The question is Anish got an unbiased coin and he tossed it n times and he asked Gourabh to count all the number of possible outcomes with j heads, for all j from 0 to n. Since the number of possible outcome can be huge, he will tell the values modulo m. To be clear, we need to return one integer per value of j. The question is simple, but the problem arises with the time limit, being 1.5 seconds, but with input n as large as 200000. I used math.comb to calculate the values, but it took more than 1.5 seconds to run. So, are there any ways to calculate combinations in a faster way? Edit#1: Sample input: 2 998244353 Sample output: 1 2 1 Edit#2: Here is the code that I've tried: import math n,m=input().split() n = int(n) m = int(m) l = [] for i in range(n+1): l.append(math.comb(n,i)%m) print(*l) P.S: Please let me know if this is off topic for this site and suggest a suitable SE site to post this question. Thanks in advance! This question is from an inter college contest which ended 2 months ago. Here is the original problem: https://codeforces.com/gym/430360/problem/B (you'll need an account, and first time follow the "Contest Link" here to enter). In case you are not able to view the problem, please find the picture below. | Using the usual multiplicative formula to compute the next number from the previous, but with keeping the numbers small. Let's first look at a naive version for clarity. Naive def naive(n, m): c = 1 yield c for k in range(n): c = c * (n-k) // (k+1) yield c % m n, m = map(int, input().split()) print(*naive(n, m)) Takes me ~30 seconds with n=200000. Because c grows very large, up to 60204 digits (199991 bits). And calculations with such large numbers are slow. Fast Instead of naively computing those large c and using modulo m only for output, let's keep c small throughout, modulo m. Got accepted on the site, taking ~0.68 seconds. from math import gcd def fast(n, m): c = 1 G = 1 yield c for k in range(n): mul = n - k while (g := gcd(mul, m)) > 1: mul //= g G *= g div = k + 1 while (g := gcd(div, m)) > 1: div //= g G //= g c = c * mul * pow(div, -1, m) % m yield c * G % m n, m = map(int, input().split()) print(*fast(n, m)) Attempt This Online! Multiplication is fine under modulo. If it were only c = c * (n-k), we could just do c = c * (n-k) % m. Division doesn't allow that. So instead of dividing by k+1, we multiply with its inverse (k+1)-1 modulo m. The inverse of some number x is the number x-1 so you get x·x-1 = 1. For example, 7-1 modulo 10 is 3. Because multiplying 7 and 3 gives you 21, which is 1 (modulo 10). Next issue: Not all numbers have an inverse modulo m. For example, 6 doesn't have an inverse modulo 10. You can't multiply 6 with any integer and get 1 (modulo 10). Because 6 and 10 have common divisor 2. What we'll do is invert as much of 6 as possible. Extract the common divisor 2, leaving us with 3. That does have an inverse modulo 10 (namely 7). So extract prime factors in the multipliers/divisors common with m into a separate number G. And update c with what remains, modulo m. Then combine c and G for output. 
Rough times I get for n=200000, m=998244353 (the large prime from the question): naive: 30.0 seconds fast: 1.0 seconds Matt's: 1.0 seconds For n=200000, m=2*3*5*7*11*13*17*19*23: naive: 30.0 seconds fast: 1.2 seconds Matt's: 4.8 seconds I think worst case is a modulus with many primes like m=2*3*5*7*11*13*17*19*23, that maximizes my G. With n=200000, G grows up to 127 bits. Nothing to worry about. My solution/explanation for a similar problem on Leetcode. That had modulus 10 and I hardcoded factors 2 and 5 and counted them instead of multiplying them into a number G like I did here. Maybe I'll revisit it with this general solution... | 12 | 10 |
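The modular inverse used throughout the fast version is just three-argument pow (available since Python 3.8); verifying it on the answer's own 7 mod 10 example:

m = 10
x = 7
inv = pow(x, -1, m)        # valid because gcd(7, 10) == 1
print(inv, (x * inv) % m)  # 3 1  -- matches 7 * 3 = 21 ≡ 1 (mod 10)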
76,358,367 | 2023-5-29 | https://stackoverflow.com/questions/76358367/figure-out-return-of-a-method-that-returns-empty-hash-on-some-condition | I'm trying to understand how to make this work: def someMethod() -> dict[any, any]: if not os.path.exists('some path'): return {} config = {'a': 1, 'b': 2} return config I don't think that's correct. Seeing this error - Declared return type, "dict[Unknown, Unknown]", is partially unknownPylance The idea is to return empty dict if a path doesn't exist (or on some condition) or correct dict with key-value pairs. Any ideas? | Lowercase any is a Python built-in function and not a type. Instead, you have to import capital Any from the typing module. from typing import Any import os def someMethod() -> dict[Any, Any]: if not os.path.exists('some path'): return {} config = {'a': 1, 'b': 2} return config | 2 | 5 |
76,352,280 | 2023-5-28 | https://stackoverflow.com/questions/76352280/can-a-python-function-be-both-a-generator-and-a-non-generator | I have a function which I want to yield bytes from (generator behaviour) and also write to a file (non-generator behaviour) depending on whether the save boolean is set. Is that possible? def encode_file(source, save=False, destination=None): # encode the contents of an input file 3 bytes at a time print('hello') with open(source, 'rb') as infile: # save bytes to destination file if save: print(f'saving to file {destination}') with open(destination, 'wb') as outfile: while (bytes_to_encode := infile.read(3)): l = len(bytes_to_encode) if l < 3: bytes_to_encode += (b'\x00' * (3 - l)) outfile.write(bytes_to_encode) return # yield bytes to caller else: while (bytes_to_encode := infile.read(3)): l = len(bytes_to_encode) if l < 3: bytes_to_encode += (b'\x00' * (3 - l)) # pad bits if short yield encode(bytes_to_encode) return In the above implementation, the function always behaves as a generator. When I call encode_file('file.bin', save=True, destination='output.base64') it does not print "hello" instead, it returns a generator object. This does not make sense to me. Shouldn't "hello" be printed and then shouldn't control be directed to the if save: portion of the code thus avoiding the part of the function that yields completely? | A function can’t be a generator and also not be one, but of course you can decide whether to return a generator object or not by defining a helper function. To avoid duplicating the (read) with between the two (and reduce redundancy in general), make one branch a client of the other: def encode_file(source, save=False, destination=None): # encode the contents of an input file 3 bytes at a time print('hello') # save bytes to destination file if save: print(f'saving to file {destination}') with open(destination, 'wb') as outfile: for bytes_to_encode in encode_file(source): outfile.write(bytes_to_encode) # yield bytes to caller else: def g(): with open(source, 'rb') as infile: while (bytes_to_encode := infile.read(3)): l = len(bytes_to_encode) if l < 3: bytes_to_encode += (b'\x00' * (3 - l)) # pad bits if short yield encode(bytes_to_encode) return g() (Thanks to interjay for pointing out the need for the with in g.) | 2 | 3 |
76,341,290 | 2023-5-26 | https://stackoverflow.com/questions/76341290/unable-to-import-requests-in-web2py-even-though-requests-is-accessible-directl | I'm attempting to integrate MSAL, which requires the requests module. I'm running Python 3.7 on Linux and using pipenv to manage the environment. I'm also using web2py 2.24.1 from source (as in I download the web2py framework via the source button on the web2py website). When I am in the pipenv shell and go into the python shell, I can access the requests module; however, when I try to access it from web2py (running in the same shell), I get a "module not found" error. When I check the site-packages folder, the requests package is present. I have checked PYTHONPATH and seen that the path from the virtual environment is present. When attempting to load the web2py python shell, it gives the same error. I'm probably missing something, but it sometimes appears as if web2py does some code compilation and then uses the compiled stuff and ignores code changes after a certain point. Asking as I have commented out all the code involving the requests module in an effort to get the web2py shell working, but still get the error. Not sure what to try next. Any ideas are appreciated. | This is due to a buggy interaction between web2py's custom importer and the urllib3 module (which is imported by requests). web2py's custom importer raises an ImportError if a module is not found (code). (Arguably, it should instead raise ModuleNotFoundError, which is the subclass of ImportError specifically for this situation.) urllib3 uses except ModuleNotFoundError: to ignore errors when importing urllib3_secure_extra (code; only used when distributed via PyPI). (Arguably, it should instead catch the more general ImportError.) The result is that when urllib3_secure_extra is missing, the loading of urllib3 fails instead of continuing gracefully. Until this is fixed in web2py and/or urllib3, here are some workarounds: Patch web2py (change the line with the "-" to the line with the "+") --- web2py/gluon/custom_import.py.orig +++ web2py/gluon/custom_import.py @@ -77,7 +77,7 @@ try: result = sys.modules[modules_prefix] except KeyError: - raise ImportError("No module named %s" % modules_prefix) + raise ModuleNotFoundError("No module named %s" % modules_prefix) return result else: # "from x import a, b, ..." Patch urllib3 --- <virtualenv>/lib/python3.9/site-packages/urllib3/__init__.py.orig +++ <virtualenv>/lib/python3.9/site-packages/urllib3/__init__.py @@ -48,7 +48,7 @@ # See: https://github.com/urllib3/urllib3/issues/2680 try: import urllib3_secure_extra # type: ignore # noqa: F401 -except ModuleNotFoundError: +except ImportError: pass else: warnings.warn( Dynamically patch web2py's importer (does not require modifying the web2py/urllib3 files) Insert the following in your application's initialization code (e.g. web2py/applications/<yourapp>/__init__.py): import builtins _real_import = builtins.__import__ def _import_with_modulenotfounderror(*args, **kwargs): try: return _real_import(*args, **kwargs) except ImportError as e: if e.__class__ is ImportError and str(e).startswith("No module named "): raise ModuleNotFoundError(str(e)) from e else: raise builtins.__import__ = _import_with_modulenotfounderror P.S. I'm not sure why you keep getting the error after commenting out the code. Some things to try: In the admin interface, click "Manage" next to your app then click "Remove compiled" (this button will only be shown if the application is compiled). 
Stop web2py then start it again. Look in the stack trace for lines from your source files, and double check that the lines at those line numbers are commented out. A similar issue was described here (search for the word "recompile"); the workaround mentioned there is to recompile programmatically. | 2 | 4 |
76,348,393 | 2023-5-27 | https://stackoverflow.com/questions/76348393/count-number-of-zeros-after-last-non-zero-value-per-row | I have the following df: index jan feb marc april One 1 7 0 0 two 0 8 7 0 three 0 0 0 1 I'd like to get the number of zeros after the last non-zero value per row. So the output should look like index num One 2 two 1 three 0 | Similar logic to that of @BrJ but more straightforward in my opinion. Using a reversed cummin to set to False all True preceding a False, then sum: out = (df.loc[:,::-1].eq(0) .cummin(axis=1).sum(axis=1) .to_frame('num') ) Output: num One 2 two 1 three 0 Intermediates: # boolean mask (0s are True) jan feb marc april One False False True True two True False False True three True True True False # after reversed cummin (and reversed again for clarity) # all True that preceded a False are now False jan feb marc april One False False True True two False False False True three False False False False | 4 | 3 |
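The accepted expression as a runnable script, rebuilding the question's frame:

import pandas as pd

df = pd.DataFrame(
    {"jan": [1, 0, 0], "feb": [7, 8, 0], "marc": [0, 7, 0], "april": [0, 0, 1]},
    index=["One", "two", "three"],
)
# reversed columns -> zeros mask -> cummin keeps only the trailing run of zeros
out = df.loc[:, ::-1].eq(0).cummin(axis=1).sum(axis=1).to_frame("num")
print(out)  # One -> 2, two -> 1, three -> 0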
76,328,152 | 2023-5-25 | https://stackoverflow.com/questions/76328152/ebpf-kprobe-argument-not-matching-the-syscall | I'm learning eBPF and I'm playing with it in order to understand it better while following the docs but there's something I don't understand why it's not working... I have this very simple code that stops the code and returns 5. int main() { exit(5); return 0; } The exit function from the code above calls the exit_group syscall as can we can see by using strace (image below) yet within my Python code that's using eBPF through bcc the output I get for my bpf_trace_printk is the value 208682672 and not the value 5 that the exit_group syscall is called with as I was expecting... from bcc import BPF def main(): bpftext = """ #include <uapi/linux/ptrace.h> void my_exit(struct pt_regs *ctx, int status){ bpf_trace_printk("%d", status); } """ bpf = BPF(text=bpftext) fname = bpf.get_syscall_fnname('exit_group') bpf.attach_kprobe(event=fname, fn_name='my_exit') while True: print(bpf.trace_fields()) if __name__ == '__main__': main() I've looked into whatever I found online but I couldn't find a solution as I've been investigating this problem for a few days now... I truly appreciate any help available and thank you! | Fix You need to rename your function from my_exit to syscall__exit_group. Why does this matter? BPF programs named in this way get special handling from BCC. Here's what the documentation says: 8. system call tracepoints Syntax: syscall__SYSCALLNAME syscall__ is a special prefix that creates a kprobe for the system call name provided as the remainder. You can use it by declaring a normal C function, then using the Python BPF.get_syscall_fnname(SYSCALLNAME) and BPF.attach_kprobe() to associate it. Arguments are specified on the function declaration: syscall__SYSCALLNAME(struct pt_regs *ctx, [, argument1 ...]). For example: int syscall__execve(struct pt_regs *ctx, const char __user *filename, const char __user *const __user *__argv, const char __user *const __user *__envp) { [...] } This instruments the execve system call. Source. Corrected Code from bcc import BPF def main(): bpftext = """ #include <uapi/linux/ptrace.h> void syscall__exit_group(struct pt_regs *ctx, int status){ bpf_trace_printk("%d", status); } """ bpf = BPF(text=bpftext) fname = bpf.get_syscall_fnname('exit_group') bpf.attach_kprobe(event=fname, fn_name='syscall__exit_group') while True: print(bpf.trace_fields()) if __name__ == '__main__': main() Output from the sample program exiting: (b'<...>', 14896, 0, b'd...1', 3996.079261, b'5') How it Works After BCC transforms your BPF program, this results in a slightly different interpretation of the arguments passed. You can use bpf = BPF(text=bpftext, debug=bcc.DEBUG_PREPROCESSOR) to see how your code is transformed. Here's what happens without the syscall__ prefix: void my_exit(struct pt_regs *ctx){ int status = ctx->di; ({ char _fmt[] = "%d"; bpf_trace_printk_(_fmt, sizeof(_fmt), status); }); } This reads in the RDI register and interprets it as the syscall argument. 
On the other hand, here's what happens if it's named syscall__exit_group: void syscall__exit_group(struct pt_regs *ctx){ #if defined(CONFIG_ARCH_HAS_SYSCALL_WRAPPER) && !defined(__s390x__) struct pt_regs * __ctx = ctx->di; int status; bpf_probe_read(&status, sizeof(status), &__ctx->di); #else int status = ctx->di; #endif ({ char _fmt[] = "%d"; bpf_trace_printk_(_fmt, sizeof(_fmt), status); }); } If the CONFIG_ARCH_HAS_SYSCALL_WRAPPER is defined (it is on x86_64) then the RDI register is interpreted as a pointer to a struct pt_regs, which looks up the RDI register in that, which is the first argument to exit_group(). On systems without syscall wrappers, this does the same thing as the previous example. | 3 | 1 |
76,346,099 | 2023-5-27 | https://stackoverflow.com/questions/76346099/type-annotations-typevar-bound-problem | In the Python Documentation, we find: T = TypeVar('T') # Can be anything S = TypeVar('S', bound=str) # Can be any subtype of str A = TypeVar('A', str, bytes) # Must be exactly str or bytes We find also this code: def repeat(x: T, n: int) -> Sequence[T]: """Return a list containing n references to x.""" return [x]*n def print_capitalized(x: S) -> S: """Print x capitalized, and return x.""" print(x.capitalize()) return x def concatenate(x: A, y: A) -> A: """Add two strings or bytes objects together.""" return x + y Adding then this line of code print(concatenate("hello ", "world")) should be perfectly oky with mypy and it is! I define now a class: class sstr(str): pass and use this code: s1 = sstr("hello ") s2 = sstr("world") print(concatenate(s1, s2)) In the definition of A it says: "Must be exactly str or bytes" s1 and s2 are not exactly str or bytes but sstr, which is a subclass of str. So to my mind, mypy should raise an error, but it says: Success: no issues found in 1 source file Apparantly, I must have got something wrong. I even tried it with a class class sstr(str): def __add__(self, other): return self + other + "_42" def __sub__(self, other): return "foo" What did I get wrong? Can anybody help? | "exactly" is misleading here. In general, the type system assumes Liskov substitutability. So a subtype S of a type T is always acceptable in the place of T. The "exactly" is referring to a specific behavior. When you use a constrained instead of bound type variable*, the result is always either str or bytes, so even if you provide a MyStr, it returns a str as far as the type checker is concerned: import typing T = typing.TypeVar("T", str, bytes) class MyString(str): pass def alphabetize(a: T, b: T) -> T: if a >= b: return a else: return b c0 = alphabetize("foo", "bar") c1 = alphabetize("foo", MyString("bar")) c2 = alphabetize(MyString("foo"), MyString("bar")) reveal_type(c0) reveal_type(c1) reveal_type(c2) Using mypy gives: test_typing.py:20: note: Revealed type is "builtins.str" test_typing.py:21: note: Revealed type is "builtins.str" test_typing.py:22: note: Revealed type is "builtins.str" Success: no issues found in 1 source file Here is how this is described in PEP 483, the first PEP that described how the type system was going to work and the idea of type variables: A constrained type variable ranges only over constrains t1, etc. exactly; subclasses of the constrains are replaced by the most-derived base class among t1, However, that isn't how it works when you give it an upper bound: import typing T = typing.TypeVar("T", bound=str) class MyString(str): pass def alphabetize(a: T, b: T) -> T: if a >= b: return a else: return b c0 = alphabetize("foo", "bar") c1 = alphabetize("foo", MyString("bar")) c2 = alphabetize(MyString("foo"), MyString("bar")) reveal_type(c0) reveal_type(c1) reveal_type(c2) And mypy gives you: test_typing.py:20: note: Revealed type is "builtins.str" test_typing.py:21: note: Revealed type is "builtins.str" test_typing.py:22: note: Revealed type is "test_typing.MyString" Success: no issues found in 1 source file Note, in the condition where you mix types , the result is the narrowest common type possible. 
Consider a situation where you have a deeper inheritance hierarchy: import typing T = typing.TypeVar("T", bound=str) class SpecialString(str): pass class SuperSpecialString(SpecialString): pass def alphabetize(a: T, b: T) -> T: if a >= b: return a else: return b c0 = alphabetize(SpecialString("foo"), SuperSpecialString("bar")) c1 = alphabetize("foo", SuperSpecialString("bar")) c2 = alphabetize(SuperSpecialString("foo"), SuperSpecialString("bar")) reveal_type(c0) reveal_type(c1) reveal_type(c2) And here, mypy gives: test_typing.py:22: note: Revealed type is "test_typing.SpecialString" test_typing.py:23: note: Revealed type is "builtins.str" test_typing.py:24: note: Revealed type is "test_typing.SuperSpecialString | 2 | 5 |
76,342,355 | 2023-5-26 | https://stackoverflow.com/questions/76342355/python-classes-difference-between-setting-an-attribute-and-using-setattr | I'm trying to set attributes to a class of which I don't know the name a-priori. I also want to avoid users to write to that attribute, so I use a property factory with getters and setters which returns a property object. However, when calling the property object, I get the reference to that object, instead of whatever the getter should be returning. So I try to do this: def property_factory(name): def getter(self): return self.__getattribute__(name) def setter(self, value): raise Exception('Cannot set a value') return property(getter, setter) # This is just a read_file placeholder class read_file(object): def __init__(self): self.name = 'myName' self.value = 'myValue' def __iter__(self): return self class a(object): list1 = read_file() def __init__(self): list1 = read_file() self.__setattr__('_' + list1.name, list1.value) # this doesn't work: self.__setattr__(list1.name, property_factory('_' + list1.name)) # this actually does work, but with the wrong attribute name: notMyName = property_factory('_' + list1.name) Then I get this: In [38]: b = a() In [39]: b.myName Out[39]: <property at 0x2883d454450> In [40]: b.notMyName Out[40]: 'myValue' In [41]: b.notMyName = 'true' --------------------------------------------------------------------------- Exception Traceback (most recent call last) Cell In[41], line 1 ----> 1 b.notMyName = 'true' Cell In[37], line 6, in property_factory.<locals>.setter(self, value) 5 def setter(self, value): ----> 6 raise Exception('Cannot set a value') Exception: Cannot set a value What I want is this: In [39]: b.myName Out[40]: 'myValue' In [41]: b.MyName = 'true' --------------------------------------------------------------------------- Exception Traceback (most recent call last) Cell In[41], line 1 ----> 1 b.MyName = 'true' Cell In[37], line 6, in property_factory.<locals>.setter(self, value) 5 def setter(self, value): ----> 6 raise Exception('Cannot set a value') Exception: Cannot set a value How do I do this? | Why do you want to do this? I've always gone back to this answer whenever I have an idea that uses the notion of dynamically-named attributes -- which is essentially what you're trying to do here if I'm not mistaken (with added read-only "protection" applied only to the keys in list1). Do you need to use a property factory? You could do something like this: class A(object): list1 = read_file() def __init__(self): self.__dict__[A.list1.name] = A.list1.value def __setattr__(self, name, value): if name == A.list1.name: raise Exception('Cannot set a value for this key!') Now at least this works: >>> b = A() >>> b.myName 'myValue' >>> b.myName = 'true' Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 7, in __setattr__ Exception: Cannot set a value for this key! However both methods will be susecptable to the following: >>> b.__dict__['myName'] = 'true' >>> b.myName 'true' Obviously there's a lot of optimization to be done here, adding sentinels, name mangling, etc, plus I'd need a lot more information regarding ultimately what you're trying to achieve and why -- but is this getting a little closer to what you want? I'll delete this answer (or tidy) if necessary, too long for a comment. Also, typo: In [41]: b.MyName = 'true' Should be myName. | 3 | 1 |
76,343,110 | 2023-5-26 | https://stackoverflow.com/questions/76343110/select-cell-from-pandas-dataframe-and-convert-to-int | When selecting a cell from a DataFrame, it returns a Series, and appending it to a list stores the Series. How do I convert the cell into a single int value? df = pd.DataFrame({"name": ['John', 'George', 'Ray'], "score": [123, 321, 112]}) x = df.loc[df['name']=='John', 'score'].reset_index(drop=True) x.astype(int) x list=[] list.append(x) list This returns [0 123 Name: score, dtype: int64], but I need just [123] | This happens because in general df["name"] == "John" could be true for several rows. A simple way to work around this is to temporarily turn the "name" column into the DataFrame's index with set_index: import pandas as pd df = pd.DataFrame({"name": ['John', 'George', 'Ray'], "score": [123, 321, 112]}) x = df.set_index("name").loc["John", "score"] list_ = [] list_.append(x) print(list_) Additional notes: If you have "John" more than once in the "name" column, you get the same problem with this solution as in your original code. Depending on your data model, it might make sense to use the names as index permanently by moving set_index to the line where you define the DataFrame. It's bad practice to shadow built-in names like list because you might need them later. | 2 | 2 |
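An alternative not shown in the accepted answer that keeps the boolean-mask style of the question: take the first matching row as a scalar instead of keeping the Series:

import pandas as pd

df = pd.DataFrame({"name": ["John", "George", "Ray"], "score": [123, 321, 112]})
x = int(df.loc[df["name"] == "John", "score"].iloc[0])  # first match as a plain int
print([x])  # [123]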
76,343,201 | 2023-5-26 | https://stackoverflow.com/questions/76343201/create-a-set-from-a-list-using-set-vs-unpacking-into-curly-brackets | In Python, a set can be created from a list using either the set constructor or by unpacking the list into curly brackets. For example: my_list = [1, 2, 3] my_set = set(my_list) or my_list = [1, 2, 3] my_set = {*my_list} Are there any specific reasons or use cases where one approach is preferred over the other? What are the advantages or disadvantages of each method in terms of performance, readability, or any other relevant factors? | There is a subtle difference. set(my_list) produces whatever the callable bound to set returns. set is a built-in name for the set type, but it's possible to shadow the name with a global or local variable. {*my_list}, on the other hand, always creates a new set instance. It's not possible to change what the brace syntax means without modifying the Python implementation itself (and then, you are no longer implementing Python, but a very Python-like language). In CPython, using {*mylist} also avoids a function call, it uses BUILD_SET and SET_UPDATE opcodes rather than calling whatever set is bound to. | 2 | 4 |
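You can see the bytecode difference yourself with the dis module; exact opcode names vary between CPython versions, but the literal form compiles straight to set-building opcodes while the call form has to look up whatever name set is currently bound to:

import dis

dis.dis("set(my_list)")  # loads the name 'set', then a call instruction
dis.dis("{*my_list}")    # BUILD_SET / SET_UPDATE style opcodes, no name lookup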
76,340,851 | 2023-5-26 | https://stackoverflow.com/questions/76340851/xgboost-raising-valueerror-with-sklearn-metric | Im trying to use an XGBClassifier with a validation set and a metric taken from sklearn.metrics as eval_metric, as suggested by the XGBoost documentation. The MWE looks like this: import numpy as np from xgboost import XGBClassifier from sklearn.metrics import accuracy_score x_train, y_train = np.random.rand(10,3), np.where(np.random.rand(10,)>0.5, 1, 0) x_valid, y_valid = np.random.rand(5,3), np.where(np.random.rand(5,)>0.5, 1, 0) model = XGBClassifier( n_estimators=100, eval_metric=accuracy_score ) model.fit( X=x_train, y=y_train, eval_set=[(x_train, y_train), (x_valid, y_valid)] ) This code raises the following error message: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-5-b63cd5cfabda> in <cell line: 1>() ----> 1 model.fit( 2 X=x_train, y=y_train, 3 eval_set=[(x_train, y_train), (x_valid, y_valid)] 4 ) 9 frames /usr/local/lib/python3.10/dist-packages/sklearn/metrics/_classification.py in _check_targets(y_true, y_pred) 93 94 if len(y_type) > 1: ---> 95 raise ValueError( 96 "Classification metrics can't handle a mix of {0} and {1} targets".format( 97 type_true, type_pred ValueError: Classification metrics can't handle a mix of binary and continuous targets The same code works commenting out the eval_set line, or using instead eval_metric="error", for example. What am I doing wrong and how is it solved? Edit: I'd like to use in the future different metrics like sklearn.metrics.balanced_accuracy_score or sklearn.metrics.recall_score. | The reason is that xgboost will feed probability outputs to the evaluation function (your accuracy here), but sklearn's accuracy score is expecting hard decisions (1s or 0s) not probabilities. It is unaware of your decision threshold, so it cannot map them to hard decisions. You can use model = xgb.XGBClassifier( n_estimators=100, eval_metric='error' ) or model = xgb.XGBClassifier( n_estimators=100, eval_metric='[email protected]' ) for a threshold of 0.6 instead of 0.5. See https://xgboost.readthedocs.io/en/stable/parameter.html For recall, since it's not in the xgboost builtin options, you need to manually threshold your predictions: import numpy as np from xgboost import XGBClassifier import xgboost as xgb from sklearn.metrics import accuracy_score, recall_score x_train, y_train = np.random.rand(10,3), np.where(np.random.rand(10,)>0.5, 1, 0) x_valid, y_valid = np.random.rand(5,3), np.where(np.random.rand(5,)>0.5, 1, 0) def thresholded_recall_score(y_true, y_preds, thresh=0.5): return recall_score(y_true, y_preds > thresh) model = xgb.XGBClassifier( n_estimators=100, eval_metric=thresholded_recall_score ) model.fit( X=x_train, y=y_train, eval_set=[(x_train, y_train), (x_valid, y_valid)] ) | 3 | 3 |
76,337,589 | 2023-5-26 | https://stackoverflow.com/questions/76337589/repeat-rows-in-dataframe-with-respect-to-column | I have a Pandas DataFrame that looks like this: df = pd.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6], 'col3': [7, 8, 9]}) df col1 col2 col3 0 1 4 7 1 2 5 8 2 3 6 9 I would like to create a Pandas DataFrame like this: df_new col1 col2 col3 0 1 4 7 1 1 5 8 2 1 6 9 3 2 4 7 4 2 5 8 5 2 6 9 6 3 4 7 7 3 5 8 8 3 6 9 Is there built-in or combination of built-in Pandas methods that can achieve this? Even if there are duplicates in df, I would like the output to be the same format. In other words: df col1 col2 col3 0 1 4 7 1 2 5 8 2 2 6 8 df_new col1 col2 col3 0 1 4 7 1 1 5 8 2 1 6 8 3 2 4 7 4 2 5 8 5 2 6 8 6 2 4 7 7 2 5 8 8 2 6 8 | I would also have gone for a cross merge as suggested by @Henry in comments: out = df[['col1']].merge(df[['col2', 'col3']], how='cross').reset_index(drop=True) Output: col1 col2 col3 0 1 4 7 1 1 5 8 2 1 6 9 3 2 4 7 4 2 5 8 5 2 6 9 6 3 4 7 7 3 5 8 8 3 6 9 Comparison of the different approaches: Note that @sammywemmy's approach behaves differently when rows are duplicated, which leads to a non comparable timing. | 9 | 7 |
76,331,894 | 2023-5-25 | https://stackoverflow.com/questions/76331894/custom-fastapi-middleware-causes-localprotocolerrortoo-much-data-for-declared | I have a middleware implemented for FastAPI. For responses that includes some content, it works perfectly. But if a response has no body, it is causing LocalProtocolError("Too much data for declared Content-Length") exception. To isolate the problem, I've reduced the middleware class to this: from starlette.middleware.base import BaseHTTPMiddleware from fastapi import FastAPI, Request class LanguageManagerMiddleware(BaseHTTPMiddleware): def __init__(self, app: FastAPI): super().__init__(app) async def dispatch(self, request: Request, call_next) -> None: return await call_next(request) It basically does nothing. When I add the middleware, I have an exception: raise LocalProtocolError("Too much data for declared Content-Length") h11._util.LocalProtocolError: Too much data for declared Content-Length When I disable the middleware, I have no problem. Here is the line that creates the response which triggers the exception: return Response(status_code=HTTP_204_NO_CONTENT) To further debug the problem, I've activated a breakpoint in the h11/_writers.py ContentLengthWriter class, where the actual exception occurs. I've tried to decode the byte stream with utf-8 and cp437, but had no luch. class ContentLengthWriter(BodyWriter): def __init__(self, length: int) -> None: self._length = length def send_data(self, data: bytes, write: Writer) -> None: self._length -= len(data) if self._length < 0: raise LocalProtocolError("Too much data for declared Content-Length") write(data) I'm stopping the code at this line: self._length -= len(data) If the middleware is disabled, data looks like this: b'' If the middleware is enabled, data looks like this: b'\x1f\x8b\x08\x00\xf6God\x02\xff' What would be modifying the content of the response? | I've solved it. I've just changed the order of which middlewares are added. When I moved my middleware after the GZIP middleware, problem disappeared. Thanks to MatsLindh for pointing out that it is a gzip header. | 4 | 1 |
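For readers hitting the same error, here is a sketch of what the registration-order fix can look like. This is a reconstruction, not code from the thread; the pass-through middleware and the minimum_size value are placeholders. Starlette stacks middleware according to the add_middleware calls, so swapping the two registrations changes which middleware sees the response first — if one order still raises, try the other:

```python
from fastapi import FastAPI
from fastapi.middleware.gzip import GZipMiddleware
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

app = FastAPI()

class LanguageManagerMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        return await call_next(request)

# Registration order determines stacking; per the accepted answer, moving the
# custom middleware relative to GZipMiddleware resolved the Content-Length error.
app.add_middleware(GZipMiddleware, minimum_size=500)
app.add_middleware(LanguageManagerMiddleware)
```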
76,334,787 | 2023-5-25 | https://stackoverflow.com/questions/76334787/python-given-dict-of-old-index-new-index-move-multiple-elements-in-a-list | In Python 3 what would be the best way to move multiple potentially non-contiguous elements to new potentially non-contiguous indexes given a dict of {old index: new index, old index: new index, old index: new index} Important Note: the dict may not contain all the new positions of elements, this is why the examples below the first example do not work Edit: Sorry I forgot to mention that you don't have to worry about checking all the indexes in new_idxs are valid and within the bounds of seq. The keys of new_idxs are also already in sorted order from typing import Any def move_elements( seq: list[Any], new_idxs: dict, ) -> list[Any]: new = [] idx = 0 done = set() while len(new) < len(seq): if idx in new_idxs and idx not in done: new.append(seq[new_idxs[idx]]) done.add(idx) elif idx not in done: new.append(seq[idx]) idx += 1 else: idx += 1 return new # works new_idxs = {0: 1, 1: 0} seq = [0, 1] seq = move_elements(seq, new_idxs) print ("\nexpected:", [1, 0]) print ("actual :", seq) # expected output # [1, 0] # doesn't work new_idxs = {3: 0, 5: 1} seq = [0, 1, 2, 3, 4, 5, 6, 7] seq = move_elements(seq, new_idxs) print ("\nexpected:", [3, 5, 0, 1, 2, 4, 6, 7]) print ("actual :", seq) # expected output # [3, 5, 0, 1, 2, 4, 6, 7] # doesn't work new_idxs = {3: 6, 5: 7} seq = [0, 1, 2, 3, 4, 5, 6, 7] seq = move_elements(seq, new_idxs) print ("\nexpected:", [0, 1, 2, 4, 6, 7, 3, 5]) print ("actual :", seq) # expected output # [0, 1, 2, 4, 6, 7, 3, 5] new_idxs = {3: 1, 7: 4} seq = [0, 1, 2, 3, 4, 5, 6, 7] seq = move_elements(seq, new_idxs) print ("\nexpected:", [0, 3, 1, 2, 7, 4, 5, 6]) print ("actual :", seq) # expected output # [0, 3, 1, 2, 7, 4, 5, 6] new_idxs = {0: 3, 3: 1, 7: 4} seq = [0, 1, 2, 3, 4, 5, 6, 7] seq = move_elements(seq, new_idxs) print ("\nexpected:", [1, 3, 2, 0, 7, 4, 5, 6]) print ("actual :", seq) # expected output # [0, 1, 2, 3, 4, 5, 6, 7] # [1, 3, 2, 0, 7, 4, 5, 6] | A linear time one. I start with a result list full of dummy objects. Then move elements from the input sequence into the result list as requested. Then replace the remaining dummies in the result with the remaining non-dummy elements from the input sequence. from typing import Any def move_elements( seq: list[Any], new_idxs: dict, ) -> list[Any]: dummy = object() res = [dummy] * len(seq) seq = seq[:] for old, new in new_idxs.items(): res[new] = seq[old] seq[old] = dummy remaining = (x for x in seq if x is not dummy) for i, x in enumerate(res): if x is dummy: res[i] = next(remaining) return res def test(seq, new_idxs, expect): result = move_elements(seq, new_idxs) print('seq: ', seq) print('new_idxs:', new_idxs) print('expect: ', expect) print('result: ', result) print('correct? ', result == expect) print() test([0, 1], {0: 1, 1: 0}, [1, 0]) test([0, 1, 2, 3, 4, 5, 6, 7], {3: 0, 5: 1}, [3, 5, 0, 1, 2, 4, 6, 7]) test([0, 1, 2, 3, 4, 5, 6, 7], {3: 6, 5: 7}, [0, 1, 2, 4, 6, 7, 3, 5]) test([0, 1, 2, 3, 4, 5, 6, 7], {3: 1, 7: 4}, [0, 3, 1, 2, 7, 4, 5, 6]) Output (Attempt This Online!): seq: [0, 1] new_idxs: {0: 1, 1: 0} expect: [1, 0] result: [1, 0] correct? True seq: [0, 1, 2, 3, 4, 5, 6, 7] new_idxs: {3: 0, 5: 1} expect: [3, 5, 0, 1, 2, 4, 6, 7] result: [3, 5, 0, 1, 2, 4, 6, 7] correct? True seq: [0, 1, 2, 3, 4, 5, 6, 7] new_idxs: {3: 6, 5: 7} expect: [0, 1, 2, 4, 6, 7, 3, 5] result: [0, 1, 2, 4, 6, 7, 3, 5] correct? 
True seq: [0, 1, 2, 3, 4, 5, 6, 7] new_idxs: {3: 1, 7: 4} expect: [0, 3, 1, 2, 7, 4, 5, 6] result: [0, 3, 1, 2, 7, 4, 5, 6] correct? True | 3 | 1 |
76,331,049 | 2023-5-25 | https://stackoverflow.com/questions/76331049/ruamel-yaml-anchors-with-roundtriploader-roundtripdumper | I am trying to load below example yaml file using the ruamel.yaml python package. - database: dev_db <<: &defaults adapter: postgres host: localhost username: postgres password: password - database: test_db <<: *defaults - database: prod_db <<: *defaults from pydantic import BaseModel from ruamel.yaml import YAML yaml = YAML(typ='rt') with open('config.yaml', 'r') as file: envs = yaml.load(file) for env in envs: print(c) This generates below output which misses the aliased tags completely. But when I change the typ to 'safe', even the aliased tags are output correctly. {'database': 'dev_db'} {'database': 'test_db'} {'database': 'prod_db'} I am trying to create Pydantic data models with each entry in the YAML. How to get all the attributes with default loader(rt)? | Although you should be able to specify a mapping as value for a merge key (instead of an alias to some previously anchored mapping, or a list of such aliases), this doesn't work properly in ruamel.yaml's round-trip mode for ruamel.yaml<0.17.27: import sys import ruamel.yaml yaml_str = """\ a: 42 <<: {b: 96} """ yaml = ruamel.yaml.YAML() data = yaml.load(yaml_str) print(data) gives: {'a': 42} This is caused by an incorrect creation of that node, and although the anchor default in your example is created (otherwise you could not use the alias), the value for that appears to be an empty dict. The fix for this bug is rather small, but it is in a local function of the flatten_mapping method of the RoundTripConstructor, so you'll have to provide the full replacement for that: from ruamel.yaml.nodes import MappingNode from ruamel.yaml.constructor import ConstructorError, DuplicateKeyError, DuplicateKeyFutureWarning def my_flatten_mapping(self, node): def constructed(value_node): if value_node in self.constructed_objects: value = self.constructed_objects[value_node] else: value = self.construct_object(value_node, deep=True) # << used to be deep=False return value merge_map_list = [] index = 0 while index < len(node.value): key_node, value_node = node.value[index] if key_node.tag == 'tag:yaml.org,2002:merge': if merge_map_list: # double << key if self.allow_duplicate_keys: del node.value[index] index += 1 continue args = [ 'while constructing a mapping', node.start_mark, f'found duplicate key "{key_node.value}"', key_node.start_mark, """ To suppress this check see: http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys """, """\ Duplicate keys will become an error in future releases, and are errors by default when using the new API. 
""", ] if self.allow_duplicate_keys is None: warnings.warn(DuplicateKeyFutureWarning(*args), stacklevel=1) else: raise DuplicateKeyError(*args) del node.value[index] if isinstance(value_node, MappingNode): merge_map_list.append((index, constructed(value_node))) elif isinstance(value_node, SequenceNode): for subnode in value_node.value: if not isinstance(subnode, MappingNode): raise ConstructorError( 'while constructing a mapping', node.start_mark, f'expected a mapping for merging, but found {subnode.id!s}', subnode.start_mark, ) merge_map_list.append((index, constructed(subnode))) else: raise ConstructorError( 'while constructing a mapping', node.start_mark, 'expected a mapping or list of mappings for merging, ' f'but found {value_node.id!s}', value_node.start_mark, ) elif key_node.tag == 'tag:yaml.org,2002:value': key_node.tag = 'tag:yaml.org,2002:str' index += 1 else: index += 1 return merge_map_list yaml = ruamel.yaml.YAML() yaml.Constructor.flatten_mapping = my_flatten_mapping data = yaml.load("""\ a: 42 <<: {b: 96} """) print(data) print('=' * 20) data = yaml.load("""\ - database: dev_db <<: &defaults adapter: postgres host: localhost username: postgres password: password - database: test_db <<: *defaults - database: prod_db <<: *defaults """) for d in data: print(d) gives: {'a': 42, 'b': 96} ==================== {'database': 'dev_db', 'adapter': 'postgres', 'host': 'localhost', 'username': 'postgres', 'password': 'password'} {'database': 'test_db', 'adapter': 'postgres', 'host': 'localhost', 'username': 'postgres', 'password': 'password'} {'database': 'prod_db', 'adapter': 'postgres', 'host': 'localhost', 'username': 'postgres', 'password': 'password'} The fix for this is in ruamel.yaml>=0.17.27 | 3 | 1 |
76,330,754 | 2023-5-25 | https://stackoverflow.com/questions/76330754/how-to-define-a-pydantic-model-nested-under-a-class | I have two Pydantic models: from typing import List, Union from pydantic import BaseModel class Students: class Student(BaseModel): StudentName: str StudentAge: int class StudentRequest(BaseModel): Class: int UUID: str Students: Union[List[Student], None] For the above class at Students: Union[List[Student], None], I get the error Unresolved reference 'Student'. Can we not define a model under a class and use it for segregating them? The code below works, but I want to get an understanding whether the above BaseModel nested under a class will work or not: class Student(BaseModel): StudentName: str StudentAge: int class StudentRequest(BaseModel): Class: int UUID: str Students: Union[List[Student], None] | You need to understand that as long as the outer class is not fully constructed (when you are still setting up things inside its namespace), you will inevitably have to deal with forward references. So there are two mandatory things (and one optional) you need to remember, when doing this. 1) Use the qualified class name OuterClass.InnerClass The Python interpreter itself will have no trouble with a forward reference to another inner class in an annotation. That is simply because it does not actually do anything with those annotations by default. So you could just do this: from pydantic import BaseModel class OuterClass: class Student(BaseModel): name: str age: int class StudentRequest(BaseModel): ... students: list["Student"] But this will fall apart with Pydantic models because those actually use those annotations to construct objects based off of them. As you will see in the next section, at some point Pydantic will have to actually resolve the refernce to Student so get the actual underlying class at runtime. And since that will inevitably happen outside the scope of the OuterClass, without the qualified name, it will run into a NameError. So you have to do it like this: ... class OuterClass: class Student(BaseModel): ... class StudentRequest(BaseModel): ... students: list["OuterClass.Student"] 2) Update forward references after the outer class is constructed An annotation as shown above is internally stored as a ForwardRef object. As mentioned above, Pydantic will have to resolve those forward references eventually, for you to be able to actually use those models. However it is not always able to do so automatically. To quote the documentation: In some cases, a ForwardRef won't be able to be resolved during model creation. [...] When this happens, you'll need to call update_forward_refs after the model has been created before it can be used. But with a setup like yours, when the model is nested in the namespace of an outer class, you cannot just do so after the model is created. You must do that after the outer class is created. So with that setup, you will have to do this: from pydantic import BaseModel class OuterClass: class Student(BaseModel): name: str age: int class StudentRequest(BaseModel): ... students: list["OuterClass.Student"] OuterClass.StudentRequest.update_forward_refs() Notice that the call happens outside of OuterClass after it is created. 3) Enable postponed evaluation of annotations (optional) Since PEP 563 you can do from __future__ import annotations at the top of your module and then omit the quotes from your forward references. This just improves readability and makes things generally easier to code. 
So in total, your code should look like this: from __future__ import annotations from pydantic import BaseModel class OuterClass: class Student(BaseModel): name: str age: int class StudentRequest(BaseModel): ... students: list[OuterClass.Student] OuterClass.StudentRequest.update_forward_refs() Demo: print(OuterClass.StudentRequest.schema_json(indent=4)) obj = OuterClass.StudentRequest.parse_obj({ "students": [ {"name": "foo", "age": 18}, {"name": "bar", "age": 19}, ] }) print(obj.json(indent=4)) Output: { "title": "StudentRequest", "type": "object", "properties": { "students": { "title": "Students", "type": "array", "items": { "$ref": "#/definitions/Student" } } }, "required": [ "students" ], "definitions": { "Student": { "title": "Student", "type": "object", "properties": { "name": { "title": "Name", "type": "string" }, "age": { "title": "Age", "type": "integer" } }, "required": [ "name", "age" ] } } } { "students": [ { "name": "foo", "age": 18 }, { "name": "bar", "age": 19 } ] } | 4 | 9 |
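A side note added here, not part of the original answer: on Pydantic v2, update_forward_refs was superseded by model_rebuild, so the call after the outer class (using the same OuterClass/StudentRequest names as above) would be:

```python
# Pydantic v2 equivalent of the update_forward_refs() call shown above
OuterClass.StudentRequest.model_rebuild()
```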
76,330,655 | 2023-5-25 | https://stackoverflow.com/questions/76330655/attributeerror-module-numpy-has-no-attribute-complex | I am trying to make a real number complex using numpy. I am using numpy version 1.24.3 Here is the code: import numpy as np c=np.complex(1) However, I get this error: AttributeError: module 'numpy' has no attribute 'complex'. | np.complex was a deprecated alias for the builtin complex. Instead of np.complex you can use: complex(1) #output (1+0j) #or np.complex128(1) #output (1+0j) #or np.complex_(1) #output (1+0j) #or np.cdouble(1) #output (1+0j) Link to doc: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations | 3 | 7 |
76,328,904 | 2023-5-25 | https://stackoverflow.com/questions/76328904/is-there-a-difference-between-starlette-fastapi-background-tasks-and-simply-usin | I am looking for different ways to queue up functions that will do things like copy files, scrape websites, and manipulate files (tasks that will take considerable time). I am using FastAPI as a backend API, and I came across FastAPI's background task documentation as well as Starlette's background task documentation and I fail to understand why I couldn't just use multiprocessing. This is what I do currently using Multiprocessing and it works fine. from multiprocessing import Process from fastapi import FastAPI, File, UploadFile app = FastAPI() def handleFileUpload(file): print(file) #handle uploading file here @app.post("/uploadFileToS3") async def uploadToS3(bucket: str, file: UploadFile = File(...)): uploadProcess = Process(target=handleFileUpload, args(file)) uploadProcess.start() return { "message": "Data has been queued for upload. You will be notified when it is ready." "status": "OK" } If this works why would FastAPI Background Tasks exist if I can do it just as simply as using Multiprocessing? My only guess is that it has to do with scaling? It may work for myself just testing, but I know that multiprocessing has to do with the number of cores a system has. I may be completely missing the point of multiprocessing. Please help me understand. Thanks. | TL;DR Those background tasks will always execute in the same process as your main application. They will either just run asynchronously on the event loop or in a separate thread. For operations that are not primarily I/O, you should probably avoid using them and use multiprocessing instead. Details Use multiprocessing (correctly), if you want I fail to understand why I couldn't just use multiprocessing. Not only does the documentation not discourage using multiprocessing, the FastAPI docs explicitly suggest it for computation intensive tasks. Quote: (emphasis mine) If you need to perform heavy background computation and you don't necessarily need it to be run by the same process (for example, you don't need to share memory, variables, etc), you might benefit from using other bigger tools [...]. So you can. And if you want to do CPU-bound work in the background, you almost certainly have to use your own multiprocessing setup. But in the example you showed in your question, it seems that the operation you want to perform in the background is to upload a file somewhere. Such a task will probably lend itself well to BackgroundTasks-based concurrency because it is I/O-bound. Spawning another process introduces additional overhead that might make it less efficient than what the BackgroundTasks do. Also, you did not show in your code, when and how you are joining that new process. This is important and mentioned in the guidelines for multiprocessing: [...] when a process finishes but has not been joined it becomes a zombie. [...] it is probably good practice to explicitly join all the processes that you start. Just spawning it and forgetting about it is probably a terrible idea, especially when that happens every time that route is requested. And a child process can not just join itself because that would cause a deadlock. Technical distinctions As you know, the FastAPI background tasks are just a re-import of the BackgroundTasks class from Starlette (see docs). 
FastAPI just integrates them into its route handling setup in such a way that the user does not need to explicitly return them at any point. But the Starlette docs clearly state that the class is for in-process background tasks. And if we take a look at the source, we can see that under the hood it's __call__ implementation really just does one of two things: If the function you passed is asynchronous, it simply awaits it. If the function you passed is a "regular" function (not async), it runs it in a thread-pool. (If you go deeper, you'll see that it utilizes the anyio.to_thread.run_sync coroutine.) This means that at no point is there another process in play. In case 1) it is even scheduled on the same exact event loop as the rest of the application, which means it is all happening in one thread. And in case 2), an additional thread performs the operation. The implications are very obvious, if you have some experience dealing with concurrency in Python: Do not use BackgroundTasks, if you want to perform CPU-bound operations there. Those would completely block your application because they will either 1) block the event loop in the only available thread or 2) cause the GIL to lock up the main thread. Legitimate use cases On the flip side, if your tasks perform some I/O-bound operations (an example given in the docs is connecting to an email server to send something, after the request was processed), the BackgroundTasks machinery is very convenient. The main benefit of BackgroundTasks to a custom setup in my opinion is that you do not need to worry about how and when exactly the coroutines will be awaited or the threads joined. That is all abstracted away behind the route handler. You just need to specify what function you want executed some time after the response. You could just e.g. call asyncio.create_task just before the end of your route handler function. That would probably schedule the task right after the request is processed and effectively make it run in the background. But there are three problems with that: There is no guarantee it will be scheduled immediately after. It may take a while, if there are a lot of requests being processed. You have no chance to actually await that task and ensure it actually finishes (as expected or with an error), unless you develop some mechanism yourself to keep track of it outside the route handler. Since the event loop only keeps weak references to tasks, such a task might get garbage collected before it is finished. (That means it will just straight up disappear.) | 6 | 9 |
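To make the answer's advice concrete, here is a minimal sketch (an addition, not the answer author's code) of one way to keep CPU-bound work off the event loop while still being able to await the result: a ProcessPoolExecutor driven through run_in_executor. The endpoint path and the cpu_heavy function are made up for illustration.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor
from fastapi import FastAPI

app = FastAPI()
executor = ProcessPoolExecutor(max_workers=2)

def cpu_heavy(n: int) -> int:
    # stand-in for real CPU-bound work (image processing, parsing, etc.)
    return sum(i * i for i in range(n))

@app.post("/crunch")
async def crunch(n: int = 1_000_000):
    loop = asyncio.get_running_loop()
    # The worker process does the heavy lifting; awaiting the future keeps the
    # event loop free and ensures the result (or exception) is collected.
    result = await loop.run_in_executor(executor, cpu_heavy, n)
    return {"result": result}
```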
76,328,441 | 2023-5-25 | https://stackoverflow.com/questions/76328441/regex-to-remove-captions-with-condition-not-to-overlap-second-match | I have the following string, which I extract from a pdf: This is Fig. 13: John holding his present and the flowers Source: official photographer a beautiful Table: a table of some kind and fully complete Table: John holding his present and Source: official photographer sentence The text includes figs and tables, most of which have a caption on top and a source on bottom, but some don't. Fundamentally, the text I want to be left with should be: This is a beautiful and fully complete sentence I have tried the following: s = re.sub(r'(Fig|Table)[\s\S]+?Source:.*\n', '', mystring,flags=re.MULTILINE) But unfortunately it returns: This is a beautiful sentence With my limited knowledge of regex I cannot figure out how to put such a condition: It should stop at the first \n after Source, only if there is no new fig|table in between, in which case it should have stopped at the first \n from start. Any idea? Thank you. | What you need to match is a Fig or Table followed by either Characters up to and including a line starting with Source, with no Fig or Table in between the original one and Source; or Characters up to the end of line You can achieve #1 above by using a tempered greedy token, which ensures that each character processed until Source is found does not precede Fig or Table. This regex will do what you want: (?:Fig|Table)(?:(?:(?!Fig|Table)[\s\S])+?Source[^\n]*\n|[^\n]*\n) This matches: (?:Fig|Table) : a word Fig or Table; and then either (?:(?!Fig|Table)[\s\S])+? : a minimal number of characters, none of which precede either of the words Fig or Table Source[^\n]*\n : The word Source followed by some number of characters until newline; or [^\n]*\n some number of characters until newline Regex demo on regex101 In python: s = re.sub(r'(?:Fig|Table)(?:(?:(?!Fig|Table)[\s\S])+?Source[^\n]*\n|[^\n]*\n)', '', mystring) print(s) Output: This is a beautiful and fully complete sentence Note this does leave newlines (if present in the original string) at the start and end of the string, they can be removed with strip. | 2 | 4 |
76,328,354 | 2023-5-25 | https://stackoverflow.com/questions/76328354/subset-first-and-last-consecutive-value-from-pandas-df-col-python | I want to subset a df by returning the first and last consecutive value from a pandas col. Drop_duplciates won't work because it doesn't account for consecutive groupings. I'm using .shift() (below) but this only returns the last consecutive value, where I want the first and last. import pandas as pd df = pd.DataFrame({"Item":['A', 'A', 'A', 'B', 'B', 'B', 'B', 'A', 'A'], "Val1":[-20, -21, -20, -20, -20, -21, -20, -23, -22], "Val2":[150, 151, 150, 148, 149, 150, 151, 150, 148] }) df1 = df[df['Item'].ne(df['Item'].shift())] print(df1) intended output: Item Val1 Val2 0 A -20 150 2 A -20 150 3 B -20 148 6 B -20 151 7 A -23 150 8 A -22 148 | You need to compare against both the forward and backward shifted values so that you can find the start and finish of each group: df1 = df[(df['Item'].ne(df['Item'].shift())) | (df['Item'].ne(df['Item'].shift(-1)))] Output: Item Val1 Val2 0 A -20 150 2 A -20 150 3 B -20 148 6 B -20 151 7 A -23 150 8 A -22 148 | 2 | 3 |
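An equivalent groupby-based sketch (an addition, not part of the accepted answer) that labels each consecutive run first, which can be convenient if you also want the run id:

```python
import pandas as pd

df = pd.DataFrame({"Item": ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'A', 'A'],
                   "Val1": [-20, -21, -20, -20, -20, -21, -20, -23, -22],
                   "Val2": [150, 151, 150, 148, 149, 150, 151, 150, 148]})

run_id = (df['Item'] != df['Item'].shift()).cumsum()  # 1,1,1,2,2,2,2,3,3
first = df.groupby(run_id).head(1)
last = df.groupby(run_id).tail(1)
out = pd.concat([first, last]).sort_index()
out = out[~out.index.duplicated()]  # guards against runs of length 1
print(out)
```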
76,325,603 | 2023-5-24 | https://stackoverflow.com/questions/76325603/evaluating-forward-references-with-typing-get-type-hints-in-python-for-a-class-d | I'm having trouble calling typing.get_type_hints() for classes that have forward references as strings. My code works with not defined inside of a function. I've reproduced a minimal example below in Python 3.10: import typing class B: pass class A: some_b: "B" print(typing.get_type_hints(A)) # prints {'some_b': <class '__main__.B'>} import typing def func(): class B: pass class A: some_b: "B" print(typing.get_type_hints(A)) func() # NameError: name 'B' is not defined Is this expected behavior? Is there any way to get around this, and make sure that forward references with strings get evaluated in the correct scope? | typing.get_type_hints allows you to explicitly pass the local namespace to use for resolving references via the localns parameter. from typing import get_type_hints def func(): class A: some_b: "B" class B: pass print(get_type_hints(A, localns=locals())) func() Output: {'some_b': <class '__main__.func.<locals>.B'>} See the docs for locals. Side note: By utilizing postponed evaluation of annotations (PEP 563) you can omit the quotation marks: from __future__ import annotations from typing import get_type_hints def func(): class A: some_b: B class B: pass print(get_type_hints(A, localns=locals())) | 6 | 3 |
76,322,524 | 2023-5-24 | https://stackoverflow.com/questions/76322524/how-to-use-asyncsession-from-sqlalchemy-in-celery-tasks | Use AsyncSession in celery tasks I use fastapi and sqlalchemy, I must create celery task, that will go to the database and check does any objects of my Event (table) has end_time < datetime.now() There is my code: @asynccontextmanager async def scoped_session(): scoped_factory = async_scoped_session( async_session, scopefunc=asyncio.current_task() ) try: async with scoped_factory() as s: yield s finally: await scoped_factory().remove() async def logic(): async with scoped_session() as session: stmt = select(event.models.Event).where( event.models.Event.end_time <= datetime.now() ) results = await session.execute(stmt) for res in results.fetchall(): print(res.is_event_done) @celery.task(name='is_event_done', bind=True, ignore_result=True) def is_event_done(self): asyncio.run(logic()) here is my async_session engine = create_async_engine(settings.db_url, echo=True) async_session = sessionmaker( engine, class_=AsyncSession, expire_on_commit=False ) so I got \'_asyncio.Task\' object is not callable | I just do like this async def update_event() -> None: async with async_session() as session: stmt = update(event.models.Event).where( event.models.Event.end_time <= datetime.now(), event.models.Event.is_active is True ).values(is_done=True) await session.execute(stmt) @celery.task(name='is_event_done', bind=True, ignore_result=True) def is_event_done(self) -> None: loop.run_until_complete(update_event()) and now its work fine, I hope that its a good solution If there is anything wrong let me know, thanks! | 4 | 3 |
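One detail the accepted answer glosses over is where loop comes from. A hedged sketch of a self-contained alternative (an assumption, not the author's code), reusing the celery app and the update_event coroutine from the answer: let each task invocation create and tear down its own loop with asyncio.run.

```python
import asyncio

@celery.task(name='is_event_done', bind=True, ignore_result=True)
def is_event_done(self) -> None:
    # asyncio.run builds a fresh event loop per task call and closes it afterwards,
    # so no module-level `loop` object needs to be shared between tasks.
    asyncio.run(update_event())
```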
76,322,128 | 2023-5-24 | https://stackoverflow.com/questions/76322128/pyo3-how-to-return-enums-to-python-module | I'm trying to build a Python package from Rust using PyO3. Right now I'm stuck trying to return enums Rust type to Python. I have a simple enum like so: pub enum Lang { Deu, Eng, Fra } And in lib.rs #[pyfunction] fn detect_language(text: &str) -> PyResult<????> { // Do some stuff .... res:Lang = Do_some_stuff(text) Ok(res) } #[pymodule] fn pymylib(_py: Python, m: &PyModule) -> PyResult<()> { m.add_function(wrap_pyfunction!(detect_language, m)?)?; Ok(()) } In Python code from pymylib import detect_language res=detect_language('Ceci est un test') print(res) # Lang:Fra ??? | One approach would to use #[pyclass] attribute to make the Python class from Rust Enum. Also make sure to export this class from Rust code, so that you can do the comparison on python layer. ie, use pyo3::prelude::*; #[pyclass] pub enum Lang { Deu, Eng, Fra } #[pyfunction] fn detect_language(text: &str) -> PyResult<Lang> { // Write your actual code here // But for testing purpose let's return `Deu` varient. Ok(Lang::Deu) } /// A Python module implemented in Rust. #[pymodule] fn pymylib(_py: Python, m: &PyModule) -> PyResult<()> { m.add_function(wrap_pyfunction!(detect_language, m)?)?; m.add_class::<Lang>()?; Ok(()) } Now in Python interpreter you can do, >>> from pymylib import Lang >>> import pymylib >>> >>> pymylib.detect_language("1") Lang.Deu >>> pymylib.detect_language("1") == Lang.Deu True See also Support exporting Rust enums to Python | 2 | 3 |
76,319,917 | 2023-5-24 | https://stackoverflow.com/questions/76319917/python-unable-to-import-spacy-and-download-en-core-web-sm | What I want to achieve: Import spacy and use it. What I've tried: When I try to import spacy on python I get ImportError: cannot import name util error (detail on error1) Spacy is sucessfully installed to my device. https://github.com/explosion/spaCy/issues/2370 Following article I operated pip uninstall en_core_web_sm then I got WARNING: Skipping en_core_web_sm as it is not installed. operate python -m spacy download en_core_web_sm give me TypeError: issubclass() arg 1 must be a class error (detail in error2) Error1: ImportError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_11524/513823458.py in <module> 1 import sys ----> 2 import spacy ~\AppData\Roaming\Python\Python39\site-packages\spacy\__init__.py in <module> 12 from thinc.api import Config 13 ---> 14 from . import pipeline # noqa: F401 15 from .cli.info import info # noqa: F401 16 from .glossary import explain # noqa: F401 ~\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\__init__.py in <module> ----> 1 from .attributeruler import AttributeRuler 2 from .dep_parser import DependencyParser 3 from .edit_tree_lemmatizer import EditTreeLemmatizer 4 from .entity_linker import EntityLinker 5 from .ner import EntityRecognizer ~\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\attributeruler.py in <module> 4 from pathlib import Path 5 ----> 6 from .pipe import Pipe 7 from ..errors import Errors 8 from ..training import Example ~\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\pipe.pyx in init spacy.pipeline.pipe() ~\AppData\Roaming\Python\Python39\site-packages\spacy\vocab.pyx in init spacy.vocab() ~\AppData\Roaming\Python\Python39\site-packages\spacy\tokens\__init__.py in <module> ----> 1 from .doc import Doc 2 from .token import Token 3 from .span import Span 4 from .span_group import SpanGroup 5 from ._serialize import DocBin ~\AppData\Roaming\Python\Python39\site-packages\spacy\tokens\doc.pyx in init spacy.tokens.doc() ImportError: cannot import name util Error2: Traceback (most recent call last): File "C:\Users\akira\anaconda3\lib\runpy.py", line 188, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "C:\Users\akira\anaconda3\lib\runpy.py", line 147, in _get_module_details return _get_module_details(pkg_main_name, error) File "C:\Users\akira\anaconda3\lib\runpy.py", line 111, in _get_module_details __import__(pkg_name) File "C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\__init__.py", line 14, in <module> from . 
import pipeline # noqa: F401 File "C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\__init__.py", line 1, in <module> from .attributeruler import AttributeRuler File "C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\pipeline\attributeruler.py", line 6, in <module> from .pipe import Pipe File "spacy\pipeline\pipe.pyx", line 1, in init spacy.pipeline.pipe File "spacy\vocab.pyx", line 1, in init spacy.vocab File "C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\tokens\__init__.py", line 1, in <module> from .doc import Doc File "spacy\tokens\doc.pyx", line 36, in init spacy.tokens.doc File "C:\Users\akira\AppData\Roaming\Python\Python39\site-packages\spacy\schemas.py", line 222, in <module> class TokenPattern(BaseModel): File "pydantic\main.py", line 205, in pydantic.main.ModelMetaclass.__new__ File "pydantic\fields.py", line 491, in pydantic.fields.ModelField.infer File "pydantic\fields.py", line 421, in pydantic.fields.ModelField.__init__ File "pydantic\fields.py", line 537, in pydantic.fields.ModelField.prepare File "pydantic\fields.py", line 634, in pydantic.fields.ModelField._type_analysis File "pydantic\fields.py", line 641, in pydantic.fields.ModelField._type_analysis File "C:\Users\akira\anaconda3\lib\typing.py", line 847, in __subclasscheck__ return issubclass(cls, self.__origin__) TypeError: issubclass() arg 1 must be a class | This has been reported. See the suggested workaround: https://github.com/explosion/spaCy/issues/12659. | 4 | 4 |
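For context (an addition — the linked issue remains the authoritative reference): this May 2023 breakage was widely reported as an interaction between typing_extensions 4.6.x and the older pydantic 1.x pulled in by spaCy on Python 3.9. The workarounds that circulated at the time were, roughly, to pin typing_extensions below 4.6 or to upgrade pydantic/spaCy to fixed releases:

```bash
pip install "typing_extensions<4.6.0"
# or, alternatively, pull in fixed versions of the stack
pip install -U pydantic spacy
```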
76,321,221 | 2023-5-24 | https://stackoverflow.com/questions/76321221/error-importerror-cannot-import-name-get-object-size-from-bson | when running the below file , I am getting error "ImportError: cannot import name 'get_object_size' from 'bson' (C:\Users\Dell\AppData\Local\Programs\Python\Python310\lib\site-packages\bson_init.py)" code: `from flask import Flask, request, jsonify from flask_pymongo import PyMongo # from bson.objectid import ObjectId app = Flask(__name__) app.config['MONGO_URI'] = connectionstring mongo = PyMongo(app) # Create a new to-do item @app.route('/api/todo', methods=['POST']) def create_todo(): # data = request.json # Create a new to-do item in the database new_todo = { # 'task': data['task'], # 'due_date': data['due_date'], # 'completed': False "task": "study", "date": 18, "completed": False } result = mongo.db.todos.insert_one(new_todo) print(result,"has been created")` I am creating a todo list using flask environment. | uninstall both pymongo and bson and install just pymongo, pymongo automatically installs bson pip uninstall pymongo pip uninstall bson pip install pymongo | 8 | 41 |
76,296,961 | 2023-5-20 | https://stackoverflow.com/questions/76296961/microservices-architecture-with-django | I have some questions about creating microservices with Django. Let's say we have an online shop or a larger system with many database requests and users. I want to practice and simulate a simple microservice to learn something new. We want to create a microservices-based system with the following components: A Django-based microservice with its admin panel and full functionality (excluding DRF). One or more microservices with a React/Angular frontend. Several additional microservices to separate functionalities. I'm unsure about the architecture. Let's assume we want to manage data using the Django admin panel. The simplest solution would be to add DRF to the first microservice and extend its functionality (REST app) - instead of creating different services (3.). But what if we want to separate functionality into different microservices? Should the microservices in point 3 be connected to the same database and treated as different Django projects (with DRF)? Can we use GoLang, FastAPI, or Java Spring for the third microservice? If yes, should all models be duplicated and registered in the first microservice? Alternatively, is there a better way to approach this? It would be great to hear your perspective and methods on how to proceed with this. Have a wonderful day! | First a quick summary of Microservices vs Monolithic apps pros and cons (this is important). Microservices: [ PROS ] scalability (they scale independently) flexibility (each microservice can use its own stack & hardware setup) isolation (the failure of one microservice does not affect another, only its service fails.) [ CONS ] Complexity (so much infrastructure to setup and maintain at every layer) Data consistency (each db is independent so making sure consistency is maintained is added complexity) Distributed system challenges ( latency/fault tolerance and testing is much harder) Now for your questions: separating functionality into different microservices. That is what apps in a Django project are for, and is a core principle of software engineering, separation of concerns can still be applied in a monolithic application. When discussing microservices, the questions should be about what benefit would it bring at the cost of complexity, such having a service that does pure GPU computation, perhaps would benefit from being a microservice running on an optimized language and system with access to GPUs. I would even argue you should only transition to using microservices, when you have explored all other solutions, and have composed an irrefutable argument to do so with the team. Should microservices be connected to the same DB? Microservices should have their own db, see isolation. Otherwise it's the same as just using a monolithic app with extra complexity and no benefit. Can you use a different stack, and should duplicated models be registered? This again comes under a missunderstanding of what a microservice is. Your microservice should encapsulate the minimum amount of data it needs to function independently. Alternative: Design your Monolithic application very well, separate out your concerns and business logic in a de-coupled design, even if it's a monolith. you have to have the mindset of: "if I want to swap this functionality for a microservice, how easily will it be to rip it out, what is the coupling, etc...) A good design leads to scalability and maintainability which is most important. 
It also allows other people to contribute their expertise on a subset of the project without needing to understand the whole system. | 12 | 22 |
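To ground the "well-designed monolith" recommendation, here is a minimal sketch of a service-layer module inside a Django app. Everything here — the orders app, the Order model and its fields — is hypothetical: views stay thin and call into functions like this, so the business logic could later be lifted into a separate service with little coupling.

```python
# orders/services.py -- hypothetical app; keeps business logic out of views/models
from dataclasses import dataclass

from orders.models import Order  # assumed model with customer_id and total fields


@dataclass
class PlacedOrder:
    order_id: int
    total: float


def place_order(customer_id: int, items: list[dict]) -> PlacedOrder:
    # Pure domain logic first, persistence second; a view (or later, an API
    # endpoint of a separate service) only calls this function.
    total = sum(item["price"] * item["qty"] for item in items)
    order = Order.objects.create(customer_id=customer_id, total=total)
    return PlacedOrder(order_id=order.id, total=total)
```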
76,301,087 | 2023-5-21 | https://stackoverflow.com/questions/76301087/polars-list-to-columns-without-get | Say I have: In [1]: df = pl.DataFrame({'a': [[1,2], [3,4]]}) In [2]: df Out[2]: shape: (2, 1) ┌───────────┐ │ a │ │ --- │ │ list[i64] │ ╞═══════════╡ │ [1, 2] │ │ [3, 4] │ └───────────┘ I know that all elements of 'a' are lists of the same length. I can do: In [10]: df.select(pl.col('a').list.get(i).alias(f'a_{i}') for i in range(2)) Out[10]: shape: (2, 2) ┌─────┬─────┐ │ a_0 ┆ a_1 │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 1 ┆ 2 │ │ 3 ┆ 4 │ └─────┴─────┘ but this involved hard-coding 2. Is there a way to do this without hard-coding the 2? I may not know in advance how many elements there in the lists (I just know that they all have the same number of elements) | You can convert the list to a struct and .unnest() df.with_columns(pl.col("a").list.to_struct()).unnest("a") shape: (2, 2) ┌─────────┬─────────┐ │ field_0 ┆ field_1 │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════════╪═════════╡ │ 1 ┆ 2 │ │ 3 ┆ 4 │ └─────────┴─────────┘ Warning: If your lists are not the same length, you must set n_field_strategy to max_width. .list.to_struct("max_width") By default, it uses the length of the first list found. This would result in truncated data if you had longer lists later in your data. | 7 | 14 |
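To get the exact a_0/a_1 column names asked for in the question, a rename can be chained onto the accepted approach (a small addition, not from the answer itself):

```python
import polars as pl

df = pl.DataFrame({'a': [[1, 2], [3, 4]]})

out = df.with_columns(pl.col("a").list.to_struct()).unnest("a")
out = out.rename({c: c.replace("field_", "a_")
                  for c in out.columns if c.startswith("field_")})
print(out)  # columns: a_0, a_1
```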