question_id (int64, 59.5M to 79.4M) | creation_date (str, length 8 to 10) | link (str, length 60 to 163) | question (str, length 53 to 28.9k) | accepted_answer (str, length 26 to 29.3k) | question_vote (int64, 1 to 410) | answer_vote (int64, -9 to 482) |
---|---|---|---|---|---|---|
73,744,658 | 2022-9-16 | https://stackoverflow.com/questions/73744658/resource-punkt-not-found-please-use-the-nltk-downloader-to-obtain-the-resource | I have NLTK installed and it is giving me an error: Resource punkt not found. Please use the NLTK Downloader to obtain the resource: import nltk nltk.download('punkt') For more information see: https://www.nltk.org/data.html Attempted to load tokenizers/punkt/PY3/english.pickle Searched in: - '/Users/divyanshundley/nltk_data' - '/Library/Frameworks/Python.framework/Versions/3.10/nltk_data' - '/Library/Frameworks/Python.framework/Versions/3.10/share/nltk_data' - '/Library/Frameworks/Python.framework/Versions/3.10/lib/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' - '' ********************************************************************** And my code is: import nltk nltk.download('punkt') def tokenize(token): return nltk.word_tokenize(token); tokenize("why is this not working?"); | Adding these code will resolve your issue: nltk.download('punkt') nltk.download('wordnet') nltk.download('omw-1.4') | 13 | 11 |
73,721,736 | 2022-9-14 | https://stackoverflow.com/questions/73721736/what-is-the-proper-way-to-make-downstream-http-requests-inside-of-uvicorn-fastap | I have an API endpoint (FastAPI / Uvicorn). Among other things, it makes a request to yet another API for information. When I load my API with multiple concurrent requests, I begin to receive the following error: h11._util.LocalProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE In a normal environment, I would take advantage of request.session, but I understand it not to be fully thread safe. Thus, what is the proper approach to using requests within a framework such as FastAPI, where multiple threads would be using the requests library at the same time? | Instead of using requests, you could use httpx, which offers an async API as well (httpx is also suggested in FastAPI's documentation when performing async tests, as well as FastAPI/Starlette recently replaced the HTTP client on TestClient from requests to httpx). The below example is based on the one given in httpx documentation, demonstrating how to use the library for making an asynchronous HTTP(s) request, and subsequently, streaming the response back to the client. The httpx.AsyncClient() is what you can use instead of requests.Session(), which is useful when several requests are being made to the same host, as the underlying TCP connection will be reused, instead of recreating one for every single requestβhence, resulting in a significant performance improvement. Additionally, it allows you to reuse headers and other settings (such as proxies and timeout), as well as persist cookies, across requests. You spawn a Client and reuse it every time you need it. You can use await client.aclose() to explicitly close the client once you are done with it (you could do that inside a shutdown event handler). Examples and more details can also be found in this answer. Example from fastapi import FastAPI from starlette.background import BackgroundTask from fastapi.responses import StreamingResponse import httpx app = FastAPI() @app.on_event("startup") async def startup_event(): app.state.client = httpx.AsyncClient() @app.on_event('shutdown') async def shutdown_event(): await app.state.client.aclose() @app.get('/') async def home(): client = app.state.client req = client.build_request('GET', 'https://www.example.com/') r = await client.send(req, stream=True) return StreamingResponse(r.aiter_raw(), background=BackgroundTask(r.aclose)) Example (Updated) Since startup and shutdown have now been deprecated (and might be completely removed in the future), you could instead use a lifespan handler to initialize the httpx Client, as well as close the Client instance on shutdown, similar to what has been demonstrated in this answer. Starlette specifically provides an example using a lifespan handler and httpx Client in their documentation page. As described in Starlette's documentation: The lifespan has the concept of state, which is a dictionary that can be used to share the objects between the lifespan, and the requests. The state received on the requests is a shallow copy of the state received on the lifespan handler. Hence, objects added to the state in the lifespan handler can be accessed inside endpoints using request.state. The example below uses a streaming response to both communicate with the external server, as well as send the response back to the client. 
See here for more details on the async response streaming methods of httpx (i.e., aiter_bytes(), aiter_text(), aiter_lines(), etc.). If you would like to use a media_type for the StreamingResponse, you could use the one from the original response like this: media_type=r.headers['content-type']. However, as described in this answer, you need to make sure that the media_type is not set to text/plain; otherwise, the content would not stream as expected in the browser, unless you disable MIME Sniffing (have a look at the linked answer for more details and solutions). from fastapi import FastAPI, Request from contextlib import asynccontextmanager from fastapi.responses import StreamingResponse from starlette.background import BackgroundTask import httpx @asynccontextmanager async def lifespan(app: FastAPI): # Initialize the Client on startup and add it to the state async with httpx.AsyncClient() as client: yield {'client': client} # The Client closes on shutdown app = FastAPI(lifespan=lifespan) @app.get('/') async def home(request: Request): client = request.state.client req = client.build_request('GET', 'https://www.example.com') r = await client.send(req, stream=True) return StreamingResponse(r.aiter_raw(), background=BackgroundTask(r.aclose)) If, for any reason, you need to read the content chunk by chunk on server side before responding back to the client, you could do this as follows: @app.get('/') async def home(request: Request): client = request.state.client req = client.build_request('GET', 'https://www.example.com') r = await client.send(req, stream=True) async def gen(): async for chunk in r.aiter_raw(): yield chunk await r.aclose() return StreamingResponse(gen()) If you don't want to use a streaming response, but rather have httpx reading the response for you in the first place (which would store the response data to the server's RAM; hence, you should make sure there is enough space available to accommodate the data), you could use the following. Note that using r.json() should only apply to cases where the response data are in JSON format (see this answer for more details on how to return JSON data in FastAPI); otherwise, you could return a PlainTextResponse or a custom Response directly, as demonstrated below. from fastapi import Response from fastapi.responses import PlainTextResponse @app.get('/') async def home(request: Request): client = request.state.client req = client.build_request('GET', 'https://www.example.com') r = await client.send(req) content_type = r.headers.get('content-type') if content_type == 'application/json': return r.json() elif content_type == 'text/plain': return PlainTextResponse(content=r.text) else: return Response(content=r.content) Using the async API of httpx would mean that you have to define your endpoints with async def; otherwise, you would have to use the standard synchronous API (for def vs async def see this answer), and as described in this github discussion: Yes. HTTPX is intended to be thread-safe, and yes, a single client-instance across all threads will do better in terms of connection pooling, than using an instance-per-thread. You can also control the connection pool size using the limits keyword argument on the Client (see Pool limit configuration). For example: limits = httpx.Limits(max_keepalive_connections=5, max_connections=10) client = httpx.Client(limits=limits) | 9 | 17 |
73,759,718 | 2022-9-18 | https://stackoverflow.com/questions/73759718/how-to-post-json-data-from-javascript-frontend-to-fastapi-backend | I am trying to pass a value called 'ethAddress' from an input form on the client to FastAPI so that I can use it in a function to generate a matplotlib chart. I am using fetch to POST the inputted text in Charts.tsx file: fetch("http://localhost:8000/ethAddress", { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(ethAddress), }).then(fetchEthAddresses); Then I have my api.py file set up as follows: #imports app = FastAPI() @app.get("/ethAddress") async def get_images(background_tasks: BackgroundTasks, ethAddress: str): image = EthBalanceTracker.get_transactions(ethAddress) img_buf = image background_tasks.add_task(img_buf.close) headers = {'Content-Disposition': 'inline; filename="out.png"'} return Response(img_buf.getvalue(), headers=headers, media_type='image/png') @app.post("/ethAddress") async def add_ethAddress(ethAddress: str): return ethAddress To my understanding, I am passing the 'ethAddress' in the Request Body from the client to the backend using fetch POST request, where I then have access to the value that has been posted using @app.post in FastAPI. I then return that value as a string. Then I am using it in the GET route to generate the chart. I'm getting this error: INFO: 127.0.0.1:59821 - "POST /ethAddress HTTP/1.1" 422 Unprocessable Entity INFO: 127.0.0.1:59821 - "GET /ethAddress HTTP/1.1" 422 Unprocessable Entity I have also tried switching the fetch method on the client to GET instead of POST. But get the following error: TypeError: Failed to execute 'fetch' on 'Window': Request with GET/HEAD method cannot have body. | The way you defined ethAddress in your endpoint is expected as a query parameter; hence, the 422 Unprocessable Entity error. As per the documentation: When you declare other function parameters that are not part of the path parameters, they are automatically interpreted as "query" parameters. For the parameter to be interpreted as JSON, you would need to use one of the following options. Option 1 Create a Pydantic model: from pydantic import BaseModel class Item(BaseModel): eth_addr: str @app.post('/') async def add_eth_addr(item: Item): return item FastAPI will expect a body like: { "eth_addr": "some addr" } Perform HTTP request using Fetch API: fetch('/', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify({ "eth_addr": "some addr" }), }) .then(resp => resp.json()) // or, resp.text(), etc. .then(data => { console.log(data); // handle response data }) .catch(error => { console.error(error); }); Option 2 Use the Body parameter type: from fastapi import Body @app.post('/') async def add_eth_addr(eth_addr: str = Body()): return {'eth_addr': eth_addr} FastAPI will expect a body like: "some addr" Perform HTTP request using Fetch API: fetch('/', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify("some addr"), }) .then(resp => resp.json()) // or, resp.text(), etc. 
.then(data => { console.log(data); // handle response data }) .catch(error => { console.error(error); }); Option 3 Since you have a single body parameter, you might want to use the special Body parameter embed: from fastapi import Body @app.post('/') async def add_eth_addr(eth_addr: str = Body(embed=True)): return {'eth_addr': eth_addr} FastAPI will expect a body like: { "eth_addr": "some addr" } Perform HTTP request using Fetch API: fetch('/', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify({ "eth_addr": "some addr" }), }) .then(resp => resp.json()) // or, resp.text(), etc. .then(data => { console.log(data); // handle response data }) .catch(error => { console.error(error); }); Related answers, including JavaScript examples on how to post JSON data, can be found here, here, as well as here and here. This answer might also prove helpful as well, when it comes to posting both JSON data and Files in the same request. | 5 | 9 |
73,776,179 | 2022-9-19 | https://stackoverflow.com/questions/73776179/element-wise-aggregation-of-a-column-of-type-listf64-in-polars | I want to apply aggregation functions like sum, mean, etc. to a column of type List[f64] after a group_by such that I get a List[f64] entry back. Say I have: import polars as pl df = pl.DataFrame( { "Case": ["case1", "case1"], "List": [[1, 2, 3], [4, 5, 6]], } ) print(df) shape: (2, 2) ┌───────┬───────────┐ │ Case │ List │ │ --- │ --- │ │ str │ list[i64] │ ╞═══════╪═══════════╡ │ case1 │ [1, 2, 3] │ │ case1 │ [4, 5, 6] │ └───────┴───────────┘ I want to group_by Case and sum List so that I end up with: ┌───────┬───────────┐ │ Case │ List │ │ --- │ --- │ │ str │ list[i64] │ ╞═══════╪═══════════╡ │ case1 │ [5, 7, 9] │ └───────┴───────────┘ How would I best do this? Note that the length of each of the lists is 256, so indexing each of them is not a good solution. Thanks! | Note that the length of each of the lists is 256, so indexing each of them is not a good solution. If you are sure of the length of your lists ahead of time, then we can avoid the typical explode/index/group_by solution as follows: list_size = 3 ( df.group_by("Case") .agg( pl.concat_list( pl.col("List") .list.slice(n, 1) .list.first() .sum() for n in range(0, list_size) ) ) ) shape: (1, 2) ┌───────┬───────────┐ │ Case │ List │ │ --- │ --- │ │ str │ list[i64] │ ╞═══════╪═══════════╡ │ case1 │ [5, 7, 9] │ └───────┴───────────┘ How it works To see how this works, let's look at how the algorithm adds the first elements of each list. (We can extrapolate for all elements from this example.) In the first step, we use group_by to accumulate all the lists for each Case. ( df.group_by("Case") .agg( pl.concat_list("List") ) ) shape: (1, 2) ┌───────┬────────────────────────┐ │ Case │ List │ │ --- │ --- │ │ str │ list[list[i64]] │ ╞═══════╪════════════════════════╡ │ case1 │ [[1, 2, 3], [4, 5, 6]] │ └───────┴────────────────────────┘ The next step is to slice each list so that we get only the nth element of each list. In this example, we want only the first element of each list, corresponding to the 0 in slice(0, 1). Notice that the internal lists are now all only one element each. ( df.group_by("Case") .agg( pl.concat_list( pl.col("List").list.slice(0, 1) ) ) ) shape: (1, 2) ┌───────┬─────────────────┐ │ Case │ List │ │ --- │ --- │ │ str │ list[list[i64]] │ ╞═══════╪═════════════════╡ │ case1 │ [[1], [4]] │ └───────┴─────────────────┘ In the last step, we sum the individual elements: ( df.group_by("Case") .agg( pl.concat_list( pl.col("List") .list.slice(0, 1) .list.first() .sum() ) ) ) shape: (1, 2) ┌───────┬───────────┐ │ Case │ List │ │ --- │ --- │ │ str │ list[i64] │ ╞═══════╪═══════════╡ │ case1 │ [5] │ └───────┴───────────┘ To accomplish this for all n elements of each list, we simply write our expressions using a list comprehension, substituting n in our slice(n, 1). | 4 | 4 |
73,716,862 | 2022-9-14 | https://stackoverflow.com/questions/73716862/converting-string-to-datetime-polars | I have a Polars dataframe with a column of type str with the date and time: df = pl.from_repr(""" ┌─────────────────────────┐ │ EventTime │ │ --- │ │ str │ ╞═════════════════════════╡ │ 2020-03-02T13:10:42.550 │ └─────────────────────────┘ """) I want to convert this column to the polars.Datetime type. After reading this post Easily convert string column to pl.datetime in Polars, I came up with: df = df.with_columns(pl.col('EventTime').str.to_datetime("%Y-%m-%dT%H:%M:%f", strict=False)) However, the values in my column 'EventTime' are all null. Many Thanks! | You were close. You forgot the seconds component of your format specifier: ( df .with_columns( pl.col('EventTime') .str.to_datetime( format="%Y-%m-%dT%H:%M:%S%.f", strict=False) .alias('parsed EventTime') ) ) shape: (1, 2) ┌─────────────────────────┬─────────────────────────┐ │ EventTime │ parsed EventTime │ │ --- │ --- │ │ str │ datetime[ns] │ ╞═════════════════════════╪═════════════════════════╡ │ 2020-03-02T13:10:42.550 │ 2020-03-02 13:10:42.550 │ └─────────────────────────┴─────────────────────────┘ BTW, the format you are using is standard, so you can eliminate the format specifier altogether. ( df .with_columns( pl.col('EventTime') .str.to_datetime() .alias('parsed EventTime') ) ) shape: (1, 2) ┌─────────────────────────┬─────────────────────────┐ │ EventTime │ parsed EventTime │ │ --- │ --- │ │ str │ datetime[μs] │ ╞═════════════════════════╪═════════════════════════╡ │ 2020-03-02T13:10:42.550 │ 2020-03-02 13:10:42.550 │ └─────────────────────────┴─────────────────────────┘ Edit And what if I would like to ignore the milliseconds (the "%.f")? If I just leave it out, it can't interpret the dataframe properly. We need to allow Polars to parse the date string according to the actual format of the string. That said, after the parsing, we can use dt.truncate to throw away the fractional part. ( df .with_columns( pl.col('EventTime') .str.to_datetime() .dt.truncate('1s') .alias('parsed EventTime') ) ) shape: (1, 2) ┌─────────────────────────┬─────────────────────┐ │ EventTime │ parsed EventTime │ │ --- │ --- │ │ str │ datetime[μs] │ ╞═════════════════════════╪═════════════════════╡ │ 2020-03-02T13:10:42.550 │ 2020-03-02 13:10:42 │ └─────────────────────────┴─────────────────────┘ | 4 | 8 |
73,733,368 | 2022-9-15 | https://stackoverflow.com/questions/73733368/how-to-set-a-minimal-logging-level-with-loguru | I would like to use a different logging level in development and production. To do so, I need early in my program to set the minimal level for logs to be triggered. The default is to output all severities: from loguru import logger as log log.debug("a debug log") log.error("an error log") # output # 2022-09-15 16:51:23.325 | DEBUG | __main__:<module>:3 - a debug log # 2022-09-15 16:51:23.327 | ERROR | __main__:<module>:4 - an error log There is a Changing the level of an existing handler section in the documentation, that states among others that Once a handler has been added, it is actually not possible to update it. (...) The most straightforward workaround is to remove() your handler and then re-add() it with the updated level parameter. My problem is that I have not added anything, so there is nothing to remove. I also cannot modify log. So what should I do? | Like so: import sys from loguru import logger as log log.remove() #remove the old handler. Else, the old one will work along with the new one you've added below' log.add(sys.stderr, level="INFO") log.debug("debug message") log.info("info message") log.warning("warning message") log.error("error message") You can use sys.stdout or other file object, in place of sys.stderr. More info in this related GitHub thread. | 9 | 19 |
73,763,352 | 2022-9-18 | https://stackoverflow.com/questions/73763352/how-do-i-type-hint-a-variable-whose-value-is-itself-a-type-hint | I have a function one of whose arguments is expected to be a type hint: something like typing.List, or typing.List[int], or even just int. Anything you would reasonably expect to see as a type annotation to an ordinary field. What's the correct type hint to put on this argument? (The context for this is that I'm writing a utility that works on classes that define fields using type annotations, a bit like the dataclass decorator.) | Answer for Python 3.10: Almost complete but less readable answer: type | types.GenericAlias | types.UnionType | typing._BaseGenericAlias | typing._SpecialForm Here are the possibilities of all types of annotations I can think of: Type object itself, such as int, list, etc. Corresponds to type. Type hinting generics in standard collections, such as list[int]. Corresponds to types.GenericAlias. Types union, such as int | float (but typing.Union[int, float] is not, it corresponds to the next item). Corresponds to types.UnionType. Generic concrete collections in typing, such as typing.List, typing.List[int], etc. Here I choose the common base class typing._BaseGenericAlias that first appeared in their inheritance chain. (In fact, not only these, but almost all subscriptible annotation classes in typing inherit from this class, including typing.Literal[True], typing.Union[int, float], etc.) Special typing primitives in typing, such as typing.Any, typing.NoReturn, etc. Corresponds to typing._SpecialForm. It should be noted that the last two types begin with a single underline, indicating that they are internal types, and should not be imported and used from typing. However, they should be indispensable if you insist on completely covering all type annotations. Python 3.11 update: I noticed that typing.Any has changed from typing._SpecialForm to typing._AnyMeta (to allow inheritance of it), but it is now a type (which we have already included) and we don't need to add anything new to it. typing.Self and typing.LiteralString have been added in 3.11, but they are both typing._SpecialForm, so there is also no need for anything new. Python 3.12 update: We now have a type statement, which allows us to create typing.TypeAliasType object, this seems to be the only new content. | 6 | 8 |
73,753,000 | 2022-9-17 | https://stackoverflow.com/questions/73753000/how-to-address-current-state-of-list-comprehension-in-its-if-condition | I would like to turn the loop over the items in L in following code: L = [1,2,5,2,1,1,3,4] L_unique = [] for item in L: if item not in L_unique: L_unique.append(item) to a list comprehension like: L_unique = [ item for item in L if item not in ???self??? ] Is this possible in Python? And if it is possible, how can it be done? | It's possible. Here's a hack that does it, but I wouldn't use this in practice as it's nasty and depends on implementation details that might change and I believe it's not thread-safe, either. Just to demonstrate that it's possible. You're mostly right with your "somewhere must exist an object storing the current state of the comprehension" (although it doesn't necessarily have to be a Python list object, Python could store the elements some other way and create the list object only afterwards). We can find the new list object in the objects tracked by garbage collection. Collect the IDs of lists before the comprehension's list is created, then look again and take the one that wasn't there before. Demo: import gc L = [1,2,5,2,1,1,3,4] L_unique = [ item # the hack to get self for ids in ({id(o) for o in gc.get_objects() if type(o) is list},) for self in (o for o in gc.get_objects() if type(o) is list and id(o) not in ids) for item in L if item not in self ] print(L_unique) Output (Attempt This Online!): [1, 2, 5, 3, 4] Tested and worked in several versions from Python 3.7 to Python 3.11. For an alternative with the exact style you asked, only replacing your ???self???, see Mechanic Pig's updated answer. | 4 | 12 |
73,778,532 | 2022-9-19 | https://stackoverflow.com/questions/73778532/aws-glue-job-an-error-occurred-while-calling-getcatalogsource-none-get | I was using Password/Username in my aws glue conenctions and now I switched to Secret Manager. Now I get this error when I run my etl job : An error occurred while calling o89.getCatalogSource. None.get Even tho the connections and crawlers works : The Connection Image. (I added the connection to the job details) The Crawlers Image. This example of the etl job that used to work before : import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.job import Job args = getResolvedOptions(sys.argv, ["JOB_NAME"]) sc = SparkContext() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) job.init(args["JOB_NAME"], args) # Script generated for node PostgreSQL PostgreSQL_node1663615620851 = glueContext.create_dynamic_frame.from_catalog( database="pg-db", table_name="postgres_schema_table", transformation_ctx="PostgreSQL_node1663615620851", ) this what I see as erros in the logs : 2022-09-19 19:28:19,322 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(73)): Error from Python:Traceback (most recent call last): File "/tmp/FC 2 job.py", line 19, in <module> transformation_ctx="PostgreSQL_node1663615620851", File "/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py", line 629, in from_catalog return self._glue_context.create_dynamic_frame_from_catalog(db, table_name, redshift_tmp_dir, transformation_ctx, push_down_predicate, additional_options, catalog_id, **kwargs) File "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py", line 186, in create_dynamic_frame_from_catalog makeOptions(self._sc, additional_options), catalog_id), File "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__ answer, self.gateway_client, self.target_id, self.name) File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco return f(*a, **kw) File "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value format(target_id, ".", name), value) py4j.protocol.Py4JJavaError: An error occurred while calling o89.getCatalogSource. 
: java.util.NoSuchElementException: None.get at scala.None$.get(Option.scala:349) at scala.None$.get(Option.scala:347) at com.amazonaws.services.glue.util.DataCatalogWrapper.$anonfun$getJDBCConf$1(DataCatalogWrapper.scala:208) at scala.util.Try$.apply(Try.scala:209) at com.amazonaws.services.glue.util.DataCatalogWrapper.getJDBCConf(DataCatalogWrapper.scala:199) at com.amazonaws.services.glue.GlueContext.getGlueNativeJDBCSource(GlueContext.scala:485) at com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:320) at com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:185) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:750) and also this : 2022-09-19 19:28:19,348 ERROR [main] glueexceptionanalysis.GlueExceptionAnalysisListener (Logging.scala:logError(9)): [Glue Exception Analysis] { "Event": "GlueETLJobExceptionEvent", "Timestamp": 1663615699344, "Failure Reason": "Traceback (most recent call last):\n File \"/tmp/FC 2 job.py\", line 19, in <module>\n transformation_ctx=\"PostgreSQL_node1663615620851\",\n File \"/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py\", line 629, in from_catalog\n return self._glue_context.create_dynamic_frame_from_catalog(db, table_name, redshift_tmp_dir, transformation_ctx, push_down_predicate, additional_options, catalog_id, **kwargs)\n File \"/opt/amazon/lib/python3.6/site-packages/awsglue/context.py\", line 186, in create_dynamic_frame_from_catalog\n makeOptions(self._sc, additional_options), catalog_id),\n File \"/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py\", line 1305, in __call__\n answer, self.gateway_client, self.target_id, self.name)\n File \"/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py\", line 111, in deco\n return f(*a, **kw)\n File \"/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py\", line 328, in get_return_value\n format(target_id, \".\", name), value)\npy4j.protocol.Py4JJavaError: An error occurred while calling o89.getCatalogSource.\n: java.util.NoSuchElementException: None.get\n\tat scala.None$.get(Option.scala:349)\n\tat scala.None$.get(Option.scala:347)\n\tat com.amazonaws.services.glue.util.DataCatalogWrapper.$anonfun$getJDBCConf$1(DataCatalogWrapper.scala:208)\n\tat scala.util.Try$.apply(Try.scala:209)\n\tat com.amazonaws.services.glue.util.DataCatalogWrapper.getJDBCConf(DataCatalogWrapper.scala:199)\n\tat com.amazonaws.services.glue.GlueContext.getGlueNativeJDBCSource(GlueContext.scala:485)\n\tat com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:320)\n\tat com.amazonaws.services.glue.GlueContext.getCatalogSource(GlueContext.scala:185)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat 
java.lang.reflect.Method.invoke(Method.java:498)\n\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\n\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)\n\tat py4j.Gateway.invoke(Gateway.java:282)\n\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\n\tat py4j.commands.CallCommand.execute(CallCommand.java:79)\n\tat py4j.GatewayConnection.run(GatewayConnection.java:238)\n\tat java.lang.Thread.run(Thread.java:750)\n", "Stack Trace": [ { "Declaring Class": "get_return_value", "Method Name": "format(target_id, \".\", name), value)", "File Name": "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", "Line Number": 328 }, { "Declaring Class": "deco", "Method Name": "return f(*a, **kw)", "File Name": "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", "Line Number": 111 }, { "Declaring Class": "__call__", "Method Name": "answer, self.gateway_client, self.target_id, self.name)", "File Name": "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", "Line Number": 1305 }, { "Declaring Class": "create_dynamic_frame_from_catalog", "Method Name": "makeOptions(self._sc, additional_options), catalog_id),", "File Name": "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py", "Line Number": 186 }, { "Declaring Class": "from_catalog", "Method Name": "return self._glue_context.create_dynamic_frame_from_catalog(db, table_name, redshift_tmp_dir, transformation_ctx, push_down_predicate, additional_options, catalog_id, **kwargs)", "File Name": "/opt/amazon/lib/python3.6/site-packages/awsglue/dynamicframe.py", "Line Number": 629 }, { "Declaring Class": "<module>", "Method Name": "transformation_ctx=\"PostgreSQL_node1663615620851\",", "File Name": "/tmp/FC 2 job.py", "Line Number": 19 } ], "Last Executed Line number": 19, "script": "FC 2 job.py" } | Update: It appears that this issue has fixed by AWS. I did not see any official announcement on this, however I noticed a few months ago that my Python Glue jobs started running successfully after updating the Glue connections to use Secrets Manager. I would try this again and see if it works for you now. Old Answer: I ran into the same issue. I asked about it on AWS re:Post and was told that the reason why this exception gets thrown is that getCatalogSource and getCatalogSink do not yet support connecting with Secrets Manager. The workaround is to either use boto3 to retrieve credentials from Secrets Manager or connect with username/password. | 4 | 4 |
73,750,930 | 2022-9-16 | https://stackoverflow.com/questions/73750930/how-can-i-ensure-that-my-quilt-data-package-displays-relevant-information-by-def | When viewing a Quilt data package in the catalog view, how do I ensure that the relevant information and data to my users bubbles up from N-depth of folders/files to the data package landing view? | [Disclaimer: I originally wrote this answer when I was working at Quilt Data] Create a file called quilt_summarize.json [Reference] which is a configuration file that renders one or more data-package elements in both Bucket view and Packages view. The contents of quilt_summarize.json are a JSON array of files that you wish to preview in the catalog, from any depth in your data package. Each file may be represented as a string or, if you wish to provide more configuration, as an object. You can preview all sorts of different files out-of-the-box including JSON, CSV, TXT, MD, XLS, XLSX, PARQUET, TSV, visualization libraries (including Vega, Altair, eCharts, and Voila), and iPython notebooks. Here's the syntax: // quilt_summarize.json [ "file1.json", "root_folder/second_level/third_level/file2.csv", "notebooks/file3.ipynb" ] You can also define multi-column outputs: [ "file1.json", [{ "path": "file2.csv", "width": "200px" }, { "path": "file3.ipynb", "title": "Scientific notebook", "description": "[See docs](https://docs.com)" }] ] which will render the file file1.json first (and span the whole width of the screen), then the viewport will be split into N columns (based on N-length of the JSON array) in the widths defined in the attributes. | 5 | 6 |
73,718,577 | 2022-9-14 | https://stackoverflow.com/questions/73718577/updating-multiple-pydantic-fields-that-are-validated-together | How do you update multiple properties on a pydantic model that are validated together and dependent upon each other? Here is a contrived but simple example: from pydantic import BaseModel, root_validator class Example(BaseModel): a: int b: int @root_validator def test(cls, values): if values['a'] != values['b']: raise ValueError('a and b must be equal') return values class Config: validate_assignment = True example = Example(a=1, b=1) example.a = 2 # <-- error raised here because a is 2 and b is still 1 example.b = 2 # <-- don't get a chance to do this Error: ValidationError: 1 validation error for Example __root__ a and b must be equal (type=value_error) Both a and b having a value of 2 is valid, but they can't be updated one at a time without triggering the validation error. Is there a way to put the validation on hold until both are set? Or a way to somehow update both of them at the same time? Thanks! | I found a couple solutions that works well for my use case. manually triggering the validation and then updating the __dict__ of the pydantic instance directly if it passes -- see update method a context manager that delays validation until after the context exits -- see delay_validation method from pydantic import BaseModel, root_validator from contextlib import contextmanager import copy class Example(BaseModel): a: int b: int @root_validator def enforce_equal(cls, values): if values['a'] != values['b']: raise ValueError('a and b must be equal') return values class Config: validate_assignment = True def update(self, **kwargs): self.__class__.validate(self.__dict__ | kwargs) self.__dict__.update(kwargs) @contextmanager def delay_validation(self): original_dict = copy.deepcopy(self.__dict__) self.__config__.validate_assignment = False try: yield finally: self.__config__.validate_assignment = True try: self.__class__.validate(self.__dict__) except: self.__dict__.update(original_dict) raise example = Example(a=1, b=1) # ================== This didn't work: =================== # example.a = 2 # <-- error raised here because a is 2 and b is still 1 # example.b = 2 # <-- don't get a chance to do this # ==================== update method: ==================== # No error raised example.update(a=2, b=2) # Error raised as expected - a and b must be equal example.update(a=3, b=4) # Error raised as expected - a and b must be equal example.update(a=5) # # =============== delay validation method: =============== # No error raised with example.delay_validation(): example.a = 2 example.b = 2 # Error raised as expected - a and b must be equal with example.delay_validation(): example.a = 3 example.b = 4 # Error raised as expected - a and b must be equal with example.delay_validation(): example.a = 5 | 4 | 4 |
73,724,304 | 2022-9-15 | https://stackoverflow.com/questions/73724304/how-to-display-a-bytes-type-image-in-html-jinja2-template-using-fastapi | I have a FastAPI app that gets an image from an API. This image is stored in a variable with type: bytes. I want to display the image in HTML/Jinja2 template (without having to download it). I followed many tutorials but couldn't find the solution. Here is what I came up with so far: @app.get("/{id}") async def root(request: Request, id: str): picture = await get_online_person() data = base64.b64encode(picture) # convert to base64 as bytes data = data.decode() # convert bytes to string # str_equivalent_image = base64.b64encode(img_buffer.getvalue()).decode() img_tag = '<img src="data:image/png;base64,{}">'.format(data) return templates.TemplateResponse( "index.html", {"request": request, "img": img_tag} ) All I get in the HTML is this: (as text on the page, not from source code) <img src="data:image/png;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQICAQECAQEBAgICAgICAgICAQICAgICAgICAgL/2wBDAQEBAQEBAQEBAQECAQEBAgICAgI CAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgL/wgARCAQABAADASIAA hEBAxEB/8QAHgAAAQUBAQEBAQAAAAAAAAAABQIDBAYHAQgACQr/xAAcAQACAwEBAQEAAAAAAAAAAAACAwABBAUGBwj/2gAMAwEAAhADEAAAAfEpwSR+a+9IPR3c7347iwscmWyYchEIJjn+MbJj/c4FFbbb9J5.................... Note: For people who are marking my question to a duplicate talking about urllib, I cannot use urllib because the image I'm getting is from ana API, and using their direct url will result in a 403 Forbidden, so I should use their python API to get the image. | On server sideβas shown in the last section of this answerβyou should return only the base64-encoded string in the context of the TemplateResponse (without using the <img> tag, as shown in your question): # ... base64_encoded_image = base64.b64encode(image_bytes).decode("utf-8") return templates.TemplateResponse("index.html", {"request": request, "myImage": base64_encoded_image}) On client side, you could display the image as follows: <img src="data:image/jpeg;base64,{{ myImage | safe }}"> Alternative approaches can be found here. | 4 | 4 |
73,745,607 | 2022-9-16 | https://stackoverflow.com/questions/73745607/how-to-pass-arguments-to-huggingface-tokenclassificationpipelines-tokenizer | I've finetuned a Huggingface BERT model for Named Entity Recognition. Everything is working as it should. Now I've setup a pipeline for token classification in order to predict entities out the text I provide. Even this is working fine. I know that BERT models are supposed to be fed with sentences less than 512 tokens long. Since I have texts longer than that, I split the sentences in shorter chunks and I store the chunks in a list chunked_sentences. To make it brief my tokenizer for training looks like this: from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') tokenized_inputs = tokenizer(chunked_sentences, is_split_into_words=True, padding='longest') I pad everything to the longest sequence and avoid truncation so that if a sentence is tokenized and goes beyond 512 tokens I receive a warning that I won't be able to train. This way I know that I have to split the sentences in smaller chunks. During inference I wanted to achieve the same thing, but I haven't found a way to pass arguments to the pipeline's tokenizer. The code looks like this: from transformers import pipeline ner_pipeline = pipeline('token-classification', model=model_folder, tokenizer=model_folder) out = ner_pipeline(text, aggregation_strategy='simple') I'm pretty sure that if a sentence is tokenized and surpasses the 512 tokens, the extra tokens will be truncated and I'll get no warning. I want to avoid this. I tried passing arguments to the tokenizer like this: tokenizer_kwargs = {'padding': 'longest'} out = ner_pipeline(text, aggregation_strategy='simple', **tokenizer_kwargs) I got that idea from this answer, but it seems not to be working, since I get the following error: Traceback (most recent call last): File "...\inference.py", line 42, in <module> out = ner_pipeline(text, aggregation_strategy='simple', **tokenizer_kwargs) File "...\venv\lib\site-packages\transformers\pipelines\token_classification.py", line 191, in __call__ return super().__call__(inputs, **kwargs) File "...\venv\lib\site-packages\transformers\pipelines\base.py", line 1027, in __call__ preprocess_params, forward_params, postprocess_params = self._sanitize_parameters(**kwargs) TypeError: TokenClassificationPipeline._sanitize_parameters() got an unexpected keyword argument 'padding' Process finished with exit code 1 Any ideas? Thanks. | I took a closer look at https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/pipelines/token_classification.py#L86. It seems you can override preprocess() to disable truncation and add padding to longest. from transformers import TokenClassificationPipeline class MyTokenClassificationPipeline(TokenClassificationPipeline): def preprocess(self, sentence, offset_mapping=None): truncation = False padding = 'longest' model_inputs = self.tokenizer( sentence, return_tensors=self.framework, truncation=truncation, padding=padding, return_special_tokens_mask=True, return_offsets_mapping=self.tokenizer.is_fast, ) if offset_mapping: model_inputs["offset_mapping"] = offset_mapping model_inputs["sentence"] = sentence return model_inputs ner_pipeline = MyTokenClassificationPipeline(model=model_folder, tokenizer=model_folder) out = ner_pipeline(text, aggregation_strategy='simple') | 5 | 2 |
73,749,897 | 2022-9-16 | https://stackoverflow.com/questions/73749897/imports-are-incorrectly-sorted-and-or-formatted-vs-code-python | In a few of my beginner projects this strange red line underscoring one or more of my imports keeps appearing almost randomly and I can't figure out why. As the module is working perfectly fine it shouldn't have something to do regarding which Folder I open VS Code in as it can get resolved, so sys.path should also have the right path, as far as I'm concerned. Sometimes it works when I switch my imports around but often it just underscores a single import or switching them around doesn't do anything. Also when I try to let VS Code sort them with isort, nothing happens and nothing had ever happened. | Edit: I realized that recently the isort extension from Microsoft was automatically added to my extensions and this has caused the annoying error to start showing. It may be that the extension is conflicting somehow with the isort library installed in your venv. The extension isn't needed, so I've just disabled it and no longer encounter this error. It seems like this error started happening after I recently updated my VS Code to 1.73.0 (Insiders). I was able to get around it by splitting up my imports so that they don't get auto-formatted to be on multiple lines. Here's an example: Before the "fix", notice the squiggly red line with the annoying error: After the "fix", no more squiggly red line: | 8 | 10 |
73,785,052 | 2022-9-20 | https://stackoverflow.com/questions/73785052/working-with-picture-in-alternatecontent-tag | I need to move an element from one document to another by using python-docx. The element is AlternateContent which represents shapes and figures in Office Word, the issue here is that one of the elements contains an image like this: <AlternateContent> <Choice Requires="wpc"> <drawing> <inline distT="0" distB="0" distL="0" distR="0" wp14:anchorId="0DCE320C" wp14:editId="0DCE320D"> <extent cx="5826587" cy="2494357" /> <effectExtent l="0" t="0" r="0" b="1270" /> <docPr id="1108" name="Zeichenbereich 5" /> <cNvGraphicFramePr> <graphicFrameLocks xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main" noChangeAspect="1" /> </cNvGraphicFramePr> <graphic xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main"> <graphicData uri="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas"> <wpc> <pic xmlns:pic="http://schemas.openxmlformats.org/drawingml/2006/picture"> <nvPicPr> <cNvPr id="687" name="Picture 28" /> <cNvPicPr> <picLocks noChangeAspect="1" noChangeArrowheads="1" /> </cNvPicPr> </nvPicPr> <blipFill> <blip r:embed="rId20"> <extLst> <ext uri="{28A0092B-C50C-407E-A947-70E740481C1C}"> <useLocalDpi xmlns:a14="http://schemas.microsoft.com/office/drawing/2010/main" val="0" /> </ext> </extLst> </blip> <srcRect /> <stretch> <fillRect /> </stretch> </blipFill> </pic> </wpc> </graphicData> </graphic> </inline> </drawing> </Choice> </AlternateContent> What I did is extract the image by getting its rid from r:embed and then save it to the disk, after I re-add the image using add_picture() from the Run class, sadly this process cannot be achieved because from above example the <pic> tag is not included in a run. So my question is how I can save the element AlternateContent into python object then re-add it to a Word document? | Because of the fact that this functionality is not fully supported through python-docx API and processing images is kind of complicated -because of the multiple parts that must be handled (ImagePart, Relationship, rId)- the work had to be done at a low level by going into lxml, in these steps: Save the images into the disk when reading the file. Get rid of the image to add it separately. Build a pic:pic element using lxml functions (SubElement and Element) Repeat the process to handle all the pictures. for image_elem in list_of_images: image_path = image_elem("image_path") rel_id, _ = _run.part.get_or_add_image(image_path) image_name = image_path.split("\\")[-1] add_image_to_shape(shape_element, rel_id, image_name) The function add_image_to_shape is done like this: shape = [elem for elem in shape_element.iterdescendants(tag='{http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas}wpc')][0] image = etree.SubElement(shape, 'pic') nvPicPr = etree.SubElement(image, 'nvPicPr') cNvPr = etree.SubElement(nvPicPr, 'cNvPr') cNvPr.set('id', '1') cNvPr.set('descr', image_name) cNvPicPr = etree.SubElement(nvPicPr, 'cNvPicPr') picLocks = etree.SubElement(cNvPicPr, 'picLocks') picLocks.set('noChangeAspect', '1') picLocks.set('noChangeArrowheads', '1') blipFill = etree.SubElement(image, 'blipFill') blip = etree.SubElement(blipFill, 'blip') blip.set('embed', rel_id) srcRect = etree.SubElement(blipFill, 'srcRect') stretch = etree.SubElement(blipFill, 'stretch') fillRect = etree.SubElement(stretch, 'fillRect') | 4 | 1 |
73,758,140 | 2022-9-17 | https://stackoverflow.com/questions/73758140/python-macos-loop-files-get-file-info | I am trying to loop through all mp3 files in my directory in MacOS Monterrey and for every iteration get the file's more info attributes, like Title, Duration, Authors etc. I found a post saying use xattr, but when i create a variable with xattr it doesn't show any properties or attributes of the files. This is in Python 3.9 with xattr package import os import xattr directory = os.getcwd() for filename in os.listdir(directory): f = os.path.join(directory, filename) # checking if it is a file if os.path.isfile(f): print(f) x = xattr.xattr(f) xs = x.items() | xattr is not reading mp3 metadata or tags, it is for reading metadata that is stored for the particular file to the filesystem itself, not the metadata/tags thats stored inside the file. In order to get the data you need, you need to read the mp3 file itself with some library that supports reading ID3 of the file, for example: eyed3. Here's a small example: from pathlib import Path import eyed3 root_directory = Path(".") for filename in root_directory.rglob("*.mp3"): mp3data = eyed3.load(filename) if mp3data.tag != None: print(mp3data.tag.artist) print(mp3data.info.time_secs) | 4 | 5 |
73,793,363 | 2022-9-20 | https://stackoverflow.com/questions/73793363/how-can-i-check-if-a-protobuf-message-has-a-field-defined | I'm working with a protobuf message that has some of the fields marked for deprecation with [deprecated = true]. To my understanding the field can still be used by some part of the code (maybe with a warning). I want to make sure that my code is still supporting this field with the possibility of handling the case when it actually gets deprecated. Was thinking HasField gives me that tool but sounds like HasField only check if an existing field in a message has been set or not. In my case my proto message looks roughly like this: message Message1 { map<string, Message2> message_collection = 1; } message Message2 { bool some_var = 1 [deprecated = true]; } I was hoping for a piece of code like this: my_message = Message1() for mystr, mymessage2 in my_message.message_collection.items(): if mymessage2.HasField("some_var"): mymessage2.some_var = True How can I check if some_var in Message2 is still a defined field or not? | Deprecated fields have no impact on code, as per the docs. In your case, it looks like you are trying to evaluate if the class Message2 has a field defined or not? Before going forward, you need add optional as an option in your proto to allow for HasField() to work: my_message.proto syntax = "proto3"; message Message1 { map<string, Message2> message_collection = 1; } message Message2 { optional bool some_var = 1 [deprecated = true]; } Your python code should look like the following: main.py from sample_message_pb2 import Message1, Message2 # Create your object my_message = Message1( message_collection={ 'key_1': Message2(some_var=True), 'key_2': Message2(some_var=False), 'key_3': Message2() } ) # Iterate over your object for mystr, mymessage2 in my_message.message_collection.items(): # If the value is assigned or not if mymessage2.HasField(field_name): print(f'{mystr}: {field_name} = {getattr(mymessage2, field_name)}') # Reassign the value mymessage2.some_var = True # If the field exists or not field_names = mymessage2.ListFields() if 'pickles' in field_names: print('Uh oh, here comes Mr. Pickles') I dug around and could not find if it's possible to surface the deprecation warning to your python code. Looks like it's a limitation presently. | 6 | 3 |
73,721,599 | 2022-9-14 | https://stackoverflow.com/questions/73721599/in-the-conda-environment-yml-file-how-do-i-append-to-existing-variables-without | This is a follow up question for this answer. For a conda environment specification file environment.yml, if the variable that I am defining is PATH, for example, how can I prepend or append to it instead of just overwriting it? Is the following correct? name: foo channels: - defaults dependencies: - python variables: MY_VAR: something OTHER_VAR: ohhhhya PATH: /some/path:$PATH | It dependes on whether you are using windows or linux. A look at the source code of the environment init code reveals that conda itself simply executes bash (linux) or cmd.exe (win) calls: linux: yield from (f"export {envvar}='{value}'" for envvar, value in sorted(env_vars.items())) windows: yield from (f'@SET "{envvar}={value}"' for envvar, value in sorted(env_vars.items())) So make sure that you are using the correct syntax for your variable. In case of linux, that would be variables: MY_VAR: something OTHER_VAR: ohhhhya PATH: /some/path $PATH AFAIK windows uses ; to delimit entries, so you would probably have to do this (untested): variables: MY_VAR: something OTHER_VAR: ohhhhya PATH: /some/path;%PATH% | 5 | 2 |
73,713,072 | 2022-9-14 | https://stackoverflow.com/questions/73713072/solving-sylvester-equations-in-pytorch | I'm trying to solve a Sylvester matrix equation of the form AX + XB = C From what I've seen, these equations are usually solved with the Bartels-Stewart algorithm taking successive Schur decompositions. I'm aware scipy.linalg already has a solve_sylvester function, but I'm integrating the solution to the Sylvester equation into a neural network, so I need a way to calculate gradients to make A, B, and C learnable. Currently, I'm just solving a linear system with torch.linalg.solve using the Kronecker product and vectorization trick, but this has terrible runtime complexity. I haven't found any PyTorch support for Sylvester equations, let alone Schur decompositions, but before I try to implement Barters-Stewart on the GPU, is there a simpler way to find the gradients? | Initially I wrote a solution that would give complex X based on Bartels-Stewart algorithm for the m=n case. I had some problems because the eigenvector matrix is not accurate enough. Also the real part gives the real solution, and the imaginary part must be a solution for AX - XB = 0 import torch def sylvester(A, B, C, X=None): m = B.shape[-1]; n = A.shape[-1]; R, U = torch.linalg.eig(A) S, V = torch.linalg.eig(B) F = torch.linalg.solve(U, (C + 0j) @ V) W = R[..., :, None] - S[..., None, :] Y = F / W X = U[...,:n,:n] @ Y[...,:n,:m] @ torch.linalg.inv(V)[...,:m,:m] return X.real if all(torch.isreal(x.flatten()[0]) for x in [A, B, C]) else X As can be verified on the GPU with device='cuda' # Try different dimensions for batch_size, M, N in [(1, 4, 4), (20, 16, 16), (6, 13, 17), (11, 29, 23)]: print(batch_size, (M, N)) A = torch.randn((batch_size, N, N), dtype=torch.float64, device=device, requires_grad=True) B = torch.randn((batch_size, M, M), dtype=torch.float64, device=device, requires_grad=True) X = torch.randn((batch_size, N, M), dtype=torch.float64, device=device, requires_grad=True) C = A @ X - X @ B X_ = sylvester(A, B, C) C_ = (A) @ X_ - X_ @ (B) print(torch.max(abs(C - C_))) X.sum().backward() A faster algorithm, but inaccurate in the current pytorch version is def sylvester_of_the_future(A, B, C): def h(V): return V.transpose(-1,-2).conj() m = B.shape[-1]; n = A.shape[-1]; R, U = torch.linalg.eig(A) S, V = torch.linalg.eig(B) F = h(U) @ (C + 0j) @ V W = R[..., :, None] - S[..., None, :] Y = F / W X = U[...,:n,:n] @ Y[...,:n,:m] @ h(V)[...,:m,:m] return X.real if all(torch.isreal(x.flatten()[0]) for x in [A, B, C]) else X I will leave it here maybe in the future it will work properly. | 4 | 4 |
73,778,158 | 2022-9-19 | https://stackoverflow.com/questions/73778158/how-can-i-type-hint-the-init-params-are-the-same-as-fields-in-a-dataclass | Let us say I have a custom use case, and I need to dynamically create or define the __init__ method for a dataclass. For exampel, say I will need to decorate it like @dataclass(init=False) and then modify __init__() method to taking keyword arguments, like **kwargs. However, in the kwargs object, I only check for presence of known dataclass fields, and set these attributes accordingly (example below) I would like to type hint to my IDE (PyCharm) that the modified __init__ only accepts listed dataclass fields as parameters or keyword arguments. I am unsure if there is a way to approach this, using typing library or otherwise. I know that PY3.11 has dataclass transforms planned, which may or may not do what I am looking for (my gut feeling is no). Here is a sample code I was playing around with, which is a basic case which illustrates problem I am having: from dataclasses import dataclass # get value from input source (can be a file or anything else) def get_value_from_src(_name: str, tp: type): return tp() # dummy value @dataclass class MyClass: foo: str apple: int def __init__(self, **kwargs): for name, tp in self.__annotations__.items(): if name in kwargs: value = kwargs[name] else: # here is where I would normally have the logic # to read the value from another input source value = get_value_from_src(name, tp) if value is None: raise ValueError setattr(self, name, value) c = MyClass(apple=None) print(c) c = MyClass(foo='bar', # here, I would like to auto-complete the name # when I start typing `apple` ) print(c) If we assume that number or names of the fields are not fixed, I am curious if there could be a generic approach which would basically say to type checkers, "the __init__ of this class accepts only (optional) keyword arguments that match up on the fields defined in the dataclass itself". Addendums, based on notes in comments below: Passing @dataclass(kw_only=True) won't work because imagine I am writing this for a library, and need to support Python 3.7+. Also, kw_only has no effect when a custom __init__() is implemented, as in this case. The above is just a stub __init__ method. it could have more complex logic, such as setting attributes based on a file source for example. basically the above is just a sample implementation of a larger use case. I can't update each field to foo: Optional[str] = None because that part would be implemented in user code, which I would not have any control over. Also, annotating it in this way doesn't make sense when you know a custom __init__() method will be generated for you - meaning not by dataclasses. Lastly, setting a default for each field just so that the class can be instantiated without arguments, like MyClass(), don't seem like the best idea to me. It would not work to let dataclasses auto-generate an __init__, and instead implement a __post_init__(). This would not work because I need to be able to construct the class without arguments, like MyClass(), as the field values will be set from another input source (think local file or elsewhere); this means that all fields would be required, so annotating them as Optional would be fallacious in this case. 
I still need to be able to support user to enter optional keyword arguments, but these **kwargs will always match up with dataclass field names, and so I desire some way for auto-completion to work with my IDE (PyCharm) Hope this post clarifies the expectations and desired result. If there are any questions or anything that is a bit vague, please let me know. | What you are describing is impossible in theory and unlikely to be viable in practice. TL;DR Type checkers don't run your code, they just read it. A dynamic type annotation is a contradiction in terms. Theory As I am sure you know, the term static type checker is not coincidental. A static type checker is not executing the code your write. It just parses it and infers types according to it's own internal logic by applying certain rules to a graph that it derives from your code. This is important because unlike some other languages, Python is dynamically typed, which as you know means that the type of a "thing" (variable) can completely change at any point. In general, there is theoretically no way of knowing the type of all variables in your code, without actually stepping through the entire algorithm, which is to say running the code. As a silly but illustrative example, you could decide to put the name of a type into a text file to be read at runtime and then used to annotate some variable in your code. Could you do that with valid Python code and typing? Sure. But I think it is beyond clear, that static type checkers will never know the type of that variable. Why your proposition won't work Abstracting away all the dataclass stuff and the possible logic inside your __init__ method, what you are asking boils down to the following. "I want to define a method (__init__), but the types of its parameters will only be known at runtime." Why am I claiming that? I mean, you do annotate the types of the class' attributes, right? So there you have the types! Sure, but these have -- in general -- nothing whatsoever to do with the arguments you could pass to the __init__ method, as you yourself point out. You want the __init__ method to accept arbitrary keyword-arguments. Yet you also want a static type checker to infer which types are allowed/expected there. To connect the two (attribute types and method parameter types), you could of course write some kind of logic. You could even implement it in a way that enforces adherence to those types. That logic could read the type annotations of the class attributes, match up the **kwargs and raise TypeError if one of them doesn't match up. This is entirely possible and you almost implemented that already in your example code. But this only works at runtime! Again, a static type checker has no way to infer that, especially since your desired class is supposed to just be a base class and any descendant can introduce its own attributes/types at any point. But dataclasses work, don't they? You could argue that this dynamic way of annotating the __init__ method works with dataclasses. So why are they so different? Why are they correctly inferred, but your proposed code can't? The answer is, they aren't. Even dataclasses don't have any magical way of telling a static type checker which parameter types the __init__ method is to expect, even though they do annotate them, when they dynamically construct the method in _init_fn. The only reason mypy correctly infers those types, is because they implemented a separate plugin just for dataclasses. 
Meaning it works because they read through PEP 557 and hand-crafted a plugin for mypy that specifically facilitates type inference based on the rules described there. You can see the magic happening in the DataclassTransformer.transform method. You cannot generalize this behavior to arbitrary code, which is why they had to write a whole plugin just for this. I am not familiar enough with how PyCharm does its type checking, but I strongly suspect they used something similar. So you could argue that dataclasses are "cheating" with regards to static type checking. Though I am certainly not complaining. Pragmatic solution Even something as "high-profile" as Pydantic, which I personally love and use extensively, requires its own mypy plugin to realize the __init__ type inference properly (see here). For PyCharm they have their own separate Pydantic plugin, without which the internal type checker cannot provide those nice auto-suggestions for initialization etc. That approach would be your best bet, if you really want to take this further. Just be aware that this will be (in the best sense of the word) a hack to allow specifc type checkers to catch "errors" that they otherwise would have no way of catching. The reason I argue that it is unlikely to be viable is because it will essentially blow up the amount of work for your project to also cover the specific hacks for those type checkers that you want to satisfy. If you are committed enough and have the resources, go for it. Conclusion I am not trying to discourage you. But it is important to know the limitations enforced by the environment. It's either dynamic types and hacky imperfect type checking (still love mypy), or static types and no "kwargs can be anything" behavior. Hope this makes sense. Please let me know, if I made any errors. This is just based on my understanding of typing in Python. | 4 | 6 |
73,711,633 | 2022-9-14 | https://stackoverflow.com/questions/73711633/how-to-calculate-and-store-results-based-upon-the-matching-rows-of-two-different | I have three DataFrames which I am importing from Excel Files. The dataframes are given below as HTML Tables, Season Wise Record (this contains a Column Reward which is initialized with 0 initially) <table><tbody><tr><th>Unnamed: 0</th><th>Name</th><th>Team</th><th>Position</th><th>Games Played</th><th>PassingCompletions</th><th>PassingYards</th><th>PassingTouchdowns</th><th>RushingYards</th><th>RushingTouchdowns</th><th>ReceivingYards</th><th>Receptions</th><th>Touchdowns</th><th>Type</th><th>Sacks</th><th>SoloTackles</th><th>TacklesForLoss</th><th>FumblesForced</th><th>DefensiveTouchdowns</th><th>Interceptions</th><th>PassesDefended</th><th>ReceivingTouchdowns</th><th>Reward</th></tr><tr><td>0</td><td>Tom Brady</td><td>TAM</td><td>QB</td><td>17</td><td>485</td><td>5316</td><td>43</td><td>81</td><td>2</td><td>0</td><td>0</td><td>2</td><td>OFFENSE</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>1</td><td>Justin Herbert</td><td>LAC</td><td>QB</td><td>17</td><td>443</td><td>5014</td><td>38</td><td>302</td><td>3</td><td>0</td><td>0</td><td>3</td><td>OFFENSE</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>2</td><td>Matthew Stafford</td><td>LAR</td><td>QB</td><td>17</td><td>404</td><td>4886</td><td>41</td><td>43</td><td>0</td><td>0</td><td>0</td><td>0</td><td>OFFENSE</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>3</td><td>Patrick Mahomes</td><td>KAN</td><td>QB</td><td>17</td><td>436</td><td>4839</td><td>37</td><td>381</td><td>2</td><td>0</td><td>0</td><td>2</td><td>OFFENSE</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>4</td><td>Derek Carr</td><td>LVR</td><td>QB</td><td>17</td><td>428</td><td>4804</td><td>23</td><td>108</td><td>0</td><td>0</td><td>0</td><td>0</td><td>OFFENSE</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>5</td><td>Joe Burrow</td><td>CIN</td><td>QB</td><td>16</td><td>366</td><td>4611</td><td>34</td><td>118</td><td>2</td><td>0</td><td>0</td><td>2</td><td>OFFENSE</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>6</td><td>Dak Prescott</td><td>DAL</td><td>QB</td><td>16</td><td>410</td><td>4449</td><td>37</td><td>146</td><td>1</td><td>0</td><td>0</td><td>1</td><td>OFFENSE</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>7</td><td>Josh Allen</td><td>BUF</td><td>QB</td><td>17</td><td>409</td><td>4407</td><td>36</td><td>763</td><td>6</td><td>0</td><td>0</td><td>6</td><td>OFFENSE</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>88</td><td>Ezekiel Elliott</td><td>DAL</td><td>RB</td><td>17</td><td>1</td><td>4</td><td>0</td><td>1002</td><td>10</td><td>287</td><td>47</td><td>12</td><td>OFFENSE</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>2</td><td>0</td></tr><tr><td>89</td><td>Marcus Mariota</td><td>LVR</td><td>QB</td><td>10</td><td>1</td><td>4</td><td>0</td><td>87</td><td>1</td><td>0</td><td>0</td><td>1</td><td>OFFENSE</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>90</td><td>Johnny 
Hekker</td><td>LAR</td><td>QB</td><td>17</td><td>1</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>OFFENSE</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>91</td><td>Greg Ward</td><td>PHI</td><td>QB</td><td>17</td><td>1</td><td>2</td><td>0</td><td>0</td><td>0</td><td>95</td><td>7</td><td>3</td><td>OFFENSE</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>3</td><td>0</td></tr><tr><td>92</td><td>Kendall Hinton</td><td>DEN</td><td>WR</td><td>16</td><td>1</td><td>1</td><td>0</td><td>0</td><td>0</td><td>175</td><td>15</td><td>1</td><td>OFFENSE</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td></tr><tr><td>93</td><td>Keenan Allen</td><td>LAC</td><td>WR</td><td>16</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1138</td><td>106</td><td>6</td><td>OFFENSE</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>6</td><td>0</td></tr><tr><td>94</td><td>Danny Amendola</td><td>HOU</td><td>QB</td><td>8</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>248</td><td>24</td><td>3</td><td>OFFENSE</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>3</td><td>0</td></tr><tr><td>95</td><td>Cole Beasley</td><td>BUF</td><td>WR</td><td>16</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>693</td><td>82</td><td>1</td><td>OFFENSE</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td></tr></tbody></table> Game Wise Record (I am only adding some sample rows, there are 20k+ rows in it) <table><tbody><tr><th>Index</th><th>Week</th><th>Name</th><th>Team</th><th>Starter</th><th>Interceptions</th><th>PassesDefended</th><th>Sacks</th><th>SoloTackles</th><th>TacklesForLoss</th><th>FumblesForced</th><th>PassesCompletions</th><th>PassingYards</th><th>PassingTouchdowns</th><th>PassingInterceptions</th><th>RushingYards</th><th>RushingTouchdowns</th><th>Receptions</th><th>ReceivingYards</th><th>ReceivingTouchdowns</th></tr><tr><td>0</td><td>1</td><td>Jourdan Lewis</td><td>DAL</td><td>1</td><td>1</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>1</td><td>1</td><td>Trevon Diggs</td><td>DAL</td><td>1</td><td>1</td><td>2</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>2</td><td>1</td><td>Anthony Brown</td><td>DAL</td><td>1</td><td>0</td><td>0</td><td>0</td><td>6</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>3</td><td>1</td><td>Jayron Kearse</td><td>DAL</td><td>0</td><td>0</td><td>0</td><td>0</td><td>5</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>4</td><td>1</td><td>Micah Parsons</td><td>DAL</td><td>1</td><td>0</td><td>1</td><td>0</td><td>3</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>5</td><td>1</td><td>Keanu Neal</td><td>DAL</td><td>1</td><td>0</td><td>0</td><td>0</td><td>3</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>6</td><td>1</td><td>DeMarcus 
Lawrence</td><td>DAL</td><td>1</td><td>0</td><td>0</td><td>0</td><td>4</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>7</td><td>1</td><td>Jaylon Smith</td><td>DAL</td><td>0</td><td>0</td><td>0</td><td>0</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>8</td><td>1</td><td>Dorance Armstrong Jr.</td><td>DAL</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>9</td><td>1</td><td>Tarell Basham</td><td>DAL</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>5175</td><td>5</td><td>Patrick Mahomes</td><td>KAN</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>33</td><td>272</td><td>2</td><td>2</td><td>61</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>5176</td><td>5</td><td>Darrel Williams</td><td>KAN</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>27</td><td>0</td><td>3</td><td>18</td><td>0</td></tr><tr><td>5177</td><td>5</td><td>Tyreek Hill</td><td>KAN</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>15</td><td>0</td><td>7</td><td>63</td><td>0</td></tr><tr><td>5178</td><td>5</td><td>Clyde Edwards-Helaire</td><td>KAN</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>13</td><td>0</td><td>1</td><td>11</td><td>0</td></tr><tr><td>5179</td><td>5</td><td>Jerick McKinnon</td><td>KAN</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>2</td><td>0</td><td>2</td><td>13</td><td>0</td></tr><tr><td>5180</td><td>5</td><td>Michael Burton</td><td>KAN</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>5181</td><td>5</td><td>Mecole Hardman</td><td>KAN</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>9</td><td>76</td><td>0</td></tr><tr><td>5182</td><td>5</td><td>Travis Kelce</td><td>KAN</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>6</td><td>57</td><td>1</td></tr></tbody></table> And lastly, there's a Player Goals File (this is an Excel File containing Sheets for each of the position, I am only sharing for QB sheet, to keep the question short. 
IF needed, I can share the rest too) <table><tbody><tr><th>Goal</th><th>Goal Type</th><th>PCC Reward</th><th>Target</th><th>Min Value</th><th>Max Value</th><th>Games Required</th><th>Started</th><th>Level 99 PCC Reward x4 (current series)</th><th>TImes achieved</th><th>PCC Rewarded</th><th> </th></tr><tr><td>Throw 300-399 yds</td><td>Game</td><td>25</td><td>PassingYards</td><td>300</td><td>399</td><td>0</td><td>0</td><td>100</td><td>8</td><td>200</td><td> </td></tr><tr><td>Throw 400-499 yds</td><td>Game</td><td>50</td><td>PassingYards</td><td>400</td><td>499</td><td>0</td><td>0</td><td>200</td><td>5</td><td>250</td><td>1000</td></tr><tr><td>Throw 500+ yds</td><td>Game</td><td>150</td><td>PassingYards</td><td>500</td><td>99999</td><td>0</td><td>0</td><td>600</td><td> </td><td>0</td><td>0</td></tr><tr><td>Throw 2 TDs</td><td>Game</td><td>50</td><td>Touchdowns</td><td>2</td><td>2</td><td>0</td><td>0</td><td>200</td><td>9</td><td>450</td><td>1800</td></tr><tr><td>Throw 3 TDs</td><td>Game</td><td>75</td><td>Touchdowns</td><td>3</td><td>3</td><td>0</td><td>0</td><td>300</td><td>4</td><td>300</td><td>1200</td></tr><tr><td>Throw 4 TDs</td><td>Game</td><td>100</td><td>Touchdowns</td><td>4</td><td>4</td><td>0</td><td>0</td><td>400</td><td>2</td><td>200</td><td>800</td></tr><tr><td>Throw 5+ TDs</td><td>Game</td><td>300</td><td>Touchdowns</td><td>5</td><td>10000</td><td>0</td><td>0</td><td>1200</td><td> </td><td>0</td><td>0</td></tr><tr><td>30-39 Completions</td><td>Game</td><td>50</td><td>PassingCompletions</td><td>30</td><td>39</td><td>0</td><td>0</td><td>200</td><td>5</td><td>250</td><td>1000</td></tr><tr><td>40+ Completions</td><td>Game</td><td>200</td><td>PassingCompletions</td><td>40</td><td>9999</td><td>0</td><td>0</td><td>800</td><td>1</td><td>200</td><td>800</td></tr><tr><td>0 INTs (must have been designated starter)</td><td>Game</td><td>200</td><td>PassingInterceptions</td><td>0</td><td>0</td><td>0</td><td>1</td><td>800</td><td>7</td><td>1400</td><td>5600</td></tr><tr><td>3500-3999 Passing YDs</td><td>Season</td><td>500</td><td>PassingYards</td><td>3500</td><td>3999</td><td>0</td><td>0</td><td>2000</td><td> </td><td>0</td><td>0</td></tr><tr><td>4000-4999 Passing YDS</td><td>Season</td><td>750</td><td>PassingYards</td><td>4000</td><td>4999</td><td>0</td><td>0</td><td>3000</td><td> </td><td>0</td><td>0</td></tr><tr><td>5000+ Passing YDS</td><td>Season</td><td>1250</td><td>PassingYards</td><td>5000</td><td>99999</td><td>0</td><td>0</td><td>5000</td><td> </td><td>0</td><td>0</td></tr><tr><td>30-39 Passing TDS</td><td>Season</td><td>750</td><td>PassingTouchdowns</td><td>30</td><td>39</td><td>0</td><td>0</td><td>3000</td><td> </td><td>0</td><td>0</td></tr><tr><td>40-45 Passing TDS</td><td>Season</td><td>1250</td><td>PassingTouchdowns</td><td>40</td><td>49</td><td>0</td><td>0</td><td>5000</td><td> </td><td>0</td><td>0</td></tr><tr><td>50+ Passing TDS</td><td>Season</td><td>2000</td><td>PassingTouchdowns</td><td>50</td><td>99999</td><td>0</td><td>0</td><td>8000</td><td> </td><td>0</td><td>0</td></tr></tbody></table> What I want to do is analyze the Records of the Season Wise Records and the Game Wise Records, and based upon the Goals given in the Player Goals File, I want to add the Reward for all the players. 
This is player position dependent so I made the following function to calculate Rewards for all the players (for the Season Records only) def calculatePointsSeason(target, min_value, games_played_condition, max_value, tier_position, player_position, reward, games_played): if player_position in positions[tier_position]: if games_played > games_played_condition: if target >= min_value and target <= max_value: return reward return 0 Similarly, I made this function to calculate Game wise Record, def calculatePointsGame(target, min_value, max_value, tier_position, player_position, reward, started, started_condition): if player_position in positions[tier_position]: if started == started_condition: if target >= min_value and target <= max_value: return reward return 0 Following is the function in which I am applying these two functions to calculate the Reward for each player, for key, value in positions.items(): # Positions has a list of all the positions for (idx, row) in rewards[key].iterrows(): # Rewards is a Dict containing Pandas Dataframes against each position if row['Goal Type'] == 'Season': df = df.copy(deep=True) # df contains the Season Wise Record Dataframe df['Reward'] += df.apply(lambda x: calculatePointsSeason(x[row['Target']], row['Min Value'], row['Games Required'], row['Max Value'], key, x['Position'], row['PCC Reward'], x['Games Played']), axis=1) else: # For Game wise points for (i, main_player) in df.iterrows(): for (j, game_player) in data.iterrows(): # data contains the Game Wise Record dataframe if main_player['Name'] == game_player['Name']: main_player['Reward'] += calculatePointsGame(main_player[row['Target']], row['Min Value'], row['Max Value'], key, main_player['Position'], row['PCC Reward'], game_player['Starter'], row['Started']) This function works well for the Season Wise Records, but for the Game Wise, I couldn't come up with any Pandas way to do it (eliminating the need of iteration of two Dataframes). I want some way to, Match the Rows given in the Game Wise Record file with the Season Wise Record file, based upon the Name attribute Send the Values from the Game Wise Record to the Custom Function and the Position of the player from the Season Wise Record (so that, only the specific reward is calculated for the player, e.g. if player is QB, so only QB Rewards will be match with him and etc. There are Excel Sheets for each position rewards) Get the Reward Value back and add it to the Reward in the Season Wise Record against that specific player record. I previously tried to do it by comparing the Name of the Player in the Season Wise Record with the Game Wise Record, but it didn't work. Is there any Pandas way to solve this issue? (where you don't have to iterate all the rows two times) | I hope I understood correctly your intentions. To avoid double for loops, you need to use groupby() method and then apply the desired function to every row of the group; finally the aggregation function (sum()) should be applied to the group. Although you can use the Name as a key for grouping, I recommend to add PlayerID. The approach needs little preparation: data = data.join( df.reset_index().set_index(['Name', 'Team'], drop=False)[['index','Position']], on=['Name','Team'], how='left' ).rename({'index':'PlayerID'}, axis=1) We add 2 columns to data DataFrame, namely Position and PlayerID which is the index of the first DataFrame df. We search for the ID checking Name and Team that still may cause a collision (when there 2 players with identical name in the same team). 
When it's done the last part of the code will be like this: for key, value in positions.items(): # Positions has a list of all the positions for (_, row) in rewards[key].iterrows(): # Rewards is a Dict containing Pandas Dataframes against each position if row['Goal Type'] == 'Season': if row['Target'] in df.columns: df['Reward'] += df.apply(lambda x: calculatePointsSeason( x[row['Target']], row['Min Value'], row['Games Required'], row['Max Value'], key, x['Position'], row['PCC Reward'], x['Games Played'] ), axis=1) else: # For Game wise points if row['Target'] in data.columns: # I added these 2 checks because sometimes target is not presented in the columns which raises the error df['Reward'] = df['Reward'].add( data.groupby('PlayerID').apply( lambda group: group.apply(lambda game_player: calculatePointsGame( game_player[row['Target']], row['Min Value'], row['Max Value'], key, game_player['Position'], row['PCC Reward'], game_player['Starter'], row['Started'] ), axis=1).sum() ), fill_value=0 ) | 9 | 1 |
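The pattern at the heart of the answer above — a per-player groupby(...).apply(...).sum() added back onto the season table with .add(..., fill_value=0) instead of two nested row loops — shown in isolation on toy data; the column names and the reward rule below are made up purely for illustration:

```python
import pandas as pd

season = pd.DataFrame({"PlayerID": [1, 2], "Reward": [0, 0]}).set_index("PlayerID")
games = pd.DataFrame({"PlayerID": [1, 1, 2], "score": [10, 40, 5]})

# per-game rule evaluated row by row, summed per player (stands in for calculatePointsGame)
per_player = games.groupby("PlayerID").apply(
    lambda g: g.apply(lambda row: 25 if row["score"] >= 30 else 0, axis=1).sum()
)

# add the per-player totals onto the season table without iterating its rows
season["Reward"] = season["Reward"].add(per_player, fill_value=0)
print(season)
```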
73,787,469 | 2022-9-20 | https://stackoverflow.com/questions/73787469/errno-13-permission-denied-error-when-trying-to-load-huggingface-dataset | I'm trying to do a very simple thing: to load a dataset from the Huggingface library (see example code here) on my Mac: from datasets import load_dataset raw_datasets = load_dataset("glue", "mrpc") I'm getting the following error: PermissionError: [Errno 13] Permission denied: '/Users/username/.cache/huggingface/datasets/downloads/6d9bc094a0588d875caee4e51df39ab5d6b6316bf60695294827b02601d421a5.759f3e257a3fad0984d9f8ba9a26479d341795eb50fa64e4c1de40f1fc421313.py.lock' I've just spent an hour googling solutions for this, but so far nothing has worked. Can anyone help? Thanks in advance! | OK, I managed to solve it by manually changing the permissions of the right folders on my Mac: I navigated to the /Users/username/.cache/huggingface/datasets/downloads folder in the Finder (you can see hidden files and folders such as ".cache" by pressing " command shift + . ") Then I went to the info window for this 'downloads' folder (" command i "), clicked 'Sharing & Permissions', clicked the lock to make changes and then gave everyone read & write access (I'm the only one using this computer, so it doesn't matter) I then got some new permission errors relating to some other folders, so I just went to each folder and changed the permissions for all of them Not sure why this worked and the 'chmod 777' command didn't, but I'm glad it did. Thank you @Cuartero for pointing me in the right direction! | 5 | 3 |
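If changing the folder permissions is not an option, an alternative workaround (not part of the accepted answer) is to point the datasets cache at a directory the current user owns, either via the HF_DATASETS_CACHE environment variable or the cache_dir argument; the path below is only an example:

```python
from datasets import load_dataset

# use a cache directory you definitely have write access to (example path)
raw_datasets = load_dataset("glue", "mrpc", cache_dir="/Users/username/my_hf_cache")
```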
73,791,594 | 2022-9-20 | https://stackoverflow.com/questions/73791594/how-to-vectorize-a-torch-function | When using numpy I can use np.vectorize to vectorize a function that contains if statements in order for the function to accept array arguments. How can I do the same with torch in order for a function to accept tensor arguments? For example, the final print statement in the code below will fail. How can I make this work? import numpy as np import torch as tc def numpy_func(x): return x if x > 0. else 0. numpy_func = np.vectorize(numpy_func) print('numpy function (scalar):', numpy_func(-1.)) print('numpy function (array):', numpy_func(np.array([-1., 0., 1.]))) def torch_func(x): return x if x > 0. else 0. print('torch function (scalar):', torch_func(-1.)) print('torch function (tensor):', torch_func(tc.tensor([-1., 0., 1.]))) | You can use .apply_() for CPU tensors. For CUDA ones, the task is problematic: if statements aren't easy to SIMDify. You may apply the same workaround for functorch.vmap as video drivers used to do for shaders: evaluate both branches of the condition and stick to arithmetics. Otherwise, just use a for loop: that's what np.vectorize() mostly does anyway. def torch_vectorize(f, inplace=False): def wrapper(tensor): out = tensor if inplace else tensor.clone() view = out.flatten() for i, x in enumerate(view): view[i] = f(x) return out return wrapper | 4 | 6 |
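To illustrate the "evaluate both branches and stick to arithmetics" remark above, the if/else from the question can be written branch-free with torch.where, which already works element-wise on tensors (and under functorch.vmap) without any Python-level loop; a minimal sketch:

```python
import torch as tc

def torch_func_branchfree(x):
    # both "branches" are computed; torch.where selects per element
    return tc.where(x > 0., x, tc.zeros_like(x))

print(torch_func_branchfree(tc.tensor(-1.)))            # tensor(0.)
print(torch_func_branchfree(tc.tensor([-1., 0., 1.])))  # tensor([0., 0., 1.])
```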
73,782,876 | 2022-9-20 | https://stackoverflow.com/questions/73782876/debugging-a-streamlit-application-in-pycharm-on-windows | I'm trying to setup a way to debug a Streamlit script in PyCharm. I'm on a Win10/64bit machine, working within an virtual environment created with conda. Running the code in the default way with streamlit run main.py works as expected. I have already read several forum posts and most importantly this related question on SO here. My question is the following: The answer in the question above suggests to change the debug configuration to use Module name instead of Script path and enter streamlit.cli as module. Then in the parameters one should set run main.py as the argument. Unfortunately this gets me the following error: No module named streamlit.cli Where do I find streamlit.cli shouldn't it be installed along with the default pip install of the library? Do I need to install it separately? Any help is much appreciated! | Stumbled across the (rather simple) answer by accident: Simply use the "correct" module name, which in my case was streamlit instead of streamlit.cli. So for debugging I now have the following configuration: Module Name (instead of "Script path"): streamlit Parameter: run main.py Interpreter options: not set Working directory: <path\to\main.py> As I have read in some streamlit forum post: "Works like a (Py)Charm!" :) | 8 | 16 |
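An alternative some people use is a small launcher script that PyCharm can debug as an ordinary Python file; this is hedged because the CLI module has moved between Streamlit releases (streamlit.web.cli in recent versions, streamlit.cli in older ones):

```python
import sys
from streamlit.web import cli as stcli  # older Streamlit versions: from streamlit import cli as stcli

if __name__ == "__main__":
    # equivalent to running: streamlit run main.py
    sys.argv = ["streamlit", "run", "main.py"]
    sys.exit(stcli.main())
```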
73,790,207 | 2022-9-20 | https://stackoverflow.com/questions/73790207/how-to-override-the-update-action-in-django-rest-framework-modelviewset | These are the demo models class Author(models.Model): name = models.CharField(max_length=5) class Post(models.Model): author = models.ForeignKey(Author, on_delete=models.CASCADE) title = models.CharField(max_length=50) body = models.TextField() And the respective views are class AuthorViewSet(viewsets.ModelViewSet): queryset = Author.objects.all() serializer_class = AuthorSerializer class PostViewSet(viewsets.ModelViewSet): queryset = Post.objects.all() serializer_class = PostStatSerializer I am trying to perform an update/put action on PostViewSet, which is successful, but I am expecting different output. After a successful update of the Post record, I want to send its Author record as output with AuthorSerializer. How can I override this and add this functionality? | I figured out a fix with less code for my issue. class PostViewSet(viewsets.ModelViewSet): queryset = Post.objects.all() serializer_class = PostStatSerializer def update(self, request, *args, **kwargs): super().update(request, *args, **kwargs) instance = self.get_object() return Response(AuthorSerializer(instance.author).data) | 5 | 2 |
73,793,168 | 2022-9-20 | https://stackoverflow.com/questions/73793168/dataframe-column-stores-lists-of-dictionaries-as-strings-parse-it-and-build-a-n | I have a huge dataframe (2 million rows) in which a certain column has a string representations of a list of dictionaries (it is the school history of several people). So, what I'm trying to do is parsing this data to a new dataframe (because the relation is going to be 1 person to many schools). However, my first option was to loop over the dataframe with itertuples(). Too slow! The first few rows look like this: list_of_dicts = { 0: '[]', 1: "[{'name': 'USA Health', 'subject': 'Residency, Internal Medicine, 2006 - 2009'}, {'name': 'Ross University School of Medicine', 'subject': 'Class of 2005'}]", 2: "[{'name': 'Physicians Medical Center Carraway', 'subject': 'Residency, Surgery, 1957 - 1960'}, {'name': 'Physicians Medical Center Carraway', 'subject': 'Internship, Transitional Year, 1954 - 1955'}, {'name': 'University of Alabama School of Medicine', 'subject': 'Class of 1954'}]" } df_dict = pd.DataFrame.from_dict(list_of_dicts, orient='index', columns=['school_history']) What I thought about, was to have a function and then apply it to the dataframe: def parse_item(row): eval_dict = eval(row)[0] school_df = pd.DataFrame.from_dict(eval_dict, orient='index').T return school_df df['column'].apply(lambda x: parse_item(x)) However, I'm not able to figure out how to generate a dataframe bigger than original (due to situations of multiple schools to one person). From those 3 rows, the idea is to have this dataframe (that has 5 rows from 2 rows): | This does the trick using your sample data (thanks for the performance tip in comments): list_df = df_dict.school_history.map(ast.literal_eval) exploded = list_df[list_df.str.len() > 0].explode() final = pd.DataFrame(list(exploded), index=exploded.index) This produces the following: In [54]: final Out[54]: name subject 1 USA Health Residency, Internal Medicine, 2006 - 2009 1 Ross University School of Medicine Class of 2005 2 Physicians Medical Center Carraway Residency, Surgery, 1957 - 1960 2 Physicians Medical Center Carraway Internship, Transitional Year, 1954 - 1955 2 University of Alabama School of Medicine Class of 1954 This will probably not be super fast given the amount of data, but parsing a dictionary of strings with nested objects inside will probably be pretty slow no matter what. You're probably better off parsing the file upstream first, then converting to pandas. | 4 | 3 |
73,790,198 | 2022-9-20 | https://stackoverflow.com/questions/73790198/how-to-prevent-inpainting-blocks-and-improve-coloring | I wanted to Remove all the texts USING INPAINTING from this IMAGE. I had been trying various methods, and eventually found that I can get the results through OCR and then using thresholding MASK THE IMAGE. processedImage = preprocess(partOFimg) mask = np.ones(img.shape[:2], dtype="uint8") * 255 for c in cnts: cv2.drawContours(mask, [c], -1, 0, -1) img = cv2.inpaint(img,mask,7,cv2.INPAINT_TELEA) Preprocess operations: ret,thresh1 = cv2.threshold(gray, 0, 255,cv2.THRESH_OTSU|cv2.THRESH_BINARY_INV) rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3)) dilation = cv2.dilate(thresh1, rect_kernel, iterations = 1) edged = cv2.Canny(dilation, 50, 100) cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) cnts = imutils.grab_contours(cnts) mask = np.ones(img.shape[:2], dtype="uint8") * 255 When I run the above code, I here am the OUTPUT Image OUTPUT. As we can see, it is making some BLOCKS OF DIFFERENT COLOR over the IMAGE, I want to prevent that, How do I achieve this? I see that mask images are not formed well many times, and in cases when the text is white the PREPROCESSING doesn't occur properly. How do I prevent these BLOCKS of other colours to FORM on the IMAGE? Grayed Sub Image GRAYED Threshold Sub IMG part: Thresholded Image Masked Image Masked EDIT 1: I've managed to get this new better result by noticing that my threshold is the best mask I can get. After doing this I performed the masking process 3 different times with variable masks and inversions. I did the inpainting algorithm 3 times, it basically the other times inverse the mask, because in some cases required mask is the inversed mask. Still I think it needs improvement, If I chose a different image the results are not so good. | Python/OpenCV inpaint methods, generally, are not appropriate to your type of image. They work best on thin (scratch-like) regions, not large blocks. You really need an exemplar type method such as https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/criminisi_tip2004.pdf. But OpenCV does not have that. However, the OpenCV methods do work here, I suspect, because you are filling with constant colors (green) and not texture. So you are best to try to get the mask of just the letters (characters), not rectangular blocks for the words. So, to show you what I mean, here is my Python/OpenCV approach. 
Input: Read the input Threshold on the green sign Apply morphology to close it up and keep as mask1 Apply the mask to the image to blacken out the outside of the sign Threshold on the white in this new image and keep as mask2 Apply morphology dilate to enlarge it slightly and save as mask3 Do the inpaint Save the results import cv2 import numpy as np # read input img = cv2.imread('airport_sign.jpg') # threshold on green sign lower = (30,80,0) upper = (70,120,20) thresh = cv2.inRange(img, lower, upper) # apply morphology close kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (135,135)) mask1 = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel) # apply mask to img img2 = img.copy() img2[mask1==0] = (0,0,0) # threshold on white #gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) #mask2 = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1] lower = (120,120,120) upper = (255,255,255) mask2 = cv2.inRange(img2, lower, upper) # apply morphology dilate kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5)) mask3 = cv2.morphologyEx(mask2, cv2.MORPH_DILATE, kernel) # do inpainting result1 = cv2.inpaint(img,mask3,11,cv2.INPAINT_TELEA) result2 = cv2.inpaint(img,mask3,11,cv2.INPAINT_NS) # save results cv2.imwrite('airport_sign_mask.png', mask3) cv2.imwrite('airport_sign_inpainted1.png', result1) cv2.imwrite('airport_sign_inpainted2.png', result1) # show results cv2.imshow('thresh',thresh) cv2.imshow('mask1',mask1) cv2.imshow('img2',img2) cv2.imshow('mask2',mask2) cv2.imshow('mask3',mask3) cv2.imshow('result1',result1) cv2.imshow('result2',result2) cv2.waitKey(0) cv2.destroyAllWindows() Mask 3: Inpaint 1 (Telea): Inpaint 2 (NS): | 4 | 4 |
73,791,116 | 2022-9-20 | https://stackoverflow.com/questions/73791116/sum-every-n-elements-from-numpy-array | For example, given an array arr = np.array([1,2,3,2,3,7,2,3,4]) there are 9 elements. I want to get the sum of every 3 elements: [6, 12, 9] Is there any NumPy API I can use? | If your arr can be divided into groups of 3, i.e. has length 3*k, then: arr.reshape(-1,3).sum(axis=-1) # array([ 6, 12, 9]) In the general case, use np.bincount: np.bincount(np.arange(len(arr))//3, arr) # array([ 6., 12., 9.]) | 6 | 6 |
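For an arbitrary group size n, np.add.reduceat gives the same result and also handles a trailing group shorter than n; a small sketch (not part of the accepted answer):

```python
import numpy as np

arr = np.array([1, 2, 3, 2, 3, 7, 2, 3, 4])
n = 3
# sums arr between consecutive start indices (and from the last start to the end)
print(np.add.reduceat(arr, np.arange(0, len(arr), n)))  # [ 6 12  9]
```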
73,766,401 | 2022-9-18 | https://stackoverflow.com/questions/73766401/why-file-write-is-appending-in-r-mode-in-python | Assume I have a file content.txt, with the content Ex nihilo nihil fit. I want to replace it with Ex nihilo nihil est. The code is: with open("content.txt", "r+") as f: content = f.read() content = content.replace("fit", "est") print(content) f.write(content) f.close() After that, the content of the file becomes: Ex nihilo nihil fit Ex nihilo nihil est Why? The only thing I need is Ex nihilo nihil est. What is the correct code? | r+ opens the file and puts a pointer at the first byte. When you use f.read() the pointer moves to the end of the file, so when you try to write it starts at the end of the file. To move back to the start (so you can overwrite), use f.seek(0): with open("content.txt", "r+") as f: content = f.read() content = content.replace("fit", "est") print(content) f.seek(0) f.truncate(0) f.write(content) f.close() | 4 | 3 |
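The file-position mechanics behind the answer above can be made visible with f.tell(); a tiny sketch (assuming the same content.txt file):

```python
with open("content.txt", "r+") as f:
    print(f.tell())      # 0 -> a write here would start at the beginning
    content = f.read()
    print(f.tell())      # now at the end of the file -> a write here appends
    f.seek(0)            # jump back before rewriting (optionally truncate first)
```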
73,785,952 | 2022-9-20 | https://stackoverflow.com/questions/73785952/pandas-join-two-dataframes-according-to-range-and-date | I have two dataframes like this: DATE MAX_AMOUNT MIN_AMOUNT MAX_DAY MIN_DAY RATE 01/09/2022 20 15 10 5 0.01 01/09/2022 25 20 15 10 0.02 03/09/2022 30 10 5 3 0.03 03/09/2022 40 30 20 5 0.04 04/09/2022 10 5 10 1 0.05 ID DATE AMOUNT DAY 1 01/09/2022 18 7 2 01/09/2022 22 11 3 01/09/2022 30 20 4 03/09/2022 35 10 5 04/09/2022 35 10 I want to bring the RATE values to the second df in accordance with the DATE. Also, the AMOUNT and DAY values in the relevant DATE must be within the appropriate range (MAX_AMOUNT & MIN_AMOUNT, MAX_DAY & MIN_DAY). Desired output like this: ID DATE AMOUNT DAY RATE 1 01/09/2022 18 7 0.01 2 01/09/2022 22 11 0.02 3 01/09/2022 30 20 4 03/09/2022 35 10 0.04 5 04/09/2022 35 10 Could you please help me about this? | Use merge first with filter columns by Series.between and then use Series.map for RATE column with first matched ID - added DataFrame.drop_duplicates: df = df2.merge(df1, on='DATE') df = (df[df['AMOUNT'].between(df['MIN_AMOUNT'], df['MAX_AMOUNT']) & df['DAY'].between(df['MIN_DAY'], df['MAX_DAY'])]) df2['RATE'] = df2['ID'].map(df.drop_duplicates('ID').set_index('ID')['RATE']) print (df2) ID DATE AMOUNT DAY RATE 0 1 01/09/2022 18 7 0.01 1 2 01/09/2022 22 11 0.02 2 3 01/09/2022 30 20 NaN 3 4 03/09/2022 35 10 0.04 4 5 04/09/2022 35 10 NaN | 4 | 4 |
73,739,552 | 2022-9-16 | https://stackoverflow.com/questions/73739552/select-columns-from-a-highly-nested-data | For the dataframe below, which was generated from an avro file, I'm trying to get the column names as a list or other format so that I can use it in a select statement. node1 and node2 have the same elements. For example I understand that we could do df.select(col('data.node1.name')), but I'm not sure how to select all columns at once without hardcode all the column names, and how to handle the nested part. I think to make it readable, the productvalues and porders should be selected into separate individual dataframes/tables? Input schema: root |-- metadata: struct |... |-- data :struct | |--node1 : struct | | |--name : string | | |--productlist: array | | |--element : struct | |--productvalues: array | |--element : struct | |-- pname:string | |-- porders:array | |--element : struct | |-- ordernum: int | |-- field: string |--node2 : struct | |--name : string | |--productlist: array | |--element : struct |--productvalues: array |--element : struct |-- pname:string |-- porders:array |--element : struct |-- ordernum: int |-- field: string | The following way, you will not need to hardcode all the struct fields. But you will need to provide a list of those columns/fields which have the type of array of struct. You have 3 of such fields, we will add one more column, so in total it will be 4. First of all, the dataframe, similar to yours: from pyspark.sql import functions as F df = spark.createDataFrame( [( ('a', 'b'), ( ( 'name_1', [ ([ ( 'pname_111', [ (1111, 'field_1111'), (1112, 'field_1112') ] ), ( 'pname_112', [ (1121, 'field_1121'), (1122, 'field_1122') ] ) ],), ([ ( 'pname_121', [ (1211, 'field_1211'), (1212, 'field_1212') ] ), ( 'pname_122', [ (1221, 'field_1221'), (1222, 'field_1222') ] ) ],) ] ), ( 'name_2', [ ([ ( 'pname_211', [ (2111, 'field_2111'), (2112, 'field_2112') ] ), ( 'pname_212', [ (2121, 'field_2121'), (2122, 'field_2122') ] ) ],), ([ ( 'pname_221', [ (2211, 'field_2211'), (2212, 'field_2212') ] ), ( 'pname_222', [ (2221, 'field_2221'), (2222, 'field_2222') ] ) ],) ] ) ), )], 'metadata:struct<fld1:string,fld2:string>, data:struct<node1:struct<name:string, productlist:array<struct<productvalues:array<struct<pname:string, porders:array<struct<ordernum:int, field:string>>>>>>>, node2:struct<name:string, productlist:array<struct<productvalues:array<struct<pname:string, porders:array<struct<ordernum:int, field:string>>>>>>>>' ) # df.printSchema() # root # |-- metadata: struct (nullable = true) # | |-- fld1: string (nullable = true) # | |-- fld2: string (nullable = true) # |-- data: struct (nullable = true) # | |-- node1: struct (nullable = true) # | | |-- name: string (nullable = true) # | | |-- productlist: array (nullable = true) # | | | |-- element: struct (containsNull = true) # | | | | |-- productvalues: array (nullable = true) # | | | | | |-- element: struct (containsNull = true) # | | | | | | |-- pname: string (nullable = true) # | | | | | | |-- porders: array (nullable = true) # | | | | | | | |-- element: struct (containsNull = true) # | | | | | | | | |-- ordernum: integer (nullable = true) # | | | | | | | | |-- field: string (nullable = true) # | |-- node2: struct (nullable = true) # | | |-- name: string (nullable = true) # | | |-- productlist: array (nullable = true) # | | | |-- element: struct (containsNull = true) # | | | | |-- productvalues: array (nullable = true) # | | | | | |-- element: struct (containsNull = true) # | | | | | | |-- pname: string 
(nullable = true) # | | | | | | |-- porders: array (nullable = true) # | | | | | | | |-- element: struct (containsNull = true) # | | | | | | | | |-- ordernum: integer (nullable = true) # | | | | | | | | |-- field: string (nullable = true) The answer Spark 3.1+ nodes = df.select("data.*").columns for n in nodes: df = df.withColumn("data", F.col("data").withField(n, F.struct(F.lit(n).alias("node"), f"data.{n}.*"))) df = df.withColumn("data", F.array("data.*")) for arr_of_struct in ["data", "productlist", "productvalues", "porders"]: df = df.select( *[c for c in df.columns if c != arr_of_struct], F.expr(f"inline({arr_of_struct})") ) Lower Spark versions: nodes = df.select("data.*").columns for n in nodes: df = df.withColumn( "data", F.struct( F.struct(F.lit(n).alias("node"), f"data.{n}.*").alias(n), *[f"data.{c}" for c in df.select("data.*").columns if c != n] ) ) df = df.withColumn("data", F.array("data.*")) for arr_of_struct in ["data", "productlist", "productvalues", "porders"]: df = df.select( *[c for c in df.columns if c != arr_of_struct], F.expr(f"inline({arr_of_struct})") ) Results: df.printSchema() # root # |-- metadata: struct (nullable = true) # | |-- fld1: string (nullable = true) # | |-- fld2: string (nullable = true) # |-- node: string (nullable = false) # |-- name: string (nullable = true) # |-- pname: string (nullable = true) # |-- ordernum: integer (nullable = true) # |-- field: string (nullable = true) df.show() # +--------+-----+------+---------+--------+----------+ # |metadata| node| name| pname|ordernum| field| # +--------+-----+------+---------+--------+----------+ # | {a, b}|node1|name_1|pname_111| 1111|field_1111| # | {a, b}|node1|name_1|pname_111| 1112|field_1112| # | {a, b}|node1|name_1|pname_112| 1121|field_1121| # | {a, b}|node1|name_1|pname_112| 1122|field_1122| # | {a, b}|node1|name_1|pname_121| 1211|field_1211| # | {a, b}|node1|name_1|pname_121| 1212|field_1212| # | {a, b}|node1|name_1|pname_122| 1221|field_1221| # | {a, b}|node1|name_1|pname_122| 1222|field_1222| # | {a, b}|node2|name_2|pname_211| 2111|field_2111| # | {a, b}|node2|name_2|pname_211| 2112|field_2112| # | {a, b}|node2|name_2|pname_212| 2121|field_2121| # | {a, b}|node2|name_2|pname_212| 2122|field_2122| # | {a, b}|node2|name_2|pname_221| 2211|field_2211| # | {a, b}|node2|name_2|pname_221| 2212|field_2212| # | {a, b}|node2|name_2|pname_222| 2221|field_2221| # | {a, b}|node2|name_2|pname_222| 2222|field_2222| # +--------+-----+------+---------+--------+----------+ Explanation nodes = df.select("data.*").columns for n in nodes: df = df.withColumn("data", F.col("data").withField(n, F.struct(F.lit(n).alias("node"), f"data.{n}.*"))) Using the above, I decided to save the node title in case you need it. It first gets a list of nodes from "data" column fields. Using the list, the for loop creates one more field inside every node struct for the title of the node. df = df.withColumn("data", F.array("data.*")) The above converts the "data" column type from struct to array so that in the next step we could easily explode it into columns. for arr_of_struct in ["data", "productlist", "productvalues", "porders"]: df = df.select( *[c for c in df.columns if c != arr_of_struct], F.expr(f"inline({arr_of_struct})") ) In the above, the main line is F.expr(f"inline({arr_of_struct})"). It must be used inside a loop, because it's a generator and you cannot nest them together in Spark. inline explodes arrays of structs into columns. At this step you have 4 of [array of struct], so 4 inline expressions will be created. 
| 6 | 1 |
73,779,649 | 2022-9-19 | https://stackoverflow.com/questions/73779649/pandas-dataframe-reshaping-columns-using-start-and-end-pairs | I have been trying to figure out a way to transform this dataframe. It contains only two columns; one is timeseries data and the other is event markers. Here is an example of the initial dataframe: df1 = pd.DataFrame({'Time': ['1:42 AM','2:30 AM','3:29 AM','4:19 AM','4:37 AM','4:59 AM','5:25 AM','5:33 AM','6:48 AM'], 'Event': ['End','Start','End','Start','End','Start','Start','End','Start']}) This is how I would like it to look after transformation: df2 = pd.DataFrame({'Start': ['', '2:30 AM', '4:19 AM', '4:59 AM', '5:25 AM', '6:48 AM'], 'End': ['1:42 AM', '3:29 AM', '4:37 AM', '', '5:33 AM', '']}) Essentially, I want to make the event markers the new columns, with the start and end times paired up down the table as they happen. This example includes both exceptions that will sometimes happen: First row of data is an 'End' marker (data gets truncated by date). Sometimes there will not be an 'End' marker for a specific 'Start' marker (job failed or wasn't complete at time the report was run). I looked at pivot and pivot_table but I wasn't able to add an index that got the output I wanted. Pretty sure this should be possible, I just am not an expert with dataframes yet. | Try: df1["tmp"] = df1["Event"].eq("Start").cumsum() df1 = df1.pivot(index="tmp", columns="Event", values="Time").fillna("") df1.columns.name, df1.index.name = None, None print(df1[["Start", "End"]]) Prints: Start End 0 1:42 AM 1 2:30 AM 3:29 AM 2 4:19 AM 4:37 AM 3 4:59 AM 4 5:25 AM 5:33 AM 5 6:48 AM | 4 | 5 |
73,775,424 | 2022-9-19 | https://stackoverflow.com/questions/73775424/python-http-server-how-to-serve-html-from-directory-above-cwd | I've got a question that I could really use some guidance with. I am simply trying to serve HTML from a directory that is not the same directory as the server itself. Ideally, I'd like to move one level above the server's directory, and serve a file located there. When I try that, I get a 404. class Server(SimpleHTTPRequestHandler): def do_GET(self): ... some other stuff ... elif self.path == "/manage": self.send_response(200) self.path = "/manage.html" return SimpleHTTPRequestHandler.do_GET(self) else: f = self.send_head() if f: try: self.copyfile(f, self.wfile) finally: f.close() ^this code serves the file, if it's in the same directory as the server. If I then move manage.html upward (where I'd like it to live), and try stuff like '../manage.html', or an absolute path, I get a 404. Am I bumping into like, some built-in directory traversal mitigation? If so, is there a way I can disable it? All of this is local, security isn't really a problem. I could try a subdirectory, but if I start down that road, I'd have to rename & rearrange the entire directory structure, because the naming won't make sense. Thanks in advance! (Python 3.10.2) 64-bit, Windows. | This is a security feature as you mentioned. You wouldn't want users to be able to see all files of the server, would you? Starting with Python 3.7, the constructor of SimpleHTTPRequestHandler has a directory parameter (see docs: https://docs.python.org/3/library/http.server.html#http.server.SimpleHTTPRequestHandler). With that you can tell SimpleHTTPRequestHandler from where to server the files. You can either change that wherever you instantiate your server, i.e. Server(directory=...) or you can alter the init method of you Server class class Server(SimpleHTTPRequestHandler): def __init__(self, *args, **kwargs): super().__init__(*args, directory=..., **kwargs) EDIT: I dug a little deeper, and here is where the sanitization happens https://github.com/python/cpython/blob/9a95fa9267590c6cc66f215cd9808905fda1ee25/Lib/http/server.py#L839-L847 # ... path = posixpath.normpath(path) words = path.split('/') words = filter(None, words) path = self.directory for word in words: if os.path.dirname(word) or word in (os.curdir, os.pardir): # Ignore components that are not a simple file/directory name continue path = os.path.join(path, word) # ... | 5 | 4 |
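A brief usage sketch of the directory parameter from the answer above: functools.partial binds it when the server is started (the port is a placeholder, and the question's Server subclass could be passed in place of SimpleHTTPRequestHandler):

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# serve files from one level above the current working directory
Handler = partial(SimpleHTTPRequestHandler, directory="..")
HTTPServer(("", 8000), Handler).serve_forever()
```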
73,745,853 | 2022-9-16 | https://stackoverflow.com/questions/73745853/celery-send-task-method | I have my API, and some endpoints need to forward requests to Celery. The idea is to have a specific API service that basically only instantiates the Celery client and uses the send_task() method, and a separate service (workers) that consumes tasks. Code for task definitions should be located in that worker service. Basically, separating the Celery app (API) and the Celery worker into two separate services. I don't want my API to know about any celery task definitions, endpoints only need to use celery_client.send_task('some_task', (some_arguments)). So on one service I have my API, and on another service/host I have the Celery code base where my Celery worker will execute tasks. I came across this great article that describes what I want to do. https://medium.com/@tanchinhiong/separating-celery-application-and-worker-in-docker-containers-f70fedb1ba6d and this post Celery - How to send task from remote machine? I need help on how to create routes for tasks from the API. I was expecting celery_client.send_task() to have a queue= keyword, but it does not. I need to have 2 queues, and two workers that will consume content from these two queues. Commands for my workers: celery -A <path_to_my_celery_file>.celery_client worker --loglevel=info -Q queue_1 celery -A <path_to_my_celery_file>.celery_client worker --loglevel=info -Q queue_2 I have also visited the Celery "Routing Tasks" documentation, but it is still unclear to me how to establish this communication. | Your API side should hold the router. I guess it's not an issue because it is only a map of task -> queue (aka send task1 to queue1). In other words, your celery_client should have task_routes like: task_routes = { 'mytasks.some_task': 'queue_1', 'mytasks.some_other_task': 'queue_2', } | 4 | 3 |
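A minimal sketch of the API-side client described in the answer above, written with the explicit {'queue': ...} form of the route values; the broker URL, task names and queue names are placeholders, and no task implementations are imported on this side:

```python
from celery import Celery

celery_client = Celery("api", broker="amqp://guest@localhost//")  # placeholder broker URL
celery_client.conf.task_routes = {
    "mytasks.some_task": {"queue": "queue_1"},
    "mytasks.some_other_task": {"queue": "queue_2"},
}

# the API only knows the task name, not its implementation
celery_client.send_task("mytasks.some_task", args=(1, 2))
```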
73,748,939 | 2022-9-16 | https://stackoverflow.com/questions/73748939/this-tensorflow-binary-is-optimized-with-oneapi-deep-neural-network-library-one | I have the following code: import tensorflow as tf print("Hello") And the output is: This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Hello # This is printed about 5 seconds after the message I had looked into the meaning of the message in this thread and in this one, but failed to make it disappear (and to run any program in less than 5 seconds). Any help would be greatly appreciated. | The message is just telling you that certain optimizations are on for you by default and if you want even more optimizations you can recompile TF to get even more performant optimizations. By default they compile using AVX2 which isn't the fastest AVX, but it is the most compatible. If you don't need to enable those (which you probably don't) then you can just ignore the informational message knowing that you are getting some optimizations for your runs utilizing the oneDNN CPU optimizations. | 6 | 7 |
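If the goal is just to silence the informational message, a commonly used approach (an assumption about the setup, not part of the accepted answer) is to raise TensorFlow's C++ log level before the import; note the roughly 5-second delay is mostly ordinary TensorFlow startup cost and is not removed by these flags:

```python
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"   # hide INFO-level messages; must be set before the import
# os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"  # alternatively disable oneDNN entirely (loses the optimizations)

import tensorflow as tf
print("Hello")
```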
73,772,930 | 2022-9-19 | https://stackoverflow.com/questions/73772930/pip-failing-to-install-psycopg2-binary-on-ubuntu-22-04-with-python-3-10 | seeing this on Ubuntu 22.04 which has python 3.10 pip3 install psycopg2-binary==2.8.5 Defaulting to user installation because normal site-packages is not writeable Collecting psycopg2-binary==2.8.5 Using cached psycopg2-binary-2.8.5.tar.gz (381 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [23 lines of output] running egg_info creating /tmp/pip-pip-egg-info-qc0tg_5p/psycopg2_binary.egg-info writing /tmp/pip-pip-egg-info-qc0tg_5p/psycopg2_binary.egg-info/PKG-INFO writing dependency_links to /tmp/pip-pip-egg-info-qc0tg_5p/psycopg2_binary.egg-info/dependency_links.txt writing top-level names to /tmp/pip-pip-egg-info-qc0tg_5p/psycopg2_binary.egg-info/top_level.txt writing manifest file '/tmp/pip-pip-egg-info-qc0tg_5p/psycopg2_binary.egg-info/SOURCES.txt' Error: pg_config executable not found. pg_config is required to build psycopg2 from source. Please add the directory containing pg_config to the $PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. If you prefer to avoid building psycopg2 from source, please install the PyPI 'psycopg2-binary' package instead. For further information please check the 'doc/src/install.rst' file (also at <https://www.psycopg.org/docs/install.html>). [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. | This is a known issue. Use the binary version instead: pip install psycopg2-binary EDIT: You don't have 'pg_config' which is part of libpq-dev on Ubuntu. Install it with sudo apt-get install libpq-dev and try again. | 5 | 14 |
73,743,988 | 2022-9-16 | https://stackoverflow.com/questions/73743988/stratified-sampling-with-priors-in-python | Context The common scenario of applying stratified sampling is about choosing a random sample that roughly maintains the distribution of the selected variable(s) so that it is representative. Goal: The goal is to create a function to perfrom stratified sampling but with some provided proportions of the considered variable instead of the original dataset proportions. The Function: def stratified_sampling_prior(df,column,prior_dict,sample_size): ... return df_sampled column: this is a categorical variable used to perform stratified sampling. prior_dict: it contains percentages by category in the selected variable. df: the input dataset. sample_size: this is the amount of instances we would like to have the sample. Example Here I provide a working data sample: import pandas as pd priors_dict = { "A":0.2 "B":0.2 "C":0.1 "D":0.5 } df = pd.DataFrame({"Category":["A"]*10+["B"]*50+["C"]*15+["D"]*100, "foo":["foo" for i in range(175)], "bar":["bar" for i in range(175)]}) With a traditional stratified sampling with a defined sample_size we would get the following output: df["Category"].value_counts()/df.shape[0]*100 D 57.14 B 28.57 C 8.57 A 5.71 However, the expected result when using the prior_dict the proportions of the output would be: df_sample = stratified_sampling_prior(df,"Category",prior_dict,sample_size=100): df_sample["Category"].value_counts()/df_sample.shape[0]*100 D 50.00 B 20.00 C 10.00 A 20.00 | From your question it is unclear if you need it to be a probabilistic function. That is, that the expectation of the proportions converge to the prior, or do you wish it to conform to the prior no matter what? If you want it to conform to the prior then I see 2 major issues: The randomness of the sampling could potentially be severely hurt - imagine a situation where all of the a category rows should be included. On the flip side, there are times where it will be virtually impossible to satisfy. If in your example there is 0 examples of A, there is no way to make it account for 20% of the sampling points: df = pd.DataFrame({"Category":["A"]*0+["B"]*50+["C"]*15+["D"]*100, "foo":["foo" for i in range(165)], "bar":["bar" for i in range(165)]}) Probabilistic Function In this case you can use the prior to calculate a per-sample weight. We need the present proportion of the category, this we can obtain by: df['Category'].value_counts(normalize=True) D 0.571429 B 0.285714 C 0.085714 A 0.057143 Assuming we begin with weight 1 for each entry, we now know how to scale each point to obtain the new weight: new_weight = desired_proportion / present_proportion In the case of D for instance, it means each example weight is new_weight = 1 * (0.5 / 0.571) = 0.875. We need to repeat it for each class. Here is a snippet that does that: prior = { "A":0.2, "B":0.2, "C":0.1, "D":0.5 } df['weight'] = 1 present_dist = df['Category'].value_counts(normalize=True) for cat, p in present_dist.items(): df.loc[df['Category'] == cat, 'weight'] = prior[cat] / (p + 1e-6) sampledf = df.sample(weights = df['weight']) Testing I ran some experiments in the results show that indeed we converge to the desired prior. 
I ran 100,000 experiments and this is the distribution that we got: {'A': 19917, 'B': 19982, 'C': 9975, 'D': 50126} That corresponds to: A: 19.92% B: 19.98% C: 9.975% D: 50.13% Edit: I inflated the df size and used sample size 10,000 to see if per sample we converge to the desired distribution: # df composition (you can see it vastly differs from our desired prior) A_l = 90000 B_l = 4500466 C_l = 5243287 D_l = 144144 tot = A_l + B_l + C_l + D_l df = pd.DataFrame({"Category":["A"]*A_l+["B"]*B_l+["C"]*C_l+["D"]*D_l, "foo":["foo" for _ in range(tot)], "bar":["bar" for _ in range(tot)]}) Here are 10 tests sampling 10k rows: {'A': 2007, 'B': 2038, 'C': 1029, 'D': 4926} {'A': 1999, 'B': 1974, 'C': 1042, 'D': 4985} {'A': 2018, 'B': 2024, 'C': 1011, 'D': 4947} {'A': 1996, 'B': 2046, 'C': 979, 'D': 4979} {'A': 2027, 'B': 2012, 'C': 1043, 'D': 4918} {'A': 1991, 'B': 2031, 'C': 1027, 'D': 4951} {'A': 1984, 'B': 1984, 'C': 1075, 'D': 4957} {'A': 1972, 'B': 2014, 'C': 962, 'D': 5052} {'A': 1975, 'B': 1998, 'C': 962, 'D': 5065} {'A': 2016, 'B': 1966, 'C': 994, 'D': 5024} You can see that regardless of the distribution changes we manage to enforce our prior. If you still want the deterministic function then tell me; however, I strongly recommend not to use it, as it will be mathematically incorrect and will cause you pain later on. | 4 | 1 |
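For convenience, the weighting idea above can be folded into the exact `stratified_sampling_prior(df, column, prior_dict, sample_size)` signature the question asks for. This is only a sketch of the probabilistic variant: the epsilon guard and the weight formula mirror the answer, while the call to `DataFrame.sample(n=..., weights=...)` without replacement is an added assumption.

```python
import pandas as pd

def stratified_sampling_prior(df, column, prior_dict, sample_size):
    """Sample rows so that the distribution of `column` approaches `prior_dict` in expectation."""
    present = df[column].value_counts(normalize=True)
    # weight = desired proportion / observed proportion, with a small epsilon as in the answer
    weights = df[column].map(lambda cat: prior_dict.get(cat, 0) / (present.get(cat, 0) + 1e-6))
    return df.sample(n=sample_size, weights=weights)

df = pd.DataFrame({"Category": ["A"] * 10 + ["B"] * 50 + ["C"] * 15 + ["D"] * 100,
                   "foo": "foo", "bar": "bar"})
prior_dict = {"A": 0.2, "B": 0.2, "C": 0.1, "D": 0.5}
sample = stratified_sampling_prior(df, "Category", prior_dict, sample_size=100)
print(sample["Category"].value_counts(normalize=True))
```

As discussed above, without replacement the sample can only match the prior when each category has enough rows; in the toy data `A` has just 10 rows, so a 100-row sample cannot actually reach 20% `A`.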
73,739,158 | 2022-9-16 | https://stackoverflow.com/questions/73739158/nodejs-convert-to-byte-array-code-return-different-results-compare-to-python | I got the following Javascript code and I need to convert it to Python(I'm not an expert in hashing so sorry for my knowledge on this subject) function generateAuthHeader(dataToSign) { let apiSecretHash = new Buffer("Rbju7azu87qCTvZRWbtGqg==", 'base64'); let apiSecret = apiSecretHash.toString('ascii'); var hash = CryptoJS.HmacSHA256(dataToSign, apiSecret); return hash.toString(CryptoJS.enc.Base64); } when I ran generateAuthHeader("abc") it returned +jgBeooUuFbhMirhh1KmQLQ8bV4EXjRorK3bR/oW37Q= So I tried writing the following Python code: def generate_auth_header(data_to_sign): api_secret_hash = bytearray(base64.b64decode("Rbju7azu87qCTvZRWbtGqg==")) hash = hmac.new(api_secret_hash, data_to_sign.encode(), digestmod=hashlib.sha256).digest() return base64.b64encode(hash).decode() But when I ran generate_auth_header("abc") it returned a different result aOGo1XCa5LgT1CIR8C1a10UARvw2sqyzWWemCJBJ1ww= Can someone tell me what is wrong with my Python code and what I need to change? The base64 is the string I generated myself for this post UPDATE: this is the document I'm working with //Converting the Rbju7azu87qCTvZRWbtGqg== (key) into byte array //Converting the data_to_sign into byte array //Generate the hmac signature it seems like apiSecretHash and api_secret_hash is different, but I don't quite understand as the equivalent of new Buffer() in NodeJS is bytearray() in python | It took me 2 days to look it up and ask for people in python discord and I finally got an answer. Let me summarize the problems: API secret hash from both return differents hash of the byte array javascript Javascript apiSecret = "E8nm,ns:\u0002NvQY;F*" Python api_secret_hash = b'E\xb8\xee\xed\xac\xee\xf3\xba\x82N\xf6QY\xbbF\xaa' once we replaced the hash with python code it return the same result def generate_auth_header(data_to_sign): api_secret_hash = "E8nm,ns:\u0002NvQY;F*".encode() hash = hmac.new(api_secret_hash, data_to_sign.encode(), digestmod=hashlib.sha256).digest() return base64.b64encode(hash).decode() encoding for ASCII in node.js you can find here https://github.com/nodejs/node/blob/a2a32d8beef4d6db3a8c520572e8a23e0e51a2f8/src/string_bytes.cc#L636-L647 case ASCII: if (contains_non_ascii(buf, buflen)) { char* out = node::UncheckedMalloc(buflen); if (out == nullptr) { *error = node::ERR_MEMORY_ALLOCATION_FAILED(isolate); return MaybeLocal<Value>(); } force_ascii(buf, out, buflen); return ExternOneByteString::New(isolate, out, buflen, error); } else { return ExternOneByteString::NewFromCopy(isolate, buf, buflen, error); } there is this force_ascii() function that is called when the data contains non-ASCII characters which is implemented here https://github.com/nodejs/node/blob/a2a32d8beef4d6db3a8c520572e8a23e0e51a2f8/src/string_bytes.cc#L531-L573 so we need to check for the hash the same as NodeJS one, so we get the final version of the Python code: def generate_auth_header(data_to_sign): # convert to bytearray so the for loop below can modify the values api_secret_hash = bytearray(base64.b64decode("Rbju7azu87qCTvZRWbtGqg==")) # "force" characters to be in ASCII range for i in range(len(api_secret_hash)): api_secret_hash[i] &= 0x7f; hash = hmac.new(api_secret_hash, data_to_sign.encode(), digestmod=hashlib.sha256).digest() return base64.b64encode(hash).decode() now it returned the same result as NodeJS one Thank you Mark from the python discord for helping me 
understand and fix this! I hope anyone in the future trying to convert a byte array from JavaScript to Python knows about this difference in the NodeJS Buffer() function. | 6 | 5 |
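A small self-contained check of the final Python port against the Node output quoted in the question (the expected string is the one the question reports for Node; per the answer the two implementations now agree):

```python
import base64
import hashlib
import hmac

def generate_auth_header(data_to_sign):
    api_secret_hash = bytearray(base64.b64decode("Rbju7azu87qCTvZRWbtGqg=="))
    for i in range(len(api_secret_hash)):
        api_secret_hash[i] &= 0x7f  # mimic Node's lossy 'ascii' decoding of the key
    digest = hmac.new(api_secret_hash, data_to_sign.encode(), digestmod=hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

node_result = "+jgBeooUuFbhMirhh1KmQLQ8bV4EXjRorK3bR/oW37Q="  # value the question reports for Node
print(generate_auth_header("abc") == node_result)  # expected True, per the answer
```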
73,765,587 | 2022-9-18 | https://stackoverflow.com/questions/73765587/how-to-get-a-warning-about-a-list-being-a-mutable-default-argument | I accidentally used a mutable default argument without knowing it. Is there a linter or tool that can spot this and warn me? | flake8-bugbear, Pylint, PyCharm, and Pyright can detect this: Bugbear has B006 (Do not use mutable data structures for argument defaults). Do not use mutable data structures for argument defaults. They are created during function definition time. All calls to the function reuse this one instance of that data structure, persisting changes between them. Pylint has W0102 (dangerous default value). Used when a mutable value as list or dictionary is detected in a default value for an argument. Pyright has reportCallInDefaultInitializer. Generate or suppress diagnostics for function calls, list expressions, set expressions, or dictionary expressions within a default value initialization expression. Such calls can mask expensive operations that are performed at module initialization time. This does what you want, but be aware that it also checks for function calls in default arguments. PyCharm has Default argument's value is mutable. This inspection detects when a mutable value as list or dictionary is detected in a default value for an argument. Default argument values are evaluated only once at function definition time, which means that modifying the default value of the argument will affect all subsequent calls of the function. Unfortunately, I can't find online documentation for this. If you have PyCharm, you can access all inspections and navigate to this inspection to find the documentation. | 8 | 8 |
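For context on what these linters flag, a minimal illustration of the pitfall and the usual `None`-default rewrite (function names are made up for the example):

```python
def append_bad(item, bucket=[]):  # flagged: the default list is created once, at definition time
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):  # idiomatic fix: create a fresh list per call
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad(1), append_bad(2))    # [1, 2] [1, 2] -- the same list is shared across calls
print(append_good(1), append_good(2))  # [1] [2]
```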
73,722,570 | 2022-9-14 | https://stackoverflow.com/questions/73722570/unable-to-write-files-in-a-gcp-bucket-using-gcsfuse | I have mounted a storage bucket on a VM using the command: gcsfuse my-bucket /path/to/mount After this I'm able to read files from the bucket in Python using Pandas, but I'm not able to write files nor create new folders. I have tried with Python and from the terminal using sudo but get the same error. I have also tried using the key_file from the bucket: sudo mount -t gcsfuse -o implicit_dirs,allow_other,uid=1000,gid=1000,key_file=Notebooks/xxxxxxxxxxxxxx10b3464a1aa9.json <BUCKET> <PATH> It does not throw errors when I run the code, but I'm still not able to write to the bucket. I have also tried: gcloud auth login But still have the same issue. | I ran into the same thing a while ago, which was really confusing. You have to set the correct access scope for the virtual machine so that anyone using the VM is able to call the storage API. The documentation shows that the default access scope for storage on a VM is read-only: When you create a new Compute Engine instance, it is automatically configured with the following access scopes: Read-only access to Cloud Storage: https://www.googleapis.com/auth/devstorage.read_only All you have to do is change this scope so that you are also able to write to storage buckets from the VM. You can find an overview of different scopes here. To apply the new scope to your VM, you have to first shut it down. Then from your local machine execute the following command: gcloud compute instances set-scopes INSTANCE_NAME \ --scopes=storage-rw \ --zone=ZONE You can do the same thing from the portal if you go to the settings of your VM, scroll all the way down, and choose "Set Access for each API". You have the same options when you create the VM for the first time. Below is an example of how you would do this: | 6 | 3 |
73,762,452 | 2022-9-18 | https://stackoverflow.com/questions/73762452/what-is-the-most-efficient-way-to-read-and-augment-copy-samples-and-change-some | Currently, I have managed to solve this but it is slower than what I need. It takes approximately: 1 hour for 500k samples, the entire dataset is ~100M samples, which requires ~200hours for 100M samples. Hardware/Software specs: RAM 8GB, Windows 11 64bit, Python 3.8.8 The problem: I have a dataset in .csv (~13GB) where each sample has a value and a respective start-end period of few months.I want to create a dataset where each sample will have the same value but referring to each specific month. For example: from: idx | start date | end date | month | year | value 0 | 20/05/2022 | 20/07/2022 | 0 | 0 | X to: 0 | 20/05/2022 | 20/07/2022 | 5 | 2022 | X 1 | 20/05/2022 | 20/07/2022 | 6 | 2022 | X 2 | 20/05/2022 | 20/07/2022 | 7 | 2022 | X Ideas: Manage to do it parallel (like Dask, but I am not sure how for this task). My implementation: Chunk read in pandas, augment in dictionaries , append to CSV. Use a function that, given a df, calculates for each sample the months from start date to end date and creates a copy sample for each month appending it to a dictionary. Then it returns the final dictionary. The calculations are done in dictionaries as they were found to be way faster than doing it in pandas. Then I iterate through the original CSV in chunks and apply the function at each chunk appending the resulting augmented df to another csv. The function: def augment_to_monthly_dict(chunk): ''' Function takes a df or subdf data and creates and returns an Augmented dataset with monthly data in Dictionary form (for efficiency) ''' dict={} l=1 for i in range(len(chunk)):#iterate through every sample # print(str(chunk.iloc[i].APO)[4:6] ) #Find the months and years period mst =int(float((str(chunk.iloc[i].start)[4:6])))#start month mend=int(str(chunk.iloc[i].end)[4:6]) #end month yst =int(str(chunk.iloc[i].start)[:4] )#start year yend=int(str(chunk.iloc[i].end)[:4] )#end year if yend==yst: months=[ m for m in range(mst,mend+1)] years=[yend for i in range(len(months))] elif yend==yst+1:# year change at same sample months=[m for m in range(mst,13)] years=[yst for i in range(mst,13)] months= months+[m for m in range(1, mend+1)] years= years+[yend for i in range(1, mend+1)] else: continue #months is a list of each month in the period of the sample and years is a same #length list of the respective years eg months=[11,12,1,2] , years= #[2021,2022,2022,2022] for j in range(len(months)):#iterate through list of months #copy the original sample make it a dictionary tmp=pd.DataFrame(chunk.iloc[i]).transpose().to_dict(orient='records') #change the month and year values accordingly (they were 0 for initiation) tmp[0]['month'] = months[j] tmp[0]['year'] = years[j] # Here could add more calcs e.g. 
drop irrelevant columns, change datatypes etc #to reduce size # #------------------------------------- #Append new row to the Augmented data dict[l] = tmp[0] l+=1 return dict Reading the original dataset (.csv ~13GB), augment using the function and append result to new .csv: chunk_count=0 for chunk in pd.read_csv('enc_star_logar_ek.csv', delimiter=';', chunksize=10000): chunk.index = chunk.reset_index().index aug_dict = augment_to_monthly_dict(chunk)#make chunk dictionary to work faster chunk_count+=1 if chunk_count ==1: #get the column names and open csv write headers and 1st chunk #Find the dicts keys, the column names only from the first dict(not reading all data) for kk in aug_dict.values(): key_names = [i for i in kk.keys()] print(key_names) break #break after first input dict #Open csv file and write ';' separated data with open('dic_to_csv2.csv', 'w', newline='') as csvfile: writer = csv.DictWriter(csvfile,delimiter=';', fieldnames=key_names) writer.writeheader() writer.writerows(aug_dict.values()) else: # Save the rest of the data chunks print('added chunk: ', chunk_count) with open('dic_to_csv2.csv', 'a', newline='') as csvfile: writer = csv.DictWriter(csvfile,delimiter=';', fieldnames=key_names) writer.writerows(aug_dict.values()) | Pandas efficiency comes in to play when you need to manipulate columns of data, and to do that Pandas reads the input row-by-row building up a series of data for each column; that's a lot of extra computation your problem doesn't benefit from, and in fact just slows your solution down. You actually need to manipulate rows, and for that the fastest way is to use the standard csv module; all you need to do is read a row in, write the derived rows out, and repeat: import csv import sys from datetime import datetime def parse_dt(s): return datetime.strptime(s, r"%d/%m/%Y") def get_dt_range(beg_dt, end_dt): """ Returns a range of (month, year) tuples, from beg_dt up-to-and-including end_dt. """ if end_dt < beg_dt: raise ValueError(f"end {end_dt} is before beg {beg_dt}") mo, yr = beg_dt.month, beg_dt.year dt_range = [] while True: dt_range.append((mo, yr)) if mo == 12: mo = 1 yr = yr + 1 else: mo += 1 if (yr, mo) > (end_dt.year, end_dt.month): break return dt_range fname = sys.argv[1] with open(fname, newline="") as f_in, open("output_csv.csv", "w", newline="") as f_out: reader = csv.reader(f_in) writer = csv.writer(f_out) writer.writerow(next(reader)) # transfer header for row in reader: beg_dt = parse_dt(row[1]) end_dt = parse_dt(row[2]) for mo, yr in get_dt_range(beg_dt, end_dt): row[3] = mo row[4] = yr writer.writerow(row) And, to compare with Pandas in general, let's examine @abokey's specifc Pandas solutionβI'm not sure if there is a better Pandas implementation, but this one kinda does the right thing: import sys import pandas as pd fname = sys.argv[1] df = pd.read_csv(fname) df["start date"] = pd.to_datetime(df["start date"], format="%d/%m/%Y") df["end date"] = pd.to_datetime(df["end date"], format="%d/%m/%Y") df["month"] = df.apply( lambda x: pd.date_range( start=x["start date"], end=x["end date"] + pd.DateOffset(months=1), freq="M" ).month.tolist(), axis=1, ) df["year"] = df["start date"].dt.year out = df.explode("month").reset_index(drop=True) out.to_csv("output_pd.csv") Let's start with the basics, though, do the programs actually do the right thing. 
Given this input: idx,start date,end date,month,year,value 0,20/05/2022,20/05/2022,0,0,X 0,20/05/2022,20/07/2022,0,0,X 0,20/12/2022,20/01/2023,0,0,X My program, ./main.py input.csv, produces: idx,start date,end date,month,year,value 0,20/05/2022,20/05/2022,5,2022,X 0,20/05/2022,20/07/2022,5,2022,X 0,20/05/2022,20/07/2022,6,2022,X 0,20/05/2022,20/07/2022,7,2022,X 0,20/12/2022,20/01/2023,12,2022,X 0,20/12/2022,20/01/2023,1,2023,X I believe that's what you're looking for. The Pandas solution, ./main_pd.py input.csv, produces: ,idx,start date,end date,month,year,value 0,0,2022-05-20,2022-05-20,5,2022,X 1,0,2022-05-20,2022-07-20,5,2022,X 2,0,2022-05-20,2022-07-20,6,2022,X 3,0,2022-05-20,2022-07-20,7,2022,X 4,0,2022-12-20,2023-01-20,12,2022,X 5,0,2022-12-20,2023-01-20,1,2022,X Ignoring the added column for the frame index, and the fact the date format has been changed (I'm pretty sure that can be fixed with some Pandas directive I don't know), it still does the right thing with regards to creating new rows with the appropriate date range. So, both do the right thing. Now, on to performance. I duplicated your initial sample, just the 1 row, for 1_000_000 and 10_000_000 rows: import sys nrows = int(sys.argv[1]) with open(f"input_{nrows}.csv", "w") as f: f.write("idx,start date,end date,month,year,value\n") for _ in range(nrows): f.write("0,20/05/2022,20/07/2022,0,0,X\n") I'm running a 2020, M1 MacBook Air with the 2TB SSD (which gives very good read/write speeds): 1M rows (sec, RAM) 10M rows (sec, RAM) csv module 7.8s, 6MB 78s, 6MB Pandas 75s, 569MB 750s, 5.8GB You can see both programs following a linear increase in time-to-run that follows the increase in the size of rows. The csv module's memory remains constanly non-existent because it's streaming data in-and-out (holding on to virtually nothing); Pandas's memory rises with the size of rows it has to hold so that it can do the actual date-range computations, again on whole columns. Also, not shown, but for the 10M-rows Pandas test, Pandas spent nearly 2 minutes just writing the CSVβlonger than the csv-module approach took to complete the entire task. Now, for all my putting-down of Pandas, the solution is far fewer lines, and is probably bug free from the get-go. I did have a problem writing get_dt_range(), and had to spend about 5 minutes thinking about what it actually needed to do and debug it. You can view my setup with the small test harness, and the results, here. | 5 | 3 |
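As a side note on the date logic shared by both solutions, the month/year expansion that `get_dt_range` implements by hand can also be expressed with pandas' `period_range`; this is just an illustration, not part of either benchmarked program:

```python
import pandas as pd

start, end = pd.Timestamp("2022-12-20"), pd.Timestamp("2023-01-20")
months = pd.period_range(start, end, freq="M")  # monthly periods covering the interval
print([(p.month, p.year) for p in months])      # [(12, 2022), (1, 2023)]
```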
73,724,454 | 2022-9-15 | https://stackoverflow.com/questions/73724454/plot-density-function-on-sphere-surface-using-plotly-python | I'm interested in plotting a real-valued function f(x,y,z)=a, where (x,y,z) is a 3D point on the sphere and a is a real number. I calculate the Cartesian coordinates of the points of the sphere as follows, but I have no clue on how to visualize the value of f on each of those points. import plotly.graph_objects as go import numpy as np fig = go.Figure(layout=go.Layout(title=go.layout.Title(text=title), hovermode=False)) # Create mesh grid for spherical coordinates phi, theta = np.mgrid[0.0:np.pi:100j, 0.0:2.0 * np.pi:100j] # Get Cartesian mesh grid x = np.sin(phi) * np.cos(theta) y = np.sin(phi) * np.sin(theta) z = np.cos(phi) # Plot sphere surface self.fig.add_surface(x=x, y=y, z=z, opacity=0.35) fig.show() I would imagine/expect/like a visualization like this Additionally, I also have the gradient of f calculated in closed-form (i.e., for each (x,y,z) I calculate the 3D-dimensional gradient of f). Is there a way of plotting this vector field, similarly to what is shown in the figure above? | Here's an answer that's far from perfect, but hopefully that's enough for you to build on. For the sphere itself, I don't know of any "shortcut" to do something like that in plotly, so my approach is simply to manually create a sphere mesh. Generating the vertices is simple, for example like you did - the slightly more tricky part is figuring out the vertex indices for the triangles (which depends on the vertex generation scheme). There are various algorithms to do that smoothly (i.e. generating a sphere with no "tip"), I hacked something crude just for the demonstration. Then we can use the Mesh3d object to display the sphere along with the intensities and your choice of colormap: N = 100 # Sphere resolution (both rings and segments, can be separated to different constants) theta, z = np.meshgrid(np.linspace(-np.pi, np.pi, N), np.linspace(-1, 1, N)) r = np.sqrt(1 - z ** 2) x = r * np.cos(theta) y = r * np.sin(theta) x = x.ravel() y = y.ravel() z = z.ravel() # Triangle indices indices = np.arange(N * (N - 1) - 1) i1 = np.concatenate([indices, (indices // N + 1) * N + (indices + 1) % N]) i2 = np.concatenate([indices + N, indices // N * N + (indices + 1) % N]) i3 = np.concatenate([(indices // N + 1) * N + (indices + 1) % N, indices]) # Point intensity function def f(x, y, z): return (np.cos(x * 2) + np.sin(y ** 2) + np.sin(z) + 3) / 6 fig = go.Figure(data=[ go.Mesh3d( x=x, y=y, z=z, colorbar_title='f(x, y, z)', colorscale=[[0, 'gold'], [0.5, 'mediumturquoise'], [1, 'magenta']], intensity = f(x, y, z), i = i1, j = i2, k = i3, name='y', showscale=True ) ]) fig.show() This yields the following interactive plot: To add the vector field you can use the Cone plot; this requires some tinkering because when I simply draw the cones at the same x, y, z position as the sphere, some of the cones are partially or fully occluded by the sphere. So I generate another sphere, with a slightly larger radius, and place the cones there. I also played with some lighting parameters to make it black like in your example. 
The full code looks like this: N = 100 # Sphere resolution (both rings and segments, can be separated to different constants) theta, z = np.meshgrid(np.linspace(-np.pi, np.pi, N), np.linspace(-1, 1, N)) r = np.sqrt(1 - z ** 2) x = r * np.cos(theta) y = r * np.sin(theta) x = x.ravel() y = y.ravel() z = z.ravel() # Triangle indices indices = np.arange(N * (N - 1) - 1) i1 = np.concatenate([indices, (indices // N + 1) * N + (indices + 1) % N]) i2 = np.concatenate([indices + N, indices // N * N + (indices + 1) % N]) i3 = np.concatenate([(indices // N + 1) * N + (indices + 1) % N, indices]) # Point intensity function def f(x, y, z): return (np.cos(x * 2) + np.sin(y ** 2) + np.sin(z) + 3) / 6 # Vector field function def grad_f(x, y, z): return np.stack([np.cos(3 * y + 5 * x), np.sin(z * y), np.cos(4 * x - 3 * y + z * 7)], axis=1) # Second sphere for placing cones N2 = 50 # Smaller resolution (again rings and segments combined) R2 = 1.05 # Slightly larger radius theta2, z2 = np.meshgrid(np.linspace(-np.pi, np.pi, N2), np.linspace(-R2, R2, N2)) r2 = np.sqrt(R2 ** 2 - z2 ** 2) x2 = r2 * np.cos(theta2) y2 = r2 * np.sin(theta2) x2 = x2.ravel() y2 = y2.ravel() z2 = z2.ravel() uvw = grad_f(x2, y2, z2) fig = go.Figure(data=[ go.Mesh3d( x=x, y=y, z=z, colorbar_title='f(x, y, z)', colorscale=[[0, 'gold'], [0.5, 'mediumturquoise'], [1, 'magenta']], intensity = f(x, y, z), i = i1, j = i2, k = i3, name='y', showscale=True ), go.Cone( x=x2, y=y2, z=z2, u=uvw[:, 0], v=uvw[:, 1], w=uvw[:, 2], sizemode='absolute', sizeref=2, anchor='tail', lighting_ambient=0, lighting_diffuse=0, opacity=.2 ) ]) fig.show() And yields this plot: Hope this helps. There are a lot of tweaks to the display, and certainly better ways to construct a sphere mesh (e.g. see this article), so there should be a lot of freedom there (albeit at the cost of some work). Good luck! | 4 | 5 |
73,763,827 | 2022-9-18 | https://stackoverflow.com/questions/73763827/python-equality-statement-of-the-form-a-b-in-c-d-e | I just came across some python code with the following statement: if a==b in [c,d,e]: ... It turns out that: >>> 9==9 in [1,2,3] False >>> 9==9 in [1,2,3,9] True >>> (9==9) in [1,2,3,9] True >>> 9==(9 in [1,2,3,9]) False >>> True in [1,2,3,9] True >>> True in [] False >>> False in [] False >>> False in [1,2,3] False Am I right in assuming that a==b in [c,d,e] is equivalent to (a==b) in [c,d,e] and therefore only really makes sense if [c,d,e] is a list of True/False values? And in the case of the code I saw b is always in the list [c,d,e]. Would it then be equivalent to simply using a==b? | Am I right in assuming that a==b in [c,d,e] is equivalent to (a==b) in [c,d,e] No. Since both == and in are comparison operators, the expression a == b in [c, d, e] is equivalent to (a == b) and (b in [c, d, e]) since all comparison operators have the same precedence but can be chained. and therefore only really makes sense if [c,d,e] is a list of True/False values? It can also make sense to check if a boolean value is contained in a list of integers. Since True is considered equivalent to 1, and False is considered equivalent to 0 (see The standard type hierarchy), the result of this check can even be True. | 5 | 3 |
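A quick script that reproduces the chaining rule from the answer with the numbers used in the question:

```python
a, b = 9, 9

print(a == b in [1, 2, 3])               # False: 9 == 9 holds, but 9 in [1, 2, 3] does not
print(a == b in [1, 2, 3, 9])            # True: both parts of the chain hold
print((a == b) and (b in [1, 2, 3, 9]))  # True: explicit spelling of the chained form
print((a == b) in [1, 2, 3])             # True: True == 1, and 1 is in the list
```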
73,761,571 | 2022-9-18 | https://stackoverflow.com/questions/73761571/row-wise-cumulative-mean-across-grouped-columns-using-pandas | I would like to create multiple columns which show the row-wise cumulative mean for grouped columns. Here is some sample data: import pandas as pd data = [[1, 4, 6, 10, 15, 40, 90, 100], [2, 5, 3, 11, 25, 50, 90, 120], [3, 7, 9, 14, 35, 55, 100, 120]] df = pd.DataFrame(data, columns=['a1', 'a2', 'a3', 'a4', 'b1', 'b2', 'b3', 'b4']) a1 a2 a3 a4 b1 b2 b3 b4 0 1 4 6 10 15 40 90 100 1 2 5 3 11 25 50 90 120 2 3 7 9 14 35 55 100 120 What I want is to generate new columns like this: New column a1_2 is calculated by the mean of columns a1 and a2 row-wise. New column a1_3 is calculated by the mean of columns a1, a2 and a3 row-wise. New column a1_4 is calculated by the mean of columns a1, a2, a3 and a4 row-wise. The same should happen for the grouped columns with b. Of course you can do this manually, but this is not ideal when you have too many variables. Here is the expected output: df['a1_2'] = df[['a1', 'a2']].mean(axis=1) df['a1_3'] = df[['a1', 'a2', 'a3']].mean(axis=1) df['a1_4'] = df[['a1', 'a2', 'a3', 'a4']].mean(axis=1) df['b1_2'] = df[['b1', 'b2']].mean(axis=1) df['b1_3'] = df[['b1', 'b2', 'b3']].mean(axis=1) df['b1_4'] = df[['b1', 'b2', 'b3', 'b4']].mean(axis=1) a1 a2 a3 a4 b1 b2 b3 b4 a1_2 a1_3 a1_4 b1_2 b1_3 b1_4 0 1 4 6 10 15 40 90 100 2.5 3.666667 5.25 27.5 48.333333 61.25 1 2 5 3 11 25 50 90 120 3.5 3.333333 5.25 37.5 55.000000 71.25 2 3 7 9 14 35 55 100 120 5.0 6.333333 8.25 45.0 63.333333 77.50 So I was wondering if there is some automatic way of doing this? | expanding.mean for c in ('a', 'b'): m = df.filter(like=c).expanding(axis=1).mean().iloc[:, 1:] df[m.columns.str.replace(r'(\d+)$', r'1_\1', regex=True)] = m Result a1 a2 a3 a4 b1 b2 b3 b4 a1_2 a1_3 a1_4 b1_2 b1_3 b1_4 0 1 4 6 10 15 40 90 100 2.5 3.666667 5.25 27.5 48.333333 61.25 1 2 5 3 11 25 50 90 120 3.5 3.333333 5.25 37.5 55.000000 71.25 2 3 7 9 14 35 55 100 120 5.0 6.333333 8.25 45.0 63.333333 77.50 Another option: out = [value.expanding(axis=1).mean() .rename(columns = lambda col: f"{col[0]}1_{col[1]}") for _, value in df.groupby(df.columns.str[0], axis = 1)] pd.concat([df]+out, axis = 1) a1 a2 a3 a4 b1 b2 b3 b4 a1_1 a1_2 a1_3 a1_4 b1_1 b1_2 b1_3 b1_4 0 1 4 6 10 15 40 90 100 1.0 2.5 3.666667 5.25 15.0 27.5 48.333333 61.25 1 2 5 3 11 25 50 90 120 2.0 3.5 3.333333 5.25 25.0 37.5 55.000000 71.25 2 3 7 9 14 35 55 100 120 3.0 5.0 6.333333 8.25 35.0 45.0 63.333333 77.50 | 5 | 3 |
73,749,995 | 2022-9-16 | https://stackoverflow.com/questions/73749995/why-does-matplotlib-3-6-0-on-macos-throw-an-attributeerror-when-showing-a-plot | I have the following straightforward code: import matplotlib.pyplot as plt x = [1,2,3,4] y = [34, 56, 78, 21] plt.plot(x, y) plt.show() But after changing my MacBook Pro to the M1 chip, I'm getting the following error: Traceback (most recent call last): File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/main.py", line 291, in <module> plt.plot(x, y) File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 2728, in plot return gca().plot( File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 2225, in gca return gcf().gca() File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 830, in gcf return figure() File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/venv/lib/python3.8/site-packages/matplotlib/_api/deprecation.py", line 454, in wrapper return func(*args, **kwargs) File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 771, in figure manager = new_figure_manager( File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 346, in new_figure_manager _warn_if_gui_out_of_main_thread() File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 336, in _warn_if_gui_out_of_main_thread if (_get_required_interactive_framework(_get_backend_mod()) and File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 206, in _get_backend_mod switch_backend(dict.__getitem__(rcParams, "backend")) File "/Users/freddy/PycharmProjects/TPMetodosNoParametricos/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 266, in switch_backend canvas_class = backend_mod.FigureCanvas AttributeError: module 'backend_interagg' has no attribute 'FigureCanvas' Why does the code throw this error? My matplotlib version is 3.6.0 | i had the same problem today on a different machine in the same matplotlib version. I downgrade to Version 3.5.0 and now it works. | 8 | 13 |
73,743,437 | 2022-9-16 | https://stackoverflow.com/questions/73743437/how-can-i-add-a-column-of-empty-arrays-to-polars-dataframe | I am trying to add a column of empty lists to a polars dataframe in python. My code import polars as pl a = pl.DataFrame({'a': [1, 2, 3]}) a.with_columns([pl.lit([]).alias('b')]) throws Traceback (most recent call last): File "<input>", line 1, in <module> a.with_columns([pl.lit([]).alias('b')]) File "/usr/local/lib/python3.10/site-packages/polars/internals/lazy_functions.py", line 767, in lit return pli.wrap_expr(pylit(item, allow_object)) ValueError: could not convert value '[]' as a Literal How can I create this column? | This works for me. I wrote pl.Series() with empty lists [] as values: import polars as pl from polars import col df = pl.DataFrame({'a': [1, 2, 3]}) # .lazy() df = df.with_columns([ col('a'), pl.Series('empty lists', [[]], dtype=pl.List), pl.lit(None).alias('null column'), ]) print(df) # print(df.collect()) (in case of LazyFrame) ┌─────┬─────────────┬─────────────┐ │ a ┆ empty lists ┆ null column │ │ --- ┆ --- ┆ --- │ │ i64 ┆ list[f64] ┆ bool │ ╞═════╪═════════════╪═════════════╡ │ 1 ┆ [] ┆ null │ ├─────┼─────────────┼─────────────┤ │ 2 ┆ [] ┆ null │ ├─────┼─────────────┼─────────────┤ │ 3 ┆ [] ┆ null │ └─────┴─────────────┴─────────────┘ | 8 | 11 |
73,731,982 | 2022-9-15 | https://stackoverflow.com/questions/73731982/update-to-python-3-10-in-google-cloud-shell | I want to upgrade to python 3.10 in my google cloud shell, but I failed to do so. I found two methods online, which were unsuccessful so far. Using the deadsnakes repo: I tried the following commands: sudo add-apt-repository ppa:deadsnakes/ppa sudo apt update But I get an Error The repository 'http://ppa.launchpad.net/deadsnakes/ppa/ubuntu kinetic Release' does not have a Release file. Install from the source: I followed the tutorial in this link. I am able to install python 3.10. But when my Google Cloud Shell restarts, I am unable to run python3.10. It states -bash: python3.10: command not found. | This worked out for me. The default python version remained 3.10.7 even after the shell restarted. # install pyenv to install python on persistent home directory curl https://pyenv.run | bash # add to path echo 'export PATH="$HOME/.pyenv/bin:$PATH"' >> ~/.bashrc echo 'eval "$(pyenv init -)"' >> ~/.bashrc echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.bashrc # updating bashrc source ~/.bashrc # install python 3.10.7 and make default pyenv install 3.10.7 pyenv global 3.10.7 # execute python credit to @ali-khorso and @yungchin. Link to the post | 4 | 7 |
73,740,018 | 2022-9-16 | https://stackoverflow.com/questions/73740018/how-to-iterate-through-a-nested-dictionary-with-varying-depth-and-make-a-copy-w | I have a dictionary {'n11' : {'n12a': {'n13a' : 10 , 'n13b' : "some text"}, 'n12b': {'n13c' : {'n14a': 40} } }, 'n21': {'n22a' : 20 } } And I want to iterate through the dictionary until I reach a value which is not a dictionary, and replace it with the "full path" to that value. {'n11' : {'n12a': {'n13a' : 'n11_n12a_n13a' , 'n13b' : 'n11_n12a_n13b'}, 'n12b': {'n13c' : {'n14a': 'n11_n12b_n13c_n14a'} } }, 'n21': {'n22a' : 'n21_n22a' } } I know how to iterate through a nested dictionary with the following function, but I don't understand how to copy the same structure but with the updated value. def myprint(d,path=[]): for k, v in d.items(): if isinstance(v, dict): path.append(k) myprint(v,path) else: print('_'.join(path)) output: 'n11_n12a_n13a' 'n11_n12a_n13b' 'n11_n12b_n13c_n14a' 'n21_n22a' But how do I get it into another dictionary? | The most efficient way to do this would be using the same recursive function to not generate excess performance overhead. To copy the dictionary you can use copy.deepcopy() and then take that copy through the function, but replace the values instead of just printing the path: import copy data = {'n11': {'n12a': {'n13a': 10, 'n13b': "some text"}, 'n12b': {'n13c': {'n14a': 40}}}, 'n21': {'n22a': 20}} def myreplace(d, path=[]): for k, v in d.items(): if isinstance(v, dict): myreplace(v, path + [k]) else: print('_'.join(path + [k])) d[k] = '_'.join(path + [k]) return d copy_of_data = copy.deepcopy(data) copy_of_data = myreplace(copy_of_data) print(data) print(copy_of_data) I had to modify the original function slightly to get it working. | 4 | 5 |
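As a design alternative to deep-copying and then mutating, the same traversal can build a fresh nested dictionary recursively; this sketch follows the structure of the accepted answer but is only one possible variant (the function name is made up):

```python
def path_dict(d, path=()):
    """Return a new nested dict whose leaves are replaced by their underscore-joined paths."""
    return {
        k: path_dict(v, path + (k,)) if isinstance(v, dict) else "_".join(path + (k,))
        for k, v in d.items()
    }

data = {'n11': {'n12a': {'n13a': 10, 'n13b': "some text"},
                'n12b': {'n13c': {'n14a': 40}}},
        'n21': {'n22a': 20}}
print(path_dict(data))
# {'n11': {'n12a': {'n13a': 'n11_n12a_n13a', 'n13b': 'n11_n12a_n13b'}, ...}, 'n21': {'n22a': 'n21_n22a'}}
```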
73,740,722 | 2022-9-16 | https://stackoverflow.com/questions/73740722/smtpsenderrefused-at-password-reset-django | settings.py EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' EMAIL_HOST = 'smtp.gmail.com' EMAIL_PORT = 587 EMAIL_USE_TLS = True EMAIL_HOST_USER = os.environ.get('USER_EMAIL') EMAIL_HOST_PASSWORD = os.environ.get('USER_PASS') Error: SMTPSenderRefused at /password-reset/ (530, b'5.7.0 Authentication Required. Learn more at\n5.7.0 https://support.google.com/mail/?p=WantAuthError h10-20020a170902680a00b0015e8d4eb1d5sm14008586plk.31 - gsmtp', 'webmaster@localhost') Request Method: POST Request URL: http://localhost:8000/password-reset/ Django Version: 4.1.1 Exception Type: SMTPSenderRefused Exception Value: (530, b'5.7.0 Authentication Required. Learn more at\n5.7.0 https://support.google.com/mail/?p=WantAuthError h10-20020a170902680a00b0015e8d4eb1d5sm14008586plk.31 - gsmtp', 'webmaster@localhost') Exception Location: C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\smtplib.py, line 887, in sendmail Raised during: django.contrib.auth.views.PasswordResetView Python Executable: D:\Django\Tutorial\env\Scripts\python.exe Python Version: 3.10.2 Python Path: ['D:\\Django\\Tutorial\\django_project', 'C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python310\\python310.zip', 'C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python310\\DLLs', 'C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python310', 'D:\\Django\\Tutorial\\env', 'D:\\Django\\Tutorial\\env\\lib\\site-packages'] Server time: Fri, 16 Sep 2022 06:10:41 +0000 urls.py: path('password-reset/', auth_views.PasswordResetView.as_view( template_name='users/password_reset.html' ), name='password_reset'), path('password-reset/done/', auth_views.PasswordResetDoneView.as_view( template_name='users/password_reset_done.html' ), name='password_reset_done'), path('password-reset-confirm/<uidb64>/<token>/', auth_views.PasswordResetConfirmView.as_view( template_name='users/password_reset_confirm.html' ), name='password_reset_confirm'), path('password-reset-complete/', auth_views.PasswordResetCompleteView.as_view( template_name='users/password_reset_complete.html' ), name='password_reset_complete'), path("", include('blog.urls')), I am trying to set an email password reset in my Django app but get this unexpected error. What I am trying to do here is I am using Django inbuild views PasswordResetView, PasswordResetDoneView, PasswordResetConfirmView to reset my registered account's password through email. Can you help me with this or provide me with some links so I can reach the core of this error? | I have solved the issue the issue was with my Gmail account you need to go to the settings > security then create a new app password and then replace the Gmail password in settings.py with your newly created app password. I think your Gmail should also have 2-factor authentication, mine was already on but if you try this I think you should first try to turn on the 2-factor authentication and then try all these steps. | 4 | 3 |
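Once the app password is in place, a quick way to confirm the SMTP settings independently of the password-reset flow is to send a test message from `python manage.py shell` inside the project; the recipient address below is a placeholder:

```python
# run inside: python manage.py shell
from django.conf import settings
from django.core.mail import send_mail

send_mail(
    subject="SMTP test",
    message="If this arrives, EMAIL_HOST_USER / EMAIL_HOST_PASSWORD are working.",
    from_email=settings.EMAIL_HOST_USER,
    recipient_list=["you@example.com"],  # placeholder recipient
    fail_silently=False,
)
```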
73,740,568 | 2022-9-16 | https://stackoverflow.com/questions/73740568/pandas-faster-method-than-df-atx-y | I have df1 df1 = pd.DataFrame({'x':[1,2,3,5], 'y':[2,3,4,6], 'value':[1.5,2.0,0.5,3.0]}) df1 x y value 0 1 2 1.5 1 2 3 2.0 2 3 4 0.5 3 5 6 3.0 and I want to assign the value at x and y coordinates to another dataframe df2 df2 = pd.DataFrame(0.0, index=[x for x in range(0,df1['x'].max()+1)], columns=[y for y in range(0,df1['y'].max()+1)]) df2 0 1 2 3 4 5 6 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 by for x, y, value in zip(df1['x'],df1['y'],df1['value']): df2.at[x,y] = value to give 0 1 2 3 4 5 6 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 0.0 0.0 1.5 0.0 0.0 0.0 0.0 2 0.0 0.0 0.0 2.0 0.0 0.0 0.0 3 0.0 0.0 0.0 0.0 0.5 0.0 0.0 4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 5 0.0 0.0 0.0 0.0 0.0 0.0 3.0 However, it is a bit slow because I have a long df1. Do we have a faster method than df.at[x,y]? | You can avoid create zero df2 and using df.at method by DataFrame.pivot, DataFrame.fillna and DataFrame.reindex: df2 = (df1.pivot('x','y','value') .fillna(0) .reindex(index=range(df1['x'].max()+1), columns=range(df1['y'].max()+1), fill_value=0)) print (df2) y 0 1 2 3 4 5 6 x 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 0.0 0.0 1.5 0.0 0.0 0.0 0.0 2 0.0 0.0 0.0 2.0 0.0 0.0 0.0 3 0.0 0.0 0.0 0.0 0.5 0.0 0.0 4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 5 0.0 0.0 0.0 0.0 0.0 0.0 3.0 | 4 | 8 |
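If the target grid is purely numeric, a NumPy scatter assignment is another option worth benchmarking against the pivot approach; this is a sketch rather than a drop-in replacement, since it rebuilds the frame from a plain array:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'x': [1, 2, 3, 5], 'y': [2, 3, 4, 6], 'value': [1.5, 2.0, 0.5, 3.0]})

arr = np.zeros((df1['x'].max() + 1, df1['y'].max() + 1))
arr[df1['x'].to_numpy(), df1['y'].to_numpy()] = df1['value'].to_numpy()  # vectorised scatter
df2 = pd.DataFrame(arr)
print(df2)
```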
73,739,633 | 2022-9-16 | https://stackoverflow.com/questions/73739633/python-ast-libary-how-to-retreive-value-of-a-specific-node | I have the following script: extras_require={"dev": dev_reqs} x = 3 entry_points={ "console_scripts": ["main=smamesdemo.run.main:main"] } I want to retrieve the following part as python data, dict and list not Nodes. entry_points={ "console_scripts": ["main=smamesdemo.run.main:main"] } I have tried the following, but I am not actually able to get the values with this: import ast class CustomNodeTransformer(ast.NodeTransformer): def visit_Assign(self, node): print(node.value.__dict__) return node with open("./setup.py") as f: code = f.read() node = ast.parse(code) CustomNodeTransformer().visit(node) {'keys': [<ast.Constant object at 0x0000029E992962C0>], 'values': [<ast.Name object at 0x0000029E99296260>], 'lineno': 1, 'col_offset': 15, 'end_lineno': 1, 'end_col_offset': 32} Expected: entry_points={"console_scripts": ["main=smamesdemo.run.main:main"]} Can anyone help me to achieve this? | You should use ast.NodeVisitor rather than ast.NodeTransformer since you are not making modifications to nodes. But for the purpose of retrieving a single node, it is simpler to use ast.walk instead to traverse the AST in a flat manner and find the ast.Assign node whose target id is 'entry_points', and then wrap its value in an ast.Expression node for compilation with compile and evaluation with eval into an actual Python dict. Remember to use ast.fix_missing_locations to realign the line numbers and offsets of the node when you plant it into another tree: import ast code = '''extras_require={"dev": dev_reqs} x = 3 entry_points={ "console_scripts": ["main=smamesdemo.run.main:main"] }''' for node in ast.walk(ast.parse(code)): if isinstance(node, ast.Assign) and node.targets[0].id == 'entry_points': expr = ast.Expression(body=node.value) ast.fix_missing_locations(expr) entry_points = eval(compile(expr, filename='', mode='eval')) print(entry_points) This outputs: {'console_scripts': ['main=smamesdemo.run.main:main']} Demo: https://replit.com/@blhsing/OrangeredCurlyApplication | 5 | 2 |
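When the assigned value consists only of literals, as `entry_points` does here, `ast.literal_eval` accepts the AST node directly and avoids the `compile`/`eval` step; a compact variant of the same idea (it would fail for values that reference names, such as `extras_require` above, so it is not a general replacement):

```python
import ast

code = '''extras_require={"dev": dev_reqs}
x = 3
entry_points={
    "console_scripts": ["main=smamesdemo.run.main:main"]
}'''

for node in ast.walk(ast.parse(code)):
    if isinstance(node, ast.Assign) and node.targets[0].id == 'entry_points':
        entry_points = ast.literal_eval(node.value)  # fine here: the value is made of literals only
        print(entry_points)  # {'console_scripts': ['main=smamesdemo.run.main:main']}
```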
73,735,974 | 2022-9-15 | https://stackoverflow.com/questions/73735974/convert-dataclass-of-dataclass-to-json-string | I have a json string that I want to read, convert it to an object that I can manipulate, and then convert it back into a json string. I am utilizing the python 3.10 dataclass, and one of the attributes of the class is another class (mySubClass). When I call json.loads(myClass), I get the following error: TypeError: Object of type mySubClass is not JSON serializable. Is there a way I can instantiate the dataclass myClass with everything it needs (including mySubClass), and then have a "post init operation" that will convert myClass.mySubClass into a simple json str? Or am I going about this the wrong way? My original goal was to have the following: import json from dataclasses import dataclass @dataclass mySubClass: sub_item1: str sub_item2: str @dataclass myClass: item1: str item2: mySubClass() ... convert_new_jsonStr_toObj = json.loads(received_json_str, object_hook=lambda d: SimpleNamespace(**d)) ... #: Get new values/do "stuff" to the received json string myClass_to_jsonStr = json.dumps(myClass(item1=convert_new_jsonStr_toObj.item1, item2=mySubClass(sub_item1=convert_new_jsonStr_toObj.sub_item1, sub_item2=convert_new_jsonStr_toObj.sub_item2))) ... #: Final json will look something like: processed_json_str = "{ "item1" : "new_item1", "item2" : { "sub_item1": "new_sub_item1", "sub_item2": "new_sub_item2" }" } #: send processed_json_str back out... #: Note: "processed_json_str" has the same structure as "received_json_str". | If I've understood your question correctly, you can do something like this:: import json import dataclasses @dataclasses.dataclass class mySubClass: sub_item1: str sub_item2: str @dataclasses.dataclass class myClass: item1: str item2: mySubClass # We need a __post_init__ method here because otherwise # item2 will contain a python dictionary, rather than # an instance of mySubClass. def __post_init__(self): self.item2 = mySubClass(**self.item2) sampleData = ''' { "item1": "This is a test", "item2": { "sub_item1": "foo", "sub_item2": "bar" } } ''' myvar = myClass(**json.loads(sampleData)) myvar.item2.sub_item1 = 'modified' print(json.dumps(dataclasses.asdict(myvar))) Running this produces: {"item1": "This is a test", "item2": {"sub_item1": "modified", "sub_item2": "bar"}} As a side note, this all becomes easier if you use a more fully featured package like pydantic: import json from pydantic import BaseModel class mySubClass(BaseModel): sub_item1: str sub_item2: str class myClass(BaseModel): item1: str item2: mySubClass sampleData = ''' { "item1": "This is a test", "item2": { "sub_item1": "foo", "sub_item2": "bar" } } ''' myvar = myClass(**json.loads(sampleData)) myvar.item2.sub_item1 = 'modified' print(myvar.json()) | 6 | 7 |
73,735,592 | 2022-9-15 | https://stackoverflow.com/questions/73735592/how-to-display-the-label-from-models-textchoices-in-the-template | The Django docs says that one can use .label, but it does not work in the template. class Model(models.Model): class ModelChoices(models.TextChoices): ENUM = 'VALUE', 'Label' model_choice = models.CharField(choices=ModelChoices.choices) In the template object.model_choice displays the value ('VALUE'). object.model_choice.label displays nothing. How is it possible to get the label ('Label') in the template? | You'd use get_{field_name}_display Python modelObj.get_model_choice_display() Template {{modelObj.get_model_choice_display}} | 5 | 8 |
73,715,821 | 2022-9-14 | https://stackoverflow.com/questions/73715821/jupyter-lab-issue-displaying-widgets-javascript-error | I have troubles replicating a JupyterLab install on a new PC. It is working fine on my previous one. I am unable to display simple widgets (like a checkbox from ipywidgets or ipyvuetify). I checked that jupyter-widgets is enabled with jupyter labextension list. The results is : jupyter-vue v1.7.0 enabled ok jupyter-vuetify v1.8.4 enabled ok @jupyter-widgets/jupyterlab-manager v5.0.2 enabled ok (python, jupyterlab_widgets) In the notebook, when i try to display a widget, the cell displays a Javascrip error : [Open Browser Console for more detailed log - Double click to close this message] Failed to load model class 'CheckboxModel' from module '@jupyter-widgets/controls' Error: Module @jupyter-widgets/controls, version ^1.5.0 is not registered, however, 2.0.0 is at f.loadClass (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab- manager/static/134.083e6b37f2f7b2f04b5e.js?v=083e6b37f2f7b2f04b5e:1:74976) at f.loadModelClass (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab- manager/static/150.467514c324d2bcc23502.js?v=467514c324d2bcc23502:1:10721) at f._make_model (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab- manager/static/150.467514c324d2bcc23502.js?v=467514c324d2bcc23502:1:7517) at f.new_model (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab- manager/static/150.467514c324d2bcc23502.js?v=467514c324d2bcc23502:1:5137) at f.handle_comm_open (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab- manager/static/150.467514c324d2bcc23502.js?v=467514c324d2bcc23502:1:3894) at _handleCommOpen (http://localhost:8888/lab/extensions/@jupyter-widgets/jupyterlab- manager/static/134.083e6b37f2f7b2f04b5e.js?v=083e6b37f2f7b2f04b5e:1:73392) at b._handleCommOpen (http://localhost:8888/static/lab/jlab_core.86360d749a1ef5f29afb.js? v=86360d749a1ef5f29afb:2:924842) at async b._handleMessage (http://localhost:8888/static/lab/jlab_core.86360d749a1ef5f29afb.js? v=86360d749a1ef5f29afb:2:926832) | That error is consistent with one noted here in an issue report recently. The suggestion there is to change to ipywidgets version 7.7.2 or 7.6.5 to fix this issue. Also, see the note here, too. | 16 | 10 |
73,724,269 | 2022-9-14 | https://stackoverflow.com/questions/73724269/which-module-should-i-use-for-python-collection-type-hints-collections-abc-or-t | I am unclear on whether to use typing or ABC classes for collection type hints in Python. It seems one can use either one, but the name typing suggests that's the preferred one. However, I see several places saying collections.abc should be used instead, not least PEP 585. I am getting the sense that typing was created to support generics but that generics is being extended to the "regular" classes so typing is becoming obsolete. Is that the correct understanding? | Yes. collections.abc is the correct module going forward. Collections in typing have been deprecated since 3.9 | 4 | 6 |
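A small illustration of the current convention: since Python 3.9 (PEP 585) the ABCs in `collections.abc` are subscriptable, so the deprecated `typing` aliases are no longer needed for collection hints:

```python
from collections.abc import Iterable, Mapping, Sequence

def total(values: Iterable[int]) -> int:
    return sum(values)

def first_key(mapping: Mapping[str, int]) -> str:
    return next(iter(mapping))

def pairs(items: Sequence[str]) -> list[tuple[int, str]]:
    return list(enumerate(items))

print(total([1, 2, 3]), first_key({"a": 1}), pairs(["x", "y"]))
```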
73,719,101 | 2022-9-14 | https://stackoverflow.com/questions/73719101/connecting-a-c-program-to-a-python-script-with-shared-memory | I'm trying to connect a C++ program to python using shared memory but I don't know how to pass the name of the memory segment to python. Here is my C++ code: key_t key = ftok("address", 1); int shm_o; char* msg = "hello there"; int len = strlen(msg) + 1; void* addr; shm_o = shmget(key, 20, IPC_CREAT | 0600); if(shm_o == -1) { std::cout << "Failed: shmget.\n"; return 1; } addr = shmat(shm_o, NULL, 0); if(addr == (void*) -1) { std::cout << "Failed: shmat.\n"; return 1; } std::cout << "Shared memory segment created successfully with id: " << shm_o; memcpy(addr, msg, len); getchar(); return 0; I'm trying to get python to read from the shared memory segment like so: shm_a = shared_memory.SharedMemory(name="address", create=False, size=20) print(bytes(shm_a.buf[:11])) but it throws an exception saying there is no file or directory called 'address'. Am I going about this correctly or is there another way to attach python to the shared memory segment? Any help would be much appreciated. | Taking the liberty to post a working example here for POSIX shared memory segments, which will work across C/C++ and Python on Linux/UNIX-like systems. This will not work on Windows. C++ code to create and write data into a shared memory segment (name provided on command line): #include <sys/mman.h> #include <sys/stat.h> #include <fcntl.h> #include <unistd.h> #include <string.h> #include <iostream> #include <string> int main(int argc, char * argv[]) { if (argc != 2) { std::cerr << "Argument <shmem_name> required" << std::endl; return 1; } const char * shmem_name = argv[1]; size_t shm_size = 4096; int shmem_fd = shm_open(shmem_name, O_CREAT|O_RDWR, S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP); if (shmem_fd == -1) { perror("shm_open"); return 1; } std::cout << "Shared Memory segment created with fd " << shmem_fd << std::endl; if (ftruncate(shmem_fd, shm_size) == -1) { perror("ftruncate"); return 1; } std::cout << "Shared Memory segment resized to " << shm_size << std::endl; void * addr = mmap(0, shm_size, PROT_WRITE, MAP_SHARED, shmem_fd, 0); if (addr == MAP_FAILED) { perror("mmap"); return 1; } std::cout << "Please enter some text to write to shared memory segment\n"; std::string text; std::getline(std::cin, text); while (! text.empty()) { strncpy((char *)addr, text.data(), shm_size); std::cout << "Written '" << text << "' to shared memory segment\n"; std::getline(std::cin, text); } std::cout << "Unlinking shared memory segment." << std::endl; shm_unlink(shmem_name) ; } Python code to read any string from the beginning of the shared memory segment: import sys from multiprocessing import shared_memory, resource_tracker if len(sys.argv) != 2: print("Argument <shmem_name> required") sys.exit(1) shm_seg = shared_memory.SharedMemory(name=sys.argv[1]) print(bytes(shm_seg.buf).strip(b'\x00').decode('ascii')) shm_seg.close() # Manually remove segment from resource_tracker, otherwise shmem segment # will be unlinked upon program exit resource_tracker.unregister(shm_seg._name, "shared_memory") | 9 | 2 |
73,720,063 | 2022-9-14 | https://stackoverflow.com/questions/73720063/docker-compose-keeps-running-healthcheck | I am running a couple of services with docker compose. The data-loader service has to wait for the translator, but once the data-loader is running, the healthcheck does not stop executing, even after exiting. translator: build: ./translator container_name: translator command: uvicorn app.main:app --reload --host 0.0.0.0 --port 8000 healthcheck: test: curl -f translator:8000/healthcheck || exit 1 interval: 5s timeout: 5s retries: 5 env_file: - ./translator/.env networks: - fds-network ports: - 8000:8000 restart: always depends_on: - redis volumes: - ./translator:/app/ data-loader: build: context: ./data-loader dockerfile: Dockerfile container_name: data-loader command: python3 app/main.py environment: - "DOCKER=true" - "BASE_LANG=es" - "LOAD_DISEASES=false" depends_on: cassandra-load-keyspace: condition: service_completed_successfully translator: condition: service_healthy env_file: - ./data-loader/.env networks: - fds-network The endpoint for the healthcheck: @app.get("/healthcheck") async def healthcheck(): return {"status": "ok"} And this is the console output: cassandra-load-keyspace exited with code 0 data-loader | Dataloader connected to Cassandra data-loader | Dataloader started data-loader | load diseases False data-loader | Skipping disease loading translator | INFO: 172.18.0.4:46638 - "GET /healthcheck HTTP/1.1" 200 OK data-loader exited with code 0 translator | INFO: 172.18.0.4:56822 - "GET /healthcheck HTTP/1.1" 200 OK translator | INFO: 172.18.0.4:56836 - "GET /healthcheck HTTP/1.1" 200 OK translator | INFO: 172.18.0.4:40618 - "GET /healthcheck HTTP/1.1" 200 OK translator | INFO: 172.18.0.4:40624 - "GET /healthcheck HTTP/1.1" 200 OK translator | INFO: 172.18.0.4:51620 - "GET /healthcheck HTTP/1.1" 200 OK translator | INFO: 172.18.0.4:51630 - "GET /healthcheck HTTP/1.1" 200 OK and so on. If the data-loader service is running it is because the healthcheck was OK, and the number of calls to the endpoint is greater to the number of tries in the condition for the healthcheck, and no more services nor functions call that endpoint, so it must be the healthechk at the compose. What is wrong here? | so it must be the healthechk at the compose. Yes, translator health check is run by docker, which will keep running the health check every 5 seconds interval: 5s, whether there are any dependent services running or not. | 4 | 2 |
73,715,131 | 2022-9-14 | https://stackoverflow.com/questions/73715131/pydantic-nonetype-object-is-not-subscriptable-type-type-error | I have the following structure of the models using Pydantic library. I created some of the classes, and one of them contains list of items of another, so the problem is that I can't parse json using this classes: class FileTypeEnum(str, Enum): file = 'FILE' folder = 'FOLDER' class ImportInModel(BaseModel): id: constr(min_length=1) url: constr(max_length=255) = None parentId: str | None = None size: PositiveInt | None = None type: FileTypeEnum @root_validator def check_url(cls, values: dict): file_type = values['type'] file_url = values['url'] if file_type == FileTypeEnum.folder and file_url is not None: raise ValueError( f'{file_type}\'s type file has {file_url} file url!' ) @root_validator def check_size(cls, values: dict): file_type = values['type'] file_size = values['size'] if file_type == FileTypeEnum.file and file_size is None: raise ValueError( f'{file_type}\'s type file has {file_size} file size!' ) class ImportsInModel(BaseModel): items: list[ImportInModel] updateDate: datetime class Config: json_encoders = { datetime: isoformat } json_loads = ujson.loads @validator('items') def check_parent(cls, v): for item_1 in v: for item_2 in v: print(item_1.type, item_1.parentId, item_2.id, item_2.type) assert item_1.type == FileTypeEnum.file \ and item_1.parentId == item_2.id \ and item_2.type == FileTypeEnum.folder But then I use it like try: json_raw = '''{ "items": [ { "id": "ΡΠ»Π΅ΠΌΠ΅Π½Ρ_1_4", "url": "/file/url1", "parentId": "ΡΠ»Π΅ΠΌΠ΅Π½Ρ_1_1", "size": 234, "type": "FILE" } ], "updateDate": "2022-05-28T21:12:01.000Z" } ''' ImportsInModel.parse_raw(json_raw) # print(json_) except ValidationError as e: print(e) It gives me error: 1 validation error for ImportsInModel items -> 0 -> __root__ 'NoneType' object is not subscriptable (type=type_error) And I have no idea what's wrong, because it literally copy of the documentation. | You have to return parsed result after validation if success, i.e., you should add return values in the end of the two root_validator in ImportInModel. For instance, if I run below code, a corresponding error will throw up: if __name__ == '__main__': try: x_raw = ''' { "id": "ΡΠ»Π΅ΠΌΠ΅Π½Ρ_1_4", "url": "/file/url1", "parentId": "ΡΠ»Π΅ΠΌΠ΅Π½Ρ_1_1", "size": 234, "type": "FILE" } ''' y = ImportInModel.parse_raw(x_raw) except ValidationError as e: print(e) Traceback (most recent call last): File "pydantic/main.py", line 344, in pydantic.main.BaseModel.__init__ TypeError: __dict__ must be set to a dictionary, not a 'NoneType' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/will/PycharmProjects/pythonProject2/pydantic_1.py", line 90, in <module> y = ImportInModel.parse_raw(x_raw) File "pydantic/main.py", line 549, in pydantic.main.BaseModel.parse_raw File "pydantic/main.py", line 526, in pydantic.main.BaseModel.parse_obj File "pydantic/main.py", line 346, in pydantic.main.BaseModel.__init__ TypeError: Model values must be a dict; you may not have returned a dictionary from a root validator As for the validator in ImportsInModel, maybe you should check your logic of the code. You codes want to assert item_1.parentId == item_2.id. However, in your testcase, item_1.parentId will never equal to item_2.id since you only have one item in your items list, which means item_1 == item_2. 
So the final snippet is: from datetime import datetime from enum import Enum import ujson from pydantic.json import isoformat from pydantic import BaseModel, constr, PositiveInt, root_validator, validator, ValidationError class FileTypeEnum(Enum): file = 'FILE' folder = 'FOLDER' class ImportInModel(BaseModel): id: constr(min_length=1) url: constr(max_length=255) = None parentId: str | None = None size: PositiveInt | None = None type: FileTypeEnum @root_validator def check_url(cls, values: dict): file_type = values['type'] file_url = values['url'] if file_type == FileTypeEnum.folder and file_url is not None: raise ValueError( f'{file_type}\'s type file has {file_url} file url!' ) return values @root_validator def check_size(cls, values: dict): file_type = values['type'] file_size = values['size'] if file_type == FileTypeEnum.file and file_size is None: raise ValueError( f'{file_type}\'s type file has {file_size} file size!' ) return values class ImportsInModel(BaseModel): items: list[ImportInModel] updateDate: datetime class Config: json_encoders = { datetime: isoformat } json_loads = ujson.loads @validator('items') def check_parent(cls, v): for item_1 in v: for item_2 in v: print(item_1.type, item_1.parentId, item_2.id, item_2.type) assert item_1.type == FileTypeEnum.file \ and item_1.parentId == item_2.id \ and item_2.type == FileTypeEnum.folder if __name__ == '__main__': try: json_raw = '''{ "items": [ { "id": "ΡΠ»Π΅ΠΌΠ΅Π½Ρ_1_4", "url": "/file/url1", "parentId": "ΡΠ»Π΅ΠΌΠ΅Π½Ρ_1_1", "size": 234, "type": "FILE" } ], "updateDate": "2022-05-28T21:12:01.000Z" } ''' x_raw = ''' { "id": "ΡΠ»Π΅ΠΌΠ΅Π½Ρ_1_4", "url": "/file/url1", "parentId": "ΡΠ»Π΅ΠΌΠ΅Π½Ρ_1_1", "size": 234, "type": "FILE" } ''' y = ImportInModel.parse_raw(x_raw) z = ImportsInModel.parse_raw(json_raw) print(y, z) except ValidationError as e: print(e) | 4 | 2 |
73,717,356 | 2022-9-14 | https://stackoverflow.com/questions/73717356/casting-a-value-based-on-a-trigger-in-pandas | I would like to create a new column every time I get 1 in the 'Signal' column that will cast the corresponding value from the 'Value' column (please see the expected output below). Initial data: Index Value Signal 0 3 0 1 8 0 2 8 0 3 7 1 4 9 0 5 10 0 6 14 1 7 10 0 8 10 0 9 4 1 10 10 0 11 10 0 Expected Output: Index Value Signal New_Col_1 New_Col_2 New_Col_3 0 3 0 0 0 0 1 8 0 0 0 0 2 8 0 0 0 0 3 7 1 7 0 0 4 9 0 7 0 0 5 10 0 7 0 0 6 14 1 7 14 0 7 10 0 7 14 0 8 10 0 7 14 0 9 4 1 7 14 4 10 10 0 7 14 4 11 10 0 7 14 4 What would be a way to do it? | You can use a pivot: out = df.join(df # keep only the values where signal is 1 # and get Signal's cumsum .assign(val=df['Value'].where(df['Signal'].eq(1)), col=df['Signal'].cumsum() ) # pivot cumsumed Signal to columns .pivot(index='Index', columns='col', values='val') # ensure column 0 is absent (using loc to avoid KeyError) .loc[:, 1:] # forward fill the values .ffill() # rename columns .add_prefix('New_Col_') ) output: Index Value Signal New_Col_1 New_Col_2 New_Col_3 0 0 3 0 NaN NaN NaN 1 1 8 0 NaN NaN NaN 2 2 8 0 NaN NaN NaN 3 3 7 1 7.0 NaN NaN 4 4 9 0 7.0 NaN NaN 5 5 10 0 7.0 NaN NaN 6 6 14 1 7.0 14.0 NaN 7 7 10 0 7.0 14.0 NaN 8 8 10 0 7.0 14.0 NaN 9 9 4 1 7.0 14.0 4.0 10 10 10 0 7.0 14.0 4.0 11 11 10 0 7.0 14.0 4.0 | 5 | 4 |
73,653,265 | 2022-9-8 | https://stackoverflow.com/questions/73653265/whats-the-cleanest-way-to-indent-multiple-function-arguments-considering-pep8 | I am wondering what's the best way to format a function with multiple arguments. Suppose I have a function with many arguments with potentially long argument names or default values such as for example: def my_function_with_a_long_name(argument1="this is a long default value", argument2=["some", "long", "default", "values"]): pass Naturally, I would try to make the function more readable. Following the pep8 styling guide https://peps.python.org/pep-0008/ I have multiple options to format such a function: I can use a hanging indent with extra indentation of the arguments: def my_function( a="a", b="b", c="c"): pass I could also put the first argument on the first line and align the following arguments with it: def my_function(a="a", b="b", c="c" ): pass My linter also accepts the following option, though I'm not sure it is in agreement with pep8: def my_function( a="a", b="b", c="c" ): pass None of these options looks 100% 'clean' to me. I guess the first option seems cleanest though it bothers me a bit to have the parenthesis on the same line as the last argument. However, putting the parenthesis on an extra line seems to conflict with pep8: def my_function( a="a", b="b", c="c" ): pass Now I am wondering, while multiple options might be acceptable if we only consider pep8, is there one option that is generally agreed on in the python community? Are there arguments for or against any of these options or does it just come down to personal preference? In a collaborative project, which option would you choose and why? | If pep8 doesn't strictly say that you have to use this style, you're free to choose between correct ones. But why bother formatting it manually? The only important thing is to be consistent in your code. So just leave it to a formatter. That takes care of it. I would suggest Black, written by a core developer, Łukasz Langa. It formats your function to: def my_function_with_a_long_name( argument1="this is a long default value", argument2=["some", "long", "default", "values"], ): pass Update I still do recommend the formatting style of "Black", but there is a relatively new popular package called Ruff which is considered an almost drop-in replacement for "Black" (99% compatible). It's super fast and it does other things too. Check the documentation. | 5 | 4 |
73,699,500 | 2022-9-13 | https://stackoverflow.com/questions/73699500/python-polars-split-string-column-into-many-columns-by-delimiter | In pandas, the following code will split the string from col1 into many columns. is there a way to do this in polars? data = {"col1": ["a/b/c/d", "a/b/c/d"]} df = pl.DataFrame(data) df_pd = df.to_pandas() df_pd[["a", "b", "c", "d"]] = df_pd["col1"].str.split("/", expand=True) pl.from_pandas(df_pd) shape: (2, 5) βββββββββββ¬ββββββ¬ββββββ¬ββββββ¬ββββββ β col1 β a β b β c β d β β --- β --- β --- β --- β --- β β str β str β str β str β str β βββββββββββͺββββββͺββββββͺββββββͺββββββ‘ β a/b/c/d β a β b β c β d β β a/b/c/d β a β b β c β d β βββββββββββ΄ββββββ΄ββββββ΄ββββββ΄ββββββ | Here's an algorithm that will automatically adjust for the required number of columns -- and should be quite performant. Let's start with this data. Notice that I've purposely added the empty string "" and a null value - to show how the algorithm handles these values. Also, the number of split strings varies widely. import polars as pl df = pl.DataFrame( { "my_str": ["cat", "cat/dog", None, "", "cat/dog/aardvark/mouse/frog"], } ) df shape: (5, 1) βββββββββββββββββββββββββββββββ β my_str β β --- β β str β βββββββββββββββββββββββββββββββ‘ β cat β β cat/dog β β null β β β β cat/dog/aardvark/mouse/frog β βββββββββββββββββββββββββββββββ The Algorithm The algorithm below may be a bit more than you need, but you can edit/delete/add as you need. ( df .with_row_index('id') .with_columns(pl.col('my_str').str.split('/').alias('split_str')) .explode('split_str') .with_columns( ('string_' + pl.int_range(pl.len()).cast(pl.String).str.zfill(2)) .over('id') .alias('col_nm') ) .pivot( on='col_nm', index=['id', 'my_str'] ) .with_columns( pl.col('^string_.*$').fill_null('') ) ) shape: (5, 7) βββββββ¬ββββββββββββββββββββββββββββββ¬ββββββββββββ¬ββββββββββββ¬ββββββββββββ¬ββββββββββββ¬ββββββββββββ β id β my_str β string_00 β string_01 β string_02 β string_03 β string_04 β β --- β --- β --- β --- β --- β --- β --- β β u32 β str β str β str β str β str β str β βββββββͺββββββββββββββββββββββββββββββͺββββββββββββͺββββββββββββͺββββββββββββͺββββββββββββͺββββββββββββ‘ β 0 β cat β cat β β β β β β 1 β cat/dog β cat β dog β β β β β 2 β null β β β β β β β 3 β β β β β β β β 4 β cat/dog/aardvark/mouse/frog β cat β dog β aardvark β mouse β frog β βββββββ΄ββββββββββββββββββββββββββββββ΄ββββββββββββ΄ββββββββββββ΄ββββββββββββ΄ββββββββββββ΄ββββββββββββ How it works We first assign a row number id (which we'll need later), and use split to separate the strings. Note that the split strings form a list. ( df .with_row_index("id") .with_columns(pl.col("my_str").str.split("/").alias("split_str")) ) shape: (5, 3) βββββββ¬ββββββββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββ β id β my_str β split_str β β --- β --- β --- β β u32 β str β list[str] β βββββββͺββββββββββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββ‘ β 0 β cat β ["cat"] β β 1 β cat/dog β ["cat", "dog"] β β 2 β null β null β β 3 β β [""] β β 4 β cat/dog/aardvark/mouse/frog β ["cat", "dog", "aardvark", "moβ¦ β βββββββ΄ββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββ Next, we'll use explode to put each string on its own row. (Notice how the id column tracks the original row that each string came from.) 
( df .with_row_index("id") .with_columns(pl.col("my_str").str.split("/").alias("split_str")) .explode("split_str") ) shape: (10, 3) βββββββ¬ββββββββββββββββββββββββββββββ¬ββββββββββββ β id β my_str β split_str β β --- β --- β --- β β u32 β str β str β βββββββͺββββββββββββββββββββββββββββββͺββββββββββββ‘ β 0 β cat β cat β β 1 β cat/dog β cat β β 1 β cat/dog β dog β β 2 β null β null β β 3 β β β β 4 β cat/dog/aardvark/mouse/frog β cat β β 4 β cat/dog/aardvark/mouse/frog β dog β β 4 β cat/dog/aardvark/mouse/frog β aardvark β β 4 β cat/dog/aardvark/mouse/frog β mouse β β 4 β cat/dog/aardvark/mouse/frog β frog β βββββββ΄ββββββββββββββββββββββββββββββ΄ββββββββββββ In the next step, we're going to generate our column names. I chose to call each column string_XX where XX is the offset with regards to the original string. I've used the handy zfill expression so that 1 becomes 01. (This makes sure that string_02 comes before string_10 if you decide to sort your columns later.) You can substitute your own naming in this step as you need. ( df .with_row_index("id") .with_columns(pl.col("my_str").str.split("/").alias("split_str")) .explode("split_str") .with_columns( ("string_" + pl.int_range(pl.len()).cast(pl.String).str.zfill(2)) .over("id") .alias("col_nm") ) ) shape: (10, 4) βββββββ¬ββββββββββββββββββββββββββββββ¬ββββββββββββ¬ββββββββββββ β id β my_str β split_str β col_nm β β --- β --- β --- β --- β β u32 β str β str β str β βββββββͺββββββββββββββββββββββββββββββͺββββββββββββͺββββββββββββ‘ β 0 β cat β cat β string_00 β β 1 β cat/dog β cat β string_00 β β 1 β cat/dog β dog β string_01 β β 2 β null β null β string_00 β β 3 β β β string_00 β β 4 β cat/dog/aardvark/mouse/frog β cat β string_00 β β 4 β cat/dog/aardvark/mouse/frog β dog β string_01 β β 4 β cat/dog/aardvark/mouse/frog β aardvark β string_02 β β 4 β cat/dog/aardvark/mouse/frog β mouse β string_03 β β 4 β cat/dog/aardvark/mouse/frog β frog β string_04 β βββββββ΄ββββββββββββββββββββββββββββββ΄ββββββββββββ΄ββββββββββββ In the next step, we'll use the pivot function to place each string in its own column. ( df .with_row_index("id") .with_columns(pl.col("my_str").str.split("/").alias("split_str")) .explode("split_str") .with_columns( ("string_" + pl.int_range(pl.len()).cast(pl.String).str.zfill(2)) .over("id") .alias("col_nm") ) .pivot( on="col_nm", index=["id", "my_str"] ) ) shape: (5, 7) βββββββ¬ββββββββββββββββββββββββββββββ¬ββββββββββββ¬ββββββββββββ¬ββββββββββββ¬ββββββββββββ¬ββββββββββββ β id β my_str β string_00 β string_01 β string_02 β string_03 β string_04 β β --- β --- β --- β --- β --- β --- β --- β β u32 β str β str β str β str β str β str β βββββββͺββββββββββββββββββββββββββββββͺββββββββββββͺββββββββββββͺββββββββββββͺββββββββββββͺββββββββββββ‘ β 0 β cat β cat β null β null β null β null β β 1 β cat/dog β cat β dog β null β null β null β β 2 β null β null β null β null β null β null β β 3 β β β null β null β null β null β β 4 β cat/dog/aardvark/mouse/frog β cat β dog β aardvark β mouse β frog β βββββββ΄ββββββββββββββββββββββββββββββ΄ββββββββββββ΄ββββββββββββ΄ββββββββββββ΄ββββββββββββ΄ββββββββββββ All that remains is to use fill_null to replace the null values with an empty string "". Notice that I've used a regex expression in the col expression to target only those columns whose names start with "string_". (Depending on your other data, you may not want to replace null with "" everywhere in your data.) | 15 | 14 |
73,700,879 | 2022-9-13 | https://stackoverflow.com/questions/73700879/interaction-between-pydantic-models-schemas-in-the-fastapi-tutorial | I follow the FastAPI Tutorial and am not quite sure what the exact relationship between the proposed data objects is. We have the models.py file: from sqlalchemy import Boolean, Column, ForeignKey, Integer, String from sqlalchemy.orm import relationship from .database import Base class User(Base): __tablename__ = "users" id = Column(Integer, primary_key=True, index=True) email = Column(String, unique=True, index=True) hashed_password = Column(String) is_active = Column(Boolean, default=True) items = relationship("Item", back_populates="owner") class Item(Base): __tablename__ = "items" id = Column(Integer, primary_key=True, index=True) title = Column(String, index=True) description = Column(String, index=True) owner_id = Column(Integer, ForeignKey("users.id")) owner = relationship("User", back_populates="items") And the schemas.py file: from typing import List, Union from pydantic import BaseModel class ItemBase(BaseModel): title: str description: Union[str, None] = None class ItemCreate(ItemBase): pass class Item(ItemBase): id: int owner_id: int class Config: orm_mode = True class UserBase(BaseModel): email: str class UserCreate(UserBase): password: str class User(UserBase): id: int is_active: bool items: List[Item] = [] class Config: orm_mode = True Those classes are then used to define db queries like in the crud.py file: from sqlalchemy.orm import Session from . import models, schemas def get_user(db: Session, user_id: int): return db.query(models.User).filter(models.User.id == user_id).first() def get_user_by_email(db: Session, email: str): return db.query(models.User).filter(models.User.email == email).first() def get_users(db: Session, skip: int = 0, limit: int = 100): return db.query(models.User).offset(skip).limit(limit).all() def create_user(db: Session, user: schemas.UserCreate): fake_hashed_password = user.password + "notreallyhashed" db_user = models.User(email=user.email, hashed_password=fake_hashed_password) db.add(db_user) db.commit() db.refresh(db_user) return db_user def get_items(db: Session, skip: int = 0, limit: int = 100): return db.query(models.Item).offset(skip).limit(limit).all() def create_user_item(db: Session, item: schemas.ItemCreate, user_id: int): db_item = models.Item(**item.dict(), owner_id=user_id) db.add(db_item) db.commit() db.refresh(db_item) return db_item And in the FastAPI code main.py: from typing import List from fastapi import Depends, FastAPI, HTTPException from sqlalchemy.orm import Session from . 
import crud, models, schemas from .database import SessionLocal, engine models.Base.metadata.create_all(bind=engine) app = FastAPI() # Dependency def get_db(): db = SessionLocal() try: yield db finally: db.close() @app.post("/users/", response_model=schemas.User) def create_user(user: schemas.UserCreate, db: Session = Depends(get_db)): db_user = crud.get_user_by_email(db, email=user.email) if db_user: raise HTTPException(status_code=400, detail="Email already registered") return crud.create_user(db=db, user=user) @app.get("/users/", response_model=List[schemas.User]) def read_users(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)): users = crud.get_users(db, skip=skip, limit=limit) return users @app.get("/users/{user_id}", response_model=schemas.User) def read_user(user_id: int, db: Session = Depends(get_db)): db_user = crud.get_user(db, user_id=user_id) if db_user is None: raise HTTPException(status_code=404, detail="User not found") return db_user @app.post("/users/{user_id}/items/", response_model=schemas.Item) def create_item_for_user( user_id: int, item: schemas.ItemCreate, db: Session = Depends(get_db) ): return crud.create_user_item(db=db, item=item, user_id=user_id) @app.get("/items/", response_model=List[schemas.Item]) def read_items(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)): items = crud.get_items(db, skip=skip, limit=limit) return items From what I understand: The models data classes define the SQL tables. The schemas data classes define the API that FastAPI uses to interact with the database. They must be convertible into each other so that the set-up works. What I don't understand: In crud.create_user_item I expected the return type to be schemas.Item, since that return type is used by FastAPI again. According to my understanding the response model of @app.post("/users/{user_id}/items/", response_model=schemas.Item) in the main.py is wrong, or how can I understand the return type inconsistency? However inferring from the code, the actual return type must be models.Item, how is that handled by FastAPI? What would be the return type of crud.get_user? | I'll go through your bullet points one by one. The models data classes define the SQL tables. Yes. More precisely, the orm classes that map to actual database tables are defined in the models module. The schemas data classes define the API that FastAPI uses to interact with the database. Yes and no. The Pydantic models in the schemas module define the data schemas relevant to the API, yes. But that has nothing to do with the database yet. Some of these schemas define what data is expected to be received by certain API endpoints for the request to be considered valid. Others define what the data returned by certain endpoints will look like. They must be convertible into each other so that the set-up works. While the database table schemas and the API data schemas are usually very similar, that is not necessarily the case. In the tutorial however, they correspond quite neatly, which allows succinct crud code, like this: db_item = models.Item(**item.dict(), owner_id=user_id) Here item is a Pydantic model instance, i.e. one of your API data schemas schemas.ItemCreate containing data you decided is necessary for creating a new item. Since its fields (their names and types) correspond to those of the database model models.Item, the latter can be instantiated from the dictionary representation of the former (with the addition of the owner_id). 
In crud.create_user_item I expected the return type to be schemas.Item, since that return type is used by FastAPI again. No, this is exactly the magic of FastAPI. The function create_user_item returns an instance of models.Item, i.e. the ORM object as constructed from the database (after calling session.refresh on it): def create_user_item(db: Session, item: schemas.ItemCreate, user_id: int): db_item = models.Item(**item.dict(), owner_id=user_id) ... return db_item And the API route handler function create_item_for_user actually does return that same object (of class models.Item). @app.post("/users/{user_id}/items/", response_model=schemas.Item) def create_item_for_user( user_id: int, item: schemas.ItemCreate, db: Session = Depends(get_db) ): return crud.create_user_item(db=db, item=item, user_id=user_id) However, the @app.post decorator takes that object and uses it to construct an instance of the response_model you defined for that route, which is schemas.Item in this case. This is why you set orm_mode in your schemas.Item model: class Config: orm_mode = True This allows an instance of that class to be created via the .from_orm method. (This only applies to Pydantic v1 Models.) That all happens behind the scenes and again depends on the SQLAlchemy model corresponding to the Pydantic model with regards to field names and types. Otherwise validation fails. According to my understanding the response model [...] is wrong No, see above. The decorated route function actually returns an instance of the schemas.Item model. However inferring from the code, the actual return type must be models.Item Yes, see above. The return type of the undecorated route handler function create_item_for_user is in fact models.Item. But its return type is not the response model. I assume that to reduce confusion the documentation example does not annotate the return type of those route functions. If it did, it would look like this: @app.post("/users/{user_id}/items/", response_model=schemas.Item) def create_item_for_user( user_id: int, item: schemas.ItemCreate, db: Session = Depends(get_db) ) -> models.Item: return crud.create_user_item(db=db, item=item, user_id=user_id) It may help to remember that a function decorator is just syntactic sugar for a function that takes a function as argument and (usually) returns a function. Typically the returned function actually internally calls the function passed to it as argument and does additional things before and/or after that call. I could rewrite the route above like this and it would be exactly the same: def create_item_for_user( user_id: int, item: schemas.ItemCreate, db: Session = Depends(get_db) ) -> models.Item: return crud.create_user_item(db=db, item=item, user_id=user_id) create_item_for_user = app.post( "/users/{user_id}/items/", response_model=schemas.Item )(create_item_for_user) What would be the return type of crud.get_user? That would be models.User because that is the database model and is what the first method of that query returns. def get_user(db: Session, user_id: int) -> models.User: return db.query(models.User).filter(models.User.id == user_id).first() This is then again returned by the read_user API route function in the same fashion as I explained above for models.Item. @app.get("/users/{user_id}", response_model=schemas.User) def read_user(user_id: int, db: Session = Depends(get_db)) -> models.User: db_user = crud.get_user(db, user_id=user_id) ... 
return db_user # <-- instance of `models.User` That is, the models.User object is intercepted by the internal function of the decorator and (because of the defined response_model) passed to schemas.User.from_orm, which returns a schemas.User object. Hope this helps. | 20 | 30 |
73,664,464 | 2022-9-9 | https://stackoverflow.com/questions/73664464/setuptools-and-pyproject-toml-specify-source | I am defining a python project using a pyproject.toml file. No setup.py, no setup.cfg. The project has dependencies on an alternate repository: https://artifactory.mypypy.com, how do I specify it? | You put this in pyproject.toml: [[tool.poetry.source]] name = "internal-repo-2" url = "https://<private-repo-2>" priority = "explicit" There are alternative values for priority, but they come with a security risk: an attacker who learns the name of your internal package can push a package of the same name to PyPI, and it will then become part of your executable. | 4 | 0 |
73,705,069 | 2022-9-13 | https://stackoverflow.com/questions/73705069/how-to-type-hint-kwargs-when-they-are-passed-as-is-to-another-function | Suppose I have a fully type hinted method with only keyword arguments: class A: def func(a: int, b: str, c: SomeIntricateTypeHint) -> OutputClass: ... Now suppose I have a function that takes in variable keyword arguments, and passes them entirely onto that method: def outer_func(n_repeat: int, **kwargs: ???) -> OtherOutputClass: a = A() for _ in range(n_repeat): a.func(**kwargs) In doing so, I have lost the benefits of the type hints for func. How do I type hint kwargs in outer_func such that I recover those benefits? For extra detail, in my case, I don't personally define func. It's actually a method from a boto3 client object. As a result I'm looking for a solution that dynamically creates the type hint, rather than having to manually create the TypedDict. | In my case, it is a json.loads wrapper. My get_json accepts bytes and does the decoding, but json.loads will accept str | bytes | bytearray as its first argument. import json from typing import ( ParamSpec, Callable, Generic, Concatenate, Any, ) P = ParamSpec("P") def __get_json_func_creator( _: Callable[ Concatenate[Any, P], Any ] = json.loads ): def func(data: bytes, *args: P.args, **kwargs: P.kwargs): return json.loads(__get_text(data), *args, **kwargs) return func get_json = __get_json_func_creator() In your case, you're defining func in class A. I assume that you're missing the self argument in the method declaration and intend to strip the a and b arguments? class A: def func( self, a: int, b: str, c: SomeIntricateTypeHint = None ) -> OutputClass: ... def __wrap_class_a_func( _: Callable[ Concatenate[A, Any, Any, P], Any ] = A.func ): def func(n_repeat: int, *args: P.args, **kwargs: P.kwargs): a = A() for num in range(n_repeat): a.func(num, str(num), **kwargs) return func outer_func = __wrap_class_a_func() Unfortunately, in this case, if you want to strip all positional parameters, you will have to define all positional types in Concatenate. | 9 | 3 |
73,637,315 | 2022-9-7 | https://stackoverflow.com/questions/73637315/oserror-no-library-called-cairo-2-was-found-from-custom-widgets-import-proje | How to fix this error? C:\Users\vanvl\OneDrive\Bureaublad\Progammeren\Project 1.02.2>python Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from Custom_Widgets import ProjectMaker Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\vanvl\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\Custom_Widgets\ProjectMaker.py", line 14, in <module> import cairosvg File "C:\Users\vanvl\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\cairosvg\__init__.py", line 26, in <module> from . import surface # noqa isort:skip File "C:\Users\vanvl\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\cairosvg\surface.py", line 9, in <module> import cairocffi as cairo File "C:\Users\vanvl\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\cairocffi\__init__.py", line 48, in <module> cairo = dlopen( File "C:\Users\vanvl\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\cairocffi\__init__.py", line 45, in dlopen raise OSError(error_message) # pragma: no cover OSError: no library called "cairo-2" was found no library called "cairo" was found no library called "libcairo-2" was found cannot load library 'libcairo.so.2': error 0x7e cannot load library 'libcairo.2.dylib': error 0x7e cannot load library 'libcairo-2.dll': error 0x7e | This is quite an annoying dependency problem because cairocffi is not built for windows, you need the additional dependency as explained there: https://doc.courtbouillon.org/cairocffi/stable/overview.html#installing-cairo-on-windows A quicker solution is to do the following: I used pipwin which install from an unofficial repository https://www.lfd.uci.edu/~gohlke/pythonlibs/#pycairo pip install pipwin pipwin install cairocffi Also see this related issue: get cairosvg working in windows | 10 | 20 |
73,700,143 | 2022-9-13 | https://stackoverflow.com/questions/73700143/bad-string-representation-of-negative-imaginary-numbers-in-python | For some reason the string representation of negative imaginary numbers in Python is different for equal values: >>> str(-3j) '(-0-3j)' >>> str(0-3j) '-3j' Moreover, if I try to get the string representation of -0-3j, I get: >>> str(-0-3j) '-3j' Which seems like an inconsistency. The problem is that I use the string representation of complex scalar tensors to compute the hash of some operators, and because of this inconsistency I get different hashes for equal operators. Is there any way to convert '(-0-3j)' to '-3j'? Edit: Mechanic Pig pointed out that: >>> str(-0.-3j) '(-0-3j)' | This issue is filed as a bug and stimulated appearance of PEP-0682 - Format Specifier for Signed Zero and originates from Floating-point arithmetic limitations. Once the bug is resolved -0 will not scare unnecessarily any more. But until then... You can adapt the snippet from this answer: def real_norm(no): return no if no != 0 else abs(no) def imag_norm(ino): return complex(real_norm(ino.real), real_norm(ino.imag)) test_cases = [-0-3j, 0-3j, -0-0j, 0-0j, -0+0j] for tc in test_cases: tn = imag_norm(tc) print(f"{tc} -> {tn}") | 4 | 2 |
73,633,063 | 2022-9-7 | https://stackoverflow.com/questions/73633063/distribute-alembic-migration-scripts-in-application-package | I have an application that uses SQLAlchemy and Alembic for migrations. The repository looks like this: my-app/ my_app/ ... # Source code migrations/ versions/ ... # Migration scripts env.py alembic.ini MANIFEST.in README.rst setup.py When in the repo, I can call alembic commands (alembic revision, alembic upgrade). I want to ship the app as a package to allow users to pip install, and I would like them to be able to just alembic upgrade head to migrate their DB. How can I achieve this? alembic is listed as a dependency. What I don't know is how to ensure alembic.ini and revision files are accessible to the alembic command without the user having to pull the repo. Adding them to MANIFEST.in will add them to the source package but AFAIU, when installing with pip, only my_app and subfolders end up in the (virtual) environment (this plus entry points). Note: the notions of source dist, wheel, MANIFEST.in and include_package_data are still a bit blurry to me but hopefully the description above makes the use case clear. | The obvious part of the answer is "include migration files in app directory". my-app/ my_app/ ... # Source code migrations/ versions/ ... # Migration scripts env.py alembic.ini MANIFEST.in README.rst setup.py The not so obvious part is that when users install the package, they are not in the app directory, so they would need to specify the location of the alembic.ini file as a command line argument to use alembic commands (and this path is somewhere deep into a virtualenv). Not so nice. From a discussion with Alembic author, the recommended way is to provide user commands using the Alembic Python API internally to expose only a subset of user commands. Here's what I did. In migrations directory, I added this __init__.py file: """DB migrations""" from pathlib import Path from alembic.config import Config from alembic import command ROOT_PATH = Path(__file__).parent.parent ALEMBIC_CFG = Config(ROOT_PATH / "alembic.ini") def current(verbose=False): command.current(ALEMBIC_CFG, verbose=verbose) def upgrade(revision="head"): command.upgrade(ALEMBIC_CFG, revision) def downgrade(revision): command.downgrade(ALEMBIC_CFG, revision) Then in a commands.py file in application root, I added a few commands: @click.command() @click.option("-v", "--verbose", is_flag=True, default=False, help="Verbose mode") def db_current_cmd(verbose): """Display current database revision""" migrations.current(verbose) @click.command() @click.option("-r", "--revision", default="head", help="Revision target") def db_upgrade_cmd(revision): """Upgrade to a later database revision""" migrations.upgrade(revision) @click.command() @click.option("-r", "--revision", required=True, help="Revision target") def db_downgrade_cmd(revision): """Revert to a previous database revision""" migrations.downgrade(revision) And of course, in setup.py setup( ... entry_points={ "console_scripts": [ ... "db_current = my_app.commands:db_current_cmd", "db_upgrade = my_app.commands:db_upgrade_cmd", "db_downgrade = my_app.commands:db_downgrade_cmd", ], }, ) | 5 | 5 |
73,656,975 | 2022-9-9 | https://stackoverflow.com/questions/73656975/pytorch-silent-data-corruption | I am on a workstation with 4 A6000 GPUs. Moving a Torch tensor from one GPU to another GPU corrupts the data, silently!!! See the simple example below. x >tensor([1], device='cuda:0') x.to(1) >tensor([1], device='cuda:1') x.to(2) >tensor([0], device='cuda:2') x.to(3) >tensor([0], device='cuda:3') Any ideas what is the cause of this issue? Other info that might be handy: (there was two nvlinks which I manually removed trying to solve the problem) GPU0 GPU1 GPU2 GPU3 CPU Affinity NUMA Affinity GPU0 X SYS SYS SYS 0-63 N/A GPU1 SYS X SYS SYS 0-63 N/A GPU2 SYS SYS X SYS 0-63 N/A GPU3 SYS SYS SYS X 0-63 N/A nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2022 NVIDIA Corporation Built on Tue_Mar__8_18:18:20_PST_2022 Cuda compilation tools, release 11.6, V11.6.124 Build cuda_11.6.r11.6/compiler.31057947_0 NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 11.6 Edit: adding some screenshots It seems to be stateful. Changes which GPUs work fine together after starting a new python runtime. | The solution was to disable IOMMU. On our server, in the BIOS settings /Advanced/AMD CBS/NBIO Common Options/IOMMU -> IOMMU - Disabled See the PyTorch issues thread for more information. | 5 | 2 |
73,663,939 | 2022-9-9 | https://stackoverflow.com/questions/73663939/is-there-a-way-to-specify-min-and-max-values-for-integer-column-in-slqalchemy | class User(Base): __tablename__ = 'users' id = Column(Integer, primary_key=True) age = Column(Integer) # I need age to have min value of 0 and max of 100 email = Column(String) SQLAlchemy documentation says that there is no such attributes to pass while creating a column | As described in brunson's answer, you can add a check constraint to perform the validation in the database, if the database supports check constraints. import sqlalchemy as sa ... class Test(Base): __tablename__ = 'test' id = sa.Column(sa.Integer, primary_key=True) age = sa.Column(sa.Integer, sa.CheckConstraint('age > 0 AND age < 100')) It may be more convenient to perform the validation before data is sent to the database, in the application layer. In this case, the orm.validates decorator can be used: class Test(Base): __tablename__ = 'test' id = sa.Column(sa.Integer, primary_key=True) age = sa.Column(sa.Integer, sa.CheckConstraint('age > 0 AND age < 100')) @orm.validates('age') def validate_age(self, key, value): if not 0 < value < 100: raise ValueError(f'Invalid age {value}') return value | 3 | 7 |
73,662,432 | 2022-9-9 | https://stackoverflow.com/questions/73662432/pipenv-no-such-option-requirements-in-latest-version | command: pipenv lock --requirements --keep-outdated output: Usage: pipenv lock [OPTIONS] Try 'pipenv lock -h' for help. Error: No such option: --requirements Did you mean --quiet? Any idea how to fix this? | The -r option of the pipenv lock command has been deprecated for some time. Use the requirements command to generate the requirements.txt, i.e.: pipenv requirements > requirements.txt (default dependencies). To freeze dev dependencies as well, use the --dev option: pipenv requirements --dev > dev-requirements.txt Sometimes, you would want to generate a requirements file based on your current environment, for example to include tooling that only supports requirements.txt. You can convert a Pipfile and Pipfile.lock into a requirements.txt file very easily. See also: https://pipenv.pypa.io/en/latest/advanced/#generating-a-requirements-txt | 14 | 21 |
73,683,616 | 2022-9-12 | https://stackoverflow.com/questions/73683616/multiple-rows-into-single-row-in-pandas | I wish to flatten (I am not sure whether flatten is the correct term for it) multiple rows into a single row, with each column renamed to column_rowindex (so column a becomes a_0, a_1, ...). I have a dataframe as below: data = {"a":[3,4,5,6], "b":[88,77,66,55], "c":["ts", "new", "thing", "here"], "d":[9.1,9.2,9.0,8.4]} df = pd.DataFrame(data) my current output is: a b c d 0 3 88 ts 9.1 1 4 77 new 9.2 2 5 66 thing 9.0 3 6 55 here 8.4 my expected output: a_0 a_1 a_2 a_3 b_0 b_1 b_2 b_3 c_0 c_1 c_2 c_3 d_0 d_1 d_2 d_3 0 3 4 5 6 88 77 66 55 ts new thing here 9.1 9.2 9.0 8.4 i.e. from shape (4, 4) to (1, 16) | Update: let's use the walrus operator, new in Python 3.8, to create a one-liner: (df_new := df.unstack().to_frame().T).set_axis( [f"{i}_{j}" for i, j in df_new.columns], axis=1 ) Output: a_0 a_1 a_2 a_3 b_0 b_1 b_2 b_3 c_0 c_1 c_2 c_3 d_0 d_1 d_2 d_3 0 3 4 5 6 88 77 66 55 ts new thing here 9.1 9.2 9.0 8.4 Try this, using unstack, to_frame and transpose. Next, flatten the column headers using a list comprehension: df_new = df.unstack().to_frame().T df_new.columns = [f'{i}_{j}' for i, j in df_new.columns] df_new Output: a_0 a_1 a_2 a_3 b_0 b_1 b_2 b_3 c_0 c_1 c_2 c_3 d_0 d_1 d_2 d_3 0 3 4 5 6 88 77 66 55 ts new thing here 9.1 9.2 9.0 8.4 | 3 | 5 |
73,636,196 | 2022-9-7 | https://stackoverflow.com/questions/73636196/masking-layer-vs-attention-mask-parameter-in-multiheadattention | I use the MultiHeadAttention layer in my transformer model (my model is very similar to the named entity recognition models). Because my data comes with different lengths, I use padding and the attention_mask parameter in MultiHeadAttention to mask the padding. If I were to use the Masking layer before MultiHeadAttention, would it have the same effect as the attention_mask parameter? Or should I use both: attention_mask and the Masking layer? | The TensorFlow documentation on Masking and padding with Keras may be helpful. The following is an excerpt from the document. When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using them (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it. tf.keras.layers.MultiHeadAttention also supports automatic mask propagation in TF 2.10.0. Improved masking support for tf.keras.layers.MultiHeadAttention. Implicit masks for query, key and value inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any attention_mask passed in directly when calling the layer. This can be used with tf.keras.layers.Embedding with mask_zero=True to automatically infer a correct padding mask. Added a use_causal_mask call-time argument to the layer. Passing use_causal_mask=True will compute a causal attention mask, and optionally combine it with any attention_mask passed in directly when calling the layer. | 5 | 3 |
73,704,681 | 2022-9-13 | https://stackoverflow.com/questions/73704681/how-to-iterate-through-json-to-print-in-a-formatted-way | This is the JSON i have """ { "A":1, "B":[ { "C": { "D": "0" }, "E": { "F": "1", "G": [ { "H": { "I": 12, "J": 21 } } ] } } ] } """ I want to print the JSON in the following way more likely in a tree fashion ----------------------------------------------------------------------- --------------------------EXPECTED OUTPUT!----------------------------- ----------------------------------------------------------------------- { 'title': 'ROOT', 'key': '0', 'children': [ { 'title': 'A', 'key': 'A', 'children':[] }, { 'title': 'B', 'key': 'B', 'children':[ { 'title': 'C', 'key': 'C', 'children':[ { 'title': 'D', 'key': 'D', 'children':[] } ] }, { 'title': 'E', 'key': 'E', 'children':[ { 'title': 'F', 'key': 'F', 'children':[] },{ 'title': 'G', 'key': 'G', 'children':[ { 'title': 'H', 'key': 'H', 'children':[ { 'title': 'I', 'key': 'I', 'children':[] }, { 'title': 'J', 'key': 'J', 'children':[] } ] } ] } ] } ] } ] } here is the reference to the above printing pattern link ! Here is the code i tried import json from pprint import pprint json_data = json.loads(""" { "A":1, "B":[ { "C": { "D": "0" }, "E": { "F": "1", "G": [ { "H": { "I": 12, "J": 21 } } ] } } ] } """) __version__ = '1.0.0' _branch_extend = 'β ' _branch_mid = 'ββ ' _branch_last = 'ββ ' _spacing = ' ' last_items = [] main_json_obj = { 'title': """ ROOT """, 'key': 'ROOT', 'children': [] } def _getHierarchy(jsonData, name='', file=None, _prefix='', _last=True): """ Recursively parse json data to print data types """ # Handle object datatype if isinstance(jsonData, dict): name = name print(_prefix, _branch_last if _last else _branch_mid, \ name, sep="", file=file) if _last: last_items.append([{'title': name, 'key': name, 'children': []}]) main_json_obj['children'] += [{'title': name, 'key': name, 'children': []}] _prefix += _spacing if _last else _branch_extend length = len(jsonData) for i, key in enumerate(jsonData.keys()): _last = i == (length - 1) _getHierarchy(jsonData[key], '"' + key + '"', file, _prefix, _last) # Handle array datatype elif isinstance(jsonData, list): # name += ' (array)' print(_prefix, _branch_last if _last else _branch_mid, \ name, sep="", file=file) if _last: last_items.append([{'title': name, 'key': name, 'children': []}]) main_json_obj['children'] += [{'title': name, 'key': name, 'children': []}] _prefix += _spacing if _last else _branch_extend _getHierarchy(jsonData[0], '', file, _prefix, _last=True) else: # Handle string datatype if isinstance(jsonData, str): name = name if _last: last_items.append([{'title': name, 'key': name, 'children': []}]) main_json_obj['children'] += [{'title': name, 'key': name, 'children': []}] # Handle number datatype else: name = name if _last: last_items.append([{'title': name, 'key': name, 'children': []}]) main_json_obj['children'] += [{'title': name, 'key': name, 'children': []}] print(_prefix, _branch_last if _last else _branch_mid, \ name, sep="", file=file) def setSymbols(branch_extend='β ', branch_mid='ββ ', branch_last='ββ '): """ Override symbols for the tree structure """ global _branch_extend global _branch_mid global _branch_last _branch_extend = branch_extend _branch_mid = branch_mid _branch_last = branch_last print("-----------------------------------------------------------") print("-------------------Hierarchy is shown below--------------- ") print("-----------------------------------------------------------") print(" ") 
_getHierarchy(json_data) print(" ") print("-----------------------------------------------------------") print("-------------------Hierarchy ENDS HERE !--------------- ") print("-----------------------------------------------------------") print(" ") print(" ") print("--------------------OUTPUT AT THE MOMENT !!!!!!-------------- ") print(" ") print(" ") pprint(main_json_obj) print(" ") print(" ") this is its output at the moment! ----------------------------------------------------------- -------------------Hierarchy is shown below--------------- ----------------------------------------------------------- ββ ββ "A" ββ "B" ββ ββ "C" β ββ "D" ββ "E" ββ "F" ββ "G" ββ ββ "H" ββ "I" ββ "J" ----------------------------------------------------------- -------------------Hierarchy ENDS HERE !--------------- ----------------------------------------------------------- --------------------OUTPUT AT THE MOMENT !!!!!!-------------- {'children': [{'children': [], 'key': '', 'title': ''}, {'children': [], 'key': '"A"', 'title': '"A"'}, {'children': [], 'key': '"B"', 'title': '"B"'}, {'children': [], 'key': '', 'title': ''}, {'children': [], 'key': '"C"', 'title': '"C"'}, {'children': [], 'key': '"D"', 'title': '"D"'}, {'children': [], 'key': '"E"', 'title': '"E"'}, {'children': [], 'key': '"F"', 'title': '"F"'}, {'children': [], 'key': '"G"', 'title': '"G"'}, {'children': [], 'key': '', 'title': ''}, {'children': [], 'key': '"H"', 'title': '"H"'}, {'children': [], 'key': '"I"', 'title': '"I"'}, {'children': [], 'key': '"J"', 'title': '"J"'}], 'key': 'ROOT', 'title': ' ROOT '} If you observer closely, The difference between the output i want and the output i got is that the child item are being printed separately as a dictionary. But, i want to include them inside the children list of the parent. For example the key "A" doesn't have any children. So, it should be printed as a separate dictionary and its working this way as of now. But if you look at the key "B" it has keys "C" and "E" as its children. so "E and C" should be included inside B's Children. moreover the keys "E and C" have children themselves, for "C" the child "D" and for "E" the children are "F" and "G" so "F and G" should be included in the children list of "E" . "D" should be inside children list of "C". any help would be really appreciated! Thanks in advance. | Simple recursion: def tree(json_data): def _tree(obj): if isinstance(obj, dict): return [{'title': k, 'key': k, 'children': _tree(v)} for k, v in obj.items()] elif isinstance(obj, list): return [elem for lst in map(_tree, obj) for elem in lst] else: return [] return {'title': 'ROOT', 'key': '0', 'children': _tree(json_data)} Test: >>> json_data = json.loads("""{ ... "A": 1, ... "B": [ ... { ... "C": { ... "D": "0" ... }, ... "E": { ... "F": "1", ... "G": [ ... { ... "H": { ... "I": 12, ... "J": 21 ... } ... } ... ] ... } ... } ... ] ... }""") >>> pp(tree(json_data), width=1) {'title': 'ROOT', 'key': '0', 'children': [{'title': 'A', 'key': 'A', 'children': []}, {'title': 'B', 'key': 'B', 'children': [{'title': 'C', 'key': 'C', 'children': [{'title': 'D', 'key': 'D', 'children': []}]}, {'title': 'E', 'key': 'E', 'children': [{'title': 'F', 'key': 'F', 'children': []}, {'title': 'G', 'key': 'G', 'children': [{'title': 'H', 'key': 'H', 'children': [{'title': 'I', 'key': 'I', 'children': []}, {'title': 'J', 'key': 'J', 'children': []}]}]}]}]}]} | 4 | 6 |
73,698,100 | 2022-9-13 | https://stackoverflow.com/questions/73698100/pandas-how-to-merging-on-multiple-columns-not-working-and-other-solutions-regula | I have an old dataframe (dfo) that someone decided to add additional columns (notes) to and this data set does not have a key column. I also have a new dataframe (dfn) that is suppose to represent the same data but does not have the notes columns. I was asked just to transfer the old notes to the new dataframe. I have been able to get matches for some rows but not all. What I want to is to find if there are additional tricks to merging on multiple columns or is there alternatives that might fit better. below is example data from the original csv that did not merge then placing it in the Dictionaries it works just fine. example_new = {'S1': ['W', 'CD', 'W', 'W', 'CD', 'W', 'CD'], 'DateTime': ['6/9/2021 13:26', '6/9/2021 13:26', '6/9/2021 13:26', '6/9/2021 13:26', '6/9/2021 13:26', '6/9/2021 13:26', '6/9/2021 13:26'], 'N1': ['N', 'Y', 'N', 'Y', 'N', 'N', 'N'], 'AC': ['C253', '100', '1064', '1920', '1996', '100', 'C253'], 'PS': ['C_041', 'C_041', 'C_041', 'C_041', 'C_041', 'C_041', 'C_041'], 'TI': ['14-2-EP', '14-2-EP', '14-2-EP', '14-2-EP', '14-2-EP', '14-2-EP', '14-2-EP'], 'U': [' ', 'N', 'U/C', 'T', 'C', 'N', 'P'], 'LN': ['Eddie', 'Eddie', 'Eddie', 'Eddie', 'Eddie', 'Eddie', 'Eddie'], 'R2': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]} example_old = {'S1': ['W', 'W', 'W', 'W', 'CD', 'CD'], 'DateTime': ['6/9/2021 13:26', '6/9/2021 13:26', '6/9/2021 13:26', '6/9/2021 13:26', '6/9/2021 13:26', '6/9/2021 13:26'], 'N1': ['N', 'Y', 'N', 'N', 'N', 'Y'], 'AC': ['1064', '1920', 'C253', '100', 'C253', '100'], 'PS': ['C_041', 'C_041', 'C_041', 'C_041', 'C_041', 'C_041'], 'TI': ['14-2-EP', '14-2-EP', '14-2-EP', '14-2-EP', '14-2-EP', '14-2-EP'], 'U': ['U/C', 'T', ' ', 'N', 'P', 'N'], 'LN': ['Eddie', 'Eddie', 'Eddie', 'Eddie', 'Eddie', 'Eddie'], 'R2': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], 'Note1': ['Y', 'Y', 'Y', 'Y', 'N', 'N']} dfo = pd.DataFrame.from_dict(example_old) dfn = pd.DataFrame.from_dict(example_new) dfn['DateTime'] = pd.to_datetime(dfnt['DateTime']) dfo['DateTime'] = pd.to_datetime(dfot['DateTime']) The code: dfo = dfo # shape (10250, 10) the additional columns are notes. # columns: ['S1', 'DateTime', 'N1', 'AC', 'PS', 'TI', 'U', 'LN', 'R2', 'Note1'] dfn = dfn # shape (13790, 9) there are a lot or corrects to the prior data # and additional new data. # columns: ['S1', 'DateTime', 'N1', 'AC', 'PS', 'TI', 'U', 'LN', 'R2'] # to make sure that the dtypes are the same. # I read that making sure the object columns are all strings works better. Really Good tip!! str_col = ['S1', 'N1', 'AC', 'PS', 'TI', 'U', 'LN', 'R2'] dfo[str_col] = dfo[str_col].astype(str) dfn[str_col] = dfn[str_col].astype(str) dfo = dfo.apply(lambda x: x.str.strip() if x.dtype == "object" else x) dfn = dfn.apply(lambda x: x.str.strip() if x.dtype == "object" else x) # I read that encoding the columns might show characters that are hidden. # I did not find this helpful for my data. # u = dfo.select_dtypes(include=[object]) # dfo[u.columns] = u.apply(lambda x: x.str.encode('utf-8')) # u = dfn.select_dtypes(include=[object]) # dfn[u.columns] = u.apply(lambda x: x.str.encode('utf-8')) # test / check the dtypes otypes = dfo.columns.to_series().groupby(dfo.dtypes).groups ntypes = dfn.columns.to_series().groupby(dfn.dtypes).groups # display results... 
dtypes In [95]: print(otypes) Out[74]: {datetime64[ns]: ['DateTime'], object: ['S1', 'N1', 'AC', 'PS', 'TI', 'U', 'LN', 'R2', 'Note1']} In [82]: print(ntypes) Out[82]: {datetime64[ns]: ['DateTime'], object: ['S1', 'N1', 'AC', 'PS', 'TI', 'U', 'LN', 'R2']} # Time to merge subset = ['S1', 'DateTime', 'N1', 'AC', 'PS', 'TI', 'U', 'LN', 'R2'] dfm = pd.merge(dfn,dfo, how="left", on=subset) About 75% of the data is merging. I have done spot checks and there is a lot more data that could merge but it is not. What else should I do to get the remaining 15~25% to merge? If you want to see the data in the csv file I have included a link. Github to csv files | If I understand correctly, the "25% that do not merge" refers to the length of the data frame that results from an inner join: print('inner', pd.merge(dfn, dfo, how="inner", on=subset).shape) # (7445, 10) Which means that roughly 2700 rows of the old data frame are not merged with the new data frame. It seems to me your code works fine. It is just that these "missing" rows in the old data frame do not have a counterpart in the new data frame. If you create a data frame of the data that did not merge, this can be checked: dfi = pd.merge(dfn, dfo, how="inner", on=subset) dfi['inner'] = 1 df_o_nm = pd.merge(dfo, dfi, how="left", on=subset) df_o_nm = df_o_nm.loc[df_o_nm['inner'] != 1][subset] # not merged data from old data frame Take for example the first row in df_o_nm: S1 DateTime N1 AC PS TI U LN R2 CD 2018-02-01 11:37:00 N 1005 C_031 15-85-SR U/L Eaton nan If only a subset of columns is compared (specifically S1, N1, AC, PS, TI, U), exactly one potential row can be found in dfn, which has a different DateTime and LN value: S1 DateTime N1 AC PS TI U LN R2 CD 2020-07-01 12:59:00 N 1005 C_031 15-85-SR U/L Bob nan Differences in the two data frames seem to be mostly caused by the columns DateTime, LN, PS and TI. So a more thorough (still simple) comparison can be done like this: for index, row in df_o_nm.iterrows(): df_sel = dfn.loc[ (dfn['S1']==row['S1']) & (dfn['N1']==row['N1']) & (dfn['AC']==row['AC']) & (dfn['U']==row['U'])] if len(df_sel) == 0: print('no matching data subset found.') else: print(f'{len(df_sel)} rows matching subset of columns found') for idx, row_sel in df_sel.iterrows(): for col in ['DateTime', 'LN', 'PS', 'TI']: if row[col] != row_sel[col]: print(f"{idx}: {col}: {row[col]} --> {row_sel[col]}") print('---') | 4 | 1 |
73,638,290 | 2022-9-7 | https://stackoverflow.com/questions/73638290/python-on-mac-is-it-safe-to-set-objc-disable-initialize-fork-safety-yes-globall | Some python processes crash with: objc[51435]: +[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug. Those are processes using child shells, forking threads, etc. MacOS blocks them for some security reasons (which I am not sure what are, but that is what people say) The solution is to disable this security check: export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES Which is fine for known libraries and dependencies, and within the currently running shell. Is it safe to set it as a global environment variable, disabling this check globally in my local mac machine? | Apple changed the way fork() behaves in High Sierra (>= 10.13). If enabled, the OBJC_DISABLE_INITIALIZE_FORK_SAFETY variable turns off the immediate crash behaviour that their newer ObjectiveC framework otherwise enforces by default, as part of this change. Your question "is it safe to set it as a global environment variable" depends on your definition of "safe" in this context. It's safe in the sense your computer won't burst into flames. It's unsafe in the sense it may mask crash information that would be otherwise presented by an app that goes awry, and may allow a fork-bomb type process to crash your computer. So if you only have one use case where setting the flag is strictly necessary, then it's best to localise its setting to just that script/scenario. | 7 | 10 |
73,708,409 | 2022-9-13 | https://stackoverflow.com/questions/73708409/resolved-geopandas-sjoin-nearest-where-dataframes-share-a-common-attribute | What is the most elegant way to group df1 and df2 by admin region as a constraint on a near spatial join? Data I have two spatial dataframes, df1 and df2. Both are from polygon shapefiles. The two dataframes have a common attribute, ADM, which is the administrative area that contains their centroids. df1: df1_ID year ADM geometry 9 2015 170 MULTIPOLYGON (((-1641592.97... 10 2015 170 MULTIPOLYGON (((-1641621.22... 11 2015 169 MULTIPOLYGON (((-1641621.22... 12 2011 169 MULTIPOLYGON (((-1641367.00... 13 1985 169 MULTIPOLYGON (((-1641367.00... ... ... ... ... 21 2007 170 MULTIPOLYGON (((-1641367.00... 22 2013 170 MULTIPOLYGON (((-1641367.00... 23 2013 169 MULTIPOLYGON (((-1641367.00... 24 2011 169 MULTIPOLYGON (((-1641367.00... df2: df2_ID settlement ADM geometry 0 7 169 MULTIPOLYGON (((-1639892.85... 1 7 170 MULTIPOLYGON (((-1645683.51... 2 4619 170 MULTIPOLYGON (((-1641531.18... Desired spatial join output I want to join the attributes of df2 onto df1 (left join) based on their spatial proximity: the nearest df2 feature that is within 500 meters of a df1 feature is joined onto the df1 feature row. However, the function should only consider df2 features that are in the same ADM region as the df1 features in question. Map: df1 (green), df2 (purple), and the admin boundary (orange) Geopandas.sjoin_nearest() can easily perform the near join, but it does not have an option to run "by group." For example, using just sjoin_nearest() would produce the following result. Notice that Feature 11 from df1 joins with Feature 2 from df2, and 23 joins with 1, despite being in different admin areas. df1_Near_df2 = sjoin_nearest(df1, df2, how='left', max_distance=500) Sjoin_nearest() without grouping: df1_ID year ADM_left Near_df2 ADM_right 9 2015 170 2 170 10 2015 170 2 170 11 2015 169 2 170 12 2011 169 0 169 13 1985 169 1 169 ... ... ... ... ... 21 2007 170 1 170 22 2013 170 1 170 23 2013 169 1 170 24 2011 169 0 169 Desired result: sjoin_nearest() grouped by ADM: df1_ID year ADM_left Near_df2 ADM_right 9 2015 170 2 170 10 2015 170 2 170 11 2015 169 0 169 12 2011 169 0 169 13 1985 169 0 169 ... ... ... ... ... 21 2007 170 1 170 22 2013 170 1 170 23 2013 169 0 169 24 2011 169 0 169 Notes PS: I want to complete this task in Python. That said, for reference there is an ArcGIS Pro tool called Near By Group that does exactly this. The tool worked well on the sample dataset shown here, but it takes hours and seems to error out on much larger (country-sized) datasets. https://www.arcgis.com/home/item.html?id=37dbbaa29baa467d9da8e27d87d8ad45 https://www.esri.com/arcgis-blog/products/arcgis-desktop/analytics/finding-the-nearest-feature-with-the-same-attributes/ PPS: The closest Stack Overflow thread I have found is here: How to sjoin using geopandas using a common key column and also location. I have created a new question for the following reasons: This is for sjoin_nearest() instead of sjoin(). There may be a spatial solution to my problem given that the common field for the grouping is an administrative area, whereas the linked example is grouping based on common time value. I am providing a minimal reproducible example. SOLUTION @rob-raymond's updating sharding approach (see comments) works beautifully. It processed the country-wide version of this dataset in just minutes. 
One thing to keep in mind: it requires geopandas v0.11+, which in turn requires a more recent Python version. (Confirmed that Python v10 and v11 are successful). Note that if you are running your code via Anaconda, you need to update Python from within Anaconda (e.g. Anaconda Prompt command line). | Have adapted sharding approach. First generate a dict of df2 then use groupby apply to sjoin_nearest() each group to appropriate shard now have settlement associated in way you want import geopandas as gpd df1 = gpd.read_file("https://github.com/gracedoherty/urban_growth/blob/main/df1.zip?raw=true") df2 = gpd.read_file("https://github.com/gracedoherty/urban_growth/blob/main/df2.zip?raw=true") # shard the smaller dataframe into a dict shards = {k:d for k, d in df2.groupby("ADM")} # now just group by ADM, sjoin_nearest appropriate shard df_n = df1.groupby("ADM").apply(lambda d: gpd.sjoin_nearest(d, shards[d["ADM"].values[0]])) df_n.sample(5, random_state=42) year ADM_left geometry index_right ADM_right settlement 44 1999 170 POLYGON ((-1641847.202 472181.866, -1641847.20... 1 170 7 4 2008 169 POLYGON ((-1641197.517 472577.326, -1641225.76... 0 169 7 53 2011 170 POLYGON ((-1641564.730 472153.619, -1641592.97... 1 170 7 42 2013 170 POLYGON ((-1641931.943 472181.866, -1641931.94... 1 170 7 10 2015 170 POLYGON ((-1641621.225 472436.091, -1641677.71... 2 170 4619 visualise df_n.explore("settlement", categorical=True, height=300,width=400) dependencies geopandas v0.11+ which in turn drops support for python 3.7 CHANGELOG | 4 | 4 |
73,667,333 | 2022-9-9 | https://stackoverflow.com/questions/73667333/open-ai-gym-environments-dont-render-dont-show-at-all | So I wanted to try some reinforcement learning; I haven't coded anything for a while. In Jupyter Notebooks, when I run this code import gym env = gym.make("MountainCar-v0") env.reset() done = False while not done: action = 2 # always go right! env.step(action) env.render() it just tries to render but can't: the hourglass at the top of the window is showing, but it never renders anything and I can't do anything from there. Same with this code import gym env_name = "MountainCar-v0" env = gym.make(env_name) env.reset() for _ in range(200): action = env.action_space.sample() env.step(action) env.render() Neither of these works in Jupyter notebooks, PyCharm, or the terminal. I'm on Windows. Couldn't find anything similar to this online. Yes, I'm new to this. Edit: I did this # Install latest stable version from PyPI !pip install -U pysdl2 # Install latest development version from GitHub !pip install -U git+https://github.com/py-sdl/py-sdl2.git and it now says error: windlib not available I tried !pip install windlib but still can't fix the error. | Use an older version that supports your current version of Python. I solved the problem using gym 0.17.3: pip install gym==0.17.3 and the code: import gym env = gym.make("MountainCar-v0") state = env.reset() done = False while not done: action = 2 # always go right! new_state, reward, done, info = env.step(action) print(new_state, reward) env.render(mode="human") env.close() | 5 | 8 |
73,708,478 | 2022-9-13 | https://stackoverflow.com/questions/73708478/the-git-or-python-command-requires-the-command-line-developer-tools | This knowledge post isn't a duplication of other similar ones, since it's related to 12/September/2022 Xcode update, which demands a different kind of solution I have come to my computer today and discovered that nothing runs on my terminal Every time I have opened my IDE (VS Code or PyCharm), it has given me this message in the start of the terminal. I saw so many solutions, which have said to uninstall pyenv and install python via brew, which was a terrible idea, because I need different python versions for different projects. Also, people spoke a lot about symlinks, which as well did not make any sense, because everything was working until yesterday. Furthermore, overwriting .oh-my-zsh with a new built one did not make any difference. | I was prompted to reinstall commandLine tools over and over when trying to accept the terms I FIXED this by opening xcode and confirming the new update information | 33 | 80 |
73,708,230 | 2022-9-13 | https://stackoverflow.com/questions/73708230/pandas-groupby-and-transform-based-on-multiple-columns | I have seen a lot of similar questions but none seem to work for my case. I'm pretty sure this is just a groupby transform but I keep getting KeyError along with axis issues. I am trying to groupby filename and check count where pred != gt. For example Index 2 is the only one for f1.wav so 1, and Index (13,14,18) for f2.wav so 3. df = pd.DataFrame([{'pred': 0, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 2, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f1.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f2.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f2.wav'}, {'pred': 2, 'gt': 2, 'filename': 'f2.wav'}, {'pred': 0, 'gt': 2, 'filename': 'f2.wav'}, {'pred': 0, 'gt': 2, 'filename': 'f2.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f2.wav'}, {'pred': 0, 'gt': 0, 'filename': 'f2.wav'}, {'pred': 2, 'gt': 2, 'filename': 'f2.wav'}, {'pred': 0, 'gt': 2, 'filename': 'f2.wav'}, {'pred': 2, 'gt': 0, 'filename': 'f2.wav'}]) pred gt filename 0 0 0 f1.wav 1 0 0 f1.wav 2 2 0 f1.wav 3 0 0 f1.wav 4 0 0 f1.wav 5 0 0 f1.wav 6 0 0 f1.wav 7 0 0 f1.wav 8 0 0 f1.wav 9 0 0 f1.wav 10 0 0 f2.wav Expected output pred gt filename counts 0 0 0 f1.wav 1 1 0 0 f1.wav 1 2 2 0 f1.wav 1 3 0 0 f1.wav 1 4 0 0 f1.wav 1 5 0 0 f1.wav 1 6 0 0 f1.wav 1 7 0 0 f1.wav 1 8 0 0 f1.wav 1 9 0 0 f1.wav 1 10 0 0 f2.wav 3 11 0 0 f2.wav 3 12 2 2 f2.wav 3 13 0 2 f2.wav 3 14 0 2 f2.wav 3 15 0 0 f2.wav 3 16 0 0 f2.wav 3 17 2 2 f2.wav 3 18 0 2 f2.wav 3 19 2 0 f2.wav 3 I was thinking df.groupby('filename').transform(lambda x: x['pred'].ne(x['gt']).sum(), axis=1) but I get TypeError: Transform function invalid for data types | .transform operates on each column individually, so you won't be able to access both 'pred' and 'gt' in a transform operation. This leaves you with 2 options: aggregate and reindex or join back to the original shape pre-compute the boolean array and .transform on that approach 2 will probably be the fastest here: df['counts'] = ( (df['pred'] != df['gt']) .groupby(df['filename']).transform('sum') ) print(df) pred gt filename counts 0 0 0 f1.wav 1 1 0 0 f1.wav 1 2 2 0 f1.wav 1 3 0 0 f1.wav 1 4 0 0 f1.wav 1 5 0 0 f1.wav 1 6 0 0 f1.wav 1 7 0 0 f1.wav 1 8 0 0 f1.wav 1 9 0 0 f1.wav 1 10 0 0 f2.wav 4 11 0 0 f2.wav 4 12 2 2 f2.wav 4 13 0 2 f2.wav 4 14 0 2 f2.wav 4 15 0 0 f2.wav 4 16 0 0 f2.wav 4 17 2 2 f2.wav 4 18 0 2 f2.wav 4 19 2 0 f2.wav 4 Note that f2.wav has 4 instances where 'pre' != 'gt' (index 13, 14, 18, 19) | 3 | 7 |
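For completeness, the answer's first option ("aggregate and reindex or join back") can be sketched like this, using the same `df`; it gives the same `counts` column as the transform-based version shown above:

```python
# Option 1: aggregate the mismatches per file, then map the result back onto each row.
mismatch = df['pred'] != df['gt']
counts_per_file = mismatch.groupby(df['filename']).sum()
df['counts'] = df['filename'].map(counts_per_file)
```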
73,698,041 | 2022-9-13 | https://stackoverflow.com/questions/73698041/how-retain-grad-in-pytorch-works-i-found-its-position-changes-the-grad-result | in a simple test in pytorch, I want to see grad in a non-leaf tensor, so I use retain_grad(): import torch a = torch.tensor([1.], requires_grad=True) y = torch.zeros((10)) gt = torch.zeros((10)) y[0] = a y[1] = y[0] * 2 y.retain_grad() loss = torch.sum((y-gt) ** 2) loss.backward() print(y.grad) it gives me a normal output: tensor([2., 4., 0., 0., 0., 0., 0., 0., 0., 0.]) but when I use retain grad() before y[1] and after y[0] is assigned: import torch a = torch.tensor([1.], requires_grad=True) y = torch.zeros((10)) gt = torch.zeros((10)) y[0] = a y.retain_grad() y[1] = y[0] * 2 loss = torch.sum((y-gt) ** 2) loss.backward() print(y.grad) now the output changes to: tensor([10., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) I can't understand the result at all. | Okay so what's going on is really weird. What .retain_grad() essentially does is convert any non-leaf tensor into a leaf tensor, such that it contains a .grad attribute (since by default, pytorch computes gradients to leaf tensors only). Hence, in your first example, after calling y.retain_grad(), it basically converted y into a leaf tensor with an accessible .grad attribute. However, in your second example, you initially converted the entire y tensor into a leaf tensor; then, you created a non-leaf tensor (y[1]) within your leaf tensor (y), which is what caused the confusion. y = torch.zeros((10)) # y is a non-leaf tensor y[0] = a # y[0] is a non-leaf tensor y.retain_grad() # y is a leaf tensor (including y[1]) y[1] = y[0] * 2 # y[1] is a non-leaf tensor, BUT y[0], y[2], y[3], ..., y[9] are all leaf tensors! The confusing part is: y[1] after calling y.retain_grad() is now a leaf tensor with a .grad attribute. However, y[1] after the computation (y[1] = y[0] * 2) is now not a leaf tensor with a .grad attribute; it is now treated as a new non-leaf variable/tensor. Therefore, when calling loss.backward(), the Chain rule of the loss w.r.t y, and particularly looking at the Chain rule of the loss w.r.t leaf y[1] now looks something like this: | 13 | 9 |
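A short hand-check of the numbers, added for clarity (it is not part of the accepted answer, whose closing illustration is not shown here). In the second example the retained "leaf" is the version of `y` captured before `y[1]` is overwritten, so slot 0 collects two chain-rule contributions while the old slot 1 is discarded by the overwrite and receives gradient 0:

```python
# loss = sum((y - 0)**2), with y[0] = 1 and y[1] = 2 * y[0] = 2
y0, y1 = 1.0, 2.0
direct = 2 * y0              # d(loss)/d(y[0]) from the y[0]**2 term        -> 2
through_y1 = (2 * y1) * 2    # d(loss)/d(y[1]) * d(y[1])/d(y[0]) = 4 * 2    -> 8
print(direct + through_y1)   # 10.0, matching tensor([10., 0., ..., 0.])
# In the first example retain_grad() runs after both assignments, so each retained
# slot simply gets d(loss)/d(y[i]) = 2*y[i]: [2., 4., 0., ..., 0.].
```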
73,700,044 | 2022-9-13 | https://stackoverflow.com/questions/73700044/how-to-update-two-columns-with-different-values-on-the-same-condition-in-pyspark | I have a DataFrame with column a. I would like to create two additional columns (b and c) based on column a. I could solve this problem doing the same thing twice: df = df.withColumn('b', when(df.a == 'something', 'x'))\ .withColumn('c', when(df.a == 'something', 'y')) I would like to avoid doing the same thing twice, as the condition on which b and c are updated are the same, and also there are a lot of cases for column a. Is there a smarter solution to this problem? Could "withColumn" accept multiple columns perhaps? | A struct is best suited in such a case. See below example. spark.sparkContext.parallelize([('something',), ('foobar',)]).toDF(['a']). \ withColumn('b_c_struct', func.when(func.col('a') == 'something', func.struct(func.lit('x').alias('b'), func.lit('y').alias('c')) ) ). \ select('*', 'b_c_struct.*'). \ show() # +---------+----------+----+----+ # | a|b_c_struct| b| c| # +---------+----------+----+----+ # |something| {x, y}| x| y| # | foobar| null|null|null| # +---------+----------+----+----+ Just use a drop('b_c_struct') after the select to remove the struct column and keep the individual fields. | 4 | 2 |
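As a hedged addition to the struct-based answer: if you are on Spark 3.3 or newer, `DataFrame.withColumns` accepts a dict of column expressions, which answers the "can withColumn accept multiple columns" part directly. A sketch assuming the same `df` and condition as above:

```python
from pyspark.sql import functions as func

cond = func.col('a') == 'something'
df = df.withColumns({
    'b': func.when(cond, func.lit('x')),
    'c': func.when(cond, func.lit('y')),
})
```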
73,679,164 | 2022-9-11 | https://stackoverflow.com/questions/73679164/how-to-turn-off-automatic-deletion-of-unused-imports-in-vs-code | VS Code automatically deletes unused imports and even more annoyingly it deletes imports that are used, but commented out. So for example, if I was to save this code: from pprint import pprint # pprint("foo") It would remove the first line. So how can I turn this feature off, because it constantly forces me to rewrite the imports? | Check your settings.json (User and Workspace) for the following settings, delete this configuration or change it to false. "editor.codeActionsOnSave": { "source.fixAll": true, }, | 4 | 7 |
73,694,266 | 2022-9-12 | https://stackoverflow.com/questions/73694266/how-to-specify-date-bin-ranges-for-seaborn-displot | Problem statement I am creating a distribution plot of flood events per N year periods starting in 1870. I am using Pandas and Seaborn. I need help with... specifying the date range of each bin when using sns.displot, and clearly representing my bin size specifications along the x axis. To clarify this problem, here is the data that I am working with, what I have tried, and a description of the desired output. The Data The data I am using is available from the U.S. Weather service. import pandas as pd import bs4 import urllib.request link = "https://water.weather.gov/ahps2/crests.php?wfo=jan&gage=jacm6&crest_type=historic" webpage=str(urllib.request.urlopen(link).read()) soup = bs4.BeautifulSoup(webpage) tbl = soup.find('div', class_='water_information') vals = tbl.get_text().split(r'\n') tcdf = pd.Series(vals).str.extractall(r'\((?P<Rank>\d+)\)\s(?P<Stage>\d+.\d+)\sft\son\s(?P<Date>\d{2}\/\d{2}\/\d{4})')\ .reset_index(drop=True) tcdf['Stage'] = tcdf.Stage.astype(float) total_crests_events = len(tcdf) tcdf['Rank'] = tcdf.Rank.astype(int) tcdf['Date'] = pd.to_datetime(tcdf.Date) What works I am able to plot the data with Seaborn's displot, and I can manipulate the number of bins with the bins command. The second image is closer to my desired output. However, I do not think that it's clear where the bins start and end. For example, the first two bins (reading left to right) clearly start before and end after 1880, but the precise years are not clear. import seaborn as sns # fig. 1: data distribution using default bin parameters sns.displot(data=tcdf,x="Date") # fig. 2: data distribution using 40 bins sns.displot(data=tcdf,x="Date",bins=40) What fails I tried specifying date ranges using the bins input. The approach is loosely based on a previous SO thread. my_bins = pd.date_range(start='1870',end='2025',freq='5YS') sns.displot(data=tcdf,x="Date",bins=my_bins) This attempt, however, produced a TypeError TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe' This is a long question, so I imagine that some clarification might be necessary. Please do not hesitate to ask questions in the comments. Thanks in advance. | Seaborn internally converts its input data to numbers so that it can do math on them, and it uses matplotlib's "unit conversion" machinery to do that. So the easiest way to pass bins that will work is to use matplotlib's date converter: sns.displot(data=tcdf, x="Date", bins=mpl.dates.date2num(my_bins)) | 5 | 6 |
73,667,667 | 2022-9-9 | https://stackoverflow.com/questions/73667667/installing-odoo-on-mac-raises-gevent-error | I'm following this tutorial to install Odoo 15 on Mac, but I'm getting this error when running pip install -r requirements.txt: Error compiling Cython file: ------------------------------------------------------------ ... cdef load_traceback cdef Waiter cdef wait cdef iwait cdef reraise cpdef GEVENT_CONFIG ^ ------------------------------------------------------------ src/gevent/_gevent_cgreenlet.pxd:181:6: Variables cannot be declared with 'cpdef'. Use 'cdef' instead. I have found several documents addressing cython errors, but none addressing the specific exception I'm getting. | Odoo 15 currently seems to be incompatible with Python 3.10. You can get rid of that error by upgrading gevent/greenlet to newer versions in the requirements file. I've successfully tried gevent 21.12.0 and greenlet 1.1.3, BUT then you'll run into more trouble due to changes in the Python collections package. Here is a link to the issue regarding the gevent error for reference. Downgrade to Python 3.9.0 for the moment to continue with the installation. | 3 | 1 |
73,693,104 | 2022-9-12 | https://stackoverflow.com/questions/73693104/valueerror-exceeds-the-limit-4300-for-integer-string-conversion | >>> import sys >>> sys.set_int_max_str_digits(4300) # Illustrative, this is the default. >>> _ = int('2' * 5432) Traceback (most recent call last): ... ValueError: Exceeds the limit (4300) for integer string conversion: value has 5432 digits. Python 3.10.7 introduced this breaking change for type conversion. Documentation: Integer string conversion length limitation Actually I don't understand why this was introduced and where does the default value of 4300 come from? Sounds like an arbitrary number. | See github issue CVE-2020-10735: Prevent DoS by large int<->str conversions #95778: Problem A Denial Of Service (DoS) issue was identified in CPython because we use binary bignumβs for our int implementation. A huge integer will always consume a near-quadratic amount of CPU time in conversion to or from a base 10 (decimal) string with a large number of digits. No efficient algorithm exists to do otherwise. It is quite common for Python code implementing network protocols and data serialization to do int(untrusted_string_or_bytes_value) on input to get a numeric value, without having limited the input length or to do log("processing thing id %s", unknowingly_huge_integer) or any similar concept to convert an int to a string without first checking its magnitude. (http, json, xmlrpc, logging, loading large values into integer via linear-time conversions such as hexadecimal stored in yaml, or anything computing larger values based on user controlled inputsβ¦ which then wind up attempting to output as decimal later on). All of these can suffer a CPU consuming DoS in the face of untrusted data. Everyone auditing all existing code for this, adding length guards, and maintaining that practice everywhere is not feasible nor is it what we deem the vast majority of our users want to do. This issue has been reported to the Python Security Response Team multiple times by a few different people since early 2020, most recently a few weeks ago while I was in the middle of polishing up the PR so itβd be ready before 3.11.0rc2. Mitigation After discussion on the Python Security Response Team mailing list the conclusion was that we needed to limit the size of integer to string conversions for non-linear time conversions (anything not a power-of-2 base) by default. And offer the ability to configure or disable this limit. The Python Steering Council is aware of this change and accepts it as necessary. Further discussion can be found on the Python Core Developers Discuss thread Int/str conversions broken in latest Python bugfix releases. I found this comment by Steve Dower to be informative: Our apologies for the lack of transparency in the process here. The issue was first reported to a number of other security teams, and converged in the Python Security Response Team where we agreed that the correct fix was to modify the runtime. The delay between report and fix is entirely our fault. The security team is made up of volunteers, our availability isnβt always reliable, and thereβs nobody βin chargeβ to coordinate work. Weβve been discussing how to improve our processes. However, we did agree that the potential for exploitation is high enough that we didnβt want to disclose the issue without a fix available and ready for use. We did work through a number of alternative approaches, implementing many of them. 
The code doing int(gigabyte_long_untrusted_string) could be anywhere inside a json.load or HTTP header parser, and can run very deep. Parsing libraries are everywhere, and tend to use int indiscriminately (though they usually handle ValueError already). Expecting every library to add a new argument to every int() call would have led to thousands of vulnerabilities being filed, and made it impossible for users to ever trust that their systems could not be DoSβd. We agree itβs a heavy hammer to do it in the core, but itβs also the only hammer that has a chance of giving users the confidence to keep running Python at the boundary of their apps. Now, Iβm personally inclined to agree that int->str conversions should do something other than raise. I was outvoted because it would break round-tripping, which is a reasonable argument that I accepted. We can still improve this over time and make it more usable. However, in most cases we saw, rendering an excessively long string isnβt desirable either. That should be the opt-in behaviour. Raising an exception from str may prove to be too much, and could be reconsidered, but we donβt see a feasible way to push out updates to every user of int, so that will surely remain global. | 28 | 12 |
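If the limit itself is the obstacle (rather than untrusted input being the concern), it can be raised or disabled at runtime; a short sketch, noting that any non-zero value must be at least `sys.int_info.str_digits_check_threshold`:

```python
import sys

sys.set_int_max_str_digits(0)   # 0 disables the limit entirely
big = int('2' * 5432)           # now succeeds
# The same limit can be set at startup via the PYTHONINTMAXSTRDIGITS environment
# variable or the -X int_max_str_digits=... interpreter option.
```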
73,693,149 | 2022-9-12 | https://stackoverflow.com/questions/73693149/replace-value-of-variable-directly-inside-f-string | I'm trying to replace a value inside a variable that I use within an f-string. In this case it is a single quote. For example: var1 = "foo" var2 = "bar '" print(f"{var1}-{var2}") Now, I want to get rid of the single quote within var2, but do it directly in the print statement. I've tried: print(f"{var1}-{var2.replace("'","")}") which gives me: EOL while scanning string literal. I do not want to impose a third variable, so no var3 = var2.replace(",","") etc... I would rather not use a regex, but if there is no other way, please tell me how to do it. What is the best way to solve this? | When the contents contains both ' and ", you can use a triple quoted string: >>> print(f'''{var1}-{var2.replace("'","")}''') foo-bar | 4 | 3 |
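Worth noting as a hedged aside: from Python 3.12 onward, PEP 701 allows the quotes inside an f-string expression to reuse the outer quote character, so the exact expression that failed in the question parses as written:

```python
var1 = "foo"
var2 = "bar '"
print(f"{var1}-{var2.replace("'", "")}")   # valid on 3.12+, SyntaxError on earlier versions
```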
73,650,173 | 2022-9-8 | https://stackoverflow.com/questions/73650173/poetry-and-buildkit-mount-type-cache-not-working-when-building-over-airflow-imag | I have 2 examples of docker file and one is working and another is not. The main difference between the 2 is the base image. Simple python base image docker file: # syntax = docker/dockerfile:experimental FROM python:3.9-slim-bullseye RUN apt-get update -qy && apt-get install -qy \ build-essential tini libsasl2-dev libssl-dev default-libmysqlclient-dev gnutls-bin RUN pip install poetry==1.1.15 COPY pyproject.toml . COPY poetry.lock . RUN poetry config virtualenvs.create false RUN --mount=type=cache,mode=0777,target=/root/.cache/pypoetry poetry install Airflow base image docker file: # syntax = docker/dockerfile:experimental FROM apache/airflow:2.3.3-python3.9 USER root RUN apt-get update -qy && apt-get install -qy \ build-essential tini libsasl2-dev libssl-dev default-libmysqlclient-dev gnutls-bin USER airflow RUN pip install poetry==1.1.15 COPY pyproject.toml . COPY poetry.lock . RUN poetry config virtualenvs.create false RUN poetry config cache-dir /opt/airflow/.cache/pypoetry RUN --mount=type=cache,uid=50000,mode=0777,target=/opt/airflow/.cache/pypoetry poetry install Before building the docker file run poetry lock in the same folder as the pyproject.toml file! pyproject.toml file: [tool.poetry] name = "Airflow-test" version = "0.1.0" description = "" authors = ["Lorem ipsum"] [tool.poetry.dependencies] python = "~3.9" apache-airflow = { version = "2.3.3", extras = ["amazon", "crypto", "celery", "postgres", "hive", "jdbc", "mysql", "ssh", "slack", "statsd"] } prometheus_client = "^0.8.0" isodate = "0.6.1" dacite = "1.6.0" sqlparse = "^0.3.1" python3-openid = "^3.1.0" flask-appbuilder = ">=3.4.3" alembic = ">=1.7.7" apache-airflow-providers-google = "^8.1.0" apache-airflow-providers-databricks = "^3.0.0" apache-airflow-providers-amazon = "^4.0.0" pendulum = "^2.1.2" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" In order to build the images this is the command that I use: DOCKER_BUILDKIT=1 docker build --progress=plain -t airflow-test -f Dockerfile . For both images the first time they build poetry install will need to download all dependencies. The interesting part is, the second time I build the image, the python-based image is a lot faster as the dependencies are already cached, but the airflow-based image will try and download all 200 dependencies once again. From what O know by specifying --mount=type=cache that directory will be stored in the image repository so it can be reused next time the image is build. By this you trim the final image size. When running the image how do the dependencies appear? If I run docker run -it --user 50000 --entrypoint /bin/bash image a simple python import is working on the airflow image but not on the python image. When and how will the dependencies be reattached to the image? If you want to try it out, here is a dummy project that can be cloned locally and played around with: https://github.com/ioangrozea/Docker-dummy | Maybe it is not answering the question directly but I think what you are trying to do makes very little sense in the first place, so I would recommend you to change the approach, completely, especially that what you are trying to achieve is very well described in The Airflow Official image documentation including plenty of examples to follow. 
And what you are trying to achieve will (no matter how hard you try) end up with the image that is more than 200 MB bigger (at least 20%) than what you can try to get it if you follow the official documentation. Using poetry to build that image makes very little sense and is not recommended (and there is absolutely no need to use poetry in this case). See the comment here. While there are some successes with using other tools like poetry or pip-tools, they do not share the same workflow as pip - especially when it comes to constraint vs. requirements management. Installing via Poetry or pip-tools is not currently supported. If you wish to install airflow using those tools you should use the constraints and convert them to appropriate format and workflow that your tool requires. Poetry and pip have completely different way of resolving dependencies and while poetry is a cool tool for managing dependencies of small projects and I really like poetry, they opinionated choice of treating libraries and applications differently, makes it not suitable to manage dependencies for Airflow which is a both - application to install and library for developers to build on top of and Poetry's limitation are simply not working for Airflow. I explained it more in the talk I gave last year and you can see it if you are interested in "why". Then - how to solve your problem? Don't use --mount-type cache in this case and poetry. Use multi-segmented image of Apache Airflow and "customisation" option rather than "extending" the image. This will give you a lot more savings - because you will not have "build-essentials" added to your final image (on their own they add ~200 MB to the image size and the only way to get rid of them is to split your image into two segments - the one that has "build-essentials" and allows you to build Python packages, and the one that you use as "runtime" where you only copy the "build" python libraries. This is exactly the approach that Airfow Official Python image takes - it's highly optimised for size and speed of rebuilds and while internals of it are pretty complex, the actual building of your highly optimised, completely custom image are as simple as downloading the airflow Dockerfile and running the right docker buildx build . --build-arg ... --build-arg ... command - and the Airflow Dockerfile will do all the optimisations for you - resulting in as small image as humanly possible, and also it allows you to reuse build cache - especially if you use buildkit - which is a modern, slick and very well optimised way of building the images (Airflow Dockerfile requires buildkit as of Airflow 2.3). You can see all the details on how to build the customised image here - where you have plenty of examples and explanation why it works the way it works and what kind of optimisations you can get. There are examples on how you can add dependencies, python packages etc. While this is pretty sophisticated, you seem to do sophisticated thing with your image, that's why I am suggesting you follow that route. Also, if you are interested in other parts of reasoning why it makes sense, you can watch my talk from Airflow Summit 2020 - while the talk was given 2 years ago and some small details changed, the explanation on how and why building the image the way we do in Airflow still holds very strongly. It got a little simpler since the talk was give (i.e. the only thing you need not is Dockerfile, no full Airflow sources are needed) and you need to use buildkit - all the rest remains the same however. 
| 4 | 1 |
73,685,400 | 2022-9-12 | https://stackoverflow.com/questions/73685400/regex-to-match-the-first-occurance-starting-from-the-end-of-the-string | How do you match the first occurance of start, starting from the end of the string? I have tried it with a negative lookahead but instead I get start\nfoo\nmoo\nstart\nfoo\ndoo as match import re pattern='(start[\s\S]*$)(?=$)' string='start\nfoo\nmoo\nstart\nfoo\ndoo' re.search(pattern, string) expected match: start\nfoo\ndoo | You can use this code: string='start\nfoo\nmoo\nstart\nfoo\ndoo' print (re.findall(r'(?s).*(\bstart\b.*)', string)) ##> ['start\nfoo\ndoo'] RegEx Breakup: (?s): Enable single line or DOTALL mode to make dot match line break as well .*: Match longest possible match including line breaks (\bstart\b.*): Match word start and everything after that till end in capture group #1. \b are necessary to avoid it matching restart or starting words. PS: Since .* is greedy in nature before start it consume longest possible string before matching last start | 4 | 5 |
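An equivalent single-pattern variant, offered as a sketch rather than the accepted approach: a tempered pattern that anchors on the last occurrence of `start` without needing a capture group or `findall`:

```python
import re

string = 'start\nfoo\nmoo\nstart\nfoo\ndoo'
# Match 'start', then only characters that do not begin another 'start', up to the end.
m = re.search(r'\bstart\b(?:(?!\bstart\b)[\s\S])*\Z', string)
print(m.group(0))   # 'start\nfoo\ndoo'
```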
73,676,321 | 2022-9-11 | https://stackoverflow.com/questions/73676321/how-do-you-simulate-buoyancy-in-games | I have a working code for buoyancy simulation but it's not behaving properly. I have 3 scenarios and the expected behavior as follows: Object initial position is already submerged : (a) expected to move up; (b) then down but not beyond its previous submersion depth; (c) repeat a-b until object stops or floats at surface level; Object initial position is at surface level : (a) remains at that surface level or have a floating effect; Object initial position above the surface level : (a) object falls down to the water, until it reaches a certain depth; (b) expected to move up; (c) then down but not beyond its previous submersion depth; (d) repeat b-c until object stops or floats at surface level; To visualize the above expectations (particularly scenario 3), you may refer to this video (https://www.youtube.com/watch?v=Z_vfP_S5wis) and skip to 00:06:00 time frame. I also have the code as follows: import pygame import sys def get_overlapping_area(rect1, rect2): overlap_width = min(rect1.right, rect2.right) - max(rect1.left, rect2.left) overlap_height = min(rect1.bottom, rect2.bottom) - max(rect1.top, rect2.top) return overlap_width * overlap_height class Player(pygame.sprite.Sprite): def __init__(self, pos): super().__init__() self.image = pygame.Surface((16, 32)) self.image.fill((0, 0, 0)) self.rect = self.image.get_rect(center=pos) self.y_vel = 0 def apply_gravity(self): self.y_vel += GRAVITY def check_buoyancy_collisions(self): buoyant_force = 0 for sprite in buoyant_group: if sprite.rect.colliderect(self.rect): submerged_area = get_overlapping_area(sprite.rect, self.rect) buoyant_force -= sprite.buoyancy * GRAVITY * submerged_area self.y_vel += buoyant_force def update(self): self.apply_gravity() self.check_buoyancy_collisions() self.rect.top += self.y_vel def draw(self, screen: pygame.display): screen.blit(self.image, self.rect) class Fluid(pygame.sprite.Sprite): def __init__(self, pos, tile_size): super().__init__() self.image = pygame.Surface((tile_size, tile_size)) self.image.fill((0, 255, 255)) self.rect = self.image.get_rect(topleft=pos) self.buoyancy = 0.00225 pygame.init() WIDTH, HEIGHT = 500, 700 screen = pygame.display.set_mode((WIDTH, HEIGHT)) TILE_SIZE = 32 GRAVITY = 0.05 buoyant_group = pygame.sprite.Group() player_y = HEIGHT * 3 // 4 # player drop point few pixels below fluid surface # player_y = HEIGHT // 4 # player drop point at fluid surface # player_y = HEIGHT // 8 # player drop point few pixels above fluid surface # Instantiate Player and Fluid objects player = Player((WIDTH // 2, player_y)) for r in range(HEIGHT // 4, HEIGHT * 2, TILE_SIZE): for c in range(0, WIDTH, TILE_SIZE): buoyant_group.add(Fluid(pos=(c, r), tile_size=TILE_SIZE)) # Game Loop while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() player.update() screen.fill((100, 100, 100)) buoyant_group.draw(surface=screen) player.draw(screen=screen) pygame.display.update() Few explanations for the code: get_overlapping_area function : The buoyant force is computed as fluid weight density x displaced fluid volume, and since I'm dealing with 2D the overlapping area between the object and the fluid is computed rather than the displaced fluid volume. Player class : A 16x32 sprite affected by both gravity and buoyant force. It has a check_buoyancy_collisions function responsible for the computation of the upward force on the object. 
Fluid class : A 32x32 sprite responsible for the buoyancy. It has the self.buoyancy attribute calculated enough to overcome the Player object's weight due to gravity, thus an upward resultant force acting on the Player object. player_y varaible : I have provided 3 set-ups to simulate the 3 scenarios mentioned above. You may comment/uncomment them accordingly. PS: I have been on this problem for a week now. Most resources I found talks about the theory. The remaining few resources uses game engines that does the physics for them. | Finally found the solution, though not as perfect as it should be. I'm no physics expert but based on my research, in addition to the buoyant force, there are three drag forces acting on a moving/submerging body namely (1) skin friction drag, (2) form drag, and (3) interference drag. To my understanding, skin friction drag acts on the body's surface parallel to the direction of movement; form drag acts on the body's surface perpendicular to the direction of movement; and interference drag which is generated by combination of fluid flows. In general, these drag forces acts in opposition to the body's movement direction. Thus, for simplicity's sake, we can generalize them into a drag attribute to a specific fluid like so: class Fluid(pygame.sprite.Sprite): def __init__(self, pos, tile_size): super().__init__() self.image = pygame.Surface((tile_size, tile_size)) self.image.fill((0, 255, 255)) self.rect = self.image.get_rect(topleft=pos) self.buoyancy = 0.00215 self.drag = 0.0000080 The total drag/resistance to the moving/submerging body is then proportional to the body's submerged area (area in contact with the fluid). Finally, the resulting drag force is applied opposite to the direction of the body's movement. class Player(pygame.sprite.Sprite): def check_buoyancy_collisions(self): buoyant_force = 0 drag = 0 for sprite in buoyant_group: if sprite.rect.colliderect(self.rect): submerged_area = get_overlapping_area(sprite.rect, self.rect) buoyant_force -= sprite.buoyancy * GRAVITY * submerged_area drag += sprite.drag * submerged_area self.y_vel += buoyant_force if self.y_vel > 0: self.y_vel -= drag else: self.y_vel += drag This is only for the movement in y-direction. I believe drag forces should also be applied on the body's movement in x-direction. Moreover, for a sophisticated simulation, the three types of drags may have to be applied separately. | 4 | 4 |
73,675,431 | 2022-9-10 | https://stackoverflow.com/questions/73675431/printing-a-pdf-with-selenium-chrome-driver-in-headless-mode | I have no problems printing without headless mode, however once I enable headless mode, it just refuses to print a PDF. I'm currently working on an app with a GUI, so I'd rather not have the Selenium webdriver visible to the end user if possible. For this project I'm using an older version of Selenium, 4.2.0. That coupled with Python 3.9. import os from os.path import exists import json from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver import Chrome, ChromeOptions # Paths dir_path = os.getcwd() download_path = os.path.join(dir_path, "letters") chrome_path = os.path.join(dir_path, "chromium\\app\\Chrome-bin\\chrome.exe") user_data_path = os.path.join(dir_path, "sessions") website = "https://www.google.com/" def main(): print_settings = { "recentDestinations": [{ "id": "Save as PDF", "origin": "local", "account": "", }], "selectedDestinationId": "Save as PDF", "version": 2, "isHeaderFooterEnabled": False, "isLandscapeEnabled": True } options = ChromeOptions() options.binary_location = chrome_path options.add_argument("--start-maximized") options.add_argument('--window-size=1920,1080') options.add_argument(f"user-data-dir={user_data_path}") options.add_argument("--headless") options.add_argument('--enable-print-browser') options.add_experimental_option("prefs", { "printing.print_preview_sticky_settings.appState": json.dumps(print_settings), "savefile.default_directory": download_path, # Change default directory for downloads "download.default_directory": download_path, # Change default directory for downloads "download.prompt_for_download": False, # To auto download the file "download.directory_upgrade": True, "profile.default_content_setting_values.automatic_downloads": 1, "safebrowsing.enabled": True }) options.add_argument("--kiosk-printing") driver = Chrome(service=Service(ChromeDriverManager().install()), options=options) driver.get(website) driver.execute_script("window.print();") if exists(os.path.join(user_data_path, "Google.pdf")): print("YAY!") else: print(":(") if __name__ == '__main__': main() | For anyone else coming across this with a similar issue, I fixed it by using the print method described here: Selenium print PDF in A4 format Using my example from above, I replaced: driver.execute_script("window.print();") with: pdf_data = driver.execute_cdp_cmd("Page.printToPDF", print_settings) with open('Google.pdf', 'wb') as file: file.write(base64.b64decode(pdf_data['data'])) And that worked just fine for me. | 3 | 6 |
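One detail the replacement snippet above leaves implicit: `base64` has to be imported for the decode step. A sketch of the full replacement block, mirroring the call shown in the answer:

```python
import base64

# Ask Chrome's DevTools protocol for the rendered PDF and write the decoded bytes to disk.
pdf_data = driver.execute_cdp_cmd("Page.printToPDF", print_settings)
with open('Google.pdf', 'wb') as file:
    file.write(base64.b64decode(pdf_data['data']))
```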
73,679,420 | 2022-9-11 | https://stackoverflow.com/questions/73679420/how-to-extract-a-part-of-the-string-to-another-column | I have a column that contains data like Dummy data: df = pd.DataFrame(["Lyreco A-Type small 2i", "Lyreco C-Type small 4i", "Lyreco N-Part medium", "Lyreco AKG MT 4i small", "Lyreco AKG/ N-Type medium 4i", "Lyreco C-Type medium 2i", "Lyreco C-Type/ SNU medium 2i", "Lyreco K-part small 4i", "Lyreco K-Part medium", "Lyreco SNU small 2i", "Lyreco C-Part large 2i", "Lyreco N-Type large 4i"]) I want to create an extra column that strips the data and gives you the required part of the string(see below) in each row. The extracted column should look like this Column_1 Column_2 Lyreco A-Type small 2i A-Type Lyreco C-Type small 4i C-Type Lyreco N-Part medium N-Part Lyreco STU MT 4i small STU MT Lyreco AKG/ N-Type medium 4i AKG/ N-Type Lyreco C-Type medium 2i C-Type Lyreco C-Type/ SNU medium 2i C-Type/ SNU Lyreco K-part small 4i K-part Lyreco K-Part medium K-Part Lyreco SNU small 2i SNU Lyreco C-Part large 2i C-Part Lyreco N-Type large 4i N-Type How can I extract column 2 from the first column? | Looking at the example you posted, it's enough to split the column values and return "the middle" items. You can make a simple function to encapsulate the logic and apply it to the dataframe. from math import floor df = pd.DataFrame( {'Columns_1': ["Lyreco A-Type small 2i", "Lyreco C-Type small 4i", "Lyreco N-Part medium", "Lyreco AKG MT 4i small", "Lyreco AKG/ N-Type medium 4i", "Lyreco C-Type medium 2i", "Lyreco C-Type/ SNU medium 2i", "Lyreco K-part small 4i", "Lyreco K-Part medium", "Lyreco SNU small 2i", "Lyreco C-Part large 2i", "Lyreco N-Type large 4i" ] } ) def f(row): blocks = row['Columns_1'].split() mid_index = 1 if len(blocks) <= 4 else floor(len(blocks)/2) return ' '.join(blocks[1:mid_index+1]) df['Columns_2'] = df.apply(f, axis=1) print(df) Output: Columns_1 Columns_2 0 Lyreco A-Type small 2i A-Type 1 Lyreco C-Type small 4i C-Type 2 Lyreco N-Part medium N-Part 3 Lyreco AKG MT 4i small AKG MT 4 Lyreco AKG/ N-Type medium 4i AKG/ N-Type 5 Lyreco C-Type medium 2i C-Type 6 Lyreco C-Type/ SNU medium 2i C-Type/ SNU 7 Lyreco K-part small 4i K-part 8 Lyreco K-Part medium K-Part 9 Lyreco SNU small 2i SNU 10 Lyreco C-Part large 2i C-Part 11 Lyreco N-Type large 4i N-Type | 4 | 2 |
73,676,661 | 2022-9-11 | https://stackoverflow.com/questions/73676661/how-to-trap-this-nested-httpx-exception | I'm trapping thus: with httpx.Client(**sessions[scraperIndex]) as client: try: response = client.get(...) except TimeoutError as e: print('does not hit') except Exception as e: print(f'βοΈ Unexpected exception: {e}') print_exc() # hits! However I'm getting the below crashdump. Pulling out key lines: TimeoutError: The read operation timed out During handling of the above exception, another exception occurred: httpcore.ReadTimeout: The read operation timed out The above exception was the direct cause of the following exception: httpx.ReadTimeout: The read operation timed out Why isn't my TimeoutError catching this? And what's the correct catch? Can someone give a logic for deducing it? CrashDump: βοΈ Unexpected exception: The read operation timed out Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/httpcore/_exceptions.py", line 8, in map_exceptions yield File "/usr/local/lib/python3.10/dist-packages/httpcore/backends/sync.py", line 26, in read return self._sock.recv(max_bytes) File "/usr/lib/python3.10/ssl.py", line 1258, in recv return self.read(buflen) File "/usr/lib/python3.10/ssl.py", line 1131, in read return self._sslobj.read(len) TimeoutError: The read operation timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions yield File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 218, in handle_request resp = self._pool.handle_request(req) File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection_pool.py", line 253, in handle_request raise exc File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection_pool.py", line 237, in handle_request response = connection.handle_request(request) File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection.py", line 90, in handle_request return self._connection.handle_request(request) File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py", line 105, in handle_request raise exc File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py", line 84, in handle_request ) = self._receive_response_headers(**kwargs) File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py", line 148, in _receive_response_headers event = self._receive_event(timeout=timeout) File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/http11.py", line 177, in _receive_event data = self._network_stream.read( File "/usr/local/lib/python3.10/dist-packages/httpcore/backends/sync.py", line 24, in read with map_exceptions(exc_map): File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/usr/local/lib/python3.10/dist-packages/httpcore/_exceptions.py", line 12, in map_exceptions raise to_exc(exc) httpcore.ReadTimeout: The read operation timed out The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/root/scraper-pi/Scrape.py", line 148, in main cursor, _nScraped = scrape(client, cursor) File "/root/scraper-pi/Scrape.py", line 79, in scrape response = client.get( File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1039, in get return self.request( File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 815, in request return 
self.send(request, auth=auth, follow_redirects=follow_redirects) File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 902, in send response = self._send_handling_auth( File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 930, in _send_handling_auth response = self._send_handling_redirects( File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 967, in _send_handling_redirects response = self._send_single_request(request) File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1003, in _send_single_request response = transport.handle_request(request) File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 217, in handle_request with map_httpcore_exceptions(): File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions raise mapped_exc(message) from exc httpx.ReadTimeout: The read operation timed out | The base class for all httpx timeout errors is not the built-in TimeoutError (presumably because that would also make timeouts OSErrors, which doesn't sound correct), but httpx.TimeoutException. import httpx with httpx.Client() as client: try: response = client.get("http://httpbin.org/get", timeout=0.001) except httpx.TimeoutException as e: print('gottem') prints gottem just fine. | 6 | 4 |
73,677,424 | 2022-9-11 | https://stackoverflow.com/questions/73677424/plotly-express-colorscale-map-by-absolute-value-instead-of-0-1 | I have a map of the US and am plotting winning margin in Presidential elections in Plotly Express. I want a winning margin of 0 to be displayed as white and the scale to diverge into red/blue. However, the color_continuous_scale keyword argument takes a range from 0-1, and where a winning margin of 0 falls in this 0-1 range varies each election. Is there a workaround for this that would allow me to define 0 as white, and the extremes as red/blue? i have tried to convert the winning margin of 0 to a fraction between 0 and 1 but this changes with every election cycle. fig = px.choropleth(merged, locationmode='USA-states', locations='state_po', animation_frame='year', color='R_margin_x', color_continuous_scale=[(0, 'blue'), (merged['R_margin_x'].min()/(merged['R_margin_x'].min()-merged['R_margin_x'].max()), 'white'), (1, 'red')], scope='usa') fig.show() dataframe is of the form party_simplified state_po year DEMOCRAT LIBERTARIAN OTHER REPUBLICAN D_pct_x L_pct_x O_pct_x R_pct_x R_margin_x D_pct_y L_pct_y O_pct_y R_pct_y R_margin_y R_lean 0 AK 2008 123594.0 1589.0 7173.0 193841.0 37.889374 0.487129 2.198978 59.424520 21.535146 52.761558 0.388418 1.488455 45.361569 -7.39999 28.935135 1 AL 2008 813479.0 NaN 19794.0 1266546.0 38.740434 NaN 0.942653 60.316913 21.576479 52.761558 0.388418 1.488455 45.361569 -7.39999 28.976468 2 AR 2008 422310.0 4776.0 21514.0 638017.0 38.864660 0.439529 1.979906 58.715904 19.851245 52.761558 0.388418 1.488455 45.361569 -7.39999 27.251234 3 AZ 2008 1034707.0 12555.0 16102.0 1230111.0 45.115251 0.547423 0.702079 53.635248 8.519997 52.761558 0.388418 1.488455 45.361569 -7.39999 15.919987 4 CA 2008 8274473.0 67582.0 208064.0 5011781.0 61.012638 0.498323 1.534180 36.954859 -24.057780 52.761558 0.388418 1.488455 45.361569 -7.39999 -16.657790 | In plotly you can pass the midpoint for continuous scales: color_continuous_scale=px.colors.sequential.RdBu, color_continuous_midpoint=0.0 The RdBu scale goes from red to white to blue. By passing color_continuous_midpoint=0.0 you specify that 0.0 is in the middle of the scale, i.e. white. The other colors will be determined accordingly. The same goes if you use a custom continuous color scale. | 4 | 3 |
73,630,653 | 2022-9-7 | https://stackoverflow.com/questions/73630653/redirect-to-login-page-if-user-not-logged-in-using-fastapi-login-package | I would like to redirect users to the login page, when they are not logged in. Here is my code: from fastapi import ( Depends, FastAPI, HTTPException, status, Body, Request ) from fastapi.encoders import jsonable_encoder from fastapi.security import OAuth2PasswordBearer, OAuth2PasswordRequestForm from fastapi.responses import HTMLResponse, RedirectResponse import app.models as models import app.database as database from datetime import datetime, timedelta from jose import JWTError, jwt from starlette.responses import FileResponse from fastapi_login import LoginManager from fastapi_login.exceptions import InvalidCredentialsException from fastapi import Cookie import re app = FastAPI() oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token") manager = LoginManager(SECRET_KEY, token_url="/auth/login", use_cookie=True) manager.cookie_name = "token" @app.get("/") @app.get("/item") async def read_index(user=Depends(manager)): try: return FileResponse('item.html') except status.HTTP_401_UNAUTHORIZED: return RedirectResponse(url="/login", status_code=status.HTTP_302_FOUND) However, when I access this page: localhost:8000/item, I get the following: {"detail":"Not authenticated"} | From the code snippet you provided, you seem to be using the (third-party) FastAPI-Login package. Their documentation suggests using a custom Exception on the LoginManager instance, which can be used to redirect the user to the login page, if they are not logged in. Working example: The authentication below is based on cookies, but you could pass the token in the Authorization header as well. Navigate to /protected route when you are not logged in for the redirection to take effect. 
from fastapi import FastAPI, Depends, Request, Response, status from starlette.responses import RedirectResponse, HTMLResponse, JSONResponse from fastapi.security import OAuth2PasswordRequestForm from fastapi_login.exceptions import InvalidCredentialsException from fastapi_login import LoginManager class NotAuthenticatedException(Exception): pass app = FastAPI() SECRET = "super-secret-key" manager = LoginManager(SECRET, '/login', use_cookie=True, custom_exception=NotAuthenticatedException) DB = { 'users': { '[email protected]': { 'name': 'John Doe', 'password': 'hunter2' } } } def query_user(user_id: str): return DB['users'].get(user_id) @manager.user_loader() def load_user(user_id: str): user = DB['users'].get(user_id) return user @app.exception_handler(NotAuthenticatedException) def auth_exception_handler(request: Request, exc: NotAuthenticatedException): """ Redirect the user to the login page if not logged in """ return RedirectResponse(url='/login') @app.get("/login", response_class=HTMLResponse) def login_form(): return """ <!DOCTYPE html> <html> <body> <form method="POST" action="/login"> <label for="username">Username:</label><br> <input type="text" id="username" name="username" value="[email protected]"><br> <label for="password">Password:</label><br> <input type="password" id="password" name="password" value="hunter2"><br><br> <input type="submit" value="Submit"> </form> </body> </html> """ @app.post('/login') def login(data: OAuth2PasswordRequestForm = Depends()): email = data.username password = data.password user = query_user(email) if not user: # you can return any response or error of your choice raise InvalidCredentialsException elif password != user['password']: raise InvalidCredentialsException token = manager.create_access_token(data={'sub': email}) response = RedirectResponse(url="/protected",status_code=status.HTTP_302_FOUND) manager.set_cookie(response, token) return response @app.get('/protected') def protected_route(user=Depends(manager)): return {'user': user} | 4 | 9 |
73,674,735 | 2022-9-10 | https://stackoverflow.com/questions/73674735/how-to-search-and-select-column-names-based-on-values | Suppose I have a pandas DataFrame like this name col1 col2 col3 0 AAA 1 0 2 1 BBB 2 1 2 2 CCC 0 0 2 I want (a) the names of any columns that contain a value of 2 anywhere in the column (i.e., col1, col3), and (b) the names of any columns that contain only values of 2 (i.e., col3). I understand how to use DataFrame.any() and DataFrame.all() to select rows in a DataFrame where a value appears in any or all columns, but I'm trying to find COLUMNS where a value appears in (a) any or (b) all rows. | You can do what you described with columns: df.columns[df.eq(2).any()] # Index(['col1', 'col3'], dtype='object') df.columns[df.eq(2).all()] # Index(['col3'], dtype='object') | 4 | 3 |
73,672,316 | 2022-9-10 | https://stackoverflow.com/questions/73672316/how-to-get-debug-logs-from-boto3-in-a-local-script | I have a local script that lists buckets: import boto3 s3_client = boto3.client('s3') for bucket in s3_client.list_buckets()["Buckets"]: print(bucket['Name']) when I execute it locally, it does just that. Now if I execute this code as a lambda on AWS and set the log level to DEBUG, like so: import boto3 import logging logger = logging.getLogger() logger.setLevel(logging.DEBUG) s3_client = boto3.client('s3', endpoint_url="http://localhost:4566") def lambda_handler(event, context): s3_client.list_buckets() return { "statusCode": 200 } I get detailed logs such as the headers of the HTTP request that is sent to S3. But If I add these lies to my local script nothing changes. I've tried: import boto3 import logging logger = logging.getLogger() logger.setLevel(logging.DEBUG) s3_client = boto3.client('s3') for bucket in s3_client.list_buckets()["Buckets"]: print(bucket['Name']) and import boto3 import logging logging.getLogger('boto3').setLevel(logging.DEBUG) logging.getLogger('botocore').setLevel(logging.DEBUG) logging.getLogger('s3transfer').setLevel(logging.DEBUG) s3_client = boto3.client('s3') for bucket in s3_client.list_buckets()["Buckets"]: print(bucket['Name']) as suggested here and in both cases I get no logs. How can I get my local script the show the logs for what boto3 does under the hood? | This seems to work for me: import boto3 import logging boto3.set_stream_logger('', logging.DEBUG) s3_client = boto3.client('s3') for bucket in s3_client.list_buckets()["Buckets"]: print(bucket['Name']) Spews out a whole bunch of stuff. From this doc | 9 | 13 |
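For context on why the original attempts printed nothing: setting a logger's level has no visible effect until a handler is attached. Configuring the root logger also works, as a sketch equivalent in effect to `set_stream_logger` for this purpose:

```python
import logging
import boto3

logging.basicConfig(level=logging.DEBUG)   # attaches a stream handler to the root logger
s3_client = boto3.client('s3')
s3_client.list_buckets()
# To narrow the firehose to just the HTTP request/response details:
# boto3.set_stream_logger('botocore.endpoint', logging.DEBUG)
```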
73,659,445 | 2022-9-9 | https://stackoverflow.com/questions/73659445/get-directories-only-with-glob-pattern-using-pathlib | I want to use pathlib.glob() to find directories with a specific name pattern (*data) in the current working dir. I don't want to explicitly check via .isdir() or something else. Input data This is the relevant listing with three folders as the expected result and one file with the same pattern but that should be part of the result. ls -ld *data drwxr-xr-x 2 user user 4,0K 9. Sep 10:22 2021-02-11_68923_data/ drwxr-xr-x 2 user user 4,0K 9. Sep 10:22 2021-04-03_38923_data/ drwxr-xr-x 2 user user 4,0K 9. Sep 10:22 2022-01-03_38923_data/ -rw-r--r-- 1 user user 0 9. Sep 10:24 2011-12-43_3423_data Expected result [ '2021-02-11_68923_data/', '2021-04-03_38923_data/', '2022-01-03_38923_data/' ] Minimal working example from pathlib import Path cwd = Path.cwd() result = cwd.glob('*_data/') result = list(result) That gives me the 3 folders but also the file. Also tried the variant cwd.glob('**/*_data/'). | The trailing path separator certainly should be respected in pathlib.glob patterns. This is the expected behaviour in shells on all platforms, and is also how the glob module works: If the pattern is followed by an os.sep or os.altsep then files will not match. However, there is a bug in pathlib that was fixed in bpo-22276, and merged in Python-3.11.0rc1 (see what's new: pathlib). In the meantime, as a work-around you can use the glob module to get the behaviour you want: $ ls -ld *data drwxr-xr-x 2 user user 4096 Sep 9 22:45 2022-01-03_38923_data drwxr-xr-x 2 user user 4096 Sep 9 22:44 2021-04-03_38923_data drwxr-xr-x 2 user user 4096 Sep 9 22:44 2021-02-11_68923_data -rw-r--r-- 1 user user 0 Sep 9 22:45 2011-12-43_3423_data >>> import glob >>> res = glob.glob('*_data') >>> print('\n'.join(res)) 2022-01-03_38923_data 2011-12-43_3423_data 2021-02-11_68923_data 2021-04-03_38923_data >>> res = glob.glob('*_data/') >>> print('\n'.join(res)) 2022-01-03_38923_data/ 2021-02-11_68923_data/ 2021-04-03_38923_data/ | 6 | 8 |
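If you want `Path` objects back while staying on a pre-3.11 interpreter, the glob-module workaround above can simply be wrapped; a small sketch:

```python
import glob
from pathlib import Path

dirs = [Path(p) for p in glob.glob('*_data/')]   # only directories, as Path objects
```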
73,668,028 | 2022-9-9 | https://stackoverflow.com/questions/73668028/how-would-i-group-summarize-and-filter-a-df-in-pandas-in-dplyr-fashion | I'm currently studying pandas and I come from an R/dplyr/tidyverse background. Pandas has a not-so-intuitive API and how would I elegantly rewrite such operation from dplyr using pandas syntax? library("nycflights13") library("tidyverse") delays <- flights %>% group_by(dest) %>% summarize( count = n(), dist = mean(distance, na.rm = TRUE), delay = mean(arr_delay, na.rm = TRUE) ) %>% filter(count > 20, dest != "HNL") | pd.DataFrame.agg method doesn't allow much flexibility for changing columns' names in the method itself That's not exactly true. You could actually rename the columns inside agg similar to in R although it is a better idea to not use count as a column name as it is also an attribute: delays = ( flights .groupby('dest', as_index=False) .agg( count=('year', 'count'), dist=('distance', 'mean'), delay=('arr_delay', 'mean')) .query('count > 20 & dest != "HNL"') .reset_index(drop=True) ) | 4 | 6 |
73,668,088 | 2022-9-9 | https://stackoverflow.com/questions/73668088/can-we-use-plotly-express-to-plot-zip-codes | I'm using the code from this link. https://devskrol.com/2021/12/27/choropleth-maps-using-python/ Here's my actual code. import plotly.express as px from urllib.request import urlopen import json with urlopen('https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json') as response: counties = json.load(response) #import libraries import pandas as pd import plotly.express as px fig = px.choropleth(df_mover, geojson=counties, locations='my_zip', locationmode="USA-states", color='switcher_flag', range_color=(10000, 100000), scope="usa" ) fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0}) fig.show() I'm simply trying to pass in data from my dataframe named df_movers, which has two fields: my_zip and switcher_flag. When I run this in a Jupyter notebook, it just runs and runs; it never stops. I'm only trying to plot 25 records, so it's not like there's too much data here. Finally, my_zip is data type object. Any idea what could be wrong here? | Since you did not provide any user data, I tried your code with data including US zip codes from here. I think the issue is that you don't need to specify the location mode. I specified county_fips for the location and population for the color fill. import plotly.express as px from urllib.request import urlopen import json with urlopen('https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json') as response: counties = json.load(response) #import libraries import pandas as pd us_zip = pd.read_csv('data/uszips.csv', dtype={'county_fips': str}) fig = px.choropleth(us_zip, geojson=counties, locations='county_fips', #locationmode="USA-states", color='population', range_color=(1000, 10000), scope="usa" ) fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0}) fig.show() | 4 | 3 |
73,664,093 | 2022-9-9 | https://stackoverflow.com/questions/73664093/lightgbm-train-vs-update-vs-refit | I'm implementing LightGBM (Python) into a continuous learning pipeline. My goal is to train an initial model and update the model (e.g. every day) with newly available data. Most examples load an already trained model and apply train() once again: updated_model = lightgbm.train(params=last_model_params, train_set=new_data, init_model = last_model) However, I'm wondering if this is actually the correct way to approach continuous learning within the LightGBM library since the amount of fitted trees (num_trees()) grows for every application of train() by n_estimators. For my understanding a model update should take an initial model definition (under a given set of model parameters) and refine it without ever growing the amount of trees/size of the model definition. I find the documentation regarding train(), update() and refit() not particularly helpful. What would be considered the right approach to implement continuous learning with LightGBM? | In lightgbm (the Python package for LightGBM), these entrypoints you've mentioned do have different purposes. The main lightgbm model object is a Booster. A fitted Booster is produced by training on input data. Given an initial trained Booster... Booster.refit() does not change the structure of an already-trained model. It just updates the leaf counts and leaf values based on the new data. It will not add any trees to the model. Booster.update() will perform exactly 1 additional round of gradient boosting on an existing Booster. It will add at most 1 tree to the model. train() with an init_model will perform gradient boosting for num_iterations additional rounds. It also allows for lots of other functionality, like custom callbacks (e.g. to change the learning rate from iteration-to-iteration) and early stopping (to stop adding trees if performance on a validation set fails to improve). It will add up to num_iterations trees to the model. What would be considered the right approach to implement continuous learning with LightGBM? There are trade-offs involved in this choice and no one of these is the globally "right" way to achieve the goal "modify an existing model based on newly-arrived data". Booster.refit() is the only one of these approaches that meets your definition of "refine [the model] without ever growing the amount of trees/size of the model definition". But it could lead to drastic changes in the predictions produced by the model, especially if the batch of newly-arrived data is much smaller than the original training data, or if the distribution of the target is very different. Booster.update() is the simplest interface for this, but a single iteration might not be enough to get most of the information from the newly-arrived data into the model. For example, if you're using fairly shallow trees (say, num_leaves=7) and a very small learning rate, even newly-arrived data that is very different from the original training data might not change the model's predictions by much. train(init_model=previous_model) is the most flexible and powerful option, but it also introduces more parameters and choices. If you choose to use train(init_model=previous_model), pay attention to parameters num_iterations and learning_rate. Lower values of these parameters will decrease the impact of newly-arrived data on the trained model, higher values will allow a larger change to the model. 
Finding the right balance between those is a concern for your evaluation framework. | 10 | 18 |
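A compact sketch contrasting the three entry points described above. Here `booster` is the previously trained model, `X_new`/`y_new` are the newly-arrived batch, and the parameter values are illustrative assumptions rather than recommendations:

```python
import lightgbm as lgb

# 1) Booster.refit(): keep the tree structure, only refresh leaf values/counts from new data.
refit_booster = booster.refit(X_new, y_new)

# 2) Booster.update(): exactly one extra boosting round on the existing model.
booster.update(train_set=lgb.Dataset(X_new, label=y_new))

# 3) train(init_model=...): full continued training, adding up to num_boost_round new trees.
updated = lgb.train(
    params={**last_params, "learning_rate": 0.01},  # a smaller rate limits the impact of new data
    train_set=lgb.Dataset(X_new, label=y_new),
    num_boost_round=50,
    init_model=booster,
)
```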