question_id: int64, 59.5M to 79.6M
creation_date: stringdate, 2020-01-01 00:00:00 to 2025-05-14 00:00:00
link: stringlengths, 60 to 163
question: stringlengths, 53 to 28.9k
accepted_answer: stringlengths, 26 to 29.3k
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
73,137,036
2022-7-27
https://stackoverflow.com/questions/73137036/expected-type-warning-from-changing-dictionary-value-from-none-type-to-str-typ
I have a dictionary for which the key "name" is initialized to None (as this can be easily used in if name: blocks); if a name is read in, it is then assigned to "name". All of this works fine but Pycharm throws a warning when "name" is changed due to the change in type. While this isn't the end of the world it's a pain for debugging (and could be a pain for maintaining the code). Does anyone know if there is a way either to provide something akin to a type hint to the dictionary or failing that to tell Pycharm the change of type is intended? code replicating issue: from copy import deepcopy test = { "name": None, "other_variables": "Something" } def read_info(): test_2 = deepcopy(test) test_2["name"] = "this is the name" # Pycharm shows warning return test_2["name"] ideal solution: from copy import deepcopy test = { "name": None type=str, "other_variables": "Something" } def read_info(): test_2 = deepcopy(test) test_2["name"] = "this is the name" # no warning return test_2["name"] Note: I know that setting the default value to "" would behave the same but a) it's handy having it print out "None" if name is printed before assignment and b) I find it slightly more readable to have None instead of "". Note_2: I am unaware why (it may be a bug or intended for some reason I don't understand) but Pycharm only gives a warning if the code shown above is found within a function, i.e. replacing the read_info() function with the lines: test_2 = deepcopy(test) test_2["name"] = "this is the name" # Pycharm shows warning does not give a warning
Type hinting that dictionary with dict[str, None | str] (Python 3.10+, older versions need to use typing.Dict[str, typing.Optional[str]]) seems to fix this: from copy import deepcopy test: dict[str, None | str] = { "name": None, "other_variables": "Something" } def read_info(): test_2 = deepcopy(test) test_2["name"] = "this is the name" # no warning return test_2["name"] As noticed by @Tomerikoo simply type hinting as dict also works (this should work on all? Python versions that support type hints too): test: dict = { "name": None, "other_variables": "Something" }
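For Python versions before 3.10 (mentioned in the answer above), a minimal sketch of the same idea spelled with typing.Dict and typing.Optional; that PyCharm suppresses the warning here is inferred from the answer rather than re-verified:

from copy import deepcopy
from typing import Dict, Optional

# Declaring the values as Optional[str] makes the later None-to-str
# reassignment consistent with the annotated type.
test: Dict[str, Optional[str]] = {
    "name": None,
    "other_variables": "Something",
}

def read_info():
    test_2 = deepcopy(test)
    test_2["name"] = "this is the name"  # should no longer be flagged
    return test_2["name"]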
7
4
73,135,253
2022-7-27
https://stackoverflow.com/questions/73135253/how-to-compare-two-dataframes-and-find-matches-from-columns-pandas
let's say we have the following code example where we create two basic dataframes: import pandas as pd # Creating Dataframes a = [{'Name': 'abc', 'Age': 8, 'Grade': 3}, {'Name': 'xyz', 'Age': 9, 'Grade': 3}] df1 = pd.DataFrame(a) b = [{'ID': 1,'Name': 'abc', 'Age': 8}, {'ID': 2,'Name': 'xyz', 'Age': 9}] df2 = pd.DataFrame(b) # Printing Dataframes display(df1) display(df2) We get the following datasets: Name Age Grade 0 abc 8 3 1 xyz 9 3 ID Name Age 0 1 abc 8 1 2 xyz 9 How can I find the list of columns that are not repeated in these frames when they are intersected? That is, as a result, I want to get the names of the following columns: ['Grade', 'ID']
Use symmetric_difference res = df2.columns.symmetric_difference(df1.columns) print(res) Output Index(['Grade', 'ID'], dtype='object') Or as an alternative, use set.symmetric_difference res = set(df2.columns).symmetric_difference(df1.columns) print(res) Output {'Grade', 'ID'} A third alternative, suggested by @SashSinha, is to use the shortcut: res = df2.columns ^ df1.columns but as of pandas 1.4.3 this issues a warning: FutureWarning: Index.xor operating as a set operation is deprecated, in the future this will be a logical operation matching Series.xor. Use index.symmetric_difference(other) instead. res = df2.columns ^ df1.columns
4
6
73,134,521
2022-7-27
https://stackoverflow.com/questions/73134521/how-to-train-on-a-tensorflow-datasets-dataset
I'm playing around with tensorflow to become a bit more familiar with the overall workflow. To do this I thought I should start with creating a simple classifier for the well known Iris dataset. I load the dataset using: ds = tfds.load('iris', split='train', shuffle_files=True, as_supervised=True) I use the following classifier: model = keras.Sequential([ keras.layers.Dense(10,activation="relu"), keras.layers.Dense(10,activation="relu"), keras.layers.Dense(3, activation="softmax") ]) model.compile( optimizer=tf.keras.optimizers.Adam(0.001), loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()], ) I then try to fit the model using: model.fit(ds,batch_size=50, epochs=100) This gives the following error: Input 0 of layer "dense" is incompatible with the layer: expected min_ndim=2, found ndim=1. Full shape received: (4,) Call arguments received by layer "sequential" (type Sequential): • inputs=tf.Tensor(shape=(4,), dtype=float32) • training=True • mask=None I also tried defining the model using the functional API (as this was my original goal to learn) inputs = keras.Input(shape=(4,), name='features') first_hidden = keras.layers.Dense(10, activation='relu')(inputs) second_hidden = keras.layers.Dense(10, activation="relu")(first_hidden) outputs = keras.layers.Dense(3, activation='softmax')(second_hidden) model = keras.Model(inputs=inputs, outputs=outputs, name="test_iris_classification") I now get the same error as before but this time with a warning: WARNING:tensorflow:Model was constructed with shape (None, 4) for input KerasTensor(type_spec=TensorSpec(shape=(None, 4), dtype=tf.float32, name='features'), name='features', description="created by layer 'features'"), but it was called on an input with incompatible shape (4,). I suspect this is something quite fundamental that I haven't understood but I have not been able to figure it out, despite several hours of googling. PS: I also tried to download the whole dataset from the UCI Machine Learning Repository as a CSV file. I read it in like this: ds = pd.read_csv("iris.data", header=None) labels = [] for name in ds[4]: if name == "Iris-setosa": labels.append(0) elif name == "Iris-versicolor": labels.append(1) elif name == "Iris-virginica": labels.append(2) else: raise ValueError(f"Name wrong name: {name}") labels = np.array(labels) features = np.array(ds[[0,1,2,3]]) And fit it like this: model.fit(features, labels,batch_size=50, epochs=100) And I'm able to fit the model to this dataset without any problems for both the sequential and the functional API. Which makes me suspect my misunderstanding has something to do with how tensorflow_datasets works.
Set the batch size when loading your data: import tensorflow_datasets as tfds import tensorflow as tf ds = tfds.load('iris', split='train', shuffle_files=True, as_supervised=True, batch_size=10) model = tf.keras.Sequential([ tf.keras.layers.Dense(10,activation="relu"), tf.keras.layers.Dense(10,activation="relu"), tf.keras.layers.Dense(3, activation="softmax") ]) model.compile( optimizer=tf.keras.optimizers.Adam(0.001), loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()], ) model.fit(ds, epochs=100) Also regarding model.fit, the docs state: Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
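As a sketch of an alternative (assuming the same Iris setup as in the answer above), the dataset can also be batched after loading with tf.data's batch method instead of passing batch_size to tfds.load:

import tensorflow_datasets as tfds

ds = tfds.load('iris', split='train', shuffle_files=True, as_supervised=True)
ds = ds.batch(50)  # yields (features, label) batches of shape (50, 4) and (50,)
# model.fit(ds, epochs=100)  # reusing the compiled model from the answer above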
7
5
73,131,597
2022-7-27
https://stackoverflow.com/questions/73131597/pytorch-lightning-display-metrics-after-validation-epoch
I've implemented validation_epoch_end to produce and log metrics, and when I run trainer.validate, the metrics appear in my notebook. However, when I run trainer.fit, only the training metrics appear; not the validation ones. The validation step is still being run (because the validation code calls a print statement, which does appear), but the validation metrics don't appear, even though they're logged. Or, if they do appear, the next epoch immediately erases them, so that I can't see them. (Likewise, tensorboard sees the validation metrics) How can I see the validation epoch end metrics in a notebook, as each epoch occurs?
You could do the following. Let's say you have the following LightningModule: class MNISTModel(LightningModule): def __init__(self): super().__init__() self.l1 = torch.nn.Linear(28 * 28, 10) def forward(self, x): return torch.relu(self.l1(x.view(x.size(0), -1))) def training_step(self, batch, batch_nb): x, y = batch loss = F.cross_entropy(self(x), y) # prog_bar=True will display the value on the progress bar statically for the last complete train epoch self.log("train_loss", loss, on_step=False, on_epoch=True, prog_bar=True) return loss def validation_step(self, batch, batch_nb): x, y = batch loss = F.cross_entropy(self(x), y) # prog_bar=True will display the value on the progress bar statically for the last complete validation epoch self.log("val_loss", loss, on_step=False, on_epoch=True, prog_bar=True) return loss def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=0.02) The trick is to use prog_bar=True in combination with on_step and on_epoch depending on when you want the update on the progress bar. So, in this case, when training: # Train the model ⚑ trainer.fit(mnist_model, MNIST_dm) you will see: Epoch 4: 100% -------------------------- 939/939 [00:09<00:00, 94.51it/s, loss=0.636, v_num=4, val_loss=0.743, train_loss=0.726] Where loss will be updating each batch as it is the step loss. However, val_loss and train_loss will be static values that will only change after each validation or train epoch respectively.
5
3
73,076,517
2022-7-22
https://stackoverflow.com/questions/73076517/how-to-send-redirectresponse-from-a-post-to-a-get-route-in-fastapi
I want to send data from app.post() to app.get() using RedirectResponse. @app.get('/', response_class=HTMLResponse, name='homepage') async def get_main_data(request: Request, msg: Optional[str] = None, result: Optional[str] = None): if msg: response = templates.TemplateResponse('home.html', {'request': request, 'msg': msg}) elif result: response = templates.TemplateResponse('home.html', {'request': request, 'result': result}) else: response = templates.TemplateResponse('home.html', {'request': request}) return response @app.post('/', response_model=FormData, name='homepage_post') async def post_main_data(request: Request, file: FormData = Depends(FormData.as_form)): if condition: ...... ...... return RedirectResponse(request.url_for('homepage', **{'result': str(trans)}), status_code=status.HTTP_302_FOUND) return RedirectResponse(request.url_for('homepage', **{'msg': str(err)}), status_code=status.HTTP_302_FOUND) How do I send result or msg via RedirectResponse, url_for() to app.get()? Is there a way to hide the data in the URL either as path parameter or query parameter? How do I achieve this? I am getting the error starlette.routing.NoMatchFound: No route exists for name "homepage" and params "result". when trying this way. Update: I tried the below: return RedirectResponse(app.url_path_for(name='homepage') + '?result=' + str(trans), status_code=status.HTTP_303_SEE_OTHER) The above works, but it works by sending the param as query param, i.e., the URL looks like this localhost:8000/?result=hello. Is there any way to do the same thing but without showing it in the URL?
In brief, as explained in this answer and this answer, as well as mentioned by @tiangolo here, when performing a RedirectResponse from a POST request route to a GET request route, the response status code has to change to 303 See Other. For instance (completet working example is given below): return RedirectResponse(redirect_url, status_code=status.HTTP_303_SEE_OTHER) As for the reason for getting starlette.routing.NoMatchFound error, this is because request.url_for() receives path parameters, not query parameters. Your msg and result parameters are query ones; hence, the error. A solution would be to use a CustomURLProcessor, as suggested in this and this answer, allowing you to pass both path (if need to) and query parameters to the url_for() function and obtain the URL. As for hiding the path and/or query parameters from the URL, you can use a similar approach to this answer that uses history.pushState() (or history.replaceState()) to replace the URL in the browser's address bar. Working example can be found below (you can use your own TemplateResponse in the place of HTMLResponse). Working Example from fastapi import FastAPI, Request, status from fastapi.responses import RedirectResponse, HTMLResponse from typing import Optional import urllib app = FastAPI() class CustomURLProcessor: def __init__(self): self.path = "" self.request = None def url_for(self, request: Request, name: str, **params: str): self.path = request.url_for(name, **params) self.request = request return self def include_query_params(self, **params: str): parsed = list(urllib.parse.urlparse(self.path)) parsed[4] = urllib.parse.urlencode(params) return urllib.parse.urlunparse(parsed) @app.get('/', response_class=HTMLResponse) def event_msg(request: Request, msg: Optional[str] = None): if msg: html_content = """ <html> <head> <script> window.history.pushState('', '', "/"); </script> </head> <body> <h1>""" + msg + """</h1> </body> </html> """ return HTMLResponse(content=html_content, status_code=200) else: html_content = """ <html> <body> <h1>Create an event</h1> <form method="POST" action="/"> <input type="submit" value="Create Event"> </form> </body> </html> """ return HTMLResponse(content=html_content, status_code=200) @app.post('/') def event_create(request: Request): redirect_url = CustomURLProcessor().url_for(request, 'event_msg').include_query_params(msg="Succesfully created!") return RedirectResponse(redirect_url, status_code=status.HTTP_303_SEE_OTHER) Update 1 - About including query parameters Regarding adding query params to url_for() function, another solution would be using Starlette's starlette.datastructures.URL, which now provides a method to include_query_params. Example: from starlette.datastructures import URL redirect_url = URL(request.url_for('event_msg')).include_query_params(msg="Succesfully created!") Update 2 - About including query parameters The request.url_for() function now returns a starlette.datastructures.URL object. Hence, you could add query parameters as follows: redirect_url = request.url_for('event_msg').include_query_params(msg="Succesfully created!")
9
6
73,110,208
2022-7-25
https://stackoverflow.com/questions/73110208/how-to-load-a-different-file-than-index-html-in-fastapi-root-path-while-using-st
Here is a simple static FastAPI app. With this setup even though the root path is expected to return a FileResponse of custom.html, the app still returns index.html. How can I get the root path work and render custom.html? from fastapi import FastAPI from fastapi.staticfiles import StaticFiles from fastapi.responses import FileResponse app = FastAPI() app.mount( "/", StaticFiles(directory="static", html=True), name="static", ) @app.get("/") async def index() -> FileResponse: return FileResponse("custom.html", media_type="html")
As per Starlette documentation: StaticFiles Signature: StaticFiles(directory=None, packages=None, html=False, check_dir=True, follow_symlink=False) html - Run in HTML mode. Automatically loads index.html for directories if such file exists. In addtion, as shown from the code snippet you provided, you have mounted StaticFiles to the root directory (i.e., /), that is: app.mount('/', StaticFiles(directory='static', html=True), name='static') instead of, for example, /static (or some other path name), as shown below: from fastapi import FastAPI from fastapi.staticfiles import StaticFiles app = FastAPI() app.mount('/static', StaticFiles(directory='static', html=True), name='static') As per FastAPI documentation: "Mounting" means adding a complete "independent" application in a specific path, that then takes care of handling all the sub-paths. Hence, in your example, any path that starts with / will be handled by that StaticFiles application, and due to specifying html=True in the arguments, index.html will be automatically loaded; regardless of creating a separate endpoint pointing to the root path / and trying to return some other file, as demonstrated in the example given in your question. Order Matters If, for example, you moved app.mount("/",StaticFiles(... line after defining your @app.get("/") endpoint, you would see that order matters and index.html would not automatically be loaded anymore, as endpoints are evaluated in order. Note that, in your case, you might get an Internal Server Error, as your @app.get("/") endpoint would be called and attempt to find custom.html, but if this file is not located under the root / directory, but rather under /static directory (as shown from your code), you would then get a File does not exist error, and hence, you should instead return FileResponse('static/custom.html'). Even if you removed html=True, but keep StaticFiles mounted to the root directory and defined before your / endpoint, you would get a {"detail":"Not Found"} error response when attempting to access http://localhost:8000/. This is because the / route would still be handled by the StaticFiles application (as mentioned earlier), and you should thus need to specify the file that you would like to access (when html=True is not used), e.g., http://localhost:8000/index.html. Even if you defined other endpoints in your code (e.g., /register, /login, /hello), as long as StaticFiles is mounted to the root directory (i.e., /) and defined in your code before all other endpoints, for instance: app.mount('/', StaticFiles(directory='static'), name='static') @app.post('/register') async def register(): pass @app.post('/login') async def login(): pass @app.get('/hello') async def hello(): pass every request to those routes would again be handled by the StaticFiles application, and hence, would lead to an error response, such as {"detail":"Not Found"} (if you send a GET request, such as when you type a URL in the address bar of the web browser and then hit the Enter key, and the given path does not match a file name in the static web directory), or {detail": "Method Not Allowed"} (if you issue a POST request through Swagger UI or some other client platform/application). As described in Starlette's documentation on StaticFiles (see StaticFiles class implementation as well): Static files will respond with 404 Not found or 405 Method not allowed responses for requests which do not match. In HTML mode, if 404.html file exists, it will be shown as 404 response. 
Hence, you should either mount the StaticFiles instance to a different/unique path, such as /static (i.e., app.mount('/static', ..., as shown at the top of this answer), or, if you still want to mount the StaticFiles instance to / path, define StaticFiles after declaring all your API endpoints, for example: from fastapi import FastAPI from fastapi.staticfiles import StaticFiles from fastapi.responses import FileResponse app = FastAPI() @app.post('/register') async def register(): pass @app.post('/login') async def login(): pass @app.get('/hello') async def hello(): pass @app.get('/') async def index(): return FileResponse('static/custom.html') app.mount('/',StaticFiles(directory='static', html=True), name='static') Note Every time a webpage is loaded, the web browser caches most content on the page, in order to shorten load times (the next time that user loads the page). Thus, if you tried out the example provided earlier, i.e., where the StaticFiles application is defined before every API endpoint, and then, using the same browser session, you tried out the example above with the StaticFiles application defined after all API endpoints, but the browser still displays the content of static/index.html file instead of static/custom.html (when accessing http://localhost:8000/ in your browser), this is due to the browser loading the webpage from the cache. To overcome this, you could either clear your browser's cache, or open the webpage in an Incognito window (and close it when you are done with it), or simply press Ctrl+F5, instead of just F5, in your browser (using either an Incognito or regular window), which would force the browser to retrieve the webpage from the server instead of loading it from the cache. You may also find this answer helpful, regarding the order of endpoints in FastAPI. The html=True option Setting the html argument of StaticFiles instance to True (i.e., html=True) simply provides an easy way to serve a directory of web content with just one line of code. If you only need to serve static files, such as package docs directory, then this is the way to go. If, however, you need to serve different HTML files that will get dynamically updated, and you also wish to create additional routes/endpoints, you should better have a look at Templates (not FileResponse), as well as mount your StaticFiles instance to a different path (e.g., /static), rather than root path (and without using html=True).
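A minimal sketch of the Templates approach mentioned at the end; it assumes jinja2 is installed and that custom.html lives in a templates/ directory (both are assumptions on top of the answer):

from fastapi import FastAPI, Request
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates

app = FastAPI()
# StaticFiles mounted away from the root path, per the answer's recommendation
app.mount('/static', StaticFiles(directory='static'), name='static')
templates = Jinja2Templates(directory='templates')

@app.get('/')
async def index(request: Request):
    # Renders templates/custom.html; the request object is required by Jinja2Templates
    return templates.TemplateResponse('custom.html', {'request': request})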
4
9
73,065,760
2022-7-21
https://stackoverflow.com/questions/73065760/how-do-i-optimize-this-xor-sum-algorithm
I'm trying to solve this hackerrank problem https://www.hackerrank.com/challenges/xor-subsequence/problem from functools import reduce def xor_sum(arr): return reduce(lambda x,y: x^y, arr) def xorSubsequence(arr): freq = {} max_c = float("-inf") # init val min_n = float("inf") # init val for slice_size in range(1, len(arr)+1): for step in range(0, len(arr)+1-slice_size): n = xor_sum(arr[i] for i in range(step,step+slice_size)) freq[n] = freq.get(n,0)+1 if freq[n] >= max_c and (n < min_n or freq[n]> max_c): min_n = n max_c = freq[n] return min_n, freq[min_n] But it times out since it's ~O(n^3). I feel like there is some math trick, can someone explain the solution to me? I tried to read some solutions in the discussion but I didn't quite get them. Problem copy: Consider an array, A, of n integers (A=a0,a1,...,an-1). We take all consecutive subsequences of integers from the array that satisfy the following: {ai,ai+1,...,aj-1,aj}, where 0 ≤ i ≤ j ≤ n For each subsequence, we apply the bitwise XOR (⊕) operation on all the integers and record the resultant value. Given array A, find the XOR sum of every subsequence of A and determine the frequency at which each number occurs. Then print the number and its respective frequency as two space-separated values on a single line. Output Format Print 2 space-separated integers on a single line. The first integer should be the number having the highest frequency, and the second integer should be the number's frequency (i.e., the number of times it appeared). If there are multiple numbers having maximal frequency, choose the smallest one. Constraints • 1 ≤ n ≤ 10^5 • 1 ≤ ai < 2^16
Why Xor sum is Dyadic convolution Denote the input array as a. Construct an array b, such that b[i]=a[0]⊕a[1]⊕...⊕a[i]. One can then construct a list M, where M[i] stands for the number of elements in b which have the value i. Note that some zero-padding is added to make the length of M a power of 2. Then consider the Dyadic (XOR) convolution. The definition is (picture taken from this question): Consider conducting this Dyadic convolution between M and M, i.e. N=M*M where * stands for Dyadic convolution. Then N[i] is the sum of M[j]M[k] over all (j,k) such that j⊕k=i. Consider each subsequence xor(a[p:q]); we have xor(a[p:q])=b[p]⊕b[q]. For every integer i, all the consecutive subsequences whose xor result equals i can be written in this form (i=xor(a[p:q])=b[p]⊕b[q]). We further group this family of subsequences by the value of b[p] and the value of b[q]; for example, if xor(a[p1:q1])=xor(a[p2,q2])=i, and if b[p1]=b[p2],b[q1]=b[q2], these two subsequences will be grouped in the same sub-group. Consider sub-group (j,k), where the subsequences can be represented in the form i=xor(a[p':q'])=b[p']⊕b[q'], b[p']=j, b[q']=k; the number of subsequences in this sub-group (j,k) is M[j]M[k] (recall that M[i] stands for the number of elements in b which have the value i). So N[i] is the number of sequences a[p:q] for which xor(a[p:q])=i. However, since a[p:q] and a[q:p] are identical, we count every subsequence twice. So N[i] is twice the "number of consecutive subsequences whose xor is i". Use FWHT to compute the convolution So now what we need is to compute N=M*M. According to the Dyadic (XOR) convolution theorem (see proof here), we can first compute H(N)=H(M)×H(M). As H is involutive (see wiki), to get N just apply H on H(N) again. Analysis of code In this section, I will analyse the code provided by @Kache. Actually, b is accumulate([0] + seq, xor). Using histogram = Counter(accumulate([0] + seq, xor)), one can get a dict of {possible_value_in_b: num_of_occurrence}. Then the next line, histogram = [histogram[value] for value in range(next_pow2)], is the M mentioned above with padding added. Then, after histogram = [x * x for x in fwht(histogram)], histogram is H(N). And histogram = [y // next_pow2 for y in fwht(histogram)] serves as the inverse transformation. The histogram[0] -= len(seq) + 1 eliminates the influence of the fact that a[p:p]=0. And histogram = [y // 2 for y in histogram] avoids counting twice (as stated before, N[i] counts a[p:q] and a[q:p] separately).
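Since the referenced code by @Kache is not reproduced in the answer, here is a minimal sketch of the approach it analyses; the fwht helper and variable names are mine, and the fixed 2^16 padding comes from the problem constraint ai < 2^16:

from collections import Counter
from itertools import accumulate
from operator import xor

def fwht(a):
    # Fast Walsh-Hadamard transform of a list whose length is a power of two
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def xor_subsequence(seq):
    next_pow2 = 1 << 16                      # all prefix xors fit below 2^16
    # b[0]=0, b[i]=a[0]^...^a[i-1]; M[v] = how many prefix values equal v
    histogram = Counter(accumulate([0] + seq, xor))
    M = [histogram[v] for v in range(next_pow2)]
    # Dyadic convolution N = M * M via H(N) = H(M) * H(M); applying H twice scales by len
    N = [x * x for x in fwht(M)]
    N = [y // next_pow2 for y in fwht(N)]
    N[0] -= len(seq) + 1                     # drop the (p, p) pairs, i.e. empty slices
    N = [y // 2 for y in N]                  # (p, q) and (q, p) describe the same slice
    best = max(N)
    return N.index(best), best               # index() picks the smallest value on ties

print(xor_subsequence([1, 2, 3]))  # expected (1, 2): value 1 occurs in 2 slices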
6
2
73,095,192
2022-7-24
https://stackoverflow.com/questions/73095192/poetry-show-command-what-do-the-red-listed-packages-mean
When I run poetry show - most of my packages are blue but a few are red? What do these two colors mean? I think red means the package is not @latest ?
Black: Not required package Red: Not installed / It needs an immediate semver-compliant upgrade Yellow: It needs an upgrade but has potentially breaking changes so is not urgent Green: Already latest
16
18
73,129,798
2022-7-26
https://stackoverflow.com/questions/73129798/how-to-model-an-empty-dictionary-in-pydantic
I'm working with a request of a remote webhook where the data I want to validate is either there, or an empty dictionary. I would like it to run through the model validator if it's there but also not choke if it's an empty dictionary. input 1: { "something": {} } input 2: { "something": { "name": "George", } } input 3: { "something": { "invalid": true } } class Person(BaseModel): name: str class WebhookRequest(BaseModel): something: Union[Person, Literal[{}]] # invalid literal How would I model something like this in Pydantic such that inputs 1 and 2 succeed while input 3 fails?
Use extra = "forbid" option to disallow extra fields and use an empty model to represent an empty dictionary. from pydantic import BaseModel, ConfigDict class Empty(BaseModel): ... model_config = ConfigDict(extra="forbid") class Person(BaseModel): name: str model_config = ConfigDict(extra="forbid") class WebhookRequest(BaseModel): something: Person | Empty model_config = ConfigDict(extra="forbid")
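A short usage sketch (Pydantic v2 syntax, matching the models above) exercising the three inputs from the question:

from pydantic import ValidationError

# input 1: the empty dict parses as Empty()
print(WebhookRequest.model_validate({"something": {}}))
# input 2: parses as Person(name='George')
print(WebhookRequest.model_validate({"something": {"name": "George"}}))
# input 3: rejected, since neither Person nor Empty accepts the extra field
try:
    WebhookRequest.model_validate({"something": {"invalid": True}})
except ValidationError as exc:
    print("input 3 rejected:", len(exc.errors()), "validation errors")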
5
7
73,084,052
2022-7-22
https://stackoverflow.com/questions/73084052/writing-multiple-dataframes-to-multiple-sheets-in-an-excel-file
I have two data frames that I would like to each write to its own sheet in an Excel file. The following code accomplishes what I want: import pandas as pd df_x = pd.DataFrame({'a':[1, 2, 3]}) df_y = pd.DataFrame({'b':['a', 'b', 'c']}) writer = pd.ExcelWriter('df_comb.xlsx', engine='xlsxwriter') df_x.to_excel(writer, sheet_name='df_x', index=False) df_y.to_excel(writer, sheet_name='df_y', index=False) writer.save() writer.close() However, in my actual use case, I have a large number of dataframes and do not want to write a to_excel statement for each. Is there anyway to loop over a list of dataframes to accomplish this, something along the lines of: for i in [df_x, df_y]: i.to_excel(writer, sheet_name = i, index=False)
What you have is almost there, I think you'll run into problems trying to assign the sheet_name to be the DataFrame as well. I would suggest also having a list of names that you'd like the sheets to be. You could then do something like this: names = ["df_x", "df_y"] dataframes = [df_x, df_y] for i, frame in enumerate(dataframes): frame.to_excel(writer, sheet_name = names[i], index=False) If you aren't familiar with it, enumerate makes a tuple where the values are the index and the item in the list, allowing you to index another list. Below is a solution using a dictionary of the dataframes and looping across dictionary.items(). dataframes = {"df_x": df_x, "df_y": df_y} for name, frame in dataframes.items(): frame.to_excel(writer, sheet_name = name, index=False)
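Building on the dictionary version above, a small sketch using pd.ExcelWriter as a context manager, which removes the explicit save()/close() calls from the question:

import pandas as pd

df_x = pd.DataFrame({'a': [1, 2, 3]})
df_y = pd.DataFrame({'b': ['a', 'b', 'c']})
dataframes = {"df_x": df_x, "df_y": df_y}

# The writer is saved and closed automatically when the with-block exits
with pd.ExcelWriter('df_comb.xlsx', engine='xlsxwriter') as writer:
    for name, frame in dataframes.items():
        frame.to_excel(writer, sheet_name=name, index=False)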
7
10
73,121,344
2022-7-26
https://stackoverflow.com/questions/73121344/i-cant-delete-an-poetry-managed-environment
I'd like to remove an environment (see this question) when I issue /progetti/project_blah$ poetry env remove ./.venv I get /bin/sh: 1: ./.venv: Permission denied EnvCommandError Command ./.venv -c "import sys; print('.'.join([str(s) for s in sys.version_info[:3]]))" errored with the following return code 126, and output: at ~/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/env.py:625 in remove 621β”‚ shell=True, 622β”‚ ) 623β”‚ ) 624β”‚ except CalledProcessError as e: β†’ 625β”‚ raise EnvCommandError(e) 626β”‚ 627β”‚ python_version = Version.parse(python_version.strip()) 628β”‚ minor = "{}.{}".format(python_version.major, python_version.minor) 629β”‚ What is this ?
Besides the permission error, if you are here because you're unable to delete a locally created environment using poetry, you can refer to this github issue. Basically, at the moment, you can't delete the local environment with the poetry env remove <python> command since it will return the Environment does not exist error. To overcome the issue you can delete the .venv folder as suggested also in the comment to the question or run poetry env remove --all. Be careful with this last command and use poetry env list before to check that there is only one environment (the locally created one) that is going to be deleted. UPDATE: Problem seems to be fixed now. Check your version of poetry before.
9
11
73,070,845
2022-7-21
https://stackoverflow.com/questions/73070845/how-to-import-julia-packages-into-python
One can use Julia's built-in modules and functions using the juliacall package. for example: >>> from juliacall import Main as jl >>> import numpy as np # Create a 2*2 random matrix >>> arr = jl.rand(2,2) >>> arr <jl [0.28133223988783074 0.22498491616860727; 0.008312971104033062 0.12927167014532326]> # Check whether Numpy can recognize the shape of the array or not >>> np.array(arr).shape (2, 2) >>> type(np.array(arr)) <class 'numpy.ndarray'> Then I'm curious to know if importing an installed julia package into Python is possible? For example, let's say that one wants to import Flux.jl into Python. Is there a way to achieve this?
I found it by scrutinizing the pictures of the second example of the juliacall GitHub page. According to the example, I'm able to import Flux.jl by taking these steps: >>> from juliacall import Main as jl >>> jl.seval("using Flux") Also, one can install any registered Julia package using Pkg in Python: >>> from juliacall import Main as jl >>> jl.Pkg.add("Flux") # After a successful installation >>> jl.Pkg.status() Status `C:\Users\Shayan\miniconda3\envs\im\julia_env\Project.toml` [587475ba] Flux v0.14.10 [6099a3de] PythonCall v0.9.15 # This indicates that the installation was successful Additional note One can use Julia packages in Python using the pyjulia package that can be installed using pip install julia in Python. Afterward, the following commands should be entered: >>> import julia >>> julia.install() # Then >>> from julia import Pkg >>> Pkg.add("Flux") # After a successful installation >>> Pkg.status() Status `C:\Users\Shayan\.julia\environments\v1.10\Project.toml` [587475ba] Flux v0.14.10 # This indicates that the installation was successful >>> from julia import Flux You can also say, from julia import Flux as flx. Note that using juliacall is recommended.
6
5
73,077,163
2022-7-22
https://stackoverflow.com/questions/73077163/how-to-get-the-number-of-available-cores-in-python
The standard method that I know to get number of cores in python is to use multiprocess.cpu_count import multiprocessing print(multiprocessing.cpu_count()) Also, when creating a task we can specify some cores that the process can access using taskset. Apparently cpu_count is always getting the number of available cores of the machine, regardless if the process was restricted to use only some of those cores. python nproc.py taskset 7 python nproc.py taskset 1 python nproc.py nproc on the other hand gives the number of cores available for the process. taskset 7 nproc # gives at most 3 taskset 1 nproc # gives 1 I know I could invoke nproc or taskset -p {os.getpid()} to get the correct number of cores. Is there another way to do this, without relying on other programs (reading /proc/{os.pid()}/* could be a solution). Thank you
According to the docs, this may be of limited availability, but it seems the os library has what you want; Interface to the scheduler These functions control how a process is allocated CPU time by the operating system. They are only available on some Unix platforms. For more detailed information, consult your Unix manpages. New in version 3.3. ... os.sched_getaffinity(pid) Return the set of CPUs the process with PID pid (or the current process if zero) is restricted to. tested using Ubuntu 20.04.2 (WSL): aaron@DESKTOP:/mnt/c$ python3 -c "import os; print(os.sched_getaffinity(0))" {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11} aaron@DESKTOP:/mnt/c$ taskset 7 python3 -c "import os; print(os.sched_getaffinity(0))" {0, 1, 2} Edit: on windows you'll probably have to look to psutil
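A small cross-platform sketch combining the above with a fallback for platforms where sched_getaffinity is unavailable (e.g. Windows or macOS); psutil would be another option there, as noted above:

import os

def available_cpu_count() -> int:
    """CPUs the current process may run on, falling back to the machine total."""
    try:
        return len(os.sched_getaffinity(0))  # honours taskset/cgroup restrictions on Linux
    except AttributeError:                   # function not provided on this platform
        return os.cpu_count() or 1

print(available_cpu_count())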
6
4
73,075,949
2022-7-22
https://stackoverflow.com/questions/73075949/using-pydantic-with-xml
I am working on a project that uses a lot of xml, and would like to use pydantic to model the objects. In this case I simplified the xml but included an example object. <ns:SomeType name="NameType" shortDescription="some data"> <ns:Bar thingOne="alpha" thingTwo="beta" thingThree="foobar"/> </ns:SomeType> Code from pydantic import BaseModel from typing import Optional, List from xml.etree import ElementTree as ET class Bar(BaseModel): thing_one: str thing_two: str thing_three: str class SomeType(BaseModel): name: str short_description: str bar: Optional[Bar] def main(): with open("path/to/file.xml") as fp: source = fp.read() root = ET.fromstring(source) some_type_list = [] for child in root: st = SomeType( name=child.attrib["name"], short_description=child.attrib["shortDescription"], ) for sub in child: st.bar = Bar( thing_one=sub.attrib["thingOne"], thing_two=sub.attrib["thingTwo"], thing_three=sub.attrib["thingThree"], ) I looked into BaseModel.parse_obj or BaseModel.parse_raw but I don't think that will solve the problem. I also thought I could try to use xmltodict to convert the xml, the namespace's and the @ attribute's get even more in the way... >>> import xmltodict >>> xmltodict.parse(input_xml) {'ns:SomeType': {'@name': 'NameType', '@shortDescription': 'some data', ... }}
xmltodict can help in your example if you combine it with field aliases: from typing import Optional import xmltodict from pydantic import BaseModel, Field class Bar(BaseModel): thing_one: str = Field(alias="@thingOne") thing_two: str = Field(alias="@thingTwo") thing_three: str = Field(alias="@thingThree") class SomeType(BaseModel): name: str = Field(alias="@name") short_description: str = Field(alias="@shortDescription") bar: Optional[Bar] = Field(alias="ns:Bar") class Root(BaseModel): some_type: SomeType = Field(alias="ns:SomeType") print( Root.model_validate( xmltodict.parse( """<ns:SomeType name="NameType" shortDescription="some data"> <ns:Bar thingOne="alpha" thingTwo="beta" thingThree="foobar"/> </ns:SomeType>""")).some_type) Output: name='NameType' short_description='some data' bar=Bar(thing_one='alpha', thing_two='beta', thing_three='foobar') You can see in the example above that a Root model is needed because the dict has an ns:SomeType key.
6
6
73,089,846
2022-7-23
https://stackoverflow.com/questions/73089846/python-3-simple-http-server-with-get-functional
I can't find any Python code for the equivalent of python -m http.server port --bind addr --directory dir So I basically need a working server class that processes at least GET requests. Most of the things I found on Google were either an HTTP server with some special needs or something like that, where you need to code the response behaviour yourself: from http.server import BaseHTTPRequestHandler, HTTPServer def run(server_class=HTTPServer, handler_class=BaseHTTPRequestHandler): server_address = ('', 8000) httpd = server_class(server_address, handler_class) httpd.serve_forever() run() All that I need is a default working skeleton of a Python HTTP server, where you can provide address, port and directory, and it would normally process GET requests.
That's what I ended up with: # python -m http.server 8000 --directory ./my_dir from http.server import HTTPServer as BaseHTTPServer, SimpleHTTPRequestHandler import os class HTTPHandler(SimpleHTTPRequestHandler): """This handler uses server.base_path instead of always using os.getcwd()""" def translate_path(self, path): path = SimpleHTTPRequestHandler.translate_path(self, path) relpath = os.path.relpath(path, os.getcwd()) fullpath = os.path.join(self.server.base_path, relpath) return fullpath class HTTPServer(BaseHTTPServer): """The main server, you pass in base_path which is the path you want to serve requests from""" def __init__(self, base_path, server_address, RequestHandlerClass=HTTPHandler): self.base_path = base_path BaseHTTPServer.__init__(self, server_address, RequestHandlerClass) web_dir = os.path.join(os.path.dirname(__file__), 'my_dir') httpd = HTTPServer(web_dir, ("", 8000)) httpd.serve_forever() A simple HTTP server that handles GET requests, serving files from a given directory.
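On Python 3.7+ there is a shorter route, since SimpleHTTPRequestHandler accepts a directory argument; a minimal sketch equivalent to the class-based version above:

# python -m http.server 8000 --directory ./my_dir
import os
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

web_dir = os.path.join(os.path.dirname(__file__), 'my_dir')
# partial() bakes the directory into the handler class passed to HTTPServer
handler = partial(SimpleHTTPRequestHandler, directory=web_dir)
HTTPServer(('', 8000), handler).serve_forever()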
5
3
73,104,543
2022-7-25
https://stackoverflow.com/questions/73104543/how-to-install-different-versions-of-r-e-g-r-4-1-2-in-a-conda-environment
I want to be able to install higher versions of R in my conda environment. In particular r 4.1.2. I have also installed Mamba fyi. Currently r-base has: conda activate main conda search r-base Name Version r-base 3.2.4 r-base 3.3.0 r-base 3.3.1 r-base 3.3.2 r-base 3.4.1 r-base 3.4.2 r-base 3.4.3 r-base 3.4.3 r-base 3.5.1 r-base 3.5.3 r-base 3.6.0 r-base 3.6.1 r-base 3.6.1
You could specify the version directly: mamba install -c conda-forge r-base=4.1.2 or conda install -c conda-forge r-base=4.1.2 On Jul 25, 2022 the highest available in conda-forge was 4.1.3 while in pkgs/r 4.2.0 was the highest version available.
8
12
73,105,877
2022-7-25
https://stackoverflow.com/questions/73105877/importerror-cannot-import-name-parse-rule-from-werkzeug-routing
I got the following message after running my Flask project on another system. The application ran all the time without problems: Error: While importing 'app', an ImportError was raised: Traceback (most recent call last): File "c:\users\User\appdata\local\programs\python\python39\lib\site-packages\flask\cli.py", line 214, in locate_app __import__(module_name) File "C:\Users\User\Desktop\Projekt\app\__init__.py", line 3, in <module> from flask_restx import Namespace, Api File "c:\users\User\appdata\local\programs\python\python39\lib\site-packages\flask_restx\__init__.py", line 5, in <module> File "c:\users\User\appdata\local\programs\python\python39\lib\site-packages\flask_restx\api.py", line 50, in <module> from .swagger import Swagger File "c:\users\User\appdata\local\programs\python\python39\lib\site-packages\flask_restx\swagger.py", line 18, in <module> from werkzeug.routing import parse_rule ImportError: cannot import name 'parse_rule' from 'werkzeug.routing' (c:\users\User\appdata\local\programs\python\python39\lib\site-packages\werkzeug\routing\__i nit__.py) My requirements.txt Flask~=2.1.2 psycopg2-binary==2.9.3 Flask-SQLAlchemy==2.5.1 flask-restx==0.5.1 qrcode~=7.3.1 PyPDF2==2.6.0 reportlab~=3.6.10 WTForms~=3.0.1 flask-bootstrap==3.3.7.1 flask-wtf==1.0.1
The workaround I use for now is to pin werkzeug to 2.1.2 in requirements.txt. This should only be done until the other libraries are compatible with the latest version of Werkzeug, at which point the pin should be updated. werkzeug==2.1.2
40
38
73,060,080
2022-7-21
https://stackoverflow.com/questions/73060080/how-do-i-use-qt6-dark-theme-with-pyside6
Simple demo application I am trying to set the theme to dark. I would prefer a code version (non QtQuick preferred), but only way I see for Python is with a QtQuick config file, and even that does not work. from PySide6 import QtWidgets from PySide6 import QtQuick if __name__ == '__main__': app = QtWidgets.QApplication() app.setApplicationDisplayName("Should be Dark Theme") app.setStyle("Universal") view = QtQuick.QQuickView() view.show() app.exec() And I have a qtquickcontrols2.conf configuration file in the same directory. (Also tried setting QT_QUICK_CONTROLS_CONF to absolute path.) [Controls] Style=Material [Universal] Theme=Dark [Material] Theme=Dark And yet, it's still bright white: I do not care if it is Material or Universal style, just want some built in dark mode for the title bar. In the end, need a way to make the titlebar dark without creating a custom one. Thank you for any guidance!
import sys sys.argv += ['-platform', 'windows:darkmode=2'] app = QApplication(sys.argv) above 3 lines can change your window to dark mode if you are using windows and Fusion style makes the app more beautiful, tested in windows 10, 11 example:- from PySide6.QtWidgets import ( QApplication, QCheckBox, QComboBox, QDateEdit, QDateTimeEdit, QDial, QDoubleSpinBox, QFontComboBox, QLabel, QLCDNumber, QLineEdit, QMainWindow, QProgressBar, QPushButton, QRadioButton, QSlider, QSpinBox, QTimeEdit, QVBoxLayout, QWidget, ) import sys sys.argv += ['-platform', 'windows:darkmode=2'] class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("Widgets App") layout = QVBoxLayout() widgets = [ QCheckBox, QComboBox, QDateEdit, QDateTimeEdit, QDial, QDoubleSpinBox, QFontComboBox, QLCDNumber, QLabel, QLineEdit, QProgressBar, QPushButton, QRadioButton, QSlider, QSpinBox, QTimeEdit, ] for w in widgets: layout.addWidget(w()) widget = QWidget() widget.setLayout(layout) self.setCentralWidget(widget) app = QApplication(sys.argv) app.setStyle('Fusion') window = MainWindow() window.show() app.exec()
8
11
73,080,088
2022-7-22
https://stackoverflow.com/questions/73080088/how-to-solve-jsondecodeerror-when-using-poetry-in-github-actions
Issue I've got a problem using poetry install in my CI/CD pipeline (Github Actions), on any GitHub runner, since I migrated from Python 3.8 to Python 3.10. Installing dependencies from lock file Package operations: 79 installs, 0 updates, 0 removals β€’ Installing pyparsing (3.0.9) JSONDecodeError Expecting value: line 1 column 1 (char 0) at /opt/hostedtoolcache/Python/3.10.0/x64/lib/python3.10/json/decoder.py:355 in raw_decode 351β”‚ """ 352β”‚ try: 353β”‚ obj, end = self.scan_once(s, idx) 354β”‚ except StopIteration as err: β†’ 355β”‚ raise JSONDecodeError("Expecting value", s, err.value) from None 356β”‚ return obj, end 357β”‚ Error: Process completed with exit code 1. I didn't change any lib configuration in my pyproject.toml file, but as you can see above: Poetry hides most of the StackTrace. What I tried Recreating the poetry.lock file. Removing the Poetry cache using rm -r ~/.cache/pypoetry/cache/ (and rm -r ~/.cache/pypoetry/). Removing the lib that was returning an error (actually, the issue seems to happen with any lib, so that's the reason why I understand it's probably related to poetry and python) Question Any idea how to resolve this issue in my CI/CD pipeline?
After a few researches, I found this thread on the poetry GitHub repository from november 2021. There is this workaround from hoefling GitHub user: Disabling poetry's experimental new installer may be a workaround for now: Solution poetry config experimental.new-installer false Adding this line in the shell before running the poetry install command resolved my problem! Note that in the same thread, another comment from ddc67cd stated that: the issue is resolved with the new version of cachecontrol==0.12.9 (it should be installed automatically). But running pip install -U cachecontrol didn't resolve the issue in my specific case (might be worth testing otherwise?). It also seems the problem came back recently (July 2022) and this comment suggested a possible root cause to the issue related to the setuptools library. Anyway, disabling poetry's experimental new installer should resolve the problem for now, until a permanent solution is found.
7
7
73,070,247
2022-7-21
https://stackoverflow.com/questions/73070247/how-to-change-image-format-when-uploading-image-in-django
When a user uploads an image from the Django admin panel, I want to change the image format to '.webp'. I have overridden the save method of the model. The webp file is generated in the media/banner folder but the generated file is not saved in the database. How can I achieve that? def save(self, *args, **kwargs): super(Banner, self).save(*args, **kwargs) im = Image.open(self.image.path).convert('RGB') name = 'Some File Name with .webp extention' im.save(name, 'webp') self.image = im But after saving the model, the instance of the Image class is not saved in the database. My model class is: class Banner(models.Model): image = models.ImageField(upload_to='banner') device_size = models.CharField(max_length=20, choices=Banner_Device_Choice)
from django.core.files import ContentFile If you already have the webp file, read the webp file, put it into the ContentFile() with a buffer (something like io.BytesIO). Then you can proceed to save the ContentFile() object to a model. Do not forget to update the model field, and save the model! https://docs.djangoproject.com/en/4.1/ref/files/file/ Alternatively "django-webp-converter is a Django app which straightforwardly converts static images to WebP images, falling back to the original static image for unsupported browsers." It might have some save capabilities too. https://django-webp-converter.readthedocs.io/en/latest/ The cause You are also saving in the wrong order, the correct order to call the super().save() is at the end. Edited, and tested solution: from django.core.files import ContentFile from io import BytesIO def save(self, *args, **kwargs): #if not self.pk: #Assuming you don't want to do this literally every time an object is saved. img_io = BytesIO() im = Image.open(self.image).convert('RGB') im.save(img_io, format='WEBP') name="this_is_my_webp_file.webp" self.image = ContentFile(img_io.getvalue(), name) super(Banner, self).save(*args, **kwargs) #Not at start anymore
7
5
73,108,683
2022-7-25
https://stackoverflow.com/questions/73108683/getting-error-cannot-import-name-unicode-emoji-from-emoji-unicode-codes
I'm trying to create an Instagram bot using InstaPy. I'm following this tutorial. When I ran: from instapy import InstaPy session = InstaPy(username="your username", password="your password") session.login() I got this error: ImportError: cannot import name 'UNICODE_EMOJI' from 'emoji.unicode_codes' (C:\Users\roeegg22\AppData\Local\Programs\Python\Python310\lib\site-packages\emoji\unicode_codes\__init__.py) I tried some solutions online, but none of them worked.
This happens because instapy (or some other library) doesn't reflect the latest update to the emoji library. You should be able to fix it by running pip uninstall emoji pip install emoji==1.7 in the terminal. That way you install the version of emoji library that instapy is made around and the import should work.
12
27
73,122,688
2022-7-26
https://stackoverflow.com/questions/73122688/numpy-efficiently-create-this-matrix-n-3-base-values-of-another-list-and-repe
How can I create the matrix [[a, 0, 0], [0, a, 0], [0, 0, a], [b, 0, 0], [0, b, 0], [0, 0, b], ...] from the vector [a, b, ...] efficiently? There must be a better solution than np.squeeze(np.reshape(np.tile(np.eye(3), (len(foo), 1, 1)) * np.expand_dims(foo, (1, 2)), (1, -1, 3))) right?
You can create a zero array in advance, and then quickly assign values by slicing: def concated_diagonal(ar, col): ar = np.asarray(ar).ravel() size = ar.size ret = np.zeros((col * size, col), ar.dtype) for i in range(col): ret[i::col, i] = ar return ret Test: >>> concated_diagonal([1, 2, 3], 3) array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [2, 0, 0], [0, 2, 0], [0, 0, 2], [3, 0, 0], [0, 3, 0], [0, 0, 3]]) Note that because the number of columns you require is small, the impact of the relatively slow Python level for loop is acceptable: %timeit concated_diagonal(np.arange(1_000_000), 3) 17.1 ms Β± 84.3 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) Update: A solution with better performance! This is done in one step by clever reshaping and slice assignment: def concated_diagonal(ar, col): ar = np.asarray(ar).reshape(-1, 1) size = ar.size ret = np.zeros((col * size, col), ar.dtype) ret.reshape(size, -1)[:, ::col + 1] = ar return ret Time comparison: %timeit concated_diagonal(np.arange(1_000_000), 3) 10.7 ms Β± 198 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each)
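For completeness, a more concise (though not necessarily faster) alternative sketch using np.kron, which builds the same stacked-diagonal layout; passing a dtype to np.eye keeps integer output:

import numpy as np

def concated_diagonal_kron(ar, col):
    ar = np.asarray(ar).reshape(-1, 1)
    # Kronecker product of the column vector with the identity stacks a*I, b*I, ... vertically
    return np.kron(ar, np.eye(col, dtype=ar.dtype))

print(concated_diagonal_kron([1, 2, 3], 3))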
4
3
73,125,231
2022-7-26
https://stackoverflow.com/questions/73125231/pytorch-dataloaders-bad-file-descriptor-and-eof-for-workers0
Description of the problem I am encountering a strange behavior during a neural network training with Pytorch dataloaders made from a custom dataset. The dataloaders are set with workers=4, pin_memory=False. Most of the time, the training finished with no problems. Sometimes, the training stopped at a random moment with the following errors: OSError: [Errno 9] Bad file descriptor EOFError It looks like the error occurs during socket creation to access dataloader elements. The error disappears when I set the number of workers to 0, but I need to accelerate my training with multiprocessing. What could be the source of the error ? Thank you ! The versions of python and libraries Python 3.9.12, Pyorch 1.11.0+cu102 EDIT: The error is occurring only on clusters Output of error file Traceback (most recent call last): File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/resource_sharer.py", line 145, in _serve Epoch 17: 52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 253/486 [01:00<00:55, 4.18it/s, loss=1.73] Traceback (most recent call last): File "/my_directory/bench/run_experiments.py", line 251, in <module> send(conn, destination_pid) File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/resource_sharer.py", line 50, in send reduction.send_handle(conn, new_fd, pid) File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/reduction.py", line 183, in send_handle with socket.fromfd(conn.fileno(), socket.AF_UNIX, socket.SOCK_STREAM) as s: File "/my_directory/.conda/envs/geoseg/lib/python3.9/socket.py", line 545, in fromfd return socket(family, type, proto, nfd) File "/my_directory/.conda/envs/geoseg/lib/python3.9/socket.py", line 232, in __init__ _socket.socket.__init__(self, family, type, proto, fileno) OSError: [Errno 9] Bad file descriptor main(args) File "/my_directory/bench/run_experiments.py", line 183, in main run_experiments(args, save_path) File "/my_directory/bench/run_experiments.py", line 70, in run_experiments ) = run_algorithm(algorithm_params[j], mp[j], ss, dataset) File "/my_directorybench/algorithms.py", line 38, in run_algorithm data = es(mp,search_space, dataset, **ps) File "/my_directorybench/algorithms.py", line 151, in es data = ss.generate_random_dataset(mp, File "/my_directorybench/architectures.py", line 241, in generate_random_dataset arch_dict = self.query_arch( File "/my_directory/bench/architectures.py", line 71, in query_arch train_losses, val_losses, model = meta_net.get_val_loss( File "/my_directory/bench/meta_neural_net.py", line 50, in get_val_loss return self.training( File "/my_directorybench/meta_neural_net.py", line 155, in training train_loss = self.train_step(model, device, train_loader, epoch) File "/my_directory/bench/meta_neural_net.py", line 179, in train_step for batch_idx, mini_batch in enumerate(pbar): File "/my_directory/.conda/envs/geoseg/lib/python3.9/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 530, in __next__ data = self._next_data() File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1207, in _next_data idx, data = self._get_data() File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1173, in _get_data success, data = self._try_get_data() File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1011, in _try_get_data data = self._data_queue.get(timeout=timeout) File 
"/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/queues.py", line 122, in get return _ForkingPickler.loads(res) File "/my_directory/.local/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 295, in rebuild_storage_fd fd = df.detach() File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/resource_sharer.py", line 58, in detach return reduction.recv_handle(conn) File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/reduction.py", line 189, in recv_handle return recvfds(s, 1)[0] File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/reduction.py", line 159, in recvfds raise EOFError EOFError EDIT : The way data is accessed from PIL import Image from torch.utils.data import DataLoader # extract of code of dataset class Dataset(): def __init__(self,image_files,mask_files): self.image_files = image_files self.mask_files = mask_files def __getitem__(self, idx): img = Image.open(self.image_files[idx]).convert('RGB') mask=Image.open(self.mask_files[idx]).convert('L') return img, mask # extract of code of trainloader train_loader = DataLoader( dataset=train_dataset, batch_size=4, num_workers=4, pin_memory=False, shuffle=True, drop_last=True, persistent_workers=False, )
I have finally found a solution. Adding this configuration to the dataset script works: import torch.multiprocessing torch.multiprocessing.set_sharing_strategy('file_system') By default, the sharing strategy is set to 'file_descriptor'. I have tried some solutions explained in : this issue (increase shared memory, increase max number of opened file descriptors, torch.cuda.empty_cache() at the end of each epoch, ...) and this other issue, that turns out to solve the problem As suggested by @AlexMeredith, the error may be linked to the distributed filesystem (Lustre) that some clusters use. The error may also come from distributed shared memory.
5
7
73,075,669
2022-7-22
https://stackoverflow.com/questions/73075669/how-to-extract-doc-from-avro-data-and-add-it-to-dataframe
I'm trying to create hive/impala tables based on Avro files in HDFS. The tool for doing the transformations is Spark. I can't use spark.read.format("avro") to load the data into a dataframe, as in that way the doc part (description of the column) will be lost. I can see the doc by doing: input = sc.textFile("/path/to/avrofile") avro_schema = input.first() # not sure what type it is The problem is, it's a nested schema and I'm not sure how to traverse it to map the doc to the column description in the dataframe. I'd like to have the doc as the column description of the table. For example, the input schema looks like: "fields": [ { "name":"productName", "type": [ "null", "string" ], "doc": "Real name of the product" "default": null }, { "name" : "currentSellers", "type": [ "null", { "type": "record", "name": "sellers", "fields":[ { "name": "location", "type":[ "null", { "type": "record" "name": "sellerlocation", "fields": [ { "name":"locationName", "type": [ "null", "string" ], "doc": "Name of the location", "default":null }, { "name":"locationArea", "type": [ "null", "string" ], "doc": "Area of the location",#The comment needs to be added to table comments "default":null .... #These are nested fields In the final table, for example one field name would be currentSellers_locationName, with column description "Name of the location". Could someone please help to shed some light on how to parse the schema and add the doc to the description? And could you explain a bit what the bit below is about, outside of the fields? Many thanks. Let me know if I can explain it better. "name" : "currentSellers", "type": [ "null", { "type": "record", "name": "sellers", "fields":[ {
If you would like to parse the schema yourself and manually add metadata to spark, I would suggest flatdict package: from flatdict import FlatterDict flat_schema = FlatterDict(schema) # schema as python dict names = {k.replace(':name', ''): flat_schema[k] for k in flat_schema if k.endswith(':name')} docs = {k.replace(':doc', ''): flat_schema[k] for k in flat_schema if k.endswith(':doc')} # keep only keys which are present in both names and docs keys_with_doc = set(names.keys()) & set(docs.keys()) full_name = lambda key: '_'.join( names[k] for k in sorted(names, key=len) if key.startswith(k) and k.split(':')[-2] == 'fields' ) name_doc_map = {full_name(k): docs[k] for k in keys_with_doc} A typical set of keys in flat_schema.keys() is: 'fields:1:type:1:fields:0:type:1:fields:0:type:1', 'fields:1:type:1:fields:0:type:1:fields:0:name', 'fields:1:type:1:fields:0:type:1:fields:0:default', 'fields:1:type:1:fields:0:type:1:fields:0:doc', These strings can now be manipulated: extract only the ones ending with "name" and "doc" (ignore "default", etc.) get set intersection to remove the ones that do not have both fields present get a list of all field names from higher levels of hierarchy: fields:1:type:1:fields is one of parents of fields:1:type:1:fields:0:type:1:fields (the condition is that they have the same start and they end with "fields")
6
1
73,093,143
2022-7-23
https://stackoverflow.com/questions/73093143/getting-ttm-income-statement-yahoo-finance-using-yahoo-fin
I'm trying to get the TTM values of the income statement for ticker symbol AAPL by using from yahoo_fin import stock_info as si import yfinance as yf import pandas as pd import matplotlib.pyplot as plt import pandas_datareader pd.set_option('display.max_columns', None) income_statement = si.get_income_statement("aapl") income_statement but the result doesn't show the TTM values; only the yearly values are shown. On Yahoo Finance, we can also see the TTM values: Can anyone help?
The reason you don't get the exact same table you see on the website is because of the way yahoo_fin gets data from Yahoo. Rather than getting them from the table you see, they get them from json data that Yahoo provides. In this data, there are both quarterly and yearly income statements. When Yahoo renders the table on their website, they most likely use the yearly data for the yearly columns and then sum the last 4 quarterly results to get the TTM column (as TTM results are nothing else than the sum of the last 4 quarterly results). If you want to get the TTM data, the best approach would be to do it the same way I assume Yahoo does. Get the quarterly data using yahoo_fin and then sum the quarters to calculate the TTM results. You can do this by setting the optional yearly parameter to False: quarterly_income_statement = si.get_income_statement("aapl", yearly=False) You can check out their method _parse_json to better understand how they get and parse data from Yahoo. (Assuming you have some knowledge of requests and json.) Summing the data To get the sum of the quarters you can for example do this: quarterly_income_statement = si.get_income_statement("aapl", yearly=False) ttm = quarterly_income_statement.sum(axis=1) This will give you a new Dataframe ttm with the same data fields with values being TTM (You can test it and see if it matches the numbers on the website).
4
3
73,072,257
2022-7-21
https://stackoverflow.com/questions/73072257/resolve-warning-a-numpy-version-1-16-5-and-1-23-0-is-required-for-this-versi
When I import SciPy or a library dependent on it, I receive the following warning message: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.1 It's true that I am running NumPy version 1.23.1; however, this message is a mystery to me since I am running SciPy version 1.7.3, which, according to SciPy's documentation, is compatible with NumPy <1.24.0. Is anyone else having this problem, or does anyone know how to resolve it? I am using Conda as an environment manager, and all my packages are up to date as far as I know. python: 3.9.12 numpy: 1.23.1 scipy: 1.7.3 Thanks in advance if anyone has any clues!
According to the setup.py file of scipy 1.7.3, the numpy requirement is indeed <1.23.0. As @Libra said, the docs must be incorrect. You can: Ignore this warning Use scipy 1.8 Use numpy < 1.23.0 Edit: This is now fixed in the dev docs of scipy https://scipy.github.io/devdocs/dev/toolchain.html
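For illustration, a minimal sketch of the options above (the exact version pins are assumptions drawn from this answer, so adjust them to your environment):
# keep scipy 1.7.3 and move numpy back into its declared range
conda install "numpy<1.23"        # or: pip install "numpy<1.23"
# or move to a scipy release whose declared range includes numpy 1.23
pip install --upgrade "scipy>=1.8"
If you would rather keep both versions and just silence the message, a standard warnings filter applied before the SciPy import also works:
import warnings
warnings.filterwarnings("ignore", message="A NumPy version >=")
import scipy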
27
12
73,125,077
2022-7-26
https://stackoverflow.com/questions/73125077/how-to-programatically-save-pdf-of-jupyter-notebook-with-a-custom-dynamic-file
On Windows 10, I am trying to save a Jupyter notebook as a pdf, under a name that will change for every run of the notebook. Here is what I have so far: name1 = 'July' name2 = 'August' jupyter_nb_filename = '{}_vs_{}'.format(name1,name2) !jupyter nbconvert --output-dir="C:\\mydir\\" --output=jupyter_nb_filename --to pdf --TemplateExporter.exclude_input=True mynotebook.ipynb Current behaviour: Jupyter notebook is stored as "jupyter_nb_filename.pdf" Desired behaviour. Jupyter notebook to be stored as "July_vs_August.pdf" I looked in the documentation but could not figure it out. Any help would be greatly appreciated.
You have to prefix the variable name with $ for it to be interpreted as a variable in a console command. See this question. !jupyter nbconvert --output-dir="C:\\mydir\\" --output=$jupyter_nb_filename --to pdf --TemplateExporter.exclude_input=True mynotebook.ipynb
5
1
73,106,139
2022-7-25
https://stackoverflow.com/questions/73106139/fastest-way-to-repeatedly-find-indices-of-k-largest-values-in-an-iteratively-par
In a complex-valued array a with nsel = ~750000 elements, I repeatedly (>~10^6 iterations) update nchange < ~1000 elements. After each iteration, in the absolute-squared, real-valued array b, I need to find the indices of the K largest values (K can be assumed to be small, for sure K <= ~50, in practice likely K <= ~10). The K indices do not need to be sorted. The updated values and their indices change in each iteration and they depend on the (a priori) unknown elements of a corresponding to the largest values of b and their indices. Nonetheless, let us assume they are essentially random, with exception that one specific element (typically (one of) the largest value(s)) is always included among the updated values. Important: After an update, the new largest value(s) might be among the non-updated elements. Below is a minimal example. For simplicity, it demonstrates only one of the 10^6 (looped) iterations. We can find the indices of the K largest values using b.argmax() (for K = 1) or b.argpartition() (arbitrary K, general case, see https://stackoverflow.com/a/23734295/5269892). However, due to the large size of b (nsel), going over the full arrays to find the indices of the largest values is very slow. Combined with the large number of iterations, this forms the bottleneck of a larger code (the nonlinear deconvolution algorithm CLEAN) I am using into which this step is embedded. I have already asked the question how to find the largest value (the case K = 1) most efficiently, see Python most efficient way to find index of maximum in partially changed array. The accepted solution relies on accessing b only partially by splitting the data into chunks and (re-)computing the maxima of only the chunks for which some elements were updated. A speed-up of > 7x is thus achieved. According to the author @JΓ©rΓ΄me Richard (thanks for your help!), this solution can unfortunately not be easily generalized to K > 1. As suggested by him, a possible alternative may be a binary search tree. Now my Questions: How is such a binary tree implemented in practice and how do we then find the indices of the largest values most efficiently (and if possible, easily)? Do you have other solutions for the fastest way to repeatedly find the indices of the K largest values in the partially updated array? Note: In each iteration I will need b (or a copy of it) later again as a numpy array. If possible, the solution should be mostly python-based, calling C from python or using Cython or numba is ok. I currently use python 3.7.6, numpy 1.21.2. 
import numpy as np # some array shapes ('nnu_use' and 'nm'), number of total values ('nvals'), number of selected values ('nsel'; # here 'nsel' == 'nvals'; in general 'nsel' <= 'nvals') and number of values to be changed ('nchange' << 'nsel') nnu_use, nm = 10418//2 + 1, 144 nvals = nnu_use * nm nsel = nvals nchange = 1000 # number of largest peaks to be found K = 10 # fix random seed, generate random 2D 'Fourier transform' ('a', complex-valued), compute power ('b', real-valued), # and two 2D arrays for indices of axes 0 and 1 np.random.seed(100) a = np.random.rand(nsel) + 1j * np.random.rand(nsel) b = a.real ** 2 + a.imag ** 2 inu_2d = np.tile(np.arange(nnu_use)[:,None], (1,nm)) im_2d = np.tile(np.arange(nm)[None,:], (nnu_use,1)) # select 'nsel' random indices and get 1D arrays of the selected 2D indices isel = np.random.choice(nvals, nsel, replace=False) inu_sel, im_sel = inu_2d.flatten()[isel], im_2d.flatten()[isel] def do_update_iter(a, b): # find index of maximum, choose 'nchange' indices of which 'nchange - 1' are random and the remaining one is the # index of the maximum, generate random complex numbers, update 'a' and compute updated 'b' imax = b.argmax() ichange = np.concatenate(([imax],np.random.choice(nsel, nchange-1, replace=False))) a_change = np.random.rand(nchange) + 1j*np.random.rand(nchange) a[ichange] = a_change b[ichange] = a_change.real ** 2 + a_change.imag ** 2 return a, b, ichange # do an update iteration on 'a' and 'b' a, b, ichange = do_update_iter(a, b) # find indices of largest K values ilarge = b.argpartition(-K)[-K:]
I tried to implement a Cython solution based on C++ containers (for 64-bit float values). The good news is that it is faster than a naive np.argpartition. The bad news is that it is quite complex and not much faster: 3~4 times faster. One main issue is that Cython do not implement the std::multimap container which is the most useful one. It is possible to implement this container using a std::map<Key, std::vector<Value>> type but it makes the code significantly more complex and also less efficient (due to the additional cache-unfriendly indirection in memory). If one can guarantee that there is no duplicates in b, then performance can be significantly better (up to x2) since std::map can be used instead. Furthermore, Cython do not seems to accept recent C++11/C++17/C++20 features making the code more cumbersome to read/write. This is sad since [some feature like extract and rvalues-references] can make the code faster. Another main issue is that the execution time is bounded by cache-misses (>75% on my machine) because the binary RB-trees are not cache friendly. The thing is the overall data structure is very likely bigger than the CPU caches. Indeed, 750_000*(8*2+4) = 15_000_000 bytes are at least required to store the key-values, not to mention a similar amount of memory is needed to store node pointers of the tree data structure and most processor caches are smaller than 30 MB. This is mainly a problem during the update because of random accesses: each lookup/insert require log2(nsel) fetches in RAM and the latency of the RAM is typically of several dozens of nanoseconds. Additionally, (C++) RB-trees do not support key updates so a remove+insert is required. I tried to mitigate this problem using a parallel prefetching approach. Unfortunately, it was generally slower in practice... In practice, the extraction of the K-largest items is very fast (about few microseconds for 1000 items and 750_000 values in the tree) while the update takes about 1.0-1.5 millisecond. Meanwhile, np.argpartition takes ~4.5 milliseconds. Some people reported (eg. here) that std::map is actually quite slow when the number of item is quite big. Thus, it may be a good idea to use another non-standard C++ implementation. I expect B-trees to be faster in this case. The Google Abseil library contains such containers and they are certainly significantly faster. That being said, it certainly require a wrapping some code which can be tedious. Alternatively, one can write a full C++ class and call it from Cython. 
Implementation Here is the implementation (and an example of usage at the end): maxtree.pyx: # distutils: language = c++ import numpy as np cimport numpy as np cimport cython # See: https://cython.readthedocs.io/en/latest/src/userguide/wrapping_CPlusPlus.html from libcpp.vector cimport vector from libcpp.map cimport map from libcpp.pair cimport pair from cython.operator cimport dereference as deref, preincrement as inc @cython.boundscheck(False) # Deactivate bounds checking @cython.wraparound(False) # Deactivate negative indexing cdef class MaxTree: cdef map[double, vector[int]] data cdef int itemCount # Build a tree from `b` def __init__(self, double[::1] b): cdef map[double, vector[int]].iterator it cdef pair[double, vector[int]] node cdef double val cdef int i # Temporary node used to ease insertion node.second.resize(1) # Iterate over `b` items so to add them in the tree for i in range(b.size): val = b[i] it = self.data.find(val) if it == self.data.end(): # Value not found: add a new node node.first = val node.second[0] = i self.data.insert(node) else: # Value found: adds a new duplicate in an existing node deref(it).second.push_back(i) self.itemCount = b.size def size(self): return self.itemCount # Get the index (in the original `b` array) of the K-largest values def getKlargest(self, int count): cdef map[double, vector[int]].reverse_iterator rit cdef int vecSize cdef int* vecData cdef int i, j cdef int[::1] resultView if count > self.itemCount: count = self.itemCount result = np.empty(count, dtype=np.int32) resultView = result i = 0 rit = self.data.rbegin() while rit != self.data.rend(): vecSize = deref(rit).second.size() vecData = deref(rit).second.data() # Note: indices are not always sorted here due to the update for j in range(vecSize-1, -1, -1): resultView[i] = vecData[j] i += 1 count -= 1 if count <= 0: return resultView inc(rit) return result # Set the values of `b` at the index `index` to `values` and update the tree accordingly def update(self, double[::1] b, int[::1] index, double[::1] values): cdef map[double, vector[int]].iterator it cdef pair[double, vector[int]] node #cdef pair[map[double, vector[int]].iterator, bool] infos cdef int idx, i, j, vecSize, indexSize cdef double oldValue, newValue cdef int* vecData assert b.size == self.itemCount assert index.size == values.size assert np.min(index) >= 0 and np.max(index) < b.size # Temporary node used to ease insertion node.second.resize(1) for i in range(index.size): idx = index[i] oldValue = b[idx] newValue = values[i] it = self.data.find(oldValue) assert it != self.data.end() # Update the tree if deref(it).second.size() == 1: # Remove the node from the tree and add a new one because keys are immutable # Assume `index` is correct/coherent and the tree is correctly updated for sake of performance #assert deref(it).second[0] == idx self.data.erase(it) node.first = newValue node.second[0] = idx infos = self.data.insert(node) inserted = infos.second if not inserted: # Duplicate it = infos.first deref(it).second.push_back(idx) else: # Tricky case due to duplicates (untested) vecData = deref(it).second.data() vecSize = deref(it).second.size() # Search the element and remove it for j in range(vecSize): if vecData[j] == idx: vecData[j] = vecData[vecSize-1] deref(it).second.pop_back() break # Update `b` b[idx] = values[i] setup.py: # setup.py from setuptools import setup from Cython.Build import cythonize setup(ext_modules=cythonize("maxtree.pyx")) main.py: # Usage: import numpy as np import maxtree np.random.seed(0) b = 
np.random.rand(750_000) nchange = 1_000 ichange = np.random.randint(0, b.size, nchange).astype(np.int32) tree = maxtree.MaxTree(b) tree.getKlargest(nchange) tree.update(b, ichange, b[ichange]*0.999) command to run: python3 setup.py build_ext --inplace -q
5
1
73,083,672
2022-7-22
https://stackoverflow.com/questions/73083672/aws-cdk-lambda-function-cannot-find-asset-at-path
I'm trying to make a Lambda function using the AWS CDK They make it seem simple enough, but when I use cdk synth, it's giving me an error that the asset doesn't exist (even though it does exist). Here's my code: cwd = os.getcwd() aws_lambda.Function(self, "lambda_function", runtime=aws_lambda.Runtime.PYTHON_3_9, handler="index.handler", code=aws_lambda.Code.from_asset(os.path.join(cwd, "lambda_functions/lambda")) ) The file exists, and the error message prints the directory I expect it to, so what's the issue here?
From the documentation, Code.from_asset(...) requires you to specify a directory or a .zip file. Looking at your code, you're referencing a directory, which is not what you actually have there. Change the path to add the .zip extension. cwd = os.getcwd() aws_lambda.Function(self, "lambda_function", runtime=aws_lambda.Runtime.PYTHON_3_9, handler="index.handler", code=aws_lambda.Code.from_asset(os.path.join(cwd, "lambda_functions/lambda.zip")) )
7
4
73,128,975
2022-7-26
https://stackoverflow.com/questions/73128975/pydantic-created-at-and-updated-at-fields
I'm new to using Pydantic and I'm using it to set up the models for FastAPI to integrate with my postgres database. I want to make a model that has an updated_at and created_at field which store the last datetime the model was updated and the datetime the model was created. I figured created_at could be something like this: created_at: datetime = datetime.now() How would I do an updated_at so it updates the datetime automatically every time the model is updated?
You can use a validator which will update the field updated_at each time when some other data in the model will change. The root_validator and the validate_assignment config attribute are what you are looking for. This is the sample code: from datetime import datetime from time import sleep from pydantic import BaseModel,root_validator class Foo(BaseModel): data: str = "Some data" created_at: datetime = datetime.now() updated_at: datetime = datetime.now() class Config: validate_assignment = True @root_validator def number_validator(cls, values): values["updated_at"] = datetime.now() return values if __name__ == '__main__': bar = Foo() print(bar.dict()) sleep(5) bar.data = "New data" print(bar.dict()) and the output: { 'data': 'Some data', 'created_at': datetime.datetime(2022, 7, 31, 10, 41, 13, 176243), 'updated_at': datetime.datetime(2022, 7, 31, 10, 41, 13, 179253) } { 'data': 'New data', 'created_at': datetime.datetime(2022, 7, 31, 10, 41, 13, 176243), 'updated_at': datetime.datetime(2022, 7, 31, 10, 41, 18, 184983) } You can see that there is a difference in microseconds after object creation. If this is a problem you can set: updated_at: Optional[datetime] = None and modify the validator like this: @root_validator def number_validator(cls, values): if values["updated_at"]: values["updated_at"] = datetime.now() else: values["updated_at"] = values["created_at"] return values and the new output: { 'data': 'Some data', 'created_at': datetime.datetime(2022, 7, 31, 10, 54, 33, 715379), 'updated_at': datetime.datetime(2022, 7, 31, 10, 54, 33, 715379) } { 'data': 'New data', 'created_at': datetime.datetime(2022, 7, 31, 10, 54, 33, 715379), 'updated_at': datetime.datetime(2022, 7, 31, 10, 54, 38, 728778) }
7
5
73,066,883
2022-7-21
https://stackoverflow.com/questions/73066883/display-html-table-from-xml-file-over-web-browser-without-using-any-software-or
I am a very new to HTML and javascript. Have come across many questions with regard to my problem and after struggling a lot to find a solution, I am posting this question. Problem statment: I have an xml which I am trying to convert it to HTML so that I can display it over web browser in a table format. <?xml version="1.0" encoding="UTF-8"?> <chapter name="ndlkjfidm" date="dfhkryi"> <edge name="nnn" P="ffgnp" V="0.825" T="125c"> <seen name="seen1"> </seen> <seen name="ABB"> <mob name="adas_jk3" type="entry"> <nod name="VSS" voltage="0.000000" vector="!ENXB" active_input="NA" active_ouput="ENX"> <temp name="ADS_DEFAULT_temp_LOW"> <raw nod="VBP" alt="7.05537e-15" jus="74.4619" /> <raw nod="VDDC" alt="4.63027e-10" jus="115.178" /> <raw nod="VDDP" alt="6.75316e-10" jus="115.178" /> <raw nod="VSS" alt="5.04568e-14" jus="9.63935" /> <raw nod="VBN" alt="1.21047e-14" jus="192.973" /> <raw nod="VBP" trip="4.58141e-12" /> <raw nod="VDDC" trip="5.19549e-09" /> <raw nod="VDDP" trip="5.49458e-08" /> <raw nod="VSS" trip="6.00563e-08" /> <raw nod="VBN" trip="8.94924e-11" /> </temp> </nod> <nod name="VSS" voltage="0.000000" vector="ENXB" active_input="NA" active_ouput="ENX"> <temp name="ADS_DEFAULT_temp_HIGH"> <raw nod="VBP" alt="7.05537e-15" jus="74.4644" /> <raw nod="VDDC" alt="1.52578e-14" jus="311.073" /> <raw nod="VDDP" alt="1.00188e-14" jus="521.709" /> <raw nod="VSS" alt="4.03483e-14" jus="11.1118" /> <raw nod="VBN" alt="1.21047e-14" jus="192.975" /> <raw nod="VBP" trip="4.58141e-12" /> <raw nod="VDDC" trip="1.29302e-12" /> <raw nod="VDDP" trip="4.92723e-08" /> <raw nod="VSS" trip="4.91887e-08" /> <raw nod="VBN" trip="8.95356e-11" /> </temp> </nod> </mob> </seen> </edge> </chapter> Below are the links that I have tried. https://www.w3schools.com/xml/ajax_applications.asp https://www.geeksforgeeks.org/read-xml-file-and-print-the-details-as-tabular-data-by-using-javascript/ Loop holes: I can not install anything (sudo apt install apache2 etc..) or any software (xammp etc) Because of which the javascript does not display the table. Tried with pandas as well but do not know how to display it over web browser and the xml too is very huge ( ~1GB) Can someone please suggest me on how to get this done using any language combinations. python with HTML and javascript python with json and HTML HTML with javascript
This should solve your issue (as asked), using pandas: import pandas as pd xml_data = '''<?xml version="1.0" encoding="UTF-8"?> <chapter name="ndlkjfidm" date="dfhkryi"> <edge name="nnn" P="ffgnp" V="0.825" T="125c"> <seen name="seen1"> </seen> <seen name="ABB"> <mob name="adas_jk3" type="entry"> <nod name="VSS" voltage="0.000000" vector="!ENXB" active_input="NA" active_ouput="ENX"> <temp name="ADS_DEFAULT_temp_LOW"> <raw nod="VBP" alt="7.05537e-15" jus="74.4619" /> <raw nod="VDDC" alt="4.63027e-10" jus="115.178" /> <raw nod="VDDP" alt="6.75316e-10" jus="115.178" /> <raw nod="VSS" alt="5.04568e-14" jus="9.63935" /> <raw nod="VBN" alt="1.21047e-14" jus="192.973" /> <raw nod="VBP" trip="4.58141e-12" /> <raw nod="VDDC" trip="5.19549e-09" /> <raw nod="VDDP" trip="5.49458e-08" /> <raw nod="VSS" trip="6.00563e-08" /> <raw nod="VBN" trip="8.94924e-11" /> </temp> </nod> <nod name="VSS" voltage="0.000000" vector="ENXB" active_input="NA" active_ouput="ENX"> <temp name="ADS_DEFAULT_temp_HIGH"> <raw nod="VBP" alt="7.05537e-15" jus="74.4644" /> <raw nod="VDDC" alt="1.52578e-14" jus="311.073" /> <raw nod="VDDP" alt="1.00188e-14" jus="521.709" /> <raw nod="VSS" alt="4.03483e-14" jus="11.1118" /> <raw nod="VBN" alt="1.21047e-14" jus="192.975" /> <raw nod="VBP" trip="4.58141e-12" /> <raw nod="VDDC" trip="1.29302e-12" /> <raw nod="VDDP" trip="4.92723e-08" /> <raw nod="VSS" trip="4.91887e-08" /> <raw nod="VBN" trip="8.95356e-11" /> </temp> </nod> </mob> </seen> </edge> </chapter> ''' One option to read the xml would be: df = pd.read_xml(xml_data) # df html = df.to_html() print(html) Result: <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>name</th> <th>P</th> <th>V</th> <th>T</th> <th>seen</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>nnn</td> <td>ffgnp</td> <td>0.825</td> <td>125c</td> <td>NaN</td> </tr> </tbody> </table> Of course, you can drill down into that xml: df = pd.read_xml(xml_data, xpath=".//nod") # df html = df.to_html() print(html) This would result in: <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>name</th> <th>voltage</th> <th>vector</th> <th>active_input</th> <th>active_ouput</th> <th>temp</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>VSS</td> <td>0.0</td> <td>!ENXB</td> <td>NaN</td> <td>ENX</td> <td>NaN</td> </tr> <tr> <th>1</th> <td>VSS</td> <td>0.0</td> <td>ENXB</td> <td>NaN</td> <td>ENX</td> <td>NaN</td> </tr> </tbody> </table> Or even: df = pd.read_xml(xml_data, xpath=".//raw") # df html = df.to_html() print(html) Returning: <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>nod</th> <th>alt</th> <th>jus</th> <th>trip</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>VBP</td> <td>7.055370e-15</td> <td>74.46190</td> <td>NaN</td> </tr> <tr> <th>1</th> <td>VDDC</td> <td>4.630270e-10</td> <td>115.17800</td> <td>NaN</td> </tr> <tr> <th>2</th> <td>VDDP</td> <td>6.753160e-10</td> <td>115.17800</td> <td>NaN</td> </tr> <tr> <th>3</th> <td>VSS</td> <td>5.045680e-14</td> <td>9.63935</td> <td>NaN</td> </tr> <tr> <th>4</th> <td>VBN</td> <td>1.210470e-14</td> <td>192.97300</td> <td>NaN</td> </tr> <tr> <th>5</th> <td>VBP</td> <td>NaN</td> <td>NaN</td> <td>4.581410e-12</td> </tr> <tr> <th>6</th> <td>VDDC</td> <td>NaN</td> <td>NaN</td> <td>5.195490e-09</td> </tr> <tr> <th>7</th> <td>VDDP</td> <td>NaN</td> <td>NaN</td> <td>5.494580e-08</td> </tr> <tr> <th>8</th> <td>VSS</td> <td>NaN</td> <td>NaN</td> <td>6.005630e-08</td> </tr> <tr> <th>9</th> <td>VBN</td> <td>NaN</td> 
<td>NaN</td> <td>8.949240e-11</td> </tr> <tr> <th>10</th> <td>VBP</td> <td>7.055370e-15</td> <td>74.46440</td> <td>NaN</td> </tr> <tr> <th>11</th> <td>VDDC</td> <td>1.525780e-14</td> <td>311.07300</td> <td>NaN</td> </tr> <tr> <th>12</th> <td>VDDP</td> <td>1.001880e-14</td> <td>521.70900</td> <td>NaN</td> </tr> <tr> <th>13</th> <td>VSS</td> <td>4.034830e-14</td> <td>11.11180</td> <td>NaN</td> </tr> <tr> <th>14</th> <td>VBN</td> <td>1.210470e-14</td> <td>192.97500</td> <td>NaN</td> </tr> <tr> <th>15</th> <td>VBP</td> <td>NaN</td> <td>NaN</td> <td>4.581410e-12</td> </tr> <tr> <th>16</th> <td>VDDC</td> <td>NaN</td> <td>NaN</td> <td>1.293020e-12</td> </tr> <tr> <th>17</th> <td>VDDP</td> <td>NaN</td> <td>NaN</td> <td>4.927230e-08</td> </tr> <tr> <th>18</th> <td>VSS</td> <td>NaN</td> <td>NaN</td> <td>4.918870e-08</td> </tr> <tr> <th>19</th> <td>VBN</td> <td>NaN</td> <td>NaN</td> <td>8.953560e-11</td> </tr> </tbody> </table> The following pandas documentation might be helpful: https://pandas.pydata.org/docs/dev/reference/api/pandas.read_xml.html and https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_html.html
4
5
73,126,494
2022-7-26
https://stackoverflow.com/questions/73126494/how-to-keep-jupyter-kernel-alive-inside-vscode-remote-container
Question: How can I disconnect, then reconnect to a vscode dev container without killing the ipynb kernel within my workspace? Background: I access my jupyter notebook inside a vscode dev container in order to have reproducibility of my project-specific environment. I connect to the container host machine on my laptop. Upon re-opening my vscode workspace after reconnecting to the container, my ipynb kernel is dead and all notebook computation must be repeated.
Try to use a Jupyter server instead. You can refer to this issue about using the 'remote' server to control your kernel lifetime for details.
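To make that concrete, here is a rough sketch of the idea; the port, log path and the exact VS Code command name are assumptions that depend on your setup and Jupyter extension version:
# inside the dev container, start a standalone server that is not tied to the VS Code window
nohup jupyter notebook --no-browser --port=8888 > /tmp/jupyter.log 2>&1 &
# the log prints a http://127.0.0.1:8888/?token=... URL to connect with
Then, in VS Code, use the Jupyter extension's "Specify Jupyter Server for Connections" / "Select Jupyter Server" command (the name varies across versions) and paste that URL. Because the server process lives outside the VS Code session, the kernel keeps running and can be reattached after you disconnect and reconnect to the container.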
4
4
73,124,895
2022-7-26
https://stackoverflow.com/questions/73124895/stacked-and-grouped-barchart
I have this data set import pandas as pd import plotly.express as px elements = pd.DataFrame(data={"Area": ["A", "A", "A", "B", "B", "C", "C", "C"], "Branch": ["a1", "f55", "j23", "j99", "ci2", "p21", "o2", "q35"], "Good": [68, 3, 31, 59, 99, 86, 47, 47], "Neutral": [48, 66, 84, 4, 83, 76, 6, 89],"Bad": [72, 66, 50, 83, 29, 54, 84, 55]}) Area Branch Good Neutral Bad 0 A a1 68 48 72 1 A f55 3 66 66 2 A j23 31 84 50 3 B j99 59 4 83 4 B ci2 99 83 29 5 C p21 86 76 54 6 C o2 47 6 84 7 C q35 47 89 55 and I'm trying to plot it and get something that looks like this, stacked and grouped with labels, so I tried this fig_elements = px.bar(elements, x='Branch', y=["Good", "Neutral", "Bad"], orientation="v", template="plotly_dark", facet_col='Area') fig_elements.update_layout(plot_bgcolor="rgba(0,0,0,0)", xaxis=(dict(showgrid=False)), yaxis=(dict(showgrid=False)), barmode="stack") but I can't add labels on the bars, and each area shows all branches, not only the branches that belong to it. How can I fix that and add the labels?
import plotly.graph_objects as go df = elements.melt(id_vars=['Area', 'Branch'], value_vars=['Good', 'Neutral', 'Bad'], var_name='Rating') df ### Area Branch Rating value 0 A a1 Good 68 1 A f55 Good 3 2 A j23 Good 31 3 B j99 Good 59 4 B ci2 Good 99 5 C p21 Good 86 6 C o2 Good 47 7 C q35 Good 47 8 A a1 Neutral 48 9 A f55 Neutral 66 10 A j23 Neutral 84 11 B j99 Neutral 4 12 B ci2 Neutral 83 Plot rating_color = ['#17BEBB', '#FFD97D', '#EE6055'] fig = go.Figure() fig.update_layout(barmode='stack', width=900, height=800, uniformtext_minsize=8, uniformtext_mode='hide') for r, c in zip(df['Rating'].unique(), rating_color): df_plot = df[df['Rating'] == r] fig.add_trace( go.Bar( x=[df_plot['Area'], df_plot['Branch']], y=df_plot['value'], marker_color=c, name=r, text=df_plot['value'], width=0.45, textposition='auto') ) fig.show()
4
2
73,129,698
2022-7-26
https://stackoverflow.com/questions/73129698/can-paramspec-be-used-to-type-individual-arguments
I'm trying to type the following wrapper function. It takes another function and the function's arguments and runs it with some side effect. from typing import Callable, ParamSpec, TypeVar P = ParamSpec("P") R = TypeVar("R") def wrapper(func: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R: # Run some side effects return func(*args, **kwargs) def f(a: int, b: int) -> int: return a + b # Prints 3 with the side effect. print(wrapper(f, 1, 2)) The above snippet works and mypy is happy. However, I don't like the black boxes that *args and **kwargs are and would like to make wrapper less generic. How do I type something similar to this: from typing import Callable, ParamSpec, TypeVar P = ParamSpec("P") R = TypeVar("R") # How do I express the fact that a and b are the input params of func? def wrapper(func: Callable[P, R], a: ?, b: ?) -> R: # Run some side effects return func(a, b) def f(a: int, b: int) -> int: return a + b # Mypy should be happy here! print(wrapper(f, 1, 2)) Is this possible?
When you know the arguments that will be passed to func, you don't need ParamSpec; ordinary TypeVars will do. AType = TypeVar('AType') BType = TypeVar('BType') def wrapper(func: Callable[[AType, BType], R], a: AType, b: BType) -> R: return func(a, b) If there were additional arguments, then you would use Concatenate to combine the known arguments with ParamSpec. AType = TypeVar('AType') BType = TypeVar('BType') P = ParamSpec('P') def wrapper(func: Callable[Concatenate[AType, BType, P], R], a: AType, b: BType, *args: P.args, **kwargs: P.kwargs) -> R: return func(a, b, *args, **kwargs)
5
5
73,129,334
2022-7-26
https://stackoverflow.com/questions/73129334/filter-pandas-dataframe-by-multiple-columns-using-tuple-from-list-of-tuples
So I have been referencing this previous question posted here Filter pandas dataframe from tuples. But the problem I am trying to solve is slightly different. I have a list of tuples. Each tuple represents a different set of filters I would like to apply to a dataframe accross multiple columns, so I can isolate the records to perform additional tasks. Whenever I try and filter by a single tuple within the list of tuples, the dataframe I get back has no records.. If I break the tuple values out in a very long form way it works fine. Not sure what I am missing or not thinking about here.. Using the same example from the post I have been referencing.... AB_col = [(0,230), (10,215), (15, 200), (20, 185), (40, 177), (0,237), (10,222), (15, 207), (20, 192), (40, 184)] sales = [{'account': 'Jones LLC', 'A': 0, 'B': 230, 'C': 140}, {'account': 'Alpha Co', 'A': 20, 'B': 192, 'C': 215}, {'account': 'Blue Inc', 'A': 50, 'B': 90, 'C': 95 }] df = pd.DataFrame(sales) example dataframe The answer from the other question df = df[df[["A","B"]].apply(tuple, 1).isin(AB_col)] Which returns example results However, I want to only get one record back, that matches the first tuple in the list of tuples. So I tried this df[df[["A"]].apply(tuple,1).isin(AB_col[0])] But get no records returned my modification results However, I can do this which gets me the results I want, but when I have essentially a list of tuples that is every combination of column values to use a filters for different levels of calculations, this seems like way too much code to have to use to product the desired results df[(df['A']==AB_col[0][0]) & (df['B']==AB_col[0][1])] Which gets me results I want long form results but what i need Is there a way to get to this same result more efficiently? Clarification Update: I don't just need to select all rows by only ever using the first tuple in the list of tuples. I will need ultimately iterate through my list of tuples, and use each tuple to filter the target dataframe to perform additional actions. Also I'm not stuck on keeping a list of tuples as the filters I need to iterate through, I can change it to another form if easier/more performant. Example: AB_col = [(0,230),(20,192)] Iteration 1: filter DF by (0,230) -> do stuff on returned results Iteration 2: filter DF by (20,192) -> do stuff on returned results Iteration 3: filter DF by ... and so on until I have iterated through my list of tuple filters. Thanks!
TL;DR: use df[df[["A","B"]].apply(tuple, 1) == AB_col[0]]. I think you might be overthinking the matter. Let's dissect the code a bit: df[["A","B"]].apply(tuple, 1) # or: df[["A","B"]].apply(tuple, axis=1) # meaning: create tuples for each row 0 (0, 230) 1 (20, 192) 2 (50, 90) dtype: object So this just gets us A and B as tuples. Next, applying df.isin is a way to look whether any of these tuples exist inside your list AB_col. It's a consecutive evaluation: (0, 230) in AB_col # True (20, 192) in AB_col # True (50, 90) in AB_col # False # no different than: 1 in [1,2,3] # True The resulting Series with booleans is then used to select from the df. Hence, the result (first two rows, not the third): account A B C 0 Jones LLC 0 230 140 1 Alpha Co 20 192 215 All you want to do, is return the rows that match the first element from the list AB_col: I want to only get one record back, that matches the first tuple in the list of tuples. So, that's easy enough. We simply need ==: first_elem = AB_col[0] df[df[["A","B"]].apply(tuple, 1) == first_elem] account A B C 0 Jones LLC 0 230 140
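Since you mention iterating over every tuple in AB_col, a minimal sketch of that loop (the body is just a placeholder for whatever per-filter work you need) could be:
# one (A, B) tuple per row, computed once
row_tuples = df[["A", "B"]].apply(tuple, 1)

for filt in AB_col:
    subset = df[row_tuples == filt]
    if subset.empty:
        continue
    # ... do your additional actions on `subset` here
    print(filt, len(subset))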
5
1
73,055,748
2022-7-20
https://stackoverflow.com/questions/73055748/how-to-draw-bubbles-and-turn-them-animated-into-circles
I am trying to make a python program to draw a line and turn it into a circle with an animation using pygame, yet I haven't even gotten through the drawing-the-line code. I have noticed that python is changing the wrong or both items in a list that contains the starting point when the user presses down the left click, stored as the first item, and the current point of the user's mouse as the second. This is generally what I want it to do: https://youtu.be/vlqZ0LubXCA Here are the outcomes with and without the lines that update the 2nd item: with: without: As you can see, or read in the descriptions, the line is necessary to cover the previous frame. I have marked the lines that change the outcome with arrows: import pygame, PIL, random print('\n') #data bubbles = [] color_options = [[87, 184, 222]] pressed = False released = False bubline_start = [] background = [50, 25, 25] size = [500, 500] #pygame display = pygame.display.set_mode(size) pygame.init() #functions def new_bub_color(): color_index = random.randint(0, len(color_options)-1) lvl = random.randrange(85, 115) bub_color = [] for val in color_options[color_index]: bub_color.append(val*(lvl/100)) return bub_color def bubble_line(): global display, pressed, bubline_start, released, bubbles, color_options if len(bubbles) > 0: if not bubbles[-1][0] == 0: #first frame of click bub_color = new_bub_color() bubbles.append([0, bub_color, [bubline_start, list(pygame.mouse.get_pos())]]) pygame.draw.line(display, bub_color, bubline_start, pygame.mouse.get_pos()) else: #draw after drags pygame.draw.line(display, bubbles[-1][1], bubbles[-1][2][0], list(pygame.mouse.get_pos())) bubbles[-1][2][1] = list(pygame.mouse.get_pos())# <-- HERE else: #first bubble bub_color = new_bub_color() bubbles.append([0, bub_color, [bubline_start, list(pygame.mouse.get_pos())]]) pygame.draw.line(display, bub_color, bubline_start, pygame.mouse.get_pos()) if released: bubbles[-1][0] = 1 bubbles[-1][2][1] = list(pygame.mouse.get_pos())# <-- HERE released = False def cover_prev_frame(): global bubbles, background, size min_pos = [] max_pos = [] for bubble in bubbles: min_pos = bubble[2][0] max_pos = bubble[2][0] for point in bubble[2]: #x min and max if point[0] < min_pos[0]: min_pos[0] = point[0] elif point[0] > max_pos[0]: max_pos[0] = point[0] #y min and max if point[1] < min_pos[1]: min_pos[1] = point[1] elif point[1] > max_pos[1]: max_pos[1] = point[1] max_pos = [max_pos[0]-min_pos[0]+1, max_pos[1]-min_pos[1]+1] if type(background) == str: #image background later = True elif type(background) == list: #solid color background pygame.draw.rect(display, background, pygame.Rect(min_pos, max_pos)) while True: pygame.event.pump() events = pygame.event.get() for event in events: if event.type == pygame.QUIT: pygame.quit() elif event.type == pygame.MOUSEBUTTONDOWN and not pressed: bubline_start = list(pygame.mouse.get_pos()) pressed = True elif event.type == pygame.MOUSEBUTTONUP and pressed: pressed = False released = True cover_prev_frame() if pressed or released: bubble_line() try: pygame.display.update() except: break
if not bubbles[-1][0] == 0: is False as long as the mouse is not released. Therefore add many line segments, each starting at bubline_start and ending at the current mouse position. You must redraw the scene in each frame. bubbles is a list of bubbles and each bubble has a list of points. Add a new point to the last bubble in the list while the mouse is held down. Start a new bubble when the mouse is pressed and end a bubble when it is released. This greatly simplifies your code. Minimal example import pygame, random size = [500, 500] pygame.init() display = pygame.display.set_mode(size) clock = pygame.time.Clock() pressed = False bubbles = [] background = [50, 25, 25] run = True while run: clock.tick(100) events = pygame.event.get() for event in events: if event.type == pygame.QUIT: run = False elif event.type == pygame.MOUSEBUTTONDOWN: start_pos = list(event.pos) bubble_color = pygame.Color(0) bubble_color.hsla = (random.randrange(0, 360), 100, 50, 100) bubbles.append((bubble_color, [start_pos])) pressed = True elif event.type == pygame.MOUSEMOTION and pressed: new_pos = list(event.pos) if len(bubbles[-1][1]) > 0 and bubbles[-1][1] != new_pos: bubbles[-1][1].append(new_pos) elif event.type == pygame.MOUSEBUTTONUP: pressed = False end_pos = list(event.pos) if len(bubbles[-1][1]) > 0 and bubbles[-1][1] != end_pos: bubbles[-1][1].append(list(event.pos)) display.fill(background) for i, bubble in enumerate(bubbles): if len(bubble[1]) > 1: closed = not pressed or i < len(bubbles) - 1 pygame.draw.lines(display, bubble[0], closed, bubble[1], 3) pygame.display.update() pygame.quit() For the animation I propose to create a class that represents a bubble and a method animate that slowly turns the polygon into a circle. Minimal example import pygame, random size = [500, 500] pygame.init() display = pygame.display.set_mode(size) clock = pygame.time.Clock() class Bubble: def __init__(self, start): self.color = pygame.Color(0) self.color.hsla = (random.randrange(0, 360), 100, 50, 100) self.points = [list(start)] self.closed = False self.finished = False def add_point(self, point, close): self.points.append(list(point)) self.closed = close if self.closed: x_, y_ = list(zip(*self.points)) x0, y0, x1, y1 = min(x_), min(y_), max(x_), max(y_) rect = pygame.Rect(x0, y0, x1-x0, y1-y0) self.center = rect.center self.radius = max(*rect.size) // 2 def animate(self): if self.closed and not self.finished: cpt = pygame.math.Vector2(self.center) + (0.5, 0.5) self.finished = True for i, p in enumerate(self.points): pt = pygame.math.Vector2(p) v = pt - cpt l = v.magnitude() if l + 0.5 < self.radius: self.finished = False v.scale_to_length(min(self.radius, l+0.5)) pt = cpt + v self.points[i] = [pt.x, pt.y] def draw(self, surf): if self.finished: pygame.draw.circle(surf, self.color, self.center, self.radius, 3) elif len(self.points) > 1: pygame.draw.lines(surf, self.color, self.closed, self.points, 3) bubbles = [] pressed = False background = [50, 25, 25] run = True while run: clock.tick(100) events = pygame.event.get() for event in events: if event.type == pygame.QUIT: run = False elif event.type == pygame.MOUSEBUTTONDOWN: bubbles.append(Bubble(event.pos)) pressed = True elif event.type == pygame.MOUSEMOTION and pressed: bubbles[-1].add_point(event.pos, False) elif event.type == pygame.MOUSEBUTTONUP: bubbles[-1].add_point(event.pos, True) pressed = False for bubble in bubbles: bubble.animate() display.fill(background) for bubble in bubbles: bubble.draw(display) pygame.display.update() pygame.quit()
6
18
73,063,362
2022-7-21
https://stackoverflow.com/questions/73063362/is-there-a-built-in-way-to-convert-datetimes-to-cftime-in-xarray
I would like to plot two time series, one of which is in cftime and the other in datetime. One possibility is to convert cftime to datetime, but this might give strange results for nonstandard cftime calendars (e.g. NoLeap). As such, I am trying to convert the datetime to cftime. I can brute-force it as follows, but is there a built-in method available? >>> import pandas as pd >>> import xarray as xr >>> >>> da = xr.DataArray( ... [1, 2], coords={"time": pd.to_datetime(["2000-01-01", "2000-02-02"])}, dims=["time"] ... ) >>> print(da.time) <xarray.DataArray 'time' (time: 2)> array(['2000-01-01T00:00:00.000000000', '2000-02-02T00:00:00.000000000'], dtype='datetime64[ns]') Coordinates: * time (time) datetime64[ns] 2000-01-01 2000-02-02 >>> >>> >>> import cftime >>> >>> >>> def datetime_to_cftime(dates, kwargs={}): ... return [ ... cftime.datetime( ... date.dt.year, ... date.dt.month, ... date.dt.day, ... date.dt.hour, ... date.dt.minute, ... date.dt.second, ... date.dt.microsecond, ... **kwargs ... ) ... for date in dates ... ] ... >>> datetime_to_cftime(da.time) [cftime.datetime(2000, 1, 1, 0, 0, 0, 0, calendar='standard', has_year_zero=False), cftime.datetime(2000, 2, 2, 0, 0, 0, 0, calendar='standard', has_year_zero=False)]
Indeed you might consider using DataArray.convert_calendar. For example if you would like to convert datetime64 values to cftime.DatetimeNoLeap objects, you could do something like this: >>> da.convert_calendar("noleap") <xarray.DataArray (time: 2)> array([1., 2.]) Coordinates: * time (time) object 2000-01-01 00:00:00 2000-02-02 00:00:00 This method is new as of xarray version 0.20.0.
5
3
73,112,948
2022-7-25
https://stackoverflow.com/questions/73112948/snakemake-run-directive-produces-no-error-message
When I use the run directive in snakemake (using python code) it doesn't produce any kind of error message for troubleshooting. Is this desired behavior? Am I missing something? Here a minimal example using snakemake 7.8.3 and python 3.9.13. I invoked snakemake with the -p option which in shell directive outputs the exact code as passed to the shell (but doesn't do anything for run directive I guess). Snakefile: def useless_function(): return[thisVariableAlsoDoesntExist] rule all: input: "final.txt" rule test: output: "final.txt" run: print(thisVariableDoesNotExist) useless_function() Stdout: Building DAG of jobs... Using shell: /usr/bin/bash Provided cores: 1 (use --cores to define parallelism) Rules claiming more threads will be scaled down. Job stats: job count min threads max threads ----- ------- ------------- ------------- all 1 1 1 test 1 1 1 total 2 1 1 Select jobs to execute... [Mon Jul 25 18:59:13 2022] rule test: output: final.txt jobid: 1 reason: Missing output files: final.txt resources: tmpdir=/tmp Shutting down, this might take some time. Exiting because a job execution failed. Look above for error message Complete log: .snakemake/log/2022-07-25T185913.188760.snakemake.log Expected error message (when function and print command are executed directly on python console): >>> print(thisVariableDoesNotExist) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'thisVariableDoesNotExist' is not defined >>> useless_function() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in useless_function NameError: name 'thisVariableAlsoDoesntExist' is not defined
I suspect you are hitting this recent bug https://github.com/snakemake/snakemake/issues/1698. If that is the case, you can downgrade to v7.6.2 or work around it, i.e. bear with it or wrap the code in run in a self-contained script that you execute via shell. The latter is not a bad solution anyway, since it keeps the code nicely isolated.
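As an illustration of the wrap-it-in-a-script workaround, here is a minimal sketch; the file name and its contents are placeholders, not part of the original answer:
# make_final.py: the Python code that used to live in the run block
import sys

out_path = sys.argv[1]
with open(out_path, "w") as f:
    f.write("done\n")
and in the Snakefile, replace the run directive with a shell call, so any Python traceback is reported normally:
rule test:
    output:
        "final.txt"
    shell:
        "python make_final.py {output}"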
5
5
73,122,817
2022-7-26
https://stackoverflow.com/questions/73122817/initialize-dataframe-with-two-columns-which-have-one-of-the-them-all-zeros
I have a list and I would like to convert it to a pandas dataframe. In the second column, I want all zeros, but I got an "object of type 'int' has no len()" error. What I did is this: df = pd.DataFrame([all_equal_timestamps['A'], 0], columns=['data','label']) How can I add a second column with all zeros to this dataframe in the easiest manner, and why did the code above give me this error?
Not sure what is in all_equal_timestamps, so I presume it's a list of elements. Do you mean to get this result? import pandas as pd all_equal_timestamps = {'A': ['1234', 'aaa', 'asdf']} df = pd.DataFrame(all_equal_timestamps['A'], columns=['data']).assign(label=0) # df['label'] = 0 print(df) Output: data label 0 1234 0 1 aaa 0 2 asdf 0 If you're creating a DataFrame with a list of lists, you'd expect something like this df = pd.DataFrame([ all_equal_timestamps['A'], '0'*len(all_equal_timestamps['A']) ], columns=['data', 'label', 'anothercol']) print(df) Output: data label anothercol 0 1234 aaa asdf 1 0 0 0
4
1
73,121,956
2022-7-26
https://stackoverflow.com/questions/73121956/pandas-can-i-duplicate-rows-and-add-a-column-with-the-values-of-a-list
I have a dataframe like this: ID value repeat ratio 0 0 IDx10 6 0.5 1 1 IDx11 7 1.5 2 2 IDx12 8 2.5 and I have a list like this: l = [1,2] What I want to do is duplicate every row as many times as the length of the list and, in every new row, put the corresponding value of the list, getting a dataframe like this: ID value repeat ratio value 0 0 IDx10 6 0.5 1 1 0 IDx10 6 0.5 2 2 1 IDx11 7 1.5 1 3 1 IDx11 7 1.5 2 4 2 IDx12 8 2.5 1 5 2 IDx12 8 2.5 2
Let us do a cross merge: out = df.merge(pd.Series(l, name='value2'), how='cross') output: ID value repeat ratio value2 0 0 IDx12 6 0.5 1 1 0 IDx12 6 0.5 2 2 0 IDx12 6 0.5 3 3 1 IDx12 7 1.5 1 4 1 IDx12 7 1.5 2 5 1 IDx12 7 1.5 3 6 2 IDx12 8 2.5 1 7 2 IDx12 8 2.5 2 8 2 IDx12 8 2.5 3
4
7
73,118,895
2022-7-26
https://stackoverflow.com/questions/73118895/how-would-you-type-hint-dict-in-python-with-constant-form-but-multiple-types
I want to type hint the return object of some_func function, which is always the same format. Is this correct? from typing import List, Dict def some_func() -> List[Dict[str, int, str, List[CustomObject]]]: my_list = [ {"ID": 1, "cargo": [CustomObject(), CustomObject()]}, {"ID": 2, "cargo": [CustomObject(), CustomObject()]}, {"ID": 2, "cargo": [CustomObject(), CustomObject()]} ] return my_list
One way of correctly type hinting it would look like this: from typing import List, Dict, Union def some_func() -> List[Dict[str, Union[int, List[CustomObject]]]]: my_list = [ {"ID": 1, "cargo": [CustomObject(), CustomObject()]}, {"ID": 2, "cargo": [CustomObject(), CustomObject()]}, {"ID": 2, "cargo": [CustomObject(), CustomObject()]} ] return my_list For a brief explanation: when type annotating a dict, you can only have two arguments, the first being the type of any given key, and the second the type of any given value. Since the keys are always strings, we keep Dict[str, and our values can be either an integer or a list of CustomObject; to represent different possibilities in type annotations we use Union, and so in the end we get: Dict[str, Union[int, List[CustomObject]]] As some additional notes, in Python 3.9 or above you may replace Dict imported from typing with the builtin dict, and in Python 3.10 or above, unions can be represented as type | type and you can replace List with the builtin list. For a cleaner result you may want to use a TypedDict; the overall result would be: from typing import TypedDict class customDict(TypedDict): ID: int cargo: list[CustomObject] def some_func() -> list[customDict]: my_list = [ {"ID": 1, "cargo": [CustomObject(), CustomObject()]}, {"ID": 2, "cargo": [CustomObject(), CustomObject()]}, {"ID": 2, "cargo": [CustomObject(), CustomObject()]} ] return my_list Each property of the typed dict represents a key of the dicts in my_list, and its annotation represents the type of the value related to that key
4
4
73,116,647
2022-7-26
https://stackoverflow.com/questions/73116647/why-cant-i-install-a-python-package-with-the-python-requirement-3-8-3-11-i
I'm having an issue installing dependencies into my Poetry project. If I run poetry new (as described in https://python-poetry.org/docs/basic-usage/), I can create a new project: $ poetry new scipy-test Created package scipy_test in scipy-test My project structure looks like this after I delete a few files not needed for this reproduction: $ tree . . β”œβ”€β”€ pyproject.toml └── scipy_test └── __init__.py 1 directory, 2 files My pyproject.toml file looks like this: [tool.poetry] name = "scipy-test" version = "0.1.0" description = "" authors = ["Your Name <[email protected]>"] [tool.poetry.dependencies] python = "^3.9" [tool.poetry.dev-dependencies] pytest = "^5.2" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" When I run poetry add scipy, it tries to install the latest version of SciPy, which right now is 1.8.1. I get the following error: $ poetry add scipy Creating virtualenv scipy-test-4EDXm154-py3.9 in /home/mattwelke/.cache/pypoetry/virtualenvs Using version ^1.8.1 for scipy Updating dependencies Resolving dependencies... (0.1s) SolverProblemError The current project's Python requirement (>=3.9,<4.0) is not compatible with some of the required packages Python requirement: - scipy requires Python >=3.8,<3.11, so it will not be satisfied for Python >=3.11,<4.0 Because no versions of scipy match >1.8.1,<2.0.0 and scipy (1.8.1) requires Python >=3.8,<3.11, scipy is forbidden. So, because scipy-test depends on scipy (^1.8.1), version solving failed. at ~/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/puzzle/solver.py:241 in _solve 237β”‚ packages = result.packages 238β”‚ except OverrideNeeded as e: 239β”‚ return self.solve_in_compatibility_mode(e.overrides, use_latest=use_latest) 240β”‚ except SolveFailure as e: β†’ 241β”‚ raise SolverProblemError(e) 242β”‚ 243β”‚ results = dict( 244β”‚ depth_first_search( 245β”‚ PackageNode(self._package, packages), aggregate_package_nodes β€’ Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties For scipy, a possible solution would be to set the `python` property to ">=3.9,<3.11" https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies, https://python-poetry.org/docs/dependency-specification/#using-environment-markers I'm interpreting the line python = "^3.9" in my pyproject.toml file to mean "this project this meant to run with Python 3.9 (any patch version)". And I'm interpreting the package's Python requirement of ">=3.8,<3.11" to mean "this library requires Python 3.8, 3.9, or Python 3.10 to use it". So if I put these two things together, it seems to me like they should be compatible with each other. The error message does include a hint to help me solve it. It says: For scipy, a possible solution would be to set the `python` property to ">=3.8,<3.11" I agree that this would solve the issue. It would make my Poetry project's Python version match the Python requirement of the dependency exactly. I found that if I changed my pyproject.toml file this way, I was able to install the dependency. But I don't want to allow my project to be executed with Python 3.8. I want to set its Python version to the latest version I'm actually using. So my preferred version is actually "^3.10" (if my understanding as described above is correct) because that should mean "you must be using any patch version of Python 3.10 to run this". 
If I change the line to python = "^3.10", I get the same kind of error I got before, except the hint in the error message mentions version 3.10 instead of 3.8: For scipy, a possible solution would be to set the `python` property to ">=3.10,<3.11" If I use this value, it works again, allowing me to install the dependency. And this time, it looks like it restricts my project to only being compatible with 3.10, as desired. But it looks a bit verbose. I still don't understand why setting it to "^3.9" (or "^3.10") didn't work. Is there something I'm missing here? If so, how would I change my pyproject.toml file to make it compatible with this dependency I want to add to my project?
The caret requirement you specify... [tool.poetry.dependencies] python = "^3.9" ...means "This Python code has compatibility of 3.9 <= python_version < 4" (^ restricts differently based on how you specify the version, using semantic versioning). This is a wider constraint than what your dependency scipy specifies, because it claims only compatibility up to 3.8 <= python_version < 3.11. To poetry, scipy does not meet your constraints because if python3.11 were out right now, your dependency constraint claims your code supports that Python version, whereas scipy's constraint would say it doesn't. For you, you probably just want something like a range to match something like scipy's (or narrow your range to fit inside scipy's). [tool.poetry.dependencies] python = ">=3.10, <3.11"
23
40
73,112,516
2022-7-25
https://stackoverflow.com/questions/73112516/arimaresults-object-has-no-attribute-plot-predict-error
In statsmodels I have this code from statsmodels.tsa.arima.model import ARIMA from statsmodels.graphics.tsaplots import plot_predict df1.drop(df1.columns.difference(['PTS']), 1, inplace=True) model = ARIMA(df1.PTS, order=(0, 15,0)) res = model.fit() res.plot_predict(start='2021-10-19', end='2022-04-05') plt.show() However, when it gets to plt.show I get "ARIMAResults object has no attribute plot_predict". What do I do to fix this?
The .plot_predict() method no longer exists with the changes to the ARIMA classes in statsmodels version 13. So, just use the plot_predict() function that you already imported in your code. Here is an example with a different dataset: import matplotlib.pyplot as plt import pandas as pd import statsmodels.api as sm from statsmodels.graphics.tsaplots import plot_predict from statsmodels.tsa.arima.model import ARIMA dta = sm.datasets.sunspots.load_pandas().data[['SUNACTIVITY']] dta.index = pd.date_range(start='1700', end='2009', freq='A') res = ARIMA(dta, order=(0,2,0)).fit() fig, ax = plt.subplots() ax = dta.loc['1950':].plot(ax=ax) plot_predict(res, '1990', '2012', ax=ax) plt.show()
4
6
73,114,693
2022-7-25
https://stackoverflow.com/questions/73114693/complex-list-comparisons-in-python
I want to do a complex list comparison with python. I want to see if listB contains all of the items from listA and if they are in the same order. But I do not care if listB has extra items or interleaved items. Examples: listA = ['A','B','C','D','E'] listB = [':','A','*','B','C','D','E','`'] A, B, C, D, and E all appear in listB and are presented in the same order even though A and B have an item in between them and items at the start and end of listB. Extra complicated: listA = ['A','B','C','D','E'] listB = ['A','*','C','B','C','D','E'] A, B, C, D, and E all appear in listB and are presented in the same order even though A and B have two items in between them and one of those items happens to be something we are searching for. But since we are looking if A -> B is sequential and B -> C is sequential the fact that we also have C -> B -> C shouldn't matter. So, listA = ['A','B','C','D','E'] listB = [':','A','*','B','C','D','E','`'] Would be True listA = ['A','B','C','D','E'] listB = ['A','*','C','B','C','D','E'] Would be True But something like: listA = ['A','B','C','D','E'] listB = ['A','B','C','D','F'] or even listB = ['A','B','C','D'] Would be False If it get a False answer, I'd ideally like to be able to point to where the break in sequence happened -- i.e. E is missing.
Simple solution using a nested loop. Walk over listA and search the elements in listB in order. Should you fail at any point -> this is not a substring: def check(listA, listB): start = 0 for a in listA: for i in range(start, len(listB)): if a == listB[i]: start = i+1 break else: # triggered only if no break # print(f'{a} not found after position {start}') return False return True check('ABCDE', 'A*CBCDE') # True check('ABCDEF', 'A*CBCDE') # False check ('', '') # True check('ABA', 'ABBA') # True NB. Using strings here for clarity, but this works for any iterable. To get information on the non-found item, you can uncomment the print. Example: check('ABCA', 'ABBA') C not found after position 2 # False
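A more compact variant of the same idea (my addition, not part of the original answer) uses a single iterator over listB; note that, unlike the version above, it only returns True/False and cannot report which element broke the sequence:
    def check_iter(listA, listB):
        it = iter(listB)
        # "a in it" advances the iterator past the match, so order is enforced
        return all(a in it for a in listA)

    check_iter('ABCDE', 'A*CBCDE')   # True
    check_iter('ABCDEF', 'A*CBCDE')  # False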
4
2
73,103,953
2022-7-25
https://stackoverflow.com/questions/73103953/how-to-print-the-body-of-gmail-in-python-using-gmail-api
Hello everyone I'm attempting to use the Gmail API to print out specific emails from a sender. I've managed to do some research and watched some videos on how to get the sender and the subject printed off but for some reason, I cant get the body of the message to print off. I've looked through the Gmail API and haven't found anything to help with printing the body in text form. Any help with printing off the body of the email, please... service = build('gmail', 'v1', credentials=creds) results = service.users().messages().list(userId='me', labelIds=['INBOX'], q="from:specific email, is:unread").execute() messages = results.get('messages', []) if not messages: print("You have no New Messages.") else: message_count = 0 for message in messages: msg = service.users().messages().get(userId='me', id=message['id']).execute() message_count= message_count + 1 email_data= msg['payload']['headers'] for values in email_data: name = values["name"] if name == "From": from_name = values ["value"] print(from_name) subject= [j['value'] for j in email_data if j["name"]=="Subject"] print(subject) This code like I said pulls the specific email and prints the sender, and the subject all I'm missing is the body. I've tried following what was posted in this stackoverflow: How to retrieve the whole message body using Gmail API (python) But I couldn't manage to get it to work
In your script, how about the following modification? Modified script: service = build("gmail", "v1", credentials=creds) results = service.users().messages().list(userId="me", labelIds=["INBOX"], q="from:specific email, is:unread").execute() messages = results.get("messages", []) if not messages: print("You have no New Messages.") else: message_count = 0 for message in messages: msg = service.users().messages().get(userId="me", id=message["id"]).execute() message_count = message_count + 1 email_data = msg["payload"]["headers"] for values in email_data: name = values["name"] if name == "From": from_name = values["value"] print(from_name) subject = [j["value"] for j in email_data if j["name"] == "Subject"] print(subject) # I added the below script. for p in msg["payload"]["parts"]: if p["mimeType"] in ["text/plain", "text/html"]: data = base64.urlsafe_b64decode(p["body"]["data"]).decode("utf-8") print(data) In this case, please add import base64. When this script is run, both the text body and the HTML body are retrieved. For example, when you want to retrieve only the text body, please modify if p["mimeType"] in ["text/plain", "text/html"]: to if p["mimeType"] == "text/plain":. Reference: Method: users.messages.get
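One caveat worth adding (an assumption based on the Gmail API message format, not part of the original answer): not every message payload is multipart, so msg["payload"] may have no "parts" key at all; for single-part messages the body data usually sits directly on the payload. A hedged guard could look like this:
    payload = msg["payload"]
    parts = payload.get("parts", [payload])  # fall back to the payload itself for single-part messages
    for p in parts:
        if p.get("mimeType") in ["text/plain", "text/html"] and p.get("body", {}).get("data"):
            data = base64.urlsafe_b64decode(p["body"]["data"]).decode("utf-8")
            print(data)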
4
9
73,097,290
2022-7-24
https://stackoverflow.com/questions/73097290/separate-lines-from-handwritten-text-using-opencv-in-python
I am using the below script to try and separate handwritten text from the lines which the text was written on. Currently I am trying to select the lines. This seems to work well when the lines are solid but when the lines are a string of dots it becomes tricky. To try and get around this I have tried using dilate to make the dots into solid lines, but dilate is also making the text solid which then gets picked up as horizontal lines. I can tweak the kernel for each image but that is not a workable solution when dealing with thousands of images. Can someone suggest how I might make this work, please? Is this the best approach or is there a better approach for selecting these lines? Sample images import cv2 file_path = r'image.jpg' image = cv2.imread(file_path) # resize image if image is bigger than screen size print('before Dimensions : ', image.shape) if image.shape[0] > 1200: image = cv2.resize(image, None, fx=0.2, fy=0.2) print('after Dimensions : ', image.shape) result = image.copy() gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Applying dilation to make lines solid kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3)) dilation = cv2.dilate(thresh, kernel, iterations = 1) # Detect horizontal lines horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40,1)) detect_horizontal = cv2.morphologyEx(dilation, cv2.MORPH_OPEN, horizontal_kernel, iterations=2) cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: cv2.drawContours(result, [c], -1, (36,255,12), 2) cv2.imshow('1- gray', gray) cv2.imshow("2- thresh", thresh) cv2.imshow("3- detect_horizontal", detect_horizontal) cv2.imshow("4- result", result) cv2.waitKey(0) cv2.destroyAllWindows()
By finding contours, we can eliminate smaller ones by their area using cv2.contourArea. This will work under the assumption that the image contains dotted lines. Code: # read image, convert to grayscale and apply Otsu threshold img = cv2.imread('text.jpg') gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) th = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # create black background of same image shape black = np.zeros((img.shape[0], img.shape[1], 3), np.uint8) # find contours from threshold image contours = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0] # draw contours whose area is above certain value area_threshold = 7 for c in contours: area = cv2.contourArea(c) if area > area_threshold: black = cv2.drawContours(black,[c],0,(255,255,255),2) black: To refine this for more images, you can filter contours using some statistical measures (like mean, median, etc.)
4
4
73,097,741
2022-7-24
https://stackoverflow.com/questions/73097741/how-to-merge-two-rgba-images
I'm trying to merge two RGBA images (with a shape of (h,w,4)), taking into account their alpha channels. Example : What I've tried I tried to do this using opencv for that, but I getting some strange pixels on the output image. Images Used: and import cv2 import numpy as np import matplotlib.pyplot as plt image1 = cv2.imread("image1.png", cv2.IMREAD_UNCHANGED) image2 = cv2.imread("image2.png", cv2.IMREAD_UNCHANGED) mask1 = image1[:,:,3] mask2 = image2[:,:,3] mask2_inv = cv2.bitwise_not(mask2) mask2_bgra = cv2.cvtColor(mask2, cv2.COLOR_GRAY2BGRA) mask2_inv_bgra = cv2.cvtColor(mask2_inv, cv2.COLOR_GRAY2BGRA) # output = image2*mask2_bgra + image1 output = cv2.bitwise_or(cv2.bitwise_and(image2, mask2_bgra), cv2.bitwise_and(image1, mask2_inv_bgra)) output[:,:,3] = cv2.bitwise_or(mask1, mask2) plt.figure(figsize=(12,12)) plt.imshow(cv2.cvtColor(output, cv2.COLOR_BGRA2RGBA)) plt.axis('off') Output : So what I figured out is that I'm getting those weird pixels because I used cv2.bitwise_and function (Which btw works perfectly with binary alpha channels). I tried using different approaches Question Is there an approach to do this (While keeping the output image as an 8bit image).
I was able to obtain the expected result in 2 stages. # Read both images preserving the alpha channel hh1 = cv2.imread(r'C:\Users\524316\Desktop\Stack\house.png', cv2.IMREAD_UNCHANGED) hh2 = cv2.imread(r'C:\Users\524316\Desktop\Stack\memo.png', cv2.IMREAD_UNCHANGED) # store the alpha channels only m1 = hh1[:,:,3] m2 = hh2[:,:,3] # invert the alpha channel and obtain 3-channel mask of float data type m1i = cv2.bitwise_not(m1) alpha1i = cv2.cvtColor(m1i, cv2.COLOR_GRAY2BGRA)/255.0 m2i = cv2.bitwise_not(m2) alpha2i = cv2.cvtColor(m2i, cv2.COLOR_GRAY2BGRA)/255.0 # Perform blending and limit pixel values to 0-255 (convert to 8-bit) b1i = cv2.convertScaleAbs(hh2*(1-alpha2i) + hh1*alpha2i) Note: in the above we are using only the inverse alpha channel of the memo image. But I guess this is not the expected result. So moving on .... # Finding common ground between both the inverted alpha channels mul = cv2.multiply(alpha1i,alpha2i) # converting to 8-bit mulint = cv2.normalize(mul, dst=None, alpha=0, beta=255,norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U) # again create 3-channel mask of float data type alpha = cv2.cvtColor(mulint[:,:,2], cv2.COLOR_GRAY2BGRA)/255.0 # perform blending using previous output and multiplied result final = cv2.convertScaleAbs(b1i*(1-alpha) + mulint*alpha) Sorry for the weird variable names. I would request you to analyze the result in each line. I hope this is the expected output.
4
3
73,095,952
2022-7-24
https://stackoverflow.com/questions/73095952/how-to-find-all-the-functions-that-has-been-used-in-a-python-script
Say I have a python script in this form import numpy as np x = [1, 2, 5, 3, 9, 6] x.sort() print(np.sum(x)) I want to extract all the functions that have been used in this script. For this example the outcome should be list.sort, np.sum, print. I can list all the builtin functions and then expand that list with dir(numpy). And then check whether each of those functions appear in the code. But what if there is a function like np.linalg.norm(x)? How will I find this function since norm will not be returned by dir(numpy). Note: I understand many other functions can be invoked which does not appear in the code. For this example list.__init__. But I only want those functions that appears in the code. Simply put, I want all the words/tokens that will be colored yellow by vscode in Dark+ theme. Is there any possible way to use rope for this problem?
You can use ast module to do something like that: import ast code = """ import numpy as np x = [1, 2, 5, 3, 9, 6] x.sort() print(np.sum(x)) """ module = ast.parse(code) for node in ast.walk(module): if isinstance(node, ast.Call): try: parent_name = node.func.value.id + '.' except AttributeError: parent_name = '' if isinstance(node.func, ast.Name): print(f'{parent_name}{node.func.id}') else: print(f'{parent_name}{node.func.attr}') # output: # x.sort # print # np.sum Of course, there is x.sort instead of list.sort but I didn't find out how to get the actual parent type of the method.
4
3
73,092,700
2022-7-23
https://stackoverflow.com/questions/73092700/segmentation-fault-while-running-python-in-docker-container
I am trying to run this Github project on docker on my machine. Here is my docker file that I am running after cloning the project on my local machine. FROM python:3 RUN mkdir -p /opt/cascade-server WORKDIR /opt/cascade-server COPY requirements.txt . RUN pip install -r requirements.txt COPY . . COPY docker_defaults.yml conf/defaults.yml CMD /bin/bash docker_start.sh All my configurations are correct and working fine. When I run the project without docker containers, it runs correctly. When I build Docker containers to install it, it does not ends on python cascade.py -vv when running the below shell script from the last line of docker. #!/bin/bash if [ -f "conf/cascade.yml" ]; then echo "cascade.yml found. Using existing configuration." else echo "cascade.yml not found. Generating new config file from defaults" python cascade.py --setup_with_defaults fi python cascade.py -vv All the logs are showing correct results till the last line executes from the shell script. Here is the error when the last line executes. <frozen importlib._bootstrap>:241: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 152 from C header, got 160 from PyObject <frozen importlib._bootstrap>:241: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 152 from C header, got 160 from PyObject <frozen importlib._bootstrap>:241: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 152 from C header, got 160 from PyObject <frozen importlib._bootstrap>:241: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 152 from C header, got 160 from PyObject <frozen importlib._bootstrap>:241: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 152 from C header, got 160 from PyObject /opt/cascade-server/app/async_wrapper.py:44: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/usr/local/lib/python3.10/site-packages/urllib3/util/__init__.py)', 'pymongo.ssl_context (/usr/local/lib/python3.10/site-packages/pymongo/ssl_context.py)', 'urllib3.util.ssl_ (/usr/local/lib/python3.10/site-packages/urllib3/util/ssl_.py)']. gevent.monkey.patch_all() docker_start.sh: line 10: 18 Segmentation fault (core dumped) python cascade.py -vv python version
Try changing the base image python:3 (which uses Python 3.10) to the python:3.7 base image, matching your local Python interpreter.
4
3
73,089,517
2022-7-23
https://stackoverflow.com/questions/73089517/how-to-insert-only-new-key-to-existing-keyvalue-pair-dictionary-python
I have a dict as below: dict1={ 'item1': {'result': [{'val': 228, 'no': 202}] }, 'item2': {'result': [{'value': 148, 'year': 201}] } } How can we insert a new key 'category' into each item so that the output looks like below: output={ 'item1': {'category': {'result': [{'val': 228, 'no': 202}] } }, 'item2': {'category': {'result': [{'value': 148, 'year': 201}] } } } Currently, I have key:value and I'm looking to insert a new key which takes the same value, key:newkey:value. I tried to do dict1['item1']['category1'] but this is adding a new key-value pair.
Use a loop to modify the existing dictionary dict1 in place: for key, value in dict1.items(): dict1[key] = { "category" : value } print(dict1) Output {'item1': {'category': {'result': [{'val': 228, 'no': 202}]}}, 'item2': {'category': {'result': [{'value': 148, 'year': 201}]}}} As an alternative use update: dict1.update((k, {"category": v}) for k, v in dict1.items()) Note that update accepts either a dictionary or an iterable of key/value pairs; from the documentation: update() accepts either another dictionary object or an iterable of key/value pairs (as tuples or other iterables of length two). Finally, in Python 3.9+, you can use the merge operator (|=), as below: dict1 |= ((k, {"category": v}) for k, v in dict1.items())
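If building a new dictionary (rather than modifying dict1 in place) is acceptable, a dict comprehension is an equivalent one-liner — just a sketch, not part of the answer above:
    output = {k: {"category": v} for k, v in dict1.items()}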
5
5
73,085,926
2022-7-22
https://stackoverflow.com/questions/73085926/make-python-subprocess-run-in-powershell
I'm trying to get the wireless debugging port of ADB in my Android, using the command here: & "D:\Tools\Nmap\nmap.exe" -T4 192.168.2.20 -p 37000-44000 | Where-Object {$_ -match "tcp open"} | ForEach-Object {$_.split("/")[0]} And I would like to make a Python script for further purposes: ip = '192.168.2.20' nmap_path = r'D:\Tools\Nmap\nmap.exe' def get_port(): port_result = subprocess.run( f'& "{nmap_path}" -T4 {ip} -p 37000-44000 | ' f'Where-Object {{$_ -match "tcp open"}} | ' f'ForEach-Object {{$_.split("/")[0]}}', shell=True ) port = port_result.stdout.decode('utf-8').strip() return port But it gave the following error: & was unexpected at this time., which indicated that the command was run in CMD instead of PowerShell. I do not want to use powershell -Command, nor saving the commands into a .ps1 file. Could I make subprocess.run to run specifically in PowerShell?
The documentation says subprocess() uses COMSPEC to determine which shell to run if you set shell=True. I don't have, want, or use Windows, but imagine you'd need something like: import os import subprocess # Change COMSPEC to point to Powershell os.putenv('COMSPEC',r'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe') subprocess.run(..., shell=True) I'm assuming you can check that path. I guess it would probably be good practice to save and restore the previous COMSPEC in case the change to Powershell upsets something else. So: # Save original COMSPEC savedCOMSPEC = os.getenv('COMSPEC') ... CODE FROM ABOVE ... # Restore previous COMSPEC os.putenv('COMSPEC', savedCOMSPEC)
4
0
73,085,720
2022-7-22
https://stackoverflow.com/questions/73085720/cuda-error-invalid-device-ordinal-when-using-python-3-9
I'm trying to execute some code, but I keep getting this error when running this piece of code: import tensorflow as tf from xba import XBA import torch torch.tensor([1, 2, 3, 4]).to(device="cuda:2") torch.tensor([1, 2, 3, 4]).to(device="cuda:2") generates this error: " RuntimeError: CUDA error: invalid device ordinal CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Any idea about the origin of the bug? It's just the first line of code!
"cuda:2" selects the third GPU in your system. If you don't have 3 GPUs (at least) in your system, you'll get this error. Assuming you have at least 1 properly installed and set up CUDA GPU available, try: "cuda:0"
4
7
73,072,159
2022-7-21
https://stackoverflow.com/questions/73072159/spyder-on-m1-chip
I am getting a new mac with m1 pro chip and want to install Python with Spyder IDE. I will be using conda to manage Python environments. I gather that as of now Spyder does not run natively on m1 chip while Python with different packages does, see for example: https://www.anaconda.com/blog/new-release-anaconda-distribution-now-supporting-m1 The question is, what is the right way to install Spyder. Suppose I'm using Miniconda, create a new environment and install Spyder: conda install spyder will this Spyder work correctly or crash? do I need to run it using Rosetta2? (and how do I run only Spyder IDE under Rosetta2 while having Python run natively on m1?) sorry if I am getting some terminology wrong I am fairly new to this.
I personally have had no issue using Spyder through Anaconda, nonetheless it will be running on Rosetta (even if you download it directly). Python will be running using M1 inside the IDE. I haven't had any major issues. If you absolutely want to run python natively on M1 then python 3.9.1 is required and you should use the console.
4
2
73,083,535
2022-7-22
https://stackoverflow.com/questions/73083535/modify-dataframe-in-place-using-nan-values-from-passed-dataframe
So i have the following sample df df = pd.DataFrame({'Id':[1,1,2,3],'Origin':['int','int','pot','pot'],'Origin2':['pot','int','int','int']}) Id Origin Origin2 0 1 int pot 1 1 int int 2 2 pot int 3 3 pot int And i do the following replace command df.loc[df['Id'].eq(1)].apply(lambda x : x.replace('int':np.nan)) How could i update the original df, with both new columns using indexes. I tried df.update but after checking the documentation, i noticed it doesnt substitute 'non na' values by nan values? For better understanding, the columns in the index [0,1] ('id'= 1). Substitute the string 'int' by np.nan Wanted result: df = pd.DataFrame({'Id':[1,1,2,3],'Origin':[np.nan,np.nan,'pot','pot'],'Origin2':['pot',np.nan,'int','int']}) Id Origin Origin2 0 1 NaN pot 1 1 NaN NaN 2 2 pot int 3 3 pot int
Use mixed boolean/label indexing: m = df['Id'].eq(1) df.loc[m, ['Origin', 'Origin2']] = df.loc[m, ['Origin', 'Origin2']].replace('int', np.nan) Output: Id Origin Origin2 0 1 NaN pot 1 1 NaN NaN 2 2 pot int 3 3 pot int
4
3
73,074,874
2022-7-22
https://stackoverflow.com/questions/73074874/how-to-add-link-in-python-docstring
I have a function in python 3.x def foo(): """Lorem ipsum for more info, see here"" I want to add a hyperlink to 'here' to point to a web site. How can I do that without installing external plugin?
Just add the link as a string into the docstring, like so: def foo(): """Lorem ipsum for more info, see here: www.myfancydocu.com""" The docstring is just a string, so there is no hyperlink. But anyone who wants to look at the website can just copy the link. There are automatic documentation builders that build documentation out of your code and docstrings in e.g. HTML. Those can add hyperlinks to the documentation with a specific syntax, but that syntax depends on which documentation builder you use. If you only have your code, then just adding the URL as a string is all you can do.
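As an illustration of the "specific syntax" mentioned above: if the docs are built with Sphinx (reStructuredText docstrings), a link can be written like this — a sketch assuming Sphinx, not a universal rule:
    def foo():
        """Lorem ipsum.

        For more info, see the `project docs <https://www.myfancydocu.com>`_.
        """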
5
5
73,081,130
2022-7-22
https://stackoverflow.com/questions/73081130/python-fastapi-shedule-task
I want to write a task that will only run once a day at 3:30 p.m. with Python FASTAPI. How can I do it? I tried this but it works all the time. schedule.every().day.at("15:30:00").do(job2) while True: schedule.run_all()
Swap the schedule.run_all() for schedule.run_pending(). It should work! import schedule import time def job(): print("I'm working...") schedule.every().day.at("15:30:00").do(job) while True: schedule.run_pending() time.sleep(1) # pause between checks to avoid busy-waiting
10
4
73,078,568
2022-7-22
https://stackoverflow.com/questions/73078568/filtering-long-format-pandas-df-based-on-conditions-from-the-dictionary
Imagine I have an order for specialists in some coding languages with multiple criterion in JSON format: request = {'languages_required': {'Python': 4, 'Java': 2}, 'other_requests': [] } languages_required means that the candidate must have a skill in the language and the number is the minimum level of this language. The format of candidates dataframe is long: df = pd.DataFrame({'candidate': ['a', 'a', 'a', 'b', 'b', 'c', 'c', 'd', 'd', 'd'], 'language': ['Python', 'Java', 'Scala', 'Python', 'R', 'Python', 'Java', 'Python', 'Scala', 'Java'], 'skill': [5, 4, 4, 6, 8, 1, 3, 5, 2, 2]}) That gives: candidate language skill 0 a Python 5 1 a Java 4 2 a Scala 4 3 b Python 6 4 b R 8 5 c Python 1 6 c Java 3 7 d Python 5 8 d Scala 2 9 d Java 2 What I need to do is to keep the candidates and their skills in required languages that meet the requirements from the request, that is: Have skills in both mentioned languages Skills in these languages are equal or higher than values in the dictionary So the desired output would be: candidate language skill 0 a Python 5 1 a Java 4 7 d Python 5 9 d Java 2 I am able to filter the candidates with the languages based on keys() of the dictionary: lang_mask = df[df['language'].isin(request['languages_required'].keys())]\ .groupby('candidate')['language']\ .apply(lambda x: set(request['languages_required']).issubset(x)) But struggle with adding the 'is higher than' per language condition.
You need call first condition in one step and then second in another step: df = df[df['language'].map(request['languages_required']).le(df['skill'])] df = df[df.groupby('candidate')['language'].transform(lambda x: set(request['languages_required']).issubset(x))] print (df) candidate language skill 0 a Python 5 1 a Java 4 7 d Python 5 9 d Java 2 Or one row solution: df = (df[df['language'].map(request['languages_required']).le(df['skill'])] .pipe(lambda x: x[x.groupby('candidate')['language'].transform(lambda x: set(request['languages_required']).issubset(x))])) print (df) candidate language skill 0 a Python 5 1 a Java 4 7 d Python 5 9 d Java 2
4
2
73,077,203
2022-7-22
https://stackoverflow.com/questions/73077203/how-to-create-a-dummy-variable-in-python-if-missing-values-are-included
How to create a dummy variable if missing values are included? I have the following data and I want to create a Dummy variable based on several conditions. My problem is that it automatically converts my missing values to 0, but I want to keep them as missing values. import pandas as pd mydata = {'x' : [10, 50, np.nan, 32, 47, np.nan, 20, 5, 100, 62], 'y' : [10, 1, 5, np.nan, 47, np.nan, 8, 5, 100, 3]} df = pd.DataFrame(mydata) df["z"] = ((df["x"] >= 50) & (df["y"] <= 20)).astype(int) print(df)
When creating your boolean-mask, you are comparing integers with nans. In your case, when comparing df['x']=np.nan with 50, your mask df['x'] >= 50 will always be False and will equal 0 if you convert it to an integer. You can just create a boolean-mask that equals True for all rows that contain any np.nan in the columns ['x', 'y'] and then assign np.nan to these rows. Code: import pandas as pd import numpy as np mydata = {'x' : [10, 50, np.nan, 32, 47, np.nan, 20, 5, 100, 62], 'y' : [10, 1, 5, np.nan, 47, np.nan, 8, 5, 100, 3]} df = pd.DataFrame(mydata) df["z"] = ((df["x"] >= 50) & (df["y"] <= 20)).astype("uint32") df.loc[df[["x", "y"]].isna().any(axis=1), "z"] = np.nan Output: x y z 0 10.0 10.0 0.0 1 50.0 1.0 1.0 2 NaN 5.0 NaN 3 32.0 NaN NaN 4 47.0 47.0 0.0 5 NaN NaN NaN 6 20.0 8.0 0.0 7 5.0 5.0 0.0 8 100.0 100.0 0.0 9 62.0 3.0 1.0 Alternatively, if you want an one-liner, you could use nested np.where statements: df["z"] = np.where( df.isnull().any(axis=1), np.nan, np.where((df["x"] >= 50) & (df["y"] <= 20), 1, 0) )
5
5
73,069,550
2022-7-21
https://stackoverflow.com/questions/73069550/fastapi-best-practices-for-writing-rest-apis-with-multiple-conditions
Let's say I have two entities, Users and Councils, and a M2M association table UserCouncils. Users can be added/removed from Councils and only admins can do that (defined in a role attribute in the UserCouncil relation). Now, when creating endpoints for /councils/{council_id}/remove, I am faced with the issue of checking multiple constraints before the operation, such as the following: @router.delete("/{council_id}/remove", response_model=responses.CouncilDetail) def remove_user_from_council( council_id: int | UUID = Path(...), *, user_in: schemas.CouncilUser, db: Session = Depends(get_db), current_user: Users = Depends(get_current_user), council: Councils = Depends(council_id_dep), ) -> dict[str, Any]: """ DELETE /councils/:id/remove (auth) remove user with `user_in` from council current user must be ADMIN of council """ # check if input user exists if not Users.get(db=db, id=user_in.user_id): raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail="User not found" ) if not UserCouncil.get(db=db, user_id=user_in.user_id, council_id=council.id): raise HTTPException( status_code=status.HTTP_400_BAD_REQUEST, detail="Cannot delete user who is not part of council", ) # check if current user exists in council if not ( relation := UserCouncil.get( db=db, user_id=current_user.id, council_id=council.id ) ): raise HTTPException( status_code=status.HTTP_403_FORBIDDEN, detail="Current user not part of council", ) # check if current user is Admin if relation.role != Roles.ADMIN: raise HTTPException( status_code=status.HTTP_403_FORBIDDEN, detail="Unauthorized" ) elif current_user.id == user_in.user_id: raise HTTPException( status_code=status.HTTP_400_BAD_REQUEST, detail="Admin cannot delete themselves", ) else: updated_users = council.remove_member(db=db, user_id=user_in.user_id) result = {"council": council, "users": updated_users} return result These checks are pretty self-explanatory. However, this adds a lot of code in the endpoint definition. Should the endpoint definitions be generally minimalistic? I could wrap all these checks inside the Councils crud method (i.e., council.remove_member()), but that would mean adding HTTPExceptions inside crud classes, which I don't want to do. What are the general best practices for solving situations like these, and where can I read more about this? Any kind of help would be appreciated. Thanks.
So, I will tell you how I would go about doing it with your example. Generally I like to keep my endpoints quite minimal. What you what to employ is a common pattern used in building APIs and that is to bundle your business logic into a service class. This service class allows you to reuse logic. Say you want to remove a council member from a queue or cron job. This brings up the next issue you highlighted and that is about having HTTP specific exceptions in your service class which may not be used in an HTTP context. Fortunately this is not a difficult one to solve, you can just define your own exceptions and ask the API framework to catch them only to re-raise the desired HTTP exception. Define a custom exception: class UnauthorizedException(Exception): def __init__(self, message: str): super().__init__(message) self.message = message class InvalidActionException(Exception): ... class NotFoundException(Exception): ... In Fast API you can catch specific exceptions your application throws @app.exception_handler(UnauthorizedException) async def unauthorized_exception_handler(request: Request, exc: UnauthorizedException): return JSONResponse( status_code=status.HTTP_403_FORBIDDEN, content={"message": exc.message}, ) @app.exception_handler(InvalidActionException) async def unauthorized_exception_handler(request: Request, exc: InvalidActionException): ... Wrap up your business logic into a service class with sensible methods and raise the exceptions you have defined for your service class CouncilService: def __init__(self, db: Session): self.db = db def ensure_admin_council_member(self, user_id: int, council_id: int): # check if current user exists in council if not ( relation := UserCouncil.get( db=self.db, user_id=user_id, council_id=council_id ) ): raise UnauthorizedException("Current user not part of council") # check if current user is Admin if relation.role != Roles.ADMIN: raise UnauthorizedException("Unauthorized") def remove_council_member(self, user_in: schemas.CouncilUser, council: Councils): # check if input user exists if not Users.get(db=self.db, id=user_in.user_id): raise NotFoundException("User not found") if not UserCouncil.get(db=self.db, user_id=user_in.user_id, council_id=council.id): raise InvalidActionException("Cannot delete user who is not part of council") if current_user.id == user_in.user_id: raise InvalidActionException("Admin cannot delete themselves") updated_users = council.remove_member(db=self.db, user_id=user_in.user_id) result = {"council": council, "users": updated_users} return result and then finally your endpoint definition is quite lean EDIT: removed the /remove verb from the path, as pointed out in the comments, the verb is already specified. Ideally your path should contain Nouns referring to the resource. @router.delete("/{council_id}", response_model=responses.CouncilDetail) def remove_user_from_council( council_id: int | UUID = Path(...), *, user_in: schemas.CouncilUser, current_user: Users = Depends(get_current_user), council: Councils = Depends(council_id_dep), council_service: CouncilService = Depends(get_council_service), ) -> responses.CouncilDetail: """ DELETE /councils/:id (auth) remove user with `user_in` from council current user must be ADMIN of council """ council_service.ensure_admin_council_member(current_user.id, council_id) return council_service.remove_council_member(user_in, council)
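The endpoint above depends on get_council_service, which is not shown in the answer; a minimal sketch of that dependency (assuming the same get_db session dependency used in the question) might be:
    def get_council_service(db: Session = Depends(get_db)) -> CouncilService:
        # build the service once per request, reusing the request-scoped session
        return CouncilService(db)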
4
3
73,066,781
2022-7-21
https://stackoverflow.com/questions/73066781/how-to-find-circulation-in-dataframe
my goal is to find if the following df has a 'circulation' given: df = pd.DataFrame({'From':['USA','UK','France','Italy','Russia','china','Japan','Australia','Russia','Italy'], 'to':['UK','France','Italy','Russia','china','Australia','New Zealand','Japan','USA','France']}) df and if I graph it, it would look like this (eventually, note that the order on the df is different): USA-->UK-->France-->Italy-->Russia-->China-->Australia-->Japan-->Australia | | | | France USA The point is this: You cannot go backward, so Italy cannot go to France and Russia cannot go to USA. Note: From can have multiple Tos How can I find it in pandas so the end result would look like this: I can solve it without pandas (I get df.to_dict('records') and then iterate to find the circulation and then go back to pandas) but I wish to stay on pandas.
The logic is not fully clear, however you can approach your problem with a graph. Your graph is the following: Let us consider circulating nodes, those that have more than one destination. You can obtain this with networkx: import networkx as nx G = nx.from_pandas_edgelist(df, source='From', target='to', create_using=nx.DiGraph) circulating = {n for n in G if len(list(G.successors(n)))>1} df['IS_CIRCULATING'] = df['From'].isin(circulating).astype(int) output: From to IS_CIRCULATING 0 USA UK 0 1 UK France 0 2 France Italy 0 3 Italy Russia 1 4 Russia china 1 5 china Australia 0 6 Japan New Zealand 0 7 Australia Japan 0 8 Russia USA 1 9 Italy France 1 With pure pandas: df['IS_CIRCULATING'] = df.groupby('From')['to'].transform('nunique').gt(1).astype(int)
5
5
73,070,369
2022-7-21
https://stackoverflow.com/questions/73070369/how-do-i-install-antlr4-for-python3-on-windows
I'm trying to install antlr4 for Python 3 on Windows. I run the following pip command successfully: pip install antlr4-python3-runtime Installs the packages, no problem. I'm using the Miniconda environment, and the files are where they are expected. When I try to run antlr4 from the command line, though, error is returned: 'antlr4' is not recognized as an internal or external command, operable program or batch file I've verified that my path variables are as expected... and I run other packages installed via pip in the Miniconda environment with no issue. I've also tried installing to the main Python installation on Windows via CMD, and it installs without issue... but same response when I try to run. I also tried to do this on my Mac, same issue. I'm assuming there is an issue with the antlr4 build, but I wanted to make sure I wasn't missing anything before moving on. Update 0 @Bart's answer is the way to go, but now I'm having trouble running the .jar file. It's throwing an error that says that my Java is out of date (that I' mon class file version 52 and it requires 55). But I have Java 1.8, which should be higher than that. Here is the error below: C:\Users\mathg>java -jar C:\Users\mathg\miniconda3\Scripts\antlr-4.10.1-complete.jar -help Error: A JNI error has occurred, please check your installation and try again Exception in thread "main" java.lang.UnsupportedClassVersionError: org/antlr/v4/Tool has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0 at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(Unknown Source) at java.security.SecureClassLoader.defineClass(Unknown Source) at java.net.URLClassLoader.defineClass(Unknown Source) at java.net.URLClassLoader.access$100(Unknown Source) at java.net.URLClassLoader$1.run(Unknown Source) at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.launcher.LauncherHelper.checkAndLoadMain(Unknown Source) Update 1 I've done some more digging around, and followed the installation instructions here: antlr4: Getting Started. Updated all the environment variables, created the .bat files as required, and I ran again... but even though it's on my PATH now, still same error. There may be something wonky with my Java install, but I did do a clean reinstall. I did find a similar issue on the GitHub here, but seems to be resolved. Related antlr4 GitHub issue Update 2 The answer to my issue with running the antlr4 .jar file was to reinstall an earlier version of Java. That completely fixed it. If anyone else is going down this rabbit hole, take a look at the GitHub issue link I posted.
antlr4 is not a binary shipped with antlr4-python3-runtime. It is just an alias for the command: java -jar /usr/local/lib/antlr-4.10.1-complete.jar In other words, when you want to generate a parser from your .g4 grammar file, you need to download the antlr-4.10.1-complete.jar file and have a Java runtime installed. You only need Java to generate the parser classes, after which you need the Python runtime classes to use these generated parser classes. For example, you have a grammar called MyLanguage.g4: grammar MyLanguage; parse : GREET NAME EOF ; GREET : 'Hi' | 'Hello'; NAME : [a-zA-Z]+; SPACE : [ \t\r\n] -> skip; Then this is what you'll have to do: generate parser classes: java -jar antlr-4.10.1-complete.jar MyLanguage.g4 -Dlanguage=Python3 which will generate MyLanguageLexer.py, MyLanguageParser.py (and some listener classes). use the lexer and parser: from antlr4 import * from MyLanguageLexer import MyLanguageLexer from MyLanguageParser import MyLanguageParser if __name__ == '__main__': lexer = MyLanguageLexer(InputStream('Hi Trekkie')) parser = MyLanguageParser(CommonTokenStream(lexer)) parse_tree = parser.parse() print(parse_tree.toStringTree(recog=parser)) The script above only needs the library you installed with antlr4-python3-runtime (and are importing with from antlr4 import *). If you run this script, the following will be printed: (parse Hi Trekkie <EOF>)
4
3
73,069,962
2022-7-21
https://stackoverflow.com/questions/73069962/correct-way-to-hint-that-a-class-is-implementing-a-protocol
On a path of improvement for my Python dev work. I have interest in testing interfaces defined with Protocol at CI/deb building time, so that if a interface isn't actually implemented by a class we will know immediately after the unit tests run. My approach was typing with Protocol and using implements runtime_checkable to build unit test. That works, but the team got into a little debate about how to indicate a concretion was implementing a Protocol without busting runtime_checkable. In C++/Java you need inheritance to indicate implementations of interfaces, but with Python you don't necessarily need inheritance. The conversation centered on whether we should be inheriting from a Protocol interface class. Consider this code example at the end which provides most of the gist of the question. We were thinking about Shape and indicating how to hint to a future developer that Shape is providing IShape, but doing so with inheritance makes the runtime_checkable version of isinstance unusable for its purpose in unit-testing. There is a couple of paths to improvement here: We could find a better way to hint that Shape implements IShape which doesn't involve direct inheritance. We could find a better way to check if an interface is implemented at test deb package build time. Maybe runtime_checkable is the wrong idea. Anyone got guidance on how to use Python better? Thanks! from typing import ( Protocol, runtime_checkable ) import dataclasses @runtime_checkable class IShape(Protocol): x: float @dataclasses.dataclass class Shape(IShape): foo:float = 0. s = Shape() # evaluates as True but doesnt provide the interface. Undermines the point of the run-time checkable in unit testing assert isinstance(s, IShape) print(s.x) # Error. Interface wasnt implemented # # Contrast with this assert # @dataclasses.dataclass class Goo(): x:float = 1 @dataclasses.dataclass class Hoo(): foo: float = 1 g = Goo() h = Hoo() assert isinstance(g, IShape) # asserts as true # but because it has the interface and not because we inherited. print(g.x) assert isinstance(h, IShape) # asserts as False which is what we want
When talking about static type checking, it helps to understand the notion of a subtype as distinct from a subclass. (In Python, type and class are synonymous; not so in the type system implemented by tools like mypy.) A type T is a nominal subtype of type S if we explicitly say it is. Subclassing is a form of nominal subtyping: T is a subtype of S if (but not only if) T is a subclass of S. A type T is a structural subtype of type S if something about T itself is compatible with S. Protocols are Python's implementation of structural subtyping. Shape does not need to be a nominal subtype of IShape (via subclassing) in order to be a structural subtype of IShape (via having an x attribute). So the point of defining IShape as a Protocol rather than just a superclass of Shape is to support structural subtyping and avoid the need for nominal subtyping (and all the problems that inheritance can introduce). class IShape(Protocol): x: float # A structural subtype of IShape # Not a nominal subtype of IShape class Shape: def __init__(self): self.x = 3 # Not a structural subtype of IShape class Unshapely: def __init__(self): pass def foo(v: IShape): pass foo(Shape()) # OK foo(Unshapely()) # Not OK So is structural subtyping a replacement for nominal subtyping? Not at all. Inheritance has its uses, but when it's your only method of subtyping, it gets used inappropriately. Once you have a distinction between structural and nominal subtyping in your type system, you can use the one that is appropriate to your actual needs.
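Tying this back to the question's CI goal — a minimal sketch (my addition, not from the answer) of a unit test that verifies structural conformance at test time without any inheritance; note that a runtime_checkable isinstance check only verifies that the members exist, not their annotated types:
    from typing import Protocol, runtime_checkable
    import dataclasses

    @runtime_checkable
    class IShape(Protocol):
        x: float

    @dataclasses.dataclass
    class Shape:  # structural subtype only, no inheritance
        x: float = 0.0

    def test_shape_implements_ishape():
        assert isinstance(Shape(), IShape)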
13
18
73,062,386
2022-7-21
https://stackoverflow.com/questions/73062386/adding-single-integer-to-numpy-array-faster-if-single-integer-has-python-native
I add a single integer to an array of integers with 1000 elements. This is faster by 25% when I first cast the single integer from numpy.int64 to the python-native int. Why? Should I, as a general rule of thumb convert the single number to native python formats for single-number-to-array operations with arrays of about this size? Note: may be related to my previous question Conjugating a complex number much faster if number has python-native complex type. import numpy as np nnu = 10418 nnu_use = 5210 a = np.random.randint(nnu,size=1000) b = np.random.randint(nnu_use,size=1)[0] %timeit a + b # --> 3.9 Β΅s Β± 19.9 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each) %timeit a + int(b) # --> 2.87 Β΅s Β± 8.07 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each) Note that the speed-up can be enormous (factor 50) for scalar-to-scalar-operations as well, as seen below: np.random.seed(100) a = (np.random.rand(1))[0] a_native = float(a) b = complex(np.random.rand(1)+1j*np.random.rand(1)) c = (np.random.rand(1)+1j*np.random.rand(1))[0] c_native = complex(c) %timeit a * (b - b.conjugate() * c) # 6.48 Β΅s Β± 49.7 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each) %timeit a_native * (b - b.conjugate() * c_native) # 283 ns Β± 7.78 ns per loop (mean Β± std. dev. of 7 runs, 1000000 loops each) %timeit a * b # 5.07 Β΅s Β± 17.7 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each) %timeit a_native * b # 94.5 ns Β± 0.868 ns per loop (mean Β± std. dev. of 7 runs, 10000000 loops each) Update: Could it be that the latest numpy release fixes the speed difference? The release notes of numpy 1.23 mention that scalar operations are now much faster, see https://numpy.org/devdocs/release/1.23.0-notes.html#performance-improvements-and-changes and https://github.com/numpy/numpy/pull/21188. I am using python 3.7.6, numpy 1.21.2.
On my Windows PC with CPython 3.8.1, I get: [Old] Numpy 1.22.4: - First test: 1.65 µs VS 1.43 µs - Second: 2.03 µs VS 0.17 µs [New] Numpy 1.23.1: - First test: 1.38 µs VS 1.24 µs <---- A bit better than Numpy 1.22.4 - Second: 0.38 µs VS 0.17 µs <---- Much better than Numpy 1.22.4 While the new version of Numpy gives a good boost, native types should always be faster than Numpy ones with the (default) CPython interpreter. Indeed, the interpreter needs to call C functions of Numpy. This is not needed with native types. Additionally, the Numpy checks and wrapping are not optimal, but Numpy is not designed for fast scalar computation in the first place (though the overhead was previously not reasonable). In fact, scalar computations are very inefficient and the interpreter prevents any fast execution. If you plan to do many scalar operations you need to use natively compiled code, possibly using Cython, Numba, or even a raw C/C++ module. Note that Cython does not optimize/inline Numpy calls but can operate on native types faster. Native code can certainly do this in one or even two orders of magnitude less time. Note that in the first case, the path in Numpy functions is not the same and Numpy does additional checks that are a bit more expensive when the value is not a CPython object. Still, it should be a constant overhead (and now relatively small). Otherwise, it would be a bug (and should be reported). Related: Why is np.sum(range(N)) very slow?
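A small practical note (not in the original answer): when a NumPy scalar leaks into hot scalar code, .item() converts it to the corresponding native Python type without hard-coding int()/float():
    b = np.random.randint(nnu_use, size=1)[0]  # numpy.int64 scalar
    b_native = b.item()                        # plain Python int
    a + b_native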
7
1
73,067,671
2022-7-21
https://stackoverflow.com/questions/73067671/using-github-actions-how-do-you-store-flake8-exit-code-as-a-variable-instead-of
I have a GitHub Action workflow file that is doing multiple linting checks. flake8 is the first linting check and if it fails the entire workflow fails, meaning the subsequent linting checks are never run. name: lint on: push: pull_request: jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@main with: ref: ${{ github.head_ref }} - name: python uses: actions/setup-python@main with: # pulls latest version of python, alternatively specify exact version (i.e. 3.8 -> no quotes) python-version: '3.9' - name: install run: | pip install -r requirements.txt - name: Lint with flake8 run: | # fail if there are any flake8 errors flake8 . --count --max-complexity=15 --max-line-length=127 --statistics ## subsequent linting jobs
You could use the continue-on-error in conjunction with the outcome of the step. From the doc: steps.<step_id>.outcome string The result of a completed step before continue-on-error is applied. Possible values are success, failure, cancelled, or skipped. When a continue-on-error step fails, the outcome is failure, but the final conclusion is success. As example: - name: Lint with flake8 id: flake8 continue-on-error: true run: | # fail if there are any flake8 errors flake8 . --count --max-complexity=15 --max-line-length=127 --statistics - name: Check if 'Lint with flake8' step failed if: steps.flake8.outcome != 'success' run: | echo "flake8 fails" exit 1
4
4
73,067,450
2022-7-21
https://stackoverflow.com/questions/73067450/get-row-values-as-column-values
I have a single row data-frame like below Num TP1(USD) TP2(USD) TP3(USD) VReal1(USD) VReal2(USD) VReal3(USD) TiV1 (EUR) TiV2 (EUR) TiV3 (EUR) TR TR-Tag AA-24 0 700 2100 300 1159 2877 30 30 47 10 5 I want to get a dataframe like the one below ID Price Net Range 1 0 300 30 2 700 1159 30 3 2100 2877 47 The logic here is that a. there will be 3 columns names that contain TP/VR/TV. So in the ID, we have 1, 2 & 3 (these can be generated by extracting the value from the column names or just by using a range to fill) b. TP1 value goes into first row of column 'Price',TP2 value goes into second row of column 'Price' & so on c. Same for VR & TV. The values go into 'Net' & 'Range columns d. Columns 'Num', 'TR' & 'TR=Tag' are not relevant for the result. I tried df.filter(regex='TP').stack(). I get all the 'TP' column & I can access individual values be index ([0],[1],[2]). I could not get all of them into a column directly. I also wondered if there may be a easier way of doing this.
Assuming 'Num' is a unique identifier, you can use pandas.wide_to_long: pd.wide_to_long(df, stubnames=['TP', 'VR', 'TV'], i='Num', j='ID') or, for an output closer to yours: out = (pd .wide_to_long(df, stubnames=['TP', 'VR', 'TV'], i='Num', j='ID') .reset_index('ID') .drop(columns=['TR', 'TR-Tag']) .rename(columns={'TP': 'Price', 'VR': 'Net', 'TV': 'Range'}) ) output: ID Price Net Range Num AA-24 1 0 300 30 AA-24 2 700 1159 30 AA-24 3 2100 2877 47 updated answer out = (pd .wide_to_long(df.set_axis(df.columns.str.replace(r'\(USD\)$', '', regex=True), axis=1), stubnames=['TP', 'VReal', 'TiV'], i='Num', j='ID') .reset_index('ID') .drop(columns=['TR', 'TR-Tag']) .rename(columns={'TP': 'Price', 'VReal': 'Net', 'TiV': 'Range'}) ) output: ID Price Net Range Num AA-24 1 0 300 30 AA-24 2 700 1159 30 AA-24 3 2100 2877 47
9
12
73,065,778
2022-7-21
https://stackoverflow.com/questions/73065778/compare-two-pandas-dataframes-in-the-most-efficient-way
Let's consider two pandas dataframes: import numpy as np import pandas as pd df = pd.DataFrame([1, 2, 3, 2, 5, 4, 3, 6, 7]) check_df = pd.DataFrame([3, 2, 5, 4, 3, 6, 4, 2, 1]) If want to do the following thing: If df[1] > check_df[1] or df[2] > check_df[1] or df[3] > check_df[1] then we assign to df 1, and 0 otherwise If df[2] > check_df[2] or df[3] > check_df[2] or df[4] > check_df[2] then we assign to df 1, and 0 otherwise We apply the same algorithm to end of DataFrame My primitive code is the following: df_copy = df.copy() for i in range(len(df) - 3): moving_df = df.iloc[i:i+3] if (moving_df >check_df.iloc[i]).any()[0]: df_copy.iloc[i] = 1 else: df_copy.iloc[i] = -1 df_copy 0 0 -1 1 1 2 -1 3 1 4 1 5 -1 6 3 7 6 8 7 Could you please give me a advice, if there is any possibility to do this without loop?
IIUC, this is easily done with a rolling max (window N = 3 here, matching the question): N = 3 df['out'] = np.where(df[0].rolling(N, min_periods=1).max().shift(1-N).gt(check_df[0]), 1, -1) output: 0 out 0 1 -1 1 2 1 2 3 -1 3 2 1 4 5 1 5 4 -1 6 3 1 7 6 -1 8 7 -1 to keep the last items as is: m = df[0].rolling(N).max().shift(1-N) df['out'] = np.where(m.gt(check_df[0]), 1, -1) df['out'] = df['out'].mask(m.isna(), df[0]) output: 0 out 0 1 -1 1 2 1 2 3 -1 3 2 1 4 5 1 5 4 -1 6 3 1 7 6 6 8 7 7
4
3
73,057,180
2022-7-20
https://stackoverflow.com/questions/73057180/split-a-string-if-character-is-present-else-dont-split
I have a string like below in python testing_abc I want to split string based on _ and extract the 2 element I have done like below split_string = string.split('_')[1] I am getting the correct output as expected abc Now I want this to work for below strings 1) xyz When I use split_string = string.split('_')[1] I get below error list index out of range expected output I want is xyz 2) testing_abc_bbc When I use split_string = string.split('_')[1] I get abc as output expected output I want is abc_bbc Basically What I want is 1) If string contains `_` then print everything after the first `_` as variable 2) If string doesn't contain `_` then print the string as variable How can I achieve what I want
Set the maxsplit argument of split to 1 and then take the last element of the resulting list. >>> "testing_abc".split("_", 1)[-1] 'abc' >>> "xyz".split("_", 1)[-1] 'xyz' >>> "testing_abc_bbc".split("_", 1)[-1] 'abc_bbc'
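An equivalent one-liner using str.partition — just an alternative sketch, relying on the fact that the part after a missing separator is an empty (falsy) string:
    s = "xyz"
    split_string = s.partition("_")[2] or s   # 'xyz'
    s = "testing_abc_bbc"
    split_string = s.partition("_")[2] or s   # 'abc_bbc'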
4
9
73,056,691
2022-7-20
https://stackoverflow.com/questions/73056691/how-do-we-interpret-the-baseline-output-of-cv2-gettextsize
If I do this for example: cv2.getTextSize('blahblah', cv2.FONT_HERSHEY_SIMPLEX, 2, 2) it returns ((262, 43), 19) so the width and height of the text in pixels are 262 and 43, but what is the 19? Here it says it "corresponds to the y coordinate of the baseline relative to the bottom of the text" but this still doesn't make it clear to me, as I'm not sure what the "baseline" is here?
The baseline here is the yellow line in the figure on page 124. It is the line on which the letters sit. Pay attention to how the three little points (red, cyan, and green) are drawn and also to how the yellow baseline is shown.
5
7
73,056,540
2022-7-20
https://stackoverflow.com/questions/73056540/no-module-named-amazon-linux-extras-when-running-amazon-linux-extras-install-epe
Here is my (simplified) Dockerfile # https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-create-from-base FROM public.ecr.aws/lambda/python:3.8 # get the amazon linux extras RUN yum install -y amazon-linux-extras RUN amazon-linux-extras install epel -y When it reaches the RUN amazon-linux-extras install epel -y line during the build, it gets Step 6/8 : RUN amazon-linux-extras install epel -y ---> Running in dbb44f57111a /var/lang/bin/python: No module named amazon_linux_extras The command '/bin/sh -c amazon-linux-extras install epel -y' returned a non-zero code: 1 I think that has to do with some python 2 vs. 3 stuff, but I'm not sure
You're correct, it's because amazon-linux-extras only works with Python 2. You can modify the RUN instruction to RUN PYTHON=python2 amazon-linux-extras install epel -y
10
16
73,049,456
2022-7-20
https://stackoverflow.com/questions/73049456/apply-the-nested-shape-of-one-list-on-another-flat-list
I have two lists: A: [[0, 1], [2, [3]], 4] B: [5, 6, 7, 8, 9] I wish list B could have the same shape with list A: [5, 6, 7, 8, 9] => [[5, 6], [7, [8]], 9] So list A and list B have the same dimension/shape: A: [[0, 1], [2, [3]], 4] B: [[5, 6], [7, [8]], 9] Consider about time complexity, I hope there is a way of O(n) if possible.
Assuming the number of items is identical, you could use a recursive function and an iterator: A = [[0, 1], [2, [3]], 4] B = [5, 6, 7, 8, 9] def copy_shape(l, other): if isinstance(other, list): other = iter(other) if isinstance(l, list): return [copy_shape(x, other) for x in l] else: return next(other) out = copy_shape(A, B) output: [[5, 6], [7, [8]], 9] NB. the complexity is O(n). You can also use if hasattr(other, '__len__') or if not hasattr(other, '__next__') in place of if isinstance(other, list) to generalize to other iterables (except iterator).
6
6
73,049,158
2022-7-20
https://stackoverflow.com/questions/73049158/extract-values-from-two-columns-of-a-dataframe-and-put-it-in-a-list
I have a dataframe as shown below: df = A col_1 col_45 col_3 1.0 4.0 45.0 [1, 9] 2.0 4.0 NaN [9, 10] 3.0 49.2 10.8 [1, 10] The values in col_1 are of type float and the values in col_3 are in a list. For every row, I want to extract the values in col_1 and col_3 and put it together in a list. I tried the following: df[['col_1','col_3']].astype(float).values.tolist() But it threw me a Value error: ValueError: setting an array element with a sequence.. I would like to have a list as follows: [[4.0,1.0,9.0], [4.0,9.0,10.0], [49.2,1.0,10.0]] Is there a way to do this? Thanks.
Convert the col_1 element in each row to a list, then merge the two lists like list_1 + list_2. You can use pandas.apply with axis=1 to iterate over each row: >>> df.apply(lambda row: [row['col_1']] + row['col_3'], axis=1) 0 [4.0, 1, 9] 1 [4.0, 9, 10] 2 [49.2, 1, 10] dtype: object >>> df.apply(lambda row: [row['col_1']] + row['col_3'], axis=1).to_list() [ [4.0, 1, 9], [4.0, 9, 10], [49.2, 1, 10] ]
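If the apply overhead matters, a plain list comprehension over zip gives the same result — a sketch, assuming col_3 always holds lists:
    out = [[a] + b for a, b in zip(df['col_1'], df['col_3'])]
    # [[4.0, 1, 9], [4.0, 9, 10], [49.2, 1, 10]]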
5
3
73,044,698
2022-7-20
https://stackoverflow.com/questions/73044698/pandas-str-extract-giving-unexpected-nan
I have a data set which has a column that looks like this Badge Number 1 3 23 / gold 22 / silver 483 I need only the numbers. Here's my code: df = pd.read_excel('badges.xlsx') df['Badge Number'] = df['Badge Number'].str.extract('(\d+)') print(df) I was expecting an output like: Badge Number 1 3 23 22 483 but I got Badge Number Nan Nan 23 22 Nan Just to test, I dumped the dataframe to a .csv and read it back with pd.read_csv(). That gave me just the numbers, as I need (though of course that's not a solution) I also tried df['Badge Number'] = np.where(df['Badge Number'].str.isnumeric(), df['Badge Number'], df['Badge Number'].str.extract('(\d+)')) but that just gave me all 1s. I know I am trying things I don't even remotely understand, but am hoping there's a straightforward solution.
Another option is to specify the column type as string while reading the Excel file itself, using dtype={'Badge Number': str}: df = pd.read_excel('badges.xlsx', dtype={'Badge Number': str}) df['Badge Number'] = df['Badge Number'].str.extract('(\\d+)')
5
2
73,044,663
2022-7-20
https://stackoverflow.com/questions/73044663/why-are-f-strings-slower-than-string-concatenation-when-repeatedly-adding-to-a-s
I was benchmarking some code for a project with timeit (using a free replit, so 1024MB of memory): code = '{"type":"body","layers":[' for x, row in enumerate(pixels): for y, pixel in enumerate(row): if pixel != (0, 0, 0, 0): code += f'''{{"offsetX":{-start + x * gap},"offsetY":{start - y * gap},"rot":45,"size":{size},"sides":4,"outerSides":0,"outerSize":0,"team":"{'#%02x%02x%02x' % (pixel[:3])}","hideBorder":1}},''' code += '],"sides":1,"name":"Image"}} The loop runs for every single pixel inside a given image (not efficient of course, but I haven't implemented anything to reduce loop times yet), so any optimization I can get in the loop is worth it. I remembered that f-strings are faster than string concatenation as long as you're combining 3+ stringsβ€”and as shown, I have a lot more than 3 strings being combinedβ€”so I decided to replace the += inside the loop with an f-string and see the improvement. code = '{"type":"body","layers":[' for x, row in enumerate(pixels): for y, pixel in enumerate(row): if pixel != (0, 0, 0, 0): code = f'''{code}{{"offsetX":{-start + x * gap},"offsetY":{start - y * gap},"rot":45,"size":{size},"sides":4,"outerSides":0,"outerSize":0,"team":"{'#%02x%02x%02x' % (pixel[:3])}","hideBorder":1}},''' code += '],"sides":1,"name":"Image"}} The results of 500 timeit iterations: += took 5.399778672000139 seconds fstr took 6.91279206800027 seconds I've rerun this multiple times; the above times are the best f-strings have done so far. Why are f-strings slower in this case? PS: This is my first time posting a question here. Any suggestions on how to improve my future questions would be greatly appreciated :D
So, first off, repeated concatenation in a language with immutable strings is, theoretically, O(nΒ²), while efficiently implemented bulk concatenation is O(n), so both versions of your code are theoretically bad for repeated concatenation. The version that works everywhere with O(n) work is: code = ['{"type":"body","layers":['] # Use list of str, not str for x, row in enumerate(pixels): for y, pixel in enumerate(row): if pixel != (0, 0, 0, 0): code.append(f'''{{"offsetX":{-start + x * gap},"offsetY":{start - y * gap},"rot":45,"size":{size},"sides":4,"outerSides":0,"outerSize":0,"team":"{'#%02x%02x%02x' % (pixel[:3])}","hideBorder":1}},''') # Append each new string to list code.append('],"sides":1,"name":"Image"}}') code = ''.join(code) # Efficiently join list of str back to single str Your code with += happens to work efficiently enough because of a CPython specific optimization for string concatenation when concatenating to a string with no other living references, but the very first Programming Recommendation in the PEP8 style guide specifically warns against relying on it: ... do not rely on CPython’s efficient implementation of in-place string concatenation for statements in the form a += b or a = a + b. This optimization is fragile even in CPython (it only works for some types) and isn’t present at all in implementations that don’t use refcounting. In performance sensitive parts of the library, the ''.join() form should be used instead. This will ensure that concatenation occurs in linear time across various implementations. Essentially, your original +=-based code benefited from the optimization, and as a result, ended up performing fewer data copies. Your f-string based code did the same work, but in a way that prevented the CPython optimization from applying (building a brand new, increasingly large, str every time). Both approaches are poor form, one of them was just slightly less awful on CPython. When your hot code is performing repeated concatenation, you're already doing the wrong thing, just use a list of str and ''.join at the end.
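If you want to see the effect yourself without the original image data, here is a small self-contained benchmark sketch; the pixel grid and the JSON-ish fields are made up for illustration and are not the question's real payload:

import timeit

pixels = [[(x, y, 0, 255) for y in range(50)] for x in range(50)]  # hypothetical 50x50 image

def concat():
    code = ""
    for row in pixels:
        for pixel in row:
            code += f'{{"r":{pixel[0]},"g":{pixel[1]}}},'
    return code

def join():
    parts = []
    for row in pixels:
        for pixel in row:
            parts.append(f'{{"r":{pixel[0]},"g":{pixel[1]}}},')
    return "".join(parts)

assert concat() == join()          # both build the same string
print("+= concat :", timeit.timeit(concat, number=200))
print("''.join   :", timeit.timeit(join, number=200))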
6
5
72,982,731
2022-7-14
https://stackoverflow.com/questions/72982731/how-to-transform-a-series-of-a-polars-dataframe
I am dealing with a large dataframe (198,619 rows x 19,110 columns) and so am using the polars package to read in the tsv file. Pandas just takes too long. However, I now face an issue, as I want to transform each cell's value x by raising 2 to that power, i.e. 2^x. I run the following line as an example: df_copy = df df_copy[:,1] = 2**df[:,1] But I get this error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /var/tmp/pbs.98503.hn-10-03/ipykernel_196334/3484346087.py in <module> 1 df_copy = df ----> 2 df_copy[:,1] = 2**df[:,1] ~/.local/lib/python3.9/site-packages/polars/internals/frame.py in __setitem__(self, key, value) 1845 1846 # dispatch to __setitem__ of Series to do modification -> 1847 s[row_selection] = value 1848 1849 # now find the location to place series ~/.local/lib/python3.9/site-packages/polars/internals/series.py in __setitem__(self, key, value) 512 self.__setitem__([key], value) 513 else: --> 514 raise ValueError(f'cannot use "{key}" for indexing') 515 516 def estimated_size(self) -> int: ValueError: cannot use "slice(None, None, None)" for indexing This should be simple but I can't figure it out as I'm new to Polars.
The secret to harnessing the speed and flexibility of Polars is to learn to use Expressions. As such, you'll want to avoid Pandas-style indexing methods. Let's start with this data: import polars as pl nbr_rows = 4 nbr_cols = 5 df = pl.DataFrame({ "col_" + str(col_nbr): pl.int_range(col_nbr, nbr_rows + col_nbr, eager=True) for col_nbr in range(0, nbr_cols) }) df shape: (4, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ col_0 ┆ col_1 ┆ col_2 ┆ col_3 ┆ col_4 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ═══════║ β”‚ 0 ┆ 1 ┆ 2 ┆ 3 ┆ 4 β”‚ β”‚ 1 ┆ 2 ┆ 3 ┆ 4 ┆ 5 β”‚ β”‚ 2 ┆ 3 ┆ 4 ┆ 5 ┆ 6 β”‚ β”‚ 3 ┆ 4 ┆ 5 ┆ 6 ┆ 7 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ In Polars we would express your calculations as: df_copy = df.select(pl.lit(2).pow(pl.all()).name.keep()) print(df_copy) shape: (4, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ col_0 ┆ col_1 ┆ col_2 ┆ col_3 ┆ col_4 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ═══════║ β”‚ 1.0 ┆ 2.0 ┆ 4.0 ┆ 8.0 ┆ 16.0 β”‚ β”‚ 2.0 ┆ 4.0 ┆ 8.0 ┆ 16.0 ┆ 32.0 β”‚ β”‚ 4.0 ┆ 8.0 ┆ 16.0 ┆ 32.0 ┆ 64.0 β”‚ β”‚ 8.0 ┆ 16.0 ┆ 32.0 ┆ 64.0 ┆ 128.0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
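As a follow-up, if you only want to transform a single column (as in the original attempt on df[:, 1]) rather than every column, with_columns leaves the rest of the frame untouched. A minimal sketch, rebuilding just the first two columns of the example above:

import polars as pl

df = pl.DataFrame({"col_0": [0, 1, 2, 3], "col_1": [1, 2, 3, 4]})

df_one = df.with_columns(
    pl.lit(2).pow(pl.col("col_1")).alias("col_1")
)
print(df_one)  # col_1 becomes 2.0, 4.0, 8.0, 16.0; col_0 is unchanged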
4
2
73,000,068
2022-7-15
https://stackoverflow.com/questions/73000068/what-is-the-right-way-to-validate-a-storekit-2-transaction-jwsrepresentation-in
It's unclear from the docs what you actually do to verify the jwsRepresentation string from a StoreKit 2 transaction on the server side. Also "signedPayload" from the Apple App Store Notifications V2 seems to be the same, but there is also no documentation around actually validating that either outside of validating it client side on device. What gives? What do we do with this JWS/JWT?
Apple now provides an App Store Server Library available in a few different languages (including Python). You still need to manage loading the root certificates on your own, but here is an example: import requests from functools import lru_cache from appstoreserverlibrary.models.Environment import Environment from appstoreserverlibrary.signed_data_verifier import VerificationException, SignedDataVerifier @lru_cache(maxsize=None) def _load_apple_root_certificates(): # https://www.apple.com/certificateauthority/ certs_urls = [ "https://www.apple.com/appleca/AppleIncRootCertificate.cer", "https://www.apple.com/certificateauthority/AppleComputerRootCertificate.cer", "https://www.apple.com/certificateauthority/AppleRootCA-G2.cer", "https://www.apple.com/certificateauthority/AppleRootCA-G3.cer", ] return [requests.get(cert_url).content for cert_url in certs_urls] root_certificates = _load_apple_root_certificates() enable_online_checks = True bundle_id = "com.example" environment = Environment.SANDBOX app_apple_id = None # appAppleId must be provided for the Production environment signed_data_verifier = SignedDataVerifier(root_certificates, enable_online_checks, environment, bundle_id, app_apple_id) try: signed_notification = "ey.." payload = signed_data_verifier.verify_and_decode_notification(signed_notification) print(payload) except VerificationException as e: print(e)
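The example above decodes an App Store Server Notification. For the jwsRepresentation of a StoreKit 2 transaction, the same verifier object is used; the sketch below continues from the setup above and assumes the method name verify_and_decode_signed_transaction, as documented in the library's README:

try:
    signed_transaction = "ey.."  # the jwsRepresentation string sent up from your app
    transaction_payload = signed_data_verifier.verify_and_decode_signed_transaction(signed_transaction)
    print(transaction_payload)
except VerificationException as e:
    print(e)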
4
2
72,976,543
2022-7-14
https://stackoverflow.com/questions/72976543/google-bigquery-query-in-python-works-when-using-result-but-permission-issue
I've run into a problem after upgrading my pip packages: my BigQuery connector code that returns query results suddenly stopped working with the following error message. from google.cloud import bigquery from google.oauth2 import service_account credentials = service_account.Credentials.from_service_account_file('path/to/file', scopes=['https://www.googleapis.com/auth/cloud-platform', 'https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/bigquery' ]) client = bigquery.Client(credentials=credentials) data = client.query('select * from dataset.table').to_dataframe() PermissionDenied: 403 request failed: the user does not have bigquery.readsessions.create' permission But! If you switched the code to data = client.query('select * from dataset.table').result() (dataframe -> result) you received the data in RowIterator format and were able to read it properly. The same script using to_dataframe with the same credentials was working on the server. Therefore I pinned my bigquery package to the same version, 2.28.0, which still did not help. I could not find any advice on this error / topic anywhere, so I just want to share it in case any of you faced the same thing.
There are different ways of receiving data from BigQuery. Using the BQ Storage API is considered more efficient for larger result sets compared to the other options: The BigQuery Storage Read API provides a third option that represents an improvement over prior options. When you use the Storage Read API, structured data is sent over the wire in a binary serialization format. This allows for additional parallelism among multiple consumers for a set of results The Python BQ library internally determines whether it can use the BQ Storage API or not. For the result method, it uses the traditional tabledata.list method internally, whereas the to_dataframe method uses the BQ Storage API if the corresponding package is installed. However, using the BQ Storage API requires you to have the bigquery.readSessionUser role (i.e. the bigquery.readsessions.create permission), which in your case seems to be lacking. By uninstalling google-cloud-bigquery-storage, the google-cloud-bigquery package falls back to the list method. Hence, by uninstalling this package, you were working around the lack of rights. See the BQ Python Library Documentation for details.
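If you would rather keep google-cloud-bigquery-storage installed (other code may need it), recent versions of the client also let you opt out of the Storage API per call. A minimal sketch, assuming your google-cloud-bigquery version supports the create_bqstorage_client flag (2.28.0 does):

# Reuses the `client` from the question; this falls back to tabledata.list,
# so the bigquery.readsessions.create permission is not needed.
data = client.query('select * from dataset.table').to_dataframe(create_bqstorage_client=False)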
5
5
73,026,698
2022-7-18
https://stackoverflow.com/questions/73026698/javas-spring-boot-vs-pythons-fastapi-threads
I'm a Java Spring Boot developer and I develop 3-tier CRUD applications. I talked to a guy who seemed knowledgeable on the subject, but I didn't get his contact details. He was advocating for Python's FastAPI, because horizontally it scales better than Spring Boot. One of the reasons he mentioned is that FastAPI is single-threaded. When the thread encounters a database lookup (or other work that can be done asynchronously), it picks up other work and later returns to the current work when the database results have come in. In Java, when you have many requests pending, the thread pool may get exhausted. I don't understand this reasoning 100%. Let me play the devil's advocate. When the Python program encounters an async call, it must somehow store the program pointer somewhere, to remember where it needs to continue later. I know that the place where the program pointer is stored is not at all a thread, but I have to give it some name, so let's call it a "logical thread". In Python, you can have many logical threads that are waiting. In Java, you can have a thread pool with many real threads that are waiting. To me, the only difference seems to be that Java's threads are managed at the operating system level, whereas Python's "logical threads" are managed by Python or FastAPI. Why are real threads that are waiting in a thread pool so much more expensive than logical threads that are waiting? If most of my threads are waiting, why can't I just increase the thread pool size to avoid exhaustion?
The issues with Java threads in the question are addressed by Project Loom, which is now included in JDK 21. It is very well explained here: https://www.baeldung.com/openjdk-project-loom : Presently, Java relies on OS implementations for both the continuation [of threads] and the scheduler [for threads]. Now, in order to suspend a continuation, it's required to store the entire call-stack. And similarly, retrieve the call-stack on resumption. Since the OS implementation of continuations includes the native call stack along with Java's call stack, it results in a heavy footprint. A bigger problem, though, is the use of OS scheduler. Since the scheduler runs in kernel mode, there's no differentiation between threads. And it treats every CPU request in the same manner. (...) For example, consider an application thread which performs some action on the requests and then passes on the data to another thread for further processing. Here, it would be better to schedule both these threads on the same CPU. But since the [OS] scheduler is agnostic to the thread requesting the CPU, this is impossible to guarantee. The question really boils down to: Why are OS threads considered expensive?
3
8
72,975,593
2022-7-14
https://stackoverflow.com/questions/72975593/where-to-store-tokens-secrets-with-fastapi
I'm working with FastAPI and Python on the backend to make external calls to a public API. After authentication, the public API gives an access token that grants access to a specific user's data. Where would be the best place to store/save this access token? I want to easily access it for all my future API calls with the public API service. I don't want a DB or long term storage as it only needs to last the session for the user. Appreciate all help!
Almost a year later, but I found a clean solution I was pleased with. I used Starlette's SessionMiddleware to store the access_token and user session data in the backend. Example: from fastapi import Request ... @router.get("/callback") async def callback(request: Request): ... request.session["access_token"] = access_token Then later, in any endpoints where I need to use the token or get session data: @router.get("/top_artists") async def get_top_songs(request: Request): ... access_token = request.session.get("access_token") This stores access_token and any other session data you want on the backend. Then, a cookie, 'session_id', is stored client-side and passed through Request to retrieve the session data from the server.
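For completeness, the middleware itself has to be registered on the app before request.session is available. A minimal sketch (the secret key value is a placeholder, and the itsdangerous package must be installed for Starlette's SessionMiddleware):

from fastapi import FastAPI
from starlette.middleware.sessions import SessionMiddleware

app = FastAPI()
# Sessions are signed with this key, so use a long random value from your config
app.add_middleware(SessionMiddleware, secret_key="replace-with-a-long-random-secret")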
3
4
72,995,109
2022-7-15
https://stackoverflow.com/questions/72995109/excluding-a-dependency-in-pip-audit
I have a pipenv project that is using the Trend Micro deepsecurity dependency. Up until recently, this was available on pypi, but Trend has since removed it. They require one to download the SDK and install it manually. Not a horrible issue, as I unzip the package and pip install it. pip freeze|grep deep ξ‚² 1 ✘ ξ‚² 4s ξ‚² portal-bVWoHG0U  deep-security-api @ file:///Users/paul/src/smartronix/portal/ds_temp Unfortunately, this causes heartburn for pip-audit: > pip-audit ξ‚² βœ” ξ‚² 5s ξ‚² portal- Traceback (most recent call last): File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/bin/pip-audit", line 8, in <module> sys.exit(audit()) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/pip_audit/_cli.py", line 370, in audit for (spec, vulns) in auditor.audit(source): File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/pip_audit/_audit.py", line 66, in audit for dep, vulns in self._service.query_all(specs): File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/pip_audit/_service/interface.py", line 143, in query_all yield self.query(spec) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/pip_audit/_service/pypi.py", line 58, in query response: requests.Response = self.session.get(url=url, timeout=self.timeout) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 542, in get return self.request('GET', url, **kwargs) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 529, in request resp = self.send(prep, **send_kwargs) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 667, in send history = [resp for resp in gen] File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 667, in <listcomp> history = [resp for resp in gen] File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 166, in resolve_redirects raise TooManyRedirects('Exceeded {} redirects.'.format(self.max_redirects), response=resp) requests.exceptions.TooManyRedirects: Exceeded 30 redirects. The reason for this (as fully documented below) is that pypi used to know about deep security, but has it no longer and provides a confusing response. I'd like to simply exclude this dependency but can't see how to do it. 
Verbose pip-audit run pip-audit -v ξ‚² DEBUG:pip_audit._cli:parsed arguments: Namespace(cache_dir=None, desc=<VulnerabilityDescriptionChoice.Auto: 'auto'>, dry_run=False, extra_index_urls=[], fix=False, format=<OutputFormatChoice.Columns: 'columns'>, index_url='https://pypi.org/simple', local=False, output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>, paths=[], progress_spinner=<ProgressSpinnerChoice.On: 'on'>, project_path=None, require_hashes=False, requirements=None, skip_editable=False, strict=False, timeout=15, verbose=True, vulnerability_service=<VulnerabilityServiceChoice.Pypi: 'pypi'>) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/alembic/1.7.7/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): pypi.org:443 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/alembic/1.7.7/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/aniso8601/9.0.1/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/aniso8601/9.0.1/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/apscheduler/3.9.1/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/APScheduler/3.9.1/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/APScheduler/3.9.1/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/astroid/2.6.6/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/astroid/2.6.6/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/atomicwrites/1.4.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/atomicwrites/1.4.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/attrs/21.4.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/attrs/21.4.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/autopep8/1.6.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/autopep8/1.6.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-common/1.1.28/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-common/1.1.28/json HTTP/1.1" 304 0 
DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-core/1.24.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-core/1.24.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-identity/1.11.0b1/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-identity/1.11.0b1/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-mgmt-compute/26.1.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-mgmt-compute/26.1.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-mgmt-core/1.3.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-mgmt-core/1.3.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-mgmt-loganalytics/13.0.0b4/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-mgmt-loganalytics/13.0.0b4/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-mgmt-recoveryservices/2.0.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-mgmt-recoveryservices/2.0.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-mgmt-recoveryservicesbackup/4.2.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-mgmt-recoveryservicesbackup/4.2.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-mgmt-resource/21.1.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-mgmt-resource/21.1.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/azure-monitor-query/1.0.2/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/azure-monitor-query/1.0.2/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/backports-zoneinfo/0.2.1/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/backports.zoneinfo/0.2.1/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 
DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/backports.zoneinfo/0.2.1/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/bcrypt/3.2.2/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/bcrypt/3.2.2/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/black/22.3.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/black/22.3.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/boto3/1.22.13/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/boto3/1.22.13/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/botocore/1.25.13/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/botocore/1.25.13/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/cachecontrol/0.12.11/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/CacheControl/0.12.11/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/CacheControl/0.12.11/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/certifi/2021.10.8/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/certifi/2021.10.8/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/cffi/1.15.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7238 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/cffi/1.15.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/cfgv/3.3.1/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/cfgv/3.3.1/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/charset-normalizer/2.0.12/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/charset-normalizer/2.0.12/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/click/8.1.3/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 
DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/click/8.1.3/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/coverage/6.3.3/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/coverage/6.3.3/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/cryptography/37.0.2/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/cryptography/37.0.2/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/cyclonedx-python-lib/2.3.0/json" in the cache DEBUG:cachecontrol.controller:Current age based on date: 7237 DEBUG:cachecontrol.controller:Freshness lifetime from max-age: 900 DEBUG:urllib3.connectionpool:https://pypi.org:443 "GET /pypi/cyclonedx-python-lib/2.3.0/json HTTP/1.1" 304 0 DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) 
DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag 
information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json/" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) DEBUG:cachecontrol.controller:Looking up "https://pypi.org/pypi/deep-security-api/12.0.466/json" in the cache DEBUG:cachecontrol.controller:Returning cached permanent redirect response (ignoring date and etag information) Traceback (most recent call last): File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/bin/pip-audit", line 8, in <module> sys.exit(audit()) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/pip_audit/_cli.py", line 370, in audit for (spec, vulns) in auditor.audit(source): File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/pip_audit/_audit.py", line 66, in audit for dep, vulns in self._service.query_all(specs): File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/pip_audit/_service/interface.py", line 143, in query_all yield self.query(spec) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/pip_audit/_service/pypi.py", line 58, in query response: requests.Response = self.session.get(url=url, timeout=self.timeout) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 542, in get return self.request('GET', url, **kwargs) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 529, in request resp = self.send(prep, **send_kwargs) File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 667, in send history = [resp for resp in gen] File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 667, in <listcomp> history = [resp for resp in gen] File "/Users/paul/.local/share/virtualenvs/portal-bVWoHG0U/lib/python3.7/site-packages/requests/sessions.py", line 166, in resolve_redirects raise TooManyRedirects('Exceeded {} redirects.'.format(self.max_redirects), response=resp) requests.exceptions.TooManyRedirects: Exceeded 30 redirects.
I never did find a real solution to the problem, but I did make this workaround; just delete deepsecurity! For reference, here is the script that I use in my CI pipeline set -e # Remove trend deep security, as it causes pip-audit to fail pipenv run pip uninstall -y deep-security-api pipenv run pip-audit
4
1
72,963,553
2022-7-13
https://stackoverflow.com/questions/72963553/opentelemetry-api-vs-sdk
I'm confused as to why the OpenTelemetry documentation has both an OpenTelemetry Python API and an OpenTelemetry Python SDK. When using the specification in Python, when should we consider pip install opentelemetry-api over pip install opentelemetry-sdk?
There's no need to specify both the API and SDK dependencies in Python as the SDK has a dependency on the API: https://github.com/open-telemetry/opentelemetry-python/blob/main/opentelemetry-sdk/pyproject.toml#L29 The short answer for when to use each is that the API should be specified as a dependency for packages/libraries and the SDK for applications. The reason for this is that API contains the interface for instrumenting code with traces, metrics and logs. The SDK contains the additional functionality to collect, process and export that data. For example, if I wanted to include instrumentation in a PyPI package that I've created, I would add the API as a dependency and use that to instrument my code. Then if someone were to use my PyPI package in their application, they could include the instrumentation data that my package generates. However, they would have to install the SDK in order to configure their application to do anything with that telemetry data other than simply generate it, e.g. sending it to an OpenTelemetry collector or third party.
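To make the split concrete, here is a minimal sketch; the module name and the console exporter are illustrative choices, not something mandated by the question. The library code only touches the API, while the application wires up the SDK so the generated spans actually go somewhere:

# Library code: depends only on opentelemetry-api
from opentelemetry import trace

tracer = trace.get_tracer("my_library")  # hypothetical instrumentation scope name

def do_work():
    with tracer.start_as_current_span("do_work"):
        ...  # the library's real work

# Application code: depends on opentelemetry-sdk
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

do_work()  # without the SDK configured, the span would be a no-op; now it is exported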
9
5
72,980,095
2022-7-14
https://stackoverflow.com/questions/72980095/pyspark-cumulative-sum-with-limits
I have a dataframe as follows: +-------+----------+-----+ |user_id| date|valor| +-------+----------+-----+ | 1|2022-01-01| 0| | 1|2022-01-02| 0| | 1|2022-01-03| 1| | 1|2022-01-04| 1| | 1|2022-01-05| 1| | 1|2022-01-06| 0| | 1|2022-01-07| 0| | 1|2022-01-08| 0| | 1|2022-01-09| 1| | 1|2022-01-10| 1| | 1|2022-01-11| 1| | 1|2022-01-12| 0| | 1|2022-01-13| 0| | 1|2022-01-14| -1| | 1|2022-01-15| -1| | 1|2022-01-16| -1| | 1|2022-01-17| -1| | 1|2022-01-18| -1| | 1|2022-01-19| -1| | 1|2022-01-20| 0| +-------+----------+-----+ The goal is to calculate a score for the user_id using valor as base, it will start from 3 and increase or decrease by 1 as it goes in the valor column. The main problem here is that my score can't be under 1 and can't be over 5, so the sum must always stay on the range and not lose the last value so I can compute it right. So what I expect is this: +-------+----------+-----+-----+ |user_id| date|valor|score| +-------+----------+-----+-----+ | 1|2022-01-01| 0| 3| | 1|2022-01-02| 0| 3| | 1|2022-01-03| 1| 4| | 1|2022-01-04| 1| 5| | 1|2022-01-05| 1| 5| | 1|2022-01-06| 0| 5| | 1|2022-01-07| 0| 5| | 1|2022-01-08| 0| 5| | 1|2022-01-09| 1| 5| | 1|2022-01-10| -1| 4| | 1|2022-01-11| -1| 3| | 1|2022-01-12| 0| 3| | 1|2022-01-13| 0| 3| | 1|2022-01-14| -1| 2| | 1|2022-01-15| -1| 1| | 1|2022-01-16| 1| 2| | 1|2022-01-17| -1| 1| | 1|2022-01-18| -1| 1| | 1|2022-01-19| 1| 2| | 1|2022-01-20| 0| 2| +-------+----------+-----+-----+ So far, I've done a window to rank the column valor, so I can keep track of the quantity of increases or decreases in sequence and remove from valor the sequences larger then 4, but I don't know how to keep the sum in valor_ in the range (1:5): +-------+----------+----+-----+------+ |user_id| date|rank|valor|valor_| +-------+----------+----+-----+------+ | 1|2022-01-01| 0| 0| 0| | 1|2022-01-02| 0| 0| 0| | 1|2022-01-03| 1| 1| 1| | 1|2022-01-04| 2| 1| 1| | 1|2022-01-05| 3| 1| 1| | 1|2022-01-06| 0| 0| 0| | 1|2022-01-07| 0| 0| 0| | 1|2022-01-08| 0| 0| 0| | 1|2022-01-09| 1| 1| 1| | 1|2022-01-10| 2| 1| 1| | 1|2022-01-11| 3| 1| 1| | 1|2022-01-12| 0| 0| 0| | 1|2022-01-13| 0| 0| 0| | 1|2022-01-14| 1| -1| -1| | 1|2022-01-15| 2| -1| -1| | 1|2022-01-16| 3| -1| -1| | 1|2022-01-17| 4| -1| -1| | 1|2022-01-18| 5| -1| 0| | 1|2022-01-19| 6| -1| 0| As you can see, the result here is not what I expected: +-------+----------+----+-----+------+-----+ |user_id| date|rank|valor|valor_|score| +-------+----------+----+-----+------+-----+ | 1|2022-01-01| 0| 0| 0| 3| | 1|2022-01-02| 0| 0| 0| 3| | 1|2022-01-03| 1| 1| 1| 4| | 1|2022-01-04| 2| 1| 1| 5| | 1|2022-01-05| 3| 1| 1| 6| | 1|2022-01-06| 0| 0| 0| 6| | 1|2022-01-07| 0| 0| 0| 6| | 1|2022-01-08| 0| 0| 0| 6| | 1|2022-01-09| 1| 1| 1| 7| | 1|2022-01-10| 2| 1| 1| 8| | 1|2022-01-11| 3| 1| 1| 9| | 1|2022-01-12| 0| 0| 0| 9| | 1|2022-01-13| 0| 0| 0| 9| | 1|2022-01-14| 1| -1| -1| 8| | 1|2022-01-15| 2| -1| -1| 7| | 1|2022-01-16| 3| -1| -1| 6| | 1|2022-01-17| 4| -1| -1| 5| | 1|2022-01-18| 5| -1| 0| 5| | 1|2022-01-19| 6| -1| 0| 5| | 1|2022-01-20| 0| 0| 0| 5|
In such cases, we usually think of window functions to do a calculation going from one row to next. But this case is different, because the window should kind of keep track of itself. So window cannot help. Main idea. Instead of operating with rows, one can do the work with grouped/aggregated arrays. In this case, it would work very well, because we do have a key to use in groupBy, so the table will be divided into chunks of data, so the calculations will be parallelized. Input: from pyspark.sql import functions as F df = spark.createDataFrame( [(1, '2022-01-01', 0), (1, '2022-01-02', 0), (1, '2022-01-03', 1), (1, '2022-01-04', 1), (1, '2022-01-05', 1), (1, '2022-01-06', 0), (1, '2022-01-07', 0), (1, '2022-01-08', 0), (1, '2022-01-09', 1), (1, '2022-01-10', 1), (1, '2022-01-11', 1), (1, '2022-01-12', 0), (1, '2022-01-13', 0), (1, '2022-01-14', -1), (1, '2022-01-15', -1), (1, '2022-01-16', -1), (1, '2022-01-17', -1), (1, '2022-01-18', -1), (1, '2022-01-19', -1), (1, '2022-01-20', 0)], ['user_id', 'date', 'valor']) Script: df = df.groupBy('user_id').agg( F.aggregate( F.array_sort(F.collect_list(F.struct('date', 'valor'))), F.expr("array(struct(cast(null as string) date, 0L valor, 3L cum))"), lambda acc, x: F.array_union( acc, F.array(x.withField( 'cum', F.greatest(F.lit(1), F.least(F.lit(5), x['valor'] + F.element_at(acc, -1)['cum'])) )) ) ).alias("a") ) df = df.selectExpr("user_id", "inline(slice(a, 2, size(a)))") df.show() # +-------+----------+-----+---+ # |user_id| date|valor|cum| # +-------+----------+-----+---+ # | 1|2022-01-01| 0| 3| # | 1|2022-01-02| 0| 3| # | 1|2022-01-03| 1| 4| # | 1|2022-01-04| 1| 5| # | 1|2022-01-05| 1| 5| # | 1|2022-01-06| 0| 5| # | 1|2022-01-07| 0| 5| # | 1|2022-01-08| 0| 5| # | 1|2022-01-09| 1| 5| # | 1|2022-01-10| 1| 5| # | 1|2022-01-11| 1| 5| # | 1|2022-01-12| 0| 5| # | 1|2022-01-13| 0| 5| # | 1|2022-01-14| -1| 4| # | 1|2022-01-15| -1| 3| # | 1|2022-01-16| -1| 2| # | 1|2022-01-17| -1| 1| # | 1|2022-01-18| -1| 1| # | 1|2022-01-19| -1| 1| # | 1|2022-01-20| 0| 1| # +-------+----------+-----+---+ Explanation Groups are created based on "user_id". The aggregation for these groups lies in this line: F.array_sort(F.collect_list(F.struct('date', 'valor'))) This creates arrays (collect_list) for every "user_id". These arrays contain structs of 2 fields: date and value. +-------+-----------------------------------------------+ |user_id|a | +-------+-----------------------------------------------+ |1 |[{2022-01-01, 0}, {2022-01-02, 0}, {...} ... ] | +-------+-----------------------------------------------+ array_sort is used to make sure all the structs inside are sorted, because other steps will depend on it. All the rest what's inside agg is for transforming the result of the above aggregation. The main part in the code is aggregate. It takes an array, "loops" through every element and returns one value (in our case, this value is made to be array too). It works like this... You take the initial value (array(struct(cast(null as string) date, 0L valor, 3L cum)) and merge it with the first element in the array using the provided function (lambda). The result is then used in place of initial value for the next run. You do the merge again, but with the following element in the array. And so on. In this case, the lambda function performs array_union, which makes a union of arrays having identic schemas. 
We take the initial value (array of structs) as acc variable [{null, 0, 3}] (it's already ready to be used in array_union) take the first element inside 'a' column's array (i.e. ) as x variable {2022-01-01, 0} (it's a struct, so the schema is not the same with acc (array of structs), so some processing is needed, and also the calculation needs to be done at this step, as we have access to both of the variables at this point) we'll create the array of structs by enclosing the x struct inside F.array(); also, we'll have to add one more field to the struct, as x struct currently has just 2 fields F.array(x.withField('cum', ...)) inside the .withField() we have to provide the expression for the field F.greatest( F.lit(1), F.least( F.lit(5), x['valor'] + F.element_at(acc, -1)['cum'] ) ) element_at(acc, -1) takes the last struct of acc array ['cum'] takes the field 'cum' from the struct x['valor'] + adds 'valor' field from the x struct F.least() assures that the max value in 'cum' will stay 5 (takes the min value from the new 'cum' and 5) F.greatest() assures that the min value in 'cum' will stay 1 both acc and the newly created array of structs now have identic schemas and proper data, so they can be unionized array_union the result is now being assigned to acc variable, while x variable gets assigned the next value from the 'a' array. The process continues from step 3. Finally, the result of aggregate looks like [{null, 0, 3}, {2022-01-01, 0, 3}, {2022-01-02, 0, 3}, {2022-01-03, 1, 4}, {...} ... ] The first element is removed using slice(..., 2, size(a)) inline is used to explode the array of structs. Note. It's important to create the initial value of aggregate such that it would contain proper schema (column/field names and types): F.expr("array(struct(cast(null as string) date, 0L valor, 3L cum))") Those L letters tell that 0 and 3 are of bigint (long) data type. (sql-ref-literals) The same could have been written like this: F.expr("array(struct(null, 0, 3))").cast('array<struct<date:string,valor:bigint,cum:bigint>>')
5
6
73,021,768
2022-7-18
https://stackoverflow.com/questions/73021768/what-is-the-difference-between-the-mlrose-project-and-mlose-hiive
I can't find the difference between the mlrose (https://pypi.org/project/mlrose/) project and mlrose-hiive (https://pypi.org/project/mlrose-hiive/). I know hiive has some kind of extensions compared to the original mlrose, but I can't find any documentation or anything else that explains the new features.
According to a private training forum I have access to, it seems mlrose-hiive: mostly adds improvements and fixes some dependency bugs, such as the six package issue (Python import error: cannot import name 'six' from 'sklearn.externals'), and is backwards-compatible, so the mlrose readthedocs documentation should still be OK; they even recommend that in the project repo readme (https://github.com/hiive/mlrose). Also: https://pypi.org/project/mlrose-hiive/ Therefore, refer to the documentation at https://mlrose.readthedocs.io/en/stable/
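In practice the swap is just the package name. A minimal sketch, under the assumption that the fork's import name is mlrose_hiive and that it keeps the original constructors (which its backwards-compatibility claim implies):

# pip install mlrose-hiive   (instead of: pip install mlrose)
import mlrose_hiive as mlrose  # drop-in replacement; existing mlrose examples still apply

fitness = mlrose.OneMax()
problem = mlrose.DiscreteOpt(length=8, fitness_fn=fitness, maximize=True, max_val=2)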
3
4
73,010,915
2022-7-17
https://stackoverflow.com/questions/73010915/multiple-array-agg-in-sqlalchemy
I am working with postgres. I want to fetch multiple fields using array_agg in sqlalchemy. But I couldn't find examples of such use anywhere. I made my request. But I can't process the result of array_agg. I'd like to get a list of strings, or better yet a list of tuples. It would also be nice to get rid of func.distinct, it's only needed because I can't write it like this: func.array_agg((Task.id, Task.user_id)) My query: data = session.query( Status.id, func.array_agg(func.distinct(Task.id, Task.user_id), type_=TEXT) ).join(Task).group_by(Status.id).limit(5).all() I got: (100, '{"(91,1)","(92,1)","(93,1)","(94,1)"}') (200, '{"(95,1)","(96,1)","(97,1)","(98,1)","(99,1)"}') But I want: (100, ["(91,1)","(92,1)","(93,1)","(94,1)"]) (200, ["(95,1)","(96,1)","(97,1)","(98,1)","(99,1)"]) Or better: (100, [(91,1),(92,1),(93,1),(94,1)]) (200, [(95,1),(96,1),(97,1),(98,1),(99,1)]) I try also: func.array_agg(func.distinct(Task.id, Task.user_id), type_=ARRAY(TEXT)) I got: (100, ['{', '"', '(', '9', '1', ',', '1', ')', '"', ',', '"', '(', '9', '2', ',', '1', ')', '"', ',', '"', '(', '9', '3', ',', '1', ')', '"', ',', '"', '(', '9', '4', ',', '1', ')', '"', '}']) (200, ['{', '"', '(', '9', '5', ',', '1', ')', '"', ',', '"', '(', '9', '6', ',', '1', ')', '"', ',', '"', '(', '9', '7', ',', '1', ')', '"', ',', '"', '(', '9', '8', ',', '1', ')', '"', ',', '"', '(', '9', '9', ',', '1', ')', '"', '}'])
The problem here is that PostgreSQL's array_agg function is returning an array of unknown type; the default behaviour of the psycopg2 connector in this situation is to simply return the array literal as-is. This bug report dates from 2016. SQLAlchemy's maintainer, SO user zzzeek, proposed creating a custom type to handle this case. I have modified the solution slightly to convert the tuple elements to integers, and to work with v1.4: import re import sqlalchemy as sa from sqlalchemy.types import TypeDecorator class ArrayOfRecord(TypeDecorator): impl = sa.String # cache_ok = True seems to work, but I haven't tested extensively cache_ok = True def process_result_value(self, value, dialect): elems = re.match(r"^\{(\".+?\")*\}$", value).group(1) elems = [e for e in re.split(r'"(.*?)",?', elems) if e] return [tuple( map(int, re.findall(r'[^\(\),]+', e)) ) for e in elems] Using it like this: with Session() as session: data = ( session.query( Status.id, sa.func.array_agg( sa.func.ROW(Task.id, Task.user_id), type_=ArrayOfRecord ).label('agg') ) .join(Task) .group_by(Status.id) ) print() for row in data: print(row) print() outputs (100, [(91, 1), (92, 1), (93, 1), (94, 1)]) (200, [(95, 1), (96, 1), (97, 1), (98, 1)])
3
4
73,031,189
2022-7-19
https://stackoverflow.com/questions/73031189/backing-a-cisco-router-using-napalm-using-remote-login-using-ssh
This image is the GNS3 diagram of the routers I want to configure. I am trying to back up the configuration of a Cisco router, but the connection is not opening. from napalm import * import napalm drivers = napalm.get_network_driver('ios') device_detail = {'hostname':'192.168.1.2','username':'wahid','password':'wahid'} router = drivers(**device_detail) router.open() #The problem is here <- Exception has occurred: ValueError #Failed to enter enable mode. Please ensure you pass the 'secret' argument to #ConnectHandler. print('Connection is Opened with ->{}'.format(device_detail['hostname'])) config = router.get_config() print('Configuratin on this {} router ->'.format(device_detail['hostname']))
Can you try as follows: from napalm import get_network_driver from getpass import getpass hostname = input("IP address of router: ") username = input(f"Username of {hostname}: ") password = getpass(f"Password of {hostname}") secret = getpass(f"Enable password of {hostname}: ") driver = get_network_driver("ios") device_detail = { "hostname": hostname, "username": username, "password": password, "optional_args": { "secret": secret } } with driver(**device_detail) as router: print(router.get_facts())
3
3
72,964,480
2022-7-13
https://stackoverflow.com/questions/72964480/cannot-import-tensorflow-text
I have a problem importing tensorflow_text. I tried importing it in the two ways below, but neither worked: import tensorflow_text as text import tensorflow_text as tf_text My tensorflow version is 2.9.1 and my Python version is 3.7.13. I tried installing tensorflow_text using the two methods below, but neither is working: !pip install tensorflow-text !pip install -U tensorflow-text==2.9.0 I am using Colab. I also tried reinstalling tensorflow, but it still generates the error below: NotFoundError: /usr/local/lib/python3.7/distpackages/tensorflow_text/python/ops/_sentencepiece_tokenizer.so: undefined symbol: _ZNSt7__cxx1119basic_ostringstreamIcSt11char_traitsIcESaIcEEC1Ev
Update: sometimes you need to reinstall and update tensorflow and then install tensorflow_text (because tensorflow.__version__ and tensorflow_text.__version__ need to be the same version). !pip install -U tensorflow !pip install -U tensorflow-text import tensorflow as tf import tensorflow_text as text # Or install with a specific Version !pip install -U "tensorflow==2.8.*" !pip install -U "tensorflow-text==2.8.*" import tensorflow as tf import tensorflow_text as text Old answer: First, install tensorflow-text version 2.8.* like below: !pip install -q -U "tensorflow-text==2.8.*" Then import tensorflow-text like below: import tensorflow_text as text
5
4
73,026,671
2022-7-18
https://stackoverflow.com/questions/73026671/how-do-i-now-since-june-2022-send-an-email-via-gmail-using-a-python-script
I had a Python script which did this. I had to enable something in the Gmail account. For maybe 3 years the script then ran like this: import smtplib, ssl ... subject = 'some subject message' body = """text body of the email""" sender_email = '[email protected]' receiver_email = '[email protected]' # Create a multipart message and set headers message = MIMEMultipart() message['From'] = 'Mike' message['To'] = receiver_email message['Subject'] = subject # Add body to email message.attach(MIMEText(body, 'plain')) # Open file in binary mode with open( client_zip_filename, 'rb') as attachment: # Add file as application/octet-stream # Email client can usually download this automatically as attachment part = MIMEBase('application', 'octet-stream') part.set_payload(attachment.read()) # Encode file in ASCII characters to send by email encoders.encode_base64(part) # Add header as key/value pair to attachment part part.add_header( 'Content-Disposition', f'attachment; filename={subject}', ) # Add attachment to message and convert message to string message.attach(part) text = message.as_string() # Log in to server using secure context and send email context = ssl.create_default_context() with smtplib.SMTP_SSL('smtp.gmail.com', 465, context=context) as server: print( 'waiting to login...') server.login(sender_email, password) print( 'waiting to send...') server.sendmail(sender_email, receiver_email, text) print( 'email appears to have been sent') In May or so of this year I got a message from Google saying that authority to use emails from scripts would be tightened. "Oh dear", I thought. Some time in June I found that the above script no longer works, and raises an exception, specifically on the line server.login(sender_email, password): ... File "D:\My documents\software projects\operative\sysadmin_py\src\job_backup_routine\__main__.py", line 307, in main server.login(sender_email, password) File "c:\users\mike\appdata\local\programs\python\python39\lib\smtplib.py", line 745, in login raise last_exception File "c:\users\mike\appdata\local\programs\python\python39\lib\smtplib.py", line 734, in login (code, resp) = self.auth( File "c:\users\mike\appdata\local\programs\python\python39\lib\smtplib.py", line 657, in auth raise SMTPAuthenticationError(code, resp) smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted. Learn more at\n5.7.8 https://support.google.com/mail/?p=BadCredentials p14-20020aa7cc8e000000b00435651c4a01sm8910838edt.56 - gsmtp') ... I was thus not entirely surprised by this, and have now gone looking for a solution. I have got this idea that the way forward is something called "OAuth consent" (I don't have any idea what this is...) I found this answer and tried to follow the steps there. Here is my account of trying to follow step 1: I went to this Google configuration page and chose "my_gmail_account_name", the account I want to send emails from ... new "project", name: test-project-2022-07-18 location: default ("No organisation") clicked Create clicked NEXT clicked ENABLE clicked the icon to enable the "Google Developer Console" in the hamburger menu (top left) there is an item "APIs and services" ... one item there is "Credentials" - clicked one item in the left-hand list is "OAuth consent screen" another item is "Credentials". Clicked this: then, at the top, "+ CREATE CREDENTIALS" in the dropdown menu, choose "OAuth Client ID" clicked "CONFIGURE CONSENT SCREEN" radio buttons: "Internal" and "External". chose latter. 
clicked "CREATE" under "App information": "App name": sysadmin_py "User support email": [email protected] "Developer contact information": [email protected] clicked "SAVE AND CONTINUE" then find myself on a page about "SCOPES", with a button "ADD OR REMOVE SCOPES"... At this point I'm meant to be following "Step 1" instruction "d. Select the application type Other, enter the name "Gmail API Quickstart" and click the Create button"... but nothing of this kind is in view! The update to that answer was done in 2021-04. A year later the interface in Google appears to have changed radically. Or maybe I have taken the wrong path and disappeared down a rabbit hole. I have no idea what to do. Can anyone help?
Google has recently made changes to access of less secure apps (read here: https://myaccount.google.com/lesssecureapps). In order to make your script work again, you'll need to make a new app password for it. Directions to do so are below: Go to My Account in Gmail and click on Security. After that, scroll down to choose the Signing into Google option. Now, click on App Password. (Note: You can see this option when two-step authentication is enabled). To enable two-step authentication: From the Signing into Google, click on the Two-step Verification option and then enter the password. Then Turn ON the two-step verification by entering the OTP code received on the mobile. (Here's a quick link to the same page: https://myaccount.google.com/apppasswords) Here, you can see a list of applications, choose the required one. Next, pick the Select Device option and click on the device which is being used to operate Gmail. Now, click on Generate. After that, enter the Password shown in the Yellow bar. Lastly, click on Done. (Source: https://www.emailsupport.us/blog/gmail-smtp-not-working/) Simply switch out the password the script is using for this newly generated app password. This worked for me and I wish the same for you. I hope this helps!
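As a minimal sketch of how the generated app password slots into a script like the one in the question (the 16-character app password below is a placeholder, and the addresses are the same stand-ins the question uses):

import smtplib, ssl

sender_email = '[email protected]'
receiver_email = '[email protected]'
app_password = 'abcdefghijklmnop'   # placeholder: the 16-character app password generated above

context = ssl.create_default_context()
with smtplib.SMTP_SSL('smtp.gmail.com', 465, context=context) as server:
    # the only change to the original script: log in with the app password
    # instead of the normal Gmail account password
    server.login(sender_email, app_password)
    server.sendmail(sender_email, receiver_email, 'Subject: test\n\nApp password login works.')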
10
26
73,013,333
2022-7-17
https://stackoverflow.com/questions/73013333/how-to-make-an-angled-arrow-style-border-in-pyqt5
How to make an Angled arrow-type border in PyQt QFrame? In My code, I Have two QLabels and respective frames. My aim is to make an arrow shape border on right side of every QFrame.For clear-cut idea, attach a sample picture. import sys from PyQt5.QtWidgets import * class Angle_Border(QWidget): def __init__(self): super().__init__() self.setWindowTitle("Angle Border") self.lbl1 = QLabel("Python") self.lbl2 = QLabel("PyQt") self.frame1 = QFrame() self.frame1.setProperty("type","1") self.frame1.setFixedSize(200,50) self.frame1.setStyleSheet("background-color:red;color:white;" "font-family:Trebuchet MS;font-size: 15pt;text-align: center;" "border-top-right-radius:25px solid ; border-bottom-right-radius:25px solid ;") self.frame2 = QFrame() self.frame2.setFixedSize(200, 50) self.frame2.setStyleSheet("background-color:blue;color:white;" "font-family:Trebuchet MS;font-size: 15pt;text-align: center;" "border-top:1px solid transparent; border-bottom:1px solid transparent;") self.frame_outer = QFrame() self.frame_outer.setFixedSize(800, 60) self.frame_outer.setStyleSheet("background-color:green;color:white;" "font-family:Trebuchet MS;font-size: 15pt;text-align: center;") self.frame1_layout = QHBoxLayout(self.frame1) self.frame2_layout = QHBoxLayout(self.frame2) self.frame_outer_layout = QHBoxLayout(self.frame_outer) self.frame_outer_layout.setContentsMargins(5,0,0,0) self.frame1_layout.addWidget(self.lbl1) self.frame2_layout.addWidget(self.lbl2) self.hbox = QHBoxLayout() self.layout = QHBoxLayout() self.hbox.addWidget(self.frame1) self.hbox.addWidget(self.frame2) self.hbox.addStretch() self.hbox.setSpacing(0) # self.layout.addLayout(self.hbox) self.frame_outer_layout.addLayout(self.hbox) self.layout.addWidget(self.frame_outer) self.setLayout(self.layout) def main(): app = QApplication(sys.argv) ex = Angle_Border() ex.show() sys.exit(app.exec_()) if __name__ == '__main__': main() Sample Picture
Since the OP didn't ask for user interaction (mouse or keyboard), a possible solution could use the existing features of Qt, specifically QSS (Qt Style Sheets). While the currently previously accepted solution does follow that approach, it's not very effective, most importantly because it's basically "static", since it always requires knowing the color of the following item in order to define the "arrow" colors. This not only forces the programmer to always consider the "sibling" items, but also makes extremely (and unnecessarily) complex the dynamic creation of such objects. The solution is to always (partially) "redo" the layout and update the stylesheets with the necessary values, which consider the current size (which shouldn't be hardcoded), the following item (if any) and carefully using the layout properties and "spacer" stylesheets based on the contents. The following code uses a more abstract, dynamic approach, with basic functions that allow adding/insertion and removal of items. It still uses a similar QSS method, but, with almost the same "line count", it provides a simpler and much more intuitive approach, allowing item creation, deletion and modification with single function calls that are much easier to use. A further benefit of this approach is that implementing "reverse" arrows is quite easy, and doesn't break the logic of the item creation. Considering all the above, you can create an actual class that just needs basic calls such as addItem() or removeItem(). from PyQt5.QtCore import * from PyQt5.QtGui import * from PyQt5.QtWidgets import * class ArrowMenu(QWidget): vMargin = -1 hMargin = -1 def __init__(self, items=None, parent=None): super().__init__(parent) layout = QHBoxLayout(self) layout.setContentsMargins(0, 0, 0, 0) layout.setSpacing(0) layout.addStretch() self.items = [] if isinstance(items, dict): self.addItems(items.items()) elif items is not None: self.addItems(items) def addItems(self, items): for item in items: if isinstance(item, str): self.addItem(item) else: self.addItem(*item) def addItem(self, text, background=None): self.insertItem(len(self.items), text, background) def insertItem(self, index, text, background=None): label = QLabel(text) if background is None: background = self.palette().window().color() background.setAlpha(0) else: background = QColor(background) # human eyes perceive "brightness" in different ways, let's compute # that value in order to decide a color that has sufficient contrast # with the background; see https://photo.stackexchange.com/q/10412 r, g, b, a = background.getRgbF() brightness = r * .3 + g * .59 + b * .11 foreground = 'black' if brightness >= .5 else 'white' label.setStyleSheet('color: {}; background: {};'.format( foreground, background.name(background.HexArgb))) layout = self.layout() if index < len(self.items): i = 0 for _label, _spacer, _ in self.items: if i == index: i += 1 layout.insertWidget(i * 2, _label) layout.insertWidget(i * 2 + 1, _spacer) i += 1 layout.insertWidget(index * 2, label) spacer = QWidget(objectName='menuArrow') layout.insertWidget(index * 2 + 1, spacer) self.items.insert(index, (label, spacer, background)) self.updateItems() def removeItem(self, index): label, spacer, background = self.items.pop(index) label.deleteLater() spacer.deleteLater() layout = self.layout() for i, (label, spacer, _) in enumerate(self.items): layout.insertWidget(i * 2, label) layout.insertWidget(i * 2 + 1, spacer) self.updateItems() self.updateGeometry() def updateItems(self): if not self.items: return size = 
self.fontMetrics().height() if self.vMargin < 0: vSize = size * 2 else: vSize = size + self.vMargin * 2 spacing = vSize / 2 self.setMinimumHeight(vSize) if self.hMargin >= 0: labelMargin = self.hMargin * 2 else: labelMargin = size // 2 it = iter(self.items) prevBackground = prevSpacer = None while True: try: label, spacer, background = next(it) label.setContentsMargins(labelMargin, 0, labelMargin, 0) spacer.setFixedWidth(spacing) except StopIteration: background = QColor() break finally: if prevBackground: if background.isValid(): cssBackground = background.name(QColor.HexArgb) else: cssBackground = 'none' if prevBackground.alpha(): prevBackground = prevBackground.name(QColor.HexArgb) else: mid = QColor(prevBackground) mid.setAlphaF(.5) prevBackground = ''' qlineargradient(x1:0, y1:0, x2:1, y2:0, stop:0 {}, stop:1 {}) '''.format( prevBackground.name(QColor.HexArgb), mid.name(QColor.HexArgb), ) prevSpacer.setStyleSheet(''' ArrowMenu > .QWidget#menuArrow {{ background: transparent; border-top: {size}px solid {background}; border-bottom: {size}px solid {background}; border-left: {spacing}px solid {prevBackground}; }} '''.format( size=self.height() // 2, spacing=spacing, prevBackground=prevBackground, background=cssBackground )) prevBackground = background prevSpacer = spacer def resizeEvent(self, event): self.updateItems() if __name__ == '__main__': import sys app = QApplication(sys.argv) items = ( ('Python', 'green'), ('Will delete', 'chocolate'), ('PyQt5', 'red'), ('Java', 'blue'), ('ASP.Net', 'yellow'), ) ex = ArrowMenu(items) ex.show() QTimer.singleShot(2000, lambda: ex.addItem('New item', 'aqua')) QTimer.singleShot(5000, lambda: ex.removeItem(1)) sys.exit(app.exec_()) And here is the result:
8
5
73,040,397
2022-7-19
https://stackoverflow.com/questions/73040397/vscode-python-debugger-stopped-launching
I'm using VSCode v1.69.0. My OS is MacOS v10.15.7 (Catalina). I have a Python script that I usually debug with the VSCode Python debugger (using the launch.json file). For no apparent reason the debugger recently stopped working, i.e. when I click on "Run Debug", the debug icons (stop, resume, step over etc.) show up for 1 second, then disappear and nothing else happens. There is no error message in the terminal or anything (no command is launched apparently). I tried to uninstall VSCode and reinstall it => no success. I deleted the Python extension in /userprofile/.vscode/extensions and then reinstalled it (both the current v2022.10.1 and pre-release v2022.11.12011103) => no success. The Python program runs properly in another IDE. Any idea?
If you are using Python 3.6, the latest version of the debugger no longer supports it. You can use the historical version 2022.08.* of the extension, or use a newer Python version.
4
7
73,022,745
2022-7-18
https://stackoverflow.com/questions/73022745/vscode-and-jupyter-notebook-changes-in-python-script-code-dont-update
When I write code in the editor in VSCode and then try to import this code into a Jupyter notebook, the alterations I made in the code do not update - the code that runs in the Jupyter notebook is the code that was open when I initialized VSCode. To update the code I need to restart VSCode, open the Jupyter notebook, and import the code with the alterations made before restarting VSCode. I set the path to the folder, and I am using the same virtual environment. Is there any way to fix this?
You could download the script file from Jupyter as a ".py" file and run it in other editors. Most Jupyter environments support that (for example, the Jupyter bundled with Anaconda and Google Colab both do), but a notebook itself only runs in a program that supports the Python notebook format.
3
1
73,031,562
2022-7-19
https://stackoverflow.com/questions/73031562/how-to-disable-python-interactive-mode-in-vs-code
I prefer to use this extension, so I find it very annoying that every time I write a code cell or hit Shift+Enter, VS Code opens its internal interactive Python console. How do I stop this behavior?
You can view your keyboard shortcuts as follows: click the gear icon in the lower left corner of the interface; select Keyboard Shortcuts (Ctrl+K Ctrl+S); select Record Keys (Alt+K) on the right side of the input box; press the Shift and Enter keys; then view the functions bound to this shortcut in the list. Just delete the binding you don't want. UPDATE: if you uninstall the Jupyter extension, you will not get any interactive window features. Don't forget to restart VS Code after uninstalling.
3
9
73,028,924
2022-7-18
https://stackoverflow.com/questions/73028924/how-to-measure-time-spent-in-blocking-code-while-using-asyncio-in-python
I'm currently migrating some Python code that used to be blocking to use asyncio with async/await. It is a lot of code to migrate at once, so I would prefer to do it gradually and have metrics. With that in mind, I want to create a decorator to wrap some functions and know how long they block the event loop. For example: def measure_blocking_code(f): def wrapper(*args, **kwargs): # ????? # It should measure JUST 1 second # not 5 which is what the whole async function takes return wrapper @measure_blocking_code async def my_function(): my_blocking_function() # Takes 1 second await my_async_function() # Takes 2 seconds await my_async_function_2() # Takes 2 seconds I know the event loop has a debug feature that already reports this, but I need to get that information for specific functions.
TLDR; This decorator does the job: import asyncio, time def measure_blocking_code(f): async def wrapper(*args, **kwargs): t = 0 coro = f(*args, **kwargs) try: while True: t0 = time.perf_counter() future = coro.send(None) t1 = time.perf_counter() t += t1 - t0 while not future.done(): await asyncio.sleep(0) future.result() # raises exceptions if any except StopIteration as e: print(f'Function took {t:.2e} sec') return e.value return wrapper Explanation This workaround exploits the conventions used in the asyncio implementation in CPython. These conventions are a superset of PEP-492. In other words: you can generally use async/await without knowing these details. This might not work with other async libraries like trio. An asyncio coro object (coro) can be executed by calling its .send() member. This will only run the blocking code, until an async call yields a Future object. By only measuring the time spent in .send(), the duration of the blocking code can be determined.
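For reference, a minimal usage sketch of the decorator above, assuming it is in scope; the sleep calls are stand-ins for the blocking and async work named in the question:

import asyncio
import time

@measure_blocking_code
async def my_function():
    time.sleep(1)            # blocking work, should be counted
    await asyncio.sleep(2)   # async work, should not be counted

# expected to print roughly "Function took 1.00e+00 sec" even though the call takes ~3 s overall
asyncio.run(my_function())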
4
4
73,044,363
2022-7-19
https://stackoverflow.com/questions/73044363/python-convert-all-caps-into-title-case-without-messing-with-camel-case
I'm using python3 and would like to turn strings that contain all caps words (separately or inside a word) into title case (first letter capitalized). I do not want to disrupt one-off capital letters in the middle of a word (camel case), but if there are repeated capitalized letters, I want to keep only the first one capitalized. Here's the desired behavior >>> a = "TITLE BY DeSoto theHUMUNGUSone" >>> print(myfunc(a)) Title By DeSoto TheHumungusone In words, "capitalize the beginning of each word and then take any letter that follows a capital letter and make it lower case." The str.title() does the initial letters the way I want, but it makes all intra-word letters lower case, rather than just those following the first. I was playing with a regular expression approach that makes anything that follows an upper case letter lower case, but I kept getting every other letter capitalized.
A regular expression substitution with a lambda is the way to go: import re a = "TITLE BY DeSoto theHUMUNGUSone" print(re.sub('[A-Z]+', lambda x: x.group(0).title(), a)) Output: Title By DeSoto theHumungusone
4
6
73,033,580
2022-7-19
https://stackoverflow.com/questions/73033580/why-polars-scan-csv-is-even-faster-than-disk-reading-speed
I am testing Polars performance through the LazyDataFrame API, using polars.scan_csv with a filter. The performance is much better than I expected. Filtering a CSV file is even faster than the disk speed! WHY??? The CSV file is about 1.51 GB on my PC HDD. Testing code: import time import polars as pl t0 = time.time() lazy_df = pl.scan_csv("kline.csv") df = lazy_df.filter(pl.col('ts') == '2015-01-01').collect().to_pandas() print(time.time() - t0) > Output: 1.8616907596588135 It takes less than 2 seconds to scan the whole CSV file, which means that the scan speed is faster than 750 MB/s. That is much faster than the disk speed, apparently.
What you're probably seeing is a common problem in benchmarking: the caching of files by your operating system. Most modern operating systems will attempt to cache files that are accessed, if the amount of RAM permits. The first time you accessed the file, your operating system likely cached the 1.51 GB file in RAM (possibly even when you created the file). As such, subsequent retrievals are not really accessing your HDD -- they are running against the cached file in RAM, a process which is far faster than reading from your HDD. (Which is kind of the point of caching files in RAM.) An Example As an example, I created a 29.9 GB csv file, and purposely placed it on my NAS (network-attached storage) rather than on my local hard drive. For reference, my NAS and my machine are connected by a 10 gigabit/sec network. Running this benchmarking code the first time took about 54 seconds. import polars as pl import time start = time.perf_counter() ( pl.scan_csv('/mnt/bak-projects/StackOverflow/benchmark.csv') .filter(pl.col('col_0') == 100) .collect() ) print(time.perf_counter() - start) shape: (1, 27) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ col_0 ┆ col_1 ┆ col_2 ┆ ... ┆ col_22 ┆ col_23 ┆ col_24 ┆ col_25 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ i64 ┆ i64 ┆ i64 ┆ ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ═════β•ͺ════════β•ͺ════════β•ͺ════════β•ͺ════════║ β”‚ 1.0 ┆ 100 ┆ 100 ┆ 100 ┆ ... ┆ 100 ┆ 100 ┆ 100 ┆ 100 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ >>> print(time.perf_counter() - start) 53.92608916899917 So, reading a 29.9 GB file in 54 seconds is roughly 29.9 GB * (8 bits-per-byte) / 54 seconds = 4.4 gigabits per second. Not bad for retrieving files from a network drive. And certainly within the realm of possibility on my 10 gigabit/sec network. However, the file is now cached by my operating system (Linux) in RAM (I have 512 GB of RAM). So when I run the same benchmarking code a second time, it took a mere 3.5 seconds: shape: (1, 27) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ col_0 ┆ col_1 ┆ col_2 ┆ ... ┆ col_22 ┆ col_23 ┆ col_24 ┆ col_25 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ i64 ┆ i64 ┆ i64 ┆ ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ═════β•ͺ════════β•ͺ════════β•ͺ════════β•ͺ════════║ β”‚ 1.0 ┆ 100 ┆ 100 ┆ 100 ┆ ... ┆ 100 ┆ 100 ┆ 100 ┆ 100 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ >>> print(time.perf_counter() - start) 3.5459880090020306 If my 29.9 GB file was really pulled across my network, this would imply a network speed of at least 29.9 * 8 / 3.5 sec = 68 gigabits per second. (Clearly, not possible on my 10 Gigabit/sec network.) 
And a third time: 2.9 seconds shape: (1, 27) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ col_0 ┆ col_1 ┆ col_2 ┆ ... ┆ col_22 ┆ col_23 ┆ col_24 ┆ col_25 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ i64 ┆ i64 ┆ i64 ┆ ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ═════β•ͺ════════β•ͺ════════β•ͺ════════β•ͺ════════║ β”‚ 1.0 ┆ 100 ┆ 100 ┆ 100 ┆ ... ┆ 100 ┆ 100 ┆ 100 ┆ 100 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ >>> print(time.perf_counter() - start) 2.8593162479992316 Depending on your operating system, there is a way to flush cached files from RAM before benchmarking.
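As an illustration only, and assuming a Linux machine with root access (this is my assumption, not something stated in the answer), the page cache can be dropped between runs so that the next scan really does hit the disk:

import subprocess

# flush dirty pages to disk, then ask the kernel to drop the page cache;
# Linux-specific and requires root privileges
subprocess.run(["sync"], check=True)
subprocess.run(["sudo", "tee", "/proc/sys/vm/drop_caches"],
               input=b"3\n", stdout=subprocess.DEVNULL, check=True)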
6
14
73,025,746
2022-7-18
https://stackoverflow.com/questions/73025746/what-does-python-do-inside-the-gdb-debugger
I was debugging a C++ program in the gdb debugger and tried to access the 5th element of a vector which only contains 4 elements. After trying it, this error was on the screen: (gdb) list main 1 #include <memory> 2 #include <vector> 3 4 int main(int argc, char *argv[]){ 5 6 7 std::vector<int> v_num = {1, 3, 5, 67}; 8 std::unique_ptr<std::vector<int>> p(&v_num); 9 10 (gdb) p v_num.at (5) Python Exception <class 'IndexError'> Vector index "5" should not be >= 4.: Error while executing Python code. (gdb) I didn't expect to see a Python exception inside gdb. Can someone explain why I encountered such an error? Does gdb use Python internally?
Does gdb use Python internally? Yes, it uses Python a lot to extend itself in many ways, see https://sourceware.org/gdb/onlinedocs/gdb/Python.html#Python. What you discovered is called Python xmethods, see https://sourceware.org/gdb/onlinedocs/gdb/Xmethods-In-Python.html. Xmethods are used as a replacement for inlined or optimized-out methods defined in C++ source code. libstdc++ has a number of xmethods for standard containers. You can see them with the info xmethod command. For std::vector there are 6 xmethods defined on my box: libstdc++::vector size empty front back at operator[] You can disable all xmethods with the disable xmethod command. If you do that, GDB will be unable to call the at method: (gdb) p v_num.at (5) Cannot evaluate function -- may be inlined
4
4
73,042,986
2022-7-19
https://stackoverflow.com/questions/73042986/csv-to-json-converter-grouping-by-same-keys-values
I'm trying to convert csv format to JSON, I googled I'm not getting the correct way to modify it to get the desired one. This is my code in python: import csv import json def csv_to_json(csvFilePath, jsonFilePath): jsonArray = [] #reading csv (encoding is important) with open(csvFilePath, encoding='utf-8') as csvf: #csv library function csvReader = csv.DictReader(csvf) #convert each csv row into python dictionary for column in csvReader: #add this python dictionary to json array jsonArray.append(column) #convertion with open(jsonFilePath, 'w', encoding='utf-8') as jsonf: jsonString = json.dumps(jsonArray, indent=4) jsonf.write(jsonString) csvFilePath='example.csv' jsonFilePath='output.json' csv_to_json(csvFilePath, jsonFilePath) and this is my csv file format: My actual JSON Output: [ { "Area": "IT", "Employee": "Carl", }, { "Area": "IT", "Employee": "Walter", }, { "Area": "Financial Resources", "Employee": "Jennifer", } ] My desired JSON Output: [ { "Area": "IT", "Employee": ["Carl","Walter"], }, { "Area": "Financial Resources", "Employee": ["Jennifer"], } ] Thank you in advance!
Something like this should work. def csv_to_json(csvFilePath, jsonFilePath): areas = {} with open(csvFilePath, encoding='utf-8') as csvf: csvReader = csv.DictReader(csvf) for column in csvReader: area, employee = column["Area"], column["Employee"] # split values if area in areas: # add all keys and values to one dictionary areas[area].append(employee) else: areas[area] = [employee] # convert dictionary to desired output format. jsonArray = [{"Area": k, "Employee": v} for k,v in areas.items()] with open(jsonFilePath, 'w', encoding='utf-8') as jsonf: jsonString = json.dumps(jsonArray, indent=4) jsonf.write(jsonString)
4
3
73,042,044
2022-7-19
https://stackoverflow.com/questions/73042044/panda-multiply-dataframes-using-dictionary-to-map-columns
I am looking to multiply element-wise two dataframes with matching indices, using a dictionary to map which columns to multiply together. I can only come up with convoluted ways to do it and I am sure there is a better way, really appreciate the help! thx! df1: Index ABC DEF XYZ 01/01/2004 1 2 3 05/01/2004 4 7 2 df2: Index Echo Epsilon 01/01/2004 5 10 05/01/2004 -1 -2 Dictionary d = {'ABC': 'Echo', 'DEF': 'Echo', 'XYZ': 'Epsilon'} Expected result: Index ABC DEF XYZ 01/01/2004 5 10 30 05/01/2004 -4 -7 -4
You can use: # only if not already the index df1 = df1.set_index('Index') df2 = df2.set_index('Index') df1.mul(df2[df1.columns.map(d)].set_axis(df1.columns, axis=1)) or: df1.mul(df2.loc[df1.index, df1.columns.map(d)].values) output: ABC DEF XYZ Index 01/01/2004 5 10 30 05/01/2004 -4 -7 -4
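For completeness, a self-contained sketch that rebuilds the two frames from the question and applies the first expression above (the values are copied from the question's example):

import pandas as pd

df1 = pd.DataFrame({'ABC': [1, 4], 'DEF': [2, 7], 'XYZ': [3, 2]},
                   index=['01/01/2004', '05/01/2004'])
df2 = pd.DataFrame({'Echo': [5, -1], 'Epsilon': [10, -2]},
                   index=['01/01/2004', '05/01/2004'])
d = {'ABC': 'Echo', 'DEF': 'Echo', 'XYZ': 'Epsilon'}

# map each df1 column to its df2 counterpart, relabel, then multiply element-wise
result = df1.mul(df2[df1.columns.map(d)].set_axis(df1.columns, axis=1))
print(result)   # matches the expected result in the question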
3
3
73,039,481
2022-7-19
https://stackoverflow.com/questions/73039481/check-if-values-in-all-n-previous-rows-are-greater-than-the-value-of-current-row
I have a pandas dataframe like this: col_name 0 2 1 3 2 1 3 0 4 5 5 4 6 3 7 3 that could be created with the code: import pandas as pd dataframe = pd.DataFrame( { 'col_name': [2, 3, 1, 0, 5, 4, 3, 3] } ) Now, I want to get the rows which have a value less than the values in all the n previous rows. So, for n=2 the output is the rows: 2, 3, 6. Additionally I don't want to use any for-loops in my code. Just a trick that comes to my mind is that we can count the number of rows in n previous rows having a value less than the current row and then check if the number equals to n. Have you any idea for the code with my trick or in any other ways? and also is there a way to write a similar code except that if we want to check if values in n next rows are less than the value of the current row?
Let us use rolling to calculate the min value in the n previous rows, then compare that min value with the current row to create a boolean mask: df[df['col_name'] < df['col_name'].shift().rolling(2, min_periods=1).min()] col_name 2 1 3 0 6 3
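For the follow-up part of the question about the n next rows, one possible sketch (my own adaptation of the same idea, not part of the accepted answer) reverses the series so that "previous" in the reversed order means "next" in the original order:

import pandas as pd

dataframe = pd.DataFrame({'col_name': [2, 3, 1, 0, 5, 4, 3, 3]})

# min of the n=2 following rows: reverse, shift, rolling min, then reverse back
next_min = dataframe['col_name'][::-1].shift().rolling(2, min_periods=1).min()[::-1]

# rows whose value is smaller than every value in the next 2 rows
print(dataframe[dataframe['col_name'] < next_min])   # row 3 (value 0) for this data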
4
3
72,970,941
2022-7-13
https://stackoverflow.com/questions/72970941/dash-leaflet-click-feature-event
I have been using dash leaflet for creating my own dashboard with maps, and it has been great to be able to visualize things with an interactive map. However, there is one thing that I have been stumped on a while for how to deal with a click happening on a polygon or marker. To explain this, I have created a simple example below import geopandas as gpd import dash_leaflet as dl import dash_leaflet.express as dlx from dash import Dash, html, Output, Input import json location = gpd.GeoDataFrame(geometry=gpd.points_from_xy([-74.0060], [40.7128])) app = Dash() app.layout = html.Div( children=[ dl.Map( center=[39, -98], zoom=4, children=[ dl.TileLayer(), dl.GeoJSON(data=dlx.geojson_to_geobuf(json.loads(location.to_json())), format='geobuf', id='locations', zoomToBoundsOnClick=True)], style={'width': '100%', 'height': '80vh'}, id="map"), html.Div(id='counter')]) counter = 0 @app.callback(Output('counter', 'children'), Input('locations', 'click_feature')) def update_counter(feature): global counter counter += 1 return str(counter) if __name__ == "__main__": app.run_server(debug=True) And here is what the dashboard looks like when you first load it Below the map there is a div that contains the number of times the geojson has been clicked (and I realize that when initializing that the function gets called, but that isn't the focus of this problem). When you click on the marker the first time, the div gets updated and the number increases. However, if you were to try and click on the marker again, the div does not update and there is no increase in the number. What I have figured out is that an event will only be fired if you click on a different marker (you can add a coordinate to the geojson and click between the two markers to see for yourself). In the example, this is a counter, but I am trying to filter my data to the location clicked on every time. So my question is what do you have to do to have your data filter every time the geojson is clicked on? Another small thing that I have noticed is that even though I set zoomToBoundsOnClick to true, I do not have the map zoom in on the marker that I clicked on. This isn't a big deal, but it would be a nice to have. So if someone know how to get that work, that would also be appreciated.
In Dash, a callback is only invoked when a property changes. If you click the same feature twice, the click_feature property doesn't change, and the callback is thus not invoked. If you want to invoke the callback on every click, you can target the n_clicks property - it is incremented on (every) click, and the callback will thus fire on every click.
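A rough sketch of how the callback from the question could be rewired, assuming the installed dash-leaflet version exposes n_clicks on the GeoJSON component as described above; click_feature is read as State so the clicked feature is still available for filtering:

from dash import Output, Input, State

@app.callback(Output('counter', 'children'),
              Input('locations', 'n_clicks'),        # changes on every click, so the callback always fires
              State('locations', 'click_feature'))   # still tells us which feature was clicked
def update_counter(n_clicks, feature):
    # filter the data based on `feature` here
    return str(n_clicks or 0)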
4
1
73,038,799
2022-7-19
https://stackoverflow.com/questions/73038799/program-that-calculates-binary-gap-using-recursion-in-python-generates-recursion
I am new at programming (please be nice :)) and I am trying to write a function that calculates the binary gap of a number using recursion by using the modulus function to gather the bits of an integer and then counting the number of 0s between 1s and displaying the longest chain 0s in between two 1s. For example Input = 10, Binary value = 1010, Binary Gap = 1 Input = 15, Binary value = 1111, Binary Gap = 0 Input = 41, Binary value = 101001, Binary Gap = 2 As it stands the code below will always return a 0 if an even number is inputted and always throw a recursion error should an odd number be used. This leads me to believe there is a problem involving the division and it being stuck in an infinite loop but cannot figure out why. def binary_gap(n, foundFirstOne= False, numofZeros= 0): if n == 0: return 0 bit = n % 2 if bit == 1: ans = binary_gap((n / 2), True, 0) else: if foundFirstOne == True: ans = binary_gap((n / 2), True, numofZeros+1) else: ans = binary_gap((n / 2), False, 0) return ans and numofZeros print(binary_gap(int(input("Please enter a number"))))
First, write a recursive function to convert decimal to binary (not necessary because there is already a bin function). Then, count the gap - there is no point using recursion in this because it is just way too complicated: def convert_to_binary(n): if n > 1: return convert_to_binary(n // 2) + str(n % 2) return str(n % 2) def binary_gap(n): b = convert_to_binary(n) x = b.split('1')[:-1] return max(len(d) for d in x) print(binary_gap(10)) # 1 print(binary_gap(15)) # 0 print(binary_gap(41)) # 2
4
0
73,034,438
2022-7-19
https://stackoverflow.com/questions/73034438/how-to-type-hint-function-with-a-callable-argument-and-default-value
I am trying to type hint the arguments of a function that takes a callable, and has a default argument (in the example below set) from typing import Callable, List T = TypeVar("T") def transform(data: List[int], ret_type: Callable[[List[int]], T] = set) -> T: return ret_type(data) a = [1, 2, 3] my_set: Set = transform(a) The above code triggers the following error message from mypy mypy3: Incompatible default for argument "ret_type" (default has type "Type[Set[Any]]", argument has type "Callable[[List[int]], T]") What should be the correct type of ret_type ? EDIT The below code although not ideal works fine (cf @chepner comment) from typing import cast, Any, Callable, TypeVar T = TypeVar("T") def transform(data: Any, ret_type: Callable[..., T] = cast(Callable, set)) -> T: return ret_type(data)
You could use @overload to correctly type hint a function with a default argument in your case: from typing import Callable, List, TypeVar, overload, Set T = TypeVar("T") @overload def transform(data: List[int]) -> Set[int]: ... @overload def transform(data: List[int], ret_type: Callable[[List[int]], T]) -> T: ... # untyped implementation def transform(data, ret_type = set): return ret_type(data) a = [1, 2, 3] my_set: Set = transform(a)
5
2