Dataset columns (type, observed range):
question_id: int64, 59.5M to 79.6M
creation_date: string (date), 2020-01-01 00:00:00 to 2025-05-14 00:00:00
link: string, lengths 60 to 163
question: string, lengths 53 to 28.9k
accepted_answer: string, lengths 26 to 29.3k
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
73,037,764
2022-7-19
https://stackoverflow.com/questions/73037764/check-if-string-is-none-empty-or-has-spaces-only
What's the Pythonic way of checking whether a string is None, empty or has only whitespace (tabs, spaces, etc)? Right now I'm using the following bool check: s is None or not s.strip() ..but was wondering if there's a more elegant / Pythonic way to perform the same check. It may seem easy but the following are the different issues I found with this: isspace() returns False if the string is empty. A bool of string that has spaces is True in Python. We cannot call any method, such as isspace() or strip(), on a None object.
The only difference I can see is doing: not s or not s.strip() This has a small benefit over your original version: not s short-circuits for both None and an empty string, and then not s.strip() finishes the check for whitespace-only strings. Your s is None only short-circuits for None, and then not s.strip() has to check for both empty and whitespace-only strings.
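A minimal sketch of the check from the answer above, wrapped in a hypothetical helper (the function name is just for illustration):

```python
def is_blank(s):
    """Return True if s is None, empty, or whitespace-only."""
    return not s or not s.strip()

# quick sanity checks
print(is_blank(None))        # True
print(is_blank(""))          # True
print(is_blank("  \t\n"))    # True
print(is_blank("  text  "))  # False
```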
8
12
73,035,540
2022-7-19
https://stackoverflow.com/questions/73035540/pandas-how-to-expand-a-dataframe-between-dates-and-add-nans-to-new-rows
This is a simple one but it is stumping me. I have a data frame consisting of day-level observations by individuals. However, not all individuals are observed on the same day. I need to create rows including the days in which individuals are not observed (i.e., count NaN or 0), between the dates where they are present. The relevant data looks as follows ID date StartDate EndDate Count Cov1 Cov2 Cov3 A 05/05/2005 04/04/05 06/06/2006 3 1 F 1 A 06/05/2005 04/04/05 06/06/2006 5 1 F 1 A 07/05/2005 04/04/05 06/06/2006 2 1 F 1 A 10/05/2005 04/04/05 06/06/2006 7 1 F 1 B 05/05/2005 04/04/05 06/06/2006 6 0 M 2 B 07/05/2005 04/04/05 06/06/2006 1 0 M 2 C 01/05/2005 04/04/05 06/06/2006 3 1 F 1 C 03/05/2005 04/04/05 06/06/2006 7 1 F 1 However, I need it to look like this: ID date StartDate EndDate Count Cov1 Cov2 Cov3 A 05/05/2005 04/04/05 06/06/2006 3 1 F 1 A 06/05/2005 04/04/05 06/06/2006 5 1 F 1 A 07/05/2005 04/04/05 06/06/2006 2 1 F 1 A 08/05/2005 04/04/05 06/06/2006 0 1 F 1 A 09/05/2005 04/04/05 06/06/2006 0 1 F 1 A 10/05/2005 04/04/05 06/06/2006 7 1 F 1 B 05/05/2005 04/04/05 06/06/2006 6 0 M 2 B 06/05/2005 04/04/05 06/06/2006 0 0 M 2 B 07/05/2005 04/04/05 06/06/2006 1 0 M 2 C 01/05/2005 04/04/05 06/06/2006 3 1 F 1 C 02/05/2005 04/04/05 06/06/2006 0 1 F 1 C 03/05/2005 04/04/05 06/06/2006 7 1 F 1 So, the data should expand by days not counted in the original but between the start and end dates. However, the count variable must not copy over, while all other covariates have to copy.
Create DatetimeIndex by column date first, so possible use custom lambda function with DataFrame.asfreq, remove first level of MultiIndex and convert index to column date, last use Series.dt.strftime for original format DD/MM/YYYY: First is possible test duplicated rows by columns ID, date: print (df[df.duplicated(['ID','date'], keep=False)]) ID date StartDate EndDate Count Cov1 Cov2 Cov3 0 A 05/05/2005 04/04/05 06/06/2006 3 1 F 1 1 A 05/05/2005 04/04/05 06/06/2006 3 1 F 1 If possible, remove duplicates: df = df.drop_duplicates(['ID','date']) df['date'] = pd.to_datetime(df['date'], dayfirst=True) df1 = (df.set_index('date').groupby('ID') .apply(lambda x: x.asfreq('D', method='ffill')) .droplevel(0) .reset_index()) print (df1) date ID StartDate EndDate Count Cov1 Cov2 Cov3 0 2005-05-05 A 04/04/05 06/06/2006 3 1 F 1 1 2005-05-06 A 04/04/05 06/06/2006 5 1 F 1 2 2005-05-07 A 04/04/05 06/06/2006 2 1 F 1 3 2005-05-08 A 04/04/05 06/06/2006 2 1 F 1 4 2005-05-09 A 04/04/05 06/06/2006 2 1 F 1 5 2005-05-10 A 04/04/05 06/06/2006 7 1 F 1 6 2005-05-05 B 04/04/05 06/06/2006 6 0 M 2 7 2005-05-06 B 04/04/05 06/06/2006 6 0 M 2 8 2005-05-07 B 04/04/05 06/06/2006 1 0 M 2 9 2005-05-01 C 04/04/05 06/06/2006 3 1 F 1 10 2005-05-02 C 04/04/05 06/06/2006 3 1 F 1 11 2005-05-03 C 04/04/05 06/06/2006 7 1 F 1 print (df1.index.name) None If possible in real data it is ID use: df1 = df1.rename_axis(None) m = df1.merge(df, indicator=True, how='left')['_merge'].eq('left_only') df1.loc[m, 'Count'] = 0 df1['date'] = df1['date'].dt.strftime('%d/%m/%Y') print (df1) date ID StartDate EndDate Count Cov1 Cov2 Cov3 0 05/05/2005 A 04/04/05 06/06/2006 3 1 F 1 1 06/05/2005 A 04/04/05 06/06/2006 5 1 F 1 2 07/05/2005 A 04/04/05 06/06/2006 2 1 F 1 3 08/05/2005 A 04/04/05 06/06/2006 0 1 F 1 4 09/05/2005 A 04/04/05 06/06/2006 0 1 F 1 5 10/05/2005 A 04/04/05 06/06/2006 7 1 F 1 6 05/05/2005 B 04/04/05 06/06/2006 6 0 M 2 7 06/05/2005 B 04/04/05 06/06/2006 0 0 M 2 8 07/05/2005 B 04/04/05 06/06/2006 1 0 M 2 9 01/05/2005 C 04/04/05 06/06/2006 3 1 F 1 10 02/05/2005 C 04/04/05 06/06/2006 0 1 F 1 11 03/05/2005 C 04/04/05 06/06/2006 7 1 F 1
3
2
73,035,677
2022-7-19
https://stackoverflow.com/questions/73035677/new-column-based-on-values-from-other-columns-and-respecting-pre-established-ru
I'm looking for an algorithm to create a new column based on values from other columns AND respecting pre-established rules. Here's an example: artificial data df = data.frame( col_1 = c('No','Yes','Yes','Yes','Yes','Yes','No','No','No','Unknown'), col_2 = c('Yes','Yes','Unknown','Yes','Unknown','No','Unknown','No','Unknown','Unknown'), col_3 = c('Unknown','Yes','Yes','Unknown','Unknown','No','No','Unknown','Unknown','Unknown') ) The goal is to create a new_column based on the values of col_1, col_2, and col_3. For that, the rules are: If the value 'Yes' is present in any of the columns, the value of the new_column will be 'Yes'; If the value 'Yes' is not present in any of the columns, but the value 'No' is present, then the value of the new_column will be 'No'; If the values 'Yes' and 'No' are both absent, then the value of the new_column will be 'Unknown'. I managed to operationalize this using case_when() describing all possible combinations, or with sequential ifelse calls, but those solutions do not scale to N variables. Current solution: library(dplyr) df_1 <- df %>% mutate( new_column = ifelse( (col_1 == 'Yes' | col_2 == 'Yes' | col_3 == 'Yes'), 'Yes', ifelse( (col_1 == 'Unknown' & col_2 == 'Unknown' & col_3 == 'Unknown'), 'Unknown','No' ) ) ) I'm looking for an approach that operationalizes this faster and can be expanded to N variables. After searching StackOverflow, I couldn't find a solution to my problem (I know there are several posts about creating a new column based on values from different columns, but none matched this case). Perhaps the search strategy was not the best; if anyone finds one, please provide the link. I used R in the code, but the current solution also works in Python using np.where. Solutions in R or Python are welcome.
A solution using Python: import pandas as pd df = pd.DataFrame({ 'col_1': ['No','Yes','Yes','Yes','Yes','Yes','No','No','No','Unknown'], 'col_2': ['Yes','Yes','Unknown','Yes','Unknown','No','Unknown','No','Unknown','Unknown'], 'col_3': ['Unknown','Yes','Yes','Unknown','Unknown','No','No','Unknown','Unknown','Unknown'] }) df['col_4'] = [('Yes' if 'Yes' in x else ('No' if 'No' in x else 'Unknown')) for x in zip(df['col_1'], df['col_2'], df['col_3'])] print(df) Output: col_1 col_2 col_3 col_4 0 No Yes Unknown Yes 1 Yes Yes Yes Yes 2 Yes Unknown Yes Yes 3 Yes Yes Unknown Yes 4 Yes Unknown Unknown Yes 5 Yes No No Yes 6 No Unknown No No 7 No No Unknown No 8 No Unknown Unknown No 9 Unknown Unknown Unknown Unknown
4
1
73,033,594
2022-7-19
https://stackoverflow.com/questions/73033594/github-action-using-wrong-version-of-python
I have the following Github action, in which I'm specifying Python 3.10: name: Unit Tests runs-on: ubuntu-latest defaults: run: shell: bash working-directory: app steps: - uses: actions/checkout@v3 - name: Install poetry run: pipx install poetry - uses: actions/setup-python@v3 with: python-version: "3.10" cache: "poetry" - run: poetry install - name: Run tests run: | make mypy make test The pyproject.toml specifies Python 3.10 as well: [tool.poetry.dependencies] python = ">=3.10,<3.11" When the action runs, I get the following: The currently activated Python version 3.8.10 is not supported by the project (>=3.10,<3.11). Trying to find and use a compatible version. Using python3 (3.10.5) It would look like it's using 3.10, but py.test is using 3.8.10: platform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /home/runner/.cache/pypoetry/virtualenvs/vital-background-pull-yluVa_Vi-py3.10/bin/python For context, this Github action was running on 3.8 before. I've updated the python version in both the test.yaml and the pyproject.toml but it's still using 3.8. Anything else I should change to make it use 3.10? Thank you
The root cause is the section - uses: actions/setup-python@v3 with: python-version: "3.10" cache: "poetry" with the line caching poetry. Since poetry was previously installed with a pip associated with Python 3.8, the package will be retrieved from the cache associated with that Python version. It needs to be re-installed with the new Python version. You can either remove the cache: poetry from a single GH actions execution, or remove the cache manually. This will fix your issue.
4
5
72,979,303
2022-7-14
https://stackoverflow.com/questions/72979303/why-is-pytorch-inference-non-deterministic-even-when-setting-model-eval
I have fine-tuned a PyTorch transformer model using HuggingFace, and I'm trying to do inference on a GPU. However, even after setting model.eval() I still get slightly different outputs if I run inference multiple times on the same data. I have tried a number of things and have done some ablation analysis and found out that the only way to get deterministic output is by also setting torch.cuda.manual_seed_all(42) (or any other seed number). Why is this the case? And is this normal? The model's weights are fixed, and there are no undefined or randomly initialized weights (when I load the trained model I get the All keys matched successfully message), so where is the randomness coming from if I don't set the cuda seed manually? Is this randomness to be expected?
You can use torch.use_deterministic_algorithms to force non-deterministic modules to perform deterministically, where supported e.g: >>> a = torch.randn(100, 100, 100, device='cuda').to_sparse() >>> b = torch.randn(100, 100, 100, device='cuda') # Sparse-dense CUDA bmm is usually nondeterministic >>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item() False >>> torch.use_deterministic_algorithms(True) # Now torch.bmm gives the same result each time, but with reduced performance >>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item() True # CUDA kthvalue has no deterministic algorithm, so it throws a runtime error >>> torch.zeros(10000, device='cuda').kthvalue(1) RuntimeError: kthvalue CUDA does not have a deterministic implementation...
5
4
73,024,608
2022-7-18
https://stackoverflow.com/questions/73024608/merge-multiple-batchencoding-or-create-tensorflow-dataset-from-list-of-batchenco
In a token labelling task I am using a transformers tokenizer, which outputs objects of the BatchEncoding class. I am tokenizing each text separately because I need to extract the labels from the text and re-arrange them after tokenizing (due to subtokens). However, I can't find a way to either create a tensorflow Dataset from the list of BatchEncoding objects or merge all the BatchEncoding objects into one to create the dataset. Here are the main parts of the code: tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') def extract_labels(raw_text): # split text into words and extract label (...) return clean_words, labels def tokenize_text(words, labels): # tokenize text tokens = tokenizer(words, is_split_into_words=True, padding='max_length', truncation=True, max_length=MAX_LENGTH) # since words might be split into subwords, labels need to be re-arranged # only the first subword has the label (...) tokens['labels'] = label_ids return tokens tokens = [] for raw_text in data: clean_text, l = extract_labels(raw_text) t = tokenize_text(clean_text, l) tokens.append(t) type(tokens[0]) # transformers.tokenization_utils_base.BatchEncoding tokens[0] # {'input_ids': [101, 69887, 10112, ..., 0, 0, 0], 'attention_mask': [1, 1, 1, ... 0, 0, 0], 'labels': [-100, 0, -100, ..., -100, -100, -100]} Update, as asked, a basic example to reproduce: from transformers import BertTokenizerFast import tensorflow as tf tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') tokens = [] for text in ["Hello there", "Good morning"]: t = tokenizer(text.split(), is_split_into_words=True, padding='max_length', truncation=True, max_length=10) t['labels'] = list(map(lambda x: 1, t.word_ids())) # fake labels to simplify example tokens.append(t) print(type(tokens[0])) # now tokens is a list of BatchEncodings print(tokens) If I directly tokenized the whole dataset I'd have a single BatchEnconding comprising everything, but I would not be able to handle the labels: data = ["Hello there", "Good morning"] tokens = tokenizer(data, padding='max_length', truncation=True, max_length=10) # now tokens is a batch encoding comprising all the dataset print(type(tokens)) print(tokens) # This way I can get a tf dataset like this: tf.data.Dataset.from_tensor_slices(tokens) Note that I need to first iterate the texts to get the labels and I need each text's word_ids() to rearrange the labels.
You have a few options. You can use a defaultdict: from collections import defaultdict import tensorflow as tf result = defaultdict(list) for d in tokens: for k, v in d.items(): result[k].append(v) dataset = tf.data.Dataset.from_tensor_slices(dict(result)) Or you can use pandas as shown here: import pandas as pd import tensorflow as tf dataset = tf.data.Dataset.from_tensor_slices(pd.DataFrame.from_dict(tokens).to_dict(orient="list")) Or just create the correct structure while preprocessing your data: from transformers import BertTokenizerFast from collections import defaultdict import tensorflow as tf tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased') tokens = defaultdict(list) for text in ["Hello there", "Good morning"]: t = tokenizer(text.split(), is_split_into_words=True, padding='max_length', truncation=True, max_length=10) tokens['input_ids'].append(t['input_ids']) tokens['token_type_ids'].append(t['token_type_ids']) tokens['attention_mask'].append(t['attention_mask']) t['labels'] = list(map(lambda x: 1, t.word_ids())) # fake labels to simplify example tokens['labels'].append(t['labels']) dataset = tf.data.Dataset.from_tensor_slices(dict(tokens)) for x in dataset: print(x) {'input_ids': <tf.Tensor: shape=(10,), dtype=int32, numpy= array([ 101, 29155, 10768, 102, 0, 0, 0, 0, 0, 0], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(10,), dtype=int32, numpy=array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(10,), dtype=int32, numpy=array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=int32)>, 'labels': <tf.Tensor: shape=(10,), dtype=int32, numpy=array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)>} {'input_ids': <tf.Tensor: shape=(10,), dtype=int32, numpy= array([ 101, 12050, 17577, 102, 0, 0, 0, 0, 0, 0], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(10,), dtype=int32, numpy=array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(10,), dtype=int32, numpy=array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=int32)>, 'labels': <tf.Tensor: shape=(10,), dtype=int32, numpy=array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)>}
4
2
72,999,837
2022-7-15
https://stackoverflow.com/questions/72999837/why-how-is-1-0-5-inaccurate
Python computes the imaginary unit i = sqrt(-1) inaccurately: >>> (-1) ** 0.5 (6.123233995736766e-17+1j) Should be exactly 1j (Python calls it j instead of i). Both -1 and 0.5 are represented exactly, the result can be represented exactly as well, so there's no hard reason (i.e., floating point limitations) why Python couldn't get it right. It could. And i=sqrt(-1) being the definition makes it rather disappointing that Python gets that wrong. So why does it? How does it compute the inaccurate result?
When complex arithmetic is required, your Python implementation likely calculates x^y as e^(y ln x), as might be done with the complex C functions cexp and clog. Those are in turn likely calculated with real functions including ln, sqrt, atan2, sin, cos, and pow, but the details need not concern us. ln(−1) is πi. However, π is not representable in a floating-point format. Your Python implementation likely uses the IEEE-754 “double precision” format, also called binary64. In that format, the closest representable value to π is 3.141592653589793115997963468544185161590576171875. So ln(−1) is likely calculated as 3.141592653589793115997963468544185161590576171875 i. Then y ln x = .5 • 3.141592653589793115997963468544185161590576171875 i is 1.5707963267948965579989817342720925807952880859375 i. e^(1.5707963267948965579989817342720925807952880859375 i) is also not exactly representable. The true value of that is approximately 6.123233995736765886130329661375001464640•10^−17 + .9999999999999999999999999999999981253003 i. The nearest representable value to 6.123233995736765886130329661375001464640•10^−17 is 6.12323399573676603586882014729198302312846062338790031898128063403419218957424163818359375•10^−17, and the nearest representable value to .9999999999999999999999999999999981253003 is 1, so the calculated result is 6.12323399573676603586882014729198302312846062338790031898128063403419218957424163818359375•10^−17 + 1 i.
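A short Python sketch reproducing the computation described above (this only illustrates the e^(y ln x) route; CPython's actual internal algorithm may differ):

```python
import cmath

x, y = complex(-1), 0.5
log_x = cmath.log(x)           # pi*1j, with pi rounded to the nearest binary64 value
result = cmath.exp(y * log_x)  # e**(y * ln x)

print(log_x)        # 3.141592653589793j
print(result)       # (6.123233995736766e-17+1j)
print((-1) ** 0.5)  # (6.123233995736766e-17+1j) -- same tiny real part
```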
4
1
73,029,554
2022-7-18
https://stackoverflow.com/questions/73029554/vs-code-pytest-discovery-error-due-to-modulenotfounderror
I am trying to setup my VS Code debugger for pytest and am getting the error discovering pytests tests from the testing tab. The test I'm running works perfectly when run from the terminal but the VS debugger is not able to run it. In the testing tab, it directs me to the output where it shows a ModuleNotFoundError for logdna in the common.py file. That file has some other seemingly syntax errors but I believe VSCode is just not interpreting them correctly because the modules are in the code base. On the left you can see I ran pip3 install logdna to ensure logdna is actually installed. Note that I am running inside a Poetry environment and have a .env in the project root directory. Note also that I have reviewed and tried all answers provided here and none of the provided answers helped in my case: VSCode pytest test discovery fails
You are using a virtualenv in the terminal. The logs clearly show that Python is installed inside the /Users/.../pypoetry/virtualenvs/... folder, but VS Code is using the default Python (/usr/local/bin/python3), or at least a non-virtualenv version. You have to choose the same interpreter in VS Code as the terminal's one ("Python: Select Interpreter" in the command palette). See also: What is virtualenv?
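One quick way to confirm the mismatch (a simple diagnostic, not part of the original answer): run these two lines both from your Poetry shell and from a test collected by VS Code, and compare the paths.

```python
import sys

# Prints the interpreter actually executing this code and its prefix.
# If the path is not inside .../pypoetry/virtualenvs/..., VS Code picked the wrong interpreter.
print(sys.executable)
print(sys.prefix)
```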
5
2
73,025,014
2022-7-18
https://stackoverflow.com/questions/73025014/how-to-return-status-code-in-python-without-actually-exiting-the-process
I'm trying to return a status code without exiting the process of my script. Is there an equivalent to sys.exit() that does not stop execution ? What I'd like to happen : If no exception is raised, status code is by default 0. For minor exceptions, I want to return status code 1, but keep the process running, unless a more critical error exits the process and returns status code 2 with sys.exit(2). In my case, I'm looping through a bunch of PDF files and if one file has not been anonymised for some reason, I want to return status code 1 while looping through the rest of the files. In other words, status code should correspond to : 0 - SUCCESS : all files have been sucessfully anonymised 1 - WARNING : at least one file has not been anonymised 2 - ERROR : critical error, interrupted process For clarity : by status code I mean, what is returned by the Bash command echo $? after the execution of the script. What I have done : I have tried using sys.exit(1) to handle minor exceptions but it automatically stops execution, as stated in the library documentation. To sum up with code snippets : For a given main.py : import sys import os def anonymiser(file) -> Boolean : # Returns True if successfully anonymises the file, False otherwise ... def process_pdf_dir(): for file in os.listdir('pdf_files/'): if file.endswith('.pdf'): if anonymiser(file) == False: ''' Here goes the code to change status code to 1 but keeps looping through the rest of the files unlike `sys.exit(1)` that would stop the execution. ''' if criticalError = True: sys.exit(2) if __name__ == '__main__': process_pdf_dir() After calling the script on the terminal, and echoing its status like so : $ python main.py $ echo $? If indeed, at least one file has not been anonymised : Expected output (if no critical error) : 1 Expected output (if critical error) : 2 Current output (if no critical error) : 0 Current output (if critical error) : 2 Hope this is clear enough. Any help would be greatly appreciated. Thanks.
Wait until the end of your loop to return the 1 exit code. I'd do it like: def process_pdf_dir() -> int: ret = 0 for file in os.listdir('pdf_files/'): if file.endswith('.pdf'): if not anonymiser(file): # remember to return a warning ret = 1 if criticalError: # immediately return a failure return 2 return ret if __name__ == '__main__': sys.exit(process_pdf_dir()) If "critical error" includes uncaught exceptions, maybe put that outside of process_pdf_dir, e.g.: if __name__ == '__main__': try: sys.exit(process_pdf_dir()) except: sys.exit(2)
5
3
72,992,588
2022-7-15
https://stackoverflow.com/questions/72992588/how-to-debug-python-unittests-in-visual-studio-code
I have the following directory structure (a friends was so kind to put it on github while he examined it) - code - elements __init__.py type_of_car.py __init__.py car.py - tests __init__.py test_car.py These are my launch.json settings: { "version": "0.2.0", "configurations": [ { "name": "Python: Debug Tests", "type": "python", "request": "launch", "program": "${file}", "purpose": ["debug-test"], "console": "integratedTerminal", "justMyCode": false }, { "name": "Python: Module", "type": "python", "request": "launch", "module": "main", "justMyCode": false, "cwd": "${workspaceFolder}" } ] } The VS Python Test settings are: { "python.testing.unittestArgs": [ "-v", "-s", "./tests", "-p", "test_*.py" ], "python.testing.pytestEnabled": false, "python.testing.unittestEnabled": true, "python.testing.cwd": "${workspaceFolder}" } The test_car.py imports module - of course - code.car. But also code.car.type_of_car When I run the test from the project root, the test can be invoked and passes. py -m unittest tests.test_car.py However, I cannot run my main code by pressing F5 (see launch.json) configuration. This fails because with No module named 'code.car'; 'code' is not a package being reported. Also, I have to debug my tests as well with Visual Studio Code: I navigate to test_engine.py and open it Switch the "RUN AND DEBUG" to Python: Debug Tests configuration Press F5 to run the debugging. This fails because with No module named 'code.car'; 'code' is not a package being reported. How can I resolve the module-hell so that I can run the test also from VSCode/debugger? (this thing did cost me hours. Any hint is appreciated.) Does someone has insight what the VS Code launcher considers it's root when calling the module?
You can modify the car.py and test_car.py files as follows (only the modified code is shown): car.py: # your code from code.elements.type_of_car import TypeOfCar # my code from elements.type_of_car import TypeOfCar test_car.py: # your code import unittest from code.car import Car from random import randint from code.elements.type_of_car import TypeOfCar # my code import unittest import sys sys.path.append("./code") from car import Car from random import randint from elements.type_of_car import TypeOfCar With these changes, the "Python: Debug Tests" configuration runs and debugs the tests successfully.
3
2
72,996,818
2022-7-15
https://stackoverflow.com/questions/72996818/attributeerror-in-pytest-with-asyncio-after-include-code-in-fixtures
I need to test my telegram bot. To do this I need to create client user to ask my bot. I found telethon library which can do it. First I wrote a code example to ensure that authorisation and connection works and send test message to myself (imports omitted): api_id = int(os.getenv("TELEGRAM_APP_ID")) api_hash = os.getenv("TELEGRAM_APP_HASH") session_str = os.getenv("TELETHON_SESSION") async def main(): client = TelegramClient( StringSession(session_str), api_id, api_hash, sequential_updates=True ) await client.connect() async with client.conversation("@someuser") as conv: await conv.send_message('Hey, what is your name?') if __name__ == "__main__": asyncio.run(main()) @someuser (me) successfully receives message. Okay, now I create a test with fixtures based on code above: api_id = int(os.getenv("TELEGRAM_APP_ID")) api_hash = os.getenv("TELEGRAM_APP_HASH") session_str = os.getenv("TELETHON_SESSION") @pytest.fixture(scope="session") async def client(): client = TelegramClient( StringSession(session_str), api_id, api_hash, sequential_updates=True ) await client.connect() yield client await client.disconnect() @pytest.mark.asyncio async def test_start(client: TelegramClient): async with client.conversation("@someuser") as conv: await conv.send_message("Hey, what is your name?") After running pytest received an error: AttributeError: 'async_generator' object has no attribute 'conversation' It seems client object returned from client fixture in "wrong" condition. Here is print(dir(client)): ['__aiter__', '__anext__', '__class__', '__class_getitem__', '__del__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__name__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'aclose', 'ag_await', 'ag_code', 'ag_frame', 'ag_running', 'asend', 'athrow'] Where I loose "right" client object from generator in fixture?
Use @pytest_asyncio.fixture decorator in async fixtures according to documentation https://pypi.org/project/pytest-asyncio/#async-fixtures. Like this: import pytest_asyncio @pytest_asyncio.fixture(scope="session") async def client(): ...
19
42
73,017,267
2022-7-18
https://stackoverflow.com/questions/73017267/syntaxerror-non-utf-8-code-starting-with-xca-in-file-usr-local-bin-python3
I am trying to schedule a crontab job to run a python script and referring to python in /usr/local/bin/python3 but getting this error SyntaxError: Non-UTF-8 code starting with '\xca' in file /usr/local/bin/python3 on line 2 What does this mean and how can I solve it? I can't open the python3 file
You're calling Python twice in your crontab entry. Decide which interpreter you want to use and run either: * * * * * /Users/name/opt/anaconda3/envs/myenv/bin/python /Users/name/Desktop/Scrape/scraper.py or * * * * * /usr/local/bin/python3 /Users/name/Desktop/Scrape/scraper.py
3
4
73,013,781
2022-7-17
https://stackoverflow.com/questions/73013781/how-to-enable-autocomplete-when-connect-to-docker-container-through-cli
I created a docker image for my FastAPI application then I created a container from that image. Now When I connect to that container using docker exec -it <container-id> through my terminal, I am able to access it but the problem is that autocomplete doesn't work when I press TAB.
What I have understood from your question is that when you enter the docker environment, you are unable to autocomplete filenames and folders. Usually, when you enter the container via the default shell, autocomplete does not work properly. Try entering the container with bash instead, i.e., docker exec -it <container-id> bash. Then you can use TAB to autocomplete files and folders.
9
19
73,012,167
2022-7-17
https://stackoverflow.com/questions/73012167/convert-a-text-file-into-a-dictionary-list
I have a text file in this format (in_file.txt): banana 4500 9 banana 350 0 banana 550 8 orange 13000 6 How can I convert this into a dictionary list in Python? Code: in_filepath = 'in_file.txt' def data_dict(in_filepath): with open(in_filepath, 'r') as file: for line in file.readlines(): title, price, count = line.split() d = {} d['title'] = title d['price'] = int(price) d['count'] = int(count) return [d] The terminal shows the following result: {'title': 'orange', 'price': 13000, 'count': 6} Correct output: {'title': 'banana', 'price': 4500, 'count': 9}, {'title': 'banana', 'price': 350, 'count': 0} , .... Can anyone help me with my problem? Thank you!
titles = ["title","price","count"] [dict(zip(titles, [int(word) if word.isdigit() else word for word in line.strip().split()])) for line in open("in_file.txt").readlines()] or: titles = ["title","price","count"] [dict(zip(titles, [(data:=line.strip().split())[0], *map(int, data[1:])])) for line in open("in_file.txt").readlines()] your approach(corrected): in_filepath = 'in_file.txt' def data_dict(in_filepath): res = [] with open(in_filepath, 'r') as file: for line in file.readlines(): title, price, count = line.split() d = {} d['title'] = title d['price'] = int(price) d['count'] = int(count) res.append(d) return res data_dict(in_filepath) why? because -> d = {} d['title'] = title d['price'] = int(price) d['count'] = int(count) is out of for loop and run only once and when ‍‍for be finished and then you have just one element you return your last element and didn't use others and use must create a list and append every element at the last line of for loop (saving) and at last, return result @Rockbar approach: import pandas as pd list(pd.read_csv("in_file.txt", sep=" ", header=None, names=["title","price","count"]).T.to_dict().values())
3
3
73,007,303
2022-7-16
https://stackoverflow.com/questions/73007303/what-is-the-correct-way-of-accessing-hydras-current-output-directory
Assuming I prevent Hydra from changing the current working directory, how can I still retrieve the job's output directory (i.e., the folder Hydra created for storing the results of the particular job) from main()? Ideally, I'd like a method that works regardless of whether it's a regular run or a multi-run. I know I can access the configuration through hydra.core.hydra_config.HydraConfig.get(), but it's unclear whether any of the settings there always map to the output folder. The relevant tutorial page discusses how to set the output folder but not how to retrieve it in the case we didn't change the cwd.
Found it. hydra_cfg = hydra.core.hydra_config.HydraConfig.get() hydra_cfg['runtime']['output_dir']
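A minimal usage sketch, assuming Hydra ≥ 1.2 (where runtime.output_dir is populated) and an existing config; the config path and name here are only placeholders:

```python
import hydra
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig

@hydra.main(config_path="conf", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    # Works for both single runs and multiruns; each job sees its own directory.
    output_dir = HydraConfig.get().runtime.output_dir
    print(f"Hydra output directory for this job: {output_dir}")

if __name__ == "__main__":
    main()
```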
8
11
73,006,164
2022-7-16
https://stackoverflow.com/questions/73006164/communicate-between-two-widgets-not-directly-related
I have a GUI with widgets that have multiple child widgets inside, as you can see in the image: I want "Widget 2-2" to communicate with "Widget 1-1". I have different options, but I don't know which is better. 1. Propagate signal My first idea was to propagate the signal up to the Main Window and then propagate the action down to the target widget: Widget 2-2 emits a signal Widget 2 catches the signal and emits another signal Main Window catches that signal and calls a Widget 1 controller function The Widget 1 controller function calls a Widget 1-1 controller function Widget 1-1 performs the action Pros: All the widgets only talk with their parents and children Cons: It can be a bit messy 2. Create a singleton class with all the signals Another idea is to create the signals in a singleton class. Then Widget 2-2 can emit these signals easily and Widget 1-1 can listen to them easily too. The relative position of the widgets in the architecture does not matter; anybody can emit and connect the signals. Pros: Emitting and connecting these signals is super easy. Everybody can do it. Cons: I feel that it is a super bad idea. It will probably explode in the future, but I don't know why. I prefer to follow best practices rather than solve my problem fast and easily (for the moment...)
Singletons, or static initialisation for that matter ought to be the last resort as they can cause all sorts of headaches, especially in multi-threaded environment. Definitely do not go for that in this case. What I have personally done in my project in this case, I emitted a signal in the first widget and then forwarded that in the first common parent to the second widget so that the second widget handled it via its slot in there. This has worked fairly well for me, at least. You can forward signals just fine in Qt, i.e. connect a signal to a signal. The first common parent will encompass both, so you can use it as a proxy for managing the two, or potentially even more widgets in your architecture.
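A minimal sketch of the signal-forwarding idea from the answer above, assuming PyQt5 (the question does not name the binding, so the class and signal names here are purely illustrative):

```python
from PyQt5.QtCore import pyqtSignal
from PyQt5.QtWidgets import QApplication, QLabel, QPushButton, QVBoxLayout, QWidget

class Widget22(QWidget):
    something_happened = pyqtSignal(str)  # emitted by the deepest child

    def __init__(self, parent=None):
        super().__init__(parent)
        button = QPushButton("Notify Widget 1-1", self)
        button.clicked.connect(lambda: self.something_happened.emit("hello from 2-2"))
        QVBoxLayout(self).addWidget(button)

class Widget2(QWidget):
    something_happened = pyqtSignal(str)  # same signature, just forwarded

    def __init__(self, parent=None):
        super().__init__(parent)
        self.child = Widget22(self)
        # signal-to-signal connection: Widget 2 re-emits whatever Widget 2-2 emits
        self.child.something_happened.connect(self.something_happened)
        QVBoxLayout(self).addWidget(self.child)

class Widget11(QWidget):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.label = QLabel("waiting...", self)
        QVBoxLayout(self).addWidget(self.label)

    def on_something_happened(self, text):
        # slot handling the forwarded signal
        self.label.setText(text)

if __name__ == "__main__":
    app = QApplication([])
    target = Widget11()
    source = Widget2()
    # the common parent level wires the two branches together
    source.something_happened.connect(target.on_something_happened)
    source.show()
    target.show()
    app.exec_()
```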
5
2
73,006,039
2022-7-16
https://stackoverflow.com/questions/73006039/what-is-the-purpose-of-pyvenv-cfg-after-the-creation-of-a-python-virtual-environ
When I create a virtual environment in Python on Windows cmd, in the virtual environment folder, the following files appear: Include Lib Scripts pyvenv.cfg What is the goal of pyvenv.cfg creation? Can I use it in any way?
pyvenv.cfg is a configuration file that stores information about the virtual environment, such as the path to the base interpreter (and its standard library), the Python version, whether system site-packages are included, and any other venv settings. If you print the content of pyvenv.cfg: $ cat myenv/pyvenv.cfg home = /usr/bin include-system-site-packages = false version = 3.8.5 This is a sample output of the pyvenv.cfg file. Cheers!
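To address the "can I use it in any way?" part: you normally don't need to touch the file (the venv machinery reads it for you), but it is plain key = value text, so a small sketch like this can read it (the myenv path is just an example):

```python
from pathlib import Path

cfg_path = Path("myenv") / "pyvenv.cfg"  # adjust to your virtual environment
settings = {}
for line in cfg_path.read_text().splitlines():
    key, sep, value = line.partition("=")
    if sep:  # skip blank or malformed lines
        settings[key.strip()] = value.strip()

print(settings)
# e.g. {'home': '/usr/bin', 'include-system-site-packages': 'false', 'version': '3.8.5'}
```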
11
5
73,001,554
2022-7-16
https://stackoverflow.com/questions/73001554/how-to-define-a-typeddict-class-with-keys-containing-hyphens
How can I create a TypedDict class that supports keys containing hyphens or other characters that are supported in strings, such as "justify-content" in the example below. from typing import TypedDict, Literal from typing_extensions import NotRequired class Attributes(TypedDict): width: NotRequired[str] height: NotRequired[str] direction: NotRequired[Literal["row", "column"]] justify-content: NotRequired[Literal["start", "end", "center", "equally-spaced"]]
It is possible with the functional syntax: from typing import TypedDict, Literal from typing_extensions import NotRequired Attributes = TypedDict( "Attributes", { "width": NotRequired[ str, ], "height": NotRequired[ str, ], "direction": NotRequired[ Literal["row", "column"], ], "justify-content": NotRequired[ Literal["start", "end", "center", "equally-spaced"] ], }, ) It's mentioned in the documentation The functional syntax should also be used when any of the keys are not valid identifiers, for example because they are keywords or contain hyphens
9
11
73,005,057
2022-7-16
https://stackoverflow.com/questions/73005057/how-to-define-a-function-that-has-a-condition-as-input
I need a function that takes a rule/condition as an input. for example given an array of integers detect all the numbers that are greater than two, and all the numbers greater than four. I know this can be achieved easily without a function, but I need this to be inside a function. The function I would like to have is like def _select(x,rule): outp = rule(x) return outp L = np.round(np.random.normal(2,4,50),decimals=2) y = _select(x=L,rule=(>2)) y1 = _select(x=L,rule=(>4)) How should I code a function like this?
Functions are first class objects meaning you can treat them as any other variable. import numpy as np def _select(x,rule): outp = rule(x) return outp def rule_2(val): return val > 2 def rule_4(val): return val > 4 L = np.round(np.random.normal(2,4,50),decimals=2) y = _select(x=L,rule=rule_2) print(y) y1 = _select(x=L,rule=rule_4) print(y1) In your example, the condition you want to use can be expressed as a simple expression. The python lambda keyword lets you define expressions as anonymous functions in other statements and expressions. So, you could replace the explicit def of the functions import numpy as np def _select(x,rule): outp = rule(x) return outp L = np.round(np.random.normal(2,4,50),decimals=2) y = _select(x=L,rule=lambda val: val > 2) print(y) y1 = _select(x=L,rule=lambda val: val > 4) print(y1)
3
6
72,986,422
2022-7-14
https://stackoverflow.com/questions/72986422/how-to-asynchronously-run-functions-within-a-for-loop-in-python
Hi I was wondering how to asynchronously call a function within a for-loop in Python, allowing the for-loop to execute more quickly. bar() in this case is a time intensive function, which is why I want the calls to it to be nonblocking. Here is what I want to refactor: def bar(item): //manipulate item return newItem newItems = [] for item in items: newItem = foo(item) newItems.append[newItem] Here is what I've tried: async def bar(item): //manipulate item return newItem async def foo(): newItems = [bar(item) for item in items] newItems = await asyncio.gather(*newItems) return newItems newItems = asyncio.run(foo()) This doesn't seem to work as each function call still waits for the previous one to finish before starting. I would love tips on what I might be doing wrong. Thank you so much for any and all help!
If your tasks are really async you can do it the following way: import asyncio async def bar(item: int) -> int: # manipulate item print("Started") await asyncio.sleep(5) print("Finished") return item ** 2 async def foo(): items = range(1, 10) tasks = [bar(item) for item in items] new_items = await asyncio.gather(*tasks) return new_items if __name__ == '__main__': results = asyncio.run(foo()) print(results)
4
3
73,000,307
2022-7-15
https://stackoverflow.com/questions/73000307/using-matplotlib-with-dask
Let's say we have pandas dataframe pd and a dask dataframe dd. When I want to plot pandas one with matplotlib I can easily do it: fig, ax = plt.subplots() ax.bar(pd["series1"], pd["series2"]) fig.savefig(path) However, when I am trying to do the same with dask dataframe I am getting Type Errors such as: TypeError: Cannot interpret 'string[python]' as a data type string[python] is just an example, whatever is your dd["series1"] datatype will be inputed here. So my question is: What is the proper way to use matplotlib with dask, and is this even a good idea to combine the two libraries?
SultanOrazbayev's answer is still spot on; here is an answer elaborating on the datashader option (which hvplot calls under the hood). Don't use Matplotlib, use hvPlot! If you wish to plot the data while it's still large, I recommend using hvPlot, as it can natively handle dask dataframes. It also automatically provides interactivity. Example: import numpy as np import dask import hvplot.dask # Create Dask DataFrame with normally distributed data df = dask.datasets.timeseries() df['x'] = df['x'].map_partitions(lambda x: np.random.randn(len(x))) df['y'] = df['y'].map_partitions(lambda x: np.random.randn(len(x))) # Plot df.hvplot.scatter(x='x', y='y', rasterize=True)
3
7
73,001,224
2022-7-16
https://stackoverflow.com/questions/73001224/how-to-specify-the-name-and-location-of-the-output-file-when-using-nbconvert
When using nbconvert, how can I specify the name and directory of the new file?
Use the --output flag to change the name of the converted file Use the --output-dir flag to change the directory of the converted file jupyter nbconvert <path/to/notebook.ipynb> --to <x> --output <"name" (without file extension)> --output-dir <path/to/new/file>
4
5
72,995,215
2022-7-15
https://stackoverflow.com/questions/72995215/matplotlib-specify-custom-colors-for-line-plot-of-numpy-array
I have a 2D numpy array (y_array) with 3 columns (and common x values as a list, x_list) and I want to create a plot with each column plotted as a line. I can do this by simply doing matplotlib.pyplot.plot(x_list, y_array) and it works just fine. However I am struggeling with the colors. I need to assign custom colors to each line. I tried by handing a list of my custom colors to the color=key argument, but apparently it does not take a List and throws a ValueError. I find this particularly odd since giving a list for labels actually does work. I also thought about creating a custom colormap from my choosen colors, but I do not know how to switch to this colormap when plotting... What can I do to specify the colors to be used when plotting the array? I would like to avoid iterating over the array columns in a for loop. Thanks in advance! Edit: minimal working example: This throws the mentioned ValueError import matplotlib.pyplot as plt import numpy as np if __name__ == '__main__': x_list = np.linspace(0, 9, 10) y1 = np.random.uniform(10, 19, 10) y2 = np.random.uniform(20, 29, 10) y3 = np.random.uniform(30, 39, 10) y_array = np.column_stack([y1, y2, y3]) labels = ['A', 'B', 'C'] # plt.plot(x_list, y_array, label=labels) # this works just fine my_colors = ['steelblue', 'seagreen', 'firebrick'] plt.plot(x_list, y_array, label=labels, color=my_colors) # throws ValueError plt.legend() plt.show()
You can create a custom cycler and use it to define your colors. You can get more information here import matplotlib.pyplot as plt import numpy as np from cycler import cycler if __name__ == '__main__': my_colors = ['steelblue', 'seagreen', 'firebrick'] custom_cycler = cycler(color=my_colors) x_list = np.linspace(0, 9, 10) y1 = np.random.uniform(10, 19, 10) y2 = np.random.uniform(20, 29, 10) y3 = np.random.uniform(30, 39, 10) y_array = np.column_stack([y1, y2, y3]) fig, ax = plt.subplots() ax.set_prop_cycle(custom_cycler) ax.plot(x_list, y_array) plt.show()
4
3
72,982,495
2022-7-14
https://stackoverflow.com/questions/72982495/couldnt-install-psycopg2-for-a-fastapi-project-on-macos-10-13-6-python-setup
I tried to install psycopg2 with the command line : pip install psycopg2 and this is what I get Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [25 lines of output] /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/setuptools/config/setupcfg.py:463: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead. warnings.warn(msg, warning_class) running egg_info creating /private/var/folders/lx/rssnrrbj15jcz8pj88rr89f00000gn/T/pip-pip-egg-info-lpapm88u/psycopg2.egg-info writing /private/var/folders/lx/rssnrrbj15jcz8pj88rr89f00000gn/T/pip-pip-egg-info-lpapm88u/psycopg2.egg-info/PKG-INFO writing dependency_links to /private/var/folders/lx/rssnrrbj15jcz8pj88rr89f00000gn/T/pip-pip-egg-info-lpapm88u/psycopg2.egg-info/dependency_links.txt writing top-level names to /private/var/folders/lx/rssnrrbj15jcz8pj88rr89f00000gn/T/pip-pip-egg-info-lpapm88u/psycopg2.egg-info/top_level.txt writing manifest file '/private/var/folders/lx/rssnrrbj15jcz8pj88rr89f00000gn/T/pip-pip-egg-info-lpapm88u/psycopg2.egg-info/SOURCES.txt' Error: pg_config executable not found. pg_config is required to build psycopg2 from source. Please add the directory containing pg_config to the $PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. If you prefer to avoid building psycopg2 from source, please install the PyPI 'psycopg2-binary' package instead. For further information please check the 'doc/src/install.rst' file (also at <https://www.psycopg.org/docs/install.html>). [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. then I tried to download the package from the official web site and put in the PATH: /Users/t/PycharmProjects/API/venv/lib/python3.10/site-packages (Im using a virtual environement) but I get No module named 'psycopg2'
After reading some similar questions on Stack Overflow, here is the solution that worked for me. First, install Homebrew in case you don't already have it installed: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" Then install postgresql (which provides the pg_config executable the error complains about): brew install postgresql Finally, use pip to install psycopg2: pip install psycopg2
3
4
72,993,611
2022-7-15
https://stackoverflow.com/questions/72993611/python-cant-re-enter-readline
I'm new to python and I'm trying to make a GUI window with Tkinter that executes a command. I wrote the code underneath but it wont work. What is wrong with it? The required imports are imported like tk and ttk. This is the tkinter window code: root = Tk() root.geometry("600x450") root.title("Points") Label(root, text=PlayerPoints,font=('Ubuntu')).pack(pady=30) btn= ttk.Button(root, text="Add more points",command=AddPoints) btn.pack() Here is the AddPoints commands code (not the full thing but enough to make the error show up): PointsToAdd = int(input("How many points do you want to add?")) print(PointsToAdd, "points will be added.") And here is the error i get: Exception has occurred: RuntimeError (note: full exception trace is shown but execution is paused at: AddPoints) can't re-enter readline An example code to paste in a code editor to see the error yourself: from tkinter import * from tkinter import ttk PlayerPoints = 100 def AddPoints(): PointsToAdd = int(input("How many points do you want to add?")) print(PointsToAdd, "points will be added.") root = Tk() root.geometry("600x450") root.title("Points") Label(root, text=PlayerPoints,font=('Ubuntu')).pack(pady=30) btn= ttk.Button(root, text="Add more points",command=AddPoints) btn.pack() n = input("this is just to make Python wait for an input instead of killing the program")
From running the code you share, where the tk window will not run, the first clear error is that you haven't included a root.mainloop() statement below is the code as I have tweaked it to make it run: from tkinter import * from tkinter import ttk PlayerPoints = 100 def AddPoints(): PointsToAdd = int(input("How many points do you want to add?")) print(PointsToAdd, "points will be added.") root = Tk() root.geometry("600x450") Label(root, text=PlayerPoints, font=('Ubuntu')).pack(pady=30) btn = ttk.Button(root, text="Add more points", command=AddPoints) btn.pack() root.mainloop() n = input("this is just to make Python wait for an input instead of killing the program") from what I have changed, there aren't any errors. Not too sure if this is what you meant or not though. If you're unfamiliar with how the mainloop feature of tk works I'd suggest either doing some research on it or looking at the following article https://www.educba.com/tkinter-mainloop/ One final thing is that you don't have any code to update the points on the label in the tk GUI (not sure whether you wanted this to happen or not) the following screenshot is the output when running the code I attached above console_output
4
3
72,991,324
2022-7-15
https://stackoverflow.com/questions/72991324/how-to-apply-custom-functions-with-multiple-parameters-in-polars
Now I have a dataframe: df = pd.DataFrame({ "a":[1,2,3,4,5], "b":[2,3,4,5,6], "c":[3,4,5,6,7] }) The function: def fun(a,b,shift_len): return a+b*shift_len,b-shift_len Using Pandas, I can get the result by: df[["d","e"]] = df.apply(lambda row:fun(row["a"],row["b"],3),axis=1,result_type="expand") I want to know how can I use polars to get the same result?
Passing arguments with args import pandas as pd df1 = pd.DataFrame({"a":[1,2,3,4,5],"b":[2,3,4,5,6],"c":[3,4,5,6,7]}) def t(df, row1, row2, shift_len): return df[row1] + df[row2] * shift_len, df[row2] - shift_len df1[["d", "e"]] = df1.apply(t, args=("a", "b", 3), axis=1, result_type="expand") print(df1) OUTPUT: a b c d e 0 1 2 3 7 -1 1 2 3 4 11 0 2 3 4 5 15 1 3 4 5 6 19 2 4 5 6 7 23 3
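The answer above sticks with pandas' apply and args. Since the question asked about Polars specifically, here is a hedged sketch of how the same two derived columns could be built with native Polars expressions (no custom function is needed for this simple rule; written against a recent Polars API):

```python
import polars as pl

df = pl.DataFrame({
    "a": [1, 2, 3, 4, 5],
    "b": [2, 3, 4, 5, 6],
    "c": [3, 4, 5, 6, 7],
})

shift_len = 3
out = df.with_columns([
    (pl.col("a") + pl.col("b") * shift_len).alias("d"),  # a + b * shift_len
    (pl.col("b") - shift_len).alias("e"),                # b - shift_len
])
print(out)
```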
4
-1
72,982,568
2022-7-14
https://stackoverflow.com/questions/72982568/indexing-first-n-characters-of-charfield-in-django
How to index a specific number of Characters on Django Charfield? For example, this is how we index fields in Django, but I guess it applies to entire field or all characters. class Meta: indexes = [ models.Index(fields=['last_name', 'first_name',]), models.Index(fields=['-date_of_birth',]), ] So, how to apply index to specific portion of field as shown in mysql below. CREATE INDEX part_of_name ON customer (name(10));
You can create a functional index (Django 3.2+) with the Substr function [Django-doc]; note that Substr positions are 1-based and the expression should reference the column you want to index (name, in your MySQL example): from django.db.models.functions import Substr # … class Meta: indexes = [ models.Index(fields=['last_name', 'first_name']), models.Index(fields=['-date_of_birth']), models.Index(Substr('name', 1, 10), name='part_of_name'), ]
3
5
72,989,037
2022-7-15
https://stackoverflow.com/questions/72989037/pandas-array-filter-nan-and-keep-the-first-value-in-group
I have the following pandas dataframe. There are many NaN but there are lots of NaN value (I skipped the NaN value to make it look shorter). 0 NaN ... 26 NaN 27 357.0 28 357.0 29 357.0 30 NaN ... 246 NaN 247 357.0 248 357.0 249 357.0 250 NaN ... 303 NaN 304 58.0 305 58.0 306 58.0 307 58.0 308 58.0 309 58.0 310 58.0 311 58.0 312 58.0 313 58.0 314 58.0 315 58.0 316 NaN ... 333 NaN 334 237.0 I would like to filter all the NaN value and also only keep the first value out of the NaN (e.g. from index 27-29 there are three values, I would like to keep the value indexed 27 and skip the 28 and 29 value). The targeted array should be as follows: 27 357.0 247 357.0 304 58.0 334 237.0 I am not sure how could I keep only the first value. Thanks in advance.
Take only values that aren't nan, but the value before them is nan: df = df[df.col1.notna() & df.col1.shift().isna()] Output: col1 27 357.0 247 357.0 304 58.0 334 237.0 Assuming all values are greater than 0, we could also do: df = df.fillna(0).diff() df = df[df.col1.gt(0)]
3
4
72,984,094
2022-7-14
https://stackoverflow.com/questions/72984094/why-is-my-normal-q-q-plot-of-residuals-a-vertical-line
I am using a Q-Q Plot to test if the residuals of my linear regression follow a normal distribution but the result is a vertical line. It looks like linear regression is a pretty good model for this dataset, so shouldn't the residuals be normally distributed? The points were created randomly: import numpy as np x_values = np.linspace(0, 5, 100)[:, np.newaxis] y_values = 29 * x_values + 30 * np.random.rand(100, 1) Then, I fitted a Linear Regression model: from sklearn.linear_model import LinearRegression reg = LinearRegression() reg.fit(x_values, y_values) predictions = reg.predict(x_values) residuals = y_values - predictions Finally, I used the statsmodel module to plot the Q-Q Plot of the residuals: fig = sm.qqplot(residuals, line='45')
Your problem is two-fold here The primary problem is that sklearn (scikit learn) expects your input to be in a 2d columnar array, whereas qqplot from statsmodels expects your data to be in a true 1d array. When you're passing the residuals to qqplot it is attempting to transform each residual individually instead of an entire dataset numpy.random.rand is a uniform distribution, so your errors aren't normal to begin with! To highlight this, I've adapted your code sample. The top row in the resultant figure comprises predictions & residuals for a uniform residual distribution, whereas the bottom row uses a normal distribution for errors. The difference between the "qq_bad" and "qq_good" plots simply has to do with selecting the column of data and passing it in as a true 1d array (instead of a 1d columnar array). from matplotlib.pyplot import subplot_mosaic, show, rc from matplotlib.lines import Line2D from matplotlib.transforms import blended_transform_factory from numpy.random import default_rng from numpy import linspace from sklearn.linear_model import LinearRegression from statsmodels.api import qqplot from scipy.stats import zscore rc('font', size=14) rc('axes.spines', top=False, right=False) rng = default_rng(0) size = 100 x_values = linspace(0, 5, size)[:, None] errors = { 'uniform': rng.uniform(low=-50, high=50, size=(size, 1)), 'normal': rng.normal(loc=0, scale=15, size=(size, 1)) } fig, axd = subplot_mosaic([ ['uniform_fit', 'uniform_hist', 'uniform_qq_bad', 'uniform_qq_good'], ['normal_fit', 'normal_hist', 'normal_qq_bad', 'normal_qq_good'] ], figsize=(12, 6), gridspec_kw={'wspace': .4, 'hspace': .2}) for err_type, err in errors.items(): reg = LinearRegression() y_values = 29 * x_values + 30 + err fit = reg.fit(x_values, y_values) predictions = fit.predict(x_values) residuals = predictions - y_values axd[f'{err_type}_fit'].scatter(x_values, y_values, s=10, alpha=.8) axd[f'{err_type}_fit'].plot(x_values, predictions) axd[f'{err_type}_hist'].hist(residuals, bins=20) qqplot(residuals, ax=axd[f'{err_type}_qq_bad'], line='q') qqplot(residuals[:, 0], ax=axd[f'{err_type}_qq_good'], line='q') #### # Below is primarily for plot aesthetics, feel free to ignore for label, ax in axd.items(): ax.set_ylabel(None) ax.set_xlabel(None) if label.startswith('uniform'): ax.set_title(label.replace('uniform_', '').replace('_', ' ')) if label.endswith('fit'): ax.set_ylabel(f'{label.replace("_fit", "")} error') line = Line2D( [.05, .95], [1.04, 1.04], color='black', transform=blended_transform_factory(fig.transFigure, ax.transAxes), ) fig.add_artist(line) show()
3
1
72,984,800
2022-7-14
https://stackoverflow.com/questions/72984800/why-does-unpacking-non-identifier-strings-work-on-a-function-call
I've noticed, to my surprise, that in a function call, I could unpack a dict with strings that weren't even valid python identifiers. It's surprising to me since argument names must be identifiers, so allowing a function call to unpack a **kwargs that has non-identifiers, with no run time error, doesn't seem healthy (since it could bury problems deeper that where they actually occur). Unless there's an actual use to being able to do this, in which case my question becomes "what would that use be?". Example code Consider this function: def foo(**kwargs): first_key, first_val = next(iter(kwargs.items())) print(f"{first_key=}, {first_val=}") return kwargs This shows that, within a function call, you can't unpack a dict that has has integer keys, which is EXPECTED. >>> t = foo(**{1: 2, 3: 4}) TypeError Traceback (most recent call last) ... TypeError: foo() keywords must be strings What is really not expected, and surprising, is that you can, on the other hand, unpack a dict with string keys, even if these are not valid python identifiers: >>> t = foo(**{'not an identifier': 1, '12': 12, ',(*&$)': 100}) first_key='not an identifier', first_val=1 >>> t {'not an identifier': 1, '12': 12, ',(*&$)': 100}
Looks like this is more of a kwargs issue than an unpacking issue. For example, one wouldn't run into the same issue with foo: def foo(a, b): print(a + b) foo(**{"a": 3, "b": 2}) # 5 foo(**{"a": 3, "b": 2, "c": 4}) # TypeError: foo() got an unexpected keyword argument 'c' foo(**{"a": 3, "b": 2, "not valid": 4}) # TypeError: foo() got an unexpected keyword argument 'not valid' But when kwargs is used, that flexibility comes with a price. It looks like the function first attempts to pop out and map all the named arguments and then passes the remaining items in a dict called kwargs. Since all keywords are strings (but all strings are not valid keywords), the first check is easy - keywords must be strings. Beyond that, it's up to the author to figure out what to do with remaining items in kwargs. def bar(a, **kwargs): print(locals()) bar(a=2) # {'a': 2, 'kwargs': {}} bar(**{"a": 3, "b": 2}) # {'a': 3, 'kwargs': {'b': 2}} bar(**{"a": 3, "b": 2, "c": 4}) # {'a': 3, 'kwargs': {'b': 2, 'c': 4}} bar(**{1: 3, 3: 4}) # TypeError: keywords must be strings Having said all that, there definitely is inconsistency but not a flaw. Some related discussions: Supporting (or not) invalid identifiers in **kwargs feature: **kwargs allowing improperly named variables
7
1
72,983,671
2022-7-14
https://stackoverflow.com/questions/72983671/why-is-my-move-function-not-working-in-pygame
IDK why my player.move() is not working here's my main class: import pygame from player import * pygame.init() WIDTH, HEIGHT = 900, 600 WIN = pygame.display.set_mode((WIDTH, HEIGHT)) pygame.display.set_caption("MyGame!") FPS = 60 player_x = 500 player_y = 500 PLAYER_WIDTH = 60 PLAYER_HEIGHT = 60 PLAYER_VEL = 5 WHITE = (255, 255, 255) BLACK = (0, 0, 0) keys_pressed = pygame.key.get_pressed() player_rect = pygame.Rect(player_x, player_y, PLAYER_WIDTH, PLAYER_HEIGHT) player = Player(player_rect, BLACK, WIN, keys_pressed, PLAYER_VEL) def draw_window(): WIN.fill(WHITE) pygame.display.update() def main(): run = True clock = pygame.time.Clock() while run: clock.tick(FPS) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False draw_window() player.move() player.draw_player() pygame.display.update() pygame.quit() if __name__ == '__main__': main() and here's my other class called player.py that the move function is not working: import pygame class Player(object): def __init__(self, player, color, surface, keys_pressed, vel): self.player = player self.color = color self.surface = surface self.keys_pressed = keys_pressed self.vel = vel def draw_player(self): pygame.draw.rect(self.surface, self.color, self.player) pygame.display.update() def move(self): if self.keys_pressed[pygame.K_a]: self.player.x -= self.vel if self.keys_pressed[pygame.K_d]: self.player.x += self.vel I tried putting the player = Player(...) before the main function. But whatever I tried, it doesn't seem to work and this is my first time posting a question on stackoverflow. This problem also happened alot in the past so thanks if you helped me out.
pygame.key.get_pressed() returns a sequence with the state of each key. If a key is held down, the state for the key is 1, otherwise 0. The contents of the keys_pressed list don't magically change when the state of the keys changes. keys_pressed is not tied to the keys. You need to get the new state of the keys in every frame: class Player(object): # [...] def move(self): # get current state of the keys self.keys_pressed = pygame.key.get_pressed() if self.keys_pressed[pygame.K_a]: self.player.x -= self.vel if self.keys_pressed[pygame.K_d]: self.player.x += self.vel
3
6
72,968,127
2022-7-13
https://stackoverflow.com/questions/72968127/python-debug-not-working-on-ssh-fs-remote-host
Issue:
I am using VSCode version 1.69.1 on Mac (version details at the bottom). From Mac, I connect to a remote repo using SSH FS.
When I click on 'Run' > 'Start Debugging' or 'Run Without Debugging' on a remote python file, the "Run and Debug" pane opens but the file is not run.
(Screenshot: the Run and Debug pane is blank, https://i.sstatic.net/elruD.png)
The debugger works for local repos (hosted on Mac). I have tried reinstalling the Python extension, removing ~/.vscode-server on the remote server, and re-installing VSCode on the Mac, but nothing seems to have helped. This was working earlier (tried a week back) but I am unsure what changed.
From the terminal (connected to the remote host, rendered by SSH FS), I am able to run python test.py and that works.
Launch.json file for python
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true
        }
    ]
}
Version details
Commit: b06ae3b2d2dbfe28bca3134cc6be65935cdfea6a
Date: 2022-07-12T08:21:51.333Z (1 day ago)
Electron: 18.3.5
Chromium: 100.0.4896.160
Node.js: 16.13.2
V8: 10.0.139.17-electron.0
OS: Darwin x64 21.5.0
Id: Kelvin.vscode-sshfs
Description: File system, terminal and task provider using SSH
Version: 1.25.0
Publisher: Kelvin Schoofs
VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=Kelvin.vscode-sshfs
Python Debugger version v2022.10.0 seems to be broken for SSH-FS. Using the previous version of the Python extension addressed it. To install an older version of an extension, click the gear icon > select "Install another version", and pick the version to install. I used version v2022.8.1, and that works.
4
10
72,975,483
2022-7-14
https://stackoverflow.com/questions/72975483/milliseconds-to-hhmmss-time-format
I'm trying to add the timestamp of a video frame to its own frame name like "frame2 00:00:01:05", using CAP_PROP_POS_MSEC I'm getting the current position of the video file in milliseconds, what I want is to change those milliseconds to the time format "00:00:00:00:". My code currently assigns the name like: "frame2 1.05" but ideally I would have the name like: "frame2 00:00:01:05". Is there a library that can help me achieve this milliseconds to HH:MM:SS conversion? import cv2 vidcap = cv2.VideoCapture('video.mp4') success,frame = vidcap.read() cont = 0 while success: frame_timestamp = (vidcap.get(cv2.CAP_PROP_POS_MSEC))/(1000) cv2.imwrite(f'frame{cont} '+str((format(frame_timestamp,'.2f')))+'.jpg',frame) success,frame = vidcap.read() cont+=1 Thanks!
Just use divmod. It does a division and modulo at the same time. It's more convenient than doing that separately. seconds = 1.05 # or whatever (hours, seconds) = divmod(seconds, 3600) (minutes, seconds) = divmod(seconds, 60) formatted = f"{hours:02.0f}:{minutes:02.0f}:{seconds:05.2f}" # >>> formatted # '00:00:01.05' And if you need the fraction of seconds to be separated by :, which nobody will understand, you can use (seconds, fraction) = divmod(seconds, 1) and multiply fraction * 100 to get hundredth of a second.
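As a hedged follow-up (my own addition, building on the answer's divmod idea and the question's OpenCV loop), the conversion can be wrapped in a helper and used directly when naming the frames; the helper name ms_to_timestamp is invented here for illustration:
def ms_to_timestamp(ms):
    # Convert milliseconds to "HH:MM:SS:ff" (ff = hundredths of a second).
    seconds = ms / 1000
    hours, seconds = divmod(seconds, 3600)
    minutes, seconds = divmod(seconds, 60)
    seconds, fraction = divmod(seconds, 1)
    return f"{int(hours):02d}:{int(minutes):02d}:{int(seconds):02d}:{int(fraction * 100):02d}"

# e.g. inside the question's loop:
# cv2.imwrite(f'frame{cont} ' + ms_to_timestamp(vidcap.get(cv2.CAP_PROP_POS_MSEC)) + '.jpg', frame)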
4
6
72,967,793
2022-7-13
https://stackoverflow.com/questions/72967793/keyboardinterrupt-with-python-multiprocessing-pool
I want to write a service that launches multiple workers that work infinitely and then quit when main process is Ctrl+C'd. However, I do not understand how to handle Ctrl+C correctly. I have a following testing code: import os import multiprocessing as mp def g(): print(os.getpid()) while True: pass def main(): with mp.Pool(1) as pool: try: s = pool.starmap(g, [[]] * 1) except KeyboardInterrupt: print('Done') if __name__ == "__main__": print(os.getpid()) main() When I try to Ctrl+C it, I expect process(es) running g to just receive SIGTERM and silently terminate, however, I receive something like that instead: Process ForkPoolWorker-1: Done Traceback (most recent call last): File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.8/multiprocessing/pool.py", line 51, in starmapstar return list(itertools.starmap(args[0], args[1])) File "test.py", line 8, in g pass KeyboardInterrupt This obviously means that parent and children processes both raise KeyboardInterrupt from Ctrl+C, further suggested by tests with kill -2. Why does this happen and how to deal with it to achieve what I want?
The signal that triggers KeyboardInterrupt is delivered to the whole pool. The child worker processes treat it the same as the parent, raising KeyboardInterrupt. The easiest solution here is: Disable the SIGINT handling in each worker on creation Ensure the parent terminates the workers when it catches KeyboardInterrupt You can do this easily by passing an initializer function that the Pool runs in each worker before the worker begins doing work: import signal import multiprocessing as mp # Use initializer to ignore SIGINT in child processes with mp.Pool(1, initializer=signal.signal, initargs=(signal.SIGINT, signal.SIG_IGN)) as pool: try: s = pool.starmap(g, [[]] * 1) except KeyboardInterrupt: print('Done') The initializer replaces the default SIGINT handler with one that ignores SIGINT in the children, leaving it up to the parent process to kill them. The with statement in the parent handles this automatically (exiting the with block implicitly calls pool.terminate()), so all you're responsible for is catching the KeyboardInterrupt in the parent and converting from ugly traceback to simple message.
5
6
72,965,428
2022-7-13
https://stackoverflow.com/questions/72965428/why-is-a-cnn-model-struggling-to-classify-a-colored-mnist
I'm trying to classify colored MNIST digits with a basic CNN architecture on Keras. Here is the piece of code that colors the original dataset into purely either red, green or blue. def load_norm_data(): ## load basic mnist (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() train_images = np.zeros((*x_train.shape, 3)) # orig shape: (60 000, 28, 28, 1) -> rgb shape: (60 000, 28, 28, 3) for num in range(x_train.shape[0]): rgb = np.random.randint(3) train_images[num, ..., rgb] = x_train[num]/255 return train_images, y_train if __name__ == '__main__': ims, labels = load_norm_data() for num in range(10): plt.subplot(2, 5, num+1) plt.imshow(ims[num]) plt.axis('off') which gives for the first couple of digits: Then, I attempt to classify this colored dataset into the same 10 digit classes of MNIST, so the labels aren't changing --and yet the models accuracy drops from 95% for non-colored MNIST, to wildly variable 30-70% on colored MNIST, vastly depending on weight initialization... Please find below the architecture of said model: model = keras.Sequential() model.add(keras.layers.Conv2D(64, kernel_size=(3,3), padding='same')) model.add(keras.layers.MaxPool2D(pool_size=(2,2))) model.add(keras.layers.Conv2D(64, kernel_size=(3,3), padding='same')) model.add(keras.layers.MaxPool2D(pool_size=(2,2), padding='same')) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(10, activation='relu')) model.add(keras.layers.Softmax()) input_shape = train_images.shape model.build(input_shape) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() model.fit(train_images, train_numbers, batch_size=12, epochs=25) Initially, I thought that this drop in performance might be linked to data irregularity (e.g. imagine a lot of 3s in the data ended up being green, thus the model learns green = 3). So I checked the data, the counts are good and the rgb distribution for each class is near 33% for each color too. I also checked the misclassified images to see if there were many representatives of a certain color or digit, but it doesn't seem to be the case either. In any case, after reading Keras' documentation and because of the fact that Conv2D forces you to pass it a 2-dimensional kernel_size that I imagine thus operates on all channels of the input image, the model shouldn't be taking color into account for classification here. Am I missing something here?
The last part of the model includes a dense -> relu -> softmax. The relu activation should be removed. In addition, you might benefit from adding non-linearities (e.g., relu) in your convolutional blocks. Otherwise, the neural network will end up being a (big) linear function and will not work as well for non-linear data. model = keras.Sequential() model.add(keras.layers.Conv2D(64, kernel_size=(3,3), padding='same', activation='relu')) model.add(keras.layers.MaxPool2D(pool_size=(2,2))) model.add(keras.layers.Conv2D(64, kernel_size=(3,3), padding='same', activation='relu')) model.add(keras.layers.MaxPool2D(pool_size=(2,2), padding='same')) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(10)) model.add(keras.layers.Softmax()) It is interesting that the original model worked well on the MNIST dataset. I cannot say for sure why, but perhaps the MNIST dataset is simple enough that the model was able to cope. Also, the relu -> softmax would clamp negative values to 0, and maybe there were not many negative values.
4
3
72,966,797
2022-7-13
https://stackoverflow.com/questions/72966797/how-to-debug-on-exceptions-inside-try-except-block
The PyCharm debugger has a feature to set breakpoints on raised exceptions. However, if an exception is handled inside a try/except block, the breakpoint is never triggered. How do I deal with this if I want to debug within the try block? I could comment out the try/except parts, but this seems too cumbersome. Is there a better solution?
In the breakpoint settings (either the icon in the debug toolbar, or Ctrl+Shift+F8), you can set exception breakpoints. The "Activation policy" is usually set by default to "On termination", but since you handle the error, there is no termination. To activate the breakpoint immediately, even if the error is handled, you need to set the activation policy to "On raise".
Note the warning sign which says: "This option may slow down the debugger".
4
3
72,963,015
2022-7-13
https://stackoverflow.com/questions/72963015/replacing-multiple-characters-at-once
Is there any way to replace multiple characters in a string at once, so that instead of doing: "foo_faa:fee,fii".replace("_", "").replace(":", "").replace(",", "") just something like (with str.replace()) "foo_faa:fee,fii".replace(["_", ":", ","], "")
An option that requires no looping or regular expressions is translate: >>> "foo_faa:fee,fii".translate(str.maketrans('', '', "_:,")) "foofaafeefii" Note that for Python 2, the API is slightly different.
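A small follow-up sketch (my addition, not from the original answer): the translation table can be built once with str.maketrans and reused, and maketrans also accepts a mapping if you want replacements rather than removals:
# Build the table once and reuse it.
strip_table = str.maketrans("", "", "_:,")
print("foo_faa:fee,fii".translate(strip_table))   # foofaafeefii

# Mapping form: replace the characters instead of deleting them.
swap_table = str.maketrans({"_": "-", ":": "-", ",": "-"})
print("foo_faa:fee,fii".translate(swap_table))    # foo-faa-fee-fii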
5
11
72,881,426
2022-7-6
https://stackoverflow.com/questions/72881426/python-difficulty-understanding-getlogger-name
Im quite confused on the logging docs' explanation of getLogger(__name__) as a best practice. Gonna be explaining my entire thought process, feel free to leave a comment any time I make a mistake The logging docs says A good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows: logger = logging.getLogger(__name__) Say I have a project structure: main_module.py cookbook_example ---auxiliary_module.py main_module.py import logging from cookbook_example import auxiliary_module # Creates a new logger instance logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) # Create file handler that logs debug messages fh = logging.FileHandler('spam.log', mode='w') fh.setLevel(logging.DEBUG) # Create a formatter formatter = logging.Formatter( '%(asctime)s - %(name)s - %(levelname)s - %(message)s') fh.setFormatter(formatter) logger.addHandler(fh) logger.info('Creating instance of auxiliary_module.Auxiliary') a = auxiliary_module.Auxiliary() logger.info('Calling auxiliary_module.do_something') a.do_something() auxiliary_module.some_function() auxiliary_module.py import logging # create logger module_logger = logging.getLogger(f'__main__.{__name__}') def some_function(): module_logger.info('received a call to "some_function"') Now, from this SO thread, I infer that getLogger(__name__) should not actually be used in EVERY module that uses logging but instead in the module where a logger is configured, which in this case would be main_module.py e.g. In the auxiliary module, trying to get the custom logger through getLogger(__name__) will return the root logger, whereas getLogger(f'__main__.{__name__}') will return that custom logger. To me, this formatting of getLogger(f'__main__.{__name__}') doesn't seem much easier to write than the explicit getLogger('main_module.auxiliary_module'). Furthermore, in the log files it logs __main__.auxiliary_module rather than main_module.auxiliary_module, losing a bit of accuracy. Lastly, I previously stated that to my understanding, getLogger(__name__) should only be placed in the module where the logger is configured. However, configuration should be placed in a config file or dict anyways. Thus, I don't seem to understand any reasonable usage of getLogger(__name__) and how it is, according to the docs, a best practice. Could someone explain this and maybe link a repo that uses loggers with proper organisation that I could refer to? Thanks
Assume this simple project: project/ ├── app.py ├── core │ ├── engine.py │ └── __init__.py ├── __init__.py └── utils ├── db.py └── __init__.py Where app.py is: import logging import sys from utils import db from core import engine logger = logging.getLogger() logger.setLevel(logging.INFO) stdout = logging.StreamHandler(sys.stdout) stdout.setFormatter(logging.Formatter("%(name)s: %(message)s")) logger.addHandler(stdout) def run(): db.start() engine.start() run() and utils/db.py and core/engine.py is: from logging import getLogger print(__name__) # will print utils.db or core.engine logger = getLogger(__name__) print(id(logger)) # different object for each module; child of root though def start(): logger.info("started") If you run this using python app.py, you will see that it takes care of printing the proper namespaces for you. utils.db: started core.engine: started If your code is well organised, your module name itself is the best logger name available. If you had to reinvent those names, it usually means that you have a bad module structure (or some special, non-standard use case). For most purposes though, this should work fine (hence part of stdlib). That is all there is to it. Remember, you don't really set handlers for libraries; that is left to the consumer.
4
7
72,899,105
2022-7-7
https://stackoverflow.com/questions/72899105/how-to-mask-a-polars-dataframe-using-another-dataframe
I have a polars dataframe like so: df = pl.from_repr(""" ┌─────────────────────┬─────────┬─────────┐ │ time ┆ 1 ┆ 2 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ f64 │ ╞═════════════════════╪═════════╪═════════╡ │ 2021-10-02 00:05:00 ┆ 2.9048 ┆ 2.8849 │ │ 2021-10-02 00:10:00 ┆ 48224.0 ┆ 48068.0 │ └─────────────────────┴─────────┴─────────┘ """) and a masking dataframe with similar columns and time value like so: df_mask = pl.from_repr(""" ┌─────────────────────┬───────┬───────┐ │ time ┆ 1 ┆ 2 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ bool ┆ bool │ ╞═════════════════════╪═══════╪═══════╡ │ 2021-10-02 00:05:00 ┆ false ┆ false │ │ 2021-10-02 00:10:00 ┆ true ┆ true │ └─────────────────────┴───────┴───────┘ """) I am looking for this result: shape: (2, 3) ┌─────────────────────┬────────┬─────────┐ │ time ┆ 1 ┆ 2 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ f64 │ ╞═════════════════════╪════════╪═════════╡ │ 2021-10-02 00:05:00 ┆ null ┆ null │ │ 2021-10-02 00:10:00 ┆ 2.8849 ┆ 48068.0 │ └─────────────────────┴────────┴─────────┘ Here I only show with 2 columns '1' and '2' but there could any number of them. Any help would be appreciated!
Having columns in a single DataFrame has guarantees you don't have when you have data in separate tables. Masking out values by columns in another DataFrame is a potential for errors caused by different lengths. For this reason polars does not encourage such operations and therefore you must first create a single DataFrame from the two, and then select the columns/computations you need. So let's do that. The first thing you can do is join the two tables. This guaranteed to work on DataFrames of different sizes. df_a.join(df_mask, on="time", suffix="_mask") This comes however with a price, as joining is not free. If you are 100% certain your dataframes have the same height, you can use a horizontal concat. ( pl.concat( [df_a, df_mask.select(pl.all().name.suffix("_mask"))], how="horizontal" ).select( [pl.col("time")] + [ pl.when(pl.col(f"{name}_mask")).then(pl.col(name)).otherwise(None) for name in ["1", "2"] ] ) ) In the final select query we take the columns we want. And compute the masked values with a when -> then -> otherwise branch. This outputs: shape: (2, 3) ┌─────────────────────┬─────────┬─────────┐ │ time ┆ 1 ┆ 2 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ f64 │ ╞═════════════════════╪═════════╪═════════╡ │ 2021-10-02 00:05:00 ┆ null ┆ null │ │ 2021-10-02 00:10:00 ┆ 48224.0 ┆ 48068.0 │ └─────────────────────┴─────────┴─────────┘
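For completeness, here is a sketch of the join-based variant mentioned above, written by me rather than taken from the original answer; it assumes both frames share the "time" column and that the value columns are literally named "1" and "2" as in the question (the .alias(name) keeps the original column names in the output):
import polars as pl

masked = (
    df_a.join(df_mask, on="time", suffix="_mask")
    .select(
        [pl.col("time")]
        + [
            pl.when(pl.col(f"{name}_mask"))
            .then(pl.col(name))
            .otherwise(None)
            .alias(name)
            for name in ["1", "2"]
        ]
    )
)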
3
4
72,920,189
2022-7-9
https://stackoverflow.com/questions/72920189/select-all-columns-where-column-name-starts-with-string
Given the following dataframe, is there some way to select only columns starting with a given prefix? I know I could do e.g. pl.col(column) for column in df.columns if column.startswith("prefix_"), but I'm wondering if I can do it as part of a single expression. df = pl.DataFrame( {"prefix_a": [1, 2, 3], "prefix_b": [1, 2, 3], "some_column": [3, 2, 1]} ) df.select(pl.all().<column_name_starts_with>("prefix_")) Would this be possible to do lazily?
Starting from Polars 0.18.1 you can use Selectors(polars.selectors.starts_with) which provides more intuitive selection of columns from DataFrame or LazyFrame objects based on their name, dtype or other properties. >>> import polars as pl >>> import polars.selectors as cs >>> >>> df = pl.DataFrame( ... {"prefix_a": [1, 2, 3], "prefix_b": [1, 2, 3], "some_column": [3, 2, 1]} ... ) >>> df shape: (3, 3) ┌──────────┬──────────┬─────────────┐ │ prefix_a ┆ prefix_b ┆ some_column │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞══════════╪══════════╪═════════════╡ │ 1 ┆ 1 ┆ 3 │ │ 2 ┆ 2 ┆ 2 │ │ 3 ┆ 3 ┆ 1 │ └──────────┴──────────┴─────────────┘ >>> # print(df.lazy().select(cs.starts_with("prefix_")).collect()) # for LazyFrame >>> print(df.select(cs.starts_with("prefix_"))) # For DataFrame shape: (3, 2) ┌──────────┬──────────┐ │ prefix_a ┆ prefix_b │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞══════════╪══════════╡ │ 1 ┆ 1 │ │ 2 ┆ 2 │ │ 3 ┆ 3 │ └──────────┴──────────┘
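As a side note (my addition, not from the answer): on versions that predate the selectors module, a similar selection can be done by passing a regex to pl.col, as long as the pattern is wrapped in ^...$:
# Regex-based column selection; df and pl as defined in the question/answer above.
print(df.select(pl.col("^prefix_.*$")))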
10
11
72,897,924
2022-7-7
https://stackoverflow.com/questions/72897924/python-rabbitmq-pika-consumer-how-to-use-async-function-as-callback
I have the following code where I initialize a consumer listening to a queue. consumer = MyConsumer() consumer.declare_queue(queue_name="my-jobs") consumer.declare_exchange(exchange_name="my-jobs") consumer.bind_queue( exchange_name="my-jobs", queue_name="my-jobs", routing_key="jobs" ) consumer.consume_messages(queue="my-jobs", callback=consumer.consume) The problem is that the consume method is defined as follows: async def consume(self, channel, method, properties, body): Inside the consume method, we need to await async functions, but this produces an error "coroutine is not awaited" for the consume function. Is there a way to use async function as a callback in pika?
I annotated my callback with @sync, where sync is:
import asyncio
import functools

def sync(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return asyncio.get_event_loop().run_until_complete(f(*args, **kwargs))
    return wrapper
(I found this decorator in an answer about Celery, but it worked with pika too.)
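A hedged usage sketch (my own addition): with the decorator above, the async callback from the question can be handed to pika as a plain synchronous callable; do_work here is a hypothetical coroutine standing in for the real processing logic, and consumer/consume_messages are the question's own wrapper objects:
@sync
async def consume(channel, method, properties, body):
    await do_work(body)   # hypothetical coroutine doing the actual processing

consumer.consume_messages(queue="my-jobs", callback=consume)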
6
9
72,909,147
2022-7-8
https://stackoverflow.com/questions/72909147/sqlalchemy-relationships-field-to-pydantic-validation-error
I have some models declared with SQLAlchemy declarative base. Their fields represent some IP addresses. When I try to convert instances of these models to pydantic model via orm_mode, it fails with the following error E pydantic.error_wrappers.ValidationError: 4 validation errors for IpSchema E ip_address -> 0 E value is not a valid IPv4 address (type=value_error.ipv4address) E ip_address -> 0 E value is not a valid IPv6 address (type=value_error.ipv6address) E ip_address -> 0 E value is not a valid IPv4 or IPv6 address (type=value_error.ipvanyaddress) E ip_address -> 0 E str type expected (type=type_error.str) The following is the code. I have tried to check it with pytest, but it fails. Can the orm_mode code be overwritten? from typing import List, Union from pydantic import BaseModel, Field, IPvAnyAddress from sqlalchemy import INTEGER, Column, ForeignKey, String from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship Base = declarative_base() class IpModel(Base): __tablename__ = "ip_model" id = Column(INTEGER, primary_key=True, autoincrement=True, index=True) ip_address = relationship("IpAddress", back_populates="ip_model") class IpAddress(Base): __tablename__ = "ip" id = Column(INTEGER, primary_key=True, autoincrement=True, index=True) address = Column(String(64), nullable=False) ip_model_id = Column(INTEGER, ForeignKey("ip_model.id"), nullable=False) ip_model = relationship("IpModel", back_populates="ip_address") class IpSchema(BaseModel): ip_address: List[Union[IPv4Address, IPv6Address, IPvAnyAddress]] = Field() class Config: orm_mode = True def test_ipv4(): ipv4: str = "192.168.1.1" ip = IpAddress(address=ipv4) m = IpModel(ip_address=[ip]) s = IpSchema.from_orm(m) assert str(s.ip_address[0]) == ipv4 How can I solve this problem?
Pydantic does not know how to map each relationship ORM instances to its address field. For that you will need to add a pydantic validator with the pre=True argument in order to map each ORM instance to the address field before pydantic validation. Here is how it should look like class IpSchema(BaseModel): ip_address: List[Union[IPv4Address, IPv6Address, IPvAnyAddress]] = Field() class Config: orm_mode = True @validator('ip_address', pre=True) def validate(cls, ip_adress_relationship, **kwargs): return [ip.address for ip in ip_adress_relationship] Please note that validators with pre=True run before and after setting values to Pydantic model. In your example it changes nothing, but, for example, if you want to transform list of IPs to str, you need to check type of value first: class IpSchema(BaseModel): ip_address: str class Config: orm_mode = True @validator('ip_address', pre=True) def validate(cls, ip_adress_relationship, **kwargs): if isinstance(ip_adress_relationship, str): return ip_adress_relationship return ','.join([ip.address for ip in ip_adress_relationship]) And here is the whole reproducible (and working) example : from typing import List, Union from pydantic import BaseModel, Field, IPvAnyAddress from pydantic import validator from pydantic.schema import IPv4Address from pydantic.schema import IPv6Address from sqlalchemy import INTEGER, Column, ForeignKey, String from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship Base = declarative_base() class IpModel(Base): __tablename__ = "ip_model" id = Column(INTEGER, primary_key=True, autoincrement=True, index=True) ip_address = relationship("IpAddress", back_populates="ip_model") class IpAddress(Base): __tablename__ = "ip" id = Column(INTEGER, primary_key=True, autoincrement=True, index=True) address = Column(String(64), nullable=False) ip_model_id = Column(INTEGER, ForeignKey("ip_model.id"), nullable=False) ip_model = relationship("IpModel", back_populates="ip_address") class IpSchema(BaseModel): ip_address: List[Union[IPv4Address, IPv6Address, IPvAnyAddress]] = Field() class Config: orm_mode = True @validator('ip_address', pre=True) def validate(cls, ip_adress_relationship, **kwargs): return [ip.address for ip in ip_adress_relationship] def test_ipv4(): ipv4: str = "192.168.1.1" ip = IpAddress(address=ipv4) m = IpModel(ip_address=[ip]) s = IpSchema.from_orm(m) assert str(s.ip_address[0]) == ipv4 if __name__ == '__main__': test_ipv4()
4
2
72,954,928
2022-7-12
https://stackoverflow.com/questions/72954928/type-annotations-for-sqlalchemy-model-declaration
I can't figure how i could type annotate my sqlalchemy models code, what kind of type should i use for my model fields. class Email(Model): __tablename__ = 'emails' name: Column[str] = Column(String, nullable=False) sender: Column[str] = Column( String, default=default_sender, nullable=False ) subject: Column[str] = Column(String, nullable=False) html: Column[str] = Column(String, nullable=False) template_id: Column[UUID] = Column( postgresql.UUID(as_uuid=True), ForeignKey('templates.id'), index=True, ) Column type as well as Mapped satisfies the type linter of my editor, but it doesn't look like the right one as opposed to simple types like str/int/uuid Should i use Optional[Column[str]] or Column[Optional[str]]? What to use for relationship? class Foo(Model): ... bar: Bar = relationship( 'Bar', order_by="desc(Bar.created_at)", lazy='dynamic', ) The result of accessing the relationship varies depending on the attribute lazy.
For SQLAlchemy 2.0 it would be something like: import uuid from sqlalchemy.orm import Mapped, mapped_column from sqlalchemy.dialects.postgresql import UUID class Email(Model): __tablename__ = 'emails' name: Mapped[str] = mapped_column(String, nullable=False) sender: Mapped[str] = mapped_column( String, default=default_sender, nullable=False ) subject: Mapped[str] = mapped_column(String, nullable=False) html: Mapped[str] = mapped_column(String, nullable=False) template_id: Mapped[uuid.UUID] = mapped_column( UUID(as_uuid=True), ForeignKey('templates.id'), index=True, )
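Regarding the relationship part of the question, which the answer above does not cover: in SQLAlchemy 2.0 relationships are typed with Mapped as well, and lazy='dynamic' has its own DynamicMapped annotation. This is my own sketch, not part of the accepted answer; Model is the question's declarative base and Bar is assumed to be another mapped class with a created_at column:
from typing import List
from sqlalchemy.orm import DynamicMapped, Mapped, mapped_column, relationship

class Foo(Model):
    __tablename__ = 'foo'

    id: Mapped[int] = mapped_column(primary_key=True)

    # plain collection relationship
    bars: Mapped[List["Bar"]] = relationship(order_by="desc(Bar.created_at)")

    # or, if you really want lazy='dynamic':
    # bars: DynamicMapped["Bar"] = relationship(order_by="desc(Bar.created_at)", lazy="dynamic")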
5
7
72,906,257
2022-7-8
https://stackoverflow.com/questions/72906257/what-is-nondynamicallyquantizablelinear
I found NonDynamicallyQuantizableLinear while reading torch.nn.modules.activation.py in MultiheadAttention class And I have a few questions about it What is the difference between Linear and NonDynamicallyQuantizableLinear? Why is NonDynamicallyQuantizableLinear used for MultiheadAttention?
It is introduced as a temporary cleanup measure to fail immediately when attempting to directly quantize torch.nn.MultiheadAttention. This improves upon the baseline behavior of failing silently. See https://github.com/pytorch/pytorch/issues/58969
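To make this a bit more concrete (my own illustration, not part of the original answer): the out_proj layer of nn.MultiheadAttention is where this class shows up in recent PyTorch builds:
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=64, num_heads=4)
print(type(mha.out_proj).__name__)  # NonDynamicallyQuantizableLinear on recent PyTorch versions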
6
3
72,915,808
2022-7-8
https://stackoverflow.com/questions/72915808/how-to-create-a-custom-sort-order-for-the-api-methods-in-fastapi-swagger-autodoc
How can I set a custom sort order for the API methods in FastAPI Swagger autodocs? This question shows how to do it in Java. My previous question asked how to sort by "Method", which is a supported sorting method. I would really like to take this a step further, so that I can determine which order the methods appear. Right now DELETE appears at the top, but I want API methods to be in the order: GET, POST, PUT, DELETE. I know it is possible to implement a custom sort in JavaScript and give that function to operationsSorter, but you can't include it from the swagger_ui_parameters property that is available in the Python bindings. Is there some way to accomplish this in Python? from fastapi import FastAPI app = FastAPI(swagger_ui_parameters={"operationsSorter": "method"}) @app.get("/") def list_all_components(): pass @app.get("/{component_id}") def get_component(component_id: int): pass @app.post("/") def create_component(): pass @app.put("/{component_id}") def update_component(component_id: int): pass @app.delete("/{component_id}") def delete_component(component_id: int): pass
You can use tags to group your endpoints. To do that, pass the parameter tags with a list of str (commonly just one str) to your endpoints. Use the same tag name for endpoints that use the same HTTP method, so that you can group your endpoints that way. For example, use Get as the tag name for GET operations (Note: Get is only an example here, you can use any tag name you wish). Doing just the above would most likely define the endpoints in the order you wish (i.e., GET, POST, PUT, DELETE). However, to ensure this—or, even in case you would like to define a different order for your methods/tags—you could add metadata for the different tags used to group your endpoints. You could do that using the parameter openapi_tags, which takes a list containing one dictionary for each tag. Each dictionary should at least contain the name, which should be the same tag name used in the tags parameter. The order of each tag metadata dictionary also defines the order shown in the docs UI. Working Example from fastapi import FastAPI tags_metadata = [ {"name": "Get"}, {"name": "Post"}, {"name": "Put"}, {"name": "Delete"} ] app = FastAPI(openapi_tags=tags_metadata) @app.get("/", tags=["Get"]) def list_all_components(): pass @app.get("/{component_id}", tags=["Get"]) def get_component(component_id: int): pass @app.post("/", tags=["Post"]) def create_component(): pass @app.put("/{component_id}", tags=["Put"]) def update_component(component_id: int): pass @app.delete("/{component_id}", tags=["Delete"]) def delete_component(component_id: int): pass
9
7
72,935,514
2022-7-11
https://stackoverflow.com/questions/72935514/cant-solve-systemerror-unknown-opcode
I am executing a notebook on my laptop and I get the following error.
XXX lineno: 17, opcode: 120
---------------------------------------------------------------------------
SystemError Traceback (most recent call last)
Input In [3], in <cell line: 3>()
1 gym = Gym(0, 0, 0, 0).from_dill(BACKUP)
2 ticker = gym.api.returnTicker()
----> 3 gym.wallet = gym.get_wallet()
4 plot_donut_gym_wallet(gym)
5 plot_donut_gym_wallet_makers(gym)
File <ipython-input-3-1c4842a503bf>:17, in get_wallet(self)
SystemError: unknown opcode
As you can see, the error happens during a function call. The function itself is not the problem: if I define and run the function inside a cell, it simply works. But importing the function from its own module leads to this error.
I have looked around for hints; all the forums I have read point to some problem with having multiple Python installations. However, I tried using new environments, both with venv and conda, and I get the same error. The same code works on other machines, so it appears to be something related to my particular installation, but I can't figure out how to fix it. I tried reinstalling conda, making new envs, and upgrading Python. I tested this with Python 3.8, 3.9 and 3.10, and I always get the same error.
Any help is very welcome.
Since the function is from a class previously serialized using dill, this dill-related issue may be relevant: https://github.com/uqfoundation/dill/issues/438
Code objects (from functions and class methods) serialized with dill are not guaranteed to work across different Python versions, because the list of valid opcodes change from version to version. In cases where there is an opcode unknown to the interpreter deserializing ("unpickling") the code object, the functions and classes are successfully recreated, but fail to execute as in your example. The code in your example may have been created in Python 2, or Python 3.11 beta... It's impossible to know just by looking. You should ask whoever created the pickle file. Edit: If you don't have the source code of the functions and methods, you may be able to reconstruct their source using a decompiler like decompyle3 (or uncompyle6 for other Python versions) and re-generate these faulty code objects with another Python version. An example in Python3.7: >>> def f(x, *, verbose=False): ... y = x*x ... if verbose: ... print(y) ... return y ... >>> import inspect >>> f_sign = inspect.signature(f) <Signature (x, *, verbose=False)> >>> >>> import os >>> import decompyle3 >>> with open(os.devnull, 'w') as null: ... f_code = decompyle3.main.decompile(f.__code__, out=null) ... >>> f_code <decompyle3.semantics.pysource.SourceWalker object at 0x7f8932f14990> >>> >>> from textwrap import indent >>> print("def {}{}:\n{}".format(f.__name__, f_sign, indent(f_code.text, " "))) def f(x, *, verbose=False): y = x * x if verbose: print('x**2 =', y) return y
3
4
72,950,907
2022-7-12
https://stackoverflow.com/questions/72950907/how-to-fix-userwarning-distutils-was-imported-before-setuptools
When I cloned some packages including python tools, an error occured: Errors << unique_id:cmake /home/scpark/cps_ws/logs/unique_id/build.cmake.001.log CMake Warning (dev) at CMakeLists.txt:2 (project): Policy CMP0048 is not set: project() command manages VERSION variables. Run "cmake --help-policy CMP0048" for policy details. Use the cmake_policy command to set the policy and suppress this warning. The following variable(s) would be set to empty: CMAKE_PROJECT_VERSION CMAKE_PROJECT_VERSION_MAJOR CMAKE_PROJECT_VERSION_MINOR CMAKE_PROJECT_VERSION_PATCH This warning is for project developers. Use -Wno-dev to suppress it. /opt/ros/noetic/lib/python3/dist-packages/_distutils_hack/__init__.py:18: UserWarning: Distutils was imported before Setuptools, but importing Setuptools also replaces the `distutils` module in `sys.modules`. This may lead to undesirable behaviors or errors. To avoid these issues, avoid using distutils directly, ensure that setuptools is installed in the traditional way (e.g. not an editable install), and/or make sure that setuptools is always imported before distutils. warnings.warn( /opt/ros/noetic/lib/python3/dist-packages/_distutils_hack/__init__.py:33: UserWarning: Setuptools is replacing distutils. warnings.warn("Setuptools is replacing distutils.") usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: setup.py --help [cmd1 cmd2 ...] or: setup.py --help-commands or: setup.py cmd --help ERROR: invalid command 'unique_id' CMake Error at /opt/ros/noetic/share/catkin/cmake/safe_execute_process.cmake:11 (message): execute_process(/home/scpark/cps_ws/build/unique_id/catkin_generated/env_cached.sh "/usr/bin/python3" "/opt/ros/noetic/share/catkin/cmake/interrogate_setup_dot_py.py" "unique_id" "/home/scpark/cps_ws/src/unique_identifier/unique_id/setup.py" "/home/scpark/cps_ws/build/unique_id/catkin_generated/setup_py_interrogation.cmake") returned error code 1 Call Stack (most recent call first): /opt/ros/noetic/share/catkin/cmake/catkin_python_setup.cmake:46 (safe_execute_process) CMakeLists.txt:8 (catkin_python_setup) cd /home/scpark/cps_ws/build/unique_id; catkin build --get-env unique_id | catkin env -si /usr/bin/cmake /home/scpark/cps_ws/src/unique_identifier/unique_id --no-warn-unused-cli -DCATKIN_DEVEL_PREFIX=/home/scpark/cps_ws/devel/.private/unique_id -DCMAKE_INSTALL_PREFIX=/home/scpark/cps_ws/install; cd - .................................................................................................................................. Failed << unique_id:cmake [ Exited with code 1 ] Failed <<< unique_id [ 0.5 seconds ] Abandoned <<< unique_identifier [ Unrelated job failed ] [build] Summary: 1 of 3 packages succeeded. [build] Ignored: None. [build] Warnings: None. [build] Abandoned: 1 packages were abandoned. [build] Failed: 1 packages failed. [build] Runtime: 0.7 seconds total. Anyone who knows how to fix it? /opt/ros/noetic/lib/python3/dist-packages/_distutils_hack/init.py:18: UserWarning: Distutils was imported before Setuptools, but importing Setuptools also replaces the distutils module in sys.modules. This may lead to undesirable behaviors or errors. To avoid these issues, avoid using distutils directly, ensure that setuptools is installed in the traditional way (e.g. not an editable install), and/or make sure that setuptools is always imported before distutils.
I had the same problem after upgrading my system to 20.04 and ROS Noetic. As suggested in this answer, upgrading setuptools solves the problem. But I would actually do a user install like this pip3 install --user --upgrade pip setuptools as it avoids conflicts with the system package.
4
1
72,898,620
2022-7-7
https://stackoverflow.com/questions/72898620/trigger-a-dash-dashboard-on-a-button-click
I am working on a dash app, where I try to integrate ExplainerDashboard. If I do it like this: app.config.external_stylesheets = [dbc.themes.BOOTSTRAP] app.layout = html.Div([ html.Button('Submit', id='submit', n_clicks=0), html.Div(id='container-button-basic', children='') ]) X_train, y_train, X_test, y_test = titanic_survive() model = LogisticRegression().fit(X_train, y_train) explainer = ClassifierExplainer(model, X_test, y_test) db = ExplainerDashboard(explainer, shap_interaction=False) db.explainer_layout.register_callbacks(app) @app.callback( Output('container-button-basic', 'children'), Input('submit', 'n_clicks'), ) def update_output(n_clicks): if n_clicks == 1: return db.explainer_layout.layout() The dashboard gets triggered on the button click, however, it is calculated before I click the button and when the dash starts. If I change it and put the calculations inside the callback like this, I get the dashboard but it looks the register callback doesn't work and all the plots are empty app.config.external_stylesheets = [dbc.themes.BOOTSTRAP] app.layout = html.Div([ html.Button('Submit', id='submit', n_clicks=0), html.Div(id='container-button-basic', children='') ]) X_train, y_train, X_test, y_test = titanic_survive() model = LogisticRegression().fit(X_train, y_train) explainer = ClassifierExplainer(model, X_test, y_test) @app.callback( Output('container-button-basic', 'children'), Input('submit', 'n_clicks'), ) def update_output(n_clicks): if n_clicks == 1: db = ExplainerDashboard(explainer, shap_interaction=False) db.explainer_layout.register_callbacks(app) return db.explainer_layout.layout()
The reason why your example doesn't work is that in Dash, callbacks must be registered before the server starts. Hence, you cannot register new callbacks from within a callback. Data pre-processing pipeline I think the cleanest solution would be move the data processing to a pre-processing pipeline. It could be something as simple as a notebook running on the Dataiku node. The code would be along the lines of from explainerdashboard import ClassifierExplainer from explainerdashboard.datasets import titanic_survive from sklearn.linear_model import LogisticRegression X_train, y_train, X_test, y_test = titanic_survive() model = LogisticRegression().fit(X_train, y_train) explainer = ClassifierExplainer(model, X_test, y_test) explainer.dump("/data/dataiku/titanic.joblib") # save to some writeable location The corresponding webapp code would be something like, import dash_bootstrap_components as dbc from dash import Dash from explainerdashboard import ClassifierExplainer, ExplainerDashboard explainer = ClassifierExplainer.from_file("/data/dataiku/titanic.joblib") # load pre-processed data db = ExplainerDashboard(explainer, shap_interaction=False) app.config.external_stylesheets = [dbc.themes.BOOTSTRAP] app.layout = db.explainer_layout.layout() db.explainer_layout.register_callbacks(app) The deployment process would then be to (1) run the notebook and (2) (re)start the webapp backend. Note that this process must be repeated for the app to pickup new data. Callback registration using mock data Another approach could be to use a mock dataset that is small, but has the same structure as your normal (large) dataset, for constructing the ExplainerDashboard during app initialisation. This approach enables fast initial loading, and callback registration before app start. You could then use a callback to load the complete dataset afterwards, i.e. similar to your original idea. Here is some example code, import dash_bootstrap_components as dbc from dash import html, Dash, Output, Input, dcc from dash.exceptions import PreventUpdate from explainerdashboard import ClassifierExplainer, ExplainerDashboard from explainerdashboard.datasets import titanic_survive from sklearn.linear_model import LogisticRegression def get_explainer(X_train, y_train, X_test, y_test, limit=-1): model = LogisticRegression().fit(X_train[:limit], y_train[:limit]) return ClassifierExplainer(model, X_test[:limit], y_test[:limit]) def inject_inplace(src, dst): for attr in dir(dst): try: setattr(dst, attr, getattr(src, attr)) except AttributeError: pass except NotImplementedError: pass X_train, y_train, X_test, y_test = titanic_survive() # Create explainer with minimal data to ensure fast initial load. explainer = get_explainer(X_train, y_train, X_test, y_test, limit=5) dashboard = ExplainerDashboard(explainer, shap_interaction=False) # Setup app with (hidden) dummy classifier layout. dummy_layout = html.Div(dashboard.explainer_layout.layout(), style=dict(display="none")) app = Dash() # not needed in Dataiku app.config.external_stylesheets = [dbc.themes.BOOTSTRAP] app.layout = html.Div([ html.Button('Submit', id='submit', n_clicks=0), dcc.Loading(html.Div(id='container', children=dummy_layout), fullscreen=True) ]) # Register the callback before the app starts. 
dashboard.explainer_layout.register_callbacks(app) @app.callback(Output('container', 'children'), Input('submit', 'n_clicks')) def load_complete_dataset(n_clicks): if n_clicks != 1: raise PreventUpdate # Replace in-memory references to the full dataset to sure callbacks target the full dataset. full_explainer = get_explainer(X_train, y_train, X_test, y_test) inject_inplace(full_explainer, explainer) return ExplainerDashboard(explainer, shap_interaction=False).explainer_layout.layout() if __name__ == "__main__": app.run_server(port=9024, debug=False)
5
1
72,942,995
2022-7-11
https://stackoverflow.com/questions/72942995/whats-the-benefit-of-a-shared-build-python-vs-a-static-build-python
This question has been bothering me for two weeks and I've searched online and asked people but couldn't get an answer. Python by default build the library libpythonMAJOR.MINOR.a and statically links it into the interpreter. Also it has an --enable-shared flag, which will build a share library libpythonMAJOR.MINOR.so.1.0, and dynamically link it to the interpreter. Based on my poor CS knowledge, the first thought came into my mind when I saw "shared library", is that, "the shared bulid one must save a lot of memory compared to the static build one!". Then I had this assumption: # share build 34K Jun 29 11:32 python3.9 21M Jun 29 11:32 libpython3.9.so.1.0 10 shared python processes, mem usage = 0.034M * 10 + 21M ≈ 21M # static build 22M Jun 27 23:45 python3.9 10 static python processes, mem usage = 10*22M = 220M shared python wins! Later I ran a toy test on my machine and found that's wrong. test.py import time i = 0 while i < 20: time.sleep(1) i += 1 print('done') mem_test.sh #! /bin/bash for i in {1..1000} do ./python3.9 test.py & done For share python to run I set export LD_LIBRARY_PATH=/home/tian/py3.9.13_share/lib . I ran mem_test.sh separately (one by one) with 2 pythons and simply monitored the total mem usage via htop in another console. It turns out that both eat almost the same amount of memory. Later on people taught me there's something call "paging on demand": Is an entire static program loaded into memory when launched? How does an executable get loaded into RAM, does the whole file get loaded into RAM even when the whole file won't be needed, or does it get loaded in "chunks"? so my previous calculation of static python mem usage is completely wrong. Now I am confused. Shared build python doesn't use less memory via a share library runtime? Question: What's the benefit of a shared build python vs a static build python? Or the shared built python indeed save some memory by the mechanism of using a share library, but my test is too trival to reveal? P.S. Checking some python official Dockerfiles, e.g. this one you would see they all set --enable-shared. Also there's related issue on pyenv https://github.com/pyenv/pyenv/issues/2294 , it seems that neither they figure that out.
It turns out that others are talking about the scenario "Embedding Python in Another Application" (https://docs.python.org/3/extending/embedding.html). If that's the case, then "saving disk space" and the other mentioned reasons make sense, because when embedding Python in another application you need to either statically link libpythonMAJOR.MINOR.a or dynamically link libpythonMAJOR.MINOR.so.1.0.
So my current conclusion is that whether Python is built shared or statically only affects the "Embedding Python in Another Application" scenario. For normal use cases, e.g. running the Python interpreter, it doesn't make much difference.
Update: Disk usage comparison, see comments in the makefile: https://stackoverflow.com/a/73099136/5983841
4
1
72,901,475
2022-7-7
https://stackoverflow.com/questions/72901475/dynamically-updating-values-of-a-field-depending-on-the-choice-selected-in-anoth
I have two tables. Inventory and Invoice. InventoryModel: from django.db import models class Inventory(models.Model): product_number = models.IntegerField(primary_key=True) product = models.TextField(max_length=3000, default='', blank=True, null=True) title = models.CharField('Title', max_length=120, default='', blank=True, unique=True) amount = models.IntegerField('Unit Price', default=0, blank=True, null=True) def __str__(self): return self.title InvoiceModel: from django.db import models from inventory.models import Inventory class Invoice(models.Model): invoice_number = models.IntegerField(blank=True, primary_key=True) line_one = models.ForeignKey(Inventory, on_delete=models.CASCADE, related_name='+', verbose_name="Line 1", blank=True, default='', null=True) line_one_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_one_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_one_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, null=True) line_two = models.ForeignKey(Inventory, on_delete=models.CASCADE, related_name='+', verbose_name="Line 2", blank=True, default='', null=True) line_two_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_two_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_two_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, null=True) line_three = models.ForeignKey(Inventory, on_delete=models.CASCADE, related_name='+', verbose_name="Line 3", blank=True, default='', null=True) line_three_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_three_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_three_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, null=True) line_four = models.ForeignKey(Inventory, on_delete=models.CASCADE,related_name='+', verbose_name="Line 4", blank=True, default='', null=True) line_four_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_four_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_four_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, null=True) line_five = models.ForeignKey(Inventory, on_delete=models.CASCADE, related_name='+', verbose_name="Line 5", blank=True, default='', null=True) line_five_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_five_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_five_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, null=True) line_six = models.ForeignKey(Inventory, on_delete=models.CASCADE, related_name='+', verbose_name="Line 6", blank=True, default='', null=True) line_six_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_six_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_six_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, null=True) line_seven = models.ForeignKey(Inventory, on_delete=models.CASCADE, related_name='+', verbose_name="Line 7", blank=True, default='', null=True) line_seven_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_seven_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_seven_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, 
null=True) line_eight = models.ForeignKey(Inventory, on_delete=models.CASCADE, related_name='+', verbose_name="Line 8", blank=True, default='', null=True) line_eight_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_eight_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_eight_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, null=True) line_nine = models.ForeignKey(Inventory, on_delete=models.CASCADE, related_name='+', verbose_name="Line 9", blank=True, default='', null=True) line_nine_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_nine_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_nine_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, null=True) line_ten = models.ForeignKey(Inventory, on_delete=models.CASCADE, related_name='+', verbose_name="Line 10", blank=True, default='', null=True) line_ten_quantity = models.IntegerField('Quantity', default=0, blank=True, null=True) line_ten_unit_price = models.IntegerField('Unit Price(₹)', default=0, blank=True, null=True) line_ten_total_price = models.IntegerField('Line Total(₹)', default=0, blank=True, null=True) def __unicode__(self): return self.invoice_number I have a form where the person selects an Inventory.title in the line_one. I want the line_one_unit_price to be automatically filled according to the option selected in the line_one. That is, I want the amount of that product to be displayed. I tried using a JSON object and then sending it to the template. views.py: def add_invoice(request): form = InvoiceForm(request.POST or None) data = Inventory.objects.all() dict_obj = model_to_dict(data) serialized = json.dumps(dict_obj) total_invoices = Invoice.objects.count() queryset = Invoice.objects.order_by('-invoice_date')[:6] if form.is_valid(): form.save() messages.success(request, 'Successfully Saved') return redirect('/invoice/list_invoice') context = { "form": form, "title": "New Invoice", "total_invoices": total_invoices, "queryset": queryset, "serialized":serialized, } return render(request, "entry.html", context) I am not sure how I can access that JSON file in javascript? what will be the name of that JSON object? Is it possible to access that in JavaScript? How can I use it to update the price of an item in the forms dynamically? Thanks. EDIT: I added this in the HTML file: {{ data|json_script:"hello-data" }} <script type="text/javascript"> const data = JSON.parse(document.getElementById('hello-data').textContent); document.getElementById('id_line_one').onchange = function(){ document.getElementById('id_line_one_unit_price').value = data[this.value]; }; </script> views.py: def add_invoice(request): form = InvoiceForm(request.POST or None) data = serializers.serialize("json", Inventory.objects.all()) total_invoices = Invoice.objects.count() queryset = Invoice.objects.order_by('-invoice_date')[:6] if form.is_valid(): form.save() messages.success(request, 'Successfully Saved') return redirect('/invoice/list_invoice') context = { "form": form, "title": "New Invoice", "total_invoices": total_invoices, "queryset": queryset, "data": data, } return render(request, "entry.html", context) The output I get when I select an item is {? I don't know why? I guess it's because the logic I used in the javascript is incorrect? 
JSON File: [{"model": "inventory.inventory", "pk": 1, "fields": {"product": "nice", "title": "Asus", "amount": 56000}}, {"model": "inventory.inventory", "pk": 2, "fields": {"product": "nice", "title": "Lenovo", "amount": 55000}}] Edit2: I added this in the script: <script type="text/javascript"> const data = JSON.parse(document.getElementById('hello-data').textContent); document.getElementById('id_line_one').onchange = function(){ var line_one=document.getElementById('id_line_one').value; document.getElementById('id_line_one_unit_price').value = data.fields[line_one].amount; }; </script> And now I am getting an Uncaught TypeError: Cannot read properties of undefined (reading '1') error. Whenever I select the object from the dropdown, It's value is stored from 1, etc. So the first object in the list returns it's primary key, i.e., 1. I think I am accessing the fields in a wrong way.
I figured out how to achieve this functionality. I wish I could give the credit to a single person, but it's really a combination of many people's answers.
In the views.py which returns the HTML page where I want this functionality, I wrote this code; it returns the Inventory data to the HTML file:
    model_data=Inventory.objects.values_list('product_number','amount')
    data=[model for model in model_data.values()]

    context = {
        "data": data,
    }
    return render(request, "entry.html", context)
In the entry.html, I access the data using:
{{ data|json_script:"hello-data" }}
Then the data I get is:
[{"product_number": 1, "product": "Laptop", "title": "Lenovo", "amount": 50000}, {"product_number": 2, "product": "Single Table Tennis Ball", "title": "Table Tennis Ball Share", "amount": 4}]
In the entry.html I used some javascript:
<script type="text/javascript">
   var data = JSON.parse(document.getElementById('hello-data').textContent);
   document.getElementById('id_line_one').onchange = function(event){
        var data1 = data.find(({product_number}) => product_number == event.target.value);
        document.getElementById('id_line_one_unit_price').value = data1 && data1.amount ? data1.amount : 0;
   };
</script>
So, I parse the JSON data, and then when the user clicks the drop-down menu whose id is id_line_one, I check whether the returned number (i.e., product_number) matches any product_number present in the variable data, and then I change the amount of that respective field.
Reference: https://stackoverflow.com/a/72923853/9350154
4
0
72,914,328
2022-7-8
https://stackoverflow.com/questions/72914328/compare-similarity-of-two-names-and-identify-duplicates-with-neural-network
I have a dataset which contains pairs of names, it looks like this: ID; name1; name2 1; Mike Miller; Mike Miler 2; John Doe; Pete McGillen 3; Sara Johnson; Edita Johnson 4; John Lemond-Lee Peter; John LL. Peter 5; Marta Sunz; Martha Sund 6; John Peter; Johanna Petera 7; Joanna Nemzik; Joanna Niemczik I have some cases, which are labelled. So I check them manually and decide if these are duplicates or not. The manual judgement in these cases would be: 1: Is a duplicate 2: Is not a duplicate 3: Is not a duplicate 4: Is a duplicate 5: Is not a duplicate 6: Is not a duplicate 7: Is a duplicate (The 7th case is a specific case, because here phonetics come into the game too. However, this is not the main problem, I am ok with ignoring phonetics.) A first approach would be to calculate the Levenshtein-distance for each pair and mark those as a duplicate, where the Levenshtein-distance is for example less or equal than 2. This would lead to the following output: 1: Levenshtein distance: 2 => duplicate 2: Levenshtein distance: 11 => not a duplicate 3: Levenshtein distance: 4 => not a duplicate 4: Levenshtein distance: 8 => not a duplicate 5: Levenshtein distance: 2 => duplicate 6: Levenshtein distance: 4 => not a duplicate 7: Levenshtein distance: 2 => duplicate This would be an approach which uses a "fixed" algorithm based on the Levinshtein distance. Now, I would like to do this task with using a neural network / machine learning: I do not need the neural network to detect semantic similarity, like "hospital" and "clininc". However, I would like to avoid the Levenshtein-distance, as I would like the ML algorithm to be able to detect "John Lemond-Lee Peter" and "John LL. Peter" as a potential duplicate, also not with a 100% certainty. The Levenshtein distance would lead to a relative high number in this case (8), as there are quite some characters to be added. In a case like "John Peter" and "Johanna Petera" the Levenshtein-distance would lead to a smaller number (4), however this is in fact no duplicate and for this case I would hope that the ML algorithm would be able to detect that this is likely not a duplicate. So I need the ML algorithm to "learn the way I need the duplicates to be checked". With my labelling I would give as an input I would give the ML algorithm the direction, of what I want. I actually thought that this should be an easy task for a ML algorithm / neural network, but I am not sure. How can I implement a neural network to compare the pairs of names and identify duplicates without using an explicit distance metric (like the Levenshtein distance, euclidean etc.)? I thought that it would be possible to convert the strings to numbers and a neural network can work with this and learn to detect duplicates according to my labelling style. So without having to specify a distance metric. I thought about an human: I would give this task to a person and this person would judge and make a decision. This person has no clue about a Levenshtein-distance or any other mathematical concept. So I just want to train the neural network to learn to do what the human is doing. Of course, every human is different and it also depends on my labelling. (Edit: The ML/neural network solutions I have seen so far (like this) use a metric like levenshtein as a feature input. But as I said I thought it should be possible to teach the neural network the "human judgement" without making use of such a distance measure? 
Regarding my specific case with having pairs of names: What would the benefit be a of a ML approach using levenshtein distance as a feature? Because it will just detect those pairs of names as a duplicate that have a low levenshtein distance. So I could use a simple algorithm to mark a pair as duplicate if the levenshtein distance between the two names is less than x. Why use a ML instead, what would be the additional benefit?)
I have read your whole question carefully, but I still don't know why you want a neural network for that.

Real, sad answer

Tweak edit distance (a more general distance than Levenshtein) by adding some weights. Idea: swapping characters that are close on the keyboard is more likely than swapping those that are far away, so the distance between Asa and Ada is smaller than between Asa and Ala. Case (4) you can cover with a regex.

Happy answer

If you insist on going with an ML solution, here is the sketch of what I would do if forced (see the code sketch below):

Prepare a lot of labelled pairs (a lot means e.g. 50 thousand).
Pad the names to a constant length (e.g. 32 characters).
Apply character-level encoding (one-hot should do the job).
Train a binary classifier (e.g. in the form of a siamese network) on such inputs.
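Below is a minimal, hedged sketch of those steps (padding, character-level one-hot encoding, and a small dense classifier standing in for the siamese network). The alphabet, padding length, layer sizes and the tiny example pairs are assumptions for illustration only; a real model would need the large labelled set mentioned above.

import numpy as np
import string
from tensorflow import keras

ALPHABET = string.ascii_lowercase + " -."   # assumed character set
MAXLEN = 32                                 # pad/truncate names to 32 characters

def encode(name):
    # One-hot encode a single name as a (MAXLEN, len(ALPHABET)) matrix
    mat = np.zeros((MAXLEN, len(ALPHABET)), dtype="float32")
    for i, ch in enumerate(name.lower()[:MAXLEN]):
        if ch in ALPHABET:
            mat[i, ALPHABET.index(ch)] = 1.0
    return mat

def encode_pair(name1, name2):
    # Concatenate both encodings into one flat feature vector
    return np.concatenate([encode(name1).ravel(), encode(name2).ravel()])

# Tiny illustrative sample; label 1 means duplicate
pairs = [("Mike Miller", "Mike Miler", 1), ("John Doe", "Pete McGillen", 0)]
x = np.stack([encode_pair(a, b) for a, b, _ in pairs])
y = np.array([p[2] for p in pairs])

# A small dense classifier standing in for the siamese network
model = keras.Sequential([
    keras.layers.Input(shape=(x.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x, y, epochs=10)  # with ~50k labelled pairs, not this toy sample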
5
1
72,949,464
2022-7-12
https://stackoverflow.com/questions/72949464/python-ppt-find-and-replace-within-a-chart
I already referred these posts here here, here and here. Please don't mark it as a duplicate. I have a chart embedded inside the ppt like below I wish to replace the axis headers from FY2021 HC to FY1918 HC. Similarly, FY2122 HC should be replaced with FY1718 HC. How can I do this using python pptx? This chart is coming from embedded Excel though. Is there anyway to change it in ppt? When I tried the below, it doesn't get the axis headers text_runs = [] for shape in slide.shapes: if not shape.has_text_frame: continue for paragraph in shape.text_frame.paragraphs: for run in paragraph.runs: text_runs.append(run.text) when I did the below, I find the list of shape types from the specific slide. I wish to change only the chart headers. So, the screenshot shows only two charts that I have in my slide. for slide in ip_ppt.slides: for shape in slide.shapes: print("id: %s, type: %s" % (shape.shape_id, shape.shape_type)) id: 24, type: TEXT_BOX (17) id: 10242, type: TEXT_BOX (17) id: 11306, type: TEXT_BOX (17) id: 11, type: AUTO_SHAPE (1) id: 5, type: TABLE (19) id: 7, type: TABLE (19) id: 19, type: AUTO_SHAPE (1) id: 13, type: CHART (3) id: 14, type: CHART (3) When I try to access the shape using id, I am unable to as well ip_ppt.slides[5].shapes[13].Chart I also tried the code below from pptx import chart from pptx.chart.data import CategoryChartData chart_data = CategoryChartData() chart.datalabel = ['FY1918 HC', 'FY1718 HC'] Am new to python and pptx. Any solution on how to edit the embedded charts headers would really be useful. Help please
You can get to the category labels the following way: from pptx import Presentation from pptx.shapes.graphfrm import GraphicFrame prs = Presentation('chart-01.pptx') for slide in prs.slides: for shape in slide.shapes: print("slide: %s, id: %s, index: %s, type: %s" % (slide.slide_id, shape.shape_id, slide.shapes.index(shape), shape.shape_type)) if isinstance(shape, GraphicFrame) and shape.has_chart: plotIndex = 0 for plot in shape.chart.plots: catIndex = 0 for cat in plot.categories: print(" plot %s, category %s, category label: %s" % (plotIndex, catIndex, cat.label)) catIndex += 1 plotIndex += 1 which will put out something like that: slide: 256, id: 2, index: 0, type: PLACEHOLDER (14) slide: 256, id: 3, index: 1, type: CHART (3) plot 0, category 0, category label: East plot 0, category 1, category label: West plot 0, category 2, category label: Midwest Unfortunately you can not change the category label, because it is stored in the embedded Excel. The only way to change those is to replace the chart data by using the chart.replace_data() method. Recreating the ChartData object you need for the call to replace_data based on the existing chart is a bit more involved, but here is my go at it based on a chart that I created with the following code: from pptx import Presentation from pptx.chart.data import CategoryChartData from pptx.enum.chart import XL_CHART_TYPE,XL_LABEL_POSITION from pptx.util import Inches, Pt from pptx.dml.color import RGBColor # create presentation with 1 slide ------ prs = Presentation() slide = prs.slides.add_slide(prs.slide_layouts[5]) # define chart data --------------------- chart_data = CategoryChartData() chart_data.categories = ['FY2021 HC', 'FY2122 HC'] chart_data.add_series('blue', (34.5, 31.5)) chart_data.add_series('orange', (74.1, 77.8)) chart_data.add_series('grey', (56.3, 57.3)) # add chart to slide -------------------- x, y, cx, cy = Inches(2), Inches(2), Inches(6), Inches(4.5) gframe = slide.shapes.add_chart( XL_CHART_TYPE.COLUMN_STACKED, x, y, cx, cy, chart_data ) chart = gframe.chart plot = chart.plots[0] plot.has_data_labels = True data_labels = plot.data_labels data_labels.font.size = Pt(13) data_labels.font.color.rgb = RGBColor(0x0A, 0x42, 0x80) data_labels.position = XL_LABEL_POSITION.INSIDE_END prs.save('chart-01.pptx') and that looks almost identical to your picture in the question: The following code will change the category labels in that chart: from pptx import Presentation from pptx.chart.data import CategoryChartData from pptx.shapes.graphfrm import GraphicFrame from pptx.enum.chart import XL_CHART_TYPE from pptx.util import Inches # read presentation from file prs = Presentation('chart-01.pptx') # find the first chart object in the presentation slideIdx = 0 for slide in prs.slides: for shape in slide.shapes: if shape.has_chart: chart = shape.chart print("Chart of type %s found in slide[%s, id=%s] shape[%s, id=%s, type=%s]" % (chart.chart_type, slideIdx, slide.slide_id, slide.shapes.index(shape), shape.shape_id, shape.shape_type )) break slideIdx += 1 # create list with changed category names categorie_map = { 'FY2021 HC': 'FY1918 HC', 'FY2122 HC': 'FY1718 HC' } new_categories = list(categorie_map[c] for c in chart.plots[0].categories) # build new chart data with new category names and old data values new_chart_data = CategoryChartData() new_chart_data.categories = new_categories for series in chart.series: new_chart_data.add_series(series.name,series.values) # write the new chart data to the chart chart.replace_data(new_chart_data) # save 
everything in a new file prs.save('chart-02.pptx') The comments should explain what is going on and if you open chart-02.pptx with PowerPoint, this is what you will see: Hope that solves your problem!
4
3
72,928,952
2022-7-10
https://stackoverflow.com/questions/72928952/why-cant-the-import-be-resolved
I've seen several answers to this question, albeit none of the solutions have worked for my particular situation. I'm trying to get started building an API with Flask. When I try to import Flask-RESTful, I get an error in VS Code. For context, I am using Windows 11. Here are the first two lines of my .py file: from flask import Flask from flask_restful import Resource, Api, reqparse The error I get reads as: Import "flask_restful" could not be resolved Pylance(reportMissingImports) Now, to add more context, I've checked to make sure the interpreter path is set using Ctrl+Shift+P to open the Command Palette and selecting the correct (and the only) Python interpreter for the project inside my virtual environment. When I run pip list, I get this output: (api) C:\Users\<Username>\OneDrive\Documents\PythonProjects\api>pip list Package Version ----------------------- --------- aiohttp 3.8.1 aiosignal 1.2.0 alembic 1.8.0 aniso8601 9.0.1 anyio 3.6.1 async-timeout 4.0.2 attrs 21.4.0 bleach 5.0.1 certifi 2022.6.15 charset-normalizer 2.1.0 click 8.1.3 click-log 0.4.0 colorama 0.4.5 deprecation 2.1.0 docutils 0.19 dotty-dict 1.3.0 Flask 2.1.2 Flask-Migrate 3.1.0 Flask-RESTful 0.3.9 Flask-SQLAlchemy 2.5.1 flask-swagger 0.2.14 frozenlist 1.3.0 gitdb 4.0.9 GitPython 3.1.27 gotrue 0.5.0 greenlet 1.1.2 h11 0.12.0 httpcore 0.14.7 httpx 0.21.3 idna 3.3 importlib-metadata 4.12.0 invoke 1.7.1 itsdangerous 2.1.2 Jinja2 3.1.2 keyring 23.6.0 Mako 1.2.1 MarkupSafe 2.1.1 multidict 6.0.2 packaging 21.3 pip 22.0.4 pkginfo 1.8.3 postgrest-py 0.10.2 psycopg2 2.9.3 pydantic 1.9.1 Pygments 2.12.0 pyparsing 3.0.9 python-dateutil 2.8.2 python-gitlab 3.6.0 python-semantic-release 7.28.1 pytz 2022.1 pywin32-ctypes 0.2.0 PyYAML 6.0 readme-renderer 35.0 realtime 0.0.4 requests 2.28.1 requests-toolbelt 0.9.1 rfc3986 1.5.0 semver 2.13.0 setuptools 58.1.0 setuptools-scm 7.0.4 six 1.16.0 smmap 5.0.0 sniffio 1.2.0 SQLAlchemy 1.4.39 storage3 0.3.4 supabase 0.5.8 supabase-client 0.2.4 tomli 2.0.1 tomlkit 0.10.2 tqdm 4.64.0 twine 3.8.0 typing_extensions 4.3.0 urllib3 1.26.10 webencodings 0.5.1 websockets 9.1 Werkzeug 2.1.2 wheel 0.37.1 yarl 1.7.2 zipp 3.8.0 Why would the flask...Flask import work, but not flask_restful? I can see both in the Lib\site-packages folder in my project directory and the output from pip list outside the virtual environment is different, which signals to me that there isn't an issue with the path or directories. EDIT: I forgot to mention that when I run the code using Ctrl + Alt + N, I get this output: Traceback (most recent call last): File "c:\Users\<Username>\OneDrive\Documents\PythonProjects\api\api.py", line 3, in <module> from flask_restful import Resource, Api, reqparse ModuleNotFoundError: No module named 'flask_restful' Again, no errors with importing flask, only with flask_restful. Any help with this will be greatly appreciated! Thank you in advance for your time. I'm happy to provide more info if needed. Thanks. EDIT: I have updated pip and attempted to simply run the program inside the command prompt. This is what I got. I'm still getting the import error inside VS Code, though. I am going to see if using a different version of Python makes a difference. Thanks everyone for all of your help so far, I appreciate it! EDIT: Okay, it seems like the issue is a little closer to being solved. So, I updated pip. I retried setting the interpreter path and, which some of you mentioned, it turns out that I'd been doing it wrong. 
I had to do Ctrl + Shift + P >> Python: Select Interpreter >> Enter interpreter path and select the correct path that way. I did this by going into the project directory, going to the scripts folder, and selecting python.exe. That solved the issue with Pylance. I no longer see an error in the editor when working on the project. However, the interpreter will not show in the bottom right hand corner of the window. That may just be a bug and I can either look through the issues on GitHub or open a new one some other time I assume. When I run the code with Ctrl + Alt + N I get a ModuleNotFoundError relating to flask_restful again. But, when I run set flask_app=api.py >> flask run in the terminal, it has changed from a white background in the browser to a black background and displays the message it is intended to display (a simple "Hello, World" as a test). Should I just keep going until I run into another issue? I also tried python -m api and that worked as well. Should I just ignore the VS Code output window? Also, sorry about the late replies. I appreciate everyone's help and patience.
Use the Ctrl+Shift+P command, search for and select Python: Select Interpreter (or click directly on the Python version displayed in the lower right corner), and select the correct interpreter.
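If it is unclear which interpreter is actually being used, a quick hedged check is to run these lines both from the project's virtual environment and from within VS Code; both should print the python.exe inside the virtual environment.

import sys
import flask_restful   # raises ModuleNotFoundError if this interpreter lacks the package

print(sys.executable)           # the interpreter actually running the code
print(flask_restful.__file__)   # where the package was found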
4
6
72,928,384
2022-7-10
https://stackoverflow.com/questions/72928384/python-subprocess-is-not-scalable-by-default-any-simple-solution-you-can-recomm
I have an application which does this: subprocess.Popen(["python3", "-u", "sub-program-1.py"]) So Python program can start multiple long-lived processes on demand. If I stop main Python program and start again, it knows that sub-program-1.py should be started, because there is a record about status in DB that tells it. So simply it works fine when there is only one replica of Docker container, pod, virtual machine or whatever you call it. If I scale an app to 3 replicas, subprocess fails to achieve it. Each Docker container starts sub-program-1.py, while I want to start on one container If one container fails, an app should be smart enough to failover sub-program-1.py to another container An app should be smart enough to balance subprocesses across containers, for example: sub-program-1.py - sub-program-9.py ideally should be spread by putting 3 processes per container so in total there are 9 subprocesses running - I don't need this to be precise, most simplest solution is fine to balance it I've tried to explore RQ (Redis Queue) and similar solutions, but they are heavily focused on tasks, ideally short-living. In my case, they are long-lived processes. E.g. sub-program-1.py can live for months and years. The scheme is this: Main Python app -> sub-program-1.py, sub-program-2.py, etc. Any simple solution exists here without overhead? Is writing statuses of each sub program to DB an option (also detecting when sub process fails to failover it to another container based on statuses in DB) or would you incorporate additional tool to solve subprocess scaling issue? Another option is to start sub-program-1.py on all containers and scale operations inside of it. sub-program-1.py basically calls some third-party APIs and does some operations based on user preference. So scaling those API calls based on each user preference is complicated, it has multiple threads in background when calling APIs simultaneously. In short, sub-program-1.py is tied to user1, sub-program-2.py is tied to user2, etc. So is it worth to make it complex by choosing this option? Update If subprocess is used only in standalone apps and nobody tried to implement this mechanism at findable scale on Github, libraries, etc. How would you solve this issue in Python? I think about these entries in DB: ProcessName ProcessHostname LastHeartBeat Enabled process1 host-0 2022-07-10 15:00 true process2 null null true process3 host-1 2022-07-10 14:50 true So to solve three points that I wrote above: Each container tries to pick up process that is not already picked up (where is null or old date of LastHeartBeat). When first container picked up a process, it writes date to LastHeartBeat and then uses subprocess to start a process. Other containers cannot pick up if LastHeartBeat is constantly updated. If process fails, it doesn't write LastHeartBeat so other container picks up the process as described in point 1. If failed container cannot reach DB, it stops the operations and restarts (if it's able to even do exit). If it cannot reach DB, it doesn't do anything. That is to not run same process twice. To balance processes across containers, the container which is running less processes can pick up a new one. That info is on DB table to make a decision. Would you solve differently? Any best practices you recommend? Thanks
TL;DR - This is a classical monolithic application scaling problem you can easily solve this by redesigning your application to a microservice architecture, since your application functionality is inherently decoupled between components. Once you've done that, it all really boils down to you deploying your application in a natively microservice-friendly fashion and all of your design requirements will be met. Edit: You're currently trying to "scale-up" your application in a micro-service system (multiple processes/containers in 1 pod), which defeats the whole purpose of using it. You will have to stick with 1 subprocess <===> 1 pod for the design to really work. Otherwise, you are only introducing immense complications and this is against many design principles of micro services. More details below, if you're interested. Let me first summarise all that you've said so we can coherently discuss the design. Application Requirements As I understand the requirements of your application from all the information you've provided, the following is true: You want your processes to be long-lived. You have a parent application that spawns these long-lived processes. These processes need to be started on-demand. (dynamically scaled - and scaled-out; see(7) below for 1 process per container) If there is no load, your application should spawn just 1 process with sub-process.py. If a container fails, you would like your application to be able to intelligently switch traffic to a healthy container that is also running your long-lived process. The application should be able to shared load across all the processes/containers currently running. A process is tied to user requests since it makes calls to 3rd party API systems in order to perform its function. So, it is favourable to have just one process inside a container for simplicity of design. Limitations of the Current Design(s) The Current Application Design Currently, you have the application setup in a way that: You have one application process that spawns multiple identical sub-process.py processes through the application process. The application faces the user, and receives requests, and spawns sub-process.py processes as needed and scales well inside one compute unit (container, VM etc.) These processes then perform their actions, and return the response to the application which return it to the user. Now let's discuss your current approach(es) that you've mentioned above and see what are the challenges that you've described. Scaling Design 1 - Simply Scaling Docker Containers This means simply creating more containers for your applications. And we know that it doesn't satisfy the requirements because scaling the application to multiple replicas starts all the processes and makes them active. This is not what you want, so there is no relationship between these replicas in different containers (since the sub-processes are tied to application running in each container, not the overall system). This is obviously because application's in different containers are unaware of each-other (and more importantly, the sub-processes each are spawning). So, this fails to meet our requirement (3), (4), (5). Scaling Design 2 - Use a DB as State Storage To try and meet (3), (4) and (5) we introduced a database that is central to our overall system and it can keep state data of different processes in our system and how certain containers can be "bound" to processes and manage them. 
However, this was also known to have certain limitations as you pointed out (plus my own thoughts): Such solutions are good for short-lived processes. We have to introduce a database that is high speed and be able to maintain states at a very quick pace with a possibility of race conditions. We will have to write a lot of house-keeping code on top of our containers for orchestration that will use this database and some known rules (that you defined as last 3 points) to achieve our goal. Especially an orchestration component that will know when to start containers on-demand. This is highly complicated. Not only do we have to spawn new processes, we also want to be able to handle failures and automated traffic switching. This will require us to implement a "networking" component that will communicate with our orchestrator and detect failed containers and re-route incoming traffic to healthy ones and restarts the failed containers. We will also require this networking service to be able to distribute incoming traffic load across all the containers currently in our system. This fails to meet our requirements (1) and (7) and most importantly THIS IS REINVENTING THE WHEEL! LET'S TALK ABOUT KUBERNETES AND WHY IT IS EXACTLY WHAT YOU NEED. Proposed Solution Now let's see how this entire problem can be re-engineered with minimum effort and we can satisfy all of our requirements. The Proposed Application Design I propose that you can very simply detach your application from your processes. This is easy enough to do, since your application is accepting user requests and forwarding them to identical pool of workers which are performing their operation by making 3rd party API calls. Inherently, this maps perfectly on micro-services. user1 =====> |===> worker1 => someAPIs user2 =====> App |===> worker2 => someAPIs user2 =====> |===> worker3 => someAPIs ... We can intelligently leverage this. Note that not only are the elements decoupled, but all the workers are performing an identical set of functions (which can result in different output based on use inputs). Essentially you will replace subprocess.Popen(["python3", "-u", "sub-program-1.py"]) with an API call to a service that can provide a worker for you, on demand: output = some_api(my_worker_service, user_input) This means, your design of the application has been preserved and you've simply placed your processes on different systems. So, the application now looks something like this: user1 =====> |===> worker1 => someAPIs user2 =====> App ==>worker_service |===> worker2 => someAPIs user2 =====> |===> worker3 => someAPIs ... With this essential component of application redesign in place, let's revisit our issues from previous designs and see if this helps us and how Kubernetes comes into the picture. The Proposed Scaling Solution - Enter Kubernetes! You were absolutely on the right path when you described usage of a database to maintain the state of our entire system and the orchestration logic being able to retrieve status of current containers in our system and make certain decisions. That's exactly how Kubernetes works! Let's see how Kubernetes solves our problems now processes in Kubernetes can be long lived. So, requirement (1) is met and limitation (1) of our database design is also mitigated. We introduced a service that will manage all of the worker processes for us. So, requirement (2),satisfied. It will also be able to scale the processes on-demand, so requirement (3) is satisfied. 
It will also keep a minimum process count of 1 so we don't spawn unnecessary processes, so requirement (4) is satisfied. It will be intelligent enough to forward traffic only to processes at are healthy. So, requirement (5) is met. It will also load balance traffic across all the processes it governs, so requirement (6) is met. This service will also mitigate limitation (4) and (5) of our second design. You will be allowed to size your processes as needed, to make sure that you only use the resources needed. So, requirement (7) is met. It uses a central database called etcd, which stores the state of your entire cluster and keeps it updated at all times and accommodates for race conditions as well (multiple components updating the same information - it simply lets the first one to arrive win and fails the other one, forcing it to retry). We've solved problem (2) from our second design. It comes with logic to orchestrate our processes out of the box so there is no need to write any code. This mitigates limitation (3) of our second design. So, not only were we able to meet all of our requirements, we were also able to implement the solution we were trying to achieve, without writing any additional code for the orchestration! (You will just have to restructure your program a little and introduce APIs). How to Implement This Just note that in the k8s literature the smallest unit of computation is referred to as pod which performs a single function. Architecturally, this is identical to your description of sub-process. So, whenever I talk about 'Pods' I simply refer to your sub-processes. You will take (roughly) the following steps to implement the proposed design. Rewrite some part of your application to decouple application from sub-process.py, introducing an API between them. Package sub-process.py into a container image. Deploy a small Kubernetes cluster. Create a Deployment using your sub-process.py image and set the min repica count to 1 and max to any number you want, say 10, for auto-scaling. Expose this Deployment by creating a Service. This is the "worker service" I talked about, and your application will "submit" requests to this service. And it will not have to worry about anything other than simply making a request to an API endpoint, everything else is handled by k8s. Configure your application to make API calls to this Service. Test your application and watch it scale up and down! Now, the way this will function is as follows: Client makes a request to your application. application forwards it to the Service API endpoint. The Service receives the API request and forwards it to one of the Pods that are running your sub-process.py functionality. If multiple requests are received the Service will balance the requests across all the pods that are available. If a pod fails, it will be take "away" from the service by K8s so requests don't fail. The pod will perform your functionality and provide the output. If all the pods in the Service are reaching saturation, the Deployment auto-scaling will trigger and create more pods for the Service and load sharing will resume again (scale-out). If the resource utilisation then reduces, the Deployment will remove certain pods that are not being used anymore and you will be back to 1 pod (scale-in). If you want, you can put your frontend application into a Deployment and Service as well which will allow you to have an even friendlier cloud-native micro-service architecture. 
The user will interact with an API of your front-end which will invoke the Service that is managing your sub-process.py workers which will return results. I hope this helps you and you can appreciate how clearly the micro-service architecture fits into the design pattern you have, and how you can very simply adapt to it and scale your application as much as you want! Not only that, expressing your design this way will also allow you to redesign/manage/test different versions by simply managing a set of YAML manifests (text files) that you can use with Version Control as well!
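As a rough illustration of step (6), the front-end application would stop calling subprocess.Popen and instead submit work to the worker Service over HTTP. The service name, port, path and JSON shape below are assumptions for the sketch; they depend entirely on the API you design for sub-process.py.

import requests

WORKER_SERVICE_URL = "http://worker-service:8080/run"   # assumed name/port, resolved by cluster DNS

def submit_work(user_id, payload):
    # Kubernetes load-balances this call across the healthy worker pods
    response = requests.post(WORKER_SERVICE_URL,
                             json={"user": user_id, "data": payload},
                             timeout=30)
    response.raise_for_status()
    return response.json()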
3
7
72,952,005
2022-7-12
https://stackoverflow.com/questions/72952005/how-is-numpy-einsum-implemented
I want to understand how is einsum function in python implemented. I found the source code in numpy/core/src/multiarray/einsum.c.src file but couldn't completely understand it. In particular I want to understand how does it creates the required loops automatically? For example: import numpy as np a = np.random.rand(2,3,4,5) b = np.random.rand(5,3,2,4) ll = np.einsum('ijkl, ljik ->', a,b) # This should loop over all the # four indicies i,j,k,l. How does it create loops for these indices automatically ? # The assume that under the hood it does the following sum1 = 0 for i in range(2): for j in range(3): for k in range(4): for l in range(5): sum1 = sum1 + a[i,j,k,l]*b[l,j,i,k] Thank you in advance ps: This question is not about how to use numpy.einsum
I want to understand how does it creates the required loops automatically? Well, it does not create the loops the way you think it does. In this case, it creates an iterator operating over multiple arrays and then use it in a generic main loop. In the more general case, there are two main loops: one to iterate over the output array items and one to perform a reduction. The main function is PyArray_EinsteinSum. In your case, it takes an unoptimized path and end up creating a basic iteration function based on the iterator created previously (ie. iter). This function is get_sum_of_products_function. It basically analyze the einsum operation so to find the best (sum of product) function to call based on a lookup table (like _outstride0_specialized_table). In your specific case, double_sum_of_products_outstride0_two is called. Numpy use a template system so to generate this function automatically at build time (*.c.src files are template files converted to *.c files based on predefined basic comments). In this case, the function is generated from @name@_sum_of_products_outstride0_@noplabel@ and once computed by the C preprocessor it gives something like the following function: static void double_sum_of_products_outstride0_two(int nop, char **dataptr, npy_intp const *strides, npy_intp count) { npy_double accum = 0; char *data0 = dataptr[0]; npy_intp stride0 = strides[0]; char *data1 = dataptr[1]; npy_intp stride1 = strides[1]; while (count--) { accum += (*(npy_double *)data0) * (*(npy_double *)data1); data0 += stride0; data1 += stride1; } *((npy_double *)dataptr[2]) = (accum + (*((npy_double *)dataptr[2]))); } As you can see, there is only one main loop iterating over the previously generated iterator. In your case, stride0 and stride1 are both equal to 8, data0 and data1 are the raw input arrays, dataptr is the raw output array and count is set to 120 initially. Note that the fact both strides are equal to 8 is surprising at first glance since the einsum does not iterate on the two array contiguously. This is because the second array is copied and reorder because Numpy cannot create a uniform view based on the einsum parameters. Note that the fallback case use for the example code is not particularly optimized and it only produce one value. For example, the much more optimized double_sum_of_products_contig_contig_outstride0_two function can be called from unbuffered_loop_nop2_ndim2 for the following code: import numpy as np a = np.random.rand(3, 10) b = np.random.rand(3, 10) for i in range(1): ll = np.einsum('ij, ij -> i', a, b) In this case, the double_sum_of_products_contig_contig_outstride0_two performs the reductions for a given output item and unbuffered_loop_nop2_ndim2 iterate over the output array. If the expression ij, ij -> j is instead used in the above code, then the function double_sum_of_products_contig_two is called which operates the same way than double_sum_of_products_contig_contig_outstride0_two except it reads/writes on the whole output line during the reduction.
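Not part of the C internals above, but a quick NumPy-level check of what the fallback path computes: the second operand is reordered so its axes line up with 'ijkl', and the reduction is then a single accumulation over the elementwise products.

import numpy as np

a = np.random.rand(2, 3, 4, 5)
b = np.random.rand(5, 3, 2, 4)

# 'ljik' -> 'ijkl' corresponds to taking b's axes in the order (2, 1, 3, 0)
manual = np.sum(a * b.transpose(2, 1, 3, 0))
print(np.allclose(np.einsum('ijkl, ljik ->', a, b), manual))   # True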
8
6
72,909,832
2022-7-8
https://stackoverflow.com/questions/72909832/use-geopandas-shapely-to-find-intersection-area-of-polygons-defined-by-latitud
I have two GeoDataFrames, left and right, with many polygons in them. Now I am trying to find the total intersection area of each polygon in left, with all polygons in right. I've managed to get the indices of the intersecting polygons in right for each polygon in left using gpd.sjoin, so I compute the intersection area using: area = left.iloc[i].geometry.intersection(right.iloc[idx].geometry).area Where i and idx are the indices of the intersecting polygons in the two GDFs (let's assume the left poly only intersects with 1 poly in right). The problem is that the area value I get does not seem correct in any way, and I don't know what units it has. The CRS of both GeoDataFrames is EPSG:4326 so the standard WSG84 projection, and the polygon coordinates are defined in degrees latitude and longitude. Does anyone know what units the computed area then has? Or does this not work and do I need to convert them to a different projection before computing the area? Thanks for the help!
I fixed it by using the EPSG:6933 projection instead, which is an area-preserving map projection and returns the area in square metres (EPSG:4326 does not preserve areas, so it is not suitable for area calculations). I could just change my GDF to this projection using

gdf.to_crs(epsg=6933)

and then compute the area in the same way as above.
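A minimal sketch of that fix, assuming left and right are the GeoDataFrames from the question and i, idx are the matched indices from the sjoin:

left_eq = left.to_crs(epsg=6933)     # equal-area projection, units in metres
right_eq = right.to_crs(epsg=6933)

area_m2 = left_eq.iloc[i].geometry.intersection(right_eq.iloc[idx].geometry).area
print(area_m2)   # square metres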
4
3
72,899,754
2022-7-7
https://stackoverflow.com/questions/72899754/django-tailwindcss-wont-load-some-attributes
I'm having issues when it comes to using some attributes with Django and TailwindCSS. Let's take this table for example: <div class="relative overflow-x-auto shadow-md sm:rounded-lg"> <table class="w-full text-lg text-left text-gray-500 rounded-2xl mt-4 dark:text-gray-400"> <thead class="rounded-2xl text-lg text-white uppercase bg-[#68BA9E] dark:bg-gray-700 dark:text-gray-400"> <tr> <th scope="col" class="px-6 py-3"> Report title </th> <th scope="col" class="px-6 py-3"> Company </th> <th scope="col" class="px-6 py-3"> Brand (if any) </th> <th scope="col" class="px-6 py-3"> Go to report </th> </tr> </thead> <tbody> {% for report in reports %} <tr class="bg-white border-b text-center dark:bg-gray-800 dark:border-gray-700 hover:bg-gray-50 dark:hover:bg-gray-600"> <th scope="row" class="h-19 px-6 py-4 font-medium text-gray-900 dark:text-white whitespace-nowrap"> {{ report.title }} </th> <td class="px-6 py-4"> {{ report.company }} </td> <td class="px-6 py-4"> {% if report.brand %} {{ report.brand }} {% else %} - {% endif %} </td> <td class="px-6 py-4"> <a href="{% url 'tool:single-report' slug=report.slug %}">Access</a> </td> </tr> {% endfor %} </tbody> </table> </div> Gives the following: But when I try to change the bg-color from: <thead class="rounded-2xl text-lg text-white uppercase bg-[#68BA9E] dark:bg-gray-700 dark:text-gray-400"> To: <thead class="rounded-2xl text-lg text-white uppercase bg-red-700 dark:bg-gray-700 dark:text-gray-400"> The new color won't load. It gives: I don't understand why I'm getting nothing. In my configuration, following tasks are running: The server is running with python manage.py runserver TailwindCSS is running with python manage.py tailwind start Livereload is running with python manage.py livereload I also clear my cache with CMD+Shift+R. I'm also having troubles with some margins and paddings that won't apply. I even bought the plugin Devtools for TailwindCSS. When I edit an attribute with Chrome inspector and this plugin, it's working. But when it's in my code, the new color won't load. Has this ever happened to you? 
Update: Here is the complete code: {% extends 'base.html' %} {% block content %} <div class="flex-1 pt-8 pb-5 max-w-7xl mx-auto px-4 sm:px-6 lg:px-8"> <div class="w-100 mb-10"> <div> {% if nb_reports == 0 %} <div class="text-center"> <svg xmlns="http://www.w3.org/2000/svg" class="mx-auto h-12 w-12 text-gray-400" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2" aria-hidden="true">> <path stroke-linecap="round" stroke-linejoin="round" d="M9 17v-2m3 2v-4m3 4v-6m2 10H7a2 2 0 01-2-2V5a2 2 0 012-2h5.586a1 1 0 01.707.293l5.414 5.414a1 1 0 01.293.707V19a2 2 0 01-2 2z"/> </svg> <h3 class="mt-2 text-sm font-medium text-gray-900">No reports</h3> <p class="mt-1 text-sm text-gray-500">Get started by creating a new report.</p> <div class="mt-6"> <a href="{% url 'tool:create-report' %}" class="inline-block items-center px-4 py-2 border border-transparent shadow-sm text-sm font-medium rounded-xl text-white bg-[#195266] hover:bg-[#23647a] focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-indigo-500"> New report </a> </div> </div> {% else %} <div> <h2 class="text-xl leading-6 font-medium text-gray-900">Create report</h2> <p class="mt-1 text-sm text-gray-500">Find all your created reports below.</p> </div> <div class="relative overflow-x-auto shadow-md sm:rounded-lg"> <table class="w-full text-lg text-left text-gray-500 rounded-2xl mt-4 dark:text-gray-400"> <thead class="rounded-2xl text-lg text-white uppercase bg-red-700 dark:bg-gray-700 dark:text-gray-400"> <tr> <th scope="col" class="px-6 py-3"> Report title </th> <th scope="col" class="px-6 py-3"> Company </th> <th scope="col" class="px-6 py-3"> Brand (if any) </th> <th scope="col" class="px-6 py-3"> Go to report </th> </tr> </thead> <tbody> {% for report in reports %} <tr class="bg-white border-b text-center dark:bg-gray-800 dark:border-gray-700 hover:bg-gray-50 dark:hover:bg-gray-600"> <th scope="row" class="h-19 px-6 py-4 font-medium text-gray-900 dark:text-white whitespace-nowrap"> {{ report.title }} </th> <td class="px-6 py-4"> {{ report.company }} </td> <td class="px-6 py-4"> {% if report.brand %} {{ report.brand }} {% else %} - {% endif %} </td> <td class="px-6 py-4"> <a href="{% url 'tool:single-report' slug=report.slug %}">Access</a> </td> </tr> {% endfor %} </tbody> </table> {% endif %} </div> </div> </div> </div> {% endblock %}
We finally managed to solve this issue. The problem was that I ran python manage.py collectstatic, which created the following directory: static > css > dist > styles.css. django-tailwind created the same directory under the theme folder. Every time I restarted the server, the wrong styles.css was picked up. Renaming the first folder allowed the correct CSS file to load.
3
4
72,909,466
2022-7-8
https://stackoverflow.com/questions/72909466/class-weight-and-sample-weight-ineffective-for-sklearn-random-forest
I'm new to ML and I've been working with an imbalanced data set where the count of negative samples is twice that of the positive samples. In-order to address these i set scikit-learn Random forest class_weight = 'balanced', which gave me an ROC-AUC score of 0.904 and the recall for class- 1 was 0.86, now when i tried to further improve the AUC Score by assigning weight, there wasn't any major difference with the results, i.e Class_weight = {0: 0.5, 1: 2.75}, assuming this would penalize for every wrong classification of 1 but it didn't seem to work as expected. randomForestClf = RandomForestClassifier(random_state = 42, class_weight = {0: 0.5, 1:2.75}) Tried different values but has no major impact as Recall of 1 remains the same or reduces (0.85) and auc value is quite insignificant (0.90122). It only seems to work when one of the label is set 0. Further tried to set the sample weights too. But that didn't seem to work either. # Sample weights class_weights = [0.5, 2] weights = np.ones(y_train.shape[0], dtype = 'float') for i, val in enumerate(y_train): weights[i] = class_weights[val] Below is the reference to a similar question but the solutions provided didn't work for me. sklearn RandomForestClassifier's class_weights seems to have no effect Is there anything that i'm missing out? Thanks!
The reason is that you grow the trees out fully, which leads to every leaf node being pure. That will happen regardless of the class weights (though the structure of the tree leading up to those pure nodes will change). The predicted probabilities of each tree will be (almost) all 0 or 1, and so the overall probability estimates are just driven by disagreements between the trees. If you set e.g. max_depth=10 (or whatever tree complexity parameter you like), now many/most of the leaf nodes will not be pure. Setting larger positive-class weights will produce leaf values that are biased toward the positive class (but still aren't just 0 and 1), and so the probability estimates will be skewed higher across the board, leading to a higher recall (at the expense of precision, presumably). The ROC curve is relatively unaffected by class balance and the skewed-higher probabilities arising from the larger weights, and so shouldn't be heavily affected by changing weights, for a fixed max_depth.
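A hedged sketch of that suggestion: cap the tree depth so leaves are no longer pure, and let class_weight skew the leaf values. X_train, y_train, X_test, y_test and the exact hyperparameter values are assumptions standing in for the poster's data.

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, recall_score

clf = RandomForestClassifier(
    n_estimators=300,
    max_depth=10,                      # the key change: stop growing trees out fully
    class_weight={0: 0.5, 1: 2.75},
    random_state=42,
)
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_test)[:, 1]
print("ROC-AUC:", roc_auc_score(y_test, proba))
print("Recall (class 1):", recall_score(y_test, clf.predict(X_test)))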
4
2
72,956,054
2022-7-12
https://stackoverflow.com/questions/72956054/zip-like-function-that-iterates-over-multiple-items-in-lists-and-returns-possibi
In the following code: a = [["2022"], ["2023"]] b = [["blue", "red"], ["green", "yellow"]] c = [["1", "2", "3"], ["4", "5", "6", "7"], ["8", "9", "10", "11"], ["12", "13"]] I would like a function that outputs this, but for any number of variables: [ ["2022", "blue", "1"], ["2022", "blue", "2"], ["2022", "blue", "3"], ["2022", "red", "4"], ["2022", "red", "5"], ["2022", "red", "6"], ["2022", "red", "7"], ["2023", "green", "8"], ["2023", "green", "9"], ["2023", "green", "10"], ["2023", "green", "11"], ["2023", "yellow", "12"], ["2023", "yellow", "13"], ] I have searched for a function to do this with itertools or zip, but haven't found anything yet. To clarify, my use case for this was to iterate through values of a nested/multi-level dropdown menu (the first dropdown returns options, and each option returns a different dropdown, and so on).
First, you flatten the first argument into a list of lists with only one element each. Then, for each sublist and its index i in the next argument, you pick the i-th list of the previous iteration, res[i], and add to aux len(sublist) lists, each of which is res[i] with one item from sublist appended.

from itertools import chain

def f(*args):
    res = list(chain.from_iterable([[item] for item in l] for l in args[0]))
    for arg in args[1:]:
        aux = []
        for i, sublist in enumerate(arg):
            aux += [res[i] + [opt] for opt in sublist]
        res = aux
    return res

In addition, if you want to verify that the arguments passed to the function are correct, you can use this:

def check(*args):
    size = sum(len(l) for l in args[0])
    for arg in args[1:]:
        if len(arg) != size:
            return False
        size = sum(len(l) for l in arg)
    return True
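For completeness, calling the two functions on the lists from the question reproduces the expected output:

a = [["2022"], ["2023"]]
b = [["blue", "red"], ["green", "yellow"]]
c = [["1", "2", "3"], ["4", "5", "6", "7"], ["8", "9", "10", "11"], ["12", "13"]]

if check(a, b, c):
    print(f(a, b, c))
# [['2022', 'blue', '1'], ['2022', 'blue', '2'], ['2022', 'blue', '3'],
#  ['2022', 'red', '4'], ..., ['2023', 'yellow', '13']]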
7
13
72,956,903
2022-7-12
https://stackoverflow.com/questions/72956903/import-urllib3-could-not-be-resolved-from-sourcepylancereportmissingmodulesour
I am new to Python and writing a lambda function. I installed urllib3 using pip but am still getting the following error. I tried restarting VS Code and uninstalling/reinstalling, but I still get the error. This is the result when I run pip show urllib3. What am I missing here?
Maybe there is more than one Python environment on your machine, and the location where you installed the package doesn't match the Python interpreter you are using now. You can use Ctrl + Shift + P to open the command palette and search for Python: Select Interpreter (or click on the interpreter version displayed in the lower right corner). Select the interpreter of the environment where you have the urllib3 package installed.
3
2
72,937,452
2022-7-11
https://stackoverflow.com/questions/72937452/importerror-dlopen-library-not-loaded-rpath-pywrap-tensorflow-internal
I am a beginner at machine learning. I try to use LSTM algorism but when I write from keras.models import Sequential it shows error as below: ImportError: dlopen(/Users/wangzifan/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/_pywrap_tfe.so, 2): Library not loaded: @rpath/_pywrap_tensorflow_internal.so Referenced from: /Users/wangzifan/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/_pywrap_tfe.so Reason: image not found How can I fix this? Thank you so much! full error message: Traceback (most recent call last): File "/Users/wangzifan/Desktop/machine/LSTM.py", line 39, in <module> from keras.models import Sequential File "/Users/wangzifan/opt/anaconda3/lib/python3.9/site-packages/keras/__init__.py", line 21, in <module> from tensorflow.python import tf2 File "/Users/wangzifan/opt/anaconda3/lib/python3.9/site-packages/tensorflow/__init__.py", line 37, in <module> from tensorflow.python.tools import module_util as _module_util File "/Users/wangzifan/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/__init__.py", line 37, in <module> from tensorflow.python.eager import context File "/Users/wangzifan/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/eager/context.py", line 33, in <module> from tensorflow.python import pywrap_tfe File "/Users/wangzifan/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/pywrap_tfe.py", line 25, in <module> from tensorflow.python._pywrap_tfe import * ImportError: dlopen(/Users/wangzifan/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/_pywrap_tfe.so, 2): Library not loaded: @rpath/_pywrap_tensorflow_internal.so Referenced from: /Users/wangzifan/opt/anaconda3/lib/python3.9/site-packages/tensorflow/python/_pywrap_tfe.so Reason: image not found
Problem solved. Install TensorFlow again with sudo pip3 install tensorflow and change the import to from tensorflow.python.keras.models import Sequential.
4
1
72,955,005
2022-7-12
https://stackoverflow.com/questions/72955005/x-axis-label-cropped-on-saved-image
So I'm trying to do a bar plot on a data where x is the username (string ) and each x is long enough to overlap each other, so I have to rotate the x label. No problem there. However, when exporting the plot results, the x label on the exported image is cropped. I tried using plt.tight_layout() and worked, but it change the look of the plot. The code is similar to this from matplotlib import pyplot as plt x= ['abc', 'ronaldo', 'melon_killer_123456'] y= [1, 2, 3] plt.bar(x, y) plt.xticks(rotation = 90) plt.savefig('a.png') plt.show() Exported image: I want it to look like this (Got this by using jupyter notebook and manually save output image): So how to do that?
You can play around with the rcParams size settings and the plt.subplots_adjust settings until you get your desired image.

import matplotlib.pyplot as plt

x = ['abc', 'ronaldo', 'melon_killer_123456']
y = [1, 2, 3]

plt.rcParams["figure.figsize"] = (5, 10)

plt.bar(x, y)
plt.xticks(rotation=90)
plt.subplots_adjust(top=0.925, bottom=0.20, left=0.07, right=0.90, hspace=0.01, wspace=0.01)
plt.savefig('a.png')
plt.show()
4
2
72,954,047
2022-7-12
https://stackoverflow.com/questions/72954047/how-to-open-a-virtual-environment-created-with-pyenv-with-vscode-editor
I am working in a Linux environment and have created my virtual environment using the pyenv tool. I have set the local virtual environment in my working folder to the one I want with pyenv from the command line, like this for example:

pyenv local my_venv_name

which in my case is my_venv_name=3.9.9.

When I opened VS Code in that folder, the integrated terminal is indeed opening that virtual environment, BUT the code in VS Code seems unable to find some installed dependencies that I can see are installed in that particular environment. I can do pip freeze in the terminal and I see the packages, but VS Code does not see them. How can I set the correct virtual environment for the VS Code editor?
Ok, so you have to select the correct Python interpreter, because pyenv could be using multiple Python versions in different environments. I found two ways to change it:

Open the command palette either from the gear icon in the bottom left corner or by typing Ctrl + Shift + P. Then type select python interpreter and select the one that corresponds to the virtual environment you want.

You can find the same option by simply clicking and selecting the interpreter from the bottom right corner of VS Code, as shown in the image below:
14
14
72,953,104
2022-7-12
https://stackoverflow.com/questions/72953104/python-sort-profile-report-by-tottime
Python includes a simple-to-use profiler:

>> import cProfile
>> import re
>> cProfile.run('re.compile("foo|bar")')
         197 function calls (192 primitive calls) in 0.002 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.001    0.001 <string>:1(<module>)
        1    0.000    0.000    0.001    0.001 re.py:212(compile)
   ...

How can I do the exact same thing, but sorted by tottime instead of standard name?
Use the sort=... argument of cProfile.run:

>>> import cProfile
>>> import time
>>> cProfile.run('time.sleep(1); time.monotonic()', sort='tottime')
   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    1.001    1.001    1.001    1.001 {built-in method time.sleep}
        1    0.000    0.000    1.001    1.001 {built-in method builtins.exec}
        1    0.000    0.000    1.001    1.001 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 {built-in method time.monotonic}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
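If the profile has already been collected (for example with cProfile.Profile, usable as a context manager since Python 3.8, or loaded from a dump file), the same ordering is available through pstats; a small sketch:

import cProfile
import pstats
import time

with cProfile.Profile() as pr:
    time.sleep(1)

pstats.Stats(pr).sort_stats("tottime").print_stats(5)   # top 5 entries by tottime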
7
4
72,900,609
2022-7-7
https://stackoverflow.com/questions/72900609/modify-i-th-next-tensor-values-every-time-a-value-1-appears-in-a-tensor
I have two tensors with the same size: a = [1, 2, 3, 4, 5, 10, 11, 12, 13, 20, 21, 22, 23, 24, 25, 26, 27, 28] b = [0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1] Tensor a has three regions which are demarked by consecutive values: region 1 is [1,2,3,4,5], region 2 is [10,11,12,13] and region 3 is [20, 21, 22, 23, 24, 25, 26, 27, 28]. For each of those regions, I want to apply the following logic: if one of the values of b is 1, then the following i values are set to 0. If they are already 0, they continue as 0. After i values are changed, nothing happens until another value of b is 1. In that case, the next i values are forced to 0... Some examples: # i = 1 a = [1, 2, 3, 4, 5, 10, 11, 12, 13, 20, 21, 22, 23, 24, 25, 26, 27, 28] b_new = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1] # i = 2 a = [1, 2, 3, 4, 5, 10, 11, 12, 13, 20, 21, 22, 23, 24, 25, 26, 27, 28] b_new = [0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1] # i = 4 a = [1, 2, 3, 4, 5, 10, 11, 12, 13, 20, 21, 22, 23, 24, 25, 26, 27, 28] b_new = [0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1] Not sure if this would help, but I was able to separate the regions into segments by doing: a_shifted = tf.roll(a - 1, shift=-1, axis=0) a_shifted_segs = tf.math.cumsum(tf.cast(a_shifted != a, dtype=tf.int64), exclusive=True) # a_shifted_segs = = [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2] Do you know any way of doing this efficiently?
Here is a pure Tensorflow approach, which will work in Eager Execution and Graph mode: # copy, paste, acknowledge import tensorflow as tf def split_regions_and_modify(a, b, i): indices = tf.squeeze(tf.where(a[:-1] != a[1:] - 1), axis=-1) + 1 row_splits = tf.cast(tf.cond(tf.not_equal(tf.shape(indices)[0], 0), lambda: tf.concat([indices, [indices[-1] + (tf.cast(tf.shape(a), dtype=tf.int64)[0] - indices[-1])]], axis=0), lambda: tf.shape(a)[0][None]), dtype=tf.int32) def body(i, j, k, tensor, row_splits): k = tf.cond(tf.equal(row_splits[k], j), lambda: tf.add(k, 1), lambda: k) current_indices = tf.range(j + 1, tf.minimum(j + 1 + i, row_splits[k]), dtype=tf.int32) tensor = tf.cond(tf.logical_and(tf.equal(tensor[j], 1), tf.not_equal(j, row_splits[k])), lambda: tf.tensor_scatter_nd_update(tensor, current_indices[..., None], tf.zeros_like(current_indices)), lambda: tensor) return i, tf.add(j, 1), k, tensor, row_splits j0 = tf.constant(0) k0 = tf.constant(0) c = lambda i, j0, k0, b, row_splits: tf.logical_and(tf.less(j0, tf.shape(b)[0]), tf.less(k0, tf.shape(row_splits)[0])) _, _, _, output, _ = tf.while_loop(c, body, loop_vars=[i, j0, k0, b, row_splits]) return output Usage: a = tf.constant([1, 2, 3, 4, 5, 10, 11, 12, 13, 20, 21, 22, 23, 24, 25, 26, 27, 28]) b = tf.constant([0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1]) split_regions_and_modify(a, b, 1) # <tf.Tensor: shape=(18,), dtype=int32, numpy=array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1], dtype=int32)> split_regions_and_modify(a, b, 2) # <tf.Tensor: shape=(18,), dtype=int32, numpy=array([0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1], dtype=int32)> split_regions_and_modify(a, b, 4) # <tf.Tensor: shape=(18,), dtype=int32, numpy=array([0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1], dtype=int32)>
4
2
72,944,235
2022-7-11
https://stackoverflow.com/questions/72944235/plotly-how-to-add-data-labels-to-a-choropleth
I have the following Pandas dataframe df that looks as follows: import pandas as pd df = pd.DataFrame({'state' : ['NY', 'CA', 'FL', 'NJ', 'TX', 'CT', 'MA', 'WA', 'IL', 'GA'], 'user_id' : [10000, 3200, 1600, 1200, 800, 600, 400, 350, 270, 260] }) state user_id 0 NY 10000 1 CA 3200 2 FL 1600 3 NJ 1200 4 TX 800 5 CT 600 6 MA 400 7 WA 350 8 IL 270 9 GA 260 I'd like to be able to create a Plotly choropleth that includes data labels over each of the states. To do so, I use add_scattergeo: fig = px.choropleth(df, locations = 'state', locationmode = "USA-states", scope = "usa", color = 'user_id', color_continuous_scale = "blues", ) fig.add_scattergeo( locations = df['state'], text = df['user_id'], mode = 'text', ) fig.show() But, using add_scattergeo does not apply the desired labels. What's the best way to add data labels to a Plotly choropleth? Thanks!
You need to also add locationmode="USA-states" to add_scattergeo:

fig = px.choropleth(
    df,
    locations='state',
    locationmode="USA-states",
    scope="usa",
    color='user_id',
    color_continuous_scale="blues",
)
fig.add_scattergeo(
    locations=df['state'],
    locationmode="USA-states",
    text=df['user_id'],
    mode='text',
)

Output:
3
6
72,943,397
2022-7-11
https://stackoverflow.com/questions/72943397/pulp-program-for-the-the-following-constraint-mina-b-minx-y
Suppose for a moment that I have 4 variables a, b, x, y and one constraint min(a,b) > min(x,y). How can I represent this program in PuLP (Python)?
Ok. So, the first answer I posted (deleted) was a bit hasty and the logic was faulty for the relationship described. This is (hopefully) correct! ;)

max() and min() are nonlinear, so we need to linearize them somehow (with a helper variable) and some logic to relate the 2 minima, which (below) can use a binary helper variable and a Big-M constraint.

In pseudocode:

a, b, x, y : real-valued variables
ab_min     : real-valued variable
x_lt_y     : binary variable, 1 implies x <= y, 0 else
M          : some suitably large constant, depending on the max range of a, b, x, y

new constraints:

ab_min <= a
ab_min <= b
ab_min >= x - (1 - x_lt_y) * M
ab_min >= y - (x_lt_y) * M

Logic: We find the minimum of a, b with ab_min. We need "upward pressure" from the min(x, y)... So we know that ab_min must be greater than either x or y, or possibly both. For the "or" constraint, we use the binary logic above and multiply it by the "large constant" to make the other constraint trivial.
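A hedged PuLP translation of that pseudocode; the variable bounds, the value of M and the problem setup are assumptions that must be adjusted to the real ranges of a, b, x, y (and note that MILP solvers cannot express a strict '>', so add a small epsilon if strictness matters):

from pulp import LpProblem, LpMaximize, LpVariable, LpBinary

M = 1000                                   # big-M, must exceed the variables' range
prob = LpProblem("min_constraint", LpMaximize)

a = LpVariable("a", 0, 100)
b = LpVariable("b", 0, 100)
x = LpVariable("x", 0, 100)
y = LpVariable("y", 0, 100)
ab_min = LpVariable("ab_min", 0, 100)
x_lt_y = LpVariable("x_lt_y", cat=LpBinary)

prob += ab_min <= a
prob += ab_min <= b
prob += ab_min >= x - (1 - x_lt_y) * M
prob += ab_min >= y - x_lt_y * M
# ab_min now sits between min(x, y) and min(a, b), enforcing min(a, b) >= min(x, y)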
3
4
72,943,028
2022-7-11
https://stackoverflow.com/questions/72943028/mocking-function-within-a-function-pytest
def func1():
    return 5

def func2(param1, param2):
    var1 = func1()
    return param1 + param2 + var1

I want to use pytest to test the second function by mocking the first, but I am not sure how to do this.

@pytest.fixture(autouse=True)
def patch_func1(self):
    with mock.patch(
        "func1",
        return_value=5,
    ) as self.mock_func1:
        yield

I think it can be done with dependency injection and a fixture as above, but that would mean changing func1, which I would prefer not to do.
You don't need to change anything. You can use the mocker fixture with pytest (requires installing pytest-mock). Don't worry about the mocker argument; it will magically work.

def test_func2(mocker):
    mocked_value = 4
    first = 1
    second = 2

    func1_mock = mocker.patch("func1")
    func1_mock.return_value = mocked_value

    actual_value = func2(first, second)

    assert actual_value == first + second + mocked_value
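One hedged caveat about the patch target: mocker.patch takes the import path where func1 is looked up, so a bare "func1" only works if that resolves as a module-level name. If the two functions live in, say, mymodule.py (an assumed file name), the patch usually needs to be:

func1_mock = mocker.patch("mymodule.func1")   # patch where func2 looks func1 up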
12
14
72,942,519
2022-7-11
https://stackoverflow.com/questions/72942519/alembic-migration-with-fastapi-docker-connection-to-port-5432-in-localhost-fa
Currently I am trying to learn about Api development with FastAPI and I am trying to dockerize my project. However, when I try to run the database migrations with alembic in Docker by using docker run sm-api_api alembic upgrade head I get the following error: File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3280, in _wrap_pool_connect return fn() File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 310, in connect return _ConnectionFairy._checkout(self) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 868, in _checkout fairy = _ConnectionRecord.checkout(pool) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 476, in checkout rec = pool._do_get() File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 256, in _do_get return self._create_connection() File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 256, in _create_connection return _ConnectionRecord(self) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 371, in __init__ self.__connect() File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 665, in __connect with util.safe_reraise(): File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__ compat.raise_( File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_ raise exception File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 661, in __connect self.dbapi_connection = connection = pool._invoke_creator(self) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 590, in connect return dialect.connect(*cargs, **cparams) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 597, in connect return self.dbapi.connect(*cargs, **cparams) File "/usr/local/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address Is the server running on that host and accepting TCP/IP connections? 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/bin/alembic", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/site-packages/alembic/config.py", line 590, in main CommandLine(prog=prog).main(argv=argv) File "/usr/local/lib/python3.10/site-packages/alembic/config.py", line 584, in main self.run_cmd(cfg, options) File "/usr/local/lib/python3.10/site-packages/alembic/config.py", line 561, in run_cmd fn( File "/usr/local/lib/python3.10/site-packages/alembic/command.py", line 322, in upgrade script.run_env() File "/usr/local/lib/python3.10/site-packages/alembic/script/base.py", line 569, in run_env util.load_python_file(self.dir, "env.py") File "/usr/local/lib/python3.10/site-packages/alembic/util/pyfiles.py", line 94, in load_python_file module = load_module_py(module_id, path) File "/usr/local/lib/python3.10/site-packages/alembic/util/pyfiles.py", line 110, in load_module_py spec.loader.exec_module(module) # type: ignore File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/usr/src/app/alembic/env.py", line 81, in <module> run_migrations_online() File "/usr/src/app/alembic/env.py", line 69, in run_migrations_online with connectable.connect() as connection: File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3234, in connect return self._connection_cls(self, close_with_result=close_with_result) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 96, in __init__ else engine.raw_connection() File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3313, in raw_connection return self._wrap_pool_connect(self.pool.connect, _connection) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3283, in _wrap_pool_connect Connection._handle_dbapi_exception_noconnection( File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2117, in _handle_dbapi_exception_noconnection util.raise_( File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_ raise exception File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3280, in _wrap_pool_connect return fn() File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 310, in connect return _ConnectionFairy._checkout(self) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 868, in _checkout fairy = _ConnectionRecord.checkout(pool) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 476, in checkout rec = pool._do_get() File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 256, in _do_get return self._create_connection() File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 256, in _create_connection return _ConnectionRecord(self) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 371, in __init__ self.__connect() File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 665, in __connect with util.safe_reraise(): File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__ compat.raise_( File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_ raise exception File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 661, in __connect 
self.dbapi_connection = connection = pool._invoke_creator(self) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 590, in connect return dialect.connect(*cargs, **cparams) File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 597, in connect return self.dbapi.connect(*cargs, **cparams) File "/usr/local/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address Is the server running on that host and accepting TCP/IP connections? (Background on this error at: https://sqlalche.me/e/14/e3q8) My docker compose file is like this: version: '3' services: api: build: . depends_on: - postgres ports: - 8000:8000 environment: - DATABASE_HOSTNAME=${DATABASE_HOST} - DATABASE_PORT=${DATABASE_PORT} - DATABASE_PASSWORD=${DATABASE_PASSWORD} - DATABASE_NAME=${DATABASE_NAME} - DATABASE_USERNAME=${DATABASE_USERNAME} - SECRET_KEY=${SECRET_KEY} - ALGORITHM=${ALGORITHM} - ACCESS_TOKEN_EXPIRE_MINUTES=${ACCESS_TOKEN_EXPIRE_TIME} postgres: image: postgres environment: - POSTGRES_PASSWORD=${DATABASE_PASSWORD} - POSTGRES_DB=${DATABASE_NAME} ports: - 5432:5432 volumes: - postgres-db:/var/lib/postgresql/data volumes: postgres-db: I already tried to kill the port and run it again, but it did not solve my problem. Does anyone know what the problem is? Edit: My Docker file: FROM python:3.10.5 WORKDIR /usr/src/app COPY requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt COPY . . CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"] And my .env file: DATABASE_HOST=localhost DATABASE_PORT=5432 DATABASE_PASSWORD={password} DATABASE_NAME=SM_API DATABASE_USERNAME=postgres SECRET_KEY={secret key} ALGORITHM=HS256 ACCESS_TOKEN_EXPIRE_TIME = 30
You are connecting to localhost (or 127.0.0.1), which from the container's point of view is the container itself. You probably want to change that line in your docker compose to DATABASE_HOSTNAME=postgres, so it refers to the postgres container.
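A minimal sketch of that change, based on the compose file in the question (alternatively, keep the ${DATABASE_HOST} substitution and set DATABASE_HOST=postgres in the .env file):
    services:
      api:
        environment:
          - DATABASE_HOSTNAME=postgres  # the compose service name resolves to the postgres container
Note that a plain docker run does not join the compose network; running the migration as docker compose run api alembic upgrade head keeps the postgres hostname resolvable.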
4
3
72,939,233
2022-7-11
https://stackoverflow.com/questions/72939233/whats-the-use-of-values-in-enum-in-python
I've been working with enums lately and I don't really get the utility of them in some cases. I hope my question is not too trivial or too stupid; I would really love to better understand the logic behind this Python structure. One common use I found online, or in some pieces of code I have been working on lately, is the use of values that are strings, for example: from enum import Enum class Days(Enum): MONDAY = 'monday' TUESDAY = 'tuesday' ... SUNDAY = 'sunday' And here, from my humble perspective, the values seem redundant: if I print the value of some member I obtain the following: print(Days.MONDAY.value) >> 'monday' I totally understand the utility when the values are numbers and they represent a hierarchical structure, for example: class Levels(Enum): HIGH = 10 MID = 5 LOW = 0 in which you can do things like: HIGH > LOW >> True But in a lot of examples and actual code I see the first approach, the one with MONDAY = 'monday', i.e. where the values are strings instead of numbers, and in this case I really don't understand the utility of having a key that is pretty much equal to the value. If anyone can help me understand, or show some uses, I would really love to learn.
The members (should) always be named in all uppercase (per the very first note on the enum docs), so if you want them to have "names" that are in some other casing, you can assign strings with arbitrary case, which may be more human-friendly when it comes time to display the values to the user. You can also convert from said values to the enum constant easily, with the Enum's constructor, e.g.: Days('monday') is Days.MONDAY # This is True so if the data from the user (or database or whatever) has specific values, you can easily convert them to their logical Enum equivalents this way. If the values really aren't meaningful, you can just assign auto() to all of them and not think about the values. Just in case you're asking "why not use the strings themselves?", the general advantage to enums is guaranteed uniqueness and exhaustiveness for efficient checks and self-documenting code. If you use the strings directly, you have to use == checks (str has no guarantee that equal values are the same object unless you explicitly intern them, so is checks can't be used), and people can pass in strings that don't actually come from the expected set of strings. With Enums there is a central definition of all possible values (and therefore all other values are not possible), and since the values are all guaranteed singletons, when you have a member of that Enum, you can use is/is not testing for cheap identity testing (in CPython at least, it's literally just a pointer comparison) without relying on value equality tests, ==/!=, that invoke the more expensive rich comparison machinery. This even works when you make aliases for the same enum member, e.g.: class Foo(Enum): SPAM = 1 EGGS = 2 ALSO_SPAM = 1 which seamlessly makes Foo.ALSO_SPAM the same object as Foo.SPAM so Foo.SPAM is Foo.ALSO_SPAM is true, allowing two aliases with different names to be used interchangably.
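A short runnable sketch pulling these points together (the class names mirror the question's examples; auto() is used where the value itself carries no meaning):
    from enum import Enum, auto

    class Days(Enum):
        MONDAY = 'monday'
        TUESDAY = 'tuesday'

    class Levels(Enum):
        HIGH = auto()  # the value is irrelevant here, only the name matters
        LOW = auto()

    print(Days('monday') is Days.MONDAY)  # True: convert a stored value back to its member
    print(Days.MONDAY.value)              # 'monday': the human-friendly form for display or storage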
3
5
72,938,821
2022-7-11
https://stackoverflow.com/questions/72938821/pandas-dataframe-show-duplicate-rows-with-exact-duplicates
I have a big dataframe (120000x40) and I am trying to find duplicates in every row and display them. That's what I tried: create dataframe import pandas as pd df = pd.DataFrame({'col1':['1-233','2-766g','6-455','4-356','5-253','2-122','5-531','8-345','1-505','3-127','3-622'], 'col2':['6-998','2-766g','5-955','7-236','5-253','7-258','8-987t','7-567','1-505','6-876','NaN'], 'col3':['3-957','NaN','NaN','3-602m','1-266','2-122','7-834','8-345','2-858','7-984g', 'NaN']}) ## code df["duplicate"] = df.apply(lambda x: len(set(x[x.notna()])) != len(x[x.notna()]), axis=1) print(df) #output: But what I wanted was the following: to see what exactly is duplicated here.
To keep the function readable and general, so it works for more or less than three cols, I'd just rely on writing a dedicated function that uses pandas built in functionality for finding duplicates, and applying that to the dataframe rows: import numpy as np import pandas as pd df = pd.DataFrame({'col1':['1-233','2-766g','6-455','4-356','5-253','2-122','5-531','8- 345','1-505','3-127','3-622'], 'col2':['6-998','2-766g','5-955','7-236','5-253','7-258','8-987t','7-567','1-505','6-876','NaN'], 'col3':['3-957','NaN','NaN','3-602m','1-266','2-122','7-834','8-345','2-858','7-984g', 'NaN']}) def get_duplicate_value(row): """If row has duplicates, return that value, else NaN.""" duplicate_locations = row.duplicated() if duplicate_locations.any(): dup_index = duplicate_locations.idxmax() return row[dup_index] return np.NaN df["solution"] = df.apply(get_duplicate_value, axis=1) Check out the docs of pd.Dataframe.apply, pd.Series.duplicated, pd.Series.any and pd.Series.idxmax to figure out how this works exactly. Output: col1 col2 col3 solution 0 1-233 6-998 3-957 NaN 1 2-766g 2-766g NaN 2-766g 2 6-455 5-955 NaN NaN 3 4-356 7-236 3-602m NaN 4 5-253 5-253 1-266 5-253 5 2-122 7-258 2-122 2-122 6 5-531 8-987t 7-834 NaN 7 8- 345 7-567 8-345 NaN 8 1-505 1-505 2-858 1-505 9 3-127 6-876 7-984g NaN 10 3-622 NaN NaN NaN
3
1
72,907,474
2022-7-8
https://stackoverflow.com/questions/72907474/gunicorn-with-gevent-does-not-enforce-timeout
Let's say I have a simple flask app: import time from flask import Flask app = Flask(__name__) @app.route("/") def index(): for i in range(10): print(f"Slept for {i + 1}/{seconds} seconds") time.sleep(1) return "Hello world" I can run it with gunicorn with a 5 second timeout: gunicorn app:app -b 127.0.0.1:5000 -t 5 As expected, http://127.0.0.1:5000 times out after 5 seconds: Slept for 1/10 seconds Slept for 2/10 seconds Slept for 3/10 seconds Slept for 4/10 seconds Slept for 5/10 seconds [2022-07-07 22:45:01 -0700] [57177] [CRITICAL] WORKER TIMEOUT (pid:57196) Now, I want to run gunicorn with an async worker to allow the web server to use its available resources more efficiently, maximizing time that otherwise would be spent idling to do additional work instead. I'm using gevent, still with a timeout of 5 seconds. gunicorn app:app -b 127.0.0.1:5000 -t 5 -k gevent Unexpectedly, http://127.0.0.1:5000 does NOT time out: Slept for 1/10 seconds Slept for 2/10 seconds Slept for 3/10 seconds Slept for 4/10 seconds Slept for 5/10 seconds Slept for 6/10 seconds Slept for 7/10 seconds Slept for 8/10 seconds Slept for 9/10 seconds Slept for 10/10 seconds Looks like this is a known issue with gunicorn. The timeout only applies to the default sync worker, not async workers: https://github.com/benoitc/gunicorn/issues/2695 uWSGI is an alternate option to gunicorn. I'm not as familiar with it. Looks like its timeout option is called harakiri and it can be run with gevent: uwsgi --http 127.0.0.1:5000 --harakiri 5 --master -w app:app --gevent 100 uWSGI's timeout sometimes works as expected with gevent: Slept for 1/10 seconds Slept for 2/10 seconds Slept for 3/10 seconds Slept for 4/10 seconds Slept for 5/10 seconds Thu Jul 7 23:20:59 2022 - *** HARAKIRI ON WORKER 1 (pid: 59836, try: 1) *** Thu Jul 7 23:20:59 2022 - HARAKIRI !!! worker 1 status !!! Thu Jul 7 23:20:59 2022 - HARAKIRI [core 99] 127.0.0.1 - GET / since 1657261253 Thu Jul 7 23:20:59 2022 - HARAKIRI !!! end of worker 1 status !!! DAMN ! worker 1 (pid: 59836) died, killed by signal 9 :( trying respawn ... But other times it doesn't time out so it appears to be pretty flaky. Is there anyway to enforce a timeout using gunicorn with an async worker? If not, are there any other web servers that enforce a consistent timeout with an async worker, similar to uWSGI?
From https://docs.gunicorn.org/en/stable/settings.html#timeout: Workers silent for more than this many seconds are killed and restarted. For the non sync workers it just means that the worker process is still communicating and is not tied to the length of time required to handle a single request. So timeout is likely functioning by design — as worker timeout, not request timeout. You can subclass GeventWorker to override handle_request() with gevent.Timeout: import gevent from gunicorn.workers.ggevent import GeventWorker class MyGeventWorker(GeventWorker): def handle_request(self, listener_name, req, sock, addr): with gevent.Timeout(self.cfg.timeout): super().handle_request(listener_name, req, sock, addr) Usage: # gunicorn app:app -b 127.0.0.1:5000 -t 5 -k gevent gunicorn app:app -b 127.0.0.1:5000 -t 5 -k app.MyGeventWorker
7
7
72,912,762
2022-7-8
https://stackoverflow.com/questions/72912762/setup-py-building-c-extension-with-numpy-dependency
I have created a simple c-function (using the guide here Create Numpy ufunc) and I'm now trying to distribute my package on pypi. For it to work, it needs to compile the c-file(s) into a .so -file, which then can be imported from python and everything is good. To compile it needs the header file numpy/ndarraytypes.h from Numpy. When building and installing locally, it works fine. This since I know where the header files are located. However when distributing it, where can we find the Numpy folder? It is obvious from the logs that numpy gets installed before my package is built and installed so I just need to include the correct Numpy folder. from setuptools import setup from setuptools import Extension if __name__ == "__main__": setup( name="myPack", install_requires=[ "numpy>=1.22.3", # <- where is this one located after it gets installed? ], ext_modules=[ Extension( 'npufunc', sources=['npufunc.c'], include_dirs=[ # what to put here for it to find the "numpy/ndarraytypes.h" header file when installing? "/usr/local/Cellar/numpy/1.22.3_1/lib/python3.9/site-packages/numpy/core/include/" # <- this one I have locally and when added, installtion works fine ] ), ] )
You can query numpy for the include directory from Python via import numpy numpy.get_include() This should return a string (/usr/lib/python3.10/site-packages/numpy/core/include on my system) which you can add to the include_dirs. See here for the docs. As for your question: numpy is a build dependency for your project. The old setup.py method is kind of bad at handling those, which is why it has been superseded by the more modern pyproject.toml approach, which (among other things) makes such specifications possible. A fairly minimal pyproject.toml for your setting (setuptools + numpy) would look like this: [build-system] requires = ["setuptools", "wheel", "oldest-supported-numpy"] build-backend = "setuptools.build_meta" Given such a pyproject.toml you can build the extension using the python build module by calling python -m build which should produce a wheel with a compiled .so inside it.
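A sketch of how the question's setup.py could then look, assuming numpy is importable at build time (which the pyproject.toml above arranges via the isolated build environment):
    from setuptools import setup, Extension
    import numpy

    setup(
        name="myPack",
        ext_modules=[
            Extension(
                'npufunc',
                sources=['npufunc.c'],
                include_dirs=[numpy.get_include()],  # resolved at build time, no hard-coded path
            ),
        ],
    )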
3
6
72,907,182
2022-7-8
https://stackoverflow.com/questions/72907182/python-pip-pip-install-cannot-find-a-version-that-satisfies-a-requirement-des
Python3 Pip error + Poetry Packaging I am working in a python library that I am trying to publish to TestPypi. So far, there have been no issues with publishing my Poetry builds. For context, as a beginner, I come from these websites : https://python-poetry.org/docs/ https://packaging.python.org/en/latest/tutorials/packaging-projects/ The only issue that has arose is that dependencies listed in my pyproject.toml are not accounted for when installing the package with pip install. I have attempted at updating setuptools and pip but I have done so to no avail. My goal is to have clean dependency installation without the versioning errors. This is the main solution I have tried. pyproject.toml I hid my real names. [tool.poetry] name = "package-name" version = "0.1.0" description = "<desc>" authors = ["<myname> <myemail>"] license = "MIT" [tool.poetry.dependencies] python = "^3.10" beautifulsoup4 = {version = "4.11.1", allow-prereleases = true} recurring-ical-events = {version = "1.0.2b0", allow-prereleases = true} requests = {version = "2.28.0", allow-prereleases = true} rich = {version = "12.4.4", allow-prereleases = true} [tool.poetry.dev-dependencies] black = {version = "22.3.0", allow-prereleases = true} [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" As the installer iterates through a dependency, it will return this error depending on whichever one is ordered first. (Throughout my monkey-patch-like attempts at fixing this, I was able to change the order of installation by modifying the strictness of each dependency version) the error pip returns ERROR: Could not find a version that satisfies the requirement requests==2.28.1 (from homeworkpy) (from versions: 2.5.4.1) ERROR: No matching distribution found for requests==2.28.1 I have tried changing the strictness of the versions. (I removed the ^) Switching to Poetry as a manager was also an attempt. My previous attempts were manual. I have verified that the builds are corresponding to the correct builds previously published. For extra info: I am building on a Github Codespace in which I run on 18.04.1-Ubuntu Would anyone have any knowledge to spare of an issue like this? I am quite new to packaging and building, and I have had some success in most parts except for dependencies.
Main Error TLDR; Pip tries to resolve dependencies with TestPypi, but they are in another index (Pypi). Workarounds at the end of the answer. The fact that I am publishing to TestPypi is the reason this has happened. I will explain why what I did made this error appear, and then I will show how you, from the future, may solve this. Difference between Pypi and TestPypi Pypi is the Python Package Index. It's a giant index of Python packages one may install from with pip install. TestPypi is the Python Package Index designated for testing and publishing without touching the real Package Index. It can be useful when learning how to publish a package. The main difference is that it is a completely separate repository. Therefore, what's on TestPypi may not be exactly what's on Pypi. My research was limited, so if I confused anyone, the main difference is that they are two different Package Indexes; one was made for testing purposes. I published my package to TestPypi and set my pip install to install from that repository. Not Pypi, but TestPypi. Why dependency resolution failed When I defined my project's dependencies, I defined them based on their Pypi presence. Most dependencies are present in Pypi, not TestPypi. This meant that when I asked for my package from TestPypi, pip only looked at TestPypi, and the pip installer workflow fell into a pattern like this: 0.5. Set the fetching repository to TestPypi and not Pypi. 1. Pull the package from TestPypi. 2. Install and examine dependencies. 3. Find the first dependency (e.g. Beautifulsoup4). 4. Pull the dependency from TestPypi. 5. Successfully install Beautifulsoup4 (this is because beautifulsoup4 is actually present in TestPypi). 6. Move on to another dependency (e.g. rich). 7. Fail to pull from TestPypi (Rich is not present in TestPypi). 8. Return dependency not found. Why some dependencies oddly worked As you see in workflow step 5, the beautifulsoup4 package was found on TestPypi (someone had put it up there; see the beautifulsoup4 page on TestPypi). However, as you see in step 7, Rich is not found on the TestPypi index. This issue occurs because I set my repository to install from TestPypi, since that is where my package was held. This caused pip to use TestPypi for every single dependency as well. How I got around it I got around it by using TestPypi to verify accurate build artifact publishing, and then I jumped to normal Pypi to test installation and dependency installation. Workarounds Install from TestPypi: python3 -m pip install -i https://test.pypi.org/simple/ <package name> Install from Pypi (by default): python3 -m pip install <package name> Install the package from TestPypi but dependencies from Pypi: The Python docs explain this very well. If you want to allow pip to also download packages from PyPI, you can specify --extra-index-url to point to PyPI. This is useful when the package you’re testing has dependencies: python3 -m pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ your-package
9
5
72,924,307
2022-7-9
https://stackoverflow.com/questions/72924307/creating-new-rows-in-dataframe-based-on-string-values-in-multiple-columns
I ran into this problem where I have a dataframe that looks like the following (the values in the last 3 columns are usually 4-5 alphanumeric codes). import pandas as pd data = {'ID':['P39','S32'], 'Name':['Pipe','Screw'], 'Col3':['Test1, Test2, Test3','Test6, Test7'], 'Col4':['','Test8, Test9'], 'Col5':['Test4, Test5','Test10, Test11, Test12, Test13'] } df = pd.DataFrame(data) ID Name Col3 Col4 Col5 0 P39 Pipe Test1, Test2, Test3 Test4, Test5 1 S32 Screw Test6, Test7 Test8, Test9 Test10, Test11, Test12, Test13 I want to expand this dataframe or create a new one based on the values in the last 3 columns in each row. I want to create more rows based on the maximum amount of values separated by commas in one of the last 3 rows. I then want to keep the first 2 columns the same in all of the expanded rows. But I want to fill the last 3 columns in the expanded rows with only one value each from the original column. In the above example, the first row would indicate I need 3 total rows (Col3 has the most at 3 values), and the second row would indicate I need 4 total rows (Col5 has the most at 4 values). A desired output would be along the lines of: ID Name Col3 Col4 Col5 0 P39 Pipe Test1 Test4 1 P39 Pipe Test2 Test5 2 P39 Pipe Test3 3 S32 Screw Test6 Test8 Test10 4 S32 Screw Test7 Test9 Test11 5 S32 Screw Test12 6 S32 Screw Test13 I first found a way to figure out the number of rows needed. I also had the idea to append the values to a new dataframe in the same loop. Although, I'm not sure how to separate the values in the last 3 columns and append them one by one in the rows. I know the str.split() is useful to put the values into a list. My only idea would be if I need to loop through each column separately and append it to the correct row, but I'm not sure how to do that. output1 = pd.DataFrame( columns = ['ID', 'Name', 'Col3', 'Col4', 'Col5']) for index, row in df.iterrows(): output2 = pd.DataFrame( columns = ['ID', 'Name', 'Col3', 'Col4', 'Col5']) col3counter = df.iloc[index, 2].count(',') col4counter = df.iloc[index, 3].count(',') col5counter = df.iloc[index, 4].count(',') numofnewcols = max(col3counter, col4counter, col5counter) + 1 iter1 = df.iloc[index, 2].split(', ') iter2 = df.iloc[index, 3].split(', ') iter3 = df.iloc[index, 4].split(', ') #for q in iter1 #output2.iloc[ , 2] = output1 = pd.concat([output1, output2], ignore_index=True) del output2
A bit tricky but it should work with melt to flat your dataframe then pivot_table to reshape it: out = (df.reset_index().melt(['ID', 'Name', 'index'], var_name='col', value_name='val') .assign(val=lambda x: x['val'].str.split(', ')).explode('val') .assign(row=lambda x: x.groupby(['index', 'col']).cumcount()) .pivot_table('val', ['index', 'row', 'ID', 'Name'], 'col', aggfunc='first') .droplevel(['index', 'row']).reset_index().rename_axis(columns=None).fillna('')) Output: ID Name Col3 Col4 Col5 0 P39 Pipe Test1 Test4 1 P39 Pipe Test2 Test5 2 P39 Pipe Test3 3 S32 Screw Test6 Test8 Test10 4 S32 Screw Test7 Test9 Test11 5 S32 Screw Test12 6 S32 Screw Test13
3
1
72,916,381
2022-7-8
https://stackoverflow.com/questions/72916381/read-specific-region-from-pdf
I'm trying to read a specific region on a PDF file. How to do it? I've tried: Using PyPDF2, cropped the PDF page and read only that. It doesn't work because PyPDF2's cropbox only shrinks the "view", but keeps all the items outside the specified cropbox. So on reading the cropped pdf text with extract_text(), it reads all the "invisible" contents, not only the cropped part. Converting the PDF page to PNG, cropping it and using Pytesseract to read the PNG. Py tesseract doesn't work properly, don't know why.
PyMuPDF can probably do this. I just answered another question regarding getting the "highlighted text" from a page, but the solution uses the same relevant parts of the PyMuPDF API you want: figure out a rectangle that defines the area of interest extract text based on that rectangle and I say "probably" because I haven't actually tried it on your PDF, so I cannot say for certain that the text is amenable to this process. import os.path import fitz from fitz import Document, Page, Rect # For visualizing the rects that PyMuPDF uses compared to what you see in the PDF VISUALIZE = True input_path = "test.pdf" doc: Document = fitz.open(input_path) for i in range(len(doc)): page: Page = doc[i] page.clean_contents() # https://pymupdf.readthedocs.io/en/latest/faq.html#misplaced-item-insertions-on-pdf-pages # Hard-code the rect you need rect = Rect(0, 0, 100, 100) if VISUALIZE: # Draw a red box to visualize the rect's area (text) page.draw_rect(rect, width=1.5, color=(1, 0, 0)) text = page.get_textbox(rect) print(text) if VISUALIZE: head, tail = os.path.split(input_path) viz_name = os.path.join(head, "viz_" + tail) doc.save(viz_name) For context, here's the project I just finished where this was working for the highlighted text, https://github.com/zacharysyoung/extract_highlighted_text.
3
3
72,920,010
2022-7-9
https://stackoverflow.com/questions/72920010/pylance-wont-find-stubs-for-native-library-with-submodules
Edit: I also posted this question as an issue on pylance-release github repo, which might be better suited to find an answer. I'm having issues with Visual Studio Code python language server, which cannot find the stubs for a python binding library I am developing (https://github.com/pthom/lg_hello_imgui). I think the issue might be linked to the fact that my library expose several submodules. I made a minimum reproductible example here: https://github.com/pthom/pylance_test When using PyCharm or CLion, I can easily navigate to the stubs I defined for my library, and also in its submodules stubs. However, when using Visual Studio Code, it is less reliable: when using the Jedi language server I can navigate to the functions definition, although it might fail after a few minutes of usage. when using the Pylance language server, it always fails I'm using a Mac M1. I've had this failure inside OSX, as well as inside windows ARM, with Parallels. Is it linked to my setup? Here is the installed package structure: .venv/lib/python3.9/site-packages/lg_hello_imgui/ ├── __init__.py ├── _lg_hello_imgui.cpython-39-darwin.so* ├── hello_imgui.pyi ├── imgui.pyi ├── implot.pyi └── py.typed _lg_hello_imgui is a native library that include three submodules, defined like this via pybind11: PYBIND11_MODULE(_lg_hello_imgui, m) { auto module_imgui = m.def_submodule("imgui"); py_init_module_imgui(module_imgui); auto module_himgui = m.def_submodule("hello_imgui"); py_init_module_hello_imgui(module_himgui); auto module_implot = m.def_submodule("implot"); py_init_module_implot(module_implot); } and lg_hello_imgui/__init__.py states: from lg_hello_imgui._lg_hello_imgui import imgui from lg_hello_imgui._lg_hello_imgui import hello_imgui from lg_hello_imgui._lg_hello_imgui import implot Repo for issue reproduction: https://github.com/pthom/pylance_test (Note: the lg_hello_imgui linked library does not exist as wheel yet, so it may require 1 minute or 2 to build)
I'm answering my own question since I got an answer from the Pylance team in the meantime. https://peps.python.org/pep-0484/#stub-files specifies that Modules and variables imported into the stub are not considered exported from the stub unless the import uses the import ... as ... form or the equivalent from ... import ... as ... form. (UPDATE: To clarify, the intention here is that only names imported using the form X as X will be exported, i.e. the name before and after as must be the same.) So I had to put this into the __init__.pyi file: from . import hello_imgui as hello_imgui from . import imgui as imgui from . import implot as implot
3
4
72,912,915
2022-7-8
https://stackoverflow.com/questions/72912915/join-items-in-list-that-occur-before-and-after-keyword-python
I'm using a named entity recognition model to find names in a text string. For hyphenated names like Jane Miller-Smith, the NER model returns the names separately like this: names = ['Jane','Miller','-','Smith'] What's a simple way to join the items before and after the '-' into one string in this list, so that I have a list of first and last name like name = ['Jane', 'Miller-Smith']? So far I've tried to loop through the list of names, based on solutions like this, for different hyphenated name versions: name1 = ['Jane', 'Miller', '-','Smith'] name = ['Jane', '-', 'Marie','Miller', '-','Smith'] new_name = [] for cur, nxt in zip(name, name[1:]): print(cur, nxt) if cur == '-': hyph = cur+nxt new_name.append(hyph) print("hyph: ", hyph) else: new_name.append(cur) print("cur: ", cur) print(new_name) But I can't wrap my head around how to combine only the strings before and after the hyphen and also keep the other non-hyphenated strings in the list in order (so that the last name doesn't suddenly come first).
Scan from right to left, replacing the three-element slices whenever a hyphen is found: >>> names = ['Jane', '-', 'Marie','Miller', '-','Smith'] >>> for i in reversed(range(len(names))): if names[i] == '-': names[i-1: i+2] = [f'{names[i-1]}-{names[i+1]}'] >>> names ['Jane-Marie', 'Miller-Smith'] An alternative is to loop left-to-right and build a new result list: >>> names = ['Jane', '-', 'Marie', 'Miller', '-','Smith'] >>> result = [] >>> it = iter(names) >>> for tok in it: if tok == '-': tok = result.pop() + '-' + next(it) result.append(tok) >>> result ['Jane-Marie', 'Miller-Smith']
3
6
72,920,577
2022-7-9
https://stackoverflow.com/questions/72920577/mach-o-file-but-is-an-incompatible-architecture-have-arm64-need-x86-64
I have a problem when I run a .py file on a Macbook Air M1: [Running] python3 -u "/Users/kaiyuwei/Documents/graduation project/metaheuristics/run_CRO.py" Traceback (most recent call last): File "/Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages/numpy/core/__init__.py", line 23, in <module> from . import multiarray File "/Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages/numpy/core/multiarray.py", line 10, in <module> from . import overrides File "/Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages/numpy/core/overrides.py", line 6, in <module> from numpy.core._multiarray_umath import ( ImportError: dlopen(/Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so, 0x0002): tried: '/Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/kaiyuwei/Documents/graduation project/metaheuristics/run_CRO.py", line 1, in <module> from models.multiple_solution.evolutionary_based.CRO import BaseCRO File "/Users/kaiyuwei/Documents/graduation project/metaheuristics/models/multiple_solution/evolutionary_based/CRO.py", line 1, in <module> import numpy as np File "/Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages/numpy/__init__.py", line 140, in <module> from . import core File "/Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages/numpy/core/__init__.py", line 49, in <module> raise ImportError(msg) ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.8 from "/Library/Developer/CommandLineTools/usr/bin/python3" * The NumPy version is: "1.23.1" and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: dlopen(/Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so, 0x0002): tried: '/Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')) [Done] exited with code=1 in 0.055 seconds I think the reason is that I'm using the numpy package for 'x86_64', so I tried to use pip install numpy --upgrade to upgrade numpy, but I got output like: Requirement already satisfied: numpy in /Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages (1.23.1) I also tried python3 -m pip install --upgrade pip to upgrade python, but still; Requirement already satisfied: pip in /Users/kaiyuwei/Library/Python/3.8/lib/python/site-packages (22.1.2) Can anyone help me?
I solved the problem by simply uninstalling numpy package: pip3 uninstall numpy and reinstalling it: pip3 install numpy
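If the reinstall still picks up a cached x86_64 wheel, forcing pip to bypass its cache is a common variation (an addition to the answer above, not part of it):
    pip3 uninstall numpy
    pip3 install --no-cache-dir --force-reinstall numpy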
20
-4
72,914,731
2022-7-8
https://stackoverflow.com/questions/72914731/how-to-permute-dimmensions-in-tensorflow
I am able to permute the dimensions of a tensor in PyTorch, but not in TensorFlow! PyTorch: A = torch.rand(1, 2, 5) A = A.permute(0, 2, 1) A.shape torch.Size([1, 5, 2]) TensorFlow (just a try, I don't know about this): A = tf.random.normal(1, 2, 5) A = tf.keras.layers.Permute((0, 2, 1)) Not working
Use tf.transpose: import tensorflow as tf A = tf.random.normal((1, 2, 5)) A_t = tf.transpose(A, perm=[0, 2, 1]) print(A.shape, A_t.shape) # (1, 2, 5) (1, 5, 2)
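If the Keras layer from the question's attempt is wanted instead, note that Permute takes 1-based dims that exclude the batch axis; a sketch assuming the same (1, 2, 5) input:
    import tensorflow as tf

    A = tf.random.normal((1, 2, 5))
    A_p = tf.keras.layers.Permute((2, 1))(A)  # dims are 1-indexed and skip the batch dimension
    print(A_p.shape)  # (1, 5, 2)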
3
4
72,917,054
2022-7-8
https://stackoverflow.com/questions/72917054/generate-full-page-or-html-fragment-based-on-request-header-htmx
When using the HTMX framework with Python Flask, you have to be able to: serve a request as an HTML fragment if it's made by HTMX (via AJAX), and serve a request as a full page if it's made by the user (e.g. entered directly in the browser URL bar). See Single-page-application with fixed header/footer with HTMX, with browsing URL history or Allow Manual Page Reloading for more details. How to do this with the Flask template system? from flask import Flask, render_template, request app = Flask("") @app.route('/pages/<path>') def main(path): htmx_request = request.headers.get('HX-Request') is not None return render_template(path + '.html', fullpage=not htmx_request) app.run() What's the standard way to output a full page (based on a parent template pagelayout.html): {% extends "pagelayout.html" %} {% block container %} <button>Click me</button> {% endblock %} if fullpage is True, and just an HTML fragment: <button>Click me</button> if it is False?
This solution based on that we can use a dynamic variable when extending a base template. So depending on the type or the request, we use the full base template or a minimal base template that returns only our fragment's content. Lets call our base template for fragments base-fragments.html: {% block container %} {% endblock %} It's just returns the main block's content, nothing else. At the view function we have a new template variable baselayout, that contains the name of the base template depending on the request's type (originating from HTMX or not): @app.route('/pages/<path>') def main(path): htmx_request = request.headers.get('HX-Request') is not None baselayout = 'base-fragments.html' if htmx_request else 'pagelayout.html' return render_template(path + '.html', baselayout=baselayout) And in the page template, we use this baselayout variable at the extends: {% extends baselayout %} {% block container %} <button>Click me</button> {% endblock %}
3
7
72,918,269
2022-7-9
https://stackoverflow.com/questions/72918269/can-i-use-pythons-functools-cache-based-on-identity
I would like to have a Python @cache decorator based on identity, not __hash__/__eq__. That is to say, I would like the cached value for an argument ka NOT to be used for a different object ka2, even if ka == ka2. Is there a way to do that? In code: from functools import cache class Key: def __init__(self, value): self.value = value def __eq__(self, another): print(f"__eq__ {self.value}, {another.value}") return another.value == self.value def __hash__(self): print(f"__hash__ {self.value}") return hash(self.value) def __repr__(self): return self.value i = 0 @cache def foo(key): global i i += 1 print(f"Computing foo({key}) = {i}") return i ka = Key('a') ka2 = Key('a') print(f"foo(ka): {foo(ka)}") print(f"foo(ka2): {foo(ka2)}") # I would like the cached value for ka NOT to be used even though ka2 == ka.
Make a wrapper like Key that compares by the identity of its wrapped object, and wrap your caching function in a helper that uses the wrapper: class Id: __slots__="x", def __init__(self,x): self.x=x def __hash__(self): return id(self.x) def __eq__(self,o): return self.x is o.x def cache_id(f): @functools.cache def id_f(i): return f(i.x) @functools.wraps(f) def call(x): return id_f(Id(x)) return call @cache_id def foo(key): …
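Usage with the Key class from the question, to illustrate the identity-based behaviour of the decorated foo:
    ka = Key('a')
    ka2 = Key('a')

    foo(ka)   # computed: ka seen for the first time
    foo(ka2)  # computed again, even though ka2 == ka, because the identities differ
    foo(ka)   # cache hit: the very same object as before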
4
2
72,904,923
2022-7-7
https://stackoverflow.com/questions/72904923/how-to-sort-methods-by-method-type-in-fastapi-swagger-api
How can I set a sort order for the API methods in the FastAPI Swagger autodocs? I would like all my methods grouped by type (GET, POST, PUT, DELETE). This answer shows how to do it in Java. How can I do it in Python? from fastapi import FastAPI app = FastAPI() @app.get("/") def list_all_components(): pass @app.get("/{component_id}") def get_component(component_id: int): pass @app.post("/") def create_component(): pass @app.put("/{component_id}") def update_component(component_id: int): pass @app.delete("/{component_id}") def delete_component(component_id: int): pass
You can configure Swagger UI parameters through the FastAPI constructor. app = FastAPI(swagger_ui_parameters={"operationsSorter": "method"}) The full list of parameters can be found in the swagger documentation.
3
6
72,912,363
2022-7-8
https://stackoverflow.com/questions/72912363/create-a-dag-using-the-rest-api
Is it possible to create a DAG in Apache Airflow by sending the DAG file contents through the API? For example, it is possible to list all DAGs using the API: curl -u "admin:admin" http://localhost:8080/api/v1/dags { "dags": [], "total_entries": 0 }
You cannot create new DAGs via the API. You can read a discussion about this request in the project at https://github.com/apache/airflow/discussions/24744, which also lists the reasons why Airflow won't have it. In simple words, adding such an API would mean that the machine(s) the DAGs are deployed to would need credentials to write those DAG files to all the other components. For such a use case you are better off using Git sync to add files to the DAG directory.
4
5
72,909,692
2022-7-8
https://stackoverflow.com/questions/72909692/python-how-to-get-all-possible-bin-combinations-from-a-set-of-data-with-a-weigh
I have a list of numbers which all correspond to items of different weight: weights = [50, 40, 30, 100, 150, 12, 150, 10, 5, 4] I need to split the values into two bins with the caveat that the bin sum total cannot exceed 300. e.g. The simplest one I can think of is: bin1 = [150, 150] = 300 bin2 = [50, 40, 30, 100, 12, 10, 5, 4] = 251 I want to be able to get all the combinations of these weights that would satisfy this caveat, unsure how to go about this?
one way is brute-forcing it by binning all possible permutations of the list there has to be a better (more clever) way of doing that - it's terribly slow. (but I'm supposed to be doing other things right now ;-)) from itertools import permutations max_bin_size = 300 weights = [50, 40, 30, 100, 150, 12, 150, 10, 5, 4] bin_combinations = set() def to_bins(weights): bin = [] for weight in weights: if weight > max_bin_size: raise ValueError("wtf?") if sum(bin) + weight > max_bin_size: yield tuple(sorted(bin)) bin = [weight] else: bin.append(weight) yield tuple(sorted(bin)) for permutation in set(permutations(weights)): bin_combinations.add(tuple(sorted(to_bins(permutation)))) for combination in bin_combinations: print(combination) # ((4, 10, 40, 100), (5, 12, 150), (30, 50, 150)) # ((4, 5, 10, 40, 50, 100), (12, 30), (150, 150)) # ((4, 10, 30, 150), (5, 50, 100), (12, 40, 150)) # ((4, 5, 30, 100), (10, 50, 150), (12, 40, 150)) # ((4, 12, 50, 100), (5, 10, 30, 40), (150, 150)) # ((4, 5, 150), (10, 30, 40, 150), (12, 50, 100)) # ... Update: a version with reuse of bins (assuming one can put the weights freely into any available bin): from itertools import permutations max_bin_size = 300 weights = [50, 40, 30, 100, 150, 12, 150, 10, 5, 4] bin_combinations = set() def to_bins(weights): bins = [[]] for weight in weights: if weight > max_bin_size: raise ValueError("wtf?") for bin in bins: if sum(bin) + weight > max_bin_size: continue bin.append(weight) break else: bins.append([weight]) return tuple(sorted(tuple(sorted(bin)) for bin in bins)) for permutation in set(permutations(weights)): bin_combinations.add(to_bins(permutation)) print(len(bin_combinations), "combinations") for combination in bin_combinations: print(combination) # 14 combinations # ((4, 5, 10, 12, 30, 50, 150), (40, 100, 150)) # ((4, 5, 10, 30, 40, 50, 150), (12, 100, 150)) # ((4, 12, 30, 100, 150), (5, 10, 40, 50, 150)) # ((4, 100, 150), (5, 10, 12, 30, 40, 50, 150)) # ((4, 5, 12, 30, 50, 150), (10, 40, 100, 150)) # ((4, 5, 10, 12, 100, 150), (30, 40, 50, 150)) # ((4, 5, 12, 30, 40, 50, 150), (10, 100, 150)) # ((4, 5, 40, 100, 150), (10, 12, 30, 50, 150)) # ((4, 10, 12, 30, 40, 50, 150), (5, 100, 150)) # ((4, 5, 10, 30, 100, 150), (12, 40, 50, 150)) # ((4, 5, 10, 12, 30, 40, 150), (50, 100, 150)) # ((4, 10, 40, 50, 150), (5, 12, 30, 100, 150)) # ((4, 5, 10, 12, 40, 50, 150), (30, 100, 150)) # ((4, 5, 10, 12, 30, 40, 50, 100), (150, 150))
4
2
72,908,362
2022-7-8
https://stackoverflow.com/questions/72908362/how-to-convert-discord-bot-commands-to-hybrid-command
I'm trying to convert my Discord Bot commands to hybrid commands. When I don't use the hybrid_command decorator, the slash commands work. The error says the callback must be a coroutine. What does it mean? What am I missing in the code? main.py class MyBot(commands.Bot): def __init__(self): intents=discord.Intents.all() intents.message_content = True super().__init__( command_prefix='!', intents=discord.Intents.all() ) self.initial_extensions = [ "cogs.user_commands" ] async def setup_hook(self): for ext in self.initial_extensions: await self.load_extension(ext) await bot.tree.sync(guild = discord.Object(id=191453821174531128)) user_commands.py class user_commands(commands.Cog): def __init__(self, bot): self.bot = bot bot.remove_command("help") @commands.hybrid_command(name='help', with_app_command=True) @app_commands.command() async def help(self, interaction: discord.Interaction, command: Optional[str]): ...... await interaction.response.send_message(embed=embed, ephemeral = True) @help.autocomplete('command') async def help_autocomplete(self, interaction: discord.Interaction, current: str, ) -> List[app_commands.Choice[str]]: ..... Error:
First of all, your error means that decorators @commands.hybrid_command(...) and @app_commands.command() do not go well together. You either define hybrid command, slash command or text-chat command. Next moment, hybrid commands have commands.Context as their argument so we need to replace interaction parameter with that and adjust your ...... code accordingly -> replace usage of interaction with its analogical attributes/methods from ctx of said commands.Context class, i.e. replace interaction.user with ctx.author interaction.channel with ctx.channel interaction.response.send_message with ctx.send or ctx.reply etc So your code in user_commands.py would look like this. @commands.hybrid_command(name='help', with_app_command=True) async def help(self, ctx: commands.Context, command: Optional[str]): ...... await ctx.send(embed=embed, ephemeral=True) @help.autocomplete('command') async def help_autocomplete(self, interaction: discord.Interaction, current: str, ) -> List[app_commands.Choice[str]]: ..... PS. it's not advisable to remove help command. I suggest reading this amazing github gist A basic walkthrough guide on subclassing HelpCommand
3
6
72,907,685
2022-7-8
https://stackoverflow.com/questions/72907685/idiomatic-way-to-check-if-a-value-is-inside-an-enum
I want to check if some string value exists in the values set of some Enum. Here is what I do: from enum import Enum class Color(str, Enum): RED = "red" GREEN = "green" YELLOW = "yellow" s = "red" # s = "blue" if any(s == c.value for c in Color): print(Color(s)) When I checked the documentation I found that: The EnumMeta metaclass is responsible for providing the contains(), dir(), iter() and other methods that allow one to do things with an Enum class that fail on a typical class, such as list(Color) or some_enum_var in Color But I want something different (checking existence for values). Is there a more pythonic way to solve this?
You can test values against _value2member_map_, a dict attribute of the Enum sub-class that maps values to member classes, if you prefer not to clutter up your code with a try-except block: if s in Color._value2member_map_: print(Color(s))
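An equivalent check that avoids touching the private-looking attribute is to just attempt the conversion (a sketch using the Color enum from the question):
    def has_value(value: str) -> bool:
        try:
            Color(value)
        except ValueError:
            return False
        return True

    has_value("red")   # True
    has_value("blue")  # False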
3
2
72,900,603
2022-7-7
https://stackoverflow.com/questions/72900603/create-non-existent-functions-by-a-single-click-in-vs-code-python
Other IDE's like PyCharm, IntelliJ, etc. have a feature where if it finds a function being called that is undefined, you can right-click it and click 'create method' or something similar to automatically create the function definition. It helps out a lot in TDD. Is there something similar in VS Code?
You can install the My Code Actions extension, here is a simple example: Configure in settings.json file: // settings.json file { "my-code-actions.actions": { "[python]": { "create new methond {{diag:$1}}": { "diagnostics": ["\"(.*?)\" is not defined"], "text": "def {{diag:$1}}():\n pass\n", "where": "afterLast", } } } } The effect when writing code after saving this setting: Another method that might be useful: open in sequence File > Preferences > Configure User Snippets. Select python in the command palette. This will create a python.json file in which you can customize the code segment according to the rules. A simple example: "Print to console":{ "prefix": "defHello", "body": [ "def Hello():", "${1: }print('Hello worle')", "$2", ], "description": "print hello world" }
5
6
72,905,444
2022-7-8
https://stackoverflow.com/questions/72905444/calculate-time-difference-between-two-dates-in-the-same-column-in-pandas
I have a column (DATE) with multiple data times and I want to find the difference in minutes from date to date and store it into a new column (time_interval). This is what I have tried: df['time_interval'] = (df['DATE'],axis=0 - df['DATE'],axis=1) * 24 * 60
Depending on how you'd care to store the differences, either: import pandas as pd from datetime import timedelta df = pd.DataFrame(data=['01-01-2006 00:53:00', '01-01-2006 01:53:00', '01-01-2006 02:53:00', '01-01-2006 03:53:00', '01-01-2006 04:53:00'], columns=['DATE']) df['DATE'] = pd.to_datetime(df['DATE']) df['time_interval'] = df['DATE'].diff().fillna(timedelta(0)).apply(lambda x: x.total_seconds() / 60) to get DATE time_interval 0 2006-01-01 00:53:00 0.0 1 2006-01-01 01:53:00 60.0 2 2006-01-01 02:53:00 60.0 3 2006-01-01 03:53:00 60.0 4 2006-01-01 04:53:00 60.0 or alternatively df['time_interval'] = df['DATE'].diff().shift(-1).fillna(timedelta(0)).apply(lambda x: x.total_seconds() / 60) to get DATE time_interval 0 2006-01-01 00:53:00 60.0 1 2006-01-01 01:53:00 60.0 2 2006-01-01 02:53:00 60.0 3 2006-01-01 03:53:00 60.0 4 2006-01-01 04:53:00 0.0
3
4
72,902,269
2022-7-7
https://stackoverflow.com/questions/72902269/sqlalchemy-pyodbc-how-to-trust-certificate
I have a python script using pyodbc that connects to a remote server with sql server running on it. I have a package I wrote with functions using sqlalchemy that I was able to use on one of my computers. I connected with this string: driver = 'SQL+Server+Native+Client+11.0' engine_string = prefix + '://' + username + ':' + password + '@' + server + '/' + database + '?driver=' + driver On another computer, I was not able to install the native client 11.0 which I understand is deprecated. I tried switching the value to driver = 'ODBC+Driver+18+for+SQL+Server' I got an error with that version [ODBC Driver 18 for SQL Server]SSL Provider: The certificate chain was issued by an authority that is not trusted. I then tried just a generic odbc connection with the windows utility and got the same error. I was able to get that odbc manager connection to work when I checked 'Trust Server Certificate' This is probably not good long term, but is there a way to add that attribute to the first string I have above? I tried several variations, but nothing worked. I was able to get a working connection with the following: cnxn = pyodbc.connect( driver = '{ODBC Driver 18 for SQL Server}', server = server, database = database, uid = username, pwd = password, encrypt='no', trust_server_certificate='yes') but that connection did not work with the package I wanted to use. thanks!
The connection error is due to a change in default behavior for the newest versions of SQL Server Drivers (ODBC v18+, JDBC v10+, .Net Microsoft.Data.SqlClient v4.0+). ODBC release notes: https://techcommunity.microsoft.com/t5/sql-server-blog/odbc-driver-18-0-for-sql-server-released/ba-p/3169228 The correct ODBC keyword to use is TrustServerCertificate https://learn.microsoft.com/en-us/sql/connect/odbc/dsn-connection-string-attribute?view=sql-server-ver16
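A sketch of how that keyword can be carried in the SQLAlchemy URL built in the question (credentials and host are placeholders; assuming the mssql+pyodbc dialect, whose extra query parameters are passed through to the ODBC connection string):
    from sqlalchemy import create_engine

    engine = create_engine(
        "mssql+pyodbc://username:password@server/database"
        "?driver=ODBC+Driver+18+for+SQL+Server"
        "&TrustServerCertificate=yes"  # trusts the server certificate, with the caveats noted above
    )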
4
7
72,901,860
2022-7-7
https://stackoverflow.com/questions/72901860/effective-way-to-regexp-match-pandas-and-strip-inside-df
Hoping someone on here is kind enough to at least point me in the right direction. Overall, I'm trying to match regex for each row and produce the below output (in 'desired example output'). To elaborate, data is being matched from a 'Device Pool' column from a rather large CSV (all settings from a phone). I need to: input only the regexp match in the Device Pool column/row and still have it corresponding to the Directory Number 1 data. I may add other columns later Also strip the D in the regexp as well, as it is only useful for the initial lookup. Example input data(humongous file with lots of columns): ... ,Device Pool,CSS,forward 1,Directory Number 1, ... YART01-432-D098-00-1,CSS-bobville-1,12223041234,12228675309 BART-1435-C512-00-1,CSS-willis-3,12223041234,12228215486 HOMER-1435-A134-00-1,CSS-willis-2,12223041238,12228212345 VAR05-1435-D099-00-1,CSS-willis-2,12223041897,12228215486 POA01-4-D100-00-1,CSS-hmbrgr-chz,12223043151,12228843454 ... Tried a few different approaches to no avail. with findall, I'd have to add the other columns back I guess (doesn't seem very efficient). It was pulling the data but not the other associated columns pertaining to the row. I since dropped that direction. Surely there is a cleaner way, that might even drop the need to filter first. This is where I'm at: df1 = pd.read.csv(some_file.csv) dff1 = df1.filter(items=['Device Pool', 'Directory Number 1'])) df2 = dff1.loc[d1.iloc[:,0].str.contains('[D][0-9][0-9][0-9]', regex=True)] dff2 = # stuck here current example output: Device Pool Directory Number 1 YART01-432-D098-00-1 12228675309 VAR05-1435-D099-00-1 12228215486 POA01-4-D100-00-1 12228843454 ... desired example output: Device Pool Directory Number 1 098 12228675309 099 12228215486 100 12228843454 ... I'll be using these trimmed numbers to reference an address code csv, then pulling coordinates from geo location code, to then map. Pretty fun project really.
You can use df['Device Pool'] = df['Device Pool'].str.replace(r'.*-D(\d+).*', r'\1', regex=True) Or, with Series.str.extract: df['Device Pool'] = df['Device Pool'].str.extract(r'-D(\d+)', expand=False) See a Pandas test: import pandas as pd df = pd.DataFrame({'Device Pool':['YART01-432-D098-00-1', 'VAR05-1435-D099-00-1', 'POA01-4-D100-00-1'], 'Directory Number 1':['12228675309', '12228215486', '12228843454']}) df['Device Pool'].str.replace(r'.*-D(\d+).*', r'\1', regex=True) >>> df Device Pool Directory Number 1 0 098 12228675309 1 099 12228215486 2 100 12228843454 The .*-D(\d+).* regex matches .* - any zero or more chars other than line break chars as many as possible -D - a -D string (\d+) - Group 1: one or more digits .* - the rest of the line.
4
3
72,899,320
2022-7-7
https://stackoverflow.com/questions/72899320/subtract-time-only-from-two-datetime-columns-in-pandas
I am looking to do something like in this thread. However, I only want to subtract the time component of the two datetime columns. For eg., given this dataframe: ts1 ts2 0 2018-07-25 11:14:00 2018-07-27 12:14:00 1 2018-08-26 11:15:00 2018-09-24 10:15:00 2 2018-07-29 11:17:00 2018-07-22 11:00:00 The expected output for ts2 -ts1 time component only should give: ts1 ts2 ts_delta 0 2018-07-25 11:14:00 2018-07-27 12:14:00 1:00:00 1 2018-08-26 11:15:00 2018-09-24 10:15:00 -1:00:00 2 2018-07-29 11:17:00 2018-07-22 11:00:00 -0:17:00 So, for row 0: the time for ts2 is 12:14:00, the time for ts1 is 11:14:00. The expected output is just these two times subtracting (don't care about the days). In this case: 12:14:00 - 11:14:00 = 1:00:00. How would I do this in one single line?
You need to set both datetimes to a common date first. One way is to use pandas.DateOffset: o = pd.DateOffset(day=1, month=1, year=2022) # the exact numbers don't matter # reset dates ts1 = df['ts1'].add(o) ts2 = df['ts2'].add(o) # subtract df['ts_delta'] = ts2.sub(ts1) As one-liner: df['ts_delta'] = df['ts2'].add((o:=pd.DateOffset(day=1, month=1, year=2022))).sub(df['ts1'].add(o)) Other way using a difference between ts2-ts1 (with dates) and ts2-ts1 (dates only): df['ts_delta'] = (df['ts2'].sub(df['ts1']) -df['ts2'].dt.normalize().sub(df['ts1'].dt.normalize()) ) output: ts1 ts2 ts_delta 0 2018-07-25 11:14:00 2018-07-27 12:14:00 0 days 01:00:00 1 2018-08-26 11:15:00 2018-09-24 10:15:00 -1 days +23:00:00 2 2018-07-29 11:17:00 2018-07-22 11:00:00 -1 days +23:43:00 NB. don't get confused by the -1 days +23:00:00, this is actually the ways to represent -1hour
4
0
72,893,180
2022-7-7
https://stackoverflow.com/questions/72893180/flask-restful-error-request-content-type-was-not-application-json
I was following this tutorial and it was going pretty well. He then introduced reqparse and I followed along. I tried to test my code and I get this error {'message': "Did not attempt to load JSON data because the request Content-Type was not 'application/json'."} I don't know if I'm missing something super obvious but I'm pretty sure I copied his code exactly. here's the code: main.py from flask import Flask, request from flask_restful import Api, Resource, reqparse app = Flask(__name__) api = Api(app) #basic get and post names = {"sai": {"age": 19, "gender": "male"}, "bill": {"age": 23, "gender": "male"}} class HelloWorld(Resource): def get(self, name, numb): return names[name] def post(self): return {"data": "Posted"} api.add_resource(HelloWorld, "/helloworld/<string:name>/<int:numb>") # getting larger data pictures = {} class picture(Resource): def get(self, picture_id): return pictures[picture_id] def put(self, picture_id): print(request.form['likes']) pass api.add_resource(picture, "/picture/<int:picture_id>") # reqparse video_put_args = reqparse.RequestParser() # make new request parser object to make sure it fits the correct guidelines video_put_args.add_argument("name", type=str, help="Name of the video") video_put_args.add_argument("views", type=int, help="Views on the video") video_put_args.add_argument("likes", type=int, help="Likes on the video") videos = {} class Video(Resource): def get(self, video_id): return videos[video_id] def post(self, video_id): args = video_put_args.parse_args() print(request.form['likes']) return {video_id: args} api.add_resource(Video, "/video/<int:video_id>") if __name__ == "__main__": app.run(debug=True) test_rest.py import requests BASE = "http://127.0.0.1:5000/" response = requests.post(BASE + 'video/1', {"likes": 10}) print(response.json())
I don't know why you have an issue; as far as I can tell you copied his code exactly. Here's a fix that'll work, although I can't explain why his code works and yours doesn't. His video is two years old, so it could be deprecated behaviour. import requests BASE = "http://127.0.0.1:5000/" payload = {"likes": 10} response = requests.post(BASE + 'video/1', json=payload) print(response.json())
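If you would rather keep sending form data from the client as in the original test script, another option (an assumption, not part of the accepted answer) is to tell reqparse where to look via flask-restful's location argument:
    # hypothetical variation of the parser setup in main.py
    video_put_args.add_argument("likes", type=int, help="Likes on the video", location="form")
With that, the original requests.post(BASE + 'video/1', {"likes": 10}) call is parsed from request.form instead of the request body as JSON.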
9
6
72,881,807
2022-7-6
https://stackoverflow.com/questions/72881807/error-when-pip-installing-apache-flink-due-to-numpy
I'm trying to install Apache Flink with either python3 -m pip install apache-flink or pip3 install apache-flink, but both fail with an exit code 1 error: clang: error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly error: Command "clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -Werror=implicit-function-declaration -DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild/src.macosx-10.14-arm64-3.8/numpy/core/src/umath -Ibuild/src.macosx-10.14-arm64-3.8/numpy/core/src/npymath -Ibuild/src.macosx-10.14-arm64-3.8/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-10.14-arm64-3.8/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -Ibuild/src.macosx-10.14-arm64-3.8/numpy/core/src/common -Ibuild/src.macosx-10.14-arm64-3.8/numpy/core/src/npymath -Ibuild/src.macosx-10.14-arm64-3.8/numpy/core/src/common -Ibuild/src.macosx-10.14-arm64-3.8/numpy/core/src/npymath -c numpy/core/src/multiarray/alloc.c -o build/temp.macosx-10.14-arm64-3.8/numpy/core/src/multiarray/alloc.o -MMD -MF build/temp.macosx-10.14-arm64-3.8/numpy/core/src/multiarray/alloc.o.d -faltivec -I/System/Library/Frameworks/vecLib.framework/Headers" failed with exit status 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> numpy note: This is an issue with the package mentioned above, not pip. The output is quite long so I won't post the whole thing here, but if there's something in particular I should be looking for in there, any suggestions are welcome. I've also tried starting with numpy already installed in a hope that apache-flink would just use the installed version but that didn't help. pip3 --version -> pip 22.1.2 from /Users/sophier/Library/Python/3.8/lib/python/site-packages/pip (python 3.8) I'm on the new mac with the M1 chip incase that could be a problem.
PyFlink on an M1 Mac is not yet supported, but it will be from Flink 1.16 onwards; see https://issues.apache.org/jira/browse/FLINK-25188
3
5
72,899,058
2022-7-7
https://stackoverflow.com/questions/72899058/replace-values-from-a-dataframe-with-values-from-another-with-pandas
I have two dataframes with identical columns, but different values and different numbers of rows.

import pandas as pd

data1 = {'Region': ['Africa','Africa','Africa','Africa','Africa','Africa','Africa','Africa','Asia','Asia','Asia','Asia'],
         'Country': ['South Africa','South Africa','South Africa','South Africa','South Africa','South Africa','South Africa','South Africa','Japan','Japan','Japan','Japan'],
         'Product': ['ABC','ABC','ABC','ABC','XYZ','XYZ','XYZ','XYZ','DEF','DEF','DEF','DEF'],
         'Year': [2016, 2017, 2018, 2019, 2016, 2017, 2018, 2019, 2016, 2017, 2018, 2019],
         'Price': [500, 400, 0, 450, 750, 0, 0, 890, 500, 470, 0, 415]}

data2 = {'Region': ['Africa','Africa','Africa','Africa','Africa','Africa','Asia','Asia'],
         'Country': ['South Africa','South Africa','South Africa','South Africa','South Africa','South Africa','Japan','Japan'],
         'Product': ['ABC','ABC','ABC','ABC','XYZ','XYZ','DEF','DEF'],
         'Year': [2016, 2017, 2018, 2019, 2016, 2017, 2016, 2017],
         'Price': [200, 100, 30, 750, 350, 120, 400, 370]}

df = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)

df is the complete dataset but with some old values, whereas df2 only has the updated values. I want to replace the values in df with the values from df2 wherever they exist, while keeping the values from df that aren't in df2. So for example, in df, the value for Country = Japan, Product = DEF, Year = 2016 should have its Price updated from 500 to 400, and likewise for 2017, while 2018 and 2019 stay the same.

So far I have the following code that doesn't seem to work:

common_index = ['Region', 'Country', 'Product', 'Year']
df = df.set_index(common_index)
df2 = df2.set_index(common_index)
df.update(df2, overwrite=True)

But this only updates df with the values from df2 and deletes everything else. Expected output should look like this:

data3 = {'Region': ['Africa','Africa','Africa','Africa','Africa','Africa','Africa','Africa','Asia','Asia','Asia','Asia'],
         'Country': ['South Africa','South Africa','South Africa','South Africa','South Africa','South Africa','South Africa','South Africa','Japan','Japan','Japan','Japan'],
         'Product': ['ABC','ABC','ABC','ABC','XYZ','XYZ','XYZ','XYZ','DEF','DEF','DEF','DEF'],
         'Year': [2016, 2017, 2018, 2019, 2016, 2017, 2018, 2019, 2016, 2017, 2018, 2019],
         'Price': [200, 100, 30, 750, 350, 120, 0, 890, 400, 370, 0, 415]}

df3 = pd.DataFrame(data3)

Any suggestions on how I can do this?
You can use merge and update:

df.update(df.merge(df2, on=['Region', 'Country', 'Product', 'Year'],
                   how='left', suffixes=('_old', None)))

NB. the update is in place.

output:

    Region       Country Product  Year  Price
0   Africa  South Africa     ABC  2016  200.0
1   Africa  South Africa     ABC  2017  100.0
2   Africa  South Africa     ABC  2018   30.0
3   Africa  South Africa     ABC  2019  750.0
4   Africa  South Africa     XYZ  2016  350.0
5   Africa  South Africa     XYZ  2017  120.0
6   Africa  South Africa     XYZ  2018    0.0
7   Africa  South Africa     XYZ  2019  890.0
8     Asia         Japan     DEF  2016  400.0
9     Asia         Japan     DEF  2017  370.0
10    Asia         Japan     DEF  2018    0.0
11    Asia         Japan     DEF  2019  415.0
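If you'd rather not modify df in place, an alternative sketch (assuming the four key columns uniquely identify a row) is to set them as index and let combine_first take df2's values where they exist, falling back to df everywhere else:

keys = ['Region', 'Country', 'Product', 'Year']

# df2 wins wherever a key combination exists in both frames;
# everything else comes from df
# note: the result is ordered by the sorted index keys, not df's original row order
df3 = (df2.set_index(keys)
          .combine_first(df.set_index(keys))
          .reset_index())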
3
4
72,895,097
2022-7-7
https://stackoverflow.com/questions/72895097/python-merging-3-different-dictionary-and-grouping-the-output
I have created 3 different dictionaries in Python; however, I believe these cannot be merged into one dictionary, e.g. NewDict, because they share the same keys, Name and Company.

NewDict1 = {'Name': 'John,Davies', 'Company': 'Google'}
NewDict2 = {'Name': 'Boris,Barry', 'Company': 'Microsoft'}
NewDict3 = {'Name': 'Humphrey,Smith', 'Company': 'Microsoft'}

I would like to group the above in such a way that my output is as below:

Google : John Davies
Microsoft : Boris Barry, Humphrey Smith

Any help will be really appreciated.
Use a defaultdict:

from collections import defaultdict

dicts = [NewDict1, NewDict2, NewDict3]

out = defaultdict(list)
for d in dicts:
    out[d['Company']].append(d['Name'])

dict(out)

output:

{'Google': ['John,Davies'], 'Microsoft': ['Boris,Barry', 'Humphrey,Smith']}

as printed string:

for k, v in out.items():
    print(f'{k}: {", ".join(v)}')

output:

Google: John,Davies
Microsoft: Boris,Barry, Humphrey,Smith
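If you'd rather avoid the import, a plain dict with setdefault does the same grouping (a small sketch, assuming the same three dictionaries):

out = {}
for d in [NewDict1, NewDict2, NewDict3]:
    # create the list the first time a company is seen, then append the name
    out.setdefault(d['Company'], []).append(d['Name'])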
3
6
72,885,556
2022-7-6
https://stackoverflow.com/questions/72885556/smallest-i-with-1-i-1-i1
Someone reverse-sorted by 1/i instead of the usual -i and it made me wonder: What is the smallest positive integer case where that fails*? I think it must be where two consecutive integers i and i+1 have the same reciprocal float. The smallest I found is i = 6369051721119404:

i = 6369051721119404
print(1/i == 1/(i+1))

import math
print(math.log2(i))

Output (Try it online!):

True
52.50000001100726

Note that it's only a bit larger than 2^52.5 (which I only noticed after finding the number in other ways), and 52 is the number of mantissa bits stored by a float, so maybe that is meaningful. So:

What is the smallest where it fails? Is it the one I found? And is there a meaningful explanation for why?

* Meaning it fails to sort correctly; for example sorted([6369051721119404, 6369051721119405], key=lambda i: 1/i) doesn't reverse that list, as both key values are the same.
Suppose i is between 2^n and 2^(n+1) for some n. Then 1/i is between 2^(-n-1) and 2^-n. Its representation as double-precision floating point is 1.xxx...xxx * 2^(-n-1), where there are 52 x's. The smallest difference that can be expressed at that magnitude is 2^-52 * 2^(-n-1) = 2^(-n-53).

1/i and 1/(i+1) may get rounded to the same number if the difference between them is at most 2^(-n-53). Solving for i: 1/i - 1/(i+1) = 2^(-n-53) ==> i(i+1) = 2^(n+53). The solution is approximately i = 2^((n+53) / 2). This matches our range for i if n = 52. Then i = 2^52.5.

Starting at this value of i there is a possibility to get the same value for 1/i and 1/(i+1). But it won't happen for every i as it depends on how the numbers are rounded. We can start searching at that point, and as you discovered shortly after that we'll find the first occurrence.

Note: We also need to rule out n=51 as it seems to have solutions that fall in range. However, the only integer i that falls in the range 2^51 to 2^52 and is at least the minimal value calculated above is 2^52 itself, which can be ruled out. For larger values we need to switch to n=52 as above.
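A brute-force check of this argument (just a sketch — the linear scan runs tens of millions of iterations, so expect it to take on the order of a minute in CPython):

import math

# collisions can only start around 2**52.5, so scan upward from floor(2**52.5)
i = math.isqrt(2 ** 105)   # floor(2**52.5)
while 1 / i != 1 / (i + 1):
    i += 1
print(i)   # 6369051721119404, matching the value in the question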
4
4
72,887,988
2022-7-6
https://stackoverflow.com/questions/72887988/why-is-underscore-not-a-valid-name-in-new-python-match
_ (underscore) can be used as a variable name anywhere in Python, such as:

_ = 10
print(_)

However, it is not accepted here:

d = dict(john=10, owen=12, jenny=13)

match d:
    case {'john': 10, 'jenny': _}:
        print('does not work', _)

ERROR:

    print('does not work', _)
NameError: name '_' is not defined

Yet it is perfectly fine to use as follows:

d = dict(john=10, owen=12, jenny=13)

match d:
    case {'john': 10, 'jenny': a}:
        print('does not work', a)

Why is _ not a valid variable name in the new match statement in Python 3.10?
In a match statement, _ is a wildcard pattern. It matches anything without binding any names, so you can use it multiple times in the same case without having to come up with a bunch of different names for multiple values you don't care about.
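A small sketch of the difference: the wildcard can be repeated freely but binds nothing, while a capture pattern binds a name you can use afterwards.

d = dict(john=10, owen=12, jenny=13)

match d:
    # _ matches both values but does not bind any name
    case {'owen': _, 'jenny': _}:
        print('owen and jenny are present')

match d:
    # jenny_age is a capture pattern, so the matched value is bound and usable
    case {'jenny': jenny_age}:
        print('jenny is', jenny_age)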
4
5
72,883,838
2022-7-6
https://stackoverflow.com/questions/72883838/cant-connect-postgresql-database-to-fastapi
So, hi. Everything works with SQLite, but when I try to add PostgreSQL according to the user's guide on FastAPI, nothing works and I get:

sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) invalid dsn: invalid connection option "check_same_thread"

My database.py is:

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

# SQLALCHEMY_DATABASE_URL = "sqlite:///./sql_app.db"
SQLALCHEMY_DATABASE_URL = "postgresql://user:password@postgresserver/db"

engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

Base = declarative_base()
check_same_thread is an argument specific to SQLite. As you've specified a Postgres URL, you can remove that argument and you should have no issue creating an engine, i.e.:

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

SQLALCHEMY_DATABASE_URL = "postgresql://user:password@postgresserver/db"

engine = create_engine(SQLALCHEMY_DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

Base = declarative_base()
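A typical next step, sketched along the lines of the FastAPI SQL tutorial (the /ping-db route here is just an illustration, not part of your code), is a session-per-request dependency built on SessionLocal:

from fastapi import Depends, FastAPI
from sqlalchemy.orm import Session

app = FastAPI()

def get_db():
    # open a session for the request and always close it afterwards
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.get("/ping-db")
def ping_db(db: Session = Depends(get_db)):
    # any query made through db runs against the Postgres engine configured above
    return {"ok": True}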
10
24
72,883,284
2022-7-6
https://stackoverflow.com/questions/72883284/splitting-str-in-list
I'm trying to write a function that returns a list of lists with all possible combinations. It's supposed to return this:

[['Moscow', 'Oslo', 'Boston', 'Berlin'], ['Moscow', 'Oslo', 'Sydney', 'Berlin'], ['Moscow', 'Paris', 'Boston', 'Berlin'], ['Moscow', 'Paris', 'Sydney', 'Berlin']]

but when I call the function

pathway('Moscow', [['Oslo', 'Paris'], ['Boston', 'Sydney']], 'Berlin')

I get this:

[['Moscow', 'Oslo, Boston', 'Berlin'], ['Moscow', 'Oslo, Sydney', 'Berlin'], ['Moscow', 'Paris, Boston', 'Berlin'], ['Moscow', 'Paris, Sydney', 'Berlin']]

Here is my function:

def pathway(city_from, city_array, city_to):
    paths = []
    cities = itertools.product(*city_array)
    cities = [', '.join(map(str, x)) for x in cities]
    for i in cities:
        i = str(i)
        path = city_from, i, city_to
        paths.append(list(path))
    return paths

How can it be fixed?
This will do precisely what you specified:

def pathway(city_from, city_array, city_to):
    return [[city_from, *c, city_to] for c in itertools.product(*city_array)]
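A quick check with the call from the question (assuming itertools is imported):

import itertools

print(pathway('Moscow', [['Oslo', 'Paris'], ['Boston', 'Sydney']], 'Berlin'))
# [['Moscow', 'Oslo', 'Boston', 'Berlin'],
#  ['Moscow', 'Oslo', 'Sydney', 'Berlin'],
#  ['Moscow', 'Paris', 'Boston', 'Berlin'],
#  ['Moscow', 'Paris', 'Sydney', 'Berlin']]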
4
4