question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
74,199,437 | 2022-10-25 | https://stackoverflow.com/questions/74199437/why-is-jaxs-split-so-slow-at-first-call | jax.numpy.split can be used to segment an array into equal-length segments with a remainder in the last element. e.g. splitting an array of 5000 elements into segments of 10: array = jnp.ones(5000) segment_size = 10 split_indices = jnp.arange(segment_size, array.shape[0], segment_size) segments = jnp.split(array, split_indices) This takes around 10 seconds to execute on Google Colab and on my local machine. This seems unreasonable for such a simple task on a small array. Am I doing something wrong to make this slow? Further Details (JIT caching, maybe?) Subsequent calls to .split are very fast, provided an array of the same shape and the same split indices. e.g. the first iteration of the following loop is extremely slow, but all others fast. (11 seconds vs 40 milliseconds) from timeit import default_timer as timer import jax.numpy as jnp array = jnp.ones(5000) segment_size = 10 split_indices = jnp.arange(segment_size, array.shape[0], segment_size) for k in range(5): start = timer() segments = jnp.split(array, split_indices) end = timer() print(f'call {k}: {end - start:0.2f} s') Output: call 0: 11.79 s call 1: 0.04 s call 2: 0.04 s call 3: 0.05 s call 4: 0.04 s I assume that the subsequent calls are faster because JAX is caching jitted versions of split for each combination of arguments. If that's the case, then I assume split is slow (on its first such call) because of compilation overhead. Is that true? If yes, how should I split a JAX array without incurring the performance hit? | This is slow because there are tradeoffs in the implementation of split(), and your function happens to be on the wrong side of the tradeoff. There are several ways to compute slices in XLA, including XLA:Slice (i.e. lax.slice), XLA:DynamicSlice (i.e. lax.dynamic_slice), and XLA:Gather (i.e. lax.gather). The main difference between these concerns whether the start and ending indices are static or dynamic. Static indices essentially mean you're specializing your computation for specific index values: this incurs some small compilation overhead on the first call, but subsequent calls can be very fast. Dynamic indices, on the other hand, don't include such specialization, so there is less compilation overhead, but each execution takes slightly longer. You may be able to guess where this is going... jnp.split currently is implemented in terms of lax.slice (see code), meaning it uses static indices. This means that the first use of jnp.split will incur compilation cost proportional to the number of outputs, but repeated calls will execute very quickly. This seemed like the best approach for common uses of split, where a handful of arrays are produced. In your case, you're generating hundreds of arrays, so the compilation cost far dominates over the execution. To illustrate this, here are some timings for three approaches to the same array split, based on gather, slice, and dynamic_slice. You might wish to use one of these directly rather than using jnp.split if your program benefits from different implementations: from timeit import default_timer as timer from jax import lax import jax.numpy as jnp import jax def f_slice(x, step=10): return [lax.slice(x, (N,), (N + step,)) for N in range(0, x.shape[0], step)] def f_dynamic_slice(x, step=10): return [lax.dynamic_slice(x, (N,), (step,)) for N in range(0, x.shape[0], step)] def f_gather(x, step=10): step = jnp.asarray(step) return [x[N: N + step] for N in range(0, x.shape[0], step)] def time(f, x): print(f.__name__) for k in range(5): start = timer() segments = jax.block_until_ready(f(x)) end = timer() print(f' call {k}: {end - start:0.2f} s') x = jnp.ones(5000) time(f_slice, x) time(f_dynamic_slice, x) time(f_gather, x) Here's the output on a Colab CPU runtime: f_slice call 0: 7.78 s call 1: 0.05 s call 2: 0.04 s call 3: 0.04 s call 4: 0.04 s f_dynamic_slice call 0: 0.15 s call 1: 0.12 s call 2: 0.14 s call 3: 0.13 s call 4: 0.16 s f_gather call 0: 0.55 s call 1: 0.54 s call 2: 0.51 s call 3: 0.58 s call 4: 0.59 s You can see here that static indices (lax.slice) lead to the fastest execution after compilation. However, for generating many slices, dynamic_slice and gather avoid repeated compilations. It may be that we should re-implement jnp.split in terms of dynamic_slice, but that wouldn't come without tradeoffs: for example, it would lead to a slowdown in the (possibly more common?) case of few splits, where lax.slice would be faster on both initial and subsequent runs. Also, dynamic_slice only avoids recompilation if each slice is the same size, so generating many slices of varying sizes would incur a large compilation overhead similar to lax.slice. These kinds of tradeoffs are actively discussed in JAX development channels; a recent example very similar to this can be found in PR #12219. If you wish to weigh-in on this particular issue, I'd invite you to file a new jax issue on the topic. A final note: if you're truly just interested in generating equal-length sequential slices of an array, you would be much better off just calling reshape: out = x.reshape(len(x) // 10, 10) The result is now a 2D array where each row corresponds to a slice from the above functions, and this will far out-perform anything that's generating a list of array slices. | 3 | 5 |
74,199,117 | 2022-10-25 | https://stackoverflow.com/questions/74199117/trying-to-web-scrape-text-from-a-table-on-a-website | I am a novice at this, but I've been trying to scrape data on a website (https://awards.decanter.com/DWWA/2022/search/wines?competitionType=DWWA) but I keep coming up empty. I've tried BeautifulSoup and Scrapy but I can't get the text out. Eventually I want to get the row of each individual wine in the table into a dataframe/csv (from all pages) but currently I can't even get the first wine producer name. If you inspect the webpage all the details are in tags with no id or class. My BeautifulSoup attempt URL = 'https://awards.decanter.com/DWWA/2022/search/wines?competitionType=DWWA' headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) \ Chrome/106.0.0.0 Safari/537.36 Edg/106.0.1370.52"} page = requests.get(URL, headers=headers) soup = BeautifulSoup(page.content, "html.parser") soup2 = soup.prettify() producer = soup2.find_all('td').get_text() print(producer) Which is throwing the error: producer = soup2.find_all('td').get_text() AttributeError: 'str' object has no attribute 'find_all' My Scrapy attempt winedf = pd.DataFrame() class WineSpider(scrapy.Spider): name = 'wine_spider' def start_requests(self): dwwa_url = "https://awards.decanter.com/DWWA/2022/search/wines?competitionType=DWWA" yield scrapy.Request(url=dwwa_url, callback=self.parse_front) def parse_front(self, response): table = response.xpath('//*[@id="root"]/div/div[2]/div[4]/div[2]/table') page_links = table.xpath('//*[@id="root"]/div/div[2]/div[4]/div[2]/div[2]/div[1]/ul/li[3]/a(@class,\ "dwwa-page-link") @href') links_to_follow = page_links.extract() for url in links_to_follow: yield response.follow(url=url, callback=self.parse_pages) def parse_pages(self, response): wine_name = Selector(response=response).xpath('//*[@id="root"]/div/div[2]/div[4]/div[2]/table/tbody/\ tr[1]/td[1]/text()').get() wine_name_ext = wine_name.extract().strip() winedf.append(wine_name_ext) medal = Selector(response=response).xpath('//*[@id="root"]/div/div[2]/div[4]/div[2]/table/tbody/tr[1]/\ td[4]/text()').get() medal_ext = medal.extract().strip() winedf.append(medal_ext) Which produces and empty df. Any help would be greatly appreciated. Thank you! | Try: import pandas as pd url = "https://decanterresultsapi.decanter.com/api/DWWA/2022/wines/search?competitionType=DWWA" df = pd.read_json(url) # print last items in df: print(df.tail().to_markdown()) Prints: producer name id competition award score country region subRegion vintage color style priceBandLetter competitionYear competitionType 14853 Telavi Wine Cellar Marani 718257 DWWA 2022 7 86 Georgia Kakheti Kindzmarauli 2021 Red Still - Medium (between 19 and 44 g/L residual sugar) B 2022 DWWA 14854 Štrigova Muškat Žuti 716526 DWWA 2022 7 87 Croatia Continental Zagorje - Međimurje 2021 White Still - Medium (between 19 and 44 g/L residual sugar) C 2022 DWWA 14855 Kopjar Muscat žUti 717754 DWWA 2022 7 86 Croatia Continental Zagorje - Međimurje 2021 White Still - Medium (between 19 and 44 g/L residual sugar) C 2022 DWWA 14856 Cleebronn-Güglingen Blanc De Noir Fein & Fruchtig 719836 DWWA 2022 7 87 Germany Württemberg Not Applicable 2021 White Still - Medium (between 19 and 44 g/L residual sugar) B 2022 DWWA 14857 Winnice Czajkowski Thoma 8 Grand Selection 719891 DWWA 2022 6 90 Poland Not Applicable Not Applicable 2021 White Still - Medium (between 19 and 44 g/L residual sugar) D 2022 DWWA | 3 | 1 |
74,194,721 | 2022-10-25 | https://stackoverflow.com/questions/74194721/how-to-create-incrementing-group-column-counter | Consider the following data set: How can I generate the expected value, ExpectedGroup such that the same value exists when True, but changes and increments by 1, when we run into a False statement in case_id. df = pd.DataFrame([ ['A', 'P', 'O', 2, np.nan], ['A', 'O', 'O', 5, 1], ['A', 'O', 'O', 10, 1], ['A', 'O', 'P', 4, np.nan], ['A', 'P', 'P', 300, np.nan], ['A', 'P', 'O', 2, np.nan], ['A', 'O', 'O', 5, 2], ['A', 'O', 'O', 10, 2], ['A', 'O', 'P', 4, np.nan], ['A', 'P', 'P', 300, np.nan], ['B', 'P', 'O', 2, np.nan], ['B', 'O', 'O', 5, 3], ['B', 'O', 'O', 10, 3], ['B', 'O', 'P', 4, np.nan], ['B', 'P', 'P', 300, np.nan], ], columns = ['ID', 'FromState', 'ToState', 'Hours', 'ExpectedGroup']) # create boolean mask df['case_id'] = ( (df.FromState == 'O') & (df.ToState == 'O') ) 0 False 1 True 2 True 3 False 4 False 5 False 6 True 7 True 8 False 9 False 10 False 11 True 12 True 13 False 14 False Name: case_id, dtype: boo # but how to get incrementing groups? np.where(df['case_id'] != False, df['case_id'].cumsum(), np.nan) | You can use diff to select only the first item of each stretch of True: df['ExpectedGroup'] = (df['case_id'].diff() &df['case_id'] ).cumsum().where(df['case_id']) If you don't want the intermediate column: s = (df.FromState == 'O') & (df.ToState == 'O') # or # s = df[['FromState', 'ToState']].eq('O').all(axis=1) df['ExpectedGroup'] = (s.diff()&s).cumsum().where(s) # or # df.loc[s, 'ExpectedGroup'] = (s.diff()&s).cumsum() Output: ID FromState ToState Hours ExpectedGroup case_id 0 A P O 2 NaN False 1 A O O 5 1.0 True 2 A O O 10 1.0 True 3 A O P 4 NaN False 4 A P P 300 NaN False 5 B P O 2 NaN False 6 B O O 5 2.0 True 7 B O O 10 2.0 True 8 B O P 4 NaN False 9 B P P 300 NaN False | 3 | 2 |
74,190,736 | 2022-10-25 | https://stackoverflow.com/questions/74190736/how-to-refer-to-hierachical-column-in-a-pandas-query | # hierarchical indices and columns index = pd.MultiIndex.from_product([[2013, 2014], [1, 2]], names=['year', 'visit']) columns = pd.MultiIndex.from_product([['Bob', 'Guido', 'Sue'], ['HR', 'Temp']], names=['subject', 'type']) # mock some data data = np.round(np.random.randn(4, 6), 1) data[:, ::2] *= 10 data += 37 # create the DataFrame health_data = pd.DataFrame(data, index=index, columns=columns) health_data How can I get all rows that are using pd.query: Bob HR > 35 any subjects HR > 35 If this is not possible with pd.query at the moment, this would be the accepted answer. | You can't use MultiIndexes with query. (Well, you can to some limit use health_data.query('@health_data.Bob.HR > 35'), but this has many flaws, and you won't be able to do this with a sublevel only). You can use xs instead to select your levels: # is Bod/HR > 35? m1 = health_data[('Bob', 'HR')].gt(35) # is any HR > 35? m2 = health_data.xs('HR', level='type', axis=1).gt(35).any(axis=1) # keep rows with both conditions out = health_data.loc[m1&m2] output: subject Bob Guido Sue type HR Temp HR Temp HR Temp year visit 2013 1 42.0 38.0 45.0 37.7 33.0 36.9 2 48.0 38.2 42.0 36.2 21.0 35.9 2014 2 50.0 39.2 44.0 36.0 36.0 35.8 | 3 | 4 |
74,190,554 | 2022-10-25 | https://stackoverflow.com/questions/74190554/what-does-show-caches-and-adaptive-arguments-do-in-dis-dis-function | Python 3.11 introduces new two parameters to the dis.dis function, show_caches and adaptive. >>> import dis >>> >>> help(dis.dis) Help on function dis in module dis: dis(x=None, *, file=None, depth=None, show_caches=False, adaptive=False) Disassemble classes, methods, functions, and other compiled objects. With no argument, disassemble the last traceback. Compiled objects currently include generator objects, async generator objects, and coroutine objects, all of which store their code object in a special attribute. What does this parameters means in python 3.11?. I did check the result by setting it to True but the result remains same as like setting it to False. >>> dis.dis("a = 1", show_caches=True, adaptive=True) 0 0 RESUME 0 1 2 LOAD_CONST 0 (1) 4 STORE_NAME 0 (a) 6 LOAD_CONST 1 (None) 8 RETURN_VALUE >>> >>> >>> dis.dis("a = 1") 0 0 RESUME 0 1 2 LOAD_CONST 0 (1) 4 STORE_NAME 0 (a) 6 LOAD_CONST 1 (None) 8 RETURN_VALUE | The show_caches argument is described at the very top of dis documentation since it is possible to pass it to many of the module's functions: Changed in version 3.11: Some instructions are accompanied by one or more inline cache entries, which take the form of CACHE instructions. These instructions are hidden by default, but can be shown by passing show_caches=True to any dis utility. adaptive is described under CACHE documentation: Rather than being an actual instruction, this opcode is used to mark extra space for the interpreter to cache useful data directly in the bytecode itself. It is automatically hidden by all dis utilities, but can be viewed with show_caches=True. Logically, this space is part of the preceding instruction. Many opcodes expect to be followed by an exact number of caches, and will instruct the interpreter to skip over them at runtime. Populated caches can look like arbitrary instructions, so great care should be taken when reading or modifying raw, adaptive bytecode containing quickened data. New in version 3.11. | 8 | 4 |
74,189,407 | 2022-10-25 | https://stackoverflow.com/questions/74189407/pandas-numpy-multiple-if-statement-with-and-or-operators | I have quite a complex If statement that I would like to add as a column in my pandas dataframe. In the past I've always used numpy.select for this type of problem, however I wouldn't know how to achieve that with a multi-line if statement. I was able to get this in Excel: =IF(sum1=3,IF(AND(col1=col2,col2=col3),0,1),IF(sum1=2,IF(OR(col1=col2,col2=col3,col1=col3),0,1),IF(sum1=1,0,1))) and write it in Python just as a regular multi-line 'if statement', just want to find out if there is a far cleaner way of presenting this. if df['sum1'] == 3: if df['col1'] == df['col2'] and df['col2'] == df['col3']: df['verify_col'] = 0 else: df['verify_col'] = 1 elif df['sum1'] == 2: if df['col1'] == df['col2'] or df['col2'] == df['col3'] or df['col1'] == df['col3']: df['verify_col'] = 0 else: df['verify_col'] = 1 elif df['sum1'] == 1: df['verify_col'] = 0 else: df['verify_col'] = 1 Here's some sample data: df = pd.DataFrame({ 'col1': ['BMW', 'Mercedes Benz', 'Lamborghini', 'Ferrari', null], 'col2': ['BMW', 'Mercedes Benz', null, null, 'Tesla'], 'col3': ['BMW', 'Mercedes', 'Lamborghini', null, 'Tesla_'], 'sum1': [3, 3, 2, 1, 2] }) I want a column which has the following results: 'verify_col': [0, 1, 0, 0, 1] It basically checks whether the columns match for those that have values in them and assigns a 1 or a 0 for each row. 1 meaning they are different, 0 meaning zero difference. | Use numpy.where with chain mask with | for bitwise OR - if no match any conditions is created 1: m1 = (df['sum1'] == 3) m2 = (df['col1'] == df['col2']) & (df['col2'] == df['col3']) m3 = (df['sum1'] == 2) m4 = (df['col1'] == df['col2']) | (df['col2'] == df['col3']) | (df['col1'] == df['col3']) m5 = df['sum1'] == 1 df['verify_col'] = np.where((m1 & m2) | (m3 & m4) | m5, 0, 1) If need None if no match any conditions: df['verify_col'] = np.select([(m1 & m2) | (m3 & m4) | m5, (m1 & ~m2) | (m3 & ~m4) | ~m5], [0,1], default=None) print (df) col1 col2 col3 sum1 verify_col 0 BMW BMW BMW 3 0 1 Mercedes Benz Mercedes Benz Mercedes 3 1 2 Lamborghini NaN Lamborghini 2 0 3 Ferrari NaN NaN 1 0 4 NaN Tesla Tesla_ 2 1 | 4 | 4 |
74,186,833 | 2022-10-24 | https://stackoverflow.com/questions/74186833/how-to-get-the-schema-from-parquet-in-python-apache-beam | I currently have an apache-beam pipeline in Python in which I'm reading parquet, converting it to a dataframe to do some pandas cleaning, and then converting back to parquet where I'd like to then write the file. It looks like this: with beam.Pipeline(options=pipeline_options) as p: dataframes = p \ | 'Read' >> beam.io.ReadFromParquetBatched(known_args.input) \ | 'Convert to pandas' >> beam.Map(lambda table: table.to_pandas()) \ | 'Process df' >> beam.ParDo(ProcessDataFrame()) \ | 'Convert to parquet' >> beam.Map(lambda table: table.to_parquet()) \ | 'Write to parquet' >> beam.io.WriteToParquet(known_args.output) However this returns an error as expected because I'm missing the schema as an argument to WriteToParquet. Traceback (most recent call last): File "/Users/kgallatin/dataflow/example.py", line 75, in <module> main() File "/Users/kgallatin/dataflow/example.py", line 70, in main | 'Write to parquet' >> beam.io.WriteToParquet(known_args.output) TypeError: __init__() missing 1 required positional argument: 'schema' I have 100-1000s of columns that potentially will change over the lifetime of this pipeline, so I'd like to avoid manually writing them all in the pyarrow format as described here. When I print the parquet from the previous step I can see a pandas schema and some binary pyarrow stuff - is there a way to extract the schema from the parquet at this step for use? | Your codetable.to_parquet() returns serialized bytes of all of the records in Parquet format. Currently, WriteToParquet() need a manually given fixed schema. So, you have to implement your own pipeline(PTransform) if you want to generate a schema automatically. If you don't need a parallelism(such as sharding), you can do like the following using the low level API. ... import pyarrow as pa from apache_beam.io.filesystems import FileSystems from apache_beam.io.filesystem import CompressionTypes ... def write_dataframe_to_parquet(frame): table = pa.Table.from_pandas(frame) with FileSystems.create(known_args.output, mime_type='application/x-parquet', compression_type=CompressionTypes.UNCOMPRESSED, ) as f: pa.parquet.write_table(table, f) ... dataframes = p \ ... | 'Process df' >> beam.ParDo(ProcessDataFrame()) \ | 'Write to parquet' >> beam.Map(write_dataframe_to_parquet) | 3 | 3 |
74,188,468 | 2022-10-25 | https://stackoverflow.com/questions/74188468/python-how-to-resize-an-array-and-duplicate-the-elements | I've got an array with data like this a = [[1,2,3],[4,5,6],[7,8,9]] and I want to change it to b = [[1,1,2,2,3,3],[1,1,2,2,3,3],[4,4,5,5,6,6],[4,4,5,5,6,6],[7,7,8,8,9,9],[7,7,8,8,9,9]] I've tried to use numpy.resize() function but after resizing, it gives [[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]]. I can use a for loop to put the numbers at the indexes I need but just wondering if there is any easier way of doing that? To visualise the task, here is the original array This is what I want | My initial though was that np.tile would work but in fact what you are looking for is np.repeat twice on two different axes. Try this runnable example! #!/usr/bin/env python import numpy as np a = [[1,2,3],[4,5,6],[7,8,9]] b = np.repeat(np.repeat(a, 2, axis=1), 2, axis=0) b <script src="https://modularizer.github.io/pyprez/pyprez.min.js"></script> | 3 | 5 |
74,188,287 | 2022-10-25 | https://stackoverflow.com/questions/74188287/python-from-dataframes-a-and-b-use-datetime-of-a-to-find-a-value-in-b-based | Have two dataframes, and I am attempting to 'merge' with conditions. The idea being if df_B's DateTime is greater than ST's DateTime by less than 5 mins, then add the value of df_B 'MC' into a new column in df_A. If not, its a blank. df_A = {'SS': ['2022/10/23 9:58:08', '2022/10/23 9:58:08', '2022/10/23 23:07:17', '2022/10/23 23:07:17', '2022/10/24 2:06:48', '2022/10/24 2:06:48', '2022/10/24 5:31:44', '2022/10/24 5:31:44'], 'SC': [6764, 6764, 6778, 6778, 6782, 6782, 6787, 6787] } df_B = {'DateTime': ['2022/10/23 6:05:51', '2022/10/23 6:06:51', '2022/10/23 7:20:51', '2022/10/23 7:21:51', '2022/10/23 7:51:51', '2022/10/23 7:52:51', '2022/10/24 5:31:58', '2022/10/24 5:35:58'], 'MC': [871.71, 871.77, 871.78, 871.77, 871.78, 871.77, 866.90, 866.91] } df_A = pd.DataFrame(df_A) df_A['SS'] = pd.to_datetime(df_A['SS']) df_B = pd.DataFrame(df_B) df_B['DateTime'] = pd.to_datetime(df_B['DateTime']) Desired Result I have mostly tried pd.merge_asof() and converting the datatimes to indexes and then left/right_index True and direction 'forward'. df = pd.merge_asof(left=df_A, right=df_B, left_index=True, right_index=True, direction='forward') The problem is the merge inserts an MC values in every row based on the direction. And very rarely (or never) id get an exact datetime match). Kinda given up using merge. Should I just parse df_A datetime, check if df_B's datetime its greater than df_A datetime (but reject if greater than 5 mins), and then find the corresponding MC value? For this method, I would normally use the lambda function with apply. df_A.apply(lambda x: (x['SS'], x['SC']), axis=1) So how to incorporate df_B into the above, or should call a function with the method, the function do the processing? Hope makes sense. tks | I get that the rows of df_A and df_B are aligned and both dataframes have equal number of rows. In this case: Putting the 'on' columns into indexes is not necessary to merge dataframes. Instead you can use 'left_on' and 'right_on'. Use 'tolerance' along with the 'forward' direction for the required condition. Reset indexes into "index" columns in order to align the rows by "index" before the merge. BTW, make sure before the merge that Both DataFrames must be sorted by the key. https://pandas.pydata.org/docs/reference/api/pandas.merge_asof.html#pandas.merge_asof df_A.reset_index(inplace=True) df_B.reset_index(inplace=True) df = pd.merge_asof(df_A, df_B, left_on="SS", right_on="DateTime", by="index", direction="forward", tolerance=pd.Timedelta("5m")) index SS SC DateTime MC 0 0 2022-10-23 09:58:08 6764 NaT NaN 1 1 2022-10-23 09:58:08 6764 NaT NaN 2 2 2022-10-23 23:07:17 6778 NaT NaN 3 3 2022-10-23 23:07:17 6778 NaT NaN 4 4 2022-10-24 02:06:48 6782 NaT NaN 5 5 2022-10-24 02:06:48 6782 NaT NaN 6 6 2022-10-24 05:31:44 6787 2022-10-24 05:31:58 866.90 7 7 2022-10-24 05:31:44 6787 2022-10-24 05:35:58 866.91 | 4 | 1 |
74,175,111 | 2022-10-23 | https://stackoverflow.com/questions/74175111/how-to-import-a-pydantic-model-into-sqlmodel | I generated a Pydantic model and would like to import it into SQLModel. Since said model does not inherit from the SQLModel class, it is not registered in the metadata which is why SQLModel.metadata.create_all(engine) just ignores it. In this discussion I found a way to manually add models: SQLModel.metadata.tables["hero"].create(engine) But doing so throws a KeyError for me. SQLModel.metadata.tables["sopro"].create(engine) KeyError: 'sopro' My motivation for tackling the problem this way is that I want to generate an SQLModel from a simple dictionary like this: model_dict = {"feature_a": int, "feature_b": str} And in this SO answer, I found a working approach. Thank you very much in advance for your help! | As far as I know, it is not possible to simply convert an existing Pydantic model to an SQLModel at runtime. (At least as of now.) There are a lot of things that happen during model definition. There is a custom meta class involved, so there is no way that you can simply substitute a regular Pydantic model class for a real SQLModel class, short of manually monkeypatching all the missing pieces. That being said, you clarified that your actual motivation was to be able to dynamically create an SQLModel class at runtime from a dictionary of field definitions. Luckily, this is in fact possible. All you need to do is utilize the Pydantic create_model function and pass the correct __base__ and __cls_kwargs__ arguments: from pydantic import create_model from sqlmodel import SQLModel field_definitions = { # your field definitions here } Hero = create_model( "Hero", __base__=SQLModel, __cls_kwargs__={"table": True}, **field_definitions, ) With that, SQLModel.metadata.create_all(engine) should create the corresponding database table according to your field definitions. See this question for more details. Be sure to use correct form for the field definitions, as the example you gave would not be valid. As the documentation says, you need to define fields in the form of 2-tuples (or just a default value): model_dict = { "feature_a": (int, ...), "feature_b": (str, ...), "feature_c": 3.14, } Hope this helps. | 3 | 3 |
74,126,228 | 2022-10-19 | https://stackoverflow.com/questions/74126228/how-can-i-develop-with-python-libraries-in-editable-mode-on-databricks | On Databricks, it is possible to install Python packages directly from a git repo, or from the dbfs: %pip install git+https://github/myrepo %pip install /dbfs/my-library-0.0.0-py3-none-any.whl Is there a way to enable a live package development mode, similar to the usage of pip install -e, such that the databricks notebook references the library files as is, and it's possible to update the library files on the go? E.g. something like %pip install /dbfs/my-library/ -e combined with a way to keep my-library up-to-date? Thanks! | I would recommend to adopt the Databricks Repos functionality that allows to import Python code into a notebook as a normal package, including the automatic reload of the code when Python package code changes. You need to add the following two lines to your notebook that uses the Python package that you're developing: %load_ext autoreload %autoreload 2 Your library is recognized as the Databricks Repos main folders are automatically added to sys.path. If your library is in a Repo subfolder, you can add it via: import os, sys sys.path.append(os.path.abspath('/Workspace/Repos/<username>/path/to/your/library')) This works for the notebook node, however not for worker nodes. P.S. You can see examples in this Databricks cookbook and in this repository. | 3 | 6 |
74,182,245 | 2022-10-24 | https://stackoverflow.com/questions/74182245/why-prevent-initial-call-does-not-stop-the-initial-call | I have the written code below. I have two dropdown menus that work based on chained callbacks. the first dropdown menu gets the datasets and reads the columns' names and updates the options in the second dropdown menu. Then, the parameters can be plotted on the chart. my dataframes look like this: df={'col1':[12,15,25,33,26,33,39,17,28,25], 'col2':[35,33,37,36,36,26,31,21,15,29], 'col3':['A','A','A','A','B','B','B','B','B','B'], 'col4':[1,2,3,4,5,6,7,8,9,10] I want to highlight the chart background depending on the categories in col3. I don't understand why when I select the dataset from the first dropdown menu the background color for col3 appears on the chart (before selecting the parameters). I have used Prevent_initial_call = True, but the second callback still triggers. import dash from dash import Dash, html, dcc, Output, Input, State, MATCH, ALL import plotly.express as px import pandas as pd import numpy as np import dash_bootstrap_components as dbc app = Dash(__name__) app.layout = html.Div([ html.Div(children=[ html.Button('add Chart', id='add-chart', n_clicks=0) ]), html.Div(id='container', children=[]) ]) @app.callback( Output('container', 'children'), [Input('add-chart', 'n_clicks'), Input({'type': 'remove-btn', 'index': ALL}, 'n_clicks')], [State('container', 'children')], prevent_initial_call=True ) def display_graphs(n_clicks, n, div_children): ctx = dash.callback_context triggered_id = ctx.triggered[0]['prop_id'].split('.')[0] elm_in_div = len(div_children) if triggered_id == 'add-chart': new_child = html.Div( id={'type': 'div-num', 'index': elm_in_div}, style={'width': '25%', 'display': 'inline-block', 'outline': 'none', 'padding': 5}, children=[ dbc.Container([ dbc.Row([ dbc.Col([dcc.Dropdown(id={'type': 'dataset-choice', 'index': n_clicks}, options=['dataset1'], clearable=True, value=[] )], width=6), dbc.Col([dcc.Dropdown(id={'type': 'feature-choice', 'index': n_clicks}, options=[], multi=True, clearable=True, value=[] )], width=6) ]), dbc.Row([ dbc.Col([dcc.Graph(id={'type': 'dynamic-graph','index': n_clicks}, figure={} )]) ]), dbc.Row([ dbc.Col([html.Button("Remove", id={'type': 'remove-btn', 'index': elm_in_div}) ]) ]), ]) ] ) div_children.append(new_child) return div_children if triggered_id != 'add-chart': for idx, val in enumerate(n): if val is not None: del div_children[idx] return div_children @app.callback( Output({'type': 'feature-choice', 'index': MATCH}, 'options'), [Input({'type': 'dataset-choice', 'index': MATCH}, 'value')], prevent_initial_call=True ) def set_dataset_options(chosen_dataset): if chosen_dataset is None: return dash.no_update else: path = 'C:/Users/pymnb/OneDrive/Desktop/test/' df = pd.read_csv(path + chosen_dataset+'.csv') features = df.columns.values[0:2] return features @app.callback( Output({'type': 'dynamic-graph', 'index': MATCH}, 'figure'), [Input({'type': 'dataset-choice', 'index': MATCH}, 'value'), Input({'type': 'feature-choice', 'index': MATCH}, 'value')], prevent_initial_call=True ) def update_graph(chosen_dataset1, chosen_feature): if chosen_feature is None: return dash.no_update if chosen_dataset1 is None: return dash.no_update path = 'C:/Users/pymnb/OneDrive/Desktop/test/' df = pd.read_csv(path + chosen_dataset1+'.csv') Xmin = df[chosen_feature].min().min() print(Xmin) Xmax = df[chosen_feature].max().max() # to find the height of y-axis(col4) col4_max = df['col4'].max() col4_min = df['col4'].min() fig1 = px.line(df, x=chosen_feature, y='col4') fig1.update_layout({'height': 600, 'legend': {'title': '', 'x': 0, 'y': 1.06, 'orientation': 'h'}, 'margin': {'l': 0, 'r': 20, 't': 50, 'b': 0}, 'paper_bgcolor': 'black', 'plot_bgcolor': 'white', } ) fig1.update_yaxes(range=[col4_max, col4_min], showgrid=False) fig1.update_xaxes(showgrid=False) categ_col3 = df.col3.dropna().unique() colors = ['#54FF9F', '#87CEFF'] for (i,j) in zip(categ_col3, colors): index_min = df.loc[df.col3 == i].index[0] index_max = df.loc[df.col3 == i].index[-1] if index_min == 0: cat_min = df['col4'][index_min] else: cat_min = df['col4'][index_min-1] cat_max = df['col4'][index_max] fig1.add_shape(type="rect", x0=Xmin, y0=cat_min, x1=Xmax, y1=cat_max, fillcolor=j, layer='below', opacity=0.5, ) return fig1 if __name__ == '__main__': app.run_server(debug=True) | You can fix it by modifying your code to the following: @app.callback( Output({'type': 'dynamic-graph', 'index': MATCH}, 'figure'), [Input({'type': 'dataset-choice', 'index': MATCH}, 'value'), Input({'type': 'feature-choice', 'index': MATCH}, 'value')], prevent_initial_call=True ) def update_graph(chosen_dataset1, chosen_feature): if (chosen_feature == []) or (chosen_dataset1 is None): #<--- correct the condition return dash.no_update else: #<---- add the else condition to prevent any update Xmin = df[chosen_feature].min().min() Xmax = df[chosen_feature].max().max() The reason behind that because all the elements are created on fly and they are not within the app.layout. Please read the following from the documentation: In other words, if the output of the callback is already present in the app layout before its input is inserted into the layout, prevent_initial_call will not prevent its execution when the input is first inserted into the layout. | 5 | 8 |
74,182,208 | 2022-10-24 | https://stackoverflow.com/questions/74182208/i-am-getting-valueerror-a-document-must-have-an-even-number-of-path-elements | I am trying to read data from excel and store the text data in firestore. When I try to add it one by one without for loop it is working but if I try to automate the process it is not working. Working Code: import firebase_admin from firebase_admin import credentials from firebase_admin import firestore cred = credentials.Certificate("certificate.json") firebase_admin.initialize_app(cred) db=firestore.client() name = 'Ashwin' date = '23/10/2022' roll = 'sampledata' cert = 'sampledata' def store_add(name, date,roll,cert): data = {'name':name, 'date':date,'roll':roll} db.collection('certs').document(cert).set(data) store_add(name,date,roll,cert) This is my code (Not Working): import pandas as pd import firebase_admin from firebase_admin import credentials from firebase_admin import firestore cred = credentials.Certificate("certificate.json") firebase_admin.initialize_app(cred) db=firestore.client() def store_add(name, date,roll,cert): data = {'name':name, 'date':date,'roll':roll} db.collection('certs').document(cert).set(data) df = pd.read_excel('data.xlsx') name_list = list(df['name']) cert = list(df['cert']) roll = list(df['roll']) date="24/10/2022" for i in range(len(name_list)): store_add(name_list[i],date,roll[i],cert[i]) print("Added:",name_list[i]) I am getting the following error: Traceback (most recent call last): File "e:\PSDC\Certificate-Generator\bulk.py", line 23, in <module> store_add(name_list[i],date,roll[i],cert[i]) File "e:\PSDC\Certificate-Generator\bulk.py", line 14, in store_add db.collection('certs').document(cert).set(data) File "C:\Users\ashwi\AppData\Local\Programs\Python\Python310\lib\site-packages\google\cloud\firestore_v1\base_collection.py", line 130, in document return self._client.document(*child_path) _init__ super(DocumentReference, self).__init__(*path, **kwargs) File "C:\Users\ashwi\AppData\Local\Programs\Python\Python310\lib\site-packages\google\cloud\firestore_v1\base_document.py", line 60, in __init__ _helpers.verify_path(path, is_collection=False) File "C:\Users\ashwi\AppData\Local\Programs\Python\Python310\lib\site-packages\google\cloud\firestore_v1\_helpers.py", line 150, in verify_path raise ValueError("A document must have an even number of path elements") ValueError: A document must have an even number of path elements Firestore Database Structure (Image) | As mentioned in the documentation, document IDs cannot contain a slash but in the provided screenshot there are 3. The value of cert is PSDC/2022/TCS/01 so the final path becomes certs/PSDC/2022/TCS/01 that has 5 segments i.e. a sub-collection. You can either replace the / with some other character like _: db.collection('certs').document(cert.replace("/", "_")).set(data) Alternatively, if the slashes are required, you can store the cert ID in a field in document data and use a random document ID. | 3 | 4 |
74,180,341 | 2022-10-24 | https://stackoverflow.com/questions/74180341/pythonic-way-to-vectorize-ifelse-over-list | What is the most pythonic way to reverse the elements of one list based on another equally-sized list? lista = [1,2,4] listb = ['yes', 'no', 'yep'] # expecting [-1, 2,-4] [[-x if y in ['yes','yep'] else x for x in lista] for y in listb] # yields [[-1, -2, -4], [1, 2, 4], [-1, -2, -4]] if listb[i] is yes or yep, result[i] should be the opposite of lista. Maybe a lambda function applied in list comprehension? | using zip? >>> [-a if b in ('yes','yep') else a for a,b in zip(lista, listb)] [-1, 2, -4] | 3 | 3 |
74,175,424 | 2022-10-23 | https://stackoverflow.com/questions/74175424/is-spacy-lemmatization-not-working-properly-or-does-it-not-lemmatize-all-words-e | When I run the spacy lemmatizer, it does not lemmatize the word "consulting" and therefore I suspect it is failing. Here is my code: nlp = spacy.load('en_core_web_trf', disable=['parser', 'ner']) lemmatizer = nlp.get_pipe('lemmatizer') doc = nlp('consulting') print([token.lemma_ for token in doc]) And my output: ['consulting'] | The spaCy lemmatizer is not failing, it's performing as expected. Lemmatization depends heavily on the Part of Speech (PoS) tag assigned to the token, and PoS tagger models are trained on sentences/documents, not single tokens (words). For example, parts-of-speech.info which is based on the Stanford PoS tagger, does not allow you to enter single words. In your case, the single word "consulting" is being tagged as a noun, and the spaCy model you are using deems "consulting" to be the appropriate lemma for this case. You'll see if you change your string instead to "consulting tomorrow", spaCy will lemmatize "consulting" to "consult" as it is tagged as a verb (see output from the code below). In short, I recommend not trying to perform lemmatization on single tokens, instead, use the model on sentences/documents as it was intended. As a side note: make sure you understand the difference between a lemma and a stem. Read this section provided on Wikipedia Lemma (morphology) page if you are unsure. import spacy nlp = spacy.load('en_core_web_trf', disable=['parser', 'ner']) doc = nlp('consulting') print([[token.pos_, token.lemma_] for token in doc]) # Output: [['NOUN', 'consulting']] doc_verb = nlp('Consulting tomorrow') print([[token.pos_, token.lemma_] for token in doc_verb]) # Output: [['VERB', 'consult'], ['NOUN', 'tomorrow']] If you really need to lemmatize single words, the second approach on this GeeksforGeeks Python lemmatization tutorial produces the lemma "consult". I've created a condensed version of it here for future reference in case the link becomes invalid. I haven't tested it on other single tokens (words) so it may not work for all cases. # Condensed version of approach #2 given in the GeeksforGeeks lemmatizer tutorial: # https://www.geeksforgeeks.org/python-lemmatization-approaches-with-examples/ import nltk from nltk.stem import WordNetLemmatizer nltk.download('averaged_perceptron_tagger') from nltk.corpus import wordnet # POS_TAGGER_FUNCTION : TYPE 1 def pos_tagger(nltk_tag): if nltk_tag.startswith('J'): return wordnet.ADJ elif nltk_tag.startswith('V'): return wordnet.VERB elif nltk_tag.startswith('N'): return wordnet.NOUN elif nltk_tag.startswith('R'): return wordnet.ADV else: return None lemmatizer = WordNetLemmatizer() sentence = 'consulting' pos_tagged = nltk.pos_tag(nltk.word_tokenize(sentence)) lemmatized_sentence = [] for word, tag in pos_tagged: lemmatized_sentence.append(lemmatizer.lemmatize(word, pos_tagger(tag))) print(lemmatized_sentence) # Output: ['consult'] | 4 | 4 |
74,174,413 | 2022-10-23 | https://stackoverflow.com/questions/74174413/add-last-value-from-list-in-column-a-into-list-in-column-b | I have the following data frame: df_test = pd.DataFrame({"f":['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], "d":['x', 'x', 'y', 'y', 'x', 'x', 'y', 'y'], "low": [0,5,2,4,5,10,4,8], "up": [5,10,4,6,10,15,8,12], "z": [1,3,6,2,3,7,5,10]}) and what I first have to do is to convert the columns 'low', 'up' and 'z' to list for each (grouped by) 'f' and 'd'. so this is what I did: dff = df_test.groupby(['f','d'])[['low', 'up', 'z']].agg(list).reset_index() and this is what I get: Now I want to extract the last value from the lists in column 'up' and add it to the lists in column 'low'. But this is unfortunately not working: dff['last'] = (dff['up'].apply(lambda x: x[-1])).tolist() dff['new'] = dff['low'].append(dff['last']) I get an error message "ValueError: cannot reindex from a duplicate axis". The column 'new' should have these values: [0,5,10], [2,4,6], [5,10,15], [4,8,12] any help is very much appreciated! | Another possible solution: dff['new'] = dff['low'] + pd.Series([[x[1]] for x in dff['up']]) Output: f d low up z new 0 a x [0, 5] [5, 10] [1, 3] [0, 5, 10] 1 a y [2, 4] [4, 6] [6, 2] [2, 4, 6] 2 b x [5, 10] [10, 15] [3, 7] [5, 10, 15] 3 b y [4, 8] [8, 12] [5, 10] [4, 8, 12] | 3 | 1 |
74,168,458 | 2022-10-23 | https://stackoverflow.com/questions/74168458/achieving-interface-without-inheritance-in-python | I have a sorted linked list class SortedLinkedList: # ... def insert(self, value: int): # ... if node.value > value: self.add_before(node, value) # ... I would like to generalize the type of values that a Node can hold from only ints to any object that overloads the > operator by implementing the __gt__() magic method. In other languages I would achieve this by using an Interface, but Python apparently has no analog. I've seen suggestions to fake interfaces by using abstract classes like class Sortable(ABC): @abstractmethod def __gt__(self, other) -> bool: pass class SortedLinkedList: # ... def insert(self, value: Sortable, node: Node): # ... The problem is this approach requires extending and using subclasses from Sortable, which means types that already have > functionality like integers cannot be used linkedlist.insert(5) # Pylance red squiggles Argument of type "Literal[5]" cannot be assigned to parameter "value" of type "Sortable" in function "insert" "Literal[5]" is incompatible with "Sortable" Pylance(reportGeneralTypeIssues) I understand that Interfaces are not necessary pre-runtime given Python's dynamic duck typing and implicit style. I am not a fan, and am opting to use available tooling like typing and Pylance to achieve a strictly typed developer experience. I am also not looking to use runtime checks like .hasattr(value, '__gt__'). I'm wanting this to register on the type system/language server/IDE level, as expressivity, readability, and IDE intellisense are the main benefits of strict typing. Is there any way to achieve this? | What you're looking for is typing.Protocol. class Sortable(Protocol): def __gt__(self, other) -> bool: ... With this, any class that defines __gt__ will be detected as an implicit subtype of Sortable. Note that in Pythons >= 3.8, you may need to to make other positional: class Sortable(Protocol): def __gt__(self, other, /) -> bool: ... This is because type checkers might not handle dunder-methods like __gt__ any different from a normal method and thus expects that the arguments might need to be callable by keyword. Adding the / tells the checkers that __gt__'s arguments won't ever need to be passed by keyword. Conversely, if an argument will only be passed by keyword, mark the protocol's method accordingly: class Something(Protocol): def method(self, *, name) -> str: ... | 6 | 8 |
74,172,806 | 2022-10-23 | https://stackoverflow.com/questions/74172806/how-to-generate-random-numbers-in-pre-defined-range-which-sums-up-to-a-fixed-num | I have a simple data generation question. I would request for any kind of help with the code in R or Python. I am pasting the table first. Total Num1_betw_1_to_4 Num2_betw_1_to_3 Num3_betw_1_to_3 9 3 3 3 7 1 3 3 9 4 3 2 9 3 3 3 5 2 2 1 7 3 2 2 9 3 3 3 7 2 3 2 5 6 2 4 9 In the above table, first column values are given. Now I want to generate 3 values in column 2, 3 and 4 which sum up to value in column 1 for each row. But each of the column 2, 3 and 4 have some predefined data ranges like: column 2 value must lie between 1 and 4, column 3 value must lie between 1 and 3, and, column 4 value must lie between 1 and 3. I have printed first 8 rows for your understanding. In real case, only "Total" column values will be given and remaining 3 columns will be blank for which values have to be generated. Any help would be appreciated with the code. | This is straightforward in R. First make a data frame of all possible allowed values of each column: df <- expand.grid(Num1_1_to_4 = 1:4, Num2_1_to_3 = 1:3, Num3_1_to_3 = 1:3) Now throw away any rows that don't sum to 7: df <- df[rowSums(df) == 7,] Finally, sample this data frame: df[sample(nrow(df), 1),] #> Num1_1_to_4 Num2_1_to_3 Num3_1_to_3 #> 19 3 2 2 | 3 | 2 |
74,169,861 | 2022-10-23 | https://stackoverflow.com/questions/74169861/attrs-convert-liststr-to-listfloat | Given the following scenario: import attrs @attrs.define(kw_only=True) class A: values: list[float] = attrs.field(converter=float) A(values=["1.1", "2.2", "3.3"]) which results in *** TypeError: float() argument must be a string or a real number, not 'list' Obviously it's due to providing the whole list to float, but is there a way to get attrs do the conversion on each element, without providing a custom converter function? | As far as I know, attrs doesn't have a built-in option to switch conversion or validation to "element-wise", the way Pydantic's validators have the each_item parameter. I know you specifically did not ask for a converter function, but I don't really see much of an issue in defining one that you can reuse as often as you need to. Here is one way to implement a converter for your specific case: from attrs import define, field from collections.abc import Iterable from typing import Any def float_list(iterable: Iterable[Any]) -> list[float]: return [float(item) for item in iterable] @define class A: values: list[float] = field(converter=float_list) if __name__ == '__main__': a = A(values=["1.1", "2.2", "3.3"]) print(a) It is not much of a difference to your example using converter=float. The output is of course A(values=[1.1, 2.2, 3.3]). You could even have your own generic converter factory for arbitrary convertible item types: from attrs import define, field from collections.abc import Callable, Iterable from typing import Any, TypeAlias, TypeVar T = TypeVar("T") ItemConv: TypeAlias = Callable[[Any], T] ListConv: TypeAlias = Callable[[Iterable[Any]], list[T]] def list_of(item_type: ItemConv[T]) -> ListConv[T]: def converter(iterable: Iterable[Any]) -> list[T]: return [item_type(item) for item in iterable] return converter @define class B: foo: list[float] = field(converter=list_of(float)) bar: list[int] = field(converter=list_of(int)) baz: list[bool] = field(converter=list_of(bool)) if __name__ == '__main__': b = B( foo=range(0, 10, 2), bar=["1", "2", 3.], baz=(-1, 0, 100), ) print(b) Output: B(foo=[0.0, 2.0, 4.0, 6.0, 8.0], bar=[1, 2, 3], baz=[True, False, True]) The only downside to that approach is that the mypy plugin for attrs (for some reason) can not handle this type of converter function and will complain, unless you add # type: ignore[misc] to the field definition in question. | 4 | 3 |
74,171,123 | 2022-10-23 | https://stackoverflow.com/questions/74171123/python-replacing-multiple-list-items-by-one | I want to write a calculator program. My goal is to replace multiple list items(with datatype str) by one(int). I've tried the .insert() method, but it creates a new inner list and places it at the end of the main list. Something like this: input_list = ['4','5','6','+','8','7','4'] #expected result output_list = [456, '+', 874] #actual result input_list = ['4','5','6','+','8','7','4',['4','5','6']] I also tried extend method and also without success. My code: num = "" start = "" for x in range(len(list)): if list[x].isdigit() == True: if start == "": start = x num += list[x] continue else: num += list[x] continue else: num = int(num) list.insert(num,list[start:x]) num = "" start = "" continue | You can use itertools.groupby and pass str.isdigit to its key. It will group the numbers together. from itertools import groupby input_list = ["4", "5", "6", "+", "8", "7", "4"] result = [] for k, g in groupby(input_list, key=str.isdigit): string = "".join(g) # It's True for numbers, False for operators. if k: number = int(string) result.append(number) else: result.append(string) print(result) output: [456, '+', 874] It's also possible with list-comprehension: from itertools import groupby input_list = ["4", "5", "6", "+", "8", "7", "4"] result = [ int("".join(g)) if k else "".join(g) for k, g in groupby(input_list, key=str.isdigit) ] print(result) Note: I intentionally wrote two "".join(g) inside the list-comprehension. Because I had to. Don't try using walrus because you won't get your expected result: result = [ int((s := "".join(g))) if k else s for k, g in groupby(input_list, key=str.isdigit) ] the (s := "".join(g)) part is only evaluated when the k is True, not always. That means if k is False, you'll get the previous value of s which is evaluated in the previous iteration which is '456'. | 3 | 6 |
74,170,810 | 2022-10-23 | https://stackoverflow.com/questions/74170810/pandas-how-to-sort-rows-based-on-particular-suffix-values | My Pandas data frame contains the following data reading from a csv file: id,values 1001-MAC, 10 1034-WIN, 20 2001-WIN, 15 3001-MAC, 45 4001-LINUX, 12 4001-MAC, 67 df = pd.read_csv('example.csv') df.set_index('id', inplace=True) I have to sort this data frame based on the id column order by given suffix list = ["WIN", "MAC", "LINUX"]. Thus, I would like to get the following output: id,values 1034-WIN, 20 2001-WIN, 15 1001-MAC, 10 3001-MAC, 45 4001-MAC, 67 4001-LINUX, 12 How can I do that? | Here is one way to do that: import pandas as pd df = pd.read_csv('example.csv') idx = df.id.str.split('-').str[1].sort_values(ascending=False).index df = df.loc[idx] df.set_index('id', inplace=True) print(df) | 3 | 4 |
74,129,732 | 2022-10-19 | https://stackoverflow.com/questions/74129732/proper-way-to-document-keys-in-typeddict | What is the proper way to document keys inside a TypedDict? I don't see much information inside PEP 589 – TypedDict. I can think of a few solutions: Document keys inside a docstring (is there a standard field to use here?): class Foo(TypedDict): """ I am a documented schema. Keys: key: This key has some implications needing documentation. """ key: int Document keys using inline comments inside the TypedDict class Foo(TypedDict): """I am a documented schema.""" # This key has some implications needing documentation key: int Is there an official approach to this? Anything specific that Sphinx may look for? | If we look at the class typing.TypedDict(dict) signature in the documentation we see it's a class, as the official docs start by saying: TypedDict declares a dictionary type that expects all of its instances to have a certain set of keys So documenting the TypedDict is straightforward because it also uses Class Definition Syntax each of the variables (meant to express key name and value type pairs) act as Class and Instance Variables. When it's said in the question: I can think of a few solutions: Document keys inside a docstring Document keys using inline comments inside the TypedDict You can't use type comments with the usual # type: comment syntax in a TypedDict definition. Otherwise the usual two alternatives of putting a docstring above each variable declaration, or documenting the members in the class docstring are valid. PEP 589 - Class-based Syntax The class body should only contain lines with item definitions of the form key: value_type, optionally preceded by a docstring. The syntax for item definitions is identical to attribute annotations, but there must be no initializer, and the key name actually refers to the string value of the key instead of an attribute name. Type comments cannot be used with the class-based syntax, for consistency with the class-based NamedTuple syntax. (Note that it would not be sufficient to support type comments for backwards compatibility with Python 2.7, since the class definition may have a total keyword argument, as discussed below, and this isn’t valid syntax in Python 2.7.) Instead, this PEP provides an alternative, assignment-based syntax for backwards compatibility, discussed in Alternative Syntax. Last but not least: (...) (is there a standard field to use here?) Using Google style docstrings with the napoleon extension the docstring section of choice would be Attributes:. But if you want to be more precise you can create a custom section using napoleon_custom_sections (here's an example post). I tried looking around but I couldn't find a TypedDict documented in any public API. Since there's isn't an established convention you can use what you prefer. (If your documentation explicitly shows class inheritance it's implicit to the type what the Attributes: section will mean.) | 5 | 2 |
74,167,295 | 2022-10-22 | https://stackoverflow.com/questions/74167295/add-the-missing-numbers-in-the-table-in-order | I need modify a csv file with pandas. I have the following table: Interface Description 1 Used 2 Used 3 Used 4 Used 6 Used 8 Used 12 Used 17 Used I need to match the "Interface" column with a range of 1, 20, complete the table with the missing numbers and place the word "free" in the "Description" column and order it like this: Interface Description 1 Used 2 Used 3 Used 4 Used 5 free 6 Used 7 free 8 Used 9 free 10 free 11 free 12 Used 13 free 14 free 15 free 16 free 17 Used 18 free 19 free 20 free | Use merge in combination with fillna df = pd.DataFrame({ 'Interface': [1, 2, 3, 4, 6, 8, 12, 17], 'Description': 'Used'}) df2 = pd.DataFrame({'Interface': range(1, 21)}).merge(df, how="left").fillna("free") | 4 | 4 |
74,152,013 | 2022-10-21 | https://stackoverflow.com/questions/74152013/importing-parquet-file-in-chunks-and-insert-in-duckdb | I am trying to load the parquet file with row size group = 10 into duckdb table in chunks. I am not finding any documents to support this. This is my work so on: see code import duckdb import pandas as pd import gc import numpy as np # connect to an in-memory database con = duckdb.connect(database='database.duckdb', read_only=False) df1 = pd.read_parquet("file1.parquet") df2 = pd.read_parquet("file2.parquet") # create the table "my_table" from the DataFrame "df1" con.execute("CREATE TABLE table1 AS SELECT * FROM df1") # create the table "my_table" from the DataFrame "df2" con.execute("CREATE TABLE table2 AS SELECT * FROM df2") con.close() gc.collect() Please help me load both the tables with parquet files with row group size or chunks. ALso, load the data to duckdb as chunks | df1 = pd.read_parquet("file1.parquet") This statement will read the entire parquet file into memory. Instead, I assume you want to read in chunks (i.e one row group after another or in batches) and then write the data frame into DuckDB. This is not possible as of now using pandas. You can use something like pyarrow (or fast parquet) to do this. Here is an example from pyarrow docs. iter_batches can be used to read streaming batches from a Parquet file. This can be used to read in batches, read certain row groups or even certain columns. import pyarrow.parquet as pq parquet_file = pq.ParquetFile('example.parquet') for i in parquet_file.iter_batches(batch_size=10): print("RecordBatch") print(i.to_pandas()) Above example simply reads 10 records at a time. You can further limit this to certain row groups or even certain columns like below. for i in parquet_file.iter_batches(batch_size=10, columns=['user_address'], row_groups=[0,2,3]): Hope this helps! | 4 | 5 |
74,162,027 | 2022-10-22 | https://stackoverflow.com/questions/74162027/plotnine-secondary-y-axis-dual-axes | I am using python's wonderful plotnine package. I would like to make a plot with dual y-axis, let's say Celsius on the left axis and Fahrenheit on the right. I have installed the latest version of plotnine, v0.10.1. This says the feature was added in v0.10.0. I tried to follow the syntax on how one might do this in R's ggplot (replacing 'dot' notation with underscores) as follows: import pandas as pd from plotnine import * df = pd.DataFrame({ 'month':('Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'), 'temperature':(26.0,25.8,23.9,20.3,16.7,14.1,13.5,15.0,17.3,19.7,22.0,24.2), }) df['month'] = pd.Categorical(df['month'], categories=('Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'), ordered=True) p = (ggplot(df, aes(x='month', y='temperature')) + theme_light() + geom_line(group=1) + scale_y_continuous( name='Celsius', sec_axis=sec_axis(trans=~.*1.8+32, name='Fahrenheit') ) ) p This didn't like the specification of the transformation, so I tried a few different options. Removing this altogether produces the error: NameError: name 'sec_axis' is not defined The documentation does not contain a reference for sec_axis, and searching for 'secondary axis' doesn't help either. How do you implement a secondary axis in plotnine? | This github issue thread that was mentioned in the question does not say in any way that the secondary axis feature has been implemented. It was added to v0.10.0 milestones list before it was released. Here, milestones list means a todo list of what was planned to be implemented before the version releases. However, upon the actual release, the changelog does not mention the secondary axis feature, which means that it was only planned to be implemented and was not actually implemented. Long story short, the planned feature didn't make it into development and release. So, I'm sorry to say that currently as of v0.10.0 and now v0.10.1 it seems that this feature isn't there yet in plotnine. | 3 | 3 |
74,159,352 | 2022-10-21 | https://stackoverflow.com/questions/74159352/how-do-i-subtract-from-every-secondi1-element-in-a-2d-array | Here is my 2d array: [['P1', 27], ['P2', 44], ['P3', 65], ['P4', 51], ['P5', 31], ['P6', 18], ['P7', 33]] I have created a loop to attach each appropriate label to its corresponding value within this 2d array. At the end of the loop I would like to subtract every value by a different number. For example: if I wanted to subtract the values by 5 [['P1', 22], ['P2', 39], ['P3', 60], ['P4', 46], ['P5', 26], ['P6', 13], ['P7', 28]] How do I do this while the array has both a str type and int? | This is a simple job for a list comprehension: >>> L = [['P1', 27], ['P2', 44], ['P3', 65], ['P4', 51], ['P5', 31], ['P6', 18], ['P7', 33]] >>> [[s, n - 5] for s, n in L] [['P1', 22], ['P2', 39], ['P3', 60], ['P4', 46], ['P5', 26], ['P6', 13], ['P7', 28]] Note that using a comprehension creates a new list and the original data will remain unmodified. If you want to modify the original in-place, it will be preferable to use a for-loop instead: >>> for sublist in L: ... sublist[1] -= 5 ... >>> L [['P1', 22], ['P2', 39], ['P3', 60], ['P4', 46], ['P5', 26], ['P6', 13], ['P7', 28]] | 4 | 5 |
74,157,935 | 2022-10-21 | https://stackoverflow.com/questions/74157935/getting-the-file-name-of-downloaded-video-using-yt-dlp | I'm intending to use yt-dlp to download a video and then cut the video down afterward using ffmpeg. But to be able to use ffmpeg I am going to have to know the name of the file that yt-dlp produces. I have read through their documentation but I can't seem to find a way of getting the file name back into my program. | Here are examples based on the docs. The numbers you mentioned (like .f399) are, I believe, temporary only and are eventually removed when the final file is merged. If you want to get the filename: import subprocess someFilename = subprocess.getoutput('yt-dlp --print filename https://www.youtube.com/something') # then pass someFilename to FFmpeg To use your own filename: subprocess.run('yt-dlp -o thisIsMyName https://www.youtube.com/something') # this will likely download a file named thisIsMyName.webm But if you are not sure of the file type/extension beforehand and just want to get that: someFileType = subprocess.getoutput('yt-dlp --print filename -o "%(ext)s" https://www.youtube.com/something') print(someFileType) It's not very efficient, but to help explain it: import subprocess someFileType = subprocess.getoutput('yt-dlp --print filename -o "%(ext)s" https://www.youtube.com/something') subprocess.run('yt-dlp -o "myFileName.%(ext)s" https://www.youtube.com/something') subprocess.run(f'ffmpeg -i "myFileName.{someFileType}" outputFile.mp4') | 13 | 8 |
74,143,732 | 2022-10-20 | https://stackoverflow.com/questions/74143732/customize-legend-labels-in-geopandas | I would like to customize the labels on the geopandas plot legend. fig, ax = plt.subplots(figsize = (8,5)) gdf.plot(column = "WF_CEREAL", ax = ax, legend=True, categorical=True, cmap='YlOrBr',legend_kwds = {"loc":"lower right"}, figsize =(10,6)) Adding "labels" in legend_kwds does not help. I tried to add labels with legend_kwds in the following ways, but it didn't work- legend_kwds = {"loc":"lower right", "labels":["low", "mid", "high", "strong", "severe"] legend_labels:["low", "mid", "high", "strong", "severe"] legend_labels=["low", "mid", "high", "strong", "severe"] | Since the question does not have reproducible code and data to work on. I will use the best possible approach to give a demo code that the general readers can follow and some of it may answer the question. The code I provide below can run without the need of external data. Comments are inserted in various places to explain at important steps. # Part 1 # Classifying the data of choice import pandas as pd import geopandas as gpd import matplotlib.pyplot as plt world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')) world['gdp_per_cap'] = world.gdp_md_est / world.pop_est num_classes = 4 #quartile scheme has 4 classes # You can use values derived from your preferred classification scheme here num_qtiles = [0, .25, .5, .75, 1.] #class boundaries for quartiles # Here is the categorical data to append to the dataframe # They are also used as legend's label texts qlabels = ["1st quartile","2nd quartile","3rd quartile","4th quartile"] #matching categorical data/labels # Conditions # len(num_qtiles)-1 == num_classes # len(qlabels) == num_classes # Create a new column for the categorical data mentioned above world['gdp_quartile'] = pd.qcut(world['gdp_per_cap'], num_qtiles, labels=qlabels) # Plotting the categorical data for checking ax1 = world['gdp_quartile'].value_counts().plot(figsize=(5,4), kind='bar', xlabel='Quartile_Classes', ylabel='Countries', rot=45, legend=True) The output of part1:- # Part 2 # Plot world map using the categorical data fig, ax = plt.subplots(figsize=(9,4)) # num_classes = 4 # already defined #color_steps = plt.colormaps['Reds']._resample(num_classes) #For older version color_steps = plt.colormaps['Reds'].resampled(num_classes) #Current version of matplotlib # This plots choropleth map using categorical data as the theme world.plot(column='gdp_quartile', cmap = color_steps, legend=True, legend_kwds={'loc':'lower left', 'bbox_to_anchor':(0, .2), 'markerscale':1.29, 'title_fontsize':'medium', 'fontsize':'small'}, ax=ax) leg1 = ax.get_legend() leg1.set_title("GDP per capita") ax.title.set_text("World Map: GDP per Capita") plt.show() Output of part2:- Edit Additional code, use it to replace the line plt.show() above. This answers the question posted in the comment below. # Part 3 # New categorical texts to use with legend new_legtxt = ["low","mid","high","v.high"] for ix,eb in enumerate(leg1.get_texts()): print(eb.get_text(), "-->", new_legtxt[ix]) eb.set_text(new_legtxt[ix]) plt.show() | 4 | 6 |
74,151,654 | 2022-10-21 | https://stackoverflow.com/questions/74151654/error-cannot-install-miniconda-with-rstudio | I have written an R script where I use some python lines through the reticulate package. I need to share it with some colleagues who don't know about programming and I've created a batch file so I can run it without them even opening R. However, I tried using the install_miniconda() function to silently install python to run the code without them knowing (I guess people are reluctant to install a couple of programs) but R throws an error: > reticulate::install_miniconda(path="C:/") # * Installing Miniconda -- please wait a moment ... # * Downloading "https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe" ... # trying URL 'https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe' # Content type 'application/octet-stream' length 74687656 bytes (71.2 MB) # downloaded 71.2 MB # Error: Miniconda installation failed [unknown reason] I tried without passing any path, but my computer user name has spaces in it so I cannot use it; that's why I resorted to supplying it with the root path "C:/" Can someone help me understand what is happening? Note: I am using R 4.2.1 on Windows 11 (also tried on Windows 10 with the same result) | Try installing rminiconda from GitHub like this: remotes::install_github("hafen/rminiconda") rminiconda::install_miniconda(name='your_name') After that you can specify the installation using reticulate like this: py <- rminiconda::find_miniconda_python("your_name") reticulate::use_python(py, required = TRUE) | 3 | 4 |
74,146,246 | 2022-10-20 | https://stackoverflow.com/questions/74146246/counting-number-of-permutations-in-permutation | A permutation of size n is a sequence of n integers in which each of the values from 1 to n occurs exactly once. For example, the sequences [3, 1, 2], [1], and [1, 2, 3, 4] are permutations, while [2], [4, 1, 2], [3, 1] are not. So I receive 2 inputs: 1 - the number of numbers in the permutation, 2 - the permutation itself. The question is: how many intervals [l;r] (1 ≤ l ≤ r ≤ n) are there for which the sequence p[l..r] is also a permutation? For example: input - 7; [6, 3, 4, 1, 2, 7, 5] The answer is 4: permutation is [6, 3, 4, 1, 2, 7, 5]; permutation is [1]; permutation is [1, 2]; permutation is [3, 4, 1, 2] Hope you understood the question. I wrote the first 2 cases, but I don't know how to check for the others: numbers = int(input("Amount of elements in permutation: ")) perm = list(input("Permutation: ")) perm = [ int(x) for x in perm if x != " "] amount = 1 first = 1 if len(perm) == numbers and int(max(perm)) == numbers and int(min(perm)) == 1: if first in perm and len(perm) > 1: amount += 1 | l = [6, 3, 4, 1, 2, 7, 5] left_bound = right_bound = l.index(1) permutations = [] for i in range(1,len(l)+1): new_index = l.index(i) # special case if i == 1 if new_index == left_bound == right_bound: pass # if the new index is further to the left, update the left index elif new_index < left_bound: left_bound = new_index # same with the right one elif new_index > right_bound: right_bound = new_index # Because we always have all numbers up to and including i # in the list l[left_bound:right_bound+1], we know that if # it does not have length i, numbers that are not in 1..i # are in there -> no permutation. if len(l[left_bound:right_bound+1])==i: permutations.append(l[left_bound:right_bound+1]) print(permutations) I actually just tried it with that one example; if there is an error, please tell me. | 4 | 0 |
74,145,111 | 2022-10-20 | https://stackoverflow.com/questions/74145111/pylance-requiring-explicit-type-on-conformant-list-variables | I define a type as a Union of Literal strings Color = Literal[ "red", "green", "blue", "yellow", "orange", "purple" ] I have a function that expects a list of strings conforming to the type. def f(colors: List[Color]): ... I instantiate a list of conformant strings and pass it to the function colors = ['blue', 'green'] f(colors) Pylance red squiggles the function call with the alert Argument of type "list[str]" cannot be assigned to parameter "colors" of type "List[Color]" in function "f" "list[str]" is incompatible with "List[Color]" TypeVar "_T@list" is invariant Type "str" cannot be assigned to type "Color" "str" cannot be assigned to type "Literal['red']" Pylance(reportGeneralTypeIssues) The alert goes away if I explicitly annotate the instantiated list colors: List[Color] = ['blue', 'green'] This seems redundant. The list matches the expected type regardless of if I annotate it. Shouldn't the type system recognize that? In fact, I can pass the list directly inplace with no alert f(['blue', 'green']) # Pylance allows Pylance is also fine with this def f(seven: Literal[7]): ... x = 7 f(x) # Pylance allows, doesn't require doing x: int = 7 So it seems to only complain about list variables, not implicitly typed variables in general. Why must I explicitly annotate list variables whose values definitely conform to the expected type? | Python literals, like 'blue' or 7, have a natural type. 'blue' is a str, and 7 is an int, according to their runtime types. Type Literals can throw a little confusion into this, because while they take specific objects and give them additional semantic meaning (like 'blue' being a Color or 7 being parameter for f()), they do not override the default assumed type and are only locked in as the true type of the variable when necessary. How does this result in the behavior you've encountered? Let's deal with the 7 first. After x = 7, the implicit, natural type of x is not "a type compatible with f()", it's int. That implicit type gets narrowed successfully during f(x), but only in the context of that exact call, and only because Pylance knows that x legitimately can be narrowed to 7. Meanwhile, for colors = ['blue', 'green'], Pylance is only worried about coming up with the right type for colors. Based on the literal, it first sees that it's a list, and then that it's a list of strings. Thus the inferred type is list[str]. Now, can this be narrowed to list[Color] when we get to f(colors)? No, because lists are mutable. There's no runtime guarantee that the list won't end up with other strings, so the literals inside the list cannot be narrowed into a color. Again, to contrast, 7 is an immutable int, so we know there can't be anything else getting stored alongside the 7 within x. Why then does f(['green', 'blue']) work? Because the lifecycle of the list in this case is only the function call itself, so there's an implicit request to make the list compatible with the call signature if at all possible, without worrying about what the caller code might try to do to the list later (since it can't do anything at all). So how could this be bypassed? First is what you noticed, to explicitly limit the list to Colors. 
You could also use an immutable type, like a tuple, since Python knows, just like with the list made within the function call, that the object won't change during runtime: Color = Literal['red', 'blue'] def f(colors: Sequence[Color]): ... L = ['red', 'blue'] f(L) # fails! T = ('red', 'blue') f(T) # succeeds! | 3 | 3 |
74,134,089 | 2022-10-20 | https://stackoverflow.com/questions/74134089/keyerror-weakref-at-0x7fc9e8267ad0-to-flask-at-0x7fc9e9ec5750 | I've been having a hard time handling sessions in flask. Since when I manage the application in the local environment everything works perfectly, including flask sessions. But when i already host it in Render i always get this error in every route. [55] [ERROR] Error handling request /valle-de-guadalupe Traceback (most recent call last): File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask/app.py", line 2525, in wsgi_app response = self.full_dispatch_request() File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask/app.py", line 1822, in full_dispatch_request rv = self.handle_user_exception(e) File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask/app.py", line 1820, in full_dispatch_request rv = self.dispatch_request() File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask/app.py", line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "/opt/render/project/src/app_folder/routes/public.py", line 35, in valle_de_guadalupe return render_template("public/cities/valle_guadalupe.html") File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask/templating.py", line 147, in render_template return _render(app, template, context) File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask/templating.py", line 128, in _render app.update_template_context(context) File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask/app.py", line 994, in update_template_context context.update(func()) File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask_login/utils.py", line 407, in _user_context_processor return dict(current_user=_get_user()) File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask_login/utils.py", line 372, in _get_user current_app.login_manager._load_user() File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask_login/login_manager.py", line 364, in _load_user user = self._user_callback(user_id) File "/opt/render/project/src/app.py", line 52, in load_user return User.get_by_id(int(user_id)) File "/opt/render/project/src/app_folder/models/models.py", line 82, in get_by_id return User.query.get(id) File "<string>", line 2, in get File "/opt/render/project/src/.venv/lib/python3.7/site-packages/sqlalchemy/util/deprecations.py", line 402, in warned return fn(*args, **kwargs) File "/opt/render/project/src/.venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 947, in get return self._get_impl(ident, loading.load_on_pk_identity) File "/opt/render/project/src/.venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 959, in _get_impl execution_options=self._execution_options, File "/opt/render/project/src/.venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 2959, in _get_impl load_options=load_options, File "/opt/render/project/src/.venv/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 534, in load_on_pk_identity bind_arguments=bind_arguments, File "/opt/render/project/src/.venv/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1702, in execute bind = self.get_bind(**bind_arguments) File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask_sqlalchemy/session.py", line 61, in get_bind engines = self._db.engines File "/opt/render/project/src/.venv/lib/python3.7/site-packages/flask_sqlalchemy/extension.py", line 629, in 
engines return self._app_engines[app] File "/usr/local/lib/python3.7/weakref.py", line 396, in __getitem__ return self.data[ref(key)] KeyError: <weakref at 0x7fc9e8267ad0; to 'Flask' at 0x7fc9e9ec5750> index.py from app import app from app_folder.utils.db import db db.init_app(app) with app.app_context(): db.create_all() if __name__ == "__main__": app.run( debug = False, port = 5000 ) app.py from flask import Flask """Flask SqlAlchemy""" from flask_sqlalchemy import SQLAlchemy """Flask Login""" from flask_login import LoginManager """Dot Env""" from dotenv import load_dotenv """App Folder Routes""" from app_folder.handlers.stripe_handlers import stripe_error from app_folder.handlers.web_handlers import web_error from app_folder.models.models import User from app_folder.routes.admin import admin from app_folder.routes.public import public from app_folder.routes.users import users from app_folder.utils.db import db """Imports""" import os import stripe load_dotenv() """config app""" app = Flask(__name__, static_url_path="", template_folder="app_folder/templates", static_folder="app_folder/static") app.config['SECRET_KEY'] = os.getenv("SECRET_KEY") app.config['SQLALCHEMY_DATABASE_URI'] = os.getenv("SQLALCHEMY_DATABASE_VERSION")+os.getenv("SQLALCHEMY_USERNAME")+":"+os.getenv("SQLALCHEMY_PASSWORD")+"@"+os.getenv("SQLALCHEMY_SERVER")+"/"+os.getenv("SQLALCHEMY_DATABASE") app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = os.getenv("SQLALCHEMY_TRACK_MODIFICATIONS") """blueprints""" app.register_blueprint(stripe_error) app.register_blueprint(web_error) app.register_blueprint(admin) app.register_blueprint(public) app.register_blueprint(users) SQLAlchemy(app) login_manager = LoginManager(app) """ stripe """ stripe_keys = { 'secret_key': os.getenv("STRIPE_SECRET_KEY"), 'publishable_key': os.getenv("STRIPE_PUBLISHABLE_KEY") } stripe.api_key = stripe_keys['secret_key'] """Login Manager""" @login_manager.user_loader def load_user(user_id): return User.get_by_id(int(user_id)) """Teardown""" @app.teardown_appcontext def shutdown_session(exception=None): db.session.remove() Regardless of which route i'm on, while handling sessions I get the same error, but in this case use this path. public.py """routes""" @public.route("/", methods=["GET", "POST"]) def index(): return redirect(url_for('public.valle_de_guadalupe')) """cities""" @public.route("/valle-de-guadalupe", methods=["GET", "POST"]) def valle_de_guadalupe(): return render_template("public/cities/valle_guadalupe.html") I don't know if this has happened to someone else. | It's usually best to remove as many extra components and identify a minimal example that produces the error, otherwise it's often hard to help. That said, I think that error suggests that db (SQLAlachemy() from flask-sqlalchemy) is not being inited with app. I'm not sure what the db is that is being imported from app_folder.utils.db, but it appears that you may need to call db.init_app(app). Relatedly, the line SQLAlchemy(app) is not being assigned. Perhaps you meant to assign that to db? | 3 | 6 |
74,144,861 | 2022-10-20 | https://stackoverflow.com/questions/74144861/runtimeerror-runtimeerror-either-sqlalchemy-database-uri-or-sqlalchemy-bind | I am learning basic flask but cannot import db(database) from freecodecamp I cannot find the solution plz answer this query as soon as possible There are 4 files : when I write from app import db the pythonshell shows error app.py from flask import Flask,render_template, url_for from flask_sqlalchemy import SQLAlchemy from datetime import datetime app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URL'] = 'sqlite://test.db' db = SQLAlchemy(app) class Todo(db.Model): id = db.Column(db.Integer, primary_key = True) content = db.Column(db.String(200), nullable = False) date_created = db.Column(db.DateTime, default = datetime.utcnow) def __repr__(self): return '<Task %r>' % self.id @app.route('/') def index(): return render_template('index.html') if __name__ == "__main__": app.run(debug = True) main.css body { margin: 0; font-family: sans-serif; } base.html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width-device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie-edge"> <link rel="stylesheet" href= "{{ url_for('static', filename = 'css/main.css')}}"> {% block head %}{% endblock %} </head> <body> {% block body %}{% endblock %} </body> </html> index.html {% extends 'base.html'%} {% block head %} {% endblock %} {% block body %} <h1>Template</h1> {% endblock %} Terminal Window | You need to initialize db as well, db = SQLAlchemy() # db intitialized here app = Flask(__name__) app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite://test.db" db.init_app(app) It's SQLALCHEMY_DATABASE_URI not SQLALCHEMY_DATABASE_URL | 6 | 9 |
74,144,613 | 2022-10-20 | https://stackoverflow.com/questions/74144613/flask-sqlalchemy-raises-typeerror-query-paginate-takes-1-positional-argument | I call Flask-SQLAlchemy's query.paginate method with arguments. Recently this stopped working and instead raised a TypeError. How do I pass page and other arguments to query.paginate? Ticker.query.paginate(page, app.config["TICKERS_PER_PAGE"], False) TypeError: Query.paginate() takes 1 positional argument but 4 were given | As of Flask-SQLAlchemy 3.0, all arguments to paginate are keyword-only. Ticker.query.paginate(page=page, per_page=app.config["TICKERS_PER_PAGE"], error_out=False) | 9 | 20 |
74,144,004 | 2022-10-20 | https://stackoverflow.com/questions/74144004/for-loop-that-takes-turns-between-printing-one-thing-or-the-other-python | Not sure how else to word the title, but this is what I need: Print either the number zero, the number one or a phrase in no consecutive order twenty times. This is what I have: n = 0 x = “hello” for i in range(20): print(n, end= ‘’) or print(n+1, end= ‘’) or print(x) The only problem is that it prints them out in order so that it is always 01hello01hello01hello and so on. I need it to be randomized so that it could print something like 000hello1hello101 or just any random variation of the three variables. Let me know how :) | import random choices = [1, 0, "hello"] for i in range(20): print(random.choice(choices)) | 4 | 8 |
74,140,606 | 2022-10-20 | https://stackoverflow.com/questions/74140606/pandas-get-all-positive-delta-values-efficiently | I am looking for an efficient way to turn this pandas dataframe: A B C 0 0 1 0 1 0 1 1 2 1 1 1 3 1 1 0 4 0 0 1 into A B C 0 0 1 0 1 0 0 1 2 1 0 0 3 0 0 0 4 0 0 1 I only want "1" in a cell, if in the original dataframe the value jumps from "0" to "1". If it's the first row, I want a "1", if "1" is the start value. I have to use this operation often in my project and on a large dataframe, so it should be as efficient as possible. Thanks in advance! | You can use: df.diff().clip(0).fillna(df) output: A B C 0 0 1 0 1 0 0 1 2 1 0 0 3 0 0 0 4 0 0 1 | 3 | 2 |
74,137,116 | 2022-10-20 | https://stackoverflow.com/questions/74137116/how-to-hide-a-pydantic-discriminator-field-from-fastapi-docs | We have a discriminator field type which we want to hide from the Swagger UI docs: class Foo(BDCBaseModel): type: Literal["Foo"] = Field("Foo", exclude=True) Name: str class Bar(BDCBaseModel): type: Literal["Bar"] = Field("Bar", exclude=True) Name: str class Demo(BDCBaseModel): example: Union[Foo, Bar] = Field(discriminator="type") The following router: @router.post("/demo") async def demo( foo: Foo, ): demo = Demo(example=foo) return demo And this is shown in the Swagger docs: We don't want the user to see the type field as it is useless for him/her anyways. We tried making the field private: _type which hides it from the docs but then it cannot be used as discriminator anymore: class Demo(BDCBaseModel): File "pydantic\main.py", line 205, in pydantic.main.ModelMetaclass.__new__ File "pydantic\fields.py", line 491, in pydantic.fields.ModelField.infer File "pydantic\fields.py", line 421, in pydantic.fields.ModelField.__init__ File "pydantic\fields.py", line 537, in pydantic.fields.ModelField.prepare File "pydantic\fields.py", line 639, in pydantic.fields.ModelField._type_analysis File "pydantic\fields.py", line 753, in pydantic.fields.ModelField.prepare_discriminated_union_sub_fields File "pydantic\utils.py", line 739, in pydantic.utils.get_discriminator_alias_and_values pydantic.errors.ConfigError: Model 'Foo' needs a discriminator field for key '_type' | This is a very common situation and the solution is farily simple. Factor out that type field into its own separate model. The typical way to go about this is to create one FooBase with all the fields, validators etc. that all child models will share (in this example only name) and then subclass it as needed. In this example you would create one Foo subclass with that type field that you then use for the Demo annotation, and one FooRequest class without any additions. Here is a full working example: from typing import Literal, Union from fastapi import FastAPI from pydantic import BaseModel, Field class FooBase(BaseModel): name: str class FooRequest(FooBase): pass # possibly configure other request specific things here class Foo(FooBase): type: Literal["Foo"] = Field("Foo", exclude=True) class Config: orm_mode = True class Bar(BaseModel): type: Literal["Bar"] = Field("Bar", exclude=True) name: str class Demo(BaseModel): example: Union[Foo, Bar] = Field(discriminator="type") api = FastAPI() @api.post("/demo") async def demo(foo: FooRequest): foo = Foo.from_orm(foo) return Demo(example=foo) Note that I used the orm_mode = True setting just to have a very concise way of converting a FooRequest instance into a Foo instance inside the route handler function. This is not necessary. You could also just do foo = Foo.parse_obj(foo.dict()) there. Also, the addition of the FooRequest model is redundant here of course. You can just as well use the FooBase as the request model. I wrote it this way just to demonstrate a typical pattern because sometimes the request model has additional things that distinguish it from its siblings. In your example it is overkill. | 6 | 1 |
74,133,458 | 2022-10-20 | https://stackoverflow.com/questions/74133458/python-annotate-return-types-for-different-input-cases | I have a function that can return different types given a boolean input argument. Currently, I just annotate it using union: def fn(x: int, ret_str: bool) -> Union[int, str]: if ret_str: return str(x) else: return x However, if I annotate like this, the output of this function will be typed as a union type as well. out = fn(8, ret_str=False) # out will have type Union[int, str] but we know it's int Is there a way to annotate it such that the type checker will know if I call the function with a literal False, the return type has to be int instead of str? | Yes, this is possible using the typing module and overload/Literal. It isn't the most elegant solution since it is so verbose (in comparison to most typed languages), but will cause the type checker to associate a return type depending on arguments passed. I would probably avoid doing this in most circumstances outside of libraries/modules or other code I know will be referenced repeatedly from an external source. from typing import Literal, overload @overload def fn(x: int, ret_str: Literal[False]) -> int: ... @overload def fn(x: int, ret_str: Literal[True]) -> str: ... def fn(x: int, ret_str: bool) -> int | str: if ret_str: return str(x) return x If wanting to implement more advanced type checking like this, it is usually beneficial to store this information in a separate pyi file instead of directly in your program. Finally, please note that the @overload decorator is purely for the type checker. You cannot put different functionality into a function with it, since the final declaration of the function will always overwrite any previous definitions (regardless of if the decorator is used). As such, putting an ellipsis (...) in overloaded function definitions is the paradigm. | 5 | 4 |
74,124,896 | 2022-10-19 | https://stackoverflow.com/questions/74124896/fastapi-is-returning-attributeerror-dict-object-has-no-attribute-encode | I have a very simple FastAPI service. When I try to hit the /spell_checking endpoint, I get this error: `AttributeError: 'dict' object has no attribute 'encode'`. I am hitting the endpoint using Postman, with a POST request, and this is the url: http://127.0.0.1:8080/spell_checking, the payload is JSON with value: {"word": "testign"} Here is my code: from fastapi import FastAPI, Response, Request import uvicorn from typing import Dict app = FastAPI() @app.post('/spell_checking') def spell_check(word: Dict ) : data = { 'corrected': 'some value' } return Response(data) @app.get('/') def welcome(): return Response('Hello World') if __name__ == '__main__': uvicorn.run(app, port=8080, host='0.0.0.0') I still don't know why a simple service like this would show this error! | The problem happens when you try to create the Response object. This object expects a string as input (in fact it tries to call .encode() on it). There is no need to explicitly create one; you can just return the data and FastAPI will do the rest. from typing import Dict import uvicorn from fastapi import FastAPI app = FastAPI() @app.post("/spell_checking") def spell_check(word: Dict): data = {"corrected": "some value"} return data @app.get("/") def welcome(): return "Hello World" if __name__ == "__main__": uvicorn.run(app, port=8080, host="0.0.0.0") | 3 | 5 |
74,123,745 | 2022-10-19 | https://stackoverflow.com/questions/74123745/how-to-capture-the-data-of-the-drawn-shapes-by-the-mouse-in-dash | So I have this simple python dash application in which I've laid out a graph and a button. My goal is this: when I press the button, I want to retrieve the shapes that have been drawn. import plotly.graph_objects as go import dash from dash import html, dcc, Input, Output, State app = dash.Dash(__name__) fig = go.Figure() app.layout = html.Div([ dcc.Graph(id = "graph-pic", className="graph-pic", figure=fig, config={'modeBarButtonsToAdd':['drawrect', 'eraseshape']}), html.Button("Shape count", id = "shape-count-button") ]) fig.add_shape(editable=True, x0=-1, x1=0, y0=2, y1=3, xref='x', yref='y') @app.callback( Output("graph-pic", "figure"), Input("shape-count-button", "n_clicks") ) def on_shape_count_button_pressed(n_clicks): trigger_id = dash.callback_context.triggered_id if trigger_id == "shape-count-button": print("Shape count: " + str(len(fig.layout.shapes))) print(fig.layout.shapes) return dash.no_update if __name__ == "__main__": app.run_server() When I press the button, it only prints the first shape that I've added through code... and NOT the ones that I've drawn on the graph with the draw rectangle tool. Output: Shape count: 1 (layout.Shape({ 'editable': True, 'x0': -1, 'x1': 0, 'xref': 'x', 'y0': 2, 'y1': 3, 'yref': 'y' }),) Any hint would be appreciated! | You should use relayout_data to detect all the drawn shapes on the graph, and then you can parse the desired data as you would: import dash import json import plotly.graph_objects as go from dash import html, dcc, Input, Output, State app = dash.Dash(__name__) fig = go.Figure() app.layout = html.Div([ dcc.Graph(id = "graph-pic", className="graph-pic", figure=fig, config={'modeBarButtonsToAdd':['drawrect', 'eraseshape']}), html.Button("Shape count", id = "shape-count-button"), html.Div(id="text") ]) fig.add_shape(editable=True, x0=-1, x1=0, y0=2, y1=3, xref='x', yref='y') @app.callback( Output("text", "children"), Input("shape-count-button", "n_clicks"), Input("graph-pic", "relayoutData"), ) def on_shape_count_button_pressed(n_clicks, relayout_data): trigger_id = dash.callback_context.triggered_id if trigger_id == "shape-count-button": text_lst = "Shape count: " + str(len(fig.layout.shapes)) text_lst += str(fig.layout.shapes) if "shapes" in relayout_data: text_lst += json.dumps(relayout_data["shapes"], indent=2) return text_lst return dash.no_update if __name__ == "__main__": app.run_server() Output | 4 | 3 |
74,103,649 | 2022-10-17 | https://stackoverflow.com/questions/74103649/is-there-a-way-to-use-endswith-startswith-in-match-case-statements | Is there a way to use match case to select string endings/beginnings like below? match text_string: case 'bla-bla': return 'bla' case .endswith('endofstring'): return 'ends' case .startswith('somestart'): return 'start' | You were close. You wanted a conditional guard on a pattern. In the following case, one that simply matches the value. match text_string: case 'bla-bla': return 'bla' case s if s.endswith('endofstring'): return 'ends' case s if s.startswith('somestart'): return 'start' This doesn't gain much over the following. if text_string == 'bla-bla': return 'bla' elif text_string.endswith('endofstring'): return 'ends' elif text_string.startswith('somestart'): return 'start' Unless you're also using the match and want to differentiate between two otherwise identical patterns. | 10 | 19 |
74,058,262 | 2022-10-13 | https://stackoverflow.com/questions/74058262/icu-sort-strings-based-on-2-different-locales | As you probably know, the order of alphabet in some (maybe most) languages is different than their order in Unicode. That's why we may want to use icu.Collator to sort, like this Python example: from icu import Collator, Locale collator = Collator.createInstance(Locale("fa_IR.UTF-8")) mylist.sort(key=collator.getSortKey) This works perfectly for Persian strings. But it also sorts all Persian strings before all ASCII / English strings (which is the opposite of Unicode sort). What if we want to sort ASCII before this given locale? Or ideally, I want to sort by 2 or multiple locales. (For example give multiple Locale arguments to Collator.createInstance) If we could tell collator.getSortKey to return empty bytes for other locales, then I could create a tuple of 2 collator.getSortKey() results, for example: from icu import Collator, Locale collator1 = Collator.createInstance(Locale("en_US.UTF-8")) collator2 = Collator.createInstance(Locale("fa_IR.UTF-8")) def sortKey(s): return collator1.getSortKey(s), collator2.getSortKey(s) mylist.sort(key=sortKey) But looks like getSortKey always returns non-empty bytes. | A bit late to answer the question, but here it is for future reference. ICU collation uses the CLDR Collation Algorithm, which is a tailoring of the Unicode Collation Algorithm. The default collation is referred to as the root collation. Don't think in terms of Locales having a set of collation rules, think more in terms of locales specify any differences between the collation rules that the locale needs and the root collation. CLDR takes a minimalist approach, you only need to include the minimal set of differences needed based on the root collation. English uses the root locale. No tailorings. Persian on the other hand has a few rules needed to override certain aspects of the root collation. As the question indicates, the Persian collation rules order Arabic characters before Latin characters. In the collation rule set for Persian there is a rule [reorder Arab]. This rule is what you need to override. There are a few ways to do this: Use icu.RuleBasedCollator with a coustom set fo rules for Persian. Create a standard Persian collation, retrieve the rules, strip out the reorder directive and then use modified rules with icu.RuleBasedCollator. Create collator instance using a BCP-47 language tag, instead of a Locale identifier There are other approaches as well, but the third is the simplest: loc = Locale.forLanguageTag("fa-u-kr-latn-arab") collator = Collator.createInstance(loc) sorted(mylist, key=collator.getSortKey) This will reorder the Persian collation rules, placing Latin script before Arabic script, then everything else afterwards. Update 2024-06-27 The reordering directive above reorders Latin first, then Arabic script, then everything else based on its default ordering. This works well for bilingual data in Persian and languages using the Latin script, but may not be as suitable for multiscript data. There is a special ISO 15924 code Zzzz representing Unknown script, as a ICU reorder code, it is used to represent all scripts not specifically specified in the reorder. So fa-u-kr-latn-arab would be the same as fa-u-kr-latn-arab-Zzzz, but if we use fa-u-kr-Zzzz without mentioning other codes, the collator will order scripts as per Root collation order. 
This would give us Persian specific sorting combined with the default script order of the Root collation: import icu data = ["Salâm", "سلام", "тасли́м", "Persian", "فارسی", "Персидский язык"] # Persian (Farsi) locale based collator loc_fa = loc = icu.Locale('fa') collator_fa = icu.Collator.createInstance(loc_fa) sorted(data, key=collator_fa.getSortKey) # ['سلام', 'فارسی', 'Persian', 'Salâm', 'Персидский язык', 'тасли́м'] # Persian (Farsi) locale based collator with reordering: Latin, Arabic, then other scripts loc_alt = icu.Locale.forLanguageTag("fa-u-kr-latn-arab") collator_alt = icu.Collator.createInstance(loc_alt) sorted(data, key=collator_alt.getSortKey) # ['Persian', 'Salâm', 'سلام', 'فارسی', 'Персидский язык', 'тасли́м'] # Persian (Farsi) locale based collator with reordering: Other (Zzzz - Unknown script) # Sets order to default CLDR order loc = icu.Locale.forLanguageTag("fa-u-kr-Zzzz") collator = icu.Collator.createInstance(loc) sorted(data, key=collator.getSortKey) # ['Persian', 'Salâm', 'Персидский язык', 'тасли́м', 'سلام', 'فارسی' | 7 | 4 |
74,039,971 | 2022-10-12 | https://stackoverflow.com/questions/74039971/importerror-cannot-import-name-timedjsonwebsignatureserializer-from-itsdange | I'm running a Flask app that uses the itsdangerous Python package on an AWS EC2 instance. Traceback (most recent call last): File "run.py", line 4, in <module> app = create_app() File "/home/ubuntu/RHS_US/application/portal/__init__.py", line 29, in create_app from portal.users.routes import users File "/home/ubuntu/RHS_US/application/portal/users/routes.py", line 7, in <module> from portal.models import User File "/home/ubuntu/RHS_US/application/portal/models.py", line 7, in <module> from itsdangerous import TimedJSONWebSignatureSerializer as Serializer ImportError: cannot import name 'TimedJSONWebSignatureSerializer' from 'itsdangerous' (/home/ubuntu/.local/lib/python3.7/site-packages/itsdangerous/__init__.py) Any resolution for this? | First make sure to re-install and update itsdangerous: pip install -U itsdangerous Then what you want to do is from itsdangerous.url_safe import URLSafeTimedSerializer as Serializer This works well. | 7 | 15 |
74,098,560 | 2022-10-17 | https://stackoverflow.com/questions/74098560/python-bad-file-descriptor-when-reading-from-pipe | I'm trying to start a subprocess and pass a pipe file descriptor to it to read from. However when I try to read from the pipe in the child process I get "Bad file descriptor", even though I can read from the pipe in the parent process just fine. Here's the parent process: import subprocess import sys import os r, w = os.pipe() os.set_inheritable(r, True) p = subprocess.Popen(["python3", "client.py", str(r)]) os.write(w, b"hello") p.wait() And here's the child process: import sys import os r = int(sys.argv[1]) print("[Client]", os.read(r, 5)) Any help would be much appreciated. | From the official Python docs: Using the subprocess module, all file descriptors except standard streams are closed, and inheritable handles are only inherited if the close_fds parameter is False. | 3 | 4 |
74,046,983 | 2022-10-12 | https://stackoverflow.com/questions/74046983/plotly-no-renderer-could-be-found-for-mimetype-application-vnd-plotly-v1json | Environment: VS Code Jupyter extension (v2022.9.1202862440) Kernel: Python 3.10.1 64-bit Plotly version: 5.10.0 Nbformat version: 5.7.0 What happened? I am trying to use a simple boxplot with plotly.express via this code: # Basic visualization fig = px.box(dataset, y='Rh2') fig.update_layout( height=1000 ) The first error was pretty straightforward: ValueError: Mime type rendering requires nbformat>=4.2.0 but it is not installed which I resolved using this answer. The second error is the one I am getting now: No renderer could be found for mimetype "application/vnd.plotly.v1+json", but one might be available on the Marketplace. I have no idea what could be done here - on the internet no such question is mentioned (at least I couldn't find any) + the Marketplace didn't help either! Thanks in advance EDIT: Tried adding: pio.renderers.default = "vscode" and various others but they result in no error and just a blank space. | Try installing the Jupyter Notebook Renderers extension in VS Code. | 9 | 10 |
74,070,505 | 2022-10-14 | https://stackoverflow.com/questions/74070505/how-to-run-fastapi-application-inside-jupyter | I am learning FastAPI and I have this example. from fastapi import FastAPI app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} I saved the script as main.ipynb The tutorial says to run this line of code in the command line: uvicorn main:app --reload I am getting this error: (venv) PS C:\Users\xxx\xxxx> uvicorn main:app --reload INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [21304] using WatchFiles ERROR: Error loadinimport module "main".INFO: Stopping reloader process [21304] The reason is because I am using .ipynb as opposed to .py. How can i fix this error while using .ipynb. Thanks so much | If you attempted to start the server as usual inside Jupyter, for example: import uvicorn if __name__ == "__main__": uvicorn.run(app) you would get the following error: RuntimeError: asyncio.run() cannot be called from a running event loop This is due to Jupyter already running an event loop, and once Uvicorn calls asyncio.run() internally, the above error is raised. As per asyncio.run() documentation: This function cannot be called when another asyncio event loop is running in the same thread (see relevant asyncio implementation, where the error is raised). [...] This function always creates a new event loop and closes it at the end. It should be used as a main entry point for asyncio programs, and should ideally only be called once. Solution 1 If you wouldd like to run uvicorn from an already running async environment, use uvicorn.Server.serve() instead (you could add the below to a new code cell in your Jupyter notebook, and then run it): import asyncio import uvicorn if __name__ == "__main__": config = uvicorn.Config(app) server = uvicorn.Server(config) await server.serve() or, get the current (running) event loop, using asyncio.get_running_loop(), and then call loop.create_task() for creating a task to run inside the event loop for the current thread: import asyncio import uvicorn if __name__ == "__main__": config = uvicorn.Config(app) server = uvicorn.Server(config) loop = asyncio.get_running_loop() loop.create_task(server.serve()) Solution 2 Alternatively, you can use nest_asyncio, which allows nested use of asyncio.run() and loop.run_until_complete(): import nest_asyncio import uvicorn if __name__ == "__main__": nest_asyncio.apply() uvicorn.run(app) | 8 | 15 |
74,056,594 | 2022-10-13 | https://stackoverflow.com/questions/74056594/python-discord-bot-works-in-dm-but-not-in-server | I am trying to set up a simple Discord bot for a server, but it only appears to be responding in DMs to commands and not in any server channels. The bot has admin permissions in the server I am trying to get it to respond in. After doing some looking around I have found no fixes. Here's the code: import discord token_file = open("bot_token.txt", "r+") TOKEN = str(token_file.readlines()).strip("[]'") token_file.close() command_prefix = ">" client = discord.Client(intents=discord.Intents.default()) @client.event async def on_ready(): print("Logged in as: {0.user}".format(client)) @client.event async def on_message(message): if message.author == client.user: return if message.content.startswith(command_prefix): if message.content == ">help": await message.channel.send("What do you need help with?") elif message.content == ">hello": await message.channel.send("Hi there!") else: await message.channel.send("This command is not recognised by our bot, please use the help menu if required.") else: return client.run(TOKEN) Hope someone can help! | Two things must be done before you can respond to messages within channels: Configuration the intent of the bot within discord developer portal Specify the intent within the bot itself. Step 1: Configuring the intent of the bot in developer portal: Within the developer portal, find your bot under applications. Selecting the "bot" tab from the left, you'll see the following option: It needs to be enabled. If you try the below without enabling it, you will get a permission error: ... from None discord.errors.PrivilegedIntentsRequired: Shard ID None is requesting privileged intents that have not been explicitly enabled in the developer portal. It is recommended to go to https://discord.com/developers/applications/ and explicitly enable the privileged intents within your application's page. If this is not possible, then consider disabling the privileged intents instead. Step 2: Specifying intent in the bot: Specifying intent is done through the following lines. It is required in addition to the option being enabled in the developer portal. intents = discord.Intents.default() intents.message_content = True Discord added a message content intent that has to be used since the first September 2022. if you‘re doing a discord.py tutorial, be aware that it should be a 2.0 tutorial as many things have been updated since then. Consider using the commands extension of discord.py as it is handling some annoying stuff and provides more easier interfaces to handle commands. | 3 | 3 |
74,086,844 | 2022-10-16 | https://stackoverflow.com/questions/74086844/how-to-type-hint-pandas-na-as-a-possible-output | I have Pandas lambda function which I use with .apply. This function will output a dictionary with string keys and values that are either strings or pd.NA. When I try to type hint the function I get an error: def _the_function(x: str) -> dict[str, str | pd.NA]: ... ERROR: Expected class type but received "NAType" How can I tell it of the possible pd.NA value without having to import numpy and using its NaN type hint? My project has no need to import numpy. | Your problem comes from the fact that pandas.NA is not a type. It is an instance (a singleton in fact) of the NAType class in Pandas. You need to use classes in type annotations.* More precisely, annotations must be made with instances of type (typically called classes) or special typing constructs like Union or generics. You can fix this by importing and using that class in the type annotation: import pandas as pd from pandas._libs.missing import NAType ... def _the_function(x: str) -> dict[str, str | NAType]: return {"foo": pd.NA} # example to show annotations are correct Running mypy over that code shows no errors. The only problem is that _libs is a non-public module (as denoted by its name starting with _). This may be due to the NA singleton still being considered experimental. I don't know. But importing from non-public modules is generally discouraged. I searched through the Pandas (and pandas-stubs) source and found no public re-import of the NAType class, so I see no other way around it. If NA is still experimental, I suppose you know the risk you are taking when relying on it in your functions, so importing its class should not make much of a difference to you. Hope this helps. * Confusion can arise because Python allows using None in type annotations, even though strictly speaking it is also an object (also a singleton) of the NoneType class. But the good people working on type hints in Python decided to still allow None as a special case for convenience, even though it is not exactly consistent. As far as I know, that is the only such exception. | 5 | 10 |
74,074,580 | 2022-10-14 | https://stackoverflow.com/questions/74074580/how-to-avoid-losing-type-hinting-of-decorated-function | I noticed that when wrapping a function or method that have some type hinting then the wrapped method loses the type hinting informations when I am coding using Visual studio code. For example with this code: from typing import Callable import functools def decorate(function: Callable): @functools.wraps(function) def wrapper(object: "A", *args, **kwargs): return function(object, *args, **kwargs) return wrapper class A: @decorate def g(self, count: int) -> str: return f"hello {count}" a = A() print(a.g(2)) When I am hovering within visual studio code over the name g then I lose the type hinting informations. Would you know a way to prevent this? Sincerely | The best you can do with Python 3.8 (and 3.9) is the following: from __future__ import annotations from functools import wraps from typing import Any, Callable, TypeVar T = TypeVar("T") def decorate(function: Callable[..., T]) -> Callable[..., T]: @wraps(function) def wrapper(obj: A, *args: Any, **kwargs: Any) -> T: return function(obj, *args, **kwargs) return wrapper class A: @decorate def g(self, count: int) -> str: return f"hello {count}" This will preserve the return type information, but no details about the parameter types. The @wraps decorator should at least keep the signature intact. If the decorator is supposed to be universal for methods of A, this is as good as it gets IMO. If you want it to be more specific, you can always restrict the function type to Callable[[A, B, C], T], but then the decorator will not be as universal anymore. If you upgrade to Python 3.10, you will have access to ParamSpec. Then you can do the following: from __future__ import annotations from functools import wraps from typing import Callable, ParamSpec, TypeVar P = ParamSpec("P") T = TypeVar("T") def decorate(function: Callable[P, T]) -> Callable[P, T]: @wraps(function) def wrapper(*args: P.args, **kwargs: P.kwargs) -> T: return function(*args, **kwargs) return wrapper class A: @decorate def g(self, count: int) -> str: return f"hello {count}" This actually preserves (as the name implies) all the parameter specifications. Hope this helps. | 7 | 11 |
74,099,657 | 2022-10-17 | https://stackoverflow.com/questions/74099657/how-to-print-a-fraction-in-latex-form | I have a fraction and I want to print the latex form of it. The fraction is like this for n == 3: How can I print the latex for this fraction using divide and conquer: 1+\frac{2+\frac{4}{5}}{3+\frac{6}{7}} And for n == 4 the fraction is: And the result is: 1+\frac{2+\frac{4+\frac{8}{9}}{5+\frac{10}{11}}}{3+\frac{6+\frac{12}{13}}{7+\frac{14}{15}}} | I found the solution myself. I've tried to solve the problem by using divide and conquer. My solution has two parameters, start and step. Start is the value that is printed at the first step. and step is the value that shows the number of fractions. If step = 1, then I print just the value of start. Otherwise for each step the numerator of the fraction is 2start and for denominator the value is 2start+1. Here is the code of my solution. def generate_fraction(start : int, step : int): result = "" if step == 1: result = str(start) else: result = str(start) + '+\\frac{'+ generate_fraction(2*start, step-1) + '}{' + generate_fraction(2*start + 1, step-1) + '}' return result # Main Program ... step = int(input()) start = 1 print(generate_fraction(start, step)) | 3 | 11 |
74,058,780 | 2022-10-13 | https://stackoverflow.com/questions/74058780/validation-of-field-assignment-inside-validator-of-pydantic-model | I've following pydantic model defined. When I run p = IntOrStr(value=True), I expect failure as True is boolean and it should fail the assignments to __int and __str class IntOrStr(BaseModel): __int: Optional[conint(strict=True, le=100, ge=10)] = None __str: Optional[constr(strict=True, max_length=64, min_length=10)] = None value: Any @validator("value") def value_must_be_int_or_str(cls, v): try: __int = v # no validation. not sure why? return v except ValidationError as e: print(str(e)) try: __str = v # no validation. not sure why? return v except ValidationError as e: print(str(e)) raise ValueError("error. value must be int or str") class Config: validate_assignment = True Does anyone know why __int = v and __str = v do not trigger any validation? Thanks. | There are a quite a few problems here. Namespace This is unrelated to Pydantic; this is just a misunderstanding of how Python namespaces work: In the namespace of the method, __int and __str are just local variables. What you are doing is simply creating these variables and assigning values to them, then discarding them without doing anything with them. They are completely unrelated to the fields/attributes of your model. If you wanted to assign a value to a class attribute, you would have to do the following: class Foo: x: int = 0 @classmethod def method(cls) -> None: cls.x = 42 But that is not what you want in this case because... Class vs. instance A validator is a class method. This is hinted at by the first parameter being named cls even though the @classmethod decorator can be omitted with @validator. Thus, you will never be able to assign a value to any field of the model instance inside a validator, regardless of the validate_assignment configuration. The validator merely processes the value provided for assignment to the instance. What it returns may then eventually be assigned to the instance, if no other validators get in the way. If you want the value passed to one field to influence what is eventually assigned to other fields, you should instead use a @root_validator. Validator precedence You need to take into account the order in which validators are called. That order is determined by the order in which fields were defined. (see docs) Root validators are called after field validators by default. Thus, if you want changes done by a root validator to influence field validation, you need to use pre=True on it. Underscores Pydantic does not treat attributes, whose names start with an underscore, as fields, meaning they are not subject to validation. If you need a field name that starts with an underscore, you will have to use an alias. 
Working example All together, I guess you would need something more like the following: from typing import Any, Optional from pydantic import BaseModel, Field, ValidationError, conint, constr, root_validator class IntOrStr(BaseModel): a: Optional[ conint(strict=True, le=100, ge=10) ] = Field(default=None, alias="__a") b: Optional[ constr(strict=True, max_length=64, min_length=10) ] = Field(default=None, alias="__b") value: Any @root_validator(pre=True) def value_to_a_and_b(cls, values: dict[str, Any]) -> dict[str, Any]: value = values.get("value") values["__a"] = value values["__b"] = value return values if __name__ == "__main__": try: IntOrStr(value=True) except ValidationError as e: print(e) The output: 2 validation errors for IntOrStr __a value is not a valid integer (type=type_error.integer) __b str type expected (type=type_error.str) Note that in this setup, the errors are actually picked up by the individual default field validators for the conint and constr types. Also, in this simple example you would not be able to manually set __a or __b because the values would always be overridden in the root validator. But since I don't know your actual intention, I just set it up like this to trigger your desired validation error. Hope this helps. | 3 | 5 |
74,106,823 | 2022-10-18 | https://stackoverflow.com/questions/74106823/working-poetry-project-with-private-dependencies-inside-docker | I have a Python library hosted in Google Cloud Platform Artifact Registry. Besides, I have a Python project, using Poetry, that depends on the library. This is my project file pyproject.toml: [tool.poetry] name = "Test" version = "0.0.1" description = "Test project." authors = [ "Me <[email protected]>" ] [tool.poetry.dependencies] python = ">=3.8,<4.0" mylib = "0.1.1" [tool.poetry.dev-dependencies] "keyrings.google-artifactregistry-auth" = "^1.1.0" keyring = "^23.9.0" [build-system] requires = ["poetry-core>=1.1.0"] build-backend = "poetry.core.masonry.api" [[tool.poetry.source]] name = "my-lib" url = "https://us-east4-python.pkg.dev/my-gcp-project/my-lib/simple/" secondary = true To enable using my private repository, I installed gcloud CLI and authenticated with my credentials. So when I run this command, I see proper results, like this: $ gcloud auth list ACTIVE ACCOUNT ... * <my-account>@appspot.gserviceaccount.com ... Additionally, I'm using Python keyring togheter with keyrings.google-artifactregistry-auth, as you can see in the project file. So, with this setup, I can run poetry install, the dependency gets downloaded from my private artifact registry, using the authentication from GCP. The issue comes when I try to apply the same principles inside a Docker container. I created a Docker file like this: # syntax = docker/dockerfile:1.3 FROM python:3.9 # Install Poetry RUN curl -sSL https://install.python-poetry.org | python3 - ENV PATH "${PATH}:/root/.local/bin" # Install Google Cloud SDK CLI ARG GCLOUD_VERSION="401.0.0-linux-x86_64" RUN wget -q https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-${GCLOUD_VERSION}.tar.gz && \ tar -xf google-cloud-cli-*.tar.gz && \ ./google-cloud-sdk/install.sh --quiet && \ rm google-cloud-cli-*.tar.gz ENV PATH "${PATH}:/google-cloud-sdk/bin" # install Google Artifact Rrgistry keyring integration RUN pip install keyrings.google-artifactregistry-auth RUN --mount=type=secret,id=GOOGLE_APPLICATION_CREDENTIALS ${GOOGLE_APPLICATION_CREDENTIALS} gcloud auth activate-service-account --key-file=/run/secrets/GOOGLE_APPLICATION_CREDENTIALS RUN gcloud auth list RUN keyring --list-backends WORKDIR /app # copy Poetry project files and install dependencies COPY ./.env* ./ COPY ./pyproject.toml ./poetry.lock* ./ RUN poetry install # copy source files COPY ./app /app/app # run the program CMD poetry run python -m app.main As you can see, I injected the Google credentials file, following this documentation. This works. I used Docker BuildKit secrets, as exposed here (security concerns are not a matter of this question). So, when I try to build the image, I got an authentication error (GOOGLE_APPLICATION_CREDENTIALS is properly set pointing to a valid key file): $ DOCKER_BUILDKIT=1 docker image build --secret id=GOOGLE_APPLICATION_CREDENTIALS,src=${GOOGLE_APPLICATION_CREDENTIALS} -t app-test . ... #19 66.68 <c1>Source (my-lib):</c1> Authorization error accessing https://us-east4-python.pkg.dev/my-gcp-project/my-lib/simple/mylib/ #19 68.21 #19 68.21 RuntimeError #19 68.21 #19 68.22 Unable to find installation candidates for mylib (0.1.1) ... If I execute, line by line, all the commands in the Dockerfile, using the same Google credentials key file outside Docker, I got it working. I even tried to debug inside the image, not executing poetry install, nor poetry run... 
commands, and I saw this, if it helps to debug: # gcloud auth list Credentialed Accounts ACTIVE ACCOUNT * <my-account>@appspot.gserviceaccount.com # keyring --list-backends keyrings.gauth.GooglePythonAuth (priority: 9) keyring.backends.chainer.ChainerBackend (priority: -1) keyring.backends.fail.Keyring (priority: 0) Finally, I even tried following this approach: Using Keyring on headless Linux systems in a Docker container, with the same results: # apt update ... # apt install -y gnome-keyring ... # dbus-run-session -- sh GNOME_KEYRING_CONTROL=/root/.cache/keyring-MEY1T1 SSH_AUTH_SOCK=/root/.cache/keyring-MEY1T1/ssh # poetry install ... • Installing mylib (0.1.1): Failed RuntimeError Unable to find installation candidates for mylib (0.1.1) at ~/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/installation/chooser.py:103 in choose_for 99│ 100│ links.append(link) 101│ 102│ if not links: → 103│ raise RuntimeError(f"Unable to find installation candidates for {package}") 104│ 105│ # Get the best link 106│ chosen = max(links, key=lambda link: self._sort_key(package, link)) 107│ ... I even tried following the advices of this other question. No success. gcloud CLI works inside the container, testing other commands. My guess is that the integration with Keyring is not working properly, but I don't know how to debug it. How can I get my dependency resolved inside a Docker container? | Finally, I found a solution that worked in my use case. There are two main parts: Installing keyrings.google-artifactregistry-auth as a Poetry plugin, using this command: poetry self add keyrings.google-artifactregistry-auth Authenticating inside the container using a service account key file: gcloud auth activate-service-account --key-file=key.json In my case, I use BuildKit secrets to handle it. Then, for instance, the Dockerfile would like this: FROM python:3.9 # Install Poetry RUN curl -sSL https://install.python-poetry.org | python3 - ENV PATH "${PATH}:/root/.local/bin" # install Google Artifact Registry tools for Python as a Poetry plugin RUN poetry self add keyrings.google-artifactregistry-auth # Install Google Cloud SDK CLI ARG GCLOUD_VERSION="413.0.0-linux-x86_64" RUN wget -q https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-${GCLOUD_VERSION}.tar.gz && \ tar -xf google-cloud-cli-*.tar.gz && \ ./google-cloud-sdk/install.sh --quiet && \ rm google-cloud-cli-*.tar.gz ENV PATH "${PATH}:/google-cloud-sdk/bin" # authenticate with gcloud using a BuildKit secret RUN --mount=type=secret,id=gac.json \ gcloud auth activate-service-account --key-file=/run/secrets/gac.json COPY ./pyproject.toml ./poetry.lock* / RUN poetry install # deauthenticate with gcloud once the dependencies are already installed to clean the image RUN gcloud auth revoke --all COPY ./app /app WORKDIR /app CMD ["whatever", "command", "you", "use"] And the Docker build command, providing the secret: DOCKER_BUILDKIT=1 docker image build \ --secret id=gac.json,src=${GOOGLE_APPLICATION_CREDENTIALS} \ -t ${YOUR_TAG} . And with Docker Compose, a similar approach: services: yourapp: build: context: . secrets: - key.json image: yourapp:yourtag ... COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose up --build | 12 | 5 |
74,045,802 | 2022-10-12 | https://stackoverflow.com/questions/74045802/0-invalid-argument-unknown-image-file-format-one-of-jpeg-png-gif-bmp-requ | I have seen Tensorflow Keras error: Unknown image file format. One of JPEG, PNG, GIF, BMP required and Unknown image file format. One of JPEG, PNG, GIF, BMP required these answers. It did not help me completely I am building a simple CNN in google colab Epoch 1/5 --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) <ipython-input-29-a98bc2c91ee1> in <module> ----> 1 history = model_1.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data=test_data, validation_steps=int(0.25 * len(test_data))) 1 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 53 ctx.ensure_initialized() 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 55 inputs, attrs, num_outputs) 56 except core._NotOkStatusException as e: 57 if name is not None: InvalidArgumentError: Graph execution error: 2 root error(s) found. (0) INVALID_ARGUMENT: Unknown image file format. One of JPEG, PNG, GIF, BMP required. [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] [[categorical_crossentropy/softmax_cross_entropy_with_logits/Shape_2/_10]] (1) INVALID_ARGUMENT: Unknown image file format. One of JPEG, PNG, GIF, BMP required. [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] 0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_31356] I am getting the above error. The error is while I try to fit the model Using the previous answers that I have linked, I have verified that there are no improper images in my folders. All images are jpeg only. My code: import tensorflow as tf # Create training and test directory paths train_dir = 'Dataset/train' test_dir = 'Dataset/test' IMG_SIZE = (224,224) BATCH_SIZE=32 # Set up data loaders import tensorflow as tf IMG_SIZE = (224,224) BATCH_SIZE=32 train_data = tf.keras.preprocessing.image_dataset_from_directory(directory=train_dir, image_size=IMG_SIZE, label_mode='categorical', batch_size=BATCH_SIZE) test_data = tf.keras.preprocessing.image_dataset_from_directory(directory=test_dir, image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode='categorical') import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.layers.experimental import preprocessing data_augmentation = keras.Sequential([ preprocessing.RandomFlip('horizontal'), preprocessing.RandomRotation(0.2), preprocessing.RandomZoom(0.2), preprocessing.RandomHeight(0.2), preprocessing.RandomWidth(0.2), # preprocessing.Rescale(1/255.) Keep this model for ResNet. Efficient Net has rescaling buit in ], name='data_augmentation') input_shape = (224,224,3) base_model = tf.keras.applications.EfficientNetB0(include_top=False) base_model.trainable=False # Create the input layer inputs = layers.Input(shape=input_shape, name='input_layer') x=data_augmentation(inputs) # Give base model the inputs after augmentation.. 
Don't train it x = base_model(x,training=False) x = layers.GlobalAveragePooling2D()(x) # Add a dense layer for output outputs = layers.Dense(9, activation='softmax', name='output_layer')(x) # Make a model using the inputs and outputs model_1 = keras.Model(inputs,outputs) # Compile the model model_1.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) history = model_1.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data=test_data, validation_steps=int(0.25 * len(test_data))) I have downloaded all the images from Google search only. Link to dataset: https://drive.google.com/file/d/1dKgzyq2lUF87ggZQ80KUhINhmtVrC_p-/view?usp=sharing | Note that just because an image has a .jpeg or .png extension doesn't mean it is valid. Some images can have a correct extension and still be bad: apart from the extension, the image may still have bad binaries, which is what makes an image corrupt. You need some code to properly fish out corrupt images from your directory during preprocessing. | 3 | 2 |
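The answer above says you need some code to fish out corrupt images but does not show any. Below is a minimal sketch of such a check (my own, not from the answer) using Pillow; the 'Dataset/train' path comes from the question, and a frequent culprit for images saved from Google search is WEBP data behind a .jpg extension, which TensorFlow's decoder rejects.

```python
# Sketch: walk the dataset directory and flag files that Pillow cannot parse as
# JPEG/PNG/GIF/BMP, so they can be removed before calling image_dataset_from_directory.
import os
from PIL import Image

ALLOWED_FORMATS = {"JPEG", "PNG", "GIF", "BMP"}

def find_bad_images(root_dir):
    bad_files = []
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with Image.open(path) as img:
                    if img.format not in ALLOWED_FORMATS:
                        bad_files.append(path)  # e.g. WEBP saved with a .jpg extension
            except Exception:
                bad_files.append(path)  # truncated or non-image file
    return bad_files

print(find_bad_images("Dataset/train"))
```

Deleting or re-encoding the reported files before building the datasets should clear the Graph execution error.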
74,105,403 | 2022-10-18 | https://stackoverflow.com/questions/74105403/determine-if-code-is-running-on-databricks-or-ide-pycharm | I am in the process of building a Python package that can be used by Data Scientists to manage their MLOps lifecycle. Now, this package can be used either locally (usually on PyCharm) or on Databricks. I want a certain functionality of the package to be dependent on where it is running, i.e. I want it to do something different if it is called by a Databricks notebook and something else entirely if it is running locally. Is there any way I can determine where it is being called from? I am a little doubtful as to whether we can use something like the following, which checks if your code is running in a notebook or not, since this will be a package stored in your Databricks environment: How can I check if code is executed in the IPython notebook? | The workaround I've found to work is to check for Databricks-specific environment variables. import os def is_running_in_databricks() -> bool: return "DATABRICKS_RUNTIME_VERSION" in os.environ | 6 | 8 |
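As a usage illustration of the helper from the answer above (the paths here are made-up examples, not anything the answer prescribes), the package could branch on the check like this:

```python
# Hypothetical usage inside the package: pick a storage location depending on
# where the code runs. Both paths are invented for the example.
import os

def is_running_in_databricks() -> bool:
    return "DATABRICKS_RUNTIME_VERSION" in os.environ

def default_artifact_dir() -> str:
    if is_running_in_databricks():
        return "/dbfs/tmp/mlops_artifacts"            # example DBFS location
    return os.path.expanduser("~/mlops_artifacts")    # local run, e.g. from PyCharm

print(default_artifact_dir())
```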
74,114,745 | 2022-10-18 | https://stackoverflow.com/questions/74114745/how-to-fix-deprecationwarning-dataframes-with-non-bool-types-result-in-worse-c | I have been trying to implement the Apriori Algorithm in python. There are several examples online, they all use similar methods and mostly the same example dataset. The reference link: https://www.kaggle.com/code/rockystats/apriori-algorithm-or-market-basket-analysis/notebook (starting from the line [26]) I have a different dataset that has the same structure as the example datasets online. I keep getting the "DeprecationWarning: DataFrames with non-bool types result in worse computationalperformance and their support might be discontinued in the future.Please use a DataFrame with bool type" error. Here is my code: import pandas as pd import numpy as np from mlxtend.frequent_patterns import apriori, association_rules df1 = pd.read_csv(r'C:\Users\USER\dataset', sep=';') df=df1.fillna(0) basket = pd.pivot_table(data=df, index='cust_id', columns='Product', values='quantity', aggfunc='count',fill_value=0.0) def convert_into_binary(x): if x > 0: return 1 else: return 0 basket_sets = basket.applymap(convert_into_binary) frequent_itemsets = apriori(basket_sets, min_support=0.07, use_colnames=True) print(frequent_itemsets) # association rule rules = association_rules(frequent_itemsets, metric="lift", min_threshold=1) print(rules) In addition, in the last step of my code, I get an empty dataframe; I can see the column headings of the dataset but the output is empty. Empty DataFrame Columns: [antecedents, consequents, antecedent support, consequent support, support, confidence, lift, leverage, conviction] Index: [] I am not sure if this issue is related to this error that I am having. I am new to python and I would really appreciate assistance and support on this issue. | I ran into the same issue even after converting my dataframe fields to 0 and 1. The fix was just making sure the apriori module knows the dataframe is of boolean type, so in your case you should run this : frequent_itemsets = apriori(basket_sets.astype('bool'), min_support=0.07, use_colnames=True) In addition, in the last step of my code, I get an empty dataframe; I can see the column headings of the dataset but the output is empty. Try using a smaller min_support | 5 | 7 |
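A slightly shorter variant of the same fix, sketched under the assumption that df is the DataFrame read from the CSV in the question: build the basket as booleans directly, so apriori never sees integer columns and the warning never appears.

```python
# Sketch: compare the pivot table to 0 to get a boolean basket in one step,
# replacing the applymap(convert_into_binary) + astype('bool') combination.
import pandas as pd
from mlxtend.frequent_patterns import apriori

# df is assumed to be the filled DataFrame from the question
basket_sets = pd.pivot_table(
    data=df, index='cust_id', columns='Product',
    values='quantity', aggfunc='count', fill_value=0
) > 0  # True where the customer bought the product

frequent_itemsets = apriori(basket_sets, min_support=0.07, use_colnames=True)
```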
74,073,543 | 2022-10-14 | https://stackoverflow.com/questions/74073543/remove-features-of-json-if-lat-lon-keys-not-within-other-json-boundaries | I'm trying to create a weather contour for the United States from an existing data frame and add it to a Dash Mapbox map, but the json file I am creating "fills in" areas where data does not exist in an attempt to fill out the entire array. The unwanted data can be seen shaded in the image below. I'd like to remove data from the weather json file where the lat-longs from the weather json file and the states json file do not intersect. Better yet would be a solution where weather data was never created at all for areas outside of the states_20m.geojson. The pertinent data files can be found at this GitHub Link. They are the weather dataframe and the states_20m.geojson. Below is my code. import pandas as pd from datetime import datetime import plotly.express as px import plotly.graph_objects as go import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.interpolate import griddata,RectSphereBivariateSpline,Rbf import geojsoncontour import json import branca import scipy as sp import scipy.ndimage from geojson import Feature, Polygon, dump import geopandas as gpd ##### Load in the main DataFrame and define vars##### path = r'date_data.csv' df = pd.read_csv(path, index_col=[0]) col = 'Day_Temp' temp_levels = [-20,0,10,20,32] levels = temp_levels unit = 'deg F' colors = ['#f0ffff','#add8e6','#7bc8f6','#069af6','#0343df' ##### Create the weather contour ##### data = [] df_copy = df.copy() ##### Create the GEOJSON Layer ##### vmin = 0 vmax = 1 cm = branca.colormap.LinearColormap(colors, vmin=vmin, vmax=vmax).to_step(len(levels)) x_orig = (df_copy.long.values.tolist()) y_orig = (df_copy.lat.values.tolist()) z_orig = np.asarray(df_copy[col].values.tolist()) x_arr = np.linspace(np.min(x_orig), np.max(x_orig), 5000) y_arr = np.linspace(np.min(y_orig), np.max(y_orig), 5000) x_mesh, y_mesh = np.meshgrid(x_arr, y_arr) xscale = df_copy.long.max() - df_copy.long.min() yscale = df_copy.lat.max() - df_copy.lat.min() scale = np.array([xscale, yscale]) z_mesh = griddata((x_orig, y_orig), z_orig, (x_mesh, y_mesh), method='linear') sigma = [5, 5] z_mesh = sp.ndimage.filters.gaussian_filter(z_mesh, sigma, mode='nearest') # Create the contour contourf = plt.contourf(x_mesh, y_mesh, z_mesh, levels, alpha=0.9, colors=colors, linestyles='none', vmin=vmin, vmax=vmax) # Convert matplotlib contourf to geojson geojson = geojsoncontour.contourf_to_geojson( contourf=contourf, min_angle_deg=3, ndigits=2, unit=unit, stroke_width=1, fill_opacity=0.3) d = json.loads(geojson) len_features=len(d['features']) if not data: data.append(d) else: for i in range(len(d['features'])): data[0]['features'].append(d['features'][i]) weather_json = json.loads(geojson) ###### Create the DataFrame ##### lats = [30,33,35,40] lons = [-92,-94,-96,-100] dat = [1000,2000,500,12500] df = pd.DataFrame(list(zip(lats,lons,dat)), columns = ['lat', 'lon', 'data']) ##### Add the two on top of on another in a Dash Mapbox ##### # reading in the geospatial data for the state boundaries with open('States_20m.geojson') as g: states_json = json.load(g) column = "data" fig = px.density_mapbox( df, lat="lat", lon="lon", z=column, hover_data={ "lat": True, # remove from hover data "lon": True, # remove from hover data column: True, }, center=dict(lat=38.5, lon=-96), zoom=3, radius=30, opacity=0.4, mapbox_style="carto-positron", color_continuous_scale=['rgb(0,0,0)', 'rgb(19,48,239)', 
'rgb(115,249,253)', 'rgb(114,245,77)', 'rgb(254,251,84)', 'rgb(235,70,38)'], range_color = [0, 2000] ) # Weather outlines fig.update_layout( mapbox={ "layers": [ { "source": f, "line": {"width":1}, # "type":"line", "type":"fill", "color": f["properties"]["fill"], "opacity": 1, } for f in weather_json["features"] ], } ) # States outlines fig.update_layout( mapbox={ "layers": [ { "source": g, "line": {"width":1}, "type":"line", "color": 'black', "opacity": 0.5, } for g in states_json["features"] ], } ) fig.show() | Think of each weather contour as a polygon and the outline of the US as another polygon. What you need is the overlap of the US polygon with each contour polygon. import pandas as pd from datetime import datetime import plotly.express as px import plotly.graph_objects as go import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.interpolate import griddata,RectSphereBivariateSpline,Rbf import geojsoncontour import json import branca import scipy as sp import scipy.ndimage from geojson import Feature, Polygon, dump import geopandas as gpd from urllib.request import urlopen import shapely.geometry from shapely.geometry import Point, Polygon, GeometryCollection, Polygon, mapping from shapely.ops import unary_union ##### Load in the main DataFrame and define vars##### df = pd.read_csv(r'https://raw.githubusercontent.com/jkiefn1/SO_Json_Question/main/date_data.csv', index_col=[0]) col = 'Day_Temp' temp_levels = [-20,0,10,20,32] levels = temp_levels unit = 'deg F' colors = ['#f0ffff','#add8e6','#7bc8f6','#069af6','#0343df'] ##### Create the weather contour ##### data = [] df_copy = df.copy() ##### Create the GEOJSON Layer ##### vmin = 0 vmax = 1 cm = branca.colormap.LinearColormap(colors, vmin=vmin, vmax=vmax).to_step(len(levels)) x_orig = (df_copy.long.values.tolist()) y_orig = (df_copy.lat.values.tolist()) z_orig = np.asarray(df_copy[col].values.tolist()) x_arr = np.linspace(np.min(x_orig), np.max(x_orig), 5000) y_arr = np.linspace(np.min(y_orig), np.max(y_orig), 5000) x_mesh, y_mesh = np.meshgrid(x_arr, y_arr) xscale = df_copy.long.max() - df_copy.long.min() yscale = df_copy.lat.max() - df_copy.lat.min() scale = np.array([xscale, yscale]) z_mesh = griddata((x_orig, y_orig), z_orig, (x_mesh, y_mesh), method='linear') sigma = [5, 5] z_mesh = sp.ndimage.filters.gaussian_filter(z_mesh, sigma, mode='nearest') # Create the contour contourf = plt.contourf(x_mesh, y_mesh, z_mesh, levels, alpha=0.9, colors=colors, linestyles='none', vmin=vmin, vmax=vmax) # Convert matplotlib contourf to geojson geojson = geojsoncontour.contourf_to_geojson( contourf=contourf, min_angle_deg=3, ndigits=2, unit=unit, stroke_width=1, fill_opacity=0.3) d = json.loads(geojson) len_features=len(d['features']) if not data: data.append(d) else: for i in range(len(d['features'])): data[0]['features'].append(d['features'][i]) weather_json = json.loads(geojson) ###### Create the DataFrame ##### lats = [30,33,35,40] lons = [-92,-94,-96,-100] dat = [1000,2000,500,12500] df = pd.DataFrame(list(zip(lats,lons,dat)), columns = ['lat', 'lon', 'data']) ##### Add the two on top of on another in a Dash Mapbox ##### # reading in the geospatial data for the state boundaries states_json = json.loads(urlopen(r'https://raw.githubusercontent.com/jkiefn1/SO_Json_Question/main/States_20m.geojson').read()) #creating outline of the US by joining state outlines into one multipolygon usa = gpd.GeoDataFrame.from_features(states_json['features']) usa_poly = gpd.GeoSeries(unary_union(usa['geometry'])).iloc[0] #geojson to 
geopandas gdf = gpd.GeoDataFrame.from_features(weather_json['features']) #overlapping intersection of US poly with each contour gdf['inter'] = gdf['geometry'].buffer(0).intersection(usa_poly) #update weather_json for n in range(len(gdf)): weather_json['features'][n]['geometry'] = mapping(gdf['inter'].iloc[n]) column = "data" fig = px.density_mapbox( df, lat="lat", lon="lon", z=column, hover_data={ "lat": True, # remove from hover data "lon": True, # remove from hover data column: True, }, center=dict(lat=38.5, lon=-96), zoom=3, radius=30, opacity=0.4, mapbox_style="carto-positron", color_continuous_scale=['rgb(0,0,0)', 'rgb(19,48,239)', 'rgb(115,249,253)', 'rgb(114,245,77)', 'rgb(254,251,84)', 'rgb(235,70,38)'], range_color = [0, 2000] ) # Weather outlines fig.update_layout( mapbox={ "layers": [ { "source": f, "line": {"width":1}, "type":"line", "type":"fill", "color": f["properties"]["fill"], "opacity": 1, } for f in weather_json["features"] ], } ) # States outlines fig.update_layout( mapbox={ "layers": [ { "source": g, "line": {"width":1}, "type":"line", "color": 'black', "opacity": 0.5, } for g in states_json["features"] ], } ) fig.show() | 3 | 3 |
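To isolate the core idea of the answer above, here is a toy sketch (my own shapes, not the weather data): clip one polygon by another with shapely, using buffer(0) as the usual trick for repairing slightly invalid geometries such as the self-touching rings matplotlib contours sometimes produce.

```python
# Toy illustration of the clipping pattern used for the weather contours.
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Two adjacent squares stand in for the merged state outlines
usa_like = unary_union([Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]),
                        Polygon([(4, 0), (8, 0), (8, 4), (4, 4)])])
contour = Polygon([(2, 2), (10, 2), (10, 6), (2, 6)])  # stands in for one contour

# buffer(0) repairs slightly invalid geometry before intersecting
clipped = contour.buffer(0).intersection(usa_like)
print(clipped.bounds)  # only the part of the contour inside the outline remains
```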
74,111,323 | 2022-10-18 | https://stackoverflow.com/questions/74111323/python-memory-limit-exceeded-during-google-app-engine-deployment | I am creating a Python project and deploying it to Google App Engine. When I use the deployed link in another project, I get the following error message in Google Cloud Logging: Exceeded hard memory limit of 256 MB with 667 MB after servicing 0 requests total. Consider setting a larger instance class in app.yaml. So, I looked at this and this link and here are the main points: Instance Class Memory Limit CPU Limit Supported Scaling Types F1 (default) 256 MB 600 MHz automatic F2 512 MB 1.2 GHz automatic F4 1024 MB 2.4 GHz automatic F4_1G 2048 MB 2.4 GHz automatic instance_class: F2 The error says the limit is 256 MB, but 667 MB is recorded. The memory limit for F1 and the memory limit for F2 are less than 667 MB. So I added instance_class: F2 to app.yaml and changed F2 to F4. When I do the above, I get the following error in Google Cloud Logging: Exceeded hard memory limit of 1024 MB with 1358 MB after servicing 0 requests total. Consider setting a larger instance class in app.yaml. This is a bit strange since the recorded memory is from 667 MB to 1358 MB. The memory limit of F4_1G is over 1358 MB, so I changed instance_class: F4 to instance_class: F4_1G. But it shows me the following error in Google Cloud Logging: Exceeded hard memory limit of 2048 MB with 2194 MB after servicing 0 requests total. Consider setting a larger instance class in app.yaml. This is very strange since the recorded memory goes from 667 MB to 1358 MB to 2194 MB. Update: I have reproduced this problem without additional instance class. Please refer error log below: 0: { logMessage: "Exceeded soft memory limit of 256 MB with 924 MB after servicing 0 requests total. Consider setting a larger instance class in app.yaml." severity: "CRITICAL" time: "2022-10-19T06:00:39.747954Z" } 1: { logMessage: "This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application." severity: "INFO" time: "2022-10-19T06:00:39.748029Z" } 2: { logMessage: "While handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application or may be using an instance with insufficient memory. Consider setting a larger instance class in app.yaml." severity: "WARNING" time: "2022-10-19T06:00:39.748031Z" } Another finding: When the app is running in local terminal, it consumes 1 GB - 3 GB memory to running the app fully loaded which takes around 30 seconds. Meanwhile, the memory usage is 700 MB - 750 MB during idle state, and 750 MB - 800 MB to serve single request. Can anyone explain to me why this is happening? How can I fix this error and use the deployed link successfully? I would appreciate if someone could help me with this. Thank you in advance! | First, I store larger files from local to cloud. Then I successfully deployed the code to Google App Engine without any errors. | 5 | 1 |
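The accepted answer is terse, so here is a hedged sketch of one way to read it: keep the large files in Cloud Storage and load them lazily when a request needs them, rather than at import time (memory blowing up "after servicing 0 requests" usually points at module-level loading). The bucket and blob names are made up, and google-cloud-storage would need to be in requirements.txt.

```python
# Sketch: lazy, cached download of a big asset from Cloud Storage instead of
# bundling it with the App Engine deployment and loading it at import time.
from google.cloud import storage

_cached_asset = None

def get_big_asset() -> bytes:
    global _cached_asset
    if _cached_asset is None:
        client = storage.Client()
        blob = client.bucket("my-app-assets").blob("big_asset.bin")  # made-up names
        _cached_asset = blob.download_as_bytes()
    return _cached_asset
```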
74,057,367 | 2022-10-13 | https://stackoverflow.com/questions/74057367/how-to-get-rid-of-the-in-place-futurewarning-when-setting-an-entire-column-from | In pandas v.1.5.0 a new warning has been added, which is shown, when a column is set from an array of different dtype. The FutureWarning informs about a planned semantic change, when using iloc: the change will be done in-place in future versions. The changelog instructs what to do to get the old behavior, but there is no hint how to handle the situation, when in-place operation is in fact the right choice. The example from the changelog: df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2']) original_prices = df['price'] new_prices = np.array([98, 99]) df.iloc[:, 0] = new_prices df.iloc[:, 0] This is the warning, which is printed in pandas 1.5.0: FutureWarning: In a future version, df.iloc[:, i] = newvals will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either df[df.columns[i]] = newvals or, if columns are non-unique, df.isetitem(i, newvals) How to get rid of the warning, if I don't care about in-place or not, but want to get rid of the warning? Am I supposed to change dtype explicitly? Do I really need to catch the warning every single time I need to use this feature? Isn't there a better way? | I haven't found any better way than suppressing the warning using the warnings module: import numpy as np import pandas as pd import warnings df = pd.DataFrame({"price": [11.1, 12.2]}, index=["book1", "book2"]) original_prices = df["price"] new_prices = np.array([98, 99]) with warnings.catch_warnings(): # Setting values in-place is fine, ignore the warning in Pandas >= 1.5.0 # This can be removed, if Pandas 1.5.0 does not need to be supported any longer. # See also: https://stackoverflow.com/q/74057367/859591 warnings.filterwarnings( "ignore", category=FutureWarning, message=( ".*will attempt to set the values inplace instead of always setting a new array. " "To retain the old behavior, use either.*" ), ) df.iloc[:, 0] = new_prices df.iloc[:, 0] | 24 | 8 |
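Besides suppressing, two alternatives are worth sketching (based on the wording of the warning quoted in the question, not on the accepted answer): the label-based assignment the warning itself recommends, and matching the dtype up front, which in many cases avoids the message because nothing needs to be cast.

```python
# Sketch of two ways to avoid the FutureWarning without filtering it.
import numpy as np
import pandas as pd

df = pd.DataFrame({"price": [11.1, 12.2]}, index=["book1", "book2"])
new_prices = np.array([98, 99])

# 1) Label-based form suggested by the warning text (keeps the "new array" behavior)
df[df.columns[0]] = new_prices

# 2) Align dtypes first, so the positional write is not a dtype change
df = pd.DataFrame({"price": [11.1, 12.2]}, index=["book1", "book2"])
df.iloc[:, 0] = new_prices.astype(df["price"].dtype)  # often avoids the message
```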
74,075,122 | 2022-10-14 | https://stackoverflow.com/questions/74075122/the-most-efficient-way-rather-than-using-np-setdiff1d-and-np-in1d-to-remove-com | I need a much faster code to remove values of an 1D array (array length ~ 10-15) that are common with another 1D array (array length ~ 1e5-5e5 --> rarely up to 7e5), which are index arrays contain integers. There is no duplicate in the arrays, and they are not sorted and the order of the values must be kept in the main array after modification. I know that can be achieved using such np.setdiff1d or np.in1d (which both are not supported for numba jitted in no-python mode), and other similar posts (e.g. this) have not much more efficient way to do so, but performance is important here because all the values in the main index array will be gradually be removed in loops. import numpy as np import numba as nb n = 500000 r = 10 arr1 = np.random.permutation(n) arr2 = np.random.randint(0, n, r) # @nb.jit def setdif1d_np(a, b): return np.setdiff1d(a, b, assume_unique=True) # @nb.jit def setdif1d_in1d_np(a, b): return a[~np.in1d(a, b)] There is another related post that proposed by norok2 for 2D arrays, that is ~15 times faster solution (hashing-like way using numba) than usual methods described there. This solution may be the best if it could be prepared for 1D arrays: @nb.njit def mul_xor_hash(arr, init=65537, k=37): result = init for x in arr.view(np.uint64): result = (result * k) ^ x return result @nb.njit def setdiff2d_nb(arr1, arr2): # : build `delta` set using hashes delta = {mul_xor_hash(arr2[0])} for i in range(1, arr2.shape[0]): delta.add(mul_xor_hash(arr2[i])) # : compute the size of the result n = 0 for i in range(arr1.shape[0]): if mul_xor_hash(arr1[i]) not in delta: n += 1 # : build the result result = np.empty((n, arr1.shape[-1]), dtype=arr1.dtype) j = 0 for i in range(arr1.shape[0]): if mul_xor_hash(arr1[i]) not in delta: result[j] = arr1[i] j += 1 return result I tried to prepare that for 1D arrays, but I have some problems/question with that. At first, IDU what does mul_xor_hash exactly do, and if init and k are arbitrary selected or not Why mul_xor_hash will not work without nb.njit: File "C:/Users/Ali/Desktop/test - Copy - Copy.py", line 21, in mul_xor_hash result = (result * k) ^ x TypeError: ufunc 'bitwise_xor' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' IDK how to implement mul_xor_hash on 1D arrays (if it could), which I guess may make it faster more than for 2Ds, so I broadcast the input arrays to 2D by [None, :], which get the following error just for arr2: print(mul_xor_hash(arr2[0])) ValueError: new type not compatible with array and what does delta do I am searching the most efficient way in this regard. In the absence of better method than norok2 solution, how to prepare this solution for 1D arrays? | Understanding the hash-based solution At first, IDU what does mul_xor_hash exactly do, and if init and k are arbitrary selected or not mul_xor_hash is a custom hash function. Functions mixing xor and multiply (possibly with shifts) are known to be relatively fast to compute the hash of a raw data buffer. The multiplication tends to shuffle bits and the xor is used to somehow combine/accumulate the result in a fixed size small value (ie. the final hash). There are many different hashing functions. Some are faster than others, some cause more collisions than other in a given context. 
A fast hashing function causing too many collisions can be useless in practice as it would result in a pathological situation where all conflicting values needs to be compared. This is why fast hash functions are hard to implement. init and k are parameter certainly causing the hash to be pretty balance. This is pretty common in such a hash function. k needs to be sufficiently big for the multiplication to shuffle bits and it should typically also be a prime number (values like power of two tends to increase collisions due to modular arithmetic behaviours). init plays a significant role only for very small arrays (eg. with 1 item): it helps to reduce collisions by xoring the final hash by a non-trivial constant. Indeed, if arr.size = 1, then result = (init * k) ^ arr[0] where init * k is a constant. Having an identity hash function equal to arr[0] is known to be bad since it tends to result in many collisions (this is a complex topic, but put it shortly, arr[0] can be divided by the number of buckets in the hash table for example). Thus, init should be a relatively big number and init * k should also be a big non-trivial value (a prime number is a good target value). Why mul_xor_hash will not work without nb.njit It depends of the input. The input needs to be a 1D array and have a raw size in byte divisible by 8 (eg. 64-bit items, 2n x 32-bit ones, 4n x 16-bit one or 8n 8-bit ones). Here is some examples: mul_xor_hash(np.random.rand(10)) mul_xor_hash(np.arange(10)) # Do not work with 9 and what does delta do It is a set containing the hash of the arr2 row so to find matching lines faster than comparing them without hashes. how to prepare this solution for 1D arrays? AFAIK, hashes are only use to avoid comparisons of rows but this is because the input is the 2D array. In 1D, there is no such a problem. There is big catch with this method: it only works if there is no hash collisions. Otherwise, the implementation wrongly assumes that values are equal even if they are not! @norok explicitly mentioned it in the comments though: Note that the collision handling for the hashings should also be implemented Faster implementation Using the 2D solution of @norok2 for 1D is not a good idea since hashes will not make it faster the way they are used. In fact, a set already use a hash function internally anyway. Not to mention collisions needs to be properly implemented (which is done by a set). Using a set is a relatively good idea since it causes the complexity to be O(n + m) where n = len(arr1) and m = len(arr2). That being said, if arr1 is converted to a set, then it will be too big to fit in L1 cache (due to the size of arr1 in your case) resulting in slow cache misses. Additionally, the growing size of the set will cause values to be re-hashed which is not efficient. If arr2 is converted to a set, then the many hash table fetches will not be very efficient since arr2 is very small in your case. This is why this solution is sub-optimal. One solution is to split arr1 in chunks and then build a set based on the target chunk. You can then check if a value is in the set or not efficiently. Building the set is still not very efficient due to the growing size. This problem is due to Python itself which do not provide a way to reserve some space for the data structure like other languages do (eg. C++). One solution to avoid this issue is simply to reimplement an hash-table which is not trivial and cumbersome. 
Actually, Bloom filters can be used to speed up this process since they can quickly find if there is no collision between the two sets arr1 and arr2 in average (though they are not trivial to implement). Another optimization is to use multiple threads to compute the chunks in parallel since they are independent. That being said, the appending to the final array is not easy to do efficiently in parallel, especially since you do not want the order to be modified. One solution is to move away the copy from the parallel loop and do it serially but this is slow and AFAIK there is no simple way to do that in Numba currently (since the parallelism layer is very limited). Consider using native languages like C/C++ for an efficient parallel implementation. In the end, hashing can be pretty complex and the speed up can be quite small compared to a naive implementation with two nested loops since arr2 only have few items and modern processors can compare values quickly using SIMD instructions (while hash-based method can hardly benefit from them on mainstream processors). Unrolling can help to write a pretty simple and fast implementation. Again, unfortunately, Numba use LLVM-Jit internally which appear to fail to vectorize such a simple code (certainly due to missing optimizations in either LLVM-Jit or even LLVM itself). As a result, the non vectorized code is finally a bit slower (rather than 4~10 times faster on a modern mainstream processor). One solution is to use a C/C++ code instead to do that (or possibly Cython). Here is a serial implementation using basic Bloom filters: @nb.njit('uint32(int32)') def hash_32bit_4k(value): return (np.uint32(value) * np.uint32(27_644_437)) & np.uint32(0x0FFF) @nb.njit(['int32[:](int32[:], int32[:])', 'int32[:](int32[::1], int32[::1])']) def setdiff1d_nb_faster(arr1, arr2): out = np.empty_like(arr1) bloomFilter = np.zeros(4096, dtype=np.uint8) for j in range(arr2.size): bloomFilter[hash_32bit_4k(arr2[j])] = True cur = 0 for i in range(arr1.size): # If the bloom-filter value is true, we know arr1[i] is not in arr2. # Otherwise, there is maybe a false positive (conflict) and we need to check to be sure. if bloomFilter[hash_32bit_4k(arr1[i])] and arr1[i] in arr2: continue out[cur] = arr1[i] cur += 1 return out[:cur] Here is an untested variant that should work for 64-bit integers (floating point numbers need memory views and possibly a prime constant too): @nb.njit('uint64(int64)') def hash_32bit_4k(value): return (np.uint64(value) * np.uint64(67_280_421_310_721)) & np.uint64(0x0FFF) Note that if all the values in the small array are contained in the main array in each loop, then we can speed up the arr1[i] in arr2 part by removing values from arr2 when we find them. That being said, collisions and findings should be very rare so I do not expect this to be significantly faster (not to mention it adds some overhead and complexity). If items are computed in chunks, then the last chunks can be directly copied without any check but the benefit should still be relatively small. Note that this strategy can be effective for the naive (C/C++) SIMD implementation previously mentioned though (it can be about 2x faster). Generalization and parallel implementation This section focus on the algorithm to use regarding the input size. It particularly details an SIMD-based implementation and discuss about the use of multiple threads. First of all, regarding the value r, the best algorithm to use can be different. 
More specifically: when r is 0, the best thing to do is to return the input array arr1 unmodified (possibly a copy to avoid issue with in-place algorithms); when r is 1, we can use one basic loop iterating over the array, but the best implementation is likely to use np.where of Numpy which is highly optimized for that when r is small like <10, then using a SIMD-based implementation should be particularly efficient, especially if the iteration range of the arr2-based loop is known at compile-time and is unrolled for bigger r values that are still relatively small (eg. r < 1000 and r << n), the provided hash-based solution should be one of the best; for larger r values with r << n, the hash-based solution can be optimized by packing boolean values as bits in bloomFilter and by using multiple hash-functions instead of one so to better handle collisions while being more cache-friendly (in fact, this is what actual bloom filters does); note that multi-threading can be used so speed up the lookups when r is huge and r << n; when r is big and not much smaller than n, then the problem is pretty hard to solve efficiently and the best solution is certainly to sort both arrays (typically with a radix sort) and use a merge-based method to remove the duplicates, possibly with multiple threads when both r and n are huge (hard to implement). Let's start with the SIMD-based solution. Here is an implementation: @nb.njit('int32[:](int32[::1], int32[::1])') def setdiff1d_nb_simd(arr1, arr2): out = np.empty_like(arr1) limit = arr1.size // 4 * 4 limit2 = arr2.size // 2 * 2 cur = 0 z32 = np.int32(0) # Tile (x4) based computation for i in range(0, limit, 4): f0, f1, f2, f3 = z32, z32, z32, z32 v0, v1, v2, v3 = arr1[i], arr1[i+1], arr1[i+2], arr1[i+3] # Unrolled (x2) loop searching for a match in `arr2` for j in range(0, limit2, 2): val1 = arr2[j] val2 = arr2[j+1] f0 += (v0 == val1) + (v0 == val2) f1 += (v1 == val1) + (v1 == val2) f2 += (v2 == val1) + (v2 == val2) f3 += (v3 == val1) + (v3 == val2) # Remainder of the previous loop if limit2 != arr2.size: val = arr2[arr2.size-1] f0 += v0 == val f1 += v1 == val f2 += v2 == val f3 += v3 == val if f0 == 0: out[cur] = arr1[i+0]; cur += 1 if f1 == 0: out[cur] = arr1[i+1]; cur += 1 if f2 == 0: out[cur] = arr1[i+2]; cur += 1 if f3 == 0: out[cur] = arr1[i+3]; cur += 1 # Remainder for i in range(limit, arr1.size): if arr1[i] not in arr2: out[cur] = arr1[i] cur += 1 return out[:cur] It turns out this implementation is always slower than the hash-based one on my machine since Numba clearly generate an inefficient for the inner arr2-based loop and this appears to come from broken optimizations related to the ==: Numba simply fail use SIMD instructions for this operation (for no apparent reasons). This prevent many alternative SIMD-related codes to be fast as long as they are using Numba. Another issue with Numba is that np.where is slow since it use a naive implementation while the one of Numpy has been heavily optimized. The optimization done in Numpy can hardly be applied to the Numba implementation due to the previous issue. This prevent any speed up using np.where in a Numba code. In practice, the hash-based implementation is pretty fast and the copy takes a significant time on my machine already. The computing part can be speed up using multiple thread. This is not easy since the parallelism model of Numba is very limited. 
The copy cannot be easily optimized with Numba (one can use non-temporal store but this is not yet supported by Numba) unless the computation is possibly done in-place. To use multiple threads, one strategy is to first split the range in chunk and then: build a boolean array determining, for each item of arr1, whether the item is found in arr2 or not (fully parallel) count the number of item found by chunk (fully parallel) compute the offset of the destination chunk (hard to parallelize, especially with Numba, but fast thanks to chunks) copy the chunk to the target location without copying found items (fully parallel) Here is an efficient parallel hash-based implementation: @nb.njit('int32[:](int32[:], int32[:])', parallel=True) def setdiff1d_nb_faster_par(arr1, arr2): # Pre-computation of the bloom-filter bloomFilter = np.zeros(4096, dtype=np.uint8) for j in range(arr2.size): bloomFilter[hash_32bit_4k(arr2[j])] = True chunkSize = 1024 # To tune regarding the kind of input chunkCount = (arr1.size + chunkSize - 1) // chunkSize # Find for each item of `arr1` if the value is in `arr2` (parallel) # and count the number of item found for each chunk on the fly. # Note: thanks to page fault, big parts of `found` are not even written in memory if `arr2` is small found = np.zeros(arr1.size, dtype=nb.bool_) foundCountByChunk = np.empty(chunkCount, dtype=nb.uint16) for i in nb.prange(chunkCount): start, end = i * chunkSize, min((i + 1) * chunkSize, arr1.size) foundCountInChunk = 0 for j in range(start, end): val = arr1[j] if bloomFilter[hash_32bit_4k(val)] and val in arr2: found[j] = True foundCountInChunk += 1 foundCountByChunk[i] = foundCountInChunk # Compute the location of the destination chunks (sequential) outChunkOffsets = np.empty(chunkCount, dtype=nb.uint32) foundCount = 0 for i in range(chunkCount): outChunkOffsets[i] = i * chunkSize - foundCount foundCount += foundCountByChunk[i] # Parallel chunk-based copy out = np.empty(arr1.size-foundCount, dtype=arr1.dtype) for i in nb.prange(chunkCount): srcStart, srcEnd = i * chunkSize, min((i + 1) * chunkSize, arr1.size) cur = outChunkOffsets[i] # Optimization: we can copy the whole chunk if there is nothing found in it if foundCountByChunk[i] == 0: out[cur:cur+(srcEnd-srcStart)] = arr1[srcStart:srcEnd] else: for j in range(srcStart, srcEnd): if not found[j]: out[cur] = arr1[j] cur += 1 return out This implementation is the fastest for the target input on my machine. It is generally fast when n is quite big and the overhead to create threads is relatively small on the target platform (eg. on PCs but typically not computing servers with many cores). The overhead of the parallel implementation is significant so the number of core on the target machine needs to be at least 4 so the implementation can be significantly faster than the sequential implementation. It may be useful to tune the chunkSize variable for the target inputs. If r << n, it is better to use a pretty big chunkSize. That being said, the number of chunk needs to be sufficiently big for multiple thread to operate on many chunks. Thus, chunkSize should be significantly smaller than n / numberOfThreads. On my machine most of the time (65-70%) is spent in the final copy which is mostly memory-bound and can hardly be optimized further with Numba. 
Results Here are results on my i5-9600KF-based machine (with 6 cores): setdif1d_np: 2.65 ms setdif1d_in1d_np: 2.61 ms setdiff1d_nb: 2.33 ms setdiff1d_nb_simd: 1.85 ms setdiff1d_nb_faster: 0.73 ms setdiff1d_nb_faster_par: 0.49 ms The best provided implementation is about 4~5 times faster than the other ones. | 3 | 10 |
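The answer does not show how those timings were produced; a minimal harness along these lines (a sketch, reusing function names and input sizes from the question and answer above) could reproduce comparable numbers.

```python
# Sketch of a timing harness for the setdiff variants defined above.
import numpy as np
import timeit

n, r = 500_000, 10
arr1 = np.random.permutation(n).astype(np.int32)      # int32 to match the Numba signatures
arr2 = np.random.randint(0, n, r).astype(np.int32)

funcs = {
    "setdif1d_np": setdif1d_np,                  # defined in the question
    "setdif1d_in1d_np": setdif1d_in1d_np,        # defined in the question
    "setdiff1d_nb_faster": setdiff1d_nb_faster,  # defined in the answer
}

for name, fn in funcs.items():
    fn(arr1, arr2)  # warm-up call so Numba compilation time is not measured
    t = timeit.timeit(lambda: fn(arr1, arr2), number=100) / 100
    print(f"{name}: {t * 1e3:.2f} ms")
```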
74,044,291 | 2022-10-12 | https://stackoverflow.com/questions/74044291/pyenv-poetry-itself-running-on-older-python-version-what-to-do | Update: Which Python should I use to install poetry? System Python: That is an excellent idea. Once, however, poetry self update was trying to update a system package without the necessary permissions. Pyenv: A good solution. Nonetheless, if Python is updated and the old installation is deleted, poetry will stop working because it is not aware of the new python version. Set global python with pyenv pyenv global 3.10.7 Install poetry $ curl -sSL https://install.python-poetry.org | python3 - Change global python pyenv global 3.10.8 Now, poetry still runs on Python-3.10.7. If I uninstall this python version, poetry crashes. How can I instruct the virtual environment of poetry to use the new python version? A solution is to uninstall and reinstall it: $ curl -sSL https://install.python-poetry.org | python3 - --uninstall $ curl -sSL https://install.python-poetry.org | python3 - Is there any other way? | The decision is to use Pyenv to install Poetry. If the older python version should be deleted: Uninstall Poetry. Delete old Python version. Install Poetry. | 3 | 1 |
74,058,894 | 2022-10-13 | https://stackoverflow.com/questions/74058894/event-loop-is-closed-in-a-celery-worker | I am facing multiple issues using an async ODM inside my celery worker First i wasn't able to init my database models using celery worker signal i am using beanie for the db connection. First Implementation from asyncer import syncify from asgiref.sync import async_to_sync client = AsyncIOMotorClient( DATABASE_URL, uuidRepresentation="standard" ) db = client[DB_NAME] async def db_session(): await init_beanie( database=db, document_models=[Project, User], ) @worker_ready.connect def startup_celery_ecosystem(**kwargs): logger.info('Startup celery worker process') async_to_sync(db_session)() logger.info('FINISHED : Startup celery worker process') async def get_users(): users = User.find() users_list = await users.to_list() return users_list @celery_app.task def pool_db(): async_to_sync(get_users)() #syncify(get_users)() same error User class is not initialized yet (init_beanie should have already initialized all the models ) With this implementation i could not access my database using the User and Project class and it raises an error as if User and Project haven't been instantiated yet The workaround is to call db_session() at the module level which solve the problem with database models instantiation, But now when querying the database i get the following error from my celery task RuntimeError: Event loop is closed Second Implementation from asyncer import syncify from asgiref.sync import async_to_sync client = AsyncIOMotorClient( DATABASE_URL, uuidRepresentation="standard" ) db = client[DB_NAME] async def db_session(): await init_beanie( database=db, document_models=[Project, User], ) # now init_beanie at module level async_to_sync(db_session)() async def get_users(): users = User.find() users_list = await users.to_list() return users_list @celery_app.task def pool_db(): # this raises the following Runtime error RuntimeError('Event loop is closed') async_to_sync(get_users)() #syncify(get_users)() same error i am not very familiar with how asyncio is implemented and how asyncer and asgiref allows to run async code inside a sync thread which left me confused, any help would be appriciated | After many investigation using flower for monitoring workers and logging the workers Id ( processes ids) it turns out that Celery worker itself does not process any tasks, it spawns other child processes ( this is my case because i am using the default executor pool which is prefork), while the signal ( worker_ready.connect ) is only run on the supervisor process Celery worker and not the childs, and since processes are isoleted memory wise, this means that you can't have access to db connection or any initialized ressources from the child processes. Now i am using celery with gevent which only spawn 1 process, because initially my project doesn't require CPU heavy tasks which means i don't need all the cpu power provided by the prefork pool | 8 | 2 |
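The accepted answer solved it by switching to gevent; an alternative consistent with its diagnosis (a sketch, not what the answer used) is to keep the prefork pool but run the initialization in every child process via Celery's worker_process_init signal. Note this only covers the per-process init part: the Motor client still has to live on the same event loop that later runs the queries.

```python
# Sketch: initialize Beanie in every prefork child instead of only in the
# supervisor process. db_session is the init_beanie coroutine from the question.
from celery.signals import worker_process_init
from asgiref.sync import async_to_sync

@worker_process_init.connect
def init_db_per_child(**kwargs):
    # Runs once in each forked worker process, so each child gets its own
    # initialized document models.
    async_to_sync(db_session)()
```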
74,089,170 | 2022-10-16 | https://stackoverflow.com/questions/74089170/suppress-rasterio-warning-warning-1-tiffreaddirectory | I've been having problems with the error message Warning 1: TIFFReadDirectory:Sum of Photometric type-related color channels and ExtraSamples doesn't match SamplesPerPixel. Defining non-color channels as ExtraSamples. This error message occurs when I open a .tiff file with wrong tiff tags, in my case the tags: PHOTOMETRIC_MINISBLACK = 1; NO(!) Extrasample while the tiff is a RGB image with one extrasample (mask). To specify the tags before opening has no use here, since I don't know the bands in practice. I simply want to suppress the warning but haven't had success with it. I already tried the following: python Warnings module, suppressing all kind of warnings rasterio logging This is what I do to get the Warning: tiff = rasterio.open(path) img = rast.read() If you want to try it out yourself, you can find an example Tiff in the Google Drive. Does someone know how to generally suppress the warning? EDIT: Here is the information about my rasterio version pip show -v rasterio: Name: rasterio Version: 1.2.10 Summary: Fast and direct raster I/O for use with Numpy and SciPy Home-page: https://github.com/mapbox/rasterio Author: Sean Gillies Author-email: [email protected] License: BSD Location: /home/david/miniconda3/lib/python3.9/site-packages Requires: click-plugins, numpy, snuggs, cligj, click, setuptools, affine, certifi, attrs Required-by: rioxarray Metadata-Version: 2.1 Installer: pip Classifiers: Development Status :: 5 - Production/Stable Intended Audience :: Developers Intended Audience :: Information Technology Intended Audience :: Science/Research License :: OSI Approved :: BSD License Programming Language :: C Programming Language :: Cython Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 Programming Language :: Python :: 3.8 Programming Language :: Python :: 3.9 Programming Language :: Python :: 3 Topic :: Multimedia :: Graphics :: Graphics Conversion Topic :: Scientific/Engineering :: GIS Entry-points: [console_scripts] rio=rasterio.rio.main:main_group [rasterio.rio_commands] blocks=rasterio.rio.blocks:blocks bounds=rasterio.rio.bounds:bounds calc=rasterio.rio.calc:calc clip=rasterio.rio.clip:clip convert=rasterio.rio.convert:convert edit-info=rasterio.rio.edit_info:edit env=rasterio.rio.env:env gcps=rasterio.rio.gcps:gcps info=rasterio.rio.info:info insp=rasterio.rio.insp:insp mask=rasterio.rio.mask:mask merge=rasterio.rio.merge:merge overview=rasterio.rio.overview:overview rasterize=rasterio.rio.rasterize:rasterize rm=rasterio.rio.rm:rm sample=rasterio.rio.sample:sample shapes=rasterio.rio.shapes:shapes stack=rasterio.rio.stack:stack transform=rasterio.rio.transform:transform warp=rasterio.rio.warp:warp Note: you may need to restart the kernel to use updated packages. | Updated Answer After a lot of hunting around, and trying to use TIFFSetWarningHandler() as described here, it transpires that rasterio uses its own, built-in, stripped-down version of gdal - unless you build from source. That gives rise to the following. Method 1 Tell rasterio's GDAL to quiet warnings: #!/usr/bin/env python3 # Get cut-down GDAL that rasterio uses from osgeo import gdal # ... and suppress errors gdal.PushErrorHandler('CPLQuietErrorHandler') import rasterio # Open TIFF and read it tiff = rasterio.open('original.tiff') img = tiff.read() print(img.shape) Sample Output (4, 512, 512) I found some additional stuff you may like to refer to here. 
Method 2 - Use tifffile Another option might be to use tifffile instead as it doesn't emit warnings for your file: from tifffile import imread img = imread('original.tiff') print(img.shape) # prints (512, 512, 4) This is simple and effective but may lack some features of a full GeoTIFF reader. Method 3 - Override the libtiff warning handler This uses ctypes to call into the DLL/shared object library and override the library's warning handler: import ctypes from ctypes.util import find_library # Find the path to the library we want to modify thePath = find_library('tiff') # try "gdal" instead of "tiff" too print(thePath) # Get handle to it theLib = ctypes.CDLL(thePath) theLib.TIFFSetWarningHandler.argtypes = [ctypes.c_void_p] theLib.TIFFSetWarningHandler.restype = ctypes.c_void_p theLib.TIFFSetWarningHandler(None) import rasterio # Open TIFF and read it tiff = rasterio.open('original.tiff') img = tiff.read() print(img.shape) Original Answer Your TIFF is non-compliant because it is RGBA but has "Photometric Interpretation" set to MIN_IS_BLACK meaning it is greyscale or bi-level. Rather than suppressing warnings, the better option IMHO, is to correct your TIFF by setting the "Photometric Interpretation" to RGB and setting the type of the extra sample to UNASSOCIATED_ALPHA. You can do that with tiffset which comes with libtiff: tiffset -s 262 2 11_369_744_2022-10-18.tiff # Photometric = RGB tiffset -s 338 1 2 11_369_744_2022-10-18.tiff # Extra sample is UNASSOCIATED_ALPHA Libtiff now no longer generates errors when loading your image and it displays as RGB colour on macOS Preview. Running tiffinfo on the original image gives: TIFFReadDirectory: Warning, Sum of Photometric type-related color channels and ExtraSamples doesn't match SamplesPerPixel. Defining non-color channels as ExtraSamples.. === TIFF directory 0 === TIFF Directory at offset 0x8 (8) Image Width: 512 Image Length: 512 Resolution: 1, 1 (unitless) Bits/Sample: 8 Compression Scheme: Deflate Photometric Interpretation: min-is-black Samples/Pixel: 4 Rows/Strip: 8 Planar Configuration: single image plane Tag 33550: 76.437028,76.437028,0.000000 Tag 33922: 0.000000,0.000000,0.000000,9079495.967826,5596413.462927,0.000000 Tag 34735: 1,1,0,12,1024,0,1,1,1025,0,1,1,2050,0,1,1,1026,34737,37,0,2049,34737,6,38,2054,0,1,9102,2056,0,1,1,2057,34736,1,0,2058,34736,1,1,2061,34736,1,2,3072,0,1,3857,3076,0,1,9001 Tag 34736: 6378137.000000,6378137.000000,0.000000 Tag 34737: Popular Visualisation Pseudo Mercator|WGS 84| And on the image with corrected tags: === TIFF directory 0 === TIFF Directory at offset 0x99858 (628824) Image Width: 512 Image Length: 512 Resolution: 1, 1 (unitless) Bits/Sample: 8 Compression Scheme: Deflate Photometric Interpretation: RGB color Extra Samples: 1<unassoc-alpha> Samples/Pixel: 4 Rows/Strip: 8 Planar Configuration: single image plane Tag 33550: 76.437028,76.437028,0.000000 Tag 33922: 0.000000,0.000000,0.000000,9079495.967826,5596413.462927,0.000000 Tag 34735: 1,1,0,12,1024,0,1,1,1025,0,1,1,2050,0,1,1,1026,34737,37,0,2049,34737,6,38,2054,0,1,9102,2056,0,1,1,2057,34736,1,0,2058,34736,1,1,2061,34736,1,2,3072,0,1,3857,3076,0,1,9001 Tag 34736: 6378137.000000,6378137.000000,0.000000 Tag 34737: Popular Visualisation Pseudo Mercator|WGS 84| | 3 | 4 |
74,117,873 | 2022-10-18 | https://stackoverflow.com/questions/74117873/data-classes-vs-dictionaries | I've been learning about dataclasses, and was reworking an old project trying to integrate a dataclass into the program in place of a dictionary system I was using. The code blocks below are essentially the respective new and old methods being used to build a dataframe of several thousand items. My problem is I don't understand the use-case for the dataclass over a dictionary. What I want to know is: When should I use a dataclass over a dictionary (or vice versa)? Programmatically, in this instance of simply cataloguing data, is either method more efficient/optimized than the other? In actual practice is either method encouraged over the other (for reasons of efficiency, readibility, industrial standards, or otherwise)? Method using @dataclass @dataclass class Car: year: int = None model: str = None def main(): foo = {} for name in car_list: bar = Car() bar.year = get_year(name) bar.model = get_model(name) foo[name] = vars(bar) df = pd.DataFrame.from_dict(foo) Method using Dictionary def main(): foo = {} for name in car_list: bar = { 'year': None 'model': None } bar['year'] = get_year(name) bar['model'] = get_model(name) foo[name] = bar df = pd.DataFrame.from_dict(foo) | As discussed in the comments, there is a lot of discussion (and opinions) regarding this particular comparison. After doing several hours of research, there are a few main points I'd like to lay out for anyone else who may have this question in the future. 1. In regard to efficiency Dictionaries are simpler data containers and thus will be more efficient. Under the hood, classes and dataclasses are dictionaries with a bit more going on. The top answer on this SO post provides insight into how much more efficient dictaries are than dataclasses when undergoing various tasks. (Creating a container can be as 5x as slow, whereas accessing the data is only 1.25-1.2 times as slow). Various other accounts on the web demonstrate similar results. 2. Functional Differences A major point aside from speed is the control over mutability. It's not impossible to make elements of a dictionary immutable, but generally requires the creation of classes, functions, or importing some library. Dataclasses on the other hand allow instances to be frozen after creation by simply passing frozen=True into the decorator of the dataclass. On top of the obvious changing of values, this functionality also prevents any attributes from being added, accidentally or otherwise. Other decorator arguments provide potentially even finer control over class creation. This video is an excellent, beginner friendly resource that demonstrates several attributes and usecases of dataclasses. Type hints are another reason one might prefer to use dataclasses. While type hints can be utilized with dictionaries and their values, type hinting an object may result in finer control. This is a great write-up on Medium about a team who refactored a project to use dataclasses instead of dictionaries. 3. Which one is right for me? I've spent the better half of an afternoon learning why it's hard to find an answer to this question. Because it depends. If one was objectively better than the other, the other would have been depreciated. Dictionaries are simpler- they require no imports, can be created, accessed and mutated with ease, and will produce faster results. Dataclasses on the other hand allow the user finer control. 
This can be especially important when working on large projects with several team members. A general heuristic when designing a program is to defer to the simplest structure when possible. In my particular case, I don't need the additional functionality of dataclasses when my goal is simply to create a dataframe, and my input data is somewhat reliable. Using a data class doesn't noticeably slow my code down, but if I were to take on several more inputs, I might see performance take a hit. | 10 | 12 |
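A small illustration of the mutability control mentioned above (a sketch using a slimmed-down version of the question's Car class): frozen=True makes instances read-only after creation, which a plain dictionary cannot do without extra machinery.

```python
# Sketch: a frozen dataclass rejects mutation after construction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Car:
    year: int
    model: str

car = Car(2020, "roadster")
try:
    car.year = 1999
except Exception as e:          # dataclasses.FrozenInstanceError
    print(type(e).__name__)     # -> FrozenInstanceError
```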
74,085,655 | 2022-10-16 | https://stackoverflow.com/questions/74085655/pandas-appending-excelxlsx-file-gives-attribute-error | Getting trouble with either of errors ; writer.book=book AttributeError: can't set attribute 'book' or BadZipFile for code not giving badzipfile error I put code line which writes excel file first, dataOutput=pd.DataFrame(dictDataOutput,index=[0]) but, even though I cannot get rid of writer.book = book AttributeError: can't set attribute 'book' As one of SO answer suggests I need to bring back openpyxl to previous versions, or to work with CSV file not excel. I think that's not the solution. There should be solution which I could not get in dataOutput=pd.DataFrame(dictDataOutput,index=[0]) dataOutput.to_excel('output.xlsx') 'output.xlsm' book = load_workbook('output.xlsx') 'output.xlsm' writer = pd.ExcelWriter('output.xlsx')OR'output.xlsm'#,engine='openpyxl',mode='a',if_sheet_exists='overlay') writer.book = book writer.sheets = {ws.title: ws for ws in book.worksheets} for sheetname in writer.sheets: dataOutput.to_excel(writer,sheet_name=sheetname, startrow=writer.sheets[sheetname].max_row, index = False,header= False) writer.save() I looked for an answer in enter link description here and in detailed solution of attributeError in enter link description here ---I tried another way with pd.ExcelWriter('output.xlsx', mode='a',if_sheet_exists='overlay') as writer: dataOutput.to_excel(writer, sheet_name='Sheet1') writer.save() But this time gave another error FutureWarning: save is not part of the public API, usage can give in unexpected results and will be removed in a future version writer.save() after @Andrew's comments I chaned my code to this way; with pd.ExcelWriter('outputData.xlsm', engine='openpyxl', mode='a', if_sheet_exists='overlay') as writer: book = load_workbook('outputData.xlsm', keep_vba=True) writer.book = book writer.sheets = {ws.title: ws for ws in book.worksheets} current_sheet = book['Sheet1'] Column_A = current_sheet['A'] maxrow = max(c.row for c in Column_A if c.value is not None) for sheetname in writer.sheets: AllDataOutput.to_excel(writer, sheet_name=sheetname, startrow=maxrow, index=False, header=False) | There are a few things that are unclear about your question, but I'll do my best to try and answer what I think you're trying to do. First off, things that won't work: From the to_excel docs, "If you wish to write to more than one sheet in the workbook, it is necessary to specify an ExcelWriter object", so when you run dataOutput.to_excel('output.xslx'), you're creating an excel workbook with only a single sheet. Later, it seems like you're trying to write multiple sheets, but you'll only be able to write a single sheet since that's all that's in the workbook to begin with. From the ExcelWriter docs, .save() is deprecated and .close() should be used instead. But it's recommended to use a context manager (with ... as ... :) so you can entirely avoid having to do that. By default ExcelWriter uses w (write) mode, so the BadZipFile error that you were running into earlier was likely related to the order that things were run. If you had written a book with to_excel, then loaded it with openpyxl, then used ExcelWriter, you'd write, load, then immediately overwrite the file with a blank one. If you then tried loading the book again without recreating it with to_excel, you'd be trying to load an empty file, which is why that error was thrown. 
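To make that last point concrete, here is a minimal sketch of the write/load/overwrite sequence just described (the file name demo.xlsx is made up for the example):
import pandas as pd
from openpyxl import load_workbook

pd.DataFrame({'A': [1]}).to_excel('demo.xlsx')  # creates a valid single-sheet workbook
book = load_workbook('demo.xlsx')               # loads fine at this point
writer = pd.ExcelWriter('demo.xlsx')            # default mode='w' - as noted above, this immediately overwrites the file on disk
# calling load_workbook('demo.xlsx') again here is what raises BadZipFile
pd.DataFrame({'A': [1]}).to_excel(writer)       # write something so the writer can be closed cleanly
writer.close()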
The answers that you linked to in your question seem to use a much older version of pandas with much different behavior surrounding interaction with Excel files. The latest versions seem to simplify things quite a bit! As an example solution, I'm going to assume you have two DataFrames called df1 and df2 that you'd like to write to Sheet_1 and Sheet_2 in output.xlsx, and you'd like to update the data in those sheets (using the overlay option) with additional DataFrames called df3 and df4. To confirm, I have openpyxl = 3.0.10 and pandas = 1.5.0 installed. As you have things written currently, there aren't any updates in dataOutput that would be reflected in output.xlsx, but I'm going to create a dictionary called df_dict that can hold the DataFrames and be updated based on the sheets they should be written to. I'm sure you can adjust to suit your needs using whatever data structure you prefer! df1 = pd.DataFrame(data = {'A': ['a', 'b', 'c'], 'B': ['g','h','i']}) df2 = pd.DataFrame(data = {'C': ['o', 'p', 'q'], 'D': ['u','v','w']}) df_dict = {'Sheet_1': df1, 'Sheet_2': df2} with pd.ExcelWriter('output.xlsx') as writer: for sheetname, dfname in df_dict.items(): dfname.to_excel(writer, sheet_name = sheetname) # Updated data is put in DataFrames df3 = pd.DataFrame(data = {'A': ['a', 'b', 'c', 'd', 'e', 'f'], 'B': ['g', 'h', 'i', 'j', 'k', 'l']}) df4 = pd.DataFrame(data = {'C': ['o', 'p', 'q', 'r', 's', 't'], 'D': ['u', 'v', 'w', 'x', 'y', 'z']}) # df_dict is updated with the new DataFrames df_dict = {'Sheet_1': df3, 'Sheet_2': df4} # This block now uses the name of the sheet from the workbook itself to get the updated data. # You could also just run the same block as above and it would work. with pd.ExcelWriter('output.xlsx', mode = 'a', if_sheet_exists = 'overlay') as writer: for sheet in writer.sheets: df_dict[sheet].to_excel(writer, sheet_name = sheet) | 3 | 7 |
74,116,435 | 2022-10-18 | https://stackoverflow.com/questions/74116435/fastapi-is-not-quitting-when-pressing-ctrc | I am finding a difficulty with quitting FastAPI. Ctr+c does not work. Here is my pyproject.toml [tool.pyright] exclude = ["app/worker"] ignore = ["app/worker"] [tool.poetry] name = "api" version = "0.1.0" description = "" authors = ["SamiAlsubhi <[email protected]>"] [tool.poetry.dependencies] python = ">=3.8,<3.9" fastapi = "^0.65.2" tortoise-orm = "^0.17.4" asyncpg = "^0.23.0" aerich = "^0.5.3" networkx = "^2.5.1" numpy = "^1.21.0" ldap3 = "^2.9.1" fastapi-jwt-auth = "^0.5.0" python-multipart = "^0.0.5" torch = "1.7.1" pyts = "0.11.0" Pint = "^0.17" Cython = "^0.29.24" python-dotenv = "^0.19.0" arq = "^0.22" uvicorn = {extras = ["standard"], version = "^0.15.0"} [tool.poetry.dev-dependencies] pytest = "^6.2.4" requests = "^2.25.1" asynctest = "^0.13.0" coverage = "^5.5" pytest-html = "^3.1.1" pytest-sugar = "^0.9.4" pytest-json-report = "^1.4.0" pytest-cov = "^2.12.1" pylint = "^2.11.1" autopep8 = "^1.5.7" black = "^22.3.0" aiosqlite = "^0.17.0" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" here is my entry point """running API in a local dev environment""" import os import uvicorn from dotenv import load_dotenv # laoding env values load_dotenv("../.env") if __name__ == "__main__": port = os.getenv("FASTAPI_PORT") port = int(port) if port else None uvicorn.run("app.main:app", host=os.getenv("FASTAPI_HOST"), port=port, reload=True) This what I get when I run it and then try to quit, the process hangs and does not go back to terminal: (trendr) sami@Samis-MBP backend % python run.py INFO: Will watch for changes in these directories: ['/Users/name/Desktop/etc'] INFO: Uvicorn running on http://0.0.0.0:1000 (Press CTRL+C to quit) INFO: Started reloader process [70087] using watchgod INFO: Started server process [70089] INFO: Waiting for application startup. INFO: Application startup complete. ^CINFO: Shutting down INFO: Finished server process [70089] INFO: ASGI 'lifespan' protocol appears unsupported. | It looks like there was a compatibility issue between unvicorn, starlette and FastAPI around those versions. I updated them to the latest versions and that solved the issue. | 5 | 2 |
74,110,362 | 2022-10-18 | https://stackoverflow.com/questions/74110362/is-there-a-way-in-vscode-to-automatically-switch-python-interpreter-depending-on | I'm using a monorepo with multiple packages in separate directories. Poetry is in charge of package management including creating the virtual environment. I wish VS Code would use the right Python interpreter for each package individually. Is that possible? My workaround is to open a separate window with the directory containing a package, so all of its files would use the right interpreter. | You can open multiple folders in vscode and select the interpreter for each folder individually. Select Add Folder to Workspace in the menu. As a simple example, I have added three folders. Then click on the python interpreter version in the lower right corner (or Ctrl+Shift+P --> Python:Select Interpreter). Choose a folder (here I choose folder1), then select the interpreter for the folder1 folder. The same operation continues to select interpreters for folder2 and folder3. Of course this is just the basics; you can also create virtual environments for each folder and then choose the interpreter for the virtual environment for each folder. For example, I created a virtual environment .venv2 under folder2, repeated the same steps 2 and 3, and then selected the interpreter from the virtual environment .venv2. There is also a way to specify the default interpreter path in settings.json, but after setting it you still need to go through steps 2 and 3 and select it in the selection panel. When you modify the folder settings, the .vscode folder and the settings.json file will be automatically generated under the folder. The contents of the settings.json file are the settings you just modified. And this step is reversible: you can manually create the .vscode folder and add the settings.json file in it, and enter the corresponding settings in the settings.json file. | 14 | 9 |
74,112,099 | 2022-10-18 | https://stackoverflow.com/questions/74112099/importing-matplotlib-causes-int-argument-must-be-a-string-error | I'm new to Python. A few days ago I installed Anaconda and PyCharm (on D disk), and I am trying to use the matplotlib package to plot one picture. When I click "run", I get the following error: Traceback (most recent call last): File "G:\onedrive\OneDrive - mail.dlut.edu.cn\PyCharm\shock wave\P6.py", line 7, in <module> import matplotlib.pyplot as plt File "D:\anaconda3\lib\site-packages\matplotlib\pyplot.py", line 2230, in <module> switch_backend(rcParams["backend"]) File "D:\anaconda3\lib\site-packages\matplotlib\__init__.py", line 672, in __getitem__ plt.switch_backend(rcsetup._auto_backend_sentinel) File "D:\anaconda3\lib\site-packages\matplotlib\pyplot.py", line 247, in switch_backend switch_backend(candidate) File "D:\anaconda3\lib\site-packages\matplotlib\pyplot.py", line 267, in switch_backend class backend_mod(matplotlib.backend_bases._Backend): File "D:\anaconda3\lib\site-packages\matplotlib\pyplot.py", line 268, in backend_mod locals().update(vars(importlib.import_module(backend_name))) File "D:\anaconda3\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "D:\anaconda3\lib\site-packages\matplotlib\backends\backend_qtagg.py", line 12, in <module> from .backend_qt import ( File "D:\anaconda3\lib\site-packages\matplotlib\backends\backend_qt.py", line 73, in <module> _MODIFIER_KEYS = [ File "D:\anaconda3\lib\site-packages\matplotlib\backends\backend_qt.py", line 74, in <listcomp> (_to_int(getattr(_enum("QtCore.Qt.KeyboardModifier"), mod)), TypeError: int() argument must be a string, a bytes-like object or a number, not 'KeyboardModifier' Process finished with exit code 1 | It looks like this is a bug in pyside6 v6.3.0, one of the libraries matplotlib depends on for rendering plots; here's the bug report. It's a new bug, and it's already been fixed, so it's really bad luck that it got you! Solution: the issue seems to be fixed in pyside version 6.4.0 (released 13 October), so one solution is to upgrade it, or you could downgrade, e.g. to version 6.2. Another solution is to try using another backend, because I think the problem only affects the Qt backend. (A backend is a rendering engine for matplotlib — read all about them.) It's easy to try this latter option, so let's start there. Use another backend Try this at the top of your script: import matplotlib matplotlib.use('tkagg') Or you can try others; see this page for help. Up- or down-grade pyside To handle this, you'll need to deal with 'virtual environments'. You may already be doing this. Environments let you have different versions of Python and different collections of packages for different projects you might be working on. Fix the base environment... When you install Anaconda, it made an environment called base that contains 'everything' in Anaconda (Python plus lots of libraries like matplotlib). You can upgrade the version of pyside in the base environment by opening an Anaconda prompt from the Start menu and typing this: conda install -c conda-forge pyside==6.4.0 However, most programmers don't use their base environment and prefer to manage an environment specific to their project. If you are doing this, or want to give it a try, read on. 
...or make a new environment Alternatively, to make a new environment, open an Anaconda prompt and type this but replace MYENV with a short suitable name for the environment: conda create -n MYENV python=3.10 pyside=6.4.0 anaconda Or you can replace anaconda with a list of the packages you want, like jupyter scipy networkx or whatever. You would then start using this environment with conda activate MYENV and your script or notebook should run in there no problem. | 4 | 11 |
74,108,243 | 2022-10-18 | https://stackoverflow.com/questions/74108243/pyvis-is-there-a-way-to-disable-physics-without-losing-graphs-layout | I am trying to vizualize a large network with pyvis and facing two problems: super long rendering network instability, i.e. nodes move too fast and it is hard to interact with such a network. Disabling physics with toggle_physics(False) helps to speed up rendering and makes the network static, but eliminates the layout settings. This is how it looks agter disabling physics: link. As you see, the graph is messy and has no structure. What I want to do is to disable physics but keep the layout settings, i.e. I want my graph to look like a normal network (e.g. similar to spring layout in networkX) with weights taken into account for every edge. Is there a way to do so? So far I found out that pyvis only has hierarchial layouts, which is not what I need. I think integrating a networkX layout might help but I have no idea how to do this, since networkX allows to set layout as a keyword argument in nx.draw() function, which is not compatible with my case. This is my code in case it helps to understand my problem: g = nx.Graph() edges_cards = cards_weights_df.values.tolist() g.add_weighted_edges_from(edges_cards) net = Network("1000px", "1800px") net.from_nx(g) net.toggle_physics(False) net.show("graph.html") Your help is appreciated! | It is possible to pass x and y coordinates to your pyvis nodes (see doc here). You can then create a graph layout with networkx and pass the resulting positions to your pyvis graph. See example below with nx.circular_layout() applied to the karate club network: import networkx as nx from pyvis.network import Network G = nx.karate_club_graph() pos=nx.circular_layout(G,scale=500) net = Network() net.from_nx(G) for node in net.get_nodes(): net.get_node(node)['x']=pos[node][0] net.get_node(node)['y']=-pos[node][1] #the minus is needed here to respect networkx y-axis convention net.get_node(node)['physics']=False net.get_node(node)['label']=str(node) #set the node label as a string so that it can be displayed net.toggle_physics(False) net.show('net.html') Here is the result with circular layout: and without any specific layout: import networkx as nx from pyvis.network import Network G = nx.karate_club_graph() net = Network() net.from_nx(G) for node in net.get_nodes(): net.get_node(node)['physics']=False net.get_node(node)['label']=str(node) net.toggle_physics(False) net.show('net.html') | 6 | 6 |
74,110,251 | 2022-10-18 | https://stackoverflow.com/questions/74110251/python-class-generated-by-protoc-cannot-be-imported-in-the-code-because-of-unres | I tried to use protocol buffers on my project and the problem I have is that when I use protoc to generate the python class. The file that's generated looks nothing like in the example provided by Google and cannot be imported in any file because there are some unresolved references. So I followed the example from this page: https://developers.google.com/protocol-buffers/docs/pythontutorial Preconditions Operating system macOS 12.6 on M1 Mac. I used Python 3.9.11 in a vrtualenv managed with pyenv and pyenv-virtualenv I downloaded the latest python package from https://github.com/protocolbuffers/protobuf/releases/tag/v21.7 I installed protobuf with homebrew https://formulae.brew.sh/formula/protobuf I followed this instruction to install the package https://github.com/protocolbuffers/protobuf/tree/v21.7/python I also copiled the c++ protoc from the above protobuff package to see if it helps but it didn't The packages I had in the end were: $ python --version $ Python 3.9.11 $ $ protoc --version $ libprotoc 3.21.7 $ $ pip freeze | grep protobuf $ protobuf==3.20.2 The code First I try to generate the a python class from this tutorial .proto file: syntax = "proto2"; package tutorial; message Person { optional string name = 1; optional int32 id = 2; optional string email = 3; enum PhoneType { MOBILE = 0; HOME = 1; WORK = 2; } message PhoneNumber { optional string number = 1; optional PhoneType type = 2 [default = HOME]; } repeated PhoneNumber phones = 4; } message AddressBook { repeated Person people = 1; } Then I use the command to generate the python class protoc -I=. --python_out=. tutorial.proto And the output file is: # -*- coding: utf-8 -*- # Generated by the protocol buffer compiler. DO NOT EDIT! # source: tutorial.proto """Generated protocol buffer code.""" from google.protobuf.internal import builder as _builder from google.protobuf import descriptor as _descriptor from google.protobuf import descriptor_pool as _descriptor_pool from google.protobuf import symbol_database as _symbol_database # @@protoc_insertion_point(imports) _sym_db = _symbol_database.Default() DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0etutorial.proto\x12\x08tutorial\"\xd5\x01\n\x06Person\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\n\n\x02id\x18\x02 \x01(\x05\x12\r\n\x05\x65mail\x18\x03 \x01(\t\x12,\n\x06phones\x18\x04 \x03(\x0b\x32\x1c.tutorial.Person.PhoneNumber\x1aG\n\x0bPhoneNumber\x12\x0e\n\x06number\x18\x01 \x01(\t\x12(\n\x04type\x18\x02 \x01(\x0e\x32\x1a.tutorial.Person.PhoneType\"+\n\tPhoneType\x12\n\n\x06MOBILE\x10\x00\x12\x08\n\x04HOME\x10\x01\x12\x08\n\x04WORK\x10\x02\"/\n\x0b\x41\x64\x64ressBook\x12 \n\x06people\x18\x01 \x03(\x0b\x32\x10.tutorial.Person') _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'tutorial_pb2', globals()) if _descriptor._USE_C_DESCRIPTORS == False: DESCRIPTOR._options = None _PERSON._serialized_start=29 _PERSON._serialized_end=242 _PERSON_PHONENUMBER._serialized_start=126 _PERSON_PHONENUMBER._serialized_end=197 _PERSON_PHONETYPE._serialized_start=199 _PERSON_PHONETYPE._serialized_end=242 _ADDRESSBOOK._serialized_start=244 _ADDRESSBOOK._serialized_end=291 # @@protoc_insertion_point(module_scope) So as you can see there are no metaclasses created and all the constants below the line DESCRIPTOR.options=None are Unresolved references. 
When I try to import that file later, the runtime obviously crashes, as this is not a valid Python file. Any ideas? | It turned out the example code works fine and the problem was elsewhere in my code. A few confusing things that made me question the code were: The example on the official page is outdated. Google has changed the way the protobufs are generated and there are no metaclasses anymore. There are dynamic imports, so an IDE will highlight descriptors like _PERSON as not defined, but the code still works when launched. So yeah, I spent time debugging a non-existent issue ;). | 5 | 4 |
74,111,063 | 2022-10-18 | https://stackoverflow.com/questions/74111063/at-sign-in-pylance-pyright-inlay-hints-error-messages | The July 2022 release of the Python extension for Visual Studio Code introduced "Inlay Type Hints", which automatically suggests return types of functions that don't have an explicit annotation. To enable it, you can set "python.analysis.inlayHints.functionReturnTypes": true to your IDE user settings (Preferences: Open Settings (JSON) command). While testing this feature, I noticed the following kind of suggestion, inside a class: ... where the highlighted text in yellow is the return type suggested by the Python extension, which is based on Pylance, which itself relies on Pyright. My question is: what is the @ sign in this suggestion supposed to mean? Is there a PEP that refers to this kind of type annotations (with Self@...) or is that way of type hinting specific to Pyright, differing from the standard convention? Where can I find more information about it? I've found a similar Stackoverflow question here but it did not get any answer. | That @ indicates that Self is a TypeVar, and Self@HereIsMyClassName refers to the Self in the context of the class HereIsMyClassName (it could also be a function). This is not valid Python. (Technically, it is valid, as the @ operator is matrix multiplication, so you are matrix-multiplying Self and HereIsMyClassName. However, that's not what is meant and really doesn't make any sense.) Don't write this in your code, but know that this is how Pylance shows you TypeVars when it shows you type definitions. (Possibly other editors and extensions as well.) | 9 | 8 |
74,108,540 | 2022-10-18 | https://stackoverflow.com/questions/74108540/python-error-boolean-series-key-will-be-reindexed-to-match-dataframe-index | I am working on a project in Python, and while my below code works I am getting this hideous error: UserWarning: Boolean Series key will be reindexed to match DataFrame index. dummy_types = pd.get_dummies(df_pokemon_co, columns=['Type 1', 'Type 2']) df_pokemon_co['Rock'] = dummy_types['Type 1_Rock'] + dummy_types['Type 2_Rock'] df_pokemon_co['Ground'] = dummy_types['Type 1_Ground'] + dummy_types['Type 2_Ground'] df_pokemon_co['Water'] = dummy_types['Type 1_Water'] + dummy_types['Type 2_Water'] df_pokemon_co['Sum'] = df_pokemon_co['Rock'] + df_pokemon_co['Ground'] + df_pokemon_co['Water'] print('Total rock:', np.sum(np.any(df_pokemon_co[df_pokemon_co['Sum']==2][df_pokemon_co['Rock']==1], axis=1))) print('Total ground:', np.sum(np.any(df_pokemon_co[df_pokemon_co['Sum']==2][df_pokemon_co['Ground']==1], axis=1))) print('Total water:', np.sum(np.any(df_pokemon_co[df_pokemon_co['Sum']==2][df_pokemon_co['Water']==1], axis=1))) I have worked out that if I remove the following from the print lines the message goes away, but I am not quite sure how to remedy it. [df_pokemon_co['Sum']==2] Does anyone have any idea on how to fix this? I have seen some other posts related to this error however the error seems to be getting issued for different reasons in those cases. Thanks in advance :) | If you need to count values matched by 2 conditions, chain them with & for bitwise AND and use sum to count the True values: print('Total rock:', ((df_pokemon_co['Sum']==2) & (df_pokemon_co['Rock']==1)).sum()) For the median, use DataFrame.loc to select by condition and column name: med = (df_pokemon_co.loc[(df_pokemon_co['Rock']==1) & (df_pokemon_co['Sum']==2), 'Defense'].median()) To avoid building the same mask multiple times, assign it to a variable: mask1 = (df_pokemon_co['Sum']==2) & (df_pokemon_co['Rock']==1) print('Total rock:', mask1.sum()) print('Median rock:', df_pokemon_co.loc[mask1, 'Defense'].median()) | 3 | 4 |
74,103,528 | 2022-10-17 | https://stackoverflow.com/questions/74103528/type-hinting-an-instance-of-a-nested-class | I have a class structure that looks something like this: class Base: class Nested: pass def __init__(self): self.nestedInstance = self.Nested() where subclasses of Base each have their own Nested class extending the original, like this: class Sub(Base): class Nested(Base.Nested): pass This works perfectly, and instances of Sub have their nestedInstance attributes set to instances of Sub.Nested. However, in my IDE the nestedInstance attribute is always treated as an instance of Base.Nested, not the inherited Sub.Nested. How can I make it so that nestedInstance will be inferred to be Sub.Nested rather than Base.Nested? (Without having to add extra code to every subclass; preferably, this would all be done in Base.) (By the way, I'm aware that this is an odd structure, and I can go into detail about why I chose it if necessary, but I think it's pretty elegant for my situation, and I'm hoping there's a solution to this problem.) | I don't agree with the statement that you were trying to violate the Liskov substitution principle. You were merely looking for a way to let a static type checker infer the type of nested_instance for classes inheriting from Base to be their respective Nested class. Obviously this wasn't possible with the code you had; otherwise there would be no question. There actually is a way to minimize repetition and accomplish what you want. Generics to the rescue! You can define your Base class as generic over a type variable with the upper bound of Base.Nested. When you define Sub as a subclass Base, you provide a reference to Sub.Nested as the concrete type argument. Here is the setup: from typing import Generic, TypeVar, cast N = TypeVar("N", bound="Base.Nested") class Base(Generic[N]): nested_instance: N class Nested: pass def __init__(self) -> None: self.nested_instance = cast(N, self.Nested()) class Sub(Base["Sub.Nested"]): class Nested(Base.Nested): pass This is actually all you need. For more info about generics I recommend the relevant section of PEP 484. A few things to note: Why do we need the bound? If we were to just use N = TypeVar("N"), the type checker would have no problem if we wanted do define a subclass like this: class Broken(Base[int]): class Nested(Base.Nested): pass But this would be a problem since now the nested_instance attribute would be expected to be of the type int, which is not what we want. That upper bound on N will prevent this causing mypy to complain: error: Type argument "int" of "Base" must be a subtype of "Nested" [type-var] Why explicitly declare nested_instance? The whole point of making a class generic is to bind some type variable (like N) to it and then indicate that some associated type inside that class is in fact N (or even multiple). We essentially tell the type checker to expect nested_instance to always be of the type N, which must be provided, whenever Base is used to annotate something. However, now the type checker will always complain, if we ever omit the type argument for Base and tried an annotation like this: x: Base. Again, mypy would tell us: error: Missing type parameters for generic type "Base" [type-arg] This may be the only "downside" to the use of generics in this fashion. Why cast? The problem is that inside Base, the nested_instance attribute is declared as a generic type N, whereas in Base.__init__, we assign an instance of the specific type Base.Nested. 
Even though it may seem like this should work, it does not. Omitting the cast call results in the following mypy error: error: Incompatible types in assignment (expression has type "Nested", variable has type "N") [assignment] Are the quotes necessary? Yes, and importing __future__.annotations does not help here. I am not entirely sure why that is, but I believe in case of the Base[...] usage the reason is that __class_getitem__ is actually called and you cannot provide Sub.Nested to it because it is not even defined at that point. Full working example from typing import Generic, TypeVar, cast N = TypeVar("N", bound="Base.Nested") class Base(Generic[N]): nested_instance: N class Nested: pass def __init__(self) -> None: self.nested_instance = cast(N, self.Nested()) class Sub(Base["Sub.Nested"]): class Nested(Base.Nested): pass def get_nested(obj: Base[N]) -> N: return obj.nested_instance def assert_instance_of_nested(nested_obj: N, cls: type[Base[N]]) -> None: assert isinstance(nested_obj, cls.Nested) if __name__ == '__main__': sub = Sub() nested = get_nested(sub) assert_instance_of_nested(nested, Sub) This script works "as is" and mypy is perfectly happy with it. The two functions are just for demonstration purposes, so that you see how you could leverage the generic Base. Additional sanity checks To assure you even more, you can for example add reveal_type(sub.nested_instance) at the bottom and mypy will tell you: note: Revealed type is "[...].Sub.Nested" This is what we wanted. If we declare a new subclass class AnotherSub(Base["AnotherSub.Nested"]): class Nested(Base.Nested): pass and try this a: AnotherSub.Nested a = Sub().nested_instance we are again correctly reprimanded by mypy: error: Incompatible types in assignment (expression has type "[...].Sub.Nested", variable has type "[...].AnotherSub.Nested") [assignment] Hope this helps. PS To be clear, you can still inherit from Base without specifying the type argument. This has no runtime implications either way. It's just that a strict type checker will complain about it because it is generic, just as it would complain if you annotate something with list without specifying the type argument. (Yes, list is generic.) Also, whether or not your IDE actually infers this correctly of course depends on how consistent their internal type checker is with the typing rules in Python. PyCharm for example seems to deal with this setup as expected. | 3 | 3 |
74,091,600 | 2022-10-17 | https://stackoverflow.com/questions/74091600/asgi-application-not-working-with-django-channels | I followed the tutorial in the channels documentation but when I start the server python3 manage.py runserver it gives me this : Watching for file changes with StatReloader Performing system checks... System check identified no issues (0 silenced). October 17, 2022 - 00:13:21 Django version 4.1.2, using settings 'config.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. when I expected for it to give me this : Watching for file changes with StatReloader Performing system checks... System check identified no issues (0 silenced). October 17, 2022 - 00:13:21 Django version 4.1.2, using settings 'config.settings' Starting ASGI/Channels version 3.0.5 development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. settings.py INSTALLED_APPS = [ 'channels', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ... ] ASGI_APPLICATION = 'config.asgi.application' asgi.py import os from django.core.asgi import get_asgi_application from channels.routing import ProtocolTypeRouter os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings') application = ProtocolTypeRouter({ 'http': get_asgi_application(), }) It doesn't give any errors even when I change the ASGI_APPLICATION = 'config.asgi.application to ASGI_APPLICATION = ''. | This could be due to the fact that the Django and channels versions you have used are not compatible Try : channels==3.0.4 and django==4.0.0 | 20 | 30 |
74,097,495 | 2022-10-17 | https://stackoverflow.com/questions/74097495/aligning-text-according-to-user-input-in-python | aStr = input("Enter a string message: ") fW = eval(input("Enter a positive integer: ")) print(f"Right aligned is <{aStr:>fW}>") so, this is my code, I would like to align the spaces according to the user input. Whenever I try to use eval or int in fW, it shows this error. Traceback (most recent call last): File "C:\Users\vmabr\Downloads\A1Q2.py", line 5, in <module> print(f"Right aligned is <{aStr:fW}>") ValueError: Invalid format specifier May I know what's the fix? | You were close. To implement a "parameterized" format-string, you need to wrap the parameter fW into additional curly braces: aStr = input("Enter a string message: ") fW = int(input("Enter a positive integer: ")) print(f"Right aligned is <{aStr:>{fW}}>") # {fW} instead of fW See PEP 498 -> format specifiers for further information. | 3 | 3 |
74,105,434 | 2022-10-18 | https://stackoverflow.com/questions/74105434/how-to-use-text-languages-in-python | I'm trying to create a text-to-speech Python program. I already have it working in English, though, I need other languages too. How can I use the same methods for other languages like Chinese? coding: from gtts import gTTS import os myText = "hello" language = 'en' output = gTTS(text=myText, lang = language, slow = False) output.save("output.mp3") os.system(" start output.mp3") | To get all the languages supported by the library you are using use the following: import gtts.lang print(gtts.lang.tts_langs()) In this output, the keys are what you would use and the values just explain what language it is. And to answer your question 'zh-CN': 'Chinese', 'zh-TW': 'Chinese (Mandarin/Taiwan)', 'zh': 'Chinese (Mandarin)' are all possible versions of chinese. Output: {'af': 'Afrikaans', 'ar': 'Arabic', 'bg': 'Bulgarian', 'bn': 'Bengali', 'bs': 'Bosnian', 'ca': 'Catalan', 'cs': 'Czech', 'cy': 'Welsh', 'da': 'Danish', 'de': 'German', 'el': 'Greek', 'en': 'English', 'eo': 'Esperanto', 'es': 'Spanish', 'et': 'Estonian', 'fi': 'Finnish', 'fr': 'French', 'gu': 'Gujarati', 'hi': 'Hindi', 'hr': 'Croatian', 'hu': 'Hungarian', 'hy': 'Armenian', 'id': 'Indonesian', 'is': 'Icelandic', 'it': 'Italian', 'iw': 'Hebrew', 'ja': 'Japanese', 'jw': 'Javanese', 'km': 'Khmer', 'kn': 'Kannada', 'ko': 'Korean', 'la': 'Latin', 'lv': 'Latvian', 'mk': 'Macedonian', 'ms': 'Malay', 'ml': 'Malayalam', 'mr': 'Marathi', 'my': 'Myanmar (Burmese)', 'ne': 'Nepali', 'nl': 'Dutch', 'no': 'Norwegian', 'pl': 'Polish', 'pt': 'Portuguese', 'ro': 'Romanian', 'ru': 'Russian', 'si': 'Sinhala', 'sk': 'Slovak', 'sq': 'Albanian', 'sr': 'Serbian', 'su': 'Sundanese', 'sv': 'Swedish', 'sw': 'Swahili', 'ta': 'Tamil', 'te': 'Telugu', 'th': 'Thai', 'tl': 'Filipino', 'tr': 'Turkish', 'uk': 'Ukrainian', 'ur': 'Urdu', 'vi': 'Vietnamese', 'zh-CN': 'Chinese', 'zh-TW': 'Chinese (Mandarin/Taiwan)', 'zh': 'Chinese (Mandarin)'} | 4 | 6 |
74,100,867 | 2022-10-17 | https://stackoverflow.com/questions/74100867/run-github-workflow-with-python | I have a python workflow in github here. I can run that workflow from the web interface. But the problem is that every time I have to visit GitHub and click on run workflow. I also have to log in to GitHub from an unknown device to run that workflow. Is there any GitHub module in Python that can run a workflow using a personal access token? Or how can I run a workflow with the GitHub API using the requests module? I searched this topic on Google but didn't find any solution that works. Many of those are outdated or not explained properly. This is the workflow I used here: name: Python on: workflow_dispatch: jobs: build: runs-on: ubuntu-latest steps: - name: checkout repo uses: actions/[email protected] - name: Run a script run: python3 main.py | Thanks to @C.Nivs & @larsks for helping me to find the right documentation. After experimenting with the GitHub API, I finally found my answer. Here is the code I used: branch, owner, repo, workflow_name, ghp_token="main", "dev-zarir", "Python-Workflow", "python.yml", "ghp_..." import requests def run_workflow(branch, owner, repo, workflow_name, ghp_token): url = f"https://api.github.com/repos/{owner}/{repo}/actions/workflows/{workflow_name}/dispatches" headers = { "Accept": "application/vnd.github+json", "Authorization": f"Bearer {ghp_token}", "Content-Type": "application/json" } data = '{"ref":"'+branch+'"}' resp = requests.post(url, headers=headers, data=data) return resp response=run_workflow(branch, owner, repo, workflow_name, ghp_token) if response.status_code==204: print("Workflow Triggered!") else: print("Something went wrong.") | 3 | 4 |
74,105,187 | 2022-10-18 | https://stackoverflow.com/questions/74105187/vscode-auto-suggest-latest-versions-pip-requirements-txt | Identify an unknown feature On a previous VSCode installation (OSX VSCode 1.7.x), a feature was enabled that auto-suggested the 3rd party dependencies in the pip requirements.txt file. I am on a fresh install of VSCode (1.71.2 Linux) and have both the Microsoft Python (v2022.16.1) and Pylance (v2022.10.20) extensions installed, but the "auto suggest latest versions" feature is not showing/working. My google-fu is failing to locate either what "extension" provided that feature, or if already installed, how to install it. I don't know what it is called. It "behaves" like an auto-suggest above the 3rdparty==1.2.3 library entry, and also marks ones with CVEs in red. It always auto-detected requirements.txt and requirements-*.txt files. Any help is much appreciated. | It was called PyPi Assistant. I found that by searching in VSCode with @popular pip | 3 | 3 |
74,103,366 | 2022-10-17 | https://stackoverflow.com/questions/74103366/importerror-cannot-import-name-rfc-1738-quote-from-sqlalchemy-engine-url | I am using a python snowflake connector from the following package: snowflake-sqlalchemy from sqlalchemy import create_engine engine = create_engine(...) It used to be working but it's now failing with this weird error. I tried to switch to older version of the package, but can't get rid of this error. Here is the fulll stack trace: File "/Users/Etienne.Herlaut/Repositories/python/performance-hub-apps/utils/snowflake.py", line 19, in get_snowflake_connection engine = create_engine(f'snowflake://{sf_user}:{sf_password}@{sf_account}?warehouse={wh}&role={role}', connect_args={'timeout': 120}) File "<string>", line 2, in create_engine File "/Users/Etienne.Herlaut/.pyenv/versions/3.8.13/lib/python3.8/site-packages/sqlalchemy/util/deprecations.py", line 309, in warned return fn(*args, **kwargs) File "/Users/Etienne.Herlaut/.pyenv/versions/3.8.13/lib/python3.8/site-packages/sqlalchemy/engine/create.py", line 522, in create_engine entrypoint = u._get_entrypoint() File "/Users/Etienne.Herlaut/.pyenv/versions/3.8.13/lib/python3.8/site-packages/sqlalchemy/engine/url.py", line 656, in _get_entrypoint cls = registry.load(name) File "/Users/Etienne.Herlaut/.pyenv/versions/3.8.13/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 341, in load return impl.load() File "/Users/Etienne.Herlaut/.pyenv/versions/3.8.13/lib/python3.8/site-packages/importlib_metadata/__init__.py", line 207, in load module = import_module(match.group('module')) File "/Users/Etienne.Herlaut/.pyenv/versions/3.8.13/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/Users/Etienne.Herlaut/.pyenv/versions/3.8.13/lib/python3.8/site-packages/snowflake/sqlalchemy/__init__.py", line 63, in <module> from .util import _url as URL File "/Users/Etienne.Herlaut/.pyenv/versions/3.8.13/lib/python3.8/site-packages/snowflake/sqlalchemy/util.py", line 8, in <module> from sqlalchemy.engine.url import _rfc_1738_quote ImportError: cannot import name '_rfc_1738_quote' from 'sqlalchemy.engine.url' (/Users/Etienne.Herlaut/.pyenv/versions/3.8.13/lib/python3.8/site-packages/sqlalchemy/engine/url.py) | Looks like this is a known issue will get resolved in v1.4.3, per the release notes: https://github.com/snowflakedb/snowflake-sqlalchemy/blob/main/DESCRIPTION.md | 6 | 7 |
74,080,017 | 2022-10-15 | https://stackoverflow.com/questions/74080017/typing-a-dynamically-assigned-attribute | I am writing a "plugin" for an existing python package and wondering how I would properly type-hint something like this. Scenario A python object from a framework I am working with has a plugins attribute that is designed to be extended by user-created plugins. For instance: (very simplified) # External module that I have no control over from typing import Any class BaseClassPluginsCollection: ... class BaseClass: def __init__(self): self.attr_a = 'a' self.plugins = BaseClassPluginsCollection() def add_plugin(self, plugin: Any, plugin_name: str): setattr(self.plugins, plugin_name, plugin) So the plugins attribute is basically an empty object which users (like myself) will add new plugin objects to. So in my case, I might do something like this: class MyCustomPlugin: def __init__(self): self.my_attr_a = "my attribute" my_plugin = MyCustomPlugin() base_object = BaseClass() base_object.add_plugin(my_plugin, "my_plugin") In practice, this all works fine and I can add and then use the plugin during program execution without issue. However, I would like to properly type-hint this so that my IDE knows about the plugin. Is such a thing even possible? Currently (using VS Code) I get the red squiggly line when I try and reference my plugin, even though it does work fine during execution. For instance, if I try and use the base_object created above, I get an error from the static type checker: print(f"{base_object.plugins.my_plugin.my_attr_a=}") What would be the appropriate way to handle this type of scenario? Is there a method for type-hinting something of this sort when its being added during execution using setattr()? Should I be doing some sort if aliasing before trying to access my_plugin? | I am afraid you are out of luck, if you are looking for some magical elegant annotation. Dynamic attribute assignment The problem lies in the way that the the package you are using (the one defining BaseClass) is designed. Adding a new attribute to an object using setattr means you are dynamically altering the interface of that object. A static type checker will never be able to pick that up. Even in the simplest case, you will run into errors: class Foo: pass class Bar: pass f = Foo() setattr(f, "bar", Bar()) print(type(f.bar)) Even though the output <class '__main__.Bar'> is as expected, mypy immediately complains: error: "Foo" has no attribute "bar" [attr-defined] You might think this is trivial, but consider the situation if the new attribute's name was also dynamically generated by some function func: setattr(f, func(), Bar()) Now the only way for a type checker to know what even the name of the attribute is supposed to be, is to execute foo. Static type checkers don't execute your code, they just read it. So to come back to your code example, there is no way to infer the type of the attribute base_object.plugins.my_plugin because that attribute is set dynamically. Workaround What you can always do however is just assure the type checker that my_plugin is in fact of the type you expect it to be. To do that however, you first need to tell the type checker that base_object.plugins even has such an attribute. 
For that I see no way around subclassing and just declaring that attribute: class CustomPluginsCollection(BaseClassPluginsCollection): my_plugin: MyCustomPlugin class SubClass(BaseClass): plugins: CustomPluginsCollection def __init__(self) -> None: super().__init__() self.plugins = CustomPluginsCollection() my_plugin = MyCustomPlugin() base_object = SubClass() base_object.add_plugin(my_plugin, "my_plugin") print(f"{base_object.plugins.my_plugin.my_attr_a=}") Now there should be no errors by type checkers and a decent IDE should give you all the type hints for .my_plugin. Whether or not this is doable in practice depends of course on the details of that framework you are using. It may not be trivial to subclass and (re-)declare the attributes as I did here. As a side note, this is why extensible libraries should try to go the inheritance route, allowing you to subclass their own classes and thus define your own interface. But I don't know enough about the package in question to judge here. | 3 | 4 |
74,100,294 | 2022-10-17 | https://stackoverflow.com/questions/74100294/how-do-i-multiply-a-pandas-dataframe-by-a-multiplier-from-a-dict | Starting from a dataframe like the below (simplified example of my real case): import pandas as pd df = pd.DataFrame({ 'a': [1.0, 1.1, 1.0, 4.2, 5.1], 'b': [5.0, 4.2, 3.1, 3.2, 4.1], 'c': [3.9, 2.0, 4.2, 3.8, 6.7], 'd': [3.1, 2.1, 1.2, 1.0, 1.0] }) And then taking a dictionary containing some multipliers I want to multiply certain columns in the dataframe by: dict = { "b": 0.01, "d": 0.001 } i.e. I want to check if each column in the dataframe is in my dictionary, and if it does exist as a key, then multiply that column of the dataframe by the value in the dictionary. In this example, I would want to multiply column 'b' by 0.01 and column 'd' by 0.001. I would end up with: 'a': [1.0, 1.1, 1.0, 4.2, 5.1], 'b': [0.05, 0.042, 0.031, 0.032, 0.041], 'c': [3.9, 2.0, 4.2, 3.8, 6.7], 'd': [0.0031, 0.0021, 0.0012, 0.001, 0.001] In my real example, the dataframe is a cleaned-up set of data read in from Excel, and the dictionary of multipliers is read in from a config file, to allow users to specify which columns need converting from whatever is in Excel to the desired/expected units of measure (e.g. converting 'g/h' in the raw data to 'kg/h' in the dataframe). What are some good, clear ways of achieving this intent, even if I have to restructure the implementation a bit? | Try: df[list(dct)] *= dct.values() print(df) Prints: a b c d 0 1.0 0.050 3.9 0.0031 1 1.1 0.042 2.0 0.0021 2 1.0 0.031 4.2 0.0012 3 4.2 0.032 3.8 0.0010 4 5.1 0.041 6.7 0.0010 If in dct are keys not in dataframe: tmp = {k: dct[k] for k in dct.keys() & df.columns} df[list(tmp)] *= tmp.values() | 3 | 5 |
74,097,901 | 2022-10-17 | https://stackoverflow.com/questions/74097901/meaning-of-all-inside-python-class | I am aware of the use of __all__ at module scope. However, I came across the usage of __all__ inside classes. This is done e.g. in the Python standard library: class re(metaclass=_DeprecatedType): """Wrapper namespace for re type aliases.""" __all__ = ['Pattern', 'Match'] Pattern = Pattern Match = Match What does __all__ achieve in this context? | The typing module does some unorthodox things to patch existing modules (like re). Basically, the built-in module re is being replaced with this class re defined using a custom metaclass that intercepts attribute lookups on the underlying object. __all__ doesn't really have any special meaning to the class (it's just another class attribute), but it effectively becomes the __all__ attribute of the re module. It's the metaclass's definition of __getattribute__ that accomplishes this. | 18 | 12 |
74,095,402 | 2022-10-17 | https://stackoverflow.com/questions/74095402/flake8s-noqa-interferes-with-markdown-using-mkdocstrings | I am using mkdocstrings in order automatically generate an API documentation from my Python functions. At the same time I am using flake8 to keep my code in good shape. If you want to ignore some flake8 warnings on an in-line basis, you could insert "# noqa" whereby the following lines of code will be ignored by flake8. That's nice, however, "# noqa" will be interpreted by mkdocstrings as a markdown header. Now, I am wondering how to resolve that conflict between flake8 and mkdocstrings? | put the noqa comment on the end of the docstring -- it will apply to any line within the docstring without changing the string's contents (note: you need a sufficiently new flake8, this change is relatively recent (probably >=4.x)) def f(): """some docstring here something which causes a warning """ # noqa: ABC123 disclaimer: I am the current flake8 maintainer | 3 | 5 |
74,094,919 | 2022-10-17 | https://stackoverflow.com/questions/74094919/how-to-measure-ram-usage-of-each-part-of-code-in-python | I want to measure the RAM usage of each for loop in my code. I searched internet and find process = psutil.Process(os.getpid()) and print(process.memory_info().rss) for measuring RAM. But this code gets the pid of the whole process and not a specific part. Is there any way to measure RAM usage of each part of the code? For example in code below we have 3 for loops which fill 3 different dictionaries. I want to print RAM usage of each for loop and between the processing of each loop, if the RAM exceed a threshold, i want to break that for loop. dict1 = {} dict2 = {} dict3 = {} for i in range (200): do something with dict1 if RAM usage of this block exceeds 1GB then break this loop used: x Mb for i in range (500): do something with dict2 if RAM usage of this block exceeds 1GB then break this loop used: x2 Mb for i in range (800): do something with dict3 if RAM usage of this block exceeds 1GB then break this loop used: x3 Mb I appreciate answers which can help me a lot | You can read memory usage just before loop and then read it again inside loop. Then you can calculate loop memory usage as a difference between these two values. If it exceeds some threshold, break the loop. Here is sample code: import numpy as np import psutil import os process = psutil.Process(os.getpid()) a = [] threshhold = 64*1024*1024 base_memory_usage = process.memory_info().rss for i in range(10): memory_usage = process.memory_info().rss loop_memory_usage = memory_usage - base_memory_usage print(loop_memory_usage) if loop_memory_usage > threshhold: print('exceeded threshold') break a.append(np.random.random((1000, 1000))) Result: 0 8028160 16031744 24035328 32038912 40042496 48046080 56049664 64053248 72056832 exceeded threshold As you can see, before any action the loop uses 0 bytes of memory. | 3 | 3 |
74,091,764 | 2022-10-17 | https://stackoverflow.com/questions/74091764/string-literal-matching-between-words-in-two-different-dataframe-dfs-and-gener | I have two dataframes df1 and df2 df1 = University School Student first name last name nick name AAA Law John Mckenzie Stevie BBB Business Steve Savannah JO CCC Engineering Mark Justice Fre DDD Arts Stuart Little Rah EEE Life science Adam Johnson meh 120 rows X 5 columns df2 = Statement Stuart had a headache last nigh which was due to th…… Rah basically found a new found friend which lead to the…… Gerome got a brand new watch which was………. Adam was found chilling all through out his life…… Savannah is such a common name that…….. 3000 rows X1 columns AIM is to form df3 Match the string literal and iterate it through every cells in the columns "Student first name" , "Student last name" , "Student nick name" to produce the table below Df3 = Statement Matching University School Stuart had a headache last nigh which was due to th… Stuart DDD Arts Rah basically found a new found friend which lead to Rah DDD Arts Gerome got a brand new watch which was………. NA NA NA Adam was found chilling all through out his life…… Adam EEE Life science Savannah is such a common name that…….. Savannah BBB Business 3000 rows X 4 columns | You can melt and merge: import re df1_melt = df1.melt(['University', 'School'], value_name='Match') regex = '|'.join(map(re.escape, df1_melt['Match'])) out = df2.join( df1_melt[['Match', 'University', 'School']] .merge(df2['Statement'] .str.extract(f'({regex})', expand=False) .rename('Match'), how='right', on='Match' ) ) output: Statement Match University School 0 Stuart had a headache last nigh which was due to the Stuart DDD Arts 1 Rah basically found a new found friend which lead to the Rah DDD Arts 2 Gerome got a brand new watch which was NaN NaN NaN 3 Adam was found chilling all through out his life Adam EEE Life science 4 Savannah is such a common name that Savannah BBB Business | 3 | 2 |
74,092,203 | 2022-10-17 | https://stackoverflow.com/questions/74092203/merge-two-dataframes-with-common-keys-and-adding-unique-columns | I have read through the pandas guide, especially merge and join sections, but still can not figure it out. Basically, this is what I want to do: Let's say we have two data frames: left = pd.DataFrame( { "key": ["K0", "K1", "K2", "K3"], "A": ["A0", "A1", "A2", "A3"], "C": ["B0", "B1", np.nan, np.nan]}) right = pd.DataFrame( { "key": ["K2"], "A": ["A8"], "D": ["D3"]}) I want to merge them based off on "key" and update the values, filling where necessary and replacing old values if there are any. So it should look like this: key A C D 0 K0 A0 B0 NaN 1 K1 A1 B1 NaN 2 K2 A8 NaN D3 3 K3 A3 NaN NaN | You can use combine_first with set_index to accomplish your goal here. right.set_index('key').combine_first(left.set_index('key')).reset_index() Output: key A C D 0 K0 A0 B0 NaN 1 K1 A1 B1 NaN 2 K2 A8 NaN D3 3 K3 A3 NaN NaN | 3 | 5 |
74,082,173 | 2022-10-15 | https://stackoverflow.com/questions/74082173/python-how-to-merge-column-values-from-one-df-to-match-rows-in-another-df | Can someone please show me how to merge df2 to df1 for just the matching cities, then use the df2's average monthly temperature columns to match to each city's date range (according to the month) into a new column called 'Temp' in df1? These are sample data of much larger files for state and cities in Brazil. df1 State City Dates 0 AC Rio Branco 3/20/2020 1 BA Salvador 5/2/2020 2 CE Fortaleza 4/6/2020 3 AC Rio Branco 5/30/2020 df2: has average monthly temperatures for each city. State City MAR APR MAY 0 CE Fortaleza 75.6 72.7 69.4 1 ES Vitória 69.1 64.6 62.7 2 AC Rio Branco 72.8 70.5 68.9 3 BA Salvador 74.6 71.3 70.1 Desired output: df1 with new column 'Temp' State City Dates Temp 0 AC Rio Branco 3/20/2020 72.8 1 BA Salvador 5/2/2020 70.1 2 CE Fortaleza 4/6/2020 72.7 3 AC Rio Branco 5/30/2020 68.9 | You can use a merge after reshaping df2 to long form with melt and extracting the month abbreviation with to_datetime and strftime: (df1.assign(month=pd.to_datetime(df1['Dates']).dt.strftime('%b').str.upper()) .merge(df2.melt(['State', 'City'], var_name='month', value_name='Temp'), on=['State', 'City', 'month']) #.drop(columns='month') # uncomment to remove the column ) output: State City Dates month Temp 0 AC Rio Branco 3/20/2020 MAR 72.8 1 BA Salvador 5/2/2020 MAY 70.1 2 CE Fortaleza 4/6/2020 APR 72.7 3 AC Rio Branco 5/30/2020 MAY 68.9 | 3 | 1 |
74,058,548 | 2022-10-13 | https://stackoverflow.com/questions/74058548/why-the-listview-not-showing-data-in-the-template | I'm working on my first Django project (the final project for codecademy's Django class) and I'm making webpages to show the inventory and menu that a restaurant has. I made the model, view, template, etc. for inventory and it displays the ListView perfectly. I did the same thing for my menu and it doesn't work. The page loads but the table that's supposed to output data is empty. Any insight on what might be going wrong? PS I'm new to programming and this is my first stackoverflow post so forgive any formatting errors or other faux pas ## views.py from django.http import HttpResponse from django.shortcuts import render from .models import Inventory, MenuItem, RecipeRequirement, Purchase from django.views.generic.edit import CreateView, DeleteView, UpdateView from django.views.generic import ListView # Create your views here. def index(request): return render(request, "index.html") class InventoryList(ListView): template_name = "inventory.html" model = Inventory class MenuList(ListView): template_name = "menu.html" model = MenuItem Inventory (below) works fine! :) {% extends './base.html' %} {% block content %} <h2>Inventory</h2> <table id="inventory"> <tr> <th>Ingredient</th> <th>Price</th> <th>Units Available</th> </tr> {% for ingredient in inventory_list %} <tr> <tr> <td>{{ ingredient.ingredient_name }}</td> <td>{{ ingredient.price }}</td> <td>{{ ingredient.units_avail }}</td> </tr> {% endfor %} </table> {% endblock %} This one (Menu) is the problematic one :( {% extends './base.html' %} {% block content %} <h2>Menu</h2> <table id="menu"> <tr> <th>Item</th> <th>Price</th> <th>In Stock?</th> </tr> {% for food in menu_list %} <tr> <tr> <td>{{ food.menu_item_name }}</td> <td>{{ food.price }}</td> <td>{{ food.available }}</td> </tr> {% endfor %} </table> {% endblock %} Models below from django.db import models from django.forms import DateTimeField # Create your models here. class Inventory(models.Model): ingredient_name = models.CharField(max_length=30) price = models.DecimalField(max_digits=5, decimal_places=2) units_avail = models.IntegerField() def __str__(self): return self.ingredient_name + " avail: " + str(self.units_avail) class MenuItem(models.Model): menu_item_name = models.CharField(max_length=100) price = models.DecimalField(max_digits=5, decimal_places=2) def __str__(self): return self.menu_item_name + " with Price: " + str(self.price) def available(self): return all(recipe_req.enough() for recipe_req in self.reciperequirement_set.all()) class RecipeRequirement(models.Model): ingredient = models.ForeignKey(Inventory, on_delete=models.CASCADE) menu_item = models.ForeignKey(MenuItem, on_delete=models.CASCADE) quantity = models.IntegerField() def __str__(self): return self.ingredient.ingredient_name + " in " + self.menu_item.menu_item_name def enough(self): return self.quantity <= self.ingredient.units_avail class Purchase(models.Model): menu_item = models.ForeignKey(MenuItem, on_delete=models.CASCADE) timestamp = models.DateTimeField() def __str__(self): return self.menu_item.menu_item_name + " at " + self.timestamp URLs below from django.urls import path from . import views urlpatterns = [ path("", views.index, name="index"), path("inventory", views.InventoryList.as_view(), name="inventory_list"), path("menu", views.MenuList.as_view(), name="menu_list"), ] | I could see one problem, you have one extra <tr> tag open in both the templates just below the loop. 
There also seems to be a problem with the available method you defined manually on the MenuItem model, so I removed it from the template; try it without that column first. I'd also recommend setting context_object_name (which defaults to object_list), so: views.py class InventoryList(ListView): template_name = "inventory.html" model = Inventory context_object_name="inventories" class MenuList(ListView): template_name = "menu.html" model = MenuItem context_object_name="menu_list" inventory.html template {% extends './base.html' %} {% block content %} <h2>Inventory</h2> <table id="inventory"> <tr> <th>Ingredient</th> <th>Price</th> <th>Units Available</th> </tr> {% for ingredient in inventories %} <tr> <td>{{ ingredient.ingredient_name }}</td> <td>{{ ingredient.price }}</td> <td>{{ ingredient.units_avail }}</td> </tr> {% endfor %} </table> {% endblock %} menu.html template {% extends './base.html' %} {% block content %} <h2>Menu</h2> <table id="menu"> <tr> <th>Item</th> <th>Price</th> <th>In Stock?</th> </tr> {% for food in menu_list %} <tr> <td>{{ food.menu_item_name }}</td> <td>{{ food.price }}</td> </tr> {% endfor %} </table> {% endblock %} Note: it's recommended to end every route with a trailing /, so your urls.py should be: urls.py from django.urls import path from . import views urlpatterns = [ path("", views.index, name="index"), path("inventory/", views.InventoryList.as_view(), name="inventory_list"), path("menu/", views.MenuList.as_view(), name="menu_list"), ] Note: by convention, class-based views in Django are named with the model as a prefix and the view type as a suffix, so you may want to rename InventoryList and MenuList to InventoryListView and MenuItemListView respectively. | 3 | 1 |
74,079,204 | 2022-10-15 | https://stackoverflow.com/questions/74079204/spllit-string-and-assign-to-two-columns-using-pandas-assign-method | If using the following DataFrame I can split the "ccy" string and create two new columns: df_so = pd.DataFrame.from_dict({0: 'gbp_usd', 1: 'eur_usd', 2: 'usd_cad', 3: 'usd_jpy', 4: 'eur_usd', 5: 'eur_usd'},orient='index',columns=["ccy"]) df_so[['base_ccy', 'quote_ccy']] = df_so['ccy'].str.split('_', 1, expand=True) giving the following DataFrame. index ccy base_ccy quote_ccy 0 gbp_usd gbp usd 1 eur_usd eur usd 2 usd_cad usd cad 3 usd_jpy usd jpy 4 eur_usd eur usd 5 eur_usd eur usd How do I do the same str.split using DataFrame.assign within my tweak function below ? I can do this with a list comprehension to get the same result, but is there a simpler/cleaner way using assign?: def tweak_df (df_): return (df_.assign(base_currency= lambda df_: [i[0] for i in df_['ccy'].str.split('_', 1)], quote_currency= lambda df_: [i[1] for i in df_['ccy'].str.split('_', 1)], ) ) tweak_df(df_so) Yields same result as the table above but the code is not very intuitive and simple is better than complex. | I actually think the first version you suggested is the best. df_so[['base_ccy', 'quote_ccy']] = df_so['ccy'].str.split('_', 1, expand=True) If you want to do it using assign, you can do it like this utilising the rename function. df_so.assign(**df_so['ccy'].str.split('_', n=1, expand=True) .rename(columns={0: "base_ccy", 1: "quote_ccy"})) | 3 | 1 |
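One small note on the splitting code above: recent pandas releases deprecate passing the number of splits to str.split positionally, so it is safer to spell it as the keyword n. A minimal sketch on a frame like the one in the question:

```python
import pandas as pd

df_so = pd.DataFrame({"ccy": ["gbp_usd", "eur_usd", "usd_cad", "usd_jpy"]})

# pass the split count as the keyword `n` to stay compatible with newer pandas
df_so[["base_ccy", "quote_ccy"]] = df_so["ccy"].str.split("_", n=1, expand=True)
print(df_so)
```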
74,076,140 | 2022-10-15 | https://stackoverflow.com/questions/74076140/sdk-is-not-defined-for-run-configuration | When I try to run my project in PyCharm, I get the error: SDK is not defined for Run Configuration. I tried to set a new interpreter and tried everything. What does "SDK" mean and where can I configure it? | I just had this same issue (in PyCharm, the "SDK" is essentially the Python interpreter that the run configuration points to). What worked for me was to go into "Edit Configurations", delete the run configuration that had been copied over (in my case, from the original PC), and create my own configuration with basically the same inputs as before. | 17 | 19 |
74,080,581 | 2022-10-15 | https://stackoverflow.com/questions/74080581/building-dictionary-of-unique-ids-for-pairs-of-matching-strings | I have a dataframe like this #Test dataframe import pandas as pd import numpy as np #Build df titles = {'Title': ['title1', 'cat', 'dog']} references = {'References': [['donkey','chicken'],['title1','dog'],['bird','snake']]} df = pd.DataFrame({'Title': ['title1', 'cat', 'dog'], 'References': [['donkey','chicken'],['title1','dog'],['bird','snake']]}) #Insert IDs for UNIQUE titles title_ids = {'IDs':list(np.arange(0,len(df)) + 1)} df['IDs'] = list(np.arange(0,len(df)) + 1) df = df[['Title','IDs','References']] and I want to generate IDs for the references column that looks like the data frame below. If there is a matching between the strings, assign the same ID as in the IDs column and if not, assign a new unique ID. My first attempt is using the function #Matching function def string_match(string1,string2): if string1 == string2: a = 1 else: a = 0 return a and to loop over each string/title combination but this gets tricky with multiple for loops and if statements. Is there a better way I can do this that is more pythonic? | # Explode to one reference per row references = df["References"].explode() # Combine existing titles with new title from References titles = pd.concat([df["Title"], references]).unique() # Assign each title an index number mappings = {t: i + 1 for i, t in enumerate(titles)} # Map the reference to the index number and convert to list df["RefIDs"] = references.map(mappings).groupby(level=0).apply(list) | 4 | 2 |
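To make the accepted approach concrete, here is a self-contained run on the example frame from the question; the ID numbering simply follows the order in which titles and previously unseen references are first encountered:

```python
import pandas as pd

df = pd.DataFrame({
    "Title": ["title1", "cat", "dog"],
    "References": [["donkey", "chicken"], ["title1", "dog"], ["bird", "snake"]],
})

references = df["References"].explode()
titles = pd.concat([df["Title"], references]).unique()
mappings = {t: i + 1 for i, t in enumerate(titles)}

df["IDs"] = df["Title"].map(mappings)          # 1, 2, 3 -- same as the question's column
df["RefIDs"] = references.map(mappings).groupby(level=0).apply(list)
print(df)
# References that match a title reuse its ID; new names get fresh IDs, so
# RefIDs comes out as [4, 5], [1, 3], [6, 7]
```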
74,079,778 | 2022-10-15 | https://stackoverflow.com/questions/74079778/bin-sh-1-crond-not-found-when-cron-already-installed | I have a dockerfile FROM python:3.9.12-bullseye COPY . . RUN apt-get update -y RUN apt-get install cron -y RUN crontab crontab CMD python task.py && crond -f And a crontab * * * * * python /task.py I keep running into the error /bin/sh: 1: crond: not found when I run the docker file. Docker build is fine. Anyone knows why this happens? If I use python:3.6.12-alpine everything works fine but with python:3.9.12-bullseye, i keep getting that error. | If you have a look for debian series cron.service, you could see next: [Unit] Description=Regular background program processing daemon Documentation=man:cron(8) After=remote-fs.target nss-user-lookup.target [Service] EnvironmentFile=-/etc/default/cron ExecStart=/usr/sbin/cron -f $EXTRA_OPTS IgnoreSIGPIPE=false KillMode=process Restart=on-failure [Install] WantedBy=multi-user.target From ExecStart=/usr/sbin/cron -f $EXTRA_OPTS, I guess unlike alpine, the main program on such debian series linux could be cron not crond. (PS: python:3.9.12-bullseye based on debian, while python:3.6.12-alpine based on alpine) | 3 | 6 |
74,074,484 | 2022-10-14 | https://stackoverflow.com/questions/74074484/pandas-read-json-skip-first-lines-of-the-file | Say I have a json file with lines of data like this : file.json : {'ID':'098656', 'query':'query_file.txt'} {'A':1, 'B':2} {'A':3, 'B':6} {'A':0, 'B':4} ... where the first line is just explanations about the given file and how it was created. I would like to open it with something like : import pandas as pd df = pd.read_json('file.json', lines=True) However, how do I read the data starting on line 3 ? I know that pd.read_csv has a skiprows argument, but it does not look like pd.read_json has one. I would like something returning a DataFrame with the columns A and B only, and possibly better than dropping the first line and ID and query columns after loading the whole file. | We can pass into pandas.read_json a file handler as well. If before that we read part of the data, then only the rest will be converted to DataFrame. def read_json(file, skiprows=None): with open(file) as f: if skiprows: f.readlines(skiprows) df = pd.read_json(f, lines=True) return df | 3 | 3 |
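One caveat about the helper above: file.readlines(hint) treats its argument as an approximate byte count rather than a number of lines, so it happens to skip the single metadata line in this example but is not reliable for larger skiprows values. A sketch that skips an exact number of lines instead (hypothetical helper name):

```python
import pandas as pd

def read_json_skip(path, skiprows=0):
    """Skip exactly `skiprows` lines, then parse the rest as JSON Lines."""
    with open(path) as f:
        for _ in range(skiprows):
            f.readline()
        return pd.read_json(f, lines=True)

# df = read_json_skip("file.json", skiprows=1)  # leaves only the A/B records
```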
74,056,454 | 2022-10-13 | https://stackoverflow.com/questions/74056454/defining-a-quadratic-function-with-numpy-meshgrid | Let's consider a function of two variables f(x1, x2) , where x1 spans over a vector v1 and x2 spans over a vector v2. If f(x1, x2) = np.exp(x1 + x2), we can represent this function in Python as a matrix by means of the command numpy.meshgrid like this: xx, yy = numpy.meshgrid(v1, v2) M = numpy.exp(xx + yy) This way, M is a representation of the function f over the cartesian product "v1 x v2", since M[i,j] = f(v1[i],v2[j]). But this works because both sums and exponential work in parallel componentwise. My question is: if my variable is x = numpy.array([x1, x2]) and f is a quadratic function f(x) = x.T @ np.dot(Q, x), where Q is a 2x2 matrix, how can I do the same thing with the meshgrid function (i.e. calculating all the values of the function f on "v1 x v2" at once)? Please let me know if I should include more details! | def quad(x, y, q): """Return an array A of a shape (len(x), len(y)) with values A[i,j] = [x[i],y[j]] @ q @ [x[i],y[j]] x, y: 1d arrays, q: an array of shape (2,2)""" from numpy import array, meshgrid, einsum a = array(meshgrid(x, y)).transpose() return einsum('ijk,kn,ijn->ij', a, q, a) Notes meshgrid produces 2 arrays of a shape (len(y), len(x)), where first one is with x values along the second dimension. If we apply to this pair np.array then a 3d array of shape (2, len(y), len(x)) will be produced. With transpose we obtain an array, where an element indexed by [i,j,k] is x[i] if k==0 else y[j], where k is 0 or 1, i.e. first or second array from meshgrid. With 'ijk,kn,ijn->ij' we tell einsum to return the sum written bellow for each i, j: sum(a[i,j,k]*q[k,n]*a[i,j,n] for k in range(2) for n in range(2)) Note, that a[i,j] == [x[i], y[j]]. | 3 | 3 |
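As a quick sanity check of the einsum expression in the accepted answer, the same computation can be compared against a brute-force double loop on small illustrative inputs:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([-1.0, 0.5])
q = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# same steps as the accepted answer's quad(x, y, q)
a = np.array(np.meshgrid(x, y)).transpose()
A = np.einsum('ijk,kn,ijn->ij', a, q, a)          # shape (len(x), len(y))

# reference: A[i, j] should equal [x[i], y[j]] @ q @ [x[i], y[j]]
ref = np.array([[np.array([xi, yj]) @ q @ np.array([xi, yj]) for yj in y] for xi in x])
assert np.allclose(A, ref)
```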
74,069,659 | 2022-10-14 | https://stackoverflow.com/questions/74069659/conditional-types-with-mypy | I have the following code snippet: from typing import TypedDict class Super(TypedDict): foo: int class SubA(Super): bar: int class SubB(Super): zap: int def print_props(inp: Super, key: str): print(inp[key]) When I call the method print_props with either an instance of SubA or SubB it would be valid as they are sub types of Super. But mypy will complain about inp[key] as the key must be literal "foo". Is it possible to give mypy hints so that it is capable of deciding which keys are valid? For example: "When print_props is called with an instance of SubB only "foo" and "zap" are valid." I took a look at generics; I think it is possible to declare a type variable that is restricted to sub types of Super, but is it possible to express the dependency between the concrete type of the type variable (SubA or SubB) and the literal values key should then be restricted to? | Overloads with Literal could well do it, though I do wonder if a different design would be better. I'm a bit concerned about the increasingly frequent usage of overload and Literal in SO answers. They both suggest a design smell to me @overload def printMyProps(input: SubA, key: Literal["foo", "bar"]) -> None: ... @overload def printMyProps(input: SubB, key: Literal["foo", "zap"]) -> None: ... def printMyProps(input: SubA | SubB, key: Literal["foo", "bar", "zap"]) -> None: print(input[key]) # type: ignore I've used type: ignore because it's a short function and I can't use isinstance on TypedDict. TBH overload implementations often require type hacks. The API works as intended though | 4 | 5 |
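For illustration, here is a compact, self-contained sketch of the overload approach together with calls showing what mypy accepts and rejects (names and values are hypothetical; the last call is commented out because it would also fail at runtime):

```python
from typing import Literal, TypedDict, Union, overload

class Super(TypedDict):
    foo: int

class SubA(Super):
    bar: int

class SubB(Super):
    zap: int

@overload
def print_props(inp: SubA, key: Literal["foo", "bar"]) -> None: ...
@overload
def print_props(inp: SubB, key: Literal["foo", "zap"]) -> None: ...
def print_props(inp: Union[SubA, SubB], key: str) -> None:
    print(inp[key])  # type: ignore

a = SubA(foo=1, bar=2)
b = SubB(foo=1, zap=3)

print_props(a, "bar")    # OK: matches the SubA overload
print_props(b, "zap")    # OK: matches the SubB overload
# print_props(b, "bar")  # mypy error: no overload variant matches (SubB, "bar")
```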
74,067,547 | 2022-10-14 | https://stackoverflow.com/questions/74067547/could-not-find-poetry-1-2-2-linux-sha256sum-file | I am trying to update my version of Poetry to 1.2.*, but when running poetry self update I get the error Could not find poetry-1.2.2-linux.sha256sum file... I can't figure out how to try and update Poetry to an earlier version for which hopefully the checksum exists. | You are trying to update a Poetry installation that was created with the old get-poetry.py installer. That installer has been deprecated for more than a year, and updating via poetry self update is not possible for such installations. Uninstall Poetry and reinstall it with the recommended installer. More information is available at https://python-poetry.org/blog/announcing-poetry-1.2.0/ | 14 | 18 |
74,060,371 | 2022-10-13 | https://stackoverflow.com/questions/74060371/gym-super-mario-bros-7-3-0-valueerror-not-enough-values-to-unpack-expected | I'm running Python3 (3.8.10) and am attempting a tutorial with the gym_super_mario_bros (7.3.0) and nes_py libraries. I followed various tutorials code and tried on multiple computers but get an error. I have tried to adjust some of the parameters like adding a 'truncated' variable to the list of values to return. As this is a tutorial level example I'm curious what is wrong. It looks like something with env.step(). Below is the code: from nes_py.wrappers import JoypadSpace from gym_super_mario_bros.actions import SIMPLE_MOVEMENT env = gym_super_mario_bros.make('SuperMarioBros-v0') env = JoypadSpace(env, SIMPLE_MOVEMENT) done = True for step in range(1000): if done: env.reset() state, reward, done, info = env.step(env.action_space.sample()) env.render() env.close() The error I get is below: /home/d/.local/lib/python3.8/site-packages/gym/envs/registration.py:555: UserWarning: WARN: The environment SuperMarioBros-v0 is out of date. You should consider upgrading to version `v3`. logger.warn( /home/d/.local/lib/python3.8/site-packages/gym/utils/passive_env_checker.py:195: UserWarning: WARN: The result returned by `env.reset()` was not a tuple of the form `(obs, info)`, where `obs` is a observation and `info` is a dictionary containing additional information. Actual type: `<class 'numpy.ndarray'>` logger.warn( /home/d/.local/lib/python3.8/site-packages/gym/utils/passive_env_checker.py:219: DeprecationWarning: WARN: Core environment is written in old step API which returns one bool instead of two. It is recommended to rewrite the environment with new step API. logger.deprecation( Traceback (most recent call last): File "mario.py", line 12, in <module> state, reward, done, info = env.step(env.action_space.sample()) File "/home/d/.local/lib/python3.8/site-packages/nes_py/wrappers/joypad_space.py", line 74, in step return self.env.step(self._action_map[action]) File "/home/d/.local/lib/python3.8/site-packages/gym/wrappers/time_limit.py", line 50, in step observation, reward, terminated, truncated, info = self.env.step(action) ValueError: not enough values to unpack (expected 5, got 4) Any guidance is appreciated, thank you! | Move to "...../python3.8/site-packages/gym/wrappers/time_limit.py".And delete all "truncated" | 4 | 5 |
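For context, the ValueError comes from an API mismatch rather than from the tutorial code itself: nes_py environments still return the old Gym 4-tuple from step(), while gym 0.26+ expects a 5-tuple with separate terminated/truncated flags, and the crash happens inside gym's TimeLimit wrapper. Aligning package versions (for example, a gym release that still uses the single done flag) is the usual alternative to editing installed files; for loops you control yourself, a small compatibility helper can tolerate either tuple shape — a hedged sketch:

```python
def step_compat(env, action):
    """Unpack env.step() results from either the old (4-tuple) or new (5-tuple) Gym API.

    Note: this only helps in your own loop; it cannot fix mismatches that happen
    inside gym's own wrappers, as in the traceback above.
    """
    result = env.step(action)
    if len(result) == 5:  # new API: obs, reward, terminated, truncated, info
        obs, reward, terminated, truncated, info = result
        return obs, reward, terminated or truncated, info
    return result         # old API: obs, reward, done, info

# usage: state, reward, done, info = step_compat(env, env.action_space.sample())
```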
74,062,326 | 2022-10-13 | https://stackoverflow.com/questions/74062326/should-an-application-check-for-uuid-v4-duplicated | I work in my app with a database. This database stores data with a randomly generated ID of the type UUID v4. Now I'm wondering, how common is it for a big (let's be optimistic :D ) app with many users to have a duplicate ID? The ID is the primary key in the SQL database so it could only crash one API call. Is it a clean practice to check if the UUID exists (and thus catch a possible crash in the backend) or is it redundant as it's very unlikely to happen? Especially considering: random in python is not that random there are 2¹²² combinations EDIT: Based on the comments it seems unnecessary to check for duplicates. Thanks! | This shouldn't be a problem in practice due to the very low probability of collisions. See the Wikipedia article on UUID4 collision For example, the number of random version-4 UUIDs which need to be generated in order to have a 50% probability of at least one collision is 2.71 quintillion This number is equivalent to generating 1 billion UUIDs per second for about 85 years. A file containing this many UUIDs, at 16 bytes per UUID, would be about 45 exabytes. NB. UUID1 and UUID2 have a time component which make it impossible to have collisions if the UUIDs are generated at a reasonable enough frequency. | 3 | 3 |
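To put numbers on "very unlikely", the birthday-problem bound is easy to compute directly; note also that the standard library's uuid.uuid4() draws its randomness from os.urandom(), not from the random module, so the quality of Python's default PRNG is not a concern here. A small sketch:

```python
import math

def uuid4_collision_probability(n: int) -> float:
    """Approximate probability of at least one collision among n random
    version-4 UUIDs (122 random bits): 1 - exp(-n^2 / 2^123)."""
    return -math.expm1(-(n * n) / 2.0 ** 123)

# e.g. one billion new rows per day for a year:
n = 10 ** 9 * 365
print(f"{uuid4_collision_probability(n):.2e}")  # on the order of 1e-14
```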
74,054,138 | 2022-10-13 | https://stackoverflow.com/questions/74054138/fastest-way-to-expand-the-values-of-a-numpy-matrix-in-diagonal-blocks | I'm searching for a fast way for resize the matrix in a special way, without using for-loops: I have a squared Matrix: matrix = [[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9,10], [11,12,13,14,15], [16,17,18,19,20], [21,22,23,24,25]] and my purpose is to resize it 3 (or n) times, where the values are diagonal blocks in the matrix and other values are zeros: goal_matrix = [[ 1, 0, 0, 2, 0, 0, 3, 0, 0, 4, 0, 0, 5, 0, 0], [ 0, 1, 0, 0, 2, 0, 0, 3, 0, 0, 4, 0, 0, 5, 0], [ 0, 0, 1, 0, 0, 2, 0, 0, 3, 0, 0, 4, 0, 0, 5], [ 6, 0, 0, 7, 0, 0, 8, 0, 0, 9, 0, 0,10, 0, 0], [ 0, 6, 0, 0, 7, 0, 0, 8, 0, 0, 9, 0, 0,10, 0], [ 0, 0, 6, 0, 0, 7, 0, 0, 8, 0, 0, 9, 0, 0,10], [11, 0, 0,12, 0, 0,13, 0, 0,14, 0, 0,15, 0, 0], [ 0,11, 0, 0,12, 0, 0,13, 0, 0,14, 0, 0,15, 0], [ 0, 0,11, 0, 0,12, 0, 0,13, 0, 0,14, 0, 0,15], [16, 0, 0,17, 0, 0,18, 0, 0,19, 0, 0,20, 0, 0], [ 0,16, 0, 0,17, 0, 0,18, 0, 0,19, 0, 0,20, 0], [ 0, 0,16, 0, 0,17, 0, 0,18, 0, 0,19, 0, 0,20], [21, 0, 0,22, 0, 0,23, 0, 0,24, 0, 0,25, 0, 0], [ 0,21, 0, 0,22, 0, 0,23, 0, 0,24, 0, 0,25, 0], [ 0, 0,21, 0, 0,22, 0, 0,23, 0, 0,24, 0, 0,25]] It should do something like this question, but without unnecessary zero padding. Is there any mapping, padding or resizing function for doing this in a fast way? | IMO, it is inappropriate to reject the for loop blindly. Here I provide a solution without the for loop. When n is small, its performance is better than that of @MichaelSzczesny and @SalvatoreDanieleBianco solutions: def mechanic(mat, n): ar = np.zeros((*mat.shape, n * n), mat.dtype) ar[..., ::n + 1] = mat[..., None] return ar.reshape( *mat.shape, n, n ).transpose(0, 3, 1, 2).reshape([s * n for s in mat.shape]) This solution obtains the expected output through a slice assignment, then transpose and reshape, but copies will occur in the last step of reshaping, making it inefficient when n is large. After a simple test, I found that the solution that simply uses the for loop has the best performance: def mechanic_for_loop(mat, n): ar = np.zeros([s * n for s in mat.shape], mat.dtype) for i in range(n): ar[i::n, i::n] = mat return ar Next is a benchmark test using perfplot. 
The test functions are as follows: import numpy as np def mechanic(mat, n): ar = np.zeros((*mat.shape, n * n), mat.dtype) ar[..., ::n + 1] = mat[..., None] return ar.reshape( *mat.shape, n, n ).transpose(0, 3, 1, 2).reshape([s * n for s in mat.shape]) def mechanic_for_loop(mat, n): ar = np.zeros([s * n for s in mat.shape], mat.dtype) for i in range(n): ar[i::n, i::n] = mat return ar def michael_szczesny(mat, n): return np.einsum( 'ij,kl->ikjl', mat, np.eye(n, dtype=mat.dtype) ).reshape([s * n for s in mat.shape]) def salvatore_daniele_bianco(mat, n): repeated_matrix = mat.repeat(n, axis=0).repeat(n, axis=1) col_ids, row_ids = np.meshgrid( np.arange(repeated_matrix.shape[0]), np.arange(repeated_matrix.shape[1]) ) repeated_matrix[(col_ids % n) - (row_ids % n) != 0] = 0 return repeated_matrix functions = [ mechanic, mechanic_for_loop, michael_szczesny, salvatore_daniele_bianco ] Resize times unchanged, array size changes: if __name__ == '__main__': from itertools import accumulate, repeat from operator import mul from perfplot import bench bench( functions, list(accumulate(repeat(2, 11), mul)), lambda n: (np.arange(n * n).reshape(n, n), 5), xlabel='ar.shape[0]' ).show() Output: Resize times changes, array size unchanged: if __name__ == '__main__': from itertools import accumulate, repeat from operator import mul from perfplot import bench ar = np.arange(25).reshape(5, 5) bench( functions, list(accumulate(repeat(2, 11), mul)), lambda n: (ar, n), xlabel='resize times' ).show() Output: | 4 | 4 |
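One further option not covered in the answers above: the requested block layout is exactly a Kronecker product with an identity matrix, so NumPy's built-in np.kron produces it in one line (relative performance versus the hand-written versions will depend on sizes, so benchmark if it matters):

```python
import numpy as np

matrix = np.arange(1, 26).reshape(5, 5)   # the 5x5 matrix from the question
n = 3

goal = np.kron(matrix, np.eye(n, dtype=matrix.dtype))
print(goal[:3, :6])
# [[1 0 0 2 0 0]
#  [0 1 0 0 2 0]
#  [0 0 1 0 0 2]]
```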
74,054,217 | 2022-10-13 | https://stackoverflow.com/questions/74054217/how-do-you-put-the-x-axis-labels-on-the-top-of-the-heatmap-created-with-seaborn | I have created a heatmap using the seaborn and matplotlib package in python, and while it is perfectly suited for my current needs, I really would prefer to have the labels on the x-axis of the heatmap to be placed at the top of the plot, rather than at the bottom (which seems to be its default). So an abridged form of my data looks like this: NP NP1 NP2 NP3 NP4 NP5 identifier A1BG~P04217 -0.094045 0.012229 0.102279 1.319618 0.002383 A2M~P01023 -0.805089 -0.477339 -0.351341 0.089735 -0.473815 AARS1~P49588 0.081827 -0.099849 -0.287426 0.101588 0.136366 ABCB6~Q9NP58 0.109911 0.458039 -0.039325 -0.484872 1.905586 ABCC1~I3L4X2 -0.560155 0.580285 0.012868 0.291303 -0.407900 ABCC4~O15439 0.055264 0.138630 -0.204665 0.191241 0.304999 ABCE1~P61221 -0.510108 -0.059724 -0.233365 0.078956 -0.651327 ABCF1~Q8NE71 -0.348526 -0.135414 -0.390021 -0.190644 -0.276303 ABHD10~Q9NUJ1 0.237959 -2.060834 0.325901 -0.778036 -4.046345 ABHD11~Q8NFV4 0.294587 1.193258 -0.797294 -0.148064 -1.153391 And when I use the following code: import seaborn as sns import matplotlib as plt fig, ax = plt.subplots(figsize=(10,30)) ax = sns.heatmap(df_example, annot=True, xticklabels=True) I get this kind of plot: https://imgpile.com/i/T3zPH1 I should note that the this plot was made from the abridged dataframe above, the actual dataframe has thousands of identifiers, making it very long. But as you can see, the labels on the x axis only appear at the bottom. I have been trying to get them to appear on the top, but seaborn doesn't seem to allow this kind of formatting. So I have also tried using plotly express, but while I solve the issue of placing my x-axis labels on top, I have been completely unable to format the heat map as I had before using seaborn. The following code: import plotly.express as px fig = px.imshow(df_example, width= 500, height=6000) fig.update_xaxes(side="top") fig.show() yields this kind of plot: https://imgpile.com/i/T3zF42. I have tried many times to reformat it using the documentation from plotly (https://plotly.com/python/heatmaps/), but I can't seem to get it to work. When one thing is fixed, another problem arises. I really just want to keep using the seaborn based code as above, and just fix the x-axis labels. I'm also happy to have the x-axis label at both the top and bottom of the plot, but I can't get that work presently. Can someone advise me on what to do here? | Ok, so I did a bit more research, and it turns out you can add the follow code with the seaborn approach: plt.tick_params(axis='both', which='major', labelsize=10, labelbottom = False, bottom=False, top = False, labeltop=True) | 3 | 6 |
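Putting the question's seaborn call and the accepted tick_params fix together, a minimal self-contained sketch looks like this (random data stands in for the real frame, and the same keyword arguments can be passed to ax.tick_params instead of plt.tick_params):

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df_example = pd.DataFrame(np.random.rand(10, 5),
                          columns=[f"NP{i}" for i in range(1, 6)])

fig, ax = plt.subplots(figsize=(6, 8))
sns.heatmap(df_example, annot=True, xticklabels=True, ax=ax)
ax.tick_params(axis="x", which="major", labelsize=10,
               labelbottom=False, bottom=False, top=False, labeltop=True)
plt.show()
```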
74,052,308 | 2022-10-13 | https://stackoverflow.com/questions/74052308/how-to-replace-certain-keys-and-values-of-a-json-with-list-in-python | {'Functions': {0: {'Function-1': {0: {'Function': 'dd', 'Function2': 'd3'}}}}} From the above json i would like to remove the {0: } item and add a list in that place so that the value is enclosed in a list like shown in Desired Output. Please note the above json is an put of a jsondiff. Desired output {"Functions":[{"Function-1":[{"Function":"dd","Function2":"d3"}]}]} The below is my current code : from jsondiff import diff json1 = json.loads(""" { "Name": "Temperature \u0026 Pressure Measurement", "Id": "0x0102", "Channels": [ { "Data": [ { "Channel0": [ { "Enable": 0, "Unit": "Celsius" } ], "Channel1": [ { "Enable": 0, "Unit": "Celsius" } ], "Channel2": [ { "Enable": 0, "Unit": "Celsius" } ] } ] } ], "Events": [ { "event1": 0, "event2": 0 } ], "Diagnostics": [ { "diag1": 0, "diag2": 0 } ], "Functions": [ { "Function-1": [ { "Function": "2d" } ] } ] } """) json2 = json.loads(""" { "Name": "Temperature \u0026 Pressure Measurement", "Id": "0x0102", "Channels": [ { "Data": [ { "Channel0": [ { "Enable": 0, "Unit": "Celsius" } ], "Channel1": [ { "Enable": 0, "Unit": "Celsius" } ], "Channel2": [ { "Enable": 0, "Unit": "Celsius" } ] } ] } ], "Events": [ { "event1": 0, "event2": 0 } ], "Diagnostics": [ { "diag1": 0, "diag2": 0 } ], "Functions": [ { "Function-1": [ { "Function": "dd", "Function2":"d3" } ] } ] } """) # This gives the difference between the json and this is what we want to operate on ! the 'res' may vary based on the changes made to json2 res = str(diff(json1, json2)) print('----------------------') print('------- DIFF -------') print('----------------------') print(f'{res}') print('----------------------') print('----------------------') print('') print('----------------------') print('---Expected Output---') print('----------------------') print('{"Functions":[{"Function-1":[{"Function":"dd","Function2":"d3"}]}]}') print('----------------------') print('----------------------') EDIT:: To be more clear the res variable will change always. So i think it cannot always be achieved by using string replace because number of bracket may change based on the difference from json1 and json2 | Code: from ndicts.ndicts import NestedDict STEP 1// Convert nested dict to flat dict fd = list(pd.json_normalize(Mydict).T.to_dict().values())[0] #Output {'Functions.0.Function-1.0.Function': 'dd', 'Functions.0.Function-1.0.Function2': 'd3'} STEP 2// Convert flat dictionary to nested by split/removing the 0 nd = NestedDict() for key, value in fd.items(): n_key = tuple(key.split(".0.")) nd[n_key] = value dic = nd.to_dict() dic #Output {'Functions': {'Function-1': {'Function': 'dd', 'Function2': 'd3'}}} STEP 3// To add [], you can convert to dict to json and replace { with [{. json.loads(json.dumps(dic).replace('{','[{').replace('}','}]'))[0] Desire Outout {'Functions': [{'Function-1': [{'Function': 'dd', 'Function2': 'd3'}]}]} Package ref question ref | 3 | 2 |
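If the goal is just to turn jsondiff's integer-keyed dicts back into lists, a small recursive helper avoids both the flattening step and the string-replace trick; it assumes any dict whose keys are all integers should become a list, which may not hold for every diff jsondiff can produce, so treat it as a sketch:

```python
def listify(obj):
    """Recursively convert {0: x, 1: y, ...} style dicts into [x, y, ...]."""
    if isinstance(obj, dict):
        if obj and all(isinstance(k, int) for k in obj):
            return [listify(v) for _, v in sorted(obj.items())]
        return {k: listify(v) for k, v in obj.items()}
    return obj

diff_result = {'Functions': {0: {'Function-1': {0: {'Function': 'dd', 'Function2': 'd3'}}}}}
print(listify(diff_result))
# {'Functions': [{'Function-1': [{'Function': 'dd', 'Function2': 'd3'}]}]}
```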