Columns: question_id (int64, 59.5M to 79.4M) | creation_date (string, 8 to 10 chars) | link (string, 60 to 163 chars) | question (string, 53 to 28.9k chars) | accepted_answer (string, 26 to 29.3k chars) | question_vote (int64, 1 to 410) | answer_vote (int64, -9 to 482)
74,743,233
2022-12-9
https://stackoverflow.com/questions/74743233/what-happens-behind-the-scenes-if-i-call-none-x-in-python
I am learning and playing around with Python and I came up with the following test code (please be aware that I would not write production code like that, but when learning new languages I like to play around with the language's corner cases): a = None print(None == a) # I expected True, I got True b = 1 print(None == b) # I expected False, I got False class MyNone: # Called if I compare some MyNone instance == somethingElse def __eq__(self, __o: object) -> bool: return True c = MyNone() print (None == c) # !!! I expected False, I got True !!! Please see the very last line of the code example. How can it be that None == something, where something is clearly not None, returns True? I would have expected that result for something == None, but not for None == something. I expected that it would call None is something behind the scenes. So I think the question boils down to: What does the __eq__ method of the None singleton object look like, and how could I have found that out? PS: I am aware of PEP-0008 and its quote Comparisons to singletons like None should always be done with is or is not, never the equality operators. but I still would like to know why print (None == c) in the above example returns True.
In fact, None's type does not have its own __eq__ method; within Python we can see that it apparently inherits from the base class object: >>> type(None).__eq__ <slot wrapper '__eq__' of 'object' objects> But this is not really what's going on in the source code. The implementation of None can be found in Objects/object.c in the CPython source, where we see: PyTypeObject _PyNone_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) "NoneType", 0, 0, none_dealloc, /*tp_dealloc*/ /*never called*/ 0, /*tp_vectorcall_offset*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ // ... 0, /*tp_richcompare */ // ... 0, /*tp_init */ 0, /*tp_alloc */ none_new, /*tp_new */ }; I omitted most of the irrelevant parts. The important thing here is that _PyNone_Type's tp_richcompare is 0, i.e. a null pointer. This is checked for in the do_richcompare function: if ((f = Py_TYPE(v)->tp_richcompare) != NULL) { res = (*f)(v, w, op); if (res != Py_NotImplemented) return res; Py_DECREF(res); } if (!checked_reverse_op && (f = Py_TYPE(w)->tp_richcompare) != NULL) { res = (*f)(w, v, _Py_SwappedOp[op]); if (res != Py_NotImplemented) return res; Py_DECREF(res); } Translating for those who don't speak C: If the left-hand-side's tp_richcompare function is not null, call it, and if its result is not NotImplemented then return that result. Otherwise if the reverse hasn't already been checked*, and the right-hand-side's tp_richcompare function is not null, call it, and if the result is not NotImplemented then return that result. There are some other branches in the code, to fall back to in case none of those branches returns a result. But these two branches are enough to see what's going on. It's not that type(None).__eq__ returns NotImplemented, rather the type doesn't have the corresponding function in the C source code at all. That means the second branch is taken, hence the result you observe. *The flag checked_reverse_op is set if the reverse direction has already been checked; this happens if the right-hand-side is a strict subtype of the left-hand-side, in which case it takes priority. That doesn't apply in this case since there is no subtype relation between type(None) and your class.
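To see the same fallback from pure Python, here is a minimal sketch (the class names are illustrative, not from the question): a class that, like None's type, defines no comparison of its own hands the comparison over to the right-hand side's __eq__.

```python
class Plain:
    pass

class AlwaysEqual:
    def __eq__(self, other):
        # The reflected comparison: called when the left-hand side's
        # comparison does not produce a usable result.
        print("AlwaysEqual.__eq__ called")
        return True

p = Plain()
c = AlwaysEqual()

# Plain (like None) brings no comparison of its own to the table, so Python
# falls back to the right-hand side's __eq__, as described in the C code above.
print(p == c)     # prints "AlwaysEqual.__eq__ called", then True
print(None == c)  # same mechanism: True
```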
9
8
74,730,142
2022-12-8
https://stackoverflow.com/questions/74730142/priniting-16bit-minimal-float-looks-not-consistent
Can someone explain why printing the minimal float16 value produces different results below? Is it by design or a bug? In [87]: x=np.finfo(np.float16).min In [88]: x_array_single=np.array([x]) In [89]: x Out[89]: -65500.0 In [90]: x_array_single Out[90]: array([-65504.], dtype=float16)
EDIT: Note that you would also get this issue if you printed the first value of the array: >>> x_array[0] -65500.0 The NumPy 1.14.0 release notes state: Floating-point arrays and scalars use a new algorithm for decimal representations, giving the shortest unique representation. This will usually shorten float16 fractional output, and sometimes float32 and float128 output. float64 should be unaffected. See the new floatmode option to np.set_printoptions. That's why the outputs are different. When you print it as a float32 or as a float64 (or just using the built-in float format, which is 64 bits), you get the more precise output: >>> float(x) -65504.0 >>> np.float32('-65504') -65504.0 and: >>> float(x_array[0]) -65504.0 You can also see the change in the precision here: >>> np.float16('65500') == np.float16('65504') True >>> np.float32('65500') == np.float32('65504') False
3
4
74,719,898
2022-12-7
https://stackoverflow.com/questions/74719898/how-to-transform-a-polars-dataframe-to-a-pyspark-dataframe
How to correctly transform a Polars DataFrame to a pySpark DataFrame? More specifically, the conversion methods which I've tried all seem to have problems parsing columns containing arrays / lists. create spark dataframe data = [{"id": 1, "strings": ['A', 'C'], "floats": [0.12, 0.43]}, {"id": 2, "strings": ['B', 'B'], "floats": [0.01]}, {"id": 3, "strings": ['C'], "floats": [0.09, 0.01]} ] sparkdf = spark.createDataFrame(data) convert it to polars import pyarrow as pa import polars as pl pldf = pl.from_arrow(pa.Table.from_batches(sparkdf._collect_as_arrow())) try to convert back to spark dataframe (attempt 1) spark.createDataFrame(pldf.to_pandas()) TypeError: Can not infer schema for type: <class 'numpy.ndarray'> TypeError: Unable to infer the type of the field floats. try to convert back to spark dataframe (attempt 2) schema = sparkdf.schema spark.createDataFrame(pldf.to_pandas(), schema) TypeError: field floats: ArrayType(DoubleType(), True) can not accept object array([0.12, 0.43]) in type <class 'numpy.ndarray'> relevant: How to transform Spark dataframe to Polars dataframe?
What about spark.createDataFrame(pldf.to_dicts())? Alternatively you could do: spark.createDataFrame({x: y.to_list() for x, y in pldf.to_dict().items()}) Since the to_dict method returns polars Series instead of lists, I'm using a comprehension to convert the Series into regular lists that Spark understands.
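For context, a self-contained sketch of the suggested round trip, using the sample data from the question and assuming a local SparkSession is available:

```python
import polars as pl
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

pldf = pl.DataFrame({
    "id": [1, 2, 3],
    "strings": [["A", "C"], ["B", "B"], ["C"]],
    "floats": [[0.12, 0.43], [0.01], [0.09, 0.01]],
})

# to_dicts() yields plain Python dicts containing plain lists,
# which Spark can infer ArrayType columns from.
sparkdf = spark.createDataFrame(pldf.to_dicts())
sparkdf.show()
```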
4
1
74,725,366
2022-12-8
https://stackoverflow.com/questions/74725366/has-anyone-used-polars-and-seaborn-or-matplotlib-together
Has anyone used a Polars dataframe with Seaborn to graph something? I've been working through a notebook on Kaggle that used Pandas, and I wanted to refactor it to Polars. The dataframe I'm working with looks like this: PassengerID (i64) Survived (i64) Pclass (i64) Name (str) ... Ticket (str) Fare (f64) Cabin (str) Embarked (str) Age (f64) 1 0 3 your name here ... A/5 21171 7.25 null S 24 ... ... ... ... ... ... ... ... ... ... Kaggle has me making a histogram with the following code: g = sns.FacetGrid(train_df, col='Survived') g.map(plt.hist, 'Age', bins=20) When I run these two lines I get the following error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/seaborn/axisgrid.py", line 678, in map for (row_i, col_j, hue_k), data_ijk in self.facet_data(): File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/seaborn/axisgrid.py", line 632, in facet_data data_ijk = data[row & col & hue & self._not_na] File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/polars/internals/series/series.py", line 906, in __array_ufunc__ args.append(arg.view(ignore_nulls=True)) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/polars/internals/series/series.py", line 2680, in view ptr_type = dtype_to_ctype(self.dtype) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/polars/datatypes.py", line 550, in dtype_to_ctype raise NotImplementedError( NotImplementedError: Conversion of polars data type <class 'polars.datatypes.Boolean'> to C-type not implemented. I don't have any boolean datatypes in my dataframe, so I'm not sure what to do about this error. Any ideas?
seaborn doesn't accept a Polars DataFrame as input. You just have to use to_pandas(), so change g = sns.FacetGrid(train_df, col='Survived') to g = sns.FacetGrid(train_df.to_pandas(), col='Survived')
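A minimal end-to-end sketch of that fix, with made-up stand-in data for the Titanic columns used in the question:

```python
import polars as pl
import seaborn as sns
import matplotlib.pyplot as plt

# Stand-in for the question's training frame (values are illustrative).
train_df = pl.DataFrame({
    "Survived": [0, 1, 1, 0, 1, 0],
    "Age": [24.0, 38.0, 26.0, 35.0, 54.0, 2.0],
})

# Convert to pandas at the call site; seaborn then facets and plots as usual.
g = sns.FacetGrid(train_df.to_pandas(), col="Survived")
g.map(plt.hist, "Age", bins=20)
plt.show()
```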
5
5
74,728,651
2022-12-8
https://stackoverflow.com/questions/74728651/can-i-force-pip-to-install-a-package-even-with-a-version-conflict
During an installation, pip is throwing an error due to version conflicts ERROR: Could not find a version that satisfies the requirement XXX==1.2.1 This is due to a package made by the company for which I work (not open source work). The reason it is blocking is that the package is locked to Python 3.6 and I am attempting to use Python 3.9. Is it possible or not to ask pip to force install a package even though it was not built / tested for this specific version of Python? To be clear, I am fully aware that this is normally not a good idea. However, I have few alternatives, as the team that managed that specific package no longer exists. It is a dependency we are attempting to remove, but until we can, we need to use it. Can I ask pip to use the latest version of the package even though it may break? I'll add that there are no other package conflicts, just this one with the Python version, so it should - in theory - be the only issue.
I haven't tried it myself yet, but according to the documentation, pip install has the following argument that may help to bypass it: --ignore-requires-python Ignore the Requires-Python information. https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-ignore-requires-python
4
3
74,717,007
2022-12-7
https://stackoverflow.com/questions/74717007/why-does-a-python-function-work-in-parallel-even-if-it-should-not
I am running this code using the healpy package. I am not using multiprocessing and I need it to run on a single core. It worked for a certain amount of time, but, when I run it now, the function healpy.projector.GnomonicProj.projmap takes all the available cores. This is the offending code block: def Stacking () : f = lambda x,y,z: pixelfunc.vec2pix(xsize,x,y,z,nest=False) map_array = pixelfunc.ma_to_array(data) im = np.zeros((xsize, xsize)) plt.figure() for i in range (nvoids) : sys.stdout.write("\r" + str(i+1) + "/" + str(nvoids)) sys.stdout.flush() proj = hp.projector.GnomonicProj(rot=[rav[i],decv[i]], xsize=xsize, reso=2*nRad*rad_deg[i]*60/(xsize)) im += proj.projmap(map_array, f) im/=nvoids plt.imshow(im) plt.colorbar() plt.title(title + " (Map)") plt.savefig("../Plots/stackedMap_"+name+".png") return im Does someone know why this function is running in parallel? And most importantly, does someone know a way to run it on a single core? Thank you!
In this thread they recommend setting the environment variable OMP_NUM_THREADS accordingly. This worked: import os os.environ['OMP_NUM_THREADS'] = '1' import healpy as hp import numpy as np The line os.environ['OMP_NUM_THREADS'] = '1' has to come before importing the numpy and healpy libraries. As to why: they probably use some parallelization technique wrapped within their implementation of the functions you call. Judging by the name of the variable, I would guess it is OpenMP.
6
6
74,722,533
2022-12-7
https://stackoverflow.com/questions/74722533/python-is-there-a-way-to-insert-a-json-in-a-super-column-in-redshift-without-es
I'm trying to save data returned from an API as JSON in a SUPER type column in a table on Redshift. My request: data = requests.get(url=f'https://economia.awesomeapi.com.br/json/last/USD-BRL') I have a function to insert the data that runs like this: QUERY = f"""INSERT INTO {schema}.{table}(data) VALUES (%s)""" conn = self.__get_conn() cursor = conn.cursor() print('INSERT DATA') cursor.execute(query=QUERY, vars=([data, ])) conn.commit() When I try to insert the data using data.json() I get the following error: psycopg2.ProgrammingError: can't adapt type 'dict'. So I used json.dumps(data.json()) to serialize and insert it. But when I look at the database the data has escape characters like this: "{\"code\": \"USD\", \"codein\": \"BRL\", \"name\": \"Dólar Americano/Real Brasileiro\", \"high\": \"5.2768\", \"low\": \"5.1848\", \"varBid\": \"-0.0264\", \"pctChange\": \"-0.5\", \"bid\": I want to use DBT to structure this dataset using JSON_PARSE() and CTEs on Redshift, but these escape characters are in the way. What am I missing? Is there a different way to do it? The table DDL: CREATE TABLE IF NOT EXISTS public.raw_currency ( id BIGINT DEFAULT "identity"(105800, 0, '1,1'::text) ENCODE az64 ,"data" SUPER ENCODE zstd ,stored_at TIMESTAMP WITHOUT TIME ZONE ENCODE az64 ,error_log VARCHAR(65535) ENCODE lzo )
INSERT is an anti-pattern with Redshift; basically, inserts are very slow. And in this case, it's the conversion from JSON to string and back that's getting you. You want to use COPY when bulk-inserting data. Redshift's COPY takes a format parameter, and JSON is one of many supported formats. Docs. Note that to be copied, the data needs to be in S3, and your Redshift cluster needs access to the key in S3 where you are copying from. Lots of tutorials online for this.
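As a rough sketch of that flow (the bucket, key, IAM role, and connection string below are placeholders, and the FORMAT JSON 'noshred' option should be verified against the COPY documentation for your cluster), the idea is to stage the raw JSON in S3 and let COPY load it into the SUPER column:

```python
import json
import boto3
import psycopg2
import requests

BUCKET = "my-staging-bucket"                                  # placeholder
KEY = "raw_currency/usd_brl.json"                             # placeholder
IAM_ROLE = "arn:aws:iam::123456789012:role/redshift-copy"     # placeholder

data = requests.get("https://economia.awesomeapi.com.br/json/last/USD-BRL").json()

# 1. Stage the raw JSON document in S3.
boto3.client("s3").put_object(Bucket=BUCKET, Key=KEY, Body=json.dumps(data))

# 2. COPY it into the SUPER column; 'noshred' loads each document
#    as a single SUPER value instead of shredding it into columns.
copy_sql = f"""
    COPY public.raw_currency (data)
    FROM 's3://{BUCKET}/{KEY}'
    IAM_ROLE '{IAM_ROLE}'
    FORMAT JSON 'noshred';
"""
conn = psycopg2.connect("dbname=dev host=my-cluster user=me password=secret")  # placeholder
with conn.cursor() as cur:
    cur.execute(copy_sql)
conn.commit()
```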
3
1
74,718,300
2022-12-7
https://stackoverflow.com/questions/74718300/how-to-reorder-columns-in-pyarrow-table
I have a pyarrow table whose column order is ['A', 'B', 'C', 'D']. I want to change the column order of this pyarrow table to ['B', 'D', 'C', 'A']. Can we reorder a pyarrow table like a pandas DataFrame?
You can use pyarrow.Table.select import pyarrow as pa table_a_b = pa.table({ "A": [1,2,3], "B": [4,5,6] }) table_b_a = table_a_b.select(['B', 'A']) B A 4 1 5 2 6 3
3
4
74,717,893
2022-12-7
https://stackoverflow.com/questions/74717893/how-to-efficiently-search-for-similar-substring-in-a-large-text-python
Let me try to explain my issue with an example, I have a large corpus and a substring like below, corpus = """very quick service, polite workers(cory, i think that's his name), i basically just drove there and got a quote(which seems to be very fair priced), then dropped off my car 4 days later(because they were fully booked until then), then i dropped off my car on my appointment day, then the same day the shop called me and notified me that the the job is done i can go pickup my car. when i go checked out my car i was amazed by the job they've done to it, and they even gave that dirty car a wash( prob even waxed it or coated it, cuz it was shiny as hell), tires shine, mats were vacuumed too. i gave them a dirty, broken car, they gave me back a what seems like a brand new car. i'm happy with the result, and i will def have all my car's work done by this place from now.""" substring = """until then then i dropped off my car on my appointment day then the same day the shop called me and notified me that the the job is done i can go pickup my car when i go checked out my car i was amazed by the job they ve done to it and they even gave that dirty car a wash prob even waxed it or coated it cuz it was shiny as hell tires shine mats were vacuumed too i gave them a dirty broken car they gave me back a what seems like a brand new car i m happy with the result and i will def have all my car s work done by this place from now""" Both the substring and corpus are very similar but it not exact, If I do something like, import re re.search(substring, corpus, flags=re.I) # this will fail substring is not exact but rather very similar In the corpus the substring is like below which is bit different from the substring I have because of that regular expression search is failing, can someone suggest a really good alternative for similar substring lookup, until then), then i dropped off my car on my appointment day, then the same day the shop called me and notified me that the the job is done i can go pickup my car. when i go checked out my car i was amazed by the job they've done to it, and they even gave that dirty car a wash( prob even waxed it or coated it, cuz it was shiny as hell), tires shine, mats were vacuumed too. i gave them a dirty, broken car, they gave me back a what seems like a brand new car. i'm happy with the result, and i will def have all my car's work done by this place from now I did try difflib library but it was not satisfying my use-case. Some background information, The substring I have right now, is obtained some time ago from pre-processed corpus using this regex re.sub("[^a-zA-Z]", " ", corpus). But now I need to use that substring I have to do the reverse lookup in the corpus text and find the start and ending index in the corpus.
You don't actually need to fuzzy match all that much, at least for the example given; text can only change in spaces within substring, and it can only change by adding at least one non-alphabetic character (which can replace a space, but the space can't be deleted without a replacement). This means you can construct a regex directly from substring with wildcards between words, search (or finditer) the corpus for it, and the resulting match object will tell you where the match(es) begin and end: import re # Allow any character between whitespace-separated "words" except ASCII # alphabetic characters ssre = re.compile(r'[^a-z]+'.join(substring.split()), re.IGNORECASE) if m := ssre.search(corpus): print(m.start(), m.end()) print(repr(m.group(0))) Try it online! which correctly identifies where the match began (index 217) and ended (index 771) in corpus; .group(0) can directly extract the matching text for you if you prefer (it's uncommon to need the indices, so there's a decent chance you were asking for them solely to extract the real text, and .group(0) does that directly). The output is: 217 771 "until then), then i dropped off my car on my appointment day, then the same day the shop called me and notified me that the the job is done i can go pickup my car. when i go checked out my car i was amazed by the job they've done to it, and they even gave that dirty car a wash( prob even waxed it or coated it, cuz it was shiny as hell), tires shine, mats were vacuumed too. i gave them a dirty, broken car, they gave me back a what seems like a brand new car. i'm happy with the result, and i will def have all my car's work done by this place from now" If spaces might be deleted without being replaced, just change the + quantifier to * (the regex will run a little slower since it can't short-circuit as easily, but would still work, and should run fast enough). If you need to handle non-ASCII alphabetic characters, the regex joiner can change from r'[^a-z]+' to the equivalent r'[\W\d_]+' (which means "match all non-word characters [non-alphanumeric and not underscore], plus numeric characters and underscores"); it's a little more awkward to read, but it handles stuff like Γ© properly (treating it as part of a word, not a connector character). While it's not going to be as flexible as difflib, when you know no words are removed or added, it's just a matter of spacing and punctuation, this works perfectly, and should run significantly faster than a true fuzzy matching solution (that has to do far more work to handle the concept of close matches).
8
3
74,719,447
2022-12-7
https://stackoverflow.com/questions/74719447/find-first-and-last-integers-in-a-list-of-series-of-numbers
I'm working with lists that look as follows: [2,3,4,5,6,7,8,13,14,15,16,17,18,19,20,30,31,32,33,34,35] In the end I want to extract only the first and last integer in a consecutive series, as such: [(2,8),(13,20),(30,35)] I am new to working with Python, below my code for trying to solve this problem helix = [] single_prot_helices = [] for ind,pos in enumerate(prot[:-1]): if pos == prot[ind+1]-1: #if 2 == 3-1 essentially helix.append(pos) elif pos < prot[ind+1]-1: #if 8 < 13-1 for example helix.append(pos) single_prot_helices.append(helix) #save in a permanent list, clear temp list helix.clear() In this case prot is a list just like the example above. I expected single_prot_helices to look something like this: [[2,3,4,5,6,7,8],[13,14,15,16,17,18,19,20],[30,31,32,33,34,35]] and at this point it would have been easy to get the first and last integer from these lists and put them in a tuple, but instead of the expected list I got: [[20,30,31,32,33,34,35],[20,30,31,32,33,34,35]] Only the last series of numbers was returned and I got 1 less lists than expected (expected 3, received 2). I don't understand where I made a mistake since I believe my code follows my logic: look at the number (pos), look at the next number, if the next number is larger by 1 then add the number (pos) to a list (helix); if the next number is larger by more than 1 then add the smaller number (pos) to the list (helix), append the list to a permanent list (single_prot_helices) and then clear the list (helix) to prepare it for the next series of numbers to be appended. Any help will be highly appreciated.
You could do something like this: foo = [2,3,4,5,6,7,8,13,14,15,16,17,18,19,20,30,31,32,33,34,35] series = [] result = [] for i in foo: # if the series is empty or the element is consecutive if (not series) or (series[-1] == i - 1): series.append(i) else: # append a tuple of the first and last item of the series result.append((series[0], series[-1])) series = [i] # needed in case foo is empty if series: result.append((series[0], series[-1])) print(result) # [(2, 8), (13, 20), (30, 35)] Or, as a generator: def generate_series(list_of_int): series = [] for i in list_of_int: if not series or series[-1] == i - 1: series.append(i) else: yield (series[0], series[-1]) series = [i] if series: yield (series[0], series[-1]) foo = [2,3,4,5,6,7,8,13,14,15,16,17,18,19,20,30,31,32,33,34,35] print([item for item in generate_series(foo)]) # [(2, 8), (13, 20), (30, 35)] Yours has a few problems. The main one is that helix is a mutable list and you only ever clear it. This is causing you to append the same list multiple times which is why they're all identical. The first fix is to assign a new list to helix rather than clearing. prot = [2,3,4,5,6,7,8,13,14,15,16,17,18,19,20,30,31,32,33,34,35] helix = [] single_prot_helices = [] for ind,pos in enumerate(prot[:-1]): if pos == prot[ind+1]-1: #if 2 == 3-1 essentially helix.append(pos) elif pos < prot[ind+1]-1: #if 8 < 13-1 for example helix.append(pos) single_prot_helices.append(helix) #save in a permanent list, clear temp list helix = [] print(single_prot_helices) # [[2, 3, 4, 5, 6, 7, 8], [13, 14, 15, 16, 17, 18, 19, 20]] As you can see the last list is missed. That is because the last helix is never appended. You could add: if helix: single_prot_helices.append(helix) But that still only gives you: [[2, 3, 4, 5, 6, 7, 8], [13, 14, 15, 16, 17, 18, 19, 20], [30, 31, 32, 33, 34]] leaving out the last element since you only ever iterate to the second from last one. Which means you would need to do something complicated and confusing like this outside of your loop: if helix: if helix[-1] == prot[-1] - 1: helix.append(prot[-1]) single_prot_helices.append(helix) else: single_prot_helices.append(helix) single_prot_helices.append(prot[-1]) else: single_prot_helices.append(prot[-1]) Giving you: [[2, 3, 4, 5, 6, 7, 8], [13, 14, 15, 16, 17, 18, 19, 20], [30, 31, 32, 33, 34, 35]] If you're still confused by names and mutability Ned Batchelder does a wonderful job of explaining the concepts with visual aids.
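As an additional compact sketch (not part of the original answer), the runs can also be grouped with itertools.groupby, using the fact that value minus index is constant within a run of consecutive integers:

```python
from itertools import groupby

def first_last_runs(nums):
    # Within a run of consecutive integers, value - index stays constant,
    # so it serves as the grouping key.
    result = []
    for _, group in groupby(enumerate(nums), key=lambda pair: pair[1] - pair[0]):
        run = [value for _, value in group]
        result.append((run[0], run[-1]))
    return result

prot = [2,3,4,5,6,7,8,13,14,15,16,17,18,19,20,30,31,32,33,34,35]
print(first_last_runs(prot))  # [(2, 8), (13, 20), (30, 35)]
```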
4
3
74,716,030
2022-12-7
https://stackoverflow.com/questions/74716030/what-is-the-difference-between-python-xlib-python3-xlib-pyxlib-and-xlib-in-pyt
I individually installed (and subsequently uninstalled): python-xlib python3-xlib pyxlib xlib via pip (un)install and could execute from Xlib import X, display, Xutil from Xlib.ext import randr d = display.Display() with all of them with Python 3.8.10. What is the difference between them? Pip definitely downloads and installs different packages with different sizes.
Use only python-xlib The other three python3-xlib pyxlib xlib are (seemingly) from two individuals (one holds pyxlib and xlib the other holds python3-xlib) with either broken homepage links or pointing to python-xlib. Nothing in python-xlib points to pyxlib or python3-xlib. In the best case these are just outdated snapshots of python-xlib with questionable changes turned into packages.
3
3
74,655,669
2022-12-2
https://stackoverflow.com/questions/74655669/python-unittests-used-in-a-project-structure-with-multiple-directories
I need to use unittest python library to execute tests about the 3 functions in src/arithmetics.py file. Here is my project structure. . β”œβ”€β”€ src β”‚ └── arithmetics.py └── test └── lcm β”œβ”€β”€ __init__.py β”œβ”€β”€ test_lcm_exception.py └── test_lcm.py src/arithmetics.py def lcm(p, q): p, q = abs(p), abs(q) m = p * q while True: p %= q if not p: return m // q q %= p if not q: return m // p def lcm_better(p, q): p, q = abs(p), abs(q) m = p * q h = p % q while h != 0: p = q q = h h = p % q h = m / q return h def lcm_faulty(p, q): r, m = 0, 0 r = p * q while (r > p) and (r > q): if (r % p == 0) and (r % q == 0): m = r r = r - 1 return m test/lcm/test_lcm.py import unittest from src.arithmetics import * class LcmTest(unittest.TestCase): def test_lcm(self): for X in range(1, 100): self.assertTrue(0 == lcm(0, X)) self.assertTrue(X == lcm(X, X)) self.assertTrue(840 == lcm(60, 168)) def test_lcm_better(self): for X in range(1, 100): self.assertTrue(0 == lcm_better(0, X)) self.assertTrue(X == lcm_better(X, X)) self.assertTrue(840 == lcm_better(60, 168)) def test_lcm_faulty(self): self.assertTrue(0 == lcm_faulty(0, 0)) for X in range(1, 100): self.assertTrue(0 == lcm_faulty(X, 0)) self.assertTrue(0 == lcm_faulty(0, X)) self.assertTrue(840 == lcm_faulty(60, 168)) if __name__ == '__main__': unittest.main() test/lcm/test_lcm_exception.py import unittest from src.arithmetics import * class LcmExceptionTest(unittest.TestCase): def test_lcm_exception(self): for X in range(0, 100): self.assertTrue(0 == lcm(0, 0)) # ZeroDivisionError self.assertTrue(0 == lcm(X, 0)) # ZeroDivisionError def test_lcm_better_exception(self): for X in range(0, 100): self.assertTrue(0 == lcm_better(0, 0)) # ZeroDivisionError self.assertTrue(0 == lcm_better(X, 0)) # ZeroDivisionError def test_lcm_faulty_exception(self): for X in range(1, 100): self.assertTrue(X == lcm_faulty(X, X)) # ppcm(1, 1) != 1 if __name__ == '__main__': unittest.main() test/lcm/__init__.py is an empty file To execute my tests, I tried this command : python3 -m unittest discover But the output is : ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK I don't understand how can I run my tests... Thanks for helping me !
A file __init__.py is missing I think the problem is the missing __init__.py file in the test folder. Try to add this (empty) file in that folder as shown below: test_lcm ├── src │ └── arithmetics.py └── test └── __init__.py <---------------- add this file └── lcm ├── __init__.py ├── test_lcm_exception.py └── test_lcm.py If you look at my folder tree, I have created a folder test_lcm as the root of the tree. You have to execute the cd command to place yourself inside that folder. So execute a cd command similar to the following (on my system test_lcm is placed in my home folder): # go to test_lcm folder cd ~/test_lcm After that, execute: # execute tests python3 -m unittest discover The last part of the output is: ---------------------------------------------------------------------- Ran 6 tests in 0.002s FAILED (failures=1, errors=2) The output shows that 6 tests are executed with 2 errors (test_lcm_better_exception and test_lcm_exception fail). Useful links This is a useful link explaining how to define a Python package. In particular I want to highlight the following sentence from that link: The __init__.py files are required to make Python treat directories containing the file as packages. This is a post which discusses a similar topic. Furthermore, this link includes this sentence: For example, the unittest module in the standard library doesn't search into a directory without __init__.py. This explains why the file __init__.py is necessary. Maybe in the future the unittest module will also search for tests in directories without an __init__.py file.
3
3
74,631,205
2022-11-30
https://stackoverflow.com/questions/74631205/jupyter-notebook-does-not-run-in-pycharm
When I try to run the Jupyter server from PyCharm, I get this error: Jupyter server process exited with code 1 usage: jupyter.py [-h] [--version] [--config-dir] [--data-dir] [--runtime-dir] [--paths] [--json] [--debug] [subcommand] Jupyter: Interactive Computing positional arguments: subcommand the subcommand to launch optional arguments: -h, --help show this help message and exit --version show the versions of core jupyter packages and exit --config-dir show Jupyter config dir --data-dir show Jupyter data dir --runtime-dir show Jupyter runtime dir --paths show all Jupyter paths. Add --json for machine-readable format. --json output paths as machine-readable json --debug output debug information about paths Available subcommands: 1.0.0 Jupyter command jupyter-notebook not found. When running Jupyter using VSCode it works properly. I also tried reinstalling it from the PyCharm packages and the terminal, but it still doesn't work.
I didn't manage to solve this, but I found a workaround: Go to PyCharm Settings and search for Jupyter Servers. Open a Terminal and start a Jupyter notebook, typically: python3 -m notebook Copy the URL with the token to the Configured Server field in PyCharm and click OK. You should now be able to run and debug Jupyter cells in PyCharm!
25
10
74,638,652
2022-12-1
https://stackoverflow.com/questions/74638652/render-latex-symbols-in-plotly-graphs-in-vscode-interactive-window
I am trying to get LaTeX symbols in titles and labels of a Plotly figure. I am using VSCode and I run the code in the Interactive Window. LaTeX usage looks really simple in Jupyter Notebook, from what I saw in other posts, but I can't get it to work within this environment. My env: python 3.10.4 plotly 5.9.0 vscode 1.62.3 What I tried: use r"$$" formatting, change the font family change plotly.io.renderers.default install mathjax in my conda env and try to adapt plotly.offline mode (see https://github.com/plotly/plotly.py/issues/515) This basic code snippet should work according to most posts I have seen, but it does not do the LaTeX rendering in the Interactive Window. It has been taken from https://plotly.com/python/LaTeX/, where everything looks so easy. That's why I am guessing the issue is related to VSCode. import plotly.graph_objs as go fig = go.Figure() fig.add_trace(go.Scatter( x=[1, 2, 3, 4], y=[1, 4, 9, 16], name=r'$\alpha_{1c} = 352 \pm 11 \text{ km s}^{-1}$' )) fig.add_trace(go.Scatter( x=[1, 2, 3, 4], y=[0.5, 2, 4.5, 8], name=r'$\beta_{1c} = 25 \pm 11 \text{ km s}^{-1}$' )) fig.update_layout( xaxis_title=r'$\sqrt{(n_\text{c}(t|{T_\text{early}}))}$', yaxis_title=r'$d, r \text{ (solar radius)}$' ) fig.show() What I have What I should have
This is a known issue with Plotly in Jupyter notebooks in VSCode (e.g. issues #7801 and #8131). Tomas Mazak shared a workaround in #8131: import plotly import plotly.graph_objs as go from IPython.display import display, HTML ## Tomas Mazak's workaround plotly.offline.init_notebook_mode() display(HTML( '<script type="text/javascript" async src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-MML-AM_SVG"></script>' )) ## fig = go.Figure() fig.add_trace(go.Scatter( x=[1, 2, 3, 4], y=[1, 4, 9, 16], name=r'$\alpha_{1c} = 352 \pm 11 \text{ km s}^{-1}$' )) fig.add_trace(go.Scatter( x=[1, 2, 3, 4], y=[0.5, 2, 4.5, 8], name=r'$\beta_{1c} = 25 \pm 11 \text{ km s}^{-1}$' )) fig.update_layout( xaxis_title=r'$\sqrt{(n_\text{c}(t|{T_\text{early}}))}$', yaxis_title=r'$d, r \text{ (solar radius)}$' ) fig.show() Output:
6
6
74,705,127
2022-12-6
https://stackoverflow.com/questions/74705127/how-to-fix-error-module-lib-has-no-attribute-x509-v-flag-cb-issuer-check
On my WSL I'm getting the error AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK' whenever I try to use pip, e.g. pip list, python3 -m pip, etc. Is there a way to reinstall pip or uninstall packages without using pip? I tried following the solutions in related questions, but none of them works because they either use pip or the problem persists afterwards.
The solution that worked for me was mentioned here. You have to remove the line: CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK from this file: /usr/lib/python3/dist-packages/OpenSSL/crypto.py and then you can use pip again: $ pip uninstall cryptography $ pip install --upgrade cryptography==36.0.2
5
20
74,661,044
2022-12-2
https://stackoverflow.com/questions/74661044/add-a-custom-javascript-to-the-fastapi-swagger-ui-docs-webpage-in-python
I want to load my custom javascript file or code to the FastAPI Swagger UI webpage, to add some dynamic interaction when I create a FastAPI object. For example, in Swagger UI on docs webpage I would like to <script src="custom_script.js"></script> or <script> alert('worked!') </script> I tried: api = FastAPI(docs_url=None) api.mount("/static", StaticFiles(directory="static"), name="static") @api.get("/docs", include_in_schema=False) async def custom_swagger_ui_html(): return get_swagger_ui_html( openapi_url=api.openapi_url, title=api.title + " - Swagger UI", oauth2_redirect_url=api.swagger_ui_oauth2_redirect_url, swagger_js_url="/static/sample.js", swagger_css_url="/static/sample.css", ) but it is not working. Is there a way just to insert my custom javascript code on docs webpage of FastAPI Swagger UI with Python ?
If you take a look at the get_swagger_ui_html function that is imported from fastapi.openapi.docs, you will see that the HTML for the docs page is constructed manually via string interpolation/concatenation. It would be trivial to modify this function to include an additional script element, as shown below: # custom_swagger.py import json from typing import Any, Dict, Optional from fastapi.encoders import jsonable_encoder from fastapi.openapi.docs import swagger_ui_default_parameters from starlette.responses import HTMLResponse def get_swagger_ui_html( *, openapi_url: str, title: str, swagger_js_url: str = "https://cdn.jsdelivr.net/npm/swagger-ui-dist@4/swagger-ui-bundle.js", swagger_css_url: str = "https://cdn.jsdelivr.net/npm/swagger-ui-dist@4/swagger-ui.css", swagger_favicon_url: str = "https://fastapi.tiangolo.com/img/favicon.png", oauth2_redirect_url: Optional[str] = None, init_oauth: Optional[Dict[str, Any]] = None, swagger_ui_parameters: Optional[Dict[str, Any]] = None, custom_js_url: Optional[str] = None, ) -> HTMLResponse: current_swagger_ui_parameters = swagger_ui_default_parameters.copy() if swagger_ui_parameters: current_swagger_ui_parameters.update(swagger_ui_parameters) html = f""" <!DOCTYPE html> <html> <head> <link type="text/css" rel="stylesheet" href="{swagger_css_url}"> <link rel="shortcut icon" href="{swagger_favicon_url}"> <title>{title}</title> </head> <body> <div id="swagger-ui"> </div> """ if custom_js_url: html += f""" <script src="{custom_js_url}"></script> """ html += f""" <script src="{swagger_js_url}"></script> <!-- `SwaggerUIBundle` is now available on the page --> <script> const ui = SwaggerUIBundle({{ url: '{openapi_url}', """ for key, value in current_swagger_ui_parameters.items(): html += f"{json.dumps(key)}: {json.dumps(jsonable_encoder(value))},\n" if oauth2_redirect_url: html += f"oauth2RedirectUrl: window.location.origin + '{oauth2_redirect_url}'," html += """ presets: [ SwaggerUIBundle.presets.apis, SwaggerUIBundle.SwaggerUIStandalonePreset ], })""" if init_oauth: html += f""" ui.initOAuth({json.dumps(jsonable_encoder(init_oauth))}) """ html += """ </script> </body> </html> """ return HTMLResponse(html) A new, optional parameter named custom_js_url is added: custom_js_url: Optional[str] = None, If a value is provided for this parameter, a script element is inserted into the DOM directly before the script element for swagger_js_url (this is an arbitrary choice, you can change the location of the custom script element based on your needs). if custom_js_url: html += f""" <script src="{custom_js_url}"></script> """ If no value is provided, the HTML produced is the same as the original function. 
Remember to update your import statements for get_swagger_ui_html and update your function for the /docs endpoint as shown below: from fastapi import FastAPI from fastapi.staticfiles import StaticFiles from fastapi.openapi.docs import ( get_redoc_html, get_swagger_ui_oauth2_redirect_html, ) from custom_swagger import get_swagger_ui_html import os app = FastAPI(docs_url=None) path_to_static = os.path.join(os.path.dirname(__file__), 'static') app.mount("/static", StaticFiles(directory=path_to_static), name="static") @app.get("/docs", include_in_schema=False) async def custom_swagger_ui_html(): return get_swagger_ui_html( openapi_url=app.openapi_url, title="My API", oauth2_redirect_url=app.swagger_ui_oauth2_redirect_url, swagger_js_url="/static/swagger-ui-bundle.js", swagger_css_url="/static/swagger-ui.css", # swagger_favicon_url="/static/favicon-32x32.png", custom_js_url="/static/custom_script.js", ) This is still a pretty hacky solution, but I think it is much cleaner and more maintainable than putting a bunch of custom javascript inside the swagger-ui-bundle.js file.
5
4
74,680,849
2022-12-4
https://stackoverflow.com/questions/74680849/save-keras-tuner-results-as-pandas-dataframe
Is there a possibility of saving the results of keras-tuner as a DataFrame? All I can find are printing functions like results_summary(), but I cannot access the printed content. Both print calls in the example below print None, while results_summary() still prints the best results. It looks like I will have to access the trial results by traversing the saved files. I hoped to get a prepared table like the one printed by results_summary(). optimizer = keras_tuner.BayesianOptimization( hypermodel=build_model, objective='val_loss', max_trials=5, # num_initial_points=2, # alpha=0.0001, # beta=2.6, seed=1, hyperparameters=None, tune_new_entries=True, allow_new_entries=True #,min_trials=3 ) search = optimizer.search(x=train_X, y=train_Y, epochs=epochs, validation_data=(test_X, test_Y)) results = optimizer.results_summary(num_trials=10) print(results) print(search)
If you don't need to store "score" with the hyperparameters, this should do what you want. You need to get the hyperparameters (HPs). The HPs are stored in hp.get_config() under ["values"] key. You collect a list of dicts with the HPs and convert them into DataFrame and to csv file. best_hps = optimizer.get_best_hyperparameters(num_trials=max_trials) HP_list = [] for hp in best_hps: HP_list.append(hp.get_config()["values"]) HP_df = pd.DataFrame(HP_list) HP_df.to_csv("name.csv", index=False, na_rep='NaN') If you wish to save the scores too, you need to go through trials and concatenate the hyperparameter dict with its score as this trials = optimizer.oracle.get_best_trials(num_trials=max_trials) HP_list = [] for trial in trials: HP_list.append(trial.hyperparameters.get_config()["values"] | {"Score": trial.score}) HP_df = pd.DataFrame(HP_list) HP_df.to_csv("name.csv", index=False, na_rep='NaN')
3
4
74,689,457
2022-12-5
https://stackoverflow.com/questions/74689457/overriding-fastapi-dependencies-that-have-parameters
I'm trying to test my FastAPI endpoints by overriding the injected database using the officially recommended method in the FastAPI documentation. The function I'm injecting the db with is a closure that allows me to build any desired database from a MongoClient by giving it the database name whilst (I assume) still working with FastAPI depends as it returns a closure function's signature. No error is thrown so I think this method is correct: # app def build_db(name: str): def close(): return build_singleton_whatever(MongoClient, args....) return close Adding it to the endpoint: # endpoint @app.post("/notification/feed") async def route_receive_notifications(db: Database = Depends(build_db("someDB"))): ... And finally, attempting to override it in the tests: # pytest # test_endpoint.py fastapi_app.dependency_overrides[app.build_db] = lambda x: lambda: x However, the dependency doesn't seem to override at all and the test ends up creating a MongoClient with the IP of the production database as in normal execution. So, any ideas on overriding FastAPI dependencies that are given parameters in their endpoints? I have tried creating a mock closure function with no success: def mock_closure(*args): def close(): return args return close app.dependency_overrides[app.build_db] = mock_closure('otherDB') And I have also tried providing the same signature, including the parameter, with still no success: app.dependency_overrides[app.build_db('someDB')] = mock_closure('otherDB') Edit note I'm also aware I can create a separate function that creates my desired database and use that as the dependency, but I would much prefer to use this dynamic version as it's more scalable to using more databases in my apps and avoids me writing essentially repeated functions just so they can be cleanly injected.
My case involved an HTTP client wrapper, instead of a DB. I think it could be applied to your case as well. Context: I want to inject values for a FastAPI handler's dependency to test various scenarios. We have a handler with its dependencies @router.get("/{foo}") async def get(foo, client = Depends(get_client)): # get_client is the key to override client = get_client() return await client.request(foo) The function get_client is the dependency I want to override in my tests. It returns a Client object that takes a function that performs an HTTP request to an external service (this function actually wraps aiohttp, but that's not the important part). Here's its barebone definition: class Client: def __init__(request): self._request = request async def request(self, params): return await self._request(params) We want to test various responses from the external service, so we need to build the function that returns a function that returns a Client object (sorry for the tongue-twister), with its params: def get_client_getter(response): async def request_mock(*args, **kwargs): return response def get_client(): return Client(request=request_mock) return get_client() Then in the various tests we have: def test_1(): app.dependency_overrides[get_client] = get_client_getter(1) ... def test_true(): app.dependency_overrides[get_client] = get_client_getter(True) ... def test_none(): app.dependency_overrides[get_client] = get_client_getter(None) ...
8
1
74,631,751
2022-11-30
https://stackoverflow.com/questions/74631751/python-runs-older-version-after-installing-updated-version-on-mac
I am currently running python 3.6 on my Mac, and installed the latest version of Python (3.11) by downloading and installing through the official python releases. Running python3.11 opens the interpreter in 3.11, and python3.11 --version returns Python 3.11.0, but python -V in terminal returns Python 3.6.1 :: Continuum Analytics, Inc.. I tried to install again via homebrew using brew install python@3.11 but got the same results. More frustrating, when I try to open a virtual environment using python3 -m venv env I get Error: Command '['/Users/User/env/bin/python3', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. I altered .bash_profile with # Setting PATH for Python 3.11 # The original version is saved in .bash_profile.pysave PATH="/Library/Frameworks/Python.framework/Versions/3.11/bin:${PATH}" export PATH . "$HOME/.cargo/env" And created a .zprofile based on this post with export PYTHONPATH=$HOME/Users/User and a .zshrc based on this post, but --version still returns Python 3.6. I'm running Big Sur OS. Pip and homebrew are up to date and upgraded. Acknowledging that I'm totally foolish, what do I need to do to get python >3.7 running in terminal?
What you want to do is overwrite a python symlink. After installing python via homebrew, you can see that python3.11 is just a symlink. cd /usr/local/bin; ls -l | grep python3.11 The result is: lrwxr-xr-x 1 user admin 43 Nov 7 15:43 python3.11@ -> ../Cellar/python@3.11/3.11.0/bin/python3.11 So let's just overwrite it. ln -s -f $(which python3.11) $(which python) ln -s -f $(which python3.11) $(which python3) ln -s -f $(which pip3.11) $(which pip) ln -s -f $(which pip3.11) $(which pip3) After these commands, pip, pip3, python3 and python will all invoke version 3.11. The ln -s command creates a soft symlink; with the -f option it overwrites an existing soft symlink. A soft symlink is similar to a shortcut. In the man page, the which command is described as: which - shows the full path of (shell) commands.
4
7
74,695,915
2022-12-6
https://stackoverflow.com/questions/74695915/apply-a-function-over-the-columns-of-a-dask-array
What is the most efficient way to apply a function to each column of a Dask array? As documented below, I've tried a number of things but I still suspect that my use of Dask is rather amateurish. I have a quite wide and quite long array, in the region of 3,000,000 x 10,000. I want to apply the ecdf function to each column of this array. The individual column results stacked together should result in an array with the same dimension as the input array. Consider the following tests and let me know which approach is the ideal one or how I can improve. I know, I could just use the fastest one, but I really want to exploit the possibilities of Dask to the maximum. The arrays could also be multiple times bigger. At the same time, the results of my benchmarks are surprising for me. Maybe I don't understand the logic behind Dask correctly. import numpy as np import dask import dask.array as da from dask.distributed import Client, LocalCluster from statsmodels.distributions.empirical_distribution import ECDF ### functions def ecdf(x): fn = ECDF(x) return fn(x) def ecdf_array(X): res = np.zeros_like(X) for i in range(X.shape[1]): res[:,i] = ecdf(X[:,i]) return res ### set up scheduler / workers n_workers = 10 cluster = LocalCluster(n_workers=n_workers, threads_per_worker=4) client = Client(cluster) ### create data set X = da.random.random((100000,100)) #dask Xarr = X.compute() #numpy ### traditional for loop %timeit -r 10 foo1 = ecdf_array(Xarr) ### adjusting chunk size to 2d-array and map_blocks X = X.rechunk(chunks=(X.shape[0],np.ceil(X.shape[1]/n_workers))) Xm = X.map_blocks(lambda x: ecdf_array(x),meta = np.array((), dtype='float')) %timeit -r 10 foo2 = Xm.compute() ### adjusting chunk size to column size and map_blocks X = X.rechunk(chunks=(X.shape[0],1)) Xm = X.map_blocks(lambda x: np.expand_dims(ecdf(np.squeeze(x)),1),meta = np.array((), dtype='float')) %timeit -r 10 foo3 = Xm.compute() ### map over columns by slicing Xm = client.map(lambda i: ecdf(np.asarray(X[:,i])),range(X.shape[1])) Xm = client.submit(lambda x: da.transpose(da.vstack(x)),Xm) %timeit -r 10 foo4 = Xm.result() ### apply_along_axis Xaa = da.apply_along_axis(lambda x: np.expand_dims(ecdf(x),1), 0, X, dtype=X.dtype, shape=X.shape) %timeit -r 10 foo5 = Xaa.compute() ### lazy loop Xl = [] for i in range(X.shape[1]): Xl.append(dask.delayed(ecdf)(X[:,i])) Xl = dask.delayed(da.vstack)(Xl) %timeit -r 10 foo6 = Xl.compute() Along my benchmarks "map over columns by slicing" is the fastest approach followed by "adjusting chunk size to column size & map_blocks" and the non-parallel "apply_along_axis". Method Results (10 loops) traditional for loop 2.16 s Β± 82.3 ms adjusting chunk size to 2d-array & map_blocks 1.26 s Β± 301 ms adjusting chunk size to column size & map_blocks 926 ms Β± 31.9 map over columns by slicing 316 ms Β± 11.5 ms apply_along_axis 1.01 s Β± 18.7 ms lazy loop 1.4 s Β± 352 ms Along my understanding of the idea behind Dask, I would have expected the "adjusting chunk size to 2d-array & map_blocks" method to be the fastest. The two approaches which performed best don't seem to be very "Dasky" at the same time the non-parallel apply_along_axis is ranked third. All that gives my the suspicion that I am doing something wrong.
As far as I can tell, your code looks correct (see the explanation below for why the performance of map over columns by slicing is misleadingly fast). With some minor refactoring, the "dask-y" version might be: from dask.array.random import random from numpy import zeros from statsmodels.distributions.empirical_distribution import ECDF n_rows = 100_000 X = random((n_rows, 100), chunks=(n_rows, 1)) _ECDF = lambda x: ECDF(x.squeeze())(x) meta = zeros((n_rows, 1), dtype="float") foo0 = X.map_blocks(_ECDF, meta=meta) # executing foo0.compute() should take about 0.8s Note that the dask array is initialized with the appropriate chunking (one column per chunk), while in your current code the execution timing will include the time to rechunk the array. In terms of overall speed-up, the individual computations are tiny (on the scale of 50ms), so to reduce the number of tasks it's possible to chunk several processing of several columns into a single chunk. However, this has a trade-off associated with the slow-down due to iterating over the columns of the numpy array. The main advantage is in the reduced burden on the scheduler. Depending on the scale of your final dataset and the computing resources available, the chunked version might have a slight advantage over the non-chunked version (i.e. the first snippet): from dask.array.random import random from numpy import stack, zeros from statsmodels.distributions.empirical_distribution import ECDF n_rows = 100_000 n_cols = 100 chunk_size = (n_rows, 10) X = random((n_rows, n_cols), chunks=chunk_size) _ECDF = lambda x: ECDF(x.squeeze())(x) def block_ECDF(x): return stack([_ECDF(column) for column in x.T], axis=1) meta = zeros(chunk_size, dtype="float") foo0 = X.map_blocks(block_ECDF, meta=meta) # executing foo0.compute() should take about 0.8s Note that the fastest performer in your benchmarks is map over columns by slicing. However, this is misleading because what python is timing here is just the collection of the computed results. Most of the time will be spent on the computation, so the accurate way to time this approach is to start the timer when the futures are submitted and end it when the results are collected.
5
3
74,680,359
2022-12-4
https://stackoverflow.com/questions/74680359/tensorflow-compat-v2-internal-tracking-has-no-attribute-trackablesaver-er
I got this error after installing Tensorflow.js. Previously this program was working. Could it be a problem with the versions? I'm really curious as to what's causing it. Thanks in advance. File ~\OneDrive\Masaüstü\Bitirme Proje\neural_network(sinir_ağları).py:61 model = build_model() File ~\OneDrive\Masaüstü\Bitirme Proje\neural_network(sinir_ağları).py:29 in build_model model = keras.Sequential([ File C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\trackable\base.py:205 in _method_wrapper result = method(self, *args, **kwargs) File C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py:67 in error_handler raise e.with_traceback(filtered_tb) from None File C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py:3331 in saver_with_op_caching return tf.__internal__.tracking.TrackableSaver( AttributeError: module 'tensorflow.compat.v2.__internal__.tracking' has no attribute 'TrackableSaver' I was planning to convert my model with Tensorflow.js and run it over the web. But when I installed Tensorflow.js I got this error in the program.
Update keras with pip install 'keras>=2.9.0' to pick up keras-team/keras@af70910: - self._trackable_saver = saver_with_op_caching(self) + self._checkpoint = tf.train.Checkpoint(root=weakref.ref(self))
4
3
74,697,284
2022-12-6
https://stackoverflow.com/questions/74697284/do-architectures-built-using-tf-keras-models-sequential-run-more-slowly-and-accu
I just compared 2 (I thought) equivalent VGG-ish architectures. One was constructed using tf.keras.Models.Sequential, the other used Tensorflow's functional API. Each was attempting to solve the cats_vs_dogs dataset. After 10 training epochs, the Sequential model had these runtimes and accuracies: Epoch 10/10 703/703 [==============] - 16s 23ms/step - accuracy: 0.9271 - val_accuracy: 0.8488 But the Functional API output had these runtimes and accuracies: Epoch 10/10 703/703 [==============] - 15s 22ms/step - accuracy: 0.8483 - val_accuracy: 0.8072 The differences in training times and accuracy struck me as severe. The training time differences were less severe, but consistent. Now I'm wondering if my nets are truly equivalent, or if there's some difference between Sequential and the Functional API that accounts for this. I imported the following modules: import tensorflow as tf import tensorflow_datasets as tfds from tensorflow.keras.optimizers import RMSprop They used these versions of tf & tfds: Tensorflow Version: 2.10.0 Tensorflow_Datasets Version: 4.7.0+nightly (Note: This was what I got from Conda - not a purposeful choice) I downloaded the cats vs. dogs datasets directly from Kaggle, since there was some checksum error when I tried to download using the tf methods for downloading standard datasets. I ultimately had to remove some files that were deleted (!?) or were using CMYK color coding (!?), but there were fewer than 10 such images. I constructed the datasets in this way: builder = tfds.folder_dataset.ImageFolder('./cat_vs_dog/') dataset = builder.as_dataset(split='train', shuffle_files=True) d2 = builder.as_dataset(split='test', shuffle_files=True) def preprocess(features): # Resize and normalize image = tf.image.resize(features['image'], (224, 224)) return tf.cast(image, tf.float32) / 255., features['label'] # preprocess dataset dataset = dataset.map(preprocess).batch(32) d2 = d2.map(preprocess).batch(32) The relevant factors from builder.info are: tfds.core.DatasetInfo( features=FeaturesDict({ 'image': Image(shape=(None, None, 3), dtype=tf.uint8), 'image/filename': Text(shape=(), dtype=tf.string), 'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=2), }), supervised_keys=('image', 'label'), disable_shuffling=False, splits={ 'test': <SplitInfo num_examples=2500, num_shards=1>, 'train': <SplitInfo num_examples=22495, num_shards=1>, }, ) I constructed the Sequential model like this: def seq_model(): model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(32, (3, 3), activation = 'relu', input_shape = (224, 224, 3)), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(128, (3, 3), activation = 'relu'), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Conv2D(128, (3, 3), activation = 'relu'), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation = 'relu'), tf.keras.layers.Dense(1, activation = 'sigmoid'), ]) model.compile(optimizer = RMSprop(learning_rate = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy']) return model model = seq_model() history = model.fit(dataset, validation_data=d2, epochs=10) I constructed the Functional API model like this: class Mini_Block(tf.keras.Model): def __init__(self, filters, kernel_size, pool_size=2, strides=2): super().__init__() self.filters = filters self.kernel_size = kernel_size # Define a Conv2D layer, specifying filters, # kernel_size, activation and padding. 
self.conv2D_0 = tf.keras.layers.Conv2D(filters=filters, kernel_size=kernel_size, activation='relu', strides=strides, padding='same') # Define the max pool layer that will be added after the Conv2D blocks self.max_pool = tf.keras.layers.MaxPooling2D(pool_size=pool_size, strides=strides, padding='same') def call(self, inputs): # access the class's conv2D_0 layer conv2D_0 = self.conv2D_0 # Connect the conv2D_0 layer to inputs x = conv2D_0(inputs) # Finally, add the max_pool layer max_pool = self.max_pool(x) return max_pool class MiniVGG(tf.keras.Model): def __init__(self, num_classes): super().__init__() # Creating VGG blocks self.block_a = Mini_Block(filters=32, kernel_size=3) self.block_b = Mini_Block(filters=64, kernel_size=3) self.block_c = Mini_Block(filters=128, kernel_size=3) self.block_d = Mini_Block(filters=128, kernel_size=3) # Classification Head self.flatten = tf.keras.layers.Flatten() self.fc = tf.keras.layers.Dense(512, activation='relu') self.classifier = tf.keras.layers.Dense(1, activation='sigmoid') def call(self, inputs): # Chain all the layers one after the other x = self.block_a(inputs) x = self.block_b(x) x = self.block_c(x) x = self.block_d(x) x = self.flatten(x) x = self.fc(x) x = self.classifier(x) return x vgg = MiniVGG(num_classes=1) vgg.compile(optimizer=RMSprop(learning_rate = 1e-4), loss='binary_crossentropy', metrics=['accuracy']) hist = vgg.fit(dataset, validation_data=d2, epochs=10) Is there some structural difference between these two nets, or a reason that Sequential nets are much more accurate and slightly slower than those using the Functional API? These nets actually differ in that the Sequential model uses 'valid' padding and a stride of 1, but the Functional API model has 'same' padding and a stride of 2. Tragically, this is not the whole story. Following @V.M's suggestion, when I looked at the nets' architectures directly, using this code: def get_params(curr_layer, spaces=""): if hasattr(curr_layer,'layers'): print(spaces,curr_layer.name) for sub_layer in curr_layer.layers: get_params(sub_layer, spaces+" ") elif hasattr(curr_layer,'weights'): print(spaces,curr_layer.name) for xx in curr_layer.weights: print(spaces+" Weights Shape:",xx.shape) if len(curr_layer.weights) < 1: print(spaces+" ", "No Weights") if "conv" in curr_layer.name: print(spaces + " Padding:", curr_layer.padding) print(spaces + " Strides:", curr_layer.strides) I found some weirdness involving the flatten layer. But, I think that's a topic for another question.
The reason for the differences in these two nets lie in the differences in the way they are padded (the sequential model used valid padding, the functional API model used same padding) and in the strides they used. The sequential model used strides of 1 for the conv layer and 2 for the pooling layer, while the functional API model used a stride of 2 for each layer. Another tricky (for me, anyway) aspect is that the conv layer has a default stride value of 1, while the pool layer has a default value of None, which is subsequently converted to match the pool size. I used the following 2 methods (and helpful comments from @V.M ) to directly inspect my nets and figure this out (Note: These functions are specifically written for my nets and problem, but should be easily generalizable) def get_params(curr_layer, spaces=""): '''Get internal net parameters for a single layer''' if hasattr(curr_layer,'layers'): print(spaces,curr_layer.name) for sub_layer in curr_layer.layers: # Recursive call, since 'Block' layer of functional API net was made up of sub-layers containing the parameters of interest get_params(sub_layer, spaces+" ") elif hasattr(curr_layer,'weights'): print(spaces,curr_layer.name) for xx in curr_layer.weights: print(spaces + " " +xx.name.split('/')[-1] + " Shape:",xx.shape) if hasattr(curr_layer,'padding'): print(spaces + " Padding:", curr_layer.padding) if hasattr(curr_layer,'strides'): print(spaces + " Strides:", curr_layer.strides) if hasattr(curr_layer,'pool_size'): print(spaces + " Pool Size:", curr_layer.pool_size) and def feature_map_info(model, input_shape): ''' Get dimensions of feature maps at each layer''' for layer in model.layers: print(layer.name) try: for sub_layer in layer.layers: output_shape = sub_layer.compute_output_shape(input_shape) print(" ", sub_layer.name) print(" ", output_shape) input_shape = output_shape.as_list() except: output_shape = layer.compute_output_shape(input_shape) print(" ", output_shape) input_shape = output_shape.as_list() Note: For feature_map_info, the input shape needs to be entered as a 4 element list - [batch size, rows, cols, depth], where batch_size can be set to None. For completeness, here are the matching nets with the Functional API & the Sequential model: Functional API: class Mini_Block2(tf.keras.Model): def __init__(self, filters, kernel_size, pool_size=2, strides=1): super().__init__() self.filters = filters self.kernel_size = kernel_size # Define a Conv2D layer, specifying filters, kernel_size, activation and padding. 
self.conv2D_0 = tf.keras.layers.Conv2D(filters=filters, kernel_size=kernel_size, activation='relu', strides=strides, padding='valid') # Define the max pool layer that will be added after the Conv2D blocks self.max_pool = tf.keras.layers.MaxPooling2D(pool_size=pool_size, padding='valid') def call(self, inputs): # access the class's conv2D_0 layer conv2D_0 = self.conv2D_0 # Connect the conv2D_0 layer to inputs x = conv2D_0(inputs) # Finally, add the max_pool layer max_pool = self.max_pool(x) return max_pool class MiniVGG2(tf.keras.Model): def __init__(self, num_classes): super().__init__() # Creating blocks of VGG with the following # (filters, kernel_size, repetitions) configurations self.block_a = Mini_Block2(filters=32, kernel_size=3) self.block_b = Mini_Block2(filters=64, kernel_size=3) self.block_c = Mini_Block2(filters=128, kernel_size=3) self.block_d = Mini_Block2(filters=128, kernel_size=3) # Classification head # Define a Flatten layer self.flatten = tf.keras.layers.Flatten() # Create a Dense layer with 512 units and ReLU as the activation function self.fc = tf.keras.layers.Dense(512, activation='relu') # Finally add the softmax classifier using a Dense layer self.classifier = tf.keras.layers.Dense(1, activation='sigmoid') def call(self, inputs): # Chain all the layers one after the other x = self.block_a(inputs) x = self.block_b(x) x = self.block_c(x) x = self.block_d(x) x = self.flatten(x) x = self.fc(x) x = self.classifier(x) return x Sequential def seq_model(): model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(32, (3, 3), activation = 'relu', input_shape = (224, 224, 3)), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Conv2D(64, (3, 3), activation = 'relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(128, (3, 3), activation = 'relu'), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Conv2D(128, (3, 3), activation = 'relu'), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation = 'relu'), tf.keras.layers.Dense(1, activation = 'sigmoid'), ]) model.compile(optimizer = RMSprop(learning_rate = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy']) return model
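As a quick, self-contained sanity check of the padding/stride point above, the following sketch (an addition for illustration, not part of the original answer; it only assumes TensorFlow 2.x) prints the feature-map shapes produced by the two convolution configurations that were being compared:

import tensorflow as tf

x = tf.zeros((1, 224, 224, 3))
# Sequential model's configuration: kernel 3, strides=1, padding='valid'
valid_conv = tf.keras.layers.Conv2D(32, 3, activation='relu')
# Original functional model's configuration: kernel 3, strides=2, padding='same'
same_conv = tf.keras.layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')
print(valid_conv(x).shape)  # (1, 222, 222, 32)
print(same_conv(x).shape)   # (1, 112, 112, 32)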
3
0
74,674,688
2022-12-4
https://stackoverflow.com/questions/74674688/google-colab-notebook-using-ijava-stuck-at-connecting-after-installation-ref
All of my notebooks stopped connecting, after the initial IJava installation and browser page refresh. What used to work Execute this first cell !wget https://github.com/SpencerPark/IJava/releases/download/v1.3.0/ijava-1.3.0.zip !unzip ijava-1.3.0.zip !python install.py --sys-prefix Wait for the Installed java kernel message Refresh the browser page. Execute any cell with Java code. Now what happens is I can execute the first cell and get the Installed java kernel message, seeing the notebook status as "Connected". But after refreshing the page, the status of the notebook is stuck at "Connecting" forever, and thus no cells can be executed. -- I'm using Google Colab for free, but since the initial installation still works, and the notebook status is "Connected" before the page is refreshed, this should not be the problem. Any idea what has been changed, and how I can get my Java notebooks to connect again? -- UPDATE 1 After the page reloads, when I try to run a cell containing Java code, this is the error message I'm getting after a while: await connected: disconnected @https://ssl.gstatic.com/colaboratory-static/common/5f9fa09db4e185842380071022f6c9a6/external_polymer_binary_l10n__en_gb.js:6249:377 promiseReactionJob@[native code] Also, the notebook settings are Runtime type: java Hardware accelerator: None The cells contain really simple Java code, no external libraries, no CPU or GPU intensive stuff. For debugging purposes I tried running other cells (like the one with the Java installation, or Python code) - but of course, they also do not execute without connection. -- UPDATE 2 After installing IJava and before the page reload, I noticed that the path for the Java kernel is different than the path for the "preinstalled" ir and python3 kernels: !jupyter kernelspec list Available kernels: ir /usr/local/share/jupyter/kernels/ir python3 /usr/local/share/jupyter/kernels/python3 java /usr/share/jupyter/kernels/java Could that be the problem? (I have never checked this before, so I don't know whether the default-path has been changed recently.) This is the metadata content of the ipynb file: { "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "provenance": [{ "file_id": "...", "timestamp": 1670411565504 }, { "file_id": "...", "timestamp": 1670311531999 }, { "file_id": "...", "timestamp": 1605675807586 }], "authorship_tag": "..." }, "kernelspec": { "name": "java", "display_name": "java" } }, "cells": [{ ... ]} }
At some point colab changed the default transport to ipc (from the default tcp) which is not supported by IJava. /usr/bin/python3 /usr/local/bin/jupyter-notebook --ip=... --transport=ipc --port=... The kernel starts but never properly connects and doesn't send the initial kernel info message that jupyter is waiting for. When/if there comes a point where we can ask to start up with tcp transport instead, that will be preferable (see https://github.com/googlecolab/colabtools/issues/3267) but for the time being we can work around it. The workaround is to stick a little local proxy in front of the java kernel that connects all the ipc channels and forwards them to another set of tcp channels that are connected to the java kernel. The first cell is still the usual install/setup but also includes the install for the proxy as well: %%sh # Install java kernel wget -q https://github.com/SpencerPark/IJava/releases/download/v1.3.0/ijava-1.3.0.zip unzip -q ijava-1.3.0.zip python install.py # Install proxy for the java kernel wget -qO- https://gist.github.com/SpencerPark/e2732061ad19c1afa4a33a58cb8f18a9/archive/b6cff2bf09b6832344e576ea1e4731f0fb3df10c.tar.gz | tar xvz --strip-components=1 python install_ipc_proxy_kernel.py --kernel=java --implementation=ipc_proxy_kernel.py Run that cell. You may have Unrecognized runtime "java"; defaulting to "python3" which is ok. After the cell runs with output similar to: Installed java kernel into "/usr/local/share/jupyter/kernels/java" e2732061ad19c1afa4a33a58cb8f18a9-b6cff2bf09b6832344e576ea1e4731f0fb3df10c/install_ipc_proxy_kernel.py e2732061ad19c1afa4a33a58cb8f18a9-b6cff2bf09b6832344e576ea1e4731f0fb3df10c/ipc_proxy_kernel.py Moving java kernel from /usr/local/share/jupyter/kernels/java... Wrote modified kernel.json for java_tcp in /usr/local/share/jupyter/kernels/java_tcp/kernel.json Installing the proxy kernel in place of java in /usr/local/share/jupyter/kernels/java Installed proxy kernelspec: {"argv": ["/usr/bin/python3", "/usr/local/share/jupyter/kernels/java/ipc_proxy_kernel.py", "{connection_file}", "--kernel=java_tcp"], "env": {}, "display_name": "Java", "language": "java", "interrupt_mode": "message", "metadata": {}} Proxy kernel installed. Go to 'Runtime > Change runtime type' and select 'java' install.py:164: DeprecationWarning: replace is ignored. Installing a kernelspec always replaces an existing installation install_dest = KernelSpecManager().install_kernel_spec( follow the printed instruction: Go to 'Runtime > Change runtime type' and select 'java'. The runtime should now show "Connected to java..." and you should be able to write and execute java code. Try https://colab.research.google.com/gist/SpencerPark/447de114fcd3e6a272dc140809462e30 for an example base notebook. That setup cell should be everything you need to get running, but here is a bit of an explanation for what is in the proxy kernel. It is published as a gist (https://gist.github.com/SpencerPark/e2732061ad19c1afa4a33a58cb8f18a9). The general idea is: rename the real kernel with the _tcp suffix (java_tcp) and install the proxy in it's place with the intended name (java). start the proxy kernel and bind everything as if the proxy is a kernel itself. 
shell_socket = create_and_bind_socket(shell_port, zmq.ROUTER) stdin_socket = create_and_bind_socket(stdin_port, zmq.ROUTER) control_socket = create_and_bind_socket(control_port, zmq.ROUTER) iopub_socket = create_and_bind_socket(iopub_port, zmq.PUB) hb_socket = create_and_bind_socket(hb_port, zmq.REP) start the real kernel with supported params (transport "tcp") and the same session information. This is important so we can forward messages directly to the real kernel without decoding them in between. kernel_manager = KernelManager() kernel_manager.kernel_name = args.kernel kernel_manager.transport = "tcp" kernel_manager.client_factory = ProxyKernelClient kernel_manager.autorestart = False kernel_manager.session.signature_scheme = signature_scheme kernel_manager.session.key = key kernel_manager.start_kernel() start a zmq proxy for each pair of channels (this all the ProxyKernelClient does). Thread(target=zmq.proxy, args=(proxy_server_socket, self.kernel_client_socket)).start() then we are done! Just wait for the managed kernel process to exit, and then exit ourselves as well. exit_code = kernel_manager.kernel.wait() kernel_client.stop_channels() zmq_context.destroy(0) exit(exit_code)
5
4
74,673,048
2022-12-4
https://stackoverflow.com/questions/74673048/github-actions-setup-python-stopped-working
Below is my workflow file, which worked previously forever and I've not changed anything. env: PYTHON_VERSION: '3.8.9' jobs: build: name: build πŸ”§ runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - uses: actions/setup-python@v3 with: python-version: ${{ env.PYTHON_VERSION }} - run: mkdir file-icons product-icons - run: python3 -m translate - uses: actions/upload-artifact@v3 with: name: build path: | file-icons/ product-icons/ thumbnails/ *.md package.json today it decided to stop working πŸ₯² Error: Version 3.8.9 with arch x64 not found this is with debug logging on ##[debug]x64===x64 && linux===linux makes no sense to me, it found a match and then just moved on what it was before
Resolved. GitHub Actions recently updated ubuntu-latest from 20.04 to 22.04 https://github.blog/changelog/2022-08-09-github-actions-ubuntu-22-04-is-now-generally-available-on-github-hosted-runners/ https://github.com/actions/runner-images/issues/6399 Certain versions of Python are not yet supported by the setup-python action on 22.04 (and currently they are not looking to support old Python versions such as 3.8.9). Solutions: option one - pin the runner to ubuntu-20.04 (the suggested option according to the runner image issue on GitHub https://github.com/actions/runner-images/issues/6399) option two - use Python 3.8.12 or newer
4
3
74,670,915
2022-12-3
https://stackoverflow.com/questions/74670915/axis-labels-in-line-with-tick-labels-in-matplotlib
For space reasons, I sometimes make plots in the following style: fig, ax = plt.subplots(figsize=(3, 3)) ax.plot([0,1], [1000, 1001]) ax.set_xticks([0, 1]) ax.set_yticks([1000, 1001]) ax.set_xlabel("x", labelpad=-8) ax.set_ylabel("y", labelpad=-18) Here, I've kept just ticks marking the boundaries of the X/Y domains, and I'm manually aligning the xlabel and ylabel using the labelpad keyword argument so that the x and y axis labels visually align with the tick labels. Note, I've had to use different amounts of padding for the different axes, since the length of the y tick labels 1000 and 1001 extends farther away from the axis than the height of the x tick labels 0 and 1, and since the vertical position of the x axis label and the horizontal position of the y axis label are relative to their usual position, which would be just past the extent of the tick labels. I'm wondering, is there a way to automate this procedure, and to do it exactly rather than visually? For example, if labelpad were relative to the spines, that would be very nice, or if there were a way to determine the extent of the ticks and tick labels away from the spines, that number could be used to automate this as well. A similar effect can be obtained using ax.yaxis.set_label_coords, but this transforms the position relative to the axes' transform, and thus depends on the size of the axes, while the ticks are positioned absolutely relative to the spines.
The path you were going down with ax.{x,y}axis.set_label_coords was pretty much there! All you need to do is wrap the transAxes transform in an offset_copy and then provide an offset that is a combination of the current length of the ticks + any space around the tick bbox. Using Transforms import matplotlib.pyplot as plt from matplotlib.transforms import offset_copy fig, ax = plt.subplots(figsize=(3, 3)) fig.set_facecolor('white') ax.plot([0,1], [1000, 1001]) ax.set_xticks([0, 1]) ax.set_yticks([1000, 1001]) # Create a transform that vertically offsets the label # starting at the edge of the Axes and moving downwards # according to the total length of the bounding box of a major tick t = offset_copy( ax.transAxes, y=-(ax.xaxis.get_tick_padding() + ax.xaxis.majorTicks[0].get_pad()), fig=fig, units='dots' ) ax.xaxis.set_label_coords(.5, 0, transform=t) ax.set_xlabel('x', va='top') # Repeat the above, but on the y-axis t = offset_copy( ax.transAxes, x=-(ax.yaxis.get_tick_padding() + ax.yaxis.majorTicks[0].get_pad()), fig=fig, units='dots' ) ax.yaxis.set_label_coords(0, .5, transform=t) ax.set_ylabel('y', va='bottom') Test with longer ticks import matplotlib.pyplot as plt from matplotlib.transforms import offset_copy fig, ax = plt.subplots(figsize=(3, 3)) fig.set_facecolor('white') ax.plot([0,1], [1000, 1001]) ax.set_xticks([0, 1]) ax.set_yticks([1000, 1001]) ax.xaxis.set_tick_params(length=10) ax.yaxis.set_tick_params(length=15) t = offset_copy( ax.transAxes, y=-(ax.xaxis.get_tick_padding() + ax.xaxis.majorTicks[0].get_pad()), fig=fig, units='points' ) ax.xaxis.set_label_coords(.5, 0, transform=t) ax.set_xlabel('x', va='top') t = offset_copy( ax.transAxes, x=-(ax.yaxis.get_tick_padding() + ax.yaxis.majorTicks[0].get_pad()), fig=fig, units='points' ) ax.yaxis.set_label_coords(0, .5, transform=t) ax.set_ylabel('y', va='bottom') Longer ticks & increased DPI import matplotlib.pyplot as plt from matplotlib.transforms import offset_copy fig, ax = plt.subplots(figsize=(3, 3), dpi=150) fig.set_facecolor('white') ax.plot([0,1], [1000, 1001]) ax.set_xticks([0, 1]) ax.set_yticks([1000, 1001]) ax.xaxis.set_tick_params(length=10) ax.yaxis.set_tick_params(length=15) t = offset_copy( ax.transAxes, y=-(ax.xaxis.get_tick_padding() + ax.xaxis.majorTicks[0].get_pad()), fig=fig, units='points' ) ax.xaxis.set_label_coords(.5, 0, transform=t) ax.set_xlabel('x', va='top') t = offset_copy( ax.transAxes, x=-(ax.yaxis.get_tick_padding() + ax.yaxis.majorTicks[0].get_pad()), fig=fig, units='points' ) ax.yaxis.set_label_coords(0, .5, transform=t) ax.set_ylabel("y", va='bottom')
4
2
74,656,397
2022-12-2
https://stackoverflow.com/questions/74656397/attributeerror-function-object-has-no-attribute-register-when-using-functoo
Goal: create a single-dispatch generic function; as per functools documentation. I want to use my_func() to calculate dtypes: int or list, in pairs. Note: I've chosen to implement type hints and to raise errors for my own test cases, outside of the scope of this post. For many argument data type hints; this post uses many @my_func.register(...) decorators. Code: from functools import singledispatch from typeguard import typechecked from typing import Union @singledispatch @typechecked def my_func(x, y): raise NotImplementedError(f'{type(x)} and or {type(y)} are not supported.') @my_func.register(int) def my_func(x: int, y: int) -> Union[int, str]: try: return round(100 * x / (x + y)) except (ZeroDivisionError, TypeError, AssertionError) as e: return f'{e}' @my_func.register(list) def my_func(x: list, y: list) -> Union[int, str]: try: return round(100 * sum(x) / (sum(x) + sum(y))) except (ZeroDivisionError, TypeError, AssertionError) as e: return f'{e}' a = my_func(1, 2) print(a) b = my_func([0, 1], [2, 3]) print(b) (venv) me@ubuntu-pcs:~/PycharmProjects/project$ python3 foo/bar.py /home/me/miniconda3/envs/venv/lib/python3.9/site-packages/typeguard/__init__.py:1016: UserWarning: no type annotations present -- not typechecking __main__.my_func warn('no type annotations present -- not typechecking {}'.format(function_name(func))) Traceback (most recent call last): File "/home/me/PycharmProjects/project/foo/bar.py", line 22, in <module> @my_func.register(list) AttributeError: 'function' object has no attribute 'register'
Functions must be uniquely named Credit to @Bijay Regmi for pointing this out. @typechecked is placed above only the polymorphed @my_func.register functions; not above the @singledispatch function. Note: you still invoke my_func(); I am just testing the polymorphed functions. from functools import singledispatch from typeguard import typechecked from typing import Union @singledispatch def my_func(x, y): raise NotImplementedError(f'{type(x)} and or {type(y)} are not supported.') @my_func.register(int) @typechecked def my_func_int(x: int, y: int) -> Union[int, str]: try: return round(100 * x / (x + y)) except (ZeroDivisionError, TypeError, AssertionError) as e: return f'{e}' @my_func.register(list) @typechecked def my_func_list(x: list, y: list) -> Union[int, str]: try: return round(100 * sum(x) / (sum(x) + sum(y))) except (ZeroDivisionError, TypeError, AssertionError) as e: return f'{e}' a = my_func_int(1, 2) print(a) b = my_func_list([0, 1], [2, 3]) print(b) (venv) me@ubuntu-pcs:~/PycharmProjects/project$ python3 foo/bar.py 33 17
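As a side note (an addition based on the standard library documentation, not part of the original answer): on Python 3.7+, register() can also infer the dispatch type from the annotation on the first parameter, so the explicit type argument is optional. A minimal sketch following the same pattern as above:

@my_func.register
def my_func_float(x: float, y: float) -> Union[int, str]:
    # Dispatches on float because of the annotation on x
    try:
        return round(100 * x / (x + y))
    except (ZeroDivisionError, TypeError, AssertionError) as e:
        return f'{e}'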
3
2
74,692,201
2022-12-5
https://stackoverflow.com/questions/74692201/aggregation-calculation-method-for-treemap-in-plotly-express-python
Thanks by advance for people who will try to help me. This is the first time I ask a question as I have been struggling for days on this one! Eternal glory to the one helping me out with this! Let me explain my problem with a few lines of codes and screens. I want to create a treemap showing the growth of values between 2 dates. In order to be more precise, I want this treemap to: -Have squares that have a size proportional to a value x at date 2 AND be coloured according to a scale showing the growth of this value x from date 1 to date 2. Let us consider the following example: import pandas as pd import numpy as np import matplotlib.pyplot as plt import plotly.express as px import plotly data = {'variable': ['a', 'b', 'c'], 'parent': ['I', 'I', 'II'], 'value_1': [1,4,5], 'value_2': [4,2,5] } df = pd.DataFrame(data) df['growth'] = 100 * (df['value_2'] / df['value_1'] - 1) fig = px.treemap(df, path=['parent', 'variable'], values = 'value_2', color='growth', color_continuous_scale='plasma') fig.show() It gives me the beautiful treemap here: Growth treemap But here is the problem. As you may see on the following screen, the growth for I is 183%: a wrong growth! However, when calculating manually, a going from 1 to 4, and b from 4 to 2, the growth should be: 1/5 * 300% + 4/5 * -50% = 20% (I goes from 5 to 6). This is due because the calculus that is made is 4/6 * 300% + 2/6 * -50% = 183%. The method is calculating the weighting average wrt to the new coefficients, and not the former ones as it should in theory. Is there a way, to have the correct growth when aggregating to a parent class? Thank you very much for your help, and let me know if I can help further
I couldn't find a way to get the data across as you're trying to depict it. However, I did come up with a workaround. This requires the use of plotly.io. I want to point out that the nice contrast you have with the colors is lost, when you change the parent to 20% from 183.333333--- essentially that parent is nearly the same color as II, because the values are 20 and 0, whereas 'a' is 300 and the low is only -50. Additionally, I added px.Constant so that you don't get a really useless hover label for the root (the black-ish background parent of the parents). import pandas as pd import plotly.express as px import plotly.io as pio fig = px.treemap(df, path=[px.Constant('Total'), 'parent', 'variable'], values = 'value_2', color='growth', color_continuous_scale='plasma') Now when you use pio, you will create an external file, but this is only way, short of using Jupyter, to add Javascript to your plot. This will automatically open in your browser, like fig.show(), except this will reflect that the parent I has a growth of 20% in the hover data. pio.write_html(fig, 'index.html', auto_open = True, div_id = 'thisPlot', include_mathjax = 'cdn', include_plotlyjs = 'cdn', full_html = True, post_script = "setTimeout(function() {" + "el = document.getElementById('thisPlot');" + "el.data[0].marker.colors[3] = 20; /* change the calc value */" + "Plotly.newPlot(el, el.data, el.layout); /* re-plot it */" "}, 200)") You may notice that there is el.data[0].marker.colors[3] called to change. That's the parent I. Here's all of the data that is captured in el.data[0].marker.colors before this change is made: [300, -50, 0, 183.33333333333334, 0, 100]. By the way, whenever I go the route of pio.write_html, I always name the file the same thing, so it's always overwriting itself. I'm not interested in the saved file personally, just the outcome of post_script.
3
2
74,706,620
2022-12-6
https://stackoverflow.com/questions/74706620/python-pytest-in-azure-devops-modulenotfounderror-no-module-named-core
I have my DevOps pipeline which is using pytest to execute unit tests found in Python code. I'm using a root folder called "core" for the main python functionality, and reference it using the following format: import unittest from core.objects.paragraph import Paragraph from core.objects.sentence import Sentence Running this locally (python -m pytest) works correctly and executes all of the tests. However, everytime I run this from Azure DevOps I get the error message: ModuleNotFoundError: No module named 'core' I have no idea why this is happening πŸ€·β€β™‚οΈπŸ€·β€β™‚οΈπŸ€·β€β™‚οΈ For reference, this is my YAML build definition trigger: - main pool: vmImage: ubuntu-latest strategy: matrix: Python310: python.version: '3.10.7' steps: - task: UsePythonVersion@0 inputs: versionSpec: '$(python.version)' displayName: 'Use Python $(python.version)' - script: | python -m pip install --upgrade pip pip install -r requirements.txt displayName: 'Install dependencies' - script: | pip install pytest pytest-azurepipelines displayName: 'Install pytest' - script: | python -m pytest --junitxml=$(Build.StagingDirectory)/Testoutput.xml workingDirectory: 'tests' displayName: 'Test with PyTest' Once the execution has finished, this is the error message I'm seeing: ____________ ERROR collecting test_objects/test_paragraph_object.py ____________ ImportError while importing test module '/home/vsts/work/1/s/tests/test_objects/test_paragraph_object.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /opt/hostedtoolcache/Python/3.10.7/x64/lib/python3.10/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) test_objects/test_paragraph_object.py:2: in <module> from core.objects.paragraph import Paragraph E ModuleNotFoundError: No module named 'core' ------------ generated xml file: /home/vsts/work/1/a/Testoutput.xml ------------
Please make sure the directory /home/vsts/work/1/s/tests/ has an __init__.py file.
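If adding __init__.py files alone does not fix the import, a common complementary trick is a small conftest.py inside the tests directory that puts the repository root on sys.path, so the core package is importable no matter which working directory pytest is started from. This is only a hedged sketch added for illustration; the file location tests/conftest.py is an assumption about the repository layout:

# tests/conftest.py
import os
import sys

# Make the repository root (the parent of tests/) importable so that
# `from core.objects.paragraph import Paragraph` resolves.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))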
3
2
74,709,683
2022-12-6
https://stackoverflow.com/questions/74709683/combine-a-row-with-column-in-dataframe-and-show-the-corresponding-values
So I want to show this data in just two columns. For example, I want to turn this data Year Jan Feb Mar Apr May Jun 1997 3.45 2.15 1.89 2.03 2.25 2.20 1998 2.09 2.23 2.24 2.43 2.14 2.17 1999 1.85 1.77 1.79 2.15 2.26 2.30 2000 2.42 2.66 2.79 3.04 3.59 4.29 into this Date Price Jan-1977 3.45 Feb-1977 2.15 Mar-1977 1.89 Apr-1977 2.03 .... Jan-2000 2.42 Feb-2000 2.66 So far, I have read about how to combine two columns into another dataframe using .apply() .agg(), but no info how to combine them as I showed above. import pandas as pd df = pd.read_csv('matrix-A.csv', index_col =0 ) matrix_b = ({}) new = pd.DataFrame(matrix_b) new["Date"] = df['Year'].astype(float) + "-" + df["Dec"] print(new) I have tried this way, but it of course does not work. I have also tried using pd.Series() but no success I want to ask whether there is any site where I can learn how to do this, or does anybody know correct way to solve this?
Another possible solution, which is based on pandas.DataFrame.stack: out = df.set_index('Year').stack() out.index = ['{}_{}'.format(j, i) for i, j in out.index] out = out.reset_index() out.columns = ['Date', 'Value'] Output: Date Value 0 Jan_1997 3.45 1 Feb_1997 2.15 2 Mar_1997 1.89 3 Apr_1997 2.03 4 May_1997 2.25 .... 19 Feb_2000 2.66 20 Mar_2000 2.79 21 Apr_2000 3.04 22 May_2000 3.59 23 Jun_2000 4.29
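For comparison, here is a hedged sketch of the same reshaping done with pandas.melt instead of stack (added for illustration; note that melt emits the rows grouped by month rather than by year, so the row order differs from the output above):

out = df.melt(id_vars='Year', var_name='Month', value_name='Price')
out['Date'] = out['Month'] + '-' + out['Year'].astype(str)
out = out[['Date', 'Price']]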
3
2
74,703,727
2022-12-6
https://stackoverflow.com/questions/74703727/how-to-call-async-function-from-sync-funcion-and-get-result-while-a-loop-is-alr
I have a asyncio running loop, and from the coroutine I'm calling a sync function, is there any way we can call and get result from an async function in a sync function tried below code, it is not working want to print output of hel() in i() without changing i() to async function is it possible, if yes how? import asyncio async def hel(): return 4 def i(): loop = asyncio.get_running_loop() x = asyncio.run_coroutine_threadsafe(hel(), loop) ## need to change y = x.result() ## this lines print(y) async def h(): i() asyncio.run(h())
This is one of the most commonly asked type of question here. The tools to do this are in the standard library and require only a few lines of setup code. However, the result is not 100% robust and needs to be used with care. This is probably why it's not already a high-level function. The basic problem with running an async function from a sync function is that async functions contain await expressions. Await expressions pause the execution of the current task and allow the event loop to run other tasks. Therefore async functions (coroutines) have special properties that allow them to yield control and resume again where they left off. Sync functions cannot do this. So when your sync function calls an async function and that function encounters an await expression, what is supposed to happen? The sync function has no ability to yield and resume. A simple solution is to run the async function in another thread, with its own event loop. The calling thread blocks until the result is available. The async function behaves like a normal function, returning a value. The downside is that the async function now runs in another thread, which can cause all the well-known problems that come with threaded programming. For many cases this may not be an issue. This can be set up as follows. This is a complete script that can be imported anywhere in an application. The test code that runs in the if __name__ == "__main__" block is almost the same as the code in the original question. The thread is lazily initialized so it doesn't get created until it's used. It's a daemon thread so it will not keep your program from exiting. The solution doesn't care if there is a running event loop in the main thread. import asyncio import threading _loop = asyncio.new_event_loop() _thr = threading.Thread(target=_loop.run_forever, name="Async Runner", daemon=True) # This will block the calling thread until the coroutine is finished. # Any exception that occurs in the coroutine is raised in the caller def run_async(coro): # coro is a couroutine, see example if not _thr.is_alive(): _thr.start() future = asyncio.run_coroutine_threadsafe(coro, _loop) return future.result() if __name__ == "__main__": async def hel(): await asyncio.sleep(0.1) print("Running in thread", threading.current_thread()) return 4 def i(): y = run_async(hel()) print("Answer", y, threading.current_thread()) async def h(): i() asyncio.run(h()) Output: Running in thread <Thread(Async Runner, started daemon 28816)> Answer 4 <_MainThread(MainThread, started 22100)>
9
8
74,708,868
2022-12-6
https://stackoverflow.com/questions/74708868/how-to-create-a-loop-to-takes-an-existing-df-and-creates-a-randomized-new-df
I am trying to build a tool that will essentially scramble a dataset while maintaining the same elements. For example, if I have the table below 1 2 3 4 5 6 0 ABC 1234 NL00 Paid VISA 1 BCD 2345 NL01 Unpaid AMEX 2 CDE 3456 NL02 Unpaid VISA I want it to then look go through each column, pick a random value, and paste that into a new df. An example output would be 1 2 3 4 5 6 2 BCD 2345 NL01 Unpaid VISA 0 BCD 1234 NL02 Unpaid VISA 0 CDE 3456 NL01 Paid VISA I have managed to make it work with the code below, although for 24 columns the code was quite repetitive and I know a loop should be able to do this much quicker, I just have not been able to make it work. import pandas as pd import random lst1 = df['1'].to_list() lst2 = df['2'].to_list() lst3 = df['3'].to_list() lst4 = df['4'].to_list() lst5 = df['5'].to_list() lst6 = df['6'].to_list() df_new = pd.DataFrame() df_new['1'] = random.choices(lst1, k=2000) df_new['2'] = random.choices(lst2, k=2000) df_new['3'] = random.choices(lst3, k=2000) df_new['4'] = random.choices(lst4, k=2000) df_new['5'] = random.choices(lst5, k=2000) df_new['6'] = random.choices(lst6, k=2000)
Here's an easy solution: df.apply(pd.Series.sample, replace=True, ignore_index=True, frac=1) Output (potential): 1 2 3 4 5 6 0 2 CDE 3456 NL00 Paid VISA 1 2 BCD 3456 NL01 Paid VISA 2 0 CDE 3456 NL01 Paid VISA pd.DataFrame.apply applies the pd.Series.sample method to each column of the dataframe with resampling (replace=True) and returns 100% of the size of the original dataframe with frac=1.
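A small follow-up sketch (not part of the original answer): because keyword arguments passed to DataFrame.apply are forwarded to pd.Series.sample, a random_state can be added the same way to make the scrambled output reproducible:

df.apply(pd.Series.sample, replace=True, ignore_index=True, frac=1, random_state=42)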
3
2
74,707,663
2022-12-6
https://stackoverflow.com/questions/74707663/remove-duplicates-based-on-combination-of-two-columns-in-pandas
I need to delete duplicated rows based on combination of two columns (person1 and person2 columns) which have strings. For example person1: ryan and person2: delta or person 1: delta and person2: ryan is same and provides the same value in messages column. Need to drop one of these two rows. Return the non duplicated rows as well. Code to recreate df df = pd.DataFrame({"": [0,1,2,3,4,5,6], "person1": ["ryan", "delta", "delta", "delta","bravo","alpha","ryan"], "person2": ["delta", "ryan", "alpha", "bravo","delta","ryan","alpha"], "messages": [1, 1, 2, 3,3,9,9]}) df person1 person2 messages 0 0 ryan delta 1 1 1 delta ryan 1 2 2 delta alpha 2 3 3 delta bravo 3 4 4 bravo delta 3 5 5 alpha ryan 9 6 6 ryan alpha 9 Answer df should be: finaldf person1 person2 messages 0 0 ryan delta 1 1 2 delta alpha 2 2 3 delta bravo 3 3 5 alpha ryan 9
Try as follows: res = (df[~df.filter(like='person').apply(frozenset, axis=1).duplicated()] .reset_index(drop=True)) print(res) person1 person2 messages 0 0 ryan delta 1 1 2 delta alpha 2 2 3 delta bravo 3 3 5 alpha ryan 9 Explanation First, we use df.filter to select just the columns with person*. For these columns only we use df.apply to turn each row (axis=1) into a frozenset. So, at this stage, we are looking at a pd.Series like this: 0 (ryan, delta) 1 (ryan, delta) 2 (alpha, delta) 3 (bravo, delta) 4 (bravo, delta) 5 (alpha, ryan) 6 (alpha, ryan) dtype: object Now, we want to select the duplicate rows, using Series.duplicated and add ~ as a prefix to the resulting boolean series to select the inverse from the original df. Finally, we reset the index with df.reset_index.
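As an aside (an added sketch, not part of the accepted approach): the same "order-insensitive pair" idea can also be expressed by sorting the two person columns with numpy and dropping duplicates on the sorted pairs, which can be faster on large frames:

import numpy as np

key = pd.Series([tuple(r) for r in np.sort(df[['person1', 'person2']].to_numpy(), axis=1)])
res = df[~key.duplicated().to_numpy()].reset_index(drop=True)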
6
5
74,696,410
2022-12-6
https://stackoverflow.com/questions/74696410/generic-iterator-annotations-python
I am trying to annotate an Iterator which only returns two values, T and cls[T]. Currently I have it annotated like this: from __future__ import annotations import typing class Node(typing.Generic[T]): def __init__(self, value: T, next: typing.Optional[Node[T]] = None) -> None: self.value = value self.next = next def __iter__(self) -> typing.Iterator[typing.Union[T, Node[T]]]: yield from (self.value, self.next) This code works without any errors. However, I wish to take advantage of the fact that this dunder method will always yield two values to simplify the process for the user. As it is, the user has to deal with the following: one = Node[int](1, Node[int](2)) value, next = one # value = 1, next = Node(2) (i.e one.next) # This is a typing error because next can be either int or Node[int] # and 'two' is expected to be Node[int]: two: Node[int] = next # we can fix this by doing type narrowing assert isinstance(next, Node) two = next # now the error should be gone So basically I want to take advantage of the fact that the second thing returned by the __iter__ is always of type Node[T] to avoid having to do type narrowing. I know that I have to change typing.Union[T, Node[T]] in the method's return annotation, but I do not know what to change it to.
This is not possible to annotate using the generic Iterator. It is a class expecting exactly one type argument. (current typeshed source) That means every value returned by its __next__ method necessarily has the same type. You are invoking the iterator protocol on that tuple of self.value, self.next. A tuple has an arbitrary number of type arguments (see here), but an iterator over it must still have exactly one. This actually leads to typing issues fairly often. Since you seem to intend your Node class to essentially emulate the tuple interface, this may be one of the rare cases, where it is better to inherit from it directly. A tuple will obviously also give you the iterable protocol, so you can still unpack it as before, but the types should be inferred properly, if you do everything correctly. Here is a full working example: from __future__ import annotations from typing import TypeVar, Optional T = TypeVar("T") class Node(tuple[T, "Node[T]"]): def __new__(cls, value: T, next_: Optional[Node[T]] = None) -> Node[T]: return tuple.__new__(cls, (value, next_)) def __init__(self, value: T, next_: Optional[Node[T]] = None) -> None: self.value = value self.next = next_ if __name__ == "__main__": node_1 = Node(1, Node(2)) val: int node_2: Node[int] val, node_2 = node_1 This passes mypy --strict without problems. As an unrelated side note, I would advise against using built-in names like next. Also, note that you do not need to specify the type argument for Node, when you initialize one because it is automatically bound to the type passed to the value parameter.
4
1
74,706,249
2022-12-6
https://stackoverflow.com/questions/74706249/when-does-class-attribute-initialization-code-run-in-python
There is a class attribute spark in our AnalyticsWriter class: class AnalyticsWriter: spark = SparkSession.getActiveSession() # this is not getting executed I noticed that this code is not being executed before a certain class method is run. Note: it has been verified that there is already an active SparkSession available in the process: so the init code is simply not being executed @classmethod def measure_upsert( cls ) -> DeltaTable: assert AnalyticsWriter.spark, "AnalyticsWriter requires \ an active SparkSession" I come from jvm-land (java/scala) and in those places the class level initialization code happens before any method invocations. What is the equivalent in python?
Class attributes are initialized at the moment they are hit, during class definition, so the line containing the getActiveSession() call is run before the class is even fully defined. class AnalyticsWriter: spark = SparkSession.getActiveSession() # The code has been run here # ... other definitions that occur after spark exists ... # class is complete here I suspect the code is doing something, just not what you expect. You can confirm that it is in fact run with a cheesy hack like: class AnalyticsWriter: spark = (SparkSession.getActiveSession(), print("getActiveSession called", flush=True))[0] which just makes a tuple of the result of your call and an eager print, then discards the meaningless result from the print; you should see the output from the print immediately, before you can get around to calling class methods.
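A minimal, Spark-free demonstration of that timing (added here for illustration): the print fires while the class statement itself is being executed, before any method is called.

def probe():
    print("class body executing")
    return 42

class Demo:
    value = probe()   # runs immediately, during the class definition

print("after class definition:", Demo.value)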
3
5
74,702,540
2022-12-6
https://stackoverflow.com/questions/74702540/how-to-map-a-ms-sql-server-table-that-has-a-uniqueidentifier-primary-key
Using sqlalchemy 1.4, I want to map a class to a table that has a UNIQUEIDENTIFIER primary key. Using sqlalchemy.String does not work (complains about the fact that you cannot increment it). Checking dialects, I tried to used sqlalchemy.dialects.mssql.UNIQUEIDENTIFIER, however this does not work either: class Result(Base): __tablename__ = os.environ["RESULT_TABLE_NAME"] __table_args__ = {"schema": "myschema"} id = Column(UNIQUEIDENTIFIER) sqlalchemy.exc.ArgumentError: Mapper mapped class Result->DT_ODS_RESULT could not assemble any primary key columns for mapped table 'DT_ODS_RESULT' Using the primary key parameters: class Result(Base): __tablename__ = os.environ["RESULT_TABLE_NAME"] __table_args__ = {"schema": "SIDODS"} id = Column(UNIQUEIDENTIFIER, primary_key=True) sqlalchemy.orm.exc.FlushError: Instance <Result at 0x7fbf9ab99d60> has a NULL identity key. If this is an auto-generated value, check that the database table allows generation of new primary key values, and that the mapped Column object is configured to expect these generated values. Ensure also that this flush() is not occurring at an inappropriate time, such as within a load() event. Which I do not understand, because my column has a default value, as shown in this query part: ALTER TABLE [SIDODS].[DT_ODS_RESULT] ADD DEFAULT (newid()) FOR [ID] The row is inserted with the following: r = Result( query_id=query_id, trigger_id=trigger_id, insertion_datetime=get_datetime_at_timezone() ) session.add(r) session.commit() How to correctly map my sqlalchemy model class so that I can insert, without specifying it manually, a row with a UNIQUEIDENTIFIER type ?
Specifying primary_key=True and server_default=text("newid()") is sufficient to let SQLAlchemy know that the server will take care of assigning the PK value: from sqlalchemy import Column, String, create_engine, select, text from sqlalchemy.dialects.mssql import UNIQUEIDENTIFIER from sqlalchemy.orm import Session, declarative_base engine = create_engine("mssql+pyodbc://scott:tiger^5HHH@mssql_199") Base = declarative_base() class Thing(Base): __tablename__ = "thing" id = Column(UNIQUEIDENTIFIER, primary_key=True, server_default=text("newid()")) description = Column(String(100)) Base.metadata.drop_all(engine, checkfirst=True) engine.echo = True Base.metadata.create_all(engine) """DDL emitted: CREATE TABLE thing ( id UNIQUEIDENTIFIER NOT NULL DEFAULT newid(), description VARCHAR(100) NULL, PRIMARY KEY (id) ) """ engine.echo = False with Session(engine) as sess: sess.add(Thing(description="a thing")) sess.commit() with engine.begin() as conn: print(conn.execute(select(Thing.__table__)).all()) # [('32570502-7F18-40B5-87E2-7A60232FE9BD', 'a thing')]
4
3
74,687,769
2022-12-5
https://stackoverflow.com/questions/74687769/typeerror-getattr-attribute-name-must-be-string-in-pytorch-diffusers-how
I am trying the diffusers of Pytorch to generate pictures in my Mac M1. I have a simple syntax like this: modelid = "CompVis/stable-diffusion-v1-4" device = "cuda" pipe = StableDiffusionPipeline.from_pretrained(modelid, revision="fp16", torch_dtype=torch.float16, use_auth_token=auth_token) pipe.to(device) when I run my script, it throws an error, (meta_ai) ➜ Difussion_Model /Users/urs/miniforge3/envs/meta_ai/bin/python "/Users/urs/Downloads/Difussion_Model/03_StableD iffusionApp/app trial1.py" Fetching 19 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:00<00:00, 10253.70it/s] Traceback (most recent call last): File "/Users/urs/Downloads/Difussion_Model/03_StableDiffusionApp/app trial1.py", line 27, in <module> pipe = StableDiffusionPipeline.from_pretrained(modelid, revision="fp16", torch_dtype=torch.float16, use_auth_token=auth_token) File "/Users/urs/miniforge3/envs/meta_ai/lib/python3.9/site-packages/diffusers/pipeline_utils.py", line 239, in from_pretrained load_method = getattr(class_obj, load_method_name) TypeError: getattr(): attribute name must be string In torch_dtype=torch.float16, I have tried all different types available here: https://pytorch.org/docs/stable/tensor_attributes.html, but none of it works. Would anyone please help? Updates on 6 Dec: I copy and paste the code from the official page which is dedicated to M1, https://huggingface.co/docs/diffusers/optimization/mps The code is as follow, # make sure you're logged in with `huggingface-cli login` from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") pipe = pipe.to("mps") # Recommended if your computer has < 64 GB of RAM pipe.enable_attention_slicing() prompt = "a photo of an astronaut riding a horse on mars" # First-time "warmup" pass (see explanation above) _ = pipe(prompt, num_inference_steps=1) # Results match those from the CPU device after the warmup pass. image = pipe(prompt).images[0] But I still get the same error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[14], line 4 1 # make sure you're logged in with `huggingface-cli login` 2 from diffusers import StableDiffusionPipeline ----> 4 pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") 5 pipe = pipe.to("mps") 7 # Recommended if your computer has < 64 GB of RAM File ~/miniforge3/envs/meta_ai/lib/python3.9/site-packages/diffusers/pipeline_utils.py:239, in DiffusionPipeline.from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 236 if issubclass(class_obj, class_candidate): 237 load_method_name = importable_classes[class_name][1] --> 239 load_method = getattr(class_obj, load_method_name) 241 loading_kwargs = {} 242 if issubclass(class_obj, torch.nn.Module): TypeError: getattr(): attribute name must be string
The device should be mps (device='mps'). Mac M1 has no inbuilt Nvidia GPU. Also, I would suggest you check the How to use Stable Diffusion in Apple Silicon (M1/M2) Hugging Face blog post and make sure all the requirements are satisfied. Also, check your installed diffusers version. import diffusers print(diffusers.__version__) If it is <=0.4.0, please update it using: pip install --upgrade diffusers transformers scipy
5
2
74,700,723
2022-12-6
https://stackoverflow.com/questions/74700723/python-subprocess-or-bat-script-splits-argument-on-equal-sign
How do I add this: -param:abc=def as a SINGLE command line argument? Module subprocess splits this up in TWO arguments by replacing the equal sign with a space. Here is my python script: import subprocess pa=['test.bat', '--param:abc=def'] subprocess.run(pa) Here is the test program test.bat: @echo off echo Test.bat echo Arg0: %0 echo Arg1: %1 echo Arg2: %2 pause and here the output: Test.bat Arg0: test.bat Arg1: --param:abc Arg2: def Press any key to continue . . . Because the equal sign is gone, the real app will not be started correctly. By the way, this problem also seems to happen when running on linux, with a sh script instead of a bat file. I understand that removing the equal sign is a 'feature' in certain cases, e.g. with the argparse module, but in my case I need to keep the equal sign. Any help is appreciated!
Welcome to .bat file hell To preserve equal sign, you'll have to quote your argument (explained here Preserving "=" (equal) characters in batch file parameters) ('"--param:abc=def"'), but then subprocess will escape the quotes Test.bat Arg0: test.bat Arg1: \"--param:abc=def\" Arg2: Good old os.system won't do that import os os.system('test.bat "--param:abc=def"') result Test.bat Arg0: test.bat Arg1: "--param:abc=def" Arg2: Damn, those quotes won't go off. Let's tweak the .bat script a little to remove them manually @echo off echo Test.bat echo Arg0: %0 rem those 2 lines remove the quotes set ARG1=%1 set ARG1=%ARG1:"=% echo Arg1: %ARG1% echo Arg2: %2 now it yields the proper result Test.bat Arg0: test.bat Arg1: --param:abc=def Arg2: Alternatively, stick to subprocess and remove quotes AND backslashes.
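For what it's worth, here is a small added illustration of why the escaped quotes show up (a sketch; subprocess.list2cmdline is the helper subprocess uses to build the Windows command line from an argument list):

import subprocess

# Each literal double quote in the argument gets backslash-escaped,
# which is exactly what the .bat file then sees as %1.
print(subprocess.list2cmdline(['test.bat', '"--param:abc=def"']))
# test.bat \"--param:abc=def\"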
3
1
74,683,670
2022-12-5
https://stackoverflow.com/questions/74683670/how-to-get-the-extracted-feature-vector-from-transfer-learning-models-python
I am trying to implement a classification model with ResNet50. I know, CNN or transfer learning models extract the features themselves. How can I get the extracted feature vector from the image dataset in python? Code Snippet of model training: base_model = ResNet152V2( input_shape = (height, width, 3), include_top=False, weights='imagenet' ) from tensorflow.keras.layers import MaxPool2D, BatchNormalization, GlobalAveragePooling2D top_model = base_model.output top_model = GlobalAveragePooling2D()(top_model) top_model = Dense(1072, activation='relu')(top_model) top_model = Dropout(0.6)(top_model) top_model = Dense(256, activation='relu')(top_model) top_model = Dropout(0.6)(top_model) prediction = Dense(len(categories), activation='softmax')(top_model) model = Model(inputs=base_model.input, outputs=prediction) for layer in model.layers: layer_trainable=False from tensorflow.keras.optimizers import Adam model.compile( optimizer = Adam(learning_rate=0.00001), loss='categorical_crossentropy', metrics=['accuracy'] ) import keras from keras.callbacks import EarlyStopping es_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3) datagen = ImageDataGenerator(rotation_range= 30, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.3, horizontal_flip=True ) resnet152v2 = model.fit(datagen.flow(X_train, y_train), validation_data = (X_val, y_val), epochs = 100, callbacks=[es_callback] ) I followed a solution from here. Code snippet of extracting features from the trained model: extract = Model(model.inputs, model.layers[-3].output) #top_model = Dropout(0.6)(top_model) features = extract.predict(X_test) The shape of X_test is (120, 224, 224, 3)
Once you have trained your model, you can save it (or just directly use it in the same file) then cut off the top layer. model = load_model("xxx.h5") #Or use model directly #Use functional model from tensorflow.keras.models import Model featuresModel = Model(inputs=model.input,outputs=model.layers[-3].output) #Gets rid of dropout too #Inference directly featureVector = featuresModel.predict(yourBatchData, batch_size=yourBatchLength) If you run inference in batches, you would need to separate the output results to see the result for each input in the batch.
3
1
74,683,994
2022-12-5
https://stackoverflow.com/questions/74683994/how-to-query-azure-sql-database-using-python-async
I'm stuck and can't figure out a workable way to connect asynchronously to an Azure SQL database using Python. I've tried asyncio, pyodbc and asyncpg to no avail. I think this is close... import asyncio import pyodbc async def query_azure_sql_db(): connection_string = 'Driver={ODBC Driver 17 for SQL Server};Server=tcp:<mySERVER>.database.windows.net,1433;Database=sqldbstockdata;Uid=<myUN>;Pwd=<myPW>;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;' async with pyodbc.connect(connection_string) as conn: async with conn.cursor() as cursor: query = 'SELECT * FROM dbo.<myTABLE>' await cursor.execute(query) results = cursor.fetchall() return results loop = asyncio.get_event_loop() results = loop.run_until_complete(query_azure_sql_db()) print(results) But results in this cryptic error: AttributeError: __aenter__ I'm open to other libraries. Any help is appreciated.
The problem is that pyodbc.connect is a function and does not implement the async context manager protocol, which is why you are getting the AttributeError. I recommend using aioodbc instead of pyodbc as it provides the same functionality with async methods. You can also make it work with async with by implementing the __aenter__ and __aexit__ dunder methods. import asyncio import aioodbc class AsyncPyodbc: def __init__(self, dsn, autocommit=False, ansi=False, timeout=0, loop=None, executor=None, echo=False, after_created=None, **kwargs): self._connection: aioodbc.Connection = aioodbc.connect(dsn=dsn, autocommit=autocommit, ansi=ansi, loop=loop, executor=executor, echo=echo, timeout=timeout, after_created=after_created, **kwargs) async def __aenter__(self) -> aioodbc.Connection: return await self._connection async def __aexit__(self, *_): return await self._connection.close() This will require an event loop and will be used as follows. async def query_azure_sql_db(): connection_string = ( 'Driver={ODBC Driver 17 for SQL Server};' 'Server=tcp:<mySERVER>.database.windows.net,1433;' 'Database=sqldbstockdata;Uid=<myUN>;Pwd=<myPW>;' 'Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;' ) loop = asyncio.get_event_loop() async with AsyncPyodbc(connection_string, loop=loop) as conn: ...
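A hedged usage sketch of the class above (the query and the <myTABLE> placeholder mirror the question; nothing here has been verified against a real database):

async def fetch_all(connection_string, loop):
    async with AsyncPyodbc(connection_string, loop=loop) as conn:
        async with conn.cursor() as cursor:
            await cursor.execute('SELECT * FROM dbo.<myTABLE>')
            return await cursor.fetchall()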
5
6
74,672,979
2022-12-4
https://stackoverflow.com/questions/74672979/upload-a-python-package-to-azure-artifacts-using-a-github-action
I have a repository on Github for a private Python package for my organization, and I would like to publish this package in a private Azure Artifacts feed using Github actions for automated CI/CD. It seems like none of the documentation around publishing to Azure Artifacts is for this exact scenario; the documentation from ADO is solely for using Azure Pipelines, or seems to require manual action (for instance, trying to use artifacts-keyring). Is there some way to provide twine with credentials in a Github Actions YAML file that would enable me to publish to my private ADO?
You can save the authentication information in a configuration file in the repository, and then publish the Python package from a command-line step in your GitHub Actions workflow.
4
0
74,682,391
2022-12-5
https://stackoverflow.com/questions/74682391/python-get-the-cache-dictionary-of-lru-cache-wrapper-object
I have this simple cache decorator demo here @functools.cache def cached_fib(n): assert n > 0 if n <= 2: return 1 return cached_fib(n - 1) + cached_fib(n - 2) t1 = time.perf_counter() cached_fib(400) t2 = time.perf_counter() print(f"cached_fib: {t2 - t1}") # 0.0004117000003134308 I want to access the actual cache dictionary object inside this cached_fib, but when I try to access through cached_fib.cache, it gives me an error saying that AttributeError: 'functools._lru_cache_wrapper' object has no attribute 'cache' though the cache attribute is used in python and c version alike. Thank you!
The internals of the cache are encapsulated for thread safety and to allow the underlying implementation details to change. There are three public attributes. The first gives some statistics, cache_info: >>> cached_fib.cache_info() CacheInfo(hits=0, misses=0, maxsize=None, currsize=0) >>> cached_fib(123) 22698374052006863956975682 >>> cached_fib.cache_info() CacheInfo(hits=120, misses=123, maxsize=None, currsize=123) The second, cache_parameters, gives details about the init mode: >>> cached_fib.cache_parameters() {'maxsize': None, 'typed': False} And the third, cache_clear, purges the cache: >>> cached_fib.cache_info() CacheInfo(hits=120, misses=123, maxsize=None, currsize=123) >>> cached_fib.cache_clear() >>> cached_fib.cache_info() CacheInfo(hits=0, misses=0, maxsize=None, currsize=0) Additionally, the "uncached" function version is publicly available via __wrapped__, but this is not something specific to functools.cache: >>> cached_fib <functools._lru_cache_wrapper object at 0x10b7a3270> >>> cached_fib.__wrapped__ <function cached_fib at 0x10bb781f0> If you want a cache implementation where the underlying dict is exposed to the user, I recommend the third-party cachetools library.
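A minimal sketch of that cachetools suggestion (added here for illustration; it assumes cachetools is installed via pip install cachetools), where the backing mapping is a plain, fully inspectable dict:

import cachetools

cache = {}  # any mutable mapping works and stays visible to the caller

@cachetools.cached(cache)
def fib(n):
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

fib(10)
print(cache)  # keys are derived from the call arguments, values are the results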
3
3
74,678,931
2022-12-4
https://stackoverflow.com/questions/74678931/how-to-improve-julias-performance-using-just-in-time-compilation-jit
I have been playing with JAX (automatic differentiation library in Python) and Zygote (the automatic differentiation library in Julia) to implement Gauss-Newton minimisation method. I came upon the @jit macro in Jax that runs my Python code in around 0.6 seconds compared to ~60 seconds for the version that does not use @jit. Julia ran the code in around 40 seconds. Is there an equivalent of @jit in Julia or Zygote that results is a better performance? Here are the codes I used: Python from jax import grad, jit, jacfwd import jax.numpy as jnp import numpy as np import time def gaussian(x, params): amp = params[0] mu = params[1] sigma = params[2] amplitude = amp/(jnp.abs(sigma)*jnp.sqrt(2*np.pi)) arg = ((x-mu)/sigma) return amplitude*jnp.exp(-0.5*(arg**2)) def myjacobian(x, params): return jacfwd(gaussian, argnums = 1)(x, params) def op(jac): return jnp.matmul( jnp.linalg.inv(jnp.matmul(jnp.transpose(jac),jac)), jnp.transpose(jac)) def res(x, data, params): return data - gaussian(x, params) @jit def step(x, data, params): residuals = res(x, data, params) jacobian_operation = op(myjacobian(x, params)) temp = jnp.matmul(jacobian_operation, residuals) return params + temp N = 2000 x = np.linspace(start = -100, stop = 100, num= N) data = gaussian(x, [5.65, 25.5, 37.23]) ini = jnp.array([0.9, 5., 5.0]) t1 = time.time() for i in range(5000): ini = step(x, data, ini) t2 = time.time() print('t2-t1: ', t2-t1) ini Julia using Zygote function gaussian(x::Union{Vector{Float64}, Float64}, params::Vector{Float64}) amp = params[1] mu = params[2] sigma = params[3] amplitude = amp/(abs(sigma)*sqrt(2*pi)) arg = ((x.-mu)./sigma) return amplitude.*exp.(-0.5.*(arg.^2)) end function myjacobian(x::Vector{Float64}, params::Vector{Float64}) output = zeros(length(x), length(params)) for (index, ele) in enumerate(x) output[index,:] = collect(gradient((params)->gaussian(ele, params), params))[1] end return output end function op(jac::Matrix{Float64}) return inv(jac'*jac)*jac' end function res(x::Vector{Float64}, data::Vector{Float64}, params::Vector{Float64}) return data - gaussian(x, params) end function step(x::Vector{Float64}, data::Vector{Float64}, params::Vector{Float64}) residuals = res(x, data, params) jacobian_operation = op(myjacobian(x, params)) temp = jacobian_operation*residuals return params + temp end N = 2000 x = collect(range(start = -100, stop = 100, length= N)) params = vec([5.65, 25.5, 37.23]) data = gaussian(x, params) ini = vec([0.9, 5., 5.0]) @time for i in range(start = 1, step = 1, length = 5000) ini = step(x, data, ini) end ini
Your Julia code is doing a number of things that aren't idiomatic and are worsening your performance. This won't be a full overview, but it should give you a good idea of where to start. The first thing is that passing params as a Vector is a bad idea. This means it will have to be heap allocated, and the compiler doesn't know how long it is. Instead, use a Tuple, which allows for a lot more optimization. Secondly, don't make gaussian act on a Vector of xs. Instead, write the scalar version and broadcast it. Specifically, with these changes, you will have

function gaussian(x::Number, params::NTuple{3, Float64})
    amp, mu, sigma = params
    # The next 2 lines should probably be done outside this function, but I'll leave them here for now.
    amplitude = amp/(abs(sigma)*sqrt(2*pi))
    arg = ((x-mu)/sigma)
    return amplitude*exp(-0.5*(arg^2))
end
4
6
74,660,176
2022-12-2
https://stackoverflow.com/questions/74660176/using-visualstudio-python-how-to-handle-overriding-stdlib-module-pylancer
When running ipynbs in VS Code, I've started noticing Pylance warnings on standard library imports. I am using a conda virtual environment, and I believe the warning is related to that. An example using the glob library reads:

"env\Lib\glob.py" is overriding the stdlib "glob" module Pylance(reportShadowedImports)

So far my notebooks run as expected, but I am curious whether this warning is indicative of a poor layout or is just stating the obvious, more of an "FYI: you are not using the base install of Python". I have turned off linting and the problem still persists. And almost nothing comes up when I search for the error "reportShadowedImports".
The reason you find nothing by searching is that this check was only implemented recently (see GitHub). I ran into the same problem as you because code.py from MicroPython/CircuitPython also overrides the module "code" in the stdlib. The solution is simple, though you then lose out on this specific check: just set reportShadowedImports to "none" in your pyright config. For VS Code, that means adding it to .vscode/settings.json:

{
    "python.languageServer": "Pylance",
    [...]
    "python.analysis.diagnosticSeverityOverrides": {
        "reportShadowedImports": "none"
    },
    [...]
}
20
36
74,673,076
2022-12-4
https://stackoverflow.com/questions/74673076/can-you-explain-me-the-output
I was in class section of python programming and I am confused here. I have learned that super is used to call the method of parent class but here Employee is not a parent of Programmer yet it's called (showing the result of getLanguage method). What I am missing? This is the code, class Employee: company= "Google" language = "java" def showDetails(self): print("This is an employee"); def getLanguage(self): print(f"1. The language is {self.language}"); class Programmer: language= "Python" company = "Youtubeeee" def getLanguage(self): super().getLanguage(); print(f"2. The language is {self.language}") def showDetails(self): print("This is an programmer") class Programmer2(Programmer , Employee): language= "C++" def getLanguage(self): super().getLanguage(); print(f"3. The language is {self.language}") p2 = Programmer2(); p2.getLanguage(); This is the output, 1. The language is C++ 2. The language is C++ 3. The language is C++
You've bumped into one of the reasons why super exists. From the docs, super delegates method calls to a parent or sibling class of type. Python bases class inheritance on a dynamic Method Resolution Order (MRO). When you created a class with multiple inheritance, those two parent classes became siblings. The leftmost is first in the MRO and the right one is next. This isn't a property of the Programmer class; it's a property of the Programmer2 class, which decided to do multiple inheritance. If you use Programmer differently, as in

p3 = Programmer()
p3.getLanguage()

you get the error AttributeError: 'super' object has no attribute 'getLanguage', because its MRO only goes up to the base object, which doesn't have the method. You can view the MRO of a class with its __mro__ attribute.

Programmer.__mro__:
(<class '__main__.Programmer'>, <class 'object'>)

Programmer2.__mro__:
(<class '__main__.Programmer2'>, <class '__main__.Programmer'>, <class '__main__.Employee'>, <class 'object'>)
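To make the sibling delegation concrete, here is a minimal, self-contained sketch (hypothetical class names) showing that super() inside one parent can land on the other parent once a child inherits from both:

class A:
    def hello(self):
        print("A.hello")
        # In C's MRO the class after A is B, so this call reaches B.hello, not object
        super().hello()

class B:
    def hello(self):
        print("B.hello")

class C(A, B):
    pass

C().hello()       # prints "A.hello" then "B.hello"
print(C.__mro__)  # C, A, B, object in that order
# A().hello() on its own would raise AttributeError, just like Programmer in the question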
4
3
74,667,621
2022-12-3
https://stackoverflow.com/questions/74667621/get-full-article-in-google-sheet-using-openai
I'm trying to get full article in Google Sheet using Openai API. In column A I just mention the topic and want to get full article in column B. Here is what I'm trying /** * Use GPT-3 to generate an article * * @param {string} topic - the topic for the article * @return {string} the generated article * @customfunction */ function getArticle(topic) { // specify the API endpoint and API key const api_endpoint = 'https://api.openai.com/v1/completions'; const api_key = 'YOUR_API_KEY'; // specify the API parameters const api_params = { prompt: topic, max_tokens: 1024, temperature: 0.7, model: 'text-davinci-003', }; // make the API request using UrlFetchApp const response = UrlFetchApp.fetch(api_endpoint, { method: 'post', headers: { Authorization: 'Bearer ' + api_key, 'Content-Type': 'application/json', }, payload: JSON.stringify(api_params), }); // retrieve the article from the API response const json = JSON.parse(response.getContentText()); if (json.data && json.data.length > 0) { const article = json.data[0].text; return article; } else { return 'No article found for the given topic.'; } } How can I get the article?
Modification points: When I saw the official document of OpenAI API, in your endpoint of https://api.openai.com/v1/completions, it seems that the following value is returned. Ref { "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7", "object": "text_completion", "created": 1589478378, "model": "text-davinci-003", "choices": [ { "text": "\n\nThis is indeed a test", "index": 0, "logprobs": null, "finish_reason": "length" } ], "usage": { "prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12 } } In the case of json.data, it seems that the endpoint of https://api.openai.com/v1/models might be required to be used. Ref And, there is no property of json.data[0].text. I thought that this might be the reason for your current issue. If you want to retrieve the values of text from the endpoint of https://api.openai.com/v1/completions, how about the following modification? From: if (json.data && json.data.length > 0) { const article = json.data[0].text; return article; } else { return 'No article found for the given topic.'; } To: if (json.choices && json.choices.length > 0) { const article = json.choices[0].text; return article; } else { return 'No article found for the given topic.'; } Note: If the value of response.getContentText() is not your expected values, this modification might not be able to be used. Please be careful about this. Reference: Completions of OpenAI API
3
3
74,663,224
2022-12-3
https://stackoverflow.com/questions/74663224/passing-a-functions-output-as-a-parameter-of-another-function
I'm having a hard time figuring out how to pass a function's return as a parameter to another function. I've searched a lot of threads that are deviations of this problem but I can't think of a solution from them. My code isn't good yet, but I just need help on the line where the error is occurring to start with. Instructions: create a function that asks the user to enter their birthday and returns a date object. Validate user input as well. This function must NOT take any parameters. create another function that takes the date object as a parameter. Calculate the age of the user using their birth year and the current year. def func1(): bd = input("When is your birthday? ") try: dt.datetime.strptime(bd, "%m/%d/%Y") except ValueError as e: print("There is a ValueError. Please format as MM/DD/YYY") except Exception as e: print(e) return bd def func2(bd): today = dt.datetime.today() age = today.year - bd.year return age This is the Error I get: TypeError: func2() missing 1 required positional argument: 'bday' So far, I've tried: assigning the func1 to a variable and passing the variable as func2 parameter calling func1 inside func2 defining func1 inside func2
You're almost there; a few subtleties to consider: The datetime object must be assigned to a variable and returned. Your code was not assigning the datetime object, but returning a str object for input into func2, which would have thrown an AttributeError as a str has no year attribute. Simply subtracting the years will not always give the age. What if the individual's birthday has not yet occurred this year? In this case, 1 must be subtracted. (Notice the code update below.) For example:

from datetime import datetime as dt

def func1():
    bday = input("When is your birthday? Enter as MM/DD/YYYY: ")
    try:
        # Assign the datetime object.
        dte = dt.strptime(bday, "%m/%d/%Y")
    except ValueError as e:
        print("There is a ValueError. Please format as MM/DD/YYYY")
    except Exception as e:
        print(e)
    return dte  # <-- Return the datetime, not a string.

def func2(bdate):
    today = dt.today()
    # Account for the birthday not having arrived yet this year.
    age = today.year - bdate.year - ((today.month, today.day) < (bdate.month, bdate.day))
    return age

Can be called using:

func2(bdate=func1())
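As a quick, self-contained illustration of why that boolean correction matters (the dates are chosen arbitrarily), you can factor the age calculation into a pure function and test it with fixed dates:

from datetime import date

def age_on(bdate, today):
    # The comparison evaluates to 1 when the birthday has not yet occurred in "today"'s year
    return today.year - bdate.year - ((today.month, today.day) < (bdate.month, bdate.day))

today = date(2022, 12, 2)
print(age_on(date(2000, 1, 15), today))   # 22 -> birthday already passed this year
print(age_on(date(2000, 12, 25), today))  # 21 -> birthday still to come, so 1 is subtracted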
3
1
74,660,595
2022-12-2
https://stackoverflow.com/questions/74660595/further-optimizing-the-ising-model
I've implemented the 2D ISING model in Python, using NumPy and Numba's JIT: from timeit import default_timer as timer import matplotlib.pyplot as plt import numba as nb import numpy as np # TODO for Dict optimization. # from numba import types # from numba.typed import Dict @nb.njit(nogil=True) def initialstate(N): ''' Generates a random spin configuration for initial condition ''' state = np.empty((N,N),dtype=np.int8) for i in range(N): for j in range(N): state[i,j] = 2*np.random.randint(2)-1 return state @nb.njit(nogil=True) def mcmove(lattice, beta, N): ''' Monte Carlo move using Metropolis algorithm ''' # # TODO* Dict optimization # dict_param = Dict.empty( # key_type=types.int64, # value_type=types.float64, # ) # dict_param = {cost : np.exp(-cost*beta) for cost in [-8, -4, 0, 4, 8] } for _ in range(N): for __ in range(N): a = np.random.randint(0, N) b = np.random.randint(0, N) s = lattice[a, b] dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N] cost = 2*s*dE if cost < 0: s *= -1 #TODO* elif np.random.rand() < dict_param[cost]: elif np.random.rand() < np.exp(-cost*beta): s *= -1 lattice[a, b] = s return lattice @nb.njit(nogil=True) def calcEnergy(lattice, N): ''' Energy of a given configuration ''' energy = 0 for i in range(len(lattice)): for j in range(len(lattice)): S = lattice[i,j] nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N] energy += -nb*S return energy/2 @nb.njit(nogil=True) def calcMag(lattice): ''' Magnetization of a given configuration ''' mag = np.sum(lattice, dtype=np.int32) return mag @nb.njit(nogil=True) def ISING_model(nT, N, burnin, mcSteps): """ nT : Number of temperature points. N : Size of the lattice, N x N. burnin : Number of MC sweeps for equilibration (Burn-in). mcSteps : Number of MC sweeps for calculation. 
""" T = np.linspace(1.2, 3.8, nT); E,M,C,X = np.zeros(nT), np.zeros(nT), np.zeros(nT), np.zeros(nT) n1, n2 = 1.0/(mcSteps*N*N), 1.0/(mcSteps*mcSteps*N*N) for temperature in range(nT): lattice = initialstate(N) # initialise E1 = M1 = E2 = M2 = 0 iT = 1/T[temperature] iT2= iT*iT for _ in range(burnin): # equilibrate mcmove(lattice, iT, N) # Monte Carlo moves for _ in range(mcSteps): mcmove(lattice, iT, N) Ene = calcEnergy(lattice, N) # calculate the Energy Mag = calcMag(lattice,) # calculate the Magnetisation E1 += Ene M1 += Mag M2 += Mag*Mag E2 += Ene*Ene E[temperature] = n1*E1 M[temperature] = n1*M1 C[temperature] = (n1*E2 - n2*E1*E1)*iT2 X[temperature] = (n1*M2 - n2*M1*M1)*iT return T,E,M,C,X def main(): N = 32 start_time = timer() T,E,M,C,X = ISING_model(nT = 64, N = N, burnin = 8 * 10**4, mcSteps = 8 * 10**4) end_time = timer() print("Elapsed time: %g seconds" % (end_time - start_time)) f = plt.figure(figsize=(18, 10)); # # figure title f.suptitle(f"Ising Model: 2D Lattice\nSize: {N}x{N}", fontsize=20) _ = f.add_subplot(2, 2, 1 ) plt.plot(T, E, '-o', color='Blue') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Energy ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 2 ) plt.plot(T, abs(M), '-o', color='Red') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Magnetization ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 3 ) plt.plot(T, C, '-o', color='Green') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Specific Heat ", fontsize=20) plt.axis('tight') _ = f.add_subplot(2, 2, 4 ) plt.plot(T, X, '-o', color='Black') plt.xlabel("Temperature (T)", fontsize=20) plt.ylabel("Susceptibility", fontsize=20) plt.axis('tight') plt.show() if __name__ == '__main__': main() Which of course, works: I have two main questions: Is there anything left to optimize? I knew ISING model is hard to simulate, but looking at the following table, it seems like I'm missing something... lattice size : 32x32 burnin = 8 * 10**4 mcSteps = 8 * 10**4 Simulation time = 365.98 seconds lattice size : 64x64 burnin = 10**5 mcSteps = 10**5 Simulation time = 1869.58 seconds I tried implementing another optimization based on not calculating the exponential over and over again using a dictionary, yet on my tests, it seems like its slower. What am I doing wrong?
The computation of the exponential is not really an issue. The main issue is that generating random numbers is expensive and a huge number of random values are generated. Another issue is that the current computation is intrinsically sequential. Indeed, for N=32, mcmove tends to generate about 3000 random values, and this function is called 2 * 80_000 times per iteration. This means, 2 * 80_000 * 3000 = 480_000_000 random number generated per iteration. Assuming generating a random number takes about 5 nanoseconds (ie. only 20 cycles on a 4 GHz CPU), then each iteration will take about 2.5 seconds only to generate all the random numbers. On my 4.5 GHz i5-9600KF CPU, each iteration takes about 2.5-3.0 seconds. The first thing to do is to try to generate random number using a faster method. The bad news is that this is hard to do in Numba and more generally any-Python-based code. Micro-optimizations using a lower-level language like C or C++ can significantly help to speed up this computation. Such low-level micro-optimizations are not possible in high-level languages/tools like Python, including Numba. Still, one can implement a random-number generator (RNG) specifically designed so to produce the random values you need. xoshiro256** can be used to generate random numbers quickly though it may not be as random as what Numpy/Numba can produce (there is no free lunch). The idea is to generate 64-bit integers and extract range of bits so to produce 2 16-bit integers and a 32-bit floating point value. This RNG should be able to generate 3 values in only about 10 cycles on a modern CPU! Once this optimization has been applied, the computation of the exponential becomes the new bottleneck. It can be improved using a lookup table (LUT) like you did. However, using a dictionary is slow. You can use a basic array for that. This is much faster. Note the index need to be positive and small. Thus, the minimum cost needs to be added. Once the previous optimization has been implemented, the new bottleneck is the conditionals if cost < 0 and elif c < .... The conditionals are slow because they are unpredictable (due to the result being random). Indeed, modern CPUs try to predict the outcomes of conditionals so to avoid expensive stalls in the CPU pipeline. This is a complex topic. If you want to know more about this, then please read this great post. In practice, such a problem can be avoided using a branchless computation. This means you need to use binary operators and integer sticks so for the sign of s to change regarding the value of the condition. For example: s *= 1 - ((cost < 0) | (c < lut[cost])) * 2. Note that modulus are generally expensive unless the compiler know the value at compile time. They are even faster when the value is a power of two because the compiler can use bit tricks so to compute the modulus (more specifically a logical and by a pre-compiled constant). For calcEnergy, a solution is to compute the border separately so to completely avoid the modulus. Furthermore, loops can be faster when the compiler know the number of iterations at compile time (it can unroll the loops and better vectorize them). Moreover, when N is not a power of two, the RNG can be significantly slower and more complex to implement without any bias, so I assume N is a power of two. Here is the final code: # [...] 
Same as in the initial code @nb.njit(inline="always") def rol64(x, k): return (x << k) | (x >> (64 - k)) @nb.njit(inline="always") def xoshiro256ss_init(): state = np.empty(4, dtype=np.uint64) maxi = (np.uint64(1) << np.uint64(63)) - np.uint64(1) for i in range(4): state[i] = np.random.randint(0, maxi) return state @nb.njit(inline="always") def xoshiro256ss(state): result = rol64(state[1] * np.uint64(5), np.uint64(7)) * np.uint64(9) t = state[1] << np.uint64(17) state[2] ^= state[0] state[3] ^= state[1] state[1] ^= state[2] state[0] ^= state[3] state[2] ^= t state[3] = rol64(state[3], np.uint64(45)) return result @nb.njit(inline="always") def xoshiro_gen_values(N, state): ''' Produce 2 integers between 0 and N and a simple-precision floating-point number. N must be a power of two less than 65536. Otherwise results will be biased (ie. not random). N should be known at compile time so for this to be fast ''' rand_bits = xoshiro256ss(state) a = (rand_bits >> np.uint64(32)) % N b = (rand_bits >> np.uint64(48)) % N c = np.uint32(rand_bits) * np.float32(2.3283064370807974e-10) return (a, b, c) @nb.njit(nogil=True) def mcmove_generic(lattice, beta, N): ''' Monte Carlo move using Metropolis algorithm. N must be a small power of two and known at compile time ''' state = xoshiro256ss_init() lut = np.full(16, np.nan) for cost in (0, 4, 8, 12, 16): lut[cost] = np.exp(-cost*beta) for _ in range(N): for __ in range(N): a, b, c = xoshiro_gen_values(N, state) s = lattice[a, b] dE = lattice[(a+1)%N,b] + lattice[a,(b+1)%N] + lattice[(a-1)%N,b] + lattice[a,(b-1)%N] cost = 2*s*dE # Branchless computation of s tmp = (cost < 0) | (c < lut[cost]) s *= 1 - tmp * 2 lattice[a, b] = s return lattice @nb.njit(nogil=True) def mcmove(lattice, beta, N): assert N in [16, 32, 64, 128] if N == 16: return mcmove_generic(lattice, beta, 16) elif N == 32: return mcmove_generic(lattice, beta, 32) elif N == 64: return mcmove_generic(lattice, beta, 64) elif N == 128: return mcmove_generic(lattice, beta, 128) else: raise Exception('Not implemented') @nb.njit(nogil=True) def calcEnergy(lattice, N): ''' Energy of a given configuration ''' energy = 0 # Center for i in range(1, len(lattice)-1): for j in range(1, len(lattice)-1): S = lattice[i,j] nb = lattice[i+1, j] + lattice[i,j+1] + lattice[i-1, j] + lattice[i,j-1] energy -= nb*S # Border for i in (0, len(lattice)-1): for j in range(1, len(lattice)-1): S = lattice[i,j] nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N] energy -= nb*S for i in range(1, len(lattice)-1): for j in (0, len(lattice)-1): S = lattice[i,j] nb = lattice[(i+1)%N, j] + lattice[i,(j+1)%N] + lattice[(i-1)%N, j] + lattice[i,(j-1)%N] energy -= nb*S return energy/2 @nb.njit(nogil=True) def calcMag(lattice): ''' Magnetization of a given configuration ''' mag = np.sum(lattice, dtype=np.int32) return mag # [...] Same as in the initial code I hope there is no error in the code. It is hard to check results with a different RNG. The resulting code is significantly faster on my machine: it compute 4 iterations in 5.3 seconds with N=32 as opposed to 24.1 seconds. The computation is thus 4.5 times faster! It is very hard to optimize the code further using Numba in Python. The computation cannot be efficiently parallelized due to the long dependency chain in mcmove.
3
4
74,668,215
2022-12-3
https://stackoverflow.com/questions/74668215/error-with-tatsu-does-not-recognize-the-right-grammar-pattern
I am getting started with tatsu and I am trying to implement a grammar for the miniML language. Once my grammar successfully parsed, I tried to parse some little expressions to check that it was working ; however I discovered Tatsu was unable to recognize some of the expected patterns. Here is the code : ` grammar=""" @@grammar::CALC start = expression $ ; expression = |integer |addition |soustraction |multiplication |division |Fst |Snd |pair |varname |assign |function |application |parentheses ; integer = /\d+/ ; addition = left:'+' right:pair ; soustraction = '-' pair ; multiplication = '*' pair ; division = '/' pair ; Fst = 'Fst' pair ; Snd = 'Snd' pair ; pair = '(' expression ',' expression ')' ; varname = /[a-z]+/ ; assign = varname '=' expression ';' expression ; function = 'Lambda' varname ':' expression ; application = ' '<{expression}+ ; parentheses = '(' expression ')' ; """ ` then parsed : parser = tatsu.compile(grammar) All of those expression are successfully recognized, except the "assign" and the "application" ones. If i try something like this : parser.parse("x=3;x+1") I get that error message : FailedExpectingEndOfText: (1:2) Expecting end of text : x=3;x+1 ^ start and same goes for an expression of the type "expression expression". What could be the syntax error I made here ? I have no clue and I can't find any in the documentation. Thanks in advance !
It seems the failure of assign comes from a conflict with the varname rule; to solve it, simply place |assign BEFORE |varname in your expression rule. A now obsolete workaround, which I'll leave here anyway:

# I added a negative lookahead for '=' so it will not conflict with the assign rule
varname = /[a-z]+/!'=' ;
assign = /[a-z]+/ '=' expression ';' expression ;

Example:

parser.parse("x=1;+(x,1)")
# ['x', '=', '1', ';', AST({'left': '+', 'right': ['(', 'x', ',', '1', ')']})]

About 'application': replacing ' ' with / / at the start of the rule, and placing |application at the start of the expression rule, solves the problem:

parser.parse("1 2 (x=1;3) *(4,5)")
Out[207]:
(' ', '1', (' ', '2', (' ', ['(', ['x', '=', '1', ';', '3'], ')'], ['*', ['(', '4', ',', '5', ')']])))
3
2
74,666,784
2022-12-3
https://stackoverflow.com/questions/74666784/try-to-understand-class-and-instance-variable-annotations-in-pep-526
It seems to me that I misunderstand the PEP 526 in the context of annotating class and instance variables. Based on the example in the PEP document here the object bar should be an instance variable with default value 7: class Foo: bar: int = 7 def __init__(self, bar): self.bar = bar But testing that with Python it seems different to me. bar is an class and instance variable. Python 3.9.10 (tags/v3.9.10:f2f3f53, Jan 17 2022, 15:14:21) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> class Foo: ... bar: int = 7 ... def __init__(self, bar): ... self.bar = bar ... >>> Foo.bar 7 >>> f = Foo(3) >>> f.bar 3 >>> f.__class__.bar 7 >>> Maybe the PEP is not only clear about it? IMHO the PEP is violated and the goal not reached.
There is no conflict, and the PEP does explain what is going on. First of all: An instance variable is just a name set on an instance (usually via self.[name] = ...), and class variables are just names set on a class (usually by assigning to a name within the class ... statement body. They are not mutually exclusive. What you may be missing is how Python looks up names on instances. When you have an instance foo and you want to get attribute bar, then foo.bar is actually handled by code defined for the class, as it needs to make a few checks for more advanced use-cases (called descriptors), but in the case of regular variables it’ll return the value from an instance variable before falling back to the class variable. This is how default values work; if there is no bar set on the instance then the value from the class is used. In other words: a default value for an instance variable set on the class only comes into play when you don’t set an a value on the instance: >>> class Foo: ... bar: int = 7 ... def __init__(self, bar: int | None): ... if bar is not None: ... self.bar = bar ... >>> f1 = Foo() # default value for bar >>> f2 = Foo(3) # explicit value set >>> Foo.bar # default on class 7 >>> f1.bar # is returned here 7 >>> "bar" in vars(f1) # as there is no instance variable False >>> f2.bar # gets the instance value 3 >>> vars(f2)["bar"] # from the instance namespace 3 The section you link to talks about why there is a need to annotate class variables for type checkers. The example they give has two names, one intended to be a default value for an instance attribute and the other intended to be only set on the class. Without that type annotation the type checker can’t distinguish between those two cases because they look exactly the same otherwise: class Starship: captain = 'Picard' stats = {} The stats dict is meant to be shared between instances, so self.stats = ... would shadow the class variable and break the intended use: stats is intended to be a class variable (keeping track of many different per-game statistics), while captain is an instance variable with a default value set in the class. This difference might not be seen by a type checker: both get initialized in the class, but captain serves only as a convenient default value for the instance variable, while stats is truly a class variable – it is intended to be shared by all instances. Using ClassVar[...] lets a type checker know that creating an instance attribute should be an error, while assigning to self.captain is not: Since both variables happen to be initialized at the class level, it is useful to distinguish them by marking class variables as annotated with types wrapped in ClassVar[...]. In this way a type checker may flag accidental assignments to attributes with the same name on instances. Note that the purpose of the PEP is to define how type annotations work for variables, including class and instance variables, not define how Python variables work at runtime! Type annotations are designed to be machine readable documentation for static analysis tools, and are there to help find programmer errors. Don’t try to read this as a description of how the Python runtime works.
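A minimal sketch of how the ClassVar[...] marker from the PEP looks in practice (the exact types and defaults here are illustrative):

from typing import ClassVar

class Starship:
    stats: ClassVar[dict] = {}   # truly a class variable, shared by all instances
    captain: str = 'Picard'      # instance variable with a class-level default

    def __init__(self, captain: str | None = None) -> None:
        if captain is not None:
            self.captain = captain   # fine: sets an instance attribute
        # self.stats = {}            # a type checker would flag this assignment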
3
6
74,660,993
2022-12-2
https://stackoverflow.com/questions/74660993/python-find-x-and-y-values-of-a-2d-gaussian-given-a-value-for-the-function
I have a 2D gaussian function f(x,y). I know the values x₀ and y₀ at which the peak g₀ of the function occurs. But then I want to find xₑ and yₑ values at which f(xₑ, yₑ) = g₀ / e¹. I know there are multiple solutions to this, but at least one is sufficient. So far I have

def f(x, y, g0,x0,y0,sigma_x,sigma_y,offset):
    return offset + g0* np.exp(-(((x-x0)**(2)/(2*sigma_x**(2))) + ((y-y0)**(2)/(2*sigma_y**(2)))))

All variables taken as parameters are known as they were extracted from a curve fit. I understand that taking the derivative in x and setting f() = 0 and similarly in y, gives a solvable linear system for (x,y), but this seems like overkill to manually implement, there must be some library or tool out there that can do what I am trying to achieve?
There are an infinite number of possibilities (or possibly one trivial solution, or none, in special cases depending on the value of g0). A solution can be computed analytically in constant time using a direct method. No need for approximations or iterative methods to find roots of a given function. It is just pure maths. Gaussian kernels have interesting symmetries. One of them is invariance to rotation when the peak is translated to (0,0). Another one is that the 1D section of a 2D gaussian surface is a gaussian curve. Let's ignore offset for a moment: it does not really change the problem (it is just a Z-axis translation) and only adds a useless extra term to the resolution. The geometric solution to this problem is an ellipse, so the solution (xe, ye) follows the conic expression (xe-x0)² / a² + (ye-y0)² / b² = 1. If sigma_x = sigma_y, then the solution is simpler: this is a circle with the expression (xe-x0)² + (ye-y0)² = r. Note that a, b and r depend on the searched value and on the kernel parameters (e.g. sigma_x). Changing sigma_x and sigma_y is like stretching the space, and the solution is stretched with it. Changing x0 and y0 is like translating the space, and so is the solution. In fact, we could solve the problem for the simpler case where x0=0, y0=0, sigma_x=1 and sigma_y=1, and then apply a translation followed by a linear transformation using a transformation matrix. A basic 4x4 matrix multiplication can do that. Solving the simpler case is much easier since there are fewer parameters to consider. Actually, g0 and offset can also be partially discarded from f since they appear on both sides of the expression, and one just needs to solve the linear equation offset + g0 * h(xe,ye) = g0 / e, so h(x,y) = 1 / e - offset / g0, where h(xe, ye) = exp(-(xe² + ye²)/2). Assuming we forget the translation and linear transformation for a moment, the problem can be solved quite easily:

h(xe, ye) = 1 / e - offset / g0
exp(-(xe² + ye²)/2) = 1 / e - offset / g0
-(xe² + ye²)/2 = ln(1 / e - offset / g0)
xe² + ye² = -2 * ln(1 / e - offset / g0)

That's it! We got our circle expression, where the radius r is -2*ln(1 / e - offset / g0)! Note that ln in the expression is the natural logarithm. Now we could try to find the 4x4 matrix coefficients, or actually try to directly solve the full expression, which turns out not to be so difficult:

offset + g0 * exp(-((x-x0)²/(2*sigma_x²) + (y-y0)²/(2*sigma_y²))) = g0 / e
exp(-((x-x0)²/(2*sigma_x²) + (y-y0)²/(2*sigma_y²))) = 1 / e - offset / g0
-((x-x0)²/(2*sigma_x²) + (y-y0)²/(2*sigma_y²)) = ln(1 / e - offset / g0)
((x-x0)²/sigma_x² + (y-y0)²/sigma_y²)/2 = -ln(1 / e - offset / g0)
(x-x0)²/sigma_x² + (y-y0)²/sigma_y² = -2 * ln(1 / e - offset / g0)

That's it! We got your conic expression, where r = -2 * ln(1 / e - offset / g0) is a constant, and a = sigma_x and b = sigma_y are the unknown parameters in the above expression. It can be normalized using a = sigma_x/sqrt(r) and b = sigma_y/sqrt(r) so that the right-hand side is 1, fitting exactly with the above expression, but this is just a math detail. You can find one point of the ellipse easily since you know the centre of the ellipse (x0, y0) and there is at least 1 point at the intersection of the line y=y0 and the above conic expression.
Let's find it:

(x-x0)²/sigma_x² + (y0-y0)²/sigma_y² = -2 * ln(1 / e - offset / g0)
(x-x0)²/sigma_x² = -2 * ln(1 / e - offset / g0)
(x-x0)² = -2 * ln(1 / e - offset / g0) * sigma_x²
x = sqrt(-2 * ln(1 / e - offset / g0) * sigma_x²) + x0

Note there are two solutions (the other being -sqrt(...) + x0), but you only need one of them. I hope I did not make any mistake in the computation (at least the details should be enough to find it easily) and that the solution is not a complex number in your case. The benefit of this solution is that it is very fast to compute. The final solution is:

(xe, ye) = (sqrt(-2*ln(1/e-offset/g0)*sigma_x²)+x0, y0)
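To make this concrete, here is a small Python sketch (parameter values picked arbitrarily, same names as in the question) that evaluates the closed-form point and checks it by plugging it back into f:

import numpy as np

def f(x, y, g0, x0, y0, sigma_x, sigma_y, offset):
    return offset + g0*np.exp(-(((x-x0)**2/(2*sigma_x**2)) + ((y-y0)**2/(2*sigma_y**2))))

def point_at_g0_over_e(g0, x0, y0, sigma_x, sigma_y, offset):
    r = -2*np.log(1/np.e - offset/g0)   # must be positive for a real solution
    xe = np.sqrt(r*sigma_x**2) + x0
    return xe, y0

g0, x0, y0, sigma_x, sigma_y, offset = 2.0, 1.0, -3.0, 0.7, 1.3, 0.1
xe, ye = point_at_g0_over_e(g0, x0, y0, sigma_x, sigma_y, offset)
print(f(xe, ye, g0, x0, y0, sigma_x, sigma_y, offset), g0/np.e)  # both print the same value, g0/e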
3
2
74,661,959
2022-12-2
https://stackoverflow.com/questions/74661959/why-does-numpy-matrix-multiply-computation-time-increase-by-an-order-of-magnitud
When computing A @ a where A is a random N by N matrix and a is a vector with N random elements using numpy the computation time jumps by an order of magnitude at N=100. Is there any particular reason for this? As a comparison the same operation using torch on the cpu has a more gradual increase Tried it with python3.10 and 3.9 and 3.7 with the same behavior Code used for generating numpy part of the plot: import numpy as np from tqdm.notebook import tqdm import pandas as pd import time import sys def sym(A): return .5 * (A + A.T) results = [] for n in tqdm(range(2, 500)): for trial_idx in range(10): A = sym(np.random.randn(n, n)) a = np.random.randn(n) t = time.time() for i in range(1000): A @ a t = time.time() - t results.append({ 'n': n, 'time': t, 'method': 'numpy', }) results = pd.DataFrame(results) from matplotlib import pyplot as plt fig, ax = plt.subplots(1, 1) ax.semilogy(results.n.unique(), results.groupby('n').time.mean(), label="numpy") ax.set_title(f'A @ a timimgs (1000 times)\nPython {sys.version.split(" ")[0]}') ax.legend() ax.set_xlabel('n') ax.set_ylabel('avg. time') Update Adding import os os.environ["MKL_NUM_THREADS"] = "1" os.environ["NUMEXPR_NUM_THREADS"] = "1" os.environ["OMP_NUM_THREADS"] = "1" before Γ¬mport numpy gives a more expected output, see this answer for details: https://stackoverflow.com/a/74662135/5043576
numpy tries to use threads when multiplying matrices of size 100 or larger, and the default CBLAS implementation of threaded multiplication is ... suboptimal, as opposed to other backends like Intel MKL or ATLAS. If you force it to use only 1 thread using the answers in this post, you will get a continuous line for numpy performance.
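If you would rather not rely on environment variables that must be set before numpy is imported, the third-party threadpoolctl package can, to my knowledge, limit the BLAS thread pool at runtime; a minimal sketch:

import numpy as np
from threadpoolctl import threadpool_limits  # pip install threadpoolctl

A = np.random.randn(100, 100)
a = np.random.randn(100)

# Restrict all BLAS backends to a single thread, just for this block
with threadpool_limits(limits=1, user_api="blas"):
    for _ in range(1000):
        A @ a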
20
17
74,655,787
2022-12-2
https://stackoverflow.com/questions/74655787/match-case-statement-with-multiple-or-conditions-in-each-case
Is there a way to assess whether a case statement variable is inside a particular list? Consider the following scenario. We have three lists: a = [1, 2, 3] b = [4, 5, 6] c = [7, 8, 9] Then I want to check whether x is in each list. Something like this (of course this is a Syntax Error but I hope you get the point): match x: case in a: return "132" case in b: return "564" case in c: return "798" This can be easy with an if-else scenario. Nonetheless, focusing on the match-case, if one has many lists. And big lists, it would be a mundane task to write them like that: match x: case 1 | 2 | 3: return "132" case 4 | 5 | 6: return "564" case 7 | 8 | 9: return "762" Is there an easy way to check for multiple conditions for each case, without having to write them down? I checked for duplicates, but I couldn't find them, I hope I don't miss something. Please be kind and let me know if there is a duplicate question.
It seems cases accept a "guard" clause starting with Python 3.10, which you can use for this purpose:

match x:
    case w if w in a:
        # this was the "case in a" in the question
    case w if w in b:
        # this was the "case in b" in the question
    ...

The w here actually captures the value of x (it is part of the syntax too), but it's more useful in some other fancy cases listed on the linked whatsnew page.
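Putting it together with the lists from the question, one complete runnable sketch could look like this:

a = [1, 2, 3]
b = [4, 5, 6]
c = [7, 8, 9]

def classify(x):
    match x:
        case w if w in a:
            return "132"
        case w if w in b:
            return "564"
        case w if w in c:
            return "798"
        case _:
            return "not found"

print(classify(5))   # 564
print(classify(42))  # not found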
14
22
74,643,203
2022-12-1
https://stackoverflow.com/questions/74643203/how-to-mock-a-function-which-makes-a-mutation-on-an-argument-that-is-necessary-f
I want to be able to mock a function that mutates an argument, and that it's mutation is relevant in order for the code to continue executing correctly. Consider the following code: def mutate_my_dict(mutable_dict): if os.path.exists("a.txt"): mutable_dict["new_key"] = "new_value" return True def function_under_test(): my_dict = {"key": "value"} if mutate_my_dict(my_dict): return my_dict["new_key"] return "No Key" def test_function_under_test(): with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock: mutate_my_dict_mock.return_value = True result = function_under_test() assert result == "new_value" **Please understand i know i can just mock os.path.exists in this case but this is just an example. I intentionally want to mock the function and not the external module. ** I also read the docs here: https://docs.python.org/3/library/unittest.mock-examples.html#coping-with-mutable-arguments But it doesn't seem to fit in my case. This is the test i've written so far, but it obviously doesn't work since the key changes: def test_function_under_test(): with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock: mutate_my_dict_mock.return_value = True result = function_under_test() assert result == "new_value" Thanks in advance for all of your time :)
With the help of Peter I managed to come up with this final test:

def mock_mutate_my_dict(my_dict):
    my_dict["new_key"] = "new_value"
    return True

def test_function_under_test():
    with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock:
        mutate_my_dict_mock.side_effect = mock_mutate_my_dict
        result = function_under_test()
        assert result == "new_value"

How it works is that with a side_effect you can run a function instead of the mocked one. In that function you need to both mutate the arguments and return the value the original function would have returned.
4
1
74,653,913
2022-12-2
https://stackoverflow.com/questions/74653913/group-columns-if-coordinates-are-not-more-distant-than-a-threshold
Sps Gps start end SP1 G1 2 322 SP1 G1 318 1368 SP1 G1 21125 22297 SP2 G2 2 313 SP2 G2 334 1359 SP2 G2 11716 11964 SP2 G2 20709 20885 SP2 G2 21080 22297 SP3 G3 2 313 SP3 G3 328 1368 SP3 G3 21116 22294 SP4 G4 346 1356 SP4 G4 21131 22282 and I would like to add a new columns Threshold_gps for each Sps and Gps that have start and end next to each others but where the distance length (end-start) is below a threshold of 500. Let's take examples: SP1-G1 Sps Gps start end SP1 G1 2 322 SP1 G1 318 1368 SP1 G1 21125 22297 here 318-322=-4 which is < 500 so I group them Sps Gps start end Threshold_gps SP1 G1 2 322 G1 SP1 G1 318 1368 G1 SP1 G1 21125 22297 then, 21125-1368=19757 which is > 500 so I do not group them Sps Gps start end Threshold_gps SP1 G1 2 322 G1 SP1 G1 318 1368 G1 SP1 G1 21125 22297 G2 SP2-G2 Sps Gps start end Threshold_gps SP2 G2 2 313 SP2 G2 334 1359 SP2 G2 11716 11964 SP2 G2 20709 20885 SP2 G2 21080 22297 334-313=21 which is < 500 so I group them Sps Gps start end Threshold_gps SP2 G2 2 313 G1 SP2 G2 334 1359 G1 SP2 G2 11716 11964 SP2 G2 20709 20885 SP2 G2 21080 22297 then, 11716-1359=10357 which is > 500 so I do not group them Sps Gps start end Threshold_gps SP2 G2 2 313 G1 SP2 G2 334 1359 G1 SP2 G2 11716 11964 G2 SP2 G2 20709 20885 SP2 G2 21080 22297 then, 20709-11964=8745 which is > 500 so I do not group them Sps Gps start end Threshold_gps SP2 G2 2 313 G1 SP2 G2 334 1359 G1 SP2 G2 11716 11964 G2 SP2 G2 20709 20885 G3 SP2 G2 21080 22297 then, 21080-20885=195 which is < 500 so I group them Sps Gps start end Threshold_gps SP2 G2 2 313 G1 SP2 G2 334 1359 G1 SP2 G2 11716 11964 G2 SP2 G2 20709 20885 G3 SP2 G2 21080 22297 G3 and so on.. Sps Gps start end Threshold_gps SP1 G1 2 322 G1 SP1 G1 318 1368 G1 SP1 G1 21125 22297 G2 SP2 G2 2 313 G1 SP2 G2 334 1359 G1 SP2 G2 11716 11964 G2 SP2 G2 20709 20885 G3 SP2 G2 21080 22297 G3 SP3 G3 2 313 G1 SP3 G3 328 1368 G1 SP3 G3 21116 22294 G2 SP4 G4 346 1356 G1 SP4 G4 21131 22282 G2 Does someone have an idea please? Here is the dict format of the tab if it can helps: {'Sps': {0: 'SP1', 1: 'SP1', 2: 'SP1', 3: 'SP2', 4: 'SP2', 5: 'SP2', 6: 'SP2', 7: 'SP2', 8: 'SP3', 9: 'SP3', 10: 'SP3', 11: 'SP4', 12: 'SP4'}, 'Gps': {0: 'G1', 1: 'G1', 2: 'G1', 3: 'G2', 4: 'G2', 5: 'G2', 6: 'G2', 7: 'G2', 8: 'G3', 9: 'G3', 10: 'G3', 11: 'G4', 12: 'G4'}, 'start': {0: 2, 1: 318, 2: 21125, 3: 2, 4: 334, 5: 11716, 6: 20709, 7: 21080, 8: 2, 9: 328, 10: 21116, 11: 346, 12: 21131}, 'end': {0: 322, 1: 1368, 2: 22297, 3: 313, 4: 1359, 5: 11964, 6: 20885, 7: 22297, 8: 313, 9: 1368, 10: 22294, 11: 1356, 12: 22282}}
I believe you might want: df['Threshold_gps'] = (df .groupby(['Sps', 'Gps'], group_keys=False) .apply(lambda d: (s:=d['end'].shift().rsub(d['start']) .gt(500)) .cumsum().add(1-s.iloc[0]) .astype(str).radd('G') ) ) for python <3.8: def get_group(g): s = g['end'].shift().rsub(g['start']).gt(500) return s.cumsum().add(1-s.iloc[0]).astype(str).radd('G') df['Threshold_gps'] = (df .groupby(['Sps', 'Gps'], group_keys=False) .apply(get_group) ) Output: Sps Gps start end Threshold_gps 0 SP1 G1 2 322 G1 1 SP1 G1 318 1368 G1 2 SP1 G1 21125 22297 G2 3 SP2 G2 2 313 G1 4 SP2 G2 334 1359 G1 5 SP2 G2 11716 11964 G2 6 SP2 G2 20709 20885 G3 7 SP2 G2 21080 22297 G3 8 SP3 G3 2 313 G1 9 SP3 G3 328 1368 G1 10 SP3 G3 21116 22294 G2 11 SP4 G4 346 1356 G1 12 SP4 G4 21131 22282 G2
3
4
74,655,149
2022-12-2
https://stackoverflow.com/questions/74655149/what-is-the-current-correct-format-for-python-docstrings-according-to-pep-stan
I've been looking all over the web for the current standards for Python docstrings and I've come across different answers for different scenarios. What is the currently most accepted and widespread docstring format that I should use? These are the ones that I've found so far:

Sphinx format (1):
:param type name: description

Sphinx format (2):
:py:param type name: description

NumPy format:
Parameters:
__________
param: description

Other formats:
Args:
    param (type): description
Parameters:
    param (type): description

I just want to document my code in a standard way that is accepted by almost every IDE (including VS Code and PyCharm) and that also conforms to PEP and readthedocs, so I can also enable hover-over with the mouse over the code to see a description of the arguments. I'm looking for current standards that are at least backwards compatible with Python 3.6, since that's the base of the projects I work on.
The baseline standard for Python docstrings is PEP 257 - Docstring Conventions, which covers the general conventions (when to write docstrings, one-line vs. multi-line form) but deliberately does not prescribe a markup for parameters. For documenting parameters, the style shown below (commonly called the Google style, and understood by IDEs such as VS Code and PyCharm, and by Sphinx via the napoleon extension) is a widely used choice:

def function_name(param1: type, param2: type) -> return_type:
    """
    Description of the function and its arguments.

    Parameters:
        param1 (type): Description of the first parameter.
        param2 (type): Description of the second parameter.

    Returns:
        return_type: Description of the return value.
    """

This format is compatible with Python 3.6 and later versions, and also works on older versions of Python. Whichever style you pick, it is recommended to use it consistently throughout the project so that tools and IDEs can rely on it.
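For comparison, the same kind of function documented in the Sphinx (reST) and NumPy styles would look roughly like this (the function itself is just a placeholder):

def add(a: int, b: int) -> int:
    """Add two numbers (Sphinx/reST style).

    :param a: The first operand.
    :param b: The second operand.
    :returns: The sum of ``a`` and ``b``.
    """
    return a + b

def add_numpy_style(a: int, b: int) -> int:
    """Add two numbers (NumPy style).

    Parameters
    ----------
    a : int
        The first operand.
    b : int
        The second operand.

    Returns
    -------
    int
        The sum of ``a`` and ``b``.
    """
    return a + b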
7
-1
74,642,594
2022-12-1
https://stackoverflow.com/questions/74642594/why-does-stablediffusionpipeline-return-black-images-when-generating-multiple-im
I am using the StableDiffusionPipeline from the Hugging Face Diffusers library in Python 3.10.2, on an M2 Mac (I tagged it because this might be the issue). When I try to generate 1 image from 1 prompt, the output looks fine, but when I try to generate multiple images using the same prompt, the images are all either black squares or a random image (see example below). What could be the issue? My code is as follows (where I change n_imgs from 1 to more than 1 to break it): from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") pipe = pipe.to("mps") # for M1/M2 chips pipe.enable_attention_slicing() prompt = "a photo of an astronaut driving a car on mars" # First-time "warmup" pass (because of weird M1 behaviour) _ = pipe(prompt, num_inference_steps=1) # generate images n_imgs = 1 imgs = pipe([prompt] * n_imgs).images I also tried setting num_images_per_prompt instead of creating a list of repeated prompts in the pipeline call, but this gave the same bad results. Example output (for multiple images): [edit/update]: When I generate the images in a loop surrounding the pipe call instead of passing an iterable to the pipe call, it does work: # generate images n_imgs = 3 for i in range(n_imgs): img = pipe(prompt).images[0] # do something with img But it is still a mystery to me as to why.
Apparently it is indeed an Apple Silicon (M1/M2) issue, and Hugging Face is not yet sure why it is happening; see this GitHub issue for more details.
4
2
74,646,115
2022-12-1
https://stackoverflow.com/questions/74646115/merge-all-excel-files-into-one-file-with-multiple-sheets
i would like some help. I have multiple excel files, each file only has one sheet. I would like to combine all excel files into just one file but with multiple sheets one sheet per excel file keeping the same sheet names. this is what i have so far: import pandas as pd from glob import glob import os excelWriter = pd.ExcelWriter("multiple_sheets.xlsx",engine='xlsxwriter') for file in glob('*.xlsx'): df = pd.read_excel(file) df.to_excel(excelWriter,sheet_name=file,index=False) excelWriter.save() All the excel files looks like this: https://iili.io/HfiJRHl.png sorry i cannot upload images here, dont know why but i pasted the link But all the excel files have the exact same columns and rows and just one sheet, the only difference is the sheet name Thanks in advance
import pandas as pd import os output_excel = r'/home/bera/Desktop/all_excels.xlsx' #List all excel files in folder excel_folder= r'/home/bera/Desktop/GIStest/excelfiles/' excel_files = [os.path.join(root, file) for root, folder, files in os.walk(excel_folder) for file in files if file.endswith(".xlsx")] with pd.ExcelWriter(output_excel) as writer: for excel in excel_files: #For each excel sheet_name = pd.ExcelFile(excel).sheet_names[0] #Find the sheet name df = pd.read_excel(excel) #Create a dataframe df.to_excel(writer, sheet_name=sheet_name, index=False) #Write it to a sheet in the output excel
4
6
74,638,479
2022-12-1
https://stackoverflow.com/questions/74638479/check-unique-value-when-define-concrete-class-for-abstract-variable-in-python
Suppose that I have this architecture for my classes: # abstracts.py import abc class AbstractReader(metaclass=abc.ABCMeta): @classmethod def get_reader_name(cl): return cls._READER_NAME @classmethod @property @abc.abstractmethod def _READER_NAME(cls): raise NotImplementedError # concretes.py from .abstracts import AbstractReader class ReaderConcreteNumber1(AbstractReader): _READER_NAME = "NAME1" class ReaderConcreteNumber2(AbstractReader): _READER_NAME = "NAME2" Also I have a manager classes that find concrete classes by _READER_NAME variable. So I need to define unique _READER_NAME for each of my concrete classes. how do I check that NAME1 and NAME2 are unique when concrete classes are going to define?
You can create a metaclass with a constructor that uses a set to keep track of the name of each instantiating class and raises an exception if a given name already exists in the set: class UniqueName(type): names = set() def __new__(metacls, cls, bases, classdict): name = classdict['_READER_NAME'] if name in metacls.names: raise ValueError(f"Class with name '{name}' already exists.") metacls.names.add(name) return super().__new__(metacls, cls, bases, classdict) And make it the metaclass of your AbstractReader class. since Python does not allow a class to have multiple metaclasses, you would need to make AbstractReader inherit from abc.ABCMeta instead of having it as a metaclass: class AbstractReader(abc.ABCMeta, metaclass=UniqueName): ... # your original code here Or if you want to use ABCMeta as metaclass in your AbstractReader, just override ABCMeta class and set child ABC as metaclass in AbstractReader: class BaseABCMeta(abc.ABCMeta): """ Check unique name for _READER_NAME variable """ _readers_name = set() def __new__(mcls, name, bases, namespace, **kwargs): reader_name = namespace['_READER_NAME'] if reader_name in mcls._readers_name: raise ValueError(f"Class with name '{reader_name}' already exists. ") mcls._readers_name.add(reader_name) return super().__new__(mcls, name, bases, namespace, **kwargs) class AbstractReader(metaclass=BaseABCMeta): # Your codes ... So that: class ReaderConcreteNumber1(AbstractReader): _READER_NAME = "NAME1" class ReaderConcreteNumber2(AbstractReader): _READER_NAME = "NAME1" would produce: ValueError: Class with name 'NAME1' already exists. Demo: https://replit.com/@blhsing/MerryEveryInternet
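As a lighter-weight alternative (a sketch, not necessarily what the answer above intends), the same uniqueness check can be done without a custom metaclass by using __init_subclass__ on the abstract base:

import abc

class AbstractReader(abc.ABC):
    _registered_names = set()

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Only look at names defined directly on the subclass being created
        name = cls.__dict__.get("_READER_NAME")
        if name is not None:
            if name in AbstractReader._registered_names:
                raise ValueError(f"Reader name {name!r} is already used.")
            AbstractReader._registered_names.add(name)

class ReaderConcreteNumber1(AbstractReader):
    _READER_NAME = "NAME1"

class ReaderConcreteNumber2(AbstractReader):
    _READER_NAME = "NAME1"   # raises ValueError at class definition time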
4
1
74,645,811
2022-12-1
https://stackoverflow.com/questions/74645811/how-to-crop-square-inscribed-in-partial-circle
I have frames of a video taken from a microscope. I need to crop them to a square inscribed in the circle, but the issue is that the circle isn't whole (like in the following image). How can I do it? My idea was to use contour finding to get the center of the circle, then find the distance from each point of the whole array of coordinates to the center and take the maximum distance as the radius, and finally find the corners of the square analytically. But there must be a better way to do it (also, I don't really have a formula to find the corners).
Let's start with an illustration of the problem to help with the explanation. Of course, we have to begin with loading the image. Let's also grab its width and height, since they will be useful later on. img = cv2.imread('TUP74.jpg', cv2.IMREAD_COLOR) height, width = img.shape[:2] First, let's convert the image to grayscale and then apply threshold to make the circle all white, and the background black. I arbitrarily picked a threshold value of 31, which seems to give reasonable results. img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _, thresh = cv2.threshold(img_gray, 31, 255, cv2.THRESH_BINARY) The result of those operations looks like this: Now, we can determine the "top" and "bottom" of the circle (first_yd and last_yd), by finding the first and last row that contains at least one white pixel. I chose to use cv2.reduce to find the maximum of each row (since the thresholded image only contains 0's and 255's, a non-zero result means there is at least 1 white pixel), followed by cv2.findNonZero to get the row numbers. reduced = cv2.reduce(thresh, 1, cv2.REDUCE_MAX) row_info = cv2.findNonZero(reduced) first_yd, last_yd = row_info[0][0][1], row_info[-1][0][1] This information allows us to determine the diameter of the circle d, its radius r (r = d/2), as well as the Y coordinate of the center of the circle center_y. diameter = last_yd - first_yd radius = int(diameter / 2) center_y = first_yd + radius Next, we need to determine the X coordinate of the center of the circle center_x. Let's take advantage of the fact that the circle is cropped on the left-hand side. The white pixels in the first column of the threshold image represent a chord c of the circle (red in the diagram). Again, we begin with finding the "top" and "bottom" of the chord (first_yc and last_yc), but since we're working with a single column, we only need cv2.findNonZero. row_info = cv2.findNonZero(thresh[:,0]) first_yc, last_yc = row_info[0][0][1], row_info[-1][0][1] c = last_yc - first_yc Now we have a nice right-angled triangle with one side adjacent to the right angle being half of the chord c (red in the diagram), the other adjacent side being the unknown offset o, and the hypotenuse (green in the diagram) being the radius of the circle r. Let's apply Pythagoras' theorem: r2 = (c/2)2 + o2 o2 = r2 - (c/2)2 o = sqrt(r2 - (c/2)2) And in Python: center_x = int(math.sqrt(radius**2 - (c/2)**2)) Now we're ready to determine the parameters of the inscribed square. Let's keep in mind that the center of the circle and center of its inscribed square are co-located. Here is another illustration: We will again use Pythagoras' theorem. The hypotenuse of the right triangle is again the radius r. Both of the sides adjacent to the right angle are of equal length, which is half the length of the side of inscribed square s. r2 = (s/2)2 + (s/2)2 r2 = 2 Γ— (s/2)2 r2 = 2 Γ— s2/22 r2 = s2/2 s2 = 2 Γ— r2 s = sqrt(2) Γ— r And in Python: s = int(math.sqrt(2) * radius) Finally, we can determine the top-left and bottom-right corners of the inscribed square. Both of those points are offset by s/2 from the common center. half_s = int(s/2) tl = (center_x - half_s, center_y - half_s) br = (center_x + half_s, center_y + half_s) We have determined all the parameters we need. Let's print them out... 
Circle diameter = 1167 pixels Circle radius = 583 pixels Circle center = (404,1089) Inscribed square side = 824 pixels Inscribed square top-left = (-8,677) Inscribed square bottom-right = (816,1501) and visualize the center (green), the detected circle (red) and the inscribed square (blue) on a copy of the input image: Now we can do the cropping, but first we have to make sure we don't go out of bounds of the source image. crop_left = max(tl[0], 0) crop_top = max(tl[1], 0) # Kinda redundant, but why not crop_right = min(br[0], width) crop_bottom = min(br[1], height) # ditto cropped = img[crop_top:crop_bottom, crop_left:crop_right] And that's it. Here's the cropped image (it's rectangular, since small part of the inscribed square falls outside the source image, and scaled down for embedding -- click to get the full-sized image): Complete Script import cv2 import numpy as np import math img = cv2.imread('TUP74.jpg', cv2.IMREAD_COLOR) height, width = img.shape[:2] img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _, thresh = cv2.threshold(img_gray, 31, 255, cv2.THRESH_BINARY) # Find top/bottom of the circle, to determine radius and center reduced = cv2.reduce(thresh, 1, cv2.REDUCE_MAX) row_info = cv2.findNonZero(reduced) first_yd, last_yd = row_info[0][0][1], row_info[-1][0][1] diameter = last_yd - first_yd radius = int(diameter / 2) center_y = first_yd + radius # Repeat again, just on first column, to find length of a chord of the circle row_info = cv2.findNonZero(thresh[:,0]) first_yc, last_yc = row_info[0][0][1], row_info[-1][0][1] c = last_yc - first_yc # Apply Pythagoras theorem to find the X offset of the center from the chord # Since the chord is in row 0, this is also the X coordinate center_x = int(math.sqrt(radius**2 - (c/2)**2)) # Find length of the side of the inscribed square (Pythagoras again) s = int(math.sqrt(2) * radius) # Now find the top-left and bottom-right corners of the square half_s = int(s/2) tl = (center_x - half_s, center_y - half_s) br = (center_x + half_s, center_y + half_s) # Let's print out what we found print("Circle diameter = %d pixels" % diameter) print("Circle radius = %d pixels" % radius) print("Circle center = (%d,%d)" % (center_x, center_y)) print("Inscribed square side = %d pixels" % s) print("Inscribed square top-left = (%d,%d)" % tl) print("Inscribed square bottom-right = (%d,%d)" % br) # And visualize it... vis = img.copy() cv2.line(vis, (center_x-5,center_y), (center_x+5,center_y), (0,255,0), 3) cv2.line(vis, (center_x,center_y-5), (center_x,center_y+5), (0,255,0), 3) cv2.circle(vis, (center_x,center_y), radius, (0,0,255), 3) cv2.rectangle(vis, tl, br, (255,0,0), 3) # Write some illustration images cv2.imwrite('circ_thresh.png', thresh) cv2.imwrite('circ_vis.png', vis) # Time to do some cropping, but we need to make sure the coordinates are inside the bounds of the image crop_left = max(tl[0], 0) crop_top = max(tl[1], 0) # Kinda redundant, but why not crop_right = min(br[0], width) crop_bottom = min(br[1], height) # ditto cropped = img[crop_top:crop_bottom, crop_left:crop_right] cv2.imwrite('circ_cropped.png', cropped) NB: The main focus of this was the explanation of the algorithm. I've been kinda blunt on rounding the values, and there may be some off-by-one errors. For the sake of brevity, error checking is minimal. It's left as an excercise to the reader to address those issues as necessary. Furthermore, the assumption is that the left-hand side of the circle is cropped as in the sample image. 
It should be fairly trivial to extend this to handle other possible scenarios, using the techniques I've demonstrated.
3
4
74,641,489
2022-12-1
https://stackoverflow.com/questions/74641489/how-to-override-a-mock-for-an-individual-test-within-a-class-that-already-has-a
I have a test class that has a mock decorator, and several tests. Each test receives the mock, because mock is defined on the class level. Great. Here's what it looks like: @mock.patch("foo", bar) class TestMyThing(TestCase): def test_A(self): assert something def test_B(self): assert something def test_C(self): assert something def test_D(self): assert something I now want test_D to get a have a different value mocked for foo. I first try: @mock.patch("foo", bar) class TestMyThing(TestCase): def test_A(self): assert something def test_B(self): assert something def test_C(self): assert something @mock.patch("foo", baz) def test_D(self): assert something This doesn't work. Currently to get unittest to take the mock.patch that decorates test_D, I have to remove the mock.patch that decorates the class. This means creating lots of DRY and doing the following: class TestMyThing(TestCase): @mock.patch("foo", bar) def test_A(self): assert something @mock.patch("foo", bar) def test_B(self): assert something @mock.patch("foo", bar) def test_C(self): assert something @mock.patch("foo", baz) def test_D(self): assert something This is non ideal due to DRY boilerplate, which makes it error prone and violates open-closed principle. Is there a better way to achieve the same logic?
Yes! You can leverage the setUp/tearDown methods of the unittest.TestCase and the fact that unittest.mock.patch in its "pure" form (i.e. not as context manager or decorator) returns a "patcher" object that has start/stop methods to control when exactly it should do its magic. You can call on the patcher to start inside setUp and to stop inside tearDown and if you keep a reference to it in an attribute of your test case, you can easily stop it manually in selected test methods. Here is a full working example: from unittest import TestCase from unittest.mock import patch class Foo: @staticmethod def bar() -> int: return 1 class TestMyThing(TestCase): def setUp(self) -> None: self.foo_bar_patcher = patch.object(Foo, "bar", return_value=42) self.mock_foo_bar = self.foo_bar_patcher.start() super().setUp() def tearDown(self) -> None: self.foo_bar_patcher.stop() super().tearDown() def test_a(self): self.assertEqual(42, Foo.bar()) def test_b(self): self.assertEqual(42, Foo.bar()) def test_c(self): self.assertEqual(42, Foo.bar()) def test_d(self): self.foo_bar_patcher.stop() self.assertEqual(1, Foo.bar()) This patching behavior is the same, regardless of the different variations like patch.object (which I used here), patch.multiple etc. Note that for this example it is not necessary to keep a reference to the actual MagicMock instance generated by the patcher in an attribute, like I did with mock_foo_bar. I just usually do that because I often want to check something against the mock instance in my test methods. It is worth mentioning that you can also use setUpClass/tearDownClass for this, but then you need to be careful with re-starting the patch, if you stop it because those methods are (as the name implies) only called once for each test case, whereas setUp/tearDown are called once before/after each test method. PS: The default implementations of setUp/tearDown on TestCase do nothing, but it is still good practice IMO to make a habit of calling the superclass' method, unless you deliberately want to omit that call.
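A small variation on the same idea (a common idiom from the unittest docs, not something you are required to do): register the patcher's stop with addCleanup inside setUp instead of overriding tearDown. The cleanup runs after every test, even one that raises, so the explicit tearDown override becomes unnecessary:

def setUp(self) -> None:
    self.foo_bar_patcher = patch.object(Foo, "bar", return_value=42)
    self.mock_foo_bar = self.foo_bar_patcher.start()
    # addCleanup runs after each test, even when the test fails or raises
    self.addCleanup(self.foo_bar_patcher.stop)
    super().setUp()

One caveat: if a test also stops the patcher manually (like test_d above), stopping it a second time during cleanup may raise a RuntimeError on recent Python versions, so this variant fits best when patchers are only ever stopped via the cleanup.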
5
3
74,641,988
2022-12-1
https://stackoverflow.com/questions/74641988/pandas-keyerror-in-get-loc-when-calling-entries-from-dataframe-in-for-loop
I am using a pandas data-frame and for some reason when trying to access one entry after another in a for loop it does gives me an error. Here is my (simplified) code snippet: df_original = pd.read_csv(csv_dataframe_filename, sep='\t', header=[0, 1], encoding_errors="replace") df_original.columns = ['A', 'B', 'Count_Number', 'D', 'E', 'F', 'use_first', 'H', 'I'] df_use = df_original df_use = df_use.drop(df_use[((df_use['use_first']=='no'))].index) df_use.columns = ['A', 'B', 'Count_Number', 'D', 'E', 'F', 'use_first', 'H', 'I'] c_mag = np.zeros((len(df_use), 1)) x = 0 for i in range(len(df_use)): print(df_use['Count_Number'][x]) #THIS IS THE LINE THAT IS THE ISSUE x += 1 print(c_mag) print(df_use['Count_Number'][x]) The line that is the issue is marked by a comment. If I enter a specific number instead of the variable x, it works (both outside and inside the loop, but inside the loop it of course then prints always the same value each time which is not what I want). It also works with df_original instead of df_use (but for my purpose I really need df_use). The printing in the very last line also works (even with variable x that at that point has a certain value). I also entered the column naming for df_use in the middle later on, so I got the issue with and without it in the same way. I tried whether all other parts of the code work and they do, so both dataframes can be printed correctly etc. Using x instead of i as a variable is also a result of playing around and trying to find a solution, so using i was giving the same result. The column contains floats, if that matters. But for the code as it is I get the following error message ("folder of file" is of course just a replacement for the actual file path): Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 3361, in get_loc return self._engine.get_loc(casted_key) File "pandas\_libs\index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\hashtable_class_helper.pxi", line 2131, in pandas._libs.hashtable.Int64HashTable.get_item File "pandas\_libs\hashtable_class_helper.pxi", line 2140, in pandas._libs.hashtable.Int64HashTable.get_item KeyError: 0 The above exception was the direct cause of the following exception: Traceback (most recent call last): File "[folder of file]", line 74, in <module> print(df_use['Count_Number'][x]) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py", line 942, in __getitem__ return self._get_value(key) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py", line 1051, in _get_value loc = self.index.get_loc(label) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py", line 3363, in get_loc raise KeyError(key) from err KeyError: 0 Process finished with exit code 1 I searched for answers and tried out different things, such as checking the spelling etc. But I can not find a solution and do not understand what I am doing wrong. Does anyone have an idea on how to solve this issue? Thank you in advance for any helpful comment! UPDATE: Found a solution after all. using .iloc[x] instead of just [x] solves the issue. Now I am still curious though why that happens - for other variables it worked even without the .iloc, so why not in this case? 
I feel like an answer would help me to better understand how things are working in python, so thanks for any hints even if I got the code working already. What I already tried: The line that is the issue is marked by a comment. If I enter a specific number instead of the variable x, it works. It also works with df_original instead of df_use (but for my purpose I really need df_use). The printing in the very last line also works (even with variable x that at that point has a certain value). I also entered the column naming for df_use in the middle later on, so I got the issue with and without it in the same way. I tried whether all other parts of the code work and they do, so both data-frames can be printed correctly etc. Using x instead of i as a variable is also a result of playing around and trying to find a solution, so using i was giving the same result. I also played around with different ways of how to run the loop, but that did not help either. I searched for answers and tried out different things, such as checking the spelling etc. What I am expecting: The entries of the data-frame columns can be called and used successfully (in this simplified case: can be printed) in the for loop one entry after another. If the printing itself can be done differently, that does not help me (of course I can just print the whole column, that is working), because my actual purpose is to do further calculations with each value. print() is just for now to simplify the issue and try to find a solution.
This is the answer focusing on the UPDATE section you have provided. The first thing you need to understand between normal indexing of DataFrame and using iloc. iloc basically use position indexing (just like in lists we have positions of elements 0, 1, ... len(list)-1), but the normal indexing, in your case [x] matches the column name (in your case, it is row) with what you have entered rather than checking the position. The traceback tells us that there is no row name 0, that's why it is producing KeyError. In the case of iloc, it uses position indexing, so it will return the very first value of the column Count_Number (for x=0). In your case, if you want to use the for loop to print the values of the column in sequence, using iloc is recommended. As for the last line of your code, it will print the very last value of your column Count_Number, as the very last value of x in for loop is the length of the DataFrame - 1. For example: A sample DataFrame stored in a variable named df: a b c 1 1 2 3 2 4 5 6 Now, if I do: df['a'][0]: I get KeyError: 0 exception, the similar one you are getting. But, if I replace it with iloc, df['a'].iloc[0], the output: 1 Assuming you know how for loops work in python: for i in range(len(df)): #print(df['a'][i] will produce KeyError: 0 #This is because, range(2) [length of df] will give #0, 1 #But we don't have 0 as a row in df print(df['a'].iloc[i]) The code above will produce: 1 4 I was unable to understand completely the rest of your issue, so if you still have them, please do ask but in short and specific manner.
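To make the label-vs-position distinction concrete with the same kind of frame the question builds (a made-up example, not the asker's data):

import pandas as pd

df = pd.DataFrame({"Count_Number": [10.0, 20.0, 30.0],
                   "use_first": ["no", "yes", "yes"]})
df_use = df.drop(df[df["use_first"] == "no"].index)

print(df_use.index.tolist())              # [1, 2] -- label 0 was dropped
print(df_use["Count_Number"].iloc[0])     # 20.0   (position-based, always works)
# print(df_use["Count_Number"][0])        # KeyError: 0 (label-based lookup)

# Alternative: renumber the labels after dropping, then [0], [1], ... work again
df_use = df_use.reset_index(drop=True)
print(df_use["Count_Number"][0])          # 20.0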
3
4
74,635,994
2022-12-1
https://stackoverflow.com/questions/74635994/pytorchs-share-memory-vs-built-in-pythons-shared-memory-why-in-pytorch-we
Trying to learn about the built-in multiprocessing and Pytorch's multiprocessing packages, I have observed a different behavior between both. I find this to be strange since Pytorch's package is fully-compatible with the built-in package. Concretely, I'm refering to the way variables are shared between processes. In Pytorch, tensor's are moved to shared_memory via the inplace operation share_memory_(). On the other hand, we can get the same result with the built-in package by using the shared_memory module. The difference between both that I'm struggling to understand is that, with the built-in version, we have to explicitely access the shared memory-block inside the launched process. However, we don't need to do that with the Pytorch version. Here is a Pytorch's toy example showing this: import time import torch # the same behavior happens when importing: # import multiprocessing as mp import torch.multiprocessing as mp def get_time(s): return round(time.time() - s, 1) def foo(a): # wait ~1sec to print the value of the tensor. time.sleep(1.0) with lock: #------------------------------------------------------------------- # WITHOUT explicitely accessing the shared memory block, we can observe # that the tensor has changed: #------------------------------------------------------------------- print(f"{__name__}\t{get_time(s)}\t\t{a}") # global variables. lock = mp.Lock() s = time.time() if __name__ == '__main__': print("Module\t\tTime\t\tValue") print("-"*50) # create tensor and assign it to shared memory. a = torch.zeros(2).share_memory_() print(f"{__name__}\t{get_time(s)}\t\t{a}") # start child process. p0 = mp.Process(target=foo, args=(a,)) p0.start() # modify the value of the tensor after ~0.5sec. time.sleep(0.5) with lock: a[0] = 1.0 print(f"{__name__}\t{get_time(s)}\t\t{a}") time.sleep(1.5) p0.join() which outputs (as expected): Module Time Value -------------------------------------------------- __main__ 0.0 tensor([0., 0.]) __main__ 0.5 tensor([1., 0.]) __mp_main__ 1.0 tensor([1., 0.]) And here is a toy example with the built-in package: import time import multiprocessing as mp from multiprocessing import shared_memory import numpy as np def get_time(s): return round(time.time() - s, 1) def foo(shm_name, shape, type_): #------------------------------------------------------------------- # WE NEED TO explicitely access the shared memory block to observe # that the array has changed: #------------------------------------------------------------------- existing_shm = shared_memory.SharedMemory(name=shm_name) a = np.ndarray(shape, type_, buffer=existing_shm.buf) # wait ~1sec to print the value. time.sleep(1.0) with lock: print(f"{__name__}\t{get_time(s)}\t\t{a}") # global variables. lock = mp.Lock() s = time.time() if __name__ == '__main__': print("Module\t\tTime\t\tValue") print("-"*35) # create numpy array and shared memory block. a = np.zeros(2,) shm = shared_memory.SharedMemory(create=True, size=a.nbytes) a_shared = np.ndarray(a.shape, a.dtype, buffer=shm.buf) a_shared[:] = a[:] print(f"{__name__}\t{get_time(s)}\t\t{a_shared}") # start child process. p0 = mp.Process(target=foo, args=(shm.name, a.shape, a.dtype)) p0.start() # modify the value of the vaue after ~0.5sec. time.sleep(0.5) with lock: a_shared[0] = 1.0 print(f"{__name__}\t{get_time(s)}\t\t{a_shared}") time.sleep(1.5) p0.join() which equivalently outputs, as expected: Module Time Value ----------------------------------- __main__ 0.0 [0. 0.] __main__ 0.5 [1. 0.] __mp_main__ 1.0 [1. 0.] 
So what I'm struggling to understand is why we don't need to follow the same steps in both versions, built-in and PyTorch's, i.e. how PyTorch is able to avoid the need to explicitly access the shared memory block? P.S. I'm using a Windows OS and Python 3.9
pytorch has a simple wrapper around shared memory, python's shared memory module is only a wrapper around the underlying OS dependent functions. the way it can be done is that you don't serialize the array or the shared memory themselves, and only serialize what's needed to create them by using the __getstate__ and __setstate__ methods from the docs, so that your object acts as both a proxy and a container at the same time. the following bar class can double for a proxy and a container this way, which is useful if the user shouldn't have to worry about the shared memory part. import time import multiprocessing as mp from multiprocessing import shared_memory import numpy as np class bar: def __init__(self): self._size = 10 self._type = np.uint8 self.shm = shared_memory.SharedMemory(create=True, size=self._size) self._mem_name = self.shm.name self.arr = np.ndarray([self._size], self._type, buffer=self.shm.buf) def __getstate__(self): """Return state values to be pickled.""" return (self._mem_name, self._size, self._type) def __setstate__(self, state): """Restore state from the unpickled state values.""" self._mem_name, self._size, self._type = state self.shm = shared_memory.SharedMemory(self._mem_name) self.arr = np.ndarray([self._size], self._type, buffer=self.shm.buf) def get_time(s): return round(time.time() - s, 1) def foo(shm, lock): # ------------------------------------------------------------------- # without explicitely access the shared memory block we observe # that the array has changed: # ------------------------------------------------------------------- a = shm # wait ~1sec to print the value. time.sleep(1.0) with lock: print(f"{__name__}\t{get_time(s)}\t\t{a.arr}") # global variables. s = time.time() if __name__ == '__main__': lock = mp.Lock() # to work on windows/mac. print("Module\t\tTime\t\tValue") print("-" * 35) # create numpy array and shared memory block. a = bar() print(f"{__name__}\t{get_time(s)}\t\t{a.arr}") # start child process. p0 = mp.Process(target=foo, args=(a, lock)) p0.start() # modify the value of the vaue after ~0.5sec. time.sleep(0.5) with lock: a.arr[0] = 1.0 print(f"{__name__}\t{get_time(s)}\t\t{a.arr}") time.sleep(1.5) p0.join() python just makes it much easier to hide such details inside the class without bothering the user with such details. Edit: i wish they'd make locks non-inheritable so your code can raise an error on the lock, instead you'll find out one day that it doesn't actually lock ... After it crashes your application in production.
6
4
74,628,777
2022-11-30
https://stackoverflow.com/questions/74628777/why-does-gpu-memory-increase-when-recreating-and-reassigning-a-jax-numpy-array-t
When I recreate and reassign a JAX np array to the same variable name, for some reason the GPU memory nearly doubles the first recreation and then stays stable for subsequent recreations/reassignments. Why does this happen and is this generally expected behavior for JAX arrays? Fully runnable minimal example: https://colab.research.google.com/drive/1piUvyVylRBKm1xb1WsocsSVXJzvn5bdI?usp=sharing. For posterity in case colab goes down: %env XLA_PYTHON_CLIENT_PREALLOCATE=false import jax from jax import numpy as jnp from jax import random # First creation of jnp array x = jnp.ones(shape=(int(1e8),), dtype=float) get_gpu_memory() # the memory usage from the first call is 618 MB # Second creation of jnp array, reassigning it to the same variable name x = jnp.ones(shape=(int(1e8),), dtype=float) get_gpu_memory() # the memory usage is now 1130 MB - almost double! # Third creation of jnp array, reassigning it to the same variable name x = jnp.ones(shape=(int(1e8),), dtype=float) get_gpu_memory() # the memory usage is stable at 1130 MB. Thank you!
The reason for this behavior comes from the interaction of several things: Without pre-allocation, the GPU memory usage will grow as needed, but will not shrink when buffers are deleted. When you reassign a python variable, the old value still exists in memory until the Python garbage collector notices it is no longer referenced, and deletes it. This will take a small amount of time to occur in the background (you can call import gc; gc.collect() to force this to happen at any point). JAX sends instructions to the GPU asynchronously, meaning that once Python garbage-collects a GPU-backed value, the Python script may continue running for a short time before the corresponding buffer is actually removed from the device. All of this means there's some delay between unassigning the previous x value, and that memory being freed on the device, and if you're immediately allocating a new value, the device will likely expand its memory allocation to fit the new array before the old one is deleted. So why does the memory use stay constant on the third call? Well, by this time the first allocation has been removed, and so there is already space for the third allocation without growing the memory footprint. With these things in mind, you can keep the allocation constant by putting a delay between deleting the old value and creating the new value; i.e. replace this: x = jnp.ones(shape=(int(1e8),), dtype=float) with this: del x time.sleep(1) x = jnp.ones(shape=(int(1e8),), dtype=float) When I run it this way, I see constant memory usage at 618MiB.
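If a hard-coded one-second sleep feels fragile, the same idea can be written with an explicit garbage-collection pass (just a sketch relying on the behaviour described above, reusing x and jnp from the question's snippet; a short sleep is kept because the device-side free is asynchronous):

import gc, time

del x                 # drop the Python reference to the old buffer
gc.collect()          # force collection instead of waiting for it to happen
time.sleep(0.5)       # small grace period for the asynchronous device-side free
x = jnp.ones(shape=(int(1e8),), dtype=float)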
3
3
74,633,504
2022-11-30
https://stackoverflow.com/questions/74633504/how-to-stretch-out-a-bounding-box-given-from-minarearect-function-in-opencv
I wish to run a line detector between two known points on an image, but first I need to widen the area around the line so my line detector has more area to work with. The main issue is stretching the area around the line with respect to the line's slope. For instance: a white line generated from two points with a black bounding box. I tried manually manipulating the box array: input_to_min_area = np.array([[660, 888], [653, 540]]) # this works instead of contour as an input to minAreaRect rect = cv.minAreaRect(input_to_min_area) box = cv.boxPoints(rect) box[[0, 3], 0] += 20 box[[1, 2], 0] -= 20 box = np.int0(box) cv.drawContours(self.images[0], [box], 0, (0, 255, 255), 2) But that doesn't work for every line slope. From vertical down to this angle everything is fine, but it doesn't work for horizontal lines. What would be a simpler solution that works for any line slope?
A minAreaRect() gives you a center point, the size of the rectangle, and an angle. You could just add to the shorter side length of the rectangle. Then you have a description of a "wider rectangle". You can then do with it whatever you want, such as call boxPoints() on it. padding = 42 rect = cv.minAreaRect(input_to_min_area) (center, (w,h), angle) = rect # take it apart if w < h: # we don't know which side is longer, add to shorter side w += padding else: h += padding rect = (center, (w,h), angle) # rebuild A box around your two endpoints, widened:
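Put together as a small end-to-end sketch (point coordinates taken from the question, the padding value is arbitrary, and the drawing call is commented out because it needs your image):

import cv2 as cv
import numpy as np

input_to_min_area = np.array([[660, 888], [653, 540]], dtype=np.int32)  # int32/float32 keeps OpenCV happy
padding = 42

center, (w, h), angle = cv.minAreaRect(input_to_min_area)
if w < h:              # widen whichever side is the short one
    w += padding
else:
    h += padding
rect = (center, (w, h), angle)

box = cv.boxPoints(rect).astype(int)   # 4 corners of the widened rectangle
# cv.drawContours(img, [box], 0, (0, 255, 255), 2)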
4
3
74,633,074
2022-11-30
https://stackoverflow.com/questions/74633074/how-to-type-hint-a-generic-numpy-array
Is there any way to type a Numpy array as generic? I'm currently working with Numpy 1.23.5 and Python 3.10, and I can't type hint the following example. import numpy as np import numpy.typing as npt E = TypeVar("E") # Should be bounded to a numpy type def double_arr(arr: npt.NDArray[E]) -> npt.NDArray[E]: return arr * 2 What I expect arr = np.array([1, 2, 3], dtype=np.int8) double_arr(arr) # npt.NDAarray[np.int8] arr = np.array([1, 2.3, 3], dtype=np.float32) double_arr(arr) # npt.NDAarray[np.float32] But I end up with the following error arr: npt.NDArray[E] ^^^ Could not specialize type "NDArray[ScalarType@NDArray]" Type "E@double_arr" cannot be assigned to type "generic" "object*" is incompatible with "generic" If i bound the E to numpy datatypes (np.int8, np.uint8, ...) the type-checker fails to evaluate the multiplication due to the multiple data-types.
Looking at the source, it seems the generic type variable used to parameterize numpy.dtype of numpy.typing.NDArray is bounded by numpy.generic (and declared covariant). Thus any type argument to NDArray must be a subtype of numpy.generic, whereas your type variable is unbounded. This should work: from typing import TypeVar import numpy as np from numpy.typing import NDArray E = TypeVar("E", bound=np.generic, covariant=True) def double_arr(arr: NDArray[E]) -> NDArray[E]: return arr * 2 But there is another problem, which I believe lies in insufficient numpy stubs. An example of it is showcased in this issue. The overloaded operand (magic) methods like __mul__ somehow mangle the types. I just gave the code a cursory look right now, so I don't know what is missing. But mypy will still complain about the last line in that code: error: Returning Any from function declared to return "ndarray[Any, dtype[E]]" [no-any-return] error: Unsupported operand types for * ("ndarray[Any, dtype[E]]" and "int") [operator] The workaround right now is to use the functions instead of the operands (via the dunder methods). In this case using numpy.multiply instead of * solves the issue: from typing import TypeVar import numpy as np from numpy.typing import NDArray E = TypeVar("E", bound=np.generic, covariant=True) def double_arr(arr: NDArray[E]) -> NDArray[E]: return np.multiply(arr, 2) a = np.array([1, 2, 3], dtype=np.int8) reveal_type(double_arr(a)) No more mypy complaints and the type is revealed as follows: numpy.ndarray[Any, numpy.dtype[numpy.signedinteger[numpy._typing._8Bit]]] It's worth keeping an eye on that operand issue and maybe even report the specific error of Unsupported operand types for * separately. I haven't found that in the issue tracker yet. PS: Alternatively, you could use the * operator and add a specific type: ignore. That way you'll notice, if/once the annotation error is eventually fixed by numpy because mypy complains about unused ignore-directives in strict mode. def double_arr(arr: NDArray[E]) -> NDArray[E]: return arr * 2 # type: ignore[operator,no-any-return]
4
8
74,628,389
2022-11-30
https://stackoverflow.com/questions/74628389/cant-install-tensorflow-text-2-11-0
I got a warning something like warnings.warn( No local packages or working download links found for tensorflow-text~=2.11.0 error: Could not find suitable distribution for Requirement.parse('tensorflow-text~=2.11.0') and if I run pip install 'tensorflow-text~=2.11.0' I got : ERROR: Could not find a version that satisfies the requirement tensorflow-text~=2.11.0 (from versions: 2.8.1, 2.8.2, 2.9.0rc0, 2.9.0rc1, 2.9.0, 2.10.0b2, 2.10.0rc0, 2.10.0) ERROR: No matching distribution found for tensorflow-text~=2.11.0 tensorflow-text 2.11.0 available on pypi and if I run pip install tensorflow-text it installs tensorflow-text 2.10.0 and downgrade the whole tensorflow to 2.10.0 Version Info: OS: Windows 10 Environment: Conda (miniconda3) Python: 3.10.8 Tensorflow: 2.11 I've tried pip and conda-forge
As per their note, they have dropped building for Windows with v2.11.0. So, you'll need to build from source or seek a third-party build.
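If building from source is not an option, two hedged workarounds (not official recommendations): stay on the 2.10.x line, for which Windows wheels still exist, or install under WSL2/Linux where the 2.11.0 wheel is published.

# Option 1: pin the last release line that still ships Windows wheels
pip install "tensorflow==2.10.*" "tensorflow-text==2.10.*"

# Option 2: inside WSL2 or a Linux container the 2.11.0 wheel installs normally
pip install "tensorflow-text~=2.11.0"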
3
7
74,623,917
2022-11-30
https://stackoverflow.com/questions/74623917/how-to-tokenize-block-of-text-as-one-token-in-python
Recently I am working on a genome data set which consists of many blocks of genomes. On previous works on natural language processing, I have used sent_tokenize and word_tokenize from nltk to tokenize the sentences and words. But when I use these functions on genome data set, it is not able to tokenize the genomes correctly. The text below shows some part of the genome data set. >NR_004049 1 tattattatacacaatcccggggcgttctatatagttatgtataatgtat atttatattatttatgcctctaactggaacgtaccttgagcatatatgct gtgacccgaaagatggtgaactatacttgatcaggttgaagtcaggggaa accctgatggaagaccgaaacagttctgacgtgcaaatcgattgtcagaa ttgagtataggggcgaaagaccaatcgaaccatctagtagctggttcctt ccgaagtttccctcaggatagctggtgcattttaatattatataaaataa tcttatctggtaaagcgaatgattagaggccttagggtcgaaacgatctt aacctattctcaaactttaaatgggtaagaaccttaactttcttgatatg aagttcaaggttatgatataatgtgcccagtgggccacttttggtaagca gaactggcgctgtgggatgaaccaaacgtaatgttacggtgcccaaataa caact >NR_004048 1 aatgttttatataaattgcagtatgtgtcacccaaaatagcaaaccccat aaccaaccagattattatgatacataatgcttatatgaaactaagacatt tcgcaacatttattttaggtatataaatacatttattgaaggaattgata tatgccagtaaaatggtgtatttttaatttctttcaataaaaacataatt gacattatataaaaatgaattataaaactctaagcggtggatcactcggc tcatgggtcgatgaagaacgcagcaaactgtgcgtcatcgtgtgaactgc aggacacatgaacatcgacattttgaacgcatatcgcagtccatgctgtt atgtactttaattaattttatagtgctgcttggactacatatggttgagg gttgtaagactatgctaattaagttgcttataaatttttataagcatatg gtatattattggataaatataataatttttattcataatattaaaaaata aatgaaaaacattatctcacatttgaatgt >NR_004047 1 atattcaggttcatcgggcttaacctctaagcagtttcacgtactgttta actctctattcagagttcttttcaactttccctcacggtacttgtttact atcggtctcatggttatatttagtgtttagatggagtttaccacccactt agtgctgcactatcaagcaacactgactctttggaaacatcatctagtaa tcattaacgttatacgggcctggcaccctctatgggtaaatggcctcatt taagaaggacttaaatcgctaatttctcatactagaatattgacgctcca tacactgcatctcacatttgccatatagacaaagtgacttagtgctgaac tgtcttctttacggtcgccgctactaagaaaatccttggtagttactttt cctcccctaattaatatgcttaaattcagggggtagtcccatatgagttg >NR_004052 1 When the tokenizer of ntlk is applied on this dataset, each line of text (for example tattattatacacaatcccggggcgttctatatagttatgtataatgtat ) becomes one token which is not correct. and a block of sequences should be considered as one token. For example in this case contents between >NR_004049 1 and >NR_004048 1 should be consider as one token: >NR_004049 1 tattattatacacaatcccggggcgttctatatagttatgtataatgtat atttatattatttatgcctctaactggaacgtaccttgagcatatatgct gtgacccgaaagatggtgaactatacttgatcaggttgaagtcaggggaa accctgatggaagaccgaaacagttctgacgtgcaaatcgattgtcagaa ttgagtataggggcgaaagaccaatcgaaccatctagtagctggttcctt ccgaagtttccctcaggatagctggtgcattttaatattatataaaataa tcttatctggtaaagcgaatgattagaggccttagggtcgaaacgatctt aacctattctcaaactttaaatgggtaagaaccttaactttcttgatatg aagttcaaggttatgatataatgtgcccagtgggccacttttggtaagca gaactggcgctgtgggatgaaccaaacgtaatgttacggtgcccaaataa caact >NR_004048 1 So each block starting with special words such as >NR_004049 1 until the next special character should be considered as one token. The problem here is tokenizing this kind of data set and i dont have any idea how can i correctly tokenize them. I really appreciate answers which helps me to solve this. Update: One way to solve this problem is to append al lines within each block, and then using the nltk tokenizer. for example This means that to append all lines between >NR_004049 1 and >NR_004048 1 to make one string from several lines, so the nltk tokenizer will consider it as one token. Can any one help me how can i append lines within each block?
You just need to concatenate the lines between two ids apparently. There should be no need for nltk or any tokenizer, just a bit of programming ;) patterns = {} with open('data', "r") as f: id = None current = "" for line0 in f: line= line0.rstrip() if line[0] == '>' : # new pattern if len(current)>0: # print("adding "+id+" "+current) patterns[id] = current current = "" # to find the next id: tokens = line.split(" ") id = tokens[0][1:] else: # continuing pattern current = current + line if len(current)>0: patterns[id] = current # print("adding "+id+" "+current) # do whatever with the patterns: for id, pattern in patterns.items(): print(f"{id}\t{pattern}")
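Since the data is in FASTA format, another option (hedged: it assumes Biopython is available, pip install biopython) is to let Bio.SeqIO do the block handling for you:

from Bio import SeqIO

patterns = {}
for record in SeqIO.parse("data", "fasta"):
    # record.id is the token right after '>', e.g. 'NR_004049';
    # the sequence lines of each block are already joined into one string
    patterns[record.id] = str(record.seq)

for id_, seq in patterns.items():
    print(f"{id_}\t{seq}")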
3
3
74,624,626
2022-11-30
https://stackoverflow.com/questions/74624626/type-narrowing-of-class-attributes-in-python-typeguard-without-subclassing
Consider I have a python class that has a attributes (i.e. a dataclass, pydantic, attrs, django model, ...) that consist of a union, i.e. None and and a state. Now I have a complex checking function that checks some values. If I use this checking function, I want to tell the type checker, that some of my class attributes are narrowed. For instance see this simplified example: import dataclasses from typing import TypeGuard @dataclasses.dataclass class SomeDataClass: state: tuple[int, int] | None name: str # Assume many more data attributes class SomeDataClassWithSetState(SomeDataClass): state: tuple[int, int] def complex_check(data: SomeDataClass) -> TypeGuard[SomeDataClassWithSetState]: # Assume some complex checks here, for simplicity it is only: return data.state is not None and data.name.startswith("SPECIAL") def get_sum(data: SomeDataClass) -> int: if complex_check(data): return data.state[0] + data.state[1] return 0 Explore on mypy Playground As seen it is possible to do this with subclasses, which for various reason is not an option for me: it introduces a lot of duplication some possible libraries used for dataclasses are not happy with being subclasses without side condition there could be some Metaclass or __subclasses__ magic that handles all subclass specially, i.e. creating database for the dataclasses So is there an option to type narrow a(n) attribute(s) of a class without introducing a solely new class, as proposed here?
TL;DR: You cannot narrow the type of an attribute. You can only narrow the type of an object. As I already mentioned in my comment, for typing.TypeGuard to be useful it relies on two distinct types T and S. Then, depending on the returned bool, the type guard function tells the type checker to assume the object to be either T or S. You say, you don't want to have another class/subclass alongside SomeDataClass for various (vaguely valid) reasons. But if you don't have another type, then TypeGuard is useless. So that is not the route to take here. I understand that you want to reduce the type-safety checks like if obj.state is None because you may need to access the state attribute in multiple different places in your code. You must have some place in your code, where you create/mutate a SomeDataClass instance in a way that ensures its state attribute is not None. One solution then is to have a getter for that attribute that performs the type-safety check and only ever returns the narrower type or raises an error. I typically do this via @property for improved readability. Example: from dataclasses import dataclass @dataclass class SomeDataClass: name: str optional_state: tuple[int, int] | None = None @property def state(self) -> tuple[int, int]: if self.optional_state is None: raise RuntimeError("or some other appropriate exception") return self.optional_state def set_state(obj: SomeDataClass, value: tuple[int, int]) -> None: obj.optional_state = value if __name__ == "__main__": foo = SomeDataClass(optional_state=(1, 2), name="foo") bar = SomeDataClass(name="bar") baz = SomeDataClass(name="baz") set_state(bar, (2, 3)) print(foo.state) print(bar.state) try: print(baz.state) except RuntimeError: print("baz has no state") I realize you mean there are many more checks happening in complex_check, but either that function doesn't change the type of data or it does. If the type remains the same, you need to introduce type-safety for attributes like state in some other place, which is why I suggest a getter method. Another option is obviously to have a separate class, which is what is typically done with FastAPI/Pydantic/SQLModel for example and use clever inheritance to reduce code duplication. You mentioned this may cause problems because of subclassing magic. Well, if it does, use the other approach, but I can't think of an example that would cause the problems you mentioned. Maybe you can be more specific and show a case where subclassing would lead to problems.
7
2
74,622,588
2022-11-30
https://stackoverflow.com/questions/74622588/sudoku-backtracking-python-to-find-multiple-solutions
I have a code to solve a Sudoku recursively and print out the one solution it founds. But i would like to find the number of multiple solutions. How would you modify the code that it finds all possible solutions and gives out the number of solutions? Thank you! :) code: board = [ [7,8,0,4,0,0,1,2,0], [6,0,0,0,7,5,0,0,9], [0,0,0,6,0,1,0,7,8], [0,0,7,0,4,0,2,6,0], [0,0,1,0,5,0,9,3,0], [9,0,4,0,6,0,0,0,5], [0,7,0,3,0,0,0,1,2], [1,2,0,0,0,7,4,0,0], [0,4,9,2,0,6,0,0,7] ] def solve(bo): find = find_empty(bo) if not find: return True else: row, col = find for num in range(1,10): if valid(bo, num, (row, col)): bo[row][col] = num if solve(bo): return True bo[row][col] = 0 return False def valid(bo, num, pos): # Check row for field in range(len(bo[0])): if bo[pos[0]][field] == num and pos[1] != field: return False # Check column for line in range(len(bo)): if bo[line][pos[1]] == num and pos[0] != line: return False # Check box box_x = pos[1] // 3 box_y = pos[0] // 3 for i in range(box_y*3, box_y*3 + 3): for j in range(box_x * 3, box_x*3 + 3): if bo[i][j] == num and (i,j) != pos: return False return True def print_board(bo): for i in range(len(bo)): if i % 3 == 0 and i != 0: print("- - - - - - - - - - - - - ") for j in range(len(bo[0])): if j % 3 == 0 and j != 0: print(" | ", end="") if j == 8: print(bo[i][j]) else: print(str(bo[i][j]) + " ", end="") def find_empty(bo): for i in range(len(bo)): for j in range(len(bo[0])): if bo[i][j] == 0: return (i, j) # row, col return None if __name__ == "__main__": print_board(board) solve(board) print("___________________") print("") print_board(board) I already tried to change the return True term at the Solve(Bo) Function to return None/ deleted it(For both return Terms) that it continues… Then the Algorithm continues and finds multiple solutions, but in the end fills out the correct numbers from the very last found solutions again into 0’s. This is the solution then printed out.
As asked: How would you modify the code that it finds all possible solutions and gives out the number of solutions? If you don't want to return ("give out") the solutions themselves, but the number of solutions, then you need to maintain a counter, and use the count you get back from the recursive call to update the owned counter: def solve(bo): find = find_empty(bo) if not find: return 1 count = 0 row, col = find for num in range(1, 10): if valid(bo, num, (row, col)): bo[row][col] = num count += solve(bo) bo[row][col] = 0 return count In the main program, you would no longer print the board, as you don't expect the filled board now, but a number: print(solve(board)) # Will output 1 for your example board. Getting all solutions If you don't just want to know the count, but every individual solution itself, then I would go for a generator function, that yields each solution: def solve(bo): find = find_empty(bo) if not find: yield [row[:] for row in bo] # Make a copy return row, col = find for num in range(1, 10): if valid(bo, num, (row, col)): bo[row][col] = num yield from solve(bo) bo[row][col] = 0 Then the main program can do: count = 0 for solution in solve(board): print("SOLUTION:") print_board(solution) count += 1 print("NUMBER of SOLUTIONS:", count)
4
1
74,624,111
2022-11-30
https://stackoverflow.com/questions/74624111/application-runs-with-uvicorn-but-cant-find-module-no-module-named-app
. β”œβ”€β”€ __pycache__ β”‚ └── api.cpython-310.pyc β”œβ”€β”€ app β”‚ β”œβ”€β”€ __pycache__ β”‚ β”‚ └── main.cpython-310.pyc β”‚ β”œβ”€β”€ api_v1 β”‚ β”‚ β”œβ”€β”€ __pycache__ β”‚ β”‚ β”‚ └── apis.cpython-310.pyc β”‚ β”‚ β”œβ”€β”€ apis.py β”‚ β”‚ └── endpoints β”‚ β”‚ β”œβ”€β”€ __pycache__ β”‚ β”‚ β”‚ └── message_prediction.cpython-310.pyc β”‚ β”‚ └── message_prediction.py β”‚ β”œβ”€β”€ config.py β”‚ β”œβ”€β”€ main.py β”‚ └── schemas β”‚ β”œβ”€β”€ Messages.py β”‚ └── __pycache__ β”‚ └── Messages.cpython-310.pyc β”œβ”€β”€ app.egg-info β”‚ β”œβ”€β”€ PKG-INFO β”‚ β”œβ”€β”€ SOURCES.txt β”‚ β”œβ”€β”€ dependency_links.txt β”‚ └── top_level.txt β”œβ”€β”€ build β”‚ └── bdist.macosx-12.0-arm64 β”œβ”€β”€ data β”‚ β”œβ”€β”€ processed β”‚ β”‚ β”œβ”€β”€ offers_big.csv_cleaned.xlsx β”‚ β”‚ └── requests_big.csv_cleaned.xlsx β”‚ β”œβ”€β”€ processed.dvc β”‚ β”œβ”€β”€ raw β”‚ β”‚ β”œβ”€β”€ offers.csv.old β”‚ β”‚ β”œβ”€β”€ offers_big.csv β”‚ β”‚ β”œβ”€β”€ requests.csv.old β”‚ β”‚ └── requests_big.csv β”‚ β”œβ”€β”€ raw.dvc β”‚ β”œβ”€β”€ validated β”‚ β”‚ β”œβ”€β”€ validated_offers.xlsx β”‚ β”‚ └── validated_requests.xlsx β”‚ └── validated.dvc β”œβ”€β”€ dist β”‚ └── app-0.1.0-py3.10.egg β”œβ”€β”€ model.pkl β”œβ”€β”€ model.py β”œβ”€β”€ notebooks β”‚ └── contact-form.ipynb β”œβ”€β”€ requirements.in β”œβ”€β”€ requirements.txt β”œβ”€β”€ setup.py └── test_api.py # main.py import os from fastapi import FastAPI import uvicorn from app.api_v1.apis import api_router # create the app messages_classification_app = FastAPI() messages_classification_app.include_router(api_router) if __name__ == '__main__': uvicorn.run("app.main:messages_classification_app", host=os.getenv("HOST", "0.0.0.0"), port=int(os.getenv("PORT", 8000))) # requirements.in fastapi uvicorn -e file:.#egg=app Trying to run the fastAPI app with python, results in error: py[learning] ξ‚° ~/r/v/contact-form-classification ξ‚° ξ‚  master Β± ξ‚° python app/main.py Traceback (most recent call last): File "/Users/xxxxx/repos/visable/contact-form-classification/app/main.py", line 4, in <module> from app.api_v1.apis import api_router ModuleNotFoundError: No module named 'app' Running it with uvicorn directly works: py[learning] ξ‚° ~/r/v/contact-form-classification ξ‚° ξ‚  master Β± ξ‚° uvicorn app.main:messages_classification_app INFO: Started server process [53665] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) Any idea why? Looked into similar questions, don't seem to apply to mine.
python app/main.py puts the script's directory, app/, at the front of sys.path, so the app package itself is not importable and from app.api_v1.apis import api_router fails. Run it as a module from the project root instead: python -m app.main. With -m, Python adds the current directory (here, the project root) to sys.path, so the app package resolves.
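A quick throwaway check to see the difference (not part of the app, just a diagnostic):

# temporarily add at the very top of app/main.py, then run it both ways
import sys
print(sys.path[0])
# python app/main.py  -> .../contact-form-classification/app  (the project root is missing)
# python -m app.main  -> '' or the project root (the current directory), so 'app.*' imports work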
3
2
74,607,041
2022-11-28
https://stackoverflow.com/questions/74607041/how-to-do-with-pydantic-regex-validation
I'm trying to write a validator using Pydantic for version strings like the following (examples): 1.1.0, 3.5.6, 1.1.2, etc. I'm failing with the following syntax: install_component_version: constr(regex=r"^[0-9]+.[0-9]+.[0-9]$") install_component_version: constr(regex=r"^([0-9])+.([0-9])+.([0-9])$") install_component_version: constr(regex=r"^([0-9])\.([0-9])\.([0-9])$") Can anyone help me with what the regex syntax should look like?
The error you are facing is due to type annotation. As per https://github.com/pydantic/pydantic/issues/156 this is not yet fixed, you can try using pydantic.Field and then pass the regex argument there like so install_component_version: str = Field(regex=r"^[0-9]+.[0-9]+.[0-9]$") This way you get the regex validation and type checking. PS: This is not a 100% alternative to constr but if all you want is regex validation, the above alternative works and makes mypy happy. :warning: pydantic v2 As mentioned in the comments, if using Pydantic V2, regex is replaced with pattern. install_component_version: str = Field(pattern=r"^[0-9]+.[0-9]+.[0-9]$")
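Separate from the constr-vs-Field issue: the pattern carried over from the question leaves the dots unescaped (. matches any character) and only allows a single digit in the last group. If you want it to match strings like 1.1.0 or 3.5.16 strictly, a tighter pattern (just a suggestion) would be:

install_component_version: str = Field(regex=r"^\d+\.\d+\.\d+$")      # Pydantic v1
# install_component_version: str = Field(pattern=r"^\d+\.\d+\.\d+$")  # Pydantic v2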
8
10
74,589,610
2022-11-27
https://stackoverflow.com/questions/74589610/whats-pylints-typevar-name-specification
Pylint gives a warning whenever something like this happens: import typing SEQ_FR = typing.TypeVar("SEQ_FR") #^^^^^ gets underlined with the warning The warning is like this: Type variable name "SEQ_FR" doesn't conform to predefined naming style. pylint(invalid-name) I tried searching through Pylint's documentations with no luck on finding the exact regex / specifications used. Doesn't seem like I can pass a custom regex onto Pylint for this as well, unlike regular variables, methods, functions, classes, etc. What is the specification used by Pylint to flag TypeVar variables as valid or invalid names?
You can find the rule used in the Pylint messages documentation; this error is named invalid-name, so the specific documentation can be found on the invalid-name / C0103 page, which has a TypeVar rule in the Predefined Naming Patterns section: Name type: typevar Good Names: T, _CallableT, _T_co, AnyStr, DeviceTypeT, IPAddressT Bad Names: DICT_T, CALLABLE_T, ENUM_T, DeviceType, _StrType, TAnyStr It doesn't document the exact regex rule here, but Pylint will actually include the regex used in the error message when you use the --include-naming-hint=y command-line switch *): Type variable name "SEQ_FR" doesn't conform to predefined naming style ('^_{0,2}(?!T[A-Z])(?:[A-Z]+|(?:[A-Z]+[a-z]+)+T?(?<!Type))(?:_co(?:ntra)?)?$' pattern) (invalid-name) Alternatively, you can find the regex for typevars in the source code. Breaking the pattern down, typevar names are compliant when following these rules: Optionally start with 0-2 underscores not starting with T<capital letter> either all capital letters and no underscores, or a PascalCaseWord ✝) optionally ending in T, no underscores, and not ending with Type with an optional _co or _contra ending. Put differently, typevar names must be either PascalCase ✝) or all-caps, are optionally protected (_) or private (__), are optionally marked as covariant (_co) or contravariant (_contra), and should not end in Type. A compliant name for your example could be SeqFr or SeqFrT; the T suffix is meant to make it clear the PascalCaseT name is a typevar. Alternatively, you can specify your own regex with the --typevar-rgx=<regex> command-line switch *). Note: As Pierre Sassoulas (the maintainer of Pylint) pointed out in a comment: There is no PEP-8 convention (yet) for typevar naming; instead the PyLint team captured the rules observed in Python projects and the type hinting documentation. The exact rule is therefore still subject to change if an official convention were to be created. *) The Visual Studio Code settings for the Python extension include a python.linting.pylintArgs option that takes a list of command-line switches. ✝) PascalCase is also known as CapitalizedWords, UpperCamelCase or StudlyCase. Don't confuse this with camelCase (initial letter lowercase) or snake_case (all lowercase with underscores). I regularly do! When in doubt, Wikipedia has a handly table of multi-word formats.
9
12
74,606,984
2022-11-28
https://stackoverflow.com/questions/74606984/how-are-small-sets-stored-in-memory
If we look at the resize behavior for sets under 50k elements: >>> import sys >>> s = set() >>> seen = {} >>> for i in range(50_000): ... size = sys.getsizeof(s) ... if size not in seen: ... seen[size] = len(s) ... print(f"{size=} {len(s)=}") ... s.add(i) ... size=216 len(s)=0 size=728 len(s)=5 size=2264 len(s)=19 size=8408 len(s)=77 size=32984 len(s)=307 size=131288 len(s)=1229 size=524504 len(s)=4915 size=2097368 len(s)=19661 This pattern is consistent with quadrupling of the backing storage size once the set is 3/5ths full, plus some presumably constant overhead for the PySetObject: >>> for i in range(9, 22, 2): ... print(2**i + 216) ... 728 2264 8408 32984 131288 524504 2097368 A similar pattern continues even for larger sets, but the resize factor switches to doubling instead of quadrupling. The reported size for small sets is an outlier. Instead of size 344 bytes, i.e. 16 * 8 + 216 (the storage array of a newly created empty set has 8 slots avail until the first resize up to 32 slots) only 216 bytes is reported by sys.getsizeof. What am I missing? How are those small sets stored so that they use only 216 bytes instead of 344?
set object in Python is represented by the following C structure: typedef struct { PyObject_HEAD Py_ssize_t fill; /* Number of active and dummy entries*/ Py_ssize_t used; /* Number of active entries */ /* The table contains mask + 1 slots, and that's a power of 2. * We store the mask instead of the size because the mask is more * frequently needed. */ Py_ssize_t mask; /* The table points to a fixed-size smalltable for small tables * or to additional malloc'ed memory for bigger tables. * The table pointer is never NULL which saves us from repeated * runtime null-tests. */ setentry *table; Py_hash_t hash; /* Only used by frozenset objects */ Py_ssize_t finger; /* Search finger for pop() */ setentry smalltable[PySet_MINSIZE]; PyObject *weakreflist; /* List of weak references */ } PySetObject; Now remember, getsizeof() calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector. Ok, set implements the __sizeof__. static PyObject * set_sizeof(PySetObject *so, PyObject *Py_UNUSED(ignored)) { Py_ssize_t res; res = _PyObject_SIZE(Py_TYPE(so)); if (so->table != so->smalltable) res = res + (so->mask + 1) * sizeof(setentry); return PyLong_FromSsize_t(res); } Now let’s inspect the line res = _PyObject_SIZE(Py_TYPE(so)); _PyObject_SIZE is just a macro which expands to (typeobj)->tp_basicsize. #define _PyObject_SIZE(typeobj) ( (typeobj)->tp_basicsize ) This code is essentially trying to access the tp_basicsize slot to get the size in bytes of instances of the type which is just sizeof(PySetObject) in case of set. PyTypeObject PySet_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) "set", /* tp_name */ sizeof(PySetObject), /* tp_basicsize */ 0, /* tp_itemsize */ # Skipped rest of the code for brevity. I have modified the set_sizeof C function with the following changes: static PyObject * set_sizeof(PySetObject *so, PyObject *Py_UNUSED(ignored)) { Py_ssize_t res; unsigned long py_object_head_size = sizeof(so->ob_base); // Because PyObject_HEAD expands to PyObject ob_base; unsigned long fill_size = sizeof(so->fill); unsigned long used_size = sizeof(so->used); unsigned long mask_size = sizeof(so->mask); unsigned long table_size = sizeof(so->table); unsigned long hash_size = sizeof(so->hash); unsigned long finger_size = sizeof(so->finger); unsigned long smalltable_size = sizeof(so->smalltable); unsigned long weakreflist_size = sizeof(so->weakreflist); int is_using_fixed_size_smalltables = so->table == so->smalltable; printf("| PySetObject Fields | Size(bytes) |\n"); printf("|------------------------------------|\n"); printf("| PyObject_HEAD | '%zu' |\n", py_object_head_size); printf("| fill | '%zu' |\n", fill_size); printf("| used | '%zu' |\n", used_size); printf("| mask | '%zu' |\n", mask_size); printf("| table | '%zu' |\n", table_size); printf("| hash | '%zu' |\n", hash_size); printf("| finger | '%zu' |\n", finger_size); printf("| smalltable | '%zu' |\n", smalltable_size); printf("| weakreflist | '%zu' |\n", weakreflist_size); printf("-------------------------------------|\n"); printf("| Total | '%zu' |\n", py_object_head_size+fill_size+used_size+mask_size+table_size+hash_size+finger_size+smalltable_size+weakreflist_size); printf("\n"); printf("Total size of PySetObject '%zu' bytes\n", sizeof(PySetObject)); printf("Has set resized: '%s'\n", is_using_fixed_size_smalltables ? 
"No": "Yes"); if(!is_using_fixed_size_smalltables) { printf("Size of malloc'ed table: '%zu' bytes\n", (so->mask + 1) * sizeof(setentry)); } res = _PyObject_SIZE(Py_TYPE(so)); if (so->table != so->smalltable) res = res + (so->mask + 1) * sizeof(setentry); return PyLong_FromSsize_t(res); } and compiling and running these changes gives me: >>> import sys >>> >>> set_ = set() >>> sys.getsizeof(set_) | PySetObject Fields | Size(bytes) | |------------------------------------| | PyObject_HEAD | '16' | | fill | '8' | | used | '8' | | mask | '8' | | table | '8' | | hash | '8' | | finger | '8' | | smalltable | '128' | | weakreflist | '8' | -------------------------------------| | Total | '200' | Total size of PySetObject '200' bytes Has set resized: 'No' 216 >>> set_.add(1) >>> set_.add(2) >>> set_.add(3) >>> set_.add(4) >>> set_.add(5) >>> sys.getsizeof(set_) | PySetObject Fields | Size(bytes) | |------------------------------------| | PyObject_HEAD | '16' | | fill | '8' | | used | '8' | | mask | '8' | | table | '8' | | hash | '8' | | finger | '8' | | smalltable | '128' | | weakreflist | '8' | -------------------------------------| | Total | '200' | Total size of PySetObject '200' bytes Has set resized: 'Yes' Size of malloc'ed table: '512' bytes 728 The return value is 216/728 bytes because sys.getsize add 16 bytes of GC overhead. But the important thing to note here is this line. | smalltable | '128' | Because for small tables(before the first resize) so->table is just a reference to fixed size(8) so->smalltable(No malloc'ed memory) so sizeof(PySetObject) is sufficient enough to get the size because it also includes the storage size( 128(16(size of setentry) * 8)). Now what happens when the resize occurs? It constructs entirely new table (malloc'ed) and uses that table instead of so->smalltables. This means that the sets, which have resized, also carry out a dead-weight of 128 bytes (size of fixed size small table) along with the size of malloc'ed table(so->table). else { newtable = PyMem_NEW(setentry, newsize); if (newtable == NULL) { PyErr_NoMemory(); return -1; } } /* Make the set empty, using the new table. */ assert(newtable != oldtable); memset(newtable, 0, sizeof(setentry) * newsize); so->mask = newsize - 1; so->table = newtable;
13
7
74,563,511
2022-11-24
https://stackoverflow.com/questions/74563511/should-you-decorate-dataclass-subclasses-if-not-making-additional-fields
If you don't add any more fields to your subclass is there a need to add the @dataclass decorator to it and would it do anything? If there is no difference, which is the usual convention? from dataclasses import dataclass @dataclass class AAA: x: str y: str ... # decorate? class BBB(AAA): ...
From the documentation, the dataclasses source code and experimenting with class instances, I don't see any difference, even if new fields are added to a subclass (but see Update 1 below). If we are talking conventions, then I would advise against decorating a subclass. Class BBB's definition is supposed to say "this class behaves just like AAA, but with these changes" (likely a couple of new methods in your case). Re-decorating BBB: (1) serves no purpose; (2) violates the DRY principle: it is just one line, but there is still no particular difference between re-decorating and copy-pasting a short method from the superclass; (3) could (potentially) get in the way of changing AAA: you might theoretically switch to another library for dataclassing or decide to use @dataclass with non-default parameters, and that'd require maintaining the subclass as well (for no good reason). Update 1: As Andrius corrected me, if you add new fields in a subclass, you'll encounter the following problem: >>> class BBB(AAA): ... z: str ... >>> BBB('a', 'b', 'c') Traceback (most recent call last): File "<input>", line 1, in <module> BBB('a', 'b', 'c') TypeError: AAA.__init__() takes 3 positional arguments but 4 were given The original question specifically says that new fields won't be added, so this is just a correction of my initial answer.
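To make the "no difference" part concrete, a quick check (hypothetical snippet reusing AAA and BBB from the question):

import dataclasses

b = BBB("foo", "bar")                    # the __init__ generated on AAA is inherited
print(b)                                 # BBB(x='foo', y='bar') -- generated __repr__ reports the subclass
print(b == BBB("foo", "bar"))            # True -- generated __eq__ is inherited too
print(dataclasses.is_dataclass(BBB))     # True, via the inherited __dataclass_fields__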
5
5
74,608,905
2022-11-29
https://stackoverflow.com/questions/74608905/single-source-of-truth-for-python-project-version-in-presence-of-pyproject-toml
The pyproject.toml specification affords the ability to specify the project version, e.g. [project] name = "foo" version = "0.0.1" However, it is also a common Python idiom to put __version__ = "0.0.1" in foo/__init__.py so that users can query it. Is there a standard way of extracting the version from the pyproject.toml and getting it into the foo/__init__.py?
There are two approaches you can take here. Keep version in pyproject.toml and get it from the package metadata in the source code. So, in your mypkg/__init__.py or wherever: from importlib.metadata import version __version__ = version("mypkg") importlib.metadata.version is available since Python 3.8. For earlier Python versions, you can do similar with the importlib_metadata backport. Keep the version in the source code and instruct the build system to get it from there. For a setuptools build backend, it looks like this in pyproject.toml: [project] name = "mypkg" dynamic = ["version"] [tool.setuptools.dynamic] version = {attr = "mypkg.__version__"} My recommendation is (πŸ₯) ... neither! Don't keep a __version__ attribute in the source code at all. It's an outdated habit which we can do without these days. Version is already a required field in the package metadata, it's redundant to keep the same string as an attribute in the package/module namespace.
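One caveat worth hedging on for approach 1: importlib.metadata reads the installed distribution's metadata, so version("mypkg") raises PackageNotFoundError when the code is imported from a source tree that was never installed. A common guard looks like this:

from importlib.metadata import PackageNotFoundError, version

try:
    __version__ = version("mypkg")
except PackageNotFoundError:   # not installed, e.g. running from a plain checkout
    __version__ = "0.0.0+unknown"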
13
20
74,551,520
2022-11-23
https://stackoverflow.com/questions/74551520/not-able-to-copy-file-in-docker-file-which-is-downloaded-in-github-actions
I can able to see the .pkl which is downloaded using actions/download-artifact@v3 action in work directory along with Dockerfile as shown below, When I try to COPY file inside Dockefile, I get a file not found error. How to copy the files inside docker image that are downloaded(through github actions) before building docker image? Here is doc from github on docker support, but I didn't get exactly how to solve my issue. Any help would be really appreciated!! Dockerfile: name: Docker - GitHub workflow env: CONTAINER_NAME: xxx-xxx on: workflow_dispatch: push: branches: ["main"] pull_request: branches: ["main"] permissions: id-token: write contents: read jobs: load-artifacts: runs-on: ubuntu-latest environment: dev env: output_path: ./xxx/xxx_model.pkl steps: - uses: actions/checkout@v3 - name: Download PPE model file run: | az storage blob download --container-name ppe-container --name xxx_model.pkl -f "${{ env.output_path }}" - name: View output - after run: | ls -lhR - name: 'Upload Artifact' uses: actions/upload-artifact@v3 with: name: ppe_model path: ${{ env.output_path }} build: needs: load-artifacts runs-on: ubuntu-latest env: ACR: xxxx steps: - uses: actions/checkout@v3 - uses: actions/download-artifact@v3 id: download with: name: ppe_model # path: ${{ env.model_path }} - name: Echo download path run: echo ${{steps.download.outputs.download-path}} - name: View directory files run: | ls -lhR -a - name: Build container image uses: docker/build-push-action@v2 with: push: false tags: ${{ env.ACR }}.azurecr.io/${{ env.CONTAINER_NAME }}:${{ github.run_number }} file: ./Dockerfile
In your workflow file, you're not specifying the context: - name: Build container image uses: docker/build-push-action@v2 with: push: false tags: ${{ env.ACR }}.azurecr.io/${{ env.CONTAINER_NAME }}:${{ github.run_number }} file: ./Dockerfile By default, that means that docker/build-push-action uses a git context. That will re-clone your repository... without your model. The fix, then, is to specify a path context, like this: - name: Build container image uses: docker/build-push-action@v2 with: context: . push: false tags: ${{ env.ACR }}.azurecr.io/${{ env.CONTAINER_NAME }}:${{ github.run_number }} file: ./Dockerfile
4
8
74,565,844
2022-11-24
https://stackoverflow.com/questions/74565844/typeerror-cannot-join-tz-naive-with-tz-aware-datetimeindex
all! I am trying to generate results of this repo https://github.com/ArnaudBu/stock-returns-prediction for stocks price prediction based on financial analysis. Running the very first step 1_get_data.py I come across an error: TypeError: Cannot join tz-naive with tz-aware DatetimeIndex The code is # -*- coding: utf-8 -*- from yfinance import Ticker import pandas as pd from yahoofinancials import YahooFinancials import requests from tqdm import tqdm import time import pickle # with open('tmp.pickle', 'rb') as f: # statements, tickers_done = pickle.load(f) # Download function def _download_one(ticker, start=None, end=None, auto_adjust=False, back_adjust=False, actions=False, period="max", interval="1d", prepost=False, proxy=None, rounding=False): return Ticker(ticker).history(period=period, interval=interval, start=start, end=end, prepost=prepost, actions=actions, auto_adjust=auto_adjust, back_adjust=back_adjust, proxy=proxy, rounding=rounding, many=True) # Modify project and reference index according to your needs tickers_all = [] # for project in ["sp500", "nyse", "nasdaq"]: for project in ["nasdaq"]: print(project) ref_index = ["^GSPC", "^IXIC"] # Load tickers companies = pd.read_csv(f"data/{project}/{project}.csv", sep=",") # companies = companies.drop(companies.index[companies['Symbol'].index[companies['Symbol'].isnull()][0]]) # the row with Nan value tickers = companies.Symbol.tolist() tickers = [a for a in tickers if a not in tickers_all and "^" not in a and r"/" not in a] tickers_all += tickers # Download prices full_data = {} for ticker in tqdm(tickers + ref_index): tckr = _download_one(ticker, period="7y", actions=True) full_data[ticker] = tckr ohlc = pd.concat(full_data.values(), axis=1, keys=full_data.keys()) ohlc.columns = ohlc.columns.swaplevel(0, 1) ohlc.sort_index(level=0, axis=1, inplace=True) prices = ohlc["Adj Close"] dividends = ohlc["Dividends"] prices.to_csv(f"data/{project}/prices_daily.csv") dividends.to_csv(f"data/{project}/dividends.csv") statements = {} tickers_done = [] for ticker in tqdm(tickers): # Get statements if ticker in tickers_done: continue yahoo_financials = YahooFinancials(ticker) stmts_codes = ['income', 'cash', 'balance'] all_statement_data = yahoo_financials.get_financial_stmts('annual', stmts_codes) # build statements dictionary for a in all_statement_data.keys(): if a not in statements: statements[a] = list() for b in all_statement_data[a]: try: for result in all_statement_data[a][b]: extracted_date = list(result)[0] dataframe_row = list(result.values())[0] dataframe_row['date'] = extracted_date dataframe_row['symbol'] = b statements[a].append(dataframe_row) except Exception as e: print("Error on " + ticker + " : " + a) tickers_done.append(ticker) with open('tmp.pickle', 'wb') as f: pickle.dump([statements, tickers_done], f) # save dataframes for a in all_statement_data.keys(): df = pd.DataFrame(statements[a]).set_index('date') df.to_csv(f"data/{project}/{a}.csv") # Donwload shares shares = [] tickers_done = [] for ticker in tqdm(tickers): if ticker in tickers_done: continue d = requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{ticker}?symbol={ticker}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868") if not d.ok: time.sleep(300) d = 
requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{ticker}?symbol={ticker}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868") ctn = d.json()['timeseries']['result'] dct = dict() for n in ctn: type = n['meta']['type'][0] dct[type] = dict() if type in n: for o in n[type]: if o is not None: dct[type][o['asOfDate']] = o['reportedValue']['raw'] df = pd.DataFrame.from_dict(dct) df['symbol'] = ticker shares.append(df) tickers_done.append(ticker) time.sleep(1) # save dataframe df = pd.concat(shares) df['date'] = df.index df.to_csv(f"data/{project}/shares.csv", index=False) # https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/MSFT?symbol=MSFT&padTimeSeries=true&type=annualTreasurySharesNumber,trailingTreasurySharesNumber,annualPreferredSharesNumber,trailingPreferredSharesNumber,annualOrdinarySharesNumber,trailingOrdinarySharesNumber,annualShareIssued,trailingShareIssued,annualNetDebt,trailingNetDebt,annualTotalDebt,trailingTotalDebt,annualTangibleBookValue,trailingTangibleBookValue,annualInvestedCapital,trailingInvestedCapital,annualWorkingCapital,trailingWorkingCapital,annualNetTangibleAssets,trailingNetTangibleAssets,annualCapitalLeaseObligations,trailingCapitalLeaseObligations,annualCommonStockEquity,trailingCommonStockEquity,annualPreferredStockEquity,trailingPreferredStockEquity,annualTotalCapitalization,trailingTotalCapitalization,annualTotalEquityGrossMinorityInterest,trailingTotalEquityGrossMinorityInterest,annualMinorityInterest,trailingMinorityInterest,annualStockholdersEquity,trailingStockholdersEquity,annualOtherEquityInterest,trailingOtherEquityInterest,annualGainsLossesNotAffectingRetainedEarnings,trailingGainsLossesNotAffectingRetainedEarnings,annualOtherEquityAdjustments,trailingOtherEquityAdjustments,annualFixedAssetsRevaluationReserve,trailingFixedAssetsRevaluationReserve,annualForeignCurrencyTranslationAdjustments,trailingForeignCurrencyTranslationAdjustments,annualMinimumPensionLiabilities,trailingMinimumPensionLiabilities,annualUnrealizedGainLoss,trailingUnrealizedGainLoss,annualTreasuryStock,trailingTreasuryStock,annualRetainedEarnings,trailingRetainedEarnings,annualAdditionalPaidInCapital,trailingAdditionalPaidInCapital,annualCapitalStock,trailingCapitalStock,annualOtherCapitalStock,trailingOtherCapitalStock,annualCommonStock,trailingCommonStock,annualPreferredStock,trailingPreferredStock,annualTotalPartnershipCapital,trailingTotalPartnershipCapital,annualGeneralPartnershipCapital,trailingGeneralPartnershipCapital,annualLimitedPartnershipCapital,trailingLimitedPartnershipCapital,annualTotalLiabilitiesNetMinorityInterest,trailingTotalLiabilitiesNetMinorityInterest,annualTotalNonCurrentLiabilitiesNetMinorityInterest,trailingTotalNonCurrentLiabilitiesNetMinorityInterest,annualOtherNonCurrentLiabilities,trailingOtherNonCurrentLiabilities,annualLiabilitiesHeldforSaleNonCurrent,trailingLiabilitiesHeldforSaleNonCurrent,annualRestrictedCommonStock,trailingRestrictedCommonStock,annualPreferredSecuritiesOutsideStockEquity,trailingPreferredSecuritiesOutsideStockEquity,annualDerivativeProductLiabilities,trailingDerivativeProductLiabilities,annualEmployeeBenefits,trailingEmployeeBenefits,annualNonCurrentPensionAndOtherPostretirementBenefitPlans,trailingNonCurrentPensionAndOtherPostretirementBenefitPlans,annualNonCurrentAccruedExpenses,trailingNonCurrentAccruedExpenses,annualDuetoRelatedPartiesNonCurrent,trailingDuetoRelatedPartiesNonCurrent,an
nualTradeandOtherPayablesNonCurrent,trailingTradeandOtherPayablesNonCurrent,annualNonCurrentDeferredLiabilities,trailingNonCurrentDeferredLiabilities,annualNonCurrentDeferredRevenue,trailingNonCurrentDeferredRevenue,annualNonCurrentDeferredTaxesLiabilities,trailingNonCurrentDeferredTaxesLiabilities,annualLongTermDebtAndCapitalLeaseObligation,trailingLongTermDebtAndCapitalLeaseObligation,annualLongTermCapitalLeaseObligation,trailingLongTermCapitalLeaseObligation,annualLongTermDebt,trailingLongTermDebt,annualLongTermProvisions,trailingLongTermProvisions,annualCurrentLiabilities,trailingCurrentLiabilities,annualOtherCurrentLiabilities,trailingOtherCurrentLiabilities,annualCurrentDeferredLiabilities,trailingCurrentDeferredLiabilities,annualCurrentDeferredRevenue,trailingCurrentDeferredRevenue,annualCurrentDeferredTaxesLiabilities,trailingCurrentDeferredTaxesLiabilities,annualCurrentDebtAndCapitalLeaseObligation,trailingCurrentDebtAndCapitalLeaseObligation,annualCurrentCapitalLeaseObligation,trailingCurrentCapitalLeaseObligation,annualCurrentDebt,trailingCurrentDebt,annualOtherCurrentBorrowings,trailingOtherCurrentBorrowings,annualLineOfCredit,trailingLineOfCredit,annualCommercialPaper,trailingCommercialPaper,annualCurrentNotesPayable,trailingCurrentNotesPayable,annualPensionandOtherPostRetirementBenefitPlansCurrent,trailingPensionandOtherPostRetirementBenefitPlansCurrent,annualCurrentProvisions,trailingCurrentProvisions,annualPayablesAndAccruedExpenses,trailingPayablesAndAccruedExpenses,annualCurrentAccruedExpenses,trailingCurrentAccruedExpenses,annualInterestPayable,trailingInterestPayable,annualPayables,trailingPayables,annualOtherPayable,trailingOtherPayable,annualDuetoRelatedPartiesCurrent,trailingDuetoRelatedPartiesCurrent,annualDividendsPayable,trailingDividendsPayable,annualTotalTaxPayable,trailingTotalTaxPayable,annualIncomeTaxPayable,trailingIncomeTaxPayable,annualAccountsPayable,trailingAccountsPayable,annualTotalAssets,trailingTotalAssets,annualTotalNonCurrentAssets,trailingTotalNonCurrentAssets,annualOtherNonCurrentAssets,trailingOtherNonCurrentAssets,annualDefinedPensionBenefit,trailingDefinedPensionBenefit,annualNonCurrentPrepaidAssets,trailingNonCurrentPrepaidAssets,annualNonCurrentDeferredAssets,trailingNonCurrentDeferredAssets,annualNonCurrentDeferredTaxesAssets,trailingNonCurrentDeferredTaxesAssets,annualDuefromRelatedPartiesNonCurrent,trailingDuefromRelatedPartiesNonCurrent,annualNonCurrentNoteReceivables,trailingNonCurrentNoteReceivables,annualNonCurrentAccountsReceivable,trailingNonCurrentAccountsReceivable,annualFinancialAssets,trailingFinancialAssets,annualInvestmentsAndAdvances,trailingInvestmentsAndAdvances,annualOtherInvestments,trailingOtherInvestments,annualInvestmentinFinancialAssets,trailingInvestmentinFinancialAssets,annualHeldToMaturitySecurities,trailingHeldToMaturitySecurities,annualAvailableForSaleSecurities,trailingAvailableForSaleSecurities,annualFinancialAssetsDesignatedasFairValueThroughProfitorLossTotal,trailingFinancialAssetsDesignatedasFairValueThroughProfitorLossTotal,annualTradingSecurities,trailingTradingSecurities,annualLongTermEquityInvestment,trailingLongTermEquityInvestment,annualInvestmentsinJointVenturesatCost,trailingInvestmentsinJointVenturesatCost,annualInvestmentsInOtherVenturesUnderEquityMethod,trailingInvestmentsInOtherVenturesUnderEquityMethod,annualInvestmentsinAssociatesatCost,trailingInvestmentsinAssociatesatCost,annualInvestmentsinSubsidiariesatCost,trailingInvestmentsinSubsidiariesatCost,annualInvestmentProperties,trailingInvestment
Properties,annualGoodwillAndOtherIntangibleAssets,trailingGoodwillAndOtherIntangibleAssets,annualOtherIntangibleAssets,trailingOtherIntangibleAssets,annualGoodwill,trailingGoodwill,annualNetPPE,trailingNetPPE,annualAccumulatedDepreciation,trailingAccumulatedDepreciation,annualGrossPPE,trailingGrossPPE,annualLeases,trailingLeases,annualConstructionInProgress,trailingConstructionInProgress,annualOtherProperties,trailingOtherProperties,annualMachineryFurnitureEquipment,trailingMachineryFurnitureEquipment,annualBuildingsAndImprovements,trailingBuildingsAndImprovements,annualLandAndImprovements,trailingLandAndImprovements,annualProperties,trailingProperties,annualCurrentAssets,trailingCurrentAssets,annualOtherCurrentAssets,trailingOtherCurrentAssets,annualHedgingAssetsCurrent,trailingHedgingAssetsCurrent,annualAssetsHeldForSaleCurrent,trailingAssetsHeldForSaleCurrent,annualCurrentDeferredAssets,trailingCurrentDeferredAssets,annualCurrentDeferredTaxesAssets,trailingCurrentDeferredTaxesAssets,annualRestrictedCash,trailingRestrictedCash,annualPrepaidAssets,trailingPrepaidAssets,annualInventory,trailingInventory,annualInventoriesAdjustmentsAllowances,trailingInventoriesAdjustmentsAllowances,annualOtherInventories,trailingOtherInventories,annualFinishedGoods,trailingFinishedGoods,annualWorkInProcess,trailingWorkInProcess,annualRawMaterials,trailingRawMaterials,annualReceivables,trailingReceivables,annualReceivablesAdjustmentsAllowances,trailingReceivablesAdjustmentsAllowances,annualOtherReceivables,trailingOtherReceivables,annualDuefromRelatedPartiesCurrent,trailingDuefromRelatedPartiesCurrent,annualTaxesReceivable,trailingTaxesReceivable,annualAccruedInterestReceivable,trailingAccruedInterestReceivable,annualNotesReceivable,trailingNotesReceivable,annualLoansReceivable,trailingLoansReceivable,annualAccountsReceivable,trailingAccountsReceivable,annualAllowanceForDoubtfulAccountsReceivable,trailingAllowanceForDoubtfulAccountsReceivable,annualGrossAccountsReceivable,trailingGrossAccountsReceivable,annualCashCashEquivalentsAndShortTermInvestments,trailingCashCashEquivalentsAndShortTermInvestments,annualOtherShortTermInvestments,trailingOtherShortTermInvestments,annualCashAndCashEquivalents,trailingCashAndCashEquivalents,annualCashEquivalents,trailingCashEquivalents,annualCashFinancial,trailingCashFinancial&merge=false&period1=493590046&period2=1613490868 # https://query1.finance.yahoo.com/v8/finance/chart/MSFT?symbol=MSFT&period1=1550725200&period2=1613491890&useYfid=true&interval=1d&events=div # https://query1.finance.yahoo.com/v10/finance/quoteSummary/MSFT?formatted=true&crumb=2M1BZy1YB7f&lang=en-US&region=US&modules=incomeStatementHistory,cashflowStatementHistory,balanceSheetHistory,incomeStatementHistoryQuarterly,cashflowStatementHistoryQuarterly,balanceSheetHistoryQuarterly&corsDomain=finance.yahoo.com The screenshot of the error is: It refers to the line 51 of the above code. I have tried multiple times, and check some related questions/answers here as well but have not any satisfied answer. There is another similar question but it has not any proper answer. Any help in this regard would be highly appreciated. Thanks in anticipation!
I just found that the issue was related to the full_data[ticker] in line 49. Once I checked its type and data inside, I found it as dataframe and as: The issue was with the time under the index column Date. So, to remove those I used this line full_data[ticker] = full_data[ticker].tz_localize(None) of code under the line 49 full_data[ticker] = tckr. And then I checked the full_data[ticker] so got this: The time under the Date are disappeared hence solving the issue. Thanks to @VasP whose suggestion helped me to crack this issue. So, here is the working code now: # -*- coding: utf-8 -*- from yfinance import Ticker import pandas as pd from yahoofinancials import YahooFinancials import requests from tqdm import tqdm import time import pickle # with open('tmp.pickle', 'rb') as f: # statements, tickers_done = pickle.load(f) # Download function def _download_one(ticker, start=None, end=None, auto_adjust=False, back_adjust=False, actions=False, period="max", interval="1d", prepost=False, proxy=None, rounding=False): return Ticker(ticker).history(period=period, interval=interval, start=start, end=end, prepost=prepost, actions=actions, auto_adjust=auto_adjust, back_adjust=back_adjust, proxy=proxy, rounding=rounding, many=True) # Modify project and reference index according to your needs tickers_all = [] # for project in ["sp500", "nyse", "nasdaq"]: for project in ["nasdaq"]: print(project) ref_index = ["^GSPC", "^IXIC"] # Load tickers companies = pd.read_csv(f"data/{project}/{project}.csv", sep=",") # companies = companies.drop(companies.index[companies['Symbol'].index[companies['Symbol'].isnull()][0]]) # the row with Nan value tickers = companies.Symbol.tolist() tickers = [a for a in tickers if a not in tickers_all and "^" not in a and r"/" not in a] tickers_all += tickers # Download prices full_data = {} for ticker in tqdm(tickers + ref_index): tckr = _download_one(ticker, period="7y", actions=True) full_data[ticker] = tckr full_data[ticker] = full_data[ticker].tz_localize(None) #Added now ohlc = pd.concat(full_data.values(), axis=1, keys=full_data.keys()) ohlc.columns = ohlc.columns.swaplevel(0, 1) ohlc.sort_index(level=0, axis=1, inplace=True) prices = ohlc["Adj Close"] dividends = ohlc["Dividends"] prices.to_csv(f"data/{project}/prices_daily.csv") dividends.to_csv(f"data/{project}/dividends.csv") statements = {} tickers_done = [] for ticker in tqdm(tickers): # Get statements if ticker in tickers_done: continue yahoo_financials = YahooFinancials(ticker) stmts_codes = ['income', 'cash', 'balance'] all_statement_data = yahoo_financials.get_financial_stmts('annual', stmts_codes) # build statements dictionary for a in all_statement_data.keys(): if a not in statements: statements[a] = list() for b in all_statement_data[a]: try: for result in all_statement_data[a][b]: extracted_date = list(result)[0] dataframe_row = list(result.values())[0] dataframe_row['date'] = extracted_date dataframe_row['symbol'] = b statements[a].append(dataframe_row) except Exception as e: print("Error on " + ticker + " : " + a) tickers_done.append(ticker) with open('tmp.pickle', 'wb') as f: pickle.dump([statements, tickers_done], f) # save dataframes for a in all_statement_data.keys(): df = pd.DataFrame(statements[a]).set_index('date') df.to_csv(f"data/{project}/{a}.csv") # Donwload shares shares = [] tickers_done = [] for ticker in tqdm(tickers): if ticker in tickers_done: continue d = 
requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{ticker}?symbol={ticker}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868") if not d.ok: time.sleep(300) d = requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{ticker}?symbol={ticker}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868") ctn = d.json()['timeseries']['result'] dct = dict() for n in ctn: type = n['meta']['type'][0] dct[type] = dict() if type in n: for o in n[type]: if o is not None: dct[type][o['asOfDate']] = o['reportedValue']['raw'] df = pd.DataFrame.from_dict(dct) df['symbol'] = ticker shares.append(df) tickers_done.append(ticker) time.sleep(1) # save dataframe df = pd.concat(shares) df['date'] = df.index df.to_csv(f"data/{project}/shares.csv", index=False) # https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/MSFT?symbol=MSFT&padTimeSeries=true&type=annualTreasurySharesNumber,trailingTreasurySharesNumber,annualPreferredSharesNumber,trailingPreferredSharesNumber,annualOrdinarySharesNumber,trailingOrdinarySharesNumber,annualShareIssued,trailingShareIssued,annualNetDebt,trailingNetDebt,annualTotalDebt,trailingTotalDebt,annualTangibleBookValue,trailingTangibleBookValue,annualInvestedCapital,trailingInvestedCapital,annualWorkingCapital,trailingWorkingCapital,annualNetTangibleAssets,trailingNetTangibleAssets,annualCapitalLeaseObligations,trailingCapitalLeaseObligations,annualCommonStockEquity,trailingCommonStockEquity,annualPreferredStockEquity,trailingPreferredStockEquity,annualTotalCapitalization,trailingTotalCapitalization,annualTotalEquityGrossMinorityInterest,trailingTotalEquityGrossMinorityInterest,annualMinorityInterest,trailingMinorityInterest,annualStockholdersEquity,trailingStockholdersEquity,annualOtherEquityInterest,trailingOtherEquityInterest,annualGainsLossesNotAffectingRetainedEarnings,trailingGainsLossesNotAffectingRetainedEarnings,annualOtherEquityAdjustments,trailingOtherEquityAdjustments,annualFixedAssetsRevaluationReserve,trailingFixedAssetsRevaluationReserve,annualForeignCurrencyTranslationAdjustments,trailingForeignCurrencyTranslationAdjustments,annualMinimumPensionLiabilities,trailingMinimumPensionLiabilities,annualUnrealizedGainLoss,trailingUnrealizedGainLoss,annualTreasuryStock,trailingTreasuryStock,annualRetainedEarnings,trailingRetainedEarnings,annualAdditionalPaidInCapital,trailingAdditionalPaidInCapital,annualCapitalStock,trailingCapitalStock,annualOtherCapitalStock,trailingOtherCapitalStock,annualCommonStock,trailingCommonStock,annualPreferredStock,trailingPreferredStock,annualTotalPartnershipCapital,trailingTotalPartnershipCapital,annualGeneralPartnershipCapital,trailingGeneralPartnershipCapital,annualLimitedPartnershipCapital,trailingLimitedPartnershipCapital,annualTotalLiabilitiesNetMinorityInterest,trailingTotalLiabilitiesNetMinorityInterest,annualTotalNonCurrentLiabilitiesNetMinorityInterest,trailingTotalNonCurrentLiabilitiesNetMinorityInterest,annualOtherNonCurrentLiabilities,trailingOtherNonCurrentLiabilities,annualLiabilitiesHeldforSaleNonCurrent,trailingLiabilitiesHeldforSaleNonCurrent,annualRestrictedCommonStock,trailingRestrictedCommonStock,annualPreferredSecuritiesOutsideStockEquity,trailingPreferredSecuritiesOutsideStockEquity,annualDerivativeProductLiabilities,trailingDerivativeProductLiabilities,annualEmployeeBenefits,trailing
EmployeeBenefits,annualNonCurrentPensionAndOtherPostretirementBenefitPlans,trailingNonCurrentPensionAndOtherPostretirementBenefitPlans,annualNonCurrentAccruedExpenses,trailingNonCurrentAccruedExpenses,annualDuetoRelatedPartiesNonCurrent,trailingDuetoRelatedPartiesNonCurrent,annualTradeandOtherPayablesNonCurrent,trailingTradeandOtherPayablesNonCurrent,annualNonCurrentDeferredLiabilities,trailingNonCurrentDeferredLiabilities,annualNonCurrentDeferredRevenue,trailingNonCurrentDeferredRevenue,annualNonCurrentDeferredTaxesLiabilities,trailingNonCurrentDeferredTaxesLiabilities,annualLongTermDebtAndCapitalLeaseObligation,trailingLongTermDebtAndCapitalLeaseObligation,annualLongTermCapitalLeaseObligation,trailingLongTermCapitalLeaseObligation,annualLongTermDebt,trailingLongTermDebt,annualLongTermProvisions,trailingLongTermProvisions,annualCurrentLiabilities,trailingCurrentLiabilities,annualOtherCurrentLiabilities,trailingOtherCurrentLiabilities,annualCurrentDeferredLiabilities,trailingCurrentDeferredLiabilities,annualCurrentDeferredRevenue,trailingCurrentDeferredRevenue,annualCurrentDeferredTaxesLiabilities,trailingCurrentDeferredTaxesLiabilities,annualCurrentDebtAndCapitalLeaseObligation,trailingCurrentDebtAndCapitalLeaseObligation,annualCurrentCapitalLeaseObligation,trailingCurrentCapitalLeaseObligation,annualCurrentDebt,trailingCurrentDebt,annualOtherCurrentBorrowings,trailingOtherCurrentBorrowings,annualLineOfCredit,trailingLineOfCredit,annualCommercialPaper,trailingCommercialPaper,annualCurrentNotesPayable,trailingCurrentNotesPayable,annualPensionandOtherPostRetirementBenefitPlansCurrent,trailingPensionandOtherPostRetirementBenefitPlansCurrent,annualCurrentProvisions,trailingCurrentProvisions,annualPayablesAndAccruedExpenses,trailingPayablesAndAccruedExpenses,annualCurrentAccruedExpenses,trailingCurrentAccruedExpenses,annualInterestPayable,trailingInterestPayable,annualPayables,trailingPayables,annualOtherPayable,trailingOtherPayable,annualDuetoRelatedPartiesCurrent,trailingDuetoRelatedPartiesCurrent,annualDividendsPayable,trailingDividendsPayable,annualTotalTaxPayable,trailingTotalTaxPayable,annualIncomeTaxPayable,trailingIncomeTaxPayable,annualAccountsPayable,trailingAccountsPayable,annualTotalAssets,trailingTotalAssets,annualTotalNonCurrentAssets,trailingTotalNonCurrentAssets,annualOtherNonCurrentAssets,trailingOtherNonCurrentAssets,annualDefinedPensionBenefit,trailingDefinedPensionBenefit,annualNonCurrentPrepaidAssets,trailingNonCurrentPrepaidAssets,annualNonCurrentDeferredAssets,trailingNonCurrentDeferredAssets,annualNonCurrentDeferredTaxesAssets,trailingNonCurrentDeferredTaxesAssets,annualDuefromRelatedPartiesNonCurrent,trailingDuefromRelatedPartiesNonCurrent,annualNonCurrentNoteReceivables,trailingNonCurrentNoteReceivables,annualNonCurrentAccountsReceivable,trailingNonCurrentAccountsReceivable,annualFinancialAssets,trailingFinancialAssets,annualInvestmentsAndAdvances,trailingInvestmentsAndAdvances,annualOtherInvestments,trailingOtherInvestments,annualInvestmentinFinancialAssets,trailingInvestmentinFinancialAssets,annualHeldToMaturitySecurities,trailingHeldToMaturitySecurities,annualAvailableForSaleSecurities,trailingAvailableForSaleSecurities,annualFinancialAssetsDesignatedasFairValueThroughProfitorLossTotal,trailingFinancialAssetsDesignatedasFairValueThroughProfitorLossTotal,annualTradingSecurities,trailingTradingSecurities,annualLongTermEquityInvestment,trailingLongTermEquityInvestment,annualInvestmentsinJointVenturesatCost,trailingInvestmentsinJointVenturesatCost,annualInvestmentsInOth
erVenturesUnderEquityMethod,trailingInvestmentsInOtherVenturesUnderEquityMethod,annualInvestmentsinAssociatesatCost,trailingInvestmentsinAssociatesatCost,annualInvestmentsinSubsidiariesatCost,trailingInvestmentsinSubsidiariesatCost,annualInvestmentProperties,trailingInvestmentProperties,annualGoodwillAndOtherIntangibleAssets,trailingGoodwillAndOtherIntangibleAssets,annualOtherIntangibleAssets,trailingOtherIntangibleAssets,annualGoodwill,trailingGoodwill,annualNetPPE,trailingNetPPE,annualAccumulatedDepreciation,trailingAccumulatedDepreciation,annualGrossPPE,trailingGrossPPE,annualLeases,trailingLeases,annualConstructionInProgress,trailingConstructionInProgress,annualOtherProperties,trailingOtherProperties,annualMachineryFurnitureEquipment,trailingMachineryFurnitureEquipment,annualBuildingsAndImprovements,trailingBuildingsAndImprovements,annualLandAndImprovements,trailingLandAndImprovements,annualProperties,trailingProperties,annualCurrentAssets,trailingCurrentAssets,annualOtherCurrentAssets,trailingOtherCurrentAssets,annualHedgingAssetsCurrent,trailingHedgingAssetsCurrent,annualAssetsHeldForSaleCurrent,trailingAssetsHeldForSaleCurrent,annualCurrentDeferredAssets,trailingCurrentDeferredAssets,annualCurrentDeferredTaxesAssets,trailingCurrentDeferredTaxesAssets,annualRestrictedCash,trailingRestrictedCash,annualPrepaidAssets,trailingPrepaidAssets,annualInventory,trailingInventory,annualInventoriesAdjustmentsAllowances,trailingInventoriesAdjustmentsAllowances,annualOtherInventories,trailingOtherInventories,annualFinishedGoods,trailingFinishedGoods,annualWorkInProcess,trailingWorkInProcess,annualRawMaterials,trailingRawMaterials,annualReceivables,trailingReceivables,annualReceivablesAdjustmentsAllowances,trailingReceivablesAdjustmentsAllowances,annualOtherReceivables,trailingOtherReceivables,annualDuefromRelatedPartiesCurrent,trailingDuefromRelatedPartiesCurrent,annualTaxesReceivable,trailingTaxesReceivable,annualAccruedInterestReceivable,trailingAccruedInterestReceivable,annualNotesReceivable,trailingNotesReceivable,annualLoansReceivable,trailingLoansReceivable,annualAccountsReceivable,trailingAccountsReceivable,annualAllowanceForDoubtfulAccountsReceivable,trailingAllowanceForDoubtfulAccountsReceivable,annualGrossAccountsReceivable,trailingGrossAccountsReceivable,annualCashCashEquivalentsAndShortTermInvestments,trailingCashCashEquivalentsAndShortTermInvestments,annualOtherShortTermInvestments,trailingOtherShortTermInvestments,annualCashAndCashEquivalents,trailingCashAndCashEquivalents,annualCashEquivalents,trailingCashEquivalents,annualCashFinancial,trailingCashFinancial&merge=false&period1=493590046&period2=1613490868 # https://query1.finance.yahoo.com/v8/finance/chart/MSFT?symbol=MSFT&period1=1550725200&period2=1613491890&useYfid=true&interval=1d&events=div # https://query1.finance.yahoo.com/v10/finance/quoteSummary/MSFT?formatted=true&crumb=2M1BZy1YB7f&lang=en-US&region=US&modules=incomeStatementHistory,cashflowStatementHistory,balanceSheetHistory,incomeStatementHistoryQuarterly,cashflowStatementHistoryQuarterly,balanceSheetHistoryQuarterly&corsDomain=finance.yahoo.com
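To see the fix in isolation, here is a minimal sketch with synthetic data (not the Yahoo Finance download) showing how tz_localize(None) strips the timezone from a DataFrame's DatetimeIndex so it can be concatenated with tz-naive frames:

import pandas as pd

# Small frame with a timezone-aware DatetimeIndex, similar to what yfinance returns.
idx_aware = pd.date_range("2022-01-01", periods=3, freq="D", tz="America/New_York")
df = pd.DataFrame({"Adj Close": [1.0, 2.0, 3.0]}, index=idx_aware)

# Drop the timezone information but keep the local wall-clock times.
df_naive = df.tz_localize(None)
print(df_naive.index.tz)  # None -> safe to concat/join with tz-naive DatetimeIndexes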
3
5
74,616,238
2022-11-29
https://stackoverflow.com/questions/74616238/seaborn-object-interface-custom-title-and-facet-with-2-or-more-variables
I am using the seaborn objects interface and I want to go a little further in graph customization. Here is a case with a facet plot on 2 observations: df = pd.DataFrame( np.array([['A','B','A','B'],['odd','odd','even','even'], [1,2,1,2], [2,4,1.5,3],]).T , columns= ['kind','face','Xs','Ys'] ) ( so.Plot(df,x='Xs' , y='Ys') .facet("kind","face") .add(so.Dot()) .label(title= 'kind :{}'.format) ) As you can see, the subplot titles display "kind: | kind: ". I want to display "kind: | face: ". Obviously I tried title= 'kind :{}, face :{}'.format but it threw an error... I discovered that .label(title= 'kind :{}'.format) iterates over the facet observation inputs, and made a quick and dirty workaround. df = pd.DataFrame( np.array([['A','B','A','B'],['odd','odd','even','even'], [1,2,1,2], [2,4,1.5,3],]).T , columns= ['kind','face','Xs','Ys'] ) def multiObs_facet_title(t:tuple) -> str: if t in ['A','B']: return 'kind: {}'.format(t) else: return 'face: {}'.format(t) ( so.Plot(df,x='Xs' , y='Ys') .facet("kind","face") .add(so.Dot()) .label(title= multiObs_facet_title) ) I wonder if there is a better way to do this without having to check the value of the observations?
It's probably not what you are looking for. But with a 2-D Seaborn facet (columns and rows), it doesn't seem too bad to do it in pandas if you want to avoid defining a custom function. df = pd.DataFrame( np.array([['A','B','A','B'],['odd','odd','even','even'], [1,2,1,2], [2,4,1.5,3],]).T , columns= ['kind','face','Xs','Ys'] ) df["kind_1"] = "kind: "+ df["kind"] df["face_1"] = "face: "+ df["face"] ( so.Plot(df,x='Xs' , y='Ys') .facet("kind_1","face_1") .add(so.Dot()) )
3
1
74,550,915
2022-11-23
https://stackoverflow.com/questions/74550915/pulling-real-time-data-and-update-in-streamlit-and-asyncio
The goal is to pull real-time data in the background (say every 5 seconds) and pull it into the dashboard when needed. Here is my code. It kind of works, but there are two issues I am seeing: 1. If I move st.write("TESTING!") to the end, it will never get executed because of the while loop. Is there a way to improve this? I can imagine as the dashboard grows, there will be multiple pages/tables etc., so this won't give much flexibility. 2. The return px line in the async function: I am not very comfortable with it because I got it right via trial and error. Sorry for being such a newbie, but if there are better ways to do it, I would really appreciate it. Thank you! import asyncio import streamlit as st import numpy as np st.set_page_config(layout="wide") async def data_generator(test): while True: with test: px = np.random.randn(5, 1) await asyncio.sleep(1) return px test = st.empty() st.write("TESTING!") with test: while True: px = asyncio.run(data_generator(test)) st.write(px[0])
From my experience, the trick to using asyncio is to create your layout ahead of time, using empty widgets where you need to display async info. The async coroutine would take in these empty slots and fill them out. This should help you create a more complex application. Then the asyncio.run command can become the last streamlit action taken. Any streamlit commands after this wouldn't be processed, as you have observed. I would also recommend to arrange any input widgets outside of the async function, during the initial layout, and then send in the widget output for processing. Of course you could draw your input widgets inside the function, but the layout then might become tricky. If you still want to have your input widgets inside your async function, you'd definitely have to put them outside of the while loop, otherwise you would get duplicated widget error. (You might try to overcome this by creating new widgets all the time, but then the input widgets would be "reset" and interaction isn't achieved, let alone possible memory issue.) Here's a complete example of what I mean: import asyncio import pandas as pd import plotly.express as px import streamlit as st from datetime import datetime CHOICES = [1, 2, 3] def main(): print('\nmain...') # layout your app beforehand, with st.empty # for the widgets that the async function would populate graph = st.empty() radio = st.radio('Choose', CHOICES, horizontal=True) table = st.empty() try: # async run the draw function, sending in all the # widgets it needs to use/populate asyncio.run(draw_async(radio, graph, table)) except Exception as e: print(f'error...{type(e)}') raise finally: # some additional code to handle user clicking stop print('finally') # this doesn't actually get called, I think :( table.write('User clicked stop!') async def draw_async(choice, graph, table): # must send in all the streamlit widgets that # this fn would interact with... # this could possibly work, but layout is tricky # choice2 = st.radio('Choose 2', CHOICES) while True: # this would not work because you'd be creating duplicated # radio widgets # choice3 = st.radio('Choose 3', CHOICES) timestamp = datetime.now() sec = timestamp.second graph_df = pd.DataFrame({ 'x': [0, 1, 2], 'y': [max(CHOICES), choice, choice*sec/60.0], 'color': ['max', 'current', 'ticking'] }) df = pd.DataFrame({ 'choice': CHOICES, 'current_choice': len(CHOICES)*[choice], 'time': len(CHOICES)*[timestamp] }) graph.plotly_chart(px.bar(graph_df, x='x', y='y', color='color')) table.dataframe(df) _ = await asyncio.sleep(1) if __name__ == '__main__': main()
10
8
74,598,670
2022-11-28
https://stackoverflow.com/questions/74598670/how-to-get-complete-fundamental-f0-frequency-extraction-with-python-lib-libros
I am running librosa.pyin on a speech audio clip, and it doesn't seem to be extracting all the fundamentals (f0) from the first part of the recording. librosa documentation: https://librosa.org/doc/main/generated/librosa.pyin.html sr: 22050 fmin=librosa.note_to_hz('C0') fmax=librosa.note_to_hz('C7') f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=fmin, fmax=fmax, pad_mode='constant', n_thresholds = 10, max_transition_rate = 100, sr=sr) Raw audio: Spectrogram with fundamental tones, onsets, and onset strength, but the first part doesn't have any fundamental tones extracted. Link to audio file: https://jasonmhead.com/wp-content/uploads/2022/12/quick_fox.wav times = librosa.times_like(o_env, sr=sr) onset_frames = librosa.onset.onset_detect(onset_envelope=o_env, sr=sr) Another view with the power spectrogram: I tried compressing the audio, but that didn't seem to work. Any suggestions on what parameters I can adjust, or audio pre-processing that can be done, to have fundamental tones extracted from all words? What types of things affect fundamental tone extraction success?
TL;DR It seems like it's all about the parameter tweaking. Here are some results that I've got playing with the example; it would be better to open it in a separate tab: The bottom plot shows a phonetic transcription (well, kinda) of the example file. Some conclusions I've made for myself: There are some words/parts of a word that are difficult to hear: they have low energy and when listening to them alone it doesn't sound like a word, but only when coupled with nearby segments ("the" is very short and sounds more like "z"). Some words are divided into parts (e.g. "fo"-"x"). I don't really know what the F0 frequency should be when someone pronounces "x". I'm not even sure that there is any difference in pronunciation between people (otherwise how do cats know that we are calling them all over the world). A two-second period is a pretty short amount of time. Some experiments: If we want to see a smooth F0 graph, going with n_threshold=1 will do the thing. It's a bad idea. In the "voiced_flag" part of the graphs, we see that for n_threshold=1 it decides that each frame was voiced, counting every frequency change as activity. Changing the sample rate affects the ability to retrieve F0 (in the rightmost graph, the sample rate was halved); as previously mentioned, n_threshold=1 doesn't count, but we also see that n_threshold=100 (which is the default value for pyin) doesn't produce any F0 at all. The top-most left (max_transition_rate=200) and middle (max_transition_rate=100) graphs show the extracted F0 for n_threshold=2 and n_threshold=100. Actually it degrades pretty fast, and n_threshold=3 looks almost the same as n_threshold=100. I find the lower part, the voiced_flag decision plot, has high importance when combined with the phonetic transcript. In the middle graph, default parameters recognise "qui", "jum", "over", "la". If we want F0 for other phonemes, n_threshold=2 should do the work. Setting n_threshold=3+ gives F0s in the same range. Increasing the max_transition_rate adds noise and reluctance to declare that the voice segment is over. Those are my thoughts. Hope it helps.
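As a rough sketch of the parameter tweaks discussed above (the file path is taken from the question, and the exact values are just the ones from these experiments, not a definitive recipe):

import librosa

# Load the example clip from the question.
y, sr = librosa.load("quick_fox.wav", sr=22050)

f0, voiced_flag, voiced_probs = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C0"),
    fmax=librosa.note_to_hz("C7"),
    sr=sr,
    n_thresholds=2,           # much more permissive voicing decision than the default 100
    max_transition_rate=100,  # allows faster pitch transitions than the default
)

print(f0[voiced_flag][:10])   # F0 estimates for the frames flagged as voiced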
3
4
74,580,811
2022-11-26
https://stackoverflow.com/questions/74580811/circularity-calculation-with-perimeter-area-of-a-simple-circle
Circularity signifies the comparability of the shape to a circle. A measure of circularity is the ratio of the shape area to the area of the circle having an identical perimeter (we denote it as Circle Area), as represented in the equation below. Sample Circularity = Sample Area / Circle Area Let the perimeter of the shape be P, so P = 2 * pi * r, then P^2 = 4 * pi^2 * r^2 = 4 * pi * (pi * r^2) = 4 * pi * Circle Area. Thus Circle Area = Sample Perimeter^2 / (4 * pi), which implies Sample Circularity = (4 * pi * Sample Area) / (Sample Perimeter^2). So with the help of math, there is no need to find an algorithm to calculate a fitted circle or draw it correctly over the shape, etc. This statistic equals 1 for a circular object and less than 1 for an object that departs from circularity, except that it is relatively insensitive to irregular boundaries. OK, that's fine, but... In Python I try to calculate circularity for a simple circle but I always get 1.11. My Python approach is: import cv2 import math Gray_image = cv2.imread(Input_Path, cv2.IMREAD_GRAYSCALE) cnt , her = cv2.findContours(Gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE) Perimeter = cv2.arcLength(cnt[0], True) Area = cv2.contourArea(cnt[0]) Circularity = math.pow(Perimeter, 2) / (4 * math.pi * Area) print(round(Circularity , 2)) If I use Perimeter = len(cnt[0]) then the answer is 0.81, which is incorrect again. Thank you for taking the time to answer. To draw a circle, use the following command: import cv2 import numpy as np Fill_Circle = np.zeros((1000, 1000, 3)) cv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1) cv2.imwrite(Path_to_Save, Fill_Circle)
As I mentioned in this recent answer to a related question, OpenCV's perimeter estimate is not good enough to compute the circularity feature. OpenCV computes the perimeter by adding up all the distances between vertices of the polygon built from the edge pixels of the image. This length is typically larger than the actual perimeter of the actual object imaged. This blog post of mine describes the problem well, and provides a better way to estimate the perimeter of an object from a binary image. This better method is implemented (among other places) in DIPlib, in the function dip.MeasurementTool.Measure(), as the feature "Perimeter". [Disclosure: I'm an author of DIPlib]. The feature "Roundness" implements what you refer to as circularity here (these feature names are used interchangeably in the literature). There is a different feature referred to as "Circularity" in DIPlib, which does not depend on the perimeter and typically is more precise if the shape is close to a circle. This is how you would use that function: import diplib as dip import cv2 import numpy as np Fill_Circle = np.zeros((1000, 1000, 3)) cv2.circle(Fill_Circle, (500, 500), 450, (255, 255, 255), -1) labels = dip.Label(Fill_Circle[:, :, 0] > 0) msr = dip.MeasurementTool.Measure(labels, features=["Perimeter", "Size", "Roundness", "Circularity"]) print(msr) Circularity = msr[1]["Roundness"][0] For your circle, I see: area = 636121.0 perimeter = 2829.27 roundness = 0.9986187 (this is what you refer to as circularity) circularity = 0.0005368701 (closer to 0 means more like a circle)
3
4
74,601,283
2022-11-28
https://stackoverflow.com/questions/74601283/plotly-imshow-reversing-y-labels-reverses-the-image
I'd like to visualize a 20x20 matrix, where top left point is (-10, 9) and lower right point is (9, -10). So the x is increasing from left to right and y is decreasing from top to bottom. So my idea was to pass x labels as a list: [-10, -9 ... 9, 9] and y labels as [9, 8 ... -9, -10]. This worked as intended in seaborn (matplotlib), however doing so in plotly just reverses the image vertically. Here's the code: import numpy as np import plotly.express as px img = np.arange(20**2).reshape((20, 20)) fig = px.imshow(img, x=list(range(-10, 10)), y=list(range(-10, 10)), ) fig.show() import numpy as np import plotly.express as px img = np.arange(20**2).reshape((20, 20)) fig = px.imshow(img, x=list(range(-10, 10)), y=list(reversed(range(-10, 10))), ) fig.show() Why is this happening and how can I fix it? EDIT: Adding seaborn code to see the difference. As you can see, reversing the range for labels only changes the labels and has no effect on the image whatsoever, this is the effect I want in plotly. import seaborn as sns import numpy as np img = np.arange(20**2).reshape((20, 20)) sns.heatmap(img, xticklabels=list(range(-10, 10)), yticklabels=list(range(-10, 10)) ) import seaborn as sns import numpy as np img = np.arange(20**2).reshape((20, 20)) sns.heatmap(img, xticklabels=list(range(-10, 10)), yticklabels=list(reversed(range(-10, 10))) )
You should use origin : origin (str, 'upper' or 'lower' (default 'upper')) – position of the [0, 0] pixel of the image array, in the upper left or lower left corner. The convention β€˜upper’ is typically used for matrices and images. import numpy as np import plotly.express as px img = np.arange(20**2).reshape((20, 20)) x = list(range(-10, 10)) y = list(reversed(range(-10, 10))) fig = px.imshow(img, x=x, y=y, origin='lower') fig.show() Why reversing the labels doesn't work as expected ? For single-channel arrays (2d, not rgb), px.imshow() builds a heatmap trace with by default the parameter autorange='reversed' on the yaxis, as "the convention β€˜upper’ is typically used for matrices". This takes form with the following line : autorange = True if origin == "lower" else "reversed" The thing is that the autorange feature forces the orientation of the axis : when "reversed", the y's should be increasing from top to bottom (increasing vs decreasing inferred according to input data, not the same behavior with non-numeric strings), which means the yaxis is flipped iff the range of y's is not already reversed, in which case the whole is flipped vertically (any point (x, y) keeps its value). This is what you started with. Now, if you reverse the labels in y, it actually reverses the yaxis coordinates against the data (values should decrease as y increases), but by doing this, since the y range is now already inverted, the autorange doesn't have to flip the image, so it results in the yaxis being reversed (expected) and the image being not flipped, hence compared to what you started with, the image goes from flipped to unflipped (unexpected). Alternatives In this situation, to avoid any confusion, an alternative to the solution above would be to define a specific range : fig = px.imshow(img, x=x, y=y) fig.update_yaxes(autorange=False, range=[-10.5, 9.5]) Or to have the data reoriented beforehand, in which case there is no need to reverse the y's (and even less to specify tickvals/ticktext, unless by preference) : img = img.tolist()[::-1] # reverse data x = list(range(-10, 10)) y = list(range(-10, 10)) # instead of y fig = px.imshow(img, x=x, y=y, origin='lower') Output (same for the 3 code snippets)
10
11
74,612,555
2022-11-29
https://stackoverflow.com/questions/74612555/django-login-payload-visible-in-plaintext-in-chrome-devtools
This is weird. I have created login functions so many times but never noticed this thing. When we provide a username and password in a form and submit it, and it goes to the server-side as a Payload like this, I can see the data in the Chrome DevTools network tab: csrfmiddlewaretoken: mHjXdIDo50tfygxZualuxaCBBdKboeK2R89scsxyfUxm22iFsMHY2xKtxC9uQNni username: testuser password: 'dummy pass' #same as i typed(no encryption) I got this in the case of incorrect creds because the login failed and it wouldn't redirect to the other page. But then I tried with valid creds and I checked the Preserve log box in the Chrome network tab. Then I checked there and I could still see the exact entered Username and password. At first I thought I might have missed some encryption logic or something. But then I tried with multiple reputed tech companies' login functionality and I could still see creds in the payload. Isn't this wrong? It's supposed to be in the encrypted format right? Models.py from django.contrib.auth.models import User class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) html <form method="POST" class="needs-validation mb-4" novalidate> {% csrf_token %} <div class="form-outline mb-4"> <input type="email" id="txt_email" class="form-control" placeholder="Username or email address" required /> </div> <div class="form-outline mb-4"> <input type="password" id="txt_password" class="form-control" placeholder="Password" required /> </div> <div class="d-grid gap-2"> <button class="btn btn-primary fa-lg gradient-custom-2 login_btn" type="submit" id="btn_login"><i class="fa fa-sign-in" aria-hidden="true"> </i> Sign in</button> <div class="alert alert-danger" id="lbl_error" role="alert" style="display: none;"> </div> </div> </form> login view def authcheck(request): try: if request.method == "POST": username = request.POST["username"] password = request.POST["password"] user = authenticate(username=username, password=password) if user is not None: check_is_partner = Profile.objects.filter(user__username=username, is_partner=True).values("password_reset").first() if check_is_partner and check_is_partner['password_reset'] is True: return JsonResponse(({'code':0 ,'username':username}), content_type="json") if check_ip_restricted(user.profile.ip_restriction, request): return HttpResponse("ok_ipr", content_type="json") login(request, user) session = request.session session["username"] = username session["userid"] = user.id session.save() if check_is_partner: return HttpResponse("1", content_type="json") else: return HttpResponse("ok", content_type="json") else: return HttpResponse("nok", content_type="json") except Exception: return HttpResponse("error", content_type="json")
It's supposed to be in the encrypted format right? No. What you're seeing in Chrome DevTools is the username and password before they get encrypted. If you were to run tcpdump or Wireshark when you make the request, you'd see that it is encrypted over the network. In order for the data to be usable by anyone, it has to be unencrypted/decrypted at some point. For example, you can also see the response data (status code, headers, payload) in Chrome DevTools, which is encrypted over the network, but it's shown to you after it's been decrypted. Here's a similar answer to a similar question. EDIT: This is all assuming you're on a site using https. If you're using plain ole http, anyone sniffing the network can see your username + password in plaintext.
4
7
74,605,877
2022-11-28
https://stackoverflow.com/questions/74605877/how-do-i-get-the-position-of-my-python-flet-window
I've been working with the python flet package for a while and I'd like to know how to get my window's position. Does anyone know anything? I googled but found nothing.
I haven't used this package before, but looking at the docs it seems that window_top and window_left on the root Page instance are what you're after (assuming this is a desktop app). See relevant docs here: https://flet.dev/docs/controls/page#window_top.
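A minimal sketch of reading those properties in a desktop app (assuming a flet version where window_top and window_left are exposed directly on Page, as in the linked docs):

import flet as ft

def main(page: ft.Page):
    page.add(ft.Text("Window position demo"))
    # Current position of the native window, in pixels from the top-left of the screen.
    print("top:", page.window_top, "left:", page.window_left)

ft.app(target=main)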
3
2
74,546,855
2022-11-23
https://stackoverflow.com/questions/74546855/mock-flask-sqlalchemy-query
I'm getting a "module not found" error when trying to create a test function to mock the get method of the SQLAlchemy query (with pytest). Example: from mock import patch @patch('flask_sqlalchemy._QueryProperty.__get__') def test_get_all(queryMock): assert True When running pytest I get an error: ModuleNotFoundError: No module named 'flask_sqlalchemy._QueryProperty' I'm using the 3.0.2 version of Flask-SQLAlchemy. So I just changed to version 2.5.1 and it worked. However, I think it would be good to use the latest version. Is there any other way to mock the SQLAlchemy query that works with the latest versions?
Use flask_sqlalchemy.model._QueryProperty.__get__ instead of flask_sqlalchemy._QueryProperty.__get__. This will resolve your issue as _QueryProperty class has been moved into model. from mock import patch @patch('flask_sqlalchemy.model._QueryProperty.__get__') def test_get_all(queryMock): assert True
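As a hedged usage sketch (the model and return values here are illustrative, not from the question): once the query property is patched, Model.query resolves to the mock, so chained query calls can be stubbed.

from unittest.mock import patch

@patch('flask_sqlalchemy.model._QueryProperty.__get__')
def test_get_all_returns_stubbed_rows(query_mock):
    # Whatever the code under test reads from SomeModel.query is this mock,
    # so configure the calls it will make, e.g. .all()
    query_mock.return_value.all.return_value = ["fake-row"]
    rows = query_mock.return_value.all()  # stands in for SomeModel.query.all()
    assert rows == ["fake-row"]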
3
5
74,606,902
2022-11-28
https://stackoverflow.com/questions/74606902/django-sending-post-request-using-a-nested-serializer-with-many-to-many-relati
I'm fairly new to Django and I'm trying to make a POST request with nested objects. This is the data that I'm sending: { "id":null, "deleted":false, "publishedOn":2022-11-28, "decoratedThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "rawThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "videoUrl":"https://www.youtube.com/watch?v=jNQXAC9IVRw", "title":"Video with tags", "duration":120, "visibility":1, "tags":[ { "id":null, "videoId":null, "videoTagId":42 } ] } Here's a brief diagram of the relationship of these objects on the database I want to create a video and pass in an array of nested data so that I can create multiple tags that can be associated to a video in a many to many relationship. Because of that, the 'id' field of the video will be null and the 'videoId' inside of the tag object will also be null when the data is being sent. However I keep getting a 400 (Bad request) error saying {tags: [{videoId: [This field may not be null.]}]} I'm trying to override the create method inside VideoManageSerializer so that I can extract the tags and after creating the video I can use that video to create those tags. I don't think I'm even getting to the create method part inside VideoManageSerializer as the video is not created on the database. I've been stuck on this issue for a few days. If anybody could point me in the right direction I would really appreciate it. I'm using the following serializers: class VideoManageSerializer(serializers.ModelSerializer): tags = VideoVideoTagSerializer(many=True) class Meta: model = Video fields = ('__all__') # POST def create(self, validated_data): tags = validated_data.pop('tags') video_instance = Video.objects.create(**validated_data) for tag in tags: VideoVideoTag.objects.create(video=video_instance, **tag) return video_instance class VideoVideoTagSerializer(serializers.ModelSerializer): class Meta: model = VideoVideoTag fields = ('__all__') This is the view which uses VideoManageSerializer class VideoManageViewSet(GenericViewSet, # generic view functionality CreateModelMixin, # handles POSTs RetrieveModelMixin, # handles GETs UpdateModelMixin, # handles PUTs and PATCHes ListModelMixin): serializer_class = VideoManageSerializer queryset = Video.objects.all() These are all the models that I'm using: class Video(models.Model): decoratedThumbnail = models.CharField(max_length=500, blank=True, null=True) rawThumbnail = models.CharField(max_length=500, blank=True, null=True) videoUrl = models.CharField(max_length=500, blank=True, null=True) title = models.CharField(max_length=255, blank=True, null=True) duration = models.PositiveIntegerField() visibility = models.ForeignKey(VisibilityType, models.DO_NOTHING, related_name='visibility') publishedOn = models.DateField() deleted = models.BooleanField(default=0) class Meta: managed = True db_table = 'video' class VideoTag(models.Model): name = models.CharField(max_length=100, blank=True, null=True) deleted = models.BooleanField(default=0) class Meta: managed = True db_table = 'video_tag' class VideoVideoTag(models.Model): videoId = models.ForeignKey(Video, models.DO_NOTHING, related_name='tags', db_column='videoId') videoTagId = models.ForeignKey(VideoTag, models.DO_NOTHING, related_name='video_tag', db_column='videoTagId') class Meta: managed = True db_table = 'video_video_tag'
I would consider changing the serializer as below, class VideoManageSerializer(serializers.ModelSerializer): video_tag_id = serializers.PrimaryKeyRelatedField( many=True, queryset=VideoTag.objects.all(), write_only=True, ) tags = VideoVideoTagSerializer(many=True, read_only=True) class Meta: model = Video fields = "__all__" # POST def create(self, validated_data): tags = validated_data.pop("video_tag_id") video_instance = Video.objects.create(**validated_data) for tag in tags: VideoVideoTag.objects.create(videoId=video_instance, videoTagId=tag) return video_instance Things that have changed - Added a new write_only field named video_tag_id that supposed to accept "list of PKs of VideoTag". Changed the tags field to read_only so that it won't take part in the validation process, but you'll get the "nested serialized output". Changed create(...) method to cooperate with the new changes. The POST payload has been changed as below (note that tags has been removed and video_tag_id has been introduced) { "deleted":false, "publishedOn":"2022-11-28", "decoratedThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "rawThumbnail":"https://t3.ftcdn.net/jpg/02/48/42/64/360_F_248426448_NVKLywWqArG2ADUxDq6QprtIzsF82dMF.jpg", "videoUrl":"https://www.youtube.com/watch?v=jNQXAC9IVRw", "title":"Video with tags", "duration":120, "visibility":1, "video_tag_id":[1,2,3] } Refs DRF: Simple foreign key assignment with nested serializer? DRF - write_only DRF - read_only
3
4
74,619,476
2022-11-29
https://stackoverflow.com/questions/74619476/how-to-compile-tkinter-as-an-executable-for-macos
I'm trying to compile a Tkinter app as an executable for MacOs. I tried to use py2app and pyinstaller. I almost succeed using py2app, but it returns the following error: Traceback The Info.plist file must have a PyRuntimeLocations array containing string values for preferred Python runtime locations. These strings should be "otool -L" style mach ids; "@executable_stub" and "~" prefixes will be translated accordingly. This is how the setup.py looks like: from setuptools import setup APP = ['main.py'] DATA_FILES = ['config.json'] OPTIONS = { 'argv_emulation': True } setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) And this is the directory structure: -modules/---__init.py__ | | | -- gui_module.py | | | -- scraper_module.py | | | -- app.ico | -config.json | -countries_list.txt | -main.py | -requirements.txt | -setup.py I'm happy to share more details and the files if you need them.
The problem was that you need to give an executable path for the python framework you have on your MacOs. So I modify the setup.py setup.py from setuptools import setup class CONFIG: VERSION = 'v1.0.1' platform = 'darwin-x86_64' executable_stub = '/opt/homebrew/Frameworks/Python.framework/Versions/3.10/lib/libpython3.10.dylib' # this is important, check where is your Python framework and get the `dylib` APP_NAME = f'your_app_{VERSION}_{platform}' APP = ['main.py'] DATA_FILES = [ 'config.json', 'countries_list.txt', ('modules', ['modules/app.ico']), # this modules are automatically added if you use __init__.py in your folder # ('modules', ['modules/scraper_module.py']), # ('modules', ['modules/gui_module.py']), ] OPTIONS = { 'argv_emulation': False, 'iconfile': 'modules/app.ico', 'plist': { 'CFBundleName': APP_NAME, 'CFBundleDisplayName': APP_NAME, 'CFBundleGetInfoString': APP_NAME, 'CFBundleVersion': VERSION, 'CFBundleShortVersionString': VERSION, 'PyRuntimeLocations': [ executable_stub, # also the executable can look like this: #'@executable_path/../Frameworks/libpython3.4m.dylib', ] } } def main(): setup( name=CONFIG.APP_NAME, app=CONFIG.APP, data_files=CONFIG.DATA_FILES, options={'py2app': CONFIG.OPTIONS}, setup_requires=['py2app'], maintainer='foo bar', author_email='[email protected]', ) if __name__ == '__main__': main() Then you need to run python3 setup.py py2app and now you can go and just double click on your_app_{VERSION}_{platform}.app. Recomendations by the py2app docs: Make sure not to use the -A flag Do not use --argv-emulation when the program uses a GUI toolkit (as Tkinter) py2app options docs
5
1
74,616,757
2022-11-29
https://stackoverflow.com/questions/74616757/why-does-defining-new-class-sometimes-call-the-init-function-of-objects-th
I'm trying to understand what actually happens when you declare a new class which inherits from a parent class in python. Here's a very simple code snippet: # inheritance.py class Foo(): def __init__(self, *args, **kwargs): print("Inside foo.__init__") print(args) print(kwargs) class Bar(Foo): pass print("complete") If I run this there are no errors and the output is as I would expect. ❯ python inheritance.py complete Here's a script with an obvious bug in it, I inherit from an instance of Foo() rather than the class Foo # inheritance.py class Foo(): def __init__(self, *args, **kwargs): print("Inside foo.__init__") print(f"{args=}") print(f"{kwargs=}\n") foo = Foo() class Bar(foo): <---- This is wrong pass print("complete") This code runs without crashing however I don't understand why Foo.__init__() is called twice. Here's the output: ❯ python inheritance.py Inside foo.__init__ <--- this one I expected args=() kwargs={} Inside foo.__init__ <--- What is going on here...? args=('Bar', (<__main__.Foo object at 0x10f190b10>,), {'__module__': '__main__', '__qualname__': 'Bar'}) kwargs={} complete On line 8 I instantiate Foo() with no arguments which is what I expected. However on line 9 Foo.__init__ is called with the arguments that would normally be passed to type() to generate a new class. I can see vaguely what's happening: class Bar(...) is code that generates a new class so at some point type("Bar", ...) needs to be called but: How does this actually happen? Why does inheriting from an instance of Foo() cause Foo.__init__("Bar", <tuple>, <dict>) to be called? Why isn't type("Bar", <tuple>, <dict>) called instead?
Python is using foo to determine the metaclass to use for Bar. No explicit metaclass is given, so the "most derived metaclass" must be determined. The metaclass of a base class is its type; usually, that's type itself. But in this case, the type of the only base "class", foo, is Foo, so that becomes the most derived metaclass. And so, class Bar(foo): pass is being treated as class Bar(metaclass=Foo): pass which means that Bar is created by calling Foo: Bar = Foo('Bar', (foo,), {}) Note that Bar is now an instance of Foo, not a type. Yes, a class statement does not necessarily create a class.
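A short sketch illustrating this, trimmed down from the question's code: Bar ends up being an instance of Foo rather than a class.

class Foo:
    def __init__(self, *args, **kwargs):
        print("Foo.__init__ called with", args, kwargs)

foo = Foo()        # first call: Foo()

class Bar(foo):    # treated as metaclass=Foo, i.e. Foo('Bar', (foo,), {...})
    pass

print(type(Bar))              # <class '__main__.Foo'>
print(isinstance(Bar, Foo))   # True
print(isinstance(Bar, type))  # False -- no class was actually created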
4
6
74,608,872
2022-11-29
https://stackoverflow.com/questions/74608872/error-failure-while-executing-cp-pr-private-tmp-d20221129-9397-882a6m-ca-ce
When i install python by brew, it shows error: cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied cp: utimensat: /usr/local/Cellar/ca-certificates/.: Permission denied Error: Failure while executing; `cp -pR /private/tmp/d20221129-9397-882a6m/ca-certificates/. /usr/local/Cellar/ca-certificates` exited with 1. Here's the output: cp: /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied cp: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/.brew: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory cp: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew/ca-certificates.rb: No such file or directory cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/.brew: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11/.brew: No such file or directory cp: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory cp: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share/ca-certificates: unable to copy extended attributes to /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory cp: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates/cacert.pem: No such file or directory cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share/ca-certificates: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11/share/ca-certificates: No such file or directory cp: utimensat: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11/share: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11/share: No such file or directory cp: utimensat: 
/usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory cp: chown: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory cp: chmod: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory cp: chflags: /usr/local/Cellar/ca-certificates/./2022-10-11: No such file or directory cp: /private/tmp/d20221129-9397-882a6m/ca-certificates/./2022-10-11: unable to copy ACL to /usr/local/Cellar/ca-certificates/./2022-10-11: Permission denied cp: utimensat: /usr/local/Cellar/ca-certificates/.: Permission denied How can I fix it and install Python with brew? Before that, I had installed node and yarn with brew. I have tried uninstalling and reinstalling brew, but it doesn't work. Thank you!
I had the same issue and managed to fix it the following way: I saw that the upgrade script tried to copy files from /private/tmp/d20221130-21318-e53mkn/ca-certificates/./2022-10-11 to /usr/local/Cellar/ca-certificates/./2022-10-11 and got a Permission denied, meaning I (the current macOS user) could not edit or create files there. So I went to /usr/local/Cellar/ and saw that my user was not the owner of the folder ca-certificates and several others. So I just changed the owner like this: cd /usr/local/Cellar/ sudo chown -R REPLACE_WITH_YOUR_USERNAME:admin * And this fixed the issue for me. A simple brew upgrade upgraded all my packages after that. Hope this helps!
4
6
74,591,919
2022-11-27
https://stackoverflow.com/questions/74591919/how-to-catch-segfault-in-python-as-exception
Sometimes Python not only throws an exception but also segfaults. Over many years of experience with Python I have seen many segfaults; half of them were inside binary modules (C libraries, i.e. .so/.pyd files), and half were inside the CPython binary itself. When a segfault occurs, the whole Python program finishes with a crash dump (or silently). My question is: if a segfault happens in some block of code or thread, is there any chance to catch it as a regular Exception through except, and thus prevent the whole program from crashing? It is known that you can use faulthandler, for example through python -q -X faulthandler. It then creates the following dump on a segfault: >>> import ctypes >>> ctypes.string_at(0) Fatal Python error: Segmentation fault Current thread 0x00007fb899f39700 (most recent call first): File "/home/python/cpython/Lib/ctypes/__init__.py", line 486 in string_at File "<stdin>", line 1 in <module> Segmentation fault But this dump above terminates the program entirely. Instead I want to catch this traceback as some standard Exception. Another question is whether I can catch a segfault of Python code inside the C API PyRun_SimpleString() function?
The simplest way is to have a "parent" process which launches your app process, and check its exit value. -11 means the process received signal 11, which is SIGSEGV: import subprocess SEGFAULT_PROCESS_RETURNCODE = -11 segfaulting_code = "import ctypes ; ctypes.string_at(0)" # https://codegolf.stackexchange.com/a/4694/115779 try: subprocess.run(["python3", "-c", segfaulting_code], check=True) except subprocess.CalledProcessError as err: if err.returncode == SEGFAULT_PROCESS_RETURNCODE: print("probably segfaulted") else: print(f"crashed for other reasons: {err.returncode}") else: print("ok") EDIT: here is a reproducible example with a Python dump using the built-in faulthandler: # file: parent_handler.py import subprocess SEGFAULT_PROCESS_RETURNCODE = -11 try: subprocess.run(["python3", "-m", "dangerous_child.py"], check=True) except subprocess.CalledProcessError as err: if err.returncode == SEGFAULT_PROCESS_RETURNCODE: print("probably segfaulted") else: print(f"crashed for other reasons: {err.returncode}") else: print("ok") # file: dangerous_child.py import faulthandler import time faulthandler.enable() # by default will dump on sys.stderr, but can also print to a regular file def cause_segfault(): # https://codegolf.stackexchange.com/a/4694/115779 import ctypes ctypes.string_at(0) i = 0 while True: print("everything is fine ...") time.sleep(1) i += 1 if i == 5: print("going to segfault!") cause_segfault() everything is fine ... everything is fine ... everything is fine ... everything is fine ... everything is fine ... going to segfault! Fatal Python error: Segmentation fault Current thread 0x00007f7a9ab35740 (most recent call first): File "/usr/lib/python3.8/ctypes/__init__.py", line 514 in string_at File "/home/stack_overflow/dangerous_child.py", line 9 in cause_segfault File "/home/stack_overflow/dangerous_child.py", line 19 in <module> File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 848 in exec_module File "<frozen importlib._bootstrap>", line 671 in _load_unlocked File "<frozen importlib._bootstrap>", line 975 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 991 in _find_and_load File "/usr/lib/python3.8/runpy.py", line 111 in _get_module_details File "/usr/lib/python3.8/runpy.py", line 185 in _run_module_as_main probably segfaulted (outputs from both processes got mixed in my terminal, but you can separate them as you like) That way you can pinpoint that the problem was caused by the Python code ctypes.string_at. But as Mark indicated in the comments, you should not trust this too much; if the program got killed, it is because it was doing bad things.
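A related technique, not part of the answer above but following the same idea, is to isolate only the risky call in a child process with the standard multiprocessing module and translate a negative exit code back into an ordinary exception in the parent. This is a rough sketch: RiskyCrash and run_isolated are made-up names, it does not propagate return values, and signal-based exit codes are POSIX behaviour, so treat it as an illustration under those assumptions rather than a drop-in solution.

import multiprocessing as mp
import signal

class RiskyCrash(Exception):
    """Hypothetical exception raised in the parent when the child dies from a signal."""

def run_isolated(target, *args):
    # Run target(*args) in a separate process so a segfault cannot kill the parent.
    proc = mp.Process(target=target, args=args)
    proc.start()
    proc.join()
    if proc.exitcode is not None and proc.exitcode < 0:
        # Negative exit codes mean "killed by signal -exitcode" on POSIX.
        raise RiskyCrash(f"child died with {signal.Signals(-proc.exitcode).name}")

def cause_segfault():
    import ctypes
    ctypes.string_at(0)  # known way to crash CPython

if __name__ == "__main__":
    try:
        run_isolated(cause_segfault)
    except RiskyCrash as err:
        print("caught as a regular exception:", err)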
4
8
74,605,279
2022-11-28
https://stackoverflow.com/questions/74605279/python-3-11-worse-optimized-than-3-10
I run this simple loop with Python 3.10.7 and 3.11.0 on Windows 10. import time a = 'a' start = time.time() for _ in range(1000000): a += 'a' end = time.time() print(a[:5], (end-start) * 1000) The older version executes in 187ms, Python 3.11 needs about 17000ms. Does 3.10 realize that only the first 5 chars of a are needed, whereas 3.11 executes the whole loop? I confirmed this performance difference on godbolt.
TL;DR: you should not use such a loop in any performance-critical code; use ''.join instead. The inefficient execution appears to be related to a regression during the bytecode generation in CPython 3.11 (and missing optimizations during the evaluation of the binary add operation on Unicode strings). General guidelines This is an antipattern. You should not write such code if you want this to be fast. This is described in PEP-8: Code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such). For example, do not rely on CPython’s efficient implementation of in-place string concatenation for statements in the form a += b or a = a + b. This optimization is fragile even in CPython (it only works for some types) and isn’t present at all in implementations that don’t use refcounting. In performance sensitive parts of the library, the ''.join() form should be used instead. This will ensure that concatenation occurs in linear time across various implementations. Indeed, other implementations like PyPy do not perform an efficient in-place string concatenation, for example. A new, bigger string is created for every iteration (since strings are immutable, the previous one may still be referenced, and PyPy does not use reference counting but a garbage collector). This results in a quadratic runtime as opposed to a linear runtime in CPython (at least in past implementations). Deep Analysis I can reproduce the problem on Windows 10 between the embedded (64-bit x86-64) version of CPython 3.10.8 and the one of 3.11.0: Timings: - CPython 3.10.8: 146.4 ms - CPython 3.11.0: 15186.8 ms It turns out the code has not particularly changed between CPython 3.10 and 3.11 when it comes to Unicode string appending. See for example PyUnicode_Append: 3.10 and 3.11. A low-level profiling analysis shows that nearly all the time is spent in one unnamed function called from another unnamed function called by PyUnicode_Concat (which is also left unmodified between CPython 3.10.8 and 3.11.0). This slow unnamed function contains a pretty small set of assembly instructions, and nearly all the time is spent in one unique x86-64 assembly instruction: rep movsb byte ptr [rdi], byte ptr [rsi]. This instruction is basically meant to copy a buffer pointed to by the rsi register into a buffer pointed to by the rdi register (the processor copies rcx bytes from the source buffer to the destination buffer and decrements the rcx register for each byte until it reaches 0). This information shows that the unnamed function is actually memcpy of the standard MSVC C runtime (i.e. CRT), which appears to be called by _copy_characters, itself called by _PyUnicode_FastCopyCharacters of PyUnicode_Concat (all of these functions still belong to the same file). However, these CPython functions are left unmodified between CPython 3.10.8 and 3.11.0. The non-negligible time spent in malloc/free (about 0.3 seconds) seems to indicate that a lot of new string objects are created -- certainly at least 1 per iteration -- matching the call to PyUnicode_New in the code of PyUnicode_Concat. All of this indicates that a new, bigger string is created and copied as described above. The thing is that calling PyUnicode_Concat is certainly the root of the performance issue here, and I think CPython 3.10.8 is faster because it calls PyUnicode_Append instead. Both calls are performed directly by the main big interpreter evaluation loop, and this loop is driven by the generated bytecode.
It turns out that the generated bytecode is different between the two versions, and this is the root of the performance issue. Indeed, CPython 3.10 generates an INPLACE_ADD bytecode instruction while CPython 3.11 generates a BINARY_OP bytecode instruction. Here is the bytecode for the loops in the two versions: CPython 3.10 loop: >> 28 FOR_ITER 6 (to 42) 30 STORE_NAME 4 (_) 6 32 LOAD_NAME 1 (a) 34 LOAD_CONST 2 ('a') 36 INPLACE_ADD <---------- 38 STORE_NAME 1 (a) 40 JUMP_ABSOLUTE 14 (to 28) CPython 3.11 loop: >> 66 FOR_ITER 7 (to 82) 68 STORE_NAME 4 (_) 6 70 LOAD_NAME 1 (a) 72 LOAD_CONST 2 ('a') 74 BINARY_OP 13 (+=) <---------- 78 STORE_NAME 1 (a) 80 JUMP_BACKWARD 8 (to 66) This change appears to come from this issue. The code of the main interpreter loop (see ceval.c) is different between the two CPython versions. Here is the code executed by the two versions: // In CPython 3.10.8 case TARGET(INPLACE_ADD): { PyObject *right = POP(); PyObject *left = TOP(); PyObject *sum; if (PyUnicode_CheckExact(left) && PyUnicode_CheckExact(right)) { sum = unicode_concatenate(tstate, left, right, f, next_instr); // <----- /* unicode_concatenate consumed the ref to left */ } else { sum = PyNumber_InPlaceAdd(left, right); Py_DECREF(left); } Py_DECREF(right); SET_TOP(sum); if (sum == NULL) goto error; DISPATCH(); } //---------------------------------------------------------------------------- // In CPython 3.11.0 TARGET(BINARY_OP_ADD_UNICODE) { assert(cframe.use_tracing == 0); PyObject *left = SECOND(); PyObject *right = TOP(); DEOPT_IF(!PyUnicode_CheckExact(left), BINARY_OP); DEOPT_IF(Py_TYPE(right) != Py_TYPE(left), BINARY_OP); STAT_INC(BINARY_OP, hit); PyObject *res = PyUnicode_Concat(left, right); // <----- STACK_SHRINK(1); SET_TOP(res); _Py_DECREF_SPECIALIZED(left, _PyUnicode_ExactDealloc); _Py_DECREF_SPECIALIZED(right, _PyUnicode_ExactDealloc); if (TOP() == NULL) { goto error; } JUMPBY(INLINE_CACHE_ENTRIES_BINARY_OP); DISPATCH(); } Note that unicode_concatenate calls PyUnicode_Append (and does some reference-counting checks first). In the end, CPython 3.10.8 calls PyUnicode_Append, which is fast (in-place), and CPython 3.11.0 calls PyUnicode_Concat, which is slow (out-of-place). It clearly looks like a regression to me. People in the comments reported having no performance issue on Linux. However, experimental tests show that a BINARY_OP instruction is also generated on Linux, and I cannot find so far any Linux-specific optimization regarding string concatenation. Thus, the difference between the platforms is pretty surprising. Update: towards a fix I have opened an issue about this, available here. One should note that putting the code in a function is significantly faster due to the variable being local (as pointed out by @Dennis in the comments). Related posts: How slow is Python's string concatenation vs. str.join? Python string 'join' is faster (?) than '+', but what's wrong here? Python string concatenation in for-loop in-place?
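To make the ''.join guideline from the top of this answer concrete, here is a small self-contained comparison; the helper names and iteration counts are arbitrary, and the absolute timings will vary across interpreters and machines, so read them only as an indication that the join version does not depend on the fragile in-place optimization.

import timeit

N = 100_000

def concat_in_loop():
    a = 'a'
    for _ in range(N):
        a += 'a'  # relies on CPython's fragile in-place concatenation
    return a

def concat_with_join():
    # Linear time on every Python implementation
    return 'a' + ''.join('a' for _ in range(N))

print("+= loop :", timeit.timeit(concat_in_loop, number=10))
print("''.join :", timeit.timeit(concat_with_join, number=10))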
25
28
74,619,283
2022-11-29
https://stackoverflow.com/questions/74619283/reference-parameter-in-figure-caption-with-quarto
Is there a way to reference a parameter in a Quarto figure or table caption? In the example below, I am able to reference my input parameter txt in a regular text block, but not in a figure caption. In the figure caption, only the raw text is displayed: --- title: "example" format: html params: txt: "example" --- ## I can reference it in text Here I can reference an input parameter: `r params$txt` ```{r} #| label: fig-example #| fig-cap: "Example: I cannot reference a parameter using `r params$txt` or params$txt." plot(1:10, 1:10) ```
Try with !expr: ```{r} #| label: fig-example #| fig-cap: !expr params$txt plot(1:10, 1:10) ``` If we need to add some text, either use paste or glue::glue: #| fig-cap: !expr glue::glue("This should be {params$txt}")
6
10
74,618,499
2022-11-29
https://stackoverflow.com/questions/74618499/is-there-an-easy-way-to-construct-a-pandas-dataframe-from-an-iterable-of-datacla
One can do that with dataclasses like so: from dataclasses import dataclass import pandas as pd @dataclass class MyDataClass: i: int s: str df = pd.DataFrame([MyDataClass("a", 1), MyDataClass("b", 2)]) That makes the DataFrame df with columns i and s as one would expect. Is there an easy way to do that with an attrs class? I can do it by iterating over the object's properties and constructing an object of a type like dict[str, list] ({"i": [1, 2], "s": ["a", "b"]} in this case) and constructing the DataFrame from that, but it would be nice to have support for attrs objects directly.
You can access the dictionary at the heart of a dataclass like so: a = MyDataClass("a", 1) a.__dict__ This outputs: {'i': 'a', 's': 1} Knowing this, if you have an iterable arr of MyDataClass instances, you can access the __dict__ attribute and construct a DataFrame: arr = [MyDataClass("a", 1), MyDataClass("b", 2)] df = pd.DataFrame([x.__dict__ for x in arr]) df outputs: i s 0 a 1 1 b 2 The limitation of this approach is that if the slots option is used, this will not work. Alternatively, it is possible to convert the data from a dataclass to a tuple or dictionary using dataclasses.astuple and dataclasses.asdict respectively. The data frame can also be constructed using either of the following: # using astuple df = pd.DataFrame( [dataclasses.astuple(x) for x in arr], columns=[f.name for f in dataclasses.fields(MyDataClass)] ) # using asdict df = pd.DataFrame([dataclasses.asdict(x) for x in arr])
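For the attrs part of the question, which the answer above does not cover directly: attrs ships asdict and astuple helpers analogous to the dataclasses ones, so the same DataFrame construction should work along these lines. This assumes attrs 21.3+ for the attrs namespace (older releases expose the same functions as attr.asdict/attr.astuple); MyAttrsClass is just an illustrative name.

import attrs
import pandas as pd

@attrs.define  # slotted by default; asdict still works
class MyAttrsClass:
    i: int
    s: str

arr = [MyAttrsClass(1, "a"), MyAttrsClass(2, "b")]
df = pd.DataFrame([attrs.asdict(x) for x in arr])
print(df)  # columns i and s, as with the dataclass version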
4
6
74,607,032
2022-11-28
https://stackoverflow.com/questions/74607032/vs-code-cursor-bug-in-terminal
The cursor repeats and remains in the Integrated Terminal in VS Code. I encountered this bug in my terminal while working through a Python tutorial, so I downloaded and reinstalled the same version (the latest version of VS Code), but the problem persists. I looked around for some answers but only found this tutorial, which is not related. Anyway, I reinstalled the software only to find the bug was still present. The code runs, but the cursor is an obstruction. From time to time I end up typing in the wrong place, so it's a bit of a bother.
Turning off GPU acceleration in the integrated terminal of VS Code seems to have resolved the matter; no more cursor trails. Settings > type 'Render' > go to Terminal › Integrated: Gpu Acceleration. This setting controls whether the terminal will leverage the GPU to do its rendering. Switch it to 'off' in the dropdown menu.
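For those who prefer editing settings.json directly, the same toggle can, to the best of my knowledge, be set there under the identifier terminal.integrated.gpuAcceleration; the exact set of accepted values depends on the VS Code version, so verify it in the Settings UI if unsure:

{
    // Disable GPU rendering for the integrated terminal (workaround for cursor trails)
    "terminal.integrated.gpuAcceleration": "off"
}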
3
4
74,614,332
2022-11-29
https://stackoverflow.com/questions/74614332/is-unpacking-a-type-hint-possible-or-its-workarounds
Is there a way to unpack a tuple type alias? For example, ResultTypeA = tuple[float, float, dict[str, float]] ResultTypeB = tuple[*ResultTypeA, str, str] So that ResultTypeB evaluates to tuple[float, float, dict[str, float], str, str] instead of tuple[tuple[float, float, dict[str, float]], str, str] If not possible, what would be a workaround for this?
What you are looking for may be the new typing.TypeVarTuple as proposed by PEP 646. Due to how new it is (Python 3.11+) and how big a change it introduces, many static type checkers still do not fully support it (see this mypy issue for example). Maybe typing.Unpack is actually more applicable in this case, but again it is hardly useful so long as type checkers don't support it. At a certain point, though, you should probably ask yourself whether your design is all that good if your type annotations become this complex.
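To make that concrete, this is roughly how PEP 646-style unpacking is spelled; it assumes Python 3.11+ for typing.Unpack (typing_extensions provides the same name earlier), and whether a particular type checker already accepts unpacking a fixed-length tuple alias like this still needs to be verified, since support remains uneven:

from typing import Unpack  # or: from typing_extensions import Unpack

ResultTypeA = tuple[float, float, dict[str, float]]

# Intended meaning: tuple[float, float, dict[str, float], str, str]
ResultTypeB = tuple[Unpack[ResultTypeA], str, str]

# Pragmatic workaround if the checker rejects the unpacking:
# spell the flattened alias out once and keep both in sync manually.
ResultTypeBFlat = tuple[float, float, dict[str, float], str, str]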
3
3
74,612,809
2022-11-29
https://stackoverflow.com/questions/74612809/why-are-attributes-defined-outside-init-in-popular-packages-like-sqlalchemy
I'm modifying an app, trying to use Pydantic for my application models and SQLAlchemy for my database models. I have existing classes, where I defined attributes inside the __init__ method as I was taught to do: class Measure: def __init__( self, t_received: int, mac_address: str, data: pd.DataFrame, battery_V: float = 0 ): self.t_received = t_received self.mac_address = mac_address self.data = data self.battery_V = battery_V In both Pydantic and SQLAlchemy, following the docs, I have to define those attributes outside the __init__ method, for example in Pydantic: import pydantic class Measure(pydantic.BaseModel): t_received: int mac_address: str data: pd.DataFrame battery_V: float Why is it the case? Isn't this bad practice? Is there any impact on other methods (classmethods, staticmethods, properties ...) of that class? Note that this is also very unhandy because when I instantiate an object of that class, I don't get suggestions on what parameters are expected by the constructor!
Defining attributes of a class in the class namespace directly is totally acceptable and is not special per se for the packages you mentioned. Since the class namespace is (among other things) essentially a blueprint for instances of that class, defining attributes there can actually be useful, when you want to e.g. provide all public attributes with type annotations in a single place in a consistent manner. Consider also that a public attribute does not necessarily need to be reflected by a parameter in the constructor of the class. For example, this is entirely reasonable: class Foo: a: list[int] b: str def __init__(self, b: str) -> None: self.a = [] self.b = b In other words, just because something is a public attribute, that does not mean it should have to be provided by the user upon initialization. To say nothing of protected/private attributes. What is special about Pydantic (to take your example), is that the metaclass of BaseModel as well as the class itself does a whole lot of magic with the attributes defined in the class namespace. Pydantic refers to a model's typical attributes as "fields" and one bit of magic allows special checks to be done during initialization based on those fields you defined in the class namespace. For example, the constructor must receive keyword arguments that correspond to the non-optional fields you defined. from pydantic import BaseModel class MyModel(BaseModel): field_a: str field_b: int = 1 obj = MyModel( field_a="spam", # required field_b=2, # optional field_c=3.14, # unexpected/ignored ) If I were to omit field_a during construction of a MyModel instance, an error would be raised. Likewise, if I had tried to pass field_b="eggs", an error would be raised. So the fact that you don't write your own __init__ method is a feature Pydantic provides you. You only define the fields and an appropriate constructor is "magically" there for you already. As for the drawback you mentioned, where you don't get any auto-suggestions, that is true by default for all IDEs. Static type checkers cannot understand that dynamic constructor and simply infer what arguments are expected. Currently this is solved via extensions, such as the mypy plugin and the PyCharm plugin. Maybe soon the @dataclass_transform decorator from PEP 681 will standardize this for similar packages and thus improve support by static type checkers. It is also worth noting that even the standard library's dataclasses only work via special extensions in type checkers. To your other question, there is obviously some impact on methods of such classes (by design), though the specifics are not always obvious. You should of course not simply write your own __init__ method without being careful to call the superclass' __init__ properly inside it. Also, @property-setters currently don't work as you would expect it (though it is debatable if it even makes sense to use properties on Pydantic models). To wrap up, this approach is not only not bad practice, it is a great idea to reduce boilerplate code and it is extremely common these days, as evidenced by the fact that hugely popular and established packages (like the aforementioned Pydantic, as well as e.g. SQLAlchemy, Django and others) use this pattern to a certain extent.
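As a small illustration of the point about not blindly writing your own __init__ on such classes, a sketch along these lines should work with Pydantic; the details of customizing initialization differ between Pydantic versions, so treat it as a sketch rather than official guidance:

from pydantic import BaseModel

class MyModel(BaseModel):
    field_a: str
    field_b: int = 1

    def __init__(self, **data):
        # Let the generated validation run first, then do any extra work.
        super().__init__(**data)
        print("validated:", self.field_a, self.field_b)

obj = MyModel(field_a="spam")  # prints "validated: spam 1"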
3
2
74,613,853
2022-11-29
https://stackoverflow.com/questions/74613853/python-regular-expression-re-sub-to-replace-matches
I am trying to analyze an earnings call using Python regular expressions. I want to delete unnecessary lines which only contain the name and position of the person who is speaking next. This is an excerpt of the text I want to analyze: "Questions and Answers\nOperator [1]\n\n Shannon Siemsen Cross, Cross Research LLC - Co-Founder, Principal & Analyst [2]\n I hope everyone is well. Tim, you talked about seeing some improvement in the second half of April. So I was wondering if you could just talk maybe a bit more on the segment and geographic basis what you're seeing in the various regions that you're selling in and what you're hearing from your customers. And then I have a follow-up.\n Timothy D. Cook, Apple Inc. - CEO & Director [3]\n ..." At the end of each line that I want to delete, you have [some number]. So I used the following line of code to get these lines: name_lines = re.findall('.*[\d]]', text) This works and gives me the following list: ['Operator [1]', ' Shannon Siemsen Cross, Cross Research LLC - Co-Founder, Principal & Analyst [2]', ' Timothy D. Cook, Apple Inc. - CEO & Director [3]'] So now, in the next step, I want to replace these strings in the text using the following line of code: for i in range(0,len(name_lines)): text = re.sub(name_lines[i], '', text) But this does not work. Also, if I just try to replace one string instead of using the loop, it does not work, but I have no clue why. And if I now use re.findall to search for the lines I obtained from the first line of code, I don't get a match.
The first argument to re.sub is treated as a regular expression, so the square brackets get a special meaning and don't match literally. You don't need a regular expression for this replacement at all though (and you also don't need the loop counter i): for name_line in name_lines: text = text.replace(name_line, '')
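If you do want to keep using regular expressions — for instance to drop all speaker lines in one pass instead of collecting them first — re.escape makes an arbitrary string safe to use as a pattern, and the whole cleanup can also be written as a single multiline substitution. A rough, self-contained sketch (it assumes, as in the excerpt, that every speaker line ends with a bracketed number):

import re

transcript = (
    "Questions and Answers\n"
    "Operator [1]\n"
    " Shannon Siemsen Cross, Cross Research LLC - Co-Founder, Principal & Analyst [2]\n"
    " I hope everyone is well. Tim, you talked about ...\n"
    " Timothy D. Cook, Apple Inc. - CEO & Director [3]\n"
)

# One pass: remove every line that ends with a number in square brackets.
cleaned = re.sub(r'(?m)^.*\[\d+\]\s*$\n?', '', transcript)
print(cleaned)

# If the lines were already collected with re.findall, escaping them
# makes brackets and dots lose their special regex meaning:
speaker = "Operator [1]"
also_cleaned = re.sub(re.escape(speaker), '', transcript)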
4
4
74,597,855
2022-11-28
https://stackoverflow.com/questions/74597855/where-does-pip3-install-package-binaries
I see that depending on the system and configuration, packages are installed in different places. Example: Machine 1: pip3 install fb-idb pip3 show fb-idb > ... > /opt/homebrew/lib/python3.9/site-packages Machine 2: pip3 install fb-idb pip3 show fb-idb > ... > /usr/local/lib/python3.10/site-packages Now the problem I have is that on machine 1, I got the path to the binary by executing which idb (> /opt/homebrew/bin/idb), but on machine 2, it seems the bin dir wasn't added to the PATH, so which doesn't work. Is there a way to figure out where the binaries are installed, if I only have the site-packages path?
pip3 show --files fb-idb shows where pip has installed all the files of the package. Run pip3 show --files fb-idb | grep -F /bin/ to extract the directory where pip installed scripts and entry points (on Windows it's \Scripts\). The listed file paths are relative to the Location: header, so either do grep -F Location: separately or combine the two: pip3 show --files fb-idb | grep 'Location:\|/bin/'
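A Python-side way to answer the same question from within the target interpreter: sysconfig reports where scripts and entry points go for that environment, and importlib.metadata can list the files recorded for an installed distribution. Both are standard library; fb-idb is the package from the question and must already be installed for the second part to succeed.

import sysconfig
from importlib import metadata

# Directory where console scripts / entry points are installed for this interpreter
print(sysconfig.get_path("scripts"))

# Files recorded for the installed distribution; keep only the script-like ones
for f in metadata.files("fb-idb") or []:
    if "bin/" in str(f) or "Scripts" in str(f):
        print(f.locate())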
3
6
74,611,317
2022-11-29
https://stackoverflow.com/questions/74611317/enable-pyenv-virtualenv-prompt-at-terminal
I just installed pyenv and virtualenv following: https://realpython.com/intro-to-pyenv/ After completing installation I was prompted with: pyenv-virtualenv: prompt changing will be removed from future release. configure `export PYENV_VIRTUALENV_DISABLE_PROMPT=1' to simulate the behavior I added export PYENV_VIRTUALENV_DISABLE_PROMPT=1 to my .bash_aliases just to see what the behavior would be, and sure enough it removed the prompt that used to exist at the beginning of the command prompt indicating the pyenv-virtualenv version. Used to be like: (myenv) user@foo:~/my_project [main] $ where (myenv) is the active environment, and [main] is the git branch. I would love to have the environment indicator back! It is very useful. I guess at some possibilities such as: export PYENV_VIRTUALENV_DISABLE_PROMPT=0 export PYENV_VIRTUALENV_ENABLE_PROMPT=1 But these do not return the previous behavior. I have googled all over and can't figure out how to get this back. This answer is not useful, as it seems like a hack around the original functionality, and displays the environment always, not just when I enter (or manually activate) an environment.
Borrowing a solution from here, the following works (added to .bashrc or .bash_aliases): export PYENV_VIRTUALENV_DISABLE_PROMPT=1 export BASE_PROMPT=$PS1 function updatePrompt { if [[ "$(pyenv virtualenvs)" == *"* $(pyenv version-name) "* ]]; then export PS1='($(pyenv version-name)) '$BASE_PROMPT else export PS1=$BASE_PROMPT fi } export PROMPT_COMMAND='updatePrompt'
3
6
74,608,230
2022-11-29
https://stackoverflow.com/questions/74608230/no-artists-with-labels-found-to-put-in-legend-error-when-changing-the-legend
I want to make my legend size bigger in Pyplot. I used this answer to do that. Here is my code. import matplotlib.pyplot as plt import seaborn as sns sns.set_style("whitegrid") plt.rcParams['figure.figsize'] = [15, 7] lst = [1,2,3,4,5,6,7,8,9,8,7,6,5,4,3,4,5,6] plt.plot(lst) plt.legend(fontsize="x-large") # Here I make it bigger but doesn't work plt.legend(["This is my legend"]) plt.ylabel('some numbers') plt.show() I get this warning and I don't know what is wrong. I don't understand what "Artist" means here. No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
"Artist" is a term from matplotlib: https://matplotlib.org/stable/tutorials/intermediate/artists.html Presumably the error message means that there are no items in the legend whose font size can be changed. Maybe pass the fontsize argument to the same plt.legend call in which you create the legend, or call plt.legend(fontsize=...) after you have created the legend.
21
17