question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
74,433,918 | 2022-11-14 | https://stackoverflow.com/questions/74433918/apply-a-function-to-2-columns-in-polars | I want to apply a custom function which takes 2 columns and outputs a value based on those (row-based) In Pandas there is a syntax to apply a function based on values in multiple columns df['col_3'] = df.apply(lambda x: func(x.col_1, x.col_2), axis=1) What is the syntax for this in Polars? | In polars, you don't add columns by assigning just the value of the new column. You always have to assign the whole df (in other words there's never ['col_3'] on the left side of the =) To that end if you want your original df with a new column then you use the with_columns method. you would do df = ( df .with_columns( pl.struct('col_1','col_2') .map_elements(lambda x: func(x['col_1'], x['col_2'])) .alias('col_3') ) ) A struct is a dataframe inside a column of a dataframe. This is helpful because map_elements (and indeed all expressions) can only be invoked from a single column. The map_elements turns the struct, in each row, into dict and that becomes the input to your function. map_elements is for functions which take a single input and output a single value. (If you're using a vectorized function that expects something like a list and returns another list then you should use map_batches). Finally, you do alias on that to give it the name you want it to have. | 9 | 24 |
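The pattern in the accepted answer above, as a minimal self-contained sketch; the `func` below is a stand-in for the question's custom two-argument function, and the column names follow the question.

```python
import polars as pl

def func(a: float, b: float) -> float:
    # Stand-in for the question's custom row-wise function.
    return a + 10 * b

df = pl.DataFrame({"col_1": [1.0, 2.0, 3.0], "col_2": [0.1, 0.2, 0.3]})

df = df.with_columns(
    pl.struct("col_1", "col_2")
    .map_elements(lambda row: func(row["col_1"], row["col_2"]), return_dtype=pl.Float64)
    .alias("col_3")
)
print(df)
```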
74,450,537 | 2022-11-15 | https://stackoverflow.com/questions/74450537/how-to-efficiently-create-an-index-like-polars-dataframe-from-multiple-sparse-se | I would like to create a DataFrame that has an "index" (integer) from a number of (sparse) Series, where the index (or primary key) is NOT necessarily consecutive integers. Each Series is like a vector of (index, value) tuple or {index: value} mapping. (1) A small example In Pandas, this is very easy as we can create a DataFrame at a time, like >>> pd.DataFrame({ "A": {0: 'a', 20: 'b', 40: 'c'}, "B": {10: 'd', 20: 'e', 30: 'f'}, "C": {20: 'g', 30: 'h'}, }).sort_index() A B C 0 a NaN NaN 10 NaN d NaN 20 b e g 30 NaN f h 40 c NaN NaN but I can't find an easy way to achieve a similar result with Polars. As described in Coming from Pandas, Polars does not use an index unlike Pandas, and each row is indexed by its integer position in the table; so I might need to represent an "indexed" Series with a 2-column DataFrame: A = pl.DataFrame({ "index": [0, 20, 40], "A": ['a', 'b', 'c'] }) B = pl.DataFrame({ "index": [10, 20, 30], "B": ['d', 'e', 'f'] }) C = pl.DataFrame({ "index": [20, 30], "C": ['g', 'h'] }) I tried to combine these multiple DataFrames, joining on the index column: >>> A.join(B, on='index', how='full', coalesce=True).join(C, on='index', how='full', coalesce=True).sort(by='index') shape: (5, 4) βββββββββ¬βββββββ¬βββββββ¬βββββββ β index β A β B β C β β --- β --- β --- β --- β β i64 β str β str β str β βββββββββͺβββββββͺβββββββͺβββββββ‘ β 0 β a β null β null β β 10 β null β d β null β β 20 β b β e β g β β 30 β null β f β h β β 40 β c β null β null β βββββββββ΄βββββββ΄βββββββ΄βββββββ This gives the result I want, but I wonder: (i) if there is there more concise way to do this over many columns, and (ii) how make this operation as efficient as possible. Alternatives? I also tried outer joins as this is one way to combine Dataframes with different number of columns and rows, as described above. Other alternatives I tried includes diagonal concatenation, but this does not deduplicate or join on index: >>> pl.concat([A, B, C], how='diagonal') index A B C 0 0 a None None 1 20 b None None 2 40 c None None 3 10 None d None 4 20 None e None 5 30 None f None 6 20 None None g 7 30 None None h (2) Efficiently Building a Large Table The approach I found above gives desired results I'd want but I feel there must be a better way in terms of performance. Consider a case with more large tables; say 300,000 rows and 20 columns: N, C = 300000, 20 pls = [] pds = [] for i in range(C): A = pl.DataFrame({ "index": np.linspace(i, N*3-i, num=N, dtype=np.int32), f"A{i}": np.arange(N, dtype=np.float32), }) pls.append(A) B = A.to_pandas().set_index("index") pds.append(B) The approach of joining two columns in a row is somewhat slow than I expected: %%time F = functools.reduce(lambda a, b: a.join(b, on='index', how='full', coalesce=True), pls) F.sort(by='index') CPU times: user 1.49 s, sys: 97.8 ms, total: 1.59 s Wall time: 611 ms or than one-pass creation in pd.DataFrame: %%time pd.DataFrame({ f"A{i}": pds[i][f'A{i}'] for i in range(C) }).sort_index() CPU times: user 230 ms, sys: 50.7 ms, total: 281 ms Wall time: 281 ms | Following your example, but only informing polars on the fact that the "index" column is sorted (polars will use fast paths if data is sorted). You can use align_frames together with functools.reduce to get what you want. 
This is your data creation snippet: import functools import polars as pl N, C = 300000, 20 pls = [] pds = [] for i in range(C): A = pl.DataFrame({ "index": np.linspace(i, N*3-i, num=N, dtype=np.int32), f"A{i}": np.arange(N, dtype=np.float32), }).with_columns(pl.col("index").set_sorted()) pls.append(A) B = A.to_pandas().set_index("index") pds.append(B) Creating the frame aligned by index. We need to use functools.reduce because align_frames returns a list of new DataFrame objects that are aligned by index. frames = pl.align_frames(*pls, on="index") functools.reduce(lambda a, b: a.with_columns(b.get_columns()), frames) Performance The performance is better than the pandas sort_index method. Pandas >>> %%timeit >>> pd.DataFrame({ ... f"A{i}": pds[i][f'A{i}'] for i in range(C) ... }).sort_index() 389 ms Β± 8.96 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) Polars >>> %%timeit >>> frames = pl.align_frames(*pls, on="index") >>> functools.reduce(lambda a, b: a.with_columns(b.get_columns()), frames) 348 ms Β± 11.9 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) | 6 | 5 |
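For reference, a small self-contained sketch that applies the align_frames plus functools.reduce approach from the answer above to the toy A/B/C frames from the first part of the question.

```python
import functools
import polars as pl

A = pl.DataFrame({"index": [0, 20, 40], "A": ["a", "b", "c"]})
B = pl.DataFrame({"index": [10, 20, 30], "B": ["d", "e", "f"]})
C = pl.DataFrame({"index": [20, 30], "C": ["g", "h"]})

# align_frames returns new frames that all share the same sorted "index" column,
# so their columns can simply be laid side by side.
frames = pl.align_frames(A, B, C, on="index")
result = functools.reduce(lambda a, b: a.with_columns(b.get_columns()), frames)
print(result)
```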
74,372,173 | 2022-11-9 | https://stackoverflow.com/questions/74372173/how-to-multiply-each-element-in-a-list-with-a-value-in-a-different-column | I have a dataframe with a certain number of groups, containing a weight column and a list of values, which can be of arbitrary length, so for example: df = pl.DataFrame( { "Group": ["Group1", "Group2", "Group3"], "Weight": [100.0, 200.0, 300.0], "Vals": [[0.5, 0.5, 0.8],[0.5, 0.5, 0.8], [0.7, 0.9]] } ) ββββββββββ¬βββββββββ¬ββββββββββββββββββ β Group β Weight β Vals β β --- β --- β --- β β str β f64 β list[f64] β ββββββββββͺβββββββββͺββββββββββββββββββ‘ β Group1 β 100.0 β [0.5, 0.5, 0.8] β β Group2 β 200.0 β [0.5, 0.5, 0.8] β β Group3 β 300.0 β [0.7, 0.9] β ββββββββββ΄βββββββββ΄ββββββββββββββββββ My goal is to calculate a 'weighted' column, which would be the multiple of each item in the values list with the value in the weight column: ββββββββββ¬βββββββββ¬ββββββββββββββββββ¬ββββββββββββββββββ β Group β Weight β Vals β Weighted β β --- β --- β --- β --- β β str β f64 β list[f64] β list[i64] β ββββββββββͺβββββββββͺββββββββββββββββββͺββββββββββββββββββ‘ β Group1 β 100.0 β [0.5, 0.5, 0.8] β [50, 50, 80] β β Group2 β 200.0 β [0.5, 0.5, 0.8] β [100, 100, 160] β β Group3 β 300.0 β [0.7, 0.9] β [210, 270] β ββββββββββ΄βββββββββ΄ββββββββββββββββββ΄ββββββββββββββββββ I've tried a few different things: df.with_columns( pl.col("Vals").list.eval(pl.element() * 3).alias("Weight1"), #Multiplying with literal works pl.col("Vals").list.eval(pl.element() * pl.col("Weight")).alias("Weight2"), #Does not work pl.col("Vals").list.eval(pl.element() * pl.col("Unknown")).alias("Weight3"), #Unknown columns give same value pl.col("Vals").list.eval(pl.col("Vals") * pl.col("Weight")).alias("Weight4"), #Same effect # pl.col('Vals') * 3 -> gives an error ) ββββββββββ¬βββββββββ¬βββββββββββββ¬βββββββββββββ¬βββββββββββββββ¬βββββββββββββββ¬βββββββββββββββββββββ β Group β Weight β Vals β Weight1 β Weight2 β Weight3 β Weight4 β β --- β --- β --- β --- β --- β --- β --- β β str β f64 β list[f64] β list[f64] β list[f64] β list[f64] β list[f64] β ββββββββββͺβββββββββͺβββββββββββββͺβββββββββββββͺβββββββββββββββͺβββββββββββββββͺβββββββββββββββββββββ‘ β Group1 β 100.0 β [0.5, 0.5, β [1.5, 1.5, β [0.25, 0.25, β [0.25, 0.25, β [0.25, 0.25, 0.64] β β β β 0.8] β 2.4] β 0.64] β 0.64] β β β Group2 β 200.0 β [0.5, 0.5, β [1.5, 1.5, β [0.25, 0.25, β [0.25, 0.25, β [0.25, 0.25, 0.64] β β β β 0.8] β 2.4] β 0.64] β 0.64] β β β Group3 β 300.0 β [0.7, 0.9] β [2.1, 2.7] β [0.49, 0.81] β [0.49, 0.81] β [0.49, 0.81] β ββββββββββ΄βββββββββ΄βββββββββββββ΄βββββββββββββ΄βββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ Unless I'm not understanding it correctly, it seems like you're unable to access columns outside of the list from within the eval function. Perhaps there might be a way to use list comprehension within the statement, but that doesn't really seem like a neat solution. What would be the recommended approach here? Any help would be appreciated! 
| EDIT - Polars update: As of the latest version of Polars, this is now a the correct syntax: df = pl.DataFrame( { "Group": ["Group1", "Group2", "Group3"], "Weight": [100.0, 200.0, 300.0], "Vals": [[0.5, 0.5, 0.8],[0.5, 0.5, 0.8], [0.7, 0.9]] } ) (df .explode('Vals') .with_columns(Weighted = pl.col('Weight')*pl.col('Vals')) .group_by('Group') .agg( pl.col('Weight').first(), pl.col('Vals'), pl.col('Weighted') ) ) shape: (3, 4) ββββββββββ¬βββββββββ¬ββββββββββββββββββ¬ββββββββββββββββββββββββ β Group β Weight β Vals β Weighted β β --- β --- β --- β --- β β str β f64 β list[f64] β list[f64] β ββββββββββͺβββββββββͺββββββββββββββββββͺββββββββββββββββββββββββ‘ β Group3 β 300.0 β [0.7, 0.9] β [210.0, 270.0] β β Group1 β 100.0 β [0.5, 0.5, 0.8] β [50.0, 50.0, 80.0] β β Group2 β 200.0 β [0.5, 0.5, 0.8] β [100.0, 100.0, 160.0] β ββββββββββ΄βββββββββ΄ββββββββββββββββββ΄ββββββββββββββββββββββββ | 6 | 3 |
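One note on the answer above: the explode/group_by round trip can reorder the groups (its output shows Group3 first). If the original row order matters, the same approach works with maintain_order=True; a short sketch:

```python
import polars as pl

df = pl.DataFrame({
    "Group": ["Group1", "Group2", "Group3"],
    "Weight": [100.0, 200.0, 300.0],
    "Vals": [[0.5, 0.5, 0.8], [0.5, 0.5, 0.8], [0.7, 0.9]],
})

result = (
    df.explode("Vals")
    .with_columns(Weighted=pl.col("Weight") * pl.col("Vals"))
    .group_by("Group", maintain_order=True)  # keep groups in their original order
    .agg(pl.col("Weight").first(), pl.col("Vals"), pl.col("Weighted"))
)
print(result)
```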
74,429,898 | 2022-11-14 | https://stackoverflow.com/questions/74429898/insert-or-update-upsert-multiple-objects-using-orm-session | I'm trying to upsert using SQLAlchemy. There's no upsert in SQL but SQLAlchemy provides this. The same thing I'm trying to perform with SQLAlchemy ORM session. My code: from sqlalchemy.orm import sessionmaker Session = sessionmaker(engine) with Session() as session: """Here upsert functionality""" session.insert_or_update(company) session.commit() session.merge(company) works as I only need to check for primary key and not other unique values. The documentation says: Session.merge() examines the primary key attributes of the source instance, and attempts to reconcile it with an instance of the same primary key in the session. If not found locally, it attempts to load the object from the database based on primary key, and if none can be located, creates a new instance. The state of each attribute on the source instance is then copied to the target instance. The resulting target instance is then returned by the method; the original source instance is left unmodified, and un-associated with the Session if not already. How do I perform this for multiple objects? | As you have noted, Session.merge() will accomplish the task on an object-by-object basis. For example, if we have class Thing(Base): __tablename__ = "thing" id: Mapped[int] = mapped_column(primary_key=True, autoincrement=False) txt: Mapped[str] = mapped_column(String(50)) my_thing = Thing(id=1, txt="foo") we can do with Session(engine) as sess: sess.merge(my_thing) sess.commit() However, we don't want to do that for a large number of objects, e.g., rows_to_upsert = 4000 things = [Thing(id=i, txt=f"txt_{i}") for i in range(rows_to_upsert)] with Session(engine, autoflush=False) as sess: t0 = time.perf_counter() for thing in things: sess.merge(thing) sess.commit() print( f"merge: {rows_to_upsert:,} rows upserted in {(time.perf_counter() - t0):0.1f} seconds" ) because that will result in 4000 round-trips to the server to perform a SELECT for each object, and that will be slow. In my testing, upserting 4000 rows took approximately 40 seconds, or about 100 rows/sec. Instead, we should convert the list of objects to a list of dict list_of_dict = [dict(id=thing.id, txt=thing.txt) for thing in things] and then use an INSERT β¦ ON DUPLICATE KEY statement from sqlalchemy.dialects.mysql import insert insert_stmt = insert(Thing).values(list_of_dict) on_duplicate_stmt = insert_stmt.on_duplicate_key_update( dict(txt=insert_stmt.inserted.txt) ) with Session(engine) as sess: t0 = time.perf_counter() sess.execute(on_duplicate_stmt) sess.commit() print( f"insert: {rows_to_upsert:,} rows upserted in {(time.perf_counter() - t0):0.1f} seconds" ) which only took around 2 seconds, or about 2000 rows/sec. | 5 | 0 |
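The bulk statement in the answer above targets MySQL's INSERT ... ON DUPLICATE KEY UPDATE. On SQLite or PostgreSQL the analogous construct is on_conflict_do_update. Below is a self-contained sketch against a throwaway in-memory SQLite database, reusing the Thing model from the answer; the row count and column values are illustrative.

```python
from sqlalchemy import String, create_engine
from sqlalchemy.dialects.sqlite import insert  # use sqlalchemy.dialects.postgresql for Postgres
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Thing(Base):
    __tablename__ = "thing"
    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=False)
    txt: Mapped[str] = mapped_column(String(50))

engine = create_engine("sqlite://")  # throwaway in-memory DB, just for the sketch
Base.metadata.create_all(engine)

rows = [dict(id=i, txt=f"txt_{i}") for i in range(400)]
stmt = insert(Thing).values(rows)
upsert = stmt.on_conflict_do_update(
    index_elements=[Thing.id],         # conflict target: the primary key
    set_=dict(txt=stmt.excluded.txt),  # column(s) to overwrite on conflict
)
with Session(engine) as sess:
    sess.execute(upsert)
    sess.commit()
```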
74,392,324 | 2022-11-10 | https://stackoverflow.com/questions/74392324/poetry-install-throws-winerror-1312-when-running-over-ssh-on-windows-10 | I have an SSH connection from a Windows machine to another, and then trying to do a poetry install. My problem is: I get this error when executing poetry install through ssh: [WinError 1312] A specified logon session does not exist. It may already have been terminated. This command works perfectly when I execute it locally on the target machine, but fails when connecting through ssh. How can I get rid/fix the [WinError 1312]? I saw another user that posted the same question recently, but removed it. I've seen some clues regarding the MachineKeys, but have really no idea on how to proceed. Any suggestion will be highly appreciated. Python: 3.10.8 Poetry: 1.2.1 Installing dependencies from lock file Package operations: 5 installs, 0 updates, 0 removals β’ Installing install-requires (0.3.0) OSError [WinError 1312] A specified logon session does not exist. It may already have been terminated. at ~\AppData\Roaming\pypoetry\venv\lib\site-packages\win32ctypes\core\ctypes\_util.py:53 in check_zero 49β 50β def check_zero_factory(function_name=None): 51β def check_zero(result, function, arguments, *args): 52β if result == 0: β 53β raise make_error(function, function_name) 54β return result 55β return check_zero 56β 57β The following error occurred when trying to handle this error: error (1312, 'CredRead', 'A specified logon session does not exist. It may already have been terminated.') at ~\AppData\Roaming\pypoetry\venv\lib\site-packages\win32ctypes\pywin32\pywintypes.py:37 in pywin32error 33β def pywin32error(): 34β try: 35β yield 36β except WindowsError as exception: β 37β raise error(exception.winerror, exception.function, exception.strerror) 38β | Based on similarities in the stack traces and your description, my guess is that you're facing the same bug from #1892 and #1917, where Poetry tries to use your keyring to access/publish modules, and hence fails when these credentials are invalid. But it appears that poetry tries to access the keyring even for install operations. One of the solutions proposed was to uninstall the keyring package remotely: For me, I worked around the problem by pip uninstalling the 'keyring' package from that virt env. Another solution is to export the environment variable PYTHON_KEYRING_BACKEND. Here's an example of how you can do that on Windows cmd: SET PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring or Windows powershell: $env:PYTHON_KEYRING_BACKEND="keyring.backends.null.Keyring" or Linux shell: export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring Unfortunately, it appears that issue #1917 is still open and unresolved, so this is the best workaround that you can find to fix the issue for now. | 9 | 9 |
74,369,065 | 2022-11-9 | https://stackoverflow.com/questions/74369065/obtaining-the-image-iterations-before-final-image-has-been-generated-stablediffu | I am currently using the diffusers StableDiffusionPipeline (from hugging face) to generate AI images with a discord bot which I use with my friends. I was wondering if it was possible to get a preview of the image being generated before it is finished? For example, if an image takes 20 seconds to generate, since it is using diffusion it starts off blury and gradually gets better and better. What I want is to save the image on each iteration (or every few seconds) and see how it progresses. How would I be able to do this? class ImageGenerator: def __init__(self, socket_listener, pretty_logger, prisma): self.model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", revision="fp16", torch_dtype=torch.float16, use_auth_token=os.environ.get("HF_AUTH_TOKEN")) self.model = self.model.to("cuda") async def generate_image(self, data): start_time = time.time() with autocast("cuda"): image = self.model(data.description, height=self.default_height, width=self.default_width, num_inference_steps=self.default_inference_steps, guidance_scale=self.default_guidance_scale) image.save(...) The code I have currently is this, however it only returns the image when it is completely done. I have tried to look into how the image is generated inside of the StableDiffusionPipeline but I cannot find anywhere where the image is generated. If anybody could provide any pointers/tips on where I can begin that would be very helpful. | You can use the callback argument of the stable diffusion pipeline to get the latent space representation of the image: link to documentation The implementation shows how the latents are converted back to an image. We just have to copy that code and decode the latents. Here is a small example that saves the generated image every 5 steps: from diffusers import StableDiffusionPipeline import torch #load model model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", revision="fp16", torch_dtype=torch.float16, use_auth_token="YOUR TOKEN HERE") model = model.to("cuda") def callback(iter, t, latents): # convert latents to image with torch.no_grad(): latents = 1 / 0.18215 * latents image = model.vae.decode(latents).sample image = (image / 2 + 0.5).clamp(0, 1) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 image = image.cpu().permute(0, 2, 3, 1).float().numpy() # convert to PIL Images image = model.numpy_to_pil(image) # do something with the Images for i, img in enumerate(image): img.save(f"iter_{iter}_img{i}.png") # generate image (note the `callback` and `callback_steps` argument) image = model("tree", callback=callback, callback_steps=5) To understand the stable diffusion model I highly recommend this blog post. | 6 | 8 |
74,389,487 | 2022-11-10 | https://stackoverflow.com/questions/74389487/caching-python-venv-folder-with-codebuild | I am trying to cache the Virtual environment folder (.venv) for my Python project with CodeBuild. Here's my buildspec.yml file: version: 0.2 env: shell: bash phases: install: commands: - python3 -m venv .venv && source .venv/bin/activate - pip3 install -r requirements.txt build: commands: - pytest -v tests/ cache: paths: - .venv/**/* This is the error I get: [Container] 2022/11/10 11:33:20 MkdirAll: /codebuild/local-cache/custom/11615810f/.venv [Container] 2022/11/10 11:33:20 Symlinking: /codebuild/output/src682375820/src/hello.org/demo/.venv => /codebuild/local-cache/custom/11615810f/.venv [Container] 2022/11/10 11:34:01 Running command python3 -m venv .venv && source .venv/bin/activate Error: Unable to create directory '/codebuild/output/src682375820/src/hello.org/demo/.venv' [Container] 2022/11/10 11:34:01 Command did not exit successfully python3 -m venv .venv && source .venv/bin/activate exit status 1 [Container] 2022/11/10 11:34:01 Phase complete: INSTALL State: FAILED [Container] 2022/11/10 11:34:01 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: python3 -m venv .venv && source .venv/bin/activate. Reason: exit status 1 The problem is that you can't instantiate a Python virtual environment into a symlinked folder. Any ideas? | CodeBuild uses symlinks to link cached directories and python -m venv tries to create a new directory over a symlink, which isn't possible. Try to run python3 -m venv command inside the .venv directory: install: commands: - cd .venv && python3 -m venv . && cd - - source .venv/bin/activate - pip3 install -r requirements.txt If you check the CodeBuild log, you'll see that symlink is created at the very beginning: # Example when /opt/poetry is cached [Container] 2023/12/11 08:10:45.046132 Expanded cache path /opt/poetry/ [Container] 2023/12/11 08:10:45.495963 MkdirAll: /codebuild/local-cache/custom/5694e39f8fd7f99453b01828b7946ba4ef802b635c8c9ec0be9b56824a16d47c/opt/poetry [Container] 2023/12/11 08:10:45.496095 Symlinking: /opt/poetry => /codebuild/local-cache/custom/5694e39f8fd7f99453b01828b7946ba4ef802b635c8c9ec0be9b56824a16d47c/opt/poetry | 4 | 1 |
74,422,209 | 2022-11-13 | https://stackoverflow.com/questions/74422209/jupyterlab-failed-to-load-model-class-hboxmodel | I am using JupyterLab and trying to run tqdm. I've had an error persist for quite a while that seems to be a JS error. Extensions: Other labextensions (built into JupyterLab) app dir: /opt/homebrew/Cellar/[email protected]/3.9.10/Frameworks/Python.framework/Versions/3.9/share/jupyter/lab @jupyter-widgets/jupyterlab-manager v5.0.3 enabled OK jupyter-leaflet v0.17.2 enabled OK Error Message: Failed to load model class 'VBoxModel' from module '@jupyter-widgets/controls' Error: Module @jupyter-widgets/controls, version ^1.5.0 is not registered, however, 2.0.0 is I have tried, re-installing, re-building all of the extensions listed above to no avail. | I had the same issue when upgrading to jupyterlab==3.6.3. After upgrading ipywidgets to the at that time latest version 8.0.6, it still failed. I managed to fix the issue by downgrading then to ipywidgets==7.7.5. | 5 | 3 |
74,446,830 | 2022-11-15 | https://stackoverflow.com/questions/74446830/how-to-fix-403-forbidden-errors-with-python-requests-even-with-user-agent-head | I am sending a request to some URL. I copied the curl command to Python, so all the headers are included, but my request is not working and I receive status code 403 and error code 1020 in the HTML output. The code is import requests headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8', 'Accept-Language': 'en-US,en;q=0.5', # 'Accept-Encoding': 'gzip, deflate, br', 'DNT': '1', 'Connection': 'keep-alive', 'Upgrade-Insecure-Requests': '1', 'Sec-Fetch-Dest': 'document', 'Sec-Fetch-Mode': 'navigate', 'Sec-Fetch-Site': 'none', 'Sec-Fetch-User': '?1', } response = requests.get('https://v2.gcchmc.org/book-appointment/', headers=headers) print(response.status_code) print(response.cookies.get_dict()) with open("test.html",'w') as f: f.write(response.text) I also get cookies but not the desired response. I know I can do it with Selenium, but I want to know the reason behind this. Note: I have installed all the libraries and checked the versions, but it is still not working and throwing a 403 error. | The site is protected by Cloudflare, which aims to block, among other things, unauthorized data scraping. From What is data scraping? The process of web scraping is fairly simple, though the implementation can be complex. Web scraping occurs in 3 steps: First the piece of code used to pull the information, which we call a scraper bot, sends an HTTP GET request to a specific website. When the website responds, the scraper parses the HTML document for a specific pattern of data. Once the data is extracted, it is converted into whatever specific format the scraper bot's author designed. You can use urllib instead of requests; it seems to be able to deal with Cloudflare: import urllib.request req = urllib.request.Request('https://v2.gcchmc.org/book-appointment/') req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0') req.add_header('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8') req.add_header('Accept-Language', 'en-US,en;q=0.5') r = urllib.request.urlopen(req).read().decode('utf-8') with open("test.html", 'w', encoding="utf-8") as f: f.write(r) | 14 | 15 |
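If switching to urllib is not enough (Cloudflare's behaviour varies by site and over time), another commonly used workaround is the third-party cloudscraper package, which wraps requests. A minimal sketch; whether it actually gets past the protection depends on the site's Cloudflare settings:

```python
import cloudscraper  # third-party: pip install cloudscraper

scraper = cloudscraper.create_scraper()  # behaves like a requests.Session
response = scraper.get("https://v2.gcchmc.org/book-appointment/")
print(response.status_code)
```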
74,448,717 | 2022-11-15 | https://stackoverflow.com/questions/74448717/dash-choosing-an-id-and-then-make-plot-with-multiple-sliders | I have a dataset which is similar to below one. Please note that there are multiple values for a single ID. import pandas as pd import numpy as np import random df = pd.DataFrame({'DATE_TIME':pd.date_range('2022-11-01', '2022-11-05 23:00:00',freq='20min'), 'SBP':[random.uniform(110, 160) for n in range(358)], 'DBP':[random.uniform(60, 100) for n in range(358)], 'ID':[random.randrange(1, 3) for n in range(358)], 'TIMEINTERVAL':[random.randrange(1, 200) for n in range(358)]}) df['VISIT'] = df['DATE_TIME'].dt.day df['MODE'] = np.select([df['VISIT']==1, df['VISIT'].isin([2,3])], ['CKD', 'Dialysis'], 'Late TPL') df['TIME'] = df['DATE_TIME'].dt.time df['TIME'] = df['TIME'].astype('str') def to_day_period(s): bins = ['0', '06:00:00', '13:00:00', '18:00:00', '23:00:00', '24:00:00'] labels = ['Night', 'Morning', 'Afternoon', 'Evening', 'Night'] return pd.cut( pd.to_timedelta(s), bins=list(map(pd.Timedelta, bins)), labels=labels, right=False, ordered=False ) df['TIME_OF_DAY'] = to_day_period(df['TIME']) I would like to use Dash so that I can firstly choose the ID, and then make a plot of that chosen ID. Besides, I made a slider to choose the time interval between measurements in terms of minutes. This slider should work for Morning and Night values separately. So, I have already implemented a slider which works for all day and night times. I would like to have two sliders, one for Morning values from 06:00 until 17:59, and other one called Night from 18 until 05:59. from dash import Dash, html, dcc, Input, Output import pandas as pd import os import plotly.express as px # FUNCTION TO CHOOSE A SINGLE PATIENT def choose_patient(dataframe_name, id_number): return dataframe_name[dataframe_name['ID']==id_number] # FUNCTION TO CHOOSE A SINGLE PATIENT WITH A SINGLE VISIT def choose_patient_visit(dataframe_name, id_number,visit_number): return dataframe_name[(dataframe_name['ID']==id_number) & (dataframe_name['VISIT']==visit_number)] # READING THE DATA df = pd.read_csv(df,sep=',',parse_dates=['DATE_TIME','DATE'], infer_datetime_format=True) # ---------------------------------------------------- dash example ---------------------------------------------------- app = Dash(__name__) app.layout = html.Div([ html.H4('Interactive Scatter Plot'), dcc.Graph(id="scatter-plot",style={'width': '130vh', 'height': '80vh'}), html.P("Filter by time interval:"), dcc.Dropdown(df.ID.unique(), id='pandas-dropdown-1'), # for choosing ID, dcc.RangeSlider( id='range-slider', min=0, max=600, step=10, marks={0: '0', 50: '50', 100: '100', 150: '150', 200: '200', 250: '250', 300: '300', 350: '350', 400: '400', 450: '450', 500: '500', 550: '550', 600: '600'}, value=[0, 600] ), html.Div(id='dd-output-container') ]) @app.callback( Output("scatter-plot", "figure"), Input("pandas-dropdown-1", "value"), Input("range-slider", "value"), prevent_initial_call=True) def update_lineplot(value, slider_range): low, high = slider_range df1 = df.query("ID == @value & TIMEINTERVAL >= @low & TIMEINTERVAL < @high").copy() if df1.shape[0] != 0: fig = px.line(df1, x="DATE_TIME", y=["SBP", "DBP"], hover_data=['TIMEINTERVAL'], facet_col='VISIT', facet_col_wrap=2, symbol='MODE', facet_row_spacing=0.1, facet_col_spacing=0.09) fig.update_xaxes(matches=None, showticklabels=True) return fig else: return dash.no_update app.run_server(debug=True, use_reloader=False) How can I implement such two sliders? 
One slider should work for Night values of TIME_OF_DAY column and another one for Morning values of TIME_OF_DAY column. I look at Dash website, but there is no such tool available. | The solution below only requires vey minor modifications to your example code. It essentially filters the original dataframe twice (once for TIME_OF_DAY='Night', and once for TIME_OF_DAY='Morning') and concatenates them before plotting. I've also modified the bins in the to_day_period function to only produce two labels that match the request in the text ("Morning values from 06:00 until 17:59, and [...] Night from 18 until 05:59."), but the dash code can similarily be used for more categories. bins = ['0', '06:00:00', '18:00:00', '24:00:00'] labels = ['Night', 'Morning', 'Night'] Dash code app = Dash(__name__) app.layout = html.Div([ html.H4('Interactive Scatter Plot with ABPM dataset'), html.P("Select ID:"), dcc.Dropdown(df.ID.unique(), id='pandas-dropdown-1'), # for choosing ID, html.P("Filter by time interval during nighttime (18:00-6:00):"), dcc.RangeSlider( id='range-slider-night', min=0, max=600, step=10, marks={0: '0', 50: '50', 100: '100', 150: '150', 200: '200', 250: '250', 300: '300', 350: '350', 400: '400', 450: '450', 500: '500', 550: '550', 600: '600'}, value=[0, 600] ), html.P("Filter by time interval during daytime (6:00-18:00):"), dcc.RangeSlider( id='range-slider-morning', min=0, max=600, step=10, marks={0: '0', 50: '50', 100: '100', 150: '150', 200: '200', 250: '250', 300: '300', 350: '350', 400: '400', 450: '450', 500: '500', 550: '550', 600: '600'}, value=[0, 600] ), dcc.Graph(id="scatter-plot", style={'width': '130vh', 'height': '80vh'}), html.Div(id='dd-output-container') ]) @app.callback( Output("scatter-plot", "figure"), Input("pandas-dropdown-1", "value"), Input("range-slider-night", "value"), Input("range-slider-morning", "value"), prevent_initial_call=True) def update_lineplot(value, slider_range_night, slider_range_morning): low_night, high_night = slider_range_night low_morning, high_morning = slider_range_morning df_night = df.query("ID == @value & TIME_OF_DAY == 'Night' & TIMEINTERVAL >= @low_night & TIMEINTERVAL < @high_night").copy() df_morning = df.query("ID == @value & TIME_OF_DAY == 'Morning' & TIMEINTERVAL >= @low_morning & TIMEINTERVAL < @high_morning").copy() df1 = pd.concat([df_night, df_morning], axis=0).sort_values(['TIME']) if df1.shape[0] != 0: fig = px.line(df1, x="DATE_TIME", y=["SBP", "DBP"], hover_data=['TIMEINTERVAL'], facet_col='VISIT', facet_col_wrap=2, symbol='MODE', facet_row_spacing=0.1, facet_col_spacing=0.09) fig.update_xaxes(matches=None, showticklabels=True) return fig else: return no_update app.run_server(debug=True, use_reloader=False) Application In the screenshot below, all the "Morning"/"Daytime" observations have been filtered for TIMEINTERVAL, while the "Night" observations remain unaffected: | 4 | 1 |
74,447,766 | 2022-11-15 | https://stackoverflow.com/questions/74447766/tkinter-use-characters-bytes-offset-as-index-for-text-widget | I want to delete part of a text widget's content, using only character offset (or bytes if possible). I know how to do it for lines, words, etc. Looked around a lot of documentations: https://www.tcl.tk/man/tcl8.6/TkCmd/text.html#M24 https://tkdocs.com/tutorial/text.html https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/text-methods.html https://web.archive.org/web/20120112185338/http://effbot.org/tkinterbook/text.htm Here is an example mre: import tkinter as tk root = tk.Tk() text = tk.Text(root) txt = """Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse enim lorem, aliquam quis quam sit amet, pharetra porta lectus. Nam commodo imperdiet sapien, in maximus nibh vestibulum nec. Quisque rutrum massa eget viverra viverra. Vivamus hendrerit ultricies nibh, ac tincidunt nibh eleifend a. Nulla in dolor consequat, fermentum quam quis, euismod dui. Nam at gravida nisi. Cras ut varius odio, viverra molestie arcu. Pellentesque scelerisque eros sit amet sollicitudin venenatis. Proin fermentum vestibulum risus, quis suscipit velit rutrum id. Phasellus nisl justo, bibendum non dictum vel, fermentum quis ipsum. Nunc rutrum nulla quam, ac pretium felis dictum in. Sed ut vestibulum risus, suscipit tempus enim. Nunc a imperdiet augue. Nullam iaculis consectetur sodales. Praesent neque turpis, accumsan ultricies diam in, fermentum semper nibh. Nullam eget aliquet urna, at interdum odio. Nulla in mi elementum, finibus risus aliquam, sodales ante. Aenean ut tristique urna, sit amet condimentum quam. Mauris ac mollis nisi. Proin rhoncus, ex venenatis varius sollicitudin, urna nibh fringilla sapien, eu porttitor felis urna eu mi. Aliquam aliquam metus non lobortis consequat. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Aenean id orci dui.""" text.insert(tk.INSERT, txt) def test_delete(event=None): text.delete() # change this line here text.pack(fill="both", expand=1) text.pack_propagate(0) text.bind('<Control-e>', test_delete) root.mainloop() It display an example text inside a variable, inside a text widget. I use a single key binding to test some of the possible ways to do what I want on that piece of text. I tried a lot of things, both from the documentation(s) and my own desperation: text.delete(0.X): where X is any number. I thought since lines were 1.0, maybe using 0.X would work on chars only. It only work with a single char, regardless of what X is (even with a big number). text.delete(1.1, 1.3): This act on the same line, because I was trying to see if it would delete 3 chars in any direction on the same line. It delete 2 chars instead of 3, and it does so by omitting one char at the start of the first line, and delete 2 char after that. text.delete("end - 9c"): only work at the end (last line), and omit 7 chars starting from EOF, and then delete a single char after that. text.delete(0.1, 0.2): Does not do anything. Same result for other 0.X, 0.X combination. Example of what I try to achieve: Using the example text above would take too long, so let's consider a smaller string, say "hello world". Now let's say we use an index that start with 1 (doesn't matter but make things easier to explain), the first char is "h" and the last one is "d". So say I use chars range such as "2-7", that would be "ello w". Say I want to do "1-8"? -> "hello wo", and now starting from the end, "11-2", "ello world". 
This is basically similar to what f.tell() and f.seek() do. I want to do something like that but using only the content inside of the text widget, and then do something on those bytes/chars ranges (in the example above, I'm deleting them, etc). | Based on my own relentless testing and other answers here, I managed to get to a solution. import tkinter as tk from tkinter import messagebox # https://stackoverflow.com/a/29780454/12349101 root = tk.Tk() main_text = tk.Text(root) box_text = tk.Text(root, height=1, width=10) box_text.pack() txt = """hello world""" len_txt = len( txt) # get the total length of the text content. Can be replaced by `os.path.getsize` or other alternatives for files main_text.insert(tk.INSERT, txt) def offset(): inputValue = box_text.get("1.0", "end-1c") # get the input of the text widget without newline (since it's added by default) # focusing the other text widget, deleting and re-insert the original text so that the selection/tag is updated (no need to move the mouse to the other widget in this example) main_text.focus() main_text.delete("1.0", tk.END) main_text.insert(tk.INSERT, txt) to_do = inputValue.split("-") if len(to_do) == 1: # if length is 1, it probably is a single offset for a single byte/char to_do.append(to_do[0]) if not to_do[0].isdigit() or not to_do[1].isdigit(): # Only integers are supported messagebox.showerror("error", "Only integers are supported") return # trick to prevent the failing range to be executed if int(to_do[0]) > len_txt or int(to_do[1]) > len_txt: # total length is the maximum range messagebox.showerror("error", "One of the integers in the range seems to be bigger than the total length") return # trick to prevent the failing range to be executed if to_do[0] == "0" or to_do[1] == "0": # since we don't use a 0 index, this isn't needed messagebox.showerror("error", "Using zero in this range isn't useful") return # trick to prevent the failing range to be executed if int(to_do[0]) > int(to_do[1]): # This is to support reverse range offset, so 11-2 -> 2-11, etc first = int(to_do[1]) - 1 first = str(first).split("-")[-1:][0] second = (int(to_do[0]) - len_txt) - 1 second = str(second).split("-")[-1:][0] else: # use the offset range normally first = int(to_do[0]) - 1 first = str(first).split("-")[-1:][0] second = (int(to_do[1]) - len_txt) - 1 second = str(second).split("-")[-1:][0] print(first, second) main_text.tag_add("sel", '1.0 + {}c'.format(first), 'end - {}c'.format(second)) buttonCommit = tk.Button(root, text="use offset", command=offset) buttonCommit.pack() main_text.pack(fill="both", expand=1) main_text.pack_propagate(0) root.mainloop() Now the above works, as described in the "hello world" example in my post. It isn't a 1:1 clone/emulation of f.tell() or f.seek(), but I feel like it's close. The above does not use text.delete but instead select the text, so it's visually less confusing (at least to me). It works with the following offset type: reverse range: 11-2 -> 2-11 so the order does not matter normal range: 2-11, 1-8, 8-10... single offset: 10 or 10-10 so it can support single char/byte Now the main thing I noticed, is that '1.0 + {}c', 'end - {}c' where {} is the range, works by omitting its given range. If you were to use 1-3 as a range on the string hello world it would select ello wor. You could say it omitted h and ld\n, with the added newline by Tkinter (which we ignore in the code above unless it's part of the total length variable). 
The correct offset (or at least the one following the example I gave in the post above) would be 2-9. P.S: For this example, clicking on the button after entering the offsets range is needed. | 8 | 1 |
74,425,218 | 2022-11-13 | https://stackoverflow.com/questions/74425218/how-to-configure-mypy-to-ignore-a-stub-file-for-a-specific-module | I installed a "dnspython" package with "pip install dnspython" under Ubuntu 22.10 and made a following short script: #!/usr/bin/env python3 import dns.zone import dns.query zone = dns.zone.Zone("example.net") dns.query.inbound_xfr("10.0.0.1", zone) for (name, ttl, rdata) in zone.iterate_rdatas("SOA"): serial_nr = rdata.serial When I check this code snippet with mypy(version 0.990), then it reports an error: Module has no attribute "inbound_xfr" [attr-defined] for line number 7. According to mypy documentation, if a Python file and a stub file are both present in the same directory on the search path, then only the stub file is used. In case of "dnspython", the stub file query.pyi is present in the dns package and the stub file indeed has no attribute "inbound_xfr". When I rename or remove the stub file, then the query.py Python file is used instead of the stub file and mypy no longer complains about missing attribute. I guess this is a "dnspython" bug? Is there a way to tell to mypy that for query module, the stub file should be ignored? | I would recommend ignoring only the specific wrong line, not the whole module. dns.query.inbound_xfr("10.0.0.1", zone) # type: ignore[attr-defined] This will suppress attr-defined error message that is generated on that line. If you're going to take this approach, I'd also recommend running mypy with the --warn-unused-ignores flag, which will report any redundant and unused # type: ignore statements (for example, after updating the library). | 4 | 5 |
74,400,966 | 2022-11-11 | https://stackoverflow.com/questions/74400966/expand-wont-do-what-i-want-how-do-i-generate-a-custom-list-of-inputs-to-a-ru | I want to run a Snakemake workflow where the input is defined by a combination of different variables (e.g. pairs of samples, sample ID and Nanopore barcode,...): sample_1 = ["foo", "bar", "baz"] sample_2 = ["spam", "ham", "eggs"] I've got a rule using these: rule frobnicate: input: assembly = "{first_sample}_{second_sample}.txt" output: frobnicated = "{first_sample}_{second_sample}.frob" I now want to create a rule all that will do this for some combinations of the samples in sample_1 and sample_2, but not all of them. Using expand would give me all possible combinations of sample_1 and sample_2. How can I, for example, just combine the first variable in the first list with the first in the second and so on (foo_spam.frob, bar_ham.frob, and baz_eggs.frob)? And what if I want some more complex combination? | Using expand with other combinatoric functions By default, expand uses the itertools function product. However, it's possible to specify another function for expand to use. To combine the first variable in the first with the first in the second and so on, one can tell expand to use zip: sample_1 = ["foo", "bar", "baz"] sample_2 = ["spam", "ham", "eggs"] rule all: input: expand("{first_sample}_{second_sample}.frob", zip, first_sample=sample_1, second_sample=sample_2) will yield foo_spam.frob, bar_ham.frob, and baz_eggs.frob as inputs to rule all. Using regular Python code to generate your list The input generated by expand is ultimately just a list of file names. If you can't get where you want to with expand and another combinatoric function, it could be easier to just use regular Python code to generate the list yourself (for an example of this in action, see this question). The brute-force solution: just write the list yourself If your combination of inputs can't be arrived at programmatically at all, one last resort would be to write out the combinations you want by hand. For example: sample_1 = ["foo", "bar", "baz"] sample_2 = ["spam", "ham", "eggs"] all_frobnicated = ["foo_eggs.frob", "bar_spam.frob", "baz_ham.frob"] rule all: input: all_frobnicated This will, of course, mean your inputs are completely hardcoded, so if you want to use this workflow with a new batch, you'll have to write the sample combinations you want there out by hand as well. | 4 | 4 |
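As a concrete illustration of the "regular Python code" option mentioned in the answer above, the same pairing can be produced with a plain list comprehension and zip inside the Snakefile:

```python
# Snakefile sketch: build the target list with ordinary Python instead of expand()
sample_1 = ["foo", "bar", "baz"]
sample_2 = ["spam", "ham", "eggs"]

targets = [f"{s1}_{s2}.frob" for s1, s2 in zip(sample_1, sample_2)]

rule all:
    input:
        targets
```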
74,412,482 | 2022-11-12 | https://stackoverflow.com/questions/74412482/ibrk-tws-api-error-takes-4-positional-arguments-but-5-were-given | If I run a basic TWS example, I receive the error message below. If I comment out the error() callback it runs fine. I've tried this on several examples and get the same result. Exception has occurred: TypeError error() takes 4 positional arguments but 5 were given File "/Users/jayurbain/Dropbox/twsapi/Algorithmic Trading using Interactive Broker's Python API /ib_basic_app.py", line 20, in <module> app.run() Please advise. Here's the callback that is being overridden in wrapper.py: def error(self, reqId:TickerId, errorCode:int, errorString:str, advancedOrderRejectJson = ""): Here is the entire code: from ibapi.client import EClient from ibapi.wrapper import EWrapper class TradingApp(EWrapper, EClient): def __init__(self): EClient.__init__(self,self) def error(self, reqId, errorCode, errorString): print("Error {} {} {}".format(reqId,errorCode,errorString)) app = TradingApp() app.connect("127.0.0.1", 7497, clientId=1) app.run() | The solution to the problem was to execute python setup.py install rather than using pip to install ibapi. Thanks for your answers. | 5 | -2 |
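Independently of how ibapi is installed, the TypeError in the question comes from the override having fewer parameters than the error() signature quoted from wrapper.py. An override that matches that signature avoids the mismatch; the sketch below is based only on the code shown in the question:

```python
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class TradingApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

    # Match the signature quoted from wrapper.py in the question,
    # including the extra advancedOrderRejectJson argument.
    def error(self, reqId, errorCode, errorString, advancedOrderRejectJson=""):
        print("Error {} {} {} {}".format(reqId, errorCode, errorString, advancedOrderRejectJson))

app = TradingApp()
app.connect("127.0.0.1", 7497, clientId=1)
app.run()
```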
74,414,355 | 2022-11-12 | https://stackoverflow.com/questions/74414355/equivalent-for-r-dplyrs-glimpse-function-in-python-for-panda-dataframes | I find the glimpse function very useful in R/dplyr. But as someone who is used to R and is working with Python now, I haven't found something as useful for pandas DataFrames. In Python, I've tried things like .describe() and .info() and .head(), but none of these give me the useful snapshot which R's glimpse() gives us. Nice features which I'm quite accustomed to having in glimpse() include: All variables/column names as rows in the output All variable/column data types The first few observations of each column Total number of observations Total number of variables/columns Here is some simple code you could work with: R library(dplyr) test <- data.frame(column_one = c("A", "B", "C", "D"), column_two = c(1:4)) glimpse(test) # The output is as follows Rows: 4 Columns: 2 $ column_one <chr> "A", "B", "C", "D" $ column_two <int> 1, 2, 3, 4 Python import pandas as pd test = pd.DataFrame({'column_one':['A', 'B', 'C', 'D'], 'column_two':[1, 2, 3, 4]}) Is there a single function for Python which mirrors these capabilities closely (not multiple and not partly)? If not, how would you create a function that does the job precisely? | Here is one way to do it: def glimpse(df): print(f"Rows: {df.shape[0]}") print(f"Columns: {df.shape[1]}") for col in df.columns: print(f"$ {col} <{df[col].dtype}> {df[col].head().values}") Then: import pandas as pd df = pd.DataFrame( {"column_one": ["A", "B", "C", "D"], "column_two": [1, 2, 3, 4]} ) glimpse(df) # Output Rows: 4 Columns: 2 $ column_one <object> ['A' 'B' 'C' 'D'] $ column_two <int64> [1 2 3 4] | 8 | 6 |
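As a small usage convenience (plain monkey-patching, not an official pandas API), the helper from the answer above can also be attached to DataFrame so the call reads like the dplyr one:

```python
import pandas as pd

def glimpse(df: pd.DataFrame) -> None:
    print(f"Rows: {df.shape[0]}")
    print(f"Columns: {df.shape[1]}")
    for col in df.columns:
        print(f"$ {col} <{df[col].dtype}> {df[col].head().values}")

# Attach it so it can be called method-style; this only affects the current session.
pd.DataFrame.glimpse = glimpse

pd.DataFrame({"column_one": list("ABCD"), "column_two": [1, 2, 3, 4]}).glimpse()
```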
74,444,165 | 2022-11-15 | https://stackoverflow.com/questions/74444165/multiple-objective-functions-with-binary-variables-google-or-tools | I have a set of U users and a set of S servers. I want to maximize the number of users allocated to a server while minimizing the number of servers used (this means that I have two objective functions). Each user has some requirements w and each server has a total capacity of C. The solver variables are the following: # x[i,j] = True if user u[j] is allocated to server s[i] # x[i,j] = False otherwise # y[i] = True if server s[i] is used to serve users # y[i] = False otherwise As mentioned before, I want to maximize x[i,j] while minimizing y[i] The constraints are the following: Capacity constraint: Since each server i has a certain capacity, the allocation of j users must not exceed that capacity Proximity constraint: Only users located within the range of the server can be allocated to it. A user can be located in the overlapping range of multiple servers Constraint family: Ensures every user is allocated to at most one server. Using this library from ortools.sat.python import cp_model So far I've done: Create the solver variables (they are boolean) Create the constraints Maximize the x[i,j] variable Obtain the objective function For instance, if I have 10 users and 4 servers all the 10 users are allocated among the 4 servers What I need but haven't been able to accomplish: Maximize the x[i,j] variable AND Minimize the y[i] variable For the same 10 users and the same 4 servers above, all the 10 users can be allocated among just 2 servers and not 4 I have tried the solution given in this post but it is not working since I got that the problem does not have an optimal solution | there are usually 2 approaches: weighted sum: a * obj1 + b * obj2 lexicographic: optimize obj1, get optimal value, change objective to obj2, add constraint obj1 <= best_obj1_value (optional + slack). Then reoptimize. Bonus point when reusing the optimal solution with obj1 as a hint for the second solve. | 4 | 6 |
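A minimal CP-SAT sketch of approach 1 (the weighted sum), using the x[i,j] and y[i] variables described in the question; the demands, capacities and the all-reachable proximity matrix below are made-up illustration values, not data from the question:

```python
from ortools.sat.python import cp_model

num_users, num_servers = 10, 4
w = [10, 20, 15, 10, 25, 30, 10, 20, 15, 10]   # illustrative user requirements
C = [80, 80, 80, 80]                           # illustrative server capacities
in_range = [[True] * num_users for _ in range(num_servers)]  # illustrative proximity

model = cp_model.CpModel()
x = {(i, j): model.NewBoolVar(f"x_{i}_{j}")
     for i in range(num_servers) for j in range(num_users)}
y = {i: model.NewBoolVar(f"y_{i}") for i in range(num_servers)}

for j in range(num_users):
    model.AddAtMostOne(x[i, j] for i in range(num_servers))  # at most one server per user
for i in range(num_servers):
    # Capacity constraint; it also forces x[i, j] = 0 whenever server i is closed (y[i] = 0).
    model.Add(sum(w[j] * x[i, j] for j in range(num_users)) <= C[i] * y[i])
    for j in range(num_users):
        if not in_range[i][j]:
            model.Add(x[i, j] == 0)

# Weighted sum: a large reward per allocated user, a small penalty per opened server.
model.Maximize(
    sum(1000 * x[i, j] for i in range(num_servers) for j in range(num_users))
    - sum(y[i] for i in range(num_servers))
)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("users served:", sum(solver.Value(v) for v in x.values()))
    print("servers used:", sum(solver.Value(v) for v in y.values()))
```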
74,449,223 | 2022-11-15 | https://stackoverflow.com/questions/74449223/whats-the-diffrence-between-pandas-pd-to-pickle-and-pickle-module-pickle-dump | I would like to save a pandas DataFrame object as a pickle. What's the difference between using pandas.to_pickle and pickle.dumps? I've made some tests. Here's my test code: import pandas as pd import pickle df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'), index=['x', 'y']) # Save df.to_pickle('df1.pickle') with open('df2.pickle','wb') as f: pickle.dump(df, f) # Load df1 = pd.read_pickle('df1.pickle') with open('df2.pickle','rb') as f: df2 = pickle.load(f) # Is ok? if (df1 == df2).all().all(): print('Data is ok.') # Load opposite df3 = pd.read_pickle('df2.pickle') with open('df1.pickle','rb') as f: df4 = pickle.load(f) # Is ok? if (df3 == df4).all().all(): print('Data opposite is ok.') Result: Data is ok. Data opposite is ok. Is there any difference? I see some difference in the output pickle file size; the pandas version of the file is bigger. -rw-rw-r-- 1 spasz spasz 694 lis 15 17:38 df1.pickle -rw-rw-r-- 1 spasz spasz 662 lis 15 17:38 df2.pickle Tested on Python 3.8.0, pandas 1.5.0. | After analyzing the module code for pickle (Python 3.8) and pandas 1.5.0, here are my thoughts. Saving/dumping a DataFrame to pickle Pickle code: Defaults to pickle protocol DEFAULT_PROTOCOL if not specified. DEFAULT_PROTOCOL (==4) is not the HIGHEST_PROTOCOL (==5). Pandas code: Defaults to pickle protocol HIGHEST_PROTOCOL, Handles error GH#39002 when saving a pickle with protocol==5 and bz2 or xz compression, Reading a DataFrame from pickle Pickle code: No important difference to mention, Pandas code: Default use of a direct call to pickle.load() Catches some exceptions to handle data written in previous pandas versions and Python 2.7 unicode errors (see GH#28645 and GH#31988) My final thoughts As @DeepSpace said, pandas pretty much calls pickle functions directly. If you're creating new data with current pandas versions, and not using pickle protocol==5 with bz2/xz compression, then you can safely use the pickle module functions. | 3 | 4 |
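Building on the protocol difference described in the answer above, pinning both code paths to the same protocol should remove the size discrepancy, since DataFrame.to_pickle accepts a protocol argument. A short sketch:

```python
import pickle
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]], columns=list("AB"), index=["x", "y"])

# Force both paths onto the same pickle protocol (pandas otherwise uses HIGHEST_PROTOCOL).
df.to_pickle("df1.pickle", protocol=pickle.DEFAULT_PROTOCOL)
with open("df2.pickle", "wb") as f:
    pickle.dump(df, f, protocol=pickle.DEFAULT_PROTOCOL)
```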
74,407,024 | 2022-11-11 | https://stackoverflow.com/questions/74407024/how-to-optimize-the-code-and-reduce-memory-usage-python | The purpose is to reduce memory usage. Meaning that it should be optimized in a way that the hash is equal to the test hash. What I've tried so far: Adding __slots__ but it didn't make any changes. Change default dtype float64 to float32. Although it reduces the mem usage significantly, it brakes the test by changing the hash. Converted data into np.array reduced CPU times: from 13 s to 2.05 s but didn't affect the memory usage. The code to reproduce: rows = 40000000 trs = 10 random.seed(42) generated_data: tp.List[float] = np.array([random.random() for _ in range(rows)]) def df_upd(df_initial: pd.DataFrame, df_new: pd.DataFrame) -> pd.DataFrame: return pd.concat((df_initial, df_new), axis=1) class T: """adding a column of random data""" __slots__ = ['var'] def __init__(self, var: float): self.var = var def transform(self, df_initial: pd.DataFrame) -> pd.DataFrame: return df_upd(df_initial, pd.DataFrame({self.var: generated_data})) class Pipeline: __slots__ = ['df', 'transforms'] def __init__(self): self.df = pd.DataFrame() self.transforms = np.array([T(f"v{i}") for i in range(trs)]) def run(self): for t in self.transforms: self.df = t.transform(self.df) return self.df if __name__ == "__main__": # starting the monitoring tracemalloc.start() # function call pipe = Pipeline() %time df = pipe.run() print("running") # displaying the memory current, peak = tracemalloc.get_traced_memory() print(f"Current memory usage is {current / 10**3} KB ({(current / 10**3)*0.001} MB); Peak was {peak / 10**3} KB ({(peak / 10**3)*0.001} MB); Diff = {(peak - current) / 10**3} KB ({((peak - current) / 10**3)*0.001} MB)") # stopping the library tracemalloc.stop() # should stay unchanged %time hashed_df = hashlib.sha256(pd.util.hash_pandas_object(df, index=True).values).hexdigest() print("hashed_df", hashed_df) assert hashed_df == test_hash print("Success!") | If you avoid pd.concat() and use the preferred way of augmenting dataframes: df["new_col_name"] = new_col_data this will reduce peak memory consumption significantly. In your code it is sufficient to fix the Transform class: class Transform: """adding a column of random data""" __slots__ = ['var'] def __init__(self, var: str): self.var = var def transform(self, df: pd.DataFrame) -> pd.DataFrame: df[self.var] = generated_data return df (Note that I also changed the type of var from float to str to reflect how it is used in the code). In my machine I went from: Current memory usage is 1600110.987 KB (1600.110987 MB); Peak was 4480116.325 KB (4480.116325 MB); Diff = 2880005.338 KB (2880.005338 MB) to: Current memory usage is 1760101.105 KB (1760.101105 MB); Peak was 1760103.477 KB (1760.1034769999999 MB); Diff = 2.372 KB (0.002372 MB) (I am not sure why the current memory usage is slightly higher in this case). For faster computation, you may want to do some pre-allocation. 
To do that, you could replace, in Pipeline's __init__(): self.df = pd.DataFrame() with: self.df = pd.DataFrame(data=np.empty((rows, trs)), columns=[f"v{i}" for i in range(trs)]) If you want to get even faster, you can compute the DataFrame right away in the Pipeline's __init__, e.g.: class Pipeline: __slots__ = ['df', 'transforms'] def __init__(self): self.df = pd.DataFrame(data=generated_data[:, None] + np.zeros(trs)[None, :], columns=[f"v{i}" for i in range(trs)]) def run(self): return self.df but I assume your Transform is a proxy of a more complex operation and I am not sure this simplification is easy to adapt beyond the toy code in the question. | 4 | 3 |
74,449,864 | 2022-11-15 | https://stackoverflow.com/questions/74449864/user-is-not-authenticated-in-django-it-shows-errors | I am creating a website so I finshed register page in login page, every details are correct but that is showing please verify your details I mean the else part also I put in a print statement after username and password same details are printing when I typed but not access login. views.py def sign_in(request): if request.method == "POST": username=request.POST['username'] password=request.POST['password'] print(username,password) user = authenticate(username=username,password=password) if user is not None: login(request,user) messages.success(request,"you are logged successfully ") return redirect("front_page") else: messages.error(request,"please verify your details") return redirect("sign_in") return render(request,"login.html") def front_page(request): return render(request,"frontpage.html") urls.py path('log_in',views.sign_in,name="sign_in"), path('front_page',views.front_page,name="front_page"), html <div class="signup-form"> <form action="{% url 'sign_in' %}" method="POST"> {% csrf_token %} <h2 class="text-center">Login</h2> <p class="text-center">Please fill in this form to login your account!</p> <hr> <div class="form-group"> <div class="input-group"> <div class="input-group-prepend"> <span class="input-group-text"> <span class="fa fa-user"></span> </span> </div> <input type="text" class="form-control" name="username" placeholder="Username" required="required"> </div> </div> <div class="form-group"> <div class="input-group"> <div class="input-group-prepend"> <span class="input-group-text"> <i class="fa fa-lock"></i> </span> </div> <input type="password" class="form-control" name="password" placeholder="Password" required="required"> </div> </div> <div class="form-group d-flex justify-content-center"> <button type="submit" class="btn btn-primary btn-lg">Login</button> </div> <div class="text-center text-primary">Dont have a account? <a href="{% url 'register' %}">Create here </a></div> </form> </div> I just want to login with username and password, but it shows please verify your details, but all the details are correct. Any ideas? | authenticate() also requires request as an argument so it should be: user = authenticate(request,username=username,password=password) Your view for registering users should be like this: def register(request): if request.method == "POST": username=request.POST['username'] password=request.POST['password'] confirm_password=request.POST['confirm_password'] if password == confirm_password: if User.objects.filter(username=username).exists(): messages.info(request,"user name was taken please choose another one") return redirect('register') else: user1=User.objects.create_user(username=username,password=password) user1.is_active = False messages.success(request,"you account was created successfully thank you for choosing us ") return redirect("sign_in") else: messages.info(request,"please verify your passwords") return redirect('register') else: return render(request,'register.html') | 3 | 4 |
74,393,322 | 2022-11-10 | https://stackoverflow.com/questions/74393322/tkinter-optionmenu-cannot-open-a-second-time-using-space | I am having some trouble working around what I can only assume is a bug in Tkinter. from tkinter import * def refocus(event, obj): obj.focus() root = Tk() options = ["Hello", "world", "How", "are", "you"] v1 = StringVar() v2 = StringVar() v3 = StringVar() o1 = OptionMenu(root, v1, *options) o1.configure(takefocus=1) o2 = OptionMenu(root, v2, *options) o2.configure(takefocus=1) o3 = OptionMenu(root, v3, *options) o3.configure(takefocus=1) o1.bind("<Configure>", lambda e=Event(), o=o1: refocus(e, o)) o2.bind("<Configure>", lambda e=Event(), o=o2: refocus(e, o)) o3.bind("<Configure>", lambda e=Event(), o=o3: refocus(e, o)) o1.pack(side=TOP) o2.pack(side=TOP) o3.pack(side=TOP) root.mainloop() From the code there are 3 OptionMenus which I am trying to navigate using tab, space, arrows and Enter keys on the keyboard. The issue is as soon as a menu box pops up it seems to loose focus of the original Option menu. To fix this I have bound configure which re-focuses the option box that it was on, but this only works if the input has changed so I need a new method of doing this. Still there is an issue where if I select the wrong option, i have to cycle through all the other inputs (OptionMenus, Entrys, Checkboxes etc.) to get back to that option for it to open again. In cases where there is only 1 OptionMenu then it will not work until I click another input with a mouse I am looking for a way that I can focus back on the OptionMenu after the menu part has lost focus. I have also tried using o1['menu'].bind(... but this did not work at all. example process: o1 focused -> pressed space -> open menu -> arrows to move -> enter to select -> focus on o1 -> press space -> open menu -> arrows to move -> enter to select -> focus on o1 -> press tab -> o2 focused | Look at this: import tkinter as tk def open_option_menu(event): # Get the widget from the event that tkinter passed in obj = event.widget # Calculate the x/y position of the popup window x = obj.winfo_rootx() y = obj.winfo_rooty() + obj.winfo_height() # Show the popup window # obj["menu"] returns a `tk.Menu` object which has a `.post` method obj["menu"].post(x, y) # Stop tkinter from showing the popup window again. return "break" root = tk.Tk() options = ["Hello", "world", "How", "are", "you"] v1 = tk.StringVar(root) v2 = tk.StringVar(root) v3 = tk.StringVar(root) o1 = tk.OptionMenu(root, v1, *options) o2 = tk.OptionMenu(root, v2, *options) o3 = tk.OptionMenu(root, v3, *options) o1.configure(takefocus=True) o2.configure(takefocus=True) o3.configure(takefocus=True) o1.bind("<space>", open_option_menu) o2.bind("<space>", open_option_menu) o3.bind("<space>", open_option_menu) o1.pack() o2.pack() o3.pack() root.mainloop() I think it's a bug in tkinter so I created a function open_option_menu that shows the popup window. If you don't like something about tkinter, you can rebuild it yourself with the functionality that you need. If you want to know more about the .post method, type help(tkinter.Menu.post) in python. Note, this problem only exists on Windows, so the solution only works with Windows. On Ubuntu, the popup isn't focused when its shown. | 5 | 3 |
74,432,327 | 2022-11-14 | https://stackoverflow.com/questions/74432327/free-memory-as-i-iterate-over-a-list | I have a hypothetical question regarding the memory usage of lists in python. I have a long list my_list that consumes multiple gigabytes if it is loaded into memory. I want to loop over that list and use each element only once during the iteration, meaning I could delete them from the list after looping over them. While I am looping, I am storing something else in memory, meaning the memory I allocated for my_list is now needed for something else. Thus, ideally, I would like to delete the list elements and free the memory while I am looping over them. I assume, in most cases, a generator would make the most sense here. I could dump my list to a csv file and then read it line by line in a for-loop. In that case, my_list would never be loaded into memory in the first place. However, let's assume for the sake of discussion I don't want to do that. Is there a way of releasing the memory of a list as I loop over it? The following does NOT work: >>> my_list = [1,2,3] >>> sys.getsizeof(my_list) 80 >>> my_list.pop() >>> sys.getsizeof(my_list) 80 or >>> my_list = [1,2,3] >>> sys.getsizeof(my_list) 80 >>> del my_list[-1] >>> sys.getsizeof(my_list) 80 even when gc.collect() is called explicitly. The only way that I get to work is copying the array (which at the time of copying would require 2x the memory and thus is again a problem): >>> my_list = [1,2,3] >>> sys.getsizeof(my_list) 80 >>> my_list.pop() >>> my_list_copy = my_list.copy() >>> sys.getsizeof(my_list_copy) 72 The fact that I don't find information on this topic indicates to me that probably the approach is either impossible or bad practice. If it should not be done this way, what would be the best alternative? Loading from csv as a generator? Or are there even better ways of doing this? EDIT: as @Scott Hunter pointed out, the garbage collector works for much larger lists: >>> my_list = [1] * 10**9 >>> for i in range(10): ... for j in range(10**8): ... del my_list[-1] ... gc.collect() ... print(sys.getsizeof(my_list)) Prints the following: 8000000056 8000000056 8000000056 8000000056 8000000056 4500000088 4500000088 2531250112 1423828240 56 | Many of your assumptions here are incorrect. First biggie is the assumption that you can delete items as you loop over them with a for loop. You can't. You could with a while loop of the form: while my_list: item=my_list.pop(0) process(item) # Each my_list[0] element ref_count-- each loop... # If the ref_count==0, item is garbage collected automatically # But this is very slow with a Python list. # Use a collections.deque instead The garbage collector in Python is virtually seamless and automatic. It is rare to ever have to call the garbage collector from your program with programs that are written in common programming patterns. I can't think of anytime that it was needed in my use. To answer one of your questions, if you call .pop on a list, the object pop'ed off the list has it reference count decreased. If it reaches zero, that object if garbage collected automatically -- no need to call gc.collect(). The main use of calling gc.collect() is to delete self referring items that may have confused the default garbage collector. (ie, a = []; a.append(a); del a. In this case, the ref count for a never reaches 0 and is not deleted. Here is an example.) Python (depending in implementation) allocates and frees memory in blocks far larger than individual objects that are usually used in lists. 
Each .append or .pop to/from a list either goes to shared heap or then to a new allocation or release. See here. If you try and do per item memory management -- unless each item is huge -- it will be far less efficient than Python's automatic memory management. You state that you are reading a large list from a file and then using those items for something else. So long as the items are not mutated -- usually this does not result in a new copy of the item. It results in the reference count of each item being increased. If you read a large list from a file, mutate each item and keep a copy, then indeed your memory use goes up. However, your first list is automatically deleted when it goes out of scope. All these issues are moot if you process the file line-by-line or use a generator of some sort to do so. Here is an example. Assume you want to take a big text file and 1) read every line; 2) change "_" to ":" in every line; 3) have a list with every line so processed. Given this 1.3 GB, 100,000,000 line file: # Create the file with awk % awk -v cnt=100000000 'BEGIN{ for (i=1; i<=cnt; i++) print "Line_" i }' >file % ls -lh file -rw-r--r-- 1 dawg wheel 1.3G Nov 14 12:34 file % printf "%s\n...\n%s\n" $(head -n 1 file) $(tail -n 1 file) Line_1 ... Line_100000000 You can process that file several different ways. First is as you as thinking about: from collections import deque # If you don't use a deque, this is too SLOW with a Python list def process(dq): rtr=deque() while dq: line=dq.popleft() line=line.rstrip().replace("_", ":") rtr.append(line) return rtr with open('/tmp/file') as f: dq=deque(f.readlines()) new_dq=process(dq) print(new_dq[-1]) The second is with a line by line generator: def process(line): return line.rstrip().replace("_", ":") with open('/tmp/file') as f: new_li=list(process(line) for line in f) print(new_li[-1]) The second method is a) Faster; b) less memory and c) easier and more Pythonic to write. Don't overthink trying to manage memory. Python is a lot easier than C. | 5 | 3 |
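The answer above notes that gc.collect() is mainly needed for self-referencing objects and alludes to an example; here is a minimal sketch of that situation (the Node class name is just an illustration, not from the original answer):

```python
import gc

class Node:
    pass

a = Node()
a.self_ref = a      # reference cycle: the refcount never drops to zero
del a               # plain reference counting cannot reclaim the object now

unreachable = gc.collect()   # the cyclic garbage collector finds and frees it
print(unreachable)           # non-zero: the cycle was detected and collected
```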
74,440,410 | 2022-11-15 | https://stackoverflow.com/questions/74440410/sklearn-evaluate-accuracy-precision-recall-f1-show-same-result | I want to evaluate with accuracy, precision, recall, f1 like this code but it show same result. df = pd.read_csv(r'test.csv') X = df.iloc[:,:10] Y = df.iloc[:,10] X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2) clf = DecisionTreeClassifier() clf = clf.fit(X_train,y_train) predictions = clf.predict(X_test) accuracy = accuracy_score(y_test, predictions) precision = precision_score(y_test, predictions,average='micro') recall = recall_score(y_test, predictions,average='micro') f1 = f1_score(y_test, predictions,average='micro') print("Accuracy: ", accuracy) print("precision: ", precision) print("recall: ", recall) print("f1: ", f1) It show output like this. Accuracy: 0.8058823529411765 precision: 0.8058823529411765 recall: 0.8058823529411765 f1: 0.8058823529411765 The output is same value. How to fix it? | According to sklearn's documentation, the behavior is expected when using micro as average and when dealing with a multiclass setting: Note that if all labels are included, βmicroβ-averaging in a multiclass setting will produce precision, recall and F that are all identical to accuracy. Here is a nice blog article describing why these scores can be equal (also with an intuitive example) TL;DR F1 equals recall and precision if recall == precision In the case of micro averaging, the number of false positive always equals the number of false negative. Thus, recall == precision Finally, note that micro F1 always equals accuracy. See here | 4 | 6 |
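To make the accepted answer's point concrete, here is a small sketch with made-up labels showing that micro-averaged precision, recall and F1 all collapse to accuracy in a multiclass setting:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 2, 1, 0, 1, 1]

# Micro averaging pools TP/FP/FN over all classes; every misclassification counts
# as one FP and one FN at the same time, so precision == recall == F1 == accuracy.
print(accuracy_score(y_true, y_pred))                    # 0.75
print(precision_score(y_true, y_pred, average="micro"))  # 0.75
print(recall_score(y_true, y_pred, average="micro"))     # 0.75
print(f1_score(y_true, y_pred, average="micro"))         # 0.75
```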
74,442,230 | 2022-11-15 | https://stackoverflow.com/questions/74442230/the-notion-of-block-in-python | The documentation states: A Python program is constructed from code blocks. A block is a piece of Python program text that is executed as a unit. The following are blocks: a module, a function body, and a class definition. This seems to imply, contrary to what I had thought, that an indented piece of code, such as the body of an if-statement or a for-loop is not a block. Am I reading this correctly? What does it mean to be executed as a unit (e.g. why wouldn't a for-loop fit this definition)? What would you call an indented piece of code if not a block? | This seems to imply, contrary to what I had thought, that an indented piece of code, such as the body of an if-statement or a for-loop is not a block. Indeed, at least in the technical context of the Python language reference, what we would normally call an "indented block" is not a "block". It's not unusual in technical specifications that words are given specific contextual meanings which are different to their usual meanings, and unfortunately sometimes this happens when the usual meaning is also a technical one. What would you call an indented piece of code if not a block? The Python language reference calls it a suite. But I think most Python programmers would rather call it a block, unless they were proposing an edit to the language reference. What does it mean to be executed as a unit (e.g. why wouldn't a for-loop fit this definition)? This means that a "block" (in the strict technical sense of the Python language reference) corresponds with a code object, is executed in a stack frame, and has its own lexical scope. A for loop does not meet this definition because it does not have its own code object, stack frame or lexical scope; its bytecode is part of the containing block's code object, it is executed in the containing block's stack frame, and it has the containing block's lexical scope. Practically, all this means is that variables declared inside a for loop (or any other suite) are still in scope after the suite has executed, until the end of the "block". I really don't like variables in compound statements not being local to the statement... Why wouldn't Python allow me to have a small scope for variables? C-like languages have "block scope" (where the word "block" here means a sequence of statements delimited by a pair of braces), but they also allow you to declare variables without assigning to them yet, so you can declare them in the correct scope even if the assignment will occur in a different scope. For example, the following is totally fine in C, Java or similar languages: int x; if(condition) { x = 5; } else { x = 7; } But Python is a dynamic language in which you don't need to declare variables before you assign to them, so the equivalent Python code would be: if condition: x = 5 else: x = 7 If Python's "suites" had their own lexical scopes then this pattern wouldn't work; x would be local to the if and else scopes and would not be in scope afterwards. To get the same behaviour as the C code, you'd have to write a dummy x = None assignment before the if statement in order to declare x in the correct scope (and, perhaps, nonlocal x so that the assignments in the if and else suites would not instead declare new local variables in their own scopes). This would be silly and with very little upside, so it makes sense that Python is not designed that way. | 3 | 6 |
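A short example of the practical consequence described in the answer: a suite (the indented body of a for or if) has no scope of its own, only the enclosing block (here, a function body) does:

```python
def demo():
    for i in range(3):
        last = i   # assigned inside the loop's suite
    # Both names are still visible here, because the suite belongs to the
    # function's block, and the block is the unit that owns the scope.
    return last, i

print(demo())   # (2, 2)
```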
74,439,252 | 2022-11-15 | https://stackoverflow.com/questions/74439252/pandas-copying-values-of-a-certain-row-based-on-a-different-column | What I'm trying to achieve is that when a row in col2 has a 1, it will copy that 1 onto all the other values in col2 as long as the rows in col1 have the same name. As an example, if the dataframe looks like this col1 col2 xx 1 xx 0 xx 0 xx 0 yy 0 yy 0 yy 0 zz 0 zz 0 zz 1 The output would be col1 col2 xx 1 xx 1 xx 1 xx 1 yy 0 yy 0 yy 0 zz 1 zz 1 zz 1 | Use groupby.transform('max'): df['col2'] = df.groupby('col1')['col2'].transform('max') Output: col1 col2 0 xx 1 1 xx 1 2 xx 1 3 xx 1 4 yy 0 5 yy 0 6 yy 0 7 zz 1 8 zz 1 9 zz 1 | 5 | 6 |
74,438,709 | 2022-11-14 | https://stackoverflow.com/questions/74438709/how-is-secret-key-txt-more-secure-in-django-project | I apologize if this is a duplicate question but I can't find an answer online. In Django Checklist Docs I see the following to keep secret key secure. with open('/etc/secret_key.txt') as f: SECRET_KEY = f.read().strip() My project is deployed with AWS EBS. I've created a separate file called "secret_key.txt" which holds the key. How is this more secure than keeping the key in the settings.py config file? If someone can access my projects settings.py file to access the key, would they not be able to access the "secret_key.txt" file as well? How is creating a "secret_key.txt" file more secure? I've checked Google and Stack Overflow for reasoning but can't find an answer. Currently all sensitive information is protected using an .env file and including this file in .gitignore. | You usually add that file to the .gitignore, such that the file is not part of the (GitHub) repository. This means that you can add (other) settings in the project, and you load "sensitive" settings through environment variables, or files. This hackernoon post for example, discusses four ways to define sensitive variables such that these are not defined in files that you add to the subversioning system. Usually it is advisable to incude a settings.py in the project however, stripped from sensitive data. That way a peer can easily set up the project all the other (required) settings, and thus only has to define a limited number of sensitive variable to get the project running. I think however using an environment variable might be better, since it is probably easier to specify this, and thus to manage a number of processes that all might work with different values. | 4 | 9 |
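As a concrete illustration of the environment-variable approach mentioned in the question and answer, a minimal sketch for settings.py; the variable name DJANGO_SECRET_KEY is an arbitrary choice for this example, not something Django mandates:

```python
# settings.py
import os

# The secret is read from the process environment, which is set on the server
# (for example in the EBS environment configuration) and never committed to git.
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # hypothetical variable name
```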
74,435,728 | 2022-11-14 | https://stackoverflow.com/questions/74435728/i-cant-run-geckodriver-python-selenium-winerror-216 | I've got the win32 drivers from https://github.com/mozilla/geckodriver/releases and placed the exe under the python38 folder. I'm running Windows 11. OSError: [WinError 216] This version of %1 is not compatible with the version of Windows you're running. Check your computer's system information and then contact the software publisher. Here you can find the full terminal output: https://pastebin.com/k3Gvm2nU from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By driver = webdriver.Firefox() driver.get("http://www.python.org") assert "Python" in driver.title elem = driver.find_element(By.NAME, "q") elem.clear() elem.send_keys("l") elem.send_keys(Keys.RETURN) assert "No results found." not in driver.page_source This is the code. I was expecting it to open a Firefox page but it doesn't; I think geckodriver isn't running because it's incompatible for some reason. | You can use webdriver_manager to get rid of driver problems. For Firefox it works as follows. For Selenium 3: from selenium import webdriver from webdriver_manager.firefox import GeckoDriverManager driver = webdriver.Firefox(executable_path=GeckoDriverManager().install()) For Selenium 4: from selenium import webdriver from selenium.webdriver.firefox.service import Service as FirefoxService from webdriver_manager.firefox import GeckoDriverManager driver = webdriver.Firefox(service=FirefoxService(GeckoDriverManager().install())) | 5 | 6 |
74,409,966 | 2022-11-12 | https://stackoverflow.com/questions/74409966/how-to-replace-setup-py-with-a-pyproject-toml-for-a-native-c-build-dependency | I came across this little project for creating a C-compiled version of the Black-Scholes function to be used in python. Although the example code seem to have been published in July this year, it seem that the use setup.py type of build has been deprecated beyond legacy builds. Any compilation fails, first complaining about missing MS C++ 14 compiler (which is not true), then further investigation, seem to indicate that setup.py can no longer be used. Q: How can I convert the setup.py to a valid pyproject.toml file? from setuptools import setup, Extension ext = Extension('bs', sources=['black_scholes/bs.c']) setup( name="black_scholes", version="0.0.1", description="European Options Pricing Library", packages=['black_scholes'], ext_modules=[ext] ) From the somewhat ambiguous website (above), I created the following tree structure. $ tree -L 3 ./ ./ βββ black_scholes β βββ black_scholes β β βββ Makefile β β βββ __init__.py β β βββ bs.c β βββ pyproject.toml β βββ setup.py βββ README.md βββ bs_test.py Possibly relevant questions: Is there a simple way to convert setup.py to pyproject.toml Pip error: Microsoft Visual C++ 14.0 is required How to solve "error: Microsoft Visual C++ 14.0 or greater is required" when installing Python packages? 'setup.py install is deprecated' warning shows up every time I open a terminal in VSCode | After having wasted 2 days on trying to circumvent the required Visual Studio C++ Build tools requirements, the only unfortunate option that would work, was to submit to the >7GB download in order to get my 20 line C-function to compile and install nicely on Py3.10. (Follow this.) Using an external _custom_build.py Here are the files that worked: # setup.py from setuptools import setup, Extension ext = Extension('bs', sources=['black_scholes/bs.c']) setup( name="black_scholes", version="0.0.1", description="European Options Pricing Library", packages=['black_scholes'], ext_modules=[ext] ) Then for the pyproject.toml: # pyproject.toml [build-system] requires = ["setuptools>=61.0", "cython"] build-backend = "setuptools.build_meta" [project] name = "black_scholes" description = "European Options Pricing Library" version = "0.0.1" readme = "README.md" requires-python = ">=3.7" authors = [ { name="Example Author", email="[email protected]" }, ] classifiers = [ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", ] keywords = ["quant", "portfolio"] [project.urls] "Homepage" = "https://pyquantnews.com/how-to-45x-python-performance-with-c/" [tool.setuptools] py-modules = ["_custom_build"] [tool.setuptools.cmdclass] build_py = "_custom_build.build_py" This is using an external build file called _custom_build.py, as suggested from the SO link above. # _custom_build.py from setuptools import Extension from setuptools.command.build_py import build_py as _build_py class build_py(_build_py): def run(self): self.run_command("build_ext") return super().run() def initialize_options(self): super().initialize_options() if self.distribution.ext_modules == None: self.distribution.ext_modules = [] self.distribution.ext_modules.append( Extension( "bs", sources=["black_scholes/bs.c"], extra_compile_args=["-std=c17", "-lm", "-Wl", "-c", "-fPIC"], ) ) However, it seem that the extra_compile_args are completely ignored... 
It would have been great if someone could come up with an alternative solution to build using smaller compiler, like MinGW or so. The final tree should look like this: $ tree -L 3 . βββ black_scholes β βββ black_scholes β β βββ Makefile β β βββ bs.c β βββ .gitignore β βββ README.md β βββ __init__.py β βββ _custom_build.py β βββ pyproject.toml β βββ setup.py βββ bs_test.py Using a src build with setup.py & pyproject.toml UPDATE: 2022-11-14 The above procedure turned out to be very messy and also gave different results depending on how you used pip install. In the end I completely changed the flat folder structure to use a src based structure. The working project now look like this: # tree -L 3 . βββ docs βββ examples β βββ fbs_test.py βββ src β βββ black_scholes β β βββ __init__.py β βββ lib β βββ Makefile β βββ fbs.c βββ .gitignore βββ LICENSE.md βββ README.md βββ clean.sh βββ pyproject.toml βββ setup.py and the content of the files are like this: # setup.py from setuptools import setup, find_packages, Extension ext = Extension( name = 'black_scholes.fbs', # 'mypackage.mymodule' sources = ['src/lib/fbs.c'], # list of source files (to compile) include_dirs = ['src/lib'], # list of directories to search for C/C++ header files (in Unix form for portability) py_limited_api = True # opt-in flag for the usage of Python's limited API <python:c-api/stable>. ) setup_args = dict( packages = find_packages(where="src"), # list package_dir = {"": "src"}, # mapping ext_modules = [ext], # list scripts = ["examples/fbs_test.py"] # list ) setup(**setup_args) and # pyproject.toml [build-system] requires = ['setuptools>=61.0'] # 'cython' build-backend = 'setuptools.build_meta' [project] name = 'black_scholes' # ... [tool.setuptools] package-dir = {"" = "src"} #py-modules = ["_custom_build"] [tool.setuptools.packages.find] where = ["src"] Here it is very important that the package name coincide with the src/black_scholes directory name. If not you will have all sorts of very weird run-time errors even after the package has compiled and installed. | 7 | 9 |
74,432,427 | 2022-11-14 | https://stackoverflow.com/questions/74432427/how-to-install-python-libraries-in-docker-file-on-ubuntu | I want to create a Docker image (Docker version: 20.10.20) that contains Python libraries from a requirement.txt file listing 50 libraries. How can I proceed without running into root user permission problems? Here is the file: From ubuntu:latest RUN apt update RUN apt install python3 -y WORKDIR /Destop/DS # COPY requirement.txt ./ # RUN pip install -r requirement.txt # it contains only pandas==1.5.1 COPY script2.py ./ CMD ["python3", "./script2.py"] It fails at the requirement.txt step: it takes a lot of time while creating the image because it asks for root permission. | For me the only problem in your Dockerfile is in the line RUN apt install python -y. This errors with Package 'python' has no installation candidate. That is expected, since python refers to version 2.x of Python, which is deprecated and no longer present in the default Ubuntu repositories. Changing your Dockerfile to use Python version 3.x worked fine for me. FROM ubuntu:latest RUN apt update RUN apt install python3 python3-pip -y WORKDIR /Destop/DS COPY requirement.txt ./ RUN pip3 install -r requirement.txt COPY script2.py ./ CMD ["python3", "./script2.py"] To test I used requirement.txt pandas==1.5.1 and script2.py import pandas as pd print(pd.__version__) With this, building the Docker image and running a container from it executed successfully. docker build -t myimage . docker run --rm myimage | 3 | 5 |
74,426,028 | 2022-11-14 | https://stackoverflow.com/questions/74426028/pyplot-3d-scatter-plot-zlabel | Minimum working example: #Python import matplotlib.pyplot as plt x = [0, 1, 2, 3, 4, 5] y = [0, 1, 2, 3, 4, 5] z = [0, 1, 2, 3, 4, 5] fig = plt.figure() ax = plt.axes(projection="3d") ax.scatter(x, y, z, c='g', s=20) plt.xlabel("X data") plt.ylabel("Y data") #plt.zlabel("Z data") DOES NOT WORK ax.view_init(60,35) plt.show() Question: how to set up the label of the Z axis? For some reason plt has the xlabel and ylabel properties, but not the zlabel. | For 3D plots the labels need to be changed using the axes objects. Try something like this ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') | 3 | 4 |
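Putting the answer's three calls into the question's own example gives a complete runnable sketch:

```python
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4, 5]
y = [0, 1, 2, 3, 4, 5]
z = [0, 1, 2, 3, 4, 5]

fig = plt.figure()
ax = plt.axes(projection="3d")
ax.scatter(x, y, z, c='g', s=20)

# Label all three axes through the Axes3D object instead of plt.xlabel/plt.ylabel
ax.set_xlabel("X data")
ax.set_ylabel("Y data")
ax.set_zlabel("Z data")

ax.view_init(60, 35)
plt.show()
```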
74,425,460 | 2022-11-13 | https://stackoverflow.com/questions/74425460/how-to-print-whole-number-without-zeros-after-decimal-point | I'm trying to print a whole number (such as 39 for example) in the following format: 39. It must not be a str type object like '39.' for example, but a number e. g. n = 39.0 should be printed like 39. n = 39.0 #magic stuff with output 39. I tried using :.nf methods (:.0f apparently -- didn't work), print(float(39.)) or just print(39.) In the first case, it looks like 39, in the second and third 39.0 I also tried float(str(39) + '.') and obviously it didn't work Sorry, if it's a stupid question, I've been trying to solve it for several hours already, still can't find any information. | From Format Specification Mini-Language (emphasis mine): The '#' option causes the βalternate formβ to be used for the conversion. The alternate form is defined differently for different types. This option is only valid for integer, float and complex types. For integers, when binary, octal, or hexadecimal output is used, this option adds the respective prefix '0b', '0o', '0x', or '0X' to the output value. For float and complex the alternate form causes the result of the conversion to always contain a decimal-point character, even if no digits follow it. Normally, a decimal-point character appears in the result of these conversions only if a digit follows it. In addition, for 'g' and 'G' conversions, trailing zeros are not removed from the result. >>> n=39.0 >>> print(f'{n:#.0f}') 39. | 4 | 8 |
74,421,106 | 2022-11-13 | https://stackoverflow.com/questions/74421106/why-does-this-specific-piece-of-code-using-random-random-run-slower-in-python-3 | I'm just curious to hear other people's thoughts on why this specific piece of code might might run slower in Python 3.11 than in Python 3.10.6. Cross-posted from here. I'm new here - please kindly let me know if I'm doing something wrong. test.py script: import timeit from random import random def run(): for i in range(100): j = random() t = timeit.timeit(run, number=1000000) print(t) Commands: (base) conda activate python_3_10_6 (python_3_10_6) python test.py 5.0430680999998 (python_3_10_6) conda activate python_3_11 (python_3_11) python test.py 5.801756700006081 | This looks like it's probably the PEP 659 optimizations not paying off for random.random. PEP 659 is an effort to JIT-optimize many common operations. (Not JIT compilation, but definitely JIT optimization.) It pays off for most Python code, but I think random.random isn't covered. random.random is a method (of a hidden random.Random instance) written in C, with no arguments other than self, so it should be using the METH_NOARGS calling convention. This calling convention has no specialized fast path. Both specialize_c_call and _Py_Specialize_Call just bail out instead of specializing the call. When PEP 659 doesn't pay off, the work that goes into supporting it is just overhead. I'm not sure what parts contribute how much overhead, but the bytecode is longer than before, due to generating PRECALL and CALL instructions (although I think there's some work going on to improve that), plus attempting specialization and tracking when to attempt specialization has its own overhead. | 3 | 7 |
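For readers who want to poke at this themselves, Python 3.11's dis module can display the adaptive (specialized) bytecode after a function has been warmed up; whether the CALL issued for random() specializes is exactly the point made above. This is only an inspection sketch, and the exact instructions shown depend on the interpreter version:

```python
import dis
from random import random

def run():
    for i in range(100):
        j = random()

for _ in range(20):   # run it enough times for the specializing interpreter to kick in
    run()

dis.dis(run, adaptive=True)   # Python 3.11+: show specialized instructions where present
```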
74,372,527 | 2022-11-9 | https://stackoverflow.com/questions/74372527/typeerror-cart-takes-no-arguments | This is views.py file from the cart section where I want to add product, remove product and show product details in the cart. Error is : Cart() takes no arguments. from django.shortcuts import render, redirect, get_object_or_404 from django.views.decorators.http import require_POST from ecommerce.models import Product from .cart import Cart from .forms import CartAddProductForm @require_POST def cart_add(request, product_id): cart = Cart(request) product = get_object_or_404(Product, id=product_id) form = CartAddProductForm(request.POST) if form.is_valid(): cd = form.cleaned_data cart.add(product=product, quantity=cd['quantity'], override_quantity=cd['override']) return redirect('cart:cart_detail') @require_POST def cart_remove(request, product_id): cart = Cart(request) product = get_object_or_404(Product, id=product_id) cart.remove(product) return redirect('cart:cart_detail') def cart_detail(request): cart = Cart(request) products = [] for item in cart: item['update_quantity_form'] = CartAddProductForm(initial={ 'quantity': item['quantity'], 'override': True}) return render(request, 'cart/detail.html', {'cart': cart}) This is my cart.py file where shopping cart can be managed. Here is the method to add, remove, iteration over the items in the cart etc. from decimal import Decimal from django.conf import settings from ecommerce.models import Product class Cart: def __int__(self, request): """ Initialize the cart. """ self.session = request.session cart = self.session.get(settings.CART_SESSION_ID) if not cart: cart = self.session[settings.CART_SESSION_ID] = {} self.cart = cart def add(self, product, quantity=1, override_quantity=False): """ Add a product to the cart or update its quantity """ product_id = str(product.id) if product_id not in self.cart: self.cart[product_id] = {'quantity': 0, 'price': str(product.price)} if override_quantity: self.cart[product_id]['quantity'] = quantity else: self.cart[product_id]['quantity'] += quantity self.save() def save(self): self.session.modified = True def remove(self, product): """ Remove a product from the cart. 
""" product_id = str(product.id) if product_id in self.cart: del self.cart[product_id] self.save() def __iter__(self): """ Iterate over the items in the cart and het the products from the database """ product_ids = self.cart.keys() products = Product.objects.filter(id_in=product_ids) cart = self.cart.copy() for product in products: cart[str(product.id)]['product'] = product for item in cart.values(): item['price'] = Decimal(item['price']) item['total_price'] = item['price'] * item['quantity'] yield item def __len__(self): """ Count all the items in the cart """ return sum(item['quantity'] for item in self.cart.values()) def get_total_price(self): return sum(Decimal(item['price']) * item['quantity'] for item in self.cart.values()) def clear(self): del self.session[settings.CART_SESSION_ID] self.save() Traceback Error: Traceback (most recent call last): File "C:\Users\bbkra\PycharmProjects\djangoProject\venv\lib\site-packages\django\core\handlers\exception.py", line 55, in inner response = get_response(request) File "C:\Users\bbkra\PycharmProjects\djangoProject\venv\lib\site-packages\django\core\handlers\base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\Users\bbkra\PycharmProjects\djangoProject\venv\lib\site-packages\django\views\decorators\http.py", line 43, in inner return func(request, *args, **kwargs) File "C:\Users\bbkra\PycharmProjects\djangoProject\cart\views.py", line 10, in cart_add cart = Cart(request) TypeError: Cart() takes no arguments | I could see one mistake it should be __init__() method not __int__ method. That's why Cart(request) gave that error as __init__() was not actually called. | 5 | 5 |
74,417,696 | 2022-11-13 | https://stackoverflow.com/questions/74417696/python-match-statement-with-enum | I'm trying to match "header" to one of the header types in my ENUM class. I've tried header to match Header.PROFILE_NAME, Header.PROFILE_NAME.name, Header.PROFILE_NAME.name. However none of these worked so far. Can't find a lot of information about it either. Hope someone can help me out on this one. Cheers in advance. from enum import Enum class Header(Enum): PROFILE_NAME = None FIRSTNAME = None LASTNAME = None EMAIL = None PHONE = None STREET = None HOUSE = None ADDRESS2 = None CITY = None STATE = None COUNTRY = None CARD_TYPE = None CARD_NUMBER = None CARD_EXP_MONTH = None CARD_EXP_YEAR = None CARD_CVV = None def setProfiles(): with open('profiles.csv', 'r') as profilesFile: profiles = csv.reader(profilesFile) # Sets CSV header name to index for index, profile in enumerate(profiles): if(index == 0): for index, header in enumerate(profile): match header: case Header.PROFILE_NAME.name: print("profile") #Header.PROFILE_NAME.value = index case Header.FIRSTNAME.name: print("firstn") #Header.FIRSTNAME.value = index case Header.LASTNAME.name: print("last") Header.LASTNAME._value_ = index case Header.EMAIL: Header.EMAIL._value_ = index case Header.PHONE: Header.PHONE._value_ = index case Header.STATE: Header.STATE._value_ = index case Header.HOUSE: Header.HOUSE._value_ = index case Header.ADDRESS2: Header.ADDRESS2._value_ = index case Header.CITY: Header.CITY._value_ = index case Header.STATE: Header.STATE._value_ = index case Header.COUNTRY: Header.COUNTRY._value_ = index case Header.CARD_TYPE: Header.CARD_TYPE._value_ = index case Header.CARD_NUMBER: Header.CARD_NUMBER._value_ = index case Header.CARD_EXP_MONTH: Header.CARD_EXP_MONTH._value_ = index case Header.CARD_EXP_YEAR: Header.CARD_EXP_YEAR._value_ = index case Header.CARD_CVV: Header.CARD_CVV._value_ = index # Creates profile and sets info else: #print(Header.PROFILE_NAME) # print(Header.FIRSTNAME) createProfile = Profile() for index, info in enumerate(profile): match index: case index if index == Header.PROFILE_NAME: createProfile.profileName = info case index if index == Header.FIRSTNAME: createProfile.firstName = info case index if index == Header.LASTNAME: createProfile.lastName = info case index if index == Header.EMAIL: createProfile.email = info case index if index == Header.PHONE: createProfile.phone = info case index if index == Header.STREET: createProfile.street = info case index if index == Header.HOUSE: createProfile.house = info case index if index == Header.ADDRESS2: createProfile.address2 = info case index if index == Header.CITY: createProfile.city = info case index if index == Header.STATE: createProfile.state = info case index if index == Header.COUNTRY: createProfile.country = info case index if index == Header.CARD_TYPE: createProfile.cardType = info case index if index == Header.CARD_NUMBER: createProfile.cardNumber = info case index if index == Header.CARD_EXP_MONTH: createProfile.cardExpiryMonth = info case index if index == Header.CARD_EXP_YEAR: createProfile.cardExpiryYear = info case index if index == Header.CARD_CVV: createProfile.cardCVV = info | The match statement will work directly with enums, so convert your header into an enum first: for index, header in enumerate(profile): header = Header[header.upper()] # or whatever is needed to match the name match header: ... | 6 | 7 |
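A self-contained sketch of the pattern the answer describes: look the string up in the enum first, then match on the members (the member values here are arbitrary placeholders rather than the question's None values):

```python
from enum import Enum

class Header(Enum):
    PROFILE_NAME = 1
    FIRSTNAME = 2
    LASTNAME = 3

def column_for(raw_header: str) -> str:
    header = Header[raw_header.upper()]   # turn the CSV header string into an enum member
    match header:
        case Header.PROFILE_NAME:         # dotted names are value patterns, so this compares
            return "profile name column"  # against the member instead of capturing a name
        case Header.FIRSTNAME:
            return "first name column"
        case _:
            return "other column"

print(column_for("profile_name"))   # profile name column
```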
74,418,107 | 2022-11-13 | https://stackoverflow.com/questions/74418107/how-to-check-timestamps-and-day-period-then-drop-mismatch | I have data with timestamps. Users respond to questions and they also select day period (morning or evening). I want to drop rows where recorded timestamp and day period mismatch. So check, if timestamp is between 6am-12pm and discard if "daytime" is "evening", etc. df timestamps daytime 2020-04-10 11:40 Morning 2022-04-12 19:32 Morning *(discard)* 2022-04-12 20:53 Evening 2022-04-15 22:50 Morning *(discard)* 2022-04-16 09:31 Evening*(discard)* The rule should be: if between 06:00-12:00 and 'daytime' is Evening ==> Remove row/ if between 18:00 - 00:00 and 'daytime' is Morning ==> Remove row I've tried: remove = df[ (6< df['timestamp'].dt.hour < 12 & df['period'] == 'Evening') | (18< df['timestamp'].dt.hour < 23 & df['period'] == 'Morning')] df.drop(remove , inplace=True) | Instead of dropping you can use .query() to filter. df["timestamps"] = pd.to_datetime(df["timestamps"]) df = df.query( "timestamps.dt.hour.between(6, 12, inclusive='both') & daytime.eq('Morning') | " "timestamps.dt.hour.between(18, 23, inclusive='both') & daytime.eq('Evening')" ).reset_index(drop=True) print(df) timestamps daytime 0 2020-04-10 11:40:00 Morning 1 2022-04-12 20:53:00 Evening | 3 | 2 |
74,411,491 | 2022-11-12 | https://stackoverflow.com/questions/74411491/python-equivalent-for-gcloud-auth-print-identity-token-command | The gcloud auth print-identity-token command prints an identity token for the specified account. $(gcloud auth print-identity-token \ --audiences=https://example.com \ --impersonate-service-account [email protected] \ --include-email) How do I do the same using Python? | Here a code sample (not so easy and well documented) import google.auth.transport.requests from google.auth.impersonated_credentials import IDTokenCredentials SCOPES = ['https://www.googleapis.com/auth/cloud-platform'] request = google.auth.transport.requests.Request() audience = 'my_audience' creds, _ = google.auth.default(scopes=SCOPES) icreds = google.auth.impersonated_credentials.Credentials( source_credentials=creds, target_principal="SA TO IMPERSONATE", target_scopes=SCOPES) id = IDTokenCredentials(icreds, target_audience=audience,include_email=True) id.refresh(request) print(id.token) | 10 | 13 |
74,415,578 | 2022-11-12 | https://stackoverflow.com/questions/74415578/openpyxl-or-pandas-which-is-better-at-reading-data-from-a-excel-file-and-return | Hello Stack OF Community, * Basically my goal is to extract values from an excel file, after reading through data from another column.* ** Thickness** of parcel, with values for example - [0.12, 0.12, 0.13, 0.14, 0.14, 0.15] (Heading: Thickness (mm)) Weight of parcel, with values for example - [4.000, 3.500, 2.500, 4.500, 5.000, 2.000] (Heading: Weight (KG)) Excel File: Thickness Weight 0.12 4.000 0.12 3.500 0.13 2.500 0.14 4.500 0.14 5.000 0.15 2.000 Looking to generate this using Python: Thickness Weight Parcels 0.12 7.500 2 Parcels 0.13 2.500 1 Parcels 0.14 9.500 2 Parcels 0.15 2.000 1 Parcels TOTAL: 21.500 6 Parcels The user will be shown all the current values of Thickness Available and will be allowed to input a single thickness value to get its weight or a range and get its weight. So anyone of you who can recommend me how can this task be accomplished easily and efficiently. I would be very grateful for your advice. Please note: I have only done Python Programming Language. Thank You. I have learned Openpyxl but also got to know that Pandas is an efficent tool for Data Analysis, so please let me know! Arigato! | Pandas actually uses openpyxl as well as well as some other engines inside. You can check engines field in the documentation. I think that reading and manipulations are easier with pandas, but if you need some advanced formatting, you will need to use openpyxl directly. (For basic cases pandas is enough). Here is a basic example for your problem. You will need to change formatting for you needs. import pandas as pd # uncomment to read the file # df = pd.read_excel('tmp.xlsx', index_col=None) df = pd.DataFrame({ "Thikness": [0.12, 0.12, 0.13, 0.14, 0.14, 0.15], "Weight": [4.000, 3.500, 2.500, 4.500, 5.000, 2.000, ], }) res = df.groupby(["Thikness"], as_index=False).agg( Weight=('Weight', sum), Count=('Weight', 'count'), ) # write excel writer = pd.ExcelWriter('tmp.xlsx', engine='xlsxwriter') res.to_excel(writer, sheet_name='Sheet1') | 4 | 5 |
74,409,601 | 2022-11-12 | https://stackoverflow.com/questions/74409601/why-is-visual-studio-code-not-showing-mypy-errors-when-i-have-my-package-install | I have a Python project which uses mypy for type checking. The root of my project contains a setup.py and the package folder rise, along with a virtual environment folder venv. Both my shells and VSCode are set to use this virtual environment. Most of the time, this setup works great: VSCode runs mypy every time I save and marks the errors with squiggles and in the Problems window. But for some reason, if I install the rise package in editable mode using pip install -e ., type checking in VSCode never finds any type errors. This does not happen if I install it in non-editable mode with pip install ., and if I uninstall the editable-mode package and reload the VSCode window, it immediately starts working normally again. If I go into the Python output in the Output window, I see: > ./venv/bin/python ~/.vscode/extensions/ms-python.python-2022.18.2/pythonFiles/linter.py -m mypy --show-column-numbers --strict ./rise/action.py [...output from pylint, which is also configured and is working normally, clipped] ##########Linting Output - mypy########## Success: no issues found in 1 source file Yet if I copy and paste the command it's echoing into the terminal, there is in fact an error: $ ./venv/bin/python ~/.vscode/extensions/ms-python.python-2022.18.2/pythonFiles/linter.py -m mypy --show-column-numbers --strict ./rise/action.py rise/action.py:140:12: error: Incompatible return value type (got "None", expected "bool") [return-value] I'm stumped as to how to troubleshoot this further, and I haven't been able to find any reference to this issue anywhere on the web. | I ended up installing the Mypy extension for Visual Studio Code and removing mypy from my linters configuration. The extension doesn't suffer from this bug (or misconfiguration, or whatever it is), and it runs faster to boot. | 5 | 7 |
74,413,330 | 2022-11-12 | https://stackoverflow.com/questions/74413330/is-it-possible-to-merge-plotly-traces-into-a-single-one | The following code presents a way to add two traces to a Plotly figure: import plotly.graph_objs as go fig = go.Figure() fig.add_trace(go.Scatter( x = [0, 1, 2, 3], y = [1, 2, 3, 4], mode = 'lines+markers', name = "Trace 0", )) fig.add_trace(go.Scatter( x = [5,6,7,8], y = [1, 2, 3, 4], mode = 'lines+markers', name = "Trace 1", )) fig.show() Which looks like this: Is it possible to merge these two traces into a single one such that they appear under the same title in the legend and share the same visual properties (i.e, color, markers, etc)? More over, merging these traces should enable toggling the visibility when clicking the title in the legend. | The other answer here is excellent, but I'll post an alternative solution for those interested. If for some reason you don't want to add another column to your dataframe (or maybe you're not using dataframes), you can specify the color of each trace and put your traces in the same legend group to ensure they toggle together. This is also a bit closer to your original code syntax wise, but it is bit more of a workaround stylistically. import plotly.graph_objs as go fig = go.Figure() trace_color = "#636EFA" ## default plotly blue fig.add_trace(go.Scatter( x = [0, 1, 2, 3], y = [1, 2, 3, 4], mode = 'lines+markers', name = "Trace 0", marker = dict(color = trace_color), showlegend = False, legendgroup = "Trace", )) fig.add_trace(go.Scatter( x = [5,6,7,8], y = [1, 2, 3, 4], mode = 'lines+markers', name = "Trace", marker = dict(color = trace_color), legendgroup = "Trace", )) fig.show() | 4 | 5 |
74,407,211 | 2022-11-11 | https://stackoverflow.com/questions/74407211/check-dates-for-gap-of-more-than-one-day-and-group-them-if-continuous-in-spark | If I have table with dates in format MM/DD/YYYY like below. +---+-----------+----------+ | id| startdate| enddate| +---+-----------+----------+ | 1| 01/01/2022|01/31/2022| | 1| 02/01/2022|02/28/2022| | 1| 03/01/2022|03/31/2022| | 2| 01/01/2022|03/01/2022| | 2| 03/05/2022|03/31/2022| | 2| 04/01/2022|04/05/2022| +---+-----------+----------+ How to I group based on id column and if start and end date is continuous? One thing is if there is more than a one day gap then keep the row on a new line so the above table will become: +---+-----------+----------+ | id| startdate| enddate| +---+-----------+----------+ | 1| 01/01/2022|31/03/2022| | 2| 01/01/2022|03/01/2022| | 2| 03/05/2022|04/05/2022| +---+-----------+----------+ id = 1 becomes one row as all dates for id =1 is continuous i.e no gap > 1 but id 2 has two rows as there is a gap between 03/01/2022 and 03/05/2022. | This is a particular case of the sessionization problem (i.e. identify sessions in data based on some conditions). Here is a possible solution that uses windows. The logic behind the solution: Associate at each row the temporally previous enddate with the same id Calculate the difference in days between each startdate and the previous enddate Identify all the rows that don't have a previous row or is at least two days after the previous row Associate at each row a session_index, that is the number of new sessions seen up to this line Aggregate grouping by id and session_index w = Window.partitionBy("id")\ .orderBy("startdate") df = df \ .select( F.col("id"), F.to_date("startdate", "MM/dd/yyyy").alias("startdate"), F.to_date("enddate", "MM/dd/yyyy").alias("enddate") ) \ .withColumn("previous_enddate", F.lag('enddate', offset=1).over(w)) \ .withColumn("date_diff", F.datediff(F.col("startdate"), F.col("previous_enddate"))) \ .withColumn("is_new_session", F.col("date_diff").isNull() | (F.col("date_diff") > 1)) \ .withColumn("session_index", F.sum(F.col("is_new_session").cast("int")).over(w)) df.groupBy("id", "session_index") \ .agg( F.min("startdate").alias("startdate"), F.max("enddate").alias("enddate") ) \ .drop("session_index") | 3 | 5 |
74,370,833 | 2022-11-9 | https://stackoverflow.com/questions/74370833/how-to-add-new-site-language-in-django-admin | I work on a project where we want to have multilingual site. We start with two languages defined in settings.py LANGUAGES = ( ("en-us", _("United States")), ("cs", _("Czech Republic")), ) I am not the programmer doing the work but if I understood correctly all we need is to be able to add - for example - French language for the whole website but not via setting.py but Django admin web interface. LANGUAGES = ( ("en-us", _("United States")), ("cs", _("Czech Republic")), ("fr", _("French")), ) We are using rosetta for translating in Django admin. So I want to use Django admin to add new laguage so it appears in rosetta interface. Could someone tell me how we can control ( add or remove or disable ) languages from Django admin? I checked these but did not find the answer Adding new site language in Django admin How to manage system languages from django admin site? https://djangowaves.com/tutorial/multiple-languages-in-Django/ Add translation for model field using django rosetta Adding languages dynamically through Django Admin | The short answer is that you can't do that. The settings.py of a Django project is not designed, and not recommended to be modified by the web application.(It can introduce a security breach.) So I recommend to change LANGUAGES manually, or to enable all languages supported by Django by removing LANGUAGES key. Of course, don't forget to generate message files with the makemessages command. If you really want such a dynamic feature, your best bet will be to implement it on your own by modifying the Django Rosetta source code.(Define a preference item for supported languages on a DB model, and filter languages by its value.) | 4 | 7 |
74,382,683 | 2022-11-9 | https://stackoverflow.com/questions/74382683/behavior-of-multiprocessing-pool-on-exception | Suppose I have a program that looks like this: jobs = [list_of_values_to_consume_and_act] with multiprocessing.Pool(8) as pool: results = pool.map(func, jobs) And whatever is done in func can raise an exception due to external circumstances, so I can't prevent an exception from happening. How will the pool behave on exception? Will it only terminate the process that raised an exception and let other processes run and consume the jobs? If yes, will it start another process to pick up the slack? What about the job being handled by the dead process, will it be 'resubmitted' to the pool? In any case, how do I 'retrieve' the exception? | No processes will be terminated at all. All calls to the target functions from within the pool's processes are wrapped in a try...except block. Incase an exception is caught, the process informs the appropriate handler thread in the main process which passes the exception forward so it can be re-rasied. Whether or not other jobs will execute depends on if the pool is still open. Incase you do not catch this re-raised exception, the main process (or the process that started the pool) will exit, automatically cleaning up open resources like the pool (so no tasks can be executed now since pool closed). But if you catch the exception and let the main process continue running then the pool will not shutdown and other jobs will execute as scheduled. N/A The outcome of a job is irrelevant, once it's run once by any process, that job is marked completed and not resubmitted to the pool. Wrap your call to pool.map in a try...except block? Do note that incase one of your jobs do raise an error, then the results of other successful jobs will become inaccessible as well (because these are stored after the call to pool.map completes, but the call never successfully completed). In such cases, where you need to catch exceptions of individual jobs, it's better to use pool.imap or pool.apply_async Example of catching exception for individual tasks using imap: import multiprocessing import time def prt(value): if value == 3: raise ValueError(f"Error for value {value}") time.sleep(1) return value if __name__ == "__main__": with multiprocessing.Pool(3) as pool: jobs = pool.imap(prt, range(1, 10)) results = [] for i in range(10): try: result = next(jobs) except ValueError as e: print(e) results.append("N/A") # This means that this individual task was unsuccessful except StopIteration: break else: results.append(result) print(results) Example of catching exception for individual tasks using apply_async import multiprocessing import time def prt(value): if value == 3: raise ValueError(f"Error for value {value}") time.sleep(1) return value if __name__ == "__main__": pool = multiprocessing.Pool(3) job = [pool.apply_async(prt, (i,)) for i in range(1, 10)] results = [] for j in job: try: results.append(j.get()) except ValueError as e: print(e) results.append("N/A") print(results) | 3 | 5 |
74,406,574 | 2022-11-11 | https://stackoverflow.com/questions/74406574/why-is-python-dataclass-a-decorator-and-not-a-base-class | Why does Python implement dataclasses.dataclass as a class decorator and not as a base class? I think it would be at least clearer from the conceptual point of view to have it as a base class: the __init__ method seems to be the only thing a dataclass decorator adds to a class, and adding methods and attributes is what any straightforward base class in usually intended to do. Why to implement a decorator that intrinsically modifies a class? Base classes are just meant for this. Also, having a "Dataclass" base class would make easier for users to modify its behaviour in case any particular working mechanism is needed, one would just have to overwrite base class' methods when inheriting the dataclass. Since it clearly has been made this way for some reason I'm trying to figure out why. The only thing that comes to my mind might be some performance-related thing, I think inheriting a class should be slower that just passing a class through a function, however I'm not sure dataclasses are meant to be highly performant - nor the Python language itself - and in any case for that we have named tuples. | Dataclasses were introduced in PEP 557, which describes some of the design considerations for this feature, including rejected ideas. However, there is no mention of any rejected alternatives to using a decorator, such as using a base class instead. So it seems we cannot give a definitive answer for why a decorator was used rather than a base class. Perhaps using a base class simply wasn't considered as an option. That said, it is not correct to say that the @dataclass decorator only adds the __init__ method. The signature of the dataclass function shows that there are several other features provided, which you can opt in or out of by passing boolean flags: __eq__, enabled by default __repr__, enabled by default Ordering methods such as __lt__, opt-in __hash__, opt-in Immutability through the __setattr__ and __delattr__ methods, opt-in In principle, these methods could be implemented in a generic way in a base class. However, there is a potential for greater performance by generating code for these methods once the field specifications are known; this way, you don't have to look at those field specifications on every method call. (These methods are generated when the dataclass is declared, by building Python code as a string and calling exec, see here.) So, one big advantage of using a decorator is that you can look at the field specifications just once and then generate code for that specific dataclass. By analogy, implementing these in a generic way in a base class is like interpreting, whereas generating code with a decorator is like compiling. (Of course the resulting generated code is still interpreted by the CPython interpreter, which is why this is only an analogy.) Or, putting it a different way, it makes less sense to inherit methods such as __eq__, __lt__ and __hash__ from a base class, because these methods don't do the same thing for every dataclass, except in a much more generic sense. | 7 | 7 |
74,406,021 | 2022-11-11 | https://stackoverflow.com/questions/74406021/import-json-lines-into-pandas | I want to import a JSON lines file into pandas. I tried to import it like a regular JSON file, but it did not work: js = pd.read_json (r'C:\Users\Name\Downloads\profilenotes.jsonl') | This medium article provides a fairly simple answer, which can be adapted to be even shorter. All you need to do is read each line then parse each line with json.loads(). Like this: import json import pandas as pd lines = [] with open(r'test.jsonl') as f: lines = f.read().splitlines() line_dicts = [json.loads(line) for line in lines] df_final = pd.DataFrame(line_dicts) print(df_final) As cgobat pointed out in a comment, the medium article adds a few extra unnecessary steps, which have been optimized in this answer. | 6 | 6 |
74,405,574 | 2022-11-11 | https://stackoverflow.com/questions/74405574/how-to-update-python-to-the-latest-version-on-archlinux | How to install the latest python version 3.11.0 on ArchLinux through pacman? ArchLinux wiki says current version is 3.10, although python 3.11 has been officially released. When running sudo pacman -Syyu p I'm welcomed with warning: python-3.10.8-3 is up to date. Am I doing something wrong? | Use AUR like "yay" to get the new python3.11. If you haven't installed yay on your system, setup yay by following these instructions Run this command after setting up yay in your system: yay -S python311 | 5 | 4 |
74,401,537 | 2022-11-11 | https://stackoverflow.com/questions/74401537/pandas-groupby-two-columns-and-expand-the-third | I have a Pandas dataframe with the following structure: A B C a b 1 a b 2 a b 3 c d 7 c d 8 c d 5 c d 6 c d 3 e b 4 e b 3 e b 2 e b 1 And I will like to transform it into this: A B C1 C2 C3 C4 C5 a b 1 2 3 NAN NAN c d 7 8 5 6 3 e b 4 3 2 1 NAN In other words, something like groupby A and B and expand C into different columns. Knowing that the length of each group is different. C is already ordered Shorter groups can have NAN or NULL values (empty), it does not matter. | Use GroupBy.cumcount and pandas.Series.add with 1, to start naming the new columns from 1 onwards, then pass this to DataFrame.pivot, and add DataFrame.add_prefix to rename the columns (C1, C2, C3, etc...). Finally use DataFrame.rename_axis to remove the indexes original name ('g') and transform the MultiIndex into columns by using DataFrame.reset_indexcolumns A,B: df['g'] = df.groupby(['A','B']).cumcount().add(1) df = df.pivot(['A','B'], 'g', 'C').add_prefix('C').rename_axis(columns=None).reset_index() print (df) A B C1 C2 C3 C4 C5 0 a b 1.0 2.0 3.0 NaN NaN 1 c d 7.0 8.0 5.0 6.0 3.0 2 e b 4.0 3.0 2.0 1.0 NaN Because NaN is by default of type float, if you need the columns dtype to be integers add DataFrame.astype with Int64: df['g'] = df.groupby(['A','B']).cumcount().add(1) df = (df.pivot(['A','B'], 'g', 'C') .add_prefix('C') .astype('Int64') .rename_axis(columns=None) .reset_index()) print (df) A B C1 C2 C3 C4 C5 0 a b 1 2 3 <NA> <NA> 1 c d 7 8 5 6 3 2 e b 4 3 2 1 <NA> EDIT: If there's a maximum N new columns to be added, it means that A,B are duplicated. Therefore, it will beneeded to add helper groups g1, g2 with integer and modulo division, adding a new level in index: N = 4 g = df.groupby(['A','B']).cumcount() df['g1'], df['g2'] = g // N, (g % N) + 1 df = (df.pivot(['A','B','g1'], 'g2', 'C') .add_prefix('C') .droplevel(-1) .rename_axis(columns=None) .reset_index()) print (df) A B C1 C2 C3 C4 0 a b 1.0 2.0 3.0 NaN 1 c d 7.0 8.0 5.0 6.0 2 c d 3.0 NaN NaN NaN 3 e b 4.0 3.0 2.0 1.0 | 8 | 12 |
74,405,180 | 2022-11-11 | https://stackoverflow.com/questions/74405180/why-cpython-exposes-pytuple-setitem-as-c-api-if-tuple-is-immutable-by-design | Tuple in python is immutable by design, so if we try to mutate a tuple object, python emits following TypeError which make sense. >>> a = (1, 2, 3) >>> a[0] = 12 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'tuple' object does not support item assignment So my question is, if tuple is immutable by design why cpython exposes PyTuple_SetItem as C-API?. From the documentation it's described as int PyTuple_SetItem(PyObject *p, Py_ssize_t pos, PyObject *o) Insert a reference to object o at position pos of the tuple pointed to by p. Return 0 on success. If pos is out of bounds, return -1 and set an IndexError exception. Isn't this statement exactly equal to tuple[index] = value in python layer?. If the goal was to create a tuple from collection of items we could have use PyTuple_Pack. Additional note: After lot of trial and error with ctypes.pythonapi I managed to mutate tuple object using PyTuple_SetItem import ctypes from ctypes import py_object my_tuple = (1, 2, 3) newObj = py_object(my_tuple) m = "hello" # I don't know why I need to Py_DecRef here. # Although to reproduce this in your system, no of times you have # to do `Py_DecRef` depends on no of ref count of `newObj` in your system. ctypes.pythonapi.Py_DecRef(newObj) ctypes.pythonapi.Py_DecRef(newObj) ctypes.pythonapi.Py_DecRef(newObj) ctypes.pythonapi.Py_IncRef(m) PyTuple_SetItem = ctypes.pythonapi.PyTuple_SetItem PyTuple_SetItem.argtypes = ctypes.py_object, ctypes.c_size_t, ctypes.py_object PyTuple_SetItem(newObj, 0, m) print(my_tuple) # this will print `('hello', 2, 3)` | Similarly, there is a PyTuple_Resize function with the warning Because tuples are supposed to be immutable, this should only be used if there is only one reference to the object. Do not use this if the tuple may already be known to some other part of the code. The tuple will always grow or shrink at the end. Think of this as destroying the old tuple and creating a new one, only more efficiently. Looking at the source, there is a guard on the function if (!PyTuple_Check(op) || Py_REFCNT(op) != 1) { .... error .... Sure enough, this is only allowed when there is only 1 reference to the tuple - that reference being the thing that thinks its a good idea to change it. So, a tuple is "mostly immutable" but C code can change it in limited circumstances to avoid the penalty of creating a new tuple. | 12 | 11 |
74,403,900 | 2022-11-11 | https://stackoverflow.com/questions/74403900/how-do-i-get-typer-to-accept-the-short-h-as-well-as-the-long-help-to-outp | Out of the box, Typer CLIs only recognize the long help option --help to display the help text. I would like to also accept the short option -h but I can't figure out how. I've searched the docs to no avail. Do I need to alias -h to --help and if so, how do I do that? | The key is to use context_settings={"help_option_names": ["-h", "--help"]}) As suggested by @jvx8ss in the comments, one needs to convert a typer.run app to one using @app.command() decorators. Here is a minimal working example: import typer app = typer.Typer(context_settings={"help_option_names": ["-h", "--help"]}) @app.command() def main(name: str): print(f"Hello {name}") if __name__ == "__main__": app() | 4 | 6 |
74,398,563 | 2022-11-11 | https://stackoverflow.com/questions/74398563/how-to-use-polars-dataframes-with-scikit-learn | I'm unable to use polars dataframes with scikit-learn for ML training. Currently, I'm preprocessing all dataframes in polars and convert them to pandas for model training in order for it to work. Is there any method to directly use polars dataframes with the scikit-learn API (without converting to pandas first)? | You must call to_numpy when passing a DataFrame to sklearn. Though sometimes sklearn can work on polars Series it is still good type hygiene to transform to the type the host library expects. import polars as pl from sklearn.linear_model import LinearRegression data = pl.DataFrame( np.random.randn(100, 5) ) x = data.select([ pl.all().exclude("column_0"), ]) y = data.select(pl.col("column_0").alias("y")) x_train = x[:80] y_train = y[:80] x_test = x[80:] y_test = y[80:] m = LinearRegression() m.fit(X=x_train.to_numpy(), y=y_train.to_numpy()) m.predict(x_test.to_numpy()) | 12 | 10 |
74,400,353 | 2022-11-11 | https://stackoverflow.com/questions/74400353/vscode-flake8-ignore | Flake8 was installed lately by one of the updates of vscode. I think it is time to comply to the "rules" of python to writer better and more readable code. Unfortunately I have some errors that I cannot fix in the code (no discussion about that, but a local module has to be loaded before some others). I want to ignore some warning of Flake8. I have the following settings in settings.json: "python.linting.flake8Enabled": true, "python.linting.enabled": true, "python.linting.flake8Args": [ "--extend-ignore=E203,E266,E501,W503,E402", "--max-line-length=98" ], Both the warnings are not ignored and the maximum line length has not changed. In the GUI the settings are also visible (python > Linting > Flake8 Args). EDIT: I also tried (what was suggested by rzlvmp below) "--ignore=E203,E266,E501,W503,E402" And I restarted vscode and sometime the whole computer to be sure. | seems like python.linting.flake8Args no longer works, I can get flake to work, but I get everything. My solution was to install the flake8 plugin: https://marketplace.visualstudio.com/items?itemName=ms-python.flake8 and use the flake8.args: "flake8.args": [ "--ignore=E24,E128,E201,E202,E225,E231,E252,E265,E302,E303,E401,E402,E501,E731,W504,W605", "--verbose" ], | 3 | 6 |
74,396,955 | 2022-11-11 | https://stackoverflow.com/questions/74396955/how-to-type-hint-variable-that-is-initially-none-but-is-guaranteed-to-get-a-valu | I have a class variable as shown below: class MyClass: def __init__(self): self.value: MyOtherClass | None = None self.initialize_value() def initialize_value(self): self.value = MyOtherClass() def use_value(self): return self.value.used self.value is guaranteed to be initialized as an instance of MyOtherClass before being used in self.use_value(); however, it must remain as None in the constructor. The dilemma here is how to properly typehint this situation. If I choose to typehint it as shown above, I get an error: Item "None" of "Optional[MyOtherClass]" has no attribute "used" [union-attr] If I instead remove the variable definition from the constructor and move the typehint to self.initialize_value(), I get an Instance attribute value defined outside __init__ error from Pylint What is the best way to go about this? Thanks a lot! | To avoid repetition, when such an attribute is used throughout the class in multiple methods, a common pattern for me is protecting it and defining a property that raises an error, if the attribute behind it is not set: class MyClass: def __init__(self): self._value: MyOtherClass | None = None self.initialize_value() @property def value(self) -> MyOtherClass: if self._value is None: raise AttributeError("...") return self._value def initialize_value(self): self._value = MyOtherClass() def use_value(self): return self.value.used This also removes ambiguity for the type checker since the return type of the property is always MyOtherClass. | 15 | 12 |
74,394,875 | 2022-11-10 | https://stackoverflow.com/questions/74394875/mock-patch-in-pytest-fixture-doesnt-work-when-other-tests-have-run | Given the following code and tests: # some module from other.module import a_func # returns False by default def do_stuff(): return "banana" if a_func() else "pear" ############################# # tests in a different module @pytest.fixture def my_fixture(): with mock.patch("other.module.a_func", lambda: True): yield class TestMyStuff: def test_something(self): assert do_stuff() == "pear" def test_something_else(self, my_fixture): assert do_stuff() == "banana" The last test fails if I run all my tests at once, but succeeds if I only run it. I know that this might be due to the fact that by the time test_something_else runs, other.module.a_func has already been imported and loaded, and so the patch isn't taking effect. How do I make it work, or how should I restructure my code so that I can mock the result of a_func to test different outcomes of do_stuff? | You should do mock.patch("some.module.a_func") instead of mock.patch("other.module.a_func") from other.module import a_func means that a_func becomes part of the some.module, so patching the origin of function definition has no effect - instead, patching should be done where the function is used. Read Where to patch for more info. | 10 | 12 |
74,397,741 | 2022-11-11 | https://stackoverflow.com/questions/74397741/how-to-open-new-tab-with-command-line-in-file-explorer | I want to open a new folder in different tabs on the same window of Windows File Explorer in Windows 11, instead of opening a new window every time. I tried to use Start.exe C:\ Explorer.exe C: \ -W 0 Explorer.exe C: \ --windows 0 in terminal and Python. import os import sys gpus = sys.argv[1] path = os.path.realpath(gpus) os.startfile(path) but it always opens in a new window. | I did some research on your problem and concluded that it is not allowed to open Tabs in the file explorer. In this link we can see that as of 11/09/2022 there is no response by MSFT https://techcommunity.microsoft.com/t5/windows-11/22h2-explorer-tabs-save-configuration/m-p/3672808 | 5 | 3 |
74,397,847 | 2022-11-11 | https://stackoverflow.com/questions/74397847/how-to-randomly-sample-from-a-datafframe-while-preserving-the-distribution-in-py | I am using a Kaggle sample data. As shown bellow, 40% of the location is in CA and 47% of the category includes FOODS. What I am trying to achieve is to randomly select data from this data frame, while more or less preserve the same distribution for the values of the these two columns. Does python/Pandas have such a capability? >>> df = pd.read_parquet("~/dimension.parquet") >>> df.groupby('location')['location'].count().transform(lambda x: x/x.sum()) location CA 0.4 TX 0.3 WI 0.3 >>> df.groupby('category')['category'].count().transform(lambda x: x/x.sum()) category FOODS 0.471302 HOBBIES 0.185307 HOUSEHOLD 0.343391 | Your can select a fraction of each group with groupby.sample: # selecting 10% of each group df.groupby(['location', 'category']).sample(frac=0.1) But if your data is large and you select a decent number of rows, this should naturally maintain a representativity of the proportions: df.sample(n=1000) Example, let's pick 500 (0.05%) or 5000 (0.5%) rows out of 1M rows with defined frequencies: np.random.seed(0) n = 1_000_000 df = pd.DataFrame({'location': np.random.choice(['CA', 'TX', 'WI'], p=[0.4, 0.3, 0.3], size=n), 'category': np.random.choice(['A', 'B', 'C'], p=[0.85, 0.1, 0.05], size=n)}) out = df.sample(n=500) out['location'].value_counts(normalize=True) CA 0.388 TX 0.312 WI 0.300 Name: location, dtype: float64 out['category'].value_counts(normalize=True) A 0.822 B 0.126 C 0.052 Name: category, dtype: float64 With df.sample(n=5000): CA 0.3984 TX 0.3064 WI 0.2952 Name: location, dtype: float64 A 0.8468 B 0.1042 C 0.0490 Name: category, dtype: float64 Frequencies of the original population: CA 0.399295 WI 0.300520 TX 0.300185 Name: location, dtype: float64 A 0.850125 B 0.099679 C 0.050196 Name: category, dtype: float64 We observe that both samples are fairly representative of the original population, with nevertheless some loss of precision with the smaller sampling. In contrast, the groupby.sample maintains a close to original proportion even with very small samples (here with ~200 rows (0.02%)): out2 = df.groupby(['location', 'category']).sample(frac=0.0002) print(out2['location'].value_counts(normalize=True)) print(out2['category'].value_counts(normalize=True)) len(out2) CA 0.4 TX 0.3 WI 0.3 Name: location, dtype: float64 A 0.85 B 0.10 C 0.05 Name: category, dtype: float64 200 | 4 | 3 |
74,393,947 | 2022-11-10 | https://stackoverflow.com/questions/74393947/make-python-dataclass-iterable | I have a dataclass and I want to iterate over in in a loop to spit out each of the values. I'm able to write a very short __iter__() within it easy enough, but is that what I should be doing? I don't see anything in the documentation about an 'iterable' parameter or anything, but I just feel like there ought to be... Here is what I have which, again, works fine. from dataclasses import dataclass @dataclass class MyDataClass: a: float b: float c: float def __iter__(self): for value in self.__dict__.values(): yield value thing = MyDataclass(1,2,3) for i in thing: print(i) # outputs 1,2,3 on separate lines, as expected Is this the best / most direct way to do this? | The simplest approach is probably to make a iteratively extract the fields following the guidance in the dataclasses.astuple function for creating a shallow copy, just omitting the call to tuple (to leave it a generator expression, which is a legal iterator for __iter__ to return: def __iter__(self): return (getattr(self, field.name) for field in dataclasses.fields(self)) # Or writing it directly as a generator itself instead of returning a genexpr: def __iter__(self): for field in dataclasses.fields(self): yield getattr(self, field.name) Unfortunately, astuple itself is not suitable (as it recurses, unpacking nested dataclasses and structures), while asdict (followed by a .values() call on the result), while suitable, involves eagerly constructing a temporary dict and recursively copying the contents, which is relatively heavyweight (memory-wise and CPU-wise); better to avoid unnecessary O(n) eager work. asdict would be suitable if you want/need to avoid using live views (if later attributes of the instance are replaced/modified midway through iterating, asdict wouldn't change, since it actually guarantees they're deep copied up-front, while the genexpr would reflect the newer values when you reached them). The implementation using asdict is even simpler (if slower, due to the eager pre-deep copy): def __iter__(self): yield from dataclasses.asdict(self).values() # or avoiding a generator function: def __iter__(self): return iter(dataclasses.asdict(self).values()) There is a third option, which is to ditch dataclasses entirely. If you're okay with making your class behave like an immutable sequence, then you get iterability for free by making it a typing.NamedTuple (or the older, less flexible collections.namedtuple) instead, e.g.: from typing import NamedTuple class MyNotADataClass(NamedTuple): a: float b: float c: float thing = MyNotADataClass(1,2,3) for i in thing: print(i) # outputs 1,2,3 on separate lines, as expected and that is iterable automatically (you can also call len on it, index it, or slice it, because it's an actual subclass of tuple with all the tuple behaviors, it just also exposes its contents via named properties as well). | 14 | 20 |
74,393,442 | 2022-11-10 | https://stackoverflow.com/questions/74393442/what-is-the-purpose-of-the-master-parameter-in-the-tkinter-variable-class-subc | I've been looking for some more detailed information regarding the Variable subclasses in tkinter, namely BooleanVar, DoubleVar, IntVar, and StringVar. I'm hoping someone with broader knowledge can point me in the right direction. Given the constructor: tkinter.Variable(master=None, value=None, name=None) I'm curious what the utility of the master parameter is for these classes. I understand that it's equivalent to the master parameter for other tkinter widgets, but I'm not sure I understand how it affects these variable classes specifically; it's a bit more intuitive when the widget is the child of a given class. I typically ignore this value, as I understand that (assuming this is in my root class that inherits from Tk) self.var = tk.StringVar() is equivalent to self.var = tk.StringVar(self), or self.var = tk.StringVar(None). Should I be including this? Is it providing some functionality I would otherwise be missing? I'm not necessarily looking for "What is the best practice here", but rather an explanation of the intended use. Any info is much appreciated! Here's a link to what little information I've been able to find, if anyone else is curious TkDocs - Variable | When you create an instance of Tk, you are doing more than just creating a widget. For each instance, you are also creating an embedded Tcl interpreter. This tcl interpreter is where all of the widgets and variables and image objects exist. The objects within this interpreter are only available to that interpreter and cannot be shared with other interpreters. If you create multiple instances of Tk, the master parameter lets you tell tkinter which interpreter each variable belongs to. Without it, the variables and widgets will be created in the interpreter of the first instance of Tk. | 4 | 5 |
74,383,395 | 2022-11-10 | https://stackoverflow.com/questions/74383395/property-sheets-of-openpyxlwriter-object-has-no-setter-using-pandas-and-open | This code used to get a xlsx file and write over it, but after updating from pandas 1.1.5 to 1.5.1 I got zipfile.badzipfile file is not a zip file Then I read here that after pandas 1.2.0 the pd.ExcelWriter(report_path, engine='openpyxl') creates a new file but as this is a completely empty file, openpyxl cannot load it. Knowing that, I changed the code to this one, but now I'm getting AttributeError: property 'sheets' of 'OpenpyxlWriter' object has no setter. How should I handle this? book = load_workbook('Resultados.xlsx') writer = pd.ExcelWriter('Resultados.xlsx', engine='openpyxl') writer.book = book writer.sheets = dict((ws.title, ws) for ws in book.worksheets) reader = pd.read_excel(r'Resultados.xlsx') df = pd.DataFrame.from_dict(dict_) df.to_excel(writer, index=False, header=False, startrow=len(reader) + 1) writer.close() | TLDR Use .update to modify writer.sheets Rearrange the order of your script to get it working # run before initializing the ExcelWriter reader = pd.read_excel("Resultados.xlsx", engine="openpyxl") book = load_workbook("Resultados.xlsx") # use `with` to avoid other exceptions with pd.ExcelWriter("Resultados.xlsx", engine="openpyxl") as writer: writer.book = book writer.sheets.update(dict((ws.title, ws) for ws in book.worksheets)) df.to_excel(writer, index=False, header=False, startrow=len(reader)+1) Details Recreating your problem with some fake data import numpy as np from openpyxl import load_workbook import pandas as pd if __name__ == "__main__": # make some random data np.random.seed(0) df = pd.DataFrame(np.random.random(size=(5, 5))) # this makes an existing file with pd.ExcelWriter("Resultados.xlsx", engine="openpyxl") as writer: df.to_excel(excel_writer=writer) # make new random data np.random.seed(1) df = pd.DataFrame(np.random.random(size=(5, 5))) # what you tried... book = load_workbook("Resultados.xlsx") writer = pd.ExcelWriter("Resultados.xlsx", engine="openpyxl") writer.book = book writer.sheets = dict((ws.title, ws) for ws in book.worksheets) reader = pd.read_excel("Resultados.xlsx") # skipping this step as we defined `df` differently # df = pd.DataFrame.from_dict(dict_) df.to_excel(writer, index=False, header=False, startrow=len(reader)+1) writer.close() We get the same error plus a FutureWarning ...\StackOverflow\answer.py:23: FutureWarning: Setting the `book` attribute is not part of the public API, usage can give unexpected or corrupted results and will be removed in a future version writer.book = book Traceback (most recent call last): File "...\StackOverflow\answer.py", line 24, in <module> writer.sheets = dict((ws.title, ws) for ws in book.worksheets) AttributeError: can't set attribute 'sheets' The AttributeError is because sheets is a property of the writer instance. If you're unfamiliar with it, here is a resource. In shorter terms, the exception is raised because sheets cannot be modified in the way you're trying. However, you can do this: # use the `.update` method writer.sheets.update(dict((ws.title, ws) for ws in book.worksheets)) That will move us past the the AttributeError, but we'll hit a ValueError a couple lines down: reader = pd.read_excel("Resultados.xlsx") Traceback (most recent call last): File "...\StackOverflow\answer.py", line 26, in <module> reader = pd.read_excel("Resultados.xlsx") ... 
File "...\lib\site-packages\pandas\io\excel\_base.py", line 1656, in __init__ raise ValueError( ValueError: Excel file format cannot be determined, you must specify an engine manually. Do what the error message says and supply an argument to the engine parameter reader = pd.read_excel("Resultados.xlsx", engine="openpyxl") And now we're back to your original zipfile.BadZipFile exception Traceback (most recent call last): File "...\StackOverflow\answer.py", line 26, in <module> reader = pd.read_excel("Resultados.xlsx", engine="openpyxl") ... File "...\Local\Programs\Python\Python310\lib\zipfile.py", line 1334, in _RealGetContents raise BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file After a bit of toying, I noticed that the Resultados.xlsx file could not be opened manually after running this line: writer = pd.ExcelWriter("Resultados.xlsx", engine="openpyxl") So I reordered some of the steps in your code: # run before initializing the ExcelWriter reader = pd.read_excel("Resultados.xlsx", engine="openpyxl") book = load_workbook("Resultados.xlsx") # the old way # writer = pd.ExcelWriter("Resultados.xlsx", engine="openpyxl") with pd.ExcelWriter("Resultados.xlsx", engine="openpyxl") as writer: writer.book = book writer.sheets.update(dict((ws.title, ws) for ws in book.worksheets)) df.to_excel(writer, index=False, header=False, startrow=len(reader)+1) | 5 | 5 |
74,390,633 | 2022-11-10 | https://stackoverflow.com/questions/74390633/explain-deprecationwarning-private-variables-such-as-cmd-call-set-will-be | Python interpreter version used in the code base I am working on has recently been updated from Python 3.7 to 3.9. A few new warnings similar to one in the title have started showing up when some of the tools written in Python are executed. I've searched the net extensively, read the What's New in 3.10 but haven't found an answer about what it exactly means, and what possible actions I can take to address it. I have an option to grep the source code of CPython of course, but I'd rather avoid it if possible. The warning seems to predict change in the visibility of class members. The code in question was not written by me. The original author is (of course) no longer available. Personally, I never use underscored members in an attempt to affect their visibility. Here is how the code around the warning looks like: class Cmd(Enum): ... @classmethod def __call_set(cls, # << Here the warning ...): ... | writing an attribute as so: _attr makes it a private attribute, and __attr makes it a protected attribute. This deprecation warning seems to indicate the attributes in question will be made not private and not protected in 3.10. TL;DR They won't have the underscore in 3.10, and they will be completely visible. | 4 | 0 |
74,377,678 | 2022-11-9 | https://stackoverflow.com/questions/74377678/class-attributes-dependent-on-other-class-attributes | I want to create a class attribute, that are dependent to another class attribute (and I tell class attribute, not instance attribute). When this class attribute is a string, as in this topic, the proposed solution class A: foo = "foo" bar = foo[::-1] print(A.bar) works fine. But when the class attribute is a list or a tuple, as in my case, the following code dose not work... x=tuple('nice cup of tea') class A: remove = ('c','o','a',' ') remains = tuple(c for c in x if not c in remove) print(A.remains) raise Traceback (most recent call last): File "[***.py]", line 3, in <module> class A: File "[***.py]", line 5, in A remains = tuple(c for c in x if not c in remove) File "[***.py]", line 5, in <genexpr> remains = tuple(c for c in x if not c in remove) NameError: name 'remove' is not defined Why this kind of methods works if my class attributes is less complex (as simple string, as in the mentioned previous topic) but not for tuple? After investigations, I found this way: x=tuple('nice cup of tea') def sub(a,b): return tuple(c for c in a if not c in b) class A: remove = ('c','o','a',' ') remains = sub(x, remove) print(A.remains) that works, but does not suit me, for these reasons: I don't understand why this one, via an intermediary function, works and not without. I don't want to add a function for just a single line with an elementary operation. | Python is trying to look up remove in the global scope, but it doesn't exist there. x, on the other hand, is looked up in the enclosing (class) scope. See the documentation: Resolution of names Class definition blocks and arguments to exec() and eval() are special in the context of name resolution. A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution with an exception that unbound local variables are looked up in the global namespace. The namespace of the class definition becomes the attribute dictionary of the class. The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods -- this includes comprehensions and generator expressions since they are implemented using a function scope. This means that the following will fail: class A: a = 42 b = list(a + i for i in range(10)) Displays for lists, sets and dictionaries The iterable expression in the leftmost for clause is evaluated directly in the enclosing scope and then passed as an argument to the implicitly nested scope. Subsequent for clauses and any filter condition in the leftmost for clause cannot be evaluated in the enclosing scope as they may depend on the values obtained from the leftmost iterable. | 5 | 6 |
74,378,923 | 2022-11-9 | https://stackoverflow.com/questions/74378923/aligning-text-in-rows-of-pyplot-legend-at-multiple-points-without-using-monospa | I am trying to create a neat legend in Pyplot. So far I have this: fig = plt.figure() ax = plt.gca() marker_size = [20.0, 40.0, 60.0, 100.0, 150.0] marker_color = ['black', 'red', 'pink', 'white', 'yellow'] ranges = [0.0, 1.5, 20.0, 60.0, 500.0] marker_edge_thickness = 1.2 s = [(m ** 2) / 100.0 for m in marker_size] scatter_kwargs = {'edgecolors' : 'k', 'linewidths' : marker_edge_thickness} for i in range(len(marker_size)): if i == (len(marker_size) - 1): label_str = '{:>5.1f} $\leq$ H$_2$'.format(ranges[i]) else: label_str = '{:>5.1f} $\leq$ H$_2$ < {:>5.1f}'.format(ranges[i], ranges[i + 1]) ax.scatter([], [], s = s[i], c = marker_color[i], label = label_str, **scatter_kwargs) #ax.legend(prop={'family': 'monospace'}) ax.legend() plt.show() It is ok but the symbols don't align properly between the rows. I would like to align the rows at multiple points, with alignment on the decimal points, the less-than and greater-than symbols, and the H2. I could use a monotype font (as per this answer: Adding internal spaces in pyplot legend), but this is ugly and seems to be incompatible with the subscript 2 in H2. This would be possible in LaTeX (e.g. using the alignat environment); is it possible in Pyplot? | You could replace the spaces by '\u2007', a space that is as wide as a digit. In most fonts, a space character is much narrower than a digit. Except for monospaced fonts, which don't look as nice, each letter has its own width. The character width can even be different depending on which letter goes before and after (E.g. with 'VA', the 'A' can be a bit under the 'V', this is called "kerning"). Special characters are introduced in UTF-8 for spaces with a specific width. import matplotlib.pyplot as plt fig = plt.figure() ax = plt.gca() marker_size = [20.0, 40.0, 60.0, 100.0, 150.0] marker_color = ['black', 'red', 'pink', 'white', 'yellow'] ranges = [0.0, 1.5, 20.0, 60.0, 500.0] marker_edge_thickness = 1.2 s = [(m ** 2) / 100.0 for m in marker_size] scatter_kwargs = {'edgecolors': 'k', 'linewidths': marker_edge_thickness} for i in range(len(marker_size)): label_str = f"{ranges[i]:>5.1f}$\leq$H$_2$" if i < (len(marker_size) - 1): label_str += f"$<${ranges[i + 1]:>5.1f}" label_str = label_str.replace(' ', '\u2007') ax.scatter([], [], s=s[i], c=marker_color[i], label=label_str, **scatter_kwargs) ax.legend() plt.tight_layout() plt.show() | 4 | 4 |
74,370,984 | 2022-11-9 | https://stackoverflow.com/questions/74370984/is-tkwait-wait-variable-wait-window-wait-visibility-broken | I recently started to use tkwait casually and noticed that some functionality only works under special conditions. For example: import tkinter as tk def w(seconds): dummy = tk.Toplevel(root) dummy.title(seconds) dummy.after(seconds*1000, lambda x=dummy: x.destroy()) dummy.wait_window(dummy) print(seconds) root = tk.Tk() for i in [5,2,10]: w(i) root.mainloop() The code above works just fine and as expected: The for loop calls the function The function runs and blocks the code for x seconds The window gets destroyed and the for loop continues But in a more event driven environment these tkwait calls gets tricky. The documentation states quote: If an event handler invokes tkwait again, the nested call to tkwait must complete before the outer call can complete. Instead of an output of >>5 >>2 >>10 you will get >>10 >>2 >>5 because the nested call blocks the inner and the outer will block the inner. I suspect a nested event loop or an equivalent of the mainloop processes events in the normal fashion while waiting. Am I doing something wrong by using this feature? Because if you think about it, nearly all tkinter dialog windows are using this feature and I've never read about this behavior before. An event driven example might be: import tkinter as tk def w(seconds): dummy = tk.Toplevel(root) dummy.title(seconds) dummy.after(seconds*1000, lambda x=dummy: x.destroy()) dummy.wait_window(dummy) print(seconds) root = tk.Tk() btn1 = tk.Button( root, command=lambda : w(5), text = '5 seconds') btn2 = tk.Button( root, command=lambda : w(2), text = '2 seconds') btn3 = tk.Button( root, command=lambda : w(10), text = '10 seconds') btn1.pack() btn2.pack() btn3.pack() root.mainloop() As an additional problem that raises with wait_something is that it will prevent your process to finish if the wait_something never was released. | Basically, you need great care if you're using an inner event loop because: Conditions that would terminate the outer event loop aren't checked for until the inner event loop(s) are finished. It's really quite easy to end up recursively entering an inner event loop by accident. The recursive entry problem is usually most easily handled by disabling the path that enters the event loop while the inner event loop runs. There's often an obvious way to do this, such as disabling the button that you'd click. The condition handling is rather more difficult. In Tcl, you'd handle it by restructuring things slightly using a coroutine so that the thing that looks like an inner event loop isn't, but rather is just parking things until the condition is satisfied. That option is... rather more difficult to do in Python as the language implementation isn't fully non-recursive (and I'm not sure that Tkinter is set up to handle the mess of async function coloring). Fortunately, provided you're careful, it's not too difficult. It helps if you know that wait_window is waiting for a <Destroy> event where the target window is the toplevel (and not one of the inner components) and that destroying the main window will trigger it as all the other windows are also destroyed when you do that. In short, as long as you avoid reentrancy you'll be fine with it. You just need to arrange for the button that was clicked to be disabled while the wait is ongoing; that's good from a UX perspective too (the user can't do it, so don't provide the visual hint that they can). 
def w(seconds, button): dummy = tk.Toplevel(root) dummy.title(seconds) dummy.after(seconds*1000, lambda x=dummy: x.destroy()) button["state"] = "disabled" # <<< This, before the wait dummy.wait_window(dummy) button["state"] = "normal" # <<< This, after the wait print(seconds) btn1 = tk.Button(root, text = '5 seconds') # Have to set the command after creation to bind the button handle to the callback btn1["command"] = (lambda : w(5, btn1)) This all omits little things like error handling. | 8 | 8 |
74,369,418 | 2022-11-9 | https://stackoverflow.com/questions/74369418/dealing-with-cracks-in-the-unary-union-of-several-imprecise-polygons | I used shapely.ops.unary_union on a number of 6-sided shapely.geometry.Polygons, and obtained the following shape A: Note how there are two "cracks" in the upper part. These are not intended, and are presumably caused by some floating-point edge cases. If you construct another shape B that sits inside of A, and if A happens to intersect one of these "cracks", then A.covers(B) will be False! In my particular case, this leads to a test suite failure, because A.covers(B) is supposed to be an invariant. Therefore I need to deal with this somehow. Is there some algorithm I can use to "seal" these cracks? In practice, these "cracks" will not affect the functioning of the application, because we only care about the outer border of A covering B. Thus I am open to solutions that adjust the individual hexagons in order to make the cracks go away by introducing overlaps. However, I cannot accept the outer border of this shape changing, because that would actually no longer be testing the application as it is intended to be used. To summarize, I want the outcome to look like this (my hand-edited version): | You can fix this by buffering and un-buffering the shape. Here's an example of a polygon with a small crack: from shapely.geometry import Polygon bad_polygon = Polygon([[0, 0], [1, 0], [1, 0.4999], [0.5, 0.5], [1, 0.5001], [1, 1], [0, 1], [0, 0]]) To fix it, expand the shape slightly, and contract it the same amount, using the buffer() method. tol = 1e-4 bad_polygon.buffer(tol).buffer(-tol) The value tol must be at least as large as half the distance across the crack at the crack's widest point. | 4 | 5 |
74,316,373 | 2022-11-4 | https://stackoverflow.com/questions/74316373/what-is-the-point-for-asyncio-synchronization-primitives-not-to-be-thread-safe | It seems that several asyncio functions, like those shown here, for synchronization primitives are not thread safe... By being not thread safe, considering for example asyncio.Lock, I assume that this lock won't lock the global variable when we're running multiple threads in our computer, so race conditions are a problem. So, what's the point of having this Lock that doesn't lock? (not a criticism, but an honest doubt) What are the use cases for these unsafe primitives? | For use cases, look at What's Python asyncio.Lock() for? As for why it is not thread-safe: it is mostly performance. Asyncio is not made for multithreaded work like old servers used to do; the asyncio eventloop itself is not thread-safe and only runs coroutines in a single thread, hence a lock that is only running in a single thread doesn't need to be thread-safe, and doesn't need to suffer from the extra overhead of using thread-safe primitives. Interacting with asyncio.Lock suspends the current coroutine and instructs the eventloop to resume another coroutine (remember the eventloop is not thread-safe). Those primitives don't need interaction with the OS, unlike thread-safe ones which need to interact with the OS, which is the entire reason why asyncio is faster than old multithreaded servers. You can use loop.run_in_executor to do tasks in another thread, and those tasks can hold a threading.Lock while doing their work. This is preferred over trying to acquire the lock in the asyncio eventloop thread. To use a threading.Lock in an async function you could wait for it indirectly using the answer in this question How to use threading.Lock in async function while object can be accessed from multiple thread as commented by @dano. It is not that those primitives can't be made thread-safe. They can be made to work similarly to loop.call_soon_threadsafe or the above method, which is more expensive than a normal threading.Lock. Thread-safe locks are expensive and thread-safe interactions with the eventloop are even more expensive, because you need to send a message to the eventloop to interact with it. | 8 | 7 |
74,289,077 | 2022-11-2 | https://stackoverflow.com/questions/74289077/attributeerror-multiprocessingdataloaderiter-object-has-no-attribute-next | I am trying to load the dataset using Torch Dataset and DataLoader, but I got the following error: AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next' the code I use is: class WineDataset(Dataset): def __init__(self): # Initialize data, download, etc. # read with numpy or pandas xy = np.loadtxt('./data/wine.csv', delimiter=',', dtype=np.float32, skiprows=1) self.n_samples = xy.shape[0] # here the first column is the class label, the rest are the features self.x_data = torch.from_numpy(xy[:, 1:]) # size [n_samples, n_features] self.y_data = torch.from_numpy(xy[:, [0]]) # size [n_samples, 1] # support indexing such that dataset[i] can be used to get i-th sample def __getitem__(self, index): return self.x_data[index], self.y_data[index] # we can call len(dataset) to return the size def __len__(self): return self.n_samples dataset = WineDataset() train_loader = DataLoader(dataset=dataset, batch_size=4, shuffle=True, num_workers=2) I tried to make the num_workers=0, still have the same error. Python version 3.8.9 PyTorch version 1.13.0 | I too faced the same issue, when i tried to call the next() method as follows dataiter = iter(dataloader) data = dataiter.next() You need to use the following instead and it works perfectly: dataiter = iter(dataloader) data = next(dataiter) Finally your code should look like follows: class WineDataset(Dataset): def __init__(self): # Initialize data, download, etc. # read with numpy or pandas xy = np.loadtxt('./data/wine.csv', delimiter=',', dtype=np.float32, skiprows=1) self.n_samples = xy.shape[0] # here the first column is the class label, the rest are the features self.x_data = torch.from_numpy(xy[:, 1:]) # size [n_samples, n_features] self.y_data = torch.from_numpy(xy[:, [0]]) # size [n_samples, 1] # support indexing such that dataset[i] can be used to get i-th sample def __getitem__(self, index): return self.x_data[index], self.y_data[index] # we can call len(dataset) to return the size def __len__(self): return self.n_samples dataset = WineDataset() dataloader = DataLoader(dataset=dataset, batch_size=4, shuffle=True, num_workers=2) dataiter = iter(dataloader) data = next(dataiter) | 37 | 94 |
74,318,682 | 2022-11-4 | https://stackoverflow.com/questions/74318682/how-to-submit-html-form-input-value-using-fastapi-and-jinja2-templates | I am facing the following issue while trying to pass a value from an HTML form <input> element to the form's action attribute and send it to the FastAPI server. This is how the Jinja2 (HTML) template is loaded: # Test TEMPLATES @app.get("/test",response_class=HTMLResponse) async def read_item(request: Request): return templates.TemplateResponse("index.html", {"request": request}) My HTML form: <form action="/disableSubCategory/{{subCatName}}"> <label for="subCatName">SubCategory:</label><br> <input type="text" id="subCatName" name="subCatName" value=""><br> <input type="submit" value="Disable"> </form> My FastAPI endpoint to be called in the form action: # Disable SubCategory @app.get("/disableSubCategory/{subCatName}") async def deactivateSubCategory(subCatName: str): disableSubCategory(subCatName) return {"message": "SubCategory [" + subCatName + "] Disabled"} The error I get: "GET /disableSubCategory/?subCatName=Barber HTTP/1.1" 404 Not Found What I am trying to achieve is the following FastAPI call: /disableSubCategory/{subCatName} ==> "/disableSubCategory/Barber" Anyone who could help me understand what I am doing wrong? Thanks. Leo | Option 1 You could have the category name defined as Form parameter in the backend, and submit a POST request from the frontend using an HTML <form>, as described in Method 1 of this answer. app.py from fastapi import FastAPI, Form, Request from fastapi.responses import HTMLResponse from fastapi.templating import Jinja2Templates app = FastAPI() templates = Jinja2Templates(directory='templates') @app.post('/disable') async def disable_cat(cat_name: str = Form(...)): return f'{cat_name} category has been disabled.' 
@app.get('/', response_class=HTMLResponse) async def main(request: Request): return templates.TemplateResponse('index.html', {'request': request}) templates/index.html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> </head> <body> <h1>Disable a category</h1> <form method="post" action="/disable"> <label for="cat_name">Enter a category name to disable:</label><br> <input type="text" id="cat_name" name="cat_name"> <input class="submit" type="submit" value="Submit"> </form> </body> </html> To achieve the same result, i.e., submiting Form data to the backend through a POST request, using JavaScript's Fetch API instead, you could use the following template in the frontend (see Option 4 later on in this answer for a similar approach): templates/index.html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> </head> <body> <h1>Disable a category</h1> <label for="cat_name">Enter a category name to disable:</label><br> <input type="text" id="cat_name" name="cat_name"> <input type="button" value="Submit" onclick="send()"> <p id="resp"></p> <script> function send() { var resp = document.getElementById("resp"); const cat_name = document.getElementById("cat_name").value; var formData = new FormData(); formData.append("cat_name", cat_name); fetch('/disable', { method: 'POST', body: formData, }) .then(response => response.json()) .then(data => { resp.innerHTML = JSON.stringify(data); // data is a JSON object }) .catch(error => { console.error(error); }); } </script> </body> </html> To use arbitrary Form keys, i.e., if the Form keys/names are not known beforehand to the backend, you might want to have a look at this answer. Option 2 You could have the category name declared as query parameter in your endpoint, and in the frontend use a similar approach to the one demonstrated in your question to convert the value from the <form> <input> element into a query parameter, and then add it to the query string of the URL (in the action attribute). Note that the below uses a GET request in contrast to the above (in this case, you need to use @app.get() in the backend and <form method="get" ... in the frontend, which is the default method anyway). Beware that most browsers cache GET requests (i.e., saved in browser's history), thus making them less secure compared to POST, as the data sent are part of the URL and visible to anyone who has access to the device. Thus, GET method should not be used when sending passwords or other sensitive information. app.py from fastapi import FastAPI, Request from fastapi.responses import HTMLResponse from fastapi.templating import Jinja2Templates app = FastAPI() templates = Jinja2Templates(directory='templates') @app.get('/disable') async def disable_cat(cat_name: str): return f'{cat_name} category has been disabled.' 
@app.get('/', response_class=HTMLResponse) async def main(request: Request): return templates.TemplateResponse('index.html', {'request': request}) templates/index.html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> </head> <body> <h1>Disable a category</h1> <form method="get" id="myForm" action='/disable{{ cat_name }}'> <label for="cat_name">Enter a category name to disable:</label><br> <input type="text" id="cat_name" name="cat_name"> <input class="submit" type="submit" value="Submit"> </form> </body> </html> If you instead would like to use a POST requestβwhich might make more sense when updating content/state on the server compared to GET that should be used when requesting (not modifying) dataβyou could define the FastAPI endpoint in app.py above with @app.post(), as well as replace the above template with the one below (similar to Method 2 of this answer), which submits the <form> using POST method, after transforming the <form> data into query parameters. Note that since the data are still sent as part of the query string, one would still be able to see them in the browser's history, despite of using a POST request method; hence, in this case, using a POST method, it doesn't actually make it a more secure way of transferring those data. To avoid that, one should submit the data in the request body instead, as demonstrated in Option 1 earlier. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <script> document.addEventListener('DOMContentLoaded', (event) => { document.getElementById("myForm").addEventListener("submit", function (e) { var myForm = document.getElementById('myForm'); var qs = new URLSearchParams(new FormData(myForm)).toString(); myForm.action = '/disable?' + qs; }); }); </script> </head> <body> <h1>Disable a category</h1> <form method="post" id="myForm"> <label for="cat_name">Enter a category name to disable:</label><br> <input type="text" id="cat_name" name="cat_name"> <input class="submit" type="submit" value="Submit"> </form> </body> </html> Option 3 You could still have it defined as path parameter (as shown in your question), and use JavaScript in the frontend to modify the action attribute of the HTML <form>, by passing the value of the <form> <input> element as path parameter to the URL, similar to what has been described earlier. app.py from fastapi import FastAPI, Request from fastapi.responses import HTMLResponse from fastapi.templating import Jinja2Templates app = FastAPI() templates = Jinja2Templates(directory='templates') @app.post('/disable/{name}') async def disable_cat(name: str): return f'{name} category has been disabled.' 
@app.get('/', response_class=HTMLResponse) async def main(request: Request): return templates.TemplateResponse('index.html', {'request': request}) templates/index.html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <script> document.addEventListener('DOMContentLoaded', (event) => { document.getElementById("myForm").addEventListener("submit", function (e) { var myForm = document.getElementById('myForm'); var catName = document.getElementById('catName').value; myForm.action = '/disable/' + catName; }); }); </script> </head> <body> <h1>Disable a category</h1> <form method="post" id="myForm"> <label for="catName">Enter a category name to disable:</label><br> <input type="text" id="catName" name="catName"> <input class="submit" type="submit" value="Submit"> </form> </body> </html> Option 4 If you would like to prevent the page from reloading/redirecting when hitting the submit button of the HTML <form> and rather get the results in the same page, you could use Fetch API, as briefly described earlier, in order to make an asynchronous HTTP request, similar to this answer, as well as this answer and this answer. Additionally, one could call the Event.preventDefault() function, as described in this answer, in order to prevent the default action when submitting an HTML <form>. The example below is based on the previous option (i.e., Option 3); however, the same approach below (i.e., making an asynchronous HTTP request) could also be used for Options 1 & 2 demonstrated earlier, if you would like to keep the browser from refreshing the page on <form> submission. Note that this option submits the <input> value as a path parameter to the backend, but if you would like to submit it as a Form parameter instead, please have a look at the relevant code given in Option 1 earlier. app.py from fastapi import FastAPI, Request from fastapi.responses import HTMLResponse from fastapi.templating import Jinja2Templates app = FastAPI() templates = Jinja2Templates(directory='templates') @app.post('/disable/{name}') async def disable_cat(name: str): return f'{name} category has been disabled.' @app.get('/', response_class=HTMLResponse) async def main(request: Request): return templates.TemplateResponse('index.html', {'request': request}) templates/index.html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <script> document.addEventListener('DOMContentLoaded', (event) => { document.getElementById("myForm").addEventListener("submit", function (e) { e.preventDefault() // Cancel the default action var catName = document.getElementById('catName').value; fetch('/disable/' + catName, { method: 'POST', }) .then(resp => resp.text()) // or, resp.json(), etc. .then(data => { document.getElementById("response").innerHTML = data; }) .catch(error => { console.error(error); }); }); }); </script> </head> <body> <h1>Disable a category</h1> <form id="myForm"> <label for="catName">Enter a category name to disable:</label><br> <input type="text" id="catName" name="catName"> <input class="submit" type="submit" value="Submit"> </form> <div id="response"></div> </body> </html> | 3 | 11 |
74,312,939 | 2022-11-4 | https://stackoverflow.com/questions/74312939/can-you-make-a-regular-python-class-frozen | It's useful to be able to create frozen dataclasses. I'm wondering if there is a way to do something similar for regular python classes (ones with an __init__ function with complex logic possibly). It would be good to prevent modification after construction in some kind of elegant way, like frozen dataclasses. | yes. All attribute access in Python is highly customizable, and this is just a feature dataclasses make use of. The easiest way to control attribute setting is to create a custom __setattr__ method in your class - if you want to be able to create attributes during __init__ one of the ways is to have an specific parameter to control whether each instance is frozen already, and freeze it at the end of __init__: class MyFrozen: _frozen = False def __init__(self, ...): ... self._frozen = True def __setattr__(self, attr, value): if getattr(self, "_frozen", None): raise AttributeError("Trying to set attribute on a frozen instance") return super().__setattr__(attr, value) | 3 | 4 |
74,308,012 | 2022-11-3 | https://stackoverflow.com/questions/74308012/type-hints-without-value-assignment-in-python | I was under the impression that typing module in Python is mostly for increasing code readability and for code documentation purposes. After playing around with it and reading about the module, I've managed to confuse myself with it. Code below works even though those two variables are not initialized (as you would normally initialize them e.g. a = "test"). I've only put a type hint on it and everything seems ok. That is, I did not get a NameError as I would get if I just had a in my code NameError: name 'a' is not defined Is declaring variables in this manner (with type hints) an OK practice? Why does this work? from typing import Any test_var: int a: Any print('hi') I expected test_var: int to return an error saying that test_var is not initiated and that I would have to do something like test_var: int = 0 (or any value at all). Does this get set to a default value because I added type hint to it? | It is fairly straightforward, when you consider the namespaces involved. This is hinted at by the fact that you get a NameError, when you actually try and do anything with test_var, such as passing it to a function (like print). It tells you that the name you used is not known to the interpreter. What does variable assignment do? What happens, when you assign a value to a variable in the global namespace of a module for the first time, is it gets added to that module's globals dictionary with the key being the variable name as a string and the value being, well, its value. You can see this dictionary by calling the built-in globals function in that module: (I will be using pprint in some of the following examples to make the output easier to read.) from pprint import pprint a = 1 pprint(globals()) The output looks something like this: {'__annotations__': {}, ... '__name__': '__main__', ... 'a': 1, ...} There are various other keys in the globals dictionary that we can ignore for this matter. But you can see that the key 'a' appears in it and its associated value is the 1 we assigned to the variable named a before. (In case this is not obvious, the order of statements matters; if you check the output of globals() before assigning a value to a, there will be no entry in that dictionary for it.) What does annotation do? When you look closer at that dictionary, you'll find another interesting key there, namely __annotations__. Right now, its value is an empty dictionary. But I bet you can already guess, what will happen, if we annotate our variable with a type: from pprint import pprint a: int = 1 pprint(globals()) The output: {'__annotations__': {'a': <class 'int'>}, ... 'a': 1, ...} When we add a type hint to (i.e. annotate) a variable, the interpreter adds that name and type to the relevant __annotations__ dictionary (see annotation assignment docs); in this case that of our module. By the way, since the __annotations__ dictionary is in our global namespace we can access it directly: a: int = 1 print("a" in globals()) # True print("a" in __annotations__) # True As you can see, __annotations__ is a variable like any other, except that it is present by default, without you having to manually assign anything to it. It gets updated any time a variable is annotated in its scope. Side note There is nothing forcing us to annotate a correctly. 
We can assign a wrong type to it and Python will gladly add that to the __annotations__ dictionary and this incorrect annotation will have no effect whatsoever on the value assignment: a: str = 1 print(a) # 1 print(type(a) is int) # True print(__annotations__) # {'a': <class 'str'>} In fact, the interpreter's laissez-faire policy regarding annotations goes so far that we can use literally anything for the annotation as long as it is a syntactically valid expression, even though it may make zero sense semantically. For example a complex number literal: a: 2+1j = 1 print(a) # 1 print(type(a) is int) # True print(__annotations__) # {'a': (2+1j)} You can even call arbitrary functions in the annotation itself: a: bin(3) = 1 print(a) # 1 print(type(a) is int) # True print(__annotations__) # {'a': '0b11'} But let us get back to the topic. Can you annotate without assigning? Finally, what happens, if we just annotate without assigning a value to a variable? a: int print("a" in globals()) # False print("a" in __annotations__) # True And that is the explanation of why we get an error, if we try and e.g. print out a in this example, but otherwise don't get any error. The code merely told the interpreter (and any static type checker) about the annotation, but it assigned no value, thus not creating an entry in the global namespace dictionary. It makes sense, if you think about it: What should be set as the value for a in that namespace? It has no value (not even None or NotImplemented or anything like that). To the interpreter the a: int line merely meant the creation of an entry in the __annotations__ of our module, which is perfectly valid. Runtime meaning of annotations I would also like to stress the fact that the annotation is not meaningless for the interpreter and thus runtime, as some people often claim. It is admittedly rarely used, but as we just saw in the example, you can absolutely work with annotations at runtime. Whether or not this is useful is obviously up to you. Some packages like Pydantic or the standard library's dataclasses actually rely heavily on annotations for their purposes. The value set in the __annotations__ dictionary in our example is actually a reference to the int class. So we can absolutely work with it at runtime, if we want to: a: int a_type = __annotations__["a"] print(a_type is int) # True print(a_type("2")) # 2 You can play around with this concept in class namespaces as well (not just with the module namespace), but I'll leave this as an exercise for the reader. So to wrap up, for a name to be added to any namespace, it must have a value assigned to it. Not assigning a value and just providing an annotation is totally fine to create an entry in that namespace's __annotations__. | 11 | 18 |
74,360,992 | 2022-11-8 | https://stackoverflow.com/questions/74360992/how-to-cache-data-in-fastapi | How can I cache requests in FastAPI? For example, there are two functions and a PostgreSQL database: @app.get("/") def home(request: Request): return templates.TemplateResponse("index.html", {"request": request}) @app.post("/api/getData") async def getData(request: Request, databody = Body()): data = databody["data"] with connection.cursor() as cursor: cursor.execute( f"INSERT INTO database (ip, useragent, datetime) VALUES ('request.headers['host']', 'request.headers['user-agent']', '{datetime.now()}'") ) return {"req": request} Then the request is processed by JavaScript and displayed on the HTML page . | You can try fastapi-cache: from fastapi import FastAPI from starlette.requests import Request from starlette.responses import Response from fastapi_cache import FastAPICache from fastapi_cache.backends.redis import RedisBackend from fastapi_cache.decorator import cache from redis import asyncio as aioredis app = FastAPI() @cache() async def get_cache(): return 1 @app.get("/") @cache(expire=60) async def index(): return dict(hello="world") @app.on_event("startup") async def startup(): redis = aioredis.from_url("redis://localhost", encoding="utf8", decode_responses=True) FastAPICache.init(RedisBackend(redis), prefix="fastapi-cache") | 6 | 7 |
74,323,364 | 2022-11-4 | https://stackoverflow.com/questions/74323364/how-to-add-a-private-repository-using-poetry | Using Artifactory (https://cloud.google.com/artifact-registry) I intend to add a dependency with poetry (https://python-poetry.org/docs/repositories/). I can install with command: pip install --index-url https://us-central1-python.pkg.dev/<PROJECT_ID>/<SOME_LIB_REPO>/simple/ <PACKAGE_NAME> (auth using keyrings.google-artifactregistry-auth) I intend to use POETRY to manage my dependencies. I don't know if using poetry adds this dependence. With poetry add --source <PACKAGE_NAME> https://us-central1-python.pkg.dev/<PROJECT_ID>/<SOME_LIB_REPO> I'm getting this error: poetry update Updating dependencies Resolving dependencies... (0.9s) RepositoryError 401 Client Error: Unauthorized for url: https://us-central1-python.pkg.dev/<PROJECT_ID>/<SOME_LIB_REPO>/simple/pytest/ My pyproject.toml ... [[tool.poetry.source]] name = "SOME_LIB" url = " https://us-central1-python.pkg.dev/<PROJECT_ID>/<SOME_LIB_REPO>/simple/" secondary = true Here there is details how to config with PIP/VIRTUALENV: https://cloud.google.com/artifact-registry/docs/python/authentication but not have details about Poetry. Do you have any tips about it? | Your RepositoryError of 401 Client Error: Unauthorized for url: https://us-central1-python.pkg.dev/<PROJECT_ID>/<SOME_LIB_REPO>/simple/pytest/ clearly indicates that you lack authorization for the URL mentioned. So you will need to make sure you properly get and use the authorization, that is, you will have a user with credentials who actually has the right to access it. Once you have that, you will need to authenticate, so the server you are sending the request to will be able to determine who you are and therefore to grant you proper access. Read: https://python-poetry.org/docs/repositories/#configuring-credentials The page provides the poetry config http-basic.foo <username> <password> example, explaining that you can set up some credentials for a specific repository. You can specify only the username, but then whenever you try to access the URL, you will be prompted for the password. You can also publish by adding --username and --password to the command. You can also specify environment variables, like export POETRY_PYPI_TOKEN_PYPI=my-token export POETRY_HTTP_BASIC_PYPI_USERNAME=<username> export POETRY_HTTP_BASIC_PYPI_PASSWORD=<password> See https://python-poetry.org/docs/configuration/#using-environment-variables Anyways, the error tells you that you were not properly authenticated via the credentials that are needed, so either you do not have credentials, they are incorrect or you lack privilege. | 6 | 2 |
74,326,921 | 2022-11-5 | https://stackoverflow.com/questions/74326921/graphql-schema-to-python-dataclasses-codegen | I have a GraphQL schema defined by the server and I'd like to write a nice Python GraphQL client for it. I'm looking for a way to transform my GraphQL schema into python classes with type hints such that I'll be able to see all available queries, mutations, their fields (names & types) and return vals. I cannot manually write all the python classes due to schema complexity; I have many filters on each field. See this example from ent on TodoWhereInput to understand how error prone this will be. I really enjoy using GraphQL playground with auto completion, I want that experience in my python client. For example, given this schema as an input: type Book { title: String year: Int } type Author { name: String books: [Book] } I'd like to generate this python code as an output: from dataclasses import dataclass @dataclass class Book: title: str year: int @dataclass class Author: name: str books: list[Book] same for Inputs in schema. I already looked at: codegen which is awesome for typescript! but doesn't have python support :/ gql_schema_codegen nice, but generating TypedDict which isn't dataclasses, I have to change each dict and pass total=False so it won't require all fields by default. sgqlc code-generator which doesn't allow type hints. writing queries is still dynamic and error prone. | Thanks for all the answers, but after revisiting this issue I found out Ariadne is the best solution for my case. Generating async / sync clients based on GraphQL schema + queries, highly configurable. You are welcome to try it yourself: https://github.com/mirumee/ariadne-codegen | 5 | 3 |
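If a full code-generation tool feels heavy, a rough sketch of the dataclass-generation idea is shown below; it uses graphql-core's build_schema, which neither the question nor the answer mentions, so the library choice and the added Query type are assumptions for illustration:

# Walk an SDL string and print @dataclass stubs for its plain object types.
from graphql import build_schema, GraphQLObjectType, GraphQLNonNull, GraphQLList

SDL = """
type Query { author: Author }
type Book { title: String year: Int }
type Author { name: String books: [Book] }
"""

SCALARS = {"String": "str", "Int": "int", "Float": "float", "Boolean": "bool", "ID": "str"}

def py_type(gql_type):
    # unwrap NonNull/List wrappers, then map named types to Python annotations
    if isinstance(gql_type, GraphQLNonNull):
        return py_type(gql_type.of_type)
    if isinstance(gql_type, GraphQLList):
        return f"list[{py_type(gql_type.of_type)}]"
    return SCALARS.get(gql_type.name, gql_type.name)

schema = build_schema(SDL)
for name, type_ in schema.type_map.items():
    if name.startswith("__") or not isinstance(type_, GraphQLObjectType):
        continue
    lines = ["@dataclass", f"class {name}:"]
    lines += [f"    {field_name}: {py_type(field.type)}" for field_name, field in type_.fields.items()]
    print("\n".join(lines) + "\n")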
74,316,387 | 2022-11-4 | https://stackoverflow.com/questions/74316387/how-to-use-blueprints-in-azure-functions-v2-for-python | In looking at the guide What do blueprints offer that just importing doesn't? Here are some points that are unclear: It says to have a file called http_blueprint.py in which you'd define some routes but it just looks like the regular http trigger but the decorator is a bp.route instead of an app.route. Are these also app.functions since the main file has 2 decorators per def? Does everything in the blueprint have to be an http trigger or is that just an example that they used? Can you have multiple blueprint files or are we limited to the single one? | Example of repo structure. project │ file001.py │ file002.py │ function_app.py │ README.md │ host.json │ local.settings.json file001.py: import azure.functions as func import json bp01 = func.Blueprint() @bp01.route(route="route01") def method01(req: func.HttpRequest) -> func.HttpResponse: return func.HttpResponse(json.dumps({ 'version': 1 })) file002.py: import azure.functions as func import json bp02 = func.Blueprint() @bp02.route(route="route02") def method02(req: func.HttpRequest) -> func.HttpResponse: return func.HttpResponse(json.dumps({ 'version': 2 })) This is how you register more than one blueprint in function_app.py: import azure.functions as func from file001 import bp01 from file002 import bp02 app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS) app.register_functions(bp01) app.register_functions(bp02) My guess is that blueprints are useful when you want separation of concerns, but I don't know for certain. The most confusing part to me was that the main methods from the two blueprints must have a unique name, otherwise the Azure Function gets confused and shows neither of them. | 6 | 9 |
74,364,918 | 2022-11-8 | https://stackoverflow.com/questions/74364918/how-to-pass-xfrozen-modules-off-to-python-to-disable-frozen-modules | Running a python script on VS outputs this error. How to pass -Xfrozen_modules=off to python to disable frozen modules? I was trying to update the python version from 3.6 to 3.11 and then started seeing this message. | If you are using VS Code you can add "pythonArgs": ["-Xfrozen_modules=off"] to your debug configuration in launch.json, like this: { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python (Xfrozen=off)", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "justMyCode": true, "pythonArgs": ["-Xfrozen_modules=off"] } ] } There is some additional context in the GitHub Issue fabioz/PyDev.Debugger/issues/213. | 11 | 14 |
74,315,381 | 2022-11-4 | https://stackoverflow.com/questions/74315381/docker-compose-environment-variable-is-not-set | Project tree /backend .env.dev docker-compose-dev.yml /project I have the following warning: docker-compose -f docker-compose-dev.yml up --build # i am in the /backend directory WARNING: The DB_USER variable is not set. Defaulting to a blank string. WARNING: The DB_PASSWORD variable is not set. Defaulting to a blank string. WARNING: The DB_NAME variable is not set. Defaulting to a blank string. It means that docker compose don't see my environment file. How can I fix it? docker-compose-dev.yml services: django: build: ./project # path to Dockerfile command: sh -c " sleep 3 && gunicorn -w 8 --bind 0.0.0.0:8800 core_app.wsgi" volumes: - ./project:/project - ./project/static:/project/static - ./project/media:/project/media - ./project/logs:/project/logs expose: - 8800 env_file: - ./.env.dev depends_on: - db db: image: postgres:13-alpine volumes: - pg_data:/var/lib/postgresql/data/ expose: - 5432 ports: - "5435:5432" env_file: - ./.env.dev environment: - POSTGRES_USER=${DB_USER} - POSTGRES_PASSWORD=${DB_PASSWORD} - POSTGRES_DB=${DB_NAME} ....... How can I enable env file? P.S Values are set in the file .env.dev DB_NAME=db_dev DB_USER=postgres DB_PASSWORD=passsss132132 ENV_TYPE=DEV UPDATE Found out that this way is working docker-compose -f docker-compose-dev.yml --env-file=.env.dev up | If the complaint is coming from docker (compose) itself, try to: rename ./.env.dev to simply .env.dev; if not yet, rename .env.dev to .env (default) and remove the env_file entry from your compose. That will certainly work, then you can go back to investigate the issue with env_file ([1]). Update Now facing the same problem myself -- in a similar situation trying to separate the variables of my JupyterHub from that destined to the Jupyter Notebooks --, I digged in a bit more to understand better the role of Compose's .env and env_file:. As already informed in the question, the use of option --env-file solves the issue. Which is the right answer for defining docker-compose's (yaml) environment variables in a file named differently from .env (default). The env_file option in docker-compose.yaml is meant for the container being run: only the container sees those variables. I recently posted a similar answer to a similar question : https://stackoverflow.com/a/75538969/687896 [1] https://docs.docker.com/compose/env-file/ | 11 | 8 |
74,365,266 | 2022-11-8 | https://stackoverflow.com/questions/74365266/how-to-change-the-label-in-display-list-of-a-field-in-the-model-in-django-admin | I have a model with some fields with a verbose_name. This verbose name is suitable for the admin edit page, but definitively too long for the list page. How to set the label to be used in the list_display admin page? | You can create custom columns. For example, there is Person model below: # "models.py" from django.db import models class Person(models.Model): name = models.CharField(max_length=30) age = models.IntegerField() Now, you can create the custom columns "my_name" and "my_age" with my_name() and my_age() and can rename them with @admin.display as shown below: @admin.register(Person) class PersonAdmin(admin.ModelAdmin): list_display = ("my_name", "my_age") # "my_name" and "my_age" need to be assigned @admin.display(description='My name') def my_name(self, obj): # β Displayed return obj.name @admin.display(description='My age') def my_age(self, obj): # β Displayed return obj.age Then, MY NAME, MY AGE and the values of "name" and "age" fields are displayed as shown below: Of course, you can assign "name" and "age" fields to list_display in addition to the custom columns "my_name" and "my_age" as shown below: @admin.register(Person) class PersonAdmin(admin.ModelAdmin): list_display = ("my_name", "my_age", "name", "age") # β Here β @admin.display(description='My name') def my_name(self, obj): return obj.name @admin.display(description='My age') def my_age(self, obj): return obj.age Then, NAME, AGE and the values of "name" and "age" fields are displayed as shown below: | 5 | 3 |
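On Django versions that predate the @admin.display decorator (it arrived in Django 3.2), the same short column label can be set with the long-standing short_description attribute; a minimal sketch with the Person example from the answer:

# admin.py (assumes the Person model defined above)
from django.contrib import admin
from .models import Person

@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
    list_display = ("my_name", "my_age")

    def my_name(self, obj):
        return obj.name
    my_name.short_description = "My name"  # column header in the change list

    def my_age(self, obj):
        return obj.age
    my_age.short_description = "My age"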
74,351,267 | 2022-11-7 | https://stackoverflow.com/questions/74351267/portable-way-to-write-python-3-shebang | Back when Python3 was there, I used to use: #!/usr/bin/env python3 But recently, especially with Ubuntu 22.04 or macOS, the python3 executable isn't always available in PATH, instead, I should use python to call python3. Is there any portable way to write Python3 shebang? | According to the Python docs, using #!/usr/bin/env python3 is still the recommended way -- but they admit that it does not always work: 2.4. Miscellaneous To easily use Python scripts on Unix, you need to make them executable, e.g. with chmod +x script and put an appropriate Shebang line at the top of the script. A good choice is usually #!/usr/bin/env python3 which searches for the Python interpreter in the whole PATH. However, some Unices may not have the env command, so you may need to hardcode /usr/bin/python3 as the interpreter path. The alternative is then to hardocode the python binary, i.e. /usr/bin/python3 (or python in your case). | 4 | 2 |
74,289,869 | 2022-11-2 | https://stackoverflow.com/questions/74289869/how-to-unit-test-a-pure-asgi-middleware-in-python | I have an ASGI middleware that adds fields to the POST request body before it hits the route in my fastapi app. from starlette.types import ASGIApp, Message, Scope, Receive, Send class MyMiddleware: """ This middleware implements a raw ASGI middleware instead of a starlette.middleware.base.BaseHTTPMiddleware because the BaseHTTPMiddleware does not allow us to modify the request body. For documentation see https://www.starlette.io/middleware/#pure-asgi-middleware """ def __init__(self, app: ASGIApp): self.app = app async def __call__(self, scope: Scope, receive: Receive, send: Send): if scope["type"] != "http": await self.app(scope, receive, send) return "" async def modify_message(): message: dict = await receive() if message.get("type", "") != "http.request": return message if not message.get("body", None): return message body: dict = json.loads(message.get("body", b"'{}'").decode("utf-8")) body["some_field"] = "foobar" message["body"] = json.dumps(body).encode("utf-8") return message await self.app(scope, modify_message, send) Is there an example on how to unit test an ASGI middleware? I would like to test directly the __call__ part which is difficult as it does not return anything. Do I need to use a test api client (e.g. TestClient from fastapi) to then create some dummy endpoint which returns the request as response and thereby check if the middleware was successful or is there a more "direct" way? | I've faced the similar problem recently, so I want to share my solution for fastapi and pytest. I had to implement per request logs for the fastapi app using middlewares. I've checked Starlette's test suite as Marcelo Trylesinski suggested and adapted the code to fit fastapi. Thank you for the recommendation, Marcelo! Here is my middleware that logs information from every request and response. # middlewares.py import logging from starlette.types import ASGIApp, Scope, Receive, Send logger = logging.getLogger("app") class LogRequestsMiddleware: def __init__(self, app: ASGIApp) -> None: self.app = app async def __call__( self, scope: Scope, receive: Receive, send: Send ) -> None: async def send_with_logs(message): """Log every request info and response status code.""" if message["type"] == "http.response.start": # request info is stored in the scope # status code is stored in the message logger.info( f'{scope["client"][0]}:{scope["client"][1]} - ' f'"{scope["method"]} {scope["path"]} ' f'{scope["scheme"]}/{scope["http_version"]}" ' f'{message["status"]}' ) await send(message) await self.app(scope, receive, send_with_logs) To test a middleware, I had to create test_factory_client fixture: # conftest.py import pytest from fastapi.testclient import TestClient @pytest.fixture def test_client_factory() -> TestClient: return TestClient In the test, I mocked logger.info() call within the middleware and asserted if the method was called. 
# test_middlewares.py from unittest import mock from fastapi.testclient import TestClient from fastapi import FastAPI from .middlewares import LogRequestsMiddleware # mock logger call within the pure middleware @mock.patch("path.to.middlewares.logger.info") def test_log_requests_middleware( mock_logger, test_client_factory: TestClient ): # create a fresh app instance to isolate tested middlewares app = FastAPI() app.add_middleware(LogRequestsMiddleware) # create an endpoint to test middlewares @app.get("/") def homepage(): return {"hello": "world"} # create a client for the app using the fixture client = test_client_factory(app) # call an endpoint response = client.get("/") # sanity check assert response.status_code == 200 # check if the logger was called mock_logger.assert_called_once() | 5 | 4 |
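For the asker's other sub-question (a more "direct" way without a test client), a sketch that unit-tests the body-modifying middleware from the question by calling its __call__ with hand-built scope/receive/send callables; the import path for MyMiddleware is an assumed placeholder:

import asyncio
import json

from middlewares import MyMiddleware  # assumed module path for the class in the question

def test_middleware_adds_field():
    seen = {}

    async def downstream_app(scope, receive, send):
        # the fake ASGI app just records the (possibly rewritten) request message
        seen["message"] = await receive()

    async def receive():
        return {"type": "http.request",
                "body": json.dumps({"data": 1}).encode("utf-8"),
                "more_body": False}

    async def send(message):
        pass  # not needed for this assertion

    scope = {"type": "http", "method": "POST", "path": "/", "headers": []}
    asyncio.run(MyMiddleware(downstream_app)(scope, receive, send))

    body = json.loads(seen["message"]["body"])
    assert body["some_field"] == "foobar"  # the field injected by the middleware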
74,319,151 | 2022-11-4 | https://stackoverflow.com/questions/74319151/why-is-there-pip3-10-exe-in-python311-scripts | As you can see below, after I installed Python 3.11, I came to the realization that running pip3.10 freeze did not list me the packages I had in my Python 3.10.2 but those of my Python 3.11. This is explained by the fact that in Python311\Scripts I have both pip3.10.exe and pip3.11.exe. Is there a reason? When I want to pip install or do pip freeze with pip3.10 I need to use the absolute path now. | This was a pip bug, I don't understand the details exactly but when pip tried to match to the correct Python version it was only accounting for single-digit version numbers, resulting in this weird behavior. Perhaps it is better explained in this thread on github. To summarize that thread (from user uranusjr): pip contains logic to specially fix pipX.Y (and easy_install-X.Y) entry points to correctly match the Python version, but that logic only accounts for single-digit version numbers and doesnβt work with 3.10 upwards. This was fixed with pip version 22.3.1 (Also seen on this github thread), and from what I can tell doesn't occur anymore when using python 3.11.1. | 4 | 4 |
74,350,734 | 2022-11-7 | https://stackoverflow.com/questions/74350734/kedro-how-to-update-a-dataset-in-a-kedro-pipeline-given-that-a-dataset-cannot | In a Kedro project, I have a dataset in catalog.yml that I need to increment by adding a few lines each time I call my pipeline. #catalog.yml my_main_dataset: type: pandas.SQLTableDataSet credentials: postgrey_credentials save_args: if_exists: append table_name: my_dataset_name However I cannot just rely on append in my catalog parameters since I need to control that I do not insert already existing dates in my dataset to avoid duplicates. I also cannot create a node taking my dataset both as input (to look for already existing dates and merge with the additional data) and as output, otherwise I'm creating a cycle which is forbidden (only DAG are permitted). I'm stuck and do not see any elegant way to solve my issue. I looked at other threads but did not find anything relevant on stackoverflow so far. I tried a very ugly thing which is to create an independent node in the same pipeline just to look into my dataset and record min and max dates in global variables as a side effect, in order to use the in the main flow to control the append. It's not only ugly, but it also fails since I cannot control in which order independent nodes of a same pipeline will be run... Idealy I would like to achieve something like this, which it is forbidden by Kedro the way I coded it (not DAG): #catalog.yml my_main_dataset: type: pandas.SQLTableDataSet credentials: postgrey_credentials save_args: if_exists: append table_name: my_dataset_name my_additional_dataset: type: pandas.SQLTableDataSet credentials: postgrey_credentials save_args: if_exists: append table_name: my__additional_dataset_name #node.py import pandas as pd def increment_main_dataset(main_df, add_df): last_date = main_df['date'].max() filtered_add_df = add_df.loc[add_df['date'] > last_date] main_df = pd.concat([main_df, filtered_add_df], axis=0) return main_df #pipeline.py from kedro.pipeline import Pipeline, node, pipeline from .nodes import * def create_pipeline(**kwargs) -> Pipeline: return pipeline([ node( func=increment_main_dataset, inputs=["my_main_dataset", "my_additional_dataset"], outputs="my_main_dataset", name="increment-dataset", ), ]) | It may not be the best solution, but a workaround is for you to set two kedro datasets pointing to the same physical space. One for reading and the other for writing, but to the same file/table. 
Something like: #catalog.yml my_main_dataset_read: # same as my_main_dataset_write type: pandas.SQLTableDataSet credentials: postgrey_credentials save_args: if_exists: append table_name: my_dataset_name my_main_dataset_write: # same as my_main_dataset_read type: pandas.SQLTableDataSet credentials: postgrey_credentials save_args: if_exists: append table_name: my_dataset_name my_additional_dataset: type: pandas.SQLTableDataSet credentials: postgrey_credentials save_args: if_exists: append table_name: my__additional_dataset_name #node.py - no changes from your code import pandas as pd def increment_main_dataset(main_df, add_df): last_date = main_df['date'].max() filtered_add_df = add_df.loc[add_df['date'] > last_date] main_df = pd.concat([main_df, filtered_add_df], axis=0) return main_df #pipeline.py from kedro.pipeline import Pipeline, node, pipeline from .nodes import * def create_pipeline(**kwargs) -> Pipeline: return pipeline([ node( func=increment_main_dataset, inputs=["my_main_dataset_read", "my_additional_dataset"], outputs="my_main_dataset_write", name="increment-dataset", ), ]) | 4 | 2 |
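One caveat with the node as written: on the very first run the "read" table can be empty, main_df['date'].max() is then NaN, and the date comparison filters out every new row. A defensive variant is sketched below; the empty-table check and the choice to return only the new rows (since the catalog writes with if_exists: append) are assumptions, not part of the answer:

import pandas as pd

def increment_main_dataset(main_df: pd.DataFrame, add_df: pd.DataFrame) -> pd.DataFrame:
    # first run: nothing in the main table yet, so write everything
    if main_df.empty:
        return add_df
    last_date = main_df["date"].max()
    # return only the genuinely new rows; with if_exists: append in the catalog,
    # returning old + new concatenated would append the old rows a second time
    return add_df.loc[add_df["date"] > last_date]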
74,319,258 | 2022-11-4 | https://stackoverflow.com/questions/74319258/split-list-of-dictionaries-in-separate-lists-based-primarily-on-list-size-but-se | I currently have a list of dictionaries that looks like that: total_list = [ {'email': '[email protected]', 'id': 1, 'country': 'UK'}, {'email': '[email protected]', 'id': 1, 'country': 'Germany'}, {'email': '[email protected]', 'id': 2, 'country': 'UK'} {'email': '[email protected]', 'id': 3, 'country': 'Italy'}, {'email': '[email protected]', 'id': 3, 'country': 'Netherland'}, {'email': '[email protected]', 'id': 4, 'country': 'France'}, ... ] I want to split it primarily based on size, so let's say that the new size list is 3 items per list, But I also want to make sure that all the same users will be in the same new sublist. So the result I am trying to create is: list_a = [ {'email': '[email protected]', 'id': 1, 'country': 'UK'}, {'email': '[email protected]', 'id': 2, 'country': 'UK'} {'email': '[email protected]', 'id': 1, 'country': 'Germany'} ] list_b = [ {'email': '[email protected]', 'id': 3, 'country': 'Italy'}, {'email': '[email protected]', 'id': 4, 'country': 'France'} {'email': '[email protected]', 'id': 3, 'country': 'Netherland'}, ... ] Obviously in the example that I provided the users were located really close to each other in the list, but in reality, they could be spread way more. I was considering sorting the list based on the email and then splitting them, but I am not sure what happens if the items that are supposed to be grouped together happen to be at the exact location that the main list will be divided. What I have tried so far is: def list_splitter(main_list, size): for i in range(0, len(main_list), size): yield main_list[i:i + size] # calculating the needed number of sublists max_per_batch = 3 number_of_sublists = ceil(len(total_list) / max_per_batch) # sort the data by email total_list.sort(key=lambda x: x['email']) sublists = list(list_splitter(main_list=total_list, size=max_per_batch)) The issue is that with this logic I cannot 100% ensure that if there are any items with the same email value they will end up in the same sublist. Because of the sorting, chances are that this will happen, but it is not certain. Basically, I need a method to make sure that items with the same email will always be in the same sublist, but the main condition of the split is the sublist size. | This solution starts of by only working with the list of all emails. The emails are then grouped based on their frequency and the limit on group size. Later the remaining data, i.e. id and country, are joined back on the email groups. The first function create_groups works on the list of emails. It counts the number of occurrences of each email and groups them. Each new group starts with the most frequent email. If there is room left in the group it looks for the most frequent that also fits in the group. If such an item exists, it is added to the group. This is repeated until the group is full; then, a new group is started. 
from operator import itemgetter from itertools import groupby, chain from collections import Counter def create_groups(items, group_size_limit): # Count the frequency of all items and create a list of items # sorted by descending frequency items_not_grouped = Counter(items).most_common() groups = [] while items_not_grouped: # Start a new group with the most frequent ungrouped item item, count = items_not_grouped.pop(0) group, group_size = [item], count while group_size < group_size_limit: # If there is room left in the group, look for a new group member for index, (candidate, candidate_count) \ in enumerate(items_not_grouped): if candidate_count <= group_size_limit - group_size: # If the candidate fits, add it to the group group.append(candidate) group_size += candidate_count # ... and remove it from the items not grouped items_not_grouped.pop(index) break else: # If the for loop did not break, no items fit in the group break groups.append(group) return groups This is the result of using that function on your example: users = [ {'email': '[email protected]', 'id': 1, 'country': 'UK',}, {'email': '[email protected]', 'id': 2, 'country': 'UK'}, {'email': '[email protected]', 'id': 1, 'country': 'Germany'}, {'email': '[email protected]', 'id': 3, 'country': 'Italy'}, {'email': '[email protected]', 'id': 4, 'country': 'France'}, {'email': '[email protected]', 'id': 3, 'country': 'Netherland'} ] emails = [user["email"] for user in users] email_groups = create_groups(emails, 3) # -> [ # ['[email protected]', '[email protected]'], # ['[email protected]', '[email protected]'] # ] Finally, when the groups have been created, the function join_data_on_groups groups the original dictionary of users. It takes the email-groups from before and the list of dictionaries as arguments: def join_data_on_groups(groups, item_to_data): item_to_data = {item: list(data) for item, data in item_to_data} groups = [(item_to_data[item] for item in group) for group in groups] groups = [list(chain(*group)) for group in groups] return groups email_getter = itemgetter("email") users_grouped_by_email = groupby(sorted(users, key=email_getter), email_getter) user_groups = join_data_on_groups(email_groups, users_grouped_by_email) print(user_groups) Result: [ [ {'email': '[email protected]', 'id': 1, 'country': 'UK'}, {'email': '[email protected]', 'id': 1, 'country': 'Germany'}, {'email': '[email protected]', 'id': 2, 'country': 'UK'} ], [ {'email': '[email protected]', 'id': 3, 'country': 'Italy'}, {'email': '[email protected]', 'id': 3, 'country': 'Netherland'}, {'email': '[email protected]', 'id': 4, 'country': 'France'} ] ] | 4 | 3 |
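A quick usage check of the create_groups function defined above, showing the edge case where one email occurs more often than group_size_limit: all of its rows still land in a single (oversized) group, because each group is seeded with the most frequent remaining item, which is exactly the "same user stays together" requirement from the question:

# uses create_groups exactly as defined in the answer above
emails = ["a@x"] * 5 + ["b@x"]
print(create_groups(emails, group_size_limit=3))
# -> [['a@x'], ['b@x']]  (the first group stands for all 5 'a@x' rows, beyond the limit of 3)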
74,344,614 | 2022-11-7 | https://stackoverflow.com/questions/74344614/is-there-a-way-in-python-to-extract-only-the-core-text-without-boxes-footer-et | I am trying to extract only the core text from a "rich" pdf document, meaning that it has a lot of tables, graphs, boxes, footers etc. in which I am not interested in. I tried with some common python packages like PyPDF2, pdfplumber or pdfreader.The problem is that apparently they extract all the text present in the pdf, including those parts listed above in which I am not interested. As an example: from PyPDF2 import PdfReader file = PdfReader(file) page = file.pages[10] text = page.extract_text() This code will get me the whole text from page 11, including footers, box, text from a table and the number of the page, while what I would like is only the core text. Unluckily the only solution I found up to now is to copy paste in another file the core text. Is there any method/package which can automatically recognize the main text from the other parts of the pdf and return me only that? Thank you for your help!!! | per D.L's comment, please add some reproducible code and, preferably, a pdf to work with. However, I think I can answer at least part of your question. jsvine's pdfplumber is an incredibly robust python pdf processing package. pdfplumber contains a bounding box functionality that lets you extract text from within (.within_bbox(...)) or from outside (.outside_bbox) the 'bounding box' -- or geographical area -- delineated on the Page object. Every character object extracted from the page contains location information such as y1 - Distance of top of character from bottom of page and Distance of left side of character from left side of page. If the majority of pages within the .pdf you are trying to extract text from contain footnotes, I would recommend only extracting text above the y1 value. Given that footnotes are typically well below the end of a page, except for academic papers using Chicago Style citations, you should still be able to set a standard .bbox for where you want to extract text (within a set .bbox that does not include footnotes or out of a set .bbox that does not include footnotes). To your question about tables, that poses a trickier question. Tables are by far the trickiest thing to detect and/or extract from. pdfplumber offers, to my knowledge, the most robust open source table detection/extraction capabilities out there. To extract the area outside a table, I would call the .find_tables(...) function on each Page object to return a .bbox of the table and extract around that. However -- this is not perfect. It is not always able to detect tables. Regarding your 3rd question, how to exclude boxes, are you referring to text boxes? Please provide further clarification! Finally -- to reiterate my first point -- pdfplumber is an incredibly robust package. That being said, extracting text from .pdf files is really tough. Good luck -- please provide more information and I will be happy to help as best I can. | 3 | 4 |
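To make the bbox approach above concrete, here is a rough pdfplumber sketch; the file name, page index, and the 0.9 cutoff for the footer band are arbitrary assumptions used to illustrate the calls, not values taken from the question:

import pdfplumber

with pdfplumber.open("document.pdf") as pdf:
    page = pdf.pages[10]

    # keep only text above the bottom 10% of the page (drops typical footers / page numbers)
    body = page.within_bbox((0, 0, page.width, page.height * 0.9))
    text = body.extract_text()

    # optionally also drop characters that fall inside any detected table
    table_bboxes = [t.bbox for t in page.find_tables()]

    def outside_tables(obj):
        x_mid = (obj["x0"] + obj["x1"]) / 2
        y_mid = (obj["top"] + obj["bottom"]) / 2
        return not any(x0 <= x_mid <= x1 and top <= y_mid <= bottom
                       for (x0, top, x1, bottom) in table_bboxes)

    core_text = page.filter(outside_tables).extract_text()
    print(core_text)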
74,362,585 | 2022-11-8 | https://stackoverflow.com/questions/74362585/bioreactor-simulation-for-ethanol-production-using-gekko | I am trying to simulate a DAE system that solves a fed-batch bioreactor problem for ethanol production using GEKKO. This is done so I can later optimize it more easily to maximize Ethanol production. It was previously solved in MATLAB and produced the results as shown in the following figures: , , , , My problem now is that I can't produce the same results with GEKKO, given all the same values for constants and variables. No solution can be found, but converges for a smaller time such as: m.time= np.linspace(0,1,11). Any idea on what is wrong with my code? The original system that needs to be solved is: from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO(remote=False) # Create time vector: t=[0, 0.1, 0.2,...,36.9,37], [hours] nt = 371 m.time = np.linspace(0,37,nt) # Define constants and parameters ################################# # Kinetic Parameters a1 = m.Const(value=0.05, name='a1') # Ratkowsky parameter [oC-1 h-0.5] aP = m.Const(value=4.50, name='aP') # Growth-associated parameter for EtOh production [-] AP1 = m.Const(value=6.0, name='AP1') # Activation energy parameter for EtOh production [oC] AP2 = m.Const(value=20.3, name='AP2') # Activation energy parameter for EtOh production [oC] b1 = m.Const(value=0.035, name='b1') # Parameter in the exponential expression of the maximum specific growth rate expression [oC-1] b2 = m.Const(value=0.15, name='b2') # Parameter in the exponential expression of the maximum specific growth rate expression [oC-1] b3 = m.Const(value=0.40, name='b3') # Parameter in the exponential expression of the specific death rate expression [oC-1] c1 = m.Const(value=0.38, name='c1') # Constant decoupling factor for EtOh [gP gX-1 h-1] c2 = m.Const(value=0.29, name='c2') # Constant decoupling factor for EtOh [gP gX-1 h-1] k1 = m.Const(value=3, name='k1') # Parameter in the maximum specific growth rate expression [oC] k2 = m.Const(value=55, name='k2') # Parameter in the maximum specific growth rate expression [oC] k3 = m.Const(value=60, name='k3') # Parameter in the growth-inhibitory EtOH concentration expression [oC] k4 = m.Const(value=50, name='k4') # Temperature at the inflection point of the specific death rate sigmoid curve [oC] Pmaxb = m.Const(value=90, name='Pmaxb') # Temperature-independent product inhibition constant [g L-1] PmaxT = m.Const(value=90, name='PmaxT') # Maximum value of product inhibition constant due to temperature [g L-1] Kdb = m.Const(value=0.025, name='Kdb') # Basal specific cellular biomass death rate [h-1] KdT = m.Const(value=30, name='KdT') # Maximum value of specific cellular biomass death rate due to temperature [h-1] KSX = m.Const(value=5, name='KSX') # Glucose saturation constant for the specific growth rate [g L-1] KOX = m.Const(value=0.0005, name='KOX') # Oxygen saturation constant for the specific growth rate [g L-1] qOmax = m.Const(value=0.05, name='qOmax') # Maximum specific oxygen consumption rate [h-1] # Metabolic Parameters YPS = m.Const(value=0.51, name='YPS') # Theoretical yield of EtOH on glucose [gP gS-1] YXO = m.Const(value=0.97, name='YXO') # Theoretical yield of biomass on oxygen [gX gO-1] YXS = m.Const(value=0.53, name='YXS') # Theoretical yield of biomass on glucose [gX gS-1] # Physicochemical and thermodynamic parameters Chbr = m.Const(value=4.18, name='Chbr') # Heat capacity of the mass of reaction [J g-1 oC-1] Chc = m.Const(value=4.18, name='Chc') # Heat 
capacity of cooling agent [J g-1 oC-1] deltaH = m.Const(value=518.e3, name='deltaH') # Heat of reaction of fermentation [J mol-1 O2] Tref = m.Const(value=25, name='Tref') # Reference temperature [oC] KH = m.Const(value=200, name='KH') # Henry's constant for oxygen in the fermentation broth [atm L mol-1] z = m.Const(value=0.792, name='z') # Oxygen compressibility factor [-] R = m.Const(value=0.082, name='R') # Ideal gas constant [L atm mol-1 oC-1] kla0 = m.Const(value=100, name='kla0') # Temperature-independent volumetric oxygen transfer coefficient [-h] KT = m.Const(value=36.e4, name='KT') # Heat transfer coefficient [J h-1 m-2 oC-1] rho = m.Const(value=1080, name='rho') # Density of the fermentation broth [g L-1] rhoc = m.Const(value=1000, name='rhoc') # Density of the cooling agent [g L-1] MO = m.Const(value=15.999, name='MO') # Molecular weight of oxygen [g mol-1] # Bioreactor design data AT = m.Const(value=1, name='AT') # Bioreactor heat transfer area [m2] V = m.Const(value=2000, name='V') # Bioreactor working volume [L] Vcj = m.Const(value=250, name='Vcj') # Cooling jacket volume [L] Ogasin = m.Const(value=0.305, name='Ogasin') # Oxygen concentration in airflow inlet [g L-1] # Define variables ################## mi = m.Var(name='mi') # I want Qin to be a step function: Qin = Qin0 + 15H(t-5) + 5H(t-10) - 6H(t-20) - 14H(t-35), where H(t-t0) heaviside function Qin_step = np.zeros(nt) Qin_step[50:101] = 15 Qin_step[101:201] = 20 Qin_step[201:350] = 14 Qin = m.Param(value=Qin_step, name='Qin') # Fixed variables, they are constant throughout the time horizon Xtin = m.FV(value=0, name='Xtin') Xvin = m.FV(value=0, name='Xvin') Qe = m.FV(value=0, name='Qe') Sin = m.FV(value=400, lb=0, ub=1500) Pin = m.FV(value=0, name='Pin') Fc = m.FV(value=40, name='Fc') Fair = m.FV(value=60000, name='Fair') Tin = m.FV(value=30, name='Tin') Tcin = m.FV(value=15, name='Tcin') Vl = m.Var(value=1000, lb=-0.0, ub=0.75*V, name='Vl') Xt = m.Var(value=0.1, lb=-0.0, ub=10, name='Xt') Xv = m.Var(value=0.1, lb=-0.0, ub=10, name='Xv') S = m.Var(value=400, lb=+0.0, ub=10000, name='S') P = m.Var(value=0, name='P') Ol = m.Var(value=0.0065, name= 'Ol') Og = m.Var(value=0.305, name='Og') T = m.Var(value=30, lb=20, ub=40, name='T') Tc = m.Var(value=20, lb=0, ub=30, name='Tc') Sf_cum = m.Var(value=0, name='Sf_cum') t = m.Var(value=0, name='Time') # Define algebraic equations ############################ # Specific growth rate of cell mass mimax = m.Intermediate(((a1*(T - k1))*(1 - m.exp(b1 * (T - k2)) )) ** 2) Pmax = m.Intermediate(Pmaxb + PmaxT/(1- m.exp(-b2*(T-k3)))) m.Equation(mi == mimax * (S / (KSX + S)) * (Ol / (KOX + Ol)) * (1 - P / Pmax) * (1 / (1 + m.exp(-(100 - S))))) mi = m.if3(condition=mi, x1=0, x2=mi) # Specific production rate of EtOH bP = m.if3(condition=S, x1=0, x2=c1*m.exp(-AP1/T) - c2*m.exp(-AP2/T)) qP = m.Intermediate(aP*mi + bP) # Specific consumption rate of glucose qS = m.Intermediate(mi/YXS + qP/YPS) # Specific consumption rate of oxygen qO = m.Intermediate(qOmax*Ol/YXO/(KOX+Ol)) # Specific biological deactivation rate of cell mass Kd = m.Intermediate(Kdb + KdT/(1+m.exp(-b3*(T-k4)))) # Saturation concentration of oxygen in culture media Ostar = m.Intermediate(z*Og*R*T/KH) # Oxygen mass transfer coefficient kla = m.Intermediate(kla0*1.2**(T-20)) # Bioreactor phases equation Vg = m.Intermediate(V - Vl) # Define differential equations ############################### m.Equation(Vl.dt() == Qin - Qe) m.Equation(Xt.dt() == Qin/Vl*(Xtin-Xt) + mi*Xv) m.Equation(Xv.dt() == Qin/Vl*(Xvin-Xv) + Xv*(mi-Kd)) 
m.Equation(S.dt() == Qin/Vl*(Sin-S) - qS*Xv) m.Equation(P.dt() == Qin/Vl*(Pin - P) + qP*Xv) m.Equation(Ol.dt() == Qin/Vl*(Ostar-Ol) + kla*(Ostar-Ol) - qO*Xv) m.Equation(Og.dt() == Fair/Vg*(Ogasin-Og) - Vl*kla/Vg*(Ostar-Ol) + Og*(Qin-Qe)/Vg) m.Equation(T.dt() == Qin/Vl*(Tin-T) - Tref/Vl*(Qin-Qe) + qO*Xv*deltaH/MO/rho/Chbr - KT*AT*(T-Tc)/Vl/rho/Chbr) m.Equation(Tc.dt() == Fc/Vcj*(Tcin - Tc) + KT*AT*(T-Tc)/Vcj/rhoc/Chc) m.Equation(Sf_cum.dt() == Qin*Sin) m.Equation(t.dt() == 1) # solve ODE m.options.IMODE = 6 # m.open_folder() m.solve(display=True) # Plot results plt.figure(1) plt.title('Total & Viable Cellular Biomass') plt.plot(m.time, Xv.value, label='Xv') plt.plot(m.time, Xt.value, label='Xt') plt.legend() plt.ylabel('Biomass concentration [g/L]') plt.xlabel('Time [h]') plt.grid() plt.minorticks_on() plt.ylim(0) plt.xlim(m.time[0],m.time[-1]) plt.tight_layout() plt.figure(2) plt.title('Substrate (S) & Product (P) concentration') plt.plot(m.time, S.value, label='S') plt.plot(m.time, P.value, label='P') plt.legend() plt.ylabel('Concentration [g/L]') plt.xlabel('Time [h]') plt.grid() plt.minorticks_on() plt.ylim(0) plt.xlim(m.time[0],m.time[-1]) plt.tight_layout() plt.figure(3) plt.title('Bioreactor & Cooling jacket temperature') plt.plot(m.time, T.value, label='T') plt.plot(m.time, Tc.value, label='Tc') plt.legend() plt.ylabel('Temperature [oC]') plt.xlabel('Time [h]') plt.grid() plt.minorticks_on() plt.ylim(0) plt.xlim(m.time[0],m.time[-1]) plt.tight_layout() fig4, ax = plt.subplots() ax.title.set_text('Dissolved & Gaseous Oxygen concentration') lns1 = ax.plot(m.time, Ol.value, label='[Oliq]', color='c') ax.set_xlabel('Time [h]') ax.set_ylabel('Oliq [g/L]', color='c') ax.minorticks_on() ax2 = ax.twinx() lns2 = ax2.plot(m.time, Og.value, label='[Ogas]', color='y') ax2.set_ylabel('Ogas [g/L]', color='y') ax2.minorticks_on() lns = lns1 + lns2 labs = [l.get_label() for l in lns] ax.legend(lns, labs, loc='best') ax.grid() fig4.tight_layout() plt.figure(4) plt.figure(5) plt.title('Feeding Policy') plt.plot(m.time, Qin.value, label='Qin') plt.legend() plt.ylabel('Qin [L/h]') plt.xlabel('Time [h]') plt.grid() plt.minorticks_on() plt.ylim(0) plt.xlim(m.time[0],m.time[-1]) plt.tight_layout() plt.show() | Nice application! Here are some suggestions to improve the convergence. Remove the lower and upper bounds when simulating. This was causing the "no solution found" error. Vl = m.Var(value=1000, name='Vl') # lb=-0.0, ub=0.75*V Xt = m.Var(value=0.1, name='Xt') # lb=-0.0, ub=10 Xv = m.Var(value=0.1, name='Xv') # lb=-0.0, ub=10 S = m.Var(value=400, name='S') # lb=+0.0, ub=10000 P = m.Var(value=0, name='P') Ol = m.Var(value=0.0065, name= 'Ol') Og = m.Var(value=0.305, name='Og') T = m.Var(value=30, name='T') # lb=20, ub=40 Tc = m.Var(value=20, name='Tc') # lb=0, ub=30 Re-arrange the equations to avoid divide-by-zero (when possible). For most of the equations, the volume term can be moved to the left-hand side of the equation to avoid a variable in the denominator. 
m.Equation(Vl.dt() == Qin - Qe) m.Equation(Vl*Xt.dt() == Qin*(Xtin-Xt) + mi*Vl*Xv) m.Equation(Vl*Xv.dt() == Qin*(Xvin-Xv) + Xv*Vl*(mi-Kd)) m.Equation(Vl*S.dt() == Qin*(Sin-S) - qS*Vl*Xv) m.Equation(Vl*P.dt() == Qin*(Pin - P) + qP*Vl*Xv) m.Equation(Vl*Ol.dt() == Qin*(Ostar-Ol) + Vl*kla*(Ostar-Ol) - qO*Vl*Xv) m.Equation(Vg*Og.dt() == Fair*(Ogasin-Og) - Vl*kla*(Ostar-Ol) + Og*(Qin-Qe)) m.Equation(Vl*T.dt() == Qin*(Tin-T) - Tref*(Qin-Qe) \ + Vl*qO*Xv*deltaH/MO/rho/Chbr - KT*AT*(T-Tc)/rho/Chbr) m.Equation(Vcj*Tc.dt() == Fc*(Tcin - Tc) + KT*AT*(T-Tc)/rhoc/Chc) m.Equation(Sf_cum.dt() == Qin*Sin) Use APOPT solver for improved speed and increase NODES=3 for improved accuracy. IMODE=7 is a sequential simulation to improve the solution speed when there are zero Degrees of Freedom (simulation, #equations=#variables). m.options.SOLVER= 1 m.options.IMODE = 7 m.options.NODES = 3 Here is an easier way to define the step inputs that is based on time. # I want Qin to be a step function: # Qin = Qin0 + 15H(t-5) + 5H(t-10) - 6H(t-20) - 14H(t-35) # where H(t-t0) heaviside function Qin_step = np.zeros(nt) Qin_step[np.where(tm>=5)] += 15 Qin_step[np.where(tm>=10)] += 5 Qin_step[np.where(tm>=20)] -= 6 Qin_step[np.where(tm>=35)] -= 14 Qin = m.Param(value=Qin_step, name='Qin') Avoid if3() (when possible) and replace with lower bounds in the variable definition: #mi = m.if3(condition=mi, x1=0, x2=mi) mi = m.Var(name='mi',lb=0) m.Equation(mi == mimax * (S / (KSX+S)) * (Ol/(KOX + Ol)) \ * (1 - P/Pmax) * (1 / (1+m.exp(-(100-S))))) or a slack variable slk that avoids a binary switching variable introduced with m.if3(): slk = m.Var(0,lb=0) mi_u = m.Var(name='mi_u') mi = m.Var(name='mi',lb=0) m.Equation(mi = mi_u + slk) m.Minimize(slk) (Optional) Insert a few additional small steps at the start if there are problems converging. This is useful later when you start optimizing. # Create time vector: t=[0, 0.1, 0.2,...,36.9,37], [hours] tm = np.linspace(0,37,371) # Insert smaller time steps at the beginning tm = np.insert(tm,1,[0.001,0.005,0.01,0.05]) (Later) When you need to optimize, it is often helpful to converge to a simulation as an initial guess for the optimization problem. m.options.IMODE=7 m.solve() m.options.IMODE=6 m.solve() Here is the complete script. 
from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO(remote=False) # Create time vector: t=[0, 0.1, 0.2,...,36.9,37], [hours] tm = np.linspace(0,37,371) # Insert smaller time steps at the beginning tm = np.insert(tm,1,[0.001,0.005,0.01,0.05]) m.time = tm nt = len(tm) # Define constants and parameters ################################# # Kinetic Parameters a1 = m.Const(value=0.05, name='a1') # Ratkowsky parameter [oC-1 h-0.5] aP = m.Const(value=4.50, name='aP') # Growth-associated parameter for EtOh production [-] AP1 = m.Const(value=6.0, name='AP1') # Activation energy parameter for EtOh production [oC] AP2 = m.Const(value=20.3, name='AP2') # Activation energy parameter for EtOh production [oC] b1 = m.Const(value=0.035, name='b1') # Parameter in the exponential expression of the maximum specific growth rate expression [oC-1] b2 = m.Const(value=0.15, name='b2') # Parameter in the exponential expression of the maximum specific growth rate expression [oC-1] b3 = m.Const(value=0.40, name='b3') # Parameter in the exponential expression of the specific death rate expression [oC-1] c1 = m.Const(value=0.38, name='c1') # Constant decoupling factor for EtOh [gP gX-1 h-1] c2 = m.Const(value=0.29, name='c2') # Constant decoupling factor for EtOh [gP gX-1 h-1] k1 = m.Const(value=3, name='k1') # Parameter in the maximum specific growth rate expression [oC] k2 = m.Const(value=55, name='k2') # Parameter in the maximum specific growth rate expression [oC] k3 = m.Const(value=60, name='k3') # Parameter in the growth-inhibitory EtOH concentration expression [oC] k4 = m.Const(value=50, name='k4') # Temperature at the inflection point of the specific death rate sigmoid curve [oC] Pmaxb = m.Const(value=90, name='Pmaxb') # Temperature-independent product inhibition constant [g L-1] PmaxT = m.Const(value=90, name='PmaxT') # Maximum value of product inhibition constant due to temperature [g L-1] Kdb = m.Const(value=0.025, name='Kdb') # Basal specific cellular biomass death rate [h-1] KdT = m.Const(value=30, name='KdT') # Maximum value of specific cellular biomass death rate due to temperature [h-1] KSX = m.Const(value=5, name='KSX') # Glucose saturation constant for the specific growth rate [g L-1] KOX = m.Const(value=0.0005, name='KOX') # Oxygen saturation constant for the specific growth rate [g L-1] qOmax = m.Const(value=0.05, name='qOmax') # Maximum specific oxygen consumption rate [h-1] # Metabolic Parameters YPS = m.Const(value=0.51, name='YPS') # Theoretical yield of EtOH on glucose [gP gS-1] YXO = m.Const(value=0.97, name='YXO') # Theoretical yield of biomass on oxygen [gX gO-1] YXS = m.Const(value=0.53, name='YXS') # Theoretical yield of biomass on glucose [gX gS-1] # Physicochemical and thermodynamic parameters Chbr = m.Const(value=4.18, name='Chbr') # Heat capacity of the mass of reaction [J g-1 oC-1] Chc = m.Const(value=4.18, name='Chc') # Heat capacity of cooling agent [J g-1 oC-1] deltaH = m.Const(value=518000, name='deltaH') # Heat of reaction of fermentation [J mol-1 O2] Tref = m.Const(value=20, name='Tref') # Reference temperature [oC] KH = m.Const(value=200, name='KH') # Henry's constant for oxygen in the fermentation broth [atm L mol-1] z = m.Const(value=0.792, name='z') # Oxygen compressibility factor [-] R = m.Const(value=0.082, name='R') # Ideal gas constant [L atm mol-1 oC-1] kla0 = m.Const(value=100, name='kla0') # Temperature-independent volumetric oxygen transfer coefficient [-h] KT = m.Const(value=360000, name='KT') # Heat transfer coefficient [J h-1 m-2 
oC-1] rho = m.Const(value=1080, name='rho') # Density of the fermentation broth [g L-1] rhoc = m.Const(value=1000, name='rhoc') # Density of the cooling agent [g L-1] MO = m.Const(value=32.0, name='MO') # Molecular weight of oxygen [g mol-1] # Bioreactor design data AT = m.Const(value=1, name='AT') # Bioreactor heat transfer area [m2] V = m.Const(value=1800, name='V') # Bioreactor working volume [L] Vcj = m.Const(value=50, name='Vcj') # Cooling jacket volume [L] Ogasin = m.Const(value=0.305, name='Ogasin') # Oxygen concentration in airflow inlet [g L-1] # Define variables ################## mi = m.Var(name='mi',lb=0) # I want Qin to be a step function: # Qin = Qin0 + 15H(t-5) + 5H(t-10) - 6H(t-20) - 14H(t-35) # where H(t-t0) heaviside function Qin_step = np.zeros(nt) Qin_step[np.where(tm>=5)] += 15 Qin_step[np.where(tm>=10)] += 5 Qin_step[np.where(tm>=20)] -= 6 Qin_step[np.where(tm>=35)] -= 14 Qin = m.Param(value=Qin_step, name='Qin') # Fixed variables, they are constant throughout the time horizon Xtin = m.FV(value=0, name='Xtin') Xvin = m.FV(value=0, name='Xvin') Qe = m.FV(value=0, name='Qe') Sin = m.FV(value=400, lb=0, ub=1500) Pin = m.FV(value=0, name='Pin') Fc = m.FV(value=40, name='Fc') Fair = m.FV(value=60000, name='Fair') Tin = m.FV(value=30, name='Tin') Tcin = m.FV(value=15, name='Tcin') Vl = m.Var(value=1000, name='Vl') # lb=-0.0, ub=0.75*V Xt = m.Var(value=0.1, name='Xt') # lb=-0.0, ub=10 Xv = m.Var(value=0.1, name='Xv') # lb=-0.0, ub=10 S = m.Var(value=50, name='S') # lb=+0.0, ub=10000 P = m.Var(value=0, name='P') Ol = m.Var(value=0.0065, name= 'Ol') Og = m.Var(value=0.305, name='Og') T = m.Var(value=30, name='T') # lb=20, ub=40 Tc = m.Var(value=20, name='Tc') # lb=0, ub=30 Sf_cum = m.Var(value=0, name='Sf_cum') #t = m.Var(value=0, name='Time') # Define algebraic equations ############################ # Specific growth rate of cell mass mimax = m.Intermediate(((a1*(T-k1))*(1-m.exp(b1*(T-k2))))** 2) Pmax = m.Intermediate(Pmaxb + PmaxT/(1-m.exp(-b2*(T-k3)))) m.Equation(mi == mimax * (S / (KSX+S)) * (Ol/(KOX + Ol)) \ * (1 - P/Pmax) * (1 / (1+m.exp(-(100-S))))) #mi = m.if3(condition=mi, x1=0, x2=mi) # Specific production rate of EtOH #bP = m.if3(condition=S, x1=0, x2=c1*m.exp(-AP1/T) - c2*m.exp(-AP2/T)) bP = m.Intermediate(c1*m.exp(-AP1/T) - c2*m.exp(-AP2/T)) qP = m.Intermediate(aP*mi + bP) # Specific consumption rate of glucose qS = m.Intermediate(mi/YXS + qP/YPS) # Specific consumption rate of oxygen qO = m.Intermediate(qOmax*Ol/YXO/(KOX+Ol)) # Specific biological deactivation rate of cell mass Kd = m.Intermediate(Kdb + KdT/(1+m.exp(-b3*(T-k4)))) # Saturation concentration of oxygen in culture media Ostar = m.Intermediate(z*Og*R*T/KH) # Oxygen mass transfer coefficient kla = m.Intermediate(kla0*1.2**(T-20)) # Bioreactor phases equation Vg = m.Intermediate(V - Vl) # Define differential equations ############################### m.Equation(Vl.dt() == Qin - Qe) m.Equation(Vl*Xt.dt() == Qin*(Xtin-Xt) + mi*Vl*Xv) m.Equation(Vl*Xv.dt() == Qin*(Xvin-Xv) + Xv*Vl*(mi-Kd)) m.Equation(Vl*S.dt() == Qin*(Sin-S) - qS*Vl*Xv) m.Equation(Vl*P.dt() == Qin*(Pin - P) + qP*Vl*Xv) m.Equation(Vl*Ol.dt() == Qin*(Ostar-Ol) + Vl*kla*(Ostar-Ol) - qO*Vl*Xv) m.Equation(Vg*Og.dt() == Fair*(Ogasin-Og) - Vl*kla*(Ostar-Ol) + Og*(Qin-Qe)) m.Equation(Vl*T.dt() == Qin*(Tin-T) - Tref*(Qin-Qe) \ + Vl*qO*Xv*deltaH/MO/rho/Chbr - KT*AT*(T-Tc)/rho/Chbr) m.Equation(Vcj*Tc.dt() == Fc*(Tcin - Tc) + KT*AT*(T-Tc)/rhoc/Chc) m.Equation(Sf_cum.dt() == Qin*Sin) #m.Equation(t.dt() == 1) # solve ODE m.options.SOLVER= 1 
m.options.IMODE = 7 m.options.NODES = 3 # m.open_folder() m.solve(disp=False) # Plot results plt.figure(1) plt.title('Total & Viable Cellular Biomass') plt.plot(m.time, Xv.value, label='Xv') plt.plot(m.time, Xt.value, label='Xt') plt.legend() plt.ylabel('Biomass concentration [g/L]') plt.xlabel('Time [h]') plt.grid() plt.minorticks_on() plt.ylim(0) plt.xlim(m.time[0],m.time[-1]) plt.tight_layout() plt.figure(2) plt.title('Substrate (S) & Product (P) concentration') plt.subplot(2,1,1) plt.plot(m.time, S.value, label='S') plt.legend(); plt.grid() plt.ylabel('Conc [g/L]') plt.subplot(2,1,2) plt.plot(m.time, P.value, label='P') plt.legend(); plt.grid() plt.ylabel('Conc [g/L]') plt.xlabel('Time [h]') plt.minorticks_on() plt.ylim(0) plt.xlim(m.time[0],m.time[-1]) plt.tight_layout() plt.figure(3) plt.title('Bioreactor & Cooling jacket temperature') plt.plot(m.time, T.value, label='T') plt.plot(m.time, Tc.value, label='Tc') plt.legend() plt.ylabel('Temperature [oC]') plt.xlabel('Time [h]') plt.grid() plt.minorticks_on() plt.ylim(0) plt.xlim(m.time[0],m.time[-1]) plt.tight_layout() fig4, ax = plt.subplots() ax.title.set_text('Dissolved & Gaseous Oxygen concentration') lns1 = ax.plot(m.time, Ol.value, label='[Oliq]', color='c') ax.set_xlabel('Time [h]') ax.set_ylabel('Oliq [g/L]', color='c') ax.minorticks_on() ax2 = ax.twinx() lns2 = ax2.plot(m.time, Og.value, label='[Ogas]', color='y') ax2.set_ylabel('Ogas [g/L]', color='y') ax2.minorticks_on() lns = lns1 + lns2 labs = [l.get_label() for l in lns] ax.legend(lns, labs, loc='best') ax.grid() fig4.tight_layout() plt.figure(4) plt.figure(5) plt.title('Feeding Policy') plt.plot(m.time, Qin.value, label='Qin') plt.legend() plt.ylabel('Qin [L/h]') plt.xlabel('Time [h]') plt.grid() plt.minorticks_on() plt.ylim(0) plt.xlim(m.time[0],m.time[-1]) plt.tight_layout() plt.figure(6) plt.title('Check >=0 Constraints') plt.subplot(2,1,1) plt.plot(tm,bP.value,label='bP') plt.legend(); plt.grid() plt.subplot(2,1,2) plt.plot(tm,mi.value,label='mi') plt.legend(); plt.grid() plt.show() The plots replicate the Matlab plots with the updated parameters (thanks for the suggestion). Let us know if we can help with further questions. Python Version Here is an equivalent Python version. I created this to help with the equivalency testing, if it is still needed. 
import numpy as np from scipy.integrate import odeint from scipy.interpolate import interp1d import matplotlib.pyplot as plt #Simulate fed-batch operation # Specify the simulation time (hrs) tspan = np.linspace(0,37,371); t=tspan # Specify values of control variables [Qin0 Xtin Xvin Qe Sin Fc Fair Tin Tcin] u0 = [0.0,0.0,0.0,0.0,0.0,400,40,60000,30,15] # Specify initial conditions [Xt Xv S P Oliq Ogas T Tc Vl Gloss MP] x0 = [0.1,0.1,50,0,0.0065,0.305,30,20,1000,0.0,0.0] ux0 = tuple(u0 + x0) #Qin = 15*heaviside(t-5) + 5*heaviside(t-10) # - 6*heaviside(t-20) - 14*heaviside(t-35) Qin = np.zeros_like(tspan) Qin[np.where(tspan>=5)] += 15 Qin[np.where(tspan>=10)] += 5 Qin[np.where(tspan>=20)] -= 6 Qin[np.where(tspan>=35)] -= 14 QinInterp = interp1d(tspan,Qin,bounds_error=False) def ethanol(x,t,Qin0,Qin,Xtin,Xvin,Qe,Sin,Fc,Fair, Tin,Tcin,Xt0,Xv0,S0,P0,Oliq0,Ogas0,T0, Tc0,Vl0,Sf_cum0,Time0): ## #Initial Conditions ## Xt0 = u[10] # Initial total cellular biomass, [g L-1] ## Xv0 = u[11] # Initial viable cellular biomass, [g L-1] ## S0 = u[12] # Initial substrate/Glucose concentration, [g L-1] ## P0 = u[13] # Initial product/Ethanol concentration, [g L-1] ## Oliq0 = u[14] # Initial Dissolved oxygen concentration, [g L-1] ## Ogas0 = u[15] # Initial Gas phase oxygen (bubbles) in the fermentation broth, [g L-1] ## T0 = u[16] # Initial Temperature in the bioreactor, [oC] ## Tc0 = u[17] # Initial Temperature of the cooling agent in the jacket, [oC] ## Vl0 = u[18] # Initial Culture volume in the bioreactor, [L] ## Sf_cum0 = u[19] # Initial Cumulative substrate/glucose fed to the bioreactor, [g] ## Time0 = u[20] # Initial batch time, [h] ## ## #Control variables ## Qin0 = u[0] # Volumetric inflow rate, [l/h-1] ## Qin = u[1] # Volumetric inflow rate, [l/h-1] ## Xtin = u[2] # Total biomass concentration in the bioreactor feed, [g L-1] ## Xvin = u[3] # Viable biomass concentration in the bioreactor feed, [g L-1] ## Qe = u[4] # Volumetric outflow rate, [l/h-1] ## Sin = u[5] # Substrate/Glucose concentration in bioreactor feed, [g L-1] ## Fc = u[6] # Cooling agent inlet volumetric flowrate, [L h-1] ## Fair = u[7] # Airflow inlet volumetric flowrate, [L h-1] ## Tin = u[8] # Temperature of bioreactor feed, [oC] ## Tcin = u[9] # Temperature of cooling agent inlet, [oC] # 1D Interpolation for Qin Qin = QinInterp(t) #Definition of model parameters #Kinetic parameters a1 = 0.05 # Ratkowsky parameter [oC-1 h-0.5] aP = 4.50 # Growth-associated parameter for ethanol production, [-] AP1 = 6.0 # Activation energy parameter for ethanol production, [oC] AP2 = 20.3 # Activation energy parameter for ethanol production, [oC] b1 = 0.035 # Parameter in the exponential expression of the maximum specific growth rate np.expression, [oC-1] b2 = 0.15 # Parameter in the exponential expression of the growth inhibitory ethanol concentration np.expression, [oC-1] b3 = 0.40 # Parameter in the exponential np.expression of the specific death rate expression,[oC-1] c1 = 0.38 # Constant decoupling factor for ethanol production, [gP gX-1 h-1] c2 = 0.29 # Constant decoupling factor for ethanol production, [gP gX-1 h-1] k1 = 3.00 # Parameter in the maximum specific growth rate expression, [oC] k2 = 55.0 # Parameter in the maximum specific growth rate expression, [oC] k3 = 60.0 # Parameter in the growth-inhibitory ethanol concentration expression, [oC] k4 = 50.0 # Temperature at the inflection point of the specific death rate sigmoid curve, [oC] Pmaxb = 90 # Temperature-independent product inhibition constant, [g L-1] PmaxT = 90 # Maximum value of 
product inhibition constant due to temperature, [g L-1] Kdb = 0.025 # Basal specific cellular biomass death rate, [h-1] KdT = 30.00 # Maximum value of specific cellular biomass death rate due to temperature, [h-1] KSX = 5 # Glucose saturation constant for the specific growth rate, [g L-1] KOX = 0.0005 # Oxygen saturation constant for the specific growth rate, [g L-1] qOmax = 0.05 # Maximum specific oxygen consumption rate, [h-1] #Metabolic parameters YPS = 0.51 # Theoretical yield of ethanol on glucose, [gP gS-1] YXO = 0.97 # Theoretical yield of biomass on oxygen, [gX gO-1] YXS = 0.53 # Theoretical yield of biomass on glucose, [gX gS-1] #Physicochemical and thermodynamic parameters Chbr = 4.18 # Heat capacity of the mass of reaction, [J g-1 oC-1] Chc = 4.18 # Heat capacity of the cooling agent, [J g-1 oC-1] DeltaH = 518000 # Heat of reaction of fermentation, [J mol-1 O2] Tref = 20 # Reference temperature, [oC] KH = 200 # Henry's constant for oxygen in the fermentation broth, [atm L mol-1] z = 0.792 # Oxygen compressibility factor, [-] R = 0.082 # Ideas gas constant, [L atm mol-1 oC-1] kla0 = 100 # Temperature-independent volumetric oxygen transfer coefficient, [h-1] KT = 360000 # Heat transfer coefficient, [J h-1 m-2 ??C-1] rho = 1080 # Density of the fermentation broth, [g L-1] rhoc = 1000 # Density of the cooling agent, [g L-1] MO = 32.0 # Molecular weight of oxygen (O2), [g mol-1] #Bioreactor design data AT = 1.0 # Bioreactor heat transfer area, [m2] V = 1800 # Bioreactor working volume, [L] Vcj = 50 # Cooling jacket volume, [L] Ogasin = 0.305 # Oxygen concentration in airflow inlet, [g L-1] #Definition of model variables #State variables Xt = x[0] # Total cellular biomass, [g L-1] Xv = x[1] # Viable cellular biomass, [g L-1] S = x[2] # Substrate/Glucose concentration, [g L-1] P = x[3] # Product/Ethanol concentration, [g L-1] Oliq = x[4] # Dissolved oxygen concentration, [g L-1] Ogas = x[5] # Gas phase oxygen (bubbles) in the fermentation broth, [g L-1] T = x[6] # Temperature in the bioreactor, [oC] Tc = x[7] # Temperature of the cooling agent in the jacket, [oC] Vl = x[8] # Culture volume in the bioreactor, [L] Sf_cum = x[9] # Cumulative amount of substrate/glucose fed to the bioreactor, [g] Time = x[10] # Batch time, [h] # Definition of model equations # Kinetic rates # ----------------------------- # Specific growth rate, [h-1] mmax = ((a1*(T-k1))*(1-np.exp(b1*(T-k2))))**2 Pmax = Pmaxb + PmaxT/(1-np.exp(-b2*(T-k3))) m1 = mmax * S/(KSX + S) * Oliq/(KOX + Oliq) * (1 - P/Pmax) * 1/(1+np.exp(-(100-S)/1)) # Specific growth rate, [h-1] if m1 >= 0: m = m1 else: m=0.0 # Non-growth-associated ethanol specific production rate, [h-1] if S > 0: bP = c1 * np.exp(-AP1/T) - c2 * np.exp(-AP2/T) # Non-growth-associated ethanol specific production rate, [h-1] else: bP = 0.0 qP = aP*m + bP # Ethanol consumption specific rate qS = m/YXS + qP/YPS # Oxygen consumption specific rate qO = qOmax*Oliq/YXO/(KOX + Oliq) # Specific biological deactivation rate of cell mass Kd = Kdb + KdT/(1+np.exp(-b3*(T-k4))) # Saturation concentration of oxygen in culture media Osat = z*Ogas*R*T/KH # Oxygen mass transfer coefficient kla = kla0*1.2**(T-20) # Volume of the gas phase in the bioreactor Vg = V - Vl #Material balances #----------------- # Volume of liquid culture dVl = Qin - Qe # Total cell mass dXt = m*Xv + Qin/Vl*(Xtin-Xt) # Total mass of biologically active cells dXv = (m-Kd)*Xv + Qin/Vl*(Xvin-Xv) # Glucose concentration dS = Qin/Vl*(Sin-S) - qS*Xv # Ethanol concentration dP = Qin/Vl*(-P) + qP*Xv # Disolved 
oxygen dOliq = Qin/Vl*(Osat - Oliq) + kla*(Osat-Oliq) - qO*Xv # Oxygen gas phase dOgas = Fair/Vg*(Ogasin-Ogas) - Vl*kla/Vg*(Osat - Oliq) + Ogas*(Qin-Qe)/Vg # Energy balances #--------------- # Bioreactor temprature dT = Qin/Vl*(Tin-T) - Tref/Vl*(Qin-Qe) + qO*Xv*DeltaH/MO/rho/Chbr - KT*AT*(T-Tc)/Vl/rho/Chbr # Cooling agent temperature dTc = Fc/Vcj*(Tcin-Tc) + KT*AT*(T-Tc)/Vcj/rhoc/Chc # Yields & Productivity #--------------------- # Cumulative amount of glucose fed to the bioreactor dSf_cum = Sin*Qin dTime = 1 # Definition of state derivatives vector # State derivatives dxdt = [dXt,dXv,dS,dP,dOliq,dOgas,dT,dTc,dVl,dSf_cum,dTime] # [dxdt,mmax,Pmax,bP,m,Kd,Qin] return dxdt # test function print(ethanol(x0,0.0,*ux0)) # Simulate the bioreactor operation until the selected time tf x = odeint(ethanol,x0,tspan,args=ux0) #plots Results #Total and Viable Cellular Biomass plt.figure() plt.plot(tspan,x[:,0]) plt.plot(tspan,x[:,1]) plt.title('Total & Viable Cellular Biomass') plt.ylabel('Biomass concentration [g/L]') plt.xlabel('t [h]') plt.legend(['Xt','Xv']) plt.figure() plt.title('Substrate (S) & Product (P) concentration') plt.plot(tspan,x[:,2], label='S') plt.plot(tspan,x[:,3], label='P') plt.legend(); plt.grid() plt.ylabel('Conc [g/L]') plt.xlabel('Time [h]') plt.minorticks_on() plt.ylim(0) plt.xlim(t[0],t[-1]) plt.tight_layout() plt.figure() plt.title('Bioreactor & Cooling jacket temperature') plt.plot(tspan,x[:,6], label='T') plt.plot(tspan,x[:,7], label='Tc') plt.legend() plt.ylabel('Temperature [oC]') plt.xlabel('Time [h]') plt.grid() plt.minorticks_on() plt.ylim(0) plt.xlim(t[0],t[-1]) plt.tight_layout() fig4, ax = plt.subplots() ax.title.set_text('Dissolved & Gaseous Oxygen concentration') lns1 = ax.plot(t,x[:,4], label='[Oliq]', color='c') ax.set_xlabel('Time [h]') ax.set_ylabel('Oliq [g/L]', color='c') ax.minorticks_on() ax2 = ax.twinx() lns2 = ax2.plot(t,x[:,5], label='[Ogas]', color='y') ax2.set_ylabel('Ogas [g/L]', color='y') ax2.minorticks_on() lns = lns1 + lns2 labs = [l.get_label() for l in lns] ax.legend(lns, labs, loc='best') ax.grid() fig4.tight_layout() plt.figure() plt.title('Feeding Policy') plt.plot(tspan, Qin, label='Qin') plt.legend() plt.ylabel('Qin [L/h]') plt.xlabel('Time [h]') plt.grid() plt.minorticks_on() plt.ylim(0) plt.xlim(tspan[0],tspan[-1]) plt.tight_layout() plt.show() | 4 | 3 |
74,308,059 | 2022-11-3 | https://stackoverflow.com/questions/74308059/using-typeguard-decorator-typechecked-in-python-whilst-evading-circular-impor | Context To prevent circular imports in Python when using type-hints, one can use the following construct: # controllers.py from __future__ import annotations from typing import TYPE_CHECKING if TYPE_CHECKING: from models import Book class BookController: def __init__(self, book: "Book") -> None: self.book = book Where the if TYPE_CHECKING: is only executed during type checking, and not during execution of the code. Issue When one applies active function argument type verification, (based on the type hints of the arguments), typeguard throws the error: NameError: name 'Supported_experiment_settings' is not defined MWE I # models.py from controllers import BookController from typeguard import typechecked class Book: @typechecked def get_controller(self, some_bookcontroller:BookController): return some_bookcontroller some_book=Book() BookController("somestring") And: # controllers.py from __future__ import annotations from typing import TYPE_CHECKING from typeguard import typechecked #from models import Book if TYPE_CHECKING: from models import Book class BookController: @typechecked def __init__(self, book: Book) -> None: self.book = book Note the #from models import Book is commented out. Now if one runs: python models.py It throws the error: File "/home/name/Documents/eg/models.py", line 13, in BookController("somestring") ... NameError: name 'Book' is not defined. Did you mean: 'bool'? because the typechecking for def __init__(self, book: Book) -> None: does not know what the class Book is. MWE II Then if one disables @typechecked in controllers.py with: # controllers.py from __future__ import annotations from typing import TYPE_CHECKING from typeguard import typechecked if TYPE_CHECKING: from models import Book class BookController: #@typechecked def __init__(self, book: Book) -> None: self.book = book it works. (But no typechecking). MWE III Then if one re-enables typechecking, and includes the import of book, (with from models import Book) like: # controllers.py from __future__ import annotations from typing import TYPE_CHECKING from typeguard import typechecked from models import Book if TYPE_CHECKING: from models import Book class BookController: @typechecked def __init__(self, book: Book) -> None: self.book = book It throws the circular import error: Traceback (most recent call last): File "/home/name/Documents/eg/models.py", line 2, in <module> from controllers import BookController File "/home/name/Documents/eg/controllers.py", line 5, in <module> from models import Book File "/home/name/Documents/eg/models.py", line 2, in <module> from controllers import BookController ImportError: cannot import name 'BookController' from partially initialized module 'controllers' (most likely due to a circular import) (/home/name/Documents/eg/controllers.py) Question How can one evade this circular import whilst still allowing the @typechecked decorator to verify/access the Book import? Is there an equivalent TYPE_CHECKING boolean for typeguard? | Your problem is that by using the denamespacing import form (from x import y) the contents of an imported module can't be resolved lazily, so one side or the other will require a name before the other module has finished importing (and therefore before it has defined the name). 
The typical solution here is to use namespaced imports (import x) and qualify your uses (x.y) so that, assuming the names aren't needed to define your module, just when the functions within it are called, they can defer resolution to when they are needed and the circular aspect is not an issue. Having checked the source code, typechecked is lazy and defers resolving annotations until the function in question is called, so it should be easy enough to fix this, changing your code to (inline comments indicating significant changes): # models.py from __future__ import annotations # Ensure annotations evaluated lazily here as well, # so it works regardless of which module in the cycle # is imported first import controllers # Namespaced import from typeguard import typechecked class Book: @typechecked def get_controller(self, some_bookcontroller: controllers.BookController): # Namespace qualified annotation return some_bookcontroller And: # controllers.py from __future__ import annotations from typeguard import typechecked import models # Namespaced import class BookController: @typechecked def __init__(self, book: models.Book) -> None: # Namespace qualified annotation self.book = book The code from models.py that does: some_book=Book() BookController("somestring") should be moved from models to some other module that imports those names from the two modules with circular dependencies to make the code completely robust. This is because, being at top-level, depending on which of the cyclicly defined modules got imported first (the problem occurs with models.py here), it would be unable to resolve the name from controllers, even if the name is namespace-qualified (because it would be trying to use it before models finished being defined, and when the import of controllers pauses to wait on models to finish being defined, it hasn't defined the rest of its contents, and therefore the code that runs on import in models can't resolve the names from controllers). If it's in some other module importing from models and controllers, both of those imports will resolve by the time the subsequent code is run (assuming neither of them import this hypothetical module3 themselves), so it works either way (using namespaced or denamespaced imports). If you're curious how circular imports work in Python, there's a complete explanation on What happens when using mutual or circular (cyclic) imports in Python?, but the short version is that, the very first time a module is imported in an entire program, Python puts an empty module object in the sys.modules cache, then runs the code within the module to populate the cached module object. The second time a module is imported, including when it's imported by a module that imports the original module cyclically, Python just binds the cached module object and does nothing else, even if the cached module object isn't populated yet. This means that when an import in progress at the top of the stack (module B) tries to import a module that is already in progress below it on the stack (module A), module B gets the incompletely initialized module from the cache (because A was already in the process of being imported). If defining the contents of B relies on any component of A that was not defined before A tried to import B, it won't exist on the cached (and mostly empty) A module. 
But so long as all such reference to A in B are confined to functions that aren't called during definition time, or are used in annotations that from __future__ import annotations converts to string-based annotations and nothing tries to resolve them during the import process, this works fine. B finishes being defined without trying to use any elements from A, and when A's definition resumes, a complete B module exists for it to load from. The problem occurs when the module in a cycle imported second (B in this case) tries to use a component of the module imported first (A) that isn't defined until after A finishes importing B (usually pretty early on in A, so almost nothing in A is defined). from A import spam requires resolving A.spam immediately, so it's going to break. import A, and using A.spam at top level (as you did with controllers.BookController in your original code) also breaks things. Your use case was even nastier, because in fact, python models.py, there were secretly three modules involved: __main__ (which is the pseudoname given to the top level script, so the code is whatever is in models.py, but it is not the same thing as the models module for import/cache purposes), which imports... controllers, which in turn imports... models, which then cyclically imports controllers This makes things uglier, because despite thinking you imported models first, you actually imported controllers before models, and the code in models.py is executed twice, once for the script itself as __main__, and once for the import as models. There are two unrelated Book classes (which can cause type-checking issues), and that top-level code in models.py was executed twice (creating two instances of BookController, and one instance each of the two unrelated but identical definitions of Book). The solution is: Ensure both sides of the circular import use plain namespaced imports (import A/import B) and neither tries to access any component of the other at top-level (including at class definition level when the class is at top level), nor do they try to call any methods/functions/constructors from top-level that would indirectly access another component. Technically speaking, you could make a working use case without both of them being so careful, but it would be incredibly brittle; whichever module took an eager dependency on the other must be imported first, and in practice, you don't want to assume that one module is always imported first (you could force it in a package structure that includes a __init__.py, but that breaks implicit namespace packages, and it's still brittle from a maintainer's point of view, even if it's safe from an API consumer point of view), so it's best to do the cautious thing on both sides. Never involve a module that will be invoked as a script (with python modulename.py or python -mmodulename) in a circular import scenario. Ideally never write a module that will be used as both script and imported module in the same run of the program (either it's the main script and no one else imports it, or it's imported as a module with some other script serving as main). If a module is both run as the top-level script and imported elsewhere, weird stuff happens. | 5 | 6 |
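A minimal sketch of the third, non-cyclic driver module the answer above recommends; the filename main.py and the print call are illustrative assumptions, not part of the original question.

```python
# main.py -- hypothetical driver module that sits outside the import cycle
import controllers
import models

book = models.Book()
controller = controllers.BookController(book)  # typeguard resolves annotations lazily, at call time
print(controller.book)
```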
74,290,259 | 2022-11-2 | https://stackoverflow.com/questions/74290259/count-the-number-of-three-way-conversations-in-a-group-chat-dataset-using-pandas | I wanted to count the number of three way conversations that have occured in a dataset. A chat group_x can consist of multiple members. What is a three way conversation? 1st way - red_x sends a message in the group_x. 2nd way - green_x replies in the same group_x. 3rd way - red_x sends a reply in the same group_x. This can be called a three way conversation. The sequence has to be exactly red_#, green_#, red_#. What is touchpoint? Touchpoint 1 - red_x's first message. Touchpoint 2 - green_x's first message. Touchpoint 3 - red_x's second message. Code to easily generate a sample dataset I'm working with. import pandas as pd from pandas import Timestamp t1_df = pd.DataFrame({'from_red': [True, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, False, True], 'sent_time': [Timestamp('2021-05-01 06:26:00'), Timestamp('2021-05-04 10:35:00'), Timestamp('2021-05-07 12:16:00'), Timestamp('2021-05-07 12:16:00'), Timestamp('2021-05-09 13:39:00'), Timestamp('2021-05-11 10:02:00'), Timestamp('2021-05-12 13:10:00'), Timestamp('2021-05-12 13:10:00'), Timestamp('2021-05-13 09:46:00'), Timestamp('2021-05-13 22:30:00'), Timestamp('2021-05-14 14:14:00'), Timestamp('2021-05-14 17:08:00'), Timestamp('2021-06-01 09:22:00'), Timestamp('2021-06-01 21:26:00'), Timestamp('2021-06-03 20:19:00'), Timestamp('2021-06-03 20:19:00'), Timestamp('2021-06-09 07:24:00'), Timestamp('2021-05-01 06:44:00'), Timestamp('2021-05-01 08:01:00'), Timestamp('2021-05-01 08:09:00')], 'w_uid': ['w_000001', 'w_112681', 'w_002516', 'w_002514', 'w_004073', 'w_005349', 'w_006803', 'w_006804', 'w_008454', 'w_009373', 'w_010063', 'w_010957', 'w_066840', 'w_071471', 'w_081446', 'w_081445', 'w_106472', 'w_000002', 'w_111906', 'w_000003'], 'user_id': ['red_00001', 'green_0263', 'red_01071', 'red_01071', 'red_01552', 'red_01552', 'red_02282', 'red_02282', 'red_02600', 'red_02854', 'red_02854', 'red_02600', 'red_00001', 'red_09935', 'red_10592', 'red_10592', 'red_12292', 'red_00002', 'green_0001', 'red_00003'], 'group_id': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1], 'touchpoint': [1, 2, 1, 3, 1, 3, 1, 3, 1, 1, 3, 3, 3, 1, 1, 3, 1, 1, 2, 1]}, columns = ['from_red', 'sent_time', 'w_uid', 'user_id', 'group_id', 'touchpoint']) t1_df['sent_time'] = pd.to_datetime(t1_df['sent_time'], format = "%d-%m-%Y") t1_df The dataset looks like this: from_red sent_time w_uid user_id group_id touchpoint True 2021-05-01 06:26:00 w_000001 red_00001 0 1 False 2021-05-04 10:35:00 w_112681 green_0263 0 2 True 2021-05-07 12:16:00 w_002516 red_01071 0 1 True 2021-05-07 12:16:00 w_002514 red_01071 0 3 True 2021-05-09 13:39:00 w_004073 red_01552 0 1 True 2021-05-11 10:02:00 w_005349 red_01552 0 3 True 2021-05-12 13:10:00 w_006803 red_02282 0 1 True 2021-05-12 13:10:00 w_006804 red_02282 0 3 True 2021-05-13 09:46:00 w_008454 red_02600 0 1 True 2021-05-13 22:30:00 w_009373 red_02854 0 1 True 2021-05-14 14:14:00 w_010063 red_02854 0 3 True 2021-05-14 17:08:00 w_010957 red_02600 0 3 True 2021-06-01 09:22:00 w_066840 red_00001 0 3 True 2021-06-01 21:26:00 w_071471 red_09935 0 1 True 2021-06-03 20:19:00 w_081446 red_10592 0 1 True 2021-06-03 20:19:00 w_081445 red_10592 0 3 True 2021-06-09 07:24:00 w_106472 red_12292 0 1 True 2021-05-01 06:44:00 w_000002 red_00002 1 1 False 2021-05-01 08:01:00 w_111906 green_0001 1 2 True 2021-05-01 08:09:00 w_000003 red_00003 1 1 
Here is what I have tried, but the query is taking too long. Is there a faster way to achieve the same? test_df = pd.DataFrame() for i in range(len(t1_df['sent_time'])-1): if t1_df.query(f"group_id == {i}")['from_red'].nunique() == 2: y = t1_df.query(f"group_id == {i} & touchpoint == 2").loc[:, ['sent_time']].values[0][0] x = t1_df.query(f"group_id == {i} & sent_time > @y & (touchpoint == 3)").sort_values('sent_time') test_df = pd.concat([test_df, x]) test_df.merge(x, how = "outer") else: pass test_df | For me it's not clear how you define the "three way conversation". Within on group, if you have the input messages what option(s) do you consider as "three way conversation"? There are several options: Input : red_0, red_2, green_0, red_1, red_0, red_2, red_1 Option1: red_2, green_0, red_1 Option2: red_0, green_0, red_0 + : red_2, green_0, red_2 and many more. Your code example returns the second msg of a user when sent after green: OptionX: green_0, red_0 + : green_0, red_2 + : green_0, red_1 without keeping track if some red user sent a msg before green. Another question is, what happens if green is sending multiple times within one group. Input : red_0, red_2, green_0, green_0, red_1, red_0, green_1, red_1 Based on your description "The sequence has to be exactly red_#, green_#, red_#." I guess, Option1 is what you are looking for and maybe that it's even independent from the color: color0_#, color1_#, color0_#. Correct me if I'm wrong ;). Prepare the DataFrame To get the operation more generic, I would first prepare the DataFrame, e.g. extract the color of the user and get a integer represenation for the color # extract the user color and id t1_df[['color', 'id']] = t1_df.pop('user_id').str.split('_', expand=True) # get the dtypes right, also it is not needed here t1_df.id = t1_df.id.astype(int) t1_df.color = t1_df.color.astype('category') # get color as intager t1_df['color_as_int'] =pd.factorize(t1_df.color)[0] Detect the sequence color0_#, color1_#, color0_# # a three way conversation is where color_as_int is [...,a,b,a,...] # expressed as difference it's color_as_int.diff() is [...,c,-c,...] # get the difference with tracking the group, therefore first sort t1_df.sort_values(['group_id', 'sent_time'], inplace=True) d_color = t1_df.groupby(['group_id']).color_as_int.diff() m = (d_color != 0) & (d_color == -d_color.shift(-1)) # detect [...,c,-c,...] # count up for each three way conversation m[m] = m[m].cumsum() m = m.astype(int) # get the labels for the dataframe [...,a,b,a,...] t1_df['three_way_conversation'] = m + m.shift(1, fill_value=0) + m.shift(-1, fill_value=0) which returns and works for any color columns = ['sent_time', 'group_id', 'color', 'id', 'touchpoint'] print(t1_df.loc[t1_df['three_way_conversation']>0, columns]) sent_time group_id color id touchpoint 0 2021-05-01 06:26:00 0 red 1 1 1 2021-05-04 10:35:00 0 green 263 2 2 2021-05-07 12:16:00 0 red 1071 1 17 2021-05-01 06:44:00 1 red 2 1 18 2021-05-01 08:01:00 1 green 1 2 19 2021-05-01 08:09:00 1 red 3 1 Bonus with the DataFrame preparation you can easily count the msg per color or user within a group or get the first and last time of a msg from a color or user. cumcount is faster as count and pd.merg() afterwards. 
t1_df['color_msg_count'] = t1_df.groupby(['group_id', 'color']).cumcount() + 1 t1_df['user_msg_count'] = t1_df.groupby(['group_id', 'color', 'id']).cumcount() + 1 t1_df['user_sent_time_min'] = t1_df.sort_values('sent_time').groupby(['group_id', 'color', 'id']).sent_time.cummin() t1_df['user_sent_time_max'] = t1_df.sort_values('sent_time', ascending=False).groupby(['group_id', 'color', 'id']).sent_time.cummax() | 6 | 1 |
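A toy illustration of the diff trick used in the answer above, on a hand-made color sequence (0 = red, 1 = green); the data here is invented purely for demonstration.

```python
import pandas as pd

colors = pd.Series([0, 0, 1, 0, 0])          # red, red, green, red, red
d = colors.diff()
middle = (d != 0) & (d == -d.shift(-1))      # True only at the green message in red-green-red
print(middle.tolist())                       # [False, False, True, False, False]
# marking the neighbours as well labels the whole red-green-red conversation
label = middle | middle.shift(1, fill_value=False) | middle.shift(-1, fill_value=False)
print(label.tolist())                        # [False, True, True, True, False]
```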
74,305,444 | 2022-11-3 | https://stackoverflow.com/questions/74305444/error-while-trying-to-run-corr-in-python-with-pandas-module | While trying to run the corr() method in Python using the pandas module, I get the following warning: FutureWarning: The default value of numeric_only in DataFrame.corr is deprecated. In a future version, it will default to False. Select only valid columns or specify the value of numeric_only to silence this warning. print(df.corr()) Note (just for clarification): df is the name of the dataframe read from a CSV file. For example: import pandas as pd df = pd.read_csv('Data.csv') print(df.corr()) The problem lies only in the corr() method, which raises the aforementioned FutureWarning. I partially understand the warning; however, I would like to know: are there any alternative methods that do the same job as corr() to identify the relationship between each column in a data set? In other words, is there a way to replicate the function without using the corr() method? Sorry if my question is wrong or improper in any way; I'm open to feedback. Thanks in advance. | The problem lies only in the corr() function, and the function itself is not deprecated; only the default value of its numeric_only argument is. You can silence the warning by setting that argument explicitly, e.g. df.corr(numeric_only=True) or df.corr(numeric_only=False), according to your needs. You can learn more in its documentation. | 9 | 11 |
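A short sketch of the two fixes the answer describes; 'Data.csv' is just the placeholder file name from the question.

```python
import pandas as pd

df = pd.read_csv('Data.csv')
print(df.corr(numeric_only=True))                 # silence the FutureWarning explicitly
print(df.select_dtypes(include='number').corr())  # or pre-select the numeric columns yourself
```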
74,360,037 | 2022-11-8 | https://stackoverflow.com/questions/74360037/pytest-doctest-modules-executes-scripts | I have this MRE: . βββ tests βββ notest.py The notest.py just do a sys.exit(1): When I run pytest --doctest-modules I get this error: ERROR collecting tests/notest.py tests/notest.py:4: in <module> sys.exit(1) E SystemExit: 1 So the --doctest-modules would try to execute my script which is not a test. Is that normal behaviour and how to prevent that? | Is that normal behaviour? Yes. Passing --doctest-modules will activate a special doctest collector that is not restricted to globs specified by python_files (test_*.py and *_test.py by default). Instead, it will find and collect any python module that is not __init__.py or __main__.py. Afterwards, doctest will import them to collect docstrings and execute doctests from every module. how to prevent that? If you have doctests in notest you want to run, you have to prevent code execution on notest module import. Place the code that causes premature exit into if __name__ == '__main__' block: # notest.py import sys if __name__ == '__main__': sys.exit(1) Now the regular import (e.g. python -c 'import tests.notest') will exit with 0 as the main block is not executed, while running the module (e.g. python tests/notest.py or python -m tests.notest) will still exit with 1. If you have no tests in the notest module or you can't edit its code, you can instruct pytest to ignore the module completely: $ pytest --doctest-modules --ignore=tests/notest.py or persist the option in pyproject.toml or pytest.ini to avoid entering the option every time. Example with pyproject.toml: [tool.pytest.ini_options] addopts = "--ignore=tests/notest.py" Alternatively, this can also be done in a conftest module. Example: # tests/conftest.py collect_ignore = ["notest.py"] Beware that collect_ignore receives files relative to the current conftest module. Therefore, if you use a conftest in the project root directory (on the same level as tests directory), you would have to specify the correct relative path: # conftest.py collect_ignore = ["tests/notest.py"] | 3 | 5 |
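A small sketch of the layout the answer suggests, assuming you do want doctests collected from notest.py while keeping the exit call out of import time; the double function is a made-up example.

```python
# tests/notest.py
import sys

def double(x):
    """
    >>> double(2)
    4
    """
    return 2 * x

if __name__ == '__main__':
    sys.exit(1)   # runs only with `python tests/notest.py`, not under --doctest-modules
```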
74,359,505 | 2022-11-8 | https://stackoverflow.com/questions/74359505/python-3-11-debugger-not-working-properly-anymore | So I have just installed python version 3.11 and changed my python interpreter in pycharm and the code runs properly now after I reinstalled the packages to my new venv. but when i debug my code I keep getting a long list of warnings and I have no idea how to fix it: ------------------------------------------------------------------------------- pydev debugger: CRITICAL WARNING: This version of python seems to be incorrectly compiled (internal generated filenames are not absolute) pydev debugger: The debugger may still function, but it will work slower and may miss breakpoints. pydev debugger: Related bug: http://bugs.python.org/issue1666807 ------------------------------------------------------------------------------- Connected to pydev debugger (build 221.5080.212) pydev debugger: Unable to find real location for: <frozen codecs> pydev debugger: Unable to find real location for: <frozen importlib._bootstrap> pydev debugger: Unable to find real location for: <frozen importlib._bootstrap_external> pydev debugger: Unable to find real location for: <frozen zipimport> pydev debugger: Unable to find real location for: <frozen ntpath> pydev debugger: Unable to find real location for: <frozen genericpath> pydev debugger: Unable to find real location for: <frozen os> pydev debugger: Unable to find real location for: <frozen _collections_abc> pydev debugger: Unable to find real location for: <string> pydev debugger: Unable to find real location for: <frozen abc> pydev debugger: Unable to find real location for: <__array_function__ internals> pydev debugger: Unable to find real location for: <frozen io> I installed it for all users to C:\Program Files if that helps | Should be fixed in PyCharm 2022.3 (ticket https://youtrack.jetbrains.com/issue/PY-56939/CRITICAL-WARNING-error-debugging-Python-311-code). Early Access Preview version is already available https://www.jetbrains.com/pycharm/nextversion/ | 13 | 13 |
74,366,289 | 2022-11-8 | https://stackoverflow.com/questions/74366289/how-to-add-drop-down-menu-to-swagger-ui-autodocs-based-on-basemodel-using-fastap | I have this following class: class Quiz(BaseModel): question: str subject: str choice: str = Query(choices=('eu', 'us', 'cn', 'ru')) I can render the form bases on this class like this @api.post("/postdata") def post_data(form_data: Quiz = Depends()): return form_data How can I display a drop down list for choice field ? | Option 1 Use literal values. Literal type is a new feature of the Python standard library as of Python 3.8 (prior to Python 3.8, it requires the typing-extensions package) and is supported by Pydantic. Example: from fastapi import FastAPI, Depends from pydantic import BaseModel from typing import Literal app = FastAPI() class Quiz(BaseModel): question: str subject: str choice: Literal['eu', 'us', 'cn', 'ru'] = 'us' @app.post('/submit') def post_data(data: Quiz = Depends()): return data Option 2 Use Enums (also, see Python's enum module, as well as FastAPI's documentation on Predefined values). By having your Enum sub-class inheriting from str, the API docs will be able to know that the values must be of type string and will be able to render correctly. Example: from fastapi import FastAPI, Depends from pydantic import BaseModel from enum import Enum app = FastAPI() class Country(str, Enum): eu = 'eu' us = 'us' cn = 'cn' ru = 'ru' class Quiz(BaseModel): question: str subject: str choice: Country = Country.us @app.post('/submit') def post_data(data: Quiz = Depends()): return data | 5 | 8 |
74,312,668 | 2022-11-4 | https://stackoverflow.com/questions/74312668/does-column-slice-of-a-pandas-dataframe-with-columns-of-different-data-types-cre | I have some dataframes as follows: df = pd.DataFrame([[1,2.0],[3,4.0]], index = ['row1','row2'], columns = ['a','b']) df2 = df.iloc[:, :] df3 = df.iloc[:1, :] df4 = df.iloc[:, :1] Column a is int while column b is float. Question: are df2, df3, df4 view or copy test 1: print(df._is_view, df._is_copy) print(df2._is_view, df2._is_copy) print(df3._is_view, df3._is_copy) print(df4._is_view, df4._is_copy) False None False None False <weakref at 0x7fed1113de90; to 'DataFrame' at 0x7fed11aa80a0> True <weakref at 0x7fed114d65c0; to 'DataFrame' at 0x7fed11aa9ab0> From this, it says df2, df3 are not a view. But df4 is. Why? test 2: df2.loc['row1', 'b'] = 100.0 print(df1) df3.loc['row1', 'a'] = 1000.0 print(df1) df4.loc['row1', 'a'] = 10000.0 print(df1) a b row1 10 2.0 row2 3 4.0 a b row1 100 2.0 row2 3 4.0 a b row1 100 2.0 row2 3 4.0 /tmp/ipykernel_2006744/1832530048.py:5: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df4.loc['row1', 'a'] = 1000 From this, it can be seen that df's value is updated when df2 or df3 is updated. So df2 and df3 should be a view. Updating df4 does not propagate to df, so df4 seems to be a copy. How come the results are contradicting to _is_view Question 2: The SettingWithCopyWarning when setting df4 says a copy of a slice. What is this refering to? Is "a slice" refering to df4? Then what is the "a copy of a slice" provided I am using .loc? | You are setting values on a newly created sliced data frame. Don't do it. That's a kind of chained assignment, warned by the document. In your code, the df2 and df3 are views and df4 is a copy. It cannot be determined accurately from the undocumented API _is_view and _is_copy. And 'a copy of a slice' in the warning means the result of df[:, :1] as a copy, where 'a slice' means the symbolic source code df[:, :1] - a Python slicing syntax. In the current Pandas implementation, it cannot be easily defined whether a slice of a data frame is a view or a copy of the original frame, due to the following reasons. Cell values of a data frame can be stored into multiple NumPy arrays. (See The BlockManager by Uwe for a detail.) Implementation on tracking references created by slicing is incomplete. (See NDFrame._slice() for example. It does not check if an actual copy was done such as via Block.take_nd().) So the document says vaguely '... may depend on the context'. The _is_view and _is_copy do not give accurate information. And the internal checking for a chained assignment is not always done. For example, you can see this incompleteness in the following. print('on a heterogenious one') df = pd.DataFrame({'a': [1, 2], 'b': [4, 5], 'c': ['a', 'b']}) df.iloc[:, :1].loc[0, 'a'] = 10 print('on a homogenious one') df = pd.DataFrame({'a': [1, 2], 'b': [4, 5]}) df.iloc[:, :1].loc[0, 'a'] = 10 This outputs the following. on a heterogenious one on a homogenious one test_iloc.py:10: SettingWithCopyWarning: ... | 4 | 1 |
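A minimal sketch of the assignment pattern the warning is nudging you toward: index the original frame once with .loc instead of slicing first.

```python
import pandas as pd

df = pd.DataFrame([[1, 2.0], [3, 4.0]], index=['row1', 'row2'], columns=['a', 'b'])
df.loc['row1', 'a'] = 10000   # single indexing step on the original frame
print(df)                     # the update is always visible, no SettingWithCopyWarning
```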
74,368,353 | 2022-11-8 | https://stackoverflow.com/questions/74368353/how-to-handle-mypy-when-many-possible-types-but-expecting-a-specific-type | Say I have a generic function that can return a number of different types depending on what properties I select: def json_parser(json_data: Dict[str, Any], property_tree: List[str] ) -> Union[Dict[str, Any], List[str], str, None]: .... I then call this generic function with specific properties that I know will return a string. def get_language(self, metadata_json: Dict[str, Any]) -> str: return json_parser(metadata_json, ["volumeInfo", "language"]) I get the following error when running Mypy. error: Incompatible return value type (got "Union[Dict[str, Any], List[str], str, None]", expected "List[str]") [return-value] From my understanding, this is because Mypy doesn't know to expect a string. What is the best way of handling this? | You've got two options, depending on how careful you want to be. First, typing.cast is a function that takes a type and a value and... magically makes the value have that type. At runtime it's defined as def cast(ty, value): return value but type-checkers are instructed to treat it as some magic black box. You could write from typing import cast ... return cast(str, json_parser(metadata_json, ["volumeInfo", "language"])) and your type checker will simply accept the reality that the value is a str. This is not checked at runtime. If you made a mistake and that turned out to be a dictionary or something else, cast will let it pass, and your beautiful statically-typed code will fail with a type error at runtime. That leads us to our second option: assertions. Mypy and other type checkers understand assertions, if statements, and other constructs, and they also understand isinstance. So if you can convince Mypy that you've done your due diligence, it will accept your type outright. result = json_parser(metadata_json, ["volumeInfo", "language"]) assert isinstance(result, str) return result # Accepted as str by mypy or result = json_parser(metadata_json, ["volumeInfo", "language"]) if isinstance(result, str): return result # Accepted as str by mypy else: raise SomeSpecificError(...) This is checked at runtime. If someone hands you bad data or you made a mistake, you'll get a runtime error, which is better than nothing. | 4 | 4 |
74,365,554 | 2022-11-8 | https://stackoverflow.com/questions/74365554/how-super-init-works-when-i-inherit-str-class | I am trying to inherit str class for fun. I provided two ways, 1) using super() and 2) using str class in the constructor as follows: class Str2(str): def __init__(self, value): super().__init__() # I did not use `value` here, but my code works! def ishello(self): if self == "Hello": return True else: False s = Str2("Hello") print(s.upper()) # a method of str class print(s.ishello()) # a new method I defined in Str2 class class Str2(str): def __init__(self, value): str.__init__(value) # It makes sense for me def ishello(self): if self == "Hello": return True else: False s = Str2("Hello") print(s.upper()) # a method of str class print(s.ishello()) # a new method I defined in Str2 class I am wondering how super().__init__() works in my first code although I did not use value Thanks in advance. | str.__init__ does not do anything (similar to tuple.__init__, among other immutable classes). The actual initialization happens in __new__. Conceptually, this makes sense, since __new__ returns a new object, while __init__ can be run multiple times on an existing one. That means that whether you call super().__init__ or str.__init__ with or without an argument, or even omit the call entirely, you will get the same result. When you call Str2(value), you are actually calling type.__call__(Str2, value), which passes value to both __new__ and __init__. Since you did not override __new__, you get the full benefit of its behavior no matter what __init__ does. In short, if you don't want to modify value, you can get rid of your custom __init__. If you do want to modify value, override __new__ instead: class Str2(str): def __new__(cls, value): value = "My prefix " + value return super().__new__(cls, value) | 5 | 5 |
74,345,802 | 2022-11-7 | https://stackoverflow.com/questions/74345802/getting-error-in-visual-code-studio-importerror-cannot-import-name-dummyopera | Getting an error while running the Airflow DAG code in Visual Studio Code. Error ImportError: cannot import name 'DummyOperator' from 'airflow.operators' (c:\Users\10679196\AppData\Local\Programs\Python\Python38\lib\site-packages\airflow\operators\__init__.py) Import statement from airflow import DAG from airflow.operators import DummyOperator Version apache-airflow : 2.4.2 python : 3.8.1 pip : 22.3 | As per the documentation, DummyOperator is deprecated and, beginning with version 2.4.0, is no longer supported. You should use from airflow.operators.empty import EmptyOperator. By the way, your old import path also looks incorrect; for Airflow < 2.4.0 this should work: from airflow.operators.dummy import DummyOperator | 7 | 25 |
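A minimal DAG sketch using the replacement operator on Airflow >= 2.4; the dag_id, dates and task ids are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(dag_id="example_dag", start_date=datetime(2022, 1, 1), schedule=None) as dag:
    start = EmptyOperator(task_id="start")
    end = EmptyOperator(task_id="end")
    start >> end
```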
74,356,504 | 2022-11-8 | https://stackoverflow.com/questions/74356504/installation-error-of-scikit-image-in-python-3-11-0 | Getting errors when installing scikit-image with python-3.11.0. The package is simply installed via pip install scikit-image or python -m pip install -U scikit-image. The error messages showed that the problem occur on the wheel building process, and therefore hinder the scikit-image installation. How could I fix this problem? Error messages: error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ INFO: CCompilerOpt.cache_flush[857] : write cache to path -> C:\Users\admin\AppData\Local\Temp\pip-install-z_bg8g6d\scikit-image_ae4333805d744761b97e8cd984f9e2c1\build\temp.win-amd64-3.11\Release\ccompiler_opt_cache_ext.py [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for scikit-image Failed to build scikit-image ERROR: Could not build wheels for scikit-image, which is required to install pyproject.toml-based projects Environment: affine==2.3.1 attrs==22.1.0 beautifulsoup4==4.11.1 certifi==2022.9.24 charset-normalizer==2.1.1 click==8.1.3 click-plugins==1.1.1 cligj==0.7.2 colorama==0.4.6 contourpy==1.0.6 cycler==0.11.0 docopt==0.6.2 Fiona @ file:///C:/Users/admin/pipwin/Fiona-1.8.21-cp311-cp311-win_amd64.whl fonttools==4.38.0 GDAL @ file:///C:/Users/admin/pipwin/GDAL-3.4.3-cp311-cp311-win_amd64.whl geopandas==0.12.1 idna==3.4 Js2Py==0.74 kiwisolver==1.4.4 matplotlib==3.6.2 munch==2.5.0 numpy==1.23.4 opencv-contrib-python==4.6.0.66 packaging==21.3 pandas==1.5.1 Pillow==9.3.0 pipwin==0.5.2 pyjsparser==2.7.1 pyparsing==3.0.9 PyPrind==2.11.3 pyproj==3.4.0 pySmartDL==1.3.4 python-dateutil==2.8.2 pytz==2022.6 pytz-deprecation-shim==0.1.0.post0 rasterio==1.3.3 requests==2.28.1 rioxarray==0.12.4 Shapely==1.8.5.post1 six==1.16.0 snuggs==1.4.7 soupsieve==2.3.2.post1 tzdata==2022.6 tzlocal==4.2 urllib3==1.26.12 xarray==2022.11.0 Two approaches were suggested: Downgrades to an older python version: Nathan answer Install Visual Studio C++ compiler, as mentioned in: https://wiki.python.org/moin/WindowsCompilers However, I would like to stay on current python version, and I'm getting stuck in the second approach. | In my case, this problem was solved via installing a wheel file from: https://www.lfd.uci.edu/~gohlke/pythonlibs/#scikit-image In my case, I download the cp311 windows amd64 version. Then, install the .whl file to the virtual environment (env) D:\env>python -m pip install D:\Download\scikit_image-0.19.3-cp311-cp311-win_amd64.whl Finally, the scikit-image was successfully installed. | 3 | 6 |
74,346,565 | 2022-11-7 | https://stackoverflow.com/questions/74346565/fastapi-typeerror-issubclass-arg-1-must-be-a-class-with-modular-imports | When working with modular imports with FastAPI and SQLModel, I am getting the following error if I open /docs: TypeError: issubclass() arg 1 must be a class Python 3.10.6 pydantic 1.10.2 fastapi 0.85.2 sqlmodel 0.0.8 macOS 12.6 Here is a reproducible example. user.py from typing import List, TYPE_CHECKING, Optional from sqlmodel import SQLModel, Field if TYPE_CHECKING: from item import Item class User(SQLModel): id: int = Field(default=None, primary_key=True) age: Optional[int] bought_items: List["Item"] = [] item.py from sqlmodel import SQLModel, Field class Item(SQLModel): id: int = Field(default=None, primary_key=True) price: float name: str main.py from fastapi import FastAPI from user import User app = FastAPI() @app.get("/", response_model=User) def main(): return {"message": "working just fine"} I followed along the tutorial from sqlmodel https://sqlmodel.tiangolo.com/tutorial/code-structure/#make-circular-imports-work. If I would put the models in the same file, it all works fine. As my actual models are quite complex, I need to rely on the modular imports though. Traceback: Traceback (most recent call last): File "/Users/felix/opt/anaconda3/envs/fastapi_test/lib/python3.10/site-packages/fastapi/utils.py", line 45, in get_model_definitions m_schema, m_definitions, m_nested_models = model_process_schema( File "pydantic/schema.py", line 580, in pydantic.schema.model_process_schema File "pydantic/schema.py", line 621, in pydantic.schema.model_type_schema File "pydantic/schema.py", line 254, in pydantic.schema.field_schema File "pydantic/schema.py", line 461, in pydantic.schema.field_type_schema File "pydantic/schema.py", line 847, in pydantic.schema.field_singleton_schema File "pydantic/schema.py", line 698, in pydantic.schema.field_singleton_sub_fields_schema File "pydantic/schema.py", line 526, in pydantic.schema.field_type_schema File "pydantic/schema.py", line 921, in pydantic.schema.field_singleton_schema File "/Users/felix/opt/anaconda3/envs/fastapi_test/lib/python3.10/abc.py", line 123, in __subclasscheck__ return _abc_subclasscheck(cls, subclass) TypeError: issubclass() arg 1 must be a class | TL;DR You need to call User.update_forward_refs(Item=Item) before the OpenAPI setup. Explanation So, this is actually quite a bit trickier and I am not quite sure yet, why this is not mentioned in the docs. Maybe I am missing something. Anyway... If you follow the traceback, you'll see that the error occurs because in line 921 of pydantic.schema in the field_singleton_schema function a check is performed to see if issubclass(field_type, BaseModel) and at that point field_type is not in fact a type instance. A bit of debugging reveals that this occurs, when the schema for the User model is being generated and the bought_items field is being processed. At that point the annotation is processed and the type argument for List is still a forward reference to Item. Meaning it is not the actual Item class itself. And that is what is passed to issubclass and causes the error. This is a fairly common problem, when dealing with recursive or circular relationships between Pydantic models, which is why they were so kind to provide a special method just for that. It is explained in the Postponed annotations section of the documentation. The method is update_forward_refs and as the name suggests, it is there to resolve forward references. 
What is tricky in this case, is that you need to provide it with an updated namespace to resolve the Item reference. To do that you need to actually have the real Item class in scope because that is what needs to be in that namespace. Where you do it does not really matter. You could for example import User model into your item module and call it there (obviously below the definition of Item): from sqlmodel import SQLModel, Field from .user import User class Item(SQLModel): id: int = Field(default=None, primary_key=True) price: float name: str User.update_forward_refs(Item=Item) But that call needs to happen before an attempt is made to set up that schema. Thus you'll at least need to import the item module in your main module: from fastapi import FastAPI from .user import User from . import item api = FastAPI() @api.get("/", response_model=User) def main(): return {"message": "working just fine"} At that point it is probably simpler to have a sub-package with just the model modules and import all of them in the __init__.py of that sub-package. The reason I gave the example of putting the User.update_forward_refs call in below your Item definition is that these situations typically occur, when you actually have a circular relationship, i.e. if your Item class had a users field for example, which was typed as list[User]. Then you'd have to import User there anyway and might as well just update the references there. In your specific example, you don't actually have any circular dependencies, so there is strictly speaking no need for the TYPE_CHECKING escape. You can simply do from .item import Item inside user.py and put the actual class in your annotation as bought_items: list[Item]. But I assume you simplified the actual use case and simply forgot to include the circular dependency. Maybe I am missing something and someone else here can find a way to call update_forward_refs without the need to provide Item explicitly, but this way should definitely work. | 13 | 12 |
74,318,512 | 2022-11-4 | https://stackoverflow.com/questions/74318512/python-api-request-nested-dictionaries-to-dataframe-with-datetime-indexed-value | I run a query on python to get hourly price data from an API, using the get function: result = (requests.get(url_prices, headers=headers, params={'SpotKey':'1','Fields':'hours','FromDate':'2016-05-05','ToDate':'2016-12-05','Currency':'eur','SortType':'ascending'}).json()) where 'SpotKey' identifies the item I want to retrieve from the API, in this example '1' is hourly price timeseries (the other parameters are self explanatory). The result from the query is: {'SpotKey': '1', 'SpotName': 'APX', 'Denomination': 'eur/mwh', 'Elements': [{'Date': '2016-05-05T00:00:00.0000000', 'TimeSpans': [{'TimeSpan': '00:00-01:00', 'Value': 23.69}, {'TimeSpan': '01:00-02:00', 'Value': 21.86}, {'TimeSpan': '02:00-03:00', 'Value': 21.26}, {'TimeSpan': '03:00-04:00', 'Value': 20.26}, {'TimeSpan': '04:00-05:00', 'Value': 19.79}, {'TimeSpan': '05:00-06:00', 'Value': 19.79}, ... {'TimeSpan': '19:00-20:00', 'Value': 57.52}, {'TimeSpan': '20:00-21:00', 'Value': 49.4}, {'TimeSpan': '21:00-22:00', 'Value': 42.23}, {'TimeSpan': '22:00-23:00', 'Value': 34.99}, {'TimeSpan': '23:00-24:00', 'Value': 33.51}]}]} where 'Elements' is the relevant list containing the timeseries, structured as nested dictionaries of 'Date' keys and 'TimeSpans' keys. Each 'TimeSpans' keys contains other nested dictionaries for each hour of the day, with a 'TimeSpan' key for the hour and a 'Value' key for the price. I would like to transform it to a dataframe like: Datetime eur/mwh 2016-05-05 00:00:00 23.69 2016-05-05 01:00:00 21.86 2016-05-05 02:00:00 21.26 2016-05-05 03:00:00 20.26 2016-05-05 04:00:00 19.79 ... ... 2016-12-05 19:00:00 57.52 2016-12-05 20:00:00 49.40 2016-12-05 21:00:00 42.23 2016-12-05 22:00:00 34.99 2016-12-05 23:00:00 33.51 For the time being I managed to do so doing: df = pd.concat([pd.DataFrame(x) for x in result['Elements']]) df['Date'] = pd.to_datetime(df['Date'] + ' ' + [x['TimeSpan'][:5] for x in df['TimeSpans']], errors='coerce') df[result['Denomination']] = [x['Value'] for x in df['TimeSpans']] df = df.set_index(df['Date'], drop=True).drop(columns=['Date','TimeSpans']) df = df[~df.index.isnull()] I did so because the daylight-saving-time is replacing the 'TimeSpan' hourly values with 'dts' string, giving ParseDate errors when creating the datetime index. Since I will request data very frequently and potentially for different granularities (e.g. half-hourly), is there a better / quicker / standard way to shape so many nested dictionaries into a dataframe with the format I look for, that allows to avoid the parsing date error for daylight-saving-time changes? thank you in advance, cheers. | You did not give examples of the dts, so I cannot verify. But in principle, trating the Date as timestamp and TimeSpan as as timedeltas should give you both the ability to ignore granularity changes and potentialy include additional "dts" parsing. def parse_time(x): if "dst" not in x: return x[:5]+":00" return f"{int(x[:2])+1}{x[2:5]}:00" # TODO ACTUALLY PARSE, time overflow etc df = pd.DataFrame(result['Elements']).set_index("Date") d2 = df.TimeSpans.explode().apply(pd.Series) d2['Datetime'] = pd.to_datetime(d2.index) + pd.to_timedelta(d2.TimeSpan.apply(parse_dt)) pd.DataFrame(d2.set_index(d2.Datetime).Value).rename(columns={"Value": "eur/mwh"}) gives | 5 | 4 |
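A hedged alternative sketch using pd.json_normalize for the same flattening; it assumes the same result dict shown in the question and that malformed daylight-saving rows should simply be dropped.

```python
import pandas as pd

flat = pd.json_normalize(result, record_path=['Elements', 'TimeSpans'],
                         meta=[['Elements', 'Date']])
ts = pd.to_datetime(flat['Elements.Date'].str[:10] + ' ' + flat['TimeSpan'].str[:5],
                    errors='coerce')            # malformed TimeSpans (e.g. 'dst') become NaT
df = (flat.assign(Datetime=ts)
          .dropna(subset=['Datetime'])
          .set_index('Datetime')[['Value']]
          .rename(columns={'Value': result['Denomination']}))
```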
74,344,904 | 2022-11-7 | https://stackoverflow.com/questions/74344904/python-numpy-image-crop-one-side-fill-with-black-the-ther | I have an image with resolution 816x624, and would need to make a 640x640 image out of it. To do so, I'd have to crop the long side (centered), and fill up the short side with black (centered). The resulting image should be the center of the starting image, with a small black strip on top and bottom. How can this be done? I tried with: crop_img = img1[int(h1 / 2 - 640 / 2):int(h1 / 2 + 640 / 2), int(w1 / 2 - 640 / 2):int(w1 / 2 + 640/ 2)] but this does not work because h1 is smaller than 640. | Given an input image img, an expected height h and an expected width w: def resize_img(img, h, w): #cut the image cutted_img = img[ max(0, int(img.shape[0]/2-h/2)):min(img.shape[0], int(img.shape[0]/2+h/2)), max(0, int(img.shape[1]/2-w/2)):min(img.shape[1], int(img.shape[1]/2+w/2)), ] #pad the image padded_img = np.zeros(shape=(h,w,3), dtype=img.dtype) padded_img[ int(padded_img.shape[0]/2-cutted_img.shape[0]/2):int(padded_img.shape[0]/2+cutted_img.shape[0]/2), int(padded_img.shape[1]/2-cutted_img.shape[1]/2):int(padded_img.shape[1]/2+cutted_img.shape[1]/2), ] = cutted_img return padded_img Some examples: url = "https://www.planetware.com/wpimages/2020/02/france-in-pictures-beautiful-places-to-photograph-eiffel-tower.jpg" response = requests.get(url) start_img = np.array(PIL.Image.open(BytesIO(response.content))) plt.imshow(start_img) plt.show() #shape = (487, 730, 3) plt.imshow(resize_img(start_img, 640, 640)) plt.show() #shape = (640, 640, 3) I tested on some other pictures: shape = (1020, 680, 3) shape = (450, 254, 3) shape = (847, 564, 3) All resized images have size (640, 640, 3) and seem to be properly padded. PS. I have used the following libraries: import numpy as np from matplotlib import pyplot as plt import PIL import requests from io import BytesIO | 3 | 4 |
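An equivalent sketch that delegates the black border to np.pad instead of writing into a zero array; it assumes an (H, W, C) image array like the one in the question.

```python
import numpy as np

def crop_and_pad(img, h, w):
    # center-crop anything larger than the target, then center-pad the rest with zeros (black)
    ch, cw = min(h, img.shape[0]), min(w, img.shape[1])
    top, left = (img.shape[0] - ch) // 2, (img.shape[1] - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    pad_h, pad_w = h - ch, w - cw
    return np.pad(crop, ((pad_h // 2, pad_h - pad_h // 2),
                         (pad_w // 2, pad_w - pad_w // 2),
                         (0, 0)), mode='constant')
```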
74,347,282 | 2022-11-7 | https://stackoverflow.com/questions/74347282/count-nan-values-per-column-in-an-ndarray | Problem: I have an ndarray of shape (2000, 7) and I want to count the number of NaNs per column and save the counts in an ndarray. Tried: number_nan_in_arr = np.count_nonzero(np.isnan(arr)) But this counts the total number of NaNs over all columns. Solution: ? | You can add axis=0 to count NaNs per column: number_nan_in_arr = np.count_nonzero(np.isnan(arr), axis=0) Example: import numpy as np a = np.array([[0, np.nan, 7, 0], [3, 0, 2, np.nan]]) np.count_nonzero(np.isnan(a), axis=0) Output: array([0, 1, 0, 1], dtype=int64) | 3 | 2 |
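An equivalent spelling, since summing a boolean mask counts the True values column-wise.

```python
import numpy as np

a = np.array([[0, np.nan, 7, 0], [3, 0, 2, np.nan]])
print(np.isnan(a).sum(axis=0))   # [0 1 0 1]
```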
74,313,762 | 2022-11-4 | https://stackoverflow.com/questions/74313762/except-fails-on-unhashable-exceptions-documented-behaviour-or-a-bug | Consider the following code block: class MyException(Exception): __hash__ = None try: raise ExceptionGroup("Foo", [ MyException("Bar") ]) except* Exception: pass The except* should catch any number of exceptions of any kind, thrown together as an ExceptionGroup (or a single exception of any kind if thrown alone, come to that). Instead, an unhandled TypeError occurs during handling of our ExceptionGroup, allegedly at "line -1" of our module: + Exception Group Traceback (most recent call last): | File "C:\Users\Josep\AppData\Roaming\JetBrains\PyCharmCE2022.2\scratches\scratch_62.py", line 6, in <module> | raise ExceptionGroup("Foo", [ | ExceptionGroup: Foo (1 sub-exception) +-+---------------- 1 ---------------- | MyException: Bar +------------------------------------ During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Josep\AppData\Roaming\JetBrains\PyCharmCE2022.2\scratches\scratch_62.py", line -1, in <module> TypeError: unhashable type: 'MyException' (If we replace the except* Exception with something more specific, say except* ValueError or except* MyException, the same thing happens. If we try to just raise a single MyException and catch it normally with except MyException, that works fine.) Normal except clauses don't care whether exceptions are hashable. I could not find this quirk of except* documented in PEP-654 or the Python 3.11 release notes. Is this intended behavior, or is it simply a bug in the Python implementation I'm using? (For those who want to reproduce this behavior, I'm using Python 3.11.0, 64-bit, on Windows.) | This was reported at https://github.com/python/cpython/issues/99181 and we have a PR to fix it. Should be fixed in 3.11.1. | 6 | 1 |
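Until the fix landed in 3.11.1, one possible interim workaround (an untested sketch, not part of the original answer) is to catch the group with a plain except and match the sub-exceptions manually, which avoids the hashing step that except* triggers.

```python
try:
    raise ExceptionGroup("Foo", [MyException("Bar")])
except ExceptionGroup as eg:
    for exc in eg.exceptions:        # .exceptions is a plain tuple, no hashing involved
        if isinstance(exc, MyException):
            ...  # handle it here
```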
74,304,427 | 2022-11-3 | https://stackoverflow.com/questions/74304427/setting-maximum-number-of-workers-in-dask-map-function | I have a Dask process that triggers 100 workers with a map function: worker_args = .... # array with 100 elements with worker parameters futures = client.map(function_in_worker, worker_args) worker_responses = client.gather(futures) I use docker where each worker is a container. I have configured docker to spawn 20 workers/containers, like so: docker-compose up -d --scale worker=20 The problem is that my machine crashes because the map function triggers 20 workers in parallel and that makes memory and CPU exceed the maximum. I want to keep the configuration of 20 workers because I use the workers for other functions that don't require large amount of memory. How to limit the map function to, say, 5 workers in parallel? | dask does not dynamically adjust worker resources depending on how many workers are idle. In the example you provided, once 20 workers are initiated, if only 5 workers are used, then they will not be allocated the resources from the remaining 15 workers that are idle. If that's acceptable (e.g. because the idle resources are being utilized by an external program), then one way to restrict work to 5 workers is to explicitly specify them via workers kwarg to .map call: # instantiate workers from distributed import Client c = Client(n_workers=20) # select at most 5 workers from the available list selected_workers = list(c.scheduler_info()['workers'])[:5] dummy_function = lambda x: x**2 futs = c.map(dummy_function, range(10), workers=selected_workers) Another way to control workload allocation is to use resources kwarg, see these related answers: 0, 1, 2, 3. | 5 | 3 |
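A hedged sketch of the resources approach the answer links to: if each container's dask-worker is started with --resources "MEM=1", annotating the heavy tasks caps how many of them each worker runs at once. The scheduler address and the resource name are assumptions, not from the original setup.

```python
from distributed import Client

client = Client("tcp://scheduler:8786")
# each heavy task claims the worker's single MEM slot, so a worker never runs two concurrently
futures = client.map(function_in_worker, worker_args, resources={"MEM": 1})
worker_responses = client.gather(futures)
```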
74,322,978 | 2022-11-4 | https://stackoverflow.com/questions/74322978/awaiting-a-future-versus-event-wait | In the UDP client example in the Python docs, they use loop.create_future() to create a new Future. The main program awaits this future until a result is set on it, at which point the program cleans up resources and terminates. However, I have always used an asyncio.Event for this kind of thing. Is there any difference between these two techniques? Is there any reason to prefer the Future instead of the Event? loop = asyncio.get_running_loop() future = loop.create_future() await future event = asyncio.Event() await event.wait() | They can both be used for synchronization, but a Future has a proper result and can raise exceptions. So, Event provides fewer features, but when the use case is only synchronization, it may express the intent better and be less error-prone. In fact, an Event is implemented as a list of futures. | 5 | 3 |
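A small sketch contrasting the two: a Future carries a value (or an exception) back to the awaiter, while an Event only signals.

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()

    fut = loop.create_future()
    loop.call_later(0.1, fut.set_result, "payload")   # or fut.set_exception(RuntimeError())
    print(await fut)                                  # -> payload

    ev = asyncio.Event()
    loop.call_later(0.1, ev.set)                      # nothing to return, just a signal
    await ev.wait()

asyncio.run(main())
```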
74,332,390 | 2022-11-6 | https://stackoverflow.com/questions/74332390/how-to-understand-snippet-of-regex | I am attempting to understand what this snippet of code does: passwd1=re.sub(r'^.*? --', ' -- ', line) password=passwd1[4:] I understand that the top line uses regex to remove the " -- ", and the bottom line I think removes something as well? I went back to this code after a while and need to improve it, but to do that I need to understand this again. I've been trying to read regex docs to no avail. What is the r'^.*? at the beginning of the regex? | To break r'^.*? -- into pieces: The r in front of a string in Python marks it as a raw string literal, so backslashes are not treated as escape sequences. This saves you a bunch of confusing character escaping, which is why raw strings are conventionally used for regex patterns. The ^ tells the regex to match only from the beginning of the string. .*? tells the regex to match any number of characters up to... --, which is a literal match. The sum of this is that it will match any string, starting at the beginning of a line up to the -- demarcation. Since it is re.sub(), the matched part of the string will be replaced with --. This is why something like Google -- MyPassword becomes -- MyPassword. The second line is a simple string slice, dropping the first four elements (characters) of the string. This might be superfluous - you could just substitute the match with an empty string like this: passwd1 = re.sub(r'^.* --', '', line) This achieves the same result. Note I've dropped the ?, which is also superfluous here, because the * has a similar but broader effect. There are some technical differences, but I don't think you need it for your stated purpose. On its own, ? matches zero or one of the previous element; placed after a quantifier such as * it instead makes the match lazy. The * will match zero or more of the previous character - in this case a ., which is 'any character'. .* is what is known as a greedy quantifier, and .*? a lazy quantifier. That is, the greedy quantifier will match as much as possible, and the lazy will match as little as possible. The difference between ^.*? -- and ^.* -- is what is matched in this case: Something something -- mypassword -- yourpassword In the greedy case, the first two clauses ('something something -- mypassword') are matched and deleted. In the lazy case, only 'something something' is deleted. Most passwords don't include spaces, nevermind ' -- ', so you probably want to use the greedy version. | 4 | 4 |
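A two-line demonstration of the greedy/lazy difference described in the answer, using its own example string.

```python
import re

line = "Something something -- mypassword -- yourpassword"
print(re.sub(r'^.*? --', '', line))   # lazy:   ' mypassword -- yourpassword'
print(re.sub(r'^.* --', '', line))    # greedy: ' yourpassword'
```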
74,326,894 | 2022-11-5 | https://stackoverflow.com/questions/74326894/how-to-change-the-image-size-for-seaborn-objects | The solutions shown in How to change the figure size of a seaborn axes or figure level plot do not work for seaborn.objects. This question is about the new interface added in seaborn v0.12. I tried different ways to set the plotted image size but none worked. Below is the code; how can I set the image height and width in pixels or inches? For reference: seaborn.objects.Plot.add. import pandas as pd import seaborn as sns from seaborn import objects as so df = sns.load_dataset('planets') p = so.Plot(data=df, x='orbital_period', y='mass').add(so.Dot()) p.save('output.png') p.show() | With Plot.layout: ( so.Plot(data=d, x="point_x", y="point_y") .add(so.Dot()) .layout(size=(w, h)) # in inches * (dpi / 100) .save("output.png", dpi=dpi) # e.g. dpi=100 .show() ) | 4 | 9 |
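If you prefer to size the figure in matplotlib terms, Plot.on lets you draw onto a pre-sized Axes; a sketch under that assumption (figsize is in inches), using the question's own dataset.

```python
import matplotlib.pyplot as plt
import seaborn as sns
from seaborn import objects as so

df = sns.load_dataset('planets')
f, ax = plt.subplots(figsize=(8, 5))
(
    so.Plot(data=df, x='orbital_period', y='mass')
    .add(so.Dot())
    .on(ax)
    .save('output.png')
)
```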