Dataset schema (one record per question/answer pair):
question_id — int64, range 59.5M to 79.4M
creation_date — string, length 8 to 10
link — string, length 60 to 163
question — string, length 53 to 28.9k
accepted_answer — string, length 26 to 29.3k
question_vote — int64, range 1 to 410
answer_vote — int64, range -9 to 482
77,759,907
2024-1-4
https://stackoverflow.com/questions/77759907/how-to-remove-xa0-from-soup-in-beautifulsoup-python
I am currently using BeautifulSoup to parse the HTML code of a webpage. To get the text from an element, I use the ".text" attribute: soup.find('p', {'class': 'example'}).text But the problem is that sometimes I get "\xa0" in the result: "some text «\xa0text\xa0»" I tried using the "replace" function: soup = BeautifulSoup(driver.page_source.replace('\xa0', ' '), "lxml") NOTE: I don't want to have to use a function for every single string I parse; I would like the soup to already be purged of those characters from the beginning.
The problem is that the HTML source probably contains the entity &nbsp;, not the literal \xa0. Try replacing that instead, or as well. soup = BeautifulSoup( driver.page_source.replace( '&nbsp;', ' ').replace('\xa0', ' '), "lxml")
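A minimal self-contained sketch of the same idea (assuming bs4 and lxml are installed; the HTML string here is a made-up sample, not taken from the question):

from bs4 import BeautifulSoup

page_source = "<p class='example'>some text «&nbsp;text&nbsp;»</p>"  # hypothetical sample markup
# Strip both the entity and the literal character before parsing
cleaned = page_source.replace('&nbsp;', ' ').replace('\xa0', ' ')
soup = BeautifulSoup(cleaned, "lxml")
print(soup.find('p', {'class': 'example'}).text)  # -> "some text « text »"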
2
2
77,759,260
2024-1-4
https://stackoverflow.com/questions/77759260/django-appregistrynotready-when-running-unittests-from-vscode
My UnitTest test cases in a Django project run fine when running them directly from the Django utility (python manage.py test). However, when trying to run them from VSCode's test explorer, I get django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.. I found several similar questions that were already answered here on SO, but none of the solutions seem to work. Here's what I tried: import my classes (e.g. models) within the SetUpTestData method and the actual test methods of the test class, rather than at the top of the file. This seemed to help a bit, in the sense that the tests now do show up in the test explorer (previously they didn't), but I still get an error when running them. set the DJANGO_SETINGS_MODULE environment variable at the top of the file containing the test class: import os os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings") add a .env file to my root directory, containing DJANGO_SETINGS_MODULE = my_project.settings add a configuration with purpose debug-test to launch.json: "configurations": [ { "name": "Django test", "type": "python", "request": "launch", "program": "${workspaceFolder}\\manage.py", "args": [ "test" ], "purpose": ["debug-test"], "django": true, "justMyCode": true } ] remove tests.py from my app folder, as my tests are stored in a separate folder (and apparently Django can't have both) My folder structure looks like this: root |_ my_app |_ tests |_ __init__.py |_ test_models.py |_ my_project For the sake of completeness, I have the following in settings.json: { "python.testing.unittestArgs": [ "-v", "-s", ".", "-p", "test_*.py" ], "python.testing.pytestEnabled": false, "python.testing.unittestEnabled": true } Is there any way to make VSCode's test explorer play nice with Django and UnitTest?
You can try the following steps: Create an __init__.py file in your tests directory (if it doesn't already exist), and add the following code: import os import django os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings") django.setup() This code manually sets the Django settings module and initializes Django. In your settings.json, change python.testing.unittestArgs to point to your tests directory: { "python.testing.unittestArgs": [ "-v", "-s", "./my_app/tests", "-p", "test_*.py" ], "python.testing.pytestEnabled": false, "python.testing.unittestEnabled": true } This tells VS Code to look for tests in the my_app/tests directory.
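To sanity-check the setup, a hypothetical minimal test module that relies on the django.setup() call in the tests package's __init__.py (the file name and the model-free assertion are illustrative, not from the question):

# my_app/tests/test_models.py
from django.test import TestCase

class SmokeTest(TestCase):
    def test_discovery_works(self):
        # If VS Code's test explorer can collect and run this, app loading is configured correctly
        self.assertTrue(True)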
2
3
77,761,161
2024-1-4
https://stackoverflow.com/questions/77761161/sqlalchemy-string-relationships-cause-undefined-name-complaints-by-flake8-and
# order.py class Order(Base): __tablename__ = "Order" id: Mapped[int] = mapped_column(primary_key=True) items: Mapped[List["Item"]] = relationship(back_populates="order") # item.py class Item(Base): __tablename__ = "Item" id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True) order_id: Mapped[int] = mapped_column(ForeignKey("Order.id")) order: Mapped["Order"] = relationship(back_populates="items") I'm using sqlalchemy 2.0.23 and not using plugins for mypy or flake8. Neither module should need to import the other since I'm using string relationships like "Item" in the Order class and "Order" in the relationship in the Item class. flake8 reports an F821 error on both files because of the undefined name. mypy reports a similar error on both. I can configure flake8 to ignore the F821. I'm not sure how I'd do similar in mypy. But these are important rules that shouldn't be turned off to get SQLAlchemy classes through linters. I want to keep my classes in separate files. Is there a way to correctly define them so that linters like these won't complain? Adding imports to both files quiets these linters, but results in a circular import problem so the code won't run.
mypy and flake8 are correct here; these warnings shouldn't be ignored. To resolve the circular import problem, you can use a "type-checking-only import", i.e. an import statement wrapped in an if TYPE_CHECKING block (the TYPE_CHECKING constant is explained here). Here's how it may look: # order.py from typing import TYPE_CHECKING if TYPE_CHECKING: from .item import Item class Order(Base): __tablename__ = "Order" id: Mapped[int] = mapped_column(primary_key=True) items: Mapped[List["Item"]] = relationship(back_populates="order") and # item.py from typing import TYPE_CHECKING if TYPE_CHECKING: from .order import Order class Item(Base): __tablename__ = "Item" id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True) order_id: Mapped[int] = mapped_column(ForeignKey("Order.id")) order: Mapped["Order"] = relationship(back_populates="items")
2
3
77,761,185
2024-1-4
https://stackoverflow.com/questions/77761185/deleteing-unwanted-characters-from-dat-file-then-performing-calculations-on-the
I need to read a .dat file in Python. The file has 3 columns in total and hundreds of rows. The second and third column contain two characters followed by a float that I would like to extract--the second column always starts with "SA" and the third column always starts with "SC". I am currently loading in the data and looping through each row to extract the values, but is there a better way to do it? Once the data is cleaned, I want to perform some calculations on the result, namely computing the average. Here is an example of two lines from the .dat file: 9:01:15 SA7.998 SC7.968 9:01:16 SA7.998 SC7.968 Here is the code I am currently using. import numpy as np import os.path from statistics import mean time=[] s_1=[] s_2=[] s1=[] s2=[] r1=[] r2=[] avgg=[] # Reading data from file with open('serial_2.dat','r') as f: dat=f.readlines() for i in dat: y=i.split() # cleaning and getting columns without spaces time.append(y[0]) s1.append(y[1]) s2.append(y[2]) #getting only numbers without strings (SA and SC) for counter in (range(0,len(s1))): S_1=s1[counter] r1.append(S_1[2:]) r1_f=np.array(r1, dtype='float32') S_2=s2[counter] r2.append(S_2[2:]) r2_f=np.array(r2, dtype='float32') avgg=r1_f+r2_f/2 print(np.mean(avgg))
You can use pandas to do that: #! pip install pandas import pandas as pd import numpy as np df = pd.read_csv('serial_2.dat', sep='\s+', header=None, names=['time', 's1', 's2']) df['s1'] = df['s1'].str.extract('^[\D]+(.*)').astype(float) df['s2'] = df['s2'].str.extract('^[\D]+(.*)').astype(float) Output: >>> df time s1 s2 0 9:01:15 7.998 7.968 1 9:01:16 7.998 7.968 >>> df.dtypes time object s1 float64 s2 float64 dtype: object If you have always 2 characters in s1 and s2 columns, you can avoid regex and strip the first two characters: df['s1'] = df['s1'].str[2:].astype(float) df['s2'] = df['s2'].str[2:].astype(float) Then compute the global average: # With pandas >>> df[['s1', 's2']].mean().mean() 7.983 # With numpy >>> np.mean(df[['s1', 's2']]) 7.983 You can also compute the row average: df['avg'] = df[['s1', 's2']].mean(axis=1) print(df) # Output time s1 s2 avg 0 9:01:15 7.998 7.968 7.983 1 9:01:16 7.998 7.968 7.983
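If you'd rather avoid the pandas dependency entirely, a hedged plain-Python sketch of the same parsing (it assumes serial_2.dat exists with exactly the three-column layout shown in the question):

row_means = []
with open('serial_2.dat') as f:
    for line in f:
        _, sa, sc = line.split()                                # time, "SA...", "SC..."
        row_means.append((float(sa[2:]) + float(sc[2:])) / 2)   # strip the 2-character prefixes

print(sum(row_means) / len(row_means))  # overall average of the per-row means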
2
3
77,758,047
2024-1-4
https://stackoverflow.com/questions/77758047/encountering-importerror-with-tesserocr-symbol-not-found-in-flat-namespace-z
I'm attempting to use tesserocr in my Python project, but when I try to import it, I'm getting [No module named 'tesserocr'], I run into an ImportError. The error message points to a missing symbol related to the Tesseract library. Here is the full error message: ImportError: dlopen(/Volumes/WorkSpace/Backend/Reveratest/revera_api/venv/lib/python3.8/site-packages/tesserocr.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '__ZN9tesseract11TessBaseAPID1Ev' Here's what I've done so far: Installed tesseract using brew: brew install tesseract Installed tesserocr via pip within a virtual environment: pip install tesserocr Confirmed that tesseract command-line works correctly. I'm running this on macOS M1 and my Python version is 3.8. How can I resolve this issue so that I can use tesserocr in my project? Are there specific paths or configurations I'm possibly missing? Any assistance or pointers as to what might be causing this error would be greatly appreciated!
I ran into a problem when trying to work with the tesserocr package, but I managed to find a fix and wanted to share it for anyone who might be experiencing the same issue. The error was resolved by first completely uninstalling the tesserocr package and then reinstalling it using the --no-binary option with pip. Here are the commands I used: Uninstall tesserocr: pip uninstall tesserocr Reinstall tesserocr without using binary packages: pip install --no-binary :all: tesserocr After performing these two steps, the error was gone, and tesserocr was working as expected. I hope this helps someone who faces the same problem!
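Once reinstalled, a quick hedged check that the bindings now link against your Homebrew Tesseract (function names are from tesserocr's documented API; adjust if your version differs):

import tesserocr

print(tesserocr.tesseract_version())  # e.g. "tesseract 5.x" if the shared library resolves
print(tesserocr.get_languages())      # (tessdata path, list of installed languages)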
4
8
77,753,885
2024-1-3
https://stackoverflow.com/questions/77753885/fastapi-supporting-either-basic-auth-or-jwt-auth-to-access-endpoint
Referencing this link: fastapi-supporting-multiple-authentication-dependencies I think this is the closest to what I need, but somehow I can’t get either of the dependency to work because fastapi is enforcing both dependencies before it grants access to endpoint. Snippet for custom depedency: def basic_logged_user(credentials: Annotated[HTTPBasicCredentials, Depends(security)]): current_username_bytes = credentials.username.encode("utf8") correct_username_bytes = settings.SESSION_LOGIN_USER.encode("utf8") is_correct_username = secrets.compare_digest( current_username_bytes, correct_username_bytes ) current_password_bytes = credentials.password.encode("utf8") correct_password_bytes = settings.SESSION_LOGIN_PASS.encode("utf8") is_correct_password = secrets.compare_digest( current_password_bytes, correct_password_bytes ) if not (is_correct_username and is_correct_password): raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid Credentials", headers={"WWW-Authenticate": "Basic"}, ) return credentials.username def jwt_logged_user(token: str = Depends(utils.OAuth2_scheme), db: Session = Depends(db_session)): credential_exception = HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Incorrect username or password", headers={"WWW-Authenticate": "Bearer"}) token = utils.verify_token(token, credential_exception) user = db.query(User).filter(User.username == token.username).first() return user # custom auth def auth_user(jwt_auth: HTTPBearer = Depends(jwt_logged_user), basic_auth: HTTPBasic = Depends(basic_logged_user)): if not (jwt_auth or basic_auth): raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail='Invalid Credentials') #endpoint @router.get("/") async def get_users(db: Session = Depends(db_session), logged_user: str = Depends(auth_user)): query_users = db.query(User).all() return query_users I expect it to grant me access to endpoint when I provide correct credential for JWT auth or Basic Auth, but it still forces me to put in credential for both. How can I achieve this effect of providing any one of the 2 auth but not both.
The idea is to make each of these security dependencies not raise an exception on an authentication error at the dependency-resolution stage. For HTTPBasic, pass auto_error=False: security = HTTPBasic(auto_error=False) And then in basic_logged_user, check whether the credentials are missing: def basic_logged_user(credentials: Annotated[Optional[HTTPBasicCredentials], Depends(security)]): if credentials is None: return None ... # Do not raise an exception, but return None instead You need to find a way to do the same with your second authentication scheme (utils.OAuth2_scheme) - not raising HTTP_401_UNAUTHORIZED, but returning None instead. Then your auth_user will work as you expect: it will raise HTTP_401_UNAUTHORIZED only if both schemes return None.
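A self-contained sketch of both halves wired together (a simplification, not your exact code: the real credential and JWT checks are stubbed out with comments, and OAuth2PasswordBearer stands in for utils.OAuth2_scheme):

from typing import Optional
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import HTTPBasic, HTTPBasicCredentials, OAuth2PasswordBearer

app = FastAPI()
security = HTTPBasic(auto_error=False)                                    # returns None instead of raising
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token", auto_error=False)  # same for the bearer token

def basic_logged_user(credentials: Optional[HTTPBasicCredentials] = Depends(security)):
    if credentials is None:
        return None
    return credentials.username  # real secrets.compare_digest() check goes here

def jwt_logged_user(token: Optional[str] = Depends(oauth2_scheme)):
    if token is None:
        return None
    return token  # real token verification / DB lookup goes here

def auth_user(jwt_auth=Depends(jwt_logged_user), basic_auth=Depends(basic_logged_user)):
    if not (jwt_auth or basic_auth):
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid Credentials")
    return jwt_auth or basic_auth

@app.get("/")
async def get_users(logged_user=Depends(auth_user)):
    return {"logged_user": str(logged_user)}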
2
0
77,753,658
2024-1-3
https://stackoverflow.com/questions/77753658/langchain-local-llama-compatible-model
I'm trying to setup a local chatbot demo for testing purpose. I wanted to use LangChain as the framework and LLAMA as the model. Tutorials I found all involve some registration, API key, HuggingFace, etc, which seems unnecessary for my purpose. Is there a way to use a local LLAMA comaptible model file just for testing purpose? And also an example code to use the model with LangChain would be appreciated. Thanks! UPDATE: I wrote a blog post based on the accepted answer.
No registration is required to utilize on-prem local models within ecosystems like Hugging Face (HF). Similarly, using Langchain does not involve any registration requirements. Various model formats, such as GGUF and GGML, are employed for storing models for inference and can be found on HF. It is crucial to consider these formats when attempting to load and run a model locally. For instance, consider TheBloke's Llama-2-7B-Chat-GGUF model, which is a relatively compact 7-billion-parameter model suitable for execution on a modern CPU/GPU. To run the model, we can use Llama.cpp from Langchain: def llamacpp(): from langchain_community.llms import LlamaCpp from langchain.prompts import PromptTemplate from langchain.chains import LLMChain llm = LlamaCpp( model_path="models/Llama-2-7B-Chat-GGUF/llama-2-7b-chat.Q4_0.gguf", n_gpu_layers=40, n_batch=512, verbose=True, ) template = """Question: {question} Answer: Let's work this out in a step by step way to be sure we have the right answer.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "Who is Bjarne Stroustrup and how is he related to programming?" print(llm_chain.run(question)) And get output from the LLM: Bjarne Stroustrup is a Danish computer scientist who created C++. - He was born in Aarhus, Denmark on August 5, 1950 and earned his PhD from Cambridge University in 1983. - In 1979 he began developing the programming language C++, which was initially called "C with Classes". - C++ was first released in 1983 and has since become one of the most popular programming languages in use today. Bjarne Stroustrup is known for his work on the C programming language and its extension to C++. - He wrote The C Programming Language, a book that helped establish C as a widely used language. - He also wrote The Design and Evolution of C++, a detailed explanation of how he created C++ and why he made certain design choices. ... In this instance, I cloned TheBloke's model repository from HF and positioned it in a directory named models/. The final path for the model became models/Llama-2-7B-Chat-GGUF/llama-2-7b-chat.Q4_0.gguf: # Make sure you have git-lfs installed (https://git-lfs.com) git lfs install git clone https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF Although the model can be run on a CPU, this was locally run on my Windows PC equipped with an RTX 4070 card with good performance during inference.
3
4
77,753,308
2024-1-3
https://stackoverflow.com/questions/77753308/is-this-tuple-syntax-inside-a-python-list-comprehension-if-not-what-is-it
I am having some trouble understanding parts of syntax. Particularly when parentheses () are required for tuples. For instance, this piece of code below: c = {'a':10,'b':1,'c':22,'d':10} tup = a,b = 4,5 print(a) print(b) print(tup) newlist = [(x,y) for y,x in c.items()] print(newlist) The output for this code is: 4 5 (4, 5) [(10, 'a'), (1, 'b'), (22, 'c'), (10, 'd')] When trying to take the parentheses out of x, y in the list comprehension statement, I get a traceback. However, every other tuple in this code does not require parenthesis. What am I missing? Why is it that Python understands a,b to be a tuple but not x,y when it is in a list comprehension statement? It seems to me that Python is inconsistent with tuple syntax. I tried taking the parentheses out and putting them back to understand how the syntax works.
Parentheses are needed to give priority to operators. Consider the two following lines of code, which differ only by the parentheses: tup_a = 3, 4 + 10, 20 tup_b = (3, 4) + (10, 20) The first line defines a tuple tup_a of 3 elements, (3, 14, 20). The second line defines two tuples (3, 4) and (10, 20), then concatenates them into a tuple of four elements, (3, 4, 10, 20). Wrong parentheses can easily result in an error: tup_c = 3, 4 * 10, 20 tup_d = (3, 4) * (10, 20) The first line defines a tuple of 3 elements, (3, 40, 20). The second line results in a TypeError because it defines two tuples (3, 4) and (10, 20) and then attempts to multiply them, but the multiplication operator * doesn't know what to do with tuples. You encountered a similar issue. Consider the following lines of codes: x = 42 c = {'a': 10, 'b': 1, 'c': 22, 'd': 10} l_a = [(x, y) for y,x in c.items()] # [(10, 'a'), (1, 'b'), (22, 'c'), (10, 'd')] l_b = [x, (y for y,x in c.items())] # [42, <generator object>] l_c = [x, y for y,x in c.items()] # SyntaxError: invalid syntax This code correctly defines two lists l_a and l_b, but then it raises a SyntaxError when trying to define l_c. The designers of python decided that the syntax for l_c was ambiguous, and you should add parentheses to explicitly specify whether you meant the same thing as l_a or as l_b. Note that the line of code for l_b would raise a NameError: name 'x' is not defined if I hadn't added x = 42 at the beginning, since the role of x is not the same in l_b as it is in l_a.
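To make the answer's last point concrete, a minimal sketch of the two parenthesized forms that resolve the ambiguous l_c (names follow the answer's example):

c = {'a': 10, 'b': 1, 'c': 22, 'd': 10}
x = 42
as_l_a = [(x, y) for y, x in c.items()]    # one tuple per dict item, same as l_a
as_l_b = [x, (y for y, x in c.items())]    # 42 plus a generator object, same as l_b
print(as_l_a)
print(as_l_b)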
3
9
77,753,412
2024-1-3
https://stackoverflow.com/questions/77753412/get-the-standard-deviation-of-a-row-ignoring-the-min-and-max-values
Given the following data frame: a b c d e sd -100 2 3 60 4 1 7 5 -50 9 130 2 How would I calculate the standard deviation sd column which excludes the minimum and maximum values from each row? The actual data frame is a few million rows long so something vectorised would be great! To replicate: df = pd.DataFrame( {"a": [-100, 7], "b": [2, 5], "c": [3, -50], "d": [60, 9], "e": [4, 130]} )
Excluding the first min/max An approach that should be fast would be to use numpy to sort the values per row, exclude the first and last and compute the std: df['sd'] = np.sort(df, axis=1)[:, 1:-1].std(axis=1, ddof=1) Handling duplicate min/max (excluding all) If you can have several times the same min/max and want to exclude all, then you could compute a mask: m1 = df.ne(df.min(axis=1), axis=0) m2 = df.ne(df.max(axis=1), axis=0) df['sd'] = df.where(m1&m2).std(axis=1) Output: a b c d e sd 0 -100 2 3 60 4 1.0 1 7 5 -50 9 130 2.0
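A quick runnable reproduction of the first approach with the question's data (the resulting sd values match the output shown above):

import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"a": [-100, 7], "b": [2, 5], "c": [3, -50], "d": [60, 9], "e": [4, 130]}
)
# Sort each row, drop the first (min) and last (max) value, then take the sample std
df['sd'] = np.sort(df.to_numpy(), axis=1)[:, 1:-1].std(axis=1, ddof=1)
print(df)   # sd -> 1.0 and 2.0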
2
5
77,752,740
2024-1-3
https://stackoverflow.com/questions/77752740/pandas-use-assign-transform-in-pipe-after-a-merge
I have two dfs solar_part = pd.DataFrame( {'pool': 1, 'orig': 635.1}, index = [0] ) solar_aod = pd.DataFrame( {'pool': [1,1,1,1], 'MoP': [1,2,3,4], 'prin': [113.1, 115.3, 456.6, 234.1]} ) Which I then merge together via pipeline and try and transform/assign a new variable solar_p = ( solar_aod .merge(solar_part, on = ['pool'], how = 'left') .assign(remn = ['prin'] / ['orig']) ) The assign (have tried transform as well) gives the error of TypeError: unsupported operand type(s) for /: 'list' and 'list' which I'm guessing is caused by the bracketing. Trying only quotes gives the same same error but with str instead of list. Not including an assign/transform function and then doing it "long hand" via solar_p.prin / solar_p.orig * 100 also works but I have several more equations to include, so I'd like it as concise as possible. How to do this tranform/assign after a merge in a pipeline?
You can use eval to create the additional column, or assign; the difference is that assign needs a lambda so the column references are resolved against the merged (intermediate) dataframe rather than the original one: solar_p = ( solar_aod .merge(solar_part, on='pool', how='left') .eval('remn = prin / orig') ) Output: >>> solar_p pool MoP prin orig remn 0 1 1 113.1 635.1 0.178082 1 1 2 115.3 635.1 0.181546 2 1 3 456.6 635.1 0.718942 3 1 4 234.1 635.1 0.368603
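For completeness, the assign variant mentioned above, using the solar_aod and solar_part frames from the question and a lambda so that prin and orig are read from the just-merged frame (this mirrors the eval call and should give the same remn column):

solar_p = (
    solar_aod
    .merge(solar_part, on='pool', how='left')
    .assign(remn=lambda d: d['prin'] / d['orig'])
)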
3
4
77,751,520
2024-1-3
https://stackoverflow.com/questions/77751520/performing-addition-and-subtraction-operations-within-the-columns-of-the-same-py
I have a data Dataframe like, data = { "A": [42, 38, 39,23], "B": [45, 30, 15,65], "C": [60, 50, 25,43], "D": [12, 70, 35,76,], "E": [87, 90, 45,43], "F": [40, 48, 55,76], "G": [58, 42, 85,10], } df = pd.DataFrame(data) print(df) A B C D E F G 0 42 45 60 12 87 40 58 1 38 30 50 70 90 48 42 2 39 15 25 35 45 55 85 3 23 65 43 76 43 76 10 Here, I need to subtract all the values of columns C to G from column B, then add column A. Like df['C']=df['C']-df['B']+df['A'] df['D']=df['D']-df['B']+df['A'] df['E']=df['E']-df['B']+df['A'] How it is possible in simple/single commands?
You can use a single eval with a multiline expression: df = df.eval('''C=C-B+A D=D-B+A E=E-B+A ''') Or, for this particular operation, since all are +A-B: df[['C', 'D', 'E']] = df[['C', 'D', 'E']].add(df['A'].sub(df['B']), axis=0) Output: A B C D E F G 0 42 45 57 9 84 40 58 1 38 30 58 78 98 48 42 2 39 15 49 59 69 55 85 3 23 65 1 34 1 76 10
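If, as the question states, the adjustment should really cover every column from C through G (the answer above follows the C/D/E example), a hedged extension of the same vectorised form:

cols = list('CDEFG')
df[cols] = df[cols].add(df['A'].sub(df['B']), axis=0)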
4
4
77,750,484
2024-1-3
https://stackoverflow.com/questions/77750484/why-is-pd-date-range-sometimes-stop-inclusive-and-sometimes-exclusive-based-on
import pandas as pd print(pd.date_range(start='1999-08-01', end='2000-07-01', freq='D')) print(pd.period_range(start='1999-08-01', end='2000-07-01', freq='D')) print(pd.date_range(start='1999-08', end='2000-07', freq='M')) #This doesn't include anything from 2000-07, ie is not stop-inclusive - length 11 print(pd.period_range(start='1999-08', end='2000-07', freq='M')) # Length 12 Of these, three are stop-inclusive as one might naively define, whereas the date_range with frequency "M" is not. Why is that?
This is because 2000-07 is understood as 2000-07-01, and M generates month-end dates (as opposed to MS, which generates month-start dates), so the July entry would be 2000-07-31 - which falls after the end point 2000-07-01 and is therefore excluded. If you want to include July, change the frequency from M to MS, but note that it also changes the day used: >>> pd.date_range(start='1999-08', end='2000-07', freq='M')[[0, -1]] DatetimeIndex(['1999-08-31', '2000-06-30'], dtype='datetime64[ns]', freq=None) >>> pd.date_range(start='1999-08', end='2000-07', freq='MS')[[0, -1]] DatetimeIndex(['1999-08-01', '2000-07-01'], dtype='datetime64[ns]', freq=None)
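A short check of the resulting lengths, matching the 11 vs 12 counts mentioned in the question:

import pandas as pd

print(len(pd.date_range(start='1999-08', end='2000-07', freq='M')))   # 11 (stops at 2000-06-30)
print(len(pd.date_range(start='1999-08', end='2000-07', freq='MS')))  # 12 (includes 2000-07-01)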
2
1
77,750,066
2024-1-3
https://stackoverflow.com/questions/77750066/group-axis-labels-for-seaborn-box-plots
I want a grouped axis label for box-plots for example a bit like this bar chart where the x axis is hierarchical: I am struggling to work with groupby objects to extract the values for the box plot. I have found this heatmap example which references this stacked bar answer from @Stein but I can't get it to work for my box plots (I know I don't want the 'sum' of the groups but can't figure out how to get the values I want grouped correctly). In my real data the group sizes will be different, not all the same as in the example data. I don't want to use seaborn's 'hue' as a solution as I want all the boxes the same color. This is the closest I have got, thanks: import numpy as np import seaborn as sns import matplotlib.pyplot as plt from itertools import groupby def test_table(): data_table = pd.DataFrame({'Room':['Room A']*24 + ['Room B']*24, 'Shelf':(['Shelf 1']*12 + ['Shelf 2']*12)*2, 'Staple':['Milk','Water','Sugar','Honey','Wheat','Corn']*8, 'Quantity':np.random.randint(1, 20, 48), }) return data_table def add_line(ax, xpos, ypos): line = plt.Line2D([xpos, xpos], [ypos + .1, ypos], transform=ax.transAxes, color='black') line.set_clip_on(False) ax.add_line(line) def label_len(my_index,level): labels = my_index.get_level_values(level) return [(k, sum(1 for i in g)) for k,g in groupby(labels)] def label_group_bar_table(ax, df): ypos = -.1 scale = 1./df.index.size for level in range(df.index.nlevels)[::-1]: pos = 0 for label, rpos in label_len(df.index,level): lxpos = (pos + .5 * rpos)*scale ax.text(lxpos, ypos, label, ha='center', transform=ax.transAxes) add_line(ax, pos*scale, ypos) pos += rpos add_line(ax, pos*scale , ypos) ypos -= .1 df = test_table().groupby(['Room','Shelf','Staple']).sum() fig = plt.figure() fig = plt.figure(figsize = (15, 10)) ax = fig.add_subplot(111) sns.boxplot(x=df.Quantity, y=df.Quantity,data=df) #Below 3 lines remove default labels labels = ['' for item in ax.get_xticklabels()] ax.set_xticklabels(labels) ax.set_xlabel('') label_group_bar_table(ax, df) fig.subplots_adjust(bottom=.1*df.index.nlevels) plt.show() Which gives:
You can use: import numpy as np import seaborn as sns import matplotlib.pyplot as plt from itertools import groupby def test_table(): data_table = pd.DataFrame({'Room':['Room A']*24 + ['Room B']*24, 'Shelf':(['Shelf 1']*12 + ['Shelf 2']*12)*2, 'Staple':['Milk','Water','Sugar','Honey','Wheat','Corn']*8, 'Quantity':np.random.randint(1, 20, 48), }) return data_table def add_line(ax, xpos, ypos): line = plt.Line2D([xpos, xpos], [ypos + .1, ypos], transform=ax.transAxes, color='black') line.set_clip_on(False) ax.add_line(line) def label_len(my_index,level): labels = my_index.get_level_values(level) return [(k, sum(1 for i in g)) for k,g in groupby(labels)] # HERE: Replace all df.index occurrences with levels def label_group_bar_table(ax, levels): ypos = -.1 scale = 1./levels.size for level in range(levels.nlevels)[::-1]: pos = 0 for label, rpos in label_len(levels, level): lxpos = (pos + .5 * rpos)*scale ax.text(lxpos, ypos, label, ha='center', transform=ax.transAxes) add_line(ax, pos*scale, ypos) pos += rpos add_line(ax, pos*scale , ypos) ypos -= .1 # HERE: Don't aggregate your data df = test_table() # HERE: Create a unique group identifier df['ID'] = df.groupby(['Room', 'Shelf', 'Staple']).ngroup() # HERE: Create an ordered MultiIndex for label_group_bar_table levels = df.drop_duplicates('ID').sort_values('ID')[['Room', 'Shelf', 'Staple']] levels = pd.MultiIndex.from_frame(levels) fig = plt.figure(figsize = (15, 10)) ax = fig.add_subplot(111) # HERE: Set 'ID' as x-axis sns.boxplot(x='ID', y='Quantity', data=df) #Below 3 lines remove default labels labels = ['' for item in ax.get_xticklabels()] ax.set_xticklabels(labels) ax.set_xlabel('') ax.set_xticks([]) # HERE: Remove xticks label_group_bar_table(ax, levels) fig.subplots_adjust(bottom=.1*levels.nlevels) plt.show() To get:
2
1
77,749,311
2024-1-3
https://stackoverflow.com/questions/77749311/replace-multiple-matching-groups-with-modified-captured-gropus
I am reading text from a file that contains flags start and end. I want to replace everything between start and end with the same text except I want to remove any newlines in the matching group. I tried to do it as follows: import re start = '---' end = '===' text = '''\ Some text ---line 1 line 2 line 3=== More text ... Some more text ---line 4 line 5=== and even more text\ ''' modified = re.sub(pattern=rf'{start}(.+){end}', repl=re.sub(r'\n', ' ', r'\1'), string=text, flags=re.DOTALL) print(modified) This prints: Some text line 1 line 2 line 3=== More text ... Some more text ---line 4 line 5 and even more text Couple of issues with this, 1. it matches the biggest group (and not the smaller matching groups), and 2. it does not remove the newlines. I am expecting the output to be: Some text line 1 line 2 line 3 More text ... Some more text line 4 line 5 and even more text Any help will be appreciated. Thank you!
Use the non-greedy modifier (?) in the capturing group. Also, make the replacement a function (a lambda) that applies a simple str.replace to each match: import re start = "---" end = "===" text = """\ Some text ---line 1 line 2 line 3=== More text ... Some more text ---line 4 line 5=== and even more text\ """ modified = re.sub( pattern=rf"{start}(.+?){end}", repl=lambda g: g.group(1).replace("\n", " "), string=text, flags=re.DOTALL, ) print(modified) Prints: Some text line 1 line 2 line 3 More text ... Some more text line 4 line 5 and even more text
2
1
77,732,090
2023-12-29
https://stackoverflow.com/questions/77732090/polars-multiply-2-lazyframes-together-by-column
I have 2 polars LazyFrames: df1 = pl.DataFrame(data={ 'foo': np.random.uniform(0,127, size= n).astype(np.float64), 'bar': np.random.uniform(1e3,32767, size= n).astype(np.float64), 'baz': np.random.uniform(1e6,2147483, size= n).astype(np.float64) }).lazy() df2 = pl.DataFrame(data={ 'foo': np.random.uniform(0,127, size= n).astype(np.float64), 'bar': np.random.uniform(1e3,32767, size= n).astype(np.float64), 'baz': np.random.uniform(1e6,2147483, size= n).astype(np.float64) }).lazy() I would like to multiply each column in df1 with its respective column in df2. If I convert these to non-lazy DataFrames I can achieve this: df1.collect() * df2.collect() foo bar baz f64 f64 f64 3831.295563 6.4637e6 3.3669e12 164.194271 2.9691e8 2.2696e12 3655.918761 1.9444e7 2.3625e12 7191.48868 3.7044e7 3.1687e12 9559.505277 2.6864e8 2.5426e12 However, if I try to perform the same expression on the LazyFrames, I get an exception df1 * df2 TypeError: unsupported operand type(s) for *: 'LazyFrame' and 'LazyFrame' How can I perform column-wise multiplication across 2 LazyFrames?
you'll need to join ( df1.with_row_index() .join(df2.with_row_index(), on="index") .select(pl.col(col) * pl.col(f"{col}_right") for col in df1.columns) .collect() ) shape: (10, 3) ┌─────────────┬──────────┬───────────┐ │ foo ┆ bar ┆ baz │ │ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ f64 │ ╞═════════════╪══════════╪═══════════╡ │ 6623.602754 ┆ 2.7173e8 ┆ 3.7654e12 │ │ 2588.499522 ┆ 7.4295e8 ┆ 3.0266e12 │ │ 933.474643 ┆ 3.7090e8 ┆ 4.2794e12 │ │ 7061.625136 ┆ 2.2365e8 ┆ 2.7040e12 │ │ … ┆ … ┆ … │ │ 2717.969236 ┆ 4.9398e7 ┆ 3.0930e12 │ │ 785.760153 ┆ 1.6305e8 ┆ 1.8954e12 │ │ 9534.366291 ┆ 7.3153e8 ┆ 1.9056e12 │ │ 1916.452503 ┆ 1.4976e8 ┆ 3.2704e12 │ └─────────────┴──────────┴───────────┘
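A self-contained version of the join approach for reference (the data here is made up, since the question omits n; with_row_index is used as in the answer - on older polars versions the method is called with_row_count):

import numpy as np
import polars as pl

n = 5  # hypothetical row count, not specified in the question
rng = np.random.default_rng(0)

def make_lazy() -> pl.LazyFrame:
    return pl.DataFrame({
        'foo': rng.uniform(0, 127, size=n),
        'bar': rng.uniform(1e3, 32767, size=n),
        'baz': rng.uniform(1e6, 2147483, size=n),
    }).lazy()

df1, df2 = make_lazy(), make_lazy()

result = (
    df1.with_row_index()
    .join(df2.with_row_index(), on="index")
    .select(pl.col(col) * pl.col(f"{col}_right") for col in ["foo", "bar", "baz"])
    .collect()
)
print(result)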
2
2
77,738,762
2023-12-31
https://stackoverflow.com/questions/77738762/django-multiselectfield-error-super-object-has-no-attribute-get-flatchoices
I tried to use Django Multiselectfield library but i got this error: I already installed it properly by command: pip install django-multiselectfield I also added in my settings.py INSTALLED_APPS = [ ..... 'multiselectfield', ] My models.py: from multiselectfield import MultiSelectField from django.db import models CATEGORY = [ ('CP', 'Computer Programming'), ('LG', 'Language'), ('MA', 'Mathematics'), ('BM', 'Business Management'), ('GK', 'General Knowledge') ] class Quiz(models.Model): ... title = models.CharField(max_length=256) category = MultiSelectField(choices=CATEGORY, max_choices=2, max_length=5) ... def save(self, *args, **kwargs): if not self.category: self.category = ['GK'] super(Quiz, self).save(*args, **kwargs) def __str__(self): return self.title The error: 'super' object has no attribute '_get_flatchoices' However this is located in the Multiselectfield library code. Because of this error, even my data cannot be saved. I tried to find out what am I missing, but I have no idea. Any idea what's wrong with my code? Appreciate your help ^_^
The current version of Multiselectfield is incompatible with Django 5.0. There is no _get_flatchoices in django's CharField anymore, instead Django provides flatchoices directly. Therefore, the following code in ../multiselectfield/db/fields.py should be removed. def _get_flatchoices(self): flat_choices = super(MultiSelectField, self)._get_flatchoices() class MSFFlatchoices(list): # Used to trick django.contrib.admin.utils.display_for_field into # not treating the list of values as a dictionary key (which errors # out) def __bool__(self): return False __nonzero__ = __bool__ return MSFFlatchoices(flat_choices) flatchoices = property(_get_flatchoices) Thanks for aliceni81 For further information, you can check on https://github.com/goinnn/django-multiselectfield/issues/141 Edit: Now they already fixed the issue, you can install the package by using command: pip install https://github.com/goinnn/django-multiselectfield/archive/master.zip
1
6
77,737,323
2023-12-30
https://stackoverflow.com/questions/77737323/why-does-parallel-reading-of-an-hdf5-dataset-max-out-at-100-cpu-but-only-for-l
I'm using Cython to read a single Dataset from an HDF5 file using 64 threads. Each thread calculates a start index start and chunk size size, and reads from that chunk into a common buffer buf, which is a memoryview of a NumPy array. Crucially, each thread opens its own copy of the file and Dataset. Here's the code: def read_hdf5_dataset(const char* file_name, const char* dataset_name, long[::1] buf, int num_threads): cdef hsize_t base_size = buf.shape[0] // num_threads cdef hsize_t start, size cdef hid_t file_id, dataset_id, mem_space_id, file_space_id cdef int thread for thread in prange(num_threads, nogil=True): start = base_size * thread size = base_size + buf.shape[0] % num_threads \ if thread == num_threads - 1 else base_size file_id = H5Fopen(file_name, H5F_ACC_RDONLY, H5P_DEFAULT) dataset_id = H5Dopen2(file_id, dataset_name, H5F_ACC_RDONLY) mem_space_id = H5Screate_simple(1, &size, NULL) file_space_id = H5Dget_space(dataset_id) H5Sselect_hyperslab(file_space_id, H5S_SELECT_SET, &start, NULL, &size, NULL) H5Dread(dataset_id, H5Dget_type(dataset_id), mem_space_id, file_space_id, H5P_DEFAULT, <void*> &buf[start]) H5Sclose(file_space_id) H5Sclose(mem_space_id) H5Dclose(dataset_id) H5Fclose(file_id) Although it reads the Dataset correctly, the CPU utilization maxes out at exactly 100% on a float32 Dataset of ~10 billion entries, even though it uses all 64 CPUs (albeit only at ~20-30% utilization due to the I/O bottleneck) on a float32 Dataset of ~100 million entries. I've tried this on two different computing clusters with the same result. Maybe it has something to do with the size of the Dataset being greater than INT32_MAX? What's stopping this code from running in parallel on extremely large datasets, and how can I fix it? Any other suggestions to improve the code's clarity or efficiency would also be appreciated.
Something is happening that is either preventing cython's prange from launching multiple threads, or is preventing the threads from getting anywhere once launched. It may or may not have anything to do directly with hdf5. Here's some possible causes: Are you pre-allocating a buf large enough to hold the entire dataset before running your function? If so, that means your program is allocating 40+ gigabytes of memory (4 bytes per float32). How much memory do the nodes you're running on have? Are you the only user? Memory starvation could easily cause the kind of performance issues you describe. Both cython and hdf5 require certain compilation flags in order to correctly support parallelism. Between your small and large dataset runs did you modify or recompile your code at all? One easy way to explain why your program is using 100% of a single cpu is that it's getting hung somewhere before your read_hdf5_dataset function is ever called. What other code in your program runs first, and could it be causing the problems you see? Part of the problem here is that it is going to be very hard for any users on this site to reproduce your exact issue, since we don't have most of your program and I at least don't have any 40 gig hdf5 files lying around (back in my grad school days tho, terabytes). If one of my above suggestions doesn't help, I think you have two ways forward: Try to come up with a simplified repro of your issue, then edit your question to post it here. Using a combination of debugger and profiler (and print statements, if you're feeling lame), try to track down the exact line your program is getting hung up on when single cpu utilization spins up to 100%. That alone should tell you a whole lot more about what's going on. In particular it should it very clear whether anything is getting locked down by a mutex, as @Homer512 suggested in his comments.
3
1
77,747,229
2024-1-2
https://stackoverflow.com/questions/77747229/log-a-dataframe-using-logging-and-pandas
I am using pandas to operate on dataframes and logging to log intermediate results, as well as warnings and errors, into a separate log file. I need to also print into the same log file a few intermediate dataframes. Specifically, I want to: Print dataframes into the same log file as the rest of the logging messages (to ensure easier debugging and avoid writing many intermediate files, as would be the case with calls to to_csv with a file destination), Control logging verbosity (as is commonly done) using logging levels, such as DEBUG or INFO, sharing this with the verbosity of other logging messages (including those that are not related to dataframes). Control logging verbosity (on a finer level) using a separate variable that determines how many rows of the dataframe to print. Pretty-print 1 row per line, with aligned columns, and with each row preceded by the typical logging metadata, such as 240102 10:58:20 INFO:. The best I could come up is the code below, which is a bit too verbose. Is there a simpler and more pythonic way to log a dataframe slice? Note: Please include an example of usage. Example: import io import logging import pandas as pd # Print into log this many lines of several intermediate dataframes, # set to 20 or so: MAX_NUM_DF_LOG_LINES = 4 logging.basicConfig( datefmt = '%y%m%d %H:%M:%S', format = '%(asctime)s %(levelname)s: %(message)s') logger = logging.getLogger(__name__) # Or logging.DEBUG, etc: logger.setLevel(level = logging.INFO) # Example of a simple log message: logger.info('Reading input.') TESTDATA=""" enzyme regions N length AaaI all 10 238045 AaaI all 20 170393 AaaI captured 10 292735 AaaI captured 20 229824 AagI all 10 88337 AagI all 20 19144 AagI captured 10 34463 AagI captured 20 19220 """ df = pd.read_csv(io.StringIO(TESTDATA), sep='\s+') # ...some code.... # Example of a log message with a chunk of a dataframe, here, using # `head` (but this can be another method that slices a dataframe): logger.debug('less important intermediate results: df:') for line in df.head(MAX_NUM_DF_LOG_LINES).to_string().splitlines(): logger.debug(line) # ...more code.... logger.info('more important intermediate results: df:') for line in df.head(MAX_NUM_DF_LOG_LINES).to_string().splitlines(): logger.info(line) # ...more code.... Prints: 240102 10:58:20 INFO: Reading input. 240102 10:58:20 INFO: more important intermediate results: df: 240102 10:58:20 INFO: enzyme regions N length 240102 10:58:20 INFO: 0 AaaI all 10 238045 240102 10:58:20 INFO: 1 AaaI all 20 170393 240102 10:58:20 INFO: 2 AaaI captured 10 292735 240102 10:58:20 INFO: 3 AaaI captured 20 229824 Related: None of this accomplishes what I try to do, but it is getting closer: How to print multiline logs using python logging module? See this comment, which is neat, but not very pythonic, as it calls print from inside a list comprehension and then discards the result: "Do note that the latter only works on py2 due to map being lazy; you can do [logger.info(line) for line in 'line 1\nline 2\nline 3'.splitlines()] on py3. – Kyuuhachi, Jun 22, 2021 at 16:30". Also, the accepted answer by Qeek has issues: (a) it lacks the functionality to dynamically define the max number of dataframe rows to write into the log (define this once per script, not every call to logger); and (b) it has no examples of usage, so it is unclear. 
Write or log print output of pandas Dataframe - this prints something like this, that is it is missing the timestamp + logging level metadata at the beginning of each line: 240102 12:27:19 INFO: dataframe head - enzyme regions N length 0 AaaI all 10 238045 1 AaaI all 20 170393 2 AaaI captured 10 292735 ... How to log a data-frame to an output file - same as the previous answer.
What you are looking for is a custom Formatter. Using it will be more Pythonic. It provides better flexibility and code readability, as it is part of Python logging system. class DataFrameFormatter(logging.Formatter): def __init__(self, fmt: str, n_rows: int = 4) -> None: self.n_rows = n_rows super().__init__(fmt) def format(self, record: logging.LogRecord) -> str: if isinstance(record.msg, pd.DataFrame): s = '' if hasattr(record, 'n_rows'): self.n_rows = record.n_rows lines = record.msg.head(self.n_rows).to_string().splitlines() if hasattr(record, 'header'): record.msg = record.header.strip() s += super().format(record) + '\n' for line in lines: record.msg = line s += super().format(record) + '\n' return s.strip() return super().format(record) formatter = DataFrameFormatter('%(asctime)s %(levelname)-8s %(message)s', n_rows=4) logger = logging.getLogger() logger.setLevel(logging.DEBUG) ch = logging.StreamHandler() ch.setFormatter(formatter) logger.addHandler(ch) df = pd.DataFrame({'a' : [1,2,3,4,5], 'bb': [10, 20, 30, 40 ,50]}) logger.info(df, extra={'header': "this is a header line"}) logger.debug('foo') logger.info(df, extra={'n_rows': 2}) This will produce following log: 2024-01-09 15:09:53,384 INFO this is a header line 2024-01-09 15:09:53,384 INFO a bb 2024-01-09 15:09:53,384 INFO 0 1 10 2024-01-09 15:09:53,384 INFO 1 2 20 2024-01-09 15:09:53,384 INFO 2 3 30 2024-01-09 15:09:53,384 INFO 3 4 40 2024-01-09 15:09:53,385 DEBUG foo 2024-01-09 15:09:53,385 INFO a bb 2024-01-09 15:09:53,385 INFO 0 1 10 2024-01-09 15:09:53,385 INFO 1 2 20 This way you can easily control header and n_rows through the extra entry. And if you will not provide them, the default values will be used.
2
6
77,725,407
2023-12-28
https://stackoverflow.com/questions/77725407/how-to-forecast-out-of-sample-value-in-sktime-squaringresiduals
I am trying to forecast out-of-sample value using sktime SquaringResiduals. Here is the code which working well for in-sample prediction. from sktime.forecasting.arch import StatsForecastGARCH from sktime.forecasting.squaring_residuals import SquaringResiduals def hybridModel(p,q,model): out_sample_date = FH(np.arange(12), is_relative=True) in_sample_date = FH(df.index, is_relative=False) var_fc = StatsForecastGARCH(p=p,q=q) sqr = SquaringResiduals(forecaster=model, residual_forecaster=var_fc,initial_window=int(len(df))) sqr = sqr.fit(df, fh=in_sample_date) # y_pred2 = sqr.predict(out_sample_date) #out sample prediction y_pred = sqr.predict(in_sample_date) #in sample prediction fig,ax=plot_series(df, y_pred, labels=["passenger", "y_pred"]) return sqr,fig,y_pred,error_matrix(df,y_pred) sqr,fig1,y_pred1,matrix1= hybridModel(1,1,forecaster) Now I try to forecast out-sample. y_pred2 = sqr.predict(out_sample_date) #out sample prediction > ValueError: A different forecasting horizon `fh` has been provided > from the one seen already in `fit`, in this instance of > SquaringResiduals. If you want to change the forecasting horizon, > please re-fit the forecaster. This is because the fitting of the > forecaster SquaringResiduals depends on `fh`. So I change:sqr = sqr.fit(df, fh=in_sample_date) to sqr = sqr.fit(df) > ValueError: The forecasting horizon `fh` must be passed to `fit` of > SquaringResiduals, but none was found. This is because fitting of the > forecaster SquaringResiduals depends on `fh`. Then I change: sqr = sqr.fit(df, fh=in_sample_date) to sqr = sqr.fit(df, fh=out_sample_date) > ValueError: The `window_length` and the forecasting horizon are > incompatible with the length of `y`. Found `window_length`=84, > `max(fh)`=11, but len(y)=84. It is required that the window length > plus maximum forecast horizon is smaller than the length of the time > series `y` itself. Then I checked predict function for other model, and predict() function working well for both in-sample and out-sample prediction for non-hybrid model: from sktime.forecasting.tbats import TBATS from sktime.forecasting.base import ForecastingHorizon as FH import warnings import numpy as np import pandas as pd import mlflow from sktime.utils import mlflow_sktime as mf from sktime.utils.plotting import plot_series warnings.filterwarnings("ignore") out_sample_date = FH(np.arange(12), is_relative=True) in_sample_date = FH(df.index, is_relative=False) forecaster = TBATS( use_box_cox=True, use_trend=True, use_damped_trend=True, sp=12, use_arma_errors=True, n_jobs=1) forecaster.fit(df) y_pred = forecaster.predict(in_sample_date) y_pred2 = forecaster.predict(out_sample_date) fig,ax = plot_series(df,y_pred,y_pred2,labels=['passenger','prediction','out_sample_pred']) Why out-sample / in-sample prediction function does not work together for SquaringResiduals and how can we predict out-sample / in-sample value? sqr = SquaringResiduals(forecaster=model, residual_forecaster=var_fc,initial_window=int(len(df))) sqr = sqr.fit(df, fh=in_sample_date) y_pred2 = sqr.predict(out_sample_date) #out sample prediction Thank you so much for your attention.
The documentation explains that the forecaster is trained on y(t_1),...y(t_i) where i = initial_window, ... N-steps_ahead, and that this is used to calculate the residual r(t_i+steps_ahead) := y(t_i+steps_ahead) - ŷ(t_i+steps_ahead) for each value of i. The initial_window must be less than or equal to N-steps_ahead to make any forecasts for a positive number of steps_ahead. I believe the reason for this is if we consider initial_window = N-s where s is greater than or equal to 0, and steps_ahead=a, then in the first iteration of the loop over i, we get: r(t_i+steps_ahead) := y(t_i+steps_ahead) - ŷ(t_i+steps_ahead) r(t_(N-s+a)) := y(t_(N-s+a)) - ŷ(t_(N-s+a)) Notice that y(t_(N-s+a)) is not known unless N-s+a <= N, or equivalently a < s because we don't know the true value of future timestamps. This means when you use SquaringResiduals, the maximum possible initial window you can supply is max_initial_window = len(df)-max(out_sample_date). Notice that we are using max(out_sample_date) and not len(out_sample_date) because np.arange(12) only asks for forecasts of steps_ahead = 0, ... 11 or a maximum forecast horizon of 11. Below is a fully reproducible example: import pandas as pd import numpy as np import matplotlib.pyplot as plt from sktime.forecasting.arch import StatsForecastGARCH from sktime.forecasting.squaring_residuals import SquaringResiduals from sktime.forecasting.tbats import TBATS from sktime.forecasting.base import ForecastingHorizon as FH import warnings import mlflow from sktime.utils import mlflow_sktime as mf from sktime.utils.plotting import plot_series warnings.filterwarnings("ignore") ## make up some random data np.random.seed(42) dates = pd.date_range(start='2012-01-01',end='2019-01-01',freq='1M') passengers = 40 + 10*np.sin(np.linspace(-np.pi, np.pi, len(dates))) + np.random.normal(loc=0, scale=2, size=len(dates)) df = pd.DataFrame(data={"passenger": passengers}, index=pd.PeriodIndex(data=dates, freq='M')) def hybridModel(p,q,model): out_sample_date = FH(np.arange(12), is_relative=True) in_sample_date = FH(df.index, is_relative=False) max_initial_window = len(df)-max(out_sample_date) ## <-- max initial window cannot be any larger! var_fc = StatsForecastGARCH(p=p,q=q) sqr = SquaringResiduals(forecaster=model, residual_forecaster=var_fc, initial_window=max_initial_window) sqr = sqr.fit(df, fh=in_sample_date) y_pred = sqr.predict(in_sample_date) #in sample prediction sqr = sqr.fit(df, fh=out_sample_date) y_pred2 = sqr.predict(out_sample_date) #out sample prediction fig,ax=plot_series(df, y_pred, y_pred2, labels=["passenger", "y_pred", "y_pred2"]) plt.plot() return sqr,fig,y_pred forecaster = TBATS( use_box_cox=True, use_trend=True, use_damped_trend=True, sp=12, use_arma_errors=True, n_jobs=1) forecaster.fit(df) sqr,fig1,y_pred1= hybridModel(1,1,forecaster) fig1.show()
3
1
77,740,462
2023-12-31
https://stackoverflow.com/questions/77740462/deconvolution-of-distributions-defined-by-histograms
I'm reading an excellent paper (here) in which the authors begin with a dataset of effect estimates from randomized control trials. In theory, these numbers are a convolution between an unknown distribution and a standard normal. That is to say, each number can be thought of as a draw from the unknown distribution plus some white noise. They claim they can recover the unknown distribution by deconvolving the data with a standard gaussian. I'd like to do this myself in a toy example, but am struggling to obtain sensible results. In the code below I: Draw 1e5 random numbers from a gamma distribution To each draw, I add white noise I compute a histogram of these new draws, and let the "signal" be the height of the histogram at the midpoint of the bin edges (as defined by numpy and matplotlib) My code to generate the data is as follows import numpy as np from scipy.stats import norm, gamma from scipy.signal import convolve, deconvolve import matplotlib.pyplot as plt # First, create the original signal N = int(1e5) X = gamma(a=4, scale=1/2).rvs(N) # Corrupt with gaussian noise E = norm().rvs(N) Y = X + E height, edge, ax = plt.hist(X, edgecolor='white', bins = 50); mid = (edge[:-1] + edge[1:])/2 Now that I have the signal, I'd like to deconvolve it with a gaussian. The result should hopefully be a gamma distribution (or something close to the gamma density I've used above). However, I'm not sure how to set up the "impulse" in the scipy.signal.deconvolution function call. What length should this be, and at what points should I evaluate the gaussian density?
Yes, when independent random variables X and E are added, the pdf of the sum (X + E) is the convolution of X's pdf and E's pdf. So, if we know the distribution of E and have a histogram estimating the pdf of (X + E), it is possible to obtain a non-parametric estimate of the distribution of X by deconvolution. An important practical limitation to be aware of is that deconvolution is often ill posed, meaning a small change in the input may result in a large change in the deconvolved result. That is a problem here: the histogram is an imperfect estimate of exact pdf of (X + E), and exact deconvolution like scipy.signal.deconvolution will unreasonably amplify those imperfections. To deal with this, it is necessary to use a regularized deconvolution method. There are many ways to do this. A simple classical method is Wiener deconvolution. Here are plots showing results with Wiener deconvolution (code listing below): The top plot shows our initial estimate of (X + E) from the histogram. The following three plots show the result of Wiener deconvolution with different regularization strengths: (2nd plot, regularization = 1e-5) If the regularization is too weak, nasty oscillations develop in the output. (3rd plot, regularization = 0.001) It seems that around 0.001 is about right for this problem. (4th plot, regularization = 0.1) If the regularization is too strong, then the deconvolution doesn't do much. A defect is that the Wiener deconvolution creates negative values, but of course, a pdf should always be nonnegative. A better (but more complicated) method would be to minimize the mean square error objective in the Wiener deconvolution derivation subject to a nonnegativity constraint. And there are other possible techniques beyond this to finesse the regularization for better results. Code: # Copyright 2024 Google LLC. # SPDX-License-Identifier: Apache-2.0 import numpy as np from scipy.stats import norm, gamma desired_dist = gamma(a=4, scale=1/2) # Underlying desired dist. noise_dist = norm() # Distribution of additive noise. # Generate the observed samples. N = int(1e5) samples = desired_dist.rvs(N) + noise_dist.rvs(N) # Histogram to estimate (desired + noise) distribution. This is # the "signal" to be unblurred. bins = 64 hist, bin_edges = np.histogram(samples, bins=bins, range=[-8, 16]) dx = bin_edges[1] - bin_edges[0] bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2 signal = hist / (N * dx) # Sample noise pdf to get the "blur kernel." kernel = noise_dist.pdf((np.arange(bins) - bins//2) * dx) kernel /= kernel.sum() def wiener_deconv(signal, kernel, regularization=0.001): signal_f = np.fft.rfft(signal) kernel_f = np.fft.rfft(np.fft.fftshift(kernel)) deconv_f = (signal_f * np.conj(kernel_f)) / ( np.abs(kernel_f)**2 + regularization) return np.fft.irfft(deconv_f, len(signal)) deconv = wiener_deconv(signal, kernel) However, I'm not sure how to set up the "impulse" in the scipy.signal.deconvolution function call. What length should this be, and at what points should I evaluate the gaussian density? The key bit for this is you want to sample the pdf of E over a sequence of points {..., -2*dx, -dx, 0, +dx, +2*dx, ...} where dx is the histogram bin width, and over a range wide enough that most of the area is captured.
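To reproduce the comparison plots described above, a small hedged addition that reuses the variables from the code listing (signal, deconv, bin_centers, desired_dist):

import matplotlib.pyplot as plt

plt.plot(bin_centers, signal, label="histogram estimate of X + E")
plt.plot(bin_centers, deconv, label="Wiener deconvolution (estimate of X)")
plt.plot(bin_centers, desired_dist.pdf(bin_centers), "--", label="true gamma pdf")
plt.legend()
plt.show()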
2
2
77,733,446
2023-12-29
https://stackoverflow.com/questions/77733446/polars-api-registering-and-type-checkers
I'm consistently getting type errors from either mypy or pyright when using polars namespace registration functions. Is there a way I can avoid type-checker errors other than hinting # type: ignore[attr-defined] every time I'm using a function from my custom namespace? Example follows from the official documentation https://docs.pola.rs/py-polars/html/reference/api.html : file checker.py: import polars as pl @pl.api.register_expr_namespace("greetings") class Greetings: def __init__(self, expr: pl.Expr): self._expr = expr def hello(self) -> pl.Expr: return (pl.lit("Hello ") + self._expr).alias("hi there") def goodbye(self) -> pl.Expr: return (pl.lit("Sayōnara ") + self._expr).alias("bye") print(pl.DataFrame(data=["world", "world!", "world!!"]).select( [ pl.all().greetings.hello(), # type: ignore[attr-defined] pl.all().greetings.goodbye(), ] )) % mypy checker.py checker.py:19: error: "Expr" has no attribute "greetings" [attr-defined] Found 1 error in 1 file (checked 1 source file % mypy --version mypy 1.8.0 (compiled: yes) % pyright checker.py /path/to/checker.py /apth/to/checker.py:19:18 - error: Cannot access member "greetings" for type "Expr" Member "greetings" is unknown (reportGeneralTypeIssues) 1 error, 0 warnings, 0 informations % pyright --version pyright 1.1.343
Polars API registration is non-compliant with Python's typing system. First and foremost, Polars can help by providing a dynamic attribute access typing hint. Without them doing this, only type-checkers with modding or plugin support can allow you to get around this. Polars classes with dynamically-registerable namespaces need to define __getattr__ Type checkers (both mypy and Pyright) require error suppression because polars.expr.expr.Expr does not have the dynamic attribute accessor defined. The presence of something like this: class Expr: if typing.TYPE_CHECKING: def __getattr__(self, attr_name: str, /) -> typing.Any: ... is sufficient to silence type checkers about dynamic attribute access (see mypy Playground, Pyright Playground, also typing documentation >> Type System Reference >> Type Stubs >> Attribute Access - this equally applies to inline types in source files, such as Polars' .py files). I would suggest filing an issue with the Polars devs to ask them to add a dynamic __getattr__ for typing purposes only. Pyright requires per-file or per-line controls For Pyright, these errors can't be suppressed except using a per-line # type: ignores, or bunch of file-level type controls like # pyright: reportUnknownMemberType=none, reportGeneralTypeIssues=none. Pyright's BDFL has previously expressed reluctance for plugin support, so it is unmoddable until someone writes a Typing PEP to propose a standard for somehow dynamically registering namespaces. mypy can offer a statically-typed solution in the form of a mypy plugin It'd be great if Polars supported dynamic namespace registration, at least in mypy, through a mypy plugin - I suggest pushing for this. You can use the following as an inspiration, which provides full static type checking of the registered namespaces. mypy static type checking result This is the kind of result you'd want to get: import polars as pl @pl.api.register_expr_namespace("greetings") class Greetings: def __init__(self, expr: pl.Expr): self._expr = expr def hello(self) -> pl.Expr: return (pl.lit("Hello ") + self._expr).alias("hi there") def goodbye(self) -> pl.Expr: return (pl.lit("Sayōnara ") + self._expr).alias("bye") >>> print( ... pl.DataFrame(data=["world", "world!", "world!!"]).select( ... [ ... pl.all().greetings.hello(), ... pl.all().greetings.goodbye(1), # mypy: Too many arguments for "goodbye" of "Greetings" [call-arg] ... pl.all().asdfjkl # mypy: `polars.expr.expr.Expr` object has no attribute `asdfjkl` [misc] ... ] ... ) ... ) ... 
Project structure project/ mypy.ini mypy_polars_plugin.py test.py Implementation Contents of mypy.ini [mypy] plugins = mypy_polars_plugin.py Contents of mypy_polars_plugin.py from __future__ import annotations import typing_extensions as t import mypy.nodes import mypy.plugin import mypy.plugins.common if t.TYPE_CHECKING: import collections.abc as cx import mypy.options import mypy.types STR___GETATTR___NAME: t.Final = "__getattr__" STR_POLARS_EXPR_MODULE_NAME: t.Final = "polars.expr.expr" STR_POLARS_EXPR_FULLNAME: t.Final = f"{STR_POLARS_EXPR_MODULE_NAME}.Expr" STR_POLARS_EXPR_REGISTER_EXPR_NAMESPACE_FULLNAME: t.Final = "polars.api.register_expr_namespace" def plugin(version: str) -> type[PolarsPlugin]: return PolarsPlugin class PolarsPlugin(mypy.plugin.Plugin): _polars_expr_namespace_name_to_type_dict: dict[str, mypy.types.Type] def __init__(self, options: mypy.options.Options) -> None: super().__init__(options) self._polars_expr_namespace_name_to_type_dict = {} @t.override def get_customize_class_mro_hook( self, fullname: str ) -> cx.Callable[[mypy.plugin.ClassDefContext], None] | None: """ mypy requires the presence of `__getattr__` or `__getattribute__` for `get_attribute_hook` to work on dynamic attributes. This hook-getter adds `__getattr__` to the class definition of `polars.expr.expr.Expr`. """ if fullname == STR_POLARS_EXPR_FULLNAME: return add_getattr return None @t.override def get_class_decorator_hook_2( self, fullname: str ) -> cx.Callable[[mypy.plugin.ClassDefContext], bool] | None: """ Makes mypy recognise the class decorator factory `@polars.api.register_expr_namespace(...)` in the following context: @polars.api.register_expr_namespace(<namespace name>) class <Namespace>: ... Accumulates a mapping of a bunch of potential attributes to be accessible on instances of `polars.expr.expr.Expr`; the mapping has entries which look like `<namespace name>: <Namespace>` """ if fullname == STR_POLARS_EXPR_REGISTER_EXPR_NAMESPACE_FULLNAME: return self.polars_expr_namespace_registering_hook return None @t.override def get_attribute_hook( self, fullname: str ) -> cx.Callable[[mypy.plugin.AttributeContext], mypy.types.Type] | None: """ Makes mypy understand that, whenever an attribute is accessed from instances of `polars.expr.expr.Expr` and the attribute doesn't exist, reach for the attributes accumulated in the mapping through the actions of `get_class_decorator_hook_2`. """ if fullname.startswith(f"{STR_POLARS_EXPR_FULLNAME}."): return self.polars_expr_attribute_hook return None def polars_expr_namespace_registering_hook( self, ctx: mypy.plugin.ClassDefContext ) -> bool: """ Use the decorator factory `polars.api.register_expr_namespace(<namespace name>)` to register available dynamic attributes later accessed from instances of `polars.expr.expr.Expr`. Returns whether the class has enough information to be considered semantically analysed. """ # Ensure that the class decorator expression looks like # `@polars.api.register_expr_namespace(<namespace name>)` namespace_arg: str | None if ( (not isinstance(ctx.reason, mypy.nodes.CallExpr)) or (len(ctx.reason.args) != 1) or ( (namespace_arg := ctx.api.parse_str_literal(ctx.reason.args[0])) is None ) ): # If the decorator factory expression doesn't look valid, do an early # return. 
return True self._polars_expr_namespace_name_to_type_dict[ namespace_arg ] = ctx.api.named_type(ctx.cls.fullname) return True def polars_expr_attribute_hook( self, ctx: mypy.plugin.AttributeContext ) -> mypy.types.Type: """ Reaches for registered namespaces when accessing attributes on instances of `polars.expr.expr.Expr`. Shows an error when the attribute doesn't exist. """ assert isinstance(ctx.context, mypy.nodes.MemberExpr) attr_name: str = ctx.context.name namespace_type: mypy.types.Type | None = ( self._polars_expr_namespace_name_to_type_dict.get(attr_name) ) if namespace_type is not None: return namespace_type else: ctx.api.fail( f"`{STR_POLARS_EXPR_FULLNAME}` object has no attribute `{attr_name}`", ctx.context, ) return mypy.types.AnyType(mypy.types.TypeOfAny.from_error) def add_getattr(ctx: mypy.plugin.ClassDefContext) -> None: mypy.plugins.common.add_method_to_class( ctx.api, cls=ctx.cls, name=STR___GETATTR___NAME, args=[ mypy.nodes.Argument( variable=mypy.nodes.Var( name="attr_name", type=ctx.api.named_type("builtins.str") ), type_annotation=ctx.api.named_type("builtins.str"), initializer=None, kind=mypy.nodes.ArgKind.ARG_POS, pos_only=True, ) ], return_type=mypy.types.AnyType(mypy.types.TypeOfAny.implementation_artifact), self_type=ctx.api.named_type(STR_POLARS_EXPR_FULLNAME), ) Contents of test.py import polars as pl @pl.api.register_expr_namespace("greetings") class Greetings: def __init__(self, expr: pl.Expr): self._expr = expr def hello(self) -> pl.Expr: return (pl.lit("Hello ") + self._expr).alias("hi there") def goodbye(self) -> pl.Expr: return (pl.lit("Sayōnara ") + self._expr).alias("bye") print( pl.DataFrame(data=["world", "world!", "world!!"]).select( [ pl.all().greetings.hello(), pl.all().greetings.goodbye(1), # mypy: Too many arguments for "goodbye" of "Greetings" [call-arg] pl.all().asdfjkl # mypy: `polars.expr.expr.Expr` object has no attribute `asdfjkl` ] ) )
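If you don't want to maintain a plugin, another option (a sketch, assuming only that the namespace class, Greetings here, is importable where you use it) is to funnel the single unavoidable ignore through a small typed accessor, so every call site stays fully type-checked:

import typing

import polars as pl

def greetings(expr: pl.Expr) -> Greetings:
    """Typed accessor for the dynamically registered `greetings` namespace."""
    return typing.cast(Greetings, expr.greetings)  # type: ignore[attr-defined]

print(
    pl.DataFrame(data=["world", "world!", "world!!"]).select(
        [
            greetings(pl.all()).hello(),    # mypy/pyright now see Greetings here
            greetings(pl.all()).goodbye(),
        ]
    )
)

This keeps the suppression in exactly one place instead of sprinkling ignores over every expression.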
5
3
77,738,266
2023-12-31
https://stackoverflow.com/questions/77738266/how-to-make-pydantic-class-fields-immutable
I am trying to create a pydantic class with Immutable class fields (not instance fields). Here is my base code: from pydantic import BaseModel class ImmutableModel(BaseModel): _name: str = "My Name" _age: int = 25 ImmutableModel._age = 26 print("Class variables:") print(f"Name: {ImmutableModel._name}") print(f"Age: {ImmutableModel._age}") Output: Class variables: Name: My Name Age: 26 I tried using the Config class inside my ImmutableModel to make fields immutable. But it seems like it only works for instance class fields. class Config: allow_mutation = False FYI, I use Python 3.6.13 and pydantic==1.9.2
Initially, I tried to make both the class and its instances immutable using the pydantic module. In the end I was able to manage it with a plain (native Python) metaclass instead. Since the class is completely defined by me and immutable, it's fine to have no validation.

class ImmutableMetaclass(type):
    def __setattr__(cls, name, value):
        raise AttributeError("Cannot create or set class attribute '{}'".format(name))

class MyImmutableClass(metaclass=ImmutableMetaclass):
    # Define all allowed class attributes here
    CONSTANT_1 = 1

    def __setattr__(self, name, value):
        raise AttributeError("Cannot create or set class or instance attribute '{}'".format(name))

immutable_class = MyImmutableClass
immutable_class.CONSTANT_1 = 100
immutable_class.CONSTANT_2 = 200

immutable_instance = MyImmutableClass()
immutable_instance.CONSTANT_1 = 100
immutable_instance.CONSTANT_3 = 300

Each of the assignment lines above raises AttributeError.
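A small extension of the same idea, not part of the original answer but following directly from it: if you also want to block attribute deletion, the metaclass (and the class) can override __delattr__ in exactly the same way:

class ImmutableMetaclass(type):
    def __setattr__(cls, name, value):
        raise AttributeError("Cannot create or set class attribute '{}'".format(name))

    def __delattr__(cls, name):
        raise AttributeError("Cannot delete class attribute '{}'".format(name))

class MyImmutableClass(metaclass=ImmutableMetaclass):
    CONSTANT_1 = 1

del MyImmutableClass.CONSTANT_1  # also raises AttributeError now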
2
0
77,742,640
2024-1-1
https://stackoverflow.com/questions/77742640/moviepy-compositevideoclip-generates-blank-unplayable-video
I'm trying to generate a video: the image at the top, the text in the center and the video at the bottom. I don't want any gap between them. The image is resized to become the first half, so the height is set to 960 (the final video is 1080 x 1920). Similarly, the video is resized and cropped. The text should be on top of the whole thing. But what it generates is a blank, unplayable video (2570 x 960: why is the width 2570?); no errors, either. I've tried many combinations over the past few days, but none worked. Here's the code: import cv2 from moviepy.editor import VideoFileClip, concatenate_videoclips, TextClip, ImageClip, CompositeVideoClip, clips_array from moviepy.video.fx.crop import crop from moviepy.video.fx.resize import resize from moviepy.config import change_settings change_settings({"IMAGEMAGICK_BINARY": r"E:\User\ImageMagick\ImageMagick-7.1.1-Q16-HDRI\magick.exe"}) def generate_video(image_path, video_path, output_path, text, text_options: dict = None): image_clip = ImageClip(image_path) image_clip = resize(image_clip, width=1080, height=960) video_clip = VideoFileClip(video_path) video_clip = resize(video_clip, height=960) video_clip = crop(video_clip, x_center=video_clip.w/2, width=1080) if text_options is None: text_options = {'fontsize': 30, 'color': 'white', 'bg_color': 'black', 'font': 'Arial'} text_clip = TextClip(text, text_options) text_clip = text_clip.set_position(("center", "center")) image_clip = image_clip.set_duration(video_clip.duration) text_clip = text_clip.set_duration(video_clip.duration) image_clip = image_clip.set_position((0.0, 0.0), relative=True) video_clip = video_clip.set_position((0.0, 0.5), relative=True) final_clip = CompositeVideoClip([image_clip, text_clip, video_clip]) final_clip.write_videofile(output_path, codec="libx264", audio=False) image_clip.close() text_clip.close() video_clip.close() It's worked before (so the installation is fine; when the script was smaller), the paths are correct, and I've called the function with valid arguments. Why doesn't this code work? I'm also curious to know how the code would be if clips_array or concatenate_videoclips were used (the output should be exactly the same).
The issue comes from the two lines below: resize assumes that only one of the arguments will be specified, meaning that the width argument is ignored (hitting this condition), and because you didn't specify a size when calling CompositeVideoClip, the final clip is rendered with the size of the first clip in the array (not documented, but happens in this line), which is most likely why you can't play the video.

image_clip = resize(image_clip, width=1080, height=960)

final_clip = CompositeVideoClip([image_clip, text_clip, video_clip])

Here you have it fixed.

def generate_video(image_path, video_path, output_path, text, text_options=None):
    video_clip = VideoFileClip(video_path)
    video_clip = resize(video_clip, newsize=(1920, 1080))
    video_clip = crop(video_clip, y1=1920 // 4, height=1920 // 2)

    image_clip = ImageClip(image_path, duration=video_clip.duration)
    image_clip = resize(image_clip, newsize=(1920, 1080))
    image_clip = crop(image_clip, height=1920 // 2)

    default_text_options = {
        'fontsize': 30,
        'font': 'Arial',
        'color': 'white',
        'bg_color': 'black',
    }
    text_clip = TextClip(text, **(text_options or default_text_options))
    text_clip = text_clip.set_duration(video_clip.duration)

    final_clip = CompositeVideoClip(
        [
            image_clip.set_position((0.0, 0.0), relative=True),
            video_clip.set_position((0.0, 0.5), relative=True),
            text_clip.set_position(("center", "center")),  # the last layer is the top layer
        ],
        size=(1920, 1080),
    )
    final_clip.write_videofile(output_path, codec="libx264", audio=False)

    text_clip.close()
    image_clip.close()
    video_clip.close()
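For completeness, a hypothetical call would look like this (the file names are placeholders, not from the question):

generate_video(
    image_path="thumbnail.png",   # placeholder path
    video_path="clip.mp4",        # placeholder path
    output_path="combined.mp4",   # placeholder path
    text="Hello there",
)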
2
1
77,739,186
2023-12-31
https://stackoverflow.com/questions/77739186/initializing-an-earth-engine-project-in-colab
I can no longer initailize my own earth engine project on Google colab using Python. I tried initializing using the ee.Initialize() method on Colab with used to provide with a link that will generate a token to my project but instead i get this error: WARNING:googleapiclient.http:Encountered 403 Forbidden with reason "PERMISSION_DENIED" --------------------------------------------------------------------------- HttpError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/ee/data.py in _execute_cloud_call(call, num_retries) 382 try: --> 383 return call.execute(num_retries=num_retries) 384 except googleapiclient.errors.HttpError as e: 7 frames HttpError: <HttpError 403 when requesting https://earthengine.googleapis.com/v1/projects/earthengine-legacy/algorithms?prettyPrint=false&alt=json returned "Google Earth Engine API has not been used in project 522309567947 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/earthengine.googleapis.com/overview?project=522309567947 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.". Details: "[{'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Google developers console API activation', 'url': 'https://console.developers.google.com/apis/api/earthengine.googleapis.com/overview?project=522309567947'}]}, {'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'SERVICE_DISABLED', 'domain': 'googleapis.com', 'metadata': {'service': 'earthengine.googleapis.com', 'consumer': 'projects/522309567947'}}]"> During handling of the above exception, another exception occurred: EEException Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/ee/data.py in _execute_cloud_call(call, num_retries) 383 return call.execute(num_retries=num_retries) 384 except googleapiclient.errors.HttpError as e: --> 385 raise _translate_cloud_exception(e) # pylint: disable=raise-missing-from 386 387 EEException: Google Earth Engine API has not been used in project 522309567947 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/earthengine.googleapis.com/overview?project=522309567947 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry .
Their lack of documentation on this change is incredibly frustrating, but fortunately it's a very easy fix (or at least it was for me). Go to your Earth Engine code editor and click on the 'Assets' tab, then copy the name of the project that holds the EE assets you're trying to access (see the linked image). Then, at the initialization step in your Colab script, replace ee.Initialize() with ee.Initialize(project='NameOfYourEarthEngineProject'). Hope this helps.
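Putting it together, the Colab initialization cell would look roughly like this (the project ID is a placeholder; ee.Authenticate() is still needed the first time you run in a fresh runtime):

import ee

ee.Authenticate()                           # opens the usual auth flow in Colab
ee.Initialize(project='my-ee-project-id')   # placeholder: use your own Cloud project ID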
2
4
77,746,344
2024-1-2
https://stackoverflow.com/questions/77746344/how-to-hyperlink-nodes-in-d3js-networkx-digraph
I would like to create a NetworkX graph and visualize it using d3js similarly to the javascript example in the NetworkX docs. This graph is very similar to the interactive graph on the NetworkX homepage page. The example works for me, but I would like to add hyperlinks to the nodes. I think I have node attributes called "xlink:href", but I have not figured out how. This question was ansered for NewtworkX and visualization with bokeh here. I have not tested this example, since I want to use d3js. The code below is available here. So far: import json import networkx as nx G = nx.Graph() G.add_node('Node1') G.add_node('Node2') G.add_edge('Node1', 'Node2') for n in G: G.nodes[n]["name"] = "My" + str(n) G.nodes[n]["xlink:href"] = "http://google.com" # <==<< link Not working d = nx.json_graph.node_link_data(G) json.dump(d, open("force/force.json", "w")) print("Wrote node-link JSON data to force/force.json") The above produces: {'directed': False, 'multigraph': False, 'graph': {}, 'nodes': [{'name': 'MyNode1', 'xlink:href': 'http://google.com', 'id': 'Node1'}, {'name': 'MyNode2', 'xlink:href': 'http://google.com', 'id': 'Node2'}], 'links': [{'source': 'Node1', 'target': 'Node2'}]} Which can be visualized like this: # Serve the file over http to allow for cross origin requests import flask app = flask.Flask(__name__, static_folder="force") @app.route("/") def static_proxy(): return app.send_static_file("force.html") app.run(port=8001) Interestingly, the tooltip on the graph displays "Node2" and not "MyNode2". Links collected while trying to solve this: https://github.com/simonlindgren/nXd3 http://www.d3noob.org/2014/05/including-html-link-in-d3js-tool-tip.html https://networkx.org/documentation/stable/reference/readwrite/json_graph.html
Here's a solution using gravis, which accepts graph objects from NetworkX, iGraph, graph-tool and some other Python libraries and can visualize them with either d3.js, vis.js or three.js with a single function call. Disclaimer: I'm the developer of gravis. Your use case appears to be 1) creating an interactive graph visualization with d3 and 2) serving it via a web server such as Flask. Since this was part of the motivation to build gravis, I think it might fit well here.

import gravis as gv
import networkx as nx

g = nx.Graph()
g.add_node('Node1')
g.add_node('Node2')
g.add_edge('Node1', 'Node2')

for n in g:
    g.nodes[n]["name"] = f"My {n}"
    g.nodes[n]["hover"] = '<a href="http://google.com">Google</a>'

fig = gv.d3(g, node_label_data_source='name')
fig.display()

The last line opens a browser window and shows the visualization. Alternatively you can also use fig.to_html() to get standalone HTML text that you can serve via a web server like Flask. The hyperlink is shown when hovering over a node and can easily be clicked since the pop up disappears with a time delay. Yet another use case that is easy to cover is creating embedded graph visualizations in a Jupyter notebook, which may help in prototyping your app: (The mouse cursor was hidden from the screenshot. It's hovering over node 1, hence the pop up with the hyperlink is visible.)
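A minimal Flask sketch of the serving part, assuming only the fig object built above and the fig.to_html() export mentioned in the answer:

import flask

app = flask.Flask(__name__)

@app.route("/")
def graph():
    # gravis figures can be exported as standalone HTML
    return fig.to_html()

app.run(port=8001)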
1
2
77,745,673
2024-1-2
https://stackoverflow.com/questions/77745673/poetry-add-command-removes-category-from-lock-file
My current poetry.lock file looks like this (part of it) # This file is automatically @generated by Poetry and should not be changed by hand. [[package]] name = "annotated-types" version = "0.6.0" description = "Reusable constraint types to use with typing.Annotated" category = "main" optional = false python-versions = ">=3.8" files = [ {file = "annotated_types-0.6.0-py3-none-any.whl", hash = "sha256:0641064de18ba7a25dee8f96403ebc39113d0cb953a01429249d5c7564666a43"}, {file = "annotated_types-0.6.0.tar.gz", hash = "sha256:563339e807e53ffd9c267e99fc6d9ea23eb8443c08f112651963e24e22f84a5d"}, ] [[package]] name = "asgiref" version = "3.7.2" description = "ASGI specs, helper code, and adapters" category = "main" optional = false python-versions = ">=3.7" files = [ {file = "asgiref-3.7.2-py3-none-any.whl", hash = "sha256:89b2ef2247e3b562a16eef663bc0e2e703ec6468e2fa8a5cd61cd449786d4f6e"}, {file = "asgiref-3.7.2.tar.gz", hash = "sha256:9e0ce3aa93a819ba5b45120216b23878cf6e8525eb3848653452b4192b92afed"}, ] For every package there is a category Now I need to add an extra package. I run poetry add --group main beautifulsoup4 , but category is removed from each line. What should I do to: 1) install a new package 2) make category fields stay the same
The category field was removed in Poetry 1.5; that is why it disappears from poetry.lock after any change made with a newer Poetry version.
2
4
77,747,059
2024-1-2
https://stackoverflow.com/questions/77747059/why-does-hatch-not-save-the-virtual-environment-in-the-same-directory-i-am-worki
Why does hatch not save the virtual environment in the same directory I am working in? I went through the doc and couldn't find a way to achieve that. Maybe it makes sense that it's that way but I can't really think why. Is there a way to create it in the project directory? Or a good reason to not want to do that?
When you use a tool like hatch, you are basically giving up precise control over the virtual environment in favor of letting hatch manage it for you; this includes letting hatch decide where to keep all such managed environments. That said, hatch stores virtual environments in a specific data directory that you can specify using the --data-dir option to the hatch command itself. $ mkdir hatchtest; cd hatchtest $ ls $ hatch --data-dir . env create [...] $ ls env (The default data directory likely varies from platform to platform; it's ~/Library/Application Support/hatch on my macOS box.)
3
2
77,746,742
2024-1-2
https://stackoverflow.com/questions/77746742/what-is-the-term-for-a-colon-before-a-suite-in-python-syntax
I want to know what the technical term for a colon in the context of introducing a suite after a statement is called. I do not mean colons in slices, key-value pairs or type hints, but this kind of usage: if True: pass else: pass try: raise except: pass def foo(): pass lambda: ... # I don't know if this is appertaining I did not really find a fitting term online. Most only call them colons, while expecting the reader to understand the meaning of a colon in this context in Python. I found this website which uses the term "colon operator", but that doesn't feel right to me - especially as it lumps together all uses of colons under that umbrella term. I apologise if I have missed an obvious answer somewhere.
It's simply called a "colon" (:). Informally, and as the grammar spec suggests, you'd describe it as introducing "the start of a block": https://docs.python.org/3/reference/grammar.html

# COMPOUND STATEMENTS
# ===================

# Common elements
# ---------------

block:
    | NEWLINE INDENT statements DEDENT
    | simple_stmts
2
3
77,742,624
2024-1-1
https://stackoverflow.com/questions/77742624/how-to-group-items-of-a-list-based-on-their-type-transition
My input is a list: data = [ -1, 0, 'a','b', 1, 2, 3, 'c', 6, 'd', 'e', .4, .5, 'a', 'b', 4, 'f', 'g', ] I'm trying to form groups (dictionary) where the keys are the strings and the values are the numbers right after them. There are however three details I should consider: The list of data I receive can sometimes have leading non-string values that should be ignored The number of strings for each group is variable but the minimum is always 1 Some groups can appear multiple times (example: a/b) For all of that I made the code below: start = list(map(type, data)).index(str) wanted = {} for i in data[start:]: strings = [] if type(i) == str: strings.append(i) numbers = [] else: numbers.append(i) wanted['/'.join(strings)] = numbers This gives me nearly what I'm looking for: {'a': [], 'b': [4], '': [4], 'c': [6], 'd': [], 'e': [0.4, 0.5], 'f': [], 'g': []} Can you show me how to fix my code? My expected output is this: {'a/b': [1, 2, 3, 4], 'c': [6], 'd/e': [0.4, 0.5], 'f/g': []}
You can use itertools.groupby with a key function that tests if the current item is a string. To skip possible leading non-string items, fetch the first group, and fetch the next group again as a replacement if the first group items are not strings. For each group of items, if they are strings, join them as a key; otherwise extend the list under that key with the items. If the last group of items are strings, set it to an empty list as a default: from itertools import groupby, chain output = {} groups = groupby(data, lambda i: isinstance(i, str)) try: is_str, group = next(groups) if not is_str: is_str, group = next(groups) except StopIteration: pass else: for is_str, group in chain([(is_str, group)], groups): if is_str: key = '/'.join(group) else: output.setdefault(key, []).extend(group) if is_str: output.setdefault(key, []) output becomes: {'a/b': [1, 2, 3, 4], 'c': [6], 'd/e': [0.4, 0.5], 'f/g': []} Demo: https://ideone.com/rcOSzV
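If the itertools version feels dense, here is an equivalent plain-loop sketch of the same idea (same input, same output), for comparison:

wanted = {}
pending_key = []   # consecutive strings not yet committed as a key
key = None
for item in data:
    if isinstance(item, str):
        pending_key.append(item)
    else:
        if pending_key:                      # a run of strings just ended
            key = '/'.join(pending_key)
            wanted.setdefault(key, [])
            pending_key = []
        if key is not None:                  # skip leading non-string values
            wanted[key].append(item)
if pending_key:                              # trailing strings form an empty group
    wanted.setdefault('/'.join(pending_key), [])

print(wanted)
# {'a/b': [1, 2, 3, 4], 'c': [6], 'd/e': [0.4, 0.5], 'f/g': []}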
1
1
77,743,877
2024-1-2
https://stackoverflow.com/questions/77743877/model-instance-doesnt-check-fields-integrity
Consider the model below class Faculty(models.Model): name = models.CharField(max_length=255) short_name = models.CharField(max_length=15) description = models.CharField(max_length=512) logo = models.ImageField(upload_to=faculty_upload_to, null=True) That I do Faculty.objects.create() or faculty = Faculty() faculty.save() this creates an empty entry in the database >>> from universities.models import * >>> Faculty.objects.create() <Faculty: Faculty object (2)> Why doesn't Django give me an integrity error? I am using Django 5.0.
CharFields default to an empty string (""), which is a valid value for a VARCHAR column, so the database has nothing to complain about. You may consider adding CheckConstraints if you want to validate values at the database level (note that CheckConstraint also requires a name, and Q needs to be imported):

from django.db import models
from django.db.models import Q

class Faculty(models.Model):
    name = models.CharField(max_length=255)
    short_name = models.CharField(max_length=15)
    description = models.CharField(max_length=512)
    logo = models.ImageField(upload_to=faculty_upload_to, null=True)

    class Meta:
        constraints = [
            models.CheckConstraint(check=~Q(name=""), name="faculty_name_not_empty"),
            models.CheckConstraint(check=~Q(short_name=""), name="faculty_short_name_not_empty"),
            models.CheckConstraint(check=~Q(description=""), name="faculty_description_not_empty"),
        ]
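Also worth noting, separately from the constraint approach: Django only runs field validation when you call full_clean() explicitly; save() and objects.create() skip it, which is why the empty row got through. For example:

from django.core.exceptions import ValidationError

faculty = Faculty()
try:
    faculty.full_clean()   # raises: name, short_name and description may not be blank
except ValidationError as e:
    print(e.message_dict)
else:
    faculty.save()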
1
2
77,743,898
2024-1-2
https://stackoverflow.com/questions/77743898/keep-the-previous-maximum-value-after-the-streak-ends
This is my dataframe: import pandas as pd df = pd.DataFrame( { 'a': [110, 115, 112, 180, 150, 175, 160, 145, 200, 205, 208, 203, 206, 207, 208, 209, 210, 215], 'b': [1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1], } ) And this is the output that I want. I want to create column c. a b c 0 110 1 110 1 115 1 115 2 112 0 115 3 180 1 180 4 150 0 180 5 175 1 180 6 160 0 180 7 145 0 180 8 200 1 200 9 205 1 205 10 208 1 208 11 203 0 208 12 206 1 208 13 207 1 208 14 208 1 208 15 209 1 209 16 210 1 210 17 215 1 215 When df.a > df.a.shift(1) b is 1 otherwise it is 0. Steps needed: a) Find where the streak of 1 in b ends. b) Keep the maximum value of the streak. c) Put that value in c until a greater value is found in a. For example when 180 is found in b: a) Row 3 has streak of 1. b) Maximum value of the streak is 180. c) df.c = 180 until a greater value is found in a. In this case it is 200 at row 8. It was not easy to elaborate the problem. Maybe I have described the problem with wrong words. So If there are any questions feel free to ask in the comments. And I really appreciate if you introduce a built-in way or a clean way to create column b. I put those 1 and 0s manually. This is what I have tried. But it does not feel like a correct approach. df['streak'] = df['b'].ne(df['b'].shift()).cumsum() df['max'] = df.groupby('streak')['a'].max()
You just want cummax: df['c'] = df['a'].cummax() Output: a b c 0 110 1 110 1 115 1 115 2 112 0 115 3 180 1 180 4 150 0 180 5 175 1 180 6 160 0 180 7 145 0 180 8 200 1 200 9 205 1 205 10 208 1 208 11 203 0 208 12 206 1 208 13 207 1 208 14 208 1 208 15 209 1 209 16 210 1 210 17 215 1 215
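As for the side question about building column b without typing it by hand, one option (a sketch; it sets the first row to 1 because there is no previous value to compare, matching the example data):

df['b'] = (df['a'] > df['a'].shift()).astype(int)
df.iloc[0, df.columns.get_loc('b')] = 1   # first row has no predecessor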
2
4
77,741,382
2024-1-1
https://stackoverflow.com/questions/77741382/how-to-make-a-column-with-data-type-bit-and-lenght-4-in-sqlalchemy-for-mariadb-o
I have not found a way to do this as the BIT data type in sqlalchemy inherits from Boolean datatype. flags = Column(BIT(4), nullable=False) # BIT column with length 4 sql: `flags` BIT(4) NOT NULL DEFAULT b'0', AttributeError: 'BIT' object has no attribute 'length'
BIT is not a standard data type (neither in SQL Standard nor in SQLAlchemy) but it is supported by e.g. PostgreSQL, SQL Server, MariaDB and MySQL. As @danblack already mentioned in his comment, BIT type is defined in types.py (but also in base.py as MSBit). That means you have to use types of your dialect, in this case mysql (mariadb is also stored under dialects/mysql). e.g. from sqlalchemy.dialects.mysql.types import BIT .... table_def= Table('my_table', meta, Column('flags', BIT(4), nullable=False))
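If you are declaring ORM models instead of Table objects, the same dialect type drops into a declarative column as well (a sketch; Base, the table name and the id column are assumptions, not from the question):

from sqlalchemy import Column, Integer
from sqlalchemy.dialects.mysql import BIT
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Flags(Base):
    __tablename__ = 'my_table'

    id = Column(Integer, primary_key=True)
    flags = Column(BIT(4), nullable=False)   # emits `flags BIT(4) NOT NULL`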
2
1
77,742,615
2024-1-1
https://stackoverflow.com/questions/77742615/how-to-easily-get-these-values-from-a-2d-matrix-using-numpy
I have this 2d numpy matrix a = np.array( [[ 1 2 3 4 5] [ 6 7 8 9 10] [11 12 13 14 15] [16 17 18 19 20] [21 22 23 24 25] [26 27 28 29 30]]) and I want to extract these values from it just for the sake of learning :) [[11,12],[16,17],[29,30]] After so many tries I ended up in chatGPT which gave me a wrong answer :(. chatGPT suggested this a[[2, 3, 5], [0, 1, 1]] but yeilded these values [11 17 27]. Any help would be appreciated thanks
You need to add commas to your array. You can use numpy indexing: https://numpy.org/doc/stable/user/basics.indexing.html import numpy as np a=np.array([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10], [11, 12, 13, 14, 15], [16, 17, 18, 19, 20], [21, 22, 23, 24, 25], [26, 27, 28, 29, 30]]) print(a[2:4,0:2]) or print(a[[2,3],0:2]) #output array([[11, 12], [16, 17]]) print(a[5:,3:]) or print(a[[5],3:]) #output array([[29, 30]]) Everything in one go: a[[2,3],0:2].tolist() + a[[5],3:].tolist() or a[2:4,0:2].tolist() + a[5:,3:].tolist() #output [[11, 12], [16, 17], [29, 30]]
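Since each of the three picks has the same shape (two elements), integer (fancy) indexing can also pull them out in a single operation; here is a sketch built from the row/column pairs you actually want:

rows = np.array([[2, 2], [3, 3], [5, 5]])
cols = np.array([[0, 1], [0, 1], [3, 4]])

print(a[rows, cols])
# [[11 12]
#  [16 17]
#  [29 30]]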
2
1
77,740,136
2023-12-31
https://stackoverflow.com/questions/77740136/read-from-multiple-rest-api-stream-endpoints-python
I have a multiple rest api endpoints (urls) which streams data. I wonder what would be the best approach to read from all of them in one / many processes. currently I'm reading the data only from one url, doing something like: s = requests.Session() resp = s.get(url, headers=headers, stream=True) for line in resp.iter_lines(): if line: print(line) I would like to do the same with more urls and I wonder what would be the best approach here.
Here is an example how you can read from multiple URLs using concurrent.futures.ThreadPoolExecutor. But this is just one approach, you can use multiprocessing, asyncio/aiohttp, etc. from concurrent.futures import ThreadPoolExecutor import requests def get_from_api(tpl): session, url = tpl resp = session.get(url, stream=True) # just for example: count_lines = 0 for line in resp.iter_lines(): count_lines += 1 return url, count_lines def main(): api_urls = [ "https://google.com", "https://yahoo.com", "https://facebook.com", "https://instagram.com", # ...etc. ] with ThreadPoolExecutor(max_workers=2) as pool, requests.session() as session: for url, count_lines in pool.map( get_from_api, ((session, url) for url in api_urls) ): print(url, count_lines) if __name__ == "__main__": main() Prints: https://google.com 17 https://yahoo.com 648 https://facebook.com 26 https://instagram.com 50 EDIT: Using asyncio/aiohttp: import asyncio # Streaming API: # https://docs.aiohttp.org/en/stable/streams.html#streaming-api import aiohttp async def fetch(session, url): while True: async with session.get(url) as response: reader = response.content cnt = 0 async for line in reader: cnt += 1 print(f"{url}: {cnt} lines read") await asyncio.sleep(3) async def main(): urls = [ "https://google.com", # Replace with actual URLs "https://facebook.com", ] async with aiohttp.ClientSession() as session: tasks = {asyncio.create_task(fetch(session, url)) for url in urls} # this loops indifinitely: await asyncio.gather(*tasks) if __name__ == "__main__": asyncio.run(main()) Prints: https://google.com: 17 lines read https://facebook.com: 26 lines read https://google.com: 655 lines read https://facebook.com: 26 lines read ...
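If the endpoints are true never-ending streams (so a worker never returns), a long-lived thread per URL is another option, closer to the code in the question. This is a sketch; the URLs and headers below are placeholders:

import threading
import requests

def consume(url, headers=None):
    with requests.Session() as s:
        resp = s.get(url, headers=headers, stream=True)
        for line in resp.iter_lines():
            if line:
                print(url, line)

urls = ["https://example.com/stream1", "https://example.com/stream2"]  # placeholders
threads = [threading.Thread(target=consume, args=(u,), daemon=True) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()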
2
2
77,741,201
2024-1-1
https://stackoverflow.com/questions/77741201/check-if-a-child-element-is-present-and-non-empty-in-xml-file
I have the following xml file: <?xml version="1.0" encoding="utf-8"?> <components version="1.0.0"> <component type="foo"> <maintag> <subtag> <check>Foo</check> </subtag> <subtag> <check></check> </subtag> <subtag> </subtag> </maintag> </component> </components> I want to check that each subtag element has child element check that has non-empty value. It should print an error if: check is present but is empty check is not present at all in one or more subtag How would I do that? I came up with this but it doesn't do exactly what I want from lxml import etree # type: ignore def parse_xml(path: str) -> list: root = etree.parse(path) components = root.xpath("/components/component") return list(components) path = "test.xml" for p in parse_xml(path)[0].iter('check'): if not len(str(p)) > 0: print("check tag empty")
You can translate your two conditions into a single XPath expression:

root = etree.parse("test.xml")

expr = "//subtag[not(check/text()) or not(check)]"

not any(e is not None for e in root.xpath(expr))  # False, i.e. there are offending subtags

Alternatively, if you want more verbose checks, you can do something like:

for idx, st in enumerate(root.xpath("//subtag"), 1):
    if (check := st.find("check")) is None:
        print(f"'check' at 'subtag' {idx} is absent")
    elif not check.text:
        print(f"'check' at 'subtag' {idx} is empty")
    else:
        ...

Output:

# 'check' at 'subtag' 2 is empty
# 'check' at 'subtag' 3 is absent
2
2
77,741,143
2024-1-1
https://stackoverflow.com/questions/77741143/matplotlib-ticks-are-overlapped-when-hspace-0
When pushing two plots together, I'd like to have the the ticks set to 'inout' and have it properly displayed over both the charts, but the 2nd chart is overlapping it. import matplotlib.pyplot as plt import numpy as np t = np.arange(0.01, 5.0, 0.01) s1 = np.sin(2 * np.pi * t) s2 = np.exp(-t) ax1 = plt.subplot(211) plt.plot(t, s1) plt.tick_params('x', labelbottom=False, direction='inout', length=6) ax2 = plt.subplot(212, sharex=ax1) plt.plot(t, s2) plt.subplots_adjust(hspace=0) plt.show() We can adjust the hspace and see that they're there, but just getting covered. I've tried changing the z-order, but that didn't work. plt.tick_params('x', labelbottom=False, direction='inout', length=6, zorder=100)
IIUC, you can create ax2 before ax1: import matplotlib.pyplot as plt import numpy as np t = np.arange(0.01, 5.0, 0.01) s1 = np.sin(2 * np.pi * t) s2 = np.exp(-t) ax2 = plt.subplot(212) plt.plot(t, s2) ax1 = plt.subplot(211, sharex=ax2) plt.plot(t, s1) plt.tick_params('x', labelbottom=False, direction='inout', length=6) plt.subplots_adjust(hspace=0) plt.show() Output:
3
3
77,740,674
2024-1-1
https://stackoverflow.com/questions/77740674/pandas-groupby-run-self-function-then-transformapply
I need to run regression for each group, then pass the coefficients into the new column b. Here is my code: Self-defined function: def simplereg(g, y, x): try: xvar = sm.add_constant(g[x]) yvar = g[y] model = sm.OLS(yvar, xvar, missing='drop').fit() b = model.params[x] return pd.Series([b*100]*len(g)) except Exception as e: return pd.Series([np.NaN]*len(g)) Create sample data: import pandas as pd import numpy as np # Setting the parameters gvkeys = ['A', 'B', 'C', 'D'] # Possible values for gvkey years = np.arange(2000, 2020) # Possible values for year # Number of rows for each gvkey, ensuring 5-7 observations for each num_rows_per_gvkey = np.random.randint(5, 8, size=len(gvkeys)) total_rows = sum(num_rows_per_gvkey) # Creating the DataFrame np.random.seed(0) # For reproducibility df = pd.DataFrame({ 'gvkey': np.repeat(gvkeys, num_rows_per_gvkey), 'year': np.random.choice(years, size=total_rows), 'y': np.random.rand(total_rows), 'x': np.random.rand(total_rows) }) df.sort_values(by='year', ignore_index=True, inplace=True) # make sure if the code can handle even data without sort Run groupby code: df['b'] = df.groupby('gvkey').apply(simplereg, y='y', x='x') However, the code return column 'b' with all N/A May I ask where is the issue and how to fix it? Thank you
Is it a bad idea to catch all exceptions? Obviously yes. In addition you do not display any error message. If your function returns NaN values, it's probably because your code is throwing an exception. Your code works well for me if I import statsmodels.api as sm and make minor changes: import statsmodels.api as sm def simplereg(g, y, x): try: xvar = sm.add_constant(g[x]) yvar = g[y] model = sm.OLS(yvar, xvar, missing='drop').fit() b = model.params[x] return pd.Series([b*100]*len(g), index=g.index) # reindex here except Exception as e: print(e) # At least, print exception here return pd.Series([np.NaN]*len(g), index=g.index) # reindex here # drop the first group level (gvkey) to align indexes df['b'] = df.groupby('gvkey').apply(simplereg, y='y', x='x').droplevel('gvkey') Output: >>> df gvkey year y x b 0 A 2000 0.799159 0.128926 -55.326856 1 B 2001 0.774234 0.253292 -68.351309 2 A 2003 0.461479 0.315428 -55.326856 3 A 2003 0.780529 0.363711 -55.326856 4 B 2004 0.521848 0.208877 -68.351309 5 C 2005 0.612096 0.656330 6.342994 6 D 2005 0.060225 0.096098 36.320231 7 B 2006 0.414662 0.161310 -68.351309 8 C 2006 0.456150 0.466311 6.342994 9 A 2007 0.118274 0.570197 -55.326856 10 C 2007 0.568434 0.244426 6.342994 11 C 2008 0.943748 0.196582 6.342994 12 D 2009 0.681820 0.368725 36.320231 13 B 2009 0.639921 0.438602 -68.351309 14 A 2012 0.870012 0.670638 -55.326856 15 B 2012 0.264556 0.653108 -68.351309 16 C 2013 0.616934 0.138183 6.342994 17 C 2014 0.018790 0.158970 6.342994 18 A 2015 0.978618 0.210383 -55.326856 19 D 2015 0.666767 0.976459 36.320231 20 D 2016 0.437032 0.097101 36.320231 21 C 2017 0.617635 0.110375 6.342994 22 B 2018 0.944669 0.102045 -68.351309 23 B 2019 0.143353 0.988374 -68.351309 24 D 2019 0.359508 0.820993 36.320231 25 D 2019 0.697631 0.837945 36.320231
3
0
77,741,075
2024-1-1
https://stackoverflow.com/questions/77741075/merge-dfs-but-avoid-duplication-of-columns-and-maintain-the-order-in-pandas
All the dfs have a key col "id". pd.merge is not a viable option even with the suffix option. There are over 40k cols in each of the dfs so column binding and deleting later (suffix_x) is not an option. Exactly 50k (common) rows in each of the dfs identified by "id" col. Minimal example with two common cols: df1 = pd.DataFrame({ 'id': ['a', 'b', 'c'], 'col1': [123, 121, 111], 'col2': [456, 454, 444], 'col3': [786, 787, 777], }) df2 = pd.DataFrame({ 'id': ['a', 'b', 'c'], 'col1': [123, 121, 111], 'col2': [456, 454, 444], 'col4': [11, 44, 77], }) df3 = pd.DataFrame({ 'id': ['a', 'b', 'c'], 'col1': [123, 121, 111], 'col2': [456, 454, 444], 'col5': [1786, 1787, 1777], }) Final answer: finaldf = pd.DataFrame({ 'id': ['a', 'b', 'c'], 'col1': [123, 121, 111], 'col2': [456, 454, 444], 'col3': [786, 787, 777], 'col4': [11, 44, 77], 'col5': [1786, 1787, 1777], })
If memory is limiting and the dataframes are already aligned, you could try to set up the output and update it: from functools import reduce dfs = [df1, df2, df3] cols = reduce(lambda a,b: a.union(b, sort=False), (x.columns for x in dfs)) out = pd.DataFrame(index=dfs[0].index, columns=cols) for x in dfs: out.update(x) Variant for the last step: out = pd.DataFrame(dfs[0], columns=cols) for x in dfs[1:]: out.update(x) Output: id col1 col2 col3 col4 col5 0 a 123 456 786 11 1786 1 b 121 454 787 44 1787 2 c 111 444 777 77 1777
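If memory is not the constraint and the shared columns really are identical across the frames, a plain chained merge also avoids the suffix problem, because merge joins on all common columns by default. A sketch (not the memory-friendly route above):

from functools import reduce

out = reduce(lambda left, right: left.merge(right, how='outer'), [df1, df2, df3])
print(out)
#   id  col1  col2  col3  col4  col5
# 0  a   123   456   786    11  1786
# 1  b   121   454   787    44  1787
# 2  c   111   444   777    77  1777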
3
4
77,740,923
2024-1-1
https://stackoverflow.com/questions/77740923/replacing-a-value-with-its-previous-value-in-a-column-if-it-is-greater
This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': [101, 90, 11, 120, 1] } ) And this is the output that I want. I want to create column y: a y 0 101 101.0 1 90 101.0 2 11 90.0 3 120 120.0 4 1 120.0 Basically, values in a are compared with their previous value, and the greater one is selected. For example for row 1, 90 is compared with 101. 101 is greater so it is selected. I have done it in this way: df['x'] = df.a.shift(1) df['y'] = df[['a', 'x']].max(axis=1) Is there a cleaner or some kind of built-in way to do it?
You can use np.fmax to get the maxima without creating an additional column: df["y"] = np.fmax(df["a"], df["a"].shift(1)) This outputs: a y 0 101 101.0 1 90 101.0 2 11 90.0 3 120 120.0 4 1 120.0 We use np.fmax() to ignore the NaN created when shifting df["a"].
2
5
77,740,276
2023-12-31
https://stackoverflow.com/questions/77740276/pandas-dataframe-groupby-over-consecutive-duplicates-and-sum-the-values
In pandas dataframe, I'm totally confused of how to use the method of groupby() over consecutive duplicates by sum values in column Let's say I have the following DataFrame df : index type value 0 profit 11 1 profit 10 2 loss -5 3 profit 50 4 profit 15 5 loss -30 6 loss -25 7 loss -10 what I'm looking to is: index type grand 0 profit 21 # total of 11 + 10 = 21 1 loss -5 # the same value as this row NOT consecutive duplicated 2 profit 65 # total of 50 + 15 = 65 3 loss -65 # total of -30 -25 -10 = -65 What I tried to do: df['grand'] = df.groupby(df['type'].ne(df['type'].shift()).cumsum()).cumcount() but it gives me counting the consecutive duplicated I tried to iterate through the rows with several solutions but all were failed Thanks so much!
Instead of .cumcount() use sum: out = ( df.groupby(df["type"].ne(df["type"].shift()).cumsum(), as_index=False) .agg({"type": "first", "value": "sum"}) .rename(columns={"value": "grand"}) ) print(out) Prints: type grand 0 profit 21 1 loss -5 2 profit 65 3 loss -65
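Equivalent, using named aggregation so the rename step isn't needed (same grouping key, just given an explicit name):

grp = df["type"].ne(df["type"].shift()).cumsum().rename("block")
out = (
    df.groupby(grp)
      .agg(type=("type", "first"), grand=("value", "sum"))
      .reset_index(drop=True)
)
print(out)
#      type  grand
# 0  profit     21
# 1    loss     -5
# 2  profit     65
# 3    loss    -65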
2
1
77,737,760
2023-12-30
https://stackoverflow.com/questions/77737760/pandas-join-with-multi-index-and-nan
I am using Pandas 2.1.3. I am trying to join two DataFrames on multiple index levels, and one of the index levels has NA's. The minimum reproducible example looks something like this: a = pd.DataFrame({ 'idx_a':['A', 'A', 'B'], 'idx_b':['alpha', 'beta', 'gamma'], 'idx_c': [1.0, 1.0, 1.0], 'x':[10, 20, 30] }).set_index(['idx_a', 'idx_b', 'idx_c']) b = pd.DataFrame({ 'idx_b':['gamma', 'delta', 'epsilon', np.nan, np.nan], 'idx_c': [1.0, 1.0, 1.0, 1.0, 1.0], 'y':[100, 200, 300, 400, 500] }).set_index(['idx_b', 'idx_c']) c = a.join( b, how='inner', on=['idx_b', 'idx_c'] ) print(a) x idx_a idx_b idx_c A alpha 1.0 10 beta 1.0 20 B gamma 1.0 30 print(b) y idx_b idx_c gamma 1.0 100 delta 1.0 200 epsilon 1.0 300 NaN 1.0 400 1.0 500 print(c) x y idx_a idx_b idx_c B gamma 1.0 30 100 1.0 30 400 1.0 30 500 I would have expected: print(c) x y idx_a idx_b idx_c B gamma 1.0 30 100 Why is join matching on the NaN values?
You can resolve your problem by removing the indexes and using merge instead of join: a = pd.DataFrame({ 'idx_a':['A', 'A', 'B'], 'idx_b':['alpha', 'beta', 'gamma'], 'idx_c': [1.0, 1.0, 1.0], 'x':[10, 20, 30] }) b = pd.DataFrame({ 'idx_b':['gamma', 'delta', 'epsilon', np.nan, np.nan], 'idx_c': [1.0, 1.0, 1.0, 1.0, 1.0], 'y':[100, 200, 300, 400, 500] }) c = a.merge(b, on=['idx_b', 'idx_c'], how='inner') Output: idx_a idx_b idx_c x y 0 B gamma 1.0 30 100 If you want to keep the indexes on a and b as they are in the question you can do this (thanks @mozway): c = (a .reset_index() .merge(b.reset_index(), on=['idx_b', 'idx_c'], how='inner') .set_index(list(dict.fromkeys(a.index.names+b.index.names))) ) Output: x y idx_a idx_b idx_c B gamma 1.0 30 100
2
2
77,736,673
2023-12-30
https://stackoverflow.com/questions/77736673/how-to-show-progression-of-changing-subplot-labels-plt-text-along-with-color-c
running python 3.9: import matplotlib.pyplot as plot import numpy as np import matplotlib.animation as animation fig, plt = plot.subplots() myArray = np.array([[0, 1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12, 13], [14, 15, 16, 17, 18, 19, 20], [21, 22, 23, 24, 25, 26, 27], [28, 29, 30, 31, 32, 33, 34], [35, 36, 37, 38, 39, 40, 41], [42, 43, 44, 45, 46, 47, 48]]) def randint(): return np.random.randint(0, 7) def randNewValue(): return np.random.randint(0, 48) #/10 def labelSquares(): for i in range(7): for j in range(7): plt.text(j, i, myArray[i, j], ha="center", va="center", color="white") def modifyArrayElement(r, c, x): row = r col = c new_value = x myArray[row, col] = new_value # labelSquares() return myArray ims = [] for i in range(100): row = randint() col = randint() newValue = randNewValue() # plt.text(col, row, myArray[row, col], ha="center", va="center", color="white") # labelSquares() im = plt.imshow(modifyArrayElement(row, col, newValue), animated = True) if i == 0: plt.imshow(modifyArrayElement(row, col, newValue)) ims.append([im]) ani = animation.ArtistAnimation(fig, ims, interval = 400, blit = True, repeat_delay = 5000) labelSquares() # how to make it run "during" the animation, i.e: show progression of changing labels (plt.text) along with color changing, matching myArray values at each iteration? f = r"c://Temp/animation.gif" writergif = animation.PillowWriter(fps=2) # ani.save(f, writer=writergif) plot.show() The result shows an animation of changing square colors as the contents of myArray changes, however, I would like the numbers inside them (text labels) to also change along, matching myArray values at each iteration of the for loop instead of staying static at final values all the time. Placing labelSquares() - or any similar labeling attempt inside the for loop - does not work. Is there a way to do it? (Sorry if this is a trivial problem showing my inexperience with pyplot, and I hope all the experts forgive me for asking such a stupid question)
From the ArtistAnimation documentation on the artistlist parameter: Each list entry is a collection of Artist objects that are made visible on the corresponding frame. Note that every object you add to a Matplotlib figure is an artist, including the Text objects that are returned by plt.text. So, for each frame, we can get hold of these and put them in a list together with the AxesImage artist from imshow. Since your ModifyArrayElement function changes the global array in-place, the labelSquares function has those updates automatically, and so does imshow, so I simplified the imshow calls. Also added animated=True to the plt.text to prevent unnecessary drawing. import matplotlib.pyplot as plot import numpy as np import matplotlib.animation as animation fig, plt = plot.subplots() myArray = np.array([[0, 1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12, 13], [14, 15, 16, 17, 18, 19, 20], [21, 22, 23, 24, 25, 26, 27], [28, 29, 30, 31, 32, 33, 34], [35, 36, 37, 38, 39, 40, 41], [42, 43, 44, 45, 46, 47, 48]]) def randint(): return np.random.randint(0, 7) def randNewValue(): return np.random.randint(0, 48) #/10 def labelSquares(): all_texts = [] for i in range(7): for j in range(7): all_texts.append(plt.text(j, i, myArray[i, j], ha="center", va="center", color="white", animated=True)) return all_texts def modifyArrayElement(r, c, x): row = r col = c new_value = x myArray[row, col] = new_value return myArray artists = [] for i in range(100): row = randint() col = randint() newValue = randNewValue() modifyArrayElement(row, col, newValue) im = plt.imshow(myArray, animated=True) frame_artists = labelSquares() frame_artists.append(im) artists.append(frame_artists) ani = animation.ArtistAnimation(fig, artists, interval = 400, blit = True, repeat_delay = 5000) writergif = animation.PillowWriter(fps=2) ani.save("myanim.gif", writer=writergif)
2
2
77,736,263
2023-12-30
https://stackoverflow.com/questions/77736263/pandas-loc-method-using-not-and-in-operators
I have a data frame and I want to remove some rows if their value is not equal to some values that I have stored in a list. So I have a list variable stating the values of objects I want to keep: allowed_values = ["value1", "value2", "value3"] And I am attempting to remove rows from my dataframe if a certain column does not contain 1 of the allowed_values. At first I was using a for loop and if statement like this: for index, row in df.iterrows(): if row["Type"] not in allowed_values: # drop the row, was about to find out how to do this, but then I found out about the `.loc()` method and thought it would be better to use this. So using the .loc() method, I can do something like this to only keep objects that have a Type value equal to value1: df = df.loc[df["Type"] == "value1"] But I want to keep all objects that have a Type in the allowed_values list. I tried to do this: df = df.loc[df["Type"] in allowed_values] but this gives me the following error: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). I would expect this to still work as using the in or a combination of not in operators still results in a boolean, so I'm not sure why the .loc() method doesn't like these operators. What exactly is wrong with using in or not operators in the .loc() method and how can I create a logical statment that will drop rows if their Type value is not in the allowed_values list? EDIT: I found this question asking about the same error I got and the answer was that you need to use bitwise operators only (e.g. ==, !=, &, |, etc) and not and in are not bitwise operators and require something called "truth-values". So I think the only way to get the functionality I want is to just have a lengthy bitwise logical operator, something like: df = df.loc[(df["Type"] == "value1") | (df["Type"] == "value2") | (df["Type"] == "value3")] Is there no other way to check each value is in the allowed_values list? This would make my code a lot neater (I have more than 3 values in the list, so this is a lengthy line).
Try this: import pandas as pd allowed_values = ['White', 'Green', 'Red'] df = pd.DataFrame({'color': ['White', 'Black', 'Green', 'White']}) df = df[df['color'].isin(allowed_values)] df color 0 White 2 Green 3 White If you must use .loc then you can use: df = df.loc[df['color'].isin(allowed_values)]
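Applied to the column name from the question, and with the same filter phrased as "drop the disallowed rows" in case that reads more naturally:

# keep only allowed rows
df = df.loc[df["Type"].isin(allowed_values)]

# the same thing phrased as dropping rows whose Type is not in the list
df = df.drop(df[~df["Type"].isin(allowed_values)].index)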
2
1
77,735,868
2023-12-30
https://stackoverflow.com/questions/77735868/stack-multiple-columns-in-a-pandas-dataframe
I have a pandas data frame and would like to stack 4 columns to 2. So I have this df = pd.DataFrame({'date':['2023-12-01', '2023-12-05', '2023-12-07'], 'other_col':['a', 'b', 'c'], 'right_count':[4,7,9], 'right_sum':[2,3,5], 'left_count':[1,8,5], 'left_sum':[0,8,4]}) date other_col right_count right_sum left_count left_sum 0 2023-12-01 a 4 2 1 0 1 2023-12-05 b 7 3 8 8 2 2023-12-07 c 9 5 5 4 and would like to get this date other_col side count sum 0 2023-12-01 a right 4 2 1 2023-12-05 b right 7 3 2 2023-12-07 c right 9 5 3 2023-12-01 a left 1 0 4 2023-12-05 b left 8 8 5 2023-12-07 c left 5 4
You can use a custom reshaping with a temporary MultiIndex: out = (df .set_index(['date', 'other_col']) .pipe(lambda x: x.set_axis(x.columns.str.split('_', expand=True), axis=1)) .rename_axis(columns=['side', None]) .stack('side').reset_index() ) Or a melt+pivot: tmp = df.melt(['date', 'other_col'], var_name='side') tmp[['side', 'col']] = tmp['side'].str.split('_', n=1, expand=True) out = (tmp.pivot(index=['date', 'other_col', 'side'], columns='col', values='value') .reset_index().rename_axis(columns=None) ) Output: date other_col side count sum 0 2023-12-01 a left 1 0 1 2023-12-01 a right 4 2 2 2023-12-05 b left 8 8 3 2023-12-05 b right 7 3 4 2023-12-07 c left 5 4 5 2023-12-07 c right 9 5 Or, much easier, using the janitor library and pivot_longer: # pip install pyjanitor import janitor out = df.pivot_longer(index=['date', 'other_col'], names_to=('side', '.value'), names_pattern=r'([^_]+)_([^_]+)') Output: date other_col side count sum 0 2023-12-01 a right 4 2 1 2023-12-05 b right 7 3 2 2023-12-07 c right 9 5 3 2023-12-01 a left 1 0 4 2023-12-05 b left 8 8 5 2023-12-07 c left 5 4
2
3
77,734,544
2023-12-30
https://stackoverflow.com/questions/77734544/breaking-up-dataframe-into-chunks-for-a-loop
I am using a loop (that was answered in this question) to iteratively open several csv files, transpose them, and concatenate them into a large dataframe. Each csv file is 15 mb and over 10,000 rows. There are over 1000 files. I am finding that the first 50 loops happen within a few seconds but then each loop takes a minute. I wouldn't mind keeping my computer on overnight but I may need to do this multiple times and I'm worried that it will get exponentially slower. Is there a more memory efficient way to do this such breaking up df into chunks of 50 rows each and then concatenating all of them at the end? In the following code, df is a dataframe of 1000 rows that has columns to indicate folder and file name. merged_data = pd.DataFrame() count = 0 for index, row in df.iterrows(): folder_name = row['File ID'].strip() file_name = row['File Name'].strip() file_path = os.path.join(root_path, folder_name, file_name) file_data = pd.read_csv(file_path, names=['Case', f'{folder_name}_{file_name}'], sep='\t') file_data_transposed = file_data.set_index('Case').T.reset_index(drop=True) file_data_transposed.insert(loc=0, column='folder_file_id', value=str(folder_name+'_'+file_name)) merged_data = pd.concat([merged_data, file_data_transposed], axis=0, ignore_index=True) count = count + 1 print(count)
The reason the code is slow is because you are using concat in the loop. You should collect the data in a python dictionary then do a single concat at the end. With few improvements: import pathlib import pandas as pd root_path = pathlib.Path('root') # use pathlib instead of os.path data = {} # use enumerate rather than create an external counter for count, (_, row) in enumerate(df.iterrows(), 1): folder_name = row['File ID'].strip() file_name = row['File Name'].strip() file_path = root_path / folder_name / file_name folder_file_id = f'{folder_name}_{file_name}' file_data = pd.read_csv(file_path, header=None, sep='\t', names=['Case', folder_file_id], memory_map=True, low_memory=False) data[folder_file_id] = file_data.set_index('Case').squeeze() print(count) merged_data = (pd.concat(data, names=['folder_file_id']) .unstack('Case').reset_index()) Output: >>> merged_data Case folder_file_id 0 1 2 3 4 0 folderA_file001.txt 1234.0 5678.0 9012.0 3456.0 7890.0 1 folderB_file002.txt 4567.0 8901.0 2345.0 6789.0 NaN Input data: >>> df File ID File Name 0 folderA file001.txt 1 folderB file002.txt >>> cat root/folderA/file001.txt 0 1234 1 5678 2 9012 3 3456 4 7890 >>> cat root/folderB/file002.txt 0 4567 1 8901 2 2345 3 6789 Multithreaded version: from concurrent.futures import ThreadPoolExecutor import pathlib import pandas as pd root_path = pathlib.Path('root') def read_csv(args): count, row = args # expand arguments folder_name = row['File ID'].strip() file_name = row['File Name'].strip() file_path = root_path / folder_name / file_name folder_file_id = f'{folder_name}_{file_name}' file_data = pd.read_csv(file_path, header=None, sep='\t', names=['Case', folder_file_id], memory_map=True, low_memory=False) print(count) return folder_file_id, file_data.set_index('Case').squeeze() with ThreadPoolExecutor(max_workers=2) as executor: batch = enumerate(df[['File ID', 'File Name']].to_dict('records'), 1) data = executor.map(read_csv, batch) merged_data = (pd.concat(dict(data), names=['folder_file_id']) .unstack('Case').reset_index())
3
1
77,732,026
2023-12-29
https://stackoverflow.com/questions/77732026/how-to-draw-images-visualizing-numpy-arrays-themselves
Are there tools for visualizing numpy arrays themselves like the image below?
You can do it from first principles: from matplotlib import pyplot as plt from matplotlib.patches import Rectangle from matplotlib.transforms import Bbox def square(i, j, k, origin=(0,0), zstep=0.2, **kwargs): xy = np.array(origin) + np.array((k, j)) + np.array([1, -1]) * i * zstep return Rectangle(xy, 1, 1, zorder=-i, **kwargs) def draw(a, *, origin=(0,0), zstep=0.2, ax=None, rect_kwargs=None, text_kwargs=None): ax = plt.gca() if ax is None else ax rect_kwargs = {} if rect_kwargs is None else rect_kwargs facecolor = rect_kwargs.pop('facecolor', 'lightblue') facecolor = np.broadcast_to(facecolor, a.shape) text_kwargs = {} if text_kwargs is None else text_kwargs textcolor = rect_kwargs.pop('color', 'k') textcolor = np.broadcast_to(textcolor, a.shape) text_kwargs = dict(ha='center', va='center') | text_kwargs im, jm, km = a.shape bboxes = [] origin = np.array(origin) + np.array((0, zstep * im)) for i in range(im): for j in range(jm): for k in range(km): r = square(i, j, k, origin=origin, edgecolor='k', facecolor=facecolor[i, j, k], **rect_kwargs) ax.add_patch(r) bb = r.get_bbox() bboxes.append(bb) center = bb.get_points().mean(0) ax.annotate(a[i, j, k], center, **text_kwargs, zorder=-i) bb = Bbox.union(bboxes) # help auto axis limits ax.plot(*bb.get_points().T, '.', alpha=0) return bb Simple example def np_example(shape): return 1 + np.arange(np.prod(shape)).reshape(shape) a, b = [np_example(shape) for shape in [(2, 3, 4), (2, 2, 4)]] fig, ax = plt.subplots(figsize=(4, 4)) draw(a, ax=ax) draw(b, origin=(0, a.shape[1] + 1), rect_kwargs=dict(facecolor='lightgreen')) acolor = np.broadcast_to('lightblue', a.shape) bcolor = np.broadcast_to('lightgreen', b.shape) draw( np.concatenate((a, b), axis=1), origin=(a.shape[2] + 2, 0), rect_kwargs=dict(facecolor=np.concatenate((acolor, bcolor), axis=1))) ax.set_aspect(1) ax.invert_yaxis() ax.set_axis_off() plt.tight_layout() plt.show() More involved example a, b, c, d = [np_example(shape) for shape in [(2, 3, 4), (2, 2, 4), (2, 3, 2), (2, 3, 4)]] colors = ['#ffe5b6', '#add8a3', '#c5dbfb', '#efd0dd'] arrs = [a, b, np.concatenate((a, b), axis=1), c, d, None, np.concatenate((a, c), axis=2), None, np.concatenate((a, d), axis=0)] a_, b_, c_, d_ = [np.broadcast_to(c, ar.shape) for c, ar in zip(colors, [a, b, c, d])] colors = [a_, b_, np.concatenate((a_, b_), axis=1), c_, d_, None, np.concatenate((a_, c_), axis=2), None, np.concatenate((a_, d_), axis=0)] titles = ['a', 'b', 'np.concatenate((a, b), axis=1)', 'c', 'd', None, 'np.concatenate((a, c), axis=2)', None, 'np.concatenate((a, d), axis=0)'] x_pos = np.array((0, 5.5, 12)) y_pos = np.array((0, 4.5, 10)) origins = np.c_[np.meshgrid(x_pos, y_pos)].T.reshape(-1, 2) fig, ax = plt.subplots(figsize=(6, 6)) for ar, color, title, origin in zip(arrs, colors, titles, origins): if ar is None: continue bb = draw(ar, origin=origin, rect_kwargs=dict(facecolor=color)) cc = np.array(bb.coefs['S']) txt_xy = np.diagonal(np.c_[1-cc, cc] @ bb.get_points()) ax.annotate(title, txt_xy, xytext=(0, 4), textcoords='offset points', ha='center', va='bottom') ax.set_aspect(1.1) ax.invert_yaxis() ax.set_axis_off() plt.tight_layout() plt.show()
4
3
77,734,349
2023-12-29
https://stackoverflow.com/questions/77734349/creating-a-new-column-by-a-condition-and-selecting-the-maximum-value-by-shift
This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': [10, 20, 30, 400, 50, 60], 'b': [897, 9, 33, 4, 55, 65] } ) And this is the output that I want. I want to create column c. a b c 0 10 897 NaN 1 20 9 897.0 2 30 33 NaN 3 400 4 400.0 4 50 55 NaN 5 60 65 NaN These are the steps needed: a) Find rows that df.a > df.b b) From the above rows compare the value from a to its previous value from b. If it was more than previous b value, put a in column c otherwise put the previous b. For example: a) Rows 1 and 3 met df.a > df.b b) From row 1, 20 is less than 897 so 897 is chosen. However in row 3, 400 is greater than 33 so it is selected. This image clarifies the point: This is what I have tried but it does not work: df.loc[df.a > df.b, 'c'] = max(df.a, df.b.shift(1))
Try: mask = df.a > df.b df.loc[mask, "c"] = np.where(df["a"] > df["b"].shift(), df["a"], df["b"].shift())[mask] print(df) Prints: a b c 0 10 897 NaN 1 20 9 897.0 2 30 33 NaN 3 400 4 400.0 4 50 55 NaN 5 60 65 NaN
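A shorter equivalent sketch, assuming the same column names and numeric dtypes as in the question: take the element-wise maximum of a and the shifted b, and keep it only where the condition holds.
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [10, 20, 30, 400, 50, 60],
                   "b": [897, 9, 33, 4, 55, 65]})

mask = df["a"] > df["b"]
# element-wise max of a and the previous row's b, masked to NaN elsewhere
df["c"] = np.maximum(df["a"], df["b"].shift()).where(mask)
print(df)
This produces the same c column as the np.where approach above.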
2
1
77,732,940
2023-12-29
https://stackoverflow.com/questions/77732940/aggregate-dataframe-using-simpler-vectorized-operations-instead-of-loops
I have a piece of code that works correctly (gives the expected answer), but is both inefficient and unnecessarily complex. It uses loops that I want to simplify and make more efficient, possibly using vectorized operations. It also converts a dataframe to a series and then back into a dataframe again - another code chunk that needs work. In other words, I want to make this piece of code more pythonic. I marked the problematic places in the code (below) with comments that start with # TODO:. The goal of the code is to summarize and aggregate the input dataframe df (which has the distributions of DNA fragment lengths for two types of regions: all and captured). This is a bioinformatic problem, part of a larger project that ranks enzymes by their ability to cut certain DNA regions into pieces of defined length. For the purpose of this question, the only relevant information is that length is integer and there are two types of DNA regions: all and captured. The aim is to produce the dataframe df_pur with purity vs. length_cutoff (the cutoff of length when purifying the DNA). The steps are: Compute the fraction of the total length for each type of regions that is above each of the length_cutoffs. Find the ratio of this fraction: captured / all for each of the length_cutoffs and store the result in a dataframe. import io import pandas as pd # This is a minimal reproducible example. The real dataset has 2 # columns and 10s of millions of rows. Column 1 is integer, column 2 # has 2 values: 'all' and 'captured': TESTDATA=""" length regions 1 all 49 all 200 all 20 captured 480 captured 2000 captured """ df = pd.read_csv(io.StringIO(TESTDATA), sep='\s+') # This is a minimal reproducible example. The real list has ~10 # integer values (cutoffs): length_cutoffs = [10, 100, 1000] df_tot_length = pd.DataFrame(columns=['tot_length']) df_tot_length['tot_length'] = df.groupby(['regions']).length.sum() df_tot_length.reset_index(inplace=True) print(df_tot_length) # regions tot_length # 0 all 250 # 1 captured 2500 df_frc_tot = pd.DataFrame(columns=['regions', 'length_cutoff', 'sum_lengths']) regions = df['regions'].unique() df_index = pd.DataFrame({'regions': regions}).set_index('regions') # TODO: simplify this loop (vectorize?): for length_cutoff in length_cutoffs: df_cur = (pd.DataFrame({'length_cutoff': length_cutoff, 'sum_lengths': df[df['length'] >= length_cutoff] .groupby(['regions']).length.sum()}, # Prevent dropping rows where no elements # are selected by the above # condition. 
Re-insert the dropped rows, # use for those sum_lengths = NaN index=df_index.index) # Correct the above sum_lengths = NaN to 0: .fillna(0)).reset_index() # Undo the effect of `fillna(0)` above, which casts the # integer column as float: df_cur['sum_lengths'] = df_cur['sum_lengths'].astype('int') # TODO: simplify this loop (vectorize?): for region in regions: df_cur.loc[df_cur['regions'] == region, 'frc_tot_length'] = ( df_cur.loc[df_cur['regions'] == region, 'sum_lengths'] / df_tot_length.loc[df_tot_length['regions'] == region, 'tot_length']) df_frc_tot = pd.concat([df_frc_tot, df_cur], axis=0) df_frc_tot.reset_index(inplace=True, drop=True) print(df_frc_tot) # regions length_cutoff sum_lengths frc_tot_length # 0 all 10 249 0.996 # 1 captured 10 2500 1.000 # 2 all 100 200 0.800 # 3 captured 100 2480 0.992 # 4 all 1000 0 0.000 # 5 captured 1000 2000 0.800 # TODO: simplify the next 2 statements: ser_pur = (df_frc_tot.loc[df_frc_tot['regions'] == 'captured', 'frc_tot_length'] .reset_index(drop=True) / df_frc_tot.loc[df_frc_tot['regions'] == 'all', 'frc_tot_length'] .reset_index(drop=True)) df_pur = pd.DataFrame({'length_cutoff': length_cutoffs, 'purity': ser_pur}) print(df_pur) # length_cutoff purity # 0 10 1.004016 # 1 100 1.240000 # 2 1000 inf Note: I am primarily interested in making the code more clear, simple and pythonic. Among the answers that are tied for the above, I will prefer the more efficient one in terms of speed. I have 8 GB available by default per job, but can increase this to 32 GB if needed. To benchmark efficiency, please use this real life-size example dataframe: num_rows = int(1e7) df = pd.concat([ pd.DataFrame({'length': np.random.choice(range(1, 201), size=num_rows, replace=True), 'regions': 'all'}), pd.DataFrame({'length': np.random.choice(range(20, 2001), size=num_rows, replace=True), 'regions': 'captured'}), ]).reset_index(drop=True)
IIUC, you can do: length_cutoffs = [10, 100, 1000] df["bins"] = pd.cut( df["length"], pd.IntervalIndex.from_breaks([-np.inf] + length_cutoffs + [np.inf], closed="left"), ) out = df.pivot_table(index=["regions", "bins"], values="length", aggfunc="sum") g = out.groupby(level=0) out["frc_tot_length"] = ( g["length"].transform(lambda x: [x.iloc[i:].sum() for i in range(len(x))]) ) / g["length"].sum() print(out) print() This prints: length frc_tot_length regions bins all [-inf, 10.0) 1 1.000 [10.0, 100.0) 49 0.996 [100.0, 1000.0) 200 0.800 [1000.0, inf) 0 0.000 captured [-inf, 10.0) 0 1.000 [10.0, 100.0) 20 1.000 [100.0, 1000.0) 480 0.992 [1000.0, inf) 2000 0.800 Then: x = out.unstack(level=0) x = x[("frc_tot_length", "captured")] / x[("frc_tot_length", "all")] print(x) Prints: bins [-inf, 10.0) 1.000000 [10.0, 100.0) 1.004016 [100.0, 1000.0) 1.240000 [1000.0, inf) inf dtype: float64
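If the Python-level list comprehension inside transform is too slow at tens of millions of rows, the same suffix sums can come from a reversed cumulative sum per group; a sketch reusing out and g from the snippet above:
# same result as the list-comprehension transform: suffix sums within each group
out["frc_tot_length"] = (
    g["length"].transform(lambda s: s[::-1].cumsum()[::-1]) / g["length"].sum()
)
The cumsum runs vectorized inside each group instead of re-summing a slice per row.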
2
2
77,723,957
2023-12-27
https://stackoverflow.com/questions/77723957/permutation-where-element-can-be-repeated-specific-times-and-do-it-fast
I have been looking for a function like this; sadly, I was not able to find it. Here is code that does what I describe: import itertools #Repeat every element specific times data = { 1: 3, 2: 1, 3: 2 } #Length n = 0 to_rep = [] for i in data: to_rep += [i]*data[i] n += data[i] #itertools will generate also duplicated ones ret = itertools.permutations(to_rep, n) #clean dups ret = list(set(ret)) So, the code will show all lists of length 6 where "1" appears 3 times, "2" appears once, and "3" appears twice; the code works. The challenge here is time: this method is too expensive! What would be the fastest way to do this? I have tested this with some samples of 27 times True and one time False, which is not much (in total there are only 28 ways to do it), but this code takes forever. There are cases with even more combinations, but I would first like to know a more efficient method. I was able to write code for the case of two elements, like True/False, but it seems a lot more complex with more than two elements. def f(n_true, n_false): ret = [] length = n_true + n_false full = [True]*length for rep in itertools.combinations(range(length), n_false): full_ = full.copy() for i in rep: full_[i] = False ret.append(full_) return ret Does a function that already does this exist? What is the best way to run this?
Here are several options. Simple recursive algorithm You can use the following recursive function that iterates over keys: def multiset_permutations(d: dict) -> list: if not d: return [tuple()] else: perms = [] for k in d.keys(): remainder = d.copy() if remainder[k] > 1: remainder[k] -= 1 else: del remainder[k] perms += [(k,) + q for q in multiset_permutations(remainder)] return perms Example usage: In: data = {1: 3, 2: 1, 3: 2} In: multiset_permutations(data) Out: [(1, 1, 1, 2, 3, 3), (1, 1, 1, 3, 2, 3), (1, 1, 1, 3, 3, 2), (1, 1, 2, 1, 3, 3), (1, 1, 2, 3, 1, 3), ... To see why it works, notice that all permutations must start with one of the keys as the first element. Once you have chosen the first element, 'k' from among the keys, the permutations that start with that element have the form [k] + q, where q is a permutation of the remainder multiset where the count of k is reduced by one (or deleted when it hits zero). So, the code simply chooses all first elements from among the keys, then recurses on the remainders. For an explanation of the base case, see: https://math.stackexchange.com/questions/4293329/does-the-set-of-permutations-of-an-empty-set-contain-an-empty-set This has a huge benefit in algorithmic operations when the number of keys is small but the counts per key is large, because the branching factor at each recursion is reduced to the number of keys at that stage (rather than the total size which may be much larger). Using the package more_itertools The function distinct_permutations() in the more_itertools package can do this. Thanks for Kelly Bundy for suggesting this in the comments. import more_itertools def multiset_to_list(d: dict) -> list: l = [] for k, v in d.items(): l += [k]*v return l def multiset_permutations_more_itertools(d: dict) -> list: l = multiset_to_list(d) return list(more_itertools.distinct_permutations(l, len(l))) Using the sympy package You can use the function multiset_permutations() in the sympy package, as detailed in Andrej Kesely's answer. from sympy.utilities import iterables def multiset_permutations_sympy(d: dict) -> list: return [tuple(x) for x in iterables.multiset_permutations(multiset_to_list(d))] Timing comparison on 2-group example Here we compare the timing of these different methods on a generalization of the 2-group example from the question. The recursive method is slower than more_itertools.distinct_permutations(), faster than sympy.utilities.iterables.multiset_permutations() for small to moderate n, and slightly slower for large n. The iteratools.permutations() method from the original question is not feasible for moderate or large n because of it's poor asymptotic complexity. To summarize, if you want peak performance, you should use more_itertools.distinct_permutations(). The recursive code would be useful if you don't want to add dependencies to your project, or if you are writing in a different language than python, or for educational purposes if you have general interest in the algorithm. The sympy version is also competitive and could be useful if you would prefer to have sympy as a dependency of your project. 
import itertools import more_itertools from sympy.utilities import iterables from time import time import matplotlib.pyplot as plt def multiset_to_list(d: dict) -> list: l = [] for k, v in d.items(): l += [k]*v return l def multiset_permutations_itertools(d: dict) -> list: l = multiset_to_list(d) return list(set(itertools.permutations(l, len(l)))) def multiset_permutations_more_itertools(d: dict) -> list: l = multiset_to_list(d) return list(more_itertools.distinct_permutations(l, len(l))) def multiset_permutations_sympy(d: dict) -> list: return [tuple(x) for x in iterables.multiset_permutations(multiset_to_list(d))] def multiset_permutations(d: dict) -> list: if not d: return [tuple()] else: perms = [] for k in d.keys(): remainder = d.copy() if remainder[k] > 1: remainder[k] -= 1 else: del remainder[k] perms += [(k,) + q for q in multiset_permutations(remainder)] return perms def mean_timing(f: callable, x, num_samples=10): total_time = 0.0 for _ in range(num_samples): t = time() y = f(x) dt = time() - t total_time += dt mean_time = total_time / num_samples return y, mean_time timings_itertools = [] timings_more_itertools = [] timings_sympy = [] timings = [] nn = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]#, 2048, 4096] for n in nn: print('n=', n) data = {0:n, 1:1} ret, dt = mean_timing(multiset_permutations, data) ret_more_itertools, dt_more_itertools = mean_timing(multiset_permutations_more_itertools, data) ret_sympy, dt_sympy = mean_timing(multiset_permutations_sympy, data) print('dt=', dt) print('dt_more_itertools=', dt_more_itertools) print('dt_sympy=', dt_sympy) assert(set(ret) == set(ret_more_itertools)) assert(set(ret) == set(ret_sympy)) timings.append(dt) timings_more_itertools.append(dt_more_itertools) timings_sympy.append(dt_sympy) if n < 11: ret_itertools, dt_itertools = mean_timing(multiset_permutations_itertools, data) assert (set(ret) == set(ret_itertools)) print('dt_itertools=', dt_itertools) timings_itertools.append(dt_itertools) plt.figure() plt.loglog(nn, timings) plt.loglog(nn, timings_more_itertools) plt.loglog(nn, timings_sympy) plt.loglog(nn[:len(timings_itertools)], timings_itertools) plt.xlabel('Problem size, n') plt.ylabel('Mean time (seconds)') plt.title('Multiset permutation timings comparison') plt.legend(['recursive', 'more_itertools', 'sympy', 'itertools']) plt.show() plt.savefig('multiset_permutations_timing_comparison.png', bbox_inches='tight') And the output:
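A memory note on the more_itertools option, assuming only iteration over the results is needed: distinct_permutations is a generator, so it can be consumed lazily instead of materialized into a list.
import more_itertools

data = {1: 3, 2: 1, 3: 2}
pool = [k for k, v in data.items() for _ in range(v)]

# stream the distinct permutations one at a time
for perm in more_itertools.distinct_permutations(pool):
    print(perm)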
3
3
77,728,680
2023-12-28
https://stackoverflow.com/questions/77728680/python-3-x-how-should-one-override-inherited-properties-from-parent-classes
A simple example probably shows more: class NaturalNumber(): def __init__(self, val): self._value = val def _get_value(self): return self._value def _set_value(self, val): if val < 0: raise ValueError( f"Cannot set value to {val}: Natural numbers are not negative." ) self._value = val value = property(_get_value, _set_value, None, None) class EvenNaturalNumber(NaturalNumber): # This seems superfluous but is required to define the property. def _get_value(self): return super()._get_value() def _set_value(self, val): if val % 2: raise ValueError( f"Cannot set value to {val}: Even numbers are divisible by 2." ) super()._set_value(val) # This seems superfluous but parent property defined with parent setter. value = property(_get_value, _set_value, None, None) This is a simplified example from a real usecase where I want to inject code for testing over production classes. The principle is the same, and here I am introducing an extra validation to value in the EvenNaturalNumber class that inherits from the NaturalNumber class. My boss doesn't like me getting rid of all his decorators, so ideally a solution should work however the underlying class is written. What would seem natural is: class NaturalNumber(): def __init__(self, val): self._value = val @property def value(self): return self._value @value.setter def value(self, val): if val < 0: raise ValueError( f"Cannot set value to {val}: Natural numbers are not negative." ) self._value = val class EvenNaturalNumber(NaturalNumber): @property def value(self): return super().value @value.setter def value(self, val): if val % 2: raise ValueError( f"Cannot set value to {val}: Even numbers are divisible by 2." ) super().value = val But this errors with valid sets on EvenNaturalNumber.value (say, = 2). With AttributeError: 'super' object has no attribute 'value'. Personally, I would say that's a bug in the python language! But I'm suspecting I have missed something. I have found a solution using decorators but this seems rather convoluted: class NaturalNumber(): def __init__(self, val): self._value = val @property def value(self): return self._value @value.setter def value(self, val): if val < 0: raise ValueError( f"Cannot set value to {val}: Natural numbers are not negative." ) self._value = val class EvenNaturalNumber(NaturalNumber): @property def value(self): return super().value @value.setter def value(self, val): if val % 2: raise ValueError( f"Cannot set value to {val}: Even numbers are divisible by 2." ) super(type(self), type(self)).value.fset(self, val) And another way is: class NaturalNumber(): def __init__(self, val): self._value = val @property def value(self): return self._value @value.setter def value(self, val): if val < 0: raise ValueError( f"Cannot set value to {val}: Natural numbers are not negative." ) self._value = val class EvenNaturalNumber(NaturalNumber): # This seems superfluous but is required to define the property. def _set_value(self, val): if val % 2: raise ValueError( f"Cannot set value to {val}: Even numbers are divisible by 2." ) NaturalNumber.value.fset(self, val) # This seems superfluous as parent property defined with parent setter. value = NaturalNumber.value.setter(_set_value) But this second solution seems rather unsatisfactory as it makes use of the knowledge that the value property is defined in the NaturalNumber class. I don't see any way to iterate over the EvenNaturalNumber.__mro__ unless I do this in EvenNaturalNumber._set_value(self, val), but that's the job of super(), isn't it? 
Any improvement or suggestions will be gratefully received. Otherwise, my boss is just going to have to live with super(type(self), type(self)).value.fset(self, val)! Added 01/01/2024. Many thanks to ShadowRanger for pointing out the error with super(type(self), type(self)); I completely agree this should be super(__class__, type(self)) (mia culpa). Why this has arisen is as follows. My parent classes would be better named Device (equivalent to NaturalNumber above). These talk to external devices over a serial connection (using pyserial). Various attributes on the external devices are exposed in the class as properties. So a getter will write to the external device to query a value and return the reply as the value appropriately decoded and typed for the callee. The setter writes to the device to set the value there and then checks there has been an appropriate reponse from the device (all with apropriate error handling for exceptional circumstances). Needless to say, the whole system is about getting these devices to interact in interesting ways. All of this seems quite natural. I am building a test system over the top of this so that (to some extent) the entire system can be tested (unit / functional / regression etc.) without actually being connected to any of the devices: I hesitate to use the phrase "test harness" but perhaps justified in this case. So the idea is to use inheritance over each Device class and have a MockDevice class that inherits from its parent and exposes the same properties. But each MockDevice class is initialised with a MockSerialConnection (as opposed to a real SerialConnection which the Device classes are initialised with), so that we can inject the expected responses into the MockSerialConnection, which are then read by the Device code and (hopefully) interpreted correctly, thus providing a mechanism to test changes to the code as the system develops. It is all smoke and mirrors, but hopefully in a good way. For the MockDevice properties the getters and setters need to set the relevant serial communication and then call the relevant getters and setters of the parent Device class, so that we have good code coverage. There is quite a lot of multiple inheritance going on here too (thank goodness for the __mro__), and exactly where in the inheritance hierachy a method or property is defined isn't completely fixed (and may vary in the future). We start with AbstractDevice classes that essentially define functionality of the device (with a whole class heirachy here for similar devices with more or fewer features or functionality), the actual Device class represents a specific device (down to catalogue number) from a given manufacturer, which is not only dependant of the AbstractDevice but the communication protocol (and type of serial connection) together with specifics (such as the commands to send for a specific attribute). The helper function solution while good for providing functionality that is easily overriden (library authors take note) doesn't quite fit the bill here for the purpose of testing. There is nothing stopping another developer down the road applying a "quick fix" to the setters and getters (rather than the relevant helper functions), that would never be picked up by the test system. I also don't really see the difference from my first code sample where I specifically define _get_value and _set_value and declare the property with value = property(_get_value, _set_value, None, None) and avoid the decorators. 
I am also a little disatisfied with the @NaturalNumber.value.setter solution because it needs apriori knowledge that the property being set resides in the NaturalNumber class (the improvement over my earlier solution accepted). Yes, I can work it out how it is now but as a test system it should work if down the line a developer moves the functionality to another place in the inheritance hierarchy. The test system will still error but it means we will need to maintain code in two places (production and test); it seems preferable that the test system walks the mro to find the relevant class property to override. If one could @super().value.setter then that would be perfect, but one can't. Notice that there is no problem in the getters: class EvenNaturalNumber(NaturalNumber): @property def value(self): return super().value works just fine. This is why I am tempted to think of this as a "language bug". The fact that super(__class__, type(self)).value.fset(self, val) works suggests (to me) that the python compiler would be able to detect super().value = val as a setter and act accordingly. I would be interested to know what others thought about this as a Python Enhancement Proposal (PEP), and if supportive what else should be addressed around the area of properties. I hope that helps to give a fuller picture. Also if there are other suggestions on a different way to approach the test harness, that will also be gratefully received.
So first off: super(type(self), type(self)) is straight up wrong; never do that (it seems like it works when there is one layer, it fails with infinite recursion if you make a child class and try to invoke the setter on it). Sadly, this is a case where there is no elegant solution. The closest you can get, in the general case, is the safe version of super(type(self), type(self)), using that code as-is, but replacing super(type(self), type(self)) with super(__class__, type(self)). You can simplify the code a little by having the child class use a decorator rather than manually invoking .setter, getting the benefits of your value = NaturalNumber.value.setter(_set_value) solution more succinctly: class EvenNaturalNumber(NaturalNumber): # No need to redefine the getter, since it works as is # Just use NaturalNumber's value for the decorator @NaturalNumber.value.setter def value(self, val): if val % 2: raise ValueError( f"Cannot set value to {val}: Even numbers are divisible by 2." ) # Use __class__, not type(self) for first argument, # to avoid infinite recursion if this class is subclassed super(__class__, type(self)).value.fset(self, val) __class__ is what no-arg super() uses to get the definition-time class for the method (whenever super or __class__ are referenced in a function being defined within a class, it's given a faked closure-score that provides __class__ as the class the function was defined in). This avoids the infinite recursion problem (and is slightly more efficient as a side-benefit, since loading __class__ from closure scope is cheaper than calling type(self) a second time). That said, in this particular case, the best solution is probably to do as DarkMath suggests, and use an internal validation method that the setter can depend on, so the property need not be overridden in the child at all. It's not a general solution, since the changes aren't always so easily factored out, but it's the best solution for this particular case.
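For completeness, a sketch of the "internal validation method" idea mentioned above; the _validate hook name is invented here for illustration, and routing __init__ through the property is a design choice the original classes did not make:
class NaturalNumber:
    def __init__(self, val):
        self.value = val          # route through the property so validation runs

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, val):
        self._validate(val)       # single hook point for subclasses
        self._value = val

    def _validate(self, val):
        if val < 0:
            raise ValueError(f"Cannot set value to {val}: Natural numbers are not negative.")


class EvenNaturalNumber(NaturalNumber):
    def _validate(self, val):
        super()._validate(val)    # plain method, so no-arg super() just works
        if val % 2:
            raise ValueError(f"Cannot set value to {val}: Even numbers are divisible by 2.")
With this layout, EvenNaturalNumber(3) and setting .value = 5 both raise, and the subclass never has to redefine the property at all.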
2
1
77,730,831
2023-12-29
https://stackoverflow.com/questions/77730831/image-dimension-mismatch-while-trying-to-add-noise-to-image-using-keras-sequenti
To Recreate this question's ask on your system, please find the Source code and Dataset here What I am trying? I am trying to create a simple GAN (Generative Adversarial N/w) where I am trying to recolor Black and White images using a few ImageNet images. What Process am I following? I have take a few Dog images, which are stored in folder ./ImageNet/dogs/ directory. Using Python code I have created 2 more steps where I convert Dog images into 244 x 244 resolution and save in ./ImageNet/dogs_lowres/ Dog Low Res. images into Grayscale and save in ./ImageNet/dogs_bnw/ Feed the Low Res BnW images to GAN model and generate colored images. Where am I Stuck? I am stuck at understanding how the Image dimensions / shape are used. I am getting the error as such: ValueError: `logits` and `labels` must have the same shape, received ((32, 28, 28, 3) vs (32, 224, 224)). Here's the code for Generator and Discriminator: # GAN model for recoloring black and white images generator = Sequential() generator.add(Dense(7 * 7 * 128, input_dim=100)) generator.add(Reshape((7, 7, 128))) generator.add(Conv2DTranspose(64, kernel_size=5, strides=2, padding='same')) generator.add(Conv2DTranspose(32, kernel_size=5, strides=2, padding='same')) generator.add(Conv2DTranspose(3, kernel_size=5, activation='sigmoid', padding='same')) # Discriminator model discriminator = Sequential() discriminator.add(Flatten(input_shape=(224, 224, 3))) discriminator.add(Dense(1, activation='sigmoid')) # Compile the generator model optimizer = Adam(learning_rate=0.0002, beta_1=0.5) generator.compile(loss='binary_crossentropy', optimizer=optimizer) # Train the GAN to recolor images epochs = 10000 batch_size = 32 and the training loop is as follows: for epoch in range(epochs): idx = np.random.randint(0, bw_images.shape[0], batch_size) real_images = bw_images[idx] noise = np.random.normal(0, 1, (batch_size, 100)) generated_images = generator.predict(noise) # noise_rs = noise.reshape(-1, 1) g_loss = generator.train_on_batch(noise, real_images) if epoch % 100 == 0: print(f"Epoch: {epoch}, Generator Loss: {g_loss}") Where is the Error? I get error on line: g_loss = generator.train_on_batch(noise, real_images) When I check for the shape of noise and real_images objects, this is what I get: real_images.shape (32, 224, 224) noise.shape (32, 100) Any help/suggestion is appreciated.
generator outputs [32 28 28 3], whereas it is getting a target of shape [32 224 224]. The target has two differences: it is greyscale rather than colour, and has larger dimensions. I am assuming the target supplied to the generator should be colour rather than grayscale. You can load the colour images and resize them using: def load_images_color(directory): images = [] for filename in os.listdir(directory): img_path = os.path.join(directory, filename) img = cv2.imread(img_path) img = cv2.resize(img, (224, 224)) # Resize images to 224x224 img = img.astype('float32') / 255.0 # Normalize pixel values images.append(img) return np.array(images) # Load colour images cl_images = load_images_color('./ImageNet/dogs') ... for epoch in range(epochs): ... cl_real = cl_images[idx] #Resize colour images to match generator output shape cl_real_small = [] for im in cl_real: cl_real_small.append( cv2.resize(im, (28, 28)) ) cl_real_small = np.array(cl_real_small) ... g_loss = generator.train_on_batch(noise, cl_real_small)
2
2
77,728,334
2023-12-28
https://stackoverflow.com/questions/77728334/install-lightgbm-gpu-in-a-wsl-conda-env
-------------------- original question --------------------------------- How to install LightGBM?? I have checked multiple sources but staill failed to install. I tried pip and conda but both return the error: [LightGBM] [Warning] Using sparse features with CUDA is currently not supported. [LightGBM] [Fatal] CUDA Tree Learner was not enabled in this build. Please recompile with CMake option -DUSE_CUDA=1 What i have tried is following: git clone --recursive https://github.com/microsoft/LightGBM cd LightGBM/ mkdir -p build cd build cmake -DUSE_GPU=1 .. make -j$(nproc) cd ../python-package pip install . -------------------- My solution below (cuda) --------------------------------- Thanks for the replies guys. I tried some ways and finally it works as below: First, make sure cmake is installed (under the wsl): sudo apt-get update sudo apt-get install cmake sudo apt-get install g++ Then, git clone --recursive https://github.com/microsoft/LightGBM cd LightGBM mkdir build cd build cmake -DUSE_GPU=1 -DOpenCL_LIBRARY=/usr/local/cuda/lib64/libOpenCL.so -DOpenCL_INCLUDE_DIR=/usr/local/cuda/include/ .. make -j4 Currently, the install is not linked to any conda env yet. So to do this, under the vscode terminal (or still wsl), conda activate an env and then create a jupyter notebook for testing: Make sure that lib_lightgbm.so is under the LightGBM/python-package, if not, copy into that folder. Then in the jupyter notebook: import sys import numpy as np sys.path.append('/mnt/d/lgm-test2/LightGBM/python-package') import lightgbm as lgb The final bit is you can refer Jame's reply that device needs to be set to 'cuda' instead of 'gpu'.
Seeing logs about CUDA in the original posts suggests to me that you're trying to use CUDA-enabled LightGBM. It's important to clarify that, as LightGBM supports two different GPU-accelerated builds: -DUSE_GPU=1 ("device": "gpu") = OpenCL-based build targeting a wide range of GPUs -DUSE_CUDA=1 ("device": "cuda") = CUDA kernels targeting NVIDIA GPUs As described in the project's docs (link), as of v4.0.0 building the lightgbm Python package from sources in its git repos requires use of a shell script in that repo. Run the following to build and install a CUDA-enabled version of the library from the source code on GitHub. git clone --recursive https://github.com/microsoft/LightGBM cd LightGBM/ sh build-python.sh install --cuda If you'd prefer to install from a release on PyPI without having to clone the repo, run the following. pip install \ --no-binary lightgbm \ --config-settings=cmake.define.USE_CUDA=ON \ 'lightgbm>=4.0.0' With CUDA-enabled LightGBM installed that way, you can then use GPU-accelerated training by passing "device": "cuda" through parameters, like this: import lightgbm as lgb from sklearn.datasets import make_regression X, y = make_regression(n_samples=10_000) dtrain = lgb.Dataset(X, label=y) bst = lgb.train( params={ "objective": "regression", "device": "cuda", "verbose": 1 }, train_set=dtrain, num_boost_round=5 )
3
4
77,723,202
2023-12-27
https://stackoverflow.com/questions/77723202/how-to-avoid-odoo-15-render-error-for-missing-values
In ODOO 15 I have made my own template that runs a method that returns some data to show in a dictionary. This is a piece of the template: <t t-set="PRI_par_DSP_par_stage" t-value="o.PRI_par_DSP_par_stage(o.date_start, o.date_end, o.source, o.domain)"/> <table border="1" style="text-align: left; width: auto; margin: 0 auto;"> <tbody> <t t-foreach="PRI_par_DSP_par_stage" t-as="row"> <tr> <td><t t-esc="row"/></td> <td><t t-esc="row['dsp_id']"/></td> <td><t t-esc="row['Brouillon']"/></td> [....] </tbody> </table> </t> And my method returns something like: [{'dsp_id': 'DEBIT', 'Brouillon': 3936.0, 'Qualification': 40299.24, 'Closing': 156753.59}, {'dsp_id': 'THD', 'Closing': 22487.8}] When rendering this I got an 500 error: Web Error message: Error when render the template KeyError: 'Brouillon' Template: 1026 Path: /t/t/div/main/t/div/div[7]/table/tbody/t/tr/td[3]/t Node: <t t-esc="row['Brouillon']"/> The error occurred while displaying the model and evaluated the following expressions: 1026<t t-esc="row['Brouillon']"/> Because, of course, I don't have that key in second dictionary from list. Is there a way (other than add those keys in my response) to overcome this issue, like checking if there is really a key and after that trying to render it, I have tried with t-if=row..., t-if=define(row...) still have this issue. Any ideas will be highly appreciated. :)
<td t-if="'dsp_id' in row.keys()"><t t-esc="row['dsp_id']"/></td> <td t-if="'Brouillon' in row.keys()"><t t-esc="row['Brouillon']"/></td>
2
2
77,731,279
2023-12-29
https://stackoverflow.com/questions/77731279/python-how-to-add-a-filter-to-a-histogram
I use Python in Power BI and currently have the following code: import matplotlib.pyplot as plt plt.hist(dataset.age, bins=10, edgecolor="#6A9662", color="#DDFFDD", alpha=0.75) plt.show() In my table I have a column TYPE with values "E" and "G". How can I add a filter TYPE = "E" to the Python code? Many thanks in advance! I expect the histogram to show only results for the filtered table where TYPE = "E".
I suggest filtering the dataframe before plotting: import matplotlib.pyplot as plt data = dataset[dataset["TYPE"]=="E"].age plt.hist(data, bins=10, edgecolor="#6A9662", color="#DDFFDD", alpha=0.75) plt.show()
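An equivalent spelling with DataFrame.query, assuming the column in the Power BI dataset table is literally named TYPE:
import matplotlib.pyplot as plt

# dataset is the dataframe Power BI passes into the Python visual
data = dataset.query("TYPE == 'E'")["age"]
plt.hist(data, bins=10, edgecolor="#6A9662", color="#DDFFDD", alpha=0.75)
plt.show()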
2
1
77,731,198
2023-12-29
https://stackoverflow.com/questions/77731198/polars-product-of-all-columns-except-one-in-2-lazyframes
I am learning polars, having come from pandas. In pandas land, I frequently operate on 2 dataframes, each with a datetime index, and the same columns. Example: df1 = pd.DataFrame(data={ 'time': pd.date_range('2023-01-01', periods=n, freq='1 min'), 'foo': np.random.uniform(0,127, size=n).astype(np.float64), 'bar': np.random.uniform(1e3,32767, size=n).astype(np.float64), 'baz': np.random.uniform(1e6,2147483, size=n).astype(np.float64) }).set_index('time') df2 = pd.DataFrame(data={ 'time': pd.date_range('2023-01-01', periods=n, freq='1 min'), 'foo': np.random.uniform(0,127, size=n).astype(np.float64), 'bar': np.random.uniform(1e3,32767, size=n).astype(np.float64), 'baz': np.random.uniform(1e6,2147483, size=n).astype(np.float64) }).set_index('time') To calculate the product of the columns I can do the following: df1 * df2 foo bar baz time 2023-01-01 00:00:00 8720.704791 1.632745e+08 2.276452e+12 2023-01-01 00:01:00 310.271257 3.375341e+08 2.195998e+12 2023-01-01 00:02:00 2936.646429 5.506997e+08 2.005228e+12 2023-01-01 00:03:00 12342.312737 3.383745e+08 3.779531e+12 2023-01-01 00:04:00 382.163185 1.371315e+08 1.529299e+12 The index remains the same, and each column in dataframe 1 is multiplied by its respective column in dataframe 2. I am now trying to do the same with polars LazyFrames. Here is the polars LazyFrame equivalent to my pandas dataframes above: df1 = pl.DataFrame(data={ 'time': pd.date_range('2023-01-01', periods=n, freq='1 min'), 'foo': np.random.uniform(0,127, size= n).astype(np.float64), 'bar': np.random.uniform(1e3,32767, size= n).astype(np.float64), 'baz': np.random.uniform(1e6,2147483, size= n).astype(np.float64) }).lazy() df2 = pl.DataFrame(data={ 'time': pd.date_range('2023-01-01', periods=n, freq='1 min'), 'foo': np.random.uniform(0,127, size= n).astype(np.float64), 'bar': np.random.uniform(1e3,32767, size= n).astype(np.float64), 'baz': np.random.uniform(1e6,2147483, size= n).astype(np.float64) }).lazy() I believe the correct way to operate on these 2 LazyFrames is to concat, group_by and then apply some aggregation. However, whilst something like sum works as expected: pl.concat([df1, df2]).group_by('time').sum().sort('time').collect() time foo bar baz datetime[ns] f64 f64 f64 2023-01-01 00:00:00 117.758176 35887.733953 3.4859e6 2023-01-01 00:01:00 83.828093 32037.128498 3.3425e6 2023-01-01 00:02:00 158.950876 51900.065898 2.1312e6 2023-01-01 00:03:00 90.781075 41924.70712 3.4727e6 2023-01-01 00:04:00 156.831011 34252.423581 3.0899e6 I do not know how to perform a product aggregation Things I have tried: agg(col('*').mul(col'*')) pl.concat([df1, df2]).group_by('time').agg(pl.col("*").mul(pl.col("*"))).sort('time').collect() I'm not even sure what this is doing - it produces a list of values for each column time foo bar baz datetime[ns] list[f64] list[f64] list[f64] 2023-01-01 00:00:00 [12359.923484, 114.893701] [1.3056e8, 4.3358e8] [1.2441e12, 3.0416e12] 2023-01-01 00:01:00 [1212.065713, 15767.846044] [8.9478e8, 8.6531e8] [1.3674e12, 2.0145e12] 2023-01-01 00:02:00 [38.587401, 3658.818448] [7.2488e8, 2.1755e8] [3.6923e12, 1.6436e12] 2023-01-01 00:03:00 [298.835241, 1202.613949] [7.9310e8, 5.6334e8] [1.6880e12, 1.9158e12] 2023-01-01 00:04:00 [11931.488236, 697.035171] [7.1008e7, 1.0676e9] [2.1519e12, 3.2458e12] How can I perform column-wise multiplication on my 2 dataframes, using time as the index?
You can create a .struct containing the "non-index" columns. df1.select("time", cols=pl.struct(pl.exclude("time"))) shape: (5, 2) ┌─────────────────────┬───────────────────────────────────┐ │ time ┆ cols │ │ --- ┆ --- │ │ datetime[ns] ┆ struct[3] │ ╞═════════════════════╪═══════════════════════════════════╡ │ 2023-01-01 00:00:00 ┆ {124.263949,27108.741665,1.2177e… │ │ 2023-01-01 00:01:00 ┆ {82.583389,6365.59054,1.5726e6} │ │ 2023-01-01 00:02:00 ┆ {11.729725,14632.406993,1.3759e6… │ │ 2023-01-01 00:03:00 ┆ {84.397273,25237.331443,1.9678e6… │ │ 2023-01-01 00:04:00 ┆ {93.136133,13659.764971,1.9972e6… │ └─────────────────────┴───────────────────────────────────┘ And perform a left-join to align all the values: (df1.select("time", cols=pl.struct(pl.exclude("time"))) .join( df2.select("time", cols=pl.struct(pl.exclude("time"))), on = "time", how = "left" ) ) shape: (5, 3) ┌─────────────────────┬───────────────────────────────────┬───────────────────────────────────┐ │ time ┆ cols ┆ cols_right │ │ --- ┆ --- ┆ --- │ │ datetime[ns] ┆ struct[3] ┆ struct[3] │ ╞═════════════════════╪═══════════════════════════════════╪═══════════════════════════════════╡ │ 2023-01-01 00:00:00 ┆ {124.263949,27108.741665,1.2177e… ┆ {97.580405,5774.836603,1.6745e6} │ │ 2023-01-01 00:01:00 ┆ {82.583389,6365.59054,1.5726e6} ┆ {22.390398,16631.827323,1.4927e6… │ │ 2023-01-01 00:02:00 ┆ {11.729725,14632.406993,1.3759e6… ┆ {116.287397,29089.731203,1.6769e… │ │ 2023-01-01 00:03:00 ┆ {84.397273,25237.331443,1.9678e6… ┆ {70.481321,7588.344937,1.6139e6} │ │ 2023-01-01 00:04:00 ┆ {93.136133,13659.764971,1.9972e6… ┆ {13.712869,27637.512573,1.7183e6… │ └─────────────────────┴───────────────────────────────────┴───────────────────────────────────┘ You can then multiply the structs and .unnest() (df1.select("time", cols=pl.struct(pl.exclude("time"))) .join( df2.select("time", cols=pl.struct(pl.exclude("time"))), on = "time", how = "left" ) .select("time", pl.col("cols") * pl.col("cols_right")) .unnest("cols") ) shape: (5, 4) ┌─────────────────────┬─────────────┬──────────┬───────────┐ │ time ┆ foo ┆ bar ┆ baz │ │ --- ┆ --- ┆ --- ┆ --- │ │ datetime[ns] ┆ f64 ┆ f64 ┆ f64 │ ╞═════════════════════╪═════════════╪══════════╪═══════════╡ │ 2023-01-01 00:00:00 ┆ 12125.72637 ┆ 1.5655e8 ┆ 2.0389e12 │ │ 2023-01-01 00:01:00 ┆ 1849.074903 ┆ 1.0587e8 ┆ 2.3473e12 │ │ 2023-01-01 00:02:00 ┆ 1364.019211 ┆ 4.2565e8 ┆ 2.3072e12 │ │ 2023-01-01 00:03:00 ┆ 5948.43133 ┆ 1.9151e8 ┆ 3.1758e12 │ │ 2023-01-01 00:04:00 ┆ 1277.163612 ┆ 3.7752e8 ┆ 3.4318e12 │ └─────────────────────┴─────────────┴──────────┴───────────┘
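An alternative sketch that skips the struct packing: join with a suffix and multiply matching columns by name (the value columns are listed explicitly here, and df1/df2 are the LazyFrames from the question):
import polars as pl

value_cols = ["foo", "bar", "baz"]

out = (
    df1.join(df2, on="time", how="left", suffix="_r")
       .select(
           "time",
           *[(pl.col(c) * pl.col(f"{c}_r")).alias(c) for c in value_cols],
       )
       .collect()
)
print(out)
Listing the columns once keeps the query readable, at the cost of maintaining that list if the schema changes.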
2
1
77,729,423
2023-12-28
https://stackoverflow.com/questions/77729423/pandas-dataframe-to-json-transform-pythonic-way
I am bit rusty in pandas to json transforms. I have a pandas data frame like this for a fictional library: userId visitId readingTime Books BookType u1 1 300 book1,book2,book3 Fiction u2 1 400 book4,book5 Horror u2 2 250 book6 Romance Need to create a json that is like: { "visitSummary": { "u1": [ { "readingTime": 300, "Books": [ "book1", "book2", "book3" ], "BookType": "Fiction" } ], "u2": [ { "readingTime": 400, "Books": [ "book4", "book5" ], "BookType": "Horror" }, { "readingTime": 250, "Books": [ "book6" ], "BookType": "Romance" } ] } } I was thinking to do it using nested loops and processing each row. I am hoping, there is a simpler pythonic way to do it. Using Python 3.10 and pandas 2.1.4
With split, groupby & to_dict : out = { "visitSummary": (df.assign(Books=df["Books"].str.split(",")) .groupby("userId").apply(lambda g: g.drop( columns=["userId", "visitId"]).to_dict("records")).to_dict()) } Output : import json; print(json.dumps(out, indent=4)) { "visitSummary": { "u1": [ { "readingTime": 300, "Books": [ "book1", "book2", "book3" ], "BookType": "Fiction" } ], "u2": [ { "readingTime": 400, "Books": [ "book4", "book5" ], "BookType": "Horror" }, { "readingTime": 250, "Books": [ "book6" ], "BookType": "Romance" } ] } }
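A variant without groupby.apply, assuming the same df, built from a plain dict comprehension over the groups:
out = {
    "visitSummary": {
        uid: (g.drop(columns=["userId", "visitId"])
               .assign(Books=lambda d: d["Books"].str.split(","))
               .to_dict("records"))
        for uid, g in df.groupby("userId")
    }
}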
2
1
77,728,931
2023-12-28
https://stackoverflow.com/questions/77728931/how-to-use-tqdm-and-processpoolexecutor
I'm creating a program that processes files sometimes very huge files in the order of GB. It takes a lot of time. At first I tried using ThreadPoolExecutor which relatively improved the speed. For example a ~200 Mb file would take about ~3 minutes running synchronously, with ThreadPoolExecutor, about ~130+ seconds. That's too slow for me. I tried ProcessPoolExecutor and it did wonderful. Same work done in about 12-18 seconds. Well makes sense since the task is eating a lot of cpu. Now the problem comes to visualizing the progress of the task. I used tqdm. With threads everything works wonderfully. I can see the beautiful progress. But when I change to use Processpool, the program crashes. My code looks like as follows: from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor, as_completed from pathlib import Path from tqdm import tqdm import os class FileProcessor: NJOBS = 8 # 4 for multiprocessing CHK_SIZE = 1024 * 1024 * 80 # 80 MB def __init__(self, fp: str | Path): self.filepath = Path(fp) self.filesize = os.path.getsize(self.filepath) @classmethod def _process_chunk(cls, chunk: bytes, pb: tqdm, *somemoreargs): # processes each byte, updates progressbar afterwards array = bytearray(chunk) for i in range(len(array)): # do sam' with byte at i time.sleep(0.0001) # for large file comment this if pb: pb.update() return bytes(array) # and some more vals def perform(self): def subchunk(chunk: bytes): schk_size = len(chunk) // FileProcessor.NJOBS if not schk_size: schk_size = len(chunk) # will work on this later i = 0 while (schunk := chunk[i:i + schk_size]): yield schunk # and some more info i += schk_size progressbar = tqdm(range(self.filesize)) file = self.filepath.open(mode="rb") executor = ThreadPoolExecutor(max_workers=FileProcessor.NJOBS) # executor = ProcessPoolExecutor(max_workers=os.get_cpu_count()) with progressbar, file, executor: while (chunk := file.read(FileProcessor.CHK_SIZE)): futures = [executor.submit(FileProcessor._process_chunk, sc, progressbar) for sc in subchunk(chunk)] # futures = [executor.submit(FileProcessor._process_chunk, sc, None) for sc in subchunk(chunk)] for future in as_completed(futures): # do something with results res = future.result() # progressbar.update(len(res)) # uncomment for multiprocessing # do final stuff This works well with multi threads. The progressbar fills smoothly. But when I change to multi processes, the program crashes. I am guessing is due to the fact that "processes not sharing memory space". So, the question is how can I use tqdm to show the progress smoothly whilst using multi processing. For now I am updating the bar after the process ends: in for future in as_completed(futures) but the progress display is rather ugly with big jumps
Since you want to use a ProcessPoolExecutor instance (you code still shows you using a ThreadPoolExecutor instance), then the main process which has nothing else to except wait for submitted tasks to complete can easily be the updater of the progress bar. You now need to arrange for your worker function/method (in your case _process_chunk to return an additional value that is the amount to advance the progress bar by: import os import time from concurrent.futures import (ProcessPoolExecutor, as_completed) from pathlib import Path from tqdm import tqdm class FileProcessor: NJOBS = 4 # 4 for multiprocessing CHK_SIZE = 1024 * 1024 * 80 # 80 MB def __init__(self, fp: str): self.filepath = Path(fp) self.filesize = os.path.getsize(self.filepath) @staticmethod def _process_chunk(chunk: bytes, *somemoreargs): array = bytearray(chunk) for b in array: #time.sleep(0.0001) ... # Also return the number of bytes processed: return bytes(array), len(array) def perform(self): def subchunk(chunk: bytes): schk_size = len(chunk) // FileProcessor.NJOBS if not schk_size: schk_size = len(chunk) # will work on this later i = 0 while schunk := chunk[i : i + schk_size]: yield schunk # and some more info i += schk_size progressbar = tqdm(total=self.filesize) file = self.filepath.open(mode="rb") executor = ProcessPoolExecutor(max_workers=FileProcessor.NJOBS) with progressbar, file, executor: while chunk := file.read(FileProcessor.CHK_SIZE): futures = [ executor.submit(FileProcessor._process_chunk, sc) for sc in subchunk(chunk) ] for future in as_completed(futures): _, bytes_processed = future.result() progressbar.update(bytes_processed) future.result() # do final stuff if __name__ == "__main__": f = FileProcessor("some_big_file.tar") f.perform()
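If per-byte granularity is not essential, a coarser per-task bar sidesteps cross-process progress reporting entirely; a self-contained toy sketch (the work function is only a stand-in for the real chunk processing):
from concurrent.futures import ProcessPoolExecutor, as_completed
from tqdm import tqdm

def work(n: int) -> int:
    # stand-in for processing one sub-chunk
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 32
    with ProcessPoolExecutor() as executor:
        futures = [executor.submit(work, n) for n in jobs]
        # one tick per completed future; no progress object crosses process boundaries
        for future in tqdm(as_completed(futures), total=len(futures)):
            future.result()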
2
2
77,728,651
2023-12-28
https://stackoverflow.com/questions/77728651/is-there-a-way-to-stream-multiple-videos-continuosly-with-the-same-address-and-r
I am participating in a project using RTSP and cameras footage. My objective is to create a continuous flow of videos streamed in the same address and route to update the videos according to the footage we need to display. So, the expected behavior I have to create is to change the video streamed in the address while using ffplay to visualize the stream I am creating. This is recreated by this diagram: But this simply does not work. Whenever I try to change the video, the original video just freeze and it is impossible to visualize anything. ffplay throws an error: rtsp://localhost:8554/sample: error while seeking. And this is weird for me because when I shut down the transmition and I open it again, it shows the changed footage that I wanted to visualize now. To create the RTSP server I am using this repository: https://github.com/p513817/rtsp4k. It is based on mediamtx and Python to recreate all the logic. I have no knowledge in RTSP, so I don't know if I should do anything low-level to keep the flow without finishing the video or I should try other framework. I need help! The logic I am doing to update the video is def put_placeholder(self, input:str, route: str): success = True try: # I change the Displayer to show my placeholder instead the original video self.dprs[route] = Displayer( input='./utils/placeholder.mp4', route=str(route), start_stream=True ) # Same here if route in PARAMS.CONF["streams"] : PARAMS.CONF["streams"][route] = { "input": './utils/placeholder.mp4' } write_config(PARAMS.CONF_PATH, PARAMS.CONF) except Exception as e: logging.exception(e) success = False finally: # I update the info in the .json that summarize all the streams self._update_info( input='./utils/placeholder.mp4', route=route, url=self.dprs[route].get_url() if success else "", status=success ) return self.info[route]
I realized that I was creating a new object instead of changing the current one. This made ffplay unable to keep track of the address and route, because the connection was lost and then immediately re-established, which made the stream crash. The final code for updating the videos is a modification of the put_placeholder function that I created in the Manager class in rtsp_handler.py. A new method called change_input() is added to the Displayer class; it simply updates the file's path instead of creating a new thread or class instance. def put_placeholder(self, input:str, route: str): success = True try: # NOTE: modify config path = "" if input == 'placeholder': path = './utils/placeholder.mp4' else: path = './data/'+input+'.mp4' self.dprs[route].change_input(path) if route in PARAMS.CONF["streams"] : PARAMS.CONF["streams"][route] = { "input": path } write_config(PARAMS.CONF_PATH, PARAMS.CONF) except Exception as e: logging.exception(e) success = False finally: self._update_info( input=path, route=route, url=self.dprs[route].get_url() if success else "", status=success ) return self.info[route] def change_input(self, new_input:str): self.input = new_input self.src = self._create_source()
2
1
77,724,098
2023-12-27
https://stackoverflow.com/questions/77724098/getting-csrf-verification-failed-request-aborted-on-a-django-forms-on-live-ser
Hello I am learning Django i write a app and publish it in AWS EC2 instance with gunicorn and ngnix on local environments everything works fine but in production on every submit on forms i get this 403 Error : Forbidden (403) CSRF verification failed. Request aborted. More information is available with DEBUG=True. iam sure in templates every form have {% csrf_token %} and this is my setting.py file of django app: import os from pathlib import Path from decouple import config BASE_DIR = Path(__file__).resolve().parent.parent SECRET_KEY = config("DJANGO_SECRET_KEY") SITE_ID = 1 DEBUG = False ALLOWED_HOSTS = ["winni-furnace.ca", "www.winni-furnace.ca"] CSRF_COOKIE_SECURE = True CSRF_COOKIE_DOMAIN = '.winni-furnace.ca' INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.sites', 'django.contrib.sitemaps', # local 'accounts.apps.AccountsConfig', 'blog.apps.BlogConfig', 'mail_service.apps.MailServiceConfig', 'messages_app.apps.MessagesAppConfig', 'offers.apps.OffersConfig', 'pages.apps.PagesConfig', 'projects.apps.ProjectsConfig', 'score.apps.ScoreConfig', 'services.apps.ServicesConfig', # 3rd 'taggit', "django_social_share", "whitenoise.runserver_nostatic" ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'app.urls' INTERNAL_IPS = [ # ... "127.0.0.1", # ... 
] TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [BASE_DIR / 'templates'] , 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'app.wsgi.application' AUTH_USER_MODEL = "accounts.CustomUser" LOGIN_REDIRECT_URL = "pages:index" LOGOUT_REDIRECT_URL = "pages:index" # Database # https://docs.djangoproject.com/en/5.0/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': config('POSTGRES_DB'), 'USER': config('POSTGRES_USER'), 'PASSWORD': config('POSTGRES_PASSWORD'), 'HOST': 'wiinidb.cxmwqowm20bd.us-east-1.rds.amazonaws.com', 'PORT': '5432', } } # Password validation # https://docs.djangoproject.com/en/5.0/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/5.0/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/5.0/howto/static-files/ EMAIL_PORT = 587 EMAIL_USE_TLS = True EMAIL_BACKEND = "django_ses.SESBackend" AWS_SES_REGION_NAME = "us-west-1" ASW_SES_REGION_ENDPOINT = "email-smtp.us-west-1.amazonaws.com" EMAIL_HOST_USER = "[email protected]" STATIC_URL = "/static/" STATICFILES_DIRS = [BASE_DIR / "static"] STATIC_ROOT = BASE_DIR / "staticfiles" STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' AWS_ACCESS_KEY_ID = config("AWS_ACCESS_KEY_ID") AWS_SECRET_ACCESS_KEY = config("AWS_SECRET_ACCESS_KEY") AWS_STORAGE_BUCKET_NAME = 'wiini' AWS_S3_SIGNATURE_NAME = config("AWS_S3_SIGNATURE_NAME") AWS_S3_REGION_NAME = 'ca-central-1' AWS_S3_FILE_OVERWRITE = False AWS_DEFAULT_ACL = None AWS_S3_VERITY = True DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage' # Default primary key field type # https://docs.djangoproject.com/en/5.0/ref/settings/#default-auto-field DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' this is my ngnix config : server{ listen 80; server_name winni-furnace.ca www.winni-furnace.ca; location /static/ { root /home/ubuntu/winnipeg_prod/app/staticfiles; } location / { include proxy_params; proxy_pass http://unix:/home/ubuntu/winnipeg_prod/app/app.sock; } } i dont have idea what i must do and why this happen and also iam not professional so if i do a silly misstake take it easy on me thank you update ( I turn debug to True ) i get this error page now : Forbidden (403) CSRF verification failed. Request aborted. Help Reason given for failure: Origin checking failed - https://winni-furnace.ca does not match any trusted origins. In general, this can occur when there is a genuine Cross Site Request Forgery, or when Django’s CSRF mechanism has not been used correctly. For POST forms, you need to ensure: Your browser is accepting cookies. The view function passes a request to the template’s render method. 
In the template, there is a {% csrf_token %} template tag inside each POST form that targets an internal URL. If you are not using CsrfViewMiddleware, then you must use csrf_protect on any views that use the csrf_token template tag, as well as those that accept the POST data. The form has a valid CSRF token. After logging in in another browser tab or hitting the back button after a login, you may need to reload the page with the form, because the token is rotated after a login. You’re seeing the help section of this page because you have DEBUG = True in your Django settings file. Change that to False, and only the initial error message will be displayed. You can customize this page using the CSRF_FAILURE_VIEW setting.
Everything works fine but in production on every submit on forms i get this 403 Error The reason you get this error is that you are requesting your website over https but you have not configured port 443 in nginx. Here is a post to do this using AWS. Here is a simple example: server { listen 80; server_name winni-furnace.ca www.winni-furnace.ca; return 301 https://winni-furnace.ca$request_uri; } server { listen 443 ssl; server_name winni-furnace.ca www.winni-furnace.ca; ssl_certificate /path/of/winni_cert_chain.crt; ssl_certificate_key /path/of/winni.key; location /static/ { root /home/ubuntu/winnipeg_prod/app/staticfiles; } location / { include proxy_params; proxy_pass http://unix:/home/ubuntu/winnipeg_prod/app/app.sock; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_set_header Host $http_host; } } Notice the line return 301 https://winni-furnace.ca$request_uri; in the first server block, which redirects all http requests to https automatically.
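On the Django side, the quoted "does not match any trusted origins" message points at settings that commonly accompany a TLS-terminating nginx proxy; a sketch with the hosts taken from the question (verify against your own deployment):
# settings.py
CSRF_TRUSTED_ORIGINS = [
    "https://winni-furnace.ca",
    "https://www.winni-furnace.ca",
]
# trust the X-Forwarded-Proto header set by nginx so Django knows the request was HTTPS
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")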
2
1
77,724,742
2023-12-28
https://stackoverflow.com/questions/77724742/reset-index-acting-in-an-unexpected-way-unexpected-keyword-argument-names
I'm cleaning up a dataset for plotting. Pretty standard stuff, transposing, deleting unnecessary columns, etc. Here is the code so far: #File name fname = r'E:\Grad School\Research\BOEM\Data\Grain Size\ForScript\allmyCoresRunTD.csv' #Read csv file into a dataframe. skips the first row that just has the 'record number'. Also only loads the #Dx rows and not the remainder of the bins. df = pd.read_csv(fname, index_col=None, skiprows=1, nrows=8) #Delete 'Average' from Malvern. The script will average the data for us df.drop(columns = df.columns[df.columns.str.startswith('Average')], inplace=True) df.drop(columns = df.columns[df.columns.str.startswith('FSH')], inplace=True) This is what the dataframe looks like at this point: I rearrange the data some: #Transpose the data df2 = df.transpose() #Change the first row into column names df2.columns = df2.iloc[0] #Delete header in first row df2 = df2.drop('Sample Name', axis=0) And now it looks like this: So then I just want to reset the index and name the new column SampleID. df2 = df2.reset_index(names=['SampleID']) But instead of moving the index to a column and naming the column SampleID, it gives the following error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[4], line 21 18 df2 = df2.drop('Sample Name', axis=0) 20 #Reset the indexes ---> 21 df2 = df2.reset_index(names='SampleID') 22 #df2['SampleID'] = df2['SampleID'].str.replace('19OCS-', '') 23 df2['SampleID'] = df2['SampleID'].str.split('.').str[0] File ~\Anaconda3\lib\site-packages\pandas\util\_decorators.py:311, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs) 305 if len(args) > num_allow_args: 306 warnings.warn( 307 msg.format(arguments=arguments), 308 FutureWarning, 309 stacklevel=stacklevel, 310 ) --> 311 return func(*args, **kwargs) TypeError: reset_index() got an unexpected keyword argument 'names' I originally wrote this code about 6 months ago and it worked just fine but now it's not working? I only changed the file path in the code now vs 6 months ago. I can run the line like this: df2 = df2.reset_index() And it will move the index correctly but, of course, the column is not named. I can use .rename to change the name: df2 = df2.rename({'index':'SampleID'}, axis=1) But I can't figure out why the names keyword is not working. I've double checked my syntax and looked at the API and I just cannot figure out why it's giving me this error. I've tried adding/removing brackets; I've tried specifying the level; I've tried using col_fill. It's probably something simple I did wrong. Thank you for your time. Note: It changes from df to df2 after the transpose but I continue working in df2 for the remainder of the code.
Check that the version of pandas you are running is >= 1.5.0. The names parameter of reset_index() was only introduced in 1.5.0.
import pandas as pd
pd.__version__
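If upgrading is not an option, the rename-based workaround already found in the question is equivalent on older pandas versions; a minimal sketch:
import pandas as pd

print(pd.__version__)  # reset_index(names=...) needs pandas >= 1.5.0

# Equivalent fallback for pandas < 1.5.0:
df2 = df2.reset_index().rename(columns={"index": "SampleID"})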
2
4
77,725,185
2023-12-28
https://stackoverflow.com/questions/77725185/can-i-run-just-pd-df-to-csv-in-a-different-thread
I have a pretty big pandas dataframe and I want to select some rows based on conditions. The problem is that the act of saving as CSV is separate from the overall flow of the program and consumes quite a bit of time. Is it possible to separate the threads so that the main thread progresses to the selected rows, while at the same time unselected rows are saved as csv in another thread? such as... # This is pseudo code import pandas as pd df = pd.DataFrame({"col1":[x for x in range(10000)], "col2":[x**2 for x in range(0, 10000)]}) df_selected = df[df.apply(lambda x: x.col1%3==0, axis=1)] df_unselected = df[df.apply(lambda x: x.col1%3!=0, axis=1)] def Other_thread_save_to_csv(df:pd.DataFrame): # this function is the last function to use df_unselected . Other_thread_save_to_csv(df_unselected ) all_other_hadlings(df_selected )
Yes, Python's threading or multiprocessing features are handy for concurrent tasks like saving a DataFrame to CSV while other work continues. There are a few things to consider when working with threads and multiprocessing in Python:
Global Interpreter Lock (GIL): threading may not speed up CPU-heavy tasks, but for I/O-bound tasks (like file writing) it works well.
Use multiprocessing for heavy CPU tasks: if your other DataFrame operations are CPU-intensive, multiprocessing is a better choice than threading.
Thread safety: make sure no other thread is altering the DataFrame while you are writing it to CSV.
# Example (runs as-is)
import pandas as pd
import threading

def save_to_csv(df, filename):
    df.to_csv(filename, index=False)

df = pd.DataFrame({"col1": [x for x in range(10000)], "col2": [x**2 for x in range(10000)]})
df_selected = df[df["col1"] % 3 == 0]
df_unselected = df[df["col1"] % 3 != 0]

# Start a thread to save a portion of the DataFrame
thread = threading.Thread(target=save_to_csv, args=(df_unselected, 'unselected_rows.csv'))
thread.start()

# Continue other tasks on the main thread
# additional_operations(df_selected)

# Optionally, wait for the thread to complete
thread.join()
The save_to_csv function runs on a separate thread, allowing your program to process df_selected while df_unselected gets saved in the background.
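If the CSV write turns out to be CPU-bound rather than I/O-bound, a separate process avoids the GIL entirely; a minimal sketch of the multiprocessing variant (note the DataFrame is pickled and copied into the child process, which costs memory and serialization time):
import multiprocessing as mp

if __name__ == "__main__":
    proc = mp.Process(target=save_to_csv, args=(df_unselected, "unselected_rows.csv"))
    proc.start()
    # ... keep working with df_selected in the parent process ...
    proc.join()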
2
5
77,723,077
2023-12-27
https://stackoverflow.com/questions/77723077/how-to-make-sure-all-combinations-for-each-number-are-given-in-an-integer-compos
I'm watching the MIT OCW 2008 Introduction to Computer Science and Programming, and trying to do the assignments. This is informal so there's no grading involved. There's this problem where you have to have to buy specific numbers of nuggets with packs containing 6, 9, and 20 nuggets. The aim is to learn to how to use recursions in Python. I'm solving for up to 65 nuggets. I've written the code below. It kind of works, but the problem is, I get exactly one solution for each number I input. How do I make sure that I get all the possible combinations for each number of total nuggets? def testtheorem(x): for a in range(0,15): for b in range(0,15): for c in range(0,15): y = 6*a + 9*b + 20*c if y == x and x < 66: print ("For", int(x), "total;") print ("6 piece can be", int(a)) print ("9 piece can be", int(b)) print ("20 piece can be", int(c)) testtheorem(x+1) return
I took a quick go at this question, following the comment from gog. The reason you are only getting one combination for each number is that you return prematurely. As soon as the first combination for x is found, you recurse into testtheorem(x + 1) and then hit return, so the remaining combinations for x are never checked. Eventually you reach a number at or above 66, which stops the recursion, and the calls unwind all the way back to your input number. I drew a small example starting at 64: To solve your problem, you need to change where in the function you call the recursive function and where you return. Other than that, your code is on the right track.
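A minimal sketch of one way to restructure it (a hypothetical rewrite, not the original poster's code): print every (a, b, c) combination for the current number first, and only then recurse to the next number.
def testtheorem(x):
    if x >= 66:
        return
    for a in range(15):
        for b in range(15):
            for c in range(15):
                if 6 * a + 9 * b + 20 * c == x:
                    print(f"For {x} total: 6-piece={a}, 9-piece={b}, 20-piece={c}")
    testtheorem(x + 1)  # recurse only after all combinations for x were printed

testtheorem(1)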
2
2
77,723,163
2023-12-27
https://stackoverflow.com/questions/77723163/can-a-decorator-name-classes-it-creates
I have a decorator which wraps functions and produces classes (for a better reason than in the following example). It all works, except that I'd like to be able to set the name of the class given by type(). For example, >>> class Animal: ... pass ... >>> def linnean_name(genus, species): ... def _linnean_name(fn): ... class _Animal(Animal): ... binomial_name = (genus, species) ... def converse(self): ... fn() ... _Animal.__name__ = fn.__name__.title() ... _Animal.__module__ = fn.__module__ ... return _Animal ... return _linnean_name ... >>> @linnean_name('Vombatus', 'ursinus') ... def Wombat(): ... print("Hello, I am a wombat.") ... >>> sheila = Wombat() >>> sheila.binomial_name ('Vombatus', 'ursinus') >>> sheila.converse() Hello, I am a wombat. All well and good, but >>> type(sheila) <class '__main__.linnean_name.<locals>._linnean_name.<locals>._Animal'> where I was hoping to see <class '__main__.Wombat'> This was the reason for ... _Animal.__name__ = fn.__name__.title() ... _Animal.__module__ = fn.__module__ which appear not to do anything particularly useful. How can I do this?
You are looking for __new__. It allows you to specify a new class name and share a common base class for pre-defined behavior. You could also make mixin classes by defining more bases/base classes, etc.
class Animal:
    def talk(self):
        print(f"Hello, I am {self}")

    def __str__(self):
        return f'{self.family} {self.spec}'

    def __new__(cls, family, spec, *args, **kwargs):
        cls_name = family.title() + spec.title()
        bases = (cls,)
        typ = type(cls_name, bases, {})
        obj = object.__new__(typ)
        # from here on it's like __init__,
        # except you return the object instead of None
        # (obj plays the role of self in __init__)
        obj.family = family
        obj.spec = spec
        return obj

sheila = Animal('Vombatus', 'ursinus')
sheila.talk()
As mentioned in the comments, you actually don't have to use a class and __new__; you can just have a factory that uses type. However, in my opinion __new__ is more readable.
class Animal:
    def talk(self):
        print(f"Hello, I am {self}")

    def __str__(self):
        return f'{self.family} {self.spec}'

def factory(family, spec, *args, **kwargs):
    cls_name = family.title() + spec.title()
    bases = (Animal,)
    typ = type(cls_name, bases, {})
    obj = typ(*args, **kwargs)
    obj.family = family
    obj.spec = spec
    return obj

sheila = factory('Vombatus', 'ursinus')
print(type(sheila))
sheila.talk()
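For completeness, the decorator approach from the question can also be made to show the desired name: the repr that type() prints uses the class's __qualname__ (together with __module__), not just __name__. A minimal sketch of that variant, assuming the rest of the question's code is unchanged:
def linnean_name(genus, species):
    def _linnean_name(fn):
        class _Animal(Animal):
            binomial_name = (genus, species)
            def converse(self):
                fn()
        _Animal.__name__ = fn.__name__.title()
        _Animal.__qualname__ = fn.__name__.title()  # this is what the class repr shows
        _Animal.__module__ = fn.__module__
        return _Animal
    return _linnean_name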
5
1
77,724,179
2023-12-27
https://stackoverflow.com/questions/77724179/how-numpy-arrays-are-overwritten-from-interpreter-point-of-view
I wrote two simple functions to learn CPython's behaviour regarding numpy arrays. Python 3.12.1 and numpy version 1.26.2, compiled by mkl (conda default) def foo(): for i in range(100): H = np.random.rand(1000, 1000) %timeit -r 100 foo() def baaz(): H = np.zeros((1000, 1000)) for i in range(100): H[:, :] = np.random.rand(1000, 1000) %timeit -r 100 baaz() Using dis library to see the bytecodes by calling dis.dis(foo()) an dis.dis(baaz()) I get these two outputs. Initially, I believed that baaz() should run faster than than foo() since we are reusing the H array, instead of de-allocating, and allocating H again on each loop. However, I see consistently that foo() is faster. I am wondering what causes this. I cannot read assembly bytecode, but by simply looking at dis.dis(foo()) and dis.dis(baaz()) output, I can see that foo() generates 13 extra lines compare to baaz(). Dissembled foo(): 671 0 RETURN_GENERATOR 2 POP_TOP 4 RESUME 0 676 6 LOAD_CONST 1 (None) 8 STORE_FAST 1 (lastline) 677 10 LOAD_FAST 0 (code) --> 12 LOAD_ATTR 1 (NULL|self + co_lines) 32 CALL 0 40 GET_ITER >> 42 FOR_ITER 23 (to 92) 46 UNPACK_SEQUENCE 3 50 STORE_FAST 2 (start) 52 STORE_FAST 3 (end) 54 STORE_FAST 4 (line) 678 56 LOAD_FAST 4 (line) 58 POP_JUMP_IF_NOT_NONE 1 (to 62) 60 JUMP_BACKWARD 10 (to 42) >> 62 LOAD_FAST 4 (line) 64 LOAD_FAST 1 (lastline) 66 COMPARE_OP 55 (!=) 70 POP_JUMP_IF_TRUE 1 (to 74) 72 JUMP_BACKWARD 16 (to 42) 679 >> 74 LOAD_FAST 4 (line) 76 STORE_FAST 1 (lastline) 680 78 LOAD_FAST 2 (start) 80 LOAD_FAST 4 (line) 82 BUILD_TUPLE 2 84 YIELD_VALUE 1 86 RESUME 1 88 POP_TOP 90 JUMP_BACKWARD 25 (to 42) 677 >> 92 END_FOR 681 94 RETURN_CONST 1 (None) >> 96 CALL_INTRINSIC_1 3 (INTRINSIC_STOPITERATION_ERROR) 98 RERAISE 1 ExceptionTable: 4 to 58 -> 96 [0] lasti 62 to 70 -> 96 [0] lasti 74 to 94 -> 96 [0] lasti Dissembled baaz(): 1 0 RESUME 0 2 2 LOAD_GLOBAL 0 (np) 12 LOAD_ATTR 3 (NULL|self + zeros) 32 LOAD_CONST 1 ((1000, 1000)) 34 CALL 1 42 STORE_FAST 0 (H) 3 44 LOAD_GLOBAL 5 (NULL + range) 54 LOAD_CONST 2 (100) 56 CALL 1 64 GET_ITER >> 66 FOR_ITER 43 (to 156) 70 STORE_FAST 1 (i) 4 72 LOAD_GLOBAL 0 (np) 82 LOAD_ATTR 6 (random) 102 LOAD_ATTR 9 (NULL|self + rand) 122 LOAD_CONST 3 (1000) 124 LOAD_CONST 3 (1000) 126 CALL 2 134 LOAD_FAST 0 (H) 136 LOAD_CONST 0 (None) 138 LOAD_CONST 0 (None) 140 BUILD_SLICE 2 142 LOAD_CONST 0 (None) 144 LOAD_CONST 0 (None) 146 BUILD_SLICE 2 148 BUILD_TUPLE 2 150 STORE_SUBSCR 154 JUMP_BACKWARD 45 (to 66) 3 >> 156 END_FOR 158 RETURN_CONST 0 (None) P.S: It may seem not obvious why one would think that baaz() should be faster, but this is indeed the case in a language like Julia Understanding Julia multi-thread / multi-process design.
In both cases you are creating a new array when you do np.random.rand(1000, 1000), and then de-allocating it. In the baaz case, you are also going through the work of updating the initial array. Hence it is slower. Numpy functions provide a way to avoid this; consider a simple case: arr[:] = arr + 1 This always creates a new array, which is the result of the expression arr + 1. You could avoid this by using: np.add(arr, 1, out=arr) Just a quick example of the above: In [31]: %%timeit -r 100 arr = np.zeros(1_000_000) ...: arr[:] = arr + 1 ...: ...: 1.85 ms ± 375 µs per loop (mean ± std. dev. of 100 runs, 100 loops each) In [32]: %%timeit -r 100 arr = np.zeros(1_000_000) ...: np.add(arr, 1, out=arr) ...: ...: 418 µs ± 29.1 µs per loop (mean ± std. dev. of 100 runs, 1,000 loops each) Unfortunately, I don't think there is anything equivalent for numpy.random functions. Possibly, numba can help you here, not sure how optimized np.random is with it though. But it's worth taking a look at.
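One possible exception worth checking (treat the exact signature as an assumption to verify against your numpy version): the newer numpy.random.Generator API accepts an out= argument on some methods, e.g. random() and standard_normal(), which fills a preallocated array in place instead of building a new one each iteration. A minimal sketch:
import numpy as np

rng = np.random.default_rng()
H = np.empty((1000, 1000))   # allocated once; must be a contiguous float64 array
for _ in range(100):
    rng.random(out=H)        # refills H in place, no fresh 1000x1000 array per loop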
2
1
77,723,914
2023-12-27
https://stackoverflow.com/questions/77723914/how-to-share-a-common-argument-between-nested-function-calls-in-python
I have two Python functions f() and g() which share one argument: a. def f(a, b): return a + b def g(a, c): return a * c Sometimes, g() is called standalone, in which case both arguments are necessary: g(a = 1, c = 3) Sometimes, g() is called as an argument inside a f() call: f(a = 1, b = g(a = 1, c = 3)) Notice that that the a argument is redundant, because it always has to have the same value in f() and g(). I would like to avoid this repetition. g() should figure out if it is called inside a f() call. If so, and if the a argument is not explicitly provided, then it should use the same a argument specified in the f() call. In other words, I want the same result as above with this simpler call: f(a = 1, g(c = 3))
What you want is not possible, g cannot know how it is called and it is evaluated before f. If you can modify the functions, g could return a partial if a is missing, then f can evaluate it if b is a function: from functools import partial def f(a, b): if callable(b): b = b(a) return a + b def g(a=None, c=None): if a is None: return partial(g, c=c) return a * c f(a = 1, b = g(a = 1, c = 3)) # 4 f(a = 1, b = g(c = 3)) # 4
2
3
77,719,569
2023-12-27
https://stackoverflow.com/questions/77719569/multivariable-gradient-descent-for-mles-nonlinear-model-in-python
I am trying to perform gradient descent to compute three MLEs (from scratch). I have data $x_i=s_i+w_i$ where $s_i=A(nu_i/nu_0)^{alpha}(nu_i/nu_0+1)^{-4alpha}$ where I have calculated the first derivatives analytically: import numpy as np import matplotlib.pyplot as plt #model function s_i def signal(A,nu_0,alpha,nu): return A*(nu/nu_0)**alpha*(1+nu/nu_0)**(-4*alpha) #partial derivatives of log likelihood function def MLE_A(A,nu_0,alpha,nu,x_i): nu=np.array(nu) x_i=np.array(x_i) return -np.sum(((nu/nu_0)**alpha*((A*(nu/nu_0)**alpha)/(nu/nu_0+1)**(4*alpha)-x_i))/(nu/nu_0+1)**(4*alpha)) def MLE_alpha(A,nu_0,alpha,nu,x_i): nu=np.array(nu) x_i=np.array(x_i) return -np.sum((A*(nu/nu_0)**alpha*(4*np.log(nu/nu_0+1)-np.log(nu/nu_0))*(x_i*(nu/nu_0+1)**(4*alpha)-A*(nu/nu_0)**alpha))/(nu/nu_0+1)**(8*alpha)) def MLE_nu_0(A,nu_0,alpha,nu,x_i): nu=np.array(nu) x_i=np.array(x_i) return -np.sum((A*alpha*(nu/nu_0)**(alpha)*(nu_0-3*nu)*((x_i*((nu)/nu_0+1)**(4*alpha))-A*(nu/nu_0)**alpha))/(nu_0*(nu+nu_0)*((nu)/nu_0+1)**(8*alpha))) def gradient_descent(A_init,nu_0_init,alpha_init,nu,x_i,iterations=1000,learning_rate=0.01): A=A_init nu_0=nu_0_init alpha=alpha_init theta=np.array([A_init,nu_0_init,alpha_init]) updated_theta=[theta] for i in range(iterations): new_theta = theta - learning_rate * np.array([MLE_A(A,nu_0,alpha,nu,x_i), MLE_nu_0(A,nu_0,alpha,nu,x_i), MLE_alpha(A,nu_0,alpha,nu,x_i)]) theta=new_theta updated_theta.append(theta) A,nu_0,alpha = new_theta[0],new_theta[1],new_theta[2] return(updated_theta) A=6 nu_0=2 alpha=1 nu=np.linspace(0.05,1.0,200) x_i=signal(A,nu_0,alpha,nu)+np.random.normal(0,0.05,len(nu)) params= gradient_descent(A,nu_0,alpha,nu,x_i,iterations=10000,learning_rate=0.01) print(params[-1]) A_fit=params[-1][0] nu_0_fit=params[-1][1] alpha_fit=params[-1][2] plt.plot(nu,x_i) plt.plot(nu,signal(A_fit,nu_0_fit,alpha_fit,nu)) plt.show() Sometimes I get errors like RuntimeWarning: overflow encountered in power and RuntimeWarning: invalid value encountered in true_divide and sometimes I get wildly off values. I used different values for the learning rate and it did not fix it. I have checked the functions by hand and using symbolic software. Also, I used latexify to see that I did type them in right so I am assuming it is my implementation of the gradient descent that is somehow off.
Observations Your MCVE seems to have hard time with some xi values when assessing derivatives: it experiences overflow or division by 0. As a consequence, your parameter vector contains NaN and the algorithm fails. For some setup, the problem is not present, for instance fixing the random seed to: np.random.seed(1234567) Generate a dataset that prevents your code to fail and then we get the following result: # [5.66701368 2.18827948 5.48118174] Which may indicate that the last derivative is not exact. MCVE Performing gradient descent for MSE with numerical assessment of the gradient works fine (for other seeds as well) and then is model independent (user does not have to provide derivatives): import numpy as np import numdifftools as nd from scipy import optimize import matplotlib.pyplot as plt def model(x, A, nu0, alpha): return A * np.power((x/nu0), 1. * alpha) * np.power((1 + x/nu0), (-4. * alpha)) def gradient_factory(model, x, y): def wrapped(p): return 0.5 * np.sum(np.power(y - model(x, *p), 2.)) return nd.Gradient(wrapped) np.random.seed(1234567) p0 = (6, 2, 1) nu = np.linspace(0.05, 1.0, 200) xi = model(nu, *p0) + 0.05 * np.random.normal(size=nu.size) def gradient_descent(model, x, y, p0, tol=1e-16, maxiter=500, rate=0.001, atol=1e-10, rtol=1e-8): gradient = gradient_factory(model, x, y) p = np.array(p0) dp = gradient(p) for _ in range(maxiter): # Update gradient descent: p_ = p - rate * dp dp_ = gradient(p_) # Update rate: Dp_ = p_ - p Ddp_ = dp_ - dp rate = np.abs(Dp_.T @ Ddp_) / np.power(np.linalg.norm(Ddp_), 2) # Break when precision is reached: if np.allclose(p, p_, atol=atol, rtol=rtol): break # Next iteration: p = p_ dp = dp_ else: raise RuntimeError("Max Iteration (maxiter=%d) reached" % maxiter) return p p = gradient_descent(model, nu, xi, [1., 1., 1.]) # array([5.82733367, 2.06411618, 0.98227514]) And converges toward the same solution than curve_fit: popt, pcov = optimize.curve_fit(model, nu, xi) # (array([5.82905928, 2.06399904, 0.98240413]), # array([[ 0.56149129, -0.03771281, 0.04195576], # [-0.03771281, 0.00493833, -0.00287965], # [ 0.04195576, -0.00287965, 0.00314415]])) You may adapt this MCVE to perform MLE instead of minimizing MSE. You can also provide your own definition of gradient to drive the descent instead of assessing it numerically.
3
1
77,720,049
2023-12-27
https://stackoverflow.com/questions/77720049/optimizing-multiple-try-except-code-for-web-scraping
I have a web scraping script where, due to many reasons, some codes "break" if not finding the information as expected. I am handling it with multiple "try/except" blocks. asin = item.get('data-asin') title = item.find_all('span',{'class' : 'a-size-base-plus a-color-base a-text-normal'})[0].get_text() try: label = item.find_all('span',{'aria-label' : 'Escolha da Amazon'})[0].get('aria-label') except IndexError : label = None try: current_whole_price = item.find_all('span', {'class' : 'a-price'})[0].find('span', {'class' : 'a-price-whole'}).get_text().replace(',','').replace('.','') except: current_whole_price = '0' try : current_fraction_price = item.find_all('span', {'class' : 'a-price'})[0].find('span', {'class' : 'a-price-fraction'}).get_text() except : current_fraction_price = '0' current_price = float(current_whole_price+'.'+current_fraction_price) try : rating_info = item.find('div', {'class':'a-row a-size-small'}).get_text() rating = float(rating_info[:3].replace(',','.')) rating_count = int(re.findall(r'\d+', rating_info)[-1]) except : rating = None rating_count = None try: ad = True if (item.find_all('span', {'class' : 'a-color-secondary'})[0].get_text() == 'Patrocinado') else False except IndexError: ad = False _ = {'productId' : itemId, 'asin' : asin, 'opt_label' : label, #"ad": True if (item.find_all('span', {'class' : 'a-color-secondary'})[0].get_text() == 'Patrocinado') else False , "ad": ad, 'title' : title, 'current_price' : current_price, 'url':f'https://www.amazon.com.br/dp/{asin}', 'rating' : rating, 'rating_count' : rating_count, } But, looking at my code, you can see that many of the "try/except" are similar. I wonder if I could use some kind of function where I just pass the "item", the "desired selector" and the "failsafe value" to return if it goes wrong. I intend to make my code simpler and smaller. I take any tips! Regards!
Yes, you can create a function to handle the repetitive try/except blocks, but you first have to settle on a common way of getting your fields. It might then be something like:
def get_element(item, selector, attribute, failsafe_value=None):
    try:
        element = item.find(selector).get(attribute)
        return element if element else failsafe_value
    except (AttributeError, IndexError):
        return failsafe_value
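Since most of the fields in the question are extracted as text and sometimes post-processed, a slightly more flexible helper may fit better; a sketch, with illustrative names and defaults that are not from the original code:
def safe_extract(item, name, attrs=None, default=None, transform=None, index=0):
    """Return the text of the index-th matching tag, optionally transformed."""
    try:
        value = item.find_all(name, attrs or {})[index].get_text()
        return transform(value) if transform else value
    except (IndexError, AttributeError, ValueError):
        return default

# Usage sketches (simplified relative to the nested lookups in the question):
# whole = safe_extract(item, 'span', {'class': 'a-price-whole'}, default='0',
#                      transform=lambda s: s.replace(',', '').replace('.', ''))
# ad = safe_extract(item, 'span', {'class': 'a-color-secondary'}) == 'Patrocinado'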
2
3
77,719,714
2023-12-27
https://stackoverflow.com/questions/77719714/detect-and-create-mask-for-color-highlighted-section-on-image
How can I create a highlight mask for an image? I have attempted various methods, however, I have not been able to attain the desired outcome. How to detect colored sections without any adverse effects on the pink colored section, or any other color such as yellow? Input Current output Desired output import cv2 import numpy as np def detect_highlighted_text(image_path): image = cv2.imread(image_path) hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) lower_pink = np.array([150, 50, 50]) upper_pink = np.array([180, 255, 255]) lower_red = np.array([0, 50, 50]) upper_red = np.array([10, 255, 255]) pink_mask = cv2.inRange(hsv_image, lower_pink, upper_pink) red_mask = cv2.inRange(hsv_image, lower_red, upper_red) combined_mask = cv2.bitwise_or(pink_mask, red_mask) highlighted_text = cv2.bitwise_and(image, image, mask=combined_mask) cv2.imshow("Highlighted Text", highlighted_text) cv2.waitKey(0) cv2.destroyAllWindows() detect_highlighted_text("path/to/your/image.jpg") I tried to use create mask using inRange function, also searched about using findCountors.
You are almost there. You need a couple of Morphological filters and a little bit of tuning in your HSV thresholds. To obtain the final mask, you could also apply a Flood-fill at one corner using white color. Something like this:
# Imports:
import cv2
import numpy as np

# Set image directory:
directoryPath = "D://opencvImages//"

# Set image path:
imagePath = directoryPath + "41OzV.jpg"

# Load image:
inputImage = cv2.imread(imagePath)

# Convert the BGR pixel to HSV:
hsvImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2HSV)

# Threshold on red:
rangeThreshold = 5
lowerValues = np.array([0, 34, 153])
upperValues = np.array([179, 255, 255])

# Create HSV mask:
redMask = cv2.inRange(hsvImage, lowerValues, upperValues)
This gives you the following binary mask: It’s alright, but it would benefit from some morphology. Apply a Closing first to close the gaps, and then an Opening to get rid of the small blobs of noise:
# Use a little bit of morphology to clean the mask:
# Set kernel (structuring element) size:
kernelSize = 3
# Set morph operation iterations:
opIterations = 3
# Get the structuring element:
morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform closing:
redMask = cv2.morphologyEx(redMask, cv2.MORPH_CLOSE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
# Perform opening:
redMask = cv2.morphologyEx(redMask, cv2.MORPH_OPEN, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
This will yield the following cleaned mask: Now, let’s use this mask and AND it with the original image:
# Mask original image:
outputImage = cv2.bitwise_and(inputImage, inputImage, mask=redMask)
Which produces the following image: Almost there, now let’s flood-fill at (0,0) (top left corner) to propagate white throughout the black color:
# Flood-fill at seed point:
seedPoint = (0, 0)
# Set new color and tolerance:
newColor = (255, 255, 255)
tolerance = (5, 5, 5)
cv2.floodFill(outputImage, None, seedPoint, newColor, loDiff=tolerance, upDiff=tolerance)
This is the final result:
2
1
77,697,302
2023-12-21
https://stackoverflow.com/questions/77697302/how-to-run-ollama-in-google-colab
I have a code like this. And I'm launching it. I get an ngrok link. !pip install aiohttp pyngrok import os import asyncio from aiohttp import ClientSession # Set LD_LIBRARY_PATH so the system NVIDIA library becomes preferred # over the built-in library. This is particularly important for # Google Colab which installs older drivers os.environ.update({'LD_LIBRARY_PATH': '/usr/lib64-nvidia'}) async def run(cmd): ''' run is a helper function to run subcommands asynchronously. ''' print('>>> starting', *cmd) p = await asyncio.subprocess.create_subprocess_exec( *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE, ) async def pipe(lines): async for line in lines: print(line.strip().decode('utf-8')) await asyncio.gather( pipe(p.stdout), pipe(p.stderr), ) await asyncio.gather( run(['ollama', 'serve']), run(['ngrok', 'http', '--log', 'stderr', '11434']), ) Which I'm following, but the following is on the page How can I fix this? Before that, I did the following !choco install ngrok !ngrok config add-authtoken ----- !curl https://ollama.ai/install.sh | sh !command -v systemctl >/dev/null && sudo systemctl stop ollama
1. Run ollama but don't stop it !curl https://ollama.ai/install.sh | sh # should produce, among other thigns: # The Ollama API is now available at 0.0.0.0:11434 This means Ollama is running (but do check to see if there are errors, especially around graphics capability/Cuda as these may interfere. However, Don't run !command -v systemctl >/dev/null && sudo systemctl stop ollama (unless you want to stop Ollama). The next step is to start the Ollama service, but since you are using ngrok I'm assuming you want to be able to run the LLM from other environments outside the Colab? If this isn't the case, then you don't really need ngrok, but since Colabs are tricky to get working nicely with async code and threads it's useful to use the Colab to e.g. run a powerful enough VM to play with larger models than (say) anthing you could run on your dev environment (if this is an issue). 2. Set up ngrok and forward the local ollama service to a public URI Ollama isn't yet running as a service but we can set up ngrok in advance of this: import threading import time import os import asyncio from pyngrok import ngrok import threading import queue import time from threading import Thread # Get your ngrok token from your ngrok account: # https://dashboard.ngrok.com/get-started/your-authtoken token="your token goes here - don't forget to replace this with it!" ngrok.set_auth_token(token) # set up a stoppable thread (not mandatory, but cleaner if you want to stop this later class StoppableThread(threading.Thread): def __init__(self, *args, **kwargs): super(StoppableThread, self).__init__(*args, **kwargs) self._stop_event = threading.Event() def stop(self): self._stop_event.set() def is_stopped(self): return self._stop_event.is_set() def start_ngrok(q, stop_event): try: # Start an HTTP tunnel on the specified port public_url = ngrok.connect(11434) # Put the public URL in the queue q.put(public_url) # Keep the thread alive until stop event is set while not stop_event.is_set(): time.sleep(1) # Adjust sleep time as needed except Exception as e: print(f"Error in start_ngrok: {e}") Run that code so the functions exist, then in the next cell, start ngrok in a separate thread so it doesn't hang your colab - we'll use a queue so we can still share data between threads because we want to know what the ngrok public URL will be when it runs: # Create a queue to share data between threads url_queue = queue.Queue() # Start ngrok in a separate thread ngrok_thread = StoppableThread(target=start_ngrok, args=(url_queue, StoppableThread.is_stopped)) ngrok_thread.start() That will be running, but you need to get the results from the queue to see what ngrok returned, so then do: # Wait for the ngrok tunnel to be established while True: try: public_url = url_queue.get() if public_url: break print("Waiting for ngrok URL...") time.sleep(1) except Exception as e: print(f"Error in retrieving ngrok URL: {e}") print("Ngrok tunnel established at:", public_url) This should output something like: Ngrok tunnel established at: NgrokTunnel: "https://{somelongsubdomain}.ngrok-free.app" -> "http://localhost:11434" 3. Run ollama as an async process import os import asyncio # NB: You may need to set these depending and get cuda working depending which backend you are running. 
# Set environment variable for NVIDIA library # Set environment variables for CUDA os.environ['PATH'] += ':/usr/local/cuda/bin' # Set LD_LIBRARY_PATH to include both /usr/lib64-nvidia and CUDA lib directories os.environ['LD_LIBRARY_PATH'] = '/usr/lib64-nvidia:/usr/local/cuda/lib64' async def run_process(cmd): print('>>> starting', *cmd) process = await asyncio.create_subprocess_exec( *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE ) # define an async pipe function async def pipe(lines): async for line in lines: print(line.decode().strip()) await asyncio.gather( pipe(process.stdout), pipe(process.stderr), ) # call it await asyncio.gather(pipe(process.stdout), pipe(process.stderr)) That creates the function to run an async command but doesn't run it yet. This will start ollama in a separate thread so your Colab isn't blocked: import asyncio import threading async def start_ollama_serve(): await run_process(['ollama', 'serve']) def run_async_in_thread(loop, coro): asyncio.set_event_loop(loop) loop.run_until_complete(coro) loop.close() # Create a new event loop that will run in a new thread new_loop = asyncio.new_event_loop() # Start ollama serve in a separate thread so the cell won't block execution thread = threading.Thread(target=run_async_in_thread, args=(new_loop, start_ollama_serve())) thread.start() It should produce something like: >>> starting ollama serve Couldn't find '/root/.ollama/id_ed25519'. Generating new private key. Your new public key is: ssh-ed25519 {some key} 2024/01/16 20:19:11 images.go:808: total blobs: 0 2024/01/16 20:19:11 images.go:815: total unused blobs removed: 0 2024/01/16 20:19:11 routes.go:930: Listening on 127.0.0.1:11434 (version 0.1.20) Now you're all set up. You can either do the next steps in the Colab, but it might be easier to run on your local machine if you normally dev there. 4. Run an ollama model remotely from your local dev environment Assuming you have installed ollama on your local dev environment (say WSL2), I'm assuming it's linux anyway... but i.e. your laptop or desktop machine in front of you (as opposed to Colab). Replace the actual URI below with whatever public URI ngrok reported above: export OLLAMA_HOST=https://{longcode}.ngrok-free.app/ You can now run ollama and it will run on the remote in your Colab (so long as that's stays up and running). e.g. run this on your local machine and it will look as if it's running locally but it's really running in your Colab and the results are being served to wherever you call this from (so long as the OLLAMA_HOST is set correctly and is a valid tunnel to your ollama service: ollama run mistral You can now interact with the model on the command line locally but the model runs on the Colab. If you want to run larger models, like mixtral, then you need to be sure to connect your Colab to a Back end compute that's powerful enough (e.g. 48GB+ of RAM, so V100 GPU is minimum spec for this at the time of writing). Note: If you have any issues with cuda or nvidia showing in the ouputs of any steps above, don't proceed until you fix them.
17
12
77,690,729
2023-12-20
https://stackoverflow.com/questions/77690729/django-built-in-logout-view-method-not-allowed-get-users-logout
Method Not Allowed (GET): /users/logout/ Method Not Allowed: /users/logout/ [10/Dec/2023 12:46:21] "GET /users/logout/ HTTP/1.1" 405 0 This is happening when I went to url http://127.0.0.1:8000/users/logout/ urls.py: from django.contrib.auth import views as auth_views urlpatterns = [ ...other urls... path('users/logout/', auth_views.LogoutView.as_view(), name='logout'), ] I am expecting user to logout
Since Django 5, you need to do this through a POST request, because logging out has side effects. The fact that it worked with a GET request was (likely) a violation of the HTTP protocol: it made it possible for certain scripts to log out users without the user wanting to. Requiring a POST request also protects against cross-site request forgery (CSRF) [wiki]. So in the template, work with a mini-form:
<form method="post" action="{% url 'logout' %}">
    {% csrf_token %}
    <button type="submit">logout</button>
</form>
Also note that Django's LogoutView [Django-doc] does not render a page: it only works with a POST request that logs out the user making the request, and redirects to the page provided by the ?next_page=… parameter, or, if that parameter is absent, to LOGOUT_REDIRECT_URL [Django-doc]. Visiting the page will thus not work and returns a 405 Method Not Allowed. If you want a page to log out from, you add an extra view:
from django.views.generic import TemplateView

urlpatterns = [
    # …,
    path(
        'do-logout/',
        TemplateView.as_view(template_name='my_template.html'),
        name='do-logout',
    )
]
where my_template.html then contains such a mini-form. You can then visit /do-logout/ (or use another path) to render a page with a mini-form to log out.
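For completeness, a minimal sketch of the redirect side mentioned above (assuming the logout URL from the question is kept as-is):
# settings.py
LOGOUT_REDIRECT_URL = '/'   # where LogoutView redirects after a successful POST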
21
20
77,706,694
2023-12-23
https://stackoverflow.com/questions/77706694/how-to-modify-my-python-slack-bolt-socket-mode-code-to-reload-automatically-duri
I have the following Python code which works fine using the Web Socket mode. When I type a slash command (/hello-socket-mode) on my Slash application then it very well invokes my method, handle_some_command():- import os from slack_bolt import App from slack_bolt.adapter.socket_mode import SocketModeHandler # Install the Slack app and get xoxb- token in advance app = App(token="<bot token>") # Add functionality here @app.command("/hello-socket-mode") def handle_some_command(ack, body, logger): ack() print('testing slash command') logger.info(body) if __name__ == "__main__": # Create an app-level token with connections:write scope handler = SocketModeHandler(app,"<app token>") handler.start() Since I'm in the development phase of my Slack app, I want to add reload functionality to this backend code of the web socket such that it reloads automatically whenever there is a change in the code. I tried to add uvicorn here but with that, I stopped getting invocation to my backend method, handle_some_command()whenever I tried to enter the slash command in the my slack application. I created a separate Python script called run.py with following code:- from uvicorn import run if __name__ == "__main__": run("main:app", host="0.0.0.0", port=3000, reload=True, log_level="info") And then executed, python run.py to run my app using uvicorn to reload whenever there is a code change. It is not working at all, I'm not getting any slash command invocation to my backend code now. I just need some way of reload and not necessarily uvicorn here. I would appreciate it if someone could please guide me on this to make it work.
Code like this will work; you can drop the FastAPI dependency if it is not needed once development is finished.
import os
from fastapi import FastAPI
from slack_bolt.adapter.socket_mode import SocketModeHandler
from slack_bolt.app import App

# Set environment vars
BOT_TOKEN = os.environ["SLACK_BOT_TOKEN"]
APP_TOKEN = os.environ["SLACK_APP_TOKEN"]
SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]

# Load Slack Bolt
app = App(token=BOT_TOKEN, signing_secret=SIGNING_SECRET)

# Load FastAPI
api = FastAPI()

# Load Slack Bolt handlers
# Action().add_to(bolt)
SocketModeHandler(app, APP_TOKEN).connect()

@app.message("hello")
def message_hello(message, say):
    say(f"Hey there <@{message['user']}>!")

# Load FastAPI endpoints
@api.get("/")
async def root():
    return {"OK"}
And run this code like this:
bash -c 'uvicorn app.main:api --reload --host 0.0.0.0 --port 4000 --log-level warning'
2
3
77,692,864
2023-12-20
https://stackoverflow.com/questions/77692864/django-request-from-requestfactory-on-windows-does-not-include-session-attribut
My drone pipeline suddenly started failing on Windows only (the Linux pipeline, executing the same tests, works fine). error: assert hasattr(request, 'session'), "The session-based temporary "\ AssertionError: The session-based temporary message storage requires session middleware to be installed, and come before the message middleware in the MIDDLEWARE list Looking at RequestFactory, this does indeed not give a session, but somehow it works on Linux and it used to work on Windows too. What has changed, why? And how should I add a dummy session to the request? test.py class TestDynamicAlertSubscriptionAdminModule(TestCase): def setUp(self): request = RequestFactory().post(path="dummy") request.user = self.user request._messages = messages.storage.default_storage(request) settings.py MIDDLEWARE = [ "django.contrib.sessions.middleware.SessionMiddleware", "django.contrib.messages.middleware.MessageMiddleware", ] INSTALLED_APPS = [ "django.contrib.sessions", "django.contrib.messages", ]
Why is the session attribute not present on the request? You're using RequestFactory to create your request and passing it directly to the view. This means the request doesn't go through any of the middleware, and since SessionMiddleware is responsible for setting up the session, request.session is never set. This is described in more detail here. Why does this work in one environment and not the other? The messages can be stored in different backends depending on the MESSAGE_STORAGE setting. Given you have two different environments, there's a possibility that this setting is configured differently: one environment is using SessionStorage and the other one of FallbackStorage or CookieStorage. The environment using SessionStorage will fail and error out, while the other will succeed since those backends don't rely on the session.
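If the test really does need session-backed message storage, the request built by RequestFactory can be given a session by hand; a minimal sketch of the usual pattern (written from memory, worth double-checking against your Django version):
from django.contrib.messages.storage.fallback import FallbackStorage
from django.contrib.sessions.middleware import SessionMiddleware
from django.test import RequestFactory

request = RequestFactory().post(path="dummy")
SessionMiddleware(lambda req: None).process_request(request)  # attaches request.session
request.session.save()
request._messages = FallbackStorage(request)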
2
2
77,692,807
2023-12-20
https://stackoverflow.com/questions/77692807/aws-glue-4-0-failing-when-calling-dynamicframe-fromdf
I'm trying to convert a Spark data frame in Python 3.10 into a dynamic frame using Glue's fromDF method from awsglue.dynamicframe import DynamicFrame DynamicFrame.fromDF(frame, glue_context, "node") But this throws an error on CloudWatch saying com.mongodb.spark.sql.connector.exceptions.MongoSparkException: Partitioning failed. Inspecting the logs further, I found that under the hood Glue seems to be using $collStats somewhere, which is not supported by AWS DocumentDB. Note that this function worked when the job was Glue 2.0, but updating it to 4.0 has started causing this issue. com.mongodb.MongoCommandException: Command failed with error 304: 'Aggregation stage not supported: '$collStats'' I haven't really tried fixing this issue, because this seems to be happening under the hood, and I don't have access to the source code of Glue 4.0. The only thing is that this did not fail in Glue 2.0.
What worked for me was setting the partitioner in the mongo-spark batch read configuration to SinglePartitionPartitioner. It seems the default was changed to SamplePartitioner in Glue 4.0 and the newer mongo-spark connector versions, which is what causes the problem (the sample partitioner relies on $collStats, which DocumentDB does not support). Refer to the documentation.
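A hedged sketch of where that option goes when reading with the 10.x connector used by Glue 4.0 (the exact option keys and the fully-qualified class name below are assumptions to verify against the connector documentation):
df = (
    spark.read.format("mongodb")
    .option("connection.uri", "<documentdb-connection-uri>")
    .option("database", "<database>")
    .option("collection", "<collection>")
    # Avoid the default $collStats-based sample partitioner on DocumentDB:
    .option("partitioner",
            "com.mongodb.spark.sql.connector.read.partitioner.SinglePartitionPartitioner")
    .load()
)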
2
1
77,704,979
2023-12-22
https://stackoverflow.com/questions/77704979/importerror-libsingular-singular-4-3-1-so-cannot-open-shared-object-file-no-s
I am trying to run sage in the Ubuntu 23.10 terminal and it throws this exact error: ImportError: libsingular-Singular-4.3.1.so: cannot open shared object file: No such file or directory I am not sure which package to install and (considering the new ubuntu 23 (https://askubuntu.com/questions/1465218/pip-error-on-ubuntu-externally-managed-environment-%C3%97-this-environment-is-extern) thing) how. if it matters, I installed sage from apt.
This is a known bug: A quick workaround is downgrading singular-related packages. Here is a one-line-solution: curl -O https://mirror.enzu.com/ubuntu/pool/universe/s/singular/libsingular4-dev-common_4.3.1-p3+ds-1_all.deb -O https://mirror.enzu.com/ubuntu/pool/universe/s/singular/libsingular4-dev_4.3.1-p3+ds-1_amd64.deb -O https://mirror.enzu.com/ubuntu/pool/universe/s/singular/libsingular4m3n0_4.3.1-p3+ds-1_amd64.deb -O https://mirror.enzu.com/ubuntu/pool/universe/s/singular/singular-data_4.3.1-p3+ds-1_all.deb -O https://mirror.enzu.com/ubuntu/pool/universe/s/singular/singular-modules_4.3.1-p3+ds-1_amd64.deb -O https://mirror.enzu.com/ubuntu/pool/universe/s/singular/singular-ui_4.3.1-p3+ds-1_amd64.deb -O https://mirror.enzu.com/ubuntu/pool/universe/s/singular/singular_4.3.1-p3+ds-1_amd64.deb && sudo dpkg -i libsingular4-dev-common_4.3.1-p3+ds-1_all.deb libsingular4-dev_4.3.1-p3+ds-1_amd64.deb libsingular4m3n0_4.3.1-p3+ds-1_amd64.deb singular-data_4.3.1-p3+ds-1_all.deb singular-modules_4.3.1-p3+ds-1_amd64.deb singular-ui_4.3.1-p3+ds-1_amd64.deb singular_4.3.1-p3+ds-1_amd64.deb && sudo apt-mark hold libsingular4-dev-common libsingular4-dev libsingular4m3n0 singular-data singular-modules singular-ui singular && rm -f libsingular4-dev-common_4.3.1-p3+ds-1_all.deb libsingular4-dev_4.3.1-p3+ds-1_amd64.deb libsingular4m3n0_4.3.1-p3+ds-1_amd64.deb singular-data_4.3.1-p3+ds-1_all.deb singular-modules_4.3.1-p3+ds-1_amd64.deb singular-ui_4.3.1-p3+ds-1_amd64.deb singular_4.3.1-p3+ds-1_amd64.deb Complete answer on git: https://github.com/sagemath/sage/issues/36688
3
4
77,707,877
2023-12-23
https://stackoverflow.com/questions/77707877/change-x-axis-labels
I have created a Python function (see below code) that when called, plots a grouped bar chart which is working ok. I have also attached an image of the created graph. Is there a way instead of printing 1,2,3,4,5,6,7,8,9,10 on the x-axis, 6 times for each indexed list inside data_list. And then separate each of the 10 grouped bars apart from each other so it is more obvious and easier to interpret? import matplotlib.pyplot as plt def plot_graph(list, title_str): x_plot = [] y_plot = [] legend_labels = ['a', 'b', 'c', 'd', 'e', 'f'] x_labels = [1,2,3,4,5,6,7,8,9,10] x_labels_text = ['red', 'blue', 'green', 'purple', 'olive', 'brown'] x_colors = ['tab:red', 'tab:blue', 'tab:green', 'tab:purple', 'tab:olive', 'tab:brown'] fig, ax = plt.subplots() ax.set_xlabel('\nFault Type', fontsize=15) ax.set_ylabel('Number of Errors (%)', fontsize=15) ax.set_title('Total Number of Errors (%)', fontsize=15) for i in range(len(list)): for j in range(len(list[i])): x_plot.append(x_labels[i]) y_plot.append(list[i][j]) ax.bar(range(len(x_plot)), y_plot, label=legend_labels, color=x_colors, width=0.5) ax.set_xticks(range(len(x_plot)), x_plot) ax.set_ylim(ymax=100) #ax.legend(['a', 'b', 'c', 'd', 'e', 'f']) patches, _ = ax.get_legend_handles_labels() labels = [*'abcdef'] ax.legend(*patches, labels, loc='best') fig.tight_layout() plt.setp(ax.get_xticklabels(), fontsize=10) plt.savefig("C:/CoolTermWin64Bit/CoolTermWin64Bit/uart_data/Gathered Data/Code Generated Data Files/" + title_str + ".pdf") data_list = [ [10, 20, 30, 40, 50, 60], [10, 20, 30, 40, 50, 60], [10, 20, 30, 40, 50, 60], [10, 20, 30, 40, 50, 60], [10, 20, 30, 40, 50, 60], [10, 20, 30, 40, 50, 60], [10, 20, 30, 40, 50, 60], [10, 20, 30, 40, 50, 60], [10, 20, 30, 40, 50, 60], [10, 20, 30, 40, 50, 60] ] plot_graph(data_list, "data grouped bar graph")
Some notes before I answer your questions: make sure your indentation is correct; consider using numpy for your arrays (faster and easier if you know what you're doing); and don't use list as a variable name, as it has a specific meaning in Python. Also, please see point 3 if you want a more complete answer to point 1.
Not an exactly elegant solution, but it works (data here is the argument you called list):
for i in range(len(data)):
    for j in range(len(data[i])):
        if j == len(x_labels) // 2:
            x_plot.append(i + 1)
        else:
            x_plot.append('')
        y_plot.append(data[i][j])
First of all, you need to fix your labels; there's something wrong with your code there. Here's how to add a legend:
categories = ['example1', 'example2',...'example6']
plt.legend(categories, title='Legend')
Instead of writing out how to do a whole grouped bar chart (which is what you're looking for), I'm going to link a guide. This should help solve problem number 1 as well: https://matplotlib.org/stable/gallery/lines_bars_and_markers/barchart.html
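For reference, a short sketch of how the grouped-bar recipe from that guide maps onto the data in the question (variable names are illustrative): each of the 6 series is drawn at a small offset, and one tick is centred under every group, which also answers question 1.
import numpy as np
import matplotlib.pyplot as plt

data = np.array(data_list)            # shape (10, 6): 10 groups, 6 bars each
n_groups, n_bars = data.shape
group_pos = np.arange(n_groups)       # one slot per group: 0..9
width = 0.12                          # 6 * 0.12 < 1 leaves a visible gap between groups
colors = ['tab:red', 'tab:blue', 'tab:green', 'tab:purple', 'tab:olive', 'tab:brown']
labels = ['a', 'b', 'c', 'd', 'e', 'f']

fig, ax = plt.subplots()
for k in range(n_bars):
    ax.bar(group_pos + k * width, data[:, k], width, color=colors[k], label=labels[k])

ax.set_xticks(group_pos + width * (n_bars - 1) / 2)   # centre one tick under each group
ax.set_xticklabels(range(1, n_groups + 1))
ax.set_ylim(0, 100)
ax.legend()
plt.show()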
2
1
77,705,521
2023-12-22
https://stackoverflow.com/questions/77705521/why-is-pdb-showing-blank-line-or-comment-but-there-is-a-line
I'm trying to interactively run pdb on a remote system I don't own (I'm not sure if that's relevant), and it's behaving a bit differently than I expect. Listing the source shows there is clearly a line 22, but when I try to set the breakpoint, it tells me the line is blank. Fiddling a bit, I can get it to break on line 23, which is the line AFTER the one I want to break on. See terminal output below. I'm sure it's something dumb, but I'm at a loss. EDIT: I just noticed something new.... I'm using conda, and if I use the base environment, I don't see this issue, but if I use an environment I created, it does. (ecg) [tmhedr3@hmrihpcp03 ecg_analysis]$ python -m pdb rnn.py > /home/tmhedr3/ecg_analysis/rnn.py(1)<module>() -> import os (Pdb) ll 0 import os 1 -> import glob 2 import numpy as np 3 import pandas as pd 4 # import tensorflow as tf 5 # import keras 6 # from keras import layers 7 8 ages = open('ages.csv', 'r') 9 lines = ages.readlines() 10 n = len(lines) 11 12 data = np.ndarray((n, 5000, 12)) 13 14 for i in range(n): 15 li = lines[i].strip().split(sep=',') 16 name = li[0] 17 age = li[1] 18 19 leadData = np.loadtxt('./data/csv/' + name, skiprows=1, delimiter=',', usecols=(0,1,2,3,4,5,6,7,8,9,10,11), dtype='int') 20 data[i] = leadData 21 22 print(data.shape) 23 (Pdb) b 22 *** Blank or comment (Pdb) b 23 Breakpoint 1 at /home/tmhedr3/ecg_analysis/rnn.py:23 (Pdb) c > /home/tmhedr3/ecg_analysis/rnn.py(23)<module>() -> print(data.shape) (Pdb)
Here it is, you've been hit by this issue - https://github.com/python/cpython/issues/1033193 - wait, correct link: https://github.com/python/cpython/issues/103319 - which is why it only affects a few Python versions. All it took to find this was to open Lib/pdb.py in CPython's source tree, look for "ll", and visually inspect the code from there - the issue itself is described in comments in the source code. CPython's Lib/pdb.py ...
class Pdb(...):
    ...
    # [~ line 1992]
    def _getsourcelines(self, obj):
        # GH-103319
        # inspect.getsourcelines() returns lineno = 0 for
        # module-level frame which breaks our code print line number
        # This method should be replaced by inspect.getsourcelines(obj)
        # once this bug is fixed in inspect
        lines, lineno = inspect.getsourcelines(obj)
        lineno = max(1, lineno)
        return lines, lineno
Of course, changing the behavior of inspect.getsourcelines now to do as written would surely break a lot of code depending on its count-from-0 behavior. We are likely stuck with this unless someone adds an optional, defaulting-to-False, flag parameter for getsourcelines to start counting from 1.
2
1
77,716,501
2023-12-26
https://stackoverflow.com/questions/77716501/odoo-remote-attach-vscode-not-hitting-breakpoints
I'm trying to debug Odoo with no success. Docker compose starts the environment for Odoo so it's already built. This is my launch file: { "version": "0.2.0", "env": { "GEVENT_SUPPORT": "True" }, "configurations": [ { "name": "Debug", "type": "python", "request": "attach", "connect": { "host": "127.0.0.1", "port": 5678 }, "pathMappings": [ { "localRoot": "/local_folder/odoo", "remoteRoot": "/remote_folder/odoo", } ], "justMyCode": true } ] } In the Dockerfile I built the image using RUN pip3 install debugpy. This is the command entry for the compose file: ports: - "8069:8069" - "5678:5678" expose: - 8069 - 5678 command: "python3 -m debugpy --listen 0.0.0.0:5678 --wait-for-client /path_to_odoo/odoo-bin args" So the container is waiting for me to start debugging as normal but when I try an specific action the debugger does not stop in any breakpoint. The breakpoints are red so vscode does see the remote file. Should the address still be localhost:8069? Using netstat shows that 8069 and 5678 are listening but can't make the debugger work. The debugpy version in the container is 1.8 but already used 1.6 with no success. Any idea what the problem could be?
I managed to make it "work". It's a bug already reported here: https://github.com/microsoft/debugpy/issues/1206 Breakpoints hit when I remove GEVENT_SUPPORT=True or set it to False, but as soon as I do that I get spammed: It seems that the gevent monkey-patching is being used. Please set an environment variable with: GEVENT_SUPPORT=True to enable gevent support in the debugger. So the current solution to be able to use the VS Code debugger is to comment out or remove GEVENT_SUPPORT and create a custom entry point so you won't get the spam message mentioned above. Create an odoo_custom.py file:
#!/usr/bin/env python3
import sys
sys.path.append('/path/to/project_folder/framework')

# enable gevent support in the debugger
__import__('os').environ['GEVENT_SUPPORT'] = 'true'

# set server timezone in UTC before time module imported
__import__('os').environ['TZ'] = 'UTC'

# import and run odoo-bin
if __name__ == "__main__":
    with open("/path/to/project_folder/framework/odoo-bin") as f:
        code = compile(f.read(), "odoo-bin", 'exec')
        exec(code)
Finally add this in your compose (note the script name matches the file created above):
"python3 -m debugpy --listen 0.0.0.0:5678 --wait-for-client /path/to/project_folder/odoo_custom.py args"
Two days ago, on Jan 3 2024, the issue was closed and they are going to fix it for Python 3.12+; older Python versions will still have to use this solution.
2
1
77,691,002
2023-12-20
https://stackoverflow.com/questions/77691002/attributeerror-flask-object-has-no-attribute-before-first-request-in-flask
I'm trying to run init function when flask app run. Here's server.py: from .parser import Parser app = Flask(config().get("FLASK_APP")) parser = None @app.before_first_request def init(): parser = Parser() Here's wsgi.py: import logging from src.utils.config import config host = config().get("FLASK_HOST") port = config().get("FLASK_PORT") env = config().get("FLASK_ENV") is_dev = env == "dev" logging.basicConfig( format='[%(asctime)s][%(levelname)s][%(message)s]', level=logging.INFO if is_dev else logging.WARNING, datefmt='%Y-%m-%d %H:%M:%S' ) from src.api.server import app if __name__ == '__main__': app.run( host=host, port=port, debug=is_dev, threaded=True ) And i'm getting the error AttributeError: 'Flask' object has no attribute 'before_first_request'. When i'm running the server.py like this: from .parser import Parser app = Flask(config().get("FLASK_APP")) parser = Parser() The parser runs twice. I'm using Flask 3.0.0, as i see this decorator is deprecated. Are there any other solutions for Flask 3.0.0 version? I've checked the flask documentation but didn't find any alternatives for before_first_request decorator.
Suddenly I found the solution. The problem, as I see it, is in Flask's reloader, so I just disabled this feature in the app.run bootstrap call. Here's detailed info in Sean's answer: https://stackoverflow.com/a/9476701/11988818 For now, my application runs the initialization only once, and there's no need for the before_first_request decorator anymore.
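A minimal sketch of what that looks like, assuming the init only has to run once per process (use_reloader=False is the standard Flask/Werkzeug flag that prevents the second, reloaded process from being spawned):
from flask import Flask

app = Flask(__name__)
parser = Parser()  # plain module-level init now runs exactly once

if __name__ == "__main__":
    app.run(host=host, port=port, debug=is_dev, threaded=True, use_reloader=False)
The trade-off is that automatic code reloading during development is lost.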
2
2
77,717,029
2023-12-26
https://stackoverflow.com/questions/77717029/redisearch-full-text-index-not-working-with-python-client
I am trying to follow this Redis documentation link to create a small DB of notable people searchable in real time (using Python client). I tried a similar code, but the final line, which queries by "s", should return two documents, instead, it returns a blank set. Can anybody help me find out the mistake I am making? import redis from redis.commands.json.path import Path import redis.commands.search.aggregation as aggregations import redis.commands.search.reducers as reducers from redis.commands.search.field import TextField, NumericField, TagField from redis.commands.search.indexDefinition import IndexDefinition, IndexType from redis.commands.search.query import NumericFilter, Query d1 = {"key": "shahrukh khan", "pl": '{"d": "mvtv", "id": "1234-a", "img": "foo.jpg", "t: "act", "tme": "1965-"}', "org": "1", "p": 100} d2 = {"key": "salman khan", "pl": '{"d": "mvtv", "id": "1236-a", "img": "fool.jpg", "t: "act", "tme": "1965-"}', "org": "1", "p": 100} d3 = {"key": "aamir khan", "pl": '{"d": "mvtv", "id": "1237-a", "img": "fooler.jpg", "t: "act", "tme": "1965-"}', "org": "1", "p": 100} schema = ( TextField("$.key", as_name="key"), NumericField("$.p", as_name="p"), ) r = redis.Redis(host='localhost', port=6379) rs = r.ft("idx:au") rs.create_index( schema, definition=IndexDefinition( prefix=["au:"], index_type=IndexType.JSON ) ) r.json().set("au:mvtv-1234-a", Path.root_path(), d1) r.json().set("au:mvtv-1236-a", Path.root_path(), d2) r.json().set("au:mvtv-1237-a", Path.root_path(), d3) rs.search(Query("s"))
When executing a query from the redis-py client, it will transmit the FT.SEARCH command to the redis server. You can observe it by using the command MONITOR from a redis-client for example. According to the documentation, when providing a single word for the research, the matching is full. That's why the result of your query is the empty set. If you want to search by prefix, you need to use the expression prefix*. However, documentation says: The prefix needs to be at least two characters long. Hence, you cannot search by word starting only by s. What you could do: rs.search(Query("sa*")) #Result{1 total, docs: [Document {'id': 'au:mvtv-1236-a', 'payload': None, 'json': '{"key":"salman khan","pl":"{\\"d\\": \\"mvtv\\", \\"id\\": \\"1236-a\\", \\"img\\": \\"fool.jpg\\", \\"t: \\"act\\", \\"tme\\": \\"1965-\\"}","org":"1","p":100}'}]} Aside note If you want to scope your search on a specific field, the syntax is: Query("@field_name: word") # Query("@key: sa*") where @field_name is the schema field's name. Otherwise, the search will look up for all TextField attributes.
6
2
77,712,157
2023-12-25
https://stackoverflow.com/questions/77712157/django-refresh-token-rotation-and-user-page-refresh
I'm using Django simple JWT to implement user authentication, I have done few adjustments so the access token and refresh token are sent as http-only cookies and everything works well On the frontend I have implemented Persistent Login that would keep the user logged in when they refresh the page or close the browser etc. But since I have enabled these settings: "ROTATE_REFRESH_TOKENS": True, "BLACKLIST_AFTER_ROTATION": True, If the user keeps refreshing the page multiple times in a very short time, it might occur that a token is blacklisted before the user receives the new refresh token is there a way to fix that? One possible fix yet I'm not sure of its reliability is disabling the automatic blacklisting and waiting for the frontend to send a request upon receiving the new refresh token, the request containing the old refresh token in its body like this @api_view(['POST']) def blacklist_token(request): refreshToken = request.data.get("refresh") print(refreshToken) if refreshToken: token = tokens.RefreshToken(refreshToken) token.blacklist() return Response(status=status.HTTP_200_OK) PS: Using React.js on the frontend
Refreshing a page should not require a token refresh. Instead the backend should receive and use the existing access token (from the HTTP only cookie). When the access token expires, the backend should return a 401 unauthorized response to the frontend. The frontend can then perform a synchronized token refresh. This copes reliably with multiple views getting data concurrently. If only a single view gets data at a time, it is safe to initiate the token refresh server side instead. Eventually, the refresh token will also expire. In OAuth 2.0, the authorization server returns an invalid_grant error code in this case. The frontend then redirects the user to authenticate again. I would recommend rehearsing these expiry events, since that's a great way to ensure a reliable app, where the user never experiences unnecessary errors.
2
1
77,708,266
2023-12-23
https://stackoverflow.com/questions/77708266/speed-up-for-finding-an-optimal-partition-line
This coding question derived from this question. Consider an n by n grid of integers. The task is to draw a straight line across the grid so that the part that includes the top left corner sums to the largest number possible. Here is a picture of an optimal solution with score 45: We include a square in the part that is to be summed if its middle is above or on the line. Above means in the part including the top left corner of the grid. (To make this definition clear, no line can start exactly in the top left corner of the grid.) The task is to choose the line that maximizes the sum of the part that includes the top left square. The line must go straight from one side to another. The line can start or end anywhere on a side and not just at integer points. The Python code given is: import numpy as np import fractions def best_line(grid): n, m = grid.shape D = [(di, dj) for di in range(-(n - 1), n) for dj in range(-(n - 1), n)] def slope(d): di, dj = d if dj == 0: return float('inf') if di <= 0 else float('-inf'), -di else: return fractions.Fraction(di, dj), fractions.Fraction(-1, dj) D.sort(key=slope) D = np.array(D, dtype=np.int64) s_max = grid.sum() for grid in (grid, grid.T): left_sum = 0 for j in range(grid.shape[1]): left_sum += grid[:,j].sum() for i in range(grid.shape[0]): p = np.array([i, j], dtype=np.int64) Q = p + D Q = Q[np.all((0 <= Q) & (Q < np.array(grid.shape)), axis=1)] s = left_sum for q in Q: if not np.any(q): break if q[1] <= j: s -= grid[q[0],q[1]] else: s += grid[q[0],q[1]] s_max = max(s_max, s) return s_max This code is already slow for n=30. Is there any way to speed it up in practice? Test cases As the problem is quite complicated, I have given some example inputs and outputs. The easiest test cases are when the input matrix is made of positive (or negative) integers only. In that case a line that makes the part to sum the whole matrix (or the empty matrix if all the integers are negative) wins. Only slightly less simple is if there is a line that clearly separates the negative integers from the non negative integers in the matrix. Here is a slightly more difficult example with an optimal line shown. The optimal value is 14. The grid in machine readable form is: [[ 3 -1 -2 -1] [ 0 1 -1 1] [ 1 1 3 0] [ 3 3 -1 -1]] Here is an example with optimal value 0. [[-3 -3 2 -3] [ 0 -2 -1 0] [ 1 0 2 0] [-1 -2 1 -1]] This matrix has optimal score 31: [[ 3 0 1 3 -1 1 1 3 -2 -1] [ 3 -1 -1 1 0 -1 2 1 -2 0] [ 2 2 -2 0 1 -3 0 -2 2 1] [ 0 -3 -3 -1 -1 3 -2 0 0 3] [ 2 2 3 2 -1 0 3 0 -3 -1] [ 1 -1 3 1 -3 3 -2 0 -3 0] [ 2 -2 -2 -3 -2 1 -2 0 0 3] [ 0 3 0 1 3 -1 2 -3 0 -2] [ 0 -2 2 2 2 -2 0 2 1 3] [-2 -2 0 -2 -2 2 0 2 3 3]] In Python/numpy, an easy way to make more test matrices is: import numpy as np N = 30 square = np.random.randint(-3, 4, size=(N, N)) Timing N = 30 np.random.seed(42) big_square = randint(-3, 4, size=(N, N)) print(best_line(np.array(big_square))) takes 1 minute 55 seconds and gives the output 57. Andrej Kesely's parallel code takes 1 min 5 seconds for n=250. This is a huge improvement. Can it be made faster still?
TL;DR: This answer provides a much faster solution than the one of @AndrejKesely. Is also makes use of Numba, but the final code is more optimized despite being also more complex. First simple implementation The initial code is not efficient because it calls Numpy function in a loop. In such a case, Numpy functions are very expensive. Numba and Cython are the way to go to make the code significantly faster. Still, operations on Nx2 arrays are not efficient, neither in Numpy nor in Numba. Moreover, generating temporary arrays (like Q) tends not to be optimal. The solution is typically to compute the array D on the fly without generating a temporary array Q. Here is a naive relatively-fast implementation we can write based on this: import numpy as np import numba as nb # Naive implementation based on the initial code: see below def generate_D(n): import fractions def slope(d): di, dj = d if dj == 0: return float('inf') if di <= 0 else float('-inf'), -di return fractions.Fraction(di, dj), fractions.Fraction(-1, dj) D = [(di, dj) for di in range(-(n - 1), n) for dj in range(-(n - 1), n)] D.sort(key=slope) return np.array(D, dtype=np.int64) # Naive Numba implementation: see below @nb.njit('(int64[:,::1], int64[:,::1], int64)') def best_line_main_loop_seq(grid, D, s_max_so_far): n, m = grid.shape s_max = s_max_so_far left_sum = 0 for j in range(m): left_sum += grid[:,j].sum() for i in range(n): s = left_sum for k in range(D.shape[0]): qi = D[k, 0] + i qj = D[k, 1] + j if 0 <= qi and qi < n and 0 <= qj and qj < m: if qi == 0 and qj == 0: break val = grid[qi,qj] s += -val if qj <= j else val s_max = max(s_max, s) return s_max # Main computing function def best_line(grid): n, m = grid.shape D = generate_D(n) s_max = grid.sum() s_max = max(s_max, best_line_main_loop_par_unroll_transposed(grid.T.copy(), D, s_max)) s_max = max(s_max, best_line_main_loop_par_unroll_transposed(grid, D, s_max)) return s_max # Benchmark N = 30 np.random.seed(42) big_square = np.random.randint(-3, 4, size=(N, N)) grid = np.array(big_square).astype(np.int64) assert best_line(grid) == 57 %time best_line(grid) The performance of this sequential code is not far from the one of the parallel implementation of @AndrejKesely (a bit slower on my machine, especially with a large N). Faster generation of the array D The above code is limited by the slow sort in generate_D(n) due to the rather inefficient fractions module, especially for small N values. The code can be made faster by comparing fractions in Numba directly. However, Numba unfortunately do not support keywords for the sort function so this feature needs to be reimplemented. This is a bit cumbersome to do manually, but the speed up worth the effort. 
Here is the resulting implementation: @nb.njit('(UniTuple(int64,2), UniTuple(int64,2))') def compare_fraction(a, b): a_top, a_bot = a b_top, b_bot = b if a_bot < 0: a_top = -a_top a_bot = -a_bot if b_bot < 0: b_top = -b_top b_bot = -b_bot fixed_a = a_top * b_bot fixed_b = b_top * a_bot if fixed_a < fixed_b: return -1 if fixed_a == fixed_b: return 0 return 1 @nb.njit('(int64[::1], int64[::1])') def compare_slope(a, b): ai, aj = a bi, bj = b if aj == 0: # slope_a is special if ai <= 0: # slope_a = (INF,-ai) if bj == 0 and bi <= 0: if -ai < -bi: return -1 elif -ai == -bi: return 0 else: return 1 else: return 1 else: # slope_a = (-INF,-ai) if bj == 0 and bi > 0: # slope_b = (-INF,-bi) if -ai < -bi: return -1 elif -ai == -bi: return 0 else: return 1 else: return -1 else: if bj == 0: # slope_b is special if bi <= 0: # slope_b = (INF,-bi) return -1 else: # slope_b = (-INF,-bi) return 1 slope_a = ((ai,aj), (-1,aj)) slope_b = ((bi,bj), (-1,bj)) res = compare_fraction(slope_a[0], slope_b[0]) if res == 0: return compare_fraction(slope_a[1], slope_b[1]) return res # Quite naive quick-sort, but simple one @nb.njit('(int64[:,::1],)') def sort_D(arr): if len(arr) <= 1: return else: pivot = arr[0].copy() left = 1 right = len(arr) - 1 while True: while left <= right and compare_slope(arr[left], pivot) <= 0: left = left + 1 while compare_slope(arr[right], pivot) >= 0 and right >= left: right = right - 1 if right < left: break else: arr[left], arr[right] = arr[right].copy(), arr[left].copy() arr[0], arr[right] = arr[right].copy(), arr[0].copy() sort_D(arr[:right]) sort_D(arr[right+1:]) @nb.njit('(int64,)') def generate_D(n): D_list = [(di, dj) for di in range(-(n - 1), n) for dj in range(-(n - 1), n)] D_arr = np.array(D_list, dtype=np.int64) sort_D(D_arr) return D_arr The above generate_D function is more than 10 times faster on my machine. Faster main loop The naive main loop code provided above can be improved. First of all, it can be parallelized though the code does not scale well (certainly due to the work imbalance caused by the break in the loop). This can be done efficiently with prange on the outer loop (using prange on the inner loop is not optimal because the amount of work is small and creating/joining threads is expensive). Moreover, the loop can be unrolled so to reduce the number of conditional checks, especially on j. While the performance improvement can be significant (e.g. 2 times faster), this resulting code is also clearly less readable/maintainable (actually pretty ugly). This this a trade-off to pay since the operation is too complex for the Numba JIT to do it (and more generally most compilers). Finally, the array access can be made more cache-friendly by virtually transposing the array so to improve the locality of memory accesses (i.e. accessing many items of a same row in the target grid rather than accessing different rows). This optimization is especially useful for large N values (e.g. >200). 
Here is the resulting optimized main code: @nb.njit('(int64[:,::1], int64[:,::1], int64)', parallel=True, cache=True) def best_line_main_loop_par_unroll_transposed(grid, D, s_max_so_far): m, n = grid.shape s_max = s_max_so_far all_left_sum = np.empty(m, dtype=np.int64) left_sum = 0 for j in range(m): left_sum += grid[j,:].sum() all_left_sum[j] = left_sum for j in nb.prange(m): left_sum = all_left_sum[j] for i in range(0, n//4*4, 4): i1 = i i2 = i + 1 i3 = i + 2 i4 = i + 3 s1 = left_sum s2 = left_sum s3 = left_sum s4 = left_sum continue_loop_i1 = True continue_loop_i2 = True continue_loop_i3 = True continue_loop_i4 = True for k in range(D.shape[0]): qj = D[k, 1] + j if 0 <= qj and qj < m: qi1 = D[k, 0] + i1 qi2 = D[k, 0] + i2 qi3 = D[k, 0] + i3 qi4 = D[k, 0] + i4 mult = np.int64(-1 if qj <= j else 1) if qj != 0: if continue_loop_i1 and 0 <= qi1 and qi1 < n: s1 += mult * grid[qj,qi1] if continue_loop_i2 and 0 <= qi2 and qi2 < n: s2 += mult * grid[qj,qi2] if continue_loop_i3 and 0 <= qi3 and qi3 < n: s3 += mult * grid[qj,qi3] if continue_loop_i4 and 0 <= qi4 and qi4 < n: s4 += mult * grid[qj,qi4] s_max = max(s_max, max(max(s1, s2), max(s3, s4))) else: if continue_loop_i1 and 0 <= qi1 and qi1 < n: if qi1 == 0 and qj == 0: continue_loop_i1 = False else: s1 += mult * grid[qj,qi1] s_max = max(s_max, s1) if continue_loop_i2 and 0 <= qi2 and qi2 < n: if qi2 == 0 and qj == 0: continue_loop_i2 = False else: s2 += mult * grid[qj,qi2] s_max = max(s_max, s2) if continue_loop_i3 and 0 <= qi3 and qi3 < n: if qi3 == 0 and qj == 0: continue_loop_i3 = False else: s3 += mult * grid[qj,qi3] s_max = max(s_max, s3) if continue_loop_i4 and 0 <= qi4 and qi4 < n: if qi4 == 0 and qj == 0: continue_loop_i4 = False else: s4 += mult * grid[qj,qi4] s_max = max(s_max, s4) if not continue_loop_i1 and not continue_loop_i2 and not continue_loop_i3 and not continue_loop_i4: break for i in range(n//4*4, n): s = left_sum for k in range(D.shape[0]): qi = D[k, 0] + i qj = D[k, 1] + j mult = np.int64(-1 if qj <= j else 1) if 0 <= qi and qi < n and 0 <= qj and qj < m: if qi == 0 and qj == 0: break s += mult * grid[qj,qi] s_max = max(s_max, s) return s_max def best_line(grid): n, m = grid.shape D = generate_D(n) s_max = grid.sum() s_max = max(s_max, best_line_main_loop_par_unroll_transposed(grid.T.copy(), D, s_max)) s_max = max(s_max, best_line_main_loop_par_unroll_transposed(grid, D, s_max)) return s_max Note that Numba takes a significant time to compile this function, hence the cache=True flag to avoid recompiling it over and over. Performance results Here are performance results on my machine (with a i5-9600KF CPU, CPython 3.8.1 on Windows, and Numba 0.58.1): With N = 30: - Initial code: 6461 ms - AndrejKesely's code: 54 ms - This code: 4 ms <----- With N = 300: - Initial code: TOO LONG - AndrejKesely's code: 109 s - This code: 12 s <----- Thus, this implementation is more than 1600 times faster than the initial implementation, and also 9 times faster than the one of @AndrejKesely. This is the fastest one by a large margin. Notes The provided implementation can theoretically be optimized a bit further thanks to SIMD instructions. However, this is AFAIK not possible to easily do that with Numba. A native language need to be used to do that (e.g. C, C++, Rust). SIMD-friendly native languages (e.g. CUDA, ISPC) are certainly the way to go to do that quite easily. Indeed, doing that manually with native SIMD intrinsics or SIMD library is cumbersome and it will likely make the code completely unreadable. 
I expect this to be 2x-4x faster but certainly not much more. On CPU, this requires hardware supporting fast SIMD masked-load instructions and blending/masking ones (e.g. quite-recent mainstream AMD/Intel x86-64 CPUs). On GPU, one needs to care about keeping the SIMD lanes mainly active (not so simple since GPUs have very wide SIMD registers and warp divergence tends to increase due to the break and the conditionals), not to mention memory accesses should be as contiguous as possible and without bank conflicts to get a fast implementation (this is probably far from easy to do here though).
8
6
77,704,108
2023-12-22
https://stackoverflow.com/questions/77704108/convolutional-neural-network-not-learning
I'm trying to train a Convolutional Neural Network for image recognition on a training set 1500 images with 15 categories. I've been told that, with this architecture and initial weights drawn from a Gaussian distribution with a mean of 0 and a standard deviation of 0.01 and the initial bias values to 0, with the proper learning rate it should achieve an accuracy of around 30%. However, it doesn't learn anything at all: the accuracy is similar to the one of a random classifier and the weights after training still follow a normal distribution. What am I doing wrong? This is the NN class simpleCNN(nn.Module): def __init__(self): super(simpleCNN,self).__init__() #initialize the model self.conv1=nn.Conv2d(in_channels=1,out_channels=8,kernel_size=3,stride=1) #Output image size is (size+2*padding-kernel)/stride -->62*62 self.relu1=nn.ReLU() self.maxpool1=nn.MaxPool2d(kernel_size=2,stride=2) #outtput image 62/2-->31*31 self.conv2=nn.Conv2d(in_channels=8,out_channels=16,kernel_size=3,stride=1) #output image is 29*29 self.relu2=nn.ReLU() self.maxpool2=nn.MaxPool2d(kernel_size=2,stride=2) #output image is 29/2-->14*14 (MaxPool2d approximates size with floor) self.conv3=nn.Conv2d(in_channels=16,out_channels=32,kernel_size=3,stride=1) #output image is 12*12 self.relu3=nn.ReLU() self.fc1=nn.Linear(32*12*12,15) #16 channels * 16*16 image (64*64 with 2 maxpooling of stride 2), 15 output features=15 classes self.softmax = nn.Softmax(dim=1) def forward(self,x): x=self.conv1(x) x=self.relu1(x) x=self.maxpool1(x) x=self.conv2(x) x=self.relu2(x) x=self.maxpool2(x) x=self.conv3(x) x=self.relu3(x) x=x.view(-1,32*12*12) x=self.fc1(x) x=self.softmax(x) return x The inizialization: def init_weights(m): if isinstance(m,nn.Conv2d) or isinstance(m,nn.Linear): nn.init.normal_(m.weight,0,0.01) nn.init.zeros_(m.bias) model = simpleCNN() model.apply(init_weights) The training function: loss_function=nn.CrossEntropyLoss() optimizer=optim.SGD(model.parameters(),lr=0.1,momentum=0.9) def train_one_epoch(epoch_index,loader): running_loss=0 for i, data in enumerate(loader): inputs,labels=data #get the minibatch outputs=model(inputs) #forward pass loss=loss_function(outputs,labels) #compute loss running_loss+=loss.item() #sum up the loss for the minibatches processed so far optimizer.zero_grad() #reset gradients loss.backward() #compute gradient optimizer.step() #update weights return running_loss/(i+1) # average loss per minibatch The training: EPOCHS=20 best_validation_loss=np.inf for epoch in range(EPOCHS): print('EPOCH{}:'.format(epoch+1)) model.train(True) train_loss=train_one_epoch(epoch,train_loader) running_validation_loss=0.0 model.eval() with torch.no_grad(): # Disable gradient computation and reduce memory consumption for i,vdata in enumerate(validation_loader): vinputs,vlabels=vdata voutputs=model(vinputs) vloss=loss_function(voutputs,vlabels) running_validation_loss+=vloss.item() validation_loss=running_validation_loss/(i+1) print('LOSS train: {} validation: {}'.format(train_loss,validation_loss)) if validation_loss<best_validation_loss: #save the model if it's the best so far timestamp=datetime.now().strftime('%Y%m%d_%H%M%S') best_validation_loss=validation_loss model_path='model_{}_{}'.format(timestamp,epoch) torch.save(model.state_dict(),model_path) With the default initializion it works a little better, but i'm supposed to reach 30% with the gaussian. Could you spot some issue that might be causing it not to learn? I have already tries different learning rates and momentum.
The problem was caused by the fact that I was importing the images and then converting them to tensors with transforms.ToTensor(), which rescales the pixel values to the range [0,1], while the CNN was actually meant to work with [0,255]. With such small pixel values, a normal initialization with a small standard deviation is almost equivalent to a null initialization. So in order to fix this kind of problem, make sure the pixel values are in the range [0,255]. Also, the softmax at the end of the network worsens the problem, as already pointed out (nn.CrossEntropyLoss already applies log-softmax internally, so the network should output raw logits).
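A minimal sketch of a transform pipeline that keeps the [0,255] scale is shown below. The Grayscale and Resize steps are my assumptions based on the layer sizes in the question (1 input channel, 64x64 images), not code from the original post:

import torchvision.transforms as T

# ToTensor() scales pixels to [0, 1]; multiplying by 255 restores the range
# the weight initialization was tuned for.
transform = T.Compose([
    T.Grayscale(num_output_channels=1),  # the model uses in_channels=1
    T.Resize((64, 64)),                  # 64x64 matches the 32*12*12 fc1 input
    T.ToTensor(),                        # float tensor in [0, 1]
    T.Lambda(lambda x: x * 255.0),       # back to [0, 255]
])

In newer torchvision versions, T.PILToTensor() followed by a cast to float achieves the same result without the intermediate rescaling.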
2
0
77,696,374
2023-12-21
https://stackoverflow.com/questions/77696374/how-to-fix-importerror-dll-load-failed-while-importing-bcrypt-the-specified
I'm currently facing an issue with importing the _bcrypt module in Python, and I would greatly appreciate any help or insights you can provide. I am using Python 3.9 on a Windows 10 Pro machine. the code calls paramiko which calls bcrypt When I try to import paramiko, I encounter the following error: Traceback (most recent call last): File "d:\remote_conn\client-copy.py", line 44, in import sys, os, paramiko, time File "C:\Python39\lib\site-packages\paramiko\__init__.py", line 22, in from paramiko.transport import ( File "C:\Python39\lib\site-packages\paramiko\transport.py", line 93, in from paramiko.dsskey import DSSKey File "C:\Python39\lib\site-packages\paramiko\dsskey.py", line 37, in from paramiko.pkey import PKey File "C:\Python39\lib\site-packages\paramiko\pkey.py", line 32, in import bcrypt File "C:\Python39\lib\site-packages\bcrypt\__init__.py", line 13, in from ._bcrypt import ( ImportError: DLL load failed while importing `_bcrypt`: The specified procedure could not be found. I have already attempted the following troubleshooting steps: updated all packages Installed Microsoft Visual C++ Redistributable for Visual Studio 2015-20122 (x86/x64) and Microsoft Visual C++ Redistributable 2013 (x64). Installed OpenSSL with Git . Checked that the directory containing the DLL files is included in the PATH environment variable (C:\Windows\system32, C:\Python39\DLLs). Despite these efforts, the error persists. Could you please provide any suggestions or insights on how to resolve this issue? Is there anything I might be missing or any alternative approaches I could try? or how to know which DLLs files need? Thank you in advance for your help! Best regards
I ran into the same issue with the latest versions of paramiko (3.4.0) and bcrypt (4.1.2). Installing a lower version of bcrypt fixed it for me; note that it still has to be >=3.2, since the latest paramiko version (3.4.0) requires bcrypt>=3.2.
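If you want to confirm which versions are actually installed without triggering the failing import, a quick check (a sketch, assuming Python 3.8+ for importlib.metadata):

from importlib.metadata import version

# reads the installed package metadata, so it works even though
# "import bcrypt" itself fails with the DLL error
print(version("paramiko"), version("bcrypt"))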
4
4
77,714,222
2023-12-25
https://stackoverflow.com/questions/77714222/http-error-when-calling-the-duckduckgo-api
I am currently following a course on machine learning : the course from the fast.ai web site. The course is a video, but it is linked to a a jupyter notebook on kaggle (https://www.kaggle.com/code/jhoward/is-it-a-bird-creating-a-model-from-your-own-data). However when i try to run cell number 11: the one that contains this code: #NB: `search_images` depends on duckduckgo.com, which doesn't always return correct responses. # If you get a JSON error, just try running it again (it may take a couple of tries). urls = search_images('bird photos', max_images=1) urls[0] I get this error : HTTPError Traceback (most recent call last) /tmp/ipykernel_17/2432147335.py in <module> 1 #NB: `search_images` depends on duckduckgo.com, which doesn't always return correct responses. 2 # If you get a JSON error, just try running it again (it may take a couple of tries). ----> 3 urls = search_images('bird photos', max_images=1) 4 urls[0] /tmp/ipykernel_17/3153793264.py in search_images(term, max_images) 4 def search_images(term, max_images=30): 5 print(f"Searching for '{term}'") ----> 6 return ddg_images(term, max_results=max_images).itemgot('image') /opt/conda/lib/python3.7/site-packages/duckduckgo_search/compat.py in ddg_images(keywords, region, safesearch, time, size, color, type_image, layout, license_image, max_results, page, output, download) 80 type_image=type_image, 81 layout=layout, ---> 82 license_image=license_image, 83 ): 84 results.append(r) /opt/conda/lib/python3.7/site-packages/duckduckgo_search/duckduckgo_search.py in images(self, keywords, region, safesearch, timelimit, size, color, type_image, layout, license_image) 403 assert keywords, "keywords is mandatory" 404 --> 405 vqd = self._get_vqd(keywords) 406 assert vqd, "error in getting vqd" 407 /opt/conda/lib/python3.7/site-packages/duckduckgo_search/duckduckgo_search.py in _get_vqd(self, keywords) 93 def _get_vqd(self, keywords: str) -> Optional[str]: 94 """Get vqd value for a search query.""" ---> 95 resp = self._get_url("POST", "https://duckduckgo.com", data={"q": keywords}) 96 if resp: 97 for c1, c2 in ( /opt/conda/lib/python3.7/site-packages/duckduckgo_search/duckduckgo_search.py in _get_url(self, method, url, **kwargs) 87 logger.warning(f"_get_url() {url} {type(ex).__name__} {ex}") 88 if i >= 2 or "418" in str(ex): ---> 89 raise ex 90 sleep(3) 91 return None /opt/conda/lib/python3.7/site-packages/duckduckgo_search/duckduckgo_search.py in _get_url(self, method, url, **kwargs) 80 ) 81 if self._is_500_in_url(str(resp.url)) or resp.status_code == 202: ---> 82 raise httpx._exceptions.HTTPError("") 83 resp.raise_for_status() 84 if resp.status_code == 200: HTTPError: I tried running all the cells above, successfully : my Kaggle account is verified and I have enabled internet. I suspect that the duckduckgo.com website does not work (because the error thrown is inside an if statement , and that statement is verified if the Http response code contains a 500) : is there a way for me to check if this is the case only for me or for everyone else, if so : does anyone of have found a way to bypass this problem and still finish lesson one of the course?
On the right side of the webpage, under "Notebook options", find a drop down menu called "ENVIRONMENT". Change this to "Always use latest environment". Rerun ALL the cells (including the cell with the import statement) and see if the error persists.
3
5
77,716,028
2023-12-26
https://stackoverflow.com/questions/77716028/inserting-many-rows-in-psycopg3
I used to use execute_values in psycopg2 but it's gone in psycopg3. I tried following the advice in this answer or this github post, but it just doesn't seem to work for my use case. I'm trying to insert multiple values, my SQL is like so: sql = INSERT INTO activities (type_, key_, a, b, c, d, e) VALUES %s ON CONFLICT (key_) DO UPDATE SET a = EXCLUDED.a, b = EXCLUDED.b, c = EXCLUDED.c, d = EXCLUDED.d, e = EXCLUDED.e values = [['type', 'key', None, None, None, None, None]] But doing cursor.executemany(sql, values) results in {ProgrammingError}the query has 1 placeholder but 7 parameters were passed. I tried many variations with extra parentheses etc. but always it results in some error. For example doing self.cursor.executemany(sql, [values]) results in syntax error near or at "$1": Line 3: VALUES $1.
The values clause should consist of one %s placeholder for each column being inserted, separated by commas and all within parentheses, like this: INSERT INTO t (a, b, c) VALUES (%s, %s, %s) We can produce the desired string with string manipulation: # Create one placeholder per column inserted. placeholders = ', '.join(['%s'] * len(values[0])) # Wrap in parentheses. values_clause = f"""({placeholders})""" # Inject into the query string. isql = isql % values_clause with psycopg.connect(dbname='test') as conn, conn.cursor() as cur: cur.executemany(isql, values) conn.commit() However, psycopg provides tools for composing SQL statements, and it may be safer to use these tools rather than relying on string manipulation if your query building is very dynamic. Using these tools, you would have this (I've added the parentheses into the main query string this time, as there is no benefit in not doing so): placeholders = sql.SQL(', ').join(sql.Placeholder() * len(values[0])) isql = sql.SQL("""INSERT INTO t77716028 (type_, key_, a, b, c, d, e) VALUES ({placeholders}) ON CONFLICT (key_) DO UPDATE SET a = EXCLUDED.a, b = EXCLUDED.b, c = EXCLUDED.c, d = EXCLUDED.d, e = EXCLUDED.e""") isql = isql.format(placeholders=placeholders) with psycopg.connect(dbname='test') as conn, conn.cursor() as cur: print(f'{isql.as_string(conn)=}') cur.executemany(isql, values) conn.commit()
3
3
77,715,680
2023-12-26
https://stackoverflow.com/questions/77715680/how-to-align-bar-labels-on-the-right-in-barh-plot
Example Code import pandas as pd import seaborn as sns sns.set_style('white') s = pd.Series({'John': 7, 'Amy': 4, 'Elizabeth': 4, 'James': 4, 'Roy': 2}) color1 = ['orange', 'grey', 'grey','grey','grey'] ax1 = s.plot(kind='barh', color=color1, figsize=(6, 3), width=.8) ax1.invert_yaxis() ax1.bar_label(ax1.containers[0], labels=s.index, padding=-60, color='white', fontsize=12, fontweight='bold') ax1.bar_label(ax1.containers[0], padding=10, color='black', fontsize=8, fmt='{:.0f} times'.format, fontweight='bold') ax1.set_xticks([]) ax1.set_yticks([]) sns.despine(bottom=True, left=True) How can I align the bar labels on the right side of the bar in the plot? I am looking for a way to fix this using any of the following libraries: pandas, seaborn, or matplotlib.
Edit: You can patch Text manually to change the horizontal alignment and get a better result: labels = ax1.bar_label(ax1.containers[0], labels=s.index, padding=-5, color='white', fontsize=12, fontweight='bold') for label in labels: label.set_ha('right') Output: Old answer: You can use annotate to more precisely control what you want or use str.rjust to get labels of the same width. However, since you are not using a monospaced font, the result is approximate: zlabels = s.index.str.rjust(s.index.str.len().max()) ax1.bar_label(ax1.containers[0], labels=zlabels, padding=-65, color='white', fontsize=12, fontweight='bold') Output:
3
4
77,715,516
2023-12-26
https://stackoverflow.com/questions/77715516/python-format-negative-time
I am writing a program that tracks running performance using Python. It calculates the runner's pace given the calculated elapsed_time and the distance. The issue I am facing is in the calculation of the difference between the runner's pace and their predicted pace. This difference can be positive or negative, as you can run faster than predicted or slower than predicted. However, in the example code (below) I get the result diff = 23:59:54. The result I want is diff = -00:00:06. Any ideas on what to do? I had a vague idea that datetime.timedelta might help. But as far as I can see there is no way to format a timedelta as a string...? import time def time_to_secs(t): hour, minute, seconds = t.split(':') adjustedtime = (int(hour)*3600) + (int(minute)*60) + int(seconds) return adjustedtime def time_to_string(t): ty_res = time.gmtime(t) result = time.strftime("%H:%M:%S",ty_res) return result predicted_pace = '00:04:10' distance = 10.0 elapsed_time = '00:42:42' pace = time_to_string((time_to_secs(elapsed_time)/distance)) diff = time_to_string(time_to_secs(predicted_pace) - time_to_secs(pace)) print(f'Your total time over {distance} km was {elapsed_time} with a pace of {pace} per km') print(f'Your predicted pace was {predicted_pace} per km') print(f'Difference between predicted and actual pace was {diff}')
I think this arises from the way Python's time.strftime() function handles times. When you provide a negative number of seconds to time.strftime(), it doesn't format it as a negative time. Instead, it calculates a time that many seconds before the epoch, which is not what you want it seems. You need a custom solution to handle this. To calculate the difference between the predicted pace and the actual pace and handle negative values properly, you can modify the time_to_string function to account for negative timedelta values. Here is an example that might be helpful: import time def time_to_secs(t): hour, minute, seconds = t.split(':') adjustedtime = (int(hour) * 3600) + (int(minute) * 60) + int(seconds) return adjustedtime def time_to_string(t): negative = False if t < 0: negative = True t = -t # Make the time positive for calculation ty_res = time.gmtime(t) result = time.strftime("%H:%M:%S", ty_res) if negative: result = "-" + result return result predicted_pace = '00:04:10' distance = 10.0 elapsed_time = '00:42:42' pace = time_to_string((time_to_secs(elapsed_time) / distance)) diff = time_to_string(time_to_secs(predicted_pace) - time_to_secs(pace)) print(f'Your total time over {distance} km was {elapsed_time} with a pace of {pace} per km') print(f'Your predicted pace was {predicted_pace} per km') print(f'Difference between predicted and actual pace was {diff}') This code does the following: It first calculates the total seconds for both the predicted pace and the actual pace It then computes the difference in seconds. The code then determines if the difference is negative (meaning the runner was faster than predicted). If the time is negative, it then converts the absolute value of the delta into hours, minutes, and seconds. Finally, the code then formats the result as a string, adding a negative sign if necessary. Hope this helps.
2
2
77,707,533
2023-12-23
https://stackoverflow.com/questions/77707533/busy-gpios-raspberry-pi5
I bought the new PI5, and I'm in a middle of a difficulty: This is a python/flask script - main.py - which call for a python script named shift_register.py and runs SRoutput(): from website import create_app from flask import send_from_directory, request import os, sys ctrl_hardware_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'ctrl_hardware')) sys.path.append(ctrl_hardware_path) from shift_register import SRoutput app = create_app() @app.route("/images/<path:filename>") def serve_image(filename): return send_from_directory("images", filename) # Definição dos bits a serem transmitidos #Rota para receber o parâmetro binário e usar no shift_register.py @app.route('/atualizar_shift_register', methods=['GET']) def atualizar_shift_register(): parametro = request.args.get('parametro') # Remove o prefixo '0b' se presente if parametro.startswith('0b'): parametro = parametro[2:] # Chama a função SRoutput do shift_register.py passando o parâmetro binário SRoutput(int(parametro,2)) #Converte o parâmetro binário para inteiro return f'Parâmetro binário {parametro} passado com sucesso!' if __name__ == '__main__': app.run(host='0.0.0.0', port=5000, debug=True) #Defenido para executar em todos os ip's disponíveis pela rede This is script shift_register.py: from gpiozero import OutputDevice import time import warnings warnings.filterwarnings("ignore") SER = OutputDevice(5, initial_value=None) # GPIO 5 - SER/DS (serial data input, SPI data) RCLK = OutputDevice(6) # GPIO 6 - RCLK/STCP SRCLK = OutputDevice(13) # GPIO 13 - SRCLK/SHCP (storage register clock pin, SPI clock) OE = OutputDevice(19) # GPIO 19 - Enable/Disable do SR SRCLR = OutputDevice(26) # GPIO 26 - O registo de deslocamento � limpo (ACTIVO BAIXO) # Setup dos pinos #GPIO.setup(SER, GPIO.OUT) #GPIO.setup(RCLK, GPIO.OUT) #GPIO.setup(SRCLK, GPIO.OUT) #GPIO.setup(SRCLR, GPIO.OUT) #GPIO.setup(OE, GPIO.OUT) # Inicializar a variavel correspondente a R1 # Reles 1 e 2 # checkshift = 0b0011 # Valor por defeito de espera nas operacoes do registo de deslocamento WaitTimeSR = 0.1 ##################################################### # Tabela de verdade do Registo de Deslocamento # SER | SRCLK | 'SRCLR | RCLK | 'OE | Sa�das/Fun��es # X X X X H Q's inactivas # X X X X L Q'S activos # X X L X X SR limpo # L + et H X X 0 no SR # H + et H X X 1 no SR # X X X +et X dados out ###################################################### # Inicaializa o pino de clear dos registos a 1 - o clear � controlado e feito numa fun��o SRCLR.on() #GPIO.output(SRCLR,1) # Enable do SR - sa�das sempre activas OE.off() #GPIO.output(OE, 0) # Fun��o que verifica e desloca os bits para armazenar no registo de deslocamento def SRoutput(checkshift): for i in range(9): shift = checkshift & 1 if shift == 1: print ("UM") WriteReg (shift, WaitTimeSR) else: print ("ZERO") WriteReg(shift, WaitTimeSR) checkshift = checkshift >> 1 OutputReg() # Defini��o da fun��o que envia os dados para o registo de deslocamento, # segundo o algoritmo descrito em baixo ### ALGORITMO ### # Enviar um bit para o pino SER/DS ### Depois de enviado, � dado um impulso de clock (SRCLK/SHCP) e o bit armazenado nos registos ###### ... 
um segundo bit � enviado, repetindo os dois passos em cima - � repetido at� estarem armazenados 8 bits ######### Por ultimo � dado um impulso aos registos (RCLK/STCP) para obter os 8 bits na saida def WriteReg (WriteBit, WaitTimeSR): SRCLR.off() #GPIO.output (SRCLK, 0) # Clock - flanco POSITIVO SER.value = WriteBit #GPIO.output (SER,WriteBit) # Envia o bit para o registo time.sleep (WaitTimeSR) # Espera 100ms SRCLK.on() #GPIO.output(SRCLK,1) # Funcao que limpa o registo def register_clear (): SRCLK.off() #GPIO.output(SRCLK, 0) time.sleep(WaitTimeSR) # espera 100ms SRCLK.on() #GPIO.output(SRCLK, 1) # Armazenar o valor no registo def OutputReg(): RCLK.off() #GPIO.output(RCLK, 0) time.sleep(WaitTimeSR) RCLK.on() #GPIO.output(RCLK, 1) (Sorry, the comments are in Portuguese...) However, when I run main.py I get these errors: [* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:5000 * Running on http://X.X.X.X:5000 Press CTRL+C to quit * Restarting with stat Traceback (most recent call last): File "/usr/lib/python3/dist-packages/gpiozero/pins/pi.py", line 408, in pin pin = self.pins[info] ~~~~~~~~~^^^^^^ KeyError: PinInfo(number=29, name='GPIO5', names=frozenset({'BOARD29', 'BCM5', 5, '5', 'WPI21', 'GPIO5', 'J8:29'}), pull='', row=15, col=1, interfaces=frozenset({'', 'spi', 'dpi', 'gpio', 'i2c', 'uart'})) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/grinder/Documents/GitHub/Unnamed-Thesis/webserver/main.py", line 8, in <module> from shift_register import SRoutput File "/home/grinder/Documents/GitHub/Unnamed-Thesis/ctrl_hardware/shift_register.py", line 7, in <module> SER = OutputDevice(5, initial_value=None) # GPIO 5 - SER/DS (serial data input, SPI data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/gpiozero/devices.py", line 103, in __call__ self = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/gpiozero/output_devices.py", line 74, in __init__ super().__init__(pin, pin_factory=pin_factory) File "/usr/lib/python3/dist-packages/gpiozero/mixins.py", line 75, in __init__ super().__init__(*args, **kwargs) File "/usr/lib/python3/dist-packages/gpiozero/devices.py", line 549, in __init__ pin = self.pin_factory.pin(pin) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/gpiozero/pins/pi.py", line 410, in pin pin = self.pin_class(self, info) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/gpiozero/pins/lgpio.py", line 126, in __init__ lgpio.gpio_claim_input( File "/usr/lib/python3/dist-packages/lgpio.py", line 755, in gpio_claim_input return _u2i(_lgpio._gpio_claim_input(handle&0xffff, lFlags, gpio)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/lgpio.py", line 458, in _u2i raise error(error_text(v)) lgpio.error: 'GPIO busy' (I've changed all the GPIO's and they are all busy) Strangely enough, this worked with PI3 and RPi. What am I missing? Thank you!
Fix my problem: If anyone stumble upon this error this is how I fix it: import os os.environ['GPIOZERO_PIN_FACTORY'] = os.environ.get('GPIOZERO_PIN_FACTORY', 'mock') from gpiozero import OutputDevice Addendum By setting 'mock' as the default, the code is configuring gpiozero to use a pin simulation (a simulated environment) if the environment variable is not explicitly defined. This setup is useful for testing or running the code in environments where physical pins are not available, allowing the code to continue functioning without interaction with real hardware. Only part of the problem is solved [Solution] In case if someone has the same problem: from website import create_app from flask import send_from_directory, request import os, sys ctrl_hardware_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'ctrl_hardware')) sys.path.append(ctrl_hardware_path) from shift_register import SRoutput app = create_app() @app.route("/images/<path:filename>") def serve_image(filename): return send_from_directory("images", filename) # Definição dos bits a serem transmitidos #Rota para receber o parâmetro binário e usar no shift_register.py @app.route('/atualizar_shift_register', methods=['GET']) def atualizar_shift_register(): parametro = request.args.get('parametro') # Remove o prefixo '0b' se presente if parametro.startswith('0b'): parametro = parametro[2:] # Chama a função SRoutput do shift_register.py passando o parâmetro binário SRoutput(int(parametro,2)) #Converte o parâmetro binário para inteiro return f'Parâmetro binário {parametro} passado com sucesso!' if __name__ == '__main__': app.run(host='0.0.0.0', port=5000, **debug=True**) #Defenido para executar em todos os ip's disponíveis pela rede On PI5, the argument debug=True make ALL GPIO's busy. Removing it resolved the problem
2
3
77,714,037
2023-12-25
https://stackoverflow.com/questions/77714037/python-sort-groupby-data-by-key-function
I would like to sort the data by the month however not alphabetically rather by the month order, i.e first the sales for January and then February etc, see the illustrated data I created month=['January','February','March','April','January','February','March','April'] sales=[10,100,130,145,13409,670,560,40] dict = {'month': month, 'sales': sales} df = pd.DataFrame(dict) df.groupby('month')['sales'].mean().sort_values() in this case I received data by the sales average, however I would like to sort the value by the month order
Alternatively and probably slower, but just for posterity you can use lambda with map: month_order = {"January": 1, "February": 2, "March": 3, "April": 4} df = df.sort_values(by="month", key=lambda x: x.map(month_order), ignore_index=True)
2
1
77,712,635
2023-12-25
https://stackoverflow.com/questions/77712635/how-to-disable-border-in-subplot-for-a-3d-plot-in-matplotlib
I have the following code which produces two plots in matplotlib. However, each I'm unable to remove this border in the second subplot despite feeding all these arguments. How can I remove the black border around the second subplot (see attached image) import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.mplot3d import Axes3D beta, gamma = np.linspace(-np.pi / 2, np.pi / 2, 500), np.linspace(-np.pi / 2, np.pi / 2, 500) B, G = np.meshgrid(beta, gamma) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6)) # 2D Contour plot ax1.imshow(obj_vals.T, origin='lower', cmap='hot', extent=(-np.pi/2, np.pi/2, -np.pi/2, np.pi/2)) ax1.set_xlabel(r'$\gamma$') ax1.set_ylabel(r'$\beta$') ax1.set_xticks([]) ax1.set_yticks([]) ax2 = fig.add_subplot(122, projection='3d') # Make panes transparent ax2.xaxis.pane.fill = False # Left pane ax2.yaxis.pane.fill = False # Right pane ax2.zaxis.pane.fill = False # Right pane # Remove grid lines ax2.grid(False) # Remove tick labels ax2.set_xticklabels([]) ax2.set_yticklabels([]) ax2.set_zticklabels([]) # Transparent spines ax2.xaxis.line.set_color((1.0, 1.0, 1.0, 0.0)) ax2.yaxis.line.set_color((1.0, 1.0, 1.0, 0.0)) ax2.zaxis.line.set_color((1.0, 1.0, 1.0, 0.0)) ax2.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0)) ax2.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0)) # No ticks ax2.set_xticks([]) ax2.set_yticks([]) ax2.set_zticks([]) # Surface plot surf = ax2.plot_surface(B, G, obj_vals.T, cmap='hot') plt.axis('off') plt.tight_layout() plt.show()
I think the issue here is that you are creating a subplot with two 2D axes and then adding a 3D axis over the right 2D one. You have removed the spines for the 3D plot, but the original 2D axes you made are still there. I think a better way to do this would be to create the figure and then the subplots individually, like what is shown on this documentation page. Then, you can also remove all the 3D axes components using ax.set_axis_off(), as per this answer. import matplotlib.pyplot as plt import numpy as np plt.close("all") x = np.linspace(-2*np.pi, 2*np.pi, 100) y = np.linspace(-2*np.pi, 2*np.pi, 100) X, Y = np.meshgrid(x, y) Z = np.sin(X)*np.cos(Y) fig = plt.figure(figsize=(12,6)) ax1 = fig.add_subplot(1, 2, 1) ax1.contourf(X, Y, Z, levels=100, cmap="hot") ax1.set_xlabel(r'$\gamma$') ax1.set_ylabel(r'$\beta$') ax1.set_xticks([]) ax1.set_yticks([]) ax2 = fig.add_subplot(1, 2, 2, projection="3d") ax2.plot_surface(X, Y, Z, cmap="hot", rstride=1, cstride=1) ax2.set_axis_off() plt.tight_layout() plt.show()
2
1
77,710,854
2023-12-24
https://stackoverflow.com/questions/77710854/time-limit-exceed-in-the-code-that-sums-the-numbers
in the question there will be a number taken from user. Up to that number, I want to find the sum of all numbers if the sum of that number's digits is odd. For example if the input is 13 then ; 1+3+5+7+9+10+12 = 47 should be the result. In order to prevent time limit exceed, question says "Since the answer might be too big, please print the answer in modulo 10^9+7" Constraints 1 ≤ n < 10^17 (n is input) Here is my code: MOD = 1000000007 def getSum(number): total = 0 while number > 0: total += number % 10 number //= 10 return (total % MOD) % 2 def sticker(number): stickerNeed = 0 for oneDigit in range(1, number): if (getSum(oneDigit % MOD) == 1): stickerNeed += oneDigit return stickerNeed % MOD number = int(input()) result = sticker(number) print(result % MOD) But it still cannot handle the really big numbers and gives time limit exceed. Can you help me out? What wrong with my code?
Surely, brute force is not the right way for the given range. Instead, think about the following sequence: 1 3 5 7 9 10 12...18 21...29 30...98 100 102..108 111...199 201...999 1000 1002... We can see that every decade *0..*9 contains exactly 5 numbers with odd digit sum, but these numbers can start from *0 or from *1. Also note that every hundred *0..*99 contains five decades starting from 0 and five decades starting from 1. Let's separate the end of the series after the last *00 value for simplicity and calculate its sum using a function like your sticker but with corrected mistakes (your function gives wrong results). Right function (not the fastest, but suitable for our purposes): def sumofOdds(a, b): result = 0 for i in range(a, b+1): t = i s = 0 while t: s += t%10 t //= 10 if s%2: result += i return result For example, for input 231, calculate the sum for the range 200..231 as described above, and for 1..199 as described below. But what about the swarm of numbers before the last *00? The sum of these numbers is the sum of the arithmetic progression 0,2,4,...,*98 plus 25 * the number of hundreds (because every hundred contains 5 decades starting from 1, and they give 25 additional units). Note - this is a one-liner formula. Where to use MOD? In the formula for the arithmetic progression sum, after adding 25 * the number of hundreds, and after adding the "end of series" sum. But not in getSum and sticker. I hope these ideas will be useful for your solution. Test result: 46781934 #input 321669645 #by formula, calculated in microseconds 321669645 #by brute force, calculated in one minute P.S. For the range 10^17 we have to use modular arithmetic inside sumofOdds in languages without in-built support of big integers (Python works with such numbers well): if s%2: result = (result + i) % MOD
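For reference, below is a small sketch that puts these ideas together; it is my own reading of the formula (assuming the original range(1, n) convention, i.e. numbers 1..n-1 are counted), so treat it as an illustration rather than the exact code used for the timing above:

MOD = 1000000007

def digit_sum_is_odd(i):
    s = 0
    while i:
        s += i % 10
        i //= 10
    return s % 2 == 1

def sticker_fast(n):
    h = (n // 100) * 100               # last multiple of 100 at or below n
    half = h // 2
    base = half * (half - 1)           # 0 + 2 + 4 + ... + (h - 2)
    correction = 25 * (h // 100)       # 5 decades per hundred start at *1, adding 5 each
    tail = sum(i for i in range(h, n) if digit_sum_is_odd(i))  # at most 99 numbers left
    return (base + correction + tail) % MOD

A quick sanity check: sticker_fast(13) returns 47, matching the example in the question.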
2
5
77,712,255
2023-12-25
https://stackoverflow.com/questions/77712255/how-the-parameter-vector-of-zeroes-works-in-tensorflow
I'm following book on Tensorflow by Chris Mattman. Here is the code: import tensorflow as tf tf.disable_v1_behavior() def model(X, w): terms = [] for i in range(num_coeffs): #num_coeffs = 6 term = tf.multiply(w[i], tf.pow(X, i)) terms.append(term) return tf.add_n(terms) w = tf.Variable([0.]*num_coeffs, name="parameters") y_model = model(X, w) #X = tf.placeholder(tf.float32) How do we multiply 0. by some value in the variable and get anything different from 0? It is to be polynomial regression model.
The parameter [0.]*num_coeffs is the initial value of the tensorflow variable represented by w. Of course, when all coefficients are zero, the result of multiplying by those coefficients is zero. The point of performing optimization with respect to some distance or loss metric (e.g. minimizing mean squared error) is that the variable will be updated to nonzero values that optimize the metric subject to the input and output data. Your code does not yet include an optimizer, so the variable won't be updated. Adding an optimizer will lead to the variable being updated to a nonzero vector, following some strategy depending on the selected optimizer.
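As an illustration, a minimal training loop in the TF1-style API the book uses (tf.compat.v1 in modern TensorFlow) might look like the sketch below; the learning rate, epoch count and the trX/trY training arrays are placeholders I made up, not values taken from the book:

Y = tf.placeholder(tf.float32)
cost = tf.reduce_sum(tf.square(Y - y_model))          # squared error against the targets
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(100):
        for (x, y) in zip(trX, trY):                  # hypothetical training data
            sess.run(train_op, feed_dict={X: x, Y: y})
    print(sess.run(w))                                # no longer the all-zero vector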
2
0
77,709,156
2023-12-23
https://stackoverflow.com/questions/77709156/check-if-float-result-is-close-to-one-of-the-expected-values
I have a function in Python makes a number of calculations and returns a float. I need to validate the float result is close (i.e. +/- 1) to one of 4 possible integers. For example the expected integers are 20, 50, 80, 100. If I have result = 19.808954 if would pass the validation test as it is with +/- 1 of 20 which is one of the expected results. Is there a concise way to do this test? My python math functions are basic.
any(abs(actual - expected) < 1 for expected in [20, 50, 80, 100]) # ^^^^^^^^^^^^^^^^^^^^^^^^^^ This checks if the underlined boolean expression is true for any of the expected values. any will return as soon as it finds a true value and return True or False accordingly. If you want to know which expected value it matches, change it to this: [expected for expected in [20, 50, 80, 100] if abs(actual - expected) < 1] This will build a list of matching values: If the list is empty, there was no match. If the list has one element, that's the match. In the edge case that there are multiple matches, the list will have multiple values. This could happen if the expected list had 19 and 20, say, with an input of 19.5.
3
3
77,711,479
2023-12-24
https://stackoverflow.com/questions/77711479/how-to-fix-cant-load-file-activate-ps1-because-script-execution-is-disable
I'm using VS Code and I have followed the steps in the docs to set it up for Python development: https://code.visualstudio.com/docs/python/python-tutorial. But when trying to run a program I get the error: can´t load the file C:\V\VSCode\Python.venv\Scripts\Activate.ps1 because script execution is disabled on this system. For more information, see the about_Execution_Policies. Even though I get the error, the program runs fine. Python version: 3.12.1 VSCode version: 1.85.1
Run the command get-executionpolicy to verify your execution policy setting. If you get Restricted, it means no scripts can be run and you can only use Windows PowerShell in interactive mode. If you get AllSigned, it means you can run only scripts that have been digitally signed by a trusted publisher. If you get RemoteSigned, it means you can run scripts that were created locally, but scripts that are downloaded must be digitally signed by a trusted publisher. If you get Unrestricted, then there are no restrictions at all. This allows you to run unsigned scripts from any source but will warn when a script has been downloaded from the Internet. To remove the restrictions, you can either run set-executionpolicy remotesigned (preferable) or set-executionpolicy unrestricted (if you're running as admin). After this, you can activate your virtual environment using venv\Scripts\activate. To get more information on Set-ExecutionPolicy, run get-help set-executionpolicy. See Setting the PowerShell Execution Policy for more.
2
4
77,711,457
2023-12-24
https://stackoverflow.com/questions/77711457/an-object-that-works-with-the-operator-for-both-strings-and-ints
Is there a way in Python to initialise a variable so that it will work with += "a string" or with += 100? For example: a = <what to put here> a += 100 # a = 100 b = <same what to put here from above> b += "a string" # b = "a string" If I use: a = "" a += "a string" # this will work fine b = "" b += 0 # -> TypeError Once the type of the variable has been defined, it is okay to get a TypeError afterwards if I try to += a different type. For example: a = <what to put here> a += "a string" # a = "a string" a += 200 # -> okay to get a TypeError here I’m only interested in making this work for ints and strings.
a = type('', (), {'__add__': lambda _, x: x})() a += 100 b = type('', (), {'__add__': lambda _, x: x})() b += "a string" print([a, b]) Output (Attempt This Online!): [100, 'a string'] This works because the anonymous class defines __add__ to simply return the right-hand operand, so the first += replaces the placeholder object with whatever was added; from then on the variable holds that concrete type, and a later += with a mismatched type raises TypeError, as required.
6
1
77,711,068
2023-12-24
https://stackoverflow.com/questions/77711068/add-rows-without-duplicates-in-python
I want to add new items to the original csv file. The original file's ID increases by 1 each time an item is added, as shown below. Id Name 0 Alpha 1 Beta 2 Gamma 3 Delta I want to add the following array items = ["Epsilon", "Beta", "Zeta"] to the original csv file and eliminate duplicates, which would finally look like this: Id Name 0 Alpha 1 Beta 2 Gamma 3 Delta 4 Epsilon 5 Zeta I tried it with pandas, but the id column becomes "nan" for some reason. import pandas as pd items = ["Epsilon", "Beta", "Zeta"] df = pd.read_csv('original.csv', index_col='Id') for i in range(len(items)): df=df.append({'Id': len(df), 'Name': items[i]}, ignore_index=True) df = df.drop_duplicates(['Name'], ignore_index=True) df I would appreciate it if you could help me with this problem.
Try: items = ["Epsilon", "Beta", "Zeta"] df = pd.concat([df, pd.DataFrame({"Name": items})]).drop_duplicates(subset="Name") df["Id"] = range(len(df)) print(df) # df.to_csv('out.csv') Prints: Id Name 0 0 Alpha 1 1 Beta 2 2 Gamma 3 3 Delta 0 4 Epsilon 2 5 Zeta
2
3
77,709,546
2023-12-24
https://stackoverflow.com/questions/77709546/pyspark-ram-leakage
My spark codes recently causes ram leakage. For instance, before running any script, when I run top, I can see 251 GB total memory and 230 GB free + used memory. When I run my spark job through spark-submit, regardless of whether the job is completed or not (ending with exception) the free + used memory is much lower than the start. This is one sample of my code: from pyspark.sql import SparkSession def read_df(spark, jdbc_url, table_name, jdbc_properties ): df = spark.read.jdbc(url=jdbc_url, table=table_name, properties=jdbc_properties) return df def write_df(result, table_name, jdbc_properties): result = result.repartition(50) result.write.format('jdbc').options( url=jdbc_properties['jdbc_url'], driver="org.postgresql.Driver", user=jdbc_properties["user"], password=jdbc_properties["password"], dbtable=table_name, mode="overwrite" ).save() if __name__ == '__main__': spark = SparkSession \ .builder \ .appName("Python Spark SQL basic example") \ .config("spark.driver.extraClassPath", "postgresql-42.5.2.jar").config("spark.executor.extraClassPath","postgresql-42.5.2.jar") \ .config("spark.local.dir", "/shared/hm31") \ .config("spark.master", "local[*]") \ .getOrCreate() spark.sparkContext.setLogLevel("WARN") parquet_path = '/shared/hossein_hm31/embeddings_parquets' try: unique_nodes = read_df(spark, jdbc_url, 'hm31.unique_nodes_cert', jdbc_properties) df = spark.read.parquet(parquet_path) unique_nodes.createOrReplaceTempView("unique_nodes") df.createOrReplaceTempView("all_embeddings") sql_query = """ select u.node_id, a.embedding from unique_nodes u inner join all_embeddings a on u.pmid = a.pmid """ result = spark.sql(sql_query) print("num", result.count()) result.repartition(10).write.parquet('/shared/parquets_embeddings/') write_df(result, 'hm31.uncleaned_embeddings_cert', jdbc_properties) spark.catalog.clearCache() unique_nodes.unpersist() df.unpersist() result.unpersist() spark.stop() exit(0) except: print('Error') spark.catalog.clearCache() unique_nodes.unpersist() df.unpersist() spark.stop() exit(0) print('Error') spark.catalog.clearCache() unique_nodes.unpersist() df.unpersist() spark.stop() exit(0) Where I tried to remove cached data frames. This RAM leakage would need a server restart, which is uncomfortable. This is the command I run: spark-submit --master local[50] --driver-class-path ./postgresql-42.5.2.jar --jars ./postgresql-42.5.2.jar --driver-memory 200g --conf "spark.local.dir=./logs" calculate_similarities.py And this is the top output, that you can see free + used memory is much less than the total, and used to be around 230 before I ran my spark job. The jobs are sorted by memory usage, and you can see there is no memory-intensive job running after the spark ended with an exception. I shall add that the machine does not have Pyspark itself. It has Java 11, and I just run Pyspark by importing its package. Thanks P.S: The unique_nodes is around 0.5 GB on Postgres. The df = spark. read.parquet(parquet_path) reads 38 parquet files, each around 3 GB. After joining, the result is around 8 GB.
There is no "RAM leakage" here. You're mis-interpreting what top is displaying: total is the total amount of memory (no surprises) free is the amount of memory that is unused for any purpose used is what the kernel currently has allocated, e.g. due to requests from applications the sum of free+ used is not total, because there is also buff/cache. This is the amount of memory currently used for "secondary" purposes, especially caching data that is on disk for which the kernel knows it already has an exact copy in memory; as long as there is no memory-pressure by used, the kernel will try to keep it's buff/cache. avail is what is readily available to be used by applications, approximately the sum of free + buff/cache Your top screenshot shows large amount of memory allocated to buff/cache, which is probably data that was ready previously and which the kernel keeps around in case it is needed later; there is no "leakage" here, because the kernel will evict these cached memory page if the need by applications arrives. Also notice that the avail number is still around 234gb, which is almost exactly what you expected from free + used - but didn't take into account buff/cache.
2
2
77,708,843
2023-12-23
https://stackoverflow.com/questions/77708843/reading-opencv-yaml-in-python-gives-error-input-file-is-invalid-in-function-op
I have following test.yaml: Camera.ColourOrder: "RGB" Camera.ColourDepth: "UCHAR_8" Camera.Spectrum: "Visible_NearIR" IMU.T_b_c1: !!opencv-matrix rows: 4 cols: 4 dt: f data: [0.999903, -0.0138036, -0.00208099, -0.0202141, 0.0137985, 0.999902, -0.00243498, 0.00505961, 0.0021144, 0.00240603, 0.999995, 0.0114047, 0.0, 0.0, 0.0, 1.0] I am trying to read it using python opencv cv2 module (because it contains opencv related objects like opencv_matrix embedded in yaml, which will give error if tried reading using python's inbuilt yaml module). But it is giving me error as shown below: >>> import os >>> 'test.yaml' in os.listdir() True >>> import cv2 >>> fs = cv2.FileStorage("test.yaml", cv2.FILE_STORAGE_READ) cv2.error: OpenCV(4.8.1) /io/opencv/modules/core/src/persistence.cpp:699: error: (-5:Bad argument) Input file is invalid in function 'open' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> SystemError: <class 'cv2.FileStorage'> returned a result with an error set I tried removing IMU.T_b_c1 and keeping just Camera.xyz values, still the same error. What I am missing here? PS: is there any other way / module to read such yaml file with embedded opencv objects? I am ok with not using cv2 and read some other package / module.
Judging by the error message and the implementation, OpenCV expects YAML data to be preceded by a YAML directive (%YAML). The directive is also shown in the documentation's sample. Try the following (notice first line) %YAML:1.0 Camera.ColourOrder: "RGB" Camera.ColourDepth: "UCHAR_8" Camera.Spectrum: "Visible_NearIR" IMU.T_b_c1: !!opencv-matrix rows: 4 cols: 4 dt: f data: [0.999903, -0.0138036, -0.00208099, -0.0202141, 0.0137985, 0.999902, -0.00243498, 0.00505961, 0.0021144, 0.00240603, 0.999995, 0.0114047, 0.0, 0.0, 0.0, 1.0]
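Once the directive is in place, the original read should work; a quick check (a sketch, assuming the file is still named test.yaml):

import cv2

fs = cv2.FileStorage("test.yaml", cv2.FILE_STORAGE_READ)
print(fs.getNode("Camera.ColourOrder").string())   # "RGB"
print(fs.getNode("IMU.T_b_c1").mat())              # 4x4 numpy float array
fs.release()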
3
3
77,708,882
2023-12-23
https://stackoverflow.com/questions/77708882/change-first-and-last-elements-of-strings-or-lists-inside-a-dataframe
I have a dataframe like this: data = { 'name': ['101 blueberry 2023', '102 big cat 2023', '103 small white dog 2023'], 'number': [116, 118, 119]} df = pd.DataFrame(data) df output: name number 0 101 blueberry 2023 116 1 102 big cat 2023 118 2 103 small white dog 2023 119 I would like to change the first and last numbers in the name column. For example, the first number in name to the number in the number column, and the last number in name to '2024'. So finally it would look like: name number 0 116 blueberry 2024 116 1 118 big cat 2024 118 2 119 small white dog 2024 119 I have tried splitting name into a list and changing the first and last elements of the list. df['name_pieces'] = df['name'].split(' ') df output: name number name_pieces 0 101 blueberry 2023 116 [101, blueberry, 2023] 1 102 big cat 2023 118 [102, big, cat, 2023] 2 103 small white dog 2023 119 [103, small, white, dog, 2023] I can access the first item of the lists using .str, but I cannot change the item. df['name_pieces'].str[0] output: 0 101 1 102 2 103 but trying to assign the first value of the list gives an error df['name_pieces'].str[0] = df['number'] output: TypeError: 'StringMethods' object does not support item assignment How can I replace the first and last value of name inside this dataframe?
Don't bother with the lists. You can just extract the part of the strings you want and join the other parts. df.assign(name= df['number'].astype(str) + df['name'].str.extract(r'( .* )', expand=False) + '2024' ) name number 0 116 blueberry 2024 116 1 118 big cat 2024 118 2 119 small white dog 2024 119 This regex gets the longest part of the string surrounded by spaces, i.e the part between the first space and last space. Here's a variation if you'd rather think about name primarily: df.assign(name= df['name'].str.extract(r'( .* )', expand=False) .radd(df['number'].astype(str)) .add('2024') )
2
4
77,707,308
2023-12-23
https://stackoverflow.com/questions/77707308/why-do-attributes-that-are-being-set-by-a-custom-type-initializer-need-to-be-pro
The CPython Tutorial defines a custom initializer for a custom type which has the following lines : if (first) { tmp = self->first; Py_INCREF(first); self->first = first; Py_XDECREF(tmp); } but the tutorial advises against this simpler but more nefarious version : if (first) { Py_XDECREF(self->first); Py_INCREF(first); self->first = first; } For the following reason : But this would be risky. Our type doesn’t restrict the type of the first member, so it could be any kind of object. It could have a destructor that causes code to be executed that tries to access the first member; or that destructor could release the Global interpreter Lock and let arbitrary code run in other threads that accesses and modifies our object. I understand the reason why a multithreaded programming can break the simple version. There can be a race condition where the new self->first is set by one thread and freed by another thread (by way of Py_XDECREF). This is avoided in the correct version because both threads will have had to set self->first before freeing it. The part I don't understand is this section : It could have a destructor that causes code to be executed that tries to access the first member If first is an object that is getting garbage collected, its destructor would have to be called and it should have access to itself through the duration of the destructor. Why can this be dangerous? Thank you!
Let's suppose we have: custom = Custom(...) # global variable class SomePyClass: def __del__(self): # access global variable custom.__init__(1, 2, 3) Let's suppose that custom.first is an instance of SomePyClass and that this instance gets destructed at the line Py_XDECREF(self->first);. The arbitrary code in the destructor is going to cause custom.__init__ to be called again. This will cause Py_XDECREF to be called again on an object that really should have a refcount of 0 and which is already in the process of getting destroyed. That alone represents a reference counting error since its refcount will end up below 0 (note this may not be exactly what happens since the object temporarily gets given a refcount of 1 for the duration of the destructor, but it absolutely is a ref-counting error). Note also that the new value of first being assigned in __init__ is mis-reference-counted too: while our instance is decrefed twice, the value that is about to be assigned is replaced without decrefing it.
2
1