question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
70,811,065 | 2022-1-22 | https://stackoverflow.com/questions/70811065/how-to-schedule-code-execution-in-python | I have a function that is running inside a for loop. My goal is to run it every day at a specific time, such as 10 am. My code is: def get_sql_backup_transfer(ip, folder,path_sec_folder,sec_folder,path): call(["robocopy",f'{ip}\\{folder}', f'{path_sec_folder}{sec_folder}',"/tee","/r:5","/w:120","/S","/MIR",f"/LOG:{path}{sec_folder}.log"]) for i in sqlserverList : get_sql_backup_transfer(i['ip'] , i['folder'] , path_sec_folder ,i['sec_folder'] , path ) How can I run this code automatically every day at 10 am? | There are several ways to do this, but I think the best is to use the 'schedule' package. First, install the package: pip install schedule Then use it in your code like this: import schedule schedule.every().day.at("10:00").do(yourFunctionToDo,'It is 10:00') | 5 | 2 |
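The accepted answer only registers the job; the schedule package also needs a loop that polls for pending work. A minimal sketch of how this could be wired to the question's transfer loop (the run_backup wrapper name is my own, and it assumes the question's definitions of get_sql_backup_transfer, sqlserverList, path_sec_folder and path are in scope):

```python
import time
import schedule

def run_backup():
    # run the question's transfer loop once per scheduled execution
    for i in sqlserverList:
        get_sql_backup_transfer(i['ip'], i['folder'], path_sec_folder, i['sec_folder'], path)

schedule.every().day.at("10:00").do(run_backup)

while True:
    schedule.run_pending()  # executes the job once 10:00 arrives
    time.sleep(60)          # poll once a minute
```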
70,809,437 | 2022-1-22 | https://stackoverflow.com/questions/70809437/why-if-statement-does-not-pick-the-specific-row-for-condition-from-a-data-fram | writing a function that should meet a condition on a row basis and return the expected results def bt_quantity(df): df = bt_level(df) df['Marker_change'] = df['Marker'] - df['Marker'].shift(1).fillna(0).round(0).astype(int) df['Action'] = np.where(df['Marker_change'] > 0, "BUY", "") def turtle_split(row): if df['Action'] == 'Buy': return baseQ * (turtle ** row['Marker'] - 1) // (turtle - 1) else: return 0 df['Traded_q'] = df.apply(turtle_split, axis=1).round(0).astype(int) df['Net_q'] = df['Traded_q'].cumsum().round(0).astype(int) print(df.head(39)) return df This is a common issue, and I am not using any "and" or "or" in the code. still getting the below error ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). I tried changing str to int(BUY >> 1), no progress. P.S. the data set is huge and I am using multiple modules and functions to work on this project. | As you said, this is really a common problem, you will find the answers from Truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all() But to address your particular case, this is not a very efficient way of doing it as the dataset is huge. You should use Numpy. That will shorten the runtime drastically. I see two issues with your snippet There is a Typo. You used "BUY" and then used "Buy". Python is case-sensitive. The col Action is getting tested for "BUY" entirely. The fix is to use row (Not a pythonic way, but a small fix) def bt_quantity(df): df = bt_level(df) df['Marker_change'] = df['Marker'] - df['Marker'].shift(1).fillna(0).round(0).astype(int) df['Action'] = np.where(df['Marker_change'] > 0, "BUY", "") def turtle_split(row): if row['Action'] == 'BUY': return baseQ * (turtle ** row['Marker'] - 1) // (turtle - 1) else: return 0 df['Traded_q'] = df.apply(turtle_split, axis=1).round(0).astype(int) df['Net_q'] = df['Traded_q'].cumsum().round(0).astype(int) print(df.head(39)) return df | 5 | 5 |
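The answer recommends NumPy over the row-wise apply but does not show the vectorized form. A hedged sketch of what it could look like, keeping the answer's formula; baseQ and turtle are passed in as numeric parameters here for self-containment, and bt_level remains the asker's own helper:

```python
import numpy as np

def bt_quantity(df, baseQ, turtle):
    df = bt_level(df)  # the asker's own preprocessing step
    df['Marker_change'] = df['Marker'] - df['Marker'].shift(1).fillna(0).round(0).astype(int)
    df['Action'] = np.where(df['Marker_change'] > 0, "BUY", "")
    # vectorized replacement for the row-wise turtle_split + apply
    df['Traded_q'] = np.where(
        df['Action'] == 'BUY',
        baseQ * (turtle ** df['Marker'] - 1) // (turtle - 1),
        0,
    ).round(0).astype(int)
    df['Net_q'] = df['Traded_q'].cumsum()
    return df
```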
70,789,297 | 2022-1-20 | https://stackoverflow.com/questions/70789297/apache-beam-performance-between-python-vs-java-running-on-gcp-dataflow | We have Beam data pipeline running on GCP dataflow written using both Python and Java. In the beginning, we had some simple and straightforward python beam jobs that works very well. So most recently we decided to transform more java beam to python beam job. When we having more complicated job, especially the job requiring windowing in the beam, we noticed that there is a significant slowness in python job than java job which end up using more cpu and memory and cost much more. some sample python code looks like: step1 = ( read_from_pub_sub | "MapKey" >> beam.Map(lambda elem: (elem.data[key], elem)) | "WindowResults" >> beam.WindowInto( beam.window.SlidingWindows(360,90), allowed_lateness=args.allowed_lateness, ) | "GroupById" >> beam.GroupByKey() And Java code is like: PCollection<DataStructure> step1 = message .apply( "MapKey", MapElements.into( TypeDescriptors.kvs( TypeDescriptors.strings(), TypeDescriptor.of(DataStructure.class))) .via(event -> KV.of(event.key, event))) .apply( "WindowResults", Window.<KV<String, CustomInterval>>into( SlidingWindows.of(Duration.standardSeconds(360)) .every(Duration.standardSeconds(90))) .withAllowedLateness(Duration.standardSeconds(this.allowedLateness)) .discardingFiredPanes()) .apply("GroupById", GroupByKey.<String, DataStructure>create()) We noticed Python is always using like 3 more times CPU and memory than Java needed. We did some experimental tests that just ran JSON input and JSON output, same results. We are not sure that is just because Python, in general, is slower than java or the way the GCP Dataflow execute Beam Python and Java is different. Any similar experience, tests and reasons why this is are appreciated. | Yes, this is a very normal performance factor between Python and Java. In fact, for many programs the factor can be 10x or much more. The details of the program can radically change the relative performance. Here are some things to consider: Profiling the Dataflow job (official docs) Profiling a Dataflow pipeline (medium blog) Profiling Apache Beam Python pipelines (another medium blog) Profiling Python (general Cloud Profiler docs) How can I profile a Python Dataflow job? (previous StackOverflow question on profiling Python job) If you prefer Python for its concise syntax or library ecosystem, the approach to achieve speed is to use optimized C libraries or Cython for the core processing, for example using pandas/numpy/etc. If you use Beam's new Pandas-compatible dataframe API you will automatically get this benefit. | 6 | 6 |
70,805,036 | 2022-1-21 | https://stackoverflow.com/questions/70805036/why-are-python-sets-sorted-in-ascending-order | Let's run the following code: st = {3, 1, 2} st >>> {1, 2, 3} st.pop() >>> 1 st.pop() >>> 2 st.pop() >>> 3 Although sets are said to be unordered, this set behaves as if it was sorted in ascending order. The method pop(), that should return an 'arbitrary element', according to the documentation, returns elements in ascending order as well. What is the reason for this? | The order correlates to the hash of the object, size of the set, binary representation of the number, insertion order and other implementation parameters. It is completely arbitrary and shouldn't be relied upon: >>> st = {3, 1, 2,4,9,124124,124124124124,123,12,41,15,} >>> st {1, 2, 3, 4, 9, 41, 12, 15, 124124, 123, 124124124124} >>> st.pop() 1 >>> st.pop() 2 >>> st.pop() 3 >>> st.pop() 4 >>> st.pop() 9 >>> st.pop() 41 >>> st.pop() 12 >>> {1, 41, 12} {1, 12, 41} >>> {1, 9, 41, 12} {1, 12, 9, 41} # Looks like 9 wants to go after 12. >>> hash(9) 9 >>> hash(12) 12 >>> hash(41) 41 >>> {1, 2, 3, 4, 9, 41, 12} {1, 2, 3, 4, 9, 12, 41} # 12 before 41 >>> {1, 2, 3, 4, 9, 41, 12, 15} # add 15 at the end {1, 2, 3, 4, 9, 41, 12, 15} # 12 after 41 | 5 | 6 |
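Because the order is an implementation detail, the practical takeaway is to request an order explicitly whenever one is needed; a short sketch:

```python
st = {3, 1, 2, 124124124124, 15}

# iteration and pop() order are arbitrary, so ask for the order you want
for value in sorted(st):
    print(value)             # 1, 2, 3, 15, 124124124124

smallest_first = sorted(st)  # returns a list, not a set
largest = max(st)            # 124124124124, no sorting required
```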
70,802,550 | 2022-1-21 | https://stackoverflow.com/questions/70802550/pyinstaller-executable-is-not-working-if-using-multiprocessing-in-pyqt5 | I was working on a PyQt5 GUI project and used multiprocessing to make it faster. When I run my program in the editor it works fine, but after converting it into an executable with PyInstaller the program no longer works: the GUI opens but closes once it reaches the multiprocessing portion of the code (I confirmed this with some print statements). I have also tried multiprocessing.freeze_support(), which did not help. If I remove the multiprocessing, the program built as an executable works fine, but I need multiprocessing to make it faster. Any suggestions? | I had the same problem a while ago, and I recommend using Nuitka, since it supports multiprocessing. If the problem persists, try the threading library: from threading import Thread def worker(a,b): while True: print(a+b) a+=1 my_thread = Thread(target = worker, args = [10,20]) my_thread.start() You can also set up a class if you want to terminate the thread at a certain point. | 5 | 1 |
70,792,216 | 2022-1-20 | https://stackoverflow.com/questions/70792216/count-number-of-retries-for-each-request | I use package requests together with urllib3.util.retry.Retry() to send tens of thousands of queries. I seek to count the number of queries and the number of necessary attempts until I successfully retrieve the desired data. My goal is to construct a measure for the reliability of the API. To fix ideas, let's assume that the Response object of requests contains this data: from requests import Session from urllib3.util.retry import Retry from requests.adapters import HTTPAdapter def create_session(): session = Session() retries = Retry( total = 15, backoff_factor = 0.5, status_forcelist = [401, 408, 429, 500, 502, 504], allowed_methods = frozenset(["GET"]) ) session.mount('http://', HTTPAdapter(max_retries=retries)) session.mount('https://', HTTPAdapter(max_retries=retries)) return session urls = ['https://httpbin.org/status/500'] count_queries = len(urls) count_attempts = 0 with create_session() as s: for url in urls: response = s.get(url) count_attempts += response.total_retries Since there is no such variable, I am looking for alternatives to count the total number of retries. While I am unable to identify an approach to this problem, I made the following observations during my search which is potentially helpful: urllib3 stores the retry-history in the Retry object. The urllib3.HTTPResponse stores the last Retry object (docs). The urllib3.HTTPResponse (to be precise, its undecoded body) is stored in requests.Response.raw, however only when stream=True (docs). In my understanding, I can't access this data. One user provides a solution to a similar question that subclasses the Retry class. Essentially, a callback function is called which prints a string to a logger. This could be adapted to increment a counter instead of printing to logs. However, if possible, I prefer to track the retries specific to a particular get, as shown above, as opposed to all gets using the same session. A very similar question was asked here, however no (working) solution was provided. I'm using Python 3.9, urllib3 1.26.8, requests 2.26.0. | This is a rather verbose solution along the lines of this answer. It counts requests and retries on the session level (which, however, was not my preferred approach). 
import requests from urllib3.util.retry import Retry class RequestTracker: """ track queries and retries """ def __init__(self): self._retries = 0 self._queries = 0 def register_retry(self): self._retries += 1 def register_query(self): self._queries += 1 @property def retries(self): return self._retries @property def queries(self): return self._queries class RetryTracker(Retry): """ subclass Retry to track count of retries """ def __init__(self, *args, **kwargs): self._request_tracker = kwargs.pop('request_tracker', None) super(RetryTracker, self).__init__(*args, **kwargs) def new(self, **kw): """ pass additional information when creating new Retry instance """ kw['request_tracker'] = self._request_tracker return super(RetryTracker, self).new(**kw) def increment(self, method, url, *args, **kwargs): """ register retry attempt when new Retry object with incremented counter is returned """ if self._request_tracker: self._request_tracker.register_retry() return super(RetryTracker, self).increment(method, url, *args, **kwargs) class RetrySession(requests.Session): """ subclass Session to track count of queries """ def __init__(self, retry): super().__init__() self._requests_count = retry def prepare_request(self, request): """ increment query counter """ # increment requests counter self._requests_count.register_query() return super().prepare_request(request) class RequestManager: """ manage requests """ def __init__(self, request_tracker=None): # session settings self.__session = None self.__request_tracker = request_tracker # retry logic specification args = dict( total = 11, backoff_factor = 1, status_forcelist = [401,408, 429, 500, 502, 504], allowed_methods = frozenset(["GET"]) ) if self.__request_tracker is not None: args['request_tracker'] = self.__request_tracker self.__retries = RetryTracker(**args) else: self.__retries = Retry(**args) @property def session(self): if self.__session is None: # create new session if self.__request_tracker is not None: self.__session = RetrySession(self.__request_tracker) else: self.__session = requests.Session() # mount https adapter with retry logic https = requests.adapters.HTTPAdapter(max_retries=self.__retries) self.__session.mount('https://', https) return self.__session @session.setter def session(self, value): raise AttributeError('Setting session attribute is prohibited.') request_tracker = RequestTracker() request_manager = RequestManager(request_tracker=request_tracker) session = request_manager.session urls = ['https://httpbin.org/status/500'] with session as s: for url in urls: response = s.get(url) print(request_tracker.queries) print(request_tracker.retries) | 7 | 2 |
70,799,600 | 2022-1-21 | https://stackoverflow.com/questions/70799600/how-exactly-does-python-find-new-and-choose-its-arguments | While trying to implement some deep magic that I'd rather not get into here (I should be able to figure it out if I get an answer for this), it occurred to me that __new__ doesn't work the same way for classes that define it, as for classes that don't. Specifically: when you define __new__ yourself, it will be passed arguments that mirror those of __init__, but the default implementation doesn't accept any. This makes some sense, in that object is a builtin type and doesn't need those arguments for itself. However, it leads to the following behaviour, which I find quite vexatious: >>> class example: ... def __init__(self, x): # a parameter other than `self` is necessary to reproduce ... pass >>> example(1) # no problem, we can create instances. <__main__.example object at 0x...> >>> example.__new__ # it does exist: <built-in method __new__ of type object at 0x...> >>> old_new = example.__new__ # let's store it for later, and try something evil: >>> example.__new__ = 'broken' >>> example(1) # Okay, of course that will break it... Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'str' object is not callable >>> example.__new__ = old_new # but we CAN'T FIX IT AGAIN >>> example(1) # the argument isn't accepted any more: Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object.__new__() takes exactly one argument (the type to instantiate) >>> example() # But we can't omit it either due to __init__ Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: __init__() missing 1 required positional argument: 'x' Okay, but that's just because we still have something explicitly attached to example, so it's shadowing the default, which breaks some descriptor thingy... right? Except not: >>> del example.__new__ # if we get rid of it, the problem persists >>> example(1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object.__new__() takes exactly one argument (the type to instantiate) >>> assert example.__new__ is old_new # even though the lookup gives us the same object! The same thing still happens if we directly add and remove the attribute, without replacing it in between. Simply assigning and removing an attribute breaks the class, apparently irrevocably, and makes it impossible to instantiate. It's as if the class had some hidden attribute that tells it how to call __new__, which has been silently corrupted. When we instantiate example at the start, how actually does Python find the base __new__ (it apparently finds object.__new__, but is it looking directly in object? Getting there indirectly via type? Something else?), and how does it decide that this __new__ should be called without arguments, even though it would pass an argument if we wrote a __new__ method inside the class? Why does that logic break if we temporarily mess with the class' __new__, even if we restore everything such that there is no observable net change? | The issues you're seeing aren't related to how Python finds __new__ or chooses its arguments. __new__ receives every argument you're passing. The effects you observed come from specific code in object.__new__, combined with a bug in the logic for updating the C-level tp_new slot. There's nothing special about how Python passes arguments to __new__. What's special is what object.__new__ does with those arguments. 
object.__new__ and object.__init__ expect one argument, the class to instantiate for __new__ and the object to initialize for __init__. If they receive any extra arguments, they will either ignore the extra arguments or throw an exception, depending on what methods have been overridden: If a class overrides exactly one of __new__ or __init__, the non-overridden object method should ignore extra arguments, so people aren't forced to override both. If a subclass __new__ or __init__ explicitly passes extra arguments to object.__new__ or object.__init__, the object method should raise an exception. If neither __new__ nor __init__ are overridden, both object methods should throw an exception for extra arguments. There's a big comment in the source code talking about this. At C level, __new__ and __init__ correspond to tp_new and tp_init function pointer slots in a class's memory layout. Under normal circumstances, if one of these methods is implemented in C, the slot will point directly to the C-level implementation, and a Python method object will be generated wrapping the C function. If the method is implemented in Python, the slot will point to the slot_tp_new function, which searches the MRO for a __new__ method object and calls it. When instantiating an object, Python will invoke __new__ and __init__ by calling the tp_new and tp_init function pointers. object.__new__ is implemented by the object_new C-level function, and object.__init__ is implemented by object_init. object's tp_new and tp_init slots are set to point to these functions. object_new and object_init check whether they're overridden by checking a class's tp_new and tp_init slots. If tp_new points to something other than object_new, then __new__ has been overridden, and similar for tp_init and __init__. static PyObject * object_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { if (excess_args(args, kwds)) { if (type->tp_new != object_new) { PyErr_SetString(PyExc_TypeError, "object.__new__() takes exactly one argument (the type to instantiate)"); return NULL; } ... Now, when you assign or delete __new__, Python has to update the tp_new slot to reflect this. When you assign __new__ on a class, Python sets the class's tp_new slot to the generic slot_tp_new function, which searches for a __new__ method and calls it. When you delete __new__, the class is supposed to re-inherit tp_new from the superclass, but the code has a bug: else if (Py_TYPE(descr) == &PyCFunction_Type && PyCFunction_GET_FUNCTION(descr) == (PyCFunction)(void(*)(void))tp_new_wrapper && ptr == (void**)&type->tp_new) { /* The __new__ wrapper is not a wrapper descriptor, so must be special-cased differently. If we don't do this, creating an instance will always use slot_tp_new which will look up __new__ in the MRO which will call tp_new_wrapper which will look through the base classes looking for a static base and call its tp_new (usually PyType_GenericNew), after performing various sanity checks and constructing a new argument list. Cut all that nonsense short -- this speeds up instance creation tremendously. */ specific = (void *)type->tp_new; /* XXX I'm not 100% sure that there isn't a hole in this reasoning that requires additional sanity checks. I'll buy the first person to point out a bug in this reasoning a beer. */ } In the specific = (void *)type->tp_new; line, type is the wrong type - it's the class whose slot we're trying to update, not the class we're supposed to inherit tp_new from. 
When this code finds a __new__ method written in C, instead of updating tp_new to point to the corresponding C function, it sets tp_new to whatever value it already had! It doesn't change tp_new at all! So initially, your example class has tp_new set to object_new, and object_new ignores extra arguments because it sees that __init__ is overridden and __new__ isn't. When you set example.__new__ = 'broken', Python sets example's tp_new to slot_tp_new. Nothing you do after that point changes tp_new to anything else, even though del example.__new__ really should have. When object_new finds that example's tp_new is slot_tp_new instead of object_new, it rejects extra arguments and throws an exception. The bug manifests in some other ways too. For example, >>> class Example: pass ... >>> Example.__new__ = tuple.__new__ >>> Example() <__main__.Example object at 0x7f9d0a38f400> Before the __new__ assignment, Example has tp_new set to object_new. When the example does Example.__new__ = tuple.__new__, Python finds that tuple.__new__ is implemented in C, so it fails to update tp_new, leaving it set to object_new. Then, in Example(1, 2, 3), tuple.__new__ should raise an exception, because tuple.__new__ isn't applicable to Example: >>> tuple.__new__(Example) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: tuple.__new__(Example): Example is not a subtype of tuple but because tp_new is still set to object_new, object_new gets called instead of tuple.__new__. The devs have tried to fix the buggy code several times, but each fix was itself buggy and got reverted. The second attempt got closer, but broke multiple inheritance - see the conversation in the bug tracker. | 5 | 5 |
70,794,535 | 2022-1-20 | https://stackoverflow.com/questions/70794535/selenium-unable-to-find-session-with-id-after-a-few-minutes-of-idling | I started a Docker container with: docker run -d --shm-size="4g" --hostname selenium_firefox selenium/standalone-firefox In another container with Python: ... >>> driver = webdriver.Remote(command_executor="http://" +selenium_host+":4444/w d/hub", desired_capabilities=DesiredCapabilities.FIREFOX, keep_alive=True) >>> driver.title '' >>> driver.title Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/site-packages/selenium/webdriver/remote/webdri ver.py", line 447, in title resp = self.execute(Command.GET_TITLE) File "/usr/local/lib/python3.10/site-packages/selenium/webdriver/remote/webdri ver.py", line 424, in execute self.error_handler.check_response(response) File "/usr/local/lib/python3.10/site-packages/selenium/webdriver/remote/errorh andler.py", line 247, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: Unable to execute reques t for an existing session: Unable to find session with ID: 5c619451-8361-4ec9-9b 7e-58b7afac15ff Build info: version: '4.1.1', revision: 'e8fcc2cecf' System info: host: 'selenium_firefox', ip: '172.17.0.3', os.name: 'Linux', os.ar ch: 'amd64', os.version: '5.4.0-89-generic', java.version: '11.0.13' Driver info: driver.version: unknown The first driver.title I ran it immediately after creating the remote webdriver. Then I waited for some time (around 15 minutes) and ran driver.title again, and it seem that the Python console has lost connection to the corresponding browser. Why does this happen and how do I avoid it? It doesn't happen if I don't use a remote webdriver. | Option 1: Override Docker Selenium Grid default session timeout From docker/selenium documentation: Grid has a default session timeout of 300 seconds, where the session can be on a stale state until it is killed. You can use SE_NODE_SESSION_TIMEOUT to overwrite that value in seconds. docker run -d -e SE_NODE_SESSION_TIMEOUT=1000 --shm-size="4g" --hostname selenium_firefox selenium/standalone-firefox Option 2: Ping your session once in 60 (any < 300) seconds You may execute some driver command in a loop during the idle time for x in range(15): time.sleep(60) driver.current_url Reference https://github.com/SeleniumHQ/docker-selenium#grid-url-and-session-timeout | 6 | 7 |
70,795,755 | 2022-1-21 | https://stackoverflow.com/questions/70795755/how-to-resolve-dash-bootstrap-no-gutter-type-error | When I run this code, I get an error message saying "Error: The dash_bootstrap_components.Row component (version 1.0.2) received an unexpected keyword argument: no_gutters Allowed arguments: align, children, className, class_name, id, justify, key, loading_state, style" I believe it's because of the dash bootstrap version I use. How do I modify my code to make it work? app.layout = dbc.Container([ dbc.Row( dbc.Col(html.H1("My Dashboard", className='text-center'), width=12) ), dbc.Row([ dbc.Col([ dcc.Dropdown(id='my-dpdn', multi=False, value='A', options=[{'label':x, 'value':x} for x in sorted(df['Value'].unique())], ), dcc.Graph(id='line-fig', figure={}) ],# width={'size':5, 'offset':1, 'order':1}, xs=12, sm=12, md=12, lg=5, xl=5 ), dbc.Col([ dcc.Dropdown(id='my-dpdn2', multi=True, value=['B','C'], options=[{'label':x, 'value':x} for x in sorted(df['Value'].unique())], ), dcc.Graph(id='line-fig2', figure={}) ], #width={'size':5, 'offset':0, 'order':2}, xs=12, sm=12, md=12, lg=5, xl=5 ), ], no_gutters=True, justify='start') ], fluid=True) | As of Version 1.0 for Dash-bootstrap-components, the no_gutters attribute for the Row component has been deprecated. You are probably using code from a version <= 0.13, read this migration guide for the full details of the changes. You have a couple of options here: Reading that migration guide will suggest the new way of removing gutters under the heading Row without 'gutters' Dropped no_gutters prop. Use gutter modifier classes instead. See the docs for examples. 2- You can revert your dash-bootstrap-components library back to 0.13 (eg. pip install dash-bootstrap-components==0.13), but this is not recommended. | 5 | 4 |
70,794,199 | 2022-1-20 | https://stackoverflow.com/questions/70794199/use-of-colon-in-type-hints | When type annotating a variable of type dict, typically you'd annotate it like this: numeralToInteger: dict[str, int] = {...} However I rewrote this using a colon instead of a comma: numeralToInteger: dict[str : int] = {...} And this also works, no SyntaxError or NameError is raised. Upon inspecting the __annotations__ global variable: colon: dict[str : int] = {...} comma: dict[str, int] = {...} print(__annotations__) The output is: {'colon': dict[slice(<class 'str'>, <class 'int'>, None)], 'comma': dict[str, int]} So the colon gets treated as a slice object and the comma as a normal type hint. Should I use the colon with dict types or should I stick with using a comma? I am using Python version 3.10.1. | If you have a dictionary whose keys are strings and values are integers, you should do dict[str, int]. It's not optional. IDEs and type-checkers use these type hints to help you. When you say dict[str : int], it is a slice object. Totally different things. Try these in mypy playground: d: dict[str, int] d = {'hi': 20} c: dict[str: int] c = {'hi': 20} message: main.py:4: error: "dict" expects 2 type arguments, but 1 given main.py:4: error: Invalid type comment or annotation main.py:4: note: did you mean to use ',' instead of ':' ? Found 2 errors in 1 file (checked 1 source file) Error messages are telling everything | 12 | 9 |
70,792,058 | 2022-1-20 | https://stackoverflow.com/questions/70792058/how-can-i-change-the-default-pager-for-pythons-help-debugger-command | I'm currently doing some work in a server (Ubuntu) without admin rights nor contact with the administrator. When using the help(command) in the python command line I get an error. Here's an example: >>> help(someCommand) /bin/sh: most: command not found So, this error indicates that most pager is not currently installed. However, the server I'm working on has "more" and "less" pagers installed. So, how can I change the default pager configuration for this python utility? | This one is annoyingly difficult to research, but I think I found it. The built-in help generates its messages using the standard library pydoc module (the module is also intended to be usable as a standalone script). In that documentation, we find: When printing output to the console, pydoc attempts to paginate the output for easier reading. If the PAGER environment variable is set, pydoc will use its value as a pagination program. So, presumably, that's been set to most on your system. Assuming it won't break anything else on your system, just unset or change it. (It still pages without a value set - even on Windows. I assume it has a built-in fallback.) | 5 | 1 |
70,782,902 | 2022-1-20 | https://stackoverflow.com/questions/70782902/best-way-to-navigate-a-nested-json-in-python | I have tried different for loops trying to iterate through this JSON and I cant figure out how to do it. I have a list of numbers and want to compare it to the "key" values under each object of "data" (For example, Aatrox, Ahri, Akali, and so on) and if the numbers match store the "name" value in another list. Example: listOfNumbers = [266, 166, 123, 283] 266 and 166 would match the "key" in the Aatrox and Akshan objects respectively so I would want to pull that name and store it in a list. I understant this JSON is mostly accessed by key values rather than being indexed so Im not sure how I would iterate through all the "data" objects in a for loop(s). JSON im referencing: { "type": "champion", "format": "standAloneComplex", "version": "12.2.1", "data": { "Aatrox": { "version": "12.2.1", "id": "Aatrox", "key": "266", "name": "Aatrox", "title": "the Darkin Blade", "blurb": "Once honored defenders of Shurima against the Void, Aatrox and his brethren would eventually become an even greater threat to Runeterra, and were defeated only by cunning mortal sorcery. But after centuries of imprisonment, Aatrox was the first to find...", "info": { "attack": 8, "defense": 4, "magic": 3, "difficulty": 4 }, "image": { "full": "Aatrox.png", "sprite": "champion0.png", "group": "champion", "x": 0, "y": 0, "w": 48, "h": 48 }, "tags": [ "Fighter", "Tank" ], "partype": "Blood Well", "stats": { "hp": 580, "hpperlevel": 90, "mp": 0, "mpperlevel": 0, "movespeed": 345, "armor": 38, "armorperlevel": 3.25, "spellblock": 32, "spellblockperlevel": 1.25, "attackrange": 175, "hpregen": 3, "hpregenperlevel": 1, "mpregen": 0, "mpregenperlevel": 0, "crit": 0, "critperlevel": 0, "attackdamage": 60, "attackdamageperlevel": 5, "attackspeedperlevel": 2.5, "attackspeed": 0.651 } }, "Ahri": { "version": "12.2.1", "id": "Ahri", "key": "103", "name": "Ahri", "title": "the Nine-Tailed Fox", "blurb": "Innately connected to the latent power of Runeterra, Ahri is a vastaya who can reshape magic into orbs of raw energy. She revels in toying with her prey by manipulating their emotions before devouring their life essence. Despite her predatory nature...", "info": { "attack": 3, "defense": 4, "magic": 8, "difficulty": 5 }, "image": { "full": "Ahri.png", "sprite": "champion0.png", "group": "champion", "x": 48, "y": 0, "w": 48, "h": 48 }, "tags": [ "Mage", "Assassin" ], "partype": "Mana", "stats": { "hp": 526, "hpperlevel": 92, "mp": 418, "mpperlevel": 25, "movespeed": 330, "armor": 21, "armorperlevel": 3.5, "spellblock": 30, "spellblockperlevel": 0.5, "attackrange": 550, "hpregen": 5.5, "hpregenperlevel": 0.6, "mpregen": 8, "mpregenperlevel": 0.8, "crit": 0, "critperlevel": 0, "attackdamage": 53, "attackdamageperlevel": 3, "attackspeedperlevel": 2, "attackspeed": 0.668 } }, "Akali": { "version": "12.2.1", "id": "Akali", "key": "84", "name": "Akali", "title": "the Rogue Assassin", "blurb": "Abandoning the Kinkou Order and her title of the Fist of Shadow, Akali now strikes alone, ready to be the deadly weapon her people need. 
Though she holds onto all she learned from her master Shen, she has pledged to defend Ionia from its enemies, one...", "info": { "attack": 5, "defense": 3, "magic": 8, "difficulty": 7 }, "image": { "full": "Akali.png", "sprite": "champion0.png", "group": "champion", "x": 96, "y": 0, "w": 48, "h": 48 }, "tags": [ "Assassin" ], "partype": "Energy", "stats": { "hp": 500, "hpperlevel": 105, "mp": 200, "mpperlevel": 0, "movespeed": 345, "armor": 23, "armorperlevel": 3.5, "spellblock": 37, "spellblockperlevel": 1.25, "attackrange": 125, "hpregen": 9, "hpregenperlevel": 0.9, "mpregen": 50, "mpregenperlevel": 0, "crit": 0, "critperlevel": 0, "attackdamage": 62, "attackdamageperlevel": 3.3, "attackspeedperlevel": 3.2, "attackspeed": 0.625 } }, "Akshan": { "version": "12.2.1", "id": "Akshan", "key": "166", "name": "Akshan", "title": "the Rogue Sentinel", "blurb": "Raising an eyebrow in the face of danger, Akshan fights evil with dashing charisma, righteous vengeance, and a conspicuous lack of shirts. He is highly skilled in the art of stealth combat, able to evade the eyes of his enemies and reappear when they...", "info": { "attack": 0, "defense": 0, "magic": 0, "difficulty": 0 }, "image": { "full": "Akshan.png", "sprite": "champion0.png", "group": "champion", "x": 144, "y": 0, "w": 48, "h": 48 }, "tags": [ "Marksman", "Assassin" ], "partype": "Mana", "stats": { "hp": 560, "hpperlevel": 90, "mp": 350, "mpperlevel": 40, "movespeed": 330, "armor": 26, "armorperlevel": 3, "spellblock": 30, "spellblockperlevel": 0.5, "attackrange": 500, "hpregen": 3.75, "hpregenperlevel": 0.65, "mpregen": 8.175, "mpregenperlevel": 0.7, "crit": 0, "critperlevel": 0, "attackdamage": 52, "attackdamageperlevel": 3.5, "attackspeedperlevel": 4, "attackspeed": 0.638 } } } } | You simply iterate over the values of the dictionary, check whether the value of the 'key' item is in your list and if that's the case, append the value of the 'name' item to your output list. Let jsonObj be your JSON object presented in your question. Then this code should work: listOfNumbers = [266, 166, 123, 283] names = [] for value in jsonObj['data'].values(): if value['key'] in listOfNumbers: names.append(value['name']) JSON objects in Python are just dictionaries. So, you better familiarize yourself with Python's dict. | 5 | 4 |
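One detail worth noting when adapting the answer: in the JSON shown, "key" is a string ("266") while listOfNumbers holds integers, so a cast is needed for the membership test to match. A compact comprehension version, assuming the parsed JSON is stored in a dict named payload:

```python
listOfNumbers = [266, 166, 123, 283]

# "key" is stored as a string in the JSON, so convert before comparing
names = [
    champ["name"]
    for champ in payload["data"].values()
    if int(champ["key"]) in listOfNumbers
]
print(names)  # ['Aatrox', 'Akshan'] for the sample data above
```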
70,780,369 | 2022-1-20 | https://stackoverflow.com/questions/70780369/valueerror-only-one-element-tensors-can-be-converted-to-python-scalars-when-con | I have the following: type of X is: <class 'list'> X: [tensor([[1.3373, 0.5666, 0.2337, ..., 0.4899, 0.1876, 0.5892], [0.0320, 0.0797, 0.0052, ..., 0.3405, 0.0000, 0.0390], [0.1305, 0.1281, 0.0021, ..., 0.6454, 0.1964, 0.0493], ..., [0.2635, 0.0237, 0.0000, ..., 0.6635, 0.1376, 0.2988], [0.0241, 0.5464, 0.1263, ..., 0.5766, 0.2352, 0.0140], [0.1740, 0.1664, 0.0057, ..., 0.6056, 0.1020, 1.1573]], device='cuda:0')] However, following the instructions in this video, I get this error: X_tensor = torch.FloatTensor(X) ValueError: only one element tensors can be converted to Python scalars I have the following conversion code: X_tensor = torch.FloatTensor(X) How can I fix this problem? $ python Python 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> torch.__version__ '1.10.1+cu113' X here is a list of torch.tensors. | Use torch.stack(): X = torch.stack(X) | 4 | 11 |
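For context, a tiny runnable sketch of the difference: calling torch.FloatTensor on a list of multi-element tensors raises the error from the question, while torch.stack concatenates the tensors along a new leading dimension (here the list holds a single 2-D tensor, mirroring the question):

```python
import torch

X = [torch.rand(4, 3)]     # a Python list containing one 2-D tensor

X_tensor = torch.stack(X)  # shape: (1, 4, 3) - new leading "list" dimension
print(X_tensor.shape)

# if the extra dimension is unwanted and the list has a single element:
X_single = torch.stack(X).squeeze(0)  # shape: (4, 3)
```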
70,779,984 | 2022-1-20 | https://stackoverflow.com/questions/70779984/python-tuple-is-not-defined-warning | I'm writing a function that returns a tuple and using type annotations. To do so, I've written: def foo(bar) -> Tuple[int, int]: pass This runs, but I've been getting a warning that says: "Tuple" is not defined Pylance report UndefinedVariable Given that I'm writing a number of functions that return a tuple type, I'd like to get rid of the warning. I'm assuming I just need to import the package Tuple refers to, but what is the right Python package for it? From my research, I'm inclined to think it's the typing package, but I'm not sure. | Depending on your exact python version there might be subtle differences here, but what's bound to work is from typing import Tuple def foo(bar) -> Tuple[int, int]: pass Alternatively I think since 3.9 or 3.10 you can just outright say tuple[int, int] without importing anything. | 13 | 19 |
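Both spellings side by side, as a quick reference (the built-in generic form needs Python 3.9 or newer):

```python
from typing import Tuple

def as_pair_legacy(x: int, y: int) -> Tuple[int, int]:  # works on older Python versions
    return x, y

def as_pair(x: int, y: int) -> tuple[int, int]:         # Python 3.9+ built-in generics
    return x, y
```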
70,773,695 | 2022-1-19 | https://stackoverflow.com/questions/70773695/how-are-python-closures-implemented | I am interested in how python implements closures? For the sake of example, consider this def closure_test(): x = 1 def closure(): nonlocal x x = 2 print(x) return closure closure_test()() Here, the function closure_test has a local x which the nested function closure captures. When I run the program, I get the following output 2 When I see the disassembly of the function closure_test, 2 0 LOAD_CONST 1 (1) 2 STORE_DEREF 0 (x) 3 4 LOAD_CLOSURE 0 (x) 6 BUILD_TUPLE 1 8 LOAD_CONST 2 (<code object closure at 0x7f14ac3b9500, file "<string>", line 3>) 10 LOAD_CONST 3 ('closure_test.<locals>.closure') 12 MAKE_FUNCTION 8 (closure) 14 STORE_FAST 0 (closure) 7 16 LOAD_FAST 0 (closure) 18 RETURN_VALUE Disassembly of <code object closure at 0x7f14ac3b9500, file "<string>", line 3>: 5 0 LOAD_CONST 1 (2) 2 STORE_DEREF 0 (x) 6 4 LOAD_GLOBAL 0 (print) 6 LOAD_DEREF 0 (x) 8 CALL_FUNCTION 1 10 POP_TOP 12 LOAD_CONST 0 (None) 14 RETURN_VALUE I see the instructions STORE_DEREF, LOAD_DEREF, MAKE_FUNCTION and LOAD_CLOSURE which I won't get if I write the whole program without functions and closures. I think those are the instructions which are needed to use closures. But how does Python manages this? How does it captures the variable off from the local variable table of the enclosing function? And after capturing the variable where does it live? How does the function get the access of the captured variable? I want a complete low level understanding of how it works. Thanks in advance. | Overview Python doesn't directly use variables the same way one might expect coming from a statically-typed language like C or Java, rather it uses names and tags instances of objects with them In your example, closure is simply an instance of a function with that name It's really nonlocal here which causes LOAD_CLOSURE and BUILD_TUPLE to be used as described in When is the existence of nonlocal variables checked? and further in How to define free-variable in python? and refers to x, not the inner function named literally closure 3 4 LOAD_CLOSURE 0 (x) About nonlocal For your case, nonlocal is asserting that x exists in the outer scope excluding globals at compile time, but is practically redundant because it's not used elsewhere docs Originally I'd written that this was redundant due to redeclaration, but that's not true - nonlocal prevents re-using the name, but x simply isn't shown anywhere else so the effect isn't obvious I've added a 3rd example with a very ugly generator to illustrate the effect Example of use with a global (note the SyntaxError is at compile time, not at runtime!) >>> x = 3 >>> def closure_test(): ... def closure(): ... nonlocal x ... print(x) ... return closure ... File "<stdin>", line 3 SyntaxError: no binding for nonlocal 'x' found >>> def closure_test(): ... def closure(): ... print(x) ... return closure ... >>> closure_test()() 3 Examples of SyntaxErrors related to invalid locals use >>> def closure_test(): ... def closure(): ... nonlocal x ... x = 2 ... print(x) ... return closure ... File "<stdin>", line 3 SyntaxError: no binding for nonlocal 'x' found >>> def closure_test(): ... x = 1 ... def closure(): ... x = 2 ... nonlocal x ... print(x) ... return closure ... 
File "<stdin>", line 5 SyntaxError: name 'x' is assigned to before nonlocal declaration Example which makes use of nonlocal to set the outer value (Note this is badly-behaved because a more normal approach wrapping yield with try:finally displays before closure is actually called) >>> def closure_test(): ... x = 1 ... print(f"x outer A: {x}") ... def closure(): ... nonlocal x ... x = 2 ... print(f"x inner: {x}") ... yield closure ... print(f"x outer B: {x}") ... >>> list(x() for x in closure_test()) x outer A: 1 x inner: 2 x outer B: 2 [None] Original Example without nonlocal (note absence of BUILD_TUPLE and LOAD_CLOSURE!) >>> def closure_test(): ... x = 1 ... def closure(): ... x = 2 ... print(x) ... return closure ... >>> >>> import dis >>> dis.dis(closure_test) 2 0 LOAD_CONST 1 (1) 2 STORE_FAST 0 (x) 3 4 LOAD_CONST 2 (<code object closure at 0x10d8132f0, file "<stdin>", line 3>) 6 LOAD_CONST 3 ('closure_test.<locals>.closure') 8 MAKE_FUNCTION 0 10 STORE_FAST 1 (closure) 6 12 LOAD_FAST 1 (closure) 14 RETURN_VALUE Disassembly of <code object closure at 0x10d8132f0, file "<stdin>", line 3>: 4 0 LOAD_CONST 1 (2) 2 STORE_FAST 0 (x) 5 4 LOAD_GLOBAL 0 (print) 6 LOAD_FAST 0 (x) 8 CALL_FUNCTION 1 10 POP_TOP 12 LOAD_CONST 0 (None) 14 RETURN_VALUE About the ByteCode and a Simple Comparison Reducing your example to remove all the names, it's simply >>> import dis >>> dis.dis(lambda: print(2)) 1 0 LOAD_GLOBAL 0 (print) 2 LOAD_CONST 1 (2) 4 CALL_FUNCTION 1 6 RETURN_VALUE The rest of the bytecode just moves the names around x for 1 and 2 closure and closure_test.<locals>.closure for inner function (located at some memory address) print literally the print function None literally the None singleton Specific DIS opcodes STORE_DEREF puts a value in slot i LOAD_DEREF retrieves a value from slot i MAKE_FUNCTION creates a new function on the stack and puts it in slot i LOAD_CLOSURE does just that, putting it on the stack at i You can see the constants, names, and free variables with dis.show_code() >>> dis.show_code(closure_test) Name: closure_test Filename: <stdin> Argument count: 0 Positional-only arguments: 0 Kw-only arguments: 0 Number of locals: 1 Stack size: 3 Flags: OPTIMIZED, NEWLOCALS Constants: 0: None 1: 1 2: <code object closure at 0x10db282f0, file "<stdin>", line 3> 3: 'closure_test.<locals>.closure' Variable names: 0: closure Cell variables: 0: x Digging at the closure itself >>> dis.show_code(closure_test()) # call outer Name: closure Filename: <stdin> Argument count: 0 Positional-only arguments: 0 Kw-only arguments: 0 Number of locals: 0 Stack size: 2 Flags: OPTIMIZED, NEWLOCALS, NESTED Constants: 0: None 1: 2 Names: 0: print Free variables: 0: x >>> dis.show_code(lambda: print(2)) Name: <lambda> Filename: <stdin> Argument count: 0 Positional-only arguments: 0 Kw-only arguments: 0 Number of locals: 0 Stack size: 2 Flags: OPTIMIZED, NEWLOCALS, NOFREE Constants: 0: None 1: 2 Names: 0: print Using Python 3.9.10 Other related questions Python nonlocal statement How does exec work with locals? | 7 | 6 |
70,776,558 | 2022-1-19 | https://stackoverflow.com/questions/70776558/pytube-exceptions-regexmatcherror-init-could-not-find-match-for-w-w | So my issue is I run this simple code to attempt to make a pytube stream object... from pytube import YouTube yt = YouTube('https://www.youtube.com/watch?v=aQNrG7ag2G4') stream = yt.streams.filter(file_extension='mp4') And end up with the error in the title. Full error: Traceback (most recent call last): File ".\test.py", line 4, in <module> stream = yt.streams.filter(file_extension='mp4') File "C:\Users\logan\AppData\Local\Programs\Python\Python38\lib\site-packages\pytube\__main__.py", line 292, in streams return StreamQuery(self.fmt_streams) File "C:\Users\logan\AppData\Local\Programs\Python\Python38\lib\site-packages\pytube\__main__.py", line 184, in fmt_streams extract.apply_signature(stream_manifest, self.vid_info, self.js) File "C:\Users\logan\AppData\Local\Programs\Python\Python38\lib\site-packages\pytube\extract.py", line 409, in apply_signature cipher = Cipher(js=js) File "C:\Users\logan\AppData\Local\Programs\Python\Python38\lib\site-packages\pytube\cipher.py", line 33, in __init__ raise RegexMatchError( pytube.exceptions.RegexMatchError: __init__: could not find match for ^\w+\W Extra data: python version: 3.8.10 pytube version: 11.0.2 | As juanchosaravia suggested on https://github.com/pytube/pytube/issues/1199, in order to solve the problem, you should go in the cipher.py file and replace the line 30, which is: var_regex = re.compile(r"^\w+\W") With that line: var_regex = re.compile(r"^\$*\w+\W") After that, it worked again. | 13 | 74 |
70,773,879 | 2022-1-19 | https://stackoverflow.com/questions/70773879/fastapi-starlette-redirectresponse-redirect-to-post-instead-get-method | I have encountered strange redirect behaviour after returning a RedirectResponse object events.py router = APIRouter() @router.post('/create', response_model=EventBase) async def event_create( request: Request, user_id: str = Depends(get_current_user), service: EventsService = Depends(), form: EventForm = Depends(EventForm.as_form) ): event = await service.post( ... ) redirect_url = request.url_for('get_event', **{'pk': event['id']}) return RedirectResponse(redirect_url) @router.get('/{pk}', response_model=EventSingle) async def get_event( request: Request, pk: int, service: EventsService = Depends() ): ....some logic.... return templates.TemplateResponse( 'event.html', context= { ... } ) routers.py api_router = APIRouter() ... api_router.include_router(events.router, prefix="/event") this code returns the result 127.0.0.1:37772 - "POST /event/22 HTTP/1.1" 405 Method Not Allowed OK, I see that for some reason a POST request is called instead of a GET request. I search for an explanation and find that the RedirectResponse object defaults to code 307 and calls POST link I follow the advice and add a status redirect_url = request.url_for('get_event', **{'pk': event['id']}, status_code=status.HTTP_302_FOUND) And get starlette.routing.NoMatchFound for the experiment, I'm changing @router.get('/{pk}', response_model=EventSingle) to @router.post('/{pk}', response_model=EventSingle) and the redirect completes successfully, but the post request doesn't suit me here. What am I doing wrong? UPD html form for running event/create logic base.html <form action="{{ url_for('event_create')}}" method="POST"> ... </form> base_view.py @router.get('/', response_class=HTMLResponse) async def main_page(request: Request, activity_service: ActivityService = Depends()): activity = await activity_service.get() return templates.TemplateResponse('base.html', context={'request': request, 'activities': activity}) | When you want to redirect to a GET after a POST, the best practice is to redirect with a 303 status code, so just update your code to: # ... return RedirectResponse(redirect_url, status_code=303) As you've noticed, redirecting with 307 keeps the HTTP method and body. Fully working example: from fastapi import FastAPI, APIRouter, Request from fastapi.responses import RedirectResponse, HTMLResponse router = APIRouter() @router.get('/form') def form(): return HTMLResponse(""" <html> <form action="/event/create" method="POST"> <button>Send request</button> </form> </html> """) @router.post('/create') async def event_create( request: Request ): event = {"id": 123} redirect_url = request.url_for('get_event', **{'pk': event['id']}) return RedirectResponse(redirect_url, status_code=303) @router.get('/{pk}') async def get_event( request: Request, pk: int, ): return f'<html>oi pk={pk}</html>' app = FastAPI(title='Test API') app.include_router(router, prefix="/event") To run, install pip install fastapi uvicorn and run with: uvicorn --reload --host 0.0.0.0 --port 3000 example:app Then, point your browser to: http://localhost:3000/event/form | 12 | 17 |
70,773,526 | 2022-1-19 | https://stackoverflow.com/questions/70773526/why-do-we-need-a-dict-update-method-in-python-instead-of-just-assigning-the-va | I have been working with dictionaries that I have to modify within different parts of my code. I am trying to make sure if I do not miss anything about there is no need for dict_update() in any scenario. So the reasons to use update() method is either to add a new key-value pair to current dictionary, or update the value of your existing ones. But wait!? Aren't they already possible by just doing: >>>test_dict = {'1':11,'2':1445} >>>test_dict['1'] = 645 >>>test_dict {'1': 645, '2': 1445} >>>test_dict[5]=123 >>>test_dict {'1': 645, '2': 1445, 5: 123} In what case it would be crucial to use it ? I am curious. Many thanks | 1. You can update many keys on the same statement. my_dict.update(other_dict) In this case you don't have to know how many keys are in the other_dict. You'll just be sure that all of them will be updated on my_dict. 2. You can use any iterable of key/value pairs with dict.update As per the documentation you can use another dictionary, kwargs, list of tuples, or even generators that yield tuples of len 2. 3. You can use the update method as an argument for functions that expect a function argument. Example: def update_value(key, value, update_function): update_function([(key, value)]) update_value("k", 3, update_on_the_db) # suppose you have a update_on_the_db function update_value("k", 3, my_dict.update) # this will update on the dict | 22 | 37 |
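A short sketch of the iterable and keyword forms mentioned in point 2, which plain item assignment cannot express in a single statement:

```python
settings = {'host': 'localhost', 'port': 8080}

# update from another dict, from key/value pairs, and from keyword arguments
settings.update({'port': 9090, 'debug': True})
settings.update([('timeout', 30), ('retries', 3)])
settings.update(user='admin')

print(settings)
# {'host': 'localhost', 'port': 9090, 'debug': True, 'timeout': 30, 'retries': 3, 'user': 'admin'}
```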
70,773,518 | 2022-1-19 | https://stackoverflow.com/questions/70773518/fail-to-install-module-pil-error-could-not-find-a-version-that-satisfies-the | When trying to install the module needed for from PIL import Image, this error occurs: ERROR: Could not find a version that satisfies the requirement PIL (from versions: none) ERROR: No matching distribution found for PIL | The solution: pip install Pillow After that, make sure it works: from PIL import Image | 5 | 7 |
70,773,090 | 2022-1-19 | https://stackoverflow.com/questions/70773090/printstring1-or-string2-in-string-does-not-give-boolean-result | why does print("Lorem" and "aliqua" in string ) Gives True. A Boolean, But print("Lorem" or "aliqua" in string ) Gives 'Lorem'. A String string = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua" print("Lorem" and "aliqua" in string ) >>> True print("Lorem" or "aliqua" in string ) >>> Lorem | Try: print("Lorem" in string and "aliqua" in string ) And print("Lorem" in string or "aliqua" in string ) Explanation: The condition in string will always be true as it checks string is non empty. >>> if "harsha": ... print("hi") ... hi >>> if "": ... print("hi") ... <<No output>> | 4 | 5 |
70,768,547 | 2022-1-19 | https://stackoverflow.com/questions/70768547/how-to-pass-date-and-id-through-url-in-django | I am trying to pass date and id through url, but getting an error, I have passed just id before and I usually do it like this. path('user_payment_menu/<int:pk>/',user_payment_menu, name='user_payment_menu'), but now I want date to pass after int:pk/ but when I add date after a slash I am getting an error. | Probably the easiest way to define a date is with a custom path converter. You can implement this with: # app_name/converters.py class DateConverter: regex = '\d{4}-\d{1,2}-\d{1,2}' format = '%Y-%m-%d' def to_python(self, value): return datetime.strptime(value, self.format).date() def to_url(self, value): return value.strftime(self.format) Then you can register the format and use the <date:…> path converter: # app_name/urls.py from django.urls import path, register_converter from app_name.converters import DateConverter from app_name.views import user_payment_menu register_converter(DateConverter, 'date') urlpatterns = [ path('user_payment_menu/<int:pk>/<date:mydate>/',user_payment_menu, name='user_payment_menu'), then in the view you define an extra attribute that will contain the date as a date object: # app_name/views.py def user_payment_menu(request, pk, mydate): # … You can use a date object when generating a URL, for example with: {% url 'user_payment_menu' pk=somepk mydate=somedate %} | 4 | 13 |
70,766,215 | 2022-1-19 | https://stackoverflow.com/questions/70766215/problem-with-memory-allocation-in-julia-code | I used a function in Python/Numpy to solve a problem in combinatorial game theory. import numpy as np from time import time def problem(c): start = time() N = np.array([0, 0]) U = np.arange(c) for _ in U: bits = np.bitwise_xor(N[:-1], N[-2::-1]) N = np.append(N, np.setdiff1d(U, bits).min()) return len(*np.where(N==0)), time()-start problem(10000) Then I wrote it in Julia because I thought it'd be faster due to Julia using just-in-time compilation. function problem(c) N = [0] U = Vector(0:c) for _ in U elems = N[1:length(N)-1] bits = elems .⊻ reverse(elems) push!(N, minimum(setdiff(U, bits))) end return sum(N .== 0) end @time problem(10000) But the second version was much slower. For c = 10000, the Python version takes 2.5 sec. on an Core i5 processor and the Julia version takes 4.5 sec. Since Numpy operations are implemented in C, I'm wondering if Python is indeed faster or if I'm writing a function with wasted time complexity. The implementation in Julia allocates a lot of memory. How to reduce the number of allocations to improve its performance? | The original code can be re-written in the following way: function problem2(c) N = zeros(Int, c+2) notseen = falses(c+1) for lN in 1:c+1 notseen .= true @inbounds for i in 1:lN-1 b = N[i] ⊻ N[lN-i] b <= c && (notseen[b+1] = false) end idx = findfirst(notseen) isnothing(idx) || (N[lN+1] = idx-1) end return count(==(0), N) end First check if the functions produce the same results: julia> problem(10000), problem2(10000) (1475, 1475) (I have also checked that the generated N vector is identical) Now let us benchmark both functions: julia> using BenchmarkTools julia> @btime problem(10000) 4.938 s (163884 allocations: 3.25 GiB) 1475 julia> @btime problem2(10000) 76.275 ms (4 allocations: 79.59 KiB) 1475 So it turns out to be over 60x faster. What I do to improve the performance is avoiding allocations. In Julia it is easy and efficient. If any part of the code is not clear please comment. Note that I concentrated on showing how to improve the performance of Julia code (and not trying to just replicate the Python code, since - as it was commented under the original post - doing language performance comparisons is very tricky). I think it is better to concentrate in this discussion on how to make Julia code fast. EDIT Indeed changing to Vector{Bool} and removing the condition on b and c relation (which mathematically holds for these values of c) gives a better speed: julia> function problem3(c) N = zeros(Int, c+2) notseen = Vector{Bool}(undef, c+1) for lN in 1:c+1 notseen .= true @inbounds for i in 1:lN-1 b = N[i] ⊻ N[lN-i] notseen[b+1] = false end idx = findfirst(notseen) isnothing(idx) || (N[lN+1] = idx-1) end return count(==(0), N) end problem3 (generic function with 1 method) julia> @btime problem3(10000) 20.714 ms (3 allocations: 88.17 KiB) 1475 | 14 | 17 |
70,690,454 | 2022-1-13 | https://stackoverflow.com/questions/70690454/how-to-redirect-the-user-back-to-the-home-page-using-fastapi-after-submitting-a | I have a page with a table of students. I added a button that allows you to add a new row to the table. To do this, I redirect the user to a page with input forms. The problem is that after submitting the completed forms, the user goes to a new empty page. How to transfer data in completed forms and redirect the user back to the table? I just started learning web programming, so I decided to first make an implementation without using AJAX technologies. Code: from fastapi import FastAPI, Form from fastapi.responses import Response import json from jinja2 import Template app = FastAPI() # The page with the table @app.get('/') def index(): students = get_students() # Get a list of students with open('templates/students.html', 'r', encoding='utf-8') as file: html = file.read() template = Template(html) # Creating a template with a table # Loading a template return Response(template.render(students=students), media_type='text/html') # Page with forms for adding a new entry @app.get('/add_student') def add_student_page(): with open('templates/add_student.html', 'r', encoding='utf-8') as file: html = file.read() # Loading a page return Response(html, media_type='text/html') # Processing forms and adding a new entry @app.post('/add') def add(name: str = Form(...), surname: str = Form(...), _class: str = Form(...)): add_student(name, surname, _class) # Adding student data # ??? | To start with, in cases where you return Jinja2 templates, you should return a TemplateResponse, as shown in the documentation. To redirect the user to a specific page, you could use RedirectResponse. Since you do that through a POST (and not GET) method, as shown in your example, a 405 (Method Not Allowed) error would be thrown. However, as explained here, you could change the response status code to status_code=status.HTTP_303_SEE_OTHER, and the issue would then be resolved (please have a look at this answer and this answer for more details). A working example is given below. In case you needed to pass additional path and/or query parameters to your endpoint, please have a look at this and this answer as well. If you would like to achieve the same result, using Fetch API instead of HTML <form>, please have a look at this answer. 
Working Example app.py from fastapi import FastAPI, Request, Form, status from fastapi.templating import Jinja2Templates from fastapi.responses import RedirectResponse app = FastAPI() templates = Jinja2Templates(directory="templates") # replace with your own get_students() method def get_students(): return ["a", "b", "c"] @app.post('/add') async def add(request: Request, name: str = Form(...), surname: str = Form(...), _class: str = Form(...)): # add_student(name, surname, _class) # Adding student data redirect_url = request.url_for('index') return RedirectResponse(redirect_url, status_code=status.HTTP_303_SEE_OTHER) @app.get('/add_student') async def add_student_page(request: Request): return templates.TemplateResponse("add_student.html", {"request": request}) @app.get('/') async def index(request: Request): students = get_students() # Get a list of students return templates.TemplateResponse("index.html", {"request": request, "students": students}) templates/index.html <!DOCTYPE html> <html> <body> <h1>Students: {{ students }}</h1> </body> </html> templates/add_student.html <!DOCTYPE html> <html> <body> <form action="http://127.0.0.1:8000/add" method="POST"> name : <input type="text" name="name"><br> surname : <input type="text" name="surname"><br> class : <input type="text" name="_class"><br> <input type="submit" value="submit"> </form> </body> </html> | 8 | 7 |
70,744,477 | 2022-1-17 | https://stackoverflow.com/questions/70744477/prometheus-how-to-expose-metrics-in-multiprocess-app-with-start-http-server | How do I expose metrics in a multiprocess app using start_http_server? I found many examples with gunicorn on the internet, but I want to use start_http_server. What should I do with the code below to make it work properly? from multiprocessing import Process import time, os from prometheus_client import start_http_server, multiprocess, CollectorRegistry, Counter MY_COUNTER = Counter('my_counter', 'Description of my counter') os.environ["PROMETHEUS_MULTIPROC_DIR"] = "tmp" def f(): print("+1") MY_COUNTER.inc() if __name__ == '__main__': start_http_server(8000) p = Process(target=f, args=()) a = p.start() p2 = Process(target=f, args=()) p2.start() time.sleep(1) print("collect") registry = CollectorRegistry() data = multiprocess.MultiProcessCollector(registry) while True: time.sleep(1) | I was figuring out the same thing, and the solution was as simple as you would imagine. Here is your example code, updated to work: from multiprocessing import Process import shutil import time, os from prometheus_client import start_http_server, multiprocess, CollectorRegistry, Counter COUNTER1 = Counter('counter1', 'Incremented by the first child process') COUNTER2 = Counter('counter2', 'Incremented by the second child process') COUNTER3 = Counter('counter3', 'Incremented by both child processes') def f1(): while True: time.sleep(1) print("Child process 1") COUNTER1.inc() COUNTER3.inc() def f2(): while True: time.sleep(1) print("Child process 2") COUNTER2.inc() COUNTER3.inc() if __name__ == '__main__': # ensure variable exists, and ensure defined folder is clean on start prome_stats = os.environ["PROMETHEUS_MULTIPROC_DIR"] if os.path.exists(prome_stats): shutil.rmtree(prome_stats) os.mkdir(prome_stats) # pass the registry to server registry = CollectorRegistry() multiprocess.MultiProcessCollector(registry) start_http_server(8000, registry=registry) p = Process(target=f1, args=()) a = p.start() p2 = Process(target=f2, args=()) p2.start() print("collect") while True: time.sleep(1) localhost:8000/metrics # HELP counter1_total Incremented by the first child process # TYPE counter1_total counter counter1_total 9.0 # HELP counter2_total Incremented by the second child process # TYPE counter2_total counter counter2_total 9.0 # HELP counter3_total Incremented by both child processes # TYPE counter3_total counter counter3_total 18.0 | 6 | 14 |
70,682,613 | 2022-1-12 | https://stackoverflow.com/questions/70682613/update-env-variable-on-notebook-in-vscode | I’m working on a python project with a notebook and .env file on VsCode. I have problem when trying to refresh environment variables in a notebook (I found a way but it's super tricky). My project: .env file with: MY_VAR="HELLO_ALICE" test.ipynb file with one cell: from os import environ print('MY_VAR = ', environ.get('MY_VAR')) What I want: set the env variable and run my notebook (see HELLO_ALICE) edit .env file: change "HELLO_ALICE" to "HELLO_BOB" set the env variable and run my notebook (see HELLO_BOB) What do not work: open my project in vsCode, open terminal in terminal run: >> set -a; source .env; set +a; open notebook, run cell --> I see HELLO_ALICE edit .env (change HELLO_ALICE TO HELLO_BOB) restart notebook (either click on restart or close tab and reopen it) in terminal run: >> set -a; source .env; set +a; (same as step 2) open notebook, run cell --> I see HELLO_ALICE So I see twice HELLO_ALICE instead of HELLO_ALICE then HELLO_BOB... But if it was on .py file instead of notebook, it would have worked (I would see HELLO_ALICE first then HELLO_BOB) To make it work: Replace step 5. by: Close VsCode and reopen it Why it is a problem: It is super tricky. I'm sure that in 3 month I will have forgotten this problem with the quick fix and I will end up loosing again half a day to figure out what is the problem & solution. So my question is: Does anyone know why it works like this and how to avoid closing and reopening VsCode to refresh env variable stored in a .env file on a notebook ? (Closing and reopening VsCode should not change behavior of code) Notes: VsCode version = 1.63.2 I tired to use dotenv module and load env variable in my notebook (does not work) question: How to set env variable in Jupyter notebook works only if you define your env variables inside notebook this behavior happen only on env variables. For instance if instead a .env file I use a env.py file where i define my env constants as python variables, restarting the notebook will refresh the constants. | The terminal you open in VSC is not the same terminal ipython kernel is running. The kernel is already running in an environment that is not affected by you changing variables in another terminal. You need to set the variables in the correct environment. You can do that with dotenv, but remember to use override=True. This seems to work: $ pip3 install python-dotenv import dotenv from os import environ env_file = '../.env' f = open(env_file,'w') f.write('MY_VAR="HELLO_ALICE"') f.close() dotenv.load_dotenv(env_file, override=True) print('MY_VAR = ', environ.get('MY_VAR')) f = open(env_file,'w') f.write('MY_VAR="HELLO_BOB"') f.close() dotenv.load_dotenv(env_file, override=True) print('MY_VAR = ', environ.get('MY_VAR')) MY_VAR = HELLO_ALICE MY_VAR = HELLO_BOB | 9 | 11 |
70,711,245 | 2022-1-14 | https://stackoverflow.com/questions/70711245/changing-schema-name-in-openapi-docs-generated-by-fastapi | I'm using FastAPI to create the backend for my project. I have a method that allows uploading a file. I implemented it as follows: from fastapi import APIRouter, UploadFile, File from app.models.schemas.files import FileInResponse router = APIRouter() @router.post("", name="files:create-file", response_model=FileInResponse) async def create(file: UploadFile = File(...)) -> FileInResponse: pass As you can see, I use a dedicated pydantic model for the method result, FileInResponse: from pathlib import Path from pydantic import BaseModel class FileInResponse(BaseModel): path: Path And I follow this naming pattern for models (naming models as <Entity>InCreate, <Entity>InResponse, and so on) throughout the API. However, I couldn't create a pydantic model with a field of the type File, so I had to declare it directly in the route definition (i.e. without a model containing it). As a result, I have this long auto-generated name Body_files_create_file_api_files_post in the OpenAPI docs: Is there a way to change the schema name? | In case someone is interested in renaming the auto-generated schema names in FastAPI's openapi.json file, you should add an operation_id param to your route. This will allow FastAPI to use this id to generate a clearer attribute name for the generated schema. Before: @router.post( "/{opportunity_id}/files", status_code=status.HTTP_201_CREATED, ) async def attach_opportunity_file( db: Database, uploaded_file: UploadFile = File(title="File to upload"), ) -> OpportunityFile: pass After: @router.post( "/{opportunity_id}/files", status_code=status.HTTP_201_CREATED, operation_id="attach_opportunity_file", # New operation_id added ) async def attach_opportunity_file( db: Database, uploaded_file: UploadFile = File(title="File to upload"), ) -> OpportunityFile: pass | 5 | 2 |
70,714,622 | 2022-1-14 | https://stackoverflow.com/questions/70714622/why-is-colab-still-running-python-3-7 | I saw in this tweet that Google Colab moved to Python 3.7 in February 2021. As of today however (January 2022), Python 3.10 is out, but Colab still runs Python 3.7. My (deliberately) naive take is that this is quite a significant lag. Why are they not at least on Python 3.8 or even 3.9? Is it simply to make sure that some compatibility criteria are met? | The only reason is that they want to have the most compatible version of Python worldwide. According to the Python Readiness report (Python 3.7 Readiness), about 80.6% of the most used packages support version 3.7 so far. By comparison, this coverage is 78.3% for version 3.8, 70.6% for version 3.9, and 49.7% for version 3.10 (as of March 29, 2022). They would still be using Python 3.6 today if it had not reached its EOL. Luckily for us, python.org decided to retire versions below 3.7. 😊 On the other hand, you can update the Python version in your Colab by running some Linux commands in the notebook. However, whenever you start a new notebook, Google ignores the updates and returns to the original version. Google should provide options for selecting the Python version. Because of this, I do not use Colab in most cases, especially when teaching Python to my students. Update (January 12, 2023): Now, Google Colaboratory supports Python 3.8.16 (Python 3.8 Readiness). After a long time, we see some improvement. But it's still outdated because the current version is 3.11.1. The Python Readiness report says 80.8% of the most used packages support Python 3.8, and 30.6% support 3.11. But we know this information comes from the metadata on PyPI. In practice, the actual support is broader than what the package maintainers state in their repositories. Many packages support 3.11, but they still mention the lower version of Python. The reason is that the maintainers haven't had a chance to check and update their packaging yet. Update (May 07, 2024): Fortunately, we have seen more activity from the Google admins recently 😉. It has been around three months since Google's admins updated Python to version 3.10 (Python 3.10 Readiness), and the current version that Colab supports by default is 3.10.12, which is still old (the current stable version of Python is 3.12.3). The funny part is that they didn't even update it to 3.10.14, the current release of Python 3.10. "Half a loaf is better than none." | 4 | 7 |
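A quick way to confirm which interpreter a given Colab runtime is actually using (not part of the original answer, just a standard check) is to inspect the version from a cell:

    import sys
    import platform

    print(sys.version)                 # full interpreter version string of the running kernel
    print(platform.python_version())   # short form, e.g. "3.7.12"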
70,693,647 | 2022-1-13 | https://stackoverflow.com/questions/70693647/how-can-i-decode-a-json-string-into-a-pydantic-model-with-a-dataframe-field | I am using MongoDB to store the results of a script into a database. When I want to reload the data back into python, I need to decode the JSON (or BSON) string into a pydantic basemodel. With a pydantic model with JSON compatible types, I can just do: base_model = BaseModelClass.parse_raw(string) But the default json.loads decoder doesn't know how to deal with a DataFrame. I can overwrite the .parse_raw function into something like: from pydantic import BaseModel import pandas as pd class BaseModelClass(BaseModel): df: pd.DataFrame class Config: arbitrary_types_allowed = True json_encoders = { pd.DataFrame: lambda df: df.to_json() } @classmethod def parse_raw(cls, data): data = json.loads(data) data['df'] = pd.read_json(data['df']) return cls(**data) But ideally I would want to automatically decode fields of type pd.DataFrame rather than manually change the parse_raw function every time. Is there any way of doing something like: class Config: arbitrary_types_allowed = True json_encoders = { pd.DataFrame: lambda df: df.to_json() } json_decoders = { pd.DataFrame: lambda df: pd.read_json(data['df']) } To make the detection of any field which should be a data frame, be converted to one, without having to modify the parse_raw() script? | Pydantic V2: You can define a custom data type and specify a serializer which will automatically handle conversions: from typing import Annotated, Any from pydantic import BaseModel, GetCoreSchemaHandler import pandas as pd from pydantic_core import CoreSchema, core_schema class myDataFrame(pd.DataFrame): @classmethod def __get_pydantic_core_schema__( cls, source_type: Any, handler: GetCoreSchemaHandler ) -> CoreSchema: validate = core_schema.no_info_plain_validator_function(cls.try_parse_to_df) return core_schema.json_or_python_schema( json_schema=validate, python_schema=validate, serialization=core_schema.plain_serializer_function_ser_schema( lambda df: df.to_json() ), ) @classmethod def try_parse_to_df(cls, value: Any): if isinstance(value, str): return pd.read_json(value) return value # Create a model with your custom type class BaseModelClass(BaseModel): df: myDataFrame # Create your model sample_df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], columns=["A", "B"]) my_model = BaseModelClass(df=sample_df) # Should also be able to parse from json my_model = BaseModelClass(df=sample_df.to_json()) # Even more dramatically my_model_2 = BaseModelClass.model_validate_json(my_model.model_dump_json()) | 8 | 2 |
70,759,514 | 2022-1-18 | https://stackoverflow.com/questions/70759514/how-to-show-geopandas-interactive-map-with-explore | I made a geopandas dataframe and I want to use geopandas_dataframe.explore() to create an interactive map. Here is my code. First I create the geopandas dataframe, I check the dtypes and I try to map the dataframe with gdf.explore(). Unfortunately, my code just finishes without errors and no map is shown. code: geometry = [Point(xy) for xy in zip(df[1], df[0])] gdf = geopandas.GeoDataFrame(df, geometry=geometry) print(gdf.head()) print(gdf.dtypes) gdf.explore() output: 0 1 geometry 0 51.858306 5.778404 POINT (5.77840 51.85831) 1 51.858322 5.778410 POINT (5.77841 51.85832) 2 51.858338 5.778416 POINT (5.77842 51.85834) 3 51.858354 5.778422 POINT (5.77842 51.85835) 4 51.858370 5.778429 POINT (5.77843 51.85837) 0 float64 1 float64 geometry geometry dtype: object Process finished with exit code 0 Why don't I get a map? I already tried gdf.show() but that doesn't exist. What do I need to do to show the geopandas map? | What IDE are you using? In Jupyter Notebook your code (slightly modified) works for me. However, when I run it in PyCharm I get, "Process finished with exit code 0" with no plot. import geopandas as gpd import pandas as pd from shapely.geometry import Point data_dict = {'x': {0: -110.1, 1: -110.2, 2: -110.3, 3: -110.4, 4: -110.5}, 'y': {0: 40.1, 1: 40.2, 2: 40.3, 3: 40.4, 4: 40.5}} df = pd.DataFrame(data_dict) geometry = [Point(xy) for xy in zip(df['x'], df['y'])] gdf = gpd.GeoDataFrame(df, geometry=geometry, crs=4326) print(gdf.head()) print(gdf.dtypes) gdf.explore() Edit: Looks like you can save your folium figure to a html. This worked for me from PyCharm. m = gdf.explore() outfp = r"<your dir path>\base_map.html" m.save(outfp) | 6 | 7 |
70,694,787 | 2022-1-13 | https://stackoverflow.com/questions/70694787/fastapi-fastapi-users-with-database-adapter-for-sqlmodel-users-table-is-not-crea | I was trying to use fastapi users package to quickly Add a registration and authentication system to my FastAPI project which uses the PostgreSQL database. I am using asyncio to be able to create asynchronous functions. In the beginning, I used only sqlAlchemy and I have tried their example here. And I added those line of codes to my app/app.py to create the database at the starting of the server. and everything worked like a charm. the table users was created on my database. @app.on_event("startup") async def on_startup(): await create_db_and_tables() Since I am using SQLModel I added FastAPI Users - Database adapter for SQLModel to my virtual en packages. And I added those lines to fastapi_users/db/__init__.py to be able to use the SQL model database. try: from fastapi_users_db_sqlmodel import ( # noqa: F401 SQLModelBaseOAuthAccount, SQLModelBaseUserDB, SQLModelUserDatabase, ) except ImportError: # pragma: no cover pass I have also modified app/users.py, to use SQLModelUserDatabase instead of sqlAchemy one. async def get_user_manager(user_db: SQLModelUserDatabase = Depends(get_user_db)): yield UserManager(user_db) and the app/dp.py to use SQLModelUserDatabase, SQLModelBaseUserDB, here is the full code of app/db.py import os from typing import AsyncGenerator from fastapi import Depends from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine from sqlalchemy.orm import sessionmaker from fastapi_users.db import SQLModelUserDatabase, SQLModelBaseUserDB from sqlmodel import SQLModel from app.models import UserDB DATABASE_URL = os.environ.get("DATABASE_URL") engine = create_async_engine(DATABASE_URL) async_session_maker = sessionmaker( engine, class_=AsyncSession, expire_on_commit=False) async def create_db_and_tables(): async with engine.begin() as conn: await conn.run_sync(SQLModel.metadata.create_all) async def get_async_session() -> AsyncSession: async_session = sessionmaker( engine, class_=AsyncSession, expire_on_commit=False ) async with async_session() as session: yield session async def get_user_db(session: AsyncSession = Depends(get_async_session)): yield SQLModelUserDatabase(UserDB, session, SQLModelBaseUserDB) Once I run the code, the table is not created at all. I wonder what could be the issue. I could not understand. Any idea? | By the time I posted this question that was the answer I received from one of the maintainer of fastapi-users that made me switch to sqlAlchemy that time, actually I do not know if they officially released sqlModel DB adapter or not My guess is that you didn't change the UserDB model so that it inherits from the SQLModelBaseUserDB one. It's necessary in order to let SQLModel detect all your models and create them. You can have an idea of what it should look like in fastapi-users-db-sqlmodel tests: https://github.com/fastapi-users/fastapi-users-db-sqlmodel/blob/3a46b80399f129aa07a834a1b40bf49d08c37be1/tests/conftest.py#L25-L27 Bear in mind though that we didn't officially release this DB adapter; as they are some problems with SQLModel regarding UUID (tiangolo/sqlmodel#25). So you'll probably run into issues. and here is the GitHub link of the issue: https://github.com/fastapi-users/fastapi-users/discussions/861 | 7 | 2 |
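The accepted answer describes the fix only in prose. A minimal sketch of what the maintainer is pointing at, with UserDB defined as a SQLModel table model, could look like the following (an assumption based on the imports shown in the question and the linked conftest, not code from the original post):

    # app/models.py (hypothetical sketch): UserDB must itself be a SQLModel
    # *table* model; otherwise SQLModel.metadata.create_all() has no user
    # table registered to create during the startup event.
    from fastapi_users_db_sqlmodel import SQLModelBaseUserDB

    class UserDB(SQLModelBaseUserDB, table=True):
        pass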
70,727,291 | 2022-1-16 | https://stackoverflow.com/questions/70727291/how-do-i-know-whether-a-sklearn-scaler-is-already-fitted-or-not | For example, ss is an sklearn.preprocessing.StandardScaler object. If ss is fitted already, I want to use it to transform my data. If ss is not fitted yet, I want to use my data to fit it and transform my data. Is there a way to know whether ss is already fitted or not? | Sklearn implements the check_is_fitted function to check if any generic estimator is fitted, which works with StandardScaler: from sklearn.preprocessing import StandardScaler from sklearn.utils.validation import check_is_fitted ss = StandardScaler() check_is_fitted(ss) # Raises error ss.fit([[1,2,3]]) check_is_fitted(ss) # No error | 7 | 4 |
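To wire this back into the original question's fit-or-transform logic, one possible pattern (my own sketch, not part of the accepted answer) is to catch the NotFittedError that check_is_fitted raises:

    from sklearn.exceptions import NotFittedError
    from sklearn.utils.validation import check_is_fitted

    def scale(ss, X):
        """Transform X with ss, fitting it first only if it is not fitted yet."""
        try:
            check_is_fitted(ss)            # raises NotFittedError if ss was never fitted
        except NotFittedError:
            return ss.fit_transform(X)     # fit on this data, then transform
        return ss.transform(X)             # reuse the existing fit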
70,754,841 | 2022-1-18 | https://stackoverflow.com/questions/70754841/how-to-adjust-the-color-bar-size-in-geopandas | I've created this map using geopandas, but I can't make the color bar have the same size as the figure. ax = covid_death_per_millon_geo.plot(column = 'total_deaths_per_million', legend = True, cmap = 'RdYlGn_r', figsize=(20,15)) ax.set_title('Covid deaths per Million', size = 20) ax.set_axis_off() https://i.sstatic.net/a26oJ.png | The colorbars on GeoPandas plots are Matplotlib colorbar objects, so it's worth checking those docs. Add a legend_kwds option and define the shrink value to change the colorbar's size (this example will shrink it to 50% of the default): ax = covid_death_per_millon_geo.plot( column="total_deaths_per_million", legend=True, legend_kwds={ "shrink":.5 }, cmap="RdYlGn_r", figsize=(20, 15) ) Alternatively, change the colorbar's location to put it below the map. You may want to change both the location and the size: ax = covid_death_per_millon_geo.plot( column="total_deaths_per_million", legend=True, legend_kwds={ "location":"bottom", "shrink":.5 }, cmap="RdYlGn_r", figsize=(20, 15) ) | 5 | 4 |
70,710,874 | 2022-1-14 | https://stackoverflow.com/questions/70710874/how-to-send-base64-image-using-python-requests-and-fastapi | I am trying to implement a code for image style transfer based on FastAPI. I found it effective to convert the byte of the image into base64 and transmit it. So, I designed my client codeto encode the image into a base64 string and send it to the server, which received it succesfully. However, I face some difficulties in restoring the image bytes to ndarray. I get the following this errors: image_array = np.frombuffer(base64.b64decode(image_byte)).reshape(image_shape) ValueError: cannot reshape array of size 524288 into shape (512,512,4) This is my client code : import base64 import requests import numpy as np import json from matplotlib.pyplot import imread from skimage.transform import resize if __name__ == '__main__': path_to_img = "my image path" image = imread(path_to_img) image = resize(image, (512, 512)) image_byte = base64.b64encode(image.tobytes()) data = {"shape": image.shape, "image": image_byte.decode()} response = requests.get('http://127.0.0.1:8000/myapp/v1/filter/a', data=json.dumps(data)) and this is my server code: import json import base64 import uvicorn import model_loader import numpy as np from fastapi import FastAPI from typing import Optional app = FastAPI() @app.get("/") def read_root(): return {"Hello": "World"} @app.get("/myapp/v1/filter/a") async def style_transfer(data: dict): image_byte = data.get('image').encode() image_shape = tuple(data.get('shape')) image_array = np.frombuffer(base64.b64decode(image_byte)).reshape(image_shape) if __name__ == '__main__': uvicorn.run(app, port='8000', host="127.0.0.1") | Option 1 As previously mentioned here, as well as here and here, one should use UploadFile, in order to upload files from client apps (for async read/write have a look at this answer). For example: server side: @app.post("/upload") def upload(file: UploadFile = File(...)): try: contents = file.file.read() with open(file.filename, 'wb') as f: f.write(contents) except Exception: return {"message": "There was an error uploading the file"} finally: file.file.close() return {"message": f"Successfuly uploaded {file.filename}"} client side: import requests url = 'http://127.0.0.1:8000/upload' file = {'file': open('images/1.png', 'rb')} resp = requests.post(url=url, files=file) print(resp.json()) Option 2 If, however, you still need to send a base64 encoded image, you can do it as previously described here (Option 2). On client side, you can encode the image to base64 and send it using a POST request as follows: client side: import base64 import requests url = 'http://127.0.0.1:8000/upload' with open("photo.png", "rb") as image_file: encoded_string = base64.b64encode(image_file.read()) payload ={"filename": "photo.png", "filedata": encoded_string} resp = requests.post(url=url, data=payload) On server side you can receive the image using a Form field, and decode the image as follows: server side: @app.post("/upload") def upload(filename: str = Form(...), filedata: str = Form(...)): image_as_bytes = str.encode(filedata) # convert string to bytes img_recovered = base64.b64decode(image_as_bytes) # decode base64string try: with open("uploaded_" + filename, "wb") as f: f.write(img_recovered) except Exception: return {"message": "There was an error uploading the file"} return {"message": f"Successfuly uploaded {filename}"} | 5 | 6 |
70,753,768 | 2022-1-18 | https://stackoverflow.com/questions/70753768/jupyter-notebook-access-to-the-file-was-denied | I'm trying to run a Jupyter notebook on Ubuntu 21.10. I've installed python, jupyter notebook, and all the various prerequisites. I added export PATH=$PATH:~/.local/bin to my bashrc so that the command jupyter notebook would be operational from the terminal. When I call jupyter notebook from the terminal, I get the following error message from my browser: Access to the file was denied. The file at /home/username/.local/share/jupyter/runtime/nbserver-260094-open.html is not readable. It may have been removed, moved, or file permissions may be preventing access. I'm using the latest version of FireFox. I've read a number of guides on this and it seems to be a permissions error, but none of the guides that I've used have resolved the issue. Using sudo does not help, in fact it causes Exception: Jupyter command "jupyter-notebook" not found. to be thrown. That being said, I am still able to access the notebook server. If I go to the terminal and instead click on the localhost:8888 or IP address of the notebook server then it takes me to the notebook and everything runs without issue. I would like to solve this so that when I run jupyter notebook I'm taken to the server and don't need to go back to the terminal window and click the IP address. It's inconvenient and can slow me down if I'm running multiple notebooks at once. Any help on this issue would be greatly appreciated! | I had the same problem. Ubuntu 20.04.3 LTS Chromium Version 96.0.4664.110 This was the solution in my case: Create the configuration file with this command: jupyter notebook --generate-config Edit the configuration file ~/.jupyter/jupyter_notebook_config.py and set: c.NotebookApp.use_redirect_file = False Make sure that this configuration parameter starts at the beginning of the line. If you leave one space at the beginning of the line, you will get the message that access to the file was denied. Otherwise you can clean and reinstall JupyterLab jupyter lab clean --all pip3 install jupyterlab --force-reinstall | 25 | 44 |
70,743,246 | 2022-1-17 | https://stackoverflow.com/questions/70743246/django-db-with-ssh-tunnel | Is there a python native way to connect django to a database through an ssh tunnel? I have seen people using ssh port forwarding in the host machine but I would prefer a solution that can be easily containerized. | It is pretty seamless. Requirements: The sshtunnel package https://github.com/pahaz/sshtunnel In the django settings.py create an ssh tunnel before the django DB settings block: from sshtunnel import SSHTunnelForwarder # Connect to a server using the ssh keys. See the sshtunnel documentation for using password authentication ssh_tunnel = SSHTunnelForwarder( SERVER_IP, ssh_private_key=PATH_TO_SSH_PRIVATE_KEY, ssh_private_key_password=SSH_PRIVATE_KEY_PASSWORD, ssh_username=SSH_USERNAME, remote_bind_address=('localhost', LOCAL_DB_PORT_ON_THE_SERVER), ) ssh_tunnel.start() Then add the DB info block in the settings.py. Here I am adding a default local DB and the remote DB that we connect to using the ssh tunnel DATABASES = { 'default': { 'ENGINE': 'django.contrib.gis.db.backends.postgis', 'HOST': NORMAL_DB_HOST, 'PORT': NORMAL_DB_PORT, 'NAME': NORMAL_DB_NAME, 'USER': NORMAL_DB_USER, 'PASSWORD': NORMAL_DB_PASSWORD, }, 'shhtunnel_db': { 'ENGINE': 'django.contrib.gis.db.backends.postgis', 'HOST': 'localhost', 'PORT': ssh_tunnel.local_bind_port, 'NAME': REMOTE_DB_DB_NAME, 'USER': REMOTE_DB_USERNAME, 'PASSWORD': REMOTE_DB_PASSWORD, }, } That is it. Now one can make migratations to the remote db using commands like $ python manage.py migrate --database=shhtunnel_db or make calls to the db from within the python code using lines like Models.objects.all().using('shhtunnel_db') Extra: In my case the remote db was created by someone else and I only wanted to read it. In order to avoid writing the models and deactivating the model manager I used the following django command to get the models from the database [src]: python manage.py inspectdb | 4 | 13 |
70,757,953 | 2022-1-18 | https://stackoverflow.com/questions/70757953/pygame-display-init-fails-for-non-root-user | Tl;dr I need to use pygame but it can't initialize the screen as a normal user because of the permissions for the framebuffer driver. root can do pygame.display.init() but not the user. User is in the group 'video' and can write on /dev/fb0. What permission is missing to the user so pygame.display.init() would work. Error encountered : pygame.error: Unable to open a console terminal Description So, I am trying to use pygame in order to display things on a framebuffer /dev/fb0. To use some functions I need (e.g pygame.Surface.convert) the display must be initialized. However, when calling pygame.display.init() I have an error, but only when not doing so as root. According to @Nodraak (ref) it is related to the permissions of the framebuffer driver. Late answer but I wish I would have tried that earlier : You may need to be root to use a frame buffer driver. (It helped in my case: RaspberryPi 2 without X running but with a screen connected. I can now open a display through SSH or directly on the RPi) A tree -fupg / | grep fb | grep rwx doesn't seem to show any binary which would be executable by root but not by others. I am quite sure that adding my user to a group, or tweaking the file permissions somewhere would be enough to fix the issue. Note: For Security reasons, running the software as root is not an option. Context System : RaspberryPi X Server: None Screen: 1 (HDMI) Connection: remote (SSH) Origin of the error I am trying to convert a surface with pygame.Surface.convert(...) function. But receive the following error : pygame.error: cannot convert without pygame.display initialized Nevertheless, initializing pygame.display with pygame.display.init() is giving the following error: pygame.error: Unable to open a console terminal I have the rights to write to the screen as I am part of the video group, and cat /dev/urandom > /dev/fb0 is effectively displaying snow on the screen. Also I tried setting up the SDL_... environment variable to fbcon or dummy but it doesn't help. I also tried keeping the root env with user su -m user and same result. Reproduce the error On a raspberrypi without XServer, connect an HDMI screen, install pygame. import pygame pygame.display.init() Error message: pygame.error: Unable to open a console terminal Software Versions python 3.7.3 pygame 1.9.4.post1 OS Raspbian Buster libsdl 2 Related Pygame.display.init Documentation SO Question: Pygame display init on headless Raspberry(...) | Solution to the problem OpenVT So it appears that the best solution, which meet all requirement I listed, is to use openvt. How ? The procedure holds in a few bullet points : 1. Add User to tty group As root, add your user to the group named tty, that will allow us to give it access to the TTYs. # As root: usermod -a -G tty $username 2. Give TTY access to users in group tty Now that the user is part of the group tty we need it to be allowed to write on it, as openvt will use a tty. By default, the mode should be set at 620 we need to set it at 660 to allow the group to write on it. # Edit file: /lib/udev/rules.d/50-udev-default.rules SUBSYSTEM=="tty", KERNEL=="tty[0-9]*", GROUP="tty", MODE="0660" # ensure mode is 0660 ^ 3. Set SDL environment variables Inside your software, make sure to set up the environment variables of SDL. import os # ... os.environ['SDL_VIDEODRIVER'] = 'fbcon' os.environ["SDL_FBDEV"] = "/dev/fb0" 4. 
Reboot the Raspberry Ok, you don't need a snippet for that.. Whether ? Well ok. # as root / with sudo reboot 5. Start software with openvt openvt (open Virtual Terminal) allows us to run the interface directly with screen access. This must be executed by the final user, in the same directory as the software (preferably). openvt -s -- python3 ./interface.py And that should work. Of course you can then integrate this in a Linux service so it starts at boot. but you may need to add After: [email protected] in the [Unit] section of the service file. Well, it took me lots of time to figure that one out, so I hope it helps someone else as well. | 5 | 2 |
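Putting steps 3 and 5 of that answer together, the interface.py started by openvt might begin roughly like this (a sketch under the answer's assumptions of the fbcon driver and /dev/fb0, not the author's actual script):

    import os
    # SDL must know about the framebuffer before pygame.display.init() runs
    os.environ['SDL_VIDEODRIVER'] = 'fbcon'
    os.environ['SDL_FBDEV'] = '/dev/fb0'

    import pygame
    pygame.display.init()
    # (0, 0) asks SDL for the current framebuffer resolution
    screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)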
70,713,838 | 2022-1-14 | https://stackoverflow.com/questions/70713838/can-someone-explain-the-logic-behind-xarray-polyfit-coefficients | I am trying to fit a linear regression to climate data from a Netcdf file. The data look like the following.. print(dsloc_lvl) <xarray.DataArray 'sla' (time: 10227)> array([0.0191, 0.0193, 0.0197, ..., 0.0936, 0.0811, 0.0695]) Coordinates: latitude float32 21.62 * time (time) datetime64[ns] 1993-01-01 1993-01-02 ... 2020-12-31 longitude float32 -89.12 Attributes: ancillary_variables: err_sla comment: The sea level anomaly is the sea surface height abo... grid_mapping: crs long_name: Sea level anomaly standard_name: sea_surface_height_above_sea_level units: m _ChunkSizes: [ 1 50 50]`` I've been using Xarray library to process data, so I've use the xarray.DataArray.polyfit and xarray.DataArray.polyval. Regression line looks good when plotting results. However, when looking into the coefficients I've noticed they are very small. I've compare coefficients with the np.polyfit approach which are consisitent with what is expected. I figure this is because for np. ppolyfit I convert dates using date2num x1=mdates.date2num(dsloc_lvl['time']) Out: array([ 8401., 8402., 8403., ..., 18625., 18626., 18627.]) and the xarray approach converts dates differently, I believe is with: dsloc_lvl.time.astype(float) <xarray.DataArray 'time' (time: 10227)> array([7.2584640e+17, 7.2593280e+17, 7.2601920e+17, ..., 1.6092000e+18, 1.6092864e+18, 1.6093728e+18]) Coordinates: latitude float32 21.62 * time (time) datetime64[ns] 1993-01-01 1993-01-02 ... 2020-12-31 longitude float32 -89.12 Attributes: axis: T long_name: Time standard_name: time _ChunkSizes: 1 _CoordinateAxisType: Time valid_min: 15706.0 valid_max: 25932.0 So this makes coefficients look totally different: np approach: np.polyfit(x1,y1,1) Out: array([ 1.31727420e-05, -1.31428413e-01]) xarray aprroach: dsloc_lvl.polyfit('time',1) Out: <xarray.Dataset> Dimensions: (degree: 2) Coordinates: * degree (degree) int32 1 0 Data variables: polyfit_coefficients (degree) float64 1.525e-19 -0.1314 My question is, what are the units of time de xarray approach is using, and is there a way to scale it to match de numpy approach? Thanks. | While the results of numpy's polyfit are the regression coefficients with respect to an array of x values you pass in manually, xarray's polyfit gives coefficients in units of the coordinate labels. In the case of datetime coordinates, this often means the coefficient result is in the units of the array per nanosecond. This happens because your data has a daily frequency but the time coordinate's labels are type datetime64[ns] (ns means nanoseconds). Convert the linear coefficient from [1/ns] to [1/day] and you get the same result! 1.525e-19 [units/ns] * 1e9 [ns/s] * 60 [s/m] * 60 [m/h] * 24 [h/d] = 1.317e-05 [units / day] Xarray does not support numpy datetime arrays with any precision other than nanosecond, so you can't get arround this by simply changing the datetime type to, say, datetime64[D]. You can convert the coefficients you find, as above, or manually convert the axis to a float or int with the units you're looking for prior to calling polyfit. See the xarray docs on Time Series Data for more info. Example As an example, I'll create a sample array: In [1]: import xarray as xr, pandas as pd, numpy as np ...: ...: # create an array indexed by time, with 1096 daily observations from ...: # Jan 1 2020 to Dec 31, 2022. 
The array has noise around a linear ...: # trend with slope -0.1 ...: time = pd.date_range('2020-01-01', '2022-12-31', freq='D') ...: Y = np.random.random(size=len(time)) + np.arange(0, (len(time) * -0.1), -0.1) ...: da = xr.DataArray(Y, dims=['time'], coords=[time]) In [2]: da Out[2]: <xarray.DataArray (time: 1096)> array([ 0.44076544, 0.66566835, 0.72999141, ..., -108.84335381, -109.38686183, -109.49807849]) Coordinates: * time (time) datetime64[ns] 2020-01-01 2020-01-02 ... 2022-12-31 If we take a look at the time coordinate, everything looks as you'd expect: In [3]: da.time Out[3]: <xarray.DataArray 'time' (time: 1096)> array(['2020-01-01T00:00:00.000000000', '2020-01-02T00:00:00.000000000', '2020-01-03T00:00:00.000000000', ..., '2022-12-29T00:00:00.000000000', '2022-12-30T00:00:00.000000000', '2022-12-31T00:00:00.000000000'], dtype='datetime64[ns]') Coordinates: * time (time) datetime64[ns] 2020-01-01 2020-01-02 ... 2022-12-31 The trouble arises because da.polyfit needs to interpret the coordinate as numerical values. If we convert da.time to a float, you can see how we run into trouble. These values represent nanoseconds since Jan 1, 1970 0:00:00: In [4]: da.time.astype(float) Out[4]: <xarray.DataArray 'time' (time: 1096)> array([1.5778368e+18, 1.5779232e+18, 1.5780096e+18, ..., 1.6722720e+18, 1.6723584e+18, 1.6724448e+18]) Coordinates: * time (time) datetime64[ns] 2020-01-01 2020-01-02 ... 2022-12-31 To get the same behavior as numpy, we could add an ordinal_day coordinate. Note here that I subtract off the start date (resulting in timedelta64[ns] data), then drop the coordinate into numpy using .values before changing precisions to timedelta64[D] (if you do this in xarray the precision change will be ignored): In [7]: da.coords['ordinal_day'] = ( ...: ('time', ), ...: (da.time - da.time.min()).values.astype('timedelta64[D]').astype(int) ...: ) In [8]: da.ordinal_day Out[8]: <xarray.DataArray 'ordinal_day' (time: 1096)> array([ 0, 1, 2, ..., 1093, 1094, 1095]) Coordinates: * time (time) datetime64[ns] 2020-01-01 2020-01-02 ... 2022-12-31 ordinal_day (time) int64 0 1 2 3 4 5 6 ... 1090 1091 1092 1093 1094 1095 Now we can run polyfit using ordinal_day as the coordinate (after swapping the dimensions of the array from time to ordinal_day using da.swap_dims): In [10]: da.swap_dims({'time': 'ordinal_day'}).polyfit('ordinal_day', deg=1) Out[10]: <xarray.Dataset> Dimensions: (degree: 2) Coordinates: * degree (degree) int64 1 0 Data variables: polyfit_coefficients (degree) float64 -0.1 0.4966 This gives us the results we'd expect - I constructed the data with uniform random values in [0, 1] (so, mean 0.5 at the intercept) plus a linear trend with slope -0.1. | 4 | 6 |
70,705,250 | 2022-1-14 | https://stackoverflow.com/questions/70705250/how-to-install-local-package-with-conda | I have a local python project called jive that I would like to use in an another project. My current method of using jive in other projects is to activate the conda env for the project, then move to my jive directory and use python setup.py install. This works fine, and when I use conda list, I see everything installed in the env including jive, with a note that jive was installed using pip. But what I really want is to do this with full conda. When I want to use jive in another project, I want to just put jive in that projects environment.yml. So I did the following: write a simple meta.yaml so I could use conda-build to build jive locally build jive with conda build . I looked at the tarball that was produced and it does indeed contain the jive source as expected In my other project, add jive to the dependencies in environment.yml, and add 'local' to the list of channels. create a conda env using that environment.yml. When I activate the environment and use conda list, it lists all the dependencies including jive, as desired. But when I open python interpreter, I cannot import jive, it says there is no such package. (If use python setup.py install, I can import it.) How can I fix the build/install so that this works? Here is the meta.yaml, which lives in the jive project top level directory: package: name: jive version: "0.2.1" source: path: . build: script: python -m pip install --no-deps --ignore-installed . requirements: host: - python>=3.5 - pip - setuptools run: - python>=3.5 - numpy - pandas - scipy - seaborn - matplotlib - scikit-learn - statsmodels - joblib - bokeh test: imports: jive And here is the output of conda build . No numpy version specified in conda_build_config.yaml. Falling back to default numpy value of 1.16 WARNING:conda_build.metadata:No numpy version specified in conda_build_config.yaml. Falling back to default numpy value of 1.16 Adding in variants from internal_defaults INFO:conda_build.variants:Adding in variants from internal_defaults Adding in variants from /Users/thomaskeefe/.conda/conda_build_config.yaml INFO:conda_build.variants:Adding in variants from /Users/thomaskeefe/.conda/conda_build_config.yaml Attempting to finalize metadata for jive INFO:conda_build.metadata:Attempting to finalize metadata for jive Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done BUILD START: ['jive-0.2.1-py310_0.tar.bz2'] Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... 
done ## Package Plan ## environment location: /opt/miniconda3/conda-bld/jive_1642185595622/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla The following NEW packages will be INSTALLED: bzip2: 1.0.8-h1de35cc_0 ca-certificates: 2021.10.26-hecd8cb5_2 certifi: 2021.5.30-py310hecd8cb5_0 libcxx: 12.0.0-h2f01273_0 libffi: 3.3-hb1e8313_2 ncurses: 6.3-hca72f7f_2 openssl: 1.1.1m-hca72f7f_0 pip: 21.2.4-py310hecd8cb5_0 python: 3.10.0-hdfd78df_3 readline: 8.1.2-hca72f7f_1 setuptools: 58.0.4-py310hecd8cb5_0 sqlite: 3.37.0-h707629a_0 tk: 8.6.11-h7bc2e8c_0 tzdata: 2021e-hda174b7_0 wheel: 0.37.1-pyhd3eb1b0_0 xz: 5.2.5-h1de35cc_0 zlib: 1.2.11-h4dc903c_4 Preparing transaction: ...working... done Verifying transaction: ...working... done Executing transaction: ...working... done Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done Copying /Users/thomaskeefe/Documents/py_jive to /opt/miniconda3/conda-bld/jive_1642185595622/work/ source tree in: /opt/miniconda3/conda-bld/jive_1642185595622/work export PREFIX=/opt/miniconda3/conda-bld/jive_1642185595622/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla export BUILD_PREFIX=/opt/miniconda3/conda-bld/jive_1642185595622/_build_env export SRC_DIR=/opt/miniconda3/conda-bld/jive_1642185595622/work Processing $SRC_DIR DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default. pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555. Building wheels for collected packages: jive Building wheel for jive (setup.py): started Building wheel for jive (setup.py): finished with status 'done' Created wheel for jive: filename=jive-0.2.1-py3-none-any.whl size=46071 sha256=b312955cb2fd917bc4e684a575407b884190680f2dddad7fcb9ac25e5b290fc9 Stored in directory: /private/tmp/pip-ephem-wheel-cache-rbpkt2an/wheels/15/68/82/4ed7cd246fbc4c72cf764b425a03230247589bd2394a7e457b Successfully built jive Installing collected packages: jive Successfully installed jive-0.2.1 Resource usage statistics from building jive: Process count: 3 CPU time: Sys=0:00:00.3, User=0:00:00.5 Memory: 53.7M Disk usage: 50.4K Time elapsed: 0:00:06.1 Packaging jive INFO:conda_build.build:Packaging jive INFO conda_build.build:build(2289): Packaging jive Packaging jive-0.2.1-py310_0 INFO:conda_build.build:Packaging jive-0.2.1-py310_0 INFO conda_build.build:bundle_conda(1529): Packaging jive-0.2.1-py310_0 compiling .pyc files... 
number of files: 70 Fixing permissions INFO :: Time taken to mark (prefix) 0 replacements in 0 files was 0.06 seconds TEST START: /opt/miniconda3/conda-bld/osx-64/jive-0.2.1-py310_0.tar.bz2 Adding in variants from /var/folders/dd/t85p2jdn3sd11bsdnl7th6p00000gn/T/tmp4o3im7d1/info/recipe/conda_build_config.yaml INFO:conda_build.variants:Adding in variants from /var/folders/dd/t85p2jdn3sd11bsdnl7th6p00000gn/T/tmp4o3im7d1/info/recipe/conda_build_config.yaml INFO conda_build.variants:_combine_spec_dictionaries(234): Adding in variants from /var/folders/dd/t85p2jdn3sd11bsdnl7th6p00000gn/T/tmp4o3im7d1/info/recipe/conda_build_config.yaml Renaming work directory '/opt/miniconda3/conda-bld/jive_1642185595622/work' to '/opt/miniconda3/conda-bld/jive_1642185595622/work_moved_jive-0.2.1-py310_0_osx-64' INFO:conda_build.utils:Renaming work directory '/opt/miniconda3/conda-bld/jive_1642185595622/work' to '/opt/miniconda3/conda-bld/jive_1642185595622/work_moved_jive-0.2.1-py310_0_osx-64' INFO conda_build.utils:shutil_move_more_retrying(2091): Renaming work directory '/opt/miniconda3/conda-bld/jive_1642185595622/work' to '/opt/miniconda3/conda-bld/jive_1642185595622/work_moved_jive-0.2.1-py310_0_osx-64' shutil.move(work)=/opt/miniconda3/conda-bld/jive_1642185595622/work, dest=/opt/miniconda3/conda-bld/jive_1642185595622/work_moved_jive-0.2.1-py310_0_osx-64) INFO:conda_build.utils:shutil.move(work)=/opt/miniconda3/conda-bld/jive_1642185595622/work, dest=/opt/miniconda3/conda-bld/jive_1642185595622/work_moved_jive-0.2.1-py310_0_osx-64) INFO conda_build.utils:shutil_move_more_retrying(2098): shutil.move(work)=/opt/miniconda3/conda-bld/jive_1642185595622/work, dest=/opt/miniconda3/conda-bld/jive_1642185595622/work_moved_jive-0.2.1-py310_0_osx-64) Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... 
done ## Package Plan ## environment location: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol The following NEW packages will be INSTALLED: blas: 1.0-mkl bokeh: 2.4.2-py39hecd8cb5_0 bottleneck: 1.3.2-py39he3068b8_1 brotli: 1.0.9-hb1e8313_2 ca-certificates: 2021.10.26-hecd8cb5_2 certifi: 2021.10.8-py39hecd8cb5_2 cycler: 0.11.0-pyhd3eb1b0_0 fonttools: 4.25.0-pyhd3eb1b0_0 freetype: 2.11.0-hd8bbffd_0 giflib: 5.2.1-haf1e3a3_0 intel-openmp: 2021.4.0-hecd8cb5_3538 jinja2: 3.0.2-pyhd3eb1b0_0 jive: 0.2.1-py310_0 local joblib: 1.1.0-pyhd3eb1b0_0 jpeg: 9d-h9ed2024_0 kiwisolver: 1.3.1-py39h23ab428_0 lcms2: 2.12-hf1fd2bf_0 libcxx: 12.0.0-h2f01273_0 libffi: 3.3-hb1e8313_2 libgfortran: 3.0.1-h93005f0_2 libpng: 1.6.37-ha441bb4_0 libtiff: 4.2.0-h87d7836_0 libwebp: 1.2.0-hacca55c_0 libwebp-base: 1.2.0-h9ed2024_0 llvm-openmp: 12.0.0-h0dcd299_1 lz4-c: 1.9.3-h23ab428_1 markupsafe: 2.0.1-py39h9ed2024_0 matplotlib: 3.5.0-py39hecd8cb5_0 matplotlib-base: 3.5.0-py39h4f681db_0 mkl: 2021.4.0-hecd8cb5_637 mkl-service: 2.4.0-py39h9ed2024_0 mkl_fft: 1.3.1-py39h4ab4a9b_0 mkl_random: 1.2.2-py39hb2f4e1b_0 munkres: 1.1.4-py_0 ncurses: 6.3-hca72f7f_2 numexpr: 2.8.1-py39h2e5f0a9_0 numpy: 1.21.2-py39h4b4dc7a_0 numpy-base: 1.21.2-py39he0bd621_0 olefile: 0.46-pyhd3eb1b0_0 openssl: 1.1.1m-hca72f7f_0 packaging: 21.3-pyhd3eb1b0_0 pandas: 1.3.5-py39h743cdd8_0 patsy: 0.5.2-py39hecd8cb5_0 pillow: 8.4.0-py39h98e4679_0 pip: 21.2.4-py39hecd8cb5_0 pyparsing: 3.0.4-pyhd3eb1b0_0 python: 3.9.7-h88f2d9e_1 python-dateutil: 2.8.2-pyhd3eb1b0_0 pytz: 2021.3-pyhd3eb1b0_0 pyyaml: 6.0-py39hca72f7f_1 readline: 8.1.2-hca72f7f_1 scikit-learn: 1.0.2-py39hae1ba45_0 scipy: 1.7.3-py39h8c7af03_0 seaborn: 0.11.2-pyhd3eb1b0_0 setuptools: 58.0.4-py39hecd8cb5_0 six: 1.16.0-pyhd3eb1b0_0 sqlite: 3.37.0-h707629a_0 statsmodels: 0.13.0-py39hca72f7f_0 threadpoolctl: 2.2.0-pyh0d69192_0 tk: 8.6.11-h7bc2e8c_0 tornado: 6.1-py39h9ed2024_0 typing_extensions: 3.10.0.2-pyh06a4308_0 tzdata: 2021e-hda174b7_0 wheel: 0.37.1-pyhd3eb1b0_0 xz: 5.2.5-h1de35cc_0 yaml: 0.2.5-haf1e3a3_0 zlib: 1.2.11-h4dc903c_4 zstd: 1.4.9-h322a384_0 Preparing transaction: ...working... done Verifying transaction: ...working... ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::intel-openmp-2021.4.0-hecd8cb5_3538, defaults/osx-64::llvm-openmp-12.0.0-h0dcd299_1 path: 'lib/libiomp5.dylib' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'bin/webpinfo' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'bin/webpmux' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'include/webp/decode.h' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'include/webp/encode.h' ClobberWarning: This transaction has incompatible packages due to a shared path. 
packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'include/webp/mux.h' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'include/webp/mux_types.h' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'include/webp/types.h' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/libwebp.7.dylib' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/libwebp.a' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/libwebp.dylib' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/libwebpdecoder.3.dylib' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/libwebpdecoder.a' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/libwebpdecoder.dylib' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/libwebpmux.3.dylib' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/libwebpmux.a' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/libwebpmux.dylib' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/pkgconfig/libwebp.pc' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/pkgconfig/libwebpdecoder.pc' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'lib/pkgconfig/libwebpmux.pc' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'share/man/man1/cwebp.1' ClobberWarning: This transaction has incompatible packages due to a shared path. 
packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'share/man/man1/dwebp.1' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'share/man/man1/webpinfo.1' ClobberWarning: This transaction has incompatible packages due to a shared path. packages: defaults/osx-64::libwebp-base-1.2.0-h9ed2024_0, defaults/osx-64::libwebp-1.2.0-hacca55c_0 path: 'share/man/man1/webpmux.1' done Executing transaction: ...working... ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/llvm-openmp-12.0.0-h0dcd299_1/lib/libiomp5.dylib target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libiomp5.dylib ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/bin/webpinfo target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/bin/webpinfo ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/bin/webpmux target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/bin/webpmux ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/include/webp/decode.h target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/include/webp/decode.h ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/include/webp/encode.h target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/include/webp/encode.h ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/include/webp/mux.h target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/include/webp/mux.h ClobberWarning: Conda was asked to clobber an existing path. 
source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/include/webp/mux_types.h target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/include/webp/mux_types.h ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/include/webp/types.h target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/include/webp/types.h ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/lib/libwebp.7.dylib target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libwebp.7.dylib ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/lib/libwebp.a target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libwebp.a ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/lib/libwebp.dylib target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libwebp.dylib ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/lib/libwebpdecoder.3.dylib target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libwebpdecoder.3.dylib ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/lib/libwebpdecoder.a target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libwebpdecoder.a ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/lib/libwebpdecoder.dylib target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libwebpdecoder.dylib ClobberWarning: Conda was asked to clobber an existing path. 
source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/lib/libwebpmux.3.dylib target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libwebpmux.3.dylib ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/lib/libwebpmux.a target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libwebpmux.a ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/lib/libwebpmux.dylib target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libwebpmux.dylib ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/.condatmp/1018f8ab-87a7-4fa8-a41c-4c14cc77cfff target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/pkgconfig/libwebp.pc ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/.condatmp/e3701fae-f2cd-44e9-9dc6-c71f499cd2c2 target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/pkgconfig/libwebpdecoder.pc ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/.condatmp/0f4bcf50-01e5-404d-b1a4-8a87d45c22c5 target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/pkgconfig/libwebpmux.pc ClobberWarning: Conda was asked to clobber an existing path. 
source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/share/man/man1/cwebp.1 target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/share/man/man1/cwebp.1 ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/share/man/man1/dwebp.1 target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/share/man/man1/dwebp.1 ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/share/man/man1/webpinfo.1 target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/share/man/man1/webpinfo.1 ClobberWarning: Conda was asked to clobber an existing path. source path: /opt/miniconda3/pkgs/libwebp-1.2.0-hacca55c_0/share/man/man1/webpmux.1 target path: /opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/share/man/man1/webpmux.1 Installed package of scikit-learn can be accelerated using scikit-learn-intelex. More details are available here: https://intel.github.io/scikit-learn-intelex For example: $ conda install scikit-learn-intelex $ python -m sklearnex my_application.py done export PREFIX=/opt/miniconda3/conda-bld/jive_1642185595622/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol export SRC_DIR=/opt/miniconda3/conda-bld/jive_1642185595622/test_tmp Traceback (most recent call last): File "/opt/miniconda3/conda-bld/jive_1642185595622/test_tmp/run_test.py", line 2, in <module> import jive ModuleNotFoundError: No module named 'jive' import: 'jive' Tests failed for jive-0.2.1-py310_0.tar.bz2 - moving package to /opt/miniconda3/conda-bld/broken WARNING:conda_build.build:Tests failed for jive-0.2.1-py310_0.tar.bz2 - moving package to /opt/miniconda3/conda-bld/broken WARNING conda_build.build:tests_failed(2970): Tests failed for jive-0.2.1-py310_0.tar.bz2 - moving package to /opt/miniconda3/conda-bld/broken TESTS FAILED: jive-0.2.1-py310_0.tar.bz2 EDIT: I added a test: section to the meta.yaml as merv suggested. | The immediate error is that the build is generating a Python 3.10 version, but when testing Conda doesn't recognize any constraint on the Python version, and creates a Python 3.9 environment. I think the main issue is that python >=3.5 is only a valid constraint when doing noarch builds, which this is not. That is, once a package builds with a given Python version, the version must be constrained to exactly that version (up through minor). 
So, in this case, the package is built with Python 3.10, but it reports in its metadata that it is compatible with all versions of Python 3.5+, which simply isn't true because Conda Python packages install the modules into Python-version-specific site-packages (e.g., lib/python-3.10/site-packages/jive). Typically, Python versions are controlled by either the --python argument given to conda-build or a matrix supplied by the conda_build_config.yaml file (see documentation on "Build variants"). Try adjusting the meta.yaml to something like package: name: jive version: "0.2.1" source: path: . build: script: python -m pip install --no-deps --ignore-installed . requirements: host: - python - pip - setuptools run: - python - numpy - pandas - scipy - seaborn - matplotlib - scikit-learn - statsmodels - joblib - bokeh If you want to use it in a Python 3.9 environment, then use conda build --python 3.9 .. | 6 | 2 |
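For illustration, a minimal sketch of the conda_build_config.yaml variant matrix mentioned in the answer, assuming you want builds for both Python 3.9 and 3.10 (the exact version list is an assumption):

    # conda_build_config.yaml -- hypothetical example, placed next to meta.yaml
    python:
      - "3.9"
      - "3.10"

With such a file in place, a plain conda build . should produce one package per listed Python version.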
70,762,856 | 2022-1-18 | https://stackoverflow.com/questions/70762856/error-importing-plugin-sqlmypy-no-module-named-sqlmypy | I have sqlalchemy-stubs installed via my Pipfile: [dev-packages] sqlalchemy-stubs = {editable = true, git = "https://github.com/dropbox/sqlalchemy-stubs.git"} Verified by running pipenv graph: sqlalchemy-stubs==0.4 - mypy [required: >=0.790, installed: 0.910] - mypy-extensions [required: >=0.4.3,<0.5.0, installed: 0.4.3] - toml [required: Any, installed: 0.10.2] - typing-extensions [required: >=3.7.4, installed: 4.0.0] - typing-extensions [required: >=3.7.4, installed: 4.0.0] However, when running mypy (mypy --strict .) with the config: [mypy] plugins = graphene_plugin, sqlmypy I get the error: error: Error importing plugin "sqlmypy": No module named 'sqlmypy' What is going on? | I fixed it by removing editable = true from the Pipfile. | 5 | 0 |
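For illustration, the corrected Pipfile entry from the question would presumably look like this once the editable flag is dropped (same git source, only the editable = true part removed):

    [dev-packages]
    sqlalchemy-stubs = {git = "https://github.com/dropbox/sqlalchemy-stubs.git"}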
70,750,396 | 2022-1-18 | https://stackoverflow.com/questions/70750396/how-to-generate-a-rank-5-matrix-with-entries-uniform | I want to generate a rank 5 100x600 matrix in numpy with all the entries sampled from np.random.uniform(0, 20), so that all the entries will be uniformly distributed between [0, 20). What will be the best way to do so in python? I see there is an SVD-inspired way to do so here (https://math.stackexchange.com/questions/3567510/how-to-generate-a-rank-r-matrix-with-entries-uniform), but I am not sure how to code it up. I am looking for a working example of this SVD-inspired way to get uniformly distributed entries. I have actually managed to code up a rank 5 100x100 matrix by vertically stacking five 20x100 rank 1 matrices, then shuffling the vertical indices. However, the resulting 100x100 matrix does not have uniformly distributed entries [0, 20). Here is my code (my best attempt): import numpy as np def randomMatrix(m, n, p, q): # creates an m x n matrix with lower bound p and upper bound q, randomly. count = np.random.uniform(p, q, size=(m, n)) return count Qs = [] my_rank = 5 for i in range(my_rank): L = randomMatrix(20, 1, 0, np.sqrt(20)) # L is tall R = randomMatrix(1, 100, 0, np.sqrt(20)) # R is long Q = np.outer(L, R) Qs.append(Q) Q = np.vstack(Qs) #shuffle (preserves rank 5 [confirmed]) np.random.shuffle(Q) | I just couldn't take the fact that my previous solution (the "selection" method) did not really produce strictly uniformly distributed entries, but only close enough to fool a statistical test sometimes. The asymptotic case, however, will almost surely not be distributed uniformly. But I did dream up another crazy idea that's just as bad, but in another manner - it's not really random. In this solution, I do something similar to OP's method of forming R matrices with rank 1 and then concatenating them, but a little differently. I create each matrix by stacking a base vector on top of itself multiplied by 0.5, and then I stack those on the same base vector shifted by half the dynamic range of the uniform distribution. This process continues with multiplication by a third, two thirds and 1 and then shifting, and so on until I have the number of required vectors in that part of the matrix. I know it sounds incomprehensible. But, unfortunately, I couldn't find a way to explain it better. Hopefully, reading the code will shed some more light. I hope this "staircase" method will be more reliable and useful. import numpy as np from matplotlib import pyplot as plt ''' params: N - base dimension M - matrix length R - matrix rank high - max value of matrix low - min value of the matrix ''' N = 100 M = 600 R = 5 high = 20 low = 0 # base vectors of the matrix base = low+np.random.rand(R-1, N)*(high-low) def build_staircase(base, num_stairs, low, high): ''' create a uniformly distributed matrix with rank 2 'num_stairs' different vectors whose elements are all uniformly distributed like the values of 'base'. ''' l = levels(num_stairs) vectors = [] for l_i in l: for i in range(l_i): vector_dynamic = (base-low)/l_i vector_bias = low+np.ones_like(base)*i*((high-low)/l_i) vectors.append(vector_dynamic+vector_bias) return np.array(vectors) def levels(total): ''' create a sequence of strictly increasing numbers summing up to the total.
''' l = [] sum_l = 0 i = 1 while sum_l < total: l.append(i) i +=1 sum_l = sum(l) i = 0 while sum_l > total: l[i] -= 1 if l[i] == 0: l.pop(i) else: i += 1 if i == len(l): i = 0 sum_l = sum(l) return l n_rm = R-1 # number of matrix subsections m_rm = M//n_rm len_rms = [ M//n_rm for i in range(n_rm)] len_rms[-1] += M%n_rm rm_list = [] for i in range(n_rm): # create a matrix with uniform entries with rank 2 # out of the vector 'base[i]' and a ones vector. rm_list.append(build_staircase( base = base[i], num_stairs = len_rms[i], low = low, high = high, )) rm = np.concatenate(rm_list) plt.hist(rm.flatten(), bins = 100) A few examples: and now with N = 1000, M = 6000 to empirically demonstrate the nearly asymptotic behavior: | 8 | 1 |
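A quick sanity check one might run after the script above (a sketch assuming the variables rm, R, M, N, low and high from the answer are still in scope):

    import numpy as np

    print(rm.shape)                   # expected: (M, N)
    print(np.linalg.matrix_rank(rm))  # should not exceed R
    print(rm.min(), rm.max())         # entries should stay within [low, high)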
70,738,211 | 2022-1-17 | https://stackoverflow.com/questions/70738211/run-pytest-classes-in-custom-order | I am writing tests with pytest in pycharm. The tests are divided into various classes. I would like to specify certain classes that have to run before other classes. I have seen various questions on stackoverflow (such as specifying pytest tests to run from a file and how to run a method before all other tests). These and various other questions wanted to choose specific functions to run in order. This can be done, I understand, using fixtures or with pytest ordering. I don't care which functions from each class run first. All I care about is that the classes run in the order I specify. Is this possible? | Approach You can use the pytest_collection_modifyitems hook to modify the order of collected tests (items) in place. This has the additional benefit of not having to install any third party libraries. With some custom logic, this allows sorting by class. Full example Say we have three test classes: TestExtract TestTransform TestLoad Say also that, by default, the test order of execution would be alphabetical, i.e.: TestExtract -> TestLoad -> TestTransform which does not work for us due to test class interdependencies. We can add pytest_collection_modifyitems to conftest.py as follows to enforce our desired execution order: # conftest.py def pytest_collection_modifyitems(items): """Modifies test items in place to ensure test classes run in a given order.""" CLASS_ORDER = ["TestExtract", "TestTransform", "TestLoad"] class_mapping = {item: item.cls.__name__ for item in items} sorted_items = items.copy() # Iteratively move tests of each class to the end of the test queue for class_ in CLASS_ORDER: sorted_items = [it for it in sorted_items if class_mapping[it] != class_] + [ it for it in sorted_items if class_mapping[it] == class_ ] items[:] = sorted_items Some comments on the implementation details: Test classes can live in different modules CLASS_ORDER does not have to be exhaustive. You can reorder just those classes on which you want to enforce an order (but note: if reordered, any non-reordered class will execute before any reordered class) The test order within the classes is kept unchanged It is assumed that test classes have unique names items must be modified in place, hence the final items[:] assignment | 7 | 8 |
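As a usage illustration (hypothetical test modules; only the three class names come from the answer), the conftest.py hook above would make these run in Extract -> Transform -> Load order even though the classes live in different files:

    # tests/test_extract.py
    class TestExtract:
        def test_pull(self):
            assert True

    # tests/test_load.py
    class TestLoad:
        def test_write(self):
            assert True

    # tests/test_transform.py
    class TestTransform:
        def test_shape(self):
            assert True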
70,761,481 | 2022-1-18 | https://stackoverflow.com/questions/70761481/how-to-stop-pycharms-break-stop-halt-feature-on-handled-exceptions-i-e-only-b | I have that PyCharm is halting on all my exceptions, even the ones I am handling in a try except block. I do not want it to break there - I am handling and perhaps expecting an error. But every other exception I do want it to halt and suspend execution (e.g. so that I have the program state and debug it). How does one do that? I tried going into the python exception breakpoint option but I didn't see an option like "break only on unhandled exceptions" e.g as suggested by these: Stop PyCharm If Error https://intellij-support.jetbrains.com/hc/en-us/community/posts/206601165-How-to-enable-stopping-on-unhandled-exceptions- note this is my current state, note how it stopped in my try block... :( crossposted: https://intellij-support.jetbrains.com/hc/en-us/community/posts/4415666598546-How-to-stop-PyCharm-s-break-stop-halt-feature-on-handled-exceptions-i-e-only-break-on-python-unhandled-exceptions- I tried: In your link here intellij-support.jetbrains.com/hc/en-us/community/posts/… the poster Okke said they solved this issue adding --pdb to the 'addition arguments', which someone later said they probably meant interpreter options. but didn't work got error: /Users/brandomiranda/opt/anaconda3/envs/meta_learning/bin/python --pdb /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py --cmd-line --multiproc --qt-support=auto --client 127.0.0.1 --port 58378 --file /Users/brandomiranda/ultimate-utils/tutorials_for_myself/try_catch_pycharm_issues/try_catch_with_pickle.py unknown option --pdb usage: /Users/brandomiranda/opt/anaconda3/envs/meta_learning/bin/python [option] ... [-c cmd | -m mod | file | -] [arg] ... Try `python -h' for more information. Process finished with exit code 2 | I think it is already working actually, but you are in fact not catching the correct error. In your code you have: try: pickle.dumps(obj) except pickle.PicklingError: return False But the error thrown is AttributeError. So to avoid that you need something like this: try: pickle.dumps(obj) except (pickle.PicklingError, AttributeError): return False | 5 | 2 |
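A self-contained sketch of the fixed check from this answer (the function name is an assumption, since the question only shows the try block; TypeError is added because some objects, e.g. generators, raise it from pickle.dumps):

    import pickle

    def is_picklable(obj) -> bool:
        """Return False instead of raising when obj cannot be pickled."""
        try:
            pickle.dumps(obj)
        except (pickle.PicklingError, AttributeError, TypeError):
            return False
        return True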
70,696,761 | 2022-1-13 | https://stackoverflow.com/questions/70696761/python-nsfw-detection-module-nudenet-not-longer-working | I have been using the python module nudenet for my final degree project. I'm using google colab to run it. It worked correctly and without any problem during these last months until yesterday, when I tried to import it, this error happened: !pip install --upgrade nudenet from nudenet import NudeClassifier ImportError: cannot import name '_registerMatType' from 'cv2.cv2' (/usr/local/lib/python3.7/dist-packages/cv2/cv2.cpython-37m-x86_64-linux-gnu.so) I tried to solve this error by downgrading opencv-python-headless to a previous version !pip uninstall opencv-python-headless==4.5.5.62 !pip install opencv-python-headless==4.5.1.48 But then, when I load the classifier this error appears: classifier = NudeClassifier() Downloading the checkpoint to /root/.NudeNet/classifier_model.onnx MB| |# | 0 Elapsed Time: 0:00:00 Content-length not found, file size cannot be estimated. Succefully Downloaded to: /root/.NudeNet/classifier_model.onnx InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /root/.NudeNet/classifier_model.onnx failed:Protobuf parsing failed. I have also tried to downgrade the version of the module nudenet, and still nothing works. Thank you in advance. This is the link for the module in github | Replace the file classifier_model.onnx with the NudeNet classifier model, and the file classifier_lite.onnx with the NudeNet lite classifier model. | 4 | 9 |
70,753,091 | 2022-1-18 | https://stackoverflow.com/questions/70753091/why-does-object-not-support-setattr-but-derived-classes-do | Today I stumbled upon the following behaviour: class myobject(object): """Should behave the same as object, right?""" obj = myobject() obj.a = 2 # <- works obj = object() obj.a = 2 # AttributeError: 'object' object has no attribute 'a' I want to know what is the logic behind designing the language to behave this way, because it feels utterly paradoxical to me. It breaks my intuition that if I create a subclass, without modification, it should behave the same as the parent class. EDIT: A lot of the answers suggest that this is because we want to be able to write classes that work with __slots__ instead of __dict__ for performance reasons. However, we can do: class myobject_with_slots(myobject): __slots__ = ("x",) obj = myobject_with_slots() obj.x = 2 obj.a = 2 assert "a" in obj.__dict__ # ✔ assert "x" not in obj.__dict__ # ✔ So it seems we can have both __slots__ and __dict__ at the same time, so why doesn't object allow both, but one-to-one subclasses do? | Because derived classes do not necessarily support setattr either. class myobject(object): """Should behave the same as object!""" __slots__ = () obj = myobject() obj.a = 2 # <- works the same as for object Since all types derive from object, most builtin types such as list are also examples. Arbitrary attribute assignment is something that object subclasses may support, but not all do. Thus, the common base class does not support this either. Support for arbitrary attributes is commonly backed by the so-called __dict__ slot. This is a fixed attribute that contains a literal dict 1 to store any attribute-value pairs. In fact, one can manually define the __dict__ slot to get arbitrary attribute support. class myobject(object): """Should behave the same as object, right?""" __slots__ = ("__dict__",) obj = myobject() obj.a = 2 # <- works! print(obj.__dict__) # {'a': 2} The takeaway from this demonstration is that fixed attributes is actually the "base behaviour" of Python; the arbitrary attributes support is built on top when required. Adding arbitrary attributes for object subtypes by default provides a simpler programming experience. However, still supporting fixed attributes for object subtypes allows for better memory usage and performance. Data Model: __slots__ The space saved [by __slots__] over using __dict__ can be significant. Attribute lookup speed can be significantly improved as well. Note that it is possible to define classes with both fixed attributes and arbitrary attributes. The fixed attributes will benefit from the improved memory layout and performance; since they are not stored in the __dict__, its memory overhead2 is lower – but it still costs. 1Python implementations may use different, optimised types for __dict__ as long as they behave like a dict. 2For its hash-based lookup to work efficiently with few collisions, a dict must be larger than the number of items it stores. | 7 | 4 |
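A small sketch of the "both fixed and arbitrary attributes" case mentioned at the end of the answer (class and attribute names are made up):

    class Point:
        # x and y are fixed slots; listing __dict__ re-enables arbitrary attributes
        __slots__ = ("x", "y", "__dict__")

    p = Point()
    p.x = 1              # stored in the slot, not in p.__dict__
    p.label = "origin"   # arbitrary attribute, stored in p.__dict__
    print(p.__dict__)    # {'label': 'origin'}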
70,704,285 | 2022-1-13 | https://stackoverflow.com/questions/70704285/can-no-longer-fold-python-dictionaries-in-vs-code | I used to be able to collapse (fold) python dictionaries just fine in my VS Code. Randomly I am not able to do that anymore. I can still fold classes and functions just fine, but dictionaries cannot fold, the arrow on the left hand side just isn't there. I've checked my settings but I can't figure out what would've changed. I'm not sure the best forum to go to for help, so I'm hoping this is ok. Any ideas? | It's caused by Pylance v2022.1.1. Use v2022.1.0 instead. Issue #2248 | 8 | 6 |
70,763,542 | 2022-1-18 | https://stackoverflow.com/questions/70763542/pandas-dataframe-mypy-error-slice-index-must-be-an-integer-or-none | The following line pd.DataFrame({"col1": [1.1, 2.2]}, index=[3.3, 4.4])[2.5:3.5] raises a mypy linting error of on the [2.5 Slice index must be an integer or None This is valid syntax and correctly returns col1 3.3 1.1 Without # type: ignore, how can I resolve this linting error? versions: pandas 1.3.0 mypy 0.931 The code in question: def get_dataframe( ts_data: GroupTs, ts_group_name: str, start_time: Optional[float] = None, end_time: Optional[float] = None, ) -> pd.DataFrame: df = pd.DataFrame(ts_data.group[ts_group_name].ts_dict)[ start_time:end_time ].interpolate( method="index", limit_area="inside" ) # type: pd.DataFrame return df[~df.index.duplicated()] | This is by design for now, I fear, but if you have to, you can silence mypy by slicing with a callable, like this: import pandas as pd df = pd.DataFrame({"col1": [1.1, 2.2]}, index=[3.3, 4.4])[ lambda x: (2.5 <= x.index) & (x.index < 3.5) ] print(df) # Ouput col1 3.3 1.1 And so mypy reports no issues found on this code. | 8 | 5 |
70,723,757 | 2022-1-15 | https://stackoverflow.com/questions/70723757/arch-x86-64-and-arm64e-is-available-but-python3-is-saying-incompatible-architect | I am trying to run this reading-text-in-the-wild on Mac M1. When I attempt to run this code python3 make_keras_charnet_model.py I get the error Using Theano backend. Traceback (most recent call last): File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/gof/cutils.py", line 305, in <module> from cutils_ext.cutils_ext import * # noqa ImportError: dlopen(/Users/name/.theano/compiledir_macOS-12.0-arm64-i386-64bit-i386-3.8.6-64/cutils_ext/cutils_ext.so, 0x0002): tried: '/Users/name/.theano/compiledir_macOS-12.0-arm64-i386-64bit-i386-3.8.6-64/cutils_ext/cutils_ext.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/cutils_ext.so' (no such file), '/usr/lib/cutils_ext.so' (no such file) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/gof/cutils.py", line 316, in <module> from cutils_ext.cutils_ext import * # noqa ImportError: dlopen(/Users/name/.theano/compiledir_macOS-12.0-arm64-i386-64bit-i386-3.8.6-64/cutils_ext/cutils_ext.so, 0x0002): tried: '/Users/name/.theano/compiledir_macOS-12.0-arm64-i386-64bit-i386-3.8.6-64/cutils_ext/cutils_ext.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/cutils_ext.so' (no such file), '/usr/lib/cutils_ext.so' (no such file) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "make_keras_charnet_model.py", line 10, in <module> from keras.models import Sequential, model_from_json File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/keras/models.py", line 15, in <module> from . 
import backend as K File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/keras/backend/__init__.py", line 47, in <module> from .theano_backend import * File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/keras/backend/theano_backend.py", line 1, in <module> import theano File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/__init__.py", line 76, in <module> from theano.scan_module import scan, map, reduce, foldl, foldr, clone File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/scan_module/__init__.py", line 40, in <module> from theano.scan_module import scan_opt File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/scan_module/scan_opt.py", line 59, in <module> from theano import tensor, scalar File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/tensor/__init__.py", line 7, in <module> from theano.tensor.subtensor import * File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/tensor/subtensor.py", line 27, in <module> import theano.gof.cutils # needed to import cutils_ext File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/gof/cutils.py", line 319, in <module> compile_cutils() File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/gof/cutils.py", line 283, in compile_cutils cmodule.GCC_compiler.compile_str('cutils_ext', code, location=loc, File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/gof/cmodule.py", line 2212, in compile_str return dlimport(lib_filename) File "/Users/name/miniforge3/envs/ocr_env/lib/python3.8/site-packages/theano/gof/cmodule.py", line 299, in dlimport rval = __import__(module_name, {}, {}, [module_name]) ImportError: dlopen(/Users/name/.theano/compiledir_macOS-12.0-arm64-i386-64bit-i386-3.8.6-64/cutils_ext/cutils_ext.so, 0x0002): tried: '/Users/name/.theano/compiledir_macOS-12.0-arm64-i386-64bit-i386-3.8.6-64/cutils_ext/cutils_ext.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/cutils_ext.so' (no such file), '/usr/lib/cutils_ext.so' (no such file) I have duplicated my terminal to have the duplicate open with Rosetta and still I get the error. when I run the command below to check the architecture available on my M1 Mac file /bin/bash I get this output /bin/bash: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e] /bin/bash (for architecture x86_64): Mach-O 64-bit executable x86_64 /bin/bash (for architecture arm64e): Mach-O 64-bit executable arm64e I looks like I have both x86_64 and arm64e available but the error is saying (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/local/lib/cutils_ext.so' (no such file), '/usr/lib/cutils_ext.so' (no such file) what is causing this error and how can I fix it? | I found that i need to specify the architecture so instead of python3 make_keras_charnet_model.py i now use this arch -arm64 python3 make_keras_charnet_model.py | 7 | 6 |
70,759,112 | 2022-1-18 | https://stackoverflow.com/questions/70759112/one-to-one-relationships-with-sqlmodel | After working through the tutorial of SQLModel, I don't remember seeing anything on how to implement 1:1 relationships using Relationship attributes. I found documentation for SQLAlchemy, but it's not immediately clear how this applies to SQLModel. Code example: How to enforce that User and ICloudAccount have a 1:1 relationship? class User(SQLModel, table=True): id: Optional[int] = Field(default=None, primary_key=True) name: str icloud_account_id: Optional[int] = Field(default=None, foreign_key="icloudaccount.id") icloud_account: Optional["ICloudAccount"] = Relationship(back_populates="users") class ICloudAccount(SQLModel, table=True): id: Optional[int] = Field(default=None, primary_key=True) user_name: str users: List[User] = Relationship(back_populates="icloud_account") | You can turn off the list functionality to allow SQLModel to foreign key as a one-to-one. You do this with the SQLalchemy keyword uselist class User(SQLModel, table=True): id: Optional[int] = Field(default=None, primary_key=True) name: str icloud_account_id: Optional[int] = Field(default=None, foreign_key="icloudaccount.id") icloud_account: Optional["ICloudAccount"] = Relationship(back_populates="user") class ICloudAccount(SQLModel, table=True): id: Optional[int] = Field(default=None, primary_key=True) user_name: str user: Optional["User"] = Relationship( sa_relationship_kwargs={'uselist': False}, back_populates="icloud_account" ) | 11 | 23 |
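A hedged usage sketch for the two models above (the SQLite URL and the field values are arbitrary assumptions):

    from sqlmodel import SQLModel, Session, create_engine

    engine = create_engine("sqlite:///example.db")
    SQLModel.metadata.create_all(engine)

    with Session(engine) as session:
        account = ICloudAccount(user_name="me@icloud.com")
        user = User(name="Alice", icloud_account=account)
        session.add(user)
        session.commit()
        session.refresh(user)
        print(user.icloud_account.user_name)  # one-to-one, back-populated relationship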
70,739,858 | 2022-1-17 | https://stackoverflow.com/questions/70739858/how-to-create-a-brand-new-virtual-environment-or-duplicate-an-existing-one-in-po | I have a project and an existing virtual environment created with poetry (poetry install/init). So, as far as I know, the purpouse of a virtual environment is avoiding to modify the system base environment and the possibility of isolation (per project, per development, per system etc...). How can I create another brand new environment for my project in poetry? How can I eventually duplicate and use an existing one? I mean that the current one (activated) should be not involved in this (except for eventually copying it) because I want to test another set of dependencies and code. I am aware of this: https://github.com/python-poetry/poetry/issues/4055 (answer is not clear and ticket is not closed) https://python-poetry.org/docs/managing-environments/ (use command seems not to work in the requested way) | Poetry seems to be bound to one virtualenv per python interpreter. Poetry is also bound to the pyproject.toml file and its path to generate a new environment. So there are 2 tricky solutions: 1 - change your deps in the pyproject.toml and use another python version (installed for example with pyenv) and then: poetry env use X.Y poetry will create a new virtual environment but this is not exactly the same as changing just some project deps. 2 - use another pyproject.toml from another path: mkdir env_test cp pyproject.toml env_test/pyproject.toml cd env_test nano pyproject.toml # edit your dependencies poetry install # creates a brand new virtual environment poetry shell # run your script with the new environment This will generate a new environment with just the asked dependencies changed. Both environments can be used at the same time. After the test, it is eventually possible to delete the new environment with the env command. | 25 | 26 |
70,730,831 | 2022-1-16 | https://stackoverflow.com/questions/70730831/whats-the-mathematical-reason-behind-python-choosing-to-round-integer-division | I know Python // rounds towards negative infinity and in C++ / is truncating, rounding towards 0. And here's what I know so far: |remainder| -12 / 10 = -1, - 2 // C++ -12 // 10 = -2, + 8 # Python 12 / -10 = -1, 2 // C++ 12 // -10 = -2, - 8 # Python 12 / 10 = 1, 2 // Both 12 // 10 = 1, 2 -12 / -10 = 1, - 2 // Both = 2, + 8 C++: 1. m%(-n) == m%n 2. -m%n == -(m%n) 3. (m/n)*n + m%n == m Python: 1. m%(-n) == -8 == -(-m%n) 2. (m//n)*n + m%n == m But why Python // choose to round towards negative infinity? I didn't find any resources explain that, but only find and hear people say vaguely: "for mathematics reasons". For example, in Why is -1/2 evaluated to 0 in C++, but -1 in Python?: People dealing with these things in the abstract tend to feel that rounding toward negative infinity makes more sense (that means it's compatible with the modulo function as defined in mathematics, rather than % having a somewhat funny meaning). But I don't see C++ 's / not being compatible with the modulo function. In C++, (m/n)*n + m%n == m also applies. So what's the (mathematical) reason behind Python choosing rounding towards negative infinity? See also Guido van Rossum's old blog post on the topic. | But why Python // choose to round towards negative infinity? I'm not sure if the reason why this choice was originally made is documented anywhere (although, for all I know, it could be explained in great length in some PEP somewhere), but we can certainly come up with various reasons why it makes sense. One reason is simply that rounding towards negative (or positive!) infinity means that all numbers get rounded the same way, whereas rounding towards zero makes zero special. The mathematical way of saying this is that rounding down towards −∞ is translation invariant, i.e. it satisfies the equation: round_down(x + k) == round_down(x) + k for all real numbers x and all integers k. Rounding towards zero does not, since, for example: round_to_zero(0.5 - 1) != round_to_zero(0.5) - 1 Of course, other arguments exist too, such as the argument you quote based on compatibility with (how we would like) the % operator (to behave) — more on that below. Indeed, I would say the real question here is why Python's int() function is not defined to round floating point arguments towards negative infinity, so that m // n would equal int(m / n). (I suspect "historical reasons".) Then again, it's not that big of a deal, since Python does at least have math.floor() that does satisfy m // n == math.floor(m / n). But I don't see C++ 's / not being compatible with the modulo function. In C++, (m/n)*n + m%n == m also applies. True, but retaining that identity while having / round towards zero requires defining % in an awkward way for negative numbers. In particular, we lose both of the following useful mathematical properties of Python's %: 0 <= m % n < n for all m and all positive n; and (m + k * n) % n == m % n for all integers m, n and k. These properties are useful because one of the main uses of % is "wrapping around" a number m to a limited range of length n. 
For example, let's say we're trying to calculate directions: let's say heading is our current compass heading in degrees (counted clockwise from due north, with 0 <= heading < 360) and that we want to calculate our new heading after turning angle degrees (where angle > 0 if we turn clockwise, or angle < 0 if we turn counterclockwise). Using Python's % operator, we can calculate our new heading simply as: heading = (heading + angle) % 360 and this will simply work in all cases. However, if we try to to use this formula in C++, with its different rounding rules and correspondingly different % operator, we'll find that the wrap-around doesn't always work as expected! For example, if we start facing northwest (heading = 315) and turn 90° clockwise (angle = 90), we'll indeed end up facing northeast (heading = 45). But if then try to turn back 90° counterclockwise (angle = -90), with C++'s % operator we won't end up back at heading = 315 as expected, but instead at heading = -45! To get the correct wrap-around behavior using the C++ % operator, we'll instead need to write the formula as something like: heading = (heading + angle) % 360; if (heading < 0) heading += 360; or as: heading = ((heading + angle) % 360) + 360) % 360; (The simpler formula heading = (heading + angle + 360) % 360 will only work if we can always guarantee that heading + angle >= -360.) This is the price you pay for having a non-translation-invariant rounding rule for division, and consequently a non-translation-invariant % operator. | 88 | 97 |
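A tiny runnable version of the compass example from the answer:

    heading = 315                    # facing northwest
    heading = (heading + 90) % 360   # turn 90 degrees clockwise -> 45 (northeast)
    heading = (heading - 90) % 360   # turn back counterclockwise -> 315 again
    print(heading)                   # 315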
70,761,764 | 2022-1-18 | https://stackoverflow.com/questions/70761764/pytest-html-not-displaying-image | I am trying to generate a self contained html report using pytest-html and selenium. I have been trying to imbedded screenshots into the report but they are not being displayed. My conftest.py looks like this @pytest.fixture() def chrome_driver_init(request, path_to_chrome): driver = webdriver.Chrome(options=opts, executable_path=path_to_chrome) request.cls.driver = driver page_object_init(request, driver) driver.get(URL) driver.maximize_window() yield driver driver.quit() # Hook that takes a screenshot of the web browser for failed tests and adds it to the HTML report @pytest.hookimpl(hookwrapper=True) def pytest_runtest_makereport(item): pytest_html = item.config.pluginmanager.getplugin("html") outcome = yield report = outcome.get_result() extra = getattr(report, "extra", []) if report.when == "call": feature_request = item.funcargs['request'] driver = feature_request.getfixturevalue('chrome_driver_init') nodeid = item.nodeid xfail = hasattr(report, "wasxfail") if (report.skipped and xfail) or (report.failed and not xfail): file_name = f'{nodeid}_{datetime.today().strftime("%Y-%m-%d_%H_%M")}.png'.replace("/", "_").replace("::", "_").replace(".py", "") driver.save_screenshot("./reports/screenshots/"+file_name) extra.append(pytest_html.extras.image("/screenshots/"+file_name)) report.extra = extra I am convinced the problem is with the path to the image, and I have tried so many str combinations, os.path and pathlib but nothing has worked. The screenshot is being saved in the expected location and I can open it like any other image. Its just not displaying on the report. <div class="image"><img src="data:image/png;base64,screenshots\scr_tests_test_example_TestExample_test_fail_example_2022-01-18_16_26.png"/></div> EDIT: For addional clairification. I have tried to use absolute path in the extra.append but it kept giving me a Cant Resolve File error in the HTML file. My absoulte path was(with some personal details redacted) C:\Users\c.Me\OneDrive - Me\Documents\GitHub\project\build\reports\screenshots\filename.png I have tried it with both '/' and '\' Also my File structure project ├───build │ ├───reports │ ├───screenshots │ ├───filename.png | ├───report.html | ├───run.py # I am running the test suite from here ├───scr | ├───settings.py │ ├───tests │ ├───confest.py run.py if __name__ == "__main__": os.system(f"pytest --no-header -v ../scr/tests/ --html=./reports/Test_Report_{today}.html --self-contained-html") For Prophet, may be bless me this day To get the Cannot Resolve Directory error my code is the following file_name = f'{nodeid}_{datetime.today().strftime("%Y-%m-%d_%H_%M")}.png'.replace("/", "_").replace("::", "_").replace(".py", "") img_path = os.path.join(REPORT_PATH, 'screenshots', file_name) driver.save_screenshot(img_path) extra.append(pytest_html.extras.image(img_path)) The variable REPORT_PATH is imported from the settings.py (see directory tree above) and is created by PROJ_PATH = Path(__file__).parent.parent REPORT_PATH = PROJ_PATH.joinpath("build\reports") also fun fact if I do img_path.replace("\\", "/") the error changes to Cannot Resolve File | I have learned so much in this painful journey. Mostly I have learned I am an idiot. The problem was that I wanted to make a self contained HTML. Pytest-html does not work as expected with adding images to a self contained report. Before you can you have to convert the image into its text base64 version first. 
So the answer to all my woes was a single line of code. @pytest.hookimpl(hookwrapper=True) def pytest_runtest_makereport(item): pytest_html = item.config.pluginmanager.getplugin("html") outcome = yield report = outcome.get_result() extra = getattr(report, "extra", []) if report.when == "call": feature_request = item.funcargs['request'] driver = feature_request.getfixturevalue('chrome_driver_init') nodeid = item.nodeid xfail = hasattr(report, "wasxfail") if (report.skipped and xfail) or (report.failed and not xfail): file_name = f'{nodeid}_{datetime.today().strftime("%Y-%m-%d_%H_%M")}.png'.replace("/", "_").replace("::", "_").replace(".py", "") img_path = os.path.join(REPORT_PATH, "screenshots", file_name) driver.save_screenshot(img_path) screenshot = driver.get_screenshot_as_base64() # the hero extra.append(pytest_html.extras.image(screenshot, '')) report.extra = extra Thank you Prophet for guiding me on this pilgrimage. Now I must rest. | 4 | 10 |
70,760,723 | 2022-1-18 | https://stackoverflow.com/questions/70760723/walrus-assignment-expression-in-list-comprehension-generator | I am trying to pass each element of foo_list into a function expensive_call, and get a list of all the items whose output of expensive_call is Truthy. I am trying to do it with list comprehensions, is it possible? Something like: Something like this: result_list = [y := expensive_call(x) for x in foo_list if y] or.... result_list = [y for x in foo_list if y := expensive_call(x)] Note: This is not a solution because it call expensive call twice: result_list = [expensive_call(x) for x in foo_list if expensive_call(x)] And before someone recommends none list comprehension, I know one can do: result_list = [] for x in foo_list: result = expensive_call(x) result and result_list.append(result) | Quoting from above: result_list = [y for x in foo_list if y := expensive_call(x)] It should work almost exactly as how you have it; just remember to parenthesize the assignment with the := operator, as shown below. foo_list = [1, 2, 3] def check(x): return x if x != 2 else None result_list = [y for x in foo_list if (y := check(x))] print(result_list) Result: [1, 3] | 7 | 8 |
70,751,892 | 2022-1-18 | https://stackoverflow.com/questions/70751892/error-in-pip-install-transformers-building-wheel-for-tokenizers-pyproject-toml | I'm building a docker image on cloud server via the following docker file: # base image FROM python:3 # add python file to working directory ADD ./ / # install and cache dependencies RUN pip install --upgrade pip RUN pip install RUST RUN pip install transformers RUN pip install torch RUN pip install slack_sdk RUN pip install slack_bolt RUN pip install pandas RUN pip install gensim RUN pip install nltk RUN pip install psycopg2 RUN pip install openpyxl ...... When installing the transformers package, the following error occurs: STEP 5: RUN pip install transformers Collecting transformers Downloading transformers-4.15.0-py3-none-any.whl (3.4 MB) Collecting filelock ...... Downloading click-8.0.3-py3-none-any.whl (97 kB) Building wheels for collected packages: tokenizers Building wheel for tokenizers (pyproject.toml): started ERROR: Command errored out with exit status 1: command: /usr/local/bin/python /usr/local/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp_3y7hw5q cwd: /tmp/pip-install-bsy5f4da/tokenizers_e09b9f903acd40f0af4a997fe1d8fdb4 Complete output (50 lines): running bdist_wheel ...... copying py_src/tokenizers/trainers/__init__.pyi -> build/lib.linux-x86_64-3.10/tokenizers/trainers copying py_src/tokenizers/tools/visualizer-styles.css -> build/lib.linux-x86_64-3.10/tokenizers/tools running build_ext error: can't find Rust compiler If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler. To update pip, run: pip install --upgrade pip and then retry package installation. If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain. ---------------------------------------- ERROR: Failed building wheel for tokenizers ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects Building wheel for tokenizers (pyproject.toml): finished with status 'error' Failed to build tokenizers subprocess exited with status 1 subprocess exited with status 1 error building at STEP "RUN pip install transformers": exit status 1 time="2022-01-18T07:24:56Z" level=error msg="exit status 1" Dockerfile build failed - exit status 1exit status 1 I'm not very sure about what's happening here. Can anyone help me? Thanks in advance. | The logs say error: can't find Rust compiler You need to install a rust compiler. See https://www.rust-lang.org/tools/install. You can modify the installation instructions for a docker image like this (from https://stackoverflow.com/a/58169817/5666087): RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y ENV PATH="/root/.cargo/bin:${PATH}" | 9 | 7 |
70,757,783 | 2022-1-18 | https://stackoverflow.com/questions/70757783/how-to-make-a-specific-instance-of-a-class-immutable | Having an instance c of a class C, I would like to make c immutable, but other instances of C dont have to. Is there an easy way to achieve this in python? | You can't make Python classes fully immutable. You can however imitate it: class C: _immutable = False def __setattr__(self, name, value): if self._immutable: raise TypeError(f"Can't set attribute, {self!r} is immutable.") super().__setattr__(name, value) Example: >>> c = C() >>> c.hello = 123 >>> c.hello 123 >>> c._immutable = True >>> c.hello = 456 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 5, in __setattr__ TypeError: Can't set attribute, <__main__.C object at 0x000002087C679D20> is immutable. If you wish to set it at initialization, you can add an __init__ like so: class C: _immutable = False def __init__(self, immutable=False): self._immutable = immutable def __setattr__(self, name, value): if self._immutable: raise TypeError(f"Can't set attribute, {self!r} is immutable.") super().__setattr__(name, value) Keep in mind you can still bypass it by accessing and modifying the __dict__ of the instance directly: >>> c = C(immutable=True) >>> c.__dict__["hello"] = 123 >>> c.hello 123 You may attempt to block it like so: class C: _immutable = False def __init__(self, immutable=False): self._immutable = immutable def __getattribute__(self, name): if name == "__dict__": raise TypeError("Can't access class dict.") return super().__getattribute__(name) def __setattr__(self, name, value): if self._immutable: raise TypeError(f"Can't set attribute, {self!r} is immutable.") super().__setattr__(name, value) But even then it's possible to bypass: >>> c = C(immutable=True) >>> object.__getattribute__(c, "__dict__")["hello"] = 123 >>> c.hello 123 | 5 | 8 |
70,733,261 | 2022-1-16 | https://stackoverflow.com/questions/70733261/joining-dataframes-using-rust-polars-in-python | I am experimenting with polars and would like to understand why using polars is slower than using pandas on a particular example: import pandas as pd import polars as pl n=10_000_000 df1 = pd.DataFrame(range(n), columns=['a']) df2 = pd.DataFrame(range(n), columns=['b']) df1p = pl.from_pandas(df1.reset_index()) df2p = pl.from_pandas(df2.reset_index()) # takes ~60 ms df1.join(df2) # takes ~950 ms df1p.join(df2p, on='index') | A pandas join uses the indexes, which are cached. A comparison where they do the same: # pandas # CPU times: user 1.64 s, sys: 867 ms, total: 2.5 s # Wall time: 2.52 s df1.merge(df2, left_on="a", right_on="b") # polars # CPU times: user 5.59 s, sys: 199 ms, total: 5.79 s # Wall time: 780 ms df1p.join(df2p, left_on="a", right_on="b") | 5 | 12 |
70,751,249 | 2022-1-18 | https://stackoverflow.com/questions/70751249/which-are-safe-methods-and-practices-for-string-formatting-with-user-input-in-py | My Understanding From various sources, I have come to the understanding that there are four main techniques of string formatting/interpolation in Python 3 (3.6+ for f-strings): Formatting with %, which is similar to C's printf The str.format() method Formatted string literals/f-strings Template strings from the standard library string module My knowledge of usage mainly comes from Python String Formatting Best Practices (source A): str.format() was created as a better alternative to the %-style, so the latter is now obsolete However, str.format() is vulnerable to attacks if user-given format strings are not properly handled f-strings allow str.format()-like behavior only for string literals but are shorter to write and are actually somewhat-optimized syntactic sugar for concatenation Template strings are safer than str.format() (demonstrated in the first source) and the other two methods (implied in the first source) when dealing with user input I understand that the aforementioned vulnerability in str.format() comes from the method being usable on any normal strings where the delimiting braces are part of the string data itself. Malicious user input containing brace-delimited replacement fields can be supplied to the method to access environment attributes. I believe this is unlike the other ways of formatting where the programmer is the only one that can supply variables to the pre-formatted string. For example, f-strings have similar syntax to str.format() but, because f-strings are literals and the inserted values are evaluated separately through concatenation-like behavior, they are not vulnerable to the same attack (source B). Both %-formatting and Template strings also seem to only be supplied variables for substitution by the programmer; the main difference pointed out is Template's more limited functionality. My Confusion I have seen a lot of emphasis on the vulnerability of str.format() which leaves me with questions of what I should be wary of when using the other techniques. Source A describes Template strings as the safest of the above methods "due to their reduced complexity": The more complex formatting mini-languages of the other string formatting techniques might introduce security vulnerabilities to your programs. Yes, it seems like f-strings are not vulnerable in the same way str.format() is, but are there known concerns about f-string security as is implied by source A? Is the concern more like risk mitigation for unknown exploits and unintended interactions? I am not familiar with C and I don't plan on using the clunkier %/printf-style formatting, but I have heard that C's printf had its own potential vulnerabilities. In addition, both sources A and B seem to imply a lack of security with this method. The top answer in Source B says, String formatting may be dangerous when a format string depends on untrusted data. So, when using str.format() or %-formatting, it's important to use static format strings, or to sanitize untrusted parts before applying the formatter function. Do %-style strings have known security concerns? Lastly, which methods should be used and how can user input-based attacks be prevented (e.g. filtering input with regex)? More specifically, are Template strings really the safer option? and Can f-strings be used just as easily and safely while granting more functionality? 
| It doesn't matter which format you choose, any format and library can have its own downsides and vulnerabilities. The bigger question you need to ask yourself is what risk factor and scenario you are facing, and what you are going to do about it. First ask yourself: will there be a scenario where a user or an external entity of some kind (for example - an external system) sends you a format string? If the answer is no, there is no risk. If the answer is yes, you need to see whether this is needed or not. If not - remove it to eliminate the risk. If you need it - you can perform whitelist-based input validation and exclude all format-specific special characters from the list of permitted characters, in order to eliminate the risk. For example, no format string can pass the ^[a-zA-Z0-9\s]*$ generic regular expression. So the bottom line is: it doesn't matter which format string type you use, what's really important is what you do with it and how you can reduce and eliminate the risk of it being tampered with. | 7 | 1 |
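A minimal Python sketch of the whitelist idea, using the regular expression from the answer (the function name and error message are assumptions):

    import re

    SAFE_INPUT = re.compile(r"^[a-zA-Z0-9\s]*$")

    def render_greeting(user_supplied: str) -> str:
        if not SAFE_INPUT.fullmatch(user_supplied):
            raise ValueError("input contains characters outside the whitelist")
        # the format string itself stays static; only validated data is substituted
        return "Hello, {}!".format(user_supplied)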
70,755,030 | 2022-1-18 | https://stackoverflow.com/questions/70755030/reading-environment-variables-from-more-than-one-env-file-in-python | I have environment variables that I need to get from two different files in order to keep user+pw outside of the git repo. I download the sensitive user+pass from another location and add it to .gitignore. I am using from os import getenv from dotenv import load_dotenv ... load_dotenv() DB_HOST=getenv('DB_HOST') # from env file 1 DB_NAME=getenv('DB_NAME') # from env file 1 DB_USER=getenv('DB_USER') # from env file 2 DB_PASS=getenv('DB_PASS') # from env file 2 and I have the two ".env" files in the folder of the python script. env_file.env contains: DB_HOST=xyz DB_NAME=abc env_file_in_gitignore.env which needs to stay out of the git repo but is available by download using an sh script: DB_USER=me DB_PASS=eao How to avoid the error TypeError: connect() argument 2 must be str, not None, which is thrown since one of the two files is not used for the .env import? How can I get environment variables from two different ".env" files, both stored in the working directory? | You can pass a file path as an argument to the load_dotenv function from dotenv import load_dotenv import os load_dotenv(<file 1 path>) load_dotenv(<file 2 path>) | 9 | 14 |
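Applied to the file names from the question, that would look roughly like this (paths relative to the script are an assumption):

    from os import getenv
    from dotenv import load_dotenv

    load_dotenv("env_file.env")               # provides DB_HOST, DB_NAME
    load_dotenv("env_file_in_gitignore.env")  # provides DB_USER, DB_PASS

    DB_HOST = getenv("DB_HOST")
    DB_USER = getenv("DB_USER")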
70,731,492 | 2022-1-16 | https://stackoverflow.com/questions/70731492/the-transaction-declared-chain-id-5777-but-the-connected-node-is-on-1337 | I am trying to deploy my SimpleStorage.sol contract to a ganache local chain by making a transaction using python. It seems to have trouble connecting to the chain. from solcx import compile_standard from web3 import Web3 import json import os from dotenv import load_dotenv load_dotenv() with open("./SimpleStorage.sol", "r") as file: simple_storage_file = file.read() compiled_sol = compile_standard( { "language": "Solidity", "sources": {"SimpleStorage.sol": {"content": simple_storage_file}}, "settings": { "outputSelection": { "*": {"*": ["abi", "metadata", "evm.bytecode", "evm.sourceMap"]} } }, }, solc_version="0.6.0", ) with open("compiled_code.json", "w") as file: json.dump(compiled_sol, file) # get bytecode bytecode = compiled_sol["contracts"]["SimpleStorage.sol"]["SimpleStorage"]["evm"][ "bytecode" ]["object"] # get ABI abi = compiled_sol["contracts"]["SimpleStorage.sol"]["SimpleStorage"]["abi"] # to connect to ganache blockchain w3 = Web3(Web3.HTTPProvider("HTTP://127.0.0.1:7545")) chain_id = 5777 my_address = "0xca1EA31e644F13E3E36631382686fD471c62267A" private_key = os.getenv("PRIVATE_KEY") # create the contract in python SimpleStorage = w3.eth.contract(abi=abi, bytecode=bytecode) # get the latest transaction nonce = w3.eth.getTransactionCount(my_address) # 1. Build a transaction # 2. Sign a transaction # 3. Send a transaction transaction = SimpleStorage.constructor().buildTransaction( {"chainId": chain_id, "from": my_address, "nonce": nonce} ) print(transaction) It seems to be connected to the ganache chain because it prints the nonce, but when I build and try to print the transaction here is the entire traceback call I am receiving Traceback (most recent call last): File "C:\Users\evens\demos\web3_py_simple_storage\deploy.py", line 52, in <module> transaction = SimpleStorage.constructor().buildTransaction( File "C:\Python310\lib\site-packages\eth_utils\decorators.py", line 18, in _wrapper return self.method(obj, *args, **kwargs) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\contract.py", line 684, in buildTransaction return fill_transaction_defaults(self.web3, built_transaction) File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ return self.func(*args, **kwargs) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\_utils\transactions.py", line 114, in fill_transaction_defaults default_val = default_getter(web3, transaction) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\_utils\transactions.py", line 60, in <lambda> 'gas': lambda web3, tx: web3.eth.estimate_gas(tx), File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\eth.py", line 820, in estimate_gas return self._estimate_gas(transaction, block_identifier) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\module.py", line 57, in caller result = w3.manager.request_blocking(method_str, File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\manager.py", line 197, in request_blocking response = self._make_request(method, params) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\manager.py", line 150, in _make_request return request_func(method, params) File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ return self.func(*args, **kwargs) File 
"C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\middleware\formatting.py", line 76, in apply_formatters response = make_request(method, params) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\middleware\gas_price_strategy.py", line 90, in middleware return make_request(method, params) File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ return self.func(*args, **kwargs) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\middleware\formatting.py", line 74, in apply_formatters response = make_request(method, formatted_params) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\middleware\attrdict.py", line 33, in middleware response = make_request(method, params) File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ return self.func(*args, **kwargs) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\middleware\formatting.py", line 74, in apply_formatters response = make_request(method, formatted_params) File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ return self.func(*args, **kwargs) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\middleware\formatting.py", line 73, in apply_formatters formatted_params = formatter(params) File "cytoolz/functoolz.pyx", line 503, in cytoolz.functoolz.Compose.__call__ ret = PyObject_Call(self.first, args, kwargs) File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ return self.func(*args, **kwargs) File "C:\Python310\lib\site-packages\eth_utils\decorators.py", line 91, in wrapper return ReturnType(result) # type: ignore File "C:\Python310\lib\site-packages\eth_utils\applicators.py", line 22, in apply_formatter_at_index yield formatter(item) File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ return self.func(*args, **kwargs) File "C:\Python310\lib\site-packages\eth_utils\applicators.py", line 72, in apply_formatter_if return formatter(value) File "cytoolz/functoolz.pyx", line 250, in cytoolz.functoolz.curry.__call__ return self.func(*args, **kwargs) File "C:\Users\evens\AppData\Roaming\Python\Python310\site- packages\web3\middleware\validation.py", line 57, in validate_chain_id raise ValidationError( web3.exceptions.ValidationError: The transaction declared chain ID 5777, but the connected node is on 1337 | Had this issue myself, apparently it's some sort of Ganache CLI error but the simplest fix I could find was to change the network id in Ganache through settings>server to 1337. It restarts the session so you'd then need to change the address and private key variable. If it's the same tutorial I'm doing, you're likely to come unstuck after this... the code for transaction should be: transaction = SimpleStorage.constructor().buildTransaction( { "gasPrice": w3.eth.gas_price, "chainId": chain_id, "from": my_address, "nonce": nonce, }) print(transaction) Otherwise you get a value error if you don't set the gasPrice | 13 | 37 |
70,745,252 | 2022-1-17 | https://stackoverflow.com/questions/70745252/how-to-extract-column-names-from-sql-query-using-python | I would like to extract the column names of a resulting table directly from the SQL statement: query = """ select sales.order_id as id, p.product_name, sum(p.price) as sales_volume from sales right join products as p on sales.product_id=p.product_id group by id, p.product_name; """ column_names = parse_sql(query) # column_names: # ['id', 'product_name', 'sales_volume'] Any idea what to do in parse_sql()? The resulting function should be able to recognize aliases and remove the table aliases/identifiers (e.g. "sales." or "p."). Thanks in advance! | I've done something like this using the library sqlparse. Basically, this library takes your SQL query and tokenizes it. Once that is done, you can search for the select query token and parse the underlying tokens. In code, that reads like import sqlparse def find_selected_columns(query) -> list[str]: tokens = sqlparse.parse(query)[0].tokens found_select = False for token in tokens: if found_select: if isinstance(token, sqlparse.sql.IdentifierList): return [ col.value.split(" ")[-1].strip("`").rpartition('.')[-1] for col in token.tokens if isinstance(col, sqlparse.sql.Identifier) ] else: found_select = token.match(sqlparse.tokens.Keyword.DML, ["select", "SELECT"]) raise Exception("Could not find a select statement. Weird query :)") This code should also work for queries with Common table expressions, i.e. it only returns the final select columns. Depending on the SQL dialect and the quote chars you are using, you might have to adapt the line col.value.split(" ")[-1].strip("`").rpartition('.')[-1] | 5 | 5 |
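A usage sketch for the accepted answer's find_selected_columns, run against the query from the question (assuming sqlparse tokenizes the statement as the answer describes; the commented output is the list the question asks for):

import sqlparse   # with find_selected_columns as defined in the answer above

query = """
select sales.order_id as id, p.product_name, sum(p.price) as sales_volume
from sales
right join products as p on sales.product_id = p.product_id
group by id, p.product_name;
"""

print(find_selected_columns(query))
# ['id', 'product_name', 'sales_volume']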
70,729,329 | 2022-1-16 | https://stackoverflow.com/questions/70729329/are-generators-with-context-managers-an-anti-pattern | I'm wondering about code like this: def all_lines(filename): with open(filename) as infile: yield from infile The point of a context manager is to have explicit control over the lifetime of some form of state, e.g. a file handle. A generator, on the other hand, keeps its state until it is exhausted or deleted. I do know that both cases work in practice. But I'm worried about whether it is a good idea. Consider for example this: def all_first_lines(filenames): return [next(all_lines(filename), None) for filename in filenames] I never exhaust the generators. Instead, their state is destroyed when the generator object is deleted. This works fine in reference-counted implementations like CPython, but what about garbage-collected implementations? I'm practically relying on the reference counter for managing state, something that context managers were explicitly designed to avoid! And even in CPython it shouldn't be too hard to construct cases were a generator is part of a reference cycle and needs the garbage collector to be destroyed. To summarize: Would you consider it prudent to avoid context managers in generators, for example by refactoring the above code into something like this? def all_lines(filename): with open(filename) as infile: return infile.readlines() def first_line(filename): with open(filename) as infile: return next(infile, None) def all_first_lines(filenames): return [first_line(filename) for filename in filenames] | While it does indeed extend the lifetime of the object until the generator exits or is destroyed, it also can make the generators clearer to work with. Consider creating the generators under an outer with and passing the file as an argument instead of them opening it. Now the file is invalid for use after the context manager is exited, even though the generators can still be seen as usable. If limiting the time for how long the handles are held is important, you can explicitly close the generators using the close method after you are done with them. This is a similar problem to what trio tries to solve with its nurseries for asynchronous tasks, where the nursery context manager waits for every task spawned from that nursery to exit before proceeding, the tutorial example illustrates this. This blog post by the author can provide some reasoning for the way it's done in trio which can be an interesting read that's somewhat related to the problem. | 5 | 1 |
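A minimal sketch of the two alternatives the answer describes (file and function names are illustrative, not from the original post): either let the caller own the file's lifetime, or keep the context manager inside the generator and close it explicitly instead of relying on reference counting.

# Alternative 1: the caller owns the file, the generator only consumes it.
def lines_from(open_file):
    yield from open_file

with open("data.txt") as infile:
    first = next(lines_from(infile), None)   # file closes when the with block exits

# Alternative 2: close the generator deterministically when stopping early.
def all_lines(filename):
    with open(filename) as infile:
        yield from infile

gen = all_lines("data.txt")
first = next(gen, None)
gen.close()   # raises GeneratorExit inside the generator, so its with block exits now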
70,739,677 | 2022-1-17 | https://stackoverflow.com/questions/70739677/sphinx-html-static-path-entry-does-not-exist | I am working with Sphinx and I want to set the html_static_path variable in the config.py file. The default was: html_static_path = ['_static'] My project setup is: docs/ build/ doctrees html/ _static/ source/ conf.py The sphinx documentation says that I need to set the path relative to this directory, i.e. relative path from conf.py. Thus, I tried: html_static_path = ['..\build\html\source\_static'] AND I tried to set the absolute path. But I still get the warning: WARNING: html_static_path entry 'build\html\source\_static' does not exist Can you help me? Thank you! | The build directory is created after you build the docs, which is why you get that error. When you make your docs, Sphinx will copy the static directory from your source location as defined by html_static_path to the build location. Create a new directory source/_static and place any static assets inside of it. Change the value in conf.py to this: html_static_path = ["_static"] | 11 | 13 |
70,738,783 | 2022-1-17 | https://stackoverflow.com/questions/70738783/json-serialize-python-enum-object | While serializing Enum object to JSON using json.dumps(...), python will throw following error: >>> class E(enum.Enum): ... A=0 ... B=1 >>> import json >>> json.dumps(E.A) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\guptaaman\Miniconda3\lib\json\__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "C:\Users\guptaaman\Miniconda3\lib\json\encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "C:\Users\guptaaman\Miniconda3\lib\json\encoder.py", line 257, in iterencode return _iterencode(o, 0) File "C:\Users\guptaaman\Miniconda3\lib\json\encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type E is not JSON serializable How to make Enum object JSON serializable? | We can make the class which is inheriting Enum to also inherit str class as following: >>> class E(str, enum.Enum): ... A="0" ... B="1" ... >>> json.dumps(E.A) '"0"' Reference: https://hultner.se/quickbits/2018-03-12-python-json-serializable-enum.html | 6 | 10 |
70,709,406 | 2022-1-14 | https://stackoverflow.com/questions/70709406/import-matplotlib-could-not-be-resolved-from-source-pylancereportmissingmodul | Whenever I try to import matplotlib or matplotlib.pyplot in VS Code I get the error in the title: Import "matplotlib" could not be resolved from source Pylance(reportMissingModuleSource) or Import "matplotlib.pyplot" could not be resolved from source Pylance(reportMissingModuleSource) The hyperlink of the reportMissingModuleSource sends me to https://github.com/microsoft/pylance-release/blob/main/DIAGNOSTIC_SEVERITY_RULES.md#diagnostic-severity-rules, where it says: "Diagnostics for imports that have no corresponding source file. This happens when a type stub is found, but the module source file was not found, indicating that the code may fail at runtime when using this execution environment. Type checking will be done using the type stub." However, from the explanation I don't understand exactly what's wrong and what I should do to fix this, can someone help me with this? | I can reproduce your issue when I select a Python interpreter whose environment does not have matplotlib installed: So, the solution is to open an integrated Terminal and run pip install matplotlib. After it's installed successfully, reload the window and the warning should go away. | 17 | 29 |
70,729,502 | 2022-1-16 | https://stackoverflow.com/questions/70729502/f2-rename-variable-doesnt-work-in-vscode-jupyter-notebook-python | I can use the normal F2 rename variable functionality in regular python files in vscode. But not when editing python in a jupyter notebook. When I press F2 on a variable in a jupyter notebook in vscode I get the familiar change variable window but when I press enter the variable is not changed and I get this error message: No result. No result. Is there a way to get the F2 change variable functionality to work in jupyter notebooks? Here's my system info: jupyter module version (adventofcode) C:\git\leetcode>pip show jupyter Name: jupyter Version: 1.0.0 Summary: Jupyter metapackage. Install all the Jupyter components in one go. Home-page: http://jupyter.org Author: Jupyter Development Team Author-email: [email protected] License: BSD Location: c:\users\johan\anaconda3\envs\adventofcode\lib\site-packages Requires: ipykernel, qtconsole, nbconvert, jupyter-console, notebook, ipywidgets Required-by: Python version: (adventofcode) C:\git\leetcode>python --version Python 3.10.0 vscode version: 1.63.2 (user setup) vscode Jupyter extension version (from the changelog in the extensions window): 2021.11.100 (November Release on 8 December 2021) | Notice that you put up a bug report in GitHub and see this issue: Renaming variables didn't work, the programmer replied: Some language features are currently not supported in notebooks, but we are making plans now to hopefully bring more of those online soon. So please wait for this feature. | 10 | 6 |
70,723,487 | 2022-1-15 | https://stackoverflow.com/questions/70723487/sqlalchemy-get-child-count-without-reading-all-the-children | Here are my Parent and Child classes: class Parent(Base): id = Column(...) ... children = relationship("Child", backref="parent", lazy="select") class Child(Base): id = Column(...) parent_id = Column(...) active = Column(Boolean(), ...) The reason behind the loading technique of children of Parent being lazy is that there can be a very large number of children associated with a parent. Now, I would like to get the number of active children of a parent as a hybrid-property. Here is how I tried to do it: class Parent(Base): ... @hybrid_property def active_count(self): return len([child for child in self.children if child.active]) @active_count.expression def active_count(cls): return ( select(func.count(Child.id)) .where(Child.parent_id == cls.id) .where(Child.active == True) ) But the problem with this method is that when I call parent.active_count, it fires a query to get all of the children. How can I get only the count (of active children) without reading all the children? | I think you unnecessarily iterate over the children within the active_count hybrid_property definition. This should work for you: class Parent(Base): ... children = relationship("Child", backref="parent", lazy="dynamic") @hybrid_property def active_count(self): return self.children.filter_by(active=True).with_entities(func.count('*')).scalar() # or # return self.children.filter_by(active=True).count() # but it would have worse performance | 7 | 5 |
70,721,360 | 2022-1-15 | https://stackoverflow.com/questions/70721360/python-selenium-web-scrap-how-to-find-hidden-src-value-from-a-links | Scrapping links should be a simple feat, usually just grabbing the src value of the a tag. I recently came across this website (https://sunteccity.com.sg/promotions) where the href value of a tags of each item cannot be found, but the redirection still works. I'm trying to figure out a way to grab the items and their corresponding links. My typical python selenium code looks something as such all_items = bot.find_elements_by_class_name('thumb-img') for promo in all_items: a = promo.find_elements_by_tag_name("a") print("a[0]: ", a[0].get_attribute("href")) However, I can't seem to retrieve any href, onclick attributes, and I'm wondering if this is even possible. I noticed that I couldn't do a right-click, open link in new tab as well. Are there any ways around getting the links of all these items? Edit: Are there any ways to retrieve all the links of the items on the pages? i.e. https://sunteccity.com.sg/promotions/724 https://sunteccity.com.sg/promotions/731 https://sunteccity.com.sg/promotions/751 https://sunteccity.com.sg/promotions/752 https://sunteccity.com.sg/promotions/754 https://sunteccity.com.sg/promotions/280 ... Edit: Adding an image of one such anchor tag for better clarity: | By reverse-engineering the Javascript that takes you to the promotions pages (seen in https://sunteccity.com.sg/_nuxt/d4b648f.js) that gives you a way to get all the links, which are based on the HappeningID. You can verify by running this in the JS console, which gives you the first promotion: window.__NUXT__.state.Promotion.promotions[0].HappeningID Based on that, you can create a Python loop to get all the promotions: items = driver.execute_script("return window.__NUXT__.state.Promotion;") for item in items["promotions"]: base = "https://sunteccity.com.sg/promotions/" happening_id = str(item["HappeningID"]) print(base + happening_id) That generated the following output: https://sunteccity.com.sg/promotions/724 https://sunteccity.com.sg/promotions/731 https://sunteccity.com.sg/promotions/751 https://sunteccity.com.sg/promotions/752 https://sunteccity.com.sg/promotions/754 https://sunteccity.com.sg/promotions/280 https://sunteccity.com.sg/promotions/764 https://sunteccity.com.sg/promotions/766 https://sunteccity.com.sg/promotions/762 https://sunteccity.com.sg/promotions/767 https://sunteccity.com.sg/promotions/732 https://sunteccity.com.sg/promotions/733 https://sunteccity.com.sg/promotions/735 https://sunteccity.com.sg/promotions/736 https://sunteccity.com.sg/promotions/737 https://sunteccity.com.sg/promotions/738 https://sunteccity.com.sg/promotions/739 https://sunteccity.com.sg/promotions/740 https://sunteccity.com.sg/promotions/741 https://sunteccity.com.sg/promotions/742 https://sunteccity.com.sg/promotions/743 https://sunteccity.com.sg/promotions/744 https://sunteccity.com.sg/promotions/745 https://sunteccity.com.sg/promotions/746 https://sunteccity.com.sg/promotions/747 https://sunteccity.com.sg/promotions/748 https://sunteccity.com.sg/promotions/749 https://sunteccity.com.sg/promotions/750 https://sunteccity.com.sg/promotions/753 https://sunteccity.com.sg/promotions/755 https://sunteccity.com.sg/promotions/756 https://sunteccity.com.sg/promotions/757 https://sunteccity.com.sg/promotions/758 https://sunteccity.com.sg/promotions/759 https://sunteccity.com.sg/promotions/760 https://sunteccity.com.sg/promotions/761 https://sunteccity.com.sg/promotions/763 
https://sunteccity.com.sg/promotions/765 https://sunteccity.com.sg/promotions/730 https://sunteccity.com.sg/promotions/734 https://sunteccity.com.sg/promotions/623 | 6 | 3 |
70,722,545 | 2022-1-15 | https://stackoverflow.com/questions/70722545/draw-circle-in-console-using-python | I want to draw a circle in the console using characters instead of pixels, for this I need to know how many pixels are in each row. The diameter is given as input, you need to output a list with the width in pixels of each line of the picture For example: input: 7 output: [3, 5, 7, 7, 7, 5, 3] input: 12 output: [4, 8, 10, 10, 12, 12, 12, 12, 10, 10, 8, 4] How can this be implemented? | This was a good reminder for me to be careful when mixing zero-based and one-based computations. In this case, I had to account for the for loops being zero-based, but the quotient of the diameter divided by 2 being one-based. Otherwise, the plots would have been over or under by 1. By the way, while I matched your answer for 7, I didn't come up with the same exact plot for 12: NOTE - Tested using Python 3.9.6 pixels_in_line = 0 pixels_per_line = [] diameter = int(input('Enter the diameter of the circle: ')) # You must account for the loops being zero-based, but the quotient of the diameter / 2 being # one-based. If you use the exact radius, you will be short one column and one row. offset_radius = (diameter / 2) - 0.5 for i in range(diameter): for j in range(diameter): x = i - offset_radius y = j - offset_radius if x * x + y * y <= offset_radius * offset_radius + 1: print('*', end=' ') pixels_in_line += 1 else: print(' ', end=' ') pixels_per_line.append(pixels_in_line) pixels_in_line = 0 print() print('The pixels per line are {0}.'.format(pixels_per_line)) Output for 7: Enter the diameter of the circle: 7 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * The pixels per line are [3, 5, 7, 7, 7, 5, 3]. Output for 12: Enter the diameter of the circle: 12 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * The pixels per line are [2, 6, 8, 10, 10, 12, 12, 10, 10, 8, 6, 2]. | 7 | 3 |
70,723,165 | 2022-1-15 | https://stackoverflow.com/questions/70723165/pandas-groupby-get-value-from-previous-element-of-a-group-based-on-value-of-ano | I have a data frame with 4 columns. I have sorted this data frame by 'group' and 'timestamp' beforehand. df = pd.DataFrame( { "type": ['type0', 'type1', 'type2', 'type3', 'type1', 'type3', 'type0', 'type1', 'type3', 'type3'], "group": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2], "timestamp": ["20220105 07:52:46", "20220105 07:53:11", "20220105 07:53:55", "20220105 07:59:12", "20220105 08:24:13", "20220105 08:48:19", "20220105 11:01:30", "20220105 11:15:16", "20220105 12:13:36", "20220105 12:19:44"], "price": [0, 1.5, 2.5, 3, 3.2, 3.1, 0.5, 3, 3.25, pd.NA] }) >> df type group timestamp price 0 type0 1 20220105 07:52:46 0 1 type1 1 20220105 07:53:11 1.5 2 type2 1 20220105 07:53:55 2.5 3 type3 1 20220105 07:59:12 3 4 type1 1 20220105 08:24:13 3.2 5 type3 1 20220105 08:48:19 3.1 6 type0 2 20220105 11:01:30 0.5 7 type1 2 20220105 11:15:16 3 8 type3 2 20220105 12:13:36 3.25 9 type3 2 20220105 12:19:44 <NA> After grouping by the column 'group', I want to create a 'new_price' column as per the following logic: For each 'type3' row in a group (i.e., df['type'] = 'type3'), get the price from the PREVIOUS 'type1' or 'type2' row in the group. For type0/type1/type2 rows, keep the same price as in the input data frame. My Solution: My solution below works when we don't have 2 consecutive 'type3' rows. But when there are 2 consecutive 'type3' rows, I get the wrong price for the second 'type3' row. I want the price from the previous 'type1' or 'type2' row in the group, but I get the price from the first 'type3' row using my solution. df = df.sort_values(by=["group", "timestamp"]) required_types_mask = df['type'].isin(['type1', 'type2', 'type3']) temp_series = df.loc[:, 'price'].where(required_types_mask).groupby(df['group']).shift(1) type_3_mask = df['type'].eq('type3') df.loc[:, 'new_price'] = df.loc[:, 'price'].mask(type_3_mask, temp_series) My result: type group timestamp price new_price 0 type0 1 20220105 07:52:46 0 0 1 type1 1 20220105 07:53:11 1.5 1.5 2 type2 1 20220105 07:53:55 2.5 2.5 3 type3 1 20220105 07:59:12 3 2.5 4 type1 1 20220105 08:24:13 3.2 3.2 5 type3 1 20220105 08:48:19 3.1 3.2 6 type0 2 20220105 11:01:30 0.5 0.5 7 type1 2 20220105 11:15:16 3 3 8 type3 2 20220105 12:13:36 3.25 3 9 type3 2 20220105 12:19:44 <NA> 3.25 <- Incorrect price Expected result: type group timestamp price new_price 0 type0 1 20220105 07:52:46 0 0 1 type1 1 20220105 07:53:11 1.5 1.5 2 type2 1 20220105 07:53:55 2.5 2.5 3 type3 1 20220105 07:59:12 3 2.5 4 type1 1 20220105 08:24:13 3.2 3.2 5 type3 1 20220105 08:48:19 3.1 3.2 6 type0 2 20220105 11:01:30 0.5 0.5 7 type1 2 20220105 11:15:16 3 3 8 type3 2 20220105 12:13:36 3.25 3 9 type3 2 20220105 12:19:44 <NA> 3 <- Correct price | We can mask the price with type3 then ffill s = df.price.mask(df.type.isin(['type0','type3'])) df['new'] = np.where(df.type.eq('type3'),s.groupby(df['group']).ffill(),df['price']) df type group timestamp price new 0 type0 1 20220105 07:52:46 0 0 1 type1 1 20220105 07:53:11 1.5 1.5 2 type2 1 20220105 07:53:55 2.5 2.5 3 type3 1 20220105 07:59:12 3 2.5 4 type1 1 20220105 08:24:13 3.2 3.2 5 type3 1 20220105 08:48:19 3.1 3.2 6 type0 2 20220105 11:01:30 0.5 0.5 7 type1 2 20220105 11:15:16 3 3 8 type3 2 20220105 12:13:36 3.25 3 9 type3 2 20220105 12:19:44 <NA> 3 | 4 | 4 |
70,719,806 | 2022-1-15 | https://stackoverflow.com/questions/70719806/outlier-removal-techniques-from-an-array | I know there's a ton resources online for outlier removal, but I haven't yet managed to obtain what I exactly want, so posting here, I have an array (or DF) of 4 columns. Now I want to remove the rows from the DF based on a column's outlier values. The following is what I have tried, but they are not perfect. def outliers2(data2, m = 4.5): c=[] data = data2[:,1] # Choosing the column d = np.abs(data - np.median(data)) # deviation comoutation mdev = np.median(d) # mean deviation for i in range(len(data)): if (abs(data[i] - mdev) < m * np.std(data)): c.append(data2[i]) return c x = pd.DataFrame(outliers2(np.array(b))) column = ['t','orig_w','filt_w','smt_w'] x.columns = column #Plot plt.rcParams['figure.figsize'] = [10,8] plt.plot(b.t,b.orig_w,'o',label='Original',alpha=0.8) # Original plt.plot(x.t,x.orig_w,'.',c='r',label='Outlier removed',alpha=0.8) # After outlier removal plt.legend() the plot illustrates how the results looks, red points after the outlier treatment over the blue original points. I would really like to get rid of those vertical group of points around the x~0 mark. What to do ? A link to the data file is provided here : Full data The green circles show typically the points i would like to get rid of | You could use scipy's median_filter: import pandas as pd from matplotlib import pyplot as plt from scipy.ndimage import median_filter b = pd.read_csv("test.csv") x = b.copy() x.orig_w = median_filter(b.orig_w, size=15) #Plot plt.rcParams['figure.figsize'] = [10,8] #Original plt.plot(b.t,b.orig_w,'o',label='Original',alpha=0.8) # After outlier removal plt.plot(x.t,x.orig_w,'.',c='r',label='Outlier removed',alpha=0.8) plt.legend() plt.show() Sample output: | 4 | 3 |
70,714,809 | 2022-1-14 | https://stackoverflow.com/questions/70714809/iterate-variables-over-binary-combinations | Is there a simple way to loop variables through all possible combinations of true/false? For example, say we have a function: def f(a, b, c): return not (a and (b or c)) Is there a way to loop a,b,c through 000, 001, 010, 011, 100, etc.? My first thought was to loop through the integers and get the bit that corresponds to the variable position, like so: for i in range (8): a = i & 4 b = i & 2 c = i & 1 print(f(a,b,c), " ", a,b,c) It seems to work, but a, b, and c when printed out are all integers, and integers interact with logical operators differently than booleans. For example, 4 and 2 equals 2, and 9 or 3 equals 9. I don't mind it too much, it just took some thinking through to convince myself this doesn't matter. But the question still stands, is there a still simpler way to loop through all possible true/false or 0/1 values? | Use itertools.product: from itertools import product for a, b, c in product([True, False], repeat=3): print(f(a,b,c), " ", a,b,c) If you really want integers 1 and 0, just replace [True, False] with [1, 0]. There's little reason to treat integers as bit arrays here. | 4 | 5 |
70,698,407 | 2022-1-13 | https://stackoverflow.com/questions/70698407/huggingface-autotokenizer-valueerror-couldnt-instantiate-the-backend-tokeniz | Goal: Amend this Notebook to work with albert-base-v2 model Error occurs in Section 1.3. Kernel: conda_pytorch_p36. I did Restart & Run All, and refreshed file view in working directory. There are 3 listed ways this error can be caused. I'm not sure which my case falls under. Section 1.3: # define the tokenizer tokenizer = AutoTokenizer.from_pretrained( configs.output_dir, do_lower_case=configs.do_lower_case) Traceback: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-25-1f864e3046eb> in <module> 140 # define the tokenizer 141 tokenizer = AutoTokenizer.from_pretrained( --> 142 configs.output_dir, do_lower_case=configs.do_lower_case) 143 144 # Evaluate the original FP32 BERT model ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 548 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] 549 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): --> 550 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 551 else: 552 if tokenizer_class_py is not None: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1752 use_auth_token=use_auth_token, 1753 cache_dir=cache_dir, -> 1754 **kwargs, 1755 ) 1756 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1880 # Instantiate tokenizer. 1881 try: -> 1882 tokenizer = cls(*init_inputs, **init_kwargs) 1883 except OSError: 1884 raise OSError( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/albert/tokenization_albert_fast.py in __init__(self, vocab_file, tokenizer_file, do_lower_case, remove_space, keep_accents, bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, **kwargs) 159 cls_token=cls_token, 160 mask_token=mask_token, --> 161 **kwargs, 162 ) 163 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs) 116 else: 117 raise ValueError( --> 118 "Couldn't instantiate the backend tokenizer from one of: \n" 119 "(1) a `tokenizers` library serialization file, \n" 120 "(2) a slow tokenizer instance to convert or \n" ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one. Please let me know if there's anything else I can add to post. | First, I had to pip install sentencepiece. However, in the same code line, I was getting an error with sentencepiece. Wrapping str() around both parameters yielded the same Traceback. 
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-12-1f864e3046eb> in <module> 140 # define the tokenizer 141 tokenizer = AutoTokenizer.from_pretrained( --> 142 configs.output_dir, do_lower_case=configs.do_lower_case) 143 144 # Evaluate the original FP32 BERT model ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 548 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] 549 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): --> 550 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 551 else: 552 if tokenizer_class_py is not None: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1752 use_auth_token=use_auth_token, 1753 cache_dir=cache_dir, -> 1754 **kwargs, 1755 ) 1756 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1776 copy.deepcopy(init_configuration), 1777 *init_inputs, -> 1778 **(copy.deepcopy(kwargs)), 1779 ) 1780 else: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1880 # Instantiate tokenizer. 1881 try: -> 1882 tokenizer = cls(*init_inputs, **init_kwargs) 1883 except OSError: 1884 raise OSError( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/albert/tokenization_albert.py in __init__(self, vocab_file, do_lower_case, remove_space, keep_accents, bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, sp_model_kwargs, **kwargs) 179 180 self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) --> 181 self.sp_model.Load(vocab_file) 182 183 @property ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sentencepiece/__init__.py in Load(self, model_file, model_proto) 365 if model_proto: 366 return self.LoadFromSerializedProto(model_proto) --> 367 return self.LoadFromFile(model_file) 368 369 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sentencepiece/__init__.py in LoadFromFile(self, arg) 169 170 def LoadFromFile(self, arg): --> 171 return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) 172 173 def DecodeIdsWithCheck(self, ids): TypeError: not a string --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-12-1f864e3046eb> in <module> 140 # define the tokenizer 141 tokenizer = AutoTokenizer.from_pretrained( --> 142 configs.output_dir, do_lower_case=configs.do_lower_case) 143 144 # Evaluate the original FP32 BERT model ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 548 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] 549 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): --> 550 return 
tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 551 else: 552 if tokenizer_class_py is not None: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1752 use_auth_token=use_auth_token, 1753 cache_dir=cache_dir, -> 1754 **kwargs, 1755 ) 1756 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1776 copy.deepcopy(init_configuration), 1777 *init_inputs, -> 1778 **(copy.deepcopy(kwargs)), 1779 ) 1780 else: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1880 # Instantiate tokenizer. 1881 try: -> 1882 tokenizer = cls(*init_inputs, **init_kwargs) 1883 except OSError: 1884 raise OSError( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/albert/tokenization_albert.py in __init__(self, vocab_file, do_lower_case, remove_space, keep_accents, bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, sp_model_kwargs, **kwargs) 179 180 self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) --> 181 self.sp_model.Load(vocab_file) 182 183 @property ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sentencepiece/__init__.py in Load(self, model_file, model_proto) 365 if model_proto: 366 return self.LoadFromSerializedProto(model_proto) --> 367 return self.LoadFromFile(model_file) 368 369 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sentencepiece/__init__.py in LoadFromFile(self, arg) 169 170 def LoadFromFile(self, arg): --> 171 return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) 172 173 def DecodeIdsWithCheck(self, ids): TypeError: not a string I then had to swap out parameters for just the model name: tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') This second part is detailed on this SO post. | 6 | 7 |
70,708,036 | 2022-1-14 | https://stackoverflow.com/questions/70708036/how-to-get-a-terraform-object-into-an-aws-lambda-environment | Lambda functions support the environment parameter and make it easy to define a key-value pair. But what about getting an object (defined by a module variable eg) into the function's environment? Quick example of what I'm trying to accomplish in python 3.7: Terraform: # variable definition variable foo { type = map(any) default = { a = "b" c = "d" } } resource "aws_lambda_function" "lambda" { . . . environment { foo = jsonencode(foo) } } and then in my function: def bar: for k in os.environ["foo"]: print(k) Thanks ! | In python, you will have to get json string and convert it to dict: import json def bar: for k in json.loads(os.environ["foo"]): print(k) | 4 | 5 |
70,704,749 | 2022-1-14 | https://stackoverflow.com/questions/70704749/type-hint-for-class-method-return-error-name-is-not-defined | The following class has a class method create(), which has a type hint on its return type for creating an instance of the class. class X: @classmethod def create(cls) -> X: pass However, it raises the following error: NameError: name 'X' is not defined | The name X doesn't exist until the class is fully defined. You can fix this by importing a __future__ feature called annotations. Just put this at the top of your file. from __future__ import annotations This wraps all annotations in quotation marks, to suppress errors like this. It's the same as doing this class X: @classmethod def create(cls) -> 'X': # <-- Note the quotes pass but automatically. This will be the default behavior in some future Python version (originally, it was going to be 3.10, but it's been pushed back due to compatibility issues), but for now the import will make it behave the way you want. The future import was added in Python 3.7. If you're on an older version of Python, you'll have to manually wrap the types in strings, as I did in the example above. | 7 | 9 |
70,696,896 | 2022-1-13 | https://stackoverflow.com/questions/70696896/how-to-add-already-created-figures-to-a-subplot-figure | I created this function that generates the ROC_AUC, then I returned the figure created to a variable. from sklearn.metrics import roc_curve, auc from sklearn.preprocessing import label_binarize import matplotlib.pyplot as plt def plot_multiclass_roc(clf, X_test, y_test, n_classes, figsize=(17, 6)): y_score = clf.decision_function(X_test) # structures fpr = dict() tpr = dict() roc_auc = dict() # calculate dummies once y_test_dummies = pd.get_dummies(y_test, drop_first=False).values for i in range(n_classes): fpr[i], tpr[i], _ = roc_curve(y_test_dummies[:, i], y_score[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # roc for each class fig, ax = plt.subplots(figsize=figsize) ax.plot([0, 1], [0, 1], 'k--') ax.set_xlim([0.0, 1.0]) ax.set_ylim([0.0, 1.05]) ax.set_xlabel('False Positive Rate') ax.set_ylabel('True Positive Rate') ax.set_title('Receiver operating characteristic for Optimized SVC model') for i in range(n_classes): ax.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f) for label %i' % (roc_auc[i], i+1)) ax.legend(loc="best") ax.grid(alpha=.4) sns.despine() plt.show() return fig svc_model_optimized_roc_auc_curve = plot_multiclass_roc(svc_model_optimized, X_test, y_test, n_classes=3, figsize=(16, 10)) The resulting figure would look like somethin below: I created 5 different ROC curves for 5 different models using the same function but returning their figures to separate variables. Then I created a subplot figure that I thought would display all of them. The code is: import matplotlib.pyplot as plt %matplotlib inline figs, ax = plt.subplots( nrows=3, ncols=2, figsize=(20, 20), ) ax[0,0] = logmodel_roc_auc_curve ax[0,1] = RandomForestModel_optimized_roc_auc_cruve ax[1,0] = decisiontree_model_optimized_roc_auc_curve ax[1,1] = best_clf_knn_roc_auc_curve ax[2,0] = svc_model_optimized_roc_auc_curve But the resulting figure produced is this: There was a similar problem to this here but it was solved by executing the functions again. But I would like to find a way if possible to just simply "paste" the figures I already have into the subplot. | You need exactly the same as in the linked solution. You can't store plots for later use. Note that in matplotlib a figure is the surrounding plot with one or more subplots. Each subplot is referenced via an ax. Function plot_multiclass_roc needs some changes: it needs an ax as parameter, and the plot should be created onto that ax. fig, ax = plt.subplots(figsize=figsize) should be removed; the fig should be created previously, outside the function also plt.show() should be removed from the function it is not necessary to return anything Outside the function, you create the fig and the axes. In matplotlib there is a not-well-followed convention to use axs for the plural of ax (when referring to a subplot). So: fig, axs = plt.subplots(nrows = 3, ncols = 2, figsize= (20, 20) ) plot_multiclass_roc(...., ax=axs[0,0]) # use parameters for logmodel plot_multiclass_roc(...., ax=axs[0,1]) # use parameters for Random Forest plot_multiclass_roc(...., ax=axs[1,0]) # ... plot_multiclass_roc(...., ax=axs[1,1]) # ... plot_multiclass_roc(...., ax=axs[2,0]) # ... axs[2,1].remove() # remove the unused last ax plt.tight_layout() # makes that labels etc. fit nicely plt.show() | 8 | 8 |
70,701,672 | 2022-1-13 | https://stackoverflow.com/questions/70701672/how-can-i-scale-deployment-replicas-in-kubernetes-cluster-from-python-client | I have Kubernetes cluster set up and managed by AKS, and I have access to it with the python client. Thing is that when I'm trying to send patch scale request, I'm getting an error. I've found information about scaling namespaced deployments from python client in the GitHub docs, but it was not clear what is the body needed in order to make the request work: # Enter a context with an instance of the API kubernetes.client with kubernetes.client.ApiClient(configuration) as api_client: # Create an instance of the API class api_instance = kubernetes.client.AppsV1Api(api_client) name = 'name_example' # str | name of the Scale namespace = 'namespace_example' # str | object name and auth scope, such as for teams and projects body = None # object | pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional) dry_run = 'dry_run_example' # str | When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed (optional) field_manager = 'field_manager_example' # str | fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). (optional) force = True # bool | Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. (optional) try: api_response = api_instance.patch_namespaced_deployment_scale(name, namespace, body, pretty=pretty, dry_run=dry_run, field_manager=field_manager, force=force) pprint(api_response) except ApiException as e: print("Exception when calling AppsV1Api->patch_namespaced_deployment_scale: %s\n" % e) So when running the code I'm getting Reason: Unprocessable Entity Does anyone have any idea in what format should the body be? For example if I want to scale the deployment to 2 replicas how can it be done? | The body argument to the patch_namespaced_deployment_scale can be a JSONPatch document, as @RakeshGupta shows in the comment, but it can also be a partial resource manifest. For example, this works: >>> api_response = api_instance.patch_namespaced_deployment_scale( ... name, namespace, ... [{'op': 'replace', 'path': '/spec/replicas', 'value': 2}]) (Note that the value needs to be an integer, not a string as in the comment.) But this also works: >>> api_response = api_instance.patch_namespaced_deployment_scale( ... name, namespace, ... {'spec': {'replicas': 2}}) | 4 | 11 |
70,698,738 | 2022-1-13 | https://stackoverflow.com/questions/70698738/two-walrus-operators-in-one-if-statement | Is there a correct way to have two walrus operators in 1 if statement? if (three:= i%3==0) and (five:= i%5 ==0): arr.append("FizzBuzz") elif three: arr.append("Fizz") elif five: arr.append("Buzz") else: arr.append(str(i-1)) This example works for three but five will be "not defined". | The logical operator and evaluates its second operand only conditionally. There is no correct way to have a conditional assignment that is unconditionally needed. Instead use the "binary" operator &, which evaluates its second operand unconditionally. arr = [] for i in range(1, 25): # v force evaluation of both operands if (three := i % 3 == 0) & (five := i % 5 == 0): arr.append("FizzBuzz") elif three: arr.append("Fizz") elif five: arr.append("Buzz") else: arr.append(str(i)) print(arr) # ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', ...] Correspondingly, one can use | as an unconditional variant of or. In addition, the "xor" operator ^ has no equivalent with conditional evaluation at all. Notably, the binary operators evaluate booleans as purely boolean - for example, False | True is True not 1 – but may work differently for other types. To evaluate arbitrary values such as lists in a boolean context with binary operators, convert them to bool after assignment: # |~~~ force list to boolean ~~| | force evaluation of both operands # v v~ walrus-assign list ~vv v if bool(lines := list(some_file)) & ((today := datetime.today()) == 0): ... Since assignment expressions require parentheses for proper precedence, the common problem of different precedence between logical (and, or) and binary (&, |, ^) operators is irrelevant here. | 6 | 2 |
70,688,858 | 2022-1-12 | https://stackoverflow.com/questions/70688858/how-to-continuously-read-from-stdin-not-just-once-input-file-is-done | I have these two scripts: clock.py #!/usr/bin/env python3 import time while True: print("True", flush=True) time.sleep(1) continuous_wc.py #!/usr/bin/env python3 import sys def main(): count = 0 for line in sys.stdin: sys.stdout.write(str(count)) count += 1 if __name__=='__main__': main() And I run them like so: ./clock.py | ./continuous_wc.py I'm hoping that it prints: 1 2 3 4 5 ... Every second is like a clock, because it's counting the lines in the file basically. But it doesn't output anything. Why not? | In addition to print(x, flush=True) you must also flush after sys.stdout.write. Note that the programs would technically work without flush, but they would print values very infrequently, in very large chunks, as the Python IO buffer is many kilobytes. Flushing is there to make it work more real-time. sys.stdout.write(str(count)) sys.stdout.flush() | 5 | 4 |
70,697,906 | 2022-1-13 | https://stackoverflow.com/questions/70697906/split-text-into-smaller-paragraphs-of-a-minimal-length-without-breaking-the-sent | Is there a better way to do this task? For the pre-processing of an NLP task, I was trying to split large pieces of text into a list of strings of even length. By splitting the text at every "." I would have very uneven sentences in length. By using an index/number I would cut off sentences in the middle. The goal was to have sentences in a list of even length without truncating a sentence before it ends. This is the solution I came up with but I feel like something simpler should exist. def even_split(text): combined_sentences = [] tmp_text = text.strip() if tmp_text[-1] != ".": tmp_text += "." while len(tmp_text) > 0: dots = [] for i in range(len(tmp_text)): if tmp_text[i] == '.': dots.append(i) split_dot = dots[min(range(len(dots)), key=lambda i: abs(dots[i]-150))] combined_sentences.append(tmp_text[:split_dot+1]) tmp_text = tmp_text[split_dot+1:].strip() return combined_sentences For example, if I input the following string: Marketing products and services is a demanding and tedious task in today’s overly saturated market. Especially if you’re in a B2B lead generation business. As a business owner or part of the sales team, you really need to dive deep into understanding what strategies work best and how to appeal to your customers most efficiently. Lead generation is something you need to master. Understanding different types of leads will help you sell your product or services and scale your business faster. That’s why we’re explaining what warm leads are and how you can easily turn them into paying customers. This will output: ['Marketing products and services is a demanding and tedious task in today’s overly saturated market. Especially if you’re in a B2B lead generation business.', 'As a business owner or part of the sales team, you really need to dive deep into understanding what strategies work best and how to appeal to your customers most efficiently.', 'Lead generation is something you need to master. Understanding different types of leads will help you sell your product or services and scale your business faster.', 'That’s why we’re explaining what warm leads are and how you can easily turn them into paying customers.'] As you can see they are evenly split at around 150 char each. Is there a better way to do this task? | IIUC, you want to split the text on dot, but try to keep a minimal length of the chunks to avoid having very short sentences. What you can do is to split on the dots and join again until you reach a threshold (here 200 characters): out = [] threshold = 200 for chunk in text.split('. '): if out and len(chunk)+len(out[-1]) < threshold: out[-1] += ' '+chunk+'.' else: out.append(chunk+'.') output: ['Marketing products and services is a demanding and tedious task in today’s overly saturated market. Especially if you’re in a B2B lead generation business.', 'As a business owner or part of the sales team, you really need to dive deep into understanding what strategies work best and how to appeal to your customers most efficiently.', 'Lead generation is something you need to master. Understanding different types of leads will help you sell your product or services and scale your business faster.', 'That’s why we’re explaining what warm leads are and how you can easily turn them into paying customers..'] | 5 | 2 |
70,679,571 | 2022-1-12 | https://stackoverflow.com/questions/70679571/how-do-i-set-a-wildcard-for-csrf-trusted-origins-in-django | After updating from Django 2 to Django 4.0.1 I am getting CSRF errors on all POST requests. The logs show: "WARNING:django.security.csrf:Forbidden (Origin checking failed - https://127.0.0.1 does not match any trusted origins.): /activate/" I can't figure out how to set a wildcard for CSRF_TRUSTED_ORIGINS? I have a server shipped to customers who host it on their own domain so there is no way for me to no the origin before hand. I have tried the following with no luck: CSRF_TRUSTED_ORIGINS = ["https://*", "http://*"] and CSRF_TRUSTED_ORIGINS = ["*"] Explicitly setting "https://127.0.0.1" in the CSRF_TRUSTED_ORIGINS works but won't work in my customer's production deployment which will get another hostname. | The Django app is running using Gunicorn behind NGINX. Because SSL is terminated after NGINX request.is_secure() returns false which results in Origin header not matching the host here: https://github.com/django/django/blob/3ff7f6cf07a722635d690785c31ac89484134bee/django/middleware/csrf.py#L276 I resolved the issue by adding the following in Django: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') And ensured that NGINX is forwarding the http scheme with the following in my NGINX conf: proxy_set_header X-Forwarded-Proto $scheme; | 11 | 6 |
70,688,159 | 2022-1-12 | https://stackoverflow.com/questions/70688159/using-plt-annotate-how-to-adjust-arrow-color | I want to add some annotations and arrows to my plots using plt.annotate() and I am not sure how to change the arrow parameters. import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame({'weight': [10, 10, 20, 5, 10, 15]}) print(df) Dummy Data: weight 0 10 1 10 2 20 3 5 4 10 5 15 Plot Line: ax = df.plot(figsize=(8, 6)) ax.annotate("Maximum", xy=(2, 20), xytext=(2, 10), arrowprops=dict(arrowstyle="->"), color='red') How do I change the arrow color and other parameters to also match the Text? In this case I changed the text color to red. Thanks for the help! | In arrowprops you should be able to set arrow color and other properties , something like this: arrowprops=dict(arrowstyle= '->', color='red', lw=3, ls='--') | 4 | 5 |
70,685,203 | 2022-1-12 | https://stackoverflow.com/questions/70685203/how-to-get-the-origin-url-in-fastapi | Is it possible to get the URL that a request came from in FastAPI? For example, if I have an endpoint that is requested at api.mysite.com/endpoint and a request is made to this endpoint from www.othersite.com, is there a way that I can retrieve the string "www.othersite.com" in my endpoint function? | The premise of the question, which could be formulated as "a server can identify the URL from where a request came", is misguided. True, some HTTP requests (especially some of the requests issued by browsers) carry an Origin header and/or a Referer [sic] header. Also, the Forwarded header, if present, contains information about where the request was issued. However, nothing in the HTTP specification requires that requests in general advertise where they came from. Therefore, whether with FastAPI or some other server technology, there's no definite way of knowing where a request came from. | 5 | 5 |
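For completeness, a hedged sketch of reading those optional headers in FastAPI when a client does send them (the route is illustrative; every value may simply be None):

from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/endpoint")
async def endpoint(request: Request):
    # Present only if the client chose to send them (e.g. browser-issued CORS requests).
    return {
        "origin": request.headers.get("origin"),
        "referer": request.headers.get("referer"),
        "forwarded": request.headers.get("forwarded"),
    }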
70,686,557 | 2022-1-12 | https://stackoverflow.com/questions/70686557/python-googlesearch-module-error-typeerror-search-got-an-unexpected-keywor | Here is my code. It was working properly before, and I was not getting an error while using it. I don't understand how this happened even though I didn't change anything: results = [] for query in my_list: results.append(search(query, tld="com", num=1, stop=1, pause=2)) Error: results.append(search(query, tld="com", num=1, stop=1, pause=2)) TypeError: search() got an unexpected keyword argument 'tld' | Those keyword arguments come from the google Python package, where they still work in all versions. The parameters are: query : query string that we want to search for. tld : tld stands for top level domain which means we want to search our result on google.com or google.in or some other domain. lang : lang stands for language. num : Number of results we want. start : First result to retrieve. stop : Last result to retrieve. Use None to keep searching forever. pause : Lapse to wait between HTTP requests. A lapse too short may cause Google to block your IP; keeping a significant lapse will make your program slow, but it is the safer and better option. Return : Generator (iterator) that yields found URLs. If the stop parameter is None the iterator will loop forever. Here is your real problem: there is another Python package whose module name is also googlesearch (link here). Since it might be installed in your environment, your code might be calling that module, which does not support these parameters. The blockbuster solution (tested with both packages locally): delete your Python environment, create a new one, then run pip install beautifulsoup4 and pip install google. Now use your code, which will work like a charm. Never install the googlesearch-python package. | 4 | 10 |
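A less drastic variant of the answer's fix, as a sketch: instead of rebuilding the whole environment, uninstall the conflicting package and reinstall the one whose search() accepts those keywords (this assumes the package clash described above is the cause; the query string is illustrative).

pip uninstall googlesearch-python
pip install beautifulsoup4 google

Then, in Python:

from googlesearch import search   # now resolved from the "google" package

for url in search("stackoverflow python", tld="com", num=1, stop=1, pause=2):
    print(url)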
70,682,654 | 2022-1-12 | https://stackoverflow.com/questions/70682654/comparing-pandas-map-and-merge | I have the following df: df = pd.DataFrame({'key': {0: 'EFG_DS_321', 1: 'EFG_DS_900', 2: 'EFG_DS_900', 3: 'EFG_Q_900', 4: 'EFG_DS_1000', 5: 'EFG_DS_1000', 6: 'EFG_DS_1000', 7: 'ABC_DS_444', 8: 'EFG_DS_900', 9: 'EFG_DS_900', 10: 'EFG_DS_321', 11: 'EFG_DS_900', 12: 'EFG_DS_1000', 13: 'EFG_DS_900', 14: 'EFG_DS_321', 15: 'EFG_DS_321', 16: 'EFG_DS_1000', 17: 'EFG_DS_1000', 18: 'EFG_DS_1000', 19: 'EFG_DS_1000', 20: 'ABC_DS_444', 21: 'EFG_DS_900', 22: 'EFG_DAS_12345', 23: 'EFG_DAS_12345', 24: 'EFG_DAS_321', 25: 'EFG_DS_321', 26: 'EFG_DS_12345', 27: 'EFG_Q_1000', 28: 'EFG_DS_900', 29: 'EFG_DS_321'}}) and I have the following dict: d = {'ABC_AS_1000': 123, 'ABC_AS_444': 321, 'ABC_AS_231341': 421, 'ABC_AS_888': 412, 'ABC_AS_087': 4215, 'ABC_DAS_1000': 3415, 'ABC_DAS_444': 4215, 'ABC_DAS_231341': 3214, 'ABC_DAS_888': 321, 'ABC_DAS_087': 111, 'ABC_Q_1000': 222, 'ABC_Q_444': 3214, 'ABC_Q_231341': 421, 'ABC_Q_888': 321, 'ABC_Q_087': 41, 'ABC_DS_1000': 421, 'ABC_DS_444': 421, 'ABC_DS_231341': 321, 'ABC_DS_888': 41, 'ABC_DS_087': 41, 'EFG_AS_1000': 213, 'EFG_AS_900': 32, 'EFG_AS_12345': 1, 'EFG_AS_321': 3, 'EFG_DAS_1000': 421, 'EFG_DAS_900': 321, 'EFG_DAS_12345': 123, 'EFG_DAS_321': 31, 'EFG_Q_1000': 41, 'EFG_Q_900': 51, 'EFG_Q_12345': 321, 'EFG_Q_321': 321, 'EFG_DS_1000': 41, 'EFG_DS_900': 51, 'EFG_DS_12345': 321, 'EFG_DS_321': 1} I want to map d into df, but given that the real data is very large and complicated, i'm trying to understand if map or merge is better in terms of efficiency (running time). first option: a simple map res = df['key'].map(d) second option: convert d into a dataframe and preform a merge d1 = pd.DataFrame.from_dict(d,orient='index',columns=['res']) res = df.merge(d1,left_on='key',right_index=True)['res'] Any help will be much appreciated (or any better solutions of course:)) | map will be faster than a merge If your goal is simply to assign a numerical category to each unique value in df['AB'], you could use pandas.factorize that should be a bit faster than map: res = df['AB'].factorize()[0]+1 output: array([1, 1, 1, 2, 2, 3, 3, 3]) test on 800k rows: factorize 28.6 ms ± 153 µs map 32.1 ms ± 110 µs merge 68.6 ms ± 1.33 ms | 6 | 5 |
70,680,363 | 2022-1-12 | https://stackoverflow.com/questions/70680363/structural-pattern-matching-using-regex | I have a string that I'm trying to validate against a few regex patterns and I was hoping since Pattern matching is available in 3.10, I might be able to use that instead of creating an if-else block. Consider a string 'validateString' with possible values 1021102,1.25.32, string021. The code I tried would be something like the following. match validateString: case regex1: print('Matched regex1') case regex2: print('Matched regex2') case regex3: print('Matched regex3') For regex 1, 2 and 3, I've tried string regex patterns and also re.compile objects but it doesn't seem to work. I have been trying to find examples of this over the internet but can't seem to find any that cover regex pattern matching with the new python pattern matching. Any ideas for how I can make it work? Thanks! | It is not possible to use regex-patterns to match via structural pattern matching (at this point in time). From: PEP0643: structural-pattern-matching PEP 634: Structural Pattern Matching Structural pattern matching has been added in the form of a match statement and case statements of patterns with associated actions. Patterns consist of sequences, mappings, primitive data types as well as class instances. Pattern matching enables programs to extract information from complex data types, branch on the structure of data, and apply specific actions based on different forms of data. (emphasis mine) Nothing in this gives any hint that evoking match / search functions of the re module on the provided pattern is intended to be used for matching. You can find out more about the reasoning behind strucutral pattern matching by reading the actuals PEPs: PEP 634 -- Structural Pattern Matching: Specification PEP 635 -- Structural Pattern Matching: Motivation and Rationale PEP 636 -- Structural Pattern Matching: Tutorial they also include ample examples on how to use it. | 27 | 4 |
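Since case patterns cannot be regexes, one common workaround (an assumption on my part, not part of the answer above) is a capture pattern plus an `if` guard, so the regex test happens in the guard; the three patterns below are made up to roughly fit the sample values from the question.

```python
# Workaround sketch: the match statement only captures, the guards do the regex work.
import re

regex1 = re.compile(r"^\d{7}$")          # e.g. 1021102   (hypothetical patterns)
regex2 = re.compile(r"^\d+\.\d+\.\d+$")  # e.g. 1.25.32
regex3 = re.compile(r"^[a-z]+\d+$")      # e.g. string021

match validateString:
    case s if regex1.match(s):
        print("Matched regex1")
    case s if regex2.match(s):
        print("Matched regex2")
    case s if regex3.match(s):
        print("Matched regex3")
    case _:
        print("No match")
```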
70,678,304 | 2022-1-12 | https://stackoverflow.com/questions/70678304/removing-signs-and-repeating-numbers | I want to remove all signs from my dataframe to leave it in either one of the two formats: 100-200 or 200 So the salaries should either have a single hyphen between them if a range of salaries if given, otherwise a clean single number. I have the following data: import pandas as pd import re df = {'salary':['£26,768 - £30,136/annum Attractive benefits package', '£26,000 - £28,000/annum plus bonus', '£21,000/annum', '£26,768 - £30,136/annum Attractive benefits package', '£33/hour', '£18,500 - £20,500/annum Inc Bonus - Study Support + Bens', '£27,500 - £30,000/annum £27,500 to £30,000 + Study', '£35,000 - £40,000/annum', '£24,000 - £27,000/annum Study Support (ACCA / CIMA)', '£19,000 - £24,000/annum Study Support', '£30,000 - £35,000/annum', '£44,000 - £66,000/annum + 15% Bonus + Excellent Benefits. L', '£75 - £90/day £75-£90 Per Day']} data = pd.DataFrame(df) Here's what I have tried to remove some of the signs: salary = [] for i in data.salary: space = re.sub(" ",'',i) lower = re.sub("[a-z]",'',space) upper = re.sub("[A-Z]",'',lower) bracket = re.sub("/",'',upper) comma = re.sub(",", '', bracket) plus = re.sub("\+",'',comma) percentage = re.sub("\%",'', plus) dot = re.sub("\.",'', percentage) bracket1 = re.sub("\(",'',dot) bracket2 = re.sub("\)",'',bracket1) salary.append(bracket2) Which gives me: '£26768-£30136', '£26000-£28000', '£21000', '£26768-£30136', '£33', '£18500-£20500-', '£27500-£30000£27500£30000', '£35000-£40000', '£24000-£27000', '£19000-£24000', '£30000-£35000', '£44000-£6600015', '£75-£90£75-£90' However, I have some repeating numbers, essentially I want anything after the first range of values removed, and any sign besides the hyphen between the two numbers. Expected output: '26768-30136', '26000-28000', '21000', '26768-30136', '33', '18500-20500', '27500-30000', '35000-40000', '24000-27000', '19000-24000', '30000-35000', '44000-66000', '75-90 | Another way using pandas.Series.str.partition with replace: data["salary"].str.partition("/")[0].str.replace("[^\d-]+", "", regex=True) Output: 0 26768-30136 1 26000-28000 2 21000 3 26768-30136 4 33 5 18500-20500 6 27500-30000 7 35000-40000 8 24000-27000 9 19000-24000 10 30000-35000 11 44000-66000 12 75-90 Name: 0, dtype: object Explain: It assumes that you are only interested in the parts upto /; it extracts everything until /, than removes anything but digits and hypen | 5 | 5 |
70,675,453 | 2022-1-12 | https://stackoverflow.com/questions/70675453/django-app-runs-locally-but-i-get-csrf-verification-failed-on-heroku | My app runs fine at heroku local but after deployed to Heroku, every time I try to login/register/login as admin, it returns this error shown below. I have tried to put @csrf_exempt on profile views, but that didn't fix the issue. What can I do? | The error message is fairly self-explanatory (please excuse typos as I can't copy from an image): Origin checking failed - https://pacific-coast-78888.herokuapp.com does not match any trusted origins The domain you are using is not a trusted origin for CSRF. There is then a link to the documentation, which I suspect goes to the Django CSRF documentation, though the documentation for the CSRF_TRUSTED_ORIGINS setting might be more useful: A list of trusted origins for unsafe requests (e.g. POST). For requests that include the Origin header, Django’s CSRF protection requires that header match the origin present in the Host header. Look in your settings.py for CSRF_TRUSTED_ORIGINS and add https://pacific-coast-78888.herokuapp.com to the list. If that setting doesn't already exist, simply add it: CSRF_TRUSTED_ORIGINS = ["https://pacific-coast-78888.herokuapp.com"] | 4 | 11 |
70,658,748 | 2022-1-10 | https://stackoverflow.com/questions/70658748/using-fastapi-in-a-sync-way-how-can-i-get-the-raw-body-of-a-post-request | Using FastAPI in a sync, not async mode, I would like to be able to receive the raw, unchanged body of a POST request. All examples I can find show async code, when I try it in a normal sync way, the request.body() shows up as a coroutine object. When I test it by posting some XML to this endpoint, I get a 500 "Internal Server Error". from fastapi import FastAPI, Response, Request, Body app = FastAPI() @app.get("/") def read_root(): return {"Hello": "World"} @app.post("/input") def input_request(request: Request): # how can I access the RAW request body here? body = request.body() # do stuff with the body here return Response(content=body, media_type="application/xml") Is this not possible with FastAPI? Note: a simplified input request would look like: POST http://127.0.0.1:1083/input Content-Type: application/xml <XML> <BODY>TEST</BODY> </XML> and I have no control over how input requests are sent, because I need to replace an existing SOAP API. | Using async def endpoint If an object is a co-routine, it needs to be awaited. FastAPI is actually Starlette underneath, and Starlette methods for returning the request body are async methods (see the source code here as well); thus, one needs to await them (inside an async def endpoint). For example: from fastapi import Request @app.post("/input") async def input_request(request: Request): return await request.body() Update 1 - Using def endpoint Alternatively, if you are confident that the incoming data is a valid JSON, you can define your endpoint with def instead, and use the Body field, as shown below (for more options on how to post JSON data, see this answer): from fastapi import Body @app.post("/input") def input_request(payload: dict = Body(...)): return payload If, however, the incoming data are in XML format, as in the example you provided, one option is to pass them using Files instead, as shown below—as long as you have control over how client data are sent to the server (have a look here as well). Example: from fastapi import File @app.post("/input") def input_request(contents: bytes = File(...)): return contents Update 2 - Using def endpoint and async dependency As described in this post, you can use an async dependency function to pull out the body from the request. You can use async dependencies on non-async (i.e., def) endpoints as well. Hence, if there is some sort of blocking code in this endpoint that prevents you from using async/await—as I am guessing this might be the reason in your case—this is the way to go. Note: I should also mention that this answer—which explains the difference between def and async def endpoints (that you might be aware of)—also provides solutions when you are required to use async def (as you might need to await for coroutines inside a route), but also have some synchronous expensive CPU-bound operation that might be blocking the server. Please have a look. Example of the approach described earlier can be found below. You can uncomment the time.sleep() line, if you would like to confirm yourself that a request won't be blocking other requests from going through, as when you declare an endpoint with normal def instead of async def, it is run in an external threadpool (regardless of the async def dependency function). 
from fastapi import FastAPI, Depends, Request import time app = FastAPI() async def get_body(request: Request): return await request.body() @app.post("/input") def input_request(body: bytes = Depends(get_body)): print("New request arrived.") #time.sleep(5) return body | 26 | 33 |
70,648,645 | 2022-1-10 | https://stackoverflow.com/questions/70648645/create-a-field-which-is-immutable-after-being-set | Is it possible to create a Pydantic field that does not have a default value and this value must be set on object instance creation and is immutable from then on? e.g. from pydantic import BaseModel class User(BaseModel): user_id: int name: str user = User(user_id=1, name='John') user.user_id = 2 # raises immutable error | Pydantic already has the option you want. You can customize specific field by specifying allow_mutation to false. This raises a TypeError if the field is assigned on an instance. from pydantic import BaseModel, Field class User(BaseModel): user_id: int = Field(..., allow_mutation=False) name: str class Config: validate_assignment = True user = User(user_id=1, name='John') user.user_id = 2 # TypeError: "user_id" has allow_mutation set to False and cannot be assigned | 9 | 12 |
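The accepted answer targets pydantic v1; if you are on pydantic v2 (an assumption about your environment), `allow_mutation` was removed and the rough equivalent uses `frozen` on the field, as sketched below.

```python
# Rough pydantic v2 equivalent of the answer above (assumes pydantic >= 2).
from pydantic import BaseModel, ConfigDict, Field

class User(BaseModel):
    model_config = ConfigDict(validate_assignment=True)

    user_id: int = Field(frozen=True)
    name: str

user = User(user_id=1, name='John')
user.user_id = 2  # raises pydantic.ValidationError: user_id is frozen
```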
70,660,854 | 2022-1-11 | https://stackoverflow.com/questions/70660854/how-to-check-if-a-bot-can-dm-a-user | If a user has the privacy setting "Allow direct messages from server members" turned off and a discord bot calls await user.dm_channel.send("Hello there") You'll get this error: discord.errors.Forbidden: 403 Forbidden (error code: 50007): Cannot send messages to this user I would like to check whether I can message a user without sending them a message. Trying to send a message and catching this error does not work for me, because I don't want a message to get sent in the event that the bot is allowed to message. I have tried this: print(user.dm_channel.permissions_for(bot).send_messages) but it always returns True, even if the message is not permitted. I have also tried this: channel = await user.create_dm() if channel is None: ... but unfortunately, it seems that "has permission to message user" and "has permission to create a dm channel" are considered different. EDIT To clarify the exact usage since there seems to be a bit of confusion, take this example. There is a server, and 3 users in question: Me, My Bot, and Steve. Steve has "Allow direct messages from server members" checked off. The bot has a command called !newgame which accepts a list of users and starts a game amongst them, which involves DMing some of the members of the game. Because of Steve's privacy settings, he cannot play the game (since the bot will need to message him). If I do !newgame @DJMcMayhem @Steve I'd like to provide a response like: > I can't start a game with that list of users because @Steve has the wrong privacy settings. But as far as I know right now, the only way to find out if Steve can play is by first attempting to message every user, which I'd like to avoid. | Explanation You can send an invalid message, which would raise a 400 Bad Request exception, to the dm_channel. This can be accomplished by setting content to None, for example. If it raises 400 Bad Request, you can DM them. If it raises 403 Forbidden, you can't. Code async def can_dm_user(user: discord.User) -> bool: try: await user.send() except discord.Forbidden: return False except discord.HTTPException: return True | 10 | 11 |
70,597,020 | 2022-1-5 | https://stackoverflow.com/questions/70597020/lower-latency-from-webcam-cv2-videocapture | I'm building an application using the webcam to control video games (kinda like a kinect). It uses the webcam (cv2.VideoCapture(0)), AI pose estimation (mediapipe), and custom logic to pipe inputs into dolphin emulator. The issue is the latency. I've used my phone's hi-speed camera to record myself snapping and found latency of around 32 frames ~133ms between my hand and the frame onscreen. This is before any additional code, just a loop with video read and cv2.imshow (about 15ms) Is there any way to decrease this latency? I'm already grabbing the frame in a separate Thread, setting CAP_PROP_BUFFERSIZE to 0, and lowering the CAP_PROP_FRAME_HEIGHT and CAP_PROP_FRAME_WIDTH, but I still get ~133ms of latency. Is there anything else I can be doing? Here's my code below: class WebcamStream: def __init__(self, src=0): self.stopped = False self.stream = cv2.VideoCapture(src) self.stream.set(cv2.CAP_PROP_BUFFERSIZE, 0) self.stream.set(cv2.CAP_PROP_FRAME_HEIGHT, 400) self.stream.set(cv2.CAP_PROP_FRAME_WIDTH, 600) (self.grabbed, self.frame) = self.stream.read() self.hasNew = self.grabbed self.condition = Condition() def start(self): Thread(target=self.update, args=()).start() return self def update(self,): while True: if self.stopped: return (self.grabbed, self.frame) = self.stream.read() with self.condition: self.hasNew = True self.condition.notify_all() def read(self): if not self.hasNew: with self.condition: self.condition.wait() self.hasNew = False return self.frame def stop(self): self.stopped = True The application needs to run in as close to real time as possible, so any reduction in latency, no matter how small would be great. Currently between the webcam latency (~133ms), pose estimation and logic (~25ms), and actual time it takes to move into the correct pose, it racks up to about 350-400ms of latency. Definitely not ideal when I'm trying to play a game. EDIT: Here's the code I used to test the latency (Running the code on my laptop, recording my hand and screen, and counting frame difference in snapping): if __name__ == "__main__": cap = WebcamStream().start() while(True): frame = cap.read() cv2.imshow('frame', frame) cv2.waitKey(1) | Welcome to the War-on-Latency ( shaving-off ) The experience you have described above is a bright example, how accumulated latencies could devastate any chances to keep a control-loop tight-enough, to indeed control something meaningfully stable, as in a MAN-to-MACHINE-INTERFACE system we wish to keep: User's-motion | CAM-capture | IMG-processing | GUI-show | User's-visual-cortex-scene-capture | User's decision+action | loop A real-world situation, where OpenCV profiling was shown, to "sense" how much time we spend in respective acquisition-storage-transformation-postprocessing-GUI pipeline actual phases ( zoom in as needed ) What latency-causing steps do we work with? Forgive, for a moment, a raw-sketch of where we accumulate each of the particular latency-related costs : CAM \____/ python code GIL-awaiting ~ 100 [ms] chopping |::| python code calling a cv2.<function>() |::| __________________________________________-----!!!!!!!----------- |::| ^ 2x NNNNN!!!!!!!MOVES DATA! |::| | per-call NNNNN!!!!!!! 1.THERE |::| | COST NNNNN!!!!!!! 
2.BACK |::| | TTTT-openCV::MAT into python numpy.array |::| | //// forMAT TRANSFORMER TRANSFORMATIONS USBx | //// TRANSFORMATIONS |::| | //// TRANSFORMATIONS |::| | //// TRANSFORMATIONS |::| | //// TRANSFORMATIONS |::| | //// TRANSFORMATIONS H/W oooo _v____TTTT in-RAM openCV::MAT storage TRANSFORMATIONS / \ oooo ------ openCV::MAT object-mapper \ / xxxx O/S--- °°°° xxxx driver """" _____ xxxx \\\\ ^ xxxx ...... openCV {signed|unsigned}-{size}-{N-channels} _________\\\\___|___++++ __________________________________________ openCV I/O ^ PPPP PROCESSING as F | .... PROCESSING A | ... PROCESSING S | .. PROCESSING T | . PROCESSING as | PPPP PROCESSING possible___v___PPPP _____ openCV::MAT NATIVE-object PROCESSING What latencies do we / can we fight ( here ) against? Hardware latencies could help, yet changing already acquired hardware could turn expensive Software latencies of already latency-optimised toolboxes is possible, yet harder & harder Design inefficiencies are the final & most common place, where latencies could get shaved-off OpenCV ? There is not much to do here. The problem is with the OpenCV-Python binding details: ... So when you call a function, say res = equalizeHist(img1,img2) in Python, you pass two numpy arrays and you expect another numpy array as the output. So these numpy arrays are converted to cv::Mat and then calls the equalizeHist() function in C++. Final result, res will be converted back into a Numpy array. So in short, almost all operations are done in C++ which gives us almost same speed as that of C++. This works fine "outside" a control-loop, not in our case, where both of the two transport-costs, transformation-costs and any of new or interim-data storage RAM-allocation-costs result in worsening our control-loop TAT. So avoid any and all calls of OpenCV-native functions from Python-(behind the bindings' latency extra-miles)-side, no matter how tempting or sweet these may look on the first sight. HUNDREDS-of-[ms] are a rather bitter cost of ignoring this advice. Python ? Yes, Python. Using Python interpreter introduces both latency per se, plus adds problems with concurrency-avoided processing, no matter how many cores does our hardware operate on ( while recent Py3 tries a lot to lower these costs under the interpreter-level software). We can test & squeeze max out of the (still unavoidable, in 2022) GIL-lock interleaving - check the sys.getswitchinterval() and test increasing this amount for having less interleaved python-side processing ( tweaking is dependent on other your python-application ambitions ( GUI, distributed-computing tasks, python network-I/O workloads, python-HW-I/O-s, if applicable, etc ) RAM-memory-I/O costs ? Our next major enemy. Using a least-sufficient-enough image-DATA-format, that MediaPipe can work with is the way forward in this segment. Avoidable losses All other (our) sins belong to this segment. Avoid any image-DATA-format transformations ( see above, cost may easily grow into HUNDREDS THOUSANDS of [us] just for converting an already acquired-&-formatted-&-stored numpy.array into just another colourmap) MediaPipe lists enumerated formats it can work with: // ImageFormat SRGB: sRGB, interleaved: one byte for R, then one byte for G, then one byte for B for each pixel. SRGBA: sRGBA, interleaved: one byte for R, one byte for G, one byte for B, one byte for alpha or unused. SBGRA: sBGRA, interleaved: one byte for B, one byte for G, one byte for R, one byte for alpha or unused. GRAY8: Grayscale, one byte per pixel. 
GRAY16: Grayscale, one uint16 per pixel. SRGB48: sRGB,interleaved, each component is a uint16. SRGBA64: sRGBA,interleaved,each component is a uint16. VEC32F1: One float per pixel. VEC32F2: Two floats per pixel. So, choose the MVF -- the minimum viable format -- for gesture-recognition to work and downscale the amount of pixels as possible ( 400x600-GRAY8 would be my hot candidate ) Pre-configure ( not missing the cv.CAP_PROP_FOURCC details ) the native-side OpenCV::VideoCapture processing to do no more than just plain storing this MVF in a RAW-format on the native-side of the Acquisition-&-Pre-processing chain, so that no other post-process formatting takes place. If indeed forced to ever touch the python-side numpy.array object, prefer to use vectorised & striding-tricks powered operations over .view()-s or .data-buffers, so as to avoid any unwanted add-on latency costs increasing the control-loop TAT. Options? eliminate any python-side calls ( as these cost you --2x--the-costs of data-I/O + transformation costs ) by precisely configuring the native-side OpenCV processing to match the needed MediaPipe data-format minimise, better avoid any blocking, if still too skewed control-loop, try using distributed-processing with moving raw-data into other process ( not necessarily a Python-interpreter ) on localhost or within a sub-ms LAN domain ( further tips available here ) try to fit the hot-DATA RAM-footprints to match you CPU-Cache Hierarchy cache-lines' sizing & associativity details ( see this ) | 5 | 10 |
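A condensed sketch of the configuration advice above — pre-configure the capture on the native side and keep the frame in the smallest usable format. The MJPG FOURCC and the V4L2 backend flag are assumptions that depend on your OS and camera driver.

```python
# Configure the capture once, up front, instead of converting frames later.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)  # cv2.CAP_DSHOW / cv2.CAP_MSMF on Windows
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))  # avoid raw-YUY2 USB bandwidth limits
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 600)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 400)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)      # smallest buffer the driver will accept

ok, frame = cap.read()
if ok:
    # keep the frame in the smallest format the downstream model tolerates (GRAY8 here)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```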
70,597,896 | 2022-1-5 | https://stackoverflow.com/questions/70597896/check-if-conda-env-exists-and-create-if-not-in-bash | I have a build script to run a simple python app. I am trying to set it up that it will run for any user that has conda installed and in their PATH. No other prerequisites. I have that pretty much accomplished but would like to make it more efficient for returning users. build_run.sh conda init bash conda env create --name RUN_ENV --file ../run_env.yml -q --force conda activate RUN_ENV python run_app.py conda deactivate I would like to make it that the script checks if RUN_ENV already exists and activates it instead of forcing its creation every time. I tried ENVS=$(conda env list | awk '{print }' ) if [[ conda env list = *"RUN_ENV"* ]]; then conda activate RUN_ENV else conda env create --name RUN_ENV --file ../run_env.yml -q conda activate RUN_ENV exit fi; python run_app.py conda deactivate but it always came back as false and tried to create RUN_ENV | update 2022 i've been receiving upvotes recently. so i'm going to bump up that this method overall is not natively "conda" and might not be the best approach. like i said originally, i do not use conda. take my advice at your discretion. rather, please refer to @merv's comment in the question suggesting the use of the --prefix flag additionally take a look at the documentation for further details NOTE: you can always use a function within your bash script for repeated command invocations with very specific flags e.g function PREFIXED_CONDA(){ action=${1}; # copy $1 to $action; shift 1; # delete first argument and shift remaining indeces to the left conda ${action} --prefix /path/to/project ${@} } i am not sure how conda env list works (i don't use Anaconda); and your current if-tests are vague but i'm going out on a limb and guessing this is what you're looking for #!/usr/bin/env bash # ... find_in_conda_env(){ conda env list | grep "${@}" >/dev/null 2>/dev/null } if find_in_conda_env ".*RUN_ENV.*" ; then conda activate RUN_ENV else # ... instead of bringing it out into a separate function, you could also do # ... if conda env list | grep ".*RUN_ENV.*" >/dev/null 2>&1; then # ... bonus points for neatness and clarity if you use command grouping # ... if { conda env list | grep 'RUN_ENV'; } >/dev/null 2>&1; then # ... if simply checks the exit code. and grep exits with 0 (success) as long as there's at least one match of the pattern provided; this evaluates to "true" in the if statement (grep would match and succeed even if the pattern is just 'RUN_ENV' ;) ) the awk portion of ENVS=$(conda env list | awk '{print }' ) does virtually nothing. i would expect the output to be in tabular format, but {print } does no filtering, i believe you were looking for {print $n} where n is a column number or awk /PATTERN/ {print} where PATTERN is likely RUN_ENV and only lines which have PATTERN are printed. but even so, storing a table in a string variable is going to be messing. you might want an array. then coming to your if-condition, it's plain syntactically wrong. the [[ construct is for comparing values: integer, string, regex but here on the left of = we have a command conda env list which i believe is also the contents of $ENVS hence we can assume you meant [[ "${ENVS}" == *"RUN_ENV"* ]] or alternately [[ $(conda env list) == *"RUN_ENV"* ]] but still, regex matching against a table... not very intuitive imo but it works... sort of the proper clean syntax for regex matching is [[ ${value} =~ /PATTERN/ ]] | 8 | 9 |
70,587,544 | 2022-1-5 | https://stackoverflow.com/questions/70587544/brew-install-python-installs-3-9-why-not-3-10 | My understanding is that "brew install python" installs the latest version of python. Why isn't it pulling 3.10? 3.10 is marked as a stable release. I can install 3.10 with "brew install [email protected]" just fine and can update my PATH so that python and pip point to the right versions. But I am curious why "brew install python" is not installing 3.10. My other understanding is that 3.10 is directly compatible with the M1 chips, so that is why I want 3.10. Please let me know if I am mistaken. | As Henry Schreiner has pointed out, Python 3.10 is now the new default in Brew. Thanks for flagging it. --- Obsolete --- The "python3" formula is still 3.9 in the brew system; check the doc here: https://formulae.brew.sh/formula/[email protected]#default The latest version of the 3.9 formula also supports Apple silicon. If you want to use Python 3.10 you need to run, as you described, brew install [email protected] The reason why 3.9 is still the official python3 formula is that users of the vanilla python3 are generally not looking for the latest revision but for the most stable one. In a few months the transition will be done. | 18 | 32 |

70,624,413 | 2022-1-7 | https://stackoverflow.com/questions/70624413/how-to-read-from-dev-stdin-with-asyncio-create-subprocess-exec | Backround I am calling an executable from Python and need to pass a variable to the executable. The executable however expects a file and does not read from stdin. I circumvented that problem previously when using the subprocess module by simply calling the executable to read from /dev/stdin along the lines of: # with executable 'foo' cmd = ['foo', '/dev/stdin'] input_variable = 'bar' with subprocess.Popen( cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) as process: stdout, stderr = process.communicate(input_variable) print(f"{process.returncode}, {stdout}, {stderr}") This worked fine so far. In order to add concurrency, I am now implementing asyncio and as such need to replace the subprocess module with the asyncio subprocess module. Problem Calling asyncio subprocess for a program using /dev/stdin fails. Using the following async function: import asyncio async def invoke_subprocess(cmd, args, input_variable): process = await asyncio.create_subprocess_exec( cmd, args, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE, stdin=asyncio.subprocess.PIPE, ) stdout, stderr = await process.communicate(input=bytes(input_variable, 'utf-8')) print(f"{process.returncode}, {stdout.decode()}, {stderr.decode()}") This generally works for files, but fails for /dev/stdin: # 'cat' can be used for 'foo' to test the behavior asyncio.run(invoke_subprocess('foo', '/path/to/file/containing/bar', 'not used')) # works asyncio.run(invoke_subprocess('foo', '/dev/stdin', 'bar')) # fails with "No such device or address" How can I call asyncio.create_subprocess_exec on /dev/stdin? Note: I have already tried and failed via asyncio.create_subprocess_shell and writing a temporary file is not an option as the file system is readonly. Minimal example using 'cat' Script main.py: import subprocess import asyncio def invoke_subprocess(cmd, arg, input_variable): with subprocess.Popen( [cmd, arg], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) as process: stdout, stderr = process.communicate(input_variable) print(f"{process.returncode}, {stdout}, {stderr}") async def invoke_async_subprocess(cmd, arg, input_variable): process = await asyncio.create_subprocess_exec( cmd, arg, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE, stdin=asyncio.subprocess.PIPE, ) stdout, stderr = await process.communicate(input=input_variable) print(f"{process.returncode}, {stdout.decode()}, {stderr.decode()}") cmd = 'cat' arg = '/dev/stdin' input_variable = b'hello world' # normal subprocess invoke_subprocess(cmd, arg, input_variable) asyncio.run(invoke_async_subprocess(cmd, arg, input_variable)) Returns: > python3 main.py 0, b'hello world', b'' 1, , cat: /dev/stdin: No such device or address Tested on: Ubuntu 21.10, Python 3.9.7 Linux Mint 20.2, Python 3.8.10 Docker image: python:3-alpine | I'll briefly wrap up the question and summarize the outcome of the discussion. In short: The problem is related to a bug in Python's asyncio library that has been fixed by now. It should no longer occur in upcoming versions. Bug Details: In contrast to the Python subprocess library, asyncio uses a socket.socketpair() and not a pipe to communicate with the subprocess. This was introduced in order to support the AIX platform. However, it breaks when re-opening /dev/stdin that doesn't work with a socket. It was fixed by only using sockets on AIX platform. 
| 6 | 2 |
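For interpreters that still carry the bug, one possible workaround (an assumption on my part, not from the answer; Linux-only, and the payload must fit the ~64 KiB pipe buffer) is to hand the child a real pipe via /dev/fd instead of reopening /dev/stdin.

```python
# Possible workaround sketch: pass the data through an inherited pipe fd.
import asyncio
import os

async def invoke_via_fd(cmd, data: bytes):
    r, w = os.pipe()
    os.write(w, data)   # fine for small inputs; larger ones need a writer task
    os.close(w)
    process = await asyncio.create_subprocess_exec(
        cmd, f"/dev/fd/{r}",
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
        pass_fds=(r,),   # keep the read end open in the child
    )
    stdout, stderr = await process.communicate()
    os.close(r)
    return process.returncode, stdout, stderr

print(asyncio.run(invoke_via_fd('cat', b'hello world')))
```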
70,645,074 | 2022-1-9 | https://stackoverflow.com/questions/70645074/tensorflow-setup-on-rstudio-r-centos | For the last 5 days, I am trying to make Keras/Tensorflow packages work in R. I am using RStudio for installation and have used conda, miniconda, virtualenv but it crashes each time in the end. Installing a library should not be a nightmare especially when we are talking about R (one of the best statistical languages) and TensorFlow (one of the best deep learning libraries). Can someone share a reliable way to install Keras/Tensorflow on CentOS 7? Following are the steps I am using to install tensorflow in RStudio. Since RStudio simply crashes each time I run tensorflow::tf_config() I have no way to check what is going wrong. devtools::install_github("rstudio/reticulate") devtools::install_github("rstudio/keras") # This package also installs tensorflow library(reticulate) reticulate::install_miniconda() reticulate::use_miniconda("r-reticulate") library(tensorflow) tensorflow::tf_config() **# Crashes at this point** sessionInfo() R version 3.6.0 (2019-04-26) Platform: x86_64-redhat-linux-gnu (64-bit) Running under: CentOS Linux 7 (Core) Matrix products: default BLAS/LAPACK: /usr/lib64/R/lib/libRblas.so locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 [7] LC_PAPER=en_US.UTF-8 LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] tensorflow_2.7.0.9000 keras_2.7.0.9000 reticulate_1.22-9000 loaded via a namespace (and not attached): [1] Rcpp_1.0.7 lattice_0.20-45 png_0.1-7 zeallot_0.1.0 [5] rappdirs_0.3.3 grid_3.6.0 R6_2.5.1 jsonlite_1.7.2 [9] magrittr_2.0.1 tfruns_1.5.0 rlang_0.4.12 whisker_0.4 [13] Matrix_1.3-4 generics_0.1.1 tools_3.6.0 compiler_3.6.0 [17] base64enc_0.1-3 Update 1 The only way RStudio does not crash while installing tensorflow is by executing following steps - First, I created a new virtual environment using conda conda create --name py38 python=3.8.0 conda activate py38 conda install tensorflow=2.4 Then from within RStudio, I installed reticulate and activated the virtual environment which I earlier created using conda devtools::install_github("rstudio/reticulate") library(reticulate) reticulate::use_condaenv("/root/.conda/envs/py38", required = TRUE) reticulate::use_python("/root/.conda/envs/py38/bin/python3.8", required = TRUE) reticulate::py_available(initialize = TRUE) ts <- reticulate::import("tensorflow") As soon as I try to import tensorflow in RStudio, it loads the library /lib64/libstdc++.so.6 instead of /root/.conda/envs/py38/lib/libstdc++.so.6 and I get the following error - Error in py_module_import(module, convert = convert) : ImportError: Traceback (most recent call last): File "/root/.conda/envs/py38/lib/python3.8/site-packages/tensorflow/python/pywrap_tensorflow.py", line 64, in <module> from tensorflow.python._pywrap_tensorflow_internal import * File "/home/R/x86_64-redhat-linux-gnu-library/3.6/reticulate/python/rpytools/loader.py", line 39, in _import_hook module = _import( ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /root/.conda/envs/py38/lib/python3.8/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) Failed to load the native TensorFlow runtime. See https://www.tensorflow.org/install/errors for some common reasons and solutions. 
Include the entire stack trace above this error message when asking for help. Here is what inside /lib64/libstdc++.so.6 > strings /lib64/libstdc++.so.6 | grep GLIBC GLIBCXX_3.4 GLIBCXX_3.4.1 GLIBCXX_3.4.2 GLIBCXX_3.4.3 GLIBCXX_3.4.4 GLIBCXX_3.4.5 GLIBCXX_3.4.6 GLIBCXX_3.4.7 GLIBCXX_3.4.8 GLIBCXX_3.4.9 GLIBCXX_3.4.10 GLIBCXX_3.4.11 GLIBCXX_3.4.12 GLIBCXX_3.4.13 GLIBCXX_3.4.14 GLIBCXX_3.4.15 GLIBCXX_3.4.16 GLIBCXX_3.4.17 GLIBCXX_3.4.18 GLIBCXX_3.4.19 GLIBC_2.3 GLIBC_2.2.5 GLIBC_2.14 GLIBC_2.4 GLIBC_2.3.2 GLIBCXX_DEBUG_MESSAGE_LENGTH To resolve the library issue, I added the path of the correct libstdc++.so.6 library having GLIBCXX_3.4.20 in RStudio. system('export LD_LIBRARY_PATH=/root/.conda/envs/py38/lib/:$LD_LIBRARY_PATH') and, also Sys.setenv("LD_LIBRARY_PATH" = "/root/.conda/envs/py38/lib") But still I get the same error ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20'. Somehow RStudio still loads /lib64/libstdc++.so.6 first instead of /root/.conda/envs/py38/lib/libstdc++.so.6 Instead of RStudio, if I execute the above steps in the R console, then also I get the exact same error. Update 2: A solution is posted here | Update on 29 July, 2022 After months of solving this problem, I feel so stupid to have wasted time coding R on CentOS. The most popular and stable OS to code R is Ubuntu. By default, CentOS supports only the 3.6 version of R while the most stable current version of R is 4.2. With the default 3.6 version of R on CentOS, most of the libraries are outdated and they conflict with other libraries which are updated for R 4.2+. From my experience, you are going to avoid a lot of misery and frustration if you start coding R on Ubuntu. I am not sponsoring Ubuntu, the above statement is just from my experience and others might have different experiences. Original Answer Took me more than 15 days and I finally solved this problem. Boot up a clean CentOS 7 VM, install R and dependencies (taken from Jared's answer) - yum install epel-release yum install R yum install libxml2-devel yum install openssl-devel yum install libcurl-devel yum install libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver Now, create a conda environment yum install conda conda clean -a # Clean cache and remove old packages, if you already have conda installed # Install all the packages together and let conda handle versioning. It is important to give a Python version while setting up the environment. Since Tensorflow supports python 3.9.0, I have used this version conda create -y -n "tf" python=3.9.0 ipython tensorflow keras r-essentials r-reticulate r-tensorflow conda activate tf Open a new port (7878 or choose any port number you want) on the server to access RStudio with new conda environment libraries iptables -A INPUT -p tcp --dport 7878 -j ACCEPT /sbin/service iptables save then launch RStudio as follows - /usr/lib/rstudio-server/bin/rserver \ --server-daemonize=0 \ --www-port 7878 \ --rsession-which-r=$(which R) \ --rsession-ld-library-path=$CONDA_PREFIX/lib You will have your earlier environment intact on default port 8787 and a new environment with Tensorflow and Keras on 7878. The following code now works fine in RStudio install.packages("reticulate") install.packages("tensorflow") library(reticulate) library(tensorflow) ts <- reticulate::import("tensorflow") | 6 | 3 |
70,588,185 | 2022-1-5 | https://stackoverflow.com/questions/70588185/warning-the-script-pip3-8-is-installed-in-usr-local-bin-which-is-not-on-path | When running pip3.8 i get the following warning appearing in my terminal WARNING: The script pip3.8 is installed in '/usr/local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. Successfully installed pip-21.1.1 setuptools-56.0.0 WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv How to solve this problem on centos 7? | This question has been answered on the serverfaults forum: Here is a link to the question. You need to add the following line to your ~/.bash_profile or ~/.bashrc file. export PATH="/usr/local/bin:$PATH" You will then need to profile, do this by either running the command: source ~/.bash_profile Or by simply closing your terminal and opening a new session. You should continue to check your PATH to make sure it includes the path. echo $PATH | 49 | 72 |
70,641,660 | 2022-1-9 | https://stackoverflow.com/questions/70641660/how-do-you-get-and-use-a-refresh-token-for-the-dropbox-api-python-3-x | As the title says, I am trying to generate a refresh token, and then I would like to use the refresh token to get short lived Access tokens. There is a problem though, in that I'm not smart enough to understand the docs on the dropbox site, and all the other information I've found hasn't worked for me (A, B, C) or is in a language I don't understand. I have tried out all three examples from the github page, as well as user code from other questions on this site. I haven't got anything to work. The most I got was Error: 400 Client Error: Bad Request for url: api.dropboxapi.com/oauth2/token and dropbox.rest.RESTSocketError: Error connecting to "api.dropbox.com": [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123) :( | Here is how I did it. I'll try to keep it simple and precise Replace <APP_KEY> with your dropbox app key in the below Authorization URL https://www.dropbox.com/oauth2/authorize?client_id=<APP_KEY>&token_access_type=offline&response_type=code Complete the code flow on the Authorization URL. You will receive an AUTHORIZATION_CODE at the end. Go to Postman and create a new POST request with below configuration Request URL- https://api.dropboxapi.com/oauth2/token Authorization -> Type = Basic Auth -> Username = <APP_KEY> , Password = <APP_SECRET> (Refer this answer for cURL -u option) Body -> Select "x-www-form-urlencoded" Key Value code <AUTHORIZATION_CODE> grant_type authorization_code After you send the request, you will receive JSON payload containing refresh_token. { "access_token": "sl.****************", "token_type": "bearer", "expires_in": 14400, "refresh_token": "*********************", "scope": <SCOPES>, "uid": "**********", "account_id": "***********************" } In your python application, import dropbox dbx = dropbox.Dropbox( app_key = <APP_KEY>, app_secret = <APP_SECRET>, oauth2_refresh_token = <REFRESH_TOKEN> ) Hope this works for you too! | 15 | 37 |
70,639,556 | 2022-1-9 | https://stackoverflow.com/questions/70639556/is-it-possible-to-use-pydantic-instead-of-dataclasses-in-structured-configs-in-h | Recently I have started to use hydra to manage the configs in my application. I use Structured Configs to create schema for .yaml config files. Structured Configs in Hyda uses dataclasses for type checking. However, I also want to use some kind of validators for some of the parameter I specify in my Structured Configs (something like this). Do you know if it is somehow possible to use Pydantic for this purpose? When I try to use Pydantic, OmegaConf complains about it: omegaconf.errors.ValidationError: Input class 'SomeClass' is not a structured config. did you forget to decorate it as a dataclass? | For those of you wondering how this works exactly, here is an example of it: import hydra from hydra.core.config_store import ConfigStore from omegaconf import OmegaConf from pydantic.dataclasses import dataclass from pydantic import validator @dataclass class MyConfigSchema: some_var: float @validator("some_var") def validate_some_var(cls, some_var: float) -> float: if some_var < 0: raise ValueError(f"'some_var' can't be less than 0, got: {some_var}") return some_var cs = ConfigStore.instance() cs.store(name="config_schema", node=MyConfigSchema) @hydra.main(config_path="/path/to/configs", config_name="config") def my_app(config: MyConfigSchema) -> None: # The 'validator' methods will be called when you run the line below OmegaConf.to_object(config) if __name__ == "__main__": my_app() And config.yaml : defaults: - config_schema some_var: -1 # this will raise a ValueError | 12 | 12 |
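The same token exchange from step 3 can be done without Postman; below is a sketch using requests, where the three placeholder strings come from steps 1–2 above.

```python
# Exchange the one-time AUTHORIZATION_CODE for a long-lived refresh token.
import requests

resp = requests.post(
    "https://api.dropboxapi.com/oauth2/token",
    auth=("<APP_KEY>", "<APP_SECRET>"),  # HTTP Basic auth, as in the Postman setup
    data={
        "code": "<AUTHORIZATION_CODE>",
        "grant_type": "authorization_code",
    },
)
resp.raise_for_status()
print(resp.json()["refresh_token"])
```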
70,669,213 | 2022-1-11 | https://stackoverflow.com/questions/70669213/gyp-err-stack-error-command-failed-python-c-import-sys-print-s-s-s-s | I'm trying to npm install in a Vue project, and even if I just ran vue create (name) it gives me this err: npm ERR! gyp verb check python checking for Python executable "c:\Python310\python.exe" in the PATH npm ERR! gyp verb `which` succeeded c:\Python310\python.exe c:\Python310\python.exe npm ERR! gyp ERR! configure error npm ERR! gyp ERR! stack Error: Command failed: c:\Python310\python.exe -c import sys; print "%s.%s.%s" % sys.version_info[:3]; npm ERR! gyp ERR! stack File "<string>", line 1 npm ERR! gyp ERR! stack import sys; print "%s.%s.%s" % sys.version_info[:3]; npm ERR! gyp ERR! stack ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ npm ERR! gyp ERR! stack SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)? npm ERR! gyp ERR! stack npm ERR! gyp ERR! stack at ChildProcess.exithandler (node:child_process:397:12) npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:390:28) npm ERR! gyp ERR! stack at maybeClose (node:internal/child_process:1064:16) npm ERR! gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:301:5) npm ERR! gyp ERR! System Windows_NT 10.0.19044 npm ERR! gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\Upwork\\contact_book\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library=" npm ERR! gyp ERR! cwd C:\Upwork\contact_book\node_modules\node-sass npm ERR! gyp ERR! node -v v16.13.1 npm ERR! gyp ERR! node-gyp -v v3.8.0 npm ERR! gyp ERR! not ok npm ERR! Build failed with error code: 1 I tried it in another PC but it is working fine, I think it is because I need to install something (since the PC is new) | As @MehdiMamas pointed out in the comments, downgrading Node to v14 should solve the problem nvm install 14 nvm use 14 | 10 | 16 |
70,648,404 | 2022-1-10 | https://stackoverflow.com/questions/70648404/syntaxerror-multiple-exception-types-must-be-parenthesized | I am a beginner and have a problem after installing pycaw for the audio control using python, on putting the basic initialization code for pycaw, i get the following error:- Traceback (most recent call last): File "c:\Users\...\volumeControl.py", line 7, in <module> from comtypes import CLSCTX_ALL File "C:\...\env\lib\site-packages\comtypes\__init__.py", line 375 except COMError, err: ^^^^^^^^^^^^^ SyntaxError: multiple exception types must be parenthesized Basic initialization:- from ctypes import cast, POINTER from comtypes import CLSCTX_ALL from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume devices = AudioUtilities.GetSpeakers() interface = devices.Activate( IAudioEndpointVolume._iid_, CLSCTX_ALL, None) volume = cast(interface, POINTER(IAudioEndpointVolume)) I tried searching for this all over the web but could not find a fix I also trying going into the module file inside the virtual env and parenthize by putting brackets around COMError, err But same error with other lines in code came, Also tried reinstalling pycaw and trying to install different versions of pycaw several times but nothing fixed How to fix this error? | After a time searching I found that comtypes uses a tool to be compatible with both python 2 and 3 and that is no longer works in new versions. I had to downgrade two packages and reinstall comtypes: pip install setuptools==57.0.0 --force-reinstall pip install wheel==0.36.2 --force-reinstall pip uninstall comtypes pip install --no-cache-dir comtypes | 18 | 6 |
70,622,426 | 2022-1-7 | https://stackoverflow.com/questions/70622426/centos-8-firewalld-error-command-failed-python-nftables-failed | when I try to reload firewalld, it tells me Error: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: Numerical result out of range JSON blob: {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"chain": {"family": "inet", "table": "firewalld", "name": "filter_IN_policy_allow-host-ipv6"}}}]} I don't know why this is, after Google, it still hasn't been resolved | I had the same error message. I enabled verbose debugs on firewalld and tailed the logs to file for a deeper dive. In my case the exception was originally happening in "nftables.py" on line "361". Exception: 2022-01-23 14:00:23 DEBUG3: <class 'firewall.core.nftables.nftables'>: calling python-nftables with JSON blob: {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"chain": {"family": "inet", "table": "firewalld", "name": "filter_IN_policy_allow-host-ipv6"}}}]} 2022-01-23 14:00:23 DEBUG1: Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/firewall/core/fw.py", line 888, in rules backend.set_rule(rule, self._log_denied) File "/usr/lib/python3.6/site-packages/firewall/core/nftables.py", line 390, in set_rule self.set_rules([rule], log_denied) File "/usr/lib/python3.6/site-packages/firewall/core/nftables.py", line 361, in set_rules raise ValueError("'%s' failed: %s\nJSON blob:\n%s" % ("python-nftables", error, json.dumps(json_blob))) ValueError: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: Numerical result out of range Line 361 in "nftables.py": self._loader(config.FIREWALLD_POLICIES, "policy") Why this is a problem: Basically nftables is a backend service and firewalld is a frontend service. They are dependent on each other to function. Each time you restart firewalld it has to reconcile the backend, in this case nftables. At some point during the reconciliation a conflict is occurring in the python code. That is unfortunate as the only real solution will likely have to come from code improvements from nftables in how it is able to populate policies into chains and tables. A work-around: The good news is, if you are like me, you don't use ipv6, in which case we simply disable the policy rather than solve for the issue. I'll put the work-around steps below. Work-around Steps: The proper way to remove the policy is to use the command "firewall-cmd --delete-policy=allow-host-ipv6 --permanent" but I encountered other errors and exceptions in python when attempting to do that. Since I don't care about ipv6 I manually deleted the XML from configuration and restarted the firewalld service. rm /usr/lib/firewalld/policies/allow-host-ipv6.xml rm /etc/firewalld/policies/allow-host-ipv6.xml systemctl restart firewalld Side Note: Once I fixed this conflict, I also had some additional conflicts between nftables/iptables/fail2ban that had to be cleared up. For that I just used the command "fail2ban-client unban --all" to make fail2ban wipe clean all of the chains it added to iptables. | 8 | 2 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.