question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
69,571,919 | 2021-10-14 | https://stackoverflow.com/questions/69571919/heroku-error-while-deploying-error-rpc-failed-http-504-curl-22-the-requested | I had no problems in the past with the deployment to Heroku via HTTP transport, but recently I am unable to deploy. This is the error I am getting: Enumerating objects: 58668, done. Counting objects: 100% (57434/57434), done. Delta compression using up to 16 threads Compressing objects: 100% (16705/16705), done. Writing objects: 100% (57124/57124), 50.77 MiB | 76.23 MiB/s, done. Total 57124 (delta 44149), reused 52353 (delta 40249) error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 fatal: the remote end hung up unexpectedly fatal: the remote end hung up unexpectedly I've tried switching to SSH transport and it works, but Heroku is retiring SSH transport, so I need to figure out this error. Also I tried to change the postBuffer according to atlassian's page, but I got the error again. git config --global http.postBuffer 157286400 Does anybody have an idea how to solve this? There are very few resources on the web that I found, and none of them are fixing the problem. | I raised a support ticket to Heroku and the answer was to reset the Git repo heroku plugins:install heroku-repo heroku repo:reset -a <app-name> After I did this, I had no problems with the deployment | 8 | 13 |
69,570,682 | 2021-10-14 | https://stackoverflow.com/questions/69570682/how-to-setup-django-permissions-to-be-specific-to-a-certain-models-instances | Please consider a simple Django app containing a central model called Project. Other resources of this app are always tied to a specific Project. Exemplary code: class Project(models.Model): pass class Page(models.Model): project = models.ForeignKey(Project) I'd like to leverage Django's permission system to set granular permissions per existing project. In the example's case, a user should be able to have a view_page permission for some project instances, and don't have it for others. In the end, I would like to have a function like has_perm that takes the permission codename and a project as input and returns True if the current user has the permission in the given project. Is there a way to extend or replace Django's authorization system to achieve something like this? I could extend the user's Group model to include a link to Project and check both, the group's project and its permissions. But that's not elegant and doesn't allow for assigning permissions to single users. Somewhat related questions on the Django forum can be found here: Authorization on sets of resources How are you handling user permissions in more complex projects? Related StackOverflow questions: Django permissions via related objects permissions | I wasn't quite happy with the answers that were (thankfully!) proposed because they seemed to introduce overhead, either in complexity or maintenance. For django-guardian in particular I would have needed a way to keep those object-level permissions up-to-date while potentially suffering from (slight) performance loss. The same is true for dynamically creating permissions; I would have needed a way to keep those up-to-date and would deviate from the standard way of defining permissions (only) in the models. But both answers actually encouraged me to take a more detailed look at Django's authentication and authorization system. That's when I realized that it's quite feasible to extend it to my needs (as it is so often with Django). I solved this by introducing a new model, ProjectPermission, that links a Permission to a project and can be assigned to users and groups. This model represents the fact that a user or group has a permission for a specific project. To utilize this model, I extended ModelBackend and introduced a parallel permission check, has_project_perm, that checks if a user has a permission for a specific project. The code is mostly analogous to the default path of has_perm as defined in ModelBackend. By leveraging the default permission check, has_project_perm will return True if the user either has the project-specific permission or has the permission in the old-fashioned way (that I termed "global"). Doing so allows me to assign permissions that are valid for all projects without stating them explicitly. Lastly, I extended my custom user model to access the new permission check by introducing a new method, has_project_perm. 
# models.py from django.contrib import auth from django.contrib.auth.models import AbstractUser, Group, Permission from django.core.exceptions import PermissionDenied from django.db import models from showbase.users.models import User class ProjectPermission(models.Model): """A permission that is valid for a specific project.""" project = models.ForeignKey(Project, on_delete=models.CASCADE) base_permission = models.ForeignKey( Permission, on_delete=models.CASCADE, related_name="project_permission" ) users = models.ManyToManyField(User, related_name="user_project_permissions") groups = models.ManyToManyField(Group, related_name="project_permissions") class Meta: indexes = [models.Index(fields=["project", "base_permission"])] unique_together = ["project", "base_permission"] def _user_has_project_perm(user, perm, project): """ A backend can raise `PermissionDenied` to short-circuit permission checking. """ for backend in auth.get_backends(): if not hasattr(backend, "has_project_perm"): continue try: if backend.has_project_perm(user, perm, project): return True except PermissionDenied: return False return False class User(AbstractUser): def has_project_perm(self, perm, project): """Return True if the user has the specified permission in a project.""" # Active superusers have all permissions. if self.is_active and self.is_superuser: return True # Otherwise we need to check the backends. return _user_has_project_perm(self, perm, project) # auth_backends.py from django.contrib.auth import get_user_model from django.contrib.auth.backends import ModelBackend from django.contrib.auth.models import Permission class ProjectBackend(ModelBackend): """A backend that understands project-specific authorization.""" def _get_user_project_permissions(self, user_obj, project): return Permission.objects.filter( project_permission__users=user_obj, project_permission__project=project ) def _get_group_project_permissions(self, user_obj, project): user_groups_field = get_user_model()._meta.get_field("groups") user_groups_query = ( "project_permission__groups__%s" % user_groups_field.related_query_name() ) return Permission.objects.filter( **{user_groups_query: user_obj}, project_permission__project=project ) def _get_project_permissions(self, user_obj, project, from_name): if not user_obj.is_active or user_obj.is_anonymous: return set() perm_cache_name = f"_{from_name}_project_{project.pk}_perm_cache" if not hasattr(user_obj, perm_cache_name): if user_obj.is_superuser: perms = Permission.objects.all() else: perms = getattr(self, "_get_%s_project_permissions" % from_name)( user_obj, project ) perms = perms.values_list("content_type__app_label", "codename").order_by() setattr( user_obj, perm_cache_name, {"%s.%s" % (ct, name) for ct, name in perms} ) return getattr(user_obj, perm_cache_name) def get_user_project_permissions(self, user_obj, project): return self._get_project_permissions(user_obj, project, "user") def get_group_project_permissions(self, user_obj, project): return self._get_project_permissions(user_obj, project, "group") def get_all_project_permissions(self, user_obj, project): return { *self.get_user_project_permissions(user_obj, project), *self.get_group_project_permissions(user_obj, project), *self.get_user_permissions(user_obj), *self.get_group_permissions(user_obj), } def has_project_perm(self, user_obj, perm, project): return perm in self.get_all_project_permissions(user_obj, project) # settings.py AUTHENTICATION_BACKENDS = ["django_project.projects.auth_backends.ProjectBackend"] | 9 | 4 |
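A minimal usage sketch for the project-scoped permissions above: granting one user the `view_page` permission inside a single project and then checking it. The app label `pages`, the import path `myapp.models`, and the helper name are assumptions for illustration; the `ProjectPermission` model and the `has_project_perm` call come from the answer itself.

```python
# Hedged usage sketch for the ProjectPermission approach from the answer.
# "pages" as app label and the myapp.models import path are assumptions.
from django.contrib.auth.models import Permission

from myapp.models import ProjectPermission  # the model defined in the answer

def grant_view_page(user, project):
    """Give `user` permission to view pages, but only inside `project`."""
    base = Permission.objects.get(
        codename="view_page", content_type__app_label="pages"
    )
    project_perm, _ = ProjectPermission.objects.get_or_create(
        project=project, base_permission=base
    )
    project_perm.users.add(user)

# Later, e.g. inside a view, with ProjectBackend in AUTHENTICATION_BACKENDS:
#   if not request.user.has_project_perm("pages.view_page", project):
#       raise PermissionDenied
```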
69,561,572 | 2021-10-13 | https://stackoverflow.com/questions/69561572/sqlalchemy-with-multiple-binds-dynamically-choose-bind-to-query | I have 4 different databases, one for each one of my customers (medical clinics), which all of them have the exact same structure. In my application, I have models such as Patient, Doctor, Appointment, etc. Let's take one of them as an example: class Patient(db.Model): __tablename__ = "patients" id = Column(Integer, primary_key=True) first_name = Column(String, index=True) last_name = Column(String, index=True) date_of_birth = Column(Date, index=True) I've figured out that with the help of binds I can create different databases and associate each model to a different bind. So I have this configuration: app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:pass@localhost/main' app.config['SQLALCHEMY_BINDS'] = { 'clinic1':'mysql://user:pass@localhost/clinic1', 'clinic2':'mysql://user:pass@localhost/clinic2', 'clinic3':'mysql://user:pass@localhost/clinic3', 'clinic4':'mysql://user:pass@localhost/clinic4' } Now I'm trying to achieve two things: I want that when I create tables using db.create_all() it will create the patients table in all 4 databases (clinic1->clinic4) I want to be able to choose a specific bind dynamically (in runtime), so that any query such as Patient.query.filter().count() will run against the chosen bind database Ideally, it would behave like this: with DbContext(bind='client1'): patients_count = Patient.query.filter().count() print(patients_count) # outside of the `with` context we are back to the default bind However, doing this: patients_count = Patient.query.filter().count() without specifying a bind, will raise an error (as the patients table does not exist in the default bind) Any code example that can guide how this can be done would be highly appreciated! P.S. It might be that you would suggest not to use different databases and instead use one with different columns / tables but please stick to my example and try to explain how this can be done using this pattern of multiple identical databases Thanks! | 1. Create tables in all binds Observation: db.create_all() calls self.get_tables_for_bind(). Solution: Override SQLAlchemy get_tables_for_bind() to support '__all__'. class MySQLAlchemy(SQLAlchemy): def get_tables_for_bind(self, bind=None): result = [] for table in self.Model.metadata.tables.values(): # if table.info.get('bind_key') == bind: if table.info.get('bind_key') == bind or (bind is not None and table.info.get('bind_key') == '__all__'): result.append(table) return result Usage: # db = SQLAlchemy(app) # Replace this db = MySQLAlchemy(app) # with this db.create_all() 2. Choose a specific bind dynamically Observation: SignallingSession get_bind() is responsible for determining the bind. Solution: Override SignallingSession get_bind() to get the bind key from some context. Override SQLAlchemy create_session() to use our custom session class. Support the context to choose a specific bind on db for accessibility. Force a context to be specified for tables with '__all__' as bind key, by overriding SQLAlchemy get_binds() to restore the default engine. 
class MySignallingSession(SignallingSession): def __init__(self, db, *args, **kwargs): super().__init__(db, *args, **kwargs) self.db = db def get_bind(self, mapper=None, clause=None): if mapper is not None: info = getattr(mapper.persist_selectable, 'info', {}) if info.get('bind_key') == '__all__': info['bind_key'] = self.db.context_bind_key try: return super().get_bind(mapper=mapper, clause=clause) finally: info['bind_key'] = '__all__' return super().get_bind(mapper=mapper, clause=clause) class MySQLAlchemy(SQLAlchemy): context_bind_key = None @contextmanager def context(self, bind=None): _context_bind_key = self.context_bind_key try: self.context_bind_key = bind yield finally: self.context_bind_key = _context_bind_key def create_session(self, options): return orm.sessionmaker(class_=MySignallingSession, db=self, **options) def get_binds(self, app=None): binds = super().get_binds(app=app) # Restore default engine for table.info.get('bind_key') == '__all__' app = self.get_app(app) engine = self.get_engine(app, None) tables = self.get_tables_for_bind('__all__') binds.update(dict((table, engine) for table in tables)) return binds def get_tables_for_bind(self, bind=None): result = [] for table in self.Model.metadata.tables.values(): if table.info.get('bind_key') == bind or (bind is not None and table.info.get('bind_key') == '__all__'): result.append(table) return result Usage: class Patient(db.Model): __tablename__ = "patients" __bind_key__ = "__all__" # Add this Test case: with db.context(bind='clinic1'): db.session.add(Patient()) db.session.flush() # Flush in 'clinic1' with db.context(bind='clinic2'): patients_count = Patient.query.filter().count() print(patients_count) # 0 in 'clinic2' patients_count = Patient.query.filter().count() print(patients_count) # 1 in 'clinic1' About foreign keys referencing the default bind You have to specify the schema. Limitations: MySQL: Binds must be in the same MySQL instance. Otherwise, it has to be a plain column. The foreign object in the default bind must already be committed. Otherwise, when inserting an object that references it, you will get this lock error: MySQLdb._exceptions.OperationalError: (1205, 'Lock wait timeout exceeded; try restarting transaction') SQLite: Foreign keys across databases are not enforced. Usage: # app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:pass@localhost/main' class PatientType(db.Model): __tablename__ = "patient_types" __table_args__ = {"schema": "main"} # Add this, based on database name id = Column(Integer, primary_key=True) # ... class Patient(db.Model): __tablename__ = "patients" __bind_key__ = "__all__" id = Column(Integer, primary_key=True) # ... # patient_type_id = Column(Integer, ForeignKey("patient_types.id")) # Replace this patient_type_id = Column(Integer, ForeignKey("main.patient_types.id")) # with this patient_type = relationship("PatientType") Test case: patient_type = PatientType.query.first() if not patient_type: patient_type = PatientType() db.session.add(patient_type) db.session.commit() # Commit to reference from other binds with db.context(bind='clinic1'): db.session.add(Patient(patient_type=patient_type)) db.session.flush() # Flush in 'clinic1' with db.context(bind='clinic2'): patients_count = Patient.query.filter().count() print(patients_count) # 0 in 'clinic2' patients_count = Patient.query.filter().count() print(patients_count) # 1 in 'clinic1' | 6 | 15 |
69,611,485 | 2021-10-18 | https://stackoverflow.com/questions/69611485/react-to-django-cors-issue | Error Details Two requests have been generating on button click. What did I search so far? Axios blocked by CORS policy with Django REST Framework CORS issue with react and django-rest-framework but to no avail What am I doing? Submitting POST request from react to DJango API Django side settings file CORS_ORIGIN_ALLOW_ALL = True ALLOWED_HOSTS = [ "http://127.0.0.1:3000", "http://127.0.0.1", "http://localhost:3000", "http://localhost" ] CORS_ORIGIN_WHITELIST = [ "http://127.0.0.1:3000", "http://127.0.0.1", "http://localhost:3000", "http://localhost" ] INSTALLED_APPS = [ ......, "corsheaders" ] MIDDLEWARE = [ ........., 'corsheaders.middleware.CorsMiddleware', 'django.middleware.common.CommonMiddleware', ] React axios request function authenticate() { let body = { "email": "ac", "password": "def" }; const headers = { 'Access-Control-Allow-Origin': '*', 'Content-Type': 'application/json', } axios.post("http://127.0.0.1:8000/login/", body, { headers: headers }) .then(function(response) { console.log(response.data); }) .catch(function(error) { console.log(error); }); } Tried another approach using fetch, but to no avail function authenticate() { let body = { "email": "hi", "password": "pass" }; const headers = { 'Content-Type': 'application/json', } fetch("http://127.0.0.1:8000/login", { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(body) }) .then(function(response) { console.log(response.data); }) .catch(function(error) { console.log(error); }); } DJango side method def Login(request): if(request.method == "POST"): return JsonResponse({"message" : "Invalid credentials"}, status=401) | Below settings work for me CORS_ORIGIN_ALLOW_ALL = True ALLOWED_HOSTS = [ "127.0.0.1", ] CORS_ALLOWED_ORIGINS = [ "http://127.0.0.1", ] CORS_ALLOW_CREDENTIALS = False INSTALLED_APPS = [ ..... "corsheaders" ] MIDDLEWARE = [ ...... 'corsheaders.middleware.CorsMiddleware', 'django.middleware.common.CommonMiddleware', ] | 6 | 4 |
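One way to sanity-check the settings in the answer without a browser is to replay the preflight request the browser would send; the URL and origin below simply mirror the question's setup and are assumptions.

```python
# Hedged check: replay the CORS preflight that precedes the POST from React.
import requests

resp = requests.options(
    "http://127.0.0.1:8000/login/",
    headers={
        "Origin": "http://localhost:3000",
        "Access-Control-Request-Method": "POST",
        "Access-Control-Request-Headers": "content-type",
    },
)
print(resp.status_code)
print(resp.headers.get("Access-Control-Allow-Origin"))  # should be set by corsheaders
print(resp.headers.get("Access-Control-Allow-Headers"))
```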
69,615,605 | 2021-10-18 | https://stackoverflow.com/questions/69615605/importing-custom-plugins-in-airflow-2-cloud-composer | I have a directory structure as such: airflow_dags βββ dags β βββ hk β βββ hk_dag.py βββ plugins β βββ cse β βββ operators.py β βββ cse_to_bq.py βββ test βββ dags βββ dag_test.py In the GCS bucket created by Cloud Composer, there's a plugin folder where I upload the cse folder. Now in my hk_dag.py file if I import the plugin like this: from plugins.cse.operators.cse_to_bq import CSEToBQOperator and run my unit test, it passes, but in cloud composer I get a ModuleNotFoundError: No module named 'plugins' error message. If I import the plugin like this in my hk_dag.py: from cse.operators.cse_to_bq import CSEToBQOperator My unit test fails with ModuleNotFoundError: No module named 'cse' but it works fine in Cloud Composer. How do I resolve it? | In Airflow 2.0 to import your plugin you just need to do it directly from the operators module. In your case, has to be something like: from operators.cse_to_bq import CSEToBQOperator But before that you have to change your folder structure to: airflow_dags βββ dags β βββ hk β βββ hk_dag.py βββ plugins β βββ operators β βββ cse β βββ cse_to_bq.py βββ test βββ dags βββ dag_test.py | 5 | 6 |
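A minimal sketch of what the relocated plugin file and the DAG-side import could look like under the folder layout recommended in the answer; only the class name comes from the question, and the operator body is a placeholder.

```python
# plugins/operators/cse_to_bq.py -- placeholder sketch, assuming Airflow 2.x
from airflow.models.baseoperator import BaseOperator

class CSEToBQOperator(BaseOperator):
    """Skeleton only; the real CSE-to-BigQuery logic from the question is omitted."""

    def execute(self, context):
        self.log.info("CSE -> BQ transfer would run here")

# dags/hk/hk_dag.py then imports it the way the answer describes:
#   from operators.cse_to_bq import CSEToBQOperator
```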
69,636,389 | 2021-10-19 | https://stackoverflow.com/questions/69636389/deduplication-merging-of-mutable-data-in-python | High-level view of the problem I have X sources that contain info about assets (hostname, IPs, MACs, os, etc.) in our environment. The sources contain anywhere from 1500 to 150k entries (at least the ones I use now). My script is supposed to query each of them, gather that data, deduplicate it by merging info about the same assets from different sources, and return unified list of all entries. My current implementation does work, but it's slow for bigger datasets. I'm curious if there is better way to accomplish what I'm trying to do. Universal problem Deduplication of data by merging similar entries with the caveat that merging two assets might change whether the resulting asset will be similar to the third asset that was similar to the first two before merging. Example: ~ similarity, + merging (before) A ~ B ~ C (after) (A+B) ~ C or (A+B) !~ C I tried looking for people having the same issue, I only found What is an elegant way to remove duplicate mutable objects in a list in Python?, but it didn't include merging of data which is crucial in my case. The classes used Simplified for ease of reading and understanding with unneeded parts removed - general functionality is intact. class Entry: def __init__(self, source: List[str], mac: List[str] = [], ip: List[str] = [], hostname: List[str] = [], os: OS = OS.UNKNOWN, details: dict = {}): # SO: Sorting and sanitization removed for simplicity self.source = source self.mac = mac self.ip = ip self.hostname = hostname self.os = os self.details = details def __eq__(self, other): if isinstance(other, Entry): return (self.source == other.source and self.os == other.os and self.hostname == other.hostname and self.mac == other.mac and self.ip == other.ip) return NotImplemented def is_similar(self, other) -> bool: def same_entry(l1: list, l2: list) -> bool: return not set(l1).isdisjoint(l2) if isinstance(other, Entry): if self.os == OS.UNKNOWN or other.os == OS.UNKNOWN or self.os == other.os: empty_hostnames = self.hostname == [] or other.hostname == [] empty_macs = self.mac == [] or other.mac == [] return (same_entry(self.hostname, other.hostname) or (empty_hostnames and same_entry(self.mac, other.mac)) or (empty_hostnames and empty_macs and same_entry(self.ip, other.ip))) return False def merge(self, other: 'Entry'): self.source = _merge_lists(self.source, other.source) self.hostname = _merge_lists(self.hostname, other.hostname) self.mac = _merge_lists(self.mac, other.mac) self.ip = _merge_lists(self.ip, other.ip) self.os = self.os if self.os != OS.UNKNOWN else other.os self.details = _merge_dicts(self.details, other.details) def representation(self) -> str: # Might be useful if anyone wishes to run the code return f'<Entry from {self.source}: hostname={self.hostname}, MAC={self.mac}, IP={self.ip}, OS={self.os.value}, details={self.details}>' def _merge_lists(l1: list, l2: list): return list(set(l1) | set(l2)) def _merge_dicts(d1: dict, d2: dict): """ Merge two dicts without overwriting any data. """ # If either is empty, return the other one if not d1: return d2 if not d2: return d1 if d1 == d2: return d1 result = d1 for k, v in d2.items(): if k in result: result[k + '_'] = v else: result[k] = v return result class OS(Enum): ''' Enum specifying the operating system of the asset. 
''' UNKNOWN = 'Unknown' WINDOWS = 'Windows' LINUX = 'Linux' MACOS = 'MacOS' Algorithms Eeach algorithm take a list of lists of entries from different sources, eq: entries = [[entries from source A], [entries from source B], ..., [entries from source Z]] Main deduplication function It's the main function used in each algorithm. It takes list of entries from 2 different sources and combines that into list containing assets with information merged if needed. It's probably the part I need help the most. It's the only way I could think of. Because of that, I focused on how to run this function multiple times faster, but making this one faster would be the best in terms of reducing runtime. def deduplicate(en1: List[Entry], en2: List[Entry]) -> List[Entry]: """ Deduplicates entries from provided lists by merging similar entries. Entries in the lists are supposed to be already deduplicated. """ # If either is empty, return the other one if not en1: return en2 if not en2: return en1 result = [] # Iterate over longer and check for similar in shorter if len(en2) > len(en1): en1, en2 = en2, en1 for e in en1: # walrus operator in Python 3.8 or newer while (similar := next((y for y in en2 if y.is_similar(e)), None)) is not None: e.merge(similar) en2.remove(similar) del similar result.append(e) result.extend(en2) return result A reason why normal deduplication (eg. using sets) isn't applicable here is because of merging one entry with another new entries might become similar, eg.: In [2]: e1 = Entry(['SRC_A'], [], ['1.1.1.1'], [], OS.UNKNOWN) In [3]: e2 = Entry(['SRC_A'], ['aa:bb:cc:dd:ee:ff'], ['1.1.1.1'], [], OS.UNKNOWN) In [4]: e3 = Entry(['SRC_A'], ['aa:bb:cc:dd:ee:ff'], [], [], OS.UNKNOWN) In [5]: e1.is_similar(e2) Out[5]: True In [6]: e1.is_similar(e3) # at first it's not similar Out[6]: False In [7]: e1.merge(e2) In [8]: e1.is_similar(e3) # but after merging it is Out[8]: True 1st approach - sequential My first idea was the simplest one, just simple recursion. def dedup_multiple(lists: List[List[Entry]]) -> List[Entry]: """Deduplication helper allowing for providing more than 2 sources.""" if len(lists) == 1: return lists[0] return deduplicate(lists[0], dedup_multiple(lists[1:])) 2nd approach - multithreading using Pool That's the approach I'm using at the moment. So far it's the fastest one and fairly simple. def async_dedup(lists: List[List[Entry]]) -> List[Entry]: """Asynchronous deduplication helper allowing for providing more than 2 sources.""" with mp.Pool() as pool: while len(lists) > 1: if len(lists) % 2 == 1: lists.append([]) data = [(lists[i], lists[i+1]) for i in range(0, len(lists), 2)] lists = pool.map_async(_internal_deduplication, data).get() return lists[0] def _internal_deduplication(en): return deduplicate(*en) But I realized really fast that if one task takes much longer than the rest (for example because deduplicating the biggest source), everything else wait instead of working. 3rd approach - multithreading using Queue and Process As I was trying to speed up 2nd approach I came across How to use python multiprocessing pool in continuous loop and Filling a queue and managing multiprocessing in python, and I came up with the following solution. 
def async_dedup2(lists: List[List[Entry]]) -> List[Entry]: tasks_number = min(os.cpu_count(), len(lists) // 2) args = lists[:tasks_number] with mp.Manager() as manager: queue = manager.Queue() for l in lists[tasks_number:]: queue.put(l) processes = [] for arg in args: proc = mp.Process(target=test, args=(queue, arg, )) proc.start() processes.append(proc) for proc in processes: proc.join() return queue.get() def test(queue: mp.Queue, arg: List[Entry]): while not queue.empty(): try: arg2: List[Entry] = queue.get() except Empty: continue arg = deduplicate(arg, arg2) queue.put(arg) I thought it would be the best solution as there wouldn't be a moment when a data isn't processed if possible, but after testing it was almost always slightly slower than 2nd approach. Runtime comparison Source A 1510 Source B 1509 Source C 5000 Source D 4460 Source E 5000 Source F 2084 Deduplicating..... SYNC - Execution time: 188.6127771000 - Count: 13540 ASYNC - Execution time: 68.249583 - Count: 13532 ASYNC2 - Execution time: 69.416046 - Count: 13532 Source A 1510 Source B 1509 Source C 11821 Source D 13871 Source E 5001 Source F 2333 Deduplicating..... ASYNC - Execution time: 424.405793 - Count: 26229 ASYNC2 - Execution time: 522.697551 - Count: 26405 | Summary: we define two sketch functions f and g from entries to sets of βsketchesβ such that two entries e and eβ² are similar if and only if f(e) β© g(eβ²) β β
. Then we can identify merges efficiently (see the algorithm at the end). Iβm actually going to define four sketch functions, fos, faddr, gos, and gaddr, from which we construct f(e) = {(x, y) | x β fos(e), y β faddr(e)} g(e) = {(x, y) | x β gos(e), y β gaddr(e)}. fos and gos are the simpler of the four. fos(e) includes (1, e.os), if e.os is known (2,), if e.os is known (3,), if e.os is unknown. gos(e) includes (1, e.os), if e.os is known (2,), if e.os is unknown (3,). faddr and gaddr are more complicated because there are prioritized attributes, and they can have multiple values. Nevertheless, the same trick can be made to work. faddr(e) includes (1, h) for each h in e.hostname (2, m) for each m in e.mac, if e.hostname is nonempty (3, m) for each m in e.mac, if e.hostname is empty (4, i) for each i in e.ip, if e.hostname and e.mac are nonempty (5, i) for each i in e.ip, if e.hostname is empty and e.mac is nonempty (6, i) for each i in e.ip, if e.hostname is nonempty and e.mac is empty (7, i) for each i in e.ip, if e.hostname and e.mac are empty. gaddr(e) includes (1, h) for each h in e.hostname (2, m) for each m in e.mac, if e.hostname is empty (3, m) for each m in e.mac (4, i) for each i in e.ip, if e.hostname is empty and e.mac is empty (5, i) for each i in e.ip, if e.mac is empty (6, i) for each i in e.ip, if e.hostname is empty (7, i) for each i in e.ip. The rest of the algorithm is as follows. Initialize a defaultdict(list) mapping a sketch to a list of entry identifiers. For each entry, for each of the entryβs f-sketches, add the entryβs identifier to the appropriate list in the defaultdict. Initialize a set of edges. For each entry, for each of the entryβs g-sketches, look up the g-sketch in the defaultdict and add an edge from the entryβs identifiers to each of the other identifiers in the list. Now that we have a set of edges, we run into the problem that @btilly noted. My first instinct as a computer scientist is to find connected components, but of course, merging two entries may cause some incident edges to disappear. Instead you can use the edges as candidates for merging, and repeat until the algorithm above returns no edges. 
import collections import itertools Entry = collections.namedtuple("Entry", ("os", "hostname", "mac", "ip")) UNKNOWN = "UNKNOWN" WINDOWS = "WINDOWS" LINUX = "LINUX" def f_os(e): if e.os != UNKNOWN: yield (1, e.os) if e.os != UNKNOWN: yield (2,) if e.os == UNKNOWN: yield (3,) def g_os(e): if e.os != UNKNOWN: yield (1, e.os) if e.os == UNKNOWN: yield (2,) yield (3,) def f_addr(e): for h in e.hostname: yield (1, h) if e.hostname: for m in e.mac: yield (2, m) if not e.hostname: for m in e.mac: yield (3, m) if e.hostname and e.mac: for i in e.ip: yield (4, i) if not e.hostname and e.mac: for i in e.ip: yield (5, i) if e.hostname and not e.mac: for i in e.ip: yield (6, i) if not e.hostname and not e.mac: for i in e.ip: yield (7, i) def g_addr(e): for h in e.hostname: yield (1, h) if not e.hostname: for m in e.mac: yield (2, m) for m in e.mac: yield (3, m) if not e.hostname and not e.mac: for i in e.ip: yield (4, i) if not e.mac: for i in e.ip: yield (5, i) if not e.hostname: for i in e.ip: yield (6, i) for i in e.ip: yield (7, i) def f(e): return set(itertools.product(f_os(e), f_addr(e))) def g(e): return set(itertools.product(g_os(e), g_addr(e))) def is_similar(e, e_prime): return not f(e).isdisjoint(g(e_prime)) # Begin testing code for is_similar def original_is_similar(e, e_prime): if e.os != UNKNOWN and e_prime.os != UNKNOWN and e.os != e_prime.os: return False if e.hostname and e_prime.hostname: return not set(e.hostname).isdisjoint(set(e_prime.hostname)) if e.mac and e_prime.mac: return not set(e.mac).isdisjoint(set(e_prime.mac)) return not set(e.ip).isdisjoint(set(e_prime.ip)) import random def random_os(): return random.choice([UNKNOWN, WINDOWS, LINUX]) def random_names(prefix): return [ "{}{}".format(prefix, random.randrange(10)) for n in range(random.randrange(3)) ] def random_entry(): return Entry(random_os(), random_names("H"), random_names("M"), random_names("I")) def test_is_similar(): print("Testing is_similar()") for rep in range(100000): e = random_entry() e_prime = random_entry() got = is_similar(e, e_prime) expected = original_is_similar(e, e_prime) if got != expected: print(e) print(e_prime) print("got", got) print("expected", expected) break if __name__ == "__main__": test_is_similar() # End testing code def find_edges(entries): entries = list(entries) posting_lists = collections.defaultdict(list) for i, e in enumerate(entries): for sketch in f(e): posting_lists[sketch].append(i) edges = set() for i, e in enumerate(entries): for sketch in g(e): for j in posting_lists[sketch]: if i < j: edges.add((i, j)) return edges # Begin testing code for find_edges def test_find_edges(): print("Testing find_edges()") entries = [random_entry() for i in range(1000)] got = find_edges(entries) expected = { (i, j) for (i, e) in enumerate(entries) for (j, e_prime) in enumerate(entries) if i < j and is_similar(e, e_prime) } print(len(expected)) assert got == expected if __name__ == "__main__": test_find_edges() find_edges([random_entry() for i in range(10000)]) # End testing code for find_edges | 5 | 3 |
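The answer above gives `find_edges()` but leaves the "repeat until no edges" merge loop to the reader. Below is one hedged way to close that gap; `merge_entries` rebuilds a new namedtuple (mirroring `Entry.merge` from the question, restricted to the fields the answer's `Entry` carries) because that `Entry` is immutable, and the one-merge-per-pass loop favours clarity over speed.

```python
# Hedged sketch of the fix-point merge loop described (not coded) in the answer.
def merge_entries(e, e_prime):
    """Return a new Entry combining two similar ones (mirrors the question's merge)."""
    union = lambda a, b: list(set(a) | set(b))
    return Entry(
        e.os if e.os != UNKNOWN else e_prime.os,
        union(e.hostname, e_prime.hostname),
        union(e.mac, e_prime.mac),
        union(e.ip, e_prime.ip),
    )

def dedup_fixpoint(entries):
    """Merge one candidate pair at a time until find_edges() reports none left."""
    entries = list(entries)
    while True:
        edges = find_edges(entries)
        if not edges:
            return entries
        i, j = min(edges)                                  # any edge will do
        entries[i] = merge_entries(entries[i], entries[j])
        del entries[j]                                     # indices recomputed next pass
```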
69,625,550 | 2021-10-19 | https://stackoverflow.com/questions/69625550/skip-the-default-on-onupdate-defined-for-specific-update-queries-in-sqlalchemy | If I have a list of posts, which have created and updated dates with a default attached onupdate callback. Sometimes I need to flag the post, for inappropriate reports or similar actions. I do not want the created and updated dates to be modified. How can I skip the defined onupdate, while making an update action? | SQLAlchemy will apply a default when no value was provided to the INSERT or UPDATE statement for that column however the obvious workaround - explicitly setting the column to its current value - won't work because the session checks whether the value has actually changed, and does not pass a value if it hasn't. Here are two possible solutions, assuming SQLAlchemy 1.4+ and this model: class Post(db.Model): flag = db.Column(db.Boolean, default=False) last_updated = db.Column(db.DateTime, default=some_func, onupdate=some_func) Use an event listener Add a before update listener that detects when the flag column is being modified, and mark the timestamp column as modified, even though its value is unchanged. This will make SQLAlchemy add the current value to the update, and so the onupdate function will not be called. import sqlalchemy as sa ... @sa.event.listens_for(Post, 'before_update') def receive_before_update(mapper, connection, target): insp = sa.inspect(target) flag_changed, _, _ = insp.attrs.flag.history if flag_changed: orm.attributes.flag_modified(target, 'last_updated') Use SQLAlchemy core instead of the ORM SQLAlchemy core doesn't need a session, so the current timestamp value can be passed to an update to avoid triggering the onupdate function. The ORM will be unaware of any changes made in this way, so if done within the context of a session affected objects should be refreshed or expired. This is a "quick and dirty" solution, but might be good enough if flagging happens outside of the normal application flow. with db.engine.begin() as conn: posts = Post.__table__ update = sa.update(posts).where(...).values(flag=True, last_updated=posts.c.last_updated) conn.execute(update) | 6 | 6 |
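A short usage sketch for the event-listener variant above, assuming the Flask-SQLAlchemy `db` and `Post` model from the answer. Note the listener body calls `orm.attributes.flag_modified`, which presumably relies on `from sqlalchemy import orm` alongside `import sqlalchemy as sa` (an assumption about the intended imports).

```python
# Hedged usage sketch: flag a post without bumping last_updated.
def flag_post(post_id):
    post = db.session.get(Post, post_id)  # SQLAlchemy 1.4+ lookup
    post.flag = True
    db.session.commit()
    # last_updated keeps its previous value: the before_update listener marked it
    # as modified, so its current value is sent and the onupdate default never fires.
```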
69,610,572 | 2021-10-18 | https://stackoverflow.com/questions/69610572/how-can-i-solve-the-below-error-while-importing-nltk-package | Screenshot of the error After installing nltk using pip3 install nltk I am unable to import nltk in python shell in macOS File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/nltk/__init__.py", line 137, in <module> from nltk.text import * File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/nltk/text.py", line 29, in <module> from nltk.tokenize import sent_tokenize File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/nltk/tokenize/__init__.py", line 65, in <module> from nltk.tokenize.casual import TweetTokenizer, casual_tokenize File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/nltk/tokenize/casual.py", line 49, in <module> import regex # https://github.com/nltk/nltk/issues/2409 File "/Users/userId/Library/Python/3.9/lib/python/site-packages/regex/__init__.py", line 1, in <module> from .regex import * File "/Users/userId/Library/Python/3.9/lib/python/site-packages/regex/regex.py", line 419, in <module> import regex._regex_core as _regex_core File "/Users/userId/Library/Python/3.9/lib/python/site-packages/regex/_regex_core.py", line 21, in <module> import regex._regex as _regex ImportError: dlopen(/Users/userId/Library/Python/3.9/lib/python/site-packages/regex/_regex.cpython-39-darwin.so, 2): no suitable image found. Did find: /Users/userId/Library/Python/3.9/lib/python/site-packages/regex/_regex.cpython-39-darwin.so: code signature in (/Users/userId/Library/Python/3.9/lib/python/site-packages/regex/_regex.cpython-39-darwin.so) not valid for use in process using Library Validation: Trying to load an unsigned library``` | Just ran into this, I found that the following fixes it: xcrun codesign --sign - "[YOUR_PATH_TO_DYLIB_HERE]" In my case the error was like so: ImportError: dlopen(/Users/USER/dev/cr-likes/venv/lib/python3.9/site-packages/regex/_regex.cpython-39-darwin.so, 2): no suitable image found. Did find: /Users/USER/dev/cr-likes/venv/lib/python3.9/site-packages/regex/_regex.cpython-39-darwin.so: code signature in (/Users/USER/dev/cr-likes/venv/lib/python3.9/site-packages/regex/_regex.cpython-39-darwin.so) not valid for use in process using Library Validation: Trying to load an unsigned library By running xcrun on the shared object, in this case /Users/USER/dev/cr-likes/venv/lib/python3.9/site-packages/regex/_regex.cpython-39-darwin.so the error is now gone. | 5 | 4 |
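If several compiled extensions in the same environment trip the same Library Validation error, the accepted answer's `codesign` call can be scripted; this is just a convenience sketch around the same fix (the `--force` flag simply allows re-signing on repeat runs).

```python
# Hedged helper: ad-hoc sign every native extension of the pip-installed
# "regex" package -- the same fix as the answer, applied to all .so files.
import importlib.util
import pathlib
import subprocess

spec = importlib.util.find_spec("regex")      # locates the package without importing it
pkg_dir = pathlib.Path(spec.origin).parent

for so in pkg_dir.rglob("*.so"):
    subprocess.run(
        ["xcrun", "codesign", "--force", "--sign", "-", str(so)],
        check=True,
    )
    print("signed", so)
```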
69,637,772 | 2021-10-19 | https://stackoverflow.com/questions/69637772/iterate-over-pairs-in-order-of-sum-of-absolute-values | I want to iterate over pairs of integers in order of the sum of their absolute values. The list should look like: (0,0) (-1,0) (0,1) (0,-1) (1,0) (-2,0) (-1,1) (-1,-1) (0,2) (0,-2) (1,1) (1,-1) (2,0) [...] For pairs with the same sum of absolute values I don't mind which order they come in. Ideally I would like to be able to create the pairs forever so that I can use each one in turn. How can you do that? For a fixed range I can make the list of pairs in an ugly way with: sorted([(x,y)for x in range(-20,21)for y in range(-20,21)if abs(x)+abs(y)<21],key=lambda x:sum(map(abs,x)) This doesn't allow me to iterate forever and it also doesn't give me one pair at a time. | This seems to do the trick: from itertools import count # Creates infinite iterator def abs_value_pairs(): for absval in count(): # Generate all possible sums of absolute values for a in range(-absval, absval + 1): # Generate all possible first values b = abs(a) - absval # Compute matching second value (arbitrarily do negative first) yield a, b if b: # If first b is zero, don't output again, otherwise, output positive b yield a, -b This runs forever, and operates efficiently (avoiding recomputing anything unnecessarily). | 7 | 9 |
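A quick spot-check of the generator (not part of the original answer): take the first few pairs with `itertools.islice` and compare against the listing in the question — ordering within one absolute-value sum may differ, which the question allows.

```python
# Peek at the first 13 pairs of the infinite generator.
from itertools import islice

print(list(islice(abs_value_pairs(), 13)))
# [(0, 0), (-1, 0), (0, -1), (0, 1), (1, 0), (-2, 0), (-1, -1), (-1, 1),
#  (0, -2), (0, 2), (1, -1), (1, 1), (2, 0)]
```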
69,626,949 | 2021-10-19 | https://stackoverflow.com/questions/69626949/is-there-a-way-to-improve-the-performance-of-this-fractal-calculation-algorithm | Yesterday I came across the new 3Blue1Brown video about Newton's fractal and I was really mesmerized by his live representation of the fractal. (Here's the video link for anybody interested, it's at 13:40: https://www.youtube.com/watch?v=-RdOwhmqP5s) I wanted to have a go at it myself and tried to code it in python (I think he uses python too). I spent a few hours trying to improve my naive implementation and got to a point where I just don't know how could I make it faster. The code looks like this: import os import numpy as np import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec from time import time def print_fractal(state): fig = plt.figure(figsize=(8, 8)) gs = GridSpec(1, 1) axs = [fig.add_subplot(gs[0, 0])] fig.tight_layout(pad=5) axs[0].matshow(state) axs[0].set_xticks([]) axs[0].set_yticks([]) plt.show() plt.close() def get_function_value(z): return z**5 + z**2 - z + 1 def get_function_derivative_value(z): return 5*z**4 + 2*z - 1 def check_distance(state, roots): roots2 = np.zeros((roots.shape[0], state.shape[0], state.shape[1]), dtype=complex) for r in range(roots.shape[0]): roots2[r] = np.full((state.shape[0], state.shape[1]), roots[r]) dist_2 = np.abs((roots2 - state)) original_state = np.argmin(dist_2, axis=0) + 1 return original_state def static(): time_start = time() s = 4 c = [0, 0] n = 800 polynomial = [1, 0, 0, 1, -1, 1] roots = np.roots(polynomial) state = np.transpose((np.linspace(c[0] - s/2, c[0] + s/2, n)[:, None] + 1j*np.linspace(c[1] - s/2, c[1] + s/2, n))) n_steps = 15 time_setup = time() for _ in range(n_steps): state -= (get_function_value(state) / get_function_derivative_value(state)) time_evolution = time() original_state = check_distance(state, roots) time_check = time() print_fractal(original_state) print("{0:<40}".format("Time to setup the initial configuration:"), "{:20.3f}".format(time_setup - time_start)) print("{0:<40}".format("Time to evolve the state:"), "{:20.3f}".format(time_evolution - time_setup)) print("{0:<40}".format("Time to check the closest roots:"), "{:20.3f}".format(time_check - time_evolution)) An average output looks like this: Time to setup the initial configuration: 0.004 Time to evolve the state: 0.796 Time to check the closest roots: 0.094 It's clear that it's the evolution part that bottlenecks the process. It's not "slow", but I think it's not enough to render something live like in the video. I already did what I could by using numpy vectors and avoiding loops but I guess it's not enough. What other tricks could be applied here? Note: I tried using numpy.polynomials.Polynomial class to evaluate the function, but it was slower than this version. | I got an improvement (~40% faster) by using single complex (np.complex64) precision. (...) state = np.transpose((np.linspace(c[0] - s/2, c[0] + s/2, n)[:, None] + 1j*np.linspace(c[1] - s/2, c[1] + s/2, n))) state = state.astype(np.complex64) (...) 3Blue1Brown added this link in the description: https://codepen.io/mherreshoff/pen/RwZPazd You can take a look how it was done there (sidenote: author of this pen used single precision as well) | 5 | 2 |
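A self-contained timing sketch of the single-precision change suggested above; the grid extent, polynomial and step count mirror the question, and the absolute numbers will of course vary by machine.

```python
# Hedged micro-benchmark: the Newton evolution at double vs. single precision.
import numpy as np
from time import time

def evolve(state, n_steps=15):
    # p(z) = z**5 + z**2 - z + 1, as in the question
    for _ in range(n_steps):
        state -= (state**5 + state**2 - state + 1) / (5*state**4 + 2*state - 1)
    return state

n, s = 800, 4
grid = np.transpose(np.linspace(-s/2, s/2, n)[:, None] + 1j*np.linspace(-s/2, s/2, n))

for dtype in (np.complex128, np.complex64):
    state = grid.astype(dtype)
    t0 = time()
    evolve(state)
    print(dtype.__name__, f"{time() - t0:.3f}s")
```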
69,625,661 | 2021-10-19 | https://stackoverflow.com/questions/69625661/create-a-3d-surface-plot-in-plotly | I want to create a 3D surface plot in Plotly by reading the data from an external file. Following is the code I am using: import numpy as np import plotly.graph_objects as go import plotly.express as px data = np.genfromtxt('values.dat', dtype=float) # The shape of X, Y and Z is (10,1) X = data[:,0:1] Y = data[:,1:2] Z = data[:,2:3] fig = go.Surface(x=X, y=Y, z=Z, name='Surface plot', colorscale=px.colors.sequential.Plotly3) plot(fig) The above code does not produce any surface plot. What changes has to be made to create a surface plot? | From plotly figure reference: The data the describes the coordinates of the surface is set in z. Data in z should be a 2D list. Coordinates in x and y can either be 1D lists or {2D arrays} I have an example data set. It contains three columns (x,y,z). import plotly.graph_objects as go import pandas as pd import numpy as np from scipy.interpolate import griddata df = pd.read_csv('./test_data.csv') x = np.array(df.lon) y = np.array(df.lat) z = np.array(df.value) xi = np.linspace(x.min(), x.max(), 100) yi = np.linspace(y.min(), y.max(), 100) X,Y = np.meshgrid(xi,yi) Z = griddata((x,y),z,(X,Y), method='cubic') fig = go.Figure(go.Surface(x=xi,y=yi,z=Z)) fig.show() Reference Page: https://plotly.com/python/reference/surface/ | 5 | 8 |
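If the values in the .dat file already sample a regular x/y grid (which the question's small, evenly listed data suggests), the interpolation step can be skipped and a pivot is enough to produce the 2D `z` that `go.Surface` expects. The per-row `x y z` column layout below is an assumption.

```python
# Hedged alternative for regular-grid data: pivot instead of interpolating.
import numpy as np
import pandas as pd
import plotly.graph_objects as go

data = np.genfromtxt("values.dat", dtype=float)          # assumed columns: x, y, z
df = pd.DataFrame(data, columns=["x", "y", "z"])
grid = df.pivot(index="y", columns="x", values="z")      # 2D z: rows = y, columns = x

fig = go.Figure(go.Surface(x=grid.columns, y=grid.index, z=grid.values))
fig.show()
```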
69,606,986 | 2021-10-17 | https://stackoverflow.com/questions/69606986/regex-matching-separated-values-for-union-types | I'm trying to match type annotations like int | str, and use regex substitution to replace them with a string Union[int, str]. Desired substitutions (before and after): str|int|bool -> Union[str,int,bool] Optional[int|tuple[str|int]] -> Optional[Union[int,tuple[Union[str,int]]]] dict[str | int, list[B | C | Optional[D]]] -> dict[Union[str,int], list[Union[B,C,Optional[D]]]] The regular expression I've come up with so far is as follows: r"\w*(?:\[|,|^)[\t ]*((?'type'[a-zA-Z0-9_.\[\]]+)(?:[\t ]*\|[\t ]*(?&type))+)(?:\]|,|$)" You can try it out here on Regex Demo. It's not really working how I'd want it to. The problems I've noted so far: It doesn't seem to handle nested Union conditions so far. For example, int | tuple[str|int] | bool seems to result in one match, rather than two matches (including the inner Union condition). The regex seems to consume unnecessary ] at the end. Probably the most important one, but I noticed the regex subroutines don't seem to be supported by the re module in Python. Here is where I got the idea to use that from. Additional Info This is mainly to support the PEP 604 syntax for Python 3.7+, which requires annotatations to be forward-declared (e.g. declared as strings) to be supported, as otherwise builtin types don't support the | operator. Here's a sample code that I came up with: from __future__ import annotations import datetime from decimal import Decimal from typing import Optional class A: field_1: str|int|bool field_2: int | tuple[str|int] | bool field_3: Decimal|datetime.date|str field_4: str|Optional[int] field_5: Optional[int|str] field_6: dict[str | int, list[B | C | Optional[D]]] class B: ... class C: ... class D: ... For Python versions earlier than 3.10, I use a __future__ import to avoid the error below: TypeError: unsupported operand type(s) for |: 'type' and 'type' This essentially converts all annotations to strings, as below: >>> A.__annotations__ {'field_1': 'str | int | bool', 'field_2': 'int | tuple[str | int] | bool', 'field_3': 'Decimal | datetime.date | str', 'field_4': 'str | Optional[int]', 'field_5': 'Optional[int | str]', 'field_6': 'dict[str | int, list[B | C | Optional[D]]]'} But in code (say in another module), I want to evaluate the annotations in A. This works in Python 3.10, but fails in Python 3.7+ even though the __future__ import supports forward declared annotations. >>> from typing import get_type_hints >>> hints = get_type_hints(A) Traceback (most recent call last): eval(self.__forward_code__, globalns, localns), File "<string>", line 1, in <module> TypeError: unsupported operand type(s) for |: 'type' and 'type' It seems the best approach to make this work, is to replace all occurrences of int | str (for example) with Union[int, str], and then with typing.Union included in the additional localns used to evaluate the annotations, it should then be possible to evaluate PEP 604- style annotations for Python 3.7+. 
| You can install the PyPi regex module (as re does not support recursion) and use import regex text = "str|int|bool\nOptional[int|tuple[str|int]]\ndict[str | int, list[B | C | Optional[D]]]" rx = r"(\w+\[)(\w+(\[(?:[^][|]++|(?3))*])?(?:\s*\|\s*\w+(\[(?:[^][|]++|(?4))*])?)+)]" n = 1 res = text while n != 0: res, n = regex.subn(rx, lambda x: "{}Union[{}]]".format(x.group(1), regex.sub(r'\s*\|\s*', ',', x.group(2))), res) print( regex.sub(r'\w+(?:\s*\|\s*\w+)+', lambda z: "Union[{}]".format(regex.sub(r'\s*\|\s*', ',', z.group())), res) ) Output: Union[str,int,bool] Optional[Union[int,tuple[Union[str,int]]]] dict[Union[str,int], list[Union[B,C,Optional[D]]]] See the Python demo. The first regex finds all kinds of WORD[...] that contain pipe chars and other WORDs or WORD[...] with no pipe chars inside them. The \w+(?:\s*\|\s*\w+)+ regex matches 2 or more words that are separated with pipes and optional spaces. The first pattern details: (\w+\[) - Group 1 (this will be kept as is at the beginning of the replacement): one or more word chars and then a [ char (\w+(\[(?:[^][|]++|(?3))*])?(?:\s*\|\s*\w+(\[(?:[^][|]++|(?4))*])?)+) - Group 2 (it will be put inside Union[...] with all \s*\|\s* pattern replaced with ,): \w+ - one or more word chars (\[(?:[^][|]++|(?3))*])? - an optional Group 3 that matches a [ char, followed with zero or more occurrences of one or more [ or ] chars or whole Group 3 recursed (hence, it matches nested parentheses) and then a ] char (?:\s*\|\s*\w+(\[(?:[^][|]++|(?4))*])?)+ - one or more occurrences (so the match contains at least one pipe char to replace with ,) of: \s*\|\s* - a pipe char enclosed with zero or more whitespaces \w+ - one or more word chars (\[(?:[^][|]++|(?4))*])? - an optional Group 4 (matches the same thing as Group 3, note the (?4) subroutine repeats Group 4 pattern) ] - a ] char. | 5 | 1 |
69,623,784 | 2021-10-19 | https://stackoverflow.com/questions/69623784/how-to-set-environment-variable-in-pytest | I have a lamba handler that uses an environment variable. How can I set that value using pytest. I'm getting the error tests/test_kinesis.py:3: in <module> from runner import kinesis runner/kinesis.py:6: in <module> DATA_ENGINEERING_BUCKET = os.environ["BUCKET"] ../../../../../.pyenv/versions/3.8.8/lib/python3.8/os.py:675: in __getitem__ raise KeyError(key) from None E KeyError: 'BUCKET' 7:03 I tried setting in the test like this class TestHandler(unittest.TestCase): @mock_s3 @mock_lambda def test_handler(monkeypatch): monkeypatch.setenv("BUCKET", "test-bucket") actual = kinesis.handler(kinesis_stream_event, "") expected = {"statusCode": 200, "body": "OK"} assert actual == expected DATA_ENGINEERING_BUCKET = os.environ["BUCKET"] def handler(event, context): ... | You're getting the failure before your monkeypatch is able to run. The loading of the environment variable will happen when the runner module is first imported. If this is a module you own, I'd recommend modifying the code to use a default value if DATA_ENGINEERING_BUCKET isn't set. Then you can modify it's value to whatever you want at runtime by calling module.DATA_ENGINEERING_BUCKET = "my_bucket". DATA_ENGINEERING_BUCKET = os.environ.get("BUCKET", default="default_bucket") If you can't modify that file then things are more complicated. I looked into creating a global fixture that monkeypatches the environment and loads the module once, before any tests load and received a pytest error about using function level fixtures within a session level fixture. Which makes sense monkeypatch really isn't intended to fake things long term. You can stick the module load into your test after the monkeypatch but that will generate a lot of boilerplate. What eventually worked creating a fixture that will provide the class in lieu of importing it. The fixture; sets os.environ to the desired value, loads the module, resets os.environ to it's origional value then yields the module. Any tests that need this module can request the fixture to have access to it within their scope. A word of caution, because test files are imported before fixtures are run any test files that don't use the fixture and import the module normally will raise a KeyError and cause pytest to crash before running any tests. conftest.py import os, pytest @pytest.fixture(scope='session') def kinesis(): old_environ = os.environ os.environ = {'BUCKET': 'test-bucket'} import kinesis os.environ = old_environ yield kinesis tests.py # Do NOT import kinesis in any test file. Rely on the fixture. class TestHandler(unittest.TestCase): @mock_s3 @mock_lambda def test_handler(kinesis): actual = kinesis.handler(kinesis_stream_event, "") expected = {"statusCode": 200, "body": "OK"} assert actual == expected A potentially simpler method os.environ is a dictionary of environment variables that is created when os first loads. If you want a single value for every test then you just need to add the value you want to it before loading any test modules. If you put os.environ['BUCKET'] = 'test-bucket' at the top of conftest.py you will set the environment variable for the rest of the test session. Then as long as the first import of the module happens afterwards you won't have a key error. The big downside to this approach is that unless you know to look in conftest.py or grep the code it will be difficult to determine where the environment variable is getting set when troubleshooting. 
| 11 | 11 |
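A minimal sketch of the "potentially simpler method" at the end of the answer: set the variable in `conftest.py` before pytest imports any test module, so `from runner import kinesis` stops raising `KeyError` at import time.

```python
# conftest.py -- hedged sketch of the simpler method from the answer.
# conftest.py is imported before the test modules, so the variable is in place
# by the time `from runner import kinesis` runs anywhere in the test suite.
import os

os.environ["BUCKET"] = "test-bucket"
```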
69,618,070 | 2021-10-18 | https://stackoverflow.com/questions/69618070/redefine-method-of-an-object | I've got a class, where a method should only run once. Of course, it could easily be done with artificial has_executed = True/False flag, but why use it, if you can just delete the method itself? python's a duck-typed language, everything is a reference, bla-bla-bla, what can go wrong? At least it was the thought. I couldn't actually do it: class A: def b(self): print("empty") self.__delattr__('b') a = A() a.b() raises AttributeError: b. However, executing self.__getattribute__('b') returns <bound method A.b of <__main__.A object at 0x000001CDC6742FD0>>, which sounds stupid to me: why is a method any different from an attribute, since everything in python is just a reference to an object? And why can I __getattribute__, but not __delattr__? The same goes to redefinition. I can easily set any attribute, but methods are a no-no? class A: def b(self): print("first") self.__setattr__('b', lambda self: print(f"second")) a = A() a.b() a.b() results into TypeError: <lambda>() missing 1 required positional argument: 'self'. Which, of course, means, that now python isn't using dot-notation as intended. Of course, we could ditch the self attribute in the lambda altogether, considering we've got the reference to it already in b. But isn't it incorrect by design? The further I'm trying to take python to the limit, the more frustrated I become. Some imposed limitations (or seemingly imposed?) seem so unnatural, considering the way the language is marketed. Shouldn't it allow this? Why doesn't it work? UPD Ok, consider this: class A: def __init__(self): self.variable = 1 def b(self): print("old") self.variable += 1 def new_b(): print("new") self.variable += 15 self.__setattr__('b', new_b) It will work and do what we want: none of other objects will have their A.b method redefined once one object kind of overlays its b definition. (overlays, since everyone so far says that you cannot redefine a method for an object, but instead only kind of hide it from the caller behind another attribute with the same name, as far as I understand). Is this good? | It doesn't work because b isn't an attribute belonging to the instance, it belongs to the class. So you can't delete it on the instance because it isn't there to be deleted. >>> a = A() >>> list(a.__dict__) [] >>> list(A.__dict__) ['__module__', 'b', '__dict__', '__weakref__', '__doc__'] When a.b is evaluated, Python will see that a has no instance attribute named b and fall back to the class. (It's a little more complicated because when falling back to the class, it will not simply return the method itself, but a version of the method which is bound to the instance a.) Since you don't want to delete the method on the class, the way to go is to replace the method on the instance. I don't know why you tried to do this with __setattr__ - there is no need for that, simply assign self.b = ... as normal. The reason your attempt failed is because your lambda requires a positional parameter named self, but this parameter will not be automatically bound to the instance when you look it up, because it is an instance attribute, not a class attribute. class A: def b(self): print('first') self.b = lambda: print('second') Usage: >>> a = A() >>> a.b() first >>> a.b() second | 5 | 6 |
69,552,230 | 2021-10-13 | https://stackoverflow.com/questions/69552230/no-logging-on-azure-devops-pipeline | Update: Is it possible to add or change a command that executes a pipeline on Azure DevOps? Running my program locally on Visual Studio Code, I do get outputs. However, running my GitHub origin branch on Azure DevOps does not yield any output. I followed a Stack Overflow answer, which references this solution to a GitHub Issue. I have implemented the below, but Azure's Raw Logs return blank on my Python logging. test_logging.py: import logging filename = "my.log" global logger logger = logging.getLogger() logger.setLevel(logging.INFO) formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s") open(filename, "w").close() # empty logs fileHandler = logging.FileHandler(filename) fileHandler.setFormatter(formatter) fileHandler.setLevel(logging.INFO) logger.addHandler(fileHandler) logger.error('TEST') # fetch logs with open(filename, "r") as fileHandler: logs = [log.rstrip() for log in fileHandler.readlines()] open(filename, "w").close() # empty logs print('logs = ', logs) >>> logs = [] host.json: { "version": "2.0", "logging": { "fileLoggingMode": "always", "logLevel": { "default": "Debug" } } } I then tried this alternative host.json from post: "logging": { "fileLoggingMode": "debugOnly", "logLevel": { "default": "None", "Host.Results": "Information", "Function": "Information", "Host.Aggregator": "Information" }, "applicationInsights": { "samplingSettings": { "isEnabled": false, "maxTelemetryItemsPerSecond": 5 } } } azure-pipeline-ontology_tagger.yaml # ########## # A build run against multiple Python targets # ########## resources: - repo: self variables: tag: '$(Build.SourceBranchName)-$(Build.BuildNumber)' imageName: '$(Build.Repository.Name)-ontology_tagger' artifactFeed: grandproject/private-sources repositoryUrl: private-sources packageDirectory: workers/ontology_tagger trigger: batch: true branches: include: - master - development - releases/* paths: include: - "workers/ontology_tagger" exclude: - "workers" - "*.md" pr: branches: include: - master - development - releases/* paths: include: - "workers/ontology_tagger" exclude: - "workers" - "*.md" stages: - stage: BuildWP displayName: Build Workers python package jobs: - job: Build displayName: Build Worker python image pool: name: EKS-grandproject-dev steps: - bash: env - task: PipAuthenticate@0 displayName: Authenticate with artifact feed inputs: artifactFeeds: $(artifactFeed) - task: TwineAuthenticate@1 displayName: Authenticate with artifact feed inputs: artifactFeed: $(artifactFeed) - bash: echo "##vso[task.setvariable variable=POETRY_HTTP_BASIC_AZURE_PASSWORD;isOutput=true]$(echo $PIP_EXTRA_INDEX_URL | sed -r 's|https://(.+):(.+)@.*|\2|')" name: "PIPAUTH" - task: Bash@3 displayName: Test worker inputs: targetType: 'inline' workingDirectory: '$(packageDirectory)' script: | docker build . --progress plain --pull --target test \ --build-arg POETRY_HTTP_BASIC_AZURE_PASSWORD=${PIPAUTH_POETRY_HTTP_BASIC_AZURE_PASSWORD} \ --build-arg ATLASSIAN_TOKEN=$(ATLASSIAN_TOKEN) - task: Bash@3 displayName: Build and publish package inputs: targetType: 'inline' workingDirectory: '$(packageDirectory)' script: | set -e cp $(PYPIRC_PATH) ./ docker build . --target package --progress plain --build-arg REPO=$(repositoryUrl) - task: Bash@3 displayName: Build docker image inputs: targetType: 'inline' workingDirectory: '$(packageDirectory)' script: | docker build . 
--tag '$(imageName):$(tag)' --progress plain --pull --target production \ --build-arg POETRY_HTTP_BASIC_AZURE_PASSWORD=${PIPAUTH_POETRY_HTTP_BASIC_AZURE_PASSWORD} \ --label com.azure.dev.image.build.sourceversion=$(Build.SourceVersion) \ --label com.azure.dev.image.build.sourcebranchname=$(Build.SourceBranchName) \ --label com.azure.dev.image.build.buildnumber=$(Build.BuildNumber) - task: ECRPushImage@1 displayName: Push image with 'latest' tag condition: and(succeeded(),eq(variables['Build.SourceBranchName'], 'master')) inputs: awsCredentials: 'dev-azure-devops' regionName: 'eu-central-1' imageSource: 'imagename' sourceImageName: $(imageName) sourceImageTag: $(tag) repositoryName: $(imageName) pushTag: 'latest' autoCreateRepository: true - task: ECRPushImage@1 displayName: Push image with branch name tag condition: and(succeeded(),ne(variables['Build.SourceBranchName'], 'merge')) inputs: awsCredentials: 'iotahoe-dev-azure-devops' regionName: 'eu-central-1' imageSource: 'imagename' sourceImageName: $(imageName) sourceImageTag: $(tag) repositoryName: $(imageName) pushTag: '$(Build.SourceBranchName)' autoCreateRepository: true - task: ECRPushImage@1 displayName: Push image with uniq tag condition: and(succeeded(),ne(variables['Build.SourceBranchName'], 'merge')) inputs: awsCredentials: 'dev-azure-devops' regionName: 'eu-central-1' imageSource: 'imagename' sourceImageName: $(imageName) sourceImageTag: $(tag) repositoryName: $(imageName) pushTag: $(tag) autoCreateRepository: true outputVariable: 'ECR_PUSHED_IMAGE_NAME' Please let me know if there is anything else I should provide. | I think you have fundamentally mixed up some things here: the links you have provided and are following provide guidance on setting up logging in Azure Functions. However, you appear to be talking about logging in Azure Pipelines, which is an entirely different thing. So just to be clear: Azure Pipelines run the build and deployment jobs that deploy the code you might have on your GitHub repository to Azure Functions. Pipelines are executed in Azure Pipelines agents, that can be either Microsoft- or Self-hosted. If we assume that you are executing your pipelines with Microsoft-Hosted agents, you should not assume that these agents have any capabilities that Azure Functions might have (nor that you should execute code aimed for Azure Functions in the first place). If you want do execute python code in your pipeline, you should first start looking at what python-related capabilities the hosted agents have pre-installed and work from there: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml If you want to log something about pipeline run, you should first check the "Enable system diagnostics" option when queuing pipeline manually. For implementing more logging by yourself, do check: https://learn.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=bash For logging in Azure Functions you might want to start here: https://learn.microsoft.com/en-us/azure/azure-functions/functions-monitoring , but that would be an entirely different topic than logging in Azure Pipelines. | 7 | 2 |
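As a hedged sketch of the pipeline-side logging the answer points to (Azure Pipelines logging commands, which are unrelated to Azure Functions logging): any script step — including a `python` step — can emit `##vso[...]` markers on stdout and the agent picks them up in the run log. The helper function names below are my own; only the `##vso` command strings come from the logging-commands documentation linked in the answer.

```python
# Hypothetical helper run from a pipeline step such as `- script: python log_demo.py`.
# Azure Pipelines logging commands are ordinary lines written to stdout.
import sys

def log_issue(kind: str, message: str) -> None:
    # kind is "warning" or "error"; the issue is surfaced in the pipeline run summary
    print(f"##vso[task.logissue type={kind}]{message}")

def set_variable(name: str, value: str) -> None:
    # exposes `value` to later steps as $(name)
    print(f"##vso[task.setvariable variable={name}]{value}")

if __name__ == "__main__":
    print("plain stdout lines show up in the step log as-is")
    log_issue("warning", "example warning surfaced to the pipeline UI")
    set_variable("demo_var", "42")
    sys.stdout.flush()
```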
69,615,293 | 2021-10-18 | https://stackoverflow.com/questions/69615293/number-of-digits-after-decimal-point-in-pandas | I have CSV file with data: Number 1.1 2.2 4.1 5.4 9.176 14.54345774 16.25664 If I print to display with pandas I get: df = pd.read_csv('data.csv') print(df) Number 0 1.100000 1 2.200000 2 4.100000 3 5.400000 4 9.176000 5 14.543458 6 16.256640 But if I cut 14.54345774 to 14.543 output is changed: Number 0 1.10000 1 2.20000 2 4.10000 3 5.40000 4 9.17600 5 14.54300 6 16.25664 The first case number of digits after decimal point in pandas is 6, second case is 5. Why format is changed? What pandas parameters should I change so these cases are equal? I want the number of digits after the decimal point to be constant and digits after the decimal point is round to max digits after the decimal point if it possibly. UPDATE: IMO, This moment arises on data initialization, so round don't get to desirable result if I want use 6 digits. It only can be decreased (6->5 digits), but it can't be increased (5->6). | You can use pd.set_option to set the decimal number display precision to e.g. 5 in this case: pd.set_option("display.precision", 5) or use: pd.options.display.float_format = '{:.5f}'.format Result: print(df) # with original value of 14.54345774 Number 0 1.10000 1 2.20000 2 4.10000 3 5.40000 4 9.17600 5 14.54346 6 16.25664 | 5 | 9 |
69,605,603 | 2021-10-17 | https://stackoverflow.com/questions/69605603/what-should-go-in-my-procfile-for-a-django-application | What should go in my Procfile for a Django application on Heroku? I tried: web: python appname.py because I found an example like that for python apps. Further searching didn't make things any clearer except for that I might need to use gunicorn instead of python. I found various posts suggesting various formats such as: web gunicorn web:gunicorn web: gunicorn I have no clue what should come after gunicorn, some posts have the programming language, some have an IP address, some have various other things. Some suggest running: heroku ps:scale web=1 but that results in an error: Scaling dynos... ! ! Couldn't find that process type (web). I just haven't got a clue and don't know where to turn. Since posting I have watched some videos about this and tried: web: gunicorn appname.wsgi in my Procfile but it still doesn't work, still resulting in: at=error code=H14 desc="No web processes running" | Heroku's Procfile format is quite simple. As described in the documentation: A Procfile declares its process types on individual lines, each with the following format: <process type>: <command> You can see that there should be a colon after the process type, so the web gunicorn example in your question is not going to work properly. You'll want to start the line with web:. <command> indicates the command that every dyno of the process type should execute on startup, such as rake jobs:work For Django, in development you'd typically use python manage.py runserver to run the application, so a reasonable attempt for Django would be web: python manage.py runserver This should work, but it's not appropriate for production work: DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And thatβs how itβs gonna stay. Weβre in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.) Instead, you should use a production-grade web server in production. Gunicorn is a common choice, and you can run your Django application with Gunicorn like so: gunicorn myproject.wsgi Putting that all together, a Procfile for Django on Heroku might look like web: gunicorn myproject.wsgi where myproject is the name of your Django project. This is exactly what Heroku's documentation suggests for Django applications. Note that you'll have to add Gunicorn to your project dependencies so Heroku will install it. I recommend also installing it locally so you can use heroku local to test your application on your dev machine in a way more similar to Heroku's production environment. heroku ps:scale is used to change the number and type of dynos for process types you have already defined. It has nothing to do with defining those process types. That's what your Procfile is for. | 6 | 1 |
69,605,313 | 2021-10-17 | https://stackoverflow.com/questions/69605313/vs-code-terminal-activate-ps1-cannot-be-loaded-because-running-scripts-is-disa | I created a virtual environment in python, now while activating the same from my command line in vscode I am getting the error PS C:\Users\hpoddar\Desktop\WebDev\ReactComplete\DjangoReact\ArticlesApp\APIProject> ..\venv\scripts\activate ..\venv\scripts\activate : File C:\Users\hpoddar\Desktop\WebDev\ReactComplete\DjangoReact\ArticlesApp\venv\scripts\Activate.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https:/go.microsoft.com/fwlink/?LinkID=135170. At line:1 char:1 + ..\venv\scripts\activate + ~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : SecurityError: (:) [], PSSecurityException + FullyQualifiedErrorId : UnauthorizedAccess This is my project structure However if I activate the same from my command line, it works without any error. Python version : 3.9.2 | A way is changing the terminal in VSCode to Command Prompt instead of PowerShell. Open the drop-down on the right of the terminal and choose Select Default Profile Select Command Prompt from the options. Or, you can also set the execution policy to RemoteSigned or Unrestricted in PowerShell Note: This only affects the current user Open PowerShell Run the following command: Set-ExecutionPolicy RemoteSigned -Scope CurrentUser OR Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser (Remove -Scope CurrentUser to apply to all users) | 24 | 70 |
69,583,134 | 2021-10-15 | https://stackoverflow.com/questions/69583134/why-is-there-a-difference-between-0-3-2-and-3-2 | I was figuring out how to do floor/ceiling operations without the math module. I solved this by using floor division //, and found out that the negative "gives the ceiling". So this works: >>> 3//2 1 >>> -3//2 -2 I would like the answer to be positive, so first I tried --3//2, but this gives 1. I inferred this is because Python evaluates -- to +. So to solve this, I found out I could use -(-3//2)), problem solved. But I came over another solution to this, namely (I included the previous example for comparison): >>> --3//2 # Does not give ceiling 1 >>> 0--3//2 # Does give ceiling 2 I am unable to explain why including the 0 helps. I have read the documentation on division, but I did not find any help there. I thought it might be because of the evaluation order: If I use --3//2 as an example, from the documentation I have that Positive, negative, bitwise NOT is strictest in this example, and I guess this evaluates -- to +. Next comes Multiplication, division, remainder, so I guess this is +3//2 which evaluates to 1, and we are finished. I am unable to infer it from the documentation why including 0 should change the result. References: 6.7. Binary arithmetic operations 6.14. Evaluation order | Python uses the symbol - as both a unary (-x) and a binary (x-y) operator. These have different operator precedence. In specific, the ordering wrt // is: unary - binary // binary - By introducing a 0 as 0--3//2, the first - is a binary - and is applied last. Without a leading 0 as --3//2, both - are unary and applied together. The corresponding evaluation/syntax tree is roughly like this, evaluating nodes at the bottom first to use them in the parent node: ---------------- ---------------- | --3//2 | 0--3//2 | |================|================| | | ------- | | | | 0 - z | | | | -----+- | | | | | | -------- | ----+--- | | | x // y | | | x // y | | | -+----+- | -+----+- | | | | | | | | | ----+ +-- | ---+ +-- | | | --3 | | 2 | | | -3 | | 2 | | | ----- --- | ---- --- | ---------------- ---------------- Because the unary - are applied together, they cancel out. In contrast, the unary and binary - are applied before and after the division, respectively. | 63 | 85 |
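To make the precedence explanation concrete (an illustration added here, not taken from the answer), the `ast` module shows how the two expressions parse: in `--3//2` both minuses are unary and bind to `3` before the floor division, while in `0--3//2` the second `-` is the binary subtraction applied last.

```python
import ast

for expr in ("--3//2", "0--3//2", "-(-3//2)"):
    print(expr, "=", eval(expr))
    # dump the parse tree to see which operator wraps which
    print("   ", ast.dump(ast.parse(expr, mode="eval").body))
```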
69,579,950 | 2021-10-15 | https://stackoverflow.com/questions/69579950/vs-code-python-doesnt-recognize-match-statement | When I use a match-case statement in Python in VS Code, it gives red squiggly lines and errors in the "problems" tab: | I got a response from one of the vscode-python devs on GitHub: Unfortunately Jedi (and it's underlying parser, parso) has not added support for the match statement yet. Please consider switching your language server to "Default"/"Pylance" as our Pylance language server already has support. As soon as Jedi makes a new release with match statement support we will take the update, but otherwise this is out of our hands. Since we have a language server that has support I'm a closing this issue. | 20 | 22 |
69,596,494 | 2021-10-16 | https://stackoverflow.com/questions/69596494/unable-to-import-freegames-python-package-attributeerror-module-collections | Python version : 3.10 I was trying to install the freegames python package using the following pip command C:\Users\praty>pip install freegames Defaulting to user installation because normal site-packages is not writeable Collecting freegames Downloading freegames-2.3.2-py2.py3-none-any.whl (108 kB) |ββββββββββββββββββββββββββββββββ| 108 kB 504 kB/s Installing collected packages: freegames Successfully installed freegames-2.3.2 But while importing the same on my python environment I was getting this error C:\Users\praty>python Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import freegames Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\praty\AppData\Roaming\Python\Python310\site-packages\freegames\_init_.py", line 61, in <module> from .utils import floor, line, path, square, vector File "C:\Users\praty\AppData\Roaming\Python\Python310\site-packages\freegames\utils.py", line 77, in <module> class vector(collections.Sequence): AttributeError: module 'collections' has no attribute 'Sequence' How do I resolve the same? | For quite some time Sequence was importable from collections: $ python2.7 -c "from collections import Sequence" $ python3.4 -c "from collections import Sequence" $ python3.5 -c "from collections import Sequence" $ python3.6 -c "from collections import Sequence" Starting from Python 3.7 there was a warning the class has been moved to collections.abc: $ python3.7 -c "from collections import Sequence" -c:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working $ python3.8 -c "from collections import Sequence" <string>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working $ python3.9 -c "from collections import Sequence" <string>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working In Python 3.10 this became an error: $ python3.10 -c "from collections import Sequence" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: cannot import name 'Sequence' from 'collections' (/home/phd/.local/lib/python3.10/collections/__init__.py) $ python3.10 -c "from collections.abc import Sequence" Report the problem to the author of freegames. Downgrade to Python 3.9. Learn that you should never be so fast to upgrade to the latest and greatest versions. | 12 | 18 |
69,553,159 | 2021-10-13 | https://stackoverflow.com/questions/69553159/how-to-provide-type-hints-for-argparse-arguments | I would like to get proper linting and type hints by [PyFlakes, Pylint] and mypy. For example, in the following code, we cannot get type error for the last line. We cannot even know if float_input exists. import argparse parser = argparse.ArgumentParser() parser.add_argument('--float_input', type=float) args = parser.parse_args() def int_sum(a: int, b: int): return a + b c = int_sum(args.float_input, args.float_input) Is there a good way to improve this? | You can use typed-argument-parser to provide type hints for your arguments. You can define your arguments in a typesafe manner. from typing import Optional from tap import Tap class FooArgumentParser(Tap): float_input: Optional[float] = None args = FooArgumentParser().parse_args() def int_sum(a: int, b: int): return a + b c = int_sum(args.float_input, args.float_input) c = int_sum(args.foo, args.bar) which gives you: foo.py:13:13: error: Argument 1 to "int_sum" has incompatible type "Optional[float]"; expected "int" foo.py:13:31: error: Argument 2 to "int_sum" has incompatible type "Optional[float]"; expected "int" foo.py:14:13: error: "FooArgumentParser" has no attribute "foo" foo.py:14:23: error: "FooArgumentParser" has no attribute "bar" For required arguments, note that: Variables defined as name: type are required arguments while variables defined as name: type = value are not required and default to the provided value. You have to give the argument a default value to make it optional. | 7 | 3 |
69,591,717 | 2021-10-16 | https://stackoverflow.com/questions/69591717/how-is-the-keras-conv1d-input-specified-i-seem-to-be-lacking-a-dimension | My input is a array of 64 integers. model = Sequential() model.add( Input(shape=(68,), name="input")) model.add(Conv1D(64, 2, activation="relu", padding="same", name="convLayer")) I have 10,000 of these arrays in my training set. And I supposed to be specifying this in order for conv1D to work? I am getting the dreaded ValueError: Input 0 of layer convLayer is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: [None, 68] error and I really don't understand what I need to do. | Don't let the name confuse you. The layer tf.keras.layers.Conv1D needs the following shape: (time_steps, features). If your dataset is made of 10,000 samples with each sample having 64 values, then your data has the shape (10000, 64), which is not directly applicable to the tf.keras.layers.Conv1D layer. You are missing the time_steps dimension. What you can do is use the tf.keras.layers.RepeatVector, which repeats your array input n times, in the example 5. This way your Conv1D layer gets an input of the shape (5, 64). Check out the documentation for more information: time_steps = 5 model = tf.keras.Sequential() model.add(tf.keras.layers.Input(shape=(64,), name="input")) model.add(tf.keras.layers.RepeatVector(time_steps)) model.add(tf.keras.layers.Conv1D(64, 2, activation="relu", padding="same", name="convLayer")) As a side note, you should ask yourself if using a tf.keras.layers.Conv1D layer is the right option for your use case. This layer is usually used for NLP and other time series tasks. For example, in sentence classification, each word in a sentence is usually mapped to a high-dimensional word vector representation, as seen in the image. This results in data with the shape (time_steps, features). If you want to use character one hot encoded embeddings it would look something like this: This is a simple example of one single sample with the shape (10, 10) --> 10 characters along the time series dimension and 10 features. It should help you understand the tutorial I mentioned a bit better. | 7 | 13 |
69,564,830 | 2021-10-14 | https://stackoverflow.com/questions/69564830/python-dataclass-setting-default-list-with-values | Can anyone help me fix this error. I just started using dataclass I wanted to put a default value so I can easily call from other function I have this class @dataclass(frozen=True) class MyClass: my_list: list = ["list1", "list2", "list3"] my_list2: list = ["list1", "list2", "list3"] But when i print print(MyClass.my_list) I'm getting this error raise ValueError(f'mutable default {type(f.default)} for field ' ValueError: mutable default <class 'list'> for field my_list is not allowed: use default_factory | What it means by mutable default is that the lists provided as defaults will be the same individual objects in each instance of the dataclass. This would be confusing because mutating the list in an instance by e.g. appending to it would also append to the list in every other instance. Instead, it wants you to provide a default_factory function that will make a new list for each instance: from dataclasses import dataclass, field @dataclass class MyClass: my_list: list = field(default_factory=lambda: ["list1", "list2", "list3"]) my_list2: list = field(default_factory=lambda: ["list1", "list2", "list3"]) | 11 | 25 |
69,584,171 | 2021-10-15 | https://stackoverflow.com/questions/69584171/is-there-a-way-to-dynamically-change-a-plotly-animation-axis-scale-per-frame | I have an animated plotly scatter graph which plots x,y coordinates normally within the 0-0.5 range with date/time being the frame key. Sometime however I will have to handle anomalous data points which will be well out with this range. I would like the graph to be able to dynamically scale so that the points are not lost off screen. Is this possible? def draw(x1,y1,timestamp): d = { "x1": x1_trim, "y1": y1_trim, "time": time_trim } df = pd.DataFrame(d) fig = px.scatter(df, x="x1", y="y1", animation_frame="time") fig.update_yaxes(autorange=True) fig.update_xaxes(autorange=True) fig.show() I've tried using update_x/yaxes with autorange but it doesn't seem to work. | As pointed out in the comments, there is a way to change a plotly animation axis scale per frame. The question remains how dynamic it's possible to make it. But if we can say that you've made a few calculations that will let you know which frames you'd like to adjust the ranges for, then a combination of a dict like yranges = {2002:[0, 200]} and a for-loop on the frames should do the trick. Let's take a subset of the dataset px.data.gapminder as an example. And let's say that you'd like to adjust the range of the yaxis for the frame that displays data for the year 2002 in the following figure: Then you can include the following snippet, and get the figure below for year = 2002: yranges = {2002:[0, 200]} for f in fig.frames: if int(f.name) in yranges.keys(): f.layout.update(yaxis_range = yranges[int(f.name)]) Complete code: import plotly.express as px df = px.data.gapminder() df = df[(df['continent'] == 'Asia') & (df['year'].isin([1997, 2002, 2007]))] scales = [2002] fig = px.scatter(df, x="gdpPercap", y="lifeExp", animation_frame="year", animation_group="country", size="pop", color="continent", hover_name="country", log_x=True, size_max=55, range_x=[100,100000], range_y=[25,90]) yranges = {2002:[0, 200]} for f in fig.frames: if int(f.name) in yranges.keys(): f.layout.update(yaxis_range = yranges[int(f.name)]) fig.show() | 5 | 3 |
69,590,754 | 2021-10-15 | https://stackoverflow.com/questions/69590754/nattype-object-has-no-attribute-isna | I am trying to create a new column 'Var' in the following Pandas DataFrame based on values from the other columns. I am encountering issues when dealing with NaN and NaT. Data: (Used apply(pd.to_datetime) on the Date column at a previous step) Date C A Age 2017-12-13 1233.0 N 9 NaT NaN N 5 2007-09-24 49.0 N 14 Code: def program(Flat): if Flat['A'] == 'N' : return 0 elif Flat['Date'].isna() : return Flat['Age'] + 1 elif Flat['C'] < 365 : return 1 elif Flat['C'] >= 365 : return math.floor((Flat['C'])/365.25) + 1 Flat['Var'] = Flat.apply(program, axis=1) Error: AttributeError: 'NaTType' object has no attribute 'isna' Tried running through Anaconda & Python. Same error in both. Pandas version is 1.3.3. What is the correct way to detect the NaT type? | "NaT" (for date/time types) and "NaN" are not the same. However, you can use the "isnull" function for both types: elif pd.isnull(Flat['Date']): | 7 | 8
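A small self-contained check (added for illustration) of the distinction the answer relies on: `.isna()` is a Series/DataFrame method rather than a method of the `NaT` scalar, whereas `pd.isnull` (alias of `pd.isna`) accepts scalars of either kind.

```python
import numpy as np
import pandas as pd

print(pd.isnull(pd.NaT))        # True  - missing-datetime scalar
print(pd.isnull(np.nan))        # True  - float NaN
print(pd.isnull("2017-12-13"))  # False

# .isna() exists on Series, which is why calling it on a scalar inside
# a row-wise apply raises AttributeError:
s = pd.Series(pd.to_datetime(["2017-12-13", None]))
print(s.isna().tolist())        # [False, True]
```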
69,564,817 | 2021-10-14 | https://stackoverflow.com/questions/69564817/typeerror-load-missing-1-required-positional-argument-loader-in-google-col | I am trying to do a regular import in Google Colab. This import worked up until now. If I try: import plotly.express as px or import pingouin as pg I get an error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-19-86e89bd44552> in <module>() ----> 1 import plotly.express as px 9 frames /usr/local/lib/python3.7/dist-packages/plotly/express/__init__.py in <module>() 13 ) 14 ---> 15 from ._imshow import imshow 16 from ._chart_types import ( # noqa: F401 17 scatter, /usr/local/lib/python3.7/dist-packages/plotly/express/_imshow.py in <module>() 9 10 try: ---> 11 import xarray 12 13 xarray_imported = True /usr/local/lib/python3.7/dist-packages/xarray/__init__.py in <module>() 1 import pkg_resources 2 ----> 3 from . import testing, tutorial, ufuncs 4 from .backends.api import ( 5 load_dataarray, /usr/local/lib/python3.7/dist-packages/xarray/tutorial.py in <module>() 11 import numpy as np 12 ---> 13 from .backends.api import open_dataset as _open_dataset 14 from .backends.rasterio_ import open_rasterio as _open_rasterio 15 from .core.dataarray import DataArray /usr/local/lib/python3.7/dist-packages/xarray/backends/__init__.py in <module>() 4 formats. They should not be used directly, but rather through Dataset objects. 5 ----> 6 from .cfgrib_ import CfGribDataStore 7 from .common import AbstractDataStore, BackendArray, BackendEntrypoint 8 from .file_manager import CachingFileManager, DummyFileManager, FileManager /usr/local/lib/python3.7/dist-packages/xarray/backends/cfgrib_.py in <module>() 14 _normalize_path, 15 ) ---> 16 from .locks import SerializableLock, ensure_lock 17 from .store import StoreBackendEntrypoint 18 /usr/local/lib/python3.7/dist-packages/xarray/backends/locks.py in <module>() 11 12 try: ---> 13 from dask.distributed import Lock as DistributedLock 14 except ImportError: 15 DistributedLock = None /usr/local/lib/python3.7/dist-packages/dask/distributed.py in <module>() 1 # flake8: noqa 2 try: ----> 3 from distributed import * 4 except ImportError: 5 msg = ( /usr/local/lib/python3.7/dist-packages/distributed/__init__.py in <module>() 1 from __future__ import print_function, division, absolute_import 2 ----> 3 from . import config 4 from dask.config import config 5 from .actor import Actor, ActorFuture /usr/local/lib/python3.7/dist-packages/distributed/config.py in <module>() 18 19 with open(fn) as f: ---> 20 defaults = yaml.load(f) 21 22 dask.config.update_defaults(defaults) TypeError: load() missing 1 required positional argument: 'Loader' I think it might be a problem with Google Colab or some basic utility package that has been updated, but I can not find a way to solve it. | Found the problem. I was installing pandas_profiling, and this package updated pyyaml to version 6.0 which is not compatible with the current way Google Colab imports packages. So just reverting back to pyyaml version 5.4.1 solved the problem. For more information check versions of pyyaml here. See this issue and formal answers in GitHub ################################################################## For reverting back to pyyaml version 5.4.1 in your code, add the next line at the end of your packages installations: !pip install pyyaml==5.4.1 It is important to put it at the end of the installation, some of the installations will change the pyyaml version. | 73 | 57 |
69,584,027 | 2021-10-15 | https://stackoverflow.com/questions/69584027/why-is-np-sumrangen-very-slow | I saw a video about speed of loops in python, where it was explained that doing sum(range(N)) is much faster than manually looping through range and adding the variables together, since the former runs in C due to built-in functions being used, while in the latter the summation is done in (slow) python. I was curious what happens when adding numpy to the mix. As I expected np.sum(np.arange(N)) is the fastest, but sum(np.arange(N)) and np.sum(range(N)) are even slower than doing the naive for loop. Why is this? Here's the script I used to test, some comments about the supposed cause of slowing done where I know (taken mostly from the video) and the results I got on my machine (python 3.10.0, numpy 1.21.2): updated script: import numpy as np from timeit import timeit N = 10_000_000 repetition = 10 def sum0(N = N): s = 0 i = 0 while i < N: # condition is checked in python s += i i += 1 # both additions are done in python return s def sum1(N = N): s = 0 for i in range(N): # increment in C s += i # addition in python return s def sum2(N = N): return sum(range(N)) # everything in C def sum3(N = N): return sum(list(range(N))) def sum4(N = N): return np.sum(range(N)) # very slow np.array conversion def sum5(N = N): # much faster np.array conversion return np.sum(np.fromiter(range(N),dtype = int)) def sum5v2_(N = N): # much faster np.array conversion return np.sum(np.fromiter(range(N),dtype = np.int_)) def sum6(N = N): # possibly slow conversion to Py_long from np.int return sum(np.arange(N)) def sum7(N = N): # list returns a list of np.int-s return sum(list(np.arange(N))) def sum7v2(N = N): # tolist conversion to python int seems faster than the implicit conversion # in sum(list()) (tolist returns a list of python int-s) return sum(np.arange(N).tolist()) def sum8(N = N): return np.sum(np.arange(N)) # everything in numpy (fortran libblas?) 
def sum9(N = N): return np.arange(N).sum() # remove dispatch overhead def array_basic(N = N): return np.array(range(N)) def array_dtype(N = N): return np.array(range(N),dtype = np.int_) def array_iter(N = N): # np.sum's source code mentions to use fromiter to convert from generators return np.fromiter(range(N),dtype = np.int_) print(f"while loop: {timeit(sum0, number = repetition)}") print(f"for loop: {timeit(sum1, number = repetition)}") print(f"sum_range: {timeit(sum2, number = repetition)}") print(f"sum_rangelist: {timeit(sum3, number = repetition)}") print(f"npsum_range: {timeit(sum4, number = repetition)}") print(f"npsum_iterrange: {timeit(sum5, number = repetition)}") print(f"npsum_iterrangev2: {timeit(sum5, number = repetition)}") print(f"sum_arange: {timeit(sum6, number = repetition)}") print(f"sum_list_arange: {timeit(sum7, number = repetition)}") print(f"sum_arange_tolist: {timeit(sum7v2, number = repetition)}") print(f"npsum_arange: {timeit(sum8, number = repetition)}") print(f"nparangenpsum: {timeit(sum9, number = repetition)}") print(f"array_basic: {timeit(array_basic, number = repetition)}") print(f"array_dtype: {timeit(array_dtype, number = repetition)}") print(f"array_iter: {timeit(array_iter, number = repetition)}") print(f"npsumarangeREP: {timeit(lambda : sum8(N/1000), number = 100000*repetition)}") print(f"npsumarangeREP: {timeit(lambda : sum9(N/1000), number = 100000*repetition)}") # Example output: # # while loop: 11.493371912998555 # for loop: 7.385945574002108 # sum_range: 2.4605720699983067 # sum_rangelist: 4.509678105998319 # npsum_range: 11.85120212900074 # npsum_iterrange: 4.464334709002287 # npsum_iterrangev2: 4.498494338993623 # sum_arange: 9.537815956995473 # sum_list_arange: 13.290120724996086 # sum_arange_tolist: 5.231948580003518 # npsum_arange: 0.241889145996538 # nparangenpsum: 0.21876695199898677 # array_basic: 11.736577274998126 # array_dtype: 8.71628468400013 # array_iter: 4.303306431000237 # npsumarangeREP: 21.240833958996518 # npsumarangeREP: 16.690092379001726 | np.sum(range(N)) is slow mostly because the current Numpy implementation do not use enough informations about the exact type/content of the values provided by the generator range(N). The heart of the general problem is inherently due to dynamic typing of Python and big integers although Numpy could optimize this specific case. First of all, range(N) returns a dynamically-typed Python object which is a (special kind of) Python generator. The object provided by this generator are also dynamically-typed. It is in practice a pure-Python integer. The thing is Numpy is written in the statically-typed language C and so it cannot efficiently work on dynamically-typed pure-Python objects. The strategy of Numpy is to convert such objects into C types when it can. One big problem in this case is that the integers provided by the generator can theorically be huge: Numpy do not know if the values can overflow a np.int32 or even a np.int64 type. Thus, Numpy first detect the good type to use and then compute the result using this type. This translation process can be quite expensive and appear not to be needed here since all the values provided by range(10_000_000). However, range(5_000_000_000) returns the same object type with pure-Python integers overflowing np.int32 and Numpy needs to automatically detect this case not to return wrong results. 
Even when the input type can be correctly identified (np.int32 on my machine), that does not mean the output will be correct, because overflows can occur during the computation of the sum. This is sadly the case on my machine. The Numpy developers decided to deprecate such use and state in the documentation that np.fromiter should be used instead. np.fromiter has a required dtype parameter that lets the user define the right type to use. One way to check this behaviour in practice is to simply create a temporary list: tmp = list(range(10_000_000)) # Numpy implicitly converts the list to a Numpy array but # still automatically detects the input type to use np.sum(tmp) A faster implementation is the following: tmp = list(range(10_000_000)) # The array is explicitly converted using a well-defined type and # thus there is no need to perform an automatic detection # (note that the result is still wrong since it does not fit in a np.int32) tmp2 = np.array(tmp, dtype=np.int32) result = np.sum(tmp2) The first case takes 476 ms on my machine while the second takes 289 ms. Note that np.sum itself takes only 4 ms. Thus, a large part of the time is spent in the conversion of pure-Python integer objects to internal int32 types (more specifically, in the management of pure-Python integers). list(range(10_000_000)) is expensive too, as it takes 205 ms. This is again due to the overhead of pure-Python integers (i.e. allocations, deallocations, reference counting, incrementing variable-sized integers, memory indirections and conditional branches due to dynamic typing) as well as the overhead of the generator. sum(np.arange(N)) is slow because sum is a pure-Python function working on a Numpy-defined object. The CPython interpreter needs to call Numpy functions to perform basic additions. Moreover, Numpy-defined integer objects are still Python objects and so they are subject to reference counting, allocation, deallocation, etc. Not to mention that Numpy and CPython add many checks in functions that ultimately just add two native numbers together. A Numpy-aware just-in-time compiler such as Numba can solve this issue. Indeed, Numba takes 23 ms on my machine to compute the sum of np.arange(10_000_000) (with code still written in Python) while the CPython interpreter takes 556 ms. | 30 | 18
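A short sketch of the `np.fromiter` route the answer mentions from the documentation — written here as an illustration, with an explicit `int64` dtype so the type-detection pass is skipped and the int32 overflow discussed above cannot occur; timings will of course vary by machine.

```python
import numpy as np

N = 10_000_000

# Explicit dtype: no automatic type detection and no np.int32 overflow for this N.
total = np.fromiter(range(N), dtype=np.int64, count=N).sum()
assert total == N * (N - 1) // 2  # closed-form check of the result

# For comparison, the all-NumPy version that the benchmarks show to be fastest:
assert np.arange(N, dtype=np.int64).sum() == total
```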
69,585,800 | 2021-10-15 | https://stackoverflow.com/questions/69585800/what-is-the-fundamental-difference-between-tar-unix-and-tarfile-python | What is the fundamental difference between tarring a folder using tar on Unix and tarfile in Python that results in a different file size? In the example below, there is an 8.2 MB difference. I'm currently using a Mac. The folder in this example contains a bunch of random text files for testing purposes. tar -cvf archive_unix.tar files/ python -m tarfile -c archive_pycli.tar files/ # using Python 3.9.6 -rw-r--r-- 1 userid staff 24606720 Oct 15 09:40 archive_pycli.tar -rw-r--r-- 1 userid staff 16397824 Oct 15 09:39 archive_unix.tar | Interesting question. The documentation of tarfile (https://docs.python.org/3/library/tarfile.html) mentions that the default format for tar archive created by tarfile is, since python 3.8, PAX_FORMAT whereas archives created by the tar command have the GNU format which I believe explains the difference. Now to produce the same archive as the tar command and one with the default format (as your command did): import tarfile with tarfile.TarFile(name='archive-py-gnu.tar', mode='w', format=tarfile.GNU_FORMAT) as tf: tf.add('tmp') with tarfile.TarFile(name='archive-py-default.tar', mode='w') as tf: tf.add('tmp') For comparison: $ tar cf archive-tar.tar tmp/ $ ls -l 3430400 16:28 archive-py-default.tar 3317760 16:28 archive-py-gnu.tar 3317760 16:27 archive-tar.tar Results of the file command: $ file archive_unix.tar archive_unix.tar: POSIX tar archive (GNU) $ file archive-py-gnu.tar archive-py-gnu.tar: POSIX tar archive (GNU) $ file archive-py-default.tar archive-py-default.tar: POSIX tar archive Now I cannot tell you the difference between the different formats, sorry. But I hope this helps. | 6 | 7 |
69,580,833 | 2021-10-15 | https://stackoverflow.com/questions/69580833/fastest-way-to-move-objects-within-an-s3-bucket-using-boto3 | I need to copy all files from one prefix in S3 to another prefix within the same bucket. My solution is something like: file_list = [List of files in first prefix] for file in file_list: copy_source = {'Bucket': my_bucket, 'Key': file} s3_client.copy(copy_source, my_bucket, new_prefix) However I am only moving 200 tiny files (1 kb each) and this procedure takes up to 30 seconds. It must be possible to do it fasteer? | I would do it in parallel. For example: from multiprocessing import Pool file_list = [List of files in first prefix] print(objects_to_download) def s3_coppier(s3_file): copy_source = {'Bucket': my_bucket, 'Key': s3_file} s3_client.copy(copy_source, my_bucket, new_prefix) # copy 5 objects at the same time with Pool(5) as p: p.map(s3_coppier, file_list) | 6 | 6 |
69,577,782 | 2021-10-14 | https://stackoverflow.com/questions/69577782/how-does-python-3-10-match-compares-1-and-true | PEP 622, Literal Patterns says the following: Note that because equality (__eq__) is used, and the equivalency between Booleans and the integers 0 and 1, there is no practical difference between the following two: case True: ... case 1: ... and True.__eq__(1) and (1).__eq__(True) both returns True, but when I run these two code snippets with CPython, it seems like case True and case 1 are not same. $ python3.10 >>> match 1: ... case True: ... print('a') # not executed ... >>> match True: ... case 1: ... print('a') # executed ... a How are 1 and True actually compared? | Looking at the pattern matching specification, this falls under a "literal pattern": A literal pattern succeeds if the subject value compares equal to the value expressed by the literal, using the following comparisons rules: Numbers and strings are compared using the == operator. The singleton literals None, True and False are compared using the is operator. So when the pattern is: case True: It uses is, and 1 is True is false. On the other hand, case 1: Uses ==, and 1 == True is true. | 7 | 10 |
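The two comparison rules quoted from the specification can be checked directly, outside of `match` (a small illustration added here):

```python
subject = 1

# `case True:` compares with `is`, because True is a singleton literal
print(subject is True)  # False -> the pattern does not match 1

# `case 1:` compares with `==`
print(subject == 1)     # True  -> the pattern matches

match True:
    case 1:
        print("True matched `case 1` via ==")
```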
69,575,019 | 2021-10-14 | https://stackoverflow.com/questions/69575019/overloading-operators-using-getattr-in-python | I am trying to overload several operators at once using the __getattr__ function. In my code, if I call foo.__add__(other) it works as expected, but when I try foo + bar, it does not. Here is a minimal example: class Foo(): def add(self, other): return 1 + other def sub(self, other): return 1 - other def __getattr__(self, name): stripped = name.strip('_') if stripped in {'sub', 'add'}: return getattr(self, stripped) else: return if __name__=='__main__': bar = Foo() print(bar.__add__(1)) # works print(bar + 1) # doesn't work I realize that it would be easier in this example to just define __add__ and __sub__, but that is not an option in my case. Also, as a small side question, if I replace the line: if stripped in {'sub', 'add'}: with if hasattr(self, name): the code works, but then my IPython kernel crashes. Why does this happen and how could I prevent it? | This is happening because python operators use an optimization to look up the function implementing the operator. The following lines are roughly equivalent: foo + 1 type(foo).__add__(foo, 1) Operators are found specifically on the class object only, never on the instance. bar.__add__(1) calls __getattr__ to find the missing attribute on bar. This works because it bypasses normal operator lookup procedures. bar + 1 calls Foo.__add__(bar, 1) followed by int.__radd__(1, bar). The first attribute lookup fails, and the second option raises TypeError. | 6 | 7
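Because special methods are looked up on the type, one possible workaround (a sketch of my own, not from the accepted answer) is to install the dunder names on the class once, forwarding them to the plain methods:

```python
class Foo:
    def add(self, other):
        return 1 + other

    def sub(self, other):
        return 1 - other

# Install __add__/__sub__ on the class so the operator machinery can find them.
for _name in ("add", "sub"):
    setattr(Foo, f"__{_name}__", getattr(Foo, _name))

bar = Foo()
print(bar + 1)  # 2
print(bar - 1)  # 0
```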
69,551,066 | 2021-10-13 | https://stackoverflow.com/questions/69551066/add-xll-as-addin-to-excel | I got a .xll file that I can easily add to excel by doing this: Options > Addins > Browse > double click .xll file It gets imported + activated (and it remains in my excel addins every time I close and open Excel). This is the manual way I try to replace with a script. PowerShell $excel=New-Object -ComObject excel.application $excel.RegisterXLL("C:\temp\v-0.0.1-20210906\LS-ZmqRtd-AddIn64.xll") $excel.Visible = "$True" #$excel.Quit() This will create an instance of Excel, register the XLL (I get a "true" in my console) and show the created instance. But when I then go to AddIns, the AddIn isn't there. Python xl = win32com.client.gencache.EnsureDispatch("Excel.Application") xl.Visible = True xl.RegisterXLL( "C:/Users/michael.k/Desktop/v-0.0.1-20210906/LS-ZmqRtd-AddIn64.xll" ) wb = xl.Workbooks.Open("C:/Users/michael.k/Desktop/v-0.0.1-20210906/Test.xlsx") But this behaves like the Powershell script. So.. how can I add my .xll file into Excel to stay there permanently? Any suggestions? Thanks in advance! | Thanks to the link Charles Williams gave me, I was able to do it a little bit different. You can easily create a registry key to let excel know that it should run the .xll-file. # initializing new variables $req_path = Get-Item -Path Registry::HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Excel\Options | Select-Object -ExpandProperty Property $newest_version = (Get-ChildItem -Path I:\Software\LS-ZmqRtd\Test\ -Directory | sort lastwritetime | Select -Last 1).Name $full_path_new_version = '/R "I:\Software\LS-ZmqRtd\Test\'+$newest_version+'\LS-ZmqRtd-AddIn64.xll"' $only_opens = @() [bool]$ls_zmqrtd_found = $false [bool]$ls_zmqrtd_updated = $false $count_opens = 0 # welcome message echo ">> checking if the LS-ZmqRtd addin is installed and has the newest version.." Start-Sleep -s 5 # check if there are regkeys that contain 'OPEN' in their name (if yes, add them to the $only_opens array) foreach ($entry in $req_path) { if ($entry -like "OPEN*") { $only_opens += $entry } } if (!$only_opens) # check if array is empty (if yes, add the new regkey for LS-ZmqRtd) { echo ">> the LS-ZmqRtd addin couldn't be found.. adding it to excel now." Start-Sleep -s 2 New-ItemProperty -Path HKCU:\Software\Microsoft\Office\16.0\Excel\Options -Name OPEN -PropertyType String -Value $full_path_new_version echo ">> addin was added to excel successfully - this requires Excel to be fully closed and re-opened." } else # if no, check if one of the regkeys have the LS-ZmqRtd path value (if found, set $ls_zmqrtd_found to true - else remain false) { foreach ($open in $only_opens) { $value = (Get-ItemProperty -Path "HKCU:\Software\Microsoft\Office\16.0\Excel\Options" -Name $open).$open if ($value -eq $full_path_new_version) { $ls_zmqrtd_found = $true } else { echo ">> found an old version of LS-ZmqRtd.. replacing it with the new one now." Start-Sleep -s 2 Set-ItemProperty -Path HKCU:\Software\Microsoft\Office\16.0\Excel\Options -Name $open -Value $full_path_new_version $ls_zmqrtd_updated = $true } $count_opens += 1 } if ($ls_zmqrtd_found -eq $true) # if $ls_zmqrtd_found is true, there is nothing to do { echo ">> found that the newest version of LS-ZmqRtd is already installed - nothing to do here." } elseif ($ls_zmqrtd_updated -eq $true) { echo ">> updated LS-ZmqRtd to the newest version - an update requires Excel to be fully closed and re-opened." 
} else # if $ls_zmqrtd_found is false, increment the last OPEN's number by 1 and add the new reqkey for LS-ZmqRtd { $new_reg_key = "OPEN" + ($count_opens+1) echo ">> the LS-ZmqRtd addin couldn't be found.. adding it to excel now." Start-Sleep -s 2 New-ItemProperty -Path HKCU:\Software\Microsoft\Office\16.0\Excel\Options -Name $new_reg_key -PropertyType String -Value $full_path_new_version echo ">> addin was added to excel successfully - this requires Excel to be fully closed and re-opened." } } This script checks if the .xll-File is already named in a registry key. If yes and it has our newest provided version -> do nothing If yes but the version is old -> update the registrey key's value If no -> create the registry key and set the value to our newest provided version | 5 | 1 |
69,561,458 | 2021-10-13 | https://stackoverflow.com/questions/69561458/how-to-check-type-of-files-using-the-header-file-signature-magic-numbers | By entering the file with its extension, my code succeeds in detecting the type of the file from the "magic number". magic_numbers = {'png': bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A]), 'jpg': bytes([0xFF, 0xD8, 0xFF, 0xE0]), #*********************# 'doc': bytes([0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1]), 'xls': bytes([0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1]), 'ppt': bytes([0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1]), #*********************# 'docx': bytes([0x50, 0x4B, 0x03, 0x04, 0x14, 0x00, 0x06, 0x00]), 'xlsx': bytes([0x50, 0x4B, 0x03, 0x04, 0x14, 0x00, 0x06, 0x00]), 'pptx': bytes([0x50, 0x4B, 0x03, 0x04, 0x14, 0x00, 0x06, 0x00]), #*********************# 'pdf': bytes([0x25, 0x50, 0x44, 0x46]), #*********************# 'dll': bytes([0x4D, 0x5A, 0x90, 0x00]), 'exe': bytes([0x4D, 0x5A]), } max_read_size = max(len(m) for m in magic_numbers.values()) with open('file.pdf', 'rb') as fd: file_head = fd.read(max_read_size) if file_head.startswith(magic_numbers['pdf']): print("It's a PDF File") else: print("It's not a PDF file") I want to know how I can modify it without specifying this part of the code, i.e. once I generate or enter the file it directly shows me the type of the file. if file_head.startswith(magic_numbers['pdf']): print("It's a PDF File") else: print("It's not a PDF file") I hope you understand me. | You most likely just want to iterate over the dictionary and test them all. You may be able to optimize or provide some error checking by using the extension as well. If you strip off the extension and check that first, you'll be successful most of the time, and if not you may not want to accept "baby.png" as an xlsx file. But, if you ignore extension, just loop over the entries: for ext in magic_numbers: if file_head.startswith(magic_numbers[ext]): print("It's a {} File".format(ext)) You probably want to put this in a function that returns the type, so you could just return the type instead of printing it out. EDIT Since some share magic numbers, we need to assume the extension is correct until we know that it isn't. I would extract the extension from the filename. This could be done with Pathlib or just string split: ext = filename.rsplit('.', 1)[-1] then test it specifically if ext in magic_numbers: if file_head.startswith(magic_numbers[ext]): return ext put the ext test first, so putting it all together: ext = filename.rsplit('.', 1)[-1] if ext in magic_numbers: if file_head.startswith(magic_numbers[ext]): return ext for ext in magic_numbers: if file_head.startswith(magic_numbers[ext]): return ext return None | 5 | 4
69,555,581 | 2021-10-13 | https://stackoverflow.com/questions/69555581/python-string-split-by-separator-all-possible-permutations | This might be heavily related to similar questions as Python 3.3: Split string and create all combinations , but I can't infer a pythonic solution out of this. Question is: Let there be a str such as 'hi|guys|whats|app', and I need all permutations of splitting that str by a separator. Example: #splitting only once ['hi','guys|whats|app'] ['hi|guys','whats|app'] ['hi|guys|whats','app'] #splitting only twice ['hi','guys','whats|app'] ['hi','guys|whats','app'] #splitting only three times ... etc I could write a backtracking algorithm, but does python (itertools, e.g.) offer a library that simplifies this algorithm? Thanks in advance!! | An approach, once you have split the string is to use itertools.combinations to define the split points in the list, the other positions should be fused again. def lst_merge(lst, positions, sep='|'): '''merges a list on points other than positions''' '''A, B, C, D and 0, 1 -> A, B, C|D''' a = -1 out = [] for b in list(positions)+[len(lst)-1]: out.append('|'.join(lst[a+1:b+1])) a = b return out def split_comb(s, split=1, sep='|'): from itertools import combinations l = s.split(sep) return [lst_merge(l, pos, sep=sep) for pos in combinations(range(len(l)-1), split)] examples >>> split_comb('hi|guys|whats|app', 0) [['hi|guys|whats|app']] >>> split_comb('hi|guys|whats|app', 1) [['hi', 'guys|whats|app'], ['hi|guys', 'whats|app'], ['hi|guys|whats', 'app']] >>> split_comb('hi|guys|whats|app', 2) [['hi', 'guys', 'whats|app'], ['hi', 'guys|whats', 'app'], ['hi|guys', 'whats', 'app']] >>> split_comb('hi|guys|whats|app', 3) [['hi', 'guys', 'whats', 'app']] >>> split_comb('hi|guys|whats|app', 4) [] ## impossible rationale ABCD -> A B C D 0 1 2 combinations of split points: 0/1 or 0/2 or 1/2 0/1 -> merge on 2 -> A B CD 0/2 -> merge on 1 -> A BC D 1/2 -> merge on 0 -> AB C D generic function Here is a generic version, working like above but also taking -1 as parameter for split, in which case it will output all combinations def lst_merge(lst, positions, sep='|'): a = -1 out = [] for b in list(positions)+[len(lst)-1]: out.append('|'.join(lst[a+1:b+1])) a = b return out def split_comb(s, split=1, sep='|'): from itertools import combinations, chain l = s.split(sep) if split == -1: pos = chain.from_iterable(combinations(range(len(l)-1), r) for r in range(len(l)+1)) else: pos = combinations(range(len(l)-1), split) return [lst_merge(l, pos, sep=sep) for pos in pos] example: >>> split_comb('hi|guys|whats|app', -1) [['hi|guys|whats|app'], ['hi', 'guys|whats|app'], ['hi|guys', 'whats|app'], ['hi|guys|whats', 'app'], ['hi', 'guys', 'whats|app'], ['hi', 'guys|whats', 'app'], ['hi|guys', 'whats', 'app'], ['hi', 'guys', 'whats', 'app']] | 5 | 1 |
69,551,065 | 2021-10-13 | https://stackoverflow.com/questions/69551065/setup-with-submodules-dependencies | We have a python package which is also a git repo. It depends on other python packages, themselves git repos. We made the latter git submodules of the former. None of these are public, so no PyPI. None of the other questions related to installing with submodule dependencies match our pattern. My question is not about finding (sub)packages with setuptools, nor is it about relative imports. This is our structure: package-repo/ setup.py setup.cfg README.md .gitignore .gitmodules .git/ submodule-repo/ .git/ .gitignore setup.py setup.cfg README.md submodule/ __init__.py moduleX.py moduleY.py package/ __init__.py moduleA.py moduleB.py subpackage1/ As is the case with requirements.txt, I naively though that something as follow would work out: from setuptools import setup setup(name='package', version='0.4.1', description='A package depending on other self made packages', url='git.ownnetwork.com', author='wli', author_email='wli@', license='Proprietary', packages=['package','package.subpackage1'], include_package_data=True, python_requires='>=3.7', classifiers=[ 'Natural Language :: English', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', ], install_requires=["SQLAlchemy", "pandas", "./submodule-repo"]) It doesn't work. An alternative is to add the submodule in packages and indicate its directory in package_dir. Well it didn't work that well, and what's the point of making a setup.py in the "submodule" if it can't be installed when installing the dependent module? I just want it to be installed without having to put it on PyPI or create a virtual PyPI server, which is seriously overkill, or having to indicate what to do (i.e. pip install ./submodule-repo/) in the README.md, which is inelegant. What's the way? Did I miss it in distutils or setuptools documentation? | You will need to specify where to install the submodule from. install_requires=[ 'SQLAlchemy', 'pandas', # Your private repository module '<dependency_name> @ git+ssh://[email protected]/<user_name>/<repo_name>@<branch>' ] | 5 | 2 |
69,541,613 | 2021-10-12 | https://stackoverflow.com/questions/69541613/how-to-json-serialize-enum-classes-in-pydantic-basemodel | I have the following code that uses Pydantic BaseModel data class from enum import Enum import requests from pydantic import BaseModel from requests import Response class PetType(Enum): DOG: str = 'dog' CAT: str = 'cat' class Pet(BaseModel): name: str type: PetType my_dog: Pet = Pet(name='Lucky', type=PetType.DOG) # This works resp: Response = requests.post('https://postman-echo.com/post', json=my_dog.json()) print(resp.json()) #This doesn't work resp: Response = requests.post('https://postman-echo.com/post', json=my_dog.dict()) print(resp.json()) That when I send json equals to model's dict(), I get the error: TypeError: Object of type 'PetType' is not JSON serializable How do I overcome this error and make PetType also serializable? P.S. The above example is short and simple, but I hit a use case where both cases of sending json=my_dog.json() and json=my_dog.dict() don't work. This is why I need to solve sending using dict(). | **<---- Addition 2 ----> ** Check types like https://docs.python.org/3/library/enum.html#enum.StrEnum and https://docs.python.org/3.12/library/enum.html#enum.IntEnum Instead of MyEnum(str, Enum) use MyEnum(StrENum) **<---- Addition ----> ** Look for Pydantic's parameter "use_enum_values" in Pydantic Model Config use_enum_values whether to populate models with the value property of enums, rather than the raw enum. This may be useful if you want to serialise model.dict() later (default: False) It looks like setting this value to True will do the same as the below solution. Turns out that this is a behavior of ENum, which is discussed here: https://github.com/samuelcolvin/pydantic/issues/2278 The way you should define the enum is using class PetType(str, Enum): instead of class PetType(Enum): For integers this Python's Enum library provides the type IntEnum: https://docs.python.org/3.10/library/enum.html#enum.IntEnum which is basically class IntEnum(int, Enum): pass If you look at the above Enum documentation you will find that a type like StrEnum doesn't exist but following the example for PetType you can define it easily. I am attaching the working code below from enum import Enum import requests from pydantic import BaseModel from requests import Response class PetType(str, Enum): DOG: str = 'dog' CAT: str = 'cat' class Pet(BaseModel): name: str type: PetType my_dog: Pet = Pet(name='Lucky', type=PetType.DOG) # This works resp: Response = requests.post('https://postman-echo.com/post', json=my_dog.json()) print(resp.json()) # Now this also works resp: Response = requests.post('https://postman-echo.com/post', json=my_dog.dict()) print(resp.json()) | 9 | 18 |
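The `StrEnum` type the answer alludes to only entered the standard library in Python 3.11; on earlier versions it can be defined the same way `IntEnum` is, as sketched below (illustration only):

```python
import json
from enum import Enum

class StrEnum(str, Enum):
    """String-valued Enum, mirroring enum.IntEnum (the stdlib ships this from 3.11)."""

class PetType(StrEnum):
    DOG = "dog"
    CAT = "cat"

print(json.dumps({"type": PetType.DOG}))  # {"type": "dog"}
```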
69,506,719 | 2021-10-9 | https://stackoverflow.com/questions/69506719/dealing-with-0000-in-datetime-format | How do you convert a column of dates of the form "2020-06-30 15:20:13.078196+00:00" to datetime in pandas? This is what I have done: pd.concat([df, df.date_string.apply(lambda s: pd.Series({'date':datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f%z')}))], axis=1) pd.concat([df, df.file_created.apply(lambda s: pd.Series({'date':datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f.%z')}))], axis=1) pd.concat([df, df.file_created.apply(lambda s: pd.Series({'date':datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f:%z')}))], axis=1) I get the error - time data '2020-06-30 15:20:13.078196+00:00' does not match format in all cases. Any help is appreciated. | +00:00 is a UTC offset of zero hours, thus can be interpreted as UTC. The easiest thing to do is let pd.to_datetime auto-infer the format. That works very well for standard formats like this (ISO 8601): import pandas as pd dti = pd.to_datetime(["2020-06-30 15:20:13.078196+00:00"]) print(dti) # DatetimeIndex(['2020-06-30 15:20:13.078196+00:00'], dtype='datetime64[ns, UTC]', freq=None) Notes pandas v2 allows you to set format="ISO8601" to specify that your input is in that format. It even allows parsing a mix of ISO8601 compatible strings: example. pd.to_datetime also works very well for mixed formats: example. In pandas v2, you can set format="mixed" in such cases. | 6 | 8 |
69,475,317 | 2021-10-7 | https://stackoverflow.com/questions/69475317/how-to-setup-netbeans-ide-for-python-development | I was using PyDev plugin in eclipse for developing python. But now I switched to NetBeans IDE 12.6 and I searched google for finding python plugins for NetBeans. I found a plugin called nbpython. But it is for NetBeans 8.1 and I am using NetBeans 12.6. So is there any plugin for NetBeans IDE 12.6 for developing Python Projects. Or does nbpython work in my version? | The new plugin for python is netbeansPython: https://plugins.netbeans.apache.org/catalogue/?id=89 https://github.com/albilu/netbeansPython | 6 | 4 |
69,504,352 | 2021-10-9 | https://stackoverflow.com/questions/69504352/fastapi-get-request-results-in-typeerror-value-is-not-a-valid-dict | this is my database schema. I defined my Schema like this: from pydantic import BaseModel class Userattribute(BaseModel): name: str value: str user_id: str id: str This is my model: class Userattribute(Base): __tablename__ = "user_attribute" name = Column(String) value = Column(String) user_id = Column(String) id = Column(String, primary_key=True, index=True) In a crud.py I define a get_attributes method. def get_attributes(db: Session, skip: int = 0, limit: int = 100): return db.query(models.Userattribute).offset(skip).limit(limit).all() This is my GET endpoint: @app.get("/attributes/", response_model=List[schemas.Userattribute]) def read_attributes(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)): users = crud.get_attributes(db, skip=skip, limit=limit) print(users) return users The connection to the database seems to work, but a problem is the datatype: pydantic.error_wrappers.ValidationError: 7 validation errors for Userattribute response -> 0 value is not a valid dict (type=type_error.dict) response -> 1 value is not a valid dict (type=type_error.dict) response -> 2 value is not a valid dict (type=type_error.dict) response -> 3 value is not a valid dict (type=type_error.dict) response -> 4 value is not a valid dict (type=type_error.dict) response -> 5 value is not a valid dict (type=type_error.dict) response -> 6 value is not a valid dict (type=type_error.dict) Why does FASTApi expect a dictionary here? I don't really understand it, since I am not able to even print the response. How can I fix this? | Pydantic 2 changed how models get configured, so if you're using the most recent version of Pydantic, see the section named Pydantic 2 below. SQLAlchemy does not return a dictionary, which is what pydantic expects by default. You can configure your model to also support loading from standard orm parameters (i.e. attributes on the object instead of dictionary lookups): class Userattribute(BaseModel): name: str value: str user_id: str id: str class Config: orm_mode = True You can also attach a debugger right before the call to return to see what's being returned. Since this answer has become slightly popular, I'd like to also mention that you can make orm_mode = True the default for your schema classes by having a common parent class that inherits from BaseModel: class OurBaseModel(BaseModel): class Config: orm_mode = True class Userattribute(OurBaseModel): name: str value: str user_id: str id: str This is useful if you want to support orm_mode for most of your classes (and for those where you don't, inherit from the regular BaseModel). Pydantic 2 Pydantic 2 has replaced the internal Config class with a model_config field: from pydantic import ConfigDict class OurBaseModel(BaseModel): model_config = ConfigDict(from_attributes=True) This works in the same way as the old orm_mode. | 34 | 81 |
69,546,459 | 2021-10-12 | https://stackoverflow.com/questions/69546459/convert-hydra-omegaconf-config-to-python-nested-dict-list | I'd like to convert a OmegaConf/Hydra config to a nested dictionary/list. How can I do this? | See OmegaConf.to_container(). Usage snippet: >>> conf = OmegaConf.create({"foo": "bar", "foo2": "${foo}"}) >>> assert type(conf) == DictConfig >>> primitive = OmegaConf.to_container(conf) >>> show(primitive) type: dict, value: {'foo': 'bar', 'foo2': '${foo}'} >>> resolved = OmegaConf.to_container(conf, resolve=True) >>> show(resolved) type: dict, value: {'foo': 'bar', 'foo2': 'bar'} | 21 | 35 |
69,490,450 | 2021-10-8 | https://stackoverflow.com/questions/69490450/objectnotexecutableerror-when-executing-any-sql-query-using-asyncengine | I'm using async_engine. When I try to execute anything: async with self.async_engine.connect() as con: query = "SELECT id, name FROM item LIMIT 50;" result = await con.execute(f"{query}") I'm getting: Exception has occurred: ObjectNotExecutableError Not an executable object: 'SELECT id, name FROM item LIMIT 50;' This question was asked before by user @stilmaniac but it is now deleted from SO. I found it in Google Search cache, here is copy. I have the same issue so I'm reasking it, but the original version is below: I'm trying to create tables from metadata as follows: Base = declarative_base() properties = Table( 'properties', Base.metadata, # ... Column('geolocation', Geography(geometry_type='POINT', srid=4326)), # ... ) engine = create_async_engine("postgresql+asyncpg://user:password@postgres/") async with engine.begin() as conn: await conn.run_sync(Base.metadata.create_all) Gives me the following error: sqlalchemy.exc.ObjectNotExecutableError: Not an executable object: 'CREATE INDEX "idx_properties_geolocation" ON "properties" USING GIST ("geolocation")' Considering this doc Versions: OS: macOS 11.4 ARM SQLAlchemy: 1.4.22 Python: 3.6 | As the exception message suggests, the str 'SELECT id, name FROM item LIMIT 50;' is not an executable object. To make it executable, wrap it with sqlalchemy.text. from sqlalchemy import text async with self.async_engine.connect() as con: query = "SELECT id, name FROM item LIMIT 50;" result = await con.execute(text(query)) async.connection.execute requires that its statement argument [...] is always an object that is in both the ClauseElement and Executable hierarchies, including: Select Insert, Update, Delete TextClause and TextualSelect DDL and objects which inherit from DDLElement The synchronous connection.execute method permits raw strings, but this is deprecated in v1.4 and has been removed in SQLAlchemy 2.0. | 61 | 148 |
69,544,658 | 2021-10-12 | https://stackoverflow.com/questions/69544658/how-to-build-a-self-referencing-model-in-pydantic-with-dataclasses | I am building an API using FastAPI and pydantic. As I follow DDD / clean architecture, which separates the definition of the model from the definition of the persistence layer, I use standard lib dataclasses in my model and then map them to SQLAlchemy tables using imperative mapping (ie. classical mapping). This works perfectly : @dataclass class User: name: str age: int @pydantic.dataclasses.dataclass class PydanticUser(User): ... However, I encounter a problem when defining a class with self-reference. ✅
class inheriting from Pydantic's BaseModel can self-reference Inheriting from pydantic's BaseModel works, however this is not compatible with SQLAlchemy imperative mapping, which I would like to use to stick to clean architecture / DDD principles. class BaseModelPerson(BaseModel): name: str age: int parent_person: BaseModelPerson = None BaseModelPerson.update_forward_refs() john = BaseModelPerson(name="John", age=49, parent_person=None) tim = BaseModelPerson(name="Tim", age=14, parent_person=john) print(john) # BaseModelPerson(name='John', age=49, parent_person=None) print(tim) # BaseModelPerson(name='Tim', age=14, parent_person=BaseModelPerson(name='John', age=49, parent_person=None)) ✅
Standard lib dataclasses can also self-reference from __future__ import annotations from dataclasses import dataclass @dataclass class StdlibPerson: name: str age: int parent: StdlibPerson john = StdlibPerson(name="John", age=49, parent=None) tim = StdlibPerson(name="Tim", age=14, parent=john) print(john) # StdlibPerson(name='John', age=49, parent=None) print(tim) # StdlibPerson(name='Tim', age=14, parent=StdlibPerson(name='John', age=49, parent=None)) ❌ Pydantic dataclass conversion causes recursion error The problem occurs when I try to convert the standard library dataclass into a pydantic dataclass. Defining a Pydantic dataclass like this: PydanticPerson = pydantic.dataclasses.dataclass(StdlibPerson) returns an error: # output (hundreds of lines - that is recursive indeed) # The name of an attribute on the class where we store the Field File "pydantic/main.py", line 990, in pydantic.main.create_model File "pydantic/main.py", line 299, in pydantic.main.ModelMetaclass.__new__ File "pydantic/fields.py", line 411, in pydantic.fields.ModelField.infer File "pydantic/fields.py", line 342, in pydantic.fields.ModelField.__init__ File "pydantic/fields.py", line 456, in pydantic.fields.ModelField.prepare File "pydantic/fields.py", line 673, in pydantic.fields.ModelField.populate_validators File "pydantic/class_validators.py", line 255, in pydantic.class_validators.prep_validators File "pydantic/class_validators.py", line 238, in pydantic.class_validators.make_generic_validator File "/usr/lib/python3.9/inspect.py", line 3111, in signature return Signature.from_callable(obj, follow_wrapped=follow_wrapped) File "/usr/lib/python3.9/inspect.py", line 2860, in from_callable return _signature_from_callable(obj, sigcls=cls, File "/usr/lib/python3.9/inspect.py", line 2323, in _signature_from_callable return _signature_from_function(sigcls, obj, File "/usr/lib/python3.9/inspect.py", line 2155, in _signature_from_function if _signature_is_functionlike(func): File "/usr/lib/python3.9/inspect.py", line 1883, in _signature_is_functionlike if not callable(obj) or isclass(obj): File "/usr/lib/python3.9/inspect.py", line 79, in isclass return isinstance(object, type) RecursionError: maximum recursion depth exceeded while calling a Python object Defining StdlibPerson like this does not solve the problem: @dataclass class StdlibPerson name: str age: int parent: "Person" = None nor does using the second way provided by pydantic documentation: @pydantic.dataclasses.dataclass class PydanticPerson(StdlibPerson) ... ❌ using Pydantic dataclasses directly from __future__ import annotations from pydantic.dataclasses import dataclass from typing import Optional @pydantic.dataclasses.dataclass class PydanticDataclassPerson: name: str age: int parent: Optional[PydanticDataclassPerson] = None john = PydanticDataclassPerson(name="John", age=49, parent=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<string>", line 6, in __init__ File "pydantic/dataclasses.py", line 97, in pydantic.dataclasses._generate_pydantic_post_init._pydantic_post_init # | False | | | File "pydantic/main.py", line 1040, in pydantic.main.validate_model File "pydantic/fields.py", line 699, in pydantic.fields.ModelField.validate pydantic.errors.ConfigError: field "parent" not yet prepared so type is still a ForwardRef, you might need to call PydanticDataclassPerson.update_forward_refs().
>>> PydanticDataclassPerson.update_forward_refs() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: type object 'PydanticDataclassPerson' has no attribute 'update_forward_refs' Question how can I define a pydantic model with self-referencing objects so that it is compatible with SQLAlchemy imperative mapping ? | It seems there are no easy solution to build a REST API with FastAPI, self-referencing objects and SQLAlchemy imperative mapping. I have decided to switch to the FastAPI / GraphQL stack, with the Strawberry library which is explicitly recommended in the FastAPI documentation. No problem so far, Strawberry makes it easy to build a GraphQL server and it handles self-referencing object in a breeze. #!/usr/bin/env python3.10 # src/my_app/entrypoints/api/schema.py import typing import strawberry @strawberry.type class Person: name: str age: int parent: Person | None | 12 | 4 |
69,542,217 | 2021-10-12 | https://stackoverflow.com/questions/69542217/how-to-disable-server-exceptions-on-fast-api-when-testing-with-httpx-asyncclient | We have a FastApi app and using httpx AsyncClient for testing purposes. We are experiencing a problem where the unit tests run locally fine but fail on the CI server (Github Actions). After further research we have come across this proposed solution by setting raise_server_exceptions=False to False. client = TestClient(app, raise_server_exceptions=False) However this is for the sync client. We are using the async client. @pytest.fixture async def client(test_app): async with AsyncClient(app=test_app, base_url="http://testserver") as client: yield client The AsyncClient does not support the raise_app_exceptions=False option. Does anyone have experience with this? Thanks | The problem is caused by FastApi version. You can use fastapi==0.65.0 and even without the ASGITransport object and the raise_app_exceptions=False flag you will be able to run the tests which are checking for custom exception raising. Also the fastapi version should be frozen in the requirements file. You can read more here | 9 | 2 |
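If the goal is specifically to suppress application exceptions rather than pin the FastAPI version, recent httpx releases expose the flag on the explicit ASGI transport; a hedged sketch under that assumption, reusing the question's test_app fixture (whether raise_app_exceptions is available depends on the installed httpx version):

import pytest
from httpx import ASGITransport, AsyncClient

@pytest.fixture
async def client(test_app):
    # raise_app_exceptions=False lets the response come back instead of re-raising server errors
    transport = ASGITransport(app=test_app, raise_app_exceptions=False)
    async with AsyncClient(transport=transport, base_url="http://testserver") as client:
        yield client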
69,513,799 | 2021-10-10 | https://stackoverflow.com/questions/69513799/pandas-read-csv-the-error-bad-lines-argument-has-been-deprecated-and-will-be-re | I am trying to read some data which may sometimes have erroneous and bad rows, so as always I passed error_bad_lines=False but the console keeps throwing the deprecation warning on every run. Why is this feature deprecated and is there any other alternative for skipping bad lines? | Read the documentation: Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon encountering a bad line instead. So, replace: df = pd.read_csv(..., error_bad_lines=False) with: df = pd.read_csv(..., on_bad_lines='skip') | 26 | 52 |
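If the skipped rows should be inspected rather than silently dropped, pandas 1.4+ also accepts a callable for on_bad_lines when the python engine is used; a sketch, with "data.csv" as a hypothetical file name:

import pandas as pd

def log_bad_line(bad_line):
    # bad_line arrives as the list of fields from the offending row
    print("skipped:", bad_line)
    return None  # returning None drops the row

df = pd.read_csv("data.csv", on_bad_lines=log_bad_line, engine="python")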
69,464,512 | 2021-10-6 | https://stackoverflow.com/questions/69464512/django-rest-error-attributeerror-module-collections-has-no-attribute-mutab | I'm build Django app, and it's work fine on my machine, but when I run inside docker container it's rest framework keep crashing, but when I comment any connection with rest framework it's work fine. My machine: Kali Linux 2021.3 docker machine: Raspberry Pi 4 4gb docker container image: python:rc-alpine3.14 python version on my machine: Python 3.9.7 python version on container: Python 3.10.0rc2 error output: Traceback (most recent call last): File "/app/manage.py", line 22, in <module> main() File "/app/manage.py", line 18, in main execute_from_command_line(sys.argv) File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line utility.execute() File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 413, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python3.10/site-packages/django/core/management/commands/test.py", line 23, in run_from_argv super().run_from_argv(argv) File "/usr/local/lib/python3.10/site-packages/django/core/management/base.py", line 354, in run_from_argv self.execute(*args, **cmd_options) File "/usr/local/lib/python3.10/site-packages/django/core/management/base.py", line 398, in execute output = self.handle(*args, **options) File "/usr/local/lib/python3.10/site-packages/django/core/management/commands/test.py", line 55, in handle failures = test_runner.run_tests(test_labels) File "/usr/local/lib/python3.10/site-packages/django/test/runner.py", line 728, in run_tests self.run_checks(databases) File "/usr/local/lib/python3.10/site-packages/django/test/runner.py", line 665, in run_checks call_command('check', verbosity=self.verbosity, databases=databases) File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 181, in call_command return command.execute(*args, **defaults) File "/usr/local/lib/python3.10/site-packages/django/core/management/base.py", line 398, in execute output = self.handle(*args, **options) File "/usr/local/lib/python3.10/site-packages/django/core/management/commands/check.py", line 63, in handle self.check( File "/usr/local/lib/python3.10/site-packages/django/core/management/base.py", line 419, in check all_issues = checks.run_checks( File "/usr/local/lib/python3.10/site-packages/django/core/checks/registry.py", line 76, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File "/usr/local/lib/python3.10/site-packages/django/core/checks/urls.py", line 13, in check_url_config return check_resolver(resolver) File "/usr/local/lib/python3.10/site-packages/django/core/checks/urls.py", line 23, in check_resolver return check_method() File "/usr/local/lib/python3.10/site-packages/django/urls/resolvers.py", line 412, in check for pattern in self.url_patterns: File "/usr/local/lib/python3.10/site-packages/django/utils/functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/usr/local/lib/python3.10/site-packages/django/urls/resolvers.py", line 598, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "/usr/local/lib/python3.10/site-packages/django/utils/functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/usr/local/lib/python3.10/site-packages/django/urls/resolvers.py", line 591, in urlconf_module 
return import_module(self.urlconf_name) File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/app/project/urls.py", line 24, in <module> path("", include("apps.urls", namespace="apps")), File "/usr/local/lib/python3.10/site-packages/django/urls/conf.py", line 34, in include urlconf_module = import_module(urlconf_module) File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/app/apps/urls.py", line 3, in <module> from . import api File "/app/apps/api.py", line 1, in <module> from .serializers import * File "/app/apps/serializers.py", line 1, in <module> from rest_framework import serializers File "/usr/local/lib/python3.10/site-packages/rest_framework/serializers.py", line 27, in <module> from rest_framework.compat import postgres_fields File "/usr/local/lib/python3.10/site-packages/rest_framework/compat.py", line 59, in <module> import requests File "/usr/local/lib/python3.10/site-packages/requests/__init__.py", line 63, in <module> from . import utils File "/usr/local/lib/python3.10/site-packages/requests/utils.py", line 27, in <module> from .cookies import RequestsCookieJar, cookiejar_from_dict File "/usr/local/lib/python3.10/site-packages/requests/cookies.py", line 172, in <module> class RequestsCookieJar(cookielib.CookieJar, collections.MutableMapping): AttributeError: module 'collections' has no attribute 'MutableMapping' Dockerfile FROM python:rc-alpine3.14 COPY . /app WORKDIR /app ENV UWSGI_PROFILE=core ENV PYTHONUNBUFFERED=TRUE RUN apk add --update --no-cache g++ gcc libxslt-dev # add some nessery libs RUN apk add python3-dev build-base linux-headers pcre-dev RUN pip3 install -U pip # upgrade pip RUN apk update \ && apk add --virtual build-deps gcc python3-dev musl-dev \ && apk add jpeg-dev zlib-dev libjpeg \ && pip install Pillow \ && apk del build-deps RUN apk add --no-cache --virtual=build-dependencies wget ca-certificates && \ wget "https://bootstrap.pypa.io/get-pip.py" -O /dev/stdout | python RUN pip install uwsgi RUN pip3 install -r requirements.txt # install all requirements RUN python uploadstatics.py EXPOSE 80 CMD ["gunicorn","project.wsgi:application", "-b 0.0.0.0:8060"] NOTES: I use gunicorn to run the app (the error show even I run't from manage.py) | You can downgrade your Python version. That should solve your problem; if not, use collections.abc.Mapping instead of the deprecated collections.Mapping. Refer here: Link | 9 | 8 |
69,482,678 | 2021-10-7 | https://stackoverflow.com/questions/69482678/specnotfound-invalid-name-try-the-format-user-package-in-creating-new-conda | I'm trying to Create New conda environment by 'Anaconda Prompt' usnig yml File in Windows 10. So here is the steps i made through: 1. using cd command i changed the directory to dir which my yml file located. (suppose my yml file is in c:/Users/<USER NAME>/.jupyter ) 2. Then i used conda env create -f Python 310.yml command to create new conda env. and what i got is: SpecNotFound: Invalid name, try the format: user/package Now I don't know how can I solve this problem and exactly what is the meaning of this error. Appendix my Python 310.yml file contains these stuff: | issue solved by changing contents of Python 310.yml and renaming yml file to Python310.yml. Here is the final .yml file content: name: Python3.9 channels: - defaults dependencies: - numpy - pandas - matplotlib - pip - python=3.9.* - python-dateutil - pytz - scikit-learn - scipy - statsmodels - xlrd - openpyxl - lxml - html5lib - beautifulsoup4 - jupyter - pip: - pmdarima - tensorflow - keras prefix: C:\Users\Shayan\Anaconda3\envs\Python3.9 | 27 | 4 |
69,534,651 | 2021-10-12 | https://stackoverflow.com/questions/69534651/disable-python-auto-concatenate-strings-across-lines | I was creating a long list of strings like this: tlds = [ 'com', 'net', 'org' 'edu', 'gov', ... ] I missed a comma after 'org'. Python automatically concatenated it with the string in the next line, into 'orgedu'. This became a bug very hard to identify. There are already many ways to define multi-line strings, some very explicit. So I wonder is there a way to disable this particular behavior? | The right Platonic thing to do is to modify the linter. But I think life is too short to do so, in addition to the fact that if the next coder does not know about your modified linter, his/her life would be a living hell. There should not be shame in ensuring that the input, even if hardcoded, is valid. If it was for me, I would implement a manual workaround like so: tlds = [ 'com', 'net', 'org' 'edu', 'gov', ] redone = ''.join(tlds) chunk_size = 3 tlds = [ redone[i:i+chunk_size] for i in range(0, len(redone), chunk_size) ] # Now you have a nice `tlds` print(tlds) You can forget commas, write two elements on the same line, or even in the same string all you want. You can invite unwary code collabs to mess it too, the text will be redone in threes (chunk_size) later on anyways if that is OK with your application. EDIT: Later to @Jasmijn 's note, I think there is an alternative approach if we have a dynamic size of entries we can use the literal input like this: tlds = ['''com net org edu gov nl co.uk'''] # This way every line is an entry by its own as seen directly without any quotations or decorations except for the first and last inputs. tlds = '\n'.split(tlds) | 13 | 1 |
69,485,319 | 2021-10-7 | https://stackoverflow.com/questions/69485319/starting-django-with-docker-unexpected-character | I'm trying to start up this Project on my Mac https://github.com/realsuayip/django-sozluk It works on my Windows machine, but I got this Error on my Mac: unexpected character "." in variable name near "127.0.0.1 192.168.2.253\nDJANGO_SETTINGS_MODULE=djdict.settings_prod\n\n\nSQL_ENGINE=django.db.backends.postgresql\nSQL_PORT=5432\nDATABASE=postgres\nSQL_HOST=db\n\nSQL_DATABASE=db_dictionary\nSQL_USER=db_dictionary_user\nSQL_PASSWORD=db_dictionary_password\n\n\nEMAIL_HOST=eh\nEMAIL_PORT=587\nEMAIL_HOST_USER=eh_usr\nEMAIL_HOST_PASSWORD=pw" furkan@MacBook-Air-von-Furkan gs % Any help would be much appreciated! | (Sorry about the answer - I don't yet have the rep to comment) Just want to add a note on to the answer by D.Mo - I had the same error this morning, and adding quotes around the values in my .env file did seem to resolve the issue. Though I then noticed that in the documentation for these env files, Docker mentions There is no special handling of quotation marks. This means that they are part of the VAL. Just wanted to point that out in case anyone ran into issues with this. I will likely just keep this change locally until others in my team experience the same problem - unless someone can confirm that the ENV values should now have ""s around them. FWIW I could not find a way to disable Docker Compose V2 (I'm on Arch, Docker v20.10.9) - docker-compose disable-v2 isn't a valid command for me (see here for what I assume is the (imo silly) reason). Edit - I ended up reverting to a previous version of docker-compose as the constant workarounds I had to implement weren't fun. I did so after seeing this - I ended up looking around yay -U /var/cache/pacman/pkg/docker-compose- and tab-complete gave me a list of cached versions. I went with 1.29.2-1 and things have been working perfectly again since then. Will just have to see what happens in future updates etc. | 11 | 2 |
69,477,169 | 2021-10-7 | https://stackoverflow.com/questions/69477169/how-to-randomly-set-inputs-to-zero-in-keras-during-training-autoencoder-callbac | I am training 2 autoencoders with 2 separate input paths jointly and I would like to randomly set one of the input paths to zero. I use tensorflow with keras backend (functional API). I am computing a joint loss (sum of two losses) for backpropagation. A -> A' & B ->B' loss => l2(A,A')+l2(B,B') networks taking A and B are connected in latent space. I would like to randomly set A or B to zero and compute the loss only on the corresponding path, meaning if input path A is set to zero loss be computed only by using outputs of only path B and vice versa; e.g.: 0 -> A' & B ->B' loss: l2(B,B') How do I randomly set input path to zero? How do I write a callback which does this? | Maybe try the following: import random def decision(probability): return random.random() < probability Define a method that makes a random decision based on a certain probability x and make your loss calculation depend on this decision. if current_epoch == random.choice(epochs): keep_mask = tf.ones_like(A.input, dtype=float32) throw_mask = tf.zeros_like(A.input, dtype=float32) if decision(probability=0.5): total_loss = tf.reduce_sum(reconstruction_loss_a * keep_mask + reconstruction_loss_b * throw_mask) else: total_loss = tf.reduce_sum(reconstruction_loss_a * throw_mask + reconstruction_loss_b * keep_mask) else: total_loss = tf.reduce_sum(reconstruction_loss_a + reconstruction_loss_b) I assume that you do not want to set one of the paths to zero every time you update your model parameters, as then there is a risk that one or even both models will not be sufficiently trained. Also note that I use the input of A to create zero_like and one_like tensors as I assume that both inputs have the same shape; if this is not the case, it can easily be adjusted. Depending on what your goal is, you may also consider replacing your input of A or B with a random tensor e.g. tf.random.normal based on a random decision. This creates noise in your model, which may be desirable, as your model would be forced to look into the latent space to try reconstruct your original input. This means precisely that you still calculate your reconstruction loss with A.input and A.output, but in reality your model never received the A.input, but rather the random tensor. Note that this answer serves as a simple conceptual example. A working example with Tensorflow can be found here. | 5 | 5 |
69,476,935 | 2021-10-7 | https://stackoverflow.com/questions/69476935/how-to-remove-parent-json-element-in-python3-if-child-is-object-is-empty | I'm trying to move data from SQL to Mongo. Here is a challenge I'm facing, if any child object is empty I want to remove parent element. I want till insurance field to be removed. Here is what I tried: def remove_empty_elements(jsonData): if(isinstance(jsonData, list) or isinstance(jsonData,dict)): for elem in list(jsonData): if not isinstance(elem, dict) and isinstance(jsonData[elem], list) and elem: jsonData[elem] = [x for x in jsonData[elem] if x] if(len(jsonData[elem])==0): del jsonData[elem] elif not isinstance(elem, dict) and isinstance(jsonData[elem], dict) and not jsonData[elem]: del jsonData[elem] else: pass return jsonData sample data { "_id": "30546c62-8ea0-4f1a-a239-cc7508041a7b", "IsActive": "True", "name": "Pixel 3", "phone": [ { "Bill": 145, "phonetype": "xyz", "insurance": [ { "year_one_claims": [ { "2020": 200 }, { }, { }, { }, { } ] }, { "year_two_claims": [ { }, { }, { }, { }, { } ] }, ] } ], "Provider": { "agent": "aaadd", } } Results should look like that { "_id": "30546c62-8ea0-4f1a-a239-cc7508041a7b", "IsActive": "True", "name": "Pixel 3", "phone": [ { "Bill": 145, "phonetype": "xyz", "insurance": [ { "year_one_claims": [ { "2020": 200 }, ] }, ] } ], "Provider": { "agent": "aaadd", } } | Your if statements are kind of confusing. I think you are looking for a recursion: import json # define which elements you want to remove: to_be_deleted = [[], {}, "", None] def remove_empty_elements(jsonData): if isinstance(jsonData, list): jsonData = [new_elem for elem in jsonData if (new_elem := remove_empty_elements(elem)) not in to_be_deleted] elif isinstance(jsonData,dict): jsonData = {key: new_value for key, value in jsonData.items() if (new_value := remove_empty_elements(value)) not in to_be_deleted} return jsonData print(json.dumps(remove_empty_elements(jsonData), indent=4)) Edit/Note: from Python3.8 you can use assignements (:=) in comprehensions Output: { "_id": "30546c62-8ea0-4f1a-a239-cc7508041a7b", "IsActive": "True", "name": "Pixel 3", "phone": [ { "Bill": 145, "phonetype": "xyz", "insurance": [ { "year_one_claims": [ { "2020": 200 } ] } ] } ], "Provider": { "agent": "aaadd" } } | 5 | 2 |
69,482,632 | 2021-10-7 | https://stackoverflow.com/questions/69482632/recover-from-pendingrollbackerror-and-allow-subsequent-queries | We have a pyramid web application. We use [email protected] with Zope transactions. In our application, it is possible for an error to occur during flush as described here which causes any subsequent usage of the SQLAlchemy session to throw a PendingRollbackError. The error which occurs during a flush is unintentional (a bug), and is raised to our exception handling view... which tries to use data from the SQLAlchemy session, which then throws a PendingRollbackError. Is it possible to "recover" from a PendingRollbackError if you have not framed your transaction management correctly? The SQLAclhemy documentation says to avoid this situation you essentially "just need to do things the right way". Unfortunately, this is a large codebase, and developers don't always follow correct transaction management. This issue is also complicated if savepoints/nested transactions are used. def some_view(): # constraint violation session.add_all([Foo(id=1), Foo(id=1)]) session.commit() # Error is raised during flush return {'data': 'some data'} def exception_handling_view(): # Wired in via pyramid framework, error ^ enters here. session.query(... does a query to get some data) # This throws a `PendingRollbackError` I am wondering if we can do something like the below, but don't understand pyramid + SQLAlchemy + Zope transactions well enough to know the implications (when considering the potential for nested transactions etc). def exception_handling_view(): # Wired in via pyramid framework, error ^ enters here. def _query(): session.query(... does a query to get some data) try: _query() except PendingRollbackError: session.rollback() _query() | Instead of trying to execute your query, just try to get the connection: def exception_handling_view(): try: _ = session.connection() except PendingRollbackError: session.rollback() session.query(...) session.rollback() only rolls back the innermost transaction, as is usually expected β assuming nested transactions are used intentionally via the explicit session.begin_nested(). You don't have to rollback parent transactions, but if you decide to do that, you can: while session.registry().in_transaction(): session.rollback() | 5 | 8 |
69,507,269 | 2021-10-9 | https://stackoverflow.com/questions/69507269/why-cant-add-file-handler-with-the-form-of-self-fh-in-the-init-method | os and python info: uname -a Linux debian 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux python3 --version Python 3.9.2 Here is a simple class which can start multiprocessing. from multiprocessing.pool import Pool class my_mp(object): def __init__(self): self.process_num = 3 fh = open('test.txt', 'w') def run_task(self,i): print('process {} start'.format(str(i))) time.sleep(2) print('process {} end'.format(str(i))) def run(self): pool = Pool(processes = self.process_num) for i in range(self.process_num): pool.apply_async(self.run_task,args = (i,)) pool.close() pool.join() Initialize the my_mp class,then start multiprocess. ins = my_mp() ins.run() process 0 start process 1 start process 2 start process 0 end process 2 end process 1 end Now replace fh = open('test.txt', 'w') with self.fh = open('test.txt', 'w') in my_mp class and try again. ins = my_mp() ins.run() No output!Why no process start? >>> from multiprocessing.pool import Pool >>> >>> class my_mp(object): ... def __init__(self): ... self.process_num = 3 ... fh = open('test.txt', 'w') ... def run_task(self,i): ... print('process {} start'.format(str(i))) ... time.sleep(2) ... print('process {} end'.format(str(i))) ... def run(self): ... pool = Pool(processes = self.process_num) ... for i in range(self.process_num): ... pool.apply_async(self.run_task,args = (i,)) ... pool.close() ... pool.join() ... >>> x = my_mp() >>> x.run() process 0 start process 1 start process 2 start process 2 end process 0 end process 1 end >>> class my_mp(object): ... def __init__(self): ... self.process_num = 3 ... self.fh = open('test.txt', 'w') ... def run_task(self,i): ... print('process {} start'.format(str(i))) ... time.sleep(2) ... print('process {} end'.format(str(i))) ... def run(self): ... pool = Pool(processes = self.process_num) ... for i in range(self.process_num): ... pool.apply_async(self.run_task,args = (i,)) ... pool.close() ... pool.join() ... >>> x = my_mp() >>> x.run() >>> x.run() >>> x = my_mp() >>> class my_mp(object): ... def __init__(self): ... self.process_num = 3 ... fh = open('test.txt', 'w') ... self.fh = fh ... def run_task(self,i): ... print('process {} start'.format(str(i))) ... time.sleep(2) ... print('process {} end'.format(str(i))) ... def run(self): ... pool = Pool(processes = self.process_num) ... for i in range(self.process_num): ... pool.apply_async(self.run_task,args = (i,)) ... pool.close() ... pool.join() ... >>> x = my_mp() >>> x.run() >>> Why can't add file handler with the form of self.fh in the __init__ method?I have never called the file handler defined in __init__ in any process. | The problem: Stdlib multiprocessing uses pickle to serialize objects. Anything which needs to be sent across the process boundary needs to be picklable. Custom class instances are generally picklable, as long as all their attributes are picklable - it works by importing the type within the subprocess and unpickling the attributes. The issue is that the object returned by open() is not picklable. >>> class A: ... pass ... >>> import pickle >>> pickle.dumps(A()) b'\x80\x04\x95\x15\x00\x00\x00\x00\x00\x00\x00\x8c\x08__main__\x94\x8c\x01A\x94\x93\x94)\x81\x94.' >>> class A: ... def __init__(self): ... self.fh = open("test.txt", "w") ... 
>>> pickle.dumps(A()) TypeError: cannot pickle '_io.TextIOWrapper' object In the first case, the multiprocessing pool still works because fh is just a local variable and it's deleted as soon as it's out of scope, i.e. when the __init__ method returns. But as soon as you save this handle into the instance's namespace with self.fh = open(...), there will remain a reference and it will need to be sent over the process boundary. You might think that since you've only scheduled the method self.run_task to execute in the pool, that the state set from __init__ doesn't matter, but that's not the case. There is still a reference: >>> ins = my_mp() >>> ins.run_task.__self__.__dict__ {'process_num': 3, 'fh': <_io.TextIOWrapper name='test.txt' mode='w' encoding='UTF-8'>} Note that calling ins = my_mp() runs the __init__ method in the main process, and ins.run_task is the object which gets sent over the process boundary. Solution: There is a third-party library which provides a drop-in replacement for the stdlib multiprocessing Pool - pip install pathos and replace the multiprocessing import with: from pathos.multiprocessing import Pool pathos uses dill, a more powerful serialization library than pickle, so it is able to serialize the objects returned by open(). Your code should work again without any other changes. However, you should beware that each worker process will not know about other processes writing bytes to self.fh, so whichever worker writes last may overwrite data written earlier from some other process. | 6 | 3 |
69,517,460 | 2021-10-10 | https://stackoverflow.com/questions/69517460/bert-get-sentence-embedding | I am replicating code from this page. I have downloaded the BERT model to my local system and getting sentence embedding. I have around 500,000 sentences for which I need sentence embedding and it is taking a lot of time. Is there a way to expedite the process? Would sending batches of sentences rather than one sentence at a time help? . #!pip install transformers import torch import transformers from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states = True, # Whether the model returns all hidden-states. ) # Put the model in "evaluation" mode, meaning feed-forward operation. model.eval() corpa=["i am a boy","i live in a city"] storage=[]#list to store all embeddings for text in corpa: # Add the special tokens. marked_text = "[CLS] " + text + " [SEP]" # Split the sentence into tokens. tokenized_text = tokenizer.tokenize(marked_text) # Map the token strings to their vocabulary indeces. indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) segments_ids = [1] * len(tokenized_text) tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) # Run the text through BERT, and collect all of the hidden states produced # from all 12 layers. with torch.no_grad(): outputs = model(tokens_tensor, segments_tensors) # Evaluating the model will return a different number of objects based on # how it's configured in the `from_pretrained` call earlier. In this case, # becase we set `output_hidden_states = True`, the third item will be the # hidden states from all layers. See the documentation for more details: # https://huggingface.co/transformers/model_doc/bert.html#bertmodel hidden_states = outputs[2] # `hidden_states` has shape [13 x 1 x 22 x 768] # `token_vecs` is a tensor with shape [22 x 768] token_vecs = hidden_states[-2][0] # Calculate the average of all 22 token vectors. sentence_embedding = torch.mean(token_vecs, dim=0) storage.append((text,sentence_embedding)) ######update 1 I modified my code based upon the answer provided. It is not doing full batch processing #!pip install transformers import torch import transformers from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states = True, # Whether the model returns all hidden-states. ) # Put the model in "evaluation" mode, meaning feed-forward operation. model.eval() batch_sentences = ["Hello I'm a single sentence", "And another sentence", "And the very very last one"] encoded_inputs = tokenizer(batch_sentences) storage=[]#list to store all embeddings for i,text in enumerate(encoded_inputs['input_ids']): tokens_tensor = torch.tensor([encoded_inputs['input_ids'][i]]) segments_tensors = torch.tensor([encoded_inputs['attention_mask'][i]]) print (tokens_tensor) print (segments_tensors) # Run the text through BERT, and collect all of the hidden states produced # from all 12 layers. with torch.no_grad(): outputs = model(tokens_tensor, segments_tensors) # Evaluating the model will return a different number of objects based on # how it's configured in the `from_pretrained` call earlier. In this case, # becase we set `output_hidden_states = True`, the third item will be the # hidden states from all layers. 
See the documentation for more details: # https://huggingface.co/transformers/model_doc/bert.html#bertmodel hidden_states = outputs[2] # `hidden_states` has shape [13 x 1 x 22 x 768] # `token_vecs` is a tensor with shape [22 x 768] token_vecs = hidden_states[-2][0] # Calculate the average of all 22 token vectors. sentence_embedding = torch.mean(token_vecs, dim=0) print (sentence_embedding[:10]) storage.append((text,sentence_embedding)) I could update first 2 lines from the for loop to below. But they work only if all sentences have same length after tokenization tokens_tensor = torch.tensor([encoded_inputs['input_ids']]) segments_tensors = torch.tensor([encoded_inputs['attention_mask']]) moreover in that case outputs = model(tokens_tensor, segments_tensors) fails. How could I fully perform batch processing in such case? | About your original question: there is not much you can do. BERT is pretty computationally demanding algorithm. Your best shot is to use BertTokenizerFast instead of the regular BertTokenizer. The "fast" version is much more efficient and you will see the difference for large amounts of text. Saying that, I have to warn you that averaging BERT word embeddings does not create good embeddings for the sentence. See this post. From your questions I assume you want to do some kind of semantic similarity search. Try using one of those open-sourced models. | 6 | 2 |
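To run the encoder on whole batches instead of one sentence at a time, padding plus an attention-mask-aware mean is one common approach; a hedged sketch reusing the question's model names (the mask-weighted pooling is an assumption, not the only valid choice):

import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

batch_sentences = ["Hello I'm a single sentence", "And another sentence", "And the very very last one"]
encoded = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoded)
token_vecs = outputs.hidden_states[-2]          # shape [batch, seq_len, 768]
mask = encoded["attention_mask"].unsqueeze(-1)  # zero out padding positions
sentence_embeddings = (token_vecs * mask).sum(dim=1) / mask.sum(dim=1)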
69,468,552 | 2021-10-6 | https://stackoverflow.com/questions/69468552/efficiency-of-sorting-by-multiple-keys-in-python | I have a list of strings that I want to sort by two custom key functions in Python 3.6. Comparing the multi-sort approach (sorting by the lesser key then by the major key) to the multi-key approach (taking the key as the tuple (major_key, lesser_key)), I could see the latter being more than 2x slower than the former, which was a surprise as I thought they are equivalent. I would like to understand why this is so. import random from time import time largest = 1000000 length = 10000000 start = time() lst = [str(x) for x in random.choices(range(largest), k=length)] t0 = time() - start start = time() tmp = sorted(lst, key=lambda x: x[::2]) l1 = sorted(tmp, key=lambda x: ''.join(sorted(x))) t1 = time() - start start = time() l2 = sorted(lst, key=lambda x: (''.join(sorted(x)), x[::2])) t2 = time() - start print(f'prepare={t0} multisort={t1} multikey={t2} slowdown={t2/t1}') assert l1 == l2 | Here's a third way to time: start = time() l3 = sorted(lst, key=lambda x: (''.join(sorted(x)) + "/" + x[::2])) t3 = time() - start and expanding the last line to assert l1 == l2 == l3 This uses a single string as the key, but combining the two string keys you view as as being the "primary" and "secondary" keys. Note that: >>> chr(ord("0") - 1) '/' That's why the two keys can be combined - they're separated by an ASCII character that compares "less than" any ASCII digit (of course this is wholly specific to the precise kind of keys you're using). This is typically a little faster than multisort() for me, using the precise program you posted. prepare=3.628943920135498 multisort=15.646344423294067 multikey=34.255955934524536 slowdown=2.1893903782103075 onekey=15.11461067199707 I believe the primary reason "why" is briefly explained at the end of a modern CPython distribution's Objects/listsort.txt: As noted above, even the simplest Python comparison triggers a large pile of C-level pointer dereferences, conditionals, and function calls. This can be partially mitigated by pre-scanning the data to determine whether the data is homogeneous with respect to type. If so, it is sometimes possible to substitute faster type-specific comparisons for the slower, generic PyObject_RichCompareBool. When there's a single string used as the key, this pre-sorting scan deduces that all the keys in the list are in fact strings, so all the runtime expense of figuring out which comparison function to call can be skipped: sorting can always call the string-specific comparison function instead of the all-purpose (and significantly more expensive) PyObject_RichCompareBool. multisort() also benefits from that optimization. But multikey() doesn't, much. The pre-sorting scan sees that all the keys are tuples, but the tuple comparison function itself can't assume anything about the types of the tuple's elements: it has to resort to PyObject_RichCompareBool every time it's invoked. (Note: as touched on in comments, it's not really quite that simple: some optimization is still done exploiting that the keys are all tuples, but it doesn't always pay off, and at best is less effective - and see the next section for clearer evidence of that.) Focus There's a whole lot going on in the test case, which leads to needing ever-greater effort to explain ever-smaller distinctions. So to look at the effects of the type homogeneity optimizations, let's simplify things a lot: no key function at all. 
Like so: from random import random, seed from time import time length = 10000000 seed(1234567891) xs = [random() for _ in range(length)] ys = xs[:] start = time() ys.sort() e1 = time() - start ys = [(x,) for x in xs] start = time() ys.sort() e2 = time() - start ys = [[x] for x in xs] start = time() ys.sort() e3 = time() - start print(e1, e2, e3) Here's typical output on my box: 3.1991195678710938 12.756590843200684 26.31903386116028 So it's by far fastest to sort floats directly. It's already very damaging to stick the floats in 1-tuples, but the optimization still gives highly significant benefit: it takes over twice as long again to stick the floats in singleton lists. In that last case (and only in that last case), PyObject_RichCompareBool is always called. | 8 | 5 |
69,518,429 | 2021-10-10 | https://stackoverflow.com/questions/69518429/opencv-videocapture-returns-strange-frame-offset-for-different-versions | I'm using opencv-python and when I execute the following code: index = 0 cap = cv2.VideoCapture(video_path) while True: offset = cap.get(cv2.CAP_PROP_POS_MSEC) print(cv2.__version__, index, offset) ok, frame = cap.read() if not ok: break index += 1 I get the following output: 3.4.7 0 0.0 3.4.7 1 33.36666666666667 3.4.7 2 66.73333333333333 3.4.7 3 100.10000000000001 3.4.7 4 133.46666666666667 If I execute this code on version 3.4.8.29, I get the following output: 3.4.8 0 0.0 3.4.8 1 0.0 3.4.8 2 33.36666666666667 3.4.8 3 66.73333333333333 3.4.8 4 100.10000000000001 And if I execute it on version 4.5.2.52 I get: 4.5.2 0 0.0 4.5.2 1 0.0 4.5.2 2 0.0 4.5.2 3 0.0 4.5.2 4 0.0 The question is first of all, which one is the correct one? It seems like 3.4.7 is correct, but it also seems to be changing randomly between versions. And also how can I modify the other versions to get the proper result, same as 3.4.7 | I read the OpenCV docs and they said: "Reading / writing properties involves many layers. Some unexpected result might happens along this chain. Effective behaviour depends from device hardware, driver and API Backend." (source: https://docs.opencv.org/3.4.15/d4/d15/group__videoio__flags__base.html#gaeb8dd9c89c10a5c63c139bf7c4f5704d) So in other words OpenCV does not guaranty consistent and reliable behavior of this function. Also I installed openCV 4.5.2.52 and applied your script to one of my '.mp4' videos. Then I got the same result as you had for openCV version 3.4.8.29. So I think the behaviour you experience is not a 'bug', but rather the unreliable behavior of this function. As work around you can compute the "offset" by dividing the frame number by the FPS count (see code below). Then you have more control over the behaviour and maybe more consisted results. index = 0 cap = cv2.VideoCapture(video_path) fps = cap.get(cv2.CAP_PROP_FPS) while True: offset = cap.get(cv2.CAP_PROP_POS_MSEC) ok, frame = cap.read() if not ok: break # CAP_PROP_POS_MSEC print("CAP_PROP_POS_MSEC: ", index, offset) # Devide fps by frame number offset = cap.get(cv2.CAP_PROP_POS_FRAMES) / fps * 1000 print("cv2.CAP_PROP_POS_FRAMES", index, offset) index += 1 | 9 | 5 |
69,525,290 | 2021-10-11 | https://stackoverflow.com/questions/69525290/python-function-to-find-the-numeric-volume-integral | Goal I would like to compute the 3D volume integral of a numeric scalar field. Code For this post, I will use an example of which the integral can be exactly computed. I have therefore chosen the following function: In Python, I define the function, and a set of points in 3D, and then generate the discrete values at these points: import numpy as np # Make data. def function(x, y, z): return x**y**z N = 5 grid = np.meshgrid( np.linspace(0, 1, N), np.linspace(0, 1, N), np.linspace(0, 1, N) ) points = np.vstack(list(map(np.ravel, grid))).T x = points[:, 0] y = points[:, 1] z = points[:, 2] values = [function(points[i, 0], points[i, 1], points[i, 2]) for i in range(len(points))] Question How can I find the integral, if I don't know the underlying function, i.e. if I only have the coordinates (x, y, z) and the values? | A nice way to go about this would be using scipy's tplquad integration. However, to use that, we need a function and not a cloud point. An easy way around that is to use an interpolator, to get a function approximating our cloud point - we can for example use scipy's RegularGridInterpolator if the data is on a regular grid: import numpy as np from scipy import integrate from scipy.interpolate import RegularGridInterpolator # Make data. def function(x,y,z): return x*y*z N = 5 xmin, xmax = 0, 1 ymin, ymax = 0, 1 zmin, zmax = 0, 1 x = np.linspace(xmin, xmax, N) y = np.linspace(ymin, ymax, N) z = np.linspace(zmin, zmax, N) values = function(*np.meshgrid(x,y,z, indexing='ij')) # Interpolate: function_interpolated = RegularGridInterpolator((x, y, z), values) # tplquad integrates func(z,y,x) f = lambda z,y,x : my_interpolating_function([z,y,x]) result, error = integrate.tplquad(f, xmin, xmax, lambda _: ymin, lambda _:ymax,lambda *_: zmin, lambda *_: zmax) In the example above, we get result = 0.12499999999999999 - close enough! | 15 | 9 |
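When the samples already lie on a regular grid, repeated trapezoidal integration avoids building an interpolator at all; a minimal sketch assuming the same x, y, z and values arrays as in the answer above (np.trapz is renamed np.trapezoid in NumPy 2.0):

import numpy as np

# Integrate out z, then y, then x; the exact value for x*y*z over the unit cube is 0.125.
result = np.trapz(np.trapz(np.trapz(values, z, axis=2), y, axis=1), x, axis=0)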
69,514,660 | 2021-10-10 | https://stackoverflow.com/questions/69514660/using-assignment-as-operator | Consider: course_db = Course(title='Databases') course_db.save() Coming from a C++ background, I would expect (course_db = Course(title='Databases')) to behave like it would in C++, that is, assign Course(title='Databases') to course_db and return the assigned object so that I can use it as part of a larger expression. For example, I would expect the following code to do the same thing as the code above: (course_db = Course(title='Databases')).save() This assumption got support from some quick Google searches using terms like "assignment operator in Python", e.g. this article. But when I tried this, I got a syntax error. Why can't I do this in Python, and what can I do instead? | You should do some more research about the differences between statements and expressions in Python. If you are using Python 3.8+, you can use the := operator: In [1]: class A: ...: def save(self): ...: return 1 ...: In [2]: (a := A()).save() Out[2]: 1 In [3]: a Out[3]: <__main__.A at 0x7f074e2ddaf0> | 6 | 6 |
69,546,268 | 2021-10-12 | https://stackoverflow.com/questions/69546268/pandas-group-cumsum-with-condition | I have the following df: df = pd.DataFrame({"values":[1,5,7,3,0,9,8,8,7,5,8,1,0,0,0,0,2,5],"signal":['L_exit',None,None,'R_entry','R_exit',None,'L_entry','L_exit',None,'R_entry','R_exit','R_entry','R_exit','L_entry','L_exit','L_entry','R_exit',None]}) df values signal 0 1 L_exit 1 5 None 2 7 None 3 3 R_entry 4 0 R_exit 5 9 None 6 8 L_entry 7 8 L_exit 8 7 None 9 5 R_entry 10 8 R_exit 11 1 R_entry 12 0 R_exit 13 0 L_entry 14 0 L_exit 15 0 L_entry 16 2 R_exit 17 5 None My goal is to add a tx column like this: values signal num 0 1 L_exit nan 1 5 None nan 2 7 None nan 3 3 R_entry 1.00 4 0 R_exit 1.00 5 9 None 1.00 6 8 L_entry 1.00 7 8 L_exit 1.00 8 7 None nan 9 5 R_entry 2.00 10 8 R_exit 2.00 11 1 R_entry 2.00 12 0 R_exit 2.00 13 0 L_entry 2.00 14 0 L_exit 2.00 15 0 L_entry nan 16 2 R_exit nan 17 5 None nan Business logic: when there's a signal of R_entry we group a tx until there's L_exit (if theres another R_entry - ignore it) visualizing What have I tried? g = ( df['signal'].eq('R_entry') | df_tx['signal'].eq('L_exit') ).cumsum() df['tx'] = g.where(df['signal'].eq('R_entry')).groupby(g).ffill() problem is that it increments every time it has 'R_entry' | You can first create a mask to get the contiguous R_entries up to reaching to L_exit. Then get the first R_entry per group (by comparing to the next value) and apply a cumsum. # keep only 'R_entry'/'L_exit' and get groups mask = df['signal'].where(df['signal'].isin(['R_entry', 'L_exit'])).ffill().eq('R_entry') # get groups and extend to next value (the L_exit) df['num'] = (mask.ne(mask.shift())&mask).cumsum().where(mask).ffill(limit=1) output: values signal num 0 1 L_exit NaN 1 5 None NaN 2 7 None NaN 3 3 R_entry 1.0 4 0 R_exit 1.0 5 9 None 1.0 6 8 L_entry 1.0 7 8 L_exit 1.0 8 7 None NaN 9 5 R_entry 2.0 10 8 R_exit 2.0 11 1 R_entry 2.0 12 0 R_exit 2.0 13 0 L_entry 2.0 14 0 L_exit 2.0 15 0 L_entry NaN 16 2 R_exit NaN 17 5 None NaN breaking down how it works Here are the intermediate steps: df['isin+ffill'] = df['signal'].where(df['signal'].isin(['R_entry', 'L_exit'])).ffill() df['mask'] = df['isin+ffill'].eq('R_entry') df['first_of_group'] = (mask.ne(mask.shift())&mask) df['cumsum'] = df['first_of_group'].cumsum().where(mask) df['num'] = df['cumsum'].ffill(limit=1) values signal isin+ffill mask first_of_group cumsum num 0 1 L_exit L_exit False False NaN NaN 1 5 None L_exit False False NaN NaN 2 7 None L_exit False False NaN NaN 3 3 R_entry R_entry True True 1.0 1.0 4 0 R_exit R_entry True False 1.0 1.0 5 9 None R_entry True False 1.0 1.0 6 8 L_entry R_entry True False 1.0 1.0 7 8 L_exit L_exit False False NaN 1.0 8 7 None L_exit False False NaN NaN 9 5 R_entry R_entry True True 2.0 2.0 10 8 R_exit R_entry True False 2.0 2.0 11 1 R_entry R_entry True False 2.0 2.0 12 0 R_exit R_entry True False 2.0 2.0 13 0 L_entry R_entry True False 2.0 2.0 14 0 L_exit L_exit False False NaN 2.0 15 0 L_entry L_exit False False NaN NaN 16 2 R_exit L_exit False False NaN NaN 17 5 None L_exit False False NaN NaN | 5 | 2 |
69,541,296 | 2021-10-12 | https://stackoverflow.com/questions/69541296/pd-read-csv-ignore-comma-if-it-is-inside-parenthesis | I have a very simple file: [Name] Streamline 1 [Data] X [ m ], Y [ m ], Z [ m ], Velocity [ m s^-1 ] 2.66747564e-01, 0.00000000e+00, 2.03140453e-01, (0.00000000e+00, 8.17744827e+00, 0.00000000e+00) 2.66958952e-01, 0.00000000e+00, 2.07407191e-01, (0.00000000e+00, 6.77392197e+00, 0.00000000e+00) 2.63460875e-01, 0.00000000e+00, 2.06593186e-01, (0.00000000e+00, 7.04168701e+00, 0.00000000e+00) 2.65424699e-01, 0.00000000e+00, 2.00831652e-01, (0.00000000e+00, 8.93691921e+00, 0.00000000e+00) 2.70607203e-01, 0.00000000e+00, 2.02286631e-01, (0.00000000e+00, 8.45830917e+00, 0.00000000e+00) 2.68299729e-01, 0.00000000e+00, 1.97365344e-01, (0.00000000e+00, 1.00771456e+01, 0.00000000e+00) ... I need to load the velocity as a vector, into a single row. My basic code: df = pd.read_csv("C:/Users/Marek/Downloads/0deg-5ms.csv", skiprows=5) But this attempt leads to 1st 2 cols becoming index and the rest splits into 4 columns. index_col=False can solve the issue with index, but leads to index out of range. I need a delimiter that implicitly tells pandas to ignore whatever is in brackets. I thought python ignore the separator withing brackets while reading a csv file might work but yes, I have spaces everywhere. I found some solutions that use extended functions to load files and handle them by lines, such as CSV file containing column with occasional comma in parentheses crashes pandas.read_csv and Load CSV with data surrounded by parentheses into a pandas dataframe . I however believe that this is a very easy scenario, as all lines are similar and can be solved by one-liner adding delimiter='some_regex'. I however cannot figure out, how this regex should look. It should look for delimiter , but not (.*,.*). I have tried with following, but this results in a single column: df = pd.read_csv("C:/Users/Marek/Downloads/0deg-5ms.csv", skiprows=5, delimiter=',^(\(.*,.*\))') EDIT: got to something like this - ,|(?:(\(.*,.*\))), but this adds an empty column after each comma. | After numerous attempts, I have found an answer how to create a very simple one-liner on this. Here it is if anyone is interested: df = pd.read_csv("C:/Users/Marek/Downloads/0deg-5ms.csv", skiprows=5, delimiter=',(?![^\(]*[\)])', engine="python") Delimiter checks for the comma in everything outside the brackets. Simple like a charm :) | 8 | 2 |
69,540,474 | 2021-10-12 | https://stackoverflow.com/questions/69540474/is-it-possible-to-make-the-imports-within-init-py-visible-for-python-help | Suppose I have a module: mymodule/example.py: def add_one(number): return number + 1 And mymodule/__init__.py: from .example import * foo = "FOO" def bar(): return 1 Now I see the function at the root of mymodule: >>> import mymodule >>> mymodule.add_one(3) 4 >>> mymodule.foo 'FOO' Also, I see imported add_one through dir along with example: >>> dir(mymodule) ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'add_one', 'bar', 'example', 'foo'] But when I type help(mymodule) I see only example, foo and bar, but not the imported add_one: Help on package mymodule: NAME mymodule PACKAGE CONTENTS example FUNCTIONS bar() DATA foo = 'FOO' But I can call add_one() as the root function of mymodule. Is it possible to see it in help as root function? | From the source code of help() (docmodule under pydoc.py) for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): if visiblename(key, all, object): funcs.append((key, value)) The important part is inspect.getmodule(value) is object), this is where values that are not directly part of the object itself are dropped You can add this line __all__ = ['bar', 'add_one', 'FOO'] to your __init__.py file, this way the help function will make sure to include these objects in help Keep in mind that if you do this anything you don't include in this list will not be included | 6 | 3 |
69,536,863 | 2021-10-12 | https://stackoverflow.com/questions/69536863/how-to-make-pydantic-raise-an-exception-right-away | I wrote a Pydantic model to validate API payload. The payload has 2 attributes emailId as list and role as str { "emailId": [], "role":"Administrator" } I need to perform two validation on attribute email - emailId must not be empty. emailId must not contain emails from x, y, z domains. Hence to accomplish this I wrote 2 validation methods for emailId as shown below - class PayloadValidator(BaseModel): emailId: List[str] role: str @validator("emailId") def is_email_list_empty(cls, email): if not email_id: raise ValueError("Email list is empty.") return email_id @validator("emailId") def valid_domains(cls, emailId): pass The problem here is that if the emailId list is empty then the validators does not raise ValueError right away. It waits for all the validation method to finish execution and this is causing some serious problems to me. Is there a way I can make it happen? | If you have checks, the failure of which should interrupt the further validation, then put them in the pre=True root validator. Because field validation will not occur if pre=True root validators raise an error. For example: class PayloadValidator(BaseModel): emailId: List[str] role: str @root_validator(pre=True) def root_validate(cls, values): if not values['emailId']: raise ValueError("Email list is empty.") return values @validator("emailId") def valid_domains(cls, emailId): return emailId | 5 | 5 |
69,535,331 | 2021-10-12 | https://stackoverflow.com/questions/69535331/how-to-compare-2-dataframes-in-python-unittest-using-assert-methods | I'm writing a unittest for a method that returns a dataframe, but, while testing the output using: self.asserEquals(mock_df, result) I'm getting ValueError: ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). Right now I'm comparing properties, which serves the purpose for now: self.assertEqual(mock_df.size, result.size) self.assertEqual(mock_df.col_a.to_list(), result.col_a.to_list()) self.assertEqual(mock_df.col_b.to_list(), result.col_b.to_list()) self.assertEqual(mock_df.col_c.to_list(), result.col_c.to_list()) but I'm curious how to assert dataframes properly. | import unittest import pandas as pd class TestDataFrame(unittest.TestCase): def test_dataframe(self): df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]}) df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]}) self.assertEqual(True, df1.equals(df2)) if __name__ == '__main__': unittest.main() | 5 | 6 |
69,520,829 | 2021-10-11 | https://stackoverflow.com/questions/69520829/openai-gym-attributeerror-module-contextlib-has-no-attribute-nullcontext | I'm running into this error when trying to run a command in a docker container on Google Compute Engine. Here's the stacktrace: Traceback (most recent call last): File "train.py", line 16, in <module> from stable_baselines.ppo1 import PPO1 File "/home/selfplay/.local/lib/python3.6/site-packages/stable_baselines/__init__.py", line 3, in <module> from stable_baselines.a2c import A2C File "/home/selfplay/.local/lib/python3.6/site-packages/stable_baselines/a2c/__init__.py", line 1, in <module> from stable_baselines.a2c.a2c import A2C File "/home/selfplay/.local/lib/python3.6/site-packages/stable_baselines/a2c/a2c.py", line 3, in <module> import gym File "/home/selfplay/.local/lib/python3.6/site-packages/gym/__init__.py", line 13, in <module> from gym.envs import make, spec, register File "/home/selfplay/.local/lib/python3.6/site-packages/gym/envs/__init__.py", line 10, in <module> _load_env_plugins() File "/home/selfplay/.local/lib/python3.6/site-packages/gym/envs/registration.py", line 269, in load_env_plugins context = contextlib.nullcontext() AttributeError: module 'contextlib' has no attribute 'nullcontext' | It seems like this is an issue with Python 3.6 and gym. Upgrading my container to Python 3.7 fixed the issue. | 9 | 9 |
69,531,196 | 2021-10-11 | https://stackoverflow.com/questions/69531196/how-to-add-a-newline-between-sequences-with-pyyaml | I've searched and I haven't found very much information on this. I'm writing a Python script to take a list of dictionaries and dump it to a yaml file. For example, I have code like the following: import yaml dict_1 = {'name' : 'name1', 'value' : 12, 'list' : [1, 2, 3], 'type' : 'doc' } dict_2 = {'name' : 'name2', 'value' : 100, 'list' : [1, 2, 3], 'type' : 'cpp' } file_info = [dict_1, dict_2] with open('test_file.yaml', 'w+') as f: yaml.dump(file_info, f) The output that I get is: - list: - 1 - 2 - 3 name: name1 type: doc value: 12 - list: - 1 - 2 - 3 name: name2 type: cpp value: 100 When what I really want is something like this: - list: - 1 - 2 - 3 name: name1 type: doc value: 12 ## Notice the line break here - list: - 1 - 2 - 3 name: name2 type: cpp value: 100 I've tried putting \n and the end of the dictionaries, using file_info.append('\n') between the dictionaries, using None as a final key in the dictionary, but nothing has worked so far. Any help is greatly appreciated! I'm using Pyyaml 5.4.1 with Python 3.9. | You can dump each object one at a time following each with a new line. with open('test_file.yaml', 'w+') as f: for yaml_obj in file_info: f.write(yaml.dump([yaml_obj])) f.write("\n") | 5 | 10 |
69,526,398 | 2021-10-11 | https://stackoverflow.com/questions/69526398/capture-pycharm-stop-signal-in-python | I want to try capturing PyCharm's stop signal (when stop is pressed) in a try block, but I cannot figure out what this signal is or how to capture it in code. JetBrains doesn't provide insight into this in their documentation. I've tried catching it as BaseException but it does not seem to be an exception at all. Is this programmatically capturable at all? | I wasn't able to replicate the other answers as the stop button being sent as a keyboard interrupt. I do believe it's possible for the stop button to be implemented differently on different versions of PyCharm and OS (I'm on Linux where a different answer seems to be Windows, but I'm not positive on many aspects here) It seems to me that a kill signal is being sent, and it doesn't seem like catching that as an exception works (for me). However, I was able to somewhat catch a kill signal by referencing this post that talks about catching kill signals in Python and killing gracefully. Below is the code I used. When I press the stop button I see Hello world, but I do NOT see foobar. Also, the debugger is NOT able to be caught for me by breakpoints in handler_stop_signals by doing this, but I do see the text. So I'm not sure if this is actually going to answer your question or not, based on your needs. Also note I would never actually write code like this (using globals), but it was the simplest answer I was able to come up with. import signal import time run = True def handler_stop_signals(signum, frame): global run print("Hello world") run = False signal.signal(signal.SIGINT, handler_stop_signals) signal.signal(signal.SIGTERM, handler_stop_signals) while run: try: time.sleep(20) # do stuff including other IO stuff except: BaseException as e: print('foobar') | 5 | 4 |
69,525,753 | 2021-10-11 | https://stackoverflow.com/questions/69525753/add-comma-sepated-values-inside-a-column | Hi I have a file format (TSV) as like this Name type Age Weight Height Xxx M 12,34,23 50,30,60,70 4,5,6,5.5 Yxx F 21,14,32 40,50,20,40 3,4,5,5.5 I would like to add all the values in Age, Weight and Height and add a column after this, then so some percentage also, like Total_Height/Total_Weight (awk '$0=$0"\t"(NR==1?"Percentage":$8/$7)'). I have large data set and it is not possible to do with excel. Like this Name type Age Weight Height Total_Age Total_Weight Total_Height Percentage Xxx M 12,34,23 50,30,60,70 4,5,6,5.5 69 210 20.5 0.097 Yxx F 21,14,32 40,50,20,40 3,4,5,5.5 67 150 17.5 0.11 | With your shown samples please try following code. awk ' FNR==1{ print $0,"Total_Age Total_Weight Total_Height Percentage" next } FNR>1{ totAge=totWeight=totHeight=0 split($3,tmp,",") for(i in tmp){ totAge+=tmp[i] } split($4,tmp,",") for(i in tmp){ totWeight+=tmp[i] } split($5,tmp,",") for(i in tmp){ totHeight+=tmp[i] } $(NF+1)=totAge $(NF+1)=totWeight $(NF+1)=totHeight $(NF+1)=$(NF-1)==0?"N/A":$NF/$(NF-1) } 1' Input_file | column -t OR adding a bit short version of above awk code: awk ' BEGIN{OFS="\t"} FNR==1{ print $0,"Total_Age Total_Weight Total_Height Percentage" next } FNR>1{ totAge=totWeight=totHeight=0 split($3,tmp,",") for(i in tmp){ totAge+=tmp[i] } split($4,tmp,",") for(i in tmp){ totWeight+=tmp[i] } split($5,tmp,",") for(i in tmp){ totHeight+=tmp[i] } $(NF+1)=totAge OFS totWeight OFS totHeight $0=$0 $(NF+1)=( $(NF-1)==0 ? "N/A" : $NF/$(NF-1) ) } 1' Input_file | column -t Explanation: Simple explanation would be, take sum of 3rd, 4th and 5th columns and assign them to last column of line. Accordingly add column value which has divide value of last and 2nd last columns as per OP's request. Using column -t to make it look better on output. | 8 | 7 |
69,527,239 | 2021-10-11 | https://stackoverflow.com/questions/69527239/what-is-context-variable-in-airflow-operators | I'm trying to understand what is this variable called context in Airflow operators. as example: def execute(self, **context**). Where it comes from? where can I set it? when and how can I use it inside my function? Another question is What is *context and **context? I saw few examples that uses this variable like this: def execute(self, *context) / def execute(self, **context). What is the difference and when should I use *context and **context | When Airflow runs a task, it collects several variables and passes these to the context argument on the execute() method. These variables hold information about the current task, you can find the list here: https://airflow.apache.org/docs/apache-airflow/stable/macros-ref.html#default-variables. Information from the context can be used in your task, for example to reference a folder yyyymmdd, where the date is fetched from the variable ds_nodash, a variable in the context: def do_stuff(**context): data_path = f"/path/to/data/{context['ds_nodash']}" # write file to data_path... PythonOperator(task_id="do_stuff", python_callable=do_stuff) *context and **context are different Python notations for accepting arguments in a function. Google for "args vs kwargs" to find more on this topic. Basically *context accepts non-keyword arguments, while **context takes keyword arguments: def print_context(*context_nokeywords, **context_keywords): print(f"Non keywords args: {context_nokeywords}") print(f"Keywords args: {context_keywords}") print_context("a", "b", "c", a="1", b="2", c="3") # Non keywords args: ('a', 'b', 'c') # Keywords args: {'a': '1', 'b': '2', 'c': '3'} | 17 | 16 |
69,480,199 | 2021-10-7 | https://stackoverflow.com/questions/69480199/pad-token-id-not-working-in-hugging-face-transformers | I want to download the GPT-2 model and tokeniser. For open-end generation, HuggingFace sets the padding token ID to be equal to the end-of-sentence token ID, so I configured it manually using : import tensorflow as tf from transformers import TFGPT2LMHeadModel, GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = TFGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id) However, it gives me the following error: TypeError: ('Keyword argument not understood:', 'pad_token_id') I haven't been able to find a solution for this nor do I understand why I am getting this error. Insights will be appreciated. | Your code does not throw any error for me - I would try re-installing the most recent version of transformers - if that is a viable solution for you. | 5 | 2 |
69,524,514 | 2021-10-11 | https://stackoverflow.com/questions/69524514/how-to-modify-the-kernel-density-estimate-line-in-a-sns-histplot | I am creating a histogram (frequency vs. count) and I want to add a kernel density estimate line in a different colour. How can I do this? I want to change the colour, for example sns.histplot(data=penguins, x="flipper_length_mm", kde=True) Example taken from https://seaborn.pydata.org/generated/seaborn.histplot.html | histplot's line_kws={...} is meant to change the appearance of the kde line. However, the current seaborn version doesn't allow changing the color that way, probably because the color goes together with the hue parameter (although hue isn't used in this case). import seaborn as sns penguins = sns.load_dataset('penguins') ax = sns.histplot(data=penguins, x="flipper_length_mm", kde=True, line_kws={'color': 'crimson', 'lw': 5, 'ls': ':'}) In seaborn's github, it is suggested to draw the histplot and the kdeplot separately. For both to match in the y-direction, it is necessary to use histplot with stat='density' (the kdeplot doesn't have a parameter to use histplot's default stat='count'). penguins = sns.load_dataset('penguins') ax = sns.histplot(data=penguins, x="flipper_length_mm", kde=False, stat='density') sns.kdeplot(data=penguins, x="flipper_length_mm", color='crimson', ax=ax) If the count statistic is really needed, an alternative is to change the line color via matplotlib: penguins = sns.load_dataset('penguins') ax = sns.histplot(data=penguins, x="flipper_length_mm", kde=True) ax.lines[0].set_color('crimson') | 7 | 24 |
69,519,755 | 2021-10-10 | https://stackoverflow.com/questions/69519755/what-is-the-difference-between-rounding-decimals-with-quantize-vs-the-built-in-r | When working with the built-in decimal module in Python I can round decimals as follows. Decimal(50.212345).quantize(Decimal('0.01')) > Decimal('50.21') But I can also round the same number with the built-in round function round(Decimal(50.212345), 2) > Decimal('50.21') Why would I use one instead of the other when rounding Decimals? In previous answers about rounding decimals, users suggested using quantize because the built-in round function would return a value of type float. Based on my testing, these both return a Decimal. Other than syntax, is there a reason to choose one over the other? | The return types aren't always the same. round() used with a single argument actually returns an int: >>> round(5.3) 5 >>> round(decimal.Decimal("5.3")) 5 Other than that, suit yourself. quantize() is especially handy if you want a decimal rounded to "the same" precision as another decimal you already have. >>> x = decimal.Decimal("123.456") >>> x*x Decimal('15241.383936') >>> (x*x).quantize(x) Decimal('15241.384') See? The code doing this doesn't have to know that x originally had 3 digits after the decimal point. Just passing x to quantize() forces the function to round back to the same precision as the original x, regardless of what that may be. quantize() is also necessary if you want to use a rounding mode other than the default nearest/even. >>> (x*x).quantize(x, decimal.ROUND_FLOOR) Decimal('15241.383') | 9 | 13 |
69,515,321 | 2021-10-10 | https://stackoverflow.com/questions/69515321/an-attempt-has-been-made-to-start-a-new-process-before-the-current-process-has-f | I try to run this code on python import multiprocessing manager = multiprocessing.Manager() final_list = manager.list() input_list_one = ['one', 'two', 'three', 'four', 'five'] input_list_two = ['six', 'seven', 'eight', 'nine', 'ten'] def worker(data): for item in data: final_list.append(item) if __name__ == '__main__': process1 = multiprocessing.Process(target=worker, args=[input_list_one]) process2 = multiprocessing.Process(target=worker, args=[input_list_two]) process1.start() process2.start() process1.join() process2.join() print(final_list) But this error is happened: RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. | The problems is with the statement: manager = multiprocessing.Manager() which does its "magic" by starting a server process. Therefore, this statement needs to be moved to within the if __name__ = '__main__': block along with the creation of the managed list, which now needs to be passed as an additional argument to your process function, worker. In fact, you might as well move all declarations at global scope that do not really need to be there within the if __name__ = '__main__': block for efficiency because they would otherwise be needlessly executed by each new process created. import multiprocessing def worker(final_list, data): for item in data: final_list.append(item) if __name__ == '__main__': manager = multiprocessing.Manager() final_list = manager.list() input_list_one = ['one', 'two', 'three', 'four', 'five'] input_list_two = ['six', 'seven', 'eight', 'nine', 'ten'] process1 = multiprocessing.Process(target=worker, args=[final_list, input_list_one]) process2 = multiprocessing.Process(target=worker, args=[final_list, input_list_two]) process1.start() process2.start() process1.join() process2.join() print(final_list) Prints: ['six', 'seven', 'eight', 'nine', 'one', 'ten', 'two', 'three', 'four', 'five'] Let me elaborate a bit on my answer: You are clearly running on a platform that uses the spawn method to create new processes. This means that to launch a new process a new, empty address space is created and a new instance of the Python interpreter is run against the source. Before the target of your Process instance is invoked, any statement in the source file that is at global scope will first be executed to initialize the process, except any statement within a if __name__ = '__main__': block (because the __name__ value will not be '__main__' for the new process). This is why you need to put code that creates new processes within such a block, i.e. to avoid getting into what would be a recursive loop creating new processes ad infinitum if this went undetected. In your case it did not go undetected and you got the error message you saw. But even if creating a Manager instance had not resulted in a new process getting created, your program would not have been correct. 
Having the statement final_list = manager.list() at global scope meant that all three processes in your running program would have been accessing three different instances of final_list. | 7 | 12 |
69,502,756 | 2021-10-9 | https://stackoverflow.com/questions/69502756/add-task-to-running-loop-and-run-until-complete | I have a function called from an async function without await, and my function needs to call async functions. I can do this with asyncio.get_running_loop().create_task(sleep()) but the run_until_complete at the top level doesn't run until the new task is complete. How do I get the event loop to run until the new task is complete? I can't make my function async because it's not called with await. I can't change future or sleep. I'm only in control of in_control. import asyncio def in_control(sleep): """ How do I get this to run until complete? """ return asyncio.get_running_loop().create_task(sleep()) async def future(): async def sleep(): await asyncio.sleep(10) print('ok') in_control(sleep) asyncio.get_event_loop().run_until_complete(future()) | It appears that the package nest_asyncio will help you out here. I've also included in the example fetching the return value of the task. import asyncio import nest_asyncio def in_control(sleep): print("In control") nest_asyncio.apply() loop = asyncio.get_running_loop() task = loop.create_task(sleep()) loop.run_until_complete(task) print(task.result()) return async def future(): async def sleep(): for timer in range(10): print(timer) await asyncio.sleep(1) print("Sleep finished") return "Sleep return" in_control(sleep) print("Out of control") asyncio.get_event_loop().run_until_complete(future()) Result: In control 0 1 2 3 4 5 6 7 8 9 Sleep finished Sleep return Out of control [Finished in 10.2s] | 5 | 3 |
69,495,398 | 2021-10-8 | https://stackoverflow.com/questions/69495398/how-to-fillna-in-pandas-dataframe-based-on-pattern-like-in-excel-dragging | I have dataframe which should be filled by understanding rows understanding like we do in excel. If its continious integer it fill by next number itself. Is there any function in python like this? import pandas as pd d = { 'year': [2019,2020,2019,2020,np.nan,np.nan], 'cat1': [1,2,3,4,np.nan,np.nan], 'cat2': ['c1','c1','c1','c2',np.nan,np.nan]} df = pd.DataFrame(data=d) df year cat1 cat2 0 2019.0 1.0 c1 1 2020.0 2.0 c1 2 2019.0 3.0 c1 3 2020.0 4.0 c2 4 NaN NaN NaN 5 NaN NaN NaN output required: year cat1 cat2 0 2019.0 1.0 c1 1 2020.0 2.0 c1 2 2019.0 3.0 c1 3 2020.0 4.0 c2 4 2019.0 5.0 c2 #here can be ignored if it can't understand the earlier pattern 5 2020.0 6.0 c2 #here can be ignored if it can't understand the earlier pattern I tried df.interpolate(method='krogh') #it fill 1,2,3,4,5,6 but incorrect others. | Here is my solution for the specific use case you mention - The code for these helper functions for categorical_repeat, continous_interpolate and other is provided below in EXPLANATION > Approach section. config = {'year':categorical_repeat, #shortest repeating sequence 'cat1':continous_interpolate, #curve fitting (linear) 'cat2':other} #forward fill print(df.agg(config)) year cat1 cat2 0 2019.0 1 c1 1 2020.0 2 c1 2 2019.0 3 c1 3 2020.0 4 c2 4 2019.0 5 c2 5 2020.0 6 c2 EXPLANATION: As I understand, there is no direct way of handling all types of patterns in pandas as excel does. Excel involves linear interpolation for continuous sequences, but it involves other methods for other column patterns. Continous integer array -> linear interpolation Repeated cycles -> Smallest repeating sequence Alphabet (and similar) -> Tiling fixed sequence until the length of df Unrecognizable pattern -> Forward fill Here is the dummy dataset that I attempt my approach on - data = {'A': [2019, 2020, 2019, 2020, 2019, 2020], 'B': [1, 2, 3, 4, 5, 6], 'C': [6, 5, 4, 3, 2, 1], 'D': ['C', 'D', 'E', 'F', 'G', 'H'], 'E': ['A', 'B', 'C', 'A', 'B', 'C'], 'F': [1,2,3,3,4,2] } df = pd.DataFrame(data) empty = pd.DataFrame(columns=df.columns, index=df.index)[:4] df_new = df.append(empty).reset_index(drop=True) print(df_new) A B C D E F 0 2019 1 6 C A 1 1 2020 2 5 D B 2 2 2019 3 4 E C 3 3 2020 4 3 F A 3 4 2019 5 2 G B 4 5 2020 6 1 H C 2 6 NaN NaN NaN NaN NaN NaN 7 NaN NaN NaN NaN NaN NaN 8 NaN NaN NaN NaN NaN NaN 9 NaN NaN NaN NaN NaN NaN Approach: Let's start with some helper functions - import numpy as np import scipy as sp import pandas as pd #Curve fitting (linear) def f(x, m, c): return m*x+c #Modify to extrapolate for exponential sequences etc. 
#Interpolate continous linear def continous_interpolate(s): clean = s.dropna() popt, pcov = sp.optimize.curve_fit(f, clean.index, clean) output = [round(i) for i in f(s.index, *popt)] #Remove the round() for float values return pd.Series(output) #Smallest Repeating sub-sequence def pattern(inputv): ''' https://stackoverflow.com/questions/6021274/finding-shortest-repeating-cycle-in-word ''' pattern_end =0 for j in range(pattern_end+1,len(inputv)): pattern_dex = j%(pattern_end+1) if(inputv[pattern_dex] != inputv[j]): pattern_end = j; continue if(j == len(inputv)-1): return inputv[0:pattern_end+1]; return inputv; #Categorical repeat imputation def categorical_repeat(s): clean = s.dropna() cycle = pattern(clean) repetitions = (len(s)//len(cycle))+1 output = np.tile(cycle, repetitions)[:len(s)] return pd.Series(output) #continous sequence of alphabets def alphabet(s): alp = 'abcdefghijklmnopqrstuvwxyz' alp2 = alp*((len(s)//len(alp))+1) start = s[0] idx = alp2.find(start.lower()) output = alp2[idx:idx+len(s)] if start.isupper(): output = output.upper() return pd.Series(list(output)) #If no pattern then just ffill def other(s): return s.ffill() Next, lets create a configuration based on what we want to solve and apply the methods required - config = {'A':categorical_repeat, 'B':continous_interpolate, 'C':continous_interpolate, 'D':alphabet, 'E':categorical_repeat, 'F':other} output_df = df_new.agg(config) print(output_df) A B C D E F 0 2019 1 6 C A 1 1 2020 2 5 D B 2 2 2019 3 4 E C 3 3 2020 4 3 F A 3 4 2019 5 2 G B 4 5 2020 6 1 H C 2 6 2019 7 0 I A 2 7 2020 8 -1 J B 2 8 2019 9 -2 K C 2 9 2020 10 -3 L A 2 | 9 | 5 |
69,516,166 | 2021-10-10 | https://stackoverflow.com/questions/69516166/how-to-shuffle-the-order-of-if-statements-in-a-function-in-python | I have a function in Python which has a series of if statements. def some_func(): if condition_1: return result_1 if condition_2: return result_2 ... #other if statements But I want the order of these if statements to change every time I call the function. For example, there can be a case when condition_1 and condition_2 are both true, but since condition_1 is checked first I will get result_1; I want some other random condition to be checked the next time I call the function (since I have only five if statements in the function, any one of the five conditions). So is there any way to do this, like storing the conditions in a list or something? Any help will be appreciated. | You can create a list of (condition, result) pairs and then shuffle it, like below: import random def some_func(x): lst_condition_result = [((x>1), True), ((x>2), False)] random.shuffle(lst_condition_result) for condition, result in lst_condition_result: if condition: return result Output: >>> some_func(20) True >>> some_func(20) False >>> some_func(20) False | 5 | 6 |
69,503,347 | 2021-10-9 | https://stackoverflow.com/questions/69503347/how-can-i-solve-this-arithmetic-puzzle-my-solution-is-too-slow-after-n-14 | Given numbers 1 to 3n, construct n equations of the form a + b = c or a x b = c such that each number is used exactly once. For example: n=1 => 1+2=3 n=2 => 1+4=5, 2x3=6 n=3 => 4+5=9, 1+7=8, 2x3=6 The question is, does a solution exist for every n? I tried writing a basic program and it becomes too slow after n = 14. Here are the solutions I have so far: 1 ['1+2=3'] 2 ['2*3=6', '1+4=5'] 3 ['4+5=9', '1+7=8', '2*3=6'] 4 ['3+6=9', '1+10=11', '4+8=12', '2+5=7'] 5 ['2+8=10', '3+6=9', '1+13=14', '5+7=12', '11+4=15'] 6 ['3*5=15', '2+8=10', '4+14=18', '6+11=17', '7+9=16', '1+12=13'] 7 ['6+12=18', '3*5=15', '7+10=17', '1+20=21', '4+9=13', '2+14=16', '8+11=19'] 8 ['8+14=22', '6+12=18', '7+10=17', '2+19=21', '1+15=16', '11+13=24', '4+5=9', '3+20=23'] 9 ['6+19=25', '8+14=22', '4+13=17', '2+18=20', '1+26=27', '3+7=10', '9+15=24', '5+16=21', '11+12=23'] 10 ['6+19=25', '14+15=29', '11+17=28', '4+26=30', '2+18=20', '1+21=22', '3*9=27', '8+16=24', '5+7=12', '10+13=23'] 11 ['10+23=33', '6+19=25', '14+15=29', '11+17=28', '4+26=30', '2+18=20', '5+27=32', '1+12=13', '9+22=31', '3*7=21', '16+8=24'] 12 ['10+23=33', '3+29=32', '6+19=25', '15+21=36', '11+17=28', '8+14=22', '4+16=20', '7+27=34', '2*12=24', '1+30=31', '5+13=18', '9+26=35'] 13 ['10+23=33', '3+29=32', '7+30=37', '6+19=25', '5+34=39', '15+21=36', '11+17=28', '18+20=38', '4+31=35', '1+26=27', '9+13=22', '8+16=24', '2+12=14'] 14 ['10+23=33', '4+37=41', '3+29=32', '9+25=34', '15+21=36', '11+17=28', '8+14=22', '6+24=30', '13+27=40', '5*7=35', '2+18=20', '1+38=39', '12+19=31', '16+26=42'] Here's the code for the program: import sys from itertools import combinations def main(n): r = set(range(1, n*3+1)) print(n, solve(n, r, [])) def solve(n, lst, solution): if not lst: if len(solution) != n: return False return solution for c in combinations(lst, 3): valid_solution = valid(c) if valid_solution: new_solution = solution + [valid_solution] result = solve(n, set(lst) - set(c), new_solution) if result: return result return False def valid(lst): a = lst[0] b = lst[1] c = lst[2] if a + b == c: return "%s+%s=%s" % (a, b, c) if a * b == c: return "%s*%s=%s" % (a, b, c) return False if __name__ == "__main__": n = int(sys.argv[1]) main(n) | This looks like a combinatorial problem where its "messiness" suggests no mathematical answer can be formulated. The cheer number of allowed combinations makes it ever more likely that each n has a valid solution, especially given that for small n you already found solutions. Finding solutions for larger n can be tricky. One approach is called "simulated annealing", where you would take a random set of equations, I try to improve it step by step. "Bad equations" (i.e. equations that have numbers overlapping with others in the set) get removed, and new equations are tried. Whatever approach is used, it probably can be speed up with heuristics. Such as start creating equations for numbers that show up in the least number of possible equations. A somewhat related problem is called "graceful labeling", which has no definite answer and for which similar algorithms are appropriate. Here is an approach using z3py, a library which implements a sat/smt solver. First, for each number, a list of possible expressions is made. For each possible expression, a symbolic boolean indicates whether the expression will be used in the solution. 
To have a valid solution, for each number, exactly one of its possible expressions should be True. from z3 import Int, And, Or, Not, Bool, Sum, Solver, sat def exactly_one(expr): return Or([And([(expr[j] == (i == j)) for j in range(len(expr))]) for i in range(len(expr))]) for N in range(1, 43): exp_for_i = [[] for i in range(3 * N + 1)] expressions = [] for i in range(1, 3 * N - 1): for j in range(i + 1, 3 * N + 1): if i + j <= 3 * N: expr = Bool(f'{i}+{j}={i + j}') expressions.append(expr) exp_for_i[i].append(expr) exp_for_i[j].append(expr) exp_for_i[i + j].append(expr) if i > 1 and i * j <= 3 * N: expr = Int(f'{i}*{j}={i * j}') expressions.append(expr) exp_for_i[i].append(expr) exp_for_i[j].append(expr) exp_for_i[i * j].append(expr) s = Solver() s.add([exactly_one(expr) for expr in exp_for_i[1:]]) if s.check() != sat: print(f'No solution for N={N}') else: m = s.model() print(f'N={N}:', [expr for expr in expressions if m.eval(expr) == True]) s.reset() N=1: [1+2=3] N=2: [1+4=5, 2*3=6] N=3: [1+7=8, 2*3=6, 4+5=9] N=4: [1+10=11, 2*4=8, 3+6=9, 5+7=12] N=5: [1+10=11, 2*7=14, 3*4=12, 5+8=13, 6+9=15] N=6: [1+13=14, 2+7=9, 3*5=15, 4+12=16, 6+11=17, 8+10=18] N=7: [1+17=18, 2+14=16, 3*5=15, 4+6=10, 7+13=20, 8+11=19, 9+12=21] N=8: [1+18=19, 2+3=5, 4+17=21, 6+14=20, 7+9=16, 8+15=23, 10+12=22, 11+13=24] N=9: [1+22=23, 2+19=21, 3+17=20, 4*6=24, 5+7=12, 8+10=18, 9+16=25, 11+15=26, 13+14=27] N=10: [1+13=14, 2+23=25, 3*8=24, 4+22=26, 5+16=21, 6+9=15, 7+20=27, 10+19=29, 11+17=28, 12+18=30] N=11: [1+23=24, 2*13=26, 3+25=28, 4+7=11, 5+22=27, 6+9=15, 8+21=29, 10+20=30, 12+19=31, 14+18=32, 16+17=33] N=12: [1+6=7, 2+9=11, 3+26=29, 4+24=28, 5+25=30, 8+23=31, 10+22=32, 12+21=33, 13+14=27, 15+20=35, 16+18=34, 17+19=36] N=13: [1+13=14, 2+8=10, 3+26=29, 4+28=32, 5+17=22, 6+25=31, 7+30=37, 9+24=33, 11+27=38, 12+23=35, 15+19=34, 16+20=36, 18+21=39] N=14: [1+31=32, 2+12=14, 3*9=27, 4+30=34, 5+6=11, 7+28=35, 8+29=37, 10+26=36, 13+25=38, 15+24=39, 16+17=33, 18+23=41, 19+21=40, 20+22=42] N=15: [1+5=6, 2+12=14, 3*11=33, 4+32=36, 7+31=38, 8+29=37, 9+30=39, 10+25=35, 13+28=41, 15+27=42, 16+18=34, 17+26=43, 19+21=40, 20+24=44, 22+23=45] N=16: [1+25=26, 2+34=36, 3+11=14, 4+35=39, 5+37=42, 6+32=38, 7+13=20, 8+33=41, 9+10=19, 12+31=43, 15+29=44, 16+24=40, 17+30=47, 18+28=46, 21+27=48, 22+23=45] N=17: [1+38=39, 2+25=27, 3+37=40, 4+13=17, 5+36=41, 6+9=15, 7+35=42, 8+11=19, 10+34=44, 12+33=45, 14+32=46, 16+31=47, 18+30=48, 20+29=49, 21+22=43, 23+28=51, 24+26=50] N=18: [1+44=45, 2*25=50, 3+51=54, 4+42=46, 5+14=19, 6+22=28, 7+24=31, 8+33=41, 9+39=48, 10+30=40, 11+27=38, 12+35=47, 13+23=36, 15+37=52, 16+18=34, 17+26=43, 20+29=49, 21+32=53] N=19: [1+18=19, 2+41=43, 3*14=42, 4+8=12, 5+40=45, 6+10=16, 7+39=46, 9+38=47, 11+37=48, 13+36=49, 15+35=50, 17+34=51, 20+33=53, 21+23=44, 22+32=54, 24+31=55, 25+27=52, 26+30=56, 28+29=57] N=20: [1+44=45, 2+15=17, 3+43=46, 4+10=14, 5+42=47, 6+18=24, 7+41=48, 8+12=20, 9+40=49, 11+39=50, 13+38=51, 16+37=53, 19+36=55, 21+35=56, 22+30=52, 23+34=57, 25+33=58, 26+28=54, 27+32=59, 29+31=60] N=21: [1+46=47, 2+17=19, 3+45=48, 4+7=11, 5+44=49, 6+15=21, 8+43=51, 9+13=22, 10+42=52, 12+41=53, 14+40=54, 16+39=55, 18+38=56, 20+37=57, 23+36=59, 24+26=50, 25+35=60, 27+34=61, 28+30=58, 29+33=62, 31+32=63] N=22: [1+49=50, 2+21=23, 3*17=51, 4+15=19, 5+47=52, 6*8=48, 7+46=53, 9+45=54, 10+14=24, 11+44=55, 12+26=38, 13+43=56, 16+42=58, 18+41=59, 20+40=60, 22+39=61, 25+37=62, 27+36=63, 28+29=57, 30+35=65, 31+33=64, 32+34=66] N=23: [1+40=41, 2*27=54, 3+13=16, 4+42=46, 5+62=67, 6+55=61, 7+45=52, 8+49=57, 9+59=68, 10+56=66, 
11+14=25, 12+48=60, 15+29=44, 17+34=51, 18+35=53, 19+39=58, 20+30=50, 21+22=43, 23+24=47, 26+38=64, 28+37=65, 31+32=63, 33+36=69] N=24: [1+18=19, 2+53=55, 3+51=54, 4*14=56, 5+52=57, 6+7=13, 8+21=29, 9+16=25, 10+49=59, 11+47=58, 12+48=60, 15+46=61, 17+45=62, 20+44=64, 22+43=65, 23+27=50, 24+42=66, 26+41=67, 28+40=68, 30+39=69, 31+32=63, 33+38=71, 34+36=70, 35+37=72] N=25: [1+57=58, 2*32=64, 3+60=63, 4+41=45, 5+37=42, 6+69=75, 7+46=53, 8+62=70, 9+13=22, 10+49=59, 11+29=40, 12+61=73, 14+25=39, 15+35=50, 16+52=68, 17+55=72, 18+48=66, 19+24=43, 20+47=67, 21+30=51, 23+33=56, 26+28=54, 27+44=71, 31+34=65, 36+38=74] N=26: [1+70=71, 2*17=34, 3+26=29, 4+44=48, 5+18=23, 6+57=63, 7+66=73, 8+37=45, 9+51=60, 10+68=78, 11+61=72, 12+52=64, 13+43=56, 14+40=54, 15+50=65, 16+33=49, 19+39=58, 20+55=75, 21+53=74, 22+25=47, 24+38=62, 27+42=69, 28+31=59, 30+46=76, 32+35=67, 36+41=77] N=27: [1+15=16, 2*30=60, 3+58=61, 4+18=22, 5+57=62, 6+20=26, 7*9=63, 8+56=64, 10+55=65, 11+13=24, 12+54=66, 14+53=67, 17+52=69, 19+51=70, 21+50=71, 23+49=72, 25+48=73, 27+47=74, 28+31=59, 29+46=75, 32+45=77, 33+35=68, 34+44=78, 36+43=79, 37+39=76, 38+42=80, 40+41=81] N=28: [1+61=62, 2+20=22, 3+68=71, 4+49=53, 5+51=56, 6+35=41, 7+70=77, 8+21=29, 9+57=66, 10+13=23, 11+54=65, 12+64=76, 14+67=81, 15+58=73, 16+34=50, 17+46=63, 18+24=42, 19+60=79, 25+55=80, 26+33=59, 27+48=75, 28+44=72, 30+52=82, 31+47=78, 32+37=69, 36+38=74, 39+45=84, 40+43=83] N=29: [1+64=65, 2+61=63, 3+26=29, 4+27=31, 5+17=22, 6+74=80, 7+48=55, 8+62=70, 9+11=20, 10+43=53, 12+69=81, 13+54=67, 14+57=71, 15+21=36, 16+52=68, 18+60=78, 19+58=77, 23+56=79, 24+49=73, 25+59=84, 28+44=72, 30+45=75, 32+34=66, 33+50=83, 35+51=86, 37+39=76, 38+47=85, 40+42=82, 41+46=87] N=30: [1+64=65, 2+28=30, 3*22=66, 4+63=67, 5+15=20, 6+26=32, 7+62=69, 8+9=17, 10+61=71, 11+13=24, 12+60=72, 14+59=73, 16+58=74, 18+70=88, 19+57=76, 21+56=77, 23+55=78, 25+54=79, 27+53=80, 29+52=81, 31+51=82, 33+35=68, 34+50=84, 36+49=85, 37+38=75, 39+48=87, 40+43=83, 41+45=86, 42+47=89, 44+46=90] N=31: [1+69=70, 2+83=85, 3+20=23, 4+75=79, 5+68=73, 6*7=42, 8+82=90, 9+41=50, 10+43=53, 11+46=57, 12+51=63, 13+39=52, 14+66=80, 15+25=40, 16+76=92, 17+64=81, 18+56=74, 19+59=78, 21+37=58, 22+71=93, 24+48=72, 26+35=61, 27+62=89, 28+60=88, 29+36=65, 30+54=84, 31+55=86, 32+45=77, 33+34=67, 38+49=87, 44+47=91] N=32: [1+75=76, 2*36=72, 3+74=77, 4+17=21, 5+24=29, 6+67=73, 7+19=26, 8+58=66, 9+53=62, 10+85=95, 11+60=71, 12+68=80, 13+20=33, 14+79=93, 15+39=54, 16+41=57, 18+65=83, 22+48=70, 23+61=84, 25+56=81, 27+63=90, 28+64=92, 30+59=89, 31+47=78, 32+55=87, 34+35=69, 37+49=86, 38+50=88, 40+42=82, 43+51=94, 44+52=96, 45+46=91] N=33: [1+27=28, 2+73=75, 3+22=25, 4+72=76, 5+11=16, 6+71=77, 7+54=61, 8+70=78, 9+23=32, 10+69=79, 12+18=30, 13+68=81, 14+20=34, 15+67=82, 17+66=83, 19+65=84, 21+64=85, 24+63=87, 26+62=88, 29+60=89, 31+59=90, 33+58=91, 35+57=92, 36+38=74, 37+56=93, 39+41=80, 40+55=95, 42+44=86, 43+53=96, 45+52=97, 46+48=94, 47+51=98, 49+50=99] N=34: [1+74=75, 2+19=21, 3+73=76, 4+28=32, 5+18=23, 6*13=78, 7+70=77, 8+71=79, 9+72=81, 10+24=34, 11+69=80, 12+26=38, 14+16=30, 15+68=83, 17+67=84, 20+66=86, 22+65=87, 25+64=89, 27+63=90, 29+62=91, 31+61=92, 33+60=93, 35+59=94, 36+49=85, 37+58=95, 39+57=96, 40+42=82, 41+56=97, 43+45=88, 44+55=99, 46+54=100, 47+51=98, 48+53=101, 50+52=102] N=35: [1+62=63, 2+76=78, 3*25=75, 4*24=96, 5+94=99, 6+59=65, 7+74=81, 8+82=90, 9+38=47, 10+57=67, 11+73=84, 12+31=43, 13+21=34, 14+86=100, 15+49=64, 16+85=101, 17+60=77, 18+80=98, 19+23=42, 20+71=91, 22+50=72, 26+79=105, 27+61=88, 28+41=69, 29+68=97, 
30+36=66, 32+55=87, 33+70=103, 35+58=93, 37+46=83, 39+56=95, 40+52=92, 44+45=89, 48+54=102, 51+53=104] N=36: [1+81=82, 2+24=26, 3+76=79, 4+15=19, 5+78=83, 6+31=37, 7+77=84, 8+35=43, 9+21=30, 10+75=85, 11+17=28, 12+74=86, 13+23=36, 14+73=87, 16+72=88, 18+71=89, 20+70=90, 22+69=91, 25+68=93, 27+67=94, 29+66=95, 32+65=97, 33+63=96, 34+64=98, 38+62=100, 39+41=80, 40+61=101, 42+60=102, 44+59=103, 45+47=92, 46+58=104, 48+57=105, 49+50=99, 51+56=107, 52+54=106, 53+55=108] N=37: [1+76=77, 2+65=67, 3+37=40, 4+23=27, 5+55=60, 6+103=109, 7+87=94, 8+80=88, 9+66=75, 10+31=41, 11+47=58, 12+84=96, 13+59=72, 14+91=105, 15+93=108, 16+79=95, 17+73=90, 18+30=48, 19+78=97, 20+62=82, 21+71=92, 22+89=111, 24+61=85, 25+39=64, 26+42=68, 28+35=63, 29+81=110, 32+74=106, 33+69=102, 34+70=104, 36+50=86, 38+45=83, 43+57=100, 44+54=98, 46+53=99, 49+52=101, 51+56=107] N=38: [1+83=84, 2+32=34, 3*9=27, 4+82=86, 5+15=20, 6+81=87, 7+23=30, 8+80=88, 10+79=89, 11+25=36, 12+78=90, 13+16=29, 14+77=91, 17+76=93, 18+22=40, 19+75=94, 21+74=95, 24+73=97, 26+72=98, 28+71=99, 31+70=101, 33+69=102, 35+68=103, 37+67=104, 38+58=96, 39+66=105, 41+65=106, 42+43=85, 44+64=108, 45+47=92, 46+63=109, 48+62=110, 49+51=100, 50+61=111, 52+60=112, 53+54=107, 55+59=114, 56+57=113] N=39: [1+86=87, 2*19=38, 3+85=88, 4*10=40, 5+84=89, 6+30=36, 7+83=90, 8+34=42, 9+82=91, 11+81=92, 12+14=26, 13+80=93, 15+17=32, 16+79=95, 18+78=96, 20+77=97, 21+23=44, 22+76=98, 24+75=99, 25+28=53, 27+74=101, 29+73=102, 31+72=103, 33+71=104, 35+70=105, 37+69=106, 39+68=107, 41+67=108, 43+66=109, 45+65=110, 46+48=94, 47+64=111, 49+51=100, 50+63=113, 52+62=114, 54+61=115, 55+57=112, 56+60=116, 58+59=117] N=40: [1+96=97, 2+85=87, 3+60=63, 4+69=73, 5+104=109, 6+76=82, 7+71=78, 8+53=61, 9+58=67, 10+98=108, 11+106=117, 12+14=26, 13+28=41, 15+90=105, 16+86=102, 17+33=50, 18+81=99, 19+29=48, 20+42=62, 21+59=80, 22+91=113, 23+77=100, 24+94=118, 25+95=120, 27+74=101, 30+49=79, 31+83=114, 32+84=116, 34+54=88, 35+40=75, 36+56=92, 37+52=89, 38+65=103, 39+72=111, 43+64=107, 44+66=110, 45+70=115, 46+47=93, 51+68=119, 55+57=112] N=41: [1+110=111, 2+13=15, 3+51=54, 4+77=81, 5+84=89, 6+97=103, 7+99=106, 8+85=93, 9+11=20, 10+34=44, 12+75=87, 14+43=57, 16+79=95, 17+88=105, 18+98=116, 19+90=109, 21+96=117, 22+64=86, 23+37=60, 24+70=94, 25+42=67, 26+39=65, 27+74=101, 28+55=83, 29+78=107, 30+92=122, 31+35=66, 32+59=91, 33+82=115, 36+72=108, 38+76=114, 40+80=120, 41+63=104, 45+68=113, 46+56=102, 47+53=100, 48+73=121, 49+69=118, 50+62=112, 52+71=123, 58+61=119] N=42: [1+88=89, 2+94=96, 3*42=126, 4+47=51, 5+109=114, 6+69=75, 7+52=59, 8+33=41, 9+108=117, 10+60=70, 11+84=95, 12+111=123, 13+48=61, 14+85=99, 15+43=58, 16+81=97, 17+38=55, 18+80=98, 19+34=53, 20+105=125, 21+103=124, 22+68=90, 23+87=110, 24+83=107, 25+67=92, 26+56=82, 27+91=118, 28+78=106, 29+86=115, 30+71=101, 31+73=104, 32+40=72, 35+65=100, 36+66=102, 37+79=116, 39+54=93, 44+77=121, 45+74=119, 46+76=122, 49+64=113, 50+62=112, 57+63=120] N=43: [1+79=80, 2+109=111, 3*14=42, 4+50=54, 5+112=117, 6+89=95, 7+90=97, 8+21=29, 9+74=83, 10+88=98, 11+32=43, 12+114=126, 13+110=123, 15+103=118, 16+68=84, 17+35=52, 18+59=77, 19+86=105, 20+82=102, 22+34=56, 23+58=81, 24+101=125, 25+60=85, 26+96=122, 27+100=127, 28+93=121, 30+61=91, 31+63=94, 33+66=99, 36+70=106, 37+76=113, 38+49=87, 39+65=104, 40+67=107, 41+78=119, 44+48=92, 45+71=116, 46+62=108, 47+73=120, 51+64=115, 53+75=128, 55+69=124, 57+72=129] N=44: [1+28=29, 2+83=85, 3+86=89, 4+117=121, 5+96=101, 6+42=48, 7+70=77, 8+118=126, 9+30=39, 10+100=110, 11+120=131, 12+80=92, 13+82=95, 14+19=33, 
15+51=66, 16+90=106, 17+99=116, 18+53=71, 20+107=127, 21+36=57, 22+81=103, 23+109=132, 24+91=115, 25+54=79, 26+97=123, 27+67=94, 31+88=119, 32+72=104, 34+78=112, 35+63=98, 37+74=111, 38+84=122, 40+47=87, 41+73=114, 43+50=93, 44+58=102, 45+60=105, 46+62=108, 49+76=125, 52+61=113, 55+75=130, 56+68=124, 59+69=128, 64+65=129] N=45: [1+41=42, 2+98=100, 3+92=95, 4+97=101, 5+48=53, 6+122=128, 7+107=114, 8+62=70, 9+102=111, 10+108=118, 11+73=84, 12+21=33, 13+78=91, 14+112=126, 15+65=80, 16+45=61, 17+49=66, 18+29=47, 19+74=93, 20+55=75, 22+68=90, 23+56=79, 24+105=129, 25+110=135, 26+99=125, 27+86=113, 28+39=67, 30+85=115, 31+96=127, 32+89=121, 34+72=106, 35+88=123, 36+94=130, 37+87=124, 38+82=120, 40+69=109, 43+76=119, 44+59=103, 46+58=104, 50+83=133, 51+81=132, 52+64=116, 54+77=131, 57+60=117, 63+71=134] N=46: [1+100=101, 2+128=130, 3*34=102, 4+121=125, 5+24=29, 6+30=36, 7+40=47, 8+58=66, 9+32=41, 10+124=134, 11+67=78, 12+115=127, 13+83=96, 14+91=105, 15+57=72, 16+73=89, 17+87=104, 18+81=99, 19+60=79, 20+97=117, 21+95=116, 22+62=84, 23+53=76, 25+93=118, 26+103=129, 27+108=135, 28+85=113, 31+88=119, 33+90=123, 35+71=106, 37+74=111, 38+69=107, 39+98=137, 42+80=122, 43+51=94, 44+92=136, 45+75=120, 46+68=114, 48+64=112, 49+82=131, 50+59=109, 52+86=138, 54+56=110, 55+77=132, 61+65=126, 63+70=133] N=47: [1+102=103, 2+41=43, 3+45=48, 4*26=104, 5+32=37, 6+101=107, 7*15=105, 8+100=108, 9+24=33, 10+99=109, 11+35=46, 12+98=110, 13+17=30, 14+97=111, 16+96=112, 18+95=113, 19+20=39, 21+94=115, 22+28=50, 23+93=116, 25+92=117, 27+91=118, 29+90=119, 31+89=120, 34+88=122, 36+87=123, 38+86=124, 40+85=125, 42+84=126, 44+83=127, 47+82=129, 49+81=130, 51+80=131, 52+54=106, 53+79=132, 55+78=133, 56+58=114, 57+77=134, 59+76=135, 60+61=121, 62+75=137, 63+65=128, 64+74=138, 66+73=139, 67+69=136, 68+72=140, 70+71=141] N=48: [1+126=127, 2*26=52, 3+105=108, 4+103=107, 5+101=106, 6+31=37, 7+102=109, 8+33=41, 9+39=48, 10+100=110, 11+35=46, 12+99=111, 13+16=29, 14+28=42, 15+98=113, 17+97=114, 18+22=40, 19+96=115, 20+24=44, 21+95=116, 23+94=117, 25+93=118, 27+92=119, 30+91=121, 32+90=122, 34+89=123, 36+88=124, 38+87=125, 43+86=129, 45+85=130, 47+84=131, 49+83=132, 50+54=104, 51+82=133, 53+81=134, 55+57=112, 56+80=136, 58+79=137, 59+61=120, 60+78=138, 62+77=139, 63+65=128, 64+76=140, 66+75=141, 67+68=135, 69+74=143, 70+72=142, 71+73=144] N=49: [1+136=137, 2+34=36, 3+135=138, 4+126=130, 5+61=66, 6+32=38, 7+81=88, 8+109=117, 9+105=114, 10+58=68, 11+120=131, 12+80=92, 13+42=55, 14+97=111, 15+71=86, 16+102=118, 17+46=63, 18+101=119, 19+122=141, 20+112=132, 21+48=69, 22+94=116, 23+29=52, 24+91=115, 25+75=100, 26+98=124, 27+106=133, 28+79=107, 30+99=129, 31+96=127, 33+70=103, 35+37=72, 39+89=128, 40+73=113, 41+82=123, 43+65=108, 44+60=104, 45+95=140, 47+74=121, 49+76=125, 50+84=134, 51+59=110, 53+93=146, 54+90=144, 56+87=143, 57+85=142, 62+77=139, 64+83=147, 67+78=145] N=50: [1+7=8, 2+40=42, 3*37=111, 4+109=113, 5+39=44, 6+108=114, 9+106=115, 10+35=45, 11+105=116, 12+15=27, 13+104=117, 14+33=47, 16+103=119, 17+95=112, 18+102=120, 19+32=51, 20+29=49, 21+101=122, 22+99=121, 23+100=123, 24+107=131, 25+30=55, 26+98=124, 28+97=125, 31+96=127, 34+94=128, 36+93=129, 38+92=130, 41+91=132, 43+90=133, 46+89=135, 48+88=136, 50+87=137, 52+86=138, 53+57=110, 54+85=139, 56+84=140, 58+60=118, 59+83=142, 61+82=143, 62+64=126, 63+81=144, 65+80=145, 66+68=134, 67+79=146, 69+78=147, 70+71=141, 72+77=149, 73+75=148, 74+76=150] N=51: [1+51=52, 2+111=113, 3+38=41, 4+45=49, 5+110=115, 6*19=114, 7+109=116, 8*14=112, 9+108=117, 10+24=34, 11+32=43, 12+107=119, 
13+26=39, 15+105=120, 16+31=47, 17+104=121, 18+36=54, 20+103=123, 21+101=122, 22+102=124, 23+106=129, 25+100=125, 27+29=56, 28+99=127, 30+98=128, 33+97=130, 35+96=131, 37+95=132, 40+94=134, 42+93=135, 44+92=136, 46+91=137, 48+90=138, 50+89=139, 53+88=141, 55+87=142, 57+86=143, 58+60=118, 59+85=144, 61+84=145, 62+64=126, 63+83=146, 65+82=147, 66+67=133, 68+81=149, 69+71=140, 70+80=150, 72+79=151, 73+75=148, 74+78=152, 76+77=153] N=52: [1+115=116, 2+17=19, 3+114=117, 4+50=54, 5+113=118, 6+37=43, 7+38=45, 8+111=119, 9+112=121, 10+13=23, 11+109=120, 12+110=122, 14+33=47, 15+108=123, 16+32=48, 18+107=125, 20+106=126, 21+35=56, 22+105=127, 24+104=128, 25+27=52, 26+103=129, 28+102=130, 29+31=60, 30+101=131, 34+99=133, 36+98=134, 39+97=136, 40+95=135, 41+96=137, 42+58=100, 44+94=138, 46+93=139, 49+92=141, 51+91=142, 53+90=143, 55+89=144, 57+88=145, 59+87=146, 61+63=124, 62+86=148, 64+85=149, 65+67=132, 66+84=150, 68+83=151, 69+71=140, 70+82=152, 72+81=153, 73+74=147, 75+80=155, 76+78=154, 77+79=156] N=53: [1+114=115, 2+30=32, 3+113=116, 4+38=42, 5+51=56, 6+43=49, 7*17=119, 8+112=120, 9*13=117, 10+111=121, 11+34=45, 12+110=122, 14+109=123, 15+25=40, 16+108=124, 18+107=125, 19+36=55, 20+27=47, 21+106=127, 22+129=151, 23+105=128, 24+29=53, 26+104=130, 28+103=131, 31+102=133, 33+101=134, 35+100=135, 37+99=136, 39+98=137, 41+97=138, 44+96=140, 46+95=141, 48+94=142, 50+93=143, 52+92=144, 54+91=145, 57+90=147, 58+60=118, 59+89=148, 61+88=149, 62+64=126, 63+87=150, 65+67=132, 66+86=152, 68+85=153, 69+70=139, 71+84=155, 72+74=146, 73+83=156, 75+82=157, 76+78=154, 77+81=158, 79+80=159] N=54: [1+22=23, 2+57=59, 3*42=126, 4+91=95, 5+136=141, 6+52=58, 7+37=44, 8+104=112, 9+153=162, 10+46=56, 11+113=124, 12+115=127, 13+84=97, 14+118=132, 15+119=134, 16+133=149, 17+63=80, 18+96=114, 19+139=158, 20+31=51, 21+62=83, 24+74=98, 25+121=146, 26+116=142, 27+120=147, 28+81=109, 29+99=128, 30+105=135, 32+129=161, 33+107=140, 34+88=122, 35+108=143, 36+87=123, 38+100=138, 39+72=111, 40+53=93, 41+69=110, 43+102=145, 45+92=137, 47+70=117, 48+103=151, 49+101=150, 50+106=156, 54+90=144, 55+76=131, 60+65=125, 61+94=155, 64+66=130, 67+85=152, 68+89=157, 71+77=148, 73+86=159, 75+79=154, 78+82=160] N=55: [1+31=32, 2*76=152, 3+138=141, 4+62=66, 5*21=105, 6+159=165, 7+58=65, 8+117=125, 9+144=153, 10+147=157, 11+73=84, 12+83=95, 13+118=131, 14+55=69, 15+124=139, 16+146=162, 17+61=78, 18+103=121, 19+70=89, 20+116=136, 22+107=129, 23+92=115, 24+110=134, 25+112=137, 26+94=120, 27+122=149, 28+130=158, 29+111=140, 30+96=126, 33+75=108, 34+127=161, 35+74=109, 36+114=150, 37+91=128, 38+41=79, 39+54=93, 40+102=142, 42+106=148, 43+80=123, 44+46=90, 45+88=133, 47+98=145, 48+49=97, 50+104=154, 51+100=151, 52+67=119, 53+60=113, 56+99=155, 57+86=143, 59+101=160, 63+72=135, 64+68=132, 71+85=156, 77+87=164, 81+82=163] N=56: [1+53=54, 2+122=124, 3*41=123, 4+121=125, 5+51=56, 6*21=126, 7+120=127, 8+28=36, 9+119=128, 10+39=49, 11+118=129, 12+26=38, 13+45=58, 14+117=131, 15+19=34, 16+116=132, 17+43=60, 18+115=133, 20+114=134, 22+113=135, 23+24=47, 25+112=137, 27+111=138, 29+110=139, 30+32=62, 31+109=140, 33+108=141, 35+107=142, 37+106=143, 40+105=145, 42+104=146, 44+103=147, 46+102=148, 48+101=149, 50+100=150, 52+99=151, 55+98=153, 57+97=154, 59+96=155, 61+95=156, 63+94=157, 64+66=130, 65+93=158, 67+69=136, 68+92=160, 70+91=161, 71+73=144, 72+90=162, 74+89=163, 75+77=152, 76+88=164, 78+87=165, 79+80=159, 81+86=167, 82+84=166, 83+85=168] N=57: [1+36=37, 2+149=151, 3+56=59, 4+166=170, 5+80=85, 6+135=141, 7+114=121, 8+90=98, 9+128=137, 10+113=123, 
11+143=154, 12+148=160, 13+47=60, 14+52=66, 15+150=165, 16+100=116, 17+34=51, 18+146=164, 19+140=159, 20+53=73, 21+115=136, 22+110=132, 23+102=125, 24+88=112, 25+78=103, 26+79=105, 27+74=101, 28+134=162, 29+129=158, 30+89=119, 31+107=138, 32+139=171, 33+75=108, 35+109=144, 38+104=142, 39+57=96, 40+117=157, 41+106=147, 42+76=118, 43+87=130, 44+83=127, 45+124=169, 46+99=145, 48+63=111, 49+84=133, 50+70=120, 54+77=131, 55+67=122, 58+94=152, 61+65=126, 62+91=153, 64+92=156, 68+93=161, 69+86=155, 71+97=168, 72+95=167, 81+82=163] N=58: [1+127=128, 2+16=18, 3+126=129, 4+50=54, 5*26=130, 6+125=131, 7+52=59, 8+124=132, 9+33=42, 10+123=133, 11+35=46, 12+122=134, 13+31=44, 14+121=135, 15+48=63, 17+120=137, 19+119=138, 20+37=57, 21+118=139, 22+39=61, 23+117=140, 24+41=65, 25+116=141, 27+29=56, 28+115=143, 30+114=144, 32+113=145, 34+112=146, 36+111=147, 38+110=148, 40+109=149, 43+108=151, 45+107=152, 47+106=153, 49+105=154, 51+104=155, 53+103=156, 55+102=157, 58+101=159, 60+100=160, 62+99=161, 64+98=162, 66+97=163, 67+69=136, 68+96=164, 70+72=142, 71+95=166, 73+94=167, 74+76=150, 75+93=168, 77+92=169, 78+80=158, 79+91=170, 81+90=171, 82+83=165, 84+89=173, 85+87=172, 86+88=174] N=59: [1+129=130, 2+47=49, 3+128=131, 4+54=58, 5+40=45, 6*22=132, 7+127=134, 8+20=28, 9*15=135, 10+126=136, 11+41=52, 12+125=137, 13+38=51, 14+124=138, 16+123=139, 17+43=60, 18+122=140, 19+72=91, 21+121=142, 23+120=143, 24+32=56, 25+119=144, 26+36=62, 27+118=145, 29+117=146, 30+34=64, 31+116=147, 33+115=148, 35+114=149, 37+113=150, 39+112=151, 42+111=153, 44+110=154, 46+109=155, 48+108=156, 50+107=157, 53+106=159, 55+105=160, 57+104=161, 59+103=162, 61+102=163, 63+101=164, 65+100=165, 66+67=133, 68+73=141, 69+99=168, 70+97=167, 71+98=169, 74+96=170, 75+77=152, 76+95=171, 78+80=158, 79+94=173, 81+93=174, 82+84=166, 83+92=175, 85+87=172, 86+90=176, 88+89=177] N=60: [1+125=126, 2*71=142, 3+148=151, 4+74=78, 5+168=173, 6+91=97, 7*20=140, 8+81=89, 9+158=167, 10+47=57, 11+128=139, 12+164=176, 13+124=137, 14+18=32, 15+75=90, 16+66=82, 17+45=62, 19+93=112, 21+38=59, 22+100=122, 23+130=153, 24+121=145, 25+102=127, 26+88=114, 27+143=170, 28+95=123, 29+109=138, 30+136=166, 31+70=101, 33+129=162, 34+118=152, 35+106=141, 36+98=134, 37+132=169, 39+41=80, 40+77=117, 42+104=146, 43+135=178, 44+133=177, 46+108=154, 48+113=161, 49+61=110, 50+99=149, 51+120=171, 52+79=131, 53+119=172, 54+111=165, 55+92=147, 56+103=159, 58+86=144, 60+115=175, 63+94=157, 64+116=180, 65+85=150, 67+96=163, 68+87=155, 69+105=174, 72+107=179, 73+83=156, 76+84=160] | 7 | 2 |
69,507,122 | 2021-10-9 | https://stackoverflow.com/questions/69507122/fastapi-custom-response-model | I have a router that fetches all data from the database. Here is my code: @router.get('/articles/', response_model=List[articles_schema.Articles]) async def main_endpoint(): query = articles_model.articles.select().where(articles_model.articles.c.status == 2) return await db.database.fetch_all(query) The response is an array that contains JSON objects like this [ { "title": "example1", "content": "example_content1" }, { "title": "example2", "content": "example_content2" }, ] But I want to make the response like this: { "items": [ { "title": "example1", "content": "example_content1" }, { "title": "example2", "content": "example_content2" }, ] } How can I achieve that? Please help. Thank you in advance | You could simply define another model containing the items list as a field: from pydantic import BaseModel from typing import List class ResponseModel(BaseModel): items: List[articles_schema.Articles] and use it in the response: @router.get('/articles/', response_model=ResponseModel) async def main_endpoint(): query = articles_model.articles.select().where( articles_model.articles.c.status == 2 ) return ResponseModel( items=await db.database.fetch_all(query), ) | 5 | 7 |
69,507,208 | 2021-10-9 | https://stackoverflow.com/questions/69507208/find-out-how-similar-a-set-is-compared-to-all-other-sets-in-a-collection-of-sets | I'm trying to calculate how similar a set is compared to all other sets in a collection by counting the number of elements that match. Once I have the counts, I want to perform further operations against each set with the top X (currently 100) similar sets (ones with the highest count). I have provided an example input and an output which shows the count of matching elements against two sets: input { "list1": [ "label1", "label2", "label3" ], "list2": [ "label2", "label3", "label4" ], "list3": [ "label3", "label4", "label5" ], "list4": [ "label4", "label5", "label6" ] } output { "list1": { "list1": 3, "list2": 2, "list3": 1, "list4": 0 }, "list2": { "list1": 2, "list2": 3, "list3": 2, "list4": 1 }, "list3": { "list1": 1, "list2": 2, "list3": 3, "list4": 2 }, "list4": { "list1": 0, "list2": 1, "list3": 2, "list4": 3 } } I came up with the following code, but it takes hours for an input of about 200,000 sets. The number of elements/labels in a set varies but averages about 10 elements in each set. The total number of unique label values is around 300. input = {} input['list1'] = ['label1', 'label2', 'label3'] input['list2'] = ['label2', 'label3', 'label4'] input['list3'] = ['label3', 'label4', 'label5'] input['list4'] = ['label4', 'label5', 'label6'] print(json.dumps(input, indent=2)) input = {key: set(value) for key, value in input.items()} output = {key1: {key2: 0 for key2 in input.keys()} for key1 in input.keys()} for key1, value1 in input.items(): for key2, value2 in input.items(): for element in value1: if element in value2: count = output[key1][key2] output[key1][key2] = count + 1 print(json.dumps(output, indent=2)) Does anyone have any ideas on how to improve on the execution time of the above code when the number of sets is large? Thank you for any suggestions! 
| Use an inverted index to avoid computing intersections with those sets whose intersection cardinality is 0: from collections import defaultdict, Counter from itertools import chain from pprint import pprint data = { "list1": ["label1", "label2", "label3"], "list2": ["label2", "label3", "label4"], "list3": ["label3", "label4", "label5"], "list4": ["label4", "label5", "label6"] } index = defaultdict(list) for key, values in data.items(): for value in values: index[value].append(key) result = {key: Counter(chain.from_iterable(index[label] for label in labels)) for key, labels in data.items()} pprint(result) Output {'list1': Counter({'list1': 3, 'list2': 2, 'list3': 1}), 'list2': Counter({'list2': 3, 'list1': 2, 'list3': 2, 'list4': 1}), 'list3': Counter({'list3': 3, 'list2': 2, 'list4': 2, 'list1': 1}), 'list4': Counter({'list4': 3, 'list3': 2, 'list2': 1})} If strictly needed you can include those sets with 0 intersection cardinality as follows: result = {key: {k: value.get(k, 0) for k in data} for key, value in result.items()} pprint(result) Output {'list1': {'list1': 3, 'list2': 2, 'list3': 1, 'list4': 0}, 'list2': {'list1': 2, 'list2': 3, 'list3': 2, 'list4': 1}, 'list3': {'list1': 1, 'list2': 2, 'list3': 3, 'list4': 2}, 'list4': {'list1': 0, 'list2': 1, 'list3': 2, 'list4': 3}} A second alternative comes from the observation that most of the time is spent finding intersections of sets, so a faster data structure such as a roaring bitmap should be useful: from collections import defaultdict from pprint import pprint from pyroaring import BitMap data = { "list1": ["label1", "label2", "label3"], "list2": ["label2", "label3", "label4"], "list3": ["label3", "label4", "label5"], "list4": ["label4", "label5", "label6"] } # all labels labels = set().union(*data.values()) # lookup mapping to an integer lookup = {key: value for value, key in enumerate(labels)} roaring_data = {key: BitMap(lookup[v] for v in value) for key, value in data.items()} result = defaultdict(dict) for k_out, outer in roaring_data.items(): for k_in, inner in roaring_data.items(): result[k_out][k_in] = len(outer & inner) pprint(result) Output defaultdict(<class 'dict'>, {'list1': {'list1': 3, 'list2': 2, 'list3': 1, 'list4': 0}, 'list2': {'list1': 2, 'list2': 3, 'list3': 2, 'list4': 1}, 'list3': {'list1': 1, 'list2': 2, 'list3': 3, 'list4': 2}, 'list4': {'list1': 0, 'list2': 1, 'list3': 2, 'list4': 3}}) Performance Analysis The above graph shows the performance on a dictionary data whose length is given by the value on the x axis; each value of the dictionary is a list of 10 labels randomly sampled from a population of 100. Counter to intuition, the roaring bitmap performs worse than your solution, while using an inverted index takes less than half the time (approximately 40%). The code to reproduce the above results can be found here | 5 | 5 |
69,505,726 | 2021-10-9 | https://stackoverflow.com/questions/69505726/pandas-typeerror-cannot-perform-rand-with-a-dtyped-bool-array-and-scalar | I wanted to change a value of a cell with the conditions of another cell value and used this code dfT.loc[dfT.state == "CANCELLED" & (dfT.Activity != "created"), "Activity"] = "cancelled" This is an Example Table: ID Activity state 1 created CANCELLED 1 completed CANCELLED 2 created FINNISHED 2 completed FINISHED 3 created REJECTED 3 rejected REJECTED and There is a Type Error like this: TypeError Traceback (most recent call last) ~\miniconda3\lib\site-packages\pandas\core\ops\array_ops.py in na_logical_op(x, y, op) 264 # (xint or xbool) and (yint or bool) --> 265 result = op(x, y) 266 except TypeError: ~\miniconda3\lib\site-packages\pandas\core\ops\roperator.py in rand_(left, right) 51 def rand_(left, right): ---> 52 return operator.and_(right, left) 53 TypeError: ufunc 'bitwise_and' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) ~\miniconda3\lib\site-packages\pandas\core\ops\array_ops.py in na_logical_op(x, y, op) 278 try: --> 279 result = libops.scalar_binop(x, y, op) 280 except ( pandas\_libs\ops.pyx in pandas._libs.ops.scalar_binop() ValueError: Buffer dtype mismatch, expected 'Python object' but got 'bool' The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) <ipython-input-6-350c55a06fa7> in <module> 4 # dfT2 = dfT1[dfT1.Activity != 'created'] 5 # df.loc[(df.state == "CANCELLED") & (df.Activity != "created"), "Activity"] = "cancelled" ----> 6 dfT.loc[dfT.state == "CANCELLED" & (dfT.Activity != "created"), "Activity"] = "cancelled" 7 dfT ~\miniconda3\lib\site-packages\pandas\core\ops\common.py in new_method(self, other) 63 other = item_from_zerodim(other) 64 ---> 65 return method(self, other) 66 67 return new_method ~\miniconda3\lib\site-packages\pandas\core\arraylike.py in __rand__(self, other) 61 @unpack_zerodim_and_defer("__rand__") 62 def __rand__(self, other): ---> 63 return self._logical_method(other, roperator.rand_) 64 65 @unpack_zerodim_and_defer("__or__") ~\miniconda3\lib\site-packages\pandas\core\series.py in _logical_method(self, other, op) 4987 rvalues = extract_array(other, extract_numpy=True) 4988 -> 4989 res_values = ops.logical_op(lvalues, rvalues, op) 4990 return self._construct_result(res_values, name=res_name) 4991 ~\miniconda3\lib\site-packages\pandas\core\ops\array_ops.py in logical_op(left, right, op) 353 filler = fill_int if is_self_int_dtype and is_other_int_dtype else fill_bool 354 --> 355 res_values = na_logical_op(lvalues, rvalues, op) 356 # error: Cannot call function of unknown type 357 res_values = filler(res_values) # type: ignore[operator] ~\miniconda3\lib\site-packages\pandas\core\ops\array_ops.py in na_logical_op(x, y, op) 286 ) as err: 287 typ = type(y).__name__ --> 288 raise TypeError( 289 f"Cannot perform '{op.__name__}' with a dtyped [{x.dtype}] array " 290 f"and scalar of type [{typ}]" If anyone understand what's my mistake is please help. Thanks in advance -Alde | You need to wrap your conditions inside () Use: dfT.loc[(dfT.state == "CANCELLED") & (dfT.Activity != "created"), "Activity"] = "cancelled" | 6 | 15 |
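The root cause in the accepted answer is Python operator precedence: `&` binds more tightly than `==`/`!=`, so the unparenthesised expression is evaluated as `dfT.state == ("CANCELLED" & (dfT.Activity != "created"))`, which is what triggers the `rand_` error. A minimal runnable sketch of the fix, using a toy frame that mirrors the example table (column values are illustrative):

```python
import pandas as pd

# Toy frame mirroring the example table in the question.
dfT = pd.DataFrame({
    "ID": [1, 1, 2, 2],
    "Activity": ["created", "completed", "created", "completed"],
    "state": ["CANCELLED", "CANCELLED", "FINISHED", "FINISHED"],
})

# `&` has higher precedence than `==`, so each comparison must be parenthesised.
mask = (dfT.state == "CANCELLED") & (dfT.Activity != "created")
dfT.loc[mask, "Activity"] = "cancelled"
print(dfT)
```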
69,505,262 | 2021-10-9 | https://stackoverflow.com/questions/69505262/how-to-compare-dataclasses | I would like to compare two global dataclasses in terms of equality. I changed the field in one of the dataclasses and python still insists on telling me, that those objects are equal. I don't know how internally dataclasses work, but when I print asdict I get an empty dictionary... What am I doing wrong and how can I compare dataclasses by checking for equality of its members? I'm using Python 3.9.4 from dataclasses import dataclass, asdict @dataclass class TestClass: field1 = None field2 = False test1 = TestClass() test2 = TestClass() def main(): global test1 global test2 test2.field2 = True print('Test1: ', id(test1), asdict(test1), test1.field1, test1.field2) print('Test2: ', id(test2), asdict(test2), test2.field1, test2.field2) print('Are equal? ', test1 == test2) print('Are not equal?', test1 != test2) if __name__ == '__main__': main() Output: Test1: 2017289121504 {} None False Test2: 2017289119296 {} None True Are equal? True Are not equal? False | For Python to recognize fields of a dataclass, those fields should have PEP 526 type annotations. For example: from typing import Optional @dataclass class TestClass: field1: Optional[str] = None field2: bool = False With that definition comparisons and asdict work as expected: In [2]: TestClass(field2=True) == TestClass() Out[2]: False In [3]: asdict(TestClass(field2=True)) Out[3]: {'field1': None, 'field2': True} | 7 | 10 |
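A quick check that makes the accepted answer's point visible: without annotations the assignments are ordinary class attributes, so the generated `__eq__` has nothing to compare and every pair of instances is equal.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class NoAnnotations:
    field1 = None    # plain class attribute, not a dataclass field
    field2 = False

@dataclass
class WithAnnotations:
    field1: Optional[str] = None
    field2: bool = False

print(fields(NoAnnotations))                      # () -> nothing to compare
print([f.name for f in fields(WithAnnotations)])  # ['field1', 'field2']
```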
69,483,214 | 2021-10-7 | https://stackoverflow.com/questions/69483214/issues-setting-up-python-testing-in-vscode-using-pytest | I am trying to use the testing extension in VSCode with the Python extension. I am using pytest as my testing library. My folder structure looks like this: PACKAGENAME/ ├── PACKAGENAME/ │ ├── __init__.py │ └── main.py ├── tests/ │ └── test_main.py └── requirements.txt In the test_main.py file I am trying to import the package code, in order to test it: from PACKAGENAME import * From the command line, in the root directory, PACKAGENAME, I can use the command python -m pytest which runs the tests fine. There are no issues with modules not being found. However, when I try to use the VSCode testing tab, the tests are discovered, but this errors: =================================== ERRORS ==================================== _____________________ ERROR collecting tests/test_main.py _____________________ ImportError while importing test module 'd:\PATH\TO\PACKAGENAME\tests\test_main.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: C:\Users\USER\anaconda3\envs\uni\lib\importlib\__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests\test_main.py:1: in <module> from PACKAGENAME import * E ModuleNotFoundError: No module named 'PACKAGENAME' =========================== short test summary info =========================== Is there any way to get this working without having to use the command line? | I suggest that you try like this: make sure that your VS Code workspace is set to the parent directory of (the root directory) PACKAGENAME; add an empty __init__.py file in the tests directory; in test_main.py, replace from PACKAGENAME import * with from PACKAGENAME.main import * | 9 | 10 |
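An alternative that does not depend on which directory the test runner starts from is sketched below (assuming the layout from the question): a `conftest.py` at the repository root that puts that root on `sys.path`, so `from PACKAGENAME.main import *` resolves both from the command line and from the VS Code testing tab.

```python
# conftest.py placed at the repository root, next to the inner PACKAGENAME/ and tests/
import sys
from pathlib import Path

# Make the repository root importable regardless of the runner's working directory.
sys.path.insert(0, str(Path(__file__).resolve().parent))
```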
69,503,887 | 2021-10-9 | https://stackoverflow.com/questions/69503887/pip-cannot-install-anything-after-upgrading-to-python-3-10-0-on-windows | I recently upgraded to the latest version of python version 3.10.0 and upgraded pip also to the latest version 21.2.4. Now I cannot use pip to install anything. This is the error it gives for anything I try to install. C:\Users\AMAL>pip install numpy Collecting numpy Using cached numpy-1.21.2.zip (10.3 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Building wheels for collected packages: numpy Building wheel for numpy (PEP 517) ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\AMAL\AppData\Local\Programs\Python\Python310\python.exe' 'C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\AMAL\AppData\Local\Temp\tmpemhtoti3' cwd: C:\Users\AMAL\AppData\Local\Temp\pip-install-d1uxnt6o\numpy_5af9ff0d696c40848bc7d07b456797b7 Complete output (208 lines): setup.py:63: RuntimeWarning: NumPy 1.21.2 may not yet support Python 3.10. warnings.warn( Running from numpy source directory. C:\Users\AMAL\AppData\Local\Temp\pip-install-d1uxnt6o\numpy_5af9ff0d696c40848bc7d07b456797b7\tools\cythonize.py:69: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives from distutils.version import LooseVersion Processing numpy/random\_bounded_integers.pxd.in Processing numpy/random\bit_generator.pyx Processing numpy/random\mtrand.pyx Processing numpy/random\_bounded_integers.pyx.in Processing numpy/random\_common.pyx Processing numpy/random\_generator.pyx Processing numpy/random\_mt19937.pyx Processing numpy/random\_pcg64.pyx Processing numpy/random\_philox.pyx Processing numpy/random\_sfc64.pyx Cythonizing sources blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE blis_info: libraries blis not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE openblas_info: libraries openblas not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to 
compile Fortran code on platform 'nt' NOT AVAILABLE accelerate_info: NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS libraries tatlas not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE atlas_3_10_blas_info: libraries satlas not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE C:\Users\AMAL\AppData\Local\Temp\pip-install-d1uxnt6o\numpy_5af9ff0d696c40848bc7d07b456797b7\numpy\distutils\system_info.py:2026: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. if self._calc_info(blas): blas_info: libraries blas not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE C:\Users\AMAL\AppData\Local\Temp\pip-install-d1uxnt6o\numpy_5af9ff0d696c40848bc7d07b456797b7\numpy\distutils\system_info.py:2026: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. if self._calc_info(blas): blas_src_info: NOT AVAILABLE C:\Users\AMAL\AppData\Local\Temp\pip-install-d1uxnt6o\numpy_5af9ff0d696c40848bc7d07b456797b7\numpy\distutils\system_info.py:2026: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. 
if self._calc_info(blas): NOT AVAILABLE non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: libraries mkl_rt not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE openblas_lapack_info: libraries openblas not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE openblas_clapack_info: libraries openblas,lapack not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE flame_info: libraries flame not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib libraries tatlas,tatlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib libraries lapack_atlas not found in C:\ libraries tatlas,tatlas not found in C:\ libraries lapack_atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\libs libraries tatlas,tatlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\libs <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: libraries lapack_atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib libraries satlas,satlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib libraries lapack_atlas not found in C:\ libraries satlas,satlas not found in C:\ libraries lapack_atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\libs libraries satlas,satlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\libs <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib libraries ptf77blas,ptcblas,atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib libraries lapack_atlas not found in C:\ libraries ptf77blas,ptcblas,atlas not found in C:\ libraries lapack_atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\libs libraries ptf77blas,ptcblas,atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\libs <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: libraries lapack_atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib libraries f77blas,cblas,atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib libraries lapack_atlas not found in C:\ libraries f77blas,cblas,atlas not found in C:\ libraries lapack_atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\libs libraries f77blas,cblas,atlas not found in C:\Users\AMAL\AppData\Local\Programs\Python\Python310\libs <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: libraries lapack not found in ['C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\AMAL\\AppData\\Local\\Programs\\Python\\Python310\\libs'] NOT AVAILABLE 
C:\Users\AMAL\AppData\Local\Temp\pip-install-d1uxnt6o\numpy_5af9ff0d696c40848bc7d07b456797b7\numpy\distutils\system_info.py:1858: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. return getattr(self, '_calc_info_{}'.format(name))() lapack_src_info: NOT AVAILABLE C:\Users\AMAL\AppData\Local\Temp\pip-install-d1uxnt6o\numpy_5af9ff0d696c40848bc7d07b456797b7\numpy\distutils\system_info.py:1858: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. return getattr(self, '_calc_info_{}'.format(name))() NOT AVAILABLE numpy_linalg_lapack_lite: FOUND: language = c define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')] Warning: attempted relative import with no known parent package C:\Users\AMAL\AppData\Local\Programs\Python\Python310\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running bdist_wheel running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources creating build creating build\src.win-amd64-3.10 creating build\src.win-amd64-3.10\numpy creating build\src.win-amd64-3.10\numpy\distutils building library "npymath" sources error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Failed building wheel for numpy Failed to build numpy ERROR: Could not build wheels for numpy which use PEP 517 and cannot be installed directly I tried downgrading both python and pip back but the issue still persists. Also tried to create a virtualenv and in that also the same issue persists. 
When I downgraded the following is the error C:\Users\AMAL>python -m pip install torch Collecting torch Downloading https://files.pythonhosted.org/packages/8e/57/3066077aa16a852f3da0239796fa487baba0104ca2eb26f9ca4f56a7a86d/torch-1.7.0-cp38-cp38m-win_amd64.whl (184.0MB) |ββββββββββββββββββββββββββββββββ| 184.0MB 67kB/s ERROR: Exception: Traceback (most recent call last): File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\cli\base_command.py", line 188, in main status = self.run(options, args) File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\commands\install.py", line 345, in run resolver.resolve(requirement_set) File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\legacy_resolve.py", line 196, in resolve self._resolve_one(requirement_set, req) File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\legacy_resolve.py", line 362, in _resolve_one dist = abstract_dist.get_pkg_resources_distribution() File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\distributions\wheel.py", line 13, in get_pkg_resources_distribution return list(pkg_resources.find_distributions( IndexError: list index out of range C:\Users\AMAL>python --version Python 3.8.4rc1 C:\Users\AMAL>python -m pip install torch Collecting torch Downloading https://files.pythonhosted.org/packages/8e/57/3066077aa16a852f3da0239796fa487baba0104ca2eb26f9ca4f56a7a86d/torch-1.7.0-cp38-cp38m-win_amd64.whl (184.0MB) |ββββββββββββββββββββββββββββββββ| 184.0MB 67kB/s ERROR: Exception: Traceback (most recent call last): File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\cli\base_command.py", line 188, in main status = self.run(options, args) File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\commands\install.py", line 345, in run resolver.resolve(requirement_set) File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\legacy_resolve.py", line 196, in resolve self._resolve_one(requirement_set, req) File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\legacy_resolve.py", line 362, in _resolve_one dist = abstract_dist.get_pkg_resources_distribution() File "C:\Users\AMAL\AppData\Local\Programs\Python\Python38\lib\site-packages\pip\_internal\distributions\wheel.py", line 13, in get_pkg_resources_distribution return list(pkg_resources.find_distributions( IndexError: list index out of range C:\Users\AMAL>python --version Python 3.8.4rc1 I would greatly appreciate guidance on how to install the latest version of python and pip that supports libraries like pygame and numpy. | Try to upgrade your pip pip install --upgrade pip | 5 | 2 |
69,491,795 | 2021-10-8 | https://stackoverflow.com/questions/69491795/how-to-force-keras-to-use-tensorflow-gpu-backend | I know this is one of the popular questions, but none of the solutions worked for me, so far. I'm running a legacy code that is written in tensorflow v1.13.1 and keras v2.2.4. I cannot modify the code to run latest tensorflow version. Since keras has now been merged into tensorflow, I'm facing problems installing the specific versions of tensorflow and keras via pip. I found that anaconda has option to install keras and tensorflow with the above version. So, I installed it with conda install -c conda-forge keras-gpu=2.2.4 tensorflow-gpu=1.13.1 It installed the version and all works too. But it doesn't use GPU, and instead runs on CPU. I noticed that anaconda installed both CPU and GPU versions of tensorflow and I guess this is why it is defaulting to CPU version. So, my question is, how can I force it to use GPU version? PS: There are many answers out there that suggest to remove CPU version of tensorflow. But when I try to remove CPU version, conda uninstalls everything including keras. So, I assume there should be a way to use tensorflow-gpu when both of them are installed. Any help in this regard is appreciated! | Installing tensorflow first and then keras worked! conda install tensorflow-gpu=1.13.1 conda install keras-gpu=2.2.4 | 5 | 0 |
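After installing, a short sanity check (standard TensorFlow 1.x APIs) confirms whether the imported build actually sees a GPU; if it prints `False` or lists only CPU devices, the CPU build is still the one being picked up.

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)                  # expect 1.13.1
print(tf.test.is_gpu_available())      # True if the CUDA build and driver are usable
print([d.name for d in device_lib.list_local_devices()])  # look for '/device:GPU:0'
```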
69,493,263 | 2021-10-8 | https://stackoverflow.com/questions/69493263/why-do-keyword-arguments-to-a-class-definition-reappear-after-they-were-removed | I created a metaclass that defines the __prepare__ method, which is supposed to consume a specific keyword in the class definition, like this: class M(type): @classmethod def __prepare__(metaclass, name, bases, **kwds): print('in M.__prepare__:') print(f' {metaclass=}\n {name=}\n' f' {bases=}\n {kwds=}\n {id(kwds)=}') if 'for_prepare' not in kwds: return super().__prepare__(name, bases, **kwds) arg = kwds.pop('for_prepare') print(f' arg popped for prepare: {arg}') print(f' end of prepare: {kwds=} {id(kwds)=}') return super().__prepare__(name, bases, **kwds) def __new__(metaclass, name, bases, ns, **kwds): print('in M.__new__:') print(f' {metaclass=}\n {name=}\n' f' {bases=}\n {ns=}\n {kwds=}\n {id(kwds)=}') return super().__new__(metaclass, name, bases, ns, **kwds) class A(metaclass=M, for_prepare='xyz'): pass When I run it, the for_prepare keyword argument in the definition of class A reappears in __new__ (and later in __init_subclass__, where it causes an error): $ python3 ./weird_prepare.py in M.__prepare__: metaclass=<class '__main__.M'> name='A' bases=() kwds={'for_prepare': 'xyz'} id(kwds)=140128409916224 arg popped for prepare: xyz end of prepare: kwds={} id(kwds)=140128409916224 in M.__new__: metaclass=<class '__main__.M'> name='A' bases=() ns={'__module__': '__main__', '__qualname__': 'A'} kwds={'for_prepare': 'xyz'} id(kwds)=140128409916224 Traceback (most recent call last): File "./weird_prepare.py", line 21, in <module> class A(metaclass=M, for_prepare='xyz'): File "./weird_prepare.py", line 18, in __new__ return super().__new__(metaclass, name, bases, ns, **kwds) TypeError: __init_subclass__() takes no keyword arguments As you can see the for_prepare item is removed from the dict, and the dict that is passed to __new__ is the same object that was passed to __prepare__ and the same object that the for_prepare item was popped from, but in __new__ it reappeared! Why does a keyword that was deleted from the dict get added back in? | and the dict that is passed to new is the same object that was passed to prepare Unfortunately, this is where you are wrong. Python only recycles the same object id. If you create a new dict inside __prepare__ you will notice the id of kwds changes in __new__. 
class M(type): @classmethod def __prepare__(metaclass, name, bases, **kwds): print('in M.__prepare__:') print(f' {metaclass=}\n {name=}\n' f' {bases=}\n {kwds=}\n {id(kwds)=}') if 'for_prepare' not in kwds: return super().__prepare__(name, bases, **kwds) arg = kwds.pop('for_prepare') x = {} # <<< create a new dict print(f' arg popped for prepare: {arg}') print(f' end of prepare: {kwds=} {id(kwds)=}') return super().__prepare__(name, bases, **kwds) def __new__(metaclass, name, bases, ns, **kwds): print('in M.__new__:') print(f' {metaclass=}\n {name=}\n' f' {bases=}\n {ns=}\n {kwds=}\n {id(kwds)=}') return super().__new__(metaclass, name, bases, ns, **kwds) class A(metaclass=M, for_prepare='xyz'): pass Output: in M.__prepare__: metaclass=<class '__main__.M'> name='A' bases=() kwds={'for_prepare': 'xyz'} id(kwds)=2595838763072 arg popped for prepare: xyz end of prepare: kwds={} id(kwds)=2595838763072 in M.__new__: metaclass=<class '__main__.M'> name='A' bases=() ns={'__module__': '__main__', '__qualname__': 'A'} kwds={'for_prepare': 'xyz'} id(kwds)=2595836298496 # <<< id has changed now Traceback (most recent call last): File "d:\nemetris\mpf\mpf.test\test_so4.py", line 22, in <module> class A(metaclass=M, for_prepare='xyz'): File "d:\nemetris\mpf\mpf.test\test_so4.py", line 19, in __new__ return super().__new__(metaclass, name, bases, ns, **kwds) TypeError: A.__init_subclass__() takes no keyword arguments | 6 | 6 |
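For completeness, a sketch of the usual fix: each of `__prepare__`, `__new__`, and `__init__` receives its own copy of the class keywords, so the custom keyword has to be accepted (and dropped) in every one of them before delegating to `type`, which keeps it away from `__init_subclass__`.

```python
class M(type):
    @classmethod
    def __prepare__(metacls, name, bases, for_prepare=None, **kwds):
        # consume for_prepare here; it never reaches type.__prepare__
        return super().__prepare__(name, bases, **kwds)

    def __new__(metacls, name, bases, ns, for_prepare=None, **kwds):
        # strip it again so type.__new__ does not forward it to __init_subclass__
        return super().__new__(metacls, name, bases, ns, **kwds)

    def __init__(cls, name, bases, ns, for_prepare=None, **kwds):
        super().__init__(name, bases, ns, **kwds)

class A(metaclass=M, for_prepare='xyz'):
    pass

print(A)  # <class '__main__.A'> -- no TypeError from __init_subclass__
```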
69,492,265 | 2021-10-8 | https://stackoverflow.com/questions/69492265/fastapi-sqlalchemy-pytest-unable-to-get-100-coverage-it-doesnt-properly-co | I'm trying to build FastAPI application fully covered with test using python 3.9 For this purpose I've chosen stack: FastAPI, uvicorn, SQLAlchemy, asyncpg, pytest (+ async, cov plugins), coverage and httpx AsyncClient Here is my minimal requirements.txt All tests run smoothly and I get the expected results. But I've faced the problem, coverage doesn't properly collected. It breaks after a first await keyword, when coroutine returns control back to the event loop Here is a minimal set on how to reproduce this behavior (it's also available on a GitHub). Appliaction code main.py: import sqlalchemy as sa from fastapi import FastAPI from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine from starlette.requests import Request app = FastAPI() DATABASE_URL = 'sqlite+aiosqlite://?cache=shared' @app.on_event('startup') async def startup_event(): engine = create_async_engine(DATABASE_URL, future=True) app.state.session = AsyncSession(engine, expire_on_commit=False) app.state.engine = engine @app.on_event('shutdown') async def shutdown_event(): await app.state.session.close() @app.get('/', name="home") async def get_home(request: Request): res = await request.app.state.session.execute(sa.text('SELECT 1')) # after this line coverage breaks row = res.first() assert str(row[0]) == '1' return {"message": "OK"} test setup conftest.py looks like this: import asyncio import pytest from asgi_lifespan import LifespanManager from httpx import AsyncClient @pytest.fixture(scope='session') async def get_app(): from main import app async with LifespanManager(app): yield app @pytest.fixture(scope='session') async def get_client(get_app): async with AsyncClient(app=get_app, base_url="http://testserver") as client: yield client @pytest.fixture(scope="session") def event_loop(): loop = asyncio.new_event_loop() yield loop loop.close() test is simple as it is (just check status code is 200) test_main.py: import pytest from starlette import status @pytest.mark.asyncio async def test_view_health_check_200_ok(get_client): res = await get_client.get('/') assert res.status_code == status.HTTP_200_OK pytest -vv --cov=. --cov-report term-missing --cov-report html As a result coverage I get: Name Stmts Miss Cover Missing -------------------------------------------- conftest.py 18 0 100% main.py 20 3 85% 26-28 test_main.py 6 0 100% -------------------------------------------- TOTAL 44 3 93% Example code above uses aiosqlite instead of asyncpg but coverage failure also reproduces persistently I've concluded this problem is with SQLAlchemy, because this example with asyncpg without using the SQLAlchemy works like charm | it's an issue with SQLAlchemy 1.4 in coveragepy: https://github.com/nedbat/coveragepy/issues/1082, https://github.com/nedbat/coveragepy/issues/1012 you can try with --concurrency==greenlet option | 13 | 13 |
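For reference, the flag belongs to coverage.py and is spelled `--concurrency=greenlet` when invoking `coverage run` directly; when measuring through pytest-cov, the same setting usually goes into the coverage configuration file, which pytest-cov picks up automatically. A minimal sketch:

```ini
# .coveragerc at the project root
[run]
concurrency = greenlet
```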
69,492,406 | 2021-10-8 | https://stackoverflow.com/questions/69492406/streamlit-how-to-display-buttons-in-a-single-line | Hi all I am building a simple web app with streamlit in python. I need to add 3 buttons but they must be on the same line. Obviously the following code puts them on three different lines st.button('Button 1') st.button('Button 2') st.button('Button 3') Do you have any tips? | Apparently this should do it col1, col2, col3 = st.columns([1,1,1]) with col1: st.button('1') with col2: st.button('2') with col3: st.button('3') | 20 | 33 |
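As a side note (and assuming a Streamlit version that already has `st.columns`, as in the accepted answer), an integer spec gives equal-width columns, and widgets can be attached to the column objects directly:

```python
import streamlit as st

col1, col2, col3 = st.columns(3)   # same as st.columns([1, 1, 1])
col1.button('Button 1')
col2.button('Button 2')
col3.button('Button 3')
```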
69,486,648 | 2021-10-7 | https://stackoverflow.com/questions/69486648/python-regex-where-a-set-of-options-can-occur-at-most-once-in-a-list-in-any-ord | I'm wondering if there's any way in python or perl to build a regex where you can define a set of options can appear at most once in any order. So for example I would like a derivative of foo(?: [abc])*, where a, b, c could only appear once. So: foo a b c foo b c a foo a b foo b would all be valid, but foo b b would not be | You may use this regex with a capture group and a negative lookahead: For Perl, you can use this variant with forward referencing: ^foo((?!.*\1) [abc])+$ RegEx Demo RegEx Details: ^: Start foo: Match foo (: Start a capture group #1 (?!.*\1): Negative lookahead to assert that we don't match what we have in capture group #1 anywhere in input [abc]: Match a space followed by a or b or c )+: End capture group #1. Repeat this group 1+ times $: End As mentioned earlier, this regex is using a feature called Forward Referencing which is a back-reference to a group that appears later in the regex pattern. JGsoft, .NET, Java, Perl, PCRE, PHP, Delphi, and Ruby allow forward references but Python doesn't. Here is a work-around of same regex for Python that doesn't use forward referencing: ^foo(?!.* ([abc]).*\1)(?: [abc])+$ Here we use a negative lookahead before repeated group to check and fail the match if there is any repeat of allowed substrings i.e. [abc]. RegEx Demo 2 | 22 | 13 |
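A quick check of the Python-compatible pattern from the answer against the examples in the question, using the standard re module:

```python
import re

pattern = re.compile(r'^foo(?!.* ([abc]).*\1)(?: [abc])+$')

for s in ['foo a b c', 'foo b c a', 'foo a b', 'foo b', 'foo b b']:
    print(s, '->', bool(pattern.match(s)))
# the first four print True; 'foo b b' prints False because 'b' repeats
```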
69,487,794 | 2021-10-7 | https://stackoverflow.com/questions/69487794/pandas-how-to-filter-dataframe-with-certain-range-of-numbers-in-a-column-of-data | I am trying to come up with a way to filter dataframe so that it contains only certain range of numbers that is needed for further processing. Below is an example dataframe data_sample = [['part1', 234], ['part2', 224], ['part3', 214],['part4', 114],['part5', 1111], ['part6',1067],['part7',1034],['part8',1457],['part9', 789],['part10',1367], ['part11',467],['part12',367] ] data_df = pd.DataFrame(data_sample, columns = ['partname', 'sbin']) data_df['sbin'] = pd.to_numeric(data_df['sbin'], errors='coerce', downcast='integer') With the above dataframe i want to filter such that any part with sbin in range [200-230] and [1000-1150] and [350-370] and [100-130] are removed. I have a bigger dataframe with lot more ranges to be removed and hence need a faster way than using below command data_df.loc[~( ((data_df.sbin >=200) & (data_df.sbin <= 230)) | ((data_df.sbin >=100) & (data_df.sbin <= 130)) | ((data_df.sbin >=350) & (data_df.sbin <= 370))| ((data_df.sbin >=1000) & (data_df.sbin <= 1150)))] that produces output as below partname sbin 0 part1 234 7 part8 1457 8 part9 789 9 part10 1367 10 part11 467 The above method requires lot of conditions and takes a long time, i would like to know if there is a better way using regex or some other python way that i am not aware off. any help would be great | New version Use np.logical_and and any to select values in ranges and invert the mask to keep other ones. intervals = [(100, 130), (200, 230), (350, 370), (1000, 1150)] m = np.any([np.logical_and(data_df['sbin'] >= l, data_df['sbin'] <= u) for l, u in intervals], axis=0) out = data_df.loc[~m] Note any can be replaced by np.logical_or.reduce: intervals = [(100, 130), (200, 230), (350, 370), (1000, 1150)] m = np.logical_or.reduce([np.logical_and(data_df['sbin'] >= l, data_df['sbin'] <= u) for l, u in intervals]) out = data_df.loc[~m] Output result: >>> out partname sbin 0 part1 234 7 part8 1457 8 part9 789 9 part10 1367 10 part11 467 Old version Not work with float numbers as is Use np.where and in1d: intervals = [(100, 130), (200, 230), (350, 370), (1000, 1150)] m = np.hstack([np.arange(l, u+1) for l, u in intervals]) out = data_df.loc[~np.in1d(data_df['sbin'], m)] Performance: for 100k records: data_df = pd.DataFrame({'sbin': np.random.randint(0, 2000, 100000)}) def exclude_range_danimesejo(): intervals = sorted([(200, 230), (1000, 1150), (350, 370), (100, 130)]) intervals = np.array(intervals).flatten() mask = (np.searchsorted(intervals, data_df['sbin']) % 2 == 0) & ~np.in1d(data_df['sbin'], intervals[::2]) return data_df.loc[mask] def exclude_range_sammywemmy(): intervals = pd.IntervalIndex.from_tuples([(200, 230), (1000, 1150), (350, 370), (100, 130)]) return data_df.loc[pd.cut(data_df.sbin, intervals, include_lowest=True).isna()] def exclude_range_corralien(): intervals = [(100, 130), (200, 230), (350, 370), (1000, 1150)] m = np.hstack([np.arange(l, u+1) for l, u in intervals]) return data_df.loc[~np.in1d(data_df['sbin'], m)] def exclude_range_corralien2(): intervals = [(100, 130), (200, 230), (350, 370), (1000, 1150)] m = np.any([np.logical_and(data_df['sbin'] >= l, data_df['sbin'] <= u) for l, u in intervals], axis=0) return data_df.loc[~m] >>> %timeit exclude_range_danimesejo() 2.66 ms Β± 18.2 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) >>> %timeit exclude_range_sammywemmy() 63.6 ms Β± 549 Β΅s per loop (mean Β± std. dev. 
of 7 runs, 10 loops each) >>> %timeit exclude_range_corralien() 6.87 ms Β± 58.8 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) >>> %timeit exclude_range_corralien2() 2.26 ms Β± 8.9 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) | 5 | 4 |
69,477,337 | 2021-10-7 | https://stackoverflow.com/questions/69477337/function-for-restricting-the-type-of-attributes-in-an-object-python-mypy | Example setup: from typing import Optional class A(object): def __init__(self): self.a: Optional[int] = None def check_a(self) -> bool: return self.a is not None a = A() if a.check_a(): print(a.a + 1) # error: Unsupported operand types for + ("None" and "int") The check_a method checks what type of variable a is, but mypy does not see this and writes an error. TypeGuard will not help, because it can create a function to check the type, and not a function to check the type of an object variable Is it possible to somehow make mypy notice this in order to use the function to check the type of the variable self.a without explicitly referring to it in the check? (Use if a.a_check instead of if a.a is not None)? | There are two ways in which this can be done with the new TypeGuard feature, which can be imported from typing in Python 3.10, and is available from the PyPI typing_extensions package in earlier Python versions. Note that typing_extensions is already a dependency of Mypy, so if you're using Mypy, you likely already have it. The first option is to change your check_a method to a staticmethod that takes in a variable that might be int or None as an input, and verifies whether or not it is an int. (Apologies, I have changed the names of some of your variables, as I found it quite confusing to have a class A that also had an a attribute.) from typing import TypeGuard, Optional class Foo1: def __init__(self, bar: Optional[int] = None) -> None: self.bar = bar @staticmethod def check_bar(bar: Optional[int]) -> TypeGuard[int]: return bar is not None f1 = Foo1() if f1.check_bar(f1.bar): print(f1.bar + 1) The second option is to use structural subtyping to assert that an instance of class Foo (or class A in your original question) has certain properties at a certain point in time. This requires altering the test method so that it becomes a classmethod, and is a little more complex to set up, but leads to a nicer check once you have it set up. from typing import TypeGuard, Optional, Protocol, TypeVar class HasIntBar(Protocol): bar: int F = TypeVar('F', bound='Foo2') class Foo2: def __init__(self, bar: Optional[int] = None) -> None: self.bar = bar @classmethod def check_bar(cls: type[F], instance: F) -> TypeGuard[HasIntBar]: return instance.bar is not None f2 = Foo2() if Foo2.check_bar(f2): # could also write this as `if f2.check_bar(f2)` print(f2.bar + 1) You can try both of these options out on Mypy playground here. | 5 | 5 |
69,483,237 | 2021-10-7 | https://stackoverflow.com/questions/69483237/how-to-create-base64-encode-sha256-string-in-javascript | I want to log script hashes to implement content security policy. I have been able to generate the hash in python with the following code: import hashlib import base64 string=''' //<![CDATA[ var theForm = document.forms['ctl00']; if (!theForm) { theForm = document.ctl00; } function __doPostBack(eventTarget, eventArgument) { if (!theForm.onsubmit || (theForm.onsubmit() != false)) { theForm.__EVENTTARGET.value = eventTarget; theForm.__EVENTARGUMENT.value = eventArgument; theForm.submit(); } } //]]> ''' # encode as UTF-8 string_UTF8 = string.encode('utf-8') # hash the message hash_string = hashlib.sha256(string_UTF8).digest() # base64 encode result = base64.b64encode(hash_string) print('sha256-' + result.decode('utf-8')) How can I do this with Javascript? | const string = ` //<![CDATA[ var theForm = document.forms['ctl00']; if (!theForm) { theForm = document.ctl00; } function __doPostBack(eventTarget, eventArgument) { if (!theForm.onsubmit || (theForm.onsubmit() != false)) { theForm.__EVENTTARGET.value = eventTarget; theForm.__EVENTARGUMENT.value = eventArgument; theForm.submit(); } } //]]> ` async function hashFromString(string) { const hash = await crypto.subtle.digest("SHA-256", (new TextEncoder()).encode(string)) return "sha256-" + btoa(String.fromCharCode(...new Uint8Array(hash))) } hashFromString(string).then(console.log) Edit: I realize now that while not stated in your question it's likely that you are using Node.js therefor this answer that uses browser APIs may be of less use. | 5 | 8 |
69,483,502 | 2021-10-7 | https://stackoverflow.com/questions/69483502/is-there-a-way-to-infer-in-python-if-a-date-is-the-actual-day-in-which-the-dst | I would like to infer in Python if a date is the actual day of the year in which the hour is changed due to DST (Daylight Saving Time). With the library pytz you can localize a datetime and the actual DST change is correctly done. Furthermore, there is the method dst() of the library datetime that allows you to infer if an actual date is in summer or winter time (example). However, I would like to infer the actual day in which a DST change is made. Concretely, I would need a function (for example is_dst_change(date, timezone)) that receives a date and returns True only for those days of the year that do have an hour change. For example: import pytz import datetime def is_dst_change(day, timezone): # Localize loc_day = pytz.timezone(timezone).localize(day) # Pseudocode: Infer if a date is the actual day in which the dst hour change is made if loc_day is dst_change_day: return True else: return False # In the timezone 'Europe/Madrid', the days in which the DST change is made in 2021 are 28/03/2021 and 31/10/2021 is_dst_change(day=datetime.datetime(year=2021, month=3, day=28), timezone = 'Europe/Madrid') # This should return True is_dst_change(day=datetime.datetime(year=2021, month=10, day=31), timezone = 'Europe/Madrid') # This should return True is_dst_change(day=datetime.datetime(year=2021, month=2, day=1), timezone = 'Europe/Madrid') # This should return False is_dst_change(day=datetime.datetime(year=2021, month=7, day=1), timezone = 'Europe/Madrid') # This should return False Thus, in the above example the only days of 2021 for which the function is_dst_change(day, timezone='Europe/Madrid') will return True are 28/03/2021 and 31/10/2021. For the rest of the days of the year 2021, it must return False. Is there a way to infer this with Python? | You can make use of datetime.dst() (a change in UTC offset is not necessarily a DST transition): from datetime import datetime, time, timedelta from zoneinfo import ZoneInfo # Python 3.9+ def is_date_of_DSTtransition(dt: datetime, zone: str) -> bool: """ check if the date part of a datetime object falls on the date of a DST transition. """ _d = datetime.combine(dt.date(), time.min).replace(tzinfo=ZoneInfo(zone)) return _d.dst() != (_d+timedelta(1)).dst() e.g. for tz Europe/Berlin: for d in range(366): if is_date_of_DSTtransition(datetime(2021, 1, 1) + timedelta(d), "Europe/Berlin"): print((datetime(2021, 1, 1) + timedelta(d)).date()) # 2021-03-28 # 2021-10-31 Note: I'm using zoneinfo here instead of pytz; for legacy code, there is a pytz deprecation shim. Here's a pytz version anyway (needs an additional normalize): import pytz def is_date_of_DSTtransition(dt: datetime, zone: str) -> bool: _d = pytz.timezone(zone).localize(datetime.combine(dt.date(), time.min)) return _d.dst() != pytz.timezone(zone).normalize(_d+timedelta(1)).dst() | 5 | 4 |
69,479,559 | 2021-10-7 | https://stackoverflow.com/questions/69479559/why-does-mypy-not-accept-a-liststr-as-a-listoptionalstr | Example 1: from typing import List, Optional def myfunc() -> List[Optional[str]]: some_list = [x for x in "abc"] return some_list Mypy complains on example 1: Incompatible return value type (got "List[str]", expected "List[Optional[str]]") However, this example gets no complaint: Example 2: def myfunc() -> List[Optional[str]]: some_list = [x for x in "abc"] return list(some_list) What is the explanation for the inconsistent behavior? | This is because lists in Python are invariant. If we pass a List[str] to code that expects a List[Optional[str]], that code may add a None to our list and break our assumptions. The second example is accepted, however, because the list() call in the return statement creates a fresh list that is not saved anywhere, so no one can keep a reference to the returned value and mutate it illegally. | 7 | 6 |
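A small sketch that makes the invariance hazard concrete: if mypy accepted a `List[str]` where a `List[Optional[str]]` is expected, a callee could legally insert `None` and silently break the caller's `List[str]` assumption.

```python
from typing import List, Optional

def append_none(items: List[Optional[str]]) -> None:
    items.append(None)  # perfectly legal for List[Optional[str]]

words: List[str] = ["a", "b", "c"]
append_none(words)   # mypy rejects this call for exactly that reason
print(words)         # at runtime: ['a', 'b', 'c', None] -- the List[str] now holds None
```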
69,479,669 | 2021-10-7 | https://stackoverflow.com/questions/69479669/kneighborsclassifier-with-cross-validation-returns-perfect-accuracy-when-k-1 | I'm training a KNN classifier using scikit-learn's KNeighborsClassifier with cross validation: k=1 param_space = {'n_neighbors': [k]} model = KNeighborsClassifier(n_neighbors=k, metric='euclidean') search = GridSearchCV(model, param_space, cv=cv, verbose=10, n_jobs=8) search.fit(X_df, y_df) preds = search.best_estimator_.predict(X_df) when k=1 and with any cv value (lets say cv=4), I'm getting perfect score from sklearn.metrics import confusion_matrix tn, fp, fn, tp = confusion_matrix(y_df, preds).ravel() f1 = tp / (tp + 0.5 * (fp + fn)) # f1 is perfect 1 It is important to say that I use this method on multiple datasets, and every time k=1 the score is a perfect 1. I tried doing it on random data and still got f1=1. Is there any known bug with KNeighborsClassifier when k=1? Maybe I'm missing something else? Thanks in advance. | Your predictions are from the best_estimator_, which is a copy of the estimator with the optimal hyperparameters (according to the cross-validation scores) refitted to the entire training set. So the confusion matrix you generate is really a training score, and for 1-neighbors that's trivially perfect (the nearest neighbor of a point is itself). | 5 | 2 |
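A sketch of how to read a non-trivial score instead, reusing the `search`, `cv`, `X_df` and `y_df` objects from the question (and assuming binary labels, as the manual F1 computation implies): either take the cross-validated score the grid search already computed, or build out-of-fold predictions and score those.

```python
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score
from sklearn.neighbors import KNeighborsClassifier

# Mean score over the held-out folds -- not the trivially perfect training-set score.
print(search.best_score_)

# Or score out-of-fold predictions explicitly:
oof_preds = cross_val_predict(
    KNeighborsClassifier(n_neighbors=1, metric='euclidean'), X_df, y_df, cv=cv
)
print(f1_score(y_df, oof_preds))
```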
69,459,268 | 2021-10-6 | https://stackoverflow.com/questions/69459268/cant-install-python-3-10-0-with-pyenv-on-macos | Trying to install Python 3.10.0 on MacOS 11.6 (Intel) with pyenv 2.1.0 (from homebrew) fails with: python-build: use [email protected] from homebrew python-build: use readline from homebrew Downloading Python-3.10.0.tar.xz... -> https://www.python.org/ftp/python/3.10.0/Python-3.10.0.tar.xz Installing Python-3.10.0... python-build: use tcl-tk from homebrew python-build: use readline from homebrew python-build: use zlib from xcode sdk BUILD FAILED (OS X 11.6 using python-build 20180424) Inspect or clean up the working tree at /var/folders/rk/_qysk9hs40qcq14h44l57wch0000gn/T/python-build.20211006114013.40649 Results logged to /var/folders/rk/_qysk9hs40qcq14h44l57wch0000gn/T/python-build.20211006114013.40649.log Last 10 log lines: checking MACHDEP... "darwin" checking for gcc... clang checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... configure: error: in `/var/folders/rk/_qysk9hs40qcq14h44l57wch0000gn/T/python-build.20211006114013.40649/Python-3.10.0': configure: error: cannot run C compiled programs. If you meant to cross compile, use `--host'. See `config.log' for more details make: *** No targets specified and no makefile found. Stop. The config.log in the build folder contains: This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by python configure 3.10, which was generated by GNU Autoconf 2.69. Invocation command line was $ ./configure --prefix=/Users/jonas.obrist/.pyenv/versions/3.10.0 --libdir=/Users/jonas.obrist/.pyenv/versions/3.10.0/lib --with-openssl=/usr/local/opt/[email protected] --with-tcltk-libs=-L/usr/local/opt/tcl-tk/lib -ltcl8.6 -ltk8.6 --with-tcltk-includes=-I/usr/local/opt/tcl-tk/include ## --------- ## ## Platform. ## ## --------- ## hostname = Jonas-MBP-2018.local uname -m = x86_64 uname -r = 20.6.0 uname -s = Darwin uname -v = Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64 /usr/bin/uname -p = i386 /bin/uname -X = unknown /bin/arch = unknown /usr/bin/arch -k = unknown /usr/convex/getsysinfo = unknown /usr/bin/hostinfo = Mach kernel version: Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64 Kernel configured for up to 12 processors. 6 processors are physically available. 12 processors are logically available. 
Processor type: x86_64h (Intel x86-64h Haswell) Processors active: 0 1 2 3 4 5 6 7 8 9 10 11 Primary memory available: 32.00 gigabytes Default processor set: 614 tasks, 3158 threads, 12 processors Load average: 2.13, Mach factor: 9.85 /bin/machine = unknown /usr/bin/oslevel = unknown /bin/universe = unknown PATH: /usr/local/Cellar/pyenv/HEAD-483d95d/libexec PATH: /usr/local/Cellar/pyenv/HEAD-483d95d/plugins/python-build/bin PATH: /Users/jonas.obrist/.poetry/bin PATH: /Users/jonas.obrist/.rbenv/shims PATH: /usr/local/opt/llvm/bin PATH: /usr/local/opt/libxml2/bin PATH: /Users/jonas.obrist/.poetry/bin/ PATH: /Users/jonas.obrist/.cargo/bin PATH: /Users/jonas.obrist/.local/bin PATH: /Users/jonas.obrist/.fastlane/bin PATH: /Users/jonas.obrist/.gem/bin PATH: /Users/jonas.obrist/.pyenv/shims PATH: /Users/jonas.obrist/.pyenv/bin PATH: /usr/local/bin PATH: /usr/bin PATH: /bin PATH: /usr/sbin PATH: /sbin PATH: /Library/Apple/usr/bin PATH: /Users/jonas.obrist/go/bin ## ----------- ## ## Core tests. ## ## ----------- ## configure:2878: checking build system type configure:2892: result: x86_64-apple-darwin20.6.0 configure:2912: checking host system type configure:2925: result: x86_64-apple-darwin20.6.0 configure:2955: checking for python3.10 configure:2971: found /Users/jonas.obrist/.pyenv/shims/python3.10 configure:2982: result: python3.10 configure:3076: checking for --enable-universalsdk configure:3123: result: no configure:3147: checking for --with-universal-archs configure:3162: result: no configure:3318: checking MACHDEP configure:3369: result: "darwin" configure:3653: checking for gcc configure:3680: result: clang configure:3909: checking for C compiler version configure:3918: clang --version >&5 Homebrew clang version 12.0.1 Target: x86_64-apple-darwin20.6.0 Thread model: posix InstalledDir: /usr/local/opt/llvm/bin configure:3929: $? = 0 configure:3918: clang -v >&5 Homebrew clang version 12.0.1 Target: x86_64-apple-darwin20.6.0 Thread model: posix InstalledDir: /usr/local/opt/llvm/bin configure:3929: $? = 0 configure:3918: clang -V >&5 clang-12: error: argument to '-V' is missing (expected 1 value) clang-12: error: no input files configure:3929: $? = 1 configure:3918: clang -qversion >&5 clang-12: error: unknown argument '-qversion'; did you mean '--version'? clang-12: error: no input files configure:3929: $? = 1 configure:3949: checking whether the C compiler works configure:3971: clang -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/usr/local/opt/readline/include -I/usr/local/opt/readline/include -I/Users/jonas.obrist/.pyenv/versions/3.10.0/include -I/usr/local/opt/openssl/include -L/usr/local/opt/readline/lib -L/usr/local/opt/readline/lib -L/Users/jonas.obrist/.pyenv/versions/3.10.0/lib -L/usr/local/opt/openssl/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib conftest.c >&5 ld: warning: directory not found for option '-L/Users/jonas.obrist/.pyenv/versions/3.10.0/lib' configure:3975: $? 
= 0 configure:4023: result: yes configure:4026: checking for C compiler default output file name configure:4028: result: a.out configure:4034: checking for suffix of executables configure:4041: clang -o conftest -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/usr/local/opt/readline/include -I/usr/local/opt/readline/include -I/Users/jonas.obrist/.pyenv/versions/3.10.0/include -I/usr/local/opt/openssl/include -L/usr/local/opt/readline/lib -L/usr/local/opt/readline/lib -L/Users/jonas.obrist/.pyenv/versions/3.10.0/lib -L/usr/local/opt/openssl/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib conftest.c >&5 ld: warning: directory not found for option '-L/Users/jonas.obrist/.pyenv/versions/3.10.0/lib' configure:4045: $? = 0 configure:4067: result: configure:4089: checking whether we are cross compiling configure:4097: clang -o conftest -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/usr/local/opt/readline/include -I/usr/local/opt/readline/include -I/Users/jonas.obrist/.pyenv/versions/3.10.0/include -I/usr/local/opt/openssl/include -L/usr/local/opt/readline/lib -L/usr/local/opt/readline/lib -L/Users/jonas.obrist/.pyenv/versions/3.10.0/lib -L/usr/local/opt/openssl/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib conftest.c >&5 In file included from conftest.c:8: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:64: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:93:16: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] unsigned char *_base; ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:93:16: note: insert '_Nullable' if the pointer may be null unsigned char *_base; ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:93:16: note: insert '_Nonnull' if the pointer should never be null unsigned char *_base; ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:138:32: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] int (* _Nullable _read) (void *, char *, int); ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:138:32: note: insert '_Nullable' if the pointer may be null int (* _Nullable _read) (void *, char *, int); ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:138:32: note: insert '_Nonnull' if the pointer should never be null int (* _Nullable _read) (void *, char *, int); ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:138:40: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] int (* _Nullable _read) (void *, char *, int); ^ 
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:138:40: note: insert '_Nullable' if the pointer may be null int (* _Nullable _read) (void *, char *, int); ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:138:40: note: insert '_Nonnull' if the pointer should never be null int (* _Nullable _read) (void *, char *, int); ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:139:35: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] fpos_t (* _Nullable _seek) (void *, fpos_t, int); ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:139:35: note: insert '_Nullable' if the pointer may be null fpos_t (* _Nullable _seek) (void *, fpos_t, int); ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:139:35: note: insert '_Nonnull' if the pointer should never be null fpos_t (* _Nullable _seek) (void *, fpos_t, int); ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:140:32: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] int (* _Nullable _write)(void *, const char *, int); ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:140:32: note: insert '_Nullable' if the pointer may be null int (* _Nullable _write)(void *, const char *, int); ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:140:32: note: insert '_Nonnull' if the pointer should never be null int (* _Nullable _write)(void *, const char *, int); ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:140:46: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] int (* _Nullable _write)(void *, const char *, int); ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:140:46: note: insert '_Nullable' if the pointer may be null int (* _Nullable _write)(void *, const char *, int); ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:140:46: note: insert '_Nonnull' if the pointer should never be null int (* _Nullable _write)(void *, const char *, int); ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:144:18: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] struct __sFILEX *_extra; /* additions to FILE to not break ABI */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:144:18: note: insert '_Nullable' if the pointer may be null struct __sFILEX *_extra; /* additions to FILE to not break ABI */ ^ _Nullable 
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/_stdio.h:144:18: note: insert '_Nonnull' if the pointer should never be null struct __sFILEX *_extra; /* additions to FILE to not break ABI */ ^ _Nonnull In file included from conftest.c:8: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:220:5: error: 'TARGET_OS_IPHONE' is not defined, evaluates to 0 [-Werror,-Wundef-prefix=TARGET_OS_] #if TARGET_OS_IPHONE ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:67:13: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] extern FILE *__stdinp; ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:67:13: note: insert '_Nullable' if the pointer may be null extern FILE *__stdinp; ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:67:13: note: insert '_Nonnull' if the pointer should never be null extern FILE *__stdinp; ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:386:41: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] int (* _Nullable)(void *, const char *, int), ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:386:41: note: insert '_Nullable' if the pointer may be null int (* _Nullable)(void *, const char *, int), ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:386:41: note: insert '_Nonnull' if the pointer should never be null int (* _Nullable)(void *, const char *, int), ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:386:55: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] int (* _Nullable)(void *, const char *, int), ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:386:55: note: insert '_Nullable' if the pointer may be null int (* _Nullable)(void *, const char *, int), ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:386:55: note: insert '_Nonnull' if the pointer should never be null int (* _Nullable)(void *, const char *, int), ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:387:44: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] fpos_t (* _Nullable)(void *, fpos_t, int), ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:387:44: note: insert '_Nullable' if the pointer may be null fpos_t (* _Nullable)(void *, fpos_t, int), ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:387:44: note: insert '_Nonnull' if the pointer should never be null fpos_t (* _Nullable)(void *, fpos_t, int), ^ _Nonnull 
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:388:41: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] int (* _Nullable)(void *)); ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:388:41: note: insert '_Nullable' if the pointer may be null int (* _Nullable)(void *)); ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:388:41: note: insert '_Nonnull' if the pointer should never be null int (* _Nullable)(void *)); ^ _Nonnull /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:384:6: warning: pointer is missing a nullability type specifier (_Nonnull, _Nullable, or _Null_unspecified) [-Wnullability-completeness] FILE *funopen(const void *, ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:384:6: note: insert '_Nullable' if the pointer may be null FILE *funopen(const void *, ^ _Nullable /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/stdio.h:384:6: note: insert '_Nonnull' if the pointer should never be null FILE *funopen(const void *, ^ _Nonnull 13 warnings and 1 error generated. configure:4101: $? = 1 configure:4108: ./conftest ./configure: line 4110: ./conftest: No such file or directory configure:4112: $? = 127 configure:4119: error: in `/var/folders/rk/_qysk9hs40qcq14h44l57wch0000gn/T/python-build.20211006114013.40649/Python-3.10.0': configure:4121: error: cannot run C compiled programs. If you meant to cross compile, use `--host'. See `config.log' for more details ## ---------------- ## ## Cache variables. ## ## ---------------- ## ac_cv_build=x86_64-apple-darwin20.6.0 ac_cv_env_CC_set=set ac_cv_env_CC_value=clang ac_cv_env_CFLAGS_set=set ac_cv_env_CFLAGS_value='-I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include ' ac_cv_env_CPPFLAGS_set=set ac_cv_env_CPPFLAGS_value='-I/usr/local/opt/readline/include -I/usr/local/opt/readline/include -I/Users/jonas.obrist/.pyenv/versions/3.10.0/include -I/usr/local/opt/openssl/include' ac_cv_env_CPP_set= ac_cv_env_CPP_value= ac_cv_env_LDFLAGS_set=set ac_cv_env_LDFLAGS_value='-L/usr/local/opt/readline/lib -L/usr/local/opt/readline/lib -L/Users/jonas.obrist/.pyenv/versions/3.10.0/lib -L/usr/local/opt/openssl/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib' ac_cv_env_LIBS_set= ac_cv_env_LIBS_value= ac_cv_env_MACHDEP_set= ac_cv_env_MACHDEP_value= ac_cv_env_PROFILE_TASK_set= ac_cv_env_PROFILE_TASK_value= ac_cv_env_build_alias_set= ac_cv_env_build_alias_value= ac_cv_env_host_alias_set= ac_cv_env_host_alias_value= ac_cv_env_target_alias_set= ac_cv_env_target_alias_value= ac_cv_host=x86_64-apple-darwin20.6.0 ac_cv_prog_PYTHON_FOR_REGEN=python3.10 ac_cv_prog_ac_ct_CC=clang ## ----------------- ## ## Output variables. 
## ## ----------------- ## ABIFLAGS='' ALT_SOABI='' AR='' ARCH_RUN_32BIT='' ARFLAGS='' BASECFLAGS='' BASECPPFLAGS='' BINLIBDEST='' BLDLIBRARY='' BLDSHARED='' BUILDEXEEXT='' CC='clang' CCSHARED='' CFLAGS='-I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include ' CFLAGSFORSHARED='' CFLAGS_ALIASING='' CFLAGS_NODIST='' CONFIGURE_MACOSX_DEPLOYMENT_TARGET='' CONFIG_ARGS=' '\''--prefix=/Users/jonas.obrist/.pyenv/versions/3.10.0'\'' '\''--libdir=/Users/jonas.obrist/.pyenv/versions/3.10.0/lib'\'' '\''--with-openssl=/usr/local/opt/[email protected]'\'' '\''--with-tcltk-libs=-L/usr/local/opt/tcl-tk/lib -ltcl8.6 -ltk8.6'\'' '\''--with-tcltk-includes=-I/usr/local/opt/tcl-tk/include'\'' '\''CC=clang'\'' '\''CFLAGS=-I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include '\'' '\''LDFLAGS=-L/usr/local/opt/readline/lib -L/usr/local/opt/readline/lib -L/Users/jonas.obrist/.pyenv/versions/3.10.0/lib -L/usr/local/opt/openssl/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib'\'' '\''CPPFLAGS=-I/usr/local/opt/readline/include -I/usr/local/opt/readline/include -I/Users/jonas.obrist/.pyenv/versions/3.10.0/include -I/usr/local/opt/openssl/include'\''' CPP='' CPPFLAGS='-I/usr/local/opt/readline/include -I/usr/local/opt/readline/include -I/Users/jonas.obrist/.pyenv/versions/3.10.0/include -I/usr/local/opt/openssl/include' CXX='' DEFS='' DEF_MAKE_ALL_RULE='' DEF_MAKE_RULE='' DFLAGS='' DLINCLDIR='' DLLLIBRARY='' DTRACE='' DTRACE_HEADERS='' DTRACE_OBJS='' DYNLOADFILE='' ECHO_C='\c' ECHO_N='' ECHO_T='' EGREP='' ENSUREPIP='' EXEEXT='' EXPORTSFROM='' EXPORTSYMS='' EXPORT_MACOSX_DEPLOYMENT_TARGET='#' EXT_SUFFIX='' FRAMEWORKALTINSTALLFIRST='' FRAMEWORKALTINSTALLLAST='' FRAMEWORKINSTALLAPPSPREFIX='' FRAMEWORKINSTALLFIRST='' FRAMEWORKINSTALLLAST='' FRAMEWORKPYTHONW='' FRAMEWORKUNIXTOOLSPREFIX='/Users/jonas.obrist/.pyenv/versions/3.10.0' GITBRANCH='' GITTAG='' GITVERSION='' GNULD='' GREP='' HAS_GIT='no-repository' HAVE_GETHOSTBYNAME='' HAVE_GETHOSTBYNAME_R='' HAVE_GETHOSTBYNAME_R_3_ARG='' HAVE_GETHOSTBYNAME_R_5_ARG='' HAVE_GETHOSTBYNAME_R_6_ARG='' INSTALL_DATA='' INSTALL_PROGRAM='' INSTALL_SCRIPT='' INSTSONAME='' LDCXXSHARED='' LDFLAGS='-L/usr/local/opt/readline/lib -L/usr/local/opt/readline/lib -L/Users/jonas.obrist/.pyenv/versions/3.10.0/lib -L/usr/local/opt/openssl/lib -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/lib' LDFLAGS_NODIST='' LDLIBRARY='' LDLIBRARYDIR='' LDSHARED='' LDVERSION='' LIBC='' LIBFFI_INCLUDEDIR='' LIBM='' LIBOBJS='' LIBPL='' LIBPYTHON='' LIBRARY='' LIBRARY_DEPS='' LIBS='' LIBTOOL_CRUFT='' LINKCC='' LINKFORSHARED='' LIPO_32BIT_FLAGS='' LIPO_INTEL64_FLAGS='' LLVM_AR='' LLVM_AR_FOUND='' LLVM_PROFDATA='' LLVM_PROF_ERR='' LLVM_PROF_FILE='' LLVM_PROF_FOUND='' LLVM_PROF_MERGER='' LN='' LTLIBOBJS='' MACHDEP='darwin' MACHDEP_OBJS='' MAINCC='' MKDIR_P='' MULTIARCH='' MULTIARCH_CPPFLAGS='' NO_AS_NEEDED='' OBJEXT='' OPENSSL_INCLUDES='' OPENSSL_LDFLAGS='' OPENSSL_LIBS='' OPENSSL_RPATH='' OPT='' OTHER_LIBTOOL_OPT='' PACKAGE_BUGREPORT='https://bugs.python.org/' PACKAGE_NAME='python' PACKAGE_STRING='python 3.10' PACKAGE_TARNAME='python' PACKAGE_URL='' PACKAGE_VERSION='3.10' PATH_SEPARATOR=':' PGO_PROF_GEN_FLAG='' PGO_PROF_USE_FLAG='' PKG_CONFIG='' PLATFORM_TRIPLET='' PLATLIBDIR='' PROFILE_TASK='' PY3LIBRARY='' PYTHONFRAMEWORK='' PYTHONFRAMEWORKDIR='no-framework' 
PYTHONFRAMEWORKIDENTIFIER='org.python.python' PYTHONFRAMEWORKINSTALLDIR='' PYTHONFRAMEWORKPREFIX='' PYTHON_FOR_BUILD='./$(BUILDPYTHON) -E' PYTHON_FOR_REGEN='python3.10' PY_ENABLE_SHARED='' READELF='' RUNSHARED='' SED='' SHELL='/bin/sh' SHLIBS='' SHLIB_SUFFIX='' SOABI='' SOVERSION='1.0' SRCDIRS='' STATIC_LIBPYTHON='' TCLTK_INCLUDES='' TCLTK_LIBS='' TEST_MODULES='' THREADHEADERS='' TRUE='' TZPATH='' UNIVERSALSDK='' UNIVERSAL_ARCH_FLAGS='' VERSION='3.10' WHEEL_PKG_DIR='' _PYTHON_HOST_PLATFORM='' ac_ct_AR='' ac_ct_CC='clang' ac_ct_CXX='' ac_ct_READELF='' bindir='${exec_prefix}/bin' build='x86_64-apple-darwin20.6.0' build_alias='' build_cpu='x86_64' build_os='darwin20.6.0' build_vendor='apple' datadir='${datarootdir}' datarootdir='${prefix}/share' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' dvidir='${docdir}' exec_prefix='NONE' host='x86_64-apple-darwin20.6.0' host_alias='' host_cpu='x86_64' host_os='darwin20.6.0' host_vendor='apple' htmldir='${docdir}' includedir='${prefix}/include' infodir='${datarootdir}/info' libdir='/Users/jonas.obrist/.pyenv/versions/3.10.0/lib' libexecdir='${exec_prefix}/libexec' localedir='${datarootdir}/locale' localstatedir='${prefix}/var' mandir='${datarootdir}/man' oldincludedir='/usr/include' pdfdir='${docdir}' prefix='/Users/jonas.obrist/.pyenv/versions/3.10.0' program_transform_name='s,x,x,' psdir='${docdir}' runstatedir='${localstatedir}/run' sbindir='${exec_prefix}/sbin' sharedstatedir='${prefix}/com' sysconfdir='${prefix}/etc' target_alias='' ## ----------- ## ## confdefs.h. ## ## ----------- ## /* confdefs.h */ #define _GNU_SOURCE 1 #define _NETBSD_SOURCE 1 #define __BSD_VISIBLE 1 #define _DARWIN_C_SOURCE 1 #define _PYTHONFRAMEWORK "" configure: exit 1 I am able to build it manually by downloading the tarball and ./configure && make How can I get it to install? | The problem was that I had a second version of clang installed (via homebrew), which interfered with the build. After running brew uninstall llvm pyenv/python-build picked the clang from xcode and now pyenv works again. | 11 | 2 |
69,473,782 | 2021-10-6 | https://stackoverflow.com/questions/69473782/how-to-use-one-else-statement-with-multiple-if-statements | So I'm trying to make a login prompt and I want it to print 'Success' only if there are no errors. This is the code I'm using: if not is_email(email) or not is_name(name) or password != confirmPassword or not is_secure(password): if not is_email(email): print('Not a valid email') if not is_name(name): print('Not a valid name') if password != confirmPassword: print('Passwords don\'t match') if not is_secure(password): print('Password is not secure') else: print('Success') Would there be any way to make this code shorter? I want to make it show all the errors at once so I'm not using elif. | How about this? flags = [is_email(email), is_name(name), password == confirmPassword, is_secure(password)] prints = ['Not a valid email', 'Not a valid name', 'Passwords don\'t match', 'Password is not secure'] for index in range(len(flags)): if flags[index] == False: print(prints[index]) | 6 | 3 |
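A slightly more idiomatic variant of the accepted answer's idea is to pair each check with its message, so 'Success' is printed only when every check passes. This is a minimal sketch: is_email, is_name and is_secure are the (undefined) helper functions from the question, assumed to return True for valid input.

```python
def report_signup_errors(email, name, password, confirm_password):
    # Each entry pairs an "is valid" flag with the error message to show on failure.
    checks = [
        (is_email(email), 'Not a valid email'),
        (is_name(name), 'Not a valid name'),
        (password == confirm_password, "Passwords don't match"),
        (is_secure(password), 'Password is not secure'),
    ]
    errors = [message for ok, message in checks if not ok]
    for message in errors:
        print(message)
    if not errors:
        print('Success')
```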
69,472,351 | 2021-10-6 | https://stackoverflow.com/questions/69472351/python-pydantic-how-to-mark-field-as-secret | How to mark a pydantic model field as secret so it will not be shown in the repr/str output and will be excluded from dict() and similar exports? from pydantic import BaseModel class User(BaseModel): name: str password_hash: str # I do not want this field to leak out. I write my code with security in mind, but I am afraid that in the future someone else will write non-secure code that leaks the 'password_hash' field out to the logs etc. Is there a way to mark the field as secret to be sure it does not leak out? | Pydantic provides convenience Secret* classes for this exact purpose: from pydantic import BaseModel, SecretStr class User(BaseModel): name: str password_hash: SecretStr | 6 | 8 |
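A short sketch of how the masking behaves (placeholder values; the output shown in the comments is what pydantic v1, current when this was asked, prints). The raw value is only reachable through an explicit get_secret_value() call, which makes accidental leaks into logs much less likely.

```python
from pydantic import BaseModel, SecretStr

class User(BaseModel):
    name: str
    password_hash: SecretStr

user = User(name="alice", password_hash="not-a-real-hash")
print(user)         # name='alice' password_hash=SecretStr('**********')
print(user.dict())  # the value stays a SecretStr object, still masked when printed
print(user.password_hash.get_secret_value())  # 'not-a-real-hash' -- explicit opt-in only
```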
69,471,749 | 2021-10-6 | https://stackoverflow.com/questions/69471749/importerror-cannot-import-name-batchnormalization-from-keras-layers-normaliz | i have an import problem when executing my code: from keras.models import Sequential from keras.layers.normalization import BatchNormalization 2021-10-06 22:27:14.064885: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2021-10-06 22:27:14.064974: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "C:\Data\breast-cancer-classification\train_model.py", line 10, in <module> from cancernet.cancernet import CancerNet File "C:\Data\breast-cancer-classification\cancernet\cancernet.py", line 2, in <module> from keras.layers.normalization import BatchNormalization ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization' (C:\Users\Catalin\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\layers\normalization\__init__.py) Keras version: 2.6.0 Tensorflow: 2.6.0 Python version: 3.9.7 The library it is installed also with pip install numpy opencv-python pillow tensorflow keras imutils scikit-learn matplotlib Do you have any ideas? library path | You're using outdated imports for tf.keras. Layers can now be imported directly from tensorflow.keras.layers: from tensorflow.keras.models import Sequential from tensorflow.keras.layers import ( BatchNormalization, SeparableConv2D, MaxPooling2D, Activation, Flatten, Dropout, Dense ) from tensorflow.keras import backend as K class CancerNet: @staticmethod def build(width, height, depth, classes): model = Sequential() shape = (height, width, depth) channelDim = -1 if K.image_data_format() == "channels_first": shape = (depth, height, width) channelDim = 1 model.add(SeparableConv2D(32, (3, 3), padding="same", input_shape=shape)) model.add(Activation("relu")) model.add(BatchNormalization(axis=channelDim)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(SeparableConv2D(64, (3, 3), padding="same")) model.add(Activation("relu")) model.add(BatchNormalization(axis=channelDim)) model.add(SeparableConv2D(64, (3, 3), padding="same")) model.add(Activation("relu")) model.add(BatchNormalization(axis=channelDim)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(SeparableConv2D(128, (3, 3), padding="same")) model.add(Activation("relu")) model.add(BatchNormalization(axis=channelDim)) model.add(SeparableConv2D(128, (3, 3), padding="same")) model.add(Activation("relu")) model.add(BatchNormalization(axis=channelDim)) model.add(SeparableConv2D(128, (3, 3), padding="same")) model.add(Activation("relu")) model.add(BatchNormalization(axis=channelDim)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(256)) model.add(Activation("relu")) model.add(BatchNormalization()) model.add(Dropout(0.5)) model.add(Dense(classes)) model.add(Activation("softmax")) return model model = CancerNet() | 31 | 16 |
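If the same script has to run against both the old standalone-Keras layout and the current tf.keras layout, a guarded import is one option. This is a sketch rather than something the answer requires; with TensorFlow/Keras 2.6 the first branch alone is enough.

```python
try:
    # TensorFlow 2.x / Keras 2.6+: layers are exposed directly under keras.layers.
    from tensorflow.keras.layers import BatchNormalization
except ImportError:
    # Legacy standalone Keras releases kept it in keras.layers.normalization.
    from keras.layers.normalization import BatchNormalization
```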
69,469,980 | 2021-10-6 | https://stackoverflow.com/questions/69469980/what-is-an-opaque-object-in-the-context-of-the-copy-module | Reading the documentation of the python standard library copy module I stumbled across the following sentence: The memo dictionary should be treated as an opaque object. I understand that an opaque object usually is an object whose internals are unknown and which is only accessed via member functions. What does being an opaque object mean for a simple data structure like a dictionary? And what do I have to pay attention to in the case of implementing __deepcopy__() for custom classes? | Read the sentence that precedes your quoted sentence. If the __deepcopy__() implementation needs to make a deep copy of a component, it should call the deepcopy() function with the component as first argument and the memo dictionary as second argument. The idea is that your __deepcopy__ method should do nothing with a received memo dictionary except pass it to another call to deepcopy. Specifially, you should not Add any keys Remove any keys Modify the value of an existing key. As far as you are concerned, the memo dictionary is an object whose internals are unknown (someday, it may not even be a dict!). Your only job is to pass it to any "recursive" calls to deepcopy. | 6 | 6 |
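A minimal sketch of a __deepcopy__ that respects this rule: the memo dict is only ever forwarded to copy.deepcopy(), never read or mutated. This simple version is aimed at acyclic data, and the class and field names are made up for illustration.

```python
import copy

class Node:
    def __init__(self, payload, children=None):
        self.payload = payload
        self.children = children if children is not None else []

    def __deepcopy__(self, memo):
        # memo is treated as opaque: it is passed straight through to deepcopy()
        # for each component and never inspected or modified here.
        return Node(
            copy.deepcopy(self.payload, memo),
            copy.deepcopy(self.children, memo),
        )

root = Node({"x": 1}, children=[Node({"y": 2})])
clone = copy.deepcopy(root)
assert clone is not root and clone.children[0] is not root.children[0]
assert clone.children[0].payload == {"y": 2}
```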
69,465,156 | 2021-10-6 | https://stackoverflow.com/questions/69465156/unable-to-build-a-docker-image-following-docker-tutorial | I was following this tutorial on a Macbook to build a sample Docker image but when I tried to run the following command: docker build -t getting-started . I got the following error: [+] Building 3.2s (15/24) => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 1.05kB 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 34B 0.0s => [internal] load metadata for docker.io/library/nginx:alpine 2.7s => [internal] load metadata for docker.io/library/python:alpine 2.7s => [internal] load metadata for docker.io/library/node:12-alpine 2.7s => [internal] load build context 0.0s => => transferring context: 7.76kB 0.0s => [base 1/4] FROM docker.io/library/python:alpine@sha256:94cfb962c71da780c5f3d34c6e9d1e01702b8be1edd2d450c24aead4774aeefc 0.0s => => resolve docker.io/library/python:alpine@sha256:94cfb962c71da780c5f3d34c6e9d1e01702b8be1edd2d450c24aead4774aeefc 0.0s => CACHED [stage-5 1/3] FROM docker.io/library/nginx:alpine@sha256:686aac2769fd6e7bab67663fd38750c135b72d993d0bb0a942ab02ef647fc9c3 0.0s => CACHED [app-base 1/8] FROM docker.io/library/node:12-alpine@sha256:1ea5900145028957ec0e7b7e590ac677797fa8962ccec4e73188092f7bc14da5 0.0s => CANCELED [app-base 2/8] RUN apk add --no-cache python g++ make 0.5s => CACHED [base 2/4] WORKDIR /app 0.0s => CACHED [base 3/4] COPY requirements.txt . 0.0s => CACHED [base 4/4] RUN pip install -r requirements.txt 0.0s => CACHED [build 1/2] COPY . . 0.0s => ERROR [build 2/2] RUN mkdocs build 0.4s ------ > [build 2/2] RUN mkdocs build: #23 0.378 Traceback (most recent call last): #23 0.378 File "/usr/local/bin/mkdocs", line 5, in <module> #23 0.378 from mkdocs.__main__ import cli #23 0.378 File "/usr/local/lib/python3.10/site-packages/mkdocs/__main__.py", line 14, in <module> #23 0.378 from mkdocs import config #23 0.378 File "/usr/local/lib/python3.10/site-packages/mkdocs/config/__init__.py", line 2, in <module> #23 0.378 from mkdocs.config.defaults import DEFAULT_SCHEMA #23 0.378 File "/usr/local/lib/python3.10/site-packages/mkdocs/config/defaults.py", line 4, in <module> #23 0.378 from mkdocs.config import config_options #23 0.378 File "/usr/local/lib/python3.10/site-packages/mkdocs/config/config_options.py", line 5, in <module> #23 0.378 from collections import Sequence, namedtuple #23 0.378 ImportError: cannot import name 'Sequence' from 'collections' (/usr/local/lib/python3.10/collections/__init__.py) ------ executor failed running [/bin/sh -c mkdocs build]: exit code: 1 The Dockerfile I used: # syntax=docker/dockerfile:1 FROM node:12-alpine RUN apk add --no-cache python g++ make WORKDIR /app COPY . . RUN yarn install --production CMD ["node", "src/index.js"] The sample app is from: https://github.com/docker/getting-started/tree/master/app I'm pretty new to Docker and would appreciate if someone could help point out how I can get this working. Solutions: It turns out there were two issues here: I should have run the docker build -t getting-started . command from the /app folder where my newly-created Dockerfile is located. In my test, I ran the command from the root folder where there was a different Dockerfile as @HansKilian pointed out. Once I tried it inside the /app folder, it worked fine. The problem with the Docker file in the root folder is caused by a Python version mismatch issue, as pointed out by @atline in the answer. 
Once I made the change as suggested, I could also build an image using that Dockerfile. Thank you both for your help. | See its Dockerfile: it uses FROM python:alpine AS base, which is a shared (floating) tag. In other words, at the time the tutorial was written, python:alpine may have resolved to python:3.9-alpine or another version, but it now resolves to python:3.10-alpine (see this). The problem happens in mkdocs itself, which uses the following code: from collections import Sequence, namedtuple If you run that import in a Python 3.9 environment, you will see the following warning, which tells you it will stop working in Python 3.10: $ docker run --rm -it python:3.9-alpine /bin/sh / # python Python 3.9.7 (default, Aug 31 2021, 19:01:35) [GCC 10.3.1 20210424] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from collections import Sequence <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working So, to make the guide work again, you need to change FROM python:alpine AS base to: FROM python:3.9-alpine AS base | 8 | 6 |
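For reference, the Python-level change behind that deprecation warning (independent of the image-pinning fix the answer recommends) is that the ABCs now have to come from collections.abc. A tiny illustration that runs on both 3.9 and 3.10:

```python
# The ABC moved: importing Sequence from collections stopped working in Python 3.10.
from collections.abc import Sequence
from collections import namedtuple  # namedtuple still lives in collections

Point = namedtuple("Point", ["x", "y"])
print(issubclass(list, Sequence))  # True on 3.3+ including 3.10
print(Point(1, 2))                 # Point(x=1, y=2)
```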
69,462,277 | 2021-10-6 | https://stackoverflow.com/questions/69462277/use-pre-commit-hook-for-black-with-multiple-language-versions-for-python | We are using pre-commit to format our Python code using black with the following configuration in .pre-commit-config.yaml: repos: - repo: https://github.com/ambv/black rev: 20.8b1 hooks: - id: black language_version: python3.7 As our packages are tested against and used in different Python versions (e.g. 3.7, 3.8, 3.9) I want to be able to use the pre-commit Hook on different Python versions. But when committing Code e.g. on Python 3.8, I get an error due to the language_version in my configuration (see above): C:\Users\FooBar\Documents\Programmierung\foo (dev -> origin) Ξ» git commit -m "Black file with correct black version" [INFO] Initializing environment for https://github.com/ambv/black. [INFO] Installing environment for https://github.com/ambv/black. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... An unexpected error has occurred: CalledProcessError: command: ('c:\\users\\FooBar\\anaconda\\python.exe', '-mvirtualenv', 'C:\\Users\\FooBar\\.cache\\pre-commit\\repobmlg3b_m\\py_env-python3.7', '-p', 'python3.7') return code: 1 expected return code: 0 stdout: RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.7' stderr: (none) Check the log at C:\Users\FooBar\.cache\pre-commit\pre-commit.log How can I enable the pre-commit Hook on different Python-Versions e.g. only on Python 3? Thanks in advance! | one way would be to set language_version: python3 (this used to be the default for black) -- the actual language_version you use there doesn't matter all that much as black doesn't use it to pick the formatted language target (that's a separate option) generally though, you shouldn't need to set language_version as either (1) the hook itself will set a proper one or (2) it will default to your currently running python note also: you're using the twice-deprecated url for black -- it is now psf/black __ disclaimer: I created pre-commit and I'm a black contributor | 8 | 15 |