<s>[INST] Accessing dict keys like an attribute in Python? I find it more convenient to access dict keys as obj.foo instead of obj['foo'], so I wrote this snippet: class AttributeDict(dict): def __getattr__(self, attr): return self[attr] def __setattr__(self, attr, value): self[attr] = value However, I assume there must be some reason that Python doesn't provide this functionality out of the box. What would be the caveats and pitfalls of accessing dict keys in this manner? [/INST] The best way to do this is: class AttrDict(dict): def __init__(self, *args, **kwargs): super(AttrDict, self).__init__(*args, **kwargs) self.__dict__ = self Some pros: It actually works! No dictionary class methods are shadowed (e.g. .keys() works just fine) Attributes and items are always in sync Trying to access a non-existent key as an attribute correctly raises AttributeError instead of KeyError Cons: Methods like .keys() will not work just fine if they get overwritten by incoming data Causes a memory leak in Python < 2.7.4 / Python3 < 3.2.3 Pylint goes bananas with E1123(unexpected-keyword-arg) and E1103(maybe-no-member) For the uninitiated it seems like pure magic. A short explanation of how this works All Python objects internally store their attributes in a dictionary that is named __dict__. There is no requirement that the internal dictionary __dict__ needs to be "just a plain dict", so we can assign any subclass of dict() to the internal dictionary. In our case we simply assign the AttrDict() instance we are instantiating (as we are in __init__). By calling super()'s __init__() method we made sure that it (already) behaves exactly like a dictionary, since that function calls all the dictionary instantiation code. One reason why Python doesn't provide this functionality out of the box As noted in the "cons" list, this combines the namespace of stored keys (which may come from arbitrary and/or untrusted data!) with the namespace of builtin dict method attributes. For example: d = AttrDict() d.update({'items':["jacket", "necktie", "trousers"]}) for k, v in d.items(): # TypeError: 'list' object is not callable print "Never reached!" </s>
<s>[INST] Django: "projects" vs "apps" I have a fairly complex "product" I'm getting ready to build using Django. I'm going to avoid using the terms "project" and "application" in this context, because I'm not clear on their specific meaning in Django. Projects can have many apps. Apps can be shared among many projects. Fine. I'm not reinventing the blog or forum - I don't see any portion of my product being reusable in any context. Intuitively, I would call this one "application." Do I then do all my work in a single "app" folder? If so... in terms of Django's project.app namespace, my inclination is to use myproduct.myproduct, but of course this isn't allowed (but the application I'm building is my project, and my project is an application!). I'm therefore led to believe that perhaps I'm supposed to approach Django by building one app per "significant" model, but I don't know where to draw the boundaries in my schema to separate it into apps - I have a lot of models with relatively complex relationships. I'm hoping there's a common solution to this... [/INST] Try to answer the question: "What does my application do?". If you cannot answer it in a single sentence, then maybe you can split it into several apps with cleaner logic. I read this thought somewhere soon after I started working with Django, and I find that I ask myself this question quite often and it helps me. Your apps don't have to be reusable, they can depend on each other, but they should do one thing. </s>
<s>[INST] Python exit commands - why so many and when should each be used? It seems that Python supports many different commands to stop script execution. The choices I've found are: quit(), exit(), sys.exit(), os._exit() Have I missed any? What's the difference between them? When would you use each? [/INST] Let me give some information on them: quit raises the SystemExit exception behind the scenes. Furthermore, if you print it, it will give a message: >>> print (quit) Use quit() or Ctrl-Z plus Return to exit >>> This functionality was included to help people who do not know Python. After all, one of the most likely things a newbie will try in order to exit Python is typing in quit. Nevertheless, quit should not be used in production code. This is because it only works if the site module is loaded. Instead, this function should only be used in the interpreter. exit is an alias for quit (or vice-versa). They exist together simply to make Python more user-friendly. Furthermore, it too gives a message when printed: >>> print (exit) Use exit() or Ctrl-Z plus Return to exit >>> However, like quit, exit is considered bad to use in production code and should be reserved for use in the interpreter. This is because it too relies on the site module. sys.exit raises the SystemExit exception in the background. This means that it is the same as quit and exit in that respect. Unlike those two however, sys.exit is considered good to use in production code. This is because the sys module will always be there. os._exit exits the program without calling cleanup handlers, flushing stdio buffers, etc. Thus, it is not a standard way to exit and should only be used in special cases. The most common of these is in the child process(es) created by os.fork. Note that, of the four methods given, only this one is unique in what it does. Summed up, all four methods exit the program. However, the first two are considered bad to use in production code and the last is a non-standard, dirty way that is only used in special scenarios. So, if you want to exit a program normally, go with the third method: sys.exit. Or, even better in my opinion, you can just do directly what sys.exit does behind the scenes and run: raise SystemExit This way, you do not need to import sys first. However, this choice is simply one of style and is purely up to you. </s>
<s>[INST] Get the name of the current script with Python I'm trying to get the name of the Python script that is currently running. For example, I have a script called foo.py and I would like to do something like this inside it: print Scriptname and get: foo.py. [/INST] Use __file__. If you want to omit the directory part (which might be present), you can import os and use os.path.basename(__file__). </s>
<s>[INST] Sort a list of tuples by 2nd item (integer value) I have a list of tuples that looks something like this: [('abc', 121),('abc', 231),('abc', 148), ('abc',221)] I want to sort this list in ascending order by the integer value inside the tuples. Is it possible? [/INST] Try using the key keyword with sorted(). sorted([('abc', 121),('abc', 231),('abc', 148), ('abc',221)], key=lambda x: x[1]) key should be a function that identifies how to retrieve the comparable element from your data structure. In your case, it is the second element of the tuple, so we access [1]. For optimization, see jamylak's response using itemgetter(1), which is essentially a faster version of lambda x: x[1]. </s>
<s>[INST] Print current call stack from a method in Python code In Python, how can I print the current call stack from within a method (for debugging purposes)? [/INST] Here's an example of getting the stack via the traceback module, and printing it: import traceback def f(): g() def g(): for line in traceback.format_stack(): print(line.strip()) f() # Prints: # File "so-stack.py", line 10, in <module> # f() # File "so-stack.py", line 4, in f # g() # File "so-stack.py", line 7, in g # for line in traceback.format_stack(): If you really only want to print the stack to stderr, you can use: traceback.print_stack() Or to print to stdout (useful if you want to keep redirected output together), use: traceback.print_stack(file=sys.stdout) (note that this requires import sys). But getting it via traceback.format_stack() lets you do whatever you like with it. </s>
<s>[INST] How can I tell PyCharm what type a parameter is expected to be? When it comes to constructors, and assignments, and method calls, the PyCharm IDE is pretty good at analyzing my source code and figuring out what type each variable should be. I like it when it's right, because it gives me good code-completion and parameter info, and it gives me warnings if I try to access an attribute that doesn't exist. But when it comes to parameters, it knows nothing. The code-completion dropdowns can't show anything, because they don't know what type the parameter will be. The code analysis can't look for warnings. class Person: def __init__(self, name, age): self.name = name self.age = age peasant = Person("Dennis", 37) # PyCharm knows that the "peasant" variable is of type Person peasant.dig_filth() # shows warning -- Person doesn't have a dig_filth method class King: def repress(self, peasant): # PyCharm has no idea what type the "peasant" parameter should be peasant.knock_over() # no warning even though knock_over doesn't exist King().repress(peasant) # Even if I call the method once with a Person instance, PyCharm doesn't # consider that to mean that the "peasant" parameter should always be a Person This makes a certain amount of sense. Other call sites could pass anything for that parameter. But if my method expects a parameter to be of type, say, pygame.Surface, I'd like to be able to indicate that to PyCharm somehow, so it can show me all of Surface's attributes in its code-completion dropdown, and highlight warnings if I call the wrong method, and so on. Is there a way I can give PyCharm a hint, and say "psst, this parameter is supposed to be of type X"? (Or perhaps, in the spirit of dynamic languages, "this parameter is supposed to quack like an X"? I'd be fine with that.) EDIT: CrazyCoder's answer, below, does the trick. For any newcomers like me who want the quick summary, here it is: class King: def repress(self, peasant): """ Exploit the workers by hanging on to outdated imperialist dogma which perpetuates the economic and social differences in our society. @type peasant: Person @param peasant: Person to repress. """ peasant.knock_over() # Shows a warning. And there was much rejoicing. The relevant part is the @type peasant: Person line of the docstring. If you also go to File > Settings > Python Integrated Tools and set "Docstring format" to "Epytext", then PyCharm's View > Quick Documentation Lookup will pretty-print the parameter information instead of just printing all the @-lines as-is. [/INST] Yes, you can use special documentation format for methods and their parameters so that PyCharm can know the type. Recent PyCharm version supports most common doc formats. For example, PyCharm extracts types from @param style comments. See also reStructuredText and docstring conventions (PEP 257). Another option is Python 3 annotations. Please refer to the PyCharm documentation section for more details and samples. </s>
<s>[INST] How to download image using requests I'm trying to download and save an image from the web using Python's requests module. Here is the (working) code I used: img = urllib2.urlopen(settings.STATICMAP_URL.format(**data)) with open(path, 'w') as f: f.write(img.read()) Here is the new (non-working) code using requests: r = requests.get(settings.STATICMAP_URL.format(**data)) if r.status_code == 200: img = r.raw.read() with open(path, 'w') as f: f.write(img) Can you help me on what attribute from the response to use from requests? [/INST] You can either use the response.raw file object, or iterate over the response. The response.raw file-like object will not, by default, decode compressed responses (with GZIP or deflate). You can force it to decompress for you anyway by setting the decode_content attribute to True (requests sets it to False to control decoding itself). You can then use shutil.copyfileobj() to have Python stream the data to a file object: import requests import shutil r = requests.get(settings.STATICMAP_URL.format(**data), stream=True) if r.status_code == 200: with open(path, 'wb') as f: r.raw.decode_content = True shutil.copyfileobj(r.raw, f) To iterate over the response use a loop; iterating like this ensures that data is decompressed by this stage: r = requests.get(settings.STATICMAP_URL.format(**data), stream=True) if r.status_code == 200: with open(path, 'wb') as f: for chunk in r: f.write(chunk) This'll read the data in 128 byte chunks; if you feel another chunk size works better, use the Response.iter_content() method with a custom chunk size: r = requests.get(settings.STATICMAP_URL.format(**data), stream=True) if r.status_code == 200: with open(path, 'wb') as f: for chunk in r.iter_content(1024): f.write(chunk) Note that you need to open the destination file in binary mode to ensure Python doesn't try to translate newlines for you. We also set stream=True so that requests doesn't download the whole image into memory first. </s>
<s>[INST] How to initialize a two-dimensional array in Python? I'm beginning python and I'm trying to use a two-dimensional list, that I initially fill up with the same variable in every place. I came up with this: def initialize_twodlist(foo): twod_list = [] new = [] for i in range (0, 10): for j in range (0, 10): new.append(foo) twod_list.append(new) new = [] It gives the desired result, but feels like a workaround. Is there an easier/shorter/more elegant way to do this? [/INST] A pattern that often came up in Python was bar = [] for item in some_iterable: bar.append(SOME EXPRESSION) which helped motivate the introduction of list comprehensions, which convert that snippet to bar = [SOME EXPRESSION for item in some_iterable] which is shorter and sometimes clearer. Usually you get in the habit of recognizing these and often replacing loops with comprehensions. Your code follows this pattern twice twod_list = [] \ for i in range (0, 10): \ new = [] \ can be replaced } this too for j in range (0, 10): } with a list / new.append(foo) / comprehension / twod_list.append(new) / </s>
<s>[INST] Else clause on Python while statement I've noticed the following code is legal in Python. My question is why? Is there a specific reason? n = 5 while n != 0: print n n -= 1 else: print "what the..." [/INST] The else clause is only executed when your while condition becomes false. If you break out of the loop, or if an exception is raised, it won't be executed. One way to think about it is as an if/else construct with respect to the condition: if condition: handle_true() else: handle_false() is analogous to the looping construct: while condition: handle_true() else: # condition is false now, handle and go on with the rest of the program handle_false() An example might be along the lines of: while value < threshold: if not process_acceptable_value(value): # something went wrong, exit the loop; don't pass go, don't collect 200 break value = update(value) else: # value >= threshold; pass go, collect 200 handle_threshold_reached() </s>
<s>[INST] How do I get the parent directory in Python? Could someone tell me how to get the parent directory of a path in Python in a cross platform way. E.g. C:\Program Files ---> C:\ and C:\ ---> C:\ If the directory doesn't have a parent directory, it returns the directory itself. The question might seem simple but I couldn't dig it up through Google. [/INST] Using os.path.dirname: >>> os.path.dirname(r'C:\Program Files') 'C:\\' >>> os.path.dirname('C:\\') 'C:\\' >>> </s>
<s>[INST] What's the bad magic number error? What's the "Bad magic number" ImportError in python, and how do I fix it? The only thing I can find online suggests this is caused by compiling a .py -> .pyc file and then trying to use it with the wrong version of python. In my case, however, the file seems to import fine some times but not others, and I'm not sure why. The information python's providing in the traceback isn't particularly helpful (which is why I was asking here...), but here it is in case it helps: Traceback (most recent call last): File "run.py", line 7, in <module> from Normalization import Normalizer [/INST] The magic number comes from UNIX-type systems where the first few bytes of a file held a marker indicating the file type. Python puts a similar marker into its pyc files when it creates them. Then the python interpreter makes sure this number is correct when loading it. Anything that damages this magic number will cause your problem. This includes editing the pyc file or trying to run a pyc from a different version of python (usually later) than your interpreter. If they are your pyc files, just delete them and let the interpreter re-compile the py files. On UNIX type systems, that could be something as simple as: rm *.pyc or: find . -name '*.pyc' -delete If they are not yours, you'll have to either get the py files for re-compilation, or an interpreter that can run the pyc files with that particular magic value. One thing that might be causing the intermittent nature. The pyc that's causing the problem may only be imported under certain conditions. It's highly unlikely it would import sometimes. You should check the actual full stack trace when the import fails? As an aside, the first word of all my 2.5.1(r251:54863) pyc files is 62131, 2.6.1(r261:67517) is 62161. The list of all magic numbers can be found in Python/import.c, reproduced here for completeness (current as at the time the answer was posted, it may have changed since then): 1.5: 20121 1.5.1: 20121 1.5.2: 20121 1.6: 50428 2.0: 50823 2.0.1: 50823 2.1: 60202 2.1.1: 60202 2.1.2: 60202 2.2: 60717 2.3a0: 62011 2.3a0: 62021 2.3a0: 62011 2.4a0: 62041 2.4a3: 62051 2.4b1: 62061 2.5a0: 62071 2.5a0: 62081 2.5a0: 62091 2.5a0: 62092 2.5b3: 62101 2.5b3: 62111 2.5c1: 62121 2.5c2: 62131 2.6a0: 62151 2.6a1: 62161 2.7a0: 62171 </s>
<s>[INST] How to check whether two lists are circularly identical in Python For instance, I have lists: a[0] = [1, 1, 1, 0, 0] a[1] = [1, 1, 0, 0, 1] a[2] = [0, 1, 1, 1, 0] # and so on They seem to be different, but if it is supposed that the start and the end are connected, then they are circularly identical. The problem is, each list which I have has a length of 55 and contains only three ones and 52 zeros in it. Without the circular condition, there are 26,235 (55 choose 3) lists. However, if the 'circular' condition applies, there are a huge number of circularly identical lists. Currently I check circular identity as follows: def is_dup(a, b): for i in range(len(a)): if a == list(numpy.roll(b, i)): # shift b circularly by i return True return False This function requires 55 cyclic shift operations in the worst case. And there are 26,235 lists to be compared with each other. In short, I need 55 * 26,235 * (26,235 - 1) / 2 = 18,926,847,225 computations. It's nearly 20 giga! Is there any good way to do it with fewer computations? Or any data type that supports circularity? [/INST] First off, this can be done in O(n) in terms of the length of the list. Notice that if you duplicate your list ([1, 2, 3] becomes [1, 2, 3, 1, 2, 3]), then the new list will definitely hold all possible cyclic rotations. So all you need is to check whether the list you are searching for is inside the doubled starting list. In Python you can achieve this in the following way (assuming that the lengths are the same). list1 = [1, 1, 1, 0, 0] list2 = [1, 1, 0, 0, 1] print ' '.join(map(str, list2)) in ' '.join(map(str, list1 * 2)) Some explanation about my one-liner: list * 2 combines a list with itself, map(str, [1, 2]) converts all numbers to strings, and ' '.join() converts the array ['1', '2', '111'] into the string '1 2 111'. As pointed out by some people in the comments, the one-liner can potentially give some false positives, so to cover all the possible edge cases: def isCircular(arr1, arr2): if len(arr1) != len(arr2): return False str1 = ' '.join(map(str, arr1)) str2 = ' '.join(map(str, arr2)) if len(str1) != len(str2): return False return str1 in str2 + ' ' + str2 P.S.1 when speaking about time complexity, it is worth noticing that O(n) will be achieved only if the substring can be found in O(n) time. That is not always so and depends on the implementation in your language (although it can potentially be done in linear time, with KMP for example). P.S.2 for people who are afraid of string operations and for that reason think the answer is not good: what matters is complexity and speed. This algorithm potentially runs in O(n) time and O(n) space, which makes it much better than anything in the O(n^2) domain. To see this for yourself, you can run a small benchmark (it creates a random list, pops the first element, and appends it to the end, thus creating a cyclic list; you are free to do your own manipulations): from random import random bigList = [int(1000 * random()) for i in xrange(10**6)] bigList2 = bigList[:] bigList2.append(bigList2.pop(0)) # then test how much time will it take to come up with an answer from datetime import datetime startTime = datetime.now() print isCircular(bigList, bigList2) print datetime.now() - startTime # please feel free to use timeit, but it will give similar results 0.3 seconds on my machine. Not really long. Now try to compare this with O(n^2) solutions. While it is running, you can travel from the US to Australia (most probably by cruise ship). </s>
<s>[INST] Pelican 3.3 pelican-quickstart error "ValueError: unknown locale: UTF-8" When I was trying to use pelican3.3, I typed the commend "pelican-quickstart", some errors showed up. These are the errors: (PelicanEnv)59-127-113-90:myblog Richo$ pelican-quickstart Traceback (most recent call last): File "/Users/Richo/Dropbox/Github/PelicanEnv/bin/pelican-quickstart", line 9, in <module> load_entry_point('pelican==3.3', 'console_scripts', 'pelican-quickstart')() File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pkg_resources.py", line 378, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pkg_resources.py", line 2566, in load_entry_point return ep.load() File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load entry = __import__(self.module_name, globals(),globals(), ['__name__']) File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pelican/__init__.py", line 16, in <module> from pelican.generators import (ArticlesGenerator, PagesGenerator, File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pelican/generators.py", line 20, in <module> from pelican.readers import Readers File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/pelican/readers.py", line 11, in <module> import docutils.core File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/core.py", line 20, in <module> from docutils import frontend, io, utils, readers, writers File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module> import docutils.utils File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module> import docutils.io File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/io.py", line 18, in <module> from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module> locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1] File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/locale.py", line 513, in getdefaultlocale return _parse_localename(localename) File "/Users/Richo/Dropbox/Github/PelicanEnv/lib/python2.7/locale.py", line 445, in _parse_localename raise ValueError, 'unknown locale: %s' % localename ValueError: unknown locale: UTF-8 My OS is OS X Mavericks. [/INST] You could try a solution posted here or here. Basically, add some lines to your ~/.bash_profile: export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8 There is an outstanding bug report related to this issue. It appears that Python makes some assumptions about the format of locale names that aren't universally valid. Explicitly setting these environment vars is basically just a workaround for that bug. [Edit:] As @asmeurer correctly points out, the above fix assumes English and the U.S. You should really pick your preferred locale from the list given by locale -a (generally one that ends in UTF-8). </s>
<s>[INST] How to get week number in Python? How to find out what week number is this year on June 16th (wk24) with Python? [/INST] datetime.date has a isocalendar() method, which returns a tuple containing the calendar week: >>> datetime.date(2010, 6, 16).isocalendar()[1] 24 datetime.date.isocalendar() is an instance-method returning a tuple containing year, weeknumber and weekday in respective order for the given date instance. </s>
<s>[INST] Programmatically saving image to Django ImageField Ok, I've tried about near everything and I cannot get this to work. I have a Django model with an ImageField on it I have code that downloads an image via HTTP (tested and works) The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField) All I need to do is associate the already existing image file path with the ImageField I've written this code about 6 different ways. The problem I'm running into is all of the code that I'm writing results in the following behavior: (1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over leaving it basically an empty re-named file. What's left in the 'upload_to' path is 2 files, one that is the actual image, and one that is the name of the image,but is empty, and of course the ImageField path is set to the empty file that Django try to create. In case that was unclear, I'll try to illustrate: ## Image generation code runs.... /Upload generated_image.jpg 4kb ## Attempt to set the ImageField path... /Upload generated_image.jpg 4kb generated_image_.jpg 0kb ImageField.Path = /Upload/generated_image_.jpg How can I do this without having Django try to re-store the file? What I'd really like is something to this effect... model.ImageField.path = generated_image_path ...but of course that doesn't work. And yes I've gone through the other questions here like this one as well as the django doc on File UPDATE After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. I am stumped. Here is the code which runs successfully on XP... f = open(thumb_path, 'r') model.thumbnail = File(f) model.save() [/INST] I have some code that fetches an image off the web and stores it in a model. The important bits are: from django.core.files import File # you need this somewhere import urllib # The following actually resides in a method of my model result = urllib.urlretrieve(image_url) # image_url is a URL to an image # self.photo is the ImageField self.photo.save( os.path.basename(self.url), File(open(result[0])) ) self.save() That's a bit confusing because it's pulled out of my model and a bit out of context, but the important parts are: The image pulled from the web is not stored in the upload_to folder, it is instead stored as a tempfile by urllib.urlretrieve() and later discarded. The ImageField.save() method takes a filename (the os.path.basename bit) and a django.core.files.File object. Let me know if you have questions or need clarification. Edit: for the sake of clarity, here is the model (minus any required import statements): class CachedImage(models.Model): url = models.CharField(max_length=255, unique=True) photo = models.ImageField(upload_to=photo_path, blank=True) def cache(self): """Store image locally if we have a URL""" if self.url and not self.photo: result = urllib.urlretrieve(self.url) self.photo.save( os.path.basename(self.url), File(open(result[0])) ) self.save() </s>
<s>[INST] Can a line of Python code know its indentation nesting level? From something like this: print(get_indentation_level()) print(get_indentation_level()) print(get_indentation_level()) I would like to get something like this: 1 2 3 Can the code read itself in this way? All I want is the output from the more nested parts of the code to be more nested. In the same way that this makes code easier to read, it would make the output easier to read. Of course I could implement this manually, using e.g. .format(), but what I had in mind was a custom print function which would print(i*' ' + string) where i is the indentation level. This would be a quick way to make readable output on my terminal. Is there a better way to do this which avoids painstaking manual formatting? [/INST] If you want indentation in terms of nesting level rather than spaces and tabs, things get tricky. For example, in the following code: if True: print( get_nesting_level()) the call to get_nesting_level is actually nested one level deep, despite the fact that there is no leading whitespace on the line of the get_nesting_level call. Meanwhile, in the following code: print(1, 2, get_nesting_level()) the call to get_nesting_level is nested zero levels deep, despite the presence of leading whitespace on its line. In the following code: if True: if True: print(get_nesting_level()) if True: print(get_nesting_level()) the two calls to get_nesting_level are at different nesting levels, despite the fact that the leading whitespace is identical. In the following code: if True: print(get_nesting_level()) is that nested zero levels, or one? In terms of INDENT and DEDENT tokens in the formal grammar, it's zero levels deep, but you might not feel the same way. If you want to do this, you're going to have to tokenize the whole file up to the point of the call and count INDENT and DEDENT tokens. The tokenize module would be very useful for such a function: import inspect import tokenize def get_nesting_level(): caller_frame = inspect.currentframe().f_back filename, caller_lineno, _, _, _ = inspect.getframeinfo(caller_frame) with open(filename) as f: indentation_level = 0 for token_record in tokenize.generate_tokens(f.readline): token_type, _, (token_lineno, _), _, _ = token_record if token_lineno > caller_lineno: break elif token_type == tokenize.INDENT: indentation_level += 1 elif token_type == tokenize.DEDENT: indentation_level -= 1 return indentation_level </s>
<s>[INST] How do you log server errors on django sites So, during development I can just set settings.DEBUG to True and if an error occurs I can see it nicely formatted, with a good stack trace and request information. But on a production site I'd rather use DEBUG=False and show visitors some standard error 500 page with information that I'm working on fixing this bug at this moment ;) At the same time I'd like to have some way of logging all that information (stack trace and request info) to a file on my server - so I can just output it to my console and watch errors scroll, email the log to me every hour or something like this. What logging solutions would you recommend for a Django site that would meet those simple requirements? I have the application running as an fcgi server and I'm using the Apache web server as a frontend (although I'm thinking of going to lighttpd). [/INST] Well, when DEBUG = False, Django will automatically mail a full traceback of any error to each person listed in the ADMINS setting, which gets you notifications pretty much for free. If you'd like more fine-grained control, you can write and add to your settings a middleware class which defines a method named process_exception(), which will have access to the exception that was raised: http://docs.djangoproject.com/en/dev/topics/http/middleware/#process-exception Your process_exception() method can then perform whatever type of logging you'd like: writing to console, writing to a file, etc., etc. Edit: though it's a bit less useful, you can also listen for the got_request_exception signal, which will be sent whenever an exception is encountered during request processing: http://docs.djangoproject.com/en/dev/ref/signals/#got-request-exception This does not give you access to the exception object, however, so the middleware method is much easier to work with. </s>
<s>[INST] Python: Check if a string represents an int, Without using Try/Except? Is there any way to tell whether a string represents an integer (e.g., '3', '-17' but not '3.14' or 'asfasfas') Without using a try/except mechanism? is_int('3.14') = False is_int('-7') = True [/INST] With positive integers you could use .isdigit: >>> '16'.isdigit() True It doesn't work with negative integers though. I suppose you could try the following: >>> s = '-17' >>> s.startswith('-') and s[1:].isdigit() True It won't work with the '16.0' format, which is similar to int casting in this sense. Edit: def check_int(s): if s[0] in ('-', '+'): return s[1:].isdigit() return s.isdigit() </s>
<s>[INST] Using javadoc for Python documentation I am currently beginning with Python and I have a strong PHP background and in PHP I have took the habit of using javadoc as a documentation template. I was wondering if javadoc has its place as docstring documentation in Python. Is something like this too elaborate to fit in the Python mindset or should I try to be as concise as possible? """ replaces template place holder with values @param string timestamp formatted date to display @param string priority priority number @param string priority_name priority name @param string message message to display @return string formatted string """ And if I am a bit too exhaustive should I go with something like this instead (where most of the documentation doesn't get printed through the __doc__ method)? # replaces template place holder with values # # @param string timestamp formatted date to display # @param string priority priority number # @param string priority_name priority name # @param string message message to display # # @return string formatted string def format(self, timestamp = '', priority = '', priority_name = '', message = ''): """ replaces template place holder with values """ values = {'%timestamp%' : timestamp, '%priorityName%' : priority_name, '%priority%' : priority, '%message%' : message} return self.__pattern.format(**values) [/INST] Have a look at the reStructuredText (also known as "reST") format, which is a plaintext/docstring markup format, and probably the most popular in the Python world. And you should certainly look at Sphinx, a tool to generate documentation from reStructuredText (used for eg. the Python documentation itself). Sphinx includes the possibility to extract documentation from the docstrings in your code (see sphinx.ext.autodoc), and recognizes reST field lists following certain conventions. This has probably become (or is becoming) the most popular way to do it. Your example could look as follows: """Replaces template placeholder with values. :param timestamp: formatted date to display :param priority: priority number :param priority_name: priority name :param message: message to display :returns: formatted string """ Or extended with type information: """Replaces template placeholder with values. :param timestamp: formatted date to display :type timestamp: str or unicode :param priority: priority number :type priority: str or unicode :param priority_name: priority name :type priority_name: str or unicode :param message: message to display :type message: str or unicode :returns: formatted string :rtype: str or unicode """ </s>
<s>[INST] 'import module' or 'from module import' I've tried to find a comprehensive guide on whether it is best to use import module or from module import. I've just started with Python, with the intention for developing web applications with Django and I'm trying to start off with best practices in mind. Basically, I was hoping if anyone could share their experiences, what preferences other developers have and whats the best way to avoid any gotchas down the road. [/INST] The difference between import module and from module import foo is mainly subjective. Pick the one you like best and be consistent in your use of it. Here are some points to help you decide. import module Pros: Less maintenance of your import statements. Don't need to add any additional imports to start using another item from the module Cons: Typing module.foo in your code can be tedious and redundant (tedium can be minimized by using import module as mo then typing mo.foo) from module import foo Pros: Less typing to use foo More control over which items of a module can be accessed Cons: To use a new item from the module you have to update your import statement You lose context about foo. For example, it's less clear what ceil() does compared to math.ceil() Either method is acceptable, but don't use from module import *. For any reasonable large set of code, if you import * you will likely be cementing it into the module, unable to be removed. This is because it is difficult to determine what items used in the code are coming from 'module', making it easy to get to the point where you think you don't use the import any more but it's extremely difficult to be sure. </s>
<s>[INST] Is it possible to use pip to install a package from a private github repository? As the title suggests I am trying to install a python package from a private github repo. For a public repository I can issue the following command which works fine: pip install git+git://github.com/django/django.git However if I try this for a private repository: pip install git+git://github.com/echweb/echweb-utils.git I get the following output: Downloading/unpacking git+git://github.com/echweb/echweb-utils.git Cloning Git repository git://github.com/echweb/echweb-utils.git to /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build Complete output from command /usr/local/bin/git clone git://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build: fatal: The remote end hung up unexpectedly Cloning into /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build... ---------------------------------------- Command /usr/local/bin/git clone git://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build failed with error code 128 I guess this is because I am trying to access a private repository without providing any authentication. I therefore tried to use git+ssh hoping that pip would use my ssh public key to authenticate: pip install git+ssh://github.com/echweb/echweb-utils.git This gives the following output: Downloading/unpacking git+ssh://github.com/echweb/echweb-utils.git Cloning Git repository ssh://github.com/echweb/echweb-utils.git to /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build Complete output from command /usr/local/bin/git clone ssh://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build: Cloning into /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build... Permission denied (publickey). fatal: The remote end hung up unexpectedly ---------------------------------------- Command /usr/local/bin/git clone ssh://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build failed with error code 128 Does anyone know if it what I am trying to achieve is even possible? If so can you please tell me how? [/INST] You can use git+ssh URI scheme, but you MUST set username: pip install git+ssh://[email protected]/echweb/echweb-utils.git See git@ part into URI? PS: Also read about deploy keys. PPS: In my installation the "git+ssh" URI scheme works only with "editable" requirements: pip install -e URI#egg=EggName Remember: Change the : character that git remote -v prints to a / character before using the remote's address in the pip command: $ git remote -v origin [email protected]:echweb/echweb-utils.git (fetch) ^ change this to a '/' character If you forget, you will get this error: ssh: Could not resolve hostname github.com:echweb: nodename nor servname provided, or not known </s>
<s>[INST] How to create a zip archive of a directory How can I create a zip archive of a directory structure in Python? [/INST] As others have pointed out, you should use zipfile. The documentation tells you what functions are available, but doesn't really explain how you can use them to zip an entire directory. I think it's easiest to explain with some example code: #!/usr/bin/env python import os import zipfile def zipdir(path, ziph): # ziph is zipfile handle for root, dirs, files in os.walk(path): for file in files: ziph.write(os.path.join(root, file)) if __name__ == '__main__': zipf = zipfile.ZipFile('Python.zip', 'w', zipfile.ZIP_DEFLATED) zipdir('tmp/', zipf) zipf.close() Adapted from: http://www.devshed.com/c/a/Python/Python-UnZipped/ </s>
<s>[INST] Hiding axis text in matplotlib plots I'm trying to plot a figure without tickmarks or numbers on either of the axes (I use axes in the traditional sense, not the matplotlib nomenclature!). An issue I have come across is where matplotlib adjusts the x(y)ticklabels by subtracting a value N, then adds N at the end of the axis. This may be vague, but the following simplified example highlights the issue, with '6.18' being the offending value of N: import matplotlib.pyplot as plt import random prefix = 6.18 rx = [prefix+(0.001*random.random()) for i in arange(100)] ry = [prefix+(0.001*random.random()) for i in arange(100)] plt.plot(rx,ry,'ko') frame1 = plt.gca() for xlabel_i in frame1.axes.get_xticklabels(): xlabel_i.set_visible(False) xlabel_i.set_fontsize(0.0) for xlabel_i in frame1.axes.get_yticklabels(): xlabel_i.set_fontsize(0.0) xlabel_i.set_visible(False) for tick in frame1.axes.get_xticklines(): tick.set_visible(False) for tick in frame1.axes.get_yticklines(): tick.set_visible(False) plt.show() The three things I would like to know are: How to turn off this behaviour in the first place (although in most cases it is useful, it is not always!) I have looked through matplotlib.axis.XAxis and cannot find anything appropriate How can I make N disappear (i.e. X.set_visible(False)) Is there a better way to do the above anyway? My final plot would be 4x4 subplots in a figure, if that is relevant. [/INST] Instead of hiding each element, you can hide the whole axis: frame1.axes.get_xaxis().set_visible(False) frame1.axes.get_yaxis().set_visible(False) Or, you can set the ticks to an empty list: frame1.axes.get_xaxis().set_ticks([]) frame1.axes.get_yaxis().set_ticks([]) In this second option, you can still use plt.xlabel() and plt.ylabel() to add labels to the axes. </s>
<s>[INST] How do I split a multi-line string into multiple lines? I have a multi-line string literal on which I want to do an operation for each line, like so: inputString = """Line 1 Line 2 Line 3""" I want to do something like the following: for line in inputString: doStuff() [/INST] inputString.splitlines() This will give you a list with one item per line; the splitlines() method is designed to split each line into a separate list element. </s>
<s>[INST] How do I get a Cron like scheduler in Python? I'm looking for a library in Python which will provide at and cron like functionality. I'd quite like have a pure Python solution, rather than relying on tools installed on the box; this way I run on machines with no cron. For those unfamiliar with cron: you can schedule tasks based upon an expression like: 0 2 * * 7 /usr/bin/run-backup # run the backups at 0200 on Every Sunday 0 9-17/2 * * 1-5 /usr/bin/purge-temps # run the purge temps command, every 2 hours between 9am and 5pm on Mondays to Fridays. The cron time expression syntax is less important, but I would like to have something with this sort of flexibility. If there isn't something that does this for me out-the-box, any suggestions for the building blocks to make something like this would be gratefully received. Edit I'm not interested in launching processes, just "jobs" also written in Python - python functions. By necessity I think this would be a different thread, but not in a different process. To this end, I'm looking for the expressivity of the cron time expression, but in Python. Cron has been around for years, but I'm trying to be as portable as possible. I cannot rely on its presence. [/INST] If you're looking for something lightweight checkout schedule: import schedule import time def job(): print("I'm working...") schedule.every(10).minutes.do(job) schedule.every().hour.do(job) schedule.every().day.at("10:30").do(job) while 1: schedule.run_pending() time.sleep(1) Disclosure: I'm the author of that library. </s>
<s>[INST] Django set default form values I have a Model as follows class TankJournal(models.Model): user = models.ForeignKey(User) tank = models.ForeignKey(TankProfile) ts = models.IntegerField(max_length=15) title = models.CharField(max_length=50) body = models.TextField() I have a modelform as follows for that model class JournalForm(ModelForm): tank = forms.IntegerField(widget=forms.HiddenInput()) class Meta: model = TankJournal exclude = ('user','ts') I want to know how to set the default value for that tank hidden field.. Here is my function to show/save the form so far def addJournal(request, id=0): if not request.user.is_authenticated(): return HttpResponseRedirect('/') # # checking if they own the tank # from django.contrib.auth.models import User user = User.objects.get(pk=request.session['id']) if request.method == 'POST': form = JournalForm(request.POST) if form.is_valid(): obj = form.save(commit=False) # # setting the user and ts # from time import time obj.ts = int(time()) obj.user = user obj.tank = TankProfile.objects.get(pk=form.cleaned_data['tank_id']) # # saving the test # obj.save() else: form = JournalForm() try: tank = TankProfile.objects.get(user=user, id=id) except TankProfile.DoesNotExist: return HttpResponseRedirect('/error/') form.tank = id return render_to_response('ajax/tank_addJournal.html', {'form': form}, context_instance=RequestContext(request)) thanks! [/INST] You can use initial which is explained here You have two options either populate the value when calling form constructor: form = JournalForm(initial={'tank': 123}) or set the value in the form definition: tank = forms.IntegerField(widget=forms.HiddenInput(), initial=123) </s>
<s>[INST] Flask - configure dev server to be visible across the network I'm not sure if this is Flask specific, but when I run an app in dev mode (http://localhost:5000), I cannot access it from other machines on the network (with http://[dev-host-ip]:5000). With Rails in dev mode, for example, it works fine. I couldn't find any docs regarding the Flask dev server configuration. Any idea what should be configured to enable this? [/INST] Add a parameter to your app.run(). By default it runs on localhost, change it to app.run(host= '0.0.0.0') to run on your machines IP address. Documented on the Flask site under "Externally Visible Server" on the Quickstart page: Externally Visible Server If you run the server you will notice that the server is only available from your own computer, not from any other in the network. This is the default because in debugging mode a user of the application can execute arbitrary Python code on your computer. If you have debug disabled or trust the users on your network, you can make the server publicly available. Just change the call of the run() method to look like this: app.run(host='0.0.0.0') This tells your operating system to listen on a public IP. </s>
<s>[INST] How can I get the whole request POST body in Python with Flask? I want to be able to get the whole POST body of the request as a string or file handle that I can read but I'm unable to find a way (I'm new to Flask and Python for web at all). This is what I got: from flask import Flask app = Flask(__name__) @app.route('/', methods=['GET', 'POST']) def parse_request(): # obtain the whole post body here, in case the request is post... AS THIS QUESTION IS GETTING MANY VIEWS: The answer to the question linked below is telling a way that works only if the content-type is not recognised, this one instead is asking how to get the whole raw post body regardless of the headers: Get raw POST body in Python Flask regardless of Content-Type header [/INST] The docs describe the attributes available on the request. In most common cases request.data will be empty because it's used as a fallback: request.data Contains the incoming request data as string in case it came with a mimetype Flask does not handle. request.args: the key/value pairs in the URL query string request.form: the key/value pairs in the body, as sent by a HTML POST form request.files: the files in the body, which Flask keeps separate from form request.values: combined args and form, preferring args if keys overlap </s>
<s>[INST] Postgres: INSERT if does not exist already I'm using Python to write to a postgres database: sql_string = "INSERT INTO hundred (name,name_slug,status) VALUES (" sql_string += hundred + ", '" + hundred_slug + "', " + status + ");" cursor.execute(sql_string) But because some of my rows are identical, I get the following error: psycopg2.IntegrityError: duplicate key value violates unique constraint "hundred_pkey" How can I write an 'INSERT unless this row already exists' SQL statement? I've seen complex statements like this recommended: IF EXISTS (SELECT * FROM invoices WHERE invoiceid = '12345') UPDATE invoices SET billed = 'TRUE' WHERE invoiceid = '12345' ELSE INSERT INTO invoices (invoiceid, billed) VALUES ('12345', 'TRUE') END IF But firstly, is this overkill for what I need, and secondly, how can I execute one of those as a simple string? [/INST] How can I write an 'INSERT unless this row already exists' SQL statement? There is a nice way of doing conditional INSERT in PostgreSQL: INSERT INTO example_table (id, name) SELECT 1, 'John' WHERE NOT EXISTS ( SELECT id FROM example_table WHERE id = 1 ); CAVEAT This approach is not 100% reliable for concurrent write operations, though. There is a very tiny race condition between the SELECT in the NOT EXISTS anti-semi-join and the INSERT itself. It can fail under such conditions. </s>
<s>[INST] Django auto_now and auto_now_add For Django 1.1. I have this in my models.py: class User(models.Model): created = models.DateTimeField(auto_now_add=True) modified = models.DateTimeField(auto_now=True) When updating a row I get: [Sun Nov 15 02:18:12 2009] [error] /home/ptarjan/projects/twitter-meme/django/db/backends/mysql/base.py:84: Warning: Column 'created' cannot be null [Sun Nov 15 02:18:12 2009] [error] return self.cursor.execute(query, args) The relevant part of my database is: `created` datetime NOT NULL, `modified` datetime NOT NULL, Is this cause for concern? Side question: in my admin tool, those two fields aren't showing up. Is that expected? [/INST] Any field with the auto_now attribute set will also inherit editable=False and therefore will not show up in the admin panel. There has been talk in the past about making the auto_now and auto_now_add arguments go away, and although they still exist, I feel you're better off just using a custom save() method. So, to make this work properly, I would recommend not using auto_now or auto_now_add and instead define your own save() method to make sure that created is only updated if id is not set (such as when the item is first created), and have it update modified every time the item is saved. I have done the exact same thing with other projects I have written using Django, and so your save() would look like this: from django.utils import timezone class User(models.Model): created = models.DateTimeField(editable=False) modified = models.DateTimeField() def save(self, *args, **kwargs): ''' On save, update timestamps ''' if not self.id: self.created = timezone.now() self.modified = timezone.now() return super(User, self).save(*args, **kwargs) Hope this helps! Edit in response to comments: The reason why I just stick with overloading save() vs. relying on these field arguments is two-fold: The aforementioned ups and downs with their reliability. These arguments are heavily reliant on the way each type of database that Django knows how to interact with treats a date/time stamp field, and seems to break and/or change between every release. (Which I believe is the impetus behind the call to have them removed altogether). The fact that they only work on DateField, DateTimeField, and TimeField, and by using this technique you are able to automatically populate any field type every time an item is saved. Use django.utils.timezone.now() vs. datetime.datetime.now(), because it will return a TZ-aware or naive datetime.datetime object depending on settings.USE_TZ. To address why the OP saw the error, I don't know exactly, but it looks like created isn't even being populated at all, despite having auto_now_add=True. To me it stands out as a bug, and underscores item #1 in my little list above: auto_now and auto_now_add are flaky at best. </s>
<s>[INST] Calculating arithmetic mean (average) in Python Is there a built-in or standard library method in Python to calculate the arithmetic mean (average) of a list of numbers? [/INST] I am not aware of anything in the standard library. However, you could use something like: def mean(numbers): return float(sum(numbers)) / max(len(numbers), 1) >>> mean([1,2,3,4]) 2.5 >>> mean([]) 0.0 In numpy, there's numpy.mean(). </s>
<s>[INST] Python variable scope error I've been programming for many years, and recently started learning Python. The following code works as expected in both Python 2.5 and 3.0 (on OS X if that matters): a, b, c = (1, 2, 3) print(a, b, c) def test(): print(a) print(b) print(c) # (A) #c+=1 # (B) test() However, when I uncomment line (B), I get an UnboundLocalError: 'c' not assigned at line (A). The values of a and b are printed correctly. This has me completely baffled for two reasons: Why is there a runtime error thrown at line (A) because of a later statement on line (B)? Why are variables a and b printed as expected, while c raises an error? The only explanation I can come up with is that a local variable c is created by the assignment c+=1, which takes precedence over the "global" variable c even before the local variable is created. Of course, it doesn't make sense for a variable to "steal" scope before it exists. Could someone please explain this behavior? [/INST] Python treats variables in functions differently depending on whether you assign values to them from within the function or not. If you assign any value to a variable, it is treated by default as a local variable. Therefore, when you uncomment the line, you are attempting to reference a local variable before any value has been assigned to it. If you want the variable c to refer to the global c, put global c as the first line of the function. As of Python 3, there is now nonlocal c that you can use to refer to the nearest enclosing (not necessarily global) scope. </s>
<s>[INST] What is the best way to call a Python script from another Python script? I have a script named test1.py which is not in a module. It just has code that should execute when the script itself is run. There are no functions, classes, methods, etc. I have another script which runs as a service. I want to call test1.py from the script running as a service. For example: File test1.py print "I am a test" print "see! I do nothing productive." File service.py # Lots of stuff here test1.py # do whatever is in test1.py I'm aware of one method which is opening the file, reading the contents, and basically eval'ing it. I'm assuming there's a better way of doing this. Or at least I hope so. [/INST] The usual way to do this is something like the following. test1.py def some_func(): print 'in test 1, unproductive' if __name__ == '__main__': # test1.py executed as script # do something some_func() service.py import test1 def service_func(): print 'service func' if __name__ == '__main__': # service.py executed as script # do something service_func() test1.some_func() </s>
<s>[INST] Easiest way to rename a model using Django/South? I've been hunting for an answer to this on South's site, Google, and SO, but couldn't find a simple way to do this. I want to rename a Django model using South. Say you have the following: class Foo(models.Model): name = models.CharField() class FooTwo(models.Model): name = models.CharField() foo = models.ForeignKey(Foo) and you want to convert Foo to Bar, namely class Bar(models.Model): name = models.CharField() class FooTwo(models.Model): name = models.CharField() foo = models.ForeignKey(Bar) To keep it simple, I'm just trying to change the name from Foo to Bar, but ignore the foo member in FooTwo for now. What's the easiest way to do this using South? I could probably do a data migration, but that seems pretty involved. Write a custom migration, e.g. db.rename_table('city_citystate', 'geo_citystate'), but I'm not sure how to fix the foreign key in this case. An easier way that you know? [/INST] To answer your first question, the simple model/table rename is pretty straightforward. Run the command: ./manage.py schemamigration yourapp rename_foo_to_bar --empty (Update 2: try --auto instead of --empty to avoid the warning below. Thanks to @KFB for the tip.) If you're using an older version of south, you'll need startmigration instead of schemamigration. Then manually edit the migration file to look like this: class Migration(SchemaMigration): def forwards(self, orm): db.rename_table('yourapp_foo', 'yourapp_bar') def backwards(self, orm): db.rename_table('yourapp_bar','yourapp_foo') You can accomplish this more simply using the db_table Meta option in your model class. But every time you do that, you increase the legacy weight of your codebase -- having class names differ from table names makes your code harder to understand and maintain. I fully support doing simple refactorings like this for the sake of clarity. (update) I just tried this in production, and got a strange warning when I went to apply the migration. It said: The following content types are stale and need to be deleted: yourapp | foo Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no'. I answered "no" and everything seemed to be fine. </s>
<s>[INST] In Python, how do I iterate over a dictionary in sorted order? There's an existing function that ends in: return dict.iteritems() that returns an unsorted iterator for a given dictionary. I would like to return an iterator that goes through the items in sorted order. How do I do that? [/INST] Haven't tested this very extensively, but works in Python 2.5.2. >>> d = {"x":2, "h":15, "a":2222} >>> it = iter(sorted(d.iteritems())) >>> it.next() ('a', 2222) >>> it.next() ('h', 15) >>> it.next() ('x', 2) >>> </s>
<s>[INST] Is there a way to use PhantomJS in Python? I want to use PhantomJS in Python. I googled this problem but couldn't find proper solutions. I find os.popen() may be a good choice. But I couldn't pass some arguments to it. Using subprocess.Popen() may be a proper solution for now. I want to know whether there's a better solution or not. Is there a way to use PhantomJS in Python? [/INST] The easiest way to use PhantomJS in python is via Selenium. The simplest installation method is Install NodeJS Using Node's package manager install phantomjs: npm -g install phantomjs install selenium (in your virtualenv, if you are using that) After installation, you may use phantom as simple as: from selenium import webdriver driver = webdriver.PhantomJS() # or add to your PATH driver.set_window_size(1024, 768) # optional driver.get('https://google.com/') driver.save_screenshot('screen.png') # save a screenshot to disk sbtn = driver.find_element_by_css_selector('button.gbqfba') sbtn.click() If your system path environment variable isn't set correctly, you'll need to specify the exact path as an argument to webdriver.PhantomJS(). Replace this: driver = webdriver.PhantomJS() # or add to your PATH ... with the following: driver = webdriver.PhantomJS(executable_path='/usr/local/lib/node_modules/phantomjs/lib/phantom/bin/phantomjs') References: http://selenium-python.readthedocs.org/en/latest/api.html How do I set a proxy for phantomjs/ghostdriver in python webdriver? http://python.dzone.com/articles/python-testing-phantomjs </s>
<s>[INST] Convert a number range to another range, maintaining ratio I'm trying to convert one range of numbers to another, maintaining ratio. Maths is not my strong point. I have an image file where point values may range from -16000.00 to 16000.00 though the typical range may be much less. What I want to do is compress these values into the integer range 0-100, where 0 is the value of the smallest point, and 100 is the value of the largest. All points in between should keep a relative ratio even though some precision is being lost I'd like to do this in python but even a general algorithm should suffice. I'd prefer an algorithm where the min/max or either range can be adjusted (ie, the second range could be -50 to 800 instead of 0 to 100). [/INST] NewValue = (((OldValue - OldMin) * (NewMax - NewMin)) / (OldMax - OldMin)) + NewMin Or a little more readable: OldRange = (OldMax - OldMin) NewRange = (NewMax - NewMin) NewValue = (((OldValue - OldMin) * NewRange) / OldRange) + NewMin Or if you want to protect for the case where the old range is 0 (OldMin = OldMax): OldRange = (OldMax - OldMin) if (OldRange == 0) NewValue = NewMin else { NewRange = (NewMax - NewMin) NewValue = (((OldValue - OldMin) * NewRange) / OldRange) + NewMin } Note that in this case we're forced to pick one of the possible new range values arbitrarily. Depending on context, sensible choices could be: NewMin (see sample), NewMax or (NewMin + NewMax) / 2 </s>
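As a follow-up to the answer above: since the question asks for Python specifically, here is a minimal sketch of the same formula as a Python function. The function and argument names are my own, not taken from the answer.

def convert_range(old_value, old_min, old_max, new_min, new_max):
    # Map old_value from [old_min, old_max] onto [new_min, new_max],
    # keeping its relative position within the range.
    old_range = old_max - old_min
    if old_range == 0:
        return new_min  # arbitrary but sensible choice for a degenerate old range
    new_range = new_max - new_min
    return (old_value - old_min) * new_range / old_range + new_min

# e.g. mapping -16000.0..16000.0 onto 0..100:
# convert_range(0.0, -16000.0, 16000.0, 0, 100) -> 50.0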
<s>[INST] How to start a background process in Python? I'm trying to port a shell script to the much more readable python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in python? I'd like these processes not to die when the python scripts complete. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily. [/INST] While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated. Example for your case: import subprocess subprocess.Popen(["rm","-r","some.file"]) This should run rm -r some.file in the background. But be wary: subprocess.Popen() only runs a process in the background if nothing in the python script depends on the output of the command being run: For example, the following command will not run in the background: import subprocess ls_output=subprocess.Popen(["ls", "-a"], stdout=subprocess.PIPE) See the documentation here. Also, a point of clarification: "Background" is purely a shell concept: what you probably want is to spawn a new process. I've used "background" here to refer to shell-background-like behavior, but don't mistake this for the process actually being in the background. </s>
<s>[INST] Why doesn't os.path.join() work in this case? The below code will not join; when debugged, the command does not store the whole path but just the last entry. os.path.join('/home/build/test/sandboxes/', todaystr, '/new_sandbox/') When I test this, it only stores the /new_sandbox/ part of the code. [/INST] The latter strings shouldn't start with a slash. If they start with a slash, then they're considered an "absolute path" and everything before them is discarded. Quoting the Python docs for os.path.join: If any component is an absolute path, all previous components (on Windows, including the previous drive letter, if there was one) are thrown away, and joining continues. </s>
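A small illustrative sketch of the behaviour described in the answer above, run on a POSIX system; the path components are made up for the example:

import os.path

# A later component that starts with a slash resets the join:
os.path.join('/home/build/test/sandboxes/', 'today', '/new_sandbox/')
# -> '/new_sandbox/'

# Dropping the leading slash from the later components joins as expected:
os.path.join('/home/build/test/sandboxes', 'today', 'new_sandbox')
# -> '/home/build/test/sandboxes/today/new_sandbox'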
<s>[INST] Python setup.py develop vs install Two options in setup.py, develop and install, are confusing me. According to this site, using develop creates a special link to the site-packages directory. People have suggested that I use python setup.py install for a fresh installation and python setup.py develop after any changes have been made to the setup file. Can anyone shed some light on the usage of these commands? [/INST] python setup.py install is used to install (typically third party) packages that you're not going to be developing/editing/debugging yourself. For your own stuff, you want to get your package installed and then be able to frequently edit your code and not have to re-install your package—this is exactly what python setup.py develop does: installs the package (typically just a source folder) in a way that allows you to conveniently edit your code after it's installed to the (virtual) environment and have the changes take effect immediately. </s>
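As a rough sketch of the two workflows described above — the package directory names are hypothetical, and the pip spelling is just the modern equivalent, not something the answer itself mentions:

# one-off installation of a (typically third-party) package you will not edit
cd some_third_party_package/
python setup.py install

# editable installation of your own package; edits to the source
# take effect without re-installing
cd my_package/
python setup.py develop
# with a recent pip, "pip install -e ." is the equivalent spelling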
<s>[INST] Is there a "not equal" operator in Python? How would you say does not equal? Like if hi == hi: print "hi" elif hi (does not equal) bye: print "no hi" Is there a replacement for == that means "not equal"? [/INST] Use != (preferred) or <> (deprecated). See comparison operators. For comparing object identities, you can use the keyword is and its negation is not. e.g. 1 == 1 # true 1 != 1 # false 1 <> 1 # false [] is [] # false (distinct objects) a = b = []; a is b # true (same object) </s>
<s>[INST] How to perform OR condition in django queryset? I want to write a Django query equivalent to this SQL query: SELECT * from user where income >= 5000 or income is NULL. How to construct the Django queryset filter? User.objects.filter(income__gte=5000, income=0) This doesn't work, because it ANDs the filters. I want to OR the filters to get the union of individual querysets. [/INST] from django.db.models import Q User.objects.filter(Q(income__gte=5000) | Q(income__isnull=True)) via Documentation </s>
<s>[INST] Sort a list by multiple attributes? I have a list of lists: [[12, 'tall', 'blue', 1], [2, 'short', 'red', 9], [4, 'tall', 'blue', 13]] If I wanted to sort by one element, say the tall/short element, I could do it via s = sorted(s, key = itemgetter(1)). If I wanted to sort by both tall/short and colour, I could do the sort twice, once for each element, but is there a quicker way? [/INST] A key can be a function that returns a tuple: s = sorted(s, key = lambda x: (x[1], x[2])) Or you can achieve the same using itemgetter: import operator s = sorted(s, key = operator.itemgetter(1, 2)) And notice that here you can use sort instead of using sorted and then reassigning: s.sort(key = operator.itemgetter(1, 2)) </s>
<s>[INST] Is Python strongly typed? I've come across links that say Python is a strongly typed language. However, I thought in strongly typed languages you couldn't do this: bob = 1 bob = "bob" I thought a strongly typed language didn't accept type-changing at run-time. Maybe I've got a wrong (or too simplistic) definition of strong/weak types. So, is Python a strongly or weakly typed language? [/INST] Python is strongly, dynamically typed. Strong typing means that the type of a value doesn't suddenly change. A string containing only digits doesn't magically become a number, as may happen in Perl. Every change of type requires an explicit conversion. Dynamic typing means that runtime objects (values) have a type, as opposed to static typing where variables have a type. As for your example bob = 1 bob = "bob" This works because the variable does not have a type; it can name any object. After bob=1, you'll find that type(bob) returns int, but after bob="bob", it returns str. (Note that type is a regular function, so it evaluates its argument, then returns the type of the value.) Contrast this with older dialects of C, which were weakly, statically typed, so that pointers and integers were pretty much interchangeable. (Modern ISO C requires conversions in many cases, but my compiler is still lenient about this by default.) I must add that strong vs. weak typing is more of a continuum than a boolean choice. C++ has stronger typing than C (more conversions required), but the type system can be subverted by using pointer casts. The strength of the type system in a dynamic language such as Python is really determined by how its primitives and library functions respond to different types. E.g., + is overloaded so that it works on two numbers or two strings, but not a string and a number. This is a design choice made when + was implemented, but not really a necessity following from the language's semantics. In fact, when you overload + on a custom type, you can make it implicitly convert anything to a number: def to_number(x): """Try to convert x to a number.""" if x is None: return 0 # more special cases here else: return float(x) # works for numbers and strings class Foo(object): def __add__(self, other): other = to_number(other) # now do the addition (The only language that I know that is completely strongly typed, aka strictly typed, is Haskell, where types are entirely disjoint and only a controlled form of overloading is possible via type classes.) </s>
<s>[INST] Python string.join(list) on object array rather than string array In Python, I can do: >>> list = ['a', 'b', 'c'] >>> ', '.join(list) 'a, b, c' Is there any easy way to do the same when I have a list of objects? >>> class Obj: ... def __str__(self): ... return 'name' ... >>> list = [Obj(), Obj(), Obj()] >>> ', '.join(list) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: sequence item 0: expected string, instance found Or do I have to resort to a for loop? [/INST] You could use a list comprehension or a generator expression instead: ', '.join([str(x) for x in list]) # list comprehension ', '.join(str(x) for x in list) # generator expression </s>
<s>[INST] Python multiprocessing pool.map for multiple arguments In the Python multiprocessing library, is there a variant of pool.map which support multiple arguments? text = "test" def harvester(text, case): X = case[0] return text+ str(X) if __name__ == '__main__': pool = multiprocessing.Pool(processes=6) case = RAW_DATASET pool.map(harvester(text,case),case, 1) pool.close() pool.join() [/INST] is there a variant of pool.map which support multiple arguments? Python 3.3 includes pool.starmap() method: #!/usr/bin/env python3 from functools import partial from itertools import repeat from multiprocessing import Pool, freeze_support def func(a, b): return a + b def main(): a_args = [1,2,3] second_arg = 1 with Pool() as pool: L = pool.starmap(func, [(1, 1), (2, 1), (3, 1)]) M = pool.starmap(func, zip(a_args, repeat(second_arg))) N = pool.map(partial(func, b=second_arg), a_args) assert L == M == N if __name__=="__main__": freeze_support() main() For older versions: #!/usr/bin/env python2 import itertools from multiprocessing import Pool, freeze_support def func(a, b): print a, b def func_star(a_b): """Convert `f([1,2])` to `f(1,2)` call.""" return func(*a_b) def main(): pool = Pool() a_args = [1,2,3] second_arg = 1 pool.map(func_star, itertools.izip(a_args, itertools.repeat(second_arg))) if __name__=="__main__": freeze_support() main() Output 1 1 2 1 3 1 Notice how itertools.izip() and itertools.repeat() are used here. Due to the bug mentioned by @unutbu you can't use functools.partial() or similar capabilities on Python 2.6, so the simple wrapper function func_star() should be defined explicitly. See also the workaround suggested by uptimebox. </s>
<s>[INST] Why return NotImplemented instead of raising NotImplementedError Python has a singleton called NotImplemented. Why would someone want to ever return NotImplemented instead of raising the NotImplementedError exception? Won't it just make it harder to find bugs, such as code that executes invalid methods? [/INST] It's because __lt__() and related comparison methods are quite commonly used indirectly in list sorts and such. Sometimes the algorithm will choose to try another way or pick a default winner. Raising an exception would break out of the sort unless caught, whereas NotImplemented doesn't get raised and can be used in further tests. http://jcalderone.livejournal.com/32837.html To summarise that link: "NotImplemented signals to the runtime that it should ask someone else to satisfy the operation. In the expression a == b, if a.__eq__(b) returns NotImplemented, then Python tries b.__eq__(a). If b knows enough to return True or False, then the expression can succeed. If it doesn't, then the runtime will fall back to the built-in behavior (which is based on identity for == and !=)." </s>
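A minimal sketch of the fallback behaviour described above; the class name and attribute are hypothetical, not from the linked article:

class Meters(object):
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        if not isinstance(other, Meters):
            return NotImplemented  # signal: let the other operand try instead
        return self.value == other.value

print(Meters(1) == Meters(1))  # True
print(Meters(1) == 1)          # False: Meters.__eq__ returns NotImplemented,
                               # (1).__eq__ doesn't know Meters either, so Python
                               # falls back to the identity-based default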
<s>[INST] How to download large file in python with requests.py? Requests is a really nice library. I'd like to use it for downloading big files (>1GB). The problem is it's not possible to keep the whole file in memory; I need to read it in chunks. And this is the problem with the following code: import requests def DownloadFile(url): local_filename = url.split('/')[-1] r = requests.get(url) f = open(local_filename, 'wb') for chunk in r.iter_content(chunk_size=512 * 1024): if chunk: # filter out keep-alive new chunks f.write(chunk) f.close() return For some reason it doesn't work this way. It still loads the response into memory before saving it to a file. [/INST] I figured out what should be changed. The trick was to set stream = True in the get() method. After this the python process stopped sucking memory (it stays around 30kb regardless of the size of the downloaded file). Thank you @danodonovan for your syntax, I use it here: def download_file(url): local_filename = url.split('/')[-1] # NOTE the stream=True parameter r = requests.get(url, stream=True) with open(local_filename, 'wb') as f: for chunk in r.iter_content(chunk_size=1024): if chunk: # filter out keep-alive new chunks f.write(chunk) #f.flush() commented by recommendation from J.F.Sebastian return local_filename See http://docs.python-requests.org/en/latest/user/advanced/#body-content-workflow for further reference. </s>
<s>[INST] Iterate a list with indexes in Python I could swear I've seen the function (or method) that takes a list, like this [3, 7, 19] and makes it into iterable list of tuples, like so: [(0,3), (1,7), (2,19)] to use it instead of: for i in range(len(name_of_list)): name_of_list[i] = something but I can't remember the name and googling "iterate list" gets nothing. [/INST] >>> a = [3,4,5,6] >>> for i, val in enumerate(a): ... print i, val ... 0 3 1 4 2 5 3 6 >>> </s>
<s>[INST] How to hide output of subprocess in Python 2.7 I'm using eSpeak on Ubuntu and have a Python 2.7 script that prints and speaks a message: import subprocess text = 'Hello World.' print text subprocess.call(['espeak', text]) eSpeak produces the desired sounds, but clutters the shell with some errors (ALSA lib..., no socket connect) so i cannot easily read what was printed earlier. Exit code is 0. Unfortunately there is no documented option to turn off its verbosity, so I'm looking for a way to only visually silence it and keep the open shell clean for further interaction. How can I do this? [/INST] Redirect the output to DEVNULL: import os import subprocess FNULL = open(os.devnull, 'w') retcode = subprocess.call(['echo', 'foo'], stdout=FNULL, stderr=subprocess.STDOUT) It is effectively the same as running this shell command: retcode = os.system("echo 'foo' &> /dev/null") </s>
<s>[INST] Compact way of writing (a + b == c or a + c == b or b + c == a) Is there a more compact or pythonic way to write the boolean expression a + b == c or a + c == b or b + c == a I came up with a + b + c in (2*a, 2*b, 2*c) but that is a little strange. [/INST] If we look at the Zen of Python, emphasis mine: The Zen of Python, by Tim Peters Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than right now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those! The most Pythonic solution is the one that is clearest, simplest, and easiest to explain: a + b == c or a + c == b or b + c == a Even better, you don't even need to know Python to understand this code! It's that easy. This is, without reservation, the best solution. Anything else is intellectual masturbation. Furthermore, this is likely the best performing solution as well, as it is the only one out of all the proposals that short circuits. If a + b == c, only a single addition and comparison is done. </s>
<s>[INST] Python Infinity - Any caveats? So Python has positive and negative infinity: float("inf"), float("-inf") This just seems like the type of feature that has to have some caveat. Is there anything I should be aware of? [/INST] You can still get not-a-number (NaN) values from simple arithmetic involving inf: >>> 0 * float("inf") nan Note that you will normally not get an inf value through usual arithmetic calculations: >>> 2.0**2 4.0 >>> _**2 16.0 >>> _**2 256.0 >>> _**2 65536.0 >>> _**2 4294967296.0 >>> _**2 1.8446744073709552e+19 >>> _**2 3.4028236692093846e+38 >>> _**2 1.157920892373162e+77 >>> _**2 1.3407807929942597e+154 >>> _**2 Traceback (most recent call last): File "<stdin>", line 1, in ? OverflowError: (34, 'Numerical result out of range') The inf value is considered a very special value with unusual semantics, so it's better to know about an OverflowError straight away through an exception, rather than having an inf value silently injected into your calculations. </s>
<s>[INST] find nearest value in numpy array is there a numpy-thonic way, e.g. function, to find the 'nearest value' in an array? example: np.find_nearest( array, value ) [/INST] import numpy as np def find_nearest(array,value): idx = (np.abs(array-value)).argmin() return array[idx] array = np.random.random(10) print(array) # [ 0.21069679 0.61290182 0.63425412 0.84635244 0.91599191 0.00213826 # 0.17104965 0.56874386 0.57319379 0.28719469] value = 0.5 print(find_nearest(array, value)) # 0.568743859261 </s>
<s>[INST] Case insensitive Python regular expression without re.compile In Python, I can compile a regular expression to be case-insensitive using re.compile: >>> s = 'TeSt' >>> casesensitive = re.compile('test') >>> ignorecase = re.compile('test', re.IGNORECASE) >>> >>> print casesensitive.match(s) None >>> print ignorecase.match(s) <_sre.SRE_Match object at 0x02F0B608> Is there a way to do the same, but without using re.compile. I can't find anything like Perl's i suffix (e.g. m/test/i) in the documentation. [/INST] Pass re.IGNORECASE to the flags param of search, match, or sub: re.search('test', 'TeSt', re.IGNORECASE) re.match('test', 'TeSt', re.IGNORECASE) re.sub('test', 'xxxx', 'Testing', flags=re.IGNORECASE) </s>
<s>[INST] Saving utf-8 texts in json.dumps as UTF8, not as \u escape sequence sample code: >>> import json >>> json_string = json.dumps("ברי צקלה") >>> print json_string "\u05d1\u05e8\u05d9 \u05e6\u05e7\u05dc\u05d4" The problem: it's not human readable. My (smart) users want to verify or even edit text files with JSON dumps. (and I'd rather not use XML) Is there a way to serialize objects into utf-8 json strings (instead of \uXXXX)? this doesn't help: >>> output = json_string.decode('string-escape') "\u05d1\u05e8\u05d9 \u05e6\u05e7\u05dc\u05d4" this works, but if any sub-object is a python-unicode and not utf-8, it'll dump garbage: >>> #### ok: >>> s= json.dumps( "ברי צקלה", ensure_ascii=False) >>> print json.loads(s) ברי צקלה >>> #### NOT ok: >>> d={ 1: "ברי צקלה", 2: u"ברי צקלה" } >>> print d {1: '\xd7\x91\xd7\xa8\xd7\x99 \xd7\xa6\xd7\xa7\xd7\x9c\xd7\x94', 2: u'\xd7\x91\xd7\xa8\xd7\x99 \xd7\xa6\xd7\xa7\xd7\x9c\xd7\x94'} >>> s = json.dumps( d, ensure_ascii=False, encoding='utf8') >>> print json.loads(s)['1'] ברי צקלה >>> print json.loads(s)['2'] ××¨× ×¦×§×× I searched the json.dumps documentation but couldn't find anything useful. Edit - Solution(?): I'll try to sum up the comments and answers by Martijn Pieters: (edit: on second thought, after @Sebastian's comment and about a year later) there is a built-in solution in json.dumps. I'll have to make sure the object contains Unicode strings (rather than UTF-8 byte strings) before it's being JSON-ed. I'll use Mark's function that converts strings recursively in a nested object. The example I gave depends too much on my computer & IDE environment, and doesn't run the same on all computers. Thank you everybody :) [/INST] Use the ensure_ascii=False switch to json.dumps(), then encode the value to UTF-8 manually: >>> json_string = json.dumps(u"ברי צקלה", ensure_ascii=False).encode('utf8') >>> json_string '"\xd7\x91\xd7\xa8\xd7\x99 \xd7\xa6\xd7\xa7\xd7\x9c\xd7\x94"' >>> print json_string "ברי צקלה" If you are writing this to a file, you can use io.open() instead of open() to produce a file object that encodes Unicode values for you as you write, then use json.dump() instead to write to that file: with io.open('filename', 'w', encoding='utf8') as json_file: json.dump(u"ברי צקלה", json_file, ensure_ascii=False) In Python 3, the built-in open() is an alias for io.open(). Do note that there is a bug in the json module where the ensure_ascii=False flag can produce a mix of unicode and str objects. 
The workaround for Python 2 then is: with io.open('filename', 'w', encoding='utf8') as json_file: data = json.dumps(u"ברי צקלה", ensure_ascii=False) # unicode(data) auto-decodes data to unicode if str json_file.write(unicode(data)) If you are passing in byte strings (type str in Python 2, bytes in Python 3) encoded to UTF-8, make sure to also set the encoding keyword: >>> d={ 1: "ברי צקלה", 2: u"ברי צקלה" } >>> d {1: '\xd7\x91\xd7\xa8\xd7\x99 \xd7\xa6\xd7\xa7\xd7\x9c\xd7\x94', 2: u'\u05d1\u05e8\u05d9 \u05e6\u05e7\u05dc\u05d4'} >>> s=json.dumps(d, ensure_ascii=False, encoding='utf8') >>> s u'{"1": "\u05d1\u05e8\u05d9 \u05e6\u05e7\u05dc\u05d4", "2": "\u05d1\u05e8\u05d9 \u05e6\u05e7\u05dc\u05d4"}' >>> json.loads(s)['1'] u'\u05d1\u05e8\u05d9 \u05e6\u05e7\u05dc\u05d4' >>> json.loads(s)['2'] u'\u05d1\u05e8\u05d9 \u05e6\u05e7\u05dc\u05d4' >>> print json.loads(s)['1'] ברי צקלה >>> print json.loads(s)['2'] ברי צקלה Note that your second sample is not valid Unicode; you gave it UTF-8 bytes as a unicode literal, that would never work: >>> s = u'\xd7\x91\xd7\xa8\xd7\x99 \xd7\xa6\xd7\xa7\xd7\x9c\xd7\x94' >>> print s ××¨× ×¦×§×× >>> print s.encode('latin1').decode('utf8') ברי צקלה Only when I encoded that string to Latin 1 (whose unicode codepoints map one-to-one to bytes) then decode as UTF-8 do you see the expected output. That has nothing to do with JSON and everything to do with that you use the wrong input. The result is called a Mojibake. If you got that Unicode value from a string literal, it was decoded using the wrong codec. It could be your terminal is mis-configured, or that your text editor saved your source code using a different codec than what you told Python to read the file with. Or you sourced it from a library that applied the wrong codec. This all has nothing to do with the JSON library. </s>
<s>[INST] How to delete items from a dictionary while iterating over it? Is it legitimate to delete items from a dictionary in Python while iterating over it? For example: for k, v in mydict.iteritems(): if k == val: del mydict[k] The idea is to remove elements that don't meet a certain condition from the dictionary, instead of creating a new dictionary that's a subset of the one being iterated over. Is this a good solution? Are there more elegant/efficient ways? [/INST] A simple test in the console shows you cannot modify a dictionary while iterating over it: >>> mydict = {'one': 1, 'two': 2, 'three': 3, 'four': 4} >>> for k, v in mydict.iteritems(): ... if k == 'two': ... del mydict[k] ... ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> RuntimeError: dictionary changed size during iteration As stated in delnan's answer, deleting entries causes problems when the iterator tries to move onto the next entry. Instead, use the keys() method to get a list of the keys and work with that: >>> for k in mydict.keys(): ... if k == 'two': ... del mydict[k] ... >>> mydict {'four': 4, 'three': 3, 'one': 1} If you need to delete based on the items value, use the items() method instead: >>> for k, v in mydict.items(): ... if v == 3: ... del mydict[k] ... >>> mydict {'four': 4, 'one': 1} </s>
<s>[INST] How to execute a file within the python interpreter? I'm trying to execute a file with python commands from within the interpreter. EDIT: I'm trying to use variables and settings from that file, not to invoke a separate process. [/INST] Several ways. From the shell python someFile.py From inside IDLE, hit F5. If you're typing interactively, try this. >>> variables= {} >>> execfile( "someFile.py", variables ) >>> print variables # globals from the someFile module </s>
<s>[INST] Python dictionary: are keys() and values() always the same order? It looks like the lists returned by keys() and values() methods of a dictionary are always a 1-to-1 mapping (assuming the dictionary is not altered between calling the 2 methods). For example: >>> d = {'one':1, 'two': 2, 'three': 3} >>> k, v = d.keys(), d.values() >>> for i in range(len(k)): print d[k[i]] == v[i] True True True If you do not alter the dictionary between calling keys() and calling values(), is it wrong to assume the above for-loop will always print True? I could not find any documentation confirming this. [/INST] Found this: If items(), keys(), values(), iteritems(), iterkeys(), and itervalues() are called with no intervening modifications to the dictionary, the lists will directly correspond. On 2.x documentation and 3.x documentation. </s>
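A quick way to see that correspondence for yourself, as a small sketch; it works with the lists returned in Python 2 or the views returned in Python 3:

d = {'one': 1, 'two': 2, 'three': 3}
# With no modification between the calls, the i-th key lines up with the i-th value:
assert list(zip(d.keys(), d.values())) == list(d.items())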
<s>[INST] Python: split a list based on a condition? What's the best way, both aesthetically and from a performance perspective, to split a list of items into multiple lists based on a conditional? The equivalent of: good = [x for x in mylist if x in goodvals] bad = [x for x in mylist if x not in goodvals] is there a more elegant way to do this? Update: here's the actual use case, to better explain what I'm trying to do: # files looks like: [ ('file1.jpg', 33L, '.jpg'), ('file2.avi', 999L, '.avi'), ... ] IMAGE_TYPES = ('.jpg','.jpeg','.gif','.bmp','.png') images = [f for f in files if f[2].lower() in IMAGE_TYPES] anims = [f for f in files if f[2].lower() not in IMAGE_TYPES] [/INST] good, bad = [], [] for x in mylist: (bad, good)[x in goodvals].append(x) </s>
<s>[INST] How to use a different version of python during NPM install? Salam (means Hello) :) I have terminal access to a VPS running centos 5.9 and default python 2.4.3 installed. I also installed python 2.7.3 via these commands: (I used make altinstall instead of make install) wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tgz tar -xf Python-2.7.3.tgz cd Python-2.7.3 ./configure make make altinstall then I installed node.js from source via these commands: python2.7 ./configure make make install The problem is, when I use npm install and try to install a node.js package which requires python > 2.4.3 I get this error: gyp ERR! configure error gyp ERR! stack Error: Python executable "python" is v2.4.3, which is not supported by gyp. gyp ERR! stack You can pass the --python switch to point to Python >= v2.5.0 & < 3.0.0. gyp ERR! stack at failPythonVersion (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:125:14) gyp ERR! stack at /usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:114:9 how should I "pass the --python switch to point to Python >= v2.5.0"? [/INST] You can use --python option to npm like so: npm install --python=python2.7 or set it to be used always: npm config set python python2.7 Npm will in turn pass this option to node-gyp when needed. (note: I'm the one who opened an issue on Github to have this included in the docs, as there were so many questions about it ;-) ) </s>
<s>[INST] Getting method parameter names in python Given the python function: def aMethod(arg1, arg2): pass How can I extract the number and names of the arguments. Ie. given that I have a reference to func, I want the func.[something] to return ("arg1", "arg2") The usage scenario for this is that I have a decorator, and I wish to use the method arguments in the same order that they appear for the actual function as a key. Ie. how would the decorator look that printed "a,b" when I call aMethod("a","b") [/INST] Take a look at the inspect module - this will do the inspection of the various code object properties for you. >>> inspect.getargspec(aMethod) (['arg1', 'arg2'], None, None, None) The other results are the name of the *args and **kwargs variables, and the defaults provided. ie. >>> def foo(a,b,c=4, *arglist, **keywords): pass >>> inspect.getargspec(foo) (['a', 'b', 'c'], 'arglist', 'keywords', (4,)) </s>
<s>[INST] python decorators with parameters I have a problem with the transfer of variable 'insurance_mode' by the decorator. I would do it by the following decorator statement: @execute_complete_reservation(True) def test_booking_gta_object(self): self.test_select_gta_object() but unfortunately, this statement does not work. Perhaps maybe there is better way to solve this problem. def execute_complete_reservation(test_case,insurance_mode): def inner_function(self,*args,**kwargs): self.test_create_qsf_query() test_case(self,*args,**kwargs) self.test_select_room_option() if insurance_mode: self.test_accept_insurance_crosseling() else: self.test_decline_insurance_crosseling() self.test_configure_pax_details() self.test_configure_payer_details return inner_function [/INST] You mean def test_booking_gta_object, right? Anyway, the syntax for decorators with arguments is a bit different - the decorator with arguments should return a function that will take a function and return another function. So it should really return a normal decorator. A bit confusing, right? What I mean is: def decorator(argument): def real_decorator(function): def wrapper(*args, **kwargs): funny_stuff() something_with_argument(argument) function(*args, **kwargs) more_funny_stuff() return wrapper return real_decorator Here you can read more on the subject - it's also possible to implement this using callable objects and that is also explained there. </s>
<s>[INST] Rename a dictionary key Is there a way to rename a dictionary key, without reassigning its value to a new name and removing the old name key; and without iterating through dict key/value? In case of OrderedDict, do the same, while keeping that key's position. [/INST] For a regular dict, you can use: mydict[new_key] = mydict.pop(old_key) For an OrderedDict, I think you must build an entirely new one using a comprehension. >>> OrderedDict(zip('123', 'abc')) OrderedDict([('1', 'a'), ('2', 'b'), ('3', 'c')]) >>> oldkey, newkey = '2', 'potato' >>> OrderedDict((newkey if k == oldkey else k, v) for k, v in _.viewitems()) OrderedDict([('1', 'a'), ('potato', 'b'), ('3', 'c')]) Modifying the key itself, as this question seems to be asking, is impractical because dict keys are usually immutable objects such as numbers, strings or tuples. Instead of trying to modify the key, reassigning the value to a new key and removing the old key is how you can achieve the "rename" in python. </s>
<s>[INST] Python + Django page redirect How do I accomplish a simple redirect (e.g. cflocation in ColdFusion, or header(location:http://) in PHP)? [/INST] It's simple: from django.http import HttpResponseRedirect def myview(request): ... return HttpResponseRedirect("/path/") More info in the official Django docs Update: Django 1.0 There is apparently a better way of doing this in Django now using generic views. Example - from django.views.generic.simple import redirect_to urlpatterns = patterns('', (r'^one/$', redirect_to, {'url': '/another/'}), #etc... ) There is more in the generic views documentation. Credit - Carles Barrobés. Update #2: Django 1.3+ In Django 1.5 redirect_to no longer exists and has been replaced by RedirectView. Credit to Yonatan from django.views.generic import RedirectView urlpatterns = patterns('', (r'^one/$', RedirectView.as_view(url='/another/')), ) </s>
<s>[INST] How to send email attachments with Python I am having problems understanding how to email an attachment using Python. I have successfully emailed simple messages with the smtplib. Could someone please explain how to send an attachment in an email. I know there are other posts online but as a Python beginner I find them hard to understand. [/INST] Here's another, adapted from here: import smtplib from os.path import basename from email.mime.application import MIMEApplication from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from email.utils import COMMASPACE, formatdate def send_mail(send_from, send_to, subject, text, files=None, server="127.0.0.1"): assert isinstance(send_to, list) msg = MIMEMultipart() msg['From'] = send_from msg['To'] = COMMASPACE.join(send_to) msg['Date'] = formatdate(localtime=True) msg['Subject'] = subject msg.attach(MIMEText(text)) for f in files or []: with open(f, "rb") as fil: part = MIMEApplication( fil.read(), Name=basename(f) ) part['Content-Disposition'] = 'attachment; filename="%s"' % basename(f) msg.attach(part) smtp = smtplib.SMTP(server) smtp.sendmail(send_from, send_to, msg.as_string()) smtp.close() It's much the same as the first example... But it should be easier to drop in. </s>
<s>[INST] Python creating a dictionary of lists I want to create a dictionary whose values are lists. For example: { 1: ['1'], 2: ['1','2'], 3: ['2'] } If I do: d = dict() a = ['1', '2'] for i in a: for j in range(int(i), int(i) + 2): d[j].append(i) I get a KeyError, because d[...] isn't a list. In this case, I can add the following code after the assignment of a to initialize the dictionary. for x in range(1, 4): d[x] = list() Is there a better way to do this? Lets say I don't know the keys I am going to need until I am in the second for loop. For example: class relation: scope_list = list() ... d = dict() for relation in relation_list: for scope_item in relation.scope_list: d[scope_item].append(relation) An alternative would then be replacing d[scope_item].append(relation) with if d.has_key(scope_item): d[scope_item].append(relation) else: d[scope_item] = [relation,] What is the best way to handle this? Ideally, appending would "just work". Is there some way to express that I want a dictionary of empty lists, even if I don't know every key when I first create the list? [/INST] You can use defaultdict: >>> from collections import defaultdict >>> d = defaultdict(list) >>> for i in a: ... for j in range(int(i), int(i) + 2): ... d[j].append(i) ... >>> d defaultdict(<type 'list'>, {1: ['1'], 2: ['1', '2'], 3: ['2']}) >>> d.items() [(1, ['1']), (2, ['1', '2']), (3, ['2'])] </s>
<s>[INST] Changing the "tick frequency" on x or y axis in matplotlib? I am trying to fix how python plots my data. Say x = [0,5,9,10,15] and y = [0,1,2,3,4] Then I would do: matplotlib.pyplot.plot(x,y) matplotlib.pyplot.show() and the x axis' ticks are plotted in intervals of 5. Is there a way to make it show intervals of 1? [/INST] You could explicitly set where you want to tick marks with plt.xticks: plt.xticks(np.arange(min(x), max(x)+1, 1.0)) For example, import numpy as np import matplotlib.pyplot as plt x = [0,5,9,10,15] y = [0,1,2,3,4] plt.plot(x,y) plt.xticks(np.arange(min(x), max(x)+1, 1.0)) plt.show() (np.arange was used rather than Python's range function just in case min(x) and max(x) are floats instead of ints.) The plt.plot (or ax.plot) function will automatically set default x and y limits. If you wish to keep those limits, and just change the stepsize of the tick marks, then you could use ax.get_xlim() to discover what limits Matplotlib has already set. start, end = ax.get_xlim() ax.xaxis.set_ticks(np.arange(start, end, stepsize)) The default tick formatter should do a decent job rounding the tick values to a sensible number of significant digits. However, if you wish to have more control over the format, you can define your own formatter. For example, ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1f')) Here's a runnable example: import numpy as np import matplotlib.pyplot as plt import matplotlib.ticker as ticker x = [0,5,9,10,15] y = [0,1,2,3,4] fig, ax = plt.subplots() ax.plot(x,y) start, end = ax.get_xlim() ax.xaxis.set_ticks(np.arange(start, end, 0.712123)) ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1f')) plt.show() </s>
<s>[INST] Get the cartesian product of a series of lists in Python How can I get the Cartesian product (every possible combination of values) from a group of lists? Input: somelists = [ [1, 2, 3], ['a', 'b'], [4, 5] ] Desired output: [(1, 'a', 4), (1, 'a', 5), (1, 'b', 4), (1, 'b', 5), (2, 'a', 4), (2, 'a', 5) ...] [/INST] In Python 2.6+ import itertools for element in itertools.product(*somelists): print element Documentation: Python 2 - itertools.product </s>
<s>[INST] Why dict.get(key) instead of dict[key]? Today, I came across the dict method get which, given a key in the dictionary, returns the associated value. For what purpose is this function useful? If I wanted to find a value associated with a key in a dictionary, I can just do dict[key], and it returns the same thing: dictionary = {"Name": "Harry", "Age": 17} dictionary["Name"] dictionary.get("Name") [/INST] It allows you to provide a default value if the key is missing: dictionary.get("bogus", None) returns None, whereas dictionary["bogus"] would raise a KeyError. </s>
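One common pattern where the default argument of get() pays off, as a small sketch; the variable names are mine, not from the answer:

word_counts = {}
for word in ['a', 'b', 'a']:
    # get() returns 0 the first time a word is seen instead of raising KeyError
    word_counts[word] = word_counts.get(word, 0) + 1
# word_counts is now {'a': 2, 'b': 1}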
<s>[INST] How do I override __getattr__ in Python without breaking the default behavior? I want to override the __getattr__ method on a class to do something fancy but I don't want to break the default behavior. What's the correct way to do this? [/INST] Overriding __getattr__ should be fine -- __getattr__ is only called as a last resort i.e. if there are no attributes in the instance that match the name. For instance, if you access foo.bar, then __getattr__ will only be called if foo has no attribute called bar. If the attribute is one you don't want to handle, raise AttributeError: class Foo(object): def __getattr__(self, name): if some_predicate(name): # ... else: # Default behaviour raise AttributeError However, unlike __getattr__, __getattribute__ will be called first (only works for new style classes i.e. those that inherit from object). In this case, you can preserve default behaviour like so: class Foo(object): def __getattribute__(self, name): if some_predicate(name): # ... else: # Default behaviour return object.__getattribute__(self, name) See the Python docs for more. </s>
<s>[INST] Configuring so that pip install can work from github We'd like to use pip with github to install private packages to our production servers. This question concerns what needs to be in the github repo in order for the install to be successful. Assuming the following command line (which authenticates just fine and tries to install): pip install git+ssh://[email protected]/BlahCo/search/tree/prod_release_branch/ProductName What needs to reside in the ProductName? Is it the contents of what would normally be in the tar file after running setup.py with the sdist option, or is the actual tar.gz file, or something else? I'm asking here because I've tried several variations and can't make it work. Any help appreciated. [/INST] You need the whole python package, with a setup.py file in it. A package named foo would be: foo # the installable package ├── foo │   ├── __init__.py │   └── bar.py └── setup.py And install from github like: $ pip install git+git://github.com/myuser/foo.git@v123 or $ pip install git+git://github.com/myuser/foo.git@newbranch More info at http://www.pip-installer.org/en/latest/reference/pip_install.html#vcs-support </s>
<s>[INST] I can't install python-ldap When I run the following command: sudo pip install python-ldap I get this error: In file included from Modules/LDAPObject.c:9: Modules/errors.h:8: fatal error: lber.h: No such file or directory Any ideas how to fix this? [/INST] The website says that python-ldap is based on OpenLDAP, so you need to have the development files (headers) in order to compile the Python module. If you're on Ubuntu, the package is called libldap2-dev. sudo apt-get install libsasl2-dev python-dev libldap2-dev libssl-dev </s>
<s>[INST] What do (lambda) function closures capture in Python? Recently I started playing around with Python and I came across something peculiar in the way closures work. Consider the following code: adders= [0,1,2,3] for i in [0,1,2,3]: adders[i]=lambda a: i+a print adders[1](3) It builds a simple array of functions that take a single input and return that input added by a number. The functions are constructed in a for loop where the iterator i runs from 0 to 3. For each of these numbers a lambda function is created which captures i and adds it to the function's input. The last line calls the second lambda function with 3 as a parameter. To my surprise the output was: 6 I expected a 4. My reasoning was: in Python everything is an object and thus every variable is essentially a pointer to it. When creating the lambda closures for i, I expected it to store a pointer to the integer object currently pointed to by i. That means that when i is assigned a new integer object it shouldn't affect the previously created closures. Sadly, inspecting the adders array within a debugger shows that it does. All lambda functions refer to the last value of i, 3, which results in adders[1] returning 6. Which brings me to the following: what do the closures capture exactly? What is the most elegant way to convince the lambda functions to capture the current value of i, in a way that will not be affected when i changes its value? [/INST] You may force the capture of a variable using an argument with a default value: >>> for i in [0,1,2,3]: ... adders[i]=lambda a,i=i: i+a # note the dummy parameter with a default value ... >>> print( adders[1](3) ) 4 The idea is to declare a parameter (cleverly named i) and give it a default value of the variable you want to capture (the value of i) </s>
<s>[INST] How do I design a class in Python? I've had some really awesome help on my previous questions for detecting paws and toes within a paw, but all these solutions only work for one measurement at a time. Now I have data that consists off: about 30 dogs; each has 24 measurements (divided into several subgroups); each measurement has at least 4 contacts (one for each paw) and each contact is divided into 5 parts and has several parameters, like contact time, location, total force etc. Obviously sticking everything into one big object isn't going to cut it, so I figured I needed to use classes instead of the current slew of functions. But even though I've read Learning Python's chapter about classes, I fail to apply it to my own code (GitHub link) I also feel like it's rather strange to process all the data every time I want to get out some information. Once I know the locations of each paw, there's no reason for me to calculate this again. Furthermore, I want to compare all the paws of the same dog to determine which contact belongs to which paw (front/hind, left/right). This would become a mess if I continue using only functions. So now I'm looking for advice on how to create classes that will let me process my data (link to the zipped data of one dog) in a sensible fashion. [/INST] How to design a class. Write down the words. You started to do this. Some people don't and wonder why they have problems. Expand your set of words into simple statements about what these objects will be doing. That is to say, write down the various calculations you'll be doing on these things. Your short list of 30 dogs, 24 measurements, 4 contacts, and several "parameters" per contact is interesting, but only part of the story. Your "locations of each paw" and "compare all the paws of the same dog to determine which contact belongs to which paw" are the next step in object design. Underline the nouns. Seriously. Some folks debate the value of this, but I find that for first-time OO developers it helps. Underline the nouns. Review the nouns. Generic nouns like "parameter" and "measurement" need to be replaced with specific, concrete nouns that apply to your problem in your problem domain. Specifics help clarify the problem. Generics simply elide details. For each noun ("contact", "paw", "dog", etc.) write down the attributes of that noun and the actions in which that object engages. Don't short-cut this. Every attribute. "Data Set contains 30 Dogs" for example is important. For each attribute, identify if this is a relationship to a defined noun, or some other kind of "primitive" or "atomic" data like a string or a float or something irreducible. For each action or operation, you have to identify which noun has the responsibility, and which nouns merely participate. It's a question of "mutability". Some objects get updated, others don't. Mutable objects must own total responsibility for their mutations. At this point, you can start to transform nouns into class definitions. Some collective nouns are lists, dictionaries, tuples, sets or namedtuples, and you don't need to do very much work. Other classes are more complex, either because of complex derived data or because of some update/mutation which is performed. Don't forget to test each class in isolation using unittest. Also, there's no law that says classes must be mutable. In your case, for example, you have almost no mutable data. What you have is derived data, created by transformation functions from the source dataset. </s>
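To make the noun-to-class step above concrete, here is one possible skeleton that follows that process; every name and attribute below is an illustrative guess, not something taken from the question's actual data:

import collections

# Small immutable "record" nouns can be plain namedtuples:
Contact = collections.namedtuple('Contact', ['paw', 'start_time', 'duration', 'total_force'])

class Measurement(object):
    """One measurement session and the contacts recorded in it."""
    def __init__(self, contacts):
        self.contacts = list(contacts)

class Dog(object):
    """A dog and all of its measurements."""
    def __init__(self, name, measurements=None):
        self.name = name
        self.measurements = list(measurements or [])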
<s>[INST] No Multiline Lambda in Python: Why not? I've heard it said that multiline lambdas can't be added in Python because they would clash syntactically with the other syntax constructs in Python. I was thinking about this on the bus today and realized I couldn't think of a single Python construct that multiline lambdas clash with. Given that I know the language pretty well, this surprised me. Now, I'm sure Guido had a reason for not including multiline lambdas in the language, but out of curiosity: what's a situation where including a multiline lambda would be ambiguous? Is what I've heard true, or is there some other reason that Python doesn't allow multiline lambdas? [/INST] Guido van Rossum (the inventor of Python) answers this exact question himself in an old blog post. Basically, he admits that it's theoretically possible, but that any proposed solution would be un-Pythonic: "But the complexity of any proposed solution for this puzzle is immense, to me: it requires the parser (or more precisely, the lexer) to be able to switch back and forth between indent-sensitive and indent-insensitive modes, keeping a stack of previous modes and indentation level. Technically that can all be solved (there's already a stack of indentation levels that could be generalized). But none of that takes away my gut feeling that it is all an elaborate Rube Goldberg contraption." </s>
<s>[INST] Reusable library to get human readable version of file size? There are various snippets on the web that would give you a function to return human readable size from bytes size: >>> human_readable(2048) '2 kilobytes' >>> But is there a Python library that provides this? [/INST] Addressing the above "too small a task to require a library" issue by a straightforward implementation: def sizeof_fmt(num, suffix='B'): for unit in ['','Ki','Mi','Gi','Ti','Pi','Ei','Zi']: if abs(num) < 1024.0: return "%3.1f%s%s" % (num, unit, suffix) num /= 1024.0 return "%.1f%s%s" % (num, 'Yi', suffix) Supports: all currently known binary prefixes negative and positive numbers numbers larger than 1000 Yobibytes arbitrary units (maybe you like to count in Gibibits!) Example: >>> sizeof_fmt(168963795964) '157.4GiB' by Fred Cirera </s>
<s>[INST] How to check if a user is logged in (how to properly use user.is_authenticated)? I am looking over this website but just can't seem to figure out how to do this as it's not working. I need to check if the current site user is logged in (authenticated), and am trying: request.user.is_authenticated despite being sure that the user is logged in, it returns just: > I'm able to do other requests (from the first section in the url above), such as: request.user.is_active which returns a successful response. [/INST] is_authenticated is a function. You should call it like if request.user.is_authenticated(): # do something if the user is authenticated As Peter Rowell pointed out, what may be tripping you up is that in the default Django template language, you don't tack on parenthesis to call functions. So you may have seen something like this in template code: {% if user.is_authenticated %} However, in Python code, it is indeed a method in the User class. </s>
<s>[INST] How to change legend size with matplotlib.pyplot Simple question here: I'm just trying to get the size of my legend using matplotlib.pyplot to be smaller (ie, the text to be smaller). The code I'm using goes something like this: plot.figure() plot.scatter(k,sum_cf, color = 'black', label='Sum of Cause Fractions') plot.scatter(k, data[:, 0], color = 'b', label = 'Dis 1: cf = .6, var = .2') plot.scatter(k, data[:, 1], color = 'r', label = 'Dis 2: cf = .2, var = .1') plot.scatter(k, data[:, 2], color = 'g', label = 'Dis 3: cf = .1, var = .01') plot.legend(loc=2) [/INST] You can set an individual font size for the legend by adjusting the 'prop' keyword. plot.legend(loc=2,prop={'size':6}) This takes a dictionary of keywords corresponding to matplotlib.font_manager.FontProperties properties. See the documentation for legend: Keyword arguments: prop: [ None | FontProperties | dict ] A matplotlib.font_manager.FontProperties instance. If prop is a dictionary, a new instance will be created with prop. If None, use rc settings. It is also possible, as of version 1.2.1, to use the keyword fontsize. </s>
<s>[INST] What is the correct syntax for 'else if'? I'm a new Python programmer who is making the leap from 2.6.4 to 3.1.1. Everything has gone fine until I tried to use the 'else if' statement. The interpreter gives me a syntax error after the 'if' in 'else if' for a reason I can't seem to figure out. def function(a): if a == '1': print ('1a') else if a == '2' print ('2a') else print ('3a') function(input('input:')) I'm probably missing something very simple; however, I haven't been able to find the answer on my own. [/INST] In python "else if" is spelled "elif". Also, you need a colon after the elif and the else. Simple answer to a simple question. I had the same problem, when I first started (in the last couple of weeks). So your code should read: def function(a): if a == '1': print('1a') elif a == '2': print('2a') else: print('3a') function(input('input:')) </s>
<s>[INST] Convert string representation of list to list in Python I was wondering what the simplest way is to convert a string list like the following to a list: x = u'[ "A","B","C" , " D"]' Even in case the user puts spaces in between the commas, and spaces inside of the quotes, I need to handle that as well and convert it to: x = ["A", "B", "C", "D"] in Python. I know I can strip the spaces with strip() and split using the split operator, and check for non-alphabetic characters. But the code was getting very kludgy. Is there a quick function that I'm not aware of? [/INST] >>> import ast >>> x = u'[ "A","B","C" , " D"]' >>> x = ast.literal_eval(x) >>> x ['A', 'B', 'C', ' D'] >>> x = [n.strip() for n in x] >>> x ['A', 'B', 'C', 'D'] ast.literal_eval: Safely evaluate an expression node or a string containing a Python expression. The string or node provided may only consist of the following Python literal structures: strings, numbers, tuples, lists, dicts, booleans, and None. </s>
<s>[INST] Can't install PIL after Mac OS X 10.9 I've just updated my Mac OS to 10.9 and I discovered that some (all?) of my Python modules are not here anymore, especially the Image one. So I try to execute sudo pip install pil, but I get this error: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/tk.h:78:11: fatal error: 'X11/Xlib.h' file not found # include <X11/Xlib.h> ^ 1 error generated. error: command 'cc' failed with exit status 1 My Xcode is up-to-date and I don't have any idea. Is it possible that PIL is not yet 10.9 compatible ? [/INST] Following worked for me: ln -s /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers/X11 /usr/local/include/X11 sudo pip install pil UPDATE: But there is more correct solution below, provided by Will. open your terminal and execute: xcode-select --install </s>
<s>[INST] Writing Unicode text to a text file? I'm pulling data out of a Google doc, processing it, and writing it to a file (that eventually I will paste into a Wordpress page). It has some non-ASCII symbols. How can I convert these safely to symbols that can be used in HTML source? Currently I'm converting everything to Unicode on the way in, joining it all together in a Python string, then doing: import codecs f = codecs.open('out.txt', mode="w", encoding="iso-8859-1") f.write(all_html.encode("iso-8859-1", "replace")) There is an encoding error on the last line: UnicodeDecodeError: 'ascii' codec can't decode byte 0xa0 in position 12286: ordinal not in range(128) Partial solution: This Python runs without an error: row = [unicode(x.strip()) if x is not None else u'' for x in row] all_html = row[0] + "<br/>" + row[1] f = open('out.txt', 'w') f.write(all_html.encode("utf-8") But then if I open the actual text file, I see lots of symbols like: Qur‚Äôan Maybe I need to write to something other than a text file? [/INST] Deal exclusively with unicode objects as much as possible by decoding things to unicode objects when you first get them and encoding them as necessary on the way out. If your string is actually a unicode object, you'll need to convert it to a unicode-encoded string object before writing it to a file: foo = u'Δ, Й, ק, ‎ م, ๗, あ, 叶, 葉, and 말.' f = open('test', 'w') f.write(foo.encode('utf8')) f.close() When you read that file again, you'll get a unicode-encoded string that you can decode to a unicode object: f = file('test', 'r') print f.read().decode('utf8') </s>
<s>[INST] How to install Python MySQLdb module using pip? How can I install the MySQLdb module for Python using pip? [/INST] It's easy to do, but hard to remember the correct spelling: pip install MySQL-python Note: Some dependencies might have to be in place when running the above command. Some hints on how to install these on various platforms: Ubuntu 14, Ubuntu 16, Debian 8.6 (jessie) sudo apt-get install python-pip python-dev libmysqlclient-dev Fedora 24: sudo dnf install python python-devel mysql-devel redhat-rpm-config gcc </s>
<s>[INST] Sleeping in a batch file When writing a batch file to automate something on a Windows box, I've needed to pause its execution for several seconds (usually in a test/wait loop, waiting for a process to start). At the time, the best solution I could find uses ping (I kid you not) to achieve the desired effect. I've found a better write-up of it here, which describes a callable "wait.bat", implemented as follows: @ping 127.0.0.1 -n 2 -w 1000 > nul @ping 127.0.0.1 -n %1% -w 1000> nul You can then include calls to wait.bat in your own batch file, passing in the number of seconds to sleep. Apparently the Windows 2003 Resource Kit provides a Unix-like sleep command (at last!). In the meantime, for those of us still using Windows XP, Windows 2000 or (sadly) Windows NT, is there a better way? I modified the sleep.py script in the accepted answer, so that it defaults to one second if no arguments are passed on the command line: import time, sys time.sleep(float(sys.argv[1]) if len(sys.argv) > 1 else 1) [/INST] The timeout command is available from Vista onwards. c:\> timeout /? TIMEOUT [/T] timeout [/NOBREAK] Description: This utility accepts a timeout parameter to wait for the specified time period (in seconds) or until any key is pressed. It also accepts a parameter to ignore the key press. Parameter List: /T timeout Specifies the number of seconds to wait. Valid range is -1 to 99999 seconds. /NOBREAK Ignore key presses and wait specified time. /? Displays this help message. NOTE: A timeout value of -1 means to wait indefinitely for a key press. Examples: TIMEOUT /? TIMEOUT /T 10 TIMEOUT /T 300 /NOBREAK TIMEOUT /T -1 Note: Does not work with input redirection - trivial example: C:\>echo 1 | timeout /t 1 /nobreak ERROR: Input redirection is not supported, exiting the process immediately. </s>
<s>[INST] Why are Python's arrays slow? I expected array.array to be faster than lists, as arrays seem to be unboxed. However, I get the following result: In [1]: import array In [2]: L = list(range(100000000)) In [3]: A = array.array('l', range(100000000)) In [4]: %timeit sum(L) 1 loop, best of 3: 667 ms per loop In [5]: %timeit sum(A) 1 loop, best of 3: 1.41 s per loop In [6]: %timeit sum(L) 1 loop, best of 3: 627 ms per loop In [7]: %timeit sum(A) 1 loop, best of 3: 1.39 s per loop What could be the cause of such a difference? [/INST] The storage is "unboxed", but every time you access an element Python has to "box" it (embed it in a regular Python object) in order to do anything with it. For example, your sum(A) iterates over the array, and boxes each integer, one at a time, in a regular Python int object. That costs time. In your sum(L), all the boxing was done at the time the list was created. So, in the end, an array is generally slower, but requires substantially less memory. Here's the relevant code from a recent version of Python 3, but the same basic ideas apply to all CPython implementations since Python was first released. Here's the code to access a list item: PyObject * PyList_GetItem(PyObject *op, Py_ssize_t i) { /* error checking omitted */ return ((PyListObject *)op) -> ob_item[i]; } There's very little to it: somelist[i] just returns the i'th object in the list (and all Python objects in CPython are pointers to a struct whose initial segment conforms to the layout of a struct PyObject). And here's the __getitem__ implementation for an array with type code l: static PyObject * l_getitem(arrayobject *ap, Py_ssize_t i) { return PyLong_FromLong(((long *)ap->ob_item)[i]); } The raw memory is treated as a vector of platform-native C long integers; the i'th C long is read up; and then PyLong_FromLong() is called to wrap ("box") the native C long in a Python long object (which, in Python 3, which eliminates Python 2's distinction between int and long, is actually shown as type int). This boxing has to allocate new memory for a Python int object, and spray the native C long's bits into it. In the context of the original example, this object's lifetime is very brief (just long enough for sum() to add the contents into a running total), and then more time is required to deallocate the new int object. This is where the speed difference comes from, always has come from, and always will come from in the CPython implementation. </s>
<s>[INST] Binary search (bisection) in Python Is there a library function that performs binary search on a list/tuple and returns the position of the item if found and 'False' (-1, None, etc.) if not? I found the functions bisect_left/right in the bisect module, but they still return a position even if the item is not in the list. That's perfectly fine for their intended usage, but I just want to know if an item is in the list or not (don't want to insert anything). I thought of using bisect_left and then checking if the item at that position is equal to what I'm searching, but that seems cumbersome (and I also need to do bounds checking if the number can be larger than the largest number in my list). If there is a nicer method I'd like to know about it. Edit To clarify what I need this for: I'm aware that a dictionary would be very well suited for this, but I'm trying to keep the memory consumption as low as possible. My intended usage would be a sort of two-way look-up table. I have in the table a list of values and I need to be able to access the values based on their index. And also I want to be able to find the index of a particular value or None if the value is not in the list. Using a dictionary for this would be the fastest way, but would (approximately) double the memory requirements. I was asking this question thinking that I may have overlooked something in the Python libraries. It seems I'll have to write my own code, as Moe suggested. [/INST] from bisect import bisect_left def binary_search(a, x, lo=0, hi=None): # can't use a to specify default for hi hi = hi if hi is not None else len(a) # hi defaults to len(a) pos = bisect_left(a, x, lo, hi) # find insertion position return (pos if pos != hi and a[pos] == x else -1) # don't walk off the end
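A quick usage check of the helper above (the list values are just illustrative):

>>> binary_search([1, 2, 4, 8, 16], 8)
3
>>> binary_search([1, 2, 4, 8, 16], 5)
-1
</s>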
<s>[INST] Installing SetupTools on 64-bit Windows I'm running Python 2.7 on Windows 7 64-bit, and when I run the installer for setuptools it tells me that Python 2.7 is not installed. The specific error message is: `Python Version 2.7 required which was not found in the registry` My installed version of Python is: `Python 2.7 (r27:82525, Jul 4 2010, 07:43:08) [MSC v.1500 64 bit (AMD64)] on win32` I'm looking at the setuptools site and it doesn't mention any installers for 64-bit Windows. Have I missed something or do I have to install this from source? [/INST] Problem: you have 64-bit Python, and a 32-bit installer. This will cause problems for extension modules. The reason why the installer doesn't find Python is the transparent 32-bit emulation in Windows 7. 64-bit and 32-bit programs will write to different parts of the Windows registry. 64-bit: HKLM|HKCU\SOFTWARE\ 32-bit: HKLM|HKCU\SOFTWARE\wow6432node\. This means that the 64-bit Python installer writes to HKLM\SOFTWARE\Python, but the 32-bit setuptools installer looks at HKLM\SOFTWARE\wow6432node\Python (this is handled by Windows automatically; programs don't notice). This is expected behavior and not a bug. Usually, you have these choices: the "clean" way: use 32-bit Python if you have to use 32-bit modules or extensions the other "clean" way: only use 64-bit installers when using 64-bit Python (see below) what the answer above suggests: copy HKLM\SOFTWARE\Python to HKLM\SOFTWARE\wow6432node\Python, but this will cause problems with binary distributions, as 64-bit Python can't load 32-bit compiled modules (do NOT do this!) install pure Python modules with setuptools instead of the distutils installer (easy_install or pip) For setuptools itself, for example, you can't use a 32-bit installer for 64-bit Python as it includes binary files. But there's a 64-bit installer at http://www.lfd.uci.edu/~gohlke/pythonlibs/ (has many installers for other modules too). As a rule of thumb: install pure-python modules using distribute and pip (see here), and find a binary package matching your installed Python architecture for everything else. </s>
<s>[INST] How can I reference requirements.txt for the install_requires kwarg in setuptools' setup.py file? I have a requirements.txt file that I'm using with Travis-CI. It seems silly to duplicate the requirements in both requirements.txt and setup.py, so I was hoping to pass a file handle to the install_requires kwarg in setuptools.setup. Is this possible? If so, how should I go about doing it? For good measure, here is my requirements.txt file: guessit>=0.5.2 tvdb_api>=1.8.2 hachoir-metadata>=1.3.3 hachoir-core>=1.3.3 hachoir-parser>=1.3.4 [/INST] A requirements file can contain comments (#) and can include some other files (--requirement or -r). Thus, if you really want to parse a requirements.txt you should use the pip parser: from pip.req import parse_requirements # parse_requirements() returns a generator of pip.req.InstallRequirement objects install_reqs = parse_requirements(<requirements_path>) # reqs is a list of requirement strings # e.g. ['django==1.5.1', 'mezzanine==1.4.6'] reqs = [str(ir.req) for ir in install_reqs] setup( ... install_requires=reqs ) NOTE: my answer is old now. Pip does not have a public API so this no longer works (e.g. parse_requirements now requires an instance of pip.download.PipSession). You can do the opposite: list the dependencies in setup.py and have a single character -- a dot . -- in requirements.txt.
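If you want to avoid pip's internal API altogether, a simple alternative (not part of the original answer, and assuming a flat requirements.txt with no -r includes or pip-specific options) is to read the file yourself:

reqs = []
with open('requirements.txt') as f:
    for line in f:
        line = line.strip()
        # skip blank lines and comments; -r includes and pip options are not handled
        if line and not line.startswith('#'):
            reqs.append(line)

setup(
    # ...
    install_requires=reqs,
)
</s>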
<s>[INST] Catching an exception while using a Python 'with' statement To my shame, I can't figure out how to handle an exception for a Python 'with' statement. If I have code like: with open("a.txt") as f: print f.readlines() I really want to handle a 'file not found' exception in order to do something. But I can't write with open("a.txt") as f: print f.readlines() except: print 'oops' and can't write with open("a.txt") as f: print f.readlines() else: print 'oops' and enclosing 'with' in a try/except statement doesn't seem to work either: the exception is not raised. What can I do in order to process a failure inside a 'with' statement in a Pythonic way? [/INST] from __future__ import with_statement try: with open("a.txt") as f: print f.readlines() except EnvironmentError: # parent of IOError, OSError *and* WindowsError where available print 'oops' If you want different handling for errors from the open call vs the working code you could do: try: f = open('foo.txt') except IOError: print('error') else: with f: print f.readlines() </s>
<s>[INST] Building a minimal plugin architecture in Python I have an application, written in Python, which is used by a fairly technical audience (scientists). I'm looking for a good way to make the application extensible by the users, i.e. a scripting/plugin architecture. I am looking for something extremely lightweight. Most scripts, or plugins, are not going to be developed and distributed by a third-party and installed, but are going to be something whipped up by a user in a few minutes to automate a repeating task, add support for a file format, etc. So plugins should have the absolute minimum boilerplate code, and require no 'installation' other than copying to a folder (so something like setuptools entry points, or the Zope plugin architecture seems like too much.) Are there any systems like this already out there, or any projects that implement a similar scheme that I should look at for ideas / inspiration? [/INST] Mine is, basically, a directory called "plugins" that the main app polls; it then uses imp.load_module to pick up the files, looks for a well-known entry point (possibly with module-level config params), and goes from there. I use file-monitoring stuff for a certain amount of dynamism in which plugins are active, but that's a nice-to-have. Of course, any requirement that comes along saying "I don't need [big, complicated thing] X; I just want something lightweight" runs the risk of re-implementing X one discovered requirement at a time. But that's not to say you can't have some fun doing it anyway :)
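A minimal sketch of that approach (not from the original answer: the plugins/ directory name and the register() entry point are illustrative assumptions, and error handling is kept minimal):

import imp
import os

PLUGIN_DIR = 'plugins'

def load_plugins():
    # Import every top-level .py file in PLUGIN_DIR and collect its entry point.
    plugins = []
    for fname in os.listdir(PLUGIN_DIR):
        if not fname.endswith('.py') or fname.startswith('_'):
            continue
        name = fname[:-3]
        handle, path, desc = imp.find_module(name, [PLUGIN_DIR])
        try:
            module = imp.load_module(name, handle, path, desc)
        finally:
            if handle:
                handle.close()
        # the well-known entry point: a module-level register() function
        if hasattr(module, 'register'):
            plugins.append(module.register())
    return plugins

A plugin is then just a file dropped into plugins/ that defines register().
</s>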
<s>[INST] Django: Redirect to previous page after login I'm trying to build a simple website with login functionality very similar to the one here on SO. The user should be able to browse the site as an anonymous user and there will be a login link on every page. When clicking on the login link the user will be taken to the login form. After a successful login the user should be taken back to the page from where he clicked the login link in the first place. I'm guessing that I have to somehow pass the url of the current page to the view that handles the login form but I can't really get it to work. EDIT: I figured it out. I linked to the login form by passing the current page as a GET parameter and then used 'next' to redirect to that page. Thanks! EDIT 2: My explanation did not seem to be clear so as requested here is my code: Let's say we are on a page foo.html and we are not logged in. Now we would like to have a link on foo.html that links to login.html. There we can login and are then redirected back to foo.html. The link on foo.html looks like this: <a href='/login/?next={{ request.path }}'>Login</a> Now I wrote a custom login view that looks somewhat like this: def login_view(request): redirect_to = request.REQUEST.get('next', '') if request.method=='POST': #create login form... if valid login credentials have been entered: return HttpResponseRedirect(redirect_to) #... return render_to_response('login.html', locals()) And the important line in login.html: <form method="post" action="./?next={{ redirect_to }}"> So yeah, that's pretty much it; hope that makes it clear. [/INST] You do not need to make an extra view for this; the functionality is already built in. First each page with a login link needs to know the current path, and the easiest way is to add the request context processor to settings.py (the first 4 are the default), then the request object will be available in every template: settings.py: TEMPLATE_CONTEXT_PROCESSORS = ( "django.core.context_processors.auth", "django.core.context_processors.debug", "django.core.context_processors.i18n", "django.core.context_processors.media", "django.core.context_processors.request", ) Then, in the template where you want the Login link, add: base.html: <a href="{% url django.contrib.auth.views.login %}?next={{request.path}}">Login</a> This will add a GET argument to the login page that points back to the current page. The login template can then be as simple as this: registration/login.html: {% block content %} <form method="post" action=""> {{form.as_p}} <input type="submit" value="Login"> </form> {% endblock %} </s>
<s>[INST] Is False == 0 and True == 1 in Python an implementation detail or is it guaranteed by the language? Is it guaranteed that False == 0 and True == 1, in Python? For instance, is it in any way guaranteed that the following code will always produce the same results, whatever the version of Python (both existing and, likely, future ones)? 0 == False # True 1 == True # True ['zero', 'one'][False] # is 'zero' Any reference to the official documentation would be much appreciated! Edit: As noted in many answers, bool inherits from int. The question can therefore be recast as: "Does the documentation officially say that programmers can rely on booleans inheriting from integers, with the values 0 and 1?". This question is relevant for writing robust code that won't fail because of implementation details! [/INST] In Python 2.x this is not guaranteed as it is possible for True and False to be reassigned. However, even if this happens, boolean True and boolean False are still properly returned for comparisons. In Python 3.x True and False are keywords and will always be equal to 1 and 0. Under normal circumstances in Python 2, and always in Python 3: the False object is of type bool, which is a subclass of int: object -> int -> bool That is the only reason why, in your example, ['zero', 'one'][False] works. It would not work with an object which is not a subclass of integer, because list indexing only works with integers, or objects that define a __index__ method (thanks mark-dickinson). Edit: This is true of the current Python 2.x version, and of Python 3. The docs for python 2.6 and the docs for Python 3 both say: There are two types of integers: [...] Integers (int) [...] Booleans (bool) and in the boolean subsection: Booleans: These represent the truth values False and True [...] Boolean values behave like the values 0 and 1, respectively, in almost all contexts, the exception being that when converted to a string, the strings "False" or "True" are returned, respectively. So booleans are explicitly considered as integers in Python 2.6 and 3. So you're safe until Python 4 comes along. ;-)
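A quick interactive check of that relationship (the results are the same on Python 2 and Python 3, barring a deliberate reassignment of True/False in Python 2):

>>> issubclass(bool, int)
True
>>> isinstance(True, int)
True
>>> int(True), int(False)
(1, 0)
>>> True + True
2
>>> ['zero', 'one'][True]
'one'
</s>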
<s>[INST] Decode HTML entities in Python string? I'm parsing some HTML with Beautiful Soup 3, but it contains HTML entities which Beautiful Soup 3 doesn't automatically decode for me: >>> from BeautifulSoup import BeautifulSoup >>> soup = BeautifulSoup("<p>&pound;682m</p>") >>> text = soup.find("p").string >>> print text &pound;682m How can I decode the HTML entities in text to get "£682m" instead of "&pound;682m". [/INST] Python 3.4+ HTMLParser.unescape is deprecated, and was supposed to be removed in 3.5, although it was left in by mistake. It will be removed from the language soon. Instead, use html.unescape(): import html print(html.unescape('&pound;682m')) see https://docs.python.org/3/library/html.html#html.unescape Python 2.6-3.3 You can use the HTML parser from the standard library: >>> try: ... # Python 2.6-2.7 ... from HTMLParser import HTMLParser ... except ImportError: ... # Python 3 ... from html.parser import HTMLParser ... >>> h = HTMLParser() >>> print(h.unescape('&pound;682m')) £682m See http://docs.python.org/2/library/htmlparser.html You can also use the six compatibility library to simplify the import: >>> from six.moves.html_parser import HTMLParser >>> h = HTMLParser() >>> print(h.unescape('&pound;682m')) £682m </s>
<s>[INST] Why is it slower to iterate over a small string than a small list? I was playing around with timeit and noticed that doing a simple list comprehension over a small string took longer than doing the same operation on a list of small single character strings. Any explanation? It's almost 1.35 times as much time. >>> from timeit import timeit >>> timeit("[x for x in 'abc']") 2.0691067844831528 >>> timeit("[x for x in ['a', 'b', 'c']]") 1.5286479570345861 What's happening on a lower level that's causing this? [/INST] TL;DR The actual speed difference is closer to 70% (or more) once a lot of the overhead is removed, for Python 2. Object creation is not at fault. Neither method creates a new object, as one-character strings are cached. The difference is unobvious, but is likely created from a greater number of checks on string indexing, with regards to the type and well-formedness. It is also quite likely thanks to the need to check what to return. List indexing is remarkably fast. >>> python3 -m timeit '[x for x in "abc"]' 1000000 loops, best of 3: 0.388 usec per loop >>> python3 -m timeit '[x for x in ["a", "b", "c"]]' 1000000 loops, best of 3: 0.436 usec per loop This disagrees with what you've found... You must be using Python 2, then. >>> python2 -m timeit '[x for x in "abc"]' 1000000 loops, best of 3: 0.309 usec per loop >>> python2 -m timeit '[x for x in ["a", "b", "c"]]' 1000000 loops, best of 3: 0.212 usec per loop Let's explain the difference between the versions. I'll examine the compiled code. For Python 3: import dis def list_iterate(): [item for item in ["a", "b", "c"]] dis.dis(list_iterate) #>>> 4 0 LOAD_CONST 1 (<code object <listcomp> at 0x7f4d06b118a0, file "", line 4>) #>>> 3 LOAD_CONST 2 ('list_iterate.<locals>.<listcomp>') #>>> 6 MAKE_FUNCTION 0 #>>> 9 LOAD_CONST 3 ('a') #>>> 12 LOAD_CONST 4 ('b') #>>> 15 LOAD_CONST 5 ('c') #>>> 18 BUILD_LIST 3 #>>> 21 GET_ITER #>>> 22 CALL_FUNCTION 1 (1 positional, 0 keyword pair) #>>> 25 POP_TOP #>>> 26 LOAD_CONST 0 (None) #>>> 29 RETURN_VALUE def string_iterate(): [item for item in "abc"] dis.dis(string_iterate) #>>> 21 0 LOAD_CONST 1 (<code object <listcomp> at 0x7f4d06b17150, file "", line 21>) #>>> 3 LOAD_CONST 2 ('string_iterate.<locals>.<listcomp>') #>>> 6 MAKE_FUNCTION 0 #>>> 9 LOAD_CONST 3 ('abc') #>>> 12 GET_ITER #>>> 13 CALL_FUNCTION 1 (1 positional, 0 keyword pair) #>>> 16 POP_TOP #>>> 17 LOAD_CONST 0 (None) #>>> 20 RETURN_VALUE You see here that the list variant is likely to be slower due to the building of the list each time. This is the 9 LOAD_CONST 3 ('a') 12 LOAD_CONST 4 ('b') 15 LOAD_CONST 5 ('c') 18 BUILD_LIST 3 part. The string variant only has 9 LOAD_CONST 3 ('abc') You can check that this does seem to make a difference: def string_iterate(): [item for item in ("a", "b", "c")] dis.dis(string_iterate) #>>> 35 0 LOAD_CONST 1 (<code object <listcomp> at 0x7f4d068be660, file "", line 35>) #>>> 3 LOAD_CONST 2 ('string_iterate.<locals>.<listcomp>') #>>> 6 MAKE_FUNCTION 0 #>>> 9 LOAD_CONST 6 (('a', 'b', 'c')) #>>> 12 GET_ITER #>>> 13 CALL_FUNCTION 1 (1 positional, 0 keyword pair) #>>> 16 POP_TOP #>>> 17 LOAD_CONST 0 (None) #>>> 20 RETURN_VALUE This produces just 9 LOAD_CONST 6 (('a', 'b', 'c')) as tuples are immutable. Test: >>> python3 -m timeit '[x for x in ("a", "b", "c")]' 1000000 loops, best of 3: 0.369 usec per loop Great, back up to speed. 
For Python 2: def list_iterate(): [item for item in ["a", "b", "c"]] dis.dis(list_iterate) #>>> 2 0 BUILD_LIST 0 #>>> 3 LOAD_CONST 1 ('a') #>>> 6 LOAD_CONST 2 ('b') #>>> 9 LOAD_CONST 3 ('c') #>>> 12 BUILD_LIST 3 #>>> 15 GET_ITER #>>> >> 16 FOR_ITER 12 (to 31) #>>> 19 STORE_FAST 0 (item) #>>> 22 LOAD_FAST 0 (item) #>>> 25 LIST_APPEND 2 #>>> 28 JUMP_ABSOLUTE 16 #>>> >> 31 POP_TOP #>>> 32 LOAD_CONST 0 (None) #>>> 35 RETURN_VALUE def string_iterate(): [item for item in "abc"] dis.dis(string_iterate) #>>> 2 0 BUILD_LIST 0 #>>> 3 LOAD_CONST 1 ('abc') #>>> 6 GET_ITER #>>> >> 7 FOR_ITER 12 (to 22) #>>> 10 STORE_FAST 0 (item) #>>> 13 LOAD_FAST 0 (item) #>>> 16 LIST_APPEND 2 #>>> 19 JUMP_ABSOLUTE 7 #>>> >> 22 POP_TOP #>>> 23 LOAD_CONST 0 (None) #>>> 26 RETURN_VALUE The odd thing is that we have the same building of the list, but it's still faster for this. Python 2 is acting strangely fast. Let's remove the comprehensions and re-time. The _ = is to prevent it getting optimised out. >>> python3 -m timeit '_ = ["a", "b", "c"]' 10000000 loops, best of 3: 0.0707 usec per loop >>> python3 -m timeit '_ = "abc"' 100000000 loops, best of 3: 0.0171 usec per loop We can see that initialization is not significant enough to account for the difference between the versions (those numbers are small)! We can thus conclude that Python 3 has slower comprehensions. This makes sense as Python 3 changed comprehensions to have safer scoping. Well, now improve the benchmark (I'm just removing overhead that isn't iteration). This removes the building of the iterable by pre-assigning it: >>> python3 -m timeit -s 'iterable = "abc"' '[x for x in iterable]' 1000000 loops, best of 3: 0.387 usec per loop >>> python3 -m timeit -s 'iterable = ["a", "b", "c"]' '[x for x in iterable]' 1000000 loops, best of 3: 0.368 usec per loop >>> python2 -m timeit -s 'iterable = "abc"' '[x for x in iterable]' 1000000 loops, best of 3: 0.309 usec per loop >>> python2 -m timeit -s 'iterable = ["a", "b", "c"]' '[x for x in iterable]' 10000000 loops, best of 3: 0.164 usec per loop We can check if calling iter is the overhead: >>> python3 -m timeit -s 'iterable = "abc"' 'iter(iterable)' 10000000 loops, best of 3: 0.099 usec per loop >>> python3 -m timeit -s 'iterable = ["a", "b", "c"]' 'iter(iterable)' 10000000 loops, best of 3: 0.1 usec per loop >>> python2 -m timeit -s 'iterable = "abc"' 'iter(iterable)' 10000000 loops, best of 3: 0.0913 usec per loop >>> python2 -m timeit -s 'iterable = ["a", "b", "c"]' 'iter(iterable)' 10000000 loops, best of 3: 0.0854 usec per loop No. No it is not. The difference is too small, especially for Python 3. So let's remove yet more unwanted overhead... by making the whole thing slower! The aim is just to have a longer iteration so the time hides overhead. >>> python3 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' '[x for x in iterable]' 100 loops, best of 3: 3.12 msec per loop >>> python3 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' '[x for x in iterable]' 100 loops, best of 3: 2.77 msec per loop >>> python2 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' '[x for x in iterable]' 100 loops, best of 3: 2.32 msec per loop >>> python2 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' '[x for x in iterable]' 100 loops, best of 3: 2.09 msec per loop This hasn't actually changed much, but it's helped a little. 
So remove the comprehension. It's overhead that's not part of the question: >>> python3 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'for x in iterable: pass' 1000 loops, best of 3: 1.71 msec per loop >>> python3 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'for x in iterable: pass' 1000 loops, best of 3: 1.36 msec per loop >>> python2 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'for x in iterable: pass' 1000 loops, best of 3: 1.27 msec per loop >>> python2 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'for x in iterable: pass' 1000 loops, best of 3: 935 usec per loop That's more like it! We can get slightly faster still by using deque to iterate. It's basically the same, but it's faster: >>> python3 -m timeit -s 'import random; from collections import deque; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 777 usec per loop >>> python3 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 405 usec per loop >>> python2 -m timeit -s 'import random; from collections import deque; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 805 usec per loop >>> python2 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 438 usec per loop What impresses me is that Unicode is competitive with bytestrings. We can check this explicitly by trying bytes and unicode in both: bytes >>> python3 -m timeit -s 'import random; from collections import deque; iterable = b"".join(chr(random.randint(0, 127)).encode("ascii") for _ in range(100000))' 'deque(iterable, maxlen=0)' :( 1000 loops, best of 3: 571 usec per loop >>> python3 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)).encode("ascii") for _ in range(100000)]' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 394 usec per loop >>> python2 -m timeit -s 'import random; from collections import deque; iterable = b"".join(chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 757 usec per loop >>> python2 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 438 usec per loop Here you see Python 3 actually faster than Python 2. 
unicode >>> python3 -m timeit -s 'import random; from collections import deque; iterable = u"".join( chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 800 usec per loop >>> python3 -m timeit -s 'import random; from collections import deque; iterable = [ chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 394 usec per loop >>> python2 -m timeit -s 'import random; from collections import deque; iterable = u"".join(unichr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 1.07 msec per loop >>> python2 -m timeit -s 'import random; from collections import deque; iterable = [unichr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 469 usec per loop Again, Python 3 is faster, although this is to be expected (str has had a lot of attention in Python 3). In fact, this unicode-bytes difference is very small, which is impressive. So let's analyse this one case, seeing as it's fast and convenient for me: >>> python3 -m timeit -s 'import random; from collections import deque; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 777 usec per loop >>> python3 -m timeit -s 'import random; from collections import deque; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'deque(iterable, maxlen=0)' 1000 loops, best of 3: 405 usec per loop We can actually rule out Tim Peters' 10-times-upvoted answer! >>> foo = iterable[123] >>> iterable[36] is foo True These are not new objects! But this is worth mentioning: indexing costs. The difference will likely be in the indexing, so remove the iteration and just index: >>> python3 -m timeit -s 'import random; iterable = "".join(chr(random.randint(0, 127)) for _ in range(100000))' 'iterable[123]' 10000000 loops, best of 3: 0.0397 usec per loop >>> python3 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'iterable[123]' 10000000 loops, best of 3: 0.0374 usec per loop The difference seems small, but at least half of the cost is overhead: >>> python3 -m timeit -s 'import random; iterable = [chr(random.randint(0, 127)) for _ in range(100000)]' 'iterable; 123' 100000000 loops, best of 3: 0.0173 usec per loop so the speed difference is sufficient to decide to blame it. I think. So why is indexing a list so much faster? Well, I'll come back to you on that, but my guess is that it's down to the check for interned strings (or cached characters if it's a separate mechanism). This will be less fast than optimal. But I'll go check the source (although I'm not comfortable in C...) :). So here's the source: static PyObject * unicode_getitem(PyObject *self, Py_ssize_t index) { void *data; enum PyUnicode_Kind kind; Py_UCS4 ch; PyObject *res; if (!PyUnicode_Check(self) || PyUnicode_READY(self) == -1) { PyErr_BadArgument(); return NULL; } if (index < 0 || index >= PyUnicode_GET_LENGTH(self)) { PyErr_SetString(PyExc_IndexError, "string index out of range"); return NULL; } kind = PyUnicode_KIND(self); data = PyUnicode_DATA(self); ch = PyUnicode_READ(kind, data, index); if (ch < 256) return get_latin1_char(ch); res = PyUnicode_New(1, ch); if (res == NULL) return NULL; kind = PyUnicode_KIND(res); data = PyUnicode_DATA(res); PyUnicode_WRITE(kind, data, 0, ch); assert(_PyUnicode_CheckConsistency(res, 1)); return res; } Walking from the top, we'll have some checks. 
These are boring. Then some assigns, which should also be boring. The first interesting line is ch = PyUnicode_READ(kind, data, index); but we'd hope that is fast, as we're reading from a contiguous C array by indexing it. The result, ch, will be less than 256 so we'll return the cached character in get_latin1_char(ch). So we'll run (dropping the first checks) kind = PyUnicode_KIND(self); data = PyUnicode_DATA(self); ch = PyUnicode_READ(kind, data, index); return get_latin1_char(ch); Where #define PyUnicode_KIND(op) \ (assert(PyUnicode_Check(op)), \ assert(PyUnicode_IS_READY(op)), \ ((PyASCIIObject *)(op))->state.kind) (which is boring because asserts get ignored in debug [so I can check that they're fast] and ((PyASCIIObject *)(op))->state.kind) is (I think) an indirection and a C-level cast); #define PyUnicode_DATA(op) \ (assert(PyUnicode_Check(op)), \ PyUnicode_IS_COMPACT(op) ? _PyUnicode_COMPACT_DATA(op) : \ _PyUnicode_NONCOMPACT_DATA(op)) (which is also boring for similar reasons, assuming the macros (Something_CAPITALIZED) are all fast), #define PyUnicode_READ(kind, data, index) \ ((Py_UCS4) \ ((kind) == PyUnicode_1BYTE_KIND ? \ ((const Py_UCS1 *)(data))[(index)] : \ ((kind) == PyUnicode_2BYTE_KIND ? \ ((const Py_UCS2 *)(data))[(index)] : \ ((const Py_UCS4 *)(data))[(index)] \ ) \ )) (which involves indexes but really isn't slow at all) and static PyObject* get_latin1_char(unsigned char ch) { PyObject *unicode = unicode_latin1[ch]; if (!unicode) { unicode = PyUnicode_New(1, ch); if (!unicode) return NULL; PyUnicode_1BYTE_DATA(unicode)[0] = ch; assert(_PyUnicode_CheckConsistency(unicode, 1)); unicode_latin1[ch] = unicode; } Py_INCREF(unicode); return unicode; } Which confirms my suspicion that: This is cached: PyObject *unicode = unicode_latin1[ch]; This should be fast. The if (!unicode) is not run, so it's literally equivalent in this case to PyObject *unicode = unicode_latin1[ch]; Py_INCREF(unicode); return unicode; Honestly, after testing the asserts are fast (by disabling them [I think it works on the C-level asserts...]), the only plausibly-slow parts are: PyUnicode_IS_COMPACT(op) _PyUnicode_COMPACT_DATA(op) _PyUnicode_NONCOMPACT_DATA(op) Which are: #define PyUnicode_IS_COMPACT(op) \ (((PyASCIIObject*)(op))->state.compact) (fast, as before), #define _PyUnicode_COMPACT_DATA(op) \ (PyUnicode_IS_ASCII(op) ? \ ((void*)((PyASCIIObject*)(op) + 1)) : \ ((void*)((PyCompactUnicodeObject*)(op) + 1))) (fast if the macro IS_ASCII is fast), and #define _PyUnicode_NONCOMPACT_DATA(op) \ (assert(((PyUnicodeObject*)(op))->data.any), \ ((((PyUnicodeObject *)(op))->data.any))) (also fast as it's an assert plus an indirection plus a cast). So we're down (the rabbit hole) to: PyUnicode_IS_ASCII which is #define PyUnicode_IS_ASCII(op) \ (assert(PyUnicode_Check(op)), \ assert(PyUnicode_IS_READY(op)), \ ((PyASCIIObject*)op)->state.ascii) Hmm... that seems fast too... Well, OK, but let's compare it to PyList_GetItem. (Yeah, thanks Tim Peters for giving me more work to do :P.) 
PyObject * PyList_GetItem(PyObject *op, Py_ssize_t i) { if (!PyList_Check(op)) { PyErr_BadInternalCall(); return NULL; } if (i < 0 || i >= Py_SIZE(op)) { if (indexerr == NULL) { indexerr = PyUnicode_FromString( "list index out of range"); if (indexerr == NULL) return NULL; } PyErr_SetObject(PyExc_IndexError, indexerr); return NULL; } return ((PyListObject *)op) -> ob_item[i]; } We can see that on non-error cases this is just going to run: PyList_Check(op) Py_SIZE(op) ((PyListObject *)op) -> ob_item[i] Where PyList_Check is #define PyList_Check(op) \ PyType_FastSubclass(Py_TYPE(op), Py_TPFLAGS_LIST_SUBCLASS) (TABS! TABS!!!) (issue21587) That got fixed and merged in 5 minutes. Like... yeah. Damn. They put Skeet to shame. #define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) #define PyType_FastSubclass(t,f) PyType_HasFeature(t,f) #ifdef Py_LIMITED_API #define PyType_HasFeature(t,f) ((PyType_GetFlags(t) & (f)) != 0) #else #define PyType_HasFeature(t,f) (((t)->tp_flags & (f)) != 0) #endif So this is normally really trivial (two indirections and a couple of boolean checks) unless Py_LIMITED_API is on, in which case... ??? Then there's the indexing and a cast (((PyListObject *)op) -> ob_item[i]) and we're done. So there are definitely fewer checks for lists, and the small speed differences certainly imply that it could be relevant. I think in general, there's just more type-checking and indirection (->) for Unicode. It seems I'm missing a point, but what? </s>
<s>[INST] Django 1.7 throws django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet This is the traceback on my windows system. Traceback (most recent call last): File "D:\AMD\workspace\steelrumors\manage.py", line 9, in <module> django.setup() File "D:\AMD\Django\django-django-4c85a0d\django\__init__.py", line 21, in setup apps.populate(settings.INSTALLED_APPS) File "D:\AMD\Django\django-django-4c85a0d\django\apps\registry.py", line 108, in populate app_config.import_models(all_models) File "D:\AMD\Django\django-django-4c85a0d\django\apps\config.py", line 197, in import_models self.models_module = import_module(models_module_name) File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module __import__(name) File "C:\Python27\lib\site-packages\registration\models.py", line 15, in <module> User = get_user_model() File "D:\AMD\Django\django-django-4c85a0d\django\contrib\auth\__init__.py", line 135, in get_user_model return django_apps.get_model(settings.AUTH_USER_MODEL) File "D:\AMD\Django\django-django-4c85a0d\django\apps\registry.py", line 199, in get_model self.check_models_ready() File "D:\AMD\Django\django-django-4c85a0d\django\apps\registry.py", line 131, in check_models_ready raise AppRegistryNotReady("Models aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet. And my manage.py looks like this: import os import sys import django if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "steelrumors.settings") django.setup() from django.core.management import execute_from_command_line execute_from_command_line(sys.argv) I get this error when i am trying to use registration app in Django 1.7 [/INST] Running these commands solved my problem (credit to this answer): import django django.setup() However I'm not sure why I need this. Comments would be appreciated. </s>
<s>[INST] Escaping regex string in Python I want to use input from a user as a regex pattern for a search over some text. It works, but how can I handle cases where the user puts in characters that have meaning in regex? For example, the user wants to search for Word (s): the regex engine will take the (s) as a group. I want it to be treated like the string "(s)". I can run replace on user input and replace the ( with \( and the ) with \) but the problem is I will need to do replace for every possible regex symbol. Do you know a better way? [/INST] Use the re.escape() function for this: 4.2.3 re Module Contents escape(string) Return string with all non-alphanumerics backslashed; this is useful if you want to match an arbitrary literal string that may have regular expression metacharacters in it. A simplistic example: search for any occurrence of the provided string, optionally followed by 's', and return the match object. def simplistic_plural(word, text): word_or_plural = re.escape(word) + 's?' return re.match(word_or_plural, text) </s>
<s>[INST] Python subprocess/Popen with a modified environment I believe that running an external command with a slightly modified environment is a very common case. That's how I tend to do it: import subprocess, os my_env = os.environ my_env["PATH"] = "/usr/sbin:/sbin:" + my_env["PATH"] subprocess.Popen(my_command, env=my_env) I've got a gut feeling that there's a better way; does it look alright? [/INST] I think os.environ.copy() is better if you don't intend to modify the os.environ for the current process: import subprocess, os my_env = os.environ.copy() my_env["PATH"] = "/usr/sbin:/sbin:" + my_env["PATH"] subprocess.Popen(my_command, env=my_env) </s>
<s>[INST] Add Variables to Tuple I am learning Python and creating a database connection. While trying to add to the DB, I am thinking of creating tuples out of information and then adding them to the DB. What I am Doing: I am taking information from the user and storing it in variables. Can I add these variables into a tuple? Can you please help me with the syntax? Also, if there is an efficient way of doing this, please share... EDIT Let me edit this question a bit...I only need the tuple to enter info into the DB. Once the information is added to the DB, should I delete the tuple? I mean I don't need the tuple anymore. [/INST] Tuples are immutable; you can't change which variables they contain after construction. However, you can concatenate or slice them to form new tuples: a = (1, 2, 3) b = a + (4, 5, 6) c = b[1:] And, of course, build them from existing values: name = "Joe" age = 40 location = "New York" joe = (name, age, location)
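For the database part, a common pattern is to pass that tuple straight to the DB API as query parameters. A minimal sketch with sqlite3 (the table and column names here are made up, not from the original question):

import sqlite3

conn = sqlite3.connect('people.db')
conn.execute('CREATE TABLE IF NOT EXISTS people (name TEXT, age INTEGER, location TEXT)')
conn.execute('INSERT INTO people (name, age, location) VALUES (?, ?, ?)', joe)
conn.commit()
conn.close()

There is no need to delete the tuple afterwards; once nothing references it any more, Python reclaims it automatically.
</s>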
<s>[INST] What is the fastest way to send 100,000 HTTP requests in Python? I am opening a file which has 100,000 URLs. I need to send an HTTP request to each URL and print the status code. I am using Python 2.6, and have so far looked at the many confusing ways Python implements threading/concurrency. I have even looked at the python concurrence library, but cannot figure out how to write this program correctly. Has anyone come across a similar problem? I guess generally I need to know how to perform thousands of tasks in Python as fast as possible - I suppose that means 'concurrently'. Thank you, Igor [/INST] Twistedless solution: from urlparse import urlparse from threading import Thread import httplib, sys from Queue import Queue concurrent = 200 def doWork(): while True: url = q.get() status, url = getStatus(url) doSomethingWithResult(status, url) q.task_done() def getStatus(ourl): try: url = urlparse(ourl) conn = httplib.HTTPConnection(url.netloc) conn.request("HEAD", url.path) res = conn.getresponse() return res.status, ourl except: return "error", ourl def doSomethingWithResult(status, url): print status, url q = Queue(concurrent * 2) for i in range(concurrent): t = Thread(target=doWork) t.daemon = True t.start() try: for url in open('urllist.txt'): q.put(url.strip()) q.join() except KeyboardInterrupt: sys.exit(1) This one is slightly faster than the twisted solution and uses less CPU. </s>