Q_Id: int64 (337 to 49.3M)
CreationDate: stringlengths (23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: stringlengths (6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: stringlengths (6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: stringlengths (15 to 29k)
Title: stringlengths (11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
13,960,897
2012-12-19T20:47:00.000
0
0
0
0
java,python,selenium,webdriver
13,961,695
5
false
1
0
For me it's just a language preference. There are bindings for other languages, but I believe they communicate with Webdriver via some sort of socket interface.
5
6
0
I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something. Thanks for any input!
Selenium Webdriver with Java vs. Python
0
0
1
12,904
13,960,897
2012-12-19T20:47:00.000
1
0
0
0
java,python,selenium,webdriver
13,961,645
5
false
1
0
You've got it spot on; there is a ton of documentation for Java. All the new feature implementations are mostly explained with Java. Even Stack Overflow has a pretty strong community for Java + Selenium.
5
6
0
I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something. Thanks for any input!
Selenium Webdriver with Java vs. Python
0.039979
0
1
12,904
13,960,897
2012-12-19T20:47:00.000
2
0
0
0
java,python,selenium,webdriver
13,961,711
5
true
1
0
Generally speaking, the Java selenium web driver is better documented. When I'm searching for help with a particular issue, I'm much more likely to find a Java discussion of my problem than a Python discussion. Another thing to consider is, what language does the rest of your code base use? If you're running selenium tests against a Java application, then it makes sense to drive your tests with Java.
5
6
0
I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something. Thanks for any input!
Selenium Webdriver with Java vs. Python
1.2
0
1
12,904
13,960,956
2012-12-19T20:51:00.000
0
0
0
0
python,pyramid
23,329,676
2
false
1
0
Not sure if it's still relevant to the OP, but I hit the same problem. The reason was that setup.py install simply didn't copy the font files, and the solution was to include all the font extensions in the MANIFEST.in file.
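As a sketch of that fix (assuming the fonts live under an assets/ directory in the package; adjust the paths and extensions to your layout):

    # MANIFEST.in: make sure setup.py picks up the font files
    recursive-include assets *.woff *.ttf *.eot *.svg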
1
1
0
I'm having trouble with a static view. It is configured to serve files from the 'assets' folder on the server, and works fine for the following: '/assets/img/hdr.png', '/assets/style/default.css'. However, when trying to serve a web font it always returns 404 Not Found (despite the fact that I have triple-checked the file is in the correct location ('/assets/font.woff')). Is there something additional I need to configure to allow non-img/css files to be served? config.add_static_view(name='assets', path='assets') Thanks
Web fonts always return 404 from static path
0
0
0
350
13,961,409
2012-12-19T21:22:00.000
1
0
0
0
python,django,authentication
13,964,128
2
false
1
0
What: I would actually implement that logic in an Authentication Backend. How: Use a specific, separate, Model to track login attempts, or, use the solution suggested by miko (fail2ban). Why: You de-couple authentication from users. Bonus: if you want to take advantage of the upcoming pluggable User models in Django, that's a good idea. On a side note, there probably is a way you can achieve an even "neater" solution by wrapping existing authentication backends to provide the required functionality.
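A minimal sketch of that backend idea (Django 1.4-era authenticate signature; LoginAttempt and its helper methods are hypothetical stand-ins for the separate attempt-tracking model suggested above):

    from django.contrib.auth.backends import ModelBackend
    from myapp.models import LoginAttempt  # hypothetical tracking model

    class LockoutBackend(ModelBackend):
        def authenticate(self, username=None, password=None):
            if LoginAttempt.is_locked(username):    # hypothetical helper
                return None                         # account temporarily locked
            user = super(LockoutBackend, self).authenticate(username, password)
            if user is None:
                LoginAttempt.record_failure(username)  # hypothetical helper
            else:
                LoginAttempt.clear(username)           # hypothetical helper
            return user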
2
2
0
I have a custom user model where I want to track the number of failed login attempts and take action based on that. I am wondering what would be the better place to write this logic. Following are the two options I have in mind for updating the *failed_attempts* field in the User model: the authenticate method in the backend, or the *check_password* method in the User model (I have overridden this method from the AbstractBaseUser model). The basic logic (it does not cover all cases) is like this: if authentication fails, check the time of the previous failed login attempt. If that was recent, increment the failed login count. If the count reaches the maximum attempts, lock the account for a few minutes (or do something else). My question is which would be the better place for writing this logic, and why.
Django, where to update failed login attempts and why?
0.099668
0
0
1,638
13,961,409
2012-12-19T21:22:00.000
1
0
0
0
python,django,authentication
13,961,497
2
true
1
0
Using only the details you list, I would say the authenticate method is more appropriate, if only because it would be very confusing if check_password updated fields on the model. Why, though, do you have both an authenticate method in the backend and a check_password method in the model?
2
2
0
I have a custom user model where I want to track the number of failed login attempts and take action based on that. I am wondering what would be the better place to write this logic. Following are the two options I have in mind for updating the *failed_attempts* field in the User model: the authenticate method in the backend, or the *check_password* method in the User model (I have overridden this method from the AbstractBaseUser model). The basic logic (it does not cover all cases) is like this: if authentication fails, check the time of the previous failed login attempt. If that was recent, increment the failed login count. If the count reaches the maximum attempts, lock the account for a few minutes (or do something else). My question is which would be the better place for writing this logic, and why.
Django, where to update failed login attempts and why?
1.2
0
0
1,638
13,962,888
2012-12-19T23:20:00.000
5
0
0
0
android,python,linux,notifications,kivy
14,005,492
1
true
0
1
Short answer, no… if you want to define such an API and implement it using platform-specific libs (libnotify on Ubuntu, and pyjnius on Android), that would be welcomed though. Others will help make it work on other platforms.
1
5
0
I want to create a cross-platform application (Ubuntu and Android) with a notification icon. Is there a standard way to create such an app using Kivy?
Kivy: crossplatform notification icon
1.2
0
0
641
13,965,403
2012-12-20T04:37:00.000
0
0
0
0
python,parsing,selenium,lxml,xpath
40,276,743
2
false
1
0
I prefer to use lxml, because lxml is much more efficient than Selenium when extracting large numbers of elements. You can use Selenium to get the source of web pages, and then parse the source with lxml's xpath instead of Selenium's native find_elements_by_xpath.
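A minimal sketch of that combination (the URL and xpath are placeholders):

    from selenium import webdriver
    from lxml import html

    driver = webdriver.Firefox()
    driver.get("http://example.com")                 # placeholder URL
    tree = html.fromstring(driver.page_source)       # parse the rendered source
    for link in tree.xpath("//a/@href"):             # placeholder xpath
        print(link)
    driver.quit()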
1
1
0
So I have been trying to figure out how to use BeautifulSoup, and did a quick search and found that lxml can parse the xpath of an html page. I would LOVE if I could do that, but the tutorial isn't that intuitive. I know how to use Firebug to grab the xpath, and was curious if anyone has used lxml and can explain how I can use it to parse specific xpaths and print them.. say 5 per line.. or if it's even possible?! Selenium is using Chrome and loads the page properly; I just need help moving forward. Thanks!
Can I parse xpath using python, selenium and lxml?
0
0
1
1,853
13,965,684
2012-12-20T05:09:00.000
8
0
0
0
python,scrapy
13,967,095
2
true
1
0
HTTP status code 503, "Service Unavailable", means that (for some reason) the server wasn't able to process your request. It's usually a transient error. If you want to know whether you have been blocked, just try again in a little while and see what happens. It could also mean that you're fetching pages too quickly. The fix is to not do this: keep concurrent requests at 1 (and possibly add a delay). Be polite. And you will encounter various errors if you scrape enough sites. Just make sure that your crawler can handle them.
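As a sketch of that advice in Scrapy's settings.py (the values are illustrative and should be tuned for the target site):

    # Throttle the crawl so the server stops returning 503s
    CONCURRENT_REQUESTS = 1
    DOWNLOAD_DELAY = 2          # seconds between requests
    RETRY_HTTP_CODES = [503]    # keep retrying transient "Service Unavailable" errors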
1
5
0
I am trying to crawl a forum website with scrapy. The crawler works fine if I have CONCURRENT_REQUESTS = 1 But if I increase that number then I get this error 2012-12-21 05:04:36+0800 [working] DEBUG: Retrying http://www.example.com/profile.php?id=1580> (failed 1 times): 503 Service Unavailable I want to know if the forum is blocking the request or there is some settings problem.
Getting service unavailable error in scrapy crawling
1.2
0
1
14,588
13,971,997
2012-12-20T12:13:00.000
0
0
1
1
python,macos,pip,zsh,easy-install
13,974,516
1
true
0
0
Thanks for the hint, @Evert. I had to add /usr/local/share/python to my $PATH and now everything works fine.
1
1
0
I use OS X (Mountain Lion) and ZSH. I can use easy_install to install some python packages but if I want to use the command in my ZSH afterwards I just get something like this: zsh: command not found: virtualenv Have I forgotten to include anything to my $PATH or so? Hope you can help me out :)
How to use packages installed via easy_install in zsh?
1.2
0
0
1,499
13,973,309
2012-12-20T13:31:00.000
1
0
0
0
python,google-app-engine,transactions,google-cloud-datastore,app-engine-ndb
13,975,243
1
true
0
0
There's an approximate performance limit of 1 write transaction per second to an entity group. The whole group does get locked for the update. A subsequent transaction will fail and retry. 10k entities in an entity group sounds like a lot, but it really depends on your write patterns. For example, if only a few entities in the group are ever updated, it may not be a big issue. However, if random users are constantly updating random entities in the group, you'll want to split it up into more entity groups.
1
0
0
I have several tens of thousands of related small entities (NDB atop of Master-Slave, will have to move to HRD one day..), which I'd like to put in the same entity group to enable transactions. Small subsets of those entities will be updated by transactions. What are the performance implications of this setup? Does it mean the whole group gets locked during the update? I.e. one transaction at a time. Thanks!
performance implications of transactions on large entity groups (python, NDB, Master/Slave)
1.2
0
0
479
13,976,588
2012-12-20T16:35:00.000
4
0
1
0
python,encoding,utf-8,character-encoding,byte
13,976,824
3
false
0
0
It is very easy to see in a UTF-8 stream whether a given byte is at the start (or not) of a given character's byte sequence. If the byte is of the form 10xxxxxx then it is a non-initial byte of a character, if the byte is of the form 0xxxxxxx it is a single-byte character, and other bytes are the initial bytes of a multi-byte character. As such, you can build your own function without too much difficulty. Just ensure that the last byte you add to your field is either of the form 0xxxxxxx, or is of the form 10xxxxxx where the next byte (which you're not adding) is not of the form 10xxxxxx. I.e. make sure you've just added a one-byte UTF-8 character or the last byte of a multi-byte UTF-8 character. You can then add NULL bytes to fill in the rest of your field.
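A minimal sketch of such a function (Python 3 bytes semantics; rather than checking the bit patterns by hand, it leans on the UTF-8 codec to drop a trailing partial sequence, which is equivalent for valid input):

    def fit_utf8(s, size):
        # Reserve one byte for the NULL terminator (assumes size >= 1).
        data = s.encode("utf-8")[:size - 1]
        # Decoding with errors="ignore" silently drops a split trailing character.
        data = data.decode("utf-8", "ignore").encode("utf-8")
        # NULL-terminate and NULL-pad out to the fixed field size.
        return data.ljust(size, b"\x00")

    assert fit_utf8(u"caf\u00e9", 5) == b"caf\x00\x00"  # the 2-byte é is dropped whole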
1
3
0
I have a Python project where I have a fixed byte-length text field (NOT FIXED CHAR-LENGTH FIELD) in a comm protocol that contains a utf-8 encoded, NULL padded, NULL terminated string. I need to ensure that a string fits into the fixed byte-length field. Since utf-8 is a variable width encoding, this makes using brute force to truncate the string at a fixed byte length dicey since you could possibly leave part of a multi-byte character dangling at the end. Is there a module/method/function/etc that can help me with truncating utf-8 variable width encoded strings to a fixed byte-length? Something that does Null padding and termination would be a bonus. This seems like a nut that would have already been cracked. I don't want to reinvent something if it already exists.
Fixed length data field and variable length utf-8 encoding
0.26052
0
0
1,704
13,976,625
2012-12-20T16:38:00.000
0
0
0
0
python,django,windows,emacs,ropemacs
14,254,108
3
true
1
0
As I wrote in the comments and as David said, I resorted to using Jedi.
1
0
0
In a newly created Django project I'm using ropemacs to get semantic completions and refactoring functionality. But it seems that every time I enter a character that triggers a completion list check, the buffer freezes for about a second, sometimes two. I heard that ropemacs can be slow on big projects, but is a fresh Django project considered big in this respect? I'm using YAS, rope, autocomplete and python-mode (https://launchpad.net/python-mode). In the modes section I have "Py Outl yas Rope AC"; I'm not exactly sure where Outl came from or what it does.
ropemacs on windows with django project is very slow
1.2
0
0
158
13,982,983
2012-12-21T01:14:00.000
0
0
0
0
python,machine-learning,libsvm,scikit-learn,scikits
13,986,712
1
true
0
0
Not without going into the Cython code, I'm afraid. This has been on the todo list for way too long. Any help with it would be much appreciated. It shouldn't be too hard, I think.
1
2
1
sklearn.svm.SVC doesn't give the indices of the support vectors for a sparse dataset. Is there any hack/way to get the indices of the SVs?
sklearn.svm.SVC doesn't give the index of support vectors for sparse dataset?
1.2
0
0
265
13,984,066
2012-12-21T03:58:00.000
1
0
0
0
python,user-controls,while-loop,pygame
14,188,676
5
false
0
1
To tell if the user clicked on the rect, create a 1-by-1 rect at the mouse position and, when they click, check whether the two rects collide. Then just run the loop again by making it its own function, as DeadChex said.
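A sketch of that idea as an event-loop fragment (play_again_rect and main() stand for the asker's prompt rect and game function, which are not shown in the question):

    import pygame

    for event in pygame.event.get():
        if event.type == pygame.MOUSEBUTTONDOWN:
            mouse_rect = pygame.Rect(event.pos, (1, 1))   # 1x1 rect at the click
            if mouse_rect.colliderect(play_again_rect):
                main()                                    # restart the game loop
        elif event.type == pygame.KEYDOWN and event.key == pygame.K_y:
            main()                                        # "y" for yes also restarts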
2
4
0
I have my game running in a while True loop and I would like to be able to ask the user to "Play again?" I already have the code for a rect to pop up with the text, but I need a way for the user to click on the rect, or hit y for yes, and have the code run itself again.
Pygame restart?
0.039979
0
0
25,890
13,984,066
2012-12-21T03:58:00.000
2
0
0
0
python,user-controls,while-loop,pygame
13,984,092
5
false
0
1
Make the loop its own method and call it from main(). When the game is over and ends, have main() ask the user if they want to play again, and if so call the main loop again after resetting everything to its defaults. Alternatively, have the part that asks clear and reinitialize all variables to their defaults, and re-enter the loop if the user wants to play again.
2
4
0
I have my game running in a while True loop and I would like to be able to ask the user to "Play again?" I already have the code for a rect to pop up with the text, but I need a way for the user to click on the rect, or hit y for yes, and have the code run itself again.
Pygame restart?
0.07983
0
0
25,890
13,984,200
2012-12-21T04:16:00.000
0
0
1
0
twisted,ironpython
13,993,729
2
false
0
0
I doubt Twisted will work on IronPython without some major work. Twisted is pretty complicated. Unless somebody tries it, though, it'll never happen.
1
0
0
I am planning on porting a Python 2.7 app over to IronPython 2.7+. I saw in 2009 that it was being worked on... did this get finished? Or do the Twisted libraries work on IronPython... is there a workaround, or should I stop now? Thanks in advance.
Twisted Libraries for IronPython
0
0
0
430
13,989,166
2012-12-21T11:18:00.000
0
0
1
0
python,numpy,matplotlib,pylot
15,980,514
1
false
0
0
I had the exact same problem. I spent some time on it today debugging a few things, and I realized the problem in my case was that the data collected to plot the charts wasn't correct and needed adjusting. What I did was change the time from absolute to relative and dynamically adjust the range of the axis. I'm not that good at Python, so my code doesn't look that good.
1
0
1
I'm using Pylot 1.26 with Python 2.7 on Windows 7 64bit having installed Numpy 1.6.2 and Matplotlib 1.1.0. The test case executes and produces a report but the response time graph is empty (no data) and the throughput graph is just one straight line. I've tried the 32 bit and 64 bit installers but the result is the same.
Why are my Pylot graphs blank?
0
0
0
682
13,989,304
2012-12-21T11:27:00.000
14
0
1
0
python
13,989,367
2
false
0
0
You're right that defining __init__ in a subclass overrides the superclass's __init__, but you can always use super(CurrentClass, self).__init__ to call the superclass's constructor from the subclass. So, you don't have to "manually" duplicate the superclass's initialization work. As a side note, even though Python doesn't support method overloading, it supports default arguments (in addition to optional arguments via *args and **kwargs), which means you can easily emulate the behavior of overloaded functions by simply accepting different subsets of arguments in your function/method implementation.
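A minimal example of that pattern, where the subclass extends rather than repeats the parent's initialization:

    class Base(object):
        def __init__(self, name):
            self.name = name

    class Child(Base):
        def __init__(self, name, age):
            super(Child, self).__init__(name)  # reuse Base's initialization
            self.age = age

    c = Child("alice", 3)
    print(c.name, c.age)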
1
7
0
I was going through DiveIntoPython and came across this: Java and Powerbuilder support function overloading by argument list, i.e. one class can have multiple methods with the same name but a different number of arguments, or arguments of different types. Other languages (most notably PL/SQL) even support function overloading by argument name; i.e. one class can have multiple methods with the same name and the same number of arguments of the same type but different argument names. Python supports neither of these; it has no form of function overloading whatsoever. Methods are defined solely by their name, and there can be only one method per class with a given name. So if a descendant class has an __init__ method, it always overrides the ancestor __init__ method, even if the descendant defines it with a different argument list. And the same rule applies to any other method. Isn't this a major disadvantage that a subclass's __init__ method will always override a superclass's __init__ method? So if I'm initializing some variables and calling some functions in a class class1's __init__, then I derive a subclass class2(class1) of it, I'd have to reinitialize all of class1's variables and call those functions in class2's __init__? I'm pretty sure I'm misunderstanding all this, so it'd be great if someone clarifies this up.
No constructor overloading in Python - Disadvantage?
1
0
0
5,651
13,989,640
2012-12-21T11:49:00.000
217
0
1
0
python,regex
13,989,661
7
false
0
0
A . in regex is a metacharacter, it is used to match any character. To match a literal dot in a raw Python string (r"" or r''), you need to escape it, so r"\."
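For example (the sample address is a made-up stand-in for the redacted one in the question):

    import re

    text = "blah blah blah test.this@example.com blah blah"
    m = re.search(r"(\w+\.\w+)@", text)  # word chars, a literal dot, word chars, then @
    if m:
        print(m.group(1))  # -> test.this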
1
133
0
Was wondering what the best way is to match "test.this" from "blah blah blah [email protected] blah blah"? Using Python. I've tried re.split(r"\b\w.\w@")
Regular expression to match a dot
1
0
0
328,921
13,989,737
2012-12-21T11:56:00.000
2
0
0
0
javascript,browser,python-idle
13,989,768
3
true
1
0
Trap the window.onblur event. It's raised whenever the current window (or tab) loses focus.
2
6
0
Is there some way, using JavaScript, to detect that a user has switched to a different tab in the same browser window? Additionally, is there a way to detect that a user has switched to a different window than the browser? Thank you.
Detect change of browser tabs with javascript
1.2
0
1
3,555
13,989,737
2012-12-21T11:56:00.000
1
0
0
0
javascript,browser,python-idle
13,989,766
3
false
1
0
Most probably there is no standard JavaScript for this. Some browsers might support it, but normally there is only a window.onblur event to find out that the user has gone away from the current window.
2
6
0
Is there some way, using JavaScript, to detect that a user has switched to a different tab in the same browser window? Additionally, is there a way to detect that a user has switched to a different window than the browser? Thank you.
Detect change of browser tabs with javascript
0.066568
0
1
3,555
13,991,387
2012-12-21T13:51:00.000
1
1
0
0
python,twitter
13,991,409
1
false
0
0
Use a different font, or a better method of displaying those. All tweets in the streaming API are encoded with the same codec (JSON data is fully unicode aware), but not all characters can be displayed by all fonts.
1
0
0
Using the Twitter Streaming API, I'm getting tweets from a specific query. However, some tweets come back with a different encoding (there are boxes instead of words). Is there any way to fix it?
Tweets from Twitter Streaming API
0.197375
0
1
228
13,991,871
2012-12-21T14:24:00.000
3
0
0
0
python,google-app-engine
13,992,083
1
true
1
0
It makes absolutely no difference. Firstly, however, you should realize that there's no need to have a separate file for each model. It's perfectly normal to have several model classes in one models.py file. The division is between separate apps, each of which group together related models. Secondly, you should also realize that unless you have a specific need to add extra data on the many-to-many relationship, you don't need to create a link table. Django will take care of that for you once you define a ManyToManyField.
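The answer is describing Django's ManyToManyField; as a sketch of that suggestion (the model names A and B mirror the question and are otherwise hypothetical):

    from django.db import models

    class B(models.Model):
        name = models.CharField(max_length=50)

    class A(models.Model):
        name = models.CharField(max_length=50)
        bs = models.ManyToManyField(B)  # Django creates the join table for you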
1
0
0
I have 2 models: a and b. I want to build a many-to-many relationship, and I will use the link model method, thus I need to create an a_to_b_membership model. The question is: should I put the model class in the a model file? The b model file? Or create a new model file? If I need to create a new model file, how should I name it?
What class file to create for a link model used for a many to many relationship in google app engine
1.2
0
0
38
13,991,995
2012-12-21T14:32:00.000
1
0
1
0
python,date,converter
13,992,058
3
false
0
0
You can use strftime for output (your format is "%Y-%m-%d"). For parsing input there's a corresponding function, strptime. But you won't be able to handle "any format". You have to know what you're getting in the first place; otherwise you wouldn't be able to tell the difference between (for example) American and other dates. What does 01.02.03 mean, for example? It could be yy.mm.dd, dd.mm.yy or mm.dd.yy.
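A minimal sketch of the parse-then-format round trip, once the input format is known:

    from datetime import datetime

    # Parse a known input format, then emit yyyy-mm-dd.
    d = datetime.strptime("21/12/2012", "%d/%m/%Y")
    print(d.strftime("%Y-%m-%d"))  # -> 2012-12-21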
1
0
0
I'm quite new to Python and don't know much about it, but I need to make a small script that, when someone inputs a date in any format, converts it into yyyy-mm-dd format. The script should be able to share elements of the entered date and identify patterns. It might be easy and obvious to some, but making one by myself is over my head. Thanks in advance!
Python script: convert random date formats to fixed yyyy-mm-dd
0.066568
0
0
2,191
13,997,263
2012-12-21T21:14:00.000
0
0
0
1
python,locking,celery,apache-zookeeper,kazoo
13,997,307
2
true
0
0
Killing a process with a kill signal will do nothing to clear "software locks" such as ZooKeeper locks. The only kind of locks cleared by a KILL signal are OS-level locks, since all file descriptors are closed, and file descriptor locks are therefore released as well. But as far as ZooKeeper is concerned, those are not OS-level locks (if only because the ZooKeeper process, even on the same machine, is not your Python process). It is therefore not a bug in ZooKeeper, but the expected behavior of your kill -9.
1
8
0
I am using celery and zookeeper (kazoo lock) to lock my workers. I have a problem when I kill (-9) one of the workers before releasing the lock then that lock stays locked forever. So my question is: Does killing the process release locks in that process or is this some bug in zookeeper?
zookeeper lock stayed locked
1.2
0
0
3,997
13,998,774
2012-12-22T00:04:00.000
1
0
1
1
python,linux,unix,process
13,998,816
2
true
0
0
You can go through /proc/<pid>/cmdline to get the running process names. You need to list the files in /proc and filter the numerical ones for getting access to list of the processes running on your system. However I wouldn't call this accessing "all possible running processes" because that would include kernel threads as well.
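A minimal sketch of that approach (Linux-only):

    import os

    # PIDs are exactly the all-digit entries under /proc.
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/cmdline" % pid, "rb") as f:
                # cmdline args are NUL-separated; join them for display.
                cmd = f.read().replace(b"\x00", b" ").strip()
            print(pid, cmd.decode(errors="replace"))
        except IOError:
            pass  # process exited between listing and reading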
1
0
0
I need to get a list of all possible running processes(whether they are stopped currently or not) from the system, without keeping a record myself. I was wondering if there is a better way to get a list of these processes in python without having to do the dreaded subprocess output parsing of an initctl list call.
Is there a function in python similar to calling `initctl list`?
1.2
0
0
214
13,999,970
2012-12-22T04:07:00.000
0
0
0
0
python,qt,user-interface,qt4,pyqt
14,000,161
2
true
1
1
You need to connect your view to the dataChanged ( const QModelIndex & topLeft, const QModelIndex & bottomRight ) signal of the model: This signal is emitted whenever the data in an existing item changes. If the items are of the same parent, the affected ones are those between topLeft and bottomRight inclusive. If the items do not have the same parent, the behavior is undefined. When reimplementing the setData() function, this signal must be emitted explicitly.
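A minimal sketch (PyQt4-era, new-style signals; ChartView is a stand-in for your custom view):

    from PyQt4 import QtGui

    class ChartView(QtGui.QWidget):
        def setModel(self, model):
            self._model = model
            model.dataChanged.connect(self.update)  # repaint on edits via setData()
            model.modelReset.connect(self.update)   # repaint on bulk resets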
1
3
0
By default, the built-in views in PyQt auto-refresh themselves when their model is updated. I wrote my own chart view, but I don't know how to do this; I have to update it manually many times. Which signal should I use?
PyQt - Automatically refresh a custom view when the model is updated?
1.2
0
0
5,032
14,000,083
2012-12-22T04:34:00.000
1
0
1
0
list,python-2.7,indexing
14,000,721
2
false
0
0
Let's use a list with the following values: '1', '2', '3', '4', '5', '6'. Steps to get the negative index of any value: Step 1. Get the normal_index of the value. For example, the normal index of value '4' is 3. Step 2. Get the count of the list. In our example the list_count is 6. Step 3. Get the negative index of the requested value: negative_index = normal_index - list_count, which is -3.
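The general relationship in code:

    lst = ['1', '2', '3', '4', '5', '6']
    normal_index = lst.index('4')             # 3
    negative_index = normal_index - len(lst)  # 3 - 6 == -3
    assert lst[negative_index] == lst[normal_index]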
1
2
0
I have a string of variable length and I know the index position is 25. As the string is variable in length (>= 25), I need a way to locate the negative index of that same position for easier data manipulation. Do you have any idea how this can be done?
How do you find the negative index of a position if you know the index?
0.099668
0
0
67
14,001,116
2012-12-22T08:02:00.000
6
0
0
0
python,django,postgresql,heroku,psycopg2
59,063,813
2
false
0
0
On macOS Mojave, I solved it by running the steps below:

pip uninstall psycopg2
pip install psycopg2-binary
1
1
0
I'm getting this error when trying to run python / django after installing psycopg2: Error: dlopen(/Users/macbook/Envs/medint/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _PQbackendPID Referenced from: /Users/macbook/Envs/medint/lib/python2.7/site-packages/psycopg2/_psycopg.so Expected in: dynamic lookup Anyone?
Psycopg2 Symbol not found: _PQbackendPID Expected in: dynamic lookup
1
1
0
3,182
14,001,216
2012-12-22T08:20:00.000
1
0
0
1
python,node.js,express,ipc
14,001,337
1
true
0
0
I would use a message-passing service such as RabbitMQ or even ZeroMQ to notify, or have the Node.js process poll for this notification. So the Python process would do its processing, then send a message out; from there the Node.js process would read this message and know that it can do its job and process the data in MongoDB.
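A sketch of the Python side publishing such a notification with pika (a RabbitMQ client); the queue name and payload are placeholders, and Node.js would consume the same queue (e.g. with a client such as amqplib):

    import json
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="pairing_done")          # placeholder queue name
    ch.basic_publish(exchange="", routing_key="pairing_done",
                     body=json.dumps({"connection_id": 42, "status": "done"}))
    conn.close()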
1
1
0
We have two components: (1) Producer/Consumer, (2) Process. Producer/Consumer is I/O-intensive and does nothing but take web requests and make entries in MongoDB based on input params. Process is a separate process (in Python) which processes data from MongoDB and groups them (makes pairs). This pairing can take a little time, and once pairing is done we want to notify Node that, for a given connection, "processing is done", so Node can send data back to the client. I am not sure how to notify Node's connection that the process is done and this is the output.
Producer/Consumer + Worker arch with Node.js/python
1.2
0
0
669
14,002,357
2012-12-22T11:15:00.000
2
0
0
0
python,gzip
14,004,771
2
true
1
0
The compressed data is the same each time. The only thing that differs is likely the modification time in the header. The fifth argument of GzipFile (if that's what you're using) allows you to specify the modification time in the header. The first argument is the file name, which also goes in the header, so you want to keep that the same. If you provide a fourth argument for the source data, then the first argument is used only to populate the file name portion of the header.
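A minimal sketch of the fix (Python 2.7+/3.x, where GzipFile accepts the mtime keyword; gzip_bytes is a made-up helper name):

    import gzip
    import io

    def gzip_bytes(data):
        buf = io.BytesIO()
        # filename='' and mtime=0 keep the gzip header constant across runs
        with gzip.GzipFile(filename="", mode="wb", fileobj=buf, mtime=0) as gz:
            gz.write(data)
        return buf.getvalue()

    assert gzip_bytes(b"hello") == gzip_bytes(b"hello")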
1
1
0
I'm writing a script in python for deploying static sites to aws (s3, cloudfront, route53). Because I don't want to upload every file on every deploy, I check which files were modified by comparing their md5 hash with their e-tag (which s3 sets to be the object's md5 hash). This works well for all files except for those that my build script gzips before uploading. Taking a look inside the files, it seems like gzip isn't really a pure function; there are very slight differences in the output file every time gzip is run, even if the source file hasn't changed. My question is this: is there any way to get gzip to reliably and repeatably output the exact same file given the exact same input? Or am I better off just checking if the file is gzipped, unzipping it and computing the md5 hash/manually setting the e-tag value for it instead?
Repeatably gzip files in python
1.2
0
0
486
14,003,386
2012-12-22T13:51:00.000
1
0
0
0
python,windows
14,003,444
1
false
1
0
Use sys.getfilesystemencoding(). That should allow you to convert all paths that look ok. However, there can always be illegally-encoded files or folders, you have to think how to deal with those in the framework of your application. Some apps may ignore such files, others keep name as a binary blob.
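A sketch of that advice (Python 2, as in the question; filepath stands for the byte-string path being decoded):

    import sys

    fs_enc = sys.getfilesystemencoding()               # e.g. 'mbcs' on Windows
    unicode_path = filepath.decode(fs_enc, "replace")  # 'replace' survives bad names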
1
0
0
I'm writing a small application which saves file paths to a database (using django). I assumed file paths are utf-8 encoded, but I ran into the following file name: C:\FXG™.nfo which is apparently not encoded in utf-8. When I do filepath.decode('utf-8') I get the following error: UnicodeDecodeError: 'utf8' codec can't decode byte 0x99 in position 30: invalid start byte (I trimmed the file name, so the position is wrong here). How do I know how the file paths are encoded in a way that this will work for every file name?
File path encoding in Windows
0.197375
0
0
1,880
14,004,036
2012-12-22T15:31:00.000
10
0
1
1
python,process
14,004,255
1
true
0
0
Processes and native OS threads are only bound to specific processors if somebody specifically requests that to happen. By default, processes and threads can (and will) be scheduled on any available processor. Modern operating systems use pre-emptive multi-threading and can interrupt a thread's execution at any moment. When that thread is next scheduled to run, it can be executed on a different processor. This is known as a context switch. The thread's entire execution context is stored away by the operating system, and when the thread is re-scheduled, the execution context is restored. Because of all this, it makes no real sense to ask which processor your thread is executing on, since the answer can change at any moment, even during the execution of the very function that queries the current thread's processor. Again, by default, there's no relationship between the processors that two separate processes execute on. The two processes could execute on the same processor, or on different processors. It all depends on how the OS decides to schedule the different threads. In the comments you state: "The Python process will execute on only one core due to the GIL lock." That statement is simply incorrect. For example, a section of Python code could claim the GIL, get context-switched around all the available processors, and then release the GIL. Right at the start of the answer I alluded to the possibility of binding a process or thread to a particular processor. For example, on Windows you can use SetProcessAffinityMask and SetThreadAffinityMask to do this. However, it is unusual to do this. I can only recall ever doing it once, and that was to ensure that an execution of CPUID ran on a specific processor. In the normal run of things, processes and threads have affinity with all processors. In another comment you say: "I am creating the child processes to use multiple cores of the CPU." In which case you have nothing to worry about. Typically you would create as many processes as there are logical processors. The OS scheduler is sensible and will schedule each process to run on a different processor, and thus make optimal use of the available hardware resources.
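For completeness, a small sketch of inspecting or setting affinity from Python on Linux (os.sched_getaffinity and os.sched_setaffinity exist on Linux from Python 3.3; on Windows you would go through the APIs named above):

    import os

    print(os.sched_getaffinity(0))  # CPUs this process may run on, e.g. {0, 1, 2, 3}
    os.sched_setaffinity(0, {0})    # pin the current process to CPU 0 (rarely advisable)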
1
2
0
How do I know to which core my Python process has been bound? Along these same lines, are child processes going to execute on the same core (i.e. CPU) on which the parent is currently executing?
How do I know to which core my Python process has been bound?
1.2
0
0
2,272
14,004,835
2012-12-22T17:13:00.000
1
0
0
1
python,ncurses,curses
72,080,593
3
false
0
1
While I didn't use curses in Python, I am currently working with it in C99, compiled using clang on macOS Catalina. It seems that nodelay() does not work unless you slow down the program step to at least 1/10 of a second, e.g. usleep(100000). I suppose that buffering/buffer reading is not fast enough, and getch() or wgetch(win*) simply doesn't manage to get the keyboard input, which somehow causes it to fail (no message whatsoever, not even a "Segmentation fault"). For this reason, it's better to use halfdelay(1), which equals nodelay(win*, true) combined with usleep(100000). I know this is a very old thread (2012), but the problem is still present in 2022, so I decided to reply.
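In Python's curses the same caution applies: with nodelay(1) set, getch() returns -1 (curses.ERR) when no key is pending, and code like chr(sc.getch()) will raise on that -1. A minimal sketch of guarded non-blocking input (the quit-on-q logic is just an example):

    import curses
    import time

    def main(sc):
        sc.nodelay(1)
        while True:
            ch = sc.getch()
            if ch == -1:         # curses.ERR: no key pending, keep looping
                time.sleep(0.1)
                continue
            if ch == ord("q"):   # example exit key
                break

    curses.wrapper(main)  # wrapper handles initscr()/endwin() cleanup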
2
7
0
I've written a curses program in Python. It runs fine. However, when I use nodelay(), the program exits straight away after starting in the terminal, with nothing shown at all (just a new prompt). EDIT: This code will reproduce the bug:

sc = curses.initscr()
sc.nodelay(1)  # But removing this line allows the program to run properly
for angry in range(20):
    sc.addstr(angry, 1, "hi")

Here's my full code:

import curses, time, sys, random

def paint(x, y, i):
    #...

def string(s, y):
    #...

def feed():
    #...

sc = curses.initscr()
curses.start_color()
curses.curs_set(0)
sc.nodelay(1)

#########################################
# vars + colors inited

for angry in range(20):
    try:
        dir = chr(sc.getch())
        sc.clear()
        feed()
        # lots of ifs
        body.append([x, y])
        body.pop(0)
        for point in body:
            paint(*point, i=2)
        sc.move(height-1, 1)
        sc.refresh()
        time.sleep(wait)
    except Exception as e:
        print sys.exc_info()[0], e

sc.getch()
curses.beep()
curses.endwin()

Why is this happening, and how can I use nodelay() safely?
nodelay() causes python curses program to exit
0.066568
0
0
5,827
14,004,835
2012-12-22T17:13:00.000
0
0
0
1
python,ncurses,curses
14,006,585
3
false
0
1
I see no difference when running your small test program with or without the sc.nodelay() line. Neither case prints anything on the screen...
2
7
0
I've written a curses program in Python. It runs fine. However, when I use nodelay(), the program exits straight away after starting in the terminal, with nothing shown at all (just a new prompt). EDIT: This code will reproduce the bug:

sc = curses.initscr()
sc.nodelay(1)  # But removing this line allows the program to run properly
for angry in range(20):
    sc.addstr(angry, 1, "hi")

Here's my full code:

import curses, time, sys, random

def paint(x, y, i):
    #...

def string(s, y):
    #...

def feed():
    #...

sc = curses.initscr()
curses.start_color()
curses.curs_set(0)
sc.nodelay(1)

#########################################
# vars + colors inited

for angry in range(20):
    try:
        dir = chr(sc.getch())
        sc.clear()
        feed()
        # lots of ifs
        body.append([x, y])
        body.pop(0)
        for point in body:
            paint(*point, i=2)
        sc.move(height-1, 1)
        sc.refresh()
        time.sleep(wait)
    except Exception as e:
        print sys.exc_info()[0], e

sc.getch()
curses.beep()
curses.endwin()

Why is this happening, and how can I use nodelay() safely?
nodelay() causes python curses program to exit
0
0
0
5,827
14,006,363
2012-12-22T20:30:00.000
2
0
0
0
python,fabric,boto,data-processing
14,012,685
5
true
0
0
I often use a combination of SQS/S3/EC2 for this type of batch work. Queue up messages in SQS for all of the work that needs to be performed (chunked into some reasonably small chunks). Spin up N EC2 instances that are configured to start reading messages from SQS, performing the work and putting results into S3, and then, and only then, delete the message from SQS. You can scale this to crazy levels and it has always worked really well for me. In your case, I don't know if you would store results in S3 or go right to PostgreSQL.
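A sketch of that worker loop, written against boto3 (the successor to the boto mentioned in the question); the queue URL and process() are placeholders:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/etl-chunks"  # hypothetical

    while True:
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=20)  # long poll
        for msg in resp.get("Messages", []):
            process(msg["Body"])  # hypothetical: do the ETL work, write results out
            # Delete only after the work (and any upload) succeeded.
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])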
4
5
0
I'm a Python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx. 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data, it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that). Note: the processing is sort of your basic ETL type of stuff, nothing terribly fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts on top of it. But, again, from my initial testing, this would be way too slow. Note: a brand new PostgreSQL 9.1 database will be its final destination. So, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But I have never done something like this before, so I've been looking around for ideas, etc. Again, I'm a Python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but I have no experience with Fabric. I know from reading/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire that done, and the timeline doesn't allow for a learning curve or hiring someone. I should also note that it's kind of a one-time deal, so I don't need to build a really elegant solution. I just need it to work and be able to get through all of the data by the end of the year. Also, I know this is not a simple stackoverflow-kind of question (something like "how can I reverse a list in python"). But what I'm hoping for is someone to read this and say, "I do something similar and use XYZ... it's great!" I guess what I'm asking is: does anybody know of anything out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java, and have a tight timeline that prevents me from learning a new technology like Hadoop or a new language)? Thanks for reading. I look forward to any suggestions.
Processing a large amount of data in parallel
1.2
1
0
1,819
14,006,363
2012-12-22T20:30:00.000
1
0
0
0
python,fabric,boto,data-processing
14,009,860
5
false
0
0
You might benefit from Hadoop in the form of Amazon Elastic MapReduce. Without getting too deep, it can be seen as a way to apply some logic to massive data volumes in parallel (the Map stage). There is also a Hadoop technology called Hadoop Streaming, which enables you to use scripts/executables in any language (like Python). Another Hadoop technology you may find useful is Sqoop, which moves data between HDFS and an RDBMS.
4
5
0
I'm a Python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx. 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data, it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that). Note: the processing is sort of your basic ETL type of stuff, nothing terribly fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts on top of it. But, again, from my initial testing, this would be way too slow. Note: a brand new PostgreSQL 9.1 database will be its final destination. So, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But I have never done something like this before, so I've been looking around for ideas, etc. Again, I'm a Python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but I have no experience with Fabric. I know from reading/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire that done, and the timeline doesn't allow for a learning curve or hiring someone. I should also note that it's kind of a one-time deal, so I don't need to build a really elegant solution. I just need it to work and be able to get through all of the data by the end of the year. Also, I know this is not a simple stackoverflow-kind of question (something like "how can I reverse a list in python"). But what I'm hoping for is someone to read this and say, "I do something similar and use XYZ... it's great!" I guess what I'm asking is: does anybody know of anything out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java, and have a tight timeline that prevents me from learning a new technology like Hadoop or a new language)? Thanks for reading. I look forward to any suggestions.
Processing a large amount of data in parallel
0.039979
1
0
1,819
14,006,363
2012-12-22T20:30:00.000
3
0
0
0
python,fabric,boto,data-processing
14,006,535
5
false
0
0
Did you do some performance measurements? Where are the bottlenecks? Is it CPU bound, IO bound, or DB bound? When it is CPU bound, you can try a Python JIT like PyPy. When it is IO bound, you need more HDs (and put some striping md on them). When it is DB bound, you can try to drop all the indexes and keys first. Last week I imported the OpenStreetMap DB into a postgres instance on my server. The input data were about 450G. The preprocessing (which was done in JAVA here) just created the raw data files, which could be imported with postgres' 'copy' command. After importing, the keys and indices were generated. Importing all the raw data took about one day, and then it took several days to build keys and indices.
4
5
0
I'm a Python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx. 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data, it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that). Note: the processing is sort of your basic ETL type of stuff, nothing terribly fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts on top of it. But, again, from my initial testing, this would be way too slow. Note: a brand new PostgreSQL 9.1 database will be its final destination. So, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But I have never done something like this before, so I've been looking around for ideas, etc. Again, I'm a Python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but I have no experience with Fabric. I know from reading/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire that done, and the timeline doesn't allow for a learning curve or hiring someone. I should also note that it's kind of a one-time deal, so I don't need to build a really elegant solution. I just need it to work and be able to get through all of the data by the end of the year. Also, I know this is not a simple stackoverflow-kind of question (something like "how can I reverse a list in python"). But what I'm hoping for is someone to read this and say, "I do something similar and use XYZ... it's great!" I guess what I'm asking is: does anybody know of anything out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java, and have a tight timeline that prevents me from learning a new technology like Hadoop or a new language)? Thanks for reading. I look forward to any suggestions.
Processing a large amount of data in parallel
0.119427
1
0
1,819
14,006,363
2012-12-22T20:30:00.000
2
0
0
0
python,fabric,boto,data-processing
14,006,466
5
false
0
0
I did something like this some time ago, and my setup was: one multicore instance (x-large or more) that converts raw source files (xml/csv) into an intermediate format. You can run (num-of-cores) copies of the converter script on it in parallel. Since my target was mongo, I used json as the intermediate format; in your case it will be sql. This instance has N volumes attached to it. Once a volume becomes full, it gets detached and attached to the second instance (via boto). The second instance runs a DBMS server and a script which imports the prepared (sql) data into the db. I don't know anything about postgres, but I guess it does have a tool like mysqlimport or mongoimport. If yes, use that to make bulk inserts instead of making queries via a python script.
4
5
0
I'm a Python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx. 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data, it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that). Note: the processing is sort of your basic ETL type of stuff, nothing terribly fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts on top of it. But, again, from my initial testing, this would be way too slow. Note: a brand new PostgreSQL 9.1 database will be its final destination. So, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But I have never done something like this before, so I've been looking around for ideas, etc. Again, I'm a Python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but I have no experience with Fabric. I know from reading/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire that done, and the timeline doesn't allow for a learning curve or hiring someone. I should also note that it's kind of a one-time deal, so I don't need to build a really elegant solution. I just need it to work and be able to get through all of the data by the end of the year. Also, I know this is not a simple stackoverflow-kind of question (something like "how can I reverse a list in python"). But what I'm hoping for is someone to read this and say, "I do something similar and use XYZ... it's great!" I guess what I'm asking is: does anybody know of anything out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java, and have a tight timeline that prevents me from learning a new technology like Hadoop or a new language)? Thanks for reading. I look forward to any suggestions.
Processing a large amount of data in parallel
0.07983
1
0
1,819
14,007,542
2012-12-22T23:45:00.000
2
1
1
0
python,aes,aes-ni
14,022,066
1
false
0
0
HMAC is using a secure cryptographic hash, not a symmetric cipher. You can make a "normal" MAC such as AES-CMAC perform better, but not a HMAC.
1
5
0
Is there a way to make use of AES-NI in Python? I do want to make HMAC faster by making use of my hardware support for AES-NI. Thanks.
Python support for AES-NI
0.379949
0
0
1,310
14,007,670
2012-12-23T00:12:00.000
2
0
1
0
python,visual-studio-2012,twisted,ptvs
14,410,966
2
true
0
0
3rd-party libraries will run just fine. To get IntelliSense against them, they'll need to be installed in site-packages or be part of your project. If you install them after installing PTVS, you'll need to go to Tools->Options->Python Tools->Interpreter Options, select the interpreter you have configured, and regenerate the completion database. Alternately, you can have the libraries as part of your project and they'll be analyzed in real time. You also seem interested in some specialized app... If that app is a pure Python app that starts up like "python.exe app.py", you'll have no problems at all. You may need to set up a custom interpreter again in Tools->Options->Python Tools->Interpreter Options which points at the specific python.exe that the app is using, if it's a special app-specific build. If the app is actually a C++ app which is hosting Python, life is a little more difficult. You should have no problems editing the code in PTVS, but debugging will probably need to be accomplished by doing Debug->Attach to Process. This should work if the app is hosting a normal Python build and has it dynamically linked; PTVS will discover the Python interpreter and inject its debugging script into the process. The workflow might be a little cumbersome, doing the attach each time after launching, but if you're not restarting frequently it shouldn't be too bad.
2
0
0
I'm looking to work on a Python app. Does Python Tools for Visual Studio support 3rd-party libraries, such as Twisted?
Python Tools Visual Studio Support Twisted
1.2
0
0
1,015
14,007,670
2012-12-23T00:12:00.000
1
0
1
0
python,visual-studio-2012,twisted,ptvs
14,007,686
2
false
0
0
PTVS is just an IDE. So it does not need to "support" any libraries - they just need to be in your PYTHONPATH so your python code can import them. However, chances are good that PTVS cannot launch a twisted-based daemon using twistd like you would do on the command line...
2
0
0
I'm looking to work on a Python app. Does Python Tools for Visual Studio support 3rd-party libraries, such as Twisted?
Python Tools Visual Studio Support Twisted
0.099668
0
0
1,015
14,007,784
2012-12-23T00:37:00.000
0
1
0
1
python,linux,shell,unix
14,033,835
4
false
0
0
Could you try: echo 'python mypythonscript.py' | at ...
2
0
0
I'm trying to create a scheduled task using the Unix at command. I wanted to run a python script, but quickly realized that at is configured to run whatever file I give it with sh. In an attempt to circumvent this, I created a file that contained the command python mypythonscript.py and passed that to at instead. I have set the permissions on the python file to executable by everyone (chmod a+x), but when the at job runs, I am told python: can't open file 'mypythonscript.py': [Errno 13] Permission denied. If I run source myshwrapperscript.sh, the shell script invokes the python script fine. Is there some obvious reason why I'm having permissions problems with at? Edit: I got frustrated with the python script, so I went ahead and made an sh script version of the thing I wanted to run. I am now finding that the sh script returns to me saying rm: cannot remove <filename>: Permission denied (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything seems to go to shit when I have at do it.
Unix `at` scheduling with python script: Permission denied
0
0
0
1,059
14,007,784
2012-12-23T00:37:00.000
0
1
0
1
python,linux,shell,unix
14,007,903
4
false
0
0
Start the script using python, not the actual script name, e.g. python path/to/script.py. at tries to run everything as an sh script.
2
0
0
I'm trying to create a scheduled task using the Unix at command. I wanted to run a python script, but quickly realized that at is configured to run whatever file I give it with sh. In an attempt to circumvent this, I created a file that contained the command python mypythonscript.py and passed that to at instead. I have set the permissions on the python file to executable by everyone (chmod a+x), but when the at job runs, I am told python: can't open file 'mypythonscript.py': [Errno 13] Permission denied. If I run source myshwrapperscript.sh, the shell script invokes the python script fine. Is there some obvious reason why I'm having permissions problems with at? Edit: I got frustrated with the python script, so I went ahead and made an sh script version of the thing I wanted to run. I am now finding that the sh script returns to me saying rm: cannot remove <filename>: Permission denied (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything seems to go to shit when I have at do it.
Unix `at` scheduling with python script: Permission denied
0
0
0
1,059
14,007,965
2012-12-23T01:28:00.000
1
1
0
0
python,audio,beagleboard
14,008,119
3
false
1
0
The GPIO pins of the AM3359 are low-voltage and have insufficient drive strength to directly drive any kind of transducer. You would need to build a small circuit with an op-amp, transistor or FET to do this. Once you've done this, you'd simply set up a timer loop to change the state of the GPIO line at the required frequency. By far the quickest and easiest way of getting audio from this board is with a USB audio interface.
1
2
0
I am looking for pointers/tips on how to generate a synthesized sound signal on the BeagleBone akin to what the tone() function produces on an Arduino. Ultimately, I'd like to connect a piezo or a speaker to a GPIO pin and hear a sound wave out of it. Any pointers?
How to generate sound signal through a GPIO pin on BeagleBone
0.066568
0
0
4,148
14,008,232
2012-12-23T02:46:00.000
1
0
0
0
python,mysql,django,security,encryption
14,008,320
2
false
1
0
Your question embodies a contradiction in terms. Either you don't want reversibility or you do. You will have to choose. The usual technique is to hash the passwords and to provide a way for the user to reset his own password on sufficient alternative proof of identity. You should never display a password to anybody, for legal non-repudiability reasons. If you don't know what that means, ask a lawyer.
2
2
0
So, a friend and I are currently writing a panel (in python/django) for managing gameservers. Each client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'. The passwords would be generated randomly and presented to the user in the panel, however, we obviously don't want them to be stored in plaintext or reversible encryption, so we are unsure what to do if a a client forgets their password. Resetting the password is something we would try to avoid as some clients may reset the password while the gameserver is still trying to use it, which could cause corruption and crashes. What would be a secure (but without sacrificing ease of use for the clients) way to go about this?
Storing MySQL Passwords
0.099668
1
0
408
14,008,232
2012-12-23T02:46:00.000
4
0
0
0
python,mysql,django,security,encryption
14,008,264
2
true
1
0
Though this is not the answer you were looking for, you only have three possibilities: (1) store the passwords in plaintext (ugh!); (2) store them with a reversible encryption, e.g. RSA (http://stackoverflow.com/questions/4484246/encrypt-and-decrypt-text-with-rsa-in-php); (3) do not store them at all; clients can only reset the password, not view it. The second choice is a secure way, as RSA is also used for TLS encryption within the HTTPS protocol used by your bank of choice ;)
2
2
0
So, a friend and I are currently writing a panel (in python/django) for managing gameservers. Each client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'. The passwords would be generated randomly and presented to the user in the panel, however, we obviously don't want them to be stored in plaintext or reversible encryption, so we are unsure what to do if a a client forgets their password. Resetting the password is something we would try to avoid as some clients may reset the password while the gameserver is still trying to use it, which could cause corruption and crashes. What would be a secure (but without sacrificing ease of use for the clients) way to go about this?
Storing MySQL Passwords
1.2
1
0
408
14,008,339
2012-12-23T03:19:00.000
1
0
1
0
python,multithreading,sockets,client-server
14,010,119
1
true
0
0
Here's a pseudo-design for your server. I'll speak in programming language agnostic terms. Have a "global hash table" that maps "client id numbers" to the corresponding "socket" (and any other client data). Any access to this hash table is guarded with a mutex. Every time you accept a new connection, spin up a thread. I'm assuming there's something in your chat protocol where a client identifies himself, gets a client id number assigned, and gets added to the session. The first thing the thread does is adds the socket for this client connection to the hash table. Whenever a message comes in (e.g. from client 1 to client 5), lookup "client 5" in the hash table to obtain its socket. Forward the message on this socket. There's a few race conditions to work out, but that should be a decent enough design. Of course, if you really want to scale, you wouldn't do the "thread per connection" approach. But if you are limited to about 100 or less clients simultaneously connected, you'll be ok. After that, you should consider a single-threaded approach using non-blocking i/o.
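A minimal sketch of that guarded table in Python (register and forward are illustrative names):

    import threading

    clients = {}                    # client_id -> socket
    clients_lock = threading.Lock() # guards every access to the table

    def register(client_id, sock):
        with clients_lock:
            clients[client_id] = sock

    def forward(to_id, message):
        with clients_lock:
            sock = clients.get(to_id)
        if sock is not None:
            sock.sendall(message)   # message is bytes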
1
0
0
I am trying to build a chat server that handles multiple clients. I am trying to handle each connected client on a new thread. The problem is that I am really confused on how to forward the message received from a client to the intended receiver. I mean client-1 to client-5. I am very new to socket programming. So any kind of help is appreciated.
Python: multiple client threaded chat server
1.2
0
1
1,730
14,010,297
2012-12-23T10:40:00.000
1
0
0
0
python,nlp,nltk
14,085,942
4
false
1
0
Strip the text out of the HTML page (unless there is a way from the HTML to identify the address text such as div with a particular class) then build a set of rules that match the address formats used. If there are postal addresses in several countries then the formats may be markedly different but within a country, the format is the same (with some tweaks) or it is not valid. Within the US, for example, addresses are 3 or 4 lines (including the person). There is usually a zip code (5 digits optionally followed by four more). Other countries have postal codes in different formats. Unless your goal is 100% accuracy on all addresses then you probably should aim for extracting as many addresses as you can within the budget for the task. It doesn't seem like a task for NLP unless you want to use Named Entity identification to find cities, countries etc.
1
2
0
I have 300k+ HTML documents, from which I want to extract postal addresses. The data comes in different structures, so regex won't work. I have done a heap of reading on NLP and NLTK for Python, however I am still struggling with where to start. Is this approach called part-of-speech tagging or chunking/partial parsing? I can't find any document on how to actually tag a page so I can train a model on it, or even what I should be training. So my questions: What is this approach called? How can I tag some documents to train from?
What is it called, to extract an address from HTML via NLP
0.049958
0
0
2,336
14,011,819
2012-12-23T14:37:00.000
1
0
0
0
python,http,redirect,response,flask
14,011,846
1
false
1
0
If you return redirect('someurl') in your view function it will result in a Location header being sent to the client. So unless the client decides to load the target from cache instead the view function of the target URL will be executed just like if the client accessed it directly.
1
0
0
Is there any difference in the way redirects and requests are handled in the flask micro-framework? I have a bunch of functions which are to be run before a request is made but apparently they are not being run whenever there is a redirect to another url.
Redirect and requests in flask microframework
0.197375
0
0
223
14,013,214
2012-12-23T17:49:00.000
0
0
0
0
python,django,linux,deployment
14,013,487
1
false
1
0
If the names on the development and production server are the same, then record the rename commands in a shell-script. You can run that on both the development and the production server...
1
0
0
I have a development & production server and some large video files. The large files need to be renamed. I don't know how to automatically change the file names in the production environment when I change their name in the development environment. I think using git is very inefficient for large files. On the development environment I copied only the first 5 seconds of the videos. I'll be using Django with South to synchronize the database and git to synchronize the code.
synchronizing file names across servers
0
0
0
67
14,014,498
2012-12-23T20:33:00.000
1
0
1
0
python,multithreading,python-2.7,download,urllib
14,014,516
4
true
0
0
You say urllib.urlretrieve() is very slow. Really? If you've got 15-20 files of 2-4 MB each, then I'd just line 'em up and download 'em. The bottleneck is going to be the bandwidth of the server and of your own connection. So IMHO, hardly worth threading or trying anything clever in this case...
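If you do decide to parallelize anyway, a sketch with one thread per file (Python 2, matching the question's urllib.urlretrieve; the URLs are placeholders):

    import threading
    import urllib

    urls = ['http://example.com/file1.bin',   # hypothetical URLs
            'http://example.com/file2.bin']

    def fetch(url):
        # Save under the last path segment of the URL.
        urllib.urlretrieve(url, url.split('/')[-1])

    threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for all downloads to finish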
1
2
0
In Python how can I download a bunch of files quickly? urllib.urlretrieve() is very slow, and I'm not very sure how to go about this. I have a list of 15-20 files to download, and it takes forever just to download one. Each file is about 2-4 mb. I have never done this before, and I'm not really sure where I should start. Should I use threading and download a few at a time? Or should I use threading to download pieces of each file, but one file at a time, or should I even be using threading?
Python: Download multiple files quickly
1.2
0
1
6,891
14,017,996
2012-12-24T06:41:00.000
7
0
1
0
python
14,018,060
5
false
0
0
If you want to give a parameter a default value, assign the value in the function signature, e.g. def f(x=10). One important rule: required arguments must come before arguments with defaults, so def f(y, x=10) is valid but def f(x=10, y) is a syntax error.
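A short illustration (function and variable names are invented for the example), including a sentinel for the "only if the optional parameter is passed" case from the question:

    def greet(name, greeting="Hello"):   # greeting is optional
        print("%s, %s!" % (greeting, name))

    greet("Ada")               # -> Hello, Ada!
    greet("Ada", "Welcome")    # -> Welcome, Ada!

    # To detect whether the caller passed the argument at all, use a sentinel:
    _missing = object()

    def f(x, y=_missing):
        if y is _missing:
            print("y was not passed")
        else:
            print("y =", y)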
1
137
0
Is there a way in Python to pass optional parameters to a function while calling it and in the function definition have some code based on "only if the optional parameter is passed"
Is there a way to pass optional parameters to a function?
1
0
0
257,225
14,020,155
2012-12-24T10:35:00.000
0
0
0
0
python,matplotlib,scipy
15,859,052
1
false
0
0
I suspect the answer is no, because if you change the vectors, it would need to re-compute the stream lines. The objects returned by streamplot are a line collection and a patch collection, which know nothing about the vectors. Getting this functionality would require writing a new class to wrap everything up and finding a sensible way to re-use the existing objects. The best bet is to use cla() (as suggested by dmcdougall) to clear your axes and just re-plot them. A slightly less drastic approach would be to just remove the artists added by streamplot.
1
4
1
After plotting streamlines using matplotlib.streamplot I need to change the U/V data and update the plot. For imshow and quiver there are the functions set_data and set_UVC, respectively. There does not seem to be any similar function for streamlines. Is there any way to still update the data, or to get similar functionality some other way?
update U V data for matplotlib streamplot
0
0
0
1,159
14,022,166
2012-12-24T13:43:00.000
6
1
1
0
python,windows-xp,py2exe,python-2.5
14,022,239
3
false
0
0
No, not really. py2exe is merely a wrapper: it bundles the interpreter and the files needed to run your code, so the speed is unchanged. Using Cython, on the other hand, could make your program run faster, since it compiles the code down to C.
2
2
0
I am really bad at compiling programs, and I just want to know if my Python 2.5 program would be faster if I converted it to a .exe using py2exe. I don't want to spend a lot of time trying to compile it if it will just be slower in the end. My program uses OpenCV and PyAudio, but I think those are the only non pure-Python modules it uses. Thanks! NOTE: I do not think this question requires a snippet of code, but if it does, please say so in the comments. Thanks!
Is a python script faster when you convert it to a .exe using py2exe?
1
0
0
6,424
14,022,166
2012-12-24T13:43:00.000
1
1
1
0
python,windows-xp,py2exe,python-2.5
14,024,396
3
false
0
0
I spent the last 2 months working with Python on Windows and I must say I am not very happy. If you have any C modules (besides the standard library), you'll have huge problems getting them to work. Speed is similar to Linux, though it seems a little bit slower. Please note that py2exe is not the best. I had issues with py2exe and had to use PyInstaller: it has better debugging and it worked in cases where py2exe didn't. My biggest disappointment was the exe size; I had a simple program which used lxml and suds, and it came out 7-8 MB big...
2
2
0
I am really bad at compiling programs, and I just want to know if my Python 2.5 program would be faster if I converted it to a .exe using py2exe. I don't want to spend a lot of time trying to compile it if it will just be slower in the end. My program uses OpenCV and PyAudio, but I think those are the only non pure-Python modules it uses. Thanks! NOTE: I do not think this question requires a snippet of code, but if it does, please say so in the comments. Thanks!
Is a python script faster when you convert it to a .exe using py2exe?
0.066568
0
0
6,424
14,023,009
2012-12-24T15:17:00.000
0
0
0
1
python,subprocess
14,024,002
1
false
0
0
I'm a bit confused. Using subprocess.Popen(...) should spawn a new command prompt automatically for each call. What is aria2c? Is it a program you had written in python as well? Is it a 3rd party exe that writes to the command prompt window? I can help you to redirect all the sub-processes output to the main command prompt, so it can be displayed inline. Also, maybe you can give a little more detail on what is going on first, so I can understand your trouble a bit better.
1
0
0
I have a main program, in which a user can call a sub-process (to download files) several times. Each time, I call aria2c using subprocess, and it will print the progress to stdin. Of course, it is desirable that the user can see the progress of each download seperately. So the question is how can I redirect the output of each process to a seperate console window?
Python: Redirect output to several consoles?
0
0
0
288
14,028,015
2012-12-25T05:40:00.000
1
0
0
0
python,database,authentication,web2py
14,202,881
3
false
1
0
web2py by default allows blank passwords. So simply hide the password fields in the login and registration forms using CSS. You should be able to use the default auth.
1
1
0
I did not find anything on the web and so I'm asking here. Is there a way to create a custom auth wich only requires a username? That means to login to a specific subpage one has only to enter a username, no email and no password etc.? Or is there a better way to do this? E.g. a subpage can only be accessed if the username (or similar) exists in a db table?
Web2Py minimal User authentication (username only)
0.066568
0
0
3,519
14,028,164
2012-12-25T06:09:00.000
3
0
1
0
python,list,shallow-copy,deep-copy
14,028,181
2
false
0
0
The new list is a copy of references. g[0] and a[0] both reference the same object. Thus this is a shallow copy. You can see the copy module's deepcopy method for recursively copying containers, but this isn't a common operation in my experience. Stylistically, I prefer the more explicit g = list(a) to create a copy of a list, but creating a full slice has the same effect.
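A quick demonstration of the difference:

    import copy

    a = [[1, 2], [3, 4]]
    shallow = a[:]             # same effect as list(a): copies only the outer list
    deep = copy.deepcopy(a)    # recursively copies the nested lists as well

    a[0].append(99)
    print(shallow[0])  # [1, 2, 99] -- the inner list is shared with a
    print(deep[0])     # [1, 2]     -- fully independent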
1
1
0
How is deep copying done in Python for lists? I am a little confused about copying lists: is it a shallow copy or a deep copy? Also, what is the syntax for copying via slicing, is it g = a[:]?
python lists copying is it deep copy or Shallow copy and how is it done?
0.291313
0
0
1,094
14,029,077
2012-12-25T08:44:00.000
12
0
1
0
python,mysql,pickle
14,029,166
1
true
0
0
No, if you can keep all the data in memory a database is not necessary, and just pickling everything could work. Notable drawbacks with pickling are that it is not secure (somebody can replace your data with something else, including executable code) and that it's Python-only. With a database you also typically update the data and write to disk at the same time, while with pickling you will have to remember to pickle the data and save it to disk. Since this takes time, as you write all the data at once, it's usually done only on exit, so crashes mean you lose all your changes. There is a middle ground though: sqlite. It's a lightweight, simple SQL database included in Python (since Python 2.5), so you don't have to install any extra software to use it. It is often a quick solution. Since SQLAlchemy has SQLite support, it also means you can use SQLAlchemy with SQLite as the default database, and hence provide an "upgrade path" to more serious databases.
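A minimal sketch of that sqlite middle ground, using only the standard library (the table and file names are invented for the example):

    import sqlite3

    conn = sqlite3.connect('members.db')  # a plain file on disk, no server needed
    conn.execute('CREATE TABLE IF NOT EXISTS members '
                 '(id INTEGER PRIMARY KEY, name TEXT, email TEXT)')
    conn.execute('INSERT INTO members (name, email) VALUES (?, ?)',
                 ('Alice', 'alice@example.com'))
    conn.commit()  # unlike pickling, changes are persisted as you go

    for row in conn.execute('SELECT name, email FROM members'):
        print(row)
    conn.close()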
1
7
0
I'm writing a basic membership web app in python. Is it always bad practice to abandon databases completely and simply pickle a python dictionary to a file (http://docs.python.org/2/library/pickle.html)? The program should never have to deal with more than ca. 500 members, and will only keep a few fields about each member, so I don't see scaling being an issue. And as this app may need to be run locally as well, it's easier to make it run on various machines if things are kept as simple as possible. So the options are to set up a mysql db, or to simply pickle to a file. I would prefer the 2nd, but would like to know if this is a terrible idea.
Pickle to file instead of using database
1.2
0
0
3,608
14,029,177
2012-12-25T08:57:00.000
0
0
1
0
python,list
14,030,552
4
false
0
0
In C language terms, a Python list is like a PyObject *mylist[100], except it's dynamically allocated. It's a contiguous chunk of memory storing references to Python objects.
3
0
0
How does the language know how much space to reserve for each element? Or does it reserve the maximum space possible required for a datatype? (Talk about large floating point numbers). In that case isn't it a bit inefficient?
How dynamic arrays store different datatypes?
0
0
0
91
14,029,177
2012-12-25T08:57:00.000
1
0
1
0
python,list
14,029,213
4
false
0
0
Arrays in Python are done via the array module. They do not store different datatypes; they store arrays of specific numerical values. I think you mean the list type. It doesn't contain values, it just contains references to objects, which can be of any type at all. Neither of these reserves space for the maximum possible element up front (well, they do over-allocate a little, but that's an internal implementation detail); space for an element's reference is added when it is appended to the list/array. The list type is indeed less efficient than the array type, which is why the array type exists.
3
0
0
How does the language know how much space to reserve for each element? Or does it reserve the maximum space possible required for a datatype? (Talk about large floating point numbers). In that case isn't it a bit inefficient?
How dynamic arrays store different datatypes?
0.049958
0
0
91
14,029,177
2012-12-25T08:57:00.000
1
0
1
0
python,list
14,029,199
4
false
0
0
Python reserves only enough space in a list for a reference to the various objects; it is up to the objects' allocators to reserve enough space for them when they are instantiated.
3
0
0
How does the language know how much space to reserve for each element? Or does it reserve the maximum space possible required for a datatype? (Talk about large floating point numbers). In that case isn't it a bit inefficient?
How dynamic arrays store different datatypes?
0.049958
0
0
91
14,032,521
2012-12-25T17:04:00.000
5
0
1
0
python,list,sorting,alphabetical
26,274,529
6
false
0
0
ListName.sort() will sort it alphabetically, in place. You can pass reverse=True to reverse the order of items: ListName.sort(reverse=True); the default, reverse=False, keeps ascending order.
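For example, with the list from the question (note that the plain sort is case-sensitive, so a key function is needed for strictly alphabetical order):

    words = ['Stem', 'constitute', 'Sedge', 'Eflux', 'Whim', 'Intrigue']

    words.sort()                             # in place; uppercase sorts before lowercase
    words.sort(key=str.lower)                # case-insensitive alphabetical order
    words.sort(key=str.lower, reverse=True)  # same order, reversed
    print(words)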
1
207
0
I am a bit confused regarding data structure in python; (),[], and {}. I am trying to sort a simple list, probably since I cannot identify the type of data I am failing to sort it. My list is simple: ['Stem', 'constitute', 'Sedge', 'Eflux', 'Whim', 'Intrigue'] My question is what type of data this is, and how to sort the words alphabetically?
Python data structure sort list alphabetically
0.16514
0
0
435,587
14,034,401
2012-12-25T22:21:00.000
4
0
0
0
python,qt,cookies,qwebkit
14,039,648
1
true
0
1
You can get/set the cookie jar through QWebView.page().networkAccessManager().cookieJar()/setCookieJar(). The Browser demo included with Qt (in C++) shows how to read and write cookies to disk.
1
3
0
I need to store cookies persistently in an application that uses QWebKit. I understand that I have to create a subclass of QNetworkCookieJar and attach it to a QNetworkAccessManager. But how do I attach this QNetworkAccessManager to my QWebView or get the QNetworkAccessManager used by it? I use Python 3 and PyQt if that is important.
Permanent cookies with QWebKit -- where to get the QNetworkAccessManager?
1.2
0
0
1,268
14,035,161
2012-12-26T01:23:00.000
2
0
1
0
python,opencv
14,048,691
2
false
0
0
In modules/highgui/src/window_w32.cpp (or in some other file if you are not using Windows; look at void cv::namedWindow( const string& winname, int flags ) in ...src/window.cpp) there is a function static CvWindow* icvFindWindowByName( const char* name ) which is probably what you need, but it's internal, so the authors of OpenCV either didn't want others to use it or didn't anticipate that someone might need it. I think the best option is to use the system API to find out whether a window with a specific name exists. Failing that, use something that is almost impossible to collide with as a window name, for example the current time in ms + user name + random number + random string (yeah, I know that a window name like "234564312cyriel123234123dgbdfbddfgb#$%grw$" is not beautiful).
1
2
1
quite new to OpenCV so please bear with me: I need to open up a temporary window for user input, but I need to be certain it won't overwrite a previously opened window. Is there a way to open up either an anonymous window, or somehow create a guaranteed unique window name? Obviously a long random string would be pretty safe, but that seems like a hack. P.S. I'm using the python bindings at the moment, but If you want to write a response in c/c++ that's fine, I'm familiar with them.
OpenCV anonymous/guaranteed unique window
0.197375
0
0
343
14,036,549
2012-12-26T06:00:00.000
0
1
0
1
python,uwsgi,pythonpath
14,039,533
1
false
0
0
you can specify multiple --pythonpath options, but PYTHONPATH should be honoured (just be sure it is correctly set by your init script, you can try setting it from the command line and running uwsgi in the same shell session)
1
1
0
It's weird because, when I run a normal Python script on the server, it runs, but when I run it via uWSGI, it can't import certain modules. There is a bash script that starts uwsgi and passes a path via the --pythonpath option. Is this an additional path, or do all the paths have to be given here? If the latter, how do I separate multiple paths given by this option?
Does uwsgi server read the paths in the environment variable PYTHONPATH?
0
0
0
1,207
14,038,691
2012-12-26T09:44:00.000
28
1
1
0
python,performance,python-import
14,038,726
2
true
0
0
You pollute your namespace with names that could interfere with your variables, and the imports occupy some memory. You will also have a slightly longer startup time, as the program has to load each module. In any case, I would not become too neurotic about this; while you are actively writing code you could end up adding and deleting import os over and over as the code is modified. Some IDEs, such as PyCharm, detect unused imports, so you can rely on them once your code is finished or nearly complete.
1
40
0
Is there any effect of unused imports in a Python script?
Do unused imports in Python hamper performance?
1.2
0
0
6,737
14,039,877
2012-12-26T11:20:00.000
3
0
0
0
python,database,open-source,schema,database-schema
14,039,904
3
false
0
0
It's not dangerous if you secure access to database. You are exposing only your know-how. Once somebody gains access to database, it's easy to list database structure.
3
2
0
I am writing myself a blog in python, and am to put it up to GitHub. One of the file in this project will be a script that create the required tables in DB at the very beginning. Since I've gonna put this file on a public repository, I expose all DB structure. Is it dangerous if I do so? If yes, I am thinking of an alternative to put column names in a separate config file and not upload column names of my blog. What are others ways of avoiding exposing schemas?
Is it dangerous if I expose my database schema in an open source project?
0.197375
1
0
599
14,039,877
2012-12-26T11:20:00.000
0
0
0
0
python,database,open-source,schema,database-schema
14,039,945
3
false
0
0
There is a difference between sharing the database and sharing the database schema. You can comment out the database machine/username/password values in your code and publish the code on GitHub. As a proof of concept, you can host your application in the cloud (without disclosing its database credentials) and add its link to your GitHub readme file.
3
2
0
I am writing myself a blog in python, and am to put it up to GitHub. One of the file in this project will be a script that create the required tables in DB at the very beginning. Since I've gonna put this file on a public repository, I expose all DB structure. Is it dangerous if I do so? If yes, I am thinking of an alternative to put column names in a separate config file and not upload column names of my blog. What are others ways of avoiding exposing schemas?
Is it dangerous if I expose my database schema in an open source project?
0
1
0
599
14,039,877
2012-12-26T11:20:00.000
0
0
0
0
python,database,open-source,schema,database-schema
21,087,156
3
false
0
0
I think it is dangerous: if a SQL injection vulnerability exists in your website, the schema will help the attacker retrieve all the important data more easily.
3
2
0
I am writing myself a blog in python, and am to put it up to GitHub. One of the file in this project will be a script that create the required tables in DB at the very beginning. Since I've gonna put this file on a public repository, I expose all DB structure. Is it dangerous if I do so? If yes, I am thinking of an alternative to put column names in a separate config file and not upload column names of my blog. What are others ways of avoiding exposing schemas?
Is it dangerous if I expose my database schema in an open source project?
0
1
0
599
14,042,140
2012-12-26T14:43:00.000
1
0
1
0
python,python-3.x,python-2.7,mod-wsgi,wsgi
14,046,039
1
true
0
0
WSGI PEP 3333 is still for Python 2 and if you write to PEP 3333 it is still a valid PEP 333 WSGI application for Python 2. Short answer is use Python 2 and go use a framework that hides the WSGI stuff from you. Don't go building one from scratch when you don't know anything about WSGI already as would be suggested by the need to ask this question in the first place. Go look at Flask/Werkzeug. Once you understand the principles around how Flask and the underlying Werkzeug work and how WSGI in general works, then graduate to try writing your own.
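For reference, a complete PEP 3333-style application is tiny; this sketch runs on the standard library's reference server (the port is arbitrary), and the same callable also works under Python 2's PEP 333 servers:

    def application(environ, start_response):
        # environ is a CGI-style dict; start_response sets status and headers.
        body = b'Hello, WSGI'
        start_response('200 OK', [('Content-Type', 'text/plain'),
                                  ('Content-Length', str(len(body)))])
        return [body]

    if __name__ == '__main__':
        from wsgiref.simple_server import make_server
        make_server('', 8000, application).serve_forever()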
1
0
0
I'm about to build a Python framework from scratch, so I'm confused about the WSGI/Python version (WSGI 1.0 is used for Python 2.x, WSGI 1.0.1 for Python 3.x). Which version should I start from? Note that I will/may use some existing middlewares or some existing code. Thanks.
What WSGI version should i use to build a python framework from scratch?
1.2
0
0
152
14,043,045
2012-12-26T16:05:00.000
1
0
0
1
python,google-app-engine,google-cloud-datastore
14,043,190
2
false
0
0
Every blob you upload creates a new version of that blob (with that filename) in the blobstore. Of course you can delete the old version(s) of the blob if you uploaded a new version. But to make sure you have the latest version of a blob (of a filename), you have to store the filename in the datastore and keep a reference to the latest version. This reference holds the blob_key.
1
0
0
I know that I can grab a blob by BlobKey, but how do I get the blobkey associated with a given filename? In short, I want to implement "get file by filename" I can't seem to find any built-in functionality for this.
Downloading a Blob by Filename in Google App Engine (Python)
0.099668
0
0
259
14,043,470
2012-12-26T16:44:00.000
2
0
0
0
wxpython
36,878,217
2
false
0
1
Old question, I know, but there should be an answer. AppendItem(["test123", "data for second column", "third column", "etc..."])
1
1
0
I'm using DataViewListCtrl and I add items with AppendItem(["test123"]). How can I add data associated with each item?
WxPython DataViewListCtrl item data
0.197375
0
0
1,889
14,045,746
2012-12-26T20:16:00.000
0
0
1
0
python,django,pip,webfaction
14,053,697
2
false
0
0
My guess is you have created the virtualenv with the --system-site-packages option, so it could use some packages installed system-wide. If that's indeed what you did, try to create a clean virtualenv, and install all your dependencies inside it. This way, you'll never have to think of what packages are installed sytem-wide and what packages are installed in the virtualenv. To do so, you can use --no-site-packages, which has now become a default virtualenv option.
2
0
0
I'm using pip on webfaction and it keeps trying to uninstall system packages and then failing. For example if I try to install Fabric, one of the requirements is pycrypto. When it tries to uninstall it, it fails. Is there anyway to tell pip to not do this?
pip tries to uninstall system packages
0
0
0
195
14,045,746
2012-12-26T20:16:00.000
2
0
1
0
python,django,pip,webfaction
14,046,506
2
true
0
0
This is a common use scenario for virtualenv (aside from... all the time). Build your app around a clean virtualenv so that you don't have to think about system packages ever again (mostly) in permission limited environments.
2
0
0
I'm using pip on webfaction and it keeps trying to uninstall system packages and then failing. For example if I try to install Fabric, one of the requirements is pycrypto. When it tries to uninstall it, it fails. Is there anyway to tell pip to not do this?
pip tries to uninstall system packages
1.2
0
0
195
14,047,401
2012-12-26T23:06:00.000
0
0
0
0
python,html,django,url-routing
14,047,603
1
true
1
0
You could simply write static files to disk and then serve them as static files. That would be easier, but it depends on your other requirements. From what I understand of your question, you'd need: a form to upload, an upload handler which inserts into the db, and a view that renders based on a path. Not sure about the urls.py entry; you'll want something in there to separate this content from the rest of your site, and you'll probably also want something to safeguard against the file extensions that you serve there. Note: this has "security hole" written all over it. I would be super careful in how you test this.
1
0
0
I am trying to let users create html pages that they can view on my website. So the homepage would just have a place for file upload, then some script on my site would take the file and convert it into the html page and place it at mysite.com/23klj4d(identifying file name). From my understanding, this would mean that the urls.py file gets updated to route that url to display the html page of the file. Is it possible to let users do this? Where would I put that conversion script?
What is the best way in python/django to let users add files to the database?
1.2
0
0
101
14,047,979
2012-12-27T00:30:00.000
0
1
0
0
php,python
70,877,468
5
false
0
0
For me, escapeshellarg(json_encode($data)) does not give exactly a JSON-formatted string, but something like { name : Carl , age : 23 }. So in Python I need to .replace(' ', '"') the whitespace to get real JSON and be able to call json.loads(sys.argv[1]) on it. The problem is when someone enters a name that already contains whitespace, like "Ca rl".
1
29
0
Is it possible to run a Python script within PHP and transferring variables from each other ? I have a class that scraps websites for data in a certain global way. i want to make it go a lot more specific and already have pythons scripts specific to several website. I am looking for a way to incorporate those inside my class. Is safe and reliable data transfer between the two even possible ? if so how difficult it is to get something like that going ?
executing Python script in PHP and exchanging data between the two
0
0
0
63,592
14,049,028
2012-12-27T03:31:00.000
3
0
1
0
python,self
14,049,063
2
false
0
0
It's not really all right. self makes your variable available in object-wide scope. That way you need to make sure the names of your variables are unique throughout the complete object, rather than within localized scopes, among other side effects that might or might not be unwanted. In your particular case it might not be an issue, but it's a very bad practice in general. Know your scoping and use it wisely. :)
2
3
0
I have a habit of declaring new variables with self. in front to make them available to all methods. This is because sometimes I think I won't need the variable in other methods, but halfway through I realize that I do need it to be accessible elsewhere. Then I have to add self. in front of that variable everywhere. So my question is: besides needing to type 5 more characters each time I use a variable, are there any other disadvantages? Or, how do you overcome my problem?
Is declaring [almost] everything with self. alright (Python)?
0.291313
0
0
184
14,049,028
2012-12-27T03:31:00.000
14
0
1
0
python,self
14,049,045
2
true
0
0
Set a property on self only when the value is part of the overall object state. If it's only part of the method state, then it should be method-local, and should not be a property of self.
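An invented example of that rule of thumb:

    class Order(object):
        def __init__(self, prices):
            self.prices = prices          # object state: other methods rely on it

        def total(self):
            subtotal = sum(self.prices)   # intermediate value: keep it method-local
            tax = subtotal * 0.1          # hypothetical tax rate, also local
            return subtotal + tax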
2
3
0
I have a habit of declaring new variables with self. in front to make them available to all methods. This is because sometimes I think I won't need the variable in other methods, but halfway through I realize that I do need it to be accessible elsewhere. Then I have to add self. in front of that variable everywhere. So my question is: besides needing to type 5 more characters each time I use a variable, are there any other disadvantages? Or, how do you overcome my problem?
Is declaring [almost] everything with self. alright (Python)?
1.2
0
0
184
14,050,356
2012-12-27T06:24:00.000
0
0
0
0
python,django,user-interface,django-forms
14,051,044
2
true
1
0
If I understand your requirement, you want to create a UI for a form, and you don't want to use a browser. Then you have lots of options. Django is a web framework, so you can avoid it. Simply create a UI with wxPython, PyQt (I recommend this), TkInter, etc. Do a Google search for tutorials.
1
0
0
I would like to create a small app that would be like a form on a website, but just for local files. I'm not really sure where to start (Django or other) or what to use, but I'd like to start with the GUI. What would be the best way to create a program like this? Can I use Django to create a form that would not be used in a browser and without a server?
How to make a standalone app like a website form?
1.2
0
0
318
14,050,745
2012-12-27T07:02:00.000
0
0
0
1
python,google-app-engine,memcached
14,051,678
2
false
1
0
What's wrong with the memcache viewer in the admin console?
1
1
0
Basically what I want to do is see the raw data of memcache so that I can see how my data are being stored.
Is there anyway to view memcache data in google app engine?
0
0
0
444
14,051,114
2012-12-27T07:34:00.000
0
0
1
0
regex,python-2.7
14,051,220
4
false
0
0
Do it with two regexes rather than trying to cram it all into one. Check that your word matches a[^a]*a and does not match a.*a.*a
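A sketch of that two-regex check (the patterns are taken straight from this answer):

    import re

    has_two = re.compile(r'a[^a]*a')    # two 'a' with no 'a' in between
    has_three = re.compile(r'a.*a.*a')  # three or more 'a' anywhere

    def exactly_two_a(word):
        return bool(has_two.search(word)) and not has_three.search(word)

    words = ["taat", "weagda", "aa", "a", "eta", "aaa", "aata", "ssdfaasdfa"]
    print([w for w in words if exactly_two_a(w)])  # ['taat', 'weagda', 'aa']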
1
0
0
I'd like a regex that finds words with exactly two 'a' characters (not 3, 4, 5, ...). What pattern do I need? The two 'a's don't have to be adjacent. ["taat","weagda","aa"] is OK, but not ["a","eta","aaa","aata","ssdfaasdfa"].
Regex that finds words with exactly two 'a'
0
0
0
126
14,051,324
2012-12-27T07:57:00.000
0
0
0
1
python,pypy
69,422,294
2
false
0
0
For anyone coming here in the future: as of Oct 3, 2021, pypy3 does accept the -O flag and turns off assert statements.
2
5
0
$ ./pypy -O Python 2.7.2 (a3e1b12d1d01, Dec 04 2012, 13:33:26) [PyPy 1.9.1-dev0 with GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: `` amd64 and ppc are only available in enterprise version'' >>>> assert 1==2 Traceback (most recent call last): File "", line 1, in AssertionError >>>> But when i execute $ python -O Python 2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> assert 1==2 >>>
how to disable pypy assert statement?
0
0
0
342
14,051,324
2012-12-27T07:57:00.000
5
0
0
1
python,pypy
14,051,708
2
true
0
0
PyPy does silently ignore -O. The reasoning behind it is that we believe -O that changes semantics is seriously broken, but well, I guess it's illegal. Feel free to post a bug (that's also where such reports belong, on bugs.pypy.org)
2
5
0
$ ./pypy -O Python 2.7.2 (a3e1b12d1d01, Dec 04 2012, 13:33:26) [PyPy 1.9.1-dev0 with GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: `` amd64 and ppc are only available in enterprise version'' >>>> assert 1==2 Traceback (most recent call last): File "", line 1, in AssertionError >>>> But when i execute $ python -O Python 2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> assert 1==2 >>>
how to disable pypy assert statement?
1.2
0
0
342
14,051,766
2012-12-27T08:42:00.000
1
0
0
1
python,linux,command-line
14,052,830
2
false
0
0
You can check the state of the printer using the lpstat command (man lpstat). To wait for a process to finish, get the PID of the process and pass it to the wait command as an argument.
1
4
0
I have a bunch of files that I need to print via a PDF printer, and after a file is printed I need to perform additional tasks, but only once printing is completed. To do this from my Python script I call the command "lpr path/to/file.doc -P PDF". But this command immediately returns 0 and I have no way to track when the printing process is finished, whether it was successful or not, etc... There is an option to send an email when printing is done, but waiting for an email after I start printing looks very hacky to me. Do you have some ideas how to get this done? Edit 1: There are plenty of ways to check if the printer is printing something at the current moment. Therefore, right after I start printing I run the lpq command every 0.5 seconds to find out if it is still printing. But this looks to me not the best way to do it. I want to be alerted when the actual printing process is finished, and to know whether it was successful, etc...
How to check if pdf printing is finished on linux command line
0.099668
0
0
2,252
14,055,551
2012-12-27T13:32:00.000
2
0
0
0
python,tkinter
14,056,624
3
false
0
1
You can use the validation feature without the "heavy-handedness". Have your validation always return True after setting the state of the ok/apply button.
2
0
0
I'm designing a "settings" frame for a Python/Tkinter app that lets the user specify an IP address, port number, and a couple other configurable options. I want to validate the user entries before letting the user close the frame to apply them. Based on what I've read up on (and tried) so far with the Entry widget's validate and validatecommand options, the only choices they offer are "heavy-handed" validations. The kind where the user is blocked from leaving the Entry widget (or even typing any more keystrokes) until the entry is valid. This is exactly the behavior I avoid when designing a GUI because it's annoying as all get-out for the user. I'm planning on switching over to using .trace methods to keep watch on the values, and just disabling the "OK/Apply" button until all of the entries in the frame are valid. Before I do that though, I wanted to know whether I'm missing anything with regards to the built-in validation options. Is there an option I missed that's less heavy-handed?
Does Tkinter validation have to be heavy-handed?
0.132549
0
0
471
14,055,551
2012-12-27T13:32:00.000
1
0
0
0
python,tkinter
14,055,762
3
false
0
1
If you use trace, then you have what you want without needing to use Tkinter's validation at all. Make all traces go to the same function, where you test and validate all your values as you wish, and according to that enable or disable the ok button.
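A small sketch of that trace-based approach (the field names and the validity rule are invented; the module is named Tkinter on Python 2 and tkinter on Python 3):

    import Tkinter as tk

    def check(*_args):
        # Enable OK only when both fields look plausible.
        valid = host_var.get().strip() != '' and port_var.get().isdigit()
        ok_button.config(state=tk.NORMAL if valid else tk.DISABLED)

    root = tk.Tk()
    host_var = tk.StringVar()
    port_var = tk.StringVar()
    host_var.trace('w', check)  # 'w': fire on every write to the variable
    port_var.trace('w', check)

    tk.Entry(root, textvariable=host_var).pack()
    tk.Entry(root, textvariable=port_var).pack()
    ok_button = tk.Button(root, text='OK', state=tk.DISABLED)
    ok_button.pack()
    root.mainloop()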
2
0
0
I'm designing a "settings" frame for a Python/Tkinter app that lets the user specify an IP address, port number, and a couple other configurable options. I want to validate the user entries before letting the user close the frame to apply them. Based on what I've read up on (and tried) so far with the Entry widget's validate and validatecommand options, the only choices they offer are "heavy-handed" validations. The kind where the user is blocked from leaving the Entry widget (or even typing any more keystrokes) until the entry is valid. This is exactly the behavior I avoid when designing a GUI because it's annoying as all get-out for the user. I'm planning on switching over to using .trace methods to keep watch on the values, and just disabling the "OK/Apply" button until all of the entries in the frame are valid. Before I do that though, I wanted to know whether I'm missing anything with regards to the built-in validation options. Is there an option I missed that's less heavy-handed?
Does Tkinter validation have to be heavy-handed?
0.066568
0
0
471
14,056,723
2012-12-27T15:04:00.000
0
0
1
0
python,linux
14,058,085
1
false
0
0
cat FILE.csv | python empty_program.py Well, python will try to read "empty_program.py", fail to find anything in it (assuming the file exists and is empty), and then exit. If the file doesn't exist, you get an error. I tested it [you should have been able to do that as well; it doesn't take much effort, probably a lot less than it took to go to SO and write the question]. So, my next thought was to use an interactive python process, but since you are feeding things through stdin, it won't work. I didn't have a good csv file, so I did "cat somefile.c | python", and that falls over at "int main()" with "invalid syntax". I'm surprised it got as far as that, but I guess that's because the #include lines are seen as comments. Most interactive programming languages read from stdin, so you can't really do what you are describing with any of them. I'm far from sure why you'd want to, either. If your first program can produce the relevant program code, why would you not just put it in a file and let python read that file, rather than jump through hoops? Note that an IDE is not the same as a command line program. I'm pretty sure that if you work hard enough at something, you can write a C program that accesses the Eclipse IDE with Python plugins. But that's really doing things the hard way, and I don't see why anyone would want to spend that much effort to achieve so little. Sorry, but I don't really see the point of what you are trying to do. I'm sure you have some good idea in there, but the implementation details need to be worked on.
1
0
0
Let's say you have a large file containing LETTER,NUMBER comma-delimited tokens. You want to write a program that reads from standard input and prints out NUMBER+1 for each line. Very trivial program, I understand. However, here is the constraint: you can only read from this standard-in pipe one time AND you have to start out with an empty program file. So for example: cat FILE.csv | python empty_program.py This should pop up an interactive session which allows you to write whatever code you want. Since empty_program.py has not called stdin.readline(), the stdin buffer is appropriately intact. Is something like this possible? One example of something that can sort of do this is the Excel VBA debugger/IDE. It allows you to pause execution, add new lines to the program's source code, and continue execution.
pipe data into python debugger and write the python program interactively
0
0
0
160
14,057,136
2012-12-27T15:33:00.000
3
0
1
0
python
14,057,221
2
true
0
0
There are two phases which you are getting a little confused. Python has to find the actual file (containing code) that you want to import, parse it, execute it, and store it somewhere. It then has to bind the name of the imported module locally to the module object. That is, the process "find the module sys and turn it into a module object" is not the same as "define the variable sys to mean the module". You can check which modules have been loaded by looking in sys.modules. As a separate issue, there are a few basics of Python that are actually hardcoded into the interpreter, not represented as separate files on disk. sys is one of these modules: there is no sys.py file; instead, it's compiled C code that's included in the python.exe binary.
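You can observe both phases from the interpreter:

    import sys

    print('sys' in sys.modules)   # True: loaded at interpreter startup
    print(sys.modules['sys'])     # the already-created module object

    before = set(sys.modules)
    import json                   # first import: find, execute, and cache the module
    import json                   # second import: just a lookup in sys.modules
    print('json' in before, 'json' in sys.modules)  # typically: False True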
1
1
0
I can't wrap my head around how 'import' statement works in Python. It is said to search for the packages in directories returned by sys.path(). However, even if sys module is available automatically in every Python program it's not imported automatically. So does import statement import sys module under the hood?
How really 'import' works in Python?
1.2
0
0
469
14,057,924
2012-12-27T16:33:00.000
49
0
0
0
python,nano
14,057,986
1
true
0
0
If you press Ctrl-X, for exit, it will ask you whether you want to save the file. Ctrl-O is for saving file without exiting the editor. Ctrl-G is for help on key combinations.
1
25
0
I'm very new to programming and playing around with a Raspberry Pi and following tutorials on Youtube. I have opened a file in GNU Nano 2.2.6 e.g: nano my_File.py and changed some of the data. I'm struggling on how to overwrite the file (or save it) because when i run it in a new window it uses the original data... Thanks.
Python - Saving a File being edited in GNU Nano 2.2.4
1.2
0
0
48,601
14,058,398
2012-12-27T17:07:00.000
1
0
1
0
python
14,058,904
1
true
0
0
There are 3 things you need for this:
1. A server where you will store all the information about the updates and versions. It can be a web server (for Python, see flask, web.py, django, pylons, etc., or PHP or whatever) which can have a single page. It will take the current version as input (GET/POST requests) and output the updates available (in a format that can be parsed: JSON preferably, or XML, or just plain text). These can be fetched from a database (see MySQL, postgresql, or any ORM that works with your choice of web server, e.g. SQLAlchemy), or by checking the names of the files available on the server (if the files will be hosted on the same web server); the names will follow a pattern like XXX-r24-20121224.tar.gz and you'll check the list of files with glob or something similar.
2. A piece of code that will query the server every time you start your game to check whether there are updates. You can use requests or urllib2, for example.
3. A piece of code that will download and update your actual game. The web server should give you a link to where the update file is. From there you will have to download it (with requests or urllib2), unzip it (using zipfile or tarfile) and replace your actual files with the new ones.
Now it all depends on how your files are laid out. If you're distributing the source code, you could build it all into a package and then just replace the whole package; the zipfile package and Python actually account for that: they give you an option to put only the python files in the zip file, and Python lets you add that zip file to the PYTHONPATH and import directly from it. If you're compiling with py2exe or anything similar, it'll be a different issue: you might be able to update only one zip file, or you might have to replace the actual DLLs and such, which could be a big mess. If it's a deb package or similar, you might want to use that mechanism to update and ask the user to do it.
I hope this helps, even if it's very abstract. Now I'll give my own (biased) opinion: if you already have a website running, use that to add a single page for such a thing; otherwise I'd recommend a free host that allows you to set up a website using flask. It would be very easy to get running in no time, plus it lets you use the great ORM SQLAlchemy. Also, I wouldn't bother with more than telling the user there is a new version and letting them figure out where to get it, unless you are only distributing the game in one standard way.
1
1
0
I'm making a game using the (very) old Python library, PyGame. But that's not what I'm here to ask about. How do I write code that checks a server's repository for the latest build, compares whether that build is newer or the same, and, if newer, prompts the user to download (or decline) the game update? The game will be developed in multiple versions, and players should be able to gradually update as we make changes. Like Minecraft does once an update comes out and it prompts you to update... but in Python.
Checking repository and updating
1.2
0
0
235
14,061,297
2012-12-27T21:05:00.000
2
0
0
1
python,django,redis,celery
14,061,333
1
true
1
0
Provided you're running Daemon processes of Redis and Celery you do not need to restart them when you restart Apache. Generally, you will need to restart them when you make configuration changes to either Redis or Celery as the applications are dependent on eachother.
1
0
0
I'm a bit new to redis and celery. Do I need to restart celeryd and redis every time I restart apache? I'm using celery and redis with a django project hosted on webfaction. Thanks for the info in advance.
redis celeryd and apache
1.2
0
0
130
14,063,516
2012-12-28T01:37:00.000
1
0
0
0
python,frameworks
14,063,549
2
true
1
0
I find Django's admin very easy to use for non-technical clients. In fact, that is the major consideration for my using Django as of late. Once set up properly, non-technical people can very easily update information, which can reflected on the front end immediately. The client feels empowered.
2
0
0
I need to write a very light database (sqlite is fine) app that will initially be run locally on a client's Windows PC but could, should it ever be necessary, be upgraded to work over the public interwebs without a complete rewrite. My end user is not very technically inclined and I'd like to keep things as simple as possible. To that end I really want to avoid having to install a local webserver, however "easy" that may seem to you or I. Django specifically warns not to use its inbuilt webserver in production, so my two options seem to be... a) Use django's built-in server anyway while the app is running locally on Windows and, if it ever needs to be upgraded to work over the net, just stick it behind apache on a linux box somewhere in the cloud. b) Use a framework that has a more robust built-in web server from the start. My understanding is that the only two disadvantages of django's built-in server are a lack of security testing (moot if running only locally) and its single-threaded nature (not likely to be a big deal either for a low/zero-concurrency single-user app running locally). Am I way off base? If so, then can I get some other "full stack" framework recommendations please. I strongly prefer Python but I'm open to PHP- and Ruby-based solutions too if there's no clear Python winner. I'm probably going to have to support this app for a decade or more so I'd rather not use anything too new or esoteric unless it's from developers with some serious pedigree. Thanks for your advice :) Roger
Which python web frameworks incorporate a web sever that is suitable for production use?
1.2
0
0
232
14,063,516
2012-12-28T01:37:00.000
0
0
0
0
python,frameworks
14,069,017
2
false
1
0
Use Django. It's very simple to get started, and it has excellent documentation. Follow the step-by-step app-creation tutorial. Django supports all the major databases. The built-in server is also very simple to use for development (for production you would put the app behind a real web server). I would highly recommend Django.
2
0
0
I need to write a very light database (sqlite is fine) app that will initially be run locally on a client's Windows PC but could, should it ever be necessary, be upgraded to work over the public interwebs without a complete rewrite. My end user is not very technically inclined and I'd like to keep things as simple as possible. To that end I really want to avoid having to install a local webserver, however "easy" that may seem to you or I. Django specifically warns not to use its inbuilt webserver in production, so my two options seem to be... a) Use django's built-in server anyway while the app is running locally on Windows and, if it ever needs to be upgraded to work over the net, just stick it behind apache on a linux box somewhere in the cloud. b) Use a framework that has a more robust built-in web server from the start. My understanding is that the only two disadvantages of django's built-in server are a lack of security testing (moot if running only locally) and its single-threaded nature (not likely to be a big deal either for a low/zero-concurrency single-user app running locally). Am I way off base? If so, then can I get some other "full stack" framework recommendations please. I strongly prefer Python but I'm open to PHP- and Ruby-based solutions too if there's no clear Python winner. I'm probably going to have to support this app for a decade or more so I'd rather not use anything too new or esoteric unless it's from developers with some serious pedigree. Thanks for your advice :) Roger
Which python web frameworks incorporate a web sever that is suitable for production use?
0
0
0
232
14,063,712
2012-12-28T02:07:00.000
4
0
0
0
python,django
14,063,752
1
true
1
0
Python does not allow pickling of some functions either, because of the security problems it would open up if it were allowed. (It depends: there are ways to pickle some functions by reference.) Pickling file objects has been requested in Python's feature threads many times, and the best reasoning against it is that it opens up additional hack vectors into Python's security model by allowing run-time injection of potentially malicious events. It would be very convenient in a number of ways, but it appears to be a deliberate security restriction.
1
0
0
I am trying to understand why a django InMemoryUploadedFile object cannot be pickled when it is sent as an argument of a celery task: Can't pickle <type 'cStringIO.StringO'>: attribute lookup cStringIO.StringO failed. So I tried the File object, which doesn't work either, but a StringIO would work. I need some dummies' guidance in understanding the difference between the three. Thanks!
python: why a File object cannot be pickled?
1.2
0
0
465
14,064,945
2012-12-28T05:19:00.000
3
0
1
0
python,string,list
14,064,964
2
false
0
0
You will need two functions from the standard library: str.replace and ast.literal_eval.
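Putting the two together on a shortened version of the string from the question:

    import ast

    s = '{{0, 0, 1}, {1, 1, 0}}'
    nested = ast.literal_eval(s.replace('{', '[').replace('}', ']'))
    print(nested)        # [[0, 0, 1], [1, 1, 0]]
    print(nested[1][0])  # 1 -- a normal Python list of lists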
1
0
0
I want to convert a string of the form: {{0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1}, {1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0}, {0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0}, {1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0}, {1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0}} into a list that looks exactly like it (that is I have a string written as if it were a list, except I would need to replace each { with a [ and each } with a ] in order to work with it as a list in python. Any ideas?
Converting a string into a list and removing prefixes and suffixes
0.291313
0
0
176
14,068,303
2012-12-28T10:45:00.000
-1
1
1
0
python,module
14,068,920
3
false
0
0
It's simple: have them put your script in site-packages or dist-packages. Then they can import it with import module and use it.
1
2
0
I have written a script in python that I would like to be able to give to some less tech-savvy friends. However, it relies on PIL and requests to function. How can I include these modules without forcing my friends to try to install them?
Auto-including Modules in Python
-0.066568
0
0
112
14,070,565
2012-12-28T13:53:00.000
1
0
0
0
python
14,070,812
4
false
0
0
Yes, you do have edges: they are the distances between the nodes. In your case, you have a complete graph with weighted edges. Simply derive the distance from each node to every other node (which costs O(N^2) time) and use both nodes and edges as input to one of the approaches you found. That said, your problem seems to be more of an analysis problem than anything else; you should try running a clustering algorithm on your data, like k-means, which clusters nodes based on a distance function (you can simply use the Euclidean distance). The result is exactly what you need: you'll have clusters of nearby elements, you'll know which and how many elements are assigned to each group, and from these values you'll be able to generate the coefficient you want to assign to each node. The only concern worth pointing out is that you'll have to determine up front how many clusters (the k in k-means) you want to create.
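A sketch of that clustering route, assuming scikit-learn is available (the random points, the number of clusters, and the cluster-size weighting rule are illustrative choices, not part of this answer):

    import numpy as np
    from sklearn.cluster import KMeans

    points = np.random.rand(200, 2)   # stand-in for the real X/Y coordinates

    k = 5                             # must be chosen up front, as noted above
    labels = KMeans(n_clusters=k).fit_predict(points)

    # One possible weight: the size of the cluster each point belongs to,
    # so points in dense groups get higher weights.
    cluster_sizes = np.bincount(labels)
    weights = cluster_sizes[labels]
    print(weights[:10])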
1
7
1
I have a list of X and Y coordinates from geodata of a specific part of the world. I want to assign each coordinate, a weight, based upon where it lies in the graph. For Example: If a point lies in a place where there are a lot of other nodes around it, it lies in a high density area, and therefore has a higher weight. The most immediate method I can think of is drawing circles of unit radius around each point and then calculating if the other points lie within in and then using a function, assign a weight to that point. But this seems primitive. I've looked at pySAL and NetworkX but it looks like they work with graphs. I don't have any edges in the graph, just nodes.
Calculating Point Density using Python
0.049958
0
0
12,400