Dataset schema (column name, dtype, observed min/max):

Q_Id (int64, 337 to 49.3M)
CreationDate (string, length 23 to 23)
Users Score (int64, -42 to 1.15k)
Other (int64, 0 to 1)
Python Basics and Environment (int64, 0 to 1)
System Administration and DevOps (int64, 0 to 1)
Tags (string, length 6 to 105)
A_Id (int64, 518 to 72.5M)
AnswerCount (int64, 1 to 64)
is_accepted (bool, 2 classes)
Web Development (int64, 0 to 1)
GUI and Desktop Applications (int64, 0 to 1)
Answer (string, length 6 to 11.6k)
Available Count (int64, 1 to 31)
Q_Score (int64, 0 to 6.79k)
Data Science and Machine Learning (int64, 0 to 1)
Question (string, length 15 to 29k)
Title (string, length 11 to 150)
Score (float64, -1 to 1.2)
Database and SQL (int64, 0 to 1)
Networking and APIs (int64, 0 to 1)
ViewCount (int64, 8 to 6.81M)
13,702,106
2012-12-04T11:42:00.000
1
0
1
0
python,qt,windows-8,pyside,stackless
16,126,325
2
false
0
1
I had the same problem with PySide 1.1.2 and Qt 4.8.4. The solution for me was to set the compatibility mode of the Python executable to Windows 7: right-click the executable -> Properties -> Compatibility -> "Run this program in compatibility mode for: Windows 7". Hope that helps.
2
2
0
I cannot get my code to run on my win8 laptop. I am working with a combination of: Stackless Python 2.7.2 Qt 4.8.4 PySide 1.1.2 Eclipse/Pydev and WingIDE This works well on my Win7 PC, but now i have bought a demo laptop with windows 8. As far as I know all is installed the same way as on my PC. When i run my program (same code) now, i get a warning: "Qt: Untested Windows version 6.2 detected!" Ok, so that could be the source of my problem, but also i get errors: some times the program just quits after the warning above (i think only eclipse) sometimes i get an APPCRASH (i think only eclipse) sometimes i get the exception: TypeError: Error when calling the metaclass bases: mro() returned base with unsuitable layout ('') sometimes i get the exception: TypeError: Error when calling the metaclass bases: multiple bases have instance lay-out conflict Especially the last two don't seem like a windows problem, but i don't see any other difference with my PC win7 install. Does anyone have any idea what is going on or how to fix this? Did i miss a step in the installation or is it some incompatibility maybe? Cheers, Lars Does anyone have some input on this?
windows 8 incompatibility?
0.099668
0
0
2,146
13,702,106
2012-12-04T11:42:00.000
0
0
1
0
python,qt,windows-8,pyside,stackless
13,881,726
2
false
0
1
Try using Hyper-V. Note, however, that Hyper-V is not installed by default in Windows 8; you need to enable it under "Turn Windows features on or off."
2
2
0
I cannot get my code to run on my win8 laptop. I am working with a combination of: Stackless Python 2.7.2 Qt 4.8.4 PySide 1.1.2 Eclipse/Pydev and WingIDE This works well on my Win7 PC, but now i have bought a demo laptop with windows 8. As far as I know all is installed the same way as on my PC. When i run my program (same code) now, i get a warning: "Qt: Untested Windows version 6.2 detected!" Ok, so that could be the source of my problem, but also i get errors: some times the program just quits after the warning above (i think only eclipse) sometimes i get an APPCRASH (i think only eclipse) sometimes i get the exception: TypeError: Error when calling the metaclass bases: mro() returned base with unsuitable layout ('') sometimes i get the exception: TypeError: Error when calling the metaclass bases: multiple bases have instance lay-out conflict Especially the last two don't seem like a windows problem, but i don't see any other difference with my PC win7 install. Does anyone have any idea what is going on or how to fix this? Did i miss a step in the installation or is it some incompatibility maybe? Cheers, Lars Does anyone have some input on this?
windows 8 incompatibility?
0
0
0
2,146
13,707,922
2012-12-04T16:54:00.000
1
0
0
1
python,google-app-engine,blob,blobstore
13,714,228
1
true
1
0
When you upload data to the blobstore you receive a blob_key and a file_name. The blob_key is unique. The file_name is NOT unique. When you do another upload with the same file_name a new version is stored in the blobstore with the same file_name and a new unique blob_key. The first blob is NOT deleted. You have to do it yourself. To administer these uploaded blobs, you create a datastore entity with your own key_name. You can use the file_name for this purpose. And you can use a BlobKeyProperty (NDB) or blobstore.BlobReferenceProperty (datastore) in this entity to reference your blob (to save your blob_key reference). In this way your key_name / file_name uniquely identifies your blob.
1
1
0
I am building a service where you can upload images. On the blob creation I would like to supply a key_name, which will be used by the relevant entity to retrieve it later.
Can I store a blob with a key_name with Google Appengine ndb?
1.2
0
0
337
13,710,631
2012-12-04T19:42:00.000
3
0
1
0
python
13,710,665
4
false
0
0
You've got the ternary syntax x if x else '' - is that what you're after?
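A minimal sketch of the two common Python spellings (the helper names below are made up for illustration). Note that `x if x else ''` and `x or ''` substitute for any falsy value, while C#'s `??` only substitutes for null, so the closest equivalent checks `is not None` explicitly:

```python
def coalesce(x):
    # Explicit None check: substitutes only when x is actually None,
    # matching C#'s ?? semantics.
    return x if x is not None else ''

def coalesce_truthy(x):
    # Shorter spelling, but substitutes for ANY falsy value
    # (None, '', 0, [], ...), not just None.
    return x or ''

coalesce(None)       # returns ''
coalesce(0)          # returns 0 (kept: 0 is not None)
coalesce_truthy(0)   # returns '' (replaced: 0 is falsy)
```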
1
305
0
In C#, I can say x ?? "", which will give me x if x is not null, and the empty string if x is null. I've found it useful for working with databases. Is there a way to return a default value if Python finds None in a variable?
Is there shorthand for returning a default value if None in Python?
0.148885
0
0
276,258
13,711,765
2012-12-04T20:56:00.000
0
0
1
1
python
13,711,840
2
false
0
0
Install both Pythons and adjust the PATH in Windows. By default the two installs end up somewhere like c:\python27\ and c:\python32\ (or similar). Since Windows stops searching as soon as it finds the first python.exe on the PATH, add both directories to the PATH with the version you want as the default listed first; this way you can run both of them.
1
1
0
I'm trying to run a Python script from a shortcut. It opens, but then closes right away. I know why it is doing this: it is opening my Windows command line with Python 3.2, but the script is in Python 2.7. I need both versions on my PC; my question is how do I change the cmd default? I have tried "open with" on the shortcut icon and it just continues to default to 3.2. Help please.
changing python windows command line
0
0
0
538
13,715,508
2012-12-05T02:47:00.000
3
0
1
0
python,pydev
13,715,560
2
true
0
0
Highlight all the code and use Tab and Shift+Tab. EDIT: You can also highlight the code, right-click on selection, and choose Shift+► (or to unindent, Shift+◄).
1
2
0
I'm working with pydev on eclipse, and I'm wondering if there is a way to circumvent the indentation, or at least quickly edit indentation in files. For example, if I make a handful of methods, but then want to put them under a class umbrella, is there an easy way of doing this in terms of indentation, or do I need to go through them line by line?
Fixing Python Indentation When Including in a Method or a Class
1.2
0
0
288
13,716,319
2012-12-05T04:29:00.000
1
0
0
0
python,time,timestamp,unix-timestamp
13,716,366
3
false
0
0
The seconds are just an ordinary decimal number, so "41.229431" means 41.229431 seconds after the start of the minute. Since there are six digits after the decimal, that means that the precision of the timestamp extends to microseconds in this case, but there could just as easily be fewer or many more digits.
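For completeness, a quick sketch of parsing that exact string with the standard library: the `%f` format code consumes the fractional-second digits (up to six), which `datetime` stores as whole microseconds:

```python
from datetime import datetime

# Parse the ISO-8601-style timestamp from the question.
ts = datetime.strptime("2012-07-02T21:27:41.229431", "%Y-%m-%dT%H:%M:%S.%f")

print(ts.hour, ts.minute, ts.second)  # 21 27 41
print(ts.microsecond)                 # 229431
```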
2
1
0
I have been trying to figure out what kind of timestamp takes this form: 2012-07-02T21:27:41.229431 It seems like it is some sort of unix time, but I can't figure out what the 6 digits after the decimal point represent. I'm assuming 21 is the hour, 27 is the minute, and 41 is the second. Obviously next would be milliseconds, but it seems like 6 digits would be too high precision. Could someone please help? By the way, this was produced in Python, if that helps.
I'm having trouble figuring out what this time stamp means
0.066568
0
0
674
13,716,319
2012-12-05T04:29:00.000
0
0
0
0
python,time,timestamp,unix-timestamp
13,716,332
3
false
0
0
That's an ISO-8601 timestamp. The 6 digits after the decimal are microseconds.
2
1
0
I have been trying to figure out what kind of timestamp takes this form: 2012-07-02T21:27:41.229431 It seems like it is some sort of unix time, but I can't figure out what the 6 digits after the decimal point represent. I'm assuming 21 is the hour, 27 is the minute, and 41 is the second. Obviously next would be milliseconds, but it seems like 6 digits would be too high precision. Could someone please help? By the way, this was produced in Python, if that helps.
I'm having trouble figuring out what this time stamp means
0
0
0
674
13,718,656
2012-12-05T07:59:00.000
0
0
0
0
python,django,django-models
13,718,988
5
false
1
0
You can separate the model file like this:

models/
    __init__.py
    usermodels.py
    othermodel.py

In __init__.py:

from usermodels import *
from othermodel import *

And in each *models.py, add a Meta class:

class Meta:
    app_label = 'appName'
1
8
0
Currently all my models are in models.py, and it's becoming very messy. Can I have a separate file like base_models.py, so that I can keep my main models there, the ones I don't want to touch? Same question for views: can I put them in a separate folder rather than develop a new app?
Can i divide the models in different files in django
0
0
0
2,427
13,721,808
2012-12-05T11:03:00.000
1
1
0
1
python,python-3.x,daemon,launch-daemon
13,722,236
2
true
0
0
Suppose the Python script is named monitor. Use the following steps: copy the monitor script to /usr/local/bin/ (not strictly necessary); also add a copy in /etc/init.d/. Then make it executable with sudo -S chmod "a+x" "/etc/init.d/monitor". Finally, run the update-rc.d command: sudo -S update-rc.d "monitor" "defaults" "98". This will execute your monitor whenever you log in, for all ttys.
1
2
0
I am writing a script in Python 3 for Ubuntu that should be executed every X minutes and should start automatically after logging in. Therefore I want to create a daemon (is that the right solution for this?), but I haven't found any modules/examples for Python 3, just for Python 2.X. Do you know of something I can work with? Thank you.
Daemon with python 3
1.2
0
0
3,305
13,722,760
2012-12-05T11:57:00.000
4
1
1
0
python
13,722,800
2
false
0
0
Unit testing is the best way to handle this. If you think the testing is taking too much time, ask yourself how much time you are losing on defects (identifying, diagnosing and rectifying them) after you have released the code. In effect, you are testing in production, and there's plenty of evidence to show that defects found later in the development cycle can be orders of magnitude more expensive to fix.
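As a minimal illustration of the kind of test the answer recommends (`parse_port` is a made-up stand-in for any small utility; the type-error test is exactly the class of mistake pylint tends to miss):

```python
import unittest

def parse_port(value):
    """Convert a string to a TCP port number, with validation.
    (A hypothetical example function, not from the question.)"""
    port = int(value)            # raises TypeError/ValueError for bad input
    if not 0 < port < 65536:
        raise ValueError("port out of range: %d" % port)
    return port

class ParsePortTest(unittest.TestCase):
    def test_valid(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_none(self):
        # Catches the "wrong type passed in" class of bug.
        with self.assertRaises(TypeError):
            parse_port(None)

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")
```

Running it with `python -m unittest` takes seconds, which is usually far cheaper than chasing the same bug in production.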
1
0
0
I always have trouble with dynamic languages like Python. Several troubles: Typo errors: I can use pylint to reduce some of these, but there are still errors that pylint cannot figure out. Object type errors: I often forget what type a parameter is, int? str? some object? I also forget the types of objects in my own code. Unit tests might help sometimes, but I don't always have enough time to write them. When I need a script to do a small job, the code is 100 to 200 lines, not big, but I don't have time to unit test it because I need the script as soon as possible. So, many errors appear. Any ideas on how to reduce the number of these troubles?
How to reduce errors in dynamic language such as python, and improve my code quality?
0.379949
0
0
166
13,725,567
2012-12-05T14:35:00.000
3
0
1
0
python,excel,time,number-formatting,datanitro
13,725,706
2
true
0
0
If .355556 (represented as 8:32) is in A1 then =HOUR(A1)&":"&MINUTE(A1) and Copy/Paste Special Values should get you to a string.
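If it's easier to do the conversion on the Python side of DataNitro, here is a sketch (the helper name is made up). It relies only on the fact stated in the question: Excel stores times as fractions of a 24-hour day, so 8:32 is (8*60 + 32) / 1440 = 0.355556:

```python
def excel_time_to_str(fraction):
    """Convert an Excel time serial (fraction of a day) to an 'h:m' string."""
    total_minutes = int(round(fraction * 24 * 60))  # minutes past midnight
    hours, minutes = divmod(total_minutes, 60)
    return "%d:%02d" % (hours, minutes)

print(excel_time_to_str(0.355556))  # 8:32
```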
1
1
0
I have a list of times in h:m format in an Excel spreadsheet, and I'm trying to do some manipulation with DataNitro but it doesn't seem to like the way Excel formats times. For example, in Excel the time 8:32 is actually just the decimal number .355556 formatted to appear as 8:32. When I access that time with DataNitro it sees it as the decimal, not the string 8:32. If I change the format in Excel from Time to General or Number, it converts it to the decimal (which I don't want). The only thing I've found that works is manually going through each cell and placing ' in front of each one, then going through and changing the format type to General. Is there any way to convert these times in Excel into strings so I can extract the info with DataNitro (which is only viewing it as a decimal)?
Converting time with Python and DataNitro in Excel
1.2
1
0
488
13,727,279
2012-12-05T15:58:00.000
0
0
0
0
python,pyqt4,opensuse,kde-plasma,pyrcc
13,730,501
2
false
0
1
Well, it turns out all I needed was to install the extra package python-qt4-utils on top of the existing python-qt4. Now I have the sought-after utilities in place.
1
1
0
I am working on a PyQt app. I created the UI via Qt Designer 4.8.1 and generated the corresponding .py file using pykdeuic4 (available on openSUSE 12.2), but I can't find an equivalent of pyrcc4 to handle the *.qrc files. What's the equivalent tool/command? Edit: Most of the documentation on using Qt Designer with PyQt indicates using pyuic4 / pyuic (which on my platform is pykdeuic4), but for the other tool, pyrcc4 / pyrcc, I can't find an equivalent. I'm wondering, where can I even get the original tool (pyrcc4) from?
Equivalent of pyrcc4 on KDE
0
0
0
452
13,728,325
2012-12-05T16:54:00.000
1
1
0
1
python,z3
13,730,652
2
true
0
0
Yes, you can do it by including the build directory in your LD_LIBRARY_PATH and PYTHONPATH environment variables.
1
2
0
I'm trying to use Z3 from its Python interface, but I would prefer not to do a system-wide install (i.e. sudo make install). I tried doing a local install with a --prefix, but the Makefile is hard-coded to install into the system's Python directory. Best case, I would like to run Z3 directly from the build directory, in the same way I use the z3 binary (build/z3). Does anyone know how to, or have a script to, run z3py directly from the build directory without doing an install?
Can I use Z3Py without doing a system-wide install?
1.2
0
0
540
13,728,955
2012-12-05T17:27:00.000
0
0
0
0
python,mongodb
13,729,295
2
false
0
0
If this is an inescapable problem, you could split the array of ids across multiple queries and then merge the results client-side.
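A sketch of the client-side splitting the answer describes (the helper names are invented; `coll` stands for any PyMongo collection, and the chunk size would be tuned to stay under the server's query-size limit):

```python
def chunked(ids, size):
    """Yield successive slices of at most `size` ids."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def find_by_ids(coll, ids, chunk_size=10000):
    """Run one $in query per chunk and merge the results client-side."""
    results = []
    for chunk in chunked(ids, chunk_size):
        results.extend(coll.find({"_id": {"$in": chunk}}))
    return results
```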
1
0
0
I've implemented a breadth first search with a PyMongo social network. It's breadth first to reduce the number of connections. Now I get queries like coll.find({"_id":{"$in":["id1", "id2", ...]}} with a huge number of ids. PyMongo does not process some of these big queries due to their size. Is there a technical solution around it? Or do you suggest another approach to such kind of queries where I need to select all docs with one of a huge set of ids?
Large size query with PyMongo?
0
1
0
116
13,729,740
2012-12-05T18:14:00.000
0
1
1
0
python
13,729,994
3
true
0
0
The print "foo-bar" trick is basically what people do for quick&dirty scripts. However, if you have lots and lots of loops, you don't want to print a line for each one. Besides the fact that it'll fill the scrollback buffer of your terminal, on many terminals it's hard to see whether anything is happening when all it's doing is printing the same line over and over. And if your loops are quick enough, it may even mean you're spending more time printing than doing the actual work. So, there are some common variations on this trick: Print characters or short words instead of full lines. Print something that's constantly changing. Only print every N times through the loop. To print a word without a newline, you just print 'foo',. To print a character with neither a newline nor a space, you have to sys.stdout.write('.'). Either way, people can see the cursor zooming along horizontally, so it's obvious how fast the feedback is. If you've got a for n in … loop, you can print n. Or, if you're progressively building something, you can print len(something), or outfile.tell(), or whatever. Even if it's not objectively meaningful, the fact that it's constantly changing means you can tell what's going on. The easiest way to not print all the time is to add a counter, and do something like counter += 1; if counter % 250 == 0: print 'foo'. Variations on this include checking the time, and printing only if it's been, say, 1 or more seconds since the last print, or breaking the task into subtasks and printing at the end of each subtask. And obviously you can mix and match these. But don't put too much effort into it. If this is anything but a quick&dirty aid for your own use, you probably want something that looks more professional. As long as you can expect to be on a reasonably usable terminal, you can print a \r without a \n and overwrite the line repeatedly, allowing you to draw nice progress bars or other feedback (à la curl, wget, scp, and other similar Unix tools).
But of course you also need to detect when you're not on a terminal, or at least write this stuff to stderr instead of stdout, so if someone redirects or pipes your script they don't get a bunch of garbage. And you might want to try to detect the terminal width, and if you can detect it and it's >80, you can scale the progress bar or show more information. And so on. This gets complicated, so you probably want to look for a library that does it for you. There are a bunch of choices out there, so look through PyPI and/or the ActiveState recipes.
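A minimal sketch combining several of these tricks (the loop body and the counts are made up): print only every 250th iteration, use `\r` to overwrite the line instead of scrolling, and write to stderr so redirected stdout stays clean:

```python
import sys
import time

total = 1000
for n in range(total):
    time.sleep(0.001)               # stand-in for the real work
    if n % 250 == 0 or n == total - 1:
        # '\r' returns to the start of the line instead of scrolling,
        # and stderr keeps the noise out of any redirected stdout.
        sys.stderr.write("\rprocessed %d/%d" % (n + 1, total))
        sys.stderr.flush()
sys.stderr.write("\n")
```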
1
1
0
Simple question: Is there some code or function I can add into most scripts that would let me know its "running"? So after you execute foo.py most people would see a blinking cursor. I currently am running a new large script and it seems to be working, but I wont know until either an error is thrown or it finish(might not finish). I assume you could put a simple print "foo-bar"at the end of each for loop in the script? Any other neat visual read out tricks?
Code to check a scripts activity (python)
1.2
0
0
131
13,730,790
2012-12-05T19:18:00.000
1
0
0
0
python,linux,web-applications,chown
13,730,842
1
true
1
0
The proper way is to compile to bytecode on install so that .pyc files never need to be created on the fly. The rest is basic stuff, like "never use 0777/0666".
1
1
0
I've been using chown www-data:www-data -R /path/to/my/django-app/ and simply letting my virtualenv's dirs / files be owned by root (since sudo pip install foo implies that by default). This just doesn't feel right though. Is this pretty typical, or, should www-data only own directories that it can upload files to? If I allow root to own everything, my server won't even be able to write .pyc files, or will it? I'm clearly quite new to Unix permissions. What is the secure, proper way to handle this?
What user should own my Python scripts and application directories?
1.2
0
0
112
13,734,601
2012-12-05T23:44:00.000
1
0
0
0
python,xmpp,message
13,739,615
2
false
0
0
I can't give you a specific Python example, but I can explain how the logic works. When you send a message to a bare JID, it depends on the server software or configuration how it's routed. Some servers send the message to the "most available resource", and some servers send it to all resources; e.g. Google Talk sends it to all resources. If you control the server software and it allows you to route messages sent to a bare JID to all connected resources, then this is the easiest way. If your code must work on any server, then you should collect all available resources of your contacts. You get them with the presence; most libraries have a callback for this. Then you can send out the messages to full JIDs (with resources) in a loop.
1
1
0
How can I send one XMPP message to all connected clients/resources using a Python library, for example xmpppy, jabber.py, or jabberbot? Any other command-line solution is welcome. So far I've only been able to send an echo or a single message to only one client. The purpose is to send a message to all connected resources/clients, not grouped. This might be triggered by a command, but that is not really necessary. Thank you.
Send an xmpp message to all connected clients/resources
0.099668
0
1
1,722
13,735,024
2012-12-06T00:30:00.000
12
0
0
0
python,flask
21,083,908
4
false
1
0
If you have security concerns (and everyone should have), here is the answer: this is not REALLY possible. Flask uses cookie-based sessions. When you edit or delete the session, you send a request to the client to remove the cookie, which normal clients (browsers) will honor. But if the session has been hijacked by an attacker, the attacker's copy of the session remains valid.
1
27
0
How do I create a new clean session and invalidate the current one in Flask? Do I use make_null_session() or open_session()?
Invalidate an old session in Flask
1
0
0
25,119
13,736,310
2012-12-06T03:07:00.000
1
0
1
1
python,loops
13,736,441
1
false
0
0
The simplest solution is to add a variable outside of the loop that stores the last time the data size was checked. Every time through your loop, compare the current time with that stored time and check whether more than X time has elapsed.
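A sketch of that pattern (the names are made up; `CHECK_INTERVAL` here is one day, matching the question):

```python
import time

CHECK_INTERVAL = 24 * 60 * 60     # once a day, in seconds
last_check = time.time()

def due_for_check(now):
    """Return True (and reset the timer) once per CHECK_INTERVAL."""
    global last_check
    if now - last_check >= CHECK_INTERVAL:
        last_check = now
        return True
    return False

# In the main loop you would call due_for_check(time.time()) each cycle
# and only run the file-size housekeeping when it returns True.
```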
1
0
0
I'm just starting out with Python. And I need help understanding how to do the main loop of my program. I have a source file with two columns of data, temperature & time. This file gets updated every 60 seconds by a bash script. I successfully wrote these three separate programs; 1. A program that can read the last 1440 lines of the source data and plot out a day graph. 2. A program that can read the last 10080 lines of the source data and plot out a week graph. 3. A program that can read the source data and just display the last recorded temperature. 4. Check the size of the source file and delete data over X days old. I want to put it all together so that a user can toggle between the 3 different display types. I understand that this would work in a main loop, with just have the input checked in the loop, and call a function based on what is returned. But I don't know how to handle the file size check. I don't want it checked every time the loops cycles. I would like it to be run once a day. thanks in advance!
Understanding an element of the main loop
0.197375
0
0
70
13,737,315
2012-12-06T05:09:00.000
1
0
1
0
python,split
13,737,367
6
false
0
0
Do a text substitution first then split. e.g. replace all tabs with spaces, then split on space.
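A concrete sketch of that substitute-then-split approach, alongside the even simpler option: `split()` with no argument already splits on any run of whitespace (spaces and tabs alike). The sample line is made up:

```python
line = "12.5\t  3.7   9.0\t1.2"

# Substitute-then-split, as suggested above (empty strings appear where
# separators were adjacent, so they have to be filtered out):
parts = [p for p in line.replace("\t", " ").split(" ") if p]

# Simpler: no-argument split() handles runs of mixed whitespace directly
# and never yields empty strings.
assert parts == line.split() == ["12.5", "3.7", "9.0", "1.2"]
```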
1
2
0
...note that values will be delimited by one or more space or TAB characters How can I use the split() method if there are multiple separating characters of different types, as in this case?
Splitting strings separated by multiple possible characters?
0.033321
0
0
916
13,738,301
2012-12-06T06:44:00.000
6
0
1
0
python,pyscripter
15,842,157
3
true
0
1
Go to the PyScripter menus and check the box for clearing output before a run: Tools > Options > IDE options > Python interpreter > "Clear output before run". Very useful in interactive sessions.
1
3
0
Is there a single command to clean the screen of the Python Interpreter every time one runs a code in PyScripter? thanks,
Command to clean screen in PyScripter?
1.2
0
0
2,810
13,746,872
2012-12-06T15:25:00.000
2
0
0
0
python,chameleon,template-metal
15,097,605
1
true
1
0
Figured it out. You need to wrap the entire master template in a metal tag, like so:

<metal:macro metal:define-macro="layout">
<!DOCTYPE html>
...
</html>
</metal:macro>
1
3
0
I've thoroughly RTFMed and Googled for this, and I can't seem to find the answer. I am new to Chameleon, so maybe it's just so obvious that it's no where to be found, but when I put <!DOCTYPE html> in my master template, the rendered page has it stripped out resulting in the dreaded quirksmode. Is there a trick that I'm missing?
How do you specify the html5 doctype with Python Chameleon?
1.2
0
0
367
13,748,022
2012-12-06T16:26:00.000
4
1
1
0
python,performance,cpu-usage,raspberry-pi,cpu-speed
13,748,368
1
true
0
0
While you can certainly tinker with your program and make it more optimized, the fact is that all programs are generally designed to take as much CPU as they need in order to finish in the smallest time possible. I see two ways to achieve your goal: The Raspberry Pi runs Linux, right? So just lower the process priority of the Python interpreter running your script; this makes sure that other programs can have the CPU when they need it. Or, in your script, sleep for a few milliseconds every few milliseconds. Ugly, but it could do the trick. Option one is probably the way to go.
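Both suggestions can be sketched in a few lines (`throttled` is a made-up helper, and the counts and pause lengths are arbitrary):

```python
import os
import time

# Option 1: lower the interpreter's scheduling priority (Unix only), so
# other processes are preferred whenever they want the CPU.
try:
    os.nice(10)
except (AttributeError, OSError):
    pass  # not available on Windows, or limit already reached

# Option 2: sleep briefly every N items so the loop never pins a core.
def throttled(iterable, every=1000, pause=0.01):
    for i, item in enumerate(iterable):
        if i and i % every == 0:
            time.sleep(pause)
        yield item

total = sum(throttled(range(5000), every=1000, pause=0.001))
print(total)  # 12497500
```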
1
1
0
I have a python program that uses a lot of my CPU's resources. While it is fine on my regular PC, I'm afraid it might be too much to handle for my Raspberry Pi. Speed is not an issue. I don't care if my code is executed slowly as I am implementing a real time system that executes the code only once every few hours, but my CPU needs to be freed up as I would also be running other processes simultaneously. Is there anyway I can reduce the resources that it takes from the CPU at the cost of speed of execution? Any help would be appreciated, thank you
How do I reduce CPU and memory usage by a python program?
1.2
0
0
5,970
13,748,166
2012-12-06T16:33:00.000
7
0
0
0
python,django,multithreading,session,race-condition
13,924,932
3
true
1
0
Yes, it is possible for a request to start before another has finished. You can check this by printing something at the start and end of a view and launch a bunch of request at the same time. Indeed the session is loaded before the view and saved after the view. You can reload the session using request.session = engine.SessionStore(session_key) and save it using request.session.save(). Reloading the session however does discard any data added to the session before that (in the view or before it). Saving before reloading would destroy the point of loading late. A better way would be to save the files to the database as a new model. The essence of the answer is in the discussion of Thomas' answer, which was incomplete so I've posted the complete answer.
1
8
0
Summary: is there a race condition in Django sessions, and how do I prevent it? I have an interesting problem with Django sessions which I think involves a race condition due to simultaneous requests by the same user. It has occured in a script for uploading several files at the same time, being tested on localhost. I think this makes simultaneous requests from the same user quite likely (low response times due to localhost, long requests due to file uploads). It's still possible for normal requests outside localhost though, just less likely. I am sending several (file post) requests that I think do this: Django automatically retrieves the user's session* Unrelated code that takes some time Get request.session['files'] (a dictionary) Append data about the current file to the dictionary Store the dictionary in request.session['files'] again Check that it has indeed been stored More unrelated code that takes time Django automatically stores the user's session Here the check at 6. will indicate that the information has indeed been stored in the session. However, future requests indicate that sometimes it has, sometimes it has not. What I think is happening is that two of these requests (A and B) happen simultaneously. Request A retrieves request.session['files'] first, then B does the same, changes it and stores it. When A finally finishes, it overwrites the session changes by B. Two questions: Is this indeed what is happening? Is the django development server multithreaded? On Google I'm finding pages about making it multithreaded, suggesting that by default it is not? Otherwise, what could be the problem? If this race condition is the problem, what would be the best way to solve it? It's an inconvenience but not a security concern, so I'd already be happy if the chance can be decreased significantly. Retrieving the session data right before the changes and saving it right after should decrease the chance significantly I think. 
However I have not found a way to do this for the request.session, only working around it using django.contrib.sessions.backends.db.SessionStore. However I figure that if I change it that way, Django will just overwrite it with request.session at the end of the request. So I need a request.session.reload() and request.session.commit(), basically.
Django session race condition?
1.2
0
0
1,895
13,750,417
2012-12-06T18:48:00.000
1
1
0
0
python,git,continuous-integration,flask,gunicorn
13,767,025
1
false
1
0
Buildbot, or Jenkins/Hudson. These give you continuous integration in the sense that you can run a "make" equivalent on every codebase change through a commit hook. You could also look at Vagrant if it offers something for you for creating repeatable VMs with respect to config/setup; that could be tied to a commit hook as well.
1
4
0
How should I implement continuous integration on my new application? Currently, this is how we're pushing to production - please bear with me, I know this is far from sane: From local, git push origin production (the production codebase is kept on the production branch, modifications are either written directly there and committed, or files are checked out individually from another branch. Origin is the remote production server) On the remote box, sudo stop gunicorn (application is running as a process) cp ~/flaskgit/application.py ~/flask/applicaion.py (the git push origin from local pushes to an init -bare repo with a post-update hook that populates the files in ~/flaskgit. ~/flask is where the gunicorn service runs the application under a virtualenv) sudo start gunicorn we do our testing with the ~/flaskgit code running on a different port. once it looks good we do the CP I would love to have something more fluid. I have used jenkins in the past, and loved the experience - but didn't set it up. What resources / utilities should I look up in order to do this well? Thank you!
Continuous integration with python 2.7 / flask / mongoDB / git
0.197375
0
0
1,085
13,751,271
2012-12-06T19:43:00.000
0
1
0
1
python,arduino,interprocess,python-multithreading
16,685,053
2
false
0
0
Set up a WAMP server; it is the easiest and quickest way. The web server will support PHP, Python, HTTP, etc. On Linux, the easiest tool for serial communication is PHP, but on Windows PHP cannot read data from a serial connection, hence use Python/Perl etc. Thanks
2
1
0
The situation: I have a python script to connect/send signals to serial connected arduino's. I wanted to know the best way to implement a web server, so that i can query the status of the arduinos. I want that both the "web server" part and serial connection runs on the same script. Is it possible, or do i have to break it into a daemon and a server part? Thanks, any comments are the most welcomed.
python daemon + interprocess communication + web server
0
0
1
544
13,751,271
2012-12-06T19:43:00.000
0
1
0
1
python,arduino,interprocess,python-multithreading
16,689,821
2
true
0
0
For those wondering what I opted for: I decoupled the two parts. The Arduino daemon: I am using Python with a micro web framework called Bottle, which handles the API calls, and I have used PySerial to communicate with the Arduinos. The web server: the canonical Apache and PHP are used to make API calls to the Arduino daemon.
2
1
0
The situation: I have a python script to connect/send signals to serial connected arduino's. I wanted to know the best way to implement a web server, so that i can query the status of the arduinos. I want that both the "web server" part and serial connection runs on the same script. Is it possible, or do i have to break it into a daemon and a server part? Thanks, any comments are the most welcomed.
python daemon + interprocess communication + web server
1.2
0
1
544
13,754,725
2012-12-06T23:48:00.000
2
0
1
0
python,jython,robotframework
13,769,988
1
false
1
0
So after some reading and trial and error: it IS possible to use Selenium2Library with only Jython, AS LONG AS you use Jython 2.7+. Jython 2.5.x is NOT compatible with Selenium2Library. So you can get away with not using Python at all: install Jython 2.7+, install ez_setup.py, install Robot Framework, install Selenium2Library.
1
0
0
Can I use Selenium2Library if I only have Jython? That is, I haven't installed Python, and was hoping to get away with not needing it. I've read conflicting information however that jybot CANNOT use selenium2library, and I'll need pybot to use it. If jybot can't use selenium2Library, is there a way to have jybot call pybot somehow? Thanks
Can I install Selenium2Library for RobotFramework without installing Python?
0.379949
0
0
1,914
13,756,090
2012-12-07T02:32:00.000
0
1
0
0
python,pyramid
14,030,455
3
false
1
0
This is a somewhat different answer than the others, but here is a completely different flow. Write all your pages in HTML. Everything! Then use something like AngularJS or Knockout to add dynamic content. Pyramid will serve the dynamic content requested via Ajax. You can then map everything to your HTML templates... edit those templates wherever you want, since they are simply HTML files. The downside is that making it all work together isn't that simple at first.
1
3
0
I'm switching to Pyramid from an Apache/PHP/Smarty/Dreamweaver scheme. I mean the situation of having a static site in Apache with the menu realized via a Dreamweaver template or other static tools. And then, if I wanted to put some dynamic content in the html, I could do the following: Put Smarty templates in the html. Create a php file behind the html with the same name. The php takes the html as its template. Change links from html to php. And that was all. This scheme is convenient because the site is viewable in a browser and editable in Dreamweaver. How can I reproduce this scheme in Pyramid? There are separate dirs for templates and static content, plus all these myapp:static modifiers in hrefs. Where should I look? Thank you for your advice.
How to make almost static site in Pyramid?
0
0
0
720
13,756,269
2012-12-07T02:57:00.000
0
0
0
0
python,ruby,node.js,grails,cloud
13,756,326
2
false
1
0
Memory footprint will certainly reflect on your PaaS expenses. But to tell you what to use is hard without knowing more about the project. Node.js per se is great, but it's not perfect for every case. Python is very friendly for development, and has ok memory usage, but again - it all depends on what you're doing.
1
4
0
So, I have built and deployed a Grails app onto cloudfoundry. And as I play around with examining instances & memory I start to wonder; If my app's footprint is larger because of the technology I chose to develop it in, will it start costing me money sooner rather than later? Surely it must? If that is the case, am I better off developing in an alternative language? if so which has the smaller footprint (python, ruby, node.js)? Of course, costs should not determine which language I use, I should select language/framework on merits and personal preference. But it is still a question I'd would really like to know the answer to.
Does a smaller application footprint mean cheaper PaaS costs? which language?
0
0
0
216
13,760,852
2012-12-07T10:04:00.000
0
0
0
0
python,soap,windows-7
45,009,606
1
false
0
0
Open your command prompt (assuming Python is already installed and you have set the Python path in your environment variables), then run: pip install zeep, pip install lxml==3.7.3 zeep, pip install zeep[xmlsec], and pip install zeep[async]. Now you are ready to make a SOAP call using Python: 1. Run python. 2. >>> from zeep import Client 3. >>> client = Client('your WSDL URL') 4. >>> result = client.service.method_name(parameters if required) 5. >>> print(result)
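If installing zeep is not possible, the SOAP 1.1 request envelope itself can be built with nothing but the standard library and then POSTed with urllib. This is only a sketch: the method name, parameter, and target namespace below are placeholders, not part of any real service.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(method, params, target_ns="http://example.com/ns"):
    """Return a SOAP 1.1 request body for `method` with simple string params."""
    ET.register_namespace("soap", SOAP_NS)  # serialize with the usual prefix
    envelope = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
    call = ET.SubElement(body, "{%s}%s" % (target_ns, method))
    for name, value in params.items():
        ET.SubElement(call, "{%s}%s" % (target_ns, name)).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

print(build_envelope("GetQuote", {"symbol": "ABC"}))
```

The resulting string would then be sent as the POST body (with a SOAPAction header) to the service endpoint; zeep does all of this, plus WSDL parsing, for you.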
1
1
0
I use Python 3.2. I have an API which uses SOAP. I need to perform a number of SOAP calls to modify some objects in a database. I'm trying to install a SOAP library which would work with Python 3.2 (or 2.7 if that's what it takes) in order to do my task. If someone could give me some guidance how to go through with what library to install and how to install, I would be very grateful. I would be able to continue with the rest of my development. Note: I heard about SOAPy but it looks like it's discontinued. I've downloaded an executable which asks me to point where I want it installed and I'm given no choices... I'm a little lost.
How to make SOAP calls in Python on Windows?
0
0
1
1,697
13,762,051
2012-12-07T11:18:00.000
3
0
0
0
python,session,cookies,ssl,wsgi
13,762,690
1
false
1
0
What's wrong with your protocol You might want to consider the following: Alice contacts your server and obtains an SSL session ID S. A cookie containing H(S) is sent to Alice. Bob is listening in on the exchange. The SSL session ID is not encrypted, so he can read S. Bob contacts your server with the session cookie set to H(S). His session ID is not recognized, but your system will let him into Alice's session anyway (and probably kick Alice out, too!). The solution would then be to use HMAC to sign the session ID. But then you might as well just use an HMAC'ed session ID in the first place. A few details: To know the name of the cookie he should send, Bob can just contact your server. Bob can do the same to get an idea of the hashing algorithm you are using. What's great with HMAC Session cookies + HMAC have been proved to be cryptographically secure. HMAC was designed for the purpose of authenticating data. The logic behind HMAC is sound, and there is, as of today, no attack that exists on the protocol. Even better, it was proved that an attack on the underlying hash algorithm doesn't imply an attack on HMAC (that doesn't mean you should use MD5, though!). There is no reason why you wouldn't want to use HMAC. SSL session IDs are, at best, useful for load balancers. Never implement your own cryptography You should never, ever, re-invent cryptography. Cryptographic algorithms have been reviewed by (possibly) thousands of people with lots of experience in the field. Whenever you feel like you have a better idea, you are probably missing something. Maybe you don't, though! But then you should write a paper on your algorithm and let it be peer-reviewed. Stick to the standards.
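The HMAC'ed session cookie this answer recommends can be sketched with Python's standard library alone. The secret key and the `id.signature` cookie format here are illustrative assumptions, not a prescribed scheme.

```python
import hashlib
import hmac

# Server-side secret; in practice, load it from configuration and keep it
# out of source control (this value is a placeholder).
SECRET_KEY = b"change-me-to-a-long-random-value"

def make_session_cookie(session_id):
    """Return 'session_id.signature', where the signature is an HMAC-SHA256
    over the session ID under the server's secret key."""
    sig = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return "%s.%s" % (session_id, sig)

def verify_session_cookie(cookie):
    """Return the session ID if the signature checks out, else None."""
    session_id, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, avoiding timing side channels.
    return session_id if hmac.compare_digest(sig, expected) else None

print(verify_session_cookie(make_session_cookie("abc123")))  # round-trips to the same ID
```

An attacker who tampers with either half of the cookie fails verification, because he cannot recompute the signature without the server's key.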
1
1
0
I'm reviewing the possibility and advisability of implementing wsgi-app to browser session management, without using any off the shelf middleware. The general approach under consideration is to; Use SSL between the browser and the server. Expose the SSL session ID to the WSGI or OS.environment, to be used as a session ID to enable application level persistence and authentication. As the SSL session ID could change at any time if the server+browser handshake again, the idea is to use a cookie to hold a hashed version of the SSL ID generated. If they handshake and a change in SSL ID is detected (the SSL session ID exposed to the environment does not match the cookie returned by the client), the hashed cookie could be checked to see if it contained the previous known session ID, if it did then we should continue the current session and update the SSL session ID used in the cookie (and stored in a backend db) to be the newly generated-via-handshake SSL session ID. Hence enabling us to continue the session even though SSL session ID's can change under our feet. The idea, as I understand it, is to let SSL be generating session ID's, and to be doing something that is more secure than relying on just cookies+hmac to hold the session ID. I would be interested in anyones thoughts on the above process. In principle it seems sound to me, but I have very little experience with this kind of functionality. I have drawn out the flow of exchanges between client & server & wsgi-app for a few scenarios and it appears to work out fine, but I'm not comfortable I've covered all the bases.
Session managment, SSL, WSGI and Cookies
0.53705
0
0
690
13,768,732
2012-12-07T18:15:00.000
0
0
1
0
python,hash,mechanize,autofill
13,769,150
3
false
0
0
If the requested page always has the same number of forms, you can select a form by its index (0 being the first form, and so on). Try br.select_form(nr=number).
1
4
0
I am trying to autofill a text box(multiple boxes) in a form using mechanize in python, but the name of the box(es) is a hash, so I can't automate the input like br.form['name'] = 'blah' since the name is an unknown hash from a hash function. Is there any way to do this? I've looked online and haven't been able to find anything. Thanks!
Autofill if name = hash
0
0
0
381
13,769,381
2012-12-07T19:01:00.000
0
0
1
0
python,django,django-admin
13,770,047
2
false
1
0
The Django package is hosted on your server. Have you considered changing the images for icon_error.png and icon_success.png, then re-saving them with the same names on your filesystem? The static image files within the Django package live at: django/django/contrib/admin/static/admin/img/
1
0
0
How can I override the default admin static files - for example, icon_error.png and icon_success.png?
How override default admin static files?
0
0
0
118
13,771,694
2012-12-07T22:01:00.000
0
0
0
0
python,pyqt,pyqt4,qtablewidget
13,772,005
3
false
0
1
If you disable the entire event loop, the app becomes unresponsive. And, even if the user doesn't notice, the OS might, and put up some kind of "hang" notification (like OS X's brightly-colored spinning beachball, which no user will ever miss). You might want to disable repaints without disabling the event loop entirely. But even that's probably too drastic. All you're really trying to do is make sure the table stops redrawing itself (without changing the way you've implemented your table view, which you admit isn't ideal, but you have reasons for). So, just disable the ItemChanged updates. The easiest way to do this, in almost every case, is to call blockSignals(True) on the widget. In the rare cases where this won't work (or when you're dealing with ancient code that's meant to be used in both Qt4-based and earlier projects), you can still get the handler(s) for the signal, stash them away, and remove them, then do your work, then restore the previous handler(s). You could instead create a flag that the handlers can access, and change them so they do nothing if the flag is set. This is the traditional C way of doing things, but it's usually not what you want to do in Python.
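The blockSignals suggestion pairs naturally with a context manager, so the previous state is restored even if the table update raises. Since Qt is not assumed to be installed here, the pattern is demonstrated with a stand-in object that mimics the QWidget.blockSignals contract (Qt's call returns the old blocked state).

```python
from contextlib import contextmanager

@contextmanager
def signals_blocked(widget):
    """Temporarily suppress a widget's signals; the previous state is
    restored on exit even if the update code raises."""
    was_blocked = widget.blockSignals(True)  # Qt returns the old blocked state
    try:
        yield widget
    finally:
        widget.blockSignals(was_blocked)

# Stand-in with the same blockSignals contract as a QWidget, for demonstration.
class FakeWidget:
    def __init__(self):
        self.blocked = False
    def blockSignals(self, block):
        old, self.blocked = self.blocked, block
        return old

w = FakeWidget()
with signals_blocked(w):
    # Inside the block, ItemChanged handlers would not fire while the
    # table cells are repopulated programmatically.
    assert w.blocked
assert not w.blocked  # state restored on exit
```

With a real QTableWidget, the body of the `with` block is where you would fill in the cells for the newly selected shape, then re-plot once afterwards.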
1
9
0
I'm developing a GUI with PyQt. The GUI has a qListWidget, a qTableWidget, and a plot implemented with Mayavi. The list refers to shapes that are plotted (cylinders and cones for example). When a shape is selected in the list, I want the shape's properties to be loaded into the table (from a dictionary variable) and the shape to be highlighted in the plot. I've got the Mayavi plotting working fine. Also, if the table is edited, I need the shape to be re-plotted, to reflect the new property value (like for a cylinder, if the radius is changed). So, when a list item is selected -> update the table with the item's properties (from a dictionary variable), highlight the item on the plot When the table is edited -> update the dictionary variable and re-plot the item The Problem: when I select a list item and load data into the table, the qTableWidget ItemChanged signal fires every time a cell is updated, which triggers re-plotting the shape numerous times with incomplete data. Is there a typical means of disabling the GUI event loop while the table is being programmatically updated? (I have experience with Excel VBA, in that context setting Application.EnableEvents=False will prevent triggering a WorksheetChange event every time a cell is programmatically updated.) Should I have a "table update in progress" variable to prevent action from being taken while the table is being updated? Is there a way to update the Table Widget all at once instead of item by item? (I'll admit I'm intentionally avoiding Model-View framework for the moment, hence the qListWIdget and qTableWidget). Any suggestions? I'm a first time poster, but a long time user of StackOverflow, so I just want to say thanks in advance for being such an awesome community!
Turn off PyQt Event Loop While Editing Table
0
0
0
13,515
13,772,750
2012-12-07T23:39:00.000
3
0
0
0
python,django,django-taggit
13,772,887
2
true
1
0
The only technique I can think of would be to attach a custom pre_delete signal handler to every taggable model that checks if it was the last model with any particular tag. In the event that it is, delete that tag.
1
2
0
I think the title says it. Many tags are created and deleted but they still exist even when no more objects are using them. Is there a way to make it check upon save and delete unused tags?
How can I have django-taggit tags deleted when there are no more objects attached to them?
1.2
0
0
728
13,772,857
2012-12-07T23:52:00.000
0
0
0
0
python,mysql,flot
13,774,224
1
false
0
0
Install an httpd server. Install PHP. Write a PHP script to fetch the data from the database and render it as a web page. This is a fairly elaborate request with relatively few details given. More information will allow us to give better answers.
1
0
1
So I am trying to create a realtime plot of data that is being recorded to a SQL server. The format is as follows: Database: testDB Table: sensors Each record contains 3 fields. The first column is an auto-incremented ID starting at 1. The second column is the time in epoch format. The third column is my sensor data. It is in the following format: 23432.32 112343.3 53454.322 34563.32 76653.44 000.000 333.2123 I am completely lost on how to complete this project. I have read many pages showing examples but don't really understand them. They provide source code, but I am not sure where that code goes. I installed httpd on my server, and that is where I stand. Does anyone know of a good how-to from beginning to end that I could follow? Or could someone post a good step-by-step for me to follow? Thanks for your help.
Plotting data using Flot and MySQL
0
1
0
428
13,776,610
2012-12-08T10:27:00.000
1
0
0
1
google-app-engine,python-2.7,google-cloud-datastore
13,779,121
3
false
1
0
The datastore typically saves to disk when you shut down. If you turned off your computer without shutting down the server, I could see this happening.
1
4
0
On my local machine (i.e. http://localhost:8080/), I have entered data into my GAE datastore for some entity called Article. After turning off my computer and then restarting the next day, I find the datastore empty: no entities. Is there a way to prevent this in the future? How do I make a copy of the data in my local datastore? Also, will I be able to upload said data later into both localhost and production? My model is ndb. I am using Mac OS X and Python 2.7, if these matter.
local GAE datastore does not keep data after computer shuts down
0.066568
0
0
2,643
13,776,973
2012-12-08T11:20:00.000
1
0
1
1
python,bash,shell,race-condition
13,777,096
3
false
0
0
The only sure way that no two scripts will act on the same file at the same time is to employ some kind of file locking mechanism. A simple way to do this could be to rename the file before beginning work, by appending some known string to the file name. The work is then done and the file deleted. Each script tests the file name before doing anything, and moves on if it is 'special'. A more complex approach would be to maintain a temporary file containing the names of files that are 'in process'. This file would obviously need to be removed once everything is finished.
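The rename-before-work idea in the first approach can be sketched in Python: on POSIX filesystems, os.rename within the same directory is atomic, so exactly one worker wins the race for any given file (the suffix name is an arbitrary choice).

```python
import os
import tempfile

def try_claim(path, suffix=".inprogress"):
    """Atomically claim a file by renaming it. Returns the claimed path, or
    None if another worker renamed it first (os.rename is atomic on POSIX)."""
    claimed = path + suffix
    try:
        os.rename(path, claimed)
        return claimed
    except OSError:  # file already gone: another worker claimed it
        return None

# Demonstration with a throwaway directory standing in for the work queue:
workdir = tempfile.mkdtemp()
job = os.path.join(workdir, "job-0001.dat")
open(job, "w").close()

first = try_claim(job)    # this worker wins the claim
second = try_claim(job)   # a second worker gets None and moves on
```

Each worker would list the directory, skip any name ending in the suffix, attempt a claim, and only process (then delete) files it successfully renamed.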
1
1
0
I have a directory with thousands of files and each of them has to be processed (by a python script) and subsequently deleted. I would like to write a bash script that reads a file in the folder, processes it, deletes it and moves onto another file - the order is not important. There will be n running instances of this bash script (e.g. 10), all operating on the same directory. They quit when there are no more files left in the directory. I think this creates a race condition. Could you give me an advice (or a code snippet) how to make sure that no two bash scripts operate on the same file? Or do you think I should rather implement multithreading in Python (instead of running n different bash scripts)?
Multiple processes reading&deleting files in the same directory
0.066568
0
0
1,245
13,779,114
2012-12-08T15:57:00.000
1
0
1
0
python,networking
13,779,203
1
true
0
0
I think this might be off-topic for StackOverflow; however, you can consider implementing a simple online chat room. The main advantage of this topic is that it's probably as simple as you can get while still demonstrating application of core concepts in networking and security. You'll be able to do a bit of architecture as well: Server backend: Django, or another framework for Python? Is event-driven architecture appropriate here? Publish-subscribe? Client UI/model: You should technically still use the Model-View-Controller pattern, even though the model here would just be a "proxy" for the model on the server. Serialization format: JSON or YAML? ORM for user accounts / history. It's also a safe project because, while it's not overly ambitious, you can keep adding features to it as long as you have time left; I'm sure you can think of many possible features for a chat room =)
1
1
0
I am currently a student and next semester I am taking a network programming course in python. I need to propose a network project on which I'll work during the semester before it starts. I wanted to ask if anybody knows any good sources of the project for my topic. I can include anything related to networking and security.
Python network programming project sources
1.2
0
1
814
13,779,439
2012-12-08T16:34:00.000
0
1
1
0
python,python-2.7
13,779,703
2
false
0
0
If you want to make changes to the code, it is better to download its source first, apply the changes, modify the setup.py file (or write a new one), and give the package a new name. I mean, do not change the installed version directly; keep the copies separate. Whatever you do, before distributing your copy to others, carefully study the license terms set by the original author.
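To test a local copy without touching the installed package, one common approach (an addition to this answer, not something it states) is to put the copy's parent directory at the front of sys.path, since Python searches path entries in order. The tiny on-disk package fabricated below is purely illustrative.

```python
import importlib
import os
import sys
import tempfile

# Fabricate a minimal local copy of a package named 'mypkg' (a stand-in
# for the real package you copied).
workdir = tempfile.mkdtemp()
pkg_dir = os.path.join(workdir, "mypkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("VERSION = 'local-copy'\n")

# Putting the copy's parent directory at the FRONT of sys.path makes Python
# import it in preference to any installed version of the same name.
sys.path.insert(0, workdir)
mypkg = importlib.import_module("mypkg")
print(mypkg.VERSION)
```

Once testing is done, removing the sys.path entry (or just running without it) goes back to the installed version; renaming the copy, as the answer suggests, avoids the shadowing question entirely.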
1
0
0
I'm trying to improve a package written in Python. The package is already installed in the system, and all the source files are present. I want to create a copy of the package source so that I can make all my changes to the copy and test them without changing the installed package. Is there a way for me to tell Python to pick my copy of the code instead of the installed version whenever a file tries to import the package, so that I can test the new code in the copy? I'm a noob with respect to Python, so please do elaborate on the solution.
Making Changes to Installed Packages
0
0
0
65
13,780,732
2012-12-08T18:52:00.000
0
0
0
0
python,django,background-process,backend
13,783,426
3
false
1
0
Why don't you have a URL or Python script that triggers whatever sort of calculation you need every time it's run, and then fetch that URL or run that script via a cron job on the server? From what your question describes, it doesn't seem like you need a whole lot more than that.
1
4
0
Newbie question about Django app design: I'm building a reporting engine for my web site, and I have a big (and growing) amount of data and some algorithm which must be applied to it. The calculations promise to be heavy on resources, and it would be foolish to perform them on users' requests. So, I'm thinking of putting them into a background process which would execute continuously and from time to time return results, which could be fed to the Django view routine for producing HTML output on demand. And my question is: what is the proper design approach for building such a system? Any thoughts?
Django - how to set up asynchronous longtime background data processing task?
0
0
0
2,371
13,781,287
2012-12-08T19:54:00.000
0
0
0
0
python,database,heroku
13,782,284
1
false
1
0
You can connect directly to it using something like pgAdmin; look at the output of heroku config for your application to get your database URL, which you can break up to use in GUI tools.
1
0
0
How do I see the database for my heroku web app? I just want to verify if users are being registered, properly, etc. I am using Flask-SQLAlchemy Python, if that makes a difference.
How To See A Database On Heroku
0
0
0
61
13,783,071
2012-12-08T23:33:00.000
4
0
0
0
python,list,numpy,min
37,094,880
2
false
0
0
numpy.argpartition(cluster, 3) would be much more effective.
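For completeness, if numpy is unavailable, heapq.nsmallest over the enumerated values yields the same three indices without sorting the whole list or mutating it. The data below is an illustrative stand-in for one cluster's slice of sumErrors.

```python
import heapq

sum_errors = [5.2, 0.1, 3.3, 0.4, 9.9, 0.2, 7.0]  # illustrative cluster slice

# nsmallest keeps only the 3 best (index, value) pairs as it scans, so it is
# O(n log 3) rather than a full sort, and it never mutates the input list.
top3 = heapq.nsmallest(3, enumerate(sum_errors), key=lambda pair: pair[1])
indices = [i for i, _ in top3]
print(indices)  # indices of the three smallest values, best first
```

Because nothing is deleted from the list, the indices still map straight back to the original IDs, which is exactly the property the question needs.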
1
14
1
So I have this list called sumErrors that's 16000 rows and 1 column, and this list is already presorted into 5 different clusters. What I'm doing is slicing the list for each cluster and finding the index of the minimum value in each slice. However, I can only find the first minimum index using argmin(). I don't think I can just delete the value, because otherwise it would shift the slices over, and the indices are what I use to recover the original IDs. Does anyone know how to get argmin() to spit out the indices for the lowest three? Or perhaps a more optimal method? Maybe I should just assign ID numbers, but I feel like there may be a more elegant method.
Finding the indices of the top three values via argmin() or min() in python/numpy without mutation of list?
0.379949
0
0
11,034
13,783,274
2012-12-09T00:03:00.000
5
0
1
0
python,python-3.x
13,783,304
2
false
0
0
A program is deterministic if you get the same result and behavior every time you run it. The term is not particular for Python.
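A minimal illustration of the distinction (the function names are made up for the example):

```python
import random

def deterministic_double(x):
    # Deterministic: the same input always produces the same output.
    return 2 * x

def nondeterministic_pick(x):
    # Not deterministic: the result depends on hidden random state,
    # so repeated calls with the same input can differ.
    return x + random.randint(0, 10)

print(all(deterministic_double(21) == 42 for _ in range(100)))  # True
```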
1
0
0
I am studying for a programming exam and one of the words I need to know is deterministic as it applies to python.
What does the word "deterministic" mean in python?
0.462117
0
0
1,723
13,783,586
2012-12-09T00:53:00.000
3
0
1
1
python,executable
38,528,239
4
false
0
0
Assuming you have pip installed (which you should after installing Python; it lives inside the Scripts folder), install PyInstaller using pip by typing the following in the command prompt: pip install pyinstaller. After you install PyInstaller, locate the PyInstaller files (they should be where your pip files are, inside the Scripts folder), go to the command prompt, and type the following: c:\python27\Scripts>pyinstaller --onefile c:\yourscript.py The above command will create a folder called "dist" inside the Scripts folder, which will contain your single executable file "yourscript.exe".
1
5
0
We are trying to make our python script execute itself as a .exe file, without having python installed. Like if we give our program to someone else, they wouldn't need to install python to open it. It is a text-based game like zork, so we need a gui, like cmd, to run it. We have tried using py2exe, and pyinstaller, but none of them made any sense, and don't work with 2.7.3 for some reason. Any help?
Making a python script executable in python 2.7
0.148885
0
0
16,418
13,783,865
2012-12-09T01:41:00.000
0
1
1
0
python,c,fonts
43,614,842
3
false
0
0
It looks like font files may have multiple localised names. Example with the fontconfig tools: $ fc-query -f '%{fullname} (%{fullnamelang}): %{file}\n' /usr/share/fonts/truetype/unfonts-core/UnBatang.ttf Un Batang,은 바탕 (en,ko): /usr/share/fonts/truetype/unfonts-core/UnBatang.ttf I can select the korean (ko) name using the order in fullnamelang: $ fc-query -f '%{fullname[1]}\n' /usr/share/fonts/truetype/unfonts-core/UnBatang.ttf 은 바탕
1
1
0
Is there anyway I can extract localised name from ttf/otf font file? A solution in Python would be preferred, but I am fine with any language. Thank you very much.
Getting ttf/otf font localised name
0
0
0
1,306
13,783,964
2012-12-09T02:01:00.000
0
0
0
0
google-app-engine,python-2.7,xlwt
13,805,700
1
true
0
0
I figured it out, more or less. I put the image in the app's root directory instead of inside the static directory, and it seems happy with that. Like this: sheetLoopParam.insert_bitmap(os.path.abspath("PLL_diagram2_small.bmp"), 12, 0) instead of like this: sheetLoopParam.insert_bitmap(os.path.abspath("static/PLL_diagram2_small.bmp"), 12, 0). I just need to insert the one image, so that works fine.
1
0
0
I tried: sheetLoopParam.insert_bitmap(os.path.abspath("static/PLL_diagram2_small.bmp"), 12, 0) but it says: IOError: [Errno 13] file not accessible: 'C:\Users\rest_of_path\static\PLL_diagram2_small.bmp' The path it displays is correct, I don't understand why it says it can't access it. Thank you.
Insert image in Excel using xlwt in GAE
1.2
0
0
262
13,784,459
2012-12-09T03:52:00.000
0
1
1
1
python,linux,startup,runlevel
13,876,262
1
true
0
0
I may as well answer my own question with my findings. On Debian, Ubuntu, and CentOS systems there is a file named /etc/rc.local. If you use Python's file I/O to edit that file, you can put in a command that will be run at the end of all the multi-user boot levels. This facility is still present on systems that use Upstart. On BSD I have no idea; if you know how to make something run on startup there, please comment to improve this answer. Arch Linux and Fedora use systemd to start daemons - see the Arch wiki page for systemd. Basically you need to create a systemd service and symlink it. (Thanks Emil Ivanov)
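Editing /etc/rc.local with Python's file I/O might look roughly like this. The sketch runs against a temporary stand-in file so it is safe to execute; the daemon command is a placeholder, and editing the real file requires root.

```python
import os
import tempfile

def add_startup_command(rc_local_path, command):
    """Insert `command` just before the final 'exit 0' line, or append it
    if the file has no such line (rc.local must finish with exit 0 to
    report a clean boot)."""
    with open(rc_local_path) as f:
        lines = f.read().splitlines()
    if lines and lines[-1].strip() == "exit 0":
        lines.insert(-1, command)
    else:
        lines.append(command)
    with open(rc_local_path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Demonstration against a stand-in file instead of the real /etc/rc.local:
path = os.path.join(tempfile.mkdtemp(), "rc.local")
with open(path, "w") as f:
    f.write("#!/bin/sh\nexit 0\n")
add_startup_command(path, "/usr/local/bin/mydaemon &")
print(open(path).read())
```

The trailing `&` backgrounds the daemon so the boot sequence is not blocked waiting for it to exit.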
1
0
0
I would like to find out how to write Python code which sets up a process to run on startup, in this case runlevel two. I have done some reading, yet it has left me unclear as to which method is most reliable on different systems. I originally thought I would just edit /etc/inittab with Python's file I/O, but then I found out that my computer's inittab was empty. What should I do? Which method of setting something to start on boot is most reliable? Does anyone have any code snippets lying around?
Programmatically setting a process to execute at startup (runlevel 2)?
1.2
0
0
302
13,784,859
2012-12-09T05:14:00.000
23
1
1
0
python,uuid
13,784,879
1
true
0
0
There is a standard for UUIDs, so they're the same in all languages. However, there is a string representation and a binary representation. The normal string representation (str(myuuid)) looks like 42c151a8-b22b-4cd5-b103-21bdb882e489 and is 36 characters. The binary representation, myuuid.bytes (or bytes_le, but stay consistent with it when reconstructing the UUID objects), is 16 bytes. You can also get the string representation with no hyphens (32 characters) with myuuid.hex. You should be aware that some databases have a specific UUID type for storing UUIDs. What kind of database are you using?
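The three representations and their lengths can be checked directly with the stdlib uuid module:

```python
import uuid

u = uuid.uuid4()

print(len(str(u)))   # 36: hex digits plus four hyphens
print(len(u.bytes))  # 16: raw binary form
print(len(u.hex))    # 32: hex digits with no hyphens
```

So a text column needs 36 characters (or 32 for the hyphen-free form), while a binary column needs only 16 bytes; whichever you pick, use it consistently when reconstructing UUID objects from the database.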
1
11
0
I'm building a database to hold a UUID generated by the Python uuid4 method - however, the documentation doesn't mention how many characters the UUID is! I'm not overly familiar with UUIDs, so I don't know if all languages generate the same length for a UUID.
What is the number of characters in a python uuid (type 4)?
1.2
0
0
10,805
13,785,630
2012-12-09T07:48:00.000
2
0
0
0
python,selenium,screen-scraping,web-scraping,screenshot
13,787,570
1
false
1
0
Sure. Use Selenium and just loop through all visible, displayable elements.
1
0
0
Is there a way to capture visible webpage content or text as if copying from a browser display to parse later (maybe using regular expression etc)? I don't mean to clean the html tags, javascript, etc and only show leftover text. I would like to copy all visible text, since some style elements may hide some of the html text while showing others when displayed in the browser. So far I have looked into nltk, lxml Cleaner, and selenium without luck. Maybe I can capture a screenshot in selenium and then extract text using ocr, but that seems computer intensive? Thanks for any help!
Capture Visible Webpage Content (or text) as if Copying from Browser
0.379949
0
1
781
13,786,926
2012-12-09T11:19:00.000
0
1
0
0
php,python,scraper
13,952,344
7
false
0
0
I think I have a fair idea of what you are saying, but I am not too sure what you mean. Do you mean that every time the Python script does a print, you want the PHP code to output what was printed? If that is the case, you could pass it as POST data via HTTP. That is, instead of printing in Python, you could send the data to the PHP script, which on receiving it would print it. I am not too sure if this is what you want, though.
2
10
0
I have a scraper which scrapes one site (written in Python). While scraping the site, it prints the lines that are about to be written to a CSV. The scraper is written in Python, and now I want to execute it via PHP code. My question is: how can I print each line as it is printed by the Python code? I have used the exec function, but it is of no use to me, as it gives the output only after the whole program has executed. So: is it possible to get the Python output printed while it is being executed via PHP?
Print Python output by PHP Code
0
0
0
3,658
13,786,926
2012-12-09T11:19:00.000
0
1
0
0
php,python,scraper
13,989,869
7
false
0
0
Simply use system() instead of exec(). exec() saves all lines of stdout output of the external program into an array, but system() flushes stdout output "as it happens".
2
10
0
I have a scraper which scrapes one site (written in Python). While scraping the site, it prints the lines that are about to be written to a CSV. The scraper is written in Python, and now I want to execute it via PHP code. My question is: how can I print each line as it is printed by the Python code? I have used the exec function, but it is of no use to me, as it gives the output only after the whole program has executed. So: is it possible to get the Python output printed while it is being executed via PHP?
Print Python output by PHP Code
0
0
0
3,658
13,787,244
2012-12-09T12:07:00.000
0
1
0
0
python,web-services,xmpp,openfire,strophe
13,797,108
1
false
1
0
It seems to me like XMPP is a bit of a heavy-weight solution for what you're doing, given that communication is just one-way and you're just sending notifications (no real-time multi-user chat etc.). I would consider using something like Socket.io (http://socket.io) for the server<=>client channel along with Redis (http://redis.io) for persistence.
1
0
0
I am planning to integrate real time notifications into a web application that I am currently working on. I have decided to go with XMPP for this and selected the Openfire server, which I thought would be suitable for my needs. The front end uses the Strophe library to fetch the notifications using BOSH from my Openfire server. However, the notifications and other messages are to be posted by my application, and hence I think this code needs to reside at the backend. Initially I thought of going with PHP XMPP libraries like XMPHP and JAXL, but then I realized this would cause much overhead, as each script will have to do the same steps (connection, authentication, etc.), and I think this would make the PHP end a little slow and unresponsive. Now I am thinking of creating a middleware application acting as a web service that PHP will call, and this application will handle the XMPP work. The benefit is that this app (a server, if you will) will have to connect just once, and then it will sit there listening on a port. Also, I am planning to build it in an asynchronous way, such that it will first take all the requests from my PHP app and then, when there are no more requests, go about doing the notification publishing. I am planning to create this service in Python using SleekXMPP. This is just what I planned. I am new to XMPP and this whole web service stuff, and would like your comments regarding issues like memory and CPU usage, advantages, disadvantages, scalability, security, etc. Thanks in advance. PS: also, if something like this already exists (although I didn't find one after a lot of Googling), please direct me there. EDIT: The middle-level service should be doing the following (but not limited to): 1. Publishing notifications for different levels of groups and community pages. 2. Notifications for a single user on some event. 3. User registration (can be done using the user service plugin, though).
EDIT: Also, it should be able to create pub-sub nodes and subscribe and unsubscribe users from these pub-sub nodes. Also, I want to store the notifications and messages in a database (Openfire doesn't). Would that be a good choice?
XMPP-- openfire,PHP and python web service
0
0
0
1,049
13,788,172
2012-12-09T14:14:00.000
1
0
0
0
javascript,python,ajax,localhost
13,788,959
1
true
1
0
The most direct way to protect against this attack is to require a long, complex secret key with every request. Just make your local code authenticate itself before processing the request; this is essentially how web services on the Internet are protected. You might also want to consider inter-process communication in some other form, such as D-Bus or Unix sockets. I'm not sure which OS you are on, but there are many options for inter-process communication that wouldn't make you vulnerable in this way.
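A sketch of the shared-secret check suggested above, using Python's standard library; the token name, its length, and how it reaches the page are illustrative assumptions.

```python
import hmac
import secrets

# Generated once at server startup; the locally served page embeds this token
# and sends it back with every Ajax request. A forged cross-site page cannot
# read it, so its requests fail the check.
API_TOKEN = secrets.token_hex(32)

def is_authorized(request_token):
    """Constant-time comparison, so an attacker cannot recover the token
    byte-by-byte from response timing."""
    return hmac.compare_digest(request_token, API_TOKEN)

print(is_authorized(API_TOKEN))  # True for the legitimate page's token
```

In the Bottle handler, this check would run before any file-browsing request is processed, rejecting anything without the correct token.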
1
2
0
I have written an app with Python acting as a simple web server (I am using the Bottle framework for this) and an HTML + JS client. The whole thing runs locally; the web page acts as the GUI in this case. In my code I have implemented a file browser interface so I can access the local file structure from JavaScript. The server accepts only local connections, but what bothers me is this: if, for example, somebody knows that I am running my app locally and forges a site with an AJAX request to localhost, and I visit his site in some way, will my local files be visible to the attacker? My main question is: is there any way to secure this, so that my server knows for sure that the request came from my locally served page?
Secure localhost JavaScript ajax requests
1.2
0
1
320
13,790,066
2012-12-09T17:58:00.000
2
0
0
1
python,.htaccess,authorization,tornado
13,793,602
2
true
0
0
If you based your application on the Tornado "Hello World" example then you probably haven't, but you really should consider writing your application as a WSGI application. Tornado has no problem with that, and the advantage is that your application will then run under a multitude of other environments (Apache + mod_wsgi, to name but one). But how does that solve your original problem? Well, just Google "WSGI authentication middleware"; it'll yield plenty of hits. Basically, what that entails is transparently 'wrapping' your WSGI application in another one, allowing you to completely decouple that aspect of your application. If you're lucky, and one of the hits turns out to be a perfect fit, you might get away with not writing any extra code at all. Since you mentioned .htaccess: it is possible to have Apache do the authentication in an Apache/mod_wsgi configuration.
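To illustrate the wrapping idea, here is a minimal sketch of a WSGI middleware that demands HTTP Basic credentials before letting any request through to the wrapped application. The class name and realm string are my own; a real deployment would also want HTTPS and hashed credentials:

```python
import base64

class BasicAuthMiddleware:
    """Sketch: wrap any WSGI app and require HTTP Basic credentials."""

    def __init__(self, app, username, password):
        self.app = app
        self.credentials = "%s:%s" % (username, password)

    def __call__(self, environ, start_response):
        header = environ.get("HTTP_AUTHORIZATION", "")
        if header.startswith("Basic "):
            try:
                decoded = base64.b64decode(header[6:]).decode("utf-8")
            except Exception:
                decoded = ""
            if decoded == self.credentials:
                # Credentials match: delegate to the wrapped application.
                return self.app(environ, start_response)
        # Anything else gets a 401 challenge.
        start_response("401 Unauthorized",
                       [("WWW-Authenticate", 'Basic realm="private"')])
        return [b"Authentication required"]
```

You would wrap your application once at startup, e.g. `application = BasicAuthMiddleware(application, "user", "pass")`, and every request on that port is then protected.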
1
1
0
I have implemented a very small application with Tornado, where HTTP GET Requests are used to perform actions. Now I would like to secure these requests. What would be a preferable way? Using .htaccess? How can I realize that? It doesn't have to be for certain requests, it should be for all requests running on a certain port.
Secure web requests via Tornado with .htaccess
1.2
0
0
860
13,791,542
2012-12-09T20:34:00.000
1
0
0
0
python,mongodb,path,mongoengine,filefield
13,962,502
1
true
0
0
You can't, since it stores the data in the database (GridFS), not on the file system. If you need to keep the original path, you can create an EmbeddedDocument which contains a FileField and a StringField with the path string. But remember that the stored file and the file you might find at that path are not the same object.
1
0
0
I need to obtain path from a FileField, in order to check it against a given file system path, to know if the file I am inserting into mongo database is already present. Is it possible? All I get is a GridFSProxy, but I am unable to understand how to handle it.
How to get filesystem path from mongoengine FileField
1.2
1
0
430
13,792,502
2012-12-09T22:25:00.000
5
0
0
0
python-3.x,qt4,pyside
57,644,204
7
false
0
1
Just create the directory where it searches for uic.exe and copy the existing uic.exe file into it. My example: when I clicked View Code, it showed an error asking for uic.exe in the path C:\python374\Lib\site-packages\pyqt5_tools\Qt\bin\bin, but I found that uic.exe was actually in the C:\python374\Lib\site-packages\pyqt5_tools\Qt\bin folder. So I created another bin folder there and copied uic.exe into it. That solved my problem.
1
6
0
I have installed Qt designer 4.8.2, Pyside, and Python 3.3. When I create a form with Qt designer I am not able to see the code when clicking on view code. The error message is:"Unable to launch C:\Qt\4.8.2\bin\uic". I have the pyuic under C:\Python33\Lib\site-packages\PyQt4\uic. Please help.
Unable to Launch Qt uic
0.141893
0
0
22,091
13,793,158
2012-12-09T23:49:00.000
0
0
0
0
android,python
54,814,139
4
false
0
1
Download Pydroid 3 from the Play Store and, if you like it, buy the pro version. It costs around $2, but you get all you need for Python 3 programming.
1
7
0
So I want to practice python on my Android. Is there a way I can get the interpreter or an interpreter emulator on my device?
Python Interpreter on Android
0
0
0
28,974
13,794,399
2012-12-10T02:58:00.000
0
0
0
0
python,django,recursion,tree,mptt
14,391,692
2
false
1
0
I have been sitting here for five minutes and I can't think of a way to do this in SQL given the data structure you are describing. To begin with, I would suggest that you separate your data into Posts and Comments, rather than just having one sort of data object. Then you can do a join to gather the comments together with your posts and give different ordering to each. Also, an MPTT seems like overkill for a two layer tree.
2
2
0
I'm using the MPTT tree structure in my Django project to organise comments. I have only 2 levels: comment and comment-of-comment. Everything works perfectly except the ordering. I would like to sort all Comments that don't have a parent by creation date descending ("-creation_date") and all comments that have a parent by creation date ascending ("creation_date"). Basically it's like comments work on a Facebook wall (you always see the latest comment on top, but the comments within a comment are in the reverse order). In my class Comment I have the following MPTTMeta: order_insertion_by = ['creation_date']. I hope I'll get some help. Thank you
Django MPTT ordering
0
0
0
629
13,794,399
2012-12-10T02:58:00.000
-1
0
0
0
python,django,recursion,tree,mptt
14,846,769
2
false
1
0
I found a solution so I forgot to check back. I played around with the mptt structure and django functions ... Thanks
2
2
0
I'm using the MPTT tree structure in my Django project to organise comments. I have only 2 levels: comment and comment-of-comment. Everything works perfectly except the ordering. I would like to sort all Comments that don't have a parent by creation date descending ("-creation_date") and all comments that have a parent by creation date ascending ("creation_date"). Basically it's like comments work on a Facebook wall (you always see the latest comment on top, but the comments within a comment are in the reverse order). In my class Comment I have the following MPTTMeta: order_insertion_by = ['creation_date']. I hope I'll get some help. Thank you
Django MPTT ordering
-0.099668
0
0
629
13,795,682
2012-12-10T05:47:00.000
25
0
0
0
python,numpy
13,795,874
2
true
0
0
A singular matrix is one that is not invertible. This means that the system of equations you are trying to solve does not have a unique solution; linalg.solve can't handle this. You may find that linalg.lstsq provides a usable solution.
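A minimal sketch of both the failure and the fallback (the matrix values are my own; note that for a singular but consistent system, lstsq returns the minimum-norm solution):

```python
import numpy as np

# A singular matrix: the second row is twice the first, so the system
# has no unique solution and linalg.solve raises LinAlgError.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

try:
    x = np.linalg.solve(A, b)
except np.linalg.LinAlgError:
    # Fall back to least squares: for a consistent singular system this
    # picks the solution of minimum norm.
    x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(x)                      # one valid solution of A @ x = b
print(np.allclose(A @ x, b))  # True: it really does satisfy the system
```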
1
13
1
What does the error Numpy error: Matrix is singular mean specifically (when using the linalg.solve function)? I have looked on Google but couldn't find anything that made it clear when this error occurs.
Numpy error: Singular matrix
1.2
0
0
60,944
13,795,785
2012-12-10T05:59:00.000
8
0
1
0
python,ocaml
13,795,826
3
true
0
0
Directly, no. However, if you create a C API for your Ocaml library, you can call that API via. Python's ctypes module or similar. Likewise, if you expose a network service for your OCaml application, Python can call into that.
1
7
0
I've got a large legacy program that is written in OCaml, and I'd like to be able to call some OCaml functions from my Python program. How can I do this the easiest way?
How can I call OCaml functions from a Python program?
1.2
0
0
1,780
13,795,993
2012-12-10T06:23:00.000
1
0
0
1
python,hadoop,apache-pig,embedding
13,796,893
1
false
0
0
OK, I have found the solution. If you are also seeing this error, then I hope this helps. 1) Download the Jython installer jar. 2) Run it with java -jar. 3) Specify a location for the installation. 4) Add the Jython executable shell script to your PATH environment variable. 5) Copy the jython jar from the installation folder to the HADOOP_HOME/lib folder, i.e. the lib folder under Hadoop. Step 5 is mostly the deal-maker, but these are the steps I followed. Copying/adding the Jython jar to Pig does not seem to help. I am running Hadoop in pseudo-cluster mode with Pig on top of it, and Pig seems to take the Hadoop-based jars rather than its own lib! After this it runs like a charm.
1
0
0
I am trying to embed a pig script in Python and am encountering an exception and can't seem to find what the problem is. I have a Python script with pig script embedded in it and have Apache PIG 0.10 installed. I can run pig scripts from the shell and it works ok. when I run the python script with pig embedded from shell using command pig -x mapreduce pythonscript.py it gives me the error Error before Pig is launched ---------------------------- ERROR 2998: Unhandled internal error. org/python/util/PythonInterpreter java.lang.NoClassDefFoundError: org/python/util/PythonInterpreter at org.apache.pig.scripting.jython.JythonScriptEngine.main(JythonScriptEngine.java:338) I have tried adding Jython jar to the $PIG_CLASSPATH environment variable at shell before running pig command. It does not help. I see that others are also encountering this problem but, has anyone found a solution? Any pointers?
Embedding Pig into Python
0.197375
0
0
1,676
13,798,520
2012-12-10T09:53:00.000
3
0
0
1
python,bash,upnp,server-push,samsung-smart-tv
13,798,803
1
true
0
0
You will still need a DLNA server to host your videos on. Via UPnP you only hand the URL to the TV, not the video directly. Once you have it hosted on a DLNA server, you can find out the URL to a video by playing it in Windows Media Player (which has DLNA-support) or by using UPnP Inspector (which I recommend anyways, if you are going to be working with UPnP). You can then push this URL to the TV, which will download and play the video, if its format is supported. I do not know my way around python, but you since UPnP is HTTP based, you will need to send an HTTP request with appropriate UPnP-headers (see wikipedia or test it yourself with UPnP Inspector) and the proper XML-formatted body for the function you are trying to use. The UPnP-function I worked with to push a link to the TV is "SetAVTransportURI", but it might differ from your TV. Use UPnP Inspector to find the correct one, including its parameters. In summary: Get a DLNA-Server to host you videos on. Find out the links to those videos using UPnP Inspector or other DLNA-clients. Find out the UPnP-function that sends a URL to your TV (again, I recomment UPnP Inspector, you can explore and call all functions with it). Implement a call to that function in your script.
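To show what step 4 amounts to, here is a sketch of the SOAP envelope and headers an AVTransport "SetAVTransportURI" call sends. The control URL and IP address below are placeholders (discover the real ones via SSDP or UPnP Inspector), and your TV's service may differ, so treat this as an illustration of the request shape rather than a working client:

```python
# Hypothetical control endpoint; find the real one with UPnP Inspector.
CONTROL_URL = "http://192.168.0.10:52235/upnp/control/AVTransport1"
SERVICE_TYPE = "urn:schemas-upnp-org:service:AVTransport:1"

def build_set_uri_envelope(media_url):
    """Build the SOAP body handing the media URL to the renderer."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        '<s:Body>'
        '<u:SetAVTransportURI xmlns:u="%s">'
        '<InstanceID>0</InstanceID>'
        '<CurrentURI>%s</CurrentURI>'
        '<CurrentURIMetaData></CurrentURIMetaData>'
        '</u:SetAVTransportURI>'
        '</s:Body></s:Envelope>' % (SERVICE_TYPE, media_url)
    )

def soap_headers():
    """HTTP headers UPnP control points send with a SOAP action."""
    return {
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPAction": '"%s#SetAVTransportURI"' % SERVICE_TYPE,
    }

# To actually send it (needs a reachable device on your network):
# import urllib.request
# req = urllib.request.Request(CONTROL_URL,
#                              build_set_uri_envelope(url).encode("utf-8"),
#                              soap_headers())
# urllib.request.urlopen(req)
```

After SetAVTransportURI succeeds, you would typically invoke the Play action on the same service to start playback.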
1
2
0
I would like to make a simple script to push a movie to a Smart TV. I have already installed miniupnp and ushare, but I don't want to browse to a folder through the TV's Smart Apps; I want to push the movie to the TV, to save time, and in the future perhaps do the same directly from a NAS. Does anyone have an idea how to do this? The application SofaPlay does this nicely, but only from my Mac. Thank you
uPnP pushing video to Smart TV/Samsung TV on OSX/Mac
1.2
0
0
3,338
13,799,586
2012-12-10T10:58:00.000
2
1
0
0
python,opencv,ffmpeg,codec
14,071,725
2
true
0
0
I solved the problem finally. Windows7 x64 + Python 2.7 x86 + NumPy x86 + ffdshow x86 + Eclipse x64 is the way to go. Everything is working like a charm. x64 ffdshow is also required for other programs like VirtualDub though.
1
1
0
I started working on a new computer and tried to set everything up as it used to be on my old one. Unfortunately, switching to 64-bit Windows made everything quite difficult. With the current setup I can only open raw I420 videos converted with mencoder, but I can't open DivX/XviD videos as I used to on my old PC. I tried ffdshow and the K-Lite codec pack. Opening the videos in GSpot shows that the codecs are indeed installed. I've searched for a solution all over the Internet, but I couldn't find one. I've tried copying the ffmpeg DLL into the Python27 folder. The environment is 64-bit Windows 7 Pro. EDIT: I tried saving a video using OpenCV: I passed -1 to the cv2.VideoWriter function to get the codec selection dialog. The dialog doesn't show the ffdshow codecs.
Open DivX/XVID videos in OpenCV Python
1.2
0
0
2,972
13,803,315
2012-12-10T14:51:00.000
0
1
0
0
python,eclipse
13,866,443
1
false
0
0
You can use the NetBeans 6.5 IDE; it provides Python support.
1
0
0
I have installed PyCuda without any difficulty but am having trouble linking it to my eclipse environment. Does anyone know how I can link pycuda and eclipse IDE? Thanks in Adanced
PyCuda and Eclipse
0
0
0
284
13,809,013
2012-12-10T20:51:00.000
1
1
1
0
python,ruby,jvm,stack
13,810,563
3
false
1
0
Good question. In Smalltalk, yes. Actually, in Smalltalk, dumping the whole program and restarting is the only way to store and share programs. There are no source files, and there is no way of starting a program from square zero. So in Smalltalk you would get your feature for free. The Smalltalk VM offers a hook where each object can register to restore its external resources after a restart, like reopening files and internet connections. But also, for example, integer arrays are registered with that hook to change the endianness of their values in case the dump has been moved to a machine with different endianness. This might give a hint of how difficult (or not) it might turn out to be to achieve this in a language which does not support resumable dumps by design. All other languages are, alas, much less live. Except for some Lisp implementations, I would not know of any language which supports resuming from a memory dump. Which is a missed opportunity.
1
6
0
I'm just curious, is it possible to dump all the variables and current state of the program to a file, and then restore it on a different computer?! Let's say that I have a little program in Python or Ruby, given a certain condition, it would dump all the current variables, and current state to a file. Later, I could load it again, in a different machine, and return to it. Something like VM snapshot function. I've seen here a question like this, but Java related, saving the current JVM and running it again in a different JVM. Most of the people told that there was nothing like that, only Terracotta had something, still, not perfect. Thank you. To clarify what I am trying to achieve: Given 2 or more Raspberry Pi's, I'm trying to run my software at Pi nº1, but then, when I need to do something different with it, I need to move the software to Pi nº2 without dataloss, only a minor break time. And so on, to an unlimited number of machines.
Saving the stack?
0.066568
0
0
156
13,811,575
2012-12-11T00:13:00.000
1
1
0
0
python,dependency-management
13,811,685
2
false
0
0
The path where your modules are installed is probably normally sourced by .bashrc or something similar. .bashrc doesn't get sourced when it's not an interactive shell. /etc/profile is one place you can put system-wide path changes. Depending on the Linux version/distro, it may use /etc/profile.d/, in which case /etc/profile runs all the scripts in /etc/profile.d; add a new shell script there with execute permissions and a .sh extension.
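A sketch of what such a profile.d script could look like. The filename and module directory are hypothetical; point MODULE_DIR at wherever pynetlinux is actually installed on your system:

```shell
# Hypothetical /etc/profile.d/local-python.sh
# Prepends a module directory to PYTHONPATH for all login shells.
MODULE_DIR="${MODULE_DIR:-/usr/local/lib/mymodules}"
export PYTHONPATH="$MODULE_DIR${PYTHONPATH:+:$PYTHONPATH}"
```

Note that rc.local itself does not read profile scripts either, so for that specific case the simplest fix may be to set PYTHONPATH inline on the rc.local line that launches the Python script.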
1
2
0
I'm facing a strange issue, and after a couple of hours of research I'm looking for help / an explanation. It's quite simple: I wrote a CGI server in Python and I'm working with some libs, including pynetlinux for instance. When I start the script from a terminal with any user, it works fine: no bug, no dependency issue. But when I try to start it using a script in rc.local, the following code produces an error. import sys, cgi, pynetlinux, logging It produces the following error : Traceback (most recent call last): File "/var/simkiosk/cgi-bin/load_config.py", line 3, in import cgi, sys, json, pynetlinux, loggin ImportError: No module named pynetlinux Other dependencies produce similar issues. I suspect a few things, like the user executing the script in rc.local (normally root), and I have tried some things found on the web without success. Can somebody help me? Thanks in advance. Regards. Ollie314
python scripts issue (no module named ...) when starting in rc.local
0.099668
0
0
1,890
13,815,107
2012-12-11T06:55:00.000
0
0
0
0
wxpython
13,878,574
1
false
0
1
After some investigation, I solved this problem myself. Though FoldPanelBar doesn't play well with others in a sizer, if it's the only control in the sizer, it works. Weird.
1
0
0
In one of my applications, I use AuiManager to manage all the different parts of the UI. Now I have started to use FoldPanelBar in one of the AUI panes. At first, I changed part of the panel to use FoldPanelBar, with the bar laid out alongside the other controls using a BoxSizer, but it didn't function correctly as the window was resized. Then I moved all the different controls in the panel into the FoldPanelBar and made the bar the only control of the panel (no sizers anymore). But the FoldPanelBar still doesn't resize. Do you know why? Thanks.
Resize FoldPanelBar in AuiManager
0
0
0
109
13,817,662
2012-12-11T09:55:00.000
0
0
1
1
python,linux,unix,gcc,cross-compiling
13,817,769
1
false
0
0
I would assume that it's better to build on the OS itself, rather than "cross compile". Although since this is all Unix, cross-compiling might very well work as well, with a bit of effort. But it's probably easier to just build the binaries on the OS in question. I guess that also depends on whether you link statically or not. Python's build process will itself select the best compiler, and it will prefer gcc to cc, at least in most cases.
1
2
0
I am looking at building Python (version 2.7) from source for various Unix-like OSes, including SUSE (Desktop, Server), RHEL (Desktop, Server), Ubuntu, AIX, and Solaris (SPARC). Also, some of these OSes might need both 32-bit and 64-bit builds. I also want to minimize dependencies on (shared) libraries. That said, is it better to use the native C compiler (cc) wherever available, as opposed to gcc? Is it better to cross-compile? Thanks.
C compiler to build python from sources on various unix flavors
0
0
0
287
13,817,697
2012-12-11T09:57:00.000
2
1
1
0
python,cgi
13,818,037
1
true
0
0
There's no way your code could ever accumulate a list of keywords over multiple posts. Firstly, CGI scripts have no state, so they will start from a blank list each time. And even if that weren't true, you explicitly reset keywords to the blank list each time anyway. You will need to store the list somewhere between runs. A text file will work, but only if you can guarantee that only one user will be accessing it at any one time. Since you're new to CGI scripts, I've no idea why you are trying to learn them; there's very little good reason to use them these days. Really, you should drop the CGI scripts, use a web framework (a micro-framework like Flask would suit you), and store the list in a database (again, an unstructured "NoSQL" store might be good for you).
1
1
0
Basically what I am trying to accomplish is have users be able to type in a certain word on one cgi script (which I currently have) and then it will save that entry in a list and display that word and the whole list on the other page. Also I will save it into a .txt file but first I am trying to figure out how to display the whole list. Right now it is only showing the keyword the user enters.
Have users enter a keyword from one cgi script and save that information in a list/txtfile on another cgi script
1.2
0
0
59
13,819,496
2012-12-11T11:33:00.000
1
0
0
1
python,linux,python-2.7
71,559,746
3
false
0
0
makedirs() is a recursive directory creation function: like mkdir(), but it makes all the intermediate-level directories needed to contain the leaf directory. It raises an OSError if the leaf directory already exists or cannot be created.
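A small self-contained sketch of the difference, using a throwaway temporary directory:

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
try:
    # os.mkdir creates exactly one directory; the parent must already exist.
    os.mkdir(os.path.join(base, "one"))

    # This fails, because the intermediate dirs "a" and "b" don't exist yet.
    try:
        os.mkdir(os.path.join(base, "a", "b", "c"))
    except OSError:
        pass  # expected: missing intermediate directories

    # os.makedirs creates every missing intermediate directory as well.
    os.makedirs(os.path.join(base, "a", "b", "c"))
    assert os.path.isdir(os.path.join(base, "a", "b", "c"))
finally:
    shutil.rmtree(base)  # clean up the whole tree
```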
1
86
0
I am confused to use about these two osmethods to create the new directory. Please give me some example in Python.
What is different between makedirs and mkdir of os?
0.066568
0
0
63,432
13,820,124
2012-12-11T12:10:00.000
0
0
1
0
python,python-3.x,iterator,iterable
13,820,681
5
false
0
0
Now that I understand the question: from itertools import islice; a = {'A','B','C','D'}; list(zip(a, islice(a, 1, None))) yields the consecutive pairs, e.g. [('A', 'C'), ('C', 'B'), ('B', 'D')] (the exact pairs depend on the set's arbitrary iteration order).
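The islice trick only works on containers you can iterate twice; a sketch of the more general tee-based pairwise recipe, with the set sorted first so the output is reproducible:

```python
from itertools import tee

def path_pairs(iterable):
    """Yield consecutive overlapping pairs: (x0, x1), (x1, x2), ..."""
    a, b = tee(iterable)   # two independent iterators over the same data
    next(b, None)          # advance the second one by a single step
    return zip(a, b)       # pair them up; stops at the shorter iterator

# A set has arbitrary order, so fix an order first if you need one.
points = sorted({'A', 'B', 'C', 'D'})
print(list(path_pairs(points)))   # [('A', 'B'), ('B', 'C'), ('C', 'D')]
```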
1
1
0
Suppose I have a set {a, b, c, d}. I want to create a "path" from it, which is a generator that yields (a, b), then (b, c), then (c, d) (of course set is unordered, so any other path through the elements is acceptable). What is the best way to do this?
Generating a path from an iterable of points
0
0
0
114
13,823,554
2012-12-11T15:52:00.000
0
0
0
0
python,elasticsearch
14,884,970
2
false
0
0
It sounds like you have an issue unrelated to the client. If you can pare down what's being sent to ES and represent it in a simple curl command it will make what's actually running slowly more apparent. I suspect we just need to tweak your query to make sure it's optimal for your context.
2
4
0
I'm writing some scripts for our sales people to query an index with elastic search through python. (Eventually the script will update lead info in our Salesforce DB.) I have been using the urllib2 module, with simplejson, to pull results. The problem is that this seems to be a not-so-good approach, evidenced by scripts which are taking longer and longer to run. Questions: Does anyone have any opinions (opinions, on the internet???) about Elastic Search clients for Python? Specifically, I've found pyes and pyelasticsearch, via elasticsearch.org---how do these two stack up? How good or bad is my current approach of dynamically building the query and running it via self.raw_results = simplejson.load(urllib2.urlopen(self.query))? Any advice is greatly appreciated!
Elastic search client for Python: advice?
0
0
1
748
13,823,554
2012-12-11T15:52:00.000
2
0
0
0
python,elasticsearch
14,870,170
2
true
0
0
We use pyes, and it's pretty neat. With it you can also go with the Thrift protocol, which is faster than the REST service.
2
4
0
I'm writing some scripts for our sales people to query an index with elastic search through python. (Eventually the script will update lead info in our Salesforce DB.) I have been using the urllib2 module, with simplejson, to pull results. The problem is that this seems to be a not-so-good approach, evidenced by scripts which are taking longer and longer to run. Questions: Does anyone have any opinions (opinions, on the internet???) about Elastic Search clients for Python? Specifically, I've found pyes and pyelasticsearch, via elasticsearch.org---how do these two stack up? How good or bad is my current approach of dynamically building the query and running it via self.raw_results = simplejson.load(urllib2.urlopen(self.query))? Any advice is greatly appreciated!
Elastic search client for Python: advice?
1.2
0
1
748
13,828,378
2012-12-11T20:56:00.000
0
0
0
0
python,sockets,flask,pipe
13,855,281
1
false
0
0
You can use a database layer in between. We faced the same issue and resolved it by sending the HTTP request data to a Redis instance, with the other application reading it from there; we wanted to monitor the HTTP requests. This way you can also achieve persistence of the data.
1
0
0
I have a program "A" written in Python that traps HTTP Requests/Responses going from my browser to the internet and back. I want to display those HTTP Requests/Responses on a web app (program "B") I'm building using Flask. What is the best way to 'send' the captured data from program 'A' to 'B' Right now, I'm creating two pipes from program "B" and instantiating the main object in program "A", once I have data to display...program "A" writes it out to a pipe and program "B" reads/displays it. This does not seem to be working consistently and I'm seeing data encoding issues as well. Before investing additional time on this approach, I wanted to get your thoughts on this. Is this the best way to communicate between program 'A' and 'B'? or are there others?
How do I send data between two programs in Python?
0
0
0
587
13,829,962
2012-12-11T23:01:00.000
1
0
1
0
math,python-3.2
13,848,408
1
false
0
0
Someone removed their answer before I could accept it, so I'm writing mine, which is no more than a summary. Two options that I liked: In this particular case, since the operation was math.log(1000, 10), it can be replaced with math.log10(1000), which has much greater precision. In the more general case, round(math.log(1000, 10)) will round 2.999... to the integer 3, which is closer to what was asked.
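Both options in one short sketch; the comment values reflect typical IEEE-754 behaviour:

```python
import math

# math.log(x, base) computes log(x)/log(base) in floating point, so the
# result can land just below the exact integer, e.g. 2.9999999999999996.
approx = math.log(1000, 10)

# Option 1: a dedicated, more precise function for base 10.
print(math.log10(1000))        # 3.0 exactly

# Option 2: round to the nearest integer in the general case.
print(round(approx))           # 3

# Plain int() truncates toward zero, which is why it gives 2 here:
print(int(2.9999999998))       # 2
```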
1
2
0
In python the function math.log(1000, 10) returns 2.9999999998 or some approximate value (neraly every third integer does that) Which firstly is kind of messed up even though I imagine there's not much (except divisibility tests) to do about it. And secondly it's not the value I want of course, how should I proceed? Casting to int will clearly return 2 and not 3... So what method is used to get the round to nearest int? In this case and in general, please.
python integer approximation
0.197375
0
0
649
13,830,334
2012-12-11T23:36:00.000
1
0
1
0
python,variables
13,830,530
3
false
0
0
You need to store it on disk. Unless you want to be really fancy, you can just use something like CSV, JSON, or YAML to make the structured data easier. Also check out the Python pickle module.
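A minimal sketch of the pickle approach for the page-tracker: the file path and the {book: page} layout are my own choices, not anything the question prescribes.

```python
import os
import pickle
import tempfile

# Example location; a real app would pick somewhere more permanent.
STATE_FILE = os.path.join(tempfile.gettempdir(), "book_pages.pickle")

def load_pages():
    """Return the saved {book: page} dict, or an empty one on first run."""
    try:
        with open(STATE_FILE, "rb") as f:
            return pickle.load(f)
    except (OSError, EOFError):
        return {}

def save_pages(pages):
    """Persist the dict so it survives between program runs."""
    with open(STATE_FILE, "wb") as f:
        pickle.dump(pages, f)

pages = load_pages()
pages["Moby Dick"] = 217   # the user's input would go here
save_pages(pages)
```

Next time the program starts, `load_pages()` returns the same dict, so the "variable" effectively persists.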
1
6
0
I'm writing a small program that helps you keep track of what page you're on in your books. I'm not a fan of bookmarks, so I thought "What if I could create a program that would take user input, and then display the number or string of text that they wrote, which in this case would be the page number they're on, and allow them to change it whenever they need to?" It would only take a few lines of code, but the problem is, how can I get it to display the same number the next time I open the program? The variables would reset, would they not? Is there a way to permanently change a variable in this way?
Change variable permanently
0.066568
0
0
5,917
13,830,557
2012-12-11T23:57:00.000
4
0
1
0
python,multithreading,user-interface,wxpython
13,830,881
1
true
0
1
Putting the thumbnail generation in a background thread with threading.Thread will solve your first problem, making the program usable. If you want a way to interrupt it, the usual way is to add a "stop" variable which the background thread checks every so often (e.g., once per thumbnail), and the GUI thread sets when it wants to stop it. Ideally you should protect this with a threading.Condition. (The condition isn't actually necessary in most cases—the same GIL that prevents your code from parallelizing well also protects you from certain kinds of race conditions. But you shouldn't rely on that.) For the third problem, the first question is: Is thumbnail generation actually CPU-bound? If you're spending more time reading and writing images from disk, it probably isn't, so there's no point trying to parallelize it. But, let's assume that it is. First, if you have N cores, you want a pool of N threads, or N-1 if the main thread has a lot of work to do too, or maybe something like 2N or 2N-1 to trade off a bit of best-case performance for a bit of worst-case performance. However, if that CPU work is done in Python, or in a C extension that nevertheless holds the Python GIL, this won't help, because most of the time, only one of those threads will actually be running. One solution to this is to switch from threads to processes, ideally using the standard multiprocessing module. It has built-in APIs to create a pool of processes, and to submit jobs to the pool with simple load-balancing. The problem with using processes is that you no longer get automatic sharing of data, so that "stop flag" won't work. You need to explicitly create a flag in shared memory, or use a pipe or some other mechanism for communication instead. The multiprocessing docs explain the various ways to do this. You can actually just kill the subprocesses. However, you may not want to do this. 
First, unless you've written your code carefully, it may leave your thumbnail cache in an inconsistent state that will confuse the rest of your code. Also, if you want this to be efficient on Windows, creating the subprocesses takes some time (not as in "30 minutes" or anything, but enough to affect the perceived responsiveness of your code if you recreate the pool every time a user clicks a new folder), so you probably want to create the pool before you need it, and keep it for the entire life of the program. Other than that, all you have to get right is the job size. Hopefully creating one thumbnail isn't too big of a job—but if it's too small of a job, you can batch multiple thumbnails up into a single job—or, more simply, look at the multiprocessing API and change the way it batches jobs when load-balancing. Meanwhile, if you go with a pool solution (whether threads or processes), if your jobs are small enough, you may not really need to cancel. Just drain the job queue—each worker will finish whichever job it's working on now, but then sleep until you feed in more jobs. Remember to also drain the queue (and then maybe join the pool) when it's time to quit. One last thing to keep in mind is that if you successfully generate thumbnails as fast as your computer is capable of generating them, you may actually cause the whole computer—and therefore your GUI—to become sluggish and unresponsive. This usually comes up when your code is actually I/O bound and you're using most of the disk bandwidth, or when you use lots of memory and trigger swap thrash, but if your code really is CPU-bound, and you're having problems because you're using all the CPU, you may want to either use 1 fewer core, or look into setting thread/process priorities.
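A small sketch of the stop-flag pattern described above, using a threading.Event as the flag (the thumbnail "work" is simulated with a sleep; everything else about the worker is hypothetical):

```python
import threading
import time

stop_event = threading.Event()

def generate_thumbnails(paths, done):
    """Hypothetical worker: checks the stop flag between thumbnails."""
    for path in paths:
        if stop_event.is_set():
            return                 # user switched folders: bail out cleanly
        time.sleep(0.01)           # stand-in for real thumbnail generation
        done.append(path)

done = []
worker = threading.Thread(
    target=generate_thumbnails,
    args=(["img%d.jpg" % i for i in range(100)], done))
worker.start()

time.sleep(0.05)     # let it process a few thumbnails
stop_event.set()     # what the GUI thread does on a folder change
worker.join()
print("processed %d of 100 before stopping" % len(done))
```

To resume in a new folder, `stop_event.clear()` and start a fresh worker (or reuse a pooled one).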
1
0
0
My wx GUI shows thumbnails, but they're slow to generate, so: The program should remain usable while the thumbnails are generating. Switching to a new folder should stop generating thumbnails for the old folder. If possible, thumbnail generation should make use of multiple processors. What is the best way to do this?
Python: Interruptable threading in wx
1.2
0
0
155
13,832,095
2012-12-12T02:59:00.000
1
0
1
0
python
13,832,200
3
false
0
0
While the approach of @Makato is certainly right, for your 'diff'-like application you want to capture the stat() information of the files in your directory and pickle that Python object from day to day, looking for updates; this is one way to do it (maybe overkill) but, IMO, more suitable than saving to and parsing text files.
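A sketch of that idea: snapshot the directory's per-file stat() data, and diff two snapshots to find what was added. The snapshot shape (filename mapped to size and mtime) is my own choice; between runs you would pickle the snapshot dict as suggested above.

```python
import os

def snapshot(directory):
    """Map filename -> (size, mtime) for every regular file in directory."""
    result = {}
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            st = os.stat(path)
            result[name] = (st.st_size, st.st_mtime)
    return result

def added_files(old, new):
    """Filenames present in the new snapshot but not the old one."""
    return sorted(set(new) - set(old))
```

Pickle yesterday's snapshot to disk, load it today, take a fresh snapshot, and `added_files(old, new)` gives the day's additions.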
1
2
0
Is it possible for the user to input a specific directory on their computer, and for Python to write the all of the file/folder names to a text document? And if so, how? The reason I ask is because, if you haven't seen my previous posts, I'm learning Python, and I want to write a little program that tracks how many files are added to a specific folder on a daily basis. This program isn't like a project of mine or anything, but I'm writing it to help me further learn Python. The more I write, the more I seem to learn. I don't need any other help than the original question so far, but help will be appreciated! Thank you! EDIT: If there's a way to do this with "pickle", that'd be great! Thanks again!
PYTHON - Search a directory
0.066568
0
0
7,504
13,838,231
2012-12-12T10:58:00.000
2
0
0
0
python,postgresql,transactions,commit,psycopg2
13,849,917
2
false
0
0
If you are committing your transactions at every 5000-record interval, it seems like you could do a little preprocessing of your input data and actually break it into a list of 5000-record chunks, i.e. [[[row1_data],[row2_data]...[row4999_data]],[[row5000_data],[row5001_data],...],[[....[row1000000_data]]] Then run your inserts, keeping track of which chunk you are processing as well as which record you are currently inserting. When you get the error, you rerun the chunk, but skip the offending record.
2
1
0
I am using psycopg2 in Python, but my question is DBMS-agnostic (as long as the DBMS supports transactions): I am writing a Python program that inserts records into a database table. The number of records to be inserted is more than a million. When I wrote my code so that it ran a commit on each insert statement, my program was too slow. Hence, I altered my code to run a commit every 5000 records, and the difference in speed was tremendous. My problem is that at some point an exception occurs when inserting records (some integrity check fails) and I wish to commit my changes up to that point, except of course for the last command that caused the exception, and continue with the rest of my insert statements. I haven't found a way to achieve this; the only thing I've achieved was to capture the exception, roll back my transaction and continue from that point, where I lose my pending insert statements. Moreover, I tried (deep)copying the cursor object and the connection object without any luck, either. Is there a way to achieve this functionality, either directly or indirectly, without having to roll back and recreate/re-run my statements? Thank you all in advance, George.
How can I commit all pending queries until an exception occurs in a python connection object
0.197375
1
0
1,119
13,838,231
2012-12-12T10:58:00.000
3
0
0
0
python,postgresql,transactions,commit,psycopg2
13,838,751
2
true
0
0
I doubt you'll find a fast cross-database way to do this. You just have to optimize the balance between the speed gains from batch size and the speed costs of repeating work when an entry causes a batch to fail. Some DBs can continue with a transaction after an error, but PostgreSQL can't. However, it does allow you to create subtransactions with the SAVEPOINT command. These are far from free, but they're lower cost than a full transaction. So what you can do is every (say) 100 rows, issue a SAVEPOINT and then release the prior savepoint. If you hit an error, ROLLBACK TO SAVEPOINT, commit, then pick up where you left off.
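The SAVEPOINT pattern can be sketched with the stdlib sqlite3 module, which accepts the same SAVEPOINT / ROLLBACK TO / RELEASE commands; with psycopg2 you would execute the same SQL against PostgreSQL and catch `psycopg2.IntegrityError`. A savepoint per row keeps the sketch short — per the answer, one every ~100 rows is the cheaper real-world balance:

```python
import sqlite3

# Autocommit mode: we issue BEGIN/COMMIT ourselves so the savepoints
# nest inside one explicit transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

rows = [1, 2, 3, 3, 4]  # the duplicate 3 violates the primary key
cur.execute("BEGIN")
for row in rows:
    cur.execute("SAVEPOINT sp")  # cheap subtransaction
    try:
        cur.execute("INSERT INTO t VALUES (?)", (row,))
        cur.execute("RELEASE SAVEPOINT sp")
    except sqlite3.IntegrityError:
        cur.execute("ROLLBACK TO SAVEPOINT sp")  # undo only the bad insert
        cur.execute("RELEASE SAVEPOINT sp")
cur.execute("COMMIT")

print([r[0] for r in cur.execute("SELECT id FROM t ORDER BY id")])  # [1, 2, 3, 4]
```

The transaction survives the failed insert: only the subtransaction around the duplicate row is rolled back, and the good rows before and after it are committed together.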
2
1
0
I am using psycopg2 in python, but my question is DBMS agnostic (as long as the DBMS supports transactions): I am writing a python program that inserts records into a database table. The number of records to be inserted is more than a million. When I wrote my code so that it ran a commit on each insert statement, my program was too slow. Hence, I altered my code to run a commit every 5000 records and the difference in speed was tremendous. My problem is that at some point an exception occurs when inserting records (some integrity check fails) and I wish to commit my changes up to that point, except of course for the last command that caused the exception to happen, and continue with the rest of my insert statements. I haven't found a way to achieve this; the only thing I've achieved was to capture the exception, roll back my transaction and keep on from that point, where I lose my pending insert statements. Moreover, I tried (deep)copying the cursor object and the connection object without any luck, either. Is there a way to achieve this functionality, either directly or indirectly, without having to roll back and recreate/re-run my statements? Thank you all in advance, George.
How can I commit all pending queries until an exception occurs in a python connection object
1.2
1
0
1,119
13,840,379
2012-12-12T13:00:00.000
-2
0
1
0
python,list,multiplication
61,343,262
17
false
0
0
The simplest method, if you want to see the logic spelled out, is a plain for loop: start with x = 1, then for each i in the list do x = i * x, and finally print(x).
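The loop idea in runnable form, plus the idiomatic one-liners that do the same job (math.prod needs Python 3.8+):

```python
from functools import reduce
import math
import operator

nums = [1, 2, 3, 4, 5, 6]

# Explicit loop, as described above.
product = 1
for i in nums:
    product = i * product
print(product)  # 720

# Idiomatic equivalents.
print(reduce(operator.mul, nums, 1))  # 720
print(math.prod(nums))                # 720 (Python 3.8+)
```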
1
250
0
I need to write a function that takes a list of numbers and multiplies them together. Example: [1,2,3,4,5,6] will give me 1*2*3*4*5*6. I could really use your help.
How can I multiply all items in a list together with Python?
-0.023525
0
0
299,911
13,841,206
2012-12-12T13:46:00.000
3
1
0
0
python,apache,mod-wsgi
13,841,885
2
true
0
0
I would go the micro-framework approach just in case your requirements change - and you never know, it may end up being an app rather than just a basic dump... Perhaps the simplest (and old-fashioned!) way is using CGI: Duplicate your script and include print 'Content-Type: text/plain\n' before any other output to sys.stdout Put that script somewhere apache2 can access it (your cgi-bin for instance) Make sure the script is executable Make sure .py is added to the Apache CGI handler But - I don't see any way this is going to be a fantastic advantage (in the long run at least)
1
1
0
I have a Python script that I'd like to be run from the browser; mod_wsgi seems to be the way to go, but the method feels too heavyweight and would require modifications to the script for the output. I guess I'd like a PHP-style approach ideally. The script doesn't take any input and will only be accessible on an internal network. I'm running Apache on Linux with mod_wsgi already set up; what are the options here?
Webserver to serve Python script
1.2
0
0
379
13,841,296
2012-12-12T13:52:00.000
2
0
0
0
python,numpy,fft
19,329,962
2
false
0
0
In my experience the algorithms don't do automatic padding, or at least some of them don't. For example, running the scipy.signal.hilbert method on a signal whose length was not a power of two took about 45 seconds. When I padded the signal myself with zeros to such a length, it took 100ms. YMMV, but it's something to double-check basically any time you run a signal processing algorithm.
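numpy's FFT does handle arbitrary N (falling back to slower mixed-radix code paths), but the explicit zero-padding that fixed the hilbert case above is easy to do by hand — a sketch:

```python
def next_pow2(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p *= 2
    return p

signal = [0.5] * 1000000  # a length that is not a power of two
pad = next_pow2(len(signal)) - len(signal)
padded = signal + [0.0] * pad
print(len(padded))  # 1048576 == 2**20
```

With numpy you would pass the padded length directly, e.g. `np.fft.fft(x, n=next_pow2(len(x)))`, and remember that padding changes the frequency-bin spacing of the result.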
1
4
1
My question is about the algorithm which is used in Numpy's FFT function. The documentation of Numpy says that it uses the Cooley-Tukey algorithm. However, as you may know, this algorithm works only if the number N of points is a power of 2. Does numpy pad my input vector x[n] in order to calculate its FFT X[k]? (I don't think so, since the number of points I have in the output is also N). How could I actually "see" the code which is used by numpy for its FFT function? Cheers!
FFT in Numpy (Python) when N is not a power of 2
0.197375
0
0
6,896
13,842,774
2012-12-12T15:15:00.000
7
0
1
0
python,pycharm,console
44,884,775
4
false
0
0
On Windows you can use Ctrl+Alt+E
2
19
0
PyCharm keeps a command history for its interactive Python console. Is there a way to access this history with some sort of search instead of just browsing the entries with the arrow keys in the interactive console window? My environment: PyCharm 2.7 EAP (124.138) on MacOS X 10.8.2.
Search in PyCharm interactive console command history
1
0
0
11,610
13,842,774
2012-12-12T15:15:00.000
18
0
1
0
python,pycharm,console
22,051,978
4
true
0
0
As of PyCharm 3.1, while in the console, press ⌥⌘e, the "Browse History" window will come up, start typing to search for specific commands
2
19
0
PyCharm keeps a command history for its interactive Python console. Is there a way to access this history with some sort of search instead of just browsing the entries with the arrow keys in the interactive console window? My environment: PyCharm 2.7 EAP (124.138) on MacOS X 10.8.2.
Search in PyCharm interactive console command history
1.2
0
0
11,610
13,843,907
2012-12-12T16:12:00.000
-2
0
0
1
python,google-app-engine,google-cloud-datastore,google-search
20,389,173
3
false
1
0
Looks like this is not an issue anymore. According to the documentation (and my tests): "The development web server simulates the App Engine datastore using a file on your computer. This file persists between invocations of the web server, so data you store will still be available the next time you run the web server." Please let me know if it is otherwise and I will follow up on that.
2
8
0
Is there any way of forcing the GAE dev server to keep full text search indexes after restart? I am finding that the index is lost whenever the dev server is restarted. I am already using a static datastore path when I launch the dev server (the --datastore_path option).
GAE development server keep full text search indexes after restart?
-0.132549
0
0
1,909
13,843,907
2012-12-12T16:12:00.000
2
0
0
1
python,google-app-engine,google-cloud-datastore,google-search
13,849,805
3
false
1
0
This functionality was added a few releases ago (in either 1.7.1 or 1.7.2, I think). If you're using an SDK from the last few months it should be working. You can try explicitly setting the --search_indexes_path flag on dev_appserver.py; it's possible that the default location (/tmp/) isn't writable. Could you post the first few lines of the logs from when you start dev_appserver?
2
8
0
Is there any way of forcing the GAE dev server to keep full text search indexes after restart? I am finding that the index is lost whenever the dev server is restarted. I am already using a static datastore path when I launch the dev server (the --datastore_path option).
GAE development server keep full text search indexes after restart?
0.132549
0
0
1,909
13,844,158
2012-12-12T16:27:00.000
2
0
1
0
python,sorting,datetime,python-2.7
13,844,460
3
false
0
0
Find and replace all instances of Monday with 1, Tuesday with 2, etc., sort, then reassign.
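That day-to-number mapping works neatly as a sort key, so no replace/reassign round-trip is needed — one sketch:

```python
order = {"Mon": 0, "Tue": 1, "Wed": 2, "Thu": 3, "Fri": 4, "Sat": 5, "Sun": 6}

days = ["Tue", "Wed", "Mon", "Thu", "Fri"]
days.sort(key=order.get)  # compare days by their number, not alphabetically
print(days)  # ['Mon', 'Tue', 'Wed', 'Thu', 'Fri']
```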
1
4
0
I have the list ["Tue", "Wed", "Mon", "Thu", "Fri"] and I want to turn it into ["Mon", "Tue", "Wed", "Thu", "Fri"]. How do I sort it?
Sort week day texts
0.132549
0
0
9,557
13,845,981
2012-12-12T18:10:00.000
1
0
1
0
python,dictionary
13,846,048
3
false
0
0
Since dictionaries can have keys of multiple types, and you are using names (strings only) as one key and numbers (integers only) as another, you can simply make two separate entries point to the same object - one for the number, and one for the string: d[0] = d['key'] = object1
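In terms of the asker's XML objects the two-entries-one-object idea looks like this; `XmlNode` is a hypothetical stand-in for the real parser class:

```python
class XmlNode:
    """Minimal stand-in for the asker's XML object."""
    def __init__(self, name, line_number):
        self.name = name
        self.line_number = line_number

index = {}
node = XmlNode("book", 17)
# Two keys, one object: lookup works by name or by line number.
index[node.name] = index[node.line_number] = node

print(index["book"] is index[17])  # True
```

Both entries reference the same object, so a change made through one key is visible through the other. The caveat is deletion and renaming: removing or rekeying the node means updating both entries, which is why some designs keep two separate dicts (by_name, by_line) instead.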
1
0
0
I am writing my own function for parsing XML text into objects which is can manipulate and render back into XML text. To handle the nesting, I am allowing XML objects to contain other XML objects as elements. Since I am automatically generating these XML objects, my plan is to just enter them as elements of a dict as they are created. I was planning on generating an attribute called name which I could use as the key, and having the XML object itself be a value assigned to that key. All this makes sense to me at this point. But now I realize that I would really like to also save an attribute called line_number, which would be the line from the original XML file where I first encountered the object, and there may be some cases where I would want to locate an XML object by line_number, rather than by name. So these are my questions: Is it possible to use a dict in such a way that I could find my XML object either by name or by line number? That is, is it possible to have multiple keys assigned to a single value in a dict? How do I do that? If this is a bad idea, what is a better way?
Is it possible to use multiple keys for a single element in a dict?
0.066568
0
1
174
13,846,155
2012-12-12T18:22:00.000
1
1
1
1
python,egg,python-internals
13,846,221
1
true
0
0
No, that is not a bug. Eggs, when being created, have their bytecode compiled in a build/bdist.<platform>/egg/ path, and you see that reflected in the co_filename variable. The bdist stands for binary distribution.
1
0
0
When tracing (using sys.settrace) the execution of a Python .egg under the Python 2.7 interpreter, frame.f_code.co_filename equals something like build/bdist.linux-x86_64/egg/<path-inside-egg> instead of <path-to-egg>/<path-inside-egg>. Is it a bug? And how do I reveal the real path to the egg? In Python 2.6 and Python 3 everything works as expected.
Strange co_filename for file from .egg during tracing in Python 2.7
1.2
0
0
84
13,849,503
2012-12-12T22:06:00.000
0
0
1
0
python
13,849,588
3
false
0
0
s = int(raw_input('--> ')) — if the user enters 10, a simple for loop running s times will loop 10 times. Create a list or dictionary to hold the inputs, rather than trying to create s separate variables.
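A sketch of that loop, collecting n (title, page) pairs into a dict instead of n randomly named variables. The canned answers keep it runnable without a terminal; in a real program each `next(answers)` would be a `raw_input()`/`input()` call:

```python
# Canned user answers for the sketch: title, page, title, page, ...
answers = iter(["Dune", "42", "Hobbit", "7"])

n = 2  # int(raw_input("How many books? "))
bookmarks = {}
for _ in range(n):
    title = next(answers)      # raw_input("Book title: ")
    page = int(next(answers))  # int(raw_input("Current page: "))
    bookmarks[title] = page

print(bookmarks)  # {'Dune': 42, 'Hobbit': 7}
```

The dict plays the role of the "created variables": each book title becomes a key, and the whole dict can later be saved with pickle and read back.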
1
0
0
Is there a way to create a variable after user input? Multiple ones, rather. For example, I'm writing a simple little program that records what page you're on in a book, but I'd like it to record as many books as the user needs. So, for example, the program would ask a user how many books they would like to record bookmarks for. If the user inputs 5, it would create 5 randomly named variables, and be able to read them back at a later time. I should be fine writing everything on my own, but I would need to know, is there a way to create a variable from user input, and if so, how?
Create variable at user input
0
0
0
240
13,849,618
2012-12-12T22:14:00.000
0
0
1
0
python-3.x
13,849,871
5
false
0
0
This question is very easy; I don't want to just tell you the answer because you can look it up. Look at the documentation for the built-in function range(). Since you are using Python 3.x, you will need to explicitly force the output of range() to expand out to a list by wrapping it with a call to list().
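For reference, the range()-based solution looks like this — the +1 is what makes the upper bound inclusive, since range() normally stops one short of it:

```python
def integers(a, b):
    """List of integers from a to b, both inclusive."""
    return list(range(a, b + 1))

print(integers(2, 5))  # [2, 3, 4, 5]
```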
1
6
0
Create a list of integers from a (inclusive) to b (inclusive). Example: integers(2,5) returns [2, 3, 4, 5]. I know this is probably an easy one, but I just can't seem to get anything to work.
create a list of integers from a to b in python
0
0
0
18,130
13,849,627
2012-12-12T22:15:00.000
0
0
0
0
python,gstreamer
13,855,922
2
false
0
0
You'd see more artifacts when recompressing with a lower bit rate, although that is a general thing and not related to seeking. Doing accurate seeks would only cause more CPU load, as GStreamer would still seek to a keyframe and decode as fast as possible until it gets to the accurate position. In videoconferencing you might see artifacts when switching views, as it is common to have a low-bandwidth stream for the small preview of other conference participants, and when that is switched to large it looks crappy for a while until the sender has switched to a higher bit rate.
2
1
0
This is a strange question, I know. I am working on an art project right now and we are disappointed that when we arbitrarily seek around files we don't see any compression artifacts. The source files are a mix of mostly mp4 and avi files. The application will need to jump between files and randomly seek to different offsets in the timeline. Should I just build custom pipelines and tweak the buffers down to nothing? Is there a way to tell decodebin2 to seek directly and to ignore keyframes? I am open to non-gstreamer options but I'd prefer to stick to python.
How can I make glitchy video with Gstreamer?
0
0
0
253
13,849,627
2012-12-12T22:15:00.000
1
0
0
0
python,gstreamer
13,886,833
2
true
0
0
Simplest way is to introduce errors in the stream. Random bit/burst errors will lead to unpredictable glitches! If you are open to modifying a plugin like identity, insert it before the decoder and change it to insert random errors with some probability. Change the probability to your liking. If you can avoid destroying frame headers, you may get fancier glitches. Simpler solution: take your files and randomly insert errors into them with a program that reads them and writes new versions. Feed these versions to your program. The first method gives dynamic random effects and the latter static effects [the file, when run again, will give the same artifacts] :)
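The static second method is a few lines of Python — a sketch that flips random bits in a copy of the file's bytes while leaving an assumed header region alone. The header size, error count and in-memory bytes are all illustrative; real container headers vary by format:

```python
import random

def glitch(data, n_errors=8, skip_header=64, seed=1):
    """Return a copy of `data` with n_errors single-bit flips at distinct
    byte positions, leaving the first skip_header bytes untouched."""
    rng = random.Random(seed)  # fixed seed -> the same artifacts every run
    out = bytearray(data)
    for i in rng.sample(range(skip_header, len(out)), n_errors):
        out[i] ^= 1 << rng.randrange(8)  # flip one bit in this byte
    return bytes(out)

clean = bytes(range(256)) * 4  # stand-in for a video file's bytes
dirty = glitch(clean)
print(sum(a != b for a, b in zip(clean, dirty)))  # 8
```

For real files, read with `open(path, "rb").read()`, glitch, and write the result to a new file; raising `n_errors` or shrinking `skip_header` makes the output progressively less playable.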
2
1
0
This is a strange question, I know. I am working on an art project right now and we are disappointed that when we arbitrarily seek around files we don't see any compression artifacts. The source files are a mix of mostly mp4 and avi files. The application will need to jump between files and randomly seek to different offsets in the timeline. Should I just build custom pipelines and tweak the buffers down to nothing? Is there a way to tell decodebin2 to seek directly and to ignore keyframes? I am open to non-gstreamer options but I'd prefer to stick to python.
How can I make glitchy video with Gstreamer?
1.2
0
0
253
13,850,513
2012-12-12T23:32:00.000
0
0
0
1
python,macos
13,850,697
1
false
0
0
You can't do the second one directly, since those events are grabbed by other processes. You should look for an OS X-specific library for doing that and then write a Python wrapper around it.
1
2
0
I'll explain my question: is it possible to write a Python script which interacts with OS X architecture in a high-level way? For example, can I gain control over Mac OS X window resizing from a Python script? Are there modules for that? I'm not finding any. To push things even further, would I be able to control keyboard shortcuts too? I mean, with Python, could I write a script that opens a terminal window every time I type cmd + Enter, wherever I am at that moment, as if it were a system shortcut (Awesome WM style, if you know what I'm talking about)? Hope I've been clear.
Python interacting with OS X - is it possible?
0
0
0
466
13,851,739
2012-12-13T01:53:00.000
2
0
1
0
python,c,compiler-construction
13,851,766
1
true
0
0
There is no one universal answer that always works, but typically a language-to-language translator works in much the same way that a compiler does - it reads in the source code, builds up an internal representation of the program, then emits code in a target language. The key difference, though, is that a normal compiler usually outputs assembly (or some sort of bytecode), while a source-to-source translator outputs constructs in a different programming language. If you want to learn more about the key techniques involved in building such translators, you might want to read up on general compiler construction techniques. It's really cool! Hope this helps!
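A toy version of that read-parse-emit pipeline: Python's own ast module parses the source into a tree, and a recursive walk emits a different surface syntax (Lisp-ish prefix notation here, purely for illustration — real translators handle far more node types):

```python
import ast

OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def translate(node):
    """Emit a prefix-notation string for a parsed arithmetic expression."""
    if isinstance(node, ast.Expression):
        return translate(node.body)
    if isinstance(node, ast.BinOp):
        return "(%s %s %s)" % (OPS[type(node.op)],
                               translate(node.left), translate(node.right))
    if isinstance(node, ast.Constant):  # literals (Python 3.8+)
        return str(node.value)
    if isinstance(node, ast.Name):      # variables pass through unchanged
        return node.id
    raise ValueError("unsupported node: %r" % node)

print(translate(ast.parse("1 + 2 * x", mode="eval")))  # (+ 1 (* 2 x))
```

Note that this is not word-by-word replacement: the parser resolves precedence (the multiplication nests inside the addition), which is exactly what a keyword-substitution approach cannot do.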
1
0
0
I am curious as to how cross-compilers such as Pyjamas work. Do they simply have a list of keywords and replace each word, line by line, with the translated code? I want to understand. I apologize for my ignorance; I am just curious.
How does a programming language translator work?
1.2
0
0
415