Dataset schema (column, dtype, min, max; each record below lists its fields in this order):

Column                              Dtype           Min    Max
Q_Id                                int64           337    49.3M
CreationDate                        string length   23     23
Users Score                         int64           -42    1.15k
Other                               int64           0      1
Python Basics and Environment       int64           0      1
System Administration and DevOps    int64           0      1
Tags                                string length   6      105
A_Id                                int64           518    72.5M
AnswerCount                         int64           1      64
is_accepted                         bool            2 classes
Web Development                     int64           0      1
GUI and Desktop Applications        int64           0      1
Answer                              string length   6      11.6k
Available Count                     int64           1      31
Q_Score                             int64           0      6.79k
Data Science and Machine Learning   int64           0      1
Question                            string length   15     29k
Title                               string length   11     150
Score                               float64         -1     1.2
Database and SQL                    int64           0      1
Networking and APIs                 int64           0      1
ViewCount                           int64           8      6.81M
11,816,203
2012-08-05T11:56:00.000
1
0
1
0
python,image-processing,image-comparison
11,817,479
4
false
0
0
I can't give a ready-to-use answer, but I will point you in (I think) the right direction. A simple way of comparing two images is to make a hash of their binary representations and then see if those hashes are the same. One problem with this is choosing the hash function: you want one that has a low chance of collisions. The other is that an image file probably has metadata attached to the original binary information, so you will have to work out how to cut off that metadata in order to compare the images using only their binary info. Also, I don't know for sure, but the binary representation of an image encoded as JPEG is probably different from the same image encoded as PNG, so you should be aware of that.
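A minimal sketch of the hashing idea in this answer, assuming a modern Pillow install; the file names are placeholders. Hashing the decoded pixels sidesteps both the metadata and the container-format issues the answer warns about:

```python
import hashlib
from PIL import Image

def image_hash(path):
    # Hash the decoded pixel data rather than the raw file bytes, so that
    # metadata and container format (PNG vs JPEG) do not affect the result.
    with Image.open(path) as im:
        return hashlib.sha1(im.convert("RGB").tobytes()).hexdigest()

if image_hash("shot_a.png") == image_hash("shot_b.png"):
    print("The screenshots are pixel-identical")
```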
1
12
0
I need to compare two images that are screenshots of a software. I want to check if the two images are identical, including the numbers and letters displayed in the images. How can this be accomplished?
Compare Images in Python
0.049958
0
0
32,588
11,817,673
2012-08-05T15:30:00.000
2
0
1
0
python,serialization,nltk,corpus
11,834,156
2
true
0
0
In the terminology of the NLTK, a "corpus" is the whole collection, and can consist of multiple files. Sounds like you can store each forum session (what you call a "corpus") into a separate file, using a structured format that allows you to store metadata in the beginning of the file. The NLTK generally uses XML for this purpose, but it's not hard to roll your own corpus reader that reads a file header and then defers to PlainTextCorpusReader, or whatever standard reader best fits your file format. If you use XML, you'll also have to extend XMLCorpusReader and provide methods sents(), words(), etc.
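A minimal sketch of the header-line idea discussed in this Q&A, using JSON (ASCII-safe, unlike some pickle protocols) for the metadata; the file name and metadata keys are made up:

```python
import json

def write_corpus(path, metadata, text):
    with open(path, "w") as f:
        f.write(json.dumps(metadata) + "\n")  # first line: the metadata
        f.write(text)                         # rest of the file: the corpus

def read_metadata(path):
    # Reads only the first line, so a multi-megabyte corpus is never loaded.
    with open(path) as f:
        return json.loads(f.readline())

write_corpus("session_001.txt", {"fts_query": "foo", "rows": 123}, "corpus text here")
print(read_metadata("session_001.txt"))
```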
1
2
0
I have a huge database of forum data. I need to extract corpora from the database for NLP purposes. The extracting step has parameters (for example FTS queries), and I'd like to save the corpus with the parameter metadata on the file system. Some corpora will be dozens of megabytes large. What is the best way of saving a file with its metadata, so that I can read the metadata without loading the entire file? I am using the following technologies which might be relevant: PyQt, Postgres, Python, NLTK. Some notes: I want the corpus to be divorced from a heavyweight database. I'd prefer not to use sqlite, as the metadata is very simple in structure. Pickling doesn't allow partial unserialization from what I can tell. I'd prefer not to have a separate metadata file. I have experience with protocol buffers, but again it seems far too heavy-handed. I guess I could pickle the metadata to a string and have the first line of the file represent the metadata. This seems the simplest way, I think; that is, if the pickle format is ASCII-safe.
Serializing corpora with python
1.2
0
0
256
11,820,258
2012-08-05T21:34:00.000
0
0
0
0
python,opencv,wxpython,pygame
11,833,097
1
false
0
1
I don't know why you'd want to embed a gaming library into wxPython in the hopes of gaining a performance boost. Personally, I don't think that will happen. You should take a look at the currently supported drawing canvases that wxPython provides instead, or explain what you're trying to do. People have done games in wxPython... Anyway, the main drawing interfaces for wx today are wx.GCDC / wx.GraphicsContext, cairo, FloatCanvas, or GLCanvas. Of course, there are also wx.DC, wx.PaintDC and the one you found as well.
1
1
0
My goal is to capture frames from a webcam as efficiently as possible using OpenCV. At the moment I'm able to capture 30FPS 6408*480 drawing directly onto a wxPython panel using the standard drawing context (BufferedPaintDC), with about 15% CPU usage (older Core Duo processor). What I'm curious is what sort of performance boost (if any) I'll see if I embed a PyGame canvas within a wxPython frame, and draw directly to the PyGame canvas. What I'm not sure about is whether the bottleneck is the wxPython frame, and if embedding a PyGame canvas will actually do anything. Or does the wxPython frame act simply like a container and has no influence on the PyGame canvas? I'm hoping I'm making sense here. The other option would be to use PyGame exclusively, however I really like the functionality of the wxPython widgets, so I'd hate to lose that. Or is there a faster canvas that I can integrate into wxPython that I'm not aware of? Thoughts? Thanks.
wxPython, or PyGame canvas within wxPython -> which is faster?
0
0
0
528
11,820,366
2012-08-05T21:52:00.000
2
0
0
0
python,urlparse
11,820,400
2
false
0
0
The function urllib.urlencode is appropriate: it serializes a list of (name, value) pairs back into a query string, which you can then feed to urlunparse.
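A sketch of the round trip on Python 2, where these functions live in urlparse and urllib (in Python 3 they are all in urllib.parse); the URL and the removed parameter are placeholders:

```python
from urlparse import urlparse, parse_qsl, urlunparse
from urllib import urlencode

url = "http://example.com/path?a=1&b=2&c=3"
parts = urlparse(url)
# Drop the (name, value) pair we no longer want, then re-encode the rest.
query = [(k, v) for k, v in parse_qsl(parts.query) if k != "b"]
print(urlunparse(parts._replace(query=urlencode(query))))
# -> http://example.com/path?a=1&c=3
```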
1
5
0
In Python's urlparse, you can use urlparse to parse the URL, and then parse_qsl to parse the query. I want to remove a query (name, value) pair, and then reconstruct the URL. There is a urlunparse method, but no unparse_qsl method. What is the correct way to reconstruct the query from the qsl list?
Python urlparse.unparse_qsl?
0.197375
0
1
782
11,820,566
2012-08-05T22:23:00.000
0
0
0
0
python,django,amazon-s3,boto,django-storage
57,364,827
5
false
1
0
A simple workaround for me was to generate a new access key with only alphanumeric characters (i.e. no special characters such as "/" or "+", which AWS sometimes includes in the keys it generates).
1
3
0
I have 2 files compiled by django-pipeline along with s3boto: master.css and master.js. They are set to "Public" in my buckets. However, when I access them, sometimes master.css is served, sometimes it errs with SignatureDoesNotMatch. The same with master.js. This doesn't happen on Chrome. What could I be missing? EDIT: It now happens on Chrome too.
Inconsistent SignatureDoesNotMatch Amazon S3 with django-pipeline, s3boto and storages
0
0
0
5,584
11,821,116
2012-08-05T23:52:00.000
6
0
0
0
python,django,twitter-bootstrap
11,821,278
4
false
1
0
I've been using django-crispy-forms with bootstrap for the last couple of months and it has been quite useful. Forms render exactly as they're meant to. If you do any custom form rendering though, be prepared to define your forms in code rather than in template, using helpers.
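A hedged sketch of the "define your forms in code, using helpers" point, based on django-crispy-forms' FormHelper API; the form itself is invented. The template side would then render the whole thing with the {% crispy form %} tag:

```python
from django import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Submit

class ContactForm(forms.Form):
    name = forms.CharField()
    message = forms.CharField(widget=forms.Textarea)

    def __init__(self, *args, **kwargs):
        super(ContactForm, self).__init__(*args, **kwargs)
        # Rendering hints live on the form class, not in the template.
        self.helper = FormHelper()
        self.helper.add_input(Submit("submit", "Send"))
```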
1
70
0
I want to start using Twitter's Bootstrap for a recently started Django app. I have quite a bit of experience with Django, but I'm totally new to Bootstrap. What's the best way to proceed? Are there any particular Bootstrap apps for Django you would recommend or have experience with? I understand that I could use Bootstrap directly, without any special Bootstrap-specific Django apps. However, I also read that the form rendering doesn't come out particularly well without a little server side support (rendering the Bootstrap specific CSS into the form HTML, for example). There seem to be several projects, such as crispy forms, django-bootstrap-toolkit, etc. Looking at their project pages, I can see different levels of activity and support. If I decide to go with one of those, I would of course like to pick one which has some momentum and therefore a good likelihood of staying supported and maintained for a while. This is very important and so even if the particular app doesn't have all possible features or is a bit less flexible, it might still be a good choice, due to support/freshness, availability of examples, etc. Thank you for any recommendations or feedback.
Django and Bootstrap: What app is recommended?
1
0
0
50,626
11,821,322
2012-08-06T00:28:00.000
0
0
1
0
python,database,json,dictionary
11,821,761
8
false
0
0
On the JSON front, there is also a library called simplejson. The first time I used JSON in Python, the built-in json library didn't work for me / I couldn't figure it out; simplejson was... easier to use.
2
28
0
Currently expensively parsing a file, which generates a dictionary of ~400 key, value pairs, which is seldomly updated. Previously had a function which parsed the file, wrote it to a text file in dictionary syntax (ie. dict = {'Adam': 'Room 430', 'Bob': 'Room 404'}) etc, and copied and pasted it into another function whose sole purpose was to return that parsed dictionary. Hence, in every file where I would use that dictionary, I would import that function, and assign it to a variable, which is now that dictionary. Wondering if there's a more elegant way to do this, which does not involve explicitly copying and pasting code around? Using a database kind of seems unnecessary, and the text file gave me the benefit of seeing whether the parsing was done correctly before adding it to the function. But I'm open to suggestions.
Elegant way to store dictionary permanently with Python?
0
0
0
50,650
11,821,322
2012-08-06T00:28:00.000
3
0
1
0
python,database,json,dictionary
11,821,401
8
false
0
0
If storage efficiency matters, use pickle (or cPickle, for an execution performance gain). As Amber pointed out, you can also dump/load via json. It will be human-readable, but takes more disk space.
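A sketch contrasting the two suggestions; the dictionary contents and file names are placeholders:

```python
import json
try:
    import cPickle as pickle  # fast pickler on Python 2; plain pickle on Python 3
except ImportError:
    import pickle

rooms = {"Adam": "Room 430", "Bob": "Room 404"}

with open("rooms.pkl", "wb") as f:           # compact binary, fast to load
    pickle.dump(rooms, f, pickle.HIGHEST_PROTOCOL)

with open("rooms.json", "w") as f:           # human-readable, easy to inspect
    json.dump(rooms, f, indent=2)

with open("rooms.json") as f:                # round-trips back to the same dict
    assert json.load(f) == rooms
```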
2
28
0
Currently expensively parsing a file, which generates a dictionary of ~400 key, value pairs, which is seldomly updated. Previously had a function which parsed the file, wrote it to a text file in dictionary syntax (ie. dict = {'Adam': 'Room 430', 'Bob': 'Room 404'}) etc, and copied and pasted it into another function whose sole purpose was to return that parsed dictionary. Hence, in every file where I would use that dictionary, I would import that function, and assign it to a variable, which is now that dictionary. Wondering if there's a more elegant way to do this, which does not involve explicitly copying and pasting code around? Using a database kind of seems unnecessary, and the text file gave me the benefit of seeing whether the parsing was done correctly before adding it to the function. But I'm open to suggestions.
Elegant way to store dictionary permanently with Python?
0.07486
0
0
50,650
11,822,111
2012-08-06T03:10:00.000
2
0
1
0
python,file
11,822,131
1
false
0
0
Opening a file in a+ positions the pointer at the end of the file; truncation from there results in no change to the file.
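A short sketch of the fix implied by this answer: open in "r+" rather than "a+", seek to where the file should end, and truncate there. The file name and offset are arbitrary:

```python
with open("data.bin", "r+b") as f:
    f.seek(100)    # position just after the last byte we want to keep
    f.truncate()   # with no argument, truncate() cuts at the current position
```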
1
0
0
Python file object: how do I remove bytes from the current seek position to the end? f = open(filename, "a+"); truncate_pos = f.tell(); f.truncate(truncate_pos) seems not to work. How can I do it?
python file object: how to remove bytes from current seek position to end
0.379949
0
0
138
11,822,790
2012-08-06T05:05:00.000
1
1
1
0
python,unit-testing
11,823,239
2
false
0
0
Python's unittest is fine, but it can be difficult to add unit testing to a large project. The reason is that unit testing is about testing the functionality of the tiniest blocks: lots of small tests, each separate from the others, depending on nothing but the tested part of the code. When unit tests are added to existing code, they are usually added only to cover the isolated cases that were proven to cause an error. The added unit test should be written against the uncorrected functionality so that it discloses the error; then the error should be fixed so that the test passes. This is the first extreme: adding unit tests only to the code that fails. It is also a must. You should always add a unit test for code that fails, and you should do it before you fix the error. Now, the question is how to add unit tests to a large project that did not use them. The quantity of test code may be comparable with the size of the project itself, so the other extreme would be to add unit tests for everything. However, that is too much work, and you usually have to reverse-engineer your own code to find the building blocks to be tested. I suggest finding the most important parts of the code and adding unit tests to them.
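A minimal sketch of the "write the failing test first" workflow with the standard unittest module; mypackage.parse_price is an invented function standing in for the code under test:

```python
import unittest
from mypackage import parse_price  # hypothetical code under test

class ParsePriceRegression(unittest.TestCase):
    def test_rejects_empty_string(self):
        # Written against the uncorrected code: it fails and exposes the
        # reported bug, then passes once the fix is in place.
        with self.assertRaises(ValueError):
            parse_price("")

if __name__ == "__main__":
    unittest.main()  # python test_parse_price.py, or python -m unittest discover
```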
1
2
0
I am working on a project in Python, using Git for version control, and I've decided it's time to add a couple of unit tests. However, I'm not sure about the best way to go about this. I have two main questions: which framework should I use and how should I arrange my tests? For the first, I'm planning to use unittest since it is built into Python, but if there is a compelling reason to prefer something else I'm open to suggestions. The second is a tougher question, because my code is already somewhat disorganized, with lots of submodules and relative imports. I'm not sure where to fit the testing code. Also, I'd prefer to keep the testing code separate from everything else if possible. Lastly, I want the tests to be easy to run, preferably with a single command-line command and minimal path setup. How do large Python projects handle testing? I understand that there is typically an automated system to run tests on all new checkins. How do they do it? What are the best practices for setting up a testing system?
How to arrange and set up unit testing in Python
0.099668
0
0
489
11,823,586
2012-08-06T06:42:00.000
0
0
1
0
python,django,multiprocessing
11,823,901
2
false
1
0
You can try Celery, as it's Django-friendly. But to be honest, I'm not fond of it (bugs :)). We are going to switch to Gearman. Writing your own job producers and consumers (workers) is a kind of fun!
1
1
0
I'm trying to write a Web app in Python, which is to consist of two parts: A Django-based user interface, which allows each user to set up certain tasks Worker processes (one per user), which, when started by the user, perform the tasks in the background without freezing the UI. Since any object I create in a view is not persistent, I have no way of keeping tracks of worker processes. I'm not even sure how to approach this task. Any ideas?
Python multiprocessing and Django - I'm confused
0
0
0
685
11,824,589
2012-08-06T08:06:00.000
4
0
1
1
python,visual-studio-2010,ptvs
11,826,101
4
true
0
0
I found that it works if: main.py is set as the Startup File, and in the Properties of the project -> Debug tab -> Interpreter Path field, I put the path C:...\env\Scripts\python.exe (i.e. the Python executable of the virtualenv).
1
6
0
I don't know how to run the activate.bat in a Python Tools for Visual Studio Project. I have a directory environment in my project with my virtualenv. But, I don't know how I can run ./env/Scripts/activate.bat before the project run my main python script.
How to run a python script with Python Tools for Visual Studio in a virtualenv?
1.2
0
0
30,422
11,824,697
2012-08-06T08:15:00.000
0
0
0
0
python,opencv
11,826,057
2
false
0
0
Have you run 'make install' or 'sudo make install'? While not absolutely necessary, it copies the generated binaries to your system paths.
2
0
1
I downloaded opencv 2.4 source code from svn, and then I used the command 'cmake -D BUILD_TESTS=OFF' to generate makefile and make and install. I found that the python module was successfully made. But when I import cv2 in python, no module cv2 exists. Is there anything else I should configure? Thanks for your help.
Python For OpenCV2.4
0
0
0
329
11,824,697
2012-08-06T08:15:00.000
2
0
0
0
python,opencv
11,824,855
2
true
0
0
You should either copy the cv2 library to a location in your PYTHONPATH or add your current directory to the PYTHONPATH.
2
0
1
I downloaded opencv 2.4 source code from svn, and then I used the command 'cmake -D BUILD_TESTS=OFF' to generate makefile and make and install. I found that the python module was successfully made. But when I import cv2 in python, no module cv2 exists. Is there anything else I should configure? Thanks for your help.
Python For OpenCV2.4
1.2
0
0
329
11,829,360
2012-08-06T13:34:00.000
1
1
0
1
python,seo,indexing,robots.txt,googlebot
11,829,421
2
true
1
0
No. The search engine should not care what script generates the pages. As long as the pages generated by the webapps are indexed, you should be fine. Second question: you should create a separate robots.txt per subdomain. That is, when robots.txt is fetched from a particular subdomain, return a robots.txt file that pertains to that subdomain only. So if you want a subdomain indexed, have that robots file allow all; if you don't want it indexed, have the robots file deny all.
1
0
0
I am working on a web application which allows users to create their own webapp in turn. For each new webapp created by my application I assign a new subdomain, e.g. subdomain1.xyzdomain.com, subdomain2.xyzdomain.com, etc. All these webapps are stored in a database and are served by a Python script (say default_script.py) kept in /var/www/. Until now, I have blocked search engine indexing for the directory (/var/www/) using robots.txt, which essentially blocks indexing of all my scripts, including default_script.py, as well as the content served for multiple webapps by that default_script.py script. But now I want some of those subdomains to be indexed. After searching for a while I was able to figure out a way to block indexing of my scripts by explicitly specifying them in robots.txt. But I am still doubtful about the following: Will blocking default_script.py from indexing also block indexing of all content that is served from default_script.py? If I let it be indexed, will default_script.py start showing up in search results too? How can I allow indexing of some of the subdomains selectively? E.g. index subdomain1.xyzdomain.com but NOT subdomain2.xyzdomain.com.
Selectively indexing subdomains
1.2
0
0
134
11,833,542
2012-08-06T18:05:00.000
1
0
1
0
python-2.7
11,857,668
1
true
0
0
What is the import error you get? pyspeech also needs pywin32. Is that package installed? Also check if Microsoft speech kit is installed. If everything is fine, add sys.path.append(site package path) at the top of your code.
1
0
0
I want to make and explore the speech to text using the python. So I searched on the google about the python speech to text api I have found pyspeech at the first and found it much easy to use but even after installing it I got some problems. 1) I have installed pyspeech using command prompt easy_install speech at the c:>python27...scripts but still i am unable to run the python speech API. 2) I have also installed the base PyPI required for the pyspeech running ez_setup.py.
pyspeech not working in python 2.7
1.2
0
0
731
11,837,330
2012-08-06T23:24:00.000
2
0
0
1
python,buildbot
12,306,901
1
true
0
0
If you don't care about the name of the directory, just the name of the builder, you can set the builddir attribute of the builder to whatever it currently is, then name your builder however you want. The data stored in the builder directory is in pickles. Looking at the code, I think the only data that could cause issues is the builder name. If you don't care about non-build events, you could probably just delete the builder file from each directory. Otherwise, rewriting the pickle with the updated builder name should work.
1
22
0
Is there a way to rename a build in buildbot without losing all of the logs? For instance I have several windows slaves which all might build: "Windows 2008+ DEBUG" but I want to rename this build to: "Windows 2008R2+ DEBUG". How do I set compare_attr (if that's even what I need to do) so that all of the logs/etc... are included from the previous builds in the new one. Can I manually rename the directories and expect everything to work? Experimentation has told me that will not work but maybe I can write a command to change certain things?
rename a build in buildbot
1.2
0
0
812
11,841,451
2012-08-07T07:50:00.000
1
0
1
0
python,ide
11,841,755
1
true
0
0
I haven't used Editra in a while, but I think it's just the py2exe binary that's flagged the antivirus (which is subject to false positives). You can run the Editra source (IIRC, Editra comes with wxPython) directly. As per Editra vs IDLE, I'm sure Editra has some more functionality than IDLE. Just my 2 cents.
1
1
0
I have been using this particular IDE (Editra) for about a year; previously it was version 0.6.77, the latest is version 0.7.08. After updating to the newer version I built the exe with py2exe. The IDE works fine, but Norton's antivirus flagged and quarantined it. The primary reason I use it is because it can be configured to run on different versions of Python from a single drop-down box, so I have it set to use Python 2.6 and/or Python 3.1. Does anyone else use Editra? And if so, does it trigger your antivirus? Or should I stick with IDLE?
Is Editra 0.7.08 a reliable IDE?
1.2
0
0
155
11,843,817
2012-08-07T10:18:00.000
3
0
0
0
python,django
11,843,883
1
false
1
0
You can create a "Run Configuration" in Eclipse to invoke manage.py. There is an "Arguments" tab that allows you to provide command line arguments.
1
1
0
I am doing a project on Django, and every time we run the project we have to type commands in the command prompt. Can we pass arguments to manage.py within Eclipse (the IDE) itself, e.g. in the run configuration? Please throw some light on this.
python: how to pass parameters to application in eclipse without using command prompt
0.53705
0
0
94
11,844,628
2012-08-07T11:08:00.000
3
0
1
0
python,c,matplotlib
13,551,794
1
true
0
0
Finally resolved this, so I'm going to explain what occurred for the sake of Googlers! This only happened when using third-party libraries like numpy or matplotlib, but it actually related to an error elsewhere in my code. As part of the software I wrote, I was extending the Python interpreter, following the same basic pattern as shown in the Python C API documentation. At the end of this code, I called the Py_DECREF function on some of the Python objects I had created along the way. My mistake was that I was calling this function on borrowed references, which should not be done. This caused the software to crash with the error above when it reached the Py_Finalize command that I used to clean up. Removing the DECREF on the borrowed references fixed this error.
1
1
1
I'm puzzling over an embedded Python 2.7.2 interpreter issue. I've embedded the interpreter in a Visual C++ 2010 application and it essentially just calls user-written scripts. My end-users want to use matplotlib - I've already resolved a number of issues relating to its dependence on numpy - but when they call savefig(), the application crashes with: Fatal Python Error: PyEval_RestoreThread: NULL tstate. This isn't an issue running the same script using the standard Python 2.7.2 interpreter, even using the same site-packages, so it seems to definitely be something wrong with my embedding. I call Py_Initialize() - do I need to do something with setting up Python threads? I can't quite get the solution from other questions here to work, but I'm more concerned that this is symptomatic of a wider problem in how I'm setting up the Python interpreter.
Matplotlib with TkAgg error: PyEval_RestoreThread: null tstate on save_fig() - do I need threads enabled?
1.2
0
0
2,495
11,845,803
2012-08-07T12:18:00.000
3
0
1
0
python,datetime,time,timezone
11,845,865
6
false
0
0
time.time() returns the number of seconds since the UNIX epoch began at 0:00 UTC, Jan 1, 1970. Assuming the machines have their clocks set correctly, it returns the same value on every machine.
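A quick illustration of this answer (Python 3 syntax for the datetime part): the timestamp is identical everywhere; only its local rendering differs:

```python
import time
from datetime import datetime, timezone

t = time.time()  # seconds since 1970-01-01T00:00:00 UTC, timezone-independent
print(datetime.fromtimestamp(t, tz=timezone.utc))  # same UTC instant on any machine
print(datetime.fromtimestamp(t))                   # local wall-clock view, varies by zone
```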
2
42
0
Apologies for asking too basic question but I couldn't get it cleared after reading docs. It just seems that I am missing or have misunderstood something too basic here. Does calling time.time() from different timezones, at the same time produce different results? This maybe comes down to definition of epoch, which on the docs (and on my not-so-deep search on the Internet), has no mentions of the timezone. Also, suppose time.time() has been called from places with different timezones, and converted to UTC datetimes on their machines, will they all give same UTC time?
Is Python's time.time() timezone specific?
0.099668
0
0
15,434
11,845,803
2012-08-07T12:18:00.000
4
0
1
0
python,datetime,time,timezone
11,845,891
6
false
0
0
The return value should be the same, since it's the offset in seconds to the UNIX Epoch. That being said, if you convert it to a Date using different timezones, the values will, of course, differ. If, from those Dates, you convert each of them to UTC, then the result has to be the same.
2
42
0
Apologies for asking too basic question but I couldn't get it cleared after reading docs. It just seems that I am missing or have misunderstood something too basic here. Does calling time.time() from different timezones, at the same time produce different results? This maybe comes down to definition of epoch, which on the docs (and on my not-so-deep search on the Internet), has no mentions of the timezone. Also, suppose time.time() has been called from places with different timezones, and converted to UTC datetimes on their machines, will they all give same UTC time?
Is Python's time.time() timezone specific?
0.132549
0
0
15,434
11,848,650
2012-08-07T14:55:00.000
4
0
1
1
python,ubuntu,compilation,nautilus
11,848,690
3
false
0
0
You should make the .py files executable and click on them. The .pyc files cannot be run directly.
2
6
0
Sorry for the vague question; I don't actually know how to ask this, nor the right terminology for it. How do I run a Python script/bytecode/.pyc (any compiled Python code) without going through the terminal? Basically, in Nautilus: "on double-click of the Python script, it'll run" or "on select then [Enter], it'll run!". That's my goal at least. When I check "Allow executing of file as a program" and then press [Enter] on the file, it gives me this message: Could not display "/home/ghelo/Music/arrange.pyc". There is no application installed for Python bytecode files. Do you want to search for an application to open this file? Using Ubuntu 12.04, by the way, and it has to be Python 2; one of the packages doesn't work on Python 3. If there's a difference between how to do it on the two versions, please include it, if it's not too much to ask, thank you. I know it doesn't matter, but it's a script auto-renaming & arranging my music files. Guide me accordingly, stupid idiot here. :)
How to run Python script with one icon click?
0.26052
0
0
15,947
11,848,650
2012-08-07T14:55:00.000
1
0
1
1
python,ubuntu,compilation,nautilus
12,015,527
3
true
0
0
Adding " #!/usr/bin/env python " at the top of the .py file works! Hmm, although don't appreciate the pop-up, but nevermind. :P From PHPUG: You do not invoke the pyc file. It's the .py file that's invoked. Python is an interpreted language. A simpler way to make a python exectuable (explained): 1) Add #!/usr/bin/env python at the top of your python executable file (eg. main.py) (it uses the default python - eg. if using arch, that's py3 instead of py2. You can explicitly tell it to run python2/python3 by replacing python with it's version: ex. python2.7) 2) Write the code. If the script is directly invoked, __name__ variable becomes equal to the string '__main__' thus the idiom: if __name__ == '__main__':. You can add all the logic that relates to your script being directly invoked in this if-block. This keeps your executable importable. 3) Make it executable 'chmod +x main.py' 4) Call the script: ./main.py args args
2
6
0
Sorry for the vague question; I don't actually know how to ask this, nor the right terminology for it. How do I run a Python script/bytecode/.pyc (any compiled Python code) without going through the terminal? Basically, in Nautilus: "on double-click of the Python script, it'll run" or "on select then [Enter], it'll run!". That's my goal at least. When I check "Allow executing of file as a program" and then press [Enter] on the file, it gives me this message: Could not display "/home/ghelo/Music/arrange.pyc". There is no application installed for Python bytecode files. Do you want to search for an application to open this file? Using Ubuntu 12.04, by the way, and it has to be Python 2; one of the packages doesn't work on Python 3. If there's a difference between how to do it on the two versions, please include it, if it's not too much to ask, thank you. I know it doesn't matter, but it's a script auto-renaming & arranging my music files. Guide me accordingly, stupid idiot here. :)
How to run Python script with one icon click?
1.2
0
0
15,947
11,853,818
2012-08-07T20:42:00.000
0
0
0
1
python,macos,terminal
11,854,667
3
false
0
0
Automator would be your best bet actually, creating Macintosh .app bundles by hand can be annoying. It's been a long time since I've used automator, but here goes, use the Run Shell Script command from Terminal, and make the script be python ~/Desktop/script.py or maybe use the full path like python /Users/<username>/Desktop/script.py P.S. cd ~/Desktop/script.py doesn't do what you think it does, you want python ~/Desktop/script.py
3
3
0
I have a simple python script I can run through terminal. Is there a way to put a shortcut on my mac desktop to open terminal and run like "cd ~/Desktop/script.py"? I have tried automator but i couldn't get it to work
Can I make a script to open terminal and run .py?
0
0
0
4,323
11,853,818
2012-08-07T20:42:00.000
-1
0
0
1
python,macos,terminal
11,853,877
3
false
0
0
You have to create a samplename.sh file, put the line below in it, and run it: ./samplename.sh ~/Desktop/script.py. Then create a symbolic link to that .sh file and place it on the desktop.
3
3
0
I have a simple python script I can run through terminal. Is there a way to put a shortcut on my mac desktop to open terminal and run like "cd ~/Desktop/script.py"? I have tried automator but i couldn't get it to work
Can I make a script to open terminal and run .py?
-0.066568
0
0
4,323
11,853,818
2012-08-07T20:42:00.000
6
0
0
1
python,macos,terminal
11,854,938
3
true
0
0
Create a file called anyname.command with python ~/Desktop/script.py in it. Then make it executable by running chmod 555 ~/Desktop/anyname.command in terminal. Then when you double-click on anyname.command it should run the python script.
3
3
0
I have a simple python script I can run through terminal. Is there a way to put a shortcut on my mac desktop to open terminal and run like "cd ~/Desktop/script.py"? I have tried automator but i couldn't get it to work
Can I make a script to open terminal and run .py?
1.2
0
0
4,323
11,854,137
2012-08-07T21:04:00.000
9
0
0
1
python,google-app-engine,google-cloud-datastore
11,855,209
1
true
1
0
The only way to change the ancestor of an entity is to delete the old one and create a new one with a new key. This must be done for all child (and grand child, etc) entities in the ancestor path. If this isn't possible, then your listed solution works. This is required because the ancestor path of an entity is part of its unique key. Parents of entities (i.e., entities in the ancestor path) need not exist, so changing a parent's key will leave the children in the datastore with no parent.
1
5
0
In the High-Replication Datastore (I'm using NDB), the consistency is eventual. In order to get a guaranteed complete set, ancestor queries can be used. Ancestor queries also provide a great way to get all the "children" of a particular ancestor with kindless queries. In short, being able to leverage the ancestor model is hugely useful in GAE. The problem I seem to have is rather simplistic. Let's say I have a contact record and a message record. A given contact record is being treated as the ancestor for each message. However, it is possible that two contacts are created for the same person (user error, different data points, whatever). This situation produces two contact records, which have messages related to them. I need to be able to "merge" the two records, and bring put all the messages into one big pile. Ideally, I'd be able to modify ancestor for one of the record's children. The only way I can think of doing this, is to create a mapping and make my app check to see if record has been merged. If it has, look at the mappings to find one or more related records, and perform queries against those. This seems hugely inefficient. Is there more of "by the book" way of handling this use case?
How to change ancestor of an NDB record?
1.2
1
0
1,422
11,854,528
2012-08-07T21:41:00.000
0
0
0
0
python,opencv
25,125,143
1
false
0
0
I do something similar with C++ in OpenCV. There are a couple of ways to go about this. I use TCP/IP protocols to make sure I don't have packet loss. Next, to test quality, I send and receive a video file (that I recorded from the camera) instead of streaming "new" video from the camera. Now I can check the quality by checking the bytes received in every frame. This may not be optimal, but it is a starting point.
1
0
0
I am new to python and programming in general. I was wondering if there is a way to validate that you are getting a video feed and not just a black screen from an incoming call. I have automated a script in Python that makes a call and answers the call, but some of the issues we are testing is how often we get black screen instead of the video call. I have been reading up on OpenCV and played around with it some, but am not getting anywhere near the results I am looking for. Is there another way in python to detect video? If so I would greatly appreciate being pointed in the right direction. Thanks
Checking the quality of an Incoming Video with python
0
0
0
1,107
11,857,535
2012-08-08T04:06:00.000
4
0
1
0
python,python-bindings
11,857,573
3
false
0
0
Although I cannot say this with full authority because it is preference-based, developing Python bindings for C makes the development process easier for those who find Python syntax more productive and easier to work with (for example, Python CUDA, 3D, Kinect, etc. libraries).
2
2
0
What is the motive behind developing Python bindings for existing code in other languages? I see many programmers developing Python bindings for their existing C code. Why? How does it help?
Why are Python bindings developed for existing code in other languages such as C?
0.26052
0
0
430
11,857,535
2012-08-08T04:06:00.000
1
0
1
0
python,python-bindings
15,140,667
3
false
0
0
Because there are many very high quality libraries in C with many many years of testing, bugfixing, etc., and it is crazy to try to reimplement everything in Python (e.g., I would never use cryptographic libraries in Python, one should use bindings to well tested and paranoid-developed C libraries like openssl, NSS, or gnutls).
2
2
0
What is the motive behind developing Python bindings for existing code in other languages? I see many programmers developing Python bindings for their existing C code. Why? How does it help?
Why are Python bindings developed for existing code in other languages such as C?
0.066568
0
0
430
11,859,234
2012-08-08T06:58:00.000
0
0
1
0
python
11,859,397
2
false
0
0
Just to clarify what @lazyr already said: the documentation refers to file-like objects. Actual file objects will always have the closed attribute and you can use it to see if the file is closed.
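A small demonstration of this answer; the getattr guard covers file-like objects that may lack the attribute:

```python
f = open("example.txt", "w")
print(f.closed)   # False while the file is open
f.close()
print(f.closed)   # True after close()

def is_closed(fileobj):
    # None means the object does not expose .closed at all
    return getattr(fileobj, "closed", None)
```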
1
2
0
The Python File Type documentation gives file.closed as a bool indicating the current state of the file object. This is a read-only attribute; the close() method changes the value. It may not be available on all file-like objects. Given that it's not guaranteed to be available on all file-like objects, is there another (better?) method of checking whether I've already got the file open, before trying to re-open it?
Check if file is open for any file type
0
0
0
2,199
11,860,036
2012-08-08T07:51:00.000
0
1
0
0
php,python,load,cron
11,860,120
1
true
0
0
Using multiple files to fetch smaller parts probably won't make a difference in server load (well, in fact it'd make the load x times bigger for x times shorter period of time, but the overall result is the same), but it should fetch the data faster (thanks to multithreading and paralleling the requests) therefore reducing your response times.
1
1
0
I'm running a python file every minute using a cron job. It queries a site and gather's information, but it has to load through 4-5 pages before it gets to the data I need. The execution time is around 5-10s per query. I'm wondering if there's a difference in server load if the file is being run congruently multiple times verses having 3 different files assigned to load sections. Example: test1.py loads information between A-H test2.py loads information between I-Q test3.py loads information between R-Z If someone requests information about a "B", "M", and "S" topic each file would run and return the results, verses one file test.py running a loop to return all three results. P.S. I'm asking because I'm expecting in the future that people will request information about 2-6 topics, and that's just one person. So I don't want one file running for 60 seconds straight. I'm wondering if it'll alleviate load to spread it across multiple files. P.P.S. Also I'm wondering the implications of using python vs php.
Server load comparison
1.2
0
0
57
11,860,280
2012-08-08T08:07:00.000
5
0
0
1
python,swing,ui-automation,pywinauto
12,048,644
1
true
1
0
Pywinauto uses standard windows API calls. Unfortunately many UI libraries (like Swing/QT/GTK) do not respond in a typical way to the API calls used - so unfortunately pywinauto usually cannot get the control information. (I am the Author of pywinauto). I can't give you a way to access the properties of the Swing controls.
1
0
0
I am using swapy (a desktop automation tool which uses the pywinauto Python package) to automate desktop UI activities, but swapy does not recognize the properties of a Swing-based Java application, while it can recognize the properties of other applications like Notepad, Windows Media Player, etc. Can anybody please explain the reason for this problem? And can I use Swing Explorer for this Swing-based application, of which I do not have the code, just the application? If I can't use it, please give me a way/solution to access the properties of a Swing-based Java application. Thanks in advance.
Swapy could not be used to access Swing properties of a Swing-based Java application. How to access Swing properties of a Java application
1.2
0
0
1,126
11,868,582
2012-08-08T16:03:00.000
1
0
0
0
python,mysql,postgresql
11,870,176
3
false
0
0
Use SQLAlchemy. It will work with most database types, and certainly does work with PostgreSQL and MySQL.
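A hedged sketch of that suggestion in modern SQLAlchemy style, querying and dumping CSV; the connection URL, the pg8000 driver choice (a pure-Python PostgreSQL DBAPI), and the query are all placeholders:

```python
import csv
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+pg8000://user:pw@localhost/mydb")
with engine.connect() as conn, open("out.csv", "w", newline="") as f:
    result = conn.execute(text("SELECT id, name FROM users"))
    writer = csv.writer(f)
    writer.writerow(result.keys())   # header row from the result columns
    writer.writerows(result)         # each row is an iterable of values
```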
1
3
0
I am looking for a pure-python SQL library that would give access to both MySQL and PostgreSQL. The only requirement is to run on Python 2.5+ and be pure-python, so it can be included with the script and still run on most platforms (no-install). In fact I am looking for a simple solution that would allow me to write SQL and export the results as CSV files.
Pure python SQL solution that works with PostgreSQL and MySQL?
0.066568
1
0
2,182
11,868,963
2012-08-08T16:27:00.000
0
0
1
0
python,python-imaging-library,tiff
11,877,012
1
true
0
1
It appears that TiffImagePlugin does not easily allow me to hook in additional decompressors. Replacing TiffImageFile._decoder with a dictionary of decoders might work, but you would have to examine and test each release of PIL to ensure it hasn't broken; this level of maintenance is just as bad as a custom PIL. I appreciate the design of tifffile.py for using a dictionary of decoders; it made it very easy. Final solution? I couldn't hook my code into PIL. I had to use PIL.Image.fromarray() to use my decompressed images.
1
1
0
I wrote a pure-Python TIFF G4 decompressor for use with tifffile.py. I know there are ways to add libtiff to a custom PIL, but I never could get that working very well in a mixed virtualenv. I want to manipulate the image in PIL. I am looking for pointers on hooking my decompressor into stock PIL's TiffImagePlugin.py. Any ideas?
Using TIFF G4 image in PIL
1.2
0
0
973
11,870,785
2012-08-08T18:24:00.000
0
0
0
0
python,c,algorithm,binary-search
11,871,291
4
false
0
0
Hmmm... you're basically looking for some kind of space-filling curve of sorts. I'm almost certain that you can do that with clever bit-twiddling. You might like to take a look at the way indices are computed for the Morton Z-order or Ahnentafel indexing that's used in some cache-oblivious stencil algorithms. I had a look at this some years ago, and the indexing was similar to what you're describing and done with bit-twiddling.
2
9
0
I have been working on what seems to be a simple task that is driving me nuts. So if you fancy a programming challenge ... read on. I want to be able to take a number range e.g. [1:20] and print the values using a mechanism similar to a binary search algorithm. So, print first the lowest value (in this case 1) and then the mid value (in this case 10), then divide the range into quarters and print the values at 1/4 and 3/4 (in this case, 5 and 15), then divide into eighths, and so on until all values in the range have been printed. The application of this (which isn't really necessary to understand here) is for a memory page access mechanism that behaves more efficiently when pages are accessed at the mid ranges first. For this problem, it would be sufficient to take any number range and print the values in the manner described above. Any thoughts on this? A pseudocode solution would be fine. I would show an attempt at this but everything I've tried so far doesn't cut it. Thanks. Update: As requested, the desired output for the example [1:20] would be something like this: 1, 10, 5, 15, 3, 7, 12, 17, 2, 4, 6, 8, 11, 13, 16, 18, 9, 19, 20. This output could be presented in many similar ways depending on the algorithm used. But the idea is to display first the half values, then the quarters, then the eighths, then the sixteenths, etc., leaving out previously presented values.
Binary selection process
0
0
0
842
11,870,785
2012-08-08T18:24:00.000
0
0
0
0
python,c,algorithm,binary-search
11,872,613
4
false
0
0
It's easy for 1/2, right? So why not do it recursively, so that 1/4 is 1/2 of 1/2, and 1/8 is 1/2 of 1/4?
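A sketch of that halving idea, processed breadth-first with a queue (a direct recursion would interleave depths) so all halves are emitted before any quarters, and so on. The exact sequence differs slightly from the example in the question, which explicitly allows for that, but follows the same pattern:

```python
from collections import deque

def mid_first(lo, hi):
    yield lo                       # the example output begins with the low end
    queue = deque([(lo + 1, hi)])  # then repeatedly split what remains
    while queue:
        a, b = queue.popleft()
        if a > b:
            continue
        mid = (a + b) // 2
        yield mid
        queue.append((a, mid - 1))
        queue.append((mid + 1, b))

print(list(mid_first(1, 20)))
# 1, 11, 6, 16, 3, 8, 13, 18, 2, 4, 7, 9, 12, 14, 17, 19, 5, 10, 15, 20
```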
2
9
0
I have been working on what seems to be a simple task that is driving me nuts. So if you fancy a programming challenge ... read on. I want to be able to take a number range e.g. [1:20] and print the values using a mechanism similar to a binary search algorithm. So, print first the lowest value (in this case 1) and then the mid value (in this case 10), then divide the range into quarters and print the values at 1/4 and 3/4 (in this case, 5 and 15), then divide into eighths, and so on until all values in the range have been printed. The application of this (which isn't really necessary to understand here) is for a memory page access mechanism that behaves more efficiently when pages are accessed at the mid ranges first. For this problem, it would be sufficient to take any number range and print the values in the manner described above. Any thoughts on this? A pseudocode solution would be fine. I would show an attempt at this but everything I've tried so far doesn't cut it. Thanks. Update: As requested, the desired output for the example [1:20] would be something like this: 1, 10, 5, 15, 3, 7, 12, 17, 2, 4, 6, 8, 11, 13, 16, 18, 9, 19, 20. This output could be presented in many similar ways depending on the algorithm used. But the idea is to display first the half values, then the quarters, then the eighths, then the sixteenths, etc., leaving out previously presented values.
Binary selection process
0
0
0
842
11,876,545
2012-08-09T03:52:00.000
1
0
1
0
windows,python-2.7
11,876,572
3
true
0
0
Run your program from a Windows command prompt. That will not automatically close when the program finishes. If you run your program by double-clicking on the .py file icon, then Windows will close the window when your program finishes (whether it was successful or not).
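A sketch of a belt-and-braces alternative to this answer: keep the window open yourself so the traceback stays readable even on double-click. main() is a stand-in for whatever your program does:

```python
import traceback

def main():
    pass  # your program goes here

try:
    main()
except Exception:
    traceback.print_exc()                 # show the full error instead of losing it
finally:
    raw_input("Press Enter to close...")  # use input(...) on Python 3
```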
1
1
0
I'm making an application in Python on Windows. When I run it in the console, it stops, shows an error, and closes. I can't see the error because it's too fast, and I can't read it. I'm editing the code with IDLE (the program that came with Python when I installed it), and when I run it with the Python shell, there are no errors. I would run it from IDLE, but the console has more features. I don't know why this is happening. I need your help.
how to make my console in python not to close?
1.2
0
0
7,808
11,877,666
2012-08-09T06:11:00.000
1
0
1
0
python,form-submit
11,879,610
1
true
1
0
@Serafeim, your approach is very good for the situation. Here are some ideas for extending it: Make sure that the secret_word (in hashing terms it is called a salt) is long enough. Make the final function a bit more complex, e.g. hash = h(h(username) + month + year + h(salt)). Use a somewhat stronger hash function, e.g. SHA-1. Don't give the end user the whole hash value: an md5 hex digest contains 32 digits, but it would be enough to put the first 5-10 digits of the hash in the report. Updated: if you have the resources, generate a random salt per user. Then even if a user somehow learns the salt and the hash function, it will still be useless for the others.
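A sketch of that recipe using the stdlib's hmac module (which exists for exactly this keyed-hash purpose) with SHA-1 and a truncated digest; the secret is a placeholder:

```python
import hmac
import hashlib

SECRET = b"replace-with-a-long-random-secret"

def receipt(username, month, year):
    msg = ("%s|%02d|%04d" % (username, month, year)).encode()
    # First 10 hex digits: short enough for users, yet infeasible
    # to forge without the secret.
    return hmac.new(SECRET, msg, hashlib.sha1).hexdigest()[:10]

print(receipt("alice", 8, 2012))
```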
1
0
0
There is a requirement that our users should complete and submit a form once a month. So, each month we should have a form that will contain data for the triplet (username, month, year). I want our users to be able to certify that they did actually submit the form for that particular month by creating a receipt for them. So, for each month there will be a report containing the data the user submitted along with the receipt. I don't want the users to be able to create that receipt by themselves though. What I was thinking was to create a string that contained username, month, year, secret_word and give the md5 hash of that string to the users as their receipt. That way because the users won't have the secret word they won't be able to generate the md5 hash. However my users will probably complain when they see the complexity of that md5 hash. Also if the find out the secret word they will be able to create receipts for everybody. Is there a standard way of doing what I ask ? Could you recommend me any other possible solutions ? I am using Python but some pseudocode or link to the appropriate methods would be ok.
Create a receipt for a user form submission
1.2
0
0
504
11,878,454
2012-08-09T07:14:00.000
1
0
0
0
python,database,django,configuration
11,878,547
3
false
1
0
You can just use a different settings.py in your production environment. Or - which is a bit cleaner - you might want to create a file settings_local.py next to settings.py where you define a couple of settings that are specific for the current machine (like DEBUG, DATABASES, MEDIA_ROOT etc.) and do a from settings_local import * at the beginning of your generic settings.py file. Of course settings.py must not overwrite these imported settings.
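The pattern described in this answer, as it would look at the top of settings.py:

```python
# settings.py
try:
    from settings_local import *  # machine-specific DEBUG, DATABASES, MEDIA_ROOT, ...
except ImportError:
    pass  # no local overrides on this machine

# ... generic settings follow; take care not to redefine anything
# that settings_local is expected to provide.
```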
1
0
0
I am relatively new to Django and one thing that has been on my mind is changing the database that will be used when running the project. By default, the DATABASES 'default' is used to run my test project. But in the future, I want to be able to define a 'production' DATABASES configuration and have it use that instead. In a production environment, I won't be able to "manage.py runserver" so I can't really set the settings. I read a little bit about "routing" the database to use another database, but is there an easier way so that I won't need to create a new router every time I have another database I want to use (e.g. I can have test database, production database, and development database)?
How do I make Django use a different database besides the 'default'?
0.066568
1
0
304
11,878,554
2012-08-09T07:21:00.000
0
1
1
0
python,nameerror
11,878,657
4
false
0
0
You could get an IDE which helps a bit with autocompletion of names, though not in all situations. PyDev is one such IDE with autocompletion; PyCharm is another (not free). Using autocomplete is probably your best bet to solve your problem in the long term. Even if you find a tool which attempts to correct such spelling errors, that will not solve the initial problem and will probably just cause new ones.
1
1
0
I am not a native English speaker. When I code in Python, I often make spelling mistakes and get NameError exceptions. Unit tests can solve some of these problems, but not all, because one can hardly construct test cases that cover all the logic. So I think a tool that detects such errors would help me a lot, but I searched Google and cannot find one.
Does a tool exist which can help programmers avoid Python NameError?
0
0
0
130
11,881,165
2012-08-09T10:15:00.000
0
0
0
0
python,pandas,slice
71,602,253
3
false
0
0
If you just need to get the top rows, you can use df.head(10).
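Beyond head(), the row-filtering the question describes is usually done with a boolean mask; the column name and value here are invented:

```python
import pandas as pd

df = pd.DataFrame({"q1": ["yes", "no", "yes"], "score": [1, 2, 3]})

subset = df[df["q1"] == "yes"]   # keep only rows with the depicted answer
dropped = df[df["q1"] != "yes"]  # or, equivalently, drop them
print(subset)
print(df.head(10))               # or simply the first rows, as above
```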
1
46
1
I am working with survey data loaded from an h5-file as hdf = pandas.HDFStore('Survey.h5') through the pandas package. Within this DataFrame, all rows are the results of a single survey, whereas the columns are the answers for all questions within a single survey. I am aiming to reduce this dataset to a smaller DataFrame including only the rows with a certain depicted answer on a certain question, i.e. with all the same value in this column. I am able to determine the index values of all rows with this condition, but I can't find how to delete these rows or make a new df with these rows only.
Slice Pandas DataFrame by Row
0
0
0
92,185
11,881,900
2012-08-09T10:58:00.000
1
0
0
0
python,django
11,882,269
2
false
1
0
You can use the original file's name as part of the file name when storing it on disk, and you can probably use the file's creation/modification date for the upload date. IMO, you should just store both explicitly in the database.
1
4
0
Is there any way to get the uploaded file date and name which we have stored into the database using forms ? Right now I am just creating two more database tuples for name and date and storing them like this file_name = request.FILES['file'].name for file_name and storing date using upload_date = datetime.datetime.now()
Django:Any way to get date and name of uploaded file later?
0.099668
0
0
1,226
11,882,484
2012-08-09T11:33:00.000
0
0
1
0
python,dll,32bit-64bit,printers,python-2.2
11,883,191
1
true
0
0
In a word, no. ctypes uses LoadLibrary to connect to an external DLL, but if you are a 32-bit process then you are using 32-bit addresses, so a 64-bit binary could not be mapped into your address space.
1
0
0
I am using Python 2.2, a 32-bit process, but I need to load a 64bit dll from a printer. It might seem strange, but is this possible?
Load 64-bits Dll on Python2.2(32-bits)
1.2
0
0
528
11,885,062
2012-08-09T13:55:00.000
1
0
0
0
python,web-applications,ftp,download
11,886,438
1
true
0
0
The only way to grant access in the manner that you want is to pass the file through your server, write a frontend on the FTP server, or provide a limited download of the file on the FTP server (temporary account). The last option is not secure and I wouldn't recommend it, although it would be easy to do. So that leaves either passing the file through your server and handing it off to the user that way, or having some kind of web frontend on the FTP server to serve the file. The frontend on the FTP server would be the best option, although it requires more work. The basic requirements are: link generation; a database of some kind to hold the links and the users allowed to access them; and a method to pass the authentication through to this frontend so the user doesn't have to log in again (a simple cookie/session would be easiest, but again is difficult). It will require a lot of extra work but will be the most flexible, that is, if it is possible at all; otherwise I would stick with passing the data through your server, or look into a third-party CDN.
1
0
0
I have a site A where a web portal written in Python is installed. Then I have a site X (which is not static but changes dynamically), where some files are stored. Site A and site X communicate through FTP. How can I allow a registered user of the portal to download a file as if the file were on site A? Is there a standard way to do this? Since the files can be large, I would rather avoid passing them through the server. Thanks
Make accessible a remote file to a registered user
1.2
0
1
57
11,889,104
2012-08-09T17:48:00.000
1
0
0
0
python,postgresql,web-applications,flask,psycopg2
11,889,137
5
false
1
0
I think connection pooling is the best thing to do if this application is to serve multiple clients concurrently.
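A sketch with psycopg2's built-in pool; the connection parameters are placeholders:

```python
from psycopg2.pool import ThreadedConnectionPool

pool = ThreadedConnectionPool(minconn=1, maxconn=10,
                              host="localhost", dbname="mydb",
                              user="app", password="secret")

conn = pool.getconn()      # borrow a connection for this request
try:
    cur = conn.cursor()
    cur.execute("SELECT 1")
    print(cur.fetchone())
    cur.close()
finally:
    pool.putconn(conn)     # return it to the pool instead of closing it
```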
3
8
0
I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly. Should I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections?
Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?
0.039979
1
0
9,082
11,889,104
2012-08-09T17:48:00.000
3
0
0
0
python,postgresql,web-applications,flask,psycopg2
11,889,659
5
false
1
0
The answer depends on how many such requests will happen, and how many will happen concurrently, in your web app. Connection pooling is usually a better idea if you expect your web app to be busy with hundreds or even thousands of users concurrently logged in. If you are only doing this as a side project and expect fewer than a few hundred users, you can probably get away without pooling.
3
8
0
I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly. Should I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections?
Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?
0.119427
1
0
9,082
11,889,104
2012-08-09T17:48:00.000
0
0
0
0
python,postgresql,web-applications,flask,psycopg2
61,078,209
5
false
1
0
Pooling seems to be totally impossible in the context of Flask, FastAPI and everything relying on WSGI/ASGI dedicated servers with multiple workers. The reason for this behaviour is simple: you have no control over the pooling and the master thread/process. A pooling instance is only usable by a single thread serving a set of clients, so for just one worker. Any other worker will get its own pool, and therefore there cannot be any sharing of established connections. Logically it's also impossible, because you cannot share these object states across threads/processes in a multi-core environment with Python (2.x - 3.8).
3
8
0
I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly. Should I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections?
Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?
0
1
0
9,082
11,889,932
2012-08-09T18:43:00.000
3
0
1
0
python,virtualenv,pip
11,890,145
5
false
1
0
"it would be really convenient not to have to tell every new person joining the team how to set up their virtualenv" - Just add it to the normal set of instructions you give new members when they join, right in the same place where you tell them about the internal documentation wiki, the password for the wifi and the phone number of the sandwich delivery shop. It would be extremely inconvenient not to tell people and have them figure it out themselves: the first time they submit something that uses collections.Counter, they'd find out it broke the build because the server doesn't have 2.7.x.
1
27
0
I'm using virtualenv to develop a django application with a team. The server we're deploying on is running python 2.6, but the default for our machines is 2.7.3. Is there any way to specify python version in the requirements.txt file, or something similar, within the code base? I know requirements.txt is a pip thing, and python version is a virtualenv thing, but it would be really convenient not to have to tell every new person joining the team how to set up their virtualenv.
Specify Python Version for Virtualenv in Requirements.txt
0.119427
0
0
20,644
11,890,437
2012-08-09T19:17:00.000
15
0
0
0
python,matlab
11,890,470
3
false
0
0
Yes, there is the : operator. The command -10:5:11 would produce the vector [-10, -5, 0, 5, 10];
1
10
1
Is there an equivalent MATLAB function for the range() function in Python? I'd really like to be able to type something like range(-10, 11, 5) and get back [-10, -5, 0, 5, 10] instead of having to write out the entire range by hand.
Is there an equivalent of the Python range function in MATLAB?
1
0
0
21,375
11,892,497
2012-08-09T21:51:00.000
0
0
1
0
python,ipython
55,928,585
8
false
0
0
For me the solution was to rename my script from zmq.py to anything else. Using the name zmq.py creates a name conflict with the package: Python tries to import the script itself rather than the library, since scripts have priority in the import hierarchy.
3
14
0
I am trying to get ipython notebook run. I already installed pyzmq. Do you know why it's still giving this error?
Already installed pyzmq but still getting "ImportError: No module named zmq"
0
0
0
37,531
11,892,497
2012-08-09T21:51:00.000
0
0
1
0
python,ipython
46,959,438
8
false
0
0
You should add the Python path to the Windows PATH environment variable before installing zmq.
3
14
0
I am trying to get ipython notebook run. I already installed pyzmq. Do you know why it's still giving this error?
Already installed pyzmq but still getting "ImportError: No module named zmq"
0
0
0
37,531
11,892,497
2012-08-09T21:51:00.000
0
0
1
0
python,ipython
71,136,487
8
false
0
0
I had a file that was trying to import zmq that was throwing: ModuleNotFoundError: No module named 'zmq'. But pip install zmq or pip install pyzmq kept telling me I had a version already installed, and thus Requirement already satisfied at Library/blahblah/Python/3.6 ... I've already aliased pip3 to pip, but trying pip3 didn't work either. I wound up having to reset my $PYTHONPATH and delete that Python 3.6 install I didn't actually need. After that, pip install pyzmq STILL didn't work, though. What finally worked was pip3 install zmq ... Weird, since I already had that aliased, but something with the paths might still be screwy.
3
14
0
I am trying to get ipython notebook run. I already installed pyzmq. Do you know why it's still giving this error?
Already installed pyzmq but still getting "ImportError: No module named zmq"
0
0
0
37,531
11,892,953
2012-08-09T22:31:00.000
1
0
0
0
search-engine,sitemap,pythonanywhere
11,899,675
1
true
0
0
At the moment it is certainly possible, but it would only realistically happen if you have people linking to your .pythonanywhere.com domain. We are currently working on a major upgrade that will give each web app its own WSGI server, and the potential for this to occur will go away completely.
1
1
0
I am currently using my own domain name for my Pythonanywhere app. The original username.pythonanywhere.com still serves the same content as www.my-domain.com, and I wanted to know if there would be duplicate search engine results from this. My sitemap.xml file is written for www.my-domain.com in case that changes anything. I only want www.my-domain to be crawled.
Duplicate Search Engine results on Pythonanywhere
1.2
0
1
79
11,893,311
2012-08-09T23:11:00.000
0
1
0
1
python,module
31,068,954
5
false
0
0
To install any Python package on Ubuntu, first run sudo apt-get update. Then type "sudo apt-get install python-" and press Tab twice; answer y (yes) when asked and it will display all the packages available for Python. Then type sudo apt-get install python-<package> and it will install the package from the internet.
2
13
0
I'm guessing my question is pretty basic, but after 15-20 minutes on Google and YouTube, I am still a little fuzzy. I am relatively new to both Linux and Python, so I am having some difficulty comprehending the file system tree (coming from Windows). From what I've found digging around the directories in Ubuntu (which is version 12.04, I believe, which I am running in VBox), I have ID'd the following two directories related to Python: /usr/local/lib/python2.7 which contains these two subdirectories: dist-packages site-packages both of which do not show anything when I type "ls" to get a list of the files therein, but show ". .." when I type "ls -a". /usr/lib/python2.7 which has no site-packages directory but does have a dist-packages directory that contains many files and subdirectories. So if I want to install a 3rd party Python module, like, say, Mechanize, in which one of the above directories (and which subdirectory), am I supposed to install it in? Furthermore, I am unclear on the steps to take even after I know where to install it; so far, I have the following planned: Download the tar.gz (or whatever kind of file the module comes in) from whatever site or server has it Direct the file to be unzipped in the appropriate subdirectory (one of the 2 listed above) Test to make sure it works via import mechanize in interactive mode. Lastly, if I want to replace step number 1 above with a terminal command (something like sudo apt-get), what command would that be, i.e., what command via the terminal would equate to clicking on a download link from a browser to download the desired file?
Installing 3rd party Python modules on an Ubuntu Linux machine?
0
0
0
63,583
11,893,311
2012-08-09T23:11:00.000
11
1
0
1
python,module
11,893,356
5
false
0
0
You aren't supposed to manually install anything. There are three ways to install Python libraries: Use apt-get, aptitude or similar utilities. Use easy_install or pip (install pip first; it's not available by default). If you download some .tar.gz file, unzip it and then type sudo python setup.py install. Manually messing with paths and moving files around is the first step to headaches later. Do not do it. For completeness I should mention the portable, isolated way, which is to create your own virtual environment for Python: Run sudo apt-get install python-virtualenv. Run virtualenv myenv (this creates a new virtual environment; you can freely install packages in there without polluting your system-wide Python libraries, and it will add (myenv) to your prompt). Run source myenv/bin/activate (this activates your environment, making sure your shell is pointing to the right place for Python). Run pip install _____ (replace the blank with whatever you want to install). Once you are done, type deactivate to reset your shell and environment to the default system Python.
2
13
0
I'm guessing my question is pretty basic, but after 15-20 minutes on Google and YouTube, I am still a little fuzzy. I am relatively new to both Linux and Python, so I am having some difficulty comprehending the file system tree (coming from Windows). From what I've found digging around the directories in Ubuntu (which is version 12.04, I believe, which I am running in VBox), I have ID'd the following two directories related to Python: /usr/local/lib/python2.7 which contains these two subdirectories: dist-packages site-packages both of which do not show anything when I type "ls" to get a list of the files therein, but show ". .." when I type "ls -a". /usr/lib/python2.7 which has no site-packages directory but does have a dist-packages directory that contains many files and subdirectories. So if I want to install a 3rd party Python module, like, say, Mechanize, in which one of the above directories (and which subdirectory), am I supposed to install it in? Furthermore, I am unclear on the steps to take even after I know where to install it; so far, I have the following planned: Download the tar.gz (or whatever kind of file the module comes in) from whatever site or server has it Direct the file to be unzipped in the appropriate subdirectory (one of the 2 listed above) Test to make sure it works via import mechanize in interactive mode. Lastly, if I want to replace step number 1 above with a terminal command (something like sudo apt-get), what command would that be, i.e., what command via the terminal would equate to clicking on a download link from a browser to download the desired file?
Installing 3rd party Python modules on an Ubuntu Linux machine?
1
0
0
63,583
11,894,210
2012-08-10T01:18:00.000
3
1
0
0
python,pyramid,production
11,898,284
1
true
1
0
Well, the big difference between python setup.py develop and python setup.py install is that install will install the package in your site-packages directory, while develop will install an egg-link that points to the directory for development. So yes, you can technically use both methods. But depending on how you built your project, installing into site-packages might be a bad idea. Why? File uploads, or anything else your app might generate, like dynamic files, etc. If your app doesn't use config files to find where to save its files, installing and then running your app may end up writing files into your site-packages directory. In other words, you have to make sure that all files and directories that may be generated can be located using config files. If all dynamic directories are pointed out in the configs, then installing is fine... All you'll have to do is create a folder with a production.ini file and run pserve production.ini. Code can be stored anywhere on your machine that way, and you can also use uWSGI or any other WSGI server you like. I think installing the code isn't a bad thing, and keeping data apart from the application is a good thing; it has some advantages for deployment, I guess.
1
3
0
So as I near the production phase of my web project, I've been wondering how exactly to deploy a pyramid app. In the docs, it says to use ../bin/python setup.py develop to put the app in development mode. Is there another mode that is designed for production. Or do I just use ../bin/python setup.py install.
Preparing a pyramid app for production
1.2
0
0
2,476
11,894,333
2012-08-10T01:37:00.000
1
1
1
0
python,memory,virtualenv,web-frameworks
12,218,779
3
false
1
0
It depends on how you're going to run the application in your environment. There are many different ways to run Python web apps; recently popular methods seem to be Gunicorn and uWSGI. So you'd be best off running the application as you would in your environment, and then simply using a process monitor to see how much memory and CPU are being used by the process running your application.
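As a sketch of the process-monitor approach, psutil (a third-party package, assumed to be installed via pip; the PID below is a placeholder) can report memory and CPU for the worker process serving the app:

    import psutil  # pip install psutil

    p = psutil.Process(12345)                      # PID of your app's worker process
    print(p.memory_info().rss / (1024.0 * 1024))   # resident memory in MiB
    print(p.cpu_percent(interval=1.0))             # CPU usage sampled over one second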
1
1
0
I'm creating an app in several different python web frameworks to see which has the better balance of being comfortable for me to program in and performance. Is there a way of reporting the memory usage of a particular app that is being run in virtualenv? If not, how can I find the average, maximum and minimum memory usage of my web framework apps?
Testing memory usage of python frameworks in Virtualenv
0.066568
0
0
1,578
11,895,298
2012-08-10T04:00:00.000
3
1
0
1
python,networking,file-transfer
11,895,345
1
true
0
0
I think your best bet is to use scp or rsync from within screen. That way you can detach the screen session and log out, and the transfer will keep going. See man screen.
1
1
0
I am working on a task to back up (copy) about 100 Gb of data (including a thousand files and sub folders in a directory) to another server. Normally, for the smaller scale, I can use scp or rsync instead. However, as the other server is not on the same LAN network, it could easily take hours, even days, to complete the task. I can't just leave my computer there with the terminal running. I don't think that's the best choice, and again, I have another good reason to use Python :) Is there any library, or best practice for me to start with? As, it's just for in-house project, we don't need any fancy features, just some fundamental things such as logging, error tolerance, etc.
How can we transfer large amounts of data over a network, using Python?
1.2
0
0
723
11,898,451
2012-08-10T09:03:00.000
0
0
1
0
python,loops,cron,infinite-loop
11,898,609
3
false
0
0
I guess one way to work around the issue is to have a script for one loop run that would: check that no other instance of the script is running, then look into the queue and process everything found there. You can then run this script from cron every minute between 8 a.m. and 8 p.m. The only downside is that new items may take some time to get processed.
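A minimal sketch of the "no other instance running" check, using an advisory file lock (Unix-only; the lock path is arbitrary):

    import fcntl
    import sys

    lock_file = open("/tmp/queue_worker.lock", "w")
    try:
        # Non-blocking exclusive lock: fails at once if another run holds it.
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        sys.exit(0)  # a previous cron invocation is still draining the queue

    # ... process the queue here; the lock is released when the process exits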
1
3
0
I'm working on a Python script that will constantly scrape data, but it will take quite a long time. Is there a safe way to stop a long running python script? The loop will run for more than 10 minutes and I need a way to stop it if I want, after it's already running. If I execute it from a cron job, then I'm assuming it'll just run until it's finished, so how do I stop it? Also, if I run it from a browser and just call the file. I'm assuming stopping the page from loading would halt it, correct? Here's the scenario: I have one python script that is gather info from pages and put it into a queue. Then I want to have another python script that is in an infinite loop that just checks for new items in the queue. Lets say I want the infinite loop to begin at 8am and end at 8pm. How do I accomplish this?
Stop python script in infinite loop
0
0
0
10,068
11,901,273
2012-08-10T12:03:00.000
4
0
0
0
python,boto,amazon-sqs
12,972,060
5
false
0
0
Another way could be to put an extra identifier at the end of the message in your SQS queue. This identifier can keep the count of the number of times the message has been read. Also, if you don't want your service to poll these messages again and again, you can create one more queue, say a "dead message queue", and transfer the messages which have crossed the threshold to that queue.
2
8
0
I am using boto library in Python to get Amazon SQS messages. In exceptional cases I don't delete messages from queue in order to give a couple of more changes to recover temporary failures. But I don't want to keep receiving failed messages constantly. What I would like to do is either delete messages after receiving more than 3 times or not get message if receive count is more than 3. What is the most elegant way of doing it?
How to get messages receive count in Amazon SQS using boto library in Python?
0.158649
0
1
21,434
11,901,273
2012-08-10T12:03:00.000
1
0
0
0
python,boto,amazon-sqs
11,960,623
5
false
0
0
It can be done in a few steps. Create the SQS connection: sqsconnrec = SQSConnection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY). Create the queue object: request_q = sqsconnrec.create_queue("queue_Name"). Load the queue messages: messages = request_q.get_messages(). Now you have the array of message objects, and to find the total number of messages fetched you just do len(messages). Should work like a charm.
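Note that len(messages) only counts the messages fetched by that one call. If what you actually need is each message's receive count, a sketch along these lines may work, assuming boto's SQS attribute support (the credentials, queue name and handler are placeholders):

    from boto.sqs.connection import SQSConnection

    conn = SQSConnection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
    q = conn.get_queue("queue_name")
    for m in q.get_messages(num_messages=10, attributes="ApproximateReceiveCount"):
        if int(m.attributes["ApproximateReceiveCount"]) > 3:
            q.delete_message(m)   # give up on messages that keep failing
        else:
            handle(m)             # hypothetical processing function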
2
8
0
I am using boto library in Python to get Amazon SQS messages. In exceptional cases I don't delete messages from queue in order to give a couple of more changes to recover temporary failures. But I don't want to keep receiving failed messages constantly. What I would like to do is either delete messages after receiving more than 3 times or not get message if receive count is more than 3. What is the most elegant way of doing it?
How to get messages receive count in Amazon SQS using boto library in Python?
0.039979
0
1
21,434
11,902,954
2012-08-10T13:41:00.000
0
0
0
1
python,google-app-engine,queue
11,903,487
2
false
1
0
There is a Task Queues link in your App Engine console where you can look at the pending tasks and statistics and see what's going on.
1
6
0
I am firing up a queue to complete some tasks in Python Appengine application. Is there a way to get the status of the queue? I would like to check whether it is still running or has incomplete tasks.
How can I check programmatically the status of my task queue in Google Appengine?
0
0
0
585
11,903,083
2012-08-10T13:50:00.000
-1
0
1
0
python,numpy,set-difference
11,903,766
3
false
0
0
I'm not sure what you are going for, but this will get you a boolean array of where the two arrays are not equal, and will be numpy fast:

    import numpy as np

    a = np.random.randn(5, 5)
    b = np.random.randn(5, 5)
    a[0, 0] = 10.0
    b[0, 0] = 10.0
    a[1, 1] = 5.0
    b[1, 1] = 5.0

    c = ~(a - b == 0)
    print c

    [[False  True  True  True  True]
     [ True False  True  True  True]
     [ True  True  True  True  True]
     [ True  True  True  True  True]
     [ True  True  True  True  True]]
1
18
1
I have two large 2-d arrays and I'd like to find their set difference taking their rows as elements. In Matlab, the code for this would be setdiff(A,B,'rows'). The arrays are large enough that the obvious looping methods I could think of take too long.
Find the set difference between two large arrays (matrices) in Python
-0.066568
0
0
13,283
11,903,310
2012-08-10T14:02:00.000
3
1
0
0
python,unit-testing,tdd
11,903,386
1
true
0
0
I'm a bit out of date on Python, but this is how I would approach it: grab the image during a manual test and compute a checksum (MD5, perhaps). The automated tests then compare the new output by computing its MD5 (in this example) against the one recorded from the manual test. Hope this helps.
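A small sketch of that idea with hashlib and unittest (the file name and the recorded digest are placeholders):

    import hashlib
    import unittest

    def file_md5(path):
        # Hash in chunks so large images never need to fit in memory at once.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    class TestImageOutput(unittest.TestCase):
        def test_output_matches_approved_image(self):
            # Digest recorded once from a manually eyeballed "known good" image.
            self.assertEqual(file_md5("out.png"),
                             "d41d8cd98f00b204e9800998ecf8427e")  # placeholder digest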
1
1
0
I have an array of pixels which I wish to save to an image file. Python appears to have a few libraries which can do this for me, so I'm going to use one of them, passing in my pixel array and using functions I didn't write to write the image headers and data to disk. How do I do unit testing for this situation? I can: Test that the pixel array I'm passing to the external library is what I expect it to be. Test that the external library functions I call give me the expected return values. Manually verify that the image looks like I'm expecting (by opening the image and eyeballing it). I can't: Test that the image file is correct. To do that I'd have to either generate an image to compare to (but how do I generate that 'trustworthy' image?), or write a unit-testable image-writing module (so I wouldn't need to bother with the external library at all). Is this enough to provide coverage for my code? Is testing the interface between my code and the external library sufficient, leaving me to trust that the output of the external library (the image file) is correct through manual eyeballing? How do you write unit tests to ensure that the external libraries you use do what you expect them to?
Unittest binary file output
1.2
0
0
641
11,903,674
2012-08-10T14:22:00.000
2
0
1
0
python,pyscripter
11,903,990
1
true
0
0
You can use reload(imported_module_name) in the interactive shell to reload the module before re-running your script. PyScripter does everything in a single Python instance, which makes debugging easier, but also, as you discovered, makes fixing imported files a bit trickier. You can also completely reinitialize the Remote Engine from the Run menu to get a fresh interpreter.
1
2
0
I am using portable python 1.1 with python 2.6.2. The PyScriptor is 1.9.9.6. I open all the files I am working on with PyScriptor. So, I run my main file and an error shows up with code in one of my imported files. I fix it and run the main file again, but the same error shows up. It is as if the imported file is still the old one but PyScriptor is correctly saving the files I edit. Restarting PyScriptor fixes it, but is a pain to do that for every bug. I tested this happens bu adding a print statement that showed up after restarting, and then removing it and still see the print statement.
PyScripter won't update my imports
1.2
0
0
881
11,906,505
2012-08-10T17:28:00.000
1
0
0
0
python,tkinter,py2app
11,906,589
1
true
0
1
It's probably continuing to work because it's using your frameworks to load Tkinter instead of the bundled ones. If you moved it to another computer without Tkinter, it might not launch or could just crash right away.
1
0
0
I'm making a few Python applications for MacOSX (10.6 in this case, though I don't imagine it matters) using Tkinter to code the interface, and py2app to create the bundle. As many of you know, these stand-alone apps tend to be a fairly large size, about 70-80 MB, mostly because I am using numpy. As expected, 23 MB of this is from numpy (which must remain uncompressed to function), but I found that 30 MB is from the Tcl framework at Contents/Frameworks/Tcl.framework, and 5 MB is from the Tk framework. For the hell of it I tried compressing both of these folders, which brought them down to 9 MB and 1 MB, respectively. Now the application is almost half its original size, and as far as I can tell everything is working perfectly. My question is to the Tkinter/application gurus out there: Is this bad? Is there any reason that I shouldn't be compressing these frameworks? Could this impact distribution in any way? And if not, why doesn't py2app do this natively? EDIT: I tried actually removing both the Tcl and the Tk frameworks from my application bundle, and everything still works fine. Why are these here if they aren't used with tkinter?
Saving space in tkinter application
1.2
0
0
151
11,907,027
2012-08-10T18:09:00.000
3
1
0
0
python,user-interface,raspberry-pi
11,907,139
3
false
0
1
You could do this easily in Python with just the standard library (Python 2.7.3). For the GUI you can use Tkinter or Pygame (not standard library), both of which support image and text placement (and full screen). Note that Tkinter is not thread-safe, which may be a problem if you're planning on threading this program. For the HTTP requests you can use httplib, and for the JSON-related things the json library.
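A minimal Tkinter sketch of the kind of full-screen status display described (Python 2 module name; the texts and the callback are placeholders):

    import Tkinter as tk  # "tkinter" on Python 3

    root = tk.Tk()
    root.attributes("-fullscreen", True)   # kiosk-style, no window decorations
    status = tk.Label(root, text="Scan your token", font=("Helvetica", 48))
    status.pack(expand=True)

    def on_token(name):
        # Called by your polling/JSON code whenever a token is read.
        status.config(text="Welcome, %s" % name)

    root.mainloop()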
1
3
0
I started a project where you can "log in" on a terminal (basically a Raspberry Pi with a touchscreen attached) with a wireless token (for time tracking). What will be the best and fastest solution to display the status (basically a background picture and 2-3 texts changing depending on the status of the token) on the screen (fullscreen)? I tried it web-based with chromium, which is -very- slow... It has to be easy to do http request and en-/decoding JSON - and please no C/C++. Maybe python + wxwidgets?
fast gui on raspberry
0.197375
0
0
3,021
11,907,410
2012-08-10T18:39:00.000
1
0
1
0
python,hashmap,interpreter,rhino,v8
11,908,073
3
false
0
0
CPython uses namespaces extensively for function/method dispatch, which means a hash type, AKA "dictionary". Pypy, Jython, IronPython, etcetera may have their own thoughts about how best to do this. Python != CPython.
2
3
0
When it comes to script interpreters, like Rhino, Google V8, Python, etc. - is there any general approach to determining the underlying native methods, given only a string of scripting language? At some point, do these interpreters use hash maps with strings for keys? Or is there a lot of string equality testing and branches?
How do scripting language interpreters reference their underlying functions?
0.066568
0
0
213
11,907,410
2012-08-10T18:39:00.000
1
0
1
0
python,hashmap,interpreter,rhino,v8
11,908,106
3
false
0
0
For Python, when the source code is processed by the interpreter, all definitions (of classes and their methods, of normal functions, etc.) are compiled. The result of compiling each part of the code is stored as an object that captures the code. The name is stored inside only for introspection purposes -- from the user's point of view, the objects are unnamed. However, the name (of the class, of the function) is stored as a key in an internal hash map (called a dictionary in Python). The value is a reference to the unnamed object. Any variable in Python is a name bound to an untyped reference (a key/value pair in a hash map). Whenever a name appears in Python, you are working with a reference variable. It is automatically dereferenced by searching the mentioned hash map (dictionary). The user even has access to this dictionary, so you can verify that it works this way. You can also easily give a function a different (e.g. shorter) name by simply assigning it to another variable -- assignment always means assigning the reference value.
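This is easy to observe from the interpreter; at module level the dictionary in question is reachable via globals():

    def greet():
        return "hello"

    # The function object itself is anonymous; "greet" is just a dict key.
    print(globals()["greet"]())  # hello

    hi = greet                   # bind a second name to the same object
    print(hi())                  # hello
    print(hi.__name__)           # still "greet" -- kept only for introspection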
2
3
0
When it comes to script interpreters, like Rhino, Google V8, Python, etc. - is there any general approach to determining the underlying native methods, given only a string of scripting language? At some point, do these interpreters use hash maps with strings for keys? Or is there a lot of string equality testing and branches?
How do scripting language interpreters reference their underlying functions?
0.066568
0
0
213
11,907,468
2012-08-10T18:43:00.000
0
0
0
0
python,wxpython,network-drive
11,908,070
1
true
0
0
It works fine for me in the wxPython demo on Windows 7, wxPython 2.8.12.1. What version of wxPython are you using? Which OS? There are two other directory controls that you can try too: GenericDirCtrl and MultiDirDialog. I would recommend creating a little demo app that we can play with too.
1
0
0
I am using a DirDialog in wxpython. It works fine for the local drive. But, it is not able to list the folders in the network drive. Is there any way to do it ? I want to select a folder from the network drive.
Directory listing of a network drive by DirDialog in wxpython
1.2
0
0
251
11,909,078
2012-08-10T20:52:00.000
5
1
1
0
c++,python,compilation,interpreter,execution
11,909,155
4
false
0
0
The problem with efficiency in high-level languages (or, at least, the dynamic ones) stems from the fact that it's usually not known WHAT operations need to be performed until the actual types of objects are resolved at runtime. As a consequence, these languages don't compile to straightforward machine code and have to do all the heavy lifting behind the covers.
2
4
0
It seems to me that languages that are quite simple to use (i.e. Python) often have slower execution times than languages that are deemed more complex to learn (i.e. C++ or Java). Why? I understand that part of the problem arises from the fact that Python is interpreted rather than compiled, but what prevents Python (or another high-level language) from being compiled efficiently? Is there any programming language that you feel does not have this trade off?
Why does there seem to be tension between the simplicity of a language and execution time?
0.244919
0
0
375
11,909,078
2012-08-10T20:52:00.000
5
1
1
0
c++,python,compilation,interpreter,execution
11,909,183
4
true
0
0
Let's compare C and Python. By most accounts C is more "complex" to program in than, say, Python. This is because Python automates a lot of work which C doesn't. For example, garbage collection is automated in Python, but is the programmer's responsibility in C. The price of this automation is that these "high level features" need to be generic enough to "fit" the needs of every program. For example, the Python garbage collector has a predefined schedule/garbage collection algorithm, which may not be optimal for every application. On the other hand, C gives the programmer complete flexibility to define the GC schedule and algorithm as she wants it to be. So there you have it: ease versus performance.
2
4
0
It seems to me that languages that are quite simple to use (i.e. Python) often have slower execution times than languages that are deemed more complex to learn (i.e. C++ or Java). Why? I understand that part of the problem arises from the fact that Python is interpreted rather than compiled, but what prevents Python (or another high-level language) from being compiled efficiently? Is there any programming language that you feel does not have this trade off?
Why does there seem to be tension between the simplicity of a language and execution time?
1.2
0
0
375
11,909,396
2012-08-10T21:18:00.000
0
0
1
0
python
11,910,009
2
false
0
0
pickle is the usual solution to such things, but if you see any value in being able to edit the saved data, and if the dictionary uses only simple types such as strings and numbers (nested dictionaries or lists are also OK), you can simply write the repr() of the dictionary to a text file, then parse it back into a Python dictionary using eval() (or, better yet, ast.literal_eval()).
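A small round-trip sketch of the repr()/literal_eval() option (the file name and dictionary contents are arbitrary):

    import ast

    table = {"foo": 1, "bar": [2.5, "baz"]}

    with open("cache.txt", "w") as f:
        f.write(repr(table))        # human-readable, editable snapshot

    with open("cache.txt") as f:
        restored = ast.literal_eval(f.read())  # safe: literals only, no code

    assert restored == table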
1
0
0
a couple of my python programs aim to format into a hash table (hence, I'm a dict() addict ;-) ) some informations in a "source" text file, and to use that table to modify a "target" file. My concern is that the "source" files I usually process can be very large (several GB) so it makes more than 10sec to parse, and I need to run that program a bunch of times. To conclude, I feel like it's a waste to reload the same large file each time I need to modify a new "target". My thought is, if it would be possible to write once the dict() made from the "source" file in a way that python would be able to read/process much faster (I think about a format close to the one used in RAM by python), it would be great. Is there a possibility to achieve that? Thank you.
How to make a python instanced object reusable?
0
0
0
176
11,909,800
2012-08-10T21:57:00.000
0
0
0
0
python,django,authentication,django-registration
11,909,991
2
true
1
0
To limit registrations to people already in the database, you will need some way to identify them. Require the club administrator to enter an email address for each member entered. Require the user to supply that address when registering. Send the registration link to that address, including the primary key of the user record in the link. When the user clicks the link, in your django view examine the link and make sure the key matches, then complete the registration.
2
1
0
I have a site catering to multiple clubs, and each club has administrators who maintain a database of club members. I want to limit site registration only to members who have explicitly been added to the club's database. How do I go about auto-generating and sending out registration links to members as they are added to the database? In other words, I want registration to be initiated only by club administrators.
Selective user registration using django_registration?
1.2
0
0
82
11,909,800
2012-08-10T21:57:00.000
0
0
0
0
python,django,authentication,django-registration
11,909,979
2
false
1
0
You said you have a database of club members already, so you must have a primary key or a field which is unique for each member in that database, like a club registration number, which club members should already know. Tell users to give their club registration number at the time of registration. Make that club registration number also a primary key in the new table you create for registered users; if somebody later tries to re-use that club registration number to re-register, it will fail, as a row will already be associated with that number. Also show a warning message at registration time that one club registration number can be used for only one registration on the site.
2
1
0
I have a site catering to multiple clubs, and each club has administrators who maintain a database of club members. I want to limit site registration only to members who have explicitly been added to the club's database. How do I go about auto-generating and sending out registration links to members as they are added to the database? In other words, I want registration to be initiated only by club administrators.
Selective user registration using django_registration?
0
0
0
82
11,910,295
2012-08-10T23:00:00.000
2
0
0
1
python,host,fabric
11,918,085
1
false
1
0
It's just Python, so do whatever you need to keep them separate. You can define the directory differences in a dictionary or in some YAML file that's read into the script. There isn't anything in Fabric that makes you do it one way, nor any specific mechanism provided for this. Essentially, just keep in mind that a fabfile is not a DSL but a full Python file, and you'll stumble onto what works best for you and your environment.
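One possible shape for that dictionary in a fabfile (Fabric 1.x API; the host, paths and deploy command are made up for illustration):

    from fabric.api import cd, env, run, task

    APP_DIRS = {
        "billing":  "/srv/apps/billing",
        "frontend": "/srv/apps/frontend",
    }

    env.hosts = ["deploy@the-single-host"]

    @task
    def deploy(name):
        # Usage: fab deploy:billing    or    fab deploy:billing deploy:frontend
        with cd(APP_DIRS[name]):
            run("git pull && touch restart.txt")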
1
3
0
I have many application servers running on the same host. Every application server is installed in a different directory. How should I tackle deployments on the servers, using Fabric? I would like to be able to perform deployments on each server separately, and on subsets of servers. Clearly the env.hosts parameter has no use here, since all servers are on the same host. Same goes for the env.roledefs parameter. These come in handy when every server is installed on a different host. How should I deal with grouping of the servers, and setting separate environment parameters for each one of them which the fab tool can read and apply.
How to deal with deployments to a single host with multiple application servers using Fabric?
0.379949
0
0
202
11,910,481
2012-08-10T23:24:00.000
0
0
1
0
python,python-3.x,machine-learning,scikit-learn
58,766,819
3
false
0
0
Old question - scikit-learn supports Python 3 now.
1
19
1
I was bummed out to see that scikit-learn does not support Python 3...Is there a comparable package anyone can recommend for Python 3?
Best Machine Learning package for Python 3x?
0
0
0
3,322
11,912,049
2012-08-11T05:01:00.000
1
0
1
1
python,opencv
11,912,085
1
false
0
0
When you configure your interpreter in Eclipse (either the first time, or by going to the preferences menu) you need to select an interpreter (don't use the auto-configuration). Eclipse will use that interpreter, and the libraries relative to it. If you install new libraries, just go back to Preferences > PyDev > Interpreter and click "Apply" on the screen where the interpreters are selected (you don't need to change anything, but new libraries will be scanned for). I recommend using MacPorts if possible, since you'll most likely find everything you need there and won't have to deal with any manual installation of modules.
1
1
0
QUESTION1:I have 3 versions of pythons installed in my mac. 1.Apple supplied one (2.7.1) (/usr/local/bin) 2.Macports installed one (2.7.3) (/opt/local/bin) 3.and one from python.org (2.7.3) (/Library/Frameworks/Python.framework/Versions/2.7/bin) I would like to add external modules like opencv,pygame.I have no idea where the installed binaries are going and when I try to import them I get this "no module found" error.How to make macports installed python and python.org installed python use opencv module or some other external modules. QUESTION2:How to add external libraries to pydev in eclipse
keeping libraries at correct place when multiple pythons are installed
0.197375
0
0
65
11,914,338
2012-08-11T11:34:00.000
1
0
1
0
python,performance,string.format
11,918,336
4
false
0
0
If the float conversion is still a bottleneck, you might try farming the formatting out to a multiprocessing.Pool, and use its map_async or imap methods to collect the resulting strings for printing. This will use all the cores on your machine to do the formatting, although it could be that the overhead of passing the data to and from the different processes masks the improvements from parallelizing the formatting.
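Before reaching for processes, it is worth measuring the cheaper alternatives first; a quick timeit comparison of str.format against the % operator, for instance (absolute numbers will vary by machine, and which wins can depend on the Python build):

    import timeit

    setup = "x, y, z = 1.0 / 3, 2.0 / 3, 4.0 / 3"
    fmt = r"'vn {0:.15f} {1:.15f} {2:.15f}\n'.format(x, y, z)"
    mod = r"'vn %.15f %.15f %.15f\n' % (x, y, z)"

    print(timeit.timeit(fmt, setup=setup, number=100000))  # str.format
    print(timeit.timeit(mod, setup=setup, number=100000))  # % operator, often a bit faster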
1
2
0
I'm writing a program that needs to do a lot of string formatting and I have noticed that .format() is taking a small but significant amount of cpu time. Here's how I'm using it: str = 'vn {0:.15f} {1:.15f} {2:.15f}\n'.format(normal_array[i].x, normal_array[i].y, normal_array[i].z) Does anyone know if there is even a slightly faster way to do this as a small fraction X 100000 can add up
faster way to do .format()
0.049958
0
0
263
11,914,614
2012-08-11T12:15:00.000
1
1
0
0
java,c++,python,qt,plugins
13,563,225
1
false
0
1
C++ mangles its symbols, and has special magic to define classes, which is sort of hacked on top of standard (C) object files. You don't want your files from other languages to understand that magic. So I would certainly follow your own suggestion and do everything in pure C. However, that doesn't mean you can't use C++: only the interface has to be C, not the implementation. Or, more strictly speaking, the object file that is produced must not use special features that other languages don't use. While it is possible for a plugin to link to your program and thus use functions from it, I personally find it more readable (and thus maintainable) to call a plugin function after loading it, passing an array of function pointers which can be used by the plugin. Every language has support for opening shared object (SO or DLL) files; use that. Your interface will consist of functions which have several arguments and return types, which probably have special needs in how they are passed in or retrieved. There probably are automated systems for this, but personally I would just write the interface file by hand. The most important thing is that you properly document the interface, so people can use any language they want, as long as they know how to load object files from their language. Different languages have very different ways of storing objects. I would recommend making the creator of the data also the owner of the memory. So if your program has a class with a constructor (which is wrapped in C functions for the plugin interface), the class is the one creating the data, and your program, not the plugin, should own it. This means that the plugin will need to notify your program when it's done with the data, at which point your program can destroy it (unless it is still needed, of course). In languages which support it, such as Python and C++, this can be done automatically when their interface object is destroyed. (I'm assuming here that the plugin will create an object for the purpose of communicating with the actual object; this object behaves like the real object, but in the target language instead of C.) Keep any libraries (such as Qt) out of the interface. You can allow functions like "put resource #x at this position on the screen", but not "put this Qt object at this position on the screen". The reason is that when you require the plugin to pass Qt objects around, plugins will need to understand Qt, which makes them a lot harder to write. If plugins are completely trusted, you can allow them to pass (opaque) pointers to those objects, but for the interface that isn't any different from using other number types. Just don't require them to do things with the objects, other than calling functions in your program.
1
3
0
I am writing an application in Qt that I want to extend with plugins. My application also has a library that the plugins will use. So, I need a 2 way communication. Basically, the plugins can call the library, and my application which loads the plugins will call them. Right now, I have my library written in C++, so it has some classes. The plugins can include the header files, link to it and use it. I also have a header file with my interface, which is abstract base class that the plugins must have implemented. They should also export a function that will return a pointer to that class, and uses C linkage. Up to this point I believe that everything is clear, a standard plugin interface. However, there are 3 main problems, or subtasks: How to use the library from other languages? I tried this with Python only. I used SIP to generate a Python component that I successfully imported in a test.py file, and called functions from a class in the library. I haven't tried with any other language. How to generate the appropriate declaration, or stub, for my abstract class in other languages? Since the plugins must implement this class, I should be able to somehow generate an equivalent to a header in the other languages, like .py files for Python, .class files for Java, etc. I didn't try this yet, but I suppose there are generators for other languages. How am I going to make instances of the objects in the plugins? If I got to this point the class would be implemented in the plugins. Now I will need to call the function that returns the instance of the implemented abstract class, and get a pointer to it. Based on my research, in order to make this work I will have to get a handle to the Python interpreter, JVM, etc., and then communicate with the plugin from there. It doesn't look too complex, but when I started my research even for the simplest case it took a good amount of work. And I successfully got only to the 1st point, and only in Python. That made me wonder if I am taking the right approach? What are your thoughts on this.. maybe I should not have used Qt in my library and the abstract base class, but only pure C++. It could probably make the things a bit easier. Or maybe I should have used only C in my library, and make the plugins return a C struct instead of a class. That I believe would make the things much easier, since calling the library would be a trivial thing. And I believe the implementation of a C struct would be much easier that implementing C++ class, and even easier that implementing a C++ class that uses Qt objects. Please point me to the right direction, and share your expertise on this. Also, if you know of any book on the subject, I'd be more than happy to purchase it. Or some links that deal with this would do.
How to write Qt plugin system with bindings in other languages?
0.197375
0
0
308
11,917,869
2012-08-11T21:37:00.000
2
1
0
1
python,google-app-engine,cron
11,918,536
1
false
1
0
There is no "launch" the app in production as such. You deploy the app for the first time and crontab is now present and crontab scheduling is started. So I assume you really mean you would like to run the cron job every time you deploy a new version of your application in addition to the cron schedule. The cron handler is callable by you, so why not just wrap appcfg in a script that calls the cron handler after you do the deploy. Use wget/curl etc.....
1
0
0
I am successfully running a cron job every hour on Google Appengine. However I would like it to start when I launch the app. Now it does the first cron job 1 hour after the start. I am using Python.
Cron job on Appengine - first time on start?
0.379949
0
0
211
11,919,437
2012-08-12T03:11:00.000
0
1
0
1
aptana,ubuntu-10.04,pythonpath
12,043,683
1
true
0
0
I just changed the workspace. After doing so, Aptana asked for some interpreter paths; I provided them and everything works fine now.
1
0
0
I was working on a project using windows in Aptana. I changed my OS and installed ubuntu on unpartitioned space. I again downloaded Aptana for ubuntu and run it. I specified same workspace that I was using during windows as my that project partition is still there. The problem I am having is that I am unable to use Aptana intelligence so should I change some paths e.t.c. or is there a way to remove data from workspace(data that tells info to aptana) and recreate project so that it take new info. I tried to see that data but didn't see data that aptana use from workspace or project directory. Please tell what should be done in this sitaution. thanks in advance guys.
Aptana getting path of Windows python interpreter instead of linux
1.2
0
0
124
11,919,615
2012-08-12T03:59:00.000
0
0
1
1
python,ide,path,spyder
53,098,634
4
false
0
0
Execute the following command: %cd "P:\Python" (note the space after %cd; substitute your own directory for P:\Python).
3
25
0
I'm using Debian. I installed Python 3.2.3. The path of Python 3 is /usr/bin/python3. How do I change it in Spyder?
How to change the path of Python in Spyder?
0
0
0
92,641
11,919,615
2012-08-12T03:59:00.000
36
0
1
1
python,ide,path,spyder
12,355,200
4
true
0
0
Press CTRL+SHIFT+ALT+P to open the Preferences window. Within this window, select the Console item on the left, then the Advanced Settings tab. The path to the Python executable will be right there.
3
25
0
I'm using Debian. I installed Python 3.2.3. The path of Python 3 is /usr/bin/python3. How do I change it in Spyder?
How to change the path of Python in Spyder?
1.2
0
0
92,641
11,919,615
2012-08-12T03:59:00.000
0
0
1
1
python,ide,path,spyder
49,143,964
4
false
0
0
Simple: if you're not able to change the working directory, press CTRL+SHIFT+ALT+P to open the Preferences window, go to Run, look at the working directory options, and finally select the option "the current working directory".
3
25
0
I'm using Debian. I installed Python 3.2.3. The path of Python 3 is /usr/bin/python3. How do I change it in Spyder?
How to change the path of Python in Spyder?
0
0
0
92,641
11,920,330
2012-08-12T07:00:00.000
1
1
0
0
python,email,anonymous,smtplib
11,920,368
1
true
0
0
You don't have to have an account (i.e. authenticate to your SMTP server) if your company's server is configured to accept mail from certain trusted networks. Typically SMTP servers consider the internal network trusted and may accept mail from it without authentication.
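In that setup the smtplib code simply skips login(); a minimal sketch (the server name and addresses are placeholders):

    import smtplib
    from email.mime.text import MIMEText

    msg = MIMEText("test body")
    msg["Subject"] = "test"
    msg["From"] = "[email protected]"
    msg["To"] = "[email protected]"

    # No login() call: this relies on the server relaying for trusted internal hosts.
    s = smtplib.SMTP("mail.example.com")
    s.sendmail(msg["From"], [msg["To"]], msg.as_string())
    s.quit()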
1
0
0
First of all, this question has no malicious purposes. I had asked the same question yesterday in stackoverflow but it was removed. I would like to learn if I have to log into an account when sending emails with attachments using python smtplib module. The reason I don't want to log in to an account is that because there is no account that I can use in my company. Or I can ask my company's IT department to set up an account, but until that I want to write the program code and test it. Please don't remove this question. Best Regards
Do I have to log into an email account when sending emails using python smtplib?
1.2
0
1
95
11,925,782
2012-08-12T20:48:00.000
1
0
0
0
python,opencv
11,926,117
3
false
0
0
Please go through the book Learning OpenCV: Computer Vision with the OpenCV Library It has theory as well as example codes.
1
0
1
I am looking for some methods for detecting movement. I've tried two of them. One method is to have background frame that has been set on program start and other frames are compared (threshold) to that background frame. Other method is to compare current frame (let's call that frame A) with frame-1 (frame before A). None of these methods are great. I want to know other methods that work better.
What are some good methods for detecting movement using a camera? (opencv)
0.066568
0
0
637
11,926,023
2012-08-12T21:28:00.000
0
0
1
0
python
11,926,214
3
false
0
0
The TSV file has already lost all the type information. If the pickle module had been used to write out the file, you would have been able to easily unpickle it; however, it looks like you just get to read the damaged file, so pickle is of no use to you here. The best you can do is attempt to convert each field to int and handle the exception if it fails.
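A sketch of that per-field check for a TSV file (it reports the offending rows rather than raising):

    def bad_lines(path):
        """Yield (line_number, line) for rows with any non-integer field."""
        with open(path) as f:
            for n, line in enumerate(f, 1):
                for field in line.rstrip("\n").split("\t"):
                    try:
                        int(field)
                    except ValueError:
                        yield n, line
                        break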
1
0
0
I have a TSV file that consists of integers along with some false data that could be anything such as floats or characters etc. The idea is to read the contents of the file and find out which ones are bad (containing data other than integers) Each line can be read using the readline method once the file has been opened for reading. Off course, the readline() method returns each line read as a string and not it's constituent data types. My understanding is, that I could use the pickle module somehow to ensure that i retain the original data type by representing it as it's serialized version carrying out dump and load methods. The question is, how do I do this? By reading each line and pickling it, would not help since readline by default reads it as a string. Thereby upon pickling, it's really just pickling a string into a serialized python object representation and unpickling would only return it as a string. Thus the actual data in the line, such as integers or chars are being represented as strings irrespective. So I assume the question is, how do I pickle things the right way OR how do I process each line of a file ensuring that it's data types are being maintained?
Read a file of mixed items and retain their data types
0
0
0
374
11,927,409
2012-08-13T02:04:00.000
1
1
0
0
python,smtp,rabbitmq,postfix-mta,amqp
11,927,486
3
false
0
0
Making an AMQP connection over plain TCP is pretty quick. Perhaps if you're using SSL then it's another story, but are you sure that enqueueing the raw message onto the AMQP exchange is going to be the bottleneck? My guess would be that actually delivering the message via SMTP is going to be much slower, so how fast you can queue things up isn't going to affect the throughput of the system. If this piece does turn out to be a bottleneck, I rather like creating little web servers using Sinatra or Rack, but it sounds like you might prefer a Python-based solution. Have the Postfix content filter perform an HTTP POST using curl to a web server, which maintains a persistent connection to the AMQP server. Of course, now you have an extra moving part for which you need to think about monitoring, error handling and security.
1
4
0
I'm looking for a way to take gads of inbound SMTP messages and drop them onto an AMQP broker for further routing and processing. The messages won't actually end up in a mailbox, but instead SMTP is used as a message gateway. I've written a Postfix After-Queue Content Filter in Python that drops the inbound SMTP message onto a RabbitMQ broker. That works well - I get the raw message over a queue and it gets picked up nicely by a consumer. The issue is that the AMQP connection is created and torn down with each message... the Content Filter script gets re-executed from scratch each time. I imagine that will end up being a performance issue. If I could leverage something re-entrant I could reuse the connection. Or maybe I'm just approaching the whole thing incorrectly...
Sending raw SMTP messages to an AMQP broker
0.066568
0
1
1,843
11,927,604
2012-08-13T02:40:00.000
3
0
1
0
javascript,python,google-apps-script
11,927,684
1
true
1
0
From what I can tell, the JavaScript-like language is the only one offered for Google Apps Script. You seem to have confused it with Google App Engine, which is a platform-as-a-service that you can use to write your own applications, and offers Java, Python, and Go runtime environments. It is not a scripting language for Google Apps products such as Docs spreadsheets; that's what Apps Script is for.
1
2
0
I've inherited a fairly complex Googledoc spreadsheet with some scripted functionality implemented in the Google App Engine. The original coder used the JavaScript environment. Personally, I'm more comfortable with Python and I'm running into all kinds of weird errors on the JavaScript environment. I'd like to just scrap what we have and rewrite the same scripts in Python, an exercise in translation, if you will...I'm wondering if there's a way to do that keeping the original spreadsheet so I don't have to recreate all the existing spreadsheet structure (several tabs, each with built-in conditional formatting, filters, etc. not to mention a length and complex submission form). So, in short, I'd like to switch from JavaScript to Python in GAE -- is it possible? If so, how? If not, is there a way to copy the whole spreadsheet but start fresh with a blank Python script? Thanks in advance.
Migrate from JavaScript to Python on Google Apps
1.2
0
0
157
11,927,848
2012-08-13T03:27:00.000
5
0
0
0
python,sockets
11,927,873
1
false
0
0
EWOULDBLOCK means that the socket send buffer is full when sending, or that the socket receive buffer is empty when receiving. You are supposed to use select() to detect when these conditions become false.
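A minimal sketch of that pattern on the sending side (the endpoint and payload size are placeholders):

    import select
    import socket

    sock = socket.create_connection(("localhost", 9000))
    sock.setblocking(False)

    payload = b"x" * (10 * 1024 * 1024)
    sent = 0
    while sent < len(payload):
        # Sleep until the kernel's send buffer has room again.
        select.select([], [sock], [])
        try:
            sent += sock.send(payload[sent:])
        except socket.error:   # EWOULDBLOCK can still race in; just retry
            continue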
1
3
0
I use socket in non-blocking mode, Client send data continuously to Server, although I set buffer for socket is big enough to save all data from client but Ewouldblock always threw, I don't know why, could you explain to me in detail about this Ewouldblock.
EWOULDBLOCK Error in socket programming
0.761594
0
1
12,052
11,929,073
2012-08-13T06:20:00.000
1
0
0
0
python,openerp,accounting
11,929,906
2
true
1
0
You can override "pay_and_reconcile" function to write in account field, this function is called at time of Pay. action_date_assign() action_move_create() action_number() this 3 function are called at time of validating invoice. You can override any one from this or you can add your own function . in workflow for the "open" activity.
1
1
0
I'm new to OpenERP and python and I need some help saving an amount into a particular account. I have created a field in the invoice form that calculates a specific amount based on some code and displays that amount in the field. What I want to do is to associate an account with this field, so when the invoice is validated and/or payed, this amount is saved into an account and later on I can see it in the journal entries and/or chart of account. Any idea how to do that ?
OpenERP : Saving field value (amount) into an account
1.2
0
0
226
11,931,804
2012-08-13T09:48:00.000
-1
0
1
0
python,coding-style
13,382,603
3
true
0
0
There is no better way to do it; what you have should work fine.
1
0
0
Is there better way of getting the single value in the dictionary, if my dictionary only has one element? I'm currently doing if len(a) == 1: print a.values()[0], is this the pythonic way to do it?
Get only value in dictionary if len == 1
1.2
0
0
1,170
11,939,286
2012-08-13T17:28:00.000
1
0
1
0
python,utf-8,python-3.x,yaml,urldecode
11,940,331
4
false
0
0
urllib.parse.unquote returned a correct UTF-8 string, and writing that straight to the file gave the expected result. The problem was with yaml: by default it doesn't encode with UTF-8. My solution was to do: yaml.dump("pe%20to%C5%A3i%20mai", encoding="utf-8").decode("unicode-escape") Thanks to J.F. Sebastian and Mark Byers for asking me the right questions that helped me figure out the problem!
1
0
0
I have a string like "pe%20to%C5%A3i%20mai". When I apply urllib.parse.unquote to it, I get "pe to\u0163i mai". If I try to write this to a file, I get those exact simbols, not the expected glyph. How can I transform the string to utf-8 so in the file I have the proper glyph instead? Edit: I'm using Python 3.2 Edit2: So I figured out that the urllib.parse.unquote was working correctly, and my problem actually is that I'm serializing to YAML with yaml.dump and that seems to screw things up. Why?
Decoding UTF-8 URL in Python
0.049958
0
1
4,404
11,942,094
2012-08-13T20:49:00.000
1
0
0
1
python,django,google-app-engine,amazon-ec2,urllib
11,960,819
3
false
1
0
For server-to-server communication, traditional security advice would recommend some sort of IP range restriction at the web server level for the URLs, in addition to whatever default security is in place. However, since you are making the call from one cloud provider to another, your ability to permanently control the IP address of either the client or the server may be diminished. That said, I would recommend using a standard username/password authentication mechanism and HTTPS for transport security. Basic auth over HTTPS would be my recommendation (https://username:[email protected]/). In addition, I would make sure to enforce a lockout after a certain number of failed attempts in a specific time window; this discourages attempts to brute-force the password. Depending on what web framework you are using on App Engine, there is probably already support for some or all of what I just mentioned. If you update this question with more specifics on your architecture, or open a new question with more information, we could give you a more accurate recommendation.
2
3
0
I have a website which uses Amazon EC2 with Django and Google App Engine for its powerful Image API and image serving infrastructure. When a user uploads an image the browser makes an AJAX request to my EC2 server for the Blobstore upload url. I'm fetching this through my Django server so I can check whether the user is authenticated or not and then the server needs to get the url from the App Engine server. After the upload is complete and processed in App Engine I need to send the upload info back to the django server so I can build the required model instances. How can I accomplish this? I was thinking to use urllib but how can I secure this to make sure the urls will only get accessed by my servers only and not by a web user? Maybe some sort of secret key?
Secured communication between two web servers (Amazon EC2 with Django and Google App Engine)
0.066568
0
0
705
11,942,094
2012-08-13T20:49:00.000
2
0
0
1
python,django,google-app-engine,amazon-ec2,urllib
11,960,193
3
true
1
0
Apart from the HTTPS call (which you should be making to transfer the info to Django), you can go with AES encryption (using PyCrypto or any other library). It takes a secret key to encrypt your message.
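A bare-bones PyCrypto sketch of that idea (the key and message are placeholders; key management and message authentication are left out):

    import os
    from Crypto.Cipher import AES  # PyCrypto: pip install pycrypto

    key = b"0123456789abcdef"      # 16-byte shared secret known to both servers
    iv = os.urandom(16)

    cipher = AES.new(key, AES.MODE_CFB, iv)
    token = iv + cipher.encrypt(b"blob=abc123;user=42")

    # Receiving side: split off the IV, then decrypt with the same shared key.
    plain = AES.new(key, AES.MODE_CFB, token[:16]).decrypt(token[16:])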
2
3
0
I have a website which uses Amazon EC2 with Django and Google App Engine for its powerful Image API and image serving infrastructure. When a user uploads an image the browser makes an AJAX request to my EC2 server for the Blobstore upload url. I'm fetching this through my Django server so I can check whether the user is authenticated or not and then the server needs to get the url from the App Engine server. After the upload is complete and processed in App Engine I need to send the upload info back to the django server so I can build the required model instances. How can I accomplish this? I was thinking to use urllib but how can I secure this to make sure the urls will only get accessed by my servers only and not by a web user? Maybe some sort of secret key?
Secured communication between two web servers (Amazon EC2 with Django and Google App Engine)
1.2
0
0
705
11,944,796
2012-08-14T02:02:00.000
1
0
0
0
python,opencv,cluster-analysis,k-means
12,489,308
1
true
0
0
Since k-means is a randomized approach, you will probably encounter this problem even when analyzing the same frame multiple times. Try using the previous frame's cluster centers as the initial centers for k-means. This may make the ordering stable enough for you, and it may even significantly speed up k-means (assuming that the green spots don't move too fast). Alternatively, just try reordering the means so that they are closest to the previous frame's means.
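The reordering variant is only a few lines of numpy (a greedy nearest-center match, assuming the centers move little between frames):

    import numpy as np

    def match_to_previous(prev, new):
        """Permute `new` so that row i is the center nearest prev[i]."""
        order = []
        for c in prev:
            d = ((new - c) ** 2).sum(axis=1)  # squared distances to one old center
            d[order] = np.inf                 # each new center may match only once
            order.append(int(np.argmin(d)))
        return new[order]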
1
2
1
So I have a video with 3 green spots on it. These spots have a bunch of "good features to track" around their perimeter. The spots are very far away from each other so using KMeans I am easily able to identify them as separate clusters. The problem comes in that the ordering of the clusters changes from frame to frame. In one frame a particular cluster is the first in the output list. In the next cluster it is the second in the output list. It is making for a difficult time measuring angles. Has anyone come across this or can think of a fix other than writing extra code to compare each list to the list of the previous frame?
cv.KMeans2 clustering indices inconsistent
1.2
0
0
232
11,945,183
2012-08-14T02:56:00.000
0
0
1
0
python,pyqt,pyqt4
66,418,264
4
false
0
1
Emitting signals instead of synchronous UI control was the key to avoiding problems for me while implementing a logic-circuit simulator.
1
39
0
I love both python and Qt, but it's pretty obvious to me that Qt was not designed with python in mind. There are numerous ways to crash a PyQt / PySide application, many of which are extraordinarily difficult to debug, even with the proper tools. I would like to know: what are good practices for avoiding crashes and lockups when using PyQt and PySide? These can be anything from general programming tips and support modules down to highly specific workarounds and bugs to avoid.
What are good practices for avoiding crashes / hangs in PyQt?
0
0
0
14,012