Columns (name: dtype, observed range or string length):
Available Count: int64, 1 to 31
AnswerCount: int64, 1 to 35
GUI and Desktop Applications: int64, 0 to 1
Users Score: int64, -17 to 588
Q_Score: int64, 0 to 6.79k
Python Basics and Environment: int64, 0 to 1
Score: float64, -1 to 1.2
Networking and APIs: int64, 0 to 1
Question: string, lengths 15 to 7.24k
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 76
CreationDate: string, lengths 23 to 23
System Administration and DevOps: int64, 0 to 1
Q_Id: int64, 469 to 38.2M
Answer: string, lengths 15 to 7k
Data Science and Machine Learning: int64, 0 to 1
ViewCount: int64, 13 to 1.88M
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
Other: int64, 1 to 1
Title: string, lengths 15 to 142
A_Id: int64, 518 to 72.2M
1
4
0
1
2
0
0.049958
0
How to analyse frequency of wave file in a simple way? Without extra modules.
0
python,wave
2012-11-14T10:38:00.000
0
13,377,197
If your wave file consists of only one note, you can get the fundamental frequency (not the harmonics) simply by detecting the periodicity of the wave. Do this by looking for 0-crossings.
0
2,086
false
0
1
How to analyse frequency of wave file
13,378,468
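The answer in the record above describes estimating the fundamental frequency of a single-note recording by measuring its periodicity through zero-crossings. Below is a minimal sketch of that idea using only the standard library (wave and struct, no extra modules); it assumes a mono 16-bit file, and the file name is a placeholder.

    import wave, struct

    def estimate_frequency(path):
        # Read all samples from a mono, 16-bit WAV file (assumption).
        w = wave.open(path, "rb")
        rate = w.getframerate()
        n = w.getnframes()
        data = w.readframes(n)
        w.close()
        samples = struct.unpack("<%dh" % n, data)

        # Count sign changes (zero-crossings); each full period contains two.
        crossings = sum(
            1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
        )
        duration = n / float(rate)
        return crossings / (2.0 * duration)   # rough fundamental in Hz

    print(estimate_frequency("note.wav"))     # hypothetical input file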
2
2
0
2
2
0
0.197375
1
I have the following line of code using imaplib M = imaplib.IMAP4('smtp.gmail.com', 587) I get the following error from imaplib: abort: unexpected response: '220 mx.google.com ESMTP o13sm12303588vde.21' However from reading elsewhere, it seems that that response is the correct response demonstrating that the connection was made to the server successfully at that port. Why is imaplib giving this error?
0
python,email,response,imaplib
2012-11-14T19:33:00.000
0
13,385,981
You are connecting to the wrong port. 587 is authenticated SMTP, not IMAP; the IMAP designated port number is 143 (or 993 for IMAPS).
0
2,034
false
0
1
python imaplib unexpected response 220
13,387,035
2
2
0
2
2
0
1.2
1
I have the following line of code using imaplib M = imaplib.IMAP4('smtp.gmail.com', 587) I get the following error from imaplib: abort: unexpected response: '220 mx.google.com ESMTP o13sm12303588vde.21' However from reading elsewhere, it seems that that response is the correct response demonstrating that the connection was made to the server successfully at that port. Why is imaplib giving this error?
0
python,email,response,imaplib
2012-11-14T19:33:00.000
0
13,385,981
I realized I needed to use IMAP4_SSL() - it has to be SSL for IMAP - and that I need Gmail's IMAP server, which is imap.googlemail.com. I ultimately got it to work without specifying a port. So the final code is: M = imaplib.IMAP4_SSL('imap.googlemail.com')
0
2,034
true
0
1
python imaplib unexpected response 220
13,399,991
1
4
0
5
9
1
0.244919
0
I need to have an import in the __init__() method (because I need to run that import only when I instantiate the class). But I cannot see that import outside __init__(). Is the scope limited to __init__? What should I do?
0
python,python-import
2012-11-15T10:01:00.000
0
13,395,116
You can just import it again in other places that need it -- it will be cached after the first time, so this is relatively inexpensive. Alternatively you could modify the current global namespace with something like globals()['name'] = local_imported_module_name. EDIT: For the record, although using the globals() function will certainly work, I think a "cleaner" solution would be to declare the module's name global and then import it, as several other answers have mentioned. (A short sketch follows this record.)
0
7,587
false
0
1
Python import in __init__()
13,395,225
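A minimal sketch of the approach in the answer above: an import inside __init__ only binds a local name, so either declare the name global before importing or publish it through globals(). The class and module names here are only illustrative.

    class LazyUser(object):
        def __init__(self):
            # Option 1: declare the name global so the import is visible module-wide.
            global json
            import json
            # Option 2 (same effect): import locally, then publish it.
            # import json as _json
            # globals()["json"] = _json

        def dump(self, obj):
            # Works because __init__ bound the module at module level.
            return json.dumps(obj)

    u = LazyUser()
    print(u.dump({"a": 1}))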
1
1
0
1
0
0
1.2
0
I have a Python web application in which one function can take up to 30 seconds to complete. I have been kicking off the process with a cURL request (inc. parameters) from PHP, but I don't want the user staring at a blank screen the whole time the Python function is working. Is there a way to have it process the data 'in the background', e.g. close the http socket and allow the user to do other things while it continues to process the data? Thank you.
0
python
2012-11-15T18:22:00.000
0
13,403,741
You should use an asynchronous approach to transfer data from the PHP script - or directly from the Python script - to an already rendered HTML page on the user's side. Check a javascript framework for whichever way is easiest for you to do that (for example, jquery). Then return the HTML page minus results to the user, with javascript code that shows a "calculating" animation and fetches the results, in XML or JSON, from the proper URL when they are done. (A server-side sketch follows this record.)
0
338
true
1
1
Python long running process
13,404,694
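A rough, framework-free sketch of the server-side half of the answer above: hand the slow work to a background thread, return a job id immediately, and let the page's JavaScript poll a status endpoint. In a real deployment you would use your web framework's task queue or at least a persistent store; everything below (names, the stand-in computation) is illustrative.

    import threading, time, uuid

    jobs = {}  # job_id -> result (None while still running)

    def slow_function(params):
        # Stand-in for the real 30-second computation.
        time.sleep(2)
        return {"echo": params}

    def start_job(params):
        job_id = str(uuid.uuid4())
        jobs[job_id] = None

        def worker():
            jobs[job_id] = slow_function(params)

        threading.Thread(target=worker, daemon=True).start()
        return job_id   # hand this back to the browser right away

    def poll_job(job_id):
        # The page's JavaScript calls this endpoint until "done" is true.
        result = jobs.get(job_id)
        return {"done": result is not None, "result": result}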
1
1
0
2
6
0
1.2
0
Is there any tool to convert LLVM IR code to Python code? I know it is possible to convert it to Javascript (https://github.com/kripken/emscripten/wiki) and to Java (http://da.vidr.cc/projects/lljvm/), and I would love to convert it to Python also. Additionally, if such a tool does not exist, could you provide any information on what the best tool to build on would be (maybe I should extend emscripten with another language - Javascript and Python are similar to each other in some respects ;) )
0
python,compiler-construction,code-generation,llvm,converter
2012-11-16T11:26:00.000
0
13,415,660
LLVM up to 3.0 provided a C backend (see lib/Target/CBackend) which should be a good starting point for implementing a simple Python code generator.
0
1,835
true
1
1
LLVM IR to Python Compiler
13,416,469
2
5
0
3
92
0
0.119427
0
I aim to start OpenCV little by little, but first I need to decide which API of OpenCV is more useful. I predict that the Python implementation is shorter, but that its running time will be slower compared to the native C++ implementation. Does anyone know, or can anyone comment on, the performance and coding differences between these two approaches?
0
c++,python,performance,opencv
2012-11-17T17:14:00.000
0
13,432,800
Why choose? If you know both Python and C++, use Python for research using Jupyter Notebooks and then use C++ for implementation. The Python stack of Jupyter, OpenCV (cv2) and Numpy provide for fast prototyping. Porting the code to C++ is usually quite straight-forward.
1
76,398
false
0
1
Does performance differ between Python or C++ coding of OpenCV?
66,955,473
2
5
0
6
92
0
1
0
I aim to start OpenCV little by little, but first I need to decide which API of OpenCV is more useful. I predict that the Python implementation is shorter, but that its running time will be slower compared to the native C++ implementation. Does anyone know, or can anyone comment on, the performance and coding differences between these two approaches?
0
c++,python,performance,opencv
2012-11-17T17:14:00.000
0
13,432,800
You're right, Python is almost always significantly slower than C++ as it requires an interpreter, which C++ does not. However, that does require C++ to be strongly-typed, which leaves a much smaller margin for error. Some people prefer to be made to code strictly, whereas others enjoy Python's inherent leniency. If you want a full discourse on Python coding styles vs. C++ coding styles, this is not the best place, try finding an article. EDIT: Because Python is an interpreted language, while C++ is compiled down to machine code, generally speaking, you can obtain performance advantages using C++. However, with regard to using OpenCV, the core OpenCV libraries are already compiled down to machine code, so the Python wrapper around the OpenCV library is executing compiled code. In other words, when it comes to executing computationally expensive OpenCV algorithms from Python, you're not going to see much of a performance hit since they've already been compiled for the specific architecture you're working with.
1
76,398
false
0
1
Does performance differ between Python or C++ coding of OpenCV?
13,432,830
2
4
0
0
2
0
0
0
I am working on a repository management system for my university that will provide a gui for modifying permissions to individual folders in a subversion repository, and make it easy for professors to add directories for students and TA's, with the appropriate permissions. In order to make this work, I need to be able to retrieve the directory structure of an existing svn repository, and present it on the web. I have looked at several methods, and was wondering if anyone had other ideas, or suggestions. Some things I have looked at: Every hour, run a script that runs 'svn ls -R --xml' on all of the repositories and populates a mysql database Positive: Fast page loads afterwards Doesn't take a lot of disk space Easy to manage permission, i.e. the website doesn't need to touch svn directly at all Negative: Really slow on some of our more complicated repositories No 'live' updates Has to run whether there are changes or not On page load, run 'svn ls -R --xml' and retrieve only the directory I need to render the current page Positive: updates live no cron job to tie up the server Negative: website is slow as molasses webserver uses a lot more resources Directly read svn database Positive: Fast page loads live updates Negative: Difficult? I am very curious what alternatives there are that I have not seen or thought of, because I feel like any of these would be quite awful and inelegant in one way or another. Also I don't want to reinvent the wheel if it can be avoided. Thanks!
0
python,svn
2012-11-18T20:41:00.000
0
13,444,341
You should use direct repository access via file://; this is comparable to a slow HD (if you have a fast CPU and HD). Use the svn bindings for your scripting language rather than relying on XML parsing, which is much slower. Do not read the whole tree out; instead maintain a navigational hierarchy and read directories on demand. If you read the whole hierarchy, you usually end up with hundreds or thousands of directories in the deeper levels that are not interesting for your application, so you can omit them and display them on demand (if the user browses that deep). If you are doing svn access modifications, use the entries in your access file to learn your important directories beforehand. (A small listing sketch follows this record.)
0
292
false
0
1
Quickly retrieving svn directory trees
13,445,099
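A small sketch of the "read directories on demand" idea from the answer above. For brevity it shells out to the svn client against a file:// URL rather than using the language bindings the answer recommends, and it lists only one level per request as the user browses; the repository URL is a placeholder.

    import subprocess

    def list_dir(repo_url, path=""):
        # One shallow listing per request, e.g. repo_url = "file:///srv/svn/myrepo"
        out = subprocess.check_output(["svn", "ls", repo_url.rstrip("/") + "/" + path])
        entries = out.decode().splitlines()
        # In `svn ls` output, directories end with '/', files do not.
        return [(name.rstrip("/"), name.endswith("/")) for name in entries]

    for name, is_dir in list_dir("file:///srv/svn/myrepo", "trunk"):
        print(name, "dir" if is_dir else "file")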
2
4
0
0
2
0
0
0
I am working on a repository management system for my university that will provide a gui for modifying permissions to individual folders in a subversion repository, and make it easy for professors to add directories for students and TA's, with the appropriate permissions. In order to make this work, I need to be able to retrieve the directory structure of an existing svn repository, and present it on the web. I have looked at several methods, and was wondering if anyone had other ideas, or suggestions. Some things I have looked at: Every hour, run a script that runs 'svn ls -R --xml' on all of the repositories and populates a mysql database Positive: Fast page loads afterwards Doesn't take a lot of disk space Easy to manage permission, i.e. the website doesn't need to touch svn directly at all Negative: Really slow on some of our more complicated repositories No 'live' updates Has to run whether there are changes or not On page load, run 'svn ls -R --xml' and retrieve only the directory I need to render the current page Positive: updates live no cron job to tie up the server Negative: website is slow as molasses webserver uses a lot more resources Directly read svn database Positive: Fast page loads live updates Negative: Difficult? I am very curious what alternatives there are that I have not seen or thought of, because I feel like any of these would be quite awful and inelegant in one way or another. Also I don't want to reinvent the wheel if it can be avoided. Thanks!
0
python,svn
2012-11-18T20:41:00.000
0
13,444,341
I would prefer a database solution. The problem with a repository is simultaneous access. Using a database that stores the updates the repository receives gives you a reliable source of information for retrieving its layout. I have done a fair amount of research for an internship, and using a database is almost always the fastest way you can read the repo. To put it in pseudo code: read repository, load site, print layout, perform tasks -- and repeat. Another option to consider is using a database that tracks the layout of the repository independently. This way you're sure users won't bump into each other's updates, and it keeps the repository database safe from corruption.
0
292
false
0
1
Quickly retrieving svn directory trees
13,444,407
5
5
0
2
0
0
0.07983
0
I have the opportunity to study either Java or Python, but I can't decide which to choose. I am already well versed in C++. Can you please tell me which one is better, based on your experience?
0
java,python
2012-11-19T05:35:00.000
0
13,448,232
If you are just learning an object-oriented programming language, then I suggest you start with Java, because if you don't understand the ideas behind object-oriented programming well, you will certainly lag behind. But if you already have good experience with the underlying ideas (i.e. structured or object-oriented programming), then it doesn't really matter whether you go with Java or Python. The basic concepts are the main thing you need to learn.
0
194
false
0
1
Which to choose Python or java
13,448,289
5
5
0
2
0
0
0.07983
0
I have the opportunity to study either Java or Python, but I can't decide which to choose. I am already well versed in C++. Can you please tell me which one is better, based on your experience?
0
java,python
2012-11-19T05:35:00.000
0
13,448,232
This is a really subjective question and there is no "right" answer. I personally would go with Python, but I have already taken multiple Java classes. Python is fun and interesting, but Java has been around for a while and isn't going anywhere any time soon.
0
194
false
0
1
Which to choose Python or java
13,448,256
5
5
0
1
0
0
0.039979
0
I have the opportunity to study either Java or Python, but I can't decide which to choose. I am already well versed in C++. Can you please tell me which one is better, based on your experience?
0
java,python
2012-11-19T05:35:00.000
0
13,448,232
Start out with Python; use Python for your own hackish projects - it's great for Web Apps and rapid prototyping. Learn Java later on and you'll enjoy it; learn it before Python and you won't appreciate the kind of OOP Java has to offer as much. This is from personal experience; again, like twodayslate mentioned, there is no "right" answer. I learnt both Python and Java on my own and use mainly Python for personal projects.
0
194
false
0
1
Which to choose Python or java
13,448,303
5
5
0
1
0
0
0.039979
0
I have the opportunity to study either Java or Python, but I can't decide which to choose. I am already well versed in C++. Can you please tell me which one is better, based on your experience?
0
java,python
2012-11-19T05:35:00.000
0
13,448,232
I feel there is not much in the language itself; it's just implementing the logic, and you can use anything to express that. But you do have to keep in mind the drivers and libraries available for the language you are selecting.
0
194
false
0
1
Which to choose Python or java
13,448,294
5
5
0
3
0
0
0.119427
0
I have the opportunity to study either Java or Python, but I can't decide which to choose. I am already well versed in C++. Can you please tell me which one is better, based on your experience?
0
java,python
2012-11-19T05:35:00.000
0
13,448,232
I'd say go for Python. It's very easy to code.
0
194
false
0
1
Which to choose Python or java
13,448,259
1
4
0
1
7
0
0.049958
0
This is the scenario. I want to be able to back up the contents of a folder using a python script. However, I want my backups to be stored in a zipped format, possibly bz2. The problem comes from the fact that I don’t want to bother backing up the folder if the contents of the “current” folder are exactly the same as what is in my most recent backup. My process will be like this: Initiate backup. Check contents of the “current” folder against what is stored in the most recent zipped backup. If the same, then “complete”. If different, run the backup, then “complete”. Can anyone recommend the most reliable and simple way of completing step 2? Do I have to unzip the contents of the backup and store them in a temp directory to do a comparison, or is there a more elegant way of doing this? Possibly to do with modified date?
0
python,zip,backup,unzip
2012-11-19T09:48:00.000
1
13,451,235
Rsync will automatically detect and copy only modified files, but seeing as you want to bzip the results, you still need to detect whether anything has changed. How about outputting the directory listing (including time stamps) to a text file alongside your archive? The next time, diff the current directory listing against this stored text. You can grep the differences out and pipe that file list to rsync to include those changed files. (A sketch of the manifest idea follows this record.)
0
5,546
false
0
1
How to elegantly compare zip folder contents to unzipped folder contents
13,451,351
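A sketch of the manifest idea from the answer above: store a text listing of paths, sizes and modification times next to the archive, and only rebuild the backup when the current listing differs. The folder and manifest paths are placeholders.

    import os

    def make_manifest(root):
        lines = []
        for dirpath, dirnames, filenames in os.walk(root):
            for name in sorted(filenames):
                full = os.path.join(dirpath, name)
                st = os.stat(full)
                lines.append("%s\t%d\t%d" % (os.path.relpath(full, root),
                                             st.st_size, int(st.st_mtime)))
        return "\n".join(sorted(lines))

    def backup_needed(root, manifest_path="backup.manifest"):
        current = make_manifest(root)
        try:
            with open(manifest_path) as f:
                previous = f.read()
        except IOError:
            previous = ""
        if current == previous:
            return False
        with open(manifest_path, "w") as f:   # remember the new state
            f.write(current)
        return True

    if backup_needed("/data/folder-to-back-up"):
        print("contents changed; run the tar/bzip2 backup now")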
1
3
0
1
0
0
0.066568
0
So I have a little script I wish to run once a week. It will check some variable and, if it's set, continue running the script; if not, I want it to wait an hour and try again. If it's still not set, it'll wait 2 hours, then 4, and then give up for the week. My question is, can I do this in python? It seems like I'd have to create and delete cron jobs in python to get this to work.
0
python,cron
2012-11-19T16:40:00.000
0
13,458,249
You can't really manage standard crons directly from Python. Instead, I'd set the cron to fire every hour and determine in the code whether it needs to run again (i.e. the last successful execution was more than 7 days ago). A sketch follows this record.
0
1,865
false
0
1
Creating and deleting cron jobs in python
13,458,421
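A sketch of the pattern suggested above: cron starts the script every hour, and the script itself decides whether a week has passed since the last successful run and whether the variable from the question is set. The state-file path and the two stubbed functions are placeholders for the real job.

    import os, time

    STATE_FILE = os.path.expanduser("~/.weekly_job_last_run")
    WEEK = 7 * 24 * 3600

    def my_variable_is_set():
        return True                     # placeholder for the real check

    def do_the_real_work():
        print("doing the weekly job")   # placeholder for the real job

    def last_successful_run():
        try:
            with open(STATE_FILE) as f:
                return float(f.read().strip())
        except (IOError, ValueError):
            return 0.0

    def main():
        if time.time() - last_successful_run() < WEEK:
            return                      # already ran successfully this week
        if not my_variable_is_set():
            return                      # try again when cron fires next hour
        do_the_real_work()
        with open(STATE_FILE, "w") as f:
            f.write(str(time.time()))   # record the success

    if __name__ == "__main__":
        main()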
1
1
0
1
3
1
0.197375
0
I'm using pydev in eclipse. I was hoping that pydev would first use the python classes I develop in my source dir. but since I also install the built egg into system dir, pydev also picks up the classes from system dir. the problem is that pydev uses system dir first in its python path. so after I installed a buggy version, and debug through pydev, and made the necessary changes in local sourcecode, it does not take effect, since the installed egg is not changed. or in the reverse case, as I was debugging, pydev takes me to the egg files, and I modify those egg files, so the real source code is not changed. so How could I let pydev rearrange pythonpath order? (just like eclipse does for java build classpath) ? thanks yang
0
python,eclipse,pydev
2012-11-19T18:13:00.000
1
13,459,647
If you are using setuptools, you can try running sudo python setup.py develop on the egg, as well as adding project dependencies between the two in Eclipse.
0
464
false
0
1
re-arrange pythonpath in pydev-eclipse?
14,926,237
1
1
0
2
0
0
1.2
0
I am learning python from learnpythonthehardway. On Windows I had no issues going through a lot of the exercises because the setup was easier, but I want to learn linux as well and ubuntu seemed the nicest choice. Now I am having trouble with the setup. I can get to the terminal and then to /usr/lib/python2.7, but I don't know whether to save the script in this directory. If I try to make a directory inside it with mkdir I can't, as permission is denied. I also tried chmod but didn't know how, or whether, to do it. Any help regarding where to save my script (which directory?), how to do that, and how to run it in the terminal as: user@user$ python sampleexercise.py. Using ubuntu 12.04 lts, skill = newbie. Thanks in advance.
0
python,installation,ubuntu-12.04
2012-11-19T23:46:00.000
1
13,464,456
As an absolute beginner, don't worry right now about where to install libraries. Simple example scripts that you're trying out for learning purposes don't belong in any lib directory such as /usr/lib/python2.7. On Linux you want to do most work in your home directory, so just cd ~ to make sure you're there, and create files there with an editor of your choice. You might want to organize your files hierarchically too. For example, create a directory called src/ using the mkdir command in your home directory, and then mkdir src/lpthw as a place to store all your samples from "Learn Python the Hard Way". Then simply run python <path/to/py/file> to execute the script, or cd ~/src/lpthw and run your scripts by filename only.
0
409
true
0
1
python library access in ubuntu 12.04
13,464,518
1
4
0
1
1
0
0.049958
0
I'd like to make a webapp that asks people multiple choice questions, and times how long they take to answer. I'd like those who want to, to be able to make accounts, and to store the data for how well they've done and how their performance is increasing. I've never written any sort of web app before, although I'm a good programmer and understand how http works. I'm assuming (without evidence) that it's better to use a 'framework' than to hack something together from scratch, and I'd appreciate advice on which framework people think would be most appropriate. I hope that it will prove popular, but would rather get something working than spend time at the start worrying about scaling. Is this sane? And I'd like to be able to develop and test this on my own machine, and then deploy it to a virtual server or some other hosting solution. I'd prefer to use a language like Clojure or Lisp or Haskell, but if the advantages of using, say, Python or Ruby would outweigh the fact that I'd enjoy it more in a more maths-y language, then I like both of those too. I probably draw the line at perl, but if perl or even something like Java or C have compelling advantages then I'm quite happy with them too. They just don't seem appropriate for this sort of thing.
0
python,web-applications,haskell,clojure,lisp
2012-11-20T12:44:00.000
0
13,473,489
When the server side creates the form, encode a hidden field with the timestamp of the request, so when the user POSTs the form you can see the time difference. How to implement that is up to you, which server you have available, and several other factors. (A sketch follows this record.)
0
947
false
1
1
How do I make a web server to make timed multiple choice tests?
13,476,327
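A framework-agnostic sketch of the hidden-timestamp idea from the answer above: the form embeds the server time when it was rendered, and the POST handler subtracts it from the arrival time. In practice you would sign the timestamp or keep it server-side so the user cannot tamper with it; the HTML and the post_data shape here are assumptions.

    import time

    def render_question(question_html):
        # Embed the render time in the form itself.
        return """
        <form method="post" action="/answer">
          %s
          <input type="hidden" name="started_at" value="%f">
          <button type="submit">Submit</button>
        </form>
        """ % (question_html, time.time())

    def handle_answer(post_data):
        # post_data is whatever your framework hands you for the POSTed form.
        elapsed = time.time() - float(post_data["started_at"])
        return elapsed   # seconds the user took to answer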
1
4
0
1
1
1
0.049958
0
I'm making a search tool in Python. Its objective is to be able to search files by their content (we're mostly talking about source files and text files, not images/binaries - even if searching their METADATA would be a great improvement). For now I don't use regular expressions, just plain text. This part of the algorithm works great! The problem is that I realize I'm searching mostly in the same few folders, and I'd like to find a way to build an index of the content of each file in a folder, and be able, as fast as possible, to know whether the sentence I'm searching for is in xxx.txt or whether it can't be there. The idea for now is to maintain a checksum for each file that lets me know whether it contains a particular string. Do you know any algorithm close to this? I don't need a 100% success rate; I'd prefer a small index to a big one with 100% success. The idea is to provide a generic tool. EDIT: To be clear, I want to search a PART of the content of the file. So making an md5 hash of all its content and comparing it with the hash of what I'm searching for isn't a good idea ;)
0
python,search
2012-11-21T16:25:00.000
0
13,497,607
The only reason anyone would want a tool that is capable of searching 'certain parts' of a file is because what they are trying to do is analyze data that has legal restrictions on which parts of it you can read. For example, Apple has the capability of identifying the GPS location of your iPhone at any moment a text was sent or received. But, what they cannot legally do is associate that location data with anything that can be tied to you as an individual. On a broad scale you can use obscure data like this to track and analyze patterns throughout large amounts of data. You could feasibly assign a unique 'Virtual ID' to every cell phone in the USA and log all location movement; afterward you implement a method for detecting patterns of travel. Outliers could be detected through deviations in their normal travel pattern. That 'metadeta' could then be combined with data from outside sources such as names and locations of retail locations. Think of all the situations you might be able to algorithmically detect. Like the soccer dad who for 3 years has driven the same general route between work, home, restaurants, and a little league field. Only being able to search part of a file still offers enough data to detect that Soccer Dad's phone's unique signature suddenly departed from the normal routine and entered a gun shop. The possibilities are limitless. That data could be shared with local law enforcement to increase street presence in public spaces nearby; all while maintaining anonymity of the phone's owner. Capabilities like the example above are not legally possible in today's environment without the method IggY is looking for. On the other hand, it could just be that he is only looking for certain types of data in certain file types. If he knows where in the file he wants to search for the data he needs he can save major CPU time only reading the last half or first half of a file.
0
3,002
false
0
1
Create an index of the content of each file in a folder
35,276,856
1
5
0
0
4
1
0
0
I am looking for some way to handle numbers that are in the tens of millions of digits and can do math at this level. I can use Java and a bit of Python. So a library for one of these would be handy, but a program that can handle these kind of numbers would also work. Does anyone have any suggestions? Thanks
0
java,python,math
2012-11-22T15:26:00.000
0
13,515,813
What kind of math operations do you need? Java already provides classes like java.math.BigDecimal or java.math.BigInteger you can use to do basic stuff (addition, multiplication, etc.)
0
207
false
0
1
Program or library to handle massive numbers
13,515,894
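The answer above covers the Java side (BigInteger/BigDecimal). For the Python half of the question, no extra library is needed: int is arbitrary-precision out of the box, and the standard decimal module gives BigDecimal-style arithmetic with configurable precision (for numbers with tens of millions of digits, heavier arithmetic is usually faster with bindings such as gmpy2, but plain int works). A tiny illustration:

    from decimal import Decimal, getcontext

    # Plain Python ints grow as large as memory allows.
    big = 2 ** 100000                 # an integer of roughly 30,000 digits
    print(len(str(big)))

    # decimal gives BigDecimal-style arithmetic with chosen precision.
    getcontext().prec = 50000         # 50,000 significant digits
    print(Decimal(1) / Decimal(7))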
2
3
0
0
7
0
0
0
I've written a script that opens up a file, reads the content and does some operations and calculations and stores them in sets and dictionaries. How would I write a unit test for such a thing? My questions specifically are: Would I test that the file opened? The file is huge (it's the unix dictionary file). How would I unit test the calculations? Do I literally have to manually calculate everything and test that the result is right? I have a feeling that this defeats the whole purpose of unit testing. I'm not taking any input through stdin.
0
python,unit-testing
2012-11-22T21:34:00.000
0
13,520,279
You haven't explained what the calculations are, but I guess your program should be able to work with a subset of the big file as well. If this is the case, make a unit test which opens up a small file, does the calculations and produces some result, which you can verify is correct. (A sketch follows this record.)
0
3,162
false
0
1
Unit testing a script that opens a file
13,520,332
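A sketch of the suggestion above: keep a small, hand-checkable subset of the dictionary file under your test directory and assert the known results for it. The module name (wordstats), function name (analyse), fixture path and expected values are all hypothetical stand-ins for the real script.

    import unittest
    import wordstats          # hypothetical: the script under test, made importable

    class SmallFileTest(unittest.TestCase):
        def test_counts_on_small_fixture(self):
            # tests/fixture_words.txt holds a dozen words you verified by hand.
            result = wordstats.analyse("tests/fixture_words.txt")
            self.assertEqual(result["word_count"], 12)
            self.assertIn("zebra", result["unique_words"])

    if __name__ == "__main__":
        unittest.main()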
2
3
0
6
7
0
1
0
I've written a script that opens up a file, reads the content and does some operations and calculations and stores them in sets and dictionaries. How would I write a unit test for such a thing? My questions specifically are: Would I test that the file opened? The file is huge (it's the unix dictionary file). How would I unit test the calculations? Do I literally have to manually calculate everything and test that the result is right? I have a feeling that this defeats the whole purpose of unit testing. I'm not taking any input through stdin.
0
python,unit-testing
2012-11-22T21:34:00.000
0
13,520,279
You should refactor your code to be unit-testable. Off the top of my head, that would mean: Move the file-opening functionality into a separate unit. Make that new unit receive the file name and return a stream of the contents. Make your main unit receive a stream and read it, instead of opening a file and reading it. Write a unit test for your main (calculation) unit; you would need to mock a stream, e.g. from a dictionary. Write several test cases, each time providing your unit with a different stream, and make sure your unit calculates the correct data for each input. Get your code coverage as close to 100% as you can (use nosetests for coverage). Finally, write a test for your 'stream provider': feed it several files (store them in your test folder) and make sure it reads them correctly, again getting coverage as close to 100% as you can. Now, and only now, commit your code. (A sketch follows this record.)
0
3,162
false
0
1
Unit testing a script that opens a file
14,733,487
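A sketch of the refactoring described above: the calculation unit takes any file-like object, so a test can feed it an in-memory stream instead of opening the real dictionary file. The function names and the toy calculation are illustrative only.

    import io
    import unittest

    def load_words(stream):
        # Calculation unit: works on any iterable of lines, not on a path.
        words = set()
        for line in stream:
            word = line.strip()
            if word:
                words.add(word.lower())
        return words

    def open_word_stream(path):
        # Thin 'stream provider' unit kept separate, as suggested above.
        return open(path)

    class LoadWordsTest(unittest.TestCase):
        def test_load_words_from_mock_stream(self):
            stream = io.StringIO("Apple\nbanana\n\nAPPLE\n")
            self.assertEqual(load_words(stream), {"apple", "banana"})

    if __name__ == "__main__":
        unittest.main()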
1
1
0
2
0
1
1.2
0
I'm making a game using Python with PyGame module. I am trying to make an introduction screen for my game using a video that I made since it was easier to make a video than coding the animation for the intro screen. The Pygame movie module does not work as stated on their site so I cannot use that. I tried using Pymedia but I have no idea how to even get a video running since their documentation weren't that helpful. Do you guys know any sample code that uses Pymedia to play a video? Or any code at all that loads a video using python. Or if there's any other video module out there that is simple, please let me know. I'm totally stumped.
0
python-2.7,pygame
2012-11-23T04:10:00.000
0
13,522,975
I found a solution. The latest version of Pygame is still able to play MPEG-1 files. The problem was that there are different encodings of MPEG-1. The ones I found that work so far are Any Video Converter and the Zamzar.com online converter. The downside to Zamzar is that it outputs a really small version of the original video. video.online-convert.com does not work.
0
1,309
true
0
1
Playing Video Files in Python
13,537,741
1
13
0
1
40
0
1.2
1
I have a script where I want to check if a file exists in a bucket and if it doesn't then create one. I tried using os.path.exists(file_path) where file_path = "/gs/testbucket", but I got a file not found error. I know that I can use the files.listdir() API function to list all the files located at a path and then check if the file I want is one of them. But I was wondering whether there is another way to check whether the file exists.
0
python,google-cloud-storage,file-exists
2012-11-23T08:39:00.000
0
13,525,482
I guess there is no function to check directly if the file exists given its path. I have created a function that uses the files.listdir() API function to list all the files in the bucket and match it against the file name that we want. It returns true if found and false if not.
0
52,000
true
1
1
How to check if file exists in Google Cloud Storage?
13,644,827
1
6
0
0
2
0
0
0
I want to write a script which can shut down a remote Ubuntu system. Actually, I want my VM to shut down safely when I shut down my main machine, on which the VM is installed. Is there any way of doing this with sh scripts, or with a script written in any language like Python?
0
python,linux,bash,ubuntu,sh
2012-11-23T19:01:00.000
1
13,534,541
You can call poweroff from a script, as long as it's running with superuser privileges.
0
4,767
false
0
1
Script to Shutdown Ubuntu
13,534,648
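The answer above covers the local case; below is a small sketch of both that and the remote variant the question asks about (shutting the VM down when the host goes down). It assumes SSH keys and passwordless sudo for poweroff are already configured; the host name and user are placeholders.

    import subprocess

    def shutdown_local():
        # Needs root, e.g. run the script with sudo.
        subprocess.call(["poweroff"])

    def shutdown_remote(host="vm.example.org", user="admin"):
        # Relies on SSH keys and passwordless sudo for the poweroff command.
        subprocess.call(["ssh", "%s@%s" % (user, host), "sudo poweroff"])

    if __name__ == "__main__":
        shutdown_remote()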
1
1
0
2
3
0
0.379949
0
I am trying to import pythoncom, but it gives me this error: Traceback (most recent call last): File "F:/Documents and Settings/Emery/Desktop/Python 27/Try", line 2, in import pythoncom File "F:\Python27\lib\site-packages\pythoncom.py", line 2, in import pywintypes ImportError: No module named pywintypes I reinstalled Python win32, but it still doesn't fix it. Any help? Also, I am trying to access the pythoncom.PumpMessages() method, an alternative would be nice as well.
0
python,winapi,pywin32,importerror,pythoncom
2012-11-23T23:39:00.000
0
13,536,952
If you are using an IDE like I am (PyCharm), you should go to where Python is installed, e.g. C:\Users\***\AppData\Local\Programs\Python\Python37\Lib\site-packages. In this folder check for the folder named pywin32. Copy that folder and paste it into C:\Users\***\PycharmProjects\project\venv\Lib\site-packages. After that restart your IDE, and it will import pywin32 as it did in my case. I hope this helps.
0
5,795
false
0
1
win32 Python - pythoncom error - ImportError: No module named pywintypes
53,436,989
1
2
0
0
3
1
0
0
I use the python apt library and I would like the commit() function not to produce any output. I've searched the web and saw that the fork function can do the trick, but I don't know how to do that or whether another way exists. I don't use any GUI; I work from the terminal.
0
python,apt
2012-11-24T15:38:00.000
0
13,542,698
Using fork is just one possibility, I think. I've already tried redirecting sys.stdout and even sys.stderr: no joy, it won't work.
0
256
false
0
1
How to silence the commit function from the python apt library?
13,552,981
2
2
0
1
1
0
0.099668
0
I created an encrypted file from a text file in python with beefish (beefish uses pycrypto). My source text file is 33742 bytes and the encrypted version is 33752; that's OK so far, but when I compress test.enc (the encrypted test file) with tar -czvf, the final file is 33989 bytes. Why does the compression not work when the source file is encrypted? So far the only option seems to be to compress it first and then encrypt it, because then the file stays small.
0
python,python-2.7,pycrypto
2012-11-24T18:10:00.000
0
13,544,061
The compression method used by tar -z relies on repeating patterns in the input file, replacing these patterns by a count of how many times the pattern repeated (grossly simplified). However, when you encrypt a file, you are basically trying to hide any repeating patterns in as much 'random'-looking noise as possible. That makes your file nearly incompressible. Combine that with the overhead of the archive and compression file format (metadata, etc.) and your file actually ends up slightly larger instead. You should reverse the process; compress first, then encrypt, and you'll increase the chances you end up with a smaller payload significantly.
0
494
false
0
1
compressed encrypted file is bigger then source
13,544,093
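A small demonstration of the point made in the answer above: compression needs the patterns that encryption deliberately destroys, so compress first. The encrypt step below is only a placeholder for the real cipher (beefish/pycrypto would take its place); random bytes stand in for ciphertext, which is equally pattern-free, and the input path is a placeholder.

    import bz2, os

    plaintext = open("test.txt", "rb").read()   # placeholder source file

    def encrypt(data):
        # Placeholder for the real cipher; returns pattern-free bytes of the
        # same length, which is what real ciphertext looks like to a compressor.
        return os.urandom(len(data))

    # Wrong order: the ciphertext barely compresses (often grows slightly).
    bad = bz2.compress(encrypt(plaintext))

    # Right order: compress the patterned plaintext, then encrypt the result.
    good = encrypt(bz2.compress(plaintext))

    print(len(plaintext), len(bad), len(good))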
2
2
0
7
1
0
1.2
0
I created an encrypted file from a text file in python with beefish (beefish uses pycrypto). My source text file is 33742 bytes and the encrypted version is 33752; that's OK so far, but when I compress test.enc (the encrypted test file) with tar -czvf, the final file is 33989 bytes. Why does the compression not work when the source file is encrypted? So far the only option seems to be to compress it first and then encrypt it, because then the file stays small.
0
python,python-2.7,pycrypto
2012-11-24T18:10:00.000
0
13,544,061
Compression works by identifying patterns in the data. Since you can't identify patterns in encrypted data (that's the whole point), you can't compress it. For a perfect encryption algorithm that produced a 33,742 byte output, ideally all you would be able to determine about the decrypted original data is that it can fit in 33,742 bytes, but no more than that. If you could compress it to, say, 31,400 bytes, then you would immediately know the input data was not, say, 32,000 bytes of random data since random data is patternless and thus incompressible. That would indicate a failure on the part of the encryption scheme. It's nobody's business whether the decrypted data is random or not.
0
494
true
0
1
compressed encrypted file is bigger then source
13,544,079
1
1
1
2
1
0
0.379949
0
I read few Boost.Python tutorials and I know how to call C++ function from Python. But what I want to do is create C++ application which will be running in the background all the time and Python script that will be able to call C++ function from that instance of C++ application. The C++ application will be a game server and it has to run all the time. I know I could use sockets/shared memory etc. for this kind of communication but is it possible to make it with Boost.Python?
0
c++,python,boost,boost-python
2012-11-25T16:51:00.000
0
13,553,174
Boost python is useful for exposing C++ objects to python. Since you're talking about interacting with an already running application from python, and the lifetime of the script is shorter than the lifetime of the game server, I don't think boost python is what you're looking for, but rather some form of interprocess communication. Whilst you could create your IPC mechanism in C++, and then expose it to python using boost python, I doubt this is what you want to do.
0
426
false
0
1
Boost.Python - communication with running C++ program
13,558,179
1
2
0
0
1
1
0
0
Is there any way to force python module to be installed in the following directory? /usr/lib/python2.7
0
python,module,installation
2012-11-25T18:49:00.000
1
13,554,241
Install the module: sudo pip-2.7 install guess_language Validate import and functionality: > Python2.7 >>> import guess_language >>> print guess_language.guessLanguage(u"שלום לכם") he
0
70
false
0
1
Force python module to be installed in certain directory
13,555,251
1
1
1
1
0
0
0.197375
0
I want to create an application that is capable of loading plugins. The twist is that I want to be able to create plugins in both C/C++ and Python. So I've started thinking about this and have a few questions that I'm hoping people can help me with. My first question is whether I need to use C/C++ for the "core" of the application (the part that actually does the loading of the plugins)? This is my feeling at least, I would think that implementing the core in Python would result in a nasty performance hit, but it would probably simplify loading the plugins dynamically. Second question is how would I go about defining the plugin interface for C/C++ on one hand and Python on the other? The plugin interface would be rather simple, just a single method that accepts a list of image as a parameter and returns a list of images as a return value. I will probably use the OpenCV image type within the plugins which exists for both C/C++ and Python. Finally, I would like the application to dynamically discover plugins. So if you place either a .py file or a shared library file (.so/.dll) in this directory, the application would be able to produce a list of available plugins at runtime. I found something in the Boost library called Boost.Extension (http://boost-extension.redshoelace.com/docs/boost/extension/index.html) but unfortunately it doesn't seem to be a part of the official Boost library and it also seems to be a bit stale now. On top of that, I don't know how well it would work with Python, that is, how easy it would be to create Python plugins that fit into this mechanism. As a side note, I imagine the application split into two "parts". One part is a stripped down core (loading and invoking plugin instances from a "recipe"). The other part is the core plus a GUI that I plan on writing in Python (using PySide). This GUI will enable the user to define the aforementioned "recipes". This GUI part would require the core to be able to provide a list of available plugins. Sorry for the long winded "question". I guess I'm hoping for more of a discussion and of course if anybody knows of something similar that would help me I would very much appreciate a pointer. I would also appreciate any concise and to the point reading material about something similar (such as integrating C/C++ and Python etc).
0
c++,python,c,plugins,shared-libraries
2012-11-25T20:40:00.000
0
13,555,278
Write your application in Python, then you can have a folder for your plugins. Your application searches for them by checking the directory/traversing the plugin tree. Then import them via "import" or use ctypes for a .so/.dll, or even easier: you can use boost::python for creating a .so/.dll that you can 'import' like a normal python module. Don't use C++ and try to do scripting in Python - that really sucks, you will regret it. ;)
0
648
false
0
1
Application that can load both C/C++ and Python plugins
13,555,348
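A sketch of the discovery step described in the answer above, for the Python side only: scan a plugins/ directory and import every .py file found there, expecting each module to expose the single process(images) entry point the question mentions. It uses the modern importlib API; C/C++ plugins would be loaded separately (ctypes, or a module built with boost::python). Directory and function names are assumptions.

    import importlib.util
    import os

    def load_python_plugins(plugin_dir="plugins"):
        plugins = {}
        for name in os.listdir(plugin_dir):
            if not name.endswith(".py") or name.startswith("_"):
                continue
            path = os.path.join(plugin_dir, name)
            spec = importlib.util.spec_from_file_location(name[:-3], path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            if hasattr(module, "process"):      # the agreed plugin interface
                plugins[name[:-3]] = module.process
        return plugins

    for plugin_name, process in load_python_plugins().items():
        print("found plugin:", plugin_name)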
1
5
0
0
3
0
0
0
I want to open a ppt file using Python on linux, (like python open a .txt file). I know win32com, but I am working on linux. So, What do I need to do?
0
python
2012-11-26T05:28:00.000
1
13,559,133
Using catdoc/catppt with subprocess to open doc files and ppt files.
0
14,562
false
0
1
How to open ppt file using Python
13,568,384
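A minimal sketch of the suggestion in the record above, calling catppt (from the catdoc package) through subprocess to pull the text out of a .ppt file; it assumes catdoc is installed, and the file name is a placeholder.

    import subprocess

    def ppt_text(path):
        # Requires the catdoc package (which provides catppt) to be installed.
        out = subprocess.check_output(["catppt", path])
        return out.decode("utf-8", errors="replace")

    print(ppt_text("slides.ppt"))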
1
1
0
0
2
0
1.2
0
I am a new user of Sublime text. It has been working fine for a few days until it began to refuse to compile anything and I don't know where the problem is. I wrote python programs and pressed cmd+b and nothing happened. When I try to launch repl for this file - that also doesn't work. I haven't installed any plugins and before this issue all has been working well. Any suggestions on how to identify/fix the problem are greatly appreciated
0
python,sublimetext2
2012-11-26T19:47:00.000
1
13,571,993
Yes, you might want to give more detail. Have you made sure you have saved the file as .py? Try something simple like print "Hello" and then see if this works.
0
182
true
0
1
Build command in sublime text has stopped functioning
20,480,227
2
3
0
0
3
1
0
0
CMUdict works for the English language, but what if I want to count the syllables of content in another language?
0
python,nlp,nltk
2012-11-26T20:17:00.000
0
13,572,454
You certainly can't do it in a general way for all languages, because different languages render sounds to text differently. For example, the Hungarian word "vagy" looks like 2 syllables to an English speaker, but it's only one. And the English word "bike" would naturally be read as 2 syllables by speakers of many other languages. Furthermore, for English you probably can't do this very accurately without a dictionary anyway, because English has so much bizarre variation in its spelling. For example, we pronounce the "oe" in "poet" as two distinct syllables, but only one in "does". This is probably true of some other languages as well.
0
1,881
false
0
1
Is there anyway in python to count syllables without the use of a dictionary?
13,572,969
2
3
0
2
3
1
0.132549
0
CMUdict works for the English language, but what if I want to count the syllables of content in another language?
0
python,nlp,nltk
2012-11-26T20:17:00.000
0
13,572,454
In general, no. For some languages there might be, but if you don't have a dictionary you'd need knowledge of those languages' linguistic structure. How words are divided into syllables varies from language to language.
0
1,881
false
0
1
Is there anyway in python to count syllables without the use of a dictionary?
13,572,478
1
1
0
0
0
1
1.2
0
I have a custom C++ Python Module that I want to build into Python that builds fine but fails when it goes to the linking stage. I have determined that the problem is that it is using the gcc to link and not g++ and this is what is causing all of the errors I am seeing when it tries to link in the std libraries. How would I get the Python build process to link with g++ instead of gcc? Do I have to manually edit the Makefile or is it something I need to set when I am configuring it. I am compiling Python 2.6 on CentOS 5.8. Thanks in advance for the help!
0
python,linux,gcc,g++,python-2.6
2012-11-27T16:50:00.000
1
13,589,075
I was able to solve my problem by manually editing the Makefile generated by the configure script so that the linker was using g++ instead of gcc and that solved my problems. Thanks for the possible suggestions!
0
85
true
0
1
C++ Python Module not being linked into Python with g++
13,590,180
1
2
0
3
1
0
0.291313
0
I'm coming from the PHP/Apache world, where running an application is super easy. Whenever a PHP application crashes, the Apache process running that request will stop, but the server will still be running happily and responding to other clients. Is there a way to have a Python application work in a similar way? How would I set up a wsgi server like Tornado or CherryPy so it will work similarly? Also, how would I run several applications from one server with different domains?
0
python,tornado,wsgi,cherrypy
2012-11-29T04:49:00.000
1
13,619,021
What you are after would possibly happen anyway for WSGI severs. This is because any Python exception only affects the current request and the framework or WSGI server would catch the exception, log it and translate it to a HTTP 500 status page. The application would still be in memory and would continue to handle future requests. What we get down to is what exactly you mean by 'crashes Apache process'. It would be rare for your code to crash, as in cause the process to completely exit due to a core dump, the whole process. So are you being confused in your terminology in equating an application language level error to a full process crash. Even if you did find a way to crash a process, Apache/mod_wsgi handles that okay and the process will be replaced. The Gunicorn WSGI server will also do that. CherryPy will not unless you have a process manager running which monitors it and the process monitor restarts it. Tornado in its single process mode will have the same problem. Using Tornado as the worker in Gunicorn is one way around that plus I believe Tornado itself may have some process manager in it now for running multiple process which allow it to restart processes if they die. Do note that if your application bug which caused the Python exception is bad enough and it corrupts state within the process, subsequent requests may possibly have issues. This is the one difference with PHP. With PHP, after any request, whether successful or not, the application is effectively thrown away and doesn't persist. So buggy code cannot affect subsequent requests. In Python, because the process with loaded code and retained state is kept between requests, then technically you could get things in a state where you would have to restart the process to fix it. I don't know of any WSGI server though that has a mechanism to automatically restart a process if one request returned an error response.
0
300
false
1
1
How to setup WSGI server to run similarly to Apache?
13,619,836
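To illustrate the point made in the answer above, that a request-level Python exception does not take the process down, here is a minimal WSGI app plus a catch-all middleware that turns any exception into a 500 response while the worker keeps serving later requests. This is roughly what frameworks and WSGI servers already do for you; the route and messages are just examples.

    import sys
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        if environ["PATH_INFO"] == "/boom":
            raise RuntimeError("bug in this one request")
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello\n"]

    def error_middleware(wrapped):
        def wsgi(environ, start_response):
            try:
                return wrapped(environ, start_response)
            except Exception:
                # Log the traceback in real code; the process itself stays up.
                start_response("500 Internal Server Error",
                               [("Content-Type", "text/plain")],
                               sys.exc_info())
                return [b"internal error\n"]
        return wsgi

    if __name__ == "__main__":
        make_server("", 8000, error_middleware(app)).serve_forever()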
1
3
0
0
4
0
0
0
I've installed python-mode, pymacs and pycomplete+ from el-get on emacs24, but I am not able to get auto-completion for python in emacs.
0
python,emacs
2012-11-29T18:35:00.000
0
13,632,415
bzr branch lp:python-mode/components-python-mode i.e. the development branch of python-mode.el delivering an inlined Pymacs provides auto-completion right out of the box for me Might conflict with an already installed Pymacs though
0
6,063
false
0
1
Emacs python autocompletion
13,634,637
1
1
0
1
0
0
1.2
0
I'm starting to use Sublime Text 2 in favor of Eclipse for developing Python code. In all I'm liking the change, but one of the things I miss from Eclipse is a convenient "problems" window that shows a summary of all errors and warnings from files in the project. While sublimelinter helps, it only works for files that you have open and are editing. It will place a box clearly around the error as you type it, but what if there are other problems in other files that you haven't seen yet? (ie, might have been committed by a coworker, etc) Does there exist something in Sublime Text 2 that will show a summary of linting output?
0
python,sublimetext2,sublimelinter
2012-11-29T23:04:00.000
0
13,636,412
pylint is first and foremost a command line tool for code analysis. You can simply run it on a module from the command line and it will generate a whole report with every error/warning within the project. I don't know if such a feature exists within Sublime Text, but this is not something you will use often; I simply use the command line about once a week to check that I didn't miss anything. I also use the SublimeTODO plugin, which basically analyses the code looking for TODO comments. Unlike sublimelint, it does generate a report for all the open files or files within a project.
0
813
true
0
1
Sublime Text 2, Sublimelint, summary of problems
13,636,603
1
2
0
0
0
0
1.2
0
I have a Python script that scans my logs and reports all its findings. Is it possible for the script on my box (say box A) to be executed against another box (say box B) without copying it? Do I really need to copy my Python script to box B and then execute it from box A, or is there a method by which, staying on box A, I can connect to box B, run my python program against box B there, get its output and close the connection?
0
python
2012-11-30T07:31:00.000
0
13,640,812
If your logs on box B can be accessed over the network (e.g., through a network share or FTP), then you could modify the script on box A to retrieve and process them. If they are not network accessible, then you'll need to copy either the script from box A to box B, or the logs from box B to box A.
0
119
true
0
1
Python : run a py program on another box
13,640,945
2
2
0
3
0
0
0.291313
1
I need a way to programmatically create Twitter Applications/API keys. I could make something on my own, but does anyone know of a pre-made solution?
0
php,python,twitter,twitter-oauth
2012-11-30T20:16:00.000
0
13,652,514
Assuming you're referring to the consumer key and consumer secret, you're not supposed to be able to create those programmatically. That's why you have to sign in to a web page with a CAPTCHA in order to create one.
0
1,280
false
0
1
Is there an unofficial API for creating Twitter applications/api keys?
13,652,765
2
2
0
0
0
0
0
1
I need a way to programmatically create Twitter Applications/API keys. I could make something on my own, but does anyone know of a pre-made solution?
0
php,python,twitter,twitter-oauth
2012-11-30T20:16:00.000
0
13,652,514
Not sure what you mean, but there are plenty of libraries that abstract the Twitter API (https://dev.twitter.com/docs/twitter-libraries).
0
1,280
false
0
1
Is there an unofficial API for creating Twitter applications/api keys?
13,652,757
1
1
0
1
2
1
1.2
0
I have a project where I use south migrations. Often, the generated migration .py file has unused imports. This generates warnings in PyDev/Eclipse. I want the warnings turned on in general, as they promote code discipline. However, I wish I could turn them off for that package, either in Eclipse or through some directive. I am aware of the #@UnusedImport comment tag. Is it possible to do something like that, but at the package level? Perhaps __init__.py could be used?
0
python,eclipse,pydev
2012-12-02T09:44:00.000
0
13,668,095
Putting # @PydevCodeAnalysisIgnore at the top of the module will cause PyDev to skip all code analysis on a given file. While not quite on package level, this is good enough.
0
694
true
0
1
Python and PyDev unused imports - possible to disable on the package level?
15,648,517
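For reference, the marker from the answer above goes at the very top of each generated migration module. A hypothetical example file (the imports shown are just typical South migration imports that would otherwise trigger unused-import warnings):

    # @PydevCodeAnalysisIgnore
    # PyDev skips all code analysis for this file, so the imports below
    # no longer raise unused-import warnings.
    import datetime
    from south.db import db
    from south.v2 import SchemaMigration
    from django.db import models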
1
2
0
3
0
0
0.291313
0
I have some python files on Windows, and I transferred them to my gentoo box via samba. I checked that their mode is executable, and I used ./xxx.py to run one, but got an error: ": No such file or directory". I am puzzled that it does not say which file is missing, yet when I use python xxx.py it runs fine. I then checked the line-ending setting with set ff in vim and found it was dos, so I used set ff=unix to change it, and now it runs with ./xxx.py. But why can it still be run with python xxx.py when ff=dos?
0
python,linux
2012-12-02T12:01:00.000
1
13,669,092
Windows line endings are CRLF, or \r\n. Unix uses simply \n. When the OS reads your shebang line, it sees #!/usr/bin/python\r. It can't run this command. A simple way to see this behavior from a unix shell would be $(echo -e 'python\r') (which tries to run python\r as a command). This output will also be similar to : command not found. Many advanced code editors under Windows support natively saving with unix line endings.
0
1,281
false
0
1
python file in dos and unix
13,669,170
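Besides fixing files one at a time in vim, here is a small sketch that strips the carriage returns in place (the equivalent of dos2unix), which is handy after copying a batch of scripts over samba; the path is a placeholder.

    def dos2unix(path):
        with open(path, "rb") as f:
            data = f.read()
        with open(path, "wb") as f:
            f.write(data.replace(b"\r\n", b"\n"))

    dos2unix("xxx.py")   # after this, ./xxx.py finds /usr/bin/python again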
1
1
0
1
5
0
0.197375
0
I have a repository of PDF documents, and most of the text contained in these documents is formatted in Comic Sans. I would like to change this to something similar to Arial. The original font is embedded in the document. I haven't found any existing tool to do this for me (I'm on Linux), and I wonder if it's possible to do it programmatically. A Python library would be perfect, but a library in any programming language would do. In which library would I be able to substitute fonts with the least effort? And which parts of the API would I use?
0
python,pdf,fonts
2012-12-02T19:14:00.000
0
13,672,763
There are commercial tools that can do this - one of which is pdfToolbox from callas software (warning - I'm affiliated with this company). However - even though this functionality exists and is sometimes used - the results are often completely undesirable and I have not seen many contexts where it is used on more than very specific files. And usually with limited success. To the point where this replacement is only available as a manual operation in the tool I mentioned - and not in automatic mode. Depending on how complex these files are, you would probably have better success to extract all text from the documents into something like RTF, do whatever manipulation you need to do there and regenerate PDF afterwards. Sounds like a roundabout way but I'm guessing the result will be better in most cases...
0
2,714
false
0
1
Changing the font in a PDF
13,674,330
2
2
0
4
1
0
0.379949
0
I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit. We're working on a project website which will handle a lot of http handle requests, stream videos(mostly from a provider like youtube or vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to linux. The reason that we want to use OpenBSD is that it's well known for it's security. The reason we chose Python is that it's fast. The reason we want to use Nginx is that it's known to be able to handle more http request when compared to Apache. The reason we want to use NoSQL is that MySQL is known to have problems in scalability when the databases grows. We want the web pages to load as fast as possible (caching and cdn's will be used) using the minimum amount of hardware possible. That's why we want to use ONPN (OpenBSD,Nginx,Python,Nosql) instead of the traditional LAMP (Linux,Apache,Mysql,PHP). We're not a very big company so we're using opensource technologies. Any suggestion is appreciated on how to use these software as a platform and giving hardware suggestions is also appreciated. Any criticism is also welcomed.
1
python,nginx,nosql,openbsd
2012-12-03T00:05:00.000
0
13,675,440
My advice - if you don't know how to use these technologies, don't do it. A few extra servers will cost you less than the time spent mastering technologies you don't know. If you want to try them out, do it - one by one, not everything at once. There is no magic solution for how to use them.
0
1,447
false
0
1
How to utilize OpenBSD, Nginx, Python and NoSQL
13,675,611
2
2
0
1
1
0
0.099668
0
I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit. We're working on a project website which will handle a lot of http handle requests, stream videos(mostly from a provider like youtube or vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to linux. The reason that we want to use OpenBSD is that it's well known for it's security. The reason we chose Python is that it's fast. The reason we want to use Nginx is that it's known to be able to handle more http request when compared to Apache. The reason we want to use NoSQL is that MySQL is known to have problems in scalability when the databases grows. We want the web pages to load as fast as possible (caching and cdn's will be used) using the minimum amount of hardware possible. That's why we want to use ONPN (OpenBSD,Nginx,Python,Nosql) instead of the traditional LAMP (Linux,Apache,Mysql,PHP). We're not a very big company so we're using opensource technologies. Any suggestion is appreciated on how to use these software as a platform and giving hardware suggestions is also appreciated. Any criticism is also welcomed.
1
python,nginx,nosql,openbsd
2012-12-03T00:05:00.000
0
13,675,440
I agree with wdev, the time it takes to learn this is not worth the money you will save. First of all, MySQL databases are not hard to scale. WordPress utilizes MySQL databases, and some of the world's largest websites use MySQL (google for a list). I can also say the same of linux and PHP. If you design your site using best practices (CSS sprites) Apache versus Nginx will not make a considerable difference in load times if you utilize a CDN and best practices (caching, gzip, etc). I strongly urge you to reconsider your decisions. They seem very ill-advised.
0
1,447
false
0
1
How to utilize OpenBSD, Nginx, Python and NoSQL
13,676,002
1
2
0
1
2
0
1.2
0
I am using Debian and I have a python script that I would like to run during rc.local so that it will run on boot. I already have it working with a test file that is meant to run and terminate. The problem is that this file should eventually run indefinitely using Scheduler. It's job is to do serial reads, a small amount of processing on those reads, and inserts into a MySQL database. However, I am nervous about then not being able to cancel the script to get to my login prompt if changes need to be made since I was unable to terminate the test script early using Ctrl+C (^C). My hope is that there is some command that I am just missing that will accomplish this. Is there another key command that I'm missing that will terminate the python script and end rc.local? Thanks. EDIT: Another possible solution that would help me here is if there is a way to start a python script in the background during boot. So it would start the script and then allow login while continuing to run the script in the background. I'm starting to think this isn't something that's possible to accomplish so other suggestions to accomplish something similar to what I'm trying to do would be helpful as well. Thanks again.
0
python,debian,boot
2012-12-03T00:47:00.000
1
13,675,689
Seems like it was just a dumb mistake on my part. I realized the whole point of this was to allow the Python script to run as a background process during boot, so I added " &" to the end of the script call, just like you would when running it from the shell, and voilà - I can get to my password prompt by pressing Enter. I wanted to put this answer here just in case this turns out to be something horribly wrong to do, but it accomplishes what I was looking for.
0
575
true
0
1
End Python Script when running it as boot script?
13,706,652
1
2
0
1
2
0
1.2
0
I am writing a script in Python 3 for Ubuntu that should be executed every X minutes and should start automatically after logging in. Therefore I want to create a daemon (is that the right solution for this?), but I haven't found any modules / examples for Python 3, just for Python 2.x. Do you know something I can work with? Thank you.
0
python,python-3.x,daemon,launch-daemon
2012-12-05T11:03:00.000
1
13,721,808
Suppose your Python script is named monitor. Use the following steps: copy the monitor script to /usr/local/bin/ (not strictly necessary); also add a copy in /etc/init.d/; then make it executable with sudo -S chmod "a+x" "/etc/init.d/monitor"; finally run the update-rc.d command: sudo -S update-rc.d "monitor" "defaults" "98". This will execute your monitor at startup, for all ttys, whenever you log in.
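To illustrate the kind of script such an init.d entry might launch, here is a minimal sketch of a self-daemonizing periodic Python 3 script using only the standard library; the 10-minute interval and the empty job() body are placeholders, not part of the original answer.

#!/usr/bin/env python3
# Minimal self-daemonizing periodic script (sketch; standard library only).
import os, sys, time

def daemonize():
    if os.fork() > 0:        # first fork: the parent exits
        sys.exit(0)
    os.setsid()              # detach from the controlling terminal
    if os.fork() > 0:        # second fork: never re-acquire a terminal
        sys.exit(0)
    os.chdir('/')

def job():
    pass                     # the real work goes here

if __name__ == '__main__':
    daemonize()
    while True:
        job()
        time.sleep(10 * 60)  # "every X minutes" -- here X = 10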
0
3,305
true
0
1
Daemon with python 3
13,722,236
1
2
0
4
0
1
0.379949
0
I've always had trouble with dynamic languages like Python. Several kinds of trouble: Typo errors - I can use pylint to reduce some of these, but there are still errors that pylint cannot catch. Object type errors - I often forget what type a parameter is: int? str? some object? I also forget the types of some objects in my own code. Unit tests might help sometimes, but I don't always have enough time to write them. When I need a script to do a small job, it's 100-200 lines of code - not big, but I don't have time to unit test it because I need to use the script as soon as possible. So, many errors appear. Any ideas on how to reduce these problems?
0
python
2012-12-05T11:57:00.000
0
13,722,760
Unit testing is the best way to handle this. If you think the testing is taking too much time, ask yourself how much time you are losing on defects - identifying, diagnosing and rectifying them - after you have released the code. In effect, you are testing in production, and there's plenty of evidence to show that defects found later in the development cycle can be orders of magnitude more expensive to fix.
0
166
false
0
1
How to reduce errors in dynamic language such as python, and improve my code quality?
13,722,800
1
2
0
1
2
0
1.2
0
I'm trying to use Z3 from its Python interface, but I would prefer not to do a system-wide install (i.e. sudo make install). I tried doing a local install with a --prefix, but the Makefile is hard-coded to install into the system's Python directory. Best case, I would like to run Z3Py directly from the build directory, in the same way I use the z3 binary (build/z3). Does anyone know how, or have a script, to run z3py directly from the build directory without doing an install?
0
python,z3
2012-12-05T16:54:00.000
1
13,728,325
Yes, you can do it by including the build directory in your LD_LIBRARY_PATH and PYTHONPATH environment variables.
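As a hedged illustration of that approach (the build path below is a placeholder): sys.path lets Python find z3.py in the build tree, while LD_LIBRARY_PATH has to be exported in the shell before Python starts so libz3.so can be found.

import sys

Z3_BUILD = "/path/to/z3/build"      # placeholder: wherever you built Z3
sys.path.insert(0, Z3_BUILD)        # so `import z3` resolves to the build tree
# Remember: export LD_LIBRARY_PATH=/path/to/z3/build before launching Python,
# otherwise the loader will not find libz3.so when z3.py loads it.

from z3 import Int, Solver, sat
x = Int('x')
s = Solver()
s.add(x > 2, x < 10)
print(s.check() == sat)             # True if the constraints are satisfiable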
0
540
true
0
1
Can I use Z3Py without doing a system-wide install?
13,730,652
1
3
0
0
1
1
1.2
0
Simple question: Is there some code or function I can add into most scripts that would let me know it's "running"? After you execute foo.py, most people just see a blinking cursor. I am currently running a new large script and it seems to be working, but I won't know until either an error is thrown or it finishes (it might not finish). I assume you could put a simple print "foo-bar" at the end of each for loop in the script? Any other neat visual read-out tricks?
0
python
2012-12-05T18:14:00.000
0
13,729,740
The print "foo-bar" trick is basically what people do for quick&dirty scripts. However, if you have lots and lots of loops, you don't want to print a line for each one. Besides the fact that it'll fill the scrollback buffer of your terminal, on many terminals it's hard to see whether anything is happening when all it's doing is printing the same line over and over. And if your loops are quick enough, it may even mean you're spending more time printing than doing the actual work. So, there are some common variations to this trick: Print characters or short words instead of full lines. Print something that's constantly changing. Only print every N times through the loop. To print a word without a newline, you just print 'foo',. To print a character with neither a newline nor a space, you have to sys.stdout.write('.'). Either way, people can see the cursor zooming along horizontally, so it's obvious how fast the feedback is. If you've got a for n in … loop, you can print n. Or, if you're progressively building something, you can print len(something), or outfile.tell(), or whatever. Even if it's not objectively meaningful, the fact that it's constantly changing means you can tell what's going on. The easiest way to not print all the time is to add a counter, and do something like counter += 1; if counter % 250 == 0: print 'foo'. Variations on this include checking the time, and printing only if it's been, say, 1 or more seconds since the last print, or breaking the task into subtasks and printing at the end of each subtask. And obviously you can mix and match these. But don't put too much effort into it. If this is anything but a quick&dirty aid for your own use, you probably want something that looks more professional. As long as you can expect to be on a reasonably usable terminal, you can print a \r without a \n and overwrite the line repeatedly, allowing you to draw nice progress bars or other feedback (à la curl, wget, scp, and other similar Unix tools). But of course you also need to detect when you're not on a terminal, or at least write this stuff to stderr instead of stdout, so if someone redirects or pipes your script they don't get a bunch of garbage. And you might want to try to detect the terminal width, and if you can detect it and it's >80, you can scale the progress bar or show more information. And so on. This gets complicated, so you probably want to look for a library that does it for you. There are a bunch of choices out there, so look through PyPI and/or the ActiveState recipes.
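As a small, hedged sketch of the throttled-feedback idea (the loop body and counts are placeholders):

import sys

total = 100000
for n in range(total):
    # ... real work here ...
    if n % 250 == 0:                                  # don't print on every pass
        sys.stdout.write('\r%d / %d processed' % (n, total))
        sys.stdout.flush()                            # \r overwrites the same line
sys.stdout.write('\ndone\n')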
0
131
true
0
1
Code to check a scripts activity (python)
13,729,994
1
1
0
4
1
1
1.2
0
I have a Python program that uses a lot of my CPU's resources. While it is fine on my regular PC, I'm afraid it might be too much for my Raspberry Pi to handle. Speed is not an issue. I don't care if my code executes slowly, as I am implementing a real-time system that runs the code only once every few hours, but my CPU needs to be freed up because I will also be running other processes simultaneously. Is there any way I can reduce the resources it takes from the CPU at the cost of execution speed? Any help would be appreciated, thank you.
0
python,performance,cpu-usage,raspberry-pi,cpu-speed
2012-12-06T16:26:00.000
0
13,748,022
While you certainly can tinker with your program and optimize it, the fact is that programs are generally designed to take as much CPU as they need in order to finish in the smallest time possible. I see two ways to achieve your goal: Since the Raspberry Pi runs Linux, just lower the process priority of the Python interpreter running your script; this makes sure other programs can have the CPU when they need it. Alternatively, in your script, sleep for a few milliseconds every few milliseconds - ugly, but it could do the trick. Option one is probably the way to go.
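A hedged sketch of both suggestions combined - lowering the interpreter's priority from inside the script and sleeping between chunks of work; the niceness value and sleep interval are just examples:

import os, time

os.nice(19)                 # lowest priority: other processes get the CPU first

def heavy_step():
    pass                    # placeholder for one chunk of the real work

while True:
    heavy_step()
    time.sleep(0.01)        # brief pause so other processes can run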
0
5,970
true
0
1
How do I reduce CPU and memory usage by a python program?
13,748,368
1
1
0
1
4
0
0.197375
0
How should I implement continuous integration on my new application? Currently, this is how we're pushing to production - please bear with me, I know this is far from sane: From local, git push origin production (the production codebase is kept on the production branch; modifications are either written directly there and committed, or files are checked out individually from another branch. Origin is the remote production server.) On the remote box, sudo stop gunicorn (the application runs as a process), then cp ~/flaskgit/application.py ~/flask/application.py (the git push from local goes to an init --bare repo with a post-update hook that populates the files in ~/flaskgit; ~/flask is where the gunicorn service runs the application under a virtualenv), then sudo start gunicorn. We do our testing with the ~/flaskgit code running on a different port; once it looks good we do the cp. I would love to have something more fluid. I have used Jenkins in the past and loved the experience, but I didn't set it up. What resources / utilities should I look at in order to do this well? Thank you!
0
python,git,continuous-integration,flask,gunicorn
2012-12-06T18:48:00.000
0
13,750,417
Buildbot or Jenkins/Hudson - these give you continuous integration in the sense that you can run a "make" equivalent on every codebase change through a commit hook. You could also look at Vagrant if it offers you something for creating repeatable VMs with respect to config/setup; that could be tied in with a commit hook as well.
0
1,085
false
1
1
Continuous integration with python 2.7 / flask / mongoDB / git
13,767,025
2
2
0
0
1
0
0
1
The situation: I have a Python script to connect to and send signals to serially connected Arduinos. I want to know the best way to implement a web server so that I can query the status of the Arduinos. I want both the "web server" part and the serial connection to run in the same script. Is that possible, or do I have to break it into a daemon and a server part? Thanks, any comments are most welcome.
0
python,arduino,interprocess,python-multithreading
2012-12-06T19:43:00.000
1
13,751,271
Use a WAMP server. It is the easiest and quickest way; the web server will support PHP, Python, HTTP, etc. If you are using Linux, the easiest tool for serial communication is PHP, but on Windows PHP cannot read data from a serial connection, so use Python / Perl etc. there. Thanks
0
544
false
0
1
python daemon + interprocess communication + web server
16,685,053
2
2
0
0
1
0
1.2
1
The situation: I have a Python script to connect to and send signals to serially connected Arduinos. I want to know the best way to implement a web server so that I can query the status of the Arduinos. I want both the "web server" part and the serial connection to run in the same script. Is that possible, or do I have to break it into a daemon and a server part? Thanks, any comments are most welcome.
0
python,arduino,interprocess,python-multithreading
2012-12-06T19:43:00.000
1
13,751,271
For those wondering what I opted for: I decoupled the two parts. The Arduino daemon: I am using Python with a micro web framework called Bottle, which handles the API calls, and I have used PySerial to communicate with the Arduinos. The web server: the canonical Apache and PHP are used to make API calls to the Arduino daemon.
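For illustration only (not the poster's actual code), a minimal sketch of such a Bottle + PySerial daemon; the device path, baud rate and status command are assumptions:

import serial
from bottle import route, run

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)   # placeholder device/baud

@route('/status')
def status():
    ser.write(b'STATUS\n')              # hypothetical command the Arduino sketch understands
    reply = ser.readline().strip()
    return {'status': reply.decode('ascii', 'replace')}   # Bottle serialises dicts as JSON

run(host='127.0.0.1', port=8080)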
0
544
true
0
1
python daemon + interprocess communication + web server
16,689,821
1
3
0
0
3
0
0
0
I'm switching to Pyramid from an Apache/PHP/Smarty/Dreamweaver setup. I mean the situation of having a static site in Apache with a menu built via Dreamweaver templates or other static tools. When I wanted to put some dynamic content into the HTML I could do the following: put Smarty templates in the HTML, create a PHP file behind the HTML with the same name, have the PHP use the HTML as its template, and change the links from .html to .php. And that was all. This scheme is convenient because the site is viewable in a browser and editable in Dreamweaver. How can I reproduce this scheme in Pyramid? There are separate dirs for templates and static content, plus all these myapp:static modifiers in hrefs. Where should I look? Thank you for your advice.
0
python,pyramid
2012-12-07T02:32:00.000
0
13,756,090
This is a somewhat different answer than the others, but here is a completely different flow: write all your pages in HTML - everything! - and then use something like AngularJS or KnockoutJS to add dynamic content. Pyramid will serve the dynamic content requested via Ajax. You can then map everything to your HTML templates and edit those templates wherever you want, since they are simply HTML files. The downside is that making it all work together isn't that simple at first.
0
720
false
1
1
How to make almost static site in Pyramid?
14,030,455
1
2
0
0
0
1
0
0
I'm trying to improve a package written in Python. The package is already installed on the system, and all the source files are also present. I want to create a copy of the package source so that I can make all my changes to the copy and test them without modifying the installed package. Is there a way for me to tell Python to pick up my copy of the code instead of the installed version whenever a file tries to import the package, so that I can test the new code in the copy? I'm a noob with respect to Python, so please elaborate on the solution.
0
python,python-2.7
2012-12-08T16:34:00.000
0
13,779,439
If you want to make changes to the code, the better approach is: download its source code first, apply the changes, modify the setup.py file or make a new one, and give it a new name. In other words, do not change the installed version directly; keep your changes separate. Also, before doing all this, study the license agreement shipped with the original source by its author; this must be done carefully if you want to distribute your copy to others.
0
65
false
0
1
Making Changes to Installed Packages
13,779,703
1
3
0
0
1
1
0
0
Is there any way I can extract the localised name from a TTF/OTF font file? A solution in Python would be preferred, but I am fine with any language. Thank you very much.
0
python,c,fonts
2012-12-09T01:41:00.000
0
13,783,865
It looks like font files may have multiple localised names. Example with the fontconfig tools: $ fc-query -f '%{fullname} (%{fullnamelang}): %{file}\n' /usr/share/fonts/truetype/unfonts-core/UnBatang.ttf Un Batang,은 바탕 (en,ko): /usr/share/fonts/truetype/unfonts-core/UnBatang.ttf I can select the korean (ko) name using the order in fullnamelang: $ fc-query -f '%{fullname[1]}\n' /usr/share/fonts/truetype/unfonts-core/UnBatang.ttf 은 바탕
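For a Python route, one option (an assumption, not part of the answer above) is the fontTools library, which exposes the localised records of the font's name table:

from fontTools.ttLib import TTFont    # pip install fonttools

font = TTFont('/usr/share/fonts/truetype/unfonts-core/UnBatang.ttf')
for record in font['name'].names:
    if record.nameID == 4:            # nameID 4 = full font name
        # platformID/langID identify the locale of this particular record
        print('%s %s %s' % (record.platformID, record.langID, record.toUnicode()))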
0
1,306
false
0
1
Getting ttf/otf font localised name
43,614,842
1
1
0
0
0
1
1.2
0
I would like to find out how to write Python code which sets up a process to run on startup, in this case level two. I have done some reading, yet it has left me unclear as to which method is most reliable on different systems. I originally thought I would just edit /etc/inittab with pythons fileIO, but then I found out that my computers inittab was empty. What should I do? Which method of setting something to startup on boot is most reliable? Does anyone have any code snippets lying around?
0
python,linux,startup,runlevel
2012-12-09T03:52:00.000
1
13,784,459
I may as well answer my own question with my findings. On Debian, Ubuntu and CentOS systems there is a file named /etc/rc.local. If you use Python's file I/O to edit that file, you can put in a command that will be run at the end of all the multi-user boot levels. This facility is still present on systems that use upstart. On BSD I have no idea - if you know how to make something run on startup there, please comment to improve this answer. Arch Linux and Fedora use systemd to start daemons - see the Arch wiki page for systemd. Basically you need to create a systemd service and symlink it. (Thanks Emil Ivanov)
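A hedged sketch of that file-I/O edit (the command line added is a placeholder, and the script needs write permission on /etc/rc.local):

command = '/usr/bin/python /home/user/myscript.py &\n'   # hypothetical line to add

with open('/etc/rc.local') as f:
    lines = f.readlines()

if command not in lines:
    # insert just before "exit 0", or append if that line is absent
    pos = next((i for i, l in enumerate(lines) if l.strip() == 'exit 0'), len(lines))
    lines.insert(pos, command)
    with open('/etc/rc.local', 'w') as f:
        f.writelines(lines)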
0
302
true
0
1
Programmatically setting a process to execute at startup (runlevel 2)?
13,876,262
1
1
0
23
11
1
1.2
0
I'm building a database to hold a UUID generated by the Python uuid4 method - however, the documentation doesn't mention how many characters the UUID is! I'm not overly familiar with UUIDs, so I don't know whether all languages generate the same length for a UUID.
0
python,uuid
2012-12-09T05:14:00.000
0
13,784,859
There is a standard for UUIDs, so they're the same in all languages. However, there is a string representation and a binary representation. The normal string representation (str(myuuid)) looks like 42c151a8-b22b-4cd5-b103-21bdb882e489 and is 36 characters. The binary representation, myuuid.bytes (or bytes_le, but stay consistent with it when reconstructing the UUID objects), is 16 bytes. You can also get the string representation with no hyphens (32 characters) with myuuid.hex. You should be aware that some databases have a specific UUID type for storing UUIDs. What kind of database are you using?
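A quick illustration of the three representations mentioned above:

import uuid

u = uuid.uuid4()
print(str(u))         # 36 characters, e.g. 42c151a8-b22b-4cd5-b103-21bdb882e489
print(u.hex)          # 32 hex characters, no hyphens
print(len(u.bytes))   # 16 raw bytes, for a binary column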
0
10,805
true
0
1
What is the number of characters in a python uuid (type 4)?
13,784,879
2
7
0
0
10
0
0
0
I have a scraper (written in Python) which scrapes one site. While scraping, it prints the lines that are about to be written to a CSV. I now want to execute it via PHP code. My question is: how can I print each line as it is printed by the Python code? I have used the exec function, but it is of no use to me because it gives the output only after the whole program has finished executing. So, is it possible to get the Python output printed while it is still executing via PHP?
0
php,python,scraper
2012-12-09T11:19:00.000
0
13,786,926
I think I have a fair idea of what you are saying, but I am not completely sure what you mean. Do you mean that every time the Python script does a print, you want the PHP code to output what was printed? If that is the case, you could pass it as POST data via HTTP: instead of printing in Python, you could send it to the PHP script, which on receiving the data would print it. I am not too sure if this is what you want, though.
0
3,658
false
0
1
Print Python output by PHP Code
13,952,344
2
7
0
0
10
0
0
0
I have a scraper (written in Python) which scrapes one site. While scraping, it prints the lines that are about to be written to a CSV. I now want to execute it via PHP code. My question is: how can I print each line as it is printed by the Python code? I have used the exec function, but it is of no use to me because it gives the output only after the whole program has finished executing. So, is it possible to get the Python output printed while it is still executing via PHP?
0
php,python,scraper
2012-12-09T11:19:00.000
0
13,786,926
Simply use system() instead of exec(). exec() saves all lines of stdout output of the external program into an array, but system() flushes stdout output "as it happens".
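One complementary point on the Python side (an assumption about the scraper, not part of the answer above): the script itself must flush each line, or PHP will still only see the output in one burst at the end.

import sys

for line in ['row 1', 'row 2', 'row 3']:   # placeholder for the scraped rows
    sys.stdout.write(line + '\n')
    sys.stdout.flush()                     # push each line out immediately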
0
3,658
false
0
1
Print Python output by PHP Code
13,989,869
1
1
0
0
0
0
0
0
I am planning to integrate real-time notifications into a web application that I am currently working on. I have decided to go with XMPP for this and selected the Openfire server, which I thought would suit my needs. The front end uses the Strophe library to fetch the notifications using BOSH from my Openfire server. However, the notifications and other messages are to be posted by my application, and hence I think this code needs to live on the backend. Initially I thought of going with PHP XMPP libraries like XMPPHP and JAXL, but then I realized this would cause a lot of overhead, since every script would have to perform the same steps (connection, authentication, etc.), and I think this would make the PHP end a little slow and unresponsive. Now I am thinking of creating a middleware application acting as a web service that PHP will call, and this application will handle the XMPP work. The benefit is that this app (a server, if you will) only has to connect once and then sits there listening on a port. I am also planning to build it in an asynchronous way, so that it first takes all the requests from my PHP app and then, when there are no more requests, goes about doing the notification publishing. I am planning to create this service in Python using SleekXMPP. This is just what I have planned. I am new to XMPP and this whole web service business, and would like your comments regarding issues like memory and CPU usage, advantages, disadvantages, scalability, security, etc. Thanks in advance. PS: if something like this already exists (although I didn't find anything after a lot of googling), please direct me to it. EDIT: The middle-level service should do the following (but not be limited to it): 1. Publish notifications for different levels of groups and community pages. 2. Notify a single user on some event. 3. Handle user registration (though this can be done using the user service plugin). EDIT: It should also be able to create pub-sub nodes and subscribe and unsubscribe users from these nodes. I also want to store the notifications and messages in a database (Openfire doesn't). Would that be a good choice?
0
python,web-services,xmpp,openfire,strophe
2012-12-09T12:07:00.000
0
13,787,244
It seems to me like XMPP is a bit of a heavy-weight solution for what you're doing, given that communication is just one-way and you're just sending notifications (no real-time multi-user chat etc.). I would consider using something like Socket.io (http://socket.io) for the server<=>client channel along with Redis (http://redis.io) for persistence.
0
1,049
false
1
1
XMPP-- openfire,PHP and python web service
13,797,108
1
2
0
2
1
0
1.2
0
I started working on a new computer and tried to set everything up as it was on my old one. Unfortunately, switching to 64-bit Windows made everything quite difficult. With the current setup I can only open raw I420 videos converted with mencoder, but I can't open the DivX/Xvid videos that I could on my old PC. I tried ffdshow and the K-Lite codec pack. Opening the videos in GSpot shows that the codecs are indeed installed. I've searched for a solution all over the Internet, but couldn't find one. I've tried copying the ffmpeg DLL into the Python27 folder. The environment is 64-bit Windows 7 Pro. EDIT: I tried saving a video using OpenCV: I passed -1 to the cv2.VideoWriter function to get the codec selection dialog. The dialog doesn't show the ffdshow codecs.
0
python,opencv,ffmpeg,codec
2012-12-10T10:58:00.000
0
13,799,586
I solved the problem finally. Windows7 x64 + Python 2.7 x86 + NumPy x86 + ffdshow x86 + Eclipse x64 is the way to go. Everything is working like a charm. x64 ffdshow is also required for other programs like VirtualDub though.
0
2,972
true
0
1
Open DivX/XVID videos in OpenCV Python
14,071,725
1
1
0
0
0
0
0
0
I have installed PyCUDA without any difficulty, but am having trouble linking it to my Eclipse environment. Does anyone know how I can link PyCUDA and the Eclipse IDE? Thanks in advance.
0
python,eclipse
2012-12-10T14:51:00.000
0
13,803,315
You can use the NetBeans 6.5 IDE; it provides Python support.
0
284
false
0
1
PyCuda and Eclipse
13,866,443
1
3
0
1
6
1
0.066568
0
I'm just curious: is it possible to dump all the variables and the current state of the program to a file, and then restore it on a different computer? Let's say I have a little program in Python or Ruby; given a certain condition, it would dump all the current variables and the current state to a file. Later, I could load it again on a different machine and resume where it left off - something like a VM snapshot function. I've seen a question like this here, but Java related: saving the current JVM state and resuming it in a different JVM. Most people answered that there was nothing like that; only Terracotta had something, and still not perfect. Thank you. To clarify what I am trying to achieve: given 2 or more Raspberry Pis, I'm trying to run my software on Pi nº1, but then, when I need to do something different with it, I need to move the software to Pi nº2 without data loss and with only a minor break. And so on, to an unlimited number of machines.
0
python,ruby,jvm,stack
2012-12-10T20:51:00.000
0
13,809,013
Good question. In Smalltalk, yes. Actually, in Smalltalk, dumping the whole program and restarting it is the only way to store and share programs. There are no source files and there is no way of starting a program from square zero. So in Smalltalk you would get your feature for free. The Smalltalk VM offers a hook where each object can register to restore its external resources after a restart, like reopening files and internet connections. But also, for example, integer arrays are registered on that hook to change the endianness of their values in case the dump has been moved to a machine with different endianness. This might give a hint at how difficult (or not) it might turn out to be to achieve this in a language which does not support resumable dumps by design. All other languages are, alas, much less live. Except for some Lisp implementations, I would not know of any language which supports resuming from a memory dump. Which is a missed opportunity.
0
156
false
1
1
Saving the stack?
13,810,563
1
2
0
1
2
0
0.099668
0
I'm facing a strange issue, and after a couple of hours of research I'm looking for help / an explanation. It's quite simple: I wrote a CGI server in Python and I'm working with some libs, including pynetlinux for instance. When I start the script from a terminal with any user, it works fine - no bug, no dependency issue. But when I try to start it from a script in rc.local, the following code produces an error. import sys, cgi, pynetlinux, logging It produces the following error: Traceback (most recent call last): File "/var/simkiosk/cgi-bin/load_config.py", line 3, in import cgi, sys, json, pynetlinux, loggin ImportError: No module named pynetlinux Other dependencies produce similar issues. I suspect a few things, like the user executing the script in rc.local (normally root), and have tried some things found on the web without success. Can somebody help me? Thanks in advance. Regards. Ollie314
0
python,dependency-management
2012-12-11T00:13:00.000
0
13,811,575
The path where your modules are installed is probably normally set up by .bashrc or something similar, and .bashrc doesn't get sourced when it's not an interactive shell. /etc/profile is one place where you can put system-wide path changes. Depending on the Linux version/distro, it may use /etc/profile.d/, in which case /etc/profile runs all the scripts in /etc/profile.d; add a new shell script there with execute permissions and a .sh extension.
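An alternative workaround from the script's side, shown here as a hedged sketch (the site-packages path is a placeholder for wherever pynetlinux actually lives):

import sys
sys.path.append('/usr/local/lib/python2.7/dist-packages')   # hypothetical install location

import pynetlinux    # now resolvable even when launched from rc.local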
0
1,890
false
0
1
python scripts issue (no module named ...) when starting in rc.local
13,811,685
1
1
0
2
1
1
1.2
0
Basically, what I am trying to accomplish is to have users type a certain word into one CGI script (which I currently have); it should then save that entry in a list and display that word and the whole list on another page. I will also save it into a .txt file, but first I am trying to figure out how to display the whole list. Right now it only shows the keyword the user enters.
0
python,cgi
2012-12-11T09:57:00.000
0
13,817,697
There's no way your code could ever accumulate a list of keywords over multiple posts. Firstly, CGI scripts have no state, so they will start from a blank list each time. And even if that wasn't true, you explicitly reset keywords to the blank list each time anyway. You will need to store the list somewhere between runs. A text file will work, but only if you can guarantee that only one user will be accessing it at any one time. Since you're new to CGI scripts, I've no idea why you are trying to learn them. There's very little good reason to use them these days. Really, you should drop the CGI scripts, use a web framework (a micro-framework like Flask would suit you), and store the list in a database (again, an unstructured "no-sql" store might be good for you).
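If the single-user text-file route is acceptable, a minimal sketch (purely illustrative; the field name and file path are assumptions) could look like this:

import cgi, sys

form = cgi.FieldStorage()
keyword = form.getfirst('keyword', '')          # assumes the form field is named "keyword"

if keyword:
    with open('/tmp/keywords.txt', 'a') as f:   # append the new entry
        f.write(keyword + '\n')

try:
    with open('/tmp/keywords.txt') as f:
        keywords = [line.strip() for line in f]
except IOError:
    keywords = []

sys.stdout.write('Content-Type: text/plain\r\n\r\n')
sys.stdout.write('\n'.join(keywords) + '\n')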
0
59
true
0
1
Have users enter a keyword from one cgi script and save that information in a list/txtfile on another cgi script
13,818,037
1
2
0
3
1
0
1.2
0
I have a Python script that I'd like to run from the browser. mod_wsgi seems to be the way to go, but that method feels too heavyweight and would require modifying the script for its output. I guess I'd ideally like a PHP-style approach. The script doesn't take any input and will only be accessible on an internal network. I'm running Apache on Linux with mod_wsgi already set up; what are the options here?
0
python,apache,mod-wsgi
2012-12-12T13:46:00.000
0
13,841,206
I would go the micro-framework approach just in case your requirements change - and you never know, it may end up being an app rather than just a basic dump... Perhaps the simplest (and old-fashioned!?) way is using CGI: Duplicate your script and include print 'Content-Type: text/plain\n' before any other output to sys.stdout. Put that script somewhere apache2 can access it (your cgi-bin for instance). Make sure the script is executable. Make sure .py is added to the Apache CGI handler. But - I don't see any way this is going to be a fantastic advantage (in the long run at least).
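A minimal sketch of the duplicated CGI script described in that list (the body output is a placeholder for whatever the original script prints):

#!/usr/bin/env python
import sys

sys.stdout.write('Content-Type: text/plain\r\n\r\n')   # header must come first
sys.stdout.write('output of the original script goes here\n')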
0
379
true
0
1
Webserver to serve Python script
13,841,885
1
1
0
1
0
1
1.2
0
When tracing (using sys.settrace) the execution of a Python .egg under the Python 2.7 interpreter, frame.f_code.co_filename, instead of <path-to-egg>/<path-inside-egg>, equals something like build/bdist.linux-x86_64/egg/<path-inside-egg>. Is this a bug? And how can I recover the real path to the egg? In Python 2.6 and Python 3 everything works as expected.
0
python,egg,python-internals
2012-12-12T18:22:00.000
1
13,846,155
No, that is not a bug. Eggs, when being created, have their bytecode compiled in a build/bdist.<platform>/egg/ path, and you see that reflected in the co_filename variable. The bdist stands for binary distribution.
0
84
true
0
1
Strange co_filename for file from .egg during tracing in Python 2.7
13,846,221
1
3
0
2
5
1
0.132549
0
I have a Python script that I hope will do roughly this: load some particle positions into an array; run an algorithm over all 512^3 positions to distribute them onto an NxNxN matrix; feed that matrix back to Python; use plotting in Python to visualise the matrix (i.e. mayavi). First I have to write it serially, but ideally I want to parallelize step 2 to speed up the computation. What tools/strategies might get me started? I know Python and Fortran well, but not much about how to connect the two for my particular problem. At the moment I am doing everything in Fortran and then loading my Python program - I want to do it all at once. I've heard of f2py, but I want experienced people's opinions before I go down one particular rabbit hole. Thanks. Edit: The thing I want to make parallel is 'embarrassingly parallel' in that it is just a loop over N particles and I want to get through that loop as quickly as possible.
0
python,arrays,parallel-processing,fortran,f2py
2012-12-13T03:51:00.000
0
13,852,646
An alternative approach to VladimirF's suggestion could be to set up the two parts as a client-server construct, where your Python part talks to the Fortran part using sockets. Though this comes with the burden of implementing some protocol for the interaction, it has the advantage that you get a clean separation and can even run them on different machines, interacting over the network. In fact, with this approach you could even do the embarrassingly parallel part, by spawning as many instances of the Fortran application as needed and feeding them all different data.
1
929
false
0
1
I want Python as front end, Fortran as back end. I also want to make fortran part parallel - best strategy?
13,858,423
2
2
0
8
8
1
1
0
I know this is probably a very obvious answer and that I'm exposing myself to less-than-helpful snarky comments, but I don't know the answer, so here goes. If Python compiles to bytecode at runtime, is it just that initial compiling step that takes longer? If that's the case, wouldn't that just be a small upfront cost (i.e. if the code runs over a long period of time, do the differences between C and Python diminish)?
0
python,c,compilation
2012-12-13T04:39:00.000
0
13,853,053
Bytecode is not natural to the CPU, so it needs interpretation (by CPU-native code called an interpreter). The advantage of bytecode is that it enables optimizations and pre-computations, and saves space. A C compiler produces machine code, and machine code does not need interpretation - it is native to the CPU.
0
7,934
false
0
1
What makes C faster than Python?
13,853,083
2
2
0
18
8
1
1.2
0
I know this is probably a very obvious answer and that I'm exposing myself to less-than-helpful snarky comments, but I don't know the answer, so here goes. If Python compiles to bytecode at runtime, is it just that initial compiling step that takes longer? If that's the case, wouldn't that just be a small upfront cost (i.e. if the code runs over a long period of time, do the differences between C and Python diminish)?
0
python,c,compilation
2012-12-13T04:39:00.000
0
13,853,053
It's not merely the fact that Python code is interpreted which makes it slower, although that definitely sets a limit to how fast you can get. If the bytecode-centric perspective were right, then to make Python code as fast as C all you'd have to do is replace the interpreter loop with direct calls to the functions, eliminating any bytecode, and compile the resulting code. But it doesn't work like that. You don't have to take my word for it, either: you can test it for yourself. Cython converts Python code to C, but a typical Python function converted and then compiled doesn't show C-level speed. All you have to do is look at some typical C code thus produced to see why. The real challenge is multiple dispatch (or whatever the right jargon is -- I can't keep it all straight), by which I mean the fact that whereas a+b if a and b are both known to be integers or floats can compile down to one op in C, in Python you have to do a lot more to compute a+b (get the objects that the names are bound to, go via __add__, etc.) This is why to make Cython reach C speeds you have to specify the types in the critical path; this is how Shedskin makes Python code fast using (Cartesian product) type inference to get C++ out of it; and how PyPy can be fast -- the JIT can pay attention to how the code is behaving and specialize on things like types. Each approach eliminates dynamism, whether at compile time or at runtime, so that it can generate code which knows what it's doing.
0
7,934
true
0
1
What makes C faster than Python?
13,853,280
3
4
0
0
2
1
0
0
Is it possible to store Python (or C++) data in RAM for later use, and how can this be achieved? Background: I have written a program that finds which lines in the input table match a given regular expression. I can find all the lines in roughly one second or less. However, the problem is that I process the input table into a Python object every time I start this program, and that process takes about 30 minutes. The program will eventually run on a machine with over 128GB of RAM, and the Python object takes about 2GB of RAM. The input table changes rarely, and therefore the Python object (which I'm currently recalculating every time) also changes rarely. Is there a way to create this Python object once, store it in RAM 24/7 (recreating it if the input table changes or the server restarts), and then use it every time it is needed? NOTE: The Python object will not be modified after creation. However, I need to be able to recreate it if needed. EDIT: The only solution I can think of is to keep the program running 24/7 (as a daemon??) and then issue commands to it as needed.
0
c++,python,memory,ram
2012-12-16T23:50:00.000
0
13,906,679
Your problem description is kind of vague and can be read in several different ways. One way in which I read this is that you have some kind of ASCII representation of a data structure on disk. You read this representation into memory, and then grep through it one or more times looking for things that match a given regular expression. Speeding this up depends a LOT on the data structure in question. If you are simply doing line splitting, then maybe you should just read the whole thing into a byte array using a single read instruction. Then you can alter how you grep to use a byte-array grep that doesn't span multiple lines. If you fiddle the expression to always match a whole line by putting ^.*? at the beginning and .*?$ at the end (the ? forces a minimal instead of maximal munch) then you can check the size of the matched expression to find out how many bytes forward to go. Alternately, you could try using the mmap module to achieve something similar without having to read anything and incur the copy overhead. If there is a lot of processing going on to create your data structure and you can't think of a way to use the data in the file in a very raw way as a simple byte array, then you're left with various other solutions depending, though of these it sounds like creating a daemon is the best option. Since your basic operation seems to be 'tell me which tables entries match a regexp', you could use the xmlrpc.server and xmlrpc.client libraries to simply wrap up a call that takes the regular expression as a string and returns the result in whatever form is natural. The library will take care of all the work of wrapping up things that look like function calls into messages over a socket or whatever. Now, your idea of actually keeping it in memory is a bit of a red-herring. I don't think it takes 30 minutes to read 2G of information from disk these days. It likely takes at most 5, and likely less than 1. So you might want to look at how you're building the data structure to see if you could optimize that instead. What pickle and/or marshal will buy you is highly optimized code for building the data structure out of a serialized form. This will cause the data structure creation to possibly be constrained by disk read speeds instead. That means the real problem you're addressing is not reading it off disk each time, but building the data structure in your own address space. And holding it in memory and using a daemon isn't a guarantee that it will stay in memory. It just guarantees that it stays built up as the data structure you want within the address space of a Python process. The os may decide to swap that memory to disk at any time. Again, this means that focusing on the time to read it from disk is likely not the right focus. Instead, focus on how to efficiently re-create (or preserve) the data structure in the address space of a Python process. Anyway, that's my long-winded ramble on the topic. Given the vagueness of your question, there is no definite answer, so I just gave a smorgasbord of possible techniques and some guiding ideas.
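To make the mmap-plus-byte-regex idea concrete, here is a hedged sketch (the file name and pattern are placeholders):

import mmap, re

pattern = re.compile(br'^.*?foo.*?$', re.MULTILINE)    # placeholder expression

with open('input_table.txt', 'rb') as f:               # placeholder file name
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    matches = [mm[m.start():m.end()] for m in pattern.finditer(mm)]
    mm.close()

print('%d matching lines' % len(matches))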
0
2,201
false
0
1
Storing large python object in RAM for later use
13,928,292
3
4
0
2
2
1
0.099668
0
Is it possible to store Python (or C++) data in RAM for later use, and how can this be achieved? Background: I have written a program that finds which lines in the input table match a given regular expression. I can find all the lines in roughly one second or less. However, the problem is that I process the input table into a Python object every time I start this program, and that process takes about 30 minutes. The program will eventually run on a machine with over 128GB of RAM, and the Python object takes about 2GB of RAM. The input table changes rarely, and therefore the Python object (which I'm currently recalculating every time) also changes rarely. Is there a way to create this Python object once, store it in RAM 24/7 (recreating it if the input table changes or the server restarts), and then use it every time it is needed? NOTE: The Python object will not be modified after creation. However, I need to be able to recreate it if needed. EDIT: The only solution I can think of is to keep the program running 24/7 (as a daemon??) and then issue commands to it as needed.
0
c++,python,memory,ram
2012-12-16T23:50:00.000
0
13,906,679
We regularly load and store much larger chunks of memory than 2 Gb in no time (seconds). We can get 350 Mb/s from our 3 year old SAN. The bottlenecks /overheads seem to involve mainly python object management. I find that using marshal is much faster than cPickle. Allied with the use of data structures which involve minimal python object handles, this is more than fast enough. For data structures, you can either use array.array or numpy. array.array is slightly more portable (no extra libraries involved) but numpy is much more convenient in many ways. For example, instead of having 10 million integer (python objects), you would create a single array.array('i') with 10 million elements. The best part to using marshal is that it is a very simple format you can write to and read from easily using c/c++ code.
0
2,201
false
0
1
Storing large python object in RAM for later use
13,924,546
3
4
0
2
2
1
0.099668
0
Is it possible to store Python (or C++) data in RAM for later use, and how can this be achieved? Background: I have written a program that finds which lines in the input table match a given regular expression. I can find all the lines in roughly one second or less. However, the problem is that I process the input table into a Python object every time I start this program, and that process takes about 30 minutes. The program will eventually run on a machine with over 128GB of RAM, and the Python object takes about 2GB of RAM. The input table changes rarely, and therefore the Python object (which I'm currently recalculating every time) also changes rarely. Is there a way to create this Python object once, store it in RAM 24/7 (recreating it if the input table changes or the server restarts), and then use it every time it is needed? NOTE: The Python object will not be modified after creation. However, I need to be able to recreate it if needed. EDIT: The only solution I can think of is to keep the program running 24/7 (as a daemon??) and then issue commands to it as needed.
0
c++,python,memory,ram
2012-12-16T23:50:00.000
0
13,906,679
You could try pickling your object and saving it to a file, so that each time the program runs it just has to deserialise the object instead of recalculating it. Hopefully the server's disk cache will keep the file hot if necessary.
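A small sketch of that pickle approach (the cache file name and the build step are placeholders):

import os, pickle                      # cPickle is faster on Python 2

CACHE = 'table_cache.pkl'

def build_object():
    return {'rows': []}                # placeholder for the 30-minute build

if os.path.exists(CACHE):
    with open(CACHE, 'rb') as f:       # cheap: deserialise the prebuilt object
        table = pickle.load(f)
else:
    table = build_object()             # expensive: build once, then cache
    with open(CACHE, 'wb') as f:
        pickle.dump(table, f, pickle.HIGHEST_PROTOCOL)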
0
2,201
false
0
1
Storing large python object in RAM for later use
13,906,794
1
1
0
0
0
0
1.2
0
I believe this question is probably outside the scope of SO, but I was wondering what the best practice is for testing a payment processing feature. Every feature developed so far has been relatively easy to test, if not through unit testing then through a front-end walkthrough, but with this one I'm at a bit of a loss, as I have not done it before. What is suggested here?
0
python,database,web,payment-processing
2012-12-17T09:13:00.000
0
13,911,219
Most payment processors have a sandbox/developers account where you can process transactions in a test mode so you can fully test them as if you were in a live environment.
0
85
true
0
1
Testing Payment Processing Feature
13,914,479
1
1
0
3
2
0
0.53705
0
I am trying to get alerts for the data that I'm sending peers. My code works great for incoming blocks by looking for libtorrent.block_finished_alert but I want to know when and what I am sending to peers. I can't find an alert that will give me the equivalent for outbound transfers. I need to know the file and offset (the peer request). Is there an alert for outbound block requests? I'm using the python bindings but C++ code is fine too.
0
python,bittorrent,libtorrent
2012-12-17T17:51:00.000
0
13,919,367
The closest thing you have to alerts is probably stats_alert. It will tell you the number of payload bytes uploaded. It won't give the you granularity of a full block being sent though. If you'd like to add an alert, have a look at bt_peer_connection::write_piece. patches are welcome!
0
384
false
0
1
Get alerts for upload activity with libtorrent (rasterbar)
13,938,080
1
7
0
8
32
0
1
0
I am using Linux Mint, and to run a Python file I have to type python [file path] in the terminal. Is there a way to make the file executable, so that it runs the python command automatically when I double-click it? And since I stopped dealing with Windows ages ago, I wonder whether .py files there are automatically executable or whether some steps are needed. Thanks
0
python,linux
2012-12-18T12:36:00.000
1
13,933,169
Yes there is: add #!/usr/bin/env python to the beginning of the file and do chmod u+rx <file>, assuming your user owns the file; otherwise adjust the group or world permissions. .py files under Windows are associated with python as the program to run when opening them, just like MS Word is run when opening a .docx, for example.
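A tiny illustration of what the top of such a file looks like once the shebang is in place (after chmod u+rx hello.py it can be started with ./hello.py):

#!/usr/bin/env python
print('now directly executable')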
0
290,768
false
0
1
How to execute python file in linux
13,933,228
4
4
0
1
2
0
0.049958
0
I'm searching for services/strategies to detect when names entered in forms are spammy, for example: asdasdasd, ksfhaiodsfh, wpoeiruopwieru, zcpoiqwqwea - crazy keyboard input. I am trying Akismet, but it is not specifically meant for names (http://kemayo.wordpress.com/2005/12/02/akismet-py/). Thanks in advance.
0
c#,java,php,python,api
2012-12-18T13:42:00.000
0
13,934,311
You could look for unusual character combinations, like many consecutive vowels or consonants, and also watch your registrations and build a list of patterns (like asd) that recur in fake names. I would refrain from automatically blocking those inputs and would rather just flag them for examination.
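Purely as an illustration of that heuristic (the regular expression below is a crude assumption, meant for flagging rather than blocking):

import re

SUSPICIOUS = re.compile(r'[bcdfghjklmnpqrstvwxz]{5,}|asd|qwe|zxc', re.IGNORECASE)

def looks_mashed(name):
    return bool(SUSPICIOUS.search(name))

print(looks_mashed('asdasdasd'))   # True  -- keyboard-run pattern
print(looks_mashed('Marta'))       # False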
0
361
false
0
1
service or strategy to detect if users enter fake names?
13,934,575
4
4
0
0
2
0
0
0
I'm searching for services/strategies to detect when names entered in forms are spammy, for example: asdasdasd, ksfhaiodsfh, wpoeiruopwieru, zcpoiqwqwea - crazy keyboard input. I am trying Akismet, but it is not specifically meant for names (http://kemayo.wordpress.com/2005/12/02/akismet-py/). Thanks in advance.
0
c#,java,php,python,api
2012-12-18T13:42:00.000
0
13,934,311
Ask for a real email address and send the information needed to sign in there, then get the name from that account. No approach is really safe anyway.
0
361
false
0
1
service or strategy to detect if users enter fake names?
13,934,456
4
4
0
0
2
0
0
0
I'm searching for services/strategies to detect when names entered in forms are spammy, for example: asdasdasd, ksfhaiodsfh, wpoeiruopwieru, zcpoiqwqwea - crazy keyboard input. I am trying Akismet, but it is not specifically meant for names (http://kemayo.wordpress.com/2005/12/02/akismet-py/). Thanks in advance.
0
c#,java,php,python,api
2012-12-18T13:42:00.000
0
13,934,311
If speed isn't an issue, download a list of the top 100k most common names, throw them in an O(1) lookup data structure, see if the input is there, and if not, you could always compare the input to the entries using a string similarity algorithm. Although if you do, you will probably want to bucket by starting letter to prevent having to perform that calculation on the entire list.
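A hedged sketch of that lookup-plus-similarity idea (names.txt stands in for the downloaded list of common names):

import difflib

with open('names.txt') as f:                       # placeholder word list
    known = set(line.strip().lower() for line in f)

def plausible(name):
    name = name.strip().lower()
    if name in known:                              # O(1) membership test
        return True
    # fuzzy fallback; bucket by first letter to keep the comparison cheap
    bucket = [n for n in known if n[:1] == name[:1]]
    return bool(difflib.get_close_matches(name, bucket, n=1, cutoff=0.8))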
0
361
false
0
1
service or strategy to detect if users enter fake names?
49,864,949
4
4
0
2
2
0
0.099668
0
I'm searching for services/strategies to detect when names entered in forms are spammy, for example: asdasdasd, ksfhaiodsfh, wpoeiruopwieru, zcpoiqwqwea - crazy keyboard input. I am trying Akismet, but it is not specifically meant for names (http://kemayo.wordpress.com/2005/12/02/akismet-py/). Thanks in advance.
0
c#,java,php,python,api
2012-12-18T13:42:00.000
0
13,934,311
One strategy is to keep a blacklist of weird names and/or a whitelist of normal names and use them to reject/accept entries. But such lists can be difficult to build.
0
361
false
0
1
service or strategy to detect if users enter fake names?
13,934,347
1
1
0
1
1
0
0.197375
1
I am trying to find a way to rename (change email address aka group id) a google group via api. Using the python client libraries and the provisioning api i am able to modify the group name and description, and I have used the group settings api to modify a group's settings. Is there a way to change the email address?
0
gdata-api,google-api-client,google-api-python-client,google-provisioning-api
2012-12-18T16:28:00.000
0
13,937,326
There is no group rename function for groups as there is for users. With the Group Settings and Provisioning APIs though, you can capture much of the group specifics and migrate that over to a new group. You would lose: -Group Archive -Managers (show only as members) -Email Delivery (Immediate, Digest, No-Delivery, etc)
0
923
false
0
1
Is it possible to change email address of a Google group via API?
13,938,196
1
3
0
2
2
0
0.132549
1
I wonder how to update rapidly changing numbers on a website. I have a machine that generates a lot of output, and I need to show it online. However, the update frequency is high, and therefore I am not sure how to handle it. It would be nice to show the last N numbers, say ten. The numbers are updated at 30 Hz. That might be too much for the human eye, but the human eye is only there for monitoring. I wonder how to do this. A page reload would keep the browser continuously loading the page, and the page needs to show more than just these numbers. I could write a raw web engine that serves the numbers over a specific IP address and port number, but even then I wonder whether the page reloading would be too slow, giving a strange experience to the users. How should I deal with such an extreme update rate of data on a website? Usually websites are not like that. In the tags for this question I named the languages that I understand. In the end I will probably write it in C#.
0
c#,python,asp.net,web-services,perl
2012-12-18T18:07:00.000
0
13,938,903
a) WebSockets in conjunction with Ajax to update only parts of the site would work; the disadvantage is that the client's infrastructure (proxies) must support them, which is currently not the case 99% of the time. b) With existing infrastructure the approach is long polling: you make an XmlHttpRequest using JavaScript. If no data is present, the request is held open on the server side for, say, 5 to 10 seconds; as soon as data is available, you answer the request immediately, and the client then immediately sends a new one. I managed to get >500 updates per second using a Java client connecting via a proxy over HTTP to a web server (real-time stock data display). You need to bundle several updates into each response in order to get enough throughput.
0
114
false
0
1
Rapid number updates on a website
13,939,065
1
1
0
2
0
0
0.379949
0
I am trying to stream audio in my TideSDK application, but it seems to be quite difficult. HTML5 audio does not work for me, and neither do video tags - the player simply keeps loading. I've tested and confirmed that my code works in many other browsers. My next attempt was VLC via Python bindings, but as far as I can tell you need to have VLC installed for the vlc.py file to work? Basically, what I want to do is play audio in a sophisticated way (probably through Python) and wrap it in my TideSDK application. I want it to work out of the box - nothing for my end users to install. I am, by the way, pretty new to the whole Python thing, but I learn fast, so I'd love to see some examples of how to get started! Perhaps a quite quirky way to do it would be to use Flash, but I'd rather not. For those of you who are not familiar with TideSDK, it's a way to build desktop applications with HTML, CSS, Python, Ruby and PHP.
0
python,html,tidesdk
2012-12-18T19:52:00.000
0
13,940,449
The current version ships a very old WebKit, and because of that the HTML5 support is lacking. Audio and video tags are currently not supported on Windows because the underlying WebKit implementation (WinCairo) does not support them. We are working on the first part, moving to the latest WebKit; once that is completed we are also planning to work on audio/video support on Windows.
0
401
false
1
1
Streaming audio in Python with TideSDK
13,948,150
1
2
0
0
0
0
0
0
I am using smtplib to send emails with python. I can get the email to send with the info I want in the body, but I can't find a good source to look over on how I can format the mail itself. Anyone know of a good resource.... I have a list that I want to iterate over and put lines between.
0
python
2012-12-18T21:01:00.000
0
13,941,436
This is more of an email question than a Python question; I'd refer you to the email RFCs. To address your question, though: between lines of the body you should put a CRLF.
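A small sketch of joining the list with CRLF and sending it with smtplib (addresses and SMTP host are placeholders):

import smtplib
from email.mime.text import MIMEText

items = ['first line', 'second line', 'third line']   # the list to iterate over
body = '\r\n'.join(items)                             # CRLF between lines

msg = MIMEText(body)
msg['Subject'] = 'Report'
msg['From'] = 'me@example.com'
msg['To'] = 'you@example.com'

server = smtplib.SMTP('localhost')                    # placeholder SMTP host
server.sendmail('me@example.com', ['you@example.com'], msg.as_string())
server.quit()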
0
2,444
false
0
1
smtplib email formatting
13,941,495
1
2
0
3
9
1
0.291313
0
I need to put a lot of filepaths in the form of strings in Python as part of my program. For example one of my directories is D:\ful_automate\dl. But Python recognizes some of the characters together as other characters and throws an error. In the example the error is IOError: [Errno 22] invalid mode ('wb') or filename: 'D:\x0cul_automate\\dl. It happens a lot for me and every time I need to change the directory name to one that may not be problematic.
0
python,string,error-handling,filepath
2012-12-19T15:03:00.000
0
13,955,176
Use a raw string instead of a normal string, i.e. use r'filepath'. That fixes the problem of the backslash "\" being interpreted as an escape character.
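A quick illustration of why the original path breaks and how the raw string fixes it:

broken  = 'D:\ful_automate\dl'     # \f is read as the form-feed byte \x0c
fixed   = r'D:\ful_automate\dl'    # raw string keeps the backslashes literal
also_ok = 'D:\\ful_automate\\dl'   # doubling the backslashes works too

print(repr(broken))                # 'D:\x0cul_automate\\dl' -- the reported error
print(fixed == also_ok)            # True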
0
22,129
false
0
1
File paths in Python in the form of string throw errors
13,955,742
1
1
1
1
6
0
0.197375
0
I've wrapped a C++ class using Boost.Python. These Objects have strong references (boost::shared_ptr) on the C++-side, and there may be intermittent strong references in Python as well. So far, everything works well. However, if I create a python weak reference from one of the strong references, this weak reference is deleted as soon as the last python strong reference disappears. I'd like the weak reference to stay alive until the last strong reference on the C++ side disappears as well. Is it possible to achieve that? Phrased another way: Is there a way to find out from python if a particular C++ object (wrapped by Boost.Python) still exists?
0
c++,python,boost-python
2012-12-19T15:49:00.000
0
13,956,055
How are you holding a "C++ strong reference" to the wrapped class ? I'm quite rusty on boost python, but I believe it's the boost::shared_ptr's deleter presence which ensures lifetime management. If that isn't the problem, you probably need to hold the instance in C++ in a boost::python::object.
0
241
false
0
1
Boost.Python: Getting a python weak reference to a wrapped C++ object
13,989,547
1
1
0
6
4
0
1.2
0
Wondering if it is possible to see a history of emails that a GAE app has sent? Need to look into the history for debugging purposes. Note that logging when I send the email or bcc'ing a user are not options for this particular question as the period I'm curious about was in the past (since then we are bcc'ing).
0
python,google-app-engine
2012-12-19T16:27:00.000
1
13,956,774
You can try one of the following ways. Log: write a log entry to the datastore each time you call send_mail, or write a log with the logging module and check it in the dashboard. Mail: while sending the email, add a debug email address in the email's "bcc" field. You can also check the "Sent Mail" folder in the email account used as the sender.
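A tiny sketch of the first suggestion, logging right before each send (the sender address and the helper name are assumptions, not App Engine requirements):

import logging
from google.appengine.api import mail

def notify(recipient, subject, body):
    logging.info('sending mail to %s: %s', recipient, subject)   # shows up in the dashboard logs
    mail.send_mail(sender='app@example.com', to=recipient,
                   subject=subject, body=body)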
0
88
true
1
1
Does a GAE app keep a log of the emails it sends?
13,957,307
1
1
0
5
1
0
0.761594
1
We have a suite of Selenium tests that open and close the browser in setup and teardown to start each new test. This approach takes a long time because opening and closing the browser is slow. Is there any way to open the browser once in the constructor, reset state on setup, clean up on teardown, and then close the browser in the destructor? An example would be really appreciated.
0
python,selenium
2012-12-19T17:02:00.000
0
13,957,413
You can use class or module level setup and teardown methods instead of test level setup and teardown. Be careful with this though, as if you don't reset your test environment explicitly in each test, you have to handle cleaning everything out (cookies, history, etc) manually, and recovering the browser if it has crashed, before each test.
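As a hedged sketch of that class-level pattern with unittest and the Python bindings (browser choice and the sample test are placeholders):

import unittest
from selenium import webdriver

class SharedBrowserSuite(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()      # opened once for the whole class

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()                     # closed once at the end

    def setUp(self):
        self.driver.delete_all_cookies()      # explicit reset between tests

    def test_homepage(self):
        self.driver.get('http://example.com')
        self.assertIn('Example', self.driver.title)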
0
601
false
1
1
Selenium test suite only open browser once
13,957,461
1
1
0
1
0
0
0.197375
1
I am using the Twitter Streaming API to get tweets from a specific query. However, some tweets come through with what looks like a different encoding (boxes appear instead of characters). Is there any way to fix this?
0
python,twitter
2012-12-21T13:51:00.000
0
13,991,387
Use a different font, or a better method of displaying those. All tweets in the streaming API are encoded with the same codec (JSON data is fully unicode aware), but not all characters can be displayed by all fonts.
0
228
false
0
1
Tweets from Twitter Streaming API
13,991,409
1
1
0
2
5
1
0.379949
0
Is there a way to make use of AES-NI in Python? I do want to make HMAC faster by making use of my hardware support for AES-NI. Thanks.
0
python,aes,aes-ni
2012-12-22T23:45:00.000
0
14,007,542
HMAC is using a secure cryptographic hash, not a symmetric cipher. You can make a "normal" MAC such as AES-CMAC perform better, but not a HMAC.
0
1,310
false
0
1
Python support for AES-NI
14,022,066
2
4
0
0
0
0
0
0
I'm trying to create a scheduled task using the Unix at command. I wanted to run a Python script, but quickly realized that at is configured to run whatever file I give it with sh. In an attempt to circumvent this, I created a file containing the command python mypythonscript.py and passed that to at instead. I have set the permissions on the Python file to executable by everyone (chmod a+x), but when the at job runs, I am told python: can't open file 'mypythonscript.py': [Errno 13] Permission denied. If I run source myshwrapperscript.sh, the shell script invokes the Python script fine. Is there some obvious reason why I'm having permission problems with at? Edit: I got frustrated with the Python script, so I went ahead and made an sh script version of the thing I wanted to run. I am now finding that the sh script returns rm: cannot remove <filename>: Permission denied (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything seems to go to shit when I have at do it.
0
python,linux,shell,unix
2012-12-23T00:37:00.000
1
14,007,784
Could you try: echo 'python mypythonscript.py' | at ...
0
1,059
false
0
1
Unix `at` scheduling with python script: Permission denied
14,033,835
2
4
0
0
0
0
0
0
I'm trying to create a scheduled task using the Unix at command. I wanted to run a Python script, but quickly realized that at is configured to run whatever file I give it with sh. In an attempt to circumvent this, I created a file containing the command python mypythonscript.py and passed that to at instead. I have set the permissions on the Python file to executable by everyone (chmod a+x), but when the at job runs, I am told python: can't open file 'mypythonscript.py': [Errno 13] Permission denied. If I run source myshwrapperscript.sh, the shell script invokes the Python script fine. Is there some obvious reason why I'm having permission problems with at? Edit: I got frustrated with the Python script, so I went ahead and made an sh script version of the thing I wanted to run. I am now finding that the sh script returns rm: cannot remove <filename>: Permission denied (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything seems to go to shit when I have at do it.
0
python,linux,shell,unix
2012-12-23T00:37:00.000
1
14,007,784
Start the script using python rather than the script name itself, e.g. python path/to/script.py. at tries to run everything as an sh script.
0
1,059
false
0
1
Unix `at` scheduling with python script: Permission denied
14,007,903
1
3
0
1
2
0
0.066568
0
I am looking for pointers/tips on how to generate a synthesized sound signal on the BeagleBone, akin to what the tone() function does on an Arduino. Ultimately, I'd like to connect a piezo or a speaker to a GPIO pin and hear a sound wave out of it. Any pointers?
0
python,audio,beagleboard
2012-12-23T01:28:00.000
0
14,007,965
The GPIO pins of the AM3359 are low-voltage and have insufficient drive strength to directly drive any kind of transducer. You would need to build a small circuit with an op-amp, transistor or FET to do this. Once you've done that, you'd simply set up a timer loop to change the state of the GPIO line at the required frequency. By far the quickest and easiest way of getting audio from this board is with a USB audio interface.
0
4,148
false
1
1
How to generate sound signal through a GPIO pin on BeagleBone
14,008,119