Q_Id (int64, 337 to 49.3M) | CreationDate (string, 23 chars) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, 6 to 105 chars) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, 6 to 11.6k chars) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, 15 to 29k chars) | Title (string, 11 to 150 chars) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
13,596,505 |
2012-11-28T01:53:00.000
| 1 | 0 | 1 | 1 |
python,windows,windows-8,command
| 46,435,281 | 23 | false | 0 | 0 |
If you are working in the command prompt and still face the issue even after adding the Python path to the PATH system variable, remember to restart the command prompt (cmd.exe).
| 11 | 116 | 0 |
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 0.008695 | 0 | 0 | 713,166 |
13,596,505 |
2012-11-28T01:53:00.000
| 3 | 0 | 1 | 1 |
python,windows,windows-8,command
| 45,970,626 | 23 | false | 0 | 0 |
Just go with the command py. I'm running Python 3.6.2 on Windows 7 and it works just fine.
I removed all the Python paths from the system PATH variable; they no longer show up when I run echo %path% in cmd, and Python still works fine.
I ran into this by accidentally pressing Enter while typing python...
EDIT: I didn't mention that I installed Python to a custom folder, C:\Python\
| 11 | 116 | 0 |
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 0.026081 | 0 | 0 | 713,166 |
13,596,505 |
2012-11-28T01:53:00.000
| 1 | 0 | 1 | 1 |
python,windows,windows-8,command
| 34,934,533 | 23 | false | 0 | 0 |
When you add the Python directory to the path (Computer > Properties > Advanced System Settings > Advanced > Environment Variables > System Variables > Path > Edit), remember to add a semicolon first, and make sure you are adding the precise directory where the file "python.exe" is stored (e.g. C:\Python\Python27, if that is where "python.exe" lives). Then restart the command prompt.
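A quick way to double-check the exact directory is a short hedged sketch like the one below; run it from any Python that does start (e.g. the installer's IDLE):

```python
# Print the directory that must appear in PATH for "python" to work.
import os
import sys

print(os.path.dirname(sys.executable))  # e.g. C:\Python27
print(os.path.dirname(sys.executable) in
      os.environ["PATH"].split(os.pathsep))  # True once PATH is set correctly
```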
| 11 | 116 | 0 |
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 0.008695 | 0 | 0 | 713,166 |
13,596,505 |
2012-11-28T01:53:00.000
| 2 | 0 | 1 | 1 |
python,windows,windows-8,command
| 13,596,605 | 23 | false | 0 | 0 |
Add the Python bin directory to your computer's PATH variable. It's listed under Environment Variables in Computer Properties -> Advanced Settings in Windows 7. It should be the same for Windows 8.
| 11 | 116 | 0 |
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 0.01739 | 0 | 0 | 713,166 |
13,596,505 |
2012-11-28T01:53:00.000
| 99 | 0 | 1 | 1 |
python,windows,windows-8,command
| 13,596,981 | 23 | true | 0 | 0 |
It finally worked!!!
I needed to do two things to get it to work:
Add C:\Python27\ to the end of the PATH system variable
Add C:\Python27\ to the end of the PYTHONPATH system variable
I had to add it to both for it to work.
If I added any subdirectories, it did not work for some reason.
Thank you all for your responses.
| 11 | 116 | 0 |
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 1.2 | 0 | 0 | 713,166 |
13,596,505 |
2012-11-28T01:53:00.000
| 47 | 0 | 1 | 1 |
python,windows,windows-8,command
| 38,766,602 | 23 | false | 0 | 0 |
The video was very useful.
Go to System Properties -> Advanced (or type "system env" in the Start menu).
Click Environment Variables.
Edit the 'PATH' variable.
Add 2 new paths: 'C:\Python27' and 'C:\Python27\scripts'.
Run cmd again and type python.
It worked for me.
| 11 | 116 | 0 |
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do?
Note: I have Python 2.7 and Python 3.2 installed on my computer.
|
Python command not working in command prompt
| 1 | 0 | 0 | 713,166 |
13,603,882 |
2012-11-28T11:21:00.000
| 1 | 0 | 0 | 0 |
python,nlp,svm,sentiment-analysis,feature-extraction
| 13,615,685 | 5 | false | 0 | 0 |
A linear SVM is recommended for high-dimensional features. In my experience, the ultimate limit on SVM accuracy depends on the positive and negative "features". You can do a grid search (or, in the case of a linear SVM, just search for the best cost value) to find the optimal parameters for maximum accuracy, but in the end you are limited by the separability of your feature sets. The fact that you are not getting 90% means that you still have some work to do finding better features to describe the members of your classes.
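As an illustration of that grid-search suggestion (not part of the original answer), a minimal sketch with modern scikit-learn, assuming X_train and y_train hold your features and labels:

```python
# Grid search over the cost parameter C of a linear SVM.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

search = GridSearchCV(LinearSVC(), {"C": [0.01, 0.1, 1, 10, 100]}, cv=5)
# search.fit(X_train, y_train)                   # your feature matrix / labels
# print(search.best_params_, search.best_score_)
```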
| 1 | 53 | 1 |
I am currently working on a project, a simple sentiment analyzer, such that there will be 2 and 3 classes in separate cases. I am using a corpus that is pretty rich in unique words (around 200,000). I used the bag-of-words method for feature selection and, to reduce the number of unique features, an elimination is done based on a threshold value of frequency of occurrence. The final set of features includes around 20,000 features, which is actually a 90% decrease, but not enough for the intended accuracy of test prediction. I am using LibSVM and SVM-light in turn for training and prediction (both linear and RBF kernels) and also Python and Bash in general.
The highest accuracy observed so far is around 75% and I need at least 90%. This is the case for binary classification. For multi-class training, the accuracy falls to ~60%. I need at least 90% in both cases and cannot figure out how to increase it: via optimizing training parameters or via optimizing feature selection?
I have read articles about feature selection in text classification and what I found is that three different methods are used, which have actually a clear correlation among each other. These methods are as follows:
Frequency approach of bag-of-words (BOW)
Information Gain (IG)
X^2 Statistic (CHI)
The first method is already the one I use, but I use it very simply and need guidance for a better use of it in order to obtain high enough accuracy. I am also lacking knowledge about practical implementations of IG and CHI and looking for any help to guide me in that way.
Thanks a lot, and if you need any additional info for help, just let me know.
@larsmans: Frequency Threshold: I am looking for the occurrences of unique words in examples, such that if a word occurs in different examples frequently enough, it is included in the feature set as a unique feature.
@TheManWithNoName: First of all, thanks for your effort in explaining the general concerns of document classification. I examined and experimented with all the methods you bring forward, and others. I found the Proportional Difference (PD) method best for feature selection, where features are uni-grams and Term Presence (TP) is used for the weighting (I didn't understand why you tagged Term-Frequency-Inverse-Document-Frequency (TF-IDF) as an indexing method; I rather consider it a feature-weighting approach). Pre-processing is also an important aspect of this task, as you mentioned. I used certain types of string elimination for refining the data, as well as morphological parsing and stemming. Also note that I am working on Turkish, which has different characteristics compared to English. Finally, I managed to reach ~88% accuracy (f-measure) for binary classification and ~84% for multi-class. These values are solid proof of the success of the model I used. This is what I have done so far. Now I am working on clustering and reduction models; I have tried LDA and LSI and am moving on to moVMF and maybe spherical models (LDA + moVMF), which seem to work better on corpora that have an objective nature, like news corpora. If you have any information and guidance on these issues, I will appreciate it. I need info especially to set up an interface (Python-oriented, open-source) between feature-space dimension reduction methods (LDA, LSI, moVMF, etc.) and clustering methods (k-means, hierarchical, etc.).
|
Feature Selection and Reduction for Text Classification
| 0.039979 | 0 | 0 | 30,760 |
13,605,582 |
2012-11-28T12:56:00.000
| 3 | 0 | 1 | 0 |
python,python-2.6
| 13,605,662 | 1 | true | 0 | 0 |
Yes, the L signifies a long (which is practically only limited by memory and computation time).
Python in general automatically promotes ints to longs as soon as the value exceeds the range the int type can handle. Also, if some operation with another long takes place, the result is always a long (0 + 0L → 0L).
In your case I can only speculate about what caused this effect. Maybe reading beyond a certain limit caused the promotion, or maybe some internal handling (which does not always take place) was the reason.
I don't think that it will matter to you.
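For what it's worth, a small Python 2 sketch of the same promotion rule:

```python
# Python 2: ints are silently promoted to longs past sys.maxint,
# and mixing with a long always yields a long.
import sys

print(type(sys.maxint))      # <type 'int'>
print(type(sys.maxint + 1))  # <type 'long'> - automatic promotion
print(repr(0 + 0L))          # '0L'
```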
| 1 | 2 | 0 |
I am working through a tutorial on searching an .html file using regular expressions (the re module). I am using the interpreter.
I opened the file and performed my search. After each attempt, I used f.seek(0) to return to the beginning of the file for the next attempt. I confirmed my location in the file using f.tell().
The first few times that I did this, the location in the file (in bytes) was returned without an L appended to it. But after several attempts, f.tell() returned the location with an L appended.
I understand that the L signifies that the file location (in bytes) is a long number. But why would f.tell() suddenly begin to return the L, when it had not on prior occasions?
I then closed and re-opened the file, and f.tell() returned the long number from the onset.
|
Python regarding f.tell()
| 1.2 | 0 | 0 | 295 |
13,606,584 |
2012-11-28T13:47:00.000
| 0 | 0 | 0 | 1 |
python,networking,httplib
| 13,643,155 | 3 | false | 0 | 0 |
After a lot more research, the glibc problem jedwards suggested seemed to be the cause. I did not find a general solution, but made a workaround for my use case.
Considering I only use one URL, I added my own "resolv.conf".
A small daemon gets the IP address of the URL when the PHY reports the cable connected. This IP is saved to my own resolv.conf, and the Python script retrieves the IP to use for posts from that file.
Not really a good solution, but a solution.
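For reference, one commonly cited workaround for the glibc resolver cache (untested here; assumes Linux/glibc) is to force a re-read of resolv.conf via res_init():

```python
# Re-initialize glibc's resolver after the network comes up.
import ctypes

libc = ctypes.CDLL("libc.so.6")
libc.__res_init()  # later name lookups re-read /etc/resolv.conf
```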
| 1 | 1 | 0 |
I hope this doesn't cross into superuser territory.
I have an embedded Linux where system processes are naturally quite stripped down. I'm not quite sure which system process monitors the physical layer and starts a DHCP client when the network cable is plugged in, so I made one myself.
The problem is that if I have a Python script using HTTP connections running before I have an IP address, it will never get a connection. Even after I have a valid IP, Python still reports
"Temporary error in name resolution"
So how can I get Python to notice the newly available connection without restarting the script?
Alternatively, am I missing some procedure Linux normally runs at network cable connect?
The DHCP client I am using is udhcpc and the Python version is 2.6, using httplib for connections.
|
Python not getting IP if cable connected after script has started
| 0 | 0 | 1 | 772 |
13,606,867 |
2012-11-28T14:02:00.000
| 126 | 0 | 1 | 0 |
python,subprocess,multiprocessing
| 13,607,111 | 3 | true | 0 | 0 |
The subprocess module lets you run and control other programs. Anything you can start from the command line on the computer can be run and controlled with this module. Use this to integrate external programs into your Python code.
The multiprocessing module lets you divide tasks written in Python over multiple processes to help improve performance. It provides an API very similar to the threading module; it provides methods to share data across the processes it creates, and makes the task of managing multiple processes to run Python code (much) easier. In other words, multiprocessing lets you take advantage of multiple processes to get your tasks done faster by executing code in parallel.
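A side-by-side sketch of the two (illustrative; a Unix-style echo command is assumed for the subprocess half):

```python
import multiprocessing
import subprocess

def square(x):
    return x * x

if __name__ == "__main__":
    # subprocess: run an external program, as if typed at a shell prompt.
    print(subprocess.check_output(["echo", "hello"]))

    # multiprocessing: run your own Python function in worker processes.
    pool = multiprocessing.Pool(4)
    print(pool.map(square, range(10)))  # computed in parallel
    pool.close()
    pool.join()
```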
| 3 | 98 | 0 |
My work should use parallel techniques, and I am a new user of Python. So I wonder if you could share some material about the Python multiprocessing and subprocess modules. What is the difference between these two?
|
What is the difference between multiprocessing and subprocess?
| 1.2 | 0 | 0 | 34,078 |
13,606,867 |
2012-11-28T14:02:00.000
| 19 | 0 | 1 | 0 |
python,subprocess,multiprocessing
| 13,606,965 | 3 | false | 0 | 0 |
Subprocess spawns new processes, but aside from stdin/stdout and whatever other APIs the other program may implement, you have no means to communicate with them. Its main purpose is to launch processes that are completely separate from your own program.
Multiprocessing also spawns new processes, but they run your code and are designed to communicate with each other. You use it to divide tasks within your own program across multiple CPU cores.
| 3 | 98 | 0 |
My work should use parallel techniques, and I am a new user of Python. So I wonder if you could share some material about the Python multiprocessing and subprocess modules. What is the difference between these two?
|
What is the difference between multiprocessing and subprocess?
| 1 | 0 | 0 | 34,078 |
13,606,867 |
2012-11-28T14:02:00.000
| 45 | 0 | 1 | 0 |
python,subprocess,multiprocessing
| 13,606,946 | 3 | false | 0 | 0 |
If you want to call an external program (especially one not written in Python) use subprocess.
If you want to call a Python function in a subprocess, use multiprocessing.
(If the program is written in Python, but is also importable, then I would try to call its functions using multiprocessing, rather than calling it externally through subprocess.)
| 3 | 98 | 0 |
My work should use parallel techniques, and I am a new user of Python. So I wonder if you could share some material about the Python multiprocessing and subprocess modules. What is the difference between these two?
|
What is the difference between multiprocessing and subprocess?
| 1 | 0 | 0 | 34,078 |
13,609,322 |
2012-11-28T16:04:00.000
| 0 | 0 | 0 | 1 |
python,unix,python-2.7,hp-quality-center,comobject
| 13,610,968 | 1 | false | 0 | 0 |
OTA is a Win32 COM library. In theory it's not intended to run on Linux.
You can try to use Wine on Linux, but you will need to run your Python application inside Wine as well.
| 1 | 0 | 0 |
Please forgive me if my question confuses you.
I have to use HP Quality Center's QC OTA library (DLL) in my Python script.
I was able to do this on my Windows system after registering that DLL using the COM MakePy utility. The utility gave me a .py file for that .dll inside the gen_py folder.
Here is my question:
Will I be able to use that same registered .py file on a Unix system as well? Or do I have any other alternatives that would let my Python script use that Quality Center library file on Unix as a Python-compatible class?
|
using a Python Win32Com .py in Unix - QC OTA library
| 0 | 0 | 0 | 577 |
13,609,985 |
2012-11-28T16:38:00.000
| 0 | 0 | 0 | 0 |
javascript,python,html,django,web-applications
| 13,610,314 | 2 | false | 1 | 0 |
As Sanjay says, prefer in-memory solutions (online statuses are short-lived) like the Django cache (Redis or Memcached).
If you want a simple way of updating a user's online status on an already loaded web page, use any lib like jQuery to AJAX-poll a URL giving the user's status, and then update the tiny bit of your page showing the wanted status.
Don't poll this page too often; once every 15 seconds seems reasonable.
| 2 | 1 | 0 |
I am quite new to web development and am working on a social networking site.
Now I want to add functionality to show whether a person is online.
One of the ways I figure I could do this is by keeping an online-status bit in the database.
My question is how to do it dynamically. Say the page is loaded and a user (say, a connection) comes online. How do I dynamically change the status of that connection on that page?
I wanted to know if there are any tools (libraries) available for this type of tracking. My site is in Python using the Django framework. I think something can be done using JavaScript/jQuery. I want to know if I am going in the right direction or whether there is anything else I should look into.
|
Tracking online status?
| 0 | 0 | 0 | 302 |
13,609,985 |
2012-11-28T16:38:00.000
| 1 | 0 | 0 | 0 |
javascript,python,html,django,web-applications
| 13,610,508 | 2 | false | 1 | 0 |
Create a new model with a last_activity DateTimeField and a OneToOneField to User. Alternatively, if you are subclassing User, using a custom User in Django 1.5, or using a user profile, just add the field to that model.
Write a custom middleware that automatically updates the last_activity field for each user on every request.
Write an is_online method in one of your models that uses a timedelta to determine a user's inactivity period and return a boolean for whether they are online. For example, if their last_activity was more than 15 minutes ago, return False.
Write a view that is polled through jQuery ajax to return a particular user's online status.
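A rough sketch of the first three steps in modern Django syntax (model, field, and class names are illustrative, not from the original answer):

```python
from datetime import timedelta

from django.conf import settings
from django.db import models
from django.utils import timezone

class Profile(models.Model):
    user = models.OneToOneField(settings.AUTH_USER_MODEL,
                                on_delete=models.CASCADE)
    last_activity = models.DateTimeField(default=timezone.now)

    def is_online(self):
        # Online means: active within the last 15 minutes.
        return timezone.now() - self.last_activity < timedelta(minutes=15)

class LastActivityMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if request.user.is_authenticated:
            Profile.objects.filter(user=request.user).update(
                last_activity=timezone.now())
        return self.get_response(request)
```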
| 2 | 1 | 0 |
I am quite new to web development and am working on a social networking site.
Now I want to add functionality to show whether a person is online.
One of the ways I figure I could do this is by keeping an online-status bit in the database.
My question is how to do it dynamically. Say the page is loaded and a user (say, a connection) comes online. How do I dynamically change the status of that connection on that page?
I wanted to know if there are any tools (libraries) available for this type of tracking. My site is in Python using the Django framework. I think something can be done using JavaScript/jQuery. I want to know if I am going in the right direction or whether there is anything else I should look into.
|
Tracking online status?
| 0.099668 | 0 | 0 | 302 |
13,611,126 |
2012-11-28T17:38:00.000
| 1 | 0 | 0 | 0 |
python,opencv,machine-learning,computer-vision,object-detection
| 13,612,350 | 1 | true | 0 | 0 |
It sounds like you first need to determine which features you would like to train your classifier on, since the Haar classifier benefits from those extra features. From there you will need to train the classifier: this requires a lot of images that contain cars and a lot that do not, and the training run then tweaks its thresholds to classify as well as it can with your selected features.
To get a better classifier you will have to figure out the order of your features, and the optimal order to combine them in, to further drill into the object and determine whether it is in fact what you are looking for. Again, this will require a lot of examples for your particular features and your problem as a whole.
| 1 | 1 | 1 |
I am having a little bit of trouble creating a Haar classifier. I need to build a classifier to detect cars. At the moment I have made a program in Python that reads in an image and lets me draw a rectangle around the area the object is in. Once the rectangle is drawn, it outputs the image name and the top-left and bottom-right coordinates of the rectangle. I am unsure of where to go from here and how to actually build up the classifier. Can anyone offer me any help?
EDIT:
I am looking for help on how to use opencv_traincascade. I have looked at the documentation but I can't quite figure out how to use it to create the xml file to be used in the detection program.
|
Creating a haar classifier using opencv_traincascade
| 1.2 | 0 | 0 | 1,867 |
13,612,225 |
2012-11-28T18:44:00.000
| 3 | 0 | 0 | 1 |
python,cpu,python-idle
| 13,612,274 | 1 | false | 0 | 0 |
If you run the script at low priority (nice -n 19 python foo.py on Unix), it will still be running all the time, but won't have much of a noticeable impact on higher-priority processes (which will be all of them, because 19 is the lowest priority level).
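For the Windows-only case mentioned in the question's edit, here is a hedged ctypes sketch that reports how long the user has been idle, so the script can pause while someone is actively working (uses the Win32 GetLastInputInfo API):

```python
import ctypes

class LASTINPUTINFO(ctypes.Structure):
    _fields_ = [("cbSize", ctypes.c_uint), ("dwTime", ctypes.c_uint)]

def idle_milliseconds():
    # Milliseconds since the last keyboard or mouse input.
    info = LASTINPUTINFO()
    info.cbSize = ctypes.sizeof(LASTINPUTINFO)
    ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info))
    return ctypes.windll.kernel32.GetTickCount() - info.dwTime

# e.g. only run the heavy work after 5 minutes without user input:
# if idle_milliseconds() > 5 * 60 * 1000: do_heavy_work()
```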
| 1 | 1 | 0 |
I want to run a very long-running Python script that is hard on the CPU.
Is there a way to find out if the user is actively working (moving the mouse or typing)?
Edit: running on Windows only. Priority is not a good idea; it still takes a lot of CPU.
|
how to run a python script, only when user is not actively working?
| 0.53705 | 0 | 0 | 121 |
13,613,327 |
2012-11-28T19:54:00.000
| 6 | 0 | 1 | 0 |
python,variables,concatenation
| 13,613,342 | 2 | true | 0 | 0 |
The cleanest way is to put the variables into a list or a dictionary, and then access them by index or by name.
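A minimal sketch of that idea (the stored values are placeholders):

```python
# Nine values in one list; indexing replaces name concatenation.
digits = ["one", "two", "three", "four", "five",
          "six", "seven", "eight", "nine"]  # whatever digit_1..digit_9 held
x = 3
print(digits[x - 1])  # what digit_3 would have contained
```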
| 2 | 2 | 0 |
I have some code that generates two strings - digit_ and 'x', where x is an integer. My gut feeling is that this is a stupid question, but I can't find an answer online. I have variables called digit_1, digit_2 ... etc. up to digit_9. How can I call the correct one of these without using a really long if/elif chain? Is there a way of calling a variable from a concatenation of its name?
Sam
|
Call variable from two concatenated strings Python
| 1.2 | 0 | 0 | 151 |
13,613,327 |
2012-11-28T19:54:00.000
| 1 | 0 | 1 | 0 |
python,variables,concatenation
| 13,613,345 | 2 | false | 0 | 0 |
No, there isn't a good way to "create" variable names and access them.
However, you can just use a list, and index into it instead.
| 2 | 2 | 0 |
I have some code that generates two strings - digit_ and 'x', where x is an integer. My gut feeling is that this is a stupid question, but I can't find an answer online. I have variables called digit_1, digit_2 ... etc. up to digit_9. How can I call the correct one of these without using a really long if/elif chain? Is there a way of calling a variable from a concatenation of its name?
Sam
|
Call variable from two concatenated strings Python
| 0.099668 | 0 | 0 | 151 |
13,613,336 |
2012-11-28T19:54:00.000
| 6 | 0 | 1 | 0 |
python,file-io,concatenation
| 13,613,428 | 12 | false | 0 | 0 |
What's wrong with UNIX commands? (Given you're not working on Windows.)
ls | xargs cat | tee output.txt does the job (you can call it from Python with subprocess if you want).
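If you'd rather stay in Python, a hedged sketch using shutil.copyfileobj, which streams the data in chunks instead of reading line by line:

```python
import shutil

filenames = ["file1.txt", "file2.txt"]  # ... your 20 names
with open("output.txt", "wb") as out:
    for name in filenames:
        with open(name, "rb") as src:
            shutil.copyfileobj(src, out)  # copies in chunks, not lines
```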
| 1 | 209 | 0 |
I have a list of 20 file names, like ['file1.txt', 'file2.txt', ...]. I want to write a Python script to concatenate these files into a new file. I could open each file by f = open(...), read line by line by calling f.readline(), and write each line into that new file. It doesn't seem very "elegant" to me, especially the part where I have to read/write line by line.
Is there a more "elegant" way to do this in Python?
|
How do I concatenate text files in Python?
| 1 | 0 | 0 | 319,053 |
13,615,789 |
2012-11-28T22:45:00.000
| 0 | 0 | 1 | 0 |
python
| 13,615,827 | 3 | false | 0 | 0 |
You'd have to create a database or hash map from nicknames onto formal names. If you can find such a list online, implementing the map will be trivial. The real fun will be getting a complete enough list, ensuring variations are taken care of, and making sure you don't run into problems when people's formal names ARE their nicknames. Not everyone who goes by Dave has the formal name David, for example. The person's formal name may very well be Dave.
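A minimal sketch of such a map (the entries are illustrative, not a real nickname database):

```python
NICKNAMES = {"dave": "David", "mike": "Michael", "liz": "Elizabeth"}

def formal_name(name):
    # Fall back to the given name when no mapping is known.
    return NICKNAMES.get(name.lower(), name)

print(formal_name("Dave"))  # David
```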
| 1 | 4 | 0 |
I am trying to mapping users from different systems based on user first and last name in Python.
One issue is that the first names are in many cases 'nicknames.'
For example, for a user, his first name is 'Dave' in one system, and 'David' in another.
Is there any easy way in python to convert common nicknames like these to their formal counterparts?
Thanks!
|
Converting user nickname to formal first name in Python
| 0 | 0 | 0 | 3,686 |
13,616,559 |
2012-11-28T23:56:00.000
| 3 | 0 | 1 | 0 |
python
| 13,616,592 | 1 | true | 0 | 0 |
The threading and multiprocessing modules already use a similar interface for this. multiprocessing also ships multiprocessing.dummy, which replicates its API on top of the threading module.
You can use import multiprocessing as something and import threading as something to switch between the two unseen.
Note that data sharing between the two is different, and this might create potential pitfalls, as noted by jdi.
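A sketch of the aliasing idea, using multiprocessing.dummy as the thread-backed twin of multiprocessing:

```python
USE_PROCESSES = True

if USE_PROCESSES:
    from multiprocessing import Pool
else:
    from multiprocessing.dummy import Pool  # same API, backed by threads

def work(x):
    return x * x

if __name__ == "__main__":
    pool = Pool(4)
    print(pool.map(work, range(10)))
    pool.close()
    pool.join()
```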
| 1 | 0 | 0 |
Is there a module that abstracts threading and multiprocessing in Python? I want to have the option of switching between them in the future.
|
Abstracting threading and multiprocessing in Python?
| 1.2 | 0 | 0 | 87 |
13,619,021 |
2012-11-29T04:49:00.000
| 3 | 1 | 0 | 1 |
python,tornado,wsgi,cherrypy
| 13,619,836 | 2 | false | 1 | 0 |
What you are after would mostly happen anyway with WSGI servers. This is because a Python exception only affects the current request; the framework or WSGI server catches the exception, logs it, and translates it to an HTTP 500 status page. The application is still in memory and continues to handle future requests.
What it comes down to is what exactly you mean by 'crashes Apache process'.
It would be rare for your code to crash the whole process, as in causing it to completely exit due to a core dump. So you may be conflating an application-level language error with a full process crash.
Even if you did find a way to crash a process, Apache/mod_wsgi handles that okay and the process will be replaced. The Gunicorn WSGI server will also do that. CherryPy will not, unless you have a process manager running which monitors and restarts it. Tornado in its single-process mode has the same problem. Using Tornado as the worker in Gunicorn is one way around that; I believe Tornado itself may now include a process manager for running multiple processes, which allows it to restart processes if they die.
Do note that if the application bug which caused the Python exception is bad enough and corrupts state within the process, subsequent requests may have issues. This is the one difference from PHP. With PHP, after any request, successful or not, the application is effectively thrown away and doesn't persist, so buggy code cannot affect subsequent requests. In Python, because the process with loaded code and retained state is kept between requests, you could technically get into a state where you would have to restart the process to fix it. I don't know of any WSGI server, though, that has a mechanism to automatically restart a process if one request returned an error response.
| 1 | 1 | 0 |
I'm coming from the PHP/Apache world, where running an application is super easy. Whenever a PHP application crashes, the Apache process running that request will stop, but the server will still be running happily and responding to other clients. Is there a way to have a Python application work in a similar way? How would I set up a WSGI server like Tornado or CherryPy so it works similarly? Also, how would I run several applications with different domains from one server?
|
How to setup WSGI server to run similarly to Apache?
| 0.291313 | 0 | 0 | 300 |
13,619,088 |
2012-11-29T04:55:00.000
| 2 | 0 | 1 | 1 |
python,virtualenv
| 13,619,252 | 2 | true | 1 | 0 |
Tell Eclipse or IDLE that the Python interpreter is django_venv/bin/python instead of /usr/bin/python.
| 1 | 3 | 0 |
When I enter my virtual environment (source django_venv/bin/activate), how do I make that environment carry over to apps run outside the terminal, such as Eclipse or even IDLE? Even if I run IDLE from the virtualenv terminal window command line (by typing idle), none of my pip-installed frameworks are available within IDLE, such as SQLAlchemy (which is found just fine when running a python script from within the virtual environment).
|
Virtualenv and python - how to work outside the terminal?
| 1.2 | 0 | 0 | 2,074 |
13,620,867 |
2012-11-29T07:32:00.000
| 1 | 0 | 0 | 0 |
python,django,postgresql,sharding
| 13,639,532 | 2 | false | 1 | 0 |
I agree with @DanielRoseman. Also, how many rows is too many? If you are careful with indexing, you can handle a lot of rows with no performance problems. Keep your indexed values small (ints). I've got tables in excess of 400 million rows that produce sub-second responses, even when joining with other many-million-row tables.
It might make more sense to break User up into multiple tables, so that the user object has a core of commonly used things and the "profile" info lives elsewhere (the standard Django setup). Copies would be a small table referencing Books, which holds the bulk of the data. Considering how much RAM you can put into a DB server these days, sharding before you have to seems wrong.
| 1 | 3 | 0 |
I'm starting a Django project and need to shard multiple tables that are likely to have too many rows. I've looked through threads here and elsewhere, and followed the Django multi-db documentation, but am still not sure how it all stitches together. My models have relationships that would be broken by sharding, so it seems the options are to either drop the foreign keys or forgo sharding the respective models.
For argument's sake, consider the classic Author, Publisher and Book scenario, but throw in book copies and users that can own them. Say books and users have to be sharded. How would you approach that? A user may own a copy of a book that's not in the same database.
In general, what are the best practices you have used for routing and the sharding itself? Did you use Django database routers, manually select a database inside commands based on your sharding logic, or override some parts of the ORM to achieve that?
I'm using PostgreSQL on Ubuntu, if it matters.
Many thanks.
|
Sharding a Django Project
| 0.099668 | 1 | 0 | 1,300 |
13,622,895 |
2012-11-29T09:45:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,web2py
| 13,892,159 | 1 | false | 1 | 0 |
Mutual exclusion is already built into the DBMS, so we just have to use it. Let's take an example.
First, the table in your model should be defined in such a way that the room number is unique (use a UNIQUE constraint).
When User1 and User2 both query for a room, they should get a response saying the room is vacant. When both users send the "BOOK" request for that room at the same time, the booking function should directly insert both users' "BOOK" requests into the db, but only one will actually be executed (because of the UNIQUE constraint) and the other will raise a DAL exception. Catch the exception and respond to the user whose "BOOK" request was unsuccessful, saying: you just missed this room by an instant :-)
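A hedged web2py-style sketch of that flow (table and field names are made up, it assumes web2py's environment where db and Field are predefined, and constraint enforcement on the App Engine datastore may differ):

```python
db.define_table("booking",
                Field("room_number", "integer", unique=True),
                Field("booked_by"))

try:
    db.booking.insert(room_number=101, booked_by=user_id)
    db.commit()        # only one concurrent insert can succeed
except Exception:
    db.rollback()
    # tell this user the room was taken an instant ago
```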
Hope this helped.
| 1 | 0 | 0 |
I'm making a room reservation system in Web2Py over Google App Engine.
When a user is booking a room, the system must be sure that the room is really available and that no one else reserved it just a moment before.
To be sure, I make a query to see if the room is available, then I make the reservation. The problem is: how can I do this transaction with a kind of "mutual exclusion", to be sure that the room really goes to this user?
Thank you!! :)
|
Transactions in Web2Py over Google App Engine
| 0.197375 | 0 | 0 | 192 |
13,623,206 |
2012-11-29T10:02:00.000
| 0 | 0 | 0 | 0 |
python,django,django-sites
| 13,623,382 | 1 | true | 1 | 0 |
No. The Django site name does not have anything to do with how it's hosted - it's purely used for internal stuff like displaying the name on the site itself and on emails.
| 1 | 0 | 0 |
I am new to the Django framework. I created a site with the name "project" and it is working on my local machine. Now I am trying to move it to my test server ("ideometrics.se"). I created a subdomain ("project.ideometrics.se") to access this application from that subdomain. Do I have to change my Django site name to "project.ideometrics.se" to make it work on my server?
Any help is appreciated.
|
django site name needs to match domain name registered?
| 1.2 | 0 | 0 | 102 |
13,623,601 |
2012-11-29T10:24:00.000
| 0 | 0 | 1 | 0 |
python,eclipse,console,pydev
| 49,955,840 | 2 | false | 0 | 0 |
Not sure if the source of the problem is the same, but I encountered a similar issue in which the PyDev console kept disappearing as soon as I clicked in the script editor, even if the console's Pin Console button was clicked.
I solved this simply by double-clicking on the script's tab, which allowed the console to stay visible at all times.
| 2 | 6 | 0 |
I've just started using PyDev in Eclipse and I have a lot of questions about the way the interactive console works.
I have found out how to launch an interactive console and use it to run functions. My questions are these:
Every time I change my code and re-run it, my interactive console disappears. This is annoying, as I have to reopen a console and I can't see or rerun my previous history. If I pin it, it stays, but then I can't run any code. Is there any way to keep the interactive console open all the time, but also be able to run your code? I currently spend a significant amount of my day closing and opening consoles!
How can I run a function from the interactive console, but still get the debugger to stop at breakpoints? I can use the debugger if I am running the code from a main function, but if I run the code from the console, it just skips right over breakpoints.
thanks
Niall
|
pydev interactive console always disappearing and other console questions
| 0 | 0 | 0 | 468 |
13,623,601 |
2012-11-29T10:24:00.000
| 0 | 0 | 1 | 0 |
python,eclipse,console,pydev
| 17,660,192 | 2 | false | 0 | 0 |
Instead of clicking on "Python Run", you may press Ctrl+Alt+Enter with the desired Python file active, and the console will call execfile on it. All your previous history stays there. You can also select part of your code and run only that.
As far as I know, you can't. Check the pdb module instead.
| 2 | 6 | 0 |
I've just started using PyDev in Eclipse and I have a lot of questions about the way the interactive console works.
I have found out how to launch an interactive console and use it to run functions. My questions are these:
Every time I change my code and re-run it, my interactive console disappears. This is annoying, as I have to reopen a console and I can't see or rerun my previous history. If I pin it, it stays, but then I can't run any code. Is there any way to keep the interactive console open all the time, but also be able to run your code? I currently spend a significant amount of my day closing and opening consoles!
How can I run a function from the interactive console, but still get the debugger to stop at breakpoints? I can use the debugger if I am running the code from a main function, but if I run the code from the console, it just skips right over breakpoints.
thanks
Niall
|
pydev interactive console always disappearing and other console questions
| 0 | 0 | 0 | 468 |
13,623,771 |
2012-11-29T10:33:00.000
| 0 | 0 | 1 | 1 |
python,eclipse,eclipse-plugin
| 13,632,373 | 4 | false | 0 | 0 |
Have you considered giving the interpreter location as a relative path? For example:
..\..\..\python\python.exe. I am not sure what the working directory of PyDev is, but if you put enough .. in, Windows will stop at the drive root.
| 3 | 2 | 0 |
I'm using the PortableApps application with portable Eclipse and portable Python installed. I've equipped my Eclipse with the PyDev plugin, enabling me to run and debug my files on whatever Windows-based platform I'd like. The problem is that, in order to use the interpreter inside my USB stick, I need to point PyDev's settings at the proper location of the Python interpreter. With the USB drive connected to different computers, I get a different drive letter for my USB stick, which breaks the configured path of the Python installed on the stick.
Is there any way to make Eclipse's PyDev plugin permanently look for the Python interpreter installed on my USB stick?
|
PyDev interpreter indication within USB drive
| 0 | 0 | 0 | 1,265 |
13,623,771 |
2012-11-29T10:33:00.000
| 0 | 0 | 1 | 1 |
python,eclipse,eclipse-plugin
| 14,310,218 | 4 | false | 0 | 0 |
Have you tried to use subst? You can configure it with some letter like Z: or X:, and on any computer where you plug in your pen drive, you just run a .bat with the subst and your environment is ready.
| 3 | 2 | 0 |
I'm using the PortableApps application with portable Eclipse and portable Python installed. I've equipped my Eclipse with the PyDev plugin, enabling me to run and debug my files on whatever Windows-based platform I'd like. The problem is that, in order to use the interpreter inside my USB stick, I need to point PyDev's settings at the proper location of the Python interpreter. With the USB drive connected to different computers, I get a different drive letter for my USB stick, which breaks the configured path of the Python installed on the stick.
Is there any way to make Eclipse's PyDev plugin permanently look for the Python interpreter installed on my USB stick?
|
PyDev interpreter indication within USB drive
| 0 | 0 | 0 | 1,265 |
13,623,771 |
2012-11-29T10:33:00.000
| 0 | 0 | 1 | 1 |
python,eclipse,eclipse-plugin
| 18,883,477 | 4 | false | 0 | 0 |
use a relative eclipse path variable like:
{eclipse_home}..\..\..\PortablePython\App\python.exe
| 3 | 2 | 0 |
I'm using the PortableApps application with portable Eclipse and portable Python installed. I've equipped my Eclipse with the PyDev plugin, enabling me to run and debug my files on whatever Windows-based platform I'd like. The problem is that, in order to use the interpreter inside my USB stick, I need to point PyDev's settings at the proper location of the Python interpreter. With the USB drive connected to different computers, I get a different drive letter for my USB stick, which breaks the configured path of the Python installed on the stick.
Is there any way to make Eclipse's PyDev plugin permanently look for the Python interpreter installed on my USB stick?
|
PyDev interpreter indication within USB drive
| 0 | 0 | 0 | 1,265 |
13,624,198 |
2012-11-29T10:55:00.000
| 0 | 0 | 0 | 1 |
python,winapi,backup,volume-shadow-service
| 13,625,736 | 3 | false | 0 | 0 |
I would look into IronPython on your Windows client side, simply because it will give you access to COM+ DLLs and other WinAPI objects. It's .NET, but it would still be Python. I've not used it enough to say with 100% certainty that it will work with VSS, but it should.
| 1 | 8 | 0 |
I'm working on a remote backup solution in Python. The server part will run on Unix/Linux because it will use hard links for efficient incremental backups.
The client part, however, will have to run on Windows too, and file locking can be a problem.
From what I've researched, the Volume Shadow Copy Service (VSS) is the thing I need. It is similar to an LVM snapshot and isn't affected by file locking.
The VSS API, however, doesn't seem to be implemented in pywin32.
My current idea is to use some wrapper that will create a temporary VSS snapshot, run the client, and delete it afterwards.
I'm wondering if anyone has experience with this scenario.
|
Consistent backups in python
| 0 | 0 | 0 | 1,558 |
13,627,686 |
2012-11-29T14:13:00.000
| 6 | 0 | 1 | 0 |
python,macos,python-2.7,osx-mountain-lion,pyc
| 13,630,073 | 4 | false | 0 | 0 |
You can avoid the creation of both .pyc and .pyo files with: python -B script.py
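The same switch is also exposed inside the interpreter, if you'd rather set it from code:

```python
import sys
sys.dont_write_bytecode = True  # programmatic equivalent of python -B
```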
| 1 | 8 | 0 |
It seems that on OS X 10.8 (with Python 2.7) the .pyc files are created even if you set the environment variable PYTHONDONTWRITEBYTECODE=1.
How can I prevent this from happening, or how can I convince Python not to create these files in the same location as the source files?
|
How to avoid creation of .pyc files on OS X 10.8 with Python 2.7?
| 1 | 0 | 0 | 4,598 |
13,628,190 |
2012-11-29T14:42:00.000
| 0 | 0 | 0 | 0 |
java,python,cookies,http-headers,httpwebrequest
| 13,628,291 | 1 | true | 1 | 0 |
When you send the login information (and usually in response to many other requests), the server will set some cookies on the client. You must keep track of them and send them back to the server with each subsequent request.
A full implementation would also keep track of how long they are supposed to be stored.
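A minimal Python 2 sketch with the stdlib tools of the era (the URLs are placeholders):

```python
import cookielib
import urllib2

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

# The login response's Set-Cookie headers land in the jar...
opener.open("http://example.com/login")
# ...and are sent back automatically on later requests.
opener.open("http://example.com/protected")
```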
| 1 | 0 | 0 |
I have a crawler that automates the login and crawling for a website, but since the login was changed it is not working anymore.
I am wondering: can I feed a browser cookie (i.e., I manually log in) to my HTTP request? Is there anything in principle that would prevent this from working? How do I find the browser cookies relevant to the website?
If it works, how do I get the "raw" cookie strings I can stick into my HTTP request?
I am quite new to this area, so forgive my ignorant questions. I can use either Python or Java.
|
How to use browser cookies programmatically
| 1.2 | 0 | 1 | 925 |
13,632,415 |
2012-11-29T18:35:00.000
| 0 | 1 | 0 | 0 |
python,emacs
| 13,634,637 | 3 | false | 0 | 0 |
bzr branch lp:python-mode/components-python-mode
i.e. the development branch of python-mode.el.
It delivers an inlined Pymacs, which provides auto-completion right out of the box for me.
It might conflict with an already installed Pymacs, though.
| 1 | 4 | 0 |
I've installed python-mode, Pymacs and pycomplete+ from el-get on Emacs 24, but I am not able to get auto-completion for Python in Emacs.
|
Emacs python autocompletion
| 0 | 0 | 0 | 6,063 |
13,632,479 |
2012-11-29T18:39:00.000
| 1 | 0 | 1 | 0 |
python-3.x,icons
| 13,632,646 | 1 | true | 0 | 1 |
Short answer: no, you cannot. The application icon is set in completely different ways in different environments. Usually it is a setting on the shortcut.
Longer answer: it also depends on what you mean by "application icon".
If you mean the icon for the menu entry in the Applications/Start/whatever menu, then you can probably set it with Python when you are creating the shortcut in whatever menu system you are using, but it will be specific to your system and not portable.
There may be some library out there to help you create installers for different systems, but it's definitely no longer "clean Python 3" in any meaningful sense. :-)
| 1 | 0 | 0 |
Can I set the application icon using clean Python 3?
I have an .ico file in the same directory and want to add it to the application.
(Without tkinter.)
|
Application Icon in Python 3
| 1.2 | 0 | 0 | 156 |
13,636,412 |
2012-11-29T23:04:00.000
| 1 | 1 | 0 | 0 |
python,sublimetext2,sublimelinter
| 13,636,603 | 1 | true | 0 | 0 |
pylint is first and foremost a command-line tool for code analysis.
You can simply run it on a module from the command line and it will generate a whole report with every error/warning within the project.
I don't know if such a feature exists within Sublime Text, but this is not something you will use often. I simply use the command line about once a week to check if I didn't miss anything.
I also use the SublimeTODO plugin, which basically analyses the code looking for TODO comments. Unlike sublimelint, it does generate a report for all the open files or files within a project.
| 1 | 0 | 0 |
I'm starting to use Sublime Text 2 in favor of Eclipse for developing Python code.
In all I'm liking the change, but one of the things I miss from Eclipse is a convenient "problems" window that shows a summary of all errors and warnings from files in the project. While sublimelinter helps, it only works for files that you have open and are editing. It will place a box clearly around an error as you type it, but what if there are other problems in other files that you haven't seen yet (i.e., that might have been committed by a coworker, etc.)?
Does there exist something in Sublime Text 2 that will show a summary of linting output?
|
Sublime Text 2, Sublimelint, summary of problems
| 1.2 | 0 | 0 | 813 |
13,636,419 |
2012-11-29T23:05:00.000
| 3 | 0 | 0 | 0 |
python,elasticsearch,django-haystack
| 13,637,244 | 2 | true | 0 | 0 |
If you're using the edgeNGram tokenizer, then it will treat "EdgeNGram 12323" as a single token and apply the edge-ngramming process to it. For example, with min_gram=1 and max_gram=4, you'll get the following tokens indexed: ["E", "Ed", "Edg", "Edge"]. So I guess this is not what you're really looking for - consider using the edgeNGram token filter instead.
If you're using the edgeNGram token filter, make sure you're using a tokenizer that actually tokenizes the text "EdgeNGram 12323" into two tokens: ["EdgeNGram", "12323"] (the standard or whitespace tokenizer will do the trick). Then apply the edgeNGram filter next to it.
In general, edgeNGram will take "12323" and produce tokens such as "1", "12", "123", etc.
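For illustration, the settings described above written as the Python dict you would send when creating the index (analyzer/filter names are made up; the filter type was spelled edgeNGram in Elasticsearch versions of that era, edge_ngram later, so treat this as a sketch):

```python
settings = {
    "analysis": {
        "analyzer": {
            "partial_match": {
                "tokenizer": "whitespace",   # "EdgeNGram 12323" -> 2 tokens
                "filter": ["lowercase", "my_edge_ngram"],
            }
        },
        "filter": {
            "my_edge_ngram": {
                "type": "edgeNGram",         # "edge_ngram" in newer versions
                "min_gram": 1,
                "max_gram": 4,
            }
        },
    }
}
```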
| 1 | 6 | 1 |
Any ideas on how EdgeNgram treats numbers?
I'm running haystack with an ElasticSearch backend. I created an indexed field of type EdgeNgram. This field will contain a string that may contain words as well as numbers.
When I run a search against this field using a partial word, it works how it's supposed to. But if I put in a partial number, I'm not getting the result that I want.
Example:
I search for the indexed field "EdgeNgram 12323" by typing "edgen" and I'll get the index returned to me. If I search for that same index by typing "123" I get nothing.
Thoughts?
|
ElasticSearch: EdgeNgrams and Numbers
| 1.2 | 0 | 0 | 2,671 |
13,636,458 |
2012-11-29T23:08:00.000
| 1 | 0 | 1 | 0 |
python,dependencies,pip,uninstallation
| 13,636,649 | 1 | false | 0 | 0 |
I think it leaves the package broken. pip install and pip uninstall are neither atomic nor very reliable (certainly not like, say, apt, which isn't problem-free but is much more robust).
| 1 | 1 | 0 |
What happens when pip uninstall fails in a virtualenv? Is it smart enough to reinstall the package, or does it raise an exception and leave the virtualenv broken?
I'm making a script which uninstalls packages, runs pytest, and installs them again. I need it for testing whether dependencies are actually unnecessary.
|
What happens if pip uninstall fails?
| 0.197375 | 0 | 0 | 274 |
13,636,758 |
2012-11-29T23:33:00.000
| 3 | 0 | 1 | 0 |
python,numpy,scipy,fft
| 13,645,588 | 1 | false | 0 | 0 |
If the data is not uniformly sampled (i.e. Tx[i] - Tx[i-1] is not constant), then you cannot do an FFT on it.
Here's an idea:
If you have a pretty good idea of the bandwidth of the signal, then you could create a resampled version of the DFT basis vectors R, i.e. the complex sinusoids evaluated at the Tx times. Then solve the linear system x = A*z, where x is your observation, z is the unknown frequency content of the signal, and A is the resampled DFT basis. Note that A may not actually be a basis depending on the severity of the non-uniformity. It will almost certainly not be an orthogonal basis like the DFT.
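A numpy sketch of that idea (the trial frequency grid is up to you):

```python
import numpy as np

def nonuniform_spectrum(Tx, X, freqs):
    # A[i, k] = exp(2j*pi*freqs[k]*Tx[i]): sinusoids sampled at the
    # irregular times Tx - the "resampled DFT basis" described above.
    A = np.exp(2j * np.pi * np.outer(Tx, freqs))
    z = np.linalg.lstsq(A, X, rcond=None)[0]  # least-squares solve of x = A z
    return np.abs(z) ** 2                     # power at each trial frequency
```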
| 1 | 2 | 1 |
I have data and a time 'value' associated with it (Tx and X).
How can I perform a fast Fourier transform on my data?
Tx is an array I have and X is another array I have. The lengths of both arrays are of course the same, and they are associated as Tx[i] with X[i], where i goes from 0 to len(X).
How can I perform an FFT on such data to ultimately achieve a Power Spectral Density plot of frequency against |fft|^2?
|
Fast Fourier Transform (fft) with Time Associated Data Python
| 0.53705 | 0 | 0 | 1,901 |
13,637,899 |
2012-11-30T01:57:00.000
| 4 | 0 | 0 | 0 |
python,django,postgresql,multiprocessing
| 13,637,994 | 1 | true | 0 | 0 |
Right now, if your code works the way I think it works, Postgres is always waiting for Python to send it a query, or Python is waiting for Postgres to come back with a response. There's no situation where they'd both be doing work at once, so only one ever runs at a time.
To start using your machine more heavily, you'll need to implement some sort of multithreading on the Python end. Since you haven't given many details on what your queries are, it's hard to say what that might look like.
| 1 | 1 | 0 |
I'm tinkering with some big-data queries in the ipython shell using the Django ORM. This is on a Debian 6 VM in VMware Fusion on OS X, the VM is allowed access 4 or 8 cores (I've played with the settings) of the 4-core HT i7 on the host.
When I watch the progress in top while doing, for example, a 'for result in results: do_query()' in the Python shell, it seems that python and one of the postgres processes are always co-located on the same physical CPU core: their total CPU usage never adds up to more than 100%, python usually at 65% to postgres' 25% or so. iowait on the VM isn't excessively high.
I'm not positive they're always on the same core, but it sure looks like it. Given how I plan to scale this eventually, I'd prefer that the Python process(es) and Postgres workers be scheduled more optimally. Any insight?
|
python and postgres always share single CPU core?
| 1.2 | 0 | 0 | 230 |
13,639,219 |
2012-11-30T04:52:00.000
| 2 | 0 | 1 | 0 |
python,algorithm,graphics
| 13,639,414 | 3 | true | 0 | 1 |
If a unit square starts out with one side resting on the x-axis, and the lower right corner at (xs, 0), then after a quarter turn clockwise it will again have one side resting on the x-axis, and the lower right corner now at (xs+1, 0). Before it turns, label the lower left corner a; the upper left, b; and the upper right, c. Corners a and c move along arcs of a unit circle as the square turns. Corner b moves with radius d = sqrt(2).
This leads to the following method: Step angle t from 0 to pi/2 (ie 90°), letting
• xa = xs - cos t
• ya = sin t
• xb = xs - d*cos(t+pi/4)
• yb = d*sin(t+pi/4)
• xc = xs + sin t
• yc = cos t
At each time step, erase the old square by drawing background-colored lines, compute the new (xa, ya, xb, yb, xc, yc) from the equations, draw the new square with lines from (xs, 0) to (xa, ya) to (xb, yb) to (xc, yc) and back to (xs, 0), and then delay an appropriate amount. Each time t gets up to pi/2, set t back to 0 and add 1 to xs. Note: instead of erasing the whole square and then drawing the new one, one might try erasing one old line and drawing one new line in turn for each of the four sides.
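A plain-Python sketch of the coordinate bookkeeping (drawing, erasing, and delays are left to whatever toolkit you use):

```python
import math

D = math.sqrt(2)  # radius of corner b's arc

def corners(xs, t):
    xa, ya = xs - math.cos(t), math.sin(t)
    xb, yb = xs - D * math.cos(t + math.pi / 4), D * math.sin(t + math.pi / 4)
    xc, yc = xs + math.sin(t), math.cos(t)
    return [(xs, 0.0), (xa, ya), (xb, yb), (xc, yc)]

xs, steps = 0, 30
for quarter_turn in range(3):          # roll three edge-lengths to the right
    for i in range(steps):
        square = corners(xs, (math.pi / 2) * i / steps)
        print(square)  # replace with: erase old square, draw new edges, sleep
    xs += 1                            # the pivot advances one unit per turn
```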
| 2 | 0 | 0 |
This is for a computer science assignment using Python, does anybody know where I would begin to create an algorithm to create a square or box that rolls across the screen? And I do indeed mean roll, not slide. It does not necessarily have to be using python, I just need a general idea as to how the coordinates would work and a general algorithm.
|
Create a square that rolls across the screen
| 1.2 | 0 | 0 | 244 |
13,639,219 |
2012-11-30T04:52:00.000
| 0 | 0 | 1 | 0 |
python,algorithm,graphics
| 13,639,761 | 3 | false | 0 | 1 |
You could alternatively completely break the spirit of the assignment by using pybox2d, pymunk or another physics engine to do all the calculations for you. Then you could have lots of boxes rolling around and bouncing off each other :D
| 2 | 0 | 0 |
This is for a computer science assignment using Python, does anybody know where I would begin to create an algorithm to create a square or box that rolls across the screen? And I do indeed mean roll, not slide. It does not necessarily have to be using python, I just need a general idea as to how the coordinates would work and a general algorithm.
|
Create a square that rolls across the screen
| 0 | 0 | 0 | 244 |
13,640,812 |
2012-11-30T07:31:00.000
| 0 | 1 | 0 | 0 |
python
| 13,640,945 | 2 | true | 0 | 0 |
If your logs on box B can be accessed over the network (e.g., through a network share or FTP), then you could modify the script on box A to retrieve and process them. If they are not network accessible, then you'll need to copy either the script from box A to box B, or the logs from box B to box A.
| 1 | 0 | 0 |
I have a Python script that scans my logs and reports all its findings. Is it possible for the script on my box (say Box A) to be executed against another box (say Box B) without copying it?
Do I really need to copy my Python script to Box B and then execute it from Box A, or is there a method by which, staying on Box A, I can connect to Box B, run my Python program against Box B there, get its output, and close the connection?
|
Python : run a py program on another box
| 1.2 | 0 | 0 | 119 |
13,645,120 |
2012-11-30T12:19:00.000
| 1 | 0 | 0 | 0 |
python,mod-wsgi,wsgi,beaker
| 13,762,515 | 1 | false | 1 | 0 |
It turns out this behaviour is down to Apache running multiple processes.
It was resolved by using an external store to track when a session ID is first seen, and maintaining my own 'last_accessed_time', etc.
| 1 | 1 | 0 |
I'm using Beaker's WSGI SessionMiddleware to manage a session between browser and application.
I am trying to differentiate between when a session is first accessed and any further requests.
From the docs it appears there are two useful values made available in the WSGI environment:
["beaker.session"].last_accessed and ["beaker.session"]["_accessed_time"]
However, on repeated requests ["beaker.session"].last_accessed always returns None, while the timestamp value in ["beaker.session"]["_accessed_time"] can be seen to increase with each request.
Each request performs a ["beaker.session"].save(). I have tried various combinations of setting auto=True in the session and using .save() / .persist(), but no joy: .last_accessed is always None.
I am not using the session to actually persist any data, only to manage the creation of and pass through the session.id. (I am using a session type of 'cookie'.)
|
last_accessed time in beaker session always None, but _accessed_time is changing
| 0.197375 | 0 | 0 | 194 |
13,647,880 |
2012-11-30T15:11:00.000
| -1 | 0 | 1 | 0 |
python,datetime,mean
| 13,647,929 | 2 | false | 0 | 0 |
Split the strings on ' ', take the first element, and if it's not Saturday or Sunday, it's a weekday. Now I need to know what you mean by the "mean" of a list of dates.
| 1 | 0 | 0 |
I have a list of timestamps and I want to calculate the mean of the list, but I need to ignore the weekend days (Saturday and Sunday) and treat Friday and Monday as consecutive days. I only want to include the working days, Monday to Friday. This is an example of the list; I wrote the timestamps in a readable format to make the process easy to follow.
Example:
['Wed Feb 17 12:57:40 2011', ' Wed Feb 8 12:57:40 2011', 'Tue Jan 25 17:15:35 2011']
MIN='Tue Jan 25 17:15:35 2011'
'Wed Feb 17 12:57:40 2011': since we have 6 weekend days between this timestamp and MIN, I shift this timestamp back 6 days. It becomes 'Fri Feb 11 12:57:40 2011'.
'Wed Feb 8 12:57:40 2011': since we have 4 weekend days between this timestamp and MIN, I shift this timestamp back 4 days. It becomes 'Wed Feb 4 12:57:40 2011'.
The new list is now [' Fri Feb 11 12:57:40 2011',' Wed Feb 4 12:57:40 2011',' Tue Jan 25 17:15:35 2011]
MAX= 'Fri Feb 11 12:57:40 2011'
average= (Fri Feb 11 12:57:40 2011 + Wed Feb 4 12:57:40 2011 + Tue Jan 25 17:15:35 2011) /3
difference= MAX - average
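For the final averaging and difference step, a hedged sketch via epoch seconds (the weekend shifting above is assumed to be already applied; the strings are the ones from the example):

```python
import time
from datetime import datetime

stamps = ["Fri Feb 11 12:57:40 2011",
          "Wed Feb 4 12:57:40 2011",
          "Tue Jan 25 17:15:35 2011"]
# time.strptime's default format matches this ctime-style layout.
epochs = [time.mktime(time.strptime(s)) for s in stamps]
average = datetime.fromtimestamp(sum(epochs) / len(epochs))
difference = datetime.fromtimestamp(max(epochs)) - average
print(average, difference)
```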
|
calculating the mean of a list of timestamps ignoring weekend days in python
| -0.099668 | 0 | 0 | 1,443 |
13,649,128 |
2012-11-30T16:26:00.000
| 0 | 0 | 1 | 0 |
python,performance,list
| 13,649,209 | 2 | false | 0 | 0 |
You could set up the functions so that they take the main list plus start and end indexes.
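A tiny sketch of that signature (process() is a placeholder for your real field handling):

```python
def process(value):
    pass  # placeholder for real parsing of one element

def parse_foo(data, start, end):
    # Random access into the shared list; no sub-list copy is made.
    for i in range(start, end):
        process(data[i])
```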
| 1 | 1 | 0 |
I'm trying to parse a long sequence of binary data that contains a sequence of serialized structures. I'm able to get the data in memory as a list of integers (let's call it the main list).
For parsing the different fields contained in the main list, I'm creating some functions and classes. My question is: what is the most efficient way to pass a sub-range of the main list to these functions and classes?
I'm new to Python, so forgive me if I say something really wrong.
If I do something like parse_foo(main_list[n:N]), I suppose that a new list will be created. If I'm not wrong, this method will be very inefficient. By the way, I'm not going to modify the main list at any point.
I think I could use iterators (iterator.next()), but the problem is that I cannot access the elements randomly.
Comments & suggestions are always more than welcome.
|
efficient way to pass a sub list to a function
| 0 | 0 | 0 | 301 |
13,651,117 |
2012-11-30T18:38:00.000
| -4 | 0 | 0 | 0 |
python,pandas
| 53,256,590 | 7 | false | 0 | 0 |
You can specify nrows parameter.
import pandas as pd
df = pd.read_csv('file.csv', nrows=100)
This code works well in version 0.20.3.
| 2 | 123 | 1 |
How can I filter which lines of a CSV get loaded into memory using pandas? This seems like an option that one should find in read_csv. Am I missing something?
Example: we have a CSV with a timestamp column and we'd like to load just the lines with a timestamp greater than a given constant.
|
How can I filter lines on load in Pandas read_csv function?
| -1 | 0 | 0 | 95,461 |
13,651,117 |
2012-11-30T18:38:00.000
| 4 | 0 | 0 | 0 |
python,pandas
| 60,026,814 | 7 | false | 0 | 0 |
If the filtered range is contiguous (as it usually is with time(stamp) filters), then the fastest solution is to hard-code the range of rows. Simply combine the skiprows=range(1, start_row) and nrows=end_row parameters. The import then takes seconds where the accepted solution would take minutes. A few experiments with the initial start_row are not a huge cost given the savings on import time. Notice we keep the header row by using range(1, ...).
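For example (a sketch: file.csv and the row bounds are placeholders you would tune by experiment; note that nrows counts rows to read, so it is the difference of the bounds):
import pandas as pd

start_row, end_row = 1000, 5000  # hypothetical bounds found by experiment

# Row 0 is the header and is kept; data rows 1..start_row-1 are skipped,
# then nrows reads the contiguous block of interest
df = pd.read_csv('file.csv', skiprows=range(1, start_row), nrows=end_row - start_row)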
| 2 | 123 | 1 |
How can I filter which lines of a CSV to be loaded into memory using pandas? This seems like an option that one should find in read_csv. Am I missing something?
Example: we have a CSV with a timestamp column and we'd like to load just the lines with a timestamp greater than a given constant.
|
How can I filter lines on load in Pandas read_csv function?
| 0.113791 | 0 | 0 | 95,461 |
13,652,514 |
2012-11-30T20:16:00.000
| 3 | 1 | 0 | 0 |
php,python,twitter,twitter-oauth
| 13,652,765 | 2 | false | 0 | 0 |
Assuming you're referring to the consumer key and consumer secret, you're not supposed to be able to create those programmatically. That's why you have to sign in to a web page with a CAPTCHA in order to create one.
| 2 | 0 | 0 |
I need a way to programmatically create Twitter Applications/API keys. I could make something on my own, but does anyone know of a pre-made solution?
|
Is there an unofficial API for creating Twitter applications/api keys?
| 0.291313 | 0 | 1 | 1,280 |
13,652,514 |
2012-11-30T20:16:00.000
| 0 | 1 | 0 | 0 |
php,python,twitter,twitter-oauth
| 13,652,757 | 2 | false | 0 | 0 |
Not sure what you mean, but there are plenty of libraries that abstract the Twitter API (https://dev.twitter.com/docs/twitter-libraries)
| 2 | 0 | 0 |
I need a way to programmatically create Twitter Applications/API keys. I could make something on my own, but does anyone know of a pre-made solution?
|
Is there an unofficial API for creating Twitter applications/api keys?
| 0 | 0 | 1 | 1,280 |
13,653,030 |
2012-11-30T20:54:00.000
| 4 | 0 | 0 | 0 |
python,pandas
| 13,852,311 | 7 | false | 0 | 0 |
Check out DataFrame.from_items too
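A sketch of how that preserves both order and names (from_items existed in pandas of this era; later versions deprecated it in favor of the dict-based constructors):
import pandas as pd

s1 = pd.Series(['a', 'b'], name='first')
s2 = pd.Series(['c', 'd'], name='second')

# Column order follows the list; column names come from each series.name
df = pd.DataFrame.from_items([(s.name, s) for s in (s1, s2)])
print(df.columns.tolist())  # ['first', 'second']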
| 1 | 26 | 1 |
I realize Dataframe takes a map of {'series_name':Series(data, index)}. However, it automatically sorts that map even if the map is an OrderedDict().
Is there a simple way to pass a list of Series(data, index, name=name) such that the order is preserved and the column names are the series.name? Is there an easy way if all the indices are the same for all the series?
I normally do this by just passing a numpy column_stack of series.values and specifying the column names. However, this is ugly and in this particular case the data is strings not floats.
|
How do I Pass a List of Series to a Pandas DataFrame?
| 0.113791 | 0 | 0 | 51,167 |
13,653,605 |
2012-11-30T21:41:00.000
| 0 | 0 | 0 | 0 |
python,shell,unix,google-chrome-extension
| 13,653,956 | 1 | false | 0 | 0 |
Well, you will have to locate where it stores the summary information you want. Is it in RAM temporarily, or is it persistent across reboots and the like? What kind of information? If the information is stored temporarily, like per session, it could make what you wish to accomplish a little more difficult. If the summary data is stored locally, this could be achieved a bit more easily. For instance, if it were stored locally, you could write some Python to open the file that contains your summary information, read it into a variable, and then parse that information into whatever format you need.
Pretty open ended question though, can you offer any more details?
| 1 | 0 | 0 |
Assume I'm using a Chrome extension that gives me a nice summary of content on a webpage. Rather than writing my own program to mimic the services of the extension, I'd like to create a script that then uses the summary information that the extension generates, capturing it in a variable that I can manipulate.
Is it possible to write a script that could achieve this? If so, what would be a good starting point? I'd like to write the script in perhaps a Unix shell or Python.
|
How can I access the content of a Chrome extension from outside the browser - e.g. I'd like the content to be used in a script I'm writing?
| 0 | 0 | 1 | 128 |
13,653,714 |
2012-11-30T21:52:00.000
| 0 | 0 | 1 | 0 |
python,ftp,python-2.6
| 13,654,110 | 1 | true | 0 | 0 |
You can try passing -t to LIST; some servers support it. This will sort the listing by modified time (similar to ls -t).
Typically such filtering/sorting is left up to the client to implement. FTP servers are not obligated to provide server-side filtering (beyond basics).
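A minimal sketch with ftplib (the host is a placeholder, and as noted, -t support depends on the server):
from ftplib import FTP

ftp = FTP('ftp.example.com')   # placeholder host
ftp.login()                    # anonymous login

lines = []
ftp.retrlines('LIST -t', lines.append)  # newest-first, if the server honors -t
for line in lines[:20]:                 # then only inspect the most recent entries
    print(line)
ftp.quit()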
| 1 | 0 | 0 |
I am using the standard Python FTP library to get a file listing and subsequently download files from an FTP site after inspecting the list. This used to be a quick operation, however, the list of files on the site grows daily and is now sufficiently long to cause a significant delay when executing ftp.retrlines('LIST',readFileCallback) to get the file listing.
I am not interested in the full listing on the FTP server. Is it possible to e.g. get a listing of only the files that are at most 1 month old?
|
Is it possible to identify a date limit on Python's FTP.retrlines() function?
| 1.2 | 0 | 0 | 205 |
13,654,122 |
2012-11-30T22:29:00.000
| 2 | 0 | 1 | 0 |
python
| 62,995,730 | 10 | false | 0 | 0 |
you can try the following as well:
import os
print (os.environ['USERPROFILE'])
The advantage of this is that you directly get an output like:
C:\\Users\\user_name
| 3 | 39 | 0 |
I know that using
getpass.getuser() command, I can get the username, but how can I implement it in the following script automatically? So I want Python to find the username and then substitute it into the script itself.
Script: os.path.join('..','Documents and Settings','USERNAME','Desktop')
(Python Version 2.7 being used)
|
How to make Python get the username in windows and then implement it in a script
| 0.039979 | 0 | 0 | 107,727 |
13,654,122 |
2012-11-30T22:29:00.000
| 46 | 0 | 1 | 0 |
python
| 29,154,027 | 10 | false | 0 | 0 |
os.getlogin() did not exist for me. I had success with os.getenv('username') however.
| 3 | 39 | 0 |
I know that using
getpass.getuser() command, I can get the username, but how can I implement it in the following script automatically? So I want Python to find the username and then substitute it into the script itself.
Script: os.path.join('..','Documents and Settings','USERNAME','Desktop')
(Python Version 2.7 being used)
|
How to make Python get the username in windows and then implement it in a script
| 1 | 0 | 0 | 107,727 |
13,654,122 |
2012-11-30T22:29:00.000
| 72 | 0 | 1 | 0 |
python
| 13,654,236 | 10 | true | 0 | 0 |
os.getlogin() returns the user that is executing the script, so it can be:
path = os.path.join('..','Documents and Settings',os.getlogin(),'Desktop')
or, using getpass.getuser()
path = os.path.join('..','Documents and Settings',getpass.getuser(),'Desktop')
If I understand what you asked.
| 3 | 39 | 0 |
I know that using
getpass.getuser() command, I can get the username, but how can I implement it in the following script automatically? So I want Python to find the username and then substitute it into the script itself.
Script: os.path.join('..','Documents and Settings','USERNAME','Desktop')
(Python Version 2.7 being used)
|
How to make Python get the username in windows and then implement it in a script
| 1.2 | 0 | 0 | 107,727 |
13,655,152 |
2012-12-01T00:34:00.000
| 0 | 0 | 0 | 0 |
python,wxpython
| 13,687,868 | 2 | false | 0 | 1 |
Probably the only way to do it with the standard wx.Menu is to destroy and recreate the entire menubar. You might be able to Hide it though. Either way, I think it would be easiest to just put together a set of methods that creates each menubar on demand. Then you can destroy one and create the other.
You might also take a look at FlatMenu since it is pure Python and easier to hack.
| 1 | 1 | 0 |
I'm writing a document based application in wxPython, by which I mean that the user can have open multiple documents at once in multiple windows or tabs. There are multiple kinds of documents, and the documents can all be in different "states", meaning that there should be different menu options available in the main menu.
I know how to disable and enable menu items using the wx.EVT_UPDATE_UI event, but I can't figure out how to pull off a main menu that changes structure and content drastically based on which document that currently has focus. One of my main issues is that the main menu is created in the top level window, and it has to invoke methods in grand children and great grand children that haven't even been created yet.
Contrived example; when a document of type "JPEG" is open, the main menu should look like:
File Edit Compression Help
And when the user switches focus (CTRL+Tab) to a document of type "PDF", the main menu should change to:
File Edit PDF Publish Help
And the "Edit" menu should contain some different options from when the "JPEG" document was in focus.
Currently I'm just creating the menu in a function called create_main_menu in the top level window, and the document panels have no control over it. What would be necessary to pull off the kind of main menu scheme I describe above, specifically in wxPython?
|
How do I manage a dynamic, changing main menu in wxPython?
| 0 | 0 | 0 | 2,119 |
13,655,486 |
2012-12-01T01:35:00.000
| 0 | 0 | 0 | 0 |
python
| 29,955,598 | 2 | false | 1 | 0 |
You can disable javascript directly from the browser. Steps:
Type about:config in the URL bar
Click I'll be careful, I promise
Search for javascript.enabled
Right click -> Toggle
Value = false
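If you want the same preference applied through Selenium itself rather than by hand, a sketch using the older FirefoxProfile API would be:
from selenium import webdriver

profile = webdriver.FirefoxProfile()
profile.set_preference('javascript.enabled', False)  # same pref as the manual toggle
driver = webdriver.Firefox(firefox_profile=profile)
driver.get('http://example.com')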
| 1 | 3 | 0 |
How can I add preferences to the browser so it launches without javascript?
|
How can I disable javascript in firefox with selenium?
| 0 | 0 | 1 | 5,735 |
13,656,736 |
2012-12-01T05:34:00.000
| 1 | 0 | 0 | 0 |
python,architecture,rabbitmq,web-frameworks,gevent
| 13,775,086 | 1 | false | 1 | 0 |
My first thought is that you could use a service oriented architecture to separate these tasks. Each of these services could run a Flask app on a separate port (or machine (or pool of machines)) and communicate to each other using simple HTTP. The breakdown might go something like this:
GameService: Handles incoming connections from players and communicates with them through socketio.
GameFinderService: Accepts POST requests from GameService to start looking for games for player X. Accepts GET requests from GameService to get the next best game for playerX. You could use Redis as a backing store for this short-lived queue of games per connected player that gets updated each time GameStatusService (below) notifies us of a change.
GameStatusService: Monitors in-progress games via UDP and when a notable event occurs e.g. new game created, player disconnects, etc it notifies GameFinderService of the change. GameFinderService would then update its queues appropriately for each connected player.
Redis is really nice because it serves as a data structure store that allows you to maintain both short and long lived data structures such as queues without too much overhead.
| 1 | 2 | 0 |
I'm creating a website that allows players to queue to find similarly skilled players for a multiplayer video game. Simple web backends only modify a database and create a response using a template, but in addition to that, my backend has to:
Communicate with players in real-time (via gevent-socketio) while they queue or play
Run calculations in the background to find balanced games, slowly compromising game quality as waiting time grows (and inform players via SocketIO when a game has been found)
Monitor in progress games via a UDP socket (and if a player disconnects, ask the queue for a substitute) and eventually update the database with the results
I know how I would do these things individually, but I'm wondering how I should separate these components and let them communicate. I imagine that my web framework (Flask) shouldn't be very involved at all in these other things.
Since I already must use gevent, I'm currently planning to start separate greenlets for each of these tasks. This will work for all my tasks (with the possible exception of the calculations) because they will usually be waiting for something to happen. However, this won't scale at all because I can't run more Flask instances. Everything would be dependent on the greenlets running in just a single thread.
So is this the best way? Is there another way to handle separating these tasks (especially with languages I might use in the future that don't have coroutines)? I've heard of RabbitMQ/ZeroMQ and Celery and other such tools, but I wasn't sure how and whether to use them to solve this problem.
|
When a web backend does more than simply reply to requests, how should my application be structured?
| 0.197375 | 0 | 0 | 183 |
13,657,199 |
2012-12-01T06:57:00.000
| 0 | 0 | 1 | 0 |
ipython
| 13,661,513 | 2 | true | 0 | 0 |
Do you mean the Cell -> Run menu item? If so, no. The notebook's not really designed to be used like that. What are you trying to do?
| 1 | 2 | 0 |
Is there a way for "ipython3 notebook" to receive command line arguments with its 'run' button?
Thank you very much.
|
Use sys.argv[] inside ipython3 notebook
| 1.2 | 0 | 0 | 2,339 |
13,657,404 |
2012-12-01T07:31:00.000
| 0 | 0 | 0 | 0 |
python,networking,cpanel
| 13,657,435 | 2 | true | 0 | 0 |
Your best option is probably to find a dynamic DNS provider. The idea is to have a client running on your machine which updates a DNS entry on a remote server. Then you can use the hostname provided instead of your IP address in cPanel.
| 1 | 0 | 0 |
I am connecting my Python software to a remote MySQL server.
I have had to add an access host on cPanel just for my computer, but the problem is the access host, which is my IP, is dynamic.
How can I connect to the remote server without having to change the access host every time?
Thanks guys, networking is my weakness.
|
Configuring Remote MYSQL with a Dynamic IP
| 1.2 | 1 | 0 | 1,475 |
13,660,301 |
2012-12-01T14:17:00.000
| 2 | 0 | 0 | 0 |
python,django,amazon-ec2,memcached,django-middleware
| 13,690,608 | 3 | true | 1 | 0 |
It was a silly glitch. I found out that I needed to reload the gunicorn server to make the new middleware work. Thanks everybody for the help.
| 1 | 2 | 0 |
I am quite new to web development. I am working on a website hosted on amazon ec2 server. The site is in python using django framework. I am using memcached to cache some client information. My site and caching works on local machine but not on the EC2 server. I checked memcached server and found out that it was not able to set the keys. Is there something I might need to change in settings.py so that keys are set appropriately on the server or something else that I might be missing.
EDIT: Found out the problem. I added a new middleware for setting keys in the memcache. That is not getting called. It works perfectly on the local machine. On the server I am using gunicorn as the app server and nginx as the reverse proxy. Could any of these be causing the problem? Also, I tried reloading nginx, but that didn't help either.
|
New Django middleware not getting called
| 1.2 | 0 | 0 | 1,449 |
13,662,400 |
2012-12-01T18:30:00.000
| 0 | 0 | 1 | 0 |
python,multithreading,asynchronous
| 13,663,583 | 1 | false | 0 | 0 |
I didn't really understand what sort of application it's going to be, but I tried to answer your questions:
create a thread that queries, and then sleeps for a while
create a thread for each user, and close it when the user is gone
create a thread that downloads and stops
After all, there aren't going to be 500 threads.
| 1 | 0 | 0 |
I'm developing a Python-application that "talks" to the user, and performs tasks based on what the user says(e.g. User:"Do I have any new facebook-messages?", answer:"Yes, you have 2 new messages. Would you like to see them?"). Functionality like integration with facebook or twitter is provided by plugins. Based on predefined parsing rules, my application calls the plugin with the parsed arguments, and uses it's response. The application needs to be able to answer multiple query's from different users at the same time(or practically the same time).
Currently, I need to call a function, "Respond", with the user input as argument. This has some disadvantages, however:
i) The application can only "speak when it is spoken to". It can't decide to query Facebook for new messages, and tell the user whether it has any, without being told to do that.
ii) Having a conversation with multiple users at a time is very hard, because the application can only do one thing at a time: if Alice asks the application to check her Facebook for new messages, Bob can't communicate with the application.
iii) I can't develop (and use) plugins that take a lot of time to complete, e.g. download a movie, because the application isn't able to do anything else while the previous task isn't completed.
Multithreading seems like the obvious way to go here, but I'm worried that creating and using 500 threads at a time dramatically impacts performance, so using one thread per query (a query is a statement from the user) doesn't seem like the right option.
What would be the right way to do this? I've read a bit about Twisted, and the "reactor" approach seems quite elegant. However, I'm not sure how to implement something like that in my application.
|
Multithreading or how to avoid blocking in a Python-application
| 0 | 0 | 0 | 190 |
13,663,294 |
2012-12-01T20:12:00.000
| 0 | 0 | 1 | 0 |
python,hadoop,hadoop-streaming
| 13,663,566 | 1 | true | 0 | 0 |
The most efficient way to do this is to maintain a hash map of word frequency in your mappers, and flush them to the output context when they reach a certain size (say 100,000 entries). Then clear out the map and continue (remember to flush the map in the cleanup method too).
If you still truly have hundreds of millions of words, then you'll either need to wait a long time for the reducers to finish, or increase your cluster size and use more reducers.
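A sketch of that in-mapper flush pattern for a Hadoop Streaming mapper (tab-separated word/count pairs; a downstream reducer still sums the partial counts, and the names here are illustrative):
import sys

FLUSH_SIZE = 100000  # flush threshold, as suggested above
counts = {}

def flush():
    for word, n in counts.items():
        sys.stdout.write('%s\t%d\n' % (word, n))
    counts.clear()

for line in sys.stdin:
    for word in line.split():
        counts[word] = counts.get(word, 0) + 1
    if len(counts) >= FLUSH_SIZE:
        flush()

flush()  # the "cleanup" step: emit whatever remains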
| 1 | 0 | 1 |
I want to implement a hadoop reducer for word counting.
In my reducer I use a hash table to count the words. But if my file is extremely large, the hash table will use an extreme amount of memory. How can I address this issue?
(E.g., for a file with 10 million lines, each reducer receives 100 million words; how can it count the words when a hash table would require 100 million keys?)
My current implementation is in python.
Is there a smart way to reduce the amount of memory?
|
Efficient Hadoop Word counting for large file
| 1.2 | 0 | 0 | 442 |
13,664,330 |
2012-12-01T22:19:00.000
| 2 | 0 | 1 | 0 |
python-3.x
| 13,664,338 | 3 | false | 0 | 0 |
It's an argument. If you define a function as function(), it must be called without arguments. If you define it as function(foo), it must be called with one argument. A copy of this argument is available to the function as a local variable named foo.
| 1 | 5 | 0 |
Hello I'm just wondering about something in Python3.x.
What is the foo in def function(foo): used for when you can use def function(): too?
I know there is a difference, I just don't understand the definitions I have found in various books and tutorials.
|
what is "foo" used for in "def function(foo):" when using Python3.x
| 0.132549 | 0 | 0 | 18,770 |
13,664,482 |
2012-12-01T22:40:00.000
| 2 | 0 | 0 | 0 |
python,django
| 13,664,767 | 1 | true | 1 | 0 |
Instead of writing the same data models twice, you can create a small Django app (which will contain the model definition and logic) as a Python module and install it on both servers/apps.
| 1 | 0 | 0 |
I want to write my first Python program using Django. The site will be hosted on Amazon. However my API will use Django and Piston sitting on another instance. I don’t want to have to replicate my Models across two servers. How can I get the API to share the same model as the main Django instance, or should I?
|
How to share the same model on another AWS Instance
| 1.2 | 0 | 0 | 33 |
13,664,861 |
2012-12-01T23:34:00.000
| 0 | 0 | 1 | 0 |
python,recursion,artificial-intelligence,greedy
| 13,665,044 | 3 | false | 0 | 0 |
You could actually brute force the game, and prove that every time there is a winning strategy, your A.I. picks the correct move. Then, you could prove that for every position, your A.I. picks the move which maximizes the chances of having a winning strategy, assuming the other player is playing randomly. There are not that many possibilities, so you should be able to eliminate all of them.
You could also significantly diminish the space of possibilities by assuming the other player is actually slightly intelligent, e.g. always tries to block a move which results in immediate victory.
| 2 | 2 | 0 |
I made a tic tac toe A.I. Given each board state, my A.I. will return 1 exact place to move. (Even if moves are equally correct, it chooses same one every time, it does not pick a random one)
I also made a function that loops through all possible plays made with the A.I.
So it's a recursive function that lets the A.I. make a move for a given board, then lets the other player make all possible moves and calls the recursive function on itself with a new board for each possible move.
I do this for when the A.I. goes first, and when the other one goes first... and add these together. I end up with 418 possible wins, 115 possible ties, and 0 possible losses.
But now my problem is, how do I maximize the amount of wins? I need to compare this statistic to something, but I can't figure out what to compare it to.
|
How can I test if my Tic Tac Toe A.I. is perfect?
| 0 | 0 | 0 | 1,040 |
13,664,861 |
2012-12-01T23:34:00.000
| 0 | 0 | 1 | 0 |
python,recursion,artificial-intelligence,greedy
| 18,435,798 | 3 | false | 0 | 0 |
One issue with akaRem's answer is that an optimal player shouldn't look like the overall distribution. For example, a player that I just wrote wins about 90% of the time against someone playing randomly and ties 10% of the time. You should only expect akaRem's statistics to match if you have two players against each other playing randomly. Two optimal players would always result in a tie.
| 2 | 2 | 0 |
I made a tic tac toe A.I. Given each board state, my A.I. will return 1 exact place to move. (Even if moves are equally correct, it chooses same one every time, it does not pick a random one)
I also made a function that loops through all possible plays made with the A.I.
So it's a recursive function that lets the A.I. make a move for a given board, then lets the other player make all possible moves and calls the recursive function on itself with a new board for each possible move.
I do this for when the A.I. goes first, and when the other one goes first... and add these together. I end up with 418 possible wins, 115 possible ties, and 0 possible losses.
But now my problem is, how do I maximize the amount of wins? I need to compare this statistic to something, but I can't figure out what to compare it to.
|
How can I test if my Tic Tac Toe A.I. is perfect?
| 0 | 0 | 0 | 1,040 |
13,665,968 |
2012-12-02T02:54:00.000
| 2 | 0 | 0 | 0 |
python,django
| 13,666,236 | 1 | true | 1 | 0 |
Make sure you have STATIC_ROOT defined in your settings.
Define STATIC_URL.
Use python manage.py collectstatic command to collect every static file from every app (including contrib.admin) in your STATIC_ROOT folder.
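For reference, the relevant settings might look like this (the paths are placeholders):
# settings.py -- illustrative values
STATIC_URL = '/static/'
STATIC_ROOT = '/var/www/mysite/static'  # collectstatic copies files here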
| 1 | 0 | 0 |
My admin css is not working.
I tried to find it in folder: /usr/local/lib/python2.7/site-packages/django/contrib/admin
There is no media folder there.
I am using Django 1.5a.
|
Where is the django admin media folder situated?
| 1.2 | 0 | 0 | 1,602 |
13,667,690 |
2012-12-02T08:33:00.000
| 0 | 0 | 1 | 1 |
python,ide,pymongo
| 32,381,638 | 2 | false | 0 | 0 |
If you are on the Windows platform, just run the PyMongo .exe installer and it will install into the Python directory. Then you will be able to access it in any IDE, such as PyCharm, by typing:
import pymongo
| 1 | 0 | 0 |
I am a Python newbie - I want to use the pymongo library to access MongoDB using some convenient IDE, and after looking through the web I decided to use Wing.
Can someone point out how to add the pymongo library to the Wing IDE (or to any other IDE for that matter)? I want to get auto-completion for commands.
Thanks
|
Adding PyMongo to Python IDE
| 0 | 0 | 0 | 666 |
13,668,095 |
2012-12-02T09:44:00.000
| 1 | 1 | 1 | 0 |
python,eclipse,pydev
| 15,648,517 | 1 | true | 0 | 0 |
Putting # @PydevCodeAnalysisIgnore at the top of the module will cause PyDev to skip all code analysis on a given file. While not quite on package level, this is good enough.
| 1 | 2 | 0 |
I have a project where I use south migrations. Often, the migrations .py file has unused imports. This generates warnings in PyDev/Eclipse. I want the warnings turned on in general, as they promote code discipline. However, I wish I could either turn them off on the package in Eclipse or through some directive.
I am aware of the #@UnusedImport comment tag. Is it possible to do something like that, but on a package level? Perhaps __init__.py could be used?
|
Python and PyDev unused imports - possible to disable on the package level?
| 1.2 | 0 | 0 | 694 |
13,669,092 |
2012-12-02T12:01:00.000
| 3 | 1 | 0 | 1 |
python,linux
| 13,669,170 | 2 | false | 0 | 0 |
Windows line endings are CRLF, or \r\n.
Unix uses simply \n.
When the OS reads your shebang line, it sees #!/usr/bin/python\r. It can't run this command.
A simple way to see this behavior from a unix shell would be $(echo -e 'python\r') (which tries to run python\r as a command). This output will also be similar to : command not found.
Many advanced code editors under Windows support natively saving with unix line endings.
| 1 | 0 | 0 |
I have some Python files on Windows, and I transferred them to my Gentoo box via Samba.
I checked that their mode is executable, and I used ./xxx.py to run one, but I get an error:
: No such file or directory
I am puzzled that it does not say which file is missing.
But when I use python xxx.py, it runs correctly.
I then checked the line-ending format with :set ff in vim and found it was dos, so I used :set ff=unix to fix it; now it runs via ./xxx.py.
But I don't know why python xxx.py works when ff=dos?
|
python file in dos and unix
| 0.291313 | 0 | 0 | 1,281 |
13,669,690 |
2012-12-02T13:25:00.000
| 4 | 0 | 0 | 1 |
python,subprocess
| 13,669,702 | 1 | true | 0 | 0 |
As long as you use a list of parameters and leave shell set to False, yes, the parameters are safe from code injection. They will not be parsed by a shell and thus not subject to any code execution opportunities.
Note that on Windows, the chances of using code injection are already mitigated by the fact that a CreateProcess call is used.
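As a sketch of why the list form is safe — the tag value and flag below are the question's own placeholders, and AtomicParsley is assumed to be on PATH:
import subprocess

tag1_info = 'del C:\\Windows'  # hostile-looking metadata from the database

# shell=False is the default: the list goes straight to CreateProcess, so
# tag1_info arrives at AtomicParsley as one literal argument, never a command
subprocess.call(['AtomicParsley', 'movie.m4v', '--tag1', tag1_info])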
| 1 | 1 | 0 |
Are parameters passed to subprocess safe in any way from code injection?
I am building a small python program to do some movie file tagging. For ease, I am passing the tag info to AtomicParsley (on Windows) using subprocess.call(). The tag information is an online source, retrieved automatically. If some individual were to place code in the tags (i.e. replaced the actors with some sort of rd term), would subprocess be safe from that execution? This is more of a conceptual question than a question about the specifics of the language.
The subprocess.call is executed with ['AtomicParsley',filename,'--tag1',tag1_info,(...)]. Since the first part of the command is guaranteed to be the name of the AP executable and the second is guaranteed to be a valid filename, I should think any malicious code inside the metainfo database would just be written as a string to the appropriate tag (i.e. the Actor's name would be del C:\Windows). Do those seem like reasonable assumptions?
|
Are parameters passed to subprocess safe in any way from code injection?
| 1.2 | 0 | 0 | 643 |
13,672,316 |
2012-12-02T18:24:00.000
| 2 | 0 | 0 | 0 |
python,gtk
| 13,672,419 | 1 | true | 0 | 1 |
Basically there are two ways. The difference is who is the "main" program. If you want gtk to be in charge, you just create your dialog and set up a callback to run when idle (gobject.idle_add). This job should make sure that it doesn't take long at every step, so gtk can update the GUI (you probably want gobject.timeout_add for your spinner).
The other way is that your "background" job is in control. It can just do what it wants, it should just call gtk every now and then (while gtk.events_pending (): gtk.main_iteration (False)) to make sure gtk can update the gui.
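A minimal sketch of the first approach using the PyGTK-era API (the dialog contents and the work itself are stand-ins):
import gtk
import gobject

dialog = gtk.Dialog('Working...')
dialog.show_all()

work = iter(range(100))  # stand-in for the real background task

def do_step():
    try:
        next(work)        # one short chunk of work per idle callback
        return True       # True = call again when idle
    except StopIteration:
        dialog.destroy()  # done: close this dialog, open the next one here
        gtk.main_quit()
        return False

gobject.idle_add(do_step)
gtk.main()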
| 1 | 0 | 0 |
Basically I want to do this:
when the user presses a button, the application should open a dialog with a spinner inside of it, while some tasks are executed in background. When this tasks are finished, this dialog should be destroyed, and a new dialog should be opened.
I'm working with python and gtk
How can I do it?
thanks!
|
gtk loading dialog while performing task
| 1.2 | 0 | 0 | 340 |
13,672,346 |
2012-12-02T18:28:00.000
| 1 | 0 | 0 | 0 |
c#,asp.net,python,web2py,browser-automation
| 13,672,402 | 1 | false | 1 | 0 |
Selenium is a pretty good library for automation if you want to scrape information off of javascript enabled pages. It has bindings for a number of languages. If you only want basic scraping though, I would go with Mechanize; no need to open a browser.
| 1 | 2 | 0 |
Novice to programming. I have most of my experience in Python. I am comparing this to C#. I have created small web apps using web2py, and have read 'Learn Python the Hard Way'. I have limited to no C# experience besides setting up and playing in VS.
My end goal is to be able to develop web apps (So far I do like web2py), and even some web automation programs using GUI's. For example, an application that will allow me to put / get information in a database from my GUI, and then post it to my site's either via a database connection, or post to other sites that are not mine, through automation.
I really like python so far, but I feel like since I do want to work with GUI applications, that C# may be the best bet...
More specifically, does Python even compare, or have modules/library that will help me do GUI web & browser automation, versus C#? How about with just basic scraping? Pulling data from numerous sites to display in a database. Does Python still have an edge?
Thanks. I hope this question has some objectivity to it considering the different libraries and modules available. If it is too subjective , please accept my apologies.
|
Advice on which language to persue for browser automation & scraping
| 0.197375 | 0 | 1 | 155 |
13,672,763 |
2012-12-02T19:14:00.000
| 1 | 1 | 0 | 0 |
python,pdf,fonts
| 13,674,330 | 1 | false | 0 | 0 |
There are commercial tools that can do this - one of which is pdfToolbox from callas software (warning - I'm affiliated with this company).
However - even though this functionality exists and is sometimes used - the results are often completely undesirable and I have not seen many contexts where it is used on more than very specific files. And usually with limited success. To the point where this replacement is only available as a manual operation in the tool I mentioned - and not in automatic mode.
Depending on how complex these files are, you would probably have better success to extract all text from the documents into something like RTF, do whatever manipulation you need to do there and regenerate PDF afterwards. Sounds like a roundabout way but I'm guessing the result will be better in most cases...
| 1 | 5 | 0 |
I have a repository of PDF documents, and most of the text contained in these documents is formatted in Comic Sans. I would like to change this to something similar to Arial. The original font is embedded in the document. I haven't found any existing tool to do this for me (I'm on Linux), and I wonder if it's possible to do it programmatically. A Python library would be perfect, but a library in any programming language would do.
In which library will I be able to substitute fonts with the least effort? And which parts of the API would I use?
|
Changing the font in a PDF
| 0.197375 | 0 | 0 | 2,714 |
13,674,242 |
2012-12-02T21:48:00.000
| 3 | 0 | 1 | 0 |
python,pydev,usability
| 26,126,717 | 2 | false | 0 | 0 |
In PyDev, the Source/Wrap paragraph menu (Ctrl+2, w) wraps the current line automatically. You should of course check if the output is what you expected.
| 1 | 2 | 0 |
Is it possible to set text wrapping in PyDev?
For Python such a feature is especially important because you can't freely use newline characters.
|
Wrap text in PyDev editor
| 0.291313 | 0 | 0 | 3,873 |
13,675,440 |
2012-12-03T00:05:00.000
| 4 | 1 | 0 | 0 |
python,nginx,nosql,openbsd
| 13,675,611 | 2 | false | 0 | 0 |
My advice - if you don't know how to use these technologies - don't do it. Few servers will cost you less than the time spent mastering technologies you don't know. If you want to try them out - do it. One by one, not everything at once. There is no magic solution on how to use them.
| 2 | 1 | 0 |
I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit.
We're working on a project website which will handle a lot of http handle requests, stream videos(mostly from a provider like youtube or vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to linux.
The reason that we want to use OpenBSD is that it's well known for its security.
The reason we chose Python is that it's fast.
The reason we want to use Nginx is that it's known to be able to handle more http requests when compared to Apache.
The reason we want to use NoSQL is that MySQL is known to have scalability problems when the database grows.
We want the web pages to load as fast as possible (caching and cdn's will be used) using the minimum amount of hardware possible. That's why we want to use ONPN (OpenBSD,Nginx,Python,Nosql) instead of the traditional LAMP (Linux,Apache,Mysql,PHP).
We're not a very big company so we're using opensource technologies. Any suggestion is appreciated on how to use these software as a platform and giving hardware suggestions is also appreciated. Any criticism is also welcomed.
|
How to utilize OpenBSD, Nginx, Python and NoSQL
| 0.379949 | 1 | 0 | 1,447 |
13,675,440 |
2012-12-03T00:05:00.000
| 1 | 1 | 0 | 0 |
python,nginx,nosql,openbsd
| 13,676,002 | 2 | false | 0 | 0 |
I agree with wdev, the time it takes to learn this is not worth the money you will save. First of all, MySQL databases are not hard to scale. WordPress utilizes MySQL databases, and some of the world's largest websites use MySQL (Google for a list). I can also say the same of Linux and PHP.
If you design your site using best practices (CSS sprites) Apache versus Nginx will not make a considerable difference in load times if you utilize a CDN and best practices (caching, gzip, etc).
I strongly urge you to reconsider your decisions. They seem very ill-advised.
| 2 | 1 | 0 |
I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit.
We're working on a project website which will handle a lot of http handle requests, stream videos(mostly from a provider like youtube or vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to linux.
The reason that we want to use OpenBSD is that it's well known for its security.
The reason we chose Python is that it's fast.
The reason we want to use Nginx is that it's known to be able to handle more http requests when compared to Apache.
The reason we want to use NoSQL is that MySQL is known to have scalability problems when the database grows.
We want the web pages to load as fast as possible (caching and cdn's will be used) using the minimum amount of hardware possible. That's why we want to use ONPN (OpenBSD,Nginx,Python,Nosql) instead of the traditional LAMP (Linux,Apache,Mysql,PHP).
We're not a very big company so we're using opensource technologies. Any suggestion is appreciated on how to use these software as a platform and giving hardware suggestions is also appreciated. Any criticism is also welcomed.
|
How to utilize OpenBSD, Nginx, Python and NoSQL
| 0.099668 | 1 | 0 | 1,447 |
13,675,689 |
2012-12-03T00:47:00.000
| 1 | 1 | 0 | 1 |
python,debian,boot
| 13,706,652 | 2 | true | 0 | 0 |
Seems like it was just a dumb mistake on my part.
I realized the whole point of this was to allow the python script to run as a background process during boot, so I added the " &" to the end of the script call like you would when running it from the shell, and voila, I can get to my password prompt by pressing "Enter".
I wanted to put this answer here just in case this would be something horribly wrong to do, but it accomplishes what I was looking for.
| 1 | 2 | 0 |
I am using Debian and I have a python script that I would like to run during rc.local so that it will run on boot. I already have it working with a test file that is meant to run and terminate.
The problem is that this script should eventually run indefinitely using Scheduler. Its job is to do serial reads, a small amount of processing on those reads, and inserts into a MySQL database. However, I am nervous about then not being able to cancel the script to get to my login prompt if changes need to be made, since I was unable to terminate the test script early using Ctrl+C (^C).
My hope is that there is some command that I am just missing that will accomplish this. Is there another key command that I'm missing that will terminate the python script and end rc.local?
Thanks.
EDIT: Another possible solution that would help me here is if there is a way to start a python script in the background during boot. So it would start the script and then allow login while continuing to run the script in the background.
I'm starting to think this isn't something that's possible to accomplish so other suggestions to accomplish something similar to what I'm trying to do would be helpful as well.
Thanks again.
|
End Python Script when running it as boot script?
| 1.2 | 0 | 0 | 575 |
13,677,625 |
2012-12-03T05:24:00.000
| 2 | 0 | 0 | 0 |
python,http-headers,httprequest,urllib2,http-status-code-403
| 13,685,106 | 2 | false | 0 | 0 |
The Python library urllib has a default user-agent string that includes the word Python in it, and wget uses "wget/VERSION". If the site you are connecting to checks the user-agent info, it will probably reject these two. Google, for instance, will do so.
It's easy enough to fix.. for wget, use the -U parameter and for urllib, create a URLOpener with an appropriate string.
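The answer mentions URLopener; an equivalent fix with urllib2's Request would be (the user-agent value is just an assumed browser-like string):
import urllib2

req = urllib2.Request('http://test.com/test.php',
                      headers={'User-Agent': 'Mozilla/5.0'})  # browser-like UA
print urllib2.urlopen(req).read()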
| 1 | 1 | 0 |
There is a webpage that my browser can access, but urllib2.urlopen() (Python) and wget both return HTTP 403 (Forbidden). Is there a way to figure out what happened?
I am using the most primitive form, like urllib2.urlopen("http://test.com/test.php"), using the same url (http://test.com/test.php) for both the browser and wget. I have cleared all my cookies in browser before the test.
Thanks a lot!
|
urllib2 and wget returns HTTP 403 (forbidden), while browser returns OK
| 0.197375 | 0 | 1 | 2,333 |
13,681,305 |
2012-12-03T10:27:00.000
| 0 | 0 | 0 | 0 |
python,c,user-interface,cross-platform
| 13,681,502 | 2 | false | 0 | 1 |
You can take the OpenOffice approach on the Windows platform and write a launcher that simply tries to access the files needed by your software, so that they end up in the memory cache, speeding up the startup time (but creating useless entries in the cache in case the user doesn't want to open your program).
| 1 | 2 | 0 |
I'm trying to find a way to rapidly develop (or rather eventually reach a point where I can rapidly develop) very nice looking cross platform GUI desktop apps that have a very small footprint on disk and in memory, launch very fast (much faster, for example, than even a bare bones wxPython window) (for a good example, look at how fast TextEdit launches under OSX. That's the kind of launch speed I want for my GUI apps), deploy easily, and interact very well with the operating system (Gimp and Gedit and various other open source, cross platform apps exhibit various behaviors that I really hate, depending on the platform, but especially on OSX) without spending any money. (Hey, stop laughing! =P)
I'm dissatisfied with wxWidgets, Qt, SDL, and everything else I've tried so far, so I'm down to writing native GUI code (especially the part that interacts with the OS's windowing system) on each platform, using native tools (XCode/ObjC/Cocoa/OpenGL, MSVC/Win32/DirectX, gcc/GTK/OpenGL), and then trying to come up with some way of writing as much as possible of the rest of the program in Python.
I've thought about maybe writing a set of shared libraries / dll's to deal with matters GUI, and then wrapping them with a set of Python C extensions, but there are some technical challenges involved in doing that when it comes to packaging (menus, the app icon, certain OS-specific application manifests, etc), and I'm not sure that launch speed and performance in general will be acceptable, depending on the particular program I'm writing.
So I've thought about maybe creating a sort of a "shell" program on each platform, and embedding python, kind of in a similar way to the way Sublime Text 2 does.
I don't like the startup slowness that occurs when launching any python program for the first time. I was hoping this was a result of compiling to byte code, and that I could just include precompiled versions of python modules with my apps, but from experimenting, it seems this is not the case.. it seems that the first time anything python runs (since the last system reboot), a shared library / dll is loaded or something. So that's one reason I think of maybe embedding Python - I wonder if there are some options available when embedding/calling python that could help reduce that launch delay. Or if worst comes to worst, in the embedded case I can launch without Python, then launch Python if/when I need to, asynchronously (not in the main thread), after the app has already launched.
Is there a way to reduce the first-time launch delay for deployed python programs (ie., programs who's packages include a version of the intepreter.. maybe the interpreter can be compiled with switches I haven't tried)?
Is there any way to reduce the interpreter load/initialize delay when embedding python?
Is it completely unrealistic to expect any python gui program to launch as fast or have as small a footprint as TextEdit?
|
Should I embed or extend python to create high quality, high speed GUI programs?
| 0 | 0 | 0 | 722 |
13,683,289 |
2012-12-03T12:29:00.000
| 0 | 0 | 1 | 0 |
python,django
| 13,683,513 | 2 | false | 1 | 0 |
No. The only thing virtualenv does is create an environment that has its own installation directories and doesn't share libraries with other virtualenv environments (and optionally doesn't access the globally installed libraries either). It therefore just means that your project will use libraries and packages from the virtualenv, so you won't have to change your manage.py.
| 1 | 0 | 0 |
I have completely shifted all my packages to a virtualenv, but my project files were generated by the global Django installation.
I want to know what changes I need to make to the manage.py file, and do I need to use the virtualenv django-admin.py file now?
|
How will my usage of manage.py and django-admin.py change with virtualenv?
| 0 | 0 | 0 | 72 |
13,686,325 |
2012-12-03T15:25:00.000
| 3 | 0 | 0 | 1 |
python,debugging,breakpoints,bottle,wing-ide
| 13,687,662 | 2 | false | 0 | 0 |
Are you debugging under WSGI using wingdbstub.py or launching bottle from the IDE? I'm not that familiar with bottle but a common problem is a web framework's reload mechanism running code in a sub-process that is not debugged. I'm not certain bottle would do that under WSGI, however, but printing the process id at time of importing wingdbstub (or startup if launching from the IDE) and again at the line where the breakpoint is missed would rule this in our out. The "reloader" arg for Bottle.__init__ may be relevant here. If set to True, try setting it to False when debugging under Wing.
Another thing to try is to raise an exception on purpose where the breakpoint is (like "assert 0, 'test exception'") and see if this exception is reported in Wing's debugger in the Exceptions tool and if so whether Wing also manages to open the source code. If bottle is running code in a way that doesn't make it possible to find the source code then this would still stop on the assertion (Wing's debugger stops on all assertions by default even if the host code handles the exception) but it would fail to show the debug file and would put up a message in the status area (at the bottom of the IDE screen and in the Messages tool) that indicates the file name the debug process specified. Depending on this it may be possible to fix the problem (but would require modifying Bottle if the file name is something like ".
BTW, to insert code that is only run under Wing's debugger us something like this:
import os
if 'WINGDB_ACTIVE' in os.environ:
    # code here
If this doesn't help please email support at wingware dot com.
| 1 | 4 | 0 |
I'm writing an Python Bottle application (Python 2.7.2 and Bottle 0.10.9) and developing it in the WingIDE (3.2.8-1) Professional for Linux. This all works well, except when I want to debug the Bottle application. I have it running in standalone mode within WingIDE, but it won't stop at any of my break points in the code, even if I set Bottle.debug(False). Does anyone have any suggestions/ideas about how I can setup Bottle so it will stop on breakpoints within WingIDE?
|
Debugging Python bottle apps with WingIDE
| 0.291313 | 0 | 0 | 537 |
13,688,815 |
2012-12-03T17:45:00.000
| 0 | 0 | 1 | 0 |
python,wxpython,inno-setup,pyinstaller,upx
| 13,690,799 | 1 | true | 0 | 1 |
I don't think you can compress the files before the exe or you'll run into issues of Python not being able to read the zip files. You might be able to use eggs here, but I don't believe eggs are compressed. I would just try compressing the exe with UPX and see how it goes.
I personally don't worry about it. The exe's usually are 20-40 MB and hard disk space is cheap and plentiful.
| 1 | 0 | 0 |
I am creating guis with wxPython then compiling them using pyInstaller and finally using inno to set them up.
Seeing as I am new to all of this, I would like to know: do I need to use UPX to compress just the final compiled exe, or all of the stuff the exe needs to run as well?
thanks, sorry for being a noob.
|
Compressing a compiled wxPython program with UPX
| 1.2 | 0 | 0 | 325 |
13,689,617 |
2012-12-03T18:42:00.000
| 2 | 0 | 0 | 0 |
python,django
| 13,689,659 | 1 | true | 1 | 0 |
No, importing your models is enough, as long as you have Django installed and correctly configured.
| 1 | 0 | 0 |
What is the definition of a Django application? Any application that uses Django features, such as orm and url-view mapping?
I ask because I have a component which has 2 sub-components: a web service server and a standalone application. The web service server uses Django views to map url to request handlers. The web service server and the application use Django models and a database managed by Django. The web service server obviously needs to be a Django application. The standalone application must be a Django application as well?
Thanks in advance.
|
Application that uses Django models need to be a Django app?
| 1.2 | 0 | 0 | 56 |
13,690,514 |
2012-12-03T19:46:00.000
| 1 | 0 | 0 | 0 |
python,objective-c,xml,django,json
| 13,690,743 | 1 | true | 1 | 0 |
It really depends on the data you need to represent.
If you need to represent programming language objects, JSON is probably you best choice, being more lightweight and human-readable than XML.
If you need to represent a complex data structure with its custom schema, you will probably want to give XML a shot.
That being said, Objective-C provides both XML and JSON parsers.
| 1 | 0 | 0 |
Ok all! I have a question for you... Currently looking at building an iPhone app with Objective-C; we're going to be using Python / Django as the back-end, as the website is already built. Meaning all the content is already stored in the database. We're going to use an app called Tastypie as our API, which can pull the data in either JSON or XML format.
However, I want to know which is going to be best for my needs: JSON or XML? The data that is going to be pulled is a directory list which will display a map within each property. Then a page which will display a load of recipes.
If you could give your thoughts on which you think is going to be the best to use from JSON or XML, would be awesome! :)
If you need to know any more information, please let me know.
Thanks,
Josh
|
Parsing data to Objective-C with XML or JSON with Python / Django backend
| 1.2 | 0 | 0 | 401 |
13,692,293 |
2012-12-03T21:45:00.000
| 0 | 0 | 1 | 0 |
python
| 13,692,356 | 3 | false | 0 | 0 |
Personally, I think using continue in Python is fine. If you're putting it in an if/else-type condition in Python, continue will work.
| 3 | 0 | 0 |
Is using continue in Python considered bad practice? It seems like stopping a function/etc mid-execution is a generally "poor" way to construct a program, like sys.exit() under the wrong circumstances, or goto.
|
Is "continue" bad practice?
| 0 | 0 | 0 | 3,458 |
13,692,293 |
2012-12-03T21:45:00.000
| 3 | 0 | 1 | 0 |
python
| 13,692,425 | 3 | false | 0 | 0 |
The basis of structured programming is to have well defined entry and exit points into a method - ideally one of each. However, this isn't a hard and fast rule, and there are many cases where it is appropriate to have multiple exit points. I would say have a look at the code, think about whether there's a different way to do it without a 'continue' that is still elegant, and if not, using the 'continue' will be the correct approach.
| 3 | 0 | 0 |
Is using continue in Python considered bad practice? It seems like stopping a function/etc mid-execution is a generally "poor" way to construct a program, like sys.exit() under the wrong circumstances, or goto.
|
Is "continue" bad practice?
| 0.197375 | 0 | 0 | 3,458 |
13,692,293 |
2012-12-03T21:45:00.000
| 0 | 0 | 1 | 0 |
python
| 13,692,349 | 3 | false | 0 | 0 |
Use of flow-control statements like continue is a controversial subject.
One school of thought believes that they obscure the code by hiding the flow a reader would otherwise assume -- a linear one.
Another school of thought thinks that they are justifiable if the alternative is complicated logic (like additional flags and more complex loop conditions).
Your best bet is to look at your code and decide; would using anything like continue hide the flow control? Can your code be expressed cleanly without it?
My opinion is this: if you have no compelling reason to do otherwise, I recommend you avoid continue and its kin.
| 3 | 0 | 0 |
Is using continue in Python considered bad practice? It seems like stopping a function/etc mid-execution is a generally "poor" way to construct a program, like sys.exit() under the wrong circumstances, or goto.
|
Is "continue" bad practice?
| 0 | 0 | 0 | 3,458 |
13,692,601 |
2012-12-03T22:05:00.000
| 1 | 0 | 1 | 0 |
python,multiprocessing
| 13,728,055 | 1 | false | 0 | 0 |
I ended up using BaseManager.start() to spawn a new process for the manager. I also used a customized proxy by subclassing BaseProxy and registered the proxy with the manager.
I still don't know how to create a pipe into the spawned process, but that wasn't the main problem anyways.
| 1 | 1 | 0 |
So I have this object that is available in a different process. How can I expose this object using an alias in my current process so that whenever I call a method on the alias (proxy), it will be pickled and called on the referent object, raising any exceptions if there are any?
I would also like the data to be sent over a pipe, not sockets.
The examples I see on multiprocessing page in python does not address this scenario. or at least doesn't directly.
|
exposing an object using manager/proxy in a different process
| 0.197375 | 0 | 0 | 126 |
13,694,984 |
2012-12-04T01:59:00.000
| 0 | 0 | 1 | 0 |
python,openssl,pyopenssl
| 13,809,257 | 3 | false | 0 | 0 |
The functionality doesn't exist currently.
We ended up having to extend pyOpenSSL to handle this.
| 1 | 4 | 0 |
I'm trying to get the dates for a CRL using PyOpenSSL. The CRL class doesn't contain them as accessible members. I'm going through all of the underscore members, but I'd rather not use one of those, as they're not supposed to be 'public'.
Any suggestions on getting the dates out?
|
PyOpenSSL: Get a CRL's last update and next update fields
| 0 | 0 | 0 | 2,780 |
13,695,322 |
2012-12-04T02:48:00.000
| 3 | 0 | 0 | 0 |
python,api,salesforce
| 13,695,617 | 2 | true | 1 | 0 |
If you already know the salesforce Id of the record, just call update instead.
| 1 | 0 | 0 |
I'm using the Beatbox API to update/insert data into the Salesforce Opportunity object.
upsert() throws an INVALID FIELD error when I pass Id as externalIDFieldName. Currently I'm using another unique external Id and it's working fine, but I want to use the Salesforce Id.
Please shed some light on what I'm missing.
|
upsert throws error when I pass Id as externalIdFieldName in Beatbox API while contacting Salesforce
| 1.2 | 0 | 1 | 565 |
13,695,692 |
2012-12-04T03:37:00.000
| 4 | 0 | 1 | 0 |
python,list,floating-point
| 13,695,755 | 5 | false | 0 | 0 |
If you're dealing with currency and accuracy matters, don't use float, use decimal.
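A sketch with the value from the question — quantize with ROUND_DOWN truncates to 2 places instead of rounding up:
from decimal import Decimal, ROUND_DOWN

price = Decimal('5.74536541')
print(price.quantize(Decimal('0.01'), rounding=ROUND_DOWN))  # 5.74, never 5.75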
| 2 | 4 | 0 |
I have a list and it contains a certain number '5.74536541' in it which I convert to a float.
I am printing it out in Python 3 using ("%0.2f" % (variable)) but it always prints out 5.75 instead of 5.74.
I know you're thinking who cares, but it is for a currency converter program and I don't want the currencies to round up/down but to be exact.
How can I keep it from rounding but also keep the 2 decimal places?
|
Convert to Float without Rounding Decimal Places
| 0.158649 | 0 | 0 | 9,863 |
13,695,692 |
2012-12-04T03:37:00.000
| -2 | 0 | 1 | 0 |
python,list,floating-point
| 13,695,736 | 5 | false | 0 | 0 |
If the number is a string then truncate the string to only 2 characters after the decimal and then convert it to a float.
Otherwise multiply it by 10^n where n is the number of decimal places you want to keep, truncate to an integer, and then divide back by 10^n.
| 2 | 4 | 0 |
I have a list and it contains a certain number '5.74536541' in it which I convert to a float.
I am printing it out in Python 3 using ("%0.2f" % (variable)) but it always prints out 5.75 instead of 5.74.
I know you're thinking who cares, but it is for a currency converter program and I don't want the currencies to round up/down but to be exact.
How can I keep it from rounding but also keep the 2 decimal places?
|
Convert to Float without Rounding Decimal Places
| -0.07983 | 0 | 0 | 9,863 |
13,696,872 |
2012-12-04T05:48:00.000
| 5 | 0 | 1 | 0 |
python,django,virtualenv
| 13,697,179 | 1 | true | 1 | 0 |
It is possible to create multiple virtualenvs with the same name; they must be in different parent directories, however.
Alternately, you could create multiple virtualenvs in the same parent directory, but with different names.
| 1 | 4 | 0 |
I am making the base skeleton of some Django project files so that I can put them on git and whenever I need to make a new Django site I can grab the files from git and start a blank project.
In my fabfile, I'm generating a virtualenv named virtualenv.
I just want to know: if I need to make many sites on a single computer, they will all have a virtualenv with the same name, but each will live in its own project directory.
Is that ok?
|
Can I have multiple virtualenvs on the same computer with the same name
| 1.2 | 0 | 0 | 1,400 |
13,701,035 |
2012-12-04T10:39:00.000
| 0 | 0 | 0 | 0 |
python,pandas
| 13,701,036 | 3 | false | 0 | 0 |
I see two ways of getting this, both of which look like a detour – which makes me think there must be a better way which I'm overlooking.
Converting the MultiIndex into columns: df[df.reset_index()["B"] == 2]
Swapping the name I want to use to the start of the MultiIndex and then use lookup by index: df.swaplevel(0, "B").ix[2]
| 2 | 1 | 1 |
When I have a pandas.DataFrame df with columns ["A", "B", "C", "D"], I can filter it using constructions like df[df["B"] == 2].
How do I do the equivalent of df[df["B"] == 2], if B is the name of a level in a MultiIndex instead? (For example, obtained by df.groupby(["A", "B"]).mean() or df.set_index(["A", "B"]))
|
boolean indexing on index (instead of dataframe)
| 0 | 0 | 0 | 148 |
13,701,035 |
2012-12-04T10:39:00.000
| 1 | 0 | 0 | 0 |
python,pandas
| 13,755,051 | 3 | true | 0 | 0 |
I would suggest either:
df.xs(2, level='B')
or
df[df.index.get_level_values('B') == val]
I'd like to make the syntax for the latter operation a little nicer.
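Put together as a runnable sketch (the data is made up):
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2], 'B': [2, 3, 2], 'C': [10, 20, 30]})
g = df.set_index(['A', 'B'])

print(g.xs(2, level='B'))                     # cross-section on level B
print(g[g.index.get_level_values('B') == 2])  # boolean mask built from the level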
| 2 | 1 | 1 |
When I have a pandas.DataFrame df with columns ["A", "B", "C", "D"], I can filter it using constructions like df[df["B"] == 2].
How do I do the equivalent of df[df["B"] == 2], if B is the name of a level in a MultiIndex instead? (For example, obtained by df.groupby(["A", "B"]).mean() or df.set_index(["A", "B"]))
|
boolean indexing on index (instead of dataframe)
| 1.2 | 0 | 0 | 148 |