Column                              Dtype          Min        Max
Q_Id                                int64          337        49.3M
CreationDate                        stringlengths  23         23
Users Score                         int64          -42        1.15k
Other                               int64          0          1
Python Basics and Environment       int64          0          1
System Administration and DevOps    int64          0          1
Tags                                stringlengths  6          105
A_Id                                int64          518        72.5M
AnswerCount                         int64          1          64
is_accepted                         bool           2 classes
Web Development                     int64          0          1
GUI and Desktop Applications        int64          0          1
Answer                              stringlengths  6          11.6k
Available Count                     int64          1          31
Q_Score                             int64          0          6.79k
Data Science and Machine Learning   int64          0          1
Question                            stringlengths  15         29k
Title                               stringlengths  11         150
Score                               float64        -1         1.2
Database and SQL                    int64          0          1
Networking and APIs                 int64          0          1
ViewCount                           int64          8          6.81M
11,259,076
2012-06-29T09:13:00.000
2
0
0
1
python,ubuntu,sysadmin,nice
11,259,189
1
true
0
0
The scheduler will only put your process on hold if there is another process ready to run. If you have no other processes which hog up the CPU, your process will be running most of the time. The scheduler does not put your process to sleep just because it feels like it. My guess is that there is some reason your process is not runnable, e.g. it is blocking and waiting for I/O or data.
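A quick way to confirm the answer's guess that the process is blocked on I/O is to inspect its state and I/O counters. A minimal sketch using psutil, a third-party package (pip install psutil); the PID 12345 is a placeholder, and io_counters() is not available on every platform.

    import psutil  # third-party; pip install psutil

    p = psutil.Process(12345)   # placeholder PID; use the script's real PID
    print(p.status())           # 'disk-sleep' corresponds to the D state shown by top
    print(p.io_counters())      # read counters that keep rising suggest I/O waits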
1
1
0
I have written a data munging script that is very CPU intensive. It has been running for a few days now, but (thanks to trace messages sent to the console) I can see that it is not working (actually, it has not been working for the last 10 hours or so). When I run top, I notice that the process is either sleeping (S) or in uninterruptible sleep (D). This is wasting a lot of time. I used sudo renice -10 PID to change the process's nice value, and after running for a short while, I noticed that the process had gone back to sleep again. My question(s): Is there anything I can do to FORCE the script to run until it finishes (even if it means the machine is unusable until the end of the script)? Is there a yield command I can use in Python which allows me to periodically pass control to other processes/threads, to stop the scheduler from trying to put my script to sleep? I am using Python 2.7.x on Ubuntu 10.0.4
long running CPU intensive python script sent to sleep by scheduler
1.2
0
0
376
11,262,984
2012-06-29T13:48:00.000
1
0
0
1
python,c,ipc,protocol-buffers,thrift
11,263,176
2
false
0
0
My default choice would be to use normal sockets communicating over localhost. Sockets are a well-understood, language- and platform-neutral API that tends to perform very well. They also have the advantage of not tying you to two processes on the same box, which can be advantageous in many situations.
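A minimal sketch of the Unix-domain-socket approach the question ultimately settled on, from the Python client side. The socket path /tmp/imaged.sock and the 4-byte length-prefix framing are assumptions for illustration, not part of the original answer.

    import socket
    import struct

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/tmp/imaged.sock")    # assumed path of the C daemon's socket

    text = b"some input text"
    sock.sendall(struct.pack("!I", len(text)) + text)   # length prefix, then payload

    (size,) = struct.unpack("!I", sock.recv(4))   # daemon replies with the same framing
    image = b""
    while len(image) < size:
        image += sock.recv(size - len(image))
    sock.close()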
1
3
0
I have an existing C process that can take one text input and produce a single image file. This C process has a high setup/teardown cost due to its interface with an external system. Once the setup/teardown has occurred, the actual production of an image from text is almost instantaneous. My plan is to daemonize the C process, so it will receive text and produce image files in an infinite loop while maintaining a connection to the external system. I will also write a small client program in Python which will interface with the daemon to send text/receive the image. The target OS is Unix. The question is, what is the best way to do bidirectional IPC between Python/C in this case? Should I just open a Unix domain socket and send packed structs back and forth, or should I look at something like Apache Thrift or protobuf? UPDATE: Just going to keep it simple and open a Unix domain socket
what is the best way to do bidirectional IPC between a long-running c process and python?
0.099668
0
0
1,766
11,265,603
2012-06-29T16:31:00.000
0
0
1
0
python,pydoc
49,285,972
10
false
0
0
In Windows, just open up a Windows Command Line window, go to the Lib subfolder of your Python installation, and type python pydoc.py moduleName.memberName > c:\myFolder\memberName.txt to put the documentation for the property or method memberName in moduleName into the file memberName.txt. If you want an object further down the hierarchy of the module, just put more dots. For example python pydoc.py wx.lib.agw.ultimatelistctrl > c:\myFolder\UltimateListCtrl.txt to put the documentation on the UltimateListCtrl control in the agw package in the wxPython package into UltimateListCtrl.txt.
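To do the same thing from inside Python rather than from the shell, the standard library exposes the text that help() pages through. A sketch using pydoc.render_doc, with pydoc.plain() stripping the terminal bold-overstrike characters; the module name and output file are placeholders.

    import pydoc

    text = pydoc.plain(pydoc.render_doc("json"))   # a module name or an object both work
    with open("json_help.txt", "w") as f:
        f.write(text)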
1
24
0
I've got a python package which outputs considerable help text from: help(package) I would like to export this help text to a file, in the format in which it's displayed by help(package) How might I go about this?
How do I export the output of Python's built-in help() function
0
0
0
10,401
11,267,143
2012-06-29T18:32:00.000
1
0
0
0
python,database,data-mining,text-mining
11,267,361
2
false
0
0
Why not have simple SQL tables? Tables: documents, with a primary key of id or file name or something; observations, with a foreign key into documents and the term (indexed on both fields, probably unique). The array approach you mentioned seems like a slow way to get at terms. With SQL you can easily allow new terms to be added to the observations table. It is easy to aggregate, and you can even do trending by aggregating by date if the documents table includes a timestamp.
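A minimal sketch of the two-table layout the answer describes, using the standard library's sqlite3 purely for illustration (the answer does not name a specific database; table and column names are assumptions).

    import sqlite3

    conn = sqlite3.connect("topics.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS documents (
        id      INTEGER PRIMARY KEY,
        name    TEXT UNIQUE,
        created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE IF NOT EXISTS observations (
        document_id INTEGER REFERENCES documents(id),
        term        TEXT,
        UNIQUE (document_id, term)
    );
    CREATE INDEX IF NOT EXISTS idx_obs_term ON observations (term);
    """)

    # Trending is then a GROUP BY (optionally also grouped by documents.created):
    rows = conn.execute(
        "SELECT term, COUNT(*) FROM observations GROUP BY term ORDER BY 2 DESC"
    ).fetchall()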
1
3
1
I am looking to track topic popularity on a very large number of documents. Furthermore, I would like to give recommendations to users based on topics, rather than the usual bag of words model. To extract the topics I use natural language processing techniques that are beyond the point of this post. My question is how should I persist this data so that: I) I can quickly fetch trending data for each topic (in principle, every time a user opens a document, the topics in that document should go up in popularity) II) I can quickly compare documents to provide recommendations (here I am thinking of using clustering techniques) More specifically, my questions are: 1) Should I go with the usual way of storing text mining data? meaning storing a topic occurrence vector for each document, so that I can later measure the euclidean distance between different documents. 2) Some other way? I am looking for specific python ways to do this. I have looked into SQL and NoSQL databases, and also into pytables and h5py, but I am not sure how I would go about implementing a system like this. One of my concerns is how can I deal with an ever growing vocabulary of topics? Thank you very much
Storing text mining data
0.099668
0
0
612
11,267,463
2012-06-29T18:57:00.000
0
0
1
0
python,visual-studio,python-2.7,compilation,windows-7-x64
11,285,320
5
false
0
0
There are several ways to do it, apparently: build using MinGW, or build Python 2.7 using VS 2008 Express (I'm not sure which version is good for building 3.2, but it could be VS 2010). You can compile Python x64 from source using your desired VS, but you'll have compatibility issues with other pre-built packages.
1
17
0
I'm starting out some projects in word processing and I needed NumPy and NLTK. That was the first time I got to know easy_install and how to compile a new Python module into the system. I have Python 2.7 x64 plus VS 11 and VS 12, and also Cygwin (the latest one, I guess). I could see in the file that compiles using VS that it looks for a VS environment with the same version as the one that compiled the Python code; why? When I hardcoded 11.0, which is my version, numpy failed to build with several strange errors regarding vcvarsall (it found vcvarsall, but probably misused it). Can't I build Python binaries on Windows? If not, can I cross-compile on Linux for Windows? (using the same method as Google for the Android SDK)
Compiling Python modules on Windows x64
0
0
0
20,743
11,268,501
2012-06-29T20:25:00.000
4
0
1
0
python,python-3.x,python-2.7,pip
30,897,479
10
false
0
0
On Suse Linux 13.2, pip calls python3, but pip2 is available to use the older python version.
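One way to be certain which interpreter a package is installed for, regardless of which pip happens to be first on the PATH, is to invoke pip through the interpreter itself. A small sketch; the package name is a placeholder.

    import subprocess
    import sys

    # Installs for exactly the interpreter running this script
    subprocess.check_call([sys.executable, "-m", "pip", "install", "some-package"])
    # The shell equivalent of the same idea:
    #   python2 -m pip install X      python3 -m pip install X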
3
213
0
I installed Python 3.x (besides Python 2.x on Ubuntu) and slowly started to pair modules I use in Python 2.x. So I wonder, what approach should I take to make my life easy by using pip for both Python 2.x and Python 3.x?
How to use pip with Python 3.x alongside Python 2.x
0.07983
0
0
242,644
11,268,501
2012-06-29T20:25:00.000
6
0
1
0
python,python-3.x,python-2.7,pip
53,039,045
10
false
0
0
On Windows, I first installed Python 3.7 and then Python 2.7. Then, use the command prompt:

    pip install python2-module-name
    pip3 install python3-module-name

That's all.
3
213
0
I installed Python 3.x (besides Python 2.x on Ubuntu) and slowly started to pair modules I use in Python 2.x. So I wonder, what approach should I take to make my life easy by using pip for both Python 2.x and Python 3.x?
How to use pip with Python 3.x alongside Python 2.x
1
0
0
242,644
11,268,501
2012-06-29T20:25:00.000
5
0
1
0
python,python-3.x,python-2.7,pip
27,873,874
10
false
0
0
This worked for me on OS X. (I say this because sometimes it is a pain that the Mac has "its own" version of every open source tool, and you cannot remove it because "its improvements" make it unique for other Apple stuff to work, and if you remove it things start falling apart.) I followed the steps provided by @Lennart Regebro to get pip for Python 3; nevertheless, pip for Python 2 was still first on the path, so... what I did was create a symbolic link to Python 3's pip inside /usr/bin (indeed, I did the same to have my two Pythons running in peace): ln -s /Library/Frameworks/Python.framework/Versions/3.4/bin/pip /usr/bin/pip3 Notice that I added a 3 at the end, so basically what you have to do is use pip3 instead of just pip. The post is old, but I hope this helps someone someday. This should theoretically work for any Linux system.
3
213
0
I installed Python 3.x (besides Python 2.x on Ubuntu) and slowly started to pair modules I use in Python 2.x. So I wonder, what approach should I take to make my life easy by using pip for both Python 2.x and Python 3.x?
How to use pip with Python 3.x alongside Python 2.x
0.099668
0
0
242,644
11,269,277
2012-06-29T21:35:00.000
2
0
0
0
python,mysql,django,django-models,code-generation
11,269,451
1
true
1
0
If you're starting a new project, always let Django generate the tables. Django provides mechanisms to use a pre-existing database to support legacy data or situations where you aren't in direct control of the database structure. This is a nice feature of Django for those who need it, but it creates more work for the developer and breaks many conventions in Django. You still can, and in fact are encouraged to, add additional indexes and any other database-specific optimizations available to you after the fact.
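On the combined-keys worry in the question: the Django ORM of this era has no composite primary keys, but it can enforce composite uniqueness and extra indexes after the fact, as the answer suggests. A hedged sketch; the model and field names are invented.

    from django.db import models

    class Membership(models.Model):
        user_id = models.IntegerField(db_index=True)   # an extra index, added explicitly
        group_id = models.IntegerField()

        class Meta:
            # Enforces the combined-key constraint at the database level;
            # the table still gets Django's surrogate integer primary key.
            unique_together = (("user_id", "group_id"),)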
1
0
0
I'm getting myself acquainted with MVC by making a 'for-fun' site in django. I'm really liking a lot of the features of the api and python itself. But one thing I really didn't like was that django encourages you to let it build your database FOR you. I'm well aware of inspectDB and I'm interested in using it. So my question is this, is there any solid reason that I should choose one method of model/DB generation over another? I feel far more comfortable defining my database in the traditional SQL way (where I have access to combined keys). But I'm concerned using things that aren't available through the Model api may cause problems for me later on. Such as combined keys, medium_text, etc. I'm using mysql btw.
Django: Generate models from database vs database from models
1.2
0
0
362
11,269,631
2012-06-29T22:15:00.000
0
1
0
0
python,excel,audio,measure
11,269,704
1
false
0
0
I know that this is a long shot, but maybe there are some libavcodec/FFMpeg ports to python. It's always worth a shot to see if there is something that exists out there along these lines...
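For WAV input, the standard library can already do the measurement the question describes, without any FFmpeg port: wave reads frames and audioop.rms gives a level per window. A sketch assuming a file named clip.wav.

    import audioop
    import wave

    w = wave.open("clip.wav", "rb")
    width = w.getsampwidth()
    chunk = int(w.getframerate() * 0.1)   # frames in 1/10th of a second
    levels = []
    frames = w.readframes(chunk)
    while frames:
        levels.append(audioop.rms(frames, width))   # RMS level of this 100 ms window
        frames = w.readframes(chunk)
    w.close()
    # 'levels' can now be written out as CSV and opened in Excel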
1
1
0
Is there a way to measure audio output level in Python? I'd like to measure the volume of a 30 second audio file every 1/10th of a second, then export the data into something like Excel. Is this possible?
Using Python to measure audio output level?
0
0
0
799
11,269,862
2012-06-29T22:45:00.000
1
0
0
0
python,database,performance,google-app-engine,google-cloud-datastore
11,269,923
3
false
1
0
Caching is most useful when the calculation is expensive. If the calculation is simple and cheap, you might as well recalculate as needed.
3
0
0
I am writing an app in Python for Google App Engine where each user can submit a post, and each post has a ranking which is determined by its votes and comment count. The ranking is just a simple calculation based on these two parameters. I am wondering: should I store this value in the datastore (and take up space there) or just calculate it every time I need it? Just FYI, the posts will be sorted by ranking, so that needs to be taken into account. I am mostly thinking for the sake of efficiency and trying to balance whether I should save datastore room or save the read/write quota. I would think it would be better to simply store it, but then I need to recalculate and rewrite the ranking value every time anyone votes or comments on the post. Any input would be great.
store a calculated value in the datastore or just calculate it on the fly
0.066568
0
0
303
11,269,862
2012-06-29T22:45:00.000
2
0
0
0
python,database,performance,google-app-engine,google-cloud-datastore
11,270,514
3
true
1
0
What about storing the ranking as a property on the post? That would make sense for querying/sorting, wouldn't it? If you store the ranking at the same time (meaning in the same entity) as you store the votes/comment count, then the only increase in write cost would be for the index (OK, an initial write cost too, but that is, what, 2 writes? Very small anyway). You need to do a database operation every time anyone votes or comments on the post anyway, right? How else can you track votes/comments? Actually, I imagine you will get into using text search to find data in the posts. If so, I would look into storing the ranking as a property in the search index and using it to rank matching results. We also need to consider how you are selecting the posts to display: is ranking by votes and comments the only criterion?
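A sketch of storing the ranking on the entity, using the App Engine ndb API for illustration; the ranking formula here is a placeholder, not the asker's actual calculation.

    from google.appengine.ext import ndb

    class Post(ndb.Model):
        votes = ndb.IntegerProperty(default=0)
        comment_count = ndb.IntegerProperty(default=0)
        # Recomputed automatically on every put(), stored and indexed with the
        # entity, so queries can sort on it directly.
        ranking = ndb.ComputedProperty(lambda self: self.votes * 2 + self.comment_count)

    # top_posts = Post.query().order(-Post.ranking).fetch(20)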
3
0
0
I am writing an app in Python for Google App Engine where each user can submit a post, and each post has a ranking which is determined by its votes and comment count. The ranking is just a simple calculation based on these two parameters. I am wondering: should I store this value in the datastore (and take up space there) or just calculate it every time I need it? Just FYI, the posts will be sorted by ranking, so that needs to be taken into account. I am mostly thinking for the sake of efficiency and trying to balance whether I should save datastore room or save the read/write quota. I would think it would be better to simply store it, but then I need to recalculate and rewrite the ranking value every time anyone votes or comments on the post. Any input would be great.
store a calculated value in the datastore or just calculate it on the fly
1.2
0
0
303
11,269,862
2012-06-29T22:45:00.000
1
0
0
0
python,database,performance,google-app-engine,google-cloud-datastore
11,271,252
3
false
1
0
If you're depending on keeping a running vote count in an entity, then you either have to be willing to lose an occasional vote, or you have to use transactions. If you use transactions, you're rate limited as to how many transactions you can do per second. (See the doc on transactions and entity groups). If you're liable to have a high volume of votes, rate limiting can be a problem. For a low rate of votes, keeping a count in an entity might work fine. But if you any significant peaks in voting rate, storing separate Vote entities that periodically get rolled up into a cached count, perhaps adjusted by (possibly unreliable) incremental counts kept in memcache, might work better for you. It really depends on what you want to optimize for. If you're trying to minimize disk writes by keeping a vote count cached non-transactionally, you risk losing votes.
3
0
0
I am writing an app in Python for Google App Engine where each user can submit a post, and each post has a ranking which is determined by its votes and comment count. The ranking is just a simple calculation based on these two parameters. I am wondering: should I store this value in the datastore (and take up space there) or just calculate it every time I need it? Just FYI, the posts will be sorted by ranking, so that needs to be taken into account. I am mostly thinking for the sake of efficiency and trying to balance whether I should save datastore room or save the read/write quota. I would think it would be better to simply store it, but then I need to recalculate and rewrite the ranking value every time anyone votes or comments on the post. Any input would be great.
store a calculated value in the datastore or just calculate it on the fly
0.066568
0
0
303
11,270,021
2012-06-29T23:07:00.000
3
0
1
1
python,python-3.x
11,270,699
2
false
0
0
There are a few different options here. First, start with jdi's suggestion of using multiprocessing. It may be that Windows process creation isn't actually expensive enough to break your use case. If it actually is a problem, what I'd personally do is use Virtual PC, or even User Mode Linux, to just run the same code in another OS, where process creation is cheap. You get a free sandbox out of that, as well. If you don't want to do that, jdi's suggestion of process pools is a bit more work, but should work well as long as you don't have to kill processes very often. If you really do want everything to be threads, you can do so, as long as you can restrict the way the jobs are written. If the jobs can always be cleanly unwound, you can kill them just by raising an exception. Of course they also have to not catch the specific exception you choose to raise. Obviously neither of these conditions is realistic as a general-purpose solution, but for your use case, it may be fine. The key is to make sure your code evolver never inserts any manual resource-management statements (like opening and closing a file); only with statements. (Alternatively, insert the open and close, but inside a try/finally.) And that's probably a good idea even if you're not doing things this way, because spinning off hundreds of processes that, e.g., each leak as many file handles as they can until they either time out or hit the file limit would slow your machine to a crawl. If you can restrict the code generator/evolver even further, you could use some form of cooperative threading (e.g., greenlets), which makes things even nicer. Finally, you could switch from CPython to a different Python implementation that can run multiple interpreter instances in a single process. I don't know whether Jython or IronPython can do so. PyPy can do that, and also has a restricted-environment sandbox, but unfortunately I think both of those (and Python 3.x support) are not-ready-for-prime-time features, which means you either have to get a special build of PyPy (probably without the JIT optimizer), or build it yourself. This might be the best long-term solution, but it's probably not what you want today.
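A minimal sketch of the process-based option discussed above: run the untrusted code in a child process, wait with a timeout, and set the premature-exit flag if the process had to be terminated. (On Windows, the calling code must live under an if __name__ == "__main__" guard.)

    import multiprocessing

    def _run(code):
        exec(code)   # works in both Python 2.7 and 3

    def run_with_timeout(code, seconds):
        p = multiprocessing.Process(target=_run, args=(code,))
        p.start()
        p.join(seconds)
        timed_out = p.is_alive()
        if timed_out:
            p.terminate()   # forcibly ends the runaway code
            p.join()
        return timed_out    # the "premature exit" flag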
1
2
0
My script accepts arbitrary-length and -content strings of Python code, then runs them inside exec() statements. If the time to run the arbitrary code passes over some predetermined limit, then the exec() statement needs to exit and a boolean flag needs to be set to indicate that a premature exit has occurred. How can this be accomplished? Additional information These pieces of code will be running in parallel in numerous threads (or at least as parallel as you can get with the GIL). If there is an alternative method in another language, I am willing to try it out. I plan on cleaning the code to prevent access to anything that might accidentally damage my system (file and system access, import statements, nested calls to exec() or eval(), etc.). Options I've considered Since the exec() statements are running in threads, use a poison pill to kill the thread. Unfortunately, I've read that poison pills do not work for all cases. Running the exec() statements inside processes, then using process.terminate() to kill everything. But I'm running on Windows and I've read that process creation can be expensive. It also complicates communication with the code that's managing all of this. Allowing only pre-written functions inside the exec() statements and having those functions periodically check for an exit flag then perform clean-up as necessary. This is complicated, time-consuming, and there are too many corner-cases to consider; I am looking for a simpler solution. I know this is a bit of an oddball question that deserves a "Why would you ever want to allow arbitrary code to run in an exec() statement?" type of response. I'm trying my hand at a bit of self-evolving code. This is my major stumbling block at the moment: if you allow your code to do almost anything, then it can potentially hang forever. How do you regain control and stop it when it does?
Escaping arbitrary blocks of code
0.291313
0
0
161
11,270,434
2012-06-30T00:18:00.000
0
0
0
1
python,google-app-engine,indexing,google-cloud-datastore
11,270,908
2
false
1
0
I'd suggest using pre-existing code and building around that instead of re-inventing the wheel.
1
0
0
I am running a webapp on google appengine with python and my app lets users post topics and respond to them and the website is basically a collection of these posts categorized onto different pages. Now I only have around 200 posts and 30 visitors a day right now but that is already taking up nearly 20% of my reads and 10% of my writes with the datastore. I am wondering if it is more efficient to use the google app engine's built in get_by_id() function to retrieve posts by their IDs or if it is better to build my own. For some of the queries I will simply have to use GQL or the built in query language because they are retrieved on more than just and ID but I wanted to see which was better. Thanks!
use standard datastore index or build my own
0
1
0
82
11,271,205
2012-06-30T03:38:00.000
6
0
1
0
python,object,ironpython,pickle
11,271,404
1
false
0
0
I think the best answer you are going to get for the information you have provided is this... No other serialization format is going to make your situation better. pickle handles native Python objects a lot more gracefully than something like JSON. Any other format will require you to define handlers to help it serialize objects it can't handle. The best route for you is to try and solve why pickle is failing.
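The answer's point about other formats needing handlers can be seen with the standard json module: a default hook is required as soon as a non-primitive object appears. A small illustration with an invented class:

    import json

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

    # json.dumps(Point(1, 2)) raises TypeError; a handler must be supplied:
    s = json.dumps(Point(1, 2), default=lambda o: o.__dict__)
    p = Point(**json.loads(s))   # and reconstruction is also manual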
1
0
0
When I try to use pickle to load a dumped file, Python crashes with code CLR20r3! I want to know if there are any alternatives to pickle that can dump a Python object and load it back. 3rd party libraries are acceptable.
Any alternative to Python pickle?
1
0
0
2,951
11,271,883
2012-06-30T06:05:00.000
1
1
1
0
python,macos,packages,egg,file-management
11,273,716
1
false
0
0
If all you want is to use the installed Python package, then you don't need the downloaded directory at all. You can delete it if you like. If you want to use it for its docs, then you can keep it, or move it somewhere else. There's no connection between the installed package and the original unzipped directory you installed from, so you are free to do what you like with it.
1
1
0
Sorry about the nooby question, but, when I download and unzip a third-party python package, and then python setup.py install it thereby making an egg directory in site-packages, what do I do with the original unzipped directory in the Download folder? Should I sudo copy & paste all the test/docs/README files along with the rest of the corresponding site-packages files? I've typically deleted them but don't think that's a smart thing to do..
What to do with the test/docs/readme files after the downloaded python package is built?
0.197375
0
0
55
11,274,297
2012-06-30T13:05:00.000
0
0
1
1
python,pip,easy-install
11,278,661
3
false
0
0
Make sure you use a recent version of virtualenv itself; the latest at the time of writing is 1.7.2. Old versions required the use of the -E switch to install into the virtual environment.
1
2
0
In my Ubuntu 12.04 machine, the installation of pip requirements is asking me for sudo permission every time it attempts to install. How would I override this, as this is terrible for my working environment to install things globally instead of inside the venv? Note: I did not setup the venv using sudo.
How to configure a virtual environment that doesn't require sudo?
0
0
0
134
11,275,133
2012-06-30T15:03:00.000
1
0
0
0
python,django,oauth,google-api,gdata-api
11,275,174
1
true
1
0
Django has some packages like django-facebook or django-social-auth which manage the authentication part of facebook login for you. You could either use these in your project, or look at the code there as a good starting point to learn about FB OAuth implementation.
1
1
0
I am building a social reader Facebook application using Django where I am using Google Data API (Blogger API). But I am unable to deal with the authorization step to use the Google API (currently using ClientLogin under development). I tried to read the OAuth documentation but couldn't figure out how to proceed. I don't want my users to provide any login credentials for google.. which makes the app completely absurd. So, can anyone help me on my project and tell me what kind of authorization I should actually use for google API and how ? (I am using gdata lib)
What kind of authorization I should use for my facebook application
1.2
0
0
90
11,277,323
2012-06-30T20:13:00.000
1
0
0
0
python,ajax,redis,download,flask
11,277,333
1
true
1
0
Normally you would configure your webserver so that URLs that refer to static files are handled directly by the server, rather than going through Flask.
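A minimal sketch of the Flask side, assuming a files directory and URL scheme that are not in the original answer: serve the file when it exists, otherwise return JSON the AJAX caller can turn into an "unavailable" message.

    import os
    from flask import Flask, jsonify, send_file

    app = Flask(__name__)
    FILES_DIR = "/srv/files"   # assumed location

    @app.route("/download/<name>")
    def download(name):
        path = os.path.join(FILES_DIR, os.path.basename(name))  # basename blocks ../ tricks
        if not os.path.isfile(path):
            return jsonify(available=False), 404   # the AJAX side can pop up "unavailable"
        return send_file(path)   # in production, let the web server serve this directly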
1
0
0
This might be a bit of a noob question, so I apologize in advance. How do I make a web server running flask+redis serve binary files as a response to a link/query? I want the response to the link to be either some AJAX action such as changing a div, or popping up an "unavailable" response, or to serve back some binary file. I would like help both with the client side (jQuery / other Javascript) and the server side. Thanks! Side question: Would you choose redis for this task? Or maybe something else such as MongoDB, or a regular RDBMS? And why?
Serving files through flask + redis
1.2
0
0
723
11,279,467
2012-07-01T03:59:00.000
3
0
0
0
python,django,multithreading,gunicorn,eventlet
11,281,661
1
true
1
0
If you set workers = 1 in your gunicorn configuration, two processes will be created: 1 master process and 1 worker process. When you use worker_class = eventlet, the simultaneous connections are handled by green threads. Green threads are not like real threads. In simple terms, green threads are functions (coroutines) that yield whenever the function encounters I/O operation. So nothing is copied. You just need to worry about making every I/O operation 'green'.
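The setup being described, written as a gunicorn configuration file (which is itself Python); the values mirror the answer and the question's stated defaults.

    # gunicorn.conf.py
    workers = 1                 # one master process + one worker process
    worker_class = "eventlet"   # requests handled by green threads inside the worker
    worker_connections = 1000   # the default cap on simultaneous green threads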
1
1
0
If I deploy Django using Gunicorn with the Eventlet worker type, and only use one process, what happens under the hood to service the 1000 (by default) worker connections? What parts of Django are copied into each thread? Are any parts copied?
What goes into an Eventlet+Gunicorn worker thread?
1.2
0
0
2,064
11,279,527
2012-07-01T04:15:00.000
4
0
0
1
python,heroku,subprocess,worker
11,282,872
1
true
1
0
On the newer Cedar stack, there are no issues with spawning multiple processes. Each dyno is a virtual machine and has no particular limitations except in memory and CPU usage (about 512 MB of memory, I think, and 1 CPU core). Following the newer installation instructions for some stacks such as Python will result in a configuration with multiple (web server) processes out of the box. Software installed on web dynos may vary depending on what buildpack you are using; if your subprocesses need special software then you may have to either bundle it with your application or (better) roll your own buildpack. At this point I would normally remind you that running asynchronous tasks on worker dynos instead of web dynos, with a proper task queue system, is strongly encouraged, but it sounds like you know that already. Do keep in mind that accounts with only one web dyno (typically this means, "free" accounts) will have that dyno spun down after an hour or so of not receiving any web requests, and that any background processes running on the dyno at that time will necessarily be killed. Accounts with multiple web dynos are not subject to this restriction.
1
10
0
I am aware of the memory limitations of the Heroku platform, and I know that it is far more scalable to separate an app into web and worker dynos. However, I still would like to run asynchronous tasks alongside the web process for testing purposes. Dynos are costly and I would like to prototype on the free instance that Heroku provides. Are there any issues with spawning a new job as a process or subprocess in the same dyno as a web process?
Is it feasible to run multiple processeses on a Heroku dyno?
1.2
0
0
2,842
11,279,779
2012-07-01T05:20:00.000
-1
0
1
0
c++,python,algorithm
11,280,036
5
false
0
0
This is actually a classic interview question, and the answer they were expecting was that you first sort the URLs and then do a binary search. If it doesn't fit in memory, you can do the same thing with a file.
2
0
0
Recently I was asked this question in an interview. I gave an answer in O(n) time but in two passes. Also he asked me how to do the same if the url list cannot fit into the memory. Any help is very much appreciated.
Finding a unique url from a large list of URLs in O(n) time in a single pass
-0.039979
0
0
1,129
11,279,779
2012-07-01T05:20:00.000
6
0
1
0
c++,python,algorithm
11,279,881
5
true
0
0
If it all fits in memory, then the problem is simple: Create two sets (choose your favorite data structure), both initially empty. One will contain unique URLs and the other will contain URLs that occur multiple times. Scan the URL list once. For each URL, if it exists in the unique set, remove it from the unique set and put it in the multiple set; otherwise, if it does not exist in the multiple set, add it to the unique set. If the set does not fit into memory, the problem is difficult. The requirement of O(n) isn't hard to meet, but the requirement of a "single pass" (which seems to exclude random access, among other things) is tough; I don't think it's possible without some constraints on the data. You can use the set approach with a size limit on the sets, but this would be easily defeated by unfortunate orderings of the data and would in any event only have a certain probability (<100%) of finding a unique element if one exists. EDIT: If you can design a set data structure that exists in mass storage (so it can be larger than would fit in memory) and can do find, insert, and delete in O(1) (amortized) time, then you can just use that structure with the first approach to solve the second problem. Perhaps all the interviewer was looking for was to dump the URLs into a database with a UNIQUE index for URLs and a count column.
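The in-memory algorithm from the answer, written out: a single pass over the list with O(1) set operations per URL.

    def find_unique(urls):
        unique, repeated = set(), set()
        for url in urls:
            if url in unique:          # second sighting: demote it
                unique.remove(url)
                repeated.add(url)
            elif url not in repeated:  # first sighting
                unique.add(url)
        return unique                  # URLs seen exactly once

    print(find_unique(["a", "b", "a", "c"]))   # {'b', 'c'}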
2
0
0
Recently I was asked this question in an interview. I gave an answer in O(n) time but in two passes. Also he asked me how to do the same if the url list cannot fit into the memory. Any help is very much appreciated.
Finding a unique url from a large list of URLs in O(n) time in a single pass
1.2
0
0
1,129
11,281,233
2012-07-01T10:30:00.000
3
0
0
0
java,python,django,playframework
11,283,888
3
false
1
0
PyCharm is an IDE created by JetBrains. Originally, JetBrains only had one product, IntelliJ IDE (a Java IDE), and PyCharm and all the other products were spawned from that one highly successful product. As for which language, I would suggest trying to do something small (but feature rich enough to be a holistic test) with all 3 and see which one works best for you. Language choice is a massive question, and depends on personal factors, project factors and many other besides. Therefore I won't even begin to tell you which one is best (because it would be what is best for me, in my situation).
3
2
0
I already know Java, C# and C++. Now I want to start with web development and I saw that some really big sites are built with Python/C++. I like the coding style of Python, it looks really clean, but some other things like no errors before runtime is really strange. However, I don't know what I should learn now. I started with Python but then I saw that Google App Engine also supports Java and the PlayFramework looks amazing too. Now I am really confused. Should I go with Python or Java? I found the IDE for Python "PyCharm" really amazing for web development. Does Java have something similar, eclipse maybe? I know that this question isn't constructive, but it will help me with my decision. What are pro and cons of both languages?
Java PlayFramework & Python Django GAE
0.197375
0
0
411
11,281,233
2012-07-01T10:30:00.000
4
0
0
0
java,python,django,playframework
11,285,063
3
true
1
0
I just want to add that if compatibility with GAE is a requirement for you, then I think Django is the best choice. Play Framework, as of version 2.0, is no longer compatible with GAE.
3
2
0
I already know Java, C# and C++. Now I want to start with web development and I saw that some really big sites are built with Python/C++. I like the coding style of Python, it looks really clean, but some other things like no errors before runtime is really strange. However, I don't know what I should learn now. I started with Python but then I saw that Google App Engine also supports Java and the PlayFramework looks amazing too. Now I am really confused. Should I go with Python or Java? I found the IDE for Python "PyCharm" really amazing for web development. Does Java have something similar, eclipse maybe? I know that this question isn't constructive, but it will help me with my decision. What are pro and cons of both languages?
Java PlayFramework & Python Django GAE
1.2
0
0
411
11,281,233
2012-07-01T10:30:00.000
0
0
0
0
java,python,django,playframework
15,122,029
3
false
1
0
It depends on you. What do you want more: to learn a new programming language, or to learn how to make web apps? I just started a few Play tutorials and it's really great; Play 2 is even more amazing than the previous one. I'd like to learn Scala, so it's perfect for me, but also because of that it's not GAE compatible anymore. Come on, though, there are other ways to deploy apps; I'd like to try OpenShift (I don't know if it's possible, I'll try it soon). I'm also a big fan of Python, so it's natural that I'm also looking for frameworks to build apps in it. I would say that Django isn't the only choice; I had a few tries with Django, and right now I'm trying web2py. As many have stated, Django has quite a hard learning curve. web2py should be better, but I don't like the 'wizard' way of scaffolding apps. I've used Bottle (Flask is similar) and it's great for small apps; RESTful apps are super easy with them, so maybe that should be your starting point. From what I've read about Python's frameworks:
- Django: quite good for typical websites/CMS-like apps, but hard to learn
- web2py: very interesting (I'm in the middle of testing it; Reddit's using it?)
- web.py: a minimalistic, lightweight framework; you have to build the web app almost from scratch
- Tornado/Twisted: fast, async frameworks
- Flask/Bottle: very nice microframeworks, great for REST services
I've not tried them all, but that's what I've found out from reading the web/blogs etc. I'm looking for something like Play Framework 2.x but in Python (ideally 3) :)
3
2
0
I already know Java, C# and C++. Now I want to start with web development and I saw that some really big sites are built with Python/C++. I like the coding style of Python, it looks really clean, but some other things like no errors before runtime is really strange. However, I don't know what I should learn now. I started with Python but then I saw that Google App Engine also supports Java and the PlayFramework looks amazing too. Now I am really confused. Should I go with Python or Java? I found the IDE for Python "PyCharm" really amazing for web development. Does Java have something similar, eclipse maybe? I know that this question isn't constructive, but it will help me with my decision. What are pro and cons of both languages?
Java PlayFramework & Python Django GAE
0
0
0
411
11,282,099
2012-07-01T12:51:00.000
0
0
0
0
python,wxwidgets
14,640,674
1
true
0
1
When you create a wx widget, the normal pattern of the create function is wx.SomeUIObject(parent, id, ...). The parent could be set when you create the dialog.
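A sketch of what the answer means: pass the frame as the dialog's parent argument, and CenterOnParent then has something to center on. The message text and style flags are placeholders.

    import wx

    app = wx.App(False)
    frame = wx.Frame(None, title="Demo")
    frame.Show()

    # The first argument is the parent; without it, centering on the frame can't work
    dlg = wx.MessageDialog(frame, "Message body", "Caption", wx.OK)
    dlg.CenterOnParent()
    dlg.ShowModal()
    dlg.Destroy()
    app.MainLoop()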
1
0
0
I've tried setting dlg.CenterOnParent() but that doesn't work. I assume it's because my MessageDialog is not setting the frame as the parent. In that case how would I do this?
How do I center a wx.MessageDialog to the frame?
1.2
0
0
176
11,284,600
2012-07-01T18:29:00.000
1
0
0
0
python,django
11,284,649
1
false
1
0
For example: field_name_exists = field_name in ModelName._meta.get_all_field_names()
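Wrapped up as a helper, using the same _meta API the answer shows. Note that get_all_field_names() belongs to the Django versions of this era; later releases replaced it with [f.name for f in Model._meta.get_fields()].

    def has_field(model_class, field_name):
        """True if the model defines a field with this name."""
        return field_name in model_class._meta.get_all_field_names()

    # has_field(ModelName, "title")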
1
0
0
Given a django class and a field name how can you test to see whether the class has a field with the given name? The field name is a string in this case.
Test for existence of field in django class
0.197375
0
0
63
11,285,049
2012-07-01T19:36:00.000
1
0
1
0
python,algorithm
11,285,066
4
false
0
0
Use frozensets instead, and add them to a set.
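Spelled out: frozensets are hashable, so they can act as order-insensitive keys, and itertools.combinations generates the candidate pairs. A sketch using the question's numbers:

    from itertools import combinations

    setslist = [(2, 1)]
    seen = set(frozenset(p) for p in setslist)

    for pair in combinations([1, 2, 3], 2):   # (1,2), (1,3), (2,3)
        if frozenset(pair) not in seen:       # (1,2) matches the existing (2,1)
            seen.add(frozenset(pair))
            setslist.append(pair)

    print(setslist)   # [(2, 1), (1, 3), (2, 3)]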
1
0
0
I have to create unique, unordered sets of 2 elements each from a list of numbers, then insert each set into a list. E.g.: setslist = [(2,1)], uniquenumbers = [1,2,3], unique sets: (1,2), (2,3), (1,3). Insert each set into setslist if it doesn't already exist. (Sets are unordered, so (1,2) is the same as (2,1).) Final setslist = [(2,1),(2,3),(1,3)]. What is the most optimized solution in Python?
add unique unordered sets to list
0.049958
0
0
135
11,286,809
2012-07-02T00:56:00.000
0
0
0
0
python,google-app-engine,youtube,youtube-api,jinja2
64,008,461
3
false
1
0
Use this code when getting the embed link from a list value. In the template, inside the iframe, use: src="{{results[0].video_link}}" where "video_link" is the field name.
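A hedged sketch of the server-side piece: turning a pasted watch URL into an embeddable player URL before handing it to the template. The regex covers only the common youtube.com/watch and youtu.be forms.

    import re

    def to_embed(url):
        m = re.search(r"(?:v=|youtu\.be/)([\w-]{11})", url)
        return "https://www.youtube.com/embed/%s" % m.group(1) if m else None

    # The template can then render:  <iframe src="{{ video_link }}"></iframe>
    print(to_embed("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))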
1
4
0
I have a website that gets a lot of links to youtube and similar sites and I wanted to know if there is anyway that I can make a link automatically appear as a video. Like what happens when you post a link to a video on facebook. You can play it right on the page. Is there a way to do this without users actually posting the entire embed video HTML code? By the way I am using google app engine with python and jinja2 templating.
how to make youtube videos embed on your webpage when a link is posted
0
0
0
4,555
11,287,466
2012-07-02T03:23:00.000
1
0
1
1
python,deployment
11,287,479
1
false
0
0
Copy the directory containing the virtualenv, excluding all virtualenv-generated files. On the destination machine, create a virtualenv over the directory, then:

    source bin/activate
    pip install -r requirements.txt

The first step is simplified if you use version control; you simply clone (Mercurial or Git) or checkout (Subversion) the code. All the virtualenv-generated files should have been in the appropriate ignore file (.hgignore, .gitignore, .svnignore).
1
1
0
I'm a little confused on the deployment process for Python. Let's say you create a brand new project with virtualenv source bin/activate pip install a few libraries write a simple hello world app pip freeze the dependencies When I deploy this code into a machine, do I need first make sure the machine is sourced before installing dependencies? I don't mean to sound like a total noob but in the PHP world, I don't have to worry about this because it's already part of the project. All the dependencies are registered with the autoloader in place. The steps would be: rsync the files (or any other method) source bin/activate pip install the dependencies from the pip freeze output file It feels awkward, or just wrong and very error prone. What are the correct steps to make? I've searched around but it seems many tutorials/articles make an assumption that anyone reading the article has past python experience (imo). UPDATE: I've should have mentioned that I'm trying to understand how it hooks up with Apache.
deploying a Python application from a PHP developer
0.197375
0
0
230
11,287,585
2012-07-02T03:48:00.000
0
0
1
1
python,lxml,virtualenv,pip
23,508,071
3
false
0
0
Just wanted to add that emeraldo.cs's answer is correct, but you also have to copy the lxml related files that exist in the site-packages root. Once all the files are copied, pip will think it's installed.
1
6
0
I've recently started using virtualenv, and would like to install lxml in this isolated environment. Normally I would use the windows binary installer, but I want to use lxml in this virtualenv (not globally). Pip install does not work for lxml, so I'm at a loss for what I can do. I've read that creating symlinks may work, although I unfamiliar with how symlinks work and what files I should be creating them for. Does anyone else know of any methods to install lxml in a virtualenv on Windows? If creating symlinks is the only method that works I'm definitely willing to learn if someone can point me in the right direction.
Installing lxml in virtualenv for windows
0
0
0
2,061
11,287,862
2012-07-02T04:40:00.000
3
0
1
1
python,vim,macvim
11,288,495
2
true
0
0
For Python 3, just execute :!python3 % Furthermore, you might also want to map it to a hotkey in your settings, like what I did: noremap <D-r> <esc>:w<CR>:!python3 %<CR> So you can just press Command+r to execute the current code with Python 3 anytime (it will be saved automatically).
1
0
0
I just set up IDE env for Python 3. I was wondering how I can run the file being currently edited in vim. I remembered that the command was ":python %", but it did not work for Python 3. Thank you very much.
In Macvim with +python3 supported, which command should I use to execute the current file itself?
1.2
0
0
428
11,288,320
2012-07-02T05:46:00.000
3
0
0
0
django,django-admin,python-2.7,django-urls,django-1.4
11,288,438
2
false
1
0
Just put your desired URL mapping before the admin mapping in your root urls.py. The first match for the request will be taken, because Django goes through the URL mappings from top to bottom. Just remember not to use a URL the admin itself normally needs or provides, because the admin will never match it with a custom mapping in front of it. HTH!
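What that ordering looks like in a Django 1.4 root urls.py; the view path and URL name are invented for illustration, and naming the pattern makes it reversible with reverse('admin_my_url'), which the update asks about.

    from django.conf.urls import patterns, include, url
    from django.contrib import admin

    admin.autodiscover()

    urlpatterns = patterns('',
        # Must come before the admin catch-all so it matches first
        url(r'^admin/my_url/$', 'myapp.views.my_view', name='admin_my_url'),
        url(r'^admin/', include(admin.site.urls)),
    )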
1
7
0
I am using django 1.4 and Python 2.7. I just have a simple requirement where I have to add a new URL to the django admin app. I know how to add URLs which are for the custom apps but am unable figure out how to add URLs which are of the admin app. Please guide me through this. Basically the full URL should be something like admin/my_url. UPDATE I want a way after which I can as well reverse map the URL using admin.
New URL on django admin independent of the apps
0.291313
0
0
10,538
11,288,956
2012-07-02T06:51:00.000
1
0
0
0
python,arduino,xbee,osc,max-msp-jitter
31,403,175
2
true
0
0
On the Python side, install pyOSC; there are example files within the package. On the Max/MSP side, use udpsend or udpreceive. Remember to match the IP and port number. Without specific details it is hard to answer your question more precisely, but I hope this helps.
1
0
0
Can anyone help me with a problem routing OSC messages? I'm using Python, MAX/MSP with OSC to communicate between Arduino Xbees. I hope there's someone out there!
OSC-address in MAX/MSP
1.2
0
0
713
11,289,652
2012-07-02T07:50:00.000
1
0
0
0
python,image,compare
11,289,709
3
false
0
0
If you want to check whether they are binary equal, you can compute a checksum of each and compare them. If you want to check whether they are similar in some other way, it will be more complicated and definitely would not fit into a simple answer posted on Stack Overflow. It just depends on how you define similarity, but in any case it would require good programming skills and a lot of code.
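The binary-equality check from the answer, as a standard-library sketch (the file names are placeholders):

    import hashlib

    def digest(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    identical = digest("a.png") == digest("b.png")   # True only for byte-identical files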
1
0
1
I'm looking for an algorithm to compare two images (I work with Python). I found the PIL library, numpy/scipy and opencv. I know how to transform to greyscale or binary, make a histogram, and so on; that's OK, but I don't know what I have to do with the two images to say "yes they're similar / they're probably similar / they don't match". Do you know the right way to go about it?
How to compare image with python
0.066568
0
0
1,355
11,289,670
2012-07-02T07:52:00.000
0
0
1
0
python,numpy,matplotlib,ipython,pandas
11,616,216
3
false
0
0
This happened to me too. It works if you right click and 'Run As Administrator'
1
0
1
I have recently installed numpy, due to ease, using the exe installer for Python 2.7. However, when I attempt to install IPython, Pandas or Matplotlib using the exe file, I consistently get a variant of the following error right after the installation commences (pandas in the following case): pandas-0.8.0.win32-py2.7[1].exe has stopped working. The problem caused the program to stop working correctly: Windows will close the program and notify you whether a solution is available. NumPy worked just fine when I installed it. This is extremely frustrating and I would appreciate any insight. Thanks
.EXE installer crashes when installing Python modules: IPython, Pandas and Matplotlib
0
0
0
1,016
11,291,016
2012-07-02T09:28:00.000
1
0
1
0
python,matplotlib,import
17,303,163
1
false
0
0
This might be very late :-) but to those browsing - Pyscripter can connect to the internal bundled python installation and to your global installation (as a remote engine). Packages imported to the global will not be available in the internal engine. Just connect to remote, and you should be good.
1
1
0
I am using PyScripter as an IDE for Python coding. I installed matplotlib on Windows. Now I have written a program which imports matplotlib, but PyScripter doesn't import this lib. How can I import matplotlib using PyScripter?
Importing matplotlib using PyScripter
0.197375
0
0
924
11,291,975
2012-07-02T10:36:00.000
0
0
0
0
python,django,filesystems,storage
22,918,886
3
false
1
0
You can also use a custom storage manager that, when a file is small, saves it in a database model with a binary field, keeping only files larger than about 16 MB on the filesystem; that way you don't need to use another database.
1
4
0
Scenario: A Django app generates a lot of small files related to objects in different models. I've done a lot of search for avoiding generation of large number of files in a single directory, when using the default Filestorage. Is django-fsfield the only open source solution for this? Anything else you would recommend for fixing the large number of inodes in a dir? Thank you!
Django filesystem storage for large number of files
0
0
0
1,387
11,294,077
2012-07-02T12:52:00.000
12
0
0
1
python,unix,filesystems,chmod,umask
11,294,312
1
true
0
0
There is no real inconsistency, as the relation between umask and chmod can be written down purely with equations. Apparently, umask sets the opposite of chmod; it was created like this back in the old days. Example: 022 (the usual default umask) creates 755. It works like this: 7 - 0 = 7 becomes the first digit, and 7 - 2 = 5 becomes the second and third digits. Using this example, umask 777 creates a file with chmod 000, and umask 112 will be equal to chmod 664 (new regular files start from a base of 666 rather than 777, which is why 112 yields 664 here). As far as I know, this happened because the umask command was originally created to indicate which permission bits the file will NOT have after it's created (hence the inversion). While it can be annoying, it's really not hard to get used to it. Just think how you would chmod your files, subtract each digit you want from 7, and you will get the umask value. Or, when you are at the IDE writing your code, don't use umask; rather, create the file (with the default umask, of course) and then use os.chmod() in Python instead.
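The arithmetic above, checked from Python; demo.txt is a scratch file name.

    import os
    import stat

    old = os.umask(0o022)   # returns the previous mask
    fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o666)
    os.close(fd)
    mode = stat.S_IMODE(os.stat("demo.txt").st_mode)
    print(oct(mode))        # 0o644: the requested 0o666 with the 0o022 bits cleared
    os.umask(old)           # restore the original mask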
1
12
0
In most places, permissions are defined as an octal number in the format of 0777. But UNIX's umask command (and thus os.umask()) needs 0o000 to produce the permission bits 0o777, and 0o022 equals 0o755 in my understanding. I heard that UNIX's umask is inverted for some reason, and I do not understand the reason behind it. Could someone please explain this inconsistency?
Why is the argument of os.umask() inverted? (umask 0o000 makes chmod 0o777)
1.2
0
0
5,228
11,295,030
2012-07-02T13:49:00.000
4
0
1
0
python,module,osx-snow-leopard,pyobjc
11,295,054
2
true
0
0
Python C extension modules like objc cannot be re-used between python versions. You'll have to install the objc module for 2.7 separately. Generally, different python installations (such as 2.6 or 2.7, or 3.2) use separate module import locations, and you normally install extensions per python setup.
1
1
0
My system: Mac OS X 10.6.8, gcc 4.2, Python 2.7, Xcode 3.2.3. I use Python 2.7 and I got an error when I tried to do import objc; it returns: ImportError: No module named objc. It looks like the objc module is not there, but actually I have the objc module installed already. Snow Leopard has pyobjc preinstalled, and I have also checked this using Python 2.6 (I have Python 2.7 and 2.6 on my Mac). So if I invoke import objc using Python 2.6, I get no error, which means objc exists and I can use that module without problems... but if I import it using Python 2.7, I get the ImportError: No module named objc error. Does anyone have any solution? FYI, Python 2.6 comes preinstalled with OS X while 2.7 was manually installed. I've been using 2.7 for a couple of months without problems.
Can import objc module in python 2.6 but NOT in python 2.7
1.2
0
0
1,274
11,295,714
2012-07-02T14:30:00.000
0
0
1
0
python,string,algorithm
11,295,792
8
false
0
0
Can't you just scan from the last character to the first and stop when the next char doesn't equal the previous one? Then split at that index.
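That scan, written out (the function keeps the asker's "seperate" spelling); it works on both of the question's examples:

    def seperate(s):
        i = len(s) - 1
        while i > 0 and s[i - 1] == s[-1]:   # walk left while the trailing digit repeats
            i -= 1
        return [s[:i], s[i:]]

    print(seperate("44664212666666"))   # ['44664212', '666666']
    print(seperate("58834888888888"))   # ['58834', '888888888']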
1
0
0
I want to know how to split a string like 44664212666666 into [44664212, 666666], or 58834888888888 into [58834, 888888888], without knowing where the first occurrence of the last recurring digit occurs. So, passing it to a function, say seperate(str) --> [non_recurring_part, end_recurring_digits].
Splitting a number pattern
0
0
0
121
11,296,583
2012-07-02T15:23:00.000
2
0
0
1
python,cmd
11,296,830
3
true
0
0
To append the Python directory to your path: path=%PATH%;c:\Python27 Run as a normal user. You should also double check that c:\python27\python.exe actually exists.
3
1
0
I'm on Windows 7 and if I type "python" in the command prompt as my regular user, I get the old, "'python' is not recognized as an internal or external command, operable program or batch file." but if I open the prompt as the administrator, python initiates like it should. The very first thing I did was edit the PATH variables through Control Panel, which seemed to add the environment variable, but there's a disconnect between doing this and cmd recognizing that I've done it. I have changed the permissions on the Python27 folder to allow full access to all users, I've tried adding a pythonexe variable and add that to the PATH, as another StackOverflow question suggested. When I type PATH = C:\Python27 into cmd as a regular user, that also wont work. and if I type in set PATH, "C:\Python27;" is in the returned line. I'm fairly certain it's a permission problem, which is the only reason I've re-posted my own version of this age old question. How do I run Python, given this error and these circumstances?
"python" only runs from command prompt as Admin
1.2
0
0
2,992
11,296,583
2012-07-02T15:23:00.000
1
0
0
1
python,cmd
24,021,057
3
false
0
0
So, one of the things I noticed when I had that problem is that the USERNAME environment variable was only set for SYSTEM, which is the administrator's environment. I simply looked up the username in the regular command prompt using echo %USERNAME%, and appended a semicolon and the username to the %USERNAME% environment variable. That fixed the issue. Everything you can do in the administrator command line can now be done in the regular user command line as well.
3
1
0
I'm on Windows 7 and if I type "python" in the command prompt as my regular user, I get the old, "'python' is not recognized as an internal or external command, operable program or batch file." but if I open the prompt as the administrator, python initiates like it should. The very first thing I did was edit the PATH variables through Control Panel, which seemed to add the environment variable, but there's a disconnect between doing this and cmd recognizing that I've done it. I have changed the permissions on the Python27 folder to allow full access to all users, I've tried adding a pythonexe variable and add that to the PATH, as another StackOverflow question suggested. When I type PATH = C:\Python27 into cmd as a regular user, that also wont work. and if I type in set PATH, "C:\Python27;" is in the returned line. I'm fairly certain it's a permission problem, which is the only reason I've re-posted my own version of this age old question. How do I run Python, given this error and these circumstances?
"python" only runs from command prompt as Admin
0.066568
0
0
2,992
11,296,583
2012-07-02T15:23:00.000
0
0
0
1
python,cmd
37,835,125
3
false
0
0
I've experienced a similar issue in the past and found that also checking the order of the values in the environmental/system variables matters as well.
3
1
0
I'm on Windows 7 and if I type "python" in the command prompt as my regular user, I get the old, "'python' is not recognized as an internal or external command, operable program or batch file." but if I open the prompt as the administrator, python initiates like it should. The very first thing I did was edit the PATH variables through Control Panel, which seemed to add the environment variable, but there's a disconnect between doing this and cmd recognizing that I've done it. I have changed the permissions on the Python27 folder to allow full access to all users, I've tried adding a pythonexe variable and add that to the PATH, as another StackOverflow question suggested. When I type PATH = C:\Python27 into cmd as a regular user, that also wont work. and if I type in set PATH, "C:\Python27;" is in the returned line. I'm fairly certain it's a permission problem, which is the only reason I've re-posted my own version of this age old question. How do I run Python, given this error and these circumstances?
"python" only runs from command prompt as Admin
0
0
0
2,992
11,298,097
2012-07-02T17:06:00.000
1
0
1
0
python,dataframe,concat,pandas
11,312,776
1
false
0
0
If the Series are in a dict data, you need only do: frame = DataFrame(data) That puts things into a DataFrame and unions all the dates. If you want to fill values forward, you can call frame = frame.fillna(method='ffill').
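Sketched with two toy Series: DataFrame unions the date indexes, and fillna carries the last known value forward (the column names are invented).

    import pandas as pd

    a = pd.Series([1.0, 2.0], index=pd.to_datetime(["2012-01-01", "2012-01-03"]))
    b = pd.Series([10.0],     index=pd.to_datetime(["2012-01-02"]))

    frame = pd.DataFrame({"a": a, "b": b})   # union of the dates; gaps become NaN
    frame = frame.fillna(method="ffill")     # the previous date's value fills each gap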
1
2
1
I am using pandas in python. I have several Series indexed by dates that I would like to concat into a single DataFrame, but the Series are of different lengths because of missing dates etc. I would like the dates that do match up to match up, but where there is missing data for it to be interpolated or just use the previous date or something like that. What is the easiest way to do this?
concatenating TimeSeries of different lengths using Pandas
0.197375
0
0
1,369
11,299,182
2012-07-02T18:29:00.000
0
0
0
0
python,sqlalchemy,pyramid
11,300,227
1
false
1
0
The best way to do this that I know of is to use the same database with multiple schemas. Unfortunately, I don't think this works with MySQL. The idea is that you pool engine connections to the same database, and then, when you know which user is associated with the request, you switch schemas for that connection.
1
0
0
We are using Python Pyramid with SQLAlchemy and MySQL to build a web application. We would like to have user-specific database connections, so every web application user has their own database credentials. This is primarily for security reasons, so each user only has privileges for their own database content. We would also like to maintain the performance advantage of connection pooling. Is there a way we can setup a new engine at login time based on the users credentials, and reuse that engine for requests made by the same user?
How to manage user-specific database connections in a Pyramid Web Application?
0
1
0
343
11,300,745
2012-07-02T20:31:00.000
3
0
0
0
python,pygame
11,300,894
4
false
0
1
You're updating your screen twice per loop: once after drawing the first text ("TextA") and once after drawing the second text ("Begin"). After your first update, only the first text appears, so you can't see the "Begin" text between the first and second updates. This causes flickering. Update your screen after drawing everything; in your case, remove the first pygame.display.update().
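The question's loop with the fix applied (and the fonts hoisted out of the loop, which also helps): everything is drawn first, then the display is updated exactly once per frame.

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((400, 300))
    font_a = pygame.font.SysFont("calibri", 40)   # create fonts once, outside the loop
    font_b = pygame.font.SysFont("calibri", 20)
    clock = pygame.time.Clock()

    for _ in range(100):   # stand-in for the question's while loop
        screen.fill((0, 0, 0))
        screen.blit(font_a.render("TextA", True, (255, 255, 255)), (0, 0))
        screen.blit(font_b.render("Begin", True, (255, 255, 255)), (50, 50))
        pygame.display.update()   # one update per frame: no flicker
        clock.tick(10)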
1
2
0
Using PyGame, I get flickering things. Boxes, circles, text, it all flickers. I can reduce this by increasing the wait between loop iterations, but I thought maybe I could eliminate it by drawing everything to the screen at once, instead of doing everything individually. Here's a simple example of what happens to me:

    import pygame, time
    pygame.init()
    screen = pygame.display.set_mode((400, 300))
    loop = "yes"
    while loop=="yes":
        screen.fill((0, 0, 0), (0, 0, 400, 300))
        font = pygame.font.SysFont("calibri",40)
        text = font.render("TextA", True,(255,255,255))
        screen.blit(text,(0,0))
        pygame.display.update()
        font = pygame.font.SysFont("calibri",20)
        text = font.render("Begin", True,(255,255,255))
        screen.blit(text,(50,50))
        pygame.display.update()
        time.sleep(0.1)

The "Begin" button flickers for me. It could just be my slower computer, but is there a way to reduce or eliminate the flickering? In more complex things I'm working on, it gets really bad. Thanks!
Update display all at one time PyGame
0.148885
0
0
5,741
11,300,979
2012-07-02T20:51:00.000
1
0
0
0
python,xml,xsd,dtd,elementtree
11,301,443
1
false
0
0
I would try using the lxml library; it supports etree representations and validation.
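A minimal sketch with lxml (the file names are illustrative); lxml also offers etree.DTD for DTD validation:

from lxml import etree

schema = etree.XMLSchema(etree.parse('schema.xsd'))   # compile the XSD
doc = etree.parse('document.xml')
if schema.validate(doc):
    print('valid')
else:
    print(schema.error_log)                           # details of each violation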
1
1
0
Am I correct in that ElementTree does not support DTD or XSD? Is there a means of plugging anything into ElementTree to support validation, preferrably via XML Schema?
Validating XML parsed with ElementTree, possible?
0.197375
0
1
785
11,302,656
2012-07-02T23:40:00.000
3
1
1
0
python,c,shared-memory
11,305,191
4
false
0
0
If you don't want pickling, multiprocessing.sharedctypes might fit. It's a bit low-level, though; you get single values or arrays of specified types. Another way to distribute data to child processes (one way) is multiprocessing.Pipe. That can handle Python objects, and it's implemented in C, so I cannot tell you whether it uses pickling or not.
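A minimal sketch using the synchronized Value/Array wrappers that multiprocessing builds on top of sharedctypes; the payload lives in shared memory, so reads don't pickle:

from multiprocessing import Process, Value, Array

def reader(n, a):
    # the child reads straight out of shared memory
    print(n.value, a[:])

if __name__ == '__main__':
    num = Value('d', 3.14)           # one shared double
    arr = Array('i', [1, 2, 3, 4])   # a shared array of ints
    p = Process(target=reader, args=(num, arr))
    p.start()
    p.join()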
1
15
0
I'm trying to figure out a way to share memory between python processes. Basically there is are objects that exists that multiple python processes need to be able to READ (only read) and use (no mutation). Right now this is implemented using redis + strings + cPickle, but cPickle takes up precious CPU time so I'd like to not have to use that. Most of the python shared memory implementations I've seen on the internets seem to require files and pickles which is basically what I'm doing already and exactly what I'm trying to avoid. What I'm wondering is if there'd be a way to write a like...basically an in-memory python object database/server and a corresponding C module to interface with the database? Basically the C module would ask the server for an address to write an object to, the server would respond with an address, then the module would write the object, and notify the server that an object with a given key was written to disk at the specified location. Then when any of the processes wanted to retrieve an object with a given key they would just ask the db for the memory location for the given key, the server would respond with the location and the module would know how to load that space in memory and transfer the python object back to the python process. Is that wholly unreasonable or just really damn hard to implement? Am I chasing after something that's impossible? Any suggestions would be welcome. Thank you internet.
Shared memory between python processes
0.148885
0
0
19,934
11,304,019
2012-07-03T03:19:00.000
0
0
0
0
mysql-python
12,535,972
1
false
0
0
MacPorts' py27-mysql, MySQL-python, and MySQLdb are all synonyms for the same thing. If you successfully installed py27-mysql, you should not need anything else, and it's possible you've messed up your python site-packages. Also, make sure you are invoking the right python binary, i.e. MacPorts' python27 and not the one that comes with Mac OS X.
1
0
0
I have successfully installed py27-mysql from MacPorts and MySQL-python-1.2.3c1 on a machine running Snow Leopard. Because I have MySQL 5.1.48 in an odd location (/usr/local/mysql/bin/mysql/), I had to edit the setup.cfg file when I installed mysql-python. However, now that it's installed, I'm still getting the error "ImportError: No module named MySQLdb" when I run "import MySQLdb" in python. What is left to install? Thanks.
setting up mysql-python on Snow Leopard
0
1
0
67
11,304,235
2012-07-03T03:56:00.000
1
0
0
1
python,google-app-engine,ip,blacklist,denial-of-service
11,304,280
1
false
1
0
You could see the IP on the Logs page in the admin panel. Click the 'plus' icon next to a log item in order to expand it and view request data.
1
0
0
I'm getting a lot of requests to my appengine app from a malicious user and I suspect it might be an attempt at a DOS attack. I need to add their IP address to blacklists on GAE. However when I look at self.request.remote_addr all I get is my own IP address. How can I get the remote IP of the client that is actually sending me these requests?
Need to get IP address to add to GAE blacklist
0.197375
0
0
283
11,304,679
2012-07-03T04:53:00.000
1
0
0
1
python,google-app-engine,task-queue
11,308,295
2
false
1
0
With pull queues, you can use modify_task_lease to set the ETA relative to the current time (even if you do not currently have the task leased). With push queues, you can't change the ETA of a task once it is enqueued. Either way, each task's name remains unavailable for seven days after the task is deleted or executed.
2
2
0
Is it possible to update an AppEngine task in the task queue? Specifically, changing the eta property of the task to make it run at a different time? In my scenario, each item in my datastore has an associated task attached to it. If the element is updated, the task needs to be updated with a new eta. I currently set the name of the task explicitly as the id of the item using name=item.key().id() so that I can uniquely refer to the task. When the task is called and deleted, the name doesn't get freed immediately (I think). This causes issues because I need to re-add the task as soon as it gets executed.
Update App Engine Tasks?
0.099668
0
0
294
11,304,679
2012-07-03T04:53:00.000
1
0
0
1
python,google-app-engine,task-queue
11,313,223
2
true
1
0
So I resolved this in the following way: I created an entry in my Model for a task_name. When I create the element and add a new task, I allow app engine to generate an automated, unique name for the task then retrieve the name of that task and save it with the model. This allows me to have that reference to the task. When I need to modify the task, I simply delete the existing one, create a new one with the new eta and then save the new task's name to the model. This is working so far, but there might be issues in the future regarding tasks not being consistent when the Task.add() function returns.
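A hedged sketch of that flow with the taskqueue API (the model field, worker URL, and queue name are illustrative):

from google.appengine.api import taskqueue

def reschedule(item, new_eta):
    # drop the old task by its stored name, then enqueue a replacement
    if item.task_name:
        taskqueue.Queue('default').delete_tasks(taskqueue.Task(name=item.task_name))
    task = taskqueue.add(url='/worker', params={'id': item.key().id()}, eta=new_eta)
    item.task_name = task.name   # App Engine generated a fresh unique name
    item.put()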
2
2
0
Is it possible to update an AppEngine task in the task queue? Specifically, changing the eta property of the task to make it run at a different time? In my scenario, each item in my datastore has an associated task attached to it. If the element is updated, the task needs to be updated with a new eta. I currently set the name of the task explicitly as the id of the item using name=item.key().id() so that I can uniquely refer to the task. When the task is called and deleted, the name doesn't get freed immediately (I think). This causes issues because I need to re-add the task as soon as it gets executed.
Update App Engine Tasks?
1.2
0
0
294
11,305,271
2012-07-03T06:00:00.000
11
0
1
0
python,syntax
11,305,344
3
false
0
0
Such a regular expression cannot exist, because regular expressions are, by definition, not powerful enough to recognize Turing complete languages (such as python).
1
4
0
I want to write a python code generator, and it would be helpful if I had the regular expression that describes all valid python programs. Does such a regular expression exist? What is it?
Regular expression for python syntax
1
0
0
217
11,307,928
2012-07-03T09:18:00.000
1
0
0
0
python,django,linux,sqlite,ubuntu-10.04
11,308,029
1
true
1
0
The exception says no such table: search_keywords, which is quite self-explanatory: there is no database table with that name. So:

1. You may be using a relative path to the db file in settings.py, which resolves to a different db depending on where you execute the script. Try an absolute path and see if it helps.
2. You may not have synced your models with the database. Run manage.py syncdb to generate the database tables.

A sketch of the first fix is below.
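For the first point, this is what an absolute path looks like in Django 1.x settings (the db file name is illustrative):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': '/home/pooja/Desktop/mysite/db.sqlite3',  # absolute, not relative
    }
}

Then run python manage.py syncdb once so the search_keywords table exists.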
1
1
0
Now on writing the path as sys.path.insert(0,'/home/pooja/Desktop/mysite'), it ran fine, asked me for the word to be searched, and gave this error:

Traceback (most recent call last):
  File "call.py", line 32, in
    s.save()
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py", line 463, in save
    self.save_base(using=using, force_insert=force_insert, force_update=force_update)
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py", line 524, in save_base
    manager.using(using).filter(pk=pk_val).exists())):
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 562, in exists
    return self.query.has_results(using=self.db)
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 441, in has_results
    return bool(compiler.execute_sql(SINGLE))
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 818, in execute_sql
    cursor.execute(sql, params)
  File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 40, in execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python2.6/dist-packages/django/db/backends/sqlite3/base.py", line 337, in execute
    return Database.Cursor.execute(self, query, params)
django.db.utils.DatabaseError: no such table: search_keywords

Please help!!
error in accessing table created in django in the python code
1.2
1
0
768
11,313,950
2012-07-03T15:13:00.000
20
0
1
0
python
11,313,999
2
true
0
0
The complexity of comparing two lists is O(n) if both lists have length n, and O(1) if the lists have different lengths.
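A small illustration of both cases:

a = list(range(1000000))
b = list(range(1000000))
c = a[:-1]
print(a == c)  # False immediately: the lengths differ, an O(1) reject
print(a == b)  # True, after an O(n) element-by-element walk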
1
7
0
Assume the lists contain hashable objects only. BTW, I'm not sure if this question makes sense as I am a complete noob when it comes to complexity and academic stuff.
What is the order of complexity of comparing two python lists?
1.2
0
0
3,482
11,314,253
2012-07-03T15:29:00.000
0
0
1
1
c++,python,windows,linux,porting
11,347,955
1
true
0
1
I think your choice here depends on your goals for compiling under Windows. Are you preparing to involve other developers who can choose their development platform? Do you want to use a different compiler for additional warnings generation? Are you looking to deploy the application on the Windows platform? Asking these kinds of questions should help you make a more informed decision. Here are some suggestions:

- It doesn't hurt to try MSVC. The 2010 Express edition is the last free edition to support standard C++ development; future Express editions are for "Metro" apps only. I would weigh that against your goals for Windows development and choose accordingly.
- For a cross-platform build, see if you can implement a standardized build system such as CMake or SCons.
- I wouldn't ship with dependencies, regardless of the final decision. It is standard practice for open source to require developers to download dependencies individually. Just be sure to include version information for anything where the current stable release is not backwards compatible with your application. (Or even better, FIX those problems so you get the benefit of the latest fixes in third-party code.)
- Python, at the very least, should be the responsibility of the developer. It is meant to be installed, and pywin32 extensions will register COM items in the system registry on a Windows installation.

As far as recruiting open source developers goes, you may find that requiring MinGW on a developer's machine will discourage some dedicated MSVC users from working on the project.
1
2
0
I have a larger code running in Linux, written in C++ (C++11) and Python and using numerous libraries (VTK, boost, PyQt, OpenGL), which compiles to python extension modules (and plugins of those modules) and pure python modules (the main program is a python script). The code is cross-platform (with a few exceptions, like dlopen and gettimeofday, which can be replaced by Windows equivalents via #ifdef's) and compiler-agnostic (it compiles with -ansi, and a few compiler-specific things like __attribute__ can also, hopefully, be replaced if needed). I am considering attempting compilation on Windows, but I am totally lost on how I should proceed (I am fairly experienced with development in Linux, but I have not used Windows since the late 90s). Should I go for the MinGW or MSVC compiler? Would I be better off cross-compiling? Do I need to install dependencies "by hand" by downloading installers from the web; do I need to compile those as well? Are there standard paths for include files, or are all of them to be detected? If I ever manage to compile it, how can I make some sort of package (it is a bundle of pure-python modules and shared libs)? I assume I am not the first one who is trying to see how it works under Windows (I reckon I am spoiled by package managers and all the dev-friendly things in Linux); perhaps there is a helpful reference somewhere.
compiling+distributing Linux code on Windows
1.2
0
0
534
11,316,369
2012-07-03T17:47:00.000
4
0
1
1
python,process,subprocess,popen
11,316,397
1
false
0
0
Terminating the parent process does not terminate child processes in Unix-like operating systems, so you don't need to do anything special. Just start your subprocesses with subprocess.Popen and terminate the main process. The orphaned processes will automatically be adopted by init.
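A minimal sketch (the script name is made up):

import subprocess
import sys

# detach the child's stdio from ours so nothing ties it to this process
devnull = open('/dev/null', 'w')
subprocess.Popen(['python', 'long_task.py'], stdout=devnull, stderr=devnull)
sys.exit(0)  # the parent exits; init adopts the still-running child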
1
2
0
I need to spawn off a process in Python that allows the calling process to exit while the child is still running. What is an effective way to do this? Note: I'm running in a UNIX environment.
Spawning a non-child process in python
0.664037
0
0
1,204
11,316,694
2012-07-03T18:10:00.000
1
1
1
0
python,psse
26,021,216
4
false
0
0
You can use the API routine called "LOAD_CHNG_4" (search for this routine in the API.pdf documentation). This routine belongs to the set of load data specification functions. It can be used to modify the data of an existing load in the working case.
1
1
0
I am trying to have the bus loads in PSS/E change by using a Python program. So I am trying to write a script in Python with which I could change loads to different values between two buses in PSS/E.
Python and PSS/E
0.049958
0
0
5,345
11,316,738
2012-07-03T18:13:00.000
3
0
1
0
python,hash
11,316,820
4
false
0
0
There are plenty of built-in Python types that are not hashable. So it's perfectly Pythonic for a class not to be hashable. The example you give is a good example of the problems of creating a hashable class, because for an object to be usable as a key in a dictionary, it must implement both __hash__() and __eq__(). If you can't reliably determine equality, then hashability has no real benefit anyway and implementing it is wasted effort.
3
3
0
I am arguing with a colleague of mine, whether all Python classes really need to be hashable. We have this class that holds symbolic expressions (something similar to SymPy). My argument is that since we cannot compare two expressions for equality, hashing should not be allowed. For example the expressions '(x)' and '(1*x)' might compare equal, whereas 'sqrt(x*x*x)' and 'abs(x)*sqrt(x)' might not. Therefore, 'hash()' should throw an error when called with a symbolic expression. His argument is that you should be able to use all classes as keys in dictionaries and sets. Therefore, they must also be hashable. (I'm putting words in his mouth now, he would have explained it better.). Who is right? Is it unpythonic or not to have classes that throw errors if you try to hash them?
Is it unpythonic to have classes that are not hashable?
0.148885
0
0
209
11,316,738
2012-07-03T18:13:00.000
4
0
1
0
python,hash
11,316,810
4
false
0
0
A hash function is only useful if you have a well-defined equality test and the information taken into account for equality tests is immutable. By default, all user-defined classes are compared by object identity, and they use the id() as hash value. If you don't override the == operator, there is rarely a reason to change this behaviour. If you do override ==, and the information considered in this operator is immutable (meaning it can't change during the lifetime of an instance), you can as well also define a hash function to make the instances hashable. From your question, I cannot quite tell if these conditions hold. It isn't "Pythonic" or "Unpythonic" to make a class hashable; the question is rather whether the semantics of a class allow hashing or not.
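A sketch of a class that meets both conditions: == defined over immutable data, with a hash consistent with it.

class Point(object):
    def __init__(self, x, y):
        self._x, self._y = x, y   # treated as immutable after construction

    def __eq__(self, other):
        return isinstance(other, Point) and \
               (self._x, self._y) == (other._x, other._y)

    def __hash__(self):
        # must agree with __eq__: equal points hash equally
        return hash((self._x, self._y))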
3
3
0
I am arguing with a colleague of mine, whether all Python classes really need to be hashable. We have this class that holds symbolic expressions (something similar to SymPy). My argument is that since we cannot compare two expressions for equality, hashing should not be allowed. For example the expressions '(x)' and '(1*x)' might compare equal, whereas 'sqrt(x*x*x)' and 'abs(x)*sqrt(x)' might not. Therefore, 'hash()' should throw an error when called with a symbolic expression. His argument is that you should be able to use all classes as keys in dictionaries and sets. Therefore, they must also be hashable. (I'm putting words in his mouth now, he would have explained it better.). Who is right? Is it unpythonic or not to have classes that throw errors if you try to hash them?
Is it unpythonic to have classes that are not hashable?
0.197375
0
0
209
11,316,738
2012-07-03T18:13:00.000
2
0
1
0
python,hash
11,316,809
4
false
0
0
It's certainly not unpythonic to have unhashable classes, although your reason isn't a usual one I'd give. The main reason a class might be unhashable is because it's mutable and so its core data is itself unhashable. This would be the case for classes that wrap a dict or list, for instance. I don't quite follow your logic on equality comparisons. You say that you can't compare expressions for equality, but then you say certain expressions might or might not compare equal. Can you or can you not compare them for equality? If you can't, it doesn't make sense to say they compare equal or unequal.
3
3
0
I am arguing with a colleague of mine, whether all Python classes really need to be hashable. We have this class that holds symbolic expressions (something similar to SymPy). My argument is that since we cannot compare two expressions for equality, hashing should not be allowed. For example the expressions '(x)' and '(1*x)' might compare equal, whereas 'sqrt(x*x*x)' and 'abs(x)*sqrt(x)' might not. Therefore, 'hash()' should throw an error when called with a symbolic expression. His argument is that you should be able to use all classes as keys in dictionaries and sets. Therefore, they must also be hashable. (I'm putting words in his mouth now, he would have explained it better.). Who is right? Is it unpythonic or not to have classes that throw errors if you try to hash them?
Is it unpythonic to have classes that are not hashable?
0.099668
0
0
209
11,316,815
2012-07-03T18:18:00.000
1
0
0
0
python,django,sorting,time-complexity,aggregation
11,317,758
3
true
1
0
The short answer is No. There is no guarantee that a Top-Ten-Of-Last-Year song was ever on a Top-Ten-Daily list (it's highly likely, but not guaranteed). The only way to get an absolutely-for-sure Top Ten is to add up all the votes over the specified time period, then select the Top Ten.
2
1
0
What I want to do: Calculate the most popular search queries for: past day, past 30 days, past 60 days, past 90 days, each calendar month, and for all time. My raw data is a list of timestamped search queries, and I'm already running a nightly cron job for related data aggregation so I'd like to integrate this calculation into it. Reading through every query is fine (and as far as I can tell necessary) for a daily tally, but for the other time periods this is going to be an expensive calculation so I'm looking for a way to use my precounted data to save time. What I don't want to do: Pull the records for every day in the period, sum all the tallies, sort the entire resulting list, and take the top X values. This is going to be inefficient, especially for the "all time" list. I considered using heaps and binary trees to keep realtime sorts and/or access data faster, reading words off of each list in parallel and pushing their values into the heap with various constraints and ending conditions, but this always ruins either the lookup time or the sort time and I'm basically back to looking at everything. I also thought about keeping running totals for each time period, adding the latest day and subtracting the earliest (saving monthly totals on the 1st of every month), but then I have to save complete counts for every time period every day (instead of just the top X) and I'm still looking through every record in the daily totals. Is there any way to perform this faster, maybe using some other data structure or a fun mathematical property that I'm just not aware of? Also, in case anyone needs to know, this whole thing lives inside a Django project.
Efficient way to calculate Top 10, or Top X, list, across multiple time periods
1.2
0
0
275
11,316,815
2012-07-03T18:18:00.000
0
0
0
0
python,django,sorting,time-complexity,aggregation
11,317,049
3
false
1
0
Could use the Counter() class, part of the high-performance container datatypes. Create a dictionary with all the searches as keys and a count of their frequency as values:

from collections import Counter

cnt = Counter()
for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']:
    cnt[word] += 1
print cnt
# Counter({'blue': 3, 'red': 2, 'green': 1})
2
1
0
What I want to do: Calculate the most popular search queries for: past day, past 30 days, past 60 days, past 90 days, each calendar month, and for all time. My raw data is a list of timestamped search queries, and I'm already running a nightly cron job for related data aggregation so I'd like to integrate this calculation into it. Reading through every query is fine (and as far as I can tell necessary) for a daily tally, but for the other time periods this is going to be an expensive calculation so I'm looking for a way to use my precounted data to save time. What I don't want to do: Pull the records for every day in the period, sum all the tallies, sort the entire resulting list, and take the top X values. This is going to be inefficient, especially for the "all time" list. I considered using heaps and binary trees to keep realtime sorts and/or access data faster, reading words off of each list in parallel and pushing their values into the heap with various constraints and ending conditions, but this always ruins either the lookup time or the sort time and I'm basically back to looking at everything. I also thought about keeping running totals for each time period, adding the latest day and subtracting the earliest (saving monthly totals on the 1st of every month), but then I have to save complete counts for every time period every day (instead of just the top X) and I'm still looking through every record in the daily totals. Is there any way to perform this faster, maybe using some other data structure or a fun mathematical property that I'm just not aware of? Also, in case anyone needs to know, this whole thing lives inside a Django project.
Efficient way to calculate Top 10, or Top X, list, across multiple time periods
0
0
0
275
11,318,320
2012-07-03T20:02:00.000
0
0
0
0
python
11,318,363
1
true
0
0
The easiest method would be to send back some sort of return code and when the client sees the return code from the server, it would throw the exception itself.
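A minimal sketch of the client side, assuming the server replies with a made-up 'AUTH_FAIL' token on a bad login:

import socket

def check_reply(reply):
    # 'AUTH_FAIL' is an invented protocol token for this sketch
    if reply == 'AUTH_FAIL':
        raise socket.error('authentication rejected by server')
    return reply

check_reply('OK')            # passes through untouched
# check_reply('AUTH_FAIL')   # would raise socket.error, catchable in try/except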
1
0
0
I have the following problem. I have a server application that accepts socket connections wrapped over SSL. The client sends a user and password over it; once on the server I check if the user/password is correct. If the user/password is wrong I want the server to send the client a socket.error. Right now the only idea that comes to mind is sending back "Wrong Password", but I think it is safer to use the built-in errors; this way I can wrap the code in a try/except statement. Is there any way to send a socket error from the Server to the Client?
Raising socket.error from client to server
1.2
0
1
227
11,319,116
2012-07-03T21:01:00.000
4
0
0
0
python,django,ide,eric-ide
11,370,886
2
false
1
0
In the Plugins dropdown menu click on Plugin repository... Make sure that the repository URL is: http://eric-ide.python-projects.org/plugins4/repository.xml and then click on update. The Django plugin will show up in the list of available plugins, click on it and then click the download button. That should download the plugin for you. After that you need to actually install the plugin as well: In the Plugins dropdown menu click on Install plugins. Then select your newly downloaded Django plugin and install it. Good luck!
1
0
0
I can't believe I have to ask this, but I have spent almost three hours looking for the answer. Anyway, I have Eric IDE 4 installed on my linux distro. I can't seem to download any plugins to the plugins repository. The only one I really want is the Django plugin so when I start a new project in Eric, the Django option shows. The plugin repository just shows me an empty folder for .eric4/eric4plugins and there's no follow up as to where I can get the plugins from somewhere else. Actually, there was a hinting at it on the Eric docs site, but what I ended up getting was the ENTIRE trunk for eric. And the plugins that came with the trunk are just the bare bones ones that ship with it. I didn't get the Django one and the documentation on the Eric site is seriously lacking and overly complex. Anyone know how I can just get the Django snap in?
How to install the Django plugin for the Eric IDE?
0.379949
0
0
4,909
11,319,890
2012-07-03T22:08:00.000
0
0
0
1
javascript,python,google-app-engine,nosql,multi-tenant
11,319,983
2
false
1
0
The overhead of making calls from App Engine to these external machines is going to be worse than the performance you're seeing now, I would expect. Why not just move everything to a non-App-Engine machine? I can't speak for Couch, but Mongo or Redis are definitely capable of handling serious load as long as they are set up correctly and with enough horsepower for your needs.
2
1
0
I'm running a multi-tenant GAE app where each tenant could have from a few thousand to 100k documents. At the moment I'm trying to make an MVC JavaScript client app (the admin part of my app, with spine.js) and I need CRUD endpoints and the ability to get a big amount of serialized objects at once. For this specific job App Engine is way too slow. I tried to store serialized objects in the blobstore, but between reading/writing and updating stuff in the blobstore it takes too much time and the app gets really slow. I thought of using a NoSQL db on an external machine to offload these operations from App Engine. A few options would be mongodb, couchdb or redis. But I am not sure about how well they perform with that much data and concurrent requests/inserts from different tenants. Let's say I have 20 tenants and each tenant has 50k docs. Are these dbs capable of handling this load? Is this even the right way to go?
key/value store with good performance for multiple tenants
0
1
0
239
11,319,890
2012-07-03T22:08:00.000
2
0
0
1
javascript,python,google-app-engine,nosql,multi-tenant
11,323,377
2
true
1
0
Why not use the much faster regular App Engine datastore instead of the blobstore? Simply store your documents in regular entities as a Blob property. Just make sure the entity size doesn't exceed 1 MB, in which case you have to split up your data into more than one entity. I run an application with millions of large Blobs that way. To further speed things up use memcache or even an in-memory cache. Consider fetching your entities with eventual consistency, which is MUCH faster. Run as many database ops in parallel as possible using either bulk operations or the async API. A sketch of the entity is below.
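A hedged sketch of the datastore route (the model and field names are invented; entities cap out around 1 MB, so bigger documents must be split):

from google.appengine.ext import db

class Document(db.Model):
    payload = db.BlobProperty()   # raw serialized bytes, <= ~1 MB per entity

serialized = 'example payload bytes'        # illustrative
db.put([Document(payload=db.Blob(serialized))])   # bulk put
docs = Document.all().fetch(100)            # bulk fetch; add memcache in front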
2
1
0
im running a multi tenant GAE app where each tenant could have from a few 1000 to 100k documents. at this moment im trying to make a MVC javascript client app (the admin part of my app with spine.js) and i need CRUD endpoints and the ability to get a big amount of serialized objects at once. for this specific job appengine is way to slow. i tried to store serialized objects in the blobstore but between reading/writing and updating stuff to the blobstore it takes too much time and the app gets really slow. i thought of using a nosql db on an external machine to do these operations over appengine. a few options would be mongodb, couchdb or redis. but i am not sure about how good they perform with that much data and concurrent requests/inserts from different tenants. lets say i have 20 tenants and each tenant has 50k docs. are these dbs capable to handle this load? is this even the right way to go?
key/value store with good performance for multiple tenants
1.2
1
0
239
11,322,475
2012-07-04T04:36:00.000
0
0
1
0
python,timezone
11,330,631
1
true
0
0
An offset is not enough to define a timezone, e.g., it ignores DST and other changes to the time in the area. To get a limited tzinfo object you could use pytz.FixedOffset(-8*60).
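A sketch of the parsing step (pytz.FixedOffset takes the offset in minutes):

import pytz

def tz_from_offset(offset):
    """Turn a string like '-08:00' into a fixed-offset tzinfo."""
    sign = -1 if offset.startswith('-') else 1
    hours, minutes = offset.lstrip('+-').split(':')
    return pytz.FixedOffset(sign * (int(hours) * 60 + int(minutes)))

print(tz_from_offset('-08:00'))   # a tzinfo 8 hours behind UTC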
1
0
0
I have an IP geolocation service that returns the user's estimated timezone in the format -08:00. At first I tried to dump this number into pytz, but that doesn't work. (I realize now that was a stupid idea.) So how might I parse this number into a tzinfo object?
Parsing timezone in format of "-xx:00"
1.2
0
0
85
11,322,538
2012-07-04T04:44:00.000
8
0
1
0
python,installation,include,pyinstaller,packaging
59,118,034
6
false
0
0
The problem is easier than you might imagine; try this: --add-data="path/to/folder/*;." Hope it helps!
1
40
0
All of the documentation for Pyinstaller talks about including individual files. Is it possible to include a directory, or should I write a function to create the include array by traversing my include directory?
Including a directory using Pyinstaller
1
0
0
50,776
11,324,804
2012-07-04T07:59:00.000
1
0
1
0
python,algorithm,math,boolean
26,163,606
6
false
0
0
Unfortunately, most of the given suggestions may not actually give @turtlesoup what he/she is looking for. @turtlesoup asked for a way to minimize the number of characters for a given boolean expression. Most simplification methods don't target the number of characters as a focus for simplification. When it comes to minimization in electronics, users typically want the fewest number of gates (or parts). This doesn't always result in a shorter expression in terms of the "length" of the expression -- most times it does, but not always. In fact, sometimes the expression can become larger, in terms of length, though it may be simpler from an electronics standpoint (requires fewer gates to build). boolengine.com is the best simplification tool that I know of when it comes to boolean simplification for digital circuits. It doesn't allow hundreds of inputs, but it allows 14, which is a lot more than most simplification tools. When working with electronics, simplification programs usually break down the expression into sum-of-product form. So the expression '(ab)+'cd becomes 'c+'b+'a+d. The "simplified" result requires more characters to print as an expression, but is easier to build from an electronics standpoint. It only requires a single 4-input OR gate and 3 inverters (4 parts). Whereas the original expression would require 2 AND gates, 2 inverters, and an OR gate (5 parts). After giving @turtlesoup's example to BoolEngine, it shows that BC(A+D)+DE becomes E+D+ABC. This is a shorter expression, and will usually be. But certainly not always.
1
7
0
I'm trying to write out a piece of code that can reduce the LENGTH of a boolean expression to the minimum, so the code should reduce the number of elements in the expression to as few as possible. Right now I'm stuck and I need some help =[ Here's the rule: there can be an arbitrary number of elements in a boolean expression, but it only contains AND and OR operators, plus brackets. For example, if I pass in the boolean expression ABC+BCD+DE, the optimum output would be BC(A+D)+DE, which saves 2 unit spaces compared to the original one because the two BCs are combined into one. My logic is that I will attempt to find the most frequently appearing element in the expression and factor it out. Then I call the function recursively to do the same thing to the factored expression until it's completely factored. However, how can I find the most common element in the original expression? That is, in the above example, BC? It seems like I would have to try out all different combinations of elements and find the number of times each combination appears in the whole expression. But this sounds really naive. Second: can someone give a hint on how to do this efficiently? Even some keywords I can search on Google will do.
algorithm - minimizing boolean expressions
0.033321
0
0
8,347
11,325,019
2012-07-04T08:13:00.000
3
1
1
0
python
11,325,504
8
false
0
0
You should use the logging library, which has this capability built in. You simply add handlers to a logger to determine where to send the output.
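A minimal sketch (the logger name and file name are illustrative):

import logging

logger = logging.getLogger('myscript')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())          # -> console
logger.addHandler(logging.FileHandler('run.log'))   # -> file

logger.info('this line goes to both the console and run.log')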
1
61
0
I'm trying to find a way in Python to redirect the script's execution log to a file as well as stdout. Is there any easy, Pythonic way of achieving this?
How to output to the console and file?
0.07486
0
0
174,224
11,326,522
2012-07-04T09:45:00.000
26
0
1
0
python,eclipse,pydev
11,330,067
2
true
0
0
Listing all the operations that could solve it here, for others' convenience, and to get this question closed:

1. Remove the project and recreate it, this time with the project dir on the PYTHONPATH.
2. Remove your Python interpreter settings and set them again in Eclipse - Window > Preferences > PyDev > Interpreter Python, to refresh the PyDev index.
3. Project > Properties > PyDev - PYTHONPATH was all empty; I then used "add source folder".
2
13
0
I have installed Eclipse 3.7.2 from APT in Ubuntu 12.04, and installed PyDev in Eclipse. At first it warned about unused imports and unused wildcard imports, but it no longer displays them today. However, it can still display errors like missing parentheses. I created a new user and installed PyDev using that user; the problem still happens. How can I enable these warnings? I have not changed the code analysis settings.
PyDev code analysis missing
1.2
0
0
5,879
11,326,522
2012-07-04T09:45:00.000
9
0
1
0
python,eclipse,pydev
16,353,313
2
false
0
0
I had the same problem. Going to project properties > PyDev - PYTHONPATH and setting the source folder did it for me!
2
13
0
I have installed Eclipse 3.7.2 from APT in Ubuntu 12.04, and installed PyDev in Eclipse. At first it warned about unused imports and unused wildcard imports, but it no longer displays them today. However, it can still display errors like missing parentheses. I created a new user and installed PyDev using that user; the problem still happens. How can I enable these warnings? I have not changed the code analysis settings.
PyDev code analysis missing
1
0
0
5,879
11,327,779
2012-07-04T11:02:00.000
0
0
0
0
python,web
11,328,214
1
false
1
0
Given your background and the analysis code already being written in Python, Django + Celery seems like an obvious candidate here. We're currently using this solution for a very processing-heavy app with one front-end Django server, one dedicated database server, and two distinct Celery servers for the background processing. Having the Celery processes on distinct servers keeps the Django front end responsive whatever the load on the Celery servers (and we can add new Celery servers if required). So, well, I don't know if it's "the most efficient" solution, but it does work.
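A hedged sketch of the shape this takes with Celery 3.x (the broker URL and function names are invented; a web view calls analyse.delay(...) and returns immediately, while a worker on any machine does the heavy lifting):

from celery import Celery

app = Celery('analysis', broker='amqp://localhost')

@app.task
def analyse(dataset_id):
    # the existing Python analysis code runs here, on whichever
    # machine the worker process lives
    return run_analysis(dataset_id)   # hypothetical existing function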
1
0
0
I am trying to design a web based app at the moment, that involves requests being made by users to trigger analysis of their previously entered data. The background analysis could be done on the same machine as the web server or be run on remote machines, and should not significantly impede the performance of the website, so that other users can also make analysis requests while the background analysis is being done. The requests should go into some form of queueing system, and once an analysis is finished, the results should be returned and viewable by the user in their account. Please could someone advise me of the most efficient framework to handle this project? I am currently working on Linux, the analysis software is written in Python, and I have previously designed dynamic sites using Django. Is there something compatible with this that could work?
choosing an application framework to handle offline analysis with web requests
0
0
0
168
11,328,322
2012-07-04T11:35:00.000
0
0
1
0
python,pyusb
11,328,476
1
true
0
0
The issue you're having, as you noted in your comment, is not attributable to incompatibilities between Python and PyUSB, but to the fact that the Python path does not get automatically added to your PATH variable in the Windows environment settings.

1. Right-click My Computer
2. Properties
3. Advanced System Settings
4. Environment Variables
5. Select Path from the System Variables box and click Edit
6. Add your Python path to the end of the line, preceding it with a semi-colon if needed (e.g. ;C:\Python27)
7. OK out of all the windows
8. Restart your command window, and install the package.
1
0
0
So, I want to communicate with a USB device in Python, but PyUSB won't install (is not compatible?) with Python 2.7 and Windows 7. Within the current project, updating Python to a newer 2.x version is not an option. PyUSB can't be the only option for communicating with a USB device... Any solutions/tips?
Alternatives for pyusb on python 2.7
1.2
0
0
740
11,329,588
2012-07-04T12:56:00.000
0
1
0
0
php,python,mysql,json,pingdom
11,329,769
2
false
0
0
The most basic solution with the setup you have now would be to:

1. Get a list of all events, ordered by server ID and then by time of the event.
2. Loop through that list and record the start of a new event / end of an old event for your new database when: the server ID changes, or the time between the current event and the previous event from the same server is bigger than a certain threshold you set.
3. Store the old event you were monitoring in your new database.

The only complication I see is that the next time you run the script, you need to make sure that you continue monitoring events that were still taking place at the time you last ran the script. A sketch of the grouping pass is below.
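The sketch (field names are invented; alerts must arrive sorted by hostname then timestamp):

from datetime import timedelta

GAP = timedelta(minutes=30)   # alerts closer together than this form one outage

def group_outages(alerts):
    """alerts: iterable of (hostname, timestamp), sorted by host then time."""
    outages = []
    for host, ts in alerts:
        if outages and outages[-1][0] == host and ts - outages[-1][2] <= GAP:
            outages[-1][2] = ts             # extend the open outage
        else:
            outages.append([host, ts, ts])  # start a new outage
    return [tuple(o) for o in outages]      # (host, first_seen, last_seen)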
1
0
0
Not sure if the title is a great way to word my actual problem, and I apologize if this is too general of a question, but I'm having some trouble wrapping my head around how to do something. What I'm trying to do: The idea is to create a MySQL database of 'outages' for the thousands of servers I'm responsible for monitoring. This would give a historical record of downtime and an easy way to retroactively tell what happened. The database will be queried by a fairly simple PHP form where one could browse these outages by date or server hostname etc. What I have so far: I have a python script that runs as a cron periodically to call the Pingdom API to get a list of current down alerts reported by the pingdom service. For each down alert, a row is inserted into a database containing a hostname, time stamp, pingdom check id, etc. I then have a simple php form that works fine to query for down alerts. The problem: What I have now is missing some important features and isn't quite what I'm looking for. Currently, querying this database would give me a simple list of down alerts like this:

Pingdom alerts for Test_Check from 2012-05-01 to 2012-06-30:
test_check was reported DOWN at 2012-05-24 00:11:11
test_check was reported DOWN at 2012-05-24 00:17:28
test_check was reported DOWN at 2012-05-24 00:25:24
test_check was reported DOWN at 2012-05-24 00:25:48

What I would like instead is something like this:

test_check was reported down for 15 minutes (2012-05-24 00:11:11 to 2012-05-24 00:25:48) (link to comment on this outage) (link to info on this outage)

In this ideal end result, there would be one row containing an outage ID, the hostname of the server pingdom is reporting down, the timestamp for when that box was reported down originally and the timestamp for when it was reported up again, along with a 'comment' field I (and other admins) would use to add notes about this particular event after the fact. I'm a little lost as to how I will go about combining several down alerts that occur within a short period of time into a single 'outage' that would be inserted into a separate table in the existing MySQL database where individual down alerts are currently being stored. This would allow me to comment and add specific details for future reference and would generally make this thing a lot more usable. I'm not sure if I should try to do this when pulling the alerts from pingdom or if I should re-process the alerts after they're collected to populate the new table, and I'm not quite sure how I would work out either of those options. I've been wracking my brain trying to figure out how to do this. It seems like a simple concept but I'm a somewhat inexperienced programmer (I'm a Linux admin by profession) and I'm stumped at this point. I'm looking for any thoughts, advice, examples or even just a more technical explanation of what I'm trying to do here to help point me in the right direction. I hope this makes sense. Thanks in advance for any advice :)
How can I combine rows of data into a new table based on similar timestamps? (python/MySQL/PHP)
0
1
0
131
11,329,598
2012-07-04T12:56:00.000
0
0
0
0
python,django,django-models,selenium
11,331,728
1
true
1
0
The Selenium API enables you to interact with a web page and the various web elements on it. To change any value in your database you will have to do that with whichever database you are using and whichever Python methods you use to interact with that database provider. (For example, one would use JDBC when using Java.) It has nothing to do with Selenium or its API.
1
0
0
There is a field in my database called package_expiry_date, and I want to change its value from the Selenium test case itself so I can use it later, e.g. extend the date by 5 days by adding datetime.timedelta(days=5). How can I change its value in the database table from a Python Selenium test case for Django?
Changing a datafield into a database in selenium testcase: django python
1.2
0
0
125
11,331,310
2012-07-04T14:46:00.000
1
0
1
0
python,exe
11,331,468
1
true
0
0
If you want read-write data: don't do this. An executable changing itself isn't guaranteed to work. Some executables write data at the end of the file (in theory), but you don't know:

- whether antivirus software will pick this behaviour up as part of behavioural analysis
- whether the executable is actually writable from the executable process
- whether data you write might become executable in theory and result in a security exploit
- whether you'll want to ship a new release of the code next week, which will replace the executable file and lose the data

[Nearly] all software is able to get by with 'normal' file storage (i.e. in a user / application data directory).

If you just want read-only data: fine, no problem. Write a Python file with the data in it, as a variable in a module. You can write that Python file as part of your build process.
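For the read-only route, a hedged sketch of a build step that bakes the data into a module (the names and payload are invented):

# build_data.py -- run before freezing; the generated module gets bundled
data = {'version': '1.0', 'build': 42}   # illustrative payload
with open('bundled_data.py', 'w') as f:
    f.write('DATA = %r\n' % (data,))

# at runtime the frozen program simply does:
#   from bundled_data import DATA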
1
1
0
I have to generate an executable (.exe) file from my Python program. I would like to store information in a persistent way within this .exe file itself. Normally I would pickle it into an external file; however, for me it is important that the information is stored in the .exe file itself and not externally. Thanks in advance!
Store information into .exe file, exported from python
1.2
0
0
349
11,331,719
2012-07-04T15:12:00.000
1
0
1
0
python
11,331,832
5
false
0
0
This is language-dependent. Some languages, like Java, insist that you use a class for everything. There's simply no concept of a standalone function. Python isn't like that. It's perfectly OK - in fact recommended - to define functions standalone, and related functions can be grouped together in modules. As others have stated, the only time you really want a class in Python is when you have state that you need to keep - ie, encapsulating the data within the object.
5
4
0
When is a class more useful to use than a function? Is there any hard or fast rule that I should know about? Is it language dependent? I'm intending on writing a script for Python which will parse different types of json data, and my gut feeling is that I should use a class to do this, versus a function.
When should I use a class and when should I use a function?
0.039979
0
0
359
11,331,719
2012-07-04T15:12:00.000
3
0
1
0
python
11,331,803
5
false
0
0
First of all, I think this isn't language-dependent (as long as the language lets you define both classes and functions). As a general rule, a class wraps a behaviour into itself. So, if you have a certain type of service to implement (with, e.g., several functions), a class is what you're looking for. Moreover, a class (an object, to be more correct) has state, and you can instantiate multiple occurrences of a class (so different objects with different states). Not less important, a class can be inherited: so you can override a specific behaviour with only small changes.
5
4
0
When is a class more useful to use than a function? Is there any hard or fast rule that I should know about? Is it language dependent? I'm intending on writing a script for Python which will parse different types of json data, and my gut feeling is that I should use a class to do this, versus a function.
When should I use a class and when should I use a function?
0.119427
0
0
359
11,331,719
2012-07-04T15:12:00.000
1
0
1
0
python
11,331,755
5
false
0
0
For anything non-trivial, you should probably be using a class. I tend to limit all of my "free-floating" functions to a utils.py file.
5
4
0
When is a class more useful to use than a function? Is there any hard or fast rule that I should know about? Is it language dependent? I'm intending on writing a script for Python which will parse different types of json data, and my gut feeling is that I should use a class to do this, versus a function.
When should I use a class and when should I use a function?
0.039979
0
0
359
11,331,719
2012-07-04T15:12:00.000
1
0
1
0
python
11,331,745
5
false
0
0
Use the class when you have state (something that should be persistent across calls), and the function in other cases. Exception: if your class only stores a couple of values and has a single method besides __init__, you are better off with a function.
5
4
0
When is a class more useful to use than a function? Is there any hard or fast rule that I should know about? Is it language dependent? I'm intending on writing a script for Python which will parse different types of json data, and my gut feeling is that I should use a class to do this, versus a function.
When should I use a class and when should I use a function?
0.039979
0
0
359
11,331,719
2012-07-04T15:12:00.000
9
0
1
0
python
11,331,759
5
true
0
0
You should use a class when your routine needs to save state. Otherwise a function will suffice.
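A tiny illustration of that rule (the names are invented): the counter keeps state between calls, so it is a class; the parser does not, so a plain function suffices.

class HitCounter(object):
    def __init__(self):
        self.count = 0          # state that persists across calls

    def bump(self):
        self.count += 1
        return self.count

def parse_csv_line(line):
    # no state needed: input in, output out
    return line.split(',')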
5
4
0
When is a class more useful to use than a function? Is there any hard or fast rule that I should know about? Is it language dependent? I'm intending on writing a script for Python which will parse different types of json data, and my gut feeling is that I should use a class to do this, versus a function.
When should I use a class and when should I use a function?
1.2
0
0
359
11,332,225
2012-07-04T15:47:00.000
0
0
1
0
python,synchronization
11,332,382
2
false
0
0
I would suggest using a database whose transactions allow for concurrent processing.
1
0
0
Firstly, I am new to Python. Now my question goes like this: I have a callback script running on a remote machine that sends some data and runs a script on a local machine, which processes that data and writes it to a file. Now another of my local scripts needs to process the file data one by one and delete entries from the file when done. The problem is the file may be updating continuously. How do I synchronize the work so that it doesn't mess up my file? Also, please suggest whether the same work can be done in some better way.
Python: Two script working with same file , one updating it another deleting the data when processed
0
0
0
1,114
11,333,061
2012-07-04T17:00:00.000
1
0
1
1
python,process
11,333,155
3
false
1
0
No. Each Python script has its own independent interpreter, so there is no convenient way to list all Python processes.
2
2
0
Does Python have a tool to list python processes similar to Java's jps (http://docs.oracle.com/javase/6/docs/technotes/tools/share/jps.html)? Edit: Getting the pid's of python processes is relatively easy (ps -A | grep python). What I am really looking for is a way to query a currently-running python process and find out the python file it was originally executed on. From the JPS docs, "jps will list each Java application's lvmid followed by the short form of the application's class name or jar file name." Basically, is there an easy way to query a bunch of python processes and find out useful information like JPS does for JVMs?
Does python have a tool similar to Java's JPS
0.066568
0
0
1,045
11,333,061
2012-07-04T17:00:00.000
1
0
1
1
python,process
12,424,762
3
true
1
0
The best way I've found to get the information I need is using ps -x, which gives the original command line arguments of all running processes.
2
2
0
Does Python have a tool to list python processes similar to Java's jps (http://docs.oracle.com/javase/6/docs/technotes/tools/share/jps.html)? Edit: Getting the pid's of python processes is relatively easy (ps -A | grep python). What I am really looking for is a way to query a currently-running python process and find out the python file it was originally executed on. From the JPS docs, "jps will list each Java application's lvmid followed by the short form of the application's class name or jar file name." Basically, is there an easy way to query a bunch of python processes and find out useful information like JPS does for JVMs?
Does python have a tool similar to Java's JPS
1.2
0
0
1,045
11,335,656
2012-07-04T21:33:00.000
0
0
1
0
python,regex
11,335,816
3
false
0
0
use this regex (^([^#].*?)?c99.*?$)
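A quick test of that pattern against the four sample strings:

import re

pattern = re.compile(r'^([^#].*?)?c99.*?$')
for s in ['#000c99', '/mysite.com/c99.php', '%20c99.php', 'c99?hi=moo']:
    print('%-20s %s' % (s, bool(pattern.match(s))))
# only the first string fails to match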
1
1
0
I'm trying to write a regular expression match. I'd like to match c99 in files, so long as it's not part of a hexadecimal color code. For example:

Do NOT match on #000c99
DO match on /mysite.com/c99.php
DO match on %20c99.php
DO match on c99?hi=moo

Is this even possible with regex?
Backchecking on python regex
0
0
0
98
11,338,044
2012-07-05T04:59:00.000
49
0
1
0
python,multiprocessing
11,338,089
3
true
0
0
That is the difference. One reason why you might use imap instead of map is if you wanted to start processing the first few results without waiting for the rest to be calculated. map waits for every result before returning. As for chunksize, it is sometimes more efficient to dole out work in larger quantities because every time the worker requests more work, there is IPC and synchronization overhead.
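A minimal sketch contrasting the two:

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    pool = Pool(4)
    print(pool.map(square, range(10)))        # blocks until the whole list is ready
    for r in pool.imap(square, range(10), chunksize=2):
        print(r)                              # results arrive one by one, in order
    pool.close()
    pool.join()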
1
56
0
I'm trying to learn how to use Python's multiprocessing package, but I don't understand the difference between map and imap. Is the difference that map returns, say, an actual array or set, while imap returns an iterator over an array or set? When would I use one over the other? Also, I don't understand what the chunksize argument is. Is this the number of values that are passed to each process?
Python Multiprocessing: What's the difference between map and imap?
1.2
0
0
25,529
11,338,382
2012-07-05T05:43:00.000
1
0
0
1
python,windows,django,linux
11,339,389
5
false
1
0
I normally use OSX on my desktop, but I use Linux for Python because that's how it will get deployed. Specifically, I use Ubuntu Desktop in a virtual machine to develop Python applications and I use Ubuntu on the server to deploy them. This means that my understanding of library and module requirements/dependencies are 100% transferrable to the server when I'm ready to deploy the application. If I used OSX (or Windows) to develop Python apps I would have to deal with two different methods of handling requirements and dependencies --- it's just too much work. My suggestion: use VMWare Player (it's free) and find a Ubuntu VM to start learning. It's not too complicated and is actually quite fun.
2
19
0
I have been working on Python quite a lot recently and started reading the docs for Django; however, I can't deny the fact that most of the video tutorials I find show Linux as the chosen OS. I've mostly ignored this, but I started to come upon problems with people using commands such as "touch", for which I have no idea what the equivalent is in the Windows 7 command prompt. I've heard about New-Item in PowerShell; however, it's messy, and I fear that this "equivalent hunt" might come up again and again... So I started to wonder why most people use Linux with Python. Would it be a good move (knowing that my Linux knowledge is completely null) to learn to use Linux for development purposes? Would it allow me to be more efficient at developing with Python in general? Would it be possible to list the benefits of doing so?
Python/Django development, windows or linux?
0.039979
0
0
25,407
11,338,382
2012-07-05T05:43:00.000
27
0
0
1
python,windows,django,linux
11,338,865
5
true
1
0
I used Windows for quite some time for Django development, but finally figured out that Linux is simply the better way to go. Here are some reasons why:

- some Python packages cannot be installed at all, or not correctly, on Windows, OR installing them will create a lot of hassle for you
- if you need to deploy your Django app it makes more sense to use a Unix-flavored system, simply because it's 99% likely that your deployment environment is the same. Doing a dry run on your local machine with the same configuration will save you a lot of time later on, plus here you are "allowed" to make mistakes.
- if your app gets complex, it's way easier on Linux to get the required dependencies, be it extensions, libraries, etc. On Windows you end up looking for the right site to download everything and then go through some hassle of installation and configuration; it took me lots of time just to search for some specific things sometimes. On Linux it's often just an "apt-get" (or similar) and you are done.
- did I mention that everything is faster to get and install on Linux?

Of course, if your app is simple and you don't need to care about deployment, then Windows is fine.
2
19
0
I have been working on Python quite a lot recently and started reading the docs for Django; however, I can't deny the fact that most of the video tutorials I find show Linux as the chosen OS. I've mostly ignored this, but I started to come upon problems with people using commands such as "touch", for which I have no idea what the equivalent is in the Windows 7 command prompt. I've heard about New-Item in PowerShell; however, it's messy, and I fear that this "equivalent hunt" might come up again and again... So I started to wonder why most people use Linux with Python. Would it be a good move (knowing that my Linux knowledge is completely null) to learn to use Linux for development purposes? Would it allow me to be more efficient at developing with Python in general? Would it be possible to list the benefits of doing so?
Python/Django development, windows or linux?
1.2
0
0
25,407
11,339,815
2012-07-05T07:39:00.000
2
0
0
0
python,django,django-forms
11,342,225
1
false
1
0
You can make use of AJAX for a single form submission instead of whole page submit.
1
1
0
I've been developing a Django project for 1 month, so I'm new at Django. My current problem with Django is: when I have multiple forms on one page and the page is submitted for one form, the other forms' field values are lost, because they are not posted. I've found a solution for this problem: when the method is GET, I send the other forms' values with the page URL and handle them from the GET request. When the method is POST, I keep the other forms' field values in hidden inputs on the HTML side, inside the form which is posted, so I can handle them from the POST request. Maybe I could keep them in the session object, but it may not be good to keep them for the whole time the user is logged in. But I don't know; I may have to use this method. Is there another, more effective way to keep all forms' fields in Django? Any suggestions? Thanks!
Multiple forms in one page in DJANGO
0.379949
0
0
543
11,340,203
2012-07-05T08:04:00.000
1
0
1
0
python-3.x,sqlalchemy,formalchemy
11,704,283
1
true
0
0
At the time of writing there is none.
1
3
0
Is there a FormAlchemy alternative for Python 3.2? I'm specifically interested in using it in conjunction with Pyramid. I'm getting syntax errors when setting up FormAlchemy 1.3.3, so their latest release is not compatible.
python3 formalchemy alternative
1.2
0
0
252
11,341,112
2012-07-05T09:05:00.000
3
0
0
0
python,plone
11,350,843
2
false
1
0
Cue tune from Hotel California: "You can check out any time you like, but you can never leave." You do not really want to disable all downloading; I believe that you really just want to disable downloads for all users but the Owner. There is no practical use in putting files into something with no vehicle for EVER getting them back out...

...so you need to solve this problem with workflow:

1. Use a custom workflow definition that has a state for this behavior ("Confidential").
2. Ensure that the "View" permission is not inherited from the folder above in the permissions for this state, and check "Owner" (and possibly "Manager", if you see fit) as having the "View" permission.
3. Set the confidential state as the default state for files.

You can do this using Workflow policy support ("placeful workflows") in parts of the site if you do not wish to do this site-wide. Should you wish to make the existence of the items viewable, but the download not, you are best advised to create a custom permission and a custom type to protect downloading with a permission other than "View" (but you should still use workflow state as the permission-to-role mapping template).
2
1
0
I wish to make the uploaded file contents viewable only in the browser, i.e. using atreal.richfile.preview for doc/xls/pdf files. The file should not be downloadable at any cost. How do I remove the hyperlink for the template in a particular folder, for all the files in that folder? I use Plone 4.1. (There is AT's at_download.)
In plone how can I make an uploaded file as NOT downloadable?
0.291313
0
0
362
11,341,112
2012-07-05T09:05:00.000
1
0
0
0
python,plone
11,355,784
2
true
1
0
Customize the Script (Python) at /mysite/portal_skins/archetypes/at_download so that it contains nothing. I thought this would be helpful to anyone who wants to keep files/images in Plone confidential: share the folders with the View permission and disable the checkout and copy options for the role created.
2
1
0
I wish to make the uploaded file contents viewable only in the browser, i.e. using atreal.richfile.preview for doc/xls/pdf files. The file should not be downloadable at any cost. How do I remove the hyperlink for the template in a particular folder, for all the files in that folder? I use Plone 4.1. (There is AT's at_download.)
In plone how can I make an uploaded file as NOT downloadable?
1.2
0
0
362
11,342,314
2012-07-05T10:21:00.000
0
1
0
0
python,linux,paramiko
11,360,629
1
false
0
0
Reviewing my SO activity this week, I saw this opportunity to whore for rep: Those look like ANSI/VT100 terminal control codes, which suggests that something which thinks it is attached to a terminal is sending them, but they are being received by something which doesn't know what to do with them. Now you can Google for 'VT100 control codes' and learn what you want.
1
3
0
I'm using Python 2.7 and the paramiko library. A client app running on Windows sends SSH commands to a server app running on Linux. When I send the vi command, I get the response <-[0m<-[24;2H<-[K<-[24;1H<-[1m~<-[0m<-[25;2H.... I don't know what these characters mean or how to process them. I've been struggling for hours; please help me.
vi command returns error format data?
0
0
1
223
11,342,620
2012-07-05T10:38:00.000
0
0
0
0
python,wxpython,wxwidgets
11,382,239
1
true
0
1
I solved this by getting the width of the parent widget inside the scrolledpanel, instead of the width of the scrolledpanel itself. Sometimes the answer is so obvious :)
1
0
0
I'm trying to use the ScrolledPanel in wx.lib.scrolledpanel, and I would like to check if the scrollbar of the ScrolledPanel is currently visible, so I can give my StaticText the correct wrap width. When the scrollbar is visible I need to remove another 10 pixels or so from the wrap width... Anyone any idea how this is done? Thanks!
Check if wx.lib.scrolledpanel.ScrolledPanel is currently scrolling
1.2
0
0
134
11,345,236
2012-07-05T13:22:00.000
0
0
0
0
python,django
11,345,396
2
false
1
0
By default it shows as active, i.e. checked/True. If you want a NULL value as well, try using NullBooleanField.
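A hedged sketch of both options (the model name is invented; an explicit default also stops the admin from pre-checking the box):

from django.db import models

class Subscription(models.Model):
    # starts unchecked instead of relying on the implicit default
    active = models.BooleanField(default=False)
    # or, if "unknown" is a legitimate third state:
    # active = models.NullBooleanField()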
1
0
0
I use BooleanField in my model like this: active = models.BooleanField(). While editing in the Django admin, the attribute 'active' is always checked. Why?
BooleanField doesn't work (always checked) in Django 1.4
0
0
0
652