Dataset columns (name: dtype, min to max; for string columns the range is the string length):

Q_Id: int64, 2.93k to 49.7M
CreationDate: string, length 23 to 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string, length 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string, length 25 to 6.53k
Title: string, length 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
Q_Id: 3,963,438 | CreationDate: 2010-10-18T21:12:00.000
Users Score: 0 | Other: 1 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,programming-languages
ERRORS: 0 | A_Id: 3,963,524 | API_CHANGE: 0 | AnswerCount: 6 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: The stock answer is that the only features that make certain languages more powerful than others are language features that cannot be easily replaced by adding libraries. This definition will almost always list LISP at the top, but has the odd side effect of listing assembly near the top unless special care is taken to exclude it.
Available Count: 4 | Q_Score: 5 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: I'm going to reveal my ignorance here, but in my defense, I'm an accounting major, and I've never taken a computer science class. I'm about to start a new project, and I'm considering using Python instead of PHP, even though I am much more adept with PHP, because I have heard that Python is a more powerful language. That got me wondering: what makes one programming language more powerful than another? I figure JavaScript isn't very powerful because it (generally) runs inside a browser. But why is Python more powerful than PHP? In each case, I'm giving instructions to the computer, so why are some languages better at interpreting and executing these instructions? How do I know how much "power" I actually need for a specific project?
Title: What makes some programming languages more powerful than others?
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 6,357
Q_Id: 3,963,438 | CreationDate: 2010-10-18T21:12:00.000
Users Score: 1 | Other: 1 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,programming-languages
ERRORS: 0 | A_Id: 3,963,547 | API_CHANGE: 0 | AnswerCount: 6 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: I would not say that some computer languages are "more powerful", just that some languages are more suited to your specific problem domain. That said, PHP is a language that evolved from a hack and was tailored to a very specific problem domain; this shows up in several places, for example the inconsistent parameter order across database interfaces. IMHO the PHP community has made some very sad decisions for new syntax enhancements over time. IMHO Python is much more general, well designed, and elegant, but your question is one that usually starts flamewars.
Available Count: 4 | Q_Score: 5 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 3,963,438 above)
Title: What makes some programming languages more powerful than others?
CONCEPTUAL: 0 | Score: 0.033321 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 6,357
Q_Id: 3,963,438 | CreationDate: 2010-10-18T21:12:00.000
Users Score: 0 | Other: 1 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,programming-languages
ERRORS: 0 | A_Id: 3,963,564 | API_CHANGE: 0 | AnswerCount: 6 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: It's an interesting topic, and in my line of work I come across this a lot. But I've discovered that "power" in the literal sense no longer has value when it comes to the language. I fear those telling you "Python is more powerful" are mixing up the language and the implementation. I'm a recent convert to Python (the last 2 weeks); previously I was a PHP coder. The libraries built on top of Python, namely Django, help make the language more powerful, as it's faster to use and build upon. PHP has the fabled "if you want to do something, there is a function for it", and the documentation is brilliant, therefore powerful in that sense. And in regard to interpreting the language, again it depends on who has been coding it. By general consensus Python may be considered quicker and less CPU intensive, as it creates compiled (bytecode) versions of your code, but PHP can have a good caching system. In short: pick your favorite.
Available Count: 4 | Q_Score: 5 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 3,963,438 above)
Title: What makes some programming languages more powerful than others?
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 6,357
Q_Id: 3,963,438 | CreationDate: 2010-10-18T21:12:00.000
Users Score: 4 | Other: 1 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,programming-languages
ERRORS: 0 | A_Id: 3,963,482 | API_CHANGE: 0 | AnswerCount: 6 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: I hate statements of the sort "language X is more powerful than Y." The real question is which language makes you more powerful. If language X allows you to write better code (that works) faster than Y does then, yes, X is more "powerful". If you are looking for an objective explanation of language powerful-ness ... well, good luck with that.
Available Count: 4 | Q_Score: 5 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 3,963,438 above)
Title: What makes some programming languages more powerful than others?
CONCEPTUAL: 0 | Score: 0.132549 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 6,357
Q_Id: 3,968,275 | CreationDate: 2010-10-19T12:26:00.000
Users Score: 3 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,tkinter
ERRORS: 0 | A_Id: 3,968,351 | API_CHANGE: 0 | AnswerCount: 1 | REVIEW: 0 | is_accepted: true | Web Development: 0 | GUI and Desktop Applications: 1
Answer: frame.config(width=100). Be aware that if there are children in the frame that are managed by grid or pack, your changes may have no effect. There are solutions to this, but it is rarely needed. Generally speaking you should let widgets be their natural size. If you do need to resize a frame that is a container of other widgets, you need to turn "geometry propagation" off.
Available Count: 1 | Q_Score: 3 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: How can I set the width of a tk.Frame (post-initialization)? In other words, is there a member function to do it? Something like frame.setWidth(). Thanks.
Title: TKinter: how to change Frame width dynamically
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 4,026
Q_Id: 3,976,368 | CreationDate: 2010-10-20T09:23:00.000
Users Score: 7 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 0
Tags: python,google-app-engine,web-applications
ERRORS: 0 | A_Id: 3,976,759 | API_CHANGE: 0 | AnswerCount: 2 | REVIEW: 1 | is_accepted: true | Web Development: 1 | GUI and Desktop Applications: 0
Answer: The framework you use is irrelevant to how you handle forms. You have a couple of options: you can distinguish the forms by changing the URL they submit to - in which case, you can use the same handler or a different handler for each form - or you can distinguish them based on the contents of the form. The easiest way to do the latter is to give your submit buttons distinct names or values, and check for them in the POST data.
Available Count: 1 | Q_Score: 2 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: Say I have multiple forms with multiple submit buttons in a single page; can I somehow make all of these buttons work using webapp as the backend handler? If not, what are the alternatives?
Title: How to handle multiple forms in google app engine?
CONCEPTUAL: 1 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 1,373
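The answer above suggests giving submit buttons distinct names and checking for them in the POST data, since only the clicked button's name appears in the submitted form. A minimal framework-independent sketch (the button names and the returned action labels are illustrative, not from the original answer):

```python
def dispatch_form(post_data):
    """Pick an action based on which submit button's name is present
    in the POST data; only the clicked button is submitted."""
    if "save_btn" in post_data:
        return "save"
    if "delete_btn" in post_data:
        return "delete"
    return "unknown"

# Two forms on one page, each with its own submit button name:
action = dispatch_form({"save_btn": "Save", "title": "My post"})
```

Inside a real handler, `post_data` would be the request's POST mapping; the same check works in webapp, Django, or any other framework.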
Q_Id: 3,980,878 | CreationDate: 2010-10-20T18:01:00.000
Users Score: 0 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,html,pdf,permissions,pylons
ERRORS: 0 | A_Id: 4,025,388 | API_CHANGE: 0 | AnswerCount: 4 | REVIEW: 0 | is_accepted: false | Web Development: 1 | GUI and Desktop Applications: 0
Answer: Maybe a filename with an MD5 key will be enough? 48cd84ab06b0a18f3b6e024703cfd246-myfilename.pdf. You can use the filename and datetime.now() to generate the MD5 key.
Available Count: 3 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: I'm building a web site from the old one and I need to show a lot of .pdf files. I need users to get authenticated before they can see any of my .pdf files, but I don't know how (and I can't put my PDFs in my database). I'm using Pylons with Python. Thanks for your help. If you have any questions, ask me! :)
Title: Show pdf only to authenticated users
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 216
Q_Id: 3,980,878 | CreationDate: 2010-10-20T18:01:00.000
Users Score: 2 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,html,pdf,permissions,pylons
ERRORS: 0 | A_Id: 3,980,973 | API_CHANGE: 0 | AnswerCount: 4 | REVIEW: 0 | is_accepted: false | Web Development: 1 | GUI and Desktop Applications: 0
Answer: Paul's suggestion of X-Sendfile is excellent - this is truly a great way to deal with actually getting the document back to the user. (+1 for Paul :) As for the front end, do something like this: store your PDFs somewhere not accessible by the web (say /secure); offer a URL that looks like /unsecure/filename.pdf; have your HTTP server (if it's Apache, see Mod Rewrite) convert that link into /normal/php/path/authenticator.php?file=filename.pdf; authenticator.php confirms that the file exists, that the user is legit (i.e. via a cookie), and then uses X-Sendfile to return the PDF.
Available Count: 3 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 3,980,878 above)
Title: Show pdf only to authenticated users
CONCEPTUAL: 0 | Score: 0.099668 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 216
Q_Id: 3,980,878 | CreationDate: 2010-10-20T18:01:00.000
Users Score: 2 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,html,pdf,permissions,pylons
ERRORS: 0 | A_Id: 3,980,896 | API_CHANGE: 0 | AnswerCount: 4 | REVIEW: 0 | is_accepted: false | Web Development: 1 | GUI and Desktop Applications: 0
Answer: You want to use the X-Sendfile header to send those files. Precise details will depend on which HTTP server you're using.
Available Count: 3 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 3,980,878 above)
Title: Show pdf only to authenticated users
CONCEPTUAL: 0 | Score: 0.099668 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 216
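The X-Sendfile flow described in the answers above can be sketched as a handler that authenticates the user and then emits only a header, leaving the actual file transfer to the HTTP server (Apache mod_xsendfile; nginx uses the analogous X-Accel-Redirect). The `/secure` directory and the `user_is_authenticated` flag below are illustrative assumptions:

```python
import os

SECURE_DIR = "/secure"  # hypothetical directory NOT served directly by the web server

def pdf_response(filename, user_is_authenticated):
    """Return an (http_status, headers) pair for a protected PDF request."""
    if not user_is_authenticated:
        return 403, {}
    safe_name = os.path.basename(filename)  # strip any ../ traversal attempt
    # The front-end HTTP server sees this header and streams the file itself,
    # so the application process never buffers the PDF.
    return 200, {"X-Sendfile": os.path.join(SECURE_DIR, safe_name)}

status, headers = pdf_response("../evil/report.pdf", True)
```

The same shape works in Pylons: the controller checks the session cookie, sets the header on the response, and returns an empty body.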
Q_Id: 3,981,357 | CreationDate: 2010-10-20T19:02:00.000
Users Score: 2 | Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python
ERRORS: 0 | A_Id: 3,981,399 | API_CHANGE: 0 | AnswerCount: 3 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 1
Answer: Python provides a number of ways to do this using function calls: eval() and exec(). For your needs you should read about exec.
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: I want to input a Python function at run time and execute that part of code n number of times. For example, using tkinter I create a textbox where the user writes the function and submits it, also mentioning how many times it should be executed. My program should be able to run that function as many times as mentioned by the user. PS: I did think of an alternative method where the user writes the program in a file and then I simply execute it with "python filename" as a system command inside my Python program, but I don't want it that way.
Title: how to input python code in run time and execute it?
CONCEPTUAL: 0 | Score: 0.132549 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 6,779
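The exec() route suggested above can be sketched as: run the user-submitted function definition through exec() into a namespace, then call it the requested number of times. The names `user_code`, `task`, and `results` are illustrative; note that exec() of untrusted input runs with full privileges, so this is only safe for code you trust:

```python
# Simulated user input: a function definition submitted at run time.
user_code = """
def task():
    results.append(len(results) + 1)
"""

results = []
namespace = {"results": results}
exec(user_code, namespace)   # compiles and defines task() inside `namespace`

n = 3                        # "how many times", as entered by the user
for _ in range(n):
    namespace["task"]()      # run the submitted function n times
```

eval() is the counterpart for single expressions; exec() is needed here because a def statement is not an expression.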
Q_Id: 3,989,952 | CreationDate: 2010-10-21T16:55:00.000
Users Score: 2 | Other: 1 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,time
ERRORS: 0 | A_Id: 3,990,021 | API_CHANGE: 0 | AnswerCount: 2 | REVIEW: 0 | is_accepted: true | Web Development: 0 | GUI and Desktop Applications: 0
Answer: CPU load will affect timing. If your application is starved of a slice of CPU time, then timing will be affected, and you cannot help that much; you can only be as precise as the scheduler allows. Ensure that your program gets a healthy slice of CPU time and the result will be accurate. In most cases, the results should be accurate to milliseconds.
Available Count: 2 | Q_Score: 3 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: My question was not specific enough last time, and so this is my second question about this topic. I'm running some experiments and I need to precisely measure participants' response time to questions in milliseconds. I know how to do this with the time module, but I was wondering if this is reliable enough or whether I should be careful using it. I was wondering whether some other random CPU load could interfere with the measuring of time. So my question is: will the response time measured with the time module be very accurate, or will there be some noise associated with it? Thank you, Joon
Title: Is python time module reliable enough to use to measure response time?
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 533
Q_Id: 3,989,952 | CreationDate: 2010-10-21T16:55:00.000
Users Score: 1 | Other: 1 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,time
ERRORS: 0 | A_Id: 3,990,976 | API_CHANGE: 0 | AnswerCount: 2 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: If you benchmark on a *nix system (Linux most probably), time.clock() will return CPU time in seconds. On its own, it's not very informative, but as a difference of results (i.e. t0 = time.clock(); some_process(); t = time.clock() - t0), you'd have a much more load-independent timing than with time.time().
Available Count: 2 | Q_Score: 3 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 3,989,952 above)
Title: Is python time module reliable enough to use to measure response time?
CONCEPTUAL: 0 | Score: 0.099668 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 533
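A note for readers on current Python: time.clock() was deprecated and removed in Python 3.8. time.perf_counter() is now the standard high-resolution clock for measuring response times, and time.process_time() gives the load-independent CPU-time difference the answer above describes. A sketch of the difference-of-timestamps pattern (the `measure` helper is illustrative):

```python
import time

def measure(func):
    """Return (result, elapsed_seconds) using the high-resolution wall clock."""
    t0 = time.perf_counter()
    result = func()
    elapsed = time.perf_counter() - t0
    return result, elapsed

result, elapsed = measure(lambda: sum(range(1000)))
```

For human response-time experiments, wall-clock time (perf_counter) is what you want; process_time would exclude the time spent waiting for the participant.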
Q_Id: 3,992,192 | CreationDate: 2010-10-21T21:37:00.000
Users Score: -2 | Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,string-length
ERRORS: 0 | A_Id: 65,062,702 | API_CHANGE: 0 | AnswerCount: 17 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: Create a for loop:

    styx = "How do I count this without using the len function"
    number = 0
    for c in styx:
        number = number + 1
    print(number)

Available Count: 1 | Q_Score: 3 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: Can anyone tell me how I can get the length of a string without using the len() function or any string methods? Please, anyone, tell me, as I'm tapping my head madly for the answer. Thank you.
Title: String length without len function
CONCEPTUAL: 0 | Score: -0.023525 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 43,653
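The counting idea above (one increment per character, no len()) can be packaged as a reusable function, with a generator-expression variant built on the same iteration idea:

```python
def string_length(s):
    """Count characters without len(): one increment per character."""
    count = 0
    for _ in s:
        count += 1
    return count

# Equivalent one-liner over the same idea:
alt = sum(1 for _ in "hello")
```

Both iterate the string once, so they are O(n), unlike len() which is O(1) for built-in strings.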
Q_Id: 3,993,125 | CreationDate: 2010-10-22T00:50:00.000
Users Score: 7 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,numpy
ERRORS: 0 | A_Id: 3,993,156 | API_CHANGE: 0 | AnswerCount: 3 | REVIEW: 0 | is_accepted: true | Web Development: 0 | GUI and Desktop Applications: 0
Answer: Yes, you're right. It fills in as many : as required. The only difference occurs when you use multiple ellipses. In that case, the first ellipsis acts in the same way, but each remaining one is converted to a single :.
Available Count: 1 | Q_Score: 7 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question: And what is it called? I don't know how to search for it; I tried calling it "ellipsis" with Google. I don't mean in interactive output, when dots are used to indicate that the full array is not being shown, but as in the code I'm looking at: xTensor0[...] = xVTensor[..., 0]. From my experimentation, it appears to function similarly to : in indexing, but stands in for multiple :'s, making x[:,:,1] equivalent to x[...,1].
Title: What does ... mean in numpy code?
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 1,449
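The accepted answer says ... expands to as many : as required. Without assuming NumPy is installed, plain Python can show how ... reaches __getitem__ as the built-in Ellipsis singleton, which is all NumPy sees before doing that expansion (the `Recorder` class here is purely illustrative):

```python
class Recorder:
    """Records whatever index expression it is subscripted with."""
    def __getitem__(self, index):
        return index

r = Recorder()
ellipsis_index = r[..., 1]       # -> (Ellipsis, 1)
explicit_index = r[:, :, 1]      # -> (slice(None), slice(None), 1)
# NumPy expands the single Ellipsis to as many full slices (:) as needed,
# so on a 3-D array x, x[..., 1] is equivalent to x[:, :, 1].
```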
Q_Id: 3,999,007 | CreationDate: 2010-10-22T16:41:00.000
Users Score: 3 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 0
Tags: python,file-io
ERRORS: 0 | A_Id: 3,999,039 | API_CHANGE: 0 | AnswerCount: 6 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: See the tell() method on the stream object.
Available Count: 2 | Q_Score: 11 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: I am using the output streams from the io module and writing to files. I want to be able to detect when I have written 1G of data to a file and then start writing to a second file. I can't seem to figure out how to determine how much data I have written to the file. Is there something easy built in to io? Or might I have to count the bytes before each write manually?
Title: How to limit file size when writing one?
CONCEPTUAL: 0 | Score: 0.099668 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 17,308
Q_Id: 3,999,007 | CreationDate: 2010-10-22T16:41:00.000
Users Score: 1 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 0
Tags: python,file-io
ERRORS: 0 | A_Id: 4,766,092 | API_CHANGE: 0 | AnswerCount: 6 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: I noticed an ambiguity in your question. Do you want the file to be (a) over, (b) under, or (c) exactly 1GiB large before switching? It's easy to tell if you've gone over: tell() is sufficient for that kind of thing; just check if tell() > 1024*1024*1024: and you'll know. Checking if you're under 1GiB but will go over 1GiB on your next write is a similar technique; if len(data_to_write) + tell() > 1024*1024*1024: will suffice. The trickiest thing to do is to get the file to exactly 1GiB. You will need to tell() the length of the file, and then partition your data appropriately in order to hit the mark precisely. Regardless of exactly which semantics you want, tell() is always going to be at least as slow as doing the counting yourself, and possibly slower. This doesn't mean that it's the wrong thing to do; if you're writing the file from a thread, then you almost certainly will want to tell() rather than hope that you've correctly preempted other threads writing to the same file. (And do your locks, etc., but that's another question.) By the way, I noticed a definite direction in your last couple of questions. Are you aware of the #twisted and #python IRC channels on Freenode (irc.freenode.net)? You will get timelier, more useful answers. ~ C.
Available Count: 2 | Q_Score: 11 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 3,999,007 above)
Title: How to limit file size when writing one?
CONCEPTUAL: 0 | Score: 0.033321 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 17,308
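The tell()-based check from the answers above can be sketched as a writer that rolls over to a new file once the current one passes a limit. This is the "over the limit" (a) semantics; the part-file naming and the tiny 100-byte limit are illustrative (1 GiB would be 1024**3):

```python
import os
import tempfile

def write_chunks(chunks, dirname, limit=100):
    """Write chunks of bytes, starting a new file whenever the
    current file's position (tell()) has reached `limit`."""
    paths, f = [], None
    for chunk in chunks:
        if f is None or f.tell() >= limit:
            if f is not None:
                f.close()
            path = os.path.join(dirname, "part%d.bin" % len(paths))
            paths.append(path)
            f = open(path, "wb")
        f.write(chunk)
    if f is not None:
        f.close()
    return paths

with tempfile.TemporaryDirectory() as d:
    parts = write_chunks([b"x" * 60] * 3, d)   # 180 bytes, limit 100
    n_files = len(parts)                        # first file gets 120, second 60
```

The check happens before each write, so a file can exceed the limit by up to one chunk, exactly the semantics the longer answer calls case (a).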
Q_Id: 4,002,514 | CreationDate: 2010-10-23T04:42:00.000
Users Score: 1 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 1 | DISCREPANCY: 0
Tags: python,google-app-engine,task,dashboard,task-queue
ERRORS: 0 | A_Id: 4,059,302 | API_CHANGE: 0 | AnswerCount: 3 | REVIEW: 0 | is_accepted: false | Web Development: 1 | GUI and Desktop Applications: 0
Answer: A workaround, since they don't seem to support this yet, would be to model a Task datastore object. Create one on task queue add, update it when running, and delete it when your task fires. This can also be a nice way to get around the payload limits of the task queue API.
Available Count: 1 | Q_Score: 6 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: I know you can view the currently queued and running tasks in the Dashboard or development server console. However, is there any way to get that list programmatically? The docs only describe how to add tasks to the queue, but not how to list and/or cancel them. In Python, please.
Title: Getting the Tasks in a Google App Engine TaskQueue
CONCEPTUAL: 1 | Score: 0.066568 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 2,402
Q_Id: 4,003,840 | CreationDate: 2010-10-23T11:58:00.000
Users Score: 0 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,search,lucene,nlp,lsa
ERRORS: 0 | A_Id: 4,004,384 | API_CHANGE: 0 | AnswerCount: 4 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: First, write a piece of Python code that will return you pineapple, orange, papaya when you input apple, by focusing on the "is a" relation of a semantic network. Then continue with the "has a" relationship, and so on. I think at the end you might get a fairly sufficient piece of code for a school project.
Available Count: 3 | Q_Score: 5 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question: I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for "big cats", I would want highly ranked results to return documents with "large cats" as well. But I may also be interested in having it return "huge animals", albeit at a much lower relevancy score. I'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How? From further research, it seems "latent semantic analysis" is relevant to what I'm looking for but I'm not sure how to implement it. Any advice on how to get this done?
Title: How to build a conceptual search engine?
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 1,997
Q_Id: 4,003,840 | CreationDate: 2010-10-23T11:58:00.000
Users Score: 1 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,search,lucene,nlp,lsa
ERRORS: 0 | A_Id: 4,004,024 | API_CHANGE: 0 | AnswerCount: 4 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: This is an incredibly hard problem and it can't be solved in a way that would always produce adequate results. I'd suggest sticking to some very simple principles instead, so that the results are at least predictable. I think you need two things: a basic morphology engine plus a dictionary of synonyms. Whenever a search query arrives, for each word you: look for a literal match; "normalize/canonicalize" the word using the morphology engine, i.e. make it singular, first form, etc., and look for matches; look for synonyms of the word. Then repeat for all combinations of the input words, i.e. "big cats", "big cat", "huge cats", "huge cat", etc. In fact, you need to store your index data in canonical form, too (singular, first form, etc.) along with the literal form. As for concepts, such as cats also being animals - this is where it gets tricky. It never really worked, because otherwise Google would have been returning conceptual matches already, but it's not doing that.
Available Count: 3 | Q_Score: 5 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 4,003,840 above)
Title: How to build a conceptual search engine?
CONCEPTUAL: 0 | Score: 0.049958 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 1,997
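The "morphology engine plus synonym dictionary" approach in the answer above can be sketched as naive query expansion. The tiny synonym table and the strip-a-plural-'s' rule are toy assumptions standing in for a real morphology engine and a WordNet-sized dictionary:

```python
# Toy synonym dictionary; a real system would draw on WordNet or similar.
SYNONYMS = {"big": {"large", "huge"}, "cat": {"feline"}}

def normalize(word):
    """Crude morphology: lowercase and strip a plural 's'."""
    w = word.lower()
    return w[:-1] if w.endswith("s") and len(w) > 3 else w

def expand_query(query):
    """For each query word, return the set of variants to match in the index."""
    expanded = []
    for word in query.split():
        base = normalize(word)
        expanded.append({base} | SYNONYMS.get(base, set()))
    return expanded

variants = expand_query("big cats")
```

A document then matches if, for every position, it contains at least one variant from that set; the index itself would store terms in the same canonical form, as the answer notes.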
Q_Id: 4,003,840 | CreationDate: 2010-10-23T11:58:00.000
Users Score: 9 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,search,lucene,nlp,lsa
ERRORS: 0 | A_Id: 4,004,314 | API_CHANGE: 0 | AnswerCount: 4 | REVIEW: 0 | is_accepted: true | Web Development: 0 | GUI and Desktop Applications: 0
Answer: "I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How?" Step 1: Stop. Step 2: Get something to work. Step 3: By then, you'll understand more about Python and Lucene and other tools and ways you might integrate them. Don't start by trying to solve integration problems. Software can always be integrated. That's what an Operating System does: it integrates software. Sometimes you want "tighter" integration, but that's never the first problem to solve. The first problem to solve is to get your search or concept thing or whatever it is to work as a dumb-old command-line application, or a pair of applications knit together by passing files around, or knit together with OS pipes or something. Later, you can try to figure out how to make the user experience seamless. But don't start with integration, and don't stall because of integration questions. Set integration aside and get something to work.
Available Count: 3 | Q_Score: 5 | Data Science and Machine Learning: 1 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 4,003,840 above)
Title: How to build a conceptual search engine?
CONCEPTUAL: 0 | Score: 1.2 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 1,997
Q_Id: 4,005,355 | CreationDate: 2010-10-23T18:02:00.000
Users Score: 1 | Other: 0 | Python Basics and Environment: 1 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: c++,python,swig
ERRORS: 0 | A_Id: 4,026,127 | API_CHANGE: 0 | AnswerCount: 1 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 1
Answer: I think it's not possible. If you need to increase the refcount, it's because you don't want the C++ object to be destroyed when it goes out of scope because there is a pointer to that object elsewhere. In that case, look at using the DISOWN typemap to ensure the target language doesn't think it "owns" the C++ object, so it won't get destroyed.
Available Count: 1 | Q_Score: 1 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: I've got class A wrapped with method foo implemented using %extend: class A { ... %extend { void foo() { self->foo_impl(); } } Now I want to increase the ref count of an A inside foo_impl, but I only got A* (as self). Question: how can I write/wrap function foo so that I have access both to A* and the underlying PyObject*? Thank you
Title: Python Swig wrapper: how access underlying PyObject
CONCEPTUAL: 0 | Score: 0.197375 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 619
Q_Id: 4,013,733 | CreationDate: 2010-10-25T10:38:00.000
Users Score: 0 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: python,django,django-models,django-admin
ERRORS: 0 | A_Id: 4,030,636 | API_CHANGE: 0 | AnswerCount: 1 | REVIEW: 0 | is_accepted: false | Web Development: 1 | GUI and Desktop Applications: 0
Answer: Say you have a product model:

    class Product(models.Model):
        name = models.CharField(max_length=20)
        cost = models.DecimalField(max_digits=10, decimal_places=2)

You can subclass the admin's ModelAdmin to show a list display for the products, or you can do a custom model form for the product which you can call in the product's admin:

    from django.contrib import admin

    class PropertyInline(admin.TabularInline):
        model = Property
        extra = 1

    class ProductAdmin(admin.ModelAdmin):
        inlines = (PropertyInline,)

Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: Our product model can have multiple campaigns. Our customers change these campaigns frequently and generally on multiple products. So what we need right now seems to be a multiple select widget on the change-list of the Product model, where our customers can easily change the campaigns. Any idea on this? Maybe another way to achieve this kind of UI interaction? Thanks,
Title: django multiple select in admin change-list
CONCEPTUAL: 1 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 559
Q_Id: 4,015,227 | CreationDate: 2010-10-25T13:53:00.000
Users Score: 0 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 1
Tags: python,streaming,p2p,vlc,instant-messaging
ERRORS: 0 | A_Id: 4,200,613 | API_CHANGE: 0 | AnswerCount: 1 | REVIEW: 0 | is_accepted: false | Web Development: 0 | GUI and Desktop Applications: 0
Answer: I think they were suggesting you run your program on a LAN which has no ports blocked.
Available Count: 1 | Q_Score: 0 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: I have asked these questions before with no proper answer. I hope I'll get some response here. I'm developing an instant messenger in Python and I'd like to handle video/audio streaming with VLC. The basic idea right now is that in each IM client I'm running one VLC instance that acts as a server that streams to all the users I want, and another VLC instance that's a client and receives and displays all the streams that other users are sending to me. As you can see, it's kind of a P2P connection, and I am having lots of problems. My first problem was that VLC can handle only one stream per port, but I solved this using VLM, the VideoLAN Manager, which allows multiple streams with one instance and on one port. My second problem was that this kind of P2P approach has several drawbacks: if someone is behind NAT or a router, you have to do manual configuration to forward the packets from the router to your PC, and you can only forward to one PC, so you would be able to use the program on only one workstation. Also, the streams were transported over the HTTP protocol, which uses TCP and is pretty slow. When I tried to do the same with RTSP, I wasn't able to get the stream outside my private LAN. So, this P2P approach is very unlikely to be implemented successfully by an amateur like me, as it has all the typical NAT traversal problems, things that I don't want to mess with, as this is not a commercial application, just a school project I must finish in order to graduate as a technician. Finally, I've been recommended to use a server at a well-known IP, which would solve the problem: only one router configuration, and both ends of the conversation can be clients. I have no idea how to implement this idea; any help is useful. Thanks in advance. Sorry for any errors; I am not a programming/networking expert, nor am I an English-speaking person.
Title: Problems with VLC and instant messaging
CONCEPTUAL: 0 | Score: 0 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 1 | ViewCount: 224
Q_Id: 4,027,508 | CreationDate: 2010-10-26T20:02:00.000
Users Score: 1 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: javascript,python,ajax,django
ERRORS: 0 | A_Id: 4,027,614 | API_CHANGE: 0 | AnswerCount: 3 | REVIEW: 1 | is_accepted: false | Web Development: 1 | GUI and Desktop Applications: 0
Answer: The reason to update all of the comments is to take into account other comments that other people may have submitted in the meantime. To keep the site truly dynamic, you can do either that or, when the page is loaded, load up a variable with the newest submitted comment ID and set a timer that goes back to check whether there are more comments since then. If there are, return them as a JSON object and append them onto the current page a DIV at a time. That would be my preferred way to handle it, because you can then target actions based on each DIV's id or rel, which directly relates back to the comment's ID in the database...
Available Count: 3 | Q_Score: 2 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: This is a situation I've run into several times, not sure what the best approach is. Let's say I have some kind of dynamic content that users can add to via Ajax. For example, take Stack Overflow's comments - people can add comments directly on to the page using Ajax. The question is, how do I implement the addition itself of the new content. For example, it looks like the ajax call which Stack Overflow uses to add a comment simply returns html which replaces all of the comments (not just the new one). This way saves redundant logic in the Javascript code which has to "know" what the html looks like. Is this the best approach? If so, how best to implement it in Django so that there is no redundancy (i.e., so that the view doesn't have to know what the html looks like?) Thanks! EDIT: In my specific case, there is no danger of other comments being added in the meantime - this is purely a question of the best implementation.
Title: Loading New Content with Ajax
CONCEPTUAL: 0 | Score: 0.066568 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 261
Q_Id: 4,027,508 | CreationDate: 2010-10-26T20:02:00.000
Users Score: 1 | Other: 0 | Python Basics and Environment: 0 | System Administration and DevOps: 0 | DISCREPANCY: 0
Tags: javascript,python,ajax,django
ERRORS: 0 | A_Id: 4,027,641 | API_CHANGE: 0 | AnswerCount: 3 | REVIEW: 1 | is_accepted: false | Web Development: 1 | GUI and Desktop Applications: 0
Answer: I am not proficient with either Django or Python, but I suppose the basic logic is similar for all server-side languages. When weighing the pros and cons of either approach, things depend on what you want to optimize. If bandwidth is important, then obviously using pure JSON in communication reduces latency compared to transmitting ready-made HTML. However, duplicating the server-side view functionality in Javascript is a tedious and error-prone process; it simply takes a lot of time. I personally feel that in most cases (for small and medium traffic sites) it's perfectly fine to put the HTML fragment together on the server side and simply replace the contents of a container (e.g. the contents of a div) inside an AJAX callback function. Of course, replacing the whole page would be silly; hence your Django application needs to support outputting specific regions of the document.
Available Count: 3 | Q_Score: 2 | Data Science and Machine Learning: 0 | DOCUMENTATION: 0
Question: (identical to the question for Q_Id 4,027,508 above)
Title: Loading New Content with Ajax
CONCEPTUAL: 0 | Score: 0.066568 | API_USAGE: 1 | Database and SQL: 0 | Networking and APIs: 0 | ViewCount: 261
4,027,508
2010-10-26T20:02:00.000
4
0
0
0
0
javascript,python,ajax,django
0
4,029,103
0
3
1
true
1
0
If the content is simple, I would get JSON and build the HTML in jQuery. If it's complex, I would create a template and call render() on it on the server and return the HTML (which jQuery could either append to other content or replace existing content).
3
2
0
0
This is a situation I've run into several times, not sure what the best approach is. Let's say I have some kind of dynamic content that users can add to via Ajax. For example, take Stack Overflow's comments - people can add comments directly on to the page using Ajax. The question is, how do I implement the addition itself of the new content. For example, it looks like the ajax call which Stack Overflow uses to add a comment, simply returns html which replaces all of the comments (not just the new one). This way saves redundant logic in the Javascript code which has to "know" what the html looks like. Is this the best approach? If so, how best to implement it in Django so that there is no redundancy (i.e., so that the view doesn't have to know what the html looks like?) Thanks! EDIT: In my specific case, there is no danger of other comments being added in the meantime - this is purely a question of the best implementation.
Loading New Content with Ajax
0
1.2
1
0
0
261
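The approach in the answer above (render the comment fragment on the server and return the HTML) is usually done in Django with render_to_string() on the same template the full page includes. Below is a framework-free sketch of the shared-fragment idea; the template markup and function names are illustrative, not Django API:

```python
from string import Template

# One "template" for the comment list, used by both the full-page render
# and the Ajax response, so the HTML lives in exactly one place.
COMMENTS_TEMPLATE = Template("<ul>$items</ul>")
ITEM_TEMPLATE = Template("<li>$text</li>")

def render_comments(comments):
    """Render the whole comment list to an HTML fragment."""
    items = "".join(ITEM_TEMPLATE.substitute(text=c) for c in comments)
    return COMMENTS_TEMPLATE.substitute(items=items)

def ajax_add_comment(comments, new_comment):
    """Append the comment, then return the re-rendered fragment,
    mirroring the 'return all comments' approach described above."""
    comments.append(new_comment)
    return render_comments(comments)
```

Because both code paths call render_comments(), the JavaScript never has to know what the HTML looks like; it only swaps the returned fragment into the page.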
4,033,669
2010-10-27T13:19:00.000
2
0
1
0
0
python
0
4,033,706
0
1
0
true
0
0
The file is not run in a separate thread or process; it runs synchronously in the caller.
1
2
0
0
does the file which is sent as an argument to execfile runs as an independent process / thread or is the code imported and then executed ? . Also i wanted to know how efficient is it compared to running threads / process .
how does execfile() work in python?
0
1.2
1
0
0
793
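The synchronous behaviour described above is easy to demonstrate. Note that execfile() is Python 2 only; the sketch below uses the closest Python 3 spelling, exec(open(path).read(), namespace), and checks that the executed file ran on the caller's own thread:

```python
import os
import tempfile
import threading

# execfile() (Python 2) simply compiles and runs the named file's code in
# the caller's current thread; no new process or thread is created.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("marker = threading.current_thread().name\n")

namespace = {"threading": threading}
with open(path) as f:
    exec(f.read(), namespace)   # runs synchronously, right here
os.unlink(path)

# The executed file observed the caller's own thread, confirming that
# no separate thread or process was started.
assert namespace["marker"] == threading.current_thread().name
```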
4,034,169
2010-10-27T14:09:00.000
0
0
0
0
1
python,webkit
0
17,223,197
0
10
0
false
1
0
If you are thinking about real desktop applications that are multi-threaded and/or use multiple system components, forget about JavaScript. That requires a very good SDK (like PyQt4), not a basic wrapper like Appcelerator Titanium. Note that developing such applications takes a lot of time. The second option is to drop the desktop binding and make a web application with an advanced frontend UI built with one of the JavaScript frameworks and friends (Ember, Angular... up to widget libraries like dhtmlx). Those can't use system components (like accessing hardware), but they can provide nice services.
2
22
0
0
I have been experimenting with Appcelerator Titanium yesterday and I think it's cool when it comes to JavaScript. Python features in Appcelerator Titanium are so limited (can't use some modules, for example). My question is: how can I use HTML & JavaScript as a GUI tool for a real Python application? I am running Windows 7 and I was thinking of using WebKit for that purpose, but couldn't figure out how to work with it in Python. I am planning to make a standalone executable using py2exe, as I don't know if the users always have Python and the appropriate modules installed.
How can I use HTML + Javascript to build a python GUI?
0
0
1
0
0
24,264
4,034,169
2010-10-27T14:09:00.000
0
0
0
0
1
python,webkit
0
4,064,133
0
10
0
false
1
0
I assume you are mobilizing a web application for cross-platform access. If so, have you considered abstracting the cross-platform access at the web-app presentation layer? Appcelerator/WebKit does not provide a true native look-and-feel on mobile devices, but this is where a new technology can help.
2
22
0
0
I have been experimenting with Appcelerator Titanium yesterday and I think it's cool when it comes to JavaScript. Python features in Appcelerator Titanium are so limited (can't use some modules, for example). My question is: how can I use HTML & JavaScript as a GUI tool for a real Python application? I am running Windows 7 and I was thinking of using WebKit for that purpose, but couldn't figure out how to work with it in Python. I am planning to make a standalone executable using py2exe, as I don't know if the users always have Python and the appropriate modules installed.
How can I use HTML + Javascript to build a python GUI?
0
0
1
0
0
24,264
4,042,407
2010-10-28T11:27:00.000
3
0
1
0
0
python,django,constants
0
4,042,429
0
7
0
false
1
0
Consider putting it into the settings.py of your application. Of course, in order to use it in a template you will need to make it available to the template like any other usual variable.
1
23
0
0
I want to have some constants in a Django Projects. For example, let's say a constant called MIN_TIME_TEST. I would like to be able to access this constant in two places: from within my Python code, and from within any Templates. What's the best way to go about doing this? EDIT: To clarify, I know about Template Context Processors and about just putting things in settings.py or some other file and just importing. My question is, how do I combine the two approaches without violating the "Don't Repeat Yourself" rule? Based on the answers so far, here's my approach: I'd like to create a file called global_constants.py, which will have a list of constants (things like MIN_TIME_TEST = 5). I can import this file into any module to get the constants. But now, I want to create the context processor which returns all of these constants. How can I go about doing this automatically, without having to list them again in a dictionary, like in John Mee's answer?
Defining Constants in Django
0
0.085505
1
0
0
31,657
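One way to get the DRY behaviour the question asks for (constants defined once, exposed to templates automatically) is a context processor that reflects over the constants module. In the sketch below a SimpleNamespace stands in for the proposed global_constants.py module, and the constant names are illustrative:

```python
import types

# Stand-in for a global_constants.py module; in a real project you would
# `import global_constants` and pass that module object in instead.
constants = types.SimpleNamespace(MIN_TIME_TEST=5, MAX_RETRIES=3)

def constants_context(source):
    """Build a template context dict from every ALL_CAPS attribute of
    `source`, so the constants never have to be listed twice."""
    return {name: getattr(source, name)
            for name in dir(source)
            if name.isupper()}

def make_context_processor(source):
    """Django context processors take the request and return a dict."""
    def processor(request):
        return constants_context(source)
    return processor
```

Registering make_context_processor(global_constants) in TEMPLATE_CONTEXT_PROCESSORS would then make every constant available in templates without repeating the list.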
4,046,853
2010-10-28T20:20:00.000
1
0
1
0
0
python,automated-tests,static-analysis,enforcement
0
4,047,303
0
3
0
true
0
0
You can't enforce that all strings are Unicode; even with from __future__ import unicode_literals in a module, byte strings can be written as b'...', as they can in Python 3. There was an option that could be used to get the same effect as unicode_literals globally: the command-line option -U. However it was abandoned early in the 2.x series because it basically broke every script. What is your purpose for this? It is not desirable to abolish byte strings. They are not “bad” and Unicode strings are not universally “better”; they are two separate animals and you will need both of them. Byte strings will certainly be needed to talk to binary files and network services. If you want to be prepared to transition to Python 3, the best tack is to write b'...' for all the strings you really mean to be bytes, and u'...' for the strings that are inherently Unicode. The default string '...' format can be used for everything else: the places where you don't care, and/or where you're happy for Python 3 to change the default string type.
1
5
0
0
How can I automate a test to enforce that a body of Python 2.x code contains no string instances (only unicode instances)? Eg. Can I do it from within the code? Is there a static analysis tool that has this feature? Edit: I wanted this for an application in Python 2.5, but it turns out this is not really possible because: 2.5 doesn't support unicode_literals kwargs dictionary keys can't be unicode objects, only strings So I'm accepting the answer that says it's not possible, even though it's for different reasons :)
Python 2.x: how to automate enforcing unicode instead of string?
1
1.2
1
0
0
213
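As a rough approximation of the requested check (not a complete analysis: raw strings and docstrings would need extra care), string-literal prefixes survive in the tokenize module even though ast discards them, so unprefixed literals in Python 2 source can be flagged like this:

```python
import io
import tokenize

def unprefixed_strings(source):
    """Return (row, literal) pairs for every string literal with no
    u/b prefix, as a rough static check for a Python 2 code base.
    tokenize keeps the prefix text, which ast does not."""
    hits = []
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    for tok_type, text, (row, _col), _end, _line in tokens:
        # An unprefixed literal starts directly with a quote character.
        if tok_type == tokenize.STRING and text[0] in "'\"":
            hits.append((row, text))
    return hits
```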
4,047,899
2010-10-28T23:16:00.000
2
0
0
0
1
python,mysql,file-upload,apache2,fastcgi
0
4,047,955
0
1
1
true
1
0
If the web server/gateway layer were truncating incoming form submissions I'd expect an error from FieldStorage, since the truncation would not just interrupt the file upload but also the whole multipart/form-data structure. Even if cgi.py tolerated this, it would be very unlikely to have truncated the multipart at just the right place to leave exactly 2**16-1 bytes of file upload. So I would suspect MySQL. LONGBLOB should be fine up to 2**32-1, but 65535 would be the maximum length of a normal BLOB. Are you sure the types are what you think? Check with SHOW CREATE TABLE x. Which database layer are you using to get the data in?
1
3
0
0
I'm having a problem with file uploading. I'm using FastCGI on Apache2 (unix) to run a WSGI-compliant application. File uploads, in the form of images, are being saved in a MySQL database. However, larger images are being truncated at 65535 bytes. As far as I can tell, nothing should be limiting the size of the files and I'm not sure which of the pieces in my solution is causing the problem. Is it FastCGI; can it limit file upload sizes? Is it Python? The cgi.FieldStorage object gives me a file handle to the uploaded file which I then read: file.read(). Does this limit file sizes in any way? Is it MySQL? The type of the column for saving the image data is a longblob. I figured this could store a couple of GB worth of data, so a few MB shouldn't be a problem, right? Is it the flup WSGIServer? I can't find any information regarding this. My file system can definitely handle huge files, so that's not a problem. Any ideas? UPDATE: It is MySQL. I got Python to output the number of bytes uploaded and it's greater than 65535. So I looked into max_allowed_packet for mysqld and set it to 128M. Overkill, but I want to be sure for the moment. My only problem now is getting Python's MySQLdb to allow the transfer of more than 65535 bytes. Does anyone know how to do this? Might post as a separate question.
Does FastCGI or Apache2 limit upload sizes?
1
1.2
1
1
0
1,279
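The answer's reasoning (65535 is exactly the BLOB ceiling, while LONGBLOB goes up to 2**32-1) can be captured in a tiny diagnostic helper. The limits below are the documented MySQL column capacities; the helper itself is just an illustration of the "truncated at a suspicious boundary" check:

```python
# MySQL BLOB-family capacities in bytes, per the MySQL data-type docs.
BLOB_MAX = 2**16 - 1        # 65535: the size the uploads were cut to
MEDIUMBLOB_MAX = 2**24 - 1
LONGBLOB_MAX = 2**32 - 1

def diagnose_truncation(observed_size):
    """If uploads are consistently capped at a known column boundary,
    name the MySQL type that boundary belongs to."""
    limits = {
        BLOB_MAX: "BLOB",
        MEDIUMBLOB_MAX: "MEDIUMBLOB",
        LONGBLOB_MAX: "LONGBLOB",
    }
    return limits.get(observed_size, "no known BLOB limit")
```

As the question's update shows, max_allowed_packet is the other usual suspect, since it caps a single client/server packet regardless of the column type.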
4,059,691
2010-10-30T16:53:00.000
1
0
1
0
0
python,validation,conditional-statements
0
4,059,701
0
3
1
false
0
0
Use Python's language services to parse and compile the string, then execute the resulting AST.
2
0
0
0
what I am struggling with is testing predefined conditions which takes user provided parameters like in example below: cond = "if ( 1 is yes and 2 is no ) or ( 1 is yes and 2 is no )" cond2 = "if (3 is no or 1 is no )" vars = [] lst = cond.split() lst += cond2.split() for l in lst: if l.isdigit(): if l not in vars: vars.append(l) # ... sort # ... read user answers => x = no, y = no, y = yes # ... replace numbers with input (yes or no) # ... finally I have cond = "if ( no is yes and no is no ) or ( no is yes and no is no )" cond2 = "if (yes is no or no is no )" First of all, is this the right approach? Secondly, how do I validate above conditions if True or False ? Thank You in advance.
python: how to validate userdefined condition?
0
0.066568
1
0
0
618
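The "parse and compile the string, then execute the resulting AST" suggestion can be made safe by whitelisting AST node types before evaluating, rather than substituting answers into the string and running exec. A sketch follows; the node whitelist is a minimal illustrative set, and it compares with == instead of the question's is:

```python
import ast

# Nodes allowed in a user-supplied condition: boolean logic,
# equality comparisons, and plain constants/names only.
_ALLOWED = (ast.Expression, ast.BoolOp, ast.And, ast.Or, ast.UnaryOp,
            ast.Not, ast.Compare, ast.Eq, ast.NotEq, ast.Constant,
            ast.Name, ast.Load)

def eval_condition(expr, variables):
    """Parse `expr`, refuse any node outside the whitelist, then
    evaluate it against `variables` (e.g. the user's yes/no answers)."""
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, _ALLOWED):
            raise ValueError("disallowed syntax: %s" % type(node).__name__)
    code = compile(tree, "<cond>", "eval")
    return bool(eval(code, {"__builtins__": {}}, dict(variables)))
```

Anything outside the whitelist, such as a function call or attribute access, is rejected before evaluation ever happens.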
4,059,691
2010-10-30T16:53:00.000
0
0
1
0
0
python,validation,conditional-statements
0
4,060,569
0
3
1
false
0
0
So, after some reading based on Ignacio's tip, I think I have it after some string modifications. However, I'm still not quite sure if this is the right approach. My condition variable is defined as below: cond = """if ( 'no' == 'yes' and 'no' == 'no' ): result.append(1) else: result.append(0) """ Additionally I create a variable to store the condition result so I can evaluate it later: result = [] I run exec on my string: exec(cond) Finally, I can evaluate the result: if result[0] == 1: print "Condition was met" else: print "Condition wasn't met" Any thoughts or comments highly appreciated.
2
0
0
0
what I am struggling with is testing predefined conditions which takes user provided parameters like in example below: cond = "if ( 1 is yes and 2 is no ) or ( 1 is yes and 2 is no )" cond2 = "if (3 is no or 1 is no )" vars = [] lst = cond.split() lst += cond2.split() for l in lst: if l.isdigit(): if l not in vars: vars.append(l) # ... sort # ... read user answers => x = no, y = no, y = yes # ... replace numbers with input (yes or no) # ... finally I have cond = "if ( no is yes and no is no ) or ( no is yes and no is no )" cond2 = "if (yes is no or no is no )" First of all, is this the right approach? Secondly, how do I validate above conditions if True or False ? Thank You in advance.
python: how to validate userdefined condition?
0
0
1
0
0
618
4,068,906
2010-11-01T12:37:00.000
0
0
0
0
1
python,dll,windows-vista,qt4,pyqt4
0
4,453,952
0
1
0
false
0
1
Yes, the Qt plugin infrastructure is fairly simple and robust. It attempts to load every file in the sqldrivers directory. If loading is successful, each DLL then runs a function that registers all the features such a plugin supports. Then your application initializes. If all the features it needs are available, it works properly; otherwise, some form of error or exception handling occurs.
1
0
0
0
I'm experimenting with PyQT, and I was trying to figure out how to get it to work with Firebird. I built the Firebird driver, but couldn't get it to work, so I was thinking maybe I wasn't putting it in the right place. So I tried experimenting with the SQLite driver, since PyQT came with it already installed, with working examples. I figured if I renamed all the qsqlite4.dll driver files I could find, eventually the example program would stop working when I renamed the one it was actually using. That didn't work. So I tried renaming the "site-packages\pyqt4\plugins\sqldrivers" folder to "site-packages\pyqt4\plugins\sqldrivers-old", and that did it. The example program stopped working. So I changed the folder name back, and tried renaming all the files in the folder. But the example program started working again. Then I moved the qsqlite4.dll file to a subdirectory, and it stopped working. So I moved it back, and renamed it to blah.blah.blah. And it worked again. Then I opened up blah.blah.blah with notepad++, and deleted some stuff at the top of the file, and that kept the example program from working. So I'm confused. As far as I can tell, either Python, PyQT, QT, or Windows Vista is finding the dll, no matter what I rename it to, as long as it's in the right folder. I even tried renaming it to the name of one of the other dll's, thinking maybe that would confuse it. But it only confused me. Is this normal? edit: I'm thinking this has something to do with plugins
Either Python, PyQT, QT, or Windows Vista is finding my dll, no matter what I rename it to. Is this normal?
0
0
1
0
0
284
4,073,928
2010-11-01T23:21:00.000
0
0
0
0
0
python,scroll,2d,pygame
0
14,293,575
0
5
1
false
0
1
You can keep two variables, level_l and level_d, which track where you are in the level. Then check which sprites are in the visible area (between level_d and level_d + height, and between level_l and level_l + width), and draw them on the screen.
3
2
0
0
I'm currently making a 2D side-scrolling run'n'jump platform game in PyGame. Most of the stuff is working OK and very well in fact - I am exploiting the fast pyGame Sprite objects & groups. What I'm interested to know is how people usually deal with Rects for scrolling games. I obviously have a level that is much bigger than visible area, and the player, enemies, bullets etc each have their own (x,y) coordinates which describe where they are in the level. But now, since we use the "spriteGroup.draw(surface)" call, it will not display them in the right spot unless each objects Rects have been adjusted so that the right part displays on the screen. In other words, everytime a player/enemy/bullet/whatever else is updated, the Camera information needs to be passed, so that their Rect can be updated. Is this the best method to use? It works but I don't really like passing the camera information to every single object at every update to offset the Rects. Obviously the ideal method (I think) is to use Rects with "real" coordinates, blit everything to a buffer as big as the level, and then just blit the visible part to the screen, but in practice that slows the game down A LOT. Any comments/insight would be appreciated. Thanks
Pygame: Updating Rects with scrolling levels
0
0
1
0
0
1,235
4,073,928
2010-11-01T23:21:00.000
0
0
0
0
0
python,scroll,2d,pygame
0
14,189,213
0
5
1
false
0
1
One method I found is to keep track of a scrollx and a scrolly. Then just add scrollx and scrolly to the coordinates when you move the rectangles.
3
2
0
0
I'm currently making a 2D side-scrolling run'n'jump platform game in PyGame. Most of the stuff is working OK and very well in fact - I am exploiting the fast pyGame Sprite objects & groups. What I'm interested to know is how people usually deal with Rects for scrolling games. I obviously have a level that is much bigger than visible area, and the player, enemies, bullets etc each have their own (x,y) coordinates which describe where they are in the level. But now, since we use the "spriteGroup.draw(surface)" call, it will not display them in the right spot unless each objects Rects have been adjusted so that the right part displays on the screen. In other words, everytime a player/enemy/bullet/whatever else is updated, the Camera information needs to be passed, so that their Rect can be updated. Is this the best method to use? It works but I don't really like passing the camera information to every single object at every update to offset the Rects. Obviously the ideal method (I think) is to use Rects with "real" coordinates, blit everything to a buffer as big as the level, and then just blit the visible part to the screen, but in practice that slows the game down A LOT. Any comments/insight would be appreciated. Thanks
Pygame: Updating Rects with scrolling levels
0
0
1
0
0
1,235
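The scrollx/scrolly bookkeeping above boils down to a world-to-screen offset plus a visibility test. A pygame-free sketch using (x, y, w, h) tuples (in pygame itself you would typically use rect.move(-scrollx, -scrolly) and Rect.colliderect instead):

```python
def world_to_screen(rect, camera_x, camera_y):
    """Map a rect in world coordinates (x, y, w, h) to screen
    coordinates by subtracting the camera's top-left corner."""
    x, y, w, h = rect
    return (x - camera_x, y - camera_y, w, h)

def visible(rect, camera_x, camera_y, screen_w, screen_h):
    """True if any part of the rect overlaps the camera viewport."""
    x, y, w, h = rect
    return (x < camera_x + screen_w and x + w > camera_x and
            y < camera_y + screen_h and y + h > camera_y)
```

Keeping sprites in world coordinates and applying this offset only at draw time avoids mutating every Rect on each update.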
4,073,928
2010-11-01T23:21:00.000
1
0
0
0
0
python,scroll,2d,pygame
0
4,077,600
0
5
1
false
0
1
You could extend the Sprite.Group so it receives the camera information. Then do one of these options: A. Override the update method so it updates the on-screen coordinates of every sprite. B. Override the draw method so it updates the on-screen coordinates of every sprite and then calls its parent's draw method. I think A is easier and cleaner.
3
2
0
0
I'm currently making a 2D side-scrolling run'n'jump platform game in PyGame. Most of the stuff is working OK and very well in fact - I am exploiting the fast pyGame Sprite objects & groups. What I'm interested to know is how people usually deal with Rects for scrolling games. I obviously have a level that is much bigger than visible area, and the player, enemies, bullets etc each have their own (x,y) coordinates which describe where they are in the level. But now, since we use the "spriteGroup.draw(surface)" call, it will not display them in the right spot unless each objects Rects have been adjusted so that the right part displays on the screen. In other words, everytime a player/enemy/bullet/whatever else is updated, the Camera information needs to be passed, so that their Rect can be updated. Is this the best method to use? It works but I don't really like passing the camera information to every single object at every update to offset the Rects. Obviously the ideal method (I think) is to use Rects with "real" coordinates, blit everything to a buffer as big as the level, and then just blit the visible part to the screen, but in practice that slows the game down A LOT. Any comments/insight would be appreciated. Thanks
Pygame: Updating Rects with scrolling levels
0
0.039979
1
0
0
1,235
4,076,114
2010-11-02T08:49:00.000
0
0
1
0
0
c#,python,remoting
0
4,077,076
0
3
0
false
0
0
I would use XML-RPC for communication. It is very simple to implement (a couple lines of code in Python, not sure about .NET but it shouldn't be difficult there either) and should be enough in your scenario.
2
1
0
0
I've got a problem. I have a tool (a threads manager) that receives some data and does some computation. I need to write a Python client to send data to that tool... I think I should use .NET Remoting, but how do I do that? Please share some links where I can read up, or post some code... I can't google info about that... P.S. Python 2.7, NOT IronPython
How implement .NET server and Python client?
0
0
1
0
0
987
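Sketching the XML-RPC suggestion from the answer above: here both ends are Python 3 purely to show the wire API (on Python 2.7 the modules are SimpleXMLRPCServer and xmlrpclib, and the .NET side would expose the same method through an XML-RPC library of its choice):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def compute(data):
    """Stand-in for the threads manager's computation."""
    return sum(data)

# Server side: bind to an ephemeral port and serve in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(compute, "compute")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: a plain HTTP call, trivially reproducible from any stack.
client = ServerProxy("http://127.0.0.1:%d" % port)
result = client.compute([1, 2, 3])
server.shutdown()
```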
4,076,114
2010-11-02T08:49:00.000
1
0
1
0
0
c#,python,remoting
0
4,076,158
0
3
0
true
0
0
.NET Remoting is designed for when you have .NET at both ends, so it will be very hard to use from the Python end. (Even the XML encoding of .NET Remoting is not easy to use from other platforms.) I am assuming that Python has support for SOAP; if so, I would look at using WCF at the .NET end running over the “basic profile”. JSON is another option: there are lots of open source projects that add JSON support to .NET, and Python also has JSON support.
2
1
0
0
I've got a problem. I have a tool (a threads manager) that receives some data and does some computation. I need to write a Python client to send data to that tool... I think I should use .NET Remoting, but how do I do that? Please share some links where I can read up, or post some code... I can't google info about that... P.S. Python 2.7, NOT IronPython
How implement .NET server and Python client?
0
1.2
1
0
0
987
4,077,338
2010-11-02T11:52:00.000
0
1
0
1
0
python,linux,unit-testing,regression
0
4,078,021
0
2
0
false
0
0
You could use a helper application that is setuid root to run the chroot; that would avoid the need to run the tests as root. Of course, that would probably still open up a local root exploit, so should only be done with appropriate precautions (e.g. in a VM image). At any rate, any solution with chroot is inherently platform-dependent, so it's rather awkward. I actually like the idea of Dave Webb (override open) better, I must admit...
1
3
0
0
I need to extend Python code which has plenty of hard-coded paths. In order not to mess everything up, I want to create unit tests for the code before my modifications: they will serve as non-regression tests against my new code (which will not have hard-coded paths). But because of the hard-coded system paths, I have to run my tests inside a chroot tree (I don't want to pollute my system dirs). My problem is that I want to set up the chroot only for the tests, and this can be done with os.chroot only with root privileges (and I don't want to run the test scripts as root). In fact, I just need a fake directory tree so that code that calls open('/etc/resolv.conf') retrieves a fake resolv.conf and not my system one. I obviously don't want to replace the hard-coded paths in the code myself, because then it would not be a real regression test. Do you have any idea how to achieve this? Thanks. Note that all the paths accessed are readable with a user account.
regression test dealing with hard coded path
0
0
1
0
0
291
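The "override open" idea mentioned in the answer can be sketched without chroot or root privileges: patch builtins.open so absolute paths are redirected into a fake tree (Python 3 names; on Python 2 you would patch __builtin__.open instead):

```python
import builtins
import os
import tempfile
from unittest import mock

_real_open = builtins.open  # captured before patching

def make_sandboxed_open(fake_root):
    """Return an open() replacement that maps absolute paths like
    /etc/resolv.conf into fake_root/etc/resolv.conf."""
    def sandboxed_open(path, *args, **kwargs):
        if isinstance(path, str) and os.path.isabs(path):
            path = os.path.join(fake_root, path.lstrip(os.sep))
        return _real_open(path, *args, **kwargs)
    return sandboxed_open

# Build the fake tree the code under test will unknowingly read from.
fake_root = tempfile.mkdtemp()
os.makedirs(os.path.join(fake_root, "etc"))
with _real_open(os.path.join(fake_root, "etc", "resolv.conf"), "w") as f:
    f.write("nameserver 10.0.0.1\n")

with mock.patch("builtins.open", make_sandboxed_open(fake_root)):
    # Hard-coded path in the legacy code, untouched, yet sandboxed.
    content = open("/etc/resolv.conf").read()
```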
4,082,725
2010-11-02T22:37:00.000
1
0
1
0
0
python,reportlab,pypdf
0
4,082,827
0
2
0
false
0
0
Basically you'll have to remove the corresponding text drawing commands in the PDF's page content stream. It's much easier to generate the pages twice, once with the confidential information, once without them. It might be possible (I don't know ReportLab enough) to specially craft the PDF in a way that the confidential information is easier accessible (e.g. as separate XObjects) for deletion. Still you'd have to do pretty low-level operations on the PDF -- which I would advise against.
2
1
0
0
In Python, I have files generated by ReportLab. Now, I need to extract some pages from that PDF and hide confidential information. I can create a PDF file with blacked-out spots and use pyPdf's mergePage, but people can still select and copy-paste the information under the blacked-out spots. Is there a way to make those spots completely confidential? For example, I need to hide addresses on the pages; how would I do it? Thanks,
Hide information in a PDF file in Python
0
0.099668
1
0
0
1,789
4,082,725
2010-11-02T22:37:00.000
0
0
1
0
0
python,reportlab,pypdf
0
4,083,017
0
2
0
false
0
0
(Sorry, I was not able to log on when I posted the question...) Unfortunately, the document cannot be regenerated at will (it is context sensitive), and those PDF files (about 35) are 3000+ pages each. I was thinking about converting with pdf2ps and back with ps2pdf, but there is a lot of quality loss. pdf2ps -dLanguageLevel=3 input.pdf - | ps2pdf14 - output.pdf And if I use "pdftops" instead, the text is still selectable. If there is a way to make it non-selectable like with "pdf2ps" but with better quality, that will do too.
2
1
0
0
In Python, I have files generated by ReportLab. Now, I need to extract some pages from that PDF and hide confidential information. I can create a PDF file with blacked-out spots and use pyPdf's mergePage, but people can still select and copy-paste the information under the blacked-out spots. Is there a way to make those spots completely confidential? For example, I need to hide addresses on the pages; how would I do it? Thanks,
Hide information in a PDF file in Python
0
0
1
0
0
1,789
4,083,440
2010-11-03T01:07:00.000
0
0
0
0
1
python,web-applications,flask
0
4,083,522
0
2
1
false
1
0
IMHO your time would be better invested learning something like Django, because much of what you could improve in a micro framework is already builtin on a bigger framework.
1
0
0
0
I'd like to look at some good web-app code written in python, just so I can learn some of the patterns / see how I can improve my code. I've already googled around a bit, used google code search and run a search on github too - but haven't come across a well built, comprehensive app. Perhaps a book could work as well. Basically, I'm just trying to find a way to learn the basic programming patterns for web-applications. Any suggestions?
Great flask / other python micro framework code I could learn from
0
0
1
0
0
1,700
4,086,675
2010-11-03T11:44:00.000
3
0
0
1
0
python,portability,pygame
0
4,086,715
0
4
0
false
0
0
The Python scripts are reasonably portable, as long as the interpreter and relevant libraries are installed. Generated .exe and .app files are not.
3
1
0
0
I have a lovely Macbook now, and I'm enjoying coding on the move. I'm also enjoying coding in Python. However, I'd like to distribute the end result to friends using Windows, as an executable. I know that Py2Exe does this, but I don't know how portable Python is across operating systems. Can anyone offer any advice? I'm using PyGame too. Many thanks
Portable Python (Mac -> Windows)
0
0.148885
1
0
0
1,743
4,086,675
2010-11-03T11:44:00.000
0
0
0
1
0
python,portability,pygame
0
4,087,087
0
4
0
false
0
0
If you are planning to include Linux in your portability criteria, it's worth remembering that many distributions still package 2.6 (or even 2.5), and will probably be a version behind in the 3.x series as well (I'm assuming you're using 2.x, given the PyGame requirement). Versions of PyGame seem to vary quite heavily between distros as well.
3
1
0
0
I have a lovely Macbook now, and I'm enjoying coding on the move. I'm also enjoying coding in Python. However, I'd like to distribute the end result to friends using Windows, as an executable. I know that Py2Exe does this, but I don't know how portable Python is across operating systems. Can anyone offer any advice? I'm using PyGame too. Many thanks
Portable Python (Mac -> Windows)
0
0
1
0
0
1,743
4,086,675
2010-11-03T11:44:00.000
0
0
0
1
0
python,portability,pygame
0
4,153,679
0
4
0
true
0
0
Personally, I experienced huge difficulties with all the exe builders: py2exe, cx_freeze, etc. Bugs and errors all the time, constantly showing an issue with the atexit module. I find just including the Python distribution way more convenient. There is one more advantage besides ease of use. Each time you build an EXE for a Python app, what you essentially do is include the core of the Python installation, but only with the modules your app is using. Even in that case, your app may grow from the mere few KBs that a Python module is to more than 15 MB, because of the inclusion of the Python installation. Of course installing the whole of Python will take more space, but each time you send your Python apps they will be only a few KBs long. Plus you won't have to go through the hassle of rebuilding the exe each time you change even a comma in your Python app. Or I think you do; I don't know if just replacing the py module lets you avoid this. In any case, installing Python and PyGame is as easy as installing any other application on Windows. On Linux, via synaptic, it is also extremely easy. Mac OS is a bit tricky, though. Mac OS already comes with Python preinstalled; Snow Leopard has Python 2.6.1 installed. However, if your app uses a later Python than that and you include a Python install with your app, you will have to instruct the user to set, via "GET INFO -> open with", the Python Launcher app (which is responsible for launching Python apps) to use your version of Python and not the onboard default 2.6.1 version. It's not difficult and it only takes a few seconds; even a clueless user can do this. Python is extremely portable: Python PyGame apps can run unchanged not only on the three major platforms (Windows, Mac OS, Linux), they can even run on mobile and portable devices as well. If you need to build an app that runs across platforms, Python is dead easy and highly recommended.
3
1
0
0
I have a lovely Macbook now, and I'm enjoying coding on the move. I'm also enjoying coding in Python. However, I'd like to distribute the end result to friends using Windows, as an executable. I know that Py2Exe does this, but I don't know how portable Python is across operating systems. Can anyone offer any advice? I'm using PyGame too. Many thanks
Portable Python (Mac -> Windows)
0
1.2
1
0
0
1,743
4,093,387
2010-11-04T02:22:00.000
0
0
0
0
0
python
0
4,093,406
0
2
0
false
1
0
I'll admit I don't know Python 3, so I may be wrong, but in Python 2, you can just check the __file__ variable in your module to get the name of the file it was loaded from. Just create your file in that same directory (preferably using os.path.dirname and os.path.join to remain platform-independent).
1
0
0
0
I have a rather simple program that writes HTML code ready for use. It works fine, except that if one were to run the program from the Python command line, as is the default, the HTML file that is created is created where python.exe is, not where the program I wrote is. And that's a problem. Do you know a way of getting the .write() function to write a file to a specific location on the disc (e.g. C:\Users\User\Desktop)? Extra cool-points if you know how to open a file browser window.
Python3:Save File to Specified Location
0
0
1
0
0
2,983
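A sketch of the __file__-based approach described above; the fake script path here just stands in for a real module's __file__:

```python
import os
import tempfile

def path_next_to(script_file, filename):
    """Build an absolute path for `filename` in the same directory as
    the running script, regardless of the current working directory."""
    script_dir = os.path.dirname(os.path.abspath(script_file))
    return os.path.join(script_dir, filename)

# Demo with a stand-in "script" location instead of a real __file__.
fake_script = os.path.join(tempfile.gettempdir(), "pages", "build.py")
out_path = path_next_to(fake_script, "index.html")
```

Writing with open(path_next_to(__file__, "index.html"), "w") then lands the HTML next to the program rather than next to python.exe.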
4,093,819
2010-11-04T04:07:00.000
2
0
0
0
0
wxpython,wxwidgets,openfiledialog,savefiledialog,filedialog
0
5,943,314
0
4
0
false
0
1
In wxWidgets 2.9, custom controls can be added to file dialogs using wxFileDialog::SetExtraControlCreator(). It's implemented for the GTK, MSW and generic dialogs. Alternatively, you may use the wxFileCtrl class, which has a native implementation only in wxGTK. I don't know if these features are available from the Python wrappers, though.
2
2
0
0
I'm creating a file dialog that allows the user to save a file after editing it in my app. I want to add a checkbox to the dialog so the user can make some choices about what format the file is saved in. I think I need to make some new class that inherits from FileDialog and inserts a checkbox into the frame created by the filedialog, but I don't really know how to do that. Can anyone help me out? (I also want to create an analogous file dialog for opening a file, but I assume that will just mean replacing the SAVE style with the OPEN style.)
How do I add widgets to a file dialog in wxpython?
0
0.099668
1
0
0
2,300
4,093,819
2010-11-04T04:07:00.000
2
0
0
0
0
wxpython,wxwidgets,openfiledialog,savefiledialog,filedialog
0
12,428,450
0
4
0
false
0
1
I have to disagree with the sentiment that you should use standard dialogs only how they were designed. I take another view and would rather look at using subclassing the way that subclassing was intended. And to me, it is to add additional functionality/specialization to a class. So it is not changing the behavior of the standard dialog. It is creating a new dialog BASED ON the standard dialog with a little additional functionality. In my case, I want to add two buttons to the wx.MultiChoiceDialog to provide a Select All and/or Unselect All functions.
2
2
0
0
I'm creating a file dialog that allows the user to save a file after editing it in my app. I want to add a checkbox to the dialog so the user can make some choices about what format the file is saved in. I think I need to make some new class that inherits from FileDialog and inserts a checkbox into the frame created by the filedialog, but I don't really know how to do that. Can anyone help me out? (I also want to create an analogous file dialog for opening a file, but I assume that will just mean replacing the SAVE style with the OPEN style.)
How do I add widgets to a file dialog in wxpython?
0
0.099668
1
0
0
2,300
4,095,925
2010-11-04T10:55:00.000
1
0
1
0
0
python,multithreading,queue
0
4,096,149
0
1
0
true
0
0
If a thread waits for a specific task's completion, i.e. it shouldn't pick any completed task except the one it put, you can use locks to wait for the task: def run(self): # get a task, do something, put a new task newTask.waitFor() ... class Task: ... def waitFor(self): self._lock.acquire() def complete(self): self._lock.release() def failedToComplete(self, err): self._error = err self._lock.release() This will help to avoid time.sleep()-based monitoring of the response queue. Task completion error handling should be considered here. But this is an uncommon approach. Is it some specific algorithm where the thread which puts a new task should wait for it? Even so, you can implement that logic in the Task class, and not in the thread that processes it. And why does the thread pick a task from the destination queue and then put a new task back into the destination queue? If you have n steps of processing, you can use n queues for them. A group of threads serves the first queue: it gets a task, processes it, and puts the result (a new task) into the next queue. The group of final response-handler threads gets a response and sends it back to the client. The tasks encapsulate the details concerning themselves, so the threads don't distinguish one task from another, and there is no need to wait for a particular task.
1
0
0
0
Could anyone please advise on how to achieve the scenario below? There are 2 queues - a destination queue and a response queue. A thread picks a task up from the destination queue, finds out it needs more details, and submits a new task to the destination queue. It then either waits for its request to be processed and the result to appear in the response queue, or monitors the response queue for the response to its task without actually picking up any other response, so that responses remain available to the other threads waiting on them. Thank you.
python: how to make threads wait for specific response?
0
1.2
1
0
0
1,230
4,098,119
2010-11-04T15:15:00.000
6
0
0
1
0
python,google-app-engine
0
4,098,417
0
1
0
true
1
0
Somewhere in your top level module code is something that uses Python print statements. Print outputs to standard out, which is what is returned as the response body; if it outputs a pair of newlines, the content before that is treated by the browser as the response header. The 'junk' you're seeing is the real response headers being produced by your webapp. It's only happening on startup requests, because that's the only time the code in question gets executed.
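A toy illustration of the mechanism described above (no App Engine code involved): in a CGI-style response, the first blank line ends the headers, so anything printed before the handler writes its headers pushes the real headers down into what the browser treats as the body.

```python
import io

def render_response(out, stray_debug=False):
    # Simulates a CGI-style handler writing its response to stdout.
    if stray_debug:
        # A leftover module-level `print` fires before the headers...
        out.write("some debug text\n\n")
    out.write("Status: 200 OK\n")
    out.write("Content-Type: text/html\n")
    out.write("\n")
    out.write("<html>hello</html>\n")

good = io.StringIO(); render_response(good)
bad = io.StringIO(); render_response(bad, stray_debug=True)

def body(raw):
    # The first blank line terminates the headers; the rest is the body.
    return raw.split("\n\n", 1)[1]

print(repr(body(good.getvalue())))  # just the HTML
print(repr(body(bad.getvalue())))   # 'Status: 200 OK...' leaks into the body
```

This matches the symptom in the question: the "Status: 200 OK / Content-Type: ..." block rendered at the top of the page is the real header text, displaced by stray startup output.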
1
1
0
0
I'm developing an app in Python for Google App Engine. When I run the deployed app from appspot, it works fine unless I'm accessing it for the first time in over, say, 5 minutes. The problem is that if I haven't accessed the app for a while, the page renders with the message Status: 200 OK Content-Type: text/html; charset=utf-8 Cache-Control: no-cache Expires: Fri, 01 Jan 1990 00:00:00 GMT Content-Length: 15493 prepended at the top. Usually that text is displayed for a second or two before the rest of the page is displayed. If I check the server Logs, I see the info message This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. The problem is easily corrected by refreshing the page. In this case, the page is delivered correctly, and works for subsequent refreshes. But if I wait 5 minutes, the problem comes back. Any explanations, or suggestions on how to troubleshoot this? I've got a vague notion that when GAE "wakes up" after being inactive, there is an incorrect initialization going on. Or perhaps a header from a previous bout of activity is lingering in a buffer somewhere. But self.response.out seems to be empty when the request handler is invoked.
Google App Engine gives spurious content at beginning of page after quiescent period
0
1.2
1
0
0
108
4,101,815
2010-11-04T22:01:00.000
0
0
0
1
0
python,matlab,amazon-web-services,hadoop,mapreduce
0
4,101,917
0
2
0
false
0
0
The following is not exactly an answer to your Hadoop question, but I can't resist asking why you don't execute your processing jobs on Grid resources instead. There are proven solutions for executing compute-intensive workflows on the Grid, and as far as I know the MATLAB runtime environment is usually available on those resources. You may also want to consider using the Grid especially if you are in academia. Good luck
1
1
1
0
I am writing a distributed image-processing application using Hadoop Streaming, Python, MATLAB, and Elastic MapReduce. I have compiled a binary executable of my MATLAB code using the MATLAB compiler. I am wondering how I can incorporate this into my workflow so the binary is part of the processing on Amazon's Elastic MapReduce. It looks like I have to use the Hadoop Distributed Cache? The code is very complicated (and not written by me), so porting it to another language is not possible right now. Thanks
Hadoop/Elastic Map Reduce with binary executable?
1
0
1
0
0
1,138
4,103,085
2010-11-05T02:15:00.000
15
1
0
1
0
python,notepad++,nppexec
0
4,106,339
0
2
0
true
0
0
In Notepad++, go to Plugins > NppExec and enable "Follow $(CURRENT_DIRECTORY)"; NppExec will then switch its working directory to the directory of the current file before running your script.
1
6
0
0
Using Windows for the first time in quite a while, I have picked up Notepad++ and am using the NppExec plugin to run Python scripts. However, I noticed that Notepad++ doesn't pick up the directory that my script is saved in. For example, I place "script.py" in 'My Documents'; however, os.getcwd() prints "Program Files \ Notepad++". Does anyone know how to change this behavior? Not something I'm used to on the Mac.
Getting NppExec to understand path of the current file in Notepad++ (for Python scripts)
0
1.2
1
0
0
4,494
4,107,644
2010-11-05T15:57:00.000
0
0
0
0
0
python,django
0
4,107,663
0
2
0
false
1
0
Put the poll page in its own view, connect to the view via urls.py, and set up your frame or iframe to source from that URL.
1
0
0
1
Can you take things like the poll app from the tutorial and display them in an iframe or frameset? The tutorial is great and the app is very nice, but, how often do you go to a site with a whole page dedicated to a poll? I was trying to think about how you do it using the urls.py file, but couldn't wrap my head around it. Just wondering if anyone has done this or knows of any tutorials that cover this issue? Thanks.
Django Question
0
0
1
0
0
101
4,108,852
2010-11-05T18:22:00.000
4
0
0
0
0
python,django
0
4,108,925
0
3
0
true
1
0
With Django it's a bit more than introspection actually. The models use a metaclass to register themselves, I will spare you the complexities of everything involved but the admin does not introspect the models as you browse through it. Instead, the registering process creates a _meta object on the model with all the data needed for the admin and ORM. You can see the ModelBase metaclass in django/db/models/base.py, as you can see in the __new__ function it walks through all the fields to add them to the _meta object. The _meta object itself is generated dynamically using the Meta class definition on the model. You can see the result with print SomeModel._meta or print SomeModel._meta.fields
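A stripped-down illustration of the mechanism described above - not Django's actual code, just the shape of it: the metaclass runs at class-definition time, walks the declared fields, and collects them onto a _meta object, so the admin never has to introspect models at request time.

```python
class Field:
    """Stand-in for a Django model field."""
    pass

class Meta:
    """Stand-in for the options object Django attaches as _meta."""
    def __init__(self, fields):
        self.fields = fields

class ModelBase(type):
    def __new__(mcs, name, bases, attrs):
        # Collect field declarations at class-creation time,
        # the same moment Django's ModelBase does its registration.
        fields = {k: v for k, v in attrs.items() if isinstance(v, Field)}
        cls = super().__new__(mcs, name, bases, attrs)
        cls._meta = Meta(fields)
        return cls

class SomeModel(metaclass=ModelBase):
    title = Field()
    body = Field()

print(sorted(SomeModel._meta.fields))  # ['body', 'title']
```

The real ModelBase does far more (inheritance, Meta options, app registration), but the principle is the same: the work happens once, when the class statement executes.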
1
0
0
0
I don't really want to know Django I am actually more interested in the administrator. The thing that interests me is how they introspect the models to create the administrator back-end. I browsed through the Django source code and found a little info but since it's such a big project I was wondering if there are smaller examples of how they do it? This is just a personal project to get to understand Python better. I thought that learning about introspecting objects would be a good way to do this.
Django's introspecting administrator: How does it work?
0
1.2
1
0
0
91
4,111,049
2010-11-05T23:48:00.000
2
0
0
0
0
python,tkinter,clipboard,toolbar
0
4,111,218
0
2
0
false
0
1
You don't have to maintain a big framework, you can create a single binding on the root widget for <FocusIn> and put all the logic in that binding. Or, use focus_class and bind to the class all. Binding on the root will only affect children of the root, binding to all will affect all widgets in the entire app. That only matters if you have more than one toplevel widget.
2
0
0
0
I'm looking for suggestions on how one might implement a toolbar that provides edit cut, copy, paste commands using the Tkinter framework. I understand how to build a toolbar and bind the toolbar commands, but I'm confused over how the toolbar button bound commands will know which widget to apply the cut, copy, or paste action because the widget with edit activity will lose focus when the toolbar button is clicked. My first thought was to have each widget with potential edit activity set a global variable when the widget gains focus and have other widgets (without edit activity, eg. buttons, sliders, checkbox/radiobox, etc) clear this global variable. But this sounds complicated to maintain unless I build a framework of widgets that inherit this behavior. Is there a simpler way to go about this or am I on the right track?
Python/Tkinter: Building a toolbar that provides edit cut, copy, paste commands
0
0.197375
1
0
0
1,053
4,111,049
2010-11-05T23:48:00.000
1
0
0
0
0
python,tkinter,clipboard,toolbar
0
4,111,334
0
2
0
true
0
1
You can tell the toolbar buttons to not take the focus; it's a configuration option and no UI guidelines I've ever seen have had toolbar buttons with focus. (Instead, the functionality is always available through some other keyboard-activatable mechanism, e.g., a hotkey combo.)
2
0
0
0
I'm looking for suggestions on how one might implement a toolbar that provides edit cut, copy, paste commands using the Tkinter framework. I understand how to build a toolbar and bind the toolbar commands, but I'm confused over how the toolbar button bound commands will know which widget to apply the cut, copy, or paste action because the widget with edit activity will lose focus when the toolbar button is clicked. My first thought was to have each widget with potential edit activity set a global variable when the widget gains focus and have other widgets (without edit activity, eg. buttons, sliders, checkbox/radiobox, etc) clear this global variable. But this sounds complicated to maintain unless I build a framework of widgets that inherit this behavior. Is there a simpler way to go about this or am I on the right track?
Python/Tkinter: Building a toolbar that provides edit cut, copy, paste commands
0
1.2
1
0
0
1,053
4,115,033
2010-11-06T20:34:00.000
0
0
1
0
1
python,thread-safety,queue
0
4,115,106
0
3
0
false
0
0
Why can't you just add the final step to the queue?
2
0
0
0
How can I update a shared variable between different threading.Thread instances in Python? Let's say I have 5 threads working down a Queue.Queue(). After the queue is done I want to do another operation, but I want it to happen only once. Is it possible to share and update a variable between the threads? So when Queue.empty() is True this event gets fired, but if one of the threads is doing it I don't want the others to do it too, because I would get wrong results. EDIT: I have a queue which reflects files on the filesystem. The files are uploaded to a site by the threads, and while each thread is uploading a file it updates a set() of keywords I got from the files. When the queue is empty I need to contact the site and tell it to update the keyword counts. Right now each thread does this and I get an update for each thread, which is bad. I also tried to empty the set but it doesn't work:

keywordset = set()
hkeywordset = set()

def worker():
    while queue:
        if queue.empty():
            if len(keywordset) or len(hkeywordset):
                # as soon as the queue is empty we send the keywords and
                # hkeywords to the imageapp so it can start updating
                apiurl = update_cols_url
                if apiurl[-1] != '/':
                    apiurl = apiurl + '/'
                try:
                    keywords = []
                    data = dict(keywords=list(keywordset), hkeywords=list(hkeywordset))
                    post = dict(data=simplejson.dumps(data))
                    post = urllib.urlencode(post)
                    urllib2.urlopen(apiurl, post)
                    hkeywordset.clear()
                    keywordset.clear()
                    print 'sent keywords and hkeywords to imageapp...'
                except Exception, e:
                    print e
        # we get the task from the Queue and process the file based on the action
        task = queue.get()
        print str(task)
        try:
            reindex = task['reindex']
        except:
            reindex = False
        data = updater.process_file(task['filename'], task['action'], task['fnamechange'], reindex)
        # we parse the image's keywords and hkeywords and add them to the
        # sets above for later processing
        try:
            for keyword in data['keywords']:
                keywordset.add(keyword)
        except:
            pass
        try:
            for hkw in data['hkeywords']:
                hkeywordset.add(hkw)
        except:
            pass
        queue.task_done()

for i in range(num_worker_threads):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

while 1:
    line = raw_input('type \'q\' to stop filewatcher... or \'qq\' to force quit...\n').strip()

This is what I was trying, basically, but of course the queue.empty() part gets executed as many times as I have threads.
python threading and shared variables
1
0
1
0
0
3,372
4,115,033
2010-11-06T20:34:00.000
0
0
1
0
1
python,thread-safety,queue
0
4,115,111
0
3
0
false
0
0
Have another queue where you place this event once the first queue is empty, or have a dedicated thread for this event.
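Both answers amount to moving the run-once step out of the worker threads. One common shape is to let the coordinating thread block on Queue.join() and perform the final step itself; a minimal sketch (Python 3 names, whereas the question's code is Python 2):

```python
import queue
import threading

tasks = queue.Queue()
keywords = set()
lock = threading.Lock()

def worker():
    # Workers only process items; none of them performs the final step.
    while True:
        name = tasks.get()
        with lock:
            keywords.add(name.upper())
        tasks.task_done()

for _ in range(5):
    threading.Thread(target=worker, daemon=True).start()

for name in ["alpha", "beta", "gamma"]:
    tasks.put(name)

tasks.join()  # blocks until task_done() was called for every item

# The final step runs exactly once, in the coordinating thread.
print(sorted(keywords))  # ['ALPHA', 'BETA', 'GAMMA']
```

This removes the race entirely: no worker ever checks queue.empty(), so the "send keywords to the site" step cannot fire once per thread.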
2
0
0
0
How can I update a shared variable between different threading.Thread instances in Python? Let's say I have 5 threads working down a Queue.Queue(). After the queue is done I want to do another operation, but I want it to happen only once. Is it possible to share and update a variable between the threads? So when Queue.empty() is True this event gets fired, but if one of the threads is doing it I don't want the others to do it too, because I would get wrong results. EDIT: I have a queue which reflects files on the filesystem. The files are uploaded to a site by the threads, and while each thread is uploading a file it updates a set() of keywords I got from the files. When the queue is empty I need to contact the site and tell it to update the keyword counts. Right now each thread does this and I get an update for each thread, which is bad. I also tried to empty the set but it doesn't work:

keywordset = set()
hkeywordset = set()

def worker():
    while queue:
        if queue.empty():
            if len(keywordset) or len(hkeywordset):
                # as soon as the queue is empty we send the keywords and
                # hkeywords to the imageapp so it can start updating
                apiurl = update_cols_url
                if apiurl[-1] != '/':
                    apiurl = apiurl + '/'
                try:
                    keywords = []
                    data = dict(keywords=list(keywordset), hkeywords=list(hkeywordset))
                    post = dict(data=simplejson.dumps(data))
                    post = urllib.urlencode(post)
                    urllib2.urlopen(apiurl, post)
                    hkeywordset.clear()
                    keywordset.clear()
                    print 'sent keywords and hkeywords to imageapp...'
                except Exception, e:
                    print e
        # we get the task from the Queue and process the file based on the action
        task = queue.get()
        print str(task)
        try:
            reindex = task['reindex']
        except:
            reindex = False
        data = updater.process_file(task['filename'], task['action'], task['fnamechange'], reindex)
        # we parse the image's keywords and hkeywords and add them to the
        # sets above for later processing
        try:
            for keyword in data['keywords']:
                keywordset.add(keyword)
        except:
            pass
        try:
            for hkw in data['hkeywords']:
                hkeywordset.add(hkw)
        except:
            pass
        queue.task_done()

for i in range(num_worker_threads):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

while 1:
    line = raw_input('type \'q\' to stop filewatcher... or \'qq\' to force quit...\n').strip()

This is what I was trying, basically, but of course the queue.empty() part gets executed as many times as I have threads.
python threading and shared variables
1
0
1
0
0
3,372
4,126,247
2010-11-08T17:19:00.000
2
0
0
0
0
python,frameworks,zope,zope.interface
0
4,126,734
0
1
0
true
0
0
You can upload Zope the same way you would upload anything else, but it's not suitable for deployment on many shared web hosts, as many of them will not let you run the Zope process due to the amount of resources it can consume. You are best off finding a web host that supports Zope, or using a VPS.
1
0
0
0
Hey, I'd like to know how to upload my Zope site to my FTP space. I have a domain, and I'd like to upload the site the way I upload normal files over FTP. Thanks.
How to upload zope site on my ftp?
0
1.2
1
0
0
175
4,136,800
2010-11-09T17:49:00.000
5
0
0
0
0
python,performance,sqlite
0
4,136,841
0
3
0
true
0
0
SQLite does not run in a separate process. So you don't actually have any extra overhead from IPC. But IPC overhead isn't that big, anyway, especially over e.g., UNIX sockets. If you need multiple writers (more than one process/thread writing to the database simultaneously), the locking overhead is probably worse, and MySQL or PostgreSQL would perform better, especially if running on the same machine. The basic SQL supported by all three of these databases is the same, so benchmarking isn't that painful. You generally don't have to do the same type of debugging on SQL statements as you do on your own implementation. SQLite works, and is fairly well debugged already. It is very unlikely that you'll ever have to debug "OK, that row exists, why doesn't the database find it?" and track down a bug in index updating. Debugging SQL is completely different than procedural code, and really only ever happens for pretty complicated queries. As for debugging your code, you can fairly easily centralize your SQL calls and add tracing to log the queries you are running, the results you get back, etc. The Python SQLite interface may already have this (not sure, I normally use Perl). It'll probably be easiest to just make your existing Table class a wrapper around SQLite. I would strongly recommend not reinventing the wheel. SQLite will have far fewer bugs, and save you a bunch of time. (You may also want to look into Firefox's fairly recent switch to using SQLite to store history, etc., I think they got some pretty significant speedups from doing so.) Also, SQLite's well-optimized C implementation is probably quite a bit faster than any pure Python implementation.
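A small runnable illustration of the in-process point above: the sqlite3 module in the standard library opens an in-memory database inside the Python process, so queries and index creation involve no IPC at all.

```python
import sqlite3

# In-memory database: no separate server process, no IPC; calls go
# straight into the SQLite library linked into the Python process.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT, score INTEGER)")
con.execute("CREATE INDEX idx_score ON t (score)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("a", 10), ("b", 30), ("c", 20)])

# The index is used automatically for range queries on `score`.
rows = con.execute(
    "SELECT name FROM t WHERE score >= ? ORDER BY score DESC", (20,)
).fetchall()
print(rows)  # [('b',), ('c',)]
```

Passing a file path instead of ":memory:" gives the same API with on-disk persistence, which makes it easy to benchmark both against the custom Table class.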
3
4
0
0
I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have a class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc. I am starting to wonder if it's a good idea to remove my class Table and refactor the code to use a regular relational database that I would instantiate in-memory. Here's my thinking so far: Performance of queries and indexing would improve, but communication between Python code and a separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with SQLite, which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definitions and the limited features of SQLite). With SQL, I get far more powerful features than I would ever want to code myself. That seems like a clear advantage (even with SQLite). I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard since I can't set breakpoints or easily print out intermediate state. I don't know how to judge the overall impact on my code's reliability and debugging time. The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with the database might be uglier and more complex than the code that uses the pure Python class Table. Again, I don't know which is better on balance. Any corrections to the above, or anything else I should think about?
Pros and cons of using sqlite3 vs custom table implementation
0
1.2
1
1
0
903
4,136,800
2010-11-09T17:49:00.000
4
0
0
0
0
python,performance,sqlite
0
4,136,876
0
3
0
false
0
0
You could try to make an SQLite wrapper with the same interface as your class Table, so that you keep your code clean and still get SQLite's performance.
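A minimal sketch of that wrapper idea - the method names here are illustrative, since the asker's real Table interface isn't shown:

```python
import sqlite3

class Table:
    """Thin wrapper keeping a Table-like interface while delegating
    storage, querying and indexing to an in-memory SQLite table."""

    def __init__(self, name, columns):
        self._con = sqlite3.connect(":memory:")
        self._name = name
        # NOTE: name/columns are interpolated into SQL, so they must come
        # from trusted code, never from user input.
        self._con.execute(f"CREATE TABLE {name} ({', '.join(columns)})")

    def insert(self, *values):
        marks = ", ".join("?" for _ in values)
        self._con.execute(f"INSERT INTO {self._name} VALUES ({marks})", values)

    def query(self, where, params=()):
        cur = self._con.execute(
            f"SELECT * FROM {self._name} WHERE {where}", params)
        return cur.fetchall()

t = Table("people", ["name", "age"])
t.insert("ann", 34)
t.insert("bob", 19)
print(t.query("age > ?", (20,)))  # [('ann', 34)]
```

Calling code keeps using Table methods, so the migration can happen behind the existing interface and be reverted if the trade-offs don't pay off.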
3
4
0
0
I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have a class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc. I am starting to wonder if it's a good idea to remove my class Table and refactor the code to use a regular relational database that I would instantiate in-memory. Here's my thinking so far: Performance of queries and indexing would improve, but communication between Python code and a separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with SQLite, which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definitions and the limited features of SQLite). With SQL, I get far more powerful features than I would ever want to code myself. That seems like a clear advantage (even with SQLite). I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard since I can't set breakpoints or easily print out intermediate state. I don't know how to judge the overall impact on my code's reliability and debugging time. The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with the database might be uglier and more complex than the code that uses the pure Python class Table. Again, I don't know which is better on balance. Any corrections to the above, or anything else I should think about?
Pros and cons of using sqlite3 vs custom table implementation
0
0.26052
1
1
0
903
4,136,800
2010-11-09T17:49:00.000
0
0
0
0
0
python,performance,sqlite
0
4,136,862
0
3
0
false
0
0
If you're doing database work, use a database; if you're not, then don't. Using tables, it sounds like you are. I'd recommend using an ORM to make it more Pythonic. SQLAlchemy is the most flexible (though it's not strictly just an ORM).
3
4
0
0
I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have a class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc. I am starting to wonder if it's a good idea to remove my class Table and refactor the code to use a regular relational database that I would instantiate in-memory. Here's my thinking so far: Performance of queries and indexing would improve, but communication between Python code and a separate database process might be less efficient than between Python functions. I assume that is too much overhead, so I would have to go with SQLite, which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of non-standard SQL definitions and the limited features of SQLite). With SQL, I get far more powerful features than I would ever want to code myself. That seems like a clear advantage (even with SQLite). I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard since I can't set breakpoints or easily print out intermediate state. I don't know how to judge the overall impact on my code's reliability and debugging time. The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code to deal with the database might be uglier and more complex than the code that uses the pure Python class Table. Again, I don't know which is better on balance. Any corrections to the above, or anything else I should think about?
Pros and cons of using sqlite3 vs custom table implementation
0
0
1
1
0
903
4,138,886
2010-11-09T21:35:00.000
4
0
0
1
0
python,linux,unix,cross-compiling
0
4,139,691
0
1
0
true
0
0
The standard freeze tool (from Tools/freeze) can be used to make fully-standalone binaries on Unix, including all extension modules and builtins (and omitting anything that is not directly or indirectly imported).
1
2
0
0
Anybody know how this can be done? I took a look at cx_Freeze, but it seems that it doesn't compile everything necessary into one binary (i.e., the python builtins aren't present).
How to create Unix and Linux binaries from Python code
0
1.2
1
0
0
3,222
4,140,943
2010-11-10T03:20:00.000
0
1
1
0
1
python,emacs
0
4,143,655
1
2
0
false
0
0
PATH is only searched when a program is launched via the shell. For programs that are launched directly by Emacs (for example, via call-process), it's the exec-path variable that is searched.
2
1
0
0
How do I change the version of Python that Emacs uses in python-mode to the latest version that I just installed? I tried setting the PATH in my init.el file to the path where the latest version of Python resides, but it's not working.
Emacs on Mac for Python - python-mode keeps using the default Python version
0
0
1
0
0
811
4,140,943
2010-11-10T03:20:00.000
1
1
1
0
1
python,emacs
0
4,141,003
1
2
0
false
0
0
Set the variable python-python-command. This can be done via customize: M-x customize-option RET python-python-command RET Change the value to point to the appropriate binary.
2
1
0
0
How do I change the version of Python that Emacs uses in python-mode to the latest version that I just installed? I tried setting the PATH in my init.el file to the path where the latest version of Python resides, but it's not working.
Emacs on Mac for Python - python-mode keeps using the default Python version
0
0.099668
1
0
0
811
4,149,598
2010-11-10T22:07:00.000
0
0
0
0
0
python,cgi,html-parsing
0
4,149,742
0
4
0
false
1
0
You can only access data posted by a form (or passed as GET parameters). So you can extract the data you need using JavaScript and post it through a form.
1
0
0
0
I would like to access any element in a web page. I know how to do that when I have a form (form = cgi.FieldStorage()), but not when I have, for example, a table. How can I do that? Thanks
How can I access any element in a web page with Python?
0
0
1
0
1
178
4,155,126
2010-11-11T14:00:00.000
5
0
1
0
0
python,windows,memory-management
0
4,155,299
0
4
0
false
0
0
As far as I know there is no easy way to see what the memory consumption of a certain object is. It would be a non-trivial thing to do because references could be shared among objects. Here are my two favourite workarounds: Use the process manager. Have the program pause before allocation. Write down the memory used before allocation. Allocate. Write down the memory used after allocation. It's a low-tech method but it works. Alternatively you can use pickle.dump to serialize your data structure. The resulting pickle will be comparable (not identical!) in size to the space needed to store the data structure in memory. For better results, use the binary pickle protocol.
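The pickle workaround above can be sketched in a few lines. Note that sys.getsizeof alone only measures the outer container, not the objects it references, which is why the pickle size is a more useful (if rough) proxy for a nested structure:

```python
import pickle
import sys

data = {"values": list(range(1000)), "label": "x" * 100}

# sys.getsizeof reports only the dict object itself,
# not the list, the ints, or the string it references.
shallow = sys.getsizeof(data)

# The binary pickle is a rough, comparable proxy for the space the
# whole structure occupies in memory (not an exact byte count).
deep_estimate = len(pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL))

print(shallow, deep_estimate)
```

Comparing pickle sizes before and after a change to the data layout gives a quick relative measure of how efficiently the structure is stored, without any platform-specific tooling.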
2
3
0
0
I have a script that loads a lot of data into memory. I want to know how efficiently the data is stored in memory. So, I want to be able to know how much memory Python used before I loaded the data, and how much after. I am also wondering if there is some way to check the memory usage of a complex object. Let's say I have a nested dictionary with different types of data inside. How can I know how much memory is used by all the data in this dictionary? Thanks, Alex
How to find total amount of memory used by python process/object in windows
1
0.244919
1
0
0
950
4,155,126
2010-11-11T14:00:00.000
0
0
1
0
0
python,windows,memory-management
0
4,187,961
0
4
0
false
0
0
An alternative is that you could use Windows performance counters through pywin32.
2
3
0
0
I have a script that loads a lot of data into memory. I want to know how efficiently the data is stored in memory. So, I want to be able to know how much memory Python used before I loaded the data, and how much after. I am also wondering if there is some way to check the memory usage of a complex object. Let's say I have a nested dictionary with different types of data inside. How can I know how much memory is used by all the data in this dictionary? Thanks, Alex
How to find total amount of memory used by python process/object in windows
1
0
1
0
0
950
4,158,367
2010-11-11T19:19:00.000
72
0
0
0
0
python,charts,matplotlib
0
4,158,455
0
3
0
true
0
0
It was easier than I expected, I just did: pylab.subplot(4,4,10) and it worked.
1
51
1
0
Is it possible to get more than 9 subplots in matplotlib? I am using the subplot command pylab.subplot(449); how can I get a "4,4,10" to work? Thank you very much.
more than 9 subplots in matplotlib
1
1.2
1
0
0
28,836
4,158,758
2010-11-11T20:03:00.000
0
0
0
1
0
python,logging,message-queue,task,celery
0
4,440,220
0
3
0
false
1
0
It sounds like some kind of 'watcher' would be ideal. If you can watch and consume the logs as a stream you could slurp the results as they come in. Since the watcher would be running separately, and therefore have no dependencies on what it is watching, I believe this would satisfy your requirement for a non-invasive solution.
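Within a single process, the stdlib already supports this watcher shape: a logging.handlers.QueueHandler ships every record to a shared queue that a later step can drain. Across real Celery workers (separate processes) you would need a shared transport instead of an in-memory queue, but the pattern is the same; the task and logger names below are made up for the sketch.

```python
import logging
import logging.handlers
import queue

# Each task logs normally; the QueueHandler ships the records to a
# shared queue so the reporting step can collect them afterwards.
log_queue = queue.Queue()
logger = logging.getLogger("pipeline")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

def collect_data():
    logger.info("collected 3 rows")

def upload_data():
    logger.info("upload finished")

collect_data()
upload_data()

# The "report" step drains everything the earlier steps logged.
records = []
while not log_queue.empty():
    records.append(log_queue.get_nowait().getMessage())
print(records)  # ['collected 3 rows', 'upload finished']
```

The tasks themselves stay unmodified apart from using a normal logger, which keeps the capture non-invasive.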
1
5
0
0
I want to convert my homegrown task queue system into a Celery-based task queue, but one feature I currently have is causing me some distress. Right now, my task queue operates very coarsely; I run the job (which generates data and uploads it to another server), collect the logging using a variant on Nose's log capture library, and then I store the logging for the task as a detailed result record in the application database. I would like to break this down as three tasks: collect data upload data report results (including all logging from the preceding two tasks) The real kicker here is the logging collection. Right now, using the log capture, I have a series of log records for each log call made during the data generation and upload process. These are required for diagnostic purposes. Given that the tasks are not even guaranteed to run in the same process, it's not clear how I would accomplish this in a Celery task queue. My ideal solution to this problem will be a trivial and ideally minimally invasive method of capturing all logging during the predecessor tasks (1, 2) and making it available to the reporter task (3) Am I best off remaining fairly coarse-grained with my task definition, and putting all of this work in one task? or is there a way to pass the existing captured logging around in order to collect it at the end?
How can I capture all of the python log records generated during the execution of a series of Celery tasks?
1
0
1
0
0
1,306
4,171,555
2010-11-13T08:01:00.000
1
0
1
1
0
python,linux,ms-word
0
4,171,572
0
2
0
false
0
0
OpenOffice has some Python scripting ability. I have only heard about it; I haven't studied it or used it.
1
1
0
1
I want to create a new Word document from scratch with Python on a Linux platform, but I do not know how to do that. I am not willing to use RTF, or to use python-docx to create a DOCX document. Is there any other way to do so? Remember, I need the document to keep its formatting. Thanks for everyone's help!
How to create a well-formatted word document(.DOC) in python on Linux?
0
0.099668
1
0
0
4,741
4,173,883
2010-11-13T17:49:00.000
5
1
0
0
0
python,social-networking,pylons,get-satisfaction
0
4,174,212
0
1
0
true
0
0
So you're not interested in a fixed solution but want to program it yourself, do I get that correctly? If not: Go with a fixed solution. This will be a lot of programming effort, and whatever you want to do afterwards, doing it in another framework than you intended will be a much smaller problem. But if you're actually interested in the programming experience, and you haven't found any tutorials googling for, say "messaging python tutorial", then that's because these are large-scale projects,- if you describe a project of this size, you're so many miles above actual lines of code that the concrete programming language almost doesn't matter (or at least you don't get stuck with the details). So you need to break these things down into smaller components. For example, the friend/follow function: How to insert stuff into a table with a user id, how to keep a table of follow-relations, how to query for a user all texts from people she's following (of course there's also some infrastructural issues if you hit >100.000 people, but you get the idea ;). Then you can ask yourself, which is the part of this which I don't know how to do in Python? If your problem, on the other hand, is breaking down the problems into these subproblems, you need to start looking for help on that, but that's probably not language specific (so you might just want to start googling for "architecture friend feed" or whatever). Also, you could ask that here (beware, each bullet point makes for a huge question in itself ;). Finally, you could get into the Pinax code (don't know it but I assume it's open source) and see how they're doing it. You could try porting some of their stuff to Pylons, for example, so you don't have to reinvent their wheel, learn how they do it, end up in the framework you wanted and maybe even create something reusable by others. sorry for tl;dr, that's because I don't have a concrete URL to point you to!
1
3
0
1
I am looking for tutorials and/or examples of certain components of a social network web app that may include Python code examples of: user account auto-gen function(database) friend/follow function (Twitter/Facebook style) messaging/reply function (Twitter style) live chat function (Facebook style) blog function public forums (like Get Satisfaction or Stack Overflow) profile page template auto-gen function I just want to start getting my head around how Python can be used to make these features. I am not looking for a solution like Pinax since it is built upon Django and I will be ultimately using Pylons or just straight up Python.
Where can I find Python code examples, or tutorials, of social networking style functions/components?
0
1.2
1
0
1
2,075
4,177,907
2010-11-14T14:27:00.000
3
0
0
1
1
python,django,google-app-engine,django-nonrel
0
4,180,124
0
1
0
true
1
0
Yes, djangoappengine has a mail backend for GAE and it's enabled by default in your settings.py via "from djangoappengine.settings_base import *". You can take a look at the settings_base module to see all backends and default settings.
1
2
0
0
I'm using Django-nonrel for Google App Engine and I was wondering if it's possible to use Django's built-in mail API instead of GAE's mail API for sending mail. If it is, how do I do it? Sorry if this seems like a noob question. I just started learning Django and GAE recently and I can't work out this problem that I have by myself.
Can I use Django's mail API in Google App Engine?
0
1.2
1
0
0
670
4,179,077
2010-11-14T18:57:00.000
1
0
0
1
0
python,multithreading,sockets,admin-interface
0
4,179,129
0
3
0
false
0
0
Python includes some multi-threading servers in the standard library (SocketServer, BaseHTTPServer, xmlrpclib). You might want to look at Twisted as well; it is a powerful framework for networking.
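A minimal sketch of what that could look like with the stdlib socketserver module (the PING command and handler names here are illustrative, not part of any of the suggested frameworks):

```python
# Threaded TCP admin server sketch: each client connection is handled
# in its own thread; the handler reads one command line and replies.
import socket
import socketserver
import threading

class AdminHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one newline-terminated command from the client.
        command = self.rfile.readline().strip().decode()
        if command == "PING":
            self.wfile.write(b"PONG\n")
        else:
            self.wfile.write(b"UNKNOWN COMMAND\n")

class ThreadedAdminServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    allow_reuse_address = True

def start_server():
    # Port 0 asks the OS for any free port.
    server = ThreadedAdminServer(("127.0.0.1", 0), AdminHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def send_command(port, command):
    # A minimal "telnet-style" client for testing the server.
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(command.encode() + b"\n")
        return conn.makefile().readline().strip()
```

The ThreadingMixIn is what gives you one thread per connection without writing any thread management yourself.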
2
1
0
0
I've built a very simple TCP server (in python) that when queried, returns various system level statistics of the host OS running said script. As part of my experimentation and goal to gain knowledge of python and its available libraries; i would like to build on an administration interface that a) binds to a separate TCP socket b) accepts remote connections from the LAN and c) allows the connected user to issue various commands. The Varnish application is an example of a tool that offers similar administrative functionality. My knowledge of threads is limited, and I am looking for pointers on how to accomplish something similar to the following : user connects to admin port (telnet remote.host 12111), and issues "SET LOGGING DEBUG", or "STOP SERVICE". My confusion relates to how i would go about sharing data between threads. If the service is started on for example thread-1 , how can i access data from that thread? Alternatively, a list of python applications that offer such a feature would be a great help. I'd gladly poke through code, in order to reuse their ideas.
Suggestions for developing a threaded tcp based admin interface
0
0.066568
1
0
0
178
4,179,077
2010-11-14T18:57:00.000
0
0
0
1
0
python,multithreading,sockets,admin-interface
0
4,179,107
0
3
0
false
0
0
Probably the easiest starting point would involve Python's xmlrpclib. Regarding threading, all threads can read all data in a Python program; only one thread at a time can modify any given object, so primitives such as lists and dicts will always be in a consistent state. Data structures (i.e. class objects) involving multiple primitives will require a little more care. The safest way to coordinate between threads is to pass messages/commands between threads via something like Queue.Queue; this isn't always the most efficient way but it's far less prone to problems.
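A small sketch of the queue-based coordination this answer recommends (queue.Queue is the Python 3 name of Queue.Queue; the SET LOGGING command is just an example):

```python
# One worker thread owns the mutable state; other threads only enqueue
# command strings, so no explicit locking is needed.
import queue
import threading

def worker(commands, state):
    while True:
        cmd = commands.get()
        if cmd == "STOP":
            break
        if cmd.startswith("SET LOGGING "):
            state["logging"] = cmd.split()[-1]

def run_demo():
    commands = queue.Queue()
    state = {"logging": "INFO"}
    t = threading.Thread(target=worker, args=(commands, state))
    t.start()
    # These puts could come from any thread, e.g. a per-connection handler.
    commands.put("SET LOGGING DEBUG")
    commands.put("STOP")
    t.join()
    return state
```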
2
1
0
0
I've built a very simple TCP server (in python) that when queried, returns various system level statistics of the host OS running said script. As part of my experimentation and goal to gain knowledge of python and its available libraries; i would like to build on an administration interface that a) binds to a separate TCP socket b) accepts remote connections from the LAN and c) allows the connected user to issue various commands. The Varnish application is an example of a tool that offers similar administrative functionality. My knowledge of threads is limited, and I am looking for pointers on how to accomplish something similar to the following : user connects to admin port (telnet remote.host 12111), and issues "SET LOGGING DEBUG", or "STOP SERVICE". My confusion relates to how i would go about sharing data between threads. If the service is started on for example thread-1 , how can i access data from that thread? Alternatively, a list of python applications that offer such a feature would be a great help. I'd gladly poke through code, in order to reuse their ideas.
Suggestions for developing a threaded tcp based admin interface
0
0
1
0
0
178
4,179,831
2010-11-14T21:38:00.000
5
0
1
0
0
python
0
4,179,849
0
4
0
true
0
0
It's not necessary to do so, but if you want you can have your method raise a TypeError if you know that the object is of a type that you cannot handle. One reason to do this is to help people to understand why the method call is failing and to give them some help fixing it, rather than giving them obscure error from the internals of your function. Some methods in the standard library do this: >>> [] + 1 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: can only concatenate list (not "int") to list
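A hedged sketch of that approach applied to the question's situation (the function name and suffix behaviour are made up for the example): validate the argument up front and raise a TypeError with a helpful message.

```python
# Fail fast with a descriptive TypeError instead of letting string
# concatenation raise an obscure error deeper in the function.
def append_suffix(text, suffix="!"):
    if not isinstance(text, str):
        raise TypeError(
            "append_suffix() expected a str, got %s" % type(text).__name__
        )
    return text + suffix
```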
1
5
0
0
I have a function that is supposed to take a string, append things to it where necessary, and return the result. My natural inclination is to just return the result, which involved string concatenation, and if it failed, let the exception float up to the caller. However, this function has a default value, which I just return unmodified. My question is: What if someone passed something unexpected to the method, and it returns something the user doesn't expect? The method should fail, but how to enforce that?
What is the proper python way to write methods that only take a particular type?
0
1.2
1
0
0
134
4,179,879
2010-11-14T21:46:00.000
1
1
0
0
0
python,multithreading,load-testing
0
4,180,003
0
2
0
false
0
0
Too many variables. 1000 at exactly the same time... no. In the same second... possibly. Bandwidth may well be the bottleneck. This is something best solved by experimentation.
1
4
0
0
I want to do a test load for a web page. I want to do it in python with multiple threads. First POST request would login user (set cookies). Then I need to know how many users doing the same POST request simultaneously can server take. So I'm thinking about spawning threads in which requests would be made in loop. I have a couple of questions: 1. Is it possible to run 1000 - 1500 requests at the same time CPU wise? I mean wouldn't it slow down the system so it's not reliable anymore? 2. What about the bandwidth limitations? How good the channel should be for this test to be reliable? Server on which test site is hosted is Amazon EC2 script would be run from another server(Amazon too). Thanks!
Python script load testing web page
1
0.099668
1
0
1
8,721
4,180,390
2010-11-14T23:40:00.000
4
1
0
1
0
python,ssh
0
4,180,771
1
4
0
false
0
0
On Linux machines, you can run the script with 'at': echo "python scriptname.py" | at now
1
23
0
0
I want to execute a Python script on several (15+) remote machine using SSH. After invoking the script/command I need to disconnect ssh session and keep the processes running in background for as long as they are required to. I have used Paramiko and PySSH in past so have no problems using them again. Only thing I need to know is how to disconnect a ssh session in python (since normally local script would wait for each remote machine to complete processing before moving on).
Execute remote python script via SSH
0
0.197375
1
0
0
51,995
4,189,717
2010-11-15T23:00:00.000
20
0
0
1
0
python,process,pid
0
4,189,752
0
5
0
true
0
0
Under Linux, you can read proc filesystem. File /proc/<pid>/cmdline contains the commandline.
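A minimal Linux-only sketch of reading that file (the helper name is illustrative); note the arguments in /proc/<pid>/cmdline are separated by NUL bytes, not spaces:

```python
# Return the command line of a process as a list of argument strings,
# by reading the NUL-separated /proc/<pid>/cmdline file (Linux only).
def get_cmdline(pid):
    with open("/proc/%d/cmdline" % pid, "rb") as f:
        raw = f.read()
    # Split on NUL bytes and drop the trailing empty entry.
    return [arg.decode() for arg in raw.split(b"\x00") if arg]
```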
1
26
0
0
This should be simple, but I'm just not seeing it. If I have a process ID, how can I use that to grab info about the process such as the process name.
Get process name by PID
0
1.2
1
0
0
26,492
4,196,389
2010-11-16T16:25:00.000
1
0
1
1
0
python,command,command-line-arguments,tuples
0
4,196,432
0
8
0
false
0
0
Iterate through sys.argv until you reach another flag.
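A sketch of that "iterate until the next flag" idea (the flag name -p comes from the question; the helper itself is made up): collect everything after -p into a tuple, stopping at the next option.

```python
# Build a variable-length tuple from the arguments that follow a flag.
def collect_after_flag(argv, flag="-p"):
    values = []
    try:
        start = argv.index(flag) + 1
    except ValueError:
        return tuple(values)  # flag not present
    for arg in argv[start:]:
        if arg.startswith("-"):  # reached the next flag
            break
        values.append(arg)
    return tuple(values)
```

In the real script you would call it as collect_after_flag(sys.argv); argparse with nargs="+" does the same job with less code.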
1
2
0
0
I have a program which provides a command line input like this: python2.6 prog.py -p a1 b1 c1 Now, we can have any number of input parameters i.e. -p a1 and -p a1 c1 b1 e2 are both possibilities. I want to create a tuple based on the variable input parameters. Any suggestions on how to do this would be very helpful! A fixed length tuple would be easy, but I am not sure how to implement a variable length one. thanks.
Python: Create a tuple from a command line input
0
0.024995
1
0
0
5,466
4,200,644
2010-11-17T01:10:00.000
0
0
1
0
1
python,macos,pygame
0
4,200,863
0
1
0
true
0
1
My guess is that you installed it for 2.6 and so it is residing in 2.6's library directory. Install it in 2.7's library directory and you should be good to go. I don't know OSX so I can't help with the details, but a little bit of googling should turn them up. The problem is that the two Python installations have distinct import paths.
1
0
0
0
Like the subject says: Does the latest stable pygame release work with python2.7? I've got both versions installed on my OSX Snow Leopard, but import pygame only works on python2.6 - That's the official distro which is 2.6.6, not the pre-installed one which is 2.6.1). And if it does work, how can I make it work on my machine? What am I doing wrong? Thanks in advance.
Does the latest stable pygame release work with python2.7?
0
1.2
1
0
0
324
4,201,590
2010-11-17T04:46:00.000
1
0
1
0
0
python
0
4,201,648
0
4
0
false
0
0
No, don't check for types explicitly. Python is a duck-typed language. If the wrong type is passed, a TypeError will be raised. That's it. You need not bother about the type; that is the responsibility of the programmer.
1
0
0
0
I have a class that wants to be initialized from a few possible inputs. However a combination of no function overloading and my relative inexperience with the language makes me unsure of how to proceed. Any advice?
Is it good form to have an __init__ method that checks the type of its input?
0
0.049958
1
0
0
2,458
4,202,358
2010-11-17T07:34:00.000
-1
1
0
0
0
python,django,django-models
0
4,203,124
0
3
0
true
0
0
Rename the fixture to something other than initial_data
2
5
0
0
is there a way to run syncdb without loading fixtures? xo
How do I run syncdb without loading fixtures?
1
1.2
1
0
0
1,286
4,202,358
2010-11-17T07:34:00.000
0
1
0
0
0
python,django,django-models
0
15,206,734
0
3
0
false
0
0
Best to name your fixtures something_else.json, then run syncdb (and migrate if needed), followed by manage.py loaddata something_else.json
2
5
0
0
is there a way to run syncdb without loading fixtures? xo
How do I run syncdb without loading fixtures?
1
0
1
0
0
1,286
4,212,877
2010-11-18T08:29:00.000
50
0
0
1
0
python,asynchronous,nonblocking,tornado
0
4,213,777
0
2
0
true
1
0
There is a server and a webframework. When should we use a framework and when can we replace it with another one? This distinction is a bit blurry. If you are only serving static pages, you would use one of the fast servers like lighttpd. Otherwise, most servers provide a framework of varying complexity to develop web applications. Tornado is a good web framework. Twisted is even more capable and is considered a good networking framework; it has support for a lot of protocols. Tornado and Twisted are frameworks that support non-blocking, asynchronous web / networking application development. When should Tornado be used? When is it useless? When using it, what should be taken into account? By its very nature, async / non-blocking I/O works great when the workload is I/O intensive and not computation intensive. Most web / networking applications suit this model well. If your application demands some computationally intensive task to be done, it should be delegated to some other service that can handle it better, while Tornado / Twisted do the job of the web server, responding to web requests. How can we make an inefficient site using Tornado? Do any computationally intensive task. Introduce blocking operations. But I guess it's not a silver bullet and if we just blindly run Django-based or any other site with Tornado it won't give any performance boost. Performance is usually a characteristic of the complete web application architecture. You can bring down the performance of most web frameworks if the application is not designed properly. Think about caching, load balancing etc. Tornado and Twisted provide reasonable performance and they are good for building performant web applications. You can check out the testimonials for both Twisted and Tornado to see what they are capable of.
1
87
0
0
Ok, Tornado is non-blocking and quite fast and it can handle a lot of standing requests easily. But I guess it's not a silver bullet and if we just blindly run Django-based or any other site with Tornado it won't give any performance boost. I couldn't find comprehensive explanation of this, so I'm asking it here: When should Tornado be used? When is it useless? When using it, what should be taken into account? How can we make inefficient site using Tornado? There is a server and a webframework. When should we use framework and when can we replace it with other one?
When and how to use Tornado? When is it useless?
0
1.2
1
0
0
29,364
4,213,091
2010-11-18T09:01:00.000
3
1
1
1
0
python,bash,environment-variables,crontab
0
4,213,327
0
3
0
false
0
0
Use a command line option that only cron will use. Or a symlink to give the script a different name when called by cron. You can then use sys.argv[0] to distinguish between the two ways to call the script.
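A sketch combining the symlink idea with a tty check (the debug- name prefix and the function itself are made up for illustration; cron jobs have no controlling terminal, so stdout.isatty() is False there):

```python
# Decide "debug mode" from how the script was invoked: either via a
# debug-* symlink name, or interactively from a terminal.
import os
import sys

def is_debug_mode(argv0=None, stdout=None):
    argv0 = argv0 if argv0 is not None else sys.argv[0]
    stdout = stdout if stdout is not None else sys.stdout
    if os.path.basename(argv0).startswith("debug-"):
        return True
    # cron redirects stdout to a pipe/file, so isatty() returns False.
    return hasattr(stdout, "isatty") and stdout.isatty()
```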
1
15
0
0
Imagine a script is running in these 2 sets of "conditions": live action, set up in sudo crontab debug, when I run it from console ./my-script.py What I'd like to achieve is an automatic detection of "debug mode", without me specifying an argument (e.g. --debug) for the script. Is there a convention about how to do this? Is there a variable that can tell me who the script owner is? Whether script has a console at stdout? Run a ps | grep to determine that? Thank you for your time.
Detect if python script is run from console or by crontab
0
0.197375
1
0
0
4,781
4,214,868
2010-11-18T12:44:00.000
13
0
0
0
0
python,machine-learning,svm,libsvm
0
4,215,056
0
8
0
true
0
0
LIBSVM reads the data from a tuple containing two lists. The first list contains the classes and the second list contains the input data. Create a simple dataset with two possible classes; you also need to specify which kernel you want to use by creating an svm_parameter: >>> from libsvm import * >>> prob = svm_problem([1,-1],[[1,0,1],[-1,0,-1]]) >>> param = svm_parameter(kernel_type = LINEAR, C = 10) ## training the model >>> m = svm_model(prob, param) ## testing the model >>> m.predict([1, 1, 1])
2
25
1
0
I am in dire need of a classification task example using LibSVM in python. I don't know how the Input should look like and which function is responsible for training and which one for testing Thanks
An example using python bindings for SVM library, LIBSVM
0
1.2
1
0
0
50,030
4,214,868
2010-11-18T12:44:00.000
3
0
0
0
0
python,machine-learning,svm,libsvm
0
8,302,624
0
8
0
false
0
0
Adding to @shinNoNoir: param.kernel_type represents the type of kernel function you want to use: 0: linear, 1: polynomial, 2: RBF, 3: sigmoid. Also keep in mind that svm_problem(y,x): here y is the class labels and x is the class instances, and x and y can only be lists, tuples and dictionaries (no numpy arrays).
2
25
1
0
I am in dire need of a classification task example using LibSVM in python. I don't know how the Input should look like and which function is responsible for training and which one for testing Thanks
An example using python bindings for SVM library, LIBSVM
0
0.07486
1
0
0
50,030
4,216,988
2010-11-18T16:16:00.000
0
0
1
0
1
c++,python,visual-studio-2010,python-c-api,python-embedding
1
4,277,222
0
2
0
true
0
1
Well, I finally found out what went wrong. I did compile my python27_d.dll with the same VC10 as my program itself, but my program is normally compiled as a 64-bit executable. I just forgot to compile the DLL for x64, too. I didn't think this would lead to such annoying behaviour, as I believed I would get a linker error then.
2
1
0
0
I am trying to embed some python code in a c++ application i am developing with ms visual studio c++ 2010. But when i run the program, it exits with code 0x01 when i call Py_initialize(). I dont know how to find out what went wrong. the help file says, Py_Initialize can't return an error value, it only fails fataly. But, why did it fail? I am using a self-compiled python27_d.dll, which i created with the msvs project files in the source downloads from python.org.
Tried to embed python in a visual studio 2010 c++ file, exits with code 1
0
1.2
1
0
0
1,264
4,216,988
2010-11-18T16:16:00.000
0
0
1
0
1
c++,python,visual-studio-2010,python-c-api,python-embedding
1
4,217,625
0
2
0
false
0
1
Is there a simple 'hello world' type example of the Py_Initialize code in the Python SDK you can start with? That will at least tell you if you have the compiler environment set up correctly, or if the error is in your usage.
2
1
0
0
I am trying to embed some python code in a c++ application i am developing with ms visual studio c++ 2010. But when i run the program, it exits with code 0x01 when i call Py_initialize(). I dont know how to find out what went wrong. the help file says, Py_Initialize can't return an error value, it only fails fataly. But, why did it fail? I am using a self-compiled python27_d.dll, which i created with the msvs project files in the source downloads from python.org.
Tried to embed python in a visual studio 2010 c++ file, exits with code 1
0
0
1
0
0
1,264
4,228,757
2010-11-19T19:43:00.000
0
0
1
0
0
python,validation,parameters
0
4,228,788
0
8
0
false
0
0
You can cast the argument and try... except the ValueError. If you are using sys.argv, also investigate argparse.
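A sketch of that cast-and-catch approach for the question's three-argument case, a string followed by two integers (the function name and error messages are illustrative):

```python
# Validate sys.argv[1:]: expect exactly (string, int, int) and raise
# ValueError with a readable message otherwise.
def validate_args(args):
    if len(args) != 3:
        raise ValueError("expected exactly 3 arguments")
    name = args[0]
    try:
        second, third = int(args[1]), int(args[2])
    except ValueError:
        raise ValueError("second and third arguments must be integers")
    return name, second, third
```

In the script itself you would call validate_args(sys.argv[1:]); argparse with type=int gives you the same checks plus usage messages for free.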
1
12
0
0
I want to write a python script that takes 3 parameters. The first parameter is a string, the second is an integer, and the third is also an integer. I want to put conditional checks at the start to ensure that the proper number of arguments are provided, and they are the right type before proceeding. I know we can use sys.argv to get the argument list, but I don't know how to test that a parameter is an integer before assigning it to my local variable for use. Any help would be greatly appreciated.
Python: Test if an argument is an integer
0
0
1
0
0
27,532
4,232,228
2010-11-20T10:16:00.000
1
0
0
0
0
python,cocoa,qt4,pyqt
0
4,365,030
0
1
0
true
0
1
I switched from Fink to Macports, and I got the nice widgets.
1
0
0
0
The widgets in my application are the old style mac widgets. How do I make them become the new style ones. I am using pyqt 4.6.3-1 with python 2.7 on os x 10.6. Everything was installed using fink and I installed both qt4-mac and qt4-x11. Not sure which is being used or how to select one or the other.
getting pyqt to use cocoa widgets
0
1.2
1
0
0
475
4,237,164
2010-11-21T09:18:00.000
3
1
0
0
0
python,login,screen-scraping,screen,mechanize
0
4,238,162
0
2
0
false
0
0
lxml.html provides form handling facilities and supports Python 3.
1
10
0
0
is there any way how to use Mechanize with Python 3.x? Or is there any substitute which works in Python 3.x? I've been searching for hours, but I didn't find anything :( I'm looking for way how to login to the site with Python, but the site uses javascript. Thanks in advance, Adam.
Mechanize for Python 3.x
1
0.291313
1
0
0
3,776
4,239,896
2010-11-21T19:36:00.000
1
0
0
0
0
python,wxpython
0
4,240,158
0
1
0
false
0
1
Hi, I did some research on wx.Event.GetEventObject() and found out that I can get the object's details from it, so I solved my problem using that.
1
0
0
0
I am creating a series of buttons(Or windows) etc on RUN time. Now how do i identify when user clicks on these buttons?
How to Identify Dynamically created window/buttons in wxpython?
0
0.197375
1
0
0
77
4,242,540
2010-11-22T05:28:00.000
2
1
0
0
1
python,linux,email
0
4,242,804
0
1
0
true
0
0
POP3 does not have push ability. Like a regular ol' post office you need to actually go to check your e-mail. IMAP does have functionality similar to (but not exactly the same as) mail pushing. I'd suggest taking a look at it.
1
0
0
0
I am developing an email parsing application using python POP3 library on a linux server using Dovecot email server. I have parsed the emails to get the contents and the attachments etc. using POP3 library. Now the issue is how to notify a user or actually the application that a new email has arrived? I guess there should be some notification system on email server itself which I am missing or something on linux which we can use to implement the same. Please suggest. Thanks in advance.
Linux email server, how to know a new email has arrived
0
1.2
1
0
0
240
4,253,557
2010-11-23T07:04:00.000
0
0
0
1
1
python
0
4,255,361
0
3
0
false
0
0
Here's an approach. Write an "agent" in Python. The agent is installed on the various computers. It does whatever processing your need locally. It uses urllib2 to make RESTful HTTP requests of the server. It either posts data or requests work to do or whatever is supposed to go on. Write a "server" in Python. The server is installed on one computer. This is written using wsgiref and is a simple WSGI-based server that serves requests from the various agents scattered around campus. While this requires agent installation, it's very, very simple. It can be made very, very secure (use HTTP Digest Authentication). And the agent's privileges define the level of vulnerability. If the agent is running in an account with relatively few privileges, it's quite safe. The agent shouldn't run as root and the agent's account should not be allowed to su or sudo.
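An illustrative sketch of the server half of this idea, written as a plain WSGI callable that wsgiref.simple_server could host (the /work endpoint and its JSON payload are made up for the example, not a real protocol):

```python
# Minimal WSGI app: agents request work from /work; anything else 404s.
# Serve it with wsgiref.simple_server.make_server("", 8000, agent_app).
def agent_app(environ, start_response):
    if environ["PATH_INFO"] == "/work":
        body = b'{"task": "idle"}'
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = b"not found"
        start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [body]
```

Because a WSGI app is just a function of (environ, start_response), you can unit-test it without opening any socket at all.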
1
4
0
1
Question: Where is a good starting point for learning to write server applications? Info: I'm looking in to writing a distributed computing system to harvest the idle cycles of the couple hundred computers sitting idle around my college's campus. There are systems that come close, but don't quite meet all the requirements I need. (most notable all transactions have to be made through SSH because the network blocks everything else) So I've decided to write my own application. partly to get exactly what I want, but also for experience. Important features: Written in python All transaction made through ssh(this is solved through the simple use of pexpect) Server needs to be able to take potentially hundreds of hits. I'll optimize later, the point being simulation sessions. I feel like those aren't to ridiculous of things to try and accomplish. But with the last one I'm not certain where to even start. I've actually already accomplished the first 2 and written a program that will log into my server, and then print ls -l to a file locally. so that isn't hard. but how do i attach several clients asking the server for simulation data to crunch all at the same time? obviously it feels like threading comes in to play here, but more than that I'm sure. This is where my problem is. Where does one even start researching how to write server applications? Am I even using the right wording? What information is there freely available on the internet and/or what books are there on such? again, specifically python, but a step in the right direction is one more than where i am now. p.s. this seeemed more fitting for stackoverflow than serverfault. Correct me if I am wrong.
where to start programing a server application
0
0
1
0
1
186
4,258,278
2010-11-23T16:25:00.000
0
1
0
0
0
python,authentication,cookies,cookielib
0
4,258,354
0
1
0
false
0
0
You can't set cookies for another domain - browsers will not allow it.
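Browsers won't accept cross-domain cookies, but you can still put cookies into your own local jar by hand; a sketch with Python 3's http.cookiejar (the cookielib module under Python 2), where the helper names are made up:

```python
# Construct a Cookie object by hand and add it to a CookieJar, so
# urllib's opener will send it on matching requests.
from http.cookiejar import Cookie, CookieJar

def make_cookie(name, value, domain):
    # Most fields are boilerplate; only name/value/domain/path matter here.
    return Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=False,
        path="/", path_specified=True,
        secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={},
    )

def jar_with(cookies):
    jar = CookieJar()
    for cookie in cookies:
        jar.set_cookie(cookie)
    return jar
```

Pass the jar to urllib.request.HTTPCookieProcessor and the cookies will be attached to requests whose host matches the cookie's domain.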
1
2
0
0
I'm using python with urllib2 & cookielib and such to open a url. This url set's one cookie in it's header and two more in the page with some javascript. It then redirects to a different page. I can parse out all the relevant info for the cookies being set with the javascript, but I can't for the life of me figure out how to get them into the cookie-jar as cookies. Essentially, when I follow to the site being redirected too, those two cookies have to be accessible by that site. To be very specific, I'm trying to login in to gomtv.net by using their "login in with a Twitter account" feature in python. Anyone?
How do I manually put cookies in a jar?
0
0
1
0
1
255
4,264,076
2010-11-24T06:35:00.000
0
0
0
0
0
javascript,python,parsing,url,dynamically-generated
0
4,264,223
0
2
0
false
1
0
If you want the generated source you'll need a browser engine; I don't think you can do it with Python alone.
2
2
0
0
I have some url to parse, and they used some javascript to create it dynamicly. So if i want to parse the result generated page with python... how can i do that ? Firefox do that well with web developer... so i think it possible ... but i don't know where to start... Thx for help lo
How to see the generated source of a URL page with a Python script, and not only the original source?
0
0
1
0
1
212