Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,602,122 | 2009-10-21T16:51:00.000 | 32 | 0 | 1 | 0 | python,casting,types,coercion | 1,602,151 | 2 | true | 0 | 0 | I think "casting" shouldn't be used for Python; there is only type conversion, and no casts (in the C sense). A type conversion is done e.g. through int(o), where the object o is converted into an integer (actually, an integer object is constructed out of o). Coercion happens in the case of binary operations: if you do x+y, and x and y have different types, they are coerced into a single type before performing the operation. In 2.x, a special method __coerce__ allows objects to control their coercion. | 1 | 35 | 0 | In the Python documentation and on mailing lists I see that values are sometimes "cast", and sometimes "coerced". | What's the difference between casting and coercion in Python? | 1.2 | 0 | 0 | 33,210 |
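A minimal illustration of the distinction drawn in the answer above: explicit type conversion through a constructor versus implicit coercion in a mixed-type operation. (The __coerce__ hook it mentions is Python 2 only and is not shown.)

```python
# Type conversion: an int object is constructed from another object.
o = "42"
n = int(o)      # explicit conversion, not a C-style cast
print(n + 1)    # 43

# Coercion: in a mixed-type operation the operands are brought to a
# common type before the operation is performed.
x = 3           # int
y = 1.5         # float
print(x + y)    # 4.5 -- x is coerced to float for the addition
```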
1,602,177 | 2009-10-21T17:00:00.000 | -3 | 0 | 0 | 0 | python,python-3.x,parallel-processing,cluster-computing | 1,602,353 | 7 | false | 0 | 0 | "Would it be possible to make a python cluster"
Yes.
I love yes/no questions. Anything else you want to know?
(Note that Python 3 has few third-party libraries yet, so you may wanna stay with Python 2 at the moment.) | 1 | 8 | 0 | Would it be possible to make a python cluster, by writing a telnet server, then telnet-ing the commands and output back-and-forth? Has anyone got a better idea for a python compute cluster?
PS. Preferably for python 3.x, if anyone knows how. | Python compute cluster | -0.085505 | 0 | 0 | 18,293 |
1,602,516 | 2009-10-21T18:03:00.000 | -1 | 1 | 0 | 0 | python,frameworks,cgi,pylons | 1,603,021 | 2 | false | 0 | 0 | Maybe you should direct your search towards inter-process communication and make a search process that returns the results to the web server. This search process will be running all the time, assuming you have your own server. | 1 | 2 | 0 | I have a website that right now, runs by creating static html pages from a cron job that runs nightly.
I'd like to add some search and filtering features using a CGI type script, but my script will have enough of a startup time (maybe a few seconds?) that I'd like it to stay resident and serve multiple requests.
This is a side-project I'm doing for fun, and it's not going to be super complex. I don't mind using something like Pylons, but I don't feel like I need or want an ORM layer.
What would be a reasonable approach here?
EDIT: I wanted to point out that for the load I'm expecting and processing I need to do on a request, I'm confident that a single python script in a single process could handle all requests without any slowdowns, especially since my dataset would be memory-resident. | I want to create a "CGI script" in python that stays resident in memory and services multiple requests | -0.099668 | 0 | 0 | 602 |
1,602,919 | 2009-10-21T19:02:00.000 | 2 | 0 | 0 | 0 | python,xml,python-3.x | 1,603,011 | 2 | false | 0 | 0 | Sure it is possible.
The xml.etree.ElementTree module will help you with parsing XML, finding tags and replacing values.
If you know a little bit more about the XML file you want to change, you can probably make the task a bit easier than if you need to write a generic function that will handle any XML file.
If you are already familiar with DOM parsing, there's an xml.dom package to use instead of the ElementTree one. | 1 | 0 | 0 | I have an XML document "abc.xml":
I need to write a function replace(name, newvalue) which can replace the value node with tag 'name' with the new value and write it back to the disk. Is this possible in python? How should I do this? | Setting value for a node in XML document in Python | 0.197375 | 0 | 1 | 2,424 |
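A minimal sketch of the replace(name, newvalue) function the question asks for, using the xml.etree.ElementTree approach from the answer above; the file name "abc.xml" and the assumption that name is an element tag whose text should be replaced are taken from the question.

```python
import xml.etree.ElementTree as ET

def replace(name, newvalue, path="abc.xml"):
    tree = ET.parse(path)
    root = tree.getroot()
    for node in root.iter(name):    # every element whose tag is `name`
        node.text = str(newvalue)
    tree.write(path)                # write the modified document back to disk
```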
1,603,109 | 2009-10-21T19:36:00.000 | 11 | 1 | 0 | 1 | python,linux,scripting,daemons | 8,956,634 | 16 | false | 0 | 0 | how about using $nohup command on linux?
I use it for running my commands on my Bluehost server.
Please advice if I am wrong. | 4 | 213 | 0 | I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as turning it into daemon or service in Linux. Would I also need a loop that never ends in the program, or can it be done by just having the code re executed multiple times? | How to make a Python script run like a service or daemon in Linux | 1 | 0 | 0 | 365,270 |
1,603,109 | 2009-10-21T19:36:00.000 | 7 | 1 | 0 | 1 | python,linux,scripting,daemons | 35,008,431 | 16 | false | 0 | 1 | If you are using a terminal (ssh or something) and you want to keep a long-running script working after you log out from the terminal, you can try this:
screen
apt-get install screen
Create a virtual terminal inside (namely abc): screen -dmS abc
Now we connect to abc: screen -r abc
So, now we can run the python script: python keep_sending_mails.py
From now on, you can directly close your terminal; however, the python script will keep running rather than being shut down, since keep_sending_mails.py's PID is a child process of the virtual screen rather than the terminal (ssh).
If you want to go back check your script running status, you can use screen -r abc again | 4 | 213 | 0 | I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as turning it into daemon or service in Linux. Would I also need a loop that never ends in the program, or can it be done by just having the code re executed multiple times? | How to make a Python script run like a service or daemon in Linux | 1 | 0 | 0 | 365,270 |
1,603,109 | 2009-10-21T19:36:00.000 | 1 | 1 | 0 | 1 | python,linux,scripting,daemons | 20,908,406 | 16 | false | 0 | 0 | Use whatever service manager your system offers - for example under Ubuntu use upstart. This will handle all the details for you such as start on boot, restart on crash, etc. | 4 | 213 | 0 | I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as turning it into daemon or service in Linux. Would I also need a loop that never ends in the program, or can it be done by just having the code re executed multiple times? | How to make a Python script run like a service or daemon in Linux | 0.012499 | 0 | 0 | 365,270 |
1,603,109 | 2009-10-21T19:36:00.000 | 12 | 1 | 0 | 1 | python,linux,scripting,daemons | 19,515,492 | 16 | false | 0 | 0 | cron is clearly a great choice for many purposes. However it doesn't create a service or daemon as you requested in the OP. cron just runs jobs periodically (meaning the job starts and stops), and no more often than once / minute. There are issues with cron -- for example, if a prior instance of your script is still running the next time the cron schedule comes around and launches a new instance, is that OK? cron doesn't handle dependencies; it just tries to start a job when the schedule says to.
If you find a situation where you truly need a daemon (a process that never stops running), take a look at supervisord. It provides a simple way to wrap a normal, non-daemonized script or program and make it operate like a daemon. This is a much better way than creating a native Python daemon. | 4 | 213 | 0 | I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as turning it into daemon or service in Linux. Would I also need a loop that never ends in the program, or can it be done by just having the code re executed multiple times? | How to make a Python script run like a service or daemon in Linux | 1 | 0 | 0 | 365,270 |
1,603,189 | 2009-10-21T19:52:00.000 | 2 | 0 | 1 | 0 | python,jython,signals | 1,603,283 | 1 | true | 0 | 0 | I would imagine Unix-style signals are difficult to do on the JVM, since the JVM has no notion of signals, and it is likely some JNI magic would be required to get this to work.
In Jython 2.5, the module exists, but seems to throw NotImplementedError for most functions. | 1 | 2 | 0 | I can't find any reference to the 'signal' class being left out in Jython. Using Jython 2.1.
Thanks | Why can I not import the Python module 'signal' using Jython, in Linux? | 1.2 | 0 | 0 | 242 |
1,603,658 | 2009-10-21T21:08:00.000 | 0 | 0 | 0 | 1 | python,windows,process | 1,603,828 | 3 | false | 0 | 0 | subprocess.Popen objects come with a kill and a terminate method (differs in which signal you send to the process).
signal.signal allows you to install signal handlers, from which you can call the child's kill method. | 1 | 6 | 0 | I am writing a python program that launches a subprocess (using Popen).
I am reading stdout of the subprocess, doing some filtering, and writing to
stdout of main process.
When I kill the main process (cntl-C) the subprocess keeps running.
How do I kill the subprocess too? The subprocess is likely to run a long time.
Context:
I'm launching only one subprocess at a time, I'm filtering its stdout.
The user might decide to interrupt to try something else.
I'm new to python and I'm using windows, so please be gentle. | kill subprocess when python process is killed? | 0 | 0 | 0 | 3,964 |
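A rough sketch of the approach in the answer above: read and filter the child's stdout, and terminate the child when the parent is interrupted with Ctrl-C. The command name is a placeholder.

```python
import subprocess
import sys

# "some_long_running_tool" is a placeholder for the real command.
proc = subprocess.Popen(["some_long_running_tool"], stdout=subprocess.PIPE)
try:
    for line in proc.stdout:
        sys.stdout.write(line.decode(errors="replace"))  # filtering would go here
except KeyboardInterrupt:       # Ctrl-C in the main process
    proc.terminate()            # or proc.kill() for a harder stop
    proc.wait()
    raise
```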
1,603,688 | 2009-10-21T21:13:00.000 | 2 | 0 | 0 | 0 | python,algorithm,image,image-processing,python-imaging-library | 1,603,723 | 4 | false | 0 | 1 | If you know the statespace of your data, you can use Principal Component Analysis. With PCA all of the objects must be posed (in the center of the screen). PCA will not do detection, but it will seperate objects into unique layers in which you can identify as being a triangle, etc. Also note: this is not scale or rotation invariant.
[I can't remember what this technique is called, but it's similar to how the post office does handwriting recognition]
If you can handle only non-curved surfaces, you could do edge detection, and then do sampling at intersections to get an approximation of similarity. | 1 | 36 | 0 | What I want to do is image recognition for a simple app:
given image (500 x 500) pxs ( 1 color background )
the image will have only 1 geometric figure (triangle or square or smaleyface :) ) of (50x50) pxs.
python will do the recognition of the figure and display what geometric figure is.
any links? any hints? any API? thxs :) | python image recognition | 0.099668 | 0 | 0 | 38,079 |
1,603,940 | 2009-10-21T22:08:00.000 | 2 | 0 | 1 | 0 | python,traceback | 1,843,184 | 7 | true | 0 | 0 | What about not changing the traceback? The two things you request can both be done more easily in a different way.
If the exception from the library is caught in the developer's code and a new exception is raised instead, the original traceback will of course be tossed. This is how exceptions are generally handled... if you just allow the original exception to be raised but you munge it to remove all the "upper" frames, the actual exception won't make sense since the last line in the traceback would not itself be capable of raising the exception.
To strip out the last few frames, you can request that your tracebacks be shortened... things like traceback.print_exception() take a "limit" parameter which you could use to skip the last few entries.
That said, it should be quite possible to munge the tracebacks if you really need to... but where would you do it? If in some wrapper code at the very top level, then you could simply grab the traceback, take a slice to remove the parts you don't want, and then use functions in the "traceback" module to format/print as desired. | 1 | 28 | 0 | I'm working on a Python library used by third-party developers to write extensions for our core application.
I'd like to know if it's possible to modify the traceback when raising exceptions, so the last stack frame is the call to the library function in the developer's code, rather than the line in the library that raised the exception. There are also a few frames at the bottom of the stack containing references to functions used when first loading the code that I'd ideally like to remove too.
Thanks in advance for any advice! | How can I modify a Python traceback object when raising an exception? | 1.2 | 0 | 0 | 12,932 |
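A rough sketch of the "munge it at the top level" idea from the answer above: extract the traceback, drop some outer frames, and reformat with the traceback module. How many frames to drop (SKIP_OUTER here) is an assumption that depends on the library's own loading code.

```python
import sys
import traceback

SKIP_OUTER = 2   # frames added by the library's loading machinery (assumption)

def report(exc_type, exc_value, tb):
    frames = traceback.extract_tb(tb)       # list of frame summaries
    trimmed = frames[SKIP_OUTER:]           # drop the outermost frames
    print("Traceback (most recent call last):")
    print("".join(traceback.format_list(trimmed)), end="")
    print("".join(traceback.format_exception_only(exc_type, exc_value)), end="")

def inner():
    raise ValueError("boom")

def outer():
    inner()

try:
    outer()
except ValueError:
    report(*sys.exc_info())    # prints a traceback without the outer frames
```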
1,604,079 | 2009-10-21T22:43:00.000 | 0 | 0 | 1 | 0 | python,pylons,paste | 1,843,498 | 4 | false | 1 | 0 | To answer your basic question directly, you should be able to use threads just as you'd like. The "killing hung threads" part is paste cleaning up its own threads, not yours.
There are other packages that might help, etc, but I'd suggest you start with simple threads and see how far you get. Only then will you know what you need next.
(Note, "Thread.daemon" should be mostly irrelevant to you here. Setting that true will ensure a thread you start will not prevent the entire process from exiting. Doing so would mean, however, that if the process exited "cleanly" (as opposed to being forced to exit) your thread would be terminated even if it hadn't finished its work. Whether that's a problem, and how you handle things like that, depend entirely on your own requirements and design.) | 2 | 1 | 0 | I'm writing a web application using pylons and paste. I have some work I want to do after an HTTP request is finished (send some emails, write some stuff to the db, etc) that I don't want to block the HTTP request on.
If I start a thread to do this work, is that OK? I always see this stuff about paste killing off hung threads, etc. Will it kill my threads which are doing work?
What else can I do here? Is there a way I can make the request return but have some code run after it's done?
Thanks. | starting my own threads within python paste | 0 | 0 | 0 | 862 |
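A minimal sketch of handing the post-request work to a plain thread, as the answer above suggests; send_emails is a placeholder, and whether daemon should be set depends on whether the work may be dropped on shutdown (see the note about Thread.daemon).

```python
import threading

def send_emails(payload):
    pass    # placeholder for the real post-request work (emails, DB writes, ...)

def after_request(payload):
    worker = threading.Thread(target=send_emails, args=(payload,))
    worker.daemon = False   # False: a clean shutdown waits for the work to finish
    worker.start()          # returns immediately, so the HTTP response is not blocked
```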
1,604,079 | 2009-10-21T22:43:00.000 | 0 | 0 | 1 | 0 | python,pylons,paste | 1,768,292 | 4 | false | 1 | 0 | Take a look at gearman, it was specifically made for farming out tasks to 'workers' to handle. They can even handle it in a different language entirely. You can come back and ask if the task was completed, or just let it complete. That should work well for many tasks.
If you absolutely need to ensure it was completed, I'd suggest queuing tasks in a database or somewhere persistent, then have a separate process that runs through it ensuring each one gets handled appropriately. | 2 | 1 | 0 | I'm writing a web application using pylons and paste. I have some work I want to do after an HTTP request is finished (send some emails, write some stuff to the db, etc) that I don't want to block the HTTP request on.
If I start a thread to do this work, is that OK? I always see this stuff about paste killing off hung threads, etc. Will it kill my threads which are doing work?
What else can I do here? Is there a way I can make the request return but have some code run after it's done?
Thanks. | starting my own threads within python paste | 0 | 0 | 0 | 862 |
1,604,391 | 2009-10-22T00:25:00.000 | 1 | 0 | 1 | 0 | python,oop | 1,604,426 | 5 | false | 0 | 0 | There are many 'mindsets' that you could adopt to help in the design process (some of which point towards OO and some that don't). I think it is often better to start with questions rather than answers (i.e. rather than say, 'how can I apply inheritance to this' you should ask how this system might expect to change over time).
Here's a few questions to answer that might point you towards design principles:
Are others going to use this API? Are they likely to break it? (info hiding)
Do I need to deploy this across many machines? (state management, lifecycle management)
Do I need to interoperate with other systems, runtimes, languages? (abstraction and standards)
What are my performance constraints? (state management, lifecycle management)
What kind of security environment does this component live in? (abstraction, info hiding, interoperability)
How would I construct my objects, assuming I used some? (configuration, inversion of control, object decoupling, hiding implementation details)
These aren't direct answers to your question, but they might put you in the right frame of mind to answer it yourself. :) | 3 | 6 | 0 | I'm trying to learn object oriented programming, but am having a hard time overcoming my structured programming background (mainly C, but many others over time). I thought I'd write a simple check register program as an exercise. I put something together pretty quickly (python is a great language), with my data in some global variables and with a bunch of functions. I can't figure out if this design can be improved by creating a number of classes to encapsulate some of the data and functions and, if so, how to change the design.
My data is basically a list of accounts ['checking', 'saving', 'Amex'], a list of categories ['food', 'shelter', 'transportation'] and lists of dicts that represent transactions [{'date':xyz, 'cat':xyz, 'amount':xyz, 'description':xzy]. Each account has an associated list of dicts.
I then have functions at the account level (create-acct(), display-all-accts(), etc.) and the transaction level (display-entries-in-account(), enter-a-transaction(), edit-a-transaction(), display-entries-between-dates(), etc.)
The user sees a list of accounts, then can choose an account and see the underlying transactions, with ability to add, delete, edit, etc. the accounts and transactions.
I currently implement everything in one large class, so that I can use self.variable throughout, rather than explicit globals.
In short, I'm trying to figure out if re-organizing this into some classes would be useful and, if so, how to design those classes. I've read some oop books (most recently Object-Oriented Thought Process). I like to think my existing design is readable and does not repeat itself.
Any suggestions would be appreciated. | Object oriented design? | 0.039979 | 0 | 0 | 1,625 |
1,604,391 | 2009-10-22T00:25:00.000 | 7 | 0 | 1 | 0 | python,oop | 1,604,427 | 5 | false | 0 | 0 | You don't have to throw out structured programming to do object-oriented programming. The code is still structured, it just belongs to the objects rather than being separate from them.
In classical programming, code is the driving force that operates on data, leading to a dichotomy (and the possibility that code can operate on the wrong data).
In OO, data and code are inextricably entwined - an object contains both data and the code to operate on that data (although technically the code (and sometimes some data) belongs to the class rather than an individual object). Any client code that wants to use those objects should do so only by using the code within that object. This prevents the code/data mismatch problem.
For a bookkeeping system, I'd approach it as follows:
Low-level objects are accounts and categories (actually, in accounting, there's no difference between these; this is a false separation only exacerbated by Quicken et al to separate balance sheet items from P&L - I'll refer to them as accounts only). An account object consists of (for example) an account code, name and starting balance, although in the accounting systems I've worked on, starting balance is always zero - I've always used a "startup" transaction to set the balances initially.
Transactions are a balanced object which consist of a group of accounts/categories with associated movements (changes in dollar value). By balanced, I mean they must sum to zero (this is the crux of double entry accounting). This means it's a date, description and an array or vector of elements, each containing an account code and value.
The overall accounting "object" (the ledger) is then simply the list of all accounts and transactions.
Keep in mind that this is the "back-end" of the system (the data model). You will hopefully have separate classes for viewing the data (the view) which will allow you to easily change it, depending on user preferences. For example, you may want the whole ledger, just the balance sheet or just the P&L. Or you may want different date ranges.
One thing I'd stress to make a good accounting system: you do need to think like a bookkeeper. By that I mean lose the artificial difference between "accounts" and "categories" since it will make your system a lot cleaner (you need to be able to have transactions between two asset-class accounts (such as a bank transfer) and this won't work if every transaction needs a "category"). The data model should reflect the data, not the view.
The only difficulty there is remembering that asset-class accounts have the opposite sign from which you expect (negative values for your cash-at-bank mean you have money in the bank and your very high positive value loan for that company sports car is a debt, for example). This will make the double-entry aspect work perfectly but you have to remember to reverse the signs of asset-class accounts (assets, liabilities and equity) when showing or printing the balance sheet. | 3 | 6 | 0 | I'm trying to learn object oriented programming, but am having a hard time overcoming my structured programming background (mainly C, but many others over time). I thought I'd write a simple check register program as an exercise. I put something together pretty quickly (python is a great language), with my data in some global variables and with a bunch of functions. I can't figure out if this design can be improved by creating a number of classes to encapsulate some of the data and functions and, if so, how to change the design.
My data is basically a list of accounts ['checking', 'saving', 'Amex'], a list of categories ['food', 'shelter', 'transportation'] and lists of dicts that represent transactions [{'date':xyz, 'cat':xyz, 'amount':xyz, 'description':xzy]. Each account has an associated list of dicts.
I then have functions at the account level (create-acct(), display-all-accts(), etc.) and the transaction level (display-entries-in-account(), enter-a-transaction(), edit-a-transaction(), display-entries-between-dates(), etc.)
The user sees a list of accounts, then can choose an account and see the underlying transactions, with ability to add, delete, edit, etc. the accounts and transactions.
I currently implement everything in one large class, so that I can use self.variable throughout, rather than explicit globals.
In short, I'm trying to figure out if re-organizing this into some classes would be useful and, if so, how to design those classes. I've read some oop books (most recently Object-Oriented Thought Process). I like to think my existing design is readable and does not repeat itself.
Any suggestions would be appreciated. | Object oriented design? | 1 | 0 | 0 | 1,625 |
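A bare-bones sketch of the data model the answer above describes (accounts, balanced transactions, and a ledger holding both); the class and field names are illustrative assumptions, and amounts are kept in integer cents so the balance check stays exact.

```python
class Account:
    def __init__(self, code, name):
        self.code = code
        self.name = name      # starting balance is set by a "startup" transaction

class Transaction:
    def __init__(self, date, description, movements):
        # movements: list of (account_code, amount_in_cents) pairs summing to zero
        if sum(amount for _, amount in movements) != 0:
            raise ValueError("transaction does not balance")
        self.date = date
        self.description = description
        self.movements = movements

class Ledger:
    def __init__(self):
        self.accounts = {}        # account code -> Account
        self.transactions = []    # every Transaction ever recorded
```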
1,604,391 | 2009-10-22T00:25:00.000 | 3 | 0 | 1 | 0 | python,oop | 1,604,403 | 5 | false | 0 | 0 | "My data is basically a list of accounts"
Account is a class.
"dicts that represent transactions"
Transaction appears to be a class. You happen to have elected to represent this as a dict.
That's your first pass at OO design. Focus on the Responsibilities and Collaborators.
You have at least two classes of objects. | 3 | 6 | 0 | I'm trying to learn object oriented programming, but am having a hard time overcoming my structured programming background (mainly C, but many others over time). I thought I'd write a simple check register program as an exercise. I put something together pretty quickly (python is a great language), with my data in some global variables and with a bunch of functions. I can't figure out if this design can be improved by creating a number of classes to encapsulate some of the data and functions and, if so, how to change the design.
My data is basically a list of accounts ['checking', 'saving', 'Amex'], a list of categories ['food', 'shelter', 'transportation'] and lists of dicts that represent transactions [{'date':xyz, 'cat':xyz, 'amount':xyz, 'description':xzy]. Each account has an associated list of dicts.
I then have functions at the account level (create-acct(), display-all-accts(), etc.) and the transaction level (display-entries-in-account(), enter-a-transaction(), edit-a-transaction(), display-entries-between-dates(), etc.)
The user sees a list of accounts, then can choose an account and see the underlying transactions, with ability to add, delete, edit, etc. the accounts and transactions.
I currently implement everything in one large class, so that I can use self.variable throughout, rather than explicit globals.
In short, I'm trying to figure out if re-organizing this into some classes would be useful and, if so, how to design those classes. I've read some oop books (most recently Object-Oriented Thought Process). I like to think my existing design is readable and does not repeat itself.
Any suggestions would be appreciated. | Object oriented design? | 0.119427 | 0 | 0 | 1,625 |
1,604,811 | 2009-10-22T03:06:00.000 | 1 | 0 | 0 | 0 | python,user-interface | 1,604,842 | 2 | false | 0 | 1 | If by printf you mean exactly that call from C code, you need to redirect (and un-buffer) your standard output (file descriptor 1) to somewhere you can pick up the data from -- far from trivial, esp. in Windows, although maybe doable. But why not just change that call in your C code to something more sensible? (Worst case, a geprintf function of your own devising that mimics printf to build a string then directs that string appropriately).
If you actually mean print statements in Python code, it's much easier -- just set sys.stdout to an object with a write method accepting a string, and you can have that method do whatever you want, including logging, writing to a GUI window, whatever you wish. Ah were it that simple at the C level!-) | 1 | 0 | 0 | I am using the Python/C API with my app and am wondering how you can get console output with a gui app. When there is a script error, it is displayed via printf but this obviously has no effect with a gui app. I want to be able to obtain the output without creating a console. Can this be done?
Edit - Im using Windows, btw.
Edit - The Python/C library internally calls printf and does so before any script can be loaded and run. If there is an error I want to be able to get it. | How to get output? | 0.099668 | 0 | 0 | 173 |
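A minimal sketch of the Python-level trick in the answer above: any object with a write(string) method can stand in for sys.stdout. Here it forwards to a log file; appending to a GUI widget would work the same way.

```python
import sys

class GuiWriter:
    """Stands in for sys.stdout: anything with a write(str) method will do."""
    def __init__(self, logfile):
        self.logfile = logfile

    def write(self, text):
        self.logfile.write(text)   # could instead append to a GUI text widget

    def flush(self):               # some code expects flush() to exist
        self.logfile.flush()

log = open("app_output.log", "a")
sys.stdout = GuiWriter(log)
print("this line goes to app_output.log instead of the console")
```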
1,605,006 | 2009-10-22T04:20:00.000 | 1 | 0 | 1 | 0 | python,dll,wxwidgets,py2exe,gui2exe | 5,768,777 | 2 | false | 0 | 1 | what you need is to go to microsoft's download site and get visual C++ 2008 redistributed package. Tell it to do a repair and search for the driver. Copy the driver to the DLL folder in the python directory | 1 | 9 | 0 | I'm trying to compile my python script into a single .exe using gui2exe (which uses py2exe to create a .exe). My program is using wxWidgets and everytime I try to compile it I get the following error message:
error MSVCP90.dll: No such file or directory.
I have already downloaded and installed the VC++ redistributable package, so I should have this .dll shouldn't I? | Making a Windows .exe with gui2exe does not work because of missing MSVCP90.dll | 0.099668 | 0 | 0 | 3,149 |
1,605,022 | 2009-10-22T04:29:00.000 | 2 | 0 | 1 | 0 | python,file,runtime | 1,605,039 | 2 | true | 0 | 1 | The best way to tell is to try it on 'clean' installations of windows and see what it complains about. Virtual machines are a good way to do that. | 1 | 2 | 0 | I have an app that uses the python/c api and I was wondering what files I need to distribute with it? The app runs on Windows and links with libpython31.a Are there any other files? I tried the app on a seperate Win2k system and it said that python31.dll was needed so theres at least one.
Edit - My app is written in C++ and uses the Python/C api as noted below. | What files do I need to include with my Python app? | 1.2 | 0 | 0 | 260 |
1,605,662 | 2009-10-22T08:04:00.000 | 1 | 0 | 0 | 0 | python,django,django-models | 37,202,110 | 7 | false | 1 | 0 | In django 1.6
First we have to run: python manage.py sql <app name>
Then we have to run - python manage.py syncdb | 3 | 86 | 0 | I have recently updated my model, added a BooleanField to it however when I do python manage.py syncdb, it doesn't add the new field to the database for the model. How can I fix this ? | django syncdb and an updated model | 0.028564 | 0 | 0 | 51,126 |
1,605,662 | 2009-10-22T08:04:00.000 | 0 | 0 | 0 | 0 | python,django,django-models | 27,652,020 | 7 | false | 1 | 0 | If you run Django with Apache and MySQL, restart apache after making migration with makemigrations. | 3 | 86 | 0 | I have recently updated my model, added a BooleanField to it however when I do python manage.py syncdb, it doesn't add the new field to the database for the model. How can I fix this ? | django syncdb and an updated model | 0 | 0 | 0 | 51,126 |
1,605,662 | 2009-10-22T08:04:00.000 | 2 | 0 | 0 | 0 | python,django,django-models | 1,605,670 | 7 | false | 1 | 0 | Havent used django in a while, but i seem to remember that syncdb does perform alter commands on db tables. you have to drop the table then run again and it will create again.
edit: sorry does NOT perform alter. | 3 | 86 | 0 | I have recently updated my model, added a BooleanField to it however when I do python manage.py syncdb, it doesn't add the new field to the database for the model. How can I fix this ? | django syncdb and an updated model | 0.057081 | 0 | 0 | 51,126 |
1,605,706 | 2009-10-22T08:18:00.000 | 0 | 0 | 0 | 0 | python,django,internationalization | 2,450,262 | 3 | false | 1 | 0 | Depends on application and architecture...
The hack provided by Ignacio should work, but what if you run it in a thread where translation has not yet been activated?
I would use Ignacio solution + add Queue visible by all threads, monkeypatch trans_real.activate function and set attribute in queue. | 1 | 2 | 0 | django.utils.translation.get_language() returns default locale if translation is not activated. Is there a way to find out whether the translation is activated (via translation.activate()) or not? | Django: How to detect if translation is activated? | 0 | 0 | 0 | 735 |
1,606,700 | 2009-10-22T11:53:00.000 | 2 | 0 | 1 | 0 | python,tkinter | 1,606,815 | 1 | false | 0 | 1 | Check the .after method of your Tk() object. This allows you to use Tk's timer to fire events within the gui's own loop by giving it a length of time and a callback method. | 1 | 1 | 0 | I am writing a python app using Tkinter for buttons and graphics and having trouble getting a timer working, what I need is a sample app that has three buttons and a label.
[start timer] [stop timer] [quit]
When I press the start button a function allows the label to count up from zero every 5 seconds, the stop button stops the timer and the quit button quits the app.
I need to be able to press stop timer and quit at any time, and the time.sleep(5) function locks everything up so I can't use that.
currently i'm using threading.timer(5,do_count_function) and getting nowhere !
I'm a vb.net programmer, so python is a bit new to me, but hey, i'm trying. | Timer in Python | 0.379949 | 0 | 0 | 1,711 |
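A small sketch of the after()-based approach from the answer above: start/stop buttons and a label that counts up every 5 seconds, with no blocking sleep so the quit button stays responsive.

```python
import tkinter as tk    # the module is named Tkinter on Python 2

root = tk.Tk()
count = tk.IntVar(value=0)
job = None                          # id of the pending after() callback, if any

def tick():
    global job
    count.set(count.get() + 1)
    job = root.after(5000, tick)    # re-arm the timer: fire again in 5000 ms

def start():
    global job
    if job is None:                 # don't start two timers
        job = root.after(5000, tick)

def stop():
    global job
    if job is not None:
        root.after_cancel(job)      # cancel the pending callback
        job = None

tk.Label(root, textvariable=count).pack()
tk.Button(root, text="start timer", command=start).pack()
tk.Button(root, text="stop timer", command=stop).pack()
tk.Button(root, text="quit", command=root.destroy).pack()
root.mainloop()
```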
1,606,746 | 2009-10-22T12:02:00.000 | 1 | 0 | 0 | 1 | python,netbeans | 1,606,803 | 2 | false | 1 | 0 | I just installed Python for NetBeans yesterday and hadn't tried the debugger, so just tried it, and I got the same error. So I thought maybe it's a Firewall issue, disabled my Firewall and retried it, and then it worked.
However I restarted the Firewall and now it's still working, so I don't know. I saw the Netbeans options for Python have an input to specify the beginning listening port (which mine was 29000 not 11111 like yours). | 2 | 4 | 0 | I have a problem with debugging Python programs under the Netbeans IDE. When I start debugging, the debugger writes the following log and error. Thank you for help.
[LOG]PythonDebugger : overall Starting
>>>[LOG]PythonDebugger.taskStarted : I am Starting a new Debugging Session ...
[LOG]This window is an interactive debugging context aware Python Shell
[LOG]where you can enter python console commands while debugging
>>>c:\documents and settings\aster\.netbeans\6.7\config\nbpython\debug\nbpythondebug\jpydaemon.py
args = ['C:\\Documents and Settings\\aster\\.netbeans\\6.7\\config\\nbPython\\debug\\nbpythondebug\\jpydaemon.py', 'localhost', '11111']
localDebuggee= None
JPyDbg connecting localhost on in= 11111 /out= 11112
ERROR:JPyDbg connection failed errno(10061) : Connection refused
Debug session normal end
ERROR :: Server Socket listen for debuggee has timed out (more than 20 seconds wait) java.net.SocketTimeoutException: Accept timed out
thanks for answer | Python debugging in Netbeans | 0.099668 | 0 | 0 | 2,156 |
1,606,746 | 2009-10-22T12:02:00.000 | 1 | 0 | 0 | 1 | python,netbeans | 1,636,617 | 2 | false | 1 | 0 | For Python I like WingIDE from Wingware. | 2 | 4 | 0 | I have a problem with debugging Python programs under the Netbeans IDE. When I start debugging, the debugger writes the following log and error. Thank you for help.
[LOG]PythonDebugger : overall Starting
>>>[LOG]PythonDebugger.taskStarted : I am Starting a new Debugging Session ...
[LOG]This window is an interactive debugging context aware Python Shell
[LOG]where you can enter python console commands while debugging
>>>c:\documents and settings\aster\.netbeans\6.7\config\nbpython\debug\nbpythondebug\jpydaemon.py
args = ['C:\\Documents and Settings\\aster\\.netbeans\\6.7\\config\\nbPython\\debug\\nbpythondebug\\jpydaemon.py', 'localhost', '11111']
localDebuggee= None
JPyDbg connecting localhost on in= 11111 /out= 11112
ERROR:JPyDbg connection failed errno(10061) : Connection refused
Debug session normal end
ERROR :: Server Socket listen for debuggee has timed out (more than 20 seconds wait) java.net.SocketTimeoutException: Accept timed out
thanks for answer | Python debugging in Netbeans | 0.099668 | 0 | 0 | 2,156 |
1,607,641 | 2009-10-22T14:29:00.000 | 2 | 1 | 0 | 1 | python,linux,profiling | 1,608,157 | 2 | false | 0 | 0 | I'm not sure if python will provide the low level information you are looking for. You might want to look at oprofile and latencytop though. | 1 | 1 | 0 | I've been using Python's built-in cProfile tool with some pretty good success. But I'd like to be able to access more information such as how long I'm waiting for I/O (and what kind of I/O I'm waiting on) or how many cache misses I have. Are there any Linux tools to help with this beyond your basic time command? | What profiling tools exist for Python on Linux beyond the ones included in the standard library? | 0.197375 | 0 | 0 | 1,160 |
1,608,724 | 2009-10-22T17:26:00.000 | 1 | 0 | 0 | 1 | python,subprocess | 1,608,884 | 3 | true | 0 | 0 | No, but you can simply subclass and extend the Popen class to store the time it was created. | 1 | 1 | 0 | Is there an easy way to find out the current (real or cpu) run time of a subprocess.Popen instance? | Run time of a subprocess.Popen instance | 1.2 | 0 | 0 | 1,488 |
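A sketch of the subclassing idea from the answer above: remember the wall-clock time when the Popen object is created and expose the elapsed run time (CPU time would need a different mechanism).

```python
import subprocess
import time

class TimedPopen(subprocess.Popen):
    def __init__(self, *args, **kwargs):
        self.started_at = time.time()        # wall-clock creation time
        super().__init__(*args, **kwargs)

    def runtime(self):
        """Seconds elapsed since this process object was created."""
        return time.time() - self.started_at

proc = TimedPopen(["sleep", "2"])    # "sleep" assumes a Unix-like system
proc.wait()
print("ran for about %.1f seconds" % proc.runtime())
```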
1,609,460 | 2009-10-22T19:32:00.000 | 5 | 0 | 1 | 0 | python,pydev | 1,609,487 | 3 | false | 0 | 0 | You need to set your PYTHONPATH accordingly (Google search is your friend) or use *.pth in your installation site-packages directory pointing to your project path. Don't forget to set your interpreter details with Pydev (Window->Preferences->Pydev->interpreter). | 1 | 8 | 0 | Newbie question (I'm just getting started with Python and Pydev):
I've created a project "Playground" with (standard?) src/root sub-folder. In there I've created example.py.
How do I import my "example" module into Pydev's interactive console?
">>> import example" gives: "ImportError: No module named example" | How import Pydev project into interactive console? | 0.321513 | 0 | 0 | 4,934 |
1,610,305 | 2009-10-22T22:18:00.000 | 23 | 0 | 1 | 0 | python,user-interface | 1,610,340 | 5 | false | 0 | 0 | Ctrl+[ should do the trick for unindenting.
Conversely, you can indent with Ctrl+], but IDLE generally handles indenting much better than unindenting. | 3 | 9 | 0 | I want to write the following statment in IDLE (Python GUI)
>>> if x == 0:
... x = 0
... print('Negative changed to zero')
... elif x == 0:
How can I get the unindention for the elif statment ?
Some additional facts:
I tried backspace and shift + tab, that doesn't help.
Runing on Windows.
Thanks.
Sorry, you should just use backspace, thing is that ">>>" does not intent on indentation.
That means that in:
>>> if x == 54:
x = 4
elif y = 4:
y = 6
elif is just as indented as the if statment.
Sorry to waste your time..., although you can blame the IDE for making an un-selfexplanitory UI. | How do I unindent using IDLE (Python gui) | 1 | 0 | 0 | 10,493 |
1,610,305 | 2009-10-22T22:18:00.000 | 2 | 0 | 1 | 0 | python,user-interface | 1,610,353 | 5 | false | 0 | 0 | Backspace works for me.
If you go to Options->Configure IDLE and click on the Keys tab, what options are selected? It might make a difference - I have IDLE Classic Windows. | 3 | 9 | 0 | I want to write the following statment in IDLE (Python GUI)
>>> if x == 0:
... x = 0
... print('Negative changed to zero')
... elif x == 0:
How can I get the unindention for the elif statment ?
Some additional facts:
I tried backspace and shift + tab, that doesn't help.
Runing on Windows.
Thanks.
Sorry, you should just use backspace, thing is that ">>>" does not intent on indentation.
That means that in:
>>> if x == 54:
x = 4
elif y = 4:
y = 6
elif is just as indented as the if statment.
Sorry to waste your time..., although you can blame the IDE for making an un-selfexplanitory UI. | How do I unindent using IDLE (Python gui) | 0.07983 | 0 | 0 | 10,493 |
1,610,305 | 2009-10-22T22:18:00.000 | 4 | 0 | 1 | 0 | python,user-interface | 35,314,626 | 5 | false | 0 | 0 | ctrl + [ in Windows
command + [ in Mac | 3 | 9 | 0 | I want to write the following statment in IDLE (Python GUI)
>>> if x == 0:
... x = 0
... print('Negative changed to zero')
... elif x == 0:
How can I get the unindention for the elif statment ?
Some additional facts:
I tried backspace and shift + tab, that doesn't help.
Runing on Windows.
Thanks.
Sorry, you should just use backspace, thing is that ">>>" does not intent on indentation.
That means that in:
>>> if x == 54:
x = 4
elif y = 4:
y = 6
elif is just as indented as the if statment.
Sorry to waste your time..., although you can blame the IDE for making an un-selfexplanitory UI. | How do I unindent using IDLE (Python gui) | 0.158649 | 0 | 0 | 10,493 |
1,610,371 | 2009-10-22T22:30:00.000 | 3 | 0 | 1 | 0 | python,iterator | 1,610,418 | 5 | false | 0 | 0 | A class supporting the __iter__ method will return an iterator object instance: an object supporting the next() method. This object will be usable in the statements "for" and "in". | 2 | 23 | 0 | Despite reading up on it, I still dont quite understand how __iter__ works. What would be a simple explaination?
I've seen def__iter__(self): return self. I don't see how this works or the steps on how this works. | How does __iter__ work? | 0.119427 | 0 | 0 | 13,949 |
1,610,371 | 2009-10-22T22:30:00.000 | 28 | 0 | 1 | 0 | python,iterator | 1,610,437 | 5 | false | 0 | 0 | As simply as I can put it:
__iter__ defines a method on a class which will return an iterator (an object that successively yields the next item contained by your object).
The iterator object that __iter__() returns can be pretty much any object, as long as it defines a next() method.
The next method will be called by statements like for ... in ... to yield the next item, and next() should raise the StopIteration exception when there are no more items.
What's great about this is it lets you define how your object is iterated, and __iter__ provides a common interface that every other python function knows how to work with. | 2 | 23 | 0 | Despite reading up on it, I still dont quite understand how __iter__ works. What would be a simple explaination?
I've seen def__iter__(self): return self. I don't see how this works or the steps on how this works. | How does __iter__ work? | 1 | 0 | 0 | 13,949 |
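A small example of the protocol the answer above describes; note that on Python 3 the iterator's method is spelled __next__ (it was next() in Python 2, as in the answer).

```python
class Countdown:
    """Iterable: __iter__ returns an iterator that yields n, n-1, ..., 1."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return CountdownIterator(self.n)

class CountdownIterator:
    def __init__(self, n):
        self.current = n

    def __iter__(self):        # iterators are conventionally iterable themselves
        return self

    def __next__(self):        # spelled next() in Python 2
        if self.current <= 0:
            raise StopIteration    # tells the for loop there are no more items
        self.current -= 1
        return self.current + 1

for i in Countdown(3):
    print(i)                   # prints 3, 2, 1
```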
1,610,822 | 2009-10-23T00:34:00.000 | 0 | 0 | 1 | 0 | python,distribution,cpython,pypy | 1,610,835 | 2 | false | 0 | 0 | I think it's clear when to use IronPython or Jython. If you want to use Python on CLR/JVM, either as the main language or a scripting language for your C#/Java application. | 1 | 5 | 0 | I have been programming in Python for a few years now and have always used CPython without thinking about it. The books and documentation I have read always refer to CPython too.
When does it make sense to use an alternative distribution (PyPy, Stackless, etc)?
Thanks! | when to use an alternative Python distribution? | 0 | 0 | 0 | 341 |
1,611,543 | 2009-10-23T05:31:00.000 | 5 | 1 | 0 | 1 | python,pythonw | 1,611,558 | 1 | true | 0 | 0 | sys.executable -- "A string giving the name of the executable binary for the Python interpreter, on systems where this makes sense." | 1 | 2 | 0 | I would like to redirect stderr and stdout to files when run inside of pythonw. How can I determine whether a script is running in pythonw or in python? | Determine if a script is running in pythonw? | 1.2 | 0 | 0 | 1,325 |
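A sketch of how sys.executable can be used for the check described above; redirecting to log files is an assumption about what the asker wants to do with the result.

```python
import os
import sys

def running_under_pythonw():
    # On Windows the GUI interpreter is pythonw.exe, and sys.executable names it.
    return os.path.basename(sys.executable).lower().startswith("pythonw")

if running_under_pythonw():
    sys.stdout = open("stdout.log", "w")
    sys.stderr = open("stderr.log", "w")
```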
1,612,733 | 2009-10-23T11:04:00.000 | -15 | 0 | 1 | 0 | python,distutils | 1,613,085 | 14 | false | 0 | 0 | Figured out a workaround: I renamed my lgpl2.1_license.txt to lgpl2.1_license.txt.py, and put some triple quotes around the text. Now I don't need to use the data_files option nor to specify any absolute paths. Making it a Python module is ugly, I know, but I consider it less ugly than specifying absolute paths. | 1 | 261 | 0 | How do I make setup.py include a file that isn't part of the code? (Specifically, it's a license file, but it could be any other thing.)
I want to be able to control the location of the file. In the original source folder, the file is in the root of the package. (i.e. on the same level as the topmost __init__.py.) I want it to stay exactly there when the package is installed, regardless of operating system. How do I do that? | Including non-Python files with setup.py | -1 | 0 | 0 | 157,529 |
1,613,927 | 2009-10-23T14:41:00.000 | 2 | 0 | 1 | 0 | python,debugging,pydev | 6,309,211 | 3 | false | 0 | 0 | You can run arbitrary commands in the console during the breakpoint. For my needs, this typically achieves the same purpose as coding live, although I do wish it were as elegant as simply using the editor. | 2 | 6 | 0 | I am new to python and haven't been able to find out whether this is possible or not.
I am using the PyDev plugin under Eclipse, and basically all I want to find out is, is it possible to edit code whilst you're sitting at a breakpoint? I.e. Edit code whilst you're debugging.
It allows me to do this at present, but it seems to still be executing the line of code that previously existed before I made changes.
Also, are you able to drag program execution back like you can in VBA and C# for example?
If either of these are possible, how can I enable them? | Python Debugging: code editing on the fly | 0.132549 | 0 | 0 | 1,488 |
1,613,927 | 2009-10-23T14:41:00.000 | 2 | 0 | 1 | 0 | python,debugging,pydev | 1,614,052 | 3 | false | 0 | 0 | When you start a Python program, it will be compiled into bytecode (and possibly saved as .pyc file). That means you can change the source but since you don't "open" the source again, the change won't be picked up.
There are systems like TurboGears (a web framework) which detect these changes and restart themselves but that's probably going to confuse the debugger.
Going back in time also isn't possible currently since the bytecode interpreter would need support for this. | 2 | 6 | 0 | I am new to python and haven't been able to find out whether this is possible or not.
I am using the PyDev plugin under Eclipse, and basically all I want to find out is, is it possible to edit code whilst you're sitting at a breakpoint? I.e. Edit code whilst you're debugging.
It allows me to do this at present, but it seems to still be executing the line of code that previously existed before I made changes.
Also, are you able to drag program execution back like you can in VBA and C# for example?
If either of these are possible, how can I enable them? | Python Debugging: code editing on the fly | 0.132549 | 0 | 0 | 1,488 |
1,614,059 | 2009-10-23T15:02:00.000 | 3 | 0 | 1 | 0 | python,text-to-speech | 41,871,110 | 14 | false | 0 | 0 | You can use espeak from Python as a text-to-speech converter.
Here is an example python code
from subprocess import call
speech="Hello World!"
call(["espeak",speech])
P.S.: if espeak isn't installed on your Linux system, then you need to install it first.
Open a terminal (using Ctrl + Alt + T) and type
sudo apt install espeak | 1 | 78 | 0 | How could I make Python say some text?
I could use Festival with subprocess but I won't be able to control it (or maybe in interactive mode, but it won't be clean).
Is there a Python TTS library? Like an API for Festival, eSpeak, ... ? | How to make Python speak | 0.042831 | 0 | 0 | 169,591 |
1,614,190 | 2009-10-23T15:24:00.000 | 2 | 0 | 1 | 0 | python | 1,614,204 | 3 | false | 0 | 0 | Write a comment? Python comments start with #. | 1 | 1 | 0 | When writing code in Python, how can you write something next to it that explains what the code is doing, but which doesn't affect the code? | How do I add text describing the code into a Python source file? | 0.132549 | 0 | 0 | 419 |
1,614,898 | 2009-10-23T17:33:00.000 | 5 | 0 | 1 | 1 | python,windows | 1,615,021 | 7 | false | 0 | 0 | If you don't want to install an IDE, you can also use IDLE which includes a Python editor and a console to test things out, this is part of the standard installation.
If you installed the python.org version, you will see an IDLE (Python GUI) in your start menu. I would recommend adding it to your Quick Launch or your desktop - whatever you are most familiar with. Then right-click on the shortcut you have created and change the "Start in" directory to your project directory or a place you can mess with, not the installation directory which is the default place and probably a bad idea.
When you double-click the shortcut it will launch IDLE, a console in which you can type in Python commands and have history, completion, colours and so on. You can also start an editor to create a program file (as mentioned in the other posts). There is even a debugger.
If you saved your application in "test.py", you can start it from the editor itself. Or from the console with execfile("test.py"), import test (if that is a module), or finally from the debugger. | 4 | 1 | 0 | How do I run a Python file from the Windows Command Line (cmd.exe) so that I won't have to re-enter the code each time? | Running Python from the Windows Command Line | 0.141893 | 0 | 0 | 15,004 |
1,614,898 | 2009-10-23T17:33:00.000 | 0 | 0 | 1 | 1 | python,windows | 1,614,915 | 7 | false | 0 | 0 | In DOS you can use edit to create/modify text files, then execute them by typing python [yourfile] | 4 | 1 | 0 | How do I run a Python file from the Windows Command Line (cmd.exe) so that I won't have to re-enter the code each time? | Running Python from the Windows Command Line | 0 | 0 | 0 | 15,004 |
1,614,898 | 2009-10-23T17:33:00.000 | 2 | 0 | 1 | 1 | python,windows | 1,614,908 | 7 | false | 0 | 0 | If you put the Python executable (python.exe) on your path, you can invoke your script using python script.py where script.py is the Python file that you want to execute. | 4 | 1 | 0 | How do I run a Python file from the Windows Command Line (cmd.exe) so that I won't have to re-enter the code each time? | Running Python from the Windows Command Line | 0.057081 | 0 | 0 | 15,004 |
1,614,898 | 2009-10-23T17:33:00.000 | 0 | 0 | 1 | 1 | python,windows | 1,614,906 | 7 | false | 0 | 0 | Open a command prompt, by pressing Win+R and writing cmd in that , navigate to the script directory , and write : python script.py | 4 | 1 | 0 | How do I run a Python file from the Windows Command Line (cmd.exe) so that I won't have to re-enter the code each time? | Running Python from the Windows Command Line | 0 | 0 | 0 | 15,004 |
1,615,690 | 2009-10-23T20:21:00.000 | 3 | 0 | 0 | 1 | python,flash | 1,617,438 | 2 | true | 0 | 0 | One way to do it is using ffmpeg. ffmpeg needs to be installed with h.264 and h.263 codec support. The following is the command to retrieve the video duration, which can be called from Python via os.system(command).
ffmpeg -i flv_file 2>&1 | grep "Duration" | cut -d ' ' -f 4 | sed s/,// | 1 | 2 | 0 | On Linux, YouTube places temporary flash files in /tmp. Nautilus can display the duration (Minutes:Seconds) of them, but I haven't found a way to extract the duration using python.'
The fewer dependencies your method requires the better.
Thanks in advance. | How to get duration of video flash file? | 1.2 | 0 | 0 | 1,706 |
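A hedged Python wrapper around the same idea, using subprocess instead of a shell pipeline; the parsing of ffmpeg's "Duration: HH:MM:SS.xx" banner line is an assumption and may need adjusting for a given ffmpeg version.

```python
import subprocess

def flv_duration(path):
    proc = subprocess.Popen(["ffmpeg", "-i", path],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = proc.communicate()[0].decode(errors="replace")
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Duration:"):
            return line.split()[1].rstrip(",")    # e.g. "00:03:25.45"
    return None

print(flv_duration("/tmp/FlashXX1234"))   # the path is a placeholder
```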
1,616,228 | 2009-10-23T22:19:00.000 | 4 | 1 | 0 | 0 | python,testing,pyqt | 1,829,332 | 4 | false | 0 | 1 | It looks like PyQT4 includes a QtTest object that can be used for unit testing. | 1 | 18 | 0 | Does anyone know of a automated GUI testing package for that works with PyQT besides Squish? Nothing against Squish I am just looking for other packages. It would be cool if there were an open source package. I am doing my testing under Linux. | PyQT GUI Testing | 0.197375 | 0 | 0 | 3,768 |
1,617,494 | 2009-10-24T09:36:00.000 | 1 | 0 | 1 | 0 | python | 1,617,508 | 9 | false | 0 | 0 | Using grep with a cleverly constructed regex (does not begin or end with quotes) seems to be your best bet. | 4 | 29 | 0 | Sometimes I leave debugging printing statements in my project and it is difficult to find it. Is there any way to find out what line is printing something in particular?
Sidenote
It appears that searching smart can solve the majority of cases. In Pydev (and other IDEs) there is a Search function which allows searching through all files in a project. Of course, a similar effect can be obtained using grep with the -rn flag, although you only get line numbers instead of direct links.
"print(" works much better within my code and often there is extra text in a print statement that can be searched for with a regular expression. The most difficult case is when you have just written print(x), although this can be searched for a regular expression where the value inside x does not begin or end with quotes (thanks! BecomingGuro) | Finding a print statement in Python | 0.022219 | 0 | 0 | 8,846 |
1,617,494 | 2009-10-24T09:36:00.000 | 2 | 0 | 1 | 0 | python | 1,617,554 | 9 | false | 0 | 0 | The easiest way would be to use a "debug_print" function instead of plain "print".
That way you could just redefine the function and be sure you did not miss one...
and still have them handy if you need to debug that code again instead of editing your code back and forth each time.
(Yes, leaving debug_print calls in can eat some performance: just remove them when that's the case.)
Spotting "debug only" statements in your code is a very good reason to do a diff before committing any code to your revision control system! (Knowing what to put in your commit comments is a second good reason to do it!)
Sidenote
It appears that searching smart can solve the majority of cases. In Pydev (and other IDEs) there is a Search function which allows searching through all files in a project. Of course, a similar effect can be obtained using grep with the -rn flag, although you only get line numbers instead of direct links.
"print(" works much better within my code and often there is extra text in a print statement that can be searched for with a regular expression. The most difficult case is when you have just written print(x), although this can be searched for a regular expression where the value inside x does not begin or end with quotes (thanks! BecomingGuro) | Finding a print statement in Python | 0.044415 | 0 | 0 | 8,846 |
1,617,494 | 2009-10-24T09:36:00.000 | -1 | 0 | 1 | 0 | python | 1,617,725 | 9 | false | 0 | 0 | This probably doesn't answer your question directly, but you can get away with tons of print statements if you use pdb (the python debugger) to debug and write code effectively.
I know it works, because 99% of the time you simply don't want to print stuff; you want to set a breakpoint somewhere and see what the variables are and how far the program has reached.
HTH | 4 | 29 | 0 | Sometimes I leave debugging printing statements in my project and it is difficult to find it. Is there any way to find out what line is printing something in particular?
Sidenote
It appears that searching smart can solve the majority of cases. In Pydev (and other IDEs) there is a Search function which allows searching through all files in a project. Of course, a similar effect can be obtained using grep with the -rn flag, although you only get line numbers instead of direct links.
"print(" works much better within my code and often there is extra text in a print statement that can be searched for with a regular expression. The most difficult case is when you have just written print(x), although this can be searched for a regular expression where the value inside x does not begin or end with quotes (thanks! BecomingGuro) | Finding a print statement in Python | -0.022219 | 0 | 0 | 8,846 |
1,617,494 | 2009-10-24T09:36:00.000 | 3 | 0 | 1 | 0 | python | 1,617,503 | 9 | false | 0 | 0 | Use grep:
grep -rn print . | 4 | 29 | 0 | Sometimes I leave debugging printing statements in my project and it is difficult to find it. Is there any way to find out what line is printing something in particular?
Sidenote
It appears that searching smart can solve the majority of cases. In Pydev (and other IDEs) there is a Search function which allows searching through all files in a project. Of course, a similar effect can be obtained using grep with the -rn flag, although you only get line numbers instead of direct links.
"print(" works much better within my code and often there is extra text in a print statement that can be searched for with a regular expression. The most difficult case is when you have just written print(x), although this can be searched for a regular expression where the value inside x does not begin or end with quotes (thanks! BecomingGuro) | Finding a print statement in Python | 0.066568 | 0 | 0 | 8,846 |
1,619,908 | 2009-10-25T03:24:00.000 | 4 | 1 | 1 | 0 | python,exception,attributes | 1,619,944 | 1 | true | 0 | 1 | From the docs to Py_Finalize():
Bugs and caveats: The destruction of
modules and objects in modules is done
in random order; this may cause
destructors (__del__() methods) to
fail when they depend on other objects
(even functions) or modules.
Dynamically loaded extension modules
loaded by Python are not unloaded.
Small amounts of memory allocated by
the Python interpreter may not be
freed (if you find a leak, please
report it). Memory tied up in circular
references between objects is not
freed. Some memory allocated by
extension modules may not be freed.
Some extensions may not work properly
if their initialization routine is
called more than once; this can happen
if an application calls
Py_Initialize() and Py_Finalize() more
than once.
Most likely a __del__ contains a call to <somemodule>.GetName(), but that module has already been destroyed by the time __del__ is called. | 1 | 2 | 0 | I have a C++ app that uses Python to load some scripts. It calls some functions in the scripts, and everything works fine until the app exits and calls Py_Finalize. Then it displays the following: (GetName is a function in one of the scripts)
Exception AttributeError: "'module' object has no attribute 'GetName'" in 'garbage collection' ignored
Fatal Python error: unexpected exception during garbage collection
Then the app crashes.
I'm using Python 3.1 on Windows. Any advice would be appreciated. | What is causing this Python exception? | 1.2 | 0 | 0 | 1,194 |
1,620,363 | 2009-10-25T08:31:00.000 | 4 | 0 | 0 | 0 | python | 1,620,413 | 4 | true | 1 | 0 | python standard module html.parser should allow you to parse simple html content and eliminate tags. you only have to derive HTMLParser, then overload all handle_*() methods so that they output or discard content, depending on the surrounding element tags. | 1 | 0 | 0 | I know that NLTK has it. But any else? | What is a light python library that can eliminate HTML tags? (and only text) | 1.2 | 0 | 0 | 746 |
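A small sketch of the html.parser approach from the answer above: subclass HTMLParser and keep only the text content (the import path is html.parser on Python 3; the module was called HTMLParser on Python 2).

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):    # called for the text between tags
        self.chunks.append(data)

    def text(self):
        return "".join(self.chunks)

parser = TextOnly()
parser.feed("<p>Hello <b>world</b>!</p>")
print(parser.text())                # Hello world!
```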
1,620,575 | 2009-10-25T10:37:00.000 | 5 | 0 | 0 | 0 | python,mysql | 1,620,642 | 3 | false | 0 | 0 | _mysql is a one-to-one mapping of the raw MySQL C API. On top of it, the DB-API is built, handling things using cursors and so on.
If you are used to the low-level mysql API provided by libmysqlclient, then the _mysql module is what you need, but as another answer says, there's no real need to go so low-level. You can work with the DB-API and behave just fine, with the added benefit that the DB-API is backend-independent. | 1 | 11 | 0 | Two libraries for Mysql.
I've always used _mysql because it's simpler.
Can anyone tell me the difference, and which one I should use on which occasions? | Python: advantages and disadvantages of _mysql vs MySQLdb? | 0.321513 | 1 | 0 | 4,941
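For comparison, a typical DB-API session through MySQLdb looks roughly like this; the connection parameters and query are placeholders:

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='test')
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM users WHERE id = %s", (42,))   # parameters are escaped for you
for row in cursor.fetchall():
    print(row)
cursor.close()
conn.close()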
1,621,430 | 2009-10-25T17:10:00.000 | 0 | 1 | 0 | 0 | python,ajax,cgi,web-applications,fastcgi | 11,067,328 | 5 | false | 1 | 0 | As suggested by a few of the others, you can use a keep-alive connection and, instead of "return" and "print" statements, use yield statements. This will basically stream everything that happens in the Python script onto the web page.
After extensive searching and testing I would advise nginx as a reverse proxy with gevent & bottle as the backend, which gives peace of mind because nginx will never serve up the Python source file.
I don't want the user to wait until the script finishes completely, because then all the output is displayed at once. I also tried that, and the connection always times out.
I don't know what this process is called, what terms I'm looking for, and what I need to use. CGI? Ajax? Need some serious guidance here, thanks!
If it matters, I plan to use Nginx as the webserver. | How do I display real-time python script output on a website? | 0 | 0 | 0 | 5,553 |
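A bare-bones sketch of the yield idea as a plain WSGI application; long_job.py is a placeholder for the slow script, and a framework such as bottle behaves similarly when a route handler returns a generator:

import subprocess

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    proc = subprocess.Popen(['python', 'long_job.py'], stdout=subprocess.PIPE)
    for line in iter(proc.stdout.readline, b''):
        yield line          # each yielded chunk goes out to the client as it is produced

if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    make_server('', 8000, app).serve_forever()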
1,622,038 | 2009-10-25T20:53:00.000 | 54 | 0 | 1 | 0 | python,django | 1,622,263 | 8 | false | 1 | 0 | ChristopheD's post is close to what you want. I don't have enough rep to make a comment :(
Instead of (which actually gives you the next upcoming monday):
>>> today + datetime.timedelta(days=-today.weekday(), weeks=1)
datetime.date(2009, 10, 26)
I would say:
>>> last_monday = today - datetime.timedelta(days=today.weekday())
If you want the previous week, add the 'weeks=1' parameter.
This makes the code more readable since you are subtracting a timedelta. This clears up any confusion caused by adding a timedelta that has negative and positive offsets. | 1 | 87 | 0 | How do I find the previous Monday's date, based off of the current date using Python? I thought maybe I could use: datetime.weekday() to do it, but I am getting stuck.
I basically want to find today's date and Monday's date to construct a date range query in django using: created__range=(start_date, end_date). | Find Monday's date with Python | 1 | 0 | 0 | 60,733
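Putting the answer together for that query, something along these lines should work (the model name Entry is an assumption; created comes from the question):

import datetime

today = datetime.date.today()
start_date = today - datetime.timedelta(days=today.weekday())   # most recent Monday
end_date = start_date + datetime.timedelta(days=6)              # the following Sunday

# Entry.objects.filter(created__range=(start_date, end_date))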
1,622,884 | 2009-10-26T02:52:00.000 | 2 | 0 | 0 | 0 | wxpython,drag,playing-cards | 1,623,027 | 2 | false | 0 | 1 | Going through the wxPython demo and looking at all the examples would be a good start. You'll likely find page Using Images | DragImage to be useful, since you'll probably want cards that you can drag.
Generally, the demo can help you do most things in wxPython, and also show you what wxPython can do, and it's worth the time to see every demo. This approach works for everything except the very first step of getting an app running and putting a frame in it (since the demo itself is an app, but not a simple one). Any of the basic tutorials can help you get started with an app and frame in just a very few lines of code. | 1 | 0 | 0 | I know Python and I'm a newbie with wxPython but I would like to make a card game.
However I have no idea how to make an image follow the mouse and put it in the middle of the screen when the program is running. It would be nice if you guys could help me out. | wx python card game | 0.197375 | 0 | 0 | 877
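A very small sketch of making a bitmap follow the mouse while dragging; 'card.png' and the class name are assumptions, and a real card game would more likely build on the DragImage demo mentioned above:

import wx

class Table(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title='Card table')
        panel = wx.Panel(self)
        self.card = wx.StaticBitmap(panel, bitmap=wx.Bitmap('card.png'))
        panel.Bind(wx.EVT_MOTION, self.on_motion)

    def on_motion(self, event):
        # while the left button is held down, move the bitmap to the pointer
        if event.Dragging() and event.LeftIsDown():
            self.card.Move(event.GetPosition())

app = wx.App(False)
Table().Show()
app.MainLoop()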
1,623,311 | 2009-10-26T06:06:00.000 | 6 | 0 | 0 | 0 | php,python,linux,perl | 1,623,338 | 5 | false | 1 | 0 | Any method you choose to determine the source of a request is only as reliable as the HTTP_REFERER information that is sent by the user's browser, which is not very. Requiring authentication is the only good way to protect content. | 2 | 2 | 0 | On my website I store user pictures in a simple manner such as:
"image/user_1.jpg".
I don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com/images/user_2.jpg, www.mydomain.com/images/user_3.jpg, so on...)
So far I have three solutions in mind:
I tried using .htaccess to password protect the "images" folder. That helped me up to some point but some of the images started popping up a username and password request on my htmls (while amazingly some images did not) so this seems to be an unpredictable method.
I can start converting my user_id's to an md5 hash with some salt. The images would be named as: /image/user_e4d909c290d0fb1ca068ffaddf22cbd0.jpg. I don't like this solution. It makes the file system way complicated.
or I can use PHP's readfile() function or maybe something similar in Perl or Python. For instance I could pass a password using an md5 string to validate visitors as logged-in users with access to that image.
I'm leaning towards option 3 but with a Perl or Python angle (assuming they would be faster than PHP). However I would like to see other ideas on the matter. Maybe there is a simple .htaccess trick to this?
Basically all I want to make sure is that no one can view images from my website unless the images are directly called from within htmls hosted on my site.
Thanks a lot,
Haluk | Restrict access to images on my website except through my own htmls | 1 | 1 | 0 | 4,621 |
1,623,311 | 2009-10-26T06:06:00.000 | 2 | 0 | 0 | 0 | php,python,linux,perl | 1,623,325 | 5 | false | 1 | 0 | You are right considering option #3. Use service script that would validate user and readfile() an image. Be sure to set correct Content-Type HTTP header via header() function prior to serving an image. For better isolation images should be put above web root directory, or protected by well written .htaccess rules - there is definitely a way of protecting files and/or directories this way. | 2 | 2 | 0 | On my website I store user pictures in a simple manner such as:
"image/user_1.jpg".
I don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com/images/user_2.jpg, www.mydomain.com/images/user_3.jpg, so on...)
So far I have three solutions in mind:
I tried using .htaccess to password protect the "images" folder. That helped me up to some point but some of the images started popping up a username and password request on my htmls (while amazingly some images did not) so this seems to be an unpredictable method.
I can start converting my user_id's to an md5 hash with some salt. The images would be named as: /image/user_e4d909c290d0fb1ca068ffaddf22cbd0.jpg. I don't like this solution. It makes the file system way complicated.
or I can use PHP's readfile() function or maybe something similar in Perl or Python. For instance I could pass a password using an md5 string to validate visitors as logged-in users with access to that image.
I'm leaning towards option 3 but with a Perl or Python angle (assuming they would be faster than PHP). However I would like to see other ideas on the matter. Maybe there is a simple .htaccess trick to this?
Basically all I want to make sure is that no one can view images from my website unless the images are directly called from within htmls hosted on my site.
Thanks a lot,
Haluk | Restrict access to images on my website except through my own htmls | 0.07983 | 1 | 0 | 4,621 |
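A rough sketch of option 3 in Python as a WSGI-style gatekeeper; check_token, the image directory and the URL scheme are all assumptions:

import os

IMAGE_DIR = '/var/www/private_images'      # kept outside the web root

def check_token(environ):
    return True                            # placeholder: validate the logged-in user's session here

def app(environ, start_response):
    if not check_token(environ):
        start_response('403 Forbidden', [('Content-Type', 'text/plain')])
        return [b'forbidden']
    name = os.path.basename(environ.get('PATH_INFO', ''))   # e.g. user_1.jpg
    path = os.path.join(IMAGE_DIR, name)
    if not os.path.isfile(path):
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return [b'not found']
    start_response('200 OK', [('Content-Type', 'image/jpeg')])
    return [open(path, 'rb').read()]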
1,623,331 | 2009-10-26T06:16:00.000 | 3 | 0 | 1 | 0 | python,pygame | 1,623,346 | 4 | false | 0 | 1 | Sadly, I have never studied algorithms so I have no clue on how to calculate a path.
Before you start writing games, you should educate yourself on those. This takes a little more effort at the beginning, but will save you much time later. | 1 | 0 | 0 | I'm working on an adventure game in Python using Pygame. My main problem is how I am going to define the boundaries of the room and make the main character walk aroud without hitting a boundary every time. Sadly, I have never studied algorithms so I have no clue on how to calculate a path. I know this question is quite general and hard to answer but a point in the right direction would be very appreciated. Thanks! | Adventure game - walking around inside a room | 0.148885 | 0 | 0 | 1,431 |
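Before any path-finding, simple room boundaries are usually handled with rectangle collision in pygame; this is only a sketch and every name in it is made up:

import pygame

room = pygame.Rect(0, 0, 640, 480)            # the walkable area
walls = [pygame.Rect(200, 100, 50, 200)]      # obstacles inside the room

def try_move(player_rect, dx, dy):
    moved = player_rect.move(dx, dy)          # candidate position
    moved.clamp_ip(room)                      # keep the player inside the room
    for wall in walls:
        if moved.colliderect(wall):
            return player_rect                # blocked: stay where we were
    return moved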
1,623,538 | 2009-10-26T07:50:00.000 | 4 | 0 | 0 | 0 | python,django,psyco | 1,623,547 | 3 | false | 1 | 0 | You should be using fastcgi or wsgi with django, so the process won't be starting up for each request.
You really need to write your code to be psyco friendly if you want decent gains, and you will not benefit if your bottleneck is the database. | 2 | 4 | 0 | I know the benefits of Psyco for a Desktop app, but in a Web app where a process ( = a web page or an AJAX call) dies immediately after been fired, isn't it pointless ? | Does using Psyco with django make any sense? | 0.26052 | 0 | 0 | 1,160 |
1,623,538 | 2009-10-26T07:50:00.000 | 4 | 0 | 0 | 0 | python,django,psyco | 1,623,581 | 3 | true | 1 | 0 | First, as gribbler and Ibrahim mentioned, your process won't die unless you are using pure CGI... which you shouldn't be using.
Secondly, the bottleneck in most web apps are database queries, for which Psyco won't help.
If you happen to have a some logic that is computationally intensive it can certainly make sense to use Psyco or Cython. In fact I read a report somewhere (sorry it's been a while so can't find a link now) by someone who was doing some complex calculations and had great results compiling their entire views.py with Cython. | 2 | 4 | 0 | I know the benefits of Psyco for a Desktop app, but in a Web app where a process ( = a web page or an AJAX call) dies immediately after been fired, isn't it pointless ? | Does using Psyco with django make any sense? | 1.2 | 0 | 0 | 1,160 |
1,624,570 | 2009-10-26T12:38:00.000 | 1 | 0 | 1 | 0 | java,python,user-interface,multithreading | 1,624,585 | 5 | false | 0 | 0 | I am a fan of Erlang:
Wx GUI tool
Regex (module regexp)
Cross-platform
Multi-threading (of course !)
EUnit testing
Of course Python is really appropriate too! | 3 | 1 | 0 | I'm interested in learning a programming language with support for GUI, multithreading and easy test manipulation (support for regex).
Mainly on Windows but preferably cross-platform. What does the Stack Overflow community suggest? | GUI + multithreading support + regex support. Which language? JAVA / Python / Ruby? | 0.039979 | 0 | 0 | 499 |
1,624,570 | 2009-10-26T12:38:00.000 | 5 | 0 | 1 | 0 | java,python,user-interface,multithreading | 1,624,603 | 5 | true | 0 | 0 | My suggestion would be Java. You can do all of that and much more. | 3 | 1 | 0 | I'm interested in learning a programming language with support for GUI, multithreading and easy test manipulation (support for regex).
Mainly on Windows but preferably cross-platform. What does the Stack Overflow community suggest? | GUI + multithreading support + regex support. Which language? JAVA / Python / Ruby? | 1.2 | 0 | 0 | 499 |
1,624,570 | 2009-10-26T12:38:00.000 | 1 | 0 | 1 | 0 | java,python,user-interface,multithreading | 1,624,683 | 5 | false | 0 | 0 | If you really like typing go for Java, if you really like whitespace go for python, if you like programming more than you like high performance go for Ruby.
Seriously, Java is very complete and very cross-platform. I don't know how Python adds up for GUI stuff but when I was looking at Ruby in detail a couple of years back it seemed a trifle complex ( or at least, nothing is hard to write in ruby but it didn't look easy to produce a nice, modern-looking UI ) but I much prefer what I can achieve with a scripting language in terms of lines of code compared with Java's painful verbosity.
Erlang, which I see recommended above, I've never tried but it's a language I would be very interested to learn. Possibly well worth looking into if you're learning something new anyway, especially if multi-threading is important to you. | 3 | 1 | 0 | I'm interested in learning a programming language with support for GUI, multithreading and easy test manipulation (support for regex).
Mainly on Windows but preferably cross-platform. What does the Stack Overflow community suggest? | GUI + multithreading support + regex support. Which language? JAVA / Python / Ruby? | 0.039979 | 0 | 0 | 499 |
1,626,155 | 2009-10-26T17:30:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,oop | 1,626,233 | 3 | false | 1 | 0 | You've got two options I can think of right now:
Since you don't want the field in the database your best bet is to define a method on the model that returns self.id * SOME_CONSTANT, say you call it big_id(). You can access this method anytime as yourObj.big_id(), and it will be available in templates as yourObj.big_id (you might want to read about the "magic dot" in django templates).
If you don't mind it being in the DB, you can override the save method on your object to calculate id * SOME_CONSTANT and store it in a big_id field. This would save you from having to calculate it every time, since I assume ID isn't going to change | 1 | 4 | 0 | Every Django model has a default primary-key id created automatically. I want the model objects to have another attribute big_id which is calculated as:
big_id = id * SOME_CONSTANT
I want to access big_id as model_obj.big_id without the corresponding database table having a column called big_id.
Is this possible? | Is there an easy way to create derived attributes in Django Model/Python classes? | 0 | 0 | 0 | 1,644 |
1,626,326 | 2009-10-26T18:00:00.000 | 4 | 0 | 0 | 0 | python,django,deployment | 5,528,824 | 22 | false | 1 | 0 | For most of my projects I use following pattern:
Create settings_base.py where I store settings that are common for all environments
Whenever I need to use new environment with specific requirements I create new settings file (eg. settings_local.py) which inherits contents of settings_base.py and overrides/adds proper settings variables (from settings_base import *)
(To run manage.py with custom settings file you simply use --settings command option: manage.py <command> --settings=settings_you_wish_to_use.py) | 4 | 339 | 0 | What is the recommended way of handling settings for local development and the production server? Some of them (like constants, etc) can be changed/accessed in both, but some of them (like paths to static files) need to remain different, and hence should not be overwritten every time the new code is deployed.
Currently, I am adding all constants to settings.py. But every time I change some constant locally, I have to copy it to the production server and edit the file for production specific changes... :(
Edit: looks like there is no standard answer to this question, I've accepted the most popular method. | How to manage local vs production settings in Django? | 0.036348 | 0 | 0 | 119,151 |
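A minimal sketch of that layout, with placeholder values (the old single-database DATABASE_* settings match the Django versions of that era):

# settings_base.py -- shared by every environment
DEBUG = False
TIME_ZONE = 'UTC'
INSTALLED_APPS = ['django.contrib.auth', 'django.contrib.contenttypes']

# settings_local.py -- development machine
from settings_base import *
DEBUG = True
DATABASE_ENGINE = 'sqlite3'
DATABASE_NAME = 'dev.db'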
1,626,326 | 2009-10-26T18:00:00.000 | 14 | 0 | 0 | 0 | python,django,deployment | 1,626,371 | 22 | false | 1 | 0 | I use a settings_local.py and a settings_production.py. After trying several options I've found that it's easy to waste time with complex solutions when simply having two settings files feels easy and fast.
When you use mod_python/mod_wsgi for your Django project you need to point it to your settings file. If you point it to app/settings_local.py on your local server and app/settings_production.py on your production server then life becomes easy. Just edit the appropriate settings file and restart the server (Django development server will restart automatically). | 4 | 339 | 0 | What is the recommended way of handling settings for local development and the production server? Some of them (like constants, etc) can be changed/accessed in both, but some of them (like paths to static files) need to remain different, and hence should not be overwritten every time the new code is deployed.
Currently, I am adding all constants to settings.py. But every time I change some constant locally, I have to copy it to the production server and edit the file for production specific changes... :(
Edit: looks like there is no standard answer to this question, I've accepted the most popular method. | How to manage local vs production settings in Django? | 1 | 0 | 0 | 119,151 |
1,626,326 | 2009-10-26T18:00:00.000 | 1 | 0 | 0 | 0 | python,django,deployment | 36,996,922 | 22 | false | 1 | 0 | As an alternative to maintain different file if you wiil:
If you are using git or any other VCS to push codes from local to server, what you can do is add the settings file to .gitignore.
This will allow you to have different content in both places without any problem. SO on server you can configure an independent version of settings.py and any changes made on the local wont reflect on server and vice versa.
In addition, it will remove the settings.py file from github also, the big fault, which i have seen many newbies doing. | 4 | 339 | 0 | What is the recommended way of handling settings for local development and the production server? Some of them (like constants, etc) can be changed/accessed in both, but some of them (like paths to static files) need to remain different, and hence should not be overwritten every time the new code is deployed.
Currently, I am adding all constants to settings.py. But every time I change some constant locally, I have to copy it to the production server and edit the file for production specific changes... :(
Edit: looks like there is no standard answer to this question, I've accepted the most popular method. | How to manage local vs production settings in Django? | 0.009091 | 0 | 0 | 119,151 |
1,626,326 | 2009-10-26T18:00:00.000 | 6 | 0 | 0 | 0 | python,django,deployment | 9,517,763 | 22 | false | 1 | 0 | The problem with most of these solutions is that you either have your local settings applied before the common ones, or after them.
So it's impossible to override things like
the env-specific settings define the addresses for the memcached pool, and in the main settings file this value is used to configure the cache backend
the env-specific settings add or remove apps/middleware to the default one
at the same time.
One solution can be implemented using "ini"-style config files with the ConfigParser class. It supports multiple files, lazy string interpolation, default values and a lot of other goodies.
Once a number of files have been loaded, more files can be loaded and their values will override the previous ones, if any.
You load one or more config files, depending on the machine address, environment variables and even values in previously loaded config files. Then you just use the parsed values to populate the settings.
One strategy I have successfully used has been:
Load a default defaults.ini file
Check the machine name, and load all files which matched the reversed FQDN, from the shortest match to the longest match (so, I loaded net.ini, then net.domain.ini, then net.domain.webserver01.ini, each one possibly overriding values of the previous). This also accounts for developers' machines, so each one could set up its preferred database driver, etc. for local development
Check if there is a "cluster name" declared, and in that case load cluster.cluster_name.ini, which can define things like database and cache IPs
As an example of something you can achieve with this, you can define a "subdomain" value per-env, which is then used in the default settings (as hostname: %(subdomain)s.whatever.net) to define all the necessary hostnames and cookie things django needs to work.
This is as DRY as I could get: most (existing) files had just 3 or 4 settings. On top of this I had to manage customer configuration, so an additional set of configuration files (with things like database names, users and passwords, assigned subdomain etc.) existed, one or more per customer.
One can scale this as low or as high as necessary, you just put in the config file the keys you want to configure per-environment, and once there's need for a new config, put the previous value in the default config, and override it where necessary.
This system has proven reliable and works well with version control. It has been used for a long time managing two separate clusters of applications (15 or more separate instances of the django site per machine), with more than 50 customers, where the clusters were changing size and members depending on the mood of the sysadmin...
Currently, I am adding all constants to settings.py. But every time I change some constant locally, I have to copy it to the production server and edit the file for production specific changes... :(
Edit: looks like there is no standard answer to this question, I've accepted the most popular method. | How to manage local vs production settings in Django? | 1 | 0 | 0 | 119,151 |
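A small sketch of the layered-ini idea; the file names, section name and FQDN handling are simplified assumptions:

import socket
from ConfigParser import SafeConfigParser   # named configparser in Python 3

parser = SafeConfigParser()
host = socket.getfqdn()                     # e.g. webserver01.domain.net

# build candidate file names from shortest to longest match, so values
# loaded from later files override values loaded from earlier ones
parts = list(reversed(host.split('.')))     # ['net', 'domain', 'webserver01']
candidates = ['defaults.ini']
for i in range(1, len(parts) + 1):
    candidates.append('.'.join(parts[:i]) + '.ini')

loaded = parser.read(candidates)            # read() silently skips missing files
SECRET_KEY = parser.get('django', 'secret_key')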
1,626,403 | 2009-10-26T18:14:00.000 | 4 | 1 | 1 | 0 | python,email,mime | 1,626,650 | 3 | false | 0 | 0 | The way I've figured out to do it is:
Set the payload to an empty list with set_payload
Create the new payload, and attach it to the message.
Essentially I want to:
Load the email
Remove the mime attachments
Add a new attachment | Python email lib - How to remove attachment from existing message? | 0.26052 | 0 | 0 | 6,827 |
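A sketch of that approach with the standard email package; the input file, the filename test used to decide what counts as an attachment, and the replacement part are all assumptions:

import email
from email.mime.text import MIMEText

raw_message = open('original.eml').read()        # assumed input file
msg = email.message_from_string(raw_message)

if msg.is_multipart():
    kept = [part for part in msg.get_payload()
            if part.get_filename() is None]      # drop parts that look like attachments
    msg.set_payload(kept)                        # step 1: replace the payload list
    msg.attach(MIMEText('replacement attachment', 'plain'))   # step 2: attach the new part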
1,626,786 | 2009-10-26T19:17:00.000 | 1 | 0 | 0 | 1 | python,streaming,hadoop,mapreduce | 1,664,639 | 3 | false | 0 | 0 | Is it possible to replace the outputFormatClass, when using streaming?
In a native Java implementation you would extend the MultipleTextOutputFormat class and modify the method that names the output file. Then define your implementation as the new output format with JobConf's setOutputFormat method.
You should verify whether this is possible in streaming too; I don't know. :-/
1,628,001 | 2009-10-27T00:04:00.000 | 5 | 1 | 0 | 0 | c++,python,c,perl,integration | 1,628,011 | 5 | false | 0 | 1 | Anything is "possible", but whether it is necessary or beneficial is debatable and highly depends on your requirements. Don't mix if you don't need to. Use the language that best fits the domain or target requirements.
I can't think of a scenario where one needs to mix Python and Perl as their domain is largely the same.
Using C/C++ can be beneficial in cases where you need hardcore system integration or specialized machine dependent services. Or when you need to extend Python or Perl itself (both are written in C/C++).
EDIT: if you want to do a GUI application, it is probably easier to choose a language that fits the OS you want your GUI to run in. I.e. something like (but not limited to) C# for Windows, Objective-C for iPhone or Mac, Qt + C++ for Linux etc. | 3 | 1 | 0 | I'm now thinking, is it possible to integrate Python, Perl and C/C++ and also doing a GUI application with this very nice mix of languages? | Python, Perl And C/C++ With GUI | 0.197375 | 0 | 0 | 735 |
1,628,001 | 2009-10-27T00:04:00.000 | 1 | 1 | 0 | 0 | c++,python,c,perl,integration | 1,628,012 | 5 | false | 0 | 1 | Everything is possible - but why add two and a half more levels of complexity? | 3 | 1 | 0 | I'm now thinking, is it possible to integrate Python, Perl and C/C++ and also doing a GUI application with this very nice mix of languages? | Python, Perl And C/C++ With GUI | 0.039979 | 0 | 0 | 735 |
1,628,001 | 2009-10-27T00:04:00.000 | 1 | 1 | 0 | 0 | c++,python,c,perl,integration | 1,628,013 | 5 | false | 0 | 1 | Python & Perl? together?
I can only think of an editor. | 3 | 1 | 0 | I'm now thinking, is it possible to integrate Python, Perl and C/C++ and also doing a GUI application with this very nice mix of languages? | Python, Perl And C/C++ With GUI | 0.039979 | 0 | 0 | 735 |
1,628,222 | 2009-10-27T01:02:00.000 | 20 | 0 | 1 | 0 | python,language-design | 1,628,300 | 4 | true | 0 | 0 | The big advantage is that built-in functions (and operators) can apply extra logic when appropriate, beyond simply calling the special methods. For example, min can look at several arguments and apply the appropriate inequality checks, or it can accept a single iterable argument and proceed similarly; abs when called on an object without a special method __abs__ could try comparing said object with 0 and using the object change sign method if needed (though it currently doesn't); and so forth.
So, for consistency, all operations with wide applicability must always go through built-ins and/or operators, and it's those built-ins responsibility to look up and apply the appropriate special methods (on one or more of the arguments), use alternate logic where applicable, and so forth.
An example where this principle wasn't correctly applied (but the inconsistency was fixed in Python 3) is "step an iterator forward": in 2.5 and earlier, you needed to define and call the non-specially-named next method on the iterator. In 2.6 and later you can do it the right way: the iterator object defines __next__, the new next built-in can call it and apply extra logic, for example to supply a default value (in 2.6 you can still do it the bad old way, for backwards compatibility, though in 3.* you can't any more).
Another example: consider the expression x + y. In a traditional object-oriented language (able to dispatch only on the type of the leftmost argument -- like Python, Ruby, Java, C++, C#, &c) if x is of some built-in type and y is of your own fancy new type, you're sadly out of luck if the language insists on delegating all the logic to the method of type(x) that implements addition (assuming the language allows operator overloading;-).
In Python, the + operator (and similarly of course the builtin operator.add, if that's what you prefer) tries x's type's __add__, and if that one doesn't know what to do with y, then tries y's type's __radd__. So you can define your types that know how to add themselves to integers, floats, complex, etc etc, as well as ones that know how to add such built-in numeric types to themselves (i.e., you can code it so that x + y and y + x both work fine, when y is an instance of your fancy new type and x is an instance of some builtin numeric type).
"Generic functions" (as in PEAK) are a more elegant approach (allowing any overriding based on a combination of types, never with the crazy monomaniac focus on the leftmost arguments that OOP encourages!-), but (a) they were unfortunately not accepted for Python 3, and (b) they do of course require the generic function to be expressed as free-standing (it would be absolutely crazy to have to consider the function as "belonging" to any single type, where the whole POINT is that can be differently overridden/overloaded based on arbitrary combination of its several arguments' types!-). Anybody who's ever programmed in Common Lisp, Dylan, or PEAK, knows what I'm talking about;-).
So, free-standing functions and operators are just THE right, consistent way to go (even though the lack of generic functions, in bare-bones Python, does remove some fraction of the inherent elegance, it's still a reasonable mix of elegance and practicality!-). | 2 | 10 | 0 | i am a python newbie, and i am not sure why python implemented len(obj), max(obj), and min(obj) as static-like functions (i am from the java language) over obj.len(), obj.max(), and obj.min()
what are the advantages and disadvantages (other than obvious inconsistency) of having len()... over the method calls?
why did guido choose this over the method calls? (this could have been solved in python3 if needed, but it wasn't changed in python3, so there must be good reasons... i hope)
thanks!! | The advantages of having static function like len(), max(), and min() over inherited method calls | 1.2 | 0 | 0 | 1,752 |
1,628,222 | 2009-10-27T01:02:00.000 | 3 | 0 | 1 | 0 | python,language-design | 1,628,285 | 4 | false | 0 | 0 | It emphasizes the capabilities of an object, not its methods or type. Capabilites are declared by "helper" functions such as __iter__ and __len__ but they don't make up the interface. The interface is in the builtin functions, and beside this also in the buit-in operators like + and [] for indexing and slicing.
Sometimes, it is not a one-to-one correspondance: For example, iter(obj) returns an iterator for an object, and will work even if __iter__ is not defined. If not defined, it goes on to look if the object defines __getitem__ and will return an iterator accessing the object index-wise (like an array).
This goes together with Python's duck typing: we care only about what we can do with an object, not that it is of a particular type. | 2 | 10 | 0 | i am a python newbie, and i am not sure why python implemented len(obj), max(obj), and min(obj) as static-like functions (i am from the java language) over obj.len(), obj.max(), and obj.min()
what are the advantages and disadvantages (other than obvious inconsistency) of having len()... over the method calls?
why did guido choose this over the method calls? (this could have been solved in python3 if needed, but it wasn't changed in python3, so there must be good reasons... i hope)
thanks!! | The advantages of having static function like len(), max(), and min() over inherited method calls | 0.148885 | 0 | 0 | 1,752 |
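A tiny illustration of how the built-ins and operators dispatch to those special methods (the class is made up):

class Deck(object):
    def __init__(self, cards):
        self.cards = list(cards)

    def __len__(self):                  # used by len(deck)
        return len(self.cards)

    def __add__(self, other):           # used for deck + other
        return Deck(self.cards + list(other))

    def __radd__(self, other):          # used for other + deck when other's type gives up
        return Deck(list(other) + self.cards)

deck = Deck(['A', 'K', 'Q'])
print(len(deck))                        # 3
print(len((deck + ['J']).cards))        # 4
print(len((['J'] + deck).cards))        # 4, thanks to __radd__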
1,628,372 | 2009-10-27T01:56:00.000 | 5 | 0 | 0 | 0 | python,statistics,sas | 1,628,669 | 2 | false | 0 | 0 | yes. sas 9.2 can interact with soap and restful apis. i haven't had much success with twitter. i have had some success with google spreadsheets (in sas 9.1.3) and i've seen code to pull google analytics (in sas 9.2).
as with python and r, you can write the code in any text editor, but you'll need to have sas to actually execute it. lately, i've been bouncing between eclipse, pspad, and sas's enhanced editor for writing code, but i always have to submit in sas. | 1 | 1 | 0 | I have been taking a few graduate classes with a professor I like alot and she raves about SAS all of the time. I "grew up" learning stats using SPSS, and with their recent decisions to integrate their stats engine with R and Python, I find it difficult to muster up the desire to learn anything else. I am not that strong in Python, but I can get by with most tasks that I want to accomplish.
Admittedly, I do see the upside to SAS, but I have learned to do some pretty cool things combining SPSS and Python, like grabbing data from the web and analyzing it real-time. Plus, I really like that I can use the GUI to generate the base for my code before I add my final modifications. In SAS, it looks like I would have to program everything by hand (ignoring Enterprise Guide).
My question is this. Can you grab data from the web and parse it into SAS datasets? This is a deal-breaker for me. What about interfacing with API's like Google Analytics, Twitter, etc? Are there external IDE's that you can use to write and execute SAS programs?
Any help will be greatly appreciated.
Brock | SAS and Web Data | 0.462117 | 0 | 1 | 2,215 |
1,628,562 | 2009-10-27T03:07:00.000 | 4 | 0 | 0 | 0 | python,google-app-engine,leaderboard | 1,628,703 | 2 | true | 0 | 0 | If writes are very rare compared to reads (a key assumption in most key-value stores, and not just in those;-), then you might prefer to take a time hit when you need to update scores (a write) rather than to get the relative leaderboards (a read). Specifically, when a user's score change, queue up tasks for each of their friends to update their "relative leaderboards" and keep those leaderboards as list attributes (which do keep order!-) suitably sorted (yep, the latter's a denormalization -- it's often necessary to denormalize, i.e., duplicate information appropriately, to exploit key-value stores at their best!-).
Of course you'll also update the relative leaderboards when a friendship (user to user connection) disappears or appears, but those should (I imagine) be even rarer than score updates;-).
If writes are pretty frequent, since you don't need perfectly precise up-to-the-second info (i.e., it's not financials/accounting stuff;-), you still have many viable approaches to try.
E.g., big score changes (rarer) might trigger the relative-leaderboards recomputes, while smaller ones (more frequent) get stashed away and only applied once in a while "when you get around to it". It's hard to be more specific without ballpark numbers about frequency of updates of various magnitude, typical network-friendship cluster sizes, etc, etc. I know, like everybody else, you want a perfect approach that applies no matter how different the sizes and frequencies in question... but, you just won't find one!-) | 1 | 0 | 0 | Ive been working on a feature of my application to implement a leaderboard - basically stack rank users according to their score. Im currently tracking the score on an individual basis. My thought is that this leaderboard should be relative instead of absolute i.e. instead of having the top 10 highest scoring users across the site, its a top 10 among a user's friend network. This seems better because everyone has a chance to be #1 in their network and there is a form of friendly competition for those that are interested in this sort of thing. Im already storing the score for each user so the challenge is how to compute the rank of that score in real time in an efficient way. Im using Google App Engine so there are some benefits and limitations (e.g., IN [array]) queries perform a sub-query for every element of the array and also are limited to 30 elements per statement
For example
1st Jack 100
2nd John 50
Here are the approaches I came up with, but they all seem inefficient, and I thought that this community could come up with something more elegant. My sense is that any solution will likely be done with a cron job, and that I will store a daily rank and list order to optimize read operations, but it would be cool if there were something more lightweight and real-time.
Pull the list of all users of the site ordered by score.
For each user pick their friends out of that list and create new rankings.
Store the rank and list order.
Update daily.
Cons - If I get a lot of users this will take forever
2a. For each user pick their friends and for each friend pick score.
Sort that list.
Store the rank and list order.
Update daily.
Record the last position of each user so that the pre-existing list can be used for re-ordering for the next update in order to make it more efficient (may save sorting time)
2b. Same as above except only compute the rank and list order for people whose profiles have been viewed in the last day
Cons - rank is only up to date for the 2nd person that views the profile | Real time update of relative leaderboard for each user among friends | 1.2 | 0 | 1 | 1,929 |
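A plain-Python sketch of the denormalized recompute for a single user's network; the datastore calls, the friend model and the attribute names are left out as assumptions:

def rebuild_relative_leaderboard(user, friends):
    # user and each friend are assumed to have .name and .score attributes;
    # in the real app the result would be stored on the profile entity and
    # refreshed from a cron job or task queue whenever a score changes
    everyone = [user] + list(friends)
    ordered = sorted(everyone, key=lambda u: u.score, reverse=True)
    rank = ordered.index(user) + 1
    return rank, [(u.name, u.score) for u in ordered]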
1,628,564 | 2009-10-27T03:07:00.000 | 1 | 0 | 0 | 0 | javascript,jquery,python | 1,628,598 | 3 | false | 1 | 0 | There are two general approaches:
Modify your Python code so that it runs as a CGI (or WSGI or whatever) module and generate the page of interest by running some server side code.
Use Javascript with jQuery to load the content of interest by running some client side code.
The difference between these two approaches is where the third party server sees the requests coming from. In the first case, it's from your web server. In the second case, it's from the browser of the user accessing your page.
Some browsers may not handle loading content from third party servers very gracefully (that is, they might pop up warning boxes or something). | 1 | 1 | 0 | Basically, what I'm trying to do is simply make a small script that accesses finds the most recent post in a forum and pulls some text or an image out of it. I have this working in python, using the htmllib module and some regex. But, the script still isn't very convenient as is, it would be much nicer if I could somehow put it into an HTML document. It appears that simply embedding Python scripts is not possible, so I'm looking to see if theres a similar feature like python's htmllib that can be used to access some other webpage and extract some information from it.
(Essentially, if I could get this script going in the form of an html document, I could just open one html document, rather than navigate to several different pages to get the information I want to check)
I'm pretty sure that javascript doesn't have the functionality I need, but I was wondering about other languages such as jQuery, or even something like AJAX? | Is it possible access other webpages from within another page | 0.066568 | 0 | 1 | 213 |
1,628,766 | 2009-10-27T04:23:00.000 | 1 | 0 | 0 | 0 | python,proxy,multithreading,web-crawler,pool | 5,803,567 | 2 | false | 1 | 0 | usually proxies filter websites categorically based on how the website was created. It is difficult to transmit data through proxies based on categories. Eg youtube is classified as audio/video streams therefore youtube is blocked in some places espically schools.
If you want to bypass proxies and get the data off a website and put it in your own genuine website like a dot com website that can be registered it to you.
When you are making and registering the website categorise your website as anything you want. | 1 | 1 | 0 | Instead of just using urllib does anyone know of the most efficient package for fast, multithreaded downloading of URLs that can operate through http proxies? I know of a few such as Twisted, Scrapy, libcurl etc. but I don't know enough about them to make a decision or even if they can use proxies.. Anyone know of the best one for my purposes? Thanks! | Python Package For Multi-Threaded Spider w/ Proxy Support? | 0.099668 | 0 | 1 | 9,003 |
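Whichever package is chosen, plain urllib2 can already go through an HTTP proxy, which is the piece each worker thread needs; the proxy address below is a placeholder:

import urllib2

proxy = urllib2.ProxyHandler({'http': 'http://127.0.0.1:8080'})  # placeholder proxy address
opener = urllib2.build_opener(proxy)

def fetch(url):
    # each downloader thread could build its own opener with a different proxy
    return opener.open(url, timeout=30).read()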
1,629,800 | 2009-10-27T10:08:00.000 | 2 | 0 | 0 | 0 | python,django,multithreading | 1,629,913 | 3 | false | 1 | 0 | Generally, your Django app already is multi-threaded. That's the way most of the standard Django servers operate -- they can tolerate multiple WSGI threads sending requests to them.
Further, you'll almost always have Django running under Apache, which is also multi-threaded.
If you use mod_wsgi, then Django may be part of the Apache process or a separate process.
Anything that is running "side-by-side" (Whatever that means) will be outside Apache, outside Django, and in a separate process.
So any multi-threading considerations don't apply between your Apache process (which contains Django) and your other process. | 1 | 1 | 0 | I need to develop an app that runs side by side with a django-app.
This will be the first time i develop a multithreaded app that runs next to a django-app so are there any 'gotchas' and 'traps' i should be aware of? | Are there any known issues with django and multithreading? | 0.132549 | 0 | 0 | 1,383 |
1,630,427 | 2009-10-27T12:19:00.000 | 0 | 0 | 0 | 0 | python,django,zip | 1,631,366 | 2 | true | 1 | 0 | it works with ZipFile.extractall | 1 | 0 | 0 | I have few uploads in this app, uploading csv files is working fine.
I have a model that has zip upload in it. Zip file is uploaded, can be viewed, but having issues extracting it.
class Message(models.Model):
uploadFile = models.FileField(_('images file (.zip)'),
upload_to='message/',
storage=FileSystemStorage(),
help_text=_(''))
The error is
IOError at /backend/media/new
(13, 'Permission denied') | Django Zip upload permission problem | 1.2 | 0 | 0 | 677 |
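For reference, the extraction itself is just the following; the paths are placeholders, and the target directory must be writable by the web server user, which is what the permission error points at:

import zipfile

archive = zipfile.ZipFile('/path/to/upload.zip')
archive.extractall('/path/writable/by/webserver')   # fails with 'Permission denied' if the user cannot write here
archive.close()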
1,630,728 | 2009-10-27T13:15:00.000 | 3 | 0 | 1 | 1 | python,python-3.x,python-idle | 19,543,322 | 2 | false | 0 | 0 | Type 'idle3' in the terminal window. That should launch your copy of idle 3.1 | 1 | 6 | 0 | I installed Python 3.1 yesterday on my Windows Vista PC, and was surprised to find that the version of IDLE is 2.6.4, for "Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32"
I was hoping to use IDLE to investigate some of the new features of Python 3...
I guess I'm stuck with the command line...
Anyone know what's up with Python 3's IDLE?
Thanks | No IDLE for Python 3? | 0.291313 | 0 | 0 | 12,340 |
1,631,574 | 2009-10-27T15:23:00.000 | 2 | 0 | 0 | 0 | python,user-interface,qt,pyqt | 1,632,726 | 2 | true | 0 | 1 | I'm showing this as C++, which is what the documentation I'm looking at is in. It shouldn't be too difficult to convert to python.
You need to create a custom derivative of QLayoutItem, which overrides bool hasHeightForWidth() and int heightForWidth( int width) to preserve the aspect ratio somehow. You could either pass the image in and query it, or you could just set the ratio directly. You'll also need to make sure the widget() function returns a pointer to the proper label.
Once that is done, you can add a layout item to a layout in the same manner you would a widget. So when your label gets added, change it to use your custom layout item class.
I haven't actually tested any of this, so it is a theoretical solution at this point. I don't know of any way to do this solution through designer, if that was desired. | 1 | 1 | 0 | If you have a QImage wrapped inside a QLabel, is it possible to scale it up or down when you resize the window and maintain the aspect ratio (so the image doesn't become distorted)? I figured out that it can scale using setScaledContents(), and you can set a minimum and maximum size, but the image still loses its aspect.
It would be great if this could be explained using Python (since I don't know c++), but I'll take what I can get. :-)
Thanks in advance! | How to allow scaling with uniform aspect ratio in (Py)Qt? | 1.2 | 0 | 0 | 2,384 |
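An alternative sketch in PyQt that side-steps setScaledContents by rescaling the pixmap whenever the label is resized; the class name is made up and this is not the layout-item approach described above:

from PyQt4 import QtCore, QtGui

class AspectLabel(QtGui.QLabel):
    def __init__(self, pixmap, parent=None):
        QtGui.QLabel.__init__(self, parent)
        self._pixmap = pixmap                      # keep the full-size original
        self.setMinimumSize(1, 1)

    def resizeEvent(self, event):
        # rescale from the original every time, so quality does not degrade
        scaled = self._pixmap.scaled(self.size(),
                                     QtCore.Qt.KeepAspectRatio,
                                     QtCore.Qt.SmoothTransformation)
        self.setPixmap(scaled)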
1,633,934 | 2009-10-27T22:03:00.000 | 0 | 0 | 1 | 0 | python,network-programming | 1,634,178 | 4 | false | 0 | 0 | If you're developing something as a learning exercise you might find it best to go with a structured text (ie. human readable and human writable) format.
An example would be to use a fixed number of fields per command, fixed width text fields and/or easily parsable field delimiters.
Generally text is less efficient in terms of packet size, but it does have the benefits that you can read it easily if you do a packet capture (eg. using wireshark) or if you want to use telnet to mimic a client.
And if this is only a learning exercise then ease of debugging is a significant issue. | 2 | 2 | 0 | I'm learning socket programming (in python) and I was wondering what the best/typical way of encapsulating data is? My packets will be used to issue run, stop, configure, etc. commands on the receiving side. Is it helpful to use JSON or just straight text? | Designing a simple network packet | 0 | 0 | 1 | 916 |
1,633,934 | 2009-10-27T22:03:00.000 | 1 | 0 | 1 | 0 | python,network-programming | 1,635,005 | 4 | false | 0 | 0 | I suggest plain text to begin with - it is easier to debug. The format that your text takes depends on what you're doing, how many commands, arguments, etc. Have you fleshed out how your commands will look? Once you figure out what that looks like it'll likely suggest a format all on its own.
Are you using TCP or UDP? TCP is easy since it is a stream, but if you're using UDP keep in mind the maximum size of UDP packets and thus how big your message can be. | 2 | 2 | 0 | I'm learning socket programming (in python) and I was wondering what the best/typical way of encapsulating data is? My packets will be used to issue run, stop, configure, etc. commands on the receiving side. Is it helpful to use JSON or just straight text? | Designing a simple network packet | 0.049958 | 0 | 1 | 916 |
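A tiny sketch of a delimited plain-text command format of that kind; the command names and the pipe delimiter are arbitrary choices, not a standard:

def encode(command, *args):
    # encode('CONFIGURE', 'rate', '10') -> 'CONFIGURE|rate|10\n'
    return '|'.join((command,) + args) + '\n'

def decode(line):
    parts = line.rstrip('\n').split('|')
    return parts[0], parts[1:]      # the command name and its argument list

cmd, args = decode(encode('RUN', 'job42'))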
1,634,509 | 2009-10-28T00:27:00.000 | 2 | 0 | 0 | 0 | python,pygame,geometry-surface,blit | 1,634,553 | 11 | false | 0 | 0 | I don't know what API you're using, so here's a vague answer:
In virtually all cases "clearing" a surface simply blits a coloured quad of the same size as the surface onto it. The colour used is whatever you want your clear colour to be.
If you know how do blit, just blit a white quad of the same size onto the surface. | 3 | 8 | 0 | Is there any way to clear a surface from anything that has been blitted to it? | Is there any way to "clear" a surface? | 0.036348 | 0 | 0 | 23,510 |
1,634,509 | 2009-10-28T00:27:00.000 | 2 | 0 | 0 | 0 | python,pygame,geometry-surface,blit | 1,637,386 | 11 | false | 0 | 0 | You can't undo one graphic written over the top of another graphic any more than you can undo one chalk illustration drawn over the top of another chalk illustration on the same board.
What is typically done in graphics is what you'd do with the chalkboard - clear the whole lot, and next time only redraw what you want to keep. | 3 | 8 | 0 | Is there any way to clear a surface from anything that has been blitted to it? | Is there any way to "clear" a surface? | 0.036348 | 0 | 0 | 23,510 |
1,634,509 | 2009-10-28T00:27:00.000 | 1 | 0 | 0 | 0 | python,pygame,geometry-surface,blit | 26,833,607 | 11 | false | 0 | 0 | .fill((255,255,255,0)) appears to work for me anyway. | 3 | 8 | 0 | Is there any way to clear a surface from anything that has been blitted to it? | Is there any way to "clear" a surface? | 0.01818 | 0 | 0 | 23,510 |
1,635,565 | 2009-10-28T00:47:00.000 | 1 | 0 | 0 | 0 | python,powerpoint | 1,737,489 | 4 | false | 0 | 0 | Save the file with the extension ".pps". That will make powerpoint open the file in presentation mode.
The presentation needs to be designed to advance slides, else you will have to script that part. | 1 | 2 | 0 | I was wondering how I can make a script load a PowerPoint file, advance slides automatically and put it on full screen. Is there a way to make Windows do that? Can I just load powerpoint.exe and maybe use some sort of API/pipe to give commands from another script?
To make a case: I'm making a script that automatically scans a folder in Windows (using Python), loads up the PowerPoint presentations and keeps playing them in order. | How do I make powerpoint play presentations/load up ppts automatically? | 0.049958 | 0 | 0 | 4,926
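Following the .pps suggestion, the scanning script can stay very small on Windows, since os.startfile simply hands each file to PowerPoint; the folder path and the delay are placeholders, and the claim that a .pps starts directly in show mode comes from the answer above:

import glob
import os
import time

FOLDER = r'C:\presentations'        # placeholder folder to scan

while True:
    for path in sorted(glob.glob(os.path.join(FOLDER, '*.pps'))):
        os.startfile(path)          # a .pps opens straight into the slide show
        time.sleep(300)             # crude: wait before launching the next one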
1,635,638 | 2009-10-28T07:25:00.000 | 1 | 0 | 0 | 0 | python,django,google-app-engine,referenceproperty | 1,636,697 | 2 | false | 1 | 0 | I got it to work by overriding the init method of the modelform to pick up the correct fields as I had to do filtering of the choices as well. | 1 | 1 | 0 | The default choice field display of a reference property in appengine returns the choices
as the string representation of the entire object. What is the best method to override this behaviour? I tried to override str() in the referenced class. But it does not work. | How do we override the choice field display of a reference property in appengine using Django? | 0.099668 | 0 | 0 | 289 |
1,638,870 | 2009-10-28T17:40:00.000 | 1 | 0 | 0 | 0 | python,django,frameworks,templating | 1,639,340 | 2 | false | 1 | 0 | I don't see you getting away from writing templates, especially if you would want to format it, even slightly.
However you can re-use basic templates, for e.g, create a generic object_list.html and object_detail.html
that will basically contain the information to loop over the object list and present it, and show the object detail. You could use these "Generic" templates across the entire app if need be. | 1 | 3 | 0 | So, Generic views are pretty cool, but what I'm interested in is something that's a generic template.
so for example, I can give it an object and it'll just tostring it for me.
or if I give it a list, it'll just iterate over the objects and tostring them as a ul (or tr, or whatever else it deems necessary).
for most uses you wouldn't need this. I just threw something together quickly for a friend (a bar stock app, if you must know), and I don't feel like writing templates. | django generic templates | 0.099668 | 0 | 0 | 1,334 |
1,640,080 | 2009-10-28T20:57:00.000 | 0 | 0 | 1 | 0 | c#,python,excel | 1,640,928 | 10 | false | 0 | 0 | The answer probably has more to do with the circumstances than the question of which is the better language for the task at hand.
If you have no prior C# experience, it's probably not a good idea to jump into it now and learn it as you go along. Syntax differences are trivial to learn, mastery is defined by knowing the dirty details and how to use the language's strengths to get the job done.
If you have plenty of time and starting over a lot is NOT an issue, or you have experience with a language that is paradigmatically very similar to C# (say, Java or C++), you could probably afford to take the risk and adapt to the new language very quickly.
If you don't care about efficiency (e.g. it's a pet project, you don't care much about your job or efficiency simply isn't important), try C# just to learn something new (always a good thing) or stick to Python to make sure you won't have to adjust your thinking too much (always a source of vulnerabilities and misunderstandings).
If you definitely want to use C#, you should play around with it in your "free time" (not necessarily at home, but preferably at work when there's nothing else to do) and thus build a skillset you can use once you have to start with the actual project. This way you'll already be familiar with the paradigm (to use the term loosely) and encounter less problems where they count. | 5 | 3 | 0 | I have the task of developing an application to pull data from remote REST services and generating Excel reports. This application will be used by a handful of users at the company (10-15). The data load can reach 10,000-200,000 records. I have been debating whether to use Python or C#...
The only reason I am considering Python is because I am familiar with it and it would be less of a risk.
Personally I want to try use C# since this would be a good opportunity to learn it. The application is not too complicated so the overhead of learning it won't be too much... I think.
Are there any issues with C# that I should be concerned about for this type of program? The users run Windows XP... would users not having .NET installed be a major concern?
Thanks in advance.
EDIT:
I guess I need to stress the fact that the end users should be able to run the application without installing additional libraries/frameworks. | C# or Python for my app | 0 | 0 | 0 | 1,737 |
1,640,080 | 2009-10-28T20:57:00.000 | 5 | 0 | 1 | 0 | c#,python,excel | 1,640,096 | 10 | false | 0 | 0 | If you want to learn C# and you don't have such a limited time constraint, now might be a good time to try it. Also, though I haven't done Excel work with either Python or C#, I would expect it to be easier to work with Excel files with a Microsoft product rather than Python. If you're just dumping data in CSV format to a file, though, either Python or C# will work fine.
If you want to try something even crazier, you can use IronPython such that you'll have .NET resources available but you can still write in Python. | 5 | 3 | 0 | I have the task of developing an application to pull data from remote REST services and generating Excel reports. This application will be used by a handful of users at the company (10-15). The data load can reach 10,000-200,000 records. I have been debating whether to use Python or C#...
The only reason I am considering Python is because I am familiar with it and it would be less of a risk.
Personally I want to try use C# since this would be a good opportunity to learn it. The application is not too complicated so the overhead of learning it won't be too much... I think.
Are there any issues with C# that I should be concerned about for this type of program? The users run Windows XP... would users not having .NET installed be a major concern?
Thanks in advance.
EDIT:
I guess I need to stress the fact that the end users should be able to run the application without installing additional libraries/frameworks. | C# or Python for my app | 0.099668 | 0 | 0 | 1,737 |
1,640,080 | 2009-10-28T20:57:00.000 | 0 | 0 | 1 | 0 | c#,python,excel | 1,640,094 | 10 | false | 0 | 0 | If the users have to install a C# client then not having .NET installed would be an issue. You can package the installer so it downloads the .NET runtime, but depending on what features you use it could be quite a big download.
If your application is web based then all the C# code runs on the server and just delivers HTML to the browser so whether .NET is installed or not shouldn't matter. | 5 | 3 | 0 | I have the task of developing an application to pull data from remote REST services and generating Excel reports. This application will be used by a handful of users at the company (10-15). The data load can reach 10,000-200,000 records. I have been debating whether to use Python or C#...
The only reason I am considering Python is because I am familiar with it and it would be less of a risk.
Personally I want to try use C# since this would be a good opportunity to learn it. The application is not too complicated so the overhead of learning it won't be too much... I think.
Are there any issues with C# that I should be concerned about for this type of program? The users run Windows XP... would users not having .NET installed be a major concern?
Thanks in advance.
EDIT:
I guess I need to stress the fact that the end users should be able to run the application without installing additional libraries/frameworks. | C# or Python for my app | 0 | 0 | 0 | 1,737 |
1,640,080 | 2009-10-28T20:57:00.000 | 0 | 0 | 1 | 0 | c#,python,excel | 1,640,232 | 10 | false | 0 | 0 | If you do a web based application not having installed .NET in the client machine wont be a problem .. but I suppose you dont want to try both C# and web development at the same time | 5 | 3 | 0 | I have the task of developing an application to pull data from remote REST services and generating Excel reports. This application will be used by a handful of users at the company (10-15). The data load can reach 10,000-200,000 records. I have been debating whether to use Python or C#...
The only reason I am considering Python is because I am familiar with it and it would be less of a risk.
Personally I want to try use C# since this would be a good opportunity to learn it. The application is not too complicated so the overhead of learning it won't be too much... I think.
Are there any issues with C# that I should be concerned about for this type of program? The users run Windows XP... would users not having .NET installed be a major concern?
Thanks in advance.
EDIT:
I guess I need to stress the fact that the end users should be able to run the application without installing additional libraries/frameworks. | C# or Python for my app | 0 | 0 | 0 | 1,737 |
1,640,080 | 2009-10-28T20:57:00.000 | 0 | 0 | 1 | 0 | c#,python,excel | 1,640,103 | 10 | false | 0 | 0 | Integration with Excel will almost certainly be simpler from C#. Given the requirements "Windows only" and "Integrates with Excel", it would seem a simple choice that C# is better suited to this individual problem.
And, no, users' not having .NET is not a concern compared with Python as the alternative. The Dot Net framework is a standard part of Microsoft Update. In contrast, Python will almost certainly not be there on the average end users' machine. In your case, with an internal app, that last point might not matter of course. | 5 | 3 | 0 | I have the task of developing an application to pull data from remote REST services and generating Excel reports. This application will be used by a handful of users at the company (10-15). The data load can reach 10,000-200,000 records. I have been debating whether to use Python or C#...
The only reason I am considering Python is because I am familiar with it and it would be less of a risk.
Personally I want to try use C# since this would be a good opportunity to learn it. The application is not too complicated so the overhead of learning it won't be too much... I think.
Are there any issues with C# that I should be concerned about for this type of program? The users run Windows XP... would users not having .NET installed be a major concern?
Thanks in advance.
EDIT:
I guess I need to stress the fact that the end users should be able to run the application without installing additional libraries/frameworks. | C# or Python for my app | 0 | 0 | 0 | 1,737 |
1,640,112 | 2009-10-28T21:06:00.000 | 0 | 0 | 1 | 0 | python,multithreading,queue | 1,642,871 | 3 | false | 0 | 0 | If the rate of input is fast enough, you can always buffer bytes up into a string before pushing that onto the Queue. That will probably increase throughput by reducing the amount of locking done, at the expense of a little extra latency on the receiving end. | 3 | 3 | 0 | I have a simple thread that grabs bytes from a Bluetooth RFCOMM (serial-port-like) socket and dumps them into a Queue.Queue (FIFO), which seems like the typical method to exchange data between threads. Works fine.
Is this overkill though? Could I just use a bytearray and then have my reader thread .append(somebyte) and the processing function just .pop(0)? I'm not sure if the protections in Queue are meant for more complex "multi-producer, multi-consumer queues" and are a waste for a point-to-point byte stream. Doing things like flushing the queue or grabbing multiple bytes seems more awkward with the Queue vs. a simpler data type.
I guess the answer might have to do with whether .pop() is atomic, but would it even matter then?... | Is a Python Queue needed for simple byte stream between threads? | 0 | 0 | 0 | 2,431
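For reference, the Queue-based version stays small, and buffering reads into larger chunks (as suggested in one answer) only changes what gets put on the queue; sock and process are placeholders:

import Queue            # named queue in Python 3
import threading

q = Queue.Queue()

def process(chunk):
    pass                            # placeholder for the real handling code

def reader(sock):
    while True:
        data = sock.recv(64)        # grab a chunk of bytes from the RFCOMM socket
        if not data:
            break
        q.put(data)                 # one put per chunk keeps locking overhead down

def consumer():
    while True:
        process(q.get())            # get() blocks until the reader produces something

threading.Thread(target=consumer).start()   # reader(sock) runs as the producer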