Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,098,088 | 2010-01-19T23:49:00.000 | 2 | 0 | 1 | 0 | python | 2,098,099 | 6 | false | 0 | 0 | Probably not. Python files are "modules". Modules should contain whatever code is independently reusable. If that comprises several classes, which is the norm, then that's perfectly OK. | 4 | 40 | 0 | I'm quite new to Python in general.
I'm aware that I can create multiple classes in the same .py file, but I'm wondering if I should create each class in its own .py file.
In C# for instance, I would have a class that handles all Database interactions. Then another class that had the business rules.
Is this the case in Python? | Should I create each class in its own .py file? | 0.066568 | 0 | 0 | 22,009 |
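A minimal sketch of the norm that answer describes, with several related classes sharing one module (all names are illustrative):

    # storage.py -- one module holding closely related, reusable classes
    class Database(object):
        """Handles all database interactions."""
        def fetch(self, key):
            pass

    class BusinessRules(object):
        """Applies the business rules on top of Database."""
        def __init__(self, db):
            self.db = db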
2,098,088 | 2010-01-19T23:49:00.000 | 3 | 0 | 1 | 0 | python | 2,102,195 | 6 | false | 0 | 0 | Another point worth mentioning is that if a file grows too large, you can always transform it into a package, making it easy to reorganize without breaking the client's code. | 4 | 40 | 0 | I'm quite new to Python in general.
I'm aware that I can create multiple classes in the same .py file, but I'm wondering if I should create each class in its own .py file.
In C# for instance, I would have a class that handles all Database interactions. Then another class that had the business rules.
Is this the case in Python? | Should I create each class in its own .py file? | 0.099668 | 0 | 0 | 22,009 |
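A hedged sketch of the package trick from that answer; re-exporting names in __init__.py keeps client imports working (the file layout is an assumption):

    # models/__init__.py -- models.py grown into a models/ package
    from db import Database          # clients still write: from models import Database
    from rules import BusinessRules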
2,099,189 | 2010-01-20T05:12:00.000 | 0 | 0 | 0 | 0 | python,django,chat,twisted,forums | 2,101,355 | 3 | false | 1 | 0 | If the forum application needs to get something from the chat application, it's simpler to make the forum application communicate with the chat application via plain HTTP requests and to run the two separately. | 1 | 5 | 0 | 1) I want to develop a website that has forums and chat. The chat and forums are linked in some way, meaning that for each thread the users can chat in that thread's chat room or post a reply to the forum.
I was thinking of using Django for the forums and Twisted for the chat. Can I combine the two?
The chat application developed using Twisted is linked to the forum.
2) If I use Twisted and Django, what kind of web host should I use when putting my website on the web? Should I use a VPS? Or can I get a host that supports both? | using django and twisted together | 0 | 0 | 0 | 15,742 |
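A hedged sketch of that plain-HTTP coupling; the URL and endpoint are assumptions for illustration:

    # Forum (Django) side asking the Twisted chat service for data:
    import urllib2

    status = urllib2.urlopen('http://localhost:8081/chat/status').read()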
2,101,838 | 2010-01-20T14:06:00.000 | 7 | 0 | 0 | 0 | python,django | 2,102,333 | 2 | false | 1 | 0 | Calling People.objects.all(pk=code) (calling all) will not work as a filter: all() takes no arguments, so the pk=code actually raises a TypeError; People.objects.filter(pk=code) is the call that returns a filtered QuerySet.
Calling People.objects.get(pk=code) (calling get) will return the single People object whose primary key equals code, or raise an error if none is found; pk is an alias for the model's primary-key field. | 1 | 1 | 0 | When querying in Django, say People.objects.all(pk=code), what does pk=code mean? | Django primary key | 1 | 0 | 0 | 9,900 |
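A minimal sketch of the two lookups, assuming a People model with the default integer primary key:

    person = People.objects.get(pk=code)      # one object, or People.DoesNotExist
    people = People.objects.filter(pk=code)   # a QuerySet holding zero or one row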
2,102,216 | 2010-01-20T14:55:00.000 | 0 | 0 | 1 | 0 | python,parallel-processing,multiple-processes | 2,102,248 | 3 | false | 0 | 0 | Because of the global interpreter lock you would be hard-pressed to get any speedup this way with threads. In reality, even multithreaded programs in Python only run on one core. Thus, you would just be running N threads at 1/N times the speed. Even if one finished in half the time of the others, you would still lose time in the big picture. | 1 | 0 | 0 | How can multiple calculations be launched in parallel, while stopping them all when the first one returns?
The application I have in mind is the following: there are multiple ways of calculating a certain value; each method takes a different amount of time depending on the function parameters; by launching calculations in parallel, the fastest calculation would automatically be "selected" each time, and the other calculations would be stopped.
Now, there are some "details" that make this question more difficult:
The parameters of the function to be calculated include functions (that are calculated from data points; they are not top-level module functions). In fact, the calculation is the convolution of two functions. I'm not sure how such function parameters could be passed to a subprocess (they are not picklable).
I do not have access to all calculation codes: some calculations are done internally by Scipy (probably via Fortran or C code). I'm not sure whether threads offer something similar to the termination signals that can be sent to processes.
Is this something that Python can do relatively easily? | How can multiple calculations be launched in parallel, while stopping them all when the first one returns? [Python] | 0 | 0 | 0 | 704 |
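A hedged sketch of the race pattern with the multiprocessing module; it assumes the competing functions and their arguments are picklable, which the question notes may not hold for functions built from data points:

    import multiprocessing

    def _runner(queue, func, args):
        queue.put(func(*args))

    def first_result(funcs, args):
        queue = multiprocessing.Queue()
        procs = [multiprocessing.Process(target=_runner, args=(queue, f, args))
                 for f in funcs]
        for p in procs:
            p.start()
        winner = queue.get()      # blocks until the fastest calculation answers
        for p in procs:
            p.terminate()         # also stops work stuck inside C/Fortran code
        return winner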
2,103,071 | 2010-01-20T16:35:00.000 | 1 | 0 | 1 | 0 | python,csv,types,casting | 2,103,132 | 7 | false | 0 | 0 | Well... you can't.
How would you decide if "5" is meant as a string or an integer?
How would you decide if "20100120" is meant as an integer or a date?
You can of course make educated guesses, and implement some kind of parse order. First try it as a date, then as an int, then as a float, and lastly fall back to a string (trying float before int would swallow every integer). | 1 | 8 | 0 | When I read a comma-separated file or string with the csv parser in Python, all items are represented as a string. See the example below.
    >>> import csv
    >>> a = "1,2,3,4,5"
    >>> r = csv.reader([a])
    >>> for row in r:
    ...     d = row
    ...
    >>> d
    ['1', '2', '3', '4', '5']
    >>> type(d[0])
    <type 'str'>
I want to determine for each value whether it is a string, float, integer or date. How can I do this in Python? | determine the type of a value which is represented as string in python | 0.028564 | 0 | 0 | 7,507 |
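A hedged sketch of the guess-and-fall-back order from the answer; the %Y%m%d date format is an assumption for illustration:

    from datetime import datetime

    def guess_type(value):
        for parse in (lambda v: datetime.strptime(v, "%Y%m%d"), int, float):
            try:
                return parse(value)
            except ValueError:
                pass
        return value    # nothing matched, so keep it as a string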
2,103,274 | 2010-01-20T17:01:00.000 | 0 | 0 | 0 | 0 | python,sqlalchemy | 65,265,231 | 6 | false | 0 | 0 | You can install 'DB Browser for SQLite', open your current database file, simply add/edit the table in your database, save it, and run your app
(then add the matching field to your model after saving the above changes) | 1 | 15 | 0 | I want to add a field to an existing mapped class; how would I update the SQL table automatically? Does SQLAlchemy provide a method to update the database with a new column if a field is added to the class? | SqlAlchemy add new Field to class and create corresponding column in table | 0 | 1 | 0 | 11,257 |
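A hedged alternative sketch: issue the ALTER TABLE yourself and mirror it on the mapped class (the table and column names are assumptions; the call style matches older SQLAlchemy releases):

    from sqlalchemy import create_engine

    engine = create_engine('sqlite:///app.db')
    engine.execute('ALTER TABLE users ADD COLUMN nickname VARCHAR(50)')
    # ...then add a matching Column to the mapped class by hand.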
2,103,728 | 2010-01-20T18:04:00.000 | 0 | 1 | 0 | 0 | c++,python,perl,performance,lua | 2,103,784 | 7 | false | 0 | 1 | You could probably create an embedded language using C++ templates and operator overloading; see, for example, the uBLAS or FTensor matrix languages. I do not think Python or other interpreted languages are suitable for number-crunching/data processing. | 3 | 7 | 0 | I'm making an application that analyses one or more series of data using several different algorithms (agents). I came to the idea that each of these agents could be implemented as separate Python scripts which I run using either the Python C API or Boost.Python in my app.
I'm a little worried about runtime overhead TBH, as I'm doing some pretty heavy-duty data processing and I don't want to have to wait several minutes for each simulation. I will typically be making hundreds of thousands, if not millions, of iterations in which I invoke the external "agents"; am I better off just hardcoding everything in the app, or will the performance drop be tolerable?
Also, are there any other interpreted languages I can use other than Python? | Selecting An Embedded Language | 0 | 0 | 0 | 2,474 |
2,103,728 | 2010-01-20T18:04:00.000 | 1 | 1 | 0 | 0 | c++,python,perl,performance,lua | 2,103,831 | 7 | false | 0 | 1 | For millions of calls (from I'm assuming c++, because you mentioned boost) into python, yes: you will notice a performance hit. This may or may not be significant - perhaps the speed gain of trying out new 'agents' would be greater than the hit. Python does have fast numerical libraries (such as numpy) that might help, but you'll still incur overhead of marshalling data, calling into python, the gil, etc.
Yes, you can embed many other languages: check out Lua. Also, check out swig.org, which can connect to many other languages besides Python. | 3 | 7 | 0 | I'm making an application that analyses one or more series of data using several different algorithms (agents). I came to the idea that each of these agents could be implemented as separate Python scripts which I run using either the Python C API or Boost.Python in my app.
I'm a little worried about runtime overhead TBH, as I'm doing some pretty heavy-duty data processing and I don't want to have to wait several minutes for each simulation. I will typically be making hundreds of thousands, if not millions, of iterations in which I invoke the external "agents"; am I better off just hardcoding everything in the app, or will the performance drop be tolerable?
Also, are there any other interpreted languages I can use other than Python? | Selecting An Embedded Language | 0.028564 | 0 | 0 | 2,474 |
2,103,728 | 2010-01-20T18:04:00.000 | 5 | 1 | 0 | 0 | c++,python,perl,performance,lua | 2,104,152 | 7 | false | 0 | 1 | Tcl was designed from the ground up to be an embedded language. | 3 | 7 | 0 | I'm making an application that analyses one or more series of data using several different algorithms (agents). I came to the idea that each of these agents could be implemented as separate Python scripts which I run using either the Python C API or Boost.Python in my app.
I'm a little worried about runtime overhead TBH, as I'm doing some pretty heavy-duty data processing and I don't want to have to wait several minutes for each simulation. I will typically be making hundreds of thousands, if not millions, of iterations in which I invoke the external "agents"; am I better off just hardcoding everything in the app, or will the performance drop be tolerable?
Also, are there any other interpreted languages I can use other than Python? | Selecting An Embedded Language | 0.141893 | 0 | 0 | 2,474 |
2,104,767 | 2010-01-20T20:38:00.000 | 3 | 0 | 0 | 0 | python,django,templatetags | 2,104,791 | 1 | true | 1 | 0 | Template tags are just Python functions; you can import their module and call them with impunity, the only requirement being that you pass them appropriate arguments. The django.contrib.humanize.templatetags.humanize module has separate functions to do the work, so it's even easier in that specific case. | 1 | 0 | 0 | Is it possible to load a django template tag/filter to use as a function in one of my template tags?
I'm trying to load up some of the django.contrib.humanize filters so I can apply them to the results of some of my custom template tags. I can't seem to import them at all, and I don't want to have to rewrite any of that code. | Load and Reuse Django Template Filters | 1.2 | 0 | 0 | 250 |
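A minimal sketch of calling those filter functions directly; intcomma and ordinal are real functions in that module:

    from django.contrib.humanize.templatetags.humanize import intcomma, ordinal

    intcomma(1234567)   # -> '1,234,567'
    ordinal(3)          # -> '3rd'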
2,105,090 | 2010-01-20T21:26:00.000 | 3 | 0 | 1 | 0 | python,shell | 2,105,139 | 5 | false | 0 | 0 | I don't know specifically in python,
but in a shell the environment variable $COLUMNS contains the information you want. | 1 | 5 | 0 | How can I find out how may character are there in a line before the end line in an interactive shell using python? (Usually 80) | how many characters are there in line in a console? | 0.119427 | 0 | 0 | 1,971 |
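A hedged sketch; note that many shells set COLUMNS but do not export it, so it may be missing from os.environ:

    import os

    width = int(os.environ.get('COLUMNS', 80))   # fall back to the classic 80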
2,106,178 | 2010-01-21T00:50:00.000 | 1 | 0 | 1 | 0 | python,pyqt4 | 2,184,315 | 1 | true | 0 | 1 | You can remove the demos and examples directories inside your qt installation directory... they take up over 1GB of space and are not required. I would leave the rest there, unless you are really worried about space.
If you do try to clean up the QT installation directory, start by renaming larger files/directories (e.g. add a .old suffix to the name), and see if the features you use in QT still function. If it breaks, just rename the files/directories back (remove .old). | 1 | 3 | 0 | I successfully installed PyQt in both mac and PC. To do so I had to install mingw (on PC), Xcode (on MAC) and Qt4.6 library. Now that I have PyQt working perfectly, I would like to uninstall mingw, Xcode and Qt Library from both mac and PC.
I know I can remove Xcode and mingw, but what care should I take before removing Qt library. I know PyQt is still using it but it is not using whole 1.5Gig of files installed by Qt installer. So which files should I copy before removing Qt and where should I copy it to. | PyQt post installation question | 1.2 | 0 | 0 | 148 |
2,106,271 | 2010-01-21T01:16:00.000 | 2 | 0 | 1 | 0 | python | 2,106,293 | 4 | false | 0 | 0 | You can put code (classes, function defs, etc) into python modules (individual source files), which are then imported with import. Typically like functionality (like in the Python standard library) is contained within a single module. | 1 | 5 | 0 | I'm looking to develop a project in python and all of the python I have done is minor scripting with no regard to classes or structure. I haven't seen much about this, so is this how larger python projects are done?
Also, do things like "namespaces" and "projects" exist in this realm? As well as object-oriented principles such as inheriting from other classes? | Can you separate python projects logically into separate files/classes like in C#/Java? | 0.099668 | 0 | 0 | 13,190 |
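A minimal sketch showing that modules act as the namespaces and that inheritance works across them just as in C#/Java (file and class names are illustrative):

    # shapes.py
    class Shape(object):
        def area(self):
            raise NotImplementedError

    # circles.py
    from shapes import Shape

    class Circle(Shape):                # inheriting across module boundaries
        def __init__(self, r):
            self.r = r
        def area(self):
            return 3.14159 * self.r ** 2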
2,106,324 | 2010-01-21T01:29:00.000 | 1 | 1 | 0 | 0 | python,c,function-pointers | 2,106,391 | 1 | true | 0 | 1 | The CObject (PyCOBject) data type exists for this purpose. It holds a void*, but you can store any data you wish. You do have to be careful not to pass the wrong CObject to the wrong functions, as some other library's CObjects will look just like your own.
If you want more type security, you could easily roll your own PyType for this; all it has to do, after all, is contain a pointer of the right type. | 1 | 0 | 0 | I'm writing an application working with plugins. There are two types of plugins: Engine and Model. Engine objects have an update() method that calls the Model.velocity() method.
For performance reasons these methods are allowed to be written in C. This means that sometimes they will be written in Python and sometimes written in C.
The problem is that this forces Engine.update() to make an expensive Python function call to Model.velocity() (and also to reacquire the GIL). I thought about adding something like Model.get_velocity_c_func() to the API, which would allow Model implementations to return a pointer to the C version of their velocity() method if available, making it possible for Engine to do a faster C function call.
What data type should I use to pass the function pointer? And is this a good design at all; maybe there is an easier way? | Passing C function pointers between two python modules | 1.2 | 0 | 0 | 366 |
2,106,377 | 2010-01-21T01:46:00.000 | 1 | 0 | 0 | 0 | python,apache,mod-wsgi,cherrypy | 2,106,456 | 1 | false | 1 | 0 | The HTTP 500 error is used for internal server errors. Something in the server or your application is likely throwing an exception, so no matter what you set the response code to be before this, CherryPy will send a 500 back.
You can look into whatever tools CherryPy includes for debugging or logging (I'm not familiar with them). You can also set breakpoints into your code and continue stepping into the CherryPy internals until it hits the error case. | 1 | 3 | 0 | In my Python application using mod_wsgi and CherryPy on top of Apache, my response code gets changed from a 403 to a 500. I am explicitly setting it to 403.
i.e.
cherrypy.response.status = 403
I do not understand where and why the response code that the client receives is 500. Does anyone have any experience with this problem? | CherryPy changes my response code | 0.197375 | 0 | 1 | 1,737 |
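A hedged sketch: raising CherryPy's HTTPError sets the status and body together, so a later exception cannot silently turn the intended 403 into a 500:

    import cherrypy

    class App(object):
        @cherrypy.expose
        def secret(self):
            raise cherrypy.HTTPError(403, 'Forbidden')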
2,106,823 | 2010-01-21T03:58:00.000 | 11 | 0 | 0 | 0 | python,python-3.x,django,django-models,django-admin | 2,106,836 | 7 | true | 1 | 0 | An easy way is to use the setting's name as the primary key in the settings table. There can't be more than one record with the same primary key, so that will allow both Django and the database to guarantee integrity. | 1 | 7 | 0 | I want to use a model to save the system settings for a Django app, so I want to limit the model to only one record; how do I enforce that limit? | Limit a single record in model for django app? | 1.2 | 0 | 0 | 9,457 |
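A minimal sketch of that idea (the model and field names are assumptions):

    from django.db import models

    class SystemSetting(models.Model):
        name = models.CharField(max_length=64, primary_key=True)   # one row per name
        value = models.TextField()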
2,107,682 | 2010-01-21T07:58:00.000 | 1 | 0 | 1 | 0 | python,naming-conventions,global-variables | 2,111,674 | 6 | true | 0 | 0 | I'd call it env. There's little risk that someone will confuse it with os.environ (especially if you organize your code so that you can call it myapp.environ).
I'd also make everything exposed by myapp.environ a property of a class, so that I can put breakpoints in the setter when the day comes that I need to. | 3 | 1 | 0 | I'm writing an application in Python, and I've got a number of universal variables (such as the reference to the main window, the user settings, and the list of active items in the UI) which have to be accessible from all parts of the program1. I only just realized I've named the module globals.py and I'm importing the object which contains those variables with a from globals import globals statement at the top of my files.
Obviously, this works, but I'm a little leery about naming my global object the same as the Python builtin. Unfortunately, I can't think of a much better naming convention for it. global and all are also Python builtins, universal seems imprecise, state isn't really the right idea. I'm leaning towards static or env, although both have a specific meaning in computer terms which suggests a different concept.
So, what (in Python) would you call the module which contains variables global to all your other modules?
1 I realize I could pass these (or the single object containing them) as a variable into every other function I call. This ends up being infeasible, not just because it makes the startup code and function signatures really ugly. | What should I name my global module in Python? | 1.2 | 0 | 0 | 495 |
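A hedged sketch of the property idea from the answer, so a breakpoint can later go in the setter (all names are illustrative):

    class _Env(object):
        def __init__(self):
            self._settings = {}

        @property
        def settings(self):
            return self._settings

        @settings.setter
        def settings(self, value):
            self._settings = value    # a natural spot for a future breakpoint

    env = _Env()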
2,107,682 | 2010-01-21T07:58:00.000 | 0 | 0 | 1 | 0 | python,naming-conventions,global-variables | 2,107,744 | 6 | false | 0 | 0 | top? top_level? | 3 | 1 | 0 | I'm writing an application in Python, and I've got a number of universal variables (such as the reference to the main window, the user settings, and the list of active items in the UI) which have to be accessible from all parts of the program1. I only just realized I've named the module globals.py and I'm importing the object which contains those variables with a from globals import globals statement at the top of my files.
Obviously, this works, but I'm a little leery about naming my global object the same as the Python builtin. Unfortunately, I can't think of a much better naming convention for it. global and all are also Python builtins, universal seems imprecise, state isn't really the right idea. I'm leaning towards static or env, although both have a specific meaning in computer terms which suggests a different concept.
So, what (in Python) would you call the module which contains variables global to all your other modules?
1 I realize I could pass these (or the single object containing them) as a variable into every other function I call. This ends up being infeasible, not just because it makes the startup code and function signatures really ugly. | What should I name my global module in Python? | 0 | 0 | 0 | 495 |
2,107,682 | 2010-01-21T07:58:00.000 | 0 | 0 | 1 | 0 | python,naming-conventions,global-variables | 2,107,730 | 6 | false | 0 | 0 | global is a keyword, not a built-in. 'globals' is not a keyword, but is a built-in function. It can be assigned to, but is bad practice. Code checkers like pylint and pychecker can catch these accidental assignments. How about config? | 3 | 1 | 0 | I'm writing an application in Python, and I've got a number of universal variables (such as the reference to the main window, the user settings, and the list of active items in the UI) which have to be accessible from all parts of the program1. I only just realized I've named the module globals.py and I'm importing the object which contains those variables with a from globals import globals statement at the top of my files.
Obviously, this works, but I'm a little leery about naming my global object the same as the Python builtin. Unfortunately, I can't think of a much better naming convention for it. global and all are also Python builtins, universal seems imprecise, state isn't really the right idea. I'm leaning towards static or env, although both have a specific meaning in computer terms which suggests a different concept.
So, what (in Python) would you call the module which contains variables global to all your other modules?
1 I realize I could pass these (or the single object containing them) as a variable into every other function I call. This ends up being infeasible, not just because it makes the startup code and function signatures really ugly. | What should I name my global module in Python? | 0 | 0 | 0 | 495 |
2,108,105 | 2010-01-21T09:30:00.000 | 0 | 0 | 1 | 0 | python,linux,installation,virtualenv | 2,108,129 | 3 | false | 1 | 0 | You can, but you don't really need 'version' control for that. You need to set up your environment, and that is a one-time job. After that you'll just use it. Why version control it? | 1 | 1 | 0 | I have created a Python web virtual environment that contains all the Django- and Pylons-related packages. I use a host Ubuntu desktop PC at home, and I have an Ubuntu virtual machine running on a Windows laptop.
Both operating systems are Linux. I will be using the same environment for production, which will be an Ubuntu server.
Is it possible to store my Python virtual environment in version control and use the same files for the Ubuntu desktop, the laptop's Ubuntu VM, and the Ubuntu server in production? | python virtual environment on source control | 0 | 0 | 0 | 789 |
2,108,105 | 2010-01-21T09:30:00.000 | 2 | 0 | 1 | 0 | python,linux,installation,virtualenv | 2,108,226 | 3 | true | 1 | 0 | You might want to look into virtualenv. This will allow you to set up your working environment, 'freeze' the list of packages that are needed to replicate it, and store that list of requirements in version control so that others can check it out and rebuild the environment with a single step. | 2 | 1 | 0 | I have created a python web virtual environment contains all django, pylons related packages. I use the host ubuntu desktop PC at home and I have ubuntu virtual machine running on windows PC laptop.
Both operating systems are Linux. I will be using the same environment for production, which will be an Ubuntu server.
Is it possible to store my Python virtual environment in version control and use the same files for the Ubuntu desktop, the laptop's Ubuntu VM, and the Ubuntu server in production? | python virtual environment on source control | 1.2 | 0 | 0 | 789 |
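A hedged sketch of the freeze workflow the answer describes (shell commands; pip is assumed to be installed):

    pip freeze > requirements.txt       # record the exact package list
    pip install -r requirements.txt     # recreate it on another machine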
2,109,109 | 2010-01-21T12:27:00.000 | 2 | 0 | 1 | 0 | python,python-imaging-library | 2,109,260 | 6 | false | 0 | 0 | Not too surprising you're running out of memory; that image will take over 2 GB of memory, and depending on the system you're using, your OS might not be able to allocate enough virtual memory to Python to run it, regardless of your actual RAM.
You are definitely going to need to write it out incrementally. If you're using a raw format you could probably do this per row of images, if they are all of the same dimensions. Then you could concatenate the files; otherwise you'd have to be a bit more careful with how you encode the data. | 3 | 13 | 0 | I'm trying to create a very large image (25000x25000) by pasting together many smaller images. Upon calling Image.new() with such large dimensions, Python runs out of memory and I get a MemoryError.
Is there a way to write out an image like this incrementally, without having the whole thing resident in RAM?
EDIT:
Using ImageMagick's montage command, it seems possible to create arbitrarily sized images. It looks like it's not trying to load the final image into RAM (it uses very little memory during the process) but rather streams it out to disk, which is ideal. | Creating very large images using Python Image Library | 0.066568 | 0 | 0 | 6,674 |
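A hedged sketch of the per-row idea with 2010-era PIL, assuming equally sized square tiles named tile_ROW_COL.png (both assumptions):

    import Image    # PIL's old-style import

    TILE = 500      # tile edge in pixels (assumed)
    ACROSS = 50     # 50 * 500 = 25000

    out = open('mosaic.raw', 'wb')
    for row in range(ACROSS):
        strip = Image.new('RGB', (TILE * ACROSS, TILE))   # only one strip in RAM
        for col in range(ACROSS):
            strip.paste(Image.open('tile_%d_%d.png' % (row, col)),
                        (col * TILE, 0))
        out.write(strip.tostring())   # append the strip's raw RGB bytes
    out.close()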
2,109,109 | 2010-01-21T12:27:00.000 | 0 | 0 | 1 | 0 | python,python-imaging-library | 2,109,238 | 6 | false | 0 | 0 | Check if your system runs out of virtual memory when you do this. If it does, try adding more. That way, you offload the entire problem onto the virtual memory subsystem, which might be quicker. | 3 | 13 | 0 | I'm trying to create a very large image (25000x25000) by pasting together many smaller images. Upon calling Image.new() with such large dimensions, python runs out of memory and I get a MemoryError.
Is there a way to write out an image like this incrementally, without having the whole thing resident in RAM?
EDIT:
Using ImageMagick's montage command, it seems possible to create arbitrarily sized images. It looks like it's not trying to load the final image into RAM (it uses very little memory during the process) but rather streams it out to disk, which is ideal. | Creating very large images using Python Image Library | 0 | 0 | 0 | 6,674 |
2,109,109 | 2010-01-21T12:27:00.000 | -1 | 0 | 1 | 0 | python,python-imaging-library | 2,121,558 | 6 | false | 0 | 0 | Use numpy.memmap and the png module. | 3 | 13 | 0 | I'm trying to create a very large image (25000x25000) by pasting together many smaller images. Upon calling Image.new() with such large dimensions, python runs out of memory and I get a MemoryError.
Is there a way to write out an image like this incrementally, without having the whole thing resident in RAM?
EDIT:
Using ImageMagick's montage command, it seems possible to create arbitrarily sized images. It looks like it's not trying to load the final image into RAM (it uses very little memory during the process) but rather streams it out to disk, which is ideal. | Creating very large images using Python Image Library | -0.033321 | 0 | 0 | 6,674 |
2,110,843 | 2010-01-21T16:22:00.000 | 1 | 0 | 0 | 0 | python,algorithm,indexing,binary-tree | 2,111,067 | 5 | false | 0 | 0 | If the data is already organized in fields, it doesn't sound like a text searching/indexing problem. It sounds like tabular data that would be well-served by a database.
Script the file data into a database, index as you see fit, and query the data in any complex way the database supports.
That is unless you're looking for a cool learning project. Then, by all means, come up with an interesting file indexing scheme. | 3 | 4 | 1 | Background
I have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include RDBMS, NoSQL stuff, using the grep/awk and friends, etc.).
Proposal
In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to do this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this.
I realize this is a little ill-formed, and solutions are welcome.
Additional Details
files long, not wide
millions of lines per hour, spread over 100 files per hour
tab-separated, not many columns (~10)
fields are short (say < 50 chars per field)
queries are on fields, combinations of fields, and can be historical
Drawbacks to various solutions:
(All of these are based on my observations and tests, but I'm open to correction)
BDB
has problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)
single writer (if it's possible to get around this, I want to see code!)
hard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over).
since it only stores strings, there is a serialize / deserialize step
RDBMSes
Wins:
flat table model is excellent for querying, indexing
Losses:
In my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue is that the RDBMSes I know (SQLite, Postgres) support either batch loading (where indexing is slow at the end) or row-by-row loading (which is slow). Maybe I need more performance tuning. | File indexing (using Binary trees?) in Python | 0.039979 | 1 | 0 | 2,801 |
2,110,843 | 2010-01-21T16:22:00.000 | 1 | 0 | 0 | 0 | python,algorithm,indexing,binary-tree | 2,110,912 | 5 | false | 0 | 0 | The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time.
To reduce the time spent waiting for I/O, your best bet is compression.
Create a huge ZIP archive of all of your files. One open, fewer reads. You'll spend more CPU time. I/O time, however, will dominate your processing, so reduce I/O time by zipping everything. | 3 | 4 | 1 | Background
I have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include RDBMS, NoSQL stuff, using the grep/awk and friends, etc.).
Proposal
In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to do this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this.
I realize this is a little ill-formed, and solutions are welcome.
Additional Details
files long, not wide
millions of lines per hour, spread over 100 files per hour
tab-separated, not many columns (~10)
fields are short (say < 50 chars per field)
queries are on fields, combinations of fields, and can be historical
Drawbacks to various solutions:
(All of these are based on my observations and tests, but I'm open to correction)
BDB
has problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)
single writer (if it's possible to get around this, I want to see code!)
hard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over).
since it only stores strings, there is a serialize / deserialize step
RDBMSes
Wins:
flat table model is excellent for querying, indexing
Losses:
In my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue is that the RDBMSes I know (SQLite, Postgres) support either batch loading (where indexing is slow at the end) or row-by-row loading (which is slow). Maybe I need more performance tuning. | File indexing (using Binary trees?) in Python | 0.039979 | 1 | 0 | 2,801 |
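A hedged sketch of reading the tab-delimited rows straight out of one archive, per the ZIP answer above (file names are assumptions; ZipFile.open needs Python 2.6+):

    import zipfile

    archive = zipfile.ZipFile('all_data.zip')
    for name in archive.namelist():
        for line in archive.open(name):              # decompressed as a stream
            fields = line.rstrip('\n').split('\t')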
2,110,843 | 2010-01-21T16:22:00.000 | 1 | 0 | 0 | 0 | python,algorithm,indexing,binary-tree | 12,805,622 | 5 | false | 0 | 0 | sqlite3 is fast, small, part of Python (so nothing to install) and provides indexing of columns. It writes to files, so you wouldn't need to install a database system. | 3 | 4 | 1 | Background
I have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include RDBMS, NoSQL stuff, using the grep/awk and friends, etc.).
Proposal
In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to do this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this.
I realize this is a little ill-formed, and solutions are welcome.
Additional Details
files long, not wide
millions of lines per hour, spread over 100 files per hour
tab-separated, not many columns (~10)
fields are short (say < 50 chars per field)
queries are on fields, combinations of fields, and can be historical
Drawbacks to various solutions:
(All of these are based on my observations and tests, but I'm open to correction)
BDB
has problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)
single writer (if it's possible to get around this, I want to see code!)
hard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over).
since it only stores strings, there is a serialize / deserialize step
RDBMSes
Wins:
flat table model is excellent for querying, indexing
Losses:
In my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue is that the RDBMSes I know (SQLite, Postgres) support either batch loading (where indexing is slow at the end) or row-by-row loading (which is slow). Maybe I need more performance tuning. | File indexing (using Binary trees?) in Python | 0.039979 | 1 | 0 | 2,801 |
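A minimal sqlite3 sketch of the batch-load-then-index route from the answer above (the schema and names are assumptions):

    import sqlite3

    parsed_rows = [('00:00', 'a', 'b')]   # stand-in for the parsed file lines
    conn = sqlite3.connect('index.db')
    conn.execute('CREATE TABLE rows (ts TEXT, f1 TEXT, f2 TEXT)')
    conn.executemany('INSERT INTO rows VALUES (?, ?, ?)', parsed_rows)
    conn.execute('CREATE INDEX idx_ts ON rows (ts)')  # index after the bulk load
    conn.commit()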
2,112,274 | 2010-01-21T19:38:00.000 | 0 | 1 | 1 | 0 | php,python | 2,112,306 | 5 | false | 0 | 0 | Well, my first thought would be to use a web server that uses SSL and set the cookie's secure property to true, meaning that it will only be served over SSL connections.
However, I'm aware that this probably isn't what you're looking for. | 2 | 4 | 0 | I have a string that I would like to encrypt in Python, store it as a cookie, then in a PHP file I'd like to retrieve that cookie, and decrypt it in PHP. How would I go about doing this?
I appreciate the fast responses.
All cookie talk aside, let's just say I want to encrypt a string in Python and then decrypt that string in PHP.
Are there any examples you can point me to? | How to encrypt a string in Python and decrypt that same string in PHP? | 0 | 0 | 0 | 7,880 |
2,112,274 | 2010-01-21T19:38:00.000 | 1 | 1 | 1 | 0 | php,python | 2,112,331 | 5 | false | 0 | 0 | If you're not talking about encryption but encoding to make sure the contents make it through safely regardless of quoting issues, special characters, and line breaks, I think base64 encoding is your best bet. PHP has base64_encode / decode() out of the box, and I'm sure Python has, too.
Note that base64 encoding obviously does nothing to encrypt your data (i.e. to make it unreadable to outsiders), and base64 encoded data grows by 33%. | 2 | 4 | 0 | I have a string that I would like to encrypt in Python, store it as a cookie, then in a PHP file I'd like to retrieve that cookie, and decrypt it in PHP. How would I go about doing this?
I appreciate the fast responses.
All cookie talk aside, let's just say I want to encrypt a string in Python and then decrypt that string in PHP.
Are there any examples you can point me to? | How to encrypt a string in Python and decrypt that same string in PHP? | 0.039979 | 0 | 0 | 7,880 |
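A minimal sketch of the round trip; as the answer stresses, this is transport encoding, not encryption:

    # Python side:
    import base64
    cookie_value = base64.b64encode('some secret-ish string')
    # PHP side would then call: base64_decode($cookie_value)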
2,112,298 | 2010-01-21T19:43:00.000 | 3 | 1 | 1 | 0 | python,performance | 2,115,650 | 5 | false | 0 | 0 | Try refining the algorithms or changing the data structures used. That's usually the best way to get an increase in performance. | 3 | 35 | 0 | I'm a PhD student and use Python to write the code I use for my research. My workflow often consists of making a small change to the code, running the program, seeing whether the results improved, and repeating the process. Because of this, I find myself spending more time waiting for my program to run than I do actually working on it (a common experience, I know). I'm currently using the most recent version of Python 2 on my system, so my question is whether switching to Python 3 is going to give me any speed boost or not. At this point, I don't really have a compelling reason to move to Python 3, so if the execution speeds are similar, I'll probably just stick with 2.x. I know I'm going to have to modify my code a bit to get it working in Python 3, so it's not trivial to just test it on both versions to see which runs faster. I'd need to be reasonably confident I will get a speed improvement before I spend the time updating my code to Python 3. | Python 2.x vs 3.x Speed | 0.119427 | 0 | 0 | 14,827 |
2,112,298 | 2010-01-21T19:43:00.000 | 2 | 1 | 1 | 0 | python,performance | 2,112,852 | 5 | false | 0 | 0 | I have phylogenetics analysis that takes a long time to run, and uses about a half-dozen python scripts as well as other bioinformatics software (muscle, clustal, blast, even R!). I use temp files to save intermediate results and a master script with the subprocess module to glue all the pieces together. It's easy to change the master to run only the modified parts that I want to test. But, if the changes are being made to early steps, and you only know how good it is at the end of the whole process, then this strategy wouldn't help much. | 3 | 35 | 0 | I'm a PhD student and use Python to write the code I use for my research. My workflow often consists of making a small change to the code, running the program, seeing whether the results improved, and repeating the process. Because of this, I find myself spending more time waiting for my program to run than I do actually working on it (a common experience, I know). I'm currently using the most recent version of Python 2 on my system, so my question is whether switching to Python 3 is going to give me any speed boost or not. At this point, I don't really have a compelling reason to move to Python 3, so if the execution speeds are similar, I'll probably just stick with 2.x. I know I'm going to have to modify my code a bit to get it working in Python 3, so it's not trivial to just test it on both versions to see which runs faster. I'd need to be reasonably confident I will get a speed improvement before I spend the time updating my code to Python 3. | Python 2.x vs 3.x Speed | 0.07983 | 0 | 0 | 14,827 |
2,112,298 | 2010-01-21T19:43:00.000 | 2 | 1 | 1 | 0 | python,performance | 2,112,332 | 5 | false | 0 | 0 | I can't answer the root of your question, but if you read anything regarding the sluggish performance of the io module, please disregard it. There were definitely performance issues in Python 3.0, but they were largely resolved in Python 3.1. | 3 | 35 | 0 | I'm a PhD student and use Python to write the code I use for my research. My workflow often consists of making a small change to the code, running the program, seeing whether the results improved, and repeating the process. Because of this, I find myself spending more time waiting for my program to run than I do actually working on it (a common experience, I know). I'm currently using the most recent version of Python 2 on my system, so my question is whether switching to Python 3 is going to give me any speed boost or not. At this point, I don't really have a compelling reason to move to Python 3, so if the execution speeds are similar, I'll probably just stick with 2.x. I know I'm going to have to modify my code a bit to get it working in Python 3, so it's not trivial to just test it on both versions to see which runs faster. I'd need to be reasonably confident I will get a speed improvement before I spend the time updating my code to Python 3. | Python 2.x vs 3.x Speed | 0.07983 | 0 | 0 | 14,827
2,112,525 | 2010-01-21T20:15:00.000 | 5 | 0 | 1 | 0 | asp.net,python,asp.net-mvc,apache,iis | 3,100,230 | 2 | false | 1 | 0 | We've been running django on IIS for a couple of years using PyISAPIe. It's a fairly big site, about 150,000 users. We're moving to linux/apache though, partly cos PyISAPIe isn't great.
Case in point: WebKit browsers don't work well with it; it seems to mess up the chunking. That's tolerable for us, as we are allowed to limit our users to FF/IE7+, but it annoys me on a Mac, as I much prefer Safari to FF. | 1 | 6 | 0 | Is it possible to run Python & Django on IIS?
I am going to be a Lead Developer in some web design company and right now they are using classic ASP and ASP.NET.
As far as I can see, ASP.NET MVC is not mature. Should I recommend the Python & Django stack?
If it's not possible to run Python on IIS, what do you think I should do? Stick with ASP.NET, which I don't know? I don't know Python well either, but I'm more comfortable with it.
Can I run IIS and Apache in parallel? | Running Python & Django on IIS | 0.462117 | 0 | 0 | 5,212 |
2,113,352 | 2010-01-21T22:11:00.000 | 1 | 0 | 0 | 1 | python,multithreading,apache,pylons,fork | 2,113,376 | 3 | true | 1 | 0 | Perhaps you could keep the relevant counters and other statistics in memcached, accessed by all the Apache processes? | 1 | 3 | 0 | I have an application running under apache that I want to keep "in the moment" statistics on. I want to have the application tell me things like:
requests per second, broken down by types of request
latency to make requests to various backend services via thrift (broken down by service and server)
number of errors being served per second
etc.
I want to do this without any external dependencies. However, I'm running into issues sharing statistics between apache processes. Obviously, I can't just use global memory. What is a good pattern for this sort of issue?
The application is written in python using pylons, though I suspect this is more of a "communication across processes" design question than something that's python specific. | How can I keep on-the-fly application-level statistics in an application running under Apache? | 1.2 | 0 | 0 | 333 |
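A hedged sketch of the memcached idea with the python-memcached client (the key name is an assumption; add() seeds the counter, incr() is atomic across processes):

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])
    mc.add('requests_per_sec', '0')   # no-op if the key already exists
    mc.incr('requests_per_sec')       # safe from any Apache worker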
2,113,427 | 2010-01-21T22:24:00.000 | 78 | 1 | 0 | 1 | python,file,permissions,directory,operating-system | 2,113,457 | 10 | false | 0 | 0 | It may seem strange to suggest this, but a common Python idiom is
It's easier to ask for forgiveness
than for permission
Following that idiom, one might say:
Try writing to the directory in question, and catch the error if you don't have the permission to do so. | 1 | 120 | 0 | What would be the best way in Python to determine whether a directory is writeable for the user executing the script? Since this will likely involve using the os module I should mention I'm running it under a *nix environment. | Determining Whether a Directory is Writeable | 1 | 0 | 0 | 85,042 |
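A hedged sketch of that idiom using a throwaway probe file; os.access(path, os.W_OK) is the ask-permission alternative:

    import os

    def is_writable(directory):
        probe = os.path.join(directory, '.write_probe')
        try:
            handle = open(probe, 'w')
        except IOError:
            return False
        handle.close()
        os.remove(probe)
        return True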
2,114,615 | 2010-01-22T02:40:00.000 | 1 | 0 | 1 | 1 | python,tcl,tkinter,header-files,tk | 2,114,635 | 2 | false | 0 | 0 | The windows installers don't include ANY source files. Simply because that's how windows apps work. It can be compiled on one computer and it will work on all. So windows versions of things like python and php come precompiled with all options enabled.
If you want the source files you have to download a source tarball or something. | 1 | 0 | 0 | The MSI installers downloadable from python.org do not include Tcl/Tk header (not source) files (which are required to compile some packages like matplotlib). Does anyone know the rationale behind not including them? | Why does not the Python MSI installers come with Tcl/Tk header files? | 0.099668 | 0 | 0 | 511 |
2,114,627 | 2010-01-22T02:44:00.000 | 1 | 1 | 0 | 0 | python,ironpython,jython | 2,119,710 | 2 | false | 0 | 1 | If you're wrapping an existing native library, the ctypes is absolutely the way to go.
If you're trying to speed up the hot spots in a Python extension, then making a custom extension for each interpreter (and a pure-Python fallback) is tractable because the bulk of the code is pure Python that can be shared, but undesirable and labour-intensive, as you said. You could use ctypes in this case as well. | 2 | 4 | 0 | In the 'old days' when there was just cpython, most extensions were written in c (as platform independent as possible) and compiled into pyd's (think PyCrypto for example). Now there is Jython, IronPython and PyPy and the pyd’s do not work with any of them (Ironclad aside). It seems they all support ctypes and that the best approach MIGHT be to create a platform independent dll or shared library and then use ctypes to interface to it.
But I think this approach will be a bit slower than the old fashion pyd approach. You could also program a pyd for cpython, a similar c# dll for IronPython and a java class or jar for Jython (I'm not sure about PyPy. But while this approach will appeal to platform purists it is very labor intensive. So what is the best route to take today? | Python extensions that can be used in all varieties of python (jython / IronPython / etc.) | 0.099668 | 0 | 0 | 501 |
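A minimal ctypes sketch of the shared-library route discussed above; the library name and the convolve symbol are assumptions for illustration:

    import ctypes

    lib = ctypes.CDLL('libfastmath.so')
    lib.convolve.restype = ctypes.c_double
    lib.convolve.argtypes = [ctypes.c_double, ctypes.c_double]
    result = lib.convolve(1.5, 2.5)   # one wrapper for any ctypes-capable interpreter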
2,114,627 | 2010-01-22T02:44:00.000 | 2 | 1 | 0 | 0 | python,ironpython,jython | 2,116,557 | 2 | true | 0 | 1 | Currently, it seems the ctypes is indeed the best approach. It works today, and it's so convenient that it's gonna conquer (most of) the world.
For performance-critical APIs (such as numpy), ctypes is indeed problematic. The cleanest approach would probably be to port Cython to produce native IronPython / Jython / PyPy extensions.
I recall that PyPy had plans to compile ctypes code to efficient wrappers, but as far as I google, there is nothing like that yet... | 2 | 4 | 0 | In the 'old days' when there was just cpython, most extensions were written in c (as platform independent as possible) and compiled into pyd's (think PyCrypto for example). Now there is Jython, IronPython and PyPy and the pyd’s do not work with any of them (Ironclad aside). It seems they all support ctypes and that the best approach MIGHT be to create a platform independent dll or shared library and then use ctypes to interface to it.
But I think this approach will be a bit slower than the old fashion pyd approach. You could also program a pyd for cpython, a similar c# dll for IronPython and a java class or jar for Jython (I'm not sure about PyPy. But while this approach will appeal to platform purists it is very labor intensive. So what is the best route to take today? | Python extensions that can be used in all varieties of python (jython / IronPython / etc.) | 1.2 | 0 | 0 | 501 |
2,114,847 | 2010-01-22T04:01:00.000 | 1 | 0 | 0 | 1 | python,webserver,twisted,tornado | 2,114,986 | 3 | true | 0 | 0 | I'd recommend against building your own web server and handling raw socket calls to build web applications; it makes much more sense to just write your web services as wsgi applications and use an existing web server, whether it's something like tornado or apache with mod_wsgi. | 1 | 2 | 0 | I have been working with python for a while now. Recently I got into Sockets with Twisted which was good for learning Telnet, SSH, and Message Passing. I wanted to take an idea and implement it in a web fashion. A week of searching and all I can really do is create a resource that handles GET and POST all to itself. And this I am told is bad practice.
So the questions I have after one week:
* Are other options like Tornado and Standard Python Sockets a better (or more popular) approach?
* Should one really use separate resources in Twisted GET and POST operations?
* What is a good resource to start in this area of Python Development?
My background with languages are C, Java, HTML/DHTML/XHTML/XML and my main systems (even home) are Linux. | Python approach to Web Services and/or handeling GET and POST | 1.2 | 0 | 1 | 474 |
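A minimal WSGI sketch of the answer's suggestion; any compliant server (Apache with mod_wsgi, Tornado's WSGI container, etc.) can host it:

    def application(environ, start_response):
        if environ['REQUEST_METHOD'] == 'POST':
            length = int(environ.get('CONTENT_LENGTH') or 0)
            body = environ['wsgi.input'].read(length)   # the POSTed payload
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello\n']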
2,116,310 | 2010-01-22T09:41:00.000 | 1 | 1 | 1 | 0 | f#,ironpython,ironruby | 2,116,333 | 1 | true | 0 | 0 | The answer is "depends".
F# is great if you need functional programming; it is used extensively by academics who need its pure computing power to get something done.
IronPython and IronRuby are great for creating applications that run on the CLR, because they give you the .NET goodness with the speed of writing Python or Ruby. I don't think any of these is preferable to another without a proper context. | 1 | 0 | 0 | Well aware that the DLR is here!! I have recently been reading up on all of these and was wondering if there were any specific benefits of using one language over another?
For example, performance benefits and available functionality through standard libraries!! | What are the benefits of using IronPython over IronRuby or F#? | 1.2 | 0 | 0 | 638 |
2,119,067 | 2010-01-22T17:16:00.000 | 1 | 0 | 0 | 0 | python,wxpython | 2,120,967 | 1 | true | 0 | 1 | I don't have a good understanding of your application, but trying to force wxWizard to suit your needs sounds like a bad idea.
I suggest checking out the Demos available from the wxPython website. Go through each demo and I bet you'll find one that suits your needs.
I've personally never used wxWizard as I find it too cumbersome. Instead, I create a sequence of dialogs that do what I need. | 1 | 3 | 0 | I am attempting to create my first OS-level GUI using wxPython. I have the book wxPython in Action and have looked at the code demos. I have no experience with event-driven programming (aside from some Javascript), sizers, and all of the typical GUI elements. The book is organized a little strangely and assumes I know far more about OS GUI programming than I actually do. I'm fairly recent to object-oriented programming, as well. I'm aware that I am clearly out of my depth.
My application, on the GUI side, is simple: mostly a set of reminder screens ("Turn on the scanner," "Turn on the printer," etc) and background actions in Python either in the filesystem or from hitting a web service, but it is just complex enough that the Wizard class does not quite seem to cover it. I have to change the names on the "Back" and "Next" buttons, disable them at times, and so forth.
What is the standard process for an application such as mine?
1) Create a single wxFrame, then put all of my wxPanels inside of it, hiding all but one, then performing a sequence of hides and shows as the "Next" button (or the current equivalent) are triggered?
2) Create multiple wxFrames, with one wxPanel in each, then switch between them?
3) Some non-obvious fashion of changing the names of the buttons in wxWizard and disabling them?
4) Something I have not anticipated in the three categories above. | In wxPython, What is the Standard Process of Making an Application Slightly More Complex Than a Wizard? | 1.2 | 0 | 0 | 290 |
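A hedged sketch of option 1, one wx.Frame swapping wx.Panels (class and method names are illustrative):

    import wx

    class Stepper(wx.Frame):
        def __init__(self, panels):
            wx.Frame.__init__(self, None, title='Setup')
            self.panels, self.current = panels, 0
            for panel in panels[1:]:
                panel.Hide()

        def advance(self):
            self.panels[self.current].Hide()
            self.current += 1
            self.panels[self.current].Show()
            self.Layout()   # re-run the sizers after swapping panels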
2,119,153 | 2010-01-22T17:28:00.000 | 0 | 0 | 0 | 0 | python,mysql,null | 2,119,402 | 3 | false | 0 | 0 | You cannot query data you do not have.
You (as a thinking person) can claim that the 00:20 data is missing; but there's no easy way to define "missing" in some more formal SQL sense.
The best you can do is create a table with all of the expected times.
Then you can do an outer join between expected times (including a 0 for 00:20) and actual times (missing the 00:20 sample) and you'll get kind of result you're expecting. | 2 | 1 | 0 | Table structure - Data present for 5 min. slots -
data_point | point_date
12 | 00:00
14 | 00:05
23 | 00:10
10 | 00:15
43 | 00:25
10 | 00:40
When I run the query for say 30 mins. and if data is present I'll get 6 rows (one row for each 5 min. stamp). Simple Query -
select data_point
from some_table
where point_date >= start_date
AND point_date < end_date
order by point_date
Now when I don't have an entry for a particular time slot (e.g. time slot 00:20 is missing), I want the "data_point" to be returned as 0
The REPLACE, IF, IFNULL, ISNULL don't work when there are no rows returned.
I thought Union with a default value would work, but it failed too or maybe I didn't use it correctly.
Is there a way to get this done via SQL only?
Note : Python 2.6 & mysql version 5.1 | python : mysql : Return 0 when no rows found | 0 | 1 | 0 | 1,466 |
2,119,153 | 2010-01-22T17:28:00.000 | 0 | 0 | 0 | 0 | python,mysql,null | 2,119,384 | 3 | false | 0 | 0 | I see no easy way to create non-existing records out of thin air, but you could create yourself a point_dates table containing all the timestamps you're interested in, and left join it on your data:
select pd.slot, IFNULL(data_point, 0)
from point_dates pd
left join some_table st on st.point_date=pd.slot
where pd.slot >= start_date
AND pd.slot < end_date
order by point_date | 2 | 1 | 0 | Table structure - Data present for 5 min. slots -
data_point | point_date
12 | 00:00
14 | 00:05
23 | 00:10
10 | 00:15
43 | 00:25
10 | 00:40
When I run the query for say 30 mins. and if data is present I'll get 6 rows (one row for each 5 min. stamp). Simple Query -
select data_point
from some_table
where point_date >= start_date
AND point_date < end_date
order by point_date
Now when I don't have an entry for a particular time slot (e.g. time slot 00:20 is missing), I want the "data_point" to be returned as 0
The REPLACE, IF, IFNULL, ISNULL don't work when there are no rows returned.
I thought Union with a default value would work, but it failed too or maybe I didn't use it correctly.
Is there a way to get this done via SQL only?
Note : Python 2.6 & mysql version 5.1 | python : mysql : Return 0 when no rows found | 0 | 1 | 0 | 1,466 |
2,119,217 | 2010-01-22T17:39:00.000 | 3 | 0 | 0 | 1 | python | 2,648,514 | 1 | true | 0 | 0 | socat command is solution.
First you need to install socat:
pacman -S socat
Just enter this in a console, but first you should be logged in as root:
socat PTY,link=/dev/ttyVirtualS0,echo=0 PTY,link=/dev/ttyVirtualS1,echo=0
and now we have two virtual serial ports which are virtually connected:
/dev/ttyVirtualS0 <-------> /dev/ttyVirtualS1 | 1 | 2 | 0 | I am using Arch Linux and I need to create a virtual serial port on it. I tried everything, but it doesn't seem to work. All I want is to connect that virtual port to another virtual port over TCP, and after that to use it in my Python application to communicate with a Python application on the other side. Is that possible? Please help me.
Thanx | virtual serial port on Arch linux | 1.2 | 0 | 0 | 2,709 |
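A hedged sketch of talking across the virtual pair from Python, assuming the pyserial package is installed:

    import serial

    a = serial.Serial('/dev/ttyVirtualS0', 9600, timeout=1)
    b = serial.Serial('/dev/ttyVirtualS1', 9600, timeout=1)
    a.write('ping')
    print b.read(4)    # -> 'ping'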
2,119,472 | 2010-01-22T18:20:00.000 | 3 | 0 | 1 | 0 | python,timedelta | 63,045,890 | 12 | false | 0 | 0 | I found the easiest way is using str(timedelta). It will return a sting formatted like 3 days, 21:06:40.001000, and you can parse hours and minutes using simple string operations or regular expression. | 2 | 331 | 0 | I've got a timedelta. I want the days, hours and minutes from that - either as a tuple or a dictionary... I'm not fussed.
I must have done this a dozen times in a dozen languages over the years but Python usually has a simple answer to everything so I thought I'd ask here before busting out some nauseatingly simple (yet verbose) mathematics.
Mr Fooz raises a good point.
I'm dealing with "listings" (a bit like ebay listings) where each one has a duration. I'm trying to find the time left by doing when_added + duration - now
Am I right in saying that wouldn't account for DST? If not, what's the simplest way to add/subtract an hour? | Convert a timedelta to days, hours and minutes | 0.049958 | 0 | 0 | 512,001 |
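A minimal sketch of the str() route from the answer above (the format shown is CPython's):

    from datetime import timedelta

    td = timedelta(days=3, minutes=20, seconds=40)
    text = str(td)    # '3 days, 0:20:40'
    # split on ', ' and ':' (or use a regular expression) to pull the parts out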
2,119,472 | 2010-01-22T18:20:00.000 | -5 | 0 | 1 | 0 | python,timedelta | 2,119,499 | 12 | false | 0 | 0 | timedeltas have a days and seconds attribute .. you can convert them yourself with ease. | 2 | 331 | 0 | I've got a timedelta. I want the days, hours and minutes from that - either as a tuple or a dictionary... I'm not fussed.
I must have done this a dozen times in a dozen languages over the years but Python usually has a simple answer to everything so I thought I'd ask here before busting out some nauseatingly simple (yet verbose) mathematics.
Mr Fooz raises a good point.
I'm dealing with "listings" (a bit like ebay listings) where each one has a duration. I'm trying to find the time left by doing when_added + duration - now
Am I right in saying that wouldn't account for DST? If not, what's the simplest way to add/subtract an hour? | Convert a timedelta to days, hours and minutes | -1 | 0 | 0 | 512,001 |
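A minimal sketch using those attributes directly:

    from datetime import timedelta

    td = timedelta(days=2, hours=5, minutes=30)
    days = td.days                           # 2
    hours, rem = divmod(td.seconds, 3600)    # 5, 1800
    minutes, seconds = divmod(rem, 60)       # 30, 0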
2,121,617 | 2010-01-23T01:06:00.000 | 1 | 0 | 1 | 0 | java,c++,python,perl,multithreading | 2,121,635 | 10 | false | 0 | 0 | Threads don't speed up applications. Algorithms speed up applications. Threads can be used in algorithms, if appropriate. | 5 | 12 | 0 | I have been considering adding threaded procedures to my application to speed up execution, but the problem is that I honestly have no idea how to use threads, or what is considered "thread safe". For example, how does a game engine utilize threads in its rendering processes, or in what contexts would threads only be considered nothing but a hindrance? Can someone point the way to some resources to help me learn more or explain here? | Can Someone Explain Threads to Me? | 0.019997 | 0 | 0 | 3,288 |
2,121,617 | 2010-01-23T01:06:00.000 | 1 | 0 | 1 | 0 | java,c++,python,perl,multithreading | 2,121,649 | 10 | false | 0 | 0 | Threads are simply a way of executing multiple things simultaneously (assuming that the platform on which they are being run is capable of parallel execution). Thread safety is simply (well, nothing with threads is truly simple) making sure that the threads don't affect each other in harmful ways.
In general, you are unlikely to see systems use multiple threads for rendering graphics on the screen due to the multiple performance implications and complexity issues that may arise from that. Other tasks related to state management (or AI) can potentially be moved to separate threads however. | 5 | 12 | 0 | I have been considering adding threaded procedures to my application to speed up execution, but the problem is that I honestly have no idea how to use threads, or what is considered "thread safe". For example, how does a game engine utilize threads in its rendering processes, or in what contexts would threads only be considered nothing but a hindrance? Can someone point the way to some resources to help me learn more or explain here? | Can Someone Explain Threads to Me? | 0.019997 | 0 | 0 | 3,288 |
2,121,617 | 2010-01-23T01:06:00.000 | 1 | 0 | 1 | 0 | java,c++,python,perl,multithreading | 2,121,639 | 10 | false | 0 | 0 | Well someone will probably answer this better, but threads are for the purpose of having background processing that won't freeze the user interface. You don't want to stop accepting keyboard input or mouse input, and tell the user, "just a moment, I want to finish this computation, it will only be a few more seconds." (And yet its amazing how many times commercial programs do this.
As far as thread safe, it means a function that does not have some internal saved state. If it did you couldn't have multiple threads using it simutaneously.
As far as thread programming you just have to start doing it, and then you'll start encountering various issues unique to thread programming, for example simultaneuous access to data, in which case you have to decide to use some syncronization method such as critical sections or mutexes or something else, each one having slightly different nuances in their behavior.
As far as the differences between processes and threads (which you didn't ask) processes are an OS level entity, whereas threads are associated with a program. In certain instances your program may want to create a process rather than a thread. | 5 | 12 | 0 | I have been considering adding threaded procedures to my application to speed up execution, but the problem is that I honestly have no idea how to use threads, or what is considered "thread safe". For example, how does a game engine utilize threads in its rendering processes, or in what contexts would threads only be considered nothing but a hindrance? Can someone point the way to some resources to help me learn more or explain here? | Can Someone Explain Threads to Me? | 0.019997 | 0 | 0 | 3,288 |
2,121,617 | 2010-01-23T01:06:00.000 | 32 | 0 | 1 | 0 | java,c++,python,perl,multithreading | 2,121,638 | 10 | true | 0 | 0 | This is a very broad topic. But here are the things I would want to know if I knew nothing about threads:
They are units of execution within a single process that happen "in parallel" - what this means is that the current unit of execution in the processor switches rapidly. This can be achieved via different means. Switching is called "context switching", and there is some overhead associated with this.
They can share memory! This is where problems can occur. I talk about this more in depth in a later bullet point.
The benefit of parallelizing your application is that logic that uses different parts of the machine can happen simultaneously. That is, if part of your process is I/O-bound and part of it is CPU-bound, the I/O intensive operation doesn't have to wait until the CPU-intensive operation is done. Some languages also allow you to run threads at the same time if you have a multicore processor (and thus parallelize CPU-intensive operations as well), though this is not always the case.
Thread-safe means that there are no race conditions, which is the term used for problems that occur when the execution of your process depends on timing (something you don't want to rely on). For example, if you have threads A and B both incrementing a shared counter C, you could see the case where A reads the value of C, then B reads the value of C, then A overwrites C with C+1, then B overwrites C with C+1. Notice that C only actually increments once!
A couple of common ways avoid race conditions include synchronization, which excludes mutual access to shared state, or just not having any shared state at all. But this is just the tip of the iceberg - thread-safety is quite a broad topic.
I hope that helps! Understand that this was a very quick introduction to something that requires a good bit of learning. I would recommend finding a resource about multithreading in your preferred language, whatever that happens to be, and giving it a thorough read. | 5 | 12 | 0 | I have been considering adding threaded procedures to my application to speed up execution, but the problem is that I honestly have no idea how to use threads, or what is considered "thread safe". For example, how does a game engine utilize threads in its rendering processes, or in what contexts would threads only be considered nothing but a hindrance? Can someone point the way to some resources to help me learn more or explain here? | Can Someone Explain Threads to Me? | 1.2 | 0 | 0 | 3,288 |
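A small demonstration of the shared-counter race described in the accepted answer, together with the lock that fixes it (a minimal illustration, not tied to any particular application):
# Two threads increment a shared counter; without the lock the
# read-modify-write interleaves and the total can come up short.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:  # remove this to observe the race
            counter += 1

threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 200000 with the lock; unpredictable without it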
2,121,617 | 2010-01-23T01:06:00.000 | 2 | 0 | 1 | 0 | java,c++,python,perl,multithreading | 2,121,691 | 10 | false | 0 | 0 | There are four things you should know about threads.
Threads are like processes, but they share memory.
Threads often have hardware, OS, and language support, which might make them better than processes.
There are lots of fussy little things that threads need to support (like locks and semaphores) so they don't get the memory they share into an inconsistent state. This makes them a little difficult to use.
Locking isn't automatic (in the languages I know), so you have to be very careful with the memory they (implicitly) share. | 5 | 12 | 0 | I have been considering adding threaded procedures to my application to speed up execution, but the problem is that I honestly have no idea how to use threads, or what is considered "thread safe". For example, how does a game engine utilize threads in its rendering processes, or in what contexts would threads only be considered nothing but a hindrance? Can someone point the way to some resources to help me learn more or explain here? | Can Someone Explain Threads to Me? | 0.039979 | 0 | 0 | 3,288 |
2,121,945 | 2010-01-23T03:20:00.000 | 1 | 1 | 0 | 0 | python,urllib2,pycurl | 2,122,198 | 4 | false | 0 | 0 | Use urllib2. It's got very good documentation in python, while pycurl is mostly C documentation. If you hit a wall, switch to mechanize or pycurl. | 2 | 2 | 0 | I have extensive experience with PHP cURL but for the last few months I've been coding primarily in Java, utilizing the HttpClient library.
My new project requires me to use Python, once again putting me at the crossroads of seemingly comparable libraries: pycurl and urllib2.
Putting aside my previous experience with PHP cURL, what is the recommended library in Python? Is there a reason to use one but not the other? Which is the more popular option? | Python: urllib2 or Pycurl? | 0.049958 | 0 | 1 | 4,948 |
2,121,945 | 2010-01-23T03:20:00.000 | 3 | 1 | 0 | 0 | python,urllib2,pycurl | 2,121,967 | 4 | false | 0 | 0 | urllib2 is part of the standard library, pycurl isn't (so it requires a separate step of download/install/package etc). That alone, quite apart from any difference in intrinsic quality, is guaranteed to make urllib2 more popular (and can be a pretty good pragmatical reason to pick it -- convenience!-). | 2 | 2 | 0 | I have extensive experience with PHP cURL but for the last few months I've been coding primarily in Java, utilizing the HttpClient library.
My new project requires me to use Python, once again putting me at the crossroads of seemingly comparable libraries: pycurl and urllib2.
Putting aside my previous experience with PHP cURL, what is the recommended library in Python? Is there a reason to use one but not the other? Which is the more popular option? | Python: urllib2 or Pycurl? | 0.148885 | 0 | 1 | 4,948 |
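For reference, a minimal urllib2 fetch in the Python 2 style of the question; the URL is a placeholder:
# Minimal urllib2 usage (Python 2 era). Swap in your own URL.
import urllib2

response = urllib2.urlopen("http://example.com/")
print response.getcode()   # HTTP status code
print response.read(200)   # first 200 bytes of the body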
2,122,385 | 2010-01-23T06:33:00.000 | 0 | 0 | 0 | 1 | python,terminal | 2,122,421 | 10 | false | 0 | 0 | When I do this in shell scripts on Unix, I tend to just use the clear program. You can use the Python subprocess module to execute it. It will at least get you what you're looking for quickly. | 2 | 57 | 0 | Certain applications like hellanzb have a way of printing to the terminal with the appearance of dynamically refreshing data, kind of like top().
What's the best method in Python for doing this? I have read up on logging and curses, but don't know what to use. I am creating a reimplementation of top. If you have any other suggestions I am open to them as well. | Dynamic terminal printing with python | 0 | 0 | 0 | 88,169
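A sketch of the subprocess approach mentioned in the answer above (Unix-only, since it shells out to the clear program):
# Clears the terminal by invoking the external `clear` program.
import subprocess

subprocess.call(["clear"])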
2,122,385 | 2010-01-23T06:33:00.000 | 0 | 0 | 0 | 1 | python,terminal | 68,317,499 | 10 | false | 0 | 0 | I don't think that including other libraries in this situation is really good practice. So, the solution:
print("\rCurrent: %s\t%s" % (str(<value>), <another_value>), end="") | 2 | 57 | 0 | Certain applications like hellanzb have a way of printing to the terminal with the appearance of dynamically refreshing data, kind of like top().
What's the best method in Python for doing this? I have read up on logging and curses, but don't know what to use. I am creating a reimplementation of top. If you have any other suggestions I am open to them as well. | Dynamic terminal printing with python | 0 | 0 | 0 | 88,169
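A runnable version of the carriage-return trick shown above, counting in place on a single terminal line; the loop values are illustrative:
# "\r" returns the cursor to the start of the line so each print
# overwrites the previous one; flush makes it appear immediately.
import time

for i in range(10):
    print("\rCurrent: %d/10" % (i + 1), end="", flush=True)
    time.sleep(0.2)
print()  # move to a fresh line when done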
2,123,269 | 2010-01-23T13:27:00.000 | 2 | 0 | 1 | 0 | python,multithreading,multiprocess | 2,123,513 | 2 | false | 0 | 0 | The GIL is really only something to care about if you want to do multiprocessing, that is, spread the load over several cores/processors. If that is the case, and it kinda sounds like it from your description, use multiprocessing.
If you just need to do three things "simultaneously", in the sense that you need to wait in the background for things to happen, then threads are just fine. That's what threads are for in the first place. 8-I) | 1 | 4 | 0 | I'm making a Python script that needs to do 3 things simultaneously.
What is a good way to achieve this? Due to what I've heard about the GIL, I'm not so keen on using threads anymore.
Two of the things that the script needs to do will be heavily active; they will have lots of work to do. I also need to have the third thing reporting to the user over a socket when he asks (so it will be like a tiny server) about the status of the other two processes.
Now my question is: what would be a good way to achieve this? I don't want to have three different scripts, and due to the GIL I think I won't get much performance out of threads and will make things worse.
Is there a fork() for Python like in C, so that from my script I can fork two processes that will do their job, and have the main process report to the user? And how can I communicate from the forked processes with the main process?
LE: to be more precise, one thread should get email from an IMAP server and store it in a database, another thread should get messages from the DB that need to be sent and then send them, and the main thread should be a tiny HTTP server that will just accept one URL and show the status of those two threads in JSON format. So are threads OK? Will the work be done simultaneously, or will there be performance issues due to the GIL? | python threading/fork? | 0.197375 | 0 | 0 | 8,102
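A sketch of the multiprocessing suggestion from the answer above; the worker bodies are placeholders for the real IMAP/send work:
# Two workers run as separate processes (real forks on Linux)
# and report status to the parent through a shared Queue.
import multiprocessing
import time

def worker(name, queue):
    for step in range(3):       # placeholder for real work
        time.sleep(0.1)
        queue.put((name, "step %d done" % step))

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(n, queue))
             for n in ("fetcher", "sender")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    while not queue.empty():
        print(queue.get())      # the tiny HTTP server would serve these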
2,123,651 | 2010-01-23T15:34:00.000 | 2 | 1 | 0 | 0 | php,python,twitter | 8,286,513 | 2 | false | 0 | 0 | You can track 400 filter words and 5,000 user IDs via the streaming API.
Filter words can be something like apple, orange, ipad, etc.
And in order to track any user's timeline you need to get the user's Twitter user ID. | 1 | 6 | 0 | I have a big list of Twitter users stored in a database, almost 1000.
I would like to use the Streaming API in order to stream tweets from these users, but I cannot find an appropriate way to do this.
Help would be very much appreciated. | Streaming multiple tweets - from multiple users? - Twitter API | 0.197375 | 0 | 1 | 2,823 |
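A hedged sketch of a follow-based stream using the tweepy library of that era; the credentials and IDs are placeholders, and Twitter's API has changed repeatedly since:
# tweepy (classic API): stream tweets from up to 5,000 user IDs.
import tweepy

class Printer(tweepy.StreamListener):
    def on_status(self, status):
        print(status.author.screen_name, status.text)

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

user_ids = ["12345", "67890"]  # up to 5,000 IDs from your database
stream = tweepy.Stream(auth, Printer())
stream.filter(follow=user_ids)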
2,124,296 | 2010-01-23T18:58:00.000 | 3 | 0 | 0 | 0 | python,gtk,window,pygtk,resizable | 2,124,325 | 2 | true | 0 | 1 | Perhaps you could set_size_request() with the current window size (from window.get_size()) before you call set_resizable()? | 2 | 0 | 0 | I am looking for a way to lock the window. I found window.set_resizable(False), but that resizes the window to its requested size and then locks it. I would like to be able to resize my window, and then lock it into the size I have resized it to. | PyGTK, how can I lock a window so it cannot be resized? | 1.2 | 0 | 0 | 1,153
2,124,296 | 2010-01-23T18:58:00.000 | 0 | 0 | 0 | 0 | python,gtk,window,pygtk,resizable | 15,961,711 | 2 | false | 0 | 1 | You can use this:
window.set_geometry_hints(window, min_width, min_height, max_width, max_height) | 2 | 0 | 0 | I am looking for a way to lock the window. I found window.set_resizable(False), but that resizes the window to its requested size and then locks it. I would like to be able to resize my window, and then lock it into the size I have resized it to. | PyGTK, how can I lock a window so it cannot be resized? | 0 | 0 | 0 | 1,153 |
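A small PyGTK sketch combining both answers: read the current size, then pin the min and max geometry hints to it (illustrative; assumes the old pygtk bindings):
# Locks a PyGTK window at whatever size the user resized it to.
import gtk

def lock_current_size(window):
    width, height = window.get_size()
    window.set_geometry_hints(None,
                              min_width=width, min_height=height,
                              max_width=width, max_height=height)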
2,124,347 | 2010-01-23T19:12:00.000 | 1 | 0 | 1 | 0 | python,permutation | 2,124,356 | 6 | false | 0 | 0 | You may want the itertools.permutations() function. Gotta love that itertools module!
NOTE: New in 2.6 | 1 | 30 | 1 | I have an array of 27 elements, and I don't want to generate all permutations of the array (27!).
I need 5,000 randomly chosen permutations; any tip will be useful... | how to generate permutations of array in python? | 0.033321 | 0 | 0 | 38,205
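For the asker's actual need (5,000 random permutations rather than all 27!), repeatedly shuffling a copy is the usual approach; a minimal sketch:
# Draw 5,000 random permutations of a 27-element array without
# enumerating all 27! of them.
import random

array = list(range(27))
samples = []
for _ in range(5000):
    perm = array[:]          # copy so the original order is kept
    random.shuffle(perm)
    samples.append(perm)
print(len(samples), samples[0])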
2,124,455 | 2010-01-23T19:48:00.000 | 3 | 0 | 0 | 1 | python,linux,ubuntu,gtk,daemon | 2,124,579 | 3 | false | 0 | 0 | If five users are logged in to X sessions, who gets the message? Everyone?
If someone is logged in locally but only using the tty, and not X11, should they see the message?
If someone is logged in remotely via ssh -X to run a graphic application on their own system off of your CPU, should they see the message? How would you get it to them?
Linux is too flexible for your current approach. The standard way to do this is for any user who is interested in the kind of message you are sending to run an application that receives the message and displays it in a way of its choosing. D-Bus is a popular way of setting up the messaging process. This way remote users or users logged in with TTY mode only still have an option for seeing the message. | 2 | 0 | 0 | on Ubuntu 8/9,
I'm trying to write a daemon in Python that monitors a certain network condition and informs the user using a gtk.MessageDialog.
I installed this script using rc-update.
The daemon starts at boot, but doesn't show the dialog even after I log in. I assume this is because init.d starts my daemon on tty1 and no GNOME session is available.
I tried running the dialog through a subprocess, but it seems to inherit the same run environment.
What's the best practice for this sort of thing? | Python / Linux/ Daemon process trying to show gtk.messagedialog | 0.197375 | 0 | 0 | 772
2,124,455 | 2010-01-23T19:48:00.000 | 0 | 0 | 0 | 1 | python,linux,ubuntu,gtk,daemon | 2,136,615 | 3 | false | 0 | 0 | You may use notify-send (from libnotify-bin package) to send notifications to desktop users from your daemon. | 2 | 0 | 0 | on Ubuntu 8/9,
I'm trying to write a daemon in Python that monitors a certain network condition and informs the user using a gtk.MessageDialog.
I installed this script using rc-update.
The daemon starts at boot, but doesn't show the dialog even after I log in. I assume this is because init.d starts my daemon on tty1 and no GNOME session is available.
I tried running the dialog through a subprocess, but it seems to inherit the same run environment.
What's the best practice for this sort of thing? | Python / Linux/ Daemon process trying to show gtk.messagedialog | 0 | 0 | 0 | 772
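A sketch of the notify-send suggestion, shelling out from the daemon; it assumes the libnotify-bin package is installed, and a real daemon would also have to target the right user's DISPLAY and D-Bus session:
# Sends a desktop notification via the notify-send program.
import subprocess

subprocess.call(["notify-send", "Network monitor",
                 "The monitored network condition changed"])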
2,124,688 | 2010-01-23T20:55:00.000 | 2 | 0 | 0 | 1 | iphone,python,google-app-engine,gql | 2,124,718 | 4 | true | 1 | 0 | True, Google App Engine is a very cool product, but the datastore is a different beast than a regular MySQL database. That's not to say that what you need can't be done with the GAE datastore; however, it may take some reworking on your end.
The most prominent difference that you notice right off the start is that GAE uses an object-relational mapping for its data storage scheme. Essentially, object graphs are persisted in the database, maintaining their attributes and relationships to other objects. In many cases ORMs (object-relational mappings) map fairly well on top of a relational database (this is how Hibernate works). The mapping is not perfect though, and you will find that you need to make alterations to persist your data. Also, GAE has some unique constraints that complicate things a bit. One constraint that bothers me a lot is not being able to query for attribute paths: e.g. "select ... where dog.owner.name = 'bob' ". It is these rules that force you to read and understand how the GAE datastore works before you jump in.
I think GAE could work well in your situation. It just may take some time to understand ORM persistence in general, and the GAE datastore in particular. | 2 | 0 | 0 | I've prototyped an iPhone app that uses (internally) SQLite as its database. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, however, and know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also, is it truly relational?
Basically I guess my question is: can an app that typically uses an SQL backend work just as well with Google's App Engine, with GQL?
I hope that's clear... any guidance is great. | iPhone app with Google App Engine | 1.2 | 1 | 0 | 1,021 |
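A sketch of the reference idea and the attribute-path limitation mentioned above, using the App Engine db API of that era; the model names are illustrative:
# ReferenceProperty acts like a foreign key; GQL can filter on the
# reference itself but not on a path like dog.owner.name, so you
# fetch the owner first and then query by it.
from google.appengine.ext import db

class Owner(db.Model):
    name = db.StringProperty()

class Dog(db.Model):
    name = db.StringProperty()
    owner = db.ReferenceProperty(Owner)

bob = Owner.gql("WHERE name = :1", "bob").get()
dogs = Dog.gql("WHERE owner = :1", bob).fetch(10)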
2,124,688 | 2010-01-23T20:55:00.000 | 1 | 0 | 0 | 1 | iphone,python,google-app-engine,gql | 2,124,705 | 4 | false | 1 | 0 | That's a pretty generic question :)
Short answer: yes. It's going to involve some rethinking of your data model, but yes, chances are you can support it with the GAE Datastore API.
When you create your Python models (think of these as tables), you can certainly define references to other models (so now we have a foreign key). When you select this model, you'll get back the referencing models (pretty much like a join).
It'll most likely work, but it's not a drop-in replacement for a MySQL server. | 3 | 2 | 0 | I've prototyped an iPhone app that uses (internally) SQLite as its database. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, however, and know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also, is it truly relational?
Basically I guess my question is: can an app that typically uses an SQL backend work just as well with Google's App Engine, with GQL?
I hope that's clear... any guidance is great. | iPhone app with Google App Engine | 0.049958 | 1 | 0 | 1,021 |
2,124,688 | 2010-01-23T20:55:00.000 | 2 | 0 | 0 | 1 | iphone,python,google-app-engine,gql | 2,125,297 | 4 | false | 1 | 0 | GQL offers almost no functionality at all; it's only used for SELECT queries, and it only exists to make writing SELECT queries easier for SQL programmers. Behind the scenes, it converts your queries to db.Query objects.
The App Engine datastore isn't a relational database at all. You can do some stuff that looks relational, but my advice for anyone coming from an SQL background is to avoid GQL at all costs to avoid the trap of thinking the datastore is anything at all like an RDBMS, and to forget everything you know about database design. Specifically, if you're normalizing anything, you'll soon wish you hadn't. | 3 | 2 | 0 | I've prototyped an iPhone app that uses (internally) SQLite as its database. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, however, and know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also, is it truly relational?
Basically I guess my question is: can an app that typically uses an SQL backend work just as well with Google's App Engine, with GQL?
I hope that's clear... any guidance is great. | iPhone app with Google App Engine | 0.099668 | 1 | 0 | 1,021 |
2,125,149 | 2010-01-23T23:18:00.000 | 1 | 0 | 0 | 0 | python,json,networking,ipc | 2,125,162 | 1 | false | 1 | 0 | Use a client side certificate for the connection. This is a good monetization technique to get more income for your client side app. | 1 | 1 | 0 | I am looking for a way to connect a frontend server (running Django) with a backend server.
I want to avoid inventing my own protocol on top of a socket, so my plan was to use SimpleHTTPServer + JSON or XML.
However, we also require some security (authentication + encryption) for the connection, which isn't quite as simple to implement.
Any ideas for alternatives? What mechanisms would you use? I definitely want to avoid CORBA (we have used it before, and it's way too complex for what we need). | Network IPC With Authentication (in Python) | 0.197375 | 0 | 1 | 324 |
2,125,702 | 2010-01-24T02:43:00.000 | 2 | 0 | 1 | 0 | python,sdl,pygame | 5,063,983 | 8 | false | 0 | 1 | I use pythonw.exe (on Windows) instead of python.exe.
In other OSes, you could also redirect output to /dev/null.
And in order to still see my debug output, I am using the logging module. | 1 | 47 | 0 | I'm using Pygame/SDL's joystick module to get input from a gamepad. Every time I call its get_hat() method it prints to the console. This is problematic since I use the console to help me debug and now it gets flooded with SDL_JoystickGetHat value:0: 60 times every second. Is there a way I can disable this? Either through an option in Pygame/SDL or suppress console output while the function calls? I saw no mention of this in the Pygame documentation.
edit: This turns out to be due to debugging being turned on when the SDL library was compiled. | How to suppress console output in Python? | 0.049958 | 0 | 0 | 97,342 |
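A sketch of silencing C-level output around the noisy call by redirecting file descriptor 1 to /dev/null, which is needed because SDL's prints bypass Python's sys.stdout:
# Temporarily points the process's stdout (fd 1) at /dev/null.
import os
from contextlib import contextmanager

@contextmanager
def suppress_stdout():
    saved = os.dup(1)
    devnull = os.open(os.devnull, os.O_WRONLY)
    os.dup2(devnull, 1)
    try:
        yield
    finally:
        os.dup2(saved, 1)
        os.close(devnull)
        os.close(saved)

# with suppress_stdout():
#     hat = joystick.get_hat(0)   # hypothetical noisy pygame call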
2,125,865 | 2010-01-24T03:53:00.000 | 1 | 0 | 0 | 0 | python,ruby-on-rails,django,rest | 2,126,329 | 11 | false | 1 | 0 | You have written down no requirements, you have written down technology decisions. That's something totally different. What do you want to achieve? Then we might be able to help you with how to achieve them. | 7 | 7 | 0 | This is a general question about how limiting are web development frameworks such as Django and ruby-on-rails.
I am planning on building a RESTful web service which will have a purely JSON/XML interface, no GUI. The service will rely on a database; however, for a few of the more important operations there is no clear way of persisting a "model" object directly into a database table. In addition, I require full control over when and how the data is being written to the database. I will need to maintain multiple database connections in order to use some connections only for reads and others only for writes.
I've looked at the "full" MVC frameworks such as Django and more basic ones such as web.py and Pylons. The impression I currently have is that if I go with the full framework initially things will go faster, but eventually I will get stuck because I will be limited by the framework in what I can do. If I go with a more basic framework it will take much longer to get everything running, but I will be free to do what I need.
This is what it seems like but I suspect that it might be an incorrect impression given how many sites are written in Django and Rails. Could you please provide your opinion. Am I totally wrong and there is a way to easily do anything with a framework like Django or Rails or given my requirements I should go with something like web.py?
Thank you! | How limiting are web frameworks | 0.01818 | 0 | 0 | 653 |
2,125,865 | 2010-01-24T03:53:00.000 | 0 | 0 | 0 | 0 | python,ruby-on-rails,django,rest | 2,125,958 | 11 | false | 1 | 0 | You'll be much more limited by your own abilities than by a diverse community of developers working on a large project to share all those common parts. | 7 | 7 | 0 | This is a general question about how limiting are web development frameworks such as Django and ruby-on-rails.
I am planning on building a RESTful web service which will have a purely JSON/XML interface, no GUI. The service will rely on a database; however, for a few of the more important operations there is no clear way of persisting a "model" object directly into a database table. In addition, I require full control over when and how the data is being written to the database. I will need to maintain multiple database connections in order to use some connections only for reads and others only for writes.
I've looked at the "full" MVC frameworks such as Django and more basic ones such as web.py and Pylons. The impression I currently have is that if I go with the full framework initially things will go faster, but eventually I will get stuck because I will be limited by the framework in what I can do. If I go with a more basic framework it will take much longer to get everything running, but I will be free to do what I need.
This is what it seems like but I suspect that it might be an incorrect impression given how many sites are written in Django and Rails. Could you please provide your opinion. Am I totally wrong and there is a way to easily do anything with a framework like Django or Rails or given my requirements I should go with something like web.py?
Thank you! | How limiting are web frameworks | 0 | 0 | 0 | 653 |
2,125,865 | 2010-01-24T03:53:00.000 | 1 | 0 | 0 | 0 | python,ruby-on-rails,django,rest | 2,125,973 | 11 | false | 1 | 0 | Rails is as helpful or not as you need it to be, overall. If you need to load a collection with straight SQL, it's straightforward. If in the same line you want to use all the built-in ActiveRecord Fu, you can. RESTful routing is extremely simple, but again if the particular Rails flavor of REST doesn't meet your needs, the routing is completely configurable. In a Rails app you can use as much or as little of the defaults as you need to, and reconfiguration is available at all levels. | 7 | 7 | 0 | This is a general question about how limiting are web development frameworks such as Django and ruby-on-rails.
I am planning on building a RESTful web service which will have a purely JSON/XML interface, no GUI. The service will rely on a database; however, for a few of the more important operations there is no clear way of persisting a "model" object directly into a database table. In addition, I require full control over when and how the data is being written to the database. I will need to maintain multiple database connections in order to use some connections only for reads and others only for writes.
I've looked at the "full" MVC frameworks such as Django and more basic ones such as web.py and Pylons. The impression I currently have is that if I go with the full framework initially things will go faster, but eventually I will get stuck because I will be limited by the framework in what I can do. If I go with a more basic framework it will take much longer to get everything running, but I will be free to do what I need.
This is what it seems like but I suspect that it might be an incorrect impression given how many sites are written in Django and Rails. Could you please provide your opinion. Am I totally wrong and there is a way to easily do anything with a framework like Django or Rails or given my requirements I should go with something like web.py?
Thank you! | How limiting are web frameworks | 0.01818 | 0 | 0 | 653 |
2,125,865 | 2010-01-24T03:53:00.000 | 5 | 0 | 0 | 0 | python,ruby-on-rails,django,rest | 2,125,979 | 11 | false | 1 | 0 | You can still use the full potential of the language in question, even if you also use a framework. A framework isn't a limiting factor, it's basically a tool to ease development of certain parts of your application. Django and rails, for instance, abstract away some database functionality, so you'll only have to worry about your model objects. That doesn't mean you can't do stuff on your own as well... | 7 | 7 | 0 | This is a general question about how limiting are web development frameworks such as Django and ruby-on-rails.
I am planning on building a RESTful web service which will have a purely JSON/XML interface, no GUI. The service will rely on a database; however, for a few of the more important operations there is no clear way of persisting a "model" object directly into a database table. In addition, I require full control over when and how the data is being written to the database. I will need to maintain multiple database connections in order to use some connections only for reads and others only for writes.
I've looked at the "full" MVC frameworks such as Django and more basic ones such as web.py and Pylons. The impression I currently have is that if I go with the full framework initially things will go faster, but eventually I will get stuck because I will be limited by the framework in what I can do. If I go with a more basic framework it will take much longer to get everything running, but I will be free to do what I need.
This is what it seems like but I suspect that it might be an incorrect impression given how many sites are written in Django and Rails. Could you please provide your opinion. Am I totally wrong and there is a way to easily do anything with a framework like Django or Rails or given my requirements I should go with something like web.py?
Thank you! | How limiting are web frameworks | 0.090659 | 0 | 0 | 653 |
2,125,865 | 2010-01-24T03:53:00.000 | 1 | 0 | 0 | 0 | python,ruby-on-rails,django,rest | 2,126,030 | 11 | false | 1 | 0 | I have used Ruby/Rails for years now, and unlike just about every other language/framework I have used (across nearly 15 years of Java, PHP, ColdFusion, ASP, etc etc) it gets out of the way when you need it to.
It sounds like you might benefit from a "lighter-weight" framework like Sinatra, but with the upcoming Rails 3 release the benefits are becoming less pronounced. Rails 3 makes everything configurable ... in fact, Rails will now just be a particular set of plugins and extensions sitting on top of an infinitely flexible core.
I am interested in this statement:
"The service will rely on a database however for a few of the more important operations
there is no clear way of persisting a "model" object directly into a database table."
Not sure what you mean by this statement ... at some point you have something going into the database, right?
In most non-trivial applications you rarely have a single model tied to the end of a request ... you might actually have a quite complex network of models that are returned or updated.
If you are working with JSON, I would definitely suggest looking at a database like MongoDB. MongoDB is based entirely on storing JSON data, and may therefore fit really neatly with your application. | 7 | 7 | 0 | This is a general question about how limiting are web development frameworks such as Django and ruby-on-rails.
I am planning on building a RESTful web service which will have a purely JSON/XML interface, no GUI. The service will rely on a database; however, for a few of the more important operations there is no clear way of persisting a "model" object directly into a database table. In addition, I require full control over when and how the data is being written to the database. I will need to maintain multiple database connections in order to use some connections only for reads and others only for writes.
I've looked at the "full" MVC frameworks such as Django and more basic ones such as web.py and Pylons. The impression I currently have is that if I go with the full framework initially things will go faster, but eventually I will get stuck because I will be limited by the framework in what I can do. If I go with a more basic framework it will take much longer to get everything running, but I will be free to do what I need.
This is what it seems like but I suspect that it might be an incorrect impression given how many sites are written in Django and Rails. Could you please provide your opinion. Am I totally wrong and there is a way to easily do anything with a framework like Django or Rails or given my requirements I should go with something like web.py?
Thank you! | How limiting are web frameworks | 0.01818 | 0 | 0 | 653 |
2,125,865 | 2010-01-24T03:53:00.000 | 2 | 0 | 0 | 0 | python,ruby-on-rails,django,rest | 2,126,628 | 11 | false | 1 | 0 | On average, the more complete and helpful the web framework is, the more limiting it is when you try to do things another way than the way the web framework thinks is The Right Way. Some web frameworks try to be very helpful and still not restrictive, and some do that better than others.
And the general recommendation there is: Don't fight the framework. You will lose. So it's important to choose a framework that helps you with the things you want to do, but doesn't enforce anything else. For your web service case, this should not be a problem. There are tons of minimalistic web frameworks out there, at least in the Python world (which is all I care about): Bobo, BFG, Pylons, Werkzeug, etc. None of these will get in your way one bit.
Also don't forget that you can often use several frameworks together by having them run side by side. Especially using techniques such as Dexterity/XDV. Plone.org for example is mostly Plone (duh), an excellent content management system, but extremely restrictive if you want to do something else. So part of the site is Trac, the excellent Python bug tracker. It's all integrated to look the same. | 7 | 7 | 0 | This is a general question about how limiting are web development frameworks such as Django and ruby-on-rails.
I am planning on building a RESTful web service which will have a purely JSON/XML interface, no GUI. The service will rely on a database; however, for a few of the more important operations there is no clear way of persisting a "model" object directly into a database table. In addition, I require full control over when and how the data is being written to the database. I will need to maintain multiple database connections in order to use some connections only for reads and others only for writes.
I've looked at the "full" MVC frameworks such as Django and more basic ones such as web.py and Pylons. The impression I currently have is that if I go with the full framework initially things will go faster, but eventually I will get stuck because I will be limited by the framework in what I can do. If I go with a more basic framework it will take much longer to get everything running, but I will be free to do what I need.
This is what it seems like but I suspect that it might be an incorrect impression given how many sites are written in Django and Rails. Could you please provide your opinion. Am I totally wrong and there is a way to easily do anything with a framework like Django or Rails or given my requirements I should go with something like web.py?
Thank you! | How limiting are web frameworks | 0.036348 | 0 | 0 | 653 |
2,125,865 | 2010-01-24T03:53:00.000 | 1 | 0 | 0 | 0 | python,ruby-on-rails,django,rest | 2,129,721 | 11 | false | 1 | 0 | If you know you're not going to use an ORM, or create a user interface, then you've just eliminated about 90% of what you'd use a web application framework for in the first place. If you look at the feature set of Django, for instance: what parts of that would you use to implement a web service that you couldn't get from using something much simpler, like Werkzeug or CherryPy?
The principal differences between building a web service and building any old black box that takes input and produces output are the various technical limitations imposed by the API being HTTP-based, the problem of statelessness, and the problem of idempotence. A web application framework is going to give you a little help with those issues, but not much. | 7 | 7 | 0 | This is a general question about how limiting are web development frameworks such as Django and ruby-on-rails.
I am planning on building a RESTful web service which will have a purely JSON/XML interface, no GUI. The service will rely on a database; however, for a few of the more important operations there is no clear way of persisting a "model" object directly into a database table. In addition, I require full control over when and how the data is being written to the database. I will need to maintain multiple database connections in order to use some connections only for reads and others only for writes.
I've looked at the "full" MVC frameworks such as Django and more basic ones such as web.py and Pylons. The impression I currently have is that if I go with the full framework initially things will go faster, but eventually I will get stuck because I will be limited by the framework in what I can do. If I go with a more basic framework it will take much longer to get everything running, but I will be free to do what I need.
This is what it seems like but I suspect that it might be an incorrect impression given how many sites are written in Django and Rails. Could you please provide your opinion. Am I totally wrong and there is a way to easily do anything with a framework like Django or Rails or given my requirements I should go with something like web.py?
Thank you! | How limiting are web frameworks | 0.01818 | 0 | 0 | 653 |
2,126,001 | 2010-01-24T04:53:00.000 | 1 | 0 | 1 | 0 | python,windows,installation,activepython | 2,126,117 | 7 | false | 0 | 0 | Download Python 2.6 from python.org and read its tutorial as a start. | 1 | 7 | 0 | I'm about to refresh myself in programming and I have decided on Python 2.6 for that. I have searched the net and it gave me two possible installers for download. One is from the Python site and another is from ActiveState. Which one should I install on my Windows computer? | Which python installation should I use? | 0.028564 | 0 | 0 | 632
2,126,383 | 2010-01-24T08:07:00.000 | 2 | 0 | 0 | 0 | python,artificial-intelligence,machine-learning,data-mining | 2,126,656 | 1 | false | 0 | 0 | The question is very unclear, but assuming what you mean is that your machine learning algorithm is not working without negative examples and you can't give it every possible negative example, then it's perfectly alright to give it some negative examples.
The point of data mining (a.k.a. machine learning) is to try coming up with general rules based on relatively small samples of data and then applying them to larger data. In real-life problems you will never have all the data. If you had all possible inputs, you could easily create a simple sequence of if-then rules which would always be correct. If it was that simple, robots would be doing all our thinking for us by now. | 1 | 2 | 1 | I had to build a concept analyzer for the computer science field, and I used machine learning for this, via the Orange library for Python. I have the examples of concepts, where the features are lemma and part of speech, like algorithm|NN|concept. The problem is that any other word, that in fact is not a concept, is classified as a concept, due to the lack of negative examples. It is not feasible to put all the other words in the learning file, classified as simple words, not concepts (this will work, but is not quite a solution). Any idea?
Thanks. | Machine learning issue for negative instances | 0.379949 | 0 | 0 | 296 |
2,126,433 | 2010-01-24T08:27:00.000 | 1 | 0 | 0 | 0 | python,django,deployment,size,sqlalchemy | 2,127,512 | 2 | false | 1 | 0 | The Django ORM is usable on its own - you can use "settings.configure()" to set up the database settings. That said, you'll have to do the stripping down and repackaging yourself, and you'll have to experiment with how much you can actually strip away. I'm sure you can ditch contrib/, forms/, template/, and probably several other unrelated pieces. The ORM definitely relies on conf/, and quite likely on core/ and util/ as well. A few quick greps through db/* should make it clear what's imported. | 2 | 0 | 0 | Currently an application of mine is using SQLAlchemy, but I have been considering the possibility of using Django model API.
Django 1.1.1 is about 3.6 megabytes in size, whereas SQLAlchemy is about 400 kilobytes (as reported by PyPM - which is essentially the size of the files installed by python setup.py install).
I would like to use the Django models (so as to not have other developers learn yet-another-ORM), but do not want to include 3.6 megabytes of stuff most of which are not needed. (FYI - the application, final executable that is, actually bundles the install_requires from setup.py) | Using Django's Model API without having to *include* the full Django stack | 0.099668 | 1 | 0 | 191 |
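A sketch of the settings.configure() approach from the answer above, using Django 1.1-era setting names; all values are placeholders, and Meta.app_label is required for models living outside a real app:
# Standalone Django ORM: configure settings before importing models.
from django.conf import settings

settings.configure(
    DATABASE_ENGINE="sqlite3",   # pre-1.2 setting names
    DATABASE_NAME="/tmp/app.db",
)

from django.db import models

class Listing(models.Model):
    title = models.CharField(max_length=100)

    class Meta:
        app_label = "listings"   # needed outside an installed app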
2,126,433 | 2010-01-24T08:27:00.000 | 1 | 0 | 0 | 0 | python,django,deployment,size,sqlalchemy | 2,130,014 | 2 | false | 1 | 0 | You may be able to get a good idea of what is safe to strip out by checking which files don't have their access time updated when you run your application. | 2 | 0 | 0 | Currently an application of mine is using SQLAlchemy, but I have been considering the possibility of using Django model API.
Django 1.1.1 is about 3.6 megabytes in size, whereas SQLAlchemy is about 400 kilobytes (as reported by PyPM - which is essentially the size of the files installed by python setup.py install).
I would like to use the Django models (so as to not have other developers learn yet-another-ORM), but do not want to include 3.6 megabytes of stuff most of which are not needed. (FYI - the application, final executable that is, actually bundles the install_requires from setup.py) | Using Django's Model API without having to *include* the full Django stack | 0.099668 | 1 | 0 | 191 |
2,126,824 | 2010-01-24T11:24:00.000 | 1 | 0 | 0 | 0 | python,web2py | 2,391,381 | 1 | false | 1 | 0 | I have seen something like this happen with cookie-based load balancing. The cookie was being set too late, so the user would switch frontends sometimes when they logged in.
If you have a load balancer over 2 frontends, you might see this happen 50% of the time.
Check the logs and make sure the hits are all going to the same frontend. | 1 | 2 | 0 | I have a web2py application and am using default/user/login to log in to my application, but sometimes when I log in the application redirects back to the login page, and sometimes it logs in fine and there is no problem. I don't know why.
So please, can anyone tell me?
Thanks in advance | web2py - my application doesn't login | 0.197375 | 0 | 0 | 199
2,127,067 | 2010-01-24T12:46:00.000 | 3 | 0 | 0 | 0 | python,django,web-services,scalability | 2,127,162 | 6 | false | 1 | 0 | Read scaling to millions of users is not a database problem, but is fixed with load balancing and caching, etc, see S. Lott above.
Write scaling can indeed be a database problem. "Sharding" and having multiple databases can be one solution, but that's hard with SQL while still retaining the relationality of the database. Popular solutions there are the new types of "nosql" databases. But if you really have those problems, then you need serious expert help, not just answers from dudes on Stack Overflow. :)
Does that prevent you from scaling up? (millions of users) | Can you really scale up with Django...given that you can only use one database? (In the models.py and settings.py) | 0.099668 | 0 | 0 | 1,742 |
2,127,067 | 2010-01-24T12:46:00.000 | 0 | 0 | 0 | 0 | python,django,web-services,scalability | 2,127,160 | 6 | false | 0 | 0 | If you find out that the DB is the bottleneck of your app, and there is no way around it (like using caching), then you should scale your DB as well. Django has nothing to do with this. | 2 | 5 | 0 | Django only allows you to use one database in settings.py.
Does that prevent you from scaling up? (millions of users) | Can you really scale up with Django...given that you can only use one database? (In the models.py and settings.py) | 0 | 0 | 0 | 1,742 |
2,127,956 | 2010-01-24T17:17:00.000 | 2 | 0 | 0 | 0 | python,django,textmate | 2,133,642 | 2 | false | 1 | 0 | It's possible - the Rails bundle does this for ERB (<% automatically gets closing %> tags).
So that's a place you could go look. | 2 | 3 | 0 | I have installed a TextMate bundle that I believe enables the ability for automatic closing of the "{{" markup (so that it will automatically close the markup with "}}"), but this does not seem to be possible with the other markup that uses "{%" and "%}".
So, I was wondering if anyone out there knows how to get TextMate to add the automatic closing tags for the {% %} just like is already done with {{ }}.
Any help is appreciated! | TextMate and Django Integration - Supporting {% %} markup | 0.197375 | 0 | 0 | 796 |
2,127,956 | 2010-01-24T17:17:00.000 | 1 | 0 | 0 | 0 | python,django,textmate | 2,128,058 | 2 | false | 1 | 0 | I don't think that's possible, but the Django bundle for TextMate does allow you to insert the opening and closing tags in one go, placing the cursor in the middle, with ctrl-% (ctrl-shift-5).
Click the Bundles -> Python Django Templates menu to see all the shortcuts that are available. | 2 | 3 | 0 | I have installed a TextMate bundle that I believe enables the ability for automatic closing of the "{{" markup (so that it will automatically close the markup with "}}"), but this does not seem to be possible with the other markup that uses "{%" and "%}".
So, I was wondering if anyone out there knows how to get TextMate to add the automatic closing tags for the {% %} just like is already done with {{ }}.
Any help is appreciated! | TextMate and Django Integration - Supporting {% %} markup | 0.099668 | 0 | 0 | 796 |
2,128,266 | 2010-01-24T18:51:00.000 | 0 | 0 | 0 | 0 | python,sockets,network-programming | 39,160,652 | 8 | false | 0 | 0 | Socket is a low-level API; it is mapped directly onto the operating system interface.
Twisted, Tornado, ... are high-level frameworks (of course they are built on sockets, because sockets are low level).
When it comes to TCP/IP programming, you should have some basic knowledge to make a decision about what you should use:
Will you use a well-known protocol like HTTP or FTP, or create your own protocol?
Blocking or non-blocking? Twisted and Tornado are non-blocking frameworks (basically like Node.js).
Of course, sockets can do everything, because every other framework is based on them ;) | 2 | 2 | 0 | What library should I use for network programming? Are sockets the best, or is there a higher-level interface that is standard?
I need something that will be pretty cross-platform (i.e. Linux, Windows, Mac OS X), and it only needs to be able to connect to other Python programs using the same library. | Network programming in Python | 0 | 0 | 1 | 1,164
2,128,266 | 2010-01-24T18:51:00.000 | 1 | 0 | 0 | 0 | python,sockets,network-programming | 2,128,966 | 8 | false | 0 | 0 | The socket module in the standard lib is in my opinion a good choice if you don't need high performance.
It is a very famous API that is known by almost every developer in almost every language. It's quite simple and there is a lot of information available on the internet. Moreover, it will be easier for other people to understand your code.
I guess that an event-driven framework like Twisted has better performance, but in basic cases standard sockets are enough.
Of course, if you use a higher-level protocol (HTTP, FTP, ...), you should use the corresponding implementation in the Python standard library. | 2 | 2 | 0 | What library should I use for network programming? Are sockets the best, or is there a higher-level interface that is standard?
I need something that will be pretty cross-platform (i.e. Linux, Windows, Mac OS X), and it only needs to be able to connect to other Python programs using the same library. | Network programming in Python | 0.024995 | 0 | 1 | 1,164
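A minimal localhost pair with the standard socket module, as both answers recommend; the port number is arbitrary:
# One-shot exchange between two sockets on the loopback
# interface, standard library only.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 50007))   # arbitrary port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 50007))
client.sendall(b"ping")

conn, addr = server.accept()
print(conn.recv(4))                 # -> b'ping'
conn.close()
client.close()
server.close()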
2,128,505 | 2010-01-24T19:49:00.000 | 133 | 0 | 0 | 0 | python,sqlalchemy | 2,157,930 | 5 | false | 0 | 0 | We actually had these merged together originally, i.e. there was a "filter"-like method that accepted *args and **kwargs, where you could pass a SQL expression or keyword arguments (or both). I actually find that a lot more convenient, but people were always confused by it, since they're usually still getting over the difference between column == expression and keyword = expression. So we split them up. | 4 | 380 | 0 | Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | Difference between filter and filter_by in SQLAlchemy | 1 | 1 | 0 | 221,892 |
2,128,505 | 2010-01-24T19:49:00.000 | 40 | 0 | 0 | 0 | python,sqlalchemy | 2,128,567 | 5 | false | 0 | 0 | filter_by uses keyword arguments, whereas filter allows pythonic filtering arguments like filter(User.name=="john") | 4 | 380 | 0 | Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | Difference between filter and filter_by in SQLAlchemy | 1 | 1 | 0 | 221,892 |
2,128,505 | 2010-01-24T19:49:00.000 | 494 | 0 | 0 | 0 | python,sqlalchemy | 2,128,558 | 5 | true | 0 | 0 | filter_by is used for simple queries on the column names using regular kwargs, like
db.users.filter_by(name='Joe')
The same can be accomplished with filter, not using kwargs, but instead using the '==' equality operator, which has been overloaded on the db.users.name object:
db.users.filter(db.users.name=='Joe')
You can also write more powerful queries using filter, such as expressions like:
db.users.filter(or_(db.users.name=='Ryan', db.users.country=='England')) | 4 | 380 | 0 | Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | Difference between filter and filter_by in SQLAlchemy | 1.2 | 1 | 0 | 221,892 |
2,128,505 | 2010-01-24T19:49:00.000 | 4 | 0 | 0 | 0 | python,sqlalchemy | 68,331,326 | 5 | false | 0 | 0 | Apart from all the technical information posted before, there is a significant difference between filter() and filter_by() in their usability.
The second one, filter_by(), may be used only for filtering by something specifically stated - a string or some number value. So it's usable only for category filtering, not for expression filtering.
On the other hand, filter() allows using comparison expressions (==, <, >, etc.), so it's helpful e.g. when 'less/more than' filtering is needed. But it can be used like filter_by() as well (when == is used).
Just remember that the two functions use different argument syntax. | 4 | 380 | 0 | Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | Difference between filter and filter_by in SQLAlchemy | 0.158649 | 1 | 0 | 221,892 |
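A self-contained sketch contrasting the two styles on an in-memory database; filter_by handles keyword equality, while filter is needed for the age comparison:
# filter_by: keyword equality only. filter: full expressions.
from sqlalchemy import Column, Integer, String, create_engine, and_
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    age = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([User(name="Joe", age=30), User(name="Joe", age=12)])
session.commit()

by_kwargs = session.query(User).filter_by(name="Joe").all()
by_expr = session.query(User).filter(and_(User.name == "Joe",
                                          User.age >= 18)).all()
print(len(by_kwargs), len(by_expr))  # -> 2 1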
2,131,533 | 2010-01-25T10:16:00.000 | 1 | 0 | 0 | 0 | python,django | 37,694,543 | 7 | false | 1 | 0 | For unique e-mail addresses in django-registration-redux 1.4.
In urls.py add the following:
from django.conf.urls import include, url
from registration.forms import RegistrationFormUniqueEmail
from registration.backends.default.views import RegistrationView
urlpatterns = [
    url(r'^accounts/register/$', RegistrationView.as_view(form_class=RegistrationFormUniqueEmail),
        name='registration_register'),
    url(r'^accounts/', include('registration.backends.default.urls')),
] | 1 | 16 | 0 | Can I force users to make unique e-mail addresses in django-registration? | Django-registration, force unique e-mail | 0.028564 | 0 | 0 | 8,343
2,133,648 | 2010-01-25T16:15:00.000 | 2 | 0 | 1 | 0 | python,api,xquery | 3,104,813 | 4 | false | 0 | 0 | Zorba 1.2 works from Python. After installation you will get a python folder under the zorba folder. Append it to sys.path, along with the zorba\bin folder. After that, import "zorba_api" will work! | 1 | 22 | 0 | Is there any existing way to run XQuery under Python? (not starting to build a parser yourself in other words).
I got a ton of legacy XQuery that I want to port to our new system, or rather I want to
port the framework and not XQuery.
Therefore: Is there any library that allows me to run XQuery under python? | XQuery library under Python | 0 | 0 | 0 | 14,029 |
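A sketch of the path setup the Zorba answer describes; the install locations are assumptions, so point them at wherever Zorba actually landed:
# Hypothetical install paths for the Zorba Python bindings.
import sys
sys.path.append(r"C:\zorba\python")
sys.path.append(r"C:\zorba\bin")

import zorba_api  # provided by the Zorba bindings once paths are set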
2,134,732 | 2010-01-25T18:53:00.000 | 3 | 1 | 0 | 1 | python,linux,sigterm | 2,134,763 | 3 | false | 0 | 0 | Not a direct solution, but it might be a good idea to check at startup for an actual process running with the PID in the PID file and, if none exists, to clean up the stale file.
It's possible that your process is getting a SIGKILL before it has a chance to clean up the PID file. | 1 | 3 | 0 | I have some daemons that use PID files to prevent parallel execution of my program. I have set up a signal handler to trap SIGTERM and do the necessary clean-up including the PID file. This works great when I test using "kill -s SIGTERM #PID". However, when I reboot the server the PID files are still hanging around preventing start-up of the daemons. It is my understanding that SIGTERM is sent to all processes when a server is shutting down. Should I be trapping another signal (SIGINT, SIGQUIT?) in my daemon? | PID files hanging around for daemons after server restart | 0.197375 | 0 | 0 | 727
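A sketch of the startup check suggested in the first answer: probe whether the recorded PID still exists before trusting the file (Unix-only, since it uses signal 0):
# Remove a stale PID file if its process is gone.
import os

PIDFILE = "/var/run/mydaemon.pid"  # hypothetical path

def clear_stale_pidfile():
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return                       # no file or garbage content
    try:
        os.kill(pid, 0)              # signal 0 checks existence only
    except OSError:
        os.remove(PIDFILE)           # process is gone; file is stale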
2,135,316 | 2010-01-25T20:18:00.000 | 2 | 0 | 0 | 0 | python,zope,zcml,zope.component | 2,135,406 | 2 | true | 0 | 0 | A factory creates utilities, while a utility registered as a component is an instance. Hence, if you look up a utility registered as a component, you will get the same object back every time. But if it's registered as a factory, you'll get a new instance every time. | 1 | 2 | 0 | It's a little confusing that ZCML registrations for Zope utilities can accept a component or a factory.
<utility component=".some.Class" />
versus
<utility factory=".some.Factory" />
What is the difference? | What is the difference between a Zope utility defined with a factory versus a component? | 1.2 | 0 | 0 | 325 |
2,135,595 | 2010-01-25T21:00:00.000 | 0 | 0 | 0 | 0 | python,client-server | 2,135,937 | 6 | false | 0 | 0 | On TCP/IP networks 127.0.0.0/8 is a non-routable network, so you should not be able to send an IP datagram destined to 127.0.0.1 across a routed infrastructure. The router will just discard the datagram. However, it is possible to construct and send datagrams with a destination address of 127.0.0.1, so a host on the same network (IP sense of network) as your host could possibly get the datagram to your host's TCP/IP stack. This is where your local firewall comes into play. Your local (host) firewall should have a rule that discards IP datagrams destined for 127.0.0.0/8 coming into any interface other than lo0 (or the equivalent loopback interface). If your host either 1) has such firewall rules in place or 2) exists on its own network (or shared with only completely trusted hosts) and behind a well-configured router, you can safely just bind to 127.0.0.1 and be fairly certain any datagrams you receive on the socket came from the local machine. The prior answers address how to open and bind to 127.0.0.1. | 2 | 10 | 0 | I have a Python program with many threads. I was thinking of creating a socket, binding it to localhost, and having the threads read/write to this central location. However, I do not want this socket open to the rest of the network; just connections from 127.0.0.1 should be accepted. How would I do this (in Python)? And is this a suitable design? Or is there something a little more elegant? | Creating a socket restricted to localhost connections only | 0 | 0 | 1 | 15,248
2,135,595 | 2010-01-25T21:00:00.000 | 0 | 0 | 0 | 0 | python,client-server | 2,135,628 | 6 | false | 0 | 0 | If you do sock.bind(('127.0.0.1', port)) (note that bind takes a single (host, port) tuple, host first), the socket will only listen on localhost and not on other interfaces, so that's all you need; a minimal sketch follows this row. | 2 | 10 | 0 | I have a Python program with many threads. I was thinking of creating a socket, binding it to localhost, and having the threads read/write to this central location. However I do not want this socket open to the rest of the network; only connections from 127.0.0.1 should be accepted. How would I do this (in Python)? And is this a suitable design? Or is there something a little more elegant? | Creating a socket restricted to localhost connections only | 0 | 0 | 1 | 15,248 |
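A minimal sketch of the bind call described above, using only the standard library socket module; the port number is an arbitrary example:

    import socket

    HOST = '127.0.0.1'  # loopback only: remote machines cannot connect
    PORT = 50007        # arbitrary example port

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))  # (host, port) order, host first
    server.listen(5)

    conn, addr = server.accept()  # addr[0] will always be 127.0.0.1 here
    data = conn.recv(1024)
    conn.sendall(data)  # trivial echo, just to exercise the socket
    conn.close()
    server.close()

Because the listening address is the loopback interface itself, the kernel never delivers connections arriving on other interfaces to this socket, which is exactly the restriction the question asks for.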
2,136,844 | 2010-01-26T00:40:00.000 | 0 | 0 | 1 | 1 | python,select,ipc,named-pipes,flush | 2,136,921 | 3 | false | 0 | 0 | The flush operation is irrelevant for named pipes; their data is held strictly in memory and won't be released until it is read or the FIFO is closed. | 3 | 4 | 0 | I have a named pipe created via the os.mkfifo() command. I have two different Python processes accessing this named pipe: process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select function does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise process A is programmed to ignore those 5KB. Now everything works fine, and select is always returning appropriately. I came to this hack-ish solution by noticing that process A's select would return if process B were to be killed (after it was writing and flushing, it would sleep on a read pipe). Is there a problem with flush in Python for named pipes? | Named pipe is not flushing in Python | 0 | 0 | 0 | 4,276 |
2,136,844 | 2010-01-26T00:40:00.000 | 1 | 0 | 1 | 1 | python,select,ipc,named-pipes,flush | 2,200,679 | 3 | false | 0 | 0 | What APIs are you using? os.read() and os.write() don't buffer anything. | 3 | 4 | 0 | I have a named pipe created via the os.mkfifo() command. I have two different Python processes accessing this named pipe, process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select function does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise process A is programmed to ignore those 5KB. Now everything works fine, and select is always returning appropriately. I came to this hack-ish solution by noticing that process A's select would return if process B were to be killed (after it was writing and flushing, it would sleep on a read pipe). Is there a problem with flush in Python for named pipes? | Named pipe is not flushing in Python | 0.066568 | 0 | 0 | 4,276 |
2,136,844 | 2010-01-26T00:40:00.000 | 1 | 0 | 1 | 1 | python,select,ipc,named-pipes,flush | 2,508,809 | 3 | false | 0 | 0 | To find out if Python's internal buffering is causing your problems, run your scripts with "python -u" instead of "python". This forces Python into unbuffered mode, which causes all output to be written out immediately (a sketch of fully unbuffered pipe I/O follows this row). | 3 | 4 | 0 | I have a named pipe created via the os.mkfifo() command. I have two different Python processes accessing this named pipe: process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select function does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise process A is programmed to ignore those 5KB. Now everything works fine, and select is always returning appropriately. I came to this hack-ish solution by noticing that process A's select would return if process B were to be killed (after it was writing and flushing, it would sleep on a read pipe). Is there a problem with flush in Python for named pipes? | Named pipe is not flushing in Python | 0.066568 | 0 | 0 | 4,276 |
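To make the advice in the answers above concrete, here is a minimal sketch of reader and writer sides that bypass Python's buffered file objects entirely by using os.open()/os.read()/os.write() on the FIFO, so no flush is ever involved; the FIFO path is a hypothetical example, and each function is meant to run in its own process:

    import os
    import select

    FIFO = '/tmp/example.fifo'  # hypothetical path, shared by both processes
    if not os.path.exists(FIFO):
        os.mkfifo(FIFO)

    def reader():
        # Process A: an unbuffered file descriptor, polled with select.
        fd = os.open(FIFO, os.O_RDONLY)  # blocks until a writer opens the FIFO
        while True:
            r, _, _ = select.select([fd], [], [], 5.0)  # 5-second timeout
            if fd in r:
                data = os.read(fd, 4096)  # returns as soon as any bytes arrive
                if not data:
                    break  # all writers have closed their end
                print(repr(data))
        os.close(fd)

    def writer():
        # Process B: os.write() hands bytes straight to the kernel; no flush needed.
        fd = os.open(FIFO, os.O_WRONLY)  # blocks until a reader opens the FIFO
        os.write(fd, b'payload')
        os.close(fd)

Since the descriptors come from os.open() rather than the built-in open(), there is no user-space buffer between the processes and the kernel's pipe buffer, which removes flushing from the picture altogether.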
2,136,885 | 2010-01-26T00:51:00.000 | 1 | 0 | 1 | 0 | python,concurrency,shared-memory | 2,136,956 | 3 | true | 0 | 0 | You can share read-only data among processes simply with a fork (on Unix; there is no easy way on Windows), but that won't catch the "once a day" change: you'd need to put an explicit mechanism in place for each process to update its own copy. A fork-based sketch follows this row. Native Python structures like dict are just not designed to live at arbitrary addresses in shared memory (you'd have to code a dict variant supporting that in C), so they offer no solace.
You could use Jython (or IronPython) to get a Python implementation with exactly the same multi-threading abilities as Java (or, respectively, C#), including multiple-processor usage by multiple simultaneous threads. | 1 | 3 | 0 | I have CPU-intensive code which uses a heavy dictionary as data (around 250M of data). I have a multicore processor and want to utilize it so that I can run more than one task at a time. The dictionary is mostly read-only and may be updated once a day.
How can I write this in Python without duplicating the dictionary?
I understand that Python threads are constrained by the global interpreter lock and will not offer true parallelism. Can I use the multiprocessing module without the data being serialized between processes?
I come from the Java world, and my requirement would be something like Java threads, which can share data, run on multiple processors, and offer synchronization primitives. | shared data utilizing multiple processor in python | 1.2 | 0 | 0 | 472 |
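A minimal sketch of the fork-based approach from the accepted answer, assuming Unix, where multiprocessing workers are forked from the parent: the dictionary is built at module level before the pool is created, so every worker inherits a copy-on-write view of it instead of having it serialized per task. The data below is a hypothetical stand-in for the real loader:

    import multiprocessing

    # Built once in the parent; fork() gives every worker a copy-on-write
    # view of it, so the large dict is never pickled per task.
    SHARED = {'example-key': 'example-value'}  # stands in for the real loader

    def lookup(key):
        # Runs in a worker process; reads the inherited module-level dict.
        return SHARED.get(key)

    if __name__ == '__main__':
        pool = multiprocessing.Pool(processes=4)
        print(pool.map(lookup, ['example-key', 'missing-key']))
        pool.close()
        pool.join()

The trade-off matches the answer's caveat: a daily update still needs an explicit refresh, for example by tearing the pool down and rebuilding it after reloading the dict in the parent.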