Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,886,192 | 2009-12-11T06:29:00.000 | 0 | 1 | 1 | 0 | python,pylons | 1,886,225 | 4 | false | 0 | 0 | The most important aspect of collaboration is communicating with your teammates. See if you can come to a quick consensus on how to handle the situation.
My suggestion though, would be to pass around your completed ini file for the other devs to modify for their own purposes. If there are a lot of hand tuned settings that they won't want (or need) to change, then they shouldn't have to do the work. At the end of the day though, they'll need to write the settings somehow. | 4 | 1 | 0 | I'm learning about Pylons and I've read a few tutorials, but none of them have addressed collaboration practices. Starting on a practice project. I'd like to keep my code in a revision-control system (Git, specifically) as if it were an open-source project with multiple developers, in order to practice that aspect of Pylons development as well.
I'm wondering what I should do with the development.ini file that was generated by Paster as part of my new application. On one hand, it contains lots of settings that other developers wouldn't want to have to recreate by hand, so it seems like it ought to be stored in my Git repository so that other developers can access it. On the other hand, some of the settings, such as the database connection URL, are specific to one person's development environment and wouldn't make sense to share with others.
What do real-world Pylons applications do with this file? | Should Pylons' development.ini be checked in? | 0 | 0 | 0 | 358 |
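(One common way to act on the suggestion above, though not literally what the answer proposes, is to check in a template such as development.ini.sample and let each developer create an untracked local copy; the file names in this sketch are assumptions.)

```python
# bootstrap.py -- hypothetical helper: gives each developer a private
# development.ini to tweak, based on a checked-in template.
import os
import shutil

if not os.path.exists("development.ini"):
    shutil.copy("development.ini.sample", "development.ini")
    print("Created development.ini; edit the database URL for your machine.")
else:
    print("development.ini already exists; leaving it alone.")
```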
1,889,967 | 2009-12-11T18:25:00.000 | 1 | 1 | 1 | 0 | python | 1,890,161 | 6 | false | 0 | 0 | Sure, you can alter sys.path to add the current directory (or a subdirectory of it) to the search path. site.addsitedir is a good way to do it. Since you'd be doing this from Python you can have any sort of logic you like for deciding which directory to add; you could base it on os.path.normpathing the current directory if it looks like a branch, or looking for the newest branch on-disc, or something else.
You could put this code in the sitecustomize.py module or other startup-triggered location. | 2 | 3 | 0 | I am developing a library and an application that uses the library in Python 2.6. I've placed a "mylib.pth" file in "site-packages" so that I can import mylib from within my application.
I am using a DVCS so when I want to fix a bug or add a feature to the library I make a branch of the repository and work within that branch. To test my application with the changes I am making to the library I edit the path in "mylib.pth" to point to the new development branch.
This gets a little tedious if I have a few parallel branches of my library going on at once. I have to keep editing the "mylib.pth" file before testing to ensure I am testing against the correct version of my library. Is there a way to use the current path (i.e. the development branch of the library that I am currently in) to set the library path when I invoke my application instead of using the "mylib.pth" in the global "site-packages" directory? | Setting Python path while developing library module | 0.033321 | 0 | 0 | 683 |
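(A rough sketch of the sys.path/site.addsitedir idea from the answer above; the directory layout is assumed, not taken from the question. This could live in sitecustomize.py or in a small launcher script.)

```python
# sitecustomize.py (or a launcher) -- point the interpreter at the library
# branch sitting next to the current working copy, instead of relying on a
# hand-edited mylib.pth in the global site-packages.
import os
import site

candidate = os.path.normpath(os.path.join(os.getcwd(), "mylib"))
if os.path.isdir(candidate):
    site.addsitedir(candidate)   # also processes any .pth files found there
```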
1,889,967 | 2009-12-11T18:25:00.000 | 2 | 1 | 1 | 0 | python | 1,890,000 | 6 | false | 0 | 0 | If you use setuptools, then you can say setup.py develop in your working tree, and it will do the .pth file manipulation for you. | 2 | 3 | 0 | I am developing a library and an application that uses the library in Python 2.6. I've placed a "mylib.pth" file in "site-packages" so that I can import mylib from within my application.
I am using a DVCS so when I want to fix a bug or add a feature to the library I make a branch of the repository and work within that branch. To test my application with the changes I am making to the library I edit the path in "mylib.pth" to point to the new development branch.
This gets a little tedious if I have a few parallel branches of my library going on at once. I have to keep editing the "mylib.pth" file before testing to ensure I am testing against the correct version of my library. Is there a way to use the current path (i.e. the development branch of the library that I am currently in) to set the library path when I invoke my application instead of using the "mylib.pth" in the global "site-packages" directory? | Setting Python path while developing library module | 0.066568 | 0 | 0 | 683 |
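(A minimal sketch of the setuptools suggestion above; the package name is assumed. Running the command from whichever branch you are testing makes that working tree the importable copy.)

```python
# setup.py -- minimal setuptools script for the library.
from setuptools import setup, find_packages

setup(
    name="mylib",          # assumed project name
    version="0.1",
    packages=find_packages(),
)

# From the branch you want to test:
#     python setup.py develop
# which links site-packages to this working tree instead of a fixed path.
```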
1,891,551 | 2009-12-11T23:32:00.000 | 17 | 1 | 0 | 0 | python,c,networking | 1,891,560 | 9 | false | 0 | 0 | Just use Python. You'll have access to the same low-level socket APIs as in C, without having to learn about indirection and memory management at the same time.
Later, if you find that Python is too slow for your purposes, you can rewrite some parts in C. But don't do it to begin with. | 7 | 6 | 0 | I am looking for a few pointers, I got pointed to this site.
My primary interest is network programming. I have done quite a bit of reading and experimenting and am familiar with the mechanisms of most protocols. Now I want to start writing code. I read introductory stuff on Python and grasped it well too. I had just started playing with the Python modules when I met somebody (with a tall reputation) at the local LUG meeting who told me that I could always learn Python very easily later, but C was the language I must know, especially given my interest in network programming. I did some research and thought maybe the guy is right. So I've been with a K&R for 4 weeks now. It didn't intimidate me, but I am progressing very, very slowly and maybe that's why I'm also slacking a bit. I am posting this because I'm at the stage where it's even worrying me now. I'm always thinking that in Python I could be building stuff right now. I know Python won't teach me low-level things like memory management etc., but my progress is painstakingly slow in C.
Question: Should I continue battling with C like I'm doing now and write some working code in it, or switch to Python where I'll be a bit more at ease? Will a high-level language spoil me too much to come back to C later? | Network programming: Python vs. C for a complete beginner | 1 | 0 | 0 | 8,819 |
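(To illustrate the point that Python exposes the same low-level socket calls as C, here is a minimal TCP echo server; the port number is arbitrary.)

```python
import socket

# socket/bind/listen/accept/recv/send map one-to-one onto the C API.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9000))
server.listen(5)

conn, addr = server.accept()
data = conn.recv(1024)
if data:
    conn.send(data)   # echo the bytes straight back
conn.close()
server.close()
```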
1,891,551 | 2009-12-11T23:32:00.000 | 5 | 1 | 0 | 0 | python,c,networking | 1,891,571 | 9 | false | 0 | 0 | Depending on what level(s) of the networking stack you want to work at, C may be indispensable, useful, or hardly relevant. But if trying to tackle C first is wearing down your motivation, by all means go back to Python and get some success and therefore incentive -- you can come back to C later. Learning an easier language first, a harder one later, is a perfectly natural progression! MIT, for example, uses Python for some "programming 101" courses -- and yet, most definitely, that doesn't turn students off harder languages such as C (or even C++, which is harder yet!) in later courses. | 7 | 6 | 0 | I am looking for a few pointers, I got pointed to this site.
My primary interest is network programming. I have done quite a bit of reading and experimenting and am familiar with the mechanisms of most protocols. Now I want to start writing code. I read introductory stuff on Python and grasped it well too. I had just started playing with the Python modules when I met somebody (with a tall reputation) at the local LUG meeting who told me that I could always learn Python very easily later, but C was the language I must know, especially given my interest in network programming. I did some research and thought maybe the guy is right. So I've been with a K&R for 4 weeks now. It didn't intimidate me, but I am progressing very, very slowly and maybe that's why I'm also slacking a bit. I am posting this because I'm at the stage where it's even worrying me now. I'm always thinking that in Python I could be building stuff right now. I know Python won't teach me low-level things like memory management etc., but my progress is painstakingly slow in C.
Question: Should I continue battling with C like I'm doing now and write some working code in it, or switch to Python where I'll be a bit more at ease? Will a high-level language spoil me too much to come back to C later? | Network programming: Python vs. C for a complete beginner | 0.110656 | 0 | 0 | 8,819 |
1,891,551 | 2009-12-11T23:32:00.000 | 5 | 1 | 0 | 0 | python,c,networking | 1,891,586 | 9 | false | 0 | 0 | Twenty years ago, even ten, you couldn't live without it.
Now many do.
It's possible (probable, actually) that more than half the programmers in the world don't know C. It's completely unnecessary for Web work and for most app work. I'm being gracious with this--if you really were to include web, hobby, overseas consultants and the like, the percent who have used C is probably pretty low at this point.
Embedded often uses C, but I've worked on 2 embedded platforms (a waveform analyzer and cable box) where I've done nothing but Java work.
Honestly a basic understanding of C is nice for writing drivers and understanding pointers, but these days you can easily get through an entire career without ever needing C. I personally would completely skip C++, although it's used quite a bit, I don't see any big advantages to learning it now. | 7 | 6 | 0 | I am looking for a few pointers, I got pointed to this site.
My primary interest is network programming. I have done quite a bit of reading and experimenting and am familiar with the mechanisms of most protocols. Now I want to start writing code. I read introductory stuff on Python and grasped it well too. I had just started playing with the Python modules when I met somebody (with a tall reputation) at the local LUG meeting who told me that I could always learn Python very easily later, but C was the language I must know, especially given my interest in network programming. I did some research and thought maybe the guy is right. So I've been with a K&R for 4 weeks now. It didn't intimidate me, but I am progressing very, very slowly and maybe that's why I'm also slacking a bit. I am posting this because I'm at the stage where it's even worrying me now. I'm always thinking that in Python I could be building stuff right now. I know Python won't teach me low-level things like memory management etc., but my progress is painstakingly slow in C.
Question: Should I continue battling with C like I'm doing now and write some working code in it, or switch to Python where I'll be a bit more at ease? Will a high-level language spoil me too much to come back to C later? | Network programming: Python vs. C for a complete beginner | 0.110656 | 0 | 0 | 8,819 |
1,891,551 | 2009-12-11T23:32:00.000 | 1 | 1 | 0 | 0 | python,c,networking | 1,891,569 | 9 | false | 0 | 0 | As a python programmer, I would give you the opposite advice. Learn python first. At least until you learn the limitations and possibilities it has compared to what you can do in C. Then use C for those far out problems you can't fix in Python. :) | 7 | 6 | 0 | I am looking for a few pointers, I got pointed to this site.
My primary interest is network programming. I have done quite a bit of reading and experimenting and am familiar with the mechanisms of most protocols. Now I want to start writing code. I read introductory stuff on Python and grasped it well too. I had just started playing with the Python modules when I met somebody (with a tall reputation) at the local LUG meeting who told me that I could always learn Python very easily later, but C was the language I must know, especially given my interest in network programming. I did some research and thought maybe the guy is right. So I've been with a K&R for 4 weeks now. It didn't intimidate me, but I am progressing very, very slowly and maybe that's why I'm also slacking a bit. I am posting this because I'm at the stage where it's even worrying me now. I'm always thinking that in Python I could be building stuff right now. I know Python won't teach me low-level things like memory management etc., but my progress is painstakingly slow in C.
Question: Should I continue battling with C like I'm doing now and write some working code in it, or switch to Python where I'll be a bit more at ease? Will a high-level language spoil me too much to come back to C later? | Network programming: Python vs. C for a complete beginner | 0.022219 | 0 | 0 | 8,819 |
1,891,551 | 2009-12-11T23:32:00.000 | 1 | 1 | 0 | 0 | python,c,networking | 1,891,955 | 9 | false | 0 | 0 | I would recommend starting with Python, unless you absolutely need the speed. It's often said that programming languages are just tools in your toolbox, and certain ones are going to be able to accomplish a given task better than others. If you don't need the speed, Python is going to accomplish the task you're looking to accomplish with less code and will be easier to learn.
I am entirely self-taught and went from Apple II BASIC to assembly language to scripting languages (Perl, PHP, Ruby) and now am using mostly C. C is a relatively small language, but I believe that had I started out with C, I probably would've lost my motivation. Start out with Python - you'll learn the gist of programming, then if you have the need or the want to learn C later, it will be easier to pick up. | 7 | 6 | 0 | I am looking for a few pointers, I got pointed to this site.
My primary interest is network programming. I have done quite a bit of reading and experimenting and am familiar with the mechanisms of most protocols. Now I want to start writing code. I read introductory stuff on Python and grasped it well too. I had just started playing with the Python modules when I met somebody (with a tall reputation) at the local LUG meeting who told me that I could always learn Python very easily later, but C was the language I must know, especially given my interest in network programming. I did some research and thought maybe the guy is right. So I've been with a K&R for 4 weeks now. It didn't intimidate me, but I am progressing very, very slowly and maybe that's why I'm also slacking a bit. I am posting this because I'm at the stage where it's even worrying me now. I'm always thinking that in Python I could be building stuff right now. I know Python won't teach me low-level things like memory management etc., but my progress is painstakingly slow in C.
Question: Should I continue battling with C like I'm doing now and write some working code in it, or switch to Python where I'll be a bit more at ease? Will a high-level language spoil me too much to come back to C later? | Network programming: Python vs. C for a complete beginner | 0.022219 | 0 | 0 | 8,819 |
1,891,551 | 2009-12-11T23:32:00.000 | 2 | 1 | 0 | 0 | python,c,networking | 1,891,967 | 9 | false | 0 | 0 | I would recommend using Python. Because it is a higher level language than C, you can concentrate more on the "what" rather than the "how". This means that you can avoid the level of detail required by C in order to achieve what you need to get done right now.
This isn't to say that a low level of detail is never required. It certainly is, but at this time I'd recommend you ignore it and pick it up in the future, should you need to. | 7 | 6 | 0 | I am looking for a few pointers, I got pointed to this site.
My primary interest is network programming. I have done quite a bit of reading and experimenting and am familiar with the mechanisms of most protocols. Now I want to start writing code. I read introductory stuff on Python and grasped it well too. I had just started playing with the Python modules when I met somebody (with a tall reputation) at the local LUG meeting who told me that I could always learn Python very easily later, but C was the language I must know, especially given my interest in network programming. I did some research and thought maybe the guy is right. So I've been with a K&R for 4 weeks now. It didn't intimidate me, but I am progressing very, very slowly and maybe that's why I'm also slacking a bit. I am posting this because I'm at the stage where it's even worrying me now. I'm always thinking that in Python I could be building stuff right now. I know Python won't teach me low-level things like memory management etc., but my progress is painstakingly slow in C.
Question: Should I continue battling with C like I'm doing now and write some working code in it, or switch to Python where I'll be a bit more at ease? Will a high-level language spoil me too much to come back to C later? | Network programming: Python vs. C for a complete beginner | 0.044415 | 0 | 0 | 8,819 |
1,891,551 | 2009-12-11T23:32:00.000 | 0 | 1 | 0 | 0 | python,c,networking | 14,009,100 | 9 | false | 0 | 0 | Learn Lisp. Its performance is almost that of C, it's easy to learn, and you can do more in Lisp than you can in Python. You can also do natural language programming to solve problems. Go Lisp. | 7 | 6 | 0 | I am looking for a few pointers, I got pointed to this site.
My primary interest is network programming. I have done quite a bit of reading and experimenting and am familiar with the mechanisms of most protocols. Now I want to start writing code. I read introductory stuff on Python and grasped it well too. I had just started playing with the Python modules when I met somebody (with a tall reputation) at the local LUG meeting who told me that I could always learn Python very easily later, but C was the language I must know, especially given my interest in network programming. I did some research and thought maybe the guy is right. So I've been with a K&R for 4 weeks now. It didn't intimidate me, but I am progressing very, very slowly and maybe that's why I'm also slacking a bit. I am posting this because I'm at the stage where it's even worrying me now. I'm always thinking that in Python I could be building stuff right now. I know Python won't teach me low-level things like memory management etc., but my progress is painstakingly slow in C.
Question: Should I continue battling with C like I'm doing now and write some working code in it, or switch to Python where I'll be a bit more at ease? Will a high-level language spoil me too much to come back to C later? | Network programming: Python vs. C for a complete beginner | 0 | 0 | 0 | 8,819 |
1,892,324 | 2009-12-12T04:46:00.000 | 9 | 0 | 1 | 0 | python,functional-programming | 1,899,731 | 8 | false | 0 | 0 | The question, which seems to be mostly ignored here:
does programming Python functionally really help with concurrency?
No. The value FP brings to concurrency is in eliminating state in computation, which is ultimately responsible for the hard-to-grasp nastiness of unintended errors in concurrent computation. But it depends on the concurrent programming idioms not themselves being stateful, something that doesn't apply to Twisted. If there are concurrency idioms for Python that leverage stateless programming, I don't know of them. | 4 | 37 | 0 | At work we used to program our Python in a pretty standard OO way. Lately, a couple guys got on the functional bandwagon. And their code now contains lots more lambdas, maps and reduces. I understand that functional languages are good for concurrency but does programming Python functionally really help with concurrency? I am just trying to understand what I get if I start using more of Python's functional features. | Why program functionally in Python? | 1 | 0 | 0 | 3,869 |
1,892,324 | 2009-12-12T04:46:00.000 | 19 | 0 | 1 | 0 | python,functional-programming | 1,892,345 | 8 | false | 0 | 0 | I program in Python every day, and I have to say that too much 'bandwagoning' toward OO or functional could lead to missing elegant solutions. I believe that both paradigms have their advantages for certain problems - and I think that's when you know what approach to use. Use a functional approach when it leaves you with a clean, readable, and efficient solution. The same goes for OO.
And that's one of the reasons I love Python - the fact that it is multi-paradigm and lets the developer choose how to solve his/her problem. | 4 | 37 | 0 | At work we used to program our Python in a pretty standard OO way. Lately, a couple guys got on the functional bandwagon. And their code now contains lots more lambdas, maps and reduces. I understand that functional languages are good for concurrency but does programming Python functionally really help with concurrency? I am just trying to understand what I get if I start using more of Python's functional features. | Why program functionally in Python? | 1 | 0 | 0 | 3,869 |
1,892,324 | 2009-12-12T04:46:00.000 | 2 | 0 | 1 | 0 | python,functional-programming | 1,892,394 | 8 | false | 0 | 0 | The standard functions filter(), map() and reduce() are used for various operations on a list, and all three functions expect two arguments: a function and a list.
We could define a separate function and use it as an argument to filter() etc., and it's probably a good idea if that function is used several times, or if the function is too complex to be written in a single line. However, if it's needed only once and it's quite simple, it's more convenient to use a lambda construct to generate a (temporary) anonymous function and pass it to filter().
This helps in readability and compact code.
Using these functions also turns out to be efficient, because the looping over the elements of the list is done in C, which is a little bit faster than looping in Python.
An object-oriented approach is really only needed when state has to be maintained, apart from abstraction, grouping, etc. If the requirement is pretty simple, I would stick with functional rather than object-oriented programming. | 4 | 37 | 0 | At work we used to program our Python in a pretty standard OO way. Lately, a couple guys got on the functional bandwagon. And their code now contains lots more lambdas, maps and reduces. I understand that functional languages are good for concurrency but does programming Python functionally really help with concurrency? I am just trying to understand what I get if I start using more of Python's functional features. | Why program functionally in Python? | 0.049958 | 0 | 0 | 3,869 |
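(A small illustration of the lambda-with-filter/map pattern described in the answer above, shown next to the equivalent list comprehensions; the data is made up.)

```python
nums = [1, 2, 3, 4, 5, 6]

# Throwaway lambdas passed to filter() and map(), as described above.
evens = filter(lambda n: n % 2 == 0, nums)
squares = map(lambda n: n * n, nums)

# The same operations written as list comprehensions.
evens_lc = [n for n in nums if n % 2 == 0]       # [2, 4, 6]
squares_lc = [n * n for n in nums]               # [1, 4, 9, 16, 25, 36]
```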
1,892,324 | 2009-12-12T04:46:00.000 | 1 | 0 | 1 | 0 | python,functional-programming | 1,893,341 | 8 | false | 0 | 0 | Map and Filter have their place in OO programming. Right next to list comprehensions and generator functions.
Reduce less so. The algorithm for reduce can rapidly suck down more time than it deserves; with a tiny bit of thinking, a manually-written reduce-loop will be more efficient than a reduce which applies a poorly-thought-out looping function to a sequence.
Lambda never. Lambda is useless. One can make the argument that it actually does something, so it's not completely useless. First: Lambda is not syntactic "sugar"; it makes things bigger and uglier. Second: the one time in 10,000 lines of code that you think you need an "anonymous" function turns into two times in 20,000 lines of code, which removes the value of anonymity, making it into a maintenance liability.
However.
The functional style of no-object-state-change programming is still OO in nature. You just do more object creation and fewer object updates. Once you start using generator functions, much OO programming drifts in a functional direction.
Each state change appears to translate into a generator function that builds a new object in the new state from old object(s). It's an interesting world view because reasoning about the algorithm is much, much simpler.
But that's no call to use reduce or lambda. | 4 | 37 | 0 | At work we used to program our Python in a pretty standard OO way. Lately, a couple guys got on the functional bandwagon. And their code now contains lots more lambdas, maps and reduces. I understand that functional languages are good for concurrency but does programming Python functionally really help with concurrency? I am just trying to understand what I get if I start using more of Python's functional features. | Why program functionally in Python? | 0.024995 | 0 | 0 | 3,869 |
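(To make the reduce-versus-plain-loop point concrete, a toy sum of squares; purely illustrative. reduce is a builtin in Python 2 and lives in functools in Python 3.)

```python
# reduce with an ad-hoc lambda...
total = reduce(lambda acc, n: acc + n * n, [1, 2, 3, 4], 0)

# ...versus the explicit loop, which is at least as clear and easy to reason about.
total2 = 0
for n in [1, 2, 3, 4]:
    total2 += n * n

assert total == total2 == 30
```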
1,893,094 | 2009-12-12T11:20:00.000 | 2 | 0 | 1 | 0 | python,floating-point | 1,893,110 | 6 | false | 0 | 0 | Because of the way floating-point numbers are represented in a computer. It's not just a Python thing. | 3 | 1 | 0 | Why does 0.1 + 0.1 + 0.1 - 0.3 evaluate to
5.5511151231257827e-17 in Python? | Basic Python Numbers | 0.066568 | 0 | 0 | 779 |
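(One quick way to see the representation issue the answers describe is to print the values with extra digits; the exact last digits may vary by platform and Python version.)

```python
# The closest binary double to 0.1 is slightly above 0.1, and the closest
# double to 0.3 is slightly below 0.3, so the cancellation is not exact.
print("%.20f" % 0.1)                  # 0.10000000000000000555
print("%.20f" % 0.3)                  # 0.29999999999999998890
print(repr(0.1 + 0.1 + 0.1 - 0.3))    # 5.551115123125783e-17 (or similar)
```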
1,893,094 | 2009-12-12T11:20:00.000 | 0 | 0 | 1 | 0 | python,floating-point | 1,893,293 | 6 | false | 0 | 0 | As an example, consider representing 1/3 as a scientific number in base 10. With only a finite number of digits (say, 10), you'll wind up with a rounding error. Say 1/3 ≈ 0.3333333333e0. Then 1/3+1/3+1/3 (after first converting to decimal expansions) is represented as 0.9999999999e0, but 1 is 1.0e0. Similarly, 1/7 ≈ 0.1428571429e0, and 1/7+1/7 would be 0.2857142858e0, but the representation for 2/7 would be 0.2857142857e0. In both cases, the sum is off by 1e-10. | 3 | 1 | 0 | Why does 0.1 + 0.1 + 0.1 - 0.3 evaluate to
5.5511151231257827e-17 in Python? | Basic Python Numbers | 0 | 0 | 0 | 779 |
1,893,094 | 2009-12-12T11:20:00.000 | 3 | 0 | 1 | 0 | python,floating-point | 1,893,442 | 6 | false | 0 | 0 | You might be interested in knowing that Python 3 has improved the situation by changing how repr works. It will now give you the shortest string representation that will be converted back to the original float:
Python 3.1.1+ (r311:74480, Oct 11 2009, 20:19:13)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> repr(0.1)
'0.1'
Older versions behave like this:
Python 2.6.4 (r264:75706, Oct 28 2009, 22:19:17)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> repr(0.1)
'0.10000000000000001'
It is only the output of repr (called implicitly when you enter a value in the interactive interpreter) that has changed. The underlying values are still IEEE-754 floating-point numbers, and they still have the usual limitations:
Python 3.1.1+ (r311:74480, Oct 11 2009, 20:19:13)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 0.1
0.1
>>> 0.2
0.2
>>> 0.3
0.3
>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1 + 0.2 - 0.3
5.551115123125783e-17 | 3 | 1 | 0 | Why does 0.1 + 0.1 + 0.1 - 0.3 evaluate to
5.5511151231257827e-17 in Python? | Basic Python Numbers | 0.099668 | 0 | 0 | 779 |
1,893,213 | 2009-12-12T12:06:00.000 | 4 | 0 | 0 | 0 | python,cross-platform,notifications | 1,893,404 | 3 | true | 0 | 1 | Does Python on Windows and Mac also ship with Tk wrappers? If so, you might be able to roll your own notification box. I do not think they have a dead-simple notification API (i.e. you pass it a string and a cute box pops up for 5 seconds); however, at least you will only have one codebase to maintain.
I am thinking about other cross-platform apps such as Skype, Dropbox, and Thunderbird. Skype and Thunderbird seem to have rolled their own, and Dropbox went the platform-specific route. | 1 | 9 | 0 | I'm making a python script that should run in the background and notify a user of changes, and I'd quite like it to work cross-platform. Main problem is, I don't have access to a mac at all, so coding specifically for it could be very difficult. wxPython seems like massive overkill for simple popups, so is there anything with a lighter footprint? | Cross-Platform Python Notification Library | 1.2 | 0 | 0 | 2,853 |
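(A rough sketch of the "roll your own Tk notification" idea; the Python 2 module name is used to match the question's era, it is tkinter on Python 3, and the styling is entirely made up.)

```python
import Tkinter as tk   # 'tkinter' on Python 3

def notify(message, timeout_ms=5000):
    """Show a small borderless window that closes itself after a delay."""
    root = tk.Tk()
    root.overrideredirect(True)            # no title bar or borders
    tk.Label(root, text=message, padx=20, pady=10).pack()
    root.after(timeout_ms, root.destroy)   # auto-close
    root.mainloop()

notify("Something changed!")
```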
1,895,089 | 2009-12-12T23:49:00.000 | 0 | 0 | 0 | 0 | python,mysql,web-services | 1,895,731 | 2 | false | 0 | 0 | Note: Persistent connections can have a very negative effect on your system performance. If you have a large number of web server processes all holding persistent connections to your DB server you may exhaust the DB server's limit on connections. This is one of those areas where you need to test it under heavy simulated loads to make sure you won't hit the wall at 100MPH. | 1 | 2 | 0 | PHP provides mysql_connect() and mysql_pconnect() which allow creating both temporary and persistent database connections.
Is there similar functionality in Python? The environment in which this will be used is a lighttpd server with FastCGI.
Thank you! | Persistent MySQL connections in Python | 0 | 1 | 0 | 3,649 |
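(A hedged sketch of one way to keep a per-process connection alive with MySQLdb; the connection details are placeholders, and the warning above about exhausting the server's connection limit still applies.)

```python
import MySQLdb

_conn = None

def get_connection():
    """Return a cached per-process connection, reconnecting if it has gone away."""
    global _conn
    if _conn is not None:
        try:
            _conn.ping()          # cheap liveness check
            return _conn
        except MySQLdb.OperationalError:
            _conn = None          # the server dropped it; reconnect below
    _conn = MySQLdb.connect(host="localhost", user="app",
                            passwd="secret", db="appdb")
    return _conn
```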
1,896,722 | 2009-12-13T14:46:00.000 | 2 | 0 | 1 | 0 | python,floating-point | 1,896,737 | 4 | false | 0 | 0 | A real number can have an infinite number of decimal places, but the physical representation on the computer is finite and depends on a) the language, b) the construct (e.g. float, double, etc.), c) the compiler implementation, and d) the hardware.
Now, given that you have a representation of a floating point number (i.e. a real) within a particular language, is your question how to round it off or truncate it to a specific number of digits?
There is no need to do this within the return call, since you can always truncate/round afterwards. In fact, you would usually not want to truncate until actually printing, to preserve more precision. An exception might be if you wanted to ensure that results were consistent across different algorithms/hardware, i.e. say you had some financial trading software that needed to pass unit tests across different languages/platforms etc. | 2 | 0 | 0 | So I know how to print a floating point number with a certain number of decimal places.
My question is how to return it with a specified number of decimal places?
Thanks. | How to return a float point number with a defined number of decimal places? | 0.099668 | 0 | 0 | 3,481 |
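(A short illustration of the advice above: keep full precision internally and round or format only for display; the values are arbitrary.)

```python
x = 3.14159265

print(round(x, 2))    # 3.14 -- still a binary float underneath
print("%.2f" % x)     # formats to two places only at display time
```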
1,896,722 | 2009-12-13T14:46:00.000 | 4 | 0 | 1 | 0 | python,floating-point | 1,896,729 | 4 | false | 0 | 0 | In order to get two decimal places, multiply the number by 100, floor it, then divide by 100.
And note that the number you return will not really have only two decimal places, because the result of dividing by 100 cannot be represented exactly in IEEE-754 floating-point arithmetic most of the time. It will only be the closest representable approximation to a number with only two decimal places. | 2 | 0 | 0 | So I know how to print a floating point number with a certain number of decimal places.
My question is how to return it with a specified number of decimal places?
Thanks. | How to return a float point number with a defined number of decimal places? | 0.197375 | 0 | 0 | 3,481 |
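(The multiply/floor/divide recipe from the answer, written out, with the caveat above that the result is only the nearest representable value.)

```python
import math

def truncate_two_places(x):
    # e.g. 3.14159 -> 3.14 (approximately, within float precision)
    return math.floor(x * 100) / 100.0

print(truncate_two_places(3.14159))   # 3.14
```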
1,897,748 | 2009-12-13T21:07:00.000 | 1 | 1 | 0 | 1 | python,http,unix,asynchronous | 1,897,759 | 3 | false | 0 | 0 | Django is great for writing web applications, and the subprocess module (subprocess.Popen and .communicate()) is great for executing shell scripts. You can give it stdin, stdout and stderr streams for communication if you want. | 1 | 2 | 0 | We have a collection of Unix scripts (and/or Python modules) that each perform a long-running task. I would like to provide a web interface for them that does the following:
Asks for relevant data to pass into scripts.
Allows for starting/stopping/killing them.
Allows for monitoring the progress and/or other information provided by the scripts.
Possibly some kind of logging (although the scripts already do logging).
I do know how to write a server that does this (e.g. by using Python's built-in HTTP server/JSON), but doing this properly is non-trivial and I do not want to reinvent the wheel.
Are there any existing solutions that allow for maintaining asynchronous server-side tasks? | Executing server-side Unix scripts asynchronously | 0.066568 | 0 | 0 | 257 |
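(A minimal sketch of the subprocess.Popen/communicate() pattern mentioned in the answer; the script path and arguments are placeholders.)

```python
import subprocess

proc = subprocess.Popen(
    ["/usr/local/bin/long_task.sh", "--input", "some-value"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)

# communicate() waits for the script and collects its output; for live
# progress you would read proc.stdout incrementally and use proc.poll()
# for status and proc.terminate() for stopping/killing.
out, err = proc.communicate()
print(proc.returncode)
```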
1,897,779 | 2009-12-13T21:17:00.000 | 0 | 0 | 0 | 0 | python,algorithm,point | 1,897,910 | 5 | false | 0 | 0 | Your R-tree approach is the best approach I know of (that's the approach I would choose over quadtrees, B+ trees, or BSP trees, as R-trees seem convenient to build in your case). Caveat: I'm no expert, even though I remember a few things from my senior-year university algorithms class! | 2 | 11 | 1 | I have a large collection of rectangles, all of the same size. I am generating random points that should not fall in these rectangles, so what I wish to do is test if the generated point lies in one of the rectangles, and if it does, generate a new point.
Using R-trees seems to work, but they are really meant for rectangles and not points. I could use a modified version of an R-tree algorithm which works with points too, but I'd rather not reinvent the wheel if there is already some better solution. I'm not very familiar with data structures, so maybe there already exists some structure that works for my problem?
In summary, basically what I'm asking is if anyone knows of a good algorithm, that works in Python, that can be used to check if a point lies in any rectangle in a given set of rectangles.
edit: This is in 2D and the rectangles are not rotated. | Test if point is in some rectangle | 0 | 0 | 0 | 13,398 |
1,897,779 | 2009-12-13T21:17:00.000 | 3 | 0 | 0 | 0 | python,algorithm,point | 1,897,962 | 5 | false | 0 | 0 | For rectangles that are aligned with the axes, you only need two points (four numbers) to identify the rectangle - conventionally, the bottom-left and top-right corners. You can establish whether a given point (Xtest, Ytest) overlaps a rectangle (XBL, YBL, XTR, YTR) by testing both:
Xtest >= XBL && Xtest <= XTR
Ytest >= YBL && Ytest <= YTR
Clearly, for a large enough set of points to test, this could be fairly time consuming. The question, then, is how to optimize the testing.
Clearly, one optimization is to establish the minimum and maximum X and Y values for the box surrounding all the rectangles (the bounding box): a swift test on this shows whether there is any need to look further.
Xtest >= Xmin && Xtest <= Xmax
Ytest >= Ymin && Ytest <= Ymax
Depending on how much of the total surface area is covered with rectangles, you might be able to find non-overlapping sub-areas that contain rectangles, and you could then avoid searching those sub-areas that cannot contain a rectangle overlapping the point, again saving comparisons during the search at the cost of pre-computation of suitable data structures. If the set of rectangles is sparse enough, there may be no overlapping, in which case this degenerates into the brute-force search. Equally, the pre-computation gains you nothing if the set of rectangles is so dense that there are no sub-ranges in the bounding box that can be split up without breaking rectangles.
However, you could also arbitrarily break up the bounding area into, say, quarters (half in each direction). You would then use a list of boxes which would include more boxes than in the original set (two or four boxes for each box that overlapped one of the arbitrary boundaries). The advantage of this is that you could then eliminate three of the four quarters from the search, reducing the amount of searching to be done in total - at the expense of auxiliary storage.
So, there are space-time trade-offs, as ever. And pre-computation versus search trade-offs. If you are unlucky, the pre-computation achieves nothing (for example, there are two boxes only, and they don't overlap on either axis). On the other hand, it could achieve considerable search-time benefit. | 2 | 11 | 1 | I have a large collection of rectangles, all of the same size. I am generating random points that should not fall in these rectangles, so what I wish to do is test if the generated point lies in one of the rectangles, and if it does, generate a new point.
Using R-trees seems to work, but they are really meant for rectangles and not points. I could use a modified version of an R-tree algorithm which works with points too, but I'd rather not reinvent the wheel if there is already some better solution. I'm not very familiar with data structures, so maybe there already exists some structure that works for my problem?
In summary, basically what I'm asking is if anyone knows of a good algorithm, that works in Python, that can be used to check if a point lies in any rectangle in a given set of rectangles.
edit: This is in 2D and the rectangles are not rotated. | Test if point is in some rectangle | 0.119427 | 0 | 0 | 13,398 |
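(The tests above as a small Python helper, including the bounding-box quick rejection; rectangles are assumed to be (xbl, ybl, xtr, ytr) tuples.)

```python
def point_in_rect(x, y, rect):
    xbl, ybl, xtr, ytr = rect
    return xbl <= x <= xtr and ybl <= y <= ytr

def point_in_any(x, y, rects, bbox):
    # bbox is the precomputed bounding box around all rectangles
    if not point_in_rect(x, y, bbox):
        return False                      # quick rejection
    return any(point_in_rect(x, y, r) for r in rects)
```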
1,899,503 | 2009-12-14T07:42:00.000 | 0 | 0 | 1 | 0 | python,design-patterns | 8,545,397 | 4 | false | 0 | 0 | If you are looking for a design pattern, then I suggest the Strategy pattern, because implementing this pattern lets you dynamically interchange the components of the robot. | 3 | 3 | 0 | I have some algorithms to do that are very similar in many aspects but are all different.
I'll try to give an example of what I mean.
Let's assume I have a Robot class. This class should be the "base" of all classes. It provides basic mechanisms to make the robot work in its environment. It might or might not have to work by itself (by this I mean it can either be an abstract class that is useless by itself or, if possible, have basic mechanisms so it is ready to work).
All robots have hands. But some robots will have human-like hands, others will have blades, others will have razors. I could make a base class called RobotBase and then create RobotHumanHand, RobotBladeHand and RobotRazorHand. But they can also have different heads, different eyes, different legs, different arms, etc. Is there any easy way I can address this? I'd like to put this together in a way that is a bit like LEGO, so I could just define a robot and "add" the pieces I want. Maybe through interfaces? I am not even aware whether those exist in Python (that's the language I'll be using).
Any comments / suggestions are really appreciated! Thanks! | Design pattern help | 0 | 0 | 0 | 297 |
1,899,503 | 2009-12-14T07:42:00.000 | 1 | 0 | 1 | 0 | python,design-patterns | 1,899,521 | 4 | false | 0 | 0 | I'm not a Python guy, but a quick look indicates they support multiple inheritance, which can be used like Java interfaces (Python does not seem to support interfaces). So you can have essentially superclasses for RobotHand, then RobotHumanHand, RobotBladeHand, etc. Same with eyes, feet, what have you. This is a reasonable way to do what you want to do. | 3 | 3 | 0 | I have some algorithms to do that are very similar in many aspects but are all different.
I'll try to give an example of what I mean.
Let's assume I have a Robot class. This class should be the "base" of all classes. It provides basic mechanisms to make the robot work in its environment. It might or might not have to work by itself (by this I mean it can either be an abstract class that is useless by itself or, if possible, have basic mechanisms so it is ready to work).
All robots have hands. But some robots will have human-like hands, others will have blades, others will have razors. I could make a base class called RobotBase and then create RobotHumanHand, RobotBladeHand and RobotRazorHand. But they can also have different heads, different eyes, different legs, different arms, etc. Is there any easy way I can address this? I'd like to put this together in a way that is a bit like LEGO, so I could just define a robot and "add" the pieces I want. Maybe through interfaces? I am not even aware whether those exist in Python (that's the language I'll be using).
Any comments / suggestions are really appreciated! Thanks! | Design pattern help | 0.049958 | 0 | 0 | 297 |
1,899,503 | 2009-12-14T07:42:00.000 | 4 | 0 | 1 | 0 | python,design-patterns | 1,899,527 | 4 | false | 0 | 0 | I think your robot should have a list of ports, i.e. a number of injected components each robot may have. Your Robot class will be a container of RobotParts. You can have specific parts implement specific interfaces. RobotHand extends RobotPart, and the Robot class has a field that holds a list of RobotHand implementations (you can limit it to 2 hands, but in the general case there could be more). You can do the same with RobotHead, which will inherit from RobotPart, and there will also be a field in the Robot class holding an implementation of RobotHead. In turn, RobotHead may hold a list of RobotEye implementations, and so on. Then your specific Robot implementations may inherit their behavior from the base class or take advantage of configuration, e.g. by using RobotBladeHands if available. | 3 | 3 | 0 | I have some algorithms to do that are very similar in many aspects but are all different.
I'll try to give an example of what I mean.
Let's assume I have a Robot class. This class should be the "base" of all classes. It provides basic mechanisms to make the robot work in its environment. It might or might not have to work by itself (by this I mean it can either be an abstract class that is useless by itself or, if possible, have basic mechanisms so it is ready to work).
All robots have hands. But some robots will have human-like hands, others will have blades, others will have razors. I could make a base class called RobotBase and then create RobotHumanHand, RobotBladeHand and RobotRazorHand. But they can also have different heads, different eyes, different legs, different arms, etc. Is there any easy way I can address this? I'd like to put this together in a way that is a bit like LEGO, so I could just define a robot and "add" the pieces I want. Maybe through interfaces? I am not even aware whether those exist in Python (that's the language I'll be using).
Any comments / suggestions are really appreciated! Thanks! | Design pattern help | 0.197375 | 0 | 0 | 297 |
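(A bare-bones sketch of the composition idea in the answer above; all class and attribute names are illustrative only.)

```python
class RobotPart(object):
    pass

class RobotHand(RobotPart):
    def grab(self):
        raise NotImplementedError

class BladeHand(RobotHand):
    def grab(self):
        return "slicing instead of grabbing"

class Robot(object):
    """A container of parts, assembled LEGO-style."""
    def __init__(self, hands=None, head=None):
        self.hands = hands or []
        self.head = head

robot = Robot(hands=[BladeHand(), BladeHand()])
```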
1,901,354 | 2009-12-14T14:55:00.000 | 2 | 0 | 0 | 1 | python,emacs | 1,901,609 | 1 | false | 0 | 0 | It sounds like you need print; use print.
Emacs is launching a Python process and getting text from its standard output, not a Python value. | 1 | 1 | 0 | After selecting 1 + 1 and issuing python-send-region, my subprocess buffer shows no results. I have to evaluate print 1 + 1, instead.
How can I force the python-send-* commands to print the value of the respective statements rather than echoing their stdout? | Emacs Python-Mode: Sending statements to a subprocess does not lead to REPL-style evaluation | 0.379949 | 0 | 0 | 289 |
1,902,338 | 2009-12-14T17:38:00.000 | 4 | 0 | 1 | 0 | python,django,multithreading,scheduling | 1,902,471 | 5 | true | 0 | 0 | Django is a server application, which only reacts to external events.
You should use a scheduler like cron to create events that call your Django application, either calling a management subcommand or doing an HTTP request on some special page. | 1 | 1 | 0 | In Python, how do I implement a thread which runs in the background (maybe when the module loads) and calls a function every minute, Monday to Friday, 10 AM to 3 PM? For example the function should be called at:
10:01 AM
10:02 AM
10:03 AM
.
.
2:59 PM
Any pointers?
Environment: Django
Thanks | How to implement time event scheduler in python? | 1.2 | 0 | 0 | 3,634 |
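(A hedged sketch of the cron-plus-management-command route suggested above; the app name, command name and crontab line are assumptions.)

```python
# myapp/management/commands/minute_task.py -- hypothetical command
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Work to run once a minute during trading hours"

    def handle(self, *args, **options):
        # call your real once-a-minute function here
        print("tick")

# crontab entry (every minute, 10:00-14:59, Monday-Friday):
# * 10-14 * * 1-5  /path/to/python /path/to/manage.py minute_task
```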
1,903,065 | 2009-12-14T19:51:00.000 | 3 | 0 | 0 | 1 | python,google-app-engine,web2py | 1,903,297 | 7 | false | 1 | 0 | The App Engine uses BigTable as its datastore backend. Don't try to write a traditional relational-database-driven application. BigTable is much better suited for use as a highly scalable key-value store. Avoid joins if at all possible. | 6 | 9 | 0 | I am thinking about using Google App Engine. It is going to be a huge website. In that case, what is your advice on using Google App Engine? I heard GAE has restrictions, like a 1MB limit on stored images or files (they are going to change this, from what I read in the GAE roadmap) and queries limited to 1000 results, and I am also going to use web2py with GAE. So I would like to know your comments.
Thanks | Is Google App Engine right for me? | 0.085505 | 0 | 0 | 1,390 |
1,903,065 | 2009-12-14T19:51:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine,web2py | 1,904,574 | 7 | false | 1 | 0 | I wouldn't worry about any of this. After having played with Google App Engine for a while now, I've found that it scales quite well for large data sets. If your data elements are large (i.e. photos), then you'll need to integrate with another service to handle them, but that's probably going to be true no matter what with data of that size. Also, I've found BigTable relatively easy to work with, having come from a background entirely in relational databases. Finally, Django is a somewhat hidden, but awesome, "feature" of Google App Engine. If you've never used it, it's a really nice, elegant web framework that makes a lot of common tasks trivial (forms come to mind here). | 2 | 9 | 0 | I am thinking about using Google App Engine. It is going to be a huge website. In that case, what is your advice on using Google App Engine? I heard GAE has restrictions, like a 1MB limit on stored images or files (they are going to change this, from what I read in the GAE roadmap) and queries limited to 1000 results, and I am also going to use web2py with GAE. So I would like to know your comments.
Thanks | Is Google App Engine right for me? | 0.057081 | 0 | 0 | 1,390 |
1,903,065 | 2009-12-14T19:51:00.000 | 5 | 0 | 0 | 1 | python,google-app-engine,web2py | 1,905,263 | 7 | false | 1 | 0 | using web2py on Google App Engine is a great strategy. It lets you get up and running fast, and if you do outgrow the restrictions of GAE then you can move your web2py application elsewhere.
However, keeping this portability means you should stay away from the advanced parts of GAE (Task Queues, Transactions, ListProperty, etc). | 2 | 9 | 0 | I am thinking about using Google App Engine. It is going to be a huge website. In that case, what is your advice on using Google App Engine? I heard GAE has restrictions, like a 1MB limit on stored images or files (they are going to change this, from what I read in the GAE roadmap) and queries limited to 1000 results, and I am also going to use web2py with GAE. So I would like to know your comments.
Thanks | Is Google App Engine right for me? | 0.141893 | 0 | 0 | 1,390 |
1,903,065 | 2009-12-14T19:51:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,web2py | 1,994,758 | 7 | false | 1 | 0 | What about Google Wave? It's being built on appengine, and once live, real-time translatable chat reaches the corporate sector... I could see it hitting top 1000th... But then again, that's an internal project that gets to do special stuff other appengine apps can't.... Like hanging threads; I think... And whatever else Wave has under the hood... | 6 | 9 | 0 | I am thinking about using Google App Engine.It is going to be a huge website. In that case, what is your piece of advice using Google App Engine. I heard GAE has restrictions like we cannot store images or files more than 1MB limit(they are going to change this from what I read in the GAE roadmap),query is limited to 1000 results, and I am also going to se web2py with GAE. So I would like to know your comments.
Thanks | Is Google App Engine right for me? | 0 | 0 | 0 | 1,390 |
1,903,065 | 2009-12-14T19:51:00.000 | 8 | 0 | 0 | 1 | python,google-app-engine,web2py | 1,904,610 | 7 | false | 1 | 0 | Having developed a smallish site with GAE, I have some thoughts
If you mean "huge" like "the next YouTube", then GAE might be a great fit, because of the previously mentioned scaling.
If you mean "huge" like "massively complex, with a whole slew of screens, models, and features", then GAE might not be a good fit. Things like unit testing are hard on GAE, and there's not a built-in structure for your app that you'd get with something like (famously) (Ruby on) Rails, or (Python powered) Turbogears.
i.e. there is no staging environment: just your development copy of the system and production. This may or may not be a bad thing, depending on your situation.
Additionally, it depends on the other Python modules you intend to pull in: some Python modules just don't run on GAE (because you can't talk to hardware, or because there are just too many files in the package).
Hope this helps | 6 | 9 | 0 | I am thinking about using Google App Engine. It is going to be a huge website. In that case, what is your advice on using Google App Engine? I heard GAE has restrictions, like a 1MB limit on stored images or files (they are going to change this, from what I read in the GAE roadmap) and queries limited to 1000 results, and I am also going to use web2py with GAE. So I would like to know your comments.
Thanks | Is Google App Engine right for me? | 1 | 0 | 0 | 1,390 |
1,903,065 | 2009-12-14T19:51:00.000 | -11 | 0 | 0 | 1 | python,google-app-engine,web2py | 1,903,114 | 7 | false | 1 | 0 | If you are planning on a 'huge' website, then don't use App Engine. Simple as that. The App Engine is not built to deliver the next top 1000th website.
Allow me to also ask: what do you mean by 'huge'? How many simultaneous users? Queries per second? DB load? | 7 | 9 | 0 | I am thinking about using Google App Engine. It is going to be a huge website. In that case, what is your advice on using Google App Engine? I heard GAE has restrictions, like a 1MB limit on stored images or files (they are going to change this, from what I read in the GAE roadmap) and queries limited to 1000 results, and I am also going to use web2py with GAE. So I would like to know your comments.
Thanks | Is Google App Engine right for me? | -1 | 0 | 0 | 1,390 |
1,903,653 | 2009-12-14T21:33:00.000 | 0 | 1 | 1 | 1 | python,easy-install,pkg-resources | 2,164,148 | 4 | false | 0 | 0 | Use "easy_install -m" to install all the platform-specific packages, so that there is no default version on sys.path. That way, version resolution takes place at runtime, and platform information will be taken into consideration. | 1 | 6 | 0 | We have a common python installation for all of our systems in order to ensure every system has the same python installation and to ease configuration issues. This installation is located on a shared drive. We also have multiple platforms that share this installation. We get around conflicting platform-specific files by setting the --exec-prefix configure option when compiling python.
My issue is that I now want to install an egg using easy_install (or otherwise) that is platform-dependent. easy_install puts the egg in the site-packages directory of the platform-independent part of the install. The name of the egg has the platform in it so there should be no conflict. But python will only load the first one it finds. (So, on Solaris it might try to load the Linux egg). Modifying the easy-install.pth file can change which one it finds, but that's pretty useless.
I can move the .egg files into a platform-depended packages directory and then use pkg_resources.require() to load them (or manually adjust the path). But it seems as though I shouldn't have to since the platform is in the name of the egg.
Is there any more generic way I can ensure that python will load the egg for the correct platform? | How can I deal with python eggs for multiple platforms in one location? | 0 | 0 | 0 | 1,197 |
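(A small sketch of the runtime-resolution idea: with each platform's egg installed via easy_install -m, nothing is activated by default, and the application picks a compatible egg at startup. The project name is assumed.)

```python
import pkg_resources

# Activates an installed egg whose platform tag matches the running
# interpreter; builds for other platforms are not selected.
pkg_resources.require("mylib")

import mylib
```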
1,903,980 | 2009-12-14T22:32:00.000 | 8 | 0 | 1 | 0 | python,naming-conventions,list-comprehension | 17,777,329 | 5 | false | 0 | 0 | "Comprehension" used to mean not only "understanding," but "inclusion" in logic. Oxford English Dictionary has the following: "I.4. Logic The sum of the attributes comprehended in a notion or concept; intension" as the fourth subdefinition under the first definition, "Inclusion, comprising." I wouldn't be surprised to learn that the usage passed into the history of mathematics through there. In a list or set comprehension, instead of giving the elements of the list or set explicitly, the programmer is describing what they comprehend (in the "include" sense) with an expression. | 2 | 23 | 0 | I know python is not the first language to have list comprehension.
I'm just interested in the history of the name.
I'm particularly interested in why it's called comprehension | why list comprehension is called so in python? | 1 | 0 | 0 | 2,861 |
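(For comparison, the mathematical set-builder notation the term comes from and its Python counterpart; the data is made up.)

```python
# Set-builder:  { x*x : x in S, x is even }
S = [1, 2, 3, 4, 5, 6]
squares_of_evens = [x * x for x in S if x % 2 == 0]   # [4, 16, 36]
```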
1,903,980 | 2009-12-14T22:32:00.000 | 1 | 0 | 1 | 0 | python,naming-conventions,list-comprehension | 2,005,502 | 5 | false | 0 | 0 | Because it's a very comprehensive way to describe a sequence (a set in math and other languages, and a list/sequence in Python). | 2 | 23 | 0 | I know python is not the first language to have list comprehension.
I'm just interested in the history of the name.
I'm particularly interested in why it's called comprehension | why list comprehension is called so in python? | 0.039979 | 0 | 0 | 2,861 |
1,904,320 | 2009-12-14T23:40:00.000 | 0 | 1 | 0 | 1 | asp.net,python,remote-execution | 1,904,344 | 1 | false | 0 | 0 | Probably the best approach is the least coupled one. If you can settle on a protocol that you're comfortable having the two sides (ASP.NET/Python) talk in, it will go a long way toward reducing headaches.
Let's say you pick XML.
Set up the Python script to run as a WSGI application with either CherryPy or Apache (or whatever). The script formats its response as XML and passes that to WSGI, which returns the XML over HTTP.
On the ASP.NET side of things, whenever you want to "run the script" you simply query the URL with the WebRequest class, then parse the results with LINQ-to-XML (which on a side note is a really cool technology).
Here's where this becomes relevant: later on, if either the ASP.NET implementation or the Python implementation changes, you don't have to re-code/refactor the other. And if you later realize that the ASP.NET app and some desktop app both need to be able to do this, you've standardized on a protocol, and implementing it should be easy and well supported. | 1 | 1 | 0 | I have a Python script on a Linux server that I can SSH into, and I want to run the script on the Linux server (and pass it parameters entered by the user) and get the output on an ASP.NET webpage running on IIS. How would I be able to do that?
Would it be easier if I was running a wamp server?
Edit: The servers are in the same internal intranet. | Run a remote python script from ASP.Net | 0 | 0 | 0 | 1,105 |
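(A minimal sketch of the WSGI-returning-XML idea from the answer; the XML shape and port are invented, and the ASP.NET side would simply fetch this URL and parse the body.)

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Run the real script logic here and wrap the result in XML.
    body = "<result><status>ok</status><value>42</value></result>"
    start_response("200 OK", [("Content-Type", "text/xml")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("", 8051, application).serve_forever()
```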
1,904,724 | 2009-12-15T01:37:00.000 | 0 | 0 | 1 | 0 | python,windows,py2exe,cherrypy | 2,141,065 | 3 | false | 0 | 0 | A manifest file will not be required for console applications. w9xpopen.exe is not required for Win XP and later. | 2 | 11 | 0 | I'm using Py2exe to compile a CherryPy (3.1) server using Python 2.6 (32-bit) on Windows 7 Pro (64-bit).
This server will run without a GUI.
Questions:
Do I need to be concerned about adding a manifest file for this application if it runs without a GUI?
Do I need to include w9xpopen.exe with my exe?
So far, my limited testing has indicated that I don't need to include a manifest file or w9xpopen.exe with my executable in order for it to work.
Comments appreciated.
Thank you,
Malcolm | Py2exe: Are manifest files and w9xpopen.exe required when compiling a web server without GUI interface? | 0 | 0 | 0 | 10,540 |
1,904,724 | 2009-12-15T01:37:00.000 | 12 | 0 | 1 | 0 | python,windows,py2exe,cherrypy | 1,904,750 | 3 | true | 0 | 0 | w9xpopen.exe is for Windows 95/98, so if you don't target those you will not need it.
You can add dll_excludes=['w9xpopen.exe'] in your setup file for py2exe to exclude that.
And of course you will not need a manifest file if you don't use a GUI. | 2 | 11 | 0 | I'm using Py2exe to compile a CherryPy (3.1) server using Python 2.6 (32-bit) on Windows 7 Pro (64-bit).
This server will run without a GUI.
Questions:
Do I need to be concerned about adding a manifest file for this application if it runs without a GUI?
Do I need to include w9xpopen.exe with my exe?
So far, my limited testing has indicated that I don't need to include a manifest file or w9xpopen.exe with my executable in order for it to work.
Comments appreciated.
Thank you,
Malcolm | Py2exe: Are manifest files and w9xpopen.exe required when compiling a web server without GUI interface? | 1.2 | 0 | 0 | 10,540 |
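(A hedged sketch of the dll_excludes suggestion in a py2exe setup script; the server script name is a placeholder.)

```python
# setup.py for py2exe -- console (no GUI) build of the CherryPy server.
from distutils.core import setup
import py2exe  # registers the py2exe command

setup(
    console=["server.py"],   # placeholder script name
    options={
        "py2exe": {
            "dll_excludes": ["w9xpopen.exe"],   # not needed on Win XP and later
        }
    },
)
```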
1,905,023 | 2009-12-15T03:41:00.000 | 3 | 0 | 1 | 0 | python,ironpython,dynamic-language-runtime,cpython | 1,905,306 | 3 | false | 0 | 0 | It has been tested to work well with mono on Linux and I use it regularly to open up opportunities to use - as Alex Martelli so eloquently put it - "every .NET module on the planet".
I have faced some troubles in accessing third party extension modules, but that has pretty much always been a path issue, which is easy to correct.
I don't know how well this works on a Mac, though. | 2 | 4 | 0 | Has IronPython gotten to a point where you can just drop it in as a replacement for CPython?
To clarify: I mean can IronPython run applications originally written for CPython (no .NET involved, of course) | Is IronPython usable as a replacement for CPython? | 0.197375 | 0 | 0 | 566 |
1,905,023 | 2009-12-15T03:41:00.000 | -1 | 0 | 1 | 0 | python,ironpython,dynamic-language-runtime,cpython | 3,157,192 | 3 | false | 0 | 0 | Ironpython have some prolbems to replace the cpython,like
On CPython you can use many libraries directly, but on IronPython you must go through Ironclad, and the efficiency is poor.
And if you want to reuse existing .py files, there will be many errors, even though the grammar is the same.
So they are really two different things that merely share a grammar.
Its only merit is that it can be loaded by .NET easily. | 2 | 4 | 0 | Has IronPython gotten to a point where you can just drop it in as a replacement for CPython?
To clarify: I mean can IronPython run applications originally written for CPython (no .NET involved, of course) | Is IronPython usable as a replacement for CPython? | -0.066568 | 0 | 0 | 566 |
1,906,991 | 2009-12-15T12:06:00.000 | 5 | 0 | 1 | 0 | python,string | 1,907,017 | 6 | false | 0 | 0 | In Python, there is no difference between strings that are single or double quoted, so I don't know why you would want to do this. However, if you actually mean single quote characters inside a string, then to replace them with double quotes, you would do this: mystring.replace('\'', '"') | 1 | 34 | 0 | I want to check whether the given string is single- or double-quoted. If it is single quote I want to convert it to be double quote, else it has to be same double quote. | Convert single-quoted string to double-quoted string | 0.16514 | 0 | 0 | 95,592 |
1,907,519 | 2009-12-15T13:36:00.000 | 1 | 0 | 1 | 0 | python | 1,907,544 | 3 | false | 0 | 0 | This smells a bit like homework.
Try writing down the successive outputs, one per line, and look for a pattern. See if you can explain that pattern with slices of the input string. Then look for a numeric pattern to the slicing.
Also, please edit your question to put quotes around your strings. What you've written isn't very clear in terms of the outputs, whether you output strings with commas or lists of substrings. | 1 | 0 | 0 | in a string suppose 12345 , i want to take nested loops , so that i would be able to iterate through the string in this following way :-
1, 2, 3, 4, 5 would be taken as integers
12, 3, 4,5 as integers
1, 23, 4, 5 as integers
1, 2, 34, 5 as integers
...
And so on. I know what's the logic but being a noob in Python, I'm not able to form the loop. | nested looping in python | 0.066568 | 0 | 0 | 374 |
1,907,736 | 2009-12-15T14:13:00.000 | 0 | 0 | 0 | 0 | python,vector,svg | 1,907,972 | 2 | true | 0 | 1 | If you have any .NET experience I would recommend Silverlight. I have worked with it in the academic setting and it has impressed me very much. Some of the examples are pretty mind blowing, for the web applications at least. I also know they did focus on making silverlight into exactly what your question asks, a framework for building "visually rich" desktop applications also. There is a set of tools called expression blend that interact directly with visual studio to build the GUI and it's pretty impressive the control their GUI gives you in making your GUI. At least worth a look. | 1 | 0 | 0 | I've started building an app with Flex/Air but am getting sick of it's clunkyness.
The app that I'm building has similar behaviour to Prezi (www.prezi.com) but in a completely different field.
I'm looking for something on the desktop which has flex like capabilities, such as drawing vectors then zooming in/out, rotating etc, gui widgets would be a bonus but not essential.
If it was written in Python/Ruby or had an abstraction in either language that would be great.
I've had a quick look at PyGame and Pyglet but am not sure of their suitability.
Any ideas?
Cheers,
Chris | Framework for building visually rich desktop applications? | 1.2 | 0 | 0 | 384 |
1,907,782 | 2009-12-15T14:21:00.000 | 0 | 1 | 0 | 0 | php,python,integration,trac | 1,909,272 | 3 | false | 1 | 0 | Your Python code will have access to your users' cookies. A template would be best, but if you don't have one available (or the header/footer are trivially small, or whatever), you can simply port the PHP header and footer code to Python, using the cookies that are already there to query the database or whatever you need to do.
If you want to retain your links for logging in, registering, and whatever else might be in the PHP version, simply link to the PHP side, then redirect back to Trac once PHP's done its job. | 2 | 4 | 0 | This might sound really crazy, but still...
For our revamped project site, we want to integrate Trac (as code browser, developer wiki and issue tracker) into the site design. That is, of course, difficult, since Trac is written in Python and our site in PHP. Does anybody here know a way how to integrate a header and footer (PHP) into the Trac template (preferrably without invoking a - or rather two for header and footer - PHP process from the command line)? | Integrate Python app into PHP site | 0 | 0 | 0 | 2,293 |
1,907,782 | 2009-12-15T14:21:00.000 | 1 | 1 | 0 | 0 | php,python,integration,trac | 1,907,850 | 3 | false | 1 | 0 | The best option probably is to (re)write the header and footer using python.
If the header and footer are relatively static you can also generate them once using php (or once every x minutes) and include them from the filesystem. (You probably already thought about this and dismissed the idea because your sites are too dynamic to use this option?)
While I would not really recommend it, you could also use some form of AJAX to load parts of the page, and nothing prevents you from loading this content from a PHP-based system. That could keep all parts dynamic. Your pages will probably look ugly while loading, and you now generate more hits on the server than needed, but if it is not a big site this might not be a big problem.
Warning: If you have user logins on both systems you will probably run into problems with people only being logged in to half of your site. | 2 | 4 | 0 | This might sound really crazy, but still...
For our revamped project site, we want to integrate Trac (as code browser, developer wiki and issue tracker) into the site design. That is, of course, difficult, since Trac is written in Python and our site in PHP. Does anybody here know a way how to integrate a header and footer (PHP) into the Trac template (preferrably without invoking a - or rather two for header and footer - PHP process from the command line)? | Integrate Python app into PHP site | 0.066568 | 0 | 0 | 2,293 |
1,908,206 | 2009-12-15T15:30:00.000 | 0 | 0 | 1 | 0 | python,multithreading,sleep,yield | 38,280,120 | 3 | false | 0 | 0 | Thread.yield() is missing from python because perhaps it has been forgotten, or the designer thought that all synchronization and interprocess communication issues are solvable without Thread.yield().
I'd use Thread.yield() for the following issue:
E.g. there is a job queue and there are 2 worker threads which can fetch entries from the job queue and place entries into the job queue.
One way to solve it is to use the threading.Condition class.
When worker 'B' wants to fetch a queue entry but the queue is empty, it goes into a wait state (Condition.wait()). When worker 'A' places an entry into the queue, it wakes up worker 'B' (Condition.notify()). At this point yielding is essential, because if worker 'A' doesn't yield here, worker 'A' can fetch the task before the woken-up worker 'B', which causes a race condition.
I wonder how it is solvable without Thread.yield(). | 1 | 8 | 0 | I want to tell my Python threads to yield, and so avoid hogging the CPU unnecessarily. In Java, you could do that using the Thread.yield() function. I don't think there is something similar in Python, so I have been using time.sleep(t) where t = 0.00001. For t=0 there seems to be no effect.
I think that maybe there is something I am not understanding correctly about Python's threading model, and hence the reason for the missing thread.yield(). Can someone clarify this to me? Thanks!
PS: This is what the documentation for Java's Thread.yield() says:
Causes the currently executing thread
object to temporarily pause and allow
other threads to execute. | In there something similar to Java's Thread.yield() in Python? Does that even make sense? | 0 | 0 | 0 | 8,147 |
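A minimal sketch of the worker/queue setup described in the answer above, using threading.Condition and no yield() call (names are illustrative); the while-loop re-check around wait() is the standard way the wakeup race is handled:
import threading

queue = []
cond = threading.Condition()

def producer():                      # "worker A"
    with cond:
        queue.append('job')
        cond.notify()                # wake one waiting consumer

def consumer():                      # "worker B"
    with cond:
        while not queue:             # re-check the predicate after every wakeup
            cond.wait()              # releases the lock while sleeping
        job = queue.pop(0)
    print('got %s' % job)

threading.Thread(target=consumer).start()
producer()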
1,908,250 | 2009-12-15T15:37:00.000 | 4 | 1 | 1 | 0 | python | 1,908,528 | 5 | false | 0 | 0 | One suggestion is to find an open-source project in Python, and start contributing. You may ask "how can I contribute, if I'm a beginner?". One answer is "write tests". Almost any project will welcome you as a tester. Another answer is "documentation", though that is less likely to give immediate benefits. | 2 | 10 | 0 | I started with c++ but as we all know, c++ is a monster. I still have to take it and I do like C++ (it takes programming a step further)
However, currently I have been working with python for a while. I see how you guys can turn some long algorithm into simple one.
I know programming is a progress, and can take up to years of experience.
I also know myself - I am not a natural programmer, and software engineering is not my first choice anyway. However, I would like to do heavy programming on my own, and create projects.
How can I become a better python programmer? | How to become a good Python coder? | 0.158649 | 0 | 0 | 8,219 |
1,908,250 | 2009-12-15T15:37:00.000 | 3 | 1 | 1 | 0 | python | 1,908,456 | 5 | false | 0 | 0 | The already-posted answers are great.
In addition, whenever you're coding something in Python and you start doing something that feels clumsy, take a step back and think. If you can't think of a more elegant way to do it, post it as a question on Stack Overflow. I can't count the number of times that I've seen someone reduce ten lines of Python into one (which is still perfectly easy to read and understand). | 2 | 10 | 0 | I started with c++ but as we all know, c++ is a monster. I still have to take it and I do like C++ (it takes programming a step further)
However, currently I have been working with python for a while. I see how you guys can turn some long algorithm into simple one.
I know programming is a progress, and can take up to years of experience.
I also know myself - I am not a natural programmer, and software engineering is not my first choice anyway. However, I would like to do heavy programming on my own, and create projects.
How can I become a better python programmer? | How to become a good Python coder? | 0.119427 | 0 | 0 | 8,219 |
1,908,334 | 2009-12-15T15:49:00.000 | 0 | 0 | 0 | 0 | python,python-imaging-library,counter | 1,908,428 | 3 | false | 1 | 0 | If you really need to handle thousands "renders" per second I would not suggest to generate the images on the fly. How about precomputing n images where n is the expected (you might want to be generous here) count you have to handle?
I know you state that you don't want to use javascript and you only want one img tag, but I would recommend to reconsider pushing visualization to the client side as you would burn unnecessary resources if you really get the load you are expecting (thousands hits / second, every hit incrementing the counter and generating an image using PIL). | 2 | 0 | 0 | How can I combine multiple images, such as base image with logo and number of digits images to display graphical counter with pageviews count, updated dynamically?
It should be very fast, with thousands of renders per second. User should see counter image without Javascript and with single img tag.
I prefer to implement that counter with Python using PIL library, but other solutions welcome as well. | How to combine multiple images fast for page views counter | 0 | 0 | 0 | 743 |
1,908,334 | 2009-12-15T15:49:00.000 | 2 | 0 | 0 | 0 | python,python-imaging-library,counter | 1,908,385 | 3 | true | 1 | 0 | Precompute for the given background the image of a single digit (for each digit 0 ... 10) at each digit position.
Then to create an arbitrary number you only have to paste the correct images next to each other, and you won't have to do any alpha blending. Therefore this must be more efficient.
Also, if certain page counts are more common (e.g. page counts < 10000) you might want to precompute these (10000) complete counter images to serve those directly.
EDIT:
You can do this with python PIL, or any other method. If you have a specific difficulty with PIL then please ask a more direct question about the problems you have encounterd. | 2 | 0 | 0 | How can I combine multiple images, such as base image with logo and number of digits images to display graphical counter with pageviews count, updated dynamically?
It should be very fast, with thousands of renders per second. User should see counter image without Javascript and with single img tag.
I prefer to implement that counter with Python using PIL library, but other solutions welcome as well. | How to combine multiple images fast for page views counter | 1.2 | 0 | 0 | 743 |
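A hedged sketch of the precomputed-digits approach suggested in the answer above (the digit_0.png ... digit_9.png file names are illustrative assumptions; each digit image is assumed to already contain the counter's background, so no alpha blending is needed):
from PIL import Image

DIGITS = [Image.open('digit_%d.png' % d) for d in range(10)]   # loaded once at startup

def render_counter(count):
    text = str(count)
    w, h = DIGITS[0].size
    out = Image.new('RGB', (w * len(text), h))
    for i, ch in enumerate(text):
        out.paste(DIGITS[int(ch)], (i * w, 0))                 # just paste side by side
    return out

# render_counter(4207).save('counter.png')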
1,908,670 | 2009-12-15T16:33:00.000 | 0 | 0 | 1 | 0 | python,datetime | 1,920,099 | 3 | false | 0 | 0 | Seconds since epoch is the most compact and portable format for storing time data. Native DATETIME format in MySQL, for example, takes 8 bytes instead of 4 for TIMESTAMP (seconds since epoch). You'd also avoid timezone issues if you need to get the time from clients in multiple geographic locations. Logical operations (for sorting, etc.) are also fastest on integers. | 1 | 0 | 0 | When storing a time in Python (in my case in ZODB, but applies to any DB), what format (epoch, datetime etc) do you use and why? | Storing times in Python - Best format? | 0 | 0 | 0 | 2,389 |
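A tiny sketch of the seconds-since-epoch approach from the answer above (standard library only):
import time
import datetime

stored = int(time.time())                                 # write: a plain integer
restored = datetime.datetime.utcfromtimestamp(stored)     # read back as a UTC datetime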
1,909,025 | 2009-12-15T17:26:00.000 | 20 | 0 | 1 | 0 | python,virtualenv | 1,910,294 | 3 | true | 0 | 0 | Is there a bash alias active on this machine for "python", by any chance? That will take priority over the PATH-modifications made by activate, and could cause the wrong python binary to be used.
Try running virtualenv/bin/python directly (no need to activate) and see if you can import your module.
If this fixes it, you just need to get rid of your python bash alias. | 1 | 10 | 0 | I have a problem with virtualenv. I use it regulary, I use it on my development machine and on several servers. But on this last server I tried to use i got a problem.
I created a virtualenv with the --no-site-packages argument, and then I installed some Python modules inside the virtualenv. I can confirm that the modules are located inside the virtualenv's site-packages, and everything seems to be fine.
But when I try to do source virtualenv/bin/activate and then import one of the modules (run python, then import modulename), I get an import error that says the module doesn't exist. How is this happening? It seems like it never activates, even though it says it does.
Anybody have a clue on how to fix this? | Import error with virtualenv | 1.2 | 0 | 0 | 24,484 |
1,909,249 | 2009-12-15T18:03:00.000 | 0 | 0 | 1 | 0 | python,macos,osx-snow-leopard | 23,410,716 | 4 | false | 0 | 0 | when doing an "port selfupdate", rsync timesout with rsync.macports.org. There are mirror sites available to use. | 2 | 20 | 0 | I'm developing on Snow Leopard and going through the various "how tos" to get the MySQLdb package installed and working (uphill battle). Things are a mess and I'd like to regain confidence with a fresh, clean, as close to factory install of Python 2.6.
What folders should I clean out?
What should I run?
What symbolic links should I destroy or create? | How to clean up my Python Installation for a fresh start | 0 | 0 | 0 | 29,101 |
1,909,249 | 2009-12-15T18:03:00.000 | 1 | 0 | 1 | 0 | python,macos,osx-snow-leopard | 1,909,283 | 4 | false | 0 | 0 | My experience doing development on MacOSX is that the directories for libraries and installation tools are just different enough to cause a lot of problems that you end up having to fix by hand. Eventually, your computer becomes a sketchy wasteland of files and folders duplicated all over the place in an effort to solve these problems. A lot of hand-tuned configuration files, too. The thought of getting my environment set up again from scratch gives me the chills.
Then, when it's time to deploy, you've got to do it over again in reverse (unless you're deploying to an XServe, which is unlikely).
Learn from my mistake: set up a Linux VM and do your development there. At least, run your development "server" there, even if you edit the code files on your Mac. | 2 | 20 | 0 | I'm developing on Snow Leopard and going through the various "how tos" to get the MySQLdb package installed and working (uphill battle). Things are a mess and I'd like to regain confidence with a fresh, clean, as close to factory install of Python 2.6.
What folders should I clean out?
What should I run?
What symbolic links should I destroy or create? | How to clean up my Python Installation for a fresh start | 0.049958 | 0 | 0 | 29,101 |
1,909,471 | 2009-12-15T18:39:00.000 | 1 | 1 | 0 | 0 | python,c,sockets,scapy | 1,909,504 | 2 | false | 0 | 0 | i would think C would be faster, but python would be a lot easier to manage and use.
The difference would be so small that you wouldn't need it unless you were trying to send massive amounts of data (something stupid like 1 million GB/second, lol).
joe | 2 | 10 | 0 | my question simply relates to the difference in performance between a socket in C and in Python. Since my Python build is CPython, I assume it's similar, but I'm curious if someone actually has "real" benchmarks, or at least an opinion that's evidence based.
My logic is as such:
C socket much faster? Then write a C extension.
Not/barely a difference? Keep writing in Python and figure out how to obtain packet-level control (scapy? dpkt?)
I'm sure someone will want to know for either context or curiosity. I plan to build a sort of proxy for myself (not for internet browsing, anonymity, etc) and will bind the application I want to use with it to a specific port. Then, all packets on said port will be queued, address header modified, and then sent, etc, etc.
Thanks in advance. | C/Python Socket Performance? | 0.099668 | 0 | 0 | 4,477 |
1,909,471 | 2009-12-15T18:39:00.000 | 13 | 1 | 0 | 0 | python,c,sockets,scapy | 1,909,511 | 2 | true | 0 | 0 | In general, sockets in Python perform just fine. For example, the reference implementation of the BitTorrent tracker server is written in Python.
When doing networking operations, the speed of the network is usually the limiting factor. That is, any possible tiny difference in speed between C and Python's socket code is completely overshadowed by the fact that you're doing networking of some kind.
However, your description of what you want to do indicates that you want to inspect and modify individual IP packets. This is beyond the capabilities of Python's standard networking libraries, and is in any case a very OS-dependent operation. Rather than asking "which is faster?" you will need to first ask "is this possible?" | 2 | 10 | 0 | my question simply relates to the difference in performance between a socket in C and in Python. Since my Python build is CPython, I assume it's similar, but I'm curious if someone actually has "real" benchmarks, or at least an opinion that's evidence based.
My logic is as such:
C socket much faster? Then write a C extension.
Not/barely a difference? Keep writing in Python and figure out how to obtain packet-level control (scapy? dpkt?)
I'm sure someone will want to know for either context or curiosity. I plan to build a sort of proxy for myself (not for internet browsing, anonymity, etc) and will bind the application I want to use with it to a specific port. Then, all packets on said port will be queued, address header modified, and then sent, etc, etc.
Thanks in advance. | C/Python Socket Performance? | 1.2 | 0 | 0 | 4,477 |
1,909,512 | 2009-12-15T18:46:00.000 | 134 | 1 | 1 | 0 | python | 1,923,081 | 2 | false | 0 | 0 | Python is a dynamic, strongly typed, object oriented, multipurpose programming language, designed to be quick (to learn, to use, and to understand), and to enforce a clean and uniform syntax.
Python is dynamically typed: it means that you don't declare a type (e.g. 'integer') for a variable name and then assign something of that type (and only that type). Instead, you have variable names, and you bind them to entities whose type stays with the entity itself. a = 5 makes the variable name a refer to the integer 5. Later, a = "hello" makes the variable name a refer to a string containing "hello". Statically typed languages would have you declare int a and then a = 5, but assigning a = "hello" would have been a compile-time error. On one hand, this makes everything more unpredictable (you don't know what a refers to). On the other hand, it makes it very easy to achieve some results that statically typed languages make very difficult.
Python is strongly typed. It means that if a = "5" (the string whose value is '5'), a will remain a string and will never be coerced to a number just because the context seems to require one. Every type conversion in Python must be done explicitly. This is different from, for example, Perl or Javascript, where you have weak typing and can write things like "hello" + 5 to get "hello5".
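A quick illustration of the two previous points (a minimal, assumed example):
a = 5                     # the name 'a' refers to an integer
a = "hello"               # now it refers to a string; no declared type to violate (dynamic typing)

try:
    "hello" + 5           # strong typing: the int is not silently coerced to a string
except TypeError:
    print("hello" + str(5))   # 'hello5' -- the conversion must be explicit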
Python is object oriented, with class-based inheritance. Everything is an object (including classes, functions, modules, etc), in the sense that they can be passed around as arguments, have methods and attributes, and so on.
Python is multipurpose: it is not specialised to a specific target of users (like R for statistics, or PHP for web programming). It is extended through modules and libraries, that hook very easily into the C programming language.
Python enforces correct indentation of the code by making the indentation part of the syntax. There are no control braces in Python. Blocks of code are identified by their level of indentation. Although this is a big turn-off for many programmers not used to it, it is precious, as it gives a very uniform style and results in code that is visually pleasant to read.
The code is compiled into byte code and then executed in a virtual machine. This means that precompiled code is portable between platforms.
Python can be used for any programming task, from GUI programming to web programming with everything else in between. It's quite efficient, as much of its activity is done at the C level. Python is just a layer on top of C. There are libraries for everything you can think of: game programming and openGL, GUI interfaces, web frameworks, semantic web, scientific computing... | 1 | 105 | 0 | What is Python used for and what is it designed for? | What is Python used for? | 1 | 0 | 0 | 155,982 |
1,909,994 | 2009-12-15T20:02:00.000 | 2 | 0 | 0 | 0 | python,arrays,numpy,reshape | 1,916,520 | 4 | false | 0 | 0 | No matter what, you'll be stuck reallocating a chunk of memory, so it doesn't really matter if you use arr.resize(), np.concatenate, hstack/vstack, etc. Note that if you're accumulating a lot of data sequentially, Python lists are usually more efficient. | 2 | 9 | 1 | Hello I have a 1000 data series with 1500 points in each.
They form a (1000x1500) size Numpy array created using np.zeros((1500, 1000)) and then filled with the data.
Now what if I want the array to grow to say 1600 x 1100? Do I have to add arrays using hstack and vstack or is there a better way?
I would want the data already in the 1000x1500 piece of the array not to be changed, only blank data (zeros) added to the bottom and right, basically.
Thanks. | How do I add rows and columns to a NUMPY array? | 0.099668 | 0 | 0 | 17,603 |
1,909,994 | 2009-12-15T20:02:00.000 | 3 | 0 | 0 | 0 | python,arrays,numpy,reshape | 1,910,401 | 4 | true | 0 | 0 | If you want zeroes in the added elements, my_array.resize((1600, 1000)) should work. Note that this differs from numpy.resize(my_array, (1600, 1000)), in which previous lines are duplicated, which is probably not what you want.
Otherwise (for instance if you want to avoid initializing elements to zero, which could be unnecessary), you can indeed use hstack and vstack to add an array containing the new elements; numpy.concatenate() (see pydoc numpy.concatenate) should work too (it is just more general, as far as I understand).
In either case, I would guess that a new memory block has to be allocated in order to extend the array, and that all these methods take about the same time. | 2 | 9 | 1 | Hello I have a 1000 data series with 1500 points in each.
They form a (1000x1500) size Numpy array created using np.zeros((1500, 1000)) and then filled with the data.
Now what if I want the array to grow to say 1600 x 1100? Do I have to add arrays using hstack and vstack or is there a better way?
I would want the data already in the 1000x1500 piece of the array not to be changed, only blank data (zeros) added to the bottom and right, basically.
Thanks. | How do I add rows and columns to a NUMPY array? | 1.2 | 0 | 0 | 17,603 |
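A hedged sketch of one safe way to grow in both directions while keeping the existing block in the top-left corner, using the shapes from the question:
import numpy as np

old = np.zeros((1500, 1000))            # the existing data
new = np.zeros((1600, 1100))            # enlarged array; extra cells are already zero
new[:old.shape[0], :old.shape[1]] = old # original block stays in the top-left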
1,910,131 | 2009-12-15T20:22:00.000 | 3 | 0 | 0 | 0 | python,django,facebook | 1,910,419 | 1 | false | 1 | 0 | Based on your comments, I think I see what's going on. 192.168.2.2 is not a valid URL. That is a local network IP, and cannot be accessed from outside your network.
You need to set your Canvas Callback URL to the external IP address of your modem. | 1 | 0 | 0 | i am making a facebook app in python with django. now i have successfully resolved the callback url to my localhost account. but the app is not displaying on facebook.
when i navigate to apps.facebook.com/'myappname', it authenticates and then displays the file list on project folder? | facebook app in python showing file list | 0.53705 | 0 | 0 | 108 |
1,911,615 | 2009-12-16T01:04:00.000 | 0 | 0 | 1 | 0 | python,python-idle | 1,911,639 | 2 | false | 0 | 0 | Enable the debugger and see if it tells you anything. | 1 | 0 | 0 | I am using python 2.5 on windows. All I am doing is unpickling a large file (18MB - a list of dictionaries) and modifiying some of its values. Now this works fine. But when I add a couple of prints, IDLE restarts. And weirdly enough it seems to be happening where I added the print. I figured this out commenting and uncommenting things line by line. I added a try catch around the print, but am not able to catch anything.
When does IDLE restart? And how do I catch any exceptions or errors it throws(if it does)? | Python: How do I find why IDLE restarts? | 0 | 0 | 0 | 705 |
1,912,229 | 2009-12-16T04:11:00.000 | 9 | 0 | 1 | 0 | python,properties,decorator | 1,912,265 | 2 | false | 0 | 0 | In Python 3 you WOULD see the print's result -- and then an AttributeError for the last print (because _m has disappeared). You may be using Python 2.6, in which case you need to change the class clause to class M(object): to make M new-style, and then you'll get the same behavior as in Python 3. | 1 | 16 | 0 | I'm playing around with property in Python and I was wondering how this @propertyName.deleter decorator works. I'm probably missing something, I could not find clear answers by Google.
What I would like to achieve is when this deleter behavior is called, I can trigger other actions (e.g: using my 3d application SDK).
For now just a simple print() doesn't seem to get triggered.
Is deleter fired when I delete the property using del(instance.property) ?
Otherwise, how can I achieve this?
class M():
    def __init__(self):
        self._m = None

    @property
    def mmm(self):
        return self._m

    @mmm.setter
    def mmm(self, val):
        self._m = val

    @mmm.deleter
    def mmm(self):
        print('deleting') # Not printing
        del(self._m)

if __name__ == '__main__':
    i = M()
    i.mmm = 150
    print(i.mmm)
    del(i.mmm)
    print(i.mmm)
Thank you very much (: | deleter decorator using Property in Python | 1 | 0 | 0 | 13,562 |
1,912,351 | 2009-12-16T04:44:00.000 | 18 | 0 | 0 | 0 | python,django | 1,914,081 | 4 | false | 1 | 0 | create a reusable app that include your generic functions so you can share between projects.
Use, for example, a git repo to store this app and manage its deployments and evolution (as a submodule).
use a public git repo so you can share with the community :) | 3 | 80 | 0 | I have a couple of functions that I wrote that I need to use in my django app. Where would I put the file with them and how would I make them callable within my views? | Django: Where to put helper functions? | 1 | 0 | 0 | 40,688 |
1,912,351 | 2009-12-16T04:44:00.000 | 13 | 0 | 0 | 0 | python,django | 1,912,371 | 4 | false | 1 | 0 | If they are related to a specific app, I usually just put them in the related app folder and name the file, 'functions.py'.
If they're not specific to an app, I make a commons app for components (tests, models, functions, etc) that are shared across apps. | 3 | 80 | 0 | I have a couple of functions that I wrote that I need to use in my django app. Where would I put the file with them and how would I make them callable within my views? | Django: Where to put helper functions? | 1 | 0 | 0 | 40,688 |
1,912,351 | 2009-12-16T04:44:00.000 | 2 | 0 | 0 | 0 | python,django | 13,270,468 | 4 | false | 1 | 0 | I am using new python file service.py in app folder. The file contains mostly helper queries for specific app. Also I used to create a folder inside Django application that contains global helper functions and constants. | 3 | 80 | 0 | I have a couple of functions that I wrote that I need to use in my django app. Where would I put the file with them and how would I make them callable within my views? | Django: Where to put helper functions? | 0.099668 | 0 | 0 | 40,688 |
1,912,557 | 2009-12-16T05:43:00.000 | 4 | 0 | 1 | 0 | python,multithreading,gil | 1,912,602 | 4 | false | 0 | 0 | It really depends on the library you're using. The GIL is meant to prevent Python objects and its internal data structures to be changed at the same time. If you're doing an upload, the library you use to do the actual upload might release the GIL while it's waiting for the actual HTTP request to complete (I would assume that is the case with the HTTP modules in the standard library, but I didn't check).
As a side note, if you really want to have things running in parallel, just use multiple processes. It will save you a lot of trouble and you'll end up with better code (more robust, more scalable, and most probably better structured). | 3 | 3 | 0 | Does the presence of python GIL imply that in python multi threading the same operation is not so different from repeating it in a single thread?.
For example, If I need to upload two files, what is the advantage of doing them in two threads instead of uploading them one after another?.
I tried a big math operation in both ways. But they seem to take almost equal time to complete.
This seems to be unclear to me. Can someone help me on this?.
Thanks. | A question on python GIL | 0.197375 | 0 | 0 | 832 |
1,912,557 | 2009-12-16T05:43:00.000 | 12 | 0 | 1 | 0 | python,multithreading,gil | 1,912,607 | 4 | true | 0 | 0 | Python's threads get a slightly worse rap than they deserve. There are three (well, 2.5) cases where they actually get you benefits:
If non-Python code (e.g. a C library, the kernel, etc.) is running, other Python threads can continue executing. It's only pure Python code that can't run in two threads at once. So if you're doing disk or network I/O, threads can indeed buy you something, as most of the time is spent outside of Python itself.
The GIL is not actually part of Python, it's an implementation detail of CPython (the "reference" implementation that the core Python devs work on, and that you usually get if you just run "python" on your Linux box or something).
Jython, IronPython, and any other reimplementations of Python generally do not have a GIL, and multiple pure-Python threads can execute simultaneously.
The 0.5 case: Even if you're entirely pure-Python and see little or no performance benefit from threading, some problems are really convenient in terms of developer time and difficulty to solve with threads. This depends in part on the developer, too, of course. | 3 | 3 | 0 | Does the presence of python GIL imply that in python multi threading the same operation is not so different from repeating it in a single thread?.
For example, If I need to upload two files, what is the advantage of doing them in two threads instead of uploading them one after another?.
I tried a big math operation in both ways. But they seem to take almost equal time to complete.
This seems to be unclear to me. Can someone help me on this?.
Thanks. | A question on python GIL | 1.2 | 0 | 0 | 832 |
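A small sketch of case 1 from the answer above: I/O-bound work where CPython threads overlap nicely because the GIL is released during the network calls (the URLs are illustrative assumptions):
import threading
import urllib2                    # urllib.request in Python 3

def fetch(url):
    data = urllib2.urlopen(url).read()
    print('%s: %d bytes' % (url, len(data)))

threads = [threading.Thread(target=fetch, args=(u,))
           for u in ('http://example.com/a', 'http://example.com/b')]
for t in threads:
    t.start()
for t in threads:
    t.join()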
1,912,557 | 2009-12-16T05:43:00.000 | 0 | 0 | 1 | 0 | python,multithreading,gil | 58,748,154 | 4 | false | 0 | 0 | Multithreading is a concept where two are more tasks need be completed simultaneously, for example, I have word processor in this application there are N numbers of a parallel task have to work. Like listening to keyboard, formatting input text, sending a formatted text to display unit. In this context with sequential processing, it is time-consuming and one task has to wait till the next task completion. So we put these tasks in threads and simultaneously complete the task. Three threads are always up and waiting for the inputs to arrive, then take that input and produce the output simultaneously.
So multi-threading works faster if we have multi-core and processors. But in reality with single processors, threads will work one after the other, but we feel it's executing with greater speed, Actually, one instruction executes at a time and a processor can execute billions of instructions at a time. So the computer creates illusion that multi-task or thread working parallel. It just an illusion. | 3 | 3 | 0 | Does the presence of python GIL imply that in python multi threading the same operation is not so different from repeating it in a single thread?.
For example, If I need to upload two files, what is the advantage of doing them in two threads instead of uploading them one after another?.
I tried a big math operation in both ways. But they seem to take almost equal time to complete.
This seems to be unclear to me. Can someone help me on this?.
Thanks. | A question on python GIL | 0 | 0 | 0 | 832 |
1,912,567 | 2009-12-16T05:45:00.000 | 2 | 0 | 1 | 0 | python,file | 1,912,589 | 4 | false | 0 | 0 | This is one of those things that is both so trivial to implement and so app-specific that there really wouldn't be any point in a library, and any library intended for this purpose would grow so unwieldy trying to adapt to the many variations required, learning and using the library would take as much time as implementing it yourself. | 2 | 9 | 0 | Suppose I have a program A. I run it, and performs some operation starting from a file foo.txt. Now A terminates.
New run of A. It checks if the file foo.txt has changed. If the file has changed, A runs its operation again, otherwise, it quits.
Does a library function/external library for this exists ?
Of course it can be implemented with an md5 + a file/db containing the md5. I want to prevent reinventing the wheel. | Python library to detect if a file has changed between different runs? | 0.099668 | 0 | 0 | 11,101 |
1,912,567 | 2009-12-16T05:45:00.000 | 0 | 0 | 1 | 0 | python,file | 1,912,579 | 4 | false | 0 | 0 | Cant we just check the last modified date . i.e after the first operation we store the last modified date in the db , and then before running again we compare the last modified date of the file foo.txt with the value stored in our db .. if they differ ,we perform the operation again ? | 2 | 9 | 0 | Suppose I have a program A. I run it, and performs some operation starting from a file foo.txt. Now A terminates.
New run of A. It checks if the file foo.txt has changed. If the file has changed, A runs its operation again, otherwise, it quits.
Does a library function/external library for this exists ?
Of course it can be implemented with an md5 + a file/db containing the md5. I want to prevent reinventing the wheel. | Python library to detect if a file has changed between different runs? | 0 | 0 | 0 | 11,101 |
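A minimal sketch of the store-and-compare approach discussed above, combining the mtime and MD5 ideas (the stamp file name is an illustrative assumption):
import os
import hashlib

STAMP = 'foo.txt.stamp'

def fingerprint(path):
    digest = hashlib.md5(open(path, 'rb').read()).hexdigest()
    return '%s:%s' % (os.path.getmtime(path), digest)

def has_changed(path):
    current = fingerprint(path)
    previous = open(STAMP).read() if os.path.exists(STAMP) else None
    if current != previous:
        open(STAMP, 'w').write(current)   # remember for the next run
        return True
    return False

if has_changed('foo.txt'):
    print('foo.txt changed since the last run, doing the operation again')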
1,912,971 | 2009-12-16T07:36:00.000 | 0 | 1 | 1 | 0 | python,ironpython | 10,440,546 | 6 | false | 0 | 0 | We use it a lot for small administrative tools against SharePoint. In particular it is fantastic for exploring the API against real data (with all its real life quirks). Development iterations are faster and you can't always install Visual Studio on production servers. | 4 | 8 | 0 | I am learning IronPython along wiht Python. I'm curious what kinds of tasks you tend to use IronPython to tackle more often than standard .NET languages.
Thanks for any example. | IronPython: What kind of jobs you ever done with IronPython instead of standard .NET languages (e.g., C#) | 0 | 0 | 0 | 797 |
1,912,971 | 2009-12-16T07:36:00.000 | 0 | 1 | 1 | 0 | python,ironpython | 10,438,778 | 6 | false | 0 | 0 | I use IronPython for a few different purposes:
An alternative to Powershell when I need to script something and invoke a .NET library, or when the script is complicated enough to warrant a real programming language.
Embedding in a .NET app for scriptable plugins.
Prototyping and testing .NET libs in immediate mode. This is way easier than making a test project in C# | 4 | 8 | 0 | I am learning IronPython along wiht Python. I'm curious what kinds of tasks you tend to use IronPython to tackle more often than standard .NET languages.
Thanks for any example. | IronPython: What kind of jobs you ever done with IronPython instead of standard .NET languages (e.g., C#) | 0 | 0 | 0 | 797 |
1,912,971 | 2009-12-16T07:36:00.000 | 3 | 1 | 1 | 0 | python,ironpython | 1,913,029 | 6 | false | 0 | 0 | In the day job, it's my standard language for those little bits of build process that are too much for .bat files and not heavyweight enough to demand a separate executable; this includes anything that could use a little bit of XML processing or reflection -- generating Wix files with systematic handling of 32 and 64 bit installs, for example. It beats out PowerShell in this role because IronPython is an XCOPY install onto build machines.
It's also very useful for prototyping fragments of code against unfamiliar or complex APIs (WMI and Active Directory being the usual ones for me), or diagnosing problems in code using those APIs (like sniffing out the oddities that happen when you're on the domain controller, rather than elsewhere). | 4 | 8 | 0 | I am learning IronPython along wiht Python. I'm curious what kinds of tasks you tend to use IronPython to tackle more often than standard .NET languages.
Thanks for any example. | IronPython: What kind of jobs you ever done with IronPython instead of standard .NET languages (e.g., C#) | 0.099668 | 0 | 0 | 797 |
1,912,971 | 2009-12-16T07:36:00.000 | 0 | 1 | 1 | 0 | python,ironpython | 1,913,220 | 6 | false | 0 | 0 | Created a load tool for a MS Group Chat Server plugin. The GC API is in C#. I wrapped that into a dll and had FePy load it. The main application, configuration scripts etc are all in FePy. | 4 | 8 | 0 | I am learning IronPython along wiht Python. I'm curious what kinds of tasks you tend to use IronPython to tackle more often than standard .NET languages.
Thanks for any example. | IronPython: What kind of jobs you ever done with IronPython instead of standard .NET languages (e.g., C#) | 0 | 0 | 0 | 797 |
1,916,009 | 2009-12-16T16:49:00.000 | 0 | 0 | 0 | 0 | python,google-app-engine | 1,916,805 | 3 | false | 1 | 0 | If you want some handler on your GAE app (including one for a scheduled task, reception of messages, web page visits, etc) to store some new information in such a way that some handler in the future can recover that information, then GAE's storage is the only good general way (memcache could expire from under you, for example). Not sure what you mean by "tables" (?!), but guessing that you actually mean GAE's storage the answer is "yes". (Under very specific circumstances you might want to put that data to some different place on the network, such as your visitor's browser e.g. via cookies, or an Amazon storage instance, etc, but it does not appear to me that those specific circumstances are appliable to your use case). | 2 | 1 | 0 | I want to load info from another site (this part is done), but i am doing this every time the page is loaded and that wont do. So i was thinking of having a variable in a table of settings like 'last checked bbc site' and when the page loads it would check if its been long enough since last check to check again. Is there anything silly about doing it that way?
Also do i absolutely have to use tables to store 1 off variables like this setting? | App engine app design questions | 0 | 0 | 0 | 127 |
1,916,009 | 2009-12-16T16:49:00.000 | 2 | 0 | 0 | 0 | python,google-app-engine | 1,917,064 | 3 | true | 1 | 0 | I think there are 2 options that would work for you, besides creating a entity in the datastore to keep track of "last visited time".
One way is to just check the external page periodically, using the cron api as described by jldupont.
The second way is to store the last visited time in memcache. Although memcache is not permanent, it doesn't have to be if you are only storing last refresh times. If your entry in memcache were to disappear for some reason, the worst that would happen would be that you would fetch the page again, and update memcache with the current date/time.
The first way would be best if you want to check the external page at regular intervals. The second way might be better if you want to check the external page only when a user clicks on your page, and you haven't fetched that page yourself in the recent past. With this method, you aren't wasting resources fetching the external page unless someone is actually looking for data related to it. | 2 | 1 | 0 | I want to load info from another site (this part is done), but i am doing this every time the page is loaded and that wont do. So i was thinking of having a variable in a table of settings like 'last checked bbc site' and when the page loads it would check if its been long enough since last check to check again. Is there anything silly about doing it that way?
Also do i absolutely have to use tables to store 1 off variables like this setting? | App engine app design questions | 1.2 | 0 | 0 | 127 |
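A hedged sketch of the memcache option described in the accepted answer above (the key name and the 15-minute interval are illustrative assumptions):
import time
from google.appengine.api import memcache

CHECK_INTERVAL = 15 * 60   # seconds between checks of the external site

def should_refresh():
    last = memcache.get('last_checked_bbc')
    if last is None or time.time() - last > CHECK_INTERVAL:
        memcache.set('last_checked_bbc', time.time())
        return True
    return False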
1,916,579 | 2009-12-16T18:14:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine | 64,592,250 | 7 | false | 1 | 0 | Update on October 2020:
I tried using os.environ["SERVER_SOFTWARE"] and os.environ["APPENGINE_RUNTIME"] but both didn't work so I just logged all keys from the results from os.environ.
In these keys, there was GAE_RUNTIME which I used to check if I was in the local environment or cloud environment.
The exact key might change or you could add your own in app.yaml but the point is, log os.environ, perhaps by adding to a list in a test webpage, and use its results to check your environment. | 1 | 41 | 0 | Whilst developing I want to handle some things slight differently than I will when I eventually upload to the Google servers.
Is there a quick test that I can do to find out if I'm in the SDK or live? | In Python, how can I test if I'm in Google App Engine SDK? | 0 | 0 | 0 | 8,997 |
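A commonly used check on the older SDKs (hedged: as the answer above notes, the available environment variables have changed between SDK versions, so log os.environ and verify against your runtime):
import os

def running_locally():
    return os.environ.get('SERVER_SOFTWARE', '').startswith('Development')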
1,916,928 | 2009-12-16T19:10:00.000 | 5 | 1 | 0 | 0 | python,bytearray,decode | 1,917,086 | 5 | false | 0 | 0 | To convert to hex:
hexdata = ''.join('%02x' % ord(byte) for byte in bindata)
To reverse every other hex character (if I'm understanding correctly):
hexdata = ''.join(('%02x' % ord(byte))[::-1] for byte in bindata) | 1 | 0 | 0 | I have a GSM date/time stamp from a PDU encoded SMS it is formatted as so
\x90,\x21,\x51,\x91,\x40,\x33
format yy,mm,dd,hh,mm,ss
I have read them from a binary file into a byte array. I want to convert them to a string but without doing any decoding I want to end up with a string that contains 902151914033. I then need to reverse each 2 characters in the string.
Can anyone give me some pointers?
Many Thanks | convert byte array to string without interpreting the bytes? | 0.197375 | 0 | 0 | 10,490 |
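Putting the answer's two steps together on the sample bytes from the question (a Python 2 byte string; swapping the nibbles of each semi-octet recovers the timestamp fields):
bindata = '\x90\x21\x51\x91\x40\x33'

raw = ''.join('%02x' % ord(b) for b in bindata)              # '902151914033'
swapped = ''.join(('%02x' % ord(b))[::-1] for b in bindata)  # '091215190433'
print(raw)
print(swapped)   # yy=09 mm=12 dd=15 hh=19 mm=04 ss=33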
1,917,958 | 2009-12-16T21:41:00.000 | 11 | 1 | 1 | 0 | python,coding-style,import,module,conventions | 1,918,234 | 2 | false | 0 | 0 | Alan's given a great answer, but I wanted to add that for your question 1 it depends on what you mean by 'imports'.
If you use the from C import x syntax, then x becomes available in the namespace of B. If in A you then do import B, you will have access to x from A as B.x.
It's not so much bad practice as potentially confusing, and will make debugging etc harder as you won't necessarily know where the objects have come from. | 1 | 13 | 0 | I have two related Python 'import' questions. They are easily testable, but I want answers that are language-defined and not implementation-specific, and I'm also interested in style/convention, so I'm asking here instead.
1)
If module A imports module B, and module B imports module C, can code in module A reference module C without an explicit import? If so, am I correct in assuming this is bad practice?
2)
If I import module A.B.C, does that import modules A and A.B as well? If so, is it by convention better to explicitly import A; import A.B; import A.B.C? | Python import mechanics | 1 | 0 | 0 | 3,559 |
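A small illustration of the answer's point about from C import x, using three hypothetical modules: c.py contains x = 42, and b.py contains both from c import x and import c. A module a.py could then do:
# a.py -- assumes the hypothetical b.py and c.py described above
import b

print(b.x)      # 42: x was bound into b's namespace by "from c import x"
print(b.c.x)    # 42: c itself is an attribute of b because b also did "import c"
# print(c.x)    # NameError: importing b does not bind the name 'c' in this module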
1,918,420 | 2009-12-16T23:00:00.000 | 0 | 0 | 1 | 0 | python,pdf,pypdf | 6,202,708 | 4 | false | 0 | 0 | Darrell's class can be modified slightly to produce a multi-level table of contents for a pdf (in the manner of pdftoc in the pdftk toolkit.)
My modification adds one more parameter to _setup_page_id_to_num, an integer "level" which defaults to 1. Each invocation increments the level. Instead of storing just the page number in the result, we store the pair of page number and level. Appropriate modifications should be applied when using the returned result.
I am using this to implement the "PDF Hacks" browser-based page-at-a-time document viewer with a sidebar table of contents which reflects LaTeX section, subsection etc bookmarks. I am working on a shared system where pdftk can not be installed but where python is available. | 1 | 5 | 0 | i would like to use pyPdf to split a pdf file based on the outline where each destination in the outline refers to a different page within the pdf.
example outline:
main --> points to page 1
sect1 --> points to page 1
sect2 --> points to page 15
sect3 --> points to page 22
it is easy within pyPdf to iterate over each page of the document or each destination in the document's outline; however, i cannot figure out how to get the page number where the destination points.
does anybody know how to find the referencing page number for each destination in the outline? | split a pdf based on outline | 0 | 0 | 0 | 5,099 |
1,918,456 | 2009-12-16T23:06:00.000 | 1 | 0 | 1 | 0 | python,dictionary,map,hashtable,data-structures | 1,918,576 | 10 | false | 0 | 0 | If you're actually storing millions of unique values, why not use a dictionary?
Store: d[hash(key)/32] |= 2**(hash(key)%32)
Check: d[hash(key)/32] & 2**(hash(key)%32)
If you have billions of entries, use a numpy array of size (2**32)/32, instead. (Because, after all, you only have 4 billion possible values to store, anyway). | 5 | 6 | 0 | I'm storing millions, possibly billions of 4 byte values in a hashtable and I don't want to store any of the keys. I expect that only the hashes of the keys and the values will have to be stored. This has to be fast and all kept in RAM. The entries would still be looked up with the key, unlike set()'s.
What is an implementation of this for Python? Is there a name for this?
Yes, collisions are allowed and can be ignored.
(I can make an exception for collisions, the key can be stored for those. Alternatively, collisions can just overwrite the previously stored value.) | What is a hashtable/dictionary implementation for Python that doesn't store the keys? | 0.019997 | 0 | 0 | 2,326 |
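A hedged sketch of the bit-set idea from the first answer above (note it records only membership of hashed keys, not the 4-byte values themselves; defaultdict avoids KeyError on first insertion):
from collections import defaultdict

bits = defaultdict(int)

def add(key):
    h = hash(key) & 0xFFFFFFFF
    bits[h // 32] |= 1 << (h % 32)

def contains(key):
    h = hash(key) & 0xFFFFFFFF
    return bool(bits.get(h // 32, 0) & (1 << (h % 32)))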
1,918,456 | 2009-12-16T23:06:00.000 | 3 | 0 | 1 | 0 | python,dictionary,map,hashtable,data-structures | 1,918,551 | 10 | false | 0 | 0 | It's the good old space vs. runtime tradeoff: you can have constant time with linear space usage for the keys in a hashtable. Or you can store the key implicitly and use log n time by using a binary tree. The (binary) hash of a value gives you the path in the tree where it will be stored.
What is an implementation of this for Python? Is there a name for this?
Yes, collisions are allowed and can be ignored.
(I can make an exception for collisions, the key can be stored for those. Alternatively, collisions can just overwrite the previously stored value.) | What is a hashtable/dictionary implementation for Python that doesn't store the keys? | 0.059928 | 0 | 0 | 2,326 |
1,918,456 | 2009-12-16T23:06:00.000 | 1 | 0 | 1 | 0 | python,dictionary,map,hashtable,data-structures | 1,918,544 | 10 | false | 0 | 0 | It's not what you asked for buy why not consider Tokyo Cabinet or BerkleyDB for this job? It won't be in memory but you are trading performance for greater storage capacity. You could still keep your list in memory and use the database only to check existence. | 5 | 6 | 0 | I'm storing millions, possibly billions of 4 byte values in a hashtable and I don't want to store any of the keys. I expect that only the hashes of the keys and the values will have to be stored. This has to be fast and all kept in RAM. The entries would still be looked up with the key, unlike set()'s.
What is an implementation of this for Python? Is there a name for this?
Yes, collisions are allowed and can be ignored.
(I can make an exception for collisions, the key can be stored for those. Alternatively, collisions can just overwrite the previously stored value.) | What is a hashtable/dictionary implementation for Python that doesn't store the keys? | 0.019997 | 0 | 0 | 2,326 |
1,918,456 | 2009-12-16T23:06:00.000 | 2 | 0 | 1 | 0 | python,dictionary,map,hashtable,data-structures | 1,918,534 | 10 | false | 0 | 0 | Build your own b-tree in RAM.
Memory use:
(4 bytes) comparison hash value
(4 bytes) index of next leaf if hash <= comparison OR if negative index of value
(4 bytes) index of next leaf if hash > comparison OR if negative index of value
12 bytes per b-tree node for the b-tree. More overhead for the values (see below).
How you structure this in Python - aren't there "native arrays" of 32bit integers upported with almost no extra memory overhead...? what are they called... anyway those.
Separate ordered array of subarrays each containing one or more values. The "indexes of value" above are indexes into this big array, allowing retrieval of all values matching the hash.
This assumes a 32bit hash. You will need more bytes per b-tree node if you have
greater than 2^31-1 entries or a larger hash.
BUT Spanner in the works perhaps: Note that you will not be able, if you are not storing the key values, to verify that a hash value looked up corresponds only to your key unless through some algorithmic or organisational mechanism you have guaranteed that no two keys will have the same hash. Quite a serious issue here. Have you considered it? :) | 5 | 6 | 0 | I'm storing millions, possibly billions of 4 byte values in a hashtable and I don't want to store any of the keys. I expect that only the hashes of the keys and the values will have to be stored. This has to be fast and all kept in RAM. The entries would still be looked up with the key, unlike set()'s.
What is an implementation of this for Python? Is there a name for this?
Yes, collisions are allowed and can be ignored.
(I can make an exception for collisions, the key can be stored for those. Alternatively, collisions can just overwrite the previously stored value.) | What is a hashtable/dictionary implementation for Python that doesn't store the keys? | 0.039979 | 0 | 0 | 2,326 |
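A hedged, toy sketch of the tree layout described in the answer above, using the array module so that only hashes and values are kept (hashes are masked to 31 bits to fit signed slots, and the tree is not balanced, so this is an illustration rather than production code):
from array import array

class HashKeyedTree(object):
    def __init__(self):
        self._hash = array('i')    # comparison hash per node
        self._left = array('i')    # child index, or -1 if none
        self._right = array('i')
        self._value = array('i')   # the 4-byte payload per node

    def put(self, key, value):
        h = hash(key) & 0x7FFFFFFF
        if not len(self._hash):
            return self._append(h, value)
        i = 0
        while True:
            if h == self._hash[i]:
                self._value[i] = value           # hash collision: overwrite, as the question allows
                return
            side = self._left if h < self._hash[i] else self._right
            if side[i] == -1:
                side[i] = len(self._hash)
                return self._append(h, value)
            i = side[i]

    def get(self, key, default=None):
        h = hash(key) & 0x7FFFFFFF
        i = 0 if len(self._hash) else -1
        while i != -1:
            if h == self._hash[i]:
                return self._value[i]
            i = self._left[i] if h < self._hash[i] else self._right[i]
        return default

    def _append(self, h, value):
        for arr, v in ((self._hash, h), (self._left, -1), (self._right, -1), (self._value, value)):
            arr.append(v)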
1,918,456 | 2009-12-16T23:06:00.000 | 1 | 0 | 1 | 0 | python,dictionary,map,hashtable,data-structures | 1,918,532 | 10 | false | 0 | 0 | Hash table has to store keys, unless you provide a hash function that gives absolutely no collisions, which is nearly impossible.
There is, however, if your keys are string-like, there is a very space-efficient data structure - directed acyclic word graph (DAWG). I don't know any Python implementation though. | 5 | 6 | 0 | I'm storing millions, possibly billions of 4 byte values in a hashtable and I don't want to store any of the keys. I expect that only the hashes of the keys and the values will have to be stored. This has to be fast and all kept in RAM. The entries would still be looked up with the key, unlike set()'s.
What is an implementation of this for Python? Is there a name for this?
Yes, collisions are allowed and can be ignored.
(I can make an exception for collisions, the key can be stored for those. Alternatively, collisions can just overwrite the previously stored value.) | What is a hashtable/dictionary implementation for Python that doesn't store the keys? | 0.019997 | 0 | 0 | 2,326 |
1,920,246 | 2009-12-17T08:34:00.000 | 1 | 0 | 0 | 1 | python,file,system,iso | 1,920,536 | 1 | true | 0 | 0 | Following 'do not reinvent the wheel' I would try using mkisofs (part of cdrtools) (although originating on Linux, I think there are windows builds floating around the net). | 1 | 3 | 0 | I'm making a cross-platform (Windows and OS X) with wxPython that will be compiled to exe later.
Is it possible for me to create ISO files for CDs or DVDs in Python to burn a data disc with?
Thanks,
Chris | python write CD/DVD iso file | 1.2 | 0 | 0 | 5,258 |
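A hedged sketch of shelling out to mkisofs as the answer suggests (mkisofs must be installed and on PATH; paths and the volume label are illustrative assumptions):
import subprocess

def build_iso(source_dir, iso_path, label='BACKUP'):
    cmd = ['mkisofs', '-o', iso_path, '-V', label, '-J', '-R', source_dir]
    subprocess.check_call(cmd)        # raises CalledProcessError if mkisofs fails

# build_iso('C:/data_to_burn', 'C:/output/backup.iso')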
1,920,805 | 2009-12-17T10:33:00.000 | 7 | 0 | 1 | 0 | python,ruby,concurrency,haskell,multithreading | 1,920,818 | 8 | false | 0 | 0 | The current version of Ruby 1.9(YARV- C based version) has native threads but has the problem of GIL. As I know Python also has the problem of GIL.
However, both Jython and JRuby (mature Java implementations of Python and Ruby respectively) provide native multithreading: no green threads and no GIL.
Don't know about Haskell. | 5 | 16 | 0 | We are planning to write a highly concurrent application in any of the Very-High Level programming languages.
1) Do Python, Ruby, or Haskell support true multithreading?
2) If a program contains threads, will a Virtual Machine automatically assign work to multiple cores (or to physical CPUs if there is more than 1 CPU on the mainboard)?
True multithreading = multiple independent threads of execution utilize the resources provided by multiple cores (not by only 1 core).
False multithreading = threads emulate multithreaded environments without relying on any native OS capabilities. | Python, Ruby, Haskell - Do they provide true multithreading? | 1 | 0 | 0 | 7,226 |
1,920,805 | 2009-12-17T10:33:00.000 | 16 | 0 | 1 | 0 | python,ruby,concurrency,haskell,multithreading | 1,920,979 | 8 | false | 0 | 0 | The GHC compiler will run your program on multiple OS threads (and thus multiple cores) if you compile with the -threaded option and then pass +RTS -N<x> -RTS at runtime, where <x> = the number of OS threads you want. | 5 | 16 | 0 | We are planning to write a highly concurrent application in any of the Very-High Level programming languages.
1) Do Python, Ruby, or Haskell support true multithreading?
2) If a program contains threads, will a Virtual Machine automatically assign work to multiple cores (or to physical CPUs if there is more than 1 CPU on the mainboard)?
True multithreading = multiple independent threads of execution utilize the resources provided by multiple cores (not by only 1 core).
False multithreading = threads emulate multithreaded environments without relying on any native OS capabilities. | Python, Ruby, Haskell - Do they provide true multithreading? | 1 | 0 | 0 | 7,226 |
1,920,805 | 2009-12-17T10:33:00.000 | 1 | 0 | 1 | 0 | python,ruby,concurrency,haskell,multithreading | 1,920,843 | 8 | false | 0 | 0 | For real concurrency, you probably want to try Erlang. | 5 | 16 | 0 | We are planning to write a highly concurrent application in any of the Very-High Level programming languages.
1) Do Python, Ruby, or Haskell support true multithreading?
2) If a program contains threads, will a Virtual Machine automatically assign work to multiple cores (or to physical CPUs if there is more than 1 CPU on the mainboard)?
True multithreading = multiple independent threads of execution utilize the resources provided by multiple cores (not by only 1 core).
False multithreading = threads emulate multithreaded environments without relying on any native OS capabilities. | Python, Ruby, Haskell - Do they provide true multithreading? | 0.024995 | 0 | 0 | 7,226 |
1,920,805 | 2009-12-17T10:33:00.000 | -2 | 0 | 1 | 0 | python,ruby,concurrency,haskell,multithreading | 1,921,244 | 8 | false | 0 | 0 | Haskell is suitable for anything.
Python has the processing module, which (I think, though I'm not sure) helps to avoid GIL problems, so it is suitable for anything too.
But in my opinion, the best thing you can do is to pick the highest-level language possible with a static type system for big and huge things. Today those languages are OCaml, Haskell, and Erlang.
If you want to develop a small thing, Python is good. But when things become bigger, all of Python's benefits are eaten by myriads of tests.
I haven't used Ruby. I still think that Ruby is a toy language. (Or at least there's no reason to learn Ruby when you already know Python; better to read the SICP book.) | 5 | 16 | 0 | We are planning to write a highly concurrent application in any of the Very-High Level programming languages.
1) Do Python, Ruby, or Haskell support true multithreading?
2) If a program contains threads, will a Virtual Machine automatically assign work to multiple cores (or to physical CPUs if there is more than 1 CPU on the mainboard)?
True multithreading = multiple independent threads of execution utilize the resources provided by multiple cores (not by only 1 core).
False multithreading = threads emulate multithreaded environments without relying on any native OS capabilities. | Python, Ruby, Haskell - Do they provide true multithreading? | -0.049958 | 0 | 0 | 7,226 |
1,920,805 | 2009-12-17T10:33:00.000 | 1 | 0 | 1 | 0 | python,ruby,concurrency,haskell,multithreading | 1,921,624 | 8 | false | 0 | 0 | I second the choice of Erlang. Erlang can support distributed, highly concurrent programming out of the box. It does not matter whether you call it "multi-threading" or "multi-processing". Two important elements to consider are the level of concurrency and the fact that Erlang processes do not share state.
No shared state among processes is a good thing. | 5 | 16 | 0 | We are planning to write a highly concurrent application in any of the Very-High Level programming languages.
1) Do Python, Ruby, or Haskell support true multithreading?
2) If a program contains threads, will a Virtual Machine automatically assign work to multiple cores (or to physical CPUs if there is more than 1 CPU on the mainboard)?
True multithreading = multiple independent threads of execution utilize the resources provided by multiple cores (not by only 1 core).
False multithreading = threads emulate multithreaded environments without relying on any native OS capabilities. | Python, Ruby, Haskell - Do they provide true multithreading? | 0.024995 | 0 | 0 | 7,226 |
1,920,997 | 2009-12-17T11:04:00.000 | 3 | 0 | 0 | 1 | python,ide | 1,921,032 | 5 | false | 0 | 0 | An IDE just for running scripts? You can use any IDE you like, but if you only need to run Python scripts, you can do it like this (a few more variants follow below):
python.exe pythonScript.py | 2 | 0 | 0 | Can anyone please tell me an IDE for running Python programs? Is it possible to run the program through the command line? | How can I run a Python script on Windows? | 0.119427 | 0 | 0 | 14,781 |
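A slightly fuller set of command-prompt variants of the invocation above; the install directory C:\Python26 and the script path are hypothetical, so adjust them to your machine.

```
rem if python.exe is already on the PATH
python myScript.py

rem otherwise, call the interpreter by its full install path
C:\Python26\python.exe C:\projects\myScript.py

rem or add Python to the PATH for the current cmd session first
set PATH=%PATH%;C:\Python26
python myScript.py
```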
1,920,997 | 2009-12-17T11:04:00.000 | 0 | 0 | 0 | 1 | python,ide | 1,921,097 | 5 | false | 0 | 0 | PyDev and Komodo Edit are two nice Python IDEs on Windows.
I also like the SciTE text editor very much.
These three solutions all make it possible to run Python scripts. | 2 | 0 | 0 | Can anyone please tell me an IDE for running Python programs? Is it possible to run the program through the command line? | How can I run a Python script on Windows? | 0 | 0 | 0 | 14,781 |
1,921,559 | 2009-12-17T12:49:00.000 | 2 | 0 | 0 | 0 | python,django,postgresql | 1,922,486 | 3 | true | 1 | 0 | "What I need to know is whether the proposed solution is suitable for such a small project and could not be easily replaced by less complicated languages/frameworks/DBMSes like PHP with MySQL, etc."
Yes. It's suitable.
No. Nothing is "less complicated" than Django. The PHP language may appear less complicated than Python, but you'll do more work to create the site.
With Django, you define the model, define the non-administrative views and you're done. For simple sites this can take as little as 20 minutes. The built-in admin is more valuable than you can imagine.
MySQL is not "less complicated" than PostgreSQL -- they're the same thing. (A small model sketch follows below.) | 2 | 0 | 0 | OK, I have a question from a "client" perspective. Let's say we are talking about a website designed for distribution: products + their logistics info.
Definitely fewer than 2k rows, rarely changed but often accessed. A typical row with several columns will have to include a picture, so it might be a bit "heavy". A website built with the Django framework, coded in Python with a PostgreSQL database, was proposed to me.
Is it efficient? Is it cost-efficient? For such a small purpose, is it really needed, and is there a cheaper but still reliable solution?
From what I know, the proposed solution is efficient for a programmer: loads of features, flexibility, and a clear separation between the code, content, and graphics layers. It makes it possible to build really complicated websites and databases, and thus the cost of the service is higher.
What I need to know is whether the proposed solution is suitable for such a small project and could not be easily replaced by less complicated languages/frameworks/DBMSes like PHP with MySQL, etc.
Please help :]
and sorry for not editing the question in the first place | The best solution for a distribution website? | 1.2 | 0 | 0 | 138 |
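As a rough illustration of how little code the "define the model and you're done" step in the answer above involves; the model name and fields below are hypothetical, not taken from the question, and ImageField additionally needs PIL installed.

```python
# models.py -- a hypothetical catalogue entry for the distribution site
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=200)
    logistics_info = models.TextField(blank=True)
    # ImageField stores only the file path in the database; the picture
    # itself is written under MEDIA_ROOT/products/.
    picture = models.ImageField(upload_to="products/")

    def __unicode__(self):          # Python 2 / Django 1.x-era convention
        return self.name

# admin.py -- one registration line and the built-in admin UI can edit it
from django.contrib import admin
admin.site.register(Product)
```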
1,921,559 | 2009-12-17T12:49:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql | 1,921,584 | 3 | false | 1 | 0 | I would not comment on Django & Python. But a simpler way to store images would be to store just the "path" (the location in the directory) in the tables, and load the file from that path in your application/framework. | 1 | 0 | 0 | OK, I have a question from a "client" perspective. Let's say we are talking about a website designed for distribution: products + their logistics info.
Definitely fewer than 2k rows, rarely changed but often accessed. A typical row with several columns will have to include a picture, so it might be a bit "heavy". A website built with the Django framework, coded in Python with a PostgreSQL database, was proposed to me.
Is it efficient? Is it cost-efficient? For such a small purpose, is it really needed, and is there a cheaper but still reliable solution?
From what I know, the proposed solution is efficient for a programmer: loads of features, flexibility, and a clear separation between the code, content, and graphics layers. It makes it possible to build really complicated websites and databases, and thus the cost of the service is higher.
What I need to know is whether the proposed solution is suitable for such a small project and could not be easily replaced by less complicated languages/frameworks/DBMSes like PHP with MySQL, etc.
Please help :]
and sorry for not editing the question in the first place | The best solution for a distribution website? | 0 | 0 | 0 | 138 |
1,921,771 | 2009-12-17T13:26:00.000 | 1 | 0 | 1 | 0 | python,django | 1,921,921 | 10 | false | 1 | 0 | Since Django just expects a view to be a callable object, you can put them wherever you like on your PYTHONPATH. So you could, for instance, just make a new package myapp.views and put the views into multiple modules there (a layout sketch follows below). You will naturally have to update your urls.py and any other modules that reference these view callables. | 1 | 172 | 0 | My views.py has become too big and it's hard to find the right view.
How do I split it into several files and then import them? Does it involve any speed loss?
Can I do the same with models.py? | Split views.py in several files | 0.019997 | 0 | 0 | 42,864 |
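A hedged sketch of the myapp.views package layout suggested above, with invented module and view names, plus the matching urls.py update written in Django 1.x-era URL syntax.

```python
# Hypothetical layout:
#   myapp/views/__init__.py   (can stay empty)
#   myapp/views/products.py   (product-related views)
#   myapp/views/accounts.py   (account-related views)

# urls.py -- point each pattern at the module that now holds the view
from django.conf.urls.defaults import patterns, url

from myapp.views import accounts, products

urlpatterns = patterns('',
    url(r'^products/$', products.product_list),
    url(r'^accounts/login/$', accounts.login_view),
)
```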
1,921,771 | 2009-12-17T13:26:00.000 | 10 | 0 | 1 | 0 | python,django | 1,921,892 | 10 | false | 1 | 0 | Basically, you can put your code wherever you wish. Just make sure you change the import statements accordingly, e.g. for the views referenced in urls.py.
Not knowing your actual code, it's hard to suggest something meaningful. Maybe you can use some kind of filename prefix, e.g. views_helper.py, views_fancy.py, views_that_are_not_so_often_used.py, and so on ...
Another option would be to create a views package with an __init__.py in which you import all the sub-views (a minimal sketch follows below). If you need a large number of files, you can create more nested sub-packages as your views grow ... | 1 | 172 | 0 | My views.py has become too big and it's hard to find the right view.
How do I split it into several files and then import them? Does it involve any speed loss?
Can I do the same with models.py? | Split views.py in several files | 1 | 0 | 0 | 42,864 |
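A minimal sketch of the __init__.py re-export variant described above, reusing the hypothetical views_helper.py / views_fancy.py names from the answer; existing code that imports from myapp.views keeps working unchanged.

```python
# myapp/views/__init__.py
# Re-export everything so code that already does
# "from myapp.views import some_view" keeps working after the split.
from myapp.views.views_helper import *
from myapp.views.views_fancy import *

# myapp/views/views_fancy.py -- one of the hypothetical sub-modules
from django.http import HttpResponse

def fancy_view(request):
    return HttpResponse("hello from the fancy part of the views package")
```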