Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,885,537 |
2012-06-04T17:34:00.000
| 5 | 0 | 1 | 0 |
python,python-3.x,python-2.7
| 10,885,580 | 3 | false | 0 | 0 |
Python 3.x's input is Python 2.x's raw_input. The function has simply been renamed, since the old 2.x input was broken by design and was therefore eliminated in 3.x.
| 2 | 24 | 0 |
I have tried many times to run raw_input("") in the Python console, but it gives an error. Moreover, I have watched some videos that may have been made with an older version of Python. So is input("") the only method, and is there any reason why raw_input("") was discarded in the new version?
|
raw_input("") has been eliminated from python 3.2
| 0.321513 | 0 | 0 | 68,511 |
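A minimal illustration of the rename described in the answer above; the prompt text is arbitrary, and the Python 2 lines are shown only as comments for comparison.

```python
# Python 3: input() returns the raw string, exactly like Python 2's raw_input()
name = input("Enter your name: ")   # typing  Alice  gives the str 'Alice'
print("Hello, " + name)

# Python 2 equivalent (for comparison only):
#   name = raw_input("Enter your name: ")
# Python 2's input() evaluated the typed text as an expression, which is why
# it was considered broken by design and removed in Python 3.
```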
10,885,984 |
2012-06-04T18:07:00.000
| 2 | 0 | 0 | 0 |
python,image-processing,python-imaging-library,rgb,pixel
| 10,886,261 | 6 | false | 0 | 0 |
As a basic optimization, it may save a little time if you create 3 lookup tables, one each for R, G, and B, to map the input value (0-255) to the output value (0-255). Looking up an array entry is probably faster than multiplying by a decimal value and rounding the result to an integer. Not sure how much faster.
Of course, this assumes that the values should always map the same.
| 2 | 2 | 1 |
I have an image that was created using a Bayer filter and the colors are slightly off. I need to multiply the R, G and B of each pixel by a certain factor (a different factor for each of R, G and B) to get the correct color. I am using the Python Imaging Library and of course writing in Python. Is there any way to do this efficiently?
Thanks!
|
Multiply each pixel in an image by a factor
| 0.066568 | 0 | 0 | 12,396 |
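A sketch of the lookup-table idea from the answer above using PIL's Image.point; the correction factors and file names are invented for illustration.

```python
from PIL import Image

# Hypothetical per-channel correction factors for R, G and B.
factors = (1.10, 0.95, 1.05)

# Build one 256-entry lookup table per channel; for an RGB image,
# Image.point expects the three tables concatenated into one flat list.
lut = []
for f in factors:
    lut.extend(min(255, int(round(i * f))) for i in range(256))

img = Image.open("bayer_frame.png").convert("RGB")   # placeholder file name
corrected = img.point(lut)
corrected.save("bayer_frame_corrected.png")
```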
10,885,984 |
2012-06-04T18:07:00.000
| 0 | 0 | 0 | 0 |
python,image-processing,python-imaging-library,rgb,pixel
| 64,900,468 | 6 | false | 0 | 0 |
If the type is numpy.ndarray just img = np.uint8(img*factor)
| 2 | 2 | 1 |
I have an image that was created using a Bayer filter and the colors are slightly off. I need to multiply the R, G and B of each pixel by a certain factor (a different factor for each of R, G and B) to get the correct color. I am using the Python Imaging Library and of course writing in Python. Is there any way to do this efficiently?
Thanks!
|
Multiply each pixel in an image by a factor
| 0 | 0 | 0 | 12,396 |
10,886,946 |
2012-06-04T19:16:00.000
| 2 | 0 | 1 | 0 |
python,ipython
| 35,548,911 | 6 | false | 0 | 0 |
One of the useful answers was lost in the comments, so wanted to restate it along with adding a reference for another useful IPython magic function.
First to restate what @EOL said, one way to solve OP's problem is to turn off auto-indentation by first running %autoindent and doing the paste (not needed if you are using %paste, of course).
Now to add more information to what is already there here, one more useful mode in IPython is %doctest_mode which allows you to copy paste example and test snippets from doc strings. This is also useful to execute interactive python session output that you could find in documentation and online forums, without having to first strip out the prompt strings.
| 2 | 101 | 0 |
I want to copy already-indented Python code / whole functions and classes into IPython. Every time I try, the indentation is screwed up and I get the following error message:
IndentationError: unindent does not match any outer indentation level (<ipython-input-23-354f8c8be51b>, line 12)
If you want to paste code into IPython, try the %paste and %cpaste magic functions.
|
How does IPython's magic %paste work?
| 0.066568 | 0 | 0 | 69,070 |
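A short sketch of how the magics mentioned in the answer are used inside an IPython session (shown as comments, since the % commands are IPython syntax rather than plain Python):

```python
# Inside an IPython session, not a plain Python script:
#
#   In [1]: %autoindent     # toggle auto-indentation off before a raw paste
#   In [2]: %paste          # paste and run whatever is on the clipboard
#   In [3]: %cpaste         # or: paste interactively, finish with '--'
#   In [4]: %doctest_mode   # lets you paste '>>> ' style snippets directly
```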
10,886,946 |
2012-06-04T19:16:00.000
| 144 | 0 | 1 | 0 |
python,ipython
| 10,886,947 | 6 | true | 0 | 0 |
You can't paste into IPython directly. These are the steps:
Copy the lines you want to copy into IPython into the clipboard
Enter %paste into IPython
Press enter
Profit!
| 2 | 101 | 0 |
I want to copy already-indented Python code / whole functions and classes into IPython. Every time I try, the indentation is screwed up and I get the following error message:
IndentationError: unindent does not match any outer indentation level (<ipython-input-23-354f8c8be51b>, line 12)
If you want to paste code into IPython, try the %paste and %cpaste magic functions.
|
How does IPython's magic %paste work?
| 1.2 | 0 | 0 | 69,070 |
10,887,836 |
2012-06-04T20:23:00.000
| 1 | 0 | 0 | 1 |
python,windows,performance,opencv
| 12,700,150 | 2 | false | 0 | 0 |
I had the same issue and I found out that it is caused by prolonged exposure. It may be that the Windows drivers increased the exposure to increase the brightness of the picture. Try pointing your camera at a light source, or manually set a decreased exposure.
| 1 | 1 | 1 |
I have built a simple webcam recorder on Linux which works quite well.
I get ~25fps video and good audio.
I am porting the recorder on windows (win7) and while it works, it is unusable.
The QueryFrame function takes something more than 350ms, i.e 2.5fps.
The code is in python but the problem really seems to be the lib call.
I tested on the same machine with the same webcam (a logitech E2500).
On windows, I installed openCV v2.2. I cannot check right now but the version might be a bit higher on Ubuntu.
Any idea what could be the problem ?
edit : I've just installed openCV2.4 and I have the same slow speed.
|
QueryFrame very slow on Windows
| 0.099668 | 0 | 0 | 565 |
10,889,378 |
2012-06-04T22:39:00.000
| 8 | 0 | 1 | 0 |
python
| 10,889,432 | 2 | false | 0 | 0 |
No, Python is not statically typed.
In static typing names are bound to both a type and an object (or value), in Python names are only bound to objects. At any time you can reassign a name to an object of a different type, which you cannot do in statically typed languages.
I'm not sure what you mean by needing to declare your variables beforehand, but my guess is that you are actually just creating an empty list or dictionary and assigning it to a name.
| 1 | 1 | 0 |
I know Python is dynamically typed, duck typed and also strongly typed. In some cases, we have to make sure a variable is declared as a list or a dictionary beforehand in order to use it... so can I say Python is also a statically typed language?
|
Is Python static typed?
| 1 | 0 | 0 | 143 |
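A tiny example of the answer's point that names are bound to objects rather than types, and that "declaring" an empty list is really just an assignment:

```python
x = 42            # the name x is bound to an int object
x = "forty-two"   # the same name is now bound to a str; no static type is attached

items = []        # not a declaration, just binding the name to an empty list
items.append(1)
print(type(x), items)
```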
10,891,589 |
2012-06-05T04:32:00.000
| 1 | 0 | 1 | 0 |
python,memory,large-data
| 10,892,021 | 2 | true | 0 | 0 |
Working with large datasets isn't necessarily going to cause memory complications. As long as you use sound approaches when you view and manipulate your data, you can typically make frugal use of memory.
There are two concepts you need to consider as you're building the models that process your data.
What is the smallest element of your data you need access to in order to perform a given calculation? For example, you might have a 300GB text file filled with numbers. If you're looking to calculate the average of the numbers, read one number at a time to calculate a running average. In this example, the smallest element is a single number in the file, since that is the only element of our data set that we need to consider at any point in time.
How can you model your application such that you access these elements iteratively, one at a time, during that calculation? In our example, instead of reading the entire file at once, we'll read one number from the file at a time. With this approach, we use a tiny amount of memory, but can process an arbitrarily large data set. Instead of passing a reference to your dataset around in memory, pass a view of your dataset, which knows how to load specific elements from it on demand (which can be freed once worked with). This is similar in principle to buffering and is the approach many iterators take (e.g., xrange, open's file object, etc.).
In general, the trick is understanding how to break your problem down into tiny, constant-sized pieces, and then stitching those pieces together one by one to calculate a result. You'll find these tenets of data processing go hand-in-hand with building applications that support massive parallelism, as well.
Looking towards gc is jumping the gun. You've provided only a high-level description of what you are working on, but from what you've said, there is no reason you need to complicate things by poking around in memory management yet. Depending on the type of analytics you are doing, consider investigating numpy which aims to lighten the burden of heavy statistical analysis.
| 1 | 4 | 0 |
I am working on a long running Python program (a part of it is a Flask API, and the other realtime data fetcher).
Both my long running processes iterate, quite often (the API one might even do so hundreds of times a second) over large data sets (second by second observations of certain economic series, for example 1-5MB worth of data or even more). They also interpolate, compare and do calculations between series etc.
What techniques, for the sake of keeping my processes alive, can I practice when iterating / passing as parameters / processing these large data sets? For instance, should I use the gc module and collect manually?
UPDATE
I am originally a C/C++ developer and would have NO problem (and would even enjoy) writing parts in C++. I simply have 0 experience doing so. How do I get started?
Any advice would be appreciated.
Thanks!
|
Iterating over a large data set in long running Python process - memory issues?
| 1.2 | 0 | 0 | 2,438 |
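A minimal sketch of the "smallest element, iterated one at a time" idea from the accepted answer, computing a running average over a large file of numbers; the file name is a placeholder.

```python
def running_average(path):
    """Average the numbers in a huge file without ever loading it whole."""
    total = 0.0
    count = 0
    with open(path) as fh:      # the file object is itself an iterator
        for line in fh:         # only one line is held in memory at a time
            total += float(line)
            count += 1
    return total / count if count else 0.0

# print(running_average("observations.txt"))   # hypothetical data file
```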
10,893,554 |
2012-06-05T07:52:00.000
| 1 | 0 | 0 | 1 |
python
| 13,256,718 | 1 | false | 0 | 0 |
Use logging.handlers.TimedRotatingFileHandler(filename, when='H', interval=1)
| 1 | 0 | 0 |
I have a function that makes calls to a logger almost every second, however, I only want to log information an hour before the logfile rotates.
|
TimedRotatingFileHandler: Log only when detected time to rotate
| 0.197375 | 0 | 0 | 239 |
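A minimal setup for the handler named in the answer (file name and format are illustrative); emitting records only during the hour before rotation would still need extra logic on top of this.

```python
import logging
import logging.handlers

logger = logging.getLogger("hourly")
logger.setLevel(logging.INFO)

# Rotate the log file every hour, as suggested in the answer.
handler = logging.handlers.TimedRotatingFileHandler("app.log", when="H", interval=1)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("this record goes to app.log and rolls over on the hour")
```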
10,896,895 |
2012-06-05T11:59:00.000
| 0 | 0 | 0 | 0 |
python,jython,grinder
| 10,921,272 | 1 | false | 0 | 0 |
More detail would be helpful. What information does a thread need before it can construct a valid pause request? Could the thread that sends the play request write this information into some module-level data structure? Then the thread that is doing the pause could read this information and build a valid request.
| 1 | 0 | 0 |
I want to create a script in which an HTTP request is executed; e.g., I play a voice file using an HTTP operation Play() defined in my code.
In the meantime, while the file is being played, I want a Pause() operation to be called which can pause the file being played.
The problem I am facing is that, as the HTTP request for Play() is made, the script gets back control only after the successful/failed execution of Play(), i.e. when the complete play operation has finished, due to which my pause operation returns failure because there isn't any file currently being played.
I can't use 2 scripts because both use the same data (Call-ID)
Any help on this would be highly Appreciated.
Thanks in Advance.
|
Multiple Threads doing different operations in a single Grinder Script
| 0 | 0 | 1 | 219 |
10,897,239 |
2012-06-05T12:23:00.000
| 1 | 0 | 1 | 0 |
python,linux
| 10,897,423 | 1 | false | 0 | 0 |
I used an editor that does code rollups and understood Python syntax, then I looked for rollups that are in unexpected locations. I don't remember if Kate does that. It's not obvious that there is an issue, but it makes it easier when you are looking for an issue.
| 1 | 6 | 0 |
(Warning: Potential flame-war starter. This is however not my goal, the point here is not to discuss the design choices of Python, but to know how to make the best out of it).
Is there a program, script, or method (Unix-based, ideally) to display "virtual" brackets around blocks of code in Python, and to keep them where they are so that the code can still be executed even if the indenting is broken?
I realize that Python only uses indentation to define blocks of code, and that the final program may not contain brackets.
However, I find it very annoying that your program can stop functioning just because of an unfortunate and undetected carriage-return.
So, ideally I would be looking for a plugin in a text editor (kate, gedit...) that would:
Display virtual brackets around blocks of code in my Python program
Keep them in place
Generate dynamically the "correct" Python code with the indentation corresponding to where the brackets belong.
(no flame-war, please !)
|
Virtual brackets in Python
| 0.197375 | 0 | 0 | 262 |
10,898,319 |
2012-06-05T13:33:00.000
| 0 | 0 | 1 | 0 |
python,execute,shellexecute
| 10,900,752 | 2 | false | 0 | 0 |
Put the lines in a class, then just call that class. This is what I would do rather than commenting out the lines.
| 2 | 1 | 0 |
I have a simulation in python which I have run, but half way to the end I got an error. I have already fixed the error. Now I want to execute the same file, but beginning in the line of the error. How can I do that? Execfile, as far as I looked doesn't do that...
|
How do I execute a python file beginning in a specific code line?
| 0 | 0 | 0 | 1,595 |
10,898,319 |
2012-06-05T13:33:00.000
| 5 | 0 | 1 | 0 |
python,execute,shellexecute
| 10,898,354 | 2 | true | 0 | 0 |
You don't.
The easiest solution would be to comment out all the intervening lines, or put them inside an if False: block.
You could also simply save the appropriate portion of the code into a new file and run that instead.
Any of these operations should be trivial in most editors.
| 2 | 1 | 0 |
I have a simulation in python which I have run, but half way to the end I got an error. I have already fixed the error. Now I want to execute the same file, but beginning in the line of the error. How can I do that? Execfile, as far as I looked doesn't do that...
|
How do I execute a python file beginning in a specific code line?
| 1.2 | 0 | 0 | 1,595 |
10,898,846 |
2012-06-05T14:05:00.000
| 5 | 0 | 0 | 0 |
python,packet,icmp,scapy,ttl
| 10,902,783 | 1 | true | 0 | 0 |
What you're saying is essentially you can only test for so many unreachable hosts in a given span of time. One possible reason: many routers rate-limit ICMP messages.
It is much better to test for a ping success to a host before doing something else; this way you have positive confirmation of reachability. The downside is MS Windows blocks pings by default.
If you can't ping first, then you'll need to increase the time between your probes, or raise the ICMP unreachable rate on the router that is returning the ICMP messages.
EDIT:
Based on the comments, it looks like you're hitting a wall for scapy's ability to process traffic. I have improved throughput in the past by sending with scapy and spawning tcpdump in the background to receive traffic.
| 1 | 6 | 0 |
I'm using Scapy to replay some dumped packets in which I change the TTL value. I've been getting very odd results even with TTL=1.
When I run my test hours apart from each other, I can get from roughly 40% to 95% of packets replied to with an ICMP time-exceeded message. Then I can recursively replay unanswered packets and get each time more or less the same percentage of answered packets as before.
Why is that?
I've been sending packets with an interval of 0.1 seconds between each other. This should be ok, right? My timeout value is 10s, which should be very conservative.
What's wrong here?
|
not getting all ICMP time-exceeded messages: why?
| 1.2 | 0 | 0 | 1,531 |
10,899,192 |
2012-06-05T14:26:00.000
| 0 | 0 | 0 | 0 |
javascript,python,html,automation
| 10,899,256 | 3 | false | 1 | 0 |
I think it will be easier for you to get a program like AutoIt.
| 1 | 2 | 0 |
Every Monday at Work, I have the task of printing out Account analysis (portfolio analysis) and Account Positions for over 50 accounts. So i go to the page, click "account analysis", enter the account name, click "format this page for printing", Print the output (excluding company disclosures), then I go back to the account analysis page and click "positions" instead this time, the positions for that account comes up. Then I click "format this page for printing", Print the output (excluding company disclosures).Then I repeat the process for the other 50 accounts.
I haven't taken any programming classes in the past but I heard using python to automate a html response might help me do this faster. I was wondering if that's true, and if so, how does it work? Also, are there any other programs that could enable me automate this process and save time?
Thank you so much
|
Automating HTTP navigation and HTML printing using Python
| 0 | 0 | 1 | 773 |
10,900,319 |
2012-06-05T15:34:00.000
| 0 | 0 | 0 | 0 |
python,sqlite,web
| 10,900,387 | 2 | false | 0 | 0 |
I suggest you look in the log of your server to find out what caused the 500 error.
| 1 | 0 | 0 |
Using CGI scripts, I can run single Python files on my server and then use their output on my website.
However, I have a more complicated program on my computer that I would like to run on the server. It involves several modules I have written myself, and the SQLITE3 module built in Python. The program involves reading from a .db file and then using that data.
Once I run my main Python executable from a browser, I get a "500: Internal server error" error.
I just wanted to know whether I need to change something in the permission settings or something for Python files to be allowed to import other Python files, or to read from a .db file.
I appreciate any guidance, and sorry if I'm unclear about anything I'm new to this site and coding in general.
FOLLOW UP: So, as I understand, there isn't anything inherently wrong with importing Python files on a server?
|
Importing Python files into each other on a web server
| 0 | 1 | 0 | 88 |
10,900,852 |
2012-06-05T16:09:00.000
| 0 | 0 | 0 | 0 |
python,random,seed
| 10,901,418 | 6 | false | 0 | 0 |
First: define similarity. Next: code a similarity test. Then: check for similarity.
With only a vague description of similarity it is hard to check for it.
| 3 | 12 | 1 |
I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well?
I think it doesn't change anything, but I'm using python
Edit: I have done some tests and the numbers don't look similar. But I'm afraid that the similarity cannot be noticed just by looking at the numbers. Is there any theoretical feature of random number generation that guarantees that different seeds give completely independent pseudo-random numbers?
|
May near seeds in random number generation give similar random numbers?
| 0 | 0 | 0 | 2,871 |
10,900,852 |
2012-06-05T16:09:00.000
| 0 | 0 | 0 | 0 |
python,random,seed
| 10,904,377 | 6 | false | 0 | 0 |
What kind of simulation are you doing?
For simulation purposes your argument is valid (depending on the type of simulation) but if you implement it in an environment other than simulation, then it could be easily hacked if it requires that there are security concerns of the environment based on the generated random numbers.
If you are simulating the outcome of a machine whether it is harmful to society or not then the outcome of your results will not be acceptable. It requires maximum randomness in every way possible and I would never trust your reasoning.
| 3 | 12 | 1 |
I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well?
I think it doesn't change anything, but I'm using python
Edit: I have done some tests and the numbers don't look similar. But I'm afraid that the similarity cannot be noticed just by looking at the numbers. Is there any theoretical feature of random number generation that guarantees that different seeds give completely independent pseudo-random numbers?
|
May near seeds in random number generation give similar random numbers?
| 0 | 0 | 0 | 2,871 |
10,900,852 |
2012-06-05T16:09:00.000
| 0 | 0 | 0 | 0 |
python,random,seed
| 10,905,149 | 6 | false | 0 | 0 |
To quote the documentation from the random module:
General notes on the underlying Mersenne Twister core generator:
The period is 2**19937-1.
It is one of the most extensively tested generators in existence.
I'd be more worried about my code being broken than my RNG not being random enough. In general, your gut feelings about randomness are going to be wrong - the Human mind is really good at finding patterns, even if they don't exist.
As long as you know your results aren't going to be 'secure' due to your lack of random seeding, you should be fine.
| 3 | 12 | 1 |
I'm using sequential seeds (1,2,3,4,...) for generation of random numbers in a simulation. Does the fact that the seeds are near each other make the generated pseudo-random numbers similar as well?
I think it doesn't change anything, but I'm using python
Edit: I have done some tests and the numbers don't look similar. But I'm afraid that the similarity cannot be noticed just by looking at the numbers. Is there any theoretical feature of random number generation that guarantees that different seeds give completely independent pseudo-random numbers?
|
May near seeds in random number generation give similar random numbers?
| 0 | 0 | 0 | 2,871 |
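A quick empirical check of the question, seeding CPython's default Mersenne Twister with nearby seeds; it demonstrates that the sequences look unrelated but is of course not a proof of independence.

```python
import random

for seed in (1, 2, 3):
    random.seed(seed)
    print(seed, [round(random.random(), 3) for _ in range(5)])

# Typical output: the three rows share no visible pattern, which matches the
# observation in the question that the numbers "don't look similar".
```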
10,901,161 |
2012-06-05T16:29:00.000
| 9 | 0 | 1 | 0 |
python
| 10,901,272 | 4 | false | 0 | 0 |
Python 3.3 included an optimization for Unicode strings that reduced their memory consumption. That might translate into faster code if more of it fits into cache.
Really the only way to know is to benchmark your most critical code in both and see what the difference is.
| 2 | 35 | 0 |
I've looked around for answers and much seems to be old or outdated. Has Python 3 been updated yet so that it's decently faster than Python 2.7 or am I still better off sticking with my workable code?
|
Python 2.7 or Python 3 (for speed)?
| 1 | 0 | 0 | 38,853 |
10,901,161 |
2012-06-05T16:29:00.000
| 5 | 0 | 1 | 0 |
python
| 10,901,390 | 4 | false | 0 | 0 |
The necessity of libraries for your application will determine whether Python 3 or Python 2 is better.
| 2 | 35 | 0 |
I've looked around for answers and much seems to be old or outdated. Has Python 3 been updated yet so that it's decently faster than Python 2.7 or am I still better off sticking with my workable code?
|
Python 2.7 or Python 3 (for speed)?
| 0.244919 | 0 | 0 | 38,853 |
10,902,671 |
2012-06-05T18:21:00.000
| 7 | 0 | 1 | 0 |
python,cocoa,py2app,scripting-bridge
| 10,903,455 | 1 | true | 0 | 0 |
The short version: PyObjC is the way you call Mac OS X APIs, Scripting Bridge is the way you talk to other apps' scripting interfaces. In more detail:
PyObjC is a bridge between the Python language and the Objective C runtime (and the set of Cocoa wrappers built trivially on top of that bridge, and some nice convenience stuff). If you want to call Cocoa methods, you use PyObjC, typically by importing either Cocoa or Foundation.
Scripting Bridge is a bridge between the Python language and the Apple Event-based scripting system. If you want to call another app's scripting interface, you use Scripting Bridge. (In most cases, if you're using Scripting Bridge, you'll also want to import Foundation, because Scripting Bridge deals with things like NSArrays, etc.)
So, PyObjC is not an example of a scripting bridge. An example of a scripting bridge is, well, Scripting Bridge, or Appscript (which is better, but not from Apple, and no longer maintained).
py2app has nothing much to do with either of these; it's a way to wrap up a Python application, together with all of the extension modules it requires, and as much of the Python interpreter as necessary, into a single .app bundle that you can distribute to users so they can just double-click to run it. Of course most such apps will have GUIs, and many of them will use PyObjC to create those GUIs directly in Cocoa (rather than using, e.g., PyQt or wxPython), but beyond that, there's no real connection.
| 1 | 6 | 0 |
I am just starting to learn about integrating Python and Mac OS apps. (I want to call some methods from Cocoa to Python.) I've ran into these terminologies -- Scripting Bridge, PyObjC, and py2app. What's the difference? Is PyObjC an example of a scripting bridge? And when does py2app come into play?
|
Scripting Bridge vs PyObjC vs py2app
| 1.2 | 0 | 0 | 1,241 |
10,904,721 |
2012-06-05T20:49:00.000
| 11 | 1 | 1 | 1 |
python,operating-system
| 10,905,302 | 4 | false | 0 | 0 |
I suggest you find a good textbook on operating system design, and study that. I'm pretty sure you won't find such a book with Python source code; C is more likely. (You might find an older textbook that uses Pascal instead of C, but it's really not that different.)
Once you have studied operating systems design enough to actually be able to write an operating system, you will know enough to have your own opinions on what languages would be suitable.
| 1 | 38 | 0 |
Is it possible to make a minimalistic operating system using Python?
I really don't want to get into low-level code like assembly, so I want to use a simple language like Perl, Python. But how?
|
Is it possible to create an operating system using Python?
| 1 | 0 | 0 | 76,823 |
10,906,198 |
2012-06-05T23:01:00.000
| 0 | 1 | 0 | 1 |
python,exe,dmg
| 10,906,453 | 1 | false | 1 | 0 |
If you mean specifically with Python, as I gather from tagging that in your question, it won't simply run the same way as Java will, because there's no equivalent Virtual Machine.
If the user has a Python interpreter on their system, they can simply run the .py file.
If they do not, you can bundle the interpreter and needed libraries into an executable using Py2Exe, cxFreeze, or bbFreeze. For replacing a dmg, App2Exe does something similar.
However, the three commands you listed are not Python-related, and rely on functionality that is not necessarily available on Windows or Mac, so it might not be as feasible.
| 1 | 0 | 0 |
A newbie question that I am finding hard to get my head around.
If I wanted to use one of the many tools out there like rsync, lsync or s3cmd, how can you build these into a program for non-computer-savvy people to use?
I.e., I am comfortable opening a terminal and running s3cmd, which is developed in Python; how would I go about developing this as a dmg file for Mac or an exe file for Windows?
So a user could just install the dmg or exe, and then they have s3cmd, lsync or rsync on their computer.
I can open up Eclipse, code a simple app in Java and then export it as a dmg or exe. I cannot figure out how you do this for other languages: say, write a simple piece of code that I can save as a dmg or exe and that, after being installed, will add a folder to my desktop, or something simple like that to get me started.
|
Compiling and running code as dmg or exe
| 0 | 0 | 0 | 1,925 |
10,906,477 |
2012-06-05T23:41:00.000
| 2 | 0 | 0 | 0 |
python,pyramid
| 10,907,158 | 1 | false | 1 | 0 |
Pyramid has nothing to do with it. The global needs to handle whatever mechanism the WSGI server is using to serve your application.
For instance, most servers use a separate thread per request, so your global variable needs to be threadsafe. gunicorn and gevent are served using greenlets, which is a different mechanic.
A lot of engines/orms support a threadlocal connection. This will allow you to access your connection as if it were a global variable, but it is a different variable in each thread. You just have to make sure to close the connection when the request is complete to avoid that connection spilling over into the next request in the same thread. This can be done easily using a Pyramid tween or several other patterns illustrated in the cookbook.
| 1 | 2 | 0 |
It looks like this is what e.g. MongoEngine does. The goal is to have model files be able to access the db without having to explicitly pass around the context.
|
In Pyramid, is it safe to have a python global variable that stores the db connection?
| 0.379949 | 1 | 0 | 877 |
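A rough sketch of the thread-local pattern the answer describes, with sqlite3.connect standing in for whatever engine or ORM is actually used; in a Pyramid app the close call would typically live in a tween or request-finished callback.

```python
import threading
import sqlite3

_local = threading.local()

def get_connection():
    """Return this thread's connection, creating it on first use."""
    conn = getattr(_local, "conn", None)
    if conn is None:
        conn = sqlite3.connect("app.db")   # placeholder for the real engine
        _local.conn = conn
    return conn

def close_connection():
    """Call this when the request is complete, so the connection does not leak
    into the next request served by the same thread."""
    conn = getattr(_local, "conn", None)
    if conn is not None:
        conn.close()
        _local.conn = None
```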
10,908,715 |
2012-06-06T05:42:00.000
| 1 | 0 | 0 | 0 |
python,bots,google-finance
| 10,908,773 | 1 | false | 1 | 0 |
Well, you have finally reached a quite challenging realm: decoding the captcha.
There do exist OCR approaches to decode simple captchas, but they don't seem to work for Google's captcha.
I have heard there are some companies that provide manual captcha-decoding services; you could try one of those. ^_^ LOL
OK, to be serious: if Google doesn't want you to do it that way, then it is not easy to decode those captchas. After all, why Google for finance data? There are a lot of other providers; try to scrape those websites.
| 1 | 1 | 0 |
I wrote a script that retrieves stock data on google finance and prints it out, nice and simple. It always worked, but since this morning I only get a page that tells me that I'm probably an automated script instead of the stock data. Of course, being a script, I can't pass the captcha. What can I do?
|
Google Finance recognizes my Python script as a bot and blocks it
| 0.197375 | 0 | 1 | 1,173 |
10,909,812 |
2012-06-06T07:20:00.000
| 0 | 0 | 0 | 0 |
python,plone
| 10,913,922 | 3 | false | 1 | 0 |
If you're running out of other ideas, you can copy them in using WebDAV access. (Be aware, though, to pack the database afterwards: while Plone4 has blob support, I think files uploaded via WebDAV leave a stale copy in the database.)
| 1 | 1 | 0 |
I want to implement DMS for the existing files on my File system. How do I import such existing files / images into my Plone DMS. I don't wish to use Products.Reflecto as I am unable to add any version control/ edit the uploaded files, images in it.
|
How to migrate large data from any file system to a plone site plone 4.1?
| 0 | 0 | 0 | 280 |
10,910,246 |
2012-06-06T07:55:00.000
| 0 | 0 | 0 | 0 |
python,sql,string,mysql-python
| 10,910,268 | 2 | false | 0 | 0 |
No, other than that the string can contain newlines.
| 1 | 1 | 0 |
Is there a difference if I use """..""" in the SQL of cursor.execute? Even if there is only a slight difference, please tell.
|
What is the use of """...""" in python instead of "..." or '...', especially in MySQLdb cursor.execute
| 0 | 1 | 0 | 125 |
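A small illustration of the answer: single- and triple-quoted SQL strings behave identically, except that triple quotes may span multiple lines (the table and column names are invented, and sqlite3 stands in for MySQLdb).

```python
import sqlite3  # the same applies to a MySQLdb cursor

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, name TEXT)")

# Both statements below are ordinary str objects; triple quotes simply
# allow literal newlines inside the statement.
cur.execute("SELECT id, name FROM users WHERE id = ?", (1,))
cur.execute("""
    SELECT id, name
    FROM users
    WHERE id = ?
""", (1,))
```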
10,910,591 |
2012-06-06T08:21:00.000
| 2 | 0 | 0 | 1 |
google-app-engine,python-2.7
| 10,910,709 | 2 | false | 1 | 0 |
Put a main file in the top-level directory and import all your handlers there, then reference them via that file
| 1 | 3 | 0 |
I am trying to migrate my app and everything worked fine until I changed in app.yaml
from threadsafe: false to threadsafe: true.
The error I was receiving was:
threadsafe cannot be enabled with CGI handler: a/b/xyz.app
After some googling I found:
Only scripts in the top-level directory work as handlers, so if you have any in subdirectories, they'll need to be moved, and the script reference changed accordingly:
- url: /whatever
# This doesn't work ...
# script: lib/some_library/handler.app
# ... this does work
script: handler.app
Is there any workaround for this (if the above research is valid), as I don't want to change my project hierarchy?
|
Migrating GAE app from python 2.5 to 2.7
| 0.197375 | 0 | 0 | 335 |
10,911,789 |
2012-06-06T09:46:00.000
| 6 | 1 | 0 | 0 |
java,python,osgi
| 10,928,722 | 1 | false | 0 | 0 |
The purpose of OSGi is to write (reusable) active modules that can discover each other at runtime so that these modules can decide to collaborate. The primary mechanism is the service registry that acts as a simple broker for objects.
A similar mechanism exists in JavaScript with the exports global variable. Unlike the JavaScript module systems, however, the OSGi service registry is dynamic.
I am not aware of anything like this in Python. I think the need for something like OSGi arises in larger programs made with larger or diversified teams. An area that Java with its static typing is more suitable for. Especially since Java has a very strong focus on interface based design; in the eco system of Java/OSGi you find many specifications and actually multiple implementations. In this world, a broker that matches implementations to specifications is important.
I think Python, and for that matter Ruby, and other languages would greatly benefit from a service broker like OSGi.
| 1 | 4 | 0 |
While trying to understand what problem OSGi solves in the Java ecosystem, I find myself wondering if there is such a problem in Python as well. If yes, how is it solved? If no, why not?
|
is there a requirement in python similar to what osgi tries to solve in java ?
| 1 | 0 | 0 | 908 |
10,911,878 |
2012-06-06T09:52:00.000
| 1 | 0 | 0 | 0 |
python,twitter-bootstrap,flask
| 10,926,171 | 1 | false | 1 | 0 |
To re-use a chunk of HTML, you can use Jinja's {% include %} tag. If that's too limiting, Jinja macros are also well suited. You can define your macros in a separate file and import them with {% import "path/to/macros.html" as my_macros %}.
Flask-Assets can help with the organisation of your assets.
As for using Blueprints, yes you should use them. But they mostly apply to Python code and HTML templates are organised in a different realm, so maybe their use is unrelated here.
You can't always remove all duplication though. If your game needs to affect three distant locations of the server-generated HTML, that's bits of template code to copy in every template that includes your game.
| 1 | 5 | 0 |
I have a page (located at /games/compare/) and it's a mini image comparison game. Users are shown two images and asked to pick between them in response to a question. This page can get images from the database, render a template with javascript and css inside and communicate back to the database using AJAX.
Now what if I wanted to embed this voting game onto the main page without duplicating any code? Ideally, I'd update the game and all the pages that "feature" the game will also reflect the changes.
I'm getting hung up on how to manage the assets for the entire site in a coherent and organized way. Some pages have css, javascript and I'm also using frameworks like bootstrap and a GIS framework.
Would I set the game up as a blueprint? How would I organize the assets (Javascript and CSS) so that there is no duplication?
Currently, I have a page rendering a template (main.html) which extends another (base.html). Base.html includes header.html, nav.html and footer.html with blocks set up for body and others.
My current approach is to strip everything out at the lowest level and reassemble it at a highest common level, which makes coding really slow. For instance, I have that voting game and right now it's located in a page called voting_game.html and has everything in it needed to play the game (full page html, styles and javascript included). Now if I want to include that game on another page, like the root index, the only solution I know of is to strip out the style, js and full page html from voting_game.html, leaving only the html necessary for the game to run. When I'm creating the index now, I'll import the html from voting_game.html but I'll separately have to import the style and javascript. This means I have to build every page twice, which is twice the work I need to be doing. This process also leaves little files all over the place, as I'm constantly refactoring and it makes development just a bookkeeping nightmare.
There has to be a way to do this and stay organized but I need your help understanding the best way to do this.
Thanks,
Phil
Edit: The embedded page should also be able to communicate with its parent page (the one it is being embedded into), or with other embedded pages within the same parent (children of a parent should be able to talk). So when someone plays the embedded game, they earn points, which should show up on another part of the page, which would update to reflect the user's current points.
This "Score board" would also be a separate widget/page/blueprint that can be embedded and will look for certain pieces of data in order to function.
|
Embed Flask page in another without code duplication?
| 0.197375 | 0 | 0 | 909 |
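A condensed, self-contained sketch of the {% include %} approach from the answer, using an in-memory Jinja2 environment so it runs on its own; in a real Flask project the same snippets would live as files under templates/ and be rendered through render_template.

```python
from jinja2 import Environment, DictLoader

templates = {
    # Reusable widget: the voting-game markup lives in exactly one place.
    "widgets/game.html": "<div class='game'>Which image do you prefer?</div>",
    # Any page can embed it without duplicating its markup.
    "index.html": "<h1>Home</h1>{% include 'widgets/game.html' %}",
    "games.html": "<h1>Games</h1>{% include 'widgets/game.html' %}",
}

env = Environment(loader=DictLoader(templates))
print(env.get_template("index.html").render())
print(env.get_template("games.html").render())
```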
10,912,706 |
2012-06-06T10:48:00.000
| 4 | 0 | 0 | 0 |
python,django,image,internationalization
| 10,912,731 | 5 | false | 1 | 0 |
You could pass a language parameter to your page template and use it as part of your media file URL.
This would require you to host all media files for, e.g., English in a folder SITE_MEDIA/english, while other, e.g., Japanese images would be available from SITE_MEDIA/japanese.
Inside your page templates, you could then use {{MEDIA_URL}}{{language}}/my-image.jpg...
| 1 | 3 | 0 |
How would I implement different images from the static folder based on language?
For example, when visiting the main site the layout will load in English, but when changed to Japanese the logo and images attached to the layout will change based on the requested language. Please help...
|
Internationalizing images in django
| 0.158649 | 0 | 0 | 1,446 |
10,914,740 |
2012-06-06T13:01:00.000
| 2 | 0 | 0 | 1 |
python,internet-explorer,real-time,twisted,server-sent-events
| 10,949,657 | 1 | true | 1 | 0 |
Answering my own question is a little weird, but I just found the answer. I had to go with long polling. It looks like I have to write a framework which falls back to long polling when server-sent events are not supported. Answering just in case anyone comes here for reference in the future.
| 1 | 2 | 0 |
I am working on a project that requires real time update. So, long ago, I decided to go with using Twisted SSE Handler (cyclone.sse). The project is at an end. And all the pub/sub stuff is good on all the browsers except Internet Explorer. IE doesn't support SSE. How do I get pub-sub working on IE without change of code in server-side? Also long polling will not help as I am using cyclone.sse.
|
Twisted Server Sent Events accessing using Internet Explorer
| 1.2 | 0 | 0 | 431 |
10,918,002 |
2012-06-06T16:10:00.000
| 0 | 0 | 0 | 0 |
python,django,django-forms,monkeypatching
| 11,048,847 | 2 | true | 1 | 0 |
The solution here was to copy the package to my application folder and patch it locally.
| 2 | 1 | 0 |
I have an app included into INSTALLED_APPS that needs to be monkey-patched.
The problem is that I don't explicitly import modules from this app (django-allauth).
Is there any way to get some access at the point when Django imports an application
and monkey patch one of its internal forms?
Which in my case would be socialaccount.forms.DisconnectForm.clean = smth
|
Monkey patch form in INSTALLED_APPS
| 1.2 | 0 | 0 | 259 |
10,918,002 |
2012-06-06T16:10:00.000
| -1 | 0 | 0 | 0 |
python,django,django-forms,monkeypatching
| 10,918,221 | 2 | false | 1 | 0 |
Put import ipdb; ipdb.set_trace() in the __init__ of the module, and enter the character "w" to see the trace.
| 2 | 1 | 0 |
I have an app included into INSTALLED_APPS that needs to be monkey-patched.
The problem is that I don't explicitly import modules from this app (django-allauth).
Is there any way to get some access at the point when Django imports an application
and monkey patch one of its internal forms?
Which in my case would be socialaccount.forms.DisconnectForm.clean = smth
|
Monkey patch form in INSTALLED_APPS
| -0.099668 | 0 | 0 | 259 |
10,918,905 |
2012-06-06T17:11:00.000
| 1 | 0 | 0 | 1 |
python,django,cron,celery
| 10,918,986 | 2 | true | 1 | 0 |
In my personal opinion, I would learn how to use cron. This won't take more than 5 to 10 minutes, and it's an essential tool when working on a Linux server.
What you could do is set up a cronjob that requests one page of your django instance every minute, and have the django script figure out what time it is and what needs to be done, depending on the configuration stored in your database. This is the approach i've seen in other similar applications.
| 1 | 3 | 0 |
I'd like to run periodic tasks on my django project, but I don't want all the complexity of celery/django-celery (with celerybeat) bundled in my project.
I'd like, also, to store the config with the times and which command to run within my SCM.
My production machine is running Ubuntu 10.04.
While I could learn and use cron, I feel like there should be a higher level (user friendly) way to do it. (Much like UFW is to iptables).
Is there such thing? Any tips/advice?
Thanks!
|
Cron-like scheduler, something between cron and celery
| 1.2 | 0 | 0 | 1,859 |
10,919,301 |
2012-06-06T17:40:00.000
| 3 | 0 | 0 | 1 |
python,github,jenkins
| 10,919,450 | 2 | false | 1 | 0 |
The solution is quite simple: make cleaner commits (fix typos before committing, only commit changes that belong together, not for too small edits). It's a bit odd that you don't take the time to fix typos (by running/testing locally) but wish to reduce the number of commits by some other means.
| 2 | 1 | 0 |
I have a standard-ish setup. Call it three servers - www, app and db, all fed from fabric scripts, and the whole on github.
I have a local laptop with the repo clone. I change a file locally, and push it to github then deploy using jenkins - which pulls from github and does its business. The problem here is I can put a dozen rubbish commits up till I manage to fix all my typos.
Its not so much the round trip to github that matters, but the sheer number of commits - I cannot squash them as they have been pushed. It looks ugly. It works sure but it is ugly.
I don't think I can edit on the servers directly - the file are spread out a lot, and I cannot make each directory on three servers a clone of github and hope to keep things sane.
And trying to write scripts that will synch the servers with my local repo is insane - fabric files took long enough.
I cannot easily git pull from jenkins, because I still have to commit to have jenkins pull, and we still get ugly ugly commit logs.
I cannot see a graceful way to do this - ideas anyone.
|
As part of development, I am committing to github and pulling down and executing elsewhere. It feels wrong
| 0.291313 | 0 | 0 | 97 |
10,919,301 |
2012-06-06T17:40:00.000
| 0 | 0 | 0 | 1 |
python,github,jenkins
| 10,920,164 | 2 | true | 1 | 0 |
The solution is to not use github / jenkins to deploy to the servers.
The servers should be seen as part of the 'local' deployment (local being pre-commit)
So use the fab files directly, from my laptop.
That was harder because of pre processing occuring on jenkins but that is replicable.
So, I shall take Jeff Atwoods advice here
embrace the suck, in public.
Well I certainly sucked at that - but hey I learnt.
Will put brain in the right way tomorrow.
| 2 | 1 | 0 |
I have a standard-ish setup. Call it three servers - www, app and db, all fed from fabric scripts, and the whole on github.
I have a local laptop with the repo clone. I change a file locally, and push it to github then deploy using jenkins - which pulls from github and does its business. The problem here is I can put a dozen rubbish commits up till I manage to fix all my typos.
Its not so much the round trip to github that matters, but the sheer number of commits - I cannot squash them as they have been pushed. It looks ugly. It works sure but it is ugly.
I don't think I can edit on the servers directly - the file are spread out a lot, and I cannot make each directory on three servers a clone of github and hope to keep things sane.
And trying to write scripts that will synch the servers with my local repo is insane - fabric files took long enough.
I cannot easily git pull from jenkins, because I still have to commit to have jenkins pull, and we still get ugly ugly commit logs.
I cannot see a graceful way to do this - ideas anyone.
|
As part of development, I am committing to github and pulling down and executing elsewhere. It feels wrong
| 1.2 | 0 | 0 | 97 |
10,920,199 |
2012-06-06T18:43:00.000
| 0 | 0 | 0 | 0 |
python,metrics,recommendation-engine,personalization,cosine-similarity
| 10,955,138 | 2 | false | 0 | 0 |
Recommender systems in the land of research generally work on a scale of 1 - 5. It's quite nice to get such an explicit signal from a user. However I'd imagine the reality is that most users of your system would never actually give a rating, in which case you have nothing to work with.
Therefore I'd track page views but also try and incorporate some explicit feedback mechanism (1-5, thumbs up or down etc.)
Your algorithm will have to take this into consideration.
| 2 | 0 | 1 |
I'm looking to implement an item-based news recommendation system. There are several ways I want to track a user's interest in a news item; they include: rating (1-5), favorite, click-through, and time spent on news item.
My question: what are some good methods to use these different metrics for the recommendation system? Maybe merge and normalize them in some way?
|
Recommendation system - using different metrics
| 0 | 0 | 0 | 614 |
10,920,199 |
2012-06-06T18:43:00.000
| 0 | 0 | 0 | 0 |
python,metrics,recommendation-engine,personalization,cosine-similarity
| 10,956,591 | 2 | false | 0 | 0 |
For recommendation system, there are two problems:
how to quantify the user's interest in a certain item based on the numbers you collected
how to use the quantified interest data to recommend new items to the user
I guess you are more interested in the first problem.
To solve the first problem, you need either a linear combination or some other fancier function to combine all the numbers. There is really no single universal function for all systems. It heavily depends on the type of your users and your items. If you want a high-quality recommendation system, you need to have some data to do machine learning to train your functions.
For the second problem, it's somewhat the same thing, plus you need to analyze all the items to abstract some relationships between each other. You can google "Netflix prize" for some interesting info.
| 2 | 0 | 1 |
I'm looking to implement an item-based news recommendation system. There are several ways I want to track a user's interest in a news item; they include: rating (1-5), favorite, click-through, and time spent on news item.
My question: what are some good methods to use these different metrics for the recommendation system? Maybe merge and normalize them in some way?
|
Recommendation system - using different metrics
| 0 | 0 | 0 | 614 |
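A toy linear combination of the signals listed in the question; the weights and normalisation ranges are entirely made up and, as the answer notes, would normally be tuned or learned from real data.

```python
def interest_score(rating=None, favorited=False, clicked=False, seconds_on_item=0.0):
    """Combine explicit and implicit signals into a single 0..1 interest value."""
    score = 0.0
    if rating is not None:
        score += 0.5 * (rating - 1) / 4.0             # map 1-5 stars onto 0..1
    score += 0.2 if favorited else 0.0
    score += 0.1 if clicked else 0.0
    score += 0.2 * min(seconds_on_item, 300) / 300.0  # cap dwell time at 5 minutes
    return min(score, 1.0)

print(interest_score(rating=4, clicked=True, seconds_on_item=120))
```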
10,920,423 |
2012-06-06T18:58:00.000
| 5 | 0 | 0 | 0 |
python,django,mercurial,pyc
| 10,920,888 | 4 | false | 1 | 0 |
Usually you are safe, because *.pyc are regenerated if the corresponding *.py changes its content.
It is problematic if you delete a *.py file and you are still importing from it in another file. In this case you are importing from the *.pyc file if it is existing. But this will be a bug in your code and is not really related to your mercurial workflow.
Conclusion: every famous Python library ignores its *.pyc files, so just do it ;)
| 2 | 7 | 0 |
Just started to use Mercurial. Wow, nice application. I moved my database file out of the code directory, but I was wondering about the .pyc files. I didn't include them on the initial commit. The documentation about the .hgignore file includes an example to exclude *.pyc, so I think I'm on the right track.
I am wondering about what happens when I decide to roll back to an older fileset. Will I need to delete all the .pyc files then? I saw some questions on Stack Overflow about the issue, including one gentleman that found old .pyc files were being used. What is the standard way around this?
|
What to do with pyc files when Django or python is used with Mercurial?
| 0.244919 | 1 | 0 | 8,119 |
10,920,423 |
2012-06-06T18:58:00.000
| 0 | 0 | 0 | 0 |
python,django,mercurial,pyc
| 10,920,511 | 4 | false | 1 | 0 |
Sure, if you have a .pyc file from an older version of the same module, Python will use it. Many times I have wondered why my program wasn't reflecting the changes I made, and realized it was because I had old pyc files.
If this means that the .pyc files are not reflecting your current version, then yes, you will have to delete all .pyc files.
If you are on Linux you can run find . -name "*.pyc" -delete (quoting the pattern so the shell doesn't expand it).
| 2 | 7 | 0 |
Just started to use Mercurial. Wow, nice application. I moved my database file out of the code directory, but I was wondering about the .pyc files. I didn't include them on the initial commit. The documentation about the .hgignore file includes an example to exclude *.pyc, so I think I'm on the right track.
I am wondering about what happens when I decide to roll back to an older fileset. Will I need to delete all the .pyc files then? I saw some questions on Stack Overflow about the issue, including one gentleman that found old .pyc files were being used. What is the standard way around this?
|
What to do with pyc files when Django or python is used with Mercurial?
| 0 | 1 | 0 | 8,119 |
10,920,488 |
2012-06-06T19:03:00.000
| 1 | 0 | 1 | 0 |
python,duplicate-removal
| 10,920,591 | 3 | false | 0 | 0 |
Use fslint or some similar software. Fslint is able, for example, to give you a list of the duplicate files and hardlink the copies together, or delete the duplicates. One option is also to just use a diff-like program to diff the directories if their internal structure is the same.
| 3 | 2 | 0 |
I have a directory that contains files of records. I just got access to a new directory that has the same records but additional files as well, but the additional files are buried deep inside other folders and I can't find them.
So my solution would be to have a Python program run and delete all files that are duplicated between the two directories (and subdirectories), and leave the others intact, which will give me the "new files" I'm looking for.
I have seen a couple of programs that find duplicates, but I'm unsure how they really work, and they haven't been helpful.
Is there any way I can accomplish what I'm looking for?
Thanks!
|
Search multiple directories, delete duplicate files
| 0.066568 | 0 | 0 | 447 |
10,920,488 |
2012-06-06T19:03:00.000
| 0 | 0 | 1 | 0 |
python,duplicate-removal
| 10,922,803 | 3 | false | 0 | 0 |
Do the duplicate files in both directories have the same name/path? If I understand correctly, you want to find duplicate filenames rather than duplicate file contents? If so, a 'synchronised' call to os.walk in both trees might be helpful.
| 3 | 2 | 0 |
I have a directory that contains files of records. I just got access to a new directory that has the same records but additional files as well, but the additional files are buried deep inside other folders and I can't find them.
So my solution would be to have a Python program run and delete all files that are duplicated between the two directories (and subdirectories), and leave the others intact, which will give me the "new files" I'm looking for.
I have seen a couple of programs that find duplicates, but I'm unsure how they really work, and they haven't been helpful.
Is there any way I can accomplish what I'm looking for?
Thanks!
|
Search multiple directories, delete duplicate files
| 0 | 0 | 0 | 447 |
10,920,488 |
2012-06-06T19:03:00.000
| 1 | 0 | 1 | 0 |
python,duplicate-removal
| 10,920,575 | 3 | false | 0 | 0 |
Possible approach:
Create a set of MD5 hashes from your original folder.
Recursively MD5 hash the files in your new folder, deleting any files that generate hashes already present in your set.
Caveat to the above is that there is a chance two different files can generate the same hash. How different are the files?
| 3 | 2 | 0 |
I have a directory that contains files of records. I just got access to a new directory that has the same records but additional files as well, but the additional files are buried deep inside other folders and I can't find them.
So my solution would be to have a Python program run and delete all files that are duplicated between the two directories (and subdirectories), and leave the others intact, which will give me the "new files" I'm looking for.
I have seen a couple of programs that find duplicates, but I'm unsure how they really work, and they haven't been helpful.
Is there any way I can accomplish what I'm looking for?
Thanks!
|
Search multiple directories, delete duplicate files
| 0.066568 | 0 | 0 | 447 |
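A sketch of the hash-based approach from the last answer: collect content hashes for the original tree, then walk the new tree and delete anything already seen. The paths are placeholders, and as the answer warns, hash collisions are theoretically possible.

```python
import hashlib
import os

def file_hash(path, chunk=1 << 20):
    """MD5 of a file's contents, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def known_hashes(root):
    return {file_hash(os.path.join(d, f))
            for d, _, files in os.walk(root) for f in files}

def delete_duplicates(new_root, original_root):
    seen = known_hashes(original_root)
    for d, _, files in os.walk(new_root):
        for f in files:
            path = os.path.join(d, f)
            if file_hash(path) in seen:
                os.remove(path)   # leaves only the genuinely new files

# delete_duplicates("/path/to/new_dir", "/path/to/original_dir")  # placeholders
```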
10,921,655 |
2012-06-06T20:22:00.000
| 0 | 0 | 1 | 0 |
python,enthought
| 10,921,702 | 2 | false | 0 | 0 |
The problem is that you don't have the scipy library installed, which is a separate library from epdfree.
You can install it with apt-get on Linux, I guess, or by going to their website
www.scipy.org
| 2 | 0 | 1 |
I am trying to install epdfree on two virtually identical machines: Linux 2.6.18-308.1.1.el5, CentOS release 5.8., 64-bit machines. (BTW, I'm a bit new to python.)
After the install on one machine, I run python and try to import scipy. Everything goes fine.
On the other machine, I follow all the same steps as far as I can tell, but when I try to import scipy, I am told “ImportError: No module named scipy”.
As far as I can tell, I am doing everything the same on the two machines. I installed from the same script, I run the python in the epdfree installation directory, everything I can think of.
Does anyone have any idea what would keep “import scipy” from working on one machine while it works fine on the other? Thanks.
|
trouble with installing epdfree
| 0 | 0 | 0 | 242 |
10,921,655 |
2012-06-06T20:22:00.000
| 1 | 0 | 1 | 0 |
python,enthought
| 10,922,030 | 2 | true | 0 | 0 |
Well, turns out there was one difference. File permissions were being set differently on the two machines. I installed epdfree as su on both machines. On the second machine, everything was locked out when I tried to run it without going under "su". Now my next task is to find out why the permissions were set differently. I guess it's a difference in umask settings? Well, this I won't bother anyone with. But feel free to offer an answer if you want to! Thanks.
| 2 | 0 | 1 |
I am trying to install epdfree on two virtually identical machines: Linux 2.6.18-308.1.1.el5, CentOS release 5.8., 64-bit machines. (BTW, I'm a bit new to python.)
After the install on one machine, I run python and try to import scipy. Everything goes fine.
On the other machine, I follow all the same steps as far as I can tell, but when I try to import scipy, I am told “ImportError: No module named scipy”.
As far as I can tell, I am doing everything the same on the two machines. I installed from the same script, I run the python in the epdfree installation directory, everything I can think of.
Does anyone have any idea what would keep “import scipy” from working on one machine while it works fine on the other? Thanks.
|
trouble with installing epdfree
| 1.2 | 0 | 0 | 242 |
10,922,394 |
2012-06-06T21:13:00.000
| 0 | 0 | 0 | 0 |
python,sqlite,copy
| 10,922,927 | 1 | false | 0 | 0 |
You did a .backup on the source system, but you don't mention doing a .restore on the target system. Please clarify.
You don't mention what versions of the sqlite3 executable you have on the source and target systems.
You don't mention how you transferred the .bak file from the source to the target.
Was the source db being accessed by another process when you did the .backup?
How big is the file? Have you considered zip/copy/unzip instead of backup/copy/restore?
| 1 | 0 | 0 |
I have a sqlite3 database that I created from Python (2.7) on a local machine, and am trying to copy it to a remote location. I ran "sqlite3 posts.db .backup posts.db.bak" to create a copy (I can use the original and this new copy just fine). But when I move the copied file to the remote location, suddenly every command gives me: sqlite3.OperationalError: database is locked. How do I safely move a sqlite3 database so that I can use it after the move?
|
How to safely move an SQLite3 database?
| 0 | 1 | 0 | 624 |
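One hedged alternative for producing a consistent copy from Python itself: the sqlite3 Connection.backup method (available from Python 3.7, so not in the 2.7 setup described in the question); the file names are placeholders.

```python
import sqlite3

src = sqlite3.connect("posts.db")
dst = sqlite3.connect("posts.db.bak")
with dst:
    src.backup(dst)   # consistent snapshot even if src is being written to
src.close()
dst.close()
```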
10,922,849 |
2012-06-06T21:53:00.000
| 0 | 0 | 1 | 0 |
python,arrays,numpy
| 10,922,884 | 3 | false | 0 | 0 |
What you are doing sounds fine for the object arrays, that's really the only way to get an average. As for the average of all the elements in A, just add each average as you iterate through the list, and then divide by the total number of objects in A
| 1 | 0 | 0 |
Python question.
I have a list (A) of numpy object arrays (B). I'd like to get the mean of one of the object variables for all the objects in the B array. Right now, I'm just parsing through the B array, summing the variable and dividing it by the number of objects in B. Is there a better or more pythonic way to do this?
It would also be great, if I could get the mean of all objects in the A list (i.e. all objects)
|
Mean of object variables in a python array
| 0 | 0 | 0 | 674 |
10,924,309 |
2012-06-07T00:51:00.000
| 3 | 1 | 0 | 1 |
python,linux,service
| 10,924,388 | 2 | false | 0 | 0 |
The cron job is probably a good approach in general, as the shell approach requires manual intervention to start it.
A couple of suggestions:
You could use a lock file to ensure that the cron job only ever starts one instance of the python script - often problems occur when using cron for larger jobs because it starts a second instance before the first instance has actually finished. You can do this simply by checking whether the lock file exists, then, if it does not, 'touch'ing the file at the beginning of the script and 'rm'ing it as your last action at the end of the script. If the lock file exists -- simply exit the script, as there is already one instance running. (Of course, if the script dies you will have to delete the lock file before running the script again).
Also, if excessive resource use is a problem, you can ensure that the script does not eat too many resources by giving it a low priority (prefix with, for example, nice -n 19).
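A minimal sketch of the lock-file idea inside the Python script itself (the lock path is arbitrary and run_job() is a placeholder for your actual graphing/MySQL work):
    import os
    import sys

    LOCK = "/tmp/mygraph.lock"   # arbitrary path for the lock file

    if os.path.exists(LOCK):
        sys.exit(0)              # a previous run is still going, bail out

    open(LOCK, "w").close()      # "touch" the lock file
    try:
        run_job()                # placeholder for the actual work
    finally:
        os.remove(LOCK)          # always release the lock, even on errors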
| 1 | 2 | 0 |
I have a small python script that creates a graph of data pulled from MySQL. I'm trying to figure out a way to run the script in the background all the time on a regular basis. I've tried a number of things:
A Cron Job that runs the script
A loop timer
Using the & command to run the script in the background
These all have their pluses and minuses:
The Cron Job running more often than every half hour seems to eat up more resources than it's worth.
The Loop timer put into the script doesn't actually put the script in the background; it just keeps it running.
The Linux & command backgrounds the process, but unlike a real Linux service I can't restart/stop it without killing it.
Can someone point me to a way to get the best out of all of these methods?
|
How can I run my python script in the background on a schedule?
| 0.291313 | 0 | 0 | 7,997 |
10,928,948 |
2012-06-07T09:16:00.000
| 2 | 0 | 1 | 0 |
python,tox
| 10,928,950 | 1 | true | 0 | 0 |
Use TERM=dumb on the command line if you want to disable the colors, but I don't know about changing the colors.
| 1 | 0 | 0 |
By default tox will output colours to the terminal, which is actually fine if you are using a white-background terminal but hard to see with a dark terminal. Are there any tricks to disable colours in tox without hacking the code directly?
|
How to disable or change colors from tox output
| 1.2 | 0 | 0 | 682 |
10,929,285 |
2012-06-07T09:39:00.000
| 0 | 0 | 0 | 0 |
python,qt,checkbox,combobox,pyqt
| 10,930,835 | 1 | true | 0 | 1 |
You can do this using the model->view framework, but it means implementing a custom model to support checkable data.
You create a custom model by subclassing QAbstractItemModel. This presents an API to the QComboBox for accessing the underlying data. Off the top of my head I think you'll need to implement the flags method to indicate that you support ItemIsUserCheckable for the indexes you want to be able to check. You'll also need to implement data() which reports back the data state from your underlying data, and setData() which accept input from the QComboBox and changes the underlying data.
You then set this as the model for the QComboBox using setModel().
This isn't really beginner stuff, but the model->view framework in Qt is one of its most important and valuable features and well worth getting to grips with.
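As a lighter-weight (though less flexible) alternative to writing a full QAbstractItemModel subclass, you can hang a QStandardItemModel off the combo box, since its items already support check states — a sketch, and note you may still need to handle the popup view's click behaviour so a click toggles the check instead of closing the popup:
    from PyQt4 import QtGui, QtCore

    app = QtGui.QApplication([])

    combo = QtGui.QComboBox()
    model = QtGui.QStandardItemModel(combo)

    for fmt in ("jpg", "exr", "tga"):
        item = QtGui.QStandardItem(fmt)
        item.setCheckable(True)
        item.setCheckState(QtCore.Qt.Unchecked)
        model.appendRow(item)

    combo.setModel(model)

    # later, collect the texts of the checked formats
    checked = [model.item(i).text() for i in range(model.rowCount())
               if model.item(i).checkState() == QtCore.Qt.Checked]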
| 1 | 0 | 0 |
I'm new to PyQt and I have to work on an application which uses it. For the moment, I don't have any problem, but I'm stuck on something. I have to create a "ComboBox with its items checkable, like a CheckBox". This ComboBox should contain many image formats, like "jpg", "exr" or "tga", and I will have to pick up the text of the checked option and put it in a variable. The problem is that I can't find a thing about making items checkable using a ComboBox (if you know how to, it would greatly help me!)
Since I can't do it with a ComboBox, maybe I can do it with a QList I thought, but I can't find anything either which is understandable for a beginner like me. I have read stuff about flags and "Qt.ItemIsUserCheckable" but I don't know how to use it in a easy way :(
Can you help me ? Thanks !
PyQt version : 4.4.3
Python version : 2.6
|
How to simply create a list of CheckBox which has a dropdown list like a ComboBox with PyQt?
| 1.2 | 0 | 0 | 1,527 |
10,930,459 |
2012-06-07T11:00:00.000
| 1 | 0 | 0 | 0 |
python,mysql,django
| 10,935,789 | 2 | false | 1 | 0 |
You could use a middleware with a process_view method and a try / except wrapping your call.
Or you could decorate your views and wrap the call there.
Or you could use class-based views with a base class that has a method decorator on its dispatch method, or an overridden dispatch.
Really, you have plenty of solutions.
Now, as said above, you might want to modify your Desktop application too!
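A very rough sketch of the middleware idea (the class name and retry policy are made up, and in a real app you would probably also want to reset the broken database connection):
    import time
    import MySQLdb   # the error in the question is MySQLdb.OperationalError; newer Django re-exports it as django.db.OperationalError

    class RetryOnDbErrorMiddleware(object):
        def process_view(self, request, view_func, view_args, view_kwargs):
            for _ in range(5):
                try:
                    return view_func(request, *view_args, **view_kwargs)
                except MySQLdb.OperationalError:
                    time.sleep(2)   # wait for connectivity to come back, then retry
            # final attempt: let the exception propagate if it still fails
            return view_func(request, *view_args, **view_kwargs)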
| 1 | 1 | 0 |
I have a desktop application that send POST requests to a server where a django app store the results. DB server and web server are not on the same machine and it happens that sometimes the connectivity is lost for a very short time but results in a connection error on some requests:
OperationalError: (2003, "Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (110)")
On a "normal" website I guess you'd not worry too much: the browser display a 500 error page and the visitor tries again later.
In my case loosing info posted by a request is not an option and I am wondering how to handle this? I'd try to catch on this exception, wait for the connectivity to come back (lag is not a problem) and then continue the process. But as the exception can occur about anywhere in the code I'm a bit stuck on how to proceed.
Thanks for your advice.
|
Django: how to properly handle a database connection error
| 0.099668 | 1 | 0 | 2,536 |
10,931,481 |
2012-06-07T12:08:00.000
| -1 | 0 | 1 | 0 |
python,pyopengl
| 10,932,247 | 2 | false | 0 | 0 |
You need to download and install the 32-bit version of Python 2.7.
You can have both 32- and 64-bit Python on your system at the same time, with no ill effects.
When you see a Python library that is 32-bit, in my experience, you need 32-bit Python. To my knowledge, 32-bit libraries will not work with 64-bit Python installs.
| 1 | 1 | 0 |
I'm new to Python.
I have installed Python 2.7 64-bit on a Win7 64-bit machine. Now I want to install PyOpenGL but there is only a win32 version available. When I try to install PyOpenGL it says "No Python installation found in registry".
How do I proceed from here now?
|
Python 2.7 64 bit and PyOpenGL-3.0.1.win32 installation
| -0.099668 | 0 | 0 | 5,355 |
10,931,889 |
2012-06-07T12:34:00.000
| 1 | 0 | 0 | 0 |
python,mongodb,mongoalchemy,nosql
| 10,932,004 | 4 | false | 1 | 0 |
What I would do with mongodb would be to embed the user id into the comments (which are part of the structure of the "post" document).
Three simple hints for better performances:
1) Make sure to ensure an index on the user_id
2) Use a comment pagination method to avoid querying the database 200 times
3) Caching is your friend
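For points 2 and 3, the usual trick is to resolve all the commenters for a page in one extra query instead of 200 — a sketch with pymongo, where the collection and field names are only illustrative:
    from pymongo import MongoClient

    db = MongoClient().blog

    post = db.posts.find_one({"slug": "some-post"})        # comments embedded in the post
    user_ids = {c["user_id"] for c in post["comments"]}

    # one query for all commenters on the page, instead of one per comment
    users = {u["_id"]: u for u in db.users.find({"_id": {"$in": list(user_ids)}})}

    for c in post["comments"]:
        c["author"] = users[c["user_id"]]                   # attach name/avatar for display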
| 1 | 3 | 0 |
I cant find "best" solution for very simple problem(or not very)
Have classical set of data: posts that attached to users, comments that attached to post and to user.
Now i can't decide how to build scheme/classes
On way is to store user_id inside comments and inside.
But what happens when i have 200 comments on page?
Or when i have N posts on page?
I mean it should be 200 additional requests to database to display user info(such as name,avatar)
Another solution is to embed user data into each comment and each post.
But first -> it is huge overhead, second -> model system is getting corrupted(using mongoalchemy), third-> user can change his info(like avatar). And what then? As i understand update operation on huge collections of comments or posts is not simple operation...
What would you suggest? Is 200 requests per page to mongodb is OK(must aim for performance)?
Or may be I am just missing something...
|
MongoDB: Embedded users into comments
| 0.049958 | 1 | 0 | 919 |
10,937,081 |
2012-06-07T17:48:00.000
| 2 | 0 | 0 | 0 |
python,sungridengine,drmaa
| 14,248,534 | 1 | false | 0 | 0 |
From an DRMAA API point of view, there is no better way. The API simply reflects what you are able to do in a shell script with the default command-line tools.
The problem lies in the implementation strategy of your particular DRMAA library, since SGE offers better ways than constant polling to get job status updates. You therefore have the following options:
Fix the DRMAA implementation you are using to rely on some direct communication with the scheduler. One source of information for the wire protocol could be the Open Grid Scheduler project.
Wait until Univa (or others) deliver a DRMAAv2 implementation for their Grid Engine product. This version of the API supports library callbacks on job status changes (http://ogf.org/documents/GFD.198.pdf), which seems to be exactly what you want.
| 1 | 0 | 0 |
I wanted to ask about "wait" feature in drmaa API I am using through Python. Does it do constant qstat's ( if we are running it on SGE) to check whether a program has finished execution.
Our admin want us to avoid any constant qstat's as it slows down the performance due to extra load on scheduler.
In general wat would be an efficient way to check for job status through DRMAA without overboarding the scheduler.
Thanks!
-Abhi
|
efficient way to wait for job completion : python and drmaa
| 0.379949 | 0 | 0 | 844 |
10,937,861 |
2012-06-07T18:39:00.000
| 1 | 0 | 0 | 0 |
python,dropbox
| 10,937,959 | 2 | true | 0 | 0 |
If the script is gonna run every 10 minutes, EC2 etc. won't do you any good since they are priced based on 15-minute time slices (and the server will always be used).
The cheapest solution is a small VPS, which can be found for as little as 5$ / month from some providers. Install dropbox & python and you're good to go.
| 1 | 0 | 0 |
I wrote a python script which downloads data via the Yahoo! Finance API and puts it into a file. After that, it uploads the file to Dropbox. The script does that every 10 minutes.
How can I implement this at a minimal cost with a server? I don't want to let my computer run 24/7.
Thank you in advance!
|
How to run a python script and upload the generated to dropbox on a server?
| 1.2 | 0 | 1 | 729 |
10,938,360 |
2012-06-07T19:12:00.000
| 39 | 0 | 0 | 0 |
python,flask,wsgi,gunicorn
| 10,943,523 | 4 | false | 1 | 0 |
Flask will process one request per thread at a time. If you have 2 processes with 4 threads each, that's 8 concurrent requests.
Flask doesn't spawn or manage threads or processes. That's the responsibility of the WSGI gateway (e.g. gunicorn).
| 2 | 184 | 0 |
I'm building an app with Flask, but I don't know much about WSGI and its HTTP base, Werkzeug. When I start serving a Flask application with gunicorn and 4 worker processes, does this mean that I can handle 4 concurrent requests?
I do mean concurrent requests, and not requests per second or anything else.
|
How many concurrent requests does a single Flask process receive?
| 1 | 0 | 0 | 143,304 |
10,938,360 |
2012-06-07T19:12:00.000
| 9 | 0 | 0 | 0 |
python,flask,wsgi,gunicorn
| 10,942,272 | 4 | false | 1 | 0 |
No- you can definitely handle more than that.
It's important to remember that deep, deep down, assuming you are running a single-core machine, the CPU really only runs one instruction* at a time.
Namely, the CPU can only execute a very limited set of instructions, and it can't execute more than one instruction per clock tick (many instructions even take more than 1 tick).
Therefore, most concurrency we talk about in computer science is software concurrency.
In other words, there are layers of software implementation that abstract the bottom level CPU from us and make us think we are running code concurrently.
These "things" can be processes, which are units of code that get run concurrently in the sense that each process thinks its running in its own world with its own, non-shared memory.
Another example is threads, which are units of code inside processes that allow concurrency as well.
The reason your 4 worker processes will be able to handle more than 4 requests is that they will fire off threads to handle more and more requests.
The actual request limit depends on the HTTP server chosen, I/O, OS, hardware, network connection, etc.
Good luck!
*Instructions are the very basic commands the CPU can run. Examples: add two numbers, jump from one instruction to another.
| 2 | 184 | 0 |
I'm building an app with Flask, but I don't know much about WSGI and its HTTP base, Werkzeug. When I start serving a Flask application with gunicorn and 4 worker processes, does this mean that I can handle 4 concurrent requests?
I do mean concurrent requests, and not requests per second or anything else.
|
How many concurrent requests does a single Flask process receive?
| 1 | 0 | 0 | 143,304 |
10,939,138 |
2012-06-07T20:11:00.000
| 0 | 0 | 1 | 0 |
python,xml,json,standards,dsl
| 46,526,632 | 3 | false | 0 | 0 |
In the end, I implemented a simple interpreter in Python, using S-expressions. Parsers are easy to find online (about half a page of code) and implementing functions for the language can be made simple by use of function decorators.
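For reference, the parser really is tiny — a sketch of the usual recipe, not my exact code:
    def tokenize(text):
        return text.replace("(", " ( ").replace(")", " ) ").split()

    def parse(tokens):
        token = tokens.pop(0)
        if token == "(":
            expr = []
            while tokens[0] != ")":
                expr.append(parse(tokens))
            tokens.pop(0)        # discard the ")"
            return expr
        return token             # atoms stay as strings (all the data is strings anyway)

    program = parse(tokenize('(resize "image.png" 640 480)'))
    # -> ['resize', '"image.png"', '640', '480']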
| 1 | 1 | 0 |
XML is a good file format for storing documents: content with metadata. JSON is a good file format for storing data.
Is there an analogous file format standard which is good at encoding operations? In other words, is there a standard file format which would be good for encoding small light-weight domain-specific languages? What I have in mind are simple DSLs consisting of only string data and no more than a dozen simple commands. My languages would consist of calling one command after another in a very simple manner (no conditionals or loops).
Currently, I've used XML to encode a series of operations, where each tag represents a different command. A SAX parser dispatches each element as a function call. It's very difficult to look at; just doesn't feel like an elegant solution.
Ideally, I'd be working in python and not writing my own parsers...trying to get the benefit of using an established standard file format. One fallback is to use python itself, but of course I'd prefer a language-neutral standard if one is to be found.
|
Domain Specific Language, Standard File Format
| 0 | 0 | 1 | 615 |
10,940,349 |
2012-06-07T21:47:00.000
| 1 | 0 | 1 | 0 |
c#,python,multithreading,dll,activex
| 10,952,585 | 2 | false | 0 | 0 |
I haven't worked with Python, but for C# I would suggest creating a helper class that contains the ActiveX as a static public property. Have the main thread create the ActiveX and then from there all threads access it as needed.
| 1 | 1 | 0 |
I have an ActiveX (COM) DLL that makes windows system calls (such as ReadFile() and WriteFile()). My GUIs (in Python or C#) create an instance of the DLL in the main GUI thread. However, in order to make calls to it from threads, a new instance of the DLL must be created in each thread, regardless of using C# or Python. (As a side note, the original instance could be called from a thread in C#, but this blocks the main thread; doing this in Python crashes the GUI.) Is there any way to avoid creating a new instance of the DLL in threads?
The reason why using the original DLL instance is desired: The DLL allows connection to a HID microcontroller. The DLL provides an option to allow only one exclusive handle to the microcontroller. If the GUI designer chooses this option (which is necessary in certain situations), the second DLL instance would not work as desired, since only one instance of the DLL could make the connection.
|
ActiveX DLL called from thread
| 0.099668 | 0 | 0 | 1,250 |
10,941,830 |
2012-06-08T00:51:00.000
| 2 | 0 | 1 | 0 |
python,python-3.x
| 10,943,105 | 4 | false | 0 | 0 |
I put an import pdb;pdb.set_trace() in the relevant place, and once in the debugger I use dir(), .__dict__ and pp, or any other forms of inspection necessary.
| 1 | 1 | 0 |
I am a noob to Python.
I constantly find myself looking at a piece of code and trying to work out what is inside a data structure such as for example a dictionary. In fact the first thing I am trying to work out is "what sort of data structure is this?" and THEN I try to work out how to see what is inside it. I look at a variable and say "is this a dict, or a list, or a multidict or something else I'm not yet familiar with?". Then, "What's inside it?". It's consuming vast amounts of time and I just don't know if I'm taking the right approach.
So, the question is, "How do the Python masters find out what sort of data structure something is, and what techniques do they use to see what is inside those data structures?"
I hope the question is not too general but I'm spending ridiculous amounts of time just trying to fix issues with recognizing data structures and viewing their contents, let alone getting useful code written.
thanks heaps.
|
How do the Python masters recognise and examine contents of Python 3 data structures?
| 0.099668 | 0 | 0 | 237 |
10,942,469 |
2012-06-08T02:44:00.000
| 0 | 0 | 0 | 0 |
python,firebug,web-scraping
| 10,942,524 | 2 | false | 1 | 0 |
If the answer's not in the source code (possibly obfuscated, encoded, etc), then it was probably retrieved after the page loaded with an XmlHTTPRequest. You can use the 'network' panel in Firebug to see what other pieces of data the page loaded, and what requests it made to load them.
(You may have to enable the network panel and then reload the page/start over)
| 1 | 0 | 0 |
I am trying to scrape a website, but the thing that I want to get is not in the source code. But it does appear when i use firebug. Is there a way to scrape from the firebug code as opposed to the source code?
|
Scraping a website in python with firebug?
| 0 | 0 | 1 | 1,907 |
10,948,636 |
2012-06-08T12:12:00.000
| 1 | 0 | 0 | 0 |
python
| 10,950,787 | 1 | true | 1 | 0 |
In general your application is using a WSGI-compliant framework, so you shouldn't be afraid of the multi-threaded / single-threaded server side. It's meant to work transparently and has to react the same way regardless of what kind of server it is, as long as it is WSGI compliant.
Every code block before bottle.run() will be run only once. As such, every connection (database, memcached) will be instantiated only once and shared.
When you call bottle.run(), bottlepy starts the WSGI server for you. Every request to that server fires some WSGI callable inside the bottlepy framework. You are not really interested in whether it is a single- or multi-threaded environment, as long as you don't do something strange.
By strange I mean, for instance, synchronizing something through global variables. (The exception here is the global request object, for which bottlepy ensures that it contains the proper request in the proper context.)
And in response to first question on the list: request may be computed in newly spawned thread or thread from the pool of threads (CherryPy is thread-pooled)
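A minimal sketch of what that means in practice — the client libraries are just examples, but the point is that the connections are created once, before bottle.run(), and then shared by every request:
    import bottle
    import memcache                      # python-memcached client, as an example
    from pymongo import MongoClient

    # executed exactly once, no matter which WSGI server ends up serving the app
    cache = memcache.Client(["127.0.0.1:11211"])
    db = MongoClient().mydb

    @bottle.route("/ping")
    def ping():
        return {"cached": bool(cache), "db": db.name}   # bottle turns dicts into JSON

    # server="cherrypy" assumes CherryPy is installed
    bottle.run(server="cherrypy", host="0.0.0.0", port=8080)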
| 1 | 1 | 0 |
I am developing an application with the bottlepy framework. I am using the standard library WSGIRefServer() to run a development server. It is a single threaded server.
Now when going into production, I will want to move to a multi-threaded production server, and there are many choices. Let's say I choose CherryPy.
Now, in my code, I am initializing a single wsgi application. Other than that, I am also initializing other things...
Memcached connection
Mako templates
MongoDB connection
Since standard library wsgiref is a single threaded server, and I am creating only a single wsgi app (wsgi callable), everything works just fine.
What I want to know is that when I move to the multi-threaded server, how will my wsgi app, initialization code, connections to different server, etc. behave.
Will a multi-threaded server create a separate instance of the wsgi app for every thread? And will a new thread be spawned for each new request (which then means a new wsgi app for each request)?
Will my connections to memcached, mongoDB, etc., be shared across threads or not? What else will be shared between threads?
Please explain the request-response cycle for a threaded server
|
Python: moving from dev server to production server
| 1.2 | 0 | 0 | 307 |
10,953,060 |
2012-06-08T17:01:00.000
| 2 | 0 | 0 | 0 |
python,sockets
| 37,355,354 | 2 | false | 0 | 0 |
Python supports os.writev() as well as sendmsg(). These functions are atomic, so they are equivalent to calling write() and send() respectively with a concatenated buffer.
There is also TCP_CORK. You may tell the kernel not to send partial frames until the socket is un-corked.
Using either technique, you may have control over partial TCP frames.
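A sketch of both calls (POSIX only, Python 3.3+); the buffers and endpoint are arbitrary:
    import os
    import socket

    header = b"GET / HTTP/1.1\r\nHost: example.org\r\n"
    body = b"\r\n"

    # scatter/gather send on a connected TCP socket
    sock = socket.create_connection(("example.org", 80))
    sock.sendmsg([header, body])     # both buffers go out without manual concatenation
    sock.close()

    # the same idea against a plain file descriptor
    fd = os.open("out.bin", os.O_WRONLY | os.O_CREAT, 0o644)
    os.writev(fd, [header, body])
    os.close(fd)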
| 1 | 4 | 0 |
In POSIX C we can use writev to write multiple arrays at once to a file descriptor. This is useful when you have to concatenate multiple buffers in order to form a single message to send through a socket (think of a HTTP header and body, for instance). This way I don't need to call send twice, once for the header and once for the body (what prevent the messages to be split in different frames on the wire), nor I need to concatenate the buffers before sending.
My question is, is there a Python equivalent?
|
Scatter/gather socket write in Python
| 0.197375 | 0 | 1 | 708 |
10,954,677 |
2012-06-08T19:05:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,pipeline
| 10,957,940 | 1 | true | 0 | 0 |
There's no official way at the moment. You could probably prepend a task to the MapReduce pipeline to compute and cache the list (in the datastore or blobstore, whichever is most appropriate, plus a copy in memcache). Then have your mapper and/or reducer function do a lazy initialization of a global variable that holds the list, checking first in memcache, and falling back on datastore/blobstore as necessary (and re-caching the list). As new instances are spun up to handle tasks, they'll initialize themselves.
Assuming the list is fixed at the time the MapReduce starts, competing reads from different instances won't be an issue.
| 1 | 3 | 0 |
I've begun creating a MapReduce job with the new Google App Engine Pipeline API, and I've run into a situation where I'd like every worker to have a copy of the same list during runtime.
One option would be to use memcache, but I'm worried that the size of this list might eventually be greater than what I can set with memcache. I think my other option would be to initialize every worker with this list context at runtime, but I can't find any way to do this in the docs and looking at the source code hasn't offered any obvious answers.
Is there a way to add extra parameters into a map reduce function or otherwise inject state into a MapReduce worker context?
|
Can I keep state across GAE Pipeline API workers?
| 1.2 | 0 | 0 | 123 |
10,954,687 |
2012-06-08T19:06:00.000
| 3 | 0 | 1 | 0 |
python,string
| 10,954,728 | 2 | false | 0 | 0 |
It depends on the representation of the object being printed; if the string to print contains the " character, then a single quote will be used; if the string contains the ' character, then a double quote will be used.
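If the goal is simply consistent double quotes in the output (for example because something downstream expects JSON-like text), one option is to serialise with the json module instead of relying on repr():
    import json

    print(json.dumps(['a']))   # ["a"]
    print(json.dumps([]))      # []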
| 1 | 3 | 0 |
I have a list I want to pretty print that contains empty lists as well as lists with string members. The problem is that lists that contain strings are printed with double quotes:
>>>str(['a'])
"['a']"
But an empty list is printed with single quotes:
>>> str([])
'[]'
Is there a way to always force printing strings with double quotes?
|
Print empty string with double quotes instead of single quotes in Python
| 0.291313 | 0 | 0 | 4,892 |
10,955,150 |
2012-06-08T19:45:00.000
| 0 | 0 | 1 | 0 |
python,multithreading,multiprocessing,gil
| 10,956,679 | 2 | false | 0 | 0 |
How is the GIL even relevant here? What are you expecting to get out of it?
You can spawn n threads and have them all perform blocking I/O, without a GIL.
And if you want to "manage" the threads—e.g., join the all so you know when you're done—you still need to do that explicitly; the GIL doesn't help.
| 1 | 0 | 0 |
Note: My education on this topic is lacking, so I may be making some naive assumptions.
Assume you have a function performing blocking I/O. You need to run this function n times.
If you were to simply spawn n threads (using the threading module) and start them at the same time, would it work to simply use the GIL to manage the threads (based on I/O) as opposed to using the multiprocessing.pool module to manage subprocesses?
|
Using the GIL as a thread pool
| 0 | 0 | 0 | 226 |
10,956,175 |
2012-06-08T21:20:00.000
| 2 | 0 | 1 | 0 |
python,ubuntu,file-io
| 10,956,213 | 3 | false | 0 | 0 |
When files get too large, addressing becomes an issue. Typically you get 32 bits which translates to a maximum size of about 4 gb.
| 2 | 5 | 0 |
I am writing to a file using python. The script suddenly stops running and throws an 'IOError: [Errno 27] File too large'
Is there a limit on the size of the file that you are allowed to create using a program?
Has anyone else faced this issue?
The file size was close to 4.3Gb(it is a bit big) when it stopped.
|
File too Large python
| 0.132549 | 0 | 0 | 10,844 |
10,956,175 |
2012-06-08T21:20:00.000
| 4 | 0 | 1 | 0 |
python,ubuntu,file-io
| 45,533,860 | 3 | false | 0 | 0 |
I also got this error when I had too many files in one directory. I had 64435 files in a directory, each with 10 digits + '.json' in their names, and any subsequent attempts to write new files to the directory threw errors (e.g.) OSError: [Errno 27] File too large: 'ngrams/0/0/0/0000029503.json'
| 2 | 5 | 0 |
I am writing to a file using python. The script suddenly stops running and throws an 'IOError: [Errno 27] File too large'
Is there a limit on the size of the file that you are allowed to create using a program?
Has anyone else faced this issue?
The file size was close to 4.3Gb(it is a bit big) when it stopped.
|
File too Large python
| 0.26052 | 0 | 0 | 10,844 |
10,956,683 |
2012-06-08T22:21:00.000
| 0 | 0 | 0 | 0 |
python,ruby-on-rails,server-side
| 33,303,096 | 2 | false | 1 | 0 |
Are you sure your database is well maintained and efficient (good indexes, normalised, clean, etc.)?
Or could you make use of messaging queues? You keep your Rails CRUD app, and the jobs are just added to a queue. Python scripts on the backend (or a different machine) read from the queue, process, then insert back into the database or add results to a results queue or wherever you want to read them from.
| 1 | 0 | 0 |
I am a data scientist and database veteran but a total rookie in web development and have just finished developing my first Ruby On Rails app. This app accepts data from users submitting data to my frontend webpage and returns stats on the data submitted. Some users have been submitting way too much data - it's getting slow and I think I better push the data crunching to a backend python or java app, not a database. I don't even know where to start. Any ideas on how to best architect this application? The job flow is > data being submitted from the frontend app which pushes it to the > backend for my server app to process and > send back to my Ruby on Rails page. Any good tutorials that cover this? Please help!
What should I be reading up on?
|
Ruby on Rails frontend and server side processing in Python or Java.. HOW, What, Huh?
| 0 | 0 | 0 | 694 |
10,957,467 |
2012-06-09T00:34:00.000
| 1 | 0 | 1 | 0 |
python,list,comparison
| 10,957,519 | 2 | false | 0 | 0 |
The most straightforward way: Create an object of each one, with your comparison function as __cmp__ (python 2.x) or define __lt__ and __eq__ (python 3.x). Stash each one in a list named list_. Find the least valued one using min(list_).
An optimization that might help, if practical: If you can come up with a way of mapping your objects to (possibly large) integers, such that the integer for x is < the integer for y, iff the original object ox is < the original object oy, and then take a min of the integers. This should speed things up slightly, if it's workable for your types.
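Since you already have the comparison function from the question, the whole search can also be a one-liner with functools.cmp_to_key (Python 2.7/3.2+); because handCompare(a, b) returns -1 when a is better, min() picks out the best hand:
    import functools

    best = min(combos, key=functools.cmp_to_key(handCompare))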
| 1 | 1 | 0 |
I have a function that takes as parameters 2 objects: a and b
The function checks (with a very long algorithm) which one of these objects is better.
If a is better it returns -1, if b is better it returns 1, if they tied it returns 0
My problem is:
I have 21 of these objects in a list.
I need to find out, using the function above (the function cannot be
changed, the only way is to compare 2 objects, it's a very complicated
and long algorithm), which one of these 21 objects is the best.
I tried thinking for hours how to do it efficiently without doing the same comparison too many times, how to write an algorithm that will find out which one is the best (and if two of them are tied and they both are the best, it doesn't matter which one to take, though I don't think it's even possible for a tie to happen), and I couldn't come up with anything good.
The function's name is handCompare(a, b)
The objects are found in a list called Combos, len(combos) is 21
I need an algorithm that will find out the best item in the combos list
Thanks for reading and I hope you can help :)
|
Python, using a comparison function for finding the best object
| 0.099668 | 0 | 0 | 103 |
10,957,877 |
2012-06-09T02:16:00.000
| 7 | 0 | 0 | 0 |
python,database,flat-file
| 10,957,953 | 2 | false | 0 | 0 |
Assuming by 'database' you mean 'relational database' - even the embedded databases like SQLite come with some overhead compared to a plain text file. But, sometimes that overhead is worth it compared to rolling your own.
The biggest question you need to ask is whether you are storing relational data - whether things like normalisation and SQL queries make any sense at all. If you need to lookup data across multiple tables using joins, you should certainly use a relational database - that's what they're for. On the other hand, if all you need to do is lookup into one table based on its primary key, you probably want a CSV file. Pickle and shelve are useful if what you're persisting is the objects you use in your program - if you can just add the relevant magic methods to your existing classes and expect it all to make sense.
Certainly "you shouldn't use databases unless you have a lot of data" isn't the best advice - the amount of data goes more to what database you might use if you are using one. SQLite, for example, wouldn't be suitable for something the size of Stackoverflow - but, MySQL or Postgres would almost certainly be overkill for something with five users.
| 1 | 14 | 0 |
I am making a little add-on for a game, and it needs to store information on a player:
username
ip-address
location in game
a list of alternate user names that have come from that ip or alternate ip addresses that come from that user name
I read an article a while ago that said that unless I am storing a large amount of information that cannot be held in RAM, I should not use a database. So I tried using the shelve module in python, but I'm not sure if that is a good idea.
When do you guys think it is a good idea to use a database, and when is it better to store information in another way? Also, what are some other ways to store information besides databases and flat-file databases?
|
When is it appropriate to use a database , in Python
| 1 | 1 | 0 | 5,782 |
10,959,051 |
2012-06-09T06:54:00.000
| 0 | 0 | 0 | 0 |
python,django,performance,jquery,dajaxice
| 10,959,121 | 1 | false | 1 | 0 |
In my experience the main load in a web application lies on the databases, not on frameworks or template engines.
| 1 | 0 | 0 |
If this question has already been answered by someone on this site, please point me there.
Is it a good option to use Dajaxice for a high-traffic website, assuming millions of hits per day? Has anybody faced performance issues in terms of load-times for web pages with multiple AJAX calls to server?
What are the alternatives for Python+Django projects? Is it better to use just jQuery?
|
Dajaxice performance measures for high-traffic site
| 0 | 0 | 0 | 218 |
10,959,227 |
2012-06-09T07:27:00.000
| 0 | 0 | 1 | 0 |
python
| 10,959,279 | 3 | false | 0 | 0 |
A word is full-width if its characters are full-width. You need to look up the unicode specification and see which character ranges are full-width, then check each character against that.
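The unicodedata module already exposes that property, so a sketch could be:
    # -*- coding: utf-8 -*-
    import unicodedata

    def is_fullwidth(ch):
        # 'F' = Fullwidth, 'W' = Wide; 'Na'/'H' are narrow/halfwidth, 'A' is ambiguous
        return unicodedata.east_asian_width(ch) in ("F", "W")

    print(is_fullwidth(u"你"))   # True
    print(is_fullwidth(u"A"))    # False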
| 1 | 0 | 0 |
Recently I've been dealing with texts with mixed languages, including Chinese, English, and even some emoticons.
I've been searching for this issue quite a lot, but the only thing I can find is "to replace full-width characters with half-width characters" rather than telling you how to determine whether the character is a half- or full-width word.
So, my question is:
Is it possible to tell whether a word is half-width or full-width?
|
How to distinguish whether a word is half-width or full-width?
| 0 | 0 | 0 | 1,957 |
10,960,085 |
2012-06-09T10:05:00.000
| 1 | 0 | 1 | 0 |
python,functional-programming,list-comprehension
| 10,960,143 | 5 | false | 0 | 0 |
I'm sure that others will be able to explain it better than I will, but functional programming mostly has to do with how you think of the flow of the program and whether or not you can pass around functions as objects to compute on. For example, in JavaScript, when you provide a function to be executed when an event fires, this is passing around a function, and in this sense it is almost like functional programming.
This is the sense in which list comprehension is like functional programming: you are giving instructions on how to compute each element, rather than the more procedural approach, which would be to loop through and do the computation yourself rather than hand it off as a function. Python isn't really what I would consider a true functional-programming language like LISP or ML or Haskell (is Erlang? can't remember), but it can do some things like it (look into lambda expressions in python).
Java and C/C++ are not really functional either but you could simulate it with function pointers as arguments. Not that familiar with C#...
Event driven languages tend to make use of this idea of function passing more just because they need some way to pass unknown code to be executed at a later date.
| 1 | 7 | 0 |
In my Python learning book, when I read about List Comprehension, the author has a small note explaining:
Python’s list comprehension is an example of the language’s support for
functional programming concepts.
I have gone on Wikipedia to read about functional programming, but I find it hard to grasp because I don't see any connection between list comprehension and this concept as explained on the wiki page.
Please give me a clear explanation (and if you can, give me some more examples of functional programming in Java or C# too :D)
|
Python: List Comprehension and Functional Programming
| 0.039979 | 0 | 0 | 3,589 |
10,960,672 |
2012-06-09T11:39:00.000
| 4 | 0 | 0 | 0 |
python
| 10,960,826 | 1 | true | 0 | 0 |
Unless you entirely control the system, I think you'd be better off abandoning this particular pursuit. Modern filesystems or mediums (e.g. SSD wear-leveling) can result in data being retained physically on-disk even if you overwrite them in-place.
Best practice in my book is to fill the disk with random data, then exclusively use whole-disk encryption.
| 1 | 0 | 0 |
I'm going to make a kind of remover that erases files so they are never recoverable.
I don't know the algorithm, but I think it is possible to get the exact file address on disk and
write something like 'null' there. So I'm searching the os module and others,
but I don't know how to do that... Is there a function, or another way?
Or do I just have to open the file in binary mode and overwrite it with nulls?
|
How can I get the exact memory address on HDD of an file?
| 1.2 | 0 | 0 | 130 |
10,962,076 |
2012-06-09T15:08:00.000
| 10 | 0 | 1 | 1 |
python,concurrency,wsgi,tornado
| 10,962,103 | 2 | false | 0 | 0 |
If you are truly going to be dealing with multiple simultaneous requests that are compute-bound, and you want to do it in Python, then you need a multi-process server, not multi-threaded. CPython has Global Interpreter Lock (GIL) that prevents more than one thread from executing python bytecode at the same time.
Most web applications do very little computation, and instead are waiting for I/O, either from the database, or the disk, or from services on other servers. Be sure you need to handle compute-bound requests before discarding Tornado.
| 1 | 11 | 0 |
I understand tornado is a single-threaded and non-blocking server, hence requests are handled sequentially (except when using the event-driven approach for IO operations).
Is there a way to process multiple requests in parallel in tornado for normal (non-IO) execution? I can't fork multiple processes since I need a common memory space across requests.
If it's not possible, please suggest to me other python servers which can handle parallel requests and also support wsgi.
|
Is concurrency possible in tornado?
| 1 | 0 | 0 | 5,472 |
10,962,393 |
2012-06-09T15:52:00.000
| 7 | 0 | 1 | 0 |
python,garbage-collection,cpython
| 10,962,487 | 3 | false | 0 | 0 |
How does Python detect & free circular memory references before making use of the gc module?
Python's garbage collector (not actually the gc module, which is just the Python interface to the garbage collector) does this. So, Python doesn't detect and free circular memory references before making use of the garbage collector.
Python ordinarily frees most objects as soon as their reference count reaches zero. (I say "most" because it never frees, for example, small integers or interned strings.) In the case of circular references, this never happens, so the garbage collector periodically walks memory and frees circularly-referenced objects.
This is all CPython-specific, of course. Other Python implementations have different memory management (Jython = Java VM, IronPython = Microsoft .NET CLR).
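A quick way to watch the collector do this in CPython:
    import gc

    class Node(object):
        pass

    a, b = Node(), Node()
    a.other, b.other = b, a     # reference cycle
    del a, b                    # refcounts never reach zero because of the cycle

    print(gc.collect())         # non-zero: the cycle detector found and freed them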
| 1 | 43 | 0 |
I'm trying to understand how Python's garbage collector detects circular references. When I look at the documentation, all I see is a statement that circular references are detected, except when the objects involved have a __del__ method.
If this happens, my understanding (possibly faulty) is that the gc module acts as a failsafe by (I assume) walking through all the allocated memory and freeing any unreachable blocks.
How does Python detect & free circular memory references before making use of the gc module?
|
How does Python's Garbage Collector Detect Circular References?
| 1 | 0 | 0 | 14,083 |
10,967,849 |
2012-06-10T10:08:00.000
| 17 | 0 | 1 | 0 |
python
| 10,967,855 | 4 | true | 0 | 0 |
Properties are more flexible than attributes, since you can define functions that describe what is supposed to happen when setting, getting or deleting them. If you don't need this additional flexibility, use attributes – they are easier to declare and faster.
In languages like Java, it is usually recommended to always write getters and setters, in order to have the option to replace these functions with more complex versions in the future. This is not necessary in Python, since the client code syntax to access attributes and properties is the same, so you can always choose to use properties later on, without breaking backwards compatibility.
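For example, a plain attribute can later be upgraded to a property without touching any callers — a sketch:
    class Circle(object):
        def __init__(self, radius):
            self.radius = radius        # callers used c.radius as a plain attribute at first

        @property
        def radius(self):               # later upgraded to a property...
            return self._radius

        @radius.setter
        def radius(self, value):        # ...so assignment can be validated
            if value < 0:
                raise ValueError("radius must be non-negative")
            self._radius = value

    c = Circle(2)
    c.radius = 5                        # client syntax is unchanged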
| 3 | 8 | 0 |
Just a quick question, I'm having a little difficulty understanding where to use properties vs. where to use plain old attributes. The distinction to me is a bit blurry. Any resources on the subject would be superb, thank you!
|
When to use attributes vs. when to use properties in python?
| 1.2 | 0 | 0 | 1,363 |
10,967,849 |
2012-06-10T10:08:00.000
| 0 | 0 | 1 | 0 |
python
| 10,968,354 | 4 | false | 0 | 0 |
In addition to what Daniel Roseman said, I often use properties when I'm wrapping something, i.e. when I don't store the information myself but the wrapped object does. Then properties make excellent accessors.
| 3 | 8 | 0 |
Just a quick question, I'm having a little difficulty understanding where to use properties vs. where to use plain old attributes. The distinction to me is a bit blurry. Any resources on the subject would be superb, thank you!
|
When to use attributes vs. when to use properties in python?
| 0 | 0 | 0 | 1,363 |
10,967,849 |
2012-06-10T10:08:00.000
| 14 | 0 | 1 | 0 |
python
| 10,967,861 | 4 | false | 0 | 0 |
The point is that the syntax is interchangeable. Always start with attributes. If you find you need additional calculations when accessing an attribute, replace it with a property.
| 3 | 8 | 0 |
Just a quick question, I'm having a little difficulty understanding where to use properties vs. where to use plain old attributes. The distinction to me is a bit blurry. Any resources on the subject would be superb, thank you!
|
When to use attributes vs. when to use properties in python?
| 1 | 0 | 0 | 1,363 |
10,968,439 |
2012-06-10T11:51:00.000
| 9 | 0 | 0 | 1 |
python,google-app-engine,gql,app-engine-ndb
| 10,974,037 | 2 | false | 1 | 0 |
This depends on lots of things like the size of the entities and the number of values that need to look up in the index, so it's best to benchmark it for your specific application. Also beware that if you find that on a sunny day it takes e.g. 10 seconds to load all your items, that probably means that some small fraction of your queries will run into a timeout due to natural variations in datastore performance, and occasionally your app will hit the timeout all the time when the datastore is having a bad day (it happens).
| 2 | 5 | 0 |
I am looking around in order to find out the max limit of results I can have from a GQL query on Ndb on Google AppEngine. I am using an implementation with cursors but it will be much faster if I retrieve them all at once.
|
What is the Google Appengine Ndb GQL query max limit?
| 1 | 1 | 0 | 1,106 |
10,968,439 |
2012-06-10T11:51:00.000
| 7 | 0 | 0 | 1 |
python,google-app-engine,gql,app-engine-ndb
| 10,969,575 | 2 | true | 1 | 0 |
Basically you don't have the old limit of 1000 entities per query anymore, but consider using a reasonable limit, because you can hit the time out error and it's better to get them in batches so users won't wait during load time.
| 2 | 5 | 0 |
I am looking around in order to find out the max limit of results I can have from a GQL query on Ndb on Google AppEngine. I am using an implementation with cursors but it will be much faster if I retrieve them all at once.
|
What is the Google Appengine Ndb GQL query max limit?
| 1.2 | 1 | 0 | 1,106 |
10,968,541 |
2012-06-10T12:07:00.000
| 0 | 1 | 1 | 0 |
python,imap
| 12,879,530 | 5 | true | 0 | 0 |
To answer your question: What you're looking for doesn't exist in the wild AFAIK.
Short of that, have you considered calling context.io from Python?
| 1 | 4 | 0 |
Is there a high-level IMAP library for Python?
By high-level I mean that I do not want a library where I can issue basic IMAP commands (like Python's own imaplib). What I want is a library that takes care of most of the IMAP details and gives me a more generic interface with objects for folders/mailboxes and messages. Additionally, it would be nice if it supported the disconnected mode of operation (offline mode) transparently.
|
High-Level IMAP library for Python
| 1.2 | 0 | 0 | 3,078 |
10,970,042 |
2012-06-10T15:43:00.000
| -1 | 1 | 1 | 0 |
python
| 23,380,402 | 5 | false | 0 | 0 |
It's simpler than one may think. The following is for pc2 9.2.3-2565
Add language as follows (python here as an example):
Display Name: Python
Compile Cmd Line: touch OK
Executable Filename: OK
Program Execution Command Line: python {:mainfile}
python3.3 or python3.4 will work too.
pc2 could be easier, of course, but there does not seem to be much support left at CSUS. Resetting a contest would be an even greater feature; the current need to clone directories for test, practice, and actual contest is very awkward. Better management of the database (like the ability to remove things) would make it into a great tool. It is alright, but it could be great.
| 1 | 0 | 0 |
I will be running a programming competition for high school students in the near future, and was originally going to use PC^2 (Programming Contest Control System) for the automated judging of the solutions. This software is commonly used in the ACM's International Collegiate Programming Contest regionals as well as the world finals. This is an excellent system which I have used before, but one of its pitfalls is its language support (Java, C, and C++). I'm a little bit concerned, as not all high school students who may be attending will have exposure to any of these languages. However, many local high schools teach introductory programming courses in Python. Is there an equivalent system to PC^2 which has Python support?
|
PC^2 equivalent compatible with Python
| -0.039979 | 0 | 0 | 1,041 |
10,970,310 |
2012-06-10T16:28:00.000
| 4 | 0 | 0 | 1 |
python,hadoop,amazon-web-services,amazon-emr
| 10,970,649 | 1 | true | 0 | 0 |
If you want to signal an error, return a non-zero code from your python script. You can write any logging to stderr and hadoop will capture that in the task logs. You can also send status to the reporter and counters by prefixing the stderr lines with reporter:status:<msg> or reporter:counter:<group>,<name>,<increment>
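A sketch of what that looks like inside a streaming mapper (looks_valid() is a placeholder for your own validation, and the counter/status texts are arbitrary):
    import sys

    for line in sys.stdin:
        if not looks_valid(line):          # placeholder for your own check
            sys.stderr.write("reporter:counter:MyJob,BadRecords,1\n")
            sys.stderr.write("fatal: cannot parse input line\n")
            sys.exit(1)                    # non-zero exit marks the task attempt as failed
        sys.stderr.write("reporter:status:processed one record\n")
        # ... emit key\tvalue pairs on stdout as usual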
| 1 | 3 | 0 |
What is the best practice for reporting exceptions in Hadoop streaming with Python scripts?
I mean: let's say I have a mapper script that can't understand its input, how do I signal Hadoop to terminate the job & report an error message?
Do I use logging and finish off with sys.exit?
|
Hadoop streaming: reporting error
| 1.2 | 0 | 0 | 1,374 |
10,972,821 |
2012-06-10T22:25:00.000
| 0 | 0 | 0 | 0 |
python,pygtk
| 10,973,513 | 1 | true | 0 | 1 |
So, I understand now: it turns out that when items are reordered in IconView, gtk.ListStore.reorder or something similar is called. What that means is that all I needed to do was to use gtk.ListStore.get_iter() or gtk.ListStore.get_iter_first() and all the problems are solved.
How trivial! All I needed to do was eat over it it seems.
| 1 | 0 | 0 |
Using PyGtk's IconView, I can set the icons to be reorderable by calling gtk.IconView.set_reorderable(True). My question is what is the best way to retrieve the new order? That is, how should I access a property of each of the elements in the new order? An iterator of sorts?
I am using gtk.ListStore to store the data.
I know this might sound trivial but I have virtually no experience in Python or PyGtk (or GTK in general) so I'd like to know the right way! Thanks!
|
Reordering in IconView (PyGtk)
| 1.2 | 0 | 0 | 135 |
10,973,432 |
2012-06-11T00:21:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,web-applications
| 10,973,713 | 1 | false | 1 | 0 |
You'd use the datastore to create a union as an entity class, with a description and a name. If your image is small you can store it in your entity, if it's large, you may store it in the blobstore and store a link to it inside your entity.
You can use the python User API for authentication. You don't really need any special session work if you're using the User API.
| 1 | 0 | 0 |
First than all, I don't even know if this is a session related question. But I could not think a better way to describe it in the title.
I'm developing a web application for registered users so they can create and manage trade unions.
A user can create several unions. Each union can store an image, a description and a name.
The index page shows the list of unions created by the currently registered user.
When the user clicks on a union from the list, all the pages of the application must show
in their headers the corresponding name and image stored for that union.
Also, all the options of the application must refer to the currently selected union.
That is the process for every selected union.
How could I do this on App Engine Python? What technique could I use? Is it something
related to sessions? I do the authentication process with the Gmail service.
I hope I explained myself clearly.
Thanks in advance!
|
How to do this kind of session related task in App Engine Python?
| 0.197375 | 0 | 0 | 42 |
10,978,425 |
2012-06-11T10:29:00.000
| 4 | 0 | 1 | 0 |
python,regex,twitter
| 10,978,445 | 2 | false | 0 | 0 |
Just remove the anchor ^ and $ and you will be good to go.
In case you don't want to match empty string from "Example @ nothing", you may want to specify "1 or more qualifier" + instead of *. i.e. @([a-zA-Z0-9_]+)[:;, ]
Restricting to 1-15 character username can be done by replacing * with {1,15}, i.e. @([a-zA-Z0-9_]{1,15})[:;, ].
If you want to get @ sign plus the ending characters as result, @[a-zA-Z0-9_]{1,15}[:;, ] is sufficient.
If you want to capture the name only, you can use this @([a-zA-Z0-9_]{1,15})[:;, ]
In case the token is right at the end of the string and without the special characters, and you want to capture it, you may want to modify [:;, ] to (?:[:;, ]|$)
| 1 | 1 | 0 |
How do I match words that begin with @ and ends with ;, ., :, or ?
The words can have any alphanumeric characters and may consist of underscores.
I have come up with ^@([a-zA-Z0-9_])*[:;, ]$ which seems to work for single word sentences alone.
|
Matching @user with regex
| 0.379949 | 0 | 0 | 213 |
10,979,667 |
2012-06-11T11:59:00.000
| 202 | 0 | 1 | 0 |
python,ipython,jupyter
| 10,986,483 | 12 | true | 0 | 0 |
CTRL - ML toggles line numbers in the CodeMirror area. See the QuickHelp for other keyboard shortcuts.
In more detail, CTRL - M (or ESC) brings you to command mode; then pressing the L key should toggle the visibility of the current cell's line numbers. In more recent notebook versions Shift-L should toggle for all cells.
If you can't remember the shortcut, bring up the command palette Ctrl-Shift+P (Cmd+Shift+P on Mac) and search for "line numbers"; it should allow you to toggle and show you the shortcut.
| 6 | 176 | 0 |
Error reports from most language kernels running in IPython/Jupyter Notebooks indicate the line on which the error occurred; but (at least by default) no line numbers are indicated in Notebooks.
Is it possibile to add the line numbers to IPython/Jupyter Notebooks?
|
Showing line numbers in IPython/Jupyter Notebooks
| 1.2 | 0 | 0 | 185,903 |
10,979,667 |
2012-06-11T11:59:00.000
| 7 | 0 | 1 | 0 |
python,ipython,jupyter
| 36,500,917 | 12 | false | 0 | 0 |
For me, ctrl + m is used to save the webpage as a png, so it does not work properly. But I found another way.
On the toolbar, there is a button named "open the command palette"; you can click it and type in "line", and you will see the "toggle cell line numbers" command there.
| 6 | 176 | 0 |
Error reports from most language kernels running in IPython/Jupyter Notebooks indicate the line on which the error occurred; but (at least by default) no line numbers are indicated in Notebooks.
Is it possibile to add the line numbers to IPython/Jupyter Notebooks?
|
Showing line numbers in IPython/Jupyter Notebooks
| 1 | 0 | 0 | 185,903 |
10,979,667 |
2012-06-11T11:59:00.000
| 9 | 0 | 1 | 0 |
python,ipython,jupyter
| 42,519,445 | 12 | false | 0 | 0 |
Here is how to know active shortcut (depending on your OS and notebook version, it might change)
Help > Keyboard Shortcuts > toggle line numbers
On OSX running ipython3 it was ESC L
| 6 | 176 | 0 |
Error reports from most language kernels running in IPython/Jupyter Notebooks indicate the line on which the error occurred; but (at least by default) no line numbers are indicated in Notebooks.
Is it possibile to add the line numbers to IPython/Jupyter Notebooks?
|
Showing line numbers in IPython/Jupyter Notebooks
| 1 | 0 | 0 | 185,903 |
10,979,667 |
2012-06-11T11:59:00.000
| 6 | 0 | 1 | 0 |
python,ipython,jupyter
| 59,373,156 | 12 | false | 0 | 0 |
Adding to ronnefeldt's accepted answer: Shift L toggles line numbers in all cells. This works in JupyterLab 1.0.0 and in Jupyter Notebooks.
| 6 | 176 | 0 |
Error reports from most language kernels running in IPython/Jupyter Notebooks indicate the line on which the error occurred; but (at least by default) no line numbers are indicated in Notebooks.
Is it possibile to add the line numbers to IPython/Jupyter Notebooks?
|
Showing line numbers in IPython/Jupyter Notebooks
| 1 | 0 | 0 | 185,903 |
10,979,667 |
2012-06-11T11:59:00.000
| 1 | 0 | 1 | 0 |
python,ipython,jupyter
| 45,615,365 | 12 | false | 0 | 0 |
You can also find Toggle Line Numbers under View on the top toolbar of the Jupyter notebook in your browser.
This adds/removes the lines numbers in all notebook cells.
For me, Esc+l only added/removed the line numbers of the active cell.
| 6 | 176 | 0 |
Error reports from most language kernels running in IPython/Jupyter Notebooks indicate the line on which the error occurred; but (at least by default) no line numbers are indicated in Notebooks.
Is it possibile to add the line numbers to IPython/Jupyter Notebooks?
|
Showing line numbers in IPython/Jupyter Notebooks
| 0.016665 | 0 | 0 | 185,903 |
10,979,667 |
2012-06-11T11:59:00.000
| -3 | 0 | 1 | 0 |
python,ipython,jupyter
| 41,928,934 | 12 | false | 0 | 0 |
1. Press Esc to enter command mode.
2. Press l (the letter L in lowercase) to show the line numbers.
| 6 | 176 | 0 |
Error reports from most language kernels running in IPython/Jupyter Notebooks indicate the line on which the error occurred; but (at least by default) no line numbers are indicated in Notebooks.
Is it possibile to add the line numbers to IPython/Jupyter Notebooks?
|
Showing line numbers in IPython/Jupyter Notebooks
| -0.049958 | 0 | 0 | 185,903 |
10,979,918 |
2012-06-11T12:15:00.000
| 1 | 0 | 1 | 1 |
python,upgrade,pythonpath,system-variable
| 10,979,970 | 2 | false | 0 | 0 |
I know you say you've updated %PATH%. However, from the description of the symptoms it is almost certain that c:\Python27 still appears on the %PATH% instead of (or before) c:\Python32.
To diagnose, start cmd.exe and type set. Then locate PATH and see what Python directories it contains and in what order.
| 2 | 1 | 0 |
I have installed a new version of Python, so I want to make sure when Python is invoked that version is first in my path. So, now on my 'C' drive I have "Python27" and "Python32" (old and new version, respectively).
When I type "python" in the command line I get "Python 2.7". Using control panel I have changed the "path" and "pythonpath" user variables (from 'C:\Python27' to 'C:\Python32') and to be sure I have reload the system. It still does not work. Does anyone have any idea how I can force the system to use the new version of Python?
ADDED
Maybe this is important: when I go to the 'Python32' directory and type 'python' on the command line, I do get the new version.
|
How to make newest version of Python the default or first in path
| 0.099668 | 0 | 0 | 109 |
10,979,918 |
2012-06-11T12:15:00.000
| 1 | 0 | 1 | 1 |
python,upgrade,pythonpath,system-variable
| 10,980,880 | 2 | false | 0 | 0 |
Personally, I put the dirs to all installed Python versions in %PATH%, but changed the executable names for all but the 'default' version. E.g., I have a C:\Python26\Python.exe, C:\Python27\Python27.exe and a C:\Python32\Python32.exe. This way I can easily start any version from the command line.
| 2 | 1 | 0 |
I have installed a new version of Python, so I want to make sure when Python is invoked that version is first in my path. So, now on my 'C' drive I have "Python27" and "Python32" (old and new version, respectively).
When I type "python" in the command line I get "Python 2.7". Using control panel I have changed the "path" and "pythonpath" user variables (from 'C:\Python27' to 'C:\Python32') and to be sure I have reload the system. It still does not work. Does anyone have any idea how I can force the system to use the new version of Python?
ADDED
Maybe this is important: when I go to the 'Python32' directory and type 'python' on the command line, I do get the new version.
|
How to make newest version of Python the default or first in path
| 0.099668 | 0 | 0 | 109 |
10,983,139 |
2012-06-11T15:28:00.000
| 2 | 0 | 1 | 0 |
python,multithreading
| 10,991,579 | 3 | false | 0 | 0 |
A fairly simple answer that occurred to me after posting this is to simply break up the timer into multiple sub-timers, e.g. having 10 6-second timers instead where each one starts the next one in a chain. That way, if I get suspended, I only lose one of the timers and still get most of the wait before timing out.
This is of course not foolproof, especially if I get repeatedly suspended and restarted, but it's easy to do and seems like it might be good enough.
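A sketch of the chained-timer idea (60 seconds split into 10 hops; the names and callback are made up, and this assumes it runs inside a long-lived process):
    import threading

    HOPS = 10
    HOP_SECONDS = 6.0

    def timed_out():
        print("operation timed out")

    def start_timeout(on_timeout, hops_left=HOPS):
        # each hop schedules the next one; a SIGSTOP can only swallow the hop in flight
        if hops_left == 0:
            on_timeout()
            return
        t = threading.Timer(HOP_SECONDS, start_timeout, args=(on_timeout, hops_left - 1))
        t.daemon = True
        t.start()

    start_timeout(timed_out)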
| 1 | 3 | 0 |
I have some Python code which uses threading.Timer to implement a 60-second timeout for an operation.
The problem is that this code runs in a job-control environment where it may get pre-empted by a higher priority job. In this case it will be sent SIGSTOP, and then some time later, SIGCONT. I need a way to somehow notice that this has happened and reset the timeout: obviously the operation hasn't really timed out if it's been suspended for the whole 60 seconds.
I tried to add a signal handler for SIGCONT but this seems to get executed after the code provided to threading.Timer has been executed.
Is there some way to achieve this?
|
How to handle timeouts when a process receives SIGSTOP and SIGCONT?
| 0.132549 | 0 | 0 | 428 |
10,984,263 |
2012-06-11T16:46:00.000
| 5 | 0 | 0 | 0 |
python,wxpython
| 10,984,306 | 2 | true | 0 | 1 |
The & character is a special symbol for those kinds of buttons. It defines what key you press to use a keyboard shortcut.
&x - [Alt]+[x] hotkey shortcut. You can escape it by using a second ampersand: && -> '&' char
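So for file names the usual workaround is to double the ampersand before using it as a label — a rough sketch:
    import wx

    app = wx.App(False)
    frame = wx.Frame(None, title="demo")

    name = "hello&goodbye.txt"
    button = wx.Button(frame, label=name.replace("&", "&&"))  # shows: hello&goodbye.txt

    frame.Show()
    app.MainLoop()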
| 1 | 3 | 0 |
When I use wx python to create a button with a file name as the button label, I lose the & character that is inside of the file names.
If a file were named: hello&goodbye.txt, the button would read: hellogoodbye.txt
I have no idea where the & character goes and would love a little help here.
|
Python '&' Character is missing
| 1.2 | 0 | 0 | 203 |