Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
19,205,614 | 2013-10-06T05:05:00.000 | 0 | 0 | 1 | 0 | for-loop,python-3.x | 19,205,633 | 3 | false | 0 | 0 | It would probably be easiest to write a for loop with an index like i and use it to add i*increment to the start value, saving the resulting value to a list. Have the loop run numberOfValues times. If this is homework, it would be better for you to write out the actual code yourself. | 2 | 0 | 0 | I'm a beginner to Python and I'm having some trouble with this. I have to make a for loop out of this problem. Can anyone explain how I would go about this?
nextNValues (startValue, increment, numberOfValues)
This function creates a string of numberOfValues values, starting with startValue and
counting by increment. For example, nextNValues (5, 4, 3) would generate a string of
(not including the comments):
5 - the start value
9 - counting by 4, the increment
13 - stopping after 3 lines of output, the numberOfValues | Basic for loop in Python 3 | 0 | 0 | 0 | 142 |
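A minimal sketch of the loop the answer describes; the function name and parameters come from the question, and the one-value-per-line output format is an assumption:

```python
def nextNValues(startValue, increment, numberOfValues):
    # Build a string of numberOfValues values, starting at startValue
    # and counting by increment, one value per line.
    values = []
    for i in range(numberOfValues):
        values.append(str(startValue + i * increment))  # i*increment past the start
    return "\n".join(values)

print(nextNValues(5, 4, 3))  # prints 5, 9, 13 on separate lines
```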
19,207,019 | 2013-10-06T08:44:00.000 | 29 | 0 | 1 | 0 | python,windows,pycharm | 19,213,327 | 3 | true | 0 | 0 | UPDATE
Starting with version 4.0 there's an option Show command line afterwards (renamed in later versions to Run with Python console) when editing run/debug configuration in Run|Edit Configurations....
From output of python --help:
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
To set interpreter option in PyCharm go to Run|Edit Configuration | 3 | 26 | 0 | In PyCharm, after I run a script it automatically kills it:
C:\Users\Sean.virtualenvs\Stanley\Scripts\python.exe C:/Users/Sean/PycharmProjects/Stanley/Stanley.py
Process finished with exit code 0
How can I interact with the script after it starts? For lack of a better way to phrase it, how can I get the
>>>
prompt after the script runs once through?
PyCharm Community Edition 3.0
Windows 7
Python 2.7 | Interacting with program after execution | 1.2 | 0 | 0 | 17,624 |
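Outside PyCharm's run configuration, one portable way to get a >>> prompt after any script is the standard library code module, equivalent in spirit to the -i option quoted in the accepted answer; the script body here is invented:

```python
import code

def main():
    answer = 42  # stand-in for whatever Stanley.py computes
    return locals()

if __name__ == "__main__":
    namespace = main()
    # Drop into an interactive >>> prompt with the script's names in scope
    code.interact(banner="Script finished; inspect away.", local=namespace)
```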
19,207,019 | 2013-10-06T08:44:00.000 | 37 | 0 | 1 | 0 | python,windows,pycharm | 27,978,591 | 3 | false | 0 | 0 | In PyCharm, in the Run/Debug menu choose Edit Configurations, then check the box 'Show command line afterwards'. | 3 | 26 | 0 | In PyCharm, after I run a script it automatically kills it:
C:\Users\Sean.virtualenvs\Stanley\Scripts\python.exe C:/Users/Sean/PycharmProjects/Stanley/Stanley.py
Process finished with exit code 0
How can I interact with the script after it starts? For lack of a better way to phrase it, how can I get the
>>>
prompt after the script runs once through?
PyCharm Community Edition 3.0
Windows 7
Python 2.7 | Interacting with program after execution | 1 | 0 | 0 | 17,624 |
19,207,019 | 2013-10-06T08:44:00.000 | 8 | 0 | 1 | 0 | python,windows,pycharm | 50,508,341 | 3 | false | 0 | 0 | Click Run -> Edit Configurations...,
Then check the box Run with Python console. | 3 | 26 | 0 | In PyCharm, after I run a script it automatically kills it:
C:\Users\Sean.virtualenvs\Stanley\Scripts\python.exe C:/Users/Sean/PycharmProjects/Stanley/Stanley.py
Process finished with exit code 0
How can I interact with the script after it starts? For lack of a better way to phrase it, how can I get the
>>>
prompt after the script runs once through?
PyCharm Community Edition 3.0
Windows 7
Python 2.7 | Interacting with program after execution | 1 | 0 | 0 | 17,624 |
19,209,139 | 2013-10-06T13:02:00.000 | 1 | 0 | 1 | 0 | vim,python-mode,syntastic | 19,209,263 | 4 | false | 0 | 0 | I don't work in python, so I can't tell you if there will be a conflict, but you can turn off Syntastic for python files - see :h syntastic_ignore_files. | 1 | 12 | 0 | I have installed python-mode in VIM. But I also have Syntastic installed. Since both do syntax checking, is there going to be a conflict? How can I turn off Syntastic for Python files?
Thanks for any help | Syntastic and Python-mode together? | 0.049958 | 0 | 0 | 4,223 |
19,211,444 | 2013-10-06T17:00:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine | 19,237,649 | 4 | false | 1 | 0 | To further clarify jayhendren's answer - if you are planning to use GAE's memcache service, you must use
from google.appengine.api import memcache
you cannot use an open source memcache library. The only scenario where you could use the standard python memcache lib is if you were running your own memcache service somewhere (e.g. on Compute Engine) and you wanted to connect out to that over a socket. I'm guessing you're not doing that.
Assuming you want to use GAE's built in memcache service: since there are differences between the API defined by GAE's memcache lib and the standard python memcache libs, you will have to make some minor changes to memorised so that it can successfully talk to the GAE memcache library. For the most part though the developer facing functionality is the same as the standard python lib. If you get it working, let us know! | 1 | 2 | 0 | I want to use a library (memorised) that uses memcache like this: import memcache
Now on App Engine, memcache must be imported like this: from google.appengine.api import memcache
So I get this error when running with dev_appserver.py: ImportError: No module named memcache
Can I use this library without modifying it? | How to use a library that imports memcache in App Engine | 0.099668 | 0 | 0 | 1,996 |
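One hedged workaround, not taken from the answers: alias GAE's memcache module under the name the library imports before importing it. This works only to the extent that memorised sticks to the API surface the two modules share:

```python
import sys
from google.appengine.api import memcache

# Make "import memcache" inside third-party code resolve to GAE's module.
sys.modules['memcache'] = memcache

import memorised  # now picks up GAE's memcache under the expected name
```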
19,211,578 | 2013-10-06T17:14:00.000 | 5 | 0 | 1 | 1 | python,asynchronous,twisted | 19,211,645 | 3 | true | 0 | 0 | It's hard to talk about this without defining a lot of terms more precisely and taking issue with your facts, but here's my attempt:
Question 1:
Try man select, which is approximately how Twisted is implemented - it's a way to ask the operating system to monitor several things at once and let the application know when any one of them fires (block on multiple things).
Question 2:
Yeah, pretty much - but you're wrong about Javascript, it's just like Twisted. | 1 | 6 | 0 | I'm new to the twisted library and I'm trying to understand how it is done that operations in python/twisted are performed asynchronously. So far I thought that only GUI-alike (Qt or javascript) platforms use event-driven architecture extensively.
facts:
Twisted programs are run in one thread = no multithreading
reactor and deferred patterns are used: callbacks/errbacks are declared and the execution of everything is controlled by reactor main loop
a single CPU can never do anything truly parallelly, because it shares its resources between processes, etc. By parallel code execution I mean that the programming platform (python, javascript, whatever) executes more than one sequence of operations (which can be done, for example, using multithreading)
question 1
Python could be seen as a high-level wrapper for the operating system. What are the OS functions (or C functions) that provide asynchronous operation handling? Are there any?
question 2
Q1 leads me to an idea, that twisted's asynchronicity is not a true asynchronicity, like we have in Javascript. In JavaScript, for example, if we provide 3 different buttons, attach callback functions to them and we click all three buttons - then the 3 callbacks will be executed parallelly. Truly parallelly.
In Twisted - as far as I understand - it's not true asynchronicity - it's, let's say, approximated asynchronicity instead, since no operations would be performed parallelly (in terms of code, as I mentioned in fact3). In Twisted the first n line of code (defining protocols, factories, connections, etc.) are the declarations of what is going to happen when entire system starts. Nothing runs so far. Real execution starts then the reactor.run() is fired. I understand that the reactor runtime is based on a single while True loop which iterates through events. The reactor checks any awaiting tasks to do, processes them, send their result back to the queue (either to callbacks or errbacks). In the next loop execution they'll be processed one step further. So the deferred execution is linear in fact (though, from outside it looks like it was executed parallelly). Is my interpretation correct?
I'd appreciate if someone could answer my questions and/or explain how asynchronicity works in twisted/python platform and how is it related to operating system. Thanks in advance for good explanations!
edit: links to articles explaining asynchronicity are very welcome! | understanding python twisted asynchronicity in terms of operating system | 1.2 | 0 | 0 | 394 |
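To make the man select pointer concrete, here is a minimal standard-library sketch (not Twisted itself) of asking the OS to block on several sockets at once; a reactor loop is essentially this pattern plus callback dispatch:

```python
import select
import socket

# Two listening sockets; select.select() blocks until any becomes readable.
listeners = []
for port in (8001, 8002):
    s = socket.socket()
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(5)
    listeners.append(s)

while True:
    readable, _, _ = select.select(listeners, [], [])
    for s in readable:
        conn, addr = s.accept()
        conn.sendall(b"hello\n")  # handle the event, then return to the loop
        conn.close()
```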
19,213,352 | 2013-10-06T20:03:00.000 | 1 | 0 | 1 | 0 | python,generator,itertools | 19,213,381 | 2 | true | 0 | 0 | Not only is there not a simple way, there is not a way at all, if you want to allow any generator (or any iterable). In general, there is no way to know when you are 10 items from the end of a generator, or even whether the generator has an end. Generators only give you one item at a time, and tell you nothing about how many items are "left". You would have to iterate through the entire generator, keeping a temporary cache of the most recent 10 items, and then yield those when (or if!) the generator terminates.
Note the "or if". A generator need not be finite. For an infinite generator, there is no such thing as the "last" 10 elements. | 1 | 2 | 0 | Say I have a generator in Python and I want to iterate over everything in it except the first 10 iterations and the last 10 iterations. itertools.islice supports the first part of this slicing operation, but not the second. Is there a simple way to accomplish this? | Indexing form the end of a generator | 1.2 | 0 | 0 | 48 |
19,213,407 | 2013-10-06T20:09:00.000 | 0 | 0 | 0 | 0 | python,opencv,rgb,gimp | 19,217,476 | 4 | false | 0 | 0 | As the blue, green, and red images each have only one channel, each is basically a gray-scale image.
If you want to add colors to dog_blue.jpg, for example, you create a 3-channel image and copy the contents into all the channels, or do cvCvtColor(src,dst,CV_GRAY2BGR). Now you will be able to add colors to it, as it has become a 3-channel image. | 2 | 1 | 1 | I have a JPG image, and I would like to find a way to:
Decompose the image into red, green and blue intensity layers (8 bit per channel).
Colorise each of these now 'grayscale' images with its appropriate color
Produce 3 output images in appropriate color, of each channel.
For example if I have an image:
dog.jpg
I want to produce:
dog_blue.jpg dog_red.jpg and dog_green.jpg
I do not want grayscale images for each channel. I want each image to be represented by its correct color.
I have managed to use the decompose function in gimp to get the layers, but each one is grayscale and I can't seem to add color to it.
I am currently using OpenCV and Python bindings for other projects so any suitable code that side may be useful if it is not easy to do with gimp | How do I use Gimp / OpenCV Color to separate images into coloured RGB layers? | 0 | 0 | 0 | 2,596 |
19,213,407 | 2013-10-06T20:09:00.000 | 0 | 0 | 0 | 0 | python,opencv,rgb,gimp | 55,448,010 | 4 | false | 0 | 0 | In the BGR image you have three channels. When you split the channels using the split() function, like B,G,R=cv2.split(img), then B, G, and R each become a single-channel (monochannel) image. So you need to add two extra channels filled with zeros to make a 3-channel image that is only activated for one specific color channel. | 2 | 1 | 1 | I have a JPG image, and I would like to find a way to:
Decompose the image into red, green and blue intensity layers (8 bit per channel).
Colorise each of these now 'grayscale' images with its appropriate color
Produce 3 output images in appropriate color, of each channel.
For example if I have an image:
dog.jpg
I want to produce:
dog_blue.jpg dog_red.jpg and dog_green.jpg
I do not want grayscale images for each channel. I want each image to be represented by its correct color.
I have managed to use the decompose function in gimp to get the layers, but each one is grayscale and I can't seem to add color to it.
I am currently using OpenCV and Python bindings for other projects so any suitable code that side may be useful if it is not easy to do with gimp | How do I use Gimp / OpenCV Color to separate images into coloured RGB layers? | 0 | 0 | 0 | 2,596 |
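A small sketch of the zero-channel approach from the second answer, using the file names from the question:

```python
import cv2
import numpy as np

img = cv2.imread("dog.jpg")
b, g, r = cv2.split(img)
zeros = np.zeros_like(b)

# Rebuild 3-channel BGR images with only one channel populated,
# so each output renders in its own colour instead of gray.
cv2.imwrite("dog_blue.jpg", cv2.merge([b, zeros, zeros]))
cv2.imwrite("dog_green.jpg", cv2.merge([zeros, g, zeros]))
cv2.imwrite("dog_red.jpg", cv2.merge([zeros, zeros, r]))
```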
19,215,815 | 2013-10-07T01:17:00.000 | 1 | 1 | 0 | 0 | python,google-api,google-search-api,pagerank,alexa | 19,393,738 | 4 | false | 1 | 0 | Alexa (via AWS) charges to use their API to access Alexa rankings. The charge per query is micro so you can get hundreds of thousands of ranks relatively cheaply. I used to run a few search directories that indexed Alexa rankings over time, so I have experience with this. The point is, you're being evil by scraping vast amounts of data when you can pay for the legitimate service.
Regarding PageRank... Google do not provide a way to access this data. The sites that offer to show your PageRank use a trick to get the PageRank via the Google Toolbar. So again, this is not legitimate, and I wouldn't count on it for long-term data mining, especially not in bulk quantities.
Besides, PageRank counts for very little these days, since Google now relies on about 200 other factors to rank search results, as opposed to just measuring sites' link authority. | 1 | 1 | 0 | I am trying to access historical google page rankings or alexa rankings over time to add some weightings on a search engine I am making for fun. This would be a separate function that I would call in Python (ideally) and pass in the paramaters of the URL and how long I wanted to get the average over, measured in days and then I could just use that information to weight my results!
I think it could be fun to work on, but I also feel that this may be easy to do with some trick of the APIs some guru might be able to show me and save me a few sleepless weeks! Can anyone help?
Thanks a lot ! | Possible to get alexa information or google page rankings over time? | 0.049958 | 0 | 1 | 3,663 |
19,218,011 | 2013-10-07T06:02:00.000 | 24 | 0 | 1 | 0 | python,django,virtualenv,pycharm | 20,163,768 | 2 | true | 1 | 0 | I've found the solution and asked support, who confirmed it:
Here are the steps:
copy a project to a local directory.
configure: tools - deployment, to upload this local copy to remote server
make deployment automatic: tools - deployment - "automatic upload"
add remote interpreter: file - settings - python interpreters - "+" - "Remote.."
The remote interpreter is the virtualenv interpreter with all packages installed.
Debug also works: we can debug a completely remote project on the server using local PyCharm. | 2 | 17 | 0 | I have a remote server with a few virtualenv environments (Django projects).
How can I open, develop and debug these projects completely remotely?
Shall I mount the remote directory via sshfs to open a project?
(I can't open a project any way other than as a local path)
I am working on debian and windows xp. | pycharm remote project with virtualenv | 1.2 | 0 | 0 | 14,227 |
19,218,011 | 2013-10-07T06:02:00.000 | -1 | 0 | 1 | 0 | python,django,virtualenv,pycharm | 19,218,082 | 2 | false | 1 | 0 | Debian:
From the file manager, click on Connect To server, connect to ssh by giving login credentials which will open your remote project on your file manager itself.
Or you can go to the server using ssh via terminal and edit your project via command line text editor.
IDE:
If you are working with an IDE such as Aptana or PyCharm, you can load the project from the server itself with login credentials. | 2 | 17 | 0 | I have a remote server with a few virtualenv environments (Django projects).
How can I open, develop and debug these projects completely remotely?
Shall I mount the remote directory via sshfs to open a project?
(I can't open a project any way other than as a local path)
I am working on debian and windows xp. | pycharm remote project with virtualenv | -0.099668 | 0 | 0 | 14,227 |
19,220,726 | 2013-10-07T09:03:00.000 | 2 | 0 | 0 | 1 | python,cad,opencascade | 19,221,047 | 1 | false | 0 | 0 | It looks like pythonOCC does not currently support .3dm files; you can either output from Rhino in another format (sub-optimal, as you note in your post), or find/write/sponsor the writing of a .3dm importer for pythonOCC. | 1 | 0 | 0 | In the pythonOCC example CADViewerMDI.py the CAD formats step, stp, iges, igs, and brep are supported.
Does pythonOCC support the ".3dm" format, and if so, how do I load it?
Suboptimal solution:
Change the format in rhino to one of the other formats. | pythonocc loading .3md format | 0.379949 | 0 | 0 | 182 |
19,221,300 | 2013-10-07T09:32:00.000 | 5 | 0 | 1 | 0 | python | 19,221,339 | 3 | true | 0 | 0 | There isn't one. Python strings store the length of the string independent from the string contents. | 1 | 0 | 0 | I am from a c background. started learning python few days ago. my question is what is the end of string notation in python. like we are having \0 in c. is there anything like that in python. | What is the end of string notation in python | 1.2 | 0 | 0 | 152 |
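A quick illustration of the answer's point: Python strings carry their length, so they can even contain NUL characters:

```python
s = "abc\0def"        # an embedded NUL is just another character
print(len(s))         # 7: length is stored, not found via a terminator
print(s[3] == "\0")   # True
```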
19,221,694 | 2013-10-07T09:52:00.000 | 6 | 0 | 0 | 0 | python,pandas | 19,221,774 | 2 | false | 0 | 0 | You get an out of memory error because you run out of memory, not because there is a limit on the number of columns. | 1 | 1 | 1 | Does anyone know the total number of columns allowed in pandas (Python)?
I have just created a pandas dataframe with more than 20,000 columns, but I got a memory error.
Thanks a lot | How many columns in pandas, python? | 1 | 0 | 0 | 2,587 |
19,221,937 | 2013-10-07T10:04:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,dependency-injection,dynamic-languages | 19,224,347 | 1 | false | 0 | 0 | The pythonic way is probably to pass the resource class as a parameter and rely on duck-typing... (you could probably create an abstract base class and multiply inherit it as a mixin to demonstrate that you know about interfaces, but if you want that kind of pain you probably shouldn't be coding in Python... ;-) | 1 | 0 | 0 | I have a wrapper around DB which provides some utility methods. An instance of DB wrapper is created once and accessible from a base class. I want to reuse this instance in a helper class, and was thinking about dependency injection due to my experience with OOP like C# or Java. However, with python I seem to lose intellisense support when I do this. I saw examples of dependency injection per function, however, this does not work for me, since a wrapper class has many different functions I want to use. What is the Python-ic way of achieving this? | Python-ic way to reuse instances in a way similar to dependency injection | 0 | 0 | 0 | 116 |
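A hedged sketch of what constructor injection with duck-typing can look like; all class and method names are invented for illustration:

```python
class DbWrapper:
    """Stands in for the real DB wrapper from the question."""
    def fetch_user(self, user_id):
        return {"id": user_id, "name": "alice"}

class ReportHelper:
    def __init__(self, db):
        # Any object with a fetch_user() method works: duck-typing
        # replaces the explicit interface you'd declare in C# or Java.
        self.db = db

    def build(self, user_id):
        return "report for %(name)s" % self.db.fetch_user(user_id)

shared_db = DbWrapper()           # created once, as in the question
helper = ReportHelper(shared_db)  # inject the existing instance
print(helper.build(7))
```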
19,223,411 | 2013-10-07T11:16:00.000 | 5 | 0 | 1 | 1 | python,linux,macos,cython | 19,223,474 | 1 | false | 0 | 0 | You can't. You'll have to compile a different library for each platform you need to support. | 1 | 2 | 0 | I compiled a module.pyx file to module.so under Mac OS X, and now I can use it with:
from module import method1
However, the same .so file won't work on Linux, I have to compile a new module.so under Linux.
So the problem is: how can I write a cross-platform (Mac and Linux) module with Cython? | How to load Cython compiled .so file on both Mac OS X and Linux? | 0.761594 | 0 | 0 | 1,310 |
19,225,188 | 2013-10-07T12:39:00.000 | 0 | 0 | 1 | 0 | python,cython | 19,225,688 | 2 | false | 0 | 0 | Not too sure how you will make it Python-compatible, but gcc #defines __FILE__ as the name of the file that the code is in. | 1 | 17 | 0 | I convert my Python code to C with Cython, and after that compile the .c file and use the .so in my project.
My problem:
I use __file__ in my Python code; compiling with gcc raises no error, but when I run the program and import the .so from other Python files, an error is raised at the __file__ line.
How can I solve this problem? Is there any method I can use instead of __file__? | What method can I use instead of __file__ in python? | 0 | 0 | 0 | 3,116 |
19,227,318 | 2013-10-07T14:22:00.000 | 0 | 0 | 1 | 0 | c++,python,qt | 19,238,373 | 2 | false | 0 | 1 | There is no checklist beyond proper C++ design. A QThread doesn't, unfortunately, offer any sane default destruction behaviors. In C++ land, that's nominally a no-no. You need some QObject that owns your threads and, before vanishing itself, takes care to either quit or terminate them, followed by waiting on them before they get destroyed. Same goes for all the classes that you wrote yourself: they must act properly when destructed. Qt generally acts appropriately when the instances of its various classes are deleted, QThread is really a standout.
Once you follow the base tenet of C++ design, namely that objects release resources upon destruction, you won't have any problems. Use smart pointers, including C++11 if available. QPointer, QSharedPointer, QScopedPointer are all your friends. | 1 | 0 | 0 | I have always problems to close my Qt applications properly. What should one take care of when quitting from a Qt::Application? I want to compile a check-list that I can follow to exit all parts properly, depending what functionalities the program uses. For example, if I use QThreads, what needs to done to make sure they are shut down properly, and so on with all other parts that might need special care.
I hope I am not the only one having such problems, and that this checklist turns out to be useful for many. | Qt: Quit application -> checklist for proper clean up | 0 | 0 | 0 | 600 |
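A hedged PyQt5-flavoured sketch of the thread-ownership rule the answer describes (the same quit-then-wait idea applies in C++):

```python
from PyQt5.QtCore import QObject, QThread

class Owner(QObject):
    """Owns a worker thread and shuts it down before it is destroyed."""
    def __init__(self):
        super().__init__()
        self.thread = QThread(self)
        self.thread.start()

    def shutdown(self):
        # Ask the thread's event loop to exit, then block until it does;
        # only after wait() returns is it safe to let the QThread die.
        self.thread.quit()
        self.thread.wait()
```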
19,228,271 | 2013-10-07T15:04:00.000 | 6 | 0 | 1 | 1 | python,powershell,windows-7 | 19,229,832 | 2 | true | 0 | 0 | If it's intermittent with all other factors being unchanged, it sounds like you've inadvertently selected some text in the PowerShell console and it's halted updating output so that you can do something with it.
Next time, be careful to look and see if you have something selected before clicking. | 1 | 6 | 0 | Sometimes I look back at my terminal while a Python script is running and the console output has frozen; then I right-click on the terminal and the console output (printing to screen) begins again.
It's a bit disconcerting because sometimes I think my script has broken.
Do others also experience this? Anybody know a fix?
Thanks in advance for any responses | Why does powershell freeze for a bit when running my python scripts | 1.2 | 0 | 0 | 1,655 |
19,228,380 | 2013-10-07T15:10:00.000 | 0 | 0 | 1 | 0 | python,macos,port,fink | 19,228,974 | 1 | false | 0 | 0 | In terms of how your Python interpreter works, no: there is no negative effect on having Fink Python as well as MacPorts Python installed on the same machine, just as there is no effect from having multiple installations of Python by anything. | 1 | 0 | 1 | I have a mac server and I have both FINK and macport installation of python/numpy/scipy
I was wondering if having both will affect the other, in terms of memory leaks or unusual results.
In case you are wondering why both: well, I like Fink, but MacPorts allows me to have Python 2.4, which Fink does not provide (yes, I needed an old version for a piece of code I have).
I wonder this since I tried to use Homebrew once and it complained about the machine having MacPorts and Fink (I did not realize that MacPorts provided Python 2.4, so I was looking at Homebrew, but when I realized MacPorts did give 2.4 I abandoned it) | macport and FINK | 0 | 0 | 0 | 421 |
19,228,979 | 2013-10-07T15:41:00.000 | 1 | 0 | 1 | 0 | python,algorithm,python-2.7,counting | 19,229,466 | 10 | false | 0 | 0 | You can define two boolean vars hasZero and hasOne and set them to True if corresponding value was met while iterating the list. Then b = 2 if hasZero and hasOne, b = 1 if only hasOne and b = 0 if only hasZero.
Another way: you can sum all the a values along the list. If sumA == len(list) then b = 1, if sumA == 0 then b = 0 and if 0 < sumA < len(list) then b = 2. | 1 | 6 | 0 | This may be a trivial problem, but I want to learn more about other more clever and efficient ways of solving it.
I have a list of items and each item has a property a whose value is binary.
If every item in the list has a == 0, then I set a separate variable b = 0.
If every item in the list has a == 1, then I set b = 1.
If there is a mixture of a == 0 and a == 1 in the list, then I set
b = 2.
I can use a set to keep track of the types of a value, such that if there are two items in the set after iterating through the list, then I can set b = 2, whereas if there is only one item in the set I just retrieve the item (either 0 or 1) and use it to set b.
Any better way? | Efficient way of counting True and False | 0.019997 | 0 | 0 | 808 |
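Both suggestions translate directly to code; a sketch of each, assuming list items expose the boolean-ish attribute a from the question:

```python
from collections import namedtuple
Item = namedtuple("Item", "a")

def classify(items):
    # Flag-based version: one pass, early exit once both values are seen
    has_zero = has_one = False
    for item in items:
        if item.a:
            has_one = True
        else:
            has_zero = True
        if has_zero and has_one:
            return 2
    return 1 if has_one else 0

def classify_by_sum(items):
    # Sum-based version from the second paragraph of the answer
    total = sum(item.a for item in items)
    if total == len(items):
        return 1
    return 0 if total == 0 else 2

print(classify([Item(0), Item(1), Item(0)]))  # 2
print(classify_by_sum([Item(1), Item(1)]))    # 1
```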
19,229,316 | 2013-10-07T15:57:00.000 | 2 | 0 | 0 | 0 | python,django | 19,229,607 | 2 | true | 1 | 0 | When accessing a setting defined by your app in your apps' defaults, I'd suggest if settings.name: - the default is defined by you already. On the other hand, when writing an external app - I'd suggest if hasattr(settings, 'name'): - you can't expect your apps users will define all the defaults. | 1 | 0 | 0 | Please chime in if there are other methods but the ones I see most often are these two and they both exist in the django source.
if settings.DEBUG:
and
if hasattr(settings, 'POSTGIS_VERSION'):
The latter has the advantage of having a default, but in cases where a default would be an error (something is missing), is it better to use the first version? What's the rule on defaults: when should you put one in the settings.py file and when should you include it directly in the source like above? | What's the preferred method for accessing settings information in Django? | 1.2 | 0 | 0 | 76 |
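A third idiom worth knowing alongside the two above, a hedged one-liner using getattr with a default:

```python
from django.conf import settings

# Falls back to None only when the project hasn't defined the setting;
# use plain settings.NAME when a missing setting should be a hard error.
postgis_version = getattr(settings, "POSTGIS_VERSION", None)
```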
19,229,841 | 2013-10-07T16:22:00.000 | 0 | 1 | 1 | 0 | python,mercurial | 19,232,319 | 2 | false | 0 | 0 | Identifying the "current method" obviously depends on the file's language, so you're not going to find a ready solution in the mercurial commandset. But it's not too hard to scan a python file manually, and track the current class and method (as long as the code doesn't play games with the syntax). You did say you don't need it to be bullet-proof, right?
If one of the changesets being compared is an ancestor of the other, you should get pretty good mileage out of hg annotate (a.k.a. hg blame), which tells you when each line in your file was last touched. You can then scan files for recent changes, while at the same time keeping track of the current class and method or function.
If the changesets have a more complex relationship, you may have to do some work: Run a diff between the two versions, and parse the diff output for a list of file-line pairs that have changed; then scan the source files to figure out the class and method that contains each change. (Alternately, you could pre-process the source files to build an index, then review the diffs). | 1 | 0 | 0 | I would like a way to list all of the python methods which were touched between two mercurial changesets. Is there a tool available which will easily do this?
Clarification based on comment:
I am not looking for something 100% comprehensive. If the tool could identify each line changed in the diff, then which method/class it falls within, that would be great. | How can I identify the methods touched in HG changset? | 0 | 0 | 0 | 59 |
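A rough sketch of the diff-parsing half of that approach; the regex handling is simplistic and the exact hg invocation is my assumption. Mapping the collected line numbers onto enclosing def/class statements is the remaining scan the answer describes:

```python
import re
import subprocess

def touched_lines(rev_a, rev_b):
    """Map each changed file to the new-file line numbers where hunks start."""
    diff = subprocess.check_output(["hg", "diff", "-r", rev_a, "-r", rev_b])
    changed, current = {}, None
    for line in diff.decode("utf8", "replace").splitlines():
        if line.startswith("+++ "):
            current = line.split()[1]          # e.g. b/pkg/mod.py
            changed[current] = []
        elif line.startswith("@@") and current:
            m = re.search(r"\+(\d+)", line)    # hunk header: @@ -a,b +c,d @@
            if m:
                changed[current].append(int(m.group(1)))
    return changed
```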
19,231,425 | 2013-10-07T17:53:00.000 | 2 | 0 | 1 | 0 | permission-denied,ipython-notebook,windows-firewall,spyder,pyzmq | 34,738,396 | 1 | true | 0 | 0 | I realise this is two years old, but I've just been able to sort out a similar problem, so it would have been good to see more answers to this.
In my case it wasn't the Windows Firewall or virus scanner, but that my employer's IT services group runs VNC over the same port that is hardcoded into 0MQ, as used by iPython in Anaconda.
Enough people complained that IT provided a script to move the VNC port for affected people, while also logging the change in their own records, so everybody is happy.
Try doing a "netstat -ab" from the command line, and check if anything is listening on port 5905, which iPython needs (at least as it is in early 2016, with Anaconda). You'll need to use "Run As Administrator" with cmd. If you can't do that just use "netstat -a". The difference is the b option will also list the process that has taken the port, and in our case we see vncserve.exe there. But -a is enough to see listening ports. | 1 | 3 | 0 | I'm encountering a problem starting an ipython notebook or an ipython console in spyder that results in the error message "Assertion failed: Permission denied (......\src\err.cpp:247) and (in ipython notebook) the kernel endlessly restarts.
I'm using Anaconda installation of python on Windows 7, and have the same problems with both ipython 1.1 and ipython 1.0. I did not have this problem when I ran ipython versions before 1.0, before I switched to Anaconda.
A google search finds another instance of this problem, which suggests that its due to interactions with PyZMQ and a firewall. I've tried adding specific exceptions for python and ipython to my firewall, and turning the firewall off completely, with no change. I can run ipython in command line, but neither the notebook or the console in spyder work (giving the error above.)
Any information about this would be helpful. I couldn't find any file err.cpp in any folder \src\ in my python installation, so I can't confirm what triggers the error has any relation to PyZMQ or firewalls. No change is made when turning off the firewall or elevating the command prompt. What else can I try? | IPython Permission Denied | 1.2 | 0 | 0 | 7,350 |
19,231,985 | 2013-10-07T18:29:00.000 | 0 | 0 | 0 | 0 | javascript,python,html | 19,233,083 | 2 | false | 1 | 0 | Keep track of the url of each page you scraped. One way would be to save it with the full URL as a filename. Then, you can resolve relative urls as per the HTML spec. | 1 | 1 | 0 | What are some methods to make relative urls absolute in scraped content so that the scraped html appears like the original and css are not broken?
I found out the <base> tag may help. But how can I find out what the original base of the URL is?
I don't care about interactions with the links, but do want them to appear correct.
Assume a site 'example.com/blog/new/i.html' that I scrape, which has 2 resources:
< link src="/style/style.css" >
< link src="newstyle.css" >.
Now if I set base to 'example.com/blog/new/i.html', won't the first one break? | How best to handle relative urls in scraped content? | 0 | 0 | 1 | 358 |
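The spec-compliant resolution the answer alludes to lives in the standard library; applied to the question's own example (scheme added for illustration):

```python
from urllib.parse import urljoin  # urlparse.urljoin on Python 2

page = "http://example.com/blog/new/i.html"  # URL the page was scraped from
print(urljoin(page, "/style/style.css"))  # http://example.com/style/style.css
print(urljoin(page, "newstyle.css"))      # http://example.com/blog/new/newstyle.css
```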
19,233,132 | 2013-10-07T19:37:00.000 | 11 | 0 | 0 | 1 | python,multithreading,process,load-balancing,uwsgi | 19,326,373 | 2 | true | 1 | 0 | So, the solution is:
Upgrade uWSGI to recent stable version (as roberto suggested).
Use --thunder-lock option.
Now I'm running with 50 threads per process and all requests are distributed between processes equally. | 3 | 14 | 0 | I've installed Nginx + uWSGI + Django on a VDS with 3 CPU cores. uWSGI is configured for 6 processes and 5 threads per process. Now I want to tell uWSGI to use processes for load balancing until all processes are busy, and then to use threads if needed. It seems uWSGI prefers threads, and I have not found any config option to change this behaviour. The first process takes over 100% CPU time, the second one takes about 20%, and the other processes are mostly unused.
Our site receives 40 r/s. Actually, even having 3 processes without threads is usually enough to handle all requests. But request processing hangs from time to time for various reasons, like locked shared resources, etc. In such cases we effectively have one process fewer. Users don't like to wait and click the link again and again. As a result all processes hang and all users have to wait.
I'd add even more threads to make the server more robust. But the problem is probably the Python GIL. Threads won't use all CPU cores. So multiple processes work much better for load balancing. But threads may help a lot in case of locked shared resources and I/O wait delays. A process may do much work while one of its threads is locked.
I don't want to decrease time limits unless there is no other solution. It is possible to solve this problem with threads in theory, and I don't want to show error messages to users or make them wait on every request unless there is no other choice. | How to tell uWSGI to prefer processes to threads for load balancing | 1.2 | 0 | 0 | 9,291 |
19,233,132 | 2013-10-07T19:37:00.000 | 9 | 0 | 0 | 1 | python,multithreading,process,load-balancing,uwsgi | 19,238,645 | 2 | false | 1 | 0 | Every process is effectively a thread, as threads are execution contexts of the same process.
For that reason there is nothing like "a process executes it instead of a thread". Even without threads your process has one execution context (a thread). What I would investigate is why you get (perceived) poor performance when using multiple threads per process. Are you sure you are using a stable (with solid threading support) uWSGI release? (1.4.x or 1.9.x)
Have you thought about dynamically spawning more processes when the server is overloaded? Check the uWSGI cheaper modes; there are various algorithms available. Maybe one will fit your situation.
The GIL is not a problem for you, as from what you describe the problem is the lack of threads for managing new requests (even if your numbers suggest you may have too much lock contention on something else). | 3 | 14 | 0 | I've installed Nginx + uWSGI + Django on a VDS with 3 CPU cores. uWSGI is configured for 6 processes and 5 threads per process. Now I want to tell uWSGI to use processes for load balancing until all processes are busy, and then to use threads if needed. It seems uWSGI prefers threads, and I have not found any config option to change this behaviour. The first process takes over 100% CPU time, the second one takes about 20%, and the other processes are mostly unused.
Our site receives 40 r/s. Actually, even having 3 processes without threads is usually enough to handle all requests. But request processing hangs from time to time for various reasons, like locked shared resources, etc. In such cases we effectively have one process fewer. Users don't like to wait and click the link again and again. As a result all processes hang and all users have to wait.
I'd add even more threads to make the server more robust. But the problem is probably the Python GIL. Threads won't use all CPU cores. So multiple processes work much better for load balancing. But threads may help a lot in case of locked shared resources and I/O wait delays. A process may do much work while one of its threads is locked.
I don't want to decrease time limits unless there is no other solution. It is possible to solve this problem with threads in theory, and I don't want to show error messages to users or make them wait on every request unless there is no other choice. | How to tell uWSGI to prefer processes to threads for load balancing | 1 | 0 | 0 | 9,291 |
19,234,950 | 2013-10-07T21:17:00.000 | 0 | 0 | 1 | 0 | python,list,matrix,linked-list | 19,235,077 | 2 | false | 0 | 0 | There's more than one way to interpret this, but one option is:
Have a single "head" node at the top-left corner and a "tail" node at the bottom-right. There will then be row-head, row-tail, column-head, and column-tail nodes, but these are all accessible from the overall head and tail, so you don't need to keep track of them, and they're already part of the linked matrix, so they don't need to be part of a separate linked list.
(Of course a function that builds up an RxC matrix of zeroes will probably have local variables representing the current row's head/tail, but that's not a problem.) | 2 | 0 | 1 | I know in a linked list there are a head node and a tail node. Well, for my data structures assignment, we are suppose to create a linked matrix with references to a north, south, east, and west node. I am at a loss of how to implement this. A persistent problem that bothers me is the head node and tail node. The user inputs the number of rows and the number of columns. Should I have multiple head nodes then at the beginning of each row and multiple tail nodes at the end of each row? If so, should I store the multiple head/tail nodes in a list?
Thank you. | Linked Matrix Implementation in Python? | 0 | 0 | 0 | 1,333 |
19,234,950 | 2013-10-07T21:17:00.000 | 0 | 0 | 1 | 0 | python,list,matrix,linked-list | 19,237,061 | 2 | false | 0 | 0 | It really depends on what options you want/need to efficiently support.
For instance, a singly linked list with only a head pointer can be a stack (insert and remove at the head). If you add a tail pointer you can insert at either end, but only remove at the head (stack or queue). A doubly linked list can support insertion or deletion at either end (deque). If you try to implement an operation that your data structure is not designed for you incur an O(N) penalty.
So I would start with a single pointer to the (0,0) element and then start working on the operations your instructor asks for. You may find you need additional pointers, you may not. My guess would be that you will be fine with a single head pointer. | 2 | 0 | 1 | I know in a linked list there are a head node and a tail node. Well, for my data structures assignment, we are suppose to create a linked matrix with references to a north, south, east, and west node. I am at a loss of how to implement this. A persistent problem that bothers me is the head node and tail node. The user inputs the number of rows and the number of columns. Should I have multiple head nodes then at the beginning of each row and multiple tail nodes at the end of each row? If so, should I store the multiple head/tail nodes in a list?
Thank you. | Linked Matrix Implementation in Python? | 0 | 0 | 0 | 1,333 |
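A hedged sketch of the first answer's single-head layout; the Node class and the temporary 2-D list inside the builder are my own choices:

```python
class Node:
    def __init__(self, value=0):
        self.value = value
        self.north = self.south = self.east = self.west = None

def build_matrix(rows, cols):
    """Return the head (top-left) node of a rows x cols linked matrix of zeros."""
    grid = [[Node() for _ in range(cols)] for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            node = grid[r][c]
            # Wire the four directional links; edge nodes keep None
            node.north = grid[r - 1][c] if r > 0 else None
            node.south = grid[r + 1][c] if r < rows - 1 else None
            node.west = grid[r][c - 1] if c > 0 else None
            node.east = grid[r][c + 1] if c < cols - 1 else None
    return grid[0][0]
```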
19,236,154 | 2013-10-07T22:49:00.000 | 2 | 0 | 1 | 0 | python | 19,236,191 | 2 | false | 0 | 0 | In terms of asymptotic complexity, that's actually the best you can do. You know the front item is the maximal element, and the runner-up is one of its children. But the other child of the root node might be only the 100th biggest, with the higher 98 in the other half of the tree.
Of course, once you've pulled off your X items, you don't need to re-heapify them -- they'll already be sorted, and hence a well-formed binary heap of their own. | 1 | 0 | 0 | What would be the fastest way to get the top X items of a heap, as a heap still?
I would figure there is a better way than rebuilding a heap by popping the heap X times. | Python: Trim heapq heap so it is only X items long | 0.197375 | 0 | 0 | 109 |
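The pop-X-times approach in code. Note that heapq is a min-heap, so "top" here means smallest (negate keys for max-heap behaviour); as the answer notes, the popped items come out sorted, and a sorted list is itself a valid heap:

```python
import heapq

def top_x_as_heap(heap, x):
    # Destructively pop the x smallest items; the result is sorted,
    # and a sorted list already satisfies the heap invariant.
    return [heapq.heappop(heap) for _ in range(min(x, len(heap)))]

h = [5, 1, 8, 3, 9, 2]
heapq.heapify(h)
print(top_x_as_heap(h, 3))  # [1, 2, 3]
```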
19,238,296 | 2013-10-08T02:53:00.000 | 0 | 0 | 1 | 0 | python,iphone,ios,itunes,itunes-sdk | 20,518,264 | 1 | false | 0 | 0 | Apple does not offer any APIs to achieve this. | 1 | 0 | 0 | I've been using Python to script Win32 iTunes, and it's been rocky but doable. However, I wanted to move beyond just media (songs, etc.) to analyze what apps were on my devices. Can anyone recommend how to use the iTunes Win32 COM interface to, say, get a list of apps that are currently on the phone?
I thought the app list might be exposed as a playlist, with each app as an IITFileOrCDTrack, but that doesn't seem to be the case. When I look at my phone as a source, it just lists media playlists (books, movies, etc.)
Or, if you can suggest a different way to do this from Python, open to suggestions. I assumed I'd have to use iTunes as my phone is not jailbroken and I don't know any other way to see what's on the phone, but if there is another way, cool. I don't need to add or remove, just want to see what's there.
Thanks for any ideas... | Access iOS Apps List in iTunes via Win32 COM? | 0 | 0 | 0 | 133 |
19,240,218 | 2013-10-08T06:01:00.000 | 1 | 0 | 0 | 1 | google-app-engine,python-2.7,webapp2 | 19,242,752 | 2 | false | 1 | 0 | No, that doesn't affect the speed. Your code needs to be loaded anyway, so it makes no difference if it's all in one file or not. It will of course make the file more complex, but that's your problem, not GAE's. | 2 | 0 | 0 | I'm building a web application using GAE.
I've been doing some research on my own into GAE Python project structures,
and found out that there isn't a set trend on how to place my handlers within the project.
As of now, I'm putting all the handlers (controllers) in main.py,
and making all URLs (/.*) be directed to main.application.
Is this going to make my application slower?
Thank You! | GAE: is putting all handlers in main.py gonna make my app slow? | 0.099668 | 0 | 0 | 154 |
19,240,218 | 2013-10-08T06:01:00.000 | 1 | 0 | 0 | 1 | google-app-engine,python-2.7,webapp2 | 19,253,273 | 2 | true | 1 | 0 | In general, this will not make your application slower; however, it can potentially slow down your instance start-up time, though that generally isn't a problem unless you have very large, complicated apps.
The instance start-up time comes into play whenever GAE spins up a new instance for you. For example, if your app is unused for a long period and you start it up once in a long while, or, for example, if your app is very busy and needs a new instance to handle the load.
python loads your modules as needed. So if you launch an instance, and the request goes to main.py, then main.py and all the modules associated with it will get loaded. If your app is large, this may take a few seconds. Let's just say for example it takes 6 seconds to load every module in your app. That's a 6 second wait for whoever is issuing that request. Subsequent requests to that loaded instance will be quick.
It's possible to break down your handlers to separate modules. If handler for \a requires very little code, then having \a in a separate file will reduce the response time for \a. But when you load \b that has all the rest of the code, that would take a while to load. So it's possible to take that 6 second load and potentially break it up into a few requests that may take 2 seconds.
This type of optimization really depends on the libraries you need to load with each request. You generally want to do this later on, when you run into the problems, rather than design your layout for this purpose up front, since it's pretty difficult to predict.
App Engine warmup requests also help alleviate this problem. | 2 | 0 | 0 | I'm building a web application using GAE.
I've been doing some research on my own into GAE Python project structures,
and found out that there isn't a set trend on how to place my handlers within the project.
As of now, I'm putting all the handlers (controllers) in main.py,
and making all URLs (/.*) be directed to main.application.
Is this going to make my application slower?
Thank You! | GAE: is putting all handlers in main.py gonna make my app slow? | 1.2 | 0 | 0 | 154 |
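One hedged way to act on the split-into-modules advice in the second answer: webapp2 accepts dotted-string handler names, deferring each module's import until its route is first hit (the module names below are hypothetical):

```python
import webapp2

# Handlers named by string are imported lazily, on first request to the route,
# so a heavy module doesn't inflate start-up time for unrelated routes.
application = webapp2.WSGIApplication([
    webapp2.Route("/a", handler="handlers_light.AHandler"),
    webapp2.Route("/b", handler="handlers_heavy.BHandler"),
])
```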
19,242,443 | 2013-10-08T08:12:00.000 | 2 | 0 | 0 | 0 | python,django,authentication | 19,242,718 | 1 | true | 1 | 0 | It depends what you mean, really. Although the permissions are part of the auth app, there's no requirement to actually assign any, or check them at any point: that's entirely up to you. Most of the projects I have done use auth in exactly this way, to check logins and nothing else. | 1 | 0 | 0 | Is it possible to use the built-in django.contrib.auth mechanism without enabling permissions at all?
I just need a simple registration and login system.
Thanks. | Django contrib.auth without permissions | 1.2 | 0 | 0 | 71 |
19,245,329 | 2013-10-08T10:33:00.000 | 3 | 0 | 1 | 1 | python,windows,registry,cx-freeze | 19,245,809 | 1 | true | 0 | 0 | I just resolved it but not sure if this is the right way to go about it. Here's what I did,
In the cmd box, type regedit and then click OK; you will get the Registry Editor.
Right-click on the key name HKEY_LOCAL_MACHINE and search for the wrong path name that kept showing up. In a few seconds it will take you to the location of that path in the registry. I did see two mentions of Python. It wasn't hard to figure out the wrong one (the incorrect path), and I deleted it without any side effects.
Immediately after this I was able to install the modules perfectly. | 1 | 2 | 0 | I was trying to install the cx_Freeze module and it gave me the "could not locate network location" error along with a non-existent path name (supposedly pointing to Python, but it wasn't). Then I tried installing another module, py2exe; this time the installer was a bit more user friendly and informed me that I had two mentions of Python in my registry, one pointing to the correct Python directory, the other pointing to the same wrong one.
My question is: is it possible to delete the wrong mention of Python from my registry, or is there another way around it? I wanted to install cx_Freeze. Thanks | How to remove multiple versions of Python from the Windows registry | 1.2 | 0 | 0 | 7,170 |
19,255,537 | 2013-10-08T18:31:00.000 | 0 | 0 | 1 | 0 | python,sorting,python-2.7,dictionary,calendar | 19,269,052 | 3 | true | 0 | 0 | After a lot of trying I changed my approach: instead of tuples I used datetime objects and then applied:
months_sorted = sorted(months.iteritems(), key=operator.itemgetter(0)) | 1 | 0 | 0 | I have a dictionary with entries that are of the format
a = dict({(2,2013):[], (2,2011):[], (7,2013):[] , (4,2013):[]})
I want my output to be like this:
{(2,2011):[], (2,2013):[], (4,2013):[] , (7,2013):[]}
By the way, it's supposed to be (month, year); how can I achieve that? | sorting dates in key of tuples in dictionary | 1.2 | 0 | 0 | 78 |
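For the record, the tuple keys as given already sort into the desired order with plain sorted(), since tuples compare element-wise (month first, then year):

```python
a = {(2, 2013): [], (2, 2011): [], (7, 2013): [], (4, 2013): []}
# Tuples sort lexicographically; for chronological order you'd instead
# use key=lambda kv: (kv[0][1], kv[0][0]) to compare (year, month).
print(sorted(a.items()))
# [((2, 2011), []), ((2, 2013), []), ((4, 2013), []), ((7, 2013), [])]
```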
19,256,594 | 2013-10-08T19:28:00.000 | 1 | 0 | 0 | 1 | python,centos,web.py | 19,256,882 | 2 | true | 1 | 0 | In general, there are two parts of this.
The "remote and event-based" part: a service used remotely over a network needs a certain set of skills: to be able to accept (multiple) connections, read requests, process them, reply, speak at least basic TCP/HTTP, handle dead connections, and, if it's exposed beyond a small private LAN, to be robust (think DoS) and maybe also perform some kind of authentication.
If your script is willing to take care of all of this, then it's ready to open its own port and listen. I'm not sure if web.py provides all of these facilities.
Then there's the other part, "daemonization", when you want to run the server unattended: running at boot, running under the right user, not blocking your parent (ssh, init script or whatever), not having ttys open but maybe logging somewhere...
Servers like nginx and Apache are built for this, and provide interfaces like mod_python or WSGI, so that much simpler applications can give up as much of the above as possible.
So the answer would be: yes, you still need Nginx or the likes, unless:
you can implement it yourself in Python,
or you are using the script on localhost only and are willing to take some risks of instability.
Then you can probably do it on your own. | 1 | 0 | 0 | I've used web.py to create a web service that returns results in JSON.
I run it on my local box as python scriptname.py 8888
However, I now want to run it on a linux box.
How can I run it as a service on the linux box?
update
After the answers it seems like the question isn't right. I am aware of the deployment process, frameworks, and the webserver. Maybe the following back story will help:
I had a small Python script that takes a file as input and, based on some logic, splits that file up. I wanted to use this script with a web front end I already have in place (Grails). I wanted to call it from the Grails application but did not want to do it by executing a command line, so I wrapped the Python script as a web service which takes in two parameters and returns, in JSON, the number of split files. This web service will ONLY be used by my Grails front end and nothing else.
So, I simply wish to run this little web.py service so that it can respond to my grails front end.
Please correct me if I'm wrong, but would I still need nginx and the like after the above? | Running web.py as a service on linux | 1.2 | 0 | 0 | 855 |
19,256,629 | 2013-10-08T19:30:00.000 | 0 | 1 | 1 | 0 | python,command,pytest | 19,256,758 | 1 | false | 0 | 0 | I can't check this, but what I'd do first is check PATH for the pytest executable. I'd expect a Windows batch script, and continue investigating in the code; maybe that's where the args are lost or passed (quoted?) incorrectly. | 1 | 0 | 0 | I have version 2.3.3 of pytest running on Windows. I have a test folder which contains a bunch of test files like test1.py, test2.py, test3.py etc. If I open a command prompt and navigate to this folder to run a particular test
pytest test1.py
Instead of just running test1.py, it is running all the tests in the folder. Like test1.py, test2.py, test3.py etc.
So pytest is not taking arguments and parsing them. I am seeing this only on windows. Does anyone know what is happening here?
Thanks a bunch in advance. | pytest is not parsing command line arguments on windows | 0 | 0 | 0 | 552 |
19,256,930 | 2013-10-08T19:46:00.000 | 11 | 0 | 0 | 0 | python,time-series | 43,874,003 | 5 | false | 0 | 0 | The solutions given are good for series that aren't trending up or down (i.e., stationary). In financial time series (or any other series with a bias) the formula given is not right: the series should first be detrended, or a scaling based on the latest 100-200 samples should be performed.
And if the time series doesn't come from a normal distribution (as is the case in finance), it is advisable to apply a non-linear function (a standard CDF function, for example) to compress the outliers.
Aronson and Masters' book (Statistically Sound Machine Learning for Algorithmic Trading) uses the following formula (on 200-day chunks):
V = 100 * N(0.5 * (X - F50) / (F75 - F25)) - 50
Where:
X : data point
F50 : mean of the latest 200 points
F75 : percentile 75
F25 : Percentile 25
N : normal CDF | 2 | 9 | 1 | I have a dataset of time-series examples. I want to calculate the similarity between various time-series examples, however I do not want to take into account differences due to scaling (i.e. I want to look at similarities in the shape of the time-series, not their absolute value). So, to this end, I need a way of normalizing the data. That is, making all of the time-series examples fall between a certain region e.g [0,100]. Can anyone tell me how this can be done in python | Python - how to normalize time-series data | 1 | 0 | 0 | 20,885 |
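A hedged NumPy/SciPy rendering of that formula; the windowing is left to the caller, and F50 is taken as the window mean exactly as defined above:

```python
import numpy as np
from scipy.stats import norm

def normalize_window(x):
    # V = 100 * N(0.5 * (X - F50) / (F75 - F25)) - 50, per the answer above
    x = np.asarray(x, dtype=float)
    f25, f75 = np.percentile(x, [25, 75])
    f50 = x.mean()                       # F50: mean of the window
    z = 0.5 * (x - f50) / (f75 - f25)    # robust, interquartile-based scaling
    return 100.0 * norm.cdf(z) - 50.0    # CDF squashes outliers into [-50, 50]

print(normalize_window([1, 2, 3, 4, 100]))  # the outlier 100 is compressed
```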
19,256,930 | 2013-10-08T19:46:00.000 | 0 | 0 | 0 | 0 | python,time-series | 21,486,466 | 5 | false | 0 | 0 | I'm not going to give the Python code, but the definition of normalizing is that for every value (datapoint) you calculate "(value-mean)/stdev". Your values will not fall between 0 and 1 (or 0 and 100), but I don't think that's what you want. You want to compare the variation, which is what you are left with if you do this. | 2 | 9 | 1 | I have a dataset of time-series examples. I want to calculate the similarity between various time-series examples, however I do not want to take into account differences due to scaling (i.e. I want to look at similarities in the shape of the time-series, not their absolute value). So, to this end, I need a way of normalizing the data. That is, making all of the time-series examples fall between a certain region e.g [0,100]. Can anyone tell me how this can be done in python | Python - how to normalize time-series data | 0 | 0 | 0 | 20,885 |
19,259,809 | 2013-10-08T22:41:00.000 | 2 | 1 | 1 | 0 | c#,ironpython | 19,260,358 | 1 | true | 0 | 0 | Use the [PythonHidden] attribute on methods you do not want to expose.
IronPython will always make calls based on the original object, not the interface type. Creating a wrapper class which maintains a reference to your interface implementor, forwarding the calls as required, is also a good approach. | 1 | 0 | 0 | I have a C# class that implements an interface, and I also have some more public methods on this class; what I want is to expose to Python code only the methods belonging to this interface and not the whole object.
Is there a simple way to do this without creating a new class? | How to expose only the interface implementation to IronPython | 1.2 | 0 | 0 | 198 |
19,262,267 | 2013-10-09T02:55:00.000 | 0 | 0 | 0 | 0 | python,security,session,cookies,browser | 19,262,430 | 1 | false | 1 | 0 | No. The server has no idea when a browser closes. Because the connection between the browser and the server is stateless, when a user closes a tab or shuts down the whole application, the server is unaware of it. It doesn't even destroy the session when you "manually close the browser or clean cookies". The Session does not expire until it times out.
Sessions can be destroyed programmatically (I suspect; I don't use Python). For example, when a user clicks the "Log Out" button you should be destroying their session programmatically, but if they just close the tab... you can't.
Using session cookies and having relatively short session timeouts is what you should be doing. Session cookies will be orphaned by the browser when the user closes a tab or the app, so even if they open it right back up, they will need to reauthenticate. And having a short session timeout means that their sessions will not be sitting idle, taking up memory, and waiting to be hijacked on your server. | 1 | 0 | 0 | When we log in to Stack Overflow, a session is created between the browser and the server which only expires after we manually close the browser or clear cookies. But how can this be done programmatically on the CLIENT SYSTEM while all browser behavior acts normally? As if nothing happened, and just another login action is needed.
Ok! just curiosity :)
I don't know if this could possibly be done.
Any tips would be appreciated. Danke! | is there A chance to destroy a session via scripts(like python) before the IE/Chome exit? Not using browser options | 0 | 0 | 1 | 66 |
19,267,714 | 2013-10-09T09:15:00.000 | 0 | 0 | 0 | 1 | python,windows,path | 19,271,370 | 4 | false | 0 | 0 | You could also work with Windows path:
set path=C:\Python26;.;..;C:\windows;C:\windows\system32
prompt $ & start title Python26
Save this as Py26.bat and type Python in the screen that displays
set path=C:\Python33;.;..;C:\windows;C:\windows\system32
prompt $ & start title Python33
Save this as Py33.bat and type Python in the screen that displays | 1 | 0 | 0 | I have two versions of Python on Windows and want to use them through cmd. I tried making shortcuts of their python.exe files and renaming them to python26 and python33 (I also added their locations to PATH), but unfortunately this does not work. Calling python26 or python26.lnk results in "not recognized as an internal command".
Is there any other way to do it (like Linux virtualenv), or did I miss something in my idea? | How to make shortcut work from PATH | 0 | 0 | 0 | 2,981 |
19,267,886 | 2013-10-09T09:22:00.000 | 5 | 0 | 0 | 0 | python,django,multilingual | 20,371,763 | 1 | true | 1 | 0 | I found the solution: you have to manually add a folder in /django/conf/locale/ with the language extension you want. Actually, you can just copy-paste the en (English) folder and name it after the missing language (in my case mt).
In this folder you can also edit the file formats.py to localize the dates, numbers etc.
Restart django and your language will be natively supported. | 1 | 2 | 0 | I'm translating my website into the 24 languages of the European Union. These include the "Malti" language, that is not listed in django default supported languages.
I would like to know if there is a way to add a custom language to django so it can work with the native i18n url function.
Thanks! | adding a custom language to django | 1.2 | 0 | 0 | 1,607 |
19,269,005 | 2013-10-09T10:07:00.000 | 0 | 0 | 1 | 0 | python,pdf-generation | 19,269,175 | 1 | false | 0 | 0 | I'm not totally sure, but you could create a report with JasperReports and generate a PDF file from it afterwards. I believe Python can also work with JasperReports.
What do you think? Maybe it is too much work. | 1 | 0 | 0 | How can I convert multiple JPEG images into one PDF file with multiple pages on Windows?
Using the Image library I can convert every image to a single PDF, and I can merge those converted files into a single PDF file using pdfminer, but that is a two-step process.
I tried to download ImageMagick, but couldn't get a binary for Windows. Is it possible to achieve this using PIL? | Python : Convert multiple images as multiple pages in pdf for windows | 0 | 0 | 0 | 563 |
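For what it's worth, newer Pillow releases (which postdate this question) can do the whole job in one step; a hedged sketch with hypothetical file names:

```python
from PIL import Image

paths = ["page1.jpg", "page2.jpg", "page3.jpg"]  # hypothetical inputs
pages = [Image.open(p).convert("RGB") for p in paths]

# save_all with append_images writes one PDF containing every page
pages[0].save("out.pdf", save_all=True, append_images=pages[1:])
```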
19,273,500 | 2013-10-09T13:29:00.000 | 1 | 0 | 0 | 0 | python,django,mongodb | 19,279,589 | 1 | false | 1 | 0 | One way is with straight Django and raw SQL. But it will look ugly. If you spend the time going through the pain of getting GeoDjango up and running, you'll find it well worth the effort, as performing queries such as "find all locations within 5 miles" becomes very easy to implement. | 1 | 0 | 0 | I am working on a project that involves a spatial model, just 2-coordinate points with latitude and longitude - no areas, boundaries, lines etc. Can I get away just by using plain Django or should I use GeoDjango? Do I need GeoDjango to do spatial queries (find all locations within 5 miles etc.)? Also, should I consider using MongoDB for the data storage? The other models should be fairly standard: users, relationships, events... basically pretty structured things that would fit nicely in a standard DB? | GeoDjango or just plain Django? | 0.197375 | 0 | 0 | 144
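A hedged sketch of the kind of query GeoDjango enables once set up (the model and field names are hypothetical):

from django.contrib.gis.geos import Point
from django.contrib.gis.measure import D

here = Point(-73.98, 40.75)  # longitude, latitude
nearby = Place.objects.filter(point__distance_lte=(here, D(mi=5)))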
19,276,592 | 2013-10-09T15:42:00.000 | 3 | 0 | 1 | 0 | python,data-structures | 19,277,899 | 5 | false | 0 | 0 | The closest I can think of to a single structure with the properties you want is a splay tree (with your hash as the key).
By rotating recently-accessed (and hence updated) nodes to the root, you should end up with the least recently-accessed (and hence updated) data at the leaves or grouped in a right subtree.
Figuring out the details (and implementing them) is left as an exercise for the reader ...
Caveats:
worst case height - and therefore complexity - is linear. This shouldn't occur with a decent hash
any read-only operations (ie, lookups that don't update the timestamp) will disrupt the relationship between splay tree layout and timestamp
A simpler approach is to store an object containing (hash, timestamp, prev, next) in a regular dict, using prev and next to keep an up-to-date doubly-linked list. Then all you need alongside the dict are head and tail references.
Insert & update are still constant time (hash lookup + linked-list splice), and walking backwards from the tail of the list collecting the oldest hashes is linear. | 2 | 6 | 0 | I am looking for a good data structure to contain a list of tuples with (hash, timestamp) values. Basically, I want to use it in the following way:
Data comes in, check to see if it's already present in the data structure (hash equality, not timestamp).
If it is, update the timestamp to "now"
If not, add it to the set with timestamp "now"
Periodically, I wish to remove and return a list of tuples that are older than a specific timestamp (I need to update various other elements when they 'expire'). The timestamp does not have to be anything specific (it can be a Unix timestamp, a Python datetime object, or some other easy-to-compare hash/string).
I am using this to receive incoming data, update it if it's already present and purge data older than X seconds/minutes.
Multiple data structures can be a valid suggestion as well (I originally went with a priority queue + set, but a priority queue is less-than-optimal for constantly updating values).
Other approaches to achieve the same thing are welcome as well. The end goal is to track when elements are a) new to the system, b) exist in the system already and c) when they expire. | Ideal data structure with fast lookup, fast update and easy comparison/sorting | 0.119427 | 0 | 0 | 2,812 |
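A minimal sketch of the dict plus doubly-linked-list approach the answer describes (all names are illustrative, not from the answer):

import time

class _Node(object):
    __slots__ = ('key', 'ts', 'prev', 'next')

class RecencyTracker(object):
    def __init__(self):
        self._nodes = {}    # hash -> node, for O(1) lookup
        self._head = None   # oldest entry
        self._tail = None   # newest entry

    def _unlink(self, node):
        if node.prev:
            node.prev.next = node.next
        else:
            self._head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self._tail = node.prev

    def _append(self, node):
        node.prev, node.next = self._tail, None
        if self._tail:
            self._tail.next = node
        else:
            self._head = node
        self._tail = node

    def touch(self, key):
        """Insert key, or move an existing key to the 'newest' end."""
        node = self._nodes.get(key)
        if node is None:
            node = _Node()
            node.key = key
            self._nodes[key] = node
        else:
            self._unlink(node)
        node.ts = time.time()
        self._append(node)

    def purge(self, cutoff):
        """Remove and return all (key, timestamp) pairs older than cutoff."""
        expired = []
        while self._head and self._head.ts < cutoff:
            node = self._head
            self._unlink(node)
            del self._nodes[node.key]
            expired.append((node.key, node.ts))
        return expired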
19,276,592 | 2013-10-09T15:42:00.000 | 2 | 0 | 1 | 0 | python,data-structures | 19,278,584 | 5 | false | 0 | 0 | Unless I'm misreading your question, a plain old dict should be ideal for everything except the purging. Assuming you are trying to avoid having to inspect the entire dictionary during purging, I would suggest keeping around a second data structure to hold (timestamp, hash) pairs.
This supplemental data structure could either be a plain old list or a deque (from the collections module). Possibly the bisect module could be handy to keep the number of timestamp comparisons down to a minimum (as opposed to comparing all the timestamps until you reach the cut-off value), but since you'd still have to iterate sequentially over the items that need to be purged, ironing out the exact details of what would be quickest requires some testing.
Edit:
For Python 2.7 or 3.1+, you could also consider using OrderedDict (from the collections module). This is basically a dict with a supplementary order-preserving data structure built into the class, so you don't have to implement it yourself. The only hitch is that the only order it preserves is insertion order, so that for your purpose, instead of just reassigning an existing entry to a new timestamp, you'll need to remove it (with del) and then assign a fresh entry with the new timestamp. Still, it retains the O(1) lookup and saves you from having to maintain the list of (timestamp, hash) pairs yourself; when it comes time to purge, you can just iterate straight through the OrderedDict, deleting entries until you reach one with a timestamp that is later than your cut-off. | 2 | 6 | 0 | I am looking for a good data structure to contain a list of tuples with (hash, timestamp) values. Basically, I want to use it in the following way:
Data comes in, check to see if it's already present in the data structure (hash equality, not timestamp).
If it is, update the timestamp to "now"
If not, add it to the set with timestamp "now"
Periodically, I wish to remove and return a list of tuples that are older than a specific timestamp (I need to update various other elements when they 'expire'). The timestamp does not have to be anything specific (it can be a Unix timestamp, a Python datetime object, or some other easy-to-compare hash/string).
I am using this to receive incoming data, update it if it's already present and purge data older than X seconds/minutes.
Multiple data structures can be a valid suggestion as well (I originally went with a priority queue + set, but a priority queue is less-than-optimal for constantly updating values).
Other approaches to achieve the same thing are welcome as well. The end goal is to track when elements are a) new to the system, b) exist in the system already and c) when they expire. | Ideal data structure with fast lookup, fast update and easy comparison/sorting | 0.07983 | 0 | 0 | 2,812 |
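The OrderedDict idea from the edit above, sketched out (delete-then-reinsert stands in for reassignment, exactly as the answer notes):

import time
from collections import OrderedDict

class ExpiringSet(object):
    def __init__(self):
        self._items = OrderedDict()    # hash -> timestamp, oldest first

    def touch(self, h):
        self._items.pop(h, None)       # remove if present, so that...
        self._items[h] = time.time()   # ...reinsertion moves it to the end

    def purge_older_than(self, cutoff):
        expired = []
        while self._items:
            h, ts = next(iter(self._items.items()))
            if ts >= cutoff:
                break                  # everything else is newer
            self._items.popitem(last=False)
            expired.append((h, ts))
        return expired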
19,276,767 | 2013-10-09T15:50:00.000 | 0 | 0 | 1 | 0 | python,csv,text | 19,276,900 | 1 | false | 0 | 0 | 1) Use re.findall or split() to break the original data into a list.
2) Loop through your list, take out the important information, put it into a dictionary, and append that to a list. (You want a list of dictionaries: [{"Arrest": 1, "date": "01/08/2011", "sex": "male", "charge": "assault"}, {}, {}, ...])
3) Open a text file and write the rows: big_list[0]['Arrest'], big_list[0]['date'], etc. | 1 | 1 | 0 | I am a statistician and am somewhat new to Python. I have a text document that looks like:
Arrest # 1
Arrest Date
01/08/2011
Sex
Male
Charge
Assault
Arrest # 2
Arrest Date
01/13/2011
Sex
Charge
Deviant
Trespassing
Arrest # 3....
I would like to transform this into the following form:
Arrest Sex Charge
1 Male Assault
2 Missing Deviant Trespassing
3...
I can pull out the text in between, say, Arrest Date and Sex using regular expressions, but I cannot figure out how to perform these operations for each arrest. This is a problem that I encounter a lot, as police departments tend to hand over PDFs (which I then convert into text files in the above format) and not spreadsheets, so any help would be greatly appreciated. | Python: Transforming column of data into spreadsheet | 0 | 0 | 0 | 69
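A rough parsing sketch for the sample above (my own code; it assumes each value sits on the single line after its label, so multi-line charges would need extra handling):

import re
import csv

FIELDS = ['Arrest Date', 'Sex', 'Charge']

with open('arrests.txt') as f:        # hypothetical input file
    text = f.read()

rows = []
for block in re.split(r'Arrest # \d+', text)[1:]:   # one block per arrest
    record = {}
    for field in FIELDS:
        # Capture the line after the label, unless that line is itself
        # another label (which is how a missing value like Sex shows up).
        m = re.search(re.escape(field) + r'\n(?!(?:Arrest Date|Sex|Charge)\b)(.+)', block)
        record[field] = m.group(1).strip() if m else 'Missing'
    rows.append(record)

with open('arrests.csv', 'wb') as f:  # 'wb' for the csv module on Python 2
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)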
19,277,240 | 2013-10-09T16:11:00.000 | 3 | 0 | 0 | 0 | python,tkinter,text-widget | 19,277,903 | 1 | true | 0 | 1 | The text widget allows you to associate tags with a block of text. You do this with the tag_add method of a text widget object. You can then associate various attributes to a tag, such as a bold font, colors, underlining, etc. You configure the attributes of a tag with the tag_configure method of the text widget object. | 1 | 1 | 0 | Here is what I want to do. A user enters a search query "hello world". The text is searched for this query; when the sentence with "hello world" is found, it is inserted in the Text widget and shown to the user.
I want to somehow highlight the words from the search query so they would look like this:
"This is a simple hello world expression."
How can I do it? | Tkinter: How to make found words bold in Text widget? | 1.2 | 0 | 0 | 796 |
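Putting the tag_add/tag_configure advice together, a hedged Python 2 sketch (use tkinter/tkinter.font on Python 3; the widget setup is illustrative):

import Tkinter as tk
import tkFont

root = tk.Tk()
text = tk.Text(root)
text.pack()
text.insert('1.0', 'This is a simple hello world expression.')

bold = tkFont.Font(text, text.cget('font'))
bold.configure(weight='bold')
text.tag_configure('hit', font=bold)

query = 'hello world'
start = '1.0'
while True:
    start = text.search(query, start, stopindex='end', nocase=True)
    if not start:
        break
    end = '%s+%dc' % (start, len(query))  # index arithmetic: start + N chars
    text.tag_add('hit', start, end)
    start = end

root.mainloop()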
19,278,242 | 2013-10-09T16:59:00.000 | 0 | 0 | 0 | 0 | android,python,sl4a | 19,764,120 | 1 | false | 1 | 0 | There is no explicit function for accessing the CallLog.Calls object.
It may be possible in the ContactsFacade, e.g. queryAttributes or queryContent, but you would need to investigate further there. | 1 | 0 | 0 | Is it possible to access the call log (history of outgoing, incoming and missed calls) of an Android phone using Python code in the SL4A environment? | Access call log from SL4A | 0 | 0 | 0 | 383
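If you do try the queryContent route, a heavily hedged sketch (whether a bare call-log URI is accepted by this facade call is an unverified assumption):

import android

droid = android.Android()
# 'content://call_log/calls' is the standard Android call-log content URI.
result = droid.queryContent('content://call_log/calls')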
19,279,379 | 2013-10-09T18:02:00.000 | 0 | 0 | 0 | 0 | python,pylons,ckan | 19,279,825 | 1 | false | 1 | 0 | I just figured out that all I needed to do was add {.format} and rename the original file, because Pylons will route to the static page first! | 1 | 0 | 0 | I am currently trying to make a once-static page into a dynamic page. The customer does not want to change the URL to drop the .html at the end. So, as an example, the current static page is /foo/bar.html, which is located in my public folder, no problem. I can easily make it /foo/bar, but once I have a period, Pylons no longer accepts the route.
Current code: map.connect('foo', '/foo/bar.html', controller=controller, action='foo') | pylons route with period | 0 | 0 | 0 | 28
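A sketch of the fix described in the answer: with {.format}, the extension becomes part of the match, so /foo/bar.html resolves to the controller:

map.connect('foo', '/foo/bar{.format}', controller=controller, action='foo')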
19,282,409 | 2013-10-09T20:49:00.000 | 0 | 1 | 0 | 0 | python,acrobat,reportlab | 19,321,562 | 1 | false | 1 | 0 | There is a Python module, pyPdf, that can also be used to slice and dice PDFs.
This could be used if you had already exported your assets using the native program (for example, printing an AutoCAD drawing as a PDF from within AutoCAD itself). Acrobat is pretty good at magically guessing how this should be done when using those difficult proprietary applications with specialized formats.
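For the merging step itself, a minimal pyPdf sketch (file names hypothetical; this illustration is mine, not part of the original answer):

from pyPdf import PdfFileReader, PdfFileWriter

writer = PdfFileWriter()
for path in ('report.pdf', 'drawing.pdf'):      # hypothetical inputs
    reader = PdfFileReader(open(path, 'rb'))
    for page_num in range(reader.getNumPages()):
        writer.addPage(reader.getPage(page_num))

with open('merged.pdf', 'wb') as out:
    writer.write(out)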
The disadvantage (from an automation point of view) is that now we probably need to script AutoCAD to output the PDF in an organized way, so that we can pass it on to pyPdf. (Or we do these kinds of things by hand, but that is not very scalable.) | 1 | 1 | 0 | I am trying to embed various PDF documents into my ReportLab canvas. It seems that maybe you can hack in support for SVG (but I really need PDF).
If you want pure Python, the proper way is to pay for the commercial ReportLab PLUS add-ons, which include PageCatcher, a mighty powerful artwork/PDF toolset.
I'm not ready for the PLUS upgrade just yet, but I have one other potential solution: Adobe Acrobat. I use Acrobat quite often, but I have never attempted to automate it (using Python + COM, I suppose).
I don't want to just slam PDFs together, because it will ruin the indexing and Table of Contents generated by ReportLab. What I would need to do is set some type of placeholder in ReportLab that simply takes up space, yet leaves some type of identifier for Acrobat to look for and replace. I plan to fill in entire pages in Acrobat.
Any idea how I can create this placeholder from the ReportLab side? It almost seems like I would want to embed metadata in the PDF that gives Acrobat exact instructions for the insertion. I also suppose adding actual entities could work, and then Acrobat would need to remove them or cover them up.
I am trying to merge AutoCAD drawings, vector illustrations, and assorted reStructuredText snippets (using rst2pdf). | ReportLab import PDF, Acrobat | 0 | 0 | 0 | 561
19,285,892 | 2013-10-10T02:10:00.000 | 0 | 0 | 1 | 0 | python,directory,libs | 19,286,739 | 2 | false | 0 | 0 | Just looking on mine (Windows 7), /libs appears to hold the native code libraries (*.lib), versus the straight Python libraries in /Lib. The readme also mentions a configuration flag:
--with-libs='libs': Add 'libs' to the LIBS that the python interpreter
is linked against.
Which may or may not be set on different installs/platforms.
This isn't really an answer; hopefully someone with firmer knowledge of it will explain further. It was just a bit too much info to squeeze into a comment. | 1 | 8 | 0 | For me, it's located at C:\Python33\libs.
For reference - this is not the same folder as C:\Python33\Lib - note the capitalization and lack of an 's'.
On one computer I was working on, I simply dropped a .py file into the libs folder and could import and use it like a library / module (sorry, I don't really know terminology very well), regardless of where the project I was working on is.
However, in trying to duplicate this on another machine, this doesn't work. Attempting to import simply gives a "no module named X" error.
So, clearly I'm misunderstanding the purpose of the libs folder, and how it differs from the Lib folder.
So, what exactly is the difference? | Python - what is the libs subfolder for? | 0 | 0 | 0 | 7,130 |
19,296,422 | 2013-10-10T12:59:00.000 | 0 | 0 | 0 | 0 | java,php,python,actionscript-3,flash | 19,300,355 | 1 | false | 1 | 0 | GraniteDS will allow you to access your Java objects via the Flex framework. | 1 | 0 | 0 | Is there any way I can make a Java application communicate with a Flash Player (application) that is on a website? The Flash application is quite dynamic, meaning the data changes as I refresh and visit different pages. In fact, the page itself is entirely Flash.
Where should I be looking to get this working?
I'm thinking: how can I even retrieve the text/objects from this Flash app and then send an action (click, text)?
Any advice would be greatly appreciated. | Java Application Interacting with Flash on Web Application | 0 | 0 | 0 | 113 |
19,298,609 | 2013-10-10T14:29:00.000 | 0 | 0 | 0 | 0 | python,django,web-frameworks | 24,934,345 | 3 | false | 1 | 0 | Try using the latest GA version of mysql-connector-python. | 1 | 1 | 0 | I'm currently testing frameworks to create a big multiplayer game. I chose Django.
But I have a question about the version of Python. Should I create the project from scratch with Python 3.x or Python 2.x?
Is Python 3.x and Django compatibility OK, or is it not production-usable for now? | Django and python3 | 0 | 0 | 0 | 367
19,304,206 | 2013-10-10T19:08:00.000 | 0 | 0 | 1 | 0 | python,string-formatting | 19,304,344 | 2 | false | 0 | 0 | The reason is that the actual argument on the right-hand side of % is supposed to be a tuple, because the string on the left-hand side can have multiple placeholders for elements of the tuple to fill in. The single-argument version is actually a special case. So when you place your x there and it is actually a tuple, Python assumes you're providing several arguments to fill in the placeholders - only there aren't several placeholders, hence the exception.
Putting (x,) fixes it because that now makes the argument a tuple containing a single element, which is itself a tuple. | 1 | 1 | 0 | Every time I'm using print "in some_function. x: %s" % x to debug some python program (typically Python 2.5 or Python 2.6), if x is a tuple, my program crashes. Why? How can I avoid this when adding prints to my code? | Adding print statements with % crashes my program? | 0 | 0 | 0 | 66 |
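A two-line demonstration of the failure and the fix:

x = (1, 2)
# print "x: %s" % x    # raises TypeError: not all arguments converted
print "x: %s" % (x,)   # prints: x: (1, 2)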
19,307,827 | 2013-10-10T23:19:00.000 | 1 | 1 | 1 | 0 | emacs,ipython,putty | 19,351,887 | 2 | true | 0 | 0 | Solution: add this line to ~/.emacs.d/init.el:
(ansi-color-for-comint-mode-on) | 1 | 0 | 0 | I have a PuTTY terminal running emacs 23. I just installed python-mode.el-6.1.2 and pinard-Pymacs-5989046. The IPython shell looks like this:
IPython 1.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
^[[0;32mIn [^[[1;32m2^[[0;32m]: ^[[0m
Whereas when I run ipython from bash, I get
IPython 1.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]:
Does this look like a charset issue in my PuTTY setup or should I try to find the issue within emacs/python-mode? | Ugly ipython prompt on emacs 23 | 1.2 | 0 | 0 | 330 |
19,309,287 | 2013-10-11T02:25:00.000 | 0 | 0 | 1 | 0 | python,ipython,ipython-notebook,ipython-magic | 64,969,412 | 7 | false | 0 | 0 | The simplest way to keep Python code in a Jupyter notebook cell from running is to temporarily convert those cells to Markdown. | 1 | 61 | 1 | I usually have to rerun (most parts of) a notebook when I reopen it, in order to get access to previously defined variables and go on working.
However, sometimes I'd like to skip some of the cells, which have no influence to subsequent cells (e.g., they might comprise a branch of analysis that is finished) and could take very long time to run. These cells can be scattered throughout the notebook, so that something like "Run All Below" won't help much.
Is there a way to achieve this?
Ideally, those cells could be tagged with some special flags, so that they could be "Run" manually, but would be skipped when "Run All".
EDIT
%%cache (ipycache extension) as suggested by @Jakob solves the problem to some extent.
Actually, I don't even need to load any variables (which can be large but unnecessary for following cells) when re-run, only the stored output matters as analyzing results.
As a work-around, put %%cache folder/unique_identifier at the beginning of the cell. The code will be executed only once and no variables will be loaded when re-run unless you delete the unique_identifier file.
Unfortunately, all the output results are lost when re-run with %%cache...
EDIT II (Oct 14, 2013)
The master version of ipython+ipycache now pickles (and re-displays) the codecell output as well.
For rich display outputs including Latex, HTML(pandas DataFrame output), remember to use IPython's display() method, e.g., display(Latex(r'$\alpha_1$')) | How to (intermittently) skip certain cells when running IPython notebook? | 0.057081 | 0 | 0 | 40,298 |
19,310,083 | 2013-10-11T04:06:00.000 | 0 | 0 | 0 | 0 | python,django,sqlite,orm,memcached | 19,311,615 | 2 | false | 1 | 0 | Does disk I/O really become the bottleneck of your application's performance and affect your user experience? If not, I don't think this kind of optimization is necessary.
Operating systems and RDBMSs (e.g. MySQL, PostgreSQL) are really smart nowadays. Data on disk will be cached in memory by the RDBMS and the OS automatically. | 1 | 2 | 0 | I'm working with a somewhat large set (~30000 records) of data that my Django app needs to retrieve on a regular basis. This data doesn't really change often (maybe once a month or so), and the changes that are made are done in a batch, so the DB solution I'm trying to arrive at is pretty much read-only.
The total size of this dataset is about 20mb, and my first thought is that I can load it into memory (possibly as a singleton on an object) and access it very fast that way, though I'm wondering if there are other, more efficient ways of decreasing the fetch time by avoiding disk I/O. Would memcached be the best solution here? Or would loading it into an in-memory SQLite DB be better? Or loading it on app startup simply as an in-memory variable? | Load static Django database into memory | 0 | 1 | 0 | 1,409 |
19,310,244 | 2013-10-11T04:25:00.000 | 1 | 0 | 1 | 0 | python,python-2to3 | 20,411,997 | 2 | false | 0 | 0 | You need to run Python, followed by the 2to3 script, followed by flags and arguments.
Running 2to3 on the command line looks something like this:
[python] [2to3.py] [flags] [files to be converted (can be 1+)]
C:\python33\python.exe C:\python33\Tools\Scripts\2to3.py -w "C:\Users\watt\Documents\Tom's Stuff\Programs\Python\python 2 test.py"
By running the Python 3.3 interpreter followed by 2to3.py, you run the 2to3 script. Then you add the -w flag to actually write the conversion back to your program. Then you add the files to be converted (quoted here because the path contains spaces).
The command can be simplified by changing directory to your Programs folder first. | 1 | 2 | 0 | I am fairly new to programming and have been learning Python on Codecademy. I would like to convert a Python 2.x program to Python 3.x using 2to3 on the command line but have no idea how to do it. I have looked at various other questions and articles on how to do it but I still do not understand. I have Python 3.3 installed, and am running Windows 8. This is the path to my Python 2.x program and my path to 2to3.
My program: "C:\Users\watt\Documents\Tom's Stuff\Programs\Python\python 2 test.py"
2to3 Location: "C:\Python33\Tools\Scripts\2to3.py"
Can someone please tell me what I would have to enter into the command line?
Thanks in advance... | Using 2to3 python in windows | 0.099668 | 0 | 0 | 2,921 |
19,313,800 | 2013-10-11T08:40:00.000 | 0 | 0 | 0 | 0 | python | 19,313,971 | 2 | false | 1 | 0 | Using a framework like Django will start you right into developing your application. The way you intend to work will cost you years of effort building your own web application infrastructure before you get anything useful.
The strength of Python is the availability of countless modules, packages and frameworks to build upon. Without them you will get nowhere. | 1 | 0 | 0 | I need to create a website using Python but without Django or any other framework, since the website I need to create is very custom (at the back-end level especially), like having a dashboard after login and stuff like that.
I want to know the best practices and/or tutorials that can help me in such a situation. | Create website with Python without using Django | 0 | 0 | 0 | 1,444
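If you really want zero frameworks, the standard library can serve WSGI directly; a minimal Python 2 sketch (return bytes on Python 3):

from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from plain WSGI\n']

make_server('', 8000, app).serve_forever()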
19,316,788 | 2013-10-11T11:15:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,tf-idf | 19,317,456 | 1 | true | 0 | 0 | Use the get_feature_names method, as specified in the comments by larsmans | 1 | 1 | 0 | I want to visualize the "words/grams" used in the columns of the TfidfVectorizer output in the scikit-learn library. Is there a way?
I tried to convert the CSR matrix to an array, but cannot see a header composed of grams. | Is there a way to see column 'grams' of the TfidfVectorizer output? | 1.2 | 0 | 0 | 31
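A short sketch showing the method in context (the toy documents are mine):

from sklearn.feature_extraction.text import TfidfVectorizer

vec = TfidfVectorizer()
X = vec.fit_transform(['the cat sat', 'the dog sat'])
print(vec.get_feature_names())  # the column labels, e.g. ['cat', 'dog', 'sat', 'the']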
19,317,126 | 2013-10-11T11:32:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,encryption | 19,318,479 | 1 | false | 1 | 0 | It sounds like conventional cryptographic methods should meet your needs, e.g. AES-256. When it comes to crypto you should try to innovate as little as possible. Use well-established and well-trusted methods; when "rolling your own" it's very easy to make mistakes, and you don't get the benefit of peer review from the academic cryptography community.
Make sure to benchmark how long the encryption with the strong algorithm of your choice actually takes before doing work to address the issue of the encryption blocking the request. Would a few hundred milliseconds' delay really be a problem?
If it turns out encryption is too slow, you still shouldn't compromise on the quality of your encryption algorithm. A better solution to this would be to perform the encryption in a background thread and continue with the request immediately.
Allot an ID in your database to the resource that is to be inserted in encrypted form, but don't bother with an intermediate, intentionally "mild" form of encryption before the "real" encryption. This layer would only be providing a false sense of security.
If a user attempts to access a resource that has not yet been encrypted, return an error indicating that the resource is still being processed (or that the encryption has failed, if applicable).
Make sure that there's no possibility of the encryption process failing or being delayed in a way that results in unencrypted data being kept around for longer than it should. If encryption can't succeed in a timely manner (because of disks being full / power failure / cosmic rays), the insertion must simply be allowed to fail and the unencrypted data must not be kept. | 1 | 0 | 0 | I have unencrypted web request data (not under my jurisdiction) that I would like to quickly save into the Datastore so as not to slow down the request process.
The sensitive data occasionally is required to be opened by system users via the web. When a user makes such a request, it will require them to complete a reCAPTCHA before the decryption process starts and an event is logged regarding their behavior. Decryption time could suitably be up to 1 minute long for a string of between 10 and 20 characters.
Is there an encryption algorithm usable on GAE which is slower to decrypt than encrypt that would be suitable in this case?
I'm contemplating another method to alleviate the encryption time:
temporarily store the data mildly encrypted with a quick MD5/hash-based scheme, while a scheduled job iterates over any records not flagged as properly encrypted and economically applies very strong encryption (it would be acceptable for the user to be alerted that encryption has not yet finished if they tried to access the data immediately after input)
Assuming the above method is feasible, then I assume I can encrypt the pants out of the data for a few minutes, rendering it extremely costly to try to decrypt if data is compromised but system is not. | Fast encrypt, slow decrypt method for NDB Datastore | 0.197375 | 0 | 0 | 560 |
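For the conventional-encryption route the answer recommends, a minimal PyCrypto sketch (key storage and mode choice are your responsibility; this is illustrative, not a vetted design):

from Crypto import Random
from Crypto.Cipher import AES

key = Random.new().read(32)              # 256-bit key; persist it securely
iv = Random.new().read(AES.block_size)   # fresh IV per message
cipher = AES.new(key, AES.MODE_CFB, iv)
ciphertext = iv + cipher.encrypt('sensitive payload')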
19,318,298 | 2013-10-11T12:34:00.000 | 3 | 0 | 1 | 0 | ipython,ipython-notebook | 47,452,654 | 4 | false | 0 | 0 | I have a German keyboard and tried out some keys. The following worked:
[Strg] + [#] (Strg is the German label for Ctrl)
Is there a way to do this? | How can I block comment code in the IPython notebook? | 0.148885 | 0 | 0 | 34,719 |
19,318,298 | 2013-10-11T12:34:00.000 | 0 | 0 | 1 | 0 | ipython,ipython-notebook | 68,533,010 | 4 | false | 0 | 0 | For me Ctrl + ^/~.
I'm using Windows 10 and Jupyter Notebook. | 2 | 34 | 0 | I have defined a function in an IPython notebook and would like to be able to block comment a section of it. Intuitively, I'd expect to be able to highlight a section of code, right-click and have an option to comment out the selection, but this has not been implemented.
Is there a way to do this? | How can I block comment code in the IPython notebook? | 0 | 0 | 0 | 34,719 |
19,320,747 | 2013-10-11T14:31:00.000 | 33 | 0 | 1 | 0 | python,vim,vi | 21,820,207 | 4 | false | 0 | 0 | Certain keys, when pressed, will trigger Vim's indent feature, which will attempt to set the correct amount of indentation on the current line. (You can manually trigger this by typing == in normal mode.)
You can change which keys trigger this behavior, but first you need to know what indenting mode is being used.
First, execute :set indentexpr?. If it is nonempty (I would expect this for Python), then indentexpr mode is being used. In this case, executing :set indentkeys? gives you the list of trigger keys. To remove the colon, execute :setlocal indentkeys-=:.
If indentexpr is empty, then you are probably using cindent mode, and :set cindent? will tell you that cindent is set. In this case, do the same as before, but using cinkeys instead of indentkeys. (Note that indentexpr mode takes precedence over cindent mode.) | 1 | 47 | 0 | Whenever I append a : character in Vim in Python mode, it either:
indents the line
dedents the line
does nothing
What is it even trying to do, and how do I get rid of this behavior? | Prevent Vim from indenting line when typing a colon (:) in Python | 1 | 0 | 0 | 6,351 |
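To make the answer's fix permanent for Python files, a one-line .vimrc sketch:

autocmd FileType python setlocal indentkeys-=: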
19,323,049 | 2013-10-11T16:33:00.000 | 2 | 0 | 0 | 0 | python,xlrd,xlwt,openpyxl,xlutils | 20,910,668 | 1 | false | 0 | 0 | This is currently not possible with either but I hope to have it in openpyxl 2.x. Patches / pull requests always welcome! ;-) | 1 | 4 | 0 | I have a workbook that has some sheets in it. One of the sheets has charts in it. I need to use xlrd or openpyxl to edit another sheet, but, whenever I save the workbook, the charts are gone.
Any workaround to this? Is there another python package that preserves charts and formatting? | How can I edit Excel Workbooks using XLRD or openpyxl while preserving charts? | 0.379949 | 1 | 0 | 477 |
19,329,745 | 2013-10-12T02:15:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 19,329,776 | 2 | false | 0 | 0 | You could just "grep" the Python source files in your project for "import " to get an exhaustive list of packages you use. Remove the obvious ones that are part of the standard library, like datetime or whatever, and the rest are what you might include in requirements.txt.
I don't know of a more "automatic" way to do it; another way might be to set up a clean virtualenv or other sandboxed install of Python with no extra packages, and try installing your software in there using only your requirements.txt. | 1 | 2 | 0 | I'm starting to dive into Python but I'm a bit confused with how the requirements.txt file works. How do I know what to include in it?
For example, in the current project I'm working on, I only installed Flask. So do I just add only Flask to that file? Or are there other packages that I don't know about? If so, is there a way to find out (e.g. display a full list)? | how to check what packages are used to include in a requirements.txt file? | 0.099668 | 0 | 0 | 370
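A common shortcut the answers don't mention: in a clean environment, let pip list what is installed, as a one-line shell command:

pip freeze > requirements.txt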
19,330,790 | 2013-10-12T05:13:00.000 | 2 | 0 | 0 | 1 | python,macos,vim,terminal | 19,331,487 | 5 | false | 0 | 0 | You can't execute a file if that file doesn't exist.
Write the file with :w filename.py (further writes only need :w) and execute your script with :!python %.
Learning programming and Vim at the same time is not a very good idea: Vim is a complex beast and trying to handle both learning curves won't be easy. As much as I love Vim, I'd suggest you use another text editor, at least in the beginning, like Sublime Text or TextMate.
In short, focus on programming first by using a simple and intuitive editor and learn Vim once you are comfortable enough in your craft.
Or don't, Vim is the greatest text editor but you can definitely be a successful programmer without it. | 2 | 1 | 0 | I'm writing Python code using Vim inside Terminal (typing command "vim" to start up Vim). I've been trying to find a way to execute the code through the mac terminal in the same window.
I'm trying to use :!python % but I get the following error message:
E499: Empty file name for '%' or '#', only works with ":p:h"
Anyone have any suggestions? | Using Python and Vim within the Mac Terminal | 0.07983 | 0 | 0 | 3,871 |
19,330,790 | 2013-10-12T05:13:00.000 | -1 | 0 | 0 | 1 | python,macos,vim,terminal | 64,542,587 | 5 | false | 0 | 0 | In Vim, type :w yourfilenamehere.py and press Enter | 2 | 1 | 0 | I'm writing Python code using Vim inside Terminal (typing command "vim" to start up Vim). I've been trying to find a way to execute the code through the mac terminal in the same window.
I'm trying to use :!python % but I get the following error message:
E499: Empty file name for '%' or '#', only works with ":p:h"
Anyone have any suggestions? | Using Python and Vim within the Mac Terminal | -0.039979 | 0 | 0 | 3,871 |
19,330,905 | 2013-10-12T05:29:00.000 | 1 | 0 | 0 | 0 | python,web.py,http-referer | 19,331,922 | 2 | false | 1 | 0 | thefourtheye is right that you can't rely on REFERER.
But that doesn't mean you can't use it.
As a security measure (e.g., to prevent deep linking), it's laughably useless.
But for convenience features, there's nothing wrong with it. Assume, say, a third of your users won't supply it. Is your navigation good enough without it? Is the benefit in making things a little nicer for 2 in 3 users worth it? Sometimes the answer is yes.
Keep in mind that some proxies or user agents will intentionally send you garbage. If the REFERER is the same as the current page, or is not part of your app at all, don't use it.
Also, ask yourself whether what you really want here is a redirect to REFERER, or JS window.history.back(). The former is a poor substitute for the latter if that's what you're intending it for (although it can occasionally be useful as a fallback for people who can't run JS). | 1 | 0 | 0 | I'm writing a web app in Python/web.py where you go to the url /unfriend to unfriend someone. This link is spread out across multiple pages. After being unfriended, I would like the user to be redirected to the page they came from. Can I rely on HTTP_REFERER to implement this behavior? I don't want to have to add a parameter to the url. | Is it wise to rely on HTTP_REFERER in a web app? | 0.099668 | 0 | 1 | 280 |
19,332,760 | 2013-10-12T09:38:00.000 | 2 | 0 | 0 | 0 | python,mysql,django | 19,333,028 | 3 | false | 1 | 0 | You need to implement polling/long polling or server push. | 1 | 1 | 0 | I have a simple Python/Django application in which I am inserting records into a database through some scanning event, and I am able to show the data on a simple page. I keep reloading the page every second to show the latest inserted database records. But I want to improve it so that the page updates whenever a new entry arrives in the database, instead of reloading every second.
Is there any way to do this?
Database: I am using MySQL
Python: Python 2.7
Framework: Django | Updating client page only when new entry comes in database in Django | 0.132549 | 1 | 0 | 824 |
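Short of full server push, a hedged polling endpoint sketch for the Django side (model and field names are hypothetical; the client would fetch it with periodic AJAX calls, passing the last id it saw):

import json
from django.http import HttpResponse
from myapp.models import Record   # hypothetical model

def updates(request):
    last_id = int(request.GET.get('since', 0))
    rows = list(Record.objects.filter(id__gt=last_id).values('id', 'data'))
    return HttpResponse(json.dumps(rows), content_type='application/json')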
19,338,113 | 2013-10-12T19:10:00.000 | 0 | 0 | 1 | 0 | python,dictionary,information-retrieval,trie | 41,477,845 | 3 | false | 0 | 0 | You could think of creating a sequence out of the entire dictionary and then aligning the two to find the words in the sequence, using Smith-Waterman or any heuristic local alignment algorithm. | 1 | 6 | 0 | I'm doing an artistic project where I want to see if any information emerges from a long string of characters (~28,000). It's sort of like the problem one faces in solving a Jumble. Here's a snippet:
jfifddcceaqaqbrcbdrstcaqaqbrcrisaxohvaefqiygjqotdimwczyiuzajrizbysuyuiathrevwdjxbinwajfgvlxvdpdckszkcyrlliqxsdpunnvmedjjjqrczrrmaaaipuzekpyqflmmymedvovsudctceccgexwndlgwaqregpqqfhgoesrsridfgnlhdwdbbwfmrrsmplmvhtmhdygmhgrjflfcdlolxdjzerqxubwepueywcamgtoifajiimqvychktrtsbabydqnmhcmjhddynrqkoaxeobzbltsuenewvjbstcooziubjpbldrslhmneirqlnpzdsxhyqvfxjcezoumpevmuwxeufdrrwhsmfirkwxfadceflmcmuccqerchkcwvvcbsxyxdownifaqrabyawevahiuxnvfbskivjbtylwjvzrnuxairpunskavvohwfblurcbpbrhapnoahhcqqwtqvmrxaxbpbnxgjmqiprsemraacqhhgjrwnwgcwcrghwvxmqxcqfpcdsrgfmwqvqntizmnvizeklvnngzhcoqgubqtsllvppnedpgtvyqcaicrajbmliasiayqeitcqtexcrtzacpxnbydkbnjpuofyfwuznkf
What's the most efficient way of searching for all possible English words embedded (both forwards and backwards) in this string?
What is a useful dictionary against which to check the substrings? Is there a good library for doing this sort of thing? I have searched around and found some interesting TRIE solutions; but most of them are dealing with the situation where you know the set of words in advance. | How to find possible English words in long random string? | 0 | 0 | 0 | 6,406 |
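For the no-trie baseline, a brute-force sketch (the word-list source is an assumption):

def find_words(text, dictionary):
    """Return every dictionary word embedded in text, forwards or backwards."""
    lowered = text.lower()
    reversed_text = lowered[::-1]
    return set(w for w in dictionary if w in lowered or w in reversed_text)

# A dictionary could come from e.g. /usr/share/dict/words on Linux:
# words = set(w.strip().lower() for w in open('/usr/share/dict/words') if len(w.strip()) > 2)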
19,338,489 | 2013-10-12T19:48:00.000 | 0 | 0 | 1 | 0 | python | 19,338,776 | 4 | false | 0 | 0 | You can read it line by line, count the words you are interested in on each line, add the results to a subtotal, and print the total when you are done. Handy if the file you are processing is big enough to cause swapping. | 1 | 0 | 0 | How can I read an entire text file as one chunk of data or string?
I do not want to read the file line by line; instead, I want to read the entire file as text and count occurrences of certain words. What is the way to do that? | Read whole file as text and not line wise in Python? | 0 | 0 | 0 | 859
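For the record, reading the whole file and counting is only a few lines (the file name is hypothetical):

with open('input.txt') as f:
    text = f.read()                    # the entire file as one string
print(text.split().count('certain'))  # occurrences of one exact word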
19,340,807 | 2013-10-13T01:12:00.000 | 0 | 0 | 1 | 0 | python,json,netbeans | 19,340,898 | 2 | false | 1 | 0 | Install Python setuptools
sudo apt-get install python-setuptools
Now install simplejson using pip
sudo pip install simplejson
In general most Python packages can be installed this way. | 2 | 0 | 0 | I am trying to install simplejson-3.3.1.tar.gz so it can be accessed by my Python project in netbeans IDE 7.3.1.
I installed json.py in my src as a quick fix, but need more functionality.
I am using Linux Mint 15 as an OS.
I am unsure how to get my modules in NetBeans to "see" methods, e.g. json.dumps.
I am new to netbeans and would appreciate your assistance.
Thanks and regards,
Chris | How do I install simplejson 3.3.1 for a Python project in Netbeans IDE 7.3.1 | 0 | 0 | 0 | 2,016 |
19,340,807 | 2013-10-13T01:12:00.000 | 0 | 0 | 1 | 0 | python,json,netbeans | 19,346,510 | 2 | false | 1 | 0 | I found the issue ...
NetBeans defaults to Jython. I had to save my project files to another directory, delete my project (changing to Python 2.7 for the current project had no effect) and create a new project in NetBeans with Python 2.7 as the default.
Thanks for helping me get simplejson into my Python 2.7, Nipun! Chris | 2 | 0 | 0 | I am trying to install simplejson-3.3.1.tar.gz so it can be accessed by my Python project in NetBeans IDE 7.3.1.
I installed json.py in my src as a quick fix, but need more functionality.
I am using Linux Mint 15 as an OS.
I am unsure how to get my modules in NetBeans to "see" methods, e.g. json.dumps.
I am new to netbeans and would appreciate your assistance.
Thanks and regards,
Chris | How do I install simplejson 3.3.1 for a Python project in Netbeans IDE 7.3.1 | 0 | 0 | 0 | 2,016 |
19,347,461 | 2013-10-13T16:26:00.000 | 0 | 0 | 1 | 0 | python | 19,347,812 | 2 | false | 0 | 0 | You can follow these steps:
Search and download easy_install
Unzip it.
python setup.py install
Setup path in environment
You are ready to go using easy_install modulename
For some strange reason, if you are on Windows, you may face a missing DLL issue. I don't remember the name of the DLL. So instead of searching for a solution, just install Visual Studio 2008 Express (Visual C++). Yes, I said strange. If you face any issue, feel free to ask. | 1 | 0 | 0 | I am new to Python. As with Perl's CPAN modules, is there any way to download and import libraries in Python for functionality that is not supported by the built-in library modules? | how to import third party libraries in Python | 0 | 0 | 0 | 701
19,350,627 | 2013-10-13T21:44:00.000 | 0 | 0 | 1 | 0 | python,list | 19,350,715 | 6 | false | 0 | 0 | May not be the most efficient, but one way (assuming the entries are single digits, as here) is
>>> myList
[0, 0, 1, 2, 1, 2, 2, 2, 0, 1, 1, 0]
>>> [int(str_element) for str_element in "".join(str(int_element) for int_element in myList).strip('0')]
[1, 2, 1, 2, 2, 2, 0, 1, 1] | 2 | 2 | 0 | I have a simple question that I can't seem to find the answer to. I have a list in Python that I want to make shorter based on the values at the end. For example, say I have list = [0,0,1,2,1,2,2,2,0,1,1,0] and I want to remove any 0's that are at the end or beginning of the list, so this translates into list = [1,2,1,2,2,2,0,1,1]. I tried using filter() but this is removing all instances of 0 when I just want to remove the ends. I'm new to Python and can't seem to figure this out. Any help would be appreciated! | Shortening a list | 0 | 0 | 0 | 721
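Another sketch, trimming both ends with itertools (mine, not from the answers above):

from itertools import dropwhile

def strip_zeros(seq):
    no_front = list(dropwhile(lambda v: v == 0, seq))
    no_back = list(dropwhile(lambda v: v == 0, reversed(no_front)))
    return no_back[::-1]

print(strip_zeros([0, 0, 1, 2, 1, 2, 2, 2, 0, 1, 1, 0]))
# [1, 2, 1, 2, 2, 2, 0, 1, 1]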
19,350,627 | 2013-10-13T21:44:00.000 | 0 | 0 | 1 | 0 | python,list | 19,350,698 | 6 | false | 0 | 0 | You can use the pop method:
L.pop([index]) -> item -- remove and return item at index (default last).
Raises IndexError if list is empty or index is out of range. | 2 | 2 | 0 | I have a simple question that I can't seem to find the answer to. I have a list in Python that I want to make shorter based on the values at the end. For example, say I have list = [0,0,1,2,1,2,2,2,0,1,1,0] and I want to remove any 0's that are at the end or beginning of the list, so this translates into list = [1,2,1,2,2,2,0,1,1]. I tried using filter() but this is removing all instances of 0 when I just want to remove the ends. I'm new to Python and can't seem to figure this out. Any help would be appreciated! | Shortening a list | 0 | 0 | 0 | 721
19,350,785 | 2013-10-13T22:00:00.000 | 1 | 0 | 0 | 0 | python,django | 66,488,574 | 4 | false | 1 | 0 | As per the official Django documentation
Projects vs. apps
What’s the difference between a project and an app? An app is a Web
application that does something – e.g., a Weblog system, a database of
public records or a small poll app. A project is a collection of
configuration and apps for a particular website. A project can contain
multiple apps. An app can be in multiple projects. | 1 | 85 | 0 | I am creating my first real website using Django but I am still struggling to understand the difference between a project and an app.
For example, my website is a sports news website which will contain sections like articles, ranking tables and "fixtures and results". My question is: should each one of these sections be a separate app inside a whole project, or not? What is the best practice in this situation? | What’s the difference between a project and an app in Django world? | 0.049958 | 0 | 0 | 35,473
19,350,785 | 2013-10-13T22:00:00.000 | 122 | 0 | 0 | 0 | python,django | 19,351,042 | 4 | true | 1 | 0 | A project refers to the entire application and all its parts.
An app refers to a submodule of the project. It's self-sufficient and not intertwined with the other apps in the project such that, in theory, you could pick it up and plop it down into another project without any modification. An app typically has its own models.py (which might actually be empty). You might think of it as a standalone python module. A simple project might only have one app.
For your example, the project is the whole website. You might structure it so there is an app for articles, an app for ranking tables, and an app for fixtures and results. If they need to interact with each other, they do it through well-documented public classes and accessor methods.
The main thing to keep in mind is this level of interdependence between the apps. In practice it's all one project, so there's no sense in going overboard, but keep in mind how co-dependent two apps are. If you find one app is solving two problems, split them into two apps. If you find two apps are so intertwined you could never reuse one without the other, combine them into a single app. | 2 | 85 | 0 | I am creating my first real website using Django but I am still struggling to understand the difference between a project and an app.
For example, my website is a sports news website which will contain sections like articles, ranking tables and "fixtures and results". My question is: should each one of these sections be a separate app inside a whole project, or not? What is the best practice in this situation? | What’s the difference between a project and an app in Django world? | 1.2 | 0 | 0 | 35,473
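In concrete terms, the layout the accepted answer suggests would be scaffolded with these shell commands (the app names are the asker's sections, chosen here as an assumption):

django-admin.py startproject sportsnews    # the project
cd sportsnews
python manage.py startapp articles
python manage.py startapp rankings
python manage.py startapp fixtures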
19,352,225 | 2013-10-14T01:23:00.000 | -1 | 0 | 0 | 0 | python,numpy,histogram,sample | 19,352,360 | 2 | false | 0 | 0 | You need to refine your problem statement better. For example, if your array has only 1 row, what do you expect? If your array has 20,000 rows, what do you expect? ... | 1 | 2 | 1 | I have a 2-column array, the 1st column weights and the 2nd column values, which I am plotting using Python. I would like to draw 20 samples from this weighted array, proportionate to their weights. Is there a Python/NumPy command which does that? | Sample from weighted histogram | -0.099668 | 0 | 0 | 1,324
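There is indeed a NumPy one-liner for this (the array contents are hypothetical; np.random.choice needs NumPy 1.7+):

import numpy as np

arr = np.array([[0.2, 10], [0.5, 20], [0.3, 30]])  # column 0: weights, column 1: values
weights, values = arr[:, 0], arr[:, 1]
samples = np.random.choice(values, size=20, p=weights / weights.sum())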
19,355,986 | 2013-10-14T08:15:00.000 | 1 | 0 | 1 | 0 | python,data-structures,python-3.x,deque | 19,356,586 | 3 | false | 0 | 0 | Using doubly-linked lists in Python is a bit uncommon. However, your own proposed solution of a doubly-linked list and a dictionary has the correct complexity: all the operations you ask for are O(1).
I don't think the standard library has a more direct implementation. Trees might be nice theoretically, but they also come with drawbacks, like O(log n) operations or (more to the point) their general absence from the standard library. | 1 | 1 | 1 | I'm looking for a data structure that preserves the order of its elements (which may change over the life of the data structure, as the client may move elements around).
It should allow fast search, insertion before/after a given element, removal of a given element, lookup of the first and last elements, and bidirectional iteration starting at a given element.
What would be a good implementation?
Here's my first attempt:
A class deriving from both collections.abc.Iterable and collections.abc.MutableSet that contains a linked list and a dictionary. The dictionary's keys are elements, values are nodes in the linked list. The dictionary would handle search for a node given an element. Once an element is found, the linked list would handle insertion before/after, deletion, and iteration. The dictionary would be updated by adding or deleting the relevant key/value pair. Clearly, with this approach the elements must be hashable and unique (or else, we'll need another layer of indirection where each element is represented by an auto-assigned numeric identifier, and only those identifiers are stored as keys).
It seems to me that this would be strictly better in asymptotic complexity than either list or collections.deque, but I may be wrong. [EDIT: Wrong, as pointed out by @roliu. Unlike list or deque, I would not be able to find an element by its numeric index in O(1). As of now, it is O(N) but I am sure there's some way to make it O(log N) if necessary.] | How can I implement a data structure that preserves order and has fast insertion/removal? | 0.066568 | 0 | 0 | 708 |
19,356,408 | 2013-10-14T08:41:00.000 | 1 | 0 | 0 | 0 | python,3d | 19,362,585 | 2 | false | 0 | 1 | VTK is a very sophisticated framework, but it might be overkill. | 2 | 0 | 0 | Could anyone suggest a Python library which has the ability to construct simple 3D objects and interact with (touch) them?
Here's what I'm exactly looking into:
To have a test object, square/rectangular box(or any object) on a ground plane.
To have another sphere object of certain diameter.
To simulate, i.e. to roll the sphere all over and along all sides of the test object.
To highlight or shade the parts of the test objects that are touched during the rolling process. (it won't roll all over, due to the ground plane restriction)
Not interested in seeing any animations, just the end product: the parts of the test object that were touched by the sphere.
Any suggestions on libraries or mathematical methods?
Many thanks.
P.S. In electrical engineering, this is one of the methods used to see which part of a building lightning may be able to strike, i.e. the "touched" area. | Python: How to model 3D objects and interact them in 3D space? | 0.099668 | 0 | 0 | 1,707
19,356,827 | 2013-10-14T09:05:00.000 | -1 | 0 | 0 | 0 | python,flask,spyne | 26,411,467 | 2 | false | 1 | 0 | I am assuming a shared config file approach is not possible for you; otherwise I would import from the config file on the Spyne side,
e.g. from config import blah blah
Just a thought. | 1 | 1 | 0 | I have a Flask application and need to add SOAP server functionality to integrate with some services. The Spyne library was chosen for SOAP. I found how to combine the Flask and Spyne WSGI apps together using werkzeug.wsgi.DispatcherMiddleware. But now I am facing the issue of getting the Flask app config inside Spyne service views. I usually use current_app.config['FOO'] to get Flask app settings, but when a request comes to the Spyne WSGI app I have no Flask application context. I need advice on how to deal with it, please.
19,356,827 | 2013-10-14T09:05:00.000 | 0 | 0 | 0 | 0 | python,flask,spyne | 19,360,943 | 2 | true | 1 | 0 | I don't know how to get hold of that config object outside of the Flask context, but if you can, you can setattr anything onto the Application instance, which is accessible via ctx.app within Spyne's @rpc context. | 2 | 1 | 0 | I have a Flask application and need to add SOAP server functionality to integrate with some services. The Spyne library was chosen for SOAP. I found how to combine the Flask and Spyne WSGI apps together using werkzeug.wsgi.DispatcherMiddleware. But now I am facing the issue of getting the Flask app config inside Spyne service views. I usually use current_app.config['FOO'] to get Flask app settings, but when a request comes to the Spyne WSGI app I have no Flask application context. I need advice on how to deal with it, please. | Spyne with Flask application context | 1.2 | 0 | 0 | 1,738
19,357,726 | 2013-10-14T10:00:00.000 | 0 | 0 | 1 | 0 | python-2.7,maya,mel | 19,367,872 | 1 | false | 0 | 0 | In the script directory in your local Maya profile, or the startup directory in {MAYA_INSTALL_DIR}\scripts, add a file my_custom_script.mel which contains:
source "/path/to/custom/shelves/script/in/non/standard/location.mel"; | 1 | 1 | 0 | I want to load a set of custom shelves from a non-standard location without changing the Maya.env file. Is this possible? Ideally the solution would be in Python but mel is fine too. | Load Maya shelves from a non-standard location without editing Maya.env | 0 | 0 | 0 | 1,111 |
19,360,323 | 2013-10-14T12:31:00.000 | 0 | 0 | 0 | 0 | python,http,web,web.py | 19,360,420 | 2 | false | 1 | 0 | which is not what we want:
http://google.com
So why do you then redirect to www.google.com instead of http://google.com? | 1 | 0 | 0 | The functions seeother() and redirect() in web.py are no use. I tried to use
web.header('Location', 'www.google.com')
web.header('status', '301')
or
web.HTTPError('301', {'Location': 'www.google.com'})
but it still redirects to:
http://127.0.0.1:80/www.google.com
which is not what we want:
http://google.com
How to redirect correctly? | In web.py, How to 301 redirect to another domain? | 0 | 0 | 1 | 384 |
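The fix the answer is hinting at is to give a full absolute URL, scheme included; a web.py sketch:

import web

class RedirectHandler(object):
    def GET(self):
        # Without 'http://', web.py treats the target as a relative path,
        # producing http://127.0.0.1/www.google.com.
        raise web.redirect('http://google.com')   # 301 Moved Permanently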
19,361,740 | 2013-10-14T13:46:00.000 | 2 | 1 | 0 | 1 | python,linux,process | 19,361,844 | 1 | false | 0 | 0 | The information is lost when a process-in-the-middle terminates. So in your situation there is no way to find this out.
You can, of course, invent your own infrastructure to store this information at forking time. The middle process (PID 3 in your example) can of course record which child PIDs it created (e.g. in a file, or by reporting back to the parent process (PID 2 in your example) via pipes or similar). | 1 | 1 | 0 | How can I find a child process's PID after the parent process has died?
I have a program that creates a child process that continues running after it (the parent) terminates.
i.e.,
I run a program from python script (PID = 2).
The script calls program P (PID = 3, PPID = 2)
P calls fork(), and now I have another instance of P named P` (PID = 4 and PPID = 3).
After P terminates P` PID is 4 and PPID is 1.
Assuming that I have the PID of P (3), how can I find the PID of the child P`?
Thanks. | How to find orphan process's pid | 0.379949 | 0 | 0 | 947 |
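A sketch of the "record it at forking time" idea from the answer (the pid-file path is arbitrary):

import os
import time

pid = os.fork()
if pid == 0:
    # Child (P` in the example): keeps running after the parent exits.
    time.sleep(60)
else:
    # Parent (P): record the child's PID somewhere the original script
    # (PID 2) can read it later.
    with open('/tmp/p_child.pid', 'w') as f:
        f.write(str(pid))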
19,363,207 | 2013-10-14T15:00:00.000 | 0 | 0 | 1 | 1 | python,merge,doc | 19,363,269 | 2 | false | 0 | 0 | You can open and read the contents of each file and write them into a separate file, using ordinary file I/O functions. (Note, though, that naively concatenating .doc files generally won't produce a valid document, since .doc is a binary format.) | 1 | 1 | 0 | In Python on Linux, I would like to merge several .doc files into one .doc file. (The .doc file will be opened on Windows machines.)
I know that this feature works for PDF in Ghostscript, but now it also needs to work for .doc files.
Does somebody have suggestions on how to solve this issue? | Python merging doc files into 1 doc file | 0 | 0 | 0 | 999
19,366,605 | 2013-10-14T18:21:00.000 | 5 | 0 | 0 | 0 | python,sql,sqlalchemy,relationship | 19,369,883 | 1 | true | 0 | 0 | It doesn't do anything at the database level, it's purely for convenience. Defining a relationship lets SQLAlchemy know how to automatically query for the related object, rather than you having to manually use the foreign key. SQLAlchemy will also do other high level management such as allowing assignment of objects and cascading changes. | 1 | 1 | 0 | I understand that ForeignKey constrains a column to be an id value contained in another table so that entries in two different tables can be easily linked, but I do not understand the behavior of relationship(). As far as I can tell, the primary effect of declaring a relationship between Parent and Child classes is that parentobject.child will now reference the entries linked to the parentobject in the children table. What other effects does declaring a relationship have? How does declaring a relationship change the behavior of the SQL database or how SQLAlchemy interacts with the database? | SQLAlchemy Relationships | 1.2 | 0 | 0 | 251
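A minimal declarative sketch showing which line does what (table and class names are illustrative):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parents'
    id = Column(Integer, primary_key=True)
    # Pure ORM convenience: no DDL is emitted for this line.
    children = relationship('Child', backref='parent')

class Child(Base):
    __tablename__ = 'children'
    id = Column(Integer, primary_key=True)
    # This is what actually constrains the database.
    parent_id = Column(Integer, ForeignKey('parents.id'))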
19,370,230 | 2013-10-14T22:20:00.000 | 0 | 0 | 0 | 0 | python,cython | 28,981,092 | 2 | false | 0 | 1 | I know this is an old question, but after my own recent struggles with Cython I thought I'd post an answer for the sake of posterity.
It seems to me you could use a copy constructor to create a new PyCluster object from an existing Cluster object.
Define the copy constructor in your C code, then call the copy constructor in the Python class definition (in this case, when a pointer is passed) using new. This will work, although it may not be the best or most performant solution. | 1 | 0 | 0 | I'm trying to wrap two C++ classes: Cluster and ClusterTree. ClusterTree has a method get_current_cluster() that instantiates a Cluster object, and returns a reference to it. ClusterTree owns the Cluster object, and manages its creation and deletion in C++.
I've wrapped Cluster with Cython, resulting in PyCluster.
PyCluster should have two ways of being created:
1) By passing in two arrays, which implies that Python should then automatically handle deletion (via __dealloc__)
2) By directly passing in a raw C++ pointer (created by ClusterTree's get_current_cluster()). In this case, ClusterTree then assumes responsibility of deleting the underlying pointer.
from libcpp cimport bool
from libcpp.vector cimport vector

cdef extern from "../include/Cluster.h" namespace "Terran":
    cdef cppclass Cluster:
        Cluster(vector[vector[double]],vector[int]) except +

cdef class PyCluster:
    cdef Cluster* __thisptr
    __autoDelete = True

    def __cinit__(self, vector[vector[double]] data, vector[int] period):
        self.__thisptr = new Cluster(data, period)

    @classmethod
    def __constructFromRawPointer(self, raw_ptr):
        self.__thisptr = raw_ptr
        self.__autoDelete = False

    def __dealloc__(self):
        if self.__autoDelete:
            del self.__thisptr

cdef extern from "../include/ClusterTree.h" namespace "Terran":
    cdef cppclass ClusterTree:
        ClusterTree(vector[vector[double]],vector[int]) except +
        Cluster& getCurrentCluster()

cdef class PyClusterTree:
    cdef ClusterTree *__thisptr

    def __cinit__(self, vector[vector[double]] data, vector[int] period):
        self.__thisptr = new ClusterTree(data,period)

    def __dealloc__(self):
        del self.__thisptr

    def get_current_cluster(self):
        cdef Cluster* ptr = &(self.__thisptr.getCurrentCluster())
        return PyCluster.__constructFromRawPointer(ptr)
This results in:
Error compiling Cython file:
------------------------------------------------------------
...
def get_current_cluster(self):
cdef Cluster* ptr = &(self.__thisptr.getCurrentCluster())
return PyCluster.__constructFromRawPointer(ptr)
^
------------------------------------------------------------
terran.pyx:111:54: Cannot convert 'Cluster *' to Python object
Note I cannot cdef __init__ or @classmethods. | Cython: overloaded constructor initialization using raw pointer | 0 | 0 | 0 | 857 |
19,370,567 | 2013-10-14T22:54:00.000 | 0 | 0 | 0 | 0 | python,pygame | 20,135,822 | 1 | false | 0 | 0 | You could load it all as one image and then draw the different segments onto the screen separately, or even assign the segments to variables. This would effectively give you multiple images. | 1 | 0 | 0 | I have an image file that has multiple different images on it. I was wondering how to make it so that I can load the individual images from the single one instead of breaking each into its own thing. Sorry if I couldn't clarify what I am trying to ask. | How do I load a single image with multiple images on it? | 0 | 0 | 0 | 189
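A sketch of that segment idea with pygame (the file name and rectangles are hypothetical):

import pygame

sheet = pygame.image.load('spritesheet.png')
# A segment is just a sub-surface defined by a Rect on the big image:
icon = sheet.subsurface(pygame.Rect(0, 0, 32, 32))
# Or blit a region of the sheet directly onto the screen:
# screen.blit(sheet, (10, 10), area=pygame.Rect(32, 0, 32, 32))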
19,373,289 | 2013-10-15T04:26:00.000 | 0 | 0 | 0 | 0 | python,mysql,apache-storm | 20,010,872 | 1 | false | 1 | 0 | Is it not possible for you to remove the 'IsNull' constraint from your MySQL database? I'm not aware of any case where this is not possible. Otherwise you could set a default string which represents a null value. | 1 | 1 | 0 | I'm just curious if there's a way to make the 'no default value' warning I get from Storm go away. I have an insert trigger in MySQL that handles these fields and everything is functioning as expected, so I just want to remove this unnecessary information. I tried setting the default value to None but that causes an error because the fields do not allow nulls. So how do I make the warning go away? | How can I avoid "Warning: Field 'xxx' doesn't have a default value" in Storm? | 0 | 1 | 0 | 770
19,374,254 | 2013-10-15T06:02:00.000 | 7 | 0 | 1 | 0 | python,constants,nan | 19,374,300 | 6 | false | 0 | 0 | You can do float('nan') to get NaN. | 1 | 116 | 1 | Most languages have a NaN constant you can use to assign a variable the value NaN. Can Python do this without using NumPy? | Assigning a variable NaN in python without numpy | 1 | 0 | 0 | 180,714 |
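A short demonstration; note that NaN never compares equal to anything, including itself, so math.isnan() is the reliable test:

import math

nan = float("nan")        # works on any Python version, no NumPy needed
print(nan)                # nan
print(nan == nan)         # False: NaN is never equal to itself
print(math.isnan(nan))    # True: the correct way to test for NaN
# Python 3.5+ also ships a ready-made constant:
# nan = math.nan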
19,377,427 | 2013-10-15T09:15:00.000 | 6 | 1 | 0 | 0 | python,pdf,document | 19,401,254 | 1 | true | 1 | 0 | Page headers and footers are not (at least not necessarily) located in some content part separate from the rest of the page content. Thus, in general there is no way to reliably extract headers and footers from PDFs.
It is possible, though, to use heuristics that look at the whole page content and try to guess which parts are headers and/or footers.
If the PDFs you want to analyze are fairly homogeneous, e.g. all produced by the same publisher and looking alike, this might be feasible. The more diverse your source PDFs are, though, the more complex your heuristics will likely become and the less accurate the results will be. | 1 | 4 | 0 | Is it possible to extract the header and/or footer from a PDF document?
Having tried a few options (PDFMiner, the Ruby gem pdf-extract, studying the PDF format specs), I'm starting to suspect that the header/footer information is simply not available.
(I would like to do this from Python, if possible, but any other alternative is viable.) | Extract header/footer from PDF (programmatically) | 1.2 | 0 | 0 | 4,166 |
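A sketch of such a positional heuristic using the pdfminer.six fork (which postdates this question); it treats anything in the top or bottom 10% of the page as a header or footer, an arbitrary threshold you would tune per corpus:

from pdfminer.high_level import extract_pages
from pdfminer.layout import LTTextContainer

def guess_headers_footers(path, margin=0.10):
    """Yield (page_no, kind, text) for text boxes near the page edges."""
    for page_no, page in enumerate(extract_pages(path), start=1):
        for element in page:
            if not isinstance(element, LTTextContainer):
                continue
            # PDF y-coordinates grow upward: y0/y1 are the box bottom/top.
            if element.y0 >= page.height * (1 - margin):
                yield page_no, "header", element.get_text().strip()
            elif element.y1 <= page.height * margin:
                yield page_no, "footer", element.get_text().strip()

for hit in guess_headers_footers("document.pdf"):  # hypothetical file
    print(hit)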
19,378,143 | 2013-10-15T09:52:00.000 | 0 | 0 | 1 | 0 | python | 19,378,238 | 4 | false | 0 | 0 | If you know one structure is always a subset of the other, then just iterate the superset: in O(n) time you can check element by element whether each item exists in the subset and, if it doesn't, put it there. As far as I know there's no magical way of doing this other than checking manually, element by element; as I said, that's not bad, since it can be done with O(n) complexity (see the sketch after this record). | 1 | 3 | 1 | I am looking to efficiently merge two (fairly arbitrary) data structures: one representing a set of default values and one representing overrides. Example data below. (Naively iterating over the structures works, but is very slow.) Thoughts on the best approach for handling this case?
_DEFAULT = { 'A': 1122, 'B': 1133, 'C': [ 9988, { 'E': [ { 'F': 6666, }, ], }, ], }
_OVERRIDE1 = { 'B': 1234, 'C': [ 9876, { 'D': 2345, 'E': [ { 'F': 6789, 'G': 9876, }, 1357, ], }, ], }
_ANSWER1 = { 'A': 1122, 'B': 1234, 'C': [ 9876, { 'D': 2345, 'E': [ { 'F': 6789, 'G': 9876, }, 1357, ], }, ], }
_OVERRIDE2 = { 'C': [ 6543, { 'E': [ { 'G': 9876, }, ], }, ], }
_ANSWER2 = { 'A': 1122, 'B': 1133, 'C': [ 6543, { 'E': [ { 'F': 6666, 'G': 9876, }, ], }, ], }
_OVERRIDE3 = { 'B': 3456, 'C': [ 1357, { 'D': 4567, 'E': [ { 'F': 6677, 'G': 9876, }, 2468, ], }, ], }
_ANSWER3 = { 'A': 1122, 'B': 3456, 'C': [ 1357, { 'D': 4567, 'E': [ { 'F': 6677, 'G': 9876, }, 2468, ], }, ], }
This is an example of how to run the tests:
(The dictionary update doesn't work; it's just a stub function.)
import itertools

# Presumably defined alongside the data above:
_OVERRIDES = [_OVERRIDE1, _OVERRIDE2, _OVERRIDE3]
_ANSWERS = [_ANSWER1, _ANSWER2, _ANSWER3]

def mergeStuff(default, override):
    # This doesn't work: dict.update() only replaces top-level keys.
    result = dict(default)
    result.update(override)
    return result

def main():
    for override, answer in itertools.izip(_OVERRIDES, _ANSWERS):  # Python 2
        result = mergeStuff(_DEFAULT, override)
        print('ANSWER: %s' % (answer))
        print('RESULT: %s\n' % (result)) | Python: Merging two arbitrary data structures | 0 | 0 | 0 | 1,800 |
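A recursive, element-by-element merge in the spirit of this answer; the sketch assumes dicts merge key-wise, lists merge position-wise (extra override elements are kept), and any other override value simply wins. Under those assumptions it reproduces _ANSWER1 through _ANSWER3 for the data above:

def merge(default, override):
    if isinstance(default, dict) and isinstance(override, dict):
        # Key-wise: keys only in default survive; shared keys recurse.
        merged = dict(default)
        for key, value in override.items():
            merged[key] = merge(default[key], value) if key in default else value
        return merged
    if isinstance(default, list) and isinstance(override, list):
        # Position-wise: recurse where both lists have an element.
        merged = list(override)
        for i in range(min(len(default), len(override))):
            merged[i] = merge(default[i], override[i])
        return merged
    # Scalars (or mismatched types): the override wins.
    return override

assert merge(_DEFAULT, _OVERRIDE1) == _ANSWER1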