Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
25,440,006 | 2014-08-22T05:17:00.000 | 3 | 0 | 1 | 0 | python,matplotlib | 25,442,617 | 3 | true | 0 | 0 | There are a few possibilities
If your remote machine is somehow Unix-ish, you may use the X Window System (your session then runs on the remote machine and displays on the local machine)
mpld3
bokeh and iPython notebook
nbagg backend of matplotlib.
Alternative #1 requires you to have an X server on your machine and a connection between the two machines (possibly tunneled through ssh, etc.). So this is OS-dependent, and the performance depends on the connection between the two machines.
Alternatives #2 and #3 are very new but promising. They take quite different approaches: mpld3 enables the use of standard matplotlib plotting commands, but with large datasets bokeh may be more useful.
Alternative #4 is probably the ultimate solution (see tcaswell's comments), but not yet available without using a development version of matplotlib (i.e. there may be some installation challenges). On the other hand, if you can hold your breath for a week, 1.4.0 will be out. | 2 | 3 | 0 | I am running IPython on a remote server. I access it using serveraddress:8888/ etc. to write code for my notebooks.
When I use matplotlib, the plots are of course inline. Is there any way to remotely send data so that a plot window opens up? I want the whole interactive matplotlib environment on my local machine and all the number crunching on the server machine. This is something very basic... but somehow, after rummaging through Google for quite a while, I can't figure it out. | how to display matplotlib plots on local machine? | 1.2 | 0 | 0 | 2,520 |
25,440,747 | 2014-08-22T06:22:00.000 | 1 | 0 | 1 | 1 | python,vb.net,arguments | 25,442,205 | 2 | true | 0 | 0 | If you don't have the source to the Python exe converter and if the arguments don't need to change on each execution, you could probably open the exe in a debugger like OllyDbg, search for ShellExecute or CreateProcess, and then create a string in a code cave and use that for the arguments. I think that's your only option.
Another idea: maybe make your own extractor that includes the Python script, VBScript, and Python interpreter. You could just use a 7-Zip SFX or something. | 1 | 1 | 0 | I have converted a Python script to a .exe file. I just want to run the exe file from a VB script. Now the problem is that the Python script accepts arguments at run-time (e.g. serial port number, baud rate, etc.) and I cannot do the same with the .exe file. Can someone tell me how to proceed? | Running an .exe script from a VB script by passing arguments during runtime | 1.2 | 0 | 0 | 185 |
25,445,041 | 2014-08-22T10:35:00.000 | 1 | 0 | 0 | 0 | python,django,apache,web-applications,desktop-application | 25,447,817 | 1 | false | 1 | 0 | This question is a bit vague: some specifics, or even some code, would have helped.
There are two separate differences between running this as a desktop app and running it on the web. Lack of state is one issue, but it seems like the much more significant difference is the per-user configuration. You need some way of storing that configuration for each user.
Where you put that depends on how persistent you want it to be: the session is great for things that need to persist for an actual session, i.e. the time the user is actively using the app at one go, but that don't necessarily need to persist beyond that, or if the user logs in from a new machine. Otherwise, storing it explicitly in the database attached to the user record is a good way to go.
In terms of "what happens between requests", the answer as Bruno points out is "nothing". Each request is really a blank state: the only way to keep state is with a cookie, which is abstracted by the session framework. What you definitely don't want to do is try to keep any kind of global or module-level per-user state in your app, as that can't work: any state really is global, so applies to all users, and therefore is clearly not what you want here. | 1 | 0 | 0 | I'm confused as to what considerations should be taken into account when converting a desktop application into a web app. We have a desktop app written in python using the wxPython library for the GUI and its a very traditional application which sets up the main window and calls the app.Mainloop() method to sustain the GUI and respond to events. The application itself is a configuration utility that simply accepts file(s) and allows the user to configure it. Naturally, the program keeps track of all the changes made and responds to events in light of those changes.
I intend to serve this utility as part of a larger application using the Django framework hosted on an Apache server, and I expect many users to use it simultaneously. Once I remove the app.MainLoop() directive, as expected, running the app simply goes through the code once and exits. This is obviously not what I need; the application needs to remember its state.
So far, I've started to identify and decouple all GUI code from the core of the application so I can effectively reuse the core, and I've decided to write the UI in JavaScript using frameworks such as jQuery to handle GUI events and such. Two obvious options for storing the state would be sessions and databases, but I'm somewhat stuck while forming a big picture of how this will all work. What will happen between requests in terms of Django views? I would really appreciate it if someone could shed some light on the overall workflow.
Thank you. | State considerations when converting a python desktop application into a web app? | 0.197375 | 0 | 0 | 84 |
25,446,832 | 2014-08-22T12:15:00.000 | 2 | 0 | 0 | 0 | python,django | 25,447,136 | 1 | true | 1 | 0 | It's quite simple actually:
in settings.py, let's say your logger is based on a handler whose formatter is named 'simple'.
'formatters': {
...
'simple': {
'format': '%(asctime)s %(message).150s'
},
...
},
The message will now be truncated to the first 150 characters. Playing with handlers will allow you to specify this parameter per logger. Thanks Python! | 1 | 2 | 0 | Logging sql queries is useful for debugging, but in some cases it's useless to log the whole query, especially for big inserts. In this case, displaying only the first N characters would be enough.
Is there a simple way to truncate sql queries when they are logged? | Truncate logging of sql queries in Django | 1.2 | 1 | 0 | 142 |
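The precision specifier in the answer above is standard printf-style formatting from Python's logging module, not something Django-specific, so the behaviour can be checked in isolation (a standalone sketch, with the width shortened to 20 so the truncation is easy to see):

```python
import logging

# ".20s" is printf-style precision: it truncates the message to 20
# characters, exactly as "%(message).150s" does in the LOGGING config above.
formatter = logging.Formatter('%(message).20s')

handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger('sql_demo')
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

long_query = 'INSERT INTO big_table (col) VALUES ' + '(1), ' * 200
logger.debug(long_query)  # emits only the first 20 characters
```

Any handler whose formatter uses this pattern truncates its output, so, as the answer notes, the behaviour can be enabled per logger by attaching different handlers.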
25,447,592 | 2014-08-22T12:56:00.000 | 1 | 0 | 1 | 0 | python,module | 25,447,713 | 2 | false | 0 | 0 | Import re in each script (m and n). Then the scripts can be relocated to another package (if e.g. you refactor your code), and it is clearer from within the file what re is / where it is coming from etc. | 2 | 1 | 0 | For example, module m in my package p uses re. I write import re in m.py. Some other module n also uses re. Do I write import re twice or include import re once in my __init__.py?
What's the convention for writing packages that include external modules? | How to import external modules in a Python package? | 0.099668 | 0 | 0 | 240 |
25,447,592 | 2014-08-22T12:56:00.000 | 1 | 0 | 1 | 0 | python,module | 25,448,106 | 2 | false | 0 | 0 | Import a module in every other module that uses it.
Python is intelligent about how that happens under the covers. You can easily get into a Python form of DLL hell if you rely on indirect imports of modules. Fortunately the indirect approaches are harder than the direct approaches :) so most people do the right thing somewhat naturally. | 2 | 1 | 0 | For example, module m in my package p uses re. I write import re in m.py. Some other module n also uses re. Do I write import re twice or include import re once in my __init__.py?
What's the convention for writing packages that include external modules? | How to import external modules in a Python package? | 0.099668 | 0 | 0 | 240 |
25,450,885 | 2014-08-22T15:47:00.000 | -3 | 0 | 1 | 0 | python,dictionary,lazy-loading,descriptor | 25,451,044 | 3 | false | 0 | 0 | As you have the values as empty list, you can check for the first time if d[key] is empty list, then use a function to get appropriate values. If there is a chance that the value returned from the function be an empty list, you might want to use another dictionary to track which keys have been accessed so far. | 1 | 0 | 0 | I am implementing some sort of lazy loading. I have a dict, with each key's value being an empty list. When I access a key, I need to run some loading logic to populate the corresponding list with some values.
So, when I create the dictionary (let's name it d) there are some keys created: a:[], b:[]. Now, when I access a key in the dictionary: d['a'] I need to run some logic that basically returns a computed list (let's say [1, 2, 3]) and the d dict becomes: a: [1, 2, 3], b:[].
I hope I explained things well enough. To my knowledge, this is something similar to descriptors in Python, since you can attach custom logic to an attribute. But what I need here is attaching custom logic to a dict's keys, which surely doesn't work with descriptors.
Is there a way of doing this? Or maybe I could use descriptors but in some other manner? | Descriptor behavior when accessing a dict's key | -0.197375 | 0 | 0 | 166 |
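One way to attach that per-key logic is a dict subclass, sketched below; load_values is a hypothetical stand-in for the real loading logic, and this is an illustration of the idea rather than the only approach:

```python
class LazyDict(dict):
    """A dict that populates a key's empty list on first access."""

    def __init__(self, loader, keys):
        # Start every key off with an empty list, as in the question.
        super(LazyDict, self).__init__((k, []) for k in keys)
        self._loader = loader   # callable: key -> list of computed values
        self._loaded = set()    # keys whose lists are already populated

    def __getitem__(self, key):
        if key in self and key not in self._loaded:
            dict.__getitem__(self, key).extend(self._loader(key))
            self._loaded.add(key)
        return dict.__getitem__(self, key)


def load_values(key):            # hypothetical loading logic
    return [1, 2, 3]

d = LazyDict(load_values, ['a', 'b'])
print(dict.__getitem__(d, 'a'))  # [] -- nothing loaded yet
print(d['a'])                    # [1, 2, 3] -- loaded on first access
print(d['a'])                    # [1, 2, 3] -- the loader is not run again
```

Tracking already-loaded keys in a separate set also covers the case the first answer mentions, where the loader itself may legitimately return an empty list.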
25,451,876 | 2014-08-22T16:44:00.000 | 3 | 0 | 0 | 0 | python,alfresco,cmis,opencmis | 25,561,736 | 2 | false | 0 | 0 | As Gagravarr says you are going to have to break inheritance on the documentLibrary folder to get it to work like you want. Breaking inheritance is not supported by CMIS so you'll have to write your own web script to do that.
I would manually set the permissions until it works like you want it to, then once you get that working, write a web script that puts it into effect for all of your sites. | 1 | 0 | 0 | I'd like to build a site which contains a few folders for different teams. However, there is one team that is common to one folder on all sites. I do not want that team to be allowed to see the content of the other folders. I tried creating a folder in a site and giving permission to a user via CMIS (in python), however that folder doesn't seem to be accessible from their share UI.
I'm not even sure this is the best way to do this. The organisation of the information requires that the areas are in the same place (i.e. the same site); however, if you have access to the site you seem to have access to all the folders (I can't figure out a way of removing access to a folder on a site for a single user).
Also, the requirement here is that it needs to be done programmatically; I'm not particularly bothered about using CMIS, and I don't mind if I have to rewrite the file/folder code, but in my head the best thing to do would be to add a widget on the Share UI that accesses all the folders a user has access to, in the absence of being able to deny access to a folder. | How do I generate alfresco sites where different users can see different content? | 0.291313 | 0 | 0 | 174 |
25,457,025 | 2014-08-22T23:42:00.000 | 2 | 0 | 1 | 0 | python,jsonschema,json-schema-validator | 25,594,732 | 1 | true | 0 | 0 | No. jsonschema operates on deserialized JSON (== Python objects), not strings. So the way it works is quite simple: each type name is mapped to a set of valid Python types, and validating that a thing is of the correct type is just an isinstance check.
You're correct that DEFAULT_TYPES is the default mapping used for that. | 1 | 2 | 0 | How does jsonschema work?
My assumption is that they convert the raw JSON strings they see into the Python type that is listed in, say, jsonschema.Draft4Validator.DEFAULT_TYPES and see if it can be converted. If the conversion is successful, then validation proceeds.
If that's the case, each of the types in python in DEFAULT_TYPES must have a "from string" method that converts a string to that type.
Is my understanding of jsonschema correct? | How does jsonschema map the raw json string value to python objects? | 1.2 | 0 | 0 | 437 |
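The accepted answer's point can be illustrated with a simplified sketch (this mirrors the idea, not jsonschema's actual source): the data is deserialized first, and type validation is just an isinstance check against a name-to-types mapping in the spirit of DEFAULT_TYPES, with no "from string" conversion anywhere:

```python
import json

# Simplified stand-in for jsonschema's type map; the real DEFAULT_TYPES
# also deals with bool-vs-integer subtleties that this sketch ignores.
TYPE_MAP = {
    'object':  (dict,),
    'array':   (list,),
    'string':  (str,),
    'number':  (int, float),
    'boolean': (bool,),
    'null':    (type(None),),
}

def is_valid_type(instance, type_name):
    # The value is already a Python object -- no string parsing involved.
    return isinstance(instance, TYPE_MAP[type_name])

data = json.loads('{"age": 30}')             # deserialization happens first
print(is_valid_type(data, 'object'))         # True
print(is_valid_type(data['age'], 'number'))  # True
print(is_valid_type(data['age'], 'string'))  # False
```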
25,457,217 | 2014-08-23T00:12:00.000 | 0 | 0 | 1 | 0 | python,variables | 25,459,233 | 2 | false | 0 | 0 | An easy(?) way to do this is to use 'ctrl f' and search for the variables, this gives them one at a time, though, but you have to hit 'ctrl f' again every time you want to go to the next matching result which is a pain.
Another solution would be PyScripter (or a similar IDE). I used to use PyScripter when coding Python (not anymore); it isn't great and it crashes often, but it does have the built-in ability to highlight everything else with the same name when you double-click on a variable, which is what I think you are after. I miss this as well; it is very handy.
My suggestion is that you could download PyScripter, and if you need to find all uses of one variable, copy your code into PyScripter and use it for this only.
Hope this helps you. good luck | 2 | 2 | 0 | I was wondering if in python there is some way to find where else in your code you use a certain variable. MATLAB has a feature like this and it has been very helpful when having to make changes to every usage a variable. | Python Find Where Else Variable Is Used In Own Code | 0 | 0 | 0 | 81 |
25,457,217 | 2014-08-23T00:12:00.000 | 4 | 0 | 1 | 0 | python,variables | 25,457,349 | 2 | true | 0 | 0 | This isn't a language feature, it's an editor, or IDE feature. I prefer to use grep or other file-searching tools to answer these kinds of questions. | 2 | 2 | 0 | I was wondering if in python there is some way to find where else in your code you use a certain variable. MATLAB has a feature like this and it has been very helpful when having to make changes to every usage a variable. | Python Find Where Else Variable Is Used In Own Code | 1.2 | 0 | 0 | 81 |
25,457,358 | 2014-08-23T00:35:00.000 | 9 | 0 | 1 | 0 | python,cpython,python-internals | 25,457,580 | 2 | false | 0 | 0 | Python's stack frames are allocated on the heap. But they are linked one to another to form a stack. When function a calls function b, the b stack frame points to the a stack frame as the next frame (technically, a is the f_back attribute of the b frame.)
Having stack frames allocated on the heap is what makes generators possible: when a generator yields a value, rather than discarding its stack frame, it's simply removed from the linked list of current stack frames, and saved off to the side. Then when the generator needs to resume, its stack frame is relinked into the stack, and its execution continues. | 1 | 5 | 0 | What do we call "stack" in Python? Is it the C stack of CPython? I read that Python stackframes are allocated in a heap. But I thought the goal of a stack was... to stack stackframes. What does the stack do then? | What is the stack in Python? | 1 | 0 | 0 | 743 |
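Both halves of this answer can be observed from Python itself; the sketch below leans on CPython implementation details (sys._getframe, f_back, gi_frame):

```python
import sys

def outer():
    return sys._getframe()       # the heap-allocated frame for this call

frame = outer()
print(frame.f_code.co_name)      # 'outer'
print(frame.f_back is not None)  # True: f_back links frames into a stack

def counter():
    yield 1
    yield 2

g = counter()
print(next(g))                   # 1 -- runs to the first yield, then suspends
print(g.gi_frame is not None)    # True: the frame is saved off to the side
print(next(g))                   # 2 -- the frame is relinked and resumed
for _ in g:                      # exhaust the generator
    pass
print(g.gi_frame)                # None -- the frame is released at the end
```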
25,459,285 | 2014-08-23T06:55:00.000 | 0 | 0 | 0 | 0 | python,kivy | 25,461,440 | 1 | true | 0 | 1 | There isn't a property that lets you simply do this - the transition is a property of the screenmanager, not of the screen.
You could add your own screen change method for the screenmanager that knows about the screen names and internally sets the transition. | 1 | 0 | 0 | I have two screens and want to change the SlideTransition from my second to first screen to direction: 'right' while keeping the first to second transition the default. The docs only show how to change the transition for every transition. How would I make a transition unique to one screen, done in the kv file?
Note: I have declared my screen manager screens in the kv file also. | Making unique screen transitions - kivy | 1.2 | 0 | 0 | 249 |
25,461,071 | 2014-08-23T10:44:00.000 | 1 | 0 | 1 | 0 | python,list,file,csv,threshold | 25,461,532 | 2 | false | 0 | 0 | I'm not sure exactly what you are asking, but get_threshold is an arcane detail of CPython's memory management implementation, and isn't useful for any other task. | 1 | 1 | 0 | get_threshold() as given in python document is an object of garbage collection. But is it possible to use the same object to identify the threshold in the list or more simple any CSV? | get_threshold() use other than grabage collection | 0.099668 | 0 | 0 | 53 |
25,464,255 | 2014-08-23T16:56:00.000 | 1 | 1 | 0 | 0 | python,vimeo,vimeo-api | 25,668,644 | 1 | true | 0 | 0 | EDIT: Since this was written the vimeo.py library was rebuilt. This is now as simple as taking the API URI and requesting vc.get('/videos/105113459') and looking for the review link in the response.
The original:
If you know the API URL you want to retrieve this for, you can convert it into a vimeo.py call by replacing the slashes with dots. The issue with this is that in Python, attributes (the things separated by the dots) that are all digits are syntax errors.
With our original rule, if you wanted to see /videos/105113459 in the python library you would do vc.videos.105113459() (if you had vc = vimeo.VimeoClient(<your token and app data>)).
To resolve this you can instead use python's getattr() built-in function to retrieve this. In the end you use getattr(vc.videos, '105113459')() and it will return the result of GET /videos/105113459.
I know it's a bit complicated, but rest assured there are improvements that we're working on to eliminate this common workaround. | 1 | 0 | 0 | How to structure GET 'review link' request from Vimeo API?
I'm new to Python and assume others might benefit from my ignorance.
I'm simply trying to upload via the new vimeo api and return a 'review link'.
Are there current examples of the vimeo-api in Python? I've read the documentation and can upload perfectly fine. However, when it comes to the HTTP GET I can't seem to figure it out. I'm using Python 2.7.5 and have tried the requests library. I'm ready to give up and just go back to PHP because it's documented so much better.
Any python programmers out there familiar? | How to structure get 'review link' request from Vimeo API? | 1.2 | 0 | 1 | 435 |
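The getattr() workaround described above is needed whenever an attribute name is not a valid identifier; a minimal stand-in makes the mechanics clear (no Vimeo client involved -- the Resource class here is invented for illustration):

```python
class Resource(object):
    """Toy stand-in for an API client node such as vc.videos."""

    def __getattr__(self, name):
        # A real client would issue the HTTP GET here; this sketch just
        # returns a callable that reports the path it would request.
        return lambda: '/videos/%s' % name

videos = Resource()

# videos.105113459() would be a SyntaxError, because identifiers can't
# start with a digit -- which is exactly why getattr() is required.
print(getattr(videos, '105113459')())  # /videos/105113459
```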
25,468,191 | 2014-08-24T03:09:00.000 | 1 | 1 | 0 | 0 | python,unicode,utf-8,decode,encode | 25,468,417 | 1 | true | 0 | 0 | They return the same thing because b'\x53' == b'S'. It's the same of other characters in the ASCII table as they're represented by the same bytes.
You're getting a UnicodeDecodeError because you seem to be using the wrong encoding. If I run b'\xf9'.decode('iso-8859-1') I get ù, so it's possible that the encoding is ISO-8859-1.
However, I'm not familiar with the MIDI protocol so you have to review it to see what bytes need to be interpreted as what encoding. If decode all the given bytes as ISO-8859-1 it doesn't give me a meaningful string so it may mean that these bytes stand for something else, not text? | 1 | 0 | 0 | I am dealing with the following string of bytes in Python 3.
b'\xf9', b'\x02', b'\x03', b'\xf0', b'y', b'\x02', b'\x03', b'S', b'\x00', b't', b'\x00', b'a'
This is a very confusing bunch of bytes for me because it is coming from a microcontroller which is emitting information according to the MIDI protocol.
My first question is about the letters near the end. Almost all of the other bytes are true hexadecimal values (i.e. I know the b'\x00' is supposed to be a null character). However, the capital S, which is supposed to be a capital S, appears as such (a b'S'). According to the ASCII / hex charts I have looked at, uppercase S should be \x53 (which is what b'\x53'.decode('utf-8') returns).
However, in Python, when I do b'S'.decode('utf-8') it also returns a capital S (how can it be both?).
Also, some of the bytes (such as b'\xf9') are truly meant to be escaped (which is why they have the \x); however, I am running into issues when trying to decode them. When running [byteString].decode('utf-8') on a longer version of the above string I get the following error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf9 in position 0: invalid start byte
Shouldn't those bytes be skipped over, or printed as-is? Thanks | Issues decoding a Python 3 Bytes object based on MIDI | 1.2 | 0 | 0 | 312 |
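The points in the answer above are easy to verify interactively; 'latin-1' is used below interchangeably with ISO-8859-1 (they are the same single-byte encoding under different names):

```python
# b'S' and b'\x53' are two spellings of the same byte (0x53 == 83),
# which is why decoding either one gives a capital S.
print(b'S' == b'\x53')            # True
print(b'\x53'.decode('utf-8'))    # S

# 0xF9 is not a valid UTF-8 start byte, hence the UnicodeDecodeError...
try:
    b'\xf9'.decode('utf-8')
except UnicodeDecodeError as err:
    print(err.reason)             # invalid start byte

# ...but a single-byte encoding maps every value 0x00-0xFF to a character,
# so it never fails. Whether that is the right interpretation depends on
# the data: MIDI bytes are usually status/data values, not text at all.
print(b'\xf9'.decode('latin-1'))  # ù
```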
25,468,397 | 2014-08-24T03:57:00.000 | 1 | 0 | 1 | 0 | matplotlib,anaconda,python-3.4,pyqt5 | 44,932,618 | 4 | false | 0 | 1 | I use Anaconda and with Python v2.7.X and qt5 doesn't work. The work-around I found was
Tools -> Preferences -> Python console -> External modules -> Library: PySlide | 1 | 8 | 1 | I have an existing PyQt5/Python3.4 application that works great, and would now like to add "real-time" data graphing to it. Since matplotlib installation specifically looks for Python 3.2, and NumPhy / ipython each have there own Python version requirements, I thought I'd use a python distribution to avoid confusion.
But out of all the distros (pythonxy, winpython, canopy epd), Anaconda is the only one that supports Python 3.4; however, it only has PyQt 4.10.4. Is there a way I can install Anaconda and use matplotlib from within my existing PyQt5 GUI app?
Would I be better off just using another charting package (pyqtgraph, pyqwt, guiqwt, chaco, etc) that might work out of the box with PyQt5/Python3.4? | Using Anaconda Python 3.4 with PyQt5 | 0.049958 | 0 | 0 | 15,580 |
25,469,151 | 2014-08-24T06:38:00.000 | 0 | 0 | 0 | 0 | android,python,local-storage,updates,kivy | 25,471,205 | 1 | false | 0 | 1 | There is not a simple way (as in a build hook or similar) right now, but it's something we've specifically discussed in the last few days as the current situation has become a direct problem. I'm not sure what the resolution was, but there will probably be a change in python-for-android to fix it fairly soon.
If you want to keep up to date with this, ask on the kivy mailing list or irc. In particular, knapper_tech was making these changes. | 1 | 0 | 0 | I've got an Android app written in Kivy (Python), which stores local files that should survive an app update (adb install -r).
If the files are stored in a subdirectory of the current directory ("data/data/app_name/files"), I see that they are deleted after an update.
However, after some experiments, I could "solve" this by storing the files in the "data/data/app_name/shared_prefs" directory, which seems to be persistent across updates. By the way, I didn't check, but maybe "data/data/app_name/databases" is too.
Is there a cleaner way of doing things?
I need to test if I can create a new folder not called shared_prefs nor databases under "data/data/app_name", and if it is persistent.
(this seems kind of a hack because those directories have another dedicated purpose, even though my app is not using them for this dedicated purpose right now)
(NB: I don't want to keep the files outside the app private directory) | Kivy on Android : keep local storage files after app updates | 0 | 0 | 0 | 1,291 |
25,471,026 | 2014-08-24T11:02:00.000 | -1 | 0 | 1 | 0 | python,data-structures,pickle | 25,471,783 | 5 | false | 0 | 0 | Since it is a dictionary, you can convert it to a list of key value pairs ([(k, v)]). You can then serialize each tuple into a string with whatever technology you'd like (like pickle), and store them onto a file line by line. This way, parallelizing processes, checking the file's content etc. is also easier.
There are libraries that allows you to stream with single objects, but IMO it just makes it more complicated. Just storing it line by line removes so much headache. | 1 | 4 | 0 | I'm currently doing a project in python that uses dictionaries that are relatively big (around 800 MB). I tried to store one of this dictionaries by using pickle, but got an MemoryError.
What is the proper way to save this kind of file in python? Should I use a database? | Python: storing big data structures | -0.039979 | 0 | 0 | 13,056 |
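The line-by-line idea from the first answer can be sketched with json for readability (pickling each pair per line works the same way); peak memory then tracks the largest single entry rather than the whole 800 MB structure:

```python
import json
import os
import tempfile

def dump_dict(d, path):
    # One serialized [key, value] pair per line.
    with open(path, 'w') as f:
        for k, v in d.items():
            f.write(json.dumps([k, v]) + '\n')

def load_dict(path):
    with open(path) as f:
        return dict(json.loads(line) for line in f)

big = {'key%d' % i: list(range(5)) for i in range(1000)}
path = os.path.join(tempfile.mkdtemp(), 'big_dict.jsonl')
dump_dict(big, path)
print(load_dict(path) == big)  # True
```

This also makes it easy to process the file in parallel or inspect it with ordinary text tools, as the answer notes; for random access without loading everything, the stdlib shelve module is another option.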
25,472,840 | 2014-08-24T14:42:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,osx-mavericks,ipython-notebook,anaconda | 25,669,586 | 4 | false | 0 | 0 | Use Anaconda full suite , that include installing all the tools and necessary packages ,it works fine for me , I didn't use the Launcher ! | 1 | 2 | 0 | I've installed anaconda on mavericks osx. When I'm trying to install ipython notebook from launcher app - it shows message that app is installing, but nothing happens after. Also links in launcher don't work and I can easily start ipython notebook from terminal. So I guess something wrong with launcher itself.
How can I fix it? | anaconda launcher links don't work | 0 | 0 | 0 | 14,710 |
25,473,736 | 2014-08-24T16:20:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 25,474,076 | 1 | false | 0 | 0 | I can't quite understand what happened. Inside a comment in another file there was the word "Track.py" (originally just "Track" but PyCharm apparently added the ".py")
This caused an error for an inexplicable reason. Something relating to word occurrences in other files must have been the issue. After removing the ".py" from the comment refactoring (renaming) worked. And now the file is named "Track" like it should be.
Hopefully this is helpful! | 1 | 1 | 0 | I have created a file with a regular name "Track", but somehow in the rush there must have happened some error. I didn't bother reading it in the console, instead quickly did a safe delete (with usage search) and then recreated it.
Now that the file is back, PyCharm won't accept it as a Python file (even though it has the .py ending). Completion and highlighting do not work. Other file names, however, work.
Recreating it again without safe delete doesn't change anything. How can I make PyCharm accept the file like all the other Python files? | PyCharm won't accept file name | 0 | 0 | 0 | 81 |
25,473,948 | 2014-08-24T16:43:00.000 | 4 | 0 | 1 | 0 | python,debugging,pycharm | 25,474,067 | 1 | true | 0 | 0 | The simplest solution would be setting a breakpoint just before/at the wanted line so you can see the variables just before the exception is called.
It's also possible to set an "exception breakpoint". This stops the script when a specific exception is encountered. Open "Run" > "View Breakpoints", click on the "+" sign and add a "Python Exception Breakpoint". Now you have to choose a specific exception. | 1 | 2 | 0 | When a script ends with an error during debugging, PyCharm got disconnected from pydev. So, the state of variable at the moment of the error remains unknown.
How can I find out the very latest values of variables right before the error happens?
Updated: This problem arises only when you try to debug a unit test. You have to check "Activation policy: On raise" for "All Breakpoints" in the list of breakpoints. | How to print Python variables after an error in PyCharm? | 1.2 | 0 | 0 | 2,420 |
25,475,906 | 2014-08-24T20:15:00.000 | 1 | 0 | 0 | 1 | python,bash,shell,ulimit | 25,475,976 | 3 | false | 0 | 0 | I'm guessing your problem is that you haven't understood that rlimits are set per process. If you use os.system in Python to call ulimit, that is only going to set the ulimit in that newly spawned shell process, which then immediately exits after which nothing has been changed.
What you need to do, instead, is to run ulimit in the shell that starts your program. The process your program is running in will then inherit that rlimit from the shell.
I do not think there is any way to alter the rlimit of process X from process Y, where X != Y.
EDIT: I'll have to take that last back, at least in case you're running in Linux. There is a Linux-specific syscall prlimit that allows you to change the rlimits of a different process, and it also does appear to be available in Python's resource module, though it is undocumented there. See the manpage prlimit(2) instead; I'd assume that the function available in Python uses the same arguments. | 1 | 7 | 0 | I have a program that's running automatically on boot, and sporadically causing a coredump.
I'd like to record the output, but I can't seem to set ulimit -c programmatically (it defaults to 0, and resets every time).
I've tried using a bash script, as well as python's sh, os.system and subprocess, but I can't get it to work. | set `ulimit -c` from outside shell | 0.066568 | 0 | 0 | 7,676 |
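Because rlimits are inherited across fork/exec, as the answer explains, the usual pattern is to raise the limit in a preexec_fn right before spawning the program whose cores you want; a Unix-only sketch using the standard resource module:

```python
import resource      # Unix-only
import subprocess
import sys

def raise_core_limit():
    # Runs in the child between fork() and exec(); the spawned program
    # inherits this rlimit, which is what `ulimit -c` would have set.
    _, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

out = subprocess.check_output(
    [sys.executable, '-c',
     'import resource; print(resource.getrlimit(resource.RLIMIT_CORE)[0])'],
    preexec_fn=raise_core_limit)
print(out.decode().strip())  # the inherited soft limit (-1 means unlimited)
```

On Linux with Python 3.4+, resource.prlimit() exposes the prlimit(2) syscall mentioned in the edit, so the rlimits of an already-running process can be changed by pid.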
25,479,018 | 2014-08-25T04:24:00.000 | 0 | 0 | 0 | 0 | python,vba,button | 25,479,346 | 1 | true | 0 | 1 | If you're on Windows just call the Python file by itself. If you're on Mac or *Nix then make it executeable with chmod a+x pythonfile.py, then call it.
You may need to add python to the Path environment variable but that is always a convenient thing to have. | 1 | 0 | 0 | I have been trying something like this in VB.
Private Sub python_Click()
Shell "C:\Python25\python.exe ""C:\rowcount.py"
End Sub
This Python script just creates a text file, and it works fine. But how do I run a Python script from a VB button_click event when the Python script accepts run-time arguments?
Something like the following
e.g: Shell "C:\Python25\python.exe ""C:\rowcount.py -e -w -v COM6" | How to pass arguments to a python script on a button press from vb | 1.2 | 0 | 0 | 415 |
25,483,349 | 2014-08-25T09:57:00.000 | 1 | 0 | 0 | 0 | python,google-bigquery | 25,489,066 | 1 | true | 0 | 0 | This issue is an artifact of how bigquery does "allow large results" queries interacting poorly with the "ignore case" clause. We're tracking an internal bug on the issue, and hopefully will have a fix soon. The workaround is either to remove the "allow large results" flag or the "ignore case" clause. | 1 | 1 | 0 | I ran a simple select query (with no LIMIT applied) using the Big Query python api. I also supplied a destination table as the result was too large. When run, the job returned an "unexpected LIMIT clause" error. I used ignore case at the end of the query. There could be a possibility that it might be causing the problem.
Has anybody run into a similar problem?
For reference, my job_id is job_QrkB7t9WFEHqcH5qfsPZZsM476E | BigQuery: "unexpected LIMIT clause at:" error when using list query job | 1.2 | 1 | 0 | 144 |
25,485,168 | 2014-08-25T11:44:00.000 | 1 | 0 | 1 | 0 | ipython,syntax-highlighting,traceback | 25,485,483 | 1 | false | 0 | 0 | You can easily change IPython text color typing "ipython profile create" and then messing with the config file in ~/.ipython/profile_default/ipython_config.py.
Search for "color" with your favorite text editor; "LightBG" will probably suit you, as the colors are less aggressive.
Type "ipython profile create"
open "~/.ipython/profile_default/ipython_config.py"
Uncomment line 158 and set the variable to 'LightBG' (c.TerminalInteractiveShell.colors = 'LightBG') | 1 | 0 | 0 | Coloring traceback is great for fast visual parsing, but default colors for IPython 2.2.0 running on linux bash shell are too much for me.
(I mean, comments in red? really?)
How can the user modify this coloring? | How to change IPython traceback coloring | 0.197375 | 0 | 0 | 406 |
25,485,886 | 2014-08-25T12:23:00.000 | 0 | 0 | 0 | 0 | python,numpy,opencv,image-processing | 71,228,716 | 7 | false | 0 | 0 | This is the simplest way I found: img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U) | 2 | 27 | 1 | I have a 16-bit grayscale image and I want to convert it to an 8-bit grayscale image in OpenCV for Python to use it with various functions (like findContours etc.). How can I do this in Python? | How to convert a 16-bit to an 8-bit image in OpenCV? | 0 | 0 | 0 | 66,029 |
25,485,886 | 2014-08-25T12:23:00.000 | 1 | 0 | 0 | 0 | python,numpy,opencv,image-processing | 63,122,556 | 7 | false | 0 | 0 | Yes you can in Python. To get the expected result, choose a method based on what you want the values mapped from say uint16 to uint8 be.
For instance,
if you do img8 = (img16/256).astype('uint8') values below 256 are mapped to 0
if you do img8 = img16.astype('uint8') values are truncated to their low byte (taken modulo 256), so for example 256 maps to 0 and 257 maps to 1
In the LUT method as described and corrected above, you have to define the mapping. | 2 | 27 | 1 | I have a 16-bit grayscale image and I want to convert it to an 8-bit grayscale image in OpenCV for Python to use it with various functions (like findContours etc.). How can I do this in Python? | How to convert a 16-bit to an 8-bit image in OpenCV? | 0.028564 | 0 | 0 | 66,029 |
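The two mappings described above (and the cv2.normalize route from the first answer) can be sketched with plain NumPy; a minimal sketch, with a made-up sample array standing in for a real image:

```python
import numpy as np

img16 = np.array([[0, 255, 256, 65535]], dtype=np.uint16)  # toy 16-bit "image"

# Scale the full 16-bit range down to 8 bits (what cv2.normalize with
# NORM_MINMAX, or dividing by 256, achieves):
img8_scaled = (img16 // 256).astype(np.uint8)

# Truncate to the low byte instead (values are taken modulo 256):
img8_trunc = img16.astype(np.uint8)

print(img8_scaled.tolist())  # [[0, 0, 1, 255]]
print(img8_trunc.tolist())   # [[0, 255, 0, 255]]
```

Which of the two you want depends on whether you need the full dynamic range compressed (scaling) or only the low bits preserved (truncation).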
25,489,450 | 2014-08-25T15:35:00.000 | 2 | 0 | 1 | 1 | python,cmd | 25,490,166 | 1 | true | 0 | 0 | Each call to os.system is a separate instance of the shell. The cd you issued only had effect in the first instance of the shell. The second call to os.system was a new shell instance that started in the Python program's current working directory, which was not affected by the first cd invocation.
Some ways to do what you want:
1 -- put all the relevant commands in a single bash file and execute that via os.system
2 -- skip the cd call; just invoke your tesseract command using a full path to the file
3 -- change the directory for the Python program as a whole using os.chdir but this is probably not the right way -- your Python program as a whole (especially if running in a web app framework like Django or web2py) may have strong feelings about the current working directory.
The main takeaway is, os.system calls don't change the execution environment of the current Python program. It's equivalent to what would happen if you created a sub-shell at the command line, issued one command then exited. Some commands (like creating files or directories) have permanent effect. Others (like changing directories or setting environment variables) don't. | 1 | 0 | 0 | I am very new to Python and I have been trying to find a way to write in cmd with python.
I tried os.system and subprocess too. But I am not sure how to use subprocess.
While using os.system(), I got an error saying that the file specified cannot be found.
This is what I am trying to write in cmd os.system('cd '+path+'tesseract '+'a.png out')
I have tried searching Google but still I don't understand how to use subprocess.
EDIT:
I have figured out that it's not a problem with Python itself anymore. Here is my code now.
os.system("cd C:\\Users\\User\\Desktop\\Folder\\data\\")
os.system("tesseract a.png out")
Now it says the file cannot be opened. But if I open cmd separately and type the same commands, it successfully creates a file in folder\data. | Writing a line to CMD in python | 1.2 | 0 | 0 | 727
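A sketch of the answer's takeaway: instead of a separate os.system("cd ...") call, run the command with its working directory set explicitly via cwd=. The tesseract invocation is the OP's; here a stand-in command lists the directory so the example is self-contained:

```python
import os
import subprocess
import sys
import tempfile

# Set up a throwaway working directory containing one file.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "a.png"), "w") as f:
    f.write("fake image data")

# cwd= makes the child process start in the right directory -- the effect
# the two separate os.system() calls were trying (and failing) to combine.
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.listdir('.')[0])"],
    cwd=workdir,
)
print(out.decode().strip())  # a.png
```

For the real case this would be something like subprocess.call(["tesseract", "a.png", "out"], cwd=r"C:\Users\User\Desktop\Folder\data") (path taken from the question).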
25,491,867 | 2014-08-25T18:08:00.000 | 2 | 0 | 1 | 0 | python,arrays,list,tuples | 25,491,931 | 2 | true | 0 | 0 | Use a tuple. In your application, it doesn't seem like you will want or need to change the list of results after.
Though, with many return values you might want to consider returning a dictionary with named values. That way is more flexible and extensible, as adding a new statistic doesn't require modifying every place you use the function. | 1 | 1 | 0 | I have a function which must return many values (statistics) for other functions to interact with them. So I thought about returning them inside a list (array). But then I wondered: should I do so using a list (["foo", "bar"]) or using a tuple (("foo", "bar"))? What are the problems or differences when using one instead of the other? | Return multiple vars: list/tuple | 1.2 | 0 | 0 | 60
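The dictionary idea from the answer might look like this (a minimal sketch; the statistics chosen are illustrative, not from the question):

```python
def stats(values):
    """Return several results at once; a dict names each value, so adding a
    new statistic later does not break existing callers."""
    total = sum(values)
    count = len(values)
    return {"count": count, "total": total, "mean": total / count}

result = stats([1, 2, 3, 4])
print(result["mean"])  # 2.5
```

Callers then pick results by name (result["mean"]) instead of by position, which stays correct if new entries are added.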
25,491,977 | 2014-08-25T18:15:00.000 | 0 | 0 | 0 | 0 | ipython,ipython-magic | 32,031,738 | 1 | false | 1 | 0 | Using "double-quotes" fixed it for me.
Have a go with: %bookmark md "C:/Users/user1/my documents" | 1 | 0 | 0 | This is probably a stupid question, but I can't find an answer.
%bookmark dl 'C:/Users/user1/Downloads' works, but %bookmark md 'C:/Users/user1/my documents' doesn't work, throwing error:
"UsageError: %bookmark: too many arguments"
How to fix this? | ipython %bookmark error: quotes don't fix this | 0 | 0 | 0 | 92 |
25,492,551 | 2014-08-25T18:55:00.000 | 0 | 0 | 0 | 1 | windows,python-2.7,popen,py2exe | 25,511,407 | 1 | false | 0 | 0 | The error was caused by registering the task to run as a different user. The task was registered as the "SYSTEM" user, but could not access remote files which were accessed by the Popen() call. The solution was to run the task as another user, or run it as the "SYSTEM" user on the machine where the remote files are located. | 1 | 0 | 0 | I am writing a wrapper program to add features to a Windows command-line tool. This program is made to run on a Windows server with py2exe. In it, there are several lines which look like:
job = subprocess.Popen(syncCommand, stdout=myLog, stderr=myLog)
When I call this program from the command line, everything works fine, however, I wish to automate this script using Windows Task Scheduler (per a request from my manager). When I register the executable and attempt to run it as a service, the logs are touched, but do not become populated with dialogue that would normally be returned. I am unsure where precisely the error is occurring, as running it from the task scheduler does not call a terminal window with which to view debugging messages.
Note that in previous versions, I had calls to subprocess.Popen() replaced by os.system(), and everything worked as it should have.
Is there an obvious obstacle/problem with this method? Is there a compelling reason to use Popen() instead of system()? | Popen() does not redirect output from Task Scheduler task | 0 | 0 | 0 | 95 |
25,495,410 | 2014-08-25T22:40:00.000 | 0 | 0 | 0 | 1 | python,shell,flask | 25,496,797 | 1 | false | 1 | 0 | The one way I can think of doing this is to refresh the page. So, you could set the page to refresh itself every X seconds.
You would hope that the file you are reading is not large though, or it will impact performance. Better to have the output in memory. | 1 | 0 | 0 | In my project workflow, i am invoking a sh script from a python script file. I am planning to introduce a web user interface for this, hence opted for flask framework. I am yet to figure out how to display the terminal output of the shell script invoked by my python script in a component like text area or label. This file is a log file which is constantly updated till the script run is completed.
The solution I thought of was to redirect the terminal output to a text file, read the text file every X seconds, and display the content. I could also do this via AJAX from my web application. Is there any other prescribed way to achieve this?
Thanks | Display a constantly updated text file in a web user interface using Python flask framework | 0 | 0 | 0 | 268
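The refresh-and-reread approach from the answer can be sketched with a small tail helper (stdlib only; the Flask wiring in the comment is an assumption about route name and refresh interval, not from the question):

```python
import os
import tempfile

def tail(path, n=20):
    """Return the last n lines of a (possibly still growing) log file --
    what each page refresh or AJAX poll would send to the browser."""
    with open(path) as f:
        return f.readlines()[-n:]

# In a Flask view this could be returned inside a self-reloading page, e.g.:
#   '<meta http-equiv="refresh" content="5"><pre>' + "".join(lines) + '</pre>'

# Demo with a temporary log file:
path = os.path.join(tempfile.mkdtemp(), "run.log")
with open(path, "w") as f:
    f.write("".join("line %d\n" % i for i in range(30)))
print(len(tail(path)))          # 20
print(tail(path)[-1].strip())   # line 29
```

Note that rereading the whole file on each poll is fine for small logs, as the answer warns; for large files you would seek near the end instead.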
25,495,712 | 2014-08-25T23:13:00.000 | 2 | 0 | 1 | 1 | python,windows,python-2.7,module,pip | 25,495,917 | 1 | false | 0 | 0 | It doesn't matter what kind of machine you have. You can run 32-bit Windows on a 64-bit machine. And you can run 32-bit Python on 64-bit Windows.
If you have 32-bit Python, you need to install 32-bit pip. (Or you need to switch to 64-bit Python.)
From your description, you most likely have 32-bit Python on 64-bit Windows, and tried to use a 64-bit pip.
PS, if you want to install it manually instead of using Gohlke's installer, nobody can help you debug your problem based on "it says it failed to install". It produces a lot more output than that, and without that output, it's impossible to know which of the billion things that could possibly go wrong actually did.
PPS, just installing pip is sufficient to install any pure-Python packages. But if you want to install packages that include C extensions, you will need to set up a compiler (either MSVC, or MinGW/gcc), as explained in the pip documentation. | 1 | 0 | 0 | I've been looking around the internet for days now and cannot find a solution to my problem. I've learned all the basics to programming in Python 2.7 and I want to add Pip to my copy of 2.7. I found the link to download the unoffical 64-Bit installer (www.lfd.uci.edu/~gohlke/pythonlibs/), but when I downloaded it and ran it, it said I needed to have Python 2.7 (which I do) and it couldn't find it in the registry. I went to Pip's website and downloaded the official Windows installer and unpacked it using WinRAR.
I then tried opening Command Prompt and changed the directory to where the get-pip.py is located and running get-pip.py install but it says it failed to install.
I am completely lost and really need detailed and clear help. Please answer! | How do I install Python 2.7 modules on Windows 64-Bit? | 0.379949 | 0 | 0 | 1,791 |
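To confirm which situation you are in, you can ask Python itself whether it is a 32-bit or 64-bit build (a small check that is independent of the Windows version):

```python
import platform
import struct

# Pointer size in bits: 32 for a 32-bit Python build, 64 for a 64-bit one.
bits = struct.calcsize("P") * 8
print(bits, platform.architecture()[0])
```

Whatever this prints is the bitness your pip and installers must match.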
25,507,911 | 2014-08-26T14:09:00.000 | 6 | 0 | 1 | 1 | python,eclipse,debugging,shared-libraries,ctypes | 25,719,883 | 2 | true | 0 | 0 | Actually, it is a fairly simple thing to do using the CDT and PyDev environments in Eclipse.
I assume here that you have already configured the projects correctly, so you can build and debug each one separately.
Basically, you simply need to start the Python project in Debug mode and then to attach the CDT debugger to the running python process. To make it easier I'll try to describe it step by step:
Run your Python project in debug mode. Put a breakpoint somewhere after the loading of the dll using ctypes. Make note of the pid of the python process created (you should see a first line in the console view stating the pid. something like: pydev debugger: starting (pid: 1234))
Create a Debug configuration for your CDT project, choosing the type "C/C++ Attach to Application". You can use the default configuration.
Debug your project using the configuration you've created. A window should appear, asking you which process you want to attach to. Choose the python process having the right pid.
You can now add breakpoints to you C code.
You'll have two debuggers in the debug perspective, as if they were two different processes. You should always make sure the C/C++ debugging session is running when you work with the python debugger - as long as the C/C++ debugging session is suspended, the python debugger will be unresponsive. | 1 | 13 | 0 | I have a Python-program that uses ctypes and a C-shared library (dll-file). As an IDE, I am using Eclipse, where both projects will be developed (the C-shared library and the python program that uses it).
My idea is: when I start the Python-program in Debug-mode, can I somehow debug the shared library, which is written in C, too? Meaning: Can I set breakpoints and when the Python-program reaches that breakpoint in the shared library, executing stops and I can change variable values etc.? | Debug C-library from Python (ctypes) | 1.2 | 0 | 0 | 5,237 |
25,508,177 | 2014-08-26T14:20:00.000 | 1 | 1 | 0 | 1 | python,perl | 25,508,481 | 2 | false | 0 | 0 | Every modern Linux or Unixlike system comes with both installed; which you choose is a matter of taste. Just pick one.
I would say that whatever you choose, you should try to write easily-readable code in it, even if you're the only one who will be reading that code. Now, Pythonists will tell you that writing readable code is easier in Python, which may be true, but it's certainly doable in Perl as well. | 2 | 0 | 0 | I'm working on a large-ish project. I currently have some functional tests hacked together in shell scripts. They work, but I'd like to make them a little bit more complicated. I think it will be painful to do in bash, but easy in a full-blown scripting language. The target language of the project is not appropriate for implementing the tests.
I'm the only person who will be working on my branch for the foreseeable future, so it isn't a huge deal if my tests don't work for others. But I'd like to avoid committing code that's going to be useless to people working on this project in the future.
In terms of test harness "just working" for the largest number of current and future contributors to this project as possible, am I better off porting my shell scripts to Python or Perl?
I suspect Perl, since I know that the default version of Python installed (if any) varies widely across OSes, but I'm not sure if that's just because I'm less familiar with Perl. | Python vs Perl for portability? | 0.099668 | 0 | 0 | 254 |
25,508,177 | 2014-08-26T14:20:00.000 | 0 | 1 | 0 | 1 | python,perl | 25,508,315 | 2 | false | 0 | 0 | This is more of a personal view, but I have always used Python and will keep using it until something shows me a better option. It's simple, very extensible, robust and fast, and supported by a lot of users.
The version of Python doesn't matter much since you can update it, and most OSes ship at least Python 2.5, which expands compatibility a lot. Perl is also included in Linux and Mac OS, though.
I think that Python will be good, but if you like perl and have always worked with it, just use perl and don't complicate your life. | 2 | 0 | 0 | I'm working on a large-ish project. I currently have some functional tests hacked together in shell scripts. They work, but I'd like to make them a little bit more complicated. I think it will be painful to do in bash, but easy in a full-blown scripting language. The target language of the project is not appropriate for implementing the tests.
I'm the only person who will be working on my branch for the foreseeable future, so it isn't a huge deal if my tests don't work for others. But I'd like to avoid committing code that's going to be useless to people working on this project in the future.
In terms of test harness "just working" for the largest number of current and future contributors to this project as possible, am I better off porting my shell scripts to Python or Perl?
I suspect Perl, since I know that the default version of Python installed (if any) varies widely across OSes, but I'm not sure if that's just because I'm less familiar with Perl. | Python vs Perl for portability? | 0 | 0 | 0 | 254 |
25,508,510 | 2014-08-26T14:34:00.000 | 4 | 0 | 0 | 0 | python,pandas | 25,508,739 | 4 | false | 0 | 0 | One thing to check is the actual performance of the disk system itself. Especially if you use spinning disks (not SSD), your practical disk read speed may be one of the explaining factors for the performance. So, before doing too much optimization, check if reading the same data into memory (by, e.g., mydata = open('myfile.txt').read()) takes an equivalent amount of time. (Just make sure you do not get bitten by disk caches; if you load the same data twice, the second time it will be much faster because the data is already in RAM cache.)
See the update below before believing what I write underneath
If your problem is really parsing of the files, then I am not sure if any pure Python solution will help you. As you know the actual structure of the files, you do not need to use a generic CSV parser.
There are three things to try, though:
Python csv package and csv.reader
NumPy genfromtxt
NumPy loadtxt
The third one is probably fastest if you can use it with your data. At the same time it has the most limited set of features. (Which actually may make it fast.)
Also, the suggestions given you in the comments by crclayton, BKay, and EdChum are good ones.
Try the different alternatives! If they do not work, then you will have to do write something in a compiled language (either compiled Python or, e.g. C).
Update: I do believe what chrisb says below, i.e. the pandas parser is fast.
Then the only way to make the parsing faster is to write an application-specific parser in C (or other compiled language). Generic parsing of CSV files is not straightforward, but if the exact structure of the file is known there may be shortcuts. In any case parsing text files is slow, so if you ever can translate it into something more palatable (HDF5, NumPy array), loading will be only limited by the I/O performance. | 1 | 32 | 1 | I am using pandas to analyse large CSV data files. They are around 100 megs in size.
Each load from csv takes a few seconds, and then more time to convert the dates.
I have tried loading the files, converting the dates from strings to datetimes, and then re-saving them as pickle files. But loading those takes a few seconds as well.
What fast methods could I use to load/save the data from disk? | Fastest way to parse large CSV files in Pandas | 0.197375 | 0 | 0 | 36,848 |
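Since the question says date conversion dominates, one application-specific trick in the spirit of the answer (a stdlib sketch assuming a simple two-column date,value layout, not the OP's actual schema) is to cache parsed date strings so each distinct date is parsed only once:

```python
import csv
import io
from datetime import datetime

def load(fobj):
    """Known-layout parser: caches repeated date strings so strptime runs
    once per distinct date instead of once per row."""
    cache = {}
    rows = []
    for date_s, value_s in csv.reader(fobj):
        if date_s not in cache:
            cache[date_s] = datetime.strptime(date_s, "%Y-%m-%d")
        rows.append((cache[date_s], float(value_s)))
    return rows

data = load(io.StringIO("2014-08-26,1.5\n2014-08-26,2.0\n"))
print(len(data), data[0][0].year, data[1][1])  # 2 2014 2.0
```

The same caching idea can be applied as a converter when using the library-based loaders listed in the answer.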
25,510,779 | 2014-08-26T16:30:00.000 | 0 | 0 | 1 | 0 | python,user-interface | 25,511,791 | 2 | false | 0 | 0 | An easy way would be to create a definition that specifically looks for these inputs any time a user inputs values and have it execute before any other definition. | 1 | 1 | 0 | I have a program that displays a "main menu" with several options for users and then starts a method when an option is selected. E.g. If you select "Add Name" the add_name method is executed and then the main menu is displayed again.
Inside each of the methods you can access from the menu are several prompts. E.g. Inside of "Add Name" is the prompt "Add which name?".
I would like the user to be able to type "help" or "quit" from any prompt anywhere in the program and have the program display a help menu in the first case or return to the main menu in the second.
Is this possible? How would a program like that be structured? | How to create a command "home" or "quit" that returns to the main function of a program in python | 0 | 0 | 0 | 58 |
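The idea in the answer, a single input wrapper checked before anything else, might look like this (a sketch; the reader parameter exists only to make it testable, and the exception name is made up):

```python
class QuitToMenu(Exception):
    """Raised from any prompt to unwind back to the main menu."""

def ask(prompt, reader=input):
    """Use this for every user prompt so 'help' and 'quit' work anywhere."""
    while True:
        answer = reader(prompt).strip()
        if answer.lower() == "help":
            print("Type an answer to the prompt, 'help', or 'quit'.")
        elif answer.lower() == "quit":
            raise QuitToMenu()
        else:
            return answer

# The main menu loop then wraps each method call in try/except QuitToMenu,
# so 'quit' typed at any prompt returns straight to the menu.
replies = iter(["help", "Alice"])
print(ask("Add which name? ", lambda p: next(replies)))  # Alice
```

Each menu method keeps using ask() for its prompts and never has to special-case the two commands itself.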
25,511,708 | 2014-08-26T17:25:00.000 | 0 | 1 | 0 | 0 | javascript,python,ajax,raspberry-pi,sudo | 25,527,346 | 2 | true | 1 | 0 | Solved it by spawning my python script from nodejs and communicating in realtime to my webclient using socket.io. | 1 | 0 | 0 | I'm building a house monitor using a Raspberry Pi and midori in kiosk mode. I wrote a python script which reads out some GPIO pins and prints a value. Based on this value I'd like to specify a javascript event. So when for example some sensor senses it's raining I want the browser to immediately update its GUI.
What's the best way to do this? I tried executing the python script in PHP and accessing it over AJAX, but this is too slow. | Javascript execute python file | 1.2 | 0 | 0 | 556 |
25,518,381 | 2014-08-27T03:14:00.000 | 0 | 0 | 0 | 0 | python,excel | 25,518,599 | 1 | true | 1 | 0 | Do it at the same time. It will probably only take a handful of lines of code. There's no reason to do the work of walking over the whole file twice. | 1 | 0 | 0 | I am scraping websites for a research project using Python Beautifulsoup.
I have scraped a few thousand records and put them in excel.
In essence, I want to extract a substring of text (e.g. "python" from a post-title "Introduction to python for dummies").
The post-title is scraped and stored in a cell in excel.
I want to extract "python" and put it in another cell.
I need some advice on whether it is better to do the extraction while scraping or to do it offline in Excel.
Since this is a research project, there is no need for real-time speed; I am mainly looking to save effort.
Another related question is whether Python can be used to do the extraction in offline mode, i.e. open Excel, do the extraction, close Excel.
Any help or advice is really appreciated. | Scraping websites - Online or offline data processing is better | 1.2 | 0 | 0 | 259 |
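Whichever timing you choose, the extraction itself is the same few lines. A sketch assuming a fixed keyword list (KEYWORDS is made up; the sample title is the question's own example):

```python
KEYWORDS = ["python", "java", "excel"]

def extract_topic(title):
    """Return the first known keyword found in a scraped post title --
    cheap enough to run inline while scraping each record."""
    low = title.lower()
    for kw in KEYWORDS:
        if kw in low:
            return kw
    return None

print(extract_topic("Introduction to python for dummies"))  # python
```

Running this per record during the scrape, as the answer suggests, avoids a second pass over the whole spreadsheet.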
25,520,264 | 2014-08-27T06:25:00.000 | 0 | 1 | 1 | 1 | python,pycrypto | 25,542,008 | 1 | false | 0 | 0 | Got it working after installing xcode.
pycrypto was installed by default once xcode was installed and fabric is working now.
(I should have mentioned in the question that I am new to Mac)
I see it is fetching the 2.6.1 version. I tried installing lower versions and I am getting same error.
'sudo easy_install fabric' also throws same error.
I also have the gmplib installed. I have the lib file in these places
/usr/lib/libgmp.dylib
/usr/local/lib/libgmp.dylib
pip install fabric
Requirement already satisfied (use --upgrade to upgrade): fabric in /Library/Python/2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): paramiko>=1.10.0 in /Library/Python/2.7/site-packages/paramiko-1.14.1-py2.7.egg (from fabric)
Downloading/unpacking pycrypto>=2.1,!=2.4 (from paramiko>=1.10.0->fabric)
Downloading pycrypto-2.6.1.tar.gz (446kB): 446kB downloaded
Running setup.py (path:/private/tmp/pip_build_root/pycrypto/setup.py) egg_info for package pycrypto
Requirement already satisfied (use --upgrade to upgrade): ecdsa in /Library/Python/2.7/site-packages/ecdsa-0.11-py2.7.egg (from paramiko>=1.10.0->fabric)
Installing collected packages: pycrypto
Running setup.py install for pycrypto
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for __gmpz_init in -lgmp... yes
checking for __gmpz_init in -lmpir... no
checking whether mpz_powm is declared... yes
checking whether mpz_powm_sec is declared... yes
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for inttypes.h... (cached) yes
checking limits.h usability... yes
checking limits.h presence... yes
checking for limits.h... yes
checking stddef.h usability... yes
checking stddef.h presence... yes
checking for stddef.h... yes
checking for stdint.h... (cached) yes
checking for stdlib.h... (cached) yes
checking for string.h... (cached) yes
checking wchar.h usability... yes
checking wchar.h presence... yes
checking for wchar.h... yes
checking for inline... inline
checking for int16_t... yes
checking for int32_t... yes
checking for int64_t... yes
checking for int8_t... yes
checking for size_t... yes
checking for uint16_t... yes
checking for uint32_t... yes
checking for uint64_t... yes
checking for uint8_t... yes
checking for stdlib.h... (cached) yes
checking for GNU libc compatible malloc... yes
checking for memmove... yes
checking for memset... yes
configure: creating ./config.status
config.status: creating src/config.h
building 'Crypto.PublicKey._fastmath' extension
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -Wall -Wstrict-prototypes -Wshorten-64-to-32 -fwrapv -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -std=c99 -O3 -fomit-frame-pointer -Isrc/ -I/usr/include/ -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/_fastmath.c -o build/temp.macosx-10.9-intel-2.7/src/_fastmath.o
src/_fastmath.c:83:13: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
size = p->ob_size;
~ ~~~^~~~~~~
src/_fastmath.c:86:10: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
size = -p->ob_size;
~ ^~~~~~~~~~~
src/_fastmath.c:113:49: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
int size = (mpz_sizeinbase (m, 2) + SHIFT - 1) / SHIFT;
~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~
src/_fastmath.c:1310:12: warning: implicit conversion loses integer precision: 'unsigned long' to 'unsigned int' [-Wshorten-64-to-32]
offset = mpz_get_ui (mpz_offset);
~ ^~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/gmp.h:840:20: note: expanded from macro 'mpz_get_ui'
#define mpz_get_ui __gmpz_get_ui
^
src/_fastmath.c:1360:10: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
return return_val;
~~~~~~ ^~~~~~~~~~
src/_fastmath.c:1373:27: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
rounds = mpz_get_ui (n) - 2;
~ ~~~~~~~~~~~~~~~^~~
src/_fastmath.c:1433:9: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
return return_val;
~~~~~~ ^~~~~~~~~~
src/_fastmath.c:1545:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
src/_fastmath.c:1621:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
9 warnings generated.
src/_fastmath.c:1545:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
src/_fastmath.c:1621:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
2 warnings generated.
This is the error i get when i execute 'fab'
Traceback (most recent call last):
File "/usr/local/bin/fab", line 5, in
from pkg_resources import load_entry_point
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2603, in
working_set.require(requires)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 666, in require
needed = self.resolve(parse_requirements(requirements))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 565, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: pycrypto>=2.1,!=2.4 | Error installing pycrypto in mac 10.9.6 | 0 | 0 | 0 | 494 |
25,521,837 | 2014-08-27T07:59:00.000 | 1 | 0 | 0 | 0 | python,scrapy,web-crawler | 25,521,907 | 1 | false | 1 | 0 | You can just set your DNS names manually in your hosts file. On windows this can be found at C:\Windows\System32\Drivers\etc\hosts and on Linux in /etc/hosts | 1 | 0 | 0 | Wasn't able to find anything in the docs/SO relating to my question.
So basically I'm crawling a website with 8 or so subdomains
They are all using Akamai/CDN.
My question is: if I can find IPs of a few different Akamai data centres, can I somehow explicitly say that a given subdomain should use a given IP for the hostname, etc.? So basically, override the automatic DNS resolution...
As this would allow greater efficiency, and I imagine it would make throttling less likely since I'd be distributing the crawling.
Thanks | Scrapy - use multiple IP Addresses for a host | 0.197375 | 0 | 1 | 953 |
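For illustration, the hosts-file entries would look like this (the IP addresses and domain below are made up; substitute the edge IPs you resolved):

```
# /etc/hosts (or C:\Windows\System32\Drivers\etc\hosts on Windows)
23.0.16.10   sub1.example.com
23.0.16.20   sub2.example.com
23.0.16.30   sub3.example.com
```

Each subdomain is then pinned to its own edge IP for every process on the machine, including the crawler.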
25,527,787 | 2014-08-27T12:55:00.000 | 4 | 0 | 1 | 0 | python,sequence | 25,527,941 | 1 | true | 0 | 0 | A Physically Stored Sequence is best explained by contrast. It is one type of "iterable" with the main example of the other type being a "generator."
A generator is an iterable, meaning you can iterate over it as in a "for" loop, but it does not actually store anything--it merely spits out values when requested. Examples of this would be a pseudo-random number generator, the whole itertools package, or any function you write yourself using yield. Those sorts of things can be the subject of a "for" loop but do not actually "contain" any data.
A physically stored sequence then is an iterable which does contain its data. Examples include most data structures in Python, like lists. It doesn't matter in the Python parlance if the items in the sequence have any particular reference count or anything like that (e.g. the None object exists only once in Python, so [None, None] does not exactly "store" it twice).
A key feature of physically stored sequences is that you can usually iterate over them multiple times, and sometimes get items other than the "first" one (the one any iterable gives you when you call next() on it).
All that said, this phrase is not very common--certainly not something you'd expect to see or use as a workaday Python programmer. | 1 | 3 | 0 | I am currently reading Learning Python, 5th Edition - by Mark Lutz and have come across the phrase "Physically Stored Sequence".
From what I've learnt so far, a sequence is an object that contains items that can be indexed in sequential order from left to right e.g. Strings, Tuples and Lists.
So in regards to a "Physically Stored Sequence", would that be a sequence that is referenced by a variable for use later on in a program? Or am I not getting it?
Thank you in advance for your answers. | What is a "Physically Stored Sequence" in Python? | 1.2 | 0 | 0 | 84 |
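The contrast drawn in the answer might be clearest in code (a minimal sketch):

```python
# A generator is an iterable that computes values on demand; it stores nothing.
gen = (n * n for n in range(3))

# A physically stored sequence (here, a list) actually contains its items.
seq = [n * n for n in range(3)]

# A stored sequence can be indexed and iterated as many times as you like:
print(seq[2], list(seq), list(seq))  # 4 [0, 1, 4] [0, 1, 4]

# A generator yields its values once and is then exhausted:
print(list(gen), list(gen))  # [0, 1, 4] []
```

This is exactly the "can usually iterate over them multiple times" property the answer points out.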
25,530,467 | 2014-08-27T14:55:00.000 | 0 | 0 | 0 | 0 | database,couchdb,couchdb-futon,couchdb-python,couchdb-lucene | 25,606,357 | 1 | false | 0 | 0 | When creating new document "test_1", there should be a document with that name already having a different _rev in your db.
If you need to update the old "test_1", you need to provide the _rev of that document when updating. Or else, you can delete "test_1" and then try creating another document with the name "test_1".
The point here is: you should provide the latest _rev of a document when updating that document. | 1 | 0 | 0 | I am trying to delete all the data from CouchDB and then write the same data back with a modified **_id field and some extra fields**,
but I am getting the following error:
{
'reason' => 'Document update conflict.',
'error' => 'conflict',
'id' => 'test_1'
},
{
'reason' => 'Document update conflict.',
'error' => 'conflict',
'id' => 'test_2'
},
How to resolve the error ? | Error occurred while writing a data into CouchDB | 0 | 0 | 0 | 526 |
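The fix the answer describes, carrying the existing document's latest _rev into the update, can be sketched as a small helper (illustrative only; the HTTP fetch/put against CouchDB is omitted, and the sample docs are made up):

```python
def with_latest_rev(new_doc, existing_doc):
    """Return a copy of new_doc carrying the _rev of the document already in
    the db, which CouchDB requires before it will accept the update."""
    doc = dict(new_doc)
    if existing_doc and "_rev" in existing_doc:
        doc["_rev"] = existing_doc["_rev"]
    return doc

existing = {"_id": "test_1", "_rev": "2-abc", "name": "old"}
update = with_latest_rev({"_id": "test_1", "name": "new", "extra": 1}, existing)
print(update["_rev"])  # 2-abc
```

Without the _rev field, CouchDB treats the write as conflicting with the stored revision, which is exactly the "Document update conflict" above.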
25,532,502 | 2014-08-27T16:32:00.000 | 0 | 0 | 0 | 0 | python,algorithm,random,python-3.4 | 25,533,547 | 3 | false | 0 | 0 | Well you're probably going to need to come up with some more detailed requirements but yes, there are ways:
pre-populate a dictionary with however many terms in the series you require for a given seed and then at run-time simply look the nth term up.
if you're not fussed about the seed values and/or do not require some n terms for any given seed, then find a O(1) way of generating different seeds and only use the first term in each series.
Otherwise, you may want to stop using the built-in python functionality & devise your own (more predictable) algo.
EDIT regarding the new info:
OK, so I also looked at your profile, and you are doing something (musical?) rather than anything crypto-related. If that's the case, it's unfortunately a mixed blessing: while you don't require security, you still won't want (audible) patterns appearing, so you probably do still need a strong PRNG.
One of the transformers that I want adds a random value to the input y
coordinate depending on the input x coordinate
It's not yet clear to me if there is actually any real requirement for y to depend upon x...
Now say that I want two different instances of the transformer that
adds random values to y. My question is about my options for making
this new random transformer give different values than the first one.
...because here, I'm getting the impression that all you really require is for two different instances to be different in some random way.
But, assuming you have some object containing tuple (x,y) and you really do want a transform function to randomly vary y for the same x; and you want an untransform function to quickly undo any transform operations, then why not just keep a stack of the state changes throughout the lifetime of any single instance of an object; and then in the untransform implementation, you just pop the last transformation off the stack ? | 1 | 2 | 1 | By 'graph' I mean 'function' in the mathematical sense, where you always find one unchanging y value per x value.
Python's random.Random class's seed behaves as the x-coordinate of a random graph and each new call to random.random() gives a new random graph with all new x-y mappings.
Is there a way to directly refer to random.Random's nth graph, or in other words, the nth value in a certain seed's series without calling random.random() n times?
I am making a set of classes that I call Transformers that take any (x,y) coordinates as input and output another pair of (x,y) coordinates. Each transformer has two methods: transform and untransform. One of the transformers that I want adds a random value to the input y coordinate depending on the input x coordinate. Say that I then want this transformer to untransform(x, y); now I need to subtract the same value I added from y if x is the same. This can be done by setting the seed to the same value it had when I added to y, so acting like the x value. Now say that I want two different instances of the transformer that adds random values to y. My question is about my options for making this new random transformer give different values than the first one. | Many independent pseudorandom graphs each with same arbitrary y for any input x | 0 | 0 | 0 | 172
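The stack-based transform/untransform idea suggested in the answer above can be sketched in a few lines. This is purely illustrative (the class name and the hash-based seeding scheme are my own, not from the question's code):

```python
import random

class RandomYTransformer:
    """Adds a seeded pseudo-random offset to y; a stack records each
    applied offset so untransform can undo the most recent transform."""

    def __init__(self, seed):
        self.seed = seed     # different seeds -> different instances differ
        self._stack = []     # history of applied offsets

    def transform(self, x, y):
        # Offset is a deterministic function of (seed, x).
        offset = random.Random(hash((self.seed, x))).random()
        self._stack.append(offset)
        return x, y + offset

    def untransform(self, x, y):
        offset = self._stack.pop()   # undo the last transform
        return x, y - offset

t = RandomYTransformer(seed=1)
x, y = t.transform(3, 10.0)
print(t.untransform(x, y))  # recovers (3, 10.0) up to float rounding
```

Because the offset depends only on (seed, x), two instances with different seeds give different y values for the same x, while one instance stays reproducible.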
25,534,295 | 2014-08-27T18:15:00.000 | 1 | 0 | 0 | 1 | database,google-app-engine,python-2.7 | 25,538,591 | 1 | false | 1 | 0 | Assuming that you are talking about an entity that you have on your local machine but not on App Engine once you deploy the app: your local datastore is for testing purposes only and nothing from it will be deployed to GAE. You will need to re-create all datastore data once your app is deployed if it wasn't there already. | 1 | 0 | 0 | I have an instance which is blobstore.BlobReferenceProperty(). In the local database viewer it appears, but when I deploy the application the value in the Google database is '{}', and when I click on an entity it shows that the instance has an unknown property. Can anyone help me? | google app engine datastore | 0.197375 | 0 | 0 | 28
25,534,894 | 2014-08-27T18:51:00.000 | 2 | 1 | 1 | 0 | python,heuristics,malware,malware-detection | 25,545,902 | 3 | true | 0 | 0 | Python is described as a general purpose programming language, so yes, this is definitely possible, but not necessarily the best implementation. In programming, just like a trade, you should use the best tools for the job.
I would recommend prototyping your application with Python and Clamd, then consider moving to another language if you want a closed-source solution that you can sell while protecting your intellectual property.
Newb quotes:
Anything written in python is typically quite easy to
reverse-engineer, so it won't do for real protection.
I disagree, in fact a lot, but it is up for debate I suppose. It really depends how the developer packages the application. | 2 | 1 | 0 | I am trying to create a virus scanner in Python, and I know that signature based detection is possible, but is heuristic based detection possible in Python, ie. run a program in a safe environment, or scan the program's code, or check what the program behaves like, and then decide if the program is a virus or not. | Is it possible to implement heuristic virus scanning in Python? | 1.2 | 0 | 0 | 2,774
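To illustrate that the "scan the program's code" style of heuristic is possible in plain Python, here is a toy static check combining a byte-entropy test with a pattern scan. This is purely illustrative; the patterns and the threshold are made-up examples, not a real detection ruleset:

```python
import math

def shannon_entropy(data):
    """Entropy in bits per byte; high values often mean packed/encrypted code."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in (data.count(b) for b in set(data)))

# Made-up example indicators -- a real scanner would use a curated ruleset.
SUSPICIOUS_PATTERNS = [b"CreateRemoteThread", b"VirtualAllocEx"]

def looks_suspicious(data, entropy_threshold=7.5):
    return (shannon_entropy(data) > entropy_threshold
            or any(p in data for p in SUSPICIOUS_PATTERNS))

print(looks_suspicious(b"plain old text"))            # False
print(looks_suspicious(b"..CreateRemoteThread.."))    # True
```

A real heuristic engine adds behavioural monitoring and emulation on top, which is where a C core usually comes in, but the scripting glue shown here is exactly what Python is good at.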
25,534,894 | 2014-08-27T18:51:00.000 | 2 | 1 | 1 | 0 | python,heuristics,malware,malware-detection | 25,535,011 | 3 | false | 0 | 0 | Yes, it is possible.
...and...
No, it is probably not the easiest, fastest, best performing, or most efficient way to accomplish the task. | 2 | 1 | 0 | I am trying to create a virus scanner in Python, and I know that signature based detection is possible, but is heuristic based detection possible in Python, ie. run a program in a safe environment, or scan the program's code, or check what the program behaves like, and then decide if the program is a virus or not. | Is it possible to implement heuristic virus scanning in Python? | 0.132549 | 0 | 0 | 2,774 |
25,535,192 | 2014-08-27T19:11:00.000 | 0 | 0 | 1 | 0 | python,module,pip | 25,537,762 | 1 | false | 0 | 0 | This grew too long for a comment, so posting it as an answer.
So as @Zahir Jacobs said, this problem is because pip is installing all packages in different path. After I moved all the packages to $which python path, I can import these modules now.
But the follow-up question is: if I still want to use pip for installs in the future, do I have to move the packages manually again? Is there any way to change the install path for pip?
I tried to move pip package, but it returned:
MacBook-Air:~ User$ pip install tweepy
Traceback (most recent call last):
File "/usr/local/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources | 1 | 0 | 0 | I used pip to install all python packages, and the path is:
PYTHONPATH="/usr/local/lib/python2.7/site-packages"
I found all the packages I tried to install were installed under this path, but when I tried to import them, it always said module not found.
MacBook-Air:~ User$ pip install tweepy
Requirement already satisfied (use --upgrade to upgrade): tweepy in /usr/local/lib/python2.7/site-packages
import tweepy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named tweepy
I tried with tweepy, httplib2, oauth and some others, none of these can work.
Can anyone tell me how I can solve this problem?
Thanks!!!! | Python2.7 package installed in right path but not found | 0 | 0 | 1 | 770 |
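A quick way to spot this kind of mismatch is to compare where the running interpreter looks for packages with where pip installed them. This is an illustrative stdlib-only diagnostic, not part of the original question:

```python
import sys
import sysconfig

print(sys.executable)                    # the interpreter you are actually running
print(sysconfig.get_paths()["purelib"])  # where it expects pure-Python packages
for p in sys.path:                       # every directory searched on import
    print(p)
```

If pip's target directory is not in the interpreter's sys.path (or pip belongs to a different interpreter entirely), imports will fail exactly as described above.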
25,535,893 | 2014-08-27T19:55:00.000 | -1 | 0 | 0 | 0 | python,win32gui,showwindow,setforegroundwindow | 28,421,442 | 1 | false | 0 | 1 | Call BringWindowToTop and SetActiveWindow functions. | 1 | 3 | 0 | Here is what I want to do:
1)Open an application with username
2)Give some inputs
3)Open 2nd window for the application with different username
4)Give some inputs again
5)Switch to first application window, do something
6)Switch to second application
I am programming in python using the win32gui library.
I have tried using ShowWindow and SetForegroundWindow, but it doesn't work correctly. Can someone please explain to me what would be the way to do it in a very simple way? | How to use ShowWindow() and SetForegroundWindow Correctly? | -0.197375 | 0 | 0 | 2,967
25,536,981 | 2014-08-27T21:10:00.000 | 1 | 0 | 1 | 0 | python,eclipse,pydev | 25,633,954 | 1 | false | 0 | 0 | When you press '.' it'll get the code-completion for the variable you have... probably getting those variables is slow in your use case (because of some construct in your own code).
Mostly PyDev will do a dir(obj) and getattr(obj, attr_name)... So, if inspecting your object is slow, that'll be slow too.
To improve that you can disable the auto code completion on '.' or you can make your object more friendly to inspections. | 1 | 1 | 0 | In Eclipse, when in interactive mode, when I type in a variable name and press ., so that the list of hints pops up, there is a big pause.
How can I prevent the big pause? | Pydev, eclipse, pauses when I press . in interactive mode | 0.197375 | 0 | 0 | 50 |
25,542,787 | 2014-08-28T07:10:00.000 | 2 | 0 | 0 | 1 | python,oracle | 27,795,948 | 1 | false | 0 | 0 | If python finds more than one OCI.DLL file in the path (even if they are identical) it will throw this error. (Your path statement looks like it may throw up more than one). You can manipulate the path inside your script to constrain where python will look for the supporting ORACLE files which may be your only option if you have to run several versions of oracle/clients locally. | 1 | 4 | 0 | Background.
My OS is Win7 64bit.
My Python is 2.7 64bit from python-2.7.8.amd64.msi
My cx_Oracle is 5.0 64bit from cx_Oracle-5.0.4-10g-unicode.win-amd64-py2.7.msi
My Oracle client is 10.1 (I don't know whether it is 32- or 64-bit, but SQL*Plus is 10.1.0.2.0)
Database is
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
ORACLE_HOME variable added from haki reply.
C:\Oracle\product\10.1.0\Client_1\
That did not work; the problem still persists.
ORACLE_HOME Try Oracle instant from instantclient-basic-win64-10.2.0.5.zip
C:\instantclient_10_2\
C:\Users\PavilionG4>sqlplus Lee/123@chstchmp
Error 6 initializing SQL*Plus
Message file sp1.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
My SQL*Plus does not let me set the Oracle home.
ORACLE_HOME Come back to the
C:\Oracle\product\10.1.0\Client_1\
PATH variable
C:\Program Files (x86)\Seagate Software\NOTES\C:\Program Files (x86)\Seagate Software\NOTES\DATA\C:\Program Files (x86)\Java\jdk1.7.0_05\binC:\Oracle\product\10.1.0\Client_1\binC:\Oracle\product\10.1.0\Client_1\jre\1.4.2\bin\clientC:\Oracle\product\10.1.0\Client_1\jre\1.4.2\binC:\app\PavilionG4\product\11.2.0\dbhome_1\binC:\app\PavilionG4\product\11.2.0\client_2\binc:\Program Files (x86)\AMD APP\bin\x86_64c:\Program Files (x86)\AMD APP\bin\x86C:\Windows\system32C:\WindowsC:\Windows\System32\WbemC:\Windows\System32\WindowsPowerShell\v1.0\c:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-StaticC:\Users\PavilionG4\AppData\Local\Smartbar\Application\C:\PROGRA~2\IBM\SQLLIB\BINC:\PROGRA~2\IBM\SQLLIB\FUNCTIONC:\Program Files\gedit\binC:\Kivy-1.7.2-w32C:\Program Files (x86)\ZBar\binjC:\Program Files (x86)\Java\jdk1.7.0_05\binC:\Program Files\MATLAB\R2013a\runtime\win64C:\Program Files\MATLAB\R2013a\binC:\Python27
TNS is :
C:\Oracle\product\10.1.0\Client_1\NETWORK\ADMIN\tnsnames.ora
REPORT1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.28.128.110)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = REPORT1)
)
)
f1.py shows me error
import cx_Oracle
ip = '172.25.25.42'
port = 1521
SID = 'REPORT1'
dns_tns = cx_Oracle.makedsn(ip,port,SID)
connection = cx_Oracle.connect(u"Lee",u"123",dns_tns)
cursor = connection.cursor()
connection.close()
Error
Traceback (most recent call last):
File "f1.py", line 6, in <module>
connection = cx_Oracle.connect(u"Lee",u"123",dns_tns)
cx_Oracle.InterfaceError: Unable to acquire Oracle environment handle
Questions
1. How to acquire Oracle environment handle?
I have searched the web. Unfortunately, nothing there hits my problem at all.
2. How to let Python use another Oracle client without impacting the existing one? | Python + cx_Oracle : Unable to acquire Oracle environment handle | 0.379949 | 1 | 0 | 6,803
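The answer's suggestion of manipulating the path inside the script can be sketched like this. Purely illustrative: the directory is an example taken from the question, and the cx_Oracle import is left commented out since it only works with a matching client installed:

```python
import os

# Point the process at exactly one Oracle client before cx_Oracle loads,
# so Python can find only a single OCI.DLL (directory below is an example).
oracle_home = r"C:\Oracle\product\10.1.0\Client_1"
os.environ["ORACLE_HOME"] = oracle_home
os.environ["PATH"] = (os.path.join(oracle_home, "bin")
                      + os.pathsep + os.environ.get("PATH", ""))

# import cx_Oracle   # import only after the environment is set up
print(os.environ["ORACLE_HOME"])
```

The key point is that the environment must be adjusted before the first import of cx_Oracle, because the DLL search happens at import time.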
25,543,058 | 2014-08-28T07:26:00.000 | 0 | 0 | 0 | 0 | python,jenkins,jython | 25,563,114 | 1 | false | 1 | 0 | I have found the way to do this:
install Scriptler plugin
write Groovy script that implements some additional functionality needed by Jenkins users
write webpage that uses Javascript + jQuery to use form elements' values for GET/POST to Groovy script, update the webpage dynamically (say by replacing html body or adding to it), put it in userContent
grant selected Jenkins users Run script permission in the Jenkins' security matrix config | 1 | 0 | 0 | For non-technical reasons I need to keep generating user content in Jenkins.
Theoretically I could do something like:
have parameterized build
provide webpage in user content folder that does GET/POST to parameterized build
display webpage with results (I don't even know if it's possible)
UPDATE: That is, I want to run some dynamic webpage in Jenkins (yes I know it does not look very good). Specifically, Jenkins users after logging in need some additional functionality like generating paths and hashes from job workspaces and have them displayed and running such logic as a separate Jenkins job is not very attractive (user content folder is simply the most appropriate place for such stuff I think). Typically, I'd provide such features using say simple Django webpage, but that's not an option for various reasons. | Generating user content in Jenkins | 0 | 0 | 0 | 879 |
25,550,116 | 2014-08-28T13:33:00.000 | 0 | 0 | 0 | 0 | python,django,webserver,localhost | 70,518,273 | 7 | false | 1 | 0 | very simple,
first you need to add the IP to ALLOWED_HOSTS,
ALLOWED_HOSTS = ['*']
then execute python manage.py runserver 0.0.0.0:8000
now you can access the local project from a different system on the same network | 3 | 23 | 0 | I am developing a web application on my local computer in Django.
Now I want my webapp to be accessible to other computers on my network. We have a common network drive "F:/". Should I place my files on this drive or can I just write something like "python manage.py runserver test_my_app:8000" in the command prompt to let other computers in the network access the web server by writing "test_my_app:8000" in the browser address field? Do I have to open any ports and how can I do this? | Access Django app from other computers | 0 | 0 | 0 | 29,704 |
25,550,116 | 2014-08-28T13:33:00.000 | 11 | 0 | 0 | 0 | python,django,webserver,localhost | 57,634,195 | 7 | false | 1 | 0 | Just add your own IP Address to ALLOWED_HOSTS
ALLOWED_HOSTS = ['192.168.1.50', '127.0.0.1', 'localhost']
and run your server python manage.py runserver 192.168.1.50:8000
and access your own server from other computers in your network | 3 | 23 | 0 | I am developing a web application on my local computer in Django.
Now I want my webapp to be accessible to other computers on my network. We have a common network drive "F:/". Should I place my files on this drive or can I just write something like "python manage.py runserver test_my_app:8000" in the command prompt to let other computers in the network access the web server by writing "test_my_app:8000" in the browser address field? Do I have to open any ports and how can I do this? | Access Django app from other computers | 1 | 0 | 0 | 29,704 |
25,550,116 | 2014-08-28T13:33:00.000 | 6 | 0 | 0 | 0 | python,django,webserver,localhost | 43,633,252 | 7 | false | 1 | 0 | Run the application with IP address then access it in other machines.
python manage.py runserver 192.168.56.22:1234
Both machines should be on the same network; only then will this work. | 3 | 23 | 0 | I am developing a web application on my local computer in Django.
Now I want my webapp to be accessible to other computers on my network. We have a common network drive "F:/". Should I place my files on this drive or can I just write something like "python manage.py runserver test_my_app:8000" in the command prompt to let other computers in the network access the web server by writing "test_my_app:8000" in the browser address field? Do I have to open any ports and how can I do this? | Access Django app from other computers | 1 | 0 | 0 | 29,704 |
25,551,634 | 2014-08-28T14:41:00.000 | 0 | 0 | 1 | 0 | python,windows | 25,551,742 | 2 | false | 0 | 0 | You can use \r\n for a line ending on Windows environment. | 1 | 1 | 0 | When ever I do print 'some text' with a Windows interpreter of Python, it always adds CRLF at the end of each line. I tried doing "print 'some text\n'," but it always puts CRLF when I just want to do LF. Is there a way to just do LF at the end of print within Python 2.7? | Only do LF (line feed) at the end of print in python | 0 | 0 | 0 | 1,406 |
25,552,075 | 2014-08-28T15:00:00.000 | 1 | 0 | 0 | 0 | python,openerp | 25,993,349 | 1 | true | 1 | 0 | You can do it easily with python library called XlsxWriter. Just download it and add in openerp Server, look for XlsxWriter Documentation , plus there are also other python libraries for generating Xlsx reports. | 1 | 0 | 0 | I need to know, what are the steps to generate an Excel sheet in OpenERP?
Or put it this way: I want to generate an Excel sheet for data that I have retrieved from different tables through queries, with a function that I call from a button on a wizard. Now, when I click on the button, an Excel sheet should be generated.
I have installed OpenOffice, the problem is I don't know how to create that sheet and put data on it. Please will you tell me the steps? | What are the steps to create or generate an Excel sheet in OpenERP? | 1.2 | 1 | 0 | 596 |
25,553,781 | 2014-08-28T16:35:00.000 | 1 | 0 | 1 | 0 | python,optimization,numpy | 25,554,002 | 2 | false | 0 | 0 | It computes max(a) once, then it compares the (scalar) result against each (scalar) element in a, and creates a bool-array for the result. | 1 | 0 | 1 | Let a be a numpy array of length n.
Does the statement
a == max(a)
calculate the expression max(a) n-times or just one? | Numpy array element-by-element comparison optimization | 0.099668 | 0 | 0 | 70 |
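A small check of the answer's claim, using the equivalent a.max() call (assumes NumPy is installed):

```python
import numpy as np

a = np.array([3, 1, 4, 1, 5])
mask = a == a.max()   # the maximum is evaluated once; the comparison then
print(mask)           # broadcasts that single scalar against each element
```

The maximum is computed a single time up front, and the comparison produces a boolean array of the same shape as a.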
25,556,512 | 2014-08-28T19:24:00.000 | 0 | 0 | 1 | 1 | ipython,ipython-notebook | 25,556,599 | 1 | false | 0 | 0 | just a guess, but maybe there is a feature turned on in IPython that's calling home for updates or something, and it's running at that time? maybe check to see if it has that feature, and turn it off, and see if that helps?
EDITED: see my comment below, I don't think this is an ipython related issue. | 1 | 0 | 0 | This is an odd problem I am having to which I have no solution!
Every day, at around noon -- sometimes closer to 1pm -- my computer locks up. It only does so if I am running an IPython Notebook kernel.
I am running Mavericks on a MBPr 2013.
Has anyone else had this issue or related?
How can I investigate further?
Thanks. | IPython Notebook crashes Macbook Pro every day around noon | 0 | 0 | 0 | 63 |
25,557,693 | 2014-08-28T20:43:00.000 | 0 | 0 | 0 | 0 | python,windows-7,scrapy,pyinstaller,scrapy-spider | 52,980,333 | 2 | false | 1 | 0 | You need to create a scrapy folder under the same directory as runspider.exe (the exe file generated by pyinstaller).
Then copy the "VERSION" and "mime.types" files (default path: %USERPROFILE%\AppData\Local\Programs\Python\Python37\Lib\site-packages\scrapy) into the scrapy folder you just created. (If you only copy "VERSION", you will be prompted to find the "mime.types" file) | 1 | 1 | 0 | After installing all dependencies for scrapy on windows 32bit. I've tried to build an executable from my scrapy spider. Spider script "runspider.py" works ok when running as "python runspider.py"
Building executable "pyinstaller --onefile runspider.py":
C:\Users\username\Documents\scrapyexe>pyinstaller --onefile
runspider.py 19 INFO: wrote
C:\Users\username\Documents\scrapyexe\runspider.spec 49 INFO: Testing
for ability to set icons, version resources... 59 INFO: ... resource
update available 59 INFO: UPX is not available. 89 INFO: Processing
hook hook-os 279 INFO: Processing hook hook-time 279 INFO: Processing
hook hook-cPickle 380 INFO: Processing hook hook-_sre 561 INFO:
Processing hook hook-cStringIO 700 INFO: Processing hook
hook-encodings 720 INFO: Processing hook hook-codecs 1351 INFO:
Extending PYTHONPATH with C:\Users\username\Documents\scrapyexe 1351
INFO: checking Analysis 1351 INFO: building Analysis because
out00-Analysis.toc non existent 1351 INFO: running Analysis
out00-Analysis.toc 1351 INFO: Adding Microsoft.VC90.CRT to dependent
assemblies of final executable
1421 INFO: Searching for assembly
x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21
022.8_none ... 1421 INFO: Found manifest C:\Windows\WinSxS\Manifests\x86_microsoft.vc90.crt_1fc
8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91.manifest 1421 INFO:
Searching for file msvcr90.dll 1421 INFO: Found file
C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_
9.0.21022.8_none_bcb86ed6ac711f91\msvcr90.dll 1421 INFO: Searching for file msvcp90.dll 1421 INFO: Found file
C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_
9.0.21022.8_none_bcb86ed6ac711f91\msvcp90.dll 1421 INFO: Searching for file msvcm90.dll 1421 INFO: Found file
C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_
9.0.21022.8_none_bcb86ed6ac711f91\msvcm90.dll 1592 INFO: Analyzing C:\python27\lib\site-packages\PyInstaller\loader_pyi_boots trap.py
1621 INFO: Processing hook hook-os 1661 INFO: Processing hook
hook-site 1681 INFO: Processing hook hook-encodings 1872 INFO:
Processing hook hook-time 1872 INFO: Processing hook hook-cPickle 1983
INFO: Processing hook hook-_sre 2173 INFO: Processing hook
hook-cStringIO 2332 INFO: Processing hook hook-codecs 2963 INFO:
Processing hook hook-pydoc 3154 INFO: Processing hook hook-email 3255
INFO: Processing hook hook-httplib 3305 INFO: Processing hook
hook-email.message 3444 INFO: Analyzing
C:\python27\lib\site-packages\PyInstaller\loader\pyi_import ers.py
3535 INFO: Analyzing
C:\python27\lib\site-packages\PyInstaller\loader\pyi_archiv e.py 3615
INFO: Analyzing
C:\python27\lib\site-packages\PyInstaller\loader\pyi_carchi ve.py 3684
INFO: Analyzing
C:\python27\lib\site-packages\PyInstaller\loader\pyi_os_pat h.py 3694
INFO: Analyzing runspider.py 3755 WARNING: No django root directory
could be found! 3755 INFO: Processing hook hook-django 3785 INFO:
Processing hook hook-lxml.etree 4135 INFO: Processing hook hook-xml
4196 INFO: Processing hook hook-xml.dom 4246 INFO: Processing hook
hook-xml.sax 4296 INFO: Processing hook hook-pyexpat 4305 INFO:
Processing hook hook-xml.dom.domreg 4736 INFO: Processing hook
hook-pywintypes 5046 INFO: Processing hook hook-distutils 7750 INFO:
Hidden import 'codecs' has been found otherwise 7750 INFO: Hidden
import 'encodings' has been found otherwise 7750 INFO: Looking for
run-time hooks 7750 INFO: Analyzing rthook
C:\python27\lib\site-packages\PyInstaller\loader\rth
ooks\pyi_rth_twisted.py 8111 INFO: Analyzing rthook
C:\python27\lib\site-packages\PyInstaller\loader\rth
ooks\pyi_rth_django.py 8121 INFO: Processing hook hook-django.core
8131 INFO: Processing hook hook-django.core.management 8401 INFO:
Processing hook hook-django.core.mail 8862 INFO: Processing hook
hook-django.db 9112 INFO: Processing hook hook-django.db.backends 9153
INFO: Processing hook hook-django.db.backends.mysql 9163 INFO:
Processing hook hook-django.db.backends.mysql.base 9163 INFO:
Processing hook hook-django.db.backends.oracle 9183 INFO: Processing
hook hook-django.db.backends.oracle.base 9253 INFO: Processing hook
hook-django.core.cache 9874 INFO: Processing hook hook-sqlite3 10023
INFO: Processing hook hook-django.contrib 10023 INFO: Processing hook
hook-django.contrib.sessions 11887 INFO: Using Python library
C:\Windows\system32\python27.dll 12226 INFO: Warnings written to
C:\Users\username\Documents\scrapyexe\build\runspid
er\warnrunspider.txt 12256 INFO: checking PYZ 12256 INFO: rebuilding
out00-PYZ.toc because out00-PYZ.pyz is missing 12256 INFO: building
PYZ (ZlibArchive) out00-PYZ.toc 16983 INFO: checking PKG 16993 INFO:
rebuilding out00-PKG.toc because out00-PKG.pkg is missing 16993 INFO:
building PKG (CArchive) out00-PKG.pkg 19237 INFO: checking EXE 19237
INFO: rebuilding out00-EXE.toc because runspider.exe missing 19237
INFO: building EXE from out00-EXE.toc 19237 INFO: Appending archive to
EXE C:\Users\username\Documents\scrapyexe\dist\run spider.exe
running built exe "runspider.exe":
C:\Users\username\Documents\scrapyexe\dist>runspider.exe
Traceback (most recent call last):
File "", line 2, in
File "C:\python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line
270, in load_module
exec(bytecode, module.__dict__)
File "C:\Users\username\Documents\scrapyexe\build\runspider\out00-PYZ.pyz\scrapy"
, line 10, in
File "C:\Users\username\Documents\scrapyexe\build\runspider\out00-PYZ.pyz\pkgutil
", line 591, in get_data
File "C:\python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line
342, in get_data
fp = open(path, 'rb')
IOError: [Errno 2] No such file or directory: 'C:\Users\username\AppData\Local\
\Temp\_MEI15522\scrapy\VERSION'
I would be extremely grateful for any kind of help. I need to know how to build a standalone exe from a scrapy spider for windows.
Thank you very much for any help. | Pyinstaller scrapy error: | 0 | 0 | 0 | 2,324 |
25,559,157 | 2014-08-28T22:44:00.000 | 1 | 0 | 1 | 0 | python,dependencies,packages | 25,560,097 | 2 | false | 0 | 0 | It depends on the project.
If you're working on a library, you'll want to put your dependencies in setup.py so that if you're putting the library on PyPI, people will be able to install it and its dependencies automatically.
If you're working on an application in Python (possibly web application), a requirements.txt file will be easier for deploying. You can copy all your code to where you need it, set up a virtual environment with virtualenv or pyvenv, and then do pip install -r requirements.txt. (You should be doing this for development as well so that you don't have a mess of libraries globally).
It's certainly easier to write the packages you're installing to your requirements.txt as soon as you've installed them than trying to figure out which ones you need at the end. What I do so that I never forget is I write the packages to the file first and then install with pip install -r.
pip freeze helps if you've forgotten what you've installed, but you should always read the file it created to make sure that you actually need everything that's in there. If you're using virtualenv it'll give better results than if you're installing all packages globally. | 1 | 0 | 0 | Lets say a developer is working on a project when he realizes he needs to use some package.
He uses pip to install it. Now, after installing it, would the developer write it down as a dependency in the requirements file / setup.py?
What does that same dev do if he forgot to write down all the dependencies of the project (or if he didn't know better since he hasn't been doing it long)?
What I'm asking is what's the workflow when working with external packages from the PyPi? | How to handle python dependencies throughout the project? | 0.099668 | 0 | 0 | 662 |
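A hedged illustration of the workflow described in the answer above: keep a requirements.txt next to the code and append each package as you install it. The package names and version pins below are only examples:

```text
# requirements.txt -- add each package as soon as you install it
tweepy==2.3
requests>=2.3,<3.0
```

Running pip install -r requirements.txt inside a fresh virtualenv then recreates the same environment anywhere.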
25,559,300 | 2014-08-28T22:59:00.000 | 3 | 0 | 1 | 0 | python,string,time-complexity | 25,559,347 | 1 | false | 0 | 0 | If the strings have length m and n, then appending them (it doesn't matter if it's at the beginning or at the end) will be an O(m+n) operation, because a new string will be created. Strings are immutable in Python, and all the chars in each of the original strings will have to be copied into the new string. | 1 | 0 | 0 | What's the time complexity of appending to the front of a python string? | Complexity of appending to front of python string | 0.53705 | 0 | 0 | 129 |
25,560,712 | 2014-08-29T02:10:00.000 | 1 | 0 | 1 | 0 | python | 25,560,823 | 1 | false | 0 | 0 | python -c "import sysconfig; print sysconfig.get_config_var('CONFIG_ARGS')"
This is the answer~ | 1 | 2 | 0 | I remember there is a command can show under which option a specific python was build, but I forget it QAQ
the option I mean is like this:
when compile python from source, we use:
./configure --prefix=/a/path/ --cflags=something | How to get a specific python's config information | 0.197375 | 0 | 0 | 35 |
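The same lookup works for other build-time values too. A small sketch (note that CONFIG_ARGS may be empty or None on interpreters not built via ./configure, e.g. on Windows):

```python
import sysconfig

print(sysconfig.get_config_var("CONFIG_ARGS"))  # the ./configure options
print(sysconfig.get_config_var("prefix"))       # install prefix, e.g. /usr
```

sysconfig.get_config_vars() (no argument name) returns the whole dictionary of build-time configuration values if you want to browse them.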
25,561,971 | 2014-08-29T05:02:00.000 | 2 | 0 | 0 | 0 | python,mysql,while-loop | 25,562,066 | 3 | false | 0 | 0 | There are some things you can do to prevent a program from being closed unexpectedly (signal handlers, etc), but they only work in some cases and not others. There is always the chance of a system shutdown, power failure or SIGKILL that will terminate your program whether you like it or not. The canonical solution to this sort of problem is to use database transactions.
If you do your work in a transaction, then the database will simply roll back any changes if your script is interrupted, so you will not have any incomplete queries. The worst that can happen is that you need to repeat the query from the beginning next time. | 1 | 0 | 0 | I was wondering if one of you could advise me on how to tackle a problem I am having. I developed a python script that updates data to a database (MySQL) every iteration (endless while loop). What I want to prevent is that if the script is accidentally closed or stopped halfway through, the script waits till all the data is loaded into the database and the MySQL connection is closed (I want this to prevent incomplete queries). Is there a way to tell the program to wait till the loop is done before it closes?
I hope this all makes sense, feel free to ask questions.
Thank you for your time in advance. | Closing python MySQL script | 0.132549 | 1 | 0 | 184 |
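The commit-or-rollback idea from the answer above, sketched with the stdlib sqlite3 module for brevity (MySQL client libraries expose the same commit()/rollback() pattern on their connection objects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (value INTEGER)")
conn.commit()

try:
    with conn:  # transaction: commits on success, rolls back on exception
        conn.execute("INSERT INTO readings VALUES (1)")
        raise KeyboardInterrupt  # simulate the script being stopped mid-batch
except KeyboardInterrupt:
    pass

# The interrupted batch was rolled back, so no half-written rows remain.
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 0
```

Using the connection as a context manager means an abrupt stop inside the block leaves the database exactly as it was before the batch started.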
25,562,570 | 2014-08-29T06:09:00.000 | 6 | 0 | 0 | 0 | python,pandas | 25,562,736 | 2 | true | 0 | 0 | Just use the negative sign on the column directly. For instance, if your DataFrame has a column "A", then -df["A"] gives the negatives of those values. | 1 | 1 | 1 | In pandas, is there any function that returns the negative of the values in a column? | How to return 'negative' of a value in pandas dataframe? | 1.2 | 0 | 0 | 2,315 |
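A quick demonstration of the unary-minus answer above (assumes pandas is installed; the column name "A" is just an example):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, -2, 3]})
neg = -df["A"]          # unary minus negates the column element-wise
print(neg.tolist())     # [-1, 2, -3]
```

The same operator also works on a whole numeric DataFrame, e.g. -df.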
25,565,737 | 2014-08-29T09:46:00.000 | 3 | 0 | 0 | 0 | python,selenium,screenshot,xvfb | 25,566,080 | 1 | true | 0 | 0 | With get_screenshot_as_file, the screenshot gets saved into a binary file, while get_screenshot_as_base64 will return you a base64-encoded version of that screenshot.
So, why would anyone use the base64 version? The whole idea behind base64 is that it allows you to create an ASCII representation of binary data, which will increase the data size but will also allow you to actually work with it. For example, if you tried to send a stream of binary data over a socket without encoding it, then unless the server was prepared to handle binary data, the result is hard to predict.
As a result, the transferred data may be malformed, the transfer may be cut short early, and many other outcomes that are almost impossible to predict may occur. For example, if you were to run a very simple socket server that just prints everything it receives to std::out, receiving a binary file would most likely corrupt your console terminal (you can try it on your very own Linux box).
Of course if the server is designed to receive and handle binary data then this will not be an issue, but most often the server-end will interpret user input as string which makes using base64 a wise choice. | 1 | 0 | 0 | I'm wondering what the advantages/disadvantages of one over the other are?
I'm running against Selenium Server on a headless remote instance with Xvfb acting as the display.
Both methods work fine, and the resulting screen capture files (if I convert the base64 one and save it as an image file) are identical in file size and look identical.
So why would I want to use/not use one over the other? | Selenium get_screenshot_as_file vs get_screenshot_as_base64? | 1.2 | 0 | 1 | 3,273 |
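The size/safety trade-off the answer describes is easy to see with the stdlib base64 module. This is a standalone sketch, independent of Selenium:

```python
import base64

raw = bytes(range(256))            # arbitrary binary data, e.g. PNG bytes
encoded = base64.b64encode(raw)    # ASCII-only, roughly 4/3 the size
decoded = base64.b64decode(encoded)

print(len(raw), len(encoded))      # 256 344
print(decoded == raw)              # True: the round trip is lossless
```

So a base64 screenshot costs about a third more bytes but can safely pass through any text-oriented channel (JSON, logs, HTML data URIs), which is exactly why the WebDriver wire protocol ships screenshots that way.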
25,565,774 | 2014-08-29T09:48:00.000 | -1 | 0 | 0 | 0 | python,drupal,cookies,web-crawler | 25,565,899 | 3 | false | 1 | 0 | I don't think you need BeautifulSoup for this. You could do this with urllib2 for connection and cookielib for operations on cookies. | 1 | 0 | 0 | I've been tasked with creating a cookie audit tool that crawls the entire website and gathers data on all cookies on the page and categorizes them according to whether they follow user data or not. I'm new to Python but I think this will be a great project for me, would beautifulsoup be a suitable tool for the job? We have tons of sites and are currently migrating to Drupal so it would have to be able to scan Polopoly CMS and Drupal. | BeautifulSoup crawling cookies | -0.066568 | 0 | 1 | 4,870 |
25,569,387 | 2014-08-29T13:32:00.000 | 2 | 0 | 1 | 0 | python,django,installation,pip,soappy | 25,569,512 | 1 | true | 0 | 0 | Do you use Python2? SOAPpy isn't compatible with Python3. | 1 | 0 | 0 | I want to install SOAPpy library on my windows 7 but when I run "pip install soappy" or "easy_install soappy" I get this error: "ImportError: No module named WSDLTools"
I also tried downloading the zip file and compiling and installing it, but again I get this error. Can anyone help me? Thanks | install SOAPpy using pip | 1.2 | 0 | 0 | 911
25,571,001 | 2014-08-29T15:01:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 25,571,159 | 2 | false | 0 | 0 | Since your dict-like class isn't in fact a dictionary, I'd go with MutableMapping. Subclassing dict implies dict-like characteristics, including performance characteristics, which won't be true if you're actually hitting a database. | 1 | 0 | 0 | Looks like there are multiple ways to do that but couldn't find the latest best method.
Subclass UserDict
Subclass DictMixin
Subclass dict
Subclass MutableMapping
What is the correct way to do it? I want to abstract the actual data, which is in a database. | How to create a dict like class in python 2.7? | 0 | 0 | 0 | 83
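A minimal sketch of the MutableMapping route recommended above. Python 3 is shown (on 2.7 the base class lives at collections.MutableMapping), and the plain dict here stands in for real database calls:

```python
from collections.abc import MutableMapping

class DBDict(MutableMapping):
    """Dict-like facade; each method would issue a database query."""

    def __init__(self, initial=None):
        self._backend = dict(initial or {})   # stand-in for the DB

    def __getitem__(self, key):
        return self._backend[key]             # e.g. SELECT ... WHERE k=?

    def __setitem__(self, key, value):
        self._backend[key] = value            # e.g. INSERT OR REPLACE ...

    def __delitem__(self, key):
        del self._backend[key]                # e.g. DELETE ... WHERE k=?

    def __iter__(self):
        return iter(self._backend)

    def __len__(self):
        return len(self._backend)

d = DBDict({"a": 1})
d["b"] = 2
print(dict(d), d.get("missing", 0), "a" in d)  # {'a': 1, 'b': 2} 0 True
```

Only the five abstract methods need implementing; MutableMapping then supplies get, pop, update, setdefault, keys/items/values and the in operator for free, without promising dict-like performance.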
25,577,470 | 2014-08-29T23:24:00.000 | 1 | 1 | 0 | 0 | python,c,language-interoperability | 25,577,983 | 2 | false | 1 | 1 | For C, you can use the ctypes module or SWIG.
For Java, Jython is a good choice. | 1 | 0 | 0 | Edit<<<<<<<
The question is:
-How do you launch C code from python? (say, in a function)
-How do you load Java code into python? (perhaps in a class?)
-Can you simply work with these two in a python program or are there special considerations?
-Will it be worth it, or will integrating cause too much lag?
Being familiar with all three languages (C, Java and Python), and knowing that Python supports C libraries (and apparently can integrate with Java also), I was wondering if Python could integrate a program using both languages?
What I would like is fast flexible C functions while taking advantage of Java's extensive front-end libraries and coordinating the two in Python's clean, readable syntax.
Is this possible?
EDIT---->
To be more specific, I would like to write and execute python code that integrates my own fast C functions. Then, call Java libraries like swing to create user interface and handle networking. Probably taking advantage of XML as well to aid in file manipulation. | How do you use Python with C and Java simultaneously? | 0.099668 | 0 | 0 | 152 |
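As a hedged illustration of the ctypes route for the C side (Linux/macOS only; Windows loads DLLs differently). Passing None to CDLL exposes symbols already linked into the running process, which includes the C library, so no library-path lookup is needed for a quick demonstration:

```python
import ctypes

# Load symbols from the current process (works on Linux/macOS; on Windows
# you would pass a DLL path instead).
libc = ctypes.CDLL(None)

# Declare the signature so ctypes converts arguments and results correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42
```

The same pattern scales to your own compiled shared library: build it with `cc -shared -fPIC`, load it with `ctypes.CDLL("./libfast.so")`, and declare each function's argtypes/restype.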
25,580,925 | 2014-08-30T09:28:00.000 | 0 | 1 | 1 | 0 | python,stringio,cstringio | 25,580,974 | 3 | false | 0 | 0 | It is not necessarily obvious from the source, but Python file objects are built straight on the C library functions, with likely a small layer of Python to present a Python class, or even a C wrapper to present a Python class. The native C library is going to be highly optimised to read bytes and blocks from disk. The Python StringIO library is all native Python code - which is slower than native C code. | 1 | 7 | 0 | I'm looking through the source of StringIO where it says some notes:
Using a real file is often faster (but less convenient).
There's also a much faster implementation in C, called cStringIO, but
it's not subclassable.
StringIO is just like an in-memory file object,
so why is it slower than a real file object? | Why is StringIO object slower than real file object? | 0 | 0 | 0 | 2,767
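In Python 3 the pure-Python/C split is gone (io.StringIO is implemented in C), but the comparison is easy to reproduce. A rough sketch: absolute timings vary by platform and filesystem, so the printed numbers are illustrative only and nothing about relative speed is guaranteed here:

```python
import io
import tempfile
import time

data = "x" * 100
N = 10000

# Write N chunks to an in-memory buffer and time it.
t0 = time.perf_counter()
buf = io.StringIO()
for _ in range(N):
    buf.write(data)
mem = buf.getvalue()
t_mem = time.perf_counter() - t0

# Write the same chunks to a real (temporary) file and time that.
t0 = time.perf_counter()
with tempfile.TemporaryFile(mode="w+") as f:
    for _ in range(N):
        f.write(data)
    f.seek(0)
    disk = f.read()
t_disk = time.perf_counter() - t0

print("io.StringIO: %.4fs, real file: %.4fs" % (t_mem, t_disk))
assert mem == disk  # both paths hold exactly the same data
```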
25,583,468 | 2014-08-30T14:31:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,unicode,utf-8 | 25,584,492 | 3 | false | 0 | 0 | If your application really requires you to be able to represent 256 different byte values in a graphically distinguishable form, all you actually need is 256 Unicode code points. Problem solved.
ASCII codes 33-127 are a no-brainer, Unicode code points 160-255 are also good candidates for representing themselves but you might want to exclude a few which are hard to distinguish (if you want OCR or humans to handle them reliably, áåä etc might be too similar). Pick the rest from the set of code points which can be represented in two bytes -- quite a large set, but again, many of them are graphically indistinguishable from other glyphs in most renderings.
This scheme does not attempt any form of compression. I imagine you'd get better results by compressing your data prior to encoding it if that's an issue. | 1 | 5 | 0 | I have arbitrary binary data. I need to store it in a system that expects valid UTF8. It will never be interpreted as text, I just need to put it in there and be able to retrieve it and reconstitute my binary data.
Obviously base64 would work, but I can't have that much inflation.
How can I easily achieve this in python 2.7? | Store arbitrary binary data on a system accepting only valid UTF8 | 0 | 0 | 0 | 481 |
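The 256-code-point idea in the answer above has a particularly convenient instance: map each byte 0-255 to the Unicode code point with the same number, which is exactly what the latin-1 codec does. Worst-case inflation is 2x (1 byte per byte for values under 128, 2 for the rest), so it beats base64 only when the data is mostly low bytes — a trade-off to weigh. Python 3 syntax shown; on 2.7 the same decode/encode pair applies:

```python
import os

def to_utf8_safe(raw: bytes) -> bytes:
    # Byte b -> code point b (the latin-1 codec), then emit valid UTF-8.
    return raw.decode("latin-1").encode("utf-8")

def from_utf8_safe(stored: bytes) -> bytes:
    # Reverse: parse the UTF-8, then map code points 0-255 back to bytes.
    return stored.decode("utf-8").encode("latin-1")

blob = os.urandom(64)                  # arbitrary binary data
stored = to_utf8_safe(blob)            # always valid UTF-8, at most 2x larger
assert from_utf8_safe(stored) == blob  # lossless round trip
```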
25,585,500 | 2014-08-30T18:23:00.000 | 0 | 1 | 0 | 1 | python,atom-editor | 69,060,186 | 6 | false | 0 | 0 | There is a package called "platformio-ide-terminal" that allows you to run Atom code with Ctrl + Shift + B". That's the only package you need (Windows). | 2 | 86 | 0 | In Sublime, we have an easy and convent way to run Python or almost any language for that matter using ⌘ + b (or ctrl + b)
Where the code will run in a small window below the source code and can easily be closed with the escape key when no longer needed.
Is there a way to replicate this functionally with Github's atom editor? | Running Python from Atom | 0 | 0 | 0 | 209,849 |
25,585,500 | 2014-08-30T18:23:00.000 | 3 | 1 | 0 | 1 | python,atom-editor | 57,858,735 | 6 | false | 0 | 0 | To run the python file on mac.
Open the preferences in atom ide. To open the preferences press 'command + . '
( ⌘ + , )
Click on the install in the preferences to install packages.
Search for package "script" and click on install
Now open the python file(with .py extension ) you want to run and press 'control + r ' (^ + r) | 2 | 86 | 0 | In Sublime, we have an easy and convent way to run Python or almost any language for that matter using ⌘ + b (or ctrl + b)
Where the code will run in a small window below the source code and can easily be closed with the escape key when no longer needed.
Is there a way to replicate this functionally with Github's atom editor? | Running Python from Atom | 0.099668 | 0 | 0 | 209,849 |
25,586,996 | 2014-08-30T21:40:00.000 | 1 | 0 | 1 | 1 | python,python-2.7,inotify | 25,587,191 | 4 | true | 0 | 0 | There are several ways to detect changes in files. Some are easier to
fool than others. It doesn't sound like this is a security issue; more
like good faith is assumed, and you just need to detect changes without
having to outwit an adversary.
You can look at timestamps. If files are not renamed, this is a good way
to detect changes. If they are renamed, timestamps alone wouldn't
suffice to reliably tell one file from another. os.stat will tell you
the time a file was last modified.
You can look at inodes, e.g., ls -li. A file's inode number may change
if changes involve creating a new file and removing the old one; this is
how emacs typically changes files, for example. Try changing a file
with the standard tool your organization uses, and compare inodes before
and after; but bear in mind that even if it doesn't change this time, it
might change under some circumstances. os.stat will tell you inode
numbers.
You can look at the content of the files. cksum computes a small CRC
checksum on a file; it's easy to beat if someone wants to. Programs such
as sha256sum compute a secure hash; it's infeasible to change a file
without changing such a hash. This can be slow if the files are large.
The hashlib module will compute several kinds of secure hashes.
If a file is renamed and changed, and its inode number changes, it would
be potentially very difficult to match it up with the file it used to
be, unless the data in the file contains some kind of immutable and
unique identifier.
Think about concurrency. Is it possible that someone will be changing a
file while the program runs? Beware of race conditions. | 3 | 0 | 0 | I want to monitor a folder and see if any new files are added, or existing files are modified. The problem is, it's not guaranteed that my program will be running all the time (so, inotify based solutions may not be suitable here). I need to cache the status of the last scan and then with the next scan I need to compare it with the last scan before processing the files.
What are the alternatives for achieving this in Python 2.7?
Note1: Processing the files is expensive, so I'm trying to avoid re-processing files that have not been modified in the meantime. So, if a file is only renamed (as opposed to a change in its contents), I would also like to detect that and skip the processing.
Note2: I'm only interested in a Linux solution, but I wouldn't complain if answers for other platforms are added. | How to find modified files in Python | 1.2 | 0 | 0 | 1,192 |
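The secure-hash approach is straightforward with hashlib; a sketch that reads in chunks so large files never need to fit in memory (the throwaway temp file is only for demonstration — in practice you would cache the {path: digest} mapping between runs):

```python
import hashlib
import os
import tempfile

def file_digest(path, chunk_size=1 << 16):
    """Return the SHA-256 hex digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Tiny demonstration with a throwaway file
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    with open(path, "wb") as f:
        f.write(b"version 1")
    before = file_digest(path)

    with open(path, "wb") as f:
        f.write(b"version 2")
    after = file_digest(path)

    print(before != after)  # True: the content change is detected
finally:
    os.remove(path)
```

Because the digest depends only on content, a renamed-but-unchanged file hashes identically and can be skipped.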
25,586,996 | 2014-08-30T21:40:00.000 | 1 | 0 | 1 | 1 | python,python-2.7,inotify | 25,587,068 | 4 | false | 0 | 0 | I would probably go with some kind of SQLite solution, such as writing down the last polling time.
Then on each poll, sort the files by last_modified_time (mtime) and get all the ones whose mtime is greater than your previous poll time (read that value from SQLite, or from a plain file if you'd rather not depend on a database).
What are the alternatives for achieving this in Python 2.7?
Note1: Processing the files is expensive, so I'm trying to avoid re-processing files that have not been modified in the meantime. So, if a file is only renamed (as opposed to a change in its contents), I would also like to detect that and skip the processing.
Note2: I'm only interested in a Linux solution, but I wouldn't complain if answers for other platforms are added. | How to find modified files in Python | 0.049958 | 0 | 0 | 1,192 |
25,586,996 | 2014-08-30T21:40:00.000 | 1 | 0 | 1 | 1 | python,python-2.7,inotify | 25,587,181 | 4 | false | 0 | 0 | Monitoring for new files isn't hard -- just keep a list or database of inodes for all files in the directory. A new file will introduce a new inode. This will also help you avoid processing renamed files, since inode doesn't change on rename.
The harder problem is monitoring for file changes. If you also store file size per inode, then obviously a changed size indicates a changed file and you don't need to open and process the file to know that. But for a file that has (a) a previously recorded inode, and (b) is the same size as before, you will need to process the file (e.g. compute a checksum) to know if it has changed. | 3 | 0 | 0 | I want to monitor a folder and see if any new files are added, or existing files are modified. The problem is, it's not guaranteed that my program will be running all the time (so, inotify based solutions may not be suitable here). I need to cache the status of the last scan and then with the next scan I need to compare it with the last scan before processing the files.
What are the alternatives for achieving this in Python 2.7?
Note1: Processing the files is expensive, so I'm trying to avoid re-processing files that have not been modified in the meantime. So, if a file is only renamed (as opposed to a change in its contents), I would also like to detect that and skip the processing.
Note2: I'm only interested in a Linux solution, but I wouldn't complain if answers for other platforms are added. | How to find modified files in Python | 0.049958 | 0 | 0 | 1,192 |
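Combining the inode, size, and mtime suggestions from the answers above, a cached snapshot per scan might look like this (a sketch; the flat directory layout is an assumption, and subdirectories are ignored):

```python
import os

def snapshot(directory):
    """Map filename -> (inode, size, mtime) for every regular file."""
    info = {}
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            st = os.stat(path)
            info[name] = (st.st_ino, st.st_size, st.st_mtime)
    return info

def changed_files(old, new):
    """Names present in both snapshots whose inode/size/mtime differ."""
    return [n for n in new if n in old and new[n] != old[n]]

# Persist the previous snapshot (pickle/JSON/SQLite) between cron runs,
# then process only changed_files(previous, snapshot(directory)).
```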
25,587,180 | 2014-08-30T22:10:00.000 | 0 | 1 | 0 | 0 | java,python,c,algorithm,collision-detection | 25,588,001 | 1 | false | 0 | 1 | Normally, sphere collisions are just a filter for real collision tests.
You can either:
decide to limit collisions to that, for example if it's a game.
implement real collisions and do the full math. You're basically intersecting the rotated edges of the two rectangles (16 cases). Intersect two edges as if they were lines; there will be only one point of intersection (unless they're parallel), and if that point lies inside both segments there's a collision.
To limit complexity, you can use a quadtree/octree. Divide your space into four rectangles, then those rectangles into four etc ... until they're too small to contain an object or are empty. Put your collidable objects into the most specific part of the tree that will contain them. Two objects can only collide if they are in the same sub rectangle or one is in any parent of the other.
Not sure that helps, but they're ideas. | 1 | 0 | 0 | How can I efficiently detect collision between image layers and generated shapes?
I need a fast and comprehensive method to detect collision between rotatable image layers and generated shapes.
So far I have just been dividing images into a series of circles that contain the majority of the pixels and then testing each circle against other shapes. To enhance performance I created perimeter circles around each structure and only test these larger circles until two structures are close enough to collide.
The real problem is that it is very difficult to collide a rotatable rectangle, for example, into one of these image structures. Filling a rectangle with circles also just does not seem efficient. Not to mention that I'm getting combinatoric explosions that make the looping very complex and the performance poor.
Anyone know a better way to handle this type of collision? I write in Java, Python and C. I am willing to use C++ also. | Collision detection between image layers and shapes | 0 | 0 | 0 | 170 |
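The edge-against-edge math mentioned in the answer reduces to 2D segment intersection; a standard cross-product sketch (strict inequalities, so collinear or merely touching cases count as non-intersecting — a deliberate simplification):

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; its sign gives orientation."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4."""
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    # Endpoints of each segment must lie on opposite sides of the other line.
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

# Two rotated rectangles collide if any edge of one crosses any edge of the
# other (full containment of one inside the other must be handled separately).
print(segments_intersect((0, 0), (2, 2), (0, 2), (2, 0)))  # True
print(segments_intersect((0, 0), (1, 0), (0, 1), (1, 1)))  # False
```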
25,589,259 | 2014-08-31T05:41:00.000 | 4 | 1 | 0 | 0 | python,amazon-web-services,amazon-ec2,boto | 25,592,181 | 5 | false | 0 | 0 | If you are already using boto you can also use the boto.utils.get_instance_metadata function. This makes the call to the metadata server, gathers all of the metadata and returns it as a Python dictionary. It also handles retries. | 1 | 8 | 0 | How to obtain the public ip address of the current EC2 instance in python ? | How to get the public ip of current ec2 instance in python? | 0.158649 | 0 | 1 | 7,397 |
25,593,876 | 2014-08-31T16:14:00.000 | 1 | 0 | 0 | 0 | python,algorithm | 25,594,611 | 4 | true | 0 | 0 | You want to generate a random n*m matrix of integers 1..k with every integer used, and no integer used twice in any row. And you want to do it efficiently.
If you just want to generate a reasonable answer, reasonably quickly, you can generate the rows by taking a random selection of elements, and putting them into a random order. random.sample (or numpy.random.choice with replace=False) and numpy.random.shuffle can do that. You will avoid the duplicate element issue. If you fail to use all of your elements, then what you can do is randomly "evolve" this towards a correct solution. At every step identify all elements that appear more than once in the matrix, randomly select one, and convert it to an as yet unused integer from 1..k. This will not cause duplications within rows, and will in at most k steps give you a matrix of the desired form.
Odds are that this is a good enough answer for what you want, and is what you should do. But it is imperfect - matrices of this form do not all happen with exactly equal probability. (In particular ones with lots of elements only appearing once show up slightly more than they should.) If you need a perfectly even distribution, then you're going to have to do a lot more work.
To get there you need a bit of theory. If you have that theory, then you can understand the answer as, "Do a dynamic programming solution forwards to find a count of all possible solutions, then run that backwards making random decisions to identify a random solution." Odds are that you don't have that theory.
I'm not going to give a detailed explanation of that theory. I'll just outline what you do.
You start with the trivial statement, "There are k!/(k-m)! ways in which I could have a matrix with 1 row satisfying my condition using m of the k integers, and none which use more."
For i from 1..n, for j from m to k, you figure out the count of ways in which you could build i rows using j of the k integers. You ALSO keep track of how many of those ways came from which previous values for j for the previous row. (You'll need that later.) This step can be done in a double loop.
Note that the value in the table you just generated for j=k and i=n is the number of matrices that satisfy all of your conditions. We'll build a random one from the bottom up.
First you generate a random row for the last row of your matrix - all are equally likely.
For each row until you get to the top, you use the table you built to randomly decide how many of the elements that you used in the last row you generated will never be used again. Randomly decide which those elements will be. Generate a random row from the integers that you are still using.
When you get to the top you'll have chosen a random matrix meeting your description, with no biases in how it was generated. | 1 | 2 | 1 | Given an integer k, I am looking for a pythonic way to generate a nxm matrix (or nested list) which has every integer from 0..k-1 but no integer appears more than once in each row.
Currently I'm doing something like this
random.sample(list(combinations(xrange(k), m)), n)
but this does not guarantee every number from 0..k-1 is included, only that no integer appears more than once in each row. Also this has combinatorial complexity which is obviously undesirable.
Thanks. | Generate random matrix with every number 0..k | 1.2 | 0 | 0 | 407 |
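The "reasonable answer, reasonably quickly" approach above can be sketched with the standard library alone — random.sample keeps each row duplicate-free, and the repair loop swaps duplicated values for unused ones. Note this does not give the perfectly uniform distribution discussed in the rest of the answer:

```python
import random

def random_matrix(n, m, k):
    """n x m matrix over 0..k-1: no repeats in any row, every value used.

    Needs m <= k (room for a duplicate-free row) and n * m >= k
    (room to use every value); otherwise the constraints can't all hold.
    """
    rows = [random.sample(range(k), m) for _ in range(n)]
    missing = set(range(k)) - {v for row in rows for v in row}
    while missing:
        # By pigeonhole some value appears at least twice; replace one
        # occurrence with a missing value (which can't already be in any
        # row, so rows stay duplicate-free).
        seen = set()
        done = False
        for row in rows:
            for j, v in enumerate(row):
                if v in seen:
                    row[j] = missing.pop()
                    done = True
                    break
                seen.add(v)
            if done:
                break
        if not done:
            raise ValueError("need n * m >= k to use every value")
    return rows

print(random_matrix(4, 3, 10))
```

Each repair step shrinks the missing set by one, so the loop terminates after at most k iterations, matching the bound in the answer.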
25,594,825 | 2014-08-31T18:09:00.000 | 2 | 0 | 1 | 0 | python,performance,file,file-io | 25,595,291 | 3 | false | 0 | 0 | The code which can cause this is not part of Python. If you are writing to a file system type which has issues with large files, the code you need to examine is the file system driver.
For workarounds, experiment with different file systems for your platform (but then this is no longer a programming question, and hence doesn't belong on StackOverflow). | 1 | 4 | 0 | I was trying to write around 5 billion lines to a file using Python. I have noticed that the performance of the writes gets worse as the file gets bigger.
For example, at the beginning I was writing 10 million lines per second; after 3 billion lines, writes are 10 times slower than before.
I was wondering if this is actually related to the size of the file?
That is, would performance improve if I broke this big file into smaller ones, or does the file size not affect write performance?
If you think it affects the performance, can you please explain why?
--Some more info --
The memory consumption is the same (1.3%) all the time, and the length of the lines is the same. The logic is that I read one line from a file (let's call it file A). Each line of file A contains two tab-separated values; if one of the values has some specific characteristics, I add the same line to file B. This operation is O(1): I just convert the value to int and check whether that value % someNumber matches any of the 7 flags I want.
Every time I read 10M lines from file A, I output the line number (that's how I know the performance dropped). File B is the one that gets bigger and bigger, and the writes to it get slower.
The OS is Ubuntu. | Does size of a file affects the performance of the write in python | 0.132549 | 0 | 0 | 1,433 |
25,595,912 | 2014-08-31T20:17:00.000 | 3 | 1 | 1 | 0 | python | 25,595,929 | 1 | false | 0 | 0 | The key thing I find useful is writing good quality documentation strings, and using a tool such as pydoc to provide you with automatic documentation on your code. | 1 | 0 | 0 | I have been coding in static languages like Java and C++ for a very long time, and recently I started coding in Python a little, but one thing that keeps me rather "annoyed" is its lack of types. I frequently find myself trying to figure out where an object is coming from (if the code is a little old) and its type, to know exactly what I am dealing with in terms of its content and functionality. Is there any reference or suggestion on a paradigm or coding style for Python so I can code better in Python without being slowed down by constantly thinking about an object's type?
thanks | python coding style due to lack of type | 0.53705 | 0 | 0 | 70 |
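To illustrate the docstring advice above: documenting parameter and return types in the docstring gives pydoc/help() something concrete to show. Sphinx-style fields are used here, and the function itself is made up:

```python
def scale(values, factor):
    """Multiply every number in a sequence by a constant.

    :param values: sequence of numbers (list of int/float)
    :param factor: int or float to multiply each element by
    :returns: new list of scaled numbers
    """
    return [v * factor for v in values]

print(scale([1, 2, 3], 2))            # [2, 4, 6]
print(scale.__doc__.splitlines()[0])  # the summary line pydoc displays
```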
25,599,932 | 2014-09-01T06:24:00.000 | 0 | 0 | 0 | 0 | python,networking,scripting | 25,599,980 | 3 | false | 0 | 0 | You could just redirect constant ping results to a txt file. from command prompt (as admin) ping (address) -t >log.txt | 2 | 0 | 0 | Basically I need a way to check my internet connectivity in a sense. I've been having trouble with my net dropping out randomly and know it's not my end. But the ISP wants a little more proof. Basically I need something that can check latency and if its connecting at all on roughly an hourly basis and recording this information to a text file that I can view (and read back to them when I call them up next.) I was originally thinking of using python but my python is dodgy at best. But if another way is easier (either using a different scripting language or some program) I'm happy to use that as well.
EDIT: I'm not sure if that was clear so I'll summarize. It needs to ping, then record the response and the time it was pinged in a text document in a readable way. It must ping roughly every hour. | In need of a way to ping a specific IP address and record the time of each ping as well as the result | 0 | 0 | 1 | 166
25,599,932 | 2014-09-01T06:24:00.000 | 0 | 0 | 0 | 0 | python,networking,scripting | 25,600,187 | 3 | false | 0 | 0 | If you mean associating a time with the pings - you could write it as a batch file where you call time (with a /T so it doesn't ask for input) and run a ping (don't add the -t there, so it runs just the standard 4), and then loop. Or you could consider running a tracert (in a loop), which would give a longer but more meaningful output as to where the failure might be happening (maybe you're getting out past your router but not getting DNS, that type of thing). | 2 | 0 | 0 | Basically I need a way to check my internet connectivity in a sense. I've been having trouble with my net dropping out randomly and know it's not my end. But the ISP wants a little more proof. Basically I need something that can check latency and if its connecting at all on roughly an hourly basis and recording this information to a text file that I can view (and read back to them when I call them up next.) I was originally thinking of using python but my python is dodgy at best. But if another way is easier (either using a different scripting language or some program) I'm happy to use that as well.
EDIT: I'm not sure if that was clear so I'll summarize. It needs to ping, then record the response and the time it was pinged in a text document in a readable way. It must ping roughly every hour. | In need of a way to ping a specific IP address and record the time of each ping as well as the result | 0 | 0 | 0 | 166
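A hedged Python sketch in the spirit of the question: run a check command (e.g. the system ping), time it, and append a timestamped OK/FAIL line to a log file. The host, hourly interval, and log filename in the comment are placeholders:

```python
import datetime
import subprocess

def record_check(cmd, logfile):
    """Run cmd; append '<timestamp> OK|FAIL <elapsed>s' to logfile."""
    start = datetime.datetime.now()
    try:
        ok = subprocess.call(cmd, stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL, timeout=30) == 0
    except subprocess.TimeoutExpired:
        ok = False
    elapsed = (datetime.datetime.now() - start).total_seconds()
    with open(logfile, "a") as f:
        f.write("%s %s %.2fs\n"
                % (start.isoformat(), "OK" if ok else "FAIL", elapsed))
    return ok

# Intended use (placeholders): loop with time.sleep(3600) around
#   record_check(["ping", "-c", "4", "8.8.8.8"], "ping_log.txt")
# (on Windows the count flag is -n instead of -c)
```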
25,602,155 | 2014-09-01T08:54:00.000 | 1 | 0 | 0 | 0 | python,pandas,hdfstore | 25,607,452 | 1 | false | 0 | 0 | These are the differences:
multiple files
when using multiple files you can only corrupt a single file when writing (e.g. if you have a power failure mid-write)
you can parallelize writing with multiple files (note - never, ever try to parallelize with a single file, as this will corrupt it!!!)
single file
grouping of logical sets
IMHO the advantages of multiple files outweigh using a single file, as you can easily replicate the grouping properties by using subdirectories.
What are the pros and cons of
a - storing the csv file as 100 different HDFStore files?
b - storing all the csv files as separate items in a single HDFStore?
Other than performance issues, I am asking the question as I am having stability issues and my HDFStore files often get corrupted. So, for me, there is a risk associated with a single HDFStore. However, I am wondering if there are benefits to having a single store. | Multiple files or single files into HDFStore | 0.197375 | 0 | 0 | 202 |
25,604,247 | 2014-09-01T10:53:00.000 | 0 | 0 | 0 | 0 | python,emr,mrjob | 26,083,485 | 1 | true | 0 | 0 | Well, after many searches, it seems there is no such option | 1 | 2 | 0 | I'm trying to set an IAM role to my EMR cluster with mrjob 0.4.2.
I saw that there is a new option in 0.4.3 to do this, but it is still in development and I prefer to use the stable version instead.
Any idea on how to do this? I have tried to create the cluster using Amazon's console and then run the bootstrap+step actions using mrjob (by connecting to that cluster), but it didn't work.
Another option is being able to change the default permissions for EMR instances so mrjob will be able to take advantage of it. | How to set IAM role with MrJob 0.4.2 on EMR | 1.2 | 0 | 0 | 142 |
25,608,336 | 2014-09-01T14:46:00.000 | 0 | 0 | 0 | 1 | python,download,cloud,bucket,gsutil | 26,923,909 | 2 | false | 1 | 0 | First thing you need to know is that gsutil tools works only with python version 2.7 or lower for windows.
Once you have the correct python version, Please follow following steps if you are a windows user :
open commend prompt and switch to your gsutil directory using:
-- cd\
-- cd gsutil
Once you are in the gsutil directory execute following command:
python gsutil config -b
This will open a link a browser requesting you access to your google account. Please make sure you are logged into google from the account you want to access cloud storage and grant access
Once done, this will give you a KEY (authorization code). Copy that key and paste it back into your command prompt. Hit enter and now this will ask you a PROJECT-ID.
Now Navigate to your cloud console and provide the PROJECT-ID.
If Successful, this will create a .boto file in c:\users.
Now you are ready to access your private buckets from cloud console. For this, user following command: C:\python27>python c:\gsutil\gsutil cp -r gs://your_bucked_id/path_to_file path_to_save_files | 1 | 1 | 0 | My computer crashed and I need to download everything I stored on the Google Cloud. I am not a computer tech and I can't seem to find a way to download whole buckets from Google Cloud.
I have tried to follow the instructions given in the Google help docs. I have downloaded and installed Python and I downloaded gsutil and followed the instructions to put it in my c:\ drive (I can see it there). When I go to the command prompt and type cd \gsutil the next prompt says "c:\gsutil>" but I'm not sure what to do with that.
When I type "gsutil config" it says "file 'c:\gsutil\gsutil.py", line 2 SyntaxError: encoding problem utf8".
When I type "python gsutil" (which the instructions said would give me a list of commands) it says "'python' is not recognized as an internal or external command, operable program or batch file" even though I did the full installation process for Python.
Someone suggested a more user-friendly program called Cloudberry Explorer which I downloaded and installed, but the list of sources I can set up does not include Google Cloud.
Can anyone help? | Need help downloading bucket from Google Cloud | 0 | 0 | 0 | 775 |
25,609,153 | 2014-09-01T15:36:00.000 | 0 | 1 | 0 | 0 | python,paramiko | 71,871,218 | 7 | false | 0 | 0 | Well, I was also getting this with one of the Juniper devices. The timeout didn't help at all. When I used pyex with it, it created multiple SSH/NETCONF sessions with the Juniper box. Once I changed "set system services ssh connection-limit 10" from 5, it started working. | 2 | 77 | 0 | Recently, I wrote some code based on Paramiko that connects to a workstation under different usernames (using a private key).
I never had any issues with it, but today I get this: SSHException: Error reading SSH protocol banner
This is strange because it happens randomly on any connection. Is there any way to fix it? | Paramiko : Error reading SSH protocol banner | 0 | 0 | 1 | 118,288
25,609,153 | 2014-09-01T15:36:00.000 | 3 | 1 | 0 | 0 | python,paramiko | 68,010,453 | 7 | false | 0 | 0 | Paramiko seems to raise this error when I pass a non-existent filename to the key_filename kwarg. I'm sure there are other situations where this exception is raised nonsensically. | 2 | 77 | 0 | Recently, I wrote some code based on Paramiko that connects to a workstation under different usernames (using a private key).
I never had any issues with it, but today I get this: SSHException: Error reading SSH protocol banner
This is strange because it happens randomly on any connection. Is there any way to fix it? | Paramiko : Error reading SSH protocol banner | 0.085505 | 0 | 1 | 118,288
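For what it's worth, a common mitigation for this intermittent error is retrying the connection (Paramiko's connect also accepts a banner_timeout argument that can be raised). A generic, hedged retry helper — the attempt count and delay are arbitrary, and the commented SSH usage is only a sketch:

```python
import time

def retry(func, attempts=3, delay=1.0, exceptions=(Exception,)):
    """Call func(), retrying on the given exceptions with a fixed delay."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except exceptions:
            if attempt == attempts:
                raise  # out of attempts: let the last error propagate
            time.sleep(delay)

# Intended use (sketch, names assumed):
#   retry(lambda: client.connect(host, username=user, key_filename=key,
#                                banner_timeout=60),
#         exceptions=(paramiko.SSHException,))
```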
25,617,706 | 2014-09-02T07:08:00.000 | -1 | 1 | 0 | 1 | python | 25,617,956 | 5 | false | 0 | 0 | You can use Twisted and its reactor; it is much better than an infinite loop! You can also use reactor.callLater(myTime, myFunction), and when myFunction gets called you can adjust myTime and schedule another callback with the same callLater() API. | 1 | 2 | 0 | I have a python script that does some updates on my database.
The files that this script needs are saved in a directory at around 3AM by some other process.
So I'm going to schedule a cron job to run daily at 3AM; but I want to handle the case where the file is not available exactly at 3AM, since it could be delayed by some interval.
So I basically need to keep checking whether the file of some particular name exists every 5 minutes starting from 3AM. I'll try for around 1 hour, and give up if it doesn't work out.
How can I achieve this sort of thing in Python? | 'Listening' for a file in Python | -0.039979 | 0 | 0 | 6,642 |
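A plain-stdlib sketch of the polling the question describes — check for the file every 5 minutes and give up after roughly an hour. The path and intervals are placeholders (shortened intervals exercise the same logic):

```python
import os
import time

def wait_for_file(path, interval=300, max_wait=3600):
    """Poll until path exists; True if it showed up, False on timeout."""
    deadline = time.time() + max_wait
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return os.path.exists(path)  # one last look before giving up

# Cron runs the script at 3AM; process only if the file actually arrived:
# if wait_for_file("/data/incoming/report.csv"):
#     run_updates()  # placeholder for the real processing
```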
25,618,610 | 2014-09-02T08:02:00.000 | 0 | 0 | 1 | 0 | python,windows | 25,618,663 | 2 | false | 0 | 0 | The only way to accomplish this is to install it in a different location than the default C:\Python27.
You can set the install path in the Windows installer. | 1 | 2 | 0 | On Windows 7, is there a way to install Python 2.7.8 (64-bit) without replacing the existing Python27 (64-bit) installation? | Install Python 2.7.8 (64-bit) without replacing existing Python27 installation | 0 | 0 | 0 | 1,472
25,618,756 | 2014-09-02T08:11:00.000 | 1 | 0 | 0 | 0 | python,opencv,image-processing,jpeg,alpha | 27,037,265 | 2 | false | 0 | 0 | Using OpenCV, you can open the image in RGB format, i.e., when calling cv2.imread pass 1 as the second parameter instead of -1. -1 opens the image in whatever its original format is, i.e., it retains the transparency. With the parameter set to 1, the image is opened in RGB format. After that you can call cv2.imwrite to save the RGB image without transparency; you can specify the file format as .jpg | 1 | 1 | 0 | I want to convert PNG and GIF images to JPEG using OpenCV; the alpha channel should be converted to white. Is there any way to achieve this? | how to convert alpha channel of the image to white color using opencv python? | 0.099668 | 0 | 0 | 3,371
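One caveat worth knowing: simply dropping the alpha channel composites transparent pixels onto black, not white. The blend onto white is easy to do by hand; a hedged sketch of just that per-pixel math with NumPy on a tiny synthetic image (with OpenCV you would obtain rgba via cv2.imread(path, cv2.IMREAD_UNCHANGED) and save the result with cv2.imwrite — that part is assumed here):

```python
import numpy as np

def alpha_to_white(rgba):
    """Blend an RGBA/BGRA uint8 image onto white; return 3-channel uint8."""
    rgb = rgba[..., :3].astype(np.float32)
    alpha = rgba[..., 3:4].astype(np.float32) / 255.0
    # Where alpha is low, white (255) shows through; where it's 1, the
    # original color is kept unchanged.
    out = rgb * alpha + 255.0 * (1.0 - alpha)
    return out.round().astype(np.uint8)

# 1x2 test image: one fully transparent pixel, one opaque red pixel
# (note OpenCV stores channels as BGR(A), so red is (0, 0, 255))
img = np.array([[[0, 0, 0, 0], [0, 0, 255, 255]]], dtype=np.uint8)
print(alpha_to_white(img))  # transparent -> white; opaque red unchanged
```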
25,621,035 | 2014-09-02T10:18:00.000 | 1 | 0 | 0 | 1 | python,scheduler,apscheduler | 25,633,337 | 1 | false | 1 | 0 | If you want such extra functionality, add the appropriate event listeners to the scheduler to detect the adding and any modifications to a job. In the event listener, get the job from the scheduler and store it wherever you want. They are serializable btw. | 1 | 2 | 0 | I am using apscheduler to schedule my scrapy spiders. I need to maintain history of all the jobs executed. I am using mongodb jobstore. By default, apscheduler maintains only the details of the currently running job. How can I make it to store all instances of a particular job? | Maintaining jobs history in apscheduler | 0.197375 | 0 | 0 | 472 |
25,621,349 | 2014-09-02T10:35:00.000 | 0 | 0 | 0 | 0 | python,pyqt5 | 25,676,913 | 1 | true | 0 | 1 | The problem is caused by fcitx not QT5.
Install libfcitx-qt5 and it will fix the problem. | 1 | 0 | 0 | my platform: ubuntu14.04
python 3.4.0
I moved my project from pyqt4 to pyqt5 and found that I could not enable my IME in my program powered by pyqt5...
Since no error is raised, I could not pinpoint where the problem is.
This problem is quite like "enable IME in Sublime on Linux".
Has anyone met the same problem or already fixed it? | How to use IME(Input Method Editor) with pyqt5 | 1.2 | 0 | 0 | 382
25,622,764 | 2014-09-02T11:52:00.000 | 2 | 0 | 0 | 0 | python,user-interface,tkinter,wxpython | 25,622,901 | 3 | false | 0 | 1 | There is no (simple) way to do that - to start with, wxWidgets is an abstraction over different toolkits on different systems and uses different mainloop functions, while Tkinter has its own mainloop.
So making that work would at least require:
that you'd set up different threads able to run both mainloops in parallel,
finding a way to get Tkinter to render the widget to an in-memory bitmap,
creating a custom widget in wx which would render that bitmap to the screen,
and mapping events on it back to Tkinter, if it is supposed to respond to events.
So you are definitely better off rewriting the widget. | 1 | 2 | 0 | I created an analog rpm gauge using the canvas widget of Tkinter and I want to import it into a wx GUI application (as a panel, maybe). Is there any way to do it, or must I rewrite this widget in wx?
25,623,002 | 2014-09-02T12:05:00.000 | 1 | 0 | 1 | 0 | python-2.7,mysql-python | 25,797,155 | 1 | true | 0 | 0 | Maybe you have more than one Python version installed, and you installed the module for a version other than 2.7. | 1 | 0 | 0 | After importing the module, on script run I get the error, and I already have the module installed. I am new to Python, so I expect I might have forgotten to install something else? Python is version 2.7. | error "No module named MySQLdb" | 1.2 | 1 | 0 | 691
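A quick way to check this: run the snippet below with the exact interpreter your script uses - if the printed path or version differs from where you installed MySQLdb, that is the cause (the snippet runs cleanly whether or not the module is present):

```python
# Which interpreter is running, and can *it* see MySQLdb?
# "No module named MySQLdb" usually means the module was installed
# for a different Python than the one executing the script.
import sys

print(sys.executable)        # the interpreter actually being used
print(sys.version_info[:2])  # must match the version MySQLdb was installed for

try:
    import MySQLdb  # noqa: F401
    status = "MySQLdb found for this interpreter"
except ImportError:
    status = "MySQLdb NOT found for this interpreter"
print(status)
```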
25,625,097 | 2014-09-02T13:51:00.000 | 3 | 0 | 1 | 0 | python-3.x,pygame | 25,628,838 | 2 | false | 0 | 1 | Are you using a 64-bit operating system? Try using the 32-bit installer. | 2 | 2 | 0 | I'm trying to install Pygame and it returns the following error: "Python version 3.4 required which was not found in the registry". However, I already have Python 3.4.1 installed on my system. Does anyone know how to solve this problem?
I've been using Windows 8.1
Thanks in advance. | Error installing Pygame / Python 3.4.1 | 0.291313 | 0 | 0 | 1,096 |
25,625,097 | 2014-09-02T13:51:00.000 | 0 | 0 | 1 | 0 | python-3.x,pygame | 25,628,949 | 2 | false | 0 | 1 | Tips I can provide:
- Add Python to your Path in the advanced settings of your Environment Variables (just search for it in the Control Panel).
- Something may have gone wrong with the download of Python, so re-install it. Also, don't download the 64-bit version; just download the 32-bit version from the main Pygame website.
- Once that's sorted out, transfer the entire Pygame folder to the site-packages directory inside the Python installation, open the Pygame folder, and navigate to that directory in the command prompt. Finally, run the Pygame setup from the command prompt, which should be something like:
python setup.py install
But this will only work if the Pygame setup file is called setup.py (it's been a while since I downloaded it), you added Python to the Path, and you're currently in the correct directory in the command prompt.
To test if it worked, try importing pygame and see whether you get an error. | 2 | 2 | 0 | I'm trying to install Pygame and it returns the following error: "Python version 3.4 required which was not found in the registry". However, I already have Python 3.4.1 installed on my system. Does anyone know how to solve this problem?
I've been using Windows 8.1
Thanks in advance. | Error installing Pygame / Python 3.4.1 | 0 | 0 | 0 | 1,096 |
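The import test suggested in the answer above can be scripted so it reports cleanly either way (pygame.ver is the version string Pygame exposes; the snippet does not fail if the package is missing):

```python
# Verify that this interpreter can import pygame; runs fine either way.
try:
    import pygame
    msg = "pygame %s imported successfully" % pygame.ver
except ImportError as exc:
    msg = "pygame not importable: %s" % exc
print(msg)
```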