Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21,545,667 | 2014-02-04T06:43:00.000 | 4 | 0 | 1 | 1 | python,pythonw | 21,546,717 | 2 | true | 0 | 0 | Change the program that opens python files.
Assuming you're using Windows: right-click any Python file (in your case any .pyw file, not .py), open Properties, and change "Opens with" to pythonw instead of IDLE. | 2 | 1 | 0 | I want to hide the console window of a Python program, so I changed the file extension to "pyw", but when I open the file, IDLE shows up even though I chose to open it with "pythonw.exe".
If I use "pythonw test.py" in cmd, it works.
So I want to know what's wrong with this and how to solve this, thank you. | can't execute pyw on windows | 1.2 | 0 | 0 | 16,518 |
21,545,667 | 2014-02-04T06:43:00.000 | 0 | 0 | 1 | 1 | python,pythonw | 60,558,957 | 2 | false | 0 | 0 | For me, having multiple versions of Python installed was causing the issue. Once I had only one version, I made sure pythonw.exe was the default for .pyw files and it worked. | 2 | 1 | 0 | I want to hide the console window of a Python program, so I changed the file extension to "pyw", but when I open the file, IDLE shows up even though I chose to open it with "pythonw.exe".
If I use "pythonw test.py" in cmd, it works.
So I want to know what's wrong with this and how to solve this, thank you. | can't execute pyw on windows | 0 | 0 | 0 | 16,518 |
21,555,879 | 2014-02-04T15:00:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,python-3.x | 21,555,960 | 1 | true | 0 | 0 | An installer for win64 and py27 will automatically try to find the 64-bit Python 2.7 installation and install it for that. It won’t try to install it for an incompatible Python installation.
For package installations via pip etc., you just need to call the correct one to install it for that version. So C:\python27\Scripts\pip.exe or C:\python27-64\Scripts\pip.exe in your case. | 1 | 3 | 0 | I have two installations of Python on my machine:
1 : Python 2.7 32 bits (c:\python27) (installed first)
2 : Python 2.7 64 bits (c:\python27-64) (installed more recently, not setup as system's default Python)
When I install a new package with its standard Windows installer (e.g. wxPython3.0-win64-3.0.0.0-py27.exe for wxPython), there is no question like :
"For which installation of Python do you want to install this module?"
Then this module is not recognized by my second Python install.
How do I deal with module package installation when two versions of Python are installed? | Having two versions of Python installed and installing a new module | 1.2 | 0 | 0 | 199 |
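Before running an installer or pip, it can help to confirm which of the two installs a given interpreter actually is. This is a small diagnostic sketch; run it with each python.exe to see the path and bitness (the 32-bit vs 64-bit distinction is exactly what separates the two installs in the question):

```python
# Quick sanity check: run this with each interpreter (c:\python27\python.exe
# and c:\python27-64\python.exe) to see which install and which bitness you
# are actually talking to before installing packages.
import struct
import sys

bits = struct.calcsize("P") * 8   # pointer size in bits: 32 or 64
print("%s is a %d-bit Python %s" % (sys.executable, bits, sys.version.split()[0]))
```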
21,556,278 | 2014-02-04T15:17:00.000 | 1 | 0 | 0 | 0 | python,https,gunicorn,url-pattern | 21,556,311 | 1 | false | 1 | 0 | The protocol has nothing to do with django. That part is handled by your http server | 1 | 0 | 0 | I'm building a Django-based app, and I need it to use secure requests. The secure requests in my site are enabled and manually writing the url gets it through fine.
As I have quite a lot of URLs, I don't want to do it manually; instead I'd like Django to always send secure requests.
How can I make it so it always sends HTTPS? | how do I re-write/redirect URLs in Gunicorn web server configuration? | 0.197375 | 0 | 0 | 1,587 |
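If the redirect does have to live in the Django app rather than the front-end server, one common approach from that era is a tiny old-style middleware. This is only a sketch; the class name SSLRedirectMiddleware is made up, and as the answer notes, the front-end HTTP server is usually the better place for this:

```python
# Sketch of an app-level fallback: a small old-style Django middleware that
# bounces any plain-http request to its https twin. Guarded so the helper
# below still works where Django is not installed.
try:
    from django.http import HttpResponsePermanentRedirect
except ImportError:
    HttpResponsePermanentRedirect = None  # Django unavailable; sketch only

def secure_location(host, full_path):
    # Build the https version of an incoming plain-http URL.
    return "https://%s%s" % (host, full_path)

class SSLRedirectMiddleware(object):
    def process_request(self, request):
        if not request.is_secure():
            return HttpResponsePermanentRedirect(
                secure_location(request.get_host(), request.get_full_path()))
        return None  # already https: let the request through
```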
21,556,454 | 2014-02-04T15:25:00.000 | 1 | 0 | 0 | 0 | python,tkinter,tkinter-canvas | 21,558,520 | 1 | true | 0 | 1 | I don't think there's anything you can do to speed up the creation of circles. The canvas wasn't designed to handle 80,000 objects, and it doesn't support the ability to copy and paste items (beyond simply creating new objects with the same coordinates). It handles a few thousand OK, and even 10,000 is pretty performant on my machine, but 86,000 items is a lot.
You might try creating a single image of the given size (or have pre-computed images). You can have a single PhotoImage instance that you use to create all of the images on the canvas. On my machine I can create 100,000 image objects on a canvas in just a couple of seconds. deleting that many objects, however, is still quite slow. | 1 | 1 | 0 | I would like to draw as many as 86,000 (small circles) on a Tkinter Canvas. On average it will be more like 8,600 circles. At times as few as 400. All of the circles being drawn at once are the same (size , color). The radius of the circles is related to the number of circles being drawn (as little as 1-2px when there are many circles to draw), but a difference in radius has had little overall impact.
canvas.create_oval(px+r,py+r,px-r,py-r,fill='green') is quite expensive in computing time. Ideally I would pre-create the circle and paste copies of it on the canvas as needed.
At the moment calling canvas.create_oval(...) 86,000 times takes almost 20 seconds. (The logic that decides what circle to draw where itself runs in less than 100 msec.)
How would I go about copying a single circle instead of creating them all individually? | Making lots of circles quickly with Tkinter | 1.2 | 0 | 0 | 165 |
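The answer's image-stamping idea can be sketched as below. All names here are invented for illustration, and a 3x3 green block stands in for a tiny circle; the point is that one shared PhotoImage backs every canvas item instead of one create_oval call per circle. The snippet is guarded so it degrades quietly on a machine without a display:

```python
# Stamp one pre-rendered dot many times with create_image instead of calling
# create_oval once per circle.
try:
    import tkinter as tk            # Python 3
except ImportError:
    import Tkinter as tk            # Python 2

def stamp_dots(canvas, centers, image):
    # A single shared PhotoImage instance backs every canvas item.
    for px, py in centers:
        canvas.create_image(px, py, image=image)

try:
    root = tk.Tk()
except tk.TclError:
    root = None   # headless session: nothing to draw on

if root is not None:
    canvas = tk.Canvas(root, width=400, height=400, background="white")
    canvas.pack()
    dot = tk.PhotoImage(width=3, height=3)
    dot.put("green", to=(0, 0, 3, 3))   # fill the 3x3 image with green
    centers = [(x, y) for x in range(5, 400, 8) for y in range(5, 400, 8)]
    stamp_dots(canvas, centers, dot)
    root.update_idletasks()   # in a real app you would enter root.mainloop() here
```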
21,558,022 | 2014-02-04T16:32:00.000 | 0 | 1 | 0 | 1 | python,bash,exe,samba | 21,559,703 | 3 | false | 0 | 0 | As you've said, this executable file would need to be something that runs on both Linux and Windows. That will exclude binary files, such as compiled C files.
What you are left with would be an executable script, which could be
Bash
Ruby
Python
PHP
Perl
If need be the script could simply be a bootstrapper that loads the appropriate binary executable depending on the operating system. | 1 | 1 | 0 | I have a executable file working in Ubuntu that runs a script in Python and works fine. I have also a shared directory with Samba server. The idea is that everyone (even Windows users) can execute this executable file located in this shared folder to run the script located in my computer.
But how can I make an executable file that runs the Python script on MY computer for both Linux and Windows remote users? | Executable shell file in Windows | 0 | 0 | 0 | 262 |
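The bootstrapper idea from the answer can be sketched as a single launcher script placed in the shared folder. Everything here is a made-up example (the script name "shared_script.py", the assumption that an interpreter named python/pythonw is on each client's PATH); the point is just dispatching per platform:

```python
# Cross-platform launcher sketch: the shared Samba folder holds this script,
# and it picks an interpreter based on the client OS before running the
# real script.
import subprocess
import sys

def command_for(script, platform=sys.platform):
    if platform.startswith("win"):
        return ["pythonw", script]   # Windows client; assumes Python is on PATH
    return ["python", script]        # Linux client

# Usage (not executed here):
#   subprocess.call(command_for("shared_script.py"))
```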
21,558,984 | 2014-02-04T17:12:00.000 | 1 | 0 | 0 | 0 | python,redirect,flask,worker | 21,561,671 | 1 | true | 1 | 0 | You can do it as follows:
When the user presses the button the server starts the task, and then sends a response to the client, possibly a "please wait..." type page. Along with the response the server must include a task id that references the task accessible to Javascript.
The client uses the task id to poll the server regarding task completion status through ajax. Let's say this is route /status/<taskid>. This route returns true or false as JSON. It can also return a completion percentage that you can use to render a progress bar widget.
When the server reports that the task is complete the client can issue the redirect to the completion page. If the client needs to be told what is the URL to redirect to, then the status route can include it in the JSON response.
I hope this helps! | 1 | 2 | 0 | I have the app in python, using flask and iron worker. I'm looking to implement the following scenario:
User presses the button on the site
The task is queued for the worker
Worker processes the task
Worker finishes the task, notifies my app
My app redirects the user to the new endpoint
I'm currently stuck in the middle of point 5: I have the worker successfully finishing the job and sending a POST request to a specific endpoint in my app. Now I'd like to somehow identify which user invoked the task and redirect that user to the new endpoint in my application. How can I achieve this? I can pass all kinds of data in the worker payload and then return it with the POST; the question is how do I invoke the redirect for the specific user visiting my page? | Redirect user when the worker is done | 1.2 | 0 | 0 | 191 |
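The polling route from step 2 of the answer might look like the sketch below. The TASKS dict and the task id "abc123" are invented stand-ins for however the worker's completion POST gets recorded (a database row, Redis key, etc.); the JSON shape includes the redirect_to field mentioned at the end of the answer:

```python
# Minimal sketch of a /status/<taskid> route for the browser to poll.
TASKS = {"abc123": {"done": True, "redirect_to": "/results/abc123"}}

try:
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/status/<task_id>")
    def status(task_id):
        task = TASKS.get(task_id, {})
        # The browser polls this via ajax and, once "done" is true,
        # performs the redirect itself using "redirect_to".
        return jsonify(done=bool(task.get("done")),
                       redirect_to=task.get("redirect_to"))
except ImportError:   # Flask not installed: the sketch still shows the shape
    app = None
```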
21,559,666 | 2014-02-04T17:46:00.000 | 1 | 0 | 1 | 0 | google-chrome,ipython,ipython-notebook | 24,776,086 | 2 | false | 0 | 0 | I was experiencing the same issue. Actually I found this is related to chrome extensions installed.
Try disabling all the extensions and re-enabling them one by one. You'll find which is crashing your tab. In my case, crashes were due to the Evernote extension.
Alternatively, you can open up an incognito window, which has all the extensions disabled by default, and try opening your notebook there.
Ciao | 1 | 9 | 0 | I'm doing some work in an IPython Notebook session, and I now have a large-ish notebook containing code, some plots, and some embedded videos (of plot stacks; it seemed like the easiest way to be able to scroll through a sequence of plots interactively in the Notebook view). I'm working in Chrome (Mac, 32.0.1700.102) since H.264 encoding worked best (Vp8 compressed out shading detail in the plots that I needed), and Safari and Firefox don't render the videos.
Recently, this notebook has started crashing Chrome tabs every couple minutes (showing the 'Aw Snap' page). It's become basically unusable. I can work, saving very frequently, but saving the notebook causes the Chrome tab to crash about half the time (which makes me wonder if the random crashes that occur when I'm working are caused by the autosaves, but I don't know).
Has anyone else encountered this? Does anyone know how to fix it? Is there some more information I can provide to diagnose the problem? Thanks for any help. | IPython Notebook crashing Chrome tabs | 0.099668 | 0 | 0 | 6,425 |
21,561,405 | 2014-02-04T19:18:00.000 | 0 | 0 | 0 | 1 | python,streaming,audio-streaming,ntp,gstreamer | 27,043,950 | 2 | false | 1 | 0 | Sorry for bring up an old question but this is something that I am looking into. I believe you need to look at Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between end points.
This is a much better way of keeping things synchronized. As for sending it to multiple devices looking into multicast addresses.
Hope this helps | 2 | 0 | 0 | I'm working on an application to provide multi-room audio to devices and have succeeded in keeping audio playing from a file (e.g. mp3) synced using GST and manually using NTP but I can't seem to get a live audio stream to sync.
Essentially I want to be able to stream audio from one device to one or more other devices but rather than them buffering and getting out of sync I want them to all play at around the same time (close enough for any delay to not be noticeable anyway).
Has anyone got any suggestions on ways that this can be achieved or can provide any material discussing the matter? (Search hasn't turned up much)
It's worth noting that this application will be coded in Python. | Keeping live audio stream synced | 0 | 0 | 0 | 661 |
21,561,405 | 2014-02-04T19:18:00.000 | 1 | 0 | 0 | 1 | python,streaming,audio-streaming,ntp,gstreamer | 21,566,534 | 2 | false | 1 | 0 | Unfortunately, delay as low as 10 milliseconds is noticeable to most folks. Musicians tend to appreciate even lower delay than that. And if you have any of the speakers from different devices within earshot of each other, you're going to run into phase issues at even the slightest unpredictable delay (which is inevitable on a computer).
Basically, it is impossible to have a delay that isn't noticeable. Even if you do succeed in synchronizing the start times exactly, each device has a different sample clock on it, and they will drift apart over time. What is 44.1kHz to one device might be 44.103kHz on the other.
If you have a more realistic expectation of synchronization... around 50-100ms, then this becomes more feasible. I would have one master device doing the decoding and then sending PCM samples out to the other devices for playback. Keep track of your audio device buffers and make sure they aren't getting too big (indicating that your device is behind) or underrunning (indicating a network problem or that your device is ahead). Have all the devices with the same buffer sizes and maybe even use broadcast packets to send the audio, since all devices are on the same network anyway. | 2 | 0 | 0 | I'm working on an application to provide multi-room audio to devices and have succeeded in keeping audio playing from a file (e.g. mp3) synced using GST and manually using NTP but I can't seem to get a live audio stream to sync.
Essentially I want to be able to stream audio from one device to one or more other devices but rather than them buffering and getting out of sync I want them to all play at around the same time (close enough for any delay to not be noticeable anyway).
Has anyone got any suggestions on ways that this can be achieved or can provide any material discussing the matter? (Search hasn't turned up much)
It's worth noting that this application will be coded in Python. | Keeping live audio stream synced | 0.099668 | 0 | 0 | 661 |
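The sample-clock drift mentioned in the second answer is easy to quantify with its own example rates; this back-of-the-envelope sketch uses 44100 Hz vs 44103 Hz:

```python
# Two cards nominally at 44.1 kHz that really run at 44100 Hz and 44103 Hz
# drift apart steadily; this estimates the drift over a wall-clock interval.
def drift_seconds(rate_a, rate_b, wall_seconds):
    # Extra samples one side produces over the interval, expressed as time.
    return abs(rate_a - rate_b) * wall_seconds / float(rate_a)

per_minute = drift_seconds(44100, 44103, 60)
# Roughly 4 ms of drift per minute: audible phasing within a few minutes,
# which is why the answer suggests watching the playback buffers.
```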
21,563,271 | 2014-02-04T20:59:00.000 | 0 | 0 | 1 | 0 | python,list,recursion,nested-lists | 21,563,366 | 3 | false | 0 | 0 | Recursive functions are idiomatic for when you have a linked list. Python lists are more like arrays. But it's still possible to handle a Python list with a recursive function -- there's no real utility, but it can be interesting as an exercise.
You start with a full list, and your base case is when the list is empty. Traverse the list by passing the list in as an argument, using x.pop(0) to simultaneously fetch and remove the first item, evaluate the popped item, and then pass the (now shorter) list into the same function.
Edit: actually, on second thought, you would be better off not using x.pop() and instead peeking at the first value and passing the remainder in a slice. This would be grossly inefficient, because you're copying the list every time you slice, but it's better than destructively consuming the list inside your recursive function, unless that's a desired side-effect. | 2 | 3 | 0 | Say I have a list x = [1, 2, 3, 4]
Is there a recursive method where I can go through the list to find the value?
I ultimately want to be able to compare a returned value in the list (or nested list) to an arbitrary number to see if it matches.
I can think of a way to do this using a for loop, but I have trouble imagining a recursive method to do the same thing. I know that I can't set a counter to keep track of my position in the list, because calling the function recursively would just reset the counter every time.
I was thinking I could set my base case of the function as a comparison between the number and a list of len 1.
I just want some hints really. | Recursively going through a list (python) | 0 | 0 | 0 | 15,478 |
21,563,271 | 2014-02-04T20:59:00.000 | 0 | 0 | 1 | 0 | python,list,recursion,nested-lists | 21,563,401 | 3 | false | 0 | 0 | Well you will have two base cases:
1) You have reached the end of the list => return false.
2) Your current element is the element you are looking for => return true (or the element or its position, whatever you are interested in).
The thing you have to do all the time is check both base cases on the current element and apply the function recursively on the next element in the list if neither one of the base cases applied. | 2 | 3 | 0 | Say I have a list x = [1, 2, 3, 4]
Is there a recursive method where I can go through the list to find the value?
I ultimately want to be able to compare a returned value in the list (or nested list) to an arbitrary number to see if it matches.
I can think of a way to do this using a for loop, but I have trouble imagining a recursive method to do the same thing. I know that I can't set a counter to keep track of my position in the list, because calling the function recursively would just reset the counter every time.
I was thinking I could set my base case of the function as a comparison between the number and a list of len 1.
I just want some hints really. | Recursively going through a list (python) | 0 | 0 | 0 | 15,478 |
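The hints above (two base cases, recurse on the rest of the list, descend into nested lists) fit together as something like this sketch, which uses slicing rather than a counter:

```python
# Recursive membership test following the hinted structure: base cases first
# (empty list, match at the head), then recurse on the tail via slicing.
# Slicing copies, so this trades efficiency for clarity, as noted above.
def contains(values, target):
    if not values:                     # base case 1: ran out of items
        return False
    head, tail = values[0], values[1:]
    if isinstance(head, list):         # descend into a nested list
        return contains(head, target) or contains(tail, target)
    if head == target:                 # base case 2: found it
        return True
    return contains(tail, target)      # keep going with a shorter list
```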
21,565,926 | 2014-02-04T23:44:00.000 | 2 | 0 | 1 | 1 | python-2.7,ubuntu,python-3.3,easy-install | 23,478,462 | 1 | true | 0 | 0 | Tested on Ubuntu 14.04:
1) Use pip instead of easy_install. It's the way of the future. ;-)
2) sudo apt-get install python3-pip
3) sudo pip3 install AWESOME-PACKAGE | 1 | 1 | 0 | I am using ubuntu with root access and have python 3.3 and 2.7 installed in my system. When I use easy_install by default it installs the package for 2.7.
How can I use it to install to 3.3 instead | How to use easy_install with python 3.3 while 2.7 is also installed | 1.2 | 0 | 0 | 727 |
21,567,271 | 2014-02-05T01:58:00.000 | 2 | 1 | 1 | 0 | python,eclipse,pydev | 21,666,549 | 1 | true | 0 | 0 | Adding the project directory /${PROJECT_DIR_NAME} to the project's PYTHONPATH seems to have done the trick.
Before, I only had /${PROJECT_DIR_NAME}/mypackage in the project's PYTHONPATH. So I suspect that, when using absolute imports, Eclipse was unable to find /${PROJECT_DIR_NAME}/mypackage/mypackage/mymodule and would then proceed to search in site-packages. | 1 | 2 | 0 | I am working on a module (mypackage) using Eclipse/PyDev and Python 2.7. I have other packages and modules that need to use it. In order to make sure the other packages and modules are always using a working version of mypackage, I decided to deploy mypackage to site-packages using distutils (same computer), which I will only update if the development version of mypackage in PyDev has been debugged after making changes.
In order to get mypackage to work when deployed to site-packages, I had to write it using absolute imports. The problem with that is that now when I try to run the modules within the develoment version of mypackage from Eclipse for debugging, it is importing other modules in mypackage from site-packages rather than from the development version in Eclipse.
Is there a way to get around this? I would hate to have to rewrite my code with absolute-imports every time I want to update mypackage in site-packages, and then change it back if I want to make changes and debug my code in Eclipse. | PyDev imports from package in site-packages rather than package in development (absolute-imports) | 1.2 | 0 | 0 | 638 |
21,578,382 | 2014-02-05T13:14:00.000 | 4 | 0 | 0 | 0 | python,django,django-models,django-admin | 21,579,803 | 5 | true | 1 | 0 | I don't see an obvious solution to this — the models are sorted by their _meta.verbose_name_plural, and this happens inside the AdminSite.index view, with no obvious place to hook custom code, short of subclassing the AdminSite class and providing your own index method, which is however a huge monolithic method, very inheritance-unfriendly. | 1 | 13 | 0 | I have several configuration objects in django admin panel.
They are listed in the following order
Email config
General config
Network config
Each object can be configured separately, but all of them are included in General config. So basically you will need mostly General config, so I want to move it to the top.
I know how to order fields in a model itself, but how to reorder models? | Reorder model objects in django admin panel | 1.2 | 0 | 0 | 5,031 |
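The reordering step itself can be sketched independently of the view plumbing. The PRIORITY names below come from this question; the model dicts mimic the "name" entries the admin index template receives, and hooking the function in still means subclassing AdminSite and overriding its index method, as the answer describes:

```python
# Sort a list of admin model entries by a hand-written priority list instead
# of the default verbose_name_plural ordering. Pure-Python sketch; wiring it
# into the admin index view is left to the AdminSite subclass.
PRIORITY = ["General config", "Email config", "Network config"]

def reorder_models(models, priority=PRIORITY):
    def rank(model):
        name = model["name"]
        # Unlisted models sort after the prioritized ones, keeping sort order.
        return priority.index(name) if name in priority else len(priority)
    return sorted(models, key=rank)
```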
21,578,415 | 2014-02-05T13:16:00.000 | 3 | 0 | 0 | 0 | python-3.x,pyramid,pycharm | 21,599,084 | 1 | false | 1 | 0 | Finally I have figured it out. It was my fault. The project gets deployed as an egg so that's where I was suppose to place my breakpoints.
Thanks a lot for your time and consideration. | 1 | 1 | 0 | I am using Pycharm as IDE for one of my projects. The framework of choice is Pyramid and here comes my issue. I am not able to debug the request using PyCharm even though I start the application in debug mode. When a request is made from the browser the breakpoints from the views.py are not hit this does not apply for the breakpoints set in the application start-up (init.py and initializedb.py). Please note that I am new on Pyramid. Any idea how to solve this would be much appreciated.
EDIT
I apologize for not mentioning the details. I am using PyCharm 3.02 Pro and Pyramid 1.4.5. I am using the scaffolding provided by PyCharm. | Pycharm - Pyramid debugging request | 0.53705 | 0 | 0 | 399 |
21,580,200 | 2014-02-05T14:37:00.000 | 9 | 1 | 1 | 0 | python,linux,module,backup,linux-mint | 21,580,285 | 1 | true | 0 | 0 | If you installed them with pip, you can use pip freeze to list the currently installed modules. Save this to a file and use pip install -r file on a new system to install the modules from the file. | 1 | 3 | 0 | Is there a way to backup Python modules? I installed lots of modules. If my system does not work properly, I will lose them all. Is there a way to do this? | Is there a way to backup Python modules? | 1.2 | 0 | 0 | 3,387 |
21,581,564 | 2014-02-05T15:36:00.000 | 3 | 0 | 1 | 0 | python,python-idle | 21,581,930 | 3 | false | 0 | 0 | *IDLE is officially a corruption of IDE, but it's really named in honour of Monty Python member Eric Idle.
Marc Lutz, Learning Python 3rd ed., footnote on p50 | 2 | 0 | 0 | I have noticed that Python's GUI is called Idle. On the other hand, Eric Idle
was a member of Monty Python. Is this just a coincidence? | Origin of the names Python and Idle | 0.197375 | 0 | 0 | 1,660 |
21,581,564 | 2014-02-05T15:36:00.000 | 3 | 0 | 1 | 0 | python,python-idle | 21,581,692 | 3 | false | 0 | 0 | It's no coincidence. The creator of the language (and the IDE), Guido van Rossum, is a big Monty Python fan :) | 2 | 0 | 0 | I have noticed that Python's GUI is called Idle. On the other hand, Eric Idle
was a member of Monty Python. Is this just a coincidence? | Origin of the names Python and Idle | 0.197375 | 0 | 0 | 1,660 |
21,582,358 | 2014-02-05T16:10:00.000 | 11 | 0 | 0 | 1 | python,breakpoints,ipdb | 21,582,431 | 2 | false | 0 | 0 | Use the break command. Don't add any line numbers and it will list all instead of adding them. | 2 | 13 | 0 | Trying to find how to execute ipdb (or pdb) commands such as disable.
Calling the h command on disable says
disable bpnumber [bpnumber ...]
Disables the breakpoints given as a space separated list of
bp numbers.
So how would I get those bp numbers? I was looking through the list of commands and couldn't get any to display the bp numbers.
[EDIT]
The break, b and info breakpoints commands don't do anything, although in my module I clearly have one breakpoint set like this: import pdb; pdb.set_trace() - same for ipdb. Moreover, info is not defined.
The output of help in pdb:
Documented commands (type help <topic>):
========================================
EOF    bt         cont      enable  jump  pp       run      unt
a      c          continue  exit    l     q        s        until
alias  cl         d         h       list  quit     step     up
args   clear      debug     help    n     r        tbreak   w
b      commands   disable   ignore  next  restart  u        whatis
break  condition  down      j       p     return   unalias  where

Miscellaneous help topics:
==========================
exec  pdb

Undocumented commands:
======================
retval  rv
And for ipdb:
Documented commands (type help <topic>):
========================================
EOF    bt         cont      enable  jump  pdef    psource  run      unt
a      c          continue  exit    l     pdoc    q        s        until
alias  cl         d         h       list  pfile   quit     step     up
args   clear      debug     help    n     pinfo   r        tbreak   w
b      commands   disable   ignore  next  pinfo2  restart  u        whatis
break  condition  down      j       p     pp      return   unalias  where

Miscellaneous help topics:
==========================
exec  pdb

Undocumented commands:
======================
retval  rv
I have saved my module as pb3.py and am executing it within the command line like this
python -m pb3
The execution does indeed stop at the breakpoint, but within the pdb (ipdb) console, the commands indicated don't display anything - or display a NameError.
If more info is needed, I will provide it. | How to find the breakpoint numbers in pdb (ipdb)? | 1 | 0 | 0 | 4,097 |
21,582,358 | 2014-02-05T16:10:00.000 | -3 | 0 | 0 | 1 | python,breakpoints,ipdb | 21,582,459 | 2 | false | 0 | 0 | info breakpoints
or just
info b
lists all breakpoints. | 2 | 13 | 0 | Trying to find how to execute ipdb (or pdb) commands such as disable.
Calling the h command on disable says
disable bpnumber [bpnumber ...]
Disables the breakpoints given as a space separated list of
bp numbers.
So how would I get those bp numbers? I was looking through the list of commands and couldn't get any to display the bp numbers.
[EDIT]
The break, b and info breakpoints commands don't do anything, although in my module I clearly have one breakpoint set like this: import pdb; pdb.set_trace() - same for ipdb. Moreover, info is not defined.
The output of help in pdb:
Documented commands (type help <topic>):
========================================
EOF    bt         cont      enable  jump  pp       run      unt
a      c          continue  exit    l     q        s        until
alias  cl         d         h       list  quit     step     up
args   clear      debug     help    n     r        tbreak   w
b      commands   disable   ignore  next  restart  u        whatis
break  condition  down      j       p     return   unalias  where

Miscellaneous help topics:
==========================
exec  pdb

Undocumented commands:
======================
retval  rv
And for ipdb:
Documented commands (type help <topic>):
========================================
EOF    bt         cont      enable  jump  pdef    psource  run      unt
a      c          continue  exit    l     pdoc    q        s        until
alias  cl         d         h       list  pfile   quit     step     up
args   clear      debug     help    n     pinfo   r        tbreak   w
b      commands   disable   ignore  next  pinfo2  restart  u        whatis
break  condition  down      j       p     pp      return   unalias  where

Miscellaneous help topics:
==========================
exec  pdb

Undocumented commands:
======================
retval  rv
I have saved my module as pb3.py and am executing it within the command line like this
python -m pb3
The execution does indeed stop at the breakpoint, but within the pdb (ipdb) console, the commands indicated don't display anything - or display a NameError.
If more info is needed, I will provide it. | How to find the breakpoint numbers in pdb (ipdb)? | -0.291313 | 0 | 0 | 4,097 |
21,585,109 | 2014-02-05T18:10:00.000 | 16 | 0 | 1 | 0 | python,floating-point,floating | 21,585,161 | 4 | true | 0 | 0 | It looks like you happen to assign to a variable named sum in the same scope as the call above, thereby hiding the builtin sum function. | 4 | 0 | 0 | Why does the following generate the TypeError: 'float' object not callable?
sum([-450.0,950.0]) | TypeError: 'float' object not callable | 1.2 | 0 | 0 | 6,984 |
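The accepted answer's diagnosis is easy to reproduce: bind a float to the name sum and the original call explodes, then remove the shadow and the builtin is reachable again.

```python
# Reproducing the shadowing bug the accepted answer describes.
sum = -450.0 + 950.0              # oops: "sum" is now the float 500.0
try:
    sum([-450.0, 950.0])          # TypeError: 'float' object is not callable
except TypeError as exc:
    message = str(exc)

del sum                           # drop the shadow; the builtin is back
total = sum([-450.0, 950.0])      # the real builtin works fine on floats
```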
21,585,109 | 2014-02-05T18:10:00.000 | 0 | 0 | 1 | 0 | python,floating-point,floating | 71,506,785 | 4 | false | 0 | 0 | #import builtins
numbers = [1,2,3,4,5,1,4,5]
total = builtins.sum(numbers) | 4 | 0 | 0 | Why does the following generate the TypeError: 'float' object not callable?
sum([-450.0,950.0]) | TypeError: 'float' object not callable | 0 | 0 | 0 | 6,984 |
21,585,109 | 2014-02-05T18:10:00.000 | 0 | 0 | 1 | 0 | python,floating-point,floating | 66,683,525 | 4 | false | 0 | 0 | This saved me too, as pnz showed above. I was racking my brains trying to figure out why 'sum' wasn't working. It wasn't called anywhere else in my script, and resolved by using 'numpy.sum'. Seems like default 'sum' doesn't work well with a list of floats.
This failed:
xlist = [1.5, 3.5, 7.8]
print(sum(xlist))
This worked:
xlist = [1.5, 3.5, 7.8]
print(numpy.sum(xlist)) | 4 | 0 | 0 | Why does the following generate the TypeError: 'float' object not callable?
sum([-450.0,950.0]) | TypeError: 'float' object not callable | 0 | 0 | 0 | 6,984 |
21,585,109 | 2014-02-05T18:10:00.000 | 3 | 0 | 1 | 0 | python,floating-point,floating | 39,657,390 | 4 | false | 0 | 0 | This problem happened for me as well. And I did not create any variable with 'sum' name. I solved the problem by changing 'sum' function to 'numpy.sum'. | 4 | 0 | 0 | Why does the following generate the TypeError: 'float' object not callable?
sum([-450.0,950.0]) | TypeError: 'float' object not callable | 0.148885 | 0 | 0 | 6,984 |
21,587,882 | 2014-02-05T20:32:00.000 | 4 | 0 | 1 | 0 | python,naming-conventions | 21,588,011 | 1 | true | 0 | 0 | From PEP8:
__double_leading_and_trailing_underscore__: "magic" objects or attributes that live in user-controlled namespaces. E.g. __init__, __import__ or __file__. Never invent such names; only use them as documented.
So, the advice is not to use the double underscore syntax for your own variables. | 1 | 5 | 0 | I was wondering, is it recommended/Pythonic to define and use custom double underscore variables/functions in a Python script? For example, __tablename__ as used in SQLAlchemy or __validateitem__() (a custom function that validates an item before applying __setitem__() to it).
If it does signal that something magic happens, or that a specific variable/function is indeed used in a special way (like the two examples above), I feel it is a good idea to use them.
I am interested in arguments on both best coding practices and potential risks in using this kind of naming. | How recommended is using custom double underscore variables in Python? | 1.2 | 0 | 0 | 466 |
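Given PEP 8's "never invent such names" rule quoted in the answer, the question's __validateitem__ idea can keep exactly the same shape with a single-underscore name. Everything in this sketch is invented for illustration (the class, the int-only rule); only __setitem__ itself is a documented hook that is safe to override:

```python
# The question's validation hook, renamed with a single leading underscore so
# it stays out of the namespace PEP 8 reserves for the interpreter.
class ValidatedDict(dict):
    def _validate_item(self, value):
        # Arbitrary example rule: only accept ints.
        if not isinstance(value, int):
            raise TypeError("only ints allowed, got %r" % (value,))
        return value

    def __setitem__(self, key, value):
        # __setitem__ is a documented magic method, fine to override.
        super(ValidatedDict, self).__setitem__(key, self._validate_item(value))
```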
21,588,464 | 2014-02-05T21:03:00.000 | 0 | 1 | 0 | 1 | python,python-2.7,emacs,emacs24 | 21,590,370 | 2 | false | 0 | 0 | I don't use python, but from the source to python-mode, I think you should look into customizing the variable python-python-command - It seems to default to the first path command matching "python"; perhaps you can supply it with a custom path? | 1 | 2 | 0 | I'm new to Emacs and I'm trying to set up my python environment. So far I've learned that using "python-mode.el" in a python buffer C-c C-c loads the contents of the current buffer into an interactive python shell, apparently using what which python yields. In my case that is python 3.3.3. But since I need to get a python 2.7 shell, I'm trying to get Emacs to spawn such a shell on C-c C-c. Unfortunatly I can't figure out, how to do this. Setting py-shell-name to what which python2.7 yields (i.e. /usr/bin/python2.7) does not work. How can get Emacs to do this, or how can I trace back what Emacs executes when I hit C-c C-c? | Using python2.7 with Emacs 24.3 and python-mode.el | 0 | 0 | 0 | 1,019 |
21,591,572 | 2014-02-06T00:24:00.000 | 3 | 0 | 1 | 1 | python,terminal,pydoc | 21,591,666 | 10 | false | 0 | 0 | Pydoc is the documentation generation system for Python. Say you can document your functions using the Pydoc standard and then it can be used to generate documentation in your code. | 4 | 19 | 0 | New to programming and python altogether. In the book I'm learning from, the author suggested I find out the purpose of Pydoc.
I did a google search on it, and found a match (from Gnome Terminal) but it didn't make much sense to me. Anyone mind simplifying a bit? | What does the Pydoc module do? | 0.059928 | 0 | 0 | 62,279 |
21,591,572 | 2014-02-06T00:24:00.000 | 0 | 0 | 1 | 1 | python,terminal,pydoc | 66,218,375 | 10 | false | 0 | 0 | pydoc generates online documentation from docstrings...
for example you can see that Numpy.histograms() function's, online documentation is actually made based on that function docstring... | 4 | 19 | 0 | New to programming and python altogether. In the book I'm learning from, the author suggested I find out the purpose of Pydoc.
I did a google search on it, and found a match (from Gnome Terminal) but it didn't make much sense to me. Anyone mind simplifying a bit? | What does the Pydoc module do? | 0 | 0 | 0 | 62,279 |
21,591,572 | 2014-02-06T00:24:00.000 | -1 | 0 | 1 | 1 | python,terminal,pydoc | 46,895,246 | 10 | false | 0 | 0 | Concise description provided by to Wikipedia:
"Pydoc allows Python programmers to access Python's documentation help files, generate text and HTML pages with documentation specifics, and find the appropriate module for a particular job." | 4 | 19 | 0 | New to programming and python altogether. In the book I'm learning from, the author suggested I find out the purpose of Pydoc.
I did a google search on it, and found a match (from Gnome Terminal) but it didn't make much sense to me. Anyone mind simplifying a bit? | What does the Pydoc module do? | -0.019997 | 0 | 0 | 62,279 |
21,591,572 | 2014-02-06T00:24:00.000 | -1 | 0 | 1 | 1 | python,terminal,pydoc | 50,296,072 | 10 | false | 0 | 0 | Just type pydoc in your terminal where you normaly run python. It will give simple explanation !. : ) | 4 | 19 | 0 | New to programming and python altogether. In the book I'm learning from, the author suggested I find out the purpose of Pydoc.
I did a google search on it, and found a match (from Gnome Terminal) but it didn't make much sense to me. Anyone mind simplifying a bit? | What does the Pydoc module do? | -0.019997 | 0 | 0 | 62,279 |
21,595,488 | 2014-02-06T06:19:00.000 | 1 | 0 | 0 | 1 | python,cluster-computing,qsub | 21,597,874 | 2 | false | 0 | 0 | You obviously have built yourself a string cmd containing a command that you could enter in a shell for running the 2nd program. You are currently using subprocess.call(cmd, shell=True) for executing the 2nd program from a Python script (it then becomes executed within a process on the same machine as the calling script).
I understand that you are asking how to submit a job to a cluster so that this 2nd program is run on the cluster instead of the calling machine. Well, this is pretty easy and the method is independent of Python, so there is no 'pythonic' solution, just an obvious one :-) : replace your current cmd with a command that defers the heavy work to the cluster.
First of all, dig into the documentation of your cluster's qsub command (the underlying batch system might be SGE or LSF, or whatever, you need to get the corresponding docs) and try to find the shell command line that properly submits an example job of yours to the cluster. It might look as simple as qsub ...args... cmd, whereas cmd here is the content of the original cmd string. I assume that you now have the entire qsub command needed, let's call it qsubcmd (you have to come up with that on your own, we can't help there). Now all you need to do in your original Python script is calling
subprocess.call(qsubcmd, shell=True)
instead of
subprocess.call(cmd, shell=True)
Note that qsub likely only works on very few machines, typically known as your cluster 'head node(s)'. This means that your Python script that wants to submit these jobs should run on this machine (if that is not possible, you need to add an ssh login procedure to the submission process that we don't want to discuss here).
Please also note that, if you have the time, you should look into the shell=True implications of your subprocess usage. If you can circumvent shell=True, this will be the more secure solution. This might however not be an issue in your environment. | 1 | 3 | 0 | I have the situation where I am doing some computation in Python, and based on the outcomes I have a list of target files that are candidates to be passed to 2nd program.
For example, I have 50,000 files which contain ~2000 items each. I want to filter for certain items and call a command line program to do some calculation on some of those.
This Program #2 can be used via shell command line, but requires also a lengthy set of arguments. Because of performance reasons I would have to run Program #2 on a cluster.
Right now, I am running Program #2 via
subprocess.call("...", shell=True)
But I'd like to run it via qsub in future.
I have not much experience of how exactly this could be done in a reasonably efficient manner.
Would it make sense to write temporary 'qsub' files and run them via subprocess() directly from the Python script? Is there a better, maybe more pythonic solution?
Any ideas and suggestions are very welcome! | Running jobs on a cluster submitted via qsub from Python. Does it make sense? | 0.099668 | 0 | 0 | 5,882 |
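The submission step suggested in the accepted answer can be sketched in a few lines. Note that the paths, queue name, and qsub flags below are hypothetical assumptions; check your own cluster's qsub documentation for the flags it actually accepts:

```python
import subprocess

def build_qsub_cmd(job_script, queue="batch", name="prog2-job"):
    """Assemble a qsub invocation as an argument list, avoiding shell=True.
    The -q (queue) and -N (job name) flags are common to SGE/PBS batch
    systems, but are assumptions here."""
    return ["qsub", "-q", queue, "-N", name, job_script]

def submit(job_script):
    """Submit the job script to the cluster; returns qsub's exit status."""
    return subprocess.call(build_qsub_cmd(job_script))

# Example: the command that would be run for a hypothetical job script.
cmd = build_qsub_cmd("/tmp/run_program2.sh", queue="long", name="filter42")
```

Passing an argument list instead of a shell string also sidesteps the shell=True security concern raised in the answer.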
21,611,503 | 2014-02-06T18:53:00.000 | 0 | 0 | 0 | 0 | python,ruby,android-pay | 21,741,357 | 1 | false | 1 | 0 | analyticsPierce,
I've asked the same question and have not received any answers.
Here was my question, maybe we can work out a solution somehow. I've just about given up.
"HttpWebRequest with Username/Password" on StackOverflow.
Trey | 1 | 1 | 0 | I am working to automate retrieving the Order data from the Google Wallet Merchant Center. This data is on the Orders screen and the export is through a button right above the data.
Google has said this data is not available to export to a Google Cloud bucket like payments are and this data is not available through a Google API.
I'm wondering if anyone has been successful in automating retrieval of this data using an unofficial method such as scraping the site or a separate gem or library? I have done tons of searching and have not seen any solutions. | How to automate Google Wallet order export data? | 0 | 0 | 1 | 433 |
21,612,246 | 2014-02-06T19:32:00.000 | 0 | 0 | 0 | 0 | python,algorithm,web,hyperlink,traversal | 31,978,451 | 1 | false | 1 | 0 | The links in a set of web pages can be seen as a tree graph and hence you could use various tree traversal algorithms like depth first and breadth first search to find all links. The links and related form data can be saved in a queue or stack depending on what traversal algorithm you are using. | 1 | 2 | 0 | I am successful to record all the links of the website but missed some of the links which can only be visible with the form posting (for example login).
What I did was record all the links without logging in, and take the form values. Then I posted the data and recorded the new links, but in doing so I missed the other forms and links that are not reachable from those posted links.
Please suggest an efficient algorithm so that I can grab all the links by posting form data.
Thanks in advance. | Algorithm for traversing website including forms | 0 | 0 | 1 | 77 |
21,612,677 | 2014-02-06T19:54:00.000 | 0 | 0 | 0 | 0 | python,math,numpy,linear-algebra | 21,613,541 | 5 | true | 0 | 0 | As the entries in the matrices are either 1 or 0 the smallest non-zero absolute value of a determinant is 1. So there is no need to fear a true non-zero value that is very close to 0.
Alternatively one can apparently use sympy to get an exact answer. | 2 | 4 | 1 | I have a lot of 10 by 10 (0,1)-matrices and I would like to determine which have determinant exactly 0 (that is which are singular). Using scipy.linalg.det I get a floating point number which I have to test to see if it is close to zero. Is it possible to do the calculation exactly so I can be sure I am not finding false positives?
On the other hand, maybe there is some guarantee about the smallest eigenvalue which can be used to make sure the floating point method never makes a false positive? | Determine if determinant is exactly zero | 1.2 | 0 | 0 | 2,357 |
21,612,677 | 2014-02-06T19:54:00.000 | 3 | 0 | 0 | 0 | python,math,numpy,linear-algebra | 21,613,054 | 5 | false | 0 | 0 | You can use Gaussian elimination to bring the matrix to a triangular form.
Since your elements are all 0 or 1, the calculation even using floating point arithmetic will be exact (you are only multiplying/dividing/adding/subtracting by -1, 0 and 1, which is exact).
The determinant is then 0 if one element of the diagonal is zero and nonzero otherwise.
So for this specific algorithm (Gaussian elimination), calculation of the determinant will be exact even in floating point arithmetic.
This algorithm also should be pretty efficient. It can even be implemented using integers, which is faster and shows even in a more obvious way that the problem is exactly solvable.
EDIT: the point is, that an algorithm which operates on the 0,1 matrix can be exact. It depends on the algorithm. I would check how det() is implemented and maybe, there is no issue with numerical noise, and, in fact, you could just test for det(M) == 0.0 and get neither false negatives nor false positives. | 2 | 4 | 1 | I have a lot of 10 by 10 (0,1)-matrices and I would like to determine which have determinant exactly 0 (that is which are singular). Using scipy.linalg.det I get a floating point number which I have to test to see if it is close to zero. Is it possible to do the calculation exactly so I can be sure I am not finding false positives?
On the other hand, maybe there is some guarantee about the smallest eigenvalue which can be used to make sure the floating point method never makes a false positive? | Determine if determinant is exactly zero | 0.119427 | 0 | 0 | 2,357 |
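The exact-elimination idea from the answers above can be sketched with the standard-library fractions module, which keeps every intermediate value an exact rational, so the result is 0 if and only if the matrix is truly singular:

```python
from fractions import Fraction

def exact_det(matrix):
    """Exact determinant via Gaussian elimination over the rationals.
    Works for any matrix of integers (including 0/1 matrices)."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n = len(a)
    det = Fraction(1)
    for col in range(n):
        # Find a row with a nonzero pivot in this column.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot: the matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det                  # a row swap flips the sign
        det *= a[col][col]
        inv = 1 / a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] * inv
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det
```

For 10x10 matrices this is fast enough to screen a large batch, and the comparison `exact_det(M) == 0` carries no floating-point ambiguity.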
21,613,027 | 2014-02-06T20:11:00.000 | 1 | 0 | 0 | 0 | python-2.7,wxpython | 21,630,518 | 1 | false | 0 | 1 | You need to realize that wxPython wraps the native widgets of the OS that it is running on, so if that underlying widget does not support the desired behavior, then wxPython will not either.
It should be noted, however, that you would normally update the StatusBar with information about the menu items. That is how the wxPython demo for the menus works. In said demo, there is a binding to wx.EVT_MENU_HIGHLIGHT_ALL that is used to update the StatusBar. You might be able to use that event to add a tooltip.
Alternatively, you might want to check out FlatMenu, which is a pure Python implementation of the wx.Menus. As such, you can easily add new behavior compared with trying to update something that is a wrapped C++ widget. | 1 | 1 | 0 | if I am not wrong wxMenuITem does not support tooltips
is there an easy turn around to show a popup tooltip of help upon mouse focus on a menu item for a sec or 2 ? | how to set wxTooltip or tooltip like on wxMenuItem | 0.197375 | 0 | 0 | 225 |
21,614,229 | 2014-02-06T21:17:00.000 | 0 | 0 | 1 | 0 | python | 21,614,342 | 1 | true | 0 | 0 | You can close it and just open a new one when you want to run the program, or run it from the terminal and use clear after you exit the shell | 1 | 0 | 0 | So I'm doing an MIT OCW assignment where I am creating a functional game of hangman. I got everything working. In IDLE, I have to hit F5 to run the code in the shell. I don't know any other way to run it, but that's not the big deal to me.
The main problem: the shell gets absolutely full of responses. It just keeps stacking up with more and more output. So, my question. Is it possible to put in a piece of code to clear the shell? Or do I just have to deal with it for now?
EDIT: To clarify:
I need the prints for each cycle. Every time the user guesses a letter, it prints something like this:
2 guesses remaining
Possible answers: abdfghijkpquvwxyz
Guess a letter:g
That letter is not in the word!
_ o_ _ _ tr_
I just want to know if there is a way I can clear what is in the shell before the next 'cycle' is printed. | Clearing IDLE's shell for Python | 1.2 | 0 | 0 | 69 |
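As a workaround sketch (an assumption on my part, since IDLE itself exposes no official clear command): many people fake a "clear" by printing a screenful of blank lines, or by invoking the OS clear command when the script runs in a real terminal:

```python
import os

def clear_command():
    """Pick the terminal clear command for the current OS."""
    return "cls" if os.name == "nt" else "clear"

def clear_screen(lines=50):
    """In a real terminal, run cls/clear; if that fails (as it will
    inside IDLE's shell), fall back to printing blank lines so the
    previous game cycle scrolls out of view."""
    if os.system(clear_command()) != 0:
        print("\n" * lines)
```

Calling `clear_screen()` at the top of each guess cycle keeps only the current game state visible.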
21,616,883 | 2014-02-07T00:15:00.000 | 0 | 1 | 0 | 0 | python,ckan | 21,623,741 | 1 | false | 1 | 0 | Do you need to clear your browser's cache? Are there any other settings (e.g. extra_public_paths) that are different between your dev and production machines? | 1 | 1 | 0 | I have developed couple of extensions and never had any problem in deploying to the production server. I did try to installation a new extension today on my production server that works on my dev machine but doesn't work on the production server. I am suppose to see a new menu option as part on this new extension and I don't see that. To test I changed the extension name in the production.ini and I got an expected error (PlugInNotFoundError). I have restarted the apache and nginx. I am running CKAN 2.1.
I ran the following command on the production server:
python setup.py develop
I got the message that the plugin was successfully installed.
I also included this new plugin in the production.ini file settings.
Restarted both the apache2 and nginx servers.
Still not seeing a new menu option to access the functionality provided by this newly installed extension.
Any help to sort this out would be appreciated.
Thanks,
PK | CKAN extension deployment not working | 0 | 0 | 0 | 217 |
21,617,616 | 2014-02-07T01:25:00.000 | 2 | 0 | 1 | 1 | json,pythonanywhere | 21,627,045 | 2 | false | 1 | 0 | You can get to /var/www/static in the File browser. Just click on the '/' in the path at the top of the page and then follow the links.
You can also just copy things there from a Bash console.
You may need to create the static folder in /var/www if it's not there already. | 1 | 1 | 0 | I am trying to have my app on Amazon appstore.
In order to do this Amazon needs to park a small json file (web-app-manifest.json).
If I upload it to the root of my web site (as suggested), the Amazon bot says it cannot access the file. Amazon support mentions I should save it to /var/www/static, but either I don't know how to get there or I don't have access to this part of the server.
Any ideas ? | Where should I save the Amazon Manifest json file on an app hosted at PythonAnywhere? | 0.197375 | 0 | 0 | 155 |
21,619,353 | 2014-02-07T04:28:00.000 | 0 | 0 | 1 | 1 | python,debian,enthought,pycuda | 21,994,749 | 1 | true | 0 | 0 | Depending on what version of Canopy you're using, try to set your LIBRARY_PATH variable.
Example: export LIBRARY_PATH=~/Enthought/lib
Then try to build the package. This worked for me but I don't know the root cause as to why Canopy virtual environment isn't setting this variable. | 1 | 0 | 0 | I am using Canopy enthought on a machine without su access.
Whenever i try to build any package dependent on python I get this error:
/usr/bin/ld: cannot find -lpython2.7
collect2: ld returned 1 exit status
error: command 'g++' failed with exit status 1
Any idea what's going wrong?
I am running Debian OS.
Thanks | Enthought canopy python -lpython2.7 not found | 1.2 | 0 | 0 | 241 |
21,621,742 | 2014-02-07T07:26:00.000 | 0 | 0 | 0 | 1 | python,linux,installation,exe,pyinstaller | 51,320,586 | 2 | false | 0 | 0 | Pyinstaller does not allow cross compilation. so if you want to have an executable file you should compile your project first in Linux OS and then you may use wine in which you can compile the project to have the windows executable | 1 | 2 | 0 | I am new to python.I Have a python script for copying files from local machine to sftp location.The script will use the wxpython,pycrypto and ssh modules of python.I created an exe file by using the pyinstaller.My machine is windows 7 64-bit.I used pyinstaller 2.1 and python 2.7.6.amd 64 for creating the exe file.It's working fine in windows 7 64-bit.But it's not working in xp,win7 32-bit.In linux i used wine for executing this exe but there also it's not working.
Then I created one more exe on a Windows 7 32-bit machine. This exe works fine on Win7 32- and 64-bit versions, but it does not work on XP.
Can anyone tell me what could be the reason and how to resolve it?
I want one installer which can be installed on Windows or Linux.
Thanks in advance. | Exe created with Pyinstaller in windows 7 is not working in xp and linux | 0 | 0 | 0 | 4,424 |
21,628,893 | 2014-02-07T13:22:00.000 | 1 | 1 | 0 | 0 | python,performance,z3 | 21,629,657 | 1 | true | 0 | 0 | Yes, there is measurable overhead of using the python API to build and traverse terms compared to the C/C++ APIs. | 1 | 1 | 0 | Is there any difference in performance between using the python API of Z3 instead of directly interacting with the C implementation through SMT-Lib files for instance?
Thanks! | Performance of the python Z3 API | 1.2 | 0 | 0 | 237 |
21,628,904 | 2014-02-07T13:23:00.000 | 8 | 0 | 0 | 0 | google-chrome,python-2.7,selenium-webdriver | 21,686,531 | 8 | true | 1 | 0 | @ExperimentsWithCode
Thank you for your answer again, I have spent almost the whole day today trying to figure out how to do this and I've also tried your suggestion where you add that flag --disable-user-media-security to chrome, unfortunately it didn't work for me.
However I thought of a really simple solution:
To automatically click on Allow, all I have to do is press the TAB key three times and then press Enter. And so I have written the program to do that automatically, and it WORKS !!!
The first TAB pressed when my html page opens directs me to my input box, the second to the address bar and the third on the ALLOW button, then the Enter button is pressed.
The python program uses selenium as well as PyWin32 bindings.
Thank you for taking your time and trying to help me it is much appreciated. | 1 | 9 | 0 | I have a HTML/Javascript file with google's web speech api and I'm doing testing using selenium, however everytime I enter the site the browser requests permission to use my microphone and I have to click on 'ALLOW'.
How do I make selenium click on ALLOW automatically ? | Accept permission request in chrome using selenium | 1.2 | 0 | 1 | 21,421 |
21,630,646 | 2014-02-07T14:47:00.000 | 0 | 0 | 0 | 0 | python,django,django-rest-framework | 21,727,785 | 1 | true | 1 | 0 | For anyone who has this problem this is my solution: I didn't use pre_save in the end and worked from validate, where you can access all the attributes. | 1 | 1 | 0 | How can I access the form data in presave? More exactly I have a ManyToManyField (called user_list) in my models and I want to access the list from pre_save(self, obj)
I've tried self.object.user_list and even obj.user_list but I keep getting an error.
Thanks | Get form data in pre_save | 1.2 | 0 | 0 | 56 |
21,630,959 | 2014-02-07T15:00:00.000 | 2 | 0 | 0 | 0 | python | 21,633,288 | 2 | true | 0 | 0 | If what you want is to have a unique key for each phone number, you can simply use symmetric cryptography such as AES. Use the phone number as the key, this guarantees that no other phone number will produce the same cipher.
Generation: "Original text" ---- encrypting with key=phone_number----> "9&&Y(&GG(O&GG(B)H)*H"
Verification: "9&&Y(&GG(O&GG(B)H)*H" ---- decrypting with key=phone_number----> "Original text" | 2 | 0 | 0 | I need to know whether I will have birthday collisions when hashing domestic (10 digits) and international (15 digits) phone numbers.
"Shouldn't have" != will not have
I thought I would quickly run some python to SHA1 each n in the space using redis SADD response = 0 to tell me if a collisions occurred.
Now a 10^n space seems small, but then when you map the pairwise comparisons, we are in O(10^n) space and that is a bad place to be computationally.
Do I need a perfect hash? I want these phone numbers to be unrecoverable so SHA256 has merits, but value^2 + value + salt or some such may be fine. I don't care about bit length of field, within reason. I will be doing joins though...
Any proof that anyone knows about that there are zero collisions in sequential set of numbers for a particular hash? Not probability of collision, but proof.
Thanks much!
Edited to reflect that it is smaller than n! and is indeed (10^n)(10^n-1)/2, so O(10^(2n)). Thanks DSM | Testing: collisions in sha1, sha256 of all phone numbers | 1.2 | 0 | 0 | 471
21,630,959 | 2014-02-07T15:00:00.000 | 2 | 0 | 0 | 0 | python | 28,868,156 | 2 | false | 0 | 0 | You anticipated that people will tell you you "probably" won't have a collision, but still it's worth clarifying what that means. The odds of finding a duplicate SHA2 are so low, that it's more likely a solar ray or a manufacturing defect will randomly flip a bit in your CPU. There's always a small chance that your program will go crazy, so it doesn't really make sense to code around probabilities that are even less likely than that.
You could argue that malicious input could make it more likely to hit a collision. But as of March 2015 no collision for SHA1 or SHA2 has ever been found with any input, and when one is found it will be big news.
@tk.'s suggestion to use AES is clever, so if that works for you, great. But if you run into problems -- for example, if you find yourself with a key longer than 32 bytes -- there's nothing wrong with falling back to SHA2. | 2 | 0 | 0 | I need to know whether I will have birthday collisions when hashing domestic (10 digits) and international (15 digits) phone numbers.
"Shouldn't have" != will not have
I thought I would quickly run some python to SHA1 each n in the space using redis SADD response = 0 to tell me if a collisions occurred.
Now a 10^n space seems small, but then when you map the pairwise comparisons, we are in O(10^n) space and that is a bad place to be computationally.
Do I need a perfect hash? I want these phone numbers to be unrecoverable so SHA256 has merits, but value^2 + value + salt or some such may be fine. I don't care about bit length of field, within reason. I will be doing joins though...
Any proof that anyone knows about that there are zero collisions in sequential set of numbers for a particular hash? Not probability of collision, but proof.
Thanks much!
Edited to reflect that it is smaller than n! and is indeed (10^n)(10^n-1)/2, so O(10^(2n)). Thanks DSM | Testing: collisions in sha1, sha256 of all phone numbers | 0.197375 | 0 | 0 | 471
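To put "probability of collision" in perspective without enumerating pairs, the standard birthday bound for k inputs into an n-bit hash is roughly 1 - exp(-k^2 / 2^(n+1)). A quick stdlib computation (a sketch using that approximation, not a proof of zero collisions) shows why SHA-256 over every 15-digit phone number is safe in practice:

```python
import math

def birthday_collision_prob(num_inputs, hash_bits):
    """Approximate P(at least one collision) for num_inputs values drawn
    uniformly from a 2**hash_bits space (standard birthday approximation).
    expm1 keeps the result accurate when the exponent is tiny."""
    exponent = -(num_inputs ** 2) / (2.0 * 2.0 ** hash_bits)
    return -math.expm1(exponent)

# All 15-digit international numbers hashed with SHA-256:
p = birthday_collision_prob(10 ** 15, 256)
# p is astronomically small (well below 1e-40)
```

This is still a probabilistic statement, not the proof asked for, but it quantifies how far below any practical risk threshold the SHA-256 case sits.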
21,631,528 | 2014-02-07T15:25:00.000 | 4 | 0 | 0 | 1 | python,google-app-engine,google-cloud-datastore,app-engine-ndb | 21,631,790 | 1 | true | 1 | 0 | It aint that difficult. Abstract:
[User]-> Posts [Data] to the [EntityCreatorPreviewHandler]
[EntityCreatorPreviewHandler]-> Recieves the data and creates the entity eg: book = Book(title='Test').
[EntityCreatorPreviewHandler]-> Templates the html and basically shows the entity with all it's attributes etc.
[EntityCreatorPreviewHandler]-> Also hides the initial [Data] in a hidden post form
[User]-> Accepts save after preview and as soon as the save button is pressed the hidden form is submitted to a EntitySaveHandler
[EntitySaveHandler] saves the data | 1 | 1 | 0 | I'd like users to create an entity, and preview it, before saving it in the datastore.
For example:
User completes entity form, then clicks "preview".
Forwarded to an entity 'preview' page which allows the user to "submit" and save the entity in the datastore, or "go back" to edit the entity.
How can I achieve this? | Google App Engine (Python): Allow entity 'previewing before 'submit' | 1.2 | 0 | 0 | 54 |
21,632,481 | 2014-02-07T16:08:00.000 | 3 | 0 | 1 | 0 | python | 21,632,541 | 4 | false | 0 | 0 | Edited for new requirements
You can use an if/else conditional expression (I had the wrong syntax earlier):
a and c if b else a | 2 | 1 | 0 | i have 3 variables, a, b, c. I want to check if a is true, AND if c is true but only if b is true. if b is false, only check a.
Eg:
if a is true, but b is false, then return true
if a is true, b is true, c is false, then return false.
if a is false, return false.
what is the most pythonic way of constructing this if condition? | Three-way true-false check | 0.148885 | 0 | 0 | 190 |
21,632,481 | 2014-02-07T16:08:00.000 | 6 | 0 | 1 | 0 | python | 21,632,593 | 4 | false | 0 | 0 | a and (not b or c) should also work. | 2 | 1 | 0 | i have 3 variables, a, b, c. I want to check if a is true, AND if c is true but only if b is true. if b is false, only check a.
Eg:
if a is true, but b is false, then return true
if a is true, b is true, c is false, then return false.
if a is false, return false.
what is the most pythonic way of constructing this if condition? | Three-way true-false check | 1 | 0 | 0 | 190 |
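Both proposed forms can be checked exhaustively against the stated requirements with a tiny truth-table loop (a quick verification sketch, with the requirement restated as a reference function):

```python
from itertools import product

def spec(a, b, c):
    """The requirement in words: a must hold; c is checked only when b holds."""
    if not a:
        return False
    return c if b else True

def expr(a, b, c):
    """The compact one-liner from the second answer."""
    return a and (not b or c)

# Compare the compact expression with the specification on all 8 inputs.
mismatches = [(a, b, c) for a, b, c in product([False, True], repeat=3)
              if spec(a, b, c) != expr(a, b, c)]
```

An empty `mismatches` list confirms `a and (not b or c)` implements the three stated rules exactly.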
21,633,136 | 2014-02-07T16:38:00.000 | 0 | 0 | 0 | 0 | python-2.7,probability,scikit-learn,prediction,adaboost | 21,645,757 | 1 | false | 0 | 0 | Do you mean you get probabilities per sample that are 1/n_classes on average? That's necessarily the case; the probabilities reported by predict_proba are the conditional class probability distribution P(y|X) over all values for y. To produce different probabilities, perform any necessary computations according to your probability model. | 1 | 0 | 1 | I'm using the AdaBoostClassifier in Scikit-learn and always get an average probability of 0.5 regardless of how unbalanced the training sets are. The class predictions (predict_) seems to give correct estimates, but these aren't reflected in the predict_probas method which always average to 0.5.
If my "real" probability is 0.02, how do I transform the standardized probability to reflect that proportion? | The predict method shows standardized probability? | 0 | 0 | 0 | 213 |
21,638,864 | 2014-02-07T22:21:00.000 | 3 | 0 | 1 | 0 | python | 21,638,919 | 1 | true | 0 | 0 | int with base 16 will turn that hex string into an integer. chr will turn that integer into a string of length 1. So, in total: chr(int('A0B0', 16)). | 1 | 0 | 0 | How can I convert hex code, e.g. "A0B0" (this means U+A0B0 char) to Unicode char,
e.g. string of length 1?
I use Py3k. | Python: convert Unicode hex code to string (len=1) | 1.2 | 0 | 0 | 75 |
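A short check of the round trip in Python 3, wrapping the accepted answer's chr(int(..., 16)) and adding the reverse direction via ord and string formatting:

```python
def hex_to_char(hex_code):
    """Turn a hex code point string such as 'A0B0' into the
    one-character string U+A0B0."""
    return chr(int(hex_code, 16))

def char_to_hex(ch):
    """The reverse direction, useful for round-trip checks."""
    return format(ord(ch), '04X')

c = hex_to_char('A0B0')   # a single character, len(c) == 1
```

In Python 3 this works for any valid code point, including those above U+FFFF.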
21,639,733 | 2014-02-07T23:38:00.000 | 0 | 1 | 0 | 0 | python,django,rabbitmq | 21,939,627 | 2 | false | 0 | 0 | I was never able to solve this. But, this forced me to learn what json is, I used simplejson along with httplib2 and it worked like a charm... | 1 | 0 | 0 | Im working on a little project that running rabbitmq with python, I need a way to access the management api and pull stats, jobs, etc. I have tried using pyRabbit, but doen't appear to be working unsure why, hoping better programmers might know? Below I was just following the basic tutorial and readme to perform the very basic task. My server is up, I'm able to connect outside of python and pyrabbit fine. I have installed off the dependencies with no luck, at least I think. Also open to other suggestions for just getting queue size, queues, active clients etc outside of pyRabbit.
'Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\user>python
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
import nose
import httplib2
import mock
from pyrabbit.api import Client
import pyrabbit
cl = Client('my.ip.com:15672', 'guest', 'guest')
cl.is_alive()
No JSON object could be decoded - (Not found.) ()
Traceback (most recent call last):
File "", line 1, in
File "C:\Python27\lib\site-packages\pyrabbit\api.py", line 48, in wrapper if self.has_admin_rights:
File "C:\Python27\lib\site-packages\pyrabbit\api.py", line 175, in has_admin_right whoami = self.get_whoami()
File "C:\Python27\lib\site-packages\pyrabbit\api.py", line 161, in get_whoami whoami = self.http.do_call(path, 'GET')
File "C:\Python27\lib\site-packages\pyrabbit\http.py", line 112, in do_call raise HTTPError(content, resp.status, resp.reason, path, body)
pyrabbit.http.HTTPError: 404 - Object Not Found (None) (whoami) (None)' | Unable to get pyrabbit to run | 0 | 0 | 1 | 781 |
21,640,028 | 2014-02-08T00:08:00.000 | 1 | 0 | 0 | 0 | python,arrays,numpy | 24,217,870 | 2 | false | 0 | 0 | In case anyone else has a similar problem but the chosen answer doesn't solve it, one possibility could be that in Python3, some index or integer quantity fed into a np function is an expression using '/' for example n/2, which ought to be '//'. | 1 | 3 | 1 | I have a large dataset stored in a numpy array (A) I am trying to sum by block's using:
B=numpy.add.reduceat(numpy.add.reduceat(A, numpy.arange(0, A.shape[0], n),axis=0), numpy.arange(0, A.shape[1], n), axis=1)
It works fine when I try it on a test array, but with my data I get the following message:
TypeError: Cannot cast array data from dtype('float64') to dtype('int32') according to the rule 'safe'
Does someone know how to handle this?
Thanks for the help. | Error when trying to sum an array by block's | 0.099668 | 0 | 0 | 4,600 |
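The '/' vs '//' distinction mentioned in the second answer can be seen with plain Python 3, without needing NumPy (a stdlib illustration of the same dtype problem): true division always yields a float, which is then rejected wherever an integer index is required:

```python
n = 10
half_true = n / 2     # true division: 5.0, a float even for even n
half_floor = n // 2   # floor division: 5, an int

data = list(range(10))
try:
    data[n / 2]       # a float index raises TypeError,
except TypeError as exc:   # the same class of error NumPy reports
    message = str(exc)
value = data[n // 2]  # the integer index works fine
```

So any n/2 fed to reduceat's index array in Python 3 should be n//2 (or wrapped in int()).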
21,641,696 | 2014-02-08T03:57:00.000 | 7 | 1 | 0 | 1 | python,python-2.7,module,resolver | 36,287,320 | 15 | false | 0 | 0 | You could also install the package with pip by using this command:
pip install git+https://github.com/rthalley/dnspython | 10 | 32 | 0 | I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded.
I have tried to clean up and install but the installation does not seem to be working.
$ python --version
Python 2.7.3
$ sudo pip install dnspython
Downloading/unpacking dnspython
Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded
Running setup.py egg_info for package dnspython
Installing collected packages: dnspython
Running setup.py install for dnspython
Successfully installed dnspython
Cleaning up...
$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named dns
Updated Output of python version and pip version command
$ which python
/usr/bin/python
$ python --version
Python 2.7.3
$ pip --version
pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)
Thanks a lot for your help.
Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work. | Python DNS module import error | 1 | 0 | 0 | 147,472 |
21,641,696 | 2014-02-08T03:57:00.000 | 0 | 1 | 0 | 1 | python,python-2.7,module,resolver | 67,931,629 | 15 | false | 0 | 0 | If you don't have (or don't want) pip installed there is another way: you can install the package with the native OS package manager.
For example, for Debian-based systems this would be the command:
apt install python3-dnspython | 10 | 32 | 0 | I have been using python dns module.I was trying to use it on a new Linux installation but the module is not getting loaded.
I have tried to clean up and install but the installation does not seem to be working.
$ python --version
Python 2.7.3
$ sudo pip install dnspython
Downloading/unpacking dnspython
Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded
Running setup.py egg_info for package dnspython
Installing collected packages: dnspython
Running setup.py install for dnspython
Successfully installed dnspython
Cleaning up...
$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named dns
Updated Output of python version and pip version command
$ which python
/usr/bin/python
$ python --version
Python 2.7.3
$ pip --version
pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)
Thanks a lot for your help.
Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work. | Python DNS module import error | 0 | 0 | 0 | 147,472 |
21,641,696 | 2014-02-08T03:57:00.000 | 0 | 1 | 0 | 1 | python,python-2.7,module,resolver | 66,007,768 | 15 | false | 0 | 0 | I have faced a similar issue when importing on Mac. I have Python 3.7.3 installed.
Following steps helped me resolve it:
pip3 uninstall dnspython
sudo -H pip3 install dnspython
import dns
import dns.resolver
I have tried to clean up and install but the installation does not seem to be working.
$ python --version
Python 2.7.3
$ sudo pip install dnspython
Downloading/unpacking dnspython
Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded
Running setup.py egg_info for package dnspython
Installing collected packages: dnspython
Running setup.py install for dnspython
Successfully installed dnspython
Cleaning up...
$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named dns
Updated Output of python version and pip version command
$ which python
/usr/bin/python
$ python --version
Python 2.7.3
$ pip --version
pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)
Thanks a lot for your help.
Note:- I have firewall installed on the new machine. I am not sure if it should effect the import. but i have tried disabling it and still it does not seem to work. | Python DNS module import error | 0 | 0 | 0 | 147,472 |
21,641,696 | 2014-02-08T03:57:00.000 | 0 | 1 | 0 | 1 | python,python-2.7,module,resolver | 61,213,715 | 15 | false | 0 | 0 | OK, to resolve this, first install dnspython from the command line using pip install dnspython
(if you use conda, first type activate in cmd so you are in the base environment, then run the command above)
It will be installed into the Anaconda site-packages; copy the location of that site-packages folder from cmd and open it. Now copy all the dns folders and paste them into the Python site-packages folder. That will resolve it.
The thing is, our code cannot find the specified package in python\site-packages because it is in anaconda\site-packages, so you have to COPY it (not cut). | 10 | 32 | 0 |
21,641,696 | 2014-02-08T03:57:00.000 | 1 | 1 | 0 | 1 | python,python-2.7,module,resolver | 59,751,991 | 15 | false | 0 | 0 | I faced the same problem and solved it as described below:
As you have downloaded and installed dnspython successfully:
Enter the dnspython folder
You will find a dns directory; copy it
Then paste it inside the site-packages directory
That's all. Your problem will be gone
If dnspython isn't installed, you can install it this way:
go to the site-packages directory of your Python installation
open cmd there and enter the command:
pip install dnspython
Now dnspython will be installed successfully. | 10 | 32 | 0 |
21,641,696 | 2014-02-08T03:57:00.000 | 0 | 1 | 0 | 1 | python,python-2.7,module,resolver | 57,668,403 | 15 | false | 0 | 0 | In my case, I had written my code in a file named "dns.py", which conflicts with the package, so I had to rename the script. | 10 | 32 | 0 |
21,641,696 | 2014-02-08T03:57:00.000 | 1 | 1 | 0 | 1 | python,python-2.7,module,resolver | 57,207,302 | 15 | false | 0 | 0 | I was getting an error while using "import dns.resolver". I tried dnspython and py3dns but they failed.
dns wouldn't install; after much trial and error I installed the pubdns module and it solved my problem. | 10 | 32 | 0 |
21,641,696 | 2014-02-08T03:57:00.000 | 0 | 1 | 0 | 1 | python,python-2.7,module,resolver | 53,703,267 | 15 | false | 0 | 0 | I installed DNSpython 2.0.0 from the github source, but running 'pip list' showed the old version of dnspython 1.2.0
It only worked after I ran 'pip uninstall dnspython', which removed the old version, leaving just 2.0.0; then 'import dns' ran smoothly. | 10 | 32 | 0 |
21,641,696 | 2014-02-08T03:57:00.000 | 0 | 1 | 0 | 1 | python,python-2.7,module,resolver | 40,167,343 | 15 | false | 0 | 0 | This issue can be generated by Symantec End Point Protection (SEP).
And I suspect most EPP products could potentially impact your running of scripts.
If SEP is disabled, the script will run instantly.
Therefore you may need to update the SEP policy so that it does not block what your Python scripts access. | 10 | 32 | 0 |
21,641,696 | 2014-02-08T03:57:00.000 | 4 | 1 | 0 | 1 | python,python-2.7,module,resolver | 21,643,858 | 15 | false | 0 | 0 | I installed dnspython 1.11.1 on my Ubuntu box using pip install dnspython. I was able to import the dns module without any problems
I am using Python 2.7.4 on an Ubuntu based server. | 10 | 32 | 0 |
21,645,169 | 2014-02-08T11:01:00.000 | 0 | 0 | 0 | 0 | .htaccess,security,ipython,ipython-notebook | 21,647,276 | 1 | false | 0 | 0 | There is no built-in mechanism for combining auth and read-only notebooks in IPython, and security is complex enough that an Auth+Read-only option is not being added at the moment.
You can also install a local copy of nbviewer and/or use nbconvert to export your notebook as static HTML, and serve it using a classical Apache/.htaccess scheme. | 1 | 0 | 0 | In the IPython notebook I found two (more or less) ways to secure my remote server which is running as my notebook host.
The c.NotebookApp.password option in the config secures the notebook from write access.
The --read-only flag allows non-authenticated users only to view my notebook.
But option 2 does not really work for me.
The problem is that it allows anybody to view my notebook. Actually I only want some privileged users to view my notebook. Until now I haven't found any way to do so.
Is there a possibility to secure my notebook globally with, e.g., a .htaccess file or anything else?
In that case I could give all users the website password and change the notebook with my first option. | Global access password for ipython notebook | 0 | 0 | 0 | 202
21,650,889 | 2014-02-08T19:39:00.000 | 1 | 0 | 0 | 0 | python,database-connection | 21,651,170 | 1 | true | 1 | 0 | Here's how I would do:
Use a connection pool with a queue interface. You don't have to choose a connection object; you just pick the next one in line. This can be done whenever you need a transaction, and the connection can be put back afterwards.
Unless you have some very specific needs, I would use a Singleton class for the database connection. No need to pass parameters on the constructor every time.
For testing, you just put a mocked database connection on the Singleton class.
Edit:
About the connection pool questions (I could be wrong here, but it would be my first try):
Keep all connections open. Pop when you need, put when you don't need it anymore, just like a regular queue. This queue could be exposed from the Singleton.
You start with a fixed, default number of connections (like 20). You could override the pop method, so when the queue is empty you block (wait for another to free if the program is multi-threaded) or create a new connection on the fly.
Destroying connections is more subtle. You need to keep track of how many connections the program is using, and how likely it is you have too many connections. Take care, because destroying a connection that will be needed later slows the program down. In the end, it's a heuristic problem that changes the performance characteristics. | 1 | 0 | 0 | I have a Model class which is part of my self-crafted ORM. It has all kinds of methods like save(), create() and so on. Now, the thing is that all these methods require a connection object to act properly. And I have no clue on what's the best approach to feed a Model object with a connection object.
What I thought of so far:
provide a connection object in a Model's __init__(); this will work, by setting an instance variable and using it throughout the methods, but it will kind of break the API; users shouldn't have to feed a connection object every time they create a Model object;
create the connection object separately, store it somewhere (where?) and on Model's __init__() get the connection from where it has been stored and put it in an instance variable (this is what I thought to be the best approach, but have no idea of the best spot to store that connection object);
create a connection pool which will be fed with the connection object, then on Model's __init__() fetch the connection from the connection pool (how do I know which connection to fetch from the pool?).
If there are any other approaches, please do tell. Also, I would like to know which is the proper way to do this. | Getting connection object in generic model class | 1.2 | 1 | 0 | 76
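The queue-backed pool described in the answer above can be sketched in a few lines. FakeConnection stands in for a real DB connection; it is purely an assumption for illustration:

```python
import queue

class ConnectionPool:
    """Minimal queue-backed pool: get() pops a connection, put() returns it."""
    def __init__(self, factory, size=20):
        self._factory = factory
        self._q = queue.Queue()
        for _ in range(size):          # start with a fixed default number
            self._q.put(factory())

    def get(self):
        try:
            return self._q.get_nowait()
        except queue.Empty:            # pool exhausted: create one on the fly
            return self._factory()

    def put(self, conn):
        self._q.put(conn)

class FakeConnection:                  # stand-in for a real DB connection
    pass

pool = ConnectionPool(FakeConnection, size=2)
conn = pool.get()
# ... run queries with conn ...
pool.put(conn)                         # return it to the queue for reuse
print(type(conn).__name__)
```

Exposing a single pool instance from a Singleton (or a module-level global, which is Python's usual equivalent) matches the answer's second point, and tests can swap in a pool of mocked connections.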
21,652,046 | 2014-02-08T21:31:00.000 | 3 | 0 | 1 | 0 | python-3.x | 44,586,895 | 3 | false | 0 | 0 | You can use pip3 install -U matplotlib; this will install the package for Python 3.x. | 1 | 1 | 0 | I have read the similar questions already posted and the answers were over my head. I am very new to Python. I have Python 3.3.3 and keep getting the error message
"ImportError: No module named pylab" when I try import pylab
How can I get what I need to use "pylab"? Any advice would be great. I'm just trying to make basic points on a graph. | ImportError: No module named pylab | 0.197375 | 0 | 0 | 11,383 |
21,652,251 | 2014-02-08T21:53:00.000 | -2 | 0 | 1 | 0 | python,nlp,nltk,stanford-nlp | 21,656,184 | 2 | false | 0 | 0 | There is no module named stanford in NLTK. You can store the output of the Stanford parser and make use of it from a Python program. | 1 | 2 | 1 | I am having problems accessing the Stanford parser through Python NLTK (they developed an interface for NLTK)
import nltk.tag.stanford
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named stanford | nltk interface to stanford parser | -0.197375 | 0 | 0 | 5,543 |
21,653,108 | 2014-02-08T23:30:00.000 | 0 | 0 | 0 | 0 | python,qt,python-2.7,pyqt,pyqt4 | 21,654,212 | 3 | false | 0 | 1 | The QTreeWidget is a red herring. What you are saving is a generic QAbstractItemModel (treeWidget->model()) - after all, a QTreeWidget is a view, and has a built-in model. Now, those model's items are simply QVariants, and those are simply Python types, but also fully supported by QDataStream::operator<<. All you need is to choose a tree traversal (depth-first, breadth-first, or something else), and dump the items and their depth in the tree to the stream. When you read the stream, that's sufficient information to reconstruct the tree. | 1 | 4 | 0 | I recently spent some time working out how to use a QDataStream with a QTreeWidget in PyQt. I never found specific examples for doing exactly this, and pyqt documentation for QDataStream seems to be pretty scarce in general. So I thought I'd post a question here as a breadcrumb trail in case someone else down the line needs a hint. I'll wait a bit in case someone would like to jump in and take a shot at it, and I'll post back in a bit with my own efforts.
The question is: In PyQt, how can I use a QDataStream to save QTreeWidgetItems to a file as native QT objects, and then read the file back to restore the tree structure exactly as it was saved?
Eric | PyQt: Saving native QTreeWidgets using QDataStream | 0 | 0 | 0 | 2,494 |
21,655,862 | 2014-02-09T05:53:00.000 | 13 | 0 | 0 | 1 | python,google-app-engine,app-engine-ndb | 21,658,988 | 2 | true | 1 | 0 | I think you're overcomplicating things in your mind. When you create an entity, you can either give it a named key that you've chosen yourself, or leave that out and let the datastore choose a numeric ID. Either way, when you call put, the datastore will return the key, which is stored in the form [<entity_kind>, <id_or_name>] (actually this also includes the application ID and any namespace, but I'll leave that out for clarity).
You can make entities members of an entity group by giving them an ancestor. That ancestor doesn't actually have to refer to an existing entity, although it usually does. All that happens with an ancestor is that the entity's key includes the key of the ancestor: so it now looks like [<parent_entity_kind>, <parent_id_or_name>, <entity_kind>, <id_or_name>]. You can now only get the entity by including its parent key. So, in your example, the Shoe entity could be a child of the Person, whether or not that Person has previously been created: it's the child that knows about the ancestor, not the other way round.
(Note that that ancestry path can be extended arbitrarily: the child entity can itself be an ancestor, and so on. In this case, the group is determined by the entity at the top of the tree.)
Saving entities as part of a group has advantages in terms of consistency, in that a query inside an entity group is always guaranteed to be fully consistent, whereas outside the query is only eventually consistent. However, there are also disadvantages, in that the write rate of an entity group is limited to 1 per second for the whole group. | 1 | 17 | 0 | I'm creating a Google App Engine application (python) and I'm learning about the general framework. I've been looking at the tutorial and documentation for the NDB datastore, and I'm having some difficulty wrapping my head around the concepts. I have a large background with SQL databases and I've never worked with any other type of data storage system, so I'm thinking that's where I'm running into trouble.
My current understanding is this: The NDB datastore is a collection of entities (analogous to DB records) that have properties (analogous to DB fields/columns). Entities are created using a Model (analogous to a DB schema). Every entity has a key that is generated for it when it is stored. This is where I run into trouble because these keys do not seem to have an analogy to anything in SQL DB concepts. They seem similar to primary keys for tables, but those are more tightly bound to records, and in fact are fields themselves. These NDB keys are not properties of entities, but are considered separate objects from entities. If an entity is stored in the datastore, you can retrieve that entity using its key.
One of my big questions is where do you get the keys for this? Some of the documentation I saw showed examples in which keys were simply created. I don't understand this. It seemed that when entities are stored, the put() method returns a key that can be used later. So how can you just create keys and define ids if the original keys are generated by the datastore?
Another thing that I seem to be struggling with is the concept of ancestry with keys. You can define parent keys of whatever kind you want. Is there a predefined schema for this? For example, if I had a model subclass called 'Person', and I created a key of kind 'Person', can I use that key as a parent of any other type? Like if I wanted a 'Shoe' key to be a child of a 'Person' key, could I also then declare a 'Car' key to be a child of that same 'Person' key? Or will I be unable to after adding the 'Shoe' key?
I'd really just like a simple explanation of the NDB datastore and its API for someone coming from a primarily SQL background. | Simple explanation of Google App Engine NDB Datastore | 1.2 | 1 | 0 | 7,423 |
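The key structure the answer describes can be pictured with plain Python tuples. This is a toy model only, not the actual ndb API: a key is just a path of (kind, id_or_name) pairs, and a child's key embeds its ancestor's key:

```python
# toy illustration of datastore key paths (NOT the real ndb API)
person = ('Person', 'amy')        # root key: [<kind>, <id_or_name>]
shoe = person + ('Shoe', 1)       # child key includes the ancestor's key
car = person + ('Car', 7)         # a sibling child of the same Person

def group_root(key):
    # the entity group is determined by the top of the ancestry path
    return key[:2]

print(group_root(shoe) == group_root(car) == person)  # True
```

Note how nothing stops a 'Shoe' and a 'Car' from sharing the same 'Person' parent: the child key simply records the ancestor path, so both children belong to the same entity group.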
21,656,058 | 2014-02-09T06:17:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,cgi,mod-wsgi,wsgi | 21,656,118 | 1 | false | 1 | 0 | Go ahead with your work. It will be fine if you are using the Common Gateway Interface, but you should keep an eye on network traffic. | 1 | 0 | 0 | Hi everyone, I'm working on a social network project. I'm using Python's Common Gateway Interface to write everything and to handle the database and Ajax. I have a question: I heard that the Web Server Gateway Interface is better than the Common Gateway Interface and can handle more users and higher traffic, but I have already finished more than half of the project. What should I do now? I don't have much time to go back either. Is Python's Common Gateway Interface that bad for a large-scale project? | Python cgi or wsgi | 0 | 0 | 1 | 227
21,661,090 | 2014-02-09T15:37:00.000 | 9 | 0 | 1 | 0 | python,python-3.x | 21,661,282 | 3 | true | 0 | 0 | Depending on how complicated your expressions are, ast.literal_eval may be a safer alternative. | 2 | 7 | 0 | I'm using Python 3.
I suppose the best way to ask this is how can I input an expression without using eval(input("Input: "))?
I'm a simple user right now, so what I needed eval for was an algebra calculator. | Python3, best alternatives to eval()? (input as an expression) | 1.2 | 0 | 0 | 17,250 |
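A short sketch of the ast.literal_eval approach from the answer above: it parses Python literals (numbers, strings, tuples, lists, dicts) but refuses arbitrary code, which is why it is safer than eval for user input:

```python
import ast

# literals parse fine:
value = ast.literal_eval("{'x': 1, 'y': [2, 3]}")
print(value['y'])

# arbitrary code is rejected instead of executed:
try:
    ast.literal_eval("__import__('os').system('rm -rf /')")
except (ValueError, SyntaxError):
    print("rejected")
```

Note that literal_eval only accepts literals, not arithmetic over variables like 2*x + 1, so for a full algebra calculator you would still need a small expression parser of your own.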
21,661,090 | 2014-02-09T15:37:00.000 | -3 | 0 | 1 | 0 | python,python-3.x | 21,661,110 | 3 | false | 0 | 0 | how can I input an expression without using eval(input("Input: "))
Simply don't use eval. In Python 3.x, input is the replacement for raw_input, so you don't need eval. You can simply write your statement as input("Input: ")
Moreover, in Python 2.x, you should have used raw_input("Input: ") instead of eval(input("Input: ")) | 2 | 7 | 0 |
21,665,412 | 2014-02-09T21:40:00.000 | 0 | 0 | 0 | 0 | python,django | 21,709,938 | 1 | true | 1 | 0 | Sounds like the proper use case for auth groups to me. Some sort of department-head role that can edit the name of a group they belong to (but not the permissions). Whoever IS creating the actual groups and setting permissions is a superuser. Even if you create your own model you can't escape the fact that authority must flow from somewhere. Auth groups have a lot of built-in features around permissions that should save you time. | 1 | 1 | 0 | I am creating a Django app where there will be organizations. Organizations will contain departments –– like human resources/sales –– that will each have their own permissions. The name and roles of groups must be set by the organization itself and won't be known in advance.
There will also be different permissions granted within groups –– a sales manager can do more than a salesperson.
I am unsure to what extent I should use Django's inbuilt groups to handle permissions. Would it be appropriate to make an organization a group? Should a salesperson be a member of two groups –– a departmental group (sales) and a role-based group (salesperson)? | Using Groups in Django To Map Organizations | 1.2 | 0 | 0 | 333 |
21,665,740 | 2014-02-09T22:11:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,locking,eventlet,green-threads | 21,666,024 | 1 | true | 0 | 0 | There is no difference in the behavior. However, a green thread isn't actually an OS thread, since all green threads run their tasks in a single OS thread, so threading.Lock and threading.Semaphore will behave as if they are being locked and unlocked from a single thread.
This means if you try to acquire a locked Lock or a zeroed Semaphore when using green threads, the whole program will block forever (or until the specified timeout). Also, an RLock can only be released from the thread that locked it; since green threads all run on the same OS thread, you will be able to release an RLock from a different green thread.
In short, don't use threading locks when using green threads. | 1 | 2 | 0 | Is there any difference between threading.Lock(), threading.Semaphore() behaviour in usual python thread and greenthread (eventlet)? | Any difference between Lock behaviour in python thread and green thread? | 1.2 | 0 | 0 | 1,095 |
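The re-acquisition behaviour the answer warns about can be seen without eventlet at all. From a single thread of control, which is exactly what all green threads share, a plain Lock cannot be taken twice, while an RLock can (a minimal sketch):

```python
import threading

lock = threading.Lock()
lock.acquire()
# a second acquire from the same thread would block forever;
# non-blocking mode shows the refusal instead of hanging
print(lock.acquire(blocking=False))   # False
lock.release()

rlock = threading.RLock()
rlock.acquire()
print(rlock.acquire(blocking=False))  # True: reentrant for the same thread
rlock.release()
rlock.release()
```

With green threads, the first pattern is the dangerous one: a second greenlet trying to take the held Lock blocks the one OS thread everything runs on.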
21,666,800 | 2014-02-09T23:53:00.000 | 0 | 0 | 1 | 0 | python,regex | 21,667,311 | 1 | true | 0 | 0 | Well,
This is one regular expression containing all the valid tokens for Python, unless I am mistaken:
(^|\A)\bthis\t\r\n\a\e\f\v.\u0021\w\W\d\D\s\S[\x00\xBC\xC9\u00C7\w\n!-a]\Bregex*shows+every?token{1,}(?P<that>you can)\1(?=possibly)(?!use)(?<=within)(?<!a single)(?#regular expression in python)(?:\Z|$) | 1 | 0 | 0 | Do you know one big pattern, or several small patterns using all the commands of the re pattern syntax ?
This is for doing some tests on a function that gives a tree view of a re pattern (this tree view is more user-friendly than the one given by sre_parse). | REGEX - Patterns showing all the `re` commands | 1.2 | 0 | 0 | 37
21,667,168 | 2014-02-10T00:30:00.000 | 0 | 0 | 1 | 0 | python-3.x,runtime | 21,667,217 | 2 | true | 0 | 0 | Yes, you can definitely do that in Python.
Although, it opens a security hole, so be very careful.
You can easily do this by setting up a "loader" class that collects the source code you want to use and then calls the built-in exec function: pass some Python source code in and it will be evaluated. | 1 | 2 | 0 | What I mean by this is:
I have a program. The end user is currently using it. I submit a new piece of source code and expect it to run as if it were always there?
I can't find an answer that specifically answers the point.
I'd like to be able to say, "extend" or add new features (rather than fix something that's already there on the fly) to the program without requiring a termination of the program (eg. Restart or exit). | Can I alter Python source code while executing? | 1.2 | 0 | 0 | 42 |
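A minimal sketch of the exec-based loading the answer describes: new source arrives as a string and becomes callable in the running program. Remember the security caveat above: only exec code you trust.

```python
new_source = """
def greet(name):
    return 'hello ' + name
"""

namespace = {}
exec(new_source, namespace)         # compile & run the new code
print(namespace['greet']('world'))  # the new function is now callable
```

In a real program the string would come from a file or network rather than a literal, and the loader class would decide which namespace each piece of code is installed into.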
21,667,547 | 2014-02-10T01:17:00.000 | 1 | 0 | 0 | 0 | python,pandas,sas,hdf5 | 21,668,162 | 2 | true | 0 | 0 | I haven't had much luck with this in the past. We (where I work) just use Tab separated files for transport between SAS and Python -- and we do it a lot.
That said, if you are on Windows, you can attempt to set up an ODBC connection and write the file that way. | 1 | 10 | 1 | I have multiple large (>10GB) SAS datasets that I want to convert for use in pandas, preferably in HDF5. There are many different data types (dates, numerical, text) and some numerical fields also have different error codes for missing values (i.e. values can be ., .E, .C, etc.) I'm hoping to keep the column names and label metadata as well. Has anyone found an efficient way to do this?
I tried using MySQL as a bridge between the two, but I got some Out of range errors when transferring, plus it was incredibly slow. I also tried exporting from SAS in Stata .dta format, but SAS (9.3) exports in an old Stata format that is not compatible with read_stata() in pandas. I also tried the sas7bdat package, but from the description it has not been widely tested, so I'd like to load the datasets another way and compare the results to make sure everything is working properly.
Extra details: the datasets I'm looking to convert are those from CRSP, Compustat, IBES and TFN from WRDS. | Converting large SAS dataset to hdf5 | 1.2 | 0 | 0 | 2,301 |
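The tab-separated transport the answer describes can be read back on the Python side while mapping SAS missing-value codes (., .E, .C, ...) to None. A stdlib-only sketch; with pandas available, read_csv(..., sep='\t', na_values=['.', '.E', '.C']) does the same job:

```python
import csv
import io

SAS_MISSING = {'.', '.E', '.C'}   # extend with whatever codes your data uses

def read_sas_tsv(fileobj):
    """Read a SAS-exported TSV, turning SAS missing-value codes into None."""
    reader = csv.DictReader(fileobj, delimiter='\t')
    return [{k: (None if v in SAS_MISSING else v) for k, v in row.items()}
            for row in reader]

sample = io.StringIO("date\tprice\n2013-01-02\t12.5\n2013-01-03\t.E\n")
print(read_sas_tsv(sample))
```

Values stay strings here; a real pipeline would convert columns to proper dtypes afterwards, which is another thing pandas does for you when reading.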
21,670,272 | 2014-02-10T06:16:00.000 | 0 | 1 | 0 | 1 | python,bash | 21,670,472 | 4 | true | 0 | 0 | It is probably caused by the following.
Your script imports some third-party library which was compiled by an older python version.
To fix this, reinstall the up-to-date library. | 2 | 0 | 0 | I have python 2.6 and python installed on my Freebsd box. I want my bash script to execute a particular python script using python2.6 interpreter. It is showing import error....
Undefined symbol "PyUnicodeUCS2_DecodeUTF8" | How to make bash script use a particular python version for executing a python script? | 1.2 | 0 | 0 | 921 |
21,670,272 | 2014-02-10T06:16:00.000 | 0 | 1 | 0 | 1 | python,bash | 21,670,553 | 4 | false | 0 | 0 | Use the absolute path to the python version you want. | 2 | 0 | 0 | How to make bash script use a particular python version for executing a python script? | 0 | 0 | 0 | 921
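In a shell script, that means naming the interpreter explicitly rather than relying on whatever python happens to be first on PATH. A sketch; on FreeBSD the ports install path is typically /usr/local/bin/python2.6, but that is an assumption to verify with which python2.6:

```shell
#!/bin/sh
# pin the interpreter; the fallback just finds some python on PATH
PYTHON="${PYTHON:-$(command -v python3 || command -v python)}"
"$PYTHON" -c 'import sys; print(sys.version.split()[0])'
```

Setting PYTHON=/usr/local/bin/python2.6 before running the script (or hard-coding that path) guarantees the script's imports are resolved against that interpreter's site-packages, which also avoids the mismatched-binary-module error quoted above.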
21,675,209 | 2014-02-10T10:53:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,oauth-2.0,google-drive-api,httplib2 | 21,677,121 | 1 | true | 1 | 0 | You mention "AppEngine's oauth2 library", but then you say "Drive API calls time out". So modifying the Oauth http library won't affect Drive.
Are you using the Google library for your Drive calls, or making direct REST HTTP calls?
If the former, try:
HttpRequest.setConnectTimeout(55000)
If the latter, just:
request.getFetchOptions().setDeadline(55d)
NB. Drive is having a brain fart today, so one would hope the underlying problem will go away of its own accord. | 1 | 0 | 0 | So I built an app on App Engine that takes users' files and moves them to certain folders in the same domain. I made a REST API that calls the Drive API to list files, rename files, change permissions, etc.
On app load, it fires 4 Ajax calls to the server to get the names and IDs of folders and to check whether certain folders exist.
The problem is that the front-end Ajax calls time out all the time in production. App Engine's URL Fetch has a 60-second limit. I used App Engine's oauth2 library, which uses a different httplib2. So I modified the httplib2 source deadline to the maximum 60 seconds, but it seems to time out after 30 seconds. As a result, Drive API calls time out almost every time and the app just doesn't work.
I have read the guideline on optimizing Drive API calls with partial responses and implemented it, but didn't see a noticeable difference. It's driving me crazy.... please help | Google Drive API + App Engine = time out | 1.2 | 0 | 0 | 588
21,675,320 | 2014-02-10T10:59:00.000 | -2 | 0 | 0 | 0 | python,qt,autocomplete,tkinter | 21,675,660 | 2 | false | 0 | 1 | You can use QTextEdit::cursorForPosition to get a cursor for mouse position. After that you can call QTextCursor::select with QTextCursor::WordUnderCursor to select the word and QTextCursor::selectedText to get the word. | 1 | 3 | 0 | I want to make a text editor with autocompletion feature. What I need is somehow get the text which is selected by mouse (case #1) or just a word under cursor(case #2) to compare it against a list of word I want to be proposed for autocompletion. By get I mean return as a a string value.
Can it be done with tkinter at all? I'm not familiar with qt but I'll try to use it if the feature can be achieved with it. | How do I return word under cursor in tkinter? | -0.197375 | 0 | 0 | 1,225 |
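In tkinter itself the Text widget can do this directly: text_widget.get('insert wordstart', 'insert wordend') returns the word at the insertion cursor, and index('@x,y') maps a mouse position to a text index first. The word extraction underneath is plain string slicing, sketched here without a GUI:

```python
def word_at(text: str, pos: int) -> str:
    """Return the word containing character index pos -- a GUI-free sketch
    of what Tk's 'wordstart'/'wordend' index modifiers compute."""
    def is_word(ch):
        return ch.isalnum() or ch == "_"
    if pos >= len(text) or not is_word(text[pos]):
        return ""
    start = pos
    while start > 0 and is_word(text[start - 1]):
        start -= 1
    end = pos
    while end < len(text) and is_word(text[end]):
        end += 1
    return text[start:end]

print(word_at("hello brave world", 7))  # → brave
```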
21,675,570 | 2014-02-10T11:11:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,scipy,scikit-learn | 23,423,563 | 4 | false | 0 | 0 | Want to add to ford's answer that you have to do like this
metric = DistanceMetric.get_metric('pyfunc',func=/your function name/)
You cannot just put your own function as the second argument, you must name the argument as "func" | 1 | 7 | 1 | I have to apply Nearest Neighbors in Python, and I am looking ad the scikit-learn and the scipy libraries, which both require the data as input, then will compute the distances and apply the algorithm.
In my case I had to compute a non-conventional distance, therefore I would like to know if there is a way to directly feed the distance matrix. | Nearest Neighbors in Python given the distance matrix | 0 | 0 | 0 | 5,868 |
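If the full distance matrix is already computed, the k nearest neighbours of each point can be read straight off it; a dependency-free sketch:

```python
def knn_from_distance_matrix(D, k):
    """For each point i, return the indices of its k nearest other points
    (self excluded), closest first, read off a full distance matrix D."""
    result = []
    for i, row in enumerate(D):
        order = sorted(range(len(row)), key=lambda j: row[j])
        result.append([j for j in order if j != i][:k])
    return result

# Any symmetric matrix of pairwise "non-conventional" distances works:
D = [[0, 2, 9],
     [2, 0, 5],
     [9, 5, 0]]
print(knn_from_distance_matrix(D, 2))  # → [[1, 2], [0, 2], [1, 0]]
```

(Recent scikit-learn versions also accept metric='precomputed' in NearestNeighbors, which lets you fit the distance matrix directly; check the docs for your version.)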
21,678,893 | 2014-02-10T13:44:00.000 | 1 | 0 | 1 | 0 | python,module,compilation,packages,software-distribution | 21,679,241 | 1 | false | 0 | 0 | Why packages just don't include dependencies: License. You can't just add somebody else's code, compiled or not, without asking the person/company or even pay fees.
In Python, these built modules you talk about are Python Extensions. They are usually there to improve performance or access low-level functionality which is not in Python. Sometimes also to include proprietary functionality. | 1 | 3 | 0 | I've noticed that Python modules/packages come in two sorts. Some are just pure Python scripts and can simply be copied and pasted to the Python directory. Others however require, and I think these are usually wrappers for or based on C/C++ code, that the code is "built" and/or "compiled" with setup.py to produce a set of new files.
My question is about the second type of module/package. Why do they have to be compiled? Is there a particular reason for it? Couldn't the distributor just provide all the files from the beginning?
The reason I ask is because I wish to distribute such C++-based packages as part of my own packages, so that users don't have to worry about installing dependencies on their own, or about compiling, etc. I've always wondered why module distributors don't just include the dependencies instead of asking the user to install them themselves.
I suspect the answer might be because those extra files have to be built in a specific way depending on the computer's OS and whether it is 32- or 64-bit. Could this mean that distributing the compiled files will work, but only if the user has the same OS and bitness as the machine where the files were compiled?
Anyway, curious to know the answer. | Why do some Python modules have to be "compiled"? | 0.197375 | 0 | 0 | 754 |
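A quick way to see the two kinds side by side from a running interpreter: pure-Python modules load straight from source, while compiled extensions use platform-specific binary suffixes, which is exactly why they must be built per OS and architecture:

```python
import json
import importlib.machinery

# A pure-Python package loads from source:
print(json.__file__)  # typically ends in .py
# Compiled extensions use platform-specific binary suffixes, e.g.
# '.cpython-311-x86_64-linux-gnu.so' on Linux or '.pyd' on Windows:
print(importlib.machinery.EXTENSION_SUFFIXES)
```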
21,682,005 | 2014-02-10T16:03:00.000 | 0 | 0 | 1 | 0 | python,function,arguments | 52,763,519 | 1 | false | 0 | 0 | Depending on exactly what you are doing, you can pass in arbitrary parameters to Python functions in one of two standard ways. The first is to pass them as a tuple (i.e. based on location in the function call). The second is to pass them as key-value pairs, stored in a map in the function definition. If you wanted to be able to differentiate the arguments using keys, you would call the function using arguments of the form key=value and retrieve them from a map parameter (declared with ** prefix) in the function definition. This parameter is normally called kwargs by convention. The other way to pass an arbitrary number of parameters is to pass them as a tuple. Python will wrap the arguments in a tuple automatically if you declare it with the * prefix. This parameter is usually called args by convention. You can of course use both of these in some combination along with other named arguments as desired. | 1 | 6 | 0 | I've encountered a problem in a project where it may be useful to be able to pass a large number (in the tens, not the hundreds) of arguments to a single "Write once, use many times" function in Python. The issue is, I'm not really sure what the best ay is to handle a large block of functions like that - just pass them all in as a single dictionary and unpack that dictionary inside the function, or is there a more efficient/pythonic way of achieving the same effect. | Methods for passing large numbers of arguments to a Python function | 0 | 0 | 0 | 4,968 |
21,684,420 | 2014-02-10T17:50:00.000 | 0 | 1 | 0 | 0 | python,security,angularjs,authentication | 21,684,701 | 1 | false | 1 | 0 | As long as you send the sensitive data outside, you are at risk. You can obfuscate your code so that first grade malicious users have a hard time finding the key, but basically breaking your security is just a matter of time as an attacker will have all the elements to analyse your protocol and exchanged data and design a malicious software that will mimic your original client.
One possible (although not unbreakable) solution would be to authenticate the users themselves so that you keep a little control over who is accessing the data and revoke infected accounts. | 1 | 0 | 0 | I have a server implementing a python API. I am calling functions from a frontend that uses Angular.js. Is there any way to add an authentication key to my calls so that random people cannot see the key through the Angular exposed code? Maybe file structure? I am not really sure. | Private Python API key when using Angular for frontend | 0 | 0 | 0 | 82 |
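One concrete shape of the per-user approach: the server keeps the real API key to itself and hands each authenticated user a short-lived signed token instead (a sketch; the names and token format are made up for illustration):

```python
import hashlib
import hmac
import time

SERVER_SECRET = b"keep-this-on-the-server-only"  # never shipped to the browser

def issue_token(user_id: str, now: float) -> str:
    """Mint a token binding the user id to an issue timestamp."""
    msg = f"{user_id}:{int(now)}".encode()
    sig = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user_id}:{int(now)}:{sig}"

def verify_token(token: str, now: float, max_age: int = 3600) -> bool:
    """Reject tampered or expired tokens."""
    user_id, ts, sig = token.rsplit(":", 2)
    msg = f"{user_id}:{ts}".encode()
    good = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, good) and now - int(ts) <= max_age

tok = issue_token("alice", time.time())
print(verify_token(tok, time.time()))        # → True
print(verify_token(tok + "0", time.time()))  # → False
```

This doesn't make the backend unbreakable either, but revoking a user account (or letting tokens expire) limits the damage compared to shipping one shared key to every browser.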
21,685,980 | 2014-02-10T19:18:00.000 | 2 | 0 | 1 | 0 | python,macos,numpy | 21,687,176 | 3 | true | 0 | 0 | Using the built-in python for OS X is not recommended and will likely cause more headaches in the future (assuming it's not behind your current problems).
Assuming your python is fine, there's still the issue of getting numpy working. In my experience, installing numpy with pip will often run into problems.
In addition to CT Zhu's advice, if you just want numpy and python, the Enthought distribution is quite good and free for students.
Also getting Homebrew working is a good idea and, because it's quite well supported, is not hard. With homebrew, installing numpy is as easy as brew install numpy -- and it makes installing other packages that also often don't install right with pip (sklearn, scipy, etc) easy too. | 1 | 1 | 1 | I'm using the built in python version in OSX, I also installed pip by sudo easy_install pip and secondly I installed numpy by sudo pip install numpy.
However, when I run any python file which uses numpy I get an error message like:
Import error: No module named numpy
As if numpy isn't installed on the system. When I ran locate numpy, most of the output indicated that numpy is installed at: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/Python/numpy
How can I get it to work? | How to install numpy in OSX properly? | 1.2 | 0 | 0 | 273 |
21,687,216 | 2014-02-10T20:26:00.000 | 0 | 0 | 0 | 0 | python,tweepy | 22,314,409 | 1 | false | 1 | 0 | Twitter is a Python library, so there's really no way to use it directly on a website. You could send the data from the stream to a browser using WebSockets or AJAX long polling, however.
There's no way to have the Userstream endpoint send tweets to you any faster - all of that happens on Twitter's end. I'm not sure why Tweetdeck would receive them faster, but there's nothing than can be done except to contact Twitter, if you're getting a slow speed. | 1 | 0 | 0 | I'm creating an app using python/tweepy
I'm able to use the StreamListener to get real time "timeline" when indicating a "topic" or #, $, etc.
Is there a way to have a real-time timeline function, similar to TweetDeck or an embedded widget for a website, for the user ID, running non-stop?
When using api.user_timeline I receive only the 20 most recent tweets.
Any thoughts? | Real-time timeline function like tweetdeck? | 0 | 0 | 0 | 424 |
21,687,516 | 2014-02-10T20:45:00.000 | 4 | 0 | 1 | 0 | python,eval | 21,687,565 | 2 | false | 0 | 0 | eval, like the documentation says, evaluates the parameter as if it were python code. It can be anything that is a valid python expression. It can be a function, a class, a value, a loop, something malicious...
Rule of thumb: Unless there is no other choice, don't use it. If there is no other choice, don't use it anyway. | 1 | 0 | 0 | what does x=eval(input("hello")) mean, doesn't it suppose to be instead of eval() something like int? I thought of x as a variable that belong to some class that determine its type, does eval include all known classes like int float complex...? | what is the meaning of eval() in x=eval(input("hello"))? | 0.379949 | 0 | 0 | 1,225 |
21,687,659 | 2014-02-10T20:53:00.000 | 0 | 1 | 0 | 0 | ipython-notebook | 21,767,885 | 1 | true | 0 | 0 | OK, well I ended up eventually finding it. So, just in case anybody else should look here. You can pull down a menu under your user name. One of the options is "settings." Within that there is an item called "sharing." When you click on that you get a list of your currently shared bundles with an option to delete them. | 1 | 0 | 0 | (1) How do you "un-share" a bundle? (I know, this must be in the documentation. But I really can't find it. Sorry!)
(2) Is there any kind of user mailing list for Wakari, where questions like the above would be better targeted? | Two questions about Wakari.io | 1.2 | 0 | 0 | 235 |
21,689,654 | 2014-02-10T22:56:00.000 | 0 | 1 | 0 | 0 | python,c++,ip,web-crawler,fetch | 21,689,802 | 1 | false | 0 | 0 | You can do that with http, urllib or urllib2.
You have to look for "src=" (images, flash etc.) and for "href=" (hyperlinks).
Why do you think it's not correct? | 1 | 1 | 0 | I have a bunch of websites, I want to find all the IP addresses that they communicate with while we browse them. For example once we browse Yahoo.com, it contacts several destinations until it is getting loaded.
Is there any library in the C++ or Python that can help me?
One way that I'm thinking about is to get the HTML file of the website and look for the format "src = http://", but it is not quite correct. | How to get all IP addresses fetched | 0 | 0 | 1 | 80 |
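The src=/href= idea can be done more robustly with a real parser than with string searching; a Python 3 stdlib sketch that collects the hosts a page references (each host could then be resolved with socket.gethostbyname to get its IP):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class RefCollector(HTMLParser):
    """Collect every host referenced via src= or href= attributes."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                if host:
                    self.hosts.add(host)

page = '<img src="http://img.example.com/a.png"><a href="https://cdn.example.net/x">x</a>'
collector = RefCollector()
collector.feed(page)
print(sorted(collector.hosts))  # → ['cdn.example.net', 'img.example.com']
```

Note this only finds hosts referenced in the HTML; resources loaded by JavaScript at runtime would need a real browser or a packet capture to observe.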
21,699,365 | 2014-02-11T10:34:00.000 | 0 | 0 | 1 | 0 | python,windows-installer,ptvs,visual-studio-2010 | 21,701,852 | 2 | false | 0 | 0 | You didn't develop the PTVS MSI so therefore this isn't a development question, it's a user question. You should file a bug with the project and get them to look at it.
That said, I decompiled the MSI and it is looking for a registry value:
VSINSTALLPATH = RegLocator(HKLM\Software\Microsoft\VisualStudio\10.0\@InstallDir)
It's looking for it in the 32-bit, not the 64-bit hive, so if you are on a 64-bit OS check under HKLM\SOFTWARE\Wow6432Node\Microsoft......
Later the property VSINSTALLPATH is used in a launch condition to block installation if the property doesn't have a value. | 1 | 1 | 0 | I want to install PTVS for Visual Studio 10. Every time I run msi package with name "PTVS 2.0 VS 2010" it shows me an error that I have to install VS2010 first but I've already had VS2010 express.
I checked some solutions on the internet but they didn't work for me; for example, I added InstallDir to my registry but I am still getting that error. | PTVS Doesn't detect installed VS2010 | 0 | 0 | 0 | 658
21,699,483 | 2014-02-11T10:39:00.000 | -1 | 0 | 1 | 0 | python,qt,pyqt | 21,699,872 | 3 | true | 0 | 1 | No need for further install.
e.g. Py2exe will copy everything needed.
If you have some special requirement you may have to copy other stuff manually. In my application I copy some extra ddls, some ico files and some matplotlib files. | 2 | 1 | 0 | I am new to Qt and developing with python.
Would a python application developed using Qt framework and PyQt require the entire Qt framework to be installed on a user's machine in order to run a "exe" version of the application created with something like p2exe? Or would py2exe copy the required Qt framework components into the application that it creates? | Do Frozen PyQt applications need the Qt to be installed on client machines | 1.2 | 0 | 0 | 477 |
21,699,483 | 2014-02-11T10:39:00.000 | 1 | 0 | 1 | 0 | python,qt,pyqt | 21,699,917 | 3 | false | 0 | 1 | I don't know what you mean by "frozen" but if your question is whether you can create an "exe" for a pyqt python script without installing python and pyqt on user machine then answer is yes. As with any other exe you don't need to install anything on user machine.
I have created a few application using pyqt and converted them to exe using pyinstaller-2.0 and it works fine on any machine. Same is true with py2exe. | 2 | 1 | 0 | I am new to Qt and developing with python.
Would a python application developed using Qt framework and PyQt require the entire Qt framework to be installed on a user's machine in order to run a "exe" version of the application created with something like p2exe? Or would py2exe copy the required Qt framework components into the application that it creates? | Do Frozen PyQt applications need the Qt to be installed on client machines | 0.066568 | 0 | 0 | 477 |
21,700,792 | 2014-02-11T11:39:00.000 | 2 | 0 | 0 | 0 | python-2.7,heroku-postgres | 21,710,704 | 1 | true | 1 | 0 | $ heroku pg:info --app yourapp | 1 | 2 | 0 | I have a python / django app on heroku. how do i find the postgres version running on my app? | How do I find the postgres version running on my heroku app? | 1.2 | 0 | 0 | 37 |
21,703,829 | 2014-02-11T13:55:00.000 | 0 | 0 | 0 | 0 | python,http,tornado | 47,112,953 | 3 | false | 0 | 0 | I faced similar issue & problem was with configurable-http-proxy so I killed its process & restarted jupyterhub
ps aux | grep configurable-http-proxy
if there are any pid's from above command, kill them with
kill -9 <PID>
and restart `` | 1 | 1 | 0 | I am trying to make a HTTP Request to a JSON API like https://api.github.com using tornado.httpclient and I found that it always responses with FORBIDDEN 403.
Simplifying, I make a request using the CLI with:
$ python -m tornado.httpclient https://api.github.com
getting a tornado.httpclient.HTTPError: HTTP 403: Forbidden.
On the other hand, if I try to request this URL via a browser or a simple $ curl https://api.github.com, the response is 200 OK with the proper JSON file.
What is causing this? Should I set some specific Headers on the tornado.httpclient request? What's the difference with a curl request? | Simple not-authorized request to a Github API using Tornado httpclient returns Forbidden | 0 | 0 | 1 | 756 |
21,704,977 | 2014-02-11T14:44:00.000 | 0 | 0 | 0 | 0 | python,ajax | 21,705,056 | 2 | false | 1 | 0 | you should use Django framework for your app. You can integrate your scripts into Django views. And you can also use the loaddata system in order to insert your yaml data into the database | 1 | 1 | 0 | Currently I have written a python script that extracts data from flickr site, and dump as a python class object and YAML file.
I was planning to turn this into a website:
and a simple html front page that can send a request to the backend, triggers the python scripts running on the backend
the response will be parsed and render as a table on the html page.
As i am very new to python, I am not sure how to plan my project. is there any suggestions for building this application? any framework or technologies i should use? For example, i guess i should use ajax in the front end, how about the backend?
Thanks in advance for your suggestions! | Python backend and html/ajax front end, need suggestions for my application | 0 | 0 | 1 | 1,253 |
21,704,996 | 2014-02-11T14:45:00.000 | 0 | 0 | 0 | 0 | python,django,post,dhtmlx | 21,718,657 | 1 | false | 1 | 0 | It seems like you should build the queue in django. If the rows need to be processed serially on the backend, then insert the change data into a queue and process the queue like an event handler.
You could build a send queue using dhtmlx's event handlers and the ajax callback handler, yet why? The network is already slow, slowing it down further is the wrong approach. | 1 | 0 | 0 | I have small problem with the nature of the data processing and django.
for starters. I have webpage with advanced dhtmlx table. While adding rows to table DHTMLX automatically send POST data to mine django backend where this is processed and return XML data is sent to webpage. All of it works just fine when adding 1 row at a time. But when adding several rows at a time, some problem starts to occur. For starters, I have checked the order of send data to backend and its proper (let say Rows ID 1,2,3,4 are sent in that order). Problem is that backend processes the query when it arrives, usually they arrives in the same order (even though the randomness of the Internet). But django fires the same function for them instantly and it's complex functions that takes some time to compute, then sends the response. Problem is that every time function is called there is a change in the database and one of the variables depends on how big is a database table we are altering. While having the same data table altered in wrong order (different threads speed) the result data is rubbish.
Is there any automatic solution to queue calls of one web called function so that every call could go to the queue and wait for previous to complete ??
I want to make such a queue for this function only. | Django queue function calls | 0 | 0 | 0 | 297 |
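A minimal, framework-agnostic illustration of the serialise-through-a-queue idea the answer suggests (in Django this worker would live outside the request cycle, e.g. in a background thread or a task runner):

```python
import queue
import threading

task_queue = queue.Queue()
results = []

def worker():
    """Drain the queue one item at a time, strictly in arrival order."""
    while True:
        item = task_queue.get()
        if item is None:  # sentinel: shut down
            break
        results.append(item * 10)  # stand-in for the slow per-row DB work
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for row_id in [1, 2, 3, 4]:  # requests may arrive concurrently...
    task_queue.put(row_id)
task_queue.join()            # ...but are processed serially, in order
task_queue.put(None)
t.join()
print(results)  # → [10, 20, 30, 40]
```

The single worker guarantees that row 2 is never processed before row 1 finishes, regardless of how many request threads enqueue work at once.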
21,708,441 | 2014-02-11T17:11:00.000 | 4 | 0 | 0 | 0 | python,twisted | 21,709,158 | 1 | true | 0 | 1 | You can run the tac file in the foreground: twistd -n -y service.tac.
So perhaps you can just delete the run.py file. | 1 | 3 | 0 | I have a Twisted application that someone else wrote. There is a file run.py which makes it run in the foreground. There is also a twistd plugin named service.tac that makes it run in the background. Around 90% of the code is the same in both the .py and .tac files.
Is it possible to combine the two together? Or is that a bad idea? | Right approach for running a Twisted application in the foreground or background | 1.2 | 0 | 0 | 441 |
21,708,859 | 2014-02-11T17:30:00.000 | 0 | 0 | 1 | 0 | python,regex | 21,709,113 | 3 | false | 0 | 0 | You can also try like below,
a = 'your-string'
result = re.findall('(mon|tues|wed|thurs|fri|sat|sun)day', a)
if result: _day = result[0] + 'day' | 1 | 0 | 0 | I seem to be having a problem finding the correct regex for weekdays in Python. I have tried this:
/(mon|tues|wednes|thurs|fri|satur|sun)day/
The problem is that this regex accepts if I just have "mon" in a text, but I only want it to accept if I have "monday". How do I fix this? I can't seem to understand how to do this. | Regex for weekdays in python | 0 | 0 | 0 | 4,421 |
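One way to make the match strict is to anchor the alternation to the whole word with boundaries, keeping the question's stems:

```python
import re

# \b stops a bare "mon", or "monday" buried inside a longer word, from
# matching; the non-capturing group makes findall return full matches.
DAY_RE = re.compile(r'\b(?:mon|tues|wednes|thurs|fri|satur|sun)day\b')

print(bool(DAY_RE.search("see you monday")))  # → True
print(bool(DAY_RE.search("mon")))             # → False
print(DAY_RE.findall("monday and saturday"))  # → ['monday', 'saturday']
```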