Schema (column: dtype, min to max; string columns show min to max length):

Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 (fixed)
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
33,710,170
2015-11-14T15:45:00.000
0
0
1
0
python,python-3.x
33,710,200
3
false
0
0
In Python 2 there is basically no difference. In Python 3 the first one is a string of bytes (a bytes literal), and the second one is a normal string.
1
2
0
I'm a Python newbie and I'm a little bit confused about the difference between b'' and ''. I think they are both empty, but b'' == '' returns False. Why? Can somebody explain this to me in terms of memory? Are they the same in terms of content in memory but different in terms of type, which results in the inequality?
What is the difference between b'' and '' in python?
0
0
0
1,445
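A minimal sketch (Python 3) of the behaviour the answer above describes:

```python
# Python 3: bytes and str are distinct types, so the comparison is False.
print(b'' == '')                   # False
print(type(b''), type(''))         # <class 'bytes'> <class 'str'>
print(b''.decode('utf-8') == '')   # True once the bytes are decoded to str
```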
33,711,927
2015-11-14T18:39:00.000
0
0
0
0
python,django
33,711,994
2
false
1
0
Try also adding an __init__.py file to that folder. It can be a blank file.
1
0
0
I'm trying to put a file to import under the structure app/scripts/file.py. I then want to call it, similar to how I would anything else, by doing in my views.py: from app.scripts.file import *. Doing so gives the following error: No module named app.scripts.file. If I put file.py directly into the app folder there's no issue: from app.file import *.
No module named app.scripts.file
0
0
0
559
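A sketch of the layout the answer suggests, using the file names from the question (the __init__.py files may be empty):

```
app/
    __init__.py
    views.py
    scripts/
        __init__.py   # blank file; makes scripts/ an importable package
        file.py
```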
33,712,269
2015-11-14T19:13:00.000
0
0
0
0
python,oauth,live-sdk
34,280,063
1
true
0
0
Should someone find themselves in a similar situation, the fix was to add the parameter "verify=False" when calling requests.post.
1
0
0
I have a script that is using the Live Connect REST APIs to refresh an OAuth 2.0 access token. The script has been working without problems for a couple of years, but recently broke with an apparent change in Live Connect API URLs. Originally, I used these URLs to perform OAuth authentication: _https://login.live.com/oauth20_authorize.srf _https://login.live.com/oauth20_token.srf Yesterday, when attempting to run the script I received the error: hostname 'login.live.com' doesn't match u'api.login.live.com' So, I changed the url to "api.login.live.com" but then received a 404 during the request as _https://api.login.live.com/oauth20_token.srf doesn't seem to exist. Interestingly, _https://login.live.com/oauth20_token.srf does yield the expected result when accessed via the browser. Any ideas on what might be going on? Potentially interesting data: Browser is Chrome running on Windows 10 Script is written in Python 2.7 using the requests 1.0.4 package (Note that my reputation doesn't allow for more than 2 links, thus the funky decoration).
Live Connect: Unable to refresh OAuth 2.0 token due to SSL and 404 Errors
1.2
0
1
148
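A hedged sketch of the fix described above; the URL and payload fields follow the question, and the refresh-token value is a placeholder:

```python
import requests

# verify=False disables TLS certificate verification -- insecure, and only
# acceptable as a temporary workaround like the one the answer describes.
resp = requests.post(
    "https://login.live.com/oauth20_token.srf",
    data={"grant_type": "refresh_token", "refresh_token": "..."},
    verify=False,
)
print(resp.status_code)
```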
33,712,642
2015-11-14T19:52:00.000
4
0
1
0
python,pypy
33,752,634
2
false
1
0
PyPy's GC does not stop the world; it is an incremental garbage collector.
1
2
0
I'm a Java developer so I sometimes need to optimize JVM arguments to improve GC performance(for example, reduce the time of STW). Recently I tried to introduce Python to my new web project, and I decided to use PyPy as Python interpreter. My question is how does PyPy's garbage collector work? Does it also need to stop the world? I've done some search but there are not so many docs about PyPy's GC mechanism.
Does PyPy's garbage collector need to stop the world?
0.379949
0
0
735
33,712,729
2015-11-14T20:01:00.000
1
0
1
1
python,windows,python-2.7,windows-10
40,187,053
2
false
0
0
I had the same problem. I used Advanced System Optimizer to clean the registry and repair Python, then uninstalled it, and it worked for me.
2
1
0
For some reason I messed up my install in python a while ago and I recently tried to repair the install but I am getting an error saying: "The specified account already exists." I then decided to rerun the install package and instead of repairing it decided to delete python so I clicked uninstall and got the error message saying: "There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support personnel or package vendor." The only package I installed (if it is a package) was VPython and for some reason that does not open whenever I try opening it so I assumed I messed up the download for that also. I decided to go ahead and delete everything in my C directory that had the keyword Python including the Python27 folder but it still gave me the same error.
Cannot uninstall python 2.7.10 from windows 10
0.099668
0
0
1,385
33,712,729
2015-11-14T20:01:00.000
0
0
1
1
python,windows,python-2.7,windows-10
57,189,000
2
false
0
0
I can confirm that this works. Use CCleaner to fix the registry, then use the installer to "Repair" the 2.7.10 installation, then use the installer to "Remove" the installation.
2
1
0
For some reason I messed up my install in python a while ago and I recently tried to repair the install but I am getting an error saying: "The specified account already exists." I then decided to rerun the install package and instead of repairing it decided to delete python so I clicked uninstall and got the error message saying: "There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support personnel or package vendor." The only package I installed (if it is a package) was VPython and for some reason that does not open whenever I try opening it so I assumed I messed up the download for that also. I decided to go ahead and delete everything in my C directory that had the keyword Python including the Python27 folder but it still gave me the same error.
Cannot uninstall python 2.7.10 from windows 10
0
0
0
1,385
33,713,472
2015-11-14T21:17:00.000
1
0
1
0
python,arrays,numpy
33,713,612
2
false
0
0
The strength of NumPy arrays is that many low-level operations can be quickly performed on the data because most (not all) types used by these arrays have a fixed size in memory. For instance, the floats you are using probably require 8 bytes each. The most important thing in that case is that all data share the same type and fit in the same amount of memory. You can work around that a little if you really want (and need) to, but I would not suggest starting with such special cases. Try to learn the strength of these arrays when used with this requirement (but this involves accepting the fact that you can't mix integers and floats in the same array).
1
2
1
I've written a script that gives me the result of dividing two variables ("A" and "B") -- and the output of each variable is a numpy array with 26 elements. Usually, with any two elements from "A" and "B," the result of the operation is a float, and the element in the output array that corresponds to that operation shows up as a float. But strangely, even if the output is supposed to be an integer (almost always 0 or 1), the integer will show up as "0." or "1." in the output array. Is there any way to turn these specific elements of the array back into integers, rather than keep them as floats? I'd like to write a simple if statement that will convert any output elements that are supposed to be integers back into integers (i.e., make "0." into "0"). But I'm having some trouble with that. Any ideas?
How to convert specific elements within a numpy array to integers?
0.099668
0
0
82
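A small sketch of the fixed-size point the answer makes, plus a whole-array cast (not a per-element one):

```python
import numpy as np

a = np.array([0.0, 1.0, 2.5])
print(a.dtype, a.itemsize)   # float64 8 -- every element costs 8 bytes
print(a.astype(int))         # [0 1 2]  -- the cast applies to the whole array
```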
33,713,481
2015-11-14T21:18:00.000
1
1
0
0
python,django
33,714,252
1
false
1
0
Answer from @limelights: create a bash alias or run them in sequence. I've adapted that answer to this line of code (for bash, all on one line): alias runserver="sudo python ~/testsite/manage.py test articles; sudo python ~/testsite/manage.py runserver 192.168.1.245:90". Using runserver runs the test suite and opens the server. An added perk is that I can run it from any location without having to go into the ~/testsite directory.
1
2
0
How can I configure my Django server to run tests from tests.py when starting the server with python manage.py runserver? Right now, I have to run tests through python manage.py test articles. (Note: I am using Django 1.8)
django - Run tests upon starting server
0.197375
0
0
189
33,713,643
2015-11-14T21:34:00.000
1
0
1
0
python,binary-search
33,713,717
3
false
0
0
If you want the index from the unsorted list and you have to use binary search, try the following steps: assign an index to each item in the unsorted list; sort the list; run the binary search; return the index that is associated with the found item. Binary search only works on a sorted list, so there is no way around sorting somewhere in the process if you need to use that search algorithm.
1
2
0
I need to use a binary search on a list of numbers and have it return the index of the number. How do I do this when the list is unsorted? I need to return the index of the unsorted list, not the sorted list.
Binary Search of an unsorted list
0.066568
0
0
2,793
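A minimal sketch of the steps above: pair each value with its original index, sort by value, binary-search, and return the stored index:

```python
def search_unsorted(items, target):
    # (original_index, value) pairs, sorted by value
    indexed = sorted(enumerate(items), key=lambda p: p[1])
    lo, hi = 0, len(indexed) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if indexed[mid][1] == target:
            return indexed[mid][0]      # index into the unsorted list
        elif indexed[mid][1] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(search_unsorted([30, 10, 20], 20))  # 2
```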
33,715,344
2015-11-15T01:18:00.000
4
0
1
0
python,performance,python-3.x,logging
33,716,747
2
true
0
0
Depending on your logger configuration and the amount of logs your program produces, yes, logging can be a performance bottleneck because of the blocking logger operation -- for example, when logging directly to a file on an NFS server with slow response times. One possible approach to improve performance in such a case would be switching to a log server able to buffer and possibly batch logging operations: the blocking would be limited to the communication with the log server, not to the (slow) logfile access, which is often better from a performance perspective.
1
3
0
I'm using the Python logger in one of my programs. The program is a solver for an NP-hard problem and therefore uses deep iterations that run several times. My question is whether the logger can be an issue for the performance of my program, and whether there are better ways to log information while maintaining performance.
Python Logging vs performance
1.2
0
0
3,045
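The answer suggests buffering and batching behind a log server; a related stdlib approach (an alternative sketch, not the answer's exact setup) decouples the caller from slow I/O with a queue:

```python
import logging
import logging.handlers
import queue

q = queue.Queue(-1)
root = logging.getLogger()
root.addHandler(logging.handlers.QueueHandler(q))   # caller never blocks
listener = logging.handlers.QueueListener(q, logging.FileHandler("app.log"))
listener.start()

root.warning("written to app.log by the listener thread")
listener.stop()
```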
33,716,330
2015-11-15T04:13:00.000
4
0
1
0
python,exception,error-handling
33,716,385
2
true
0
0
Generally you use try/except when you handle things that are outside of the parameters that you can influence. Within your script you can check variables for type, lists for length, etc., and you can be sure that the result will be sufficient since you are the only one handling these objects. As soon as you handle files in the file system or connect to remote hosts, however, you can neither influence nor check all parameters anymore, nor can you be sure that the result of the check stays valid. As you said: the file might exist but you don't have access rights; you might be able to ping a host address but the connection is declined. There are too many factors that could go wrong to check them all separately, and even if you do, they might still change before you actually perform your command. With try/except you can generally catch every exception and handle the most important errors individually. You make sure that the error is handled even if the test succeeds at first but fails after you start running your commands.
2
4
0
E.g. if I am trying to open a file, can I not simply check os.path.exists(myfile) instead of using try/except? I think the answer to why I should not rely on os.path.exists(myfile) is that there may be a number of other reasons why the file may not open. Is that the logic behind why error handling using try/except should be used? Is there a general guideline on when to use exceptions in Python?
When Should I Use a Try-Except statement in Python?
1.2
0
0
687
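A minimal EAFP sketch of the point above (the file name is a hypothetical placeholder):

```python
path = "maybe_missing.txt"
try:
    # Attempt the operation and handle failure, instead of pre-checking
    # conditions that can change underneath you.
    with open(path) as f:
        data = f.read()
except OSError as exc:   # missing file, permission denied, etc.
    print(f"could not read {path}: {exc}")
```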
33,716,330
2015-11-15T04:13:00.000
5
0
1
0
python,exception,error-handling
33,716,343
2
false
0
0
Race conditions. In the time between checking whether a file exists and doing an operation, that file might have been deleted, edited, renamed, etc. On top of that, an exception will give you an OS error code that provides a more relevant reason why the operation failed. Finally, it's considered Pythonic to ask for forgiveness rather than permission.
2
4
0
E.g. if I am trying to open a file, can I not simply check os.path.exists(myfile) instead of using try/except? I think the answer to why I should not rely on os.path.exists(myfile) is that there may be a number of other reasons why the file may not open. Is that the logic behind why error handling using try/except should be used? Is there a general guideline on when to use exceptions in Python?
When Should I Use a Try-Except statement in Python?
0.462117
0
0
687
33,721,893
2015-11-15T16:17:00.000
0
1
0
0
python,robotframework
33,723,660
1
true
1
0
You have two choices for importing: importing a library via PYTHONPATH, or importing a library based on the file path to the library. In the first case you can import each class separately. In the second case, it's not possible to import multiple classes from a single file. If you give a path to a Python file, that file must contain keywords. It can also include classes, but Robot won't know about those classes.
1
0
0
I have a custom library that is in a different location from the test suite. Meaning the test suite is in "C:/Robot/Test/test_suite.txt" and my library is in "C:/Robot/Lib/library.py". The library has 2 different classes and I need to import both of them. I have tried to import it by "Library | ../Lib/library.py" but I got an error saying that the library contains no keywords. I also tried to import it by "Library | ../Lib/library.Class1" but got a syntax error. Is there any way to do it without changing the PYTHONPATH? Thank you!
Robot Framework - Import library with 2 classes from different location
1.2
0
0
1,092
33,722,132
2015-11-15T16:40:00.000
-2
0
0
0
python,mongodb,object,flask-sqlalchemy,flask-admin
33,724,438
2
false
1
0
Flask-Admin doesn't store anything. It's just a window into the underlying storage. So yes, you can have blob fields in a Flask-Admin app -- as long as the engine of your database supports blob types. In case further explanation is needed, Flask-Admin is not a database; it is an interface to a database. In a Flask-Admin app, you connect to a pre-existing database. This might be an SQLite database, PostgreSQL, MySQL, MongoDB, or any of a variety of databases.
1
2
0
can I store PDF files in the database, as object or blob, with Flask-Admin? I do not find any reference in the documentation. Thanks. Cheers
Storing a PDF file in DB with Flask-admin
-0.197375
1
0
3,809
33,724,228
2015-11-15T19:50:00.000
5
0
1
0
python,windows,python-2.7,python-3.x,pip
34,830,230
3
false
0
0
I had the same problem (pip and virtualenv hung). As 3bek suggested here, it was indeed Avast's fault. To verify this you can disable Avast for a few minutes and try pip again. In order to teach Avast to respect these programs, here's what I did: open the Avast GUI, go to Settings > General > Exclusions, and add the global pip.exe to the file paths. For me it was c:\Python34\Scripts\pip.exe. Now run this global pip in the command line (that is, not under any virtualenv). This should be OK (at least for me it was, after Avast checked the exe). After this I could run all the other pip.exe files that are part of my different virtualenvs.
3
8
0
I have Python 2.7.10 installed with pip on Windows 7. When I try to install a package, or even just run pip in cmd with no options, it hangs and prints nothing, and even Ctrl+C does not work, so I have to close cmd. Task Manager shows 3 running pip.exe *32 processes, and when I close cmd I can kill one of them. The other 2 are removed only after restarting Windows. The same thing happens with Python 3.5. I tried to reinstall pip and Python; neither was helpful. pip-7.1.2. Update 1: figured out that I have the same problem with virtualenv.
Pip hangs in Windows 7
0.321513
0
0
4,013
33,724,228
2015-11-15T19:50:00.000
1
0
1
0
python,windows,python-2.7,python-3.x,pip
34,284,574
3
true
0
0
Try py -2 -m pip instead of pip
3
8
0
I have Python 2.7.10 installed with pip on Windows 7. When I try to install a package, or even just run pip in cmd with no options, it hangs and prints nothing, and even Ctrl+C does not work, so I have to close cmd. Task Manager shows 3 running pip.exe *32 processes, and when I close cmd I can kill one of them. The other 2 are removed only after restarting Windows. The same thing happens with Python 3.5. I tried to reinstall pip and Python; neither was helpful. pip-7.1.2. Update 1: figured out that I have the same problem with virtualenv.
Pip hangs in Windows 7
1.2
0
0
4,013
33,724,228
2015-11-15T19:50:00.000
8
0
1
0
python,windows,python-2.7,python-3.x,pip
34,800,120
3
false
0
0
I had exactly the same problem. The reason, in my case, was my antivirus program Avast: it blocked pip. As soon as I deactivated it, pip worked. I now need to find a way to tell Avast to stop blocking pip.
3
8
0
I have Python 2.7.10 installed with pip on Windows 7. When I try to install a package, or even just run pip in cmd with no options, it hangs and prints nothing, and even Ctrl+C does not work, so I have to close cmd. Task Manager shows 3 running pip.exe *32 processes, and when I close cmd I can kill one of them. The other 2 are removed only after restarting Windows. The same thing happens with Python 3.5. I tried to reinstall pip and Python; neither was helpful. pip-7.1.2. Update 1: figured out that I have the same problem with virtualenv.
Pip hangs in Windows 7
1
0
0
4,013
33,727,053
2015-11-16T01:01:00.000
2
0
0
1
python,linux,runtime-error
33,727,291
1
true
0
0
You specified --sge which is used to schedule jobs on Sun Grid Engine. Since you want to run on your local machine instead of SGE, you should remove this flag.
1
1
0
I am running shellfish.py on my local machine. Can someone please explain why I am getting this error: sh: qsub: command not found
Linux error: sh: qsub: command not found
1.2
0
0
3,904
33,729,454
2015-11-16T06:16:00.000
6
1
0
0
python,git,github
33,729,510
5
true
0
0
Split them out into a configuration file that you don't include, or replace them with placeholders and don't commit the actual values, using git add -p. The first option is better. The configuration file could be a basic .py file, credentials.py, in which you define the needed private credentials in whatever structure you consider best (a dictionary would probably be the most suitable). You can use the sensitive information by importing the structure from this file and accessing its contents. Other users of the code you have created should be advised to do the same. The hiding of this content is then performed with your .gitignore file: in it, you simply add the filename in order to exclude it from being uploaded to your repository.
3
0
0
I'm developing a Python script but I need to include my public and secret Twitter API key for it to work. I'd like to make my project public but keep the sensitive information secret using Git and GitHub. Though I highly doubt this is possible, is there any way to block out that data in a GitHub public repo?
GitHub public repo with sensitive information?
1.2
0
1
1,206
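A minimal sketch of the credentials.py approach above; the key names are illustrative placeholders, not values from the answer:

```python
# credentials.py -- add the line "credentials.py" to .gitignore so this
# file is never committed.
TWITTER_KEYS = {
    "consumer_key": "REPLACE_ME",
    "consumer_secret": "REPLACE_ME",
}

# elsewhere in the project:
#   from credentials import TWITTER_KEYS
#   key = TWITTER_KEYS["consumer_key"]
```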
33,729,454
2015-11-16T06:16:00.000
1
1
0
0
python,git,github
33,729,574
5
false
0
0
The Twitter API keys are usually held in a JSON file, so when you're uploading your repository you can modify the .gitignore file to hide the .json files. What this does is prevent those files from being uploaded to the git repository. Your other option is obviously going for a private repository, which is not the solution in this case.
3
0
0
I'm developing a Python script but I need to include my public and secret Twitter API key for it to work. I'd like to make my project public but keep the sensitive information secret using Git and GitHub. Though I highly doubt this is possible, is there any way to block out that data in a GitHub public repo?
GitHub public repo with sensitive information?
0.039979
0
1
1,206
33,729,454
2015-11-16T06:16:00.000
4
1
0
0
python,git,github
33,729,502
5
false
0
0
No. Instead, load the secret information from a file and add that file to .gitignore so that it will not be a part of the repository.
3
0
0
I'm developing a Python script but I need to include my public and secret Twitter API key for it to work. I'd like to make my project public but keep the sensitive information secret using Git and GitHub. Though I highly doubt this is possible, is there any way to block out that data in a GitHub public repo?
GitHub public repo with sensitive information?
0.158649
0
1
1,206
33,738,728
2015-11-16T15:20:00.000
1
0
1
0
python,python-2.7,python-3.x,pycharm
33,738,846
1
true
0
0
I always find the answer right after asking the question... ^_^ In File > Settings > Editor > Inspections under "Python" there's a "Code compatibility inspection" that appears to do what I want. You can check off which versions of Python you want to support.
1
1
0
I'd like to work on a project that's intended to support both Python 2 and Python 3. I'm currently using a Python 2 interpreter as the default interpreter, but I'd like to get syntax highlighting on issues that would break the code for running in Python 3. Is this possible?
PyCharm py3 syntax issues when using py2 interpreter
1.2
0
0
132
33,742,716
2015-11-16T19:07:00.000
0
0
0
0
python,django,apache,distributed
33,744,800
1
true
1
0
There should be nothing stopping you from installing a copy of Apache on your workstation and using it for developing, and since you're working on something that depends on some of that functionality it makes perfect sense for you to use that for your development server instead of ./manage.py runserver. Most people use Djangos built-in server because they don't need more than that for what they're trying to do - it sounds like your solution does. Heck, since you're testing distributed you may even want to consider grabbing a virtualization tool (qemu, virtualbox, et al) so you can have a faux-distributed setup to work with (I'd suggest doing a bit of scripting to make it easy to deploy / restart them all at once though - it'll save you from having to track down issues where the code that's running is older than you thought it was). Your development environment can be what you need it to be for what you're doing.
1
0
0
I'm a newbie in django and have a project that involves distributed remote storage, and I'm advised to use mod x-sendfile as part of the project procedure. I have a django app that receives a file and transforms it into N segments, each to be stored on a distinct server; those servers have a django app receiving and storing the segments. But since mod x-sendfile needs Apache to work, and I am just at the developing and trying stage, this question occurred to me. I googled a lot but found nothing in that regard. So my question is: is it possible to use Apache as the django web server during the development of django apps? Does it make sense in development mode to replace the django built-in web server with Apache?
does it make sense to use apache as web server for django in development mode
1.2
0
0
176
33,744,086
2015-11-16T20:33:00.000
1
0
1
0
python,pip,customization,updates
33,744,155
1
false
0
0
The upgrade will probably not work properly; if it does, then it will just overwrite your changes. Don't do it like this. If you need to make custom changes, fork the library itself - they're mostly on github these days - and install it directly from your fork with pip install -e.
1
0
0
I made some changes to code that came from a package installed via pip. What will happen to those changes when I update the package? Will the changes be erased? Will the upgrade work properly?
What happens when you update a pip package with custom changes to the code
0.197375
0
0
53
33,745,389
2015-11-16T21:57:00.000
4
0
1
1
python-2.7,windows-7,swig,pocketsphinx
67,705,323
2
false
0
0
You can use pipwin to install it without any issues. First install pipwin (run as Administrator if you hit any issues): pip install pipwin. Then install pocketsphinx using pipwin: pipwin install pocketsphinx. Note: works on Windows 10 (win32-py3.8) [tested].
1
4
0
I would like to convert graphemes to phonemes, and I want to pip install pocketsphinx to do that. One of its dependencies is swig, so I downloaded it, placed it in a directory, and added the path to swig.exe to the PATH environment variable. When I open cmd and type swig --help it seems to be working. But when I run pip install pocketsphinx, it says: error: command 'swig.exe' failed: No such file or directory.
Swig not found when installing pocketsphinx Python
0.379949
0
0
5,686
33,748,026
2015-11-17T02:15:00.000
1
0
1
0
python,python-2.7,simplecv
38,016,556
1
false
0
0
Some of the SimpleCV code needs to be updated - I believe it was written for an older version of IPython than what gets installed. Here's what you need to do: find Shell.py, which may be somewhere like C:\Python27\Lib\site-packages\SimpleCV\Shell, and open it in an editor. Around line 50, change "from IPython.config.loader import Config" to "from traitlets.config.loader import Config". Around line 51, change "from IPython.frontend.terminal.embed import InteractiveShellEmbed" to "from IPython.terminal.embed import InteractiveShellEmbed".
1
2
0
When I import Shell from SimpleCV from SimpleCV import Shell I get this error C:\Python27\lib\site-packages\IPython\config.py:13: ShimWarning: The IPython.config package has been deprecated. You should import from traitlets.config instead. "You should import from traitlets.config instead.", ShimWarning) C:\Python27\lib\site-packages\IPython\frontend.py:21: ShimWarning: The top->level frontend package has been deprecated. All its subpackages have been >moved to the top IPython level. "All its subpackages have been moved to the top IPython level.", >ShimWarning) Although on calling the Shell.main() The SimpleCV console does start, however when I close the window for img.show(), it just quits the whole python console not just SimpleCV console Don't know what is happening!
ShimWarning on importing Shell form SimpleCV
0.197375
0
0
1,083
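The two edits from the answer, shown as they would appear in Shell.py after the change:

```python
# Shell.py, around lines 50-51, after applying the answer's edits:
from traitlets.config.loader import Config
from IPython.terminal.embed import InteractiveShellEmbed
```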
33,749,918
2015-11-17T05:39:00.000
1
0
1
0
python,format,xlsxwriter
33,751,625
1
false
0
0
Is it possible to use the same formatting variable for formatting multiple Excel workbooks using xlsxwriter? No. A format object is created by, and thus tied to, a workbook object. However, there are other ways of doing what you need, such as storing the properties for the format in a dict and using that to initialize several format objects in the same way.
1
1
0
Is it possible to use same formatting variable for formatting multiple excel workbooks using xlsxwriter? If yes, how? Currently I am able to use formatting variable for single excel workbook as I am initializing it using workbook.add_format method but this variable is bounded to that workbook only.
Use same formatting variable for multiple Excel workbooks
0.197375
1
0
56
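A short sketch of the shared-dict suggestion (file names are placeholders):

```python
import xlsxwriter

props = {"bold": True, "font_color": "red"}   # shared property dict

wb1 = xlsxwriter.Workbook("one.xlsx")
fmt1 = wb1.add_format(props)   # each workbook builds its own Format object
wb2 = xlsxwriter.Workbook("two.xlsx")
fmt2 = wb2.add_format(props)   # same properties, different workbook

wb1.add_worksheet().write(0, 0, "hello", fmt1)
wb2.add_worksheet().write(0, 0, "hello", fmt2)
wb1.close()
wb2.close()
```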
33,751,881
2015-11-17T08:04:00.000
0
0
0
0
python,templates,jinja2,extends
33,842,204
2
true
1
0
So after trying a lot of things, I found that the best way to do this is to use iframes instead of the Jinja extend. This way, not only can I locate the source of the error, I don’t have to send the Python values I am using in the frames to each template that I am going to render. I only send them to the original class that creates the iframe template.
2
1
0
I’m using template inheritance in jinja2 because I have a top bar in my website that I need to include in all pages. The problem is that whenever there is an error in any page the traceback always points to the line with the {% extends %} tag and I cannot locate the source of the error. Is there a way to find out which line is causing the error (aside from reading the whole code myself) or another way to do template inheritance than {% extends %}?
Error Traceback in Jinja2 when Extending Template
1.2
0
0
400
33,751,881
2015-11-17T08:04:00.000
1
0
0
0
python,templates,jinja2,extends
33,946,134
2
false
1
0
Although iframes are more accustomed to importing webpages from different websites, this might be a good idea. You could also use the jinja tag {% include %} and then use sessions to cache the data instead of reloading them in every page.
2
1
0
I’m using template inheritance in jinja2 because I have a top bar in my website that I need to include in all pages. The problem is that whenever there is an error in any page the traceback always points to the line with the {% extends %} tag and I cannot locate the source of the error. Is there a way to find out which line is causing the error (aside from reading the whole code myself) or another way to do template inheritance than {% extends %}?
Error Traceback in Jinja2 when Extending Template
0.099668
0
0
400
33,752,419
2015-11-17T08:40:00.000
7
0
0
1
python,apscheduler
33,770,050
1
false
1
0
APScheduler does not have a way to set the maximum run time of a job. This is mostly due to the fact that the underlying concurrent.futures package that is used for the PoolExecutors do not support such a feature. A subprocess could be killed but lacking the proper API, APScheduler would have to get a specialized executor to support this, not to mention an addition to the job API that allowed for timeouts. This is something to be considered for the next major version. The question is, what do you want to do with the thread that is still running the job? Since threads cannot be forcibly terminated, the only option would be to let it run its course, but then it will still keep the thread busy.
1
5
0
I set up the scheduler with max_instances=10, so up to 10 instances of a job can run concurrently. Sometimes a job blocks and just hangs there. When more than 10 jobs were blocked, I got the exception "skipped: maximum number of running instances reached (10)". Does APScheduler have a way to set the maximum duration of a job, so that a job running beyond the limit is terminated? If it doesn't, what should I do?
How can I set limit to the duration of a job with the APScheduler?
1
0
0
1,679
33,752,729
2015-11-17T08:57:00.000
0
1
0
0
python
33,778,025
1
true
0
0
@falsetru: Thank you for the answer. Another solution is to write a script that runs both commands, i.e. pylint test.py, and if the code's rating (the output of pylint) is greater than x (let's say x = 8) it runs python test.py, else it shows the pylint errors. That is, instead of python test.py we run my_script test.py, where my_script is the script containing the above-mentioned logic.
1
0
0
Example: let's say I have a Python script test.py. When I run python test.py, pylint should execute first, and if pylint passes, it should execute test.py; otherwise it should show the pylint errors.
Can we run pylint while executing python script, such that when the pylint passes the code will execute else it will show pylint errors?
1.2
0
0
58
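A hedged sketch of such a wrapper; the threshold and the parsing of pylint's "rated at" line are assumptions:

```python
# my_script: run pylint first, execute the target only if the rating clears
# a threshold.
import subprocess
import sys

target = sys.argv[1]
result = subprocess.run(["pylint", target], capture_output=True, text=True)
print(result.stdout)

# pylint prints a line like "Your code has been rated at 9.50/10"
if "rated at " in result.stdout:
    score = float(result.stdout.split("rated at ")[1].split("/")[0])
    if score > 8.0:
        subprocess.run([sys.executable, target])
    else:
        sys.exit("pylint rating too low; fix the reported issues first")
```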
33,753,224
2015-11-17T09:24:00.000
1
0
0
0
python,c++,alias,pickle
33,754,768
1
true
0
0
Does it help to alias in another way (fast = normal) if there is no fast implementation available? Maybe this could be done only for the time of unpickling and then reversed, to avoid confusing checks in other code?
1
0
1
In a distributed computing project, we are using Pyro to pass objects over the wire between nodes; Pyro internally serializes and deserializes objects using pickle. Some classes in the project have two implementations: one pure-Python (for ease of installation, especially for Windows users), one in c++/boost::python (much faster, but requires boost plus knowledge of how to compile the extension module). Both the Python and c++ classes support pickling (in c++, that is done via boost::python). These classes have different fully-qualified names (mupif.Octree.Octant vs. mupif.fastOctant.Octant), but the latter is aliased to the former and overwrites the pure-Python definition (mupif.Octree.Octant=mupif.fastOctant.Octant), so it is transparent to the user and the fast variant is always used if available on the node. However, pickle uses __module__ and __class__ to identify the instance, thus when the c++-based object is passed over the wire to another node which does not support it, unpickling will fail. What is a solution to this? Is it acceptable to change the class's __module__, i.e. foo.fastOctant.Octant.__class__.__module__='mupif.Octree'? Can it have some side-effects I don't see yet?
Pickling/unpickling alternative (API-compatible) class implementations
1.2
0
0
109
33,753,344
2015-11-17T09:30:00.000
0
0
1
0
python,interpreter,execution
33,753,583
2
false
0
0
To directly execute the code would imply that the interpreter does not represent the interpreted program as machine code, then allow the actual machine to execute it. Instead, the interpreter carries out the instructions (with or without conversion to some kind of bytecode) itself. Technically, the machine carries out the interpreter's code, but the fact remains that no new machine code is ever generated. The contrasting approaches are statically compiled code (translation into machine code, no further interpretation is necessary) and JIT (optional translation into bytecode, translate bytecode or textual program into machine code at runtime, allow machine to execute).
1
0
0
I have read this statement that the interpreter directly executes the code. But I am not sure I understand what it means. I have been trying to get a good article on the execution cycle of python code. I understand that the python code is converted to byte code and fed to the interpreter. So what happens next? Can someone explain clearly the steps that goes into it, especially in relation between the byte code, interpreter, OS and CPU? Something along the lines of... OS loads the python interpreter in main memory CPU fetches the instruction and performs ALU. Updates the memory.. etc Edited for clarity: My basic confusion is if CPU is what is executing the code, then what is meaning of saying 'the interpreter executes the code'?
What does it mean the interpreter directly executes code?
0
0
0
219
33,754,660
2015-11-17T10:31:00.000
1
0
1
1
macos,python-2.7,virtualenv
40,489,765
2
true
0
0
I've had this problem a number of times now. While I can't say for certain what the actual issue is, I believe it basically means that some file(s) in the virtualenv installment of Python have become corrupted. I keep my virtual environment in a synced Dropbox folder, so that may be a large contributor to the issue. Restoring the virtual environment from a back-up archive worked for me. Or simply reinstall an identical virtual environment. First, try activating the faulty environment by cd <path/to/old_env> and source /bin/activate. If it's successfully activated, cd to an accessible location on the drive and run pip freeze > requirements.txt to export a list of currently installed Python modules. Delete the old environment. Install a new virtual environment of the latest version of Python 2 that you have on the computer, via virtualenv <path/new_env> Or, if you want to use a specific Python version, first make sure you have you have it on your drive, and then do virtualenv -p <path>. Assuming that you have downloaded the Python version with Homebrew, e.g.: virtualenv -p /usr/local/bin/python2.6 <path/new_env> Activate the virtual environment via cd <path/new_env> and then do source /bin/activate. Assuming that you kept a list of modules to reinstall by previously doing pip freeze > requirements.txt, cd to the folder where the text file is located and do pip install -r requirements.txt. Otherwise, reinstall the modules with pip manually.
2
6
0
I've been using Python 2.7.10 in a virtualenv environment for a couple of months. Yesterday, activating the environment went fine, but today I suddenly get this cryptic error when trying to start Python from Terminal: Illegal instruction: 4. I have made no changes to my environment (AFAIK), so I'm having a difficult time coming to terms with what this error is and what caused it. Python works fine outside of this virtualenv environment; when run via /usr/local/bin it presents no problem.
"Illegal instruction: 4" when trying to start Python with virtualenv in OS X
1.2
0
0
5,705
33,754,660
2015-11-17T10:31:00.000
1
0
1
1
macos,python-2.7,virtualenv
49,254,513
2
false
0
0
I had the same problem and found a solution by uninstalling psycopg2 and installing an older version. As I understood it, my computer did not support some instructions used in the new version.
2
6
0
I've been using Python 2.7.10 in a virtualenv environment for a couple of months. Yesterday, activating the environment went fine, but today I suddenly get this cryptic error when trying to start Python from Terminal: Illegal instruction: 4. I have made no changes to my environment (AFAIK), so I'm having a difficult time coming to terms with what this error is and what caused it. Python works fine outside of this virtualenv environment; when run via /usr/local/bin it presents no problem.
"Illegal instruction: 4" when trying to start Python with virtualenv in OS X
0.099668
0
0
5,705
33,755,337
2015-11-17T11:05:00.000
1
0
0
0
python,python-2.7,websocket,tornado
33,763,753
1
false
0
0
You can simply pass autoreload=True or debug=True (which does autoreload and some other things) to your Application constructor.
1
0
0
I have files websocket_server.py and websocket_service.py; to run the websocket I do python websocket_server.py. How should I configure a custom file watcher so that when I modify websocket_service.py, my websocket server is restarted?
Create file watcher that will restart websocket server when specific files modified
0.197375
0
1
96
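A minimal sketch of the answer's suggestion (the handler list and port are placeholders):

```python
import tornado.ioloop
import tornado.web

# debug=True enables autoreload (plus other debug features); the server
# restarts itself whenever a module it has imported is modified.
app = tornado.web.Application([], debug=True)   # or autoreload=True
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
```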
33,756,512
2015-11-17T12:06:00.000
1
1
0
0
python,automation,jira,confluence,asciidoctor
33,761,564
1
false
1
0
I did something similar - getting info from Jira and updating Confluence. I did it in a bash script that ran on Jenkins. The script got the Jira info using the Jira REST API, parsed the JSON from Jira using jq (a wonderful tool), and created/updated the Confluence page using the Confluence REST API. I have not used Python, but the combination of bash/REST/jq was very simple. Running the script from Jenkins allowed me to run this periodically, so Confluence is updated automatically every 2 weeks with the new info from Jira.
1
1
0
I'm curious, how a good automated workflow could look like for the process of automating issues/touched file lists into a confluence page. I describe my current idea here: Get all issues matching my request from JIRA using REST (DONE) Get all touched files related to the matching Issues using Fisheye REST Create a .adoc file with the content Render it using asciidoctor-confluence to a confluence page I'm implementing the this in python (using requests etc.) and I wonder how I could provide proper .adoc for the ruby-based asciidoctor. I'm planning to use asciidoctor for the reason it has an option to render directly to confluence using asciidocter-confluence. So, is there anybody who can kindly elaborate on my idea?
Programmatically create confluence content from jira and fisheye
0.197375
0
0
649
33,756,970
2015-11-17T12:29:00.000
0
0
1
0
python,image-processing
33,757,187
2
false
0
0
To make it blurry, filter it using any low-pass filter (mean filter, Gaussian filter, etc.).
1
0
1
I already have a function that converts an image to a matrix, and back. But I was wondering how to manipulate the matrix so that the picture becomes blurry, or pixified?
How can I blur or pixify images in python by using matrixes?
0
0
0
161
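A minimal sketch of the mean-filter suggestion for a 2-D grayscale matrix (a naive implementation; scipy or OpenCV would be much faster):

```python
import numpy as np

def mean_blur(img, k=3):
    """Naive k x k mean (box) filter -- the simplest low-pass blur."""
    r = k // 2
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y, x] = img[y - r:y + r + 1, x - r:x + r + 1].mean()
    return out

blurred = mean_blur(np.arange(25, dtype=float).reshape(5, 5))
```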
33,757,699
2015-11-17T13:04:00.000
1
0
0
0
python,debugging,cassandra,pdb,graphite
33,758,124
1
true
1
0
pdb gives control over to gunicorn, which is not what you want. Have a look at rpdb or other remote debugging solutions.
1
0
0
I'm developing a cassandra storage finder for graphite-api. graphite-api is installed via pip and run via gunicorn so I can't just call the script with a debugger but want to use interactive debugging. When I import pdb in my storage finder and set a breakpoint, the code will halt there, but how can I connect now to the headless running pdb in the script? Or is my approach to this debugging problem the wrong one and this has to be done in a completely other way?
How to debug Python script which is automatically called inside a web application?
1.2
0
0
154
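A small sketch of the remote-debugging suggestion using rpdb (the port and attach command reflect rpdb's defaults, an assumption worth verifying):

```python
# Instead of pdb.set_trace(), open the debugger on a TCP port so you can
# attach from another terminal while gunicorn runs the app headlessly.
import rpdb
rpdb.set_trace()   # listens on 127.0.0.1:4444 by default
# then, from a shell on the same host:  nc 127.0.0.1 4444
```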
33,759,623
2015-11-17T14:37:00.000
3
0
0
0
python,tensorflow
53,183,223
28
false
0
0
Use tf.train.Saver to save a model. Remember, you need to specify the var_list if you want to reduce the model size. The var_list can be tf.trainable_variables or tf.global_variables.
2
640
1
After you train a model in Tensorflow: How do you save the trained model? How do you later restore this saved model?
How to save/restore a model after training?
0.021425
0
0
468,965
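A TF1-style sketch of the Saver answer above (written against the compat.v1 API so it also runs on modern TF; the variable and checkpoint path are placeholders):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
w = tf.Variable(tf.zeros([2, 2]), name="w")
saver = tf.train.Saver(var_list=tf.trainable_variables())  # restrict what is saved

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    path = saver.save(sess, "/tmp/model.ckpt")   # save after training
    saver.restore(sess, path)                    # later: restore the weights
```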
33,759,623
2015-11-17T14:37:00.000
55
0
0
0
python,tensorflow
33,763,208
28
false
0
0
There are two parts to the model: the model definition, saved by Supervisor as graph.pbtxt in the model directory, and the numerical values of tensors, saved into checkpoint files like model.ckpt-1003418. The model definition can be restored using tf.import_graph_def, and the weights are restored using Saver. However, Saver uses a special collection holding the list of variables that's attached to the model Graph, and this collection is not initialized using import_graph_def, so you can't use the two together at the moment (it's on our roadmap to fix). For now, you have to use Ryan Sepassi's approach -- manually construct a graph with identical node names, and use Saver to load the weights into it. (Alternatively, you could hack it by using import_graph_def, creating the variables manually, and using tf.add_to_collection(tf.GraphKeys.VARIABLES, variable) for each variable, then using Saver.)
2
640
1
After you train a model in Tensorflow: How do you save the trained model? How do you later restore this saved model?
How to save/restore a model after training?
1
0
0
468,965
33,760,242
2015-11-17T15:04:00.000
2
0
1
0
python,ipython,markdown
33,764,116
1
false
0
0
Unless you find it through Google, unlikely. Notebook files are large JSON structures that can contain markdown in one or more discrete cells. For small cases, copy+paste are enough.
1
3
0
I have some (over 100) notes taken down as markdown text files (with the .md extension). Recently I discovered the IPython notebook. Apart from the lack of Vim keybindings, it looks perfect, so I would like to convert all those .md files into .ipynb files. Is there such a tool?
Converting markdown text to ipython notebook
0.379949
0
0
2,873
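The answer notes that notebooks are JSON structures containing markdown cells; a small sketch using the nbformat package (an assumption, the answer names no tool) wraps a .md file into a one-cell notebook:

```python
import nbformat
from nbformat.v4 import new_markdown_cell, new_notebook

text = open("notes.md").read()                      # placeholder file name
nb = new_notebook(cells=[new_markdown_cell(text)])  # one markdown cell
nbformat.write(nb, "notes.ipynb")
```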
33,761,192
2015-11-17T15:47:00.000
6
1
1
0
python
33,761,211
1
true
0
0
By doing import LargeSizedModule everywhere you need it. Python will only load it once.
1
4
0
I want to create a Python package that has multiple subpackages. Each of those subpackages contain files that import the same specific module that is quite large in size. So as an example, file A.py from subpackage A will import a module that is supposedly named LargeSizedModule and file B.py from subpackage B will also import LargeSizedModule. Similarly with C.py from subpackage C. Does anyone know how I can efficiently import the same exact module across multiple subpackages? I would like to reduce the 'loading' time that comes from those duplicate imports.
How to efficiently import the same module into multiple sub-packages in python
1.2
0
0
1,283
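A quick sketch of the caching behaviour the answer relies on (json stands in for the large module):

```python
import sys
import json   # first import: module is loaded and cached
import json   # second import: a dict lookup in sys.modules, not a reload

print("json" in sys.modules)         # True
print(sys.modules["json"] is json)   # True: every import sees one object
```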
33,763,674
2015-11-17T17:47:00.000
3
0
0
0
android,python,kivy,turtle-graphics
33,767,341
1
true
0
1
The default turtle GUI uses tkinter (a different graphics toolkit), so it can't draw in a Kivy app. You can certainly use the turtle module's logic with Kivy, and it should be very easy to draw the turtle's path, but you'd need to write some code to actually do this - turning the turtle position data into Kivy graphics instructions.
1
2
0
I have been playing with basic Python and am familiar with the Turtle module. Then I downloaded Kivy and wrote some basic applications with it. My problem is that I couldn't use the Turtle module in Kivy. I searched a lot, but I couldn't find any example or tutorial on it. Is it possible to use the Turtle module in a Kivy application? Is there any example of using turtle in a Kivy application?
Turtle and Kivy
1.2
0
0
984
33,765,336
2015-11-17T19:24:00.000
5
0
0
0
python,tensorflow
60,106,544
5
false
0
0
TensorFlow 2.0 compatible answer: in TensorFlow >= 2.0, the command to reset the entire default graph, when run in graph mode, is tf.compat.v1.reset_default_graph. NOTE: the default graph is a property of the current thread; this function applies only to the current thread. Calling it while a tf.compat.v1.Session or tf.compat.v1.InteractiveSession is active results in undefined behavior, and using any previously created tf.Operation or tf.Tensor objects after calling this function results in undefined behavior. Raises AssertionError if called within a nested graph.
1
69
1
When working with the default global graph, is it possible to remove nodes after they've been added, or alternatively to reset the default graph to empty? When working with TF interactively in IPython, I find myself having to restart the kernel repeatedly. I would like to be able to experiment with graphs more easily if possible.
Remove nodes from graph or reset entire default graph
0.197375
0
0
75,804
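A tiny usage sketch of the call named above (graph mode via the compat.v1 API; per the answer, do not call it while a session is active):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
g1 = tf.get_default_graph()
tf.reset_default_graph()      # discard all nodes added so far
g2 = tf.get_default_graph()
print(g1 is g2)               # False: a fresh, empty default graph
```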
33,767,792
2015-11-17T21:56:00.000
2
1
0
0
python,outlook
33,768,124
1
false
0
0
Instead of reading the MailItem.To/CC/BCC properties, loop through all items in the MailItem.Recipients collection and read the Recipient.Address property. You might also need Recipient.Type property (olTo, olCC, olBCC) and Recipient.Name.
1
0
0
Using win32com.client in python 3.x, I'm able to access email stored in Outlook 2013. I'm able to access all of the information I need from the emails, except for the email address of the recipients of the email (to, cc, and bcc). I'm able to access the names of the recipients, but not their email addresses. For example, I can see that an email was sent to "John Smith", but not that the email was sent to "[email protected]". Is there a way to access this information?
Accessing email recipient addresses from outlook using python
0.379949
0
0
3,567
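A sketch following the answer, with win32com as in the question; the folder choice and message indexing are assumptions:

```python
import win32com.client

ns = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = ns.GetDefaultFolder(6)        # 6 = olFolderInbox
msg = inbox.Items.GetFirst()
for recip in msg.Recipients:
    # Recipient.Type: 1 = olTo, 2 = olCC, 3 = olBCC
    print(recip.Name, recip.Address, recip.Type)
```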
33,769,143
2015-11-17T23:35:00.000
0
0
0
0
python,ms-access,pypyodbc
33,894,451
3
false
0
0
As I was putting together test files for you to try to reproduce, I noticed that two of the fields in the table were set to Single type rather than Double. Changed them to Double and that solved the problem. Sorry for the bother and thanks for the help.
1
3
0
I have a database in MS Access. I am trying to query one table from Python using pypyodbc, and I get the following error message: ValueError: could not convert string to float: E+6. The numbers in the table are fairly big, with up to ten significant figures. The error message tells me that MS Access is formatting them in scientific notation and Python is reading them as strings. The fields in the table are formatted as singles with two decimal places. When I see the numbers in the table in the database they are not formatted using scientific notation, but the error message seems to indicate that they are. Furthermore, if I change the numbers in the table (at least for a test row) to small numbers (integers from 1 to 5) the query runs, which supports my theory that the problem is the scientific formatting of big numbers. Any ideas on how to: write into the database table in a way that the numbers are not formatted in scientific notation, or make pypyodbc retrieve such numbers and ignore any scientific notation?
Issue querying from Access database: "could not convert string to float: E+6"
0
1
0
1,522
33,772,125
2015-11-18T04:56:00.000
1
0
1
0
python,maya
33,777,695
2
false
0
0
You have to use a function to fill the textScrollList and attach it to the selectCommand flag. You may have to use functools.partial to pass the textScrollList name as an argument to your function. Hope it helps.
1
0
0
I have a window that has both a textScrollList and an optionMenu. I would like to refresh the option menu items whenever a selection is changed in the text list. Not really sure to start on this one. I'm using regular Maya Python.
Update optionMenu items every time selection is changed in a textScrollList
0.099668
0
0
1,876
33,773,863
2015-11-18T07:13:00.000
1
0
0
0
python,analytics
33,774,029
2
false
0
0
There is a vast range of choices for data analysis in Python, with many frameworks that ensure you do not have to reinvent the wheel. Some of the major ones are: 1) NumPy: a Python library providing easy access to arrays, matrix operations and linear algebra (you may also consider SciPy). 2) Pandas: a library which provides 2D datasets, or dataframes, to store data; they are handy at times. 3) Matplotlib: a great library for making and plotting 2D graphs; it can produce graphs and histograms with just a few lines of code.
1
0
0
I would like to learn about data analytics. Where do I start? Where can I find the concepts behind analytics? What frameworks in Python are used for analytics? Which would be better for my career (Python or R)?
Analytics using PYTHON
0.099668
0
0
193
33,775,269
2015-11-18T08:41:00.000
1
0
0
0
python,sql-server,sqlalchemy
33,991,887
1
true
0
0
If the issue is only with threads and not concurrent processes, then the DBAPI in use would be suspect. I don't see which driver you are using, but perhaps it is not releasing the GIL while it waits for a server response. Produce a test case that isolates it to just that driver running in two threads, and then report it as a bug on their system.
1
2
0
I'm using SQLAlchemy with SQL Server as the database engine. I have queries that take a long time (approximately 10 seconds). When I send concurrent requests to the database, the response time grows (exactly: time = execution time * request count). I increased the connection pool but nothing changed.
SQL alchemy is slow in concurrent connection
1.2
1
0
403
33,776,599
2015-11-18T09:47:00.000
1
0
1
0
python,latex,ipython,tex
34,092,175
1
false
0
0
I've been searching for the same question. It looks like in Python 2.7 you can add the following line to a file called custom.js: IPython.Cell.options_default.cm_config.lineWrapping = true; custom.js is located in ~\Lib\site-packages\notebook\ or ~\Lib\site-packages\jupyter_core\. Note, however, that this isn't working for me yet. I will update here as soon as I get something working.
1
2
0
I am writing a documentation for a python application. For this I am using iPython notebook where I use the code and the markdown cells. I use pandoc to transform the notebook to a .tex document which I can easily convert to .pdf. My problem is this: Line breaks (or word wrap) does not seem to work for the code cells in the .tex document. While the content of the markdown cells is formatted nicely, the code from the code cells (As well as the output from this code) is running over the margins. Any help would be greatly appreciated!
How to get proper line breaks for code cells when converting iPython notebook to tex?
0.197375
0
0
929
33,776,940
2015-11-18T10:01:00.000
0
0
0
1
python,sublimetext3,sublimetext,sublime-text-plugin
36,759,584
1
false
0
0
I don't know what you mean by "specific windows" - Sublime windows? Sublime views? Other application windows? You can detect window close with an EventListener. There is no direct pre-quitting event, but you can use a view's on_close function and check whether any windows remain in sublime.windows(): def on_close(self, view): if not sublime.windows(): self.close_specific_windows(). Be aware that this function will be called for each opened view (file) in Sublime.
1
0
0
Is there any way to write a script that will tell sublime to close specific windows on quit? I've tried setting a window's remember_open_files setting to false, and I've tried using python's atexit library to run the close window command. So far no luck
get sublime text 3 to close certain windows on quit
0
0
0
167
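The answer's on_close hook, reformatted as it would sit in a Sublime Text 3 plugin; close_specific_windows is the hypothetical cleanup hook from the answer:

```python
import sublime
import sublime_plugin

class QuitCleanupListener(sublime_plugin.EventListener):
    def on_close(self, view):
        # Called once per closed view; no windows left means Sublime is quitting.
        if not sublime.windows():
            self.close_specific_windows()

    def close_specific_windows(self):
        pass   # hypothetical cleanup logic goes here
```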
33,778,045
2015-11-18T10:50:00.000
0
0
1
0
python-3.x,squish
34,369,057
3
false
0
0
You can use the sleep function. For example, to make the script sleep for 2 seconds: time.sleep(2). Don't forget to import the time library (import time).
1
0
0
While recording activities in an application through squish in python, I want some wait time in between consecutive activities. Which function should I use?
Squish Record and play in python
0
0
0
580
33,778,802
2015-11-18T11:24:00.000
2
0
0
0
python,scikit-learn,outliers
42,991,702
2
true
0
0
The right way to do this is: divide the data into normal and outliers; take a large sample from the normal data as normal_train for fitting the novelty detection model; create a test set with a sample from normal that is not used in training (say normal_test) and a sample from the outliers (say outlier_test), such that the distribution of the test data (normal_test + outlier_test) retains the population distribution; predict on this test data to get the usual metrics (accuracy, sensitivity, positive predictive value, etc.). Wow, I have come a long way!
1
2
1
I am using sklearn's EllipticEnvelope to find outliers in dataset. But I am not sure about how to model my problem? Should I just use all the data (without dividing into training and test sets) and apply fit? Also how would I obtain the outlyingness of each datapoint? Should I use predict on the same dataset?
How to apply sklearn's EllipticEnvelope to find out top outliers in the given dataset?
1.2
0
0
2,901
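A sketch of that workflow with synthetic data (the shapes and contamination value are assumptions):

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
normal = rng.normal(0, 1, size=(500, 2))
outliers = rng.uniform(-6, 6, size=(25, 2))

model = EllipticEnvelope(contamination=0.05).fit(normal[:400])  # normal_train
test = np.vstack([normal[400:], outliers])    # normal_test + outlier_test
pred = model.predict(test)                    # +1 = inlier, -1 = outlier
print((pred == -1).sum(), "points flagged as outliers")
```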
33,780,727
2015-11-18T12:58:00.000
5
0
0
0
python,flask,singleton,thread-local
33,780,922
1
true
1
0
The app context is not meant for sharing between requests. It is there to share context before the request context is set up, as well as after the request has been torn down already. Yes, this means that there can be multiple g contexts active for different requests. You can't share 'global' state because a WSGI app is not limited to a single process. Many WSGI servers use multiprocessing to scale request handling, not just threading. If you need to share 'global' state across requests, use something like a database or memcached.
1
3
0
I've read the Flask documentation and found this: 13.3 Locality of the Context. The application context is created and destroyed as necessary. It never moves between threads and it will not be shared between requests. This is really odd to me. I thought an app context should persist with the app and share objects across all the requests of the app. So I dove into the source code and found that when the request context is pushed, an application context will be created and pushed if the current app is not the one the request is associated with. So it seems that the app context stack may have multiple different app contexts for the same app pushed? Why not use a singleton app context? Why is the lifetime of the app context so 'short'? What can such an app context be used for?
Why app context in flask not a singleton for an app?
1.2
0
0
1,976
33,782,926
2015-11-18T14:40:00.000
0
0
0
0
python,kivy
43,096,978
1
false
0
1
Make sure you have PIP installed (it gives the option to install when you download python). Then, open a command shell (CMD on Windows; enter the search bar and type in 'cmd') and type pip install kivy. Some would recommend setting up a virtualenv and stuff but as a quick answer this should work.
1
0
0
I have downloaded the package Kivy-1.9.0-py3.4-win32-x64 from their website. I extracted it and got a folder with gstreamer, kivy34, MinGW, etc. Following some tutorials, I clicked the kivy-3.4.bat file to set the paths. Now I'm lost; what do I do next? I want to build Kivy applications through Python IDLE. Help!
How to install Kivy on Windows?
0
0
0
2,161
33,783,672
2015-11-18T15:13:00.000
0
0
0
0
python,tensorflow
68,254,814
4
false
0
0
Using the TensorFlow 2 API, there are several options. Weights can be extracted using the get_weights() function: weights_n = model.layers[n].get_weights()[0]. Biases can be extracted using the numpy() conversion function: bias_n = model.layers[n].bias.numpy()
1
46
1
After training the CNN model, I want to visualize or print out the weights; what can I do? I cannot even print out the variables after training. Thank you!
How can I visualize the weights(variables) in cnn in Tensorflow?
0
0
0
53,725
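A runnable sketch of the two extraction calls above (the toy model is a placeholder):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(3,))])
n = 0
weights_n = model.layers[n].get_weights()[0]   # kernel matrix
bias_n = model.layers[n].bias.numpy()          # bias vector
print(weights_n.shape, bias_n.shape)           # (3, 4) (4,)
```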
33,784,362
2015-11-18T15:43:00.000
5
0
0
0
python,django,version-control,django-migrations
33,784,579
2
false
1
0
You should create migration files locally, migrate locally and test them, and then commit the files to version control. The Django docs say: "The reason that there are separate commands to make and apply migrations is because you'll commit migrations to your version control system and ship them with your app; they not only make your development easier, they're also useable by other developers and in production." If multiple developers are working on the same project, they don't have to create the migration files; they just run migrate and everything is paradise.
2
0
0
I'm using Django 1.7 with migrations, and I'm not sure about what is the best practice, I should add the migrations files to my repository, or this is a bad idea?
Django migrations best practice
0.462117
0
0
1,818
33,784,362
2015-11-18T15:43:00.000
0
0
0
0
python,django,version-control,django-migrations
33,784,715
2
true
1
0
Yes, they must be versioned. If you work alone it's not a problem, because each time you edit a model you run makemigrations and migrate and end up with the right database schema. But how can your colleagues get the database schema that corresponds to the new models you committed if they can't run your migrations too? Commit your migrations to allow your colleagues to run migrate and get the same database schema.
2
0
0
I'm using Django 1.7 with migrations, and I'm not sure about what is the best practice, I should add the migrations files to my repository, or this is a bad idea?
Django migrations best practice
1.2
0
0
1,818
33,786,307
2015-11-18T17:17:00.000
0
0
1
0
python,pygame
63,551,510
3
false
0
1
You can easily install pygame and check which version fits you by downloading the PyCharm IDE. Once you have it, go to File > Settings > Project: <project's name> > Python Interpreter, press the plus icon on the right (install), and type pygame. When you find the module, check the box Specify version and select the version you want. If that version does not work, you can easily select and try one of the other versions.
1
1
0
I have been trying to install the pygame module and get the error ImportError: No module named 'pygame'. I currently have Python version 3.3.4 and installed pygame cp33 32-Bit. It is currently a whl file and I have tried following tutorials etc in order to import it but I'm having no luck.
Installing Pygame Module
0
0
0
346
33,786,736
2015-11-18T17:39:00.000
3
0
0
0
python,django,security,httponly
33,787,443
1
true
1
0
On logout, the server sends back a session cookie update with an empty value to show that the cookie has been destroyed. The HTTPOnly flag is set to prevent an XSS vulnerability from disclosing the secret session ID. When the cookie is "deleted" by setting it to an empty value, any sensitive data is removed from the cookie. An attacker doesn't have any use for an empty value, so it is not necessary to set the HTTPOnly flag. On top of that, the expire date is set in the past, and the max-age is set to 0. The client will delete the cookie immediately, leaving any attacker with no chance to read the cookie through an XSS attack.
1
5
0
I have a Django application and am configuring some security settings. One of the settings is the SESSION_COOKIE_HTTPONLY flag. I set this flag to True. On session creation (login) I can see the session HTTPOnly flag set if I inspect cookies. On logout, the server sends back a session cookie update with an empty value to show that the cookie has been destroyed. This empty cookie is not sent back with the httpOnly flag set. My question: Is this a security concern? Is there a way to force Django to set this flag on logout? Or is this just expected behavior, and is not a security concern, since the session cookie that is returned is blank?
Session Cookie HTTPOnly flag not set on response from logout (Django)
1.2
0
0
1,858
33,789,249
2015-11-18T20:00:00.000
0
0
0
0
python,widget,kivy
33,792,073
1
false
0
1
You can use your own item class; I think this is an option on the ListAdapter (maybe named cls). That way you can add whatever logic you like to how each item is displayed.
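A hedged sketch of that idea against the old (since-deprecated) ListView API; the property names, data shape and quality values below are assumptions, so adjust them to your item class:

```python
# The args_converter decides, per item, which constructor arguments the
# item class receives, so alignment can depend on the response quality.
from kivy.adapters.listadapter import ListAdapter
from kivy.uix.listview import ListItemButton, ListView

def args_converter(row_index, item):
    # item is assumed to be a (text, quality) pair from the web lookup
    text, quality = item
    return {
        "text": text,
        # halign is a Label property inherited by ListItemButton; note
        # it only takes visible effect once text_size is constrained
        "halign": "center" if quality == "good" else "left",
        "size_hint_y": None,
        "height": 30,
    }

adapter = ListAdapter(
    data=[("exact match", "good"), ("fuzzy match", "poor")],
    args_converter=args_converter,
    cls=ListItemButton,
)
view = ListView(adapter=adapter)
```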
1
0
0
Is it possible to have dynamic text alignment with kivy listview? I have a list of responses from a web lookup. I would like to have the text alignment (left or center) of each list item depending on the quality of the response item. I can't find a way to access 'halign' for each list item.
Kivy listview text alignment
0
0
0
383
33,792,696
2015-11-18T23:45:00.000
2
0
1
1
python,shebang,python-wheel
33,808,977
3
true
0
0
I finally narrowed it down and found the problem. Here are the exact steps to reproduce the problem and the solution. Use a valid shebang in a script that's added in setup.py; in my case #!/usr/bin/env python. Create a virtualenv with virtualenv -p /usr/bin/python2 env and activate it with source env/bin/activate. Install the package into the virtualenv with python setup.py install. Build the wheel with python setup.py bdist_wheel. The problem is installing the package into the virtualenv in step 3: if this is not done, the shebang is not expanded.
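For reference, a minimal sketch of the pieces involved (the package and file names are hypothetical): the script ships with the generic shebang, and setup.py lists it so installers can rewrite the shebang for the target environment.

```python
# setup.py -- minimal sketch; names are hypothetical
from setuptools import setup

setup(
    name="mypackage",
    version="0.1",
    # bin/mytool starts with the line: #!/usr/bin/env python
    scripts=["bin/mytool"],
)
```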
2
6
0
If I build a package with python setup.py bdist_wheel, the resulting package expands the shebangs in the scripts listed in setup.py via setup(scripts=["script/path"]) to use the absolute path to my python executable #!/home/f483/dev/storj/storjnode/env/bin/python. This is obviously a problem as anyone using the wheel will not have that setup. It does not seem to make a difference what kind of shebang I am using.
How to prevent python wheel from expanding shebang?
1.2
0
0
713
33,792,696
2015-11-18T23:45:00.000
0
0
1
1
python,shebang,python-wheel
33,792,857
3
false
0
0
Using the generic shebang #!python seems to solve this problem. Edit: This is incorrect!
2
6
0
If I build a package with python setup.py bdist_wheel, the resulting package expands the shebangs in the scripts listed in setup.py via setup(scripts=["script/path"]) to use the absolute path to my python executable #!/home/f483/dev/storj/storjnode/env/bin/python. This is obviously a problem as anyone using the wheel will not have that setup. It does not seem to make a difference what kind of shebang I am using.
How to prevent python wheel from expanding shebang?
0
0
0
713
33,800,742
2015-11-19T10:07:00.000
0
0
1
0
python,asynchronous,grequests
33,801,105
1
false
0
0
Just use the regular requests library for this. A call to res = requests.get(...) is asynchronous anyway; it will not block until you access something like res.content. Is this what you are looking for?
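If you do want to stay with grequests, one hedged sketch is to consume responses with grequests.imap as they complete and decide on follow-up requests inside the loop (the URLs below are placeholders):

```python
# Responses arrive as soon as each request finishes, so later requests
# can be built while earlier ones are still in flight.
import grequests

urls = ["http://httpbin.org/get?page=%d" % i for i in range(5)]
pending = (grequests.get(u) for u in urls)

for response in grequests.imap(pending, size=3):
    print(response.url, response.status_code)
    # inspect the response here and, if needed, build and map
    # a new batch of requests based on what came back
```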
1
0
0
So I know you can use grequests to create multiple requests and use map to process them at the same time. But how do you create new requests on the fly while some already-sent requests have not yet returned a response? I don't want to use multiprocessing or multithreading; is there a way to achieve this with grequests?
create asynchronous requests on the fly using grequests
0
0
0
61
33,801,334
2015-11-19T10:30:00.000
0
0
1
0
python
33,801,420
1
false
0
0
When displaying a question, store the current time in a variable. Then, after the user provides an answer, calculate the difference between the current time and the time stored in the previous step, and check whether it exceeds the 60-second limit.
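A minimal sketch of that idea, with the question text and scoring rule as placeholders:

```python
import time

QUESTION = "What is 6 * 7?"
start = time.time()                  # moment the question is displayed
answer = input(QUESTION + " ")       # use raw_input on Python 2
elapsed = time.time() - start

score = 100
if elapsed > 60:
    print("Too slow: %.1f seconds." % elapsed)
elif answer.strip() == "42":
    score *= 2                       # a right answer doubles the score
    print("Correct in %.1f s, score is now %d." % (elapsed, score))
```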
1
0
0
I am new to programming, in fact I take a class at school and I am not very good. My assignment is to write a quiz and with every question, the person has 60 seconds to answer the question and with every right answer their score doubles. Please help.
Python: How to calculate scores and enforce a time limit?
0
0
0
67
33,801,732
2015-11-19T10:48:00.000
0
1
1
0
python-2.7,powershell,nosetests
33,804,412
1
false
0
0
According to the author, the cause of the issue was... trivial: "Darn I'm so silly hahaha, I ran nosetests on the wrong directory. Thank you for your answer :) It takes time to run, my Avast will do scan, maybe 15-20 seconds." – mdominic 1 hour ago
1
1
0
I followed Zed Shaw's instructions in his book "Learn Python the Hard Way, 3rd Edition". In my Windows PowerShell, nosetests does nothing; I just saw the cursor blink until the end of the world. Why is that? How can I solve this?
Nosetests on Windows Powershell do nothing?
0
0
0
456
33,801,985
2015-11-19T10:59:00.000
3
0
0
1
python,django,redis,celery
52,539,351
1
false
1
0
You have to use RabbitMQ instead of redis. RabbitMQ is feature-complete, stable, durable and easy to install. It's an excellent choice for a production environment. Redis is also feature-complete, but is more susceptible to data loss in the event of abrupt termination or power failures. With RabbitMQ, your problem of losing messages on restart should be gone.
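For reference, the broker switch itself is a one-line config change (Celery 3.x-style setting name; the URL is a placeholder for your RabbitMQ server):

```python
# amqp://user:password@host:port/vhost points Celery at RabbitMQ
# instead of redis.
BROKER_URL = "amqp://guest:guest@localhost:5672//"
```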
1
15
0
I use Celery to schedule the sending of emails in the future. I put the task in celery with apply_async() and the ETA set to some time in the future. When I look in flower I see that all tasks scheduled for the future have status RECEIVED. If I restart celery, all tasks are gone. Why are they gone? I use redis as a broker. EDIT 1: In the documentation I found: If a task is not acknowledged within the Visibility Timeout the task will be redelivered to another worker and executed. This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact if that happens it will be executed again, and again in a loop. So you have to increase the visibility timeout to match the time of the longest ETA you are planning to use. Note that Celery will redeliver messages at worker shutdown, so having a long visibility timeout will only delay the redelivery of 'lost' tasks in the event of a power failure or forcefully terminated workers. Periodic tasks will not be affected by the visibility timeout, as this is a concept separate from ETA/countdown. You can increase this timeout by configuring a transport option with the same name: BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 43200} The value must be an int describing the number of seconds. But the ETA of my tasks can be measured in months or years. EDIT 2: This is what I get when I type $ celery -A app inspect scheduled: {u'priority': 6, u'eta': u'2015-11-22T11:53:00-08:00', u'request': {u'args': u'(16426,)', u'time_start': None, u'name': u'core.tasks.action_due', u'delivery_info': {u'priority': 0, u'redelivered': None, u'routing_key': u'celery', u'exchange': u'celery'}, u'hostname': u'[email protected]', u'acknowledged': False, u'kwargs': u'{}', u'id': u'8ac59984-f8d0-47ae-ac9e-c4e3ea9c4ac6', u'worker_pid': None}} If you look closely, the task wasn't acknowledged yet, so it should stay in redis after a celery restart, right?
Celery restart loses scheduled tasks
0.53705
0
0
3,035
33,802,391
2015-11-19T11:16:00.000
1
0
0
0
python,revit-api,revitpythonshell
33,815,248
2
false
1
0
Great question - my +1 is definitely for Revit Python Shell (RPS). Likewise, I had a basic understanding of Python and none of the Revit API, but with RPS I've coded multiple add-ins for our office (including rich user interfaces using WinForms) and have hit no limitations so far from coding in Python. It's true that there is some translating of C# API samples into Python - but the reward is in seeing a few paragraphs of code become a few lines... The maker of RPS (Daren) is also really helpful, so no questions go unanswered. Disclaimer: like you, I'm a novice programmer who simply wanted to use the API to extend Revit. RPS for the win.
1
1
0
I'm attempting to pull physical property information (dimensions and resistance values, in particular) from an architectural (Autodesk - Revit) model and organize that information to be exported as specific variables. To expand slightly, for an independent study I want to perform energy balances on Revit Models, starting simple and building from there. The goal is to write code that collects information from a Revit Model and then organizes it into variables such as "Total Wall Area", "Insulation Resistance", "Drywall depth", "Total Window Area", etc. that could be then sent to a model (or simply a spreadsheet) and stored as such. I hope that makes some sense. Given that I am a novice coder and would prefer to write in Python, does anyone have any advice or resources concerning an efficient (simple) path to go about importing and organizing specific parameters from a Revit model? Is it necessary (or realistically necessary, given the humble extent of my knowledge) to use the API for this program (Revit) to accomplish this task? I imagine this task is similar to web scraping yet I have no HTML to call and search through and therefore am happily winging my way along, asking folks far more knowledgeable than I if they have any insight. A brief background, I have next to no knowledge of Revit or APIs in general, basic knowledge of coding in Python and really want to learn more! Any help you are able to give is absolutely appreciated! I'm also happy to answer any questions that come up. Thank you for reading and have a terrific day!
Scraping model information from a program using python
0.099668
0
0
132
33,804,925
2015-11-19T13:11:00.000
0
0
1
0
user-interface,python-3.x,ui-automation,squish
44,693,237
3
false
0
0
If you're using python, you can use time.sleep() as well
2
0
0
I am working on an application that has read-only screens. To test whether the data is being fetched on screen load, I want to set some wait time until the screen is ready. I am using Python to record the actions. Is there a way to check the static text on the screen and set the time?
While Recording in Squish using python, how to set the application to sleep for some time between 2 consecutive activities?
0
0
0
1,162
33,804,925
2015-11-19T13:11:00.000
1
0
1
0
user-interface,python-3.x,ui-automation,squish
34,173,202
3
false
0
0
You can simply use snooze(<time in seconds>). Example: snooze(5). If you want to wait for a certain object, use waitForObject(":symbolic_name"). Example: type(waitForObject(":Welcome.Button"), "<text to type>").
2
0
0
I am working on an application that has read-only screens. To test whether the data is being fetched on screen load, I want to set some wait time until the screen is ready. I am using Python to record the actions. Is there a way to check the static text on the screen and set the time?
While Recording in Squish using python, how to set the application to sleep for some time between 2 consecutive activities?
0.066568
0
0
1,162
33,805,228
2015-11-19T13:25:00.000
2
0
1
0
python,django
33,805,473
3
false
0
0
I'm not sure why you would think there is any overhead in passing an object into a function. That will always be cheaper than querying it from the database again, which would mean constructing the query, calling the database, and instantiating something from the result. The only time you would definitely need to pass IDs rather than the object is in an asynchronous context like a Celery task; there, you want to be sure that you get the most recent version of the object which might have been changed in the DB by the time the task is processed.
2
1
0
I have a view function which does a get on objects (say A, B & C) using their Ids. This view function calls a local function. Should I be passing the objects to the local function, or should I pass the Ids and do a get again there? Which is more efficient? Which is a bigger overhead, passing an object or retrieving an object using get?
How efficient is passing an object over doing a get?
0.132549
0
0
71
33,805,228
2015-11-19T13:25:00.000
0
0
1
0
python,django
33,805,646
3
false
0
0
Passing arguments around in this way is quite cheap: under the hood, it is implemented in terms of a single additional pointer. This will almost certainly be faster than invoking the django machinery again, which for a lookup by ID has to involve a (still fast, but relatively slower) dictionary lookup at minimum, or if it doesn't do caching, could involve requerying the database (which is going to be noticeably slow, especially if the database is big). Prefer passing local variables around where possible unless there is a benefit to code clarity from doing it otherwise (but I can't think of any cases where local variables wouldn't be the clearer option), or if the "outside world" captured by that object might have changed in ways you need to be aware of.
2
1
0
I have a view function which does a get on objects (say A, B & C) using their Ids. This view function calls a local function. Should I be passing the objects to the local function, or should I pass the Ids and do a get again there? Which is more efficient? Which is a bigger overhead, passing an object or retrieving an object using get?
How efficient is passing an object over doing a get?
0
0
0
71
33,812,902
2015-11-19T19:38:00.000
36
0
1
0
python,ubuntu,pycharm
36,411,334
6
true
0
0
I came across this problem just recently using a remote debugger, however I believe it's still the same solution. I just added the following to the Environment Variables section in the Run/Debug Configuration options found in Run > Edit Configurations... dialog: LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
2
21
0
I am using PyCharm 5 to run a Python 2.7 (Anaconda) script in Ubuntu. My script imports a module with import tensorflow, but this causes the error ImportError: libcudart.so.7.0: cannot open shared object file: No such file or directory. So, it seems that the library libcudart.so.7.0 is needed by this module, but it cannot be found. Now, I have seen that this library is on my machine in /usr/local/cuda-7.0/targets/x86_64-linux/lib. So, in PyCharm, I went to Settings->Project Interpreters->Interpreter Paths. This had a list of paths, such as /home/karnivaurus/Libraries/Anaconda/python2.7. I then added to this list, the path mentioned above which contains the required library. However, this did not fix the problem. I still get an error telling me that libcudart.so.7.0 cannot be found. If I run my script from the shell though (python myfile.py), then it runs fine. How can I tell PyCharm where to find this library? I have noticed that if I have print sys.path in my script, the paths it prints out are entirely different to those in Settings->Project Interpreters->Interpreter Paths... should they be the same?
PyCharm cannot find library
1.2
0
0
27,288
33,812,902
2015-11-19T19:38:00.000
0
0
1
0
python,ubuntu,pycharm
60,391,956
6
false
0
0
The following works for me on Community edition 2019.3. To set it globally for a project: open File > Settings > Project > Project Interpreter, click on the cog icon next to the interpreter, choose Show All, click on the little folder-with-tree icon at the bottom right, and add the path to "Interpreter Paths".
2
21
0
I am using PyCharm 5 to run a Python 2.7 (Anaconda) script in Ubuntu. My script imports a module with import tensorflow, but this causes the error ImportError: libcudart.so.7.0: cannot open shared object file: No such file or directory. So, it seems that the library libcudart.so.7.0 is needed by this module, but it cannot be found. Now, I have seen that this library is on my machine in /usr/local/cuda-7.0/targets/x86_64-linux/lib. So, in PyCharm, I went to Settings->Project Interpreters->Interpreter Paths. This had a list of paths, such as /home/karnivaurus/Libraries/Anaconda/python2.7. I then added to this list, the path mentioned above which contains the required library. However, this did not fix the problem. I still get an error telling me that libcudart.so.7.0 cannot be found. If I run my script from the shell though (python myfile.py), then it runs fine. How can I tell PyCharm where to find this library? I have noticed that if I have print sys.path in my script, the paths it prints out are entirely different to those in Settings->Project Interpreters->Interpreter Paths... should they be the same?
PyCharm cannot find library
0
0
0
27,288
33,812,995
2015-11-19T19:44:00.000
0
0
1
1
python,powershell
33,813,074
1
false
0
0
Use Add-Content ex1.py 'print "Hello"' (the single quotes stop PowerShell from splitting on the inner double quotes). Use python.exe -c "<cmd>" to execute a single Python command.
1
1
0
I'm learning Python from "Learn Python the Hard Way" by Zed A. Shaw, and can't figure out: I'm working in Powershell. How do I add a line of text to my python script (ex1.py) from the PowerShell terminal? I've tried (starting in PowerShell): Add-Content ex1.py "print "Hello"" and other variations, but I get the message: Add-Content : A positional parameter cannot be found that accepts argument 'Hello'. How do I run just one of the seven lines of text ex1.py currently has? Without doing some extra bash script? Again, I'm working from the Windows PowerShell terminal, so I don't think bash applies there.
Add to Python Script from PowerShell Terminal?
0
0
0
81
33,817,046
2015-11-20T00:39:00.000
12
0
1
0
python,windows,ide,spyder
38,226,610
9
true
0
0
With the current version of Anaconda (4.1.0) you can simply right-click on a python script in Windows File Explorer and choose "Open with". The first time you do this you need to select "Choose default program" and then browse to spyder.exe in the Script directory in your Anaconda installation. Also make sure that the "Always use the selected program to open this kind of file" is unchecked and then click OK. From now on spyder.exe will always be listed as one of the options when you select "Open with" from the right-click menu in Windows File Explorer.
3
24
0
I have recently installed the Anaconda distribution on Windows 7 (Anaconda 3-2.4.0-Windows-x86_64). Unlike IDLE, I can't right-click and open a py file in the Spyder IDE. I have to open Spyder first and then navigate to the file, or drag and drop it into the editor. Is there any way to open the file in the editor directly from Windows Explorer?
How to get Spyder to open python scripts (.py files) directly from Windows Explorer
1.2
0
0
50,471
33,817,046
2015-11-20T00:39:00.000
0
0
1
0
python,windows,ide,spyder
56,812,079
9
false
0
0
I was unable to find a spyder.exe on my installation of conda. However in my users/.anaconda/navigator/scripts I found a spyder.bat file. Using this to open the file opens an anaconda prompt and shortly after spyder will open the file. The file icon is broken but it works for me. Hope this might help.
3
24
0
I have recently installed the Anaconda distribution on Windows 7 (Anaconda 3-2.4.0-Windows-x86_64). Unlike IDLE, I can't right-click and open a py file in the Spyder IDE. I have to open Spyder first and then navigate to the file, or drag and drop it into the editor. Is there any way to open the file in the editor directly from Windows Explorer?
How to get Spyder to open python scripts (.py files) directly from Windows Explorer
0
0
0
50,471
33,817,046
2015-11-20T00:39:00.000
1
0
1
0
python,windows,ide,spyder
62,496,167
9
false
0
0
This problem is related to the Anaconda installation defaults - it does not register itself in PATH by default and discourages users from doing so. After properly registering all directories in PATH, spyder.exe works as expected. How do you know what to register? Locate activate.bat and run it in cmd, then run echo %PATH% and manually register all directories mentioning Anaconda. Alternatively, reinstall Anaconda with PATH registration enabled. Then you can associate .py files with spyder.exe and the association will work.
3
24
0
I have recently installed the Anaconda distribution on Windows 7 (Anaconda 3-2.4.0-Windows-x86_64). Unlike IDLE, I can't right-click and open a py file in the Spyder IDE. I have to open Spyder first and then navigate to the file, or drag and drop it into the editor. Is there any way to open the file in the editor directly from Windows Explorer?
How to get Spyder to open python scripts (.py files) directly from Windows Explorer
0.022219
0
0
50,471
33,817,362
2015-11-20T01:14:00.000
8
0
1
0
python,biginteger
33,817,415
1
true
0
0
Wouldn't it be very costly? Absolutely, but this is far from the most costly thing involved. We also have dynamic dispatch on the arithmetic operations involved and dynamic allocation of objects to hold the result, among other things. So how can python naturally support big integer and be efficient? If your algorithm spends all its time doing Python-level arithmetic with Python integers, it won't be efficient. It'll be slow as hell. In that case, you probably want to use something like NumPy or C instead of Python integer arithmetic.
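A small illustration of both halves of this answer: the promotion is invisible at the language level, and the contrast with NumPy only shows up in tight loops (exact timings vary by machine):

```python
import numpy as np

a = 2 ** 400             # silently a big integer; no overflow, no error
print(a.bit_length())    # 401

xs = list(range(1000000))
print(sum(x * x for x in xs))    # dynamic dispatch on every element

arr = np.arange(1000000, dtype=np.int64)
print((arr * arr).sum())         # one vectorized machine-integer pass
```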
1
4
0
In Python, if we let a = 2*4, then "a" will be of integer type. But if we let a = 2**400, then "a" will automatically be of long type, which is Java's BigInteger counterpart. Thus Python can automatically convert an integer to a BigInteger when necessary. My question is: does Python, every time it performs an arithmetic operation on an integer, check whether the operation overflows and, if it does, convert the result to a BigInteger? Wouldn't that be very costly? It basically means Python inserts an overflow check after every integer arithmetic instruction. So how can Python naturally support big integers and still be efficient?
How can python naturally support big integer and be efficient?
1.2
0
0
6,326
33,818,028
2015-11-20T02:32:00.000
0
0
0
0
python,multithreading,sockets
65,875,300
1
false
0
0
Run all the players in threads, and have central threads that monitor game logic and monster activities. Otherwise, if one of your clients disconnects, the whole game crashes.
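A minimal sketch of that layout, with the protocol and game logic left as placeholders:

```python
import socket
import threading

def handle_player(conn, addr):
    try:
        while True:
            data = conn.recv(1024)
            if not data:
                break           # this player disconnected; only this thread ends
            conn.sendall(data)  # placeholder: apply game logic, send a reply
    finally:
        conn.close()

def game_loop():
    # central authority: move monsters, resolve combat, broadcast state
    pass

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5000))
server.listen(5)
threading.Thread(target=game_loop, daemon=True).start()
while True:
    conn, addr = server.accept()
    threading.Thread(target=handle_player, args=(conn, addr),
                     daemon=True).start()
```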
1
0
0
I'm making my first online game in Python; I decided to use Python because of its simplicity with sockets. I need to find a method to coordinate all the existing game objects (players, items, monsters) between the server and the client computers. Method 1: Make every entity (player, monster, NPC) run in its own thread and use its own socket to communicate with its copies on the client computers. For example, if I am a player and attack a monster, the monster will send the order to drain health points to its corresponding copy on each computer and on the server. Method 2: Have one socket handled by the main function; its purpose would be to receive the messages sent by the clients, process them and send an answer. For example, a monster reaches 0 health points so that character should die: a request is sent to the server, the server analyzes it and sends an answer to all the clients. I haven't finished implementing either of these designs because I don't want to write code that ends up unused, which is why I am asking which of them is better. If possible, I would also like you to recommend other methods.
python online game using socket library
0
0
1
99
33,818,770
2015-11-20T04:01:00.000
4
0
0
1
python,windows,lxml
33,818,809
3
false
0
0
Go to the regular command prompt and try pip install lxml. If that doesn't work, remove and reinstall Python. You'll get a list of check boxes during installation; make sure you check pip, and try pip install lxml again afterwards. pip stands for "pip installs packages", and it can install many useful Python packages for you.
1
6
0
Everyone's code online refers to sudo apt-get #whatever#, but Windows doesn't have that feature. I've heard of something called PowerShell, but I opened it and have no idea what it is. I just want to get a simple environment going with lxml so I can scrape websites.
No module named 'lxml' Windows 8.1
0.26052
0
1
10,453
33,820,103
2015-11-20T06:11:00.000
0
0
0
0
python,node.js,sockets,nginx,socket.io
33,837,539
2
true
1
0
How many sockets and threads will be created on server? As many sockets as there are inbound connections. As for threads, it depends on your architecture. Could be one, could be same as sockets, could be in between, could be more. Unanswerable. Can a socket be shared between different connection? No, of course not. The question doesn't make sense. A socket is an endpoint of a connection. Is there any tool to analyze no of socket open? The netstat tool.
1
0
0
I am working on a web socket application. From the front-end there would be a single socket per application, but I am not sure about the back-end. We are using Python and nginx with Flask-SocketIO and the socket.io client library. This architecture will be used to notify the front-end that a change has occurred and it should update its data. Following are my doubts - How many sockets and threads will be created on the server? Can a socket be shared between different connections? Is there any tool to analyze the number of sockets open?
Socket server performance
1.2
0
1
611
33,821,570
2015-11-20T07:53:00.000
3
0
1
0
python,spyder
33,872,398
1
false
0
0
To not use scientific_startup you need to go to Tools > Preferences > Console > PYTHONSTARTUP replacement and select the option called Default PYTHONSTARTUP script Note: If that option is active by default, it means you're using a very old Spyder version. I'd recommend you to update it to its latest version (2.3.7).
1
0
0
I am new to Python, and for my current work, I don't want Spyder to run any predefined startup scripts. By default, Spyder runs a script called scientific_startup.py. How do I configure Spyder to stop running this file on startup?
How do you run Spyder without any startup scripts?
0.53705
0
0
959
33,825,080
2015-11-20T11:02:00.000
0
0
0
0
python,ssh,netcdf
34,001,831
2
true
0
0
Solved: not by Python netCDF modules/functions, but by executing a common netCDF utility on the remote server's command line to extract the subset files, i.e. myssh.exec_command("ncea -v %s %s %s" % (varname, remoteDBpath, remotesubsetpath)), and then bringing the files to the local server, i.e. myftp.get(remotesubsetpath, localsubsetpath).
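Pieced together, the approach looks roughly like this (the host, credentials, paths and variable name are placeholders):

```python
# Run ncea on the remote side to cut out one variable, then fetch the
# resulting subset over SFTP.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("remote.host", username="user", password="secret")

varname = "temperature"
remote_db = "/data/big.nc"
remote_subset = "/tmp/subset.nc"

stdin, stdout, stderr = ssh.exec_command(
    "ncea -v %s %s %s" % (varname, remote_db, remote_subset))
stdout.channel.recv_exit_status()     # wait for ncea to finish

sftp = ssh.open_sftp()
sftp.get(remote_subset, "subset.nc")  # copy to the local machine
sftp.close()
ssh.close()
```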
1
1
0
There is a netCDF file on a remote server. What I want to do is extract data from / crop the file (I need only a specific variable for a specific period) and then move the file into my local directory. With Python, I've used the 'paramiko' module to access the remote server; is there any way to use the 'Dataset' command to open the netCDF file after ssh.connect? Any Python solution is welcome, thanks.
How to subset NetCDF in a remote server and scp the subset file into a local server
1.2
0
1
273
33,829,421
2015-11-20T14:43:00.000
0
0
0
0
python,checkbox,wxpython
33,853,583
1
true
1
1
Have you considered cycling through them on EVT_CHECKBOX? Each box can be tested with IsChecked(); if the test is True you can use SetValue(False) on the others, or whatever suits your requirements. Also, there is nothing to stop you from creating a radio button with the value None.
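A hedged sketch of that handler for one exclusive group (the widget labels are made up):

```python
import wx

class Panel(wx.Panel):
    def __init__(self, parent):
        super(Panel, self).__init__(parent)
        sizer = wx.BoxSizer(wx.VERTICAL)
        self.group = [wx.CheckBox(self, label=l) for l in ("A", "B", "C")]
        for cb in self.group:
            cb.Bind(wx.EVT_CHECKBOX, self.on_check)
            sizer.Add(cb)
        self.SetSizer(sizer)

    def on_check(self, event):
        clicked = event.GetEventObject()
        if clicked.IsChecked():          # unchecking leaves the group empty
            for cb in self.group:
                if cb is not clicked:
                    cb.SetValue(False)   # SetValue does not re-fire the event

app = wx.App()
frame = wx.Frame(None, title="exclusive group")
Panel(frame)
frame.Show()
app.MainLoop()
```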
1
0
0
I am working on a screen written in wxPython, and Python, that has five groups of CheckBoxes. Three of the groups can have between none and all the CheckBoxes selected. However with two of the groups only none or one can be selected. RadioButtons have been considered and disregarded as you cannot select none and their appearance is different making the look and feel of the page inconsistent. Obviously I could write numerous OnCheckBox events that would all be very similar. Is there an easier and more elegant way of achieving this?
Need To Select Only One Checkbox In Group
1.2
0
0
915
33,830,715
2015-11-20T15:46:00.000
0
0
0
1
python,file,google-app-engine
33,830,880
1
false
1
0
Your best bet could be to upload to the Blobstore or Cloud Storage, then use the Task Queue, which has no such time limits, to process the file.
1
0
0
I am trying to create a process that will upload a file to GAE to interpret its contents (most are PDFs, so we would use something like PDF Miner), and then store it in Google Cloud Storage. To my understanding, the problem is that file uploads are limited both to 60 seconds of execution time and to a size limit of, I think, 10 MB. Does anyone have ideas on how to address this issue?
Google App Engine File Processing
0
0
0
62
33,840,926
2015-11-21T07:37:00.000
0
0
1
0
python,python-idle
33,841,418
2
false
0
0
Edit menu, then Go to Line :D, or press Alt+G.
1
2
0
I use IDLE when I'm coding in Python and really enjoy its simplicity. One thing I don't like, though, is when you need to navigate to a certain line and have to scroll around, haphazardly guessing how far you have to go to reach it. So, my question is: is there a way to jump to a certain line number in IDLE for Windows?
Jump to certain line in IDLE?
0
0
0
1,329
33,842,944
2015-11-21T11:46:00.000
1
0
0
0
python,amazon-s3,boto3
59,685,923
24
false
0
0
Just following the thread: can someone conclude which one is the most efficient way to check if an object exists in S3? I think head_object might win, as it just fetches the metadata, which is far lighter than the object itself.
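For reference, the head_object probe looks like this (the bucket and key are placeholders); a 404 error code means the key does not exist:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def key_exists(bucket, key):
    try:
        s3.head_object(Bucket=bucket, Key=key)   # metadata-only request
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "404":
            return False
        raise  # something else went wrong (permissions, networking, ...)

print(key_exists("my-bucket", "some/key.txt"))
```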
1
283
0
I would like to know if a key exists in boto3. I can loop over the bucket contents and check the key to see if it matches, but that seems longer and overkill. The official Boto3 docs explicitly state how to do this, so maybe I am missing the obvious. Can anybody point me to how I can achieve this?
check if a key exists in a bucket in s3 using boto3
0.008333
0
1
266,735
33,846,123
2015-11-21T17:03:00.000
6
0
0
0
python,fft
33,846,706
2
false
0
0
fft(fftshift(x)) rotates the input vector so the the phase of the complex FFT result is relative to the center of the original data window. If the input waveform is not exactly integer periodic in the FFT width, phase relative to the center of the original window of data may make more sense than the phase relative to some averaging between the discontinuous beginning and end. fft(fftshift(x)) also has the property that the imaginary component of a result will always be positive for a positive zero crossing at the center of the window of any antisymmetric waveform component. fftshift(fft(y)) rotates the FFT results so that the DC bin is in the center of the result, halfway between -Fs/2 and Fs/2, which is a common spectrum display format.
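A small NumPy demonstration of the two orderings described above:

```python
import numpy as np

x = np.cos(2 * np.pi * 4 * np.arange(64) / 64)   # 4 cycles in the window

spec = np.fft.fft(x)                        # DC in bin 0
centered = np.fft.fftshift(np.fft.fft(x))   # DC moved to the middle bin
phase_ref = np.fft.fft(np.fft.fftshift(x))  # phase taken relative to the
                                            # center of the data window

print(np.argmax(np.abs(spec)))       # 4: the positive-frequency peak
print(np.argmax(np.abs(centered)))   # 28: same peak, now left of center
```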
1
7
1
I am trying to implement an algorithm in python, but I am not sure when I should use fftshift(fft(fftshift(x))) and when only fft(x) (from numpy). Is there a rule of thumb based on the shape of input data? I am using fftshift instead of ifftshift due to the even number of values in the vector x.
When should I use fftshift(fft(fftshift(x))) and when fft(x)?
1
0
0
13,324
33,846,425
2015-11-21T17:29:00.000
1
1
0
1
python,amazon-web-services,flask,amazon-sqs,worker
33,846,596
1
true
1
0
Set the HTTP Connection setting under Worker Configuration to 1. This should prevent each server from receiving more than 1 message at a time. You might want to look into changing your autoscaling configuration to monitor your SQS queue depth or some other SQS metric instead of worker CPU utilization.
1
1
0
I have deployed a python-flask web app on the worker tier of AWS. I send some data into the associated SQS queue and the daemon forwards the request data in a POST request to my web app. The web app takes anywhere between 5 mins to 6 hours to process the request depending upon the size of posted data. I have also configured the worker app into an auto scaling group to scale based on CPU utilization metrics. When I send 2 messages to the queue in quick succession, both messages start showing up as in-flight. I was hoping that the daemon will forward the first message to the web app and then wait for it to be processed before pulling the second message out. In the meantime, auto scaling will spin up another instance (which it is but since the second message is also in-flight, it is not able to pull that message) and the new instance will pull and process the second message. Is there a way of achieving this?
AWS worker daemon locks multiple messages even before the first message is processed
1.2
0
0
179
33,851,716
2015-11-22T04:50:00.000
1
0
1
0
python,numpy
33,871,559
2
true
0
0
I was corresponding with some people at python.org and they told me to use py -3.5 -m pip install SomePackage. This works.
1
0
1
I've been trying to install numpy and pandas for python 3.5 but it keeps telling me that I have an issue. Could it be because numpy can't run on python 3.5 yet?
Installing numpy and pandas for python 3.5
1.2
0
0
4,445
33,852,035
2015-11-22T05:45:00.000
3
0
0
0
python,django
33,852,782
1
true
1
0
If I understand you correctly, you're looking to have an external program communicate with your server. To do this, the server needs to expose an API (Application Interface) that communicates with the external program. That interface will receive a message and return a response. The request will need to have two things: identifying information for the user - usually a secret key - so that other people can't access the user's data. a query of some sort indicating what kind of information to return. The server will get the request, validate the user's secret key, process the query, and return the result. It's pretty easy to do in Django. Set up a url like /api/cards and a view. Have the view process the request and return the response. Often, these days, these back and forth messages are encoded in JSON - an easy way to encapsulate and send data. Google around with the terms django, api, and json and you'll find a lot of what you need.
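A hedged sketch of both sides; the URL, the secret-key scheme and the Card model are made-up stand-ins for your own app:

```python
# Server side (a Django view); lookup_user_by_key and card_set are
# hypothetical placeholders for your auth scheme and flashcard model.
from django.http import JsonResponse

def lookup_user_by_key(key):
    # placeholder: query your user table for a matching secret key
    return None

def cards_api(request):
    user = lookup_user_by_key(request.GET.get("api_key"))
    if user is None:
        return JsonResponse({"error": "bad key"}, status=403)
    cards = [{"front": c.front, "back": c.back}
             for c in user.card_set.all()]
    return JsonResponse({"cards": cards})

# Client side (the standalone script, a separate program):
import requests

resp = requests.get("https://example.com/api/cards",
                    params={"api_key": "the-user-secret"})
resp.raise_for_status()
for card in resp.json()["cards"]:
    print(card["front"])
```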
1
2
0
I am currently learning how to use django. I have a standalone python script that I want to communicate with my django app. However, I have no clue how to go about doing this. My django app has a login function and a database with usernames and passwords. I want my python script to talk to my app and verify the persons user name and password and also get some account info like the person's name. How do I go about doing this? I am very new to web apps and I am not really sure where to begin. Some Clarifications: My standalone python program is so that the user can access some information about their account. I am not trying to use the script for login functionality. My django app already handles this. I am just trying to find a way to verify that they have said account. For example: If you have a flashcards web app and you want the user to have a program locally on their computer to access their flashcards, they need to login and download the cards from the web app. So wouldn't the standalone program need to communicate with the app to get login information and access to the cards on that account somehow? That's what I am trying to accomplish.
How to get a standalone python script to get data from my django app?
1.2
0
0
832
33,852,048
2015-11-22T05:47:00.000
0
0
1
1
python,pip,virtualenv
33,852,065
1
false
1
0
The whole point of virtualenv is to isolate and compartmentalize dependencies. What you are describing directly contradicts its use case. You could go into each individual project and modify the environment variables, but that's a hackish solution.
1
1
0
I have 2-3 dozen Python projects on my local hard drive, and each one has its own virtualenv. The problem is that adds up to a lot of space, and there's a lot of duplicated files since most of my projects have similar dependencies. Is there a way to configure virtualenv or pip to install packages into a common directory, with each package namespaced by the package version and Python version the same way Wheels are? For example: ~/.cache/pip/common-install/django_celery-3.1.16-py2-none-any/django_celery/ ~/.cache/pip/common-install/django_celery-3.1.17-py2-none-any/django_celery/ Then any virtualenv that needs django-celery can just symlink to the version it needs?
Sharing install files between virtualenv instances
0
0
0
39
33,853,801
2015-11-22T10:31:00.000
-5
0
1
0
python,matplotlib,pycharm
33,853,861
7
false
0
0
On *nix you can use the killall command: killall app closes every window whose process is named app. You can also issue the same command from inside your Python script, using os.system("bashcommand") to run a shell command.
2
30
1
So I have some Python code that plots a few graphs using pyplot. Every time I run the script, new plot windows are created that I have to close manually. How do I close all open pyplot windows at the start of the script, i.e. close windows that were opened during previous executions of the script? In MATLAB this can be done simply with close all.
How do I close all pyplot windows (including ones from previous script executions)?
-1
0
0
61,115
33,853,801
2015-11-22T10:31:00.000
0
0
1
0
python,matplotlib,pycharm
52,167,731
7
false
0
0
As there seems no absolutely trivial solution to do this automatically from the script itself: the possibly simplest way to close all existing figures in pycharm is killing the corresponding processes (as jakevdp suggested in his comment): Menu Run\Stop... (Ctrl-F2). You'll find the windows closed with a delay of few seconds.
2
30
1
So I have some Python code that plots a few graphs using pyplot. Every time I run the script, new plot windows are created that I have to close manually. How do I close all open pyplot windows at the start of the script, i.e. close windows that were opened during previous executions of the script? In MATLAB this can be done simply with close all.
How do I close all pyplot windows (including ones from previous script executions)?
0
0
0
61,115
33,862,058
2015-11-23T00:33:00.000
0
0
1
0
python,nlp,nltk,named-entity-recognition,data-extraction
36,751,405
1
false
0
0
After many hours of checking various APIs, we've decided to go with TextRazor. The quality of the NLP phrase extraction / classification results is superb - TextRazor uses Freebase and DBpedia (among other repositories), and this allows TextRazor to classify / categorize / extract PHRASES such as "computer security" - correctly as one entity (and not, as many other APIs incorrectly do, as one class "computer" AND another class "security"). Programmatic control over which terms TextRazor will use and which ones it will not is, again, very simple. In terms of speed - TextRazor is amazingly fast. If I understand correctly, it uses parallel computing on many (hundreds? thousands?) of Amazon on-demand machines. Cost - we compared it to others and did an in-depth analysis with one of their competitors (a very large three-letter company) - and they are definitely competitive and reasonable. Integration with their API using Python was (relatively) straightforward, except for a minor issue with https when working locally on a Web2Py framework. If you hit an obstacle while using TextRazor on Web2Py locally - feel free to ping me and I'll gladly share our solution. Service / support - almost instantaneous - they usually reply within 12 hours to all inquiries. Disclosure - I have no interests, shares or any other financial benefits related to TextRazor, and we are actually still on their free plan - so we haven't paid them yet for their API services.
1
1
0
I have a custom vocabulary with approx. 1M rows in a SQL table. Each row has a UID and a corresponding phrase that can be many words in length. This table rarely changes. I need to tag, extract, chunk or recognize (NER?) entity phrases in a free-text document against the above-mentioned custom vocabulary, so that for a phrase found in the free text I can pull its UID. It would be nice if partial matches, and also phrase tokens appearing in a different order, were tagged / extracted according to some threshold / algorithm settings. Which NLP tool, preferably Python based, can make use of a custom vocabulary in its tagging, extraction, chunking or NER from free text? Knowing the goal is to extract phrases from free text, which format is best suited for this custom vocabulary to work with the NLP tool? XML, JSON, trees, IOB chunks, other? Is there any tool to help transform the SQL table (the original custom vocabulary) into the format the NLP algorithm requires? Do I need to integrate with other (non-Pythonic) tools such as GATE, KEA, LingPipe, Apache Stanbol or OpenNLP? Is there an API for both tagging / extracting and for creating a custom vocabulary? Any experience with RapidMiner or TextRazor? Can these tools help with the above? Thanks!
Tag, extract phrases from free text using a custom vocabulary (python)?
0
0
0
980
33,862,418
2015-11-23T01:25:00.000
1
0
1
1
python,anaconda
41,717,467
4
false
0
0
The previous answer suggesting upgrading to Anaconda 4.0+ is probably sensible. However if this is not a desirable option, the below will allow use of Anaconda Launcher on previous versions. Anaconda is installed under 'C:\Users\%USERNAME%\Anaconda'. The Anaconda Launcher can be open by clicking on the Start menu and typing Run (or hit Windows+r) and entering C:\Users\%USERNAME%\Anaconda\Scripts\launcher.bat and clicking OK. Alternatively you can navigate to 'C:\Users\%USERNAME%\Anaconda\Scripts' in a command-prompt and enter launcher.bat. You stated in a comment on another answer that you were "actually looking to open spyder". You can do this with Windows+r C:\Users\%USERNAME%\Anaconda\Scripts\spyder.exe or by navigating to 'C:\Users\%USERNAME%\Anaconda\Scripts' in a command-prompt and typing python spyder-script.py. If you're only ever after spyder, a taskbar shortcut with a pretty icon is always nice. To do this, go to 'C:\Users\%USERNAME%\Anaconda\Scripts' in an Explorer window and drag spyder.exe to the taskbar, then if you right-click this and goto Properties then Change Icon... you can add the icon from 'C:\Users\%USERNAME%\Anaconda\Scripts\spyder.ico'. Hope this helps.
1
1
0
I am a complete Python newb here who is just making the switch from MATLAB. I installed Anaconda 2.4 with Python 2.7 on a 64-bit Windows 8.1 system. But I cannot even start the program, as I cannot find any Anaconda launcher either in the Start menu or on the desktop. Any help please?
Where is the Anaconda launcher in Windows 8?
0.049958
0
0
10,129
33,862,420
2015-11-23T01:25:00.000
-6
0
1
0
python,ipython,ipython-notebook
33,862,460
3
false
0
0
You should start your workflow, after restarting and opening a notebook again, by running all cells. In the top menu, before you do anything else, first select "Cell -> Run All".
1
29
0
I define many modules in a file, and add from myFile import * to the first line of my ipython notebook so that I can use it as dependency for other parts in this notebook. Currently my workflow is: modify myFile restart the Ipython kernel rerun all code in Ipython. Does anyone know if there is a way to reload all modules in myFile without need to restart the Ipython kernel? Thanks!
IPython notebook: how to reload all modules in a specific Python file?
-1
0
0
19,517
33,865,344
2015-11-23T07:02:00.000
0
0
0
1
python,ruby-on-rails,linux
33,891,874
1
false
1
0
I have solved my problem by restarting my app instead of restarting Passenger. The restart-app command: passenger-config restart-app [path of my app]
1
0
0
My app uses Rails and Python. In Rails I create a new thread and start a shell command which executes a Python script. This Python script (the parent process) exits quickly, but before it exits it forks a child process, and the child becomes an orphan process after the parent exits. Situation 1: I start the app with Rails (rails s -d). When the Python parent process has exited and the Python child process is still running, I run kill pid (./tmp/pids/server.pid); the child process is fine and is not killed. This is what I want. Situation 2: I start the app with Passenger (passenger start -e production -d). When the Python parent process has exited and the Python child process is still running, I run passenger stop; then the child process gets killed. So I want to know: in situation 2, how can the orphan child process avoid being killed? Has anyone experienced this or knows how to solve it?
passenger stop kills orphan process
0
0
0
361