Dataset schema (column: dtype, value range or string length):
Q_Id: int64, 2.93k to 49.7M
CreationDate: string, lengths 23 to 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string, lengths 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string, lengths 25 to 6.53k
Title: string, lengths 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
37,694,804
2016-06-08T06:29:00.000
0
0
0
0
1
python
0
55,979,236
0
1
0
false
0
0
Answer If you are getting the error while using Python 3 on Mac OS X, try these commands in Terminal: pip3 install Send2Trash or pip3 install send2trash I am on Mac OS X using Python 3, and this worked for me. Explanation I have read that most Mac OS X users have both Python 2 and Python 3 installed. Using pip3 (instead of pip) installs the module on Python 3 (instead of Python 2). (Disclaimer: I don't know if this explanation is correct.)
1
1
0
0
I am a beginner in Python programming and I am working on a project in which I want to send files to the recycle bin using Python. I heard of an "add-on" called Send2Trash, which is what I want, but I don't really know how to install it. I tried the Python website, other websites and the author's instructions, but the parts about python setup.py install and "Distutils" didn't make sense to me. Can someone give clear instructions on how to install this kind of "add-on"? I apologize for asking something like this since I'm still a beginner in Python, but it would really be a big help if someone could solve this problem.
How to install an "add-on" like Send2Trash to be used in Python?
0
0
1
0
0
1,242
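To illustrate the answer above, here is a minimal usage sketch once the package is installed with pip3; the file path is a placeholder.

```python
# Assumes the package was installed with: pip3 install Send2Trash
from send2trash import send2trash

# Moves the file to the trash/recycle bin instead of deleting it permanently.
send2trash("example.txt")  # hypothetical path
```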
37,707,616
2016-06-08T16:07:00.000
1
0
1
0
0
python,find
0
37,708,306
0
2
0
false
0
0
The objective of the find method is to return the index value, which (for all practical purposes) programmers want as positive. In this case, any negative value means that the function could not find the particular element. The reason why True or False is NOT used is just to avoid thousands of TypeErrors
2
1
0
0
I would expect a return of 0. Is -1 simply the equivalent of false? For a moment I thought it was because 0 is a position (index?) in the string, but so is -1. While I know it is enough to simply memorize that this is how the find operation works, I was wondering if there was a deeper explanation or if this is something common that I will continue to encounter as I study.
Why does the find function return -1 in python when searching a string and failing to find a match?
0
0.099668
1
0
0
4,891
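A small illustration of the behaviour discussed in the answer above: str.find returns a non-negative index on success and -1 on failure, which is why a boolean would be ambiguous (index 0 is a perfectly valid hit).

```python
s = "hello"

print(s.find("h"))   # 0  -> found at index 0 (a valid position, so it cannot mean "not found")
print(s.find("z"))   # -1 -> not found

# Because -1 is still a usable index in Python, check the result explicitly:
if s.find("z") == -1:
    print("substring not found")
```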
37,707,616
2016-06-08T16:07:00.000
0
0
1
0
0
python,find
0
37,708,679
0
2
0
false
0
0
@jonrsharpe I believe your comment answered my question best. While -1 is a valid index, the find function always returns a non-negative index when successful and -1 otherwise.
2
1
0
0
I would expect a return of 0. Is -1 simply the equivalent of false? For a moment I thought it was because 0 is a position (index?) in the string, but so is -1. While I know it is enough to simply memorize that this is how the find operation works, I was wondering if there was a deeper explanation or if this is something common that I will continue to encounter as I study.
Why does the find function return -1 in python when searching a string and failing to find a match?
0
0
1
0
0
4,891
37,720,228
2016-06-09T08:07:00.000
5
0
1
0
0
python-3.x,floating-point,rounding
0
37,720,389
0
1
0
false
0
0
Because 1.4 999 999 999 999 999 when parsed is exactly 1.5, the difference between them is too small to represent at that magnitude. But 1.4 99 999 999 999 999 is low enough to parse to "less than 1.5", actually 1.4999999999999988897769753748434595763683319091796875, which is clearly less than 1.5
1
3
0
0
round(1.4 999 999 999 999 999) (without the spaces) gets rounded to 2, but round(1.4 99 999 999 999 999) (without the spaces) gets rounded to 1. I suppose this has to do with imprecise floating point representations, but I fail to understand how it comes about that the first representation is interpreted as closer to 2 than to 1.
Rounding in Python
0
0.761594
1
0
0
312
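The parsing behaviour described in the answer above can be checked directly with the decimal module, which shows the exact value each float literal actually holds.

```python
from decimal import Decimal

# 1.4 followed by 15 nines: the literal is rounded to the nearest representable double,
# which is exactly 1.5.
print(Decimal(1.4999999999999999))   # 1.5
print(round(1.4999999999999999))     # 2

# 1.4 followed by 14 nines: the nearest representable double is strictly below 1.5.
print(Decimal(1.499999999999999))    # 1.4999999999999988897769753748434595763683319091796875
print(round(1.499999999999999))      # 1
```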
37,721,263
2016-06-09T08:56:00.000
0
0
1
0
0
python,ipython,spyder
0
61,645,817
0
2
0
false
0
0
In Spyder 4, select the lines and then press Tab or Ctrl+] to indent, and Shift+Tab or Ctrl+[ to un-indent.
2
16
0
0
Is there any shortcut key in Spyder python IDE to indent the code block? For example, Like ctr+[ in Matlab, I want to indent the code block together.
how to indent the code block in Python IDE: Spyder?
0
0
1
0
0
69,522
37,721,263
2016-06-09T08:56:00.000
40
0
1
0
0
python,ipython,spyder
0
37,721,501
0
2
0
true
0
0
Select your code and press Tab to indent and Shift+Tab to un-indent, or go to Edit -> Indent/Unindent. The Edit menu also contains some other tools for editing your code.
2
16
0
0
Is there any shortcut key in Spyder python IDE to indent the code block? For example, Like ctr+[ in Matlab, I want to indent the code block together.
how to indent the code block in Python IDE: Spyder?
0
1.2
1
0
0
69,522
37,721,461
2016-06-09T09:06:00.000
0
0
1
0
1
visual-studio,python-2.7,python-3.x,ptvs
1
37,955,825
0
4
0
false
0
0
I'm also having similar issues. My first installation path: Visual Studio 2015 Pro with Update 1, with PTVS installed later through the VS2015 setup; everything worked fine. The issues started when I installed a DEV version of PTVS from their GitHub page: my pyproj stopped loading, saying a migration was needed. I also noticed that after the new PTVS installation I had installed VS2015 Update 2. Not being able to reload my project after trying to debug the issue, I decided to uninstall PTVS and reinstall it through the VS2015 setup. Now the issue was different: while trying to load my previous pyproj, or even when creating new Python projects from multiple templates, I was getting this error: "There is a missing project subtype. Subtype: '{1b580a1a-fdb3-4b32-83e1-6407eb2722e6}' is unsupported by this installation." Not finding anything about this, I uninstalled Visual Studio 2015 (with Update 2), reinstalled Visual Studio 2015 with Update 1 (without selecting PTVS, which requires VS 2015 Update 2; I suspected it had something to do with it), and installed the latest stable PTVS from their GitHub. Now Visual Studio crashes while trying to load the aforementioned pyproj, with the same error as the OP: SetSite failed for package [Python Tools Package][Expected 1 export(s) with contract name "Microsoft.PythonTools.Interpreter.IInterpreterOptionsService" but found 0 after applying applicable constraints.] Still trying to fix it at the moment; maybe these steps will help in debugging the issue. Update / Fixed: After installing VS 2015 with Update 1 and PTVS 2.2 for VS 2015, I was still having issues opening the pyproj, causing VS to just crash (unfortunately nothing in ActivityLog.xml). I tried repairing Visual Studio through its setup, still the same issue. Finally, I decided to re-update Visual Studio 2015 to Update 2, which also updated PTVS to the March release, all through the VS setup utility. And now my pyproj opens correctly. Probably some versions mismatched during the initial steps when I installed a DEV version of PTVS. I'm not sure which step actually corrected my issue, but it did. Hope this helps other people with similar issues.
3
1
0
0
I have installed Win10, Visual Studio 2015, Python 2.7, Python 3.5 and PTVS 2.2.3. Unfortunately PTVS does not work at all. I cannot load any Python projects that were loading previously in Visual Studio. It worked before I installed Python 3.5. I tried to uninstall Python 2.7 and got an error saying that the uninstall didn't succeed. After several tries, the problem appears to be around pip, which is somehow blocking both install and uninstall of Python 2.7. When trying to open Python Tools from the Tools menu, nothing happens: no window opens and no error message is displayed. The Python Environments window does not open even with the shortcut. In Tools > Options > Python Tools, the only text shown is: "An error occurred loading this property page". When I try to load/reload the Python project, the message is: "error : Expected 1 export(s) with contract name "Microsoft.PythonTools.Interpreter.IInterpreterOptionsService" but found 0 after applying applicable constraints." I already posted this 11 days ago, but no one has answered. To solve this, I would like to know how to make the Python Environments window appear in Visual Studio. Thanks for any help.
Visual Studio Python Environments window does not display
0
0
1
0
0
4,337
37,721,461
2016-06-09T09:06:00.000
0
0
1
0
1
visual-studio,python-2.7,python-3.x,ptvs
1
38,182,070
0
4
0
false
0
0
Thanks for your posts. My problem was fixed after I installed VS 2015 update 3 which included a new release of PTVS (June 2.2.40623).
3
1
0
0
I have installed Win10, Visual Studio 2015, Python 2.7, Python 3.5 and PTVS 2.2.3. Unfortunately PTVS does not work at all. I cannot load any Python projects that were loading previously in Visual Studio. It worked before I installed Python 3.5. I tried to uninstall Python 2.7 and got an error saying that the uninstall didn't succeed. After several tries, the problem appears to be around pip, which is somehow blocking both install and uninstall of Python 2.7. When trying to open Python Tools from the Tools menu, nothing happens: no window opens and no error message is displayed. The Python Environments window does not open even with the shortcut. In Tools > Options > Python Tools, the only text shown is: "An error occurred loading this property page". When I try to load/reload the Python project, the message is: "error : Expected 1 export(s) with contract name "Microsoft.PythonTools.Interpreter.IInterpreterOptionsService" but found 0 after applying applicable constraints." I already posted this 11 days ago, but no one has answered. To solve this, I would like to know how to make the Python Environments window appear in Visual Studio. Thanks for any help.
Visual Studio Python Environments window does not display
0
0
1
0
0
4,337
37,721,461
2016-06-09T09:06:00.000
0
0
1
0
1
visual-studio,python-2.7,python-3.x,ptvs
1
37,890,917
0
4
0
false
0
0
You'll need to open the ActivityLog.xml (%APPDATA%\Microsoft\VisualStudio\14.0\ActivityLog.xml) and see if there's any exceptions there related to PTVS. It sounds like you have a pretty messed up configuration at this point. You could try uninstalling PTVS and re-installing it, but my guess is your messed up Python installs are somehow throwing PTVS off and causing it to crash somewhere.
3
1
0
0
I have installed Win10, Visual Studio 2015, Python 2.7, Python 3.5 and PTVS 2.2.3. Unfortunately PTVS does not work at all. I cannot load any Python projects that were loading previously in Visual Studio. It worked before I installed Python 3.5. I tried to uninstall Python 2.7 and got an error saying that the uninstall didn't succeed. After several tries, the problem appears to be around pip, which is somehow blocking both install and uninstall of Python 2.7. When trying to open Python Tools from the Tools menu, nothing happens: no window opens and no error message is displayed. The Python Environments window does not open even with the shortcut. In Tools > Options > Python Tools, the only text shown is: "An error occurred loading this property page". When I try to load/reload the Python project, the message is: "error : Expected 1 export(s) with contract name "Microsoft.PythonTools.Interpreter.IInterpreterOptionsService" but found 0 after applying applicable constraints." I already posted this 11 days ago, but no one has answered. To solve this, I would like to know how to make the Python Environments window appear in Visual Studio. Thanks for any help.
Visual Studio Python Environments window does not display
0
0
1
0
0
4,337
37,732,918
2016-06-09T17:48:00.000
-2
1
1
0
0
python,raspberry-pi,pycharm
0
40,196,547
0
2
0
false
0
0
You can run Pycharm directly on your Raspberry Pi: - Using your Raspberry Pi, download the installation file directly from the Pycharm website (JetBrains). It will be a tarball, i.e., a file ending in ".tar.gz". - Extract the file to a folder of your choice. - Browsing through the extracted files and folders, you will find a folder named "bin". Inside "bin" you will find a file named Pycharm.sh - Using your terminal window, go to the "bin" folder and launch the Pycharm application by typing: sudo ./Pycharm.sh After several seconds (it's a little slow to load on my RPi3), Pycharm will load. Have fun!
1
3
0
0
I've been using IDLE with my raspberry for a while, it's nice at the beginning, but Pycharm provides lots more of features and I'm used to them since I've been also using Android Studio. The problem is I couldn't figure out how to install the RPi module to control the pins of my Raspberry. Does anyone know how to do this? In case it matters, it's python3 on a raspberry 2B.
Install RPi module on Pycharm
0
-0.197375
1
0
0
18,522
37,750,405
2016-06-10T14:10:00.000
1
0
1
0
1
python-2.7
1
40,962,470
0
1
0
false
0
0
I found an alternative: pip install --user --upgrade grammar-check
1
1
0
0
I am having a problem installing the Python package language-check into my Python 2.7 environment. I tried the pip install language-check --upgrade command but to no avail. It gave me an error saying "Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/fn/g0nd0gb54d5c__5fjhb7rp0w0000gn/T/pip-build-zMuFbc/language-check/". I have trouble understanding what it is saying; if you know what it means, please give me a hint on how to fix it. I also tried downloading the language-check tar.gz to my Mac, gunzipped it, ran the tar -xwf command on it, went to the language-check directory and ran setup install, but that did not work either. It gave me an error saying "error in language-check setup command: package_data must be a dictionary mapping package names to lists of wildcard patterns". So if you know how to fix the problem, please let me know. Thank you so much in advance, Tom
error on installing language-check packages to python 2.7 environment
0
0.197375
1
0
0
387
37,753,680
2016-06-10T17:05:00.000
0
0
1
0
0
python,dictionary
0
37,771,544
0
3
0
true
0
0
TheLazyScripter gave a nice workaround solution for the problem, but the runtime characteristics are not good, because for each reconstructed word you have to loop through the whole dict. I would say you chose the wrong dict design: to be efficient, lookup should be done in one step, so you should have the numbers as keys and the words as values. Since your problem looks like a great computer science homework (I'll consider it for my students ;-) ), I'll just give you a sketch of the solution: use word in my_dict.values() (adapt for py2/py3) to test whether the word is already in the dictionary; if not, insert the next available index as key and the word as value, and you are done. For reconstructing the sentence, just loop through your list of numbers, use each number as a key into your dict and print(my_dict[key]). Prepare exception handling for the case where a key is not in the dict (which should not happen if you control the whole process, but it's good practice). This solution is much more efficient than your approach (and easier to implement).
1
0
0
0
I am creating a code where I need to take a string of words, convert it into numbers where hi bye hi hello would turn into 0 1 0 2. I have used dictionary's to do this and this is why I am having trouble on the next part. I then need to compress this into a text file, to then decompress and reconstruct it into a string again. This is the bit I am stumped on. The way I would like to do it is by compressing the indexes of the numbers, so the 0 1 0 2 bit into the text file with the dictionary contents, so in the text file it would have 0 1 0 2 and {hi:0, bye:1, hello:3}. Now what I would like to do to decompress or read this into the python file, to use the indexes(this is how I will refer to the 0 1 0 2 from now on) to then take each word out of the dictionary and reconstruct the sentence, so if a 0 came up, it would look into the dictionary and then find what has a 0 definition, then pull that out to put into the string, so it would find hi and take that. I hope that this is understandable and that at least one person knows how to do it, because I am sure it is possible, however I have been unable to find anything here or on the internet mentioning this subject.
How to take a word from a dictionary by its definition
1
1.2
1
0
0
103
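A small sketch of the design suggested in the answer above (numbers as keys, words as values); the helper names are made up, and a second reverse dict is kept so encoding does not have to scan .values() for every word.

```python
def encode(sentence):
    num_to_word = {}   # number -> word, as suggested in the answer
    word_to_num = {}   # reverse map, so lookups stay one step
    indexes = []
    for word in sentence.split():
        if word not in word_to_num:
            idx = len(num_to_word)
            num_to_word[idx] = word
            word_to_num[word] = idx
        indexes.append(word_to_num[word])
    return indexes, num_to_word

def decode(indexes, num_to_word):
    # A KeyError here would mean the index list and dictionary are out of sync.
    return " ".join(num_to_word[i] for i in indexes)

indexes, mapping = encode("hi bye hi hello")
print(indexes)                   # [0, 1, 0, 2]
print(mapping)                   # {0: 'hi', 1: 'bye', 2: 'hello'}
print(decode(indexes, mapping))  # hi bye hi hello
```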
37,757,233
2016-06-10T21:12:00.000
0
0
0
0
1
python,scrapy
0
63,207,972
0
9
0
false
1
0
Make sure you activate your virtual environment, that is, run "Scripts\activate.bat".
5
6
0
0
I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages When I try to run scrapy from the command, with scrapy crawl ... for example, I get. The program 'scrapy' is currently not installed. What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it?
Scrapy installed, but won't run from the command line
0
0
1
0
0
24,069
37,757,233
2016-06-10T21:12:00.000
7
0
0
0
1
python,scrapy
0
40,753,182
0
9
0
false
1
0
I tried the following: sudo pip install scrapy, however I was promptly advised by Ubuntu 16.04 that it was already installed. I had to first use sudo pip uninstall scrapy, then sudo pip install scrapy for it to install successfully. Now you should be able to run scrapy.
5
6
0
0
I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages When I try to run scrapy from the command, with scrapy crawl ... for example, I get. The program 'scrapy' is currently not installed. What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it?
Scrapy installed, but won't run from the command line
0
1
1
0
0
24,069
37,757,233
2016-06-10T21:12:00.000
3
0
0
0
1
python,scrapy
0
60,684,096
0
9
0
false
1
0
I faced the same problem and solved it using the following method. I think scrapy was not usable by the current user. Uninstall scrapy: sudo pip uninstall scrapy. Install scrapy again using -H: sudo -H pip install scrapy. It should then work properly.
5
6
0
0
I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages When I try to run scrapy from the command, with scrapy crawl ... for example, I get. The program 'scrapy' is currently not installed. What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it?
Scrapy installed, but won't run from the command line
0
0.066568
1
0
0
24,069
37,757,233
2016-06-10T21:12:00.000
16
0
0
0
1
python,scrapy
0
55,285,170
0
9
0
false
1
0
I had the same error. Running scrapy in a virtual environment solved it. Create a virtual env : python3 -m venv env Activate your env : source env/bin/activate Install Scrapy with pip : pip install scrapy Start your crawler : scrapy crawl your_project_name_here For example my project name was kitten, I just did the following in step 4 scrapy crawl kitten NOTE: I did this on Mac OS running Python 3+
5
6
0
0
I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages When I try to run scrapy from the command, with scrapy crawl ... for example, I get. The program 'scrapy' is currently not installed. What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it?
Scrapy installed, but won't run from the command line
0
1
1
0
0
24,069
37,757,233
2016-06-10T21:12:00.000
0
0
0
0
1
python,scrapy
0
37,914,201
0
9
0
false
1
0
I had the same issue. sudo pip install scrapy fixed my problem, although I don't know why I had to use sudo.
5
6
0
0
I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages When I try to run scrapy from the command, with scrapy crawl ... for example, I get. The program 'scrapy' is currently not installed. What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it?
Scrapy installed, but won't run from the command line
0
0
1
0
0
24,069
37,767,790
2016-06-11T19:33:00.000
1
0
0
0
0
python,scala,apache-spark,machine-learning,scikit-learn
0
37,768,933
0
1
0
false
0
0
According to the discussion at https://issues.apache.org/jira/browse/SPARK-2336, MLlib (the Machine Learning Library for Apache Spark) does not have an implementation of KNN. You could try https://github.com/saurfang/spark-knn.
1
1
1
0
I have been working on the machine learning KNN (K Nearest Neighbors) algorithm with Python and Python's Scikit-learn machine learning API. I have created sample code with a toy dataset simply using Python and Scikit-learn, and my KNN is working fine. But as we know, the Scikit-learn API is built to work on a single machine, and hence once I replace my toy data with millions of records the performance will drop. I have searched for many options, help and code examples that would distribute my machine learning processing in parallel using Spark with the Scikit-learn API, but I have not found any proper solution or examples. Can you please let me know how I can achieve better performance with Apache Spark and Scikit-learn's K Nearest Neighbors? Thanks in advance!!
Scikit-learn KNN(K Nearest Neighbors ) parallelize using Apache Spark
0
0.197375
1
0
0
5,059
37,785,380
2016-06-13T08:50:00.000
0
0
0
0
0
python,c++,matlab,nao-robot
0
37,796,440
0
1
0
false
0
0
Using NAO C++ SDK, it may be possible to make a MEX-FILE in Matlab that "listens" to NAO. Then NAO just has to raise an event in its memory (ALMemory) that Matlab would catch to start running the script.
1
0
1
0
I have a Wizard of Oz experiment using Choregraphe to make a NAO perform certain tasks running on machine A. The participant interacting with the NAO also interacts with a machine B. When I start the experiment (in Choregraphe on machine A) I want a certain MATLAB script to start on machine B. I.e. Choregraphe will initiate the MATLAB script. Do you have any suggestions of how to do this? My programming are limited to that of MATLAB and R, while Choregraphe is well integrated with Python and C++ hence my question here on Stack. Kind Regards, KD
Sync Choregraphe and Matlab
0
0
1
0
0
107
37,798,552
2016-06-13T20:27:00.000
0
0
1
0
0
python,python-3.5,xlwt
1
37,798,884
0
1
0
false
0
0
Sounds like you are typing pip install ... into a Python prompt and not a shell command prompt. This is not a Python statement but a shell command that has to be executed at the command-line prompt.
1
0
0
0
I'm trying to download the package xlwt to my Python 3.5.1 but typing 'pip install xlwt' isn't working and gives me an error at the word install that says invalid syntax, though all the websites I've checked told me to do exactly this. I mostly have a theoretical knowledge of Python and can code pretty decently, but don't really know how to set the technology up in order to do the actual coding. Any help would be appreciated!!!
Downloading xlwt to Python 3.5.1
0
0
1
0
0
1,235
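As a complement to the answer above: pip install xlwt belongs at the operating-system command prompt, not the >>> Python prompt. If you really do want to drive pip from inside a Python script, one common pattern (shown here as a sketch) is to invoke it as a module of the current interpreter:

```python
import subprocess
import sys

# Runs "python -m pip install xlwt" using the same interpreter that executes this script.
subprocess.check_call([sys.executable, "-m", "pip", "install", "xlwt"])
```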
37,799,065
2016-06-13T21:03:00.000
0
0
1
1
0
python,avro,google-cloud-dataflow
0
37,866,731
0
2
0
true
0
0
You are correct: the Python SDK does not yet support this, but it will soon.
1
1
0
0
I am looking to ingest and write Avro files in GCS with the Python SDK. Is this currently possible with Avro leveraging the Python SDK? If so how would I do this? I see TODO comments in the source regarding this so I am not too optimistic.
Dataflow Python SDK Avro Source/Sync
1
1.2
1
0
0
761
37,830,836
2016-06-15T09:03:00.000
3
0
0
0
0
python,command,exit,trace32
0
37,845,692
0
1
0
true
0
0
To close the PowerView main window, use the TRACE32 command QUIT.
1
2
0
0
I load and execute a cmm script inside the TRACE32 application using bmm commands. When the execution is over, how do I close the entire T32 application window itself (similar to File -> Exit) using a cmm command?
how to close trace 32 application itself through cmm command?
0
1.2
1
0
0
993
37,836,077
2016-06-15T12:53:00.000
1
0
0
0
0
python,openerp,fileopendialog
0
37,839,268
0
2
0
false
1
0
You can define binary fields in Odoo, like other fields. Look into the ir.attachment model definition and its view definitions to get a good hint on how to do this for such fields.
1
1
0
0
Does anybody know how to open a file dialog in Odoo? I've added a button on a custom view; now I would like to browse for a file on THE CLIENT when this button is clicked. Any ideas? Thanks!
Odoo python fileopendialog
0
0.099668
1
0
0
108
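As a rough illustration of the binary-field suggestion in the answer above: the model and field names below are made up, and the Odoo 9 import path (openerp) is assumed. Rendered in a form view, a Binary field gives the client a file-upload/browse widget, similar to ir.attachment.

```python
# Hypothetical Odoo model with a binary field and an accompanying filename field.
from openerp import models, fields

class MyDocument(models.Model):
    _name = "my.document"

    name = fields.Char(string="Name")
    file_data = fields.Binary(string="File")
    file_name = fields.Char(string="File Name")  # commonly stored alongside the binary
```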
37,845,389
2016-06-15T20:42:00.000
0
0
0
0
0
python,django,python-3.x
0
37,845,897
0
2
0
false
1
0
I assume that your models look something like this class Contest(Model): ... something ... class Picture(Model): user = ForeignKey(User) contest = ForeignKey(Contest) ... something ... So, Picture.objects.filter(user=user) gives you pictures by a particular user (don't have to specify _id, filters operate on model objects just fine). And to get contests with pictures by a particular user you can do pics_by_user = Picture.objects.filter(user=user) contests_by_user = Contest.objects.filter(id__in=pics_by_user.values_list('contest', flat=True)) There might be an easier way though
1
1
0
0
I have a queryset from Picture.objects.filter(user_ID=user). The Picture model has "contest_ID" as a foreign key. I'm looking to get a queryset of Contests which have Pictures, so from the queryset I already have, how do I pull a list of Contest objects?
Get a queryset from a queryset
0
0
1
0
0
71
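The code fragments in the answer above lost their formatting in this dump; here is the same idea laid out as a sketch. The model fields and some_user are assumptions based on the question, not a definitive implementation.

```python
# Assumed models, mirroring the ones sketched in the answer.
from django.contrib.auth.models import User
from django.db import models

class Contest(models.Model):
    name = models.CharField(max_length=100)

class Picture(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    contest = models.ForeignKey(Contest, on_delete=models.CASCADE)

some_user = User.objects.first()  # placeholder: any User instance

# Pictures by a particular user (passing the model instance is enough, no _id needed):
pics_by_user = Picture.objects.filter(user=some_user)

# Contests that have at least one picture by that user:
contests_by_user = Contest.objects.filter(
    id__in=pics_by_user.values_list("contest", flat=True)
)
```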
37,850,154
2016-06-16T04:55:00.000
1
0
0
0
0
python-2.7,openerp,odoo-9
0
37,852,231
0
2
0
false
1
0
User Signup is a standard feature provided by Odoo, and it seems that you already found it. The database selector shows up because you have several PostgreSQL databases. The easiest way is to set a filter that limits it to the one you want: start the server with the option --dbfilter=^MYDB$, where MYDB is the database name. User data is stored both in res.user and res.partner: the user-specific data, such as login and password, is stored in res.user. Other data, such as the name, is stored in a related res.partner record.
2
0
0
0
How can I create a signup page in odoo website. The auth_signup module seems to do the job (according to their description). I don't know how to utilize it. In the signup page there shouldn't be database selector Where should I store the user data(including password); res.users or res.partner
Odoo website, Creating a signup page for external users
0
0.099668
1
0
0
1,173
37,850,154
2016-06-16T04:55:00.000
2
0
0
0
0
python-2.7,openerp,odoo-9
0
37,852,264
0
2
0
true
1
0
You can turn off db listing with some params in the odoo.cfg config: db_name = mydb, list_db = False, dbfilter = mydb. auth_signup takes care of the registration, you don't need to do anything. A res.user will be created as well as a partner related to it. The pwd is stored in the user.
2
0
0
0
How can I create a signup page in odoo website. The auth_signup module seems to do the job (according to their description). I don't know how to utilize it. In the signup page there shouldn't be database selector Where should I store the user data(including password); res.users or res.partner
Odoo website, Creating a signup page for external users
0
1.2
1
0
0
1,173
37,855,059
2016-06-16T09:24:00.000
4
0
0
0
0
python,numpy,numpy-ufunc
0
37,855,371
0
2
0
true
0
0
Because max is associative, but argmax is not: max(a, max(b, c)) == max(max(a, b), c), whereas argmax(a, argmax(b, c)) != argmax(argmax(a, b), c).
1
3
1
0
These two look like they should be very much equivalent and therefore what works for one should work for the other? So why does accumulate only work for maximum but not argmax? EDIT: A natural follow-up question is then how does one go about creating an efficient argmax accumulate in the most pythonic/numpy-esque way?
Why does accumulate work for numpy.maximum but not numpy.argmax
0
1.2
1
0
0
1,719
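Since the question's follow-up asks how to build an efficient argmax accumulate, here is one vectorised sketch built on np.maximum.accumulate; note it picks the latest index on ties, whereas np.argmax would pick the first.

```python
import numpy as np

a = np.array([3, 1, 4, 1, 5, 9, 2, 6])

# np.maximum.accumulate gives the running maximum directly.
running_max = np.maximum.accumulate(a)

# The running argmax is the last position at which the running maximum was (re)set.
is_new_max = a == running_max  # True wherever the element equals the running maximum
running_argmax = np.maximum.accumulate(np.where(is_new_max, np.arange(len(a)), 0))

print(running_max)     # [3 3 4 4 5 9 9 9]
print(running_argmax)  # [0 0 2 2 4 5 5 5]
```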
37,877,325
2016-06-17T09:00:00.000
5
0
1
0
0
python,ubuntu,vagrant,pycharm,virtualenv
0
39,578,272
0
1
0
false
0
0
I also had this issue setting up a remote interpreter with Vagrant. It appears that for a remote interpreter you need to mark Python source root folders as "Source Folders" under Project Structure in Preferences. They should then show up as blue in your Project browser. You don't need to mark all the sub folders, just the root folder for each python project/package. Without doing this it seems like Pycharm can't find the source files and takes you to the readonly cached code derived from the remote interpreter environment.
1
6
0
0
I set up the Project Interpreter pointing to a virtualenv on a vagrant virtual machine (Settings / Project Interpreter / Add Remote), but when I press Ctrl+B or use 'go to definition' I always end up in a location like this: /home/<my_user_name>/.PyCharm50/system/remote_sources/1174787026/154306353/django/... How do I avoid such PyCharm behaviour? How do I force it to use the virtualenv's code when going to a declaration? Using PyCharm 5.0 on Ubuntu 14.04. UPDATE: with PyCharm 2017.2.* it now works well!
pycharm not using virtualenv from vagrant box when 'go to declaration' instead uses some outdated stuff from its remote_sources
0
0.761594
1
0
0
458
37,879,558
2016-06-17T10:42:00.000
1
0
0
0
0
python,numpy,scipy
0
37,879,801
0
1
0
true
0
0
If your grid is regular: you have to calculate dx = x[i+1]-x[i], dy = y[i+1]-y[i], dz = z[i+1]-z[i]. Then calculate new arrays of points: x1[i] = x[i]-dx/2, y1[i] = y[i]-dy/2, z1[i] = z[i]-dz/2. If the mesh is irregular, you have to do the same, but dx, dy, dz have to be defined for every grid cell.
1
1
1
0
I have a question, I have been given x,y,z coordinate values at cell centers of a grid. I would like to create structured grid using these cell center coordinates. Any ideas how to do this?
Creating a 3D grid using X,Y,Z coordinates at cell centers
0
1.2
1
0
0
425
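A minimal numpy sketch of the regular-grid case from the answer above, done for one axis (the centre values are made up; the same half-spacing shift applies to y and z).

```python
import numpy as np

# Hypothetical cell-centre coordinates on a regular grid, spacing dx = 1.0.
xc = np.linspace(0.5, 9.5, 10)

dx = xc[1] - xc[0]
# Shift centres by dx/2 to get the lower cell edges, then append the final upper edge.
x_edges = np.concatenate([xc - dx / 2, [xc[-1] + dx / 2]])

print(x_edges)  # [ 0.  1.  2.  3.  4.  5.  6.  7.  8.  9. 10.]
```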
37,897,064
2016-06-18T12:39:00.000
0
1
0
0
0
python,search,twitter,tweepy
0
37,902,045
0
2
0
true
0
0
I've created a workaround that kind of works. The best way to do it is to search for mentions of a user, then filter those mentions by in_reply_to_id .
1
2
0
0
So I've been doing a lot of work with Tweepy and Twitter data mining, and one of the things I want to do is to be able to get all Tweets that are replies to a particular Tweet. I've seen the Search api, but I'm not sure how to use it nor how to search specifically for Tweets in reply to a specific Tweet. Anyone have any ideas? Thanks all.
Tweepy Get Tweets in reply to a particular tweet
0
1.2
1
0
1
3,634
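A rough sketch of the workaround described in the answer above, written against the tweepy/Twitter v1.1 API of that era; the credentials, user name, tweet id and result count are all placeholders, and newer tweepy versions rename api.search.

```python
import tweepy

# Placeholder credentials - substitute your own app's keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

target_user = "some_user"        # author of the tweet whose replies we want
target_tweet_id = 123456789      # id of that tweet (hypothetical)

# Search tweets directed at the user, then keep only those replying to the specific tweet.
replies = [
    status
    for status in tweepy.Cursor(api.search, q="to:" + target_user).items(200)
    if status.in_reply_to_status_id == target_tweet_id
]
```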
37,918,215
2016-06-20T08:51:00.000
4
0
1
0
0
python,windows,numpy,cmd,anaconda
0
37,918,313
0
3
0
false
0
0
I think you are referring to the command-line use of python? If you have admin priviliges on your machine you can add python to your environment variables, making it available in the console anywhere. (Sorry for different spellings, I am not on an english machine) Press Shift+Pause ("System") Click "Advanced System Options" Click "Environment variables" In the lower field with "System variables" there is a variable called PATH. Append the complete path to your python.exe without the file to that by adding a ; behind the last path in the variable and then adding your path. Do not add any spaces! Example: C:\examplepath\;C:\Python27\
1
9
0
0
I have just installed Anaconda on my computer because I need to use Numpy. Well, when I use python I for some reason have to be in the same folder as python.exe and, of course, now that I want to use Anaconda I have to be in the Anaconda3\Scripts folder where python.exe isn't. This is a nightmare, how can I use anaconda with python on a windows computer? Why does it have to be so complicated?
Using python with Anaconda in Windows
0
0.26052
1
0
0
35,921
37,925,504
2016-06-20T14:48:00.000
1
0
0
0
1
python,heroku,scrapy,digital-ocean,dokku
0
37,969,257
0
1
0
false
1
0
I 'fixed' this issue by not using a Digital Ocean server. The website that I am trying to crawl, which is craigslist.org, just did not respond well to a DO server. It takes a long time to respond to a request. Other websites like Google or Amazon work just fine with DO. My scraper works just fine on craigslist when using a VPS from another provider.
1
0
0
0
Not sure how to describe this but I am running a Scrapy spider on a Digital Ocean server ($5 server), the Scrapy project is deployed as a Dokku app. However, it runs very slowly compared to the speed on my local computer and on a Heroku free tier dyno. On Dokku it crawls at a speed of 30 pages per minute while locally and on Heroku the speed is 200+ pages per minutes. I do not know how to debug, analyze or where to start in order to fix the problem. Any help, clues or tips on how to solve this?
Running Scrapy on Dokku using a Digital Ocean server
0
0.197375
1
0
0
430
37,937,313
2016-06-21T06:47:00.000
1
0
0
0
0
python-3.x,inverted-index
0
37,945,993
0
1
1
true
0
0
Python does allow you to construct classes that implement a dictionary-like interface and that could maintain any inverted indexes you would wish, but your question is too broad. The "extradict" Python package (pip install extradict), for example, has a "BijectiveDict" that just exposes any values as keys and vice versa, and keeps everything synchronized, but it is a simple symmetric key-value store. If you want complex, nested documents and persistence, you should use an existing NoSQL database like MongoDB, Codernity, ElasticSearch or ZODB, rather than try to implement one yourself.
1
0
0
0
how do I update an inverted index efficiently if documents are inserted, deleted or updated ? also should i use index file to store index or should I store index in a database table ?
how to make inverted index?
0
1.2
1
0
0
551
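A bare-bones in-memory sketch of the dictionary-style inverted index the answer alludes to, with insert/delete/update handled by re-indexing the affected document. Persistence (index file or database), as the answer notes, is better left to an existing store.

```python
from collections import defaultdict

index = defaultdict(set)  # token -> set of document ids containing it
documents = {}            # doc_id -> text, kept so entries can be updated/deleted

def add_document(doc_id, text):
    documents[doc_id] = text
    for token in text.lower().split():
        index[token].add(doc_id)

def remove_document(doc_id):
    text = documents.pop(doc_id, "")
    for token in text.lower().split():
        index[token].discard(doc_id)
        if not index[token]:
            del index[token]

def update_document(doc_id, new_text):
    remove_document(doc_id)
    add_document(doc_id, new_text)

add_document(1, "the quick brown fox")
add_document(2, "the lazy dog")
print(index["the"])  # {1, 2}
```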
37,954,211
2016-06-21T20:42:00.000
0
0
1
0
0
python,linux,python-3.x
0
37,954,337
0
1
0
false
0
0
Indentation can be configured by navigating to Utilities > Global Options > Editing > Tab width. You said you are coding in python so I strongly recommend that you leave the indentation as it is as python only accepts indentation with standard tabs (4 spaces long). Obs: I see no reason why you would use Jedit, you'd better use a decent editor like Atom or Sublime Text.
1
0
0
0
Ive been using Jedit for programming for a few days and I wonder how I can change the tab indent in Jedit. Or can I change the Indent for whole Linux? My second question: I use Python and I would like to have an Indent in the next line after colons. where are the settings for this?
Tab indent in Linux and Jedit
0
0
1
0
0
137
37,955,249
2016-06-21T21:55:00.000
2
0
1
0
0
python
0
37,955,310
0
1
0
true
0
0
Use a for-loop if you can. It's simple, and it uses iterators behind the scenes. One of the great things about python's iterator system is that you don't need to think about them most of the time. It is quite rare that you'll need to explicitly call next() on something. This is kind of general, but so is your question. If you have a particular use case in mind, edit your question to add it and you'll get more detailed responses.
1
0
0
0
I'm learning Python 3 (my first language since BASIC), and I have a general question: If I want to iterate over something, how do I determine if the best way is to use a For loop or a generator? They appear to be closely related.
Python 3 For loop vs next() iterator
0
1.2
1
0
0
128
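To make the answer above concrete: a for loop is essentially sugar over iter() and next(). The while version below is an approximation of what the interpreter does for you.

```python
items = ["a", "b", "c"]

# The for statement drives the iterator protocol automatically:
for item in items:
    print(item)

# Roughly what happens behind the scenes:
it = iter(items)
while True:
    try:
        item = next(it)
    except StopIteration:
        break
    print(item)
```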
37,977,358
2016-06-22T20:09:00.000
0
0
0
0
0
python,calendar,google-api,google-admin-sdk
0
38,876,630
0
1
0
true
0
0
Solved: The issue was that my client_secrets.json file for OAuth 2.0 was set to my personal Google account and not the admin account. I cleared the storage.json file where credentials were stored, re-ran the program with the admin account logged in, and it worked! Hope this helps.
1
0
0
0
I'm working on a script using Python that will access all students' Google calendars using their Google accounts and then add their school schedule into their calendar. I have figured out adding and deleting events and calendars using the API, but my question is how do I add a specific event to a specific calendar under a domain. I am a domain admin.
Google API Domain Admin Access
0
1.2
1
0
0
71
37,980,943
2016-06-23T01:41:00.000
0
0
1
0
0
python,excel,ipython,spyder,copy-paste
0
70,369,693
0
1
0
false
0
0
(Spyder maintainer here) For the record, this problem was solved in our 4.1.0 version, released in March 2020.
1
4
0
0
In IDLE Python if I do print "a\tb" I get an output that looks like: a[TAB]b. If I do the same in IPython in Spyder, then I get an output that looks like: a[7 spaces]b I like to output tables of data as tab delimited text to make it easier to copy from the console and paste it to Excel. If the tabs get converted to spaces it becomes more difficult. Is there any setting within IPython or Spyder which controls how TAB characters are displayed? I am using Spyder+IPython on a Windows 10 desktop. I realized I could just write the data to a file, but in this case it is more convenient to just use the console and the clipboard.
How to prevent tab characters from being converted to spaces in console output when using IPython in Spyder
0
0
1
0
0
793
38,018,045
2016-06-24T16:26:00.000
0
0
1
1
1
python,pygame,ubuntu-14.04
0
38,035,454
1
2
0
false
0
0
First of all I want to thank Bennet for responding to my question, which let me figure out what the problem was. The problem was with aliasing. When I installed cv2 or pygame using apt-get, they were installed for the default version, but when I installed a package by downloading the installer first (as I did with anaconda), it was installed for Python 2.7.11, because 'python' was aliased to that version (2.7.11). So basically, make sure that the version for which you want to install everything is the one aliased as 'python', and everything goes fine. I aliased 'python' to the default version and then installed anaconda via the installer, and now it has been installed for the default version.
1
0
0
0
I have Ubuntu 14.04 LTS. I guess different versions of python are pre-installed in Ubuntu 14.04. Right now when I type 'python' in terminal it opens python 2.7.11, but I guess the default version of Ubuntu 14.04 is 2.7.6. When I type /usr/bin/python it opens the default version. I know this can be done with making aliases. The real problem is, I have installed pygame, cv2 (that is for image processing) using apt-get. These are installed for default version of python i.e python 2.7.6. Also I have installed anaconda with python 2.7.11 using pip, but again 'pip' and anaconda are installed for 2.7.11. I know python 3 is also pre-installed there but I don't use it. Also I have no python version installed in user/local/bin.Now I want to know why this problem is occurring? How can I fix this now? Also how to import all the libraries for one python version(either default or another) and how to use it? How to configure my settings so that I would not have any problem in future?
How to install pygame, cv2, anaconda, pip etc to any one version of python in ubuntu 14.04
1
0
1
0
0
180
38,031,729
2016-06-25T18:34:00.000
3
1
0
0
1
python,couchbase,aws-lambda
0
54,285,148
0
2
0
false
0
0
The following two things worked for me: Manually copy /usr/lib64/libcouchbase.so.2 into your project folder and zip it with your code before uploading to AWS Lambda. Use Python 2.7 as the runtime on the AWS Lambda console to connect to Couchbase. Thanks!
1
6
0
0
I'm trying to use AWS Lambda to transfer data from my S3 bucket to Couchbase server, and I'm writing in Python. So I need to import couchbase module in my Python script. Usually if there are external modules used in the script, I need to pip install those modules locally and zip the modules and script together, then upload to Lambda. But this doesn't work this time. The reason is the Python client of couchbase works with the c client of couchbase: libcouchbase. So I'm not clear what I should do. When I simply add in the c client package (with that said, I have 6 package folders in my deployment package, the first 5 are the ones installed when I run "pip install couchbase": couchbase, acouchbase, gcouchbase, txcouchbase, couchbase-2.1.0.dist-info; and the last one is the c client of Couchbase I installed: libcouchbase), lambda doesn't work and said: "Unable to import module 'lambda_function': libcouchbase.so.2: cannot open shared object file: No such file or directory" Any idea on how I can get the this work? With a lot of thanks.
How to create AWS Lambda deployment package that uses Couchbase Python client
0
0.291313
1
0
0
348
38,043,336
2016-06-26T21:25:00.000
0
0
1
1
0
python,sublimetext3,sublimerepl
0
53,070,278
0
2
0
false
0
0
As mentioned above (a long time ago) the key bindings aren't present for Windows. However, one can Mouse Right Click to open a context menu. From here there are menu options for Kill and Restart. You can also open a sub-menu which allows you send those and other signals including SIGINT.
1
3
0
0
I'm using REPL extension for Sublime text 3 for my python projects. Currently when I want to interrupt a running script I have to close to close the REPL window to stop execution and all computations are so far are lost. I was wondering if anybody knows how to interrupt an execution and have a short cut or key bindings for that
Key bindings for interrupt execution in Python Sublime REPL
0
0
1
0
0
1,773
38,045,616
2016-06-27T03:46:00.000
2
0
0
0
0
database,python-2.7,pythonanywhere
0
38,056,627
0
1
0
true
0
0
You cannot get PythonAnywhere to read the files directly off your machine. At the very least, you need to upload the file to PythonAnywhere first. You can do that from the Files tab. Then the link that Rptk99 provided will show you how to import the file into MySQL.
1
2
0
0
I'm new to pythonanywhere. I wonder how to load data from local csv files (there are many of them, over 1,000) into a mysql table. Let's say the path for the folder of the csv files is d:/data. How can I write let pythonanywhere visit the local files? Thank you very much!
Pythonanywhere Loading data from local files
0
1.2
1
1
0
1,111
38,056,711
2016-06-27T14:34:00.000
1
0
1
0
0
python,binary,hex,byte,bytearray
0
38,056,839
0
1
0
false
0
0
A bytearray is always a sequence of integers. How they are displayed is only their representation, and the same applies to the way you entered them. Python understands the 0x?? (hexadecimal) and 0?? (octal) notation for integers, but it will display the decimal notation. To convert an integer to a string in the 0x?? format, use hex(value).
1
1
0
0
Hi I've been trying to iterate through a bytearray, add up all the bytes and then append the result back into the same bytearray. The bytearray looks like this: key = bytearray([0x12, 0x10, 0x32]) However, when I call sum(key) I get the decimal representation of 84. Any idea how I can change the decimal representation and put it back into a hexadecimal format while keeping it of type int. Thank You
Adding bytes in python 2.7
0
0.197375
1
0
0
1,227
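A short illustration of the points in the answer above, using the question's own key; the & 0xFF mask is an added assumption to keep the appended sum within a single byte.

```python
key = bytearray([0x12, 0x10, 0x32])

total = sum(key)          # 84 - an int; "hex-ness" is only a display choice
print(hex(total))         # '0x54' - string representation in hexadecimal
key.append(total & 0xFF)  # append the (low byte of the) sum back into the bytearray
print(key)                # bytearray(b'\x12\x102T')
```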
38,066,526
2016-06-28T03:21:00.000
0
0
0
0
0
python,mongodb,windows-7,barcode-scanner,hid
0
38,120,429
0
1
0
false
0
0
In this scenario I would suggest using scanners/readers that can emulate a serial (COM) port. As HID devices write to the same bus, there is a high probability that output from two or more devices could get mixed up. Moreover, I would add a device id string as a prefix, like dev01. Binding to a COM port can be done with the pySerial module. Any comments welcome!
1
0
0
0
I have found a number of answers in pulling information from HIDs in Linux, but not many in Windows. I have created a system where a person can scan an ID badge when entering a briefing that logs their attendance into a database. It utilizes a Python 3.4 front end which queries and then updates a MongoDB database. Currently, I have a USB Barcode Scanner which, when scanning, acts as a keyboard and "types" what the barcode says, followed by a CR. I also have a window which takes the text input and then closes the window and executes a database query and update when the CR is received. The current issue is speed. I have been asked to expand the system so that one computer with a USB hub can take 4-8 of these Barcode Scanners at the same time, attempting to increase scanning rate to 1000 people every 5 minutes. What I am afraid will happen is that if two scans happen at almost the same time, then their inputs will overlap, generating an invalid query and resulting in both individuals not being logged. As far as I can understand, I need to place each Scanner in its own thread to prevent overlapping data, and I do not want to "lock" input from the other scanners when the system detects a scan beginning as this system is all about speed. However, I am unsure of how to differentiate the devices and how to implement the system. Any solutions would be appreciated! Please take note that I am unfamiliar with HID use in this sense, and only have a basic background in multi-threading.
Sorting Input from Multiple HIDs in Windows
0
0
1
0
0
180
38,074,069
2016-06-28T10:45:00.000
0
0
0
0
1
python,google-sheets,google-api,google-sheets-api,google-api-python-client
0
61,849,385
0
8
0
false
0
0
For those who are solving this renaming using NodeJS: just use the batchRequest API. Indicate in sheetId the sheet id you're editing and in the title field the new title. Then indicate "title" in the fields.
1
30
0
0
I have been trying/looking to solve this problem for a long while. I have read the documentation for gspread and I cannot find that there is a way to rename a worksheet. Any of you know how to? I would massively appreciate it! There is indeed worksheet.title which gives the name of the worksheet, but I cannot find a way to rename the actual sheet. Thank you in advance!
How do I rename a (work)sheet in a Google Sheets spreadsheet using the API in Python?
1
0
1
0
0
13,927
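For the Python side of the question, the same rename can be expressed as a Sheets API v4 batchUpdate request. The sketch below uses google-api-python-client and assumes you already have an authorized service object and know the spreadsheet and sheet ids (all placeholders here); it is not the gspread call the question was originally looking for.

```python
# Assumes `service` is an authorized googleapiclient discovery object for the Sheets API v4,
# and that SPREADSHEET_ID and SHEET_ID are known (both are placeholders).
body = {
    "requests": [
        {
            "updateSheetProperties": {
                "properties": {"sheetId": SHEET_ID, "title": "New worksheet name"},
                "fields": "title",
            }
        }
    ]
}

service.spreadsheets().batchUpdate(spreadsheetId=SPREADSHEET_ID, body=body).execute()
```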
38,083,176
2016-06-28T17:58:00.000
0
0
1
0
1
python,numpy,pandas,anaconda
1
38,084,884
1
2
0
false
0
0
I was able to resolve this issue using conda to remove and reinstall the packages that were failing to import. I will leave the question marked unanswered to see if anyone else has a better solution, or guidance on how to prevent this in the future.
1
2
1
0
I'm running Python 3.5.1 on a Windows 7 machine. I've been using Anaconda without issue for several months now. This morning, I updated my packages (conda update --all) and now I can't import numpy (version 1.11.0) or pandas(version 0.18.1). The error I get from Python is: Syntax Error: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape. This error occurs when the import statement is executed. I'm able to import other packages, some from anaconda's bundle and some from other sources without issue. Any thoughts on how to resolve this?
Python 3.5.1 Unable to import numpy after update
0
0
1
0
0
604
38,089,277
2016-06-29T02:19:00.000
2
0
1
0
0
python-2.7,pyqt4,pyinstaller
0
38,112,580
0
2
0
false
0
0
Solved. This is a bug in pyinstaller 3.2; the new version in git has fixed it. Download the newest source from GitHub and everything works fine.
2
1
0
0
As the title says: the build is successful, but the exe can't run; it cannot find msvcr100.dll. If I put msvcr100.dll in the same dir as the exe, the exe can run, but I want only one exe file. Does anyone know how to do this?
pyinstaller 3.2 build pyqt4/python2.7 to onefile exe, can not run missing msvcr100.dll?
0
0.197375
1
0
0
1,132
38,089,277
2016-06-29T02:19:00.000
2
0
1
0
0
python-2.7,pyqt4,pyinstaller
0
40,601,355
0
2
0
false
0
0
"Solved. This is a bug in pyinstaller 3.2; the new version in git has fixed this bug. Download the newest source from GitHub and everything works fine." This is correct, and I can't tell you how much that answer helped me out. I have been trying to build a single-exe exploit to execute on Windows XP without it crashing, for my OSCP labs/exam. I followed so many tutorials and nothing seemed to work. I was able to build the EXE but could not get it to run as a single EXE. If anyone who reads this is getting "This Program cannot be run in DOS mode", try running it from another machine with the same build (Windows XP). There is not much info out there on how to solve that for a reverse shell on an end-of-life operating system using an EXE exploit built with Pyinstaller. (Lots of trial and error and determination.) The Microsoft Visual C++ 2008 Redistributable Package (or some other version, depending on the Python version) is needed in any case; python27.dll requires it. I was also receiving an error about msvcr100.dll when run from the GUI on my build machine (WinXP SP2). This is corrected in the 3.3 Dev version on GitHub. I installed the C++ 2008 package, but that didn't solve my problem when I re-built the EXE; the 3.3 Dev Pyinstaller was the solution. What I did was: get the zip of the Dev version of Pyinstaller (3.3 Dev on GitHub is the newest as of 11/14/16, as far as I could tell). Make sure you have Python 2.7.x (I used 2.7.11) and a pywin32 that matches that (Python 2.7.x) version (and it does matter whether it's 64-bit or 32-bit). Use the setup.py to install Pyinstaller; make sure you do not have a previous version already installed, and if so use pip etc. to remove it. I installed with pip first and this was my whole issue. I was able to get all of my 32-bit single-EXE exploits to run on 64-bit/32-bit Windows machines up to Windows 10. Once that is completed, make sure Pyinstaller is in your $PATH and follow the standard tutorials on creating a --onefile EXE. Copy it to your Windows target machine and it should work without error. I did not need to pull any dependencies over, but you may have to include some with the --hidden command; how to include hidden .dlls is covered in great detail in the Pyinstaller documentation. If this still doesn't work for you, try using py2exe. It's a little more complicated, but if you're determined you will figure it out. If you have code written in both Python 2.x.x and 3.x.x, you can have multiple Python environments and have Pyinstaller installed in each. This is in the documentation as well. Thank you jim ying, your 2-sentence answer was exactly what I needed.
2
1
0
0
As the title says: the build is successful, but the exe can't run; it cannot find msvcr100.dll. If I put msvcr100.dll in the same dir as the exe, the exe can run, but I want only one exe file. Does anyone know how to do this?
pyinstaller 3.2 build pyqt4/python2.7 to onefile exe, can not run missing msvcr100.dll?
0
0.197375
1
0
0
1,132
38,103,354
2016-06-29T15:01:00.000
0
0
0
0
0
python,listview,gtk,pygtk,gtk2hs
0
40,487,030
0
1
0
true
0
1
In the end it seems that IconView does not have such a feature right now; Thunar uses its own control from libexo, while Caja/Nautilus use their own controls from other libraries.
1
1
0
0
I'm now practicing with Gtk by developing a file manager application similar to Thunar, and I simply can't figure out how to make the IconView items flow vertically instead of horizontally, like in Thunar or Nautilus' Compact View mode, as well as in Windows Explorer's List View Mode. Should I use TreeView istead? I'm practicing in Haskell bindings, the Gtk2Hs, but I'm also familiar with native C library and Python bindings (PyGtk), so explanations using these languages are also acceptable.
How to make GtkListView items flow from top to bottom, like in Thunar or Nautilus Compact View Mode?
0
1.2
1
0
0
113
38,120,223
2016-06-30T10:17:00.000
0
0
0
0
0
linux,macos,python-3.x,webkit,gtk
0
48,846,011
0
2
0
false
0
1
For GTK3: brew install pygobject3. Otherwise: brew install pygobject.
1
3
0
0
I want to make a browser with Python GTK and Webkit for education purposes. I have GTK and it works, but I can't find how to get webkit for Mac OS X. I tried brew, pip3, easy_install. And I'm not sure if PyQT webkit port is the same as webkit.
How to get Webkit for Mac OS X
0
0
1
0
0
2,977
38,130,483
2016-06-30T18:12:00.000
0
0
1
1
1
python,visual-studio,visual-c++,pip,setup.py
1
38,182,957
0
1
0
false
0
0
Instead of setting VS100COMNTOOLS=%VS110COMNTOOLS% in cmd, I did SET VS100COMNTOOLS=C:\Program Files\Microsoft Visual Studio 11.0\Common7\Tools\ and it was picked up correctly, but it then threw another pile of errors, as the VS11 compiler is different and cannot compile the Python 3.4 code properly. I uninstalled VS11, installed VS10 and it worked.
1
0
0
0
I was trying to install Airflow in windows through command prompt using pip. The python is 3.4.2, pip included. I am getting the below error. distutils.errors.DistutilsError: Setup script exited with error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat). I have installed Visual studio 2012 but Python 3.4 looks for VS10 by default. I tried to trick Python to use the newer visual studio by Executing the command set VS100COMNTOOLS=%VS110COMNTOOLS%. Adding new system variable VS100COMNTOOLS as variable name and gave the value as VS110COMNTOOLS. Both tricks did not work. I am still getting the same old error. The file vcvarsall.bat is present in C:\Program Files\Microsoft Visual Studio 11.0\VC what is missing here? how can I get rid of this error?
Error while installing Airflow using pip in windows- Unable to find vcvarsall.bat
1
0
1
0
0
689
38,143,219
2016-07-01T10:35:00.000
0
0
0
0
0
python,python-3.x,scapy,diameter-protocol,dpkt
0
40,066,101
0
1
0
false
0
0
I would suggest you to use tshark. Using tshark you can convert the pcap files to text files containing the AVPs that you are interested in. Once you have the text file, I believe it would be easy to extract the information using python.
1
2
0
0
I have a diameter packet capture pcap file (using tcpdump) containing some AVPs. I'd like to parse the pcap file and access/retrieve the AVPs. I'm using python3.5.1. The dpkt library apparently supports diameter well but it's not yet available for python3. I tried converting it via 2to3-3.5 script but the conversion isn't full-proof and I'm hitting unicode errors while parsing the pcap. I am trying to use scapy now. I need some help/examples in how to use scapy to: parse a pcap file retrieve/parse AVPs from the pcap. Any help would be appreciated. Regards Sharad
How to parse and retrieve diameter AVPs in python?
0
0
1
0
0
1,867
38,145,048
2016-07-01T12:08:00.000
0
0
0
0
1
python,sql-server,database,oracle,python-3.x
1
38,146,099
0
1
0
false
0
0
I may be missing something here. Why don't you connect to your Oracle database as a SQL Server linked server (or the other way around) ?
1
0
0
0
i have been trying to connect to SQL Server (I have SQL Server 2014 installed on my machine and SQL Native Client 11.0 32bit as driver) using Python and specifically pyodbc but i did not manage to establish any connection. This is the connection string i am using: conn = pyodbc.connect('''DRIVER={SQL Server Native Client 11.0}; SERVER=//123.45.678.910; DATABASE=name_database;UID=blabla;PWD=password''') The error message i am getting is this: Error: ('08001', '[08001] [Microsoft][SQL Server Native Client 11.0]Named Pipes Provider: Could not open a connection to SQL Server [161]. (161) (SQLDriverConnect)') Now, is this caused by the fact that both Python (i have version 3.5.1) and pyodbc are 64bit while the SQL Driver is 32bit? If yes, how do i go about solving this problem? How do i adapt pyodbc to query a 32bit database? I am experiencing the same problem with Oracle database OraCLient11g32_home1 For your information, my machine runs Anaconda 2.5.0 (64-bit). Any help would be greatly appreciated.Thank you very much in advance.
Database Connection SQL Server / Oracle
0
0
1
1
0
228
38,160,577
2016-07-02T13:15:00.000
1
0
0
0
0
python,django
1
38,171,865
0
1
0
true
1
0
Try using the atexit module to catch the termination. It should work for everything that behaves like SIGINT or SIGTERM; SIGKILL cannot be intercepted (but it should not be sent by any auto-restart script without sending SIGTERM first).
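A minimal sketch of that idea; the cleanup function below is a placeholder for the real teardown, and re-registering SIGTERM so it becomes a normal exit is just one common pattern, not something Django itself requires:

import atexit
import signal
import sys

def cleanup():
    # Placeholder for the real teardown: close connections, flush state, etc.
    print("running cleanup before exit")

# atexit handlers run on normal interpreter shutdown (including SystemExit).
atexit.register(cleanup)

def handle_sigterm(signum, frame):
    # Translate SIGTERM (what most process managers send) into a normal exit
    # so the atexit handlers still get a chance to run.
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)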
1
0
0
0
I am using a custom Django runserver command that is supposed to run a bunch of cleanup functions upon termination. This works fine as long as I don't use the autoreloader: my server catches the KeyboardInterrupt exception properly and exits gracefully. However, if I use Django's autoreloader, the reloader seems to simply kill the server thread without properly terminating it (as far as I can tell, it doesn't have any means to do this). This seems inherently unsafe, so I can't really believe that there's not a better way of handling this. Can I somehow use the autoreloader functionality without having my server thread be killed uncleanly?
Graceful exit server when using Django's autoreloader
0
1.2
1
0
0
150
38,160,597
2016-07-02T13:17:00.000
1
0
0
0
0
python-2.7,file-io,knime
0
38,161,395
0
2
0
false
1
0
There are multiple options to make this work: 1. Convert the files in memory to Binary Object cells using Python; later you can use those in KNIME. (I am not sure this one is supported, but as I remember it was demoed at one of the last KNIME gatherings.) 2. Save the files to a temporary folder (Create Temp Dir) using Python and connect the Python node, using a flow variable connection, to a file reader node in KNIME (which should work in a loop: List Files, check the Iterate List of Files metanode). 3. Maybe there is already S3 remote file handling support in KNIME, so you can do the downloading and unzipping within KNIME. (Not that I know of, but it would be nice.) I would go with option 2, but I am not so familiar with Python, so for you option 1 is probably the best. (In case option 3 is supported, that is the best in my opinion.)
1
1
0
0
I'm using Knime 3.1.2 on OSX and Linux for OPENMS analysis (Mass Spectrometry). Currently, it uses static filename.mzML files manually put in a directory. It usually has more than one file pressed in at a time ('Input FileS' module not 'Input File' module) using a ZipLoopStart. I want these files to be downloaded dynamically and then pressed into the workflow...but I'm not sure the best way to do that. Currently, I have a Python script that downloads .gz files (from AWS S3) and then unzips them. I already have variations that can unzip the files into memory using StringIO (and maybe pass them into the workflow from there as data??). It can also download them to a directory...which maybe can them be used as the source? But I don't know how to tell the ZipLoop to wait and check the directory after the python script is run. I also could have the python script run as a separate entity (outside of knime) and then, once the directory is populated, call knime...HOWEVER there will always be a different number of files (maybe 1, maybe three)...and I don't know how to make the 'Input Files' knime node to handle an unknown number of input files. I hope this makes sense. Thanks!
Python in Knime: Downloading files and dynamically pressing them into workflow
0
0.099668
1
0
1
1,032
38,265,773
2016-07-08T11:43:00.000
4
0
1
0
1
python,vlc
0
38,265,916
0
5
0
true
0
0
I had the same issue. You should try sudo pip install python-vlc
2
7
0
0
I am trying to create a media player using VLC and Python, but it throws an error: No module named vlc. How do I fix this?
Import Vlc module in python
0
1.2
1
0
0
30,496
38,265,773
2016-07-08T11:43:00.000
0
0
1
0
1
python,vlc
0
60,589,679
0
5
0
false
0
0
The answer didn't work for me; using Mu 1.0.2 on a Raspberry Pi, this did, however: sudo pip3 install vlc
2
7
0
0
I am trying to create a media player using VLC and Python, but it throws an error: No module named vlc. How do I fix this?
Import Vlc module in python
0
0
1
0
0
30,496
38,278,626
2016-07-09T05:16:00.000
1
0
1
0
0
android,qpython,qpython3
0
38,304,956
0
1
0
true
0
1
Open the "qpython3" app then touch "Console" and in the top left corner touch "No. 1" or "No. 2" or ... then select your background running scripts and by touching "X" sing you can kill them.
1
0
0
0
I am using Qpython3 on my Android tablet. I have a Python script for a talking alarm clock that I would like to run in the background and then go off at the time the user sets. The problem is, once I set the console running in the background, I can't figure out how to get back to it to stop the script (i.e. get the message to stop repeating).
How do I stop a script that is running in the background in Qpython3?
0
1.2
1
0
0
1,123
38,280,859
2016-07-09T10:30:00.000
2
0
0
0
0
python,reactjs,django-forms
0
38,281,765
0
1
0
true
1
0
The {{ form }} statement belongs to Django templates. Django templates are responsible for rendering HTML, and so is React, so you don't have to mix the two together. What you probably want to do is use the Django form validation mechanism server-side and let React render the form client-side. In your Django view, simply return a JSON object that you can use in your React code to initialize your form component.
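A minimal sketch of the Django side of that idea; the view name, URL parameter, and field name are hypothetical, and this only illustrates returning initial data as JSON for a React component to consume:

from django.http import JsonResponse

def booking_form_initial(request):
    """Return the data a React-rendered form should be initialized with."""
    # Prefill the date when the user arrives from the other page,
    # leave it empty otherwise; 'date' is a hypothetical field name.
    initial = {"date": request.GET.get("date", "")}
    return JsonResponse({"initial": initial})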
1
4
0
0
Is there any way I can use Django forms inside a ReactJS script, like including {{ form }} in the JSX file? I have a view which displays a form, and it is rendered using React. When I load this view from one page the fields should be empty, but when I hit this view from another page I want data to be prefilled in this form. I know how to do this using Django forms and form views, but I am clueless about where to bring React in.
Django forms in ReactJs
0
1.2
1
0
0
1,649
38,291,388
2016-07-10T11:29:00.000
2
0
0
0
1
python,html,django,dynamic,jinja2
0
38,301,898
0
2
0
true
1
0
I found a solution that works out pretty well. I use <link rel="stylesheet" href="{% block css %}{% endblock %}"> in the base template and then {% block css %}{% static 'home/css/file.css' %}{% endblock %} in each page.
1
2
0
0
I am trying to make my stylesheets dynamic with django (jinja2) and I want to do something like this: <link rel="stylesheet" href="{% static 'home/css/{{ block css }}{{ endblock }}.css' %}"> Apparently, I can't use Jinja in Jinja :), and I don't know how to make this work another way.
Dynamic css import with Jinja2
0
1.2
1
0
0
1,671
38,301,047
2016-07-11T07:09:00.000
1
0
1
0
0
python,environment-variables,kivy
0
38,301,272
0
2
0
false
0
1
Make sure you're running the command from the folder where the *.py file is located, "kivy *.py" should run from there.
1
0
0
1
After the installation of Kivy 1.9.1 on Windows using the commands of Kivy installation tutorials, I can't run the program using "kivy ***.py". I don't know how to set up the environment variables, and I can't find it on the official websites. Kivy: 1.9.1 Python: 3.4.4 Windows 10 Please HELP! Thanks
How to run kivy after 1.9.1 on windows?
0
0.099668
1
0
0
662
38,317,462
2016-07-11T22:55:00.000
0
0
1
0
0
python,multithreading,multiprocessing
0
38,317,881
0
1
0
false
0
0
Based on Harp's second comment on his original post (which was posted after your answer), I suspect that you would now agree with me that processes are probably called for here, given this newly supplied information. However, I find myself questioning just how much truly effective concurrency is likely to be found here. This sounds to me like a sequential job: a "script 1" (with its sub-scripts 1.1, 1.2, etc.) which prepares a file of inputs that is then delivered to "script 2." Especially since "script 2" is utterly beholden to an external website for what it does, I'm just not yet persuaded that the added complexity of multi-threading is genuinely justifiable here.
1
0
0
0
I wanted to get some help with an application... Currently I have a script that saves certain information to a database table, we'll call this table "x". I have another script that gets and saves other info to a different database table, we'll call this one "y". I also have a script that runs formulas on the information found in table y, and I have another script that opens the link found in table x and saves certain information into table "z". The problem I have is that the first script doesn't end, and neither does the third script. So I know now that I need to have either threads or multiple processes running, but which one do I choose? Script 1 accesses tables W & X. Script 2 accesses tables X & Y. Script 3 accesses table Y. Script 4 accesses table Z. Can you please give me some guidance on how to proceed?
Should I use Threads or multiple processess?
0
0
1
0
0
44
38,321,248
2016-07-12T06:15:00.000
1
0
0
0
0
python,numpy,theano,deep-learning,keras
0
38,353,930
0
1
0
false
0
0
This expression should do the trick: theano.tensor.tanh((x * y).sum(2)) The dot product is computed 'manually' by doing element-wise multiplication, then summing over the last dimension.
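A small NumPy sketch of the same shape logic (not Theano itself), just to illustrate that element-wise multiplication followed by a sum over the last axis collapses (n_batch, n_length, n_dim) down to (n_batch, n_length); the random data is only for demonstration:

import numpy as np

n_batch, n_length, n_dim = 2, 3, 4
x = np.random.randn(n_batch, n_length, n_dim)
y = np.random.randn(n_batch, n_length, n_dim)

# Element-wise product, then sum over the last axis == batched dot product.
out = np.tanh((x * y).sum(axis=2))
print(out.shape)  # (2, 3)

# Sanity check against an explicit per-position dot product.
assert np.allclose(out[0, 1], np.tanh(np.dot(x[0, 1], y[0, 1])))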
1
0
1
0
I am trying to apply tanh(dot(x, y)); x and y are batch data of my RNN. x and y have shape (n_batch, n_length, n_dim), e.g. (2, 3, 4): 2 samples with 3 sequence steps, each of 4 dimensions. I want to take the inner (dot) product over the last dimension, so tanh(dot(x, y)) should have shape (n_batch, n_length) = (2, 3). Which function should I use?
how to use dot production on batch data?
0
0.197375
1
0
0
291
38,321,820
2016-07-12T06:49:00.000
0
1
0
1
0
python,ssh,twisted.conch,password-prompt
0
38,324,034
0
1
0
false
0
0
The password prompt is part of keyboard-interactive authentication, which is part of the SSH protocol and thus cannot be changed; technically, the prompt is actually rendered client-side. However, you can bypass security (a very bad idea) and then output "your codes is" [sic] via the channel.
1
0
0
0
I wrote an SSH server with Twisted Conch. When I execute the "ssh [email protected]" command on the client side, my Twisted SSH server returns a prompt requesting the password, like "[email protected]'s password: ". But now I want to change this password prompt to something like "your codes is:". Does anyone know how to do it?
Python SSH Server(twisted.conch) change the password prompt
0
0
1
0
0
122
38,330,752
2016-07-12T13:47:00.000
2
0
0
0
0
python,sublimetext2,sublimetext3,sublimetext,text-editor
0
38,330,853
0
2
0
false
1
0
In the menu bar: View > Layout > Single Or from the keyboard (on Windows): Alt + Shift + 1 To find your default shortcuts, Preferences > Key Bindings - Default, and search for "set_layout".
2
0
0
0
I was wondering how you exit the multiple row layout on Sublime Text. I switched to a 3 row layout when editing one of my Django projects, but how do I exit from it (remove the extra rows I have added). Thanks, Henry
Sublime Text: How do you exit the multiple row layout
0
0.197375
1
0
0
98
38,330,752
2016-07-12T13:47:00.000
2
0
0
0
0
python,sublimetext2,sublimetext3,sublimetext,text-editor
0
38,330,833
0
2
0
true
1
0
Use the View -> Layout menu. If you choose View -> Layout -> Single, the other rows will be removed. Shortcut keys depend on the OS.
2
0
0
0
I was wondering how you exit the multiple row layout on Sublime Text. I switched to a 3 row layout when editing one of my Django projects, but how do I exit from it (remove the extra rows I have added). Thanks, Henry
Sublime Text: How do you exit the multiple row layout
0
1.2
1
0
0
98
38,331,175
2016-07-12T14:04:00.000
0
0
1
0
0
python,ide,pycharm
0
38,331,228
0
1
0
false
0
0
PyCharm displays the Welcome screen when no project is open. From this screen, you can quickly access the major starting points of PyCharm. The Welcome screen appears when you close the current project in the only instance of PyCharm. If you are working with multiple projects, usually closing a project results in closing the PyCharm window in which it was running, except for the last project, closing this will show the Welcome screen.
1
1
0
0
I switched to PyCharm a couple of months ago, but I can't figure out how to get rid of the welcome screen when I open files. More specifically, I've set up my mac to open all .py files using PyCharm. However, when I double click on a .py file, it's the Welcome screen that opens up and not the .py file. How do I get PyCharm to just open the python script in the editor, without showing me a welcome screen?
PyCharm Directly Open Python File
0
0
1
0
0
393
38,356,263
2016-07-13T15:44:00.000
-3
0
0
0
0
python,tkinter
0
38,357,614
0
2
0
false
0
1
The forms toolkit offers precisely the components that it offers. If you are not happy with round radio buttons, then code in OSF/Motif, which offers diamond-shaped radio buttons. Either that, or you could hack the internals of the widget (sorry, "control": I am so accustomed to professional [= UNIX] terminology). The round button is probably represented as a pixmap somewhere: just overwrite that in place, lickety-split, with your own two-tone pixmap that effects a rough diamond shape.
1
0
0
0
I am creating a GUI for an application, modeled off of one I have seen. This other application uses diamond-shaped radiobutton indicators from Python Tkinter, and I can't seem to find out how to use a diamond-shaped radiobutton in my program. All of my attempts at creating a radiobutton result in a circular-shaped radiobutton. Any thoughts? I'm running my GUI on Red Hat and Windows, same problem for both.
Diamond Shaped Radiobuttons in Python Tkinter
0
-0.291313
1
0
0
494
38,357,398
2016-07-13T16:44:00.000
2
0
1
1
0
ipython,keras
1
38,408,813
0
1
0
false
0
0
I don't think Keras is the only problem. If you are using Theano as a backend, it will create $HOME/.theano/ as well. One dirty trick is to export HOME=/data/username/, but programs other than Keras or IPython will then also treat /data/username/ as $HOME. To avoid that, you can do this locally by calling HOME=/data/username/ ipython or HOME=/data/username/ python kerasProgram.py.
1
2
0
0
When I'm in ipython and try to import keras, I get the error No space left on device: /home/username/.keras. How can I change this so that Keras does not use my HOME directory, and instead use /data/username/? I did the same for the directory ~/.ipython. I moved it to the desired location and then did export IPYTHONDIR=/data/username/.ipython, can I do something similar with Keras? More generally, how can I do this for any app that wants to use HOME? Note: Please don't give answers like "you can clean your home" etc. I am asking this for a reason. Thanks!
Move .keras directory in Ubuntu
1
0.379949
1
0
0
680
38,363,949
2016-07-14T00:59:00.000
1
0
0
0
0
python,sas,dataset
0
53,167,909
0
1
0
true
0
0
With the help of the sas7bdat package you can access SAS datasets on the local drive normally; to use datasets from the server, open an FTP or SFTP connection, read the file as a file-like object, and it is easy to access from there.
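A minimal sketch of the local-file case, assuming the sas7bdat package is installed; the file path is hypothetical, and the SFTP variant would simply read a file fetched over the connection first:

from sas7bdat import SAS7BDAT

# Hypothetical local path to a SAS dataset file.
path = r"C:\data\mydata.sas7bdat"

with SAS7BDAT(path) as reader:
    # Convert the whole dataset into a pandas DataFrame for further work.
    df = reader.to_data_frame()

print(df.head())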
1
0
0
0
I am writing some code in Python with all the data available in SAS datasets, both on the local hard drive and on a SAS server. The problem is how to access/import these datasets directly in my Python program and then write back. Can anybody help? I have seen a recommendation for the Python package "sas7bdat" but am not sure about it. Is there any other way to get connected, especially to the datasets available on the local drive (not on the server)?
How to access SAS datasets (available both on local drive and SAS server) from Python code?
1
1.2
1
0
0
958
38,364,162
2016-07-14T01:31:00.000
2
0
1
0
0
python,function
0
38,364,393
0
4
0
false
0
0
One way is to declare your required parameters by name, as in func(a, b, c=1): a and b are required, because the code will error out at call time if either is missing, while c is optional with a default. For additional optional parameters you would then use Python's *args and **kwargs. Of course, any time you use *args and **kwargs you need extra code to pull the parameters out of them, and for each combination of optional parameters you may need a bunch of conditional control flow. You also don't want too many optional parameters, because they make the code's API too complex to describe and the control flow too long; the number of possible combinations grows very quickly with each additional optional parameter, and your test code grows even faster.
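A small sketch of that pattern; the function and parameter names are made up for illustration:

def connect(host, port, timeout=30, retries=3, **extra_options):
    """host and port are required; everything else is optional."""
    # Pull rarely used optional settings out of **extra_options explicitly,
    # so callers get a clear error for anything unexpected.
    verbose = extra_options.pop("verbose", False)
    if extra_options:
        raise TypeError("unexpected options: %s" % ", ".join(extra_options))
    if verbose:
        print("connecting to %s:%s (timeout=%s, retries=%s)"
              % (host, port, timeout, retries))
    return host, port, timeout, retries

connect("db.example.com", 5432)                            # required only
connect("db.example.com", 5432, timeout=5, verbose=True)   # with optionals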
1
9
0
0
especially when there are so many parameters (10+, 20+). What are good ways of enforcing required/optional parameters to a function? What are some good books that deal with this kind of question for Python (like Effective C++ for C++)? ** EDIT ** I think it's very impractical to list def foo(self, arg1, arg2, arg3, .. arg20, .....): when there are so many required parameters.
in python, how do you denote required parameters and optional parameters in code?
0
0.099668
1
0
0
19,827
38,364,568
2016-07-14T02:30:00.000
0
1
0
0
0
python,bdd,scenarios,python-behave
0
38,643,609
0
2
0
false
0
0
What I've been doing might give you an idea: in before_all, create a list on the context (e.g. context.teardown_items = []). Then, in the various steps of the various scenarios, add to that list (accounts, orders or whatever). Finally, in after_all, I log in as a superuser and clean up everything I recorded in that list. Could something like that work for you?
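A minimal environment.py sketch of that pattern for behave; the cleanup call is a placeholder for whatever your application actually needs:

# environment.py (behave hooks)

def before_all(context):
    # Anything a step creates and wants cleaned up goes in this list.
    context.teardown_items = []

def after_all(context):
    # Clean up in reverse order of creation; delete_item is a placeholder
    # for your real cleanup code (API call, DB delete, config revert, ...).
    for item in reversed(context.teardown_items):
        delete_item(item)

def delete_item(item):
    print("cleaning up %r" % (item,))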
1
0
0
0
I am running multiple scenarios and would like to incorporate some sort of dynamic scenario dispatcher which would allow me to have specific steps to execute after a test is done based on the scenario executed. When I was using PHPUnit, I used to be able to subclass the TestCase class and add my own setup and teardown methods. For behave, what I have been doing is adding an extra "Then" step at the end of the scenario which would be executed once the scenario finishes to clean up everything - clean up the configuration changes made by scenario, etc. But since every scenario is different, the configuration changes I need to make are specific to a scenario so I can't use the after_scenario hook that I have in my environment.py file. Any ideas on how to implement something similar?
Dynamic scenario dispatcher for Python Behave
0
0
1
0
0
697
38,376,478
2016-07-14T14:08:00.000
5
0
0
0
0
python,tensorflow,conv-neural-network
0
38,376,532
0
6
0
false
0
0
sigmoid(tensor) * 255 should do it.
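A minimal TensorFlow 1.x-style sketch of that suggestion (sigmoid squashes each element independently into (0, 1), then the multiply rescales to (0, 255)); the input values are arbitrary:

import tensorflow as tf

values = tf.constant([-3.0, 0.0, 2.5, 10.0])
scaled = tf.sigmoid(values) * 255.0  # each element independently in (0, 255)

with tf.Session() as sess:
    print(sess.run(scaled))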
1
22
1
0
Sorry if I messed up the title, I didn't know how to phrase this. Anyway, I have a tensor of values, and I want to make sure that every element in the tensor falls in the range 0 - 255 (or 0 - 1 works too). However, I don't want to make all the values add up to 1 or 255 like softmax; I just want to scale the values down. Is there any way to do this? Thanks!
Changing the scale of a tensor in tensorflow
0
0.16514
1
0
0
25,931
38,387,676
2016-07-15T04:03:00.000
0
0
0
1
0
python,macos,subprocess,osx-elcapitan
0
38,387,870
0
3
0
false
0
0
Thank you everyone for the quick replies. I have been playing with the subprocess module, and I have gotten this to work: import subprocess; m = subprocess.Popen(["say", "hello"]); print(m). The Popen call is also a quick way to get this to work. However, this only works on my Mac, and I need it to work on my Raspberry Pi for an interactive feature in my code. (I am using a Pi Cam and infrared sensors for a robot that wheels around and, when it senses people in front of it, says "Hey! Please move out of my way please!")
1
0
0
0
Mac OS X 10.11.5 (El Capitan) has the "say" command to speak in a system-generated voice, so to say; is there any similar command that can be used from Python? If subprocess is the way to do it, please explain how to use it.
While Mac OSX has the say command to speak, or so to say, is there any command that is similar for Python?
1
0
1
0
0
449
38,388,799
2016-07-15T05:57:00.000
1
0
0
0
0
python,list,python-2.7,sorting
0
38,389,853
0
3
0
false
0
0
You can use str.split(): for example, line.split(',')[1] gives the second comma-separated field (the first timestamp), which you can use as the sort key.
1
5
1
0
Overview: I have data something like this (each row is a string): 81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M 3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 33:33:33:33:33:32,null,^M 61:01:55:16:B5:52, 2016-07-14 06:25:32, 2016-07-14 06:25:34, -56, 33:33:33:33:33:33,null,^M And I want to sort each row based on the first timestamp present in each string, which for these four records is: 2016-07-14 01:28:59 2016-07-14 06:25:32 2016-07-14 08:26:45 2016-07-14 14:29:13 Now I know the sort() method, but I don't understand how I can use it here to sort all the rows based on this (timestamp) quantity, and I do need to keep the final sorted data in the same format, as some other service is going to use it. I also understand I can pass a key function, but I am not clear on how to make it sort on the timestamp field.
Sort A list of Strings Based on certain field
0
0.066568
1
1
0
358
38,400,918
2016-07-15T16:23:00.000
2
1
1
0
0
java,python,oop
0
38,400,988
0
1
0
false
0
0
The short answer is yes and no. One of the key differences I see in Python compared to Java and C# is that in Python, functions don't have to be in a class. In fact, operations don't even have to be in a function. Java and C# both have two main rules: All code must be in a class. Operations are generally required to be in functions. This isn't true in Python. In fact, you can write a very basic Python script that's not even in a function. Java does not offer that flexibility - sometimes, that can be very positive because those strict rules help keep the code organized. Classes in Python operate in a manner that's very similar to Java and C#, but they aren't necessarily applied in the same way because of the rules above.
1
0
0
0
My background in programming is mostly Java. It was the first language I learned, and the language I spent the most amount of time with (I then moved on to C# for a little, and eventually C in school). A while back I tried dabbling with Python, and it seemed so different to me (based on my experience with Java). Anyways, now I'm doing much more Python stuff, and I've learned that Python is considered an OOP language with classes and such. I was just curious as to whether these attributes of Python function similarly to their Java counterparts. Please understand, that I'm asking this at a very rudimentary level. I'm still a "new" programmer in the sense that I just know how to write code, but don't know much about the various intricacies and subtleties with various languages and types of programming. Thanks EDIT Sorry, I realize that this was incredibly broad, but I really wasn't looking for specifics. I guess the root of my question stems from my curiosity about the purpose/role of classes in Python to begin with. From my experience, and what I've seen (and this is by no means extensive or considered to be an accurate representation of the actual uses of Python), most of the time, Python is used without classes or any sort of OOP. As to how that relates to Java, I merely wanted to know if there was a special use or scenario for classes in Python. Essentially, since classes are required in Java, and I was brought up on Java, classes seemed like a norm to me. However, when I got to Python, I noticed that a lack of classes was the norm. This led me to wonder whether classes in Python had some sort of special significance. I apologize if this is no more clear than my original post, or if any of this sounds confusing/inaccurate.
Do classes in Python work the same way as classes in Java?
0
0.379949
1
0
0
57
38,405,345
2016-07-15T21:36:00.000
7
0
1
0
0
python,jupyter-notebook
0
38,405,533
0
1
0
true
0
0
The only way I can see to do it would be to join the cells, and then put the entire thing in a for/while loop.
1
10
0
0
I've got a Jupyter Notebook with a couple hundred lines of code in it, spread across about 30 cells. If I want to loop through 10 cells in the middle (e.g. using a For Loop), how do you do that? Is it even possible, or do you need to merge all the code in your loop into one cell?
How to loop through multiple cells in Jupyter / iPython Notebook
1
1.2
1
0
0
18,735
38,405,454
2016-07-15T21:48:00.000
1
0
1
0
0
python,performance,intel,python-multiprocessing,intel-mkl
0
38,405,541
0
1
0
true
0
0
In the Spyder menu choose Preferences, then click Console and go to the Advanced settings tab. From there choose the Python interpreter that came with the Intel Distribution.
1
0
0
0
I've just installed the new Intel Distribution for Python because I need some performance improvements with my Skull Canyon NUC, but I don't understand how to use all the packages/modules modified by Intel. I usually use Anaconda's Spyder as my main IDE; how can I "tell" Spyder to use the new Intel packages/modules instead of the standard ones included with Anaconda? Thank you for your answers!
Intel Distribution for Python and Spyder IDE
0
1.2
1
0
0
892
38,412,298
2016-07-16T14:45:00.000
1
0
0
0
0
xml,python-2.7,xslt,pdf-generation,xsl-fo
0
38,413,882
0
2
0
false
1
0
XSL-FO requires a formatting engine to create print output such as PDF from XSL-FO input. A freely available one is Apache FOP, and there are also several commercial products. I know of no XSL-FO engines written in Python, though some have Python interfaces.
1
1
0
0
Is there a simple way to get a PDF from an XML file with an XSL-FO stylesheet? I would like to do it in Python. I know how to produce HTML from XML and XSL, but I haven't found a code example for producing a PDF. Thanks
xml + xslfo to PDF python
0
0.099668
1
0
1
1,430
38,418,140
2016-07-17T05:18:00.000
0
1
0
0
0
javascript,python,html,raspberry-pi2
0
38,418,381
0
1
0
false
1
0
I can suggest a way to handle that situation, but I'm not sure how well it will suit your scenario. Since you are trying to use a WiFi network, I think it would be better to use an SQL database to store the commands the vehicle should follow, written sequentially from the web interface. Make the vehicle read the database to check whether there are new commands to be executed and, if there are, execute them in order. That way you can divide the work into two parts and handle the project easily: handling user input via the web interface to control the vehicle, then making the vehicle read the requests and execute them. Hope this helps in some way. Cheers!
1
0
0
0
For a college project I'm tasked with getting a Raspberry Pi to control an RC car over WiFi; the best way to do this would be through a web interface for the sake of accessibility (one of the key requirements for the module). However, I keep hitting walls: I can make a Python script control the car, but doing this through a web interface has proven difficult, to say the least. I'm using an Adafruit PWM Pi HAT to control the servo and ESC within the RC car, and it only has Python libraries as far as I'm aware, so it has to be within Python. If there is some method of passing variables from JavaScript to Python that may work, but in a live environment I don't know how reliable it would be. Any help on the matter would prove most valuable, thanks in advance.
How do I control a python script through a web interface?
0
0
1
0
1
229
38,429,271
2016-07-18T05:38:00.000
0
0
1
0
0
python,flask,psutil
0
38,521,678
0
1
0
false
1
0
For future reference, I found a way to do this using Elasticsearch and psutil: I indexed the psutil values into Elasticsearch and then used the date-range and date-histogram aggregations. Thanks!
1
0
0
0
I'm currently writing a web application in Python using Flask that reports Linux/Unix performance metrics (CPU, disk usage, memory usage). I have already integrated the Python library psutil. My question is how I can get the values of each metric over date ranges, for example the last 3 hours of CPU, disk usage, and memory usage. Sorry for the question, I'm a beginner in programming.
Python/Flask : psutil date ranges
0
0
1
0
0
172
38,431,782
2016-07-18T08:20:00.000
0
0
0
0
1
python,python-3.x,openmdao
0
38,446,190
0
2
0
false
0
0
Worked for me after updating to 1.7.1 via pip on Fedora v20. The command with conventional naming is: view_tree(top)
1
1
0
0
Is the OpenMDAO GUI available in version 1.7.0? If yes, how do I run it? I have found how to run the GUI in version 0.10.7, but that doesn't work in 1.7.
Running openmdao 1.7.0 GUI
0
0
1
0
0
287
38,433,584
2016-07-18T09:49:00.000
3
0
0
0
0
python,matplotlib
0
38,433,637
0
3
0
true
0
0
Assuming you know where the curve begins, you can just use: plt.plot((x1, x2), (y1, y2), 'r-') to draw the line from the point (x1, y1) to the point (x2, y2) Here in your case, x1 and x2 will be same, only y1 and y2 should change, as it is a straight vertical line that you want.
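A small sketch of how that might be combined with finding where the data stops being NaN; the data here is made up, and the dashed style is added via linestyle rather than the 'r-' of the answer:

import numpy as np
import matplotlib.pyplot as plt

# Made-up data: NaN for small x, a curve afterwards.
x = np.linspace(0, 10, 200)
y = np.where(x > 3.7, np.sin(x), np.nan)

# Index of the first non-NaN value, i.e. where the curve begins.
start = np.argmax(~np.isnan(y))
x0, y0 = x[start], y[start]

plt.plot(x, y)
# Dashed vertical line from the x-axis up to the start of the curve.
plt.plot([x0, x0], [0, y0], linestyle='--', color='k')
plt.show()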
1
1
1
0
I have a curve of some data that I am plotting using matplotlib. The small value x-range of the data consists entirely of NaN values, so that my curve starts abruptly at some value of x>>0 (which is not necessarily the same value for different data sets I have). I would like to place a vertical dashed line where the curve begins, extending from the curve, to the x axis. Can anyone advise how I could do this? Thanks
Python - Plotting vertical line
0
1.2
1
0
0
4,256
38,476,379
2016-07-20T08:41:00.000
2
0
1
0
0
python,intellij-idea,pycharm,conda
0
38,732,023
0
1
0
true
0
0
You need to change your Project Interpreter to point to $CONDA_PREFIX/bin/python, where $CONDA_PREFIX is the location of your conda env. The environment location you're looking for should be in the second column of the output of conda info --envs.
1
1
0
0
I am using Python with Conda to manage my environment and libraries. Does anyone know how to get IntelliJ (with the Python plugin) or PyCharm to add the libraries in my Conda environment to my project? It only pulls in site packages even when I select ~/anaconda/bin/python as my Python Interpreter.
How can I get IntelliJ to index libraries in my Python Conda environment?
0
1.2
1
0
0
563
38,480,595
2016-07-20T11:54:00.000
1
0
1
0
0
python,pycharm,exit
0
38,529,814
0
2
0
false
0
0
Solved it using a really bad workaround. I tried all the exit-related functions in Python, including the SIG* handlers, but oddly I did not find a way to catch the exit signal when the Python program is stopped by pressing the "Stop" button in the PyCharm application. I finally got a workaround by using tkinter to open an empty window, with my program running in a background thread, and used that window to close/stop program execution. It works wonderfully, and catches the SIG* signals as well as executing atexit handlers. Anyway, massive thanks to @scrineym, as the link really gave a lot of useful information that helped me develop the final version.
1
1
0
0
Basically I am writing a script that can be stopped and resumed at any time. So if the user uses, say PyCharm console to execute the program, he can just click on the stop button whenever he wants. Now, I need to save some variables and let an ongoing function finish before terminating. What functions do I use for this? I have already tried atexit.register() to no avail. Also, how do I make sure that an ongoing function is completed before the program can exit? Thanks in advance
Wait and complete processes when Python script is stopped from PyCharm console?
0
0.099668
1
0
0
639
38,488,977
2016-07-20T19:12:00.000
1
0
0
1
0
python,docker,errbot
0
38,557,886
0
1
0
true
1
0
If you run Errbot in a container, I think the best approach is to run it with a real database for persistence (Redis, for example). Then you can simply run backup.py from anywhere (including your dev machine). Even better, you can just back up your Redis instance directly.
1
2
0
0
I'm running Errbot in a Docker container. We did the !backup and we have the backup.py, but when I start the Docker container it just runs /app/venv/bin/run.sh and I cannot pass -r /srv/backup.py to have all my data restored. Any ideas? All the data is safe since /srv is a mounted volume.
how can i restore the backup.py plugin data of errbot running in a docker container
0
1.2
1
0
0
75
38,493,144
2016-07-21T00:53:00.000
0
0
1
0
0
python,windows,python-2.7
0
38,493,199
0
1
0
false
0
0
You're using the wrong path: pip should reside in the Scripts subdirectory. Add C:\Python27\Scripts to PATH, then restart cmd.
1
0
0
0
I've been having some really odd issues trying to install and use the Python "pip" module. Firstly, I installed pip by downloading the getpip.py file and running it, which replaced my pre-existing pip and seemed to work fine. However, whenever I try to use pip it always comes up with "pip is not recognized as an internal or external command" etc. I've set the path for Python by using setx PATH "%PATH%;C:\Python27\python" and then C:\Python27\Scripts\pip the second time to try and set the path for pip. But neither of these seems to work: I can't use pip in cmd, and now I can't use python either. Does anyone know how to make this work? I'm trying to run the command "pip install -r requirements.txt" even in the right folder, but pip is not recognized. Any suggestions? Thanks.
Python 2.7 Pip module not installing or setting paths via cmd?
0
0
1
0
0
362
38,496,026
2016-07-21T06:01:00.000
0
1
1
0
0
python,python-2.7,pdf,text,converter
0
70,157,888
0
2
0
false
0
0
You can use "tabula" python library. which basically uses Java though so you have to install Java SDK and JDK. "pip install tabula" and import it to the python script then you can convert pdf to txt file as: tabula.convert_into("path_or_name_of_pdf.pdf", "output.txt", output_format="csv", pages='all') You can see other functions on google. It worked for me. Cheers!!!
1
3
0
0
I've been at this for several days, researching the internet on how to get specific information from a PDF file. Eventually I was able to fetch all the information using Python from a text file (which I created by going to the PDF file -----> File ------> Save as Text). The question is how I get Python to accomplish those tasks itself: going to the PDF file (opening it is quite easy with open("file path")), clicking on File in the menu, and then saving the file as a text file in the same directory. Just to be clear, I do not require the pdfminer or pypdf libraries, as I have already extracted the information from the same file (after converting it manually to txt).
Converting a PDF file to a Text file in Python
0
0
1
0
0
3,865
38,518,000
2016-07-22T04:09:00.000
0
0
0
0
0
python,pandas,dataframe,import,sas
0
38,518,356
0
1
0
false
0
0
I don't know how Python stores dates, but SAS stores dates as numbers counting the number of days from Jan 1, 1960. Using that, you should be able to convert it to a date variable in Python somehow. I'm fairly certain that when data is imported into Python the SAS formats aren't honoured; in this case that's easy to work around, in others it may not be. There's probably some function in Python to create a date of Jan 1, 1960 and then add the number of days you get from the imported dataset to obtain the correct date.
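A minimal pandas sketch of that idea, assuming REPORT_MONTH really does hold SAS day counts from 1960-01-01 (the column name comes from the question; the numeric values below are made up for illustration):

import pandas as pd

# Stand-in for the DataFrame produced by pd.read_sas(path); REPORT_MONTH
# holds SAS date values, i.e. days elapsed since 1960-01-01.
df = pd.DataFrame({"REPORT_MONTH": [20454.0, 20485.0, 20515.0]})

sas_epoch = pd.Timestamp("1960-01-01")
df["REPORT_MONTH"] = sas_epoch + pd.to_timedelta(df["REPORT_MONTH"], unit="D")
print(df)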
1
0
1
0
I have imported a SAS dataset into a Python DataFrame using the pandas read_sas(path) function. REPORT_MONTH is a column in the SAS dataset defined and saved with the DATE9. format. This field is imported as float64 in the DataFrame and holds numbers, which are basically the SAS internal numbers used to store a date in a SAS dataset. I am now wondering how I can convert this field, originally a date, back into a date field in the DataFrame.
Date field in SAS imported in Python pandas Dataframe
0
0
1
0
0
398
38,518,227
2016-07-22T04:33:00.000
1
0
0
0
0
python,excel,dashboard
0
38,520,493
0
1
0
false
0
0
If the "dashboard" is in Excel and if it contains charts that refer to data in the current workbook's worksheets, then the charts will update automatically when the data is refreshed, unless the workbook calculation mode is set to "manual". By default calculation mode is set to "automatic", so changes in data will immediately reflect in charts based on that data. If the "dashboard" lives in some other application that looks at the Excel workbook for the source data, you may need to refresh the data connections in the dashboard application after the Excel source data has been refreshed.
1
0
0
0
I need to create a dashboard based upon an Excel table, and I know Excel has a feature for creating dashboards. I have seen tutorials on how to do it and have done my research, but in my case the Excel table on which the dashboard would be based is updated every 2 minutes by a Python script. My question is: does the dashboard update automatically if a value in the table has been modified, or does it need to be reopened, reloaded, etc.?
Can Excel Dashboards update automatically?
0
0.197375
1
1
0
1,488
38,521,380
2016-07-22T08:09:00.000
0
0
0
0
0
python,openerp,openerp-7
0
38,689,879
0
1
0
false
1
0
Yes, it is possible by using the status bar. In order to compute the percentage of quotes that become sales orders, you should determine, for each sale order, which quotation it came from.
1
0
0
0
Is it possible to easily get the percentage of sales orders / quotes per user? The objective is to know, per user, what percentage of quotes become a sale order. I don't have a clue how to do it. I am using OpenERP 7.
OpenERP - Odoo - How to have the percentage of quote that become a sale orders
1
0
1
0
0
118
38,527,505
2016-07-22T13:18:00.000
0
0
0
1
1
python,jenkins,jenkins-plugins
0
38,527,694
0
1
0
false
0
0
What I usually do is go to the build output; on the left you will find "Build Environment Variables" or something similar, where you can check whether the variables are visible. The solution cited in the other SO post usually works for me as well.
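On the Python side, reading whatever the build has exported is just a dictionary lookup; the variable name below is a placeholder for one of your actual build parameters:

import os

# "MY_BUILD_PARAM" stands in for whichever parameter/variable the Jenkins
# build exposes to the environment of the spawned process.
value = os.environ.get("MY_BUILD_PARAM")
if value is None:
    raise SystemExit("MY_BUILD_PARAM was not set for this build")
print("running with MY_BUILD_PARAM=%s" % value)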
1
0
0
0
I'm currently using a Jenkins build to run a bash command that activates a Python script. The build is parametrised, and I need to be able to set environment variables containing those parameters on the Windows slave to be used by the Python script. So, my question is: how do I set temporary environment variables for the current build, and secondly, how do I use Python to retrieve them while running a script? An explanation of the process would be great, since I couldn't make any solution work.
Jenkins with Windows slave with Python - Setting environment variables while building and using them when running a python script
0
0
1
0
0
766
38,531,685
2016-07-22T16:51:00.000
2
0
1
0
0
python,ipython-notebook,jupyter-notebook
0
38,531,985
0
1
0
false
0
0
Option 1: Run multiple jupyter notebook servers from your project directory root(s). This avoids navigating deeply nested structures using the browser ui. I often run many notebook servers simultaneously without issue. $ cd path/to/project/; jupyter notebook; Option 2: If you know the path you could use webbrowser module $ python -m webbrowser http://localhost:port/path/to/notebook/notebook-name.ipynb Of course you could alias frequently accessed notebooks to something nice as well.
1
2
0
0
My Jupyter/IPython notebooks reside in various directories all over my file system. I don't enjoy navigating hierarchies of directories in the Jupyter notebook browser every time I have to open a notebook. In absence of the (still) missing feature allowing to bookmark directories within Jupyter, I want to explore if I can open a notebook from the command line such that it is opened by the Jupyter instance that is already running. I don't know how to do this....
Open a Jupyter notebook within running server from command line
0
0.379949
1
0
0
1,220
38,540,517
2016-07-23T10:04:00.000
1
0
0
0
0
python,sonos
0
38,955,622
0
1
0
true
1
0
You can easily iterate over the group and change all their volumes; for example, to increase the volume on all speakers by 5: for each_speaker in my_zone.group: each_speaker.volume += 5 (assuming my_zone is your speaker object).
1
1
0
0
I am trying to set group volume in SoCo (Python) for my Sonos speakers. It is straightforward to set individual speaker volume, but I have not found any way to set the volume at the group level (without iterating through each speaker and setting the volume individually). Any idea how to do this?
Anyone know how to set group volume in soco (python)?
0
1.2
1
0
0
442
38,558,368
2016-07-25T00:59:00.000
2
0
0
0
0
python-3.x,utf-8,flask
0
38,558,411
0
1
0
false
1
0
The string in question (the BOM) is most likely included in your template file. Open and re-save it in an editor that doesn't write unnecessary bytes at the start of UTF-8 files, for example Notepad++.
1
0
0
0
I am trying to use Flask and for some reason it is rendering with a byte-order mark that's a quirk of something using UTF8 (the mark is  in particular for people googling the same issue). I do not know how to get rid of it or if it is a source of some of my problems. I am using Flask on Windows 10. I wish I knew how to reproduce this issue.
flask serving byte-order-mark 
0
0.379949
1
0
0
142
38,586,396
2016-07-26T09:43:00.000
0
0
1
0
0
javascript,python,json,csv,geojson
1
38,633,221
0
2
0
false
0
0
I was able to write a conversion script, and it's working now, thanks!
1
0
0
0
So I am currently working on a project that involves the google maps API. In order to display data on this, the file needs to be in a geojson format. So far in order to accomplish this, I have been using two programs, 1 in javascript that converts a .json to a CSV, and another that converts a CSV to a geojson file, which can then be dropped on the map. However, I need to make both processes seamless, therefore I am trying to write a python script that checks the format of the file, and then converts it using the above programs and outputs the file. I tried to use many javascript to python converters to convert the javascript file to a python file, and even though the files were converted, I kept getting multiple errors for the past week that show the converted program not working at all and have not been able to find a way around it. I have only seen articles that discuss how to call a javascript function from within a python script, which I understand, but this program has a lot of functions and therefore I was wondering how to call the entire javascript program from within python and pass it the filename in order to achieve the end result. Any help is greatly appreciated.
How to execute an entire Javascript program from within a Python script
0
0
1
0
1
196
38,586,767
2016-07-26T09:59:00.000
1
0
0
1
0
python,celery
0
38,587,766
0
1
0
true
1
0
You can get it from the _cache attribute of the AsyncResult after you have accessed res.result; for example, res._cache['date_done'].
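A small sketch mirroring that trick; note that _cache is a private attribute of AsyncResult, so this relies on an implementation detail rather than a public API, and task_id/app are placeholders for your own values:

from celery.result import AsyncResult

def get_done_date(task_id, app):
    """Return the backend's date_done for a finished task, or None."""
    res = AsyncResult(task_id, app=app)
    res.result          # forces the result meta to be fetched into _cache
    meta = res._cache or {}
    return meta.get('date_done')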
1
1
0
0
I need to trace the status of the tasks. I can get the 'state' and 'info' attributes from the AsyncResult object; however, it looks like there's no way to get the 'date_done'. I use MySQL as the result backend, so I can find the date_done column in the taskmeta table, but how can I get the task completion date directly from the AsyncResult object? Thanks.
Celery: How to get the task completed time from AsyncResult
0
1.2
1
0
0
696
38,596,793
2016-07-26T17:56:00.000
1
0
0
0
0
python,google-spreadsheet-api
0
38,600,670
0
1
0
true
1
0
If you want to do this only by manipulating your Python program, you would have to keep it running all day, which would waste CPU resources. It's best to use cron to have your Unix system run a command for you every 2 hours; in this case, that command would be your Python program.
1
0
0
0
I'm trying to read from a Google sheet say every 2 hours. I have looked at both the API for Google sheets and also the Google Apps Script. I'm using Python/Flask, and what I'm specifically confused about is how to add the time trigger. I can use the Google Sheets API to read from the actual file,but I'm unsure of how to run this process every x hours. From my understanding, it seems like Google Apps Script, is for adding triggers to doc, sheets, etc, which is not really what I want to do. I'm pretty sure I'm looking in the wrong area for this x hour read. Should I be looking into using the sched module or Advanced Python Scheduler?Any advice on how to proceed would be very appreciated.
Reading From Google Sheets Periodically
0
1.2
1
0
0
164
38,603,480
2016-07-27T03:59:00.000
2
0
1
0
0
python,import,py2exe,pyinstaller,os.system
0
38,665,906
0
1
0
true
0
0
After a couple of days of tests, I was able to figure out how to work around this problem. Instead of os.system, I am using subprocess.call("script.py arg1 arg2 ...", shell=True) for each script I need to run. Also, I used chmod +x (on Linux) before transferring the scripts to Windows to ensure they're executable (someone can hopefully tell me if this was really necessary). Then, without having to install Python, a colleague was able to run the program after I compiled it as a single file with PyInstaller. I was also able to do the same thing with the BLAST executables (the user did not have to install BLAST locally, as long as the exe also accompanied the distribution of the script). This avoided having to call Biopython's NcbiblastnCommandline and the install.
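A minimal sketch of that subprocess.call pattern, mirroring the shell=True form described above; the script name, flags, and argument values are placeholders, not the actual third-party tools:

import subprocess

input_file = "reads.fasta"      # placeholder arguments
output_file = "results.txt"
threads = 4

# Build the command line, then let the shell resolve how script.py is
# executed on the target machine (file association on Windows, shebang on Linux).
cmd = "script.py -i %s -o %s -n %d" % (input_file, output_file, threads)
ret = subprocess.call(cmd, shell=True)
if ret != 0:
    raise RuntimeError("command failed with exit status %d: %s" % (ret, cmd))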
1
2
0
0
I'm using tkinter and pyinstaller/py2exe (either one would be fine), to create an executable as a single file from my python script. I can create the executable, and it runs as desired when not using the bundle option with py2exe or -F option with pyinstaller. I'm running third party python scripts within my code with os.system(), and can simply place these scripts in the 'dist' dir after it is created in order for it to work. The command has several parameters: input file, output file, number of threads..etc, so I'm unsure how to add this into my code using import. Unfortunately, this is on Windows, so some colleagues can use the GUI, and would like to have the single executable to distribute. **EDIT:**I can get it to bundle into a single executable, and provide the scripts along with the exe. The issue still however, is with os.system("python script.py -1 inputfile -n numbthreads -o outputfile..") when running the third party scripts within my code. I had a colleague test the executable with the scripts provided with it, however at this point they need to have python installed, which is unacceptable since there will be multiple users.
PyInstaller/Py2exe - include os.system call with third party scripts in single file compilation
0
1.2
1
0
0
1,346
38,612,836
2016-07-27T12:21:00.000
2
0
0
0
0
python,performance,model-view-controller,pyramid
0
38,612,965
0
1
0
false
1
0
Why don't you use an AJAX call: post the data to the server and, when the processing on the server is done, display the result on the HTML page.
1
0
0
0
I am trying to speed up my website. At the moment the controller fetches data from the database, does calculations on the data, and displays it in the view. What I plan to do is have one controller/action fetch half the data and display it in the view, then come back to a different controller/action to do the calculations on the data and display them on screen. But what I want to know is: once I fetch the data and display it on screen, how do I go back to the controller automatically (without any click by the user) to do calculations on the same data?
Suggestions to make website fast by breaking a request in two parts
0
0.379949
1
0
0
51
38,615,088
2016-07-27T13:58:00.000
1
0
0
0
0
python,scikit-learn,vectorization,tf-idf,text-analysis
0
38,615,418
0
1
0
true
0
0
You seem to be misunderstanding what the TF-IDF vectorization is doing. For each word (or N-gram), it assigns a weight that is a function of both the frequency of the term in a document (TF) and of its inverse frequency across the other documents in the corpus (IDF). It makes sense to use it for words (e.g. knowing how often the word "pizza" comes up) or for N-grams (e.g. "cheese pizza" for a 2-gram). Now, if you do it on lines, what will happen? Unless you happen to have a corpus in which lines are repeated exactly (e.g. "I need help in Python"), your TF-IDF transformation will be garbage, as each sentence will appear exactly once in the corpus. And if your sentences are indeed repeated exactly, down to the punctuation mark, then for all intents and purposes they are not sentences in your corpus but atomic tokens. This is why there is no option to do TF-IDF with sentences: it makes zero practical or theoretical sense.
1
2
1
0
I'm trying to analyze a text which is given line by line, and I wish to vectorize the lines using the scikit-learn package's TF-IDF vectorization in Python. The problem is that the vectorization can be done either by words or by n-grams, but I want it done for lines, and I have already ruled out a workaround that just vectorizes each line as a single word (since in that way the words and their meaning won't be considered). Looking through the documentation I didn't find how to do that, so is there any such option?
Tf-Idf vectorizer analyze vectors from lines instead of words
1
1.2
1
0
0
791