Dataset schema (column: type, value range or string-length range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
35,901,908
2016-03-09T20:28:00.000
1
0
1
0
python,c++,wrapper
35,902,761
1
true
0
1
Concerning your confusion about the process, just follow any tutorial for any of the wrapper libs you found or that were suggested in the comments. If I want to make this manager usable in python, do I need to wrap it in a different way for each version of python? (2.7 vs 3.4) Yes. You might be able to load binary modules compiled for Python 3.4 into Python 3.5, but it's unlikely to work across major versions. Do I also need to wrap it in a different way for each operating system for each version? Yes. Just as you need to compile your C++ code for different operating systems (and possibly versions) and CPU architectures, Python modules are no different. However, "wrap it in a different way" here just means "compile for the target environment".
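For illustration, a minimal sketch of that compile-per-environment workflow with setuptools; the file and module names (manager_module.cpp, mymanager) are placeholders, not taken from the question:

```python
# setup.py -- hypothetical sketch; file and module names are placeholders.
# Run "python setup.py build_ext --inplace" separately under each Python
# version / OS you want to support: the same source compiles into a
# different binary for each target environment.
from setuptools import setup, Extension

ext = Extension(
    "mymanager",                     # name of the importable module
    sources=["manager_module.cpp"],  # your wrapper + manager sources
    language="c++",
)

setup(name="mymanager", version="0.1", ext_modules=[ext])
```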
1
0
0
I have a .cpp and .h source file pair which is a manager (I guess a wrapper also) for a C++ library I have made. I want to let people use this manager to work with my library in Python. I have heard about several different ways to wrap this library for Python, like Cython and Boost.Python, but I'm having trouble understanding the process. If I want to make this manager usable in python, do I need to wrap it in a different way for each version of python? (2.7 vs 3.4) Do I also need to wrap it in a different way for each operating system for each version? So 2.7/3.4 for Windows vs 2.7/3.4 for Linux?
Confusion wrapping c++ library to python
1.2
0
0
85
35,903,034
2016-03-09T21:31:00.000
2
0
0
0
python,ubuntu-14.04,kivy,pydev
35,906,874
1
false
0
1
*.kv files aren't Python files. I don't think trying to treat them as such is really what you want to do. If you must, you can choose to treat *.kv files as python files by going to Preferences > General > Editors > File Associations and adding an entry for *.kv with the Python Editor as an associated editor. My own personal preference, however, is to use YEdit YAML editor for *.kv files. It won't recognize Python syntax in expressions, but it works well enough for me. If you're willing to use an external editor, you can get Kv-lang syntax highlighting in Vim. If you're willing to learn to use Vim. Which you should, because Vim is awesome. Finally, if you're willing to pay, the developer of PyDev also develops a closed source fork of Eclipse called Liclipse, which, if I recall correctly, has syntax highlighting, outlining, and autocomplete in kv files.
1
1
0
I began to develop with Python and Kivy and I really like it :-) For daily business I'm a Java developer and also an Eclipse child. So I decided to set up Eclipse (Mars) with Python, meaning install the PyDev plugin and create the settings (done in one button click). But I have a problem: my Eclipse does not want to recognize the kv-files as Python files. So my question: has anyone experience with this setup? Does anyone know a good setup tutorial? Thanks for your help
Kivy development in eclipse PyDev on Ubuntu 14.04 SetUp
0.379949
0
0
451
35,905,521
2016-03-10T00:42:00.000
0
0
1
1
python,python-3.x,cx-freeze
51,223,723
2
false
0
0
Copy the following files from "...\Lib" into the directory of the file you want to compile: re.py, sre_compile.py, sre_constants.py, sre_parse.py, and then build with: python <nameFileToBuild>.py build
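For reference, a typical cx_Freeze build script looks roughly like the sketch below (game.py is a hypothetical entry point); it is then run with python setup.py build:

```python
# setup.py -- rough cx_Freeze sketch; "game.py" is a hypothetical entry point.
from cx_Freeze import setup, Executable

setup(
    name="mygame",
    version="0.1",
    description="Example cx_Freeze build script",
    executables=[Executable("game.py")],
)
```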
1
1
0
I've been trying to compile a game I'm writing with python into an exe with cx_Freeze so my friends can play it without the python interpreter. However, when I run the "build" command through cmd, I get an error saying "ImportError: No module named 'cx_Freeze'". I've done this every way in and out, changing the capital letters in "cx_Freeze". I'm trying to use 3.4.3/3.5.1, and I'm using cx_Freeze version 4.3.4. Thanks in advance... in answer to Loïc's comment: yes, it is installed.
cx_Freeze not working - no module named cx_Freeze
0
0
0
4,771
35,906,090
2016-03-10T01:43:00.000
0
0
1
1
python,windows,powershell,windows-7
35,906,151
1
true
0
0
Not actually a programming question, but: In Task Manager's Process page, choose View > Select Columns and add the Command Line column. Then you can see the actual command line for each process and you should be able to track down the ones you're interested in. This is for Windows 7; I know they made some changes to the Task Manager for Windows 10 but don't have access to a Windows 10 machine at the moment.
1
1
0
I want to run a Python process in the background, and I use the following command in PowerShell. powershell > PowerShell.exe -windowstyle hidden python my_process.py But how can I know whether it is running in the background? Task Manager does not show a process named python my_process.py running in the background, and I don't know its process ID; it just shows some python and powershell processes running in the background. I cannot identify which process is my Python process.
how to identify jobs running background on Windows 7-10?
1.2
0
0
293
35,906,092
2016-03-10T01:44:00.000
0
0
1
0
python,regex,string,python-2.7,python-3.x
35,906,208
4
false
0
0
I have had a similar challenge and ended up replacing the first character with a placeholder. I then replaced the second character. The third pass was to replace the placeholder with the desired character. Not fancy, but it worked every time. Replace the 'a' with '$', replace the 'o' with 'k', then replace the '$' with 'o'.
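A short sketch of that placeholder trick, assuming '$' never appears in the input text:

```python
# Swap 'a'->'o' and 'o'->'k' "simultaneously" via a temporary placeholder.
# Assumes the placeholder character '$' never appears in the input text.
def swap_replace(text):
    text = text.replace("a", "$")   # park the 'a's out of the way
    text = text.replace("o", "k")   # original 'o's become 'k'
    return text.replace("$", "o")   # parked 'a's become 'o'

print(swap_replace("banana olive"))  # -> "bonono klive"
```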
1
0
0
There is this issue I have been thinking about for some time. I have replacement rules for a string transformation job. I am learning regex and slowly finding correct patterns; this is no problem. However, there are many rules and I could not do them in a single expression, and now the replacements overlap. Let me give you a simple example. Imagine I want to replace every 'a' with 'o' in a string. I also want to replace every 'o' with 'k' in the same string. However, there is no order, so if I apply the first rule first, the converted 'a's will then become 'k', which simply is not my intention, because all conversions must have the same priority or precedence. How can I overcome this issue? I use re.sub(), but I think the same issue exists for the string.replace() method. All help appreciated, thank you!
Avoid string replace repetition
0
0
0
968
35,910,573
2016-03-10T07:51:00.000
3
1
0
0
python,pytest,coverage.py
35,939,160
2
false
0
0
The usual coverage tools are built for the much more common case of the measured code being run inside the same process as the test runner. Not only are you running in a different process, you are on a different machine. You can use coverage.py directly on the remote machine when you start the process running the code under test. How you would do that depends on how you start that process today. The simple rule of thumb is that wherever you had been saying "python my_prog.py", you can say "coverage run my_prog.py".
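If launching through the coverage command line is awkward on the remote machine, the same measurement can be started from inside the code with coverage.py's API; a sketch follows (my_prog and its main() are hypothetical placeholders):

```python
# Rough sketch: start coverage measurement programmatically on the remote
# machine, run the application code, then save the data for later reporting
# (e.g. copy the .coverage file back and run "coverage report").
import coverage

cov = coverage.Coverage()   # coverage.py API
cov.start()

import my_prog              # hypothetical application module
my_prog.main()              # hypothetical entry point

cov.stop()
cov.save()                  # writes the .coverage data file
```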
1
2
0
I'm using py.test for REST API automation with the Python requests library. How can I get coverage using the pytest-cov tool? I'm running the automation on a build server while the code executes on an application server.
pytest-cov get automation coverage from remote server
0.291313
0
1
1,073
35,911,900
2016-03-10T09:07:00.000
0
1
0
0
python,z3,smt,z3py
35,916,711
1
true
0
0
This can happen, e.g., when the configuration between the two methods differs (even slightly), or when the problems aren't exactly identical (e.g. different order of constraints). Some tactics are also non-deterministic (e.g. they use timers in the preprocessing) and the executable happens to be a bit faster/slower. To diagnose what exactly causes the difference we would need to see some of your problems, or at the very least some diagnostic output; for instance, add -v:10 on the command line and set the global "verbosity" option to 10.
1
0
0
I am working with z3 python api. When I solve constraints using z3 python api then the solver runs infinitely and no errors are thrown. But, when same constraints are dumped in the form of smtlib2 format and then are solved via z3 executable, it almost instantaneously gives sat or unsat. The smtlib2 dump is very large (around 1000 lines). Although for small number of constraints, z3 api works fine. Is there a bug in z3 python api for handling large number of constraints?
Difference in output when smtlib2 solver is invoked through z3 python api and directly from executable?
1.2
0
0
159
35,914,596
2016-03-10T11:02:00.000
0
0
0
0
python,python-2.7,pyqt4
35,914,964
1
false
0
1
As far as I know, ui_mainwindow is a Python file generated by a Qt tool that transforms a .ui file from Qt Designer into a Python class. I have no real experience with PyQt, but I know both C++/Qt and Python. In C++/Qt, Qt Creator does the job of transforming the .ui file into a C++ class, but in Python you probably need to do this yourself.
1
0
0
I have python 2.7 under windows x64, I have been trying to make a simple GUI using PyQt4, like this: from PyQt4 import * from ui_mainwindow import Ui_MainWindow class MainWindow(QtGui.QMainWindow, Ui_MainWindow): when I run the program I have this error: " No module named ui_mainWindow" -I have pyqt4 installed - I have tried to replace um_mainwindow with ui_simple and clientGUI but the same error resulted. What am I doing wrong and how to fix this? thank you
python 2.7 under windows: cannot import ui_mainwindow
0
0
0
831
35,919,948
2016-03-10T14:57:00.000
0
0
0
0
python,python-2.7,machine-learning,scipy,scikit-learn
35,923,537
1
false
0
0
I suspect there is no readily available module that suits your needs. If I were you I would: partition the features into two groups, one for simple linear regression and another for regularized regression, and train two models on the two different (maybe overlapping?) sets of features. When you cross-validate your models, to prevent information leakage between folds, I'd suggest fixing the folds and training both models on the same fixed set of folds. On top, stack and train any other regression model.
1
0
1
I want to run a lasso or ridge regression, but where the L1 or L2 constraint on the coefficients is on some of the coefficients, not all. Another way to say it: I would like to use my own custom cost function inside the lasso or ridge algorithm. I would like to avoid having to rewrite the whole algorithm. Is there a module in python that allows this? I looked into scipy and sckit-learn so far, but could not find that.
Inject custom cost function for linear regression
0
0
0
787
35,922,396
2016-03-10T16:45:00.000
0
0
1
0
python-2.7,ipython,windows-10,spyder
35,924,711
1
false
0
0
What is your path to WinPython? Spaces or unicode characters may trouble Spyder. Otherwise, try a previous or later version of WinPython.
1
0
0
I am using a fresh install of WinPython 2.7, which includes Spyder 3.0.0, on Windows 10. When I start Spyder, the ipython console never connects to the kernel. I have tried resetting spyder through the WinPython command prompt (spyder --reset&&spyder) and regular command line (spyder --reset) and tried opening multiple ipython consoles without any luck. There are no errors in the kernel tab. I have made sure that Spyder is pointing to the correct python.exe in WinPython. I have made sure the qtconsole is installed. Ipython QT console built into WinPython works fine. Thank you for any help you can provide.
Spyder in WinPython can't connect to kernel
0
0
0
883
35,922,553
2016-03-10T16:53:00.000
1
0
1
1
python,macos,python-2.7
35,922,700
3
false
0
0
Set an alias to use the Python version that you want from inside your .bashrc (or your zsh config if you use zsh). Like: alias python='/usr/bin/python3.4'
1
6
0
I want to completely reinstall Python 2 but none of the guides I have found allow me to uninstall it. No matter what I do, python --version still returns 2.7.10, even after I run the Python 2.7.11 installer. All the other guides on StackOverflow tell me to remove a bunch of files, but python is still there.
Uninstall Python 2.7 from Mac OS X El Capitan
0.066568
0
0
24,535
35,923,494
2016-03-10T17:39:00.000
0
0
0
0
python,django,apache,apache2.4
35,923,950
1
false
1
0
The real solution is to install your data files in /srv/data/myapp or some such, so that you can give the webserver user correct permissions to only those directories. Whether you choose to put your code in /var/www or not is a separate question, but I would suggest putting at least your wsgi file there (and, of course, specifying your DocumentRoot correctly).
1
1
0
I am working on a Django based application whose location on my disk is home/user/Documents/project/application. Now this application takes in some values from the user and writes them into a file located in a folder under the project directory, i.e. home/user/Documents/project/folder/file. While running the development server using the command python manage.py runserver everything worked fine. However, after deployment, application/views.py, which accesses the file via open('folder/path','w'), is not able to access it anymore, because by default it looks in the var/www folder when deployed via the apache2 server using mod_wsgi. Now, I am not putting the folder into /var/www because it is not good practice to put any Python code there, as it might become readable by clients, which is a major security threat. Please let me know how I can point the deployed application to read and write the correct file.
Django deployed app/project can't write to a file
0
0
0
810
35,924,199
2016-03-10T18:15:00.000
1
0
1
0
python,editor
35,946,563
2
false
0
0
What OS do you use? If you use Windows, you can use WinSCP to view the files and edit them in your computer with any text editor you like. If you use Linux, you can access your files via remote SSH and open them with your text editor.
1
7
0
Most of my data and code are on remote server. I mostly use vim for writing python codes. But, I was wondering if there would be any way to remote access and execute codes on a server from a GUI? GUI comes in handy when I have to plot charts. I am aware of ipython notebooks and pycharm remote access. ipython/jupyter notebooks have a tendency to get stuck during large computations. And for PyCharm we still need to copy codes to local and use a remote interpreter. What are the tools that are usually used? Any help would be appreciated. Thank You
Best editor for remote python files
0.099668
0
0
3,381
35,924,270
2016-03-10T18:19:00.000
1
0
0
0
python,error-handling,libjpeg,openslide
35,983,246
1
true
0
1
I would guess your smaller TIFF is not JPEG-compressed, but your larger one is. When libtiff starts the jpeg decoder, it checks that the version number in the libjpeg library binary matches the version number in the libjpeg headers that it was compiled against, and if they do not match, it prints the warning you are seeing. The error means that you have installed a new jpeg library, but not recompiled libtiff or perhaps openslide. You don't say what platform you are using, but on linux these issues should all be handled for you by your package manager, as long as you stick to the supported versions. If you've built any parts of the system yourself, you'll need to recheck how each part was configured and installed, and how your environment has been set up.
1
0
0
I'm working with Openslide's python bindings. I am using Tif images, which are supported by Openslide. It seems I am able to use the methods read_region and get_thumbnail with a smaller, binary masked Tif of about 100 mb's. However, with a larger, RGBa Tif of about 1.5 Gb, I get the following error: openslide.lowlevel.OpenSlideError: Wrong JPEG library version: library is 90, caller expects 80 I have libjpeg8d installed, and everything seems fine with a smaller Tif. Any suggestions on how fix this issue?
Openslide libjpeg error: Wrong JPEG library version
1.2
0
0
574
35,925,402
2016-03-10T19:22:00.000
0
0
0
0
java,android,python,android-studio,translate
35,925,616
1
true
1
0
Android studio covers basically every aspect you've mentioned. You'll probably need to google a lot to learn how to implement individual things. Short guide: (research on your own how to do each step) You can use SQLite database for storing your data. Add your media (sound files) to resources and then play them or whatever Android keyboard supports different languages, just set it to Romanian You will need some basic understanding of Android programming if you want to do this. I'm uncertain how much time you're willing to put into this, but it should be doable in 2-3 weeks, heavily depending on your previous coding experience.
1
1
0
I would like to make an app that lets you search for part of a word or phrase, then returns the closest results from a personal database of words I have learnt in another language. Once the results have been returned I would like the option to play the sound file with the associated results. I can write the database in whatever program I need, and the sounds files would be in either wav or mp3 format. The app would also need to allow the user to input foreign letters, there are about 10 extra required as I am using Romanian. These could be separate on the screen if necessary, as in separate to the keyboard input. Would this be an easy enough project to undertake, what sort of size would it be, I am more than happy to spend about a week on it. I am familiar with coding, particularly in Python, so writing in Python would be best, but I can use Java also. This would need to work on the android system. What is the best program to use to write the app?
Writing personal translation app using my own database to work on android
1.2
0
0
115
35,926,056
2016-03-10T19:58:00.000
0
0
0
0
django,python-2.7,django-templates
35,926,366
1
true
1
0
This isn't really a template question. Templates are rendered before they ever get to the browser, so there is quite simply no such thing as a dynamic template in this sense (and, to be clear, this is a consequence of the way the web works, not a limitation of Django). The only solution is javascript. If you don't want to render everything in hidden forms, then you will need to dynamically request the form items via Ajax in response to row clicks.
1
0
0
What is the idiomatic way to create dynamic bindings in a Django template? For example, I have a template that has a list of items along the left. This list is bound to a model in my view context and displayed with a {% for %} loop On the right, is a form that is supposed to display the values of the selected row in the table on the left. When the user clicks on a row in my table, I want the form on the right to change to reflect the new values of the selected row. I cannot seem to find any easy way to do this in Django without submitting a form, which seems counterintuitive OR creating one form for every row in my list and then just showing / hiding the form in question (which also seems undesirable.)
Dynamic bindings in Django template
1.2
0
0
200
35,928,155
2016-03-10T21:58:00.000
0
1
0
1
android,python,shell,qpython
35,935,344
2
true
0
1
I don't have experience in Android programming, so I can only give a general recommendation: Of course the naive solution would be to explicitly pass the arguments from script to script, but I guess you can't or don't want to modify the scripts in between, otherwise you would not have asked. Another approach, which I sometimes use, is to define an environment variable in the outermost scripts, stuff all my parameters into it, and parse it from Python. Finally, you could write a "configuration file" from the outermost script, and read it from your Python program. If you create this file in Python syntax, you even spare yourself from parsing the code.
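A minimal sketch of the environment-variable approach; the variable name MYSCRIPT_ARGS is made up for the example:

```python
# myScript.py -- read parameters from an environment variable instead of argv.
# The outermost shell script would do something like:
#   export MYSCRIPT_ARGS="arg1 arg2"   (variable name is made up)
# before eventually invoking python, so the value survives the chain of shells.
import os

args = os.environ.get("MYSCRIPT_ARGS", "").split()
print("parameters received:", args)
```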
2
0
0
I run python in my Android Terminal and want to run a .py file with: python /sdcard/myScript.py The problem is that python is called in my Android environment indirectly via a shell in my /system/bin/ path (to make it directly accessible via the Terminal emulator). My exact question, as the title says: how do I pass parameters through multiple shell scripts to Python? My directly called file "python" in /system/bin/ contains only a redirection like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh and so on to call the python binary. Edit: I simply add the $1 parameter after every shell, so Python is called through something like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh $1 so it is possible to call python /sdcard/myScript.py arg1 and in myScript.py fetch it as usual with sys.argv thanks
Pass parameter through shell to python
1.2
0
0
373
35,928,155
2016-03-10T21:58:00.000
0
1
0
1
android,python,shell,qpython
36,178,959
2
false
0
1
I have a similar problem. Running my script from the Python console /storage/emulator/0/Download/.last_tmp.py -s && exit I am getting "Permission denied". No matter if I am calling last_tmp or the edited script itself. Is there perhaps any way to pass the params in the editor?
2
0
0
I run python in my Android Terminal and want to run a .py file with: python /sdcard/myScript.py The problem is that python is called in my Android environment indirectly via a shell in my /system/bin/ path (to make it directly accessible via the Terminal emulator). My exact question, as the title says: how do I pass parameters through multiple shell scripts to Python? My directly called file "python" in /system/bin/ contains only a redirection like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh and so on to call the python binary. Edit: I simply add the $1 parameter after every shell, so Python is called through something like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh $1 so it is possible to call python /sdcard/myScript.py arg1 and in myScript.py fetch it as usual with sys.argv thanks
Pass parameter through shell to python
0
0
0
373
35,928,317
2016-03-10T22:07:00.000
-2
0
1
0
keyboard,python-idle,enthought,shortcut
52,823,738
1
false
0
0
If I were you, I'd try to continue using your old shortcuts, but if they still don't work, try using "control" for the "option" shortcuts and vice versa. Thanks to the websites online for everything!
1
0
0
I'm taking an online MIT programming course that suggested I use the enthought programming environment. I installed it and now my idle keyboard shortcuts have all changed. It seems to be directly caused by the installation of enthought, as my other computer (without enthought) still retained the old keyboard shortcuts. Anyone know how to get my old keyboard shortcuts back?
idle keyboard shortcuts have changed after installing enthought
-0.379949
0
0
21
35,928,873
2016-03-10T22:45:00.000
0
0
1
0
python,paramiko
65,648,725
2
false
0
0
I was able to get it working by installing the following packages using pip. pip install bcrypt cryptography pynacl paramiko These were the packages my Linux install used as prerequisites, so they should work on windows as well.
1
0
0
I have been trying to install paramiko module on windows without success. I have been getting errors related to Visual C++ compiler missing. Is it possible to install paramiko without having to go through compile process.
Install Python module paramiko on windows
0
0
0
6,551
35,929,840
2016-03-11T00:08:00.000
0
0
1
0
python,c++
35,929,867
2
false
0
1
You can't run a .cpp file, as you need to compile a binary from your .cpp file before you can execute it. First, try to compile it using your terminal; after that you can automate it in a Python script (I guess that's why you want to do it in Python).
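A rough sketch of automating that compile-then-run step from Python, assuming g++ is on the PATH (the output name "example" is made up):

```python
# Compile example.cpp with g++, then run the resulting binary from Python,
# passing inputs on the command line and reading back the printed output
# and the exit code (main()'s return value).
import subprocess

subprocess.check_call(["g++", "example.cpp", "-o", "example"])  # build step

proc = subprocess.Popen(["./example", "3", "4"], stdout=subprocess.PIPE)
out, _ = proc.communicate()
print("exit code:", proc.returncode)              # value returned from main()
print("printed output:", out.decode().split())    # anything the program printed
```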
1
0
0
Total beginner here, so please be gentle. I have a example.cpp file which has one main function which accepts some input parameters and returns an integer value. How would I run this .cpp file from within Python such that I can specify the input parameters from within Python and access the output of the .cpp so that it is stored in Python? Thanks I've modified the main function to accept command line input. What do you mean by "returns its result as its exit code", kindall? Also, want to add that I'd want to return a vector created from the .cpp so that the variable is in Python
Python/C++: Running .cpp in Python
0
0
0
684
35,931,803
2016-03-11T03:44:00.000
0
0
1
0
python,multithreading,wxpython,twisted
48,219,389
1
false
0
1
It looks like this is a really "old" question without any answer. I hope you have figured it out by now; if not, I have a solution you might be interested in. I have done something similar, except I was using an Arduino with an Ethernet Shield. I use socket to communicate through the LAN and Python's built-in threading (threading.Thread) to do whatever task needs to be done. Now the question is: is your GUI inside your Twisted framework? If so, then you should simply rely on the Twisted framework to make your code more maintainable. If not, since your GUI is already built, you can use the method I mentioned above to communicate with the server. If my understanding is wrong, you should clarify the architecture/relationship of the GUI, Twisted, and the server.
1
3
0
I am currently writing a wxPython GUI with Twisted Python integrated to be able to send basic text over LAN to a RaspberryPi. I am at a point where I want some help figuring out the design path that would be best for this project when it comes to the way I should implement my networking. To briefly give more context to the project I have been tasked to create a GUI that connects to a RaspberryPi which controls a research grade CCD, (basically an Astronomy use only camera) a very expensive piece of equipment. I will be sending commands, given by the user over, the local network to a TwistedPython server that uses a "parser" to send the commands to the CCD drivers. On to figuring out the network design philosophy. I am at the point where the major components of my GUI are implemented and just start needing to talk over the network. As for the network coding, I have successfully implemented in a few buttons the code needed to send text over the network (e.g. when I hit the camera expose button it sends the file name and time of exposure). It is at this point where I need to decide on whether I should be using threading or not. I have some experience in threading through C programming with openMP, MPI, and Pthreads, but I can't wrap my head around what "kind" of threading I should be using. Some research has lead me to see that there is the Python built in threading and then threading with TwistedPython. I fail to see the big differences in the two when it comes to how they work. Overall, I think I want it so I can just simply open up a separate thread for Twisted and then send a line of text off and then close it when I am done. However, I am not sure which way of threading I should implement this. There is also the possibility that I don't even need to implement threading if I am only sending small bits of data over the area network. There is one part in my GUI that I know will need threading and that would be a progress bar that updates via a clock. The GUI should still be usable while this is going on, because in Astronomy you can have exposures lasting over ten minutes. Anyway can some of you folks help me pose the right questions for my needs? Thanks
Python GUI implementation direction
0
0
0
89
35,937,769
2016-03-11T10:28:00.000
0
0
0
0
python,amazon-ec2,django-rest-framework
35,937,820
1
false
1
0
The problem is that pip will open an https connection to PyPI. Re-check the outbound security rules of the instance and add an https rule.
1
0
0
I use a micro ec2 instance with python 2.7.10 installed. When i try to install djangorestframework with "pip install djangorestframework", it failed and here is the log: Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(, 'Connection to pypi.python.org timed out. (connect timeout=15)')'
Cannot install djangorestframework on ec2 instance with pip
0
0
0
273
35,938,891
2016-03-11T11:18:00.000
0
0
1
0
python,matplotlib,seaborn
35,939,091
1
true
0
0
Passing arguments into ax.yaxis.grid() and ax.xaxis.grid() will include or omit grids in the graphs
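A small sketch of what that looks like in practice, assuming the whitegrid style from the question:

```python
# Keep the horizontal grid lines but drop the vertical ones on a
# seaborn "whitegrid" styled plot.
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style("whitegrid")
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [2, 5, 3])

ax.yaxis.grid(True)    # horizontal lines stay
ax.xaxis.grid(False)   # vertical lines go away
plt.show()
```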
1
0
1
I'm using the whitegrid style and it's fine except for the vertical lines in the background. I just want to retain the horizontal lines.
matplotlib & seaborn: how to get rid of lines?
1.2
0
0
202
35,941,011
2016-03-11T13:05:00.000
0
0
0
0
python,windows,python-2.7,bs4
35,941,194
1
true
1
0
I guess you can run pip2 install bs4 and pip3 install bs4 to install them separately on different python versions
1
0
0
I have both Python 2.7.11 and Python 3.4.1 installed on my Windows 8.1 system. I had installed BeautifulSoup4 with pip to run a piece of code (not mine). However, pip automatically installed bs4 for Python 3.4.1. (I checked that it was installed in C://Python34/lib/site-packages/bs4) I have used the command prompt, changed directory to C:\Python27 (where Python 2.7 is installed), and run pip install bs4 from that directory, but it didn't work. I have copied the bs4 folder from Python 3.4, but it didn't work either; it only gave another ImportError: No module named html.entities. How can I install bs4 for Python 2.7? Thanks in advance.
How to install BeautifulSoup4 for Python 2.7 while also having Python 3.4 installed?
1.2
0
0
757
35,941,506
2016-03-11T13:28:00.000
0
0
0
1
python,redirect,stdout
35,941,617
2
false
0
0
You want to use 'tee'. stdbuf -oL python mycode.py | tee out.txt
1
2
0
I can run my python scripts on the terminal and get the print results on the stdout e.g. python myprog.py or simply redirect it to a file: python myprog.py > out.txt My question is how could I do both solutions at the same time. My linux experience will tell me something like: python myprog.py |& tee out.txt This is not having the behaviour I expected, print on the fly and not all at once when the program ends. So what I wanted (preferred without changing python code) is the same behavior as python myprog.py (print on the fly) but also redirecting output to a file. What is the simplest way to accomplish this?
python - print to stdout and redirect output to file
0
0
0
838
35,942,424
2016-03-11T14:12:00.000
2
0
1
1
python,ubuntu
39,074,990
1
false
0
0
I might have a similar issue here. Got the same error while trying to import a requirement.txt into a virtualenv. something like "No matching distribution found for adium-theme-ubuntu==0.3.4" Solved it by include --system-site-packages when creating the virtualenv. Hope it helps
1
4
0
I'm working on a Appium Python test script for AWS Device Farm. I get error while building the script as; Could not find any downloads that satisfy the requirement package-name (like PAM, Twisted-Core etc) I've already solved almost all of them, but still have problem with adium-theme-ubuntu. This package is already installed on my system and virtualenv, but I still get same error for this package. How should I solve this issue? Thank you in advance
How to install 'adium-theme-ubuntu' (virtualenv)
0.379949
0
0
2,119
35,943,500
2016-03-11T15:04:00.000
0
0
0
0
python,django,django-templates,django-views
35,943,837
1
true
1
0
First of all, forget about apps here; they have nothing to do with anything. An app is just a collection of models and views, it has no relationship to what can be shown on a page. Your issue is that a single view is exclusively responsible for rendering a page. Django calls a view in response to a request at a particular URL, and whatever is returned from there becomes the content of the page. There is no way to call multiple views from a URL. Instead, you need to think about constructing your code in such a way that the view renders content from multiple places. There are various ways of doing this, such as including templates, using template tags or context processors, composing class-based views, etc.
1
0
0
I have two different apps, recipe and comment. I have a DetailView in the recipe app which points to url(r'^(?P<pk>\d+)/$', RecipeDetailView.as_view(), name='recipe-detail') which is also in the recipe app url file. I also have a CreateView in my views.py file in my comment app. How can i put this CreateView which is in my comment app into the same url that is shown above? Do I do this in the template? Or do I do this in the recipe views.py or urls.py file? I have had no problems making views with one app, i am getting tripped up trying to show views across apps.
CreateView and ListView from different apps on the same page.
1.2
0
0
188
35,944,725
2016-03-11T16:01:00.000
3
0
0
0
python,random,machine-learning,scikit-learn,evaluation
35,948,671
1
true
0
0
If the random_state affects your results it means that your model has a high variance. In the case of Random Forest this simply means that you use too small a forest and should increase the number of trees (which, due to bagging, reduces variance). In scikit-learn this is controlled by the n_estimators parameter in the constructor. Why does this happen? Each ML method tries to minimize the error, which from a mathematical perspective can usually be decomposed into bias and variance [+ noise] (see the bias-variance dilemma/tradeoff). Bias is simply how far from the true values your model ends up in expectation; this part of the error usually comes from prior assumptions, such as using a linear model for a nonlinear problem etc. Variance is how much your results differ when you train on different subsets of data (or use different hyperparameters, and in the case of randomized methods the random seed is such a parameter). Hyperparameters are initialized by us and parameters are learnt by the model itself in the training process. Finally, noise is the irreducible error coming from the problem itself (or the data representation). Thus, in your case, you simply encountered a model with high variance; decision trees are well known for their extremely high variance (and small bias). To reduce variance, Breiman proposed the specific bagging method known today as Random Forest. The larger the forest, the stronger the effect of variance reduction. In particular, a forest with 1 tree has huge variance, while a forest of 1000 trees is nearly deterministic for moderate size problems. To sum up, what can you do? Increase the number of trees; this has to work, and is a well understood and justified method. Or treat random_seed as a hyperparameter during your evaluation, because this is exactly what it is: meta knowledge you need to fix beforehand if you do not wish to increase the size of the forest.
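A small sketch of the first recommendation, showing how the seed-to-seed spread of F1 shrinks as n_estimators grows (synthetic data, not the asker's):

```python
# Illustration of variance reduction: with more trees, the F1 score varies
# much less across random_state values.  Uses synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for n_trees in (1, 10, 100, 500):
    scores = [
        f1_score(y_te,
                 RandomForestClassifier(n_estimators=n_trees, random_state=seed)
                 .fit(X_tr, y_tr).predict(X_te))
        for seed in range(10)
    ]
    print(n_trees, "trees: F1 spread =", round(max(scores) - min(scores), 3))
```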
1
2
1
Can someone explain why the random_state parameter affects the model so much? I have a RandomForestClassifier model and want to set the random_state (for reproducibility purposes), but depending on the value I use I get very different values of my overall evaluation metric (F1 score). For example, I tried to fit the same model with 100 different random_state values, and after training and testing the smallest F1 was 0.64516129 and the largest 0.808823529. That is a huge difference. This behaviour also seems to make it very hard to compare two models. Thoughts?
random_state parameter in classification models
1.2
0
0
743
35,946,190
2016-03-11T17:18:00.000
4
0
1
0
python,list,decimal
35,946,213
3
true
0
0
Use list(map(int, list)) or [int(x) for x in list] Don't use list as a variable name, though, because it conflicts with the built-in type. Also, it isn't very descriptive. The name you use depends on its purpose, but just don't use names that overwrite the built-in types.
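A quick sketch of both options:

```python
values = [1.0, 2.0, 3.0, 4.0, 5.0]   # renamed from "list" to avoid shadowing the built-in

as_ints = [int(x) for x in values]   # list comprehension
also_ints = list(map(int, values))   # map() variant

print(as_ints)    # [1, 2, 3, 4, 5]
print(also_ints)  # [1, 2, 3, 4, 5]
```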
1
1
0
this should be very easy but I'm struggling, I'm just trying to remove the decimal place from each number in this list: list = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0] The method I tried was: int(list) and round(list) but neither worked as only length-1 arrays can be converted to Python scalars Can anybody advise?
Removing decimal point from list
1.2
0
0
3,907
35,948,834
2016-03-11T19:57:00.000
1
0
0
0
python,django,amazon-web-services,edx,openedx
36,759,310
1
false
1
0
This one works ami-7de8981d (us-east). Login with ssh as the 'ubuntu' user. Studio is on port 18010 and the LMS is on port 80.
1
1
0
I have installed Open edX (Dogwood) on an EC2 Ubuntu 12.04 AMI and, honestly, nothing works. I can sign up in Studio and create a course, but the process does not complete. I get a nice page telling me that the server has an error. However, the course will show up on the LMS page. But I cannot edit the course in Studio. If I sign out of Studio, I cannot log back in without an error. However, upon refreshing the page, I am logged in. I can enable the search function and install the search app, but it doesn't show any courses and returns an error. Can someone point me to an AMI that works with, or includes, Open edX? The Open edX documentation is worthless. Or, failing that, explain to me what I am missing when installing Open edX using the automated installation scripts from the documentation.
Open edX Dogwood problems
0.197375
1
0
226
35,952,511
2016-03-12T01:06:00.000
0
0
1
0
python,django,virtualenv
35,952,928
2
false
1
0
It's also good practice to make a requires.txt file for all your dependencies. If for example your project requires Flask and pymongo, create a file with: Flask==<version number you want here> pymongo==<version number you want here> Then you can install all the necessary libraries by doing: pip install -r requires.txt Great if you want to share your project or don't want to remember every library you need in your virtualenv.
1
0
0
I'm trying to use virtualenv in my new mainly Python project. The code files are located at ~/Documents/Project, and I installed a virtual environment in there, located at ~/Documents/Project/env. I have all my packages and libraries I wanted in the env/bin folder. The question is, how do I actually run my Python scripts, using this virtual environment? I activate it in Terminal, then open idle as a test, and try "import django" but it doesn't work. Basically, how can I use the libraries install in the virtual environment with my project when I run it, instead of it using the standard directories for installed Python libraries?
How to use virtualenv in Python project?
0
0
0
2,126
35,956,180
2016-03-12T09:58:00.000
-1
0
1
0
python,windows,gcc,mingw,msys2
41,492,215
3
false
0
0
sys.platform gives msys when in msys-Python.
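A small sketch combining the checks from the question and this answer; treat the strings as a heuristic, since they are just what these builds typically report:

```python
# Heuristic check of which Python build is running.  sys.platform is "msys"
# under MSYS2's own Python; platform.python_compiler() reports GCC for a
# MinGW build and MSC for the official python.org build.
import sys
import platform

print("sys.platform:", sys.platform)
print("compiler:", platform.python_compiler())

if sys.platform == "msys" or "GCC" in platform.python_compiler():
    print("looks like an MSYS2/MinGW build")
else:
    print("looks like the official (MSC) build")
```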
1
2
0
I am using Python in the MSYS2 environment. MSYS2 has its own MinGW-built Python version, and I can also install the official Python from www.python.org. Here is the problem: if I want to write Python code that needs to know whether the running Python is the MinGW one or the official one, how can I do it? Here are some ways I can imagine. Use the "sys.prefix" object. It tells the installation directory; MSYS2 is usually installed in a directory like X:\msys2\... and the official one installs in X:\Python27\ by default. But users may change the installation directory, so this is not a good way. Use the "sys.version" object, which returns a version string including the compiler name. It shows that the MinGW Python was compiled by GCC and the official one by MSC. But there is some possibility that another Python build also uses GCC or MSC. Is there a more elegant way to do this?
How to determine the python is mingw or official build?
-0.066568
0
0
2,041
35,956,650
2016-03-12T10:45:00.000
0
0
1
1
python,ide,version
35,998,142
1
false
0
0
Install latest Python. Go to your Project menu and Project Properties. Change the Python Executable to use Python 3.5 or whatever. Press OK. You might need to restart Wing's Python Shell, but other than that, you should be set. If you want all projects to default to the latest, you will have to set up your OS to default to the latest Python. Depending on the OS, you may have to fiddle around in some settings dialogs or just uninstall the old version. However, be careful when uninstalling Python on Linux as if you happen to uninstall the system Python, your OS may become non-functional.
1
0
0
How do I change from Python 2.7.10 to the latest version in Wingware Python IDE?
Wingware Python IDE: How do I change from Python 2.7.10 to the latest version?
0
0
0
882
35,956,868
2016-03-12T11:08:00.000
2
0
1
1
python-3.x
36,860,451
1
false
0
0
python-recsys is not supported on Python 3.x, only on Python 2.7.
1
1
0
I am trying to install python-recsys module. But i get this error Could not find a version that satisfies the requirement python-recsys (from versions: ) No matching distribution found for python-recsys I am using Python 3.4 The code that i am using to install the module is: pip.exe install python-recsys
Unable to install python-recsys module
0.379949
0
0
490
35,958,961
2016-03-12T14:36:00.000
6
0
1
0
python,python-3.x
35,959,061
2
false
0
0
The Python hierarchy is Type (Metaclass) -> Class -> Instance. Think of the function type() as going one level up. If you use the function type() on an instance of an int (any integer), like so: type(123), you will receive the class of the instance, which in this case is int. If you use type() on the class int, you will receive the type type, which is the metaclass of int. Keep in mind that metaclasses are an advanced technical detail of Python and you do not need to learn about them at first.
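A short interactive illustration of that hierarchy (Python 3 output shown):

```python
# Going "one level up" with type(): instance -> class -> metaclass.
print(type(123))    # <class 'int'>   -- the class of the instance
print(type(int))    # <class 'type'>  -- the metaclass of int
print(type(type))   # <class 'type'>  -- type is its own metaclass
```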
1
28
0
I just recently started to teach myself how to code. I am currently reading Think Python 2 for python 3 and when it teaches about the type() function, it gives the example type(2) which outputs <class 'int'>. It then states that "the word 'class' is used in the sense of a category; a type is a category of values." The part that confuses me is that the type() function outputs class instead of type. Also, I'm not sure about the difference between type and class; are string, float point, and integer classes of the type "value", or are they the same thing? I have looked this up but cannot find an answer to my specific questions or simple enough for me to understand.
Class vs. Type in Python
1
0
0
15,624
35,959,580
2016-03-12T15:30:00.000
8
0
1
1
python,python-2.7
35,959,633
1
true
0
0
Listing directories using a bytestring path on Windows produces directory entries encoded to your system locale. This encoding (done by Windows), can fail if the system locale cannot actually represent those characters, resulting in placeholder characters instead. The underlying filesystem, however, can handle the full unicode range. The work-around is to use a unicode path as the input; so instead of os.walk(r'C:\Foo\bar\blah') use os.walk(ur'C:\Foo\bar\blah'). You'll then get unicode values for all parts instead, and Python uses a different API to talk to the Windows filesystem, avoiding the encoding step that can break filenames.
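A sketch of the fix on Python 2.7 under Windows; the path is made up:

```python
# Python 2.7 on Windows: passing a unicode path to os.walk makes Python use
# the wide-character Windows API, so non-ASCII filenames come back intact.
import os

for root, dirs, files in os.walk(ur'C:\Foo\bar\blah'):   # note the unicode literal
    for name in files:
        path = os.path.join(root, name)      # every part is unicode now
        with open(path, 'rb') as handle:     # opening no longer fails
            data = handle.read()             # ... process data ...
```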
1
2
0
I am using os.walk to traverse a folder. There are some non-ascii named files in there. For these files, os.walk gives me something like ???.txt. I cannot call open with such file names. It complains [Errno 22] invalid mode ('rb') or filename. How should I work this out? I am using Windows 7, python 2.7.11. My system locale is en-us.
Non ascii file name issue with os.walk
1.2
0
0
1,175
35,961,414
2016-03-12T18:07:00.000
1
1
0
0
php,python,apache,perl,privileges
35,962,761
1
false
0
0
So after I posted this I looked into Handlers like I use for IIS. That led me down the path of SUEXEC and through everything I tried I couldn't get Apache to load it. Even made sure that I set the bits for SETUID and SETGID. When I was researching that I ran across .htaccess files and how they can enable CGI scripts. I didn't want to put in .htaccess files so I just made sure the apache.conf was configured to allow CGI. That also did not help. So finally while I was studying .htaccess they referred to ScriptAlias. I believe this is what solved my issue. I modified the ScriptAlias section in an apache configuration file to point to my directory containing the script. After some fussing with absolute directories and permissions for the script to read/write a file I got everything to work except it isn't going through the proxy set by environment http_proxy. That is a separate issue though so I think I am good to go on this issue. I will attempt the same solution on my perl LAMP.
1
0
0
Another way I could ask this question is: How do I set pages served by Apache to have higher privileges? This would be similar to me setting an Application Pool in IIS to use different credentials. I have multiple Perl and Python scripts I am publishing through a web front end. The front end is intended to run any script I have in a database. With most of the scripts I have no issues... but anything that seems to utilize the network returns nothing. No error messages or failures reported. Running from CLI as ROOT works, run from WEB GUI as www-data calling same command fails. I am lumping Python and Perl together in this question because the issue is the same leading me to believe it isn't a code issue, it is a permissions issue. Also why I am not including code, initially. These are running on linux using Apache and PHP5. Python 2.7 and Perl5 I believe. Here are examples of apps I have that are failing: Python - Connecting out to VirusTotal API Perl - Connecting to Domains and Creating a Graph with GraphViz Perl - Performing a Wake On LAN function on a local network segment.
Perl/Python Scripts Fail to Access Internet/Network through Web GUI
0.197375
0
1
79
35,962,581
2016-03-12T19:55:00.000
1
0
0
0
python,django,server
35,962,887
1
true
1
0
manage.py runserver is only used to speed your development process, it shouldn't be run on your server. It's similar to the newly introduced php's built-in server php -S host:port. Since you're coming from PHP you can use apache with mod_wsgi in order to serve your django application, there are a lot of tutorials online on how to configure it properly. You might want to read what wsgi is and why it's important.
1
0
0
This might be a very dumb question, so please bear with me (there's also no code included either). Recently, I switched from PHP to Python and fell in love with Django. Locally, everything works well. However, how are these files accessed when on a real server? Is the manage.py runserver supposed to be used in a server environment? Do I need to use mod_python ? Coming from PHP, one would simply use Apache or Nginx but how does the deployment work with Python/Django? This is all very confusing to me, admittedly. Any help is more than welcome.
Python development: Server Handling
1.2
0
0
42
35,963,580
2016-03-12T21:34:00.000
0
0
1
0
python,pandas,datanitro
35,964,006
1
false
0
0
DataNitro is probably using a different copy of Python on your machine. Go to Settings in the DataNitro ribbon, uncheck "use default Python", and select the Canopy python directory manually. Then, restart Excel and see if importing works.
1
1
1
when I try to import pandas using the data nitro shell, I get the error that there is no module named pandas. I have pandas through the canopy distribution, but somehow the data nitro shell isn't "finding" it. I suspect this has to do with the directory in which pandas is stored, but I don't know how to "extract" pandas from that directory and put it into the appropriate directory for data nitro. Any ideas would be super appreciated. Thank you!!
Can't find pandas in data nitro
0
0
0
193
35,964,324
2016-03-12T22:52:00.000
0
0
0
0
python,cassandra,resultset
35,969,262
1
false
0
0
There is no magic, you'll need to: create a prepared statement for INSERT ... INTO tableB ...; for each row of the ResultSet from table A, extract the values and create a bound statement for table B; then execute the bound statement for insertion into B. You can use asynchronous queries to accelerate the migration a little bit, but be careful to throttle the async requests.
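A rough sketch of that loop with the Python driver; the keyspace, table, and column names are invented for the example:

```python
# Copy rows from table A into table B (same columns, different primary key).
# Keyspace, table and column names here are invented placeholders.
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('my_keyspace')

insert_b = session.prepare(
    "INSERT INTO table_b (id, name, value) VALUES (?, ?, ?)")

for row in session.execute("SELECT id, name, value FROM table_a"):
    # Bind the values pulled from A and insert them into B;
    # execute_async() can speed this up, but throttle the in-flight requests.
    session.execute(insert_b, (row.id, row.name, row.value))
```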
1
0
0
I am using the python cassandra-driver to execute queries on a cassandra database and I am wondering how to re-insert a ResultSet returned from a SELECT query on table A to a table B knowing that A and B have the same columns but a different primary keys. Thanks in advance
Cassandra python driver - how to re-insert a ResultSet
0
1
0
294
35,964,994
2016-03-13T00:16:00.000
0
0
1
0
python,macos,scikit-learn,pycharm,tensorflow
47,130,715
5
false
0
0
I had a similar problem. My code was not working on PyCharm Professional. I had PyCharm CE previously installed and it worked from there. I had configured PyCharm CE a while ago and I had forgotten what setup I used, but if issues persist, make sure that the packages are installed under Preferences > Project > Project Interpreter
4
2
1
I have installed the packages TensorFlow and scikit_learn and all its dependencies. When I try importing them using python 2.7.6 or 2.7.10 (I have tried both) in the terminal, it works fine. However, when I do it using pycharm it gives an error. In the case of scikit_learn with the launcher 2.7.6 says: ImportError: dynamic module does not define init function (init_check_build) In the case of scikit_learn with the launcher 2.7.10 says: ValueError: numpy.dtype has the wrong size, try recompiling In the case of TensorFlow with the launcher 2.7.6 says: ImportError: dlopen(/Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 2): no suitable image found. Did find: /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: mach-o, but wrong architecture In the case of TensorFlow with the launcher 2.7.10 says: ImportError: No module named copyreg Error importing tensorflow. Unless you are using bazel, you should not try to import tensorflow from its source directory; please exit the tensorflow source tree, and relaunch your python interpreter from there. I have tried to search in the net but the solutions did not work for me. I have tried to uninstall them and install them again with pip, conda and directly the source code and it gives always the same errors. I have even tried reinstalling pycharm with no better luck. Other libraries, such as scilab or numpy, work fine in pycharm. Any ideas? It is just driving me mental. By the way, I am using a Mac OS 10.10.5.
pycharm error while importing, even though it works in the terminal
0
0
0
2,043
35,964,994
2016-03-13T00:16:00.000
0
0
1
0
python,macos,scikit-learn,pycharm,tensorflow
40,260,973
5
true
0
0
At the end, I ended up creating a virtual environment, reinstalling everything in there, and calling it through pycharm. I am not entirely sure what was the problem between conda and pycharm, I probably messed up somewhere. I am now using a different virtual environment depending on the project and I am happier than ever :).
4
2
1
I have installed the packages TensorFlow and scikit_learn and all its dependencies. When I try importing them using python 2.7.6 or 2.7.10 (I have tried both) in the terminal, it works fine. However, when I do it using pycharm it gives an error. In the case of scikit_learn with the launcher 2.7.6 says: ImportError: dynamic module does not define init function (init_check_build) In the case of scikit_learn with the launcher 2.7.10 says: ValueError: numpy.dtype has the wrong size, try recompiling In the case of TensorFlow with the launcher 2.7.6 says: ImportError: dlopen(/Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 2): no suitable image found. Did find: /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: mach-o, but wrong architecture In the case of TensorFlow with the launcher 2.7.10 says: ImportError: No module named copyreg Error importing tensorflow. Unless you are using bazel, you should not try to import tensorflow from its source directory; please exit the tensorflow source tree, and relaunch your python interpreter from there. I have tried to search in the net but the solutions did not work for me. I have tried to uninstall them and install them again with pip, conda and directly the source code and it gives always the same errors. I have even tried reinstalling pycharm with no better luck. Other libraries, such as scilab or numpy, work fine in pycharm. Any ideas? It is just driving me mental. By the way, I am using a Mac OS 10.10.5.
pycharm error while importing, even though it works in the terminal
1.2
0
0
2,043
35,964,994
2016-03-13T00:16:00.000
0
0
1
0
python,macos,scikit-learn,pycharm,tensorflow
40,121,055
5
false
0
0
Add 'DYLD_LIBRARY_PATH=/usr/local/cuda/lib' to the Python environment variables under Run -> Edit Configurations -> Environment variables. Hope it works.
4
2
1
I have installed the packages TensorFlow and scikit_learn and all its dependencies. When I try importing them using python 2.7.6 or 2.7.10 (I have tried both) in the terminal, it works fine. However, when I do it using pycharm it gives an error. In the case of scikit_learn with the launcher 2.7.6 says: ImportError: dynamic module does not define init function (init_check_build) In the case of scikit_learn with the launcher 2.7.10 says: ValueError: numpy.dtype has the wrong size, try recompiling In the case of TensorFlow with the launcher 2.7.6 says: ImportError: dlopen(/Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 2): no suitable image found. Did find: /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: mach-o, but wrong architecture In the case of TensorFlow with the launcher 2.7.10 says: ImportError: No module named copyreg Error importing tensorflow. Unless you are using bazel, you should not try to import tensorflow from its source directory; please exit the tensorflow source tree, and relaunch your python interpreter from there. I have tried to search in the net but the solutions did not work for me. I have tried to uninstall them and install them again with pip, conda and directly the source code and it gives always the same errors. I have even tried reinstalling pycharm with no better luck. Other libraries, such as scilab or numpy, work fine in pycharm. Any ideas? It is just driving me mental. By the way, I am using a Mac OS 10.10.5.
pycharm error while importing, even though it works in the terminal
0
0
0
2,043
35,964,994
2016-03-13T00:16:00.000
0
0
1
0
python,macos,scikit-learn,pycharm,tensorflow
38,749,298
5
false
0
0
You should start PyCharm from the terminal: cd /usr/lib/pycharm-community/bin and then ./pycharm.sh
4
2
1
I have installed the packages TensorFlow and scikit_learn and all its dependencies. When I try importing them using python 2.7.6 or 2.7.10 (I have tried both) in the terminal, it works fine. However, when I do it using pycharm it gives an error. In the case of scikit_learn with the launcher 2.7.6 says: ImportError: dynamic module does not define init function (init_check_build) In the case of scikit_learn with the launcher 2.7.10 says: ValueError: numpy.dtype has the wrong size, try recompiling In the case of TensorFlow with the launcher 2.7.6 says: ImportError: dlopen(/Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 2): no suitable image found. Did find: /Library/Python/2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: mach-o, but wrong architecture In the case of TensorFlow with the launcher 2.7.10 says: ImportError: No module named copyreg Error importing tensorflow. Unless you are using bazel, you should not try to import tensorflow from its source directory; please exit the tensorflow source tree, and relaunch your python interpreter from there. I have tried to search in the net but the solutions did not work for me. I have tried to uninstall them and install them again with pip, conda and directly the source code and it gives always the same errors. I have even tried reinstalling pycharm with no better luck. Other libraries, such as scilab or numpy, work fine in pycharm. Any ideas? It is just driving me mental. By the way, I am using a Mac OS 10.10.5.
pycharm error while importing, even though it works in the terminal
0
0
0
2,043
35,965,427
2016-03-13T01:29:00.000
3
0
1
0
python,uninstallation
35,965,537
1
false
0
0
Find the uninstall shortcut link in the Python folder of the Windows Start menu, or in Python's install folder, then uninstall it. If you cannot find either, I think you can just delete the Python install folder; everything should be OK after you install the x64 Python, because for many programs only the files in the install folder are x86/x64 dependent, while the files in the user folder are not. P.S. The installation folder may be located in something like c:\programs\python35\ or c:\Users\USERNAME\AppData\Local\Programs\Python\Python35
1
3
0
I have Windows 7 64 Bit. By mistake I installed Python 3.5 32 bit. I want to uninstall it (for installing 64 Bit version) but dont know how to do it (It does not get uninstalled from Control Panel -> Uninstall a Program). I googled it and found some links but could not understand / was not able to do it. Please help.
Python 3.5 uninstall Windows 7
0.53705
0
0
9,697
35,966,578
2016-03-13T04:43:00.000
-1
0
0
0
python
35,966,622
2
false
0
1
I would try using lambda: before the command. For instance, replace readFile(file) with lambda: readFile(file). This will ensure an anonymous ("lambda") function with no parameters is passed, which upon execution will run the intended code. Otherwise, the function is executed once when the behavior is set, then the returned value is simply re-evaluated every time rather than the appropriate function being called.
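A minimal sketch of the difference, with a hypothetical readFile helper and file path used only for illustration:

```python
import tkinter as tk

def readFile(path):
    print("reading", path)

root = tk.Tk()
file = "clients.txt"   # hypothetical path

# Broken: readFile(file) runs immediately, and the Button receives its return value (None)
broken = tk.Button(root, text="Add (broken)", command=readFile(file))

# Works: the lambda is only evaluated when the button is actually clicked
working = tk.Button(root, text="Add", command=lambda: readFile(file))

broken.pack()
working.pack()
root.mainloop()
```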
1
0
0
So, I'm creating a client manager software for a local club. I'm using Python 3.5.1 and Tkinter. Used a Notebook to nest my Frames. On my first frame I made the form to add new clients (labels and textboxes) and an "add" button at the end. Problem is that it executes the function associated with the button insted of onclick, and the button actually does nothing on click. Been searching everywhere and it seems a rare problem. Help?
Python 3.5.1, Tkinter: Functions execute on start instead of button click
-0.099668
0
0
451
35,968,464
2016-03-13T09:08:00.000
0
0
0
0
python,image,crash,save,pygame
51,588,818
1
false
0
1
Are you on Windows or on Mac? If you're on Windows, check whether you wrote the location like this: "\folder\thing.png". That's an error because of the leading "\". Remove it and try again.
1
1
0
I have written a program in Python which draws parts of the Mandelbrot set using pygame. However, when I leave it running to generate for a long time and then save the file I get this error: pygame.error: SavePNG: could not open for writing I'm not sure why this would happen and saving works fine usually. Perhaps when the computer goes to sleep something stops working? But more importantly does anyone know how to fix this?
pygame.error: SavePNG: could not open for writing?
0
0
0
1,146
35,968,682
2016-03-13T09:35:00.000
1
1
1
0
python,fortran,profiling,f2py
35,970,843
1
true
0
0
In the end I found out that the -DF2PY_REPORT_ATEXIT option can report the wrapper performance.
1
0
0
I am currently writing a time consuming python program and decided to rewrite part of the program in fortran. However, the performance is still not good. For profiling purpose, I want to know how much time is spent in f2py wrappers and how much time is actual spent in fortran subroutines. Is there a convenient way to achieve this?
How to obtain how much time is spent in f2py wrappers
1.2
0
0
82
35,972,196
2016-03-13T15:37:00.000
1
0
1
1
ipython-notebook,jupyter-notebook
35,972,367
1
true
0
0
Maybe it's not there? You can create it first in the Mac terminal: touch $HOME/.ipython/profile_default/ipython_notebook_config.py and then open it in TextWrangler: open -a /Applications/TextWrangler.app $HOME/.ipython/profile_default/ipython_notebook_config.py
1
1
0
I am attempting to install ipython notebook based on some instructions. However, while I tried to execute this 'In your favorite editor, open the file $HOME/.ipython/profile_default/ipython_notebook_config.py', I can't really open a file from TextWrangler. I am not familiar with this. Could anyone help me out there? Thank you very much!!
Installing iPython Notebook - opening a $HOME file from editor
1.2
0
0
48
35,973,590
2016-03-13T17:45:00.000
-1
0
0
0
python,apache-spark,pyspark,partitioning,rdd
44,932,978
2
false
0
0
I recently used partitionBy. What I did was restructure my data so that all the records I want to put in the same partition have the same key, which in turn is a value from the data. My data was a list of dictionaries, which I converted into tuples keyed by a value from the dictionary. Initially partitionBy was not keeping the same keys in the same partition. But then I realized the keys were strings. I cast them to int, but the problem persisted; the numbers were very large. I then mapped these numbers to small numeric values and it worked. So my takeaway was that the keys need to be small integers.
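A rough sketch of that remapping, assuming an existing SparkContext named sc; the field names are made up for illustration:

```python
# Records with huge string ids (field names are illustrative only)
records = [{"student": "ann", "big_id": "900000000000000001"},
           {"student": "bob", "big_id": "900000000000000002"}]

rdd = sc.parallelize(records)

# Map every distinct big id to a small integer on the driver
id_map = rdd.map(lambda d: d["big_id"]).distinct().zipWithIndex().collectAsMap()

# Key each record by its small integer id and partition on that key
keyed = rdd.map(lambda d: (id_map[d["big_id"]], d))
partitioned = keyed.partitionBy(100)
```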
1
11
1
I understand that partitionBy function partitions my data. If I use rdd.partitionBy(100) it will partition my data by key into 100 parts. i.e. data associated with similar keys will be grouped together Is my understanding correct? Is it advisable to have number of partitions equal to number of available cores? Does that make processing more efficient? what if my data is not in key,value format. Can i still use this function? lets say my data is serial_number_of_student,student_name. In this case can i partition my data by student_name instead of the serial_number?
pyspark partitioning data using partitionby
-0.099668
0
0
23,512
35,974,213
2016-03-13T18:41:00.000
0
0
1
0
python,logging
71,525,953
2
false
0
0
You may now use logging.setLoggerClass(klass). Trouble is still how to manage this in shared hosted environments (e.g. Flask, FastAPI, ...), but if you 'own' the env, you can alter its global props. Also make sure that you replace only loggers for your own module...
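A small sketch of that approach; the subclass name and the decision to override the private _log hook are just one way to intercept messages, not the only one:

```python
import logging

class ContextLogger(logging.Logger):
    """Hypothetical Logger subclass that augments every message."""
    def _log(self, level, msg, args, **kwargs):
        super()._log(level, "[ctx] %s" % msg, args, **kwargs)

logging.setLoggerClass(ContextLogger)      # must run before getLogger() is called
logging.basicConfig(level=logging.INFO)

log = logging.getLogger("myapp.module")    # now returns a ContextLogger instance
log.info("hello")                          # logs "[ctx] hello"
```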
1
1
0
The LoggerAdapter class in python is used for providing contextual information to messages logged. After subclassing it, the subclass can be used in place of a logger, and the subclass can intercept and modify/augment the message being logged. Is it possible to make logging.getLogger(name) return that custom LoggerAdapter class instead of the generic logger? It seems like the LoggerAdapter class would be useless if it had to be separately instantiated in every individual file it is used in, and it doesn't seem like I should have to import it, since the logger module seems to let you use its methods to access the loggers everywhere.
Can I make python's logging.getLogger return a LoggerAdapter?
0
0
0
546
35,974,957
2016-03-13T19:48:00.000
0
0
0
0
python,scikit-learn,backpropagation,momentum
37,115,845
2
true
0
0
If anybody ever needs an answer for this, I actually decided to run everything on a Linux VM. I then followed the instructions to install the dev version and everything(well almost) worked perfectly. Running it on Linux is way easier than Windows because you can just install the package from git and run it without having to download required software to compile it. I still struggled a little bit though.
1
1
1
I'm trying to use Scikit-Learn's Neural Network to classify my dataset using a Backpropagation with Momentum. I need to specify these parameters: Hidden neurons, Hidden layers, Training set, Learning rate and Momentum. I found MLPClassifier in Sklearn.neural_network package. The problem is that this package is part of Scikit-learn V0.18 which is a dev version. Is there a way I could use Scikit-Learn V0.17 to do this? Using Anaconda, but I can change that if it causes problems.
Backpropagation with Momentum using Scikit-Learn
1.2
0
0
903
35,976,192
2016-03-13T21:38:00.000
0
0
0
0
python,python-2.7,matrix,vector,linear-algebra
35,976,436
1
true
0
0
I managed to fix it by using a temp variable, setting it to the correct size, and iterating over dsdb1. I still don't know what caused the bug.
1
0
1
This is probably a rookie mistake that I'm missing somewhere but I can't for the life of me find anything related to my problem on the web. I have a vector b1 of size 5 by 1, and i have another vector dsdb1 which is also 5 by 1. When I write b1 += tau*dsdb1 I get the error "non-broadcastable output operand with shape (5,1) doesn't match the broadcast shape (5,5)" Now, no one of these is a matrix. I even deleted this line and instead printed both sizes for b1 and dsdb1. For b1 it printed (5,1) and for dsdb1 it printed (5,). tau is just a scalar. Why is it changing dsdb1 to a 5 by 5 matrix when computing?
vector changes to matrix at computation
1.2
0
0
23
35,979,063
2016-03-14T03:25:00.000
0
1
0
0
python,bdd,python-behave
35,988,255
1
true
0
0
No, there is not; you would have to write your own runner to do that. And that would be complex, because piecing together the content of two separate test runs, each of which is half of the suite, gets messy as soon as any errors show up. A better and faster solution is to write a simple bash/Python script that traverses a given directory for .feature files and then fires an individual behave process against each one. With properly configured outputs it will be collision free, and if you separate your cases it will give you a much better boost than just running half. You can of course delegate that task to the other machine by some means, be it a bare SSH command or queues.
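A minimal sketch of such a runner, assuming a features/ directory and a worker id passed on the command line; the paths, worker count and report layout are assumptions, not part of the original answer:

```python
import glob
import subprocess
import sys

features = sorted(glob.glob("features/**/*.feature", recursive=True))
workers = 2                       # e.g. one per machine
worker_id = int(sys.argv[1])      # 0 or 1

# Each worker takes every Nth feature file, so the two halves never overlap
for feature in features[worker_id::workers]:
    subprocess.call(["behave", feature,
                     "--junit", "--junit-directory",
                     "reports/worker-%d" % worker_id])
```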
1
0
0
We have two machines with the purpose to split our testing across machines to make testing faster. I would like to know of a way to tell behave to run half of the tests. I am aware of the --tags argument but this is too cumbersome as, when the test suite grows, so must our --tags argument if we wish to keep it at the halfway point. I would also need to know which of the other half of tests were not run so I can run those on the other machine. TL;DR Is there a simple way to get behave to run, dynamically, half of the tests? (that doesn't include specifying which tests through the use of --tags) And is there a way of finding the other half of tests that were not run? Thanks
How to run only half of python-behave tests
1.2
0
0
224
35,981,771
2016-03-14T07:30:00.000
1
0
1
0
python-3.x
35,981,995
1
true
0
0
You can use the chr and ord classes to convert between numbers and characters. In this case, given a binary number, you'll also need to use the int class to convert from a binary string to a Python integer. For example: >>> chr(int("00010010", 2)) '\x12' This gives the ascii character of the given input. Note that the binary "00010010" does not correspond to a "H" character in ASCII; the value of "H" can be found with the ord function: >>> bin(ord("H")) '0b1001000'
1
2
0
For example if I had the value '00010010' how would a simple function just print it as "H"? Other answer seem to be rather complicated or don't work at all
Simply, what is a way to print ascii character from binary values in python3?
1.2
0
0
738
35,983,632
2016-03-14T09:23:00.000
3
0
0
0
python,mysql,django,flask,primary-key
35,983,791
1
true
0
0
You certainly could do this. One issue though is that since this can't be set by the database itself, you'll need to write some Python code to ensure it is set on save. Since you're not using MongoDB, though, I wonder why you want to use a BSON id. Instead you might want to consider using UUID, which can indeed be set automatically by the db.
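As a sketch of the UUID route, here is one way to generate the key in Python on insert; SQLAlchemy is used purely for illustration (with Django the equivalent is a UUIDField with default=uuid.uuid4), and the model and table names are made up:

```python
import uuid
from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Client(Base):
    """Illustrative model: a UUID string primary key instead of an auto-increment id."""
    __tablename__ = "clients"
    # Generated in Python on insert, not by the database itself
    id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    name = Column(String(100))

engine = create_engine("sqlite://")      # stand-in engine just to create the table
Base.metadata.create_all(engine)
```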
1
0
0
I am thinking if I don't use auto id as primary id in mysql but use other method to implement, may I replace auto id from bson.objectid.ObjectId in mysql? According to ObjectId description, it's composed of: a 4-byte value representing the seconds since the Unix epoch a 3-byte machine identifier a 2-byte process id a 3-byte counter, starting with a random value. It seems it can provide unique and not duplicate key. Is it a good idea?
May I use bson.objectid.ObjectId as (primary key) id in sql?
1.2
1
0
373
35,986,526
2016-03-14T11:38:00.000
2
0
0
0
python,sqlite,server
35,986,703
1
true
0
1
Do you want to connect to an SQLite database server? SQLite is serverless: it stores your data in a file. You should use MariaDB for a database server, or you can store your SQLite database file on a network shared drive, in the cloud, or similar.
1
0
0
I made a program with using sqlite3 and pyqt modules. The program can be used by different persons simultaneously. Actually I searched but I did not know and understand the concept of server. How can i connect this program with a server. Or just the computers that have connections with the server is enough to run the program simultaneously?
How to connect my app with the database server
1.2
1
0
59
35,987,785
2016-03-14T12:39:00.000
0
0
0
1
python,python-2.7,google-app-engine,scikit-learn
52,134,247
3
false
0
0
The newly-released 2nd Generation Python 3.7 Standard Environment (experimental) can run all modules. It's still in beta, though.
1
8
1
I am trying to deploy a python2.7 application on google app engine. It uses few modules like numpy,flask,pandas and scikit-learn. Though I am able to install and use other modules. Installing scikit-learn in lib folder of project give following error:- Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler handler, path, err = LoadObject(self._handler) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 85, in LoadObject obj = __import__(path[0]) File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/deploynew.py", line 6, in import sklearn File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__init__.py", line 56, in from . import __check_build File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__check_build/__init__.py", line 46, in raise_build_error(e) File "/base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__check_build/__init__.py", line 41, in raise_build_error %s""" % (e, local_dir, ''.join(dir_content).strip(), msg)) ImportError: dynamic module does not define init function (init_check_build) ___________________________________________________________________________ Contents of /base/data/home/apps/s~category-prediction-1247/v1.391344087004233892/lib/sklearn/__check_build: setup.pyc __init__.py _check_build.so setup.py __init__.pyc ___________________________________________________________________________ It seems that scikit-learn has not been built correctly. If you have installed scikit-learn from source, please do not forget to build the package before using it: run python setup.py install or make in the source directory. If you have used an installer, please check that it is suited for your Python version, your operating system and your platform. Is their any way of using scikit-learn on google app engine?
Using Scikit-learn google app engine
0
0
0
1,664
35,987,855
2016-03-14T12:42:00.000
-1
0
0
0
python,jenkins,jenkins-plugins,jenkins-cli
36,003,452
2
true
0
0
The user access matrix can be retrieved with a {jenkins_url}/computer/(master)/config.xml request. After that you can parse it and build a list of users with their permissions.
1
1
0
I need to get users access matrix for Jenkins users using Python. What is endpoint for REST API ?
Jenkins get user access Matrix
1.2
0
1
871
35,989,119
2016-03-14T13:40:00.000
1
0
1
1
python,fortran,profiling,cpu-usage,lapack
36,002,334
1
true
0
0
Operating systems are constantly switching out what is running at any given moment. Your program will run for a while, but eventually there will be an interrupt, and the system will switch to something else, or it may just decide to run something else for a second or two, then switch back. It is difficult to force an operating system not to do this behavior. That's part of the job of the OS; keeping things moving in all areas.
1
2
0
I wrote a time-consuming python program. Basically, the python program spends most of its time in a fortran routine wrapped by f2py and the fortran routine spends most of its time in lapack. However, when I ran this program in my workstation, I found 80% of the cpu time was user time and 20% of cpu time was system time. In another SO question, I Read: The difference is whether the time is spent in user space or kernel space. User CPU time is time spent on the processor running your program's code (or code in libraries); system CPU time is the time spent running code in the operating system kernel on behalf of your program. So if this is true, I assume all the cpu time should be devoted to user time. Does 20% percent system time indicate I need to profile the program? EDIT: More information: I cannot reproduce the 20% percent system cpu time. In another run, the time command gives: real 5m14.804s user 78m6.233s sys 4m53.896s
why is my program using system cpu time?
1.2
0
0
215
35,991,038
2016-03-14T15:03:00.000
1
0
0
0
wxpython,wxwidgets
35,992,362
1
true
0
1
No, the only possibilities here are using a custom shape or SetTransparent() but the latter can only set the transparency uniformly. SetTransparent() could probably be extended to be more flexible, but so far nobody has done it.
1
0
0
Is there a way to cut a transparent rectangle in the background of a wxFrame to see the desktop or other windows behind that rect? Custom shape is not an option since I want to capture mouse events there too.
Transparent hole in wxFrame background
1.2
0
0
107
35,991,312
2016-03-14T15:15:00.000
1
0
0
0
python,pycharm
38,564,886
1
false
0
0
Database support available only in paid Jetbrains IDEs
1
0
0
I am not getting Database tool window under View-> Tool Windows, in pyhcharm community version software, so that I can connect to MYSQL server database. Also,please suggest me if there is other ways by which I can connect to MY SQL server database using pycharm community version.
Unable to connect to MYSQL server database using pycharm community (2.7.11)
0.197375
1
0
493
35,991,403
2016-03-14T15:20:00.000
31
0
1
0
python,pip,package,installation
41,852,419
30
false
0
0
I got stuck exactly with the same error with psycopg2. It looks like I skipped a few steps while installing Python and related packages. sudo apt-get install python-dev libpq-dev Go to your virtual env pip install psycopg2 (In your case you need to replace psycopg2 with the package you have an issue with.) It worked seamlessly.
8
360
0
I'm new to Python and have been trying to install some packages with pip. But pip install unroll gives me Command "python setup.py egg_info" failed with error code 1 in C:\Users\MARKAN~1\AppData\Local\Temp\pip-build-wa7uco0k\unroll\ How can I solve this?
"pip install unroll": "python setup.py egg_info" failed with error code 1
1
0
0
878,063
35,991,403
2016-03-14T15:20:00.000
3
0
1
0
python,pip,package,installation
42,803,623
30
false
0
0
I had the same problem. The problem was: pyparsing 2.2 was already installed and my requirements.txt was trying to install pyparsing 2.0.1, which threw this error. Context: I was using virtualenv, and it seems the 2.2 came from my global OS Python site-packages, but even with the --no-site-packages flag (now the default in the latest virtualenv) the 2.2 was still present, surely because I installed Python from their website and it added Python libraries to my $PATH. Maybe a pip install --ignore-installed would have worked. Solution: as I needed to move forward, I just removed pyparsing==2.0.1 from my requirements.txt.
8
360
0
I'm new to Python and have been trying to install some packages with pip. But pip install unroll gives me Command "python setup.py egg_info" failed with error code 1 in C:\Users\MARKAN~1\AppData\Local\Temp\pip-build-wa7uco0k\unroll\ How can I solve this?
"pip install unroll": "python setup.py egg_info" failed with error code 1
0.019997
0
0
878,063
35,991,403
2016-03-14T15:20:00.000
2
0
1
0
python,pip,package,installation
41,725,563
30
false
0
0
I tried all of the above with no success. I then updated my Python version from 2.7.10 to 2.7.13, and it resolved the problems that I was experiencing.
8
360
0
I'm new to Python and have been trying to install some packages with pip. But pip install unroll gives me Command "python setup.py egg_info" failed with error code 1 in C:\Users\MARKAN~1\AppData\Local\Temp\pip-build-wa7uco0k\unroll\ How can I solve this?
"pip install unroll": "python setup.py egg_info" failed with error code 1
0.013333
0
0
878,063
35,991,403
2016-03-14T15:20:00.000
0
0
1
0
python,pip,package,installation
54,057,248
30
false
0
0
Had the same problem on my Win10 PC with different packages and tried everything mentioned so far. Finally solved it by disabling Comodo Auto-Containment. Since nobody has mentioned it yet, I hope it helps someone.
8
360
0
I'm new to Python and have been trying to install some packages with pip. But pip install unroll gives me Command "python setup.py egg_info" failed with error code 1 in C:\Users\MARKAN~1\AppData\Local\Temp\pip-build-wa7uco0k\unroll\ How can I solve this?
"pip install unroll": "python setup.py egg_info" failed with error code 1
0
0
0
878,063
35,991,403
2016-03-14T15:20:00.000
0
0
1
0
python,pip,package,installation
58,241,852
30
false
0
0
Methods to solve setup.pu egg_info issue when updating setuptools or not other methods doesnot works. If CONDA version of the library is available to install use conda instead of pip. Clone the library repo and then try installation by pip install -e . or by python setup.py install
8
360
0
I'm new to Python and have been trying to install some packages with pip. But pip install unroll gives me Command "python setup.py egg_info" failed with error code 1 in C:\Users\MARKAN~1\AppData\Local\Temp\pip-build-wa7uco0k\unroll\ How can I solve this?
"pip install unroll": "python setup.py egg_info" failed with error code 1
0
0
0
878,063
35,991,403
2016-03-14T15:20:00.000
0
0
1
0
python,pip,package,installation
66,668,331
30
false
0
0
Upgrading the Python version did the trick for me.
8
360
0
I'm new to Python and have been trying to install some packages with pip. But pip install unroll gives me Command "python setup.py egg_info" failed with error code 1 in C:\Users\MARKAN~1\AppData\Local\Temp\pip-build-wa7uco0k\unroll\ How can I solve this?
"pip install unroll": "python setup.py egg_info" failed with error code 1
0
0
0
878,063
35,991,403
2016-03-14T15:20:00.000
0
0
1
0
python,pip,package,installation
49,075,069
30
false
0
0
Upgrading Python to version 3 fixed my problem. Nothing else did.
8
360
0
I'm new to Python and have been trying to install some packages with pip. But pip install unroll gives me Command "python setup.py egg_info" failed with error code 1 in C:\Users\MARKAN~1\AppData\Local\Temp\pip-build-wa7uco0k\unroll\ How can I solve this?
"pip install unroll": "python setup.py egg_info" failed with error code 1
0
0
0
878,063
35,991,403
2016-03-14T15:20:00.000
0
0
1
0
python,pip,package,installation
70,627,263
30
false
0
0
I have just encountered the same problem when trying to pip install -e . a new repo. I did not notice that the contents of setup.py haven't been saved properly and I was effectively running the command with an empty setup.py. Hence you may experience the same error message if the setup.py of the target package is either empty or malformed.
8
360
0
I'm new to Python and have been trying to install some packages with pip. But pip install unroll gives me Command "python setup.py egg_info" failed with error code 1 in C:\Users\MARKAN~1\AppData\Local\Temp\pip-build-wa7uco0k\unroll\ How can I solve this?
"pip install unroll": "python setup.py egg_info" failed with error code 1
0
0
0
878,063
35,999,344
2016-03-14T22:30:00.000
4
0
0
1
python-3.x,tkinter
35,999,383
6
false
0
1
Run Tkinter.TclVersion or Tkinter.TkVersion, and if neither works, try Tkinter.__version__
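For Python 3 (where the module is lowercase tkinter) a quick check looks like this; the last line is just one way to get the full patch level:

```python
import tkinter

print(tkinter.TkVersion)                        # e.g. 8.6
print(tkinter.TclVersion)
print(tkinter.Tcl().eval("info patchlevel"))    # e.g. 8.6.8
```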
2
27
0
Hi have been scavenging the web for answers on how to do this but there was no direct answer. Does anyone know how I can find the version number of tkinter?
How to determine what version of python3 tkinter is installed on my linux machine?
0.132549
0
0
40,994
35,999,344
2016-03-14T22:30:00.000
5
0
0
1
python-3.x,tkinter
63,566,647
6
false
0
1
Type this command on the Terminal and run it. python -m tkinter A small window will appear with the heading tk and two buttons: Click Me! and QUIT. There will be a text that goes like This is Tcl/Tk version ___. The version number will be displayed in the place of the underscores.
2
27
0
Hi have been scavenging the web for answers on how to do this but there was no direct answer. Does anyone know how I can find the version number of tkinter?
How to determine what version of python3 tkinter is installed on my linux machine?
0.16514
0
0
40,994
36,001,080
2016-03-15T01:25:00.000
2
0
0
0
android,python,android-ndk,cocos2d-x
36,012,463
5
true
0
1
There are some changes made on the new NDK r11 release and some tools were moved to different folders, I guess cocos2d-x scripts need to be updated to support the latest NDK release. You can wait for a new cocos2d-x release or use the previous NDK (r10e) version.
5
0
0
I am trying to set up Cocos2d-x for Android in Windows using python, but my ROOT values give me an error. My path to my ndk folder is: C:\Users\user\AppData\Local\Android\ndk\android-ndk-r11 When I try to run setup.py is asks "Please enter the path of NDK_ROOT: ". I enter in my path to the ndk but it then says: "Error: "ndk folder path here" is not a valid path of NDK_ROOT. Ignoring it. I have also tried manually entering it in my environmental variables, but it still does not work. What is my error here? My adk folder contains the following: build (folder) platforms (folder) prebuilt (folder) python-packages(folder) sources (folder) toolchains (folder) CHANGELOG.md (md file) ndk-build (Windows command script) source.properties (properties file) Edit: I have now gotten the NDK to accept on the build folder in my previous location, but there is still an issue with the NDK when I try to create a project.
NDK_ROOT path for ndk r11 in Cocos2d-x
1.2
0
0
3,355
36,001,080
2016-03-15T01:25:00.000
0
0
0
0
android,python,android-ndk,cocos2d-x
36,040,698
5
false
0
1
Try adding a "\" at the end. I know it sounds silly, but I had the same problem when setting up on my macOS. Beyond that, see if you encounter other problems, like a wrong toolchain version when compiling.
5
0
0
I am trying to set up Cocos2d-x for Android in Windows using python, but my ROOT values give me an error. My path to my ndk folder is: C:\Users\user\AppData\Local\Android\ndk\android-ndk-r11 When I try to run setup.py is asks "Please enter the path of NDK_ROOT: ". I enter in my path to the ndk but it then says: "Error: "ndk folder path here" is not a valid path of NDK_ROOT. Ignoring it. I have also tried manually entering it in my environmental variables, but it still does not work. What is my error here? My adk folder contains the following: build (folder) platforms (folder) prebuilt (folder) python-packages(folder) sources (folder) toolchains (folder) CHANGELOG.md (md file) ndk-build (Windows command script) source.properties (properties file) Edit: I have now gotten the NDK to accept on the build folder in my previous location, but there is still an issue with the NDK when I try to create a project.
NDK_ROOT path for ndk r11 in Cocos2d-x
0
0
0
3,355
36,001,080
2016-03-15T01:25:00.000
0
0
0
0
android,python,android-ndk,cocos2d-x
36,001,423
5
false
0
1
I know Android Studio should set your NDK path in local.properties; the settings look like this: ndk.dir=C:\android-ndk-r10e sdk.dir=C:\Studio_SDK
5
0
0
I am trying to set up Cocos2d-x for Android in Windows using python, but my ROOT values give me an error. My path to my ndk folder is: C:\Users\user\AppData\Local\Android\ndk\android-ndk-r11 When I try to run setup.py is asks "Please enter the path of NDK_ROOT: ". I enter in my path to the ndk but it then says: "Error: "ndk folder path here" is not a valid path of NDK_ROOT. Ignoring it. I have also tried manually entering it in my environmental variables, but it still does not work. What is my error here? My adk folder contains the following: build (folder) platforms (folder) prebuilt (folder) python-packages(folder) sources (folder) toolchains (folder) CHANGELOG.md (md file) ndk-build (Windows command script) source.properties (properties file) Edit: I have now gotten the NDK to accept on the build folder in my previous location, but there is still an issue with the NDK when I try to create a project.
NDK_ROOT path for ndk r11 in Cocos2d-x
0
0
0
3,355
36,001,080
2016-03-15T01:25:00.000
1
0
0
0
android,python,android-ndk,cocos2d-x
38,472,553
5
false
0
1
If you want to set a relative path that works on every other system without changing it again and again, set the NDK root path like this: NDK_ROOT="$APP_ROOT/../your ndk name" This assumes your NDK is placed one directory above the proj.android folder.
5
0
0
I am trying to set up Cocos2d-x for Android in Windows using python, but my ROOT values give me an error. My path to my ndk folder is: C:\Users\user\AppData\Local\Android\ndk\android-ndk-r11 When I try to run setup.py is asks "Please enter the path of NDK_ROOT: ". I enter in my path to the ndk but it then says: "Error: "ndk folder path here" is not a valid path of NDK_ROOT. Ignoring it. I have also tried manually entering it in my environmental variables, but it still does not work. What is my error here? My adk folder contains the following: build (folder) platforms (folder) prebuilt (folder) python-packages(folder) sources (folder) toolchains (folder) CHANGELOG.md (md file) ndk-build (Windows command script) source.properties (properties file) Edit: I have now gotten the NDK to accept on the build folder in my previous location, but there is still an issue with the NDK when I try to create a project.
NDK_ROOT path for ndk r11 in Cocos2d-x
0.039979
0
0
3,355
36,001,080
2016-03-15T01:25:00.000
0
0
0
0
android,python,android-ndk,cocos2d-x
36,167,731
5
false
0
1
I have figured out that at the time of this post, the NDK11 has just been released and the current version of Cocos2dx does not support it. I had to find a download link elsewhere to download an older version.
5
0
0
I am trying to set up Cocos2d-x for Android in Windows using python, but my ROOT values give me an error. My path to my ndk folder is: C:\Users\user\AppData\Local\Android\ndk\android-ndk-r11 When I try to run setup.py is asks "Please enter the path of NDK_ROOT: ". I enter in my path to the ndk but it then says: "Error: "ndk folder path here" is not a valid path of NDK_ROOT. Ignoring it. I have also tried manually entering it in my environmental variables, but it still does not work. What is my error here? My adk folder contains the following: build (folder) platforms (folder) prebuilt (folder) python-packages(folder) sources (folder) toolchains (folder) CHANGELOG.md (md file) ndk-build (Windows command script) source.properties (properties file) Edit: I have now gotten the NDK to accept on the build folder in my previous location, but there is still an issue with the NDK when I try to create a project.
NDK_ROOT path for ndk r11 in Cocos2d-x
0
0
0
3,355
36,002,241
2016-03-15T03:34:00.000
0
0
1
0
python,scip
36,006,957
1
true
0
0
Please make sure you are in a different directory from the one where you installed the interface. Python gets confused when trying to import something if there is a directory of the same name in the current directory. Please go into the subdirectory scip/interfaces/python/tests and try running the provided test files.
1
0
0
I am trying to use scip's python interface. I have already downloaded the python interface and installed it according to the instructions given in Python interface for the SCIP Optimization Suite. However, when I try to import pyscipopt to python, there is an ImportError:No module named 'pyscipopt.scip'. I'm using scipsuite-3.2.1 under ubuntu.
Cannot import pyscipopt into python (ubuntu)
1.2
0
0
504
36,003,924
2016-03-15T06:15:00.000
0
0
0
0
python,scip
36,008,452
1
false
0
0
This is currently not supported. You need to loop through your quadratic constraints and add them one after the other using the expression method.
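A sketch of that loop for a constraint of the form x'Qx <= 0.2, building the expression term by term; the coefficient matrix is a placeholder, and quicksum is assumed to be available in your version of the Python interface:

```python
from pyscipopt import Model, quicksum

n = 3
Q = [[3.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 5.0]]          # placeholder coefficient matrix

model = Model("quadratic")
x = [model.addVar("x%d" % i) for i in range(n)]

# Build x'Qx as one expression and add it as a single quadratic constraint
expr = quicksum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
model.addCons(expr <= 0.2)
```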
1
0
1
How do I add a quadratic constraint using the scip python interface? In one of the examples, I see something like model.addCons(x*x+y*y<=6) However, since I have a lot of variables(x1..xn and my constraint is of the form x'Qx<=0.2, where x is n*1 and Q is n*n), this method is rather impossible. How can I put the quadratic constraint in a python dictionary of coeffs as I do the linear constraints? (coeffs={x**2:3.0,y**2:1.0,z**2:5.0} for example if I want 3x^2+y^2+5z^2<=10)
How do I add a quadratic constraint using coeff in the scipsuite python interface
0
0
0
103
36,006,701
2016-03-15T09:05:00.000
0
0
1
0
python,resources
36,007,180
4
false
0
0
No, it is not considered wasteful to define a class. By far the most important consideration in 99.9% of programs is clarity of the code. Write the code that best expresses your idea.
1
1
0
When considering a creation of a class in python just for having some attributes defined, as far as i understand, one should consider using a better data type like namedtuple which is considered more efficient. my question - is it true that in such cases its better using namedtuple or maybe a dict ? is it true that a python class wont be efficient solution in such a case ? and general speaking , when should i avoid creating a new class and choose another data type ? thanks Sivan
Is generating a python class considered wasteful with regard to resources?
0
0
0
48
36,007,373
2016-03-15T09:34:00.000
1
0
1
0
python-2.7,tkinter,exe
36,943,399
2
false
0
1
There were multiple issues associated with converting the Python Script that links modules through button clicks. Keeping in mind these factors, it would be best to convert it to exe using Cx_Freeze. It is more user-friendly and was highly effective for the GUI when compared to PyInstaller and Py2Exe.
2
1
0
I have created a GUI on Python 2.7.11 that consists of a main page along with page 1 and page 2 that are linked through buttons on main page. Converted main page to a python exe file using PyInstaller and there were no errors in the conversion. main page.exe appeared in the dist folder but on clicking it, a DOS screen flashed and the main page did not open nor persist on the screen. Being a beginner, I am not sure about how to proceed further. Please help.
Tkinter exe file - DOS screen flashes but GUI does not persist
0.099668
0
0
265
36,007,373
2016-03-15T09:34:00.000
1
0
1
0
python-2.7,tkinter,exe
36,009,578
2
true
0
1
If you've got a line like root.mainloop() at the end (with root standing for your main Tk window) to make sure the event loop runs, then you'll need to debug your code. Try running a small segment of the code at a time to see if all goes well, and see where it is that all doesn't go well; then examine the offending part closely to find the error, maybe running some lines of code in the interpreter from the command line to see what (if any) error messages you get. On the other hand, if you don't have a line like root.mainloop() at the end, that could produce the error you saw. Being a Python beginner myself, and having learned to program in Tcl where the Tk event loop runs automatically, I've seen that error a few times myself. :o(
2
1
0
I have created a GUI on Python 2.7.11 that consists of a main page along with page 1 and page 2 that are linked through buttons on main page. Converted main page to a python exe file using PyInstaller and there were no errors in the conversion. main page.exe appeared in the dist folder but on clicking it, a DOS screen flashed and the main page did not open nor persist on the screen. Being a beginner, I am not sure about how to proceed further. Please help.
Tkinter exe file - DOS screen flashes but GUI does not persist
1.2
0
0
265
36,007,774
2016-03-15T09:51:00.000
1
0
1
0
python,macos,module,installation
36,007,983
1
true
0
0
You can do this if pip version >=1.5 $ pip2.6 install package $ pip3.3 install package if pip version is between 0.8 and 1.5 this will work $ pip-2.7 install package $ pip-3.3 install package
1
0
0
I'm working on mac OS X and I have both Python 2.7 and 3.3 on it. I want to install pykml module and i successfully installed it on python 3.3, but how do I do the same for Python 2.7?
Install module in Python 2.7 and not in Python 3.3
1.2
0
0
66
36,009,435
2016-03-15T11:04:00.000
1
0
1
0
python,priority-queue
36,012,130
1
true
0
0
I think the best approach is to have a separate heap for each key. When you remove the first element of one heap, you can't efficiently remove it from the other heaps, but you can "mark" it in some way (e.g. by mutating it, or by adding it to a set if it's hashable) so that it can be ignored if it eventually gets popped again somewhere else. This may bloat your heaps a little bit as it will keep the useless values around, but I think it will still be much faster than removing and re-heapifying.
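A self-contained sketch of that lazy-removal idea, using the example data from the question; the class and method names are made up for illustration:

```python
import heapq

class MultiKeyPQ:
    def __init__(self, items, nkeys):
        # items: list of (keys_tuple, payload), e.g. ((1, 2), 'A')
        self.removed = set()
        self.heaps = [
            [(keys[k], id(payload), payload) for keys, payload in items]
            for k in range(nkeys)
        ]
        for heap in self.heaps:
            heapq.heapify(heap)

    def pop(self, key):
        heap = self.heaps[key]
        while heap:
            _, ident, payload = heapq.heappop(heap)
            if ident not in self.removed:   # skip items already popped via another key
                self.removed.add(ident)
                return payload
        raise IndexError("pop from empty queue")

pq = MultiKeyPQ([((1, 2), 'A'), ((3, 3), 'B'), ((4, 4), 'C'), ((5, 1), 'D')], nkeys=2)
print(pq.pop(0), pq.pop(1), pq.pop(0), pq.pop(1))   # A D B C
```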
1
0
0
I have run into a problem which I am pretty certain somebody has allready encountered and solved, but I just can't find a solution. I have a number of objects, each object has several keys. Say: (1, 2): A (3, 3): B (4, 4): C (5, 1): D I need a "priority queue"-like data structure which lets me efficiently return objects by priority for each of the keys separately. For example, if I return them by the first key, I would get (A, B, C, D), and by the second key, I would get (D, A, B, C). However, I also need to be able to mix. For example, returning by the keys alternatively, starting with the first one, giving me (A, D, B, C). Obviously, popping an object by the first key should remove the second key. A naive solution is resort the data when changing the lookup key, but it is too slow for my purposes. Another alternative is to use a heap and traverse the other heap for each object remove, but as far as I can tell that is also slow. Is there an algorithm for an efficient priority queue that lets me remove objects by multiple different keys? If there is an implementation for python it would be really nice, but an algorithm would be a very good start.
Efficient priority queue with multiple keys?
1.2
0
0
1,141
36,012,450
2016-03-15T13:16:00.000
2
1
0
0
python-3.x,rabbitmq,pika
36,022,628
1
true
0
0
No, but you can disable heartbeats. eandersson is right: no, you can't do that. But disabling heartbeats is probably the wrong idea, too. The point of a heartbeat is to tell you when your connection to the server drops, so you can take action as soon as possible. Common actions include (but are not limited to): crash the app and restart, recreating the needed connection(s); or re-create the connection(s) without restarting. How you handle the missed heartbeat / dropped connection is up to you, but ultimately the missed heartbeat is a sign that your connection has already dropped, not a cause of dropped connections.
1
1
0
Is there a way to configure RabbitMq to not close connections after missed heartbeats at all?
Is there a way to configure RabbitMq to not close connections after missed heartbeats?
1.2
0
0
117
36,018,586
2016-03-15T17:51:00.000
3
0
0
0
python,scikit-learn,svm,cross-validation,grid-search
36,019,131
1
true
0
0
Overfitting is generally associated with high variance, meaning that the model parameters that would result from being fitted to some realized data set have a high variance from data set to data set. You collected some data, fit some model, got some parameters ... you do it again and get new data and now your parameters are totally different. One consequence of this is that in the presence of overfitting, usually the training error (the error from re-running the model directly on the data used to train it) will be very low, or at least low in contrast to the test error (running the model on some previously unused test data). One diagnostic that is suggested by Andrew Ng is to separate some of your data into a testing set. Ideally this should have been done from the very beginning, so that happening to see the model fit results inclusive of this data would never have the chance to impact your decision. But you can also do it after the fact as long as you explain so in your model discussion. With the test data, you want to compute the same error or loss score that you compute on the training data. If training error is very low, but testing error is unacceptably high, you probably have overfitting. Further, you can vary the size of your test data and generate a diagnostic graph. Let's say that you randomly sample 5% of your data, then 10%, then 15% ... on up to 30%. This will give you six different data points showing the resulting training error and testing error. As you increase the training set size (decrease testing set size), the shape of the two curves can give some insight. The test error will be decreasing and the training error will be increasing. The two curves should flatten out and converge with some gap between them. If that gap is large, you are likely dealing with overfitting, and it suggests to use a large training set and to try to collect more data if possible. If the gap is small, or if the training error itself is already too large, it suggests model bias is the problem, and you should consider a different model class all together. Note that in the above setting, you can also substitute a k-fold cross validation for the test set approach. Then, to generate a similar diagnostic curve, you should vary the number of folds (hence varying the size of the test sets). For a given value of k, then for each subset used for testing, the other (k-1) subsets are used for training error, and averaged over each way of assigning the folds. This gives you both a training error and testing error metric for a given choice of k. As k becomes larger, the training set sizes becomes bigger (for example, if k=10, then training errors are reported on 90% of the data) so again you can see how the scores vary as a function of training set size. The downside is that CV scores are already expensive to compute, and repeated CV for many different values of k makes it even worse. One other cause of overfitting can be too large of a feature space. In that case, you can try to look at importance scores of each of your features. If you prune out some of the least important features and then re-do the above overfitting diagnostic and observe improvement, it's also some evidence that the problem is overfitting and you may want to use a simpler set of features or a different model class. 
On the other hand, if you still have high bias, it suggests the opposite: your model doesn't have enough feature space to adequately account for the variability of the data, so instead you may want to augment the model with even more features.
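A minimal illustration of the train-versus-test comparison described above, using a synthetic dataset and the current scikit-learn module layout; the parameter grid and data are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic data standing in for the real problem
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
                    cv=5)
grid.fit(X_tr, y_tr)

# A large gap between these two scores is the overfitting signal described above
print("training accuracy:", grid.score(X_tr, y_tr))
print("held-out accuracy:", grid.score(X_te, y_te))
```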
1
3
1
I have an rbf SVM that I'm tuning with gridsearchcv. How do I tell if my good results are actually good results or whether they are overfitting?
Identifying overfitting in a cross validated SVM when tuning parameters
1.2
0
0
1,516
36,019,161
2016-03-15T18:21:00.000
4
1
0
1
python,amazon-web-services,amazon-ec2,cron,aws-cli
36,037,353
2
false
1
0
The EC2 service stores a LaunchTime value for each instance which you can find by doing a DescribeInstances call. However, if you stop the instance and then restart it, this value will be updated with the new launch time so it's not really a reliable way to determine how long the instance has been running since it's original launch. The only way I can think of to determine the original launch time would be to use CloudTrail (assuming you have it enabled for your account). You could search CloudTrail for the original launch event and this would have an EventTime associated with it.
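A rough boto3 sketch of the DescribeInstances approach, keeping in mind the stop/restart caveat above; the age threshold and the commented-out terminate call are assumptions:

```python
from datetime import datetime, timezone
import boto3

ec2 = boto3.client("ec2")
max_hours = 24   # your own definition of "long running"

resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}])

for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        age = datetime.now(timezone.utc) - inst["LaunchTime"]   # LaunchTime is tz-aware
        if age.total_seconds() > max_hours * 3600:
            print("long running:", inst["InstanceId"], age)
            # ec2.terminate_instances(InstanceIds=[inst["InstanceId"]])
```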
1
4
0
I am looking for a way to programmatically kill long running AWS EC2 Instances. I did some googling around but I don't seem to find a way to find how long has an instance been running for, so that I then can write a script to delete the instances that have been running longer than a certain time period... Anybody dealt with this before?
Is there a way to determine how long has an Amazon AWS EC2 Instance been running for?
0.379949
0
0
3,416
36,022,384
2016-03-15T21:19:00.000
2
0
0
0
python,sqlite
36,030,685
1
true
0
0
SQLite computes each result row on demand, so it is neither possible to go back to an earlier row, nor to determine how many following rows there will be. The only way to go back is to re-execute the query. Alternatively, call fetchall() first, and then use the returned list instead of the cursor.
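A small sketch of the fetchall-then-reuse pattern, with an in-memory database standing in for the real one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")              # stand-in database
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT)")
cur.executemany("INSERT INTO users VALUES (?)", [("a",), ("b",), ("c",), ("d",)])

cur.execute("SELECT * FROM users")
rows = cur.fetchall()       # materialise all 4 rows once

first = rows[0]             # what fetchone() would have given you
for row in rows:            # and the full result set is still available
    print(row)
```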
1
2
0
If cursor.execute('select * from users') returns a 4 row set, and then cursor.fetchone(), is there a way to re-position the cursor to the beginning of the returned results so that a subsequent cursor.fetchall() gives me all 4 rows? Or do I need to the cursor.execute again, and then cursor.fetchall()? This seems awkward. I checked the Python docs and couldn't find something relevant. What am I missing?
Python & SQLite: fetchone() and fetchall() and cursor control
1.2
1
0
1,900
36,022,867
2016-03-15T21:48:00.000
0
0
1
0
python,amazon-s3,pip
62,367,203
3
false
0
0
What about wrapping up the whl file (e.g. yourpkg-1.0-py3-none-any.whl) inside another zip file (e.g. yourpkg.zip) with a deterministic name. Then you can set up some cron scripts to check locally whether the deterministic key has a new s3 file, and if so then unzip the whl and install it.
1
21
0
we are trying to come up with a solution to have AWS S3 to host and distribute our Python packages. Basically what we want to do is using python3 setup.py bdist_wheel to create a wheel. Upload it to S3. Then any server or any machine can do pip install $http://path/on/s3. (including a virtualenv in AWS lambda) (We've looked into Pypicloud and thought it's an overkill.) Creating package and installing from S3 work fine. There is only one issue here: we will release new code and give them different versions. If we host our code on Pypi, you can upgrade some packages to their newest version by calling pip install package --upgrade. But if you host your packages on S3, how do you let pip know there's a newer version exists? How do you roll back to an older version by simply giving pip the version number? Is there a way to let pip know where to look for different version of wheels on S3?
If we want use S3 to host Python packages, how can we tell pip where to find the newest version?
0
0
0
13,564
36,024,460
2016-03-15T23:57:00.000
0
0
0
0
python,arrays,numpy
43,882,395
2
false
0
0
You should not append to arrays if you can avoid it, due to efficiency issues. Appending means changing the allocated memory size, which can run into non-contiguous memory space, so an inefficient allocation or reallocation becomes necessary. This can slow down your program a lot, especially for large arrays. If you are implementing a fixed time-step Runge-Kutta you know beforehand how many points your solution is going to have at time T: it's N = (T-t0)/h+1, where T is the final time, t0 the initial time, and h the time step. You can initialize your array with zeros (using states = np.zeros((3,N))) and fill the values as you go, associating the index i with the time t[i] = t0 + i*h. This would be inside the loop: states[:,i+1] = states[:,i] + RK4_step(states[:,i]), where RK4_step(states[:,i]) is a function returning an array (column) with the variation of your state values in one step of the Runge-Kutta method. Even if your time step is variable you should still do that, but with nonuniform times t[i]. Or, you could use scipy.integrate.odeint(), which returns the solution of an ODE at the required times.
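A minimal sketch of the preallocation pattern, with a placeholder right-hand side standing in for the real Runge-Kutta step and made-up initial values:

```python
import numpy as np

t0, T, h = 0.0, 10.0, 0.01
N = int((T - t0) / h) + 1

def rk4_step(state):
    """Placeholder for the real RK4 update; returns the increment for one step."""
    return np.zeros_like(state)

states = np.zeros((3, N))        # allocated once, one column per time step
states[:, 0] = [1.0, 0.0, 0.0]   # initial condition

for i in range(N - 1):
    states[:, i + 1] = states[:, i] + rk4_step(states[:, i])
```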
1
0
1
For starters, I am doing a Runge-Kutta on a three-DOF NumPy array. My array looks like this: states = [[X], [Vx], [Y], [Vy], [Z], [Vz]] I run my Runge-Kutta, and get my four K values, which I transpose with [newaxis]. So when I try to append the new states to my states array as follows: states = append(states, states[:,i] + (K1.T + 2 * K2.T + 2 * K3.T + K4.T)/6, 1) where "i" is a counter that starts at 0 and counts up for each iteration. However, when I run my code my resulting states array is not two columns of six elements. It appears that I am appending a row vector instead of a column vector to my states array. I ran the code with two elements (X, Vx) in the column, and everything appended just fine (or at least my result made sense). I have tried forcing the result of my Runge-Kutta to be a column vector, but that messes up my calculation of the K-values. I have tried variations of my append code, and still have the same result. This is a clone of a Matlab code, and I have been unable to find anything on NumPy arrays and indexing that helps me. Any help is appreciated. Thanks. UPDATE: states[:,0] = [[0], [2300], [0], [0], [-1600], [500]] - original states[:,1] = [[2300], [2100], [0], [0], [-2100], [450]] - append states = [[0, 2300], [2300, 2100], [0, 0], [0, 0], [-1600, -2100], [500, 450]] - final These are column vectors.
Appending to NumPy (Python) Array
0
0
0
536
36,029,866
2016-03-16T08:02:00.000
1
0
0
0
python-2.7,boto,emr,boto3
46,980,003
2
false
1
0
According to the boto3 documentation, yes it does support spot blocks. BlockDurationMinutes (integer) -- The defined duration for Spot instances (also known as Spot blocks) in minutes. When specified, the Spot instance does not terminate before the defined duration expires, and defined duration pricing for Spot instances applies. Valid values are 60, 120, 180, 240, 300, or 360. The duration period starts as soon as a Spot instance receives its instance ID. At the end of the duration, Amazon EC2 marks the Spot instance for termination and provides a Spot instance termination notice, which gives the instance a two-minute warning before it terminates. Inside the LaunchSpecifications dictionary, you need to assign a value to BlockDurationMinutes. However, the maximum value is 360 (6 hours) for a spot block.
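A hedged sketch of where that parameter sits in a run_job_flow call; the release label, subnet, instance type, capacities and roles below are placeholders, and a real cluster would also need a MASTER fleet:

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="spot-block-cluster",
    ReleaseLabel="emr-5.20.0",
    Instances={
        "Ec2SubnetId": "subnet-xxxxxxxx",
        "InstanceFleets": [{
            "Name": "core-fleet",
            "InstanceFleetType": "CORE",
            "TargetSpotCapacity": 2,
            "InstanceTypeConfigs": [{"InstanceType": "m4.xlarge"}],
            "LaunchSpecifications": {
                "SpotSpecification": {
                    "TimeoutDurationMinutes": 20,
                    "TimeoutAction": "TERMINATE_CLUSTER",
                    "BlockDurationMinutes": 60,   # the spot-block duration
                },
            },
        }],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```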
1
4
0
How can I launch an EMR using spot block (AWS) using boto ? I am trying to launch it using boto but I cannot find any parameter --block-duration-minutes in boto, I am unable to find how to do this using boto3.
How can I launch an EMR using SPOT Block using boto?
0.099668
0
1
454
36,033,095
2016-03-16T10:33:00.000
1
0
1
0
python,datetime,matplotlib,python-dateutil
36,033,515
3
false
0
0
First of all, it is not an error, it's a warning. Second, most likely this is not a problem in your code but rather a problem in Matplotlib, which needs to fix how it calls a function or method from python-dateutil. You can most likely ignore this warning, and it will be fixed in the next Matplotlib version.
3
1
1
Since i've installed the last version of matplotlib (1.5.1), I have the following error when i try to plot values with datetime as X. DeprecationWarning: Using both 'count' and 'until' is inconsistent with RFC 2445 and has been deprecated in dateutil. Future versions will raise an error. Does someone met ths error, and knows how to correct it ?
DeprecationWarning with matplotlib and dateutil
0.066568
0
0
842
36,033,095
2016-03-16T10:33:00.000
1
0
1
0
python,datetime,matplotlib,python-dateutil
36,626,778
3
false
0
0
Best solution: in the file /matplotlib/dates.py, comment out line 830: self.rule.set(dtstart=start, until=stop, count=self.MAXTICKS + 1)
3
1
1
Since i've installed the last version of matplotlib (1.5.1), I have the following error when i try to plot values with datetime as X. DeprecationWarning: Using both 'count' and 'until' is inconsistent with RFC 2445 and has been deprecated in dateutil. Future versions will raise an error. Does someone met ths error, and knows how to correct it ?
DeprecationWarning with matplotlib and dateutil
0.066568
0
0
842
36,033,095
2016-03-16T10:33:00.000
3
0
1
0
python,datetime,matplotlib,python-dateutil
36,423,287
3
false
0
0
The issue has been fixed in Matplotlib, but not yet released in a finalised version (>= 1.5.2). I had to install the current working version with pip install git+https://github.com/matplotlib/matplotlib
3
1
1
Since i've installed the last version of matplotlib (1.5.1), I have the following error when i try to plot values with datetime as X. DeprecationWarning: Using both 'count' and 'until' is inconsistent with RFC 2445 and has been deprecated in dateutil. Future versions will raise an error. Does someone met ths error, and knows how to correct it ?
DeprecationWarning with matplotlib and dateutil
0.197375
0
0
842
36,037,286
2016-03-16T13:37:00.000
0
0
0
0
python,ios,django,sockets,websocket
36,880,964
1
false
1
0
You can achieve that by making periodic Ajax calls from the client to the server. From the documentation: A client wishing to trigger events on the server side shall use XMLHttpRequests (Ajax), as they are much more suitable, rather than messages sent via Websockets. The main purpose of Websockets is to communicate asynchronously from the server to the client. Unfortunately I was unable to find a way to achieve this using just websocket messages.
1
0
0
I'm working with django-websocket-redis lib, that allow establish websockets over uwsgi in separated django loop. By the documentation I understand well how to send data from server through websockets, but I don't understand how to receive. Basically I have client and I want to send periodically from the client to server status. I don't understand what I need to do, to handle receiving messages from client on server side? What URL I should use on client?
Receive data on server side with django-websocket-redis?
0
0
1
312
36,039,397
2016-03-16T15:01:00.000
1
0
0
1
python,linux,python-2.7,cx-oracle
36,050,958
3
false
0
0
Yes, you can simply follow these steps: Download the source archive and unpack it somewhere. Run the command "python setup.py build" Copy the library to a location of your choice where you do have access (or you can simply leave it in the build location, too, if you prefer) Set the environment variable PYTHONPATH to point to the location of cx_Oracle.so
2
0
0
Im trying to install cx_oracle with python2.7.11. all the tutorials i found for installing cx_oracle needs root access, however on the vm i dont have root access on the /usr or /etc folders. Is there any way to install cx_oracle in my user directory?
Is there a way to install cx_oracle without root access in linux environment?
0.066568
0
0
1,472
36,039,397
2016-03-16T15:01:00.000
0
0
0
1
python,linux,python-2.7,cx-oracle
36,051,557
3
false
0
0
Use a Python virtual environment - this way you never need system privileges to add new functionality to your Python dev environment. Look for the command pyvenv - there is lots of info on this.
2
0
0
Im trying to install cx_oracle with python2.7.11. all the tutorials i found for installing cx_oracle needs root access, however on the vm i dont have root access on the /usr or /etc folders. Is there any way to install cx_oracle in my user directory?
Is there a way to install cx_oracle without root access in linux environment?
0
0
0
1,472
36,046,802
2016-03-16T20:58:00.000
1
0
1
0
python
36,046,928
2
false
0
0
The while loop you have above will continue looping unless states[4,i] equals exactly 100,000, because False and False or True evaluates to True. This is likely why you are experiencing the infinite loop. You may want to remove the third condition and, if necessary, perform other checks inside the loop.
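One possible shape for those in-loop checks, sketched with a flag; a toy parabolic altitude list stands in for the real Runge-Kutta output here:

```python
# Toy trajectory: rises above 100,000 m and comes back down
altitudes = [2000 * t - 5 * t**2 for t in range(400)]

i = 0
been_above = False
while True:
    alt = altitudes[i]
    if alt >= 100000:
        been_above = True                 # crossed the threshold on the way up
    if been_above and alt < 100000:
        break                             # back below after having been above
    i += 1

print("dropped below 100 km at step", i)
```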
1
0
0
I am trying to run a Python script that calculates the trajectory of an object from the ground up above 100,000-meters then back below 100,000-meters. I want to be able to figure out how much time is spent above 100,000-meters. I have my trajectory code just fine (Runge-Kutta), and I can get up to 100,000-meters. However, I cannot figure out the right Python algorithm to keep going up to max altitude and start coming down to below 100,000-meters. Here is what I have: while (states[4,(i - 1)] >= 100000 and states[4,i] <= 100000 or states[4,i] != 100000): I'm ending up in an infinite loop, though. states[4,i] is the altitude. My thinking is that if the previous (i - 1) altitude is above 100,000-meters and the current (i) altitude is below 100,000-meters, I want it to exit the while loop. states[4,i] != 100000 is meant to get me up to that altitude, at least. Thoughts?
Crossing an altitude in Python
0.099668
0
0
72
36,049,690
2016-03-17T00:47:00.000
1
0
0
0
python,scrapy
54,451,026
1
true
1
0
Downgrading to cffi==1.2.1 ended up being the solution for me.
1
6
0
I'm getting the following warning when running a scrapy crawler: C:\Users\dan\Anaconda2\envs\scrapy\lib\site-packages\cffi\model.py:526: UserWarning: 'point_conversion_form_t' has no values explicitly defined; next version will refuse to guess which integer type it is meant to be (unsigned/signed, int/long) % self._get_c_name()) I hadn't been getting this in my previous Anaconda Python install on my Windows 10. I had to reset my environment and now I am. It's not preventing the crawler from running, but it's kind of annoying. Can anyone tell me what might be causing this?
CFFI UserWarning: 'point_conversion_form_t' has no values explicitly defined;
1.2
0
0
2,949