Q_Id (int64, 2.93k to 49.7M) | CreationDate (string, length 23) | Users Score (int64, -10 to 437) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | DISCREPANCY (int64, 0 to 1) | Tags (string, length 6 to 90) | ERRORS (int64, 0 to 1) | A_Id (int64, 2.98k to 72.5M) | API_CHANGE (int64, 0 to 1) | AnswerCount (int64, 1 to 42) | REVIEW (int64, 0 to 1) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 15 to 5.1k) | Available Count (int64, 1 to 17) | Q_Score (int64, 0 to 3.67k) | Data Science and Machine Learning (int64, 0 to 1) | DOCUMENTATION (int64, 0 to 1) | Question (string, length 25 to 6.53k) | Title (string, length 11 to 148) | CONCEPTUAL (int64, 0 to 1) | Score (float64, -1 to 1.2) | API_USAGE (int64, 1 to 1) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 15 to 3.72M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,583,082 | 2017-03-03T15:46:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-2.7,python-3.x | 0 | 42,583,347 | 0 | 7 | 0 | false | 0 | 0 | It depends on OS (and the way Python has been installed).
For most current installations:
on Windows, Python 3.x installs a py command in the path that can be used that way:
py -2 launches Python2
py -3 launches Python3
On Unix-likes, the most common way is to have different names for the executables of different versions (or different symlinks to them). So you can normally call python2.7 or python2 directly to start that version (and python3 or python3.5 for the alternate one). By default only some of those symlinks may be installed, but at least one per version. Search your PATH to find them. | 5 | 0 | 0 | 0 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? | how to switch python interpreter in cmd? | 0 | 0 | 1 | 0 | 0 | 10,377 |
42,583,082 | 2017-03-03T15:46:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-2.7,python-3.x | 0 | 71,958,209 | 0 | 7 | 0 | false | 0 | 0 | As has been mentioned in other answers to this and similar questions, if you're using Windows, cmd reads down the PATH variable from the top down. On my system I have Python 3.8 and 3.10 installed. I wanted my cmd to solely use 3.8, so I moved it to the top of the PATH variable and the next time I opened cmd and used python --version it returned 3.8.
Hopefully this is useful for future devs researching this specific question. | 5 | 0 | 0 | 0 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? | how to switch python interpreter in cmd? | 0 | 0 | 1 | 0 | 0 | 10,377 |
42,583,082 | 2017-03-03T15:46:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-2.7,python-3.x | 0 | 42,583,188 | 0 | 7 | 0 | false | 0 | 0 | Usually, on all major operating systems, the commands python2 and python3 run the corresponding version of Python. If you have several versions of e.g. Python 3 installed, python3.2 or python3.5 would start Python 3.2 or Python 3.5. python usually starts the lowest version installed, I think.
Hope this helps! | 5 | 0 | 0 | 0 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? | how to switch python interpreter in cmd? | 0 | 0 | 1 | 0 | 0 | 10,377 |
42,583,082 | 2017-03-03T15:46:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-2.7,python-3.x | 0 | 42,583,177 | 0 | 7 | 0 | false | 0 | 0 | If you use Windows OS:
py -2.7 for python 2.7
py -3 for python 3.x
But first you need to check your PATH | 5 | 0 | 0 | 0 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? | how to switch python interpreter in cmd? | 0 | 0 | 1 | 0 | 0 | 10,377 |
42,602,039 | 2017-03-04T22:17:00.000 | 2 | 0 | 0 | 0 | 0 | qt,python-3.x,user-interface,pyqt5,qmediaplayer | 0 | 42,602,957 | 0 | 1 | 0 | true | 0 | 1 | Your question is a bit broad, but in general this is what you should do:
Create a QProgressBar
Create your QMediaPlayer
Listen to the currentMediaChanged() signal of your QMediaPlayer module; in your handler fetch the duration of the current media, divide by 1000 to get the length in seconds, set this as the maximum value of your QProgressBar; reset the progressbar.
Listen to the positionChanged() signal of your QMediaPlayer; in the handler fetch the current position; again divide by 1000 and set the value in your QProgressBar with setValue.
This should give you a progressbar that is automatically updated by the QMediaPlayer.
You may wish to disable the text in the progressbar as a percentage isn't really useful for a song playback. Unfortunately there doesn't seem to be an easy way to print the time in the progressbar. | 1 | 1 | 0 | 0 | I want to know how to get a progress-bar/seeker for the QMediaPlayer module on PyQt5... So on my music player application I can have a progress bar for the songs. Thank You in Advance | Connect QProgressBar or QSlider to QMediaPlayer for song progress | 0 | 1.2 | 1 | 0 | 0 | 1,095 |
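A minimal sketch of the wiring described in the answer above, assuming PyQt5; using durationChanged (rather than querying duration() inside a currentMediaChanged handler) and the bare widget setup are my own simplifications, not part of the answer:

```python
from PyQt5.QtWidgets import QApplication, QProgressBar
from PyQt5.QtMultimedia import QMediaPlayer

app = QApplication([])
bar = QProgressBar()
bar.setTextVisible(False)  # a percentage label is not very useful for playback
player = QMediaPlayer()    # call setMedia()/play() elsewhere to start playback

# durationChanged fires once the media length (in ms) is known: size the bar.
player.durationChanged.connect(
    lambda ms: (bar.setMaximum(max(1, ms // 1000)), bar.reset()))
# positionChanged fires periodically during playback: advance the bar.
player.positionChanged.connect(lambda ms: bar.setValue(ms // 1000))

bar.show()
app.exec_()
```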
42,610,590 | 2017-03-05T16:02:00.000 | 6 | 0 | 0 | 0 | 0 | python,node.js,scikit-learn,child-process | 0 | 62,075,227 | 0 | 1 | 1 | true | 1 | 0 | My recommendation: write a simple Python web service (I personally recommend Flask) and deploy your ML model behind it. Then you can easily send requests to your Python web service from your Node back-end. You wouldn't have a problem with the initial model loading: it is done once at app startup, and then you're good to go.
DO NOT GO FOR SCRIPT EXECUTIONS AND CHILD PROCESSES!!! I wrote that in all caps just to be sure you wouldn't do it. Believe me... it can potentially go very, very south, with zombie processes upon job termination and other issues. Let's just say it's not the standard way to do it.
You need to think about multi-request handling. I think flask now has it by default
I am just giving you general hints because your problem has been generally introduced. | 1 | 11 | 1 | 0 | I have a web server using NodeJS - Express and I have a Scikit-Learn (machine learning) model pickled (dumped) in the same machine.
What I need is to demonstrate the model by sending/receiving data from it to the server. I want to load the model on startup of the web server and keep "listening" for data inputs. When receive data, executes a prediction and send it back.
I am relatively new to Python. From what I've seen I could use a "Child Process" to execute that. I also saw some modules that run Python script from Node.
The problem is I want to load the model once and let it be for as long as the server is on. I don't want to keep loading the model every time due to its size. What is the best way to do that?
The idea is running everything in a AWS machine.
Thank you in advance. | Sklearn Model (Python) with NodeJS (Express): how to connect both? | 0 | 1.2 | 1 | 0 | 0 | 3,643 |
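A hedged sketch of the Flask service recommended above; the model path, route, and JSON field names are assumptions, not part of the answer:

```python
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the pickled scikit-learn model once, at startup.
with open("model.pkl", "rb") as f:  # the path is an assumption
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
    return jsonify(prediction=model.predict(features).tolist())

if __name__ == "__main__":
    app.run(port=5000)
```

The Node side then only needs to issue an HTTP POST to http://localhost:5000/predict.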
42,616,958 | 2017-03-06T03:13:00.000 | 1 | 0 | 1 | 0 | 0 | python | 0 | 42,618,800 | 0 | 1 | 0 | true | 0 | 0 | Save it in something next to or related to __file__, which is the path to the file the module was loaded from. I believe in some cases it can be a relative path, so you might want to store the memos in that path directly, or turn it into an absolute path or something. | 1 | 0 | 0 | 0 | I want to write a decorator that does persistent memoization (memoizing to disk). Since I want to use this decorator for many functions, I have to decide where to save memoizing data for these functions. I googled around and found two solutions:
let the functions decide where to store the memoizing data
automatically determine where to store the data by function names
However, in these two solutions, every function has to "know" about the others to avoid collisions of names (or destinations), which is a smell of bad design.
Thus, my question is: how do I avoid such collisions? | destination of python persistent memoization | 1 | 1.2 | 1 | 0 | 0 | 74 |
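A hedged sketch of the accepted idea: derive the cache location from the decorated function's own module via __file__, so collisions are avoided without functions knowing about each other. The file-naming scheme and the pickle format are my assumptions:

```python
import functools
import os
import pickle

def persistent_memoize(func):
    # Cache file lives next to the module that defines the function.
    module_path = os.path.abspath(func.__globals__["__file__"])
    cache_path = "%s.%s.cache" % (module_path, func.__name__)
    try:
        with open(cache_path, "rb") as f:
            cache = pickle.load(f)
    except (IOError, OSError, EOFError):
        cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
            with open(cache_path, "wb") as f:
                pickle.dump(cache, f)
        return cache[args]
    return wrapper
```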
42,641,657 | 2017-03-07T06:29:00.000 | 1 | 0 | 0 | 0 | 0 | python,mxnet | 0 | 43,152,282 | 0 | 1 | 0 | true | 0 | 0 | Use "y = mod.predict(val_iter, num_batch=1)" instead of "y = mod.predict(val_iter)"; then you get only one batch of labels. For example, if your batch_size is 10, you will only get the 10 labels. | 1 | 1 | 1 | 0 | I am using MXNet on IRIS dataset which has 4 features and it classifies the flowers as -'setosa', 'versicolor', 'virginica'. My training data has 89 rows. My label data is a row vector of 89 columns. I encoded the flower names into number -0,1,2 as it seems mx.io.NDArrayIter does not accept numpy ndarray with string values. Then I tried to predict using
re = mod.predict(test_iter)
I get a result which has the shape 14 * 10.
Why am I getting 10 columns when I have only 3 labels, and how do I map these results to my labels? The result of predict is shown below:
[[ 0.11760861 0.12082944 0.1207106  0.09154381 0.09155304 0.09155869 0.09154817 0.09155204 0.09154914 0.09154641]
 [ 0.1176083  0.12082954 0.12071151 0.09154379 0.09155323 0.09155825 0.0915481  0.09155164 0.09154923 0.09154641]
 [ 0.11760829 0.1208293  0.12071083 0.09154385 0.09155313 0.09155875 0.09154838 0.09155186 0.09154932 0.09154625]
 [ 0.11760861 0.12082901 0.12071037 0.09154388 0.09155303 0.09155875 0.09154829 0.09155209 0.09154959 0.09154641]
 [ 0.11760896 0.12082863 0.12070955 0.09154405 0.09155299 0.09155875 0.09154839 0.09155225 0.09154996 0.09154646]
 [ 0.1176089  0.1208287  0.1207095  0.09154407 0.09155297 0.09155882 0.09154844 0.09155232 0.09154989 0.0915464 ]
 [ 0.11760896 0.12082864 0.12070941 0.09154408 0.09155297 0.09155882 0.09154844 0.09155234 0.09154993 0.09154642]
 [ 0.1176088  0.12082874 0.12070983 0.09154399 0.09155302 0.09155872 0.09154837 0.09155215 0.09154984 0.09154641]
 [ 0.11760852 0.12082904 0.12071032 0.09154394 0.09155304 0.09155876 0.09154835 0.09155209 0.09154959 0.09154631]
 [ 0.11760963 0.12082832 0.12070873 0.09154428 0.09155257 0.09155893 0.09154856 0.09155177 0.09155051 0.09154671]
 [ 0.11760966 0.12082829 0.12070868 0.09154429 0.09155258 0.09155892 0.09154858 0.0915518  0.09155052 0.09154672]
 [ 0.11760949 0.1208282  0.12070852 0.09154446 0.09155259 0.09155893 0.09154854 0.09155205 0.0915506  0.09154666]
 [ 0.11760952 0.12082817 0.12070853 0.0915444  0.09155261 0.09155891 0.09154853 0.09155206 0.09155057 0.09154668]
 [ 0.1176096  0.1208283  0.12070892 0.09154423 0.09155267 0.09155882 0.09154859 0.09155172 0.09155044 0.09154676]] | mod.predict gives more columns than expected | 0 | 1.2 | 1 | 0 | 0 | 158 |
42,660,299 | 2017-03-07T23:29:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,jquery,python,ajax,http | 0 | 42,662,740 | 0 | 1 | 0 | false | 1 | 0 | Many (if not all) server-side technologies can solve your problem: CGI, Java Servlets, NodeJS, Python, PHP, etc.
The steps are:
In the browser, upload the file via an AJAX request.
On the server, receive the file sent from the browser and save it somewhere on the server disk.
After the file is saved, invoke your Python script to handle the file (see the sketch after this answer).
As your current script is written in Python, I guess Python is the best choice for the server-side technology. | 1 | 0 | 0 | 0 | I'm trying to start a python script that will parse a csv file uploaded from the UI by the user. On the client side, how do I make a call to start the python script (I've read AJAX http requests work)? And then secondly, how do I take the user input (just a simple user upload with the HTML tag) which will be read by the python script?
The back end python script works perfectly through the command line, I just need to create a front end for easier use. | Start python script from client side | 0 | 0 | 1 | 0 | 1 | 899 |
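A hedged sketch of steps 2 and 3 above using Flask; the route, the upload directory, and the name of the existing parsing script (parse_csv.py) are assumptions:

```python
import subprocess
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files["file"]                   # the AJAX-posted CSV
    path = "/tmp/" + secure_filename(f.filename)
    f.save(path)                                # step 2: save to server disk
    # step 3: invoke the existing script ("parse_csv.py" is a stand-in name)
    subprocess.call(["python", "parse_csv.py", path])
    return "OK"
```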
42,666,255 | 2017-03-08T08:20:00.000 | 4 | 0 | 0 | 0 | 0 | python,scikit-learn,k-means | 0 | 42,684,721 | 0 | 2 | 0 | true | 0 | 0 | You have access to the n_iter_ field of the KMeans class; it gets set after you call fit (or other routines that internally call fit).
Not your fault for overlooking that, it's not part of the documentation, I just found it by checking the source code ;) | 1 | 2 | 1 | 0 | I am trying to construct clusters out of a set of data using the Kmeans algorithm from SkLearn. I want to know how one can determine whether the algorithm actually converged to a solution for one's data.
We feed in the tol parameter to define the tolerance for convergence but there is also a max_iter parameter that defines the number of iterations the algorithm will do for each run. I get that the algorithm may not always converge within the max_iter times of iterations. So is there any attribute or a function that I can access to know if the algorithm converged before the max_iter iterations ? | Sklearn K means Clustering convergence | 1 | 1.2 | 1 | 0 | 0 | 2,975 |
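A small sketch of the n_iter_ check described above; the dataset and hyperparameters are only examples:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data
km = KMeans(n_clusters=3, max_iter=300, tol=1e-4).fit(X)

# n_iter_ is set by fit(); if it is strictly below max_iter, the run stopped
# because the tol criterion was met, i.e. the algorithm converged.
print(km.n_iter_, "iterations; converged:", km.n_iter_ < km.max_iter)
```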
42,667,584 | 2017-03-08T09:32:00.000 | 0 | 1 | 0 | 0 | 0 | python,nose,allure | 0 | 42,684,514 | 0 | 2 | 0 | false | 0 | 0 | How about adding the decorator to the test classes instead?
Not sure if it will work, but sometimes works nicely for @patch. | 1 | 0 | 0 | 0 | I use nosetests and allure framework for reporting purposes. In order to make the report look like I want, I have to add @nose.allure.feature('some feature') decorator to each test. The problem is that I have over 1000 test. Is there any way to modify tests before execution?
I was thinking about custom nose plugin, but not sure how can it be implemented. | Nosetest: add decorator for tests before execution | 0 | 0 | 1 | 0 | 0 | 279 |
42,678,845 | 2017-03-08T18:13:00.000 | -1 | 0 | 1 | 0 | 0 | python-2.7,python-3.x,ipython,anaconda,jupyter-notebook | 0 | 42,682,359 | 0 | 2 | 0 | false | 0 | 0 | Try changing the file directory from the Python 3.5 one to the Python 2.7 one, e.g. C:\Users\python36 to C:\Users\python27, in the IPython settings or preferences. | 1 | 0 | 0 | 0 | I am using jupyter notebook with the anaconda3 package but I want to use jupyter with anaconda2 and the packages I have already installed! How can I add anaconda2 as a jupyter kernel? | how can I change ipython kernel to python 2.7 | 0 | -0.099668 | 1 | 0 | 0 | 668 |
42,682,326 | 2017-03-08T21:34:00.000 | 0 | 0 | 1 | 1 | 0 | python,pip,homebrew | 0 | 42,702,937 | 0 | 2 | 0 | false | 0 | 0 | I found the answer in the Homebrew documentation. For Homebrew Python, you must use "pip3 install <package>" instead of "python -m pip install <package>".
There were two other issues that complicated this.
1. I had previously manually installed python 3.5. The bash profile was configured to point to this before /usr/local/bin.
2. In the documentation of pip, it mentions that the CLI command "pip" points to the last version of python that used it. So using "pip" alone was causing pip to load the modules into the 2.7 version of the python.
To fix this, I deleted the manually installed version and removed the garbage from the bash profile, and then everything seemed to work. | 1 | 1 | 0 | 0 | I have a mac running OS X. Although it has Python 2.7 preinstalled, I used home-brew to install Python 3.5, which works great. Now I'm looking to add modules using pip. Trouble is, when I used pip in the terminal, it looks like the module was installed; however, my Python 3.5 doesn't see it. After a bit of digging, I suspect the problem is that my pip is pointed at the Apple 2.7 Python version, and I realize the answer is I need to change the config on pip to point at the 3.5 version of Python, but I can't make any sense of the brew file structure in order to know where to point it. And, as I dig through the Cellar, I see multiple versions of pip, so I'm not even sure I'm using the right one, but not sure how to call the right one from the terminal. I'm sure this is very straightforward to experienced users, but I'm lost. | Understanding pip and home-brew file structure | 0 | 0 | 1 | 0 | 0 | 152 |
42,686,125 | 2017-03-09T03:29:00.000 | 1 | 0 | 0 | 1 | 0 | python,celery | 1 | 42,686,288 | 0 | 1 | 0 | true | 0 | 0 | What you need is a virtual environment. A virtual environment encapsulates a Python install, along with all the pip packages and executable files such as celery. Check out the virtualenv and virtualenvwrapper Python packages. | 1 | 0 | 0 | 0 | I used celery + requests first in Python 2.7, and it works fine, but I heard celery + aiohttp is faster, so I tested it in Python 3, and it really is fast. But then I found I can't use celery to start my program written in Python 2.7, because there are changes between them; when I use the command line to start celery I only get errors.
I guess I should just uninstall the celery of python3?
Is there a better way to do this?
In fact,I guess since there are many package works for both p2,p3,and use commandline to start,there must have a good solution. | how to use command line to start celery when I install it both in python2,python3 | 0 | 1.2 | 1 | 0 | 0 | 60 |
42,710,628 | 2017-03-10T05:05:00.000 | 0 | 0 | 0 | 0 | 1 | python,flask,flask-login,flask-security | 0 | 42,752,122 | 0 | 1 | 0 | true | 0 | 0 | I was using the same type of browser to try to log into different accounts, such as two Firefox windows, and I also tried two Firefox incognito windows; in both cases I think they shared the same cookies. After trying with one Chrome and one Firefox it worked correctly. | 1 | 3 | 0 | 0 | I currently have a flask application that uses Flask-Security to handle user login and registration. I'm trying to test a chatroom I made so I want to login to two different accounts in different windows to check if it works. However I can't do that because when I login to account2 it simply logs out account1 in my other browser. I'm certain this has something to do with Flask-Login and user sessions but I'm not sure how to fix this issue. If anyone could point me in the right direction that'd be awesome.
I tried looking at the LoginManager docs on Flask-Login's site but can't figure out how to disable cookies. | Flask multiple login from same computer | 0 | 1.2 | 1 | 0 | 0 | 1,395 |
42,711,310 | 2017-03-10T06:00:00.000 | 2 | 0 | 0 | 0 | 0 | python,numpy,matrix | 0 | 42,711,798 | 0 | 2 | 0 | true | 0 | 0 | Just take the reciprocals of the nonzero elements. You can check with a smaller diagonal matrix that this is what pinv does. | 1 | 1 | 1 | 0 | If I have a diagonal matrix whose diagonal is 100K x 1, how can I get its pseudo inverse?
I won't be able to diagonalise the matrix and then get the inverse like I would for a small matrix, so this won't work:
np.linalg.pinv(np.diag(D)) | How to get the pseudo inverse of a huge diagonal matrix in python? | 0 | 1.2 | 1 | 0 | 0 | 945 |
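A hedged sketch of the reciprocal trick from the answer above: the pseudo-inverse of diag(D) just replaces each nonzero diagonal entry with its reciprocal (zeros stay zero), so the full 100K x 100K matrix never needs to be built. The small test diagonal is only an example:

```python
import numpy as np

D = np.array([2.0, 0.0, 4.0, 5.0])  # stand-in for the 100K-long diagonal
nonzero = D != 0
pinv_diag = np.zeros_like(D)
pinv_diag[nonzero] = 1.0 / D[nonzero]

# Sanity check against numpy's pinv, feasible only for this small example.
assert np.allclose(np.diag(pinv_diag), np.linalg.pinv(np.diag(D)))
```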
42,726,719 | 2017-03-10T19:57:00.000 | 7 | 0 | 0 | 0 | 1 | python,django,secret-key | 1 | 42,772,208 | 0 | 2 | 0 | false | 1 | 0 | So, to answer my own question, changing the assigned key is done the same way you'd change any other variable. Just create a 50 character (ideally random) string and set SECRET_KEY equal to it.
SECRET_KEY = "#$%&N(ASFGAD^*(%326n26835625BEWSRTSER&^@T#%$Bwertb"
Then restart the web application.
My problem was completely unrelated. It occurred because I set the path python uses to locate packages to a weird location. Sorry about that guys. | 1 | 8 | 0 | 0 | So, I'm trying to deploy a Django Web App to production, but I want to change the secret key before doing so.
I've attempted to generate a new key using a randomizing function and insert that new key in place of the old one. When I do so, I get an error that says the following:
AttributeError 'module' object has no attribute 'JSONEncoder' ...
Exception Location: .../django/contrib/messages/storage/cookie.py, line 9
I've deleted the browser cache and restarted the server, but the error persists.
I've also attempted to change the key back, after deleting the browser cache and restarting, the error still persists.
Any idea how to resolve this issue?
Edit: Python version is 2.6.6 and Django version is 1.3.1 | How can i properly change the assigned secret key in a Django Web Application | 0 | 1 | 1 | 0 | 0 | 4,502 |
42,730,894 | 2017-03-11T03:09:00.000 | 0 | 0 | 1 | 0 | 0 | python,dictionary | 0 | 42,730,914 | 0 | 4 | 0 | false | 0 | 0 | You can put a dictionary inside a dictionary.
Try it:
print({a:{'pop':5}}) | 1 | 0 | 0 | 0 | Lets say we have
{'Pop': 5}
and we have a variable
a = 'store'
how could I get the output:
{'store': {'pop': 5}}
Is there an easy way? | Putting a dictionary inside a dictionary? | 0 | 0 | 1 | 0 | 0 | 553 |
42,738,434 | 2017-03-11T17:38:00.000 | 0 | 0 | 1 | 0 | 1 | python,multithreading,python-2.7,loops,sleep | 0 | 42,745,207 | 0 | 2 | 0 | true | 0 | 0 | Silly me, threading works but printing doesn't work across threads. Thanks for all the help! | 1 | 1 | 0 | 0 | Many people here tell you to use threading but how do you have the rest of the program running while that thread sleeps, and reruns, and sleeps again.. etc.
I have tried normal threading with things like a while loop but that didn't work for me
edit: so the question is: how do you sleep a thread without pausing the whole program in python? If possible could you give me an example of how to do it?
edit 2: and if possible without tkinter
edit 3: fixed it, it already worked but i didn't see it because printing doesn't work across threads... Silly me. | python how to loop a function with wait without pausing the whole program | 1 | 1.2 | 1 | 0 | 0 | 1,227 |
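A sketch of running a repeating, sleeping task on a background thread while the rest of the program keeps going; the one-second interval and the worker body are only examples:

```python
import threading
import time

count = [0]

def worker():
    while True:
        count[0] += 1      # the periodic work
        time.sleep(1)      # sleeps only this thread, not the whole program

t = threading.Thread(target=worker)
t.daemon = True            # let the program exit even if the thread is sleeping
t.start()

time.sleep(5)              # the main program keeps running meanwhile
print(count[0])            # roughly 5
```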
42,740,284 | 2017-03-11T20:31:00.000 | 0 | 1 | 0 | 0 | 0 | python,python-3.x,nul | 0 | 58,441,607 | 0 | 2 | 0 | false | 0 | 0 | Another, equivalent way to get the value of \x00 in Python is chr(0); I like that a little better than the literal versions. | 1 | 1 | 0 | 0 | I have a question where I am having a hard time picturing what the code might look like, so I will explain the best I can. I am trying to view and search for a NUL byte and replace it with another NUL-type byte, but the computer needs to be able to tell the difference between the different NUL bytes. An example would be: hex code 00 would equal NUL and hex code 01 equals SOH. Let's say I wanted to create code to replace those with each other. Code example:
TextFile1 = Line.Replace('NUL','SOH')
TextFile2.write(TextFile1)
Yes I have read a LOT of different posts just trying to understand to put it into working code. first problem is I can't just copy and paste the output of hex 00 into the python module it just won't paste. reading on that shows 0x00 type formats are used to represent that but I'm having issues finding the correct representation for python 3.x
print('\x00')
# output: nothing shows. I'm trying to get output of 'NUL' or, as a hex editor would show, '.'; either works fine --Edited
so how to get the module to understand that I'm trying to represent HEX 00 or 'NUL' and represent as '.' and do the same for SOH, Not just limited to those types of NUL characters but just using those as exmple because I want to use all 256 HEX characters. but beable to tell the difference when pasting into another program just like a hex editor would do. maybe I need to get the two programs on the same encoding type not really sure. I just need a very simple example text as how I would search and replace none representable Hexadecimal characters and find and replace them in notepad or notepad++, from what I have read, only notepad++ has the ability to do so. | Nul byte representation in Python | 1 | 0 | 1 | 0 | 0 | 4,065 |
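A hedged sketch of swapping one control byte for another, e.g. NUL (0x00) for SOH (0x01), reading and writing in binary mode so no byte is reinterpreted as text; the file names are assumptions:

```python
with open("input.bin", "rb") as src, open("output.bin", "wb") as dst:
    data = src.read()
    # b'\x00' is the NUL byte, b'\x01' is SOH; any of the 256 byte values
    # can be written the same way as a b'\x..' escape.
    dst.write(data.replace(b"\x00", b"\x01"))
```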
42,766,823 | 2017-03-13T15:00:00.000 | 1 | 0 | 1 | 1 | 0 | python,command-line,subprocess | 0 | 42,767,271 | 0 | 1 | 0 | true | 0 | 0 | Often, the tools you are calling have a -y flag to automatically answer such questions with yes. | 1 | 0 | 0 | 0 | I have a script where a few command-line tools are utilised. However, I've hit an issue where I am trying to convert two videos into one video (which I can do); however, this is meant to be an idle process, and when I run this command with subprocess.call() it prompted me with 'A file with this name already exists, would you like to overwrite it [y/n]?' and now I am stuck on how to emulate a user's input of 'y' + Enter.
It could be a case of running it as admin (somehow), or using pipes, or this stdout stuff I read about but didn't really understand. How would you guys approach this? What do you think is the best technique?
Cheers guys, any help is immensely appreciated! | Subprocess emulate user input after command | 0 | 1.2 | 1 | 0 | 0 | 87 |
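If the tool lacks a -y flag, the confirmation can also be fed to its stdin; a sketch (Python 3.5+), where ffmpeg and the file names are only stand-ins for whatever command is actually being run:

```python
import subprocess

# input= is written to the child's stdin, emulating the user typing 'y' + Enter.
subprocess.run(["ffmpeg", "-i", "a.mp4", "-i", "b.mp4", "out.mp4"],
               input=b"y\n")
```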
42,771,938 | 2017-03-13T19:31:00.000 | 0 | 0 | 0 | 0 | 1 | python,bokeh,holoviews | 1 | 42,772,141 | 0 | 2 | 0 | false | 0 | 0 | There are some changes in bokeh 0.12.4 which are incompatible with HoloViews 1.6.2. We will be releasing holoviews 1.7.0 later this month; until then you have the option of downgrading to bokeh 0.12.3 or upgrading to the latest holoviews dev release with:
conda install -c ioam/label/dev holoviews
or
pip install https://github.com/ioam/holoviews/archive/v1.7dev7.zip | 1 | 0 | 1 | 1 | I have tried to run the Holoviews examples from the Holoviews website.
I have:
bokeh 0.12.4.
holoviews 1.6.2 py27_0 conda-forge
However, following any of the tutorials I get an error such as the following and am unable to debug:
AttributeError: 'Image' object has no attribute 'set'.
Is anyone able to guide me as to how to fix this?
Cheers
Ed | Holoviews: AttributeError: 'Image' object has no attribute 'set' | 0 | 0 | 1 | 0 | 0 | 997 |
42,776,941 | 2017-03-14T02:53:00.000 | 0 | 0 | 0 | 0 | 0 | python,oracle,sqlalchemy | 0 | 70,789,442 | 0 | 3 | 0 | false | 0 | 0 | Encrypting the password isn't necessarily very useful, since your code will have to contain the means to decrypt it. Usually what you want to do is store the credentials separately from the codebase and have the application read them at runtime. For example*:
read them from a file
read them from command line arguments or environment variables (note there are operating system commands that can retrieve these values from a running process, or they may be logged)
use a password-less connection mechanism, for example Unix domain sockets, if available
fetch them from a dedicated secrets management system
You may also wish to consider encrypting the connections to the database, so that the password isn't exposed in transit across the network.
* I'm not a security engineer: these examples are not exhaustive and may have other vulnerabilities in addition to those mentioned. | 1 | 2 | 0 | 0 | I'm working with sqlalchemy and oracle, but I don't want to store the database password directly in the connection string; how do I store an encrypted password instead? | How to use encrypted password in connection string of sqlalchemy? | 0 | 0 | 1 | 1 | 0 | 3,865 |
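A sketch of the environment-variable option from the list above; the variable name and the connection details ('scott', 'dbhost', 'orcl') are assumptions, and the password is URL-quoted in case it contains special characters:

```python
import os
from urllib.parse import quote_plus  # Python 3
from sqlalchemy import create_engine

# ORACLE_PASSWORD is set outside the codebase, e.g. in the service environment.
password = quote_plus(os.environ["ORACLE_PASSWORD"])
engine = create_engine("oracle://scott:%s@dbhost:1521/orcl" % password)
```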
42,787,560 | 2017-03-14T13:38:00.000 | 0 | 1 | 0 | 1 | 0 | python,version-control,raspberry-pi | 0 | 42,787,653 | 0 | 3 | 0 | false | 0 | 0 | Following a couple of bad experiences where I lost code which was only on my Pi's SD card, I now run WinSCP on my laptop and edit files from the Pi there; they open in Notepad++, and WinSCP automatically saves edits to the Pi. I can also use WinSCP's folder-sync feature to copy the contents of an SD card folder to my laptop. Not perfect, but better than what I was doing before. | 2 | 0 | 0 | 0 | I am writing a web python application with tornado framework on a raspberry pi.
What i actually do is to connect to my raspberry with ssh. I am writing my source code with vi, on the raspberry.
What i want to do is to write source code on my development computer but i do not know how to synchronize (transfer) this source code to raspberry.
It is possible to do that with ftp for example but i will have to do something manual.
I am looking for a system where i can press F5 on my IDE and this IDE will transfer modified source files. Do you know how can i do that ?
Thanks | Synchronize python files between my development computer and my raspberry | 1 | 0 | 1 | 0 | 0 | 334 |
42,787,560 | 2017-03-14T13:38:00.000 | 0 | 1 | 0 | 1 | 0 | python,version-control,raspberry-pi | 0 | 54,502,688 | 0 | 3 | 0 | false | 0 | 0 | I have done this before using bitbucket as a standard repository and it is not too bad. If you set up cron scripts to git pull it's almost like continuous integration. | 2 | 0 | 0 | 0 | I am writing a web python application with tornado framework on a raspberry pi.
What i actually do is to connect to my raspberry with ssh. I am writing my source code with vi, on the raspberry.
What i want to do is to write source code on my development computer but i do not know how to synchronize (transfer) this source code to raspberry.
It is possible to do that with ftp for example but i will have to do something manual.
I am looking for a system where i can press F5 on my IDE and this IDE will transfer modified source files. Do you know how can i do that ?
Thanks | Synchronize python files between my development computer and my raspberry | 1 | 0 | 1 | 0 | 0 | 334 |
42,801,569 | 2017-03-15T05:13:00.000 | 1 | 0 | 0 | 0 | 0 | javascript,python,ssl,x509certificate | 0 | 42,801,616 | 0 | 1 | 0 | false | 1 | 0 | Django is a good option to create applications using python. You can start an application, and embed your code in template and write a view to handle requests and responses. | 1 | 0 | 0 | 0 | What I have done:
1- Created a web form using HTML and javascript to create a SSL certificate that can create dynamic certificates.
2- Successfully parsed through an existing certificate and passed the required values to the web form.
3- I am using the HTML+javascript inside the python script itself and appending the parsed certificate values to the javascript before displaying it.
What I need to do:
1-Take values from the web form, assign those to particular variables and pass those variables to a python script, that can create a CSR using those and sign it using a dummy key.
So, basically, I want to call a python script on a click of a button that can take web form values and create a certificate.
P.S. PHP isn't an option for me, as the server I am working on doesn't support it.
Can someone guide me in the right direction as for how to proceed? Any examples or study material? Or should I start working with Flask? | Using javascript to pass web forms values to python script | 0 | 0.197375 | 1 | 0 | 0 | 134 |
42,804,006 | 2017-03-15T08:00:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,node.js | 0 | 42,963,477 | 0 | 2 | 0 | false | 0 | 0 | I tried to encode the image and send it, but it did not work. So I used socket programming instead, and it worked wonderfully. | 1 | 1 | 0 | 0 | I am trying to send an image from a node js script to a python script using python-shell. From what I know, I should use binary format.
I know that on the Python side I can use these 2 functions:
import sys
sys.stdout.write() and sys.stdin.read()
But I am not sure what the Node.js side is going to look like. (Which functions can I use, and how can I use them?) | Sending Images from nodejs to Python script via standard input/output | 0 | 0 | 1 | 0 | 1 | 631 |
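A hedged sketch of the Python end of that pipe on Python 3, where sys.stdin.buffer exposes the raw bytes; the output filename is an assumption, and the Node side would write the image buffer to the child process's stdin:

```python
import sys

data = sys.stdin.buffer.read()   # the whole image, as raw bytes
with open("received.png", "wb") as f:  # filename is only an example
    f.write(data)
```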
42,817,337 | 2017-03-15T17:59:00.000 | 0 | 0 | 0 | 0 | 0 | python,arrays,numpy,save | 0 | 42,817,461 | 0 | 3 | 0 | false | 0 | 0 | How about ndarray's .tofile() method? To read use numpy.fromfile(). | 1 | 0 | 1 | 0 | I have a code which outputs an N-length Numpy array at every iteration.
Eg. -- theta = [ 0, 1, 2, 3, 4 ]
I want to be able to save the arrays to a text file or .csv file dynamically such that I can load the data file later and extract appropriately which array corresponds to which iteration. Basically, it should be saved in an ordered fashion.
I am assuming the data file would look something like this:-
0 1 2 3 4
1 2 3 4 5
2 3 4 5 6 ... (Random output)
I thought of using np.c_ but I don't want to overwrite the file at every iteration and if I simply save the terminal output as > output.txt, it saves as arrays including the brackets. I don't know how to read such a text file.
Is there a proper method to do this, i.e. write and read the data? | How do I save numpy arrays such that they can be loaded later appropriately? | 0 | 0 | 1 | 0 | 0 | 741 |
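A sketch of the .tofile()/np.fromfile() round trip suggested in the answer above; the fixed row length of 5 and the file name are assumptions that let each saved array be recovered in order by reshaping:

```python
import numpy as np

theta = np.array([0, 1, 2, 3, 4])
with open("runs.bin", "ab") as f:  # append one row per iteration
    theta.tofile(f)

# Later: row i of the reshaped array is the array saved at iteration i.
data = np.fromfile("runs.bin", dtype=theta.dtype).reshape(-1, 5)
print(data)
```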
42,819,987 | 2017-03-15T20:24:00.000 | 1 | 0 | 0 | 0 | 0 | python,r,machine-learning,scikit-learn,xgboost | 0 | 42,821,370 | 0 | 1 | 0 | true | 0 | 0 | Since XGBoost uses decision trees under the hood, it can give you slightly different results between fits unless you fix the random seed so that the fitting procedure becomes deterministic.
You can do this via set.seed in R and numpy.random.seed in Python.
Noting Gregor's comment you might want to set nthread parameter to 1 to achieve full determinism. | 1 | 1 | 1 | 0 | I'm using python's XGBRegressor and R's xgb.train with the same parameters on the same dataset and I'm getting different predictions.
I know that XGBRegressor uses 'gbtree' and I've made the appropriate comparison in R, however, I'm still getting different results.
Can anyone lead me in the right direction on how to differentiate the 2 and/or find R's equivalence to python's XGBRegressor?
Sorry if this is a stupid question, thank you. | Python's XGBRegressor vs R's XGBoost | 0 | 1.2 | 1 | 0 | 0 | 1,321 |
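A sketch of fixing the seed on the Python side as the answer suggests; matching data, parameters, and set.seed(42) on the R side are assumed, and the parameter names reflect the xgboost API of that era:

```python
import numpy as np
from xgboost import XGBRegressor

np.random.seed(42)
model = XGBRegressor(seed=42, nthread=1)  # nthread=1 per Gregor's determinism note
```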
42,823,336 | 2017-03-16T00:43:00.000 | 1 | 0 | 0 | 0 | 0 | python,django | 0 | 42,823,497 | 0 | 1 | 0 | true | 1 | 0 | You will want to save() them when the user submits the form at the start. Add a BooleanField to your model that says whether the row has been moderated and accepted. Then in your application, filter out all non-moderated rows, and on the admin side, filter out only rows that need moderation. | 1 | 1 | 0 | 0 | I'm working on a project where i do need to do a communication in 3 parts. Let me explain:
1.- The user fills a form, which is used to create a set of objects from the models. This objects are not saved in the database on this moment. All the objects are related between them.
2.- This objects must be saved in some way for an admin user to check the data and decide if proceed to save it, or if any element is invalid and reject the form.
3.- If the admin decide that the data is correct, select an option to send a .save() to add the data to the database.
At least that's the idea I have of how it should work. I decided to create the objects before sending them to the admin because it sounded easier to display them that way than if I sent the request.POST and the request.FILES. My problem is that I don't know where I could save the objects to show them to the admin when he connects (there are 2 types of users, Normal and Admin; the normal ones fill the forms, the admins only check them, and each of them has their own views). So, does anyone know how I could send the data and store it until the admin connects? I'm open to any idea if this is not possible; the only thing that is necessary is the flow of: user fills the form, admin checks it, data is saved or rejected. | How to storage objects for communication without the database in Django? | 1 | 1.2 | 1 | 0 | 0 | 38
42,830,209 | 2017-03-16T09:37:00.000 | 0 | 0 | 1 | 0 | 0 | python,arrays | 0 | 42,830,903 | 0 | 2 | 0 | false | 0 | 0 | Just read your text document, pick out the right information, then put it in 3 different arrays for name, GTIN, and price (see the sketch after this row). Maybe you can show what your document looks like. | 1 | 0 | 0 | 0 | I am trying to make a receipt creator where you "buy" products then go to the checkout to confirm you want to "buy" your items. (This is just a python program; you don't spend money.)
However, i feel like this could become extremely easy for me if i could put the names of all of the items in one array, the GTIN 8 number into another and the price into a final array.
My problem is that I MUST use some sort of text document to store the items with their GTIN 8 number and their price. Is it possible to do this, and if so, how?
Here is an example of a document that i would use:
GTIN 8 NO. NAME. PRICE.
66728009, NET, 10.00
74632558, OATMEAL, 5.00
05103492, FISHING ROD, 20.00
45040122, FISH BAIT, 5.00
20415112, MILK, 2.00
37106560, SHOES, 25.00
51364755, T-SHIRT, 10.00
64704739, TROUSERS, 15.00
47550544, CEREAL, 2.00
29783656, TOY, 10.00 | Python: how to add different part of a text document to different arrays | 1 | 0 | 1 | 0 | 0 | 47 |
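A hedged sketch of reading a document like the one above into three parallel lists; the file name "items.txt" and the comma-separated layout under a single header line are assumptions:

```python
gtins, names, prices = [], [], []
with open("items.txt") as f:       # "items.txt" is an assumed filename
    next(f)                        # skip the "GTIN 8 NO. NAME. PRICE." header line
    for line in f:
        gtin, name, price = [part.strip() for part in line.split(",")]
        gtins.append(gtin)         # keep GTINs as strings: leading zeros matter
        names.append(name)
        prices.append(float(price))
```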
42,846,803 | 2017-03-16T23:48:00.000 | 2 | 0 | 1 | 0 | 1 | python,pycharm | 0 | 47,140,011 | 0 | 4 | 0 | false | 0 | 0 | If you use Win 10 64-bit, run your code using Ctrl + Shift + F10, or simply right-click in the workspace and click Run from the options. | 3 | 5 | 0 | 0 | If I go to "tools" and select "python console", and enter several lines of code, how do I execute this? If my cursor is at the end of the script, I can just hit enter. But how can I run the code using keyboard shortcuts if the cursor is not at the end? In Spyder this is done using shift+enter, but I can't figure out how to do it here. I've seen places say control+enter, but that doesn't work. Thanks! | How to run code in Pycharm | 0 | 0.099668 | 1 | 0 | 0 | 41,595
42,846,803 | 2017-03-16T23:48:00.000 | 1 | 0 | 1 | 0 | 1 | python,pycharm | 0 | 49,919,793 | 0 | 4 | 0 | false | 0 | 0 | On a Mac, you can use fn+shift+F10. Happy coding with Python! | 3 | 5 | 0 | 0 | If I go to "tools" and select "python console", and enter several lines of code, how do I execute this? If my cursor is at the end of the script, I can just hit enter. But how can I run the code using keyboard shortcuts if the cursor is not at the end? In Spyder this is done using shift+enter, but I can't figure out how to do it here. I've seen places say control+enter, but that doesn't work. Thanks! | How to run code in Pycharm | 0 | 0.049958 | 1 | 0 | 0 | 41,595
42,846,803 | 2017-03-16T23:48:00.000 | 0 | 0 | 1 | 0 | 1 | python,pycharm | 0 | 52,305,481 | 0 | 4 | 0 | false | 0 | 0 | Right click on project name / select New / select Python File
PyCharm needs to know you're running a Python file before the option to run is available. | 3 | 5 | 0 | 0 | If I go to "tools" and select "python console", and enter several lines of code, how do I execute this? If my cursor is at the end of the script, I can just hit enter. But how can I run the code using keyboard shortcuts if the cursor is not at the end? In Spyder this is done using shift+enter, but I can't figure out how to do it here. I've seen places say control+enter, but that doesn't work. Thanks! | How to run code in Pycharm | 0 | 0 | 1 | 0 | 0 | 41,595
42,853,347 | 2017-03-17T09:13:00.000 | 0 | 1 | 0 | 1 | 0 | python,mongodb,pymongo | 0 | 65,763,003 | 0 | 1 | 0 | false | 0 | 0 | First you need to ensure that you are in the correct directory.
For example, you can write cd name_of_folder.
Then, to run it, you need to type python your_file_name.py | 1 | 0 | 0 | 0 | I have a python file named abc.py. I can run it in mongodb with the help of robomongo but i couldnt run it in cmd. Can anyone tell me how to run a .py file in mongodb using cmd ? | how to run python file in mongodb using cmd | 0 | 0 | 1 | 0 | 0 | 82
42,859,075 | 2017-03-17T13:40:00.000 | 0 | 0 | 1 | 0 | 0 | python,logging,distributed-computing,multiple-instances | 0 | 42,872,180 | 0 | 1 | 0 | true | 0 | 0 | I will use MySQL. This way I will have a standard tool for log analysis (MySQL Workbench), and it will solve the problem of serializing log writes from multiple instances. The best way would probably be to write a handler for the standard logging module, but for the moment I'll send all messages through RabbitMQ to a service that stores them. | 1 | 0 | 0 | 0 | I have a worker application written in python for a distributed system. There is a situation when I need to start multiple instances of this worker on a single server. Logging should be written into a file. I suspect that I cannot write to the same file from different instances. So what should I do, pass the log-file name as a command line argument to each instance? Is there a standard approach for such a situation? | If I have multiple instance of the same python application running how to perform logging into file? | 0 | 1.2 | 1 | 0 | 0 | 303
42,873,222 | 2017-03-18T10:34:00.000 | -1 | 0 | 0 | 1 | 0 | python,json,linux,shell,automation | 0 | 42,874,372 | 0 | 2 | 0 | false | 0 | 0 | You can use the command
df
which provides an option to display sizes in human-readable formats (e.g., 1K, 1M, 1G) by using '-h'. This is the most common command, but
you can also check du and di; di in fact provides even more info than df. | 1 | 2 | 0 | 0 | I am planning to automate a process of cleaning file systems in Linux using a set of scripts in Shell, Python and I'll create a simple dashboard using Node.js to allow a more visual approach.
I have a script in Shell which already cleans a file system in a specific server - but I have to login and then issue this command. Now I am proceeding with a dashboard in HTML/CSS/JS to visualize all servers which are having space problems.
My idea is: create a Python script to log in and get a list of filesystems and their usage, and update a single JSON file; then my dashboard uses this JSON to feed the screen.
My question is how to get the list of file system in Linux and its usage? | Get a list of all mounted file systems in Linux with python | 0 | -0.099668 | 1 | 0 | 0 | 1,490 |
42,881,071 | 2017-03-18T23:24:00.000 | 2 | 0 | 1 | 0 | 0 | python,multithreading,performance,time,benchmarking | 0 | 42,881,144 | 0 | 1 | 0 | false | 0 | 0 | No, CPU time is the time spent by all CPUs on the task. So if cpu1 spent 2 minutes on a task and cpu2 spent 3 minutes on the same task, the CPU time will be 2 + 3 = 5 minutes.
So in multithreaded programs we would expect that cpu time will usually be more than the wall time.
Now you might ask why does the same hold for your single-threaded program. The answer will probably be that even if your code does not explicitly use parallelism, there is probably a library you use that does. | 1 | 1 | 0 | 0 | Followings are the result after profiling using %time in ipython-
single-thread:
CPU time: user 6m44s sys 1.78s total 6m46s
Wall time: 5m19s
4-thread:
CPU time: user 10m12s sys 2.83s total 10m15s
Wall time: 4m14s
Shouldn't CPU time be less for multi-threaded code?
Also, how can CPU time be more than wall time, given that wall time is the total elapsed time? Could you please clarify this terminology. | how to analyze cpu time while benchmarking in python (multiprocessing)? | 0 | 0.379949 | 1 | 0 | 0 | 141
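A tiny illustration of the distinction, outside of IPython's %time: sleeping consumes wall time but almost no CPU time, whereas parallel work accumulates CPU time on every core at once:

```python
import time

cpu0, wall0 = time.process_time(), time.perf_counter()
time.sleep(1)
print("cpu:", time.process_time() - cpu0)    # close to 0
print("wall:", time.perf_counter() - wall0)  # close to 1
```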
42,886,286 | 2017-03-19T11:56:00.000 | 6 | 0 | 0 | 0 | 0 | python,opencv,anaconda,conda | 1 | 62,689,216 | 1 | 6 | 0 | false | 0 | 0 | The question is old, but I thought I'd update the answer with the latest information. My Anaconda version is 2019.10 and the build channel is py_37_0. I used pip install opencv-python==3.4.2.17 and pip install opencv-contrib-python==3.4.2.17. Now they are also visible as installed packages in Anaconda Navigator, and I am able to use patented methods like SIFT etc. | 2 | 10 | 1 | 0 | Can anyone tell me commands to get contrib module for anaconda
I need that module for
matches = flann.knnMatch(des1,des2,k=2)
to run correctly
error thrown is
cv2.error: ......\modules\python\src2\cv2.cpp:163: error: (-215) The data should normally be NULL! in function NumpyAllocator::allocate
Also, I am using Anaconda OpenCV version 3, and strictly don't want to switch to lower versions.
P.S. as suggested at many places to edit file cv2.cpp option is not available with anaconda. | how to get opencv_contrib module in anaconda | 0 | 0.033321 | 1 | 0 | 0 | 29,093 |
42,886,286 | 2017-03-19T11:56:00.000 | 14 | 0 | 0 | 0 | 0 | python,opencv,anaconda,conda | 1 | 44,329,928 | 1 | 6 | 0 | false | 0 | 0 | I would recommend installing pip in your anaconda environment and then just doing: pip install opencv-contrib-python. This comes with opencv and opencv-contrib. | 2 | 10 | 1 | 0 | Can anyone tell me commands to get contrib module for anaconda
I need that module for
matches = flann.knnMatch(des1,des2,k=2)
to run correctly
error thrown is
cv2.error: ......\modules\python\src2\cv2.cpp:163: error: (-215) The data should normally be NULL! in function NumpyAllocator::allocate
Also I am using Anaconda openCV version 3, and strictly dont want to switch to lower versions
P.S. as suggested at many places to edit file cv2.cpp option is not available with anaconda. | how to get opencv_contrib module in anaconda | 0 | 1 | 1 | 0 | 0 | 29,093 |
42,916,551 | 2017-03-21T00:49:00.000 | 0 | 0 | 0 | 1 | 1 | python,file,windows-7 | 1 | 42,988,360 | 0 | 2 | 0 | false | 0 | 0 | User letmaik was able to help me with this. It turned out that the error was caused by my version of pip being too old. The command "python -m pip install -U pip" did not work to upgrade pip; "easy_install -U pip" was required. This allowed rawpy to be installed successfully. | 1 | 1 | 0 | 0 | I was trying to download a Python wrapper called rawpy on my Windows machine. I used the command "pip install rawpy". I have already looked at many other SO threads but could find no solution. The exact error is :
IO Error: [Errno 2] No such file or directory:
'external/LibRawcmake/CMakeLists.txt'
The only dependency for the wrapper is numpy, which I successfully installed. I would like to know how to fix this. Quite new to Python, so any information would help. | python - IO Error [Errno 2] No such file or directory when downloading package | 0 | 0 | 1 | 0 | 0 | 1,408 |
42,919,339 | 2017-03-21T05:50:00.000 | 4 | 1 | 0 | 1 | 0 | javascript,python,c,webassembly | 0 | 42,920,349 | 0 | 1 | 0 | true | 0 | 0 | If you are actually implementing an interpreter then you don't need to generate machine code at runtime, so everything can stay within Wasm.
What you actually seem to have in mind is a just-in-time compiler. For that, you indeed have to call back into the embedder (i.e., JavaScript in the browser) and create and compile new Wasm modules there on the fly, and link them into the running program -- e.g., by adding new functions to an existing table. The synchronous compilation/instantiation interface exists for this use case.
In future versions it may be possible to invoke the compilation API directly from within Wasm, but for now going through JavaScript is the intended approach. | 1 | 3 | 0 | 0 | When thinking of the way an interpreter works:
parse code -> produce machine byte code -> allocate exec mem -> run
how can it be done in wasm?
thanks! | Compile a JIT based lang to Webassembly | 0 | 1.2 | 1 | 0 | 0 | 763 |
42,927,141 | 2017-03-21T12:26:00.000 | 0 | 0 | 0 | 0 | 0 | python,hive,package,udf | 0 | 43,289,954 | 0 | 1 | 0 | false | 0 | 0 | I recently started looking into this approach, and I feel like the problem is not about getting all the 'hive nodes' to have sklearn on them (as you mentioned above); I feel it is a compatibility issue rather than a 'sklearn node availability' one. I think sklearn is not (yet) designed to run as a parallel algorithm such that a large amount of data can be processed in a short time.
What I'm trying to do, as an approach, is to communicate from Python to 'hive' through 'pyhive' (for example) and implement the necessary sklearn libraries/calls within that code. The rough assumption here is that this 'sklearn-hive-python' code will run on each node and deal with the data at the 'map-reduce' level.
I cannot say this is the right solution or correct approach (yet), but this is what I can conclude after searching for some time. | 1 | 0 | 1 | 0 | I know how to create a hive udf with transform and using, but I can't use sklearn because not all the nodes in the hive cluster have sklearn.
I have an anaconda2.tar.gz with sklearn, What should I do ? | How to create an udf for hive using python with 3rd party package like sklearn? | 0 | 0 | 1 | 0 | 0 | 314 |
42,940,941 | 2017-03-22T01:04:00.000 | 1 | 0 | 0 | 1 | 0 | macos,kivy,python-3.4 | 1 | 46,702,178 | 0 | 1 | 0 | false | 0 | 1 | Just had this issue, and was able to fix it following the directions on the kivy mac OS X install page, with one modification as follows:
$ brew install pkg-config sdl2 sdl2_image sdl2_ttf sdl2_mixer gstreamer
$ pip3 install Cython==0.25.2
$ pip3 install kivy
pip3 is my reference to pip for Python 3.6 as I have two different versions of python on my system. May just be pip install for you.
Hope this helps! | 1 | 1 | 0 | 0 | so I am trying to install kivy on my mac.From their instructions page, I am on step 2, and have to enter the command $ USE_OSX_FRAMEWORKS=0 pip install kivy. However, when I put this in terminal, I get the error error: command '/usr/bin/clang' failed with exit status 1, and as a result Failed building wheel for kivy. Does anyone know how to address this issue? | Trying to install kivy for python on mac os 10.12 | 0 | 0.197375 | 1 | 0 | 0 | 1,854 |
42,958,619 | 2017-03-22T17:21:00.000 | 0 | 0 | 1 | 0 | 1 | python,python-2.7 | 0 | 42,960,513 | 0 | 1 | 0 | true | 0 | 0 | If you want to have 5 suggestions and the user only provides the username he'd like to use, you could do the following (see the sketch after this row).
Just start a counter from 1 to 100, append that number to the username and check if it is in the database. If not then save that suggestion in a list.
If that list has 5 entries, show them to your user to choose from. | 1 | 1 | 0 | 0 | In a Python/Django website I maintain, users keep unique usernames at sign up. While registering, if a username isn't available, they have to guess another one.
Sometimes users have to contend with multiple "username already exists" messages before they're able to sign up.
I want to ameliorate this issue via suggesting a username based upon the already used username they currently put in. Can someone illustrate a neat Python solution for this?
I haven't tried anything yet. But I was thinking what would work is taking the current nickname the user wants, and then somehow doing an ordinal based diff with 4-5 neighboring nicknames from the DB (these I can easily query).
The diffs that are found can then somehow be used to guess an available nickname for the user, which is also sufficiently based on the one they already wanted. Something like that. Being a neophyte, I'm still trying to wrap my head around a viable solution. | Using diff of two strings to suggest a unique string (Python 2.7) | 1 | 1.2 | 1 | 0 | 0 | 87 |
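A sketch of the counter idea from the answer above, in Python 2.7 style; username_taken stands in for the real database lookup (e.g. User.objects.filter(username=candidate).exists() in Django):

```python
def suggest_usernames(base, username_taken, limit=5):
    suggestions = []
    for i in range(1, 100):
        candidate = "%s%d" % (base, i)   # e.g. "alice" -> "alice1", "alice2", ...
        if not username_taken(candidate):
            suggestions.append(candidate)
            if len(suggestions) == limit:
                break
    return suggestions
```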
42,995,027 | 2017-03-24T08:49:00.000 | 1 | 0 | 0 | 0 | 0 | python,interpolation,coefficients | 0 | 42,995,646 | 0 | 2 | 0 | false | 0 | 0 | If you're doing linear interpolation, you can just use the formula that the line interpolating the points (x0, y0) and (x1, y1) is given by y - y0 = ((y0 - y1)/(x0 - x1)) * (x - x0). You can take 2-element slices of your list using the slice syntax; for example, to get [2.5, 3.4] you would use x[1:3].
Using the slice syntax you can then implement the linear interpolation formula to calculate the coefficients of the linear polynomial interpolations. | 1 | 3 | 1 | 0 | I'm fairly new to programming and thought I'd try writing a piecewise linear interpolation function. (perhaps which is done with numpy.interp or scipy.interpolate.interp1d)
Say I am given data as follows: x= [1, 2.5, 3.4, 5.8, 6] y=[2, 4, 5.8, 4.3, 4]
I want to design a piecewise interpolation function that will give the coefficents of all the Linear polynomial pieces between 1 and 2.5, 2.5 to 3.4 and so on using Python.
Of course MATLAB has the interp1 function which does this, but I'm using Python and I want to do exactly the same job as MATLAB. Python only gives the values, not the linear polynomials' coefficients! (In MATLAB we could get this with pp.coefs.)
but how to get pp.coefs in python numpy.interp ? | piecewise linear interpolation function in python | 0 | 0.099668 | 1 | 0 | 0 | 4,195 |
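A sketch of the slope/intercept formula from the answer applied to every consecutive pair of points, which is roughly the information MATLAB exposes as pp.coefs; the (slope, intercept) ordering here is my own choice:

```python
x = [1, 2.5, 3.4, 5.8, 6]
y = [2, 4, 5.8, 4.3, 4]

coefs = []
for i in range(len(x) - 1):
    slope = (y[i + 1] - y[i]) / float(x[i + 1] - x[i])
    intercept = y[i] - slope * x[i]
    coefs.append((slope, intercept))  # piece i is valid on [x[i], x[i+1]]

print(coefs)  # first piece: slope (4-2)/(2.5-1) = 4/3, intercept 2 - (4/3)*1 = 2/3
```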
43,001,729 | 2017-03-24T14:12:00.000 | 2 | 0 | 0 | 0 | 0 | python,numpy,fft | 0 | 43,012,808 | 0 | 3 | 0 | false | 0 | 0 | Also note the ordering of the coefficients in the fft output:
According to the docs: by default the 1st element is the coefficient for the 0-frequency component (effectively the sum or mean of the array); starting from the 2nd we have coefficients for the positive frequencies in increasing order, and starting from n/2+1 they are for the negative frequencies in decreasing order. To have a view of the frequencies for a length-10 array:
np.fft.fftfreq(10)
the output is:
array([ 0. , 0.1, 0.2, 0.3, 0.4, -0.5, -0.4, -0.3, -0.2, -0.1])
Use np.fft.fftshift(cf), where cf = np.fft.fft(array); the output is shifted so that it corresponds to this frequency ordering:
array([-0.5, -0.4, -0.3, -0.2, -0.1, 0. , 0.1, 0.2, 0.3, 0.4])
which makes more sense for plotting.
In the 2D case it is the same. And the fft2 and rfft2 difference is as explained by others. | 1 | 11 | 1 | 0 | Obviously the rfft2 function simply computes the discrete fft of the input matrix. However how do I interpret a given index of the output? Given an index of the output, which Fourier coefficient am I looking at?
I am especially confused by the sizes of the output. For an n by n matrix, the output seems to be an n by (n/2)+1 matrix (for even n). Why does a square matrix ends up with a non-square fourier transform? | How should I interpret the output of numpy.fft.rfft2? | 0 | 0.132549 | 1 | 0 | 0 | 5,101 |
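A small demonstration of the shape question at the end: for real input the negative-frequency coefficients are redundant complex conjugates, so rfft2 keeps only n/2+1 columns:

```python
import numpy as np

a = np.random.rand(8, 8)
print(np.fft.fft2(a).shape)   # (8, 8): the full complex spectrum
print(np.fft.rfft2(a).shape)  # (8, 5): n by n/2+1 for even n
```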
43,005,000 | 2017-03-24T16:49:00.000 | 0 | 1 | 0 | 0 | 0 | python,email,outlook | 0 | 43,005,956 | 0 | 1 | 0 | false | 0 | 0 | You won't be able to fix this. Outlook does it no matter how many line breaks you put in there, in my experience. You might be able to trick it by adding spaces on each line, or non-breaking space characters. It's a dumb feature of Outlook.... | 1 | 0 | 0 | 0 | I am sending some emails from Python using smtplib MIMEText, and have the often-noted problem that Outlook will "sometimes" remove what it considers "extra line breaks".
It is odd, because I print several headers and they all break fine, but the text fields in a table are all smooshed - unless the recipient manually clicks Outlook's "restore line breaks".
Because some come through alright, I wonder what is Outlook's criteria for "extra", and thus how to avoid it?
Do I need to format the message as HTML? | python sending email - line breaks removed by Outlook | 1 | 0 | 1 | 0 | 0 | 795 |
43,024,066 | 2017-03-26T01:47:00.000 | 0 | 0 | 0 | 0 | 1 | python,django | 1 | 43,051,626 | 0 | 2 | 0 | false | 1 | 0 | I finally got the "python manage.py runserver" command to work. The only thing I did differently was, before setting up the virtual env and installing Django, to set my execution policy to Unrestricted. Previously it had been set to RemoteSigned. I hadn't been getting any warnings or errors, but I thought I would try it, and it worked. | 2 | 0 | 0 | 0 | I am new to stackoverflow, very new to Python and trying to learn Django.
I am on Windows 10 and running commands from powershell (as administrator).
I am in a virtual environment. I am trying to set up Django.
I have run the following commands
"pip install Django"
"django-admin.py startproject learning_log ."
"python manage.py migrate"
All of the above seemed to work okay, however, when I then try to run the command
"python manage.py runserver"
I get a popup error box that says:
Python has stopped working
A problem caused the program to stop working correctly.
Windows will close the program and notify you if a solution is available.
Can someone tell me how to resolve this issue or where to look for any error messages that might clue me in as to what is causing the problem? | Python stops working on manage.py runserver | 0 | 0 | 1 | 0 | 0 | 2,081 |
43,024,066 | 2017-03-26T01:47:00.000 | 0 | 0 | 0 | 0 | 1 | python,django | 1 | 49,569,047 | 0 | 2 | 0 | false | 1 | 0 | I encountered the same problem. After trying everything, I switched from PS to cmd, cd'd to the same directory, and ran python manage.py runserver. Then it worked. Then I Ctrl+C'd the server, switched back to PS, and ran the command; it still threw the same dialog window (Python stopped working). Then I went back to cmd, typed the command, and the server started fine.
Conclusion: Use cmd to run the command, not PS. | 2 | 0 | 0 | 0 | I am new to stackoverflow, very new to Python and trying to learn Django.
I am on Windows 10 and running commands from powershell (as administrator).
I am in a virtual environment. I am trying to set up Django.
I have run the following commands
"pip install Django"
"django-admin.py startproject learning_log ."
"python manage.py migrate"
All of the above seemed to work okay, however, when I then try to run the command
"python manage.py runserver"
I get a popup error box that says:
Python has stopped working
A problem caused the program to stop working correctly.
Windows will close the program and notify you if a solution is available.
Can someone tell me how to resolve this issue or where to look for any error messages that might clue me in as to what is causing the problem? | Python stops working on manage.py runserver | 0 | 0 | 1 | 0 | 0 | 2,081 |
43,026,801 | 2017-03-26T08:47:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,django-registration | 0 | 43,152,421 | 0 | 2 | 1 | false | 1 | 0 | I am not intimately familiar with django, but an easy solution would be to follow the default registration workflow to let your user register. Then, when your user tries to log in for the first time, you present them with a form to fill in all the extra information you might need.
In this way you also decouple the actual account creation from asking the user for more information, creating for them an extra incentive to actually go through with this process ("Oh man why do I need to provide my name, let's not sign up" vs "Oh well, I have already registered and given them an email might as well go through with it")
If you would prefer to have them in one step, then providing what code you already have would help us give better feedback | 1 | 0 | 0 | 0 | There are already some questions on this, but most of their answers use the model-based workflow, which django-registration no longer recommends. I have been frustrated for the last week trying to figure out how to add first and last name fields to the registration form in the HMAC workflow. | How to add custom-fields (First and Last Name) in django-registration 2.2 (HMAC activation Workflow)? | 1 | 0.099668 | 1 | 0 | 0 | 174
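For what it's worth, a hedged sketch of the one-step variant (assuming django-registration 2.x; check the import paths against your installed version):

from django import forms
from registration.forms import RegistrationForm

class NameRegistrationForm(RegistrationForm):
    # extra fields collected at sign-up time
    first_name = forms.CharField(max_length=30)
    last_name = forms.CharField(max_length=30)

# urls.py: point the HMAC registration view at the custom form, e.g.
# RegistrationView.as_view(form_class=NameRegistrationForm)

You would still need to copy the names onto the User instance, for example by overriding the view's register() method or listening for the user_registered signal.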
43,033,174 | 2017-03-26T18:57:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-3.x,fonts,colors,size | 0 | 49,737,636 | 0 | 1 | 0 | false | 0 | 0 | I know about the color part, but I came for the size.
To start off with: to reset the colour, use \u001b[0m. If you don't use this, all subsequent text will keep the colour you started with until you reset it.
Green is \u001b[32m
Black is \u001b[30m
Pink is \u001b[35m
There are many more; if you experiment with different numbers, you can get highlighting and other colours.
To make a sentence with color, format it like this:
print("\u001b[35mHi, coders. This is pink output.\u001b[0m")
Test that code. it will come out in pink. | 1 | 1 | 0 | 0 | I am using Python 3 and I am not sure if this is a possible query because I've searching it up and I couldn't find a solution. My question is, I want to learn how to change colour and size of my output.
How to make the size bigger or smaller?
Able to make the font size big
How to change the background colour of shell?
Able to make the background colour, for example, right now, it's all white but I want it black.
How to change the output colour of shell?
I would love to see colourful fonts operating in black background shell
I hope there is a solution to this! Thanks in advance | How to change font colour and size of Python Shell | 1 | 0.197375 | 1 | 0 | 0 | 3,009 |
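One detail the answer above leaves out, since the question also asks about the background colour: the same ANSI scheme uses codes 40-47 for backgrounds (30-37 are foregrounds). A quick sketch:

print("\u001b[30m\u001b[47mblack text on a white background\u001b[0m")

Note that IDLE's own shell does not interpret ANSI escape codes, so this only works in ANSI-capable terminals.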
43,048,126 | 2017-03-27T13:42:00.000 | 0 | 0 | 0 | 0 | 0 | python,h2o | 0 | 43,051,434 | 0 | 2 | 0 | false | 0 | 0 | This refers to 2-4 times the size of the file on disk, so rather than looking at the memory in Python, look at the original file size. Also, the 2-4x recommendation varies by algorithm (GLM & DL will require less memory than tree-based models). | 1 | 0 | 1 | 0 | I am loading Spark dataframes into H2O (using Python) for building machine learning models. It has been recommended to me that I should allocate an H2O cluster with RAM 2-4x as big as the frame I will be training on, so that the analysis fits comfortably within memory. But I don't know how to precisely estimate the size of an H2O frame.
So supposing I have an H2O frame already loaded into Python, how do I actually determine its size in bytes? An approximation within 10-20% is fine. | How to determine size in bytes of H2O frame in Python? | 0 | 0 | 1 | 0 | 0 | 699 |
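A minimal sketch of the on-disk check suggested above, assuming the frame was loaded from a file at a known path (the path is a placeholder):

import os
size_mib = os.path.getsize('/path/to/source.csv') / 1024 ** 2
print(size_mib, 'MiB')

Then budget roughly 2-4x that figure for the H2O cluster, with the multiplier depending on the algorithm as noted in the answer.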
43,070,180 | 2017-03-28T12:55:00.000 | 1 | 0 | 1 | 0 | 0 | python,anaconda | 0 | 43,070,556 | 0 | 1 | 0 | false | 0 | 0 | I have had similar issues with Anaconda environments - pip will install to the environment in which you're 'logged in', so you need to be very careful about which environment you're in when you use pip.
If I were you, I would pip uninstall the packages in both environments, and methodically install keras in each, as having two different versions in two different environments will not be an issue. | 1 | 1 | 0 | 0 | I want to use keras 1.0 and keras 2.0 at the same time, I tried to create two environments in anaconda: keras1 and keras2.
I installed keras 1.0 in keras1. When I changed the environment to keras2, I found the keras version was still 1.0, so I upgraded keras to 2.0; but then the keras version became 2.0 in environment keras1 as well.
What should I do to use the two versions at the same time? | how to install two python package versions in different anaconda environments? | 0 | 0.197375 | 1 | 0 | 0 | 433 |
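A hedged sketch of keeping the two versions isolated, using the environment names from the question (the version numbers are assumptions; the activation syntax is the 2017-era conda one):

conda create -n keras1 python=3.5
source activate keras1            # on Windows: activate keras1
pip install keras==1.2.2          # an assumed 1.x release
source deactivate

conda create -n keras2 python=3.5
source activate keras2
pip install keras==2.0.3          # an assumed 2.x release

The key point is to run pip only while the intended environment is active; otherwise pip installs into whichever environment your shell currently points at, which matches the cross-contamination described in the question.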
43,084,539 | 2017-03-29T04:44:00.000 | 0 | 0 | 1 | 0 | 0 | python-3.x | 0 | 43,140,722 | 0 | 1 | 0 | false | 0 | 0 | I found the answer to my question. When you reference classes and you import as a module in a different one; the module will get call when is defined if you do not use if __name__=='__main__':. By putting this code at the end of the code will only execute the module once it is intended to be executed but not when you import the module. This way you can use the modules by themselves and also to import from other modules. | 1 | 0 | 0 | 0 | I created two classes. The first class takes an image from the working directory and then covert the image from pdf to jpg using wand. The second class takes the created jpg image and then do further manipulations with the image.
Now when I try to run the first class and then the second class right after that, python crashes because the second class is trying to look for the image but won't find it until it is created.
My question is: how can I run the second class only after the first class has finished executing?
class1 = imagecreation('image.jpg')
class2 = transformimage() | Open a file after it was created in Python | 0 | 0 | 1 | 0 | 0 | 22 |
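A minimal sketch of the guard described in the answer (class and file names taken from the question):

# imagecreation.py
class imagecreation:
    def __init__(self, filename):
        pass  # the pdf-to-jpg conversion would go here

if __name__ == '__main__':
    # runs only when this file is executed directly,
    # not when another module imports it
    imagecreation('image.jpg')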
43,092,454 | 2017-03-29T11:40:00.000 | 5 | 0 | 0 | 0 | 0 | python,machine-learning,tensorflow,neural-network,artificial-intelligence | 0 | 43,098,199 | 0 | 3 | 0 | true | 0 | 0 | Instead of creating a whole new graph, you might be better off creating a graph which initially has more neurons than you need and masking them off by multiplying by a non-trainable variable of ones and zeros. You can then change the value of this mask variable to effectively allow new neurons to act for the first time. | 1 | 3 | 1 | 0 | If I want to add new nodes to one of my tensorflow layers on the fly, how can I do that?
For example if I want to change the amount of hidden nodes from 10 to 11 after the model has been training for a while. Also, assume I know what value I want the weights coming in and out of this node/neuron to be.
I can create a whole new graph, but is there a different/better way? | How to add new nodes / neurons dynamically in tensorflow | 0 | 1.2 | 1 | 0 | 0 | 2,690 |
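A hedged sketch of the masking idea (TensorFlow 1.x graph-mode API; the sizes are illustrative):

import tensorflow as tf

max_hidden = 16                                         # over-provisioned width
w = tf.Variable(tf.random_normal([4, max_hidden]))
mask = tf.Variable(tf.concat([tf.ones([10]), tf.zeros([6])], axis=0),
                   trainable=False)                     # 10 live, 6 dormant neurons
x = tf.placeholder(tf.float32, [None, 4])
hidden = tf.nn.relu(tf.matmul(x, w)) * mask             # dormant columns output zero

# later, grow the layer from 10 to 11 neurons by flipping one mask entry:
enable_11th = tf.scatter_update(mask, [10], [1.0])
# sess.run(enable_11th); the 11th neuron's incoming weights already exist in w
# and can be overwritten the same scatter-style way with your chosen values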
43,096,197 | 2017-03-29T14:18:00.000 | 0 | 0 | 0 | 1 | 1 | python,shell,subprocess,tcsh | 0 | 43,118,394 | 0 | 1 | 0 | false | 0 | 0 | Knowing the subprocess inherits all the parent process environment and they are supposed to be ran under same environment, making the shell script to not setup any environment, fixed it.
This solves the environment being retained, but now the problem is, the process just hangs! (it does not happen when it is ran directly from shell) | 1 | 0 | 0 | 0 | I have a tcsh shell script that sets up all the necessary environment including PYTHONPATH, which then run an executable at the end of it. I also have a python script that gets sent to the shell script as an input. So the following works perfectly fine when it is ran from Terminal:
path to shell script path to python script
Now, the problem occurs when I want to do the same thing from a subprocess. The python script fails to be ran since it cannot find many of the modules that's already supposed to be set via the shell script. And clearly, the PYTHONPATH ends up having many missing paths compared to the parent environment the subprocess was ran from or the shell script itself! It seems like the subprocess does not respect the environment the shell script sets up.
I've tried all sorts of things already but none help!
cmd = [shell_script_path, py_script_path]
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=os.environ.copy())
It makes no difference if env is not given either!
Any idea how to fix this?! | Subprocess not retaining all environment variables | 0 | 0 | 1 | 0 | 0 | 882 |
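If the environment really must come from the tcsh script, one workaround sketch (the setup and script paths are hypothetical) is to let a single tcsh process both source the setup and run the target, so nothing depends on inheritance quirks:

import subprocess
cmd = 'source /path/to/setup.csh && python /path/to/script.py'
process = subprocess.Popen(['tcsh', '-c', cmd],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()

Also relevant to the hang mentioned in the answer: with stdout/stderr set to PIPE you must drain the pipes (communicate() does this); a child that fills an undrained pipe buffer will block forever, which looks exactly like a hang.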
43,105,148 | 2017-03-29T22:15:00.000 | 3 | 0 | 1 | 0 | 0 | python,multithreading,tensorflow | 0 | 43,107,623 | 0 | 1 | 0 | true | 0 | 0 | After doing some experimentation it appears that each call to sess.run(...) does indeed see a consistent point-in-time snapshot of the variables.
To test this I performed 2 big matrix multiply operations (taking about 10 sec each to complete), and updated a single, dependent variable before, between, and after them. In another thread I grabbed and printed that variable every 1/10th of a second to see if it picked up the change that occurred between operations while the first thread was still running. It did not; I only saw its initial and final values. Therefore I conclude that variable changes are only visible outside of a specific call to sess.run(...) at the end of that run.
Will each call see a snapshot of the variables as of the moment run was called, consistent throughout the call? Or will they see dynamic updates to the variables and only guarantee atomic updates to each variable?
I'm considering running test set evaluation on a separate CPU thread and want to verify that it's as trivial as running the inference op on a CPU device in parallel.
I'm having trouble figuring out exactly what guarantees are provided that make sessions "thread safe". | How are variables shared between concurrent `session.run(...)` calls in tensorflow? | 0 | 1.2 | 1 | 0 | 0 | 646
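A compressed sketch of the experiment described in the answer (TF 1.x API; the matrix size is chosen only to make the multiplies slow):

import threading, time
import tensorflow as tf

v = tf.Variable(0.0)
big = tf.random_normal([8000, 8000])
slow1 = tf.matmul(big, big)
with tf.control_dependencies([slow1]):
    bump = tf.assign_add(v, 1.0)          # update between the two slow ops
with tf.control_dependencies([bump]):
    slow2 = tf.matmul(big, big)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

def watcher():
    for _ in range(50):
        print(sess.run(v))                 # independent run() calls from a 2nd thread
        time.sleep(0.1)

threading.Thread(target=watcher).start()
sess.run(slow2)                            # one long run() containing the update

Per the observation above, the watcher only ever prints the pre-run and post-run values of v, never an intermediate state.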
43,107,173 | 2017-03-30T01:57:00.000 | 2 | 0 | 0 | 1 | 0 | python,django,wamp | 0 | 43,772,665 | 0 | 2 | 0 | true | 1 | 0 | Ok the answer is basically ericeastwood.com/blog/3/django-setup-for-wamp combined with httpd.apache.org/docs/2.4/vhosts/name-based.html – shadow | 1 | 0 | 0 | 0 | I want to test my django app in my WAMP server. The idea is that i want to create a web app for aaa.com and aaa.co.uk, if the user enter the domain aaa.co.uk, my django app will serve the UK version, if the user go to aaa.com, the same django app will serve the US version (different frontend). Basically i will be detecting the host of the user and serve the correct templates.
How do I set up my WAMP so I can test this? Right now I am using PyCharm's default server, which is 127.0.0.1:8000 | how do i setup django in wamp? | 0 | 1.2 | 1 | 0 | 0 | 3,505
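A hedged sketch of the host-based branching described in the question (the template names are assumptions):

from django.shortcuts import render

def home(request):
    host = request.get_host()            # e.g. 'aaa.co.uk' or 'aaa.com'
    if host.endswith('.co.uk'):
        return render(request, 'home_uk.html')
    return render(request, 'home_us.html')

Both domains (plus 127.0.0.1 for local testing) need to be listed in ALLOWED_HOSTS. On the WAMP side, the usual arrangement is Apache name-based virtual hosts for aaa.com and aaa.co.uk pointing at the same Django app, which is what the linked name-based vhosts doc covers.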
43,111,193 | 2017-03-30T07:26:00.000 | 1 | 1 | 1 | 0 | 0 | python,python-2.7,amazon-web-services,aws-lambda,jwplayer | 0 | 43,133,422 | 0 | 2 | 0 | true | 0 | 0 | I am succeed to install jwplatform module locally.
Steps are as follows:
1. Open the command line
2. Make sure Python and pip are on your PATH (e.g. type 'python --version')
3. Type the command 'pip install jwplatform'
4. Now you can use the jwplatform API.
The above command installs the jwplatform module for your local Python.
My next challenge was to install jwplatform in AWS Lambda.
After some research I succeeded there too: I bundled the module and my code in a directory, created a zip of the bundle, and uploaded it to AWS Lambda. This makes the module (jwplatform) available inside AWS Lambda. | 1 | 0 | 0 | 0 | I am going to create a search API for Android and iOS developers.
Our client has set up a lambda function in AWS.
Now we need to fetch data using the jwplatform API based on a search keyword passed as a parameter. For this, I have to install the jwplatform module in the Lambda function, or upload a zip file of the code with its dependencies. So I want to run the python script locally first, and after getting the appropriate result I will upload the zip to AWS Lambda.
I want to use the videos/list (jwplatform API) class to search the video library using python, but I don't know much about Python. So I want to know how to run the python script, and where I should put the python script. | how to use jwplatform api using python | 0 | 1.2 | 1 | 0 | 0 | 514
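For reference, a hedged sketch of the bundling step described in the answer above (the paths are illustrative):

pip install jwplatform -t ./build       # vendor the library next to your code
cp lambda_function.py ./build/
cd build && zip -r ../function.zip .    # upload function.zip to AWS Lambda

pip's -t/--target flag installs a package into an arbitrary directory, which is the standard way to ship third-party modules inside a Lambda deployment package.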
43,149,092 | 2017-03-31T20:22:00.000 | 0 | 0 | 0 | 0 | 0 | python,flask,sqlalchemy,flask-sqlalchemy | 0 | 43,703,427 | 0 | 2 | 0 | false | 1 | 0 | I suggest you to look at Server Sent Events(SSE). I am looking for code of SSE for postgres,mysql,etc. It is available for reddis. | 1 | 0 | 0 | 0 | I have PhpMyAdmin to view and edit a database and a Flask + SQLAlchemy app that uses a table from this database. Everything is working fine and I can read/write to the database from the flask app. However, If I make a change through phpmyadmin, this change is not detected by SQLAlchmey. The only to get those changes is by manually refreshing SQLAlchmey connection
My Question is how to tell SQLAlchemy to reload/refresh its Database connection? | Flask App using SQLAlcehmy: How to detect external changes committed to the database? | 0 | 0 | 1 | 1 | 0 | 665 |
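If refreshing is the route taken, a minimal sketch is to expire the session's cached objects so the next access re-reads from MySQL:

db.session.expire_all()     # marks all loaded instances stale
db.session.refresh(obj)     # or re-read a single object immediately

expire_all() and refresh() are standard SQLAlchemy Session methods. Note there is still no push notification from MySQL itself; for true change detection you need polling or an event channel such as the SSE approach mentioned above.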
43,151,776 | 2017-04-01T01:22:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7,function,global-variables | 0 | 43,151,811 | 0 | 1 | 0 | true | 0 | 0 | Two ideas that I have: either pass a reference to all these variables through the functions, or define a class containing the variables. Then all the functions of the class have access to the values defined in the class, and you don't need to pass them around. Otherwise you end up passing a pack of variables. | 1 | 0 | 0 | 0 | I have made a program where there are many values that are accessed and changed in many classes and functions. I want to know how to use and change a variable without using global, or using it only once. I used global around 20 times throughout my code and it looks ugly and is annoying. | Python Globalling to use and change in many functions | 0 | 1.2 | 1 | 0 | 0 | 31
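A tiny sketch of the class-based idea (the names are made up):

class GameState:
    def __init__(self):
        self.score = 0
        self.level = 1

    def level_up(self):
        self.level += 1

state = GameState()     # create once, pass this single object around
state.level_up()        # every holder of 'state' sees level == 2

Passing one state object around replaces the ~20 global declarations: functions mutate the object's attributes instead of rebinding module-level names.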
43,154,781 | 2017-04-01T08:53:00.000 | 1 | 0 | 1 | 0 | 0 | python-3.x,ibm-cloud,speech-to-text,watson | 1 | 43,174,919 | 0 | 2 | 0 | false | 0 | 0 | You can install packages when using SoloLearn. You need to ask SoloLearn's administrators to install the package for you.
The python playground includes some of the most popular packages but it's very limited in terms of what you can do if the package you want to use is not there. | 1 | 1 | 0 | 0 | I want to install third party package in python online coding environments. Could you please tell me how we can achieve this?
The below line needs to execute
from watson_developer_cloud import SpeechToTestV1 as STT
when I run the above line, I am getting the following error,
Error:
Traceback (most recent call last):
File "..\Playground\", line 1, in
\ufefffrom watson_developer_cloud import SpeechToTestV1 as STT
ImportError: No module named 'watson_developer_cloud'
I even tried the below command in Code Playground, but it throws an incorrect-syntax error.
pip install watson_developer_cloud
Thanks in advance, | How to install new module/package in python coding online environment? | 0 | 0.099668 | 1 | 0 | 0 | 806 |
43,167,500 | 2017-04-02T10:58:00.000 | 1 | 0 | 1 | 0 | 0 | python,tensorflow,python-idle | 0 | 47,119,800 | 0 | 1 | 0 | false | 0 | 0 | IDLE does NOT provide such functionality - it works through idlelib, a package from the stdlib, so it's executed using pythonw -m idlelib. To change the interpreter in IDLE, call it using a different interpreter - "C:\path\to\your\python\interpreter\pythonw.exe" -m idlelib (make sure idlelib is installed for the target interpreter).
I managed to install Tensorflow but it only works when I import it in the terminal, not in my current IDE
What I want: Make my current IDE use the Python.exe that has been provided when I installed Tensorflow on my computer
What I tried: Using PyCharm, it works (like a charm!) but I can't do things like import a module, get the " >>> " prompt, and then issue my commands etc...
43,168,123 | 2017-04-02T12:08:00.000 | -1 | 0 | 1 | 0 | 0 | python,byte | 0 | 51,676,604 | 0 | 5 | 0 | false | 0 | 0 | It has a simple solution like this:
0x0400 = 0x04 × 256 + 0x00 | 2 | 3 | 0 | 0 | Say you have b'\x04' and b'\x00' how can you combine them as b'\x0400'? | How to append two bytes in python? | 0 | -0.039979 | 1 | 0 | 0 | 26,609 |
43,168,123 | 2017-04-02T12:08:00.000 | 0 | 0 | 1 | 0 | 0 | python,byte | 0 | 51,753,927 | 0 | 5 | 0 | false | 0 | 0 | In my application I am receiving a stream of bytes from a sensor. I need to combine two of the bytes to create the integer value.
Hossein's answer is the correct solution.
The solution is the same as when one needs to bit-shift binary numbers to combine them. For instance, if we have two nibbles which make a byte, high nibble 0010 and low nibble 0100, we can't just add them together; but if we bit-shift the high nibble to the left four places we can then OR the bits together to create 00100100. By bit-shifting the high nibble we have essentially multiplied it by 16, or binary 10000.
In the hex example above we need to shift the high byte over by two hex digits, and 0x100 equals 256. Therefore, we can multiply the high byte by 256 and add the low byte. | 2 | 3 | 0 | 0 | Say you have b'\x04' and b'\x00' how can you combine them as b'\x0400'? | How to append two bytes in python? | 0 | 0 | 1 | 0 | 0 | 26,609
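In Python terms, a hedged sketch of both framings (assuming the intended result is the two-byte value 0x0400, i.e. b'\x04\x00'):

high, low = b'\x04', b'\x00'
value = high[0] * 256 + low[0]      # 0x04 * 256 + 0x00 == 1024   (Python 3)
value = (high[0] << 8) | low[0]     # same thing via shift-and-OR
combined = high + low               # b'\x04\x00' if you just want concatenation

On Python 2, indexing bytes gives a one-character string, so use ord(high[0]) and ord(low[0]) instead.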
43,169,118 | 2017-04-02T13:50:00.000 | 0 | 1 | 1 | 0 | 0 | python,c++ | 0 | 43,173,531 | 0 | 3 | 0 | false | 0 | 0 | You might consider having each "writer" process write its output to a temporary file, close the file, then rename it to the filename that the "reader" process is looking for.
If the file is present, then the respective reader process knows that it can read from it. | 1 | 0 | 0 | 0 | I have a slight problem. I have a project that I'm working on and that requires two programs to read/write into some .txt files.
Python writes into one .txt file, C++ reads from it. C++ does what it needs to do and then writes its own information into another .txt file that Python has to read.
What I want to know is how can I check with C++ if Python has closed the .txt file before opening the same file, as Python may still be writing stuff into it and vice versa?
If you need any extra information about this conundrum, feel free to contact me. | Has .txt file been closed? | 0 | 0 | 1 | 0 | 0 | 78 |
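A minimal Python-side sketch of that write-then-rename handshake (the file names are placeholders):

import os
with open('py_out.tmp', 'w') as f:
    f.write(data)                         # still invisible to the C++ reader
os.rename('py_out.tmp', 'py_out.txt')     # atomic swap on POSIX filesystems

Because the reader only ever looks for py_out.txt, it can never observe a half-written file; the C++ side can mirror the trick with std::rename before the Python reader picks up its file. (On Windows, prefer os.replace, since os.rename will not overwrite an existing target.)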
43,169,766 | 2017-04-02T14:54:00.000 | 2 | 0 | 0 | 0 | 0 | python,tensorflow,neural-network,dataset | 0 | 43,170,260 | 0 | 1 | 0 | false | 0 | 0 | I suggest you use OpenCV library. Whatever you uses your MNIST data or PIL, when it's loaded, they're all just NumPy arrays. If you want to make MNIST datasets fit with your trained model, here's how I did it:
1.Use cv2.imread to load all the images you want them to act as training datasets.
2.Use cv2.cvtColor to convert all the images into grayscale images and resize them into 28x28.
3.Divide each pixel in all the datasets by 255.
4.Do the training as usual!
I haven't tried converting it to your own format, but theoretically it's the same. | 1 | 4 | 1 | 1 | I already know how to make a neural network using the mnist dataset. I have been searching for tutorials on how to train a neural network on your own dataset for 3 months now, but I'm just not getting it. If someone can suggest any good tutorials or explain how all of this works, please help.
PS. I won't install NLTK. It seems like a lot of people train their neural networks on text, but I won't do that. If I installed NLTK, I would only use it once. | Using your own Data in Tensorflow | 0 | 0.379949 | 1 | 0 | 0 | 1,168
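The four steps above as a hedged code sketch (the file list and any label handling are assumptions):

import cv2
import numpy as np

def load_sample(path):
    img = cv2.imread(path)                        # step 1: load
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # step 2: grayscale
    gray = cv2.resize(gray, (28, 28))             # step 2: MNIST-sized
    return gray.astype(np.float32) / 255.0        # step 3: scale to [0, 1]

X = np.stack([load_sample(p) for p in image_paths])  # image_paths: your files
X = X.reshape(-1, 784)     # flatten if the network expects MNIST-style vectors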
43,175,272 | 2017-04-03T01:21:00.000 | 3 | 0 | 0 | 0 | 0 | python,tensorflow,deep-learning | 1 | 43,214,452 | 1 | 1 | 0 | true | 0 | 0 | You can create a third placeholder variable of type boolean to select which branch to use and feed that in at run time.
The logic behind it is that since you are feeding in the placeholders at runtime anyway, you can determine outside of tensorflow which placeholders will be fed.
One could accomplish this using the tf.where(condition, x, y) function if there was a way to make the condition "placeholder_1 has a value", but after looking through the tensorflow documentation on booleans and asserts I couldn't find anything that looked applicable.
Any ideas? I have a vague idea of how I could accomplish this basically by copying part of the network, sharing parameters and syncing the networks after updates, but I'm hoping for a cleaner way to do it. | check if tensorflow placeholder is filled | 0 | 1.2 | 1 | 0 | 0 | 1,190 |
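A hedged sketch of that boolean-selector idea (TF 1.x; f and g stand for the two sub-networks):

use_f = tf.placeholder(tf.bool, shape=[])
x = tf.cond(use_f,
            lambda: f(placeholder_1),
            lambda: g(placeholder_2))

sess.run(out, feed_dict={use_f: True,
                         placeholder_1: real_input,
                         placeholder_2: dummy_input})

One wrinkle: because of how tf.cond wires its Switch nodes, both placeholders typically still need some feed value, so the unused one gets a cheap dummy. Gradients only flow through the branch that was taken, which gives the "backpropagate to f's or g's parameters depending on the path" behaviour asked for.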
43,179,875 | 2017-04-03T08:29:00.000 | 10 | 0 | 0 | 0 | 0 | python,django | 0 | 54,591,055 | 0 | 4 | 0 | false | 1 | 0 | You create models for your website. When a new instance of a model is created, django must know where to go, for example when a new post is created.
This is where get_absolute_url comes into the picture. It tells django which URL to use when a new post (instance) is created.
get_absolute_url() method to tell Django how to calculate the canonical URL for an object.
What does canonical URL mean in this context?
I know from an SEO perspective that canonical URL means picking the best URL from the similar looking URLs (example.com , example.com/index.html). But this meaning doesn't fit in this context.
I know this method provides some additional functionality in Django admin, redirection etc. And I am fully aware of how to use this method.
But what is the philosophy behind it? I have never actually used it in my projects. Does it serve any special purpose? | When to use Django get_absolute_url() method? | 0 | 1 | 1 | 0 | 0 | 37,142 |
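For concreteness, a typical definition looks like this (the URL name and model are illustrative; the django.urls import path assumes Django 1.10+):

from django.db import models
from django.urls import reverse

class Article(models.Model):
    slug = models.SlugField(unique=True)

    def get_absolute_url(self):
        return reverse('article-detail', kwargs={'slug': self.slug})

Once defined, {{ object.get_absolute_url }} works in templates, the admin gains a "View on site" button, and generic views and redirects can send users to the object without the URL logic being repeated anywhere.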
43,209,135 | 2017-04-04T13:48:00.000 | 0 | 0 | 0 | 0 | 0 | python,machine-learning,data-mining | 0 | 43,216,616 | 0 | 1 | 0 | false | 0 | 0 | This can be solved reasonably easily if you work with the transposed matrix.
For any two features (now rows, originally columns) you compute the intersection. If it's larger than 50, you have a frequent co-occurrence.
If you use an appropriate sparse encoding (now of rows, but originally of columns - so you probably need not only to transpose the matrix, but also to re-encode it) this operation costs O(n+m), where n and m are the number of nonzero values.
If you have an extremely high number of features this may take a while. But 100000 should be feasible. | 1 | 0 | 1 | 0 | 0 | The sparse matrix has only 0 and 1 at each entry (i,j) (1 stands for sample i has feature j). How can I estimate the co-occurrence matrix for each feature given this sparse representation of data points? Especially, I want to find pairs of features that co-occur in at least 50 samples. I realize it might be hard to produce the exact result; is there any approximate algorithm in data mining that allows me to do that? | Given a sparse matrix with shape (num_samples, num_features), how do I estimate the co-occurrence matrix? | 1 | 0 | 1 | 0 | 0 | 101
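In scipy terms, the transposed-intersection trick collapses to one sparse product; a sketch assuming X is a scipy.sparse CSR matrix of 0/1 values with shape (num_samples, num_features):

C = (X.T.dot(X)).tocoo()   # C[i, j] = number of samples where features i and j co-occur
pairs = [(i, j) for i, j, v in zip(C.row, C.col, C.data)
         if v >= 50 and i < j]   # each unordered pair once, diagonal dropped

This is exact rather than approximate, and its cost is driven by the nonzeros per column, so ~100000 features is feasible as long as the matrix is genuinely sparse.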
43,213,086 | 2017-04-04T16:43:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-3.x,oop | 0 | 43,213,652 | 0 | 3 | 0 | false | 0 | 0 | Your question is asking about a concept called "dependency injection." You should take some time to read up on it. It details the ways of making one object available to another object that wants to interact with it. While that's too broad to write up here, here are some of the basics:
You could have all objects you care about be global, or contained in a global container. They can all see each other and interact with each other as necessary. This isn't very object-oriented, and is not the best practice. It's brittle (all the objects are tightly bound together, and it's hard to change or replace one), and it's not a good design for testing. But, it is a valid option, if not a good one.
You could have objects that care about each other be passed to each other. This would be the responsibility of something outside of all of the objects (in your case, basically your main function). You can pass the objects in every method that cares (e.g. board.verify_player_position(player1)). This works well, but you may find yourself passing the same parameter into almost every function. Or you could set the parameter either through a set call (e.g. board.set_player1(player1)), or in the constructor (e.g. board = Board(player1, player2)). Any of these are functionally pretty much the same, it's just a question of what seems to flow best for you. You still have the objects pretty tightly bound. That may be unavoidable. But at least testing is easier. You can make stub classes and pass them in to the appropriate places. (Remember that python works well with duck typing: "If it walks like a duck and quacks like a duck, then it's a duck." You can make testing code that has the same functions as your board and player class, and use that to test your functions.)
A frequent pattern is to have these objects be fairly dumb, and to have a "game_logic" or some other kind of controller. This would be given the instances of the board and the two players, and take care of maintaining all of the rules of the game. Then your main function would basically create the board and players, and simply pass them into your controller instance. If you went in this direction, I'm not sure how much code you would need your players or board to have, so you may not like this style.
There are other patterns that will do this, as well, but these are some of the more basic.
To answer your direct questions: yes, the error you're seeing is because you're trying to invoke the class function, and you need it to be on an object. And yes, instantiating in that case would be bad. But no, passing an instance of one class to another is not a bad thing. There's no use in having objects if they don't interact with something; most objects will need to interact with some other object at some point.
You mentioned that you have code available, but it's a good thing to think out your object interactions a little bit before getting too into the coding. So that's the question for you: do you want player1.check_valid_position(board), or board.check_player(player1), or rules.validate_move(player, some_kind_of_position_variable)`. They're all valid, and they all have the objects inter-relate; it's just a question of which makes the most sense to you to write. | 1 | 1 | 0 | 0 | I'm quite green on Python and have been looking around for an answer to my particular question. Though I'm not sure if it's a Python specific question, or if I'm simply getting my OOP / design patterns confused.
I've got three files: main.py, board.py and player.py. Board and player each only hold a class Player and Board, main simply starts the game.
However I'm struggling with validating player positions when they are added to the board. What I want is to instantiate the board and consecutively new player object(s) in main.py, but check the board size in player.py when a new player is added to the board, to ensure the player is not outside of bounds upon creation.
As it is now I'm getting a TypeError (getX() missing 1 required positional argument: 'self') when attempting to access the board's size inside of player.py.
Most likely because the board isn't instantiated in that scope. But if I instantiate it in the scope that will be counted as a new object, won't it? And if I pass the board to the player as a variable that would surely be counted as bad practice, wouldn't it?
So how do I go about accessing the instance variables of one class from another class? | Accessing variable of class-object instantiated in other file | 0 | 0 | 1 | 0 | 0 | 305 |
43,219,217 | 2017-04-04T23:10:00.000 | 0 | 1 | 0 | 1 | 0 | python,windows,executable | 0 | 43,219,241 | 0 | 2 | 0 | false | 0 | 0 | You don't have shell scripts on Windows, you have batch or powershell.
If your reading is teaching Unix things, get a virtual machine running (insert popular Linux distribution here).
Regarding python, you just execute python script.py | 1 | 0 | 0 | 0 | I'm reading from a "bookazine" which I purchased from WHSmiths today, and it said
that during the setup I need to type these commands into the terminal (or the Command Prompt in my case) in order to make a script executable without needing to do it manually. One of these commands is chmod +x (file name), but because this is based on Linux or Mac and I am on Windows, I am not sure how to make my script executable. How do I?
Thanks in advance. | How would I go about making a Python script into an executable? | 0 | 0 | 1 | 0 | 0 | 492 |
43,219,641 | 2017-04-04T23:54:00.000 | 0 | 0 | 0 | 0 | 0 | python,pandas | 0 | 43,219,694 | 0 | 2 | 0 | false | 0 | 0 | I get that the mean of that particular group is NAN when a NAN value is present
FALSE! :)
the mean will only consider non null values. You are safe my man. | 1 | 0 | 1 | 0 | I have a dataset consisting of multiple columns and I want to calculate the average by using the groupby function in Python. However, since some of the values are NAN I get that the mean of that particular group is NAN when a NAN value is present. I would like to omit this value, not set it to zero or fill it with any statistical variable, just omit.
Any idea how I can achieve this?
Thanks in advance! | How to Omit NaN values when applying groupby in Pandas | 0 | 0 | 1 | 0 | 0 | 266
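A quick sketch demonstrating that default behaviour:

import numpy as np
import pandas as pd

df = pd.DataFrame({'g': ['a', 'a', 'b'], 'v': [1.0, np.nan, 3.0]})
print(df.groupby('g')['v'].mean())
# a    1.0   <- the NaN row is skipped, not averaged in
# b    3.0

NaNs only become a problem if every value in a group is NaN (that group's mean is then NaN) or if you first fill them with something like 0.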
43,240,152 | 2017-04-05T19:37:00.000 | 1 | 1 | 0 | 0 | 0 | telegram,telegram-bot,python-telegram-bot | 0 | 43,241,747 | 0 | 1 | 1 | true | 0 | 0 | For the first part of your question, you can make a private group and add your bot as one of its administrators. Then it can talk to the members and answer their commands.
Even if you don't want to do so, it is possible by checking the chatID of each update that the bot receives. If the chatID exists in the file, database, or even a simple array, the bot answers the command; if not, it just ignores the message or sends a simple text like the "goodbye" you mentioned.
Note that bots cannot block people; they can only ignore their messages. | 1 | 3 | 0 | 0 | I want to create a telegram bot for a home project, and I want the bot to talk to only 3 people. How can I do this?
I thought of creating a file with the chat id of each of us and checking it before responding to any command; I think it will work. The bot will send the correct info if it's one of us, and "goodbye" to anyone else
But is there any other way to block any other conversation with my bot?
Pd: I'm using python-telegram-bot | Telegram-bot user control | 1 | 1.2 | 1 | 0 | 1 | 986 |
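A minimal sketch of the chat-id check (the IDs are placeholders; the handler signature assumes the 2017-era python-telegram-bot API):

ALLOWED_IDS = {111111, 222222, 333333}    # the three chat ids from your file

def guarded(bot, update):
    if update.message.chat_id not in ALLOWED_IDS:
        update.message.reply_text('goodbye')
        return
    update.message.reply_text('the real info')

Later python-telegram-bot releases also provide a Filters.user filter, which lets you attach the same whitelist declaratively when registering the handler.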
43,253,823 | 2017-04-06T11:26:00.000 | 1 | 0 | 0 | 0 | 0 | node.js,django,python-2.7,oauth-2.0,nodes | 0 | 43,254,420 | 0 | 2 | 0 | false | 1 | 0 | Scenario A
You create a URL in django matching ^v2/.*$ and then make a request from the django view to the node.js process. This way django can handle user auth and permissions, and node can be standalone and not know anything about user auth. If you need user data (an id or something) you can inject it into the request as a header or a cookie.
Scenario B
You dig into django's REST OAuth implementation, find where tokens are stored in the DB, and on each request you take the OAuth token from the header/cookie and compare it to the one in the DB. You would have to set up nginx as a reverse proxy and route all traffic on URLs matching /v1/.*$ to the django app, and all traffic going to /v2/.*$ to the node app.
Either option is doable, but I would suggest Scenario A. It's easier, quicker and far less error prone. | 1 | 1 | 0 | 0 | Hi, I have two processes:
Django and MYSQL
node/express and mongo db.
1.
How can I configure these two processes to point to different URLs,
e.g. Django pointing to api.abc.com/v1 and node pointing to api.abc.com/v2?
2.
All my user login is inside Django and MySQL with OAuth. I can authenticate users in Django.
But how can I authenticate users in the nodejs app with the token sent by Django REST OAuth?
Thanks. | Django and Node processes for the same domain | 0 | 0.099668 | 1 | 0 | 0 | 1,050 |
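A hedged sketch of the Scenario A hand-off from above, on the Django side (the node URL and header name are assumptions):

import requests
from django.http import HttpResponse

def v2_proxy(request, path):
    resp = requests.request(
        request.method,
        'http://localhost:3000/v2/' + path,              # node/express process
        headers={'X-User-Id': str(request.user.id)},     # injected identity
        data=request.body)
    return HttpResponse(resp.content, status=resp.status_code)

Django authenticates the user first (OAuth against MySQL) and then forwards the request with the identity stamped on it; node can trust the header as long as it is only reachable from Django.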
43,270,149 | 2017-04-07T05:12:00.000 | 0 | 0 | 1 | 0 | 1 | python,ios,arrays,anaconda,conda | 0 | 49,812,547 | 0 | 2 | 0 | false | 0 | 0 | Sometimes a restart works. I was also facing the same issue; when I restarted my system it worked like a charm. | 2 | 2 | 0 | 0 | I'm new to coding and decided to install Anaconda because I heard it was the most practical platform for beginners.
The problem is, every time I try opening it, it literally takes at least 15 minutes to boot up while showing me "Updating metadata..." and subsequently showing me "Updating repodata..." statements.
Would any of you know how to fix or go around this issue?
I'm using a macbook air that has 8gb of RAM and an i5 processor, if that helps. | The Anaconda launcher takes long time to load | 0 | 0 | 1 | 0 | 0 | 7,621 |
43,270,149 | 2017-04-07T05:12:00.000 | 0 | 0 | 1 | 0 | 1 | python,ios,arrays,anaconda,conda | 0 | 59,691,347 | 0 | 2 | 0 | false | 0 | 0 | I started Anaconda Navigator with "Run as Administrator" privileges on my Windows machine, and it worked like a charm. Though it did ask me for Admin credentials for a couple of times while loading different scripts, but the response was <1 min, compared to 6 - 8 mins. earlier.
Search for Anaconda through desktop search or go to Cortana tool on the desktop toolbar and type Anaconda
On the Anaconda icon that shows up, right-click and choose "Run as Administrator"
Provide Admin credentials when prompted
This should hopefully work for Windows 10 users. | 2 | 2 | 0 | 0 | I'm new to coding and decided to install Anaconda because I heard it was the most practical platform for beginners.
The problem is, every time I try opening it, it literally takes at least 15 minutes to boot up while showing me "Updating metadata..." and subsequently showing me "Updating repodata..." statements.
Would any of you know how to fix or go around this issue?
I'm using a macbook air that has 8gb of RAM and an i5 processor, if that helps. | The Anaconda launcher takes long time to load | 0 | 0 | 1 | 0 | 0 | 7,621 |
43,270,820 | 2017-04-07T06:08:00.000 | 98 | 0 | 0 | 1 | 0 | python,hadoop,airflow | 0 | 43,330,451 | 0 | 2 | 0 | true | 0 | 0 | In the UI:
Go to the dag, and dag run of the run you want to change
Click on GraphView
Click on task A
Click "Clear"
This will let task A run again, and if it succeeds, task C should run.
This works because when you clear a task's status, the scheduler will treat it as if it hadn't run before for this dag run. | 1 | 54 | 0 | 0 | I am using a LocalExecutor and my dag has 3 tasks where task(C) is dependent on task(A). Task(B) and task(A) can run in parallel, something like below:
A-->C
B
So task(A) has failed but task(B) ran fine. Task(C) is yet to run since task(A) has failed.
My question is: how do I re-run task(A) alone, so task(C) runs once task(A) completes and the Airflow UI marks them as success? | How to restart a failed task on Airflow | 0 | 1.2 | 1 | 0 | 0 | 35,603
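The same thing can be done from the command line; a hedged sketch using the Airflow 1.x CLI flags (the names and dates are placeholders):

airflow clear my_dag -t '^task_a$' -s 2017-04-01 -e 2017-04-01

Clearing task_a's state for that execution date makes the scheduler re-run it, and once it succeeds the downstream task_c becomes eligible as usual.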
43,306,222 | 2017-04-09T11:35:00.000 | 0 | 0 | 1 | 0 | 1 | python-2.7,python-multithreading,raw-input | 0 | 43,306,536 | 0 | 1 | 0 | true | 0 | 0 | OK, so I figured out that the threading module does not actually run threads in parallel because of a mechanism called the GIL. My solution is to use multiprocessing instead. It works fine. Hope it helps someone. | 1 | 0 | 0 | 0 | I am using python 2.7, with the module threading. Now I have a countdown of 24 hours, which is one thread; the other thread is taking user input using raw_input.
When my program runs, the countdown thread waits for the user input to be entered, and only then does the countdown continue. In the first place, my reason for using threading was to have both threads run at the same time. I just can't understand why one thread would wait for the input of another one. And how do I fix that?
Thanks in advance! | Python thread stuck while another thread waiting for user input | 1 | 1.2 | 1 | 0 | 0 | 410 |
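A hedged sketch of the multiprocessing variant (Python 2.7; the countdown body is illustrative):

import time
from multiprocessing import Process

def countdown(seconds):
    while seconds > 0:
        time.sleep(1)
        seconds -= 1

p = Process(target=countdown, args=(24 * 3600,))
p.start()
answer = raw_input('your input: ')   # main process blocks here,
p.join()                             # but the countdown process keeps ticking

Because each Process is a separate interpreter, blocking input in one cannot stall the other, which sidesteps the GIL question entirely.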
43,310,597 | 2017-04-09T18:51:00.000 | 0 | 0 | 0 | 0 | 0 | python,apache-spark,cassandra,pyspark | 0 | 43,311,283 | 0 | 2 | 0 | false | 0 | 0 | I'll just give my "short" 2 cents. The official docs are totally fine for you to get started. You might want to specify why this isn't working, i.e. did you run out of memory (perhaps you just need to increase the "driver" memory) or is there some specific error that is causing your example not to work. Also it would be nice if you provided that example.
Here are some of my opinions/experiences that I had. Usually, not always, but most of the time you have multiple columns in partitions. You don't always have to load all the data in a table and more or less you can keep the processing (most of the time) within a single partition. Since the data is sorted within a partition this usually goes pretty fast. And didn't present any significant problem.
If you don't want the whole store-in-cassandra, fetch-to-spark cycle to do your processing, you really have a lot of solutions out there. Basically that would be quora material. Here are some of the more common ones:
Do the processing in your application right away - might require some sort of inter-instance communication framework like hazelcast or, even better, an akka cluster; this is really a wide topic
spark streaming - just do your processing right away in micro batching and flush results for reading to some persistence layer - might be cassandra
apache flink - use proper streaming solution and periodically flush state of the process to i.e. cassandra
Store data into cassandra the way it's supposed to be read - this approach is the most advisable (just hard to say with the info you provided)
The list could go on and on ... User defined function in cassandra, aggregate functions if your task is something simpler.
It might also be a good idea to provide some details about your use case. More or less what I said here is pretty general and vague, but then again putting this all into a comment just wouldn't make sense. | 1 | 3 | 1 | 0 | I have huge data stored in cassandra and I want to process it using spark through python.
I just wanted to know how to interconnect spark and cassandra through python.
I have seen people using sc.cassandraTable, but it isn't working, and fetching all the data at once from cassandra and then feeding it to spark doesn't make sense.
Any suggestions? | Spark and Cassandra through Python | 0 | 0 | 1 | 0 | 0 | 1,860 |
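The usual glue is the DataStax spark-cassandra-connector; a hedged pyspark sketch (the keyspace/table names are placeholders, and the package coordinate should match your Spark version):

# launched with something like:
# pyspark --packages datastax:spark-cassandra-connector:2.0.1-s_2.11 \
#         --conf spark.cassandra.connection.host=127.0.0.1

df = (spark.read
        .format('org.apache.spark.sql.cassandra')
        .options(keyspace='my_ks', table='my_table')
        .load())
df.filter(df.user_id == 42).show()

Filters on partition-key columns get pushed down to Cassandra, so, in line with the answer above, you avoid hauling the whole table into Spark.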
43,314,517 | 2017-04-10T03:23:00.000 | -3 | 0 | 1 | 1 | 0 | python,cmd | 0 | 43,314,666 | 0 | 5 | 0 | false | 0 | 0 | For installing multiple packages on the command line, just pass them as a space-delimited list, e.g.:
pip install numpy pandas | 1 | 6 | 0 | 0 | I know how to install *.whl files through cmd (the code is simply python -m pip install *so-and-so-.whl). But since I accidentally deleted my OS and had no backups, I found myself in the predicament of having to reinstall all of my whl files for my work.
This comes up to around 50 files. I can do this manually which is pretty simple, but I was wondering how to do this in a single line. I can't seem to find anything that would allow me to simply type in python -m pip install *so-and-so.whl to find all of the whl files in the directory and install them.
Any ideas? | How to install multiple whl files in cmd | 0 | -0.119427 | 1 | 0 | 0 | 9,927 |
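A hedged single-line option, using Python itself so it behaves the same in cmd and PowerShell (it assumes the .whl files sit in the current directory):

python -c "import glob, subprocess, sys; [subprocess.check_call([sys.executable, '-m', 'pip', 'install', w]) for w in glob.glob('*.whl')]"

Alternatively, if you keep a requirements.txt, pip can satisfy everything from the local wheels in one call: python -m pip install --no-index --find-links=. -r requirements.txt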
43,328,064 | 2017-04-10T16:09:00.000 | 1 | 0 | 1 | 0 | 0 | python,pyqt,spyder | 0 | 54,673,251 | 0 | 2 | 0 | false | 0 | 1 | I had a similar problem and found that my application only worked when the graphics settings inside Spyder are set to inline. This can be done at Tools -> Preferences -> IPython console -> Graphics; then change the Backend to Inline.
Hope this helps. | 1 | 3 | 0 | 0 | I am working for the first time towards the implementation of a very simple GUI in PyQt5, which embeds a matplotlib plot and few buttons for interaction.
I do not really know how to work with classes so I'm making a lot of mistakes, i.e. even if the functionality is simple, I have to iterate a lot between small corrections and verification.
There is something I would like to debug; however, the whole process is made much, much slower by the fact that on every other try the python kernel dies and needs restarting (all done automatically) several times.
That is, every time I try something that should last maybe 5 secs, I end up spending a minute.
Anybody know where to look to spot what is causing these constant death/rebirth cycles?
I have been using spyder for some time now and I never experienced this behaviour before, so I'm drawn to think it might have to do with PyQt, but that's about how far I can go. | Spyder + Python 3.5 - how to debug kernel died, restarting? | 0 | 0.099668 | 1 | 0 | 0 | 14,340 |
43,351,596 | 2017-04-11T16:33:00.000 | 6 | 0 | 1 | 0 | 0 | python,visual-studio-code,anaconda | 0 | 46,554,629 | 0 | 15 | 0 | false | 0 | 0 | Unfortunately, this does not work on macOS. Despite the fact that I have export CONDA_DEFAULT_ENV='$HOME/anaconda3/envs/dev' in my .zshrc and "python.pythonPath": "${env.CONDA_DEFAULT_ENV}/bin/python",
in my VSCode prefs, the built-in terminal does not use that environment's Python, even if I have started VSCode from the command line where that variable is set. | 3 | 92 | 0 | 0 | I have Anaconda working on my system and VsCode working, but how do I get VsCode to activate a specific environment when running my python script? | Activating Anaconda Environment in VsCode | 0 | 1 | 1 | 0 | 0 | 175,026 |
43,351,596 | 2017-04-11T16:33:00.000 | 5 | 0 | 1 | 0 | 0 | python,visual-studio-code,anaconda | 0 | 60,607,499 | 0 | 15 | 0 | false | 0 | 0 | Just launch the VS Code from the Anaconda Navigator. It works for me. | 3 | 92 | 0 | 0 | I have Anaconda working on my system and VsCode working, but how do I get VsCode to activate a specific environment when running my python script? | Activating Anaconda Environment in VsCode | 0 | 0.066568 | 1 | 0 | 0 | 175,026 |
43,351,596 | 2017-04-11T16:33:00.000 | 0 | 0 | 1 | 0 | 0 | python,visual-studio-code,anaconda | 0 | 66,031,427 | 0 | 15 | 0 | false | 0 | 0 | As I was not able to solve my problem by suggested ways, I will share how I fixed it.
First of all, even if I was able to activate an environment, the corresponding environment folder was not present in C:\ProgramData\Anaconda3\envs directory.
So I created a new anaconda environment using Anaconda prompt,
a new folder named same as your given environment name will be created in the envs folder.
Next, I activated that environment in Anaconda prompt.
Installed python with conda install python command.
Then on anaconda navigator, selected the newly created environment in the 'Applications on' menu.
Launched vscode through Anaconda navigator.
Now as suggested by other answers, in vscode, opened command palette with Ctrl + Shift + P keyboard shortcut.
Searched and selected Python: Select Interpreter
If the interpreter with newly created environment isn't listed out there, select Enter Interpreter Path and choose the newly created python.exe which is located similar to C:\ProgramData\Anaconda3\envs\<your-new-env>\ .
So the total path will look like C:\ProgramData\Anaconda3\envs\<your-new-env>\python.exe
Next time onwards the interpreter will be automatically listed among other interpreters.
Now you might see your selected conda environment at bottom left side in vscode. | 3 | 92 | 0 | 0 | I have Anaconda working on my system and VsCode working, but how do I get VsCode to activate a specific environment when running my python script? | Activating Anaconda Environment in VsCode | 0 | 0 | 1 | 0 | 0 | 175,026 |
43,351,742 | 2017-04-11T16:42:00.000 | 2 | 0 | 0 | 0 | 0 | python,opencv,numpy,nao-robot,choregraphe | 1 | 43,369,916 | 0 | 2 | 0 | false | 0 | 0 | It depends if you're using a real NAO or a simulated one.
Simulated one: choregraphe uses its own embedded python interpreter; even if you add a library to your system it won't change anything
Real NAO: the system python interpreter is used, so you need to install those libraries on your robot (and not on the computer running choregraphe). As pip often doesn't work well on NAO, you'll have to manually copy the libraries to /home/nao/.local/lib/python2.7/site-packages | 1 | 1 | 1 | 0 | I'm doing a project that requires cv2 and numpy in one of the scripts using choregraphe, but I get an error:
No module named cv2/numpy.
I think it is because choregraphe has its own python interpreter but I do not know how to install cv2 and numpy into the python of choregraphe.
How can I do it? | how to import cv2 and numpy in Choregraphe for NAO robot? | 0 | 0.197375 | 1 | 0 | 0 | 1,416 |
43,352,671 | 2017-04-11T17:33:00.000 | 1 | 1 | 1 | 0 | 0 | python,c++,binaryfiles | 0 | 43,355,184 | 0 | 2 | 0 | false | 0 | 0 | Binary files contain data.
There are a plethora of data layouts of binary files. Some examples are JPEG, Executables, Word Processor, Raw Text and archive files.
A file may have an extension that indicates the layout. For example, a ".png" would most likely follow the PNG format. A "bin" or "dat" extension is too generic. One could zip up files and name the archive with a "png" extension.
If there is no file extension, or the OS doesn't store the type of a file, then the format of the file is based on discovery (or trying random formats). Some file formats have integrity values in them to help verify correctness. Knowing the integrity value and how it was calculated can assist in classifying the format type. Again, there is no guarantee.
BTW, file formats are independent of the language used to read them. One could read a gzipped file using FORTRAN or BASIC. | 1 | 0 | 0 | 0 | I have a simple (and maybe silly) question about binary data files. If a simple type is used (int/float/..) it is easy to imagine the structure of the binary file (a sequence of floats, with each float written using a fixed number of bytes). But what about structures, objects and functions? Is there some kind of convention for each language with regards to the order in which the variable names / attributes / methods are written, and if so, can this order be changed and customized? Otherwise, is there some kind of header that describes the format used in each file?
I'm mostly interested in python and C/C++. When I use a pickled (or gzipped) file, for example, python "knows" whether the original object has a certain method or attribute without me casting the unpickled object or indicating its type, and I've always wondered how that is implemented. I didn't know how to look this up on Google because it may have something to do with how these languages are designed in the first place. Any pointers would be much appreciated. | Binary files structure for objects and structs in C++/Python | 1 | 0.099668 | 1 | 0 | 0 | 62
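A small illustration of the discovery idea, using PNG's fixed 8-byte magic number ('mystery.bin' is a placeholder):

with open('mystery.bin', 'rb') as f:
    header = f.read(8)
if header == b'\x89PNG\r\n\x1a\n':
    print('looks like a PNG')

Pickle answers the second part differently: the pickle stream itself encodes the module and class name of each object, plus the opcodes needed to rebuild its attributes, which is how Python "knows" an unpickled object's type and methods without being told.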
43,377,434 | 2017-04-12T18:50:00.000 | 1 | 0 | 0 | 0 | 1 | python,pygame | 0 | 43,377,665 | 0 | 1 | 0 | true | 0 | 1 | You have to remember the facing direction ( self.face_direction = RIGHT ); on a key press, flip only if the current direction is wrong.
Alternatively, save the flipped image in face_flipped_right. Then show either the original image or the flipped one (flipping is nondestructive). | 1 | 0 | 0 | 0 | I am making a game using the Pygame development module. When a user of my game presses the left key, I would like my character to "face" left, and when the user presses the right key, I would like my character to be flipped and "face" the right. The character is one I drew and imported in. I am aware of the flip function in Pygame, but I think there will be errors. If the character starts off facing the left, and the user presses the right key, the character will be flipped and will move to the right. However, if he/she lets go of the right key and then presses it again, the character will flip and face the left, but will continue to move to the right. Is there any way to solve this problem? I already know how to move the character; I am having problems with flipping it. Also, another idea I have considered is the display blitting one image when one key is pressed, and then blitting another when the other key is pressed. But I do not know how to make the original image disappear. Any thoughts on this as well? Thank you. | Trying to flip a character in Pygame | 0 | 1.2 | 1 | 0 | 0 | 346
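A small sketch of the remembered-direction approach (the attribute names are illustrative):

# once, at load time; flip() returns a new surface, the original is untouched
self.image_right = pygame.transform.flip(self.image_left, True, False)
self.facing_right = False

# in the event handling
if key == pygame.K_RIGHT:
    self.facing_right = True
elif key == pygame.K_LEFT:
    self.facing_right = False

# in the draw step: redrawing the background each frame and blitting the
# chosen surface is what makes the 'old' image disappear
img = self.image_right if self.facing_right else self.image_left
screen.blit(img, self.rect)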
43,379,578 | 2017-04-12T21:00:00.000 | 2 | 0 | 1 | 0 | 0 | python,scala | 0 | 43,379,784 | 0 | 1 | 0 | true | 0 | 0 | ^A is usually used to represent the Start of Heading (SOH) character. Its ascii value is 0x01.
You can create this in code with val c: Char = 1, if it's more clear to you, or if you need it in a string literal you can use the unicode notation '\u0001' | 1 | 2 | 0 | 0 | In Python "^A" is represented by chr(1). This is what I use as a separator in myfiles. What is the equivalent in Scala.I am reading the file using scala. I want to know how to represent ^A in order to split the data i read from my files. | Control A Representation in Python/Scala | 0 | 1.2 | 1 | 0 | 0 | 237 |
43,412,779 | 2017-04-14T13:42:00.000 | 0 | 0 | 0 | 0 | 0 | python,selenium,svg,webdriver | 1 | 43,422,726 | 0 | 1 | 0 | false | 0 | 0 | Try to make use of Actions or Robot class | 1 | 0 | 0 | 0 | I am interacting with svg elements in a web page.
I am able to locate the svg elements by xpath, but am not able to click them. The error mentions that methods like click() and onclick() are not available.
Any suggestions on how we can make them clickable? Please advise. | SVG Elements: Able to locate elements using xpath but not able to click | 0 | 0 | 1 | 0 | 1 | 106
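A hedged Python/Selenium sketch of the Actions approach (the XPath is a placeholder):

from selenium.webdriver.common.action_chains import ActionChains

el = driver.find_element_by_xpath("//*[name()='svg']//*[name()='path']")
ActionChains(driver).move_to_element(el).click().perform()

Note the name()= trick in the XPath: SVG elements live in their own XML namespace, so a plain //svg//path often fails to match even when the element is clearly visible.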
43,416,606 | 2017-04-14T17:49:00.000 | 0 | 1 | 1 | 1 | 0 | python,bash,shell | 0 | 43,416,717 | 0 | 2 | 0 | false | 0 | 0 | Call the python script with /usr/bin/time, e.g. /usr/bin/time python script.py. This allows you to track the CPU and wall-clock time of the script. | 1 | 0 | 0 | 0 | I have a bash shell script which is internally calling a python script. I would like to know how long the python part takes to execute. I am not allowed to make changes to the python script.
Any leads would be helpful; thanks in advance.
43,430,790 | 2017-04-15T20:11:00.000 | 0 | 0 | 1 | 1 | 0 | python | 0 | 43,431,194 | 0 | 4 | 0 | false | 0 | 0 | The option list starts after the code (which was passed as a string literal) according to the manual:
Specify the command to execute (see next section). This terminates the option list (following options are passed as arguments to the command).
It means that the name of the script will be replaced by -c. For example,
python -c "import sys; print(sys.argv)" 1 2 3
results in
['-c', '1', '2', '3']
A possible solution is to use the inspect module, for example
python3 -c "import sys; import inspect; inspect.getsource(sys.modules[__name__])"
but it causes TypeError because the __main__ module is a built-in one. | 2 | 1 | 0 | 0 | When using sys.argv on python -c "some code" I only get ['-c'], how can I reliably access the code being passed to -c as a string? | Capture the value of python -c "some code" | 0 | 0 | 1 | 0 | 0 | 87 |
43,430,790 | 2017-04-15T20:11:00.000 | 0 | 0 | 1 | 1 | 0 | python | 0 | 43,437,447 | 0 | 4 | 0 | false | 0 | 0 | This works
python -c "import sys; exec(sys.argv[1])" "print 'hello'"
hello | 2 | 1 | 0 | 0 | When using sys.argv on python -c "some code" I only get ['-c'], how can I reliably access the code being passed to -c as a string? | Capture the value of python -c "some code" | 0 | 0 | 1 | 0 | 0 | 87 |
43,433,329 | 2017-04-16T03:12:00.000 | 3 | 0 | 1 | 0 | 0 | python,jupyter-notebook,jupyter | 1 | 43,436,983 | 0 | 1 | 0 | false | 0 | 0 | Well, I shut down the server and restarted it, and now it works. I wish I knew what happened. | 1 | 3 | 0 | 0 | Not sure what's up, but I just noticed my anaconda-based jupyter totally fails to render latex. I don't get an error, but if I put $x$ in a markdown cell, I get back $x$. Any suggestions on how to diagnose/fix? | Jupyter not rendering latex | 0 | 0.53705 | 1 | 0 | 0 | 2,129
43,434,028 | 2017-04-16T05:27:00.000 | -1 | 0 | 1 | 0 | 0 | python,tensorflow,pip,32bit-64bit,python-module | 0 | 64,636,321 | 0 | 5 | 0 | false | 0 | 0 | There's not much you can do. I also had this issue. The best thing to do is to change your python path and install the packages on the 64-bit python. | 1 | 12 | 0 | 0 | I have decided to learn generic algorithms recently and I needed to install the Tensorflow package. Tensorflow runs on 64-bit python only, so I installed python 3.5.0 64-bit without uninstalling my 32-bit python, because I was afraid of losing my packages on the 32-bit python by uninstalling it. The problem is how I can force pip install to install a package on my 64-bit python version instead of the 32-bit version. | how to pip install 64 bit packages while having both 64 bit and 32 bit versions? | 0 | -0.039979 | 1 | 0 | 0 | 30,428
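One way to target the 64-bit interpreter explicitly on Windows (assuming the py launcher that ships with modern Python installers):

py -3.5-64 -m pip install tensorflow
py -2 -m pip list       # for comparison, the other install

Running pip via python -m pip (or the launcher) pins the install to a specific interpreter, which is safer than relying on whichever pip.exe happens to be first on PATH.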
43,449,023 | 2017-04-17T09:45:00.000 | 4 | 0 | 1 | 0 | 0 | python,python-2.7,matplotlib,plt | 0 | 43,449,375 | 0 | 3 | 0 | true | 0 | 0 | You want to type %matplotlib qt into your IPython console. This changes it for the session you're in only. To change it for the future, go to Tools > Preferences, select IPython Console > Graphics, then set Graphics Backend to Qt4 or Qt5. This ought to work. | 1 | 4 | 0 | 0 | I am trying to show some plots using plt.show(). I get the plots shown in the IPython console, but I need to see each figure in a new window. What can I do? | plt.show () does not open a new figure window | 0 | 1.2 | 1 | 0 | 0 | 15,399
43,449,169 | 2017-04-17T09:55:00.000 | 4 | 0 | 1 | 0 | 0 | python | 0 | 43,449,238 | 0 | 1 | 0 | false | 0 | 0 | A csv file is a simple file type with flat data, separated by commas. Unlike an excel file, for example, it cannot contain multiple sheets.
If you need multiple sheets, you will have to make multiple csv files. | 1 | 3 | 0 | 0 | I created a csv file with a single sheet. I want to know how to create a csv file with multiple sheets using the python language. | How to create multiple sheets in csv using python | 0 | 0.664037 | 1 | 0 | 0 | 6,564
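If the multi-sheet requirement is firm, the usual workaround is to switch to an Excel file; a hedged pandas sketch (needs openpyxl or xlsxwriter installed):

import pandas as pd

with pd.ExcelWriter('report.xlsx') as writer:
    df_one.to_excel(writer, sheet_name='Sheet1', index=False)
    df_two.to_excel(writer, sheet_name='Sheet2', index=False)

Staying with CSV means one file per 'sheet' (e.g. report_sheet1.csv, report_sheet2.csv), each written with the csv module or DataFrame.to_csv.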
43,456,097 | 2017-04-17T17:15:00.000 | 3 | 0 | 1 | 0 | 0 | python,ubuntu,ide,virtualenv,spyder | 0 | 43,456,751 | 0 | 1 | 0 | true | 0 | 0 | I figured out the issue. Seems that I was somehow running it from the wrong location, just had to run Spyder3 from the v-env bin folder. | 1 | 2 | 0 | 0 | I just recently started learning Ubuntu (17.04) and have managed to figure out how get plain Python3.5 + Spyder3 running, have created a virtual environment, and gotten the v-env running with Spyder by changing the interpreter setting within Spyder via pointing it to the virtual environment bin.
However, I saw numerous other ways of installing Spyder, primarily via a pip3 install in the environment itself, but I found nothing as to how to actually RUN the pip-installed Spyder. Running "Spyder3" always runs the default install regardless of location.
Does anyone know how to actually run it?
I was curious because I figured it would allow for a similar functionality that Anaconda provides where you can simply select your virtual environment and run the respective Spyder version for it. | Running a pip-installed Spyder in virtual environment on Ubuntu without Anaconda? | 0 | 1.2 | 1 | 0 | 0 | 1,124 |
43,457,337 | 2017-04-17T18:33:00.000 | 4 | 0 | 1 | 0 | 0 | css,ipython-notebook,jupyter-notebook,jupyter | 0 | 43,461,264 | 0 | 2 | 0 | true | 1 | 0 | I'm using Jupyter 5.0.
Right now I've tried to edit custom.css and the changes are reflected immediately after reloading a page without restarting.
I'm not sure about the 4.3 version, but I guess it should work the same way. Which property did you change?