Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
37,015,712 | 2016-05-03T22:52:00.000 | 1 | 1 | 0 | 0 | python,customization,kodi | 40,060,071 | 1 | true | 0 | 0 | file_path = xbmc.translatePath(os.path.join('insert path here to file you want to run'))
xbmc.executebuiltin("XBMC.RunScript("+file_path+")")
Very late reply, but I saw no one else had answered, so I thought I'd put this in just in case. | 1 | 0 | 0 | If I want to make a button in Kodi's menu and run a local python script upon clicking it, what's the best way to go about it? | How can I run a python script from within Kodi? | 1.2 | 0 | 0 | 1,738
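A slightly fuller, runnable version of the snippet from the answer above; the special:// path and add-on name are placeholders to adapt to your own layout:

```python
# Sketch only: run inside a Kodi add-on / script context.
import xbmc

# translatePath resolves Kodi's special:// protocol to a real filesystem path.
file_path = xbmc.translatePath('special://home/addons/my.hypothetical.addon/myscript.py')

# RunScript launches the Python file through Kodi's built-in function.
xbmc.executebuiltin('RunScript(%s)' % file_path)
```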
37,018,770 | 2016-05-04T04:55:00.000 | 0 | 0 | 0 | 0 | python,boto | 37,026,741 | 1 | false | 0 | 0 | If the backup volume is a separate EBS/SSD volume, you may have chance to create a small snapshot from your root volume. You just need to unmount it from (as in Linux) OS level.
Load the instance OS
Unmount the huge volume from the OS level
Shut down the OS
Take a snapshot
Reload the instance
Mount the volume back
However, this will not work if your backup volume is also part of your root instance volume.
Important note: DO NOT run any AWS detach-volume command. An OS unmount is not an AWS detach-volume. | 1 | 0 | 0 | I would like to create an AMI for an instance from one volume, not all volumes, using boto. I have a script to automate AMI creation for a couple of instances. However, one of the instances has a huge volume for backups (no worries about that data). We would like the AMI creation to snapshot only the root volume, not the other volumes. Is there any way to do this? | How to Create AMI for 1 volume using boto? | 0 | 0 | 1 | 246
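If moving to boto3 is acceptable, a different approach is to exclude the backup device from the image at creation time instead of unmounting anything; a sketch, where the instance id, region and the device name of the backup volume are assumptions to adjust:

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# 'NoDevice' suppresses /dev/sdf in the AMI's block device mapping,
# so only the remaining (root) volume gets snapshotted.
response = ec2.create_image(
    InstanceId='i-0123456789abcdef0',   # placeholder instance id
    Name='root-only-ami',
    NoReboot=True,
    BlockDeviceMappings=[{'DeviceName': '/dev/sdf', 'NoDevice': ''}],
)
print(response['ImageId'])
```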
37,020,995 | 2016-05-04T07:21:00.000 | 2 | 0 | 0 | 0 | python,django,administration | 37,021,382 | 3 | false | 1 | 0 | Before you start anything, set up an environment where you are not working with the live data or production environment.
Now that you've done that you have a few options.
Use the logs
The logs should give you more than enough details to get started; look at the method parameters, what error you get, where it occurs, the user's locale, etc.
Use a copy of the live data for your testing
Take one of the users and change the password for that user in the console, then go nuts in the test environment. Beware of any data protection laws your server may be bound by when doing this
Talk to your users
Just be honest: tell your user you're looking into an issue and see if they are able to help at all | 1 | 2 | 0 | I have a lot of clients who can connect successfully with login + password and have done a lot of things without any problems. But I have 5 clients who managed to do strange things, and now they have some problems when they go to some URLs.
Of course I don't have their passwords (and I don't want them). So I need a way to log in as if I were them, five times, to see what's happening with their accounts. I may have to do this again many times in the future. I didn't find anything on Google which could allow me, via command line or whatever, to log in as a specific user easily.
Is there something around like this? | Django simulate user connected through command line | 0.132549 | 0 | 0 | 226 |
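One way to follow the answer's "change the password in the console" suggestion is to reset it from manage.py shell in the test copy of the data; a sketch assuming the default User model and that you are not touching production:

```python
# Run inside `python manage.py shell` of the *test* environment only.
from django.contrib.auth import get_user_model

User = get_user_model()
user = User.objects.get(username='problem_user')   # placeholder username
user.set_password('temporary-password')            # hashes and stores the new password
user.save()
# Now log in through the normal login page as that user and reproduce the issue.
```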
37,022,937 | 2016-05-04T08:56:00.000 | 3 | 0 | 0 | 0 | python,mysql | 37,024,496 | 1 | true | 0 | 0 | have you tried the .deb files from here packages.debian.org/wheezy/python-mysqldb | 1 | 0 | 0 | I want to install MySQL-python-1.2.5 on an old Linux (Debian) embedded system.
Python is already installed together with a working MariaDB/MySQL database.
Problems:
1) the system is managed remotely and has no direct internet access;
2) band is infinitesimal to install further apps/libraries, so I would avoid doing this if possible;
3) gcc and mysql_config not installed.
My question: is there any way to add the MySQL-python package ready to be imported into a Python script (compiled, and even better as a single file), without going through a painful local upgrade?
My dream: prepare the working package/file on my local PC and then transfer it using SCP.
Note: the remote system and my working pc are compatible and I don't need any special toolchain. | How to install MySQL for python WITHOUT internet access | 1.2 | 1 | 0 | 570 |
37,026,046 | 2016-05-04T11:07:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 37,028,844 | 1 | false | 0 | 1 | I believe you can accomplish what you want with tkinter, but it's not about not getting focus. I don't think, that other GUI tools will make it any easier.
It's the job of the operating system, or more precisely the window manager, to give focus to some window when it is clicked. So, in the case of a virtual keyboard:
User has focus in text editor (for example).
User presses a button on your virtual keyboard.
OS/Window manager gives focus to your keyboard window and sends mouse click event to the GUI library (tkinter).
Here you need to identify where the focus was before your window got it, i.e. get the text editor's window handle somehow.
Send the button press event to that window and return focus.
Repeat.
You'll use tkinter to draw the keyboard window and handle mouse clicks/touches on the virtual keyboard keys. But you'll need to work with the OS/window manager to identify other windows by their handles and send keypress events to them. Maybe you will be able to prevent the focus switch by working with the OS/window manager, but that's not tkinter or other GUI library functionality. | 1 | 0 | 0 | I am trying to make a virtual keyboard using tkinter. Is there any method that keeps the tkinter window from taking focus? For example, in Java we have setFocusableWindowState(false)
Thank you very much for your help. | How to set tkinter window focus out? | 0 | 0 | 0 | 440 |
37,026,175 | 2016-05-04T11:13:00.000 | 0 | 0 | 0 | 0 | python,tkinter,console,stdout,xterm | 37,031,498 | 1 | false | 0 | 1 | You should start with an analysis of what you expect:
input from user: what do you want, how you want to process it? (is this really shell input?)
feed back: what do you want to display, how is it produced? (direct shell output?)
xterm is just a graphical interface around a shell. That means that it captures keyboard input coming to its window, submits it to its shell, and in turn displays the shell's stdout and stderr in the window, with respect to formatting codes (not far from a vt220 terminal plus ANSI color codes). It is uncommon to use xterm in a GUI.
The shell can be any command line program that processes standard input and uses standard output and standard error streams. It is commonly /bin/sh or /bin/bash, but could be any program.
If you really want to do that, you could start xterm in a subprocess, with a special shell that passes all input lines to your main program for example through a Unix domain socket, and in turn echoes back what your main program sends to it.
More commonly, GUIs use a text widget to get input from the user and another (or the same) widget to display feedback. Until you have analyzed things at that level and designed a global architecture, please do not think too much about subprocess module syntax. | 1 | 0 | 0 | Currently in my work placement, I have to create a GUI with Tkinter.
In this GUI I have to embed a console; this console has to be interactive, that is to say stdout & stderr will be displayed in this console and the user can type in a command.
From my first research, I think xterm could be useful, but I didn't find out how to redirect stdout & stderr to it.
Thanks | Python, Tkinter, embed an interactive console and redirect stdout | 0 | 0 | 0 | 1,028 |
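For the record, on X11 an xterm can be re-parented into a Tkinter widget through its -into option; a rough Python 3, Linux-only sketch (xterm must be installed, and the geometry is an assumption):

```python
import subprocess
import tkinter as tk

root = tk.Tk()
term_frame = tk.Frame(root, width=800, height=400)
term_frame.pack(fill='both', expand=True)

def launch_xterm():
    # winfo_id() is the frame's X window id; `xterm -into` embeds itself there.
    subprocess.Popen(['xterm', '-into', str(term_frame.winfo_id()),
                      '-geometry', '120x30', '-sb'])

# Give the frame a moment to be mapped before embedding the terminal.
root.after(100, launch_xterm)
root.mainloop()
```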
37,038,081 | 2016-05-04T21:10:00.000 | 0 | 0 | 0 | 1 | python,pip,pyenv | 61,634,550 | 1 | false | 0 | 0 | All of the shim files actually seem to be identical, so you should always be able to do cp ~/.pyenv/shims/{python,pip}. | 1 | 4 | 0 | How do I get a pip shim in ~/.pyenv/shims. I am using pyenv, but which pip still shows the system version of pip.
Based on the below comment copied from the docs, it appears it should occur through rehashing, but I have run pyenv rehash and nothing happens.
Copied from docs:
Through a process called rehashing, pyenv maintains shims in that directory to match every Python command across every installed version of Python—python, pip, and so on.
Per request in comments here is my PATH
/Users/patrick/google-cloud-sdk/bin:/Users/patrick/.pyenv/shims:/Users/patrick/.pyenv/bin:/Users/patrick/.local/bin:/Users/patrick/npm/bin:/Users/patrick/google_appengine:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/mongodb/bin
and here is my ~/.pyenv/shims content:
$ ll ~/.pyenv/shims/
total 80
drwxr-xr-x 12 patrick staff 408 May 23 09:49 .
drwxr-xr-x 22 patrick staff 748 May 4 18:10 ..
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 2to3
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 idle
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 pydoc
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python-config
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python2
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python2-config
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python2.7
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 python2.7-config
-rwxr-xr-x 1 patrick staff 408 May 4 18:15 smtpd.py | Shimming pip with pyenv | 0 | 0 | 0 | 2,224 |
37,038,234 | 2016-05-04T21:20:00.000 | 1 | 1 | 0 | 0 | python,beagleboneblack,gpio | 37,057,732 | 1 | true | 0 | 0 | No - it is not possible to determine the exact position of a stepper motor without additional information (inputs). As you've noticed, you can only move a certain number of steps, but unless you know where you started, you won't know where you end up.
This is usually solved by using another input, typically a limit switch, at a known location such that the switch closes when the moving part is directly over that location. When you first start up, you rotate the stepper until the switch closes, at which point you know the current location. Once you have calibrated your initial position, THEN you can determine your exact position by counting steps (assuming your motor doesn't ever slip!)
You see this a lot with inkjet printers; when you first turn them on, the print head will slide all the way to one side (where there is almost certainly some sort of detector). That is the printer finding its zero point.
Some alternatives to a switch:
If you don't need full rotation, you can use a servo motor instead. These DO have internal position sensing.
Another hack solution using a stepper would be to place a mechanical block at one extremity that will prevent your mechanism from passing. Then just rotate the stepper one full revolution in a given direction. You know that at some point you will have hit the block and have stopped. This isn't great; you have to be careful that running into the stop won't damage anything or knock anything out of alignment. Due to the nature of steppers, your step count may also be off by up to 3 steps, so this won't be super high precision. | 1 | 1 | 0 | I wanted to find the inicial point of a stepper motor, if it exists, so I could rotate it always in 90 degrees or 512 steps (2048 steps for a full rotation). I've put four cups in the stepper motor and I want to use the degree 0 for cup 1, degree 90 for cup 2 and so on. I'm using it with Beaglebone Black with python language. So far I've only get to move the motor giving him a number of steps. I'm using the Adafruit_BBIO library to control GPIOs from Beaglebone.
Is it possible to get the motor's initial position or move it to an initial position? I've never used a stepper motor before.
Thank you. | Stepper motor 28BYJ-48: How to find the angle 0? Or its initial point? | 1.2 | 0 | 0 | 1,571 |
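A rough homing sketch along the lines of the answer, assuming a 28BYJ-48 driven through a ULN2003 on four GPIO pins plus a limit switch on a fifth pin; every pin name, the switch wiring and the step timing are assumptions to adapt:

```python
import time
import Adafruit_BBIO.GPIO as GPIO

COIL_PINS = ["P8_8", "P8_10", "P8_12", "P8_14"]   # assumed ULN2003 inputs IN1..IN4
SWITCH_PIN = "P8_16"                              # assumed home/limit switch input

# Full-step sequence for a 28BYJ-48: 2048 steps per output-shaft revolution,
# so 512 steps correspond to the 90 degrees between cups.
SEQUENCE = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]

for pin in COIL_PINS:
    GPIO.setup(pin, GPIO.OUT)
GPIO.setup(SWITCH_PIN, GPIO.IN)   # assumes an external pull-down, switch wired to 3.3 V

def step_once(phase):
    # Energize the coils for one step of the sequence.
    for pin, value in zip(COIL_PINS, SEQUENCE[phase % 4]):
        GPIO.output(pin, GPIO.HIGH if value else GPIO.LOW)
    time.sleep(0.003)

# Homing: rotate until the switch closes, then call that position angle 0.
phase = 0
while not GPIO.input(SWITCH_PIN):
    step_once(phase)
    phase += 1

position = 0   # steps from home; cup N then sits at position N * 512
```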
37,040,580 | 2016-05-05T00:58:00.000 | 1 | 0 | 1 | 1 | python-2.7,apache-spark,apache-spark-1.4 | 37,040,617 | 1 | true | 0 | 0 | Did you restart the Spark workers with the new setting? Changing the environment setting just for your driver process is not enough: tasks created by the driver will cross process, sometimes system, boundaries to be executed. Those tasks are compiled bits of code, so that is why both versions need to match. | 1 | 0 | 0 | Running spark 1.4.1 on CentOS 6.7. Have both python 2.7 and python 3.5.1 installed on it with anaconda.
Made sure that the PYSPARK_PYTHON env var is set to python3.5, but when I open the pyspark shell and execute a simple RDD transformation, it errors out with the below exception:
Exception: Python in worker has different version 2.7 than that in driver 3.5, PySpark cannot run with different minor versions
Just wondering what are the other places to change the path. | Python versions in worker node and master node vary | 1.2 | 0 | 0 | 930 |
37,043,493 | 2016-05-05T06:02:00.000 | 1 | 0 | 0 | 1 | python,git,google-app-engine,gcloud-python,google-cloud-python | 37,061,546 | 1 | false | 1 | 0 | Yes, this is the expected behaviour, each deployment is standalone, no assumption is made about anything being "already deployed", all app's artifacts are uploaded at every deployment.
Update: Kekito's comment suggests different tools may actually behave differently. My answer applies to the linux version of the Python SDK, regardless of deploying a new version or re-deploying the same version. | 1 | 2 | 0 | After i recently updated the gcloud components with gcloud components update to version 108.0.0, i noticed the gcloud preview app deploy app.yaml command has started taking too long every time (about 15 minutes) for my project. Before this it only used to take about a minute to complete.
I figured out that using gcloud preview app deploy --verbosity info app.yaml displays the progress of the deployment process, and I noticed every file in the source code is being uploaded every time I deploy, including the files in the lib directory, which has a number of packages installed, about 2000 files in it, so this is where the delay is coming from. Since I am new to App Engine, I don't know if this is normal.
The project exists inside a folder of git repo, and i noticed after every deploy, 2 files in default directory, source-context.json and source-contexts.json, are being created and have information about git repo inside. I feel that can somehow be relevant.
I went through a number of relevant questions here but couldn't figure out the issue. It would be great if this could be resolved, if it's an issue at all, because it's a big inconvenience having to wait 15 minutes to deploy every time.
I only started using Google App Engine a month ago, so please don't mind if the question is incorrect. Please let me know if additional info is needed to resolve this. Thanks.
UPDATE: I am using gcloud sdk on ubuntu 14.04 LTS. | gcloud preview app deploy uploads all souce code files everytime in python project taking long time | 0.197375 | 0 | 0 | 1,415 |
37,044,634 | 2016-05-05T07:12:00.000 | 0 | 0 | 0 | 0 | python,django,git | 56,626,356 | 2 | false | 1 | 0 | After you pull, do not delete the migrations file or folder. Simply just do python manage.py migrate. Even after this there is no change in database schema then open the migrations file which came through the git pull and remove the migration code of the model whose table is not being created in the database. Then do makemigrations and migrate. I had this same problem. This worked for me. | 1 | 0 | 0 | I am working on a Django project with another developer. I had initially created a model which I had migrated and was synced correctly with a MySQL database.
The other developer had later pulled the code I had written so far from the repository and added some additional fields to my model.
When I pulled his changes through to my local machine the model had his changes, and additionally a second migration file had been pulled.
So I then executed the migration commands:
python manage.py makemigrations myapp, then python manage.py migrate in order to update my database schema. The response was that no changes had been made.
I tried removing the migration folder in my app and running the commands again. A new migrations folder had been generated and again my database schema had not been updated.
Is there something I am missing here? I thought that any changes to model can simply be migrated to alter the database schema.
Any help would be greatly appreciated. (Using Django version 1.9). | Django model changes cannot be migrated after a Git pull | 0 | 1 | 0 | 881 |
37,046,677 | 2016-05-05T09:10:00.000 | 0 | 0 | 0 | 0 | python,windows,flask,pycharm,virtualenv | 37,236,689 | 2 | false | 1 | 0 | The problem was that PyCharm does not activate the virtualenvironment when pressing the run button. It only uses the virtualenv python.exe. | 1 | 4 | 0 | As the title suggests, I'm trying to use a environment variable in a config file for a Flask project (in windows 10).
I'm using a virtual env, and so far I have tried to add set "DATABASE_URL=sqlite:///models.db" to /Scripts/activate.bat in the virtualenv folder.
But it does not seem to work. Any suggestions? | Setting environment variables in virtualenv (Python, Windows) | 0 | 1 | 0 | 2,106 |
37,047,754 | 2016-05-05T10:06:00.000 | 0 | 0 | 0 | 0 | python,django | 37,048,760 | 1 | false | 1 | 0 | You can filter out the ip address of the user in real time and check if there is some active session related to that ip address.
If it is there, you can block the user from using the site in incognito or any other browser.
And an active session here means an open session.
I have seen few questions like this, where solution is to clear up all older sessions. But, is it the only solution? Can't I have all those sessions untouched and still guarantee that there be only one active session. | Django block user login from incognito browser window if user is already logged in from regular browser window | 0 | 0 | 0 | 652 |
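A common alternative to clearing all old sessions is to remember each user's last session key and delete only that one on a new login; a sketch assuming the database session backend and Django's user_logged_in signal (the cache key scheme is an assumption):

```python
# Sketch: invalidate the previous session whenever a user logs in again.
from django.contrib.auth.signals import user_logged_in
from django.contrib.sessions.models import Session
from django.core.cache import cache
from django.dispatch import receiver

@receiver(user_logged_in)
def enforce_single_session(sender, request, user, **kwargs):
    cache_key = 'active_session_%s' % user.pk
    old_key = cache.get(cache_key)
    if old_key and old_key != request.session.session_key:
        # Drop the session opened from the other browser / incognito window.
        Session.objects.filter(session_key=old_key).delete()
    cache.set(cache_key, request.session.session_key, None)  # None = never expire
```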
37,047,860 | 2016-05-05T10:10:00.000 | 2 | 1 | 0 | 0 | wsgi,mod-python,mod | 37,048,554 | 1 | false | 1 | 0 | For recent versions of mod_wsgi no you cannot load them at the same time as mod_wsgi will prevent it as mod_python thread code doesn't use Python C API for threads properly and causes various problems.
Short answer is that you shouldn't be using mod_python any more. Use a proper Python web framework with a more modern template system instead.
If for some reason you really don't want to do that, go back and use mod_wsgi 3.5. | 1 | 0 | 0 | Can I use mod_python.so and mod_wsgi.so at the same time on Apache Web Server defining different directories for each of them. At the moment I can not enable them both in my apache config file at the same time using LoadModule.
mod_wsgi for Django and mod_python for .py and .psp scripts. | Enable mod_python and mod_wsgi module | 0.379949 | 0 | 0 | 481 |
37,052,571 | 2016-05-05T13:58:00.000 | 0 | 0 | 1 | 0 | python,excel,vba | 37,052,827 | 3 | false | 0 | 0 | For windows, the win32com package will allow you to control excel from a python script. It's not quite the same as embedding the code, but it will allow you to read and write from the spreadsheet. | 1 | 0 | 0 | Afternoon,
I have a really simple python script in which user is asked to input a share purchase price, script looks up price and returns whether user is up or down.
Currently the input, and text output are done in the CMD prompt which is not ideal. I would love to have in excel a box for inputing purchase price, a button to press and then a cell in which the output is printed.
Are there any straightforward ways to put the python code in the button code where you would normally have VBA? Or alternative hacks?
Thanks in advance | Python script input and output in excel | 0 | 1 | 0 | 890 |
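A small sketch of the win32com route mentioned in the answer (Windows only, pywin32 installed); the workbook path, cell addresses and the stand-in price are placeholders:

```python
import win32com.client

excel = win32com.client.Dispatch('Excel.Application')
excel.Visible = True

wb = excel.Workbooks.Open(r'C:\path\to\shares.xlsx')   # placeholder path
sheet = wb.Worksheets(1)

purchase_price = float(sheet.Range('B2').Value)        # the user's input cell
current_price = 123.45                                  # stand-in for the looked-up price

# Write the result back where the spreadsheet expects it.
sheet.Range('B3').Value = 'up' if current_price >= purchase_price else 'down'
wb.Save()
```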
37,054,317 | 2016-05-05T15:20:00.000 | 1 | 0 | 0 | 0 | python,youtube | 37,054,362 | 1 | true | 0 | 0 | You'll have to reverse-engineer youtube's code in order to stream it by yourself, and it would not be necessarily possible to do with http only. | 1 | 0 | 0 | Is it possible to use a HTTP GET request to stream YouTube videos?
I've looked at the Google YouTube API docs, it's not clear that this can be done. There are packages like pytube, but they are meant to be used directly, not by using HTTP requests.
Any info would be appreciated. | Python - streaming Youtube videos using GET | 1.2 | 0 | 1 | 189 |
37,055,745 | 2016-05-05T16:35:00.000 | 3 | 0 | 1 | 0 | python,audio,input,mp3,microphone | 37,056,802 | 4 | true | 0 | 0 | If you meant how to play MP3 using Python, well, this is a broad question.
Is it possible, without any dependencies, yes it is, but it is not worth it. Well, playing uncompressed audio is, but MP3, well, I'll explain below.
To play raw audio data from Python without installing pyaudio or pygame or similar, you first have to know the platform on which your script will be run.
Then implement a nice set of functions for choosing an audio device, setting up properties like sample rate, bit rate, mono/stereo..., feeding the stream to audio card and stopping the playback.
It is not hard, but to do it you have to use ctypes on Windows, PyObjC on Mac and Linux is special case as it supports many audio systems (probably use sockets to connect to PulseAudio or pipe to some process like aplay/paplay/mpeg123... or exploit gstreamer.).
But why go through all this just to avoid dependencies, when you have nice libraries out there with simple interfaces to access and use audio devices.
PyAudio is great one.
Well, that is your concern.
But, playing MP3 without external libraries, in real time, from pure Python, well, it's not exactly impossible, but it is very hard to achieve, and as far as I know nobody even tried doing it.
There is pure Python MP3 decoder implementation, but it is 10 times slower than necessary for real-time audio playback. It can be optimized for nearly full speed, but nobody is interested in doing so.
It has mostly educational value and it is used in cases where you do not need real-time speed.
This is what you should do:
Install pygame and use it to play MP3 directly
or:
Install PyAudio and some library that decodes Mp3, there are quite a few of them on pypi.python.org, and use it to decode the MP3 and feed the output to PyAudio.
There are some more possibilities, including pymedia, but I consider these the easiest solutions.
Okay, as we clarified what is really you need here is the answer.
I will leave first answer intact as you need that part too.
Now, you want to play audio to the recording stream, so that any application recording the audio input records the stuff that you are playing.
On Windows, this is called stereo mix and can be found in Volume Control, under audio input.
You choose stereo mix as your default input. Now, when you open a recording app which doesn't select its own input channel, but uses the selected one (e.g. Skype), it will record everything coming out of your speakers and coming into your mic/line in.
I am not 100% sure whether this option will appear on all Windows or it is a feature of an audio card you have.
I am positive that Creative and Realtek audio cards supports it.
So, research this.
To select that option from Python, you have to connect to winmm.dll using ctypes and call the appropriate function. I do not know which one and with what arguments.
If this option is not present in volume control, there is nothing for it but to install a virtual audio card to do the loopback for you.
There might be such a software that comes packaged in as library so that you can use it from Python or whatever.
On Linux this should be easy using Pulseaudio. I do not know how, but I know that you can do it, redirect the streams etc. There is tutorial out there somewhere.
Then you can call that command from Python, to set to this and reset back to normal.
On Mac, well, I really have no idea, but it should be possible.
If you want your MP3 to be played only to the recording stream, and not on your speakers at all, well on Windows, you will not be able to do that without a loopback audio device.
On Linux, I am sure you will be able to do it, and on Mac it should be possible, but how is the Q.
I currently have no time to sniff around libraries etc. to provide you with some useful code, so you will have to do it yourself. But I hope my directions will help you. | 1 | 5 | 0 | Is there a way using python (and not any external software) to play a mp3 file like a microphone input?
For example, I have an mp3 file, and with a python script it would play it through my mic so others in a voice room would hear it. As I say, it is just an example.
Of course, I have done some research. I found out that I can use software to create a virtual device and do a few things to get the result. But my point is whether it is possible without installing software, with just some kind of python script? | Playing mp3 file through microphone with python | 1.2 | 0 | 0 | 15,328
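For the plain "play an MP3 from Python" part (option 1 in the answer), a minimal pygame sketch; routing it into a virtual microphone still needs the OS-level loopback device described above, and the file name is a placeholder:

```python
import time
import pygame

pygame.mixer.init()                    # open the default output device
pygame.mixer.music.load('song.mp3')    # placeholder file name
pygame.mixer.music.play()

while pygame.mixer.music.get_busy():   # block until playback finishes
    time.sleep(0.5)
```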
37,056,608 | 2016-05-05T17:23:00.000 | 2 | 0 | 0 | 0 | python,neural-network,artificial-intelligence,genetic-algorithm,tic-tac-toe | 37,056,824 | 1 | false | 0 | 0 | Yes, this is possible. But you have to tell your AI the rules of the game, beforehand (well, that's debatable, but it's ostensibly better if you do so - it'll define your search space a little better).
Now, the vanilla tic-tac-toe game is far too simple - a minmax search will more than suffice. Scaling up the dimensionality or the size of the board does make the case for more advanced algorithms, but even so, the search space is quite simple (the algebraic nature of the dimensionality increase leads to a slight transformation of the search space, which should still be tractable by simpler methods).
If you really want to throw a heavy machine learning technique at a problem, take a second look at chess (Deep Blue really just brute forced the sucker). Arimaa is interesting for this application as well. You might also consider looking at Go (perhaps start with some of the work done on AlphaGo)
That's my two cents' worth | 1 | 6 | 1 | I'm very interested in the field of machine learning and recently I got the idea for a project for the next few weeks.
Basically I want to create an AI that can beat every human at Tic Tac Toe. The algorithm must be scalable for every n*n board size, and maybe even for other dimensions (for a 3D analogue of the game, for example).
Also, I don't want the algorithm to know anything about the game in advance: it must learn on its own. So no hardcoded ifs, and no supervised learning.
My idea is to use an Artificial Neural Network for the main algorithm itself, and to train it through the use of a genetic algorithm. So I have to code only the rules of the game, and then each population, battling with itself, should learn from scratch.
It's a big project, and I'm not an expert on this field, but I hope, with such an objective, to learn lots of things.
First of all, is that possible? I mean, is it possible to reach a good result within a reasonable amount of time?
Are there good libraries in Python that I can use for this project? And is Python a suitable language for this kind of project? | How can I create an AI for tic tac toe in Python using ANN and genetic algorithm? | 0.379949 | 0 | 0 | 700 |
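Since the answer above points out that plain 3x3 tic-tac-toe is already solved by minimax, here is a compact sketch of that baseline (no learning involved, not the ANN/GA approach the question asks about) that evolved players could be benchmarked against:

```python
def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    won = winner(board)
    if won:
        return (1 if won == player else -1), None
    moves = [i for i, cell in enumerate(board) if not cell]
    if not moves:
        return 0, None
    opponent = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for move in moves:
        board[move] = player
        score, _ = minimax(board, opponent)   # score from the opponent's perspective
        board[move] = None
        if -score > best[0]:
            best = (-score, move)
    return best

# Example: X to move on an empty board -> score 0 (a draw with best play).
print(minimax([None] * 9, 'X'))
```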
37,060,219 | 2016-05-05T21:02:00.000 | 2 | 0 | 1 | 1 | python,interpreter,atom-editor | 37,202,373 | 1 | false | 0 | 0 | Finally, find the solution. In folder of '/Library/Frameworks/Python.framework/Versions/3.5/bin/', python is named as python3.5. So all I need is deleting '3.5' and use 'python' along. | 1 | 3 | 0 | I need to use Python 3.5 instead of 2.7. But I cannot find any 'run options' or 'interpreter configurations' in Atom. My current interpreter is Python 2.7 in '/Library/Frameworks/Python.framework/Versions/2.7/bin/python'. I have installed 3.5 which is in '/Library/Frameworks/Python.framework/Versions/3.5/bin/python'.
Besides, I am using Mac OSX.
Thanks in advance! | How to switch to a different python version in Atom? | 0.379949 | 0 | 0 | 2,292 |
37,060,920 | 2016-05-05T21:54:00.000 | 2 | 0 | 0 | 0 | python,pulp | 37,068,772 | 1 | true | 0 | 0 | You probably do not mean Linear Programming but rather Mixed Integer Programming. (The original question asked about LPs).
LPs usually solve quite fast and I don't know a good way to find an approximate solution for them. You may want to try an interior point or barrier method and set an iteration or time limit. For Simplex methods this typically does not work very well.
MIP models can take a lot of time to solve. Solvers allow you to terminate earlier by setting a gap (gap = 0 means solving to optimality). E.g.
model.solve(GLPK(options=['--mipgap', '0.01'])) | 1 | 1 | 0 | Is it possible to get an approximate solution to a mixed integer linear programming problem with PuLP? My problem is complex and the exact resolution takes too long. | Approximate solution to MILP with PuLP | 1.2 | 0 | 0 | 719 |
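The same idea with the bundled CBC solver; note that the keyword for the relative gap differs between PuLP versions (older releases use fracGap/maxSeconds, newer ones gapRel/timeLimit), so treat this as a sketch to adapt, with `model` being your existing LpProblem:

```python
import pulp

# Stop once the incumbent is within 1% of the best bound, or after 60 seconds.
solver = pulp.PULP_CBC_CMD(gapRel=0.01, timeLimit=60)   # older PuLP: fracGap=0.01, maxSeconds=60
status = model.solve(solver)
print(pulp.LpStatus[status])
```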
37,061,089 | 2016-05-05T22:08:00.000 | 4 | 0 | 1 | 0 | python,tensorflow,jupyter | 46,785,026 | 14 | false | 0 | 0 | Here is what I did to enable tensorflow in Anaconda -> Jupyter.
Install Tensorflow using the instructions provided at
Go to /Users/username/anaconda/env and ensure Tensorflow is installed
Open the Anaconda navigator and go to "Environments" (located in the left navigation)
Select "All" in the first drop-down and search for Tensorflow
If it's not enabled, enable it via the checkbox and confirm the process that follows.
Now open a new Jupyter notebook and tensorflow should work | 2 | 34 | 1 | I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely, if start python the standard way (in the terminal) then tensorflow loads just fine.
My question is: how can I also have it work in the Jupyter notebooks?
This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python. | Trouble with TensorFlow in Jupyter Notebook | 0.057081 | 0 | 0 | 87,389 |
37,061,089 | 2016-05-05T22:08:00.000 | -1 | 0 | 1 | 0 | python,tensorflow,jupyter | 67,094,115 | 14 | false | 0 | 0 | Open an Anaconda Prompt screen: (base) C:\Users\YOU>conda create -n tf tensorflow
After the environment is created type: conda activate tf
Prompt moves to (tf) environment, that is: (tf) C:\Users\YOU>
then install Jupyter Notebook in this (tf) environment:
conda install -c conda-forge jupyterlab - jupyter notebook
Still in (tf) environment, that is type
(tf) C:\Users\YOU>jupyter notebook
The notebook screen starts!!
A New notebook then can import tensorflow
FROM THEN ON
To open a session
click Anaconda prompt,
type conda activate tf
the prompt moves to tf environment
(tf) C:\Users\YOU>
then type (tf) C:\Users\YOU>jupyter notebook | 2 | 34 | 1 | I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely, if start python the standard way (in the terminal) then tensorflow loads just fine.
My question is: how can I also have it work in the Jupyter notebooks?
This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python. | Trouble with TensorFlow in Jupyter Notebook | -0.014285 | 0 | 0 | 87,389 |
37,062,073 | 2016-05-05T23:49:00.000 | 2 | 0 | 1 | 0 | python,multithreading,python-2.7 | 37,062,244 | 1 | false | 0 | 0 | The documentation says
A thread can be flagged as a "daemon thread". The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property.
If you want to keep your program running, you must have at least one non daemon thread. | 1 | 2 | 0 | I'm working on a python application, in which the main thread creates an object, say x, of a particular class.
Then it starts one thread which starts the execution in one of the methods of this object x. The method has a while True: loop, so its infinite.
Then it starts another thread which starts the execution in another method of the same object x. This method also has a while True: infinite loop.
I have made both the threads as daemon by calling t1.setDaemon(True), but it seems both stop execution once the main thread exits.
How do I keep the children alive after the parent thread is finished?
Or should I change my design to use a cron job or process fork? | How to keep the daemon threads alive after main thread exits? | 0.379949 | 0 | 0 | 2,101 |
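As the answer says, the process only stays alive while at least one non-daemon thread is running; a small sketch of the usual pattern (leave the workers non-daemon, or join them explicitly):

```python
import threading
import time

def worker(name):
    while True:                      # stand-in for the object's infinite-loop method
        print(name, 'working')
        time.sleep(1)

threads = [threading.Thread(target=worker, args=('t%d' % i,)) for i in range(2)]
for t in threads:
    # daemon=False (the default) keeps the interpreter alive while these run.
    t.start()

# The main thread can finish its own work here; the process keeps running
# because non-daemon threads are still alive. To wait for them explicitly:
for t in threads:
    t.join()
```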
37,064,168 | 2016-05-06T04:19:00.000 | 0 | 0 | 0 | 0 | python,kivy | 37,085,902 | 1 | false | 0 | 1 | How can you imagine a user scrolling through 40000 labels? You should rethink your app design.
Consider adding a text input, and based on the given string, fetch filtered data from the database you have. | 1 | 0 | 0 | I have a GridView inside of a ScrollView. I am trying to create and display approximately ~12,000 items in the GridView (which clearly will not display appropriately on screen), but the number of items could feasible be ~40,000. Currently ~18 seconds are spent constructing all of the items (Labels), and any resizing of the window results in another significant delay.
How can I speed up the construction and rendering of the items? I don't know how to do paging or delayed, on-demand loading on a ScrollView. | Kivy ScrollView (with Gridview) Suffering Performance Issues | 0 | 0 | 0 | 525 |
37,065,874 | 2016-05-06T06:37:00.000 | 3 | 0 | 1 | 0 | django,python-3.x,background-task | 37,066,028 | 2 | true | 1 | 0 | No It's not possible in any case as it will effectively create cyclic import problems in django. Because in tasks you will have to import that function and in the file for that function, you will have to import tasks.
So no whatever strategy you take, you are gonna land into the same problem. | 1 | 0 | 0 | Say i want to execute a function every 5 minutes without using cron job.
What I am thinking of doing is creating a django background task which actually calls that function, and at the end of that function, I again create that task with schedule = say 60*5.
This effectively puts the function in a time-based loop.
I tried a few iterations, but i am getting import errors. But is it possible to do or not? | Is it possible to put a function in timed loop using django-background-task | 1.2 | 0 | 0 | 212 |
37,066,703 | 2016-05-06T07:27:00.000 | 4 | 1 | 1 | 0 | python,raspberry-pi,raspbian,lines-of-code | 37,066,929 | 2 | true | 0 | 0 | You can use the wc command:
wc -l yourScript.py | 1 | 0 | 0 | I am running a python script on my raspberry pi and I was just wondering if there is any command that I can use that counts how many lines are in my script.
Regards | python script - counting lines of code in the script | 1.2 | 0 | 0 | 571 |
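If you'd rather not shell out to wc, the same count in pure Python (this counts every physical line; filtering blank lines or comments is left as an exercise):

```python
def count_lines(path):
    # Count lines without loading the whole file into memory.
    with open(path) as f:
        return sum(1 for _ in f)

print(count_lines('yourScript.py'))
```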
37,074,172 | 2016-05-06T13:52:00.000 | 1 | 0 | 1 | 0 | python,ide,pycharm,jetbrains-ide | 42,302,417 | 2 | false | 0 | 0 | I stumbled across this question because I was looking for the same thing.
I have actually come up with a quick hack about this.
See if it works for you too.
Configuration
Settings... > Keymap > Split Vertically > right click > Add Abbreviation > type :vs (which is the Vim shortcut for Vertical Split) > OK.
Hack
I will assume you currently have one opened file and no splits.
Shift+Shift, then type :vs
Ctrl+Click the definition
When you are done, go back to the starting point.
CtrlF4(F4) | 2 | 4 | 0 | If i have split screen in JetBrain's pycharm IDE, can I Go To Declaration an object on one screen and have the resulting declaration appear on my other split screen? This would be very helpful in reading code without losing my place. I do realize you can go back to previous caret position as well but this would be better. | PyCharm (JetBrains) IDE - go to declaration on other split screen | 0.099668 | 0 | 0 | 457 |
37,074,172 | 2016-05-06T13:52:00.000 | 1 | 0 | 1 | 0 | python,ide,pycharm,jetbrains-ide | 37,133,673 | 2 | true | 0 | 0 | AFAIK there is no direct way of doing this. The only option is a 3-step process where you press Ctrl+B/Cmd+B (on Mac) to get to the declaration and then right click on the new file and click the Split Horizontally/Split Vertically option (whichever you prefer).
This will open the selected file in splitscreen, next step is to select the initial file (PyCharm leaves your split file open in both columns by default) and you have both files open to look at. | 2 | 4 | 0 | If i have split screen in JetBrain's pycharm IDE, can I Go To Declaration an object on one screen and have the resulting declaration appear on my other split screen? This would be very helpful in reading code without losing my place. I do realize you can go back to previous caret position as well but this would be better. | PyCharm (JetBrains) IDE - go to declaration on other split screen | 1.2 | 0 | 0 | 457 |
37,074,244 | 2016-05-06T13:55:00.000 | 3 | 0 | 0 | 0 | python,machine-learning,neural-network,keras | 37,104,201 | 4 | true | 0 | 0 | There is no such thing as "running a neural net in reverse", as a generic architecture of neural net does not define any not-forward data processing. There is, however, a subclass of models which do - the generative models, which are not a part of keras right now. The only thing you can do is to create a network which somehow "simulates" the generative process you are interested in. But this is paricular model specific method, and has no general solution. | 2 | 8 | 1 | I'm currently playing around with the Keras framework. And have done some simple classification tests, etc. I'd like to find a way to run the network in reverse, using the outputs as inputs and vice versa. Any way to do this? | Run model in reverse in Keras | 1.2 | 0 | 0 | 4,095 |
37,074,244 | 2016-05-06T13:55:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,neural-network,keras | 51,939,867 | 4 | false | 0 | 0 | What you are looking for, I think, is the "Auto-Associative" neural network. it has an input of n dimensions, several layers, one of which is the "middle layer" of m dimensions, and then several more layers leading to an output layer which has the same number of dimensions as the input layer, n.
The key here is that m is much smaller than n.
How it works is that you train the network to recreate the input, exactly at the output. Then you cut the network into two halves. The front half goes from n to m dimensions (encoding the input into a smaller space). The back half goes from m dimensions to n dimensions (decoding, or "reverse" if you will).
Very useful for encryption, compression, unsupervised learning, etc. | 2 | 8 | 1 | I'm currently playing around with the Keras framework. And have done some simple classification tests, etc. I'd like to find a way to run the network in reverse, using the outputs as inputs and vice versa. Any way to do this? | Run model in reverse in Keras | 0 | 0 | 0 | 4,095 |
37,076,808 | 2016-05-06T16:01:00.000 | 0 | 0 | 0 | 0 | javascript,python,dynamic,data-visualization | 37,077,704 | 1 | false | 1 | 0 | Js library like d3.js or highcharts can be helpful to solve your problem. You can easily send the data from sever to front-end where these library can gracefully plot the data. | 1 | 0 | 0 | I am developing a website where I have around 800 data sets. I want to visualize my data using bar charts and pie charts, but I don't want to hard code this for every data set. What technology can I use to dynamically read the data from a json/csv/xml and render the graph? (btw I'm going to use a Python based backend (either Django or Flask)) | How to dynamically visualize dataset on web? | 0 | 0 | 0 | 52 |
37,080,703 | 2016-05-06T20:14:00.000 | 0 | 1 | 0 | 0 | python | 37,081,699 | 2 | false | 0 | 0 | Assuming you are running on a *nix system, cron is definitely a good option. If you are running a Linux system that uses systemd, you could try creating a timer unit. It is probably more work than cron, but it has some advantages.
I won't go though all the details here, but basically:
Create a service unit that runs your program.
Create a timer unit that activates the service unit at the prescribed times.
Start and enable the timer unit. | 1 | 1 | 0 | I have python script for ssh which help to run various Linux commands on remote server using paramiko module. All the outputs are saved in text file, script is running properly. Now I wanted to run these script twice a day automatically at 11am and 5pm everyday.
How can I run this script automatically every day at the given times without starting it manually each time? Is there any software or module?
Thanks for your help. | Automatically run python script twice a day | 0 | 0 | 1 | 1,567 |
37,081,162 | 2016-05-06T20:47:00.000 | 0 | 1 | 0 | 0 | python-2.7,amazon-web-services,amazon-s3 | 37,085,603 | 2 | false | 0 | 0 | Create a Lambda function and use cloudWatch ==> Events ==> Rules and configure it
using:
1: AWS built-in timers
2: Cron expressions
In your case, a cron expression is the better option.
My situation is I want some data sent to a HipChat room, using the python api, on the first of every month from AWS. The data I want sent is in a text file in a S3 bucket | Best way to run scheduled code on AWS | 0 | 0 | 0 | 527 |
37,082,038 | 2016-05-06T22:00:00.000 | 1 | 1 | 0 | 0 | python,ios,swift | 37,082,146 | 1 | false | 1 | 0 | Short answer: You don't.
There is no Python interpreter running on iOS, and Apple will likely neither provide nor allow one, since they don't allow you to deliver and run new code to in iOS app once it's installed. The code is supposed to be fixed at install time, and Python is an interpreted language. | 1 | 0 | 0 | I am making an app which will login to a website and scrape the website for the information I have. I currently have the all the login and web scraping written in Python completely done. What I am trying to figure out is running that python code in Xcode in my swift project. I want to avoid setting up a server capable of executing cgi scripts. Essentially the user will input their credentials and I will pass that to the python file, and the script will run. | How do I run python script within swift app? | 0.197375 | 0 | 0 | 1,182 |
37,085,665 | 2016-05-07T07:14:00.000 | 1 | 0 | 1 | 0 | ipython,anaconda,jupyter,jupyter-notebook | 58,093,537 | 16 | false | 0 | 0 | I have tried every method mentioned above and nothing worked, except installing jupyter in the new environment.
to activate the new environment
conda activate new_env
replace 'new_env' with your environment name.
next install jupyter
'pip install jupyter'
you can also install jupyter by going to anaconda navigator and selecting the right environment, and installing jupyter notebook from Home tab | 4 | 313 | 0 | I have jupyter/anaconda/python3.5.
How can I know which conda environment is my jupyter notebook running on?
How can I launch jupyter from a new conda environment? | In which conda environment is Jupyter executing? | 0.012499 | 0 | 0 | 395,507 |
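Two quick checks that complement the steps above; the environment name is a placeholder:

```python
# Inside a notebook cell: confirm which interpreter the kernel is actually using.
import sys
print(sys.executable)   # path of the Python binary backing this kernel
print(sys.prefix)       # the environment it belongs to

# To expose a conda env as its own kernel, run once from that env's shell:
#   conda activate myenv            (placeholder name)
#   pip install ipykernel
#   python -m ipykernel install --user --name myenv
```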
37,085,665 | 2016-05-07T07:14:00.000 | 2 | 0 | 1 | 0 | ipython,anaconda,jupyter,jupyter-notebook | 68,476,912 | 16 | false | 0 | 0 | On Ubuntu 20.04, none of the suggestions above worked.
I.e. I activated an existing environment. I discovered (using sys.executable and sys.path) that my jupyter notebook kernel was running the DEFAULT Anaconda python, and NOT the python I had installed in my activated environment. The consequence of this was that my notebook was unable to import packages that I had installed into this particular Anaconda environment.
Following the instructions above (and a slew of other URLs), I installed ipykernel, nb_conda, and nb_conda_kernels, and ran: python -m ipykernel install --user --name myenv.
Using the Kernels|Change Kernel... menu in my Jupyter notebook, I selected myenv, the one I had specified in my python -m ipykernel command.
However, sys.executable showed that this did not "stick".
I tried shutting down and restarting, but nothing resulted in my getting the environment I had selected.
Finally, I simply edited file kernel.json in folder:
~/.local/share/jupyter/kernels/myenv
Sure enough, despite my having performed all the steps suggested above, the first argument in this JSON file was still showing the default python location:
$Anaconda/bin/python (where $Anaconda is the location where I installed anaconda)
I edited file kernel.json with a text editor so that this was changed to:
$Anaconda/envs/myenv/bin/python
Hopefully, my use of myenv is understood to mean that you should replace this with the name of YOUR environment.
Having edited this file, my Jupyter notebooks started working properly - namely, they used the python specified for my activated environment, and I was able to import packages that were installed in this environment, but not the base Anaconda environment.
Clearly, something is messed up in how the set of packages ipykernel, nb_conda, and nb_conda_kernels are configuring Anaconda environments for jupyter. | 4 | 313 | 0 | I have jupyter/anaconda/python3.5.
How can I know which conda environment is my jupyter notebook running on?
How can I launch jupyter from a new conda environment? | In which conda environment is Jupyter executing? | 0.024995 | 0 | 0 | 395,507 |
37,085,665 | 2016-05-07T07:14:00.000 | 52 | 0 | 1 | 0 | ipython,anaconda,jupyter,jupyter-notebook | 43,617,958 | 16 | false | 0 | 0 | If the above ans doesn't work then try running conda install ipykernel in new env and then run jupyter notebook from any env, you will be able to see or switch between those kernels. | 4 | 313 | 0 | I have jupyter/anaconda/python3.5.
How can I know which conda environment is my jupyter notebook running on?
How can I launch jupyter from a new conda environment? | In which conda environment is Jupyter executing? | 1 | 0 | 0 | 395,507 |
37,085,665 | 2016-05-07T07:14:00.000 | 0 | 0 | 1 | 0 | ipython,anaconda,jupyter,jupyter-notebook | 64,656,131 | 16 | false | 0 | 0 | For checking on Which Python your Jupyter Notebook is running try executig this code.
from platform import python_version
print(python_version())
In order to run jupyter notebook from your environment
activate MYenv
and install jupyter notebook using command
pip install jupyter notebook
then just
jupyter notebook | 4 | 313 | 0 | I have jupyter/anaconda/python3.5.
How can I know which conda environment is my jupyter notebook running on?
How can I launch jupyter from a new conda environment? | In which conda environment is Jupyter executing? | 0 | 0 | 0 | 395,507 |
37,087,996 | 2016-05-07T11:31:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,computer-vision,scikit-learn | 37,092,686 | 1 | true | 0 | 0 | First of all - the idea of training separate models is rather bad. Unless you have very good reasons to do so (some external limitations you cannot ignore) you should not do so. Why? Because you are efficiently loosing information, you are unable to model complex dependencies between signals from two classifiers. Training everything jointly gives statistical method ability to choose which data use when, this way it can model for example - that for some particular types of data it will use one part of the input and for another - the rest. When you build independent classifiers - you bias the whole process, as these classifiers have "no idea" that remaining ones exist.
Having said that, here is the solution assuming that somehow you cannot learn joint model. In such scenario (where your models are kind of black-boxes which convert your input representation to decision functions) the basic idea is to simply treat them as preprocessors and fit new model on top, thats all. In other words you have your data point x split into feature-vectors x1,x2,...,xk and k different models mi you built previously, thus you use them to construct a preprocessing method f(x) = [m1(x1), m2(x2), ..., mk(xk)] and this is just a single point in R^k space, which can be now fitted into new classifier to learn how to combine these information. The problematic part is that due to your very specific process, you now need new training set to learn combination rule, as using the same data that was used to construct mi can easily lead to overfitting. To fight that people sometimes use heuristic approaches instead - a priori assuming that these models are already good enough and construct ensemble of these, which either vote for classes (for example weighted by their certeinty) or build whole system around it. I would still argue that you should not go this way in the first place, if you have to - go with learning combination rule with the use of new data, and finally if you cannot do any of the above - settle with some heuristic ensemble technique. | 1 | 0 | 1 | I extract multiple Feature vectors from different sensors, and I trained these features by using SVM individually. My question is there any method to combine these classifiers in a way to obtain a better result.
thanks in advance | Method to combine multiple svm classifiers (or "any ML classifier" by using scikit-learn. "decision-feature classifiers" | 1.2 | 0 | 0 | 662 |
37,091,587 | 2016-05-07T17:27:00.000 | 3 | 0 | 0 | 0 | python,pandas,machine-learning,smoothing,naivebayes | 37,092,622 | 1 | true | 0 | 0 | Additive smoothing is just a basic mathematical operation, requiring few additions and division - there is no "special" function for that, you simply write a one-liner operating on particular columns of your dataframe. | 1 | 1 | 1 | I have a large dataframe in Pandas with lots of zeros.
I want to apply additive smoothing but instead of writing it from scratch, I am wondering if there is any better way of producing a "smoothed" dataframe in Pandas. Thanks! | Additive Smoothing for Dataframe Pandas | 1.2 | 0 | 0 | 1,278 |
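Along the lines of the answer, a one-liner sketch for Laplace/additive smoothing of count columns in a DataFrame; the alpha value and the per-column normalisation are assumptions about what "smoothed" should mean for your data:

```python
import pandas as pd

counts = pd.DataFrame({'w1': [3, 0, 1], 'w2': [0, 0, 5]})

alpha = 1.0
# Add alpha to every count, then renormalise each column into probabilities.
smoothed = (counts + alpha) / (counts.sum(axis=0) + alpha * len(counts))
print(smoothed)   # every column now sums to 1 and contains no zeros
```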
37,092,465 | 2016-05-07T18:52:00.000 | 5 | 0 | 1 | 0 | python | 37,092,532 | 2 | false | 0 | 0 | It is present in the first item, ('d', 'e', 'c', 'o'), as order does not matter in combinations. If you want each of those tuples in each possible order, you are looking for permutations. | 1 | 1 | 0 | print(list(combinations('decoding',4)))
should display all possible four letter combinations right?
But this is the output
[('d', 'e', 'c', 'o'), ('d', 'e', 'c', 'd'), ('d', 'e', 'c', 'i'), ('d', 'e', 'c', 'n'), ('d', 'e', 'c', 'g'), ('d', 'e', 'o', 'd'), ('d', 'e', 'o', 'i'), ('d', 'e', 'o', 'n'), ('d', 'e', 'o', 'g'), ('d', 'e', 'd', 'i'), ('d', 'e', 'd', 'n'), ('d', 'e', 'd', 'g'), ('d', 'e', 'i', 'n'), ('d', 'e', 'i', 'g'), ('d', 'e', 'n', 'g'), ('d', 'c', 'o', 'd'), ('d', 'c', 'o', 'i'), ('d', 'c', 'o', 'n'), ('d', 'c', 'o', 'g'), ('d', 'c', 'd', 'i'), ('d', 'c', 'd', 'n'), ('d', 'c', 'd', 'g'), ('d', 'c', 'i', 'n'), ('d', 'c', 'i', 'g'), ('d', 'c', 'n', 'g'), ('d', 'o', 'd', 'i'), ('d', 'o', 'd', 'n'), ('d', 'o', 'd', 'g'), ('d', 'o', 'i', 'n'), ('d', 'o', 'i', 'g'), ('d', 'o', 'n', 'g'), ('d', 'd', 'i', 'n'), ('d', 'd', 'i', 'g'), ('d', 'd', 'n', 'g'), ('d', 'i', 'n', 'g'), ('e', 'c', 'o', 'd'), ('e', 'c', 'o', 'i'), ('e', 'c', 'o', 'n'), ('e', 'c', 'o', 'g'), ('e', 'c', 'd', 'i'), ('e', 'c', 'd', 'n'), ('e', 'c', 'd', 'g'), ('e', 'c', 'i', 'n'), ('e', 'c', 'i', 'g'), ('e', 'c', 'n', 'g'), ('e', 'o', 'd', 'i'), ('e', 'o', 'd', 'n'), ('e', 'o', 'd', 'g'), ('e', 'o', 'i', 'n'), ('e', 'o', 'i', 'g'), ('e', 'o', 'n', 'g'), ('e', 'd', 'i', 'n'), ('e', 'd', 'i', 'g'), ('e', 'd', 'n', 'g'), ('e', 'i', 'n', 'g'), ('c', 'o', 'd', 'i'), ('c', 'o', 'd', 'n'), ('c', 'o', 'd', 'g'), ('c', 'o', 'i', 'n'), ('c', 'o', 'i', 'g'), ('c', 'o', 'n', 'g'), ('c', 'd', 'i', 'n'), ('c', 'd', 'i', 'g'), ('c', 'd', 'n', 'g'), ('c', 'i', 'n', 'g'), ('o', 'd', 'i', 'n'), ('o', 'd', 'i', 'g'), ('o', 'd', 'n', 'g'), ('o', 'i', 'n', 'g'), ('d', 'i', 'n', 'g')]
From what I can tell,
It is missing CODE.
Any idea why this is happening, or is there something I'm doing wrong? | itertools.combinations() not displaying all combinations | 0.462117 | 0 | 0 | 277
37,092,693 | 2016-05-07T19:16:00.000 | 3 | 0 | 1 | 0 | python | 37,093,186 | 3 | false | 0 | 0 | You could just do taskkill /IM pythonw.exe /F if you have only one pythonw running. Type this into a terminal or create a link on the desktop or wherever you want. | 1 | 2 | 0 | I have a python program that will never exit automatically. However, it does need to do so at some point. It must run without the console, so I saved it as a .pyw, but this means there is no X to click on to close it.
How do I close this manually without restarting the machine? I am on Windows, in case it needs the command line. | How to close a .pyw manually | 0.197375 | 0 | 0 | 2,598 |
37,098,546 | 2016-05-08T09:57:00.000 | 92 | 0 | 0 | 0 | python,tensorflow | 37,102,908 | 4 | true | 0 | 0 | I'd recommend to always use tf.get_variable(...) -- it will make it way easier to refactor your code if you need to share variables at any time, e.g. in a multi-gpu setting (see the multi-gpu CIFAR example). There is no downside to it.
Pure tf.Variable is lower-level; at some point tf.get_variable() did not exist so some code still uses the low-level way. | 1 | 134 | 0 | As far as I know, Variable is the default operation for making a variable, and get_variable is mainly used for weight sharing.
On the one hand, there are some people suggesting using get_variable instead of the primitive Variable operation whenever you need a variable. On the other hand, I merely see any use of get_variable in TensorFlow's official documents and demos.
Thus I want to know some rules of thumb on how to correctly use these two mechanisms. Are there any "standard" principles? | Difference between Variable and get_variable in TensorFlow | 1.2 | 0 | 0 | 43,849 |
37,099,589 | 2016-05-08T12:00:00.000 | -1 | 1 | 0 | 0 | telegram-bot,python-telegram-bot | 37,102,384 | 2 | false | 0 | 0 | Use /setinline and then /setnoinline command to disable inline mode. | 2 | 1 | 0 | How to deactivate inline mode in your Bot? When you talk to BotFather by /help he doesn't give any instructions. Thanks | How to deactivate inline mode in your Bot? | -0.099668 | 0 | 0 | 1,376 |
37,099,589 | 2016-05-08T12:00:00.000 | 2 | 1 | 0 | 0 | telegram-bot,python-telegram-bot | 60,238,809 | 2 | false | 0 | 0 | Type /setinline, choose bot to disable inline mode, that type /empty. This will disable inline mode in your bot. | 2 | 1 | 0 | How to deactivate inline mode in your Bot? When you talk to BotFather by /help he doesn't give any instructions. Thanks | How to deactivate inline mode in your Bot? | 0.197375 | 0 | 0 | 1,376 |
37,101,114 | 2016-05-08T14:49:00.000 | 39 | 0 | 0 | 0 | python,nltk | 37,101,713 | 4 | true | 0 | 0 | You are right. You need Punkt Tokenizer Models. It has 13 MB and nltk.download('punkt') should do the trick. | 1 | 21 | 1 | I am going to use nltk.tokenize.word_tokenize on a cluster where my account is very limited by space quota. At home, I downloaded all nltk resources by nltk.download() but, as I found out, it takes ~2.5GB.
This seems a bit overkill to me. Could you suggest what are the minimal (or almost minimal) dependencies for nltk.tokenize.word_tokenize? So far, I've seen nltk.download('punkt') but I am not sure whether it is sufficient and what is the size. What exactly should I run in order to make it work? | What to download in order to make nltk.tokenize.word_tokenize work? | 1.2 | 0 | 0 | 57,023 |
37,101,361 | 2016-05-08T15:12:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn | 37,102,874 | 1 | false | 0 | 0 | To solve this problem you should use a pipeline. The first stage there is scaling, and the second one is your model. Then you can pickle the whole pipeline and have fun with your new data. | 1 | 0 | 1 | I have some data with say, L features. I have standardized them using StandardScaler() by doing a fit_transform on X_train. Now while predicting, i did clf.predict(scaler.transform(X_test)). So far so good... now if I want to pickle the model for later reuse, how would I go about predicting on the new data in future with this saved model ? the new (future) data will not be standardized and I didn't pickle the scaler.
Is there anything else that I have to do before pickling the model the way I am doing it right now (to be able to predict on non-standardized data)?
reddit post: https://redd.it/4iekc9
Thanks. :) | predicting new non-standardized data with classifier trained on standardized data | 0 | 0 | 0 | 51 |
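A hedged sketch of the pipeline approach from the answer above: scaler and classifier are fit together and pickled as one object, so future raw (non-standardized) data is scaled automatically at predict time. The choice of SVC and the random placeholder data are only illustrations:

    import pickle
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X_train = np.random.randn(100, 5)
    y_train = np.random.randint(0, 2, 100)

    pipe = Pipeline([('scaler', StandardScaler()), ('clf', SVC())])
    pipe.fit(X_train, y_train)                    # scaler statistics come from X_train only

    with open('model.pkl', 'wb') as f:            # one file holds scaler + model
        pickle.dump(pipe, f)

    with open('model.pkl', 'rb') as f:
        loaded = pickle.load(f)
    print(loaded.predict(np.random.randn(3, 5)))  # raw data is scaled internally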
37,104,193 | 2016-05-08T19:55:00.000 | 5 | 0 | 0 | 1 | python,django,database,postgresql,heroku | 37,104,332 | 1 | false | 1 | 0 | Since you're on windows, you probably just don't have pg_restore on your path.
You can find pg_restore in the bin of your postgresql installation e.g. c:\program files\PostgreSQL\9.5\bin.
You can navigate to that location before running the command, or simply add it to your PATH so you won't need to navigate there every time. | 1 | 4 | 0 | I have downloaded a PG database backup from my Heroku App, it's in my repository folder as latest.dump
I have installed postgres locally, but I can't use pg_restore on the windows command line, I need to run this command:
pg_restore --verbose --clean --no-acl --no-owner -j 2 -h localhost -d DBNAME latest.dump
But the command is not found! | How to use pg_restore on Windows Command Line? | 0.761594 | 1 | 0 | 10,221 |
37,104,377 | 2016-05-08T20:14:00.000 | 3 | 1 | 0 | 0 | python-3.x,cryptography,uuid,email-validation | 37,104,547 | 1 | true | 0 | 0 | The point of email verification is, that someone with malicious intentions is prevented from registering arbitrary email addresses without having access to their respective inboxes (just consider a prankster who wants to sign up a target for daily cat facts, or more sinister signing up someone for a paid email newsletter or spamming their inbox, potentially their work inbox, with explicit content). Thus the confirmation code, which must be cryptographically secure. One important feature of a cryptographically secure confirmation code is, that it can not be predicted or guessed.
This is why UUIDs are not suitable: The main feature of UUIDs is, that a collision is astronomically unlikely. However the UUID generation algorithm is not designed to not be predictable. Typically a UUID is generated from the generating systems MAC address(es), the time of generation and a few bits of entropy. The MAC address and the time are well determined. The use of a PRNG that's fed simply by PID and time is also perfectly permissible. The whole point of UUIDs is to avoid collisions, not to make them unpredictable or unguessable. For that it suffices to have bits that are unique to the generating system (that never change) and a few bits that prevent this particular system from generating the same UUID twice simply by distributing UUIDs in time, the process generating it and the process internal state.
So if I know which system is going to generate a UUID, i.e. know its MAC addresses, the time at which the UUID is generated, there are only some extra 32 or so bits of entropy that randomize the UUID. And 32 bits simply doesn't cut it, security wise.
Assuming that a confirmation token is valid for 24 hours, that one can attempt >100 confirmation requests per second, and that the UUID generator has 32 bits of extra randomness (in addition to time and MAC, which we assume to be known), this gives a 2% chance of finding a valid confirmation UUID.
Note that you can not "block" confirmation requests if too many invalid UUIDs are attempted per time interval, as this would effectively give an attacker a DoS tool to prevent legitimate users from confirming their email addresses (also including the email address into the confirmation request doesn't help; this just allows to target specific email addresses for a DoS). | 1 | 1 | 0 | From reading Stackoverflow it seems that simply using a UUID for confirming registration via email is bad. Why is that? Why do I need to fancily generate some less random code from a user's data?
The suggested ways seem to be a variant of users data + salt -> hash. When a UUID is used, it always gets hashed. Why is that? There isn't anything to hide or obfuscate, right?
Sorry if this question is stupid.
Right now I am (prototyping) with Python3's UUID builtins. Is there something specially special about them? | Why can't an email confirmation code be a UUID? | 1.2 | 0 | 0 | 878 |
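Not part of the answer above, but a common way to act on it: draw the confirmation code from the operating system's CSPRNG rather than a UUID, so there is no MAC/time structure to exploit. os.urandom works on Python 2 and 3; on Python 3.6+ secrets.token_urlsafe(32) is a convenient wrapper:

    import os, binascii

    token = binascii.hexlify(os.urandom(32)).decode('ascii')   # 256 unpredictable bits, ASCII-safe
    print(token)   # store server-side and email it as the confirmation link parameter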
37,106,423 | 2016-05-09T00:51:00.000 | 2 | 0 | 1 | 0 | python-2.7,ipython,jupyter,jupyter-notebook | 37,190,786 | 1 | true | 0 | 0 | Interrupt: sends the SIGINT signal to the kernel, which should raise a KeyboardInterrupt in your Python code. Use this if you want to stop running code.
Restart: forcefully terminate the kernel process, and start a new one. Use this if your kernel is really stuck, or you just want to start fresh.
Restart & Clear Output - this does the same thing as Restart, and then clears all the output from your notebook.
Restart & Run All - this restarts the kernel and then executes your notebook in one go. This is often done when you are getting ready to finish working on a notebook, because it simulates opening the notebook and running it from scratch. It catches possible errors, such as missing imports, cells out of order, etc. | 1 | 1 | 0 | I am very new to Jupyter Notebook and am trying to run Python 2.7 on it. Can anyone explain, or point me to a link that explains, what Kernel Interrupt, Kernel Restart, Kernel Restart & Clear Output, and Kernel Restart & Run All do? | what does each of the options under kernel tab in jupyter notebook mean? | 1.2 | 0 | 0 | 476 |
37,106,519 | 2016-05-09T01:06:00.000 | 1 | 0 | 0 | 0 | python,chromium-embedded | 37,294,258 | 1 | true | 0 | 0 | As far as I know CEF is based on Chromium engine which doesn't support pure headless mode on Linux. You can try at least starting X Server and check if it's enough to use CEF for screenshots. I guess running only X Server should be enough for this. | 1 | 10 | 0 | I could not get a clear answer on whether or not CEF python can be used in pure headless mode (no Xvfb or other) to take screenshots of the web page. I know about the offScreen option. But I don't see any option to set the size or viewport of CEF. And from my incomplete test, CEF doesn't load the URL if there's no X library used (GTK or QT, for example). | is it possible to use CEF python in headless mode for screenshots? | 1.2 | 0 | 0 | 909 |
37,107,223 | 2016-05-09T03:04:00.000 | 4 | 0 | 0 | 0 | python,neural-network,tensorflow,deep-learning | 51,513,887 | 10 | false | 0 | 0 | I tested tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) and tf.losses.get_regularization_loss() with one l2_regularizer in the graph, and found that they return the same value. Judging by the magnitude of that value, I guess reg_constant already takes effect there, since it is passed as the parameter of tf.contrib.layers.l2_regularizer. | 1 | 95 | 1 | In much of the neural network code implemented using TensorFlow that I have seen, regularization terms are often implemented by manually adding an additional term to the loss value.
My questions are:
Is there a more elegant or recommended way of regularization than doing it manually?
I also find that get_variable has an argument regularizer. How should it be used? According to my observation, if we pass a regularizer to it (such as tf.contrib.layers.l2_regularizer), a tensor representing the regularization term will be computed and added to a graph collection named tf.GraphKeys.REGULARIZATION_LOSSES. Will that collection be automatically used by TensorFlow (e.g. used by optimizers when training)? Or is it expected that I should use that collection by myself? | How to add regularizations in TensorFlow? | 0.07983 | 0 | 0 | 80,148 |
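A minimal sketch of how the regularizer argument and the losses collection fit together, in the TF 1.x / contrib-era API the question refers to; the collection is not applied automatically — you add it to the loss handed to the optimizer. The shapes, the 0.005 scale and the squared-error loss are arbitrary:

    import tensorflow as tf  # TF 1.x-era API assumed

    reg = tf.contrib.layers.l2_regularizer(scale=0.005)
    w = tf.get_variable("w", shape=[10, 1], regularizer=reg)   # adds the L2 term to the collection

    x = tf.placeholder(tf.float32, [None, 10])
    y = tf.placeholder(tf.float32, [None, 1])
    data_loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

    reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    total_loss = data_loss + tf.add_n(reg_losses)               # minimize this, not data_loss
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(total_loss)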
37,109,762 | 2016-05-09T07:15:00.000 | 0 | 0 | 0 | 0 | javascript,python,mysql,linux,shell | 39,398,778 | 3 | false | 0 | 0 | I use mysql Workbench which has the schema synchronization utility. Very handy when trying to apply changes from development server to a production server. | 2 | 1 | 0 | I have to compare two MySql database data, I want to compare two MySql schema and find out the difference between both schema.
I have created two variables, Old_Release_DB and New_Release_DB. In Old_Release_DB I have stored the old release schema; then, after some modifications (I deleted some columns, added some columns, renamed some columns, and changed column properties, e.g. increasing a datatype size from varchar(10) to varchar(50)), it became the new release schema, which I have stored in New_Release_DB.
Now I want the table name and the list of columns which have changed in New_Release_DB, together with the kind of change for each column name.
Example,
Table_A Column_Name Add(if it is added),
Table_A Column_Name Delete(if it is deleted),
Table_A Column_Name Change(if its property has changed)
I am trying it in Shell script in Linux, But I am not getting it. Please let me know If I can use other script like python or java. | How to compare two MySql database | 0 | 1 | 0 | 3,461 |
37,109,762 | 2016-05-09T07:15:00.000 | 0 | 0 | 0 | 0 | javascript,python,mysql,linux,shell | 37,110,095 | 3 | false | 0 | 0 | You can compare two databases by creating database dumps:
mysqldump -u your-database-user your-database-name > database-dump-file.sql - if you're using a password to connect to a database, also add -p option to a mysqldump command.
And then compare them with diff:
diff new-database-dump-file.sql old-database-dump-file.sql
Optionally, you can save the results of diff execution to a file with STDOUT redirecting by adding > databases_diff to a previous command.
However, that kind of comparison would require some eye work - you will get literally a difference between two files. | 2 | 1 | 0 | I have to compare two MySql database data, I want to compare two MySql schema and find out the difference between both schema.
I have created two variables, Old_Release_DB and New_Release_DB. In Old_Release_DB I have stored the old release schema; then, after some modifications (I deleted some columns, added some columns, renamed some columns, and changed column properties, e.g. increasing a datatype size from varchar(10) to varchar(50)), it became the new release schema, which I have stored in New_Release_DB.
Now I want the table name and the list of columns which have changed in New_Release_DB, together with the kind of change for each column name.
Example,
Table_A Column_Name Add(if it is added),
Table_A Column_Name Delete(if it is deleted),
Table_A Column_Name Change(if its property has changed)
I am trying it in Shell script in Linux, But I am not getting it. Please let me know If I can use other script like python or java. | How to compare two MySql database | 0 | 1 | 0 | 3,461 |
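Since the question allows a Python script, here is a hedged sketch that automates the dump-and-diff idea from the answer above; mysqldump must be on PATH, --no-data keeps schema only, and the user name 'dbuser' is a placeholder:

    import subprocess, difflib

    def dump_schema(db_name, outfile, user='dbuser'):
        with open(outfile, 'w') as f:
            subprocess.check_call(['mysqldump', '--no-data', '-u', user, db_name], stdout=f)

    dump_schema('Old_Release_DB', 'old_schema.sql')
    dump_schema('New_Release_DB', 'new_schema.sql')

    with open('old_schema.sql') as f_old, open('new_schema.sql') as f_new:
        diff = difflib.unified_diff(f_old.readlines(), f_new.readlines(),
                                    fromfile='old_schema.sql', tofile='new_schema.sql')
    print(''.join(diff))   # added/removed/changed column definitions show up as +/- lines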
37,110,535 | 2016-05-09T08:03:00.000 | 0 | 1 | 0 | 0 | php,python,apache,gpu,theano | 44,941,997 | 1 | false | 0 | 0 | I would split that up: save the requirement as an event in some storage (Redis for example, or even RabbitMQ) and listen to that with some daemonized script (cron would be a bad choice since it's hard to make it run more often than every minute). The script will update the storage entry with the results and you can access it again in your HTTP stack. You can implement the functionality via AJAX or use a usleep command in PHP to wait for the results. If using a while loop, don't forget to break it after 1 second or so, so the request does not run too long.
Your problem might be the configured user that executes the PHP binary - it may not be permitted to access those binaries on your system. Typically it's the www-data user. By adding the www-data user to the necessary group, you might be able to solve it without splitting everything up. Have a look at the binary's ownership and permissions to figure that out. | 1 | 6 | 0 | In my php website, I call a python script using theano and running on GPU.
However, when calling this python script from php, it seems apache doesn't have any permissions on GPU so the program falls back on CPU, which is far less efficient compared to GPU.
How can I grant apache rights to run programs on GPU? | How to enable php run python script on GPU? | 0 | 0 | 0 | 757 |
37,110,565 | 2016-05-09T08:05:00.000 | 1 | 0 | 1 | 0 | python-3.x,pycharm | 37,534,819 | 2 | false | 0 | 0 | I also ran into this problem. The current version in the PyPi index is qfrm-0.2.0.27.
The version on the website appears to be qfrm-0.2.0.23. Although it is older, it worked without error for me.
If you download the ...23 whl file and install that one (pip install [file_name].whl) you may find it works better. | 2 | 0 | 0 | I am quite newbie in python and I am trying to make the qfrm 0.2.0.27 library to work. Unfortunately there is no documentation about this library. I installed it using pip and when I try to import it I get the following error:
No module named 'qfrm.Options'
Does anyone have a solutions for this? I am using python 3.5.1. and PyCharm | qfrm module - No module named 'qfrm.Options' (Python 3.5.1) | 0.099668 | 0 | 0 | 504 |
37,110,565 | 2016-05-09T08:05:00.000 | 1 | 0 | 1 | 0 | python-3.x,pycharm | 39,754,605 | 2 | false | 0 | 0 | I'm on Python version 2.7.11 and was unable to install qfrm version 0.2.0.27 and received the error message: No module named 'qfrm.Options'
However, I was able to install qfrm version 0.2.0.23, as follows: pip install -v qfrm==0.2.0.23 | 2 | 0 | 0 | I am quite newbie in python and I am trying to make the qfrm 0.2.0.27 library to work. Unfortunately there is no documentation about this library. I installed it using pip and when I try to import it I get the following error:
No module named 'qfrm.Options'
Does anyone have a solutions for this? I am using python 3.5.1. and PyCharm | qfrm module - No module named 'qfrm.Options' (Python 3.5.1) | 0.099668 | 0 | 0 | 504 |
37,111,267 | 2016-05-09T08:44:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 37,117,837 | 1 | true | 0 | 1 | You don't need an invisible button to register a secret click Simply bind <1> to the root window and it will register whenever you click on anything (unless you click on some other widget that is listening for that event). You can then check the coordinates of the click to see where the use clicked. | 1 | 1 | 0 | I found ways to hide something after pressing a button, but what I would like to do is having an invisible button that can still be pushed. A secret button of some sort, using Tkinter. It doesn't need to do anything yet | Is there a way to make an invisible button in Tkinter? | 1.2 | 0 | 0 | 1,995 |
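A small sketch of the bind-to-root idea from the accepted answer above; the 100x40 "secret" rectangle in the top-left corner is an arbitrary choice:

    import tkinter as tk   # the module is named Tkinter on Python 2

    SECRET_AREA = (0, 0, 100, 40)        # x1, y1, x2, y2 of the invisible "button"

    def on_click(event):
        x1, y1, x2, y2 = SECRET_AREA
        if x1 <= event.x <= x2 and y1 <= event.y <= y2:
            print("secret area clicked")

    root = tk.Tk()
    root.geometry("300x200")
    root.bind("<Button-1>", on_click)    # <1> and <Button-1> are equivalent
    root.mainloop()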
37,115,070 | 2016-05-09T11:50:00.000 | 2 | 0 | 0 | 0 | python,django,unit-testing | 37,130,919 | 1 | false | 1 | 0 | After a day of staring at my screen, I found a solution:
I removed the managed = False from the models' Meta, and generated migrations. To prevent actual migrations from being applied against the production database, I used my database router to block them (return False from allow_migrate for the appropriate app and database).
In my settings I detect whether unit tests are being run, and then just don't define the database router or the external database. With the migrations present, the unit tests can create their test tables and run. | 1 | 1 | 0 | I'm working on a project which involves a huge external dataset (~490Gb) loaded in an external database (MS SQL through django-pyodbc-azure). I've generated the Django models marked managed=False in their meta. In my application this works fine, but I can't seem to figure out how to run my unit tests. I can think of two approaches: mocking the data in a test database, and giving the unit tests (and CI) read-only access to the production dataset. Both options are acceptable, but I can't figure out either of them:
Option 1: Mocked data
Because my models are marked managed=False, there are no migrations, and as a result, the test runner fails to create the database.
Option 2: Live data
django-pyodbc-azure will attempt to create a test database, which fails because it has a read-only connection. Also I suspect that even if it were allowed to do so, the resulting database would be missing the required tables.
Q How can I run my unittests? Installing additional packages, or reconfiguring the database is acceptable. My setup uses django 1.9 with postgresql for the main DB. | Unit tests with an unmanaged external read-only database | 0.379949 | 1 | 0 | 299 |
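A minimal sketch of the router described in the answer above; the app label 'external_app' and the database alias 'external' are placeholders for your own names, and the method signature matches Django 1.8+/1.9:

    class ExternalDatabaseRouter(object):
        """Never let migrate create or alter tables in the external, read-only database."""

        def allow_migrate(self, db, app_label, model_name=None, **hints):
            if db == 'external' or app_label == 'external_app':
                return False
            return None   # no opinion for everything else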
37,117,571 | 2016-05-09T13:52:00.000 | 6 | 0 | 1 | 1 | python,windows,anaconda,pydev | 56,714,055 | 11 | false | 0 | 0 | where conda
F:\Users\christos\Anaconda3\Library\bin\conda.bat
F:\Users\christos\Anaconda3\Scripts\conda.exe
F:\Users\christos\Anaconda3\condabin\conda.bat
F:\Users\christos\Anaconda3\Scripts\conda.exe --version
conda 4.6.11
this worked for me | 2 | 99 | 0 | I installed Anaconda for Python 2.7 on my Windows machine and wanted to add the Anaconda interpreter to PyDev, but quick googling couldn't find the default place where Anaconda installed, and searching SO didn't turn up anything useful, so.
Where does Anaconda 4.0 install on Windows 7? | Where does Anaconda Python install on Windows? | 1 | 0 | 0 | 246,046 |
37,117,571 | 2016-05-09T13:52:00.000 | 2 | 0 | 1 | 1 | python,windows,anaconda,pydev | 51,557,560 | 11 | false | 0 | 0 | With Anaconda prompt python is available, but on any other command window, python is an unknown program. Apparently Anaconda installation does not update the path for python executable. | 2 | 99 | 0 | I installed Anaconda for Python 2.7 on my Windows machine and wanted to add the Anaconda interpreter to PyDev, but quick googling couldn't find the default place where Anaconda installed, and searching SO didn't turn up anything useful, so.
Where does Anaconda 4.0 install on Windows 7? | Where does Anaconda Python install on Windows? | 0.036348 | 0 | 0 | 246,046 |
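Not mentioned in the answers above, but an easy cross-check from inside any Python prompt that Anaconda opens - it prints the exact install location instead of guessing (the example path is hypothetical):

    import sys
    print(sys.executable)   # e.g. C:\Users\<name>\Anaconda2\python.exe
    print(sys.prefix)       # root folder of the active installation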
37,120,088 | 2016-05-09T15:48:00.000 | 0 | 1 | 0 | 0 | python,unicode | 37,120,499 | 3 | true | 0 | 0 | I found my error. I used Python via the Windows console and the Windows console mishandled the Unicode. | 2 | 0 | 0 | I want to turn Unicode into ASCII characters, transfer them through a channel that only accepts ASCII characters, and then transform them back into proper Unicode.
I'm dealing with Unicode characters like ɑ in Python 3.5.
ord("ɑ") gives me 63, which is the same as what ord("?") gives me (also 63). This means simply using ord() and chr() doesn't work. How do I get the right conversion? | Is there something like ord() in python that gives the unicode hex? | 1.2 | 0 | 0 | 381 |
37,120,088 | 2016-05-09T15:48:00.000 | 0 | 1 | 0 | 0 | python,unicode | 37,120,442 | 3 | false | 0 | 0 | You can convert a number to a hex string with "0x%x" %255 where 255 would be the number you want to convert to hex.
To do this with ord, you could do "0x%x" %ord("a") or whatever character you want.
You can remove the 0x part of the string if you don't need it. If you want the hex to be capitalized (A-F) use "0x%X" %ord("a") | 2 | 0 | 0 | I want to turn Unicode into ASCII characters, transfer them through a channel that only accepts ASCII characters, and then transform them back into proper Unicode.
I'm dealing with Unicode characters like ɑ in Python 3.5.
ord("ɑ") gives me 63, which is the same as what ord("?") gives me (also 63). This means simply using ord() and chr() doesn't work. How do I get the right conversion? | Is there something like ord() in python that gives the unicode hex? | 0 | 0 | 0 | 381 |
37,120,704 | 2016-05-09T16:19:00.000 | 0 | 0 | 1 | 0 | python,macos,numpy | 37,120,933 | 1 | false | 0 | 0 | You have to use/run Python from the Anaconda installation to get anaconda modules running. Changing path for python should fix any import errors. | 1 | 0 | 0 | So I installed Anaconda3 from online. When I try to use numpy in a code, it gives me an error. Is there anything else that I need to do in order to use these python packages ?
Does this involve anything in terminal or just the python shell ?
Thanks. | I installed Anaconda3 on my Mac for Python. However when I try to use numpy it says ImportError: No module named 'numpy' | 0 | 0 | 0 | 193 |
37,122,050 | 2016-05-09T17:37:00.000 | 5 | 0 | 1 | 0 | python,floating-point,precision,floating-accuracy | 37,122,353 | 1 | true | 0 | 0 | Two digits obviously won't cut it. You'd only be able to represent 100 distinct values. How about 3 digits?
Say we have a number x/255, and we display that to 3 digits after the decimal point, effectively rounding it to some number y/1000. Multiplying y/1000 by 255 and rounding it will produce x if x/255 is the closest multiple of 1/255 to y/1000.
If x/255 = y/1000, then it's obviously the closest multiple of 1/255. Otherwise, x/255 must be within a distance of 1/2000 of y/1000 to round to y/1000, so the closest multiple of 1/255 on the other side of y/1000 must be at least a distance 1/255 - 1/2000 away, further than x/255. Thus, x/255 is the closest multiple of 1/255 to y/1000, and 3 digits are enough. Similarly, for any denominator d with n digits, n decimal places should be enough (and if d is a power of 10, n-1 decimal places should do it).
(I've neglected the impact of implicit floating-point rounding error in this derivation. For small denominators, floating-point rounding error should not change this analysis.) | 1 | 1 | 0 | Assume I have a float in the interval [0.0, 1.0] that is represented as a string. We will call this value floatstr. Let us also assume that this value represents an integer in the interval [0, 255].
The formula for converting floatstr to the integer is (in python):
int(round(float(floatstr)*255))
What is the minimum number of decimal points required in floatstr to represent this value accurately? How is this minimum number calculated, if there is a formula for doing so? | Minimum number of decimal places needed to represent a value in the range [0, 255] | 1.2 | 0 | 0 | 579 |
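A quick brute-force check of the derivation above over every value in [0, 255] (works on Python 2 or 3):

    three_ok = all(int(round(round(x / 255.0, 3) * 255)) == x for x in range(256))
    print(three_ok)           # True: three decimal places always survive the round trip

    two_bad = [x for x in range(256) if int(round(round(x / 255.0, 2) * 255)) != x]
    print(len(two_bad) > 0)   # True: two decimal places lose information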
37,123,971 | 2016-05-09T19:31:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 37,124,089 | 2 | true | 0 | 0 | fix your PATH environment variable so it has the global python directory declared before the anacaonda directory. | 1 | 0 | 0 | Good afternoon. I have been learning Virtualenv and Virtualenvwrapper. I then decided I wanted to install Anaconda Python again so I could continue learning how to do data analysis. Then I saw where you can use conda to make a virtual environment for Anaconda. I installed it and told it not to add the path to my bashrc file but then conda was not recognized. So then I reinstalled and said yes. But now my global python is set to anaconda 3.5 which I do not want. How can I use conda to set up a virtual environment without affecting my global python of 2.7? Thank you. | Installing Anaconda Python in a virtual world without changing global Python version | 1.2 | 0 | 0 | 255 |
37,126,002 | 2016-05-09T21:47:00.000 | 1 | 0 | 1 | 0 | python,datetime,variables,labels,spss | 37,128,707 | 1 | false | 0 | 0 | It sounds like the bigger problem is that e values are all wrong. But I'm not clear on what you want to do from here. Can you post an example? | 1 | 1 | 0 | I have a database (300 variables, 4000 cases) with lots of date/time variables. I imported into SPSS from Excel and used autorecode on all my variables but the variable LABELS contain the date/time information and the variable VALUES are all the default "October 14, 1582".
Is there an easy way either in SPSS or using Python to copy the variable labels over to the variable values? Is there a way I can only do this to some of my variables (eg., up to a certain point in the index?)
Many thanks! Mat. | Transferring variable labels to variable values in SPSS/Python | 0.197375 | 0 | 0 | 159 |
37,126,446 | 2016-05-09T22:28:00.000 | 0 | 0 | 0 | 1 | java,python,javafx,automation,libusb | 37,133,845 | 1 | false | 1 | 0 | No, in most (Windows) scenarios this will not work. The problem is that libusb on Windows uses a special backend (libusb0.sys, libusbK.sys or winusb.sys). You have to install one of those backends (libusb-win32 is libusb0.sys) on every machine you want your software to run on. Under Linux this should work fine out of the box.
Essentially you have to ship the files you generate with inf_wizard.exe with your software and install the inf (needs elevated privileges) before you can use the device with your software. | 1 | 0 | 0 | I'm making a java GUI application (javafx) that calls a python script (python2.7) which detects connected devices. The reason for this is so I can automate my connections with multiple devices.
In my python script, I use pyusb. However to detect a device, I have to use inf_wizard.exe from libusb-win32 to communicate with the device. This is fine for my own development and debugging, but what happens if I wish to deploy this app and have other users use this?
Would this app, on another computer, be able to detect a device?
Thanks
Please let me know if there is a better way to doing this. | Python script to detect USBs in java GUI | 0 | 0 | 0 | 125 |
37,126,576 | 2016-05-09T22:41:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 37,126,992 | 1 | false | 0 | 0 | No, you do not need to add the bias yourself; models define biases in their own way. What you learned in the course is a generic, although not perfect, solution. It matters for models such as SVMs, which should never have "1"s appended, as that bias would then get regularized, which is simply wrong for SVMs. Thus, while this is a nice theoretical trick to show that you can create methods that completely ignore the bias, in practice it is often treated in a specific way, and scikit-learn does it for you. | 1 | 0 | 1 | In my machine learning class, we have learned about appending a 1 to each sample's feature vector when using many machine learning models to account for bias. For example, if we are doing linear regression and a sample has features f_1, f_2, ..., f_d, we need to add a "fake" feature value of 1 to allow for the regression function to not have to pass through the origin.
When using sklearn models, do you need to do this yourself, or do their implementations do it for you? Specifically, I'm interested in whether or not this is necessary when using any of their regression models or their SVM models. | Need to append bias term when using `sklearn` models? | 0 | 0 | 0 | 1,556 |
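A small sketch confirming the point of the answer above: scikit-learn estimators carry their own bias (intercept) term, so no column of 1s needs to be appended by hand. The synthetic data with a +5 offset is just an illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.random.randn(200, 3)
    y = X.dot([1.0, 2.0, -1.0]) + 5.0                        # nonzero offset, no "1" feature added

    model = LinearRegression(fit_intercept=True).fit(X, y)   # fit_intercept=True is the default
    print(model.intercept_)                                  # ~5.0, learned internally
    print(model.coef_)                                       # ~[1, 2, -1]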
37,127,292 | 2016-05-10T00:05:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,import,module,anaconda | 45,157,942 | 2 | false | 0 | 0 | To add the python path for anaconda if you are on windows:
Right click my computer
Go to advanced settings
Click on environment variables
Find the PATH variable and click edit
Add the path where your python.exe file is located
Example:
C:\Anaconda3 - might not work
C:\Anaconda3 - then this should work
Same thing for those, who have other installations. | 1 | 1 | 0 | I used IDLE for some time, then for a class they told us to download Anaconda, which I ended up not using, but still downloaded it anyway.
I uninstalled anaconda and deleted all the files from my CPU and started using IDLE again. I now can't import a module to IDLE because it can't find it. I think anaconda messed up the python path, but I don't know how to change it so I can import modules back to python.
How can I determine what the python path is and how can I change it so when I download modules I can import them to IDLE again?
I am running OsX 10.10.5 and Python 2.7.10. | Reset python path after Anaconda | 0 | 0 | 0 | 3,476 |
37,128,886 | 2016-05-10T03:40:00.000 | 2 | 0 | 0 | 0 | python,keras | 37,155,400 | 2 | true | 0 | 0 | You could create a tar archive containing the weights and the architecture, as well as a pickle file containing the optimizer state returned by model.optimizer.get_state(). | 1 | 6 | 1 | I was just wondering what is the best way to save the state of a model while it is optimizing. I want to do this so I can run it for a while, save it, and come back to it some time later. I know there is a function to save the weights and another function to save the model as JSON. During learning I would need to save both the weights and the parameters of the model. This includes parameters like the momentum and learning rate. Is there a way to save both the model and weights in the same file? I read that it is not considered good practice to use pickle. Also, would the momentum values for the gradient descent be included with the model's JSON or in the weights? | Keras, best way to save state when optimizing | 1.2 | 0 | 0 | 4,558 |
37,129,821 | 2016-05-10T05:18:00.000 | 0 | 0 | 0 | 0 | python,xml,checksum | 37,129,870 | 1 | true | 0 | 0 | You could calculate a checksum of the XML before saving, but including it in the file will change the checksum. You end up with a recursive problem where the checksum changes every time you update the file with the new checksum. So no, it's not possible. | 1 | 0 | 0 | I have created an xml file using element tree and want to include the checksum of the file in an xml tag. Is this possible? | Checksum in xml tag python | 1.2 | 0 | 1 | 301 |
37,136,083 | 2016-05-10T10:38:00.000 | 0 | 0 | 1 | 0 | python,eclipse,pydev | 37,360,335 | 1 | false | 0 | 0 | I got this fixed by editing the eclipse.ini to use java-8
this line ->
/usr/lib/jvm/java-7-openjdk-amd64/jre/bin
to this ->
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin
I guess there are other ways, too, to update the Eclipse configuration with Java versions. | 1 | 1 | 0 | I am having the apparently common problem of PyDev not showing up in the perspective list after install. After trying all the various suggested solutions without success, I don't know what to do next and need some help. I installed PyDev version 5 on Eclipse version 4.2.2 with JRE 1.8.0_73-b02. TIA. | pydev not in Eclipse perspective list | 0 | 0 | 0 | 143 |
37,138,777 | 2016-05-10T12:36:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,recommendation-engine,rating,collaborative-filtering | 37,142,380 | 1 | false | 0 | 0 | Based on your comment above, I would manipulate the Number of times they purchased the product field. You need to basically transform the Number of times they purchased the product field into an implicit rating field. I would maybe scale the product rating system to 1-5. If they press the don't like the product button, the rating is a 1, if they press the like the product button, they get a 5. If they have bought the product frequently, it's a 5, otherwise it starts at a 3 on the first purchase and scales up to a 4 then 5, based on your data. If they have never bought the product AND have never rated the product, it's a null value, so won't contribute to ratings. | 1 | 0 | 1 | I developed a recommender system using Matrix Factorization in Python. The ratings are in the range [1-5]. It works very well. This system is made for client advisors rather than clients themselves. Hence, the system recommends some products to the client advisor and then this one decides which products he's gonna recommend to his client.
In my application I want to have 2 additional buttons: relevant, irrelevant. Thus, for each recommendation the client advisor would press the button irrelevant if the recommendation is not good but its rating is high and he would press the button relevant if the recommendation is good but its rating is low.
The problem is that I can't figure how to update the ratings when one of the buttons is pressed. Please give me some idea about how to handle that feature. I insist on having only two buttons (relevant and irrelevant), the client advisor can't modify the rating himself.
Thank you very much. | Manually update ratings in recomender system | 0 | 0 | 0 | 62 |
37,142,837 | 2016-05-10T15:25:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 37,142,904 | 2 | false | 0 | 0 | Yes you can put functions in a list, or a dict (which can be quite useful), you can do it with functions declared with def, lambdas or import already defined functions, from operators module with your examples (operator.add, etc) | 1 | 7 | 0 | The reason I ask is pure curiosity. I could see, possibly, that this might be useful if you didn't know ahead of time what operations you wanted to apply to certain variables, or to apply a different operation during a certain level in a recursive call, or perhaps it may just make certain things easier and/or neater.
Though am just speculating, it could be a really bad idea, but overall am just curious. | Python - is there a way to store an operation(+ - * /) in a list or as a variable? | 0 | 0 | 0 | 3,596 |
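A minimal sketch of the dict-of-functions idea from the answer above, using the standard operator module:

    import operator

    ops = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}

    op = ops['*']                 # an "operation" stored in a variable
    print(op(6, 7))               # 42
    print(ops['+'](2, 3))         # 5 -- the operation is picked at runtime by its symbol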
37,143,985 | 2016-05-10T16:18:00.000 | 9 | 0 | 1 | 0 | python,formatting,indentation,visual-studio-code | 58,028,410 | 7 | false | 0 | 0 | Simple solution!
Click the tab size (may show "Spaces: 4") in the bottom right corner and choose Convert Indentation to Tabs or Convert Indentation to Spaces as per your requirement. | 3 | 25 | 0 | How do I enable indentation in Visual Studio Code?
I'm trying to learn Python (new to programming) and need auto-indentation. It worked with the first version I tried, but it doesn't indent after a colon (:) any more. How can I configure it to automatically indent? | Visual Studio Code indentation for Python | 1 | 0 | 0 | 100,104 |
37,143,985 | 2016-05-10T16:18:00.000 | 2 | 0 | 1 | 0 | python,formatting,indentation,visual-studio-code | 65,341,034 | 7 | false | 0 | 0 | For me, "Convert Indentation to Tabs" has worked.
To do that:
Go to "Command Palette" Ctrl+Shift+P (View>Command Palette)
Type in & select "Convert Indentation to Tabs" and press Enter | 3 | 25 | 0 | How do I enable indentation in Visual Studio Code?
I'm trying to learn Python (new to programming) and need auto-indentation. It worked with the first version I tried, but it doesn't indent after a colon (:) any more. How can I configure it to automatically indent? | Visual Studio Code indentation for Python | 0.057081 | 0 | 0 | 100,104 |
37,143,985 | 2016-05-10T16:18:00.000 | 6 | 0 | 1 | 0 | python,formatting,indentation,visual-studio-code | 54,562,074 | 7 | false | 0 | 0 | I faced similar issues while editing. Select the lines of code you wish to indent and press Ctrl + ] in Windows or CMD+] on Mac.
You can change the indent size in settings. Search for tab size in settings. I use two, by the way. | 3 | 25 | 0 | How do I enable indentation in Visual Studio Code?
I'm trying to learn Python (new to programming) and need auto-indentation. It worked with the first version I tried, but it doesn't indent after a colon (:) any more. How can I configure it to automatically indent? | Visual Studio Code indentation for Python | 1 | 0 | 0 | 100,104 |
37,145,454 | 2016-05-10T17:40:00.000 | 0 | 0 | 1 | 0 | python,ipython,jupyter-notebook | 54,256,000 | 8 | false | 0 | 0 | Intellij IDEA (which is also available as free open-source community edition) can render ipynb files as well. In fact it also allows to author notebooks, so it's not just a viewer.
It can be used via file type associations or via the command line launcher (e.g. idea foo.ipynb). | 2 | 30 | 0 | Nowadays with more and more IPython notebook files (*.ipynb) around, it is very disturbing every time when I want to peek at some notebook I have to open a server for it, and cannot do it in read-only mode. Due to auto-save I can accidentally change the file when reading it if not in read-only mode.
I hope something like this: ipython notebook mynb.ipynb --read-only would work, but sadly it doesn't (although still it creates a server which I don't really want in read-only view). What I really want is to open an ipynb file like a HTML file for reading; currently it seems a missing view of ipynb file, and now the notebook is more like a black-box or near-binary file alone.
(P.S. I am using Linux/Ubuntu.) | Open IPython notebooks (*.ipynb) in read-only view (like a HTML file) | 0 | 0 | 0 | 42,629 |
37,145,454 | 2016-05-10T17:40:00.000 | 0 | 0 | 1 | 0 | python,ipython,jupyter-notebook | 61,829,617 | 8 | false | 0 | 0 | I was able to read .ipynb files as html in visual code. You would need a python plugin for it which visual code auto detects. Fairly straight forward after that. | 2 | 30 | 0 | Nowadays with more and more IPython notebook files (*.ipynb) around, it is very disturbing every time when I want to peek at some notebook I have to open a server for it, and cannot do it in read-only mode. Due to auto-save I can accidentally change the file when reading it if not in read-only mode.
I hope something like this: ipython notebook mynb.ipynb --read-only would work, but sadly it doesn't (although still it creates a server which I don't really want in read-only view). What I really want is to open an ipynb file like a HTML file for reading; currently it seems a missing view of ipynb file, and now the notebook is more like a black-box or near-binary file alone.
(P.S. I am using Linux/Ubuntu.) | Open IPython notebooks (*.ipynb) in read-only view (like a HTML file) | 0 | 0 | 0 | 42,629 |
37,146,986 | 2016-05-10T19:08:00.000 | 3 | 1 | 1 | 0 | python,vmware,pyvmomi | 37,168,227 | 1 | true | 0 | 0 | That is correct. Also as of recently some new Task handling functionality has been added to pyVim as well. The new task stuff abstract out making property collectors to monitor task progress and such. The connection classes provided allow various authentication methods supported by vSphere such as basic auth, SSPI, and a few others. It also handles disconnecting and cleaning up connections once closed. The VMOMI classes from pyVmomi are the objects inside vSphere like HostSystem, VirtualMachines, Network, etc. | 1 | 4 | 0 | I'm trying to get used to Python and VMware vSphere API Python Bindings (pyVmomi). I try to understand the purpose of each component. What's the purpose of pyVim within pyVmomi? From what I understand, pyVim is used for connection handling (creation, deletion...) to the Virtualization Management Object Management Infrastructure (VMOMI). Is this correct?
Thank you & best regards,
Patrick | Purpose of pyVim within pyVmomi | 1.2 | 0 | 0 | 5,088 |
37,148,404 | 2016-05-10T20:31:00.000 | 1 | 0 | 0 | 0 | jquery,python,django,parsing | 38,005,054 | 1 | false | 1 | 0 | I think it's the wrong way. To make it easier, you should set the traditional argument to true in $.param().
This is the difference between traditional being true and false:
var myObject = { a: [ 1, 2, 3 ] };
$.param(myObject); // a%5B%5D=1&a%5B%5D=2&a%5B%5D=3 ==> a[]=1&a[]=2&a[]=3
$.param(myObject, true); // a=1&a=2&a=3
With traditional set to true, you can use this code in your Django project:
request.POST.getlist('a') # [1, 2, 3] | 1 | 1 | 0 | From a jQuery form, I get the following QueryDict, when I submit a form:
<QueryDict: {'marc[0].sub': [''], 'csrfmiddlewaretoken': ['K6Fd4AbFP2bLmAWaD4hAGoFbzyKjHErN'], 'field': [''], 'marc[2].field': ['856'], 'marc[0].field': ['001'], 'sub': [''], 'marc[1].sub': ['a'], 'marc[2].sub': ['u'], 'marc[1].field': ['655']}>
I can get at the data that I want if I use the very specific call in my view. For example:
print(QueryDict.getlist(request.POST, 'marc[2].sub'))
...shows the desired 'u' on the console, but I'm not sure how to loop through indexed key pairs in this odd format, where the keys have no relation, except the interloping index number. Eventually, I need a for each type statement, where I'd loop through the following:
marc[0].field: 001 and marc[0].sub: ''
marc[1].field: 655 and marc[1].sub: 'a'
marc[2].field: 856 and marc[2].sub: 'u'
...or, better, would be to loop through something more like this:
field_subs = ('001', ''), ('655', 'a'), ('856', 'u')
...to perform another operation.
e.g.
for field_sub in field_subs:
If I need to submit more code, or if I'm heading at this the wrong way or making it more difficult than it needs to be, I'd appreciate any direction. I'm using Django 1.9
Thanks | Parsing indexed key pairs from a QueryDict in Django or Python | 0.197375 | 0 | 0 | 201 |
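A hedged sketch of one way to turn the posted QueryDict keys into (field, sub) pairs; the helper name extract_marc_pairs is made up here, and it relies only on the marc[N].field / marc[N].sub key naming shown in the question:

    import re

    def extract_marc_pairs(post):
        """Group keys like 'marc[2].field' / 'marc[2].sub' by their index."""
        pattern = re.compile(r'^marc\[(\d+)\]\.(field|sub)$')
        rows = {}
        for key in post:
            m = pattern.match(key)
            if m:
                idx, part = int(m.group(1)), m.group(2)
                rows.setdefault(idx, {})[part] = post.get(key)
        return [(rows[i].get('field', ''), rows[i].get('sub', '')) for i in sorted(rows)]

    # In the view: field_subs = extract_marc_pairs(request.POST)
    # -> [('001', ''), ('655', 'a'), ('856', 'u')]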
37,149,454 | 2016-05-10T21:41:00.000 | 0 | 0 | 0 | 0 | python,cmd,python-3.4 | 59,241,311 | 1 | false | 0 | 0 | I always have this problem whenever I reinstall Python.
Very simple fix (at least on Windows):
1) Create a .pyw file
2) Right-click on it and select properties
3) In "Opens with:" press change
4) Navigate to your python directory and choose "pythonw.exe" | 1 | 1 | 0 | I'm trying to run a program without cmd window pop-up when I double click it or when I make it exe. So I wanted to save it as .pyw extension but when I double click to script, shell can't run it. It says I need to select the program to run it or search online. How can I fix this? Windows- Python 3.4
It was okay in 3.5 when I use .pyw extension. First time I see this problem. | Python .pyw extension does not work | 0 | 0 | 0 | 2,288 |
37,149,748 | 2016-05-10T22:04:00.000 | 1 | 0 | 1 | 0 | python,pip,ipython,ipython-sql | 70,772,212 | 4 | false | 0 | 0 | I know this answer will be (very) late to contribute to the discussion but maybe it will help someone. I found out what worked for me by following Thomas, who commented above. However, with a bit of a caveat, that I was using pyenv to setup and manage python on my local machine.
So when running sys.executable in a jupyter notebook cell I found out my python path was /usr/local/Cellar/jupyterlab/3.2.8/libexec/bin/python3.9, while I expected it to be somewhere along the lines of '/Users/<USER_NAME>/.pyenv/versions/3.9.2/bin/python'.
This error was attributed to me having installed jupyter through command brew install jupyter instead of pyenv exec pip install jupyter. I proceeded to uninstall jupyter with brew and then executing the second command, which now got jupyter up and running!
(note that you would first have to have pyenv setup properly). | 3 | 12 | 0 | Just set up an IPython Notebook on Ubuntu 16.04 but I can't use %load_ext sql.
I get: ImportError: No module named sql
I've tried using pip and pip3 with and without sudo to install ipython-sql. All 4 times it installed without issue but nothing changes on the notebook.
Thanks in advance! | IPython Notebook and SQL: 'ImportError: No module named sql' when running '%load_ext sql' | 0.049958 | 1 | 0 | 19,909 |
37,149,748 | 2016-05-10T22:04:00.000 | 0 | 0 | 1 | 0 | python,pip,ipython,ipython-sql | 54,436,119 | 4 | false | 0 | 0 | I suspect you're using a different IPython Notebook kernel than the one you've installed ipython-sql into.
IPython Notebook can have more than one kernel. If that is the case, make sure you're in the right place first. | 3 | 12 | 0 | Just set up an IPython Notebook on Ubuntu 16.04 but I can't use %load_ext sql.
I get: ImportError: No module named sql
I've tried using pip and pip3 with and without sudo to install ipython-sql. All 4 times it installed without issue but nothing changes on the notebook.
Thanks in advance! | IPython Notebook and SQL: 'ImportError: No module named sql' when running '%load_ext sql' | 0 | 1 | 0 | 19,909 |
37,149,748 | 2016-05-10T22:04:00.000 | 5 | 0 | 1 | 0 | python,pip,ipython,ipython-sql | 43,972,590 | 4 | false | 0 | 0 | I know it's been a long time, but I faced the same issue, and Thomas' advice solved my problem. Just outlining what I did here.
When I ran sys.executable in the notebook I saw /usr/bin/python2, while the pip I used to install the package was /usr/local/bin/pip (to find out what pip you are using, just do which pip or sudo which pip if you are installing packages system-wide). So I reinstalled ipython-sql using the following command, and everything worked out just fine.
sudo -H /usr/bin/python2 -m pip install ipython-sql
This is odd since I always install my packages using pip. I'm wondering maybe there's something special about the magic functions in Jupyter. | 3 | 12 | 0 | Just set up an IPython Notebook on Ubuntu 16.04 but I can't use %load_ext sql.
I get: ImportError: No module named sql
I've tried using pip and pip3 with and without sudo to install ipython-sql. All 4 times it installed without issue but nothing changes on the notebook.
Thanks in advance! | IPython Notebook and SQL: 'ImportError: No module named sql' when running '%load_ext sql' | 0.244919 | 1 | 0 | 19,909 |
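A short sketch of the check that the answers above rely on - make sure the package is installed into the interpreter the notebook kernel actually runs (the pip line is shown as a comment because the ! syntax only works inside a notebook cell):

    import sys
    print(sys.executable)          # the interpreter behind the running kernel

    # in a notebook cell, install into exactly that interpreter:
    # !{sys.executable} -m pip install ipython-sql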
37,150,509 | 2016-05-10T23:23:00.000 | 0 | 0 | 1 | 0 | python,python-3.4,pyqt5 | 37,150,609 | 2 | false | 0 | 1 | I am using pyCharm and it allows me to install PyQt5 from the settings menu, just go to the "Project Interpreter" and install it. | 1 | 4 | 0 | I searched on the web and couldn't find anything. How can I install PyQt5 for Python 3.4 version? | How to install PyQt5 for Python 3.4 version? | 0 | 0 | 0 | 10,671 |
37,152,723 | 2016-05-11T03:42:00.000 | 2 | 0 | 0 | 0 | python,scikit-learn,time-series,quantitative-finance | 37,176,185 | 4 | false | 0 | 0 | No, there is not a way, in Python, using scikit-learn, to automatically lag all of these time series to find which time series (if any) tend to lag other data. You'll have to write some code. | 1 | 0 | 1 | I currently have a giant time-series array with time-series data of multiple securities and economic statistics.
I've already written a function to classify the data, using sci-kit learn, but the function only uses non-lagged time-series data.
Is there a way, in Python, using sci-kit, to automatically lag all of these time-series to find what time-series (if any) tend to lag other data?
I'm working on creating a model using historic data to predict future performance. | How to auto-discover a lagging of time-series data in scikit-learn and classify using time-series data | 0.099668 | 0 | 0 | 2,999 |
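A hedged sketch of the hand-rolled lagging the answer says you will have to write, using pandas shift(); the random series, the 5-lag window and the simple correlation check are arbitrary stand-ins for your securities data:

    import numpy as np
    import pandas as pd

    s = pd.Series(np.random.randn(300))                   # one series out of the array
    target = s.shift(-1).fillna(0)                         # next-period value to predict

    lagged = pd.concat({'lag_%d' % k: s.shift(k) for k in range(1, 6)}, axis=1).fillna(0)
    print(lagged.corrwith(target))                         # which lag lines up best with the target
    # lagged.values can now be fed to a scikit-learn classifier/regressor as features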
37,159,215 | 2016-05-11T10:02:00.000 | 1 | 0 | 1 | 0 | python,quantlib | 37,319,694 | 1 | true | 0 | 0 | The problem is that the value date for the payment, May 7th, is between today's date and the reference date of the curve. The fixing needs to be forecast, since it's in the future (the fixing date is on May 5th); but because the curve effectively starts on May 8th, it can't return the May 7th discount which is required to forecast the fixing.
The reason why this doesn't usually happen is that, when the value date is between today and the reference date, the fixing date is usually before today's date and thus the fixing can be loaded from past ones.
In this particular case, the way to make it work would be to create a curve with no settlement days so that its reference date is the same as today's date. If you then wanted the price as-of May 8th, you'd have to manually adjust the swap NPV for the discount between May 1st and 8th. | 1 | 1 | 0 | I am using QuantLib 1.7 with the Python interface.
I have constructed the JPY Fixed-Float swap curve following the standard convention. For the swap schedules I have a JointCalendar with Japan and UnitedKingdom. My JPYLibor index has the UK calendar only.
When I set the market date to 2009-May-1, I do a bootstrap using PiecewiseFlatForward with settlement date 2009-May-8 because in the Japan calendar there was a long holiday from 2009-May-4 (monday) to 2009-May-6.
Now, with this bootstraped curve, I try to value a swap that has a floating payment on 2009-May-7. When I try to value it (or compute the amount() function of the next floatingLeg cashflow which has a reset date on 2009-May-5) I get the error message "2nd leg: negative time (-0.00277778) given".
I guess that this is related to the fact that 2009-May-5, which is the London fixing date for value date 2009-May-7, falls on a Japanese holiday?
My swap payments schedules and reset schedule are matching Bloomberg so I am confident in theory is the correct convention. I have read some old posts regarding apparently a similar issue for a US swap, but as far as I understood this was a bug which was corrected around the time of QuantLib 0.9.
Could my problem be related to the same bug or I am not using QuantLib correctly? | JPYLibor fixing during Japanese holiday: negative time error | 1.2 | 0 | 0 | 335 |
37,159,990 | 2016-05-11T10:31:00.000 | 0 | 1 | 1 | 0 | python,ironpython | 37,422,119 | 1 | true | 0 | 0 | In the meantime I got help on another forum and found a solution. It is basically goes like this :
A FunctionDefinition has a body which will most likely be a SuiteStatement (which is just a collection of statements). Local variables will be defined with AssignmentStatements, where the Left side is an array of NameExpressions. From there you can figure out what the locals are, by gathering all of the NameExpressions. | 1 | 0 | 0 | Is someone out there familiar with IronPython internals, specifically with PythonAst and LanguageContext classes ?
My application does compile a Python script source and then look into PythonAst to find variables. While I can successfully find global variables, I am unable to get functions' local variables. Is it possible somehow ?
Another question would be to also find the current type of a variable as it can be inferred from the compiled code as well as its current value ?
After a script was executed I can use the ScriptScope structure, or at debug time I can parse a debug frame for variables and theirs value, but I would like to do it at compile time also, as the user constructs the code. Is this possible at all ?
Thanks. | PythonAst and LanguageContext | 1.2 | 0 | 0 | 61 |
37,170,740 | 2016-05-11T18:32:00.000 | 2 | 0 | 0 | 0 | python,deep-learning,caffe | 37,247,216 | 1 | false | 0 | 0 | You can make use of Python Layer to do the same. The usage of a Python Layer is demonstrated in caffe_master/examples/py_caffe/. Here you could make use of a python script as the input layer to your network. You could describe the behavior of rotations in this layer. | 1 | 0 | 1 | I know that there is a "mirror" parameter in the default data layer, but is there a way to do arbitrary rotations (really, I would just like to do multiples of 90 degrees), preferably in Python? | How to rotate images in Caffe on-the-fly for training set augmentation? | 0.379949 | 0 | 0 | 792 |
37,173,450 | 2016-05-11T21:17:00.000 | 0 | 0 | 0 | 0 | python,flask,urllib | 37,174,450 | 1 | false | 1 | 0 | Remove the / at the beginning of your path to make it relative instead of absolute. | 1 | 0 | 0 | Whenever I try to use urllib.urlretrieve(href, '/static/img/'+filename), I get the error "No such file or directory". However, I do have that directory in there.
If I remove the "/static/img/" the images download fine into the root folder. I need the images to go into the static/img folder to follow Flask convention.
How do I download images using urlretrieve into a directory that I set in Flask? | Flask: Save files downloaded with URL retrieve into the static/img/ folder | 0 | 0 | 0 | 678 |
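A small sketch of the relative-path fix from the answer, with the Python 2/3 import difference handled; the sample URL and file name are placeholders:

    import os
    try:
        from urllib.request import urlretrieve   # Python 3
    except ImportError:
        from urllib import urlretrieve            # Python 2

    target_dir = os.path.join('static', 'img')    # relative, resolved against the app's working dir
    if not os.path.isdir(target_dir):
        os.makedirs(target_dir)

    href = 'https://www.python.org/static/img/python-logo.png'   # example URL
    urlretrieve(href, os.path.join(target_dir, 'python-logo.png'))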
37,174,808 | 2016-05-11T23:20:00.000 | 1 | 0 | 0 | 0 | python-2.7,theano,enthought,canopy | 39,967,880 | 1 | true | 0 | 0 | That's possibly because you installed an old version of Theano package. Try upgrade it or install the newest version by pip install theano. | 1 | 0 | 1 | I could not get Theano running in my system in Enthought canopy Python. When I give import theano and test run, I get the following error.
import blas
File "/Users/rajesh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site- packages/theano/tensor/blas.py", line 135, in
numpy.dtype('float32'):scipy.linalg.blas.fblas.sgemv,
AttributeError: 'module' object has no attribute 'fblas'
Can you please guide me the direction to resolve this ? | module object has no attribute 'fblas' error when running theano.test() in Canopy Python | 1.2 | 0 | 0 | 342 |
37,177,322 | 2016-05-12T04:31:00.000 | 1 | 0 | 0 | 0 | python,zeromq,multicast,pyzmq | 45,479,765 | 1 | false | 0 | 0 | Here is the general procedure which works for me:
1. download zeromq package (using zeromq-4.1.5.tar.gz as example)
2. tar zxvf zeromq-4.1.5.tar.gz
3. cd zeromq-4.1.5
4. apt-get install libpgm-dev
5. ./configure --with-pgm && make && make install
6. pip install --no-binary :all: pyzmq
Then you can use pgm/epgm as you want. | 1 | 1 | 0 | I have zmq version 4.1.3 and pyzmq version 15.2.0 installed on my machine (I assume through pip but I dont remember now). I have a need to connect to a UDP epgm socket but get the error "protocol not supported". I have searched the vast expanses of the internet and have found the answer: "build zero mq with --with-pgm option".
Does anyone know how to do that?
I searched around the harddrive and found the zeromq library in pkgs in my python directory and found some .so files but I dont see any setup.py or anything to recompile with the mysterious --with-pgm option. | How to install pyzmq "--with-pgm" | 0.197375 | 0 | 1 | 1,035 |
37,178,103 | 2016-05-12T05:36:00.000 | 0 | 0 | 1 | 0 | python,input | 37,178,343 | 2 | false | 0 | 0 | The value is discarded. You can't get it back. It's the same as if you just had a line like 2 + 2 or random.rand() by itself; the result is gone. | 2 | 3 | 0 | Was just wondering this. So sometimes programmers will insert an input() into a block of code without assigning its value to anything for the purpose of making the program wait for an input before continuing. Usually when it runs, you're expected to just hit enter without typing anything to move forward, but what if you do type something? What happens to that string if its not assigned to any variable? Is there any way to read its value after the fact? | What happens to a Python input value if it's not assigned to a variable? | 0 | 0 | 0 | 1,578 |
37,178,103 | 2016-05-12T05:36:00.000 | 2 | 0 | 1 | 0 | python,input | 37,178,462 | 2 | true | 0 | 0 | TL;DR: If you don't immediately assign the return value of input(), it's lost.
I can't imagine how or why you would want to retrieve it afterwards.
If you have any callable (as all callables have return values, default is None), call it and do not save its return value, there's no way to get that again. You have one chance to capture the return value, and if you miss it, it's gone.
The return value gets created inside the callable of course, the code that makes it gets run and some memory will be allocated to hold the value. Inside the callable, there's a variable name referencing the value (except if you're directly returning something, like return "unicorns".upper(). In that case there's of course no name).
But after the callable returns, what happens? The return value is still there and can be assigned to a variable name in the calling context. All names that referenced the value inside the callable are gone though. Now if you don't assign the value to a name in your call statement, there are no more names referencing it.
What does that mean? It's gets on the garbage collector's hit list and will be nuked from your memory on its next garbage collection cycle. Of course the GC implementation may be different for different Python interpreters, but the standard CPython implementation uses reference counting.
So to sum it up: if you don't assign the return value a name in your call statement, it's gone for your program and it will be destroyed and the memory it claims will be freed up any time afterwards, as soon as the GC handles it in background.
Now of course a callable might do other stuff with the value before it finally returns it and exits. There are a few possible ways how it could preserve a value:
Write it to an existing, global variable
Write it through any output method, e.g. store it in a file
If it's an instance method of an object, it can also write it to the object's instance variables.
But what for? Unless there would be any benefit from storing the last return value(s), why should it be implemented to hog memory unnecessarily?
There are a few cases where caching the return values makes sense, i.e. for functions with determinable return values (means same input always results in same output) that are often called with the same arguments and take long to calculate.
But for the input function? It's probably the least determinable function existing, even if you call random.random() you can be more sure of the result than when you ask for user input. Caching makes absolutely no sense here. | 2 | 3 | 0 | Was just wondering this. So sometimes programmers will insert an input() into a block of code without assigning its value to anything for the purpose of making the program wait for an input before continuing. Usually when it runs, you're expected to just hit enter without typing anything to move forward, but what if you do type something? What happens to that string if its not assigned to any variable? Is there any way to read its value after the fact? | What happens to a Python input value if it's not assigned to a variable? | 1.2 | 0 | 0 | 1,578 |
37,178,582 | 2016-05-12T06:10:00.000 | 2 | 0 | 0 | 0 | python,flask | 37,179,018 | 3 | false | 1 | 0 | CTRL+C is the right way to quit the app, I do not think that you can visit the url after CTRL+C. In my environment it works well.
What is the terminal output after CTRL+C? Maybe you can add some details.
You can try to visit the url by curl to test if browser cache or anything related with browser cause this problem. | 2 | 2 | 0 | I made a flask app following flask's tutorial. After python flaskApp.py, how can I stop the app? I pressed ctrl + c in the terminal but I can still access the app through the browser. I'm wondering how to stop the app? Thanks.
I even rebooted the VPS. After the VPS is restarted, the app is still running! | How to stop flask app.run()? | 0.132549 | 0 | 0 | 13,550 |
37,178,582 | 2016-05-12T06:10:00.000 | 1 | 0 | 0 | 0 | python,flask | 43,197,195 | 3 | false | 1 | 0 | Have you tried pkill python?
WARNING: do not do so before consulting your system admin if you are sharing a server with others. | 2 | 2 | 0 | I made a flask app following flask's tutorial. After python flaskApp.py, how can I stop the app? I pressed ctrl + c in the terminal but I can still access the app through the browser. I'm wondering how to stop the app? Thanks.
I even rebooted the VPS. After the VPS is restarted, the app is still running! | How to stop flask app.run()? | 0.066568 | 0 | 0 | 13,550 |
37,181,392 | 2016-05-12T08:31:00.000 | 0 | 0 | 1 | 0 | python,shelve | 37,181,416 | 1 | false | 0 | 0 | If the data is small enough to keep in memory, copy it to a normal dict first and only copy it back if they want to save their changes.
If it's too big, then depending on your application, you may be able to copy just a portion. | 1 | 0 | 0 | Modifications made on a shelve are saved at the end of the script even if methods close() and sync(). I would like to know if there is a way to avoid that pattern. In my case, I'm working on a small application to edit some datas. At the end, I ask user if he want to save modifications. If the answer is 'no' I don't want to synchronize the shelve. | Cancel modifications that occured on a shelve | 0 | 0 | 0 | 13 |
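A minimal sketch of the copy-to-a-dict approach from the answer above (the file name and the example edit are arbitrary; the with-statement form of shelve.open needs Python 3.4+):

    import shelve

    with shelve.open('settings.db') as db:
        working_copy = dict(db)          # edit this in memory, not the shelf itself

    working_copy['answer'] = 42          # ...user edits happen here...

    user_wants_to_save = True            # ask the user at the end
    if user_wants_to_save:
        with shelve.open('settings.db') as db:
            db.clear()
            db.update(working_copy)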
37,181,491 | 2016-05-12T08:35:00.000 | 16 | 0 | 1 | 0 | python,spyder | 38,996,684 | 2 | true | 0 | 0 | Matlab's syntax can match the opening closing statements of if,while, for etc. by looking for end statements.
In Python these are ambiguous and defined by the nested indentation. Hence this cannot be reliably implemented, as you cannot decide whether the succeeding if block belongs to the current for loop or is the next block, if the code is not indented properly.
If indented properly then Forzaa's answer is the answer otherwise the code is useless anyway and needs to be debugged. | 2 | 5 | 0 | Is there any shortcut to automatically indent marked lines in the editor? For example, in MATLAB there is the CTRL+I shortcut. | Spyder IDE automatic indentation | 1.2 | 0 | 0 | 27,958 |